http://arxiv.org/abs/2307.02063v1 | 2023-07-05 | eess.SP
A Genetic Algorithm based Superdirective Beamforming Method under Excitation
Power Range Constraints
Jingcheng Xie,
Haifan Yin, Member, IEEE,
and Liangcheng Han
Jingcheng Xie, Haifan Yin and Liangcheng Han are with Huazhong University of Science and Technology,
Wuhan 430074, China (e-mail: xiejc@hust.edu.cn; yin@hust.edu.cn; hanlc@hust.edu.cn).
The array gain of a superdirective antenna array can be proportional to the square of the number of antennas. However, the realization of the so-called superdirectivity entails accurate calculation and application of the excitations. Moreover, the excitations require a large dynamic power range, especially when the antenna spacing is small. In this paper, we derive the closed-form solution for the beamforming vector to achieve superdirectivity. We show that the solution relies only on the data of the array electric field, which is available from measurements or simulations. In order to alleviate the high requirement on the power range, we propose a genetic algorithm based approach with a certain excitation range constraint. Full-wave electromagnetic simulations show that, compared with the traditional beamforming method, our proposed method achieves greater directivity and narrower beamwidth under the given range constraints.
superdirectivity, beamforming, excitation range constraint, spherical wave expansion, genetic algorithm
§ INTRODUCTION
As one of the enabling technologies of fifth-generation mobile communication (5G), massive multiple-input multiple-output (MIMO) is key to boosting the spectral efficiency with a large number of antennas at the base station <cit.>. However, increasing the number of antennas usually enlarges the array, since the antenna spacing is generally no less than half a wavelength. In recent years, with the growing demand for spectral efficiency, researchers have begun to discuss the possibility of super-dense antenna arrays <cit.><cit.>. In this scenario, the mutual coupling between array elements is no longer negligible; in fact, it can be helpful. Uzkov proved in <cit.> that the directivity of a linear array of M isotropic antennas can reach M^2 as the spacing between antennas tends to zero. Hence, for base stations with many antennas, the array gain can be much more significant <cit.>.

Despite the potential improvement of the array gain, the precise calculation of the superdirective beamforming vector is a challenging problem. The work <cit.> derives the beamforming vector and measures the directivity of a two-element array. However, the derivation ignores the field distortion caused by the strong mutual coupling. The author of <cit.> designs a four-element parasitic superdirective array. The beamforming vector is calculated using the spherical wave expansion (SWE), which may lead to high computational complexity as the number of antennas increases. A prototype of a superdirective antenna array based on impedance coupling and field coupling <cit.> is built and measured in <cit.>; its beamforming vector is corrected by a coupling matrix that still needs to be determined in the measurement. Furthermore, the work <cit.> shows that the required amplitude range of the beamforming vector increases as the antenna spacing decreases and the number of antennas grows. This points to another practical challenge: the wide amplitude range of the beamforming vector usually exceeds the linear range of the power amplifiers, which undermines the practical value of superdirectivity. To the best of our knowledge, this problem has not been addressed so far in the open literature.

In this paper, we first derive the beamforming vector starting from the SWE and obtain a more concise closed-form solution that depends only on the electric field. Moreover, based on the derived solution, we propose an approach utilizing the idea of the genetic algorithm (GA) in order to alleviate the problem of the wide power range required for the beamforming vector.
GA is widely used in antenna array optimization <cit.>. However, this paper is the first to utilize the idea to obtain the beamforming vector under excitation range constraints. Finally, the results are validated by full-wave electromagnetic simulations. They show that, compared with the traditional method, our proposed method achieves greater directivity and narrower beamwidth under the given range constraints.
§ DERIVATION OF THE SUPERDIRECTIVE BEAMFORMER
In this section, we derive the beamforming vector for superdirective arrays under the framework of the spherical wave expansion. The SWE was first introduced by Hansen <cit.> to generate solutions to the vector wave equation; a detailed formulation and derivation was later given by Stratton <cit.>. It decomposes the electromagnetic field into a series of orthogonal spherical wave basis functions. In this method, the electric field can be expanded in spherical coordinates (r,θ,ϕ) as <cit.>
E⃗ (r,θ ,ϕ ) = k/√(η)∑_s=1^2∑_n=1^∞∑_m=-n^n Q_smnF⃗_smn^(3)(r,θ,ϕ) ,
where η is the intrinsic impedance of the medium and k is the wavenumber. Q_smn is the spherical wave coefficient, and F⃗_smn^(3)(r,θ,ϕ) is the wave function, where s,m,n denote the wave modes. In the far-field region, as kr→∞, the electric field E⃗ can be simplified to
E⃗ (r,θ ,ϕ ) → (k/√(η)) (e^ikr/(√(4π) kr)) ∑_s=1^2∑_n=1^∞∑_m=-n^n Q_smnK⃗_smn(θ,ϕ),
where K⃗_smn(θ,ϕ)=lim_kr →∞[√(4π)kr/e^ikrF⃗_smn^(3)(r,θ,ϕ)] are the far-field pattern functions. Their explicit expressions are
K⃗_1mn(θ,ϕ) = √(2/(n(n+1))) (-m/|m|)^m e^jmϕ (-j)^n+1 { (jm P̅_n^|m|(cosθ)/sinθ) θ̂ - (dP̅_n^|m|(cosθ)/dθ) ϕ̂ },
K⃗_2mn(θ,ϕ) = √(2/(n(n+1))) (-m/|m|)^m e^jmϕ (-j)^n { (dP̅_n^|m|(cosθ)/dθ) θ̂ + (jm P̅_n^|m|(cosθ)/sinθ) ϕ̂ },
where P̅_n^ | m | is the associated normalized Legendre function. Finally, the SWE of the electric field in the far-field region can be represented as <cit.>
E⃗(θ,ϕ)=k√(η)∑_s=1^2∑_n=1^∞∑_m=-n^nQ_smnK⃗_smn(θ ,ϕ ) .
For a certain antenna, the power radiated per unit solid angle in the given direction is defined as
P_A(θ,ϕ) = (1/2)(r^2/η) | E⃗(r,θ,ϕ) |^2
= (1/2)(1/4π) | ∑_s=1^2∑_n=1^∞∑_m=-n^n Q_smnK⃗_smn(θ,ϕ) |^2.
As for an isotropic radiator, the power radiated per unit solid angle is the total radiated power divided by 4π, namely
P_i = P_total/4π = (1/2)(1/4π) ∑_s=1^2∑_n=1^∞∑_m=-n^n | Q_smn |^2 .
Therefore, the directivity of the antenna in the given direction (θ,ϕ) is defined as
D(θ,ϕ)=P_A(θ,ϕ)/P_i
= | ∑_s=1^2∑_n=1^∞∑_m=-n^nQ_smnK⃗_smn(θ ,ϕ ) |^2/∑_s=1^2∑_n=1^∞∑_m=-n^n |Q_smn |^2 .
The maximization of (8) can be obtained by applying the Cauchy-Schwarz inequality <cit.>. However, the calculation of K⃗_smn(θ,ϕ) and Q_smn is very complicated. Starting from this expression, we derive a more concise expression for the directivity and a solution for the beamforming vector. For an antenna array with M elements, let 𝐛=[b_1,b_2,⋯,b_M]^T∈ℂ^M×1 denote the beamforming vector, where b_i, i=1,2,⋯,M, represents the excitation coefficient of the i-th antenna. 𝐄=[𝐞_1,𝐞_2,⋯,𝐞_M]∈ℂ^2ql×M represents the electric field of each antenna at the quantized angles, where q and l denote the numbers of discrete angles in the θ and ϕ directions. 𝐄_θ_0,ϕ_0=[E_1(θ_0,ϕ_0), E_2(θ_0,ϕ_0),⋯,E_M(θ_0,ϕ_0)]^T is the electric field of each antenna in the given direction (θ_0,ϕ_0).
theoremTheorem
For the antenna array with M elements, the superdirective beamforming vector that achieves the maximum directivity in the given direction (θ_0,ϕ_0) can be obtained by eigenvector decomposition of the following matrix
(𝐄^H𝐄)^-1(𝐄_θ_0,ϕ_0𝐄_θ_0,ϕ_0^H)
The corresponding directivity is
D(θ_0,ϕ_0)=𝐛^T𝐄_θ_0,ϕ_0𝐄_θ_0,ϕ_0^H𝐛^∗/𝐛^T𝐄^H𝐄𝐛^∗· c,
where c is a constant.
The proof can be found in Appendix <ref>.
Theorem <ref> indicates that the solution to the superdirective beamforming vector only relies on the electric field of the array element. Such information can be obtained by simulations or experimental measurements in anechoic chambers.
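For concreteness, the following minimal numpy sketch (our own illustration; the names E and e0 for 𝐄 and 𝐄_θ_0,ϕ_0 are assumptions, not the paper's notation) computes the beamforming vector of Theorem <ref> and the directivity ratio up to the constant c. Because the numerator matrix has rank one, the eigenvector can be obtained by a single linear solve instead of a full eigendecomposition.

import numpy as np

def superdirective_beamformer(E, e0):
    # E : (2*q*l, M) complex array of sampled element far fields.
    # e0: (M,) complex array of element fields in the target direction.
    B = E.conj().T @ E                 # M x M Gram matrix E^H E
    # (E^H E)^{-1} (e0 e0^H) x = lambda x has a single nonzero eigenvalue
    # because e0 e0^H has rank one; its eigenvector is x ∝ (E^H E)^{-1} e0.
    x = np.linalg.solve(B, e0)
    b = x.conj()                       # beamforming vector b = x* (see Appendix)
    num = np.abs(b @ e0) ** 2          # b^T E0 E0^H b*
    den = np.real(b @ B @ b.conj())    # b^T E^H E b*
    return b, num / den                # directivity up to the constant c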
§ SUPERDIRECTIVITY WITH EXCITATION RANGE CONSTRAINTS
The entries of the obtained superdirective beamforming vector 𝐛 generally have a wide range of amplitude, especially when the number of antennas M increases and the antenna spacing decreases. In this section, we propose a solution under a certain range constraint of the amplitude. The problem can be described as
max_𝐛 f(𝐛)=𝐛^T𝐄_θ_0,ϕ_0𝐄_θ_0,ϕ_0^H𝐛^∗/𝐛^T𝐄^H𝐄𝐛^∗
s.t. max(|b_i|)/min(|b_i|) ≤ P, i=1,2,⋯,M,
where P is the given range of amplitude. Without loss of generality, the minimum amplitude is normalized to 1, and the problem can be rewritten as
max_𝐛 f(𝐛)=𝐛^T𝐄_θ_0,ϕ_0𝐄_θ_0,ϕ_0^H𝐛^∗/𝐛^T𝐄^H𝐄𝐛^∗
s.t. 1 ≤ |b_i| ≤ P, i=1,2,⋯,M.
The objective function and the constraint of the above optimization problem are non-convex, so it is difficult to solve directly. Therefore, we propose an approach based on the idea of the genetic algorithm. The algorithm can be summarized as generating the initial population, calculating the fitness function, selecting the candidates, and reproducing by crossover and mutation.

First, we randomly generate a set of I beamforming vectors. For each vector 𝐛, we choose f(𝐛) as the fitness function and calculate f(𝐛) to form an I-dimensional array. Then we sort the array from largest to smallest and select the beamforming vectors corresponding to the first m entries of the sorted array. These m vectors are retained for evolution. Since each vector consists of M complex numbers carrying an amplitude and a phase, we encode the amplitude and phase of each complex number separately. Fig. <ref> shows the detailed encoding process. Both amplitude and phase are encoded as binary sequences and then combined into a chromosome. We can control the unit quantization value of the amplitude encoding to ensure that the amplitude satisfies the constraint. For instance, if the excitation range constraint is P and the amplitude is encoded with x bits, the unit quantization value should be (P-1)/(2^x-1).

After the encoding stage, a population consisting of m initial candidate beamforming vectors is generated, and each vector consists of M chromosomes. These candidates are used to reproduce “children” by crossover between two randomly selected candidates over each of their M chromosomes. The reproduction procedure mainly includes two genetic operations: crossover and mutation. Parents are randomly picked from the candidate pool and mated. For each of the M chromosomes, two random crossover points are selected. Then the chromosome fragment between the two points in the corresponding chromosome of one parent is swapped into that of the other to generate a child chromosome. After the same operation over all M chromosomes of the parents, a child solution is reproduced. For each reproduced chromosome, a mutation process may occur that flips a bit to its opposite with a very low probability. Fig. <ref> shows the crossover and mutation process for a chromosome with 10 amplitude bits and 8 phase bits. We repeat the above process until the population grows from m back to I.
For the reproduced set of I candidates, selection and reproduction are repeated until the termination conditions are met. In general, the conditions can be either that the best f(𝐛) is achieved or that there is no further improvement in successive iterations. Finally, pseudocode of the proposed method is summarized in Algorithm 1.
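As a hedged illustration of the loop just described, here is a minimal GA sketch (our own code; the population size, bit widths, mutation rate and the fixed number of generations are illustrative assumptions rather than the paper's settings, and E, e0 are the simulated field matrix and steering-direction field of Section 2):

import numpy as np

rng = np.random.default_rng(0)
AMP_BITS, PH_BITS = 7, 8              # assumed encoding widths
I_POP, M_KEEP, P = 60, 20, 2.27       # assumed sizes; P is the range constraint

def decode(bits, M):
    # Map M chromosomes of (AMP_BITS + PH_BITS) bits each to complex excitations.
    c = bits.reshape(M, AMP_BITS + PH_BITS).astype(int)
    amp_int = c[:, :AMP_BITS] @ (2 ** np.arange(AMP_BITS)[::-1])
    ph_int = c[:, AMP_BITS:] @ (2 ** np.arange(PH_BITS)[::-1])
    amp = 1.0 + amp_int * (P - 1) / (2 ** AMP_BITS - 1)   # enforces 1 <= |b_i| <= P
    return amp * np.exp(2j * np.pi * ph_int / 2 ** PH_BITS)

def fitness(bits, E, e0):
    b = decode(bits, e0.size)
    return np.abs(b @ e0) ** 2 / np.real(b @ (E.conj().T @ E) @ b.conj())

def reproduce(pa, pb, M, mut_p=0.01):
    # Two-point crossover on each of the M chromosomes, then bit-flip mutation.
    child, other = pa.reshape(M, -1).copy(), pb.reshape(M, -1)
    for r in range(M):
        i, j = sorted(rng.integers(child.shape[1], size=2))
        child[r, i:j] = other[r, i:j]
    child ^= rng.random(child.shape) < mut_p
    return child.reshape(-1)

def ga(E, e0, generations=200):
    M = e0.size
    pop = rng.random((I_POP, M * (AMP_BITS + PH_BITS))) < 0.5
    for _ in range(generations):
        order = np.argsort([-fitness(x, E, e0) for x in pop])
        keep = pop[order[:M_KEEP]]                     # elitist selection
        children = [reproduce(*keep[rng.integers(M_KEEP, size=2)], M)
                    for _ in range(I_POP - M_KEEP)]
        pop = np.vstack([keep, children])
    return decode(max(pop, key=lambda x: fitness(x, E, e0)), M)

Here a fixed generation count stands in for the paper's termination test (no further improvement in successive iterations), which is easy to substitute.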
§ NUMERICAL RESULTS
To demonstrate the effectiveness of the proposed beamforming algorithm, full-wave simulations are carried out in this section. The simulation is performed at 1.6 GHz for two different arrays (four and six identical, uniformly spaced electric dipoles). The designed dipole antenna array is shown in Fig. <ref>. The array is printed on a Rogers RO4003C (lossy) substrate (ε_r=3.55, μ_r=1, tanδ=0.0027, width L=85.5 mm, thickness 0.813 mm). The length and width of the dipole antenna are H=71.48 mm and w=1 mm, respectively. The caliber h of the port is 2.54 mm for connection to a SubMiniature version A (SMA) connector. The distance between two adjacent antennas is set to 0.1λ or 0.2λ, and the designed end-fire direction is (θ_0=90^∘,ϕ_0=0). Taking into account the mutual coupling between the array elements, the electric field 𝐞_i=[ E̅(θ_1,ϕ_1)_θ ⋯ E̅(θ_l,ϕ_q)_θ ]^T∈ℂ^lq×1 in (17) of each antenna is simulated, from which the electric field E_θ_0,ϕ_0 in the end-fire direction (θ_0=90^∘,ϕ_0=0) is extracted. Then, the simulated complex electric fields are used to calculate the fitness function f(𝐛) (see Section 3-A). Finally, we apply our proposed algorithm to obtain the beamforming vectors 𝐛, based on which the radiation pattern is simulated.
In the simulation, a 7-bit binary code represents the amplitude, with unit quantization values of 0.01, 0.02, and 0.03, respectively. The corresponding range constraints are P = 1 + 127 × 0.01 = 2.27 for 4 antennas with 0.1λ spacing, and P = 2.27, 3.54, 4.81 for 6 antennas with 0.2λ spacing. To show that our proposed method is efficient for every constraint P, the maximum ratio transmission (MRT) and the traditional superdirective beamforming that ignores the field distortion due to mutual coupling are chosen for comparison.
The simulated result in the E-plane (ϕ=0^∘) of the 4 antennas is shown in Fig. <ref>.
The directivity simulated using eigenvalue decomposition in the end-fire direction is 16.45 and the beamwidth is 51^∘.
With the constraint P=2.27, our method achieves the directivity of 11.33, which is much higher than 4.45 of the MRT and 6.15 of the traditional method. Moreover, it can be found that the 3-dB beamwidth of our proposed method is 62.7^∘, which is narrower than 132.6^∘ of the MRT method and 68.4^∘ of the traditional method.
Then, we increase the number of antennas to 6 and change the spacing to 0.2λ. The simulated directivity pattern in the E-plane (ϕ=0^∘) is shown in Fig. <ref>. Our proposed method with all given constraints clearly performs better in the main-lobe direction than the traditional method and MRT.
The detailed results are shown in Table <ref>, where the directivity is obtained in the end-fire direction (θ_0=90^∘,ϕ_0=0). It can be found that our proposed method achieves greater directivity and much narrower beamwidth.
In Fig. <ref>, the number of antennas is increased to 8 while the spacing remains 0.2λ. The detailed results are listed in Table <ref>. It can be seen that our proposed method remains effective and feasible as the number of antennas increases.
§ CONCLUSION
In this paper, we derived the beamforming vector that achieves superdirectivity, which can be calculated entirely from the electric field of the antenna array. Moreover, to alleviate the requirement of a wide amplitude range for the beamforming vector, an effective GA-based algorithm is proposed to obtain a beamforming vector under a given excitation range constraint. The simulation results showed that, compared with the traditional superdirective beamforming method and MRT, our proposed method achieves greater directivity.
§ PROOF OF THEOREM <REF>
For an antenna array with M elements, the wave coefficient of each wave mode is the sum of the contributions of the individual elements, i.e., Q_smn=∑_i=1^M b_i Q_smni. Then the directivity of an M-element antenna array is
D(θ,ϕ)= |∑_i=1^Mb_i∑_s=1^2∑_n=1^∞∑_m=-n^nQ_smniK⃗_smn(θ ,ϕ ) |^2/∑_s=1^2∑_n=1^∞∑_m=-n^n |∑_i=1^Mb_iQ_smni |^2 .
In order to simplify this expression, we transform it into matrix form. Let Q_smni=[ Q_1,-1,1,i ⋯ Q_2,N,N,i ]∈ℂ^1×T represent all modes of the wave coefficients of the i-th element, where T=2×N×(N+2) and N is the truncation point of n, since n takes an infinite number of values <cit.> <cit.> <cit.>. Then the wave coefficients of every mode and every array element can be expressed as
𝐐=[ Q_1,-1,1,1 ... Q_2,N,N,1; Q_1,-1,1,2 ... Q_2,N,N,2; ... ... ...; Q_1,-1,1,M ... Q_2,N,N,M ]∈ℂ ^M× T .
Similarly, we can also transform the far-field pattern functions into the form of the matrix
𝐊=[ K_1,-1,1(θ_1,ϕ_1)_θ ... K_2,N,N(θ_1,ϕ_1)_θ; K_1,-1,1(θ_1,ϕ_1)_ϕ ... K_2,N,N(θ_1,ϕ_1)_ϕ; K_1,-1,1(θ_2,ϕ_1)_θ ... K_2,N,N(θ_2,ϕ_1)_θ; K_1,-1,1(θ_2,ϕ_1)_ϕ ... K_2,N,N(θ_2,ϕ_1)_ϕ; ... ... ...; K_1,-1,1(θ_l,ϕ_q)_ϕ ... K_2,N,N(θ_l,ϕ_q)_ϕ ]∈ℂ ^ (2lq) × T
where each row contains the θ or ϕ components of the far-field pattern functions of all modes at a given solid angle, and lq is the number of angle quantization points. Then the directivity expression (13) in a given direction (θ_0,ϕ_0) can be rewritten as
D(θ_0,ϕ_0)=𝐛^T [ 𝐐𝐊_θ_0,ϕ_0^T ] [ 𝐐𝐊_θ_0,ϕ_0^T ]^H𝐛^∗/𝐛^T𝐐𝐐^H𝐛^∗ ,
where 𝐛=[b_1,b_2,⋯,b_M]^T∈ℂ^M×1 is the beamforming vector and 𝐊_θ_0,ϕ_0 is the far-field pattern function in the given direction (θ_0,ϕ_0). Moreover, we quantize the electric field of the i-th antenna at the same points:
𝐞_i=[ E̅(θ_1,ϕ_1)_θ E̅(θ_1,ϕ_1)_ϕ ⋯ E̅(θ_l,ϕ_q)_ϕ ]^T∈ℂ^2lq× 1.
Then the SWE of the electric field in (5) becomes <cit.>
𝐞 _i=k√(η)𝐊𝐐_i^T ,
where the subscript i indicates the i-th antenna. By inserting (18) into (16), we obtain
D(θ_0,ϕ_0)=𝐛^T𝐄_θ_0,ϕ_0𝐄_θ_0,ϕ_0^H𝐛^∗/𝐛^T𝐐𝐐^H𝐛^∗·1/(k√(η) )^2 ,
where 𝐄_θ_0,ϕ_0=[E_1(θ_0,ϕ_0), E_2(θ_0,ϕ_0),⋯,E_M(θ_0,ϕ_0)]^T is the electric field of each antenna in the given direction (θ_0,ϕ_0). Since the normalized far-field pattern functions are orthogonal <cit.>, i.e.,
∫_0^2π∫_0^π K⃗_smn(θ,ϕ) K⃗_s'm'n'(θ,ϕ)^∗ dθ dϕ =
1 if s=s', m=m', n=n',
0 otherwise.
By inserting (20), the expression (19) can be rewritten as
D(θ_0,ϕ_0)=𝐛^T𝐄_θ_0,ϕ_0𝐄_θ_0,ϕ_0^H𝐛^∗/𝐛^T𝐄^H𝐄𝐛^∗· c,
where c is a constant related to the unit of the electric field and 𝐄=[𝐞_1,𝐞_2,⋯,𝐞_M]∈ℂ^2ql×M is the electric field of each antenna at the quantized angles. The expression (21) has the form of a generalized Rayleigh quotient in the M-dimensional complex space ℂ^M. The maximum directivity is obtained from the eigenvalue decomposition of the corresponding matrix <cit.>:
(𝐄^H𝐄)^-1(𝐄_θ_0,ϕ_0𝐄_θ_0,ϕ_0^H)𝐱=λ𝐱.
Since 𝐄_θ_0,ϕ_0 represents the electric field of the array elements in the given direction, the matrix (𝐄_θ_0,ϕ_0𝐄_θ_0,ϕ_0^H) has rank one, and the above eigenvalue problem has a single nonzero eigenvalue. Let this eigenvalue of the matrix (𝐄^H𝐄)^-1(𝐄_θ_0,ϕ_0𝐄_θ_0,ϕ_0^H) be λ_0 and the corresponding eigenvector be 𝐱_0; then the maximum directivity is D(θ_0,ϕ_0)_max = λ_0 · c with beamforming vector 𝐛=𝐱_0^∗. Finally, Theorem <ref> is proved.
http://arxiv.org/abs/2307.02824v1 | 2023-07-06 | gr-qc
Response of an Interferometer Mounted on an Elastic Square Plate to Gravitational Waves
Thomas Spanner^1, Thomas B. Mieling^2 (ORCID: https://orcid.org/0000-0002-6905-0183), Stefan Palenta^1 (ORCID: https://orcid.org/0000-0002-6541-9537)
^1 University of Vienna, Faculty of Physics, Boltzmanngasse 5, 1090 Vienna, Austria
^2 University of Vienna, Faculty of Physics, Vienna Doctoral School in Physics (VDSP), Vienna Center for Quantum Science and Technology (VCQ) and Research platform TURIS, Boltzmanngasse 5, 1090 Vienna, Austria
August 1, 2023
Laser-interferometric gravitational wave detectors are commonly modeled as being at rest in transverse-traceless coordinates (and thus geodesic).
In this paper, we analyze what happens if the interferometer is mounted on a material that can undergo elastic oscillations caused by the gravitational wave.
We thus compute the response of a two-dimensional elastic material to linearized gravitational radiation and the resulting signal of a laser interferometer mounted on such a plate.
§ INTRODUCTION
This work builds upon the paper <cit.> by Hudelist et al. Therein, the equations of motion describing an elastic body under the influence of gravitational waves (GWs) were derived by using a concrete matter model in the theory of relativistic elasticity. These were then used to solve the simple one-dimensional problem of a rod in a GW background. This paper aims to generalize this to a two-dimensional thin plate and then goes on to calculate the signal a Michelson interferometer placed on the plate would measure.
In other words, it derives the response of elastic interferometers to a GW.
This refines previous models of laser-interferometric GW detection in media, such as the ones in Refs. <cit.>.
The model considered here does not include any damping behavior, and we search only for the steady-state solution when a continuous GW hits the plate.
First, we discuss a set of normal modes, i.e. solutions for the in-plane vibrations of the plate without a gravitational wave. Next, we express the solution as a Fourier series, plug it into the equations, and solve for the Fourier coefficients. But for non-periodic functions, taking the derivative of the Fourier series does not result in the Fourier series of the derivative. To fix this, we develop a modified spectral approach which is then used to find a solution. The resulting series is truncated, and the linear system of equations is then solved numerically.
This is then used to compute the signal in a Michelson interferometer, whose constituents (the laser, beam splitter, and mirrors) are attached to the elastic plate. Finally, the results are compared to those obtained for an interferometer whose constituents are at rest in transverse-traceless coordinates.
We use the metric signature (-,+,+,+); Greek letters denote spacetime indices and Latin letters denote spatial indices.
Furthermore, we use units with the speed of light in vacuo set to unity: c=1.
§ ELASTIC PLATE IN THE PRESENCE OF GRAVITATIONAL WAVES
Far from the source, GWs are weak and therefore can be described using linearized gravity
g_μν = η_μν + ϵ h_μν ,
with ϵ≪ 1. In the transverse-traceless (TT) gauge, a monochromatic plane wave shall be expressed via
h_ij = [ A_+ A_× 0; A_× -A_+ 0; 0 0 0 ] e^i ω (t-z) .
We wish to determine the deformation of homogeneous and isotropic materials due to such a gravitational wave. Within the framework of linearized relativistic elasticity (cf. e.g. <cit.>), the deformation is encoded in the displacement field u^i, which gives rise to the strain
e_ij = (1/2)(∂_i u_j + ∂_j u_i + h_ij) ,
and to the Cauchy stress tensor. For an isotropic and homogeneous body (cf. e.g. <cit.>), the stress-strain relation reads
σ^ij = λ δ^ij e^k_k + 2 μ e^ij + O(ϵ^2) ,
where λ and μ are the Lamé parameters of the material.
Here, we restrict the discussion to the case where the material under consideration is a thin plate, lying in the z = 0 plane.
The absence of tensions normal to the plate is then modeled by setting σ^iz=0. Therefore, using (<ref>) the strain tensor e can be expressed in terms of its x- and y-components as
e = [ e^11 e^12 0; e^12 e^22 0; 0 0 -λ/λ + 2 μ(e^11+e^22) ] .
The equations of motion of linearized elasticity then yield
ρ ∂_t^2 u^j = ∂_i σ^ij = (μ(3λ+2μ)/(λ+2μ)) ∂^j ∂_i u^i + μ Δ u^j .
We seek the steady-state solution oscillating with the GW frequency ω. Therefore, we make the ansatz u^i = cos(ω t) φ^i(x,y). Plugging this into <ref> yields an equation only for φ^i(x,y):
ω^2 φ^i
+ c_2^2 Δφ^i
+ c_3^2 ∂^i ∂_k φ^k
= 0 ,
where we have introduced the wave speeds c_1^2 = (4μ/ρ)(λ+μ)/(λ+2μ), c_2^2 = μ/ρ and c_3^2 = c_1^2 - c_2^2 (cf. <ref> for further discussion). Note that this is consistent with the more convenient representation c_1^2 = E/(ρ(1-ν^2)) of the longitudinal wave speed c_1 in terms of the Young modulus E and the Poisson ratio ν <cit.>.
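As a quick consistency check of these two representations (our own snippet, with assumed illustrative material constants, not the paper's values):

import numpy as np

lam, mu, rho = 3.0e9, 0.26e9, 950.0   # assumed Lamé parameters (Pa) and density (kg/m^3)

c1_sq = (4 * mu / rho) * (lam + mu) / (lam + 2 * mu)  # plane-stress longitudinal speed^2
c2_sq = mu / rho                                      # shear speed^2
c3_sq = c1_sq - c2_sq

E = mu * (3 * lam + 2 * mu) / (lam + mu)              # Young's modulus
nu = lam / (2 * (lam + mu))                           # Poisson ratio
assert np.isclose(c1_sq, E / (rho * (1 - nu**2)))     # the equivalent representation
print(np.sqrt(c1_sq), np.sqrt(c2_sq))                 # wave speeds in m/s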
For a square plate of side length L in the z = 0 plane, these equations must be solved on the domain P = {(x,y) | x,y ∈ [-L/2, +L/2]}.
The bulk equations of motion must be supplemented by boundary conditions on the boundary ∂ P.
In the absence of external forces, one has σ^ij n_j |_∂ P = 0, where n is the unit outward-pointing normal to ∂ P (within a time-slice of constant t) <cit.>.
Splitting the boundary ∂ P as ∂ P = ∂ P_x ∪∂ P_y, where ∂ P_x ={(x, y)|x=± L/2, y ∈ [-L/2, L/2]}, and ∂ P_y = {(x, y)|y=± L/2, x ∈ [-L/2, L/2]}, one has
[ c_1^2 ∂_x φ^x + (c_1^2-2c_2^2) ∂_y φ^y ]|_∂P_x = -c_2^2 A_+ ,
[ c_1^2 ∂_y φ^y + (c_1^2-2c_2^2) ∂_x φ^x ]|_∂P_y = +c_2^2 A_+ ,
[ ∂_x φ^y + ∂_y φ^x ]|_∂P = -A_× .
§ DELTA CORRECTED SPECTRAL METHOD
For clarity, within this section we rescale to dimensionless coordinates x^i → x^i/L and use the corresponding dimensionless wave speeds c̅_i = c_i/(ω L). Then the differential equation (<ref>) turns into
φ^i + c̅_2^2 Δφ^i + c̅_3^2 ∂^i ∂_k φ^k = 0 ,
and the boundary conditions (<ref>) read
c̅_1^2 ∂_x φ^x|_∂ P_x + (c̅_1^2-2 c̅_2^2) ∂_y φ^y|_∂ P_x = -c̅_2^2 L A_+ ,
c̅_1^2 ∂_y φ^y|_∂ P_y + (c̅_1^2-2 c̅_2^2) ∂_x φ^x|_∂ P_y = + c̅_2^2 L A_+ ,
∂_x φ^y|_∂ P + ∂_y φ^x|_∂ P = - L A_× .
We wish to implement a spectral method to (approximately) solve the boundary value problem (BVP) consisting of <ref>. However, since the solution φ is in general not periodic, the derivatives taken from its Fourier series representation must be adapted to solve the differential equation (<ref>). We first introduce this procedure for a one-dimensional BVP.
§.§ Illustration of the Problem in 1D
The smooth solution φ(x) of a one-dimensional BVP has the Fourier expansion
F[φ](x)
= ∑_n=0^∞ a_n cos(2π n x) + ∑_n=1^∞ b_n sin(2π n x) .
But since that solution is not necessarily periodic on [-1/2, 1/2], the Fourier series actually represents the periodic continuation of φ(x) (see <ref>), which in general has a jump at the boundary. At such points, the Fourier series F[φ] takes the average of both one-sided limits of the original function. The size of the jump shall be denoted by d_0=φ(+1/2)-φ(-1/2). Then, the correct function values can be recovered by the following relation:
φ(x) =
F[φ](x) for x ∈ (-1/2,1/2) ,
F[φ](1/2) ± d_0/2 for x = ±1/2 .
Now the derivative of the periodic continuation ∂_x F[φ](x) differs from the periodic continuation of the derivative F[φ'] by a jump of height -d_0 at the boundary and all its periodic recurrences. This can be expressed via
F[φ'] = ∂_x F[φ] + d_0 F[δ(x-1/2)] .
The first derivative is again not necessarily periodic, and therefore has a jump d_1 = φ'(+1/2) - φ'(-1/2) at the boundary. So the above procedure can be iterated, yielding
F[φ''] = ∂_x F[φ'] + d_1 F[δ(x-1/2)] = ∂_x^2 F[φ] + d_0 F[δ'(x-1/2)] + d_1 F[δ(x-1/2)] .
Plugging this into the one-dimensional version of (<ref>) and solving for the Fourier coefficients {a_n,b_n} correctly reproduces the solution for a one-dimensional rod under the influence of a GW, as found in Ref. <cit.>.
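To make the procedure concrete, the following self-contained sketch (our own toy example, not the rod problem of Ref. <cit.>) solves u'' + k^2 u = 0 on [-1/2,1/2] with Neumann data u'(±1/2) = g by the delta-corrected method: the mode equations express the coefficients C_n through the jumps d_0, d_1, and the two boundary conditions close the truncated system.

import numpy as np

k, g, M = 2.0, 1.0, 400             # wavenumber, boundary slope, truncation order
n = np.arange(-M, M + 1)
sgn = (-1.0) ** n

# Mode equations from F[u''] + k^2 F[u] = 0 with the delta corrections:
# (k^2 - 4 pi^2 n^2) C_n = -(2 pi i n d0 + d1) (-1)^n, i.e. C_n = alpha_n d0 + beta_n d1.
denom = k**2 - 4 * np.pi**2 * n**2
alpha = -2j * np.pi * n * sgn / denom
beta = -sgn / denom

# u' has Fourier coefficients a_n = 2 pi i n C_n + d0 (-1)^n; at x = ±1/2 the series
# converges to the mean of the one-sided limits, so u'(±1/2) = sum_n a_n (-1)^n ± d1/2.
Sa = np.sum(2j * np.pi * n * alpha * sgn + 1.0)   # multiplies d0
Sb = np.sum(2j * np.pi * n * beta * sgn)          # multiplies d1 (vanishes by parity)
A = np.array([[Sa, Sb + 0.5], [Sa, Sb - 0.5]])
d0, d1 = np.linalg.solve(A, np.array([g, g], dtype=complex))

C = alpha * d0 + beta * d1
x = np.linspace(-0.4, 0.4, 9)
u_num = np.real(np.exp(2j * np.pi * np.outer(x, n)) @ C)
u_exact = g * np.sin(k * x) / (k * np.cos(k / 2))
print(np.abs(u_num - u_exact).max())              # error decays like O(1/M)

The same pattern — express the coefficients through the jump data via the mode equations, then close the truncated system with the boundary rows — is what the two-dimensional scheme below implements.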
§.§ Fourier Series with Dirac Deltas in Two Dimensions
In two dimensions we write the Fourier series of an arbitrary function f(x,y) on the square [-1/2,+1/2] × [-1/2,+1/2] in terms of complex exponentials
F[f](x,y)
= ∑_n,m=-∞^∞ C_nm e^i k⃗·x⃗ ,
k⃗ = 2 π[ n; m ] .
Once again, the function is in general not periodic and so the Fourier series only agrees with the function inside the square (see <ref>). Instead of a constant describing the jump size, there are now two jump functions, one along ∂ P_x (denoted by d_0) and one along ∂ P_y (denoted by e_0).
d_0(y) = f(1/2,y) - f(-1/2,y) ,
e_0(x) = f(x,1/2) - f(x,-1/2) .
To get the correct Fourier series for the partial derivatives of f we need, as in the one-dimensional case, to add the periodic continuation of the Dirac δ-function to the partial derivative of the Fourier series. For ∂_x f this means:
F[∂_x f(x,y)] = ∂_x F[f(x,y)] + F[ d_0(y) δ(x-1/2) ] .
But now the jump is represented by a function, instead of a constant. To get back the correct values of the derivatives at the boundaries we again need jump functions:
d_x(y) = ∂_x f(1/2,y) - ∂_x f(-1/2,y) ,
e_x(x) = ∂_x f(x,1/2) - ∂_x f(x,-1/2) .
The subscript denotes the derivative to which the jump function belongs, e.g. d_x for the jump in ∂_x f. The relation between the function values and the Fourier series is then given by
∂_x f(x,y) =
F[∂_x f(x,y)] for -1/2 < x,y < 1/2 ,
F[∂_x f(x,y)] ± 1/2 d_x(y) for x = ±1/2, -1/2 < y < 1/2 ,
F[∂_x f(x,y)] ± 1/2 e_x(x) for y = ±1/2, -1/2 < x < 1/2 .
To get the second derivatives we treat the first derivative as the function f and use what we already know:
F[∂_x^2 f(x,y)] = ∂_x F[∂_x f(x,y)] + F[ d_x(y) δ(x-1/2) ]
= ∂_x ( ∂_x F[f(x,y)] + F[ d_0(y) δ(x-1/2) ] ) + F[ d_x(y) δ(x-1/2) ]
= ∂_x^2 F[f(x,y)] + F[ d_0(y) δ'(x-1/2) ] + F[ d_x(y) δ(x-1/2) ] .
The second y-derivative looks very similar to the expression in the 1D case, but the mixed derivative is more interesting, because the second derivative now also acts on the jump functions.
F[∂_x ∂_y f] = ∂_x ∂_y F[f] + ∂_x F[ e_0(x) ] F[ δ(y-1/2) ] + ∂_y F[ d_0(y) ] F[ δ(x-1/2) ] + J F[ δ(x-1/2) δ(y-1/2) ] .
Here, J denotes the jump of the jump functions:
J := d_0(1/2) - d_0(-1/2) = f(1/2,1/2) - f(-1/2,1/2) - f(1/2,-1/2) + f(-1/2,-1/2) = e_0(1/2) - e_0(-1/2) .
The desired solution to <ref> with the boundary conditions given in <ref> has an x- and y-component and is called φ instead of f. Therefore, all Fourier coefficients and jump functions also get an index.
It is useful to first look at the boundary conditions expressed in terms of the Fourier series. For instance, the expression for σ_xy at x = ±1/2 reads
F[∂_x φ^y(±1/2,y)] ± 1/2 d_x^y(y) + F[∂_y φ^x(±1/2,y)] ± 1/2 d_y^x(y) + L A_× = 0 .
The Fourier series at x = 1/2 has the same value as the one at x = -1/2, so when the two cases are subtracted from one another what remains is the relation
d_x^y(y) + d_y^x(y) = 0 .
Similar relations can be found when considering σ_xx on ∂ P_x, σ_yy on ∂ P_y and σ_xy on ∂ P_y respectively:
c̅_1^2 d^x_x(y) + (c̅_1^2- 2 c̅_2^2) d_y^y(y) = 0 ,
(c̅_1^2- 2 c̅_2^2) e^x_x(x) + c̅_1^2 e_y^y(x) = 0 ,
e^y_x(x) + e_y^x(x) = 0 .
Using these, all equations can be expressed in terms of d^i_0 and e^i_0 only, e.g.
d_x^x = -((c̅_1^2 - 2c̅_2^2)/c̅_1^2) d_y^y = -((c̅_1^2 - 2c̅_2^2)/c̅_1^2) ∂_y d_0^y .
For the second x- and y-derivatives this means
F[∂_x^2 φ^x] = ∂_x^2 F[φ^x] + F[ d_0^x(y) δ'(x-1/2) ] - ((c̅_1^2-2c̅_2^2)/c̅_1^2) F[ δ(x-1/2) ] ( ∂_y F[ d_0^y(y) ] + J F[δ(y-1/2)] ) ,
F[∂_y^2 φ^x] = ∂_y^2 F[φ^x] + F[ e_0^x(x) δ'(y-1/2) ] - ∂_x F[ e_0^y(x) ] F[ δ(y-1/2) ] - J F[ δ(x-1/2) δ(y-1/2) ] .
Each of the spatial derivatives in <ref> includes one term containing the factor F[δ(x-1/2) δ(y-1/2)]. Looking only at their prefactors, one finds that these cancel each other, and the Fourier series form of the equation becomes
F[φ^x] + c̅_1^2 ( ∂_x^2 F[φ^x] + F[ d_0^x(y) δ'(x-1/2) ] ) + (c̅_1^2-2c̅_2^2) ∂_x F[ e_0^y(x) ] F[ δ(y-1/2) ]
+ c̅_3^2 ∂_x ∂_y F[φ^y] + c̅_2^2 ( ∂_y^2 F[φ^x] + F[ e_0^x(x) δ'(y-1/2) ] + ∂_y F[ d_0^y(y) ] F[ δ(x-1/2) ] ) = 0 .
One obtains a similar expression for the y component of the equation.
These can now be turned into equations for the Fourier coefficients C^i_nm of the function and the jumps c_m(d^i_0) and c_n(e^i_0). The same can be done with the boundary conditions which give four more expressions.
So we now have six relations (for each n and m) which contain the same information as the original BVP, but are expressed in Fourier coefficients. These are summarized in the following box.
In terms of the Fourier coefficients C^j_nm for φ^j and the Fourier coefficients c_n(e_0^j) and c_m(d_0^j) of the jump functions, the system consisting of <ref> takes the form
C_nm^x (1 - 4π^2 (c̅_1^2 n^2 + c̅_2^2 m^2)) - C_nm^y 4π^2 n m c̅_3^2
+ 2iπ [ c̅_1^2 c_m(d_0^x) n (-1)^n + c̅_2^2 c_n(e_0^x) m (-1)^m + (c̅_1^2 - 2c̅_2^2) n c_n(e_0^y) (-1)^m + c̅_2^2 m c_m(d_0^y) (-1)^n ] = 0 ,
C_nm^y (1 - 4π^2 (c̅_2^2 n^2 + c̅_1^2 m^2)) - C_nm^x 4π^2 n m c̅_3^2
+ 2iπ [ c̅_2^2 c_m(d_0^y) n (-1)^n + c̅_1^2 c_n(e_0^y) m (-1)^m + c̅_2^2 n c_n(e_0^x) (-1)^m + (c̅_1^2 - 2c̅_2^2) m c_m(d_0^x) (-1)^n ] = 0 ,
∑_n=-∞^∞ (-1)^n [ c̅_1^2 (2πi n C_nm^x + c_m(d_0^x) (-1)^n) + (c̅_1^2 - 2c̅_2^2) (2πi m C_nm^y + c_n(e_0^y) (-1)^m) ] = - c̅_2^2 L A_+ δ_m0 ,
∑_m=-∞^∞ (-1)^m [ c̅_1^2 (2πi m C_nm^y + c_n(e_0^y) (-1)^m) + (c̅_1^2 - 2c̅_2^2) (2πi n C_nm^x + c_m(d_0^x) (-1)^n) ] = + c̅_2^2 L A_+ δ_n0 ,
∑_n=-∞^∞ (-1)^n [ 2πi m C_nm^x + c_n(e_0^x) (-1)^m + 2πi n C_nm^y + c_m(d_0^y) (-1)^n ] = - L A_× δ_m0 ,
∑_m=-∞^∞ (-1)^m [ 2πi m C_nm^x + c_n(e_0^x) (-1)^m + 2πi n C_nm^y + c_m(d_0^y) (-1)^n ] = - L A_× δ_n0 .
The recipe to solve these expressions is as follows: First, solve the first two relations for C_nm^i in terms of the Fourier coefficients of the jump functions. Then plug these into the relations from the boundary conditions. Truncate the infinite series at some large value M and solve the resulting linear system numerically.
The fact that there are as many equations as there are unknowns suggests the existence of a unique solution. This is confirmed by the numerical analysis presented below.
Then, the relations from the first step can be used to calculate the C_nm^i which in turn can be used to approximate the solution via the truncated Fourier series.
§ PHYSICAL DISTANCES
§.§ Coordinate Transformation to Local Lorentz Coordinates
We want to mount an interferometer on the plate to measure gravitational waves. To calculate the signal it would measure we need to calculate the physical path lengths of the two interferometer arms. This is done using local Lorentz coordinates (ξ, η) <cit.>, given by the transformation
ξ = x + ϵ A_+ x cos(ω t)+ ϵ A_× y cos(ω t) ,
η = y - ϵ A_+ y cos(ω t) + ϵ A_× x cos(ω t) ,
where x and y denote the TT coordinates as above.
In local Lorentz coordinates (LL coordinates) ξ and η, physical distances are given by the difference in coordinates.
The TT-coordinates are related to the deformation by
x^i = X^i + ϵ u^i ,
where the upper case coordinates are the body coordinates X^i (cf. <cit.>). Putting this all together and ignoring ϵ^2 terms, we get
ξ = X (1 + ϵ A_+ cos(ω t)) + ϵ A_× Y cos(ω t) + ϵ u^x ,
η = Y (1 - ϵ A_+ cos(ω t)) + ϵ A_× X cos(ω t) + ϵ u^y .
§.§ Physical Displacement Field
We can use the coordinate transformation (<ref>) to translate the displacement fields (u^x, u^y), or their time-independent part (φ^x, φ^y), into the physical displacement fields (u^ξ, u^η) or (φ^ξ, φ^η):
φ^ξ = φ^x + A_+ X + A_× Y ,
φ^η = φ^y - A_+ Y + A_× X .
For illustration, the plate's material parameters are taken to be c_1 = 1950 m/s and c_2 = 540 m/s, which corresponds to polyethylene.
We discuss only the response to a purely plus-polarized wave shown in Figure <ref>, as only this case gives a signal in the interferometer discussed in the next section.
At some frequencies the solution becomes very large, i.e. the GW hits a resonance. The first four resonances occur at ω_1 L ≈ 2400 m/s, ω_2 L ≈ 3200 m/s, ω_3 L ≈ 5940 m/s and ω_4 L ≈ 7220 m/s.
The first resonance looks like the n=1 mode of the plate, discussed in <ref>, which has the frequency ω_1 = c_2 √2 π / L, i.e. ω_1 L ≈ 2399 m/s.
This agrees very well with the first resonance frequency, which is to be expected in a model without damping; a damping term would shift the resonance frequencies.
We would expect to find the n=2 mode at ω L ≈ 4800 m/s, but there is no corresponding resonance in <ref>.
Following this argument further, the next mode with n=3 lies at ω L ≈ 7200 m/s, and indeed there is a corresponding spike in the resonance curve of Figure <ref>. This suggests that only the odd-numbered modes are compatible with the plus-polarized GW.
We hypothesize that the excitability is related to the mass quadrupole moment, or more precisely to its second time-derivative, as is the case for the emission of gravitational waves via Einstein’s famous quadrupole formula (see e.g. <cit.>). One notices that this quantity is non-vanishing only for the odd-numbered modes, which fits nicely with our results.
As each of the resonances in <ref> should be related to a normal mode, we expect that there are many more than the ones found in <ref>.
§ INTERFEROMETER
So far, we have only looked at the resonances where the maximum of φ^i diverges. These do not necessarily correspond to the frequencies at which the interferometer gives the strongest signal, as it might be possible that the deformations along the path of the laser cancel each other.
§.§ Interferometer Setup and Assumptions
We consider a standard Michelson interferometer, see <ref> and the explanation below. We assume that the instruments are rigid and move as the plate below them does. The phase difference picked up by the laser along the two different paths can be measured as a change in light intensity at the detector. In a more realistic setting, a Fabry–Pérot interferometer would be used, where the laser effectively bounces back and forth multiple times.
For simplicity, it is assumed that the plate does not change while the light crosses the instrument. The light takes about T_l = 2L/c ≈ 10^-7 s for a 15 m long plate. The upper limit of the frequencies of gravitational waves from known physical phenomena we want to detect is 10 kHz, which corresponds to a period of T_GW = 10^-4 s. For these values, the ratio is 10^-3, so the assumption of a static plate during the photon flight is justified. This ratio stays constant as long as the product ω L is constant. Thus, for smaller plates, larger frequencies can be considered, and the other way around.
The positions of the mirrors and the beam-splitter are A=(-X_0,Y_0), C=(X_0,-Y_0) and B=(-X_0,-Y_0) (see <ref>). Later, X_0=Y_0=0.4 L is chosen for the calculations. This is to avoid the error at the boundaries due to the finite number of terms in the Fourier series approximation which is largest near the rim of the plate.
§.§ Calculation of Signal
In this section, we calculate the phase difference between the laser beams traveling along the two interferometer arms. The calculations are similar to those for the case of free mirrors, found for example in <cit.> or <cit.>. We discuss only the case of pure plus-polarization, A_+=1, A_×=0. The metric in the local Lorentz frame is given by <cit.>
g_μν=η_μν-2
[ Φ 0 0 Φ; 0 0 0 0; 0 0 0 0; Φ 0 0 Φ ] ,
with Φ given by
Φ
= -1/4ϵḧ_+(t)(ξ^2-η^2)
= ϵω^2/4cos(ω t) (ξ^2-η^2) .
One can see that the g_00 component is different from the flat-space metric. As it is dependent on the position on the plate, clocks run at different rates depending on where they are. Combining this with the motion of the mirrors, there are two effects that create a phase difference along the two interferometer arms:
* A difference in time elapsed because one path is longer than the other.
* A difference due to clocks running at different speeds along both paths and hence the light traveling at different coordinate speeds.
We calculate the elapsed time along null geodesics (which are still straight lines) taken by the photons. First, we look at the photon moving along the lower interferometer arm from point B to C. For the four-velocity of the photons K^μ we have
0
= g_μν K^μ K^ν
= (η_tt - 2 Φ) (K^t)^2 + η_ξξ (K^ξ)^2
= -(1 + ϵ (ω^2/2) cos(ω t) (ξ^2-η^2) ) (K^t)^2 + (K^ξ)^2 .
Using K^t = dtdλ and K^ξ = d ξ/dλ for a path parameterized by λ, this can be rearranged to an expression for the coordinate speed of light
d ξ/dt
= K^ξ/K^t
= √(1 + ϵ (ω^2/2) cos(ω t) (ξ^2-η^2)) ≈ 1 + ϵ (ω^2/4) cos(ω t) (ξ^2-η^2) .
We again use the assumption that the plate is stationary during one light crossing, i.e. the cosine is approximately constant, and we take it to be 1 for the maximal possible effect. Integrating along the path from ξ_B = -0.4L + ϵ φ^ξ(-0.4L, -0.4L) to ξ_C = 0.4L + ϵ φ^ξ(0.4L, -0.4L) at η = -0.4L, we find
Δt_BC = 0.8 L + ϵ [ φ^ξ(0.4L, -0.4L) - φ^ξ(-0.4L, -0.4L) ] - ϵ (ω^2/4) (0.4L)^3 (2/3 - 2) .
The first term is the time it would take the laser to cross the path without any GW. The second term is due to the motion of the mirrors and the last term is the correction due to the different rates at which time passes.
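The last term can be checked with a short computer-algebra snippet (our own sketch; the mirror motion at the endpoints is ignored here):

import sympy as sp

xi, L, w, eps = sp.symbols('xi L omega epsilon', positive=True)
a = sp.Rational(2, 5) * L                        # 0.4 L
eta = -a                                         # the arm lies at eta = -0.4 L
speed = 1 + eps * w**2 / 4 * (xi**2 - eta**2)    # coordinate speed with cos(omega t) -> 1
dt = sp.integrate(sp.series(1 / speed, eps, 0, 2).removeO(), (xi, -a, a))
print(sp.factor(dt - 2 * a))   # 8*L**3*epsilon*omega**2/375 = -eps*(w^2/4)*(0.4L)^3*(2/3-2)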
Doing this also for the other interferometer arm the difference in the two elapsed times is found to be given by
Δt = Δt_BAB - Δt_BCB = Δl - ϵ (4ω^2/3) (0.4L)^3 ,
where Δ l is the difference in path lengths given by
Δ l
= 2 ϵcos(ω t) (
φ^η|_A
- φ^η|_B
- φ^ξ|_C
+ φ^ξ|_B
) .
The two effects are additive and can be considered separately. We will later see that Δl is of order ϵ L. The second effect would only be relevant if the plate were more than 30 kilometers long, in which case our approximations no longer hold. This agrees with the conclusion of <cit.>. Thus the relevant part of the signal results from the difference in lengths along the two interferometer arms, Δl.
§.§ Results
The resulting differences in path lengths for different frequencies are shown in Figure <ref>. For comparison, the resulting signal for an interferometer with freely suspended mirrors (like LIGO, φ^ξ = φ^η = 0) of the same size is also plotted. One can see that there are long ranges (for instance from ω L = 3000 to almost 6000) where the signal from the interferometer on the plate is larger, so this is not only the case near the resonances.
We can choose the size of the plate L so that the frequency range with the largest signal coincides with the range where interesting phenomena are expected.
An approximate polynomial solution for the case of a small period ratio ε = ω L/c_1 and pure plus-polarization is given by:
φ^x(x,y) = (L/2) A_+ ( -x + (1/24) ε^2 (c_1^2 c_2^2/(c_1^2 c_3^2)) [ -(3/2) x (1 - c_2^2/c_1^2) + 3 (1 - 2c_2^2/c_1^2) x y^2 + x^3 ] ) .
Upon conversion of this result to LL coordinates, this can be used to explore the small-frequency case (small compared to c_1/L). The resulting path-length difference is:
Δl = 2.03 × 10^-7 · ϵ L^3 ω^2 cos(ω t) .
The approximation is valid up to at most ω L = 100, and if we insert this value, we find Δl/L = ϵ cos(ω t) · 2.03 × 10^-3. This confirms that the signal indeed almost vanishes for low frequencies. Figure <ref> shows a plot of the signal for the polynomial solution and the numerical approach. It can be seen that the two curves start to diverge at approximately ω L = 800, which is to be expected since the low-frequency limit no longer applies there. Also, for very small frequencies the agreement is not perfect, because the numerical approach becomes ill-conditioned. Overall, the two curves become closer if the Fourier series in the numerical approach are truncated at a larger mode number M.
§ CONCLUSION
In this work, we found a numerical solution describing the behavior of a square elastic plate under the influence of a gravitational wave. This was done by developing a spectral approach that handles the derivatives of Fourier series of non-periodic functions. The validity of this approach was checked in the low-frequency limit against an approximate polynomial solution.
This solution was then used to calculate the signal a laser interferometer placed on this plate would see. It was discovered that, over broad frequency ranges, the signal is larger than the one an interferometer of the same size, but consisting of freely suspended mirrors, would measure. Of course, the advantage of interferometers with free mirrors, like LIGO, is that they can use kilometer-long tunnels to get a larger signal; plates are restricted to far smaller sizes.
Intuitively, one would expect that the material would oppose the motion of the mirrors and thus diminish the signal. As demonstrated above, this is only true for gravitational wave frequencies that are small compared to the first resonance frequency of the plate.
The signal is especially large near certain resonance frequencies. Looking at the motion pattern of the plate near the resonances, we discovered that some of them correspond to the normal mode solutions from section <ref>.
The amplitude of the signal is of the order of ϵ L ≈ 10^-20 L. This can be improved by using a larger plate or longer effective path lengths with the help of a Fabry–Pérot interferometer to bounce the laser back and forth multiple times. There are limits to these improvements, though, as the effects start to cancel once the plate undergoes more than half of its oscillation during one light crossing. The resulting path-length difference has to be compared to the wavelength of the laser, which is of order 10^-7 m. As the latter is far larger, the phase difference and hence the interference effect will be very small.
To get an idea of the practicability of these measurements, one would need to compare the resulting deformations to the ones caused by thermal noise and seismic disturbances of the plate. Moreover, our model does not include any damping behavior, which would also reduce the amplitude of the signal and make experimental observations even more difficult.
§ ACKNOWLEDGEMENTS
We are grateful to Piotr Chruściel and Robert Beig for many helpful discussions.
T.M. is a recipient of a DOC Fellowship of the Austrian Academy of Sciences at the University of Vienna, Faculty of Physics, and is supported by the Vienna Doctoral School in Physics (VDSP), the research network TURIS, and in part by the Austrian Science Fund (FWF), Project No. P34274, as well as by the European Union (ERC, GRAVITES, 101071779).
§ BULK SOLUTIONS
With a plane wave ansatz
φ^j(x,y) = a^j e^i k_l x^l ,
the equations of motion reduce to the matrix system
B a⃗ =
[ (k^2 c_2^2-ω^2) + c_3^2 k_x^2 c_3^2 k_x k_y; c_3^2 k_x k_y (k^2 c_2^2-ω^2) + c_3^2 k_y^2 ][ a^x; a^y ] = 0 .
For non-zero a⃗, the determinant of B must vanish. There are two values of k_j, with corresponding null eigenvectors of B, for which this is the case:
* p-waves: ω=c_1 κ
where the wave vector is denoted κ_j to distinguish it from the second case.
The matrix then simplifies to
((μ+λ)/ρ_0) [ -κ_y^2 κ_x κ_y; κ_x κ_y -κ_x^2 ][ a^x; a^y ] = 0 ,
and is solved by vectors a_j which are collinear with κ_j, i.e. longitudinal oscillations.
* s-waves: ω=c_2 k
In this case we call the amplitude b^i instead of a^i to easier distinguish between the two cases.
Here the matrix becomes
((μ+λ)/ρ_0) [ k_x^2 k_x k_y; k_x k_y k_y^2 ][ b^x; b^y ] = 0
and is solved when k_j b^j = 0, i.e. transversal oscillations.
§ NORMAL MODES IN FLAT SPACE
Without a GW (h_ij=0), the differential equations we want to solve stay the same, but in the boundary conditions (<ref>) the right-hand side vanishes.
§.§ S-Wave Modes
Here the solutions to the PDE have the form φ^j = b^j e^ik_l x^l with k_j b^j = 0.
It might be the case that a combination of several such solutions is needed to satisfy the boundary conditions. For the separation of the time dependence to still work, they all need to have the same magnitude of k⃗ (and therefore also ω), while the direction is still free. Which k_j-vectors should we combine? Looking at one boundary, e.g. x = L/2, we have a linear combination of cos(k_y y) and sin(k_y y) which has to vanish. The only other k_j which can help to cancel these terms are the ones with the same k_y. The same follows for k_x from the condition on the boundary y = L/2. So only two k_j vectors can be combined helpfully:
k_j^(1) = (k_x, k_y) and k_j^(2) = (k_x, -k_y) .
Putting this all together φ^i can be written in terms of cosines and sines as
φ_x = k_y ( A c_x c_y + B s_x s_y + C s_x c_y + D c_x s_y ) ,
φ_y = k_x ( B c_x c_y + A s_x s_y - D s_x c_y - C c_x s_y ) ,
where the notation cos(k_x x) = c_x and similar has been used. Since φ⃗ is divergence-free, the boundary conditions reduce to
σ_xx|_∂P_x = ϵ 2 c_2^2 ∂_x u^x = 0 ,
σ_yy|_∂P_y = ϵ 2 c_2^2 ∂_y u^y = 0 ,
σ_xy|_∂P = ϵ c_2^2 [ ∂_x u^y + ∂_y u^x ] = 0 .
Evaluating the first condition on ∂P_x at (±L/2, ±y) with the help of Mathematica and adding the four resulting expressions with suitable sign combinations gives four simpler, necessary but not sufficient conditions:
C k_x k_y cos(k_x L/2) cos(k_y y) = 0 ,
B k_x k_y cos(k_x L/2) sin(k_y y) = 0 ,
A k_x k_y sin(k_x L/2) cos(k_y y) = 0 ,
D k_x k_y sin(k_x L/2) sin(k_y y) = 0 .
Doing the same with the other conditions and boundaries, and trying to solve them all, results in two possible mode solutions:
A=B=C=0 , k_x=k_y=2 π/L n ,
A=B=D=0 , k_x=k_y=π/L (2n+1) .
Therefore the whole solution u⃗ can be written as follows.
“Quadratic” S-Wave Modes:
* For n even:
u^x = + cos(ω t) cos( π/L n x) sin( π/L n y) ,
u^y = -cos(ω t) sin( π/L n x) cos( π/L n y ) .
* For n odd:
u^x = - cos(ω t) sin( π/L n x) cos( π/L n y ) ,
u^y = +cos(ω t) cos( π/L n x ) sin( π/L n y ) .
<Ref> shows the modes for n=1 and n=2. For these modes, the corners always stay fixed.
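As a hedged numerical cross-check (our own snippet), one can verify that the even-n modes above are traction-free: ∂_x u^x vanishes on ∂P_x, ∂_y u^y on ∂P_y, and ∂_x u^y + ∂_y u^x vanishes identically, so all three boundary conditions hold.

import numpy as np

L, n = 1.0, 2                       # an even-n mode
kx = ky = np.pi * n / L
s = np.linspace(-L/2, L/2, 101)     # sample points along an edge

# u^x = cos(kx x) sin(ky y), u^y = -sin(kx x) cos(ky y) (time factor dropped)
dux_dx = lambda x, y: -kx * np.sin(kx * x) * np.sin(ky * y)
duy_dy = lambda x, y:  ky * np.sin(kx * x) * np.sin(ky * y)
sxy    = lambda x, y: (-kx * np.cos(kx * x) * np.cos(ky * y)    # ∂x u^y
                       + ky * np.cos(kx * x) * np.cos(ky * y))  # ∂y u^x

assert np.allclose(dux_dx(L/2, s), 0) and np.allclose(dux_dx(-L/2, s), 0)  # σ_xx on ∂P_x
assert np.allclose(duy_dy(s, L/2), 0) and np.allclose(duy_dy(s, -L/2), 0)  # σ_yy on ∂P_y
assert np.allclose(sxy(s, s), 0)                                           # σ_xy everywhere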
§.§ Mixed Modes
So far, we saw that there are s-wave modes and want to explore if there are p-wave or mixed modes. This is answered by the following theorem.
Consider solutions φ^j made up of the building blocks b^j e^ik_l x^l, k_j b^j= 0 (s-waves) and a^j e^i κ_l x^l, with a^l ∥κ^l (p-waves). The magnitude of the wave vectors is given by the dispersion relations ω=c_1 κ and ω =c_2 k while the direction is still free to choose.
Then, there exists no finite sum of such terms which satisfies the boundary conditions (<ref>) with vanishing right-hand side, except for pure s-wave modes (<ref>).
As mentioned above, we need only consider a fixed value of ω.
First, note that since c_1 > c_2 the relation κ < k for the magnitudes of the wave vectors holds. Since we do not want a pure s-wave solution, φ^j contains at least one p-wave term a^j e^i κ_l x^l. The most general p-wave solution can be written as
φ_x = κ_x ( A c_x c_y + B s_x s_y + C s_x c_y + D c_x s_y ) ,
φ_y = κ_y ( -B c_x c_y - A s_x s_y + D s_x c_y + C c_x s_y ) ,
for some constants A,B,C,D.
This leads to a σ_xx component, evaluated at the x = L/2 boundary, of the following form
σ_xx(L/2,y)= E cos(κ_y y) + F sin(κ_y y) ,
with some constants E and F. It does not satisfy the boundary condition on its own, so we need to add an s-wave term with the same y-component k_y=κ_y. But the k_x component of the s-wave is then given by
k_x= ±√((ω/c_2)^2 - k_y^2) .
As |k_j| > |κ_j| there is no other possible p-wave with the same κ_x component, so this term has to satisfy the boundary conditions on ∂ P_y on its own, in particular:
σ_yy(x,L/2) ±σ_yy(x,-L/2) = 0 .
The general form of such an s-wave is then given by (<ref>), and if we insert the resulting Cauchy stress tensor into the above relation, two necessary conditions follow:
k_x k_y cos(k_y L/2) (A sin(k_x x)-C cos(k_x x)) = 0 ,
k_x k_y sin(k_y L/2) (D sin(k_x x)-B cos(k_x x)) = 0 .
This has to be true for all x. Since k_j is already fixed, only the constants A, B, C and D can be used to satisfy these conditions. But in general, the only possibility to do this is A=B=C=D=0. Therefore there is no s-wave and we are back at a pure p-wave. But we already saw in the previous section that this does not satisfy the boundary conditions.
This concludes our analytic investigation of mode solutions. In terms of finite sums, there are only the quadratic s-wave modes (<ref>). To find more general solutions the whole Fourier series would need to be considered.
http://arxiv.org/abs/2307.01961v1 | 2023-07-05 | math.GT, math.AT
Doodles and blobs on a lined page: convex quasi-envelops of traversing flows on surfaces
Gabriel Katz
August 1, 2023
We consider 2-moderate immersions of closed curves (“doodles") and compact surfaces (“blobs") in the products ℝ× S^1 and ℝ× I (where I denotes the interval [0, 1]), up to cobordisms that also are 2-moderate immersions of surfaces and solids in ℝ× S^1 × [0, 1] and ℝ× I × [0, 1], respectively. By definition, the 2-moderate immersions of curves and surfaces do not have tangencies of order ≥ 3 to the fibers of the obvious projections ℝ× S^1 → S^1, ℝ× S^1 × [0, 1] → S^1 × [0, 1] and ℝ× I → I, ℝ× I × [0, 1] → I × [0, 1].
These bordisms come in different flavors: in particular, we can introduce one flavor based on regular embeddings only. We compute these bordisms of regular embeddings and construct invariants that distinguish between the bordisms of immersions and embeddings.
§ INTRODUCTION
This paper illustrates the richness of traversing vector flows on surfaces with boundary. Some multi-dimensional versions of these ideas and constructions can be found in <cit.>, <cit.>, and <cit.>. However, the case discussed here, of 2-moderate one- and two-dimensional embeddings and immersions against the background of a fixed 1-dimensional foliation on a target surface A, has its own unique and pleasing features, one of which is the drastic simplification of the combinatorial considerations that characterize our treatment in <cit.>, <cit.>, and <cit.> of similar multidimensional immersions.
Our general inspiration comes from the works of V. I. Arnold <cit.>, <cit.>, <cit.>, and V. A. Vassiliev <cit.>, <cit.>.
Although doodles on surfaces have been a well-traveled destination <cit.>, <cit.>, doodles against the background of a given foliation on a compact surface are not. The same can be said about submersions X → A of compact surfaces X into cylinders or strips A, equipped with a product foliation ℱ(v̂). The main problem we deal with in this paper is to classify such submersions (up to a kind of cobordism), while controlling the tangency patterns of ∂X to ℱ(v̂).
http://arxiv.org/abs/2307.03252v1 | 2023-07-06 | math.CO, cs.CG
Convex Hull Thrackles
Balázs Keszegh (Alfréd Rényi Institute of Mathematics and ELTE Eötvös Loránd University, Budapest, Hungary. Research supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences, by the National Research, Development and Innovation Office – NKFIH under the grants K 132696 and FK 132060, by the ÚNKP-21-5 and ÚNKP-22-5 New National Excellence Program of the Ministry for Innovation and Technology from the source of the National Research, Development and Innovation Fund, and by the ERC Advanced Grant “ERMiD”. This research has been implemented with the support provided by the Ministry of Innovation and Technology of Hungary from the National Research, Development and Innovation Fund, financed under the ELTE TKP 2021-NKTA-62 funding scheme.)
Dániel Simon (Alfréd Rényi Institute of Mathematics. Research supported by the ERC Advanced Grant “Geoscape”.)
August 1, 2023
A thrackle is a graph drawn in the plane so that every pair of its edges meet exactly once, either at a common end vertex or in a proper crossing. Conway's thrackle conjecture states that the number of edges is at most the number of vertices. It is known that this conjecture holds for linear thrackles, i.e., when the edges are drawn as straight line segments.
We consider convex hull thrackles, a recent generalization of linear thrackles from segments to convex hulls of subsets of points. We prove that if the points are in convex position then the number of convex hulls is at most the number of vertices, but in general there is a construction with one more convex hull. On the other hand, we prove that the number of convex hulls is always at most twice the number of vertices.
A drawing of a graph is a representation of the graph in the
plane such that the vertices are represented by distinct points and the edges by simple continuous curves connecting the corresponding point pairs and not passing through any other point representing a vertex. When it leads to no confusion, we identify the vertices and the edges with their representations.
A drawing of a graph is a thrackle if every pair of edges meet precisely once, either at a common endvertex or at a proper crossing. Conway’s conjecture <cit.> states that every thrackle of n vertices can have at most n edges.
It is easy to draw thrackles with n edges, while from above, through a series of improvements (Lovász, Pach, and Szegedy <cit.> proved 2n, Cairns and Nikolayevsky proved 3/2 n <cit.>, improved further by Fulek and Pach <cit.> and by Goddyn and Xu <cit.>), the current best upper bound is 1.393n, due to Xu <cit.>.
Perhaps the most natural special case is the case of linear thrackles, i.e., when edges are represented by straight line segments. The conjecture was originally proved in this case by Erdős <cit.>. Another nice proof of this is due to Perles, see <cit.>.
Further natural special cases that have been solved are when the edges are represented by x-monotone curves <cit.> (this generalizes linear thrackles) and when the drawing is outerplanar <cit.>, i.e., when the points lie on the boundary of a disk and the edges lie inside this disk. In the latter case, the proof method of Perles again works, while in the former it fails <cit.>.
There are plenty of generalizations of thrackles that we do not consider here in detail (when we allow tangencies, when we only require an odd number of intersections between edges, etc.).
The version that interests us is the following. Ágoston et al. proposed a generalization of linear thrackles <cit.>, which was altered by a suggestion of Gossett to get the following variant <cit.> (see Section <ref> for more details).
Suppose that we are given a set P of n points in general position in the plane, and a family 𝒮 of subsets of P whose convex hulls are all different and form the family C(𝒮)={Conv(S):S∈𝒮}, where Conv(S) denotes the convex hull of S. We say that C(𝒮) is a convex hull thrackle on P if the following hold:
* C_1⊄C_2 for any two different C_1,C_2∈ C(𝒮);
* C_1∩ C_2≠∅ for any two different C_1,C_2∈ C(𝒮);
* C_1∩ C_2∩ C_3⊂ P for any three different C_1,C_2,C_3∈ C(𝒮).
Note that by the definition C_1∩ C_2∩ C_3 is either empty or a single point of P.
Observe that if in a convex hull thrackle all convex hulls are segments then it is a linear thrackle and, in the other direction, a linear thrackle is a convex hull thrackle in which all convex hulls are segments.
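To make the three conditions concrete, here is a small checking sketch in Python; it is only an illustration, assuming the shapely geometry library and floating-point coordinates (a rigorous implementation would need exact arithmetic).

from itertools import combinations
from shapely.geometry import MultiPoint, Point

def is_convex_hull_thrackle(P, family):
    # P: points in general position; family: subsets of P, given as point lists.
    hulls = [MultiPoint(S).convex_hull for S in family]
    for h1, h2 in combinations(hulls, 2):
        if h1.covers(h2) or h2.covers(h1):   # condition 1: no containment
            return False
        if not h1.intersects(h2):            # condition 2: pairwise intersection
            return False
    for h1, h2, h3 in combinations(hulls, 3):
        inter = h1.intersection(h2).intersection(h3)
        # condition 3: a triple intersection is empty or a single point of P
        if inter.is_empty:
            continue
        if inter.geom_type != "Point" or not any(inter.equals(Point(p)) for p in P):
            return False
    return True

# Two crossing segments form a (linear, hence convex hull) thrackle:
P = [(0, 0), (4, 0), (2, 3), (2, -1)]
print(is_convex_hull_thrackle(P, [[(0, 0), (4, 0)], [(2, 3), (2, -1)]]))  # True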
<cit.>
A convex hull thrackle on n points has at most n convex hulls.
We disprove this conjecture by presenting an infinite class of examples with n points and m=n+1 convex hulls:
For every n≥ 6, there exists a convex hull thrackle on n points with n+1 convex hulls.
In <cit.> they note that an interesting special case is when P is in convex position. Indeed, this case is a generalization of outerplanar linear thrackles (for thrackles this specific case was not so important as any one of the two conditions in itself already allows the proof method of Perles to be applied). For this case, we can prove that their conjecture holds:
If P is a set of n points in convex position then any convex hull thrackle on P has at most n convex hulls.
From the proof it also follows that the extremal examples in Theorem <ref> are extensions of extremal examples of linear thrackles, see Claim <ref>. Some examples of linear thrackles with n points are the following. Take an odd number of points evenly spaced on a circle where each point is connected by a segment to the two points that are almost opposite to it. Or take points in convex position and connect one point to every other point by a segment and also connect by a segment its two neighbors on the convex hull of the point set.
Even though there is a counterexample for the above conjecture, convex hull thrackles are a pretty natural extension of linear thrackles and worth investigating further. In particular, as far as our construction is concerned, the conjecture may be already true if we replace the upper bound n with n+1.
Our third result is that replacing n with 2n is enough:
A convex hull thrackle on n points can have at most 2n convex hulls.
Another direction would be to find a proper adjustment of the definition that excludes our counterexample and for which the original conjecture may be true. We discuss some variants of the definition in Section <ref>.
We finish the Introduction by mentioning a related definition of Asada et al. <cit.>. They also extend linear thrackles to convex hulls. On a given point set P, a set of convex hulls C(𝒮) is a thrackle of convex sets if there exists a finite set W⊇ P such that for each pair of convex hulls C_i≠ C_j we have |C_i∩ C_j∩ W|=1. They conjecture that in this case we still have at most n convex hulls. Note that given a linear thrackle (of segments) we can add to P the set of intersection points of segments to get the required W. Thus this definition is also a natural extension of linear thrackles. They prove their conjecture in special cases only. In particular, they show that the number of convex hulls is at most |W|. We do not see any direct connection between this definition and the definition of convex hull thrackles we worked with.
§ POINTS IN CONVEX POSITION
We first prove a simple special case, note that for this the points are not necessarily in convex position.
A convex hull thrackle on n points in which every set S∈𝒮 contains a fixed point p∈ P can have at most n-1 convex hulls.
Let p∈ P be the point such that p∈ S for every S∈𝒮. If for some q≠ p we have that {p,q}∈𝒮, then q cannot be part of any other convex hull, as there are no convex hulls containing each other. Thus if there are k such points then there are at most k such segments in C(𝒮), and there is no other convex hull which is a segment. Convex hulls that are not segments contain at least two segments containing p; on the other hand, every such segment is in at most two convex hulls. Therefore the number of non-segment convex hulls is at most the number of points not counted earlier, that is, at most n-1-k. Altogether we have at most n-1 convex hulls, as claimed.
In the rest of this section, we always assume that P is in convex position.
From now on let 𝒮 be such that C(𝒮) is a convex hull thrackle on P. We take a minimal counterexample, i.e., one where |P|=n is as small as possible, |𝒮|=m>n, and ∑_S∈𝒮|S| is as small as possible. This implies that m=n+1. Whenever we say that a convex hull thrackle is smaller than another one we refer to this ordering. If there is a set S∈𝒮 with |S|=1 then |𝒮|=1, a contradiction; therefore every set in 𝒮 has size at least two.
The proof is a consequence of a series of lemmas.
There cannot be a boundary edge of Conv(P) which is a proper subset of a convex hull from C(𝒮).
Assume on the contrary, and let Conv({p_1,p_2}) be such a boundary edge (p_1,p_2∈ P) and let S∈𝒮 contain both of p_1 and p_2 and a further vertex p_3. We claim that we can remove one of p_1,p_2 from S to get a smaller counterexample, which is a contradiction.
Indeed, assume on the contrary that if we remove p_1 from S then the resulting family of convex hulls is not a convex hull thrackle. Only the second property can fail, that is, there is a set S_1∈𝒮 which is disjoint from Conv(S∖{p_1}). Removing p_2 shows that there is also a set S_2∈𝒮 disjoint from Conv(S∖{p_2}). Using that P is in convex position and that p_1 and p_2 are consecutive on the hull, we get that S_1∩ S={p_1}, S_2∩ S={p_2}, and that Conv(S_1)∖{p_1} and Conv(S_2)∖{p_2} are in different connected components of Conv(P)∖ Conv(S) (here we used that p_3 exists, as otherwise there would be only one connected component). Thus Conv(S_1) and Conv(S_2) are disjoint, contradicting that originally we had a convex hull thrackle.
See Figure <ref>.
No boundary edge of Conv(P) can be in two convex hulls from C(𝒮).
No segment Conv({p_1,p_2}) with p_1,p_2∈ P which is not a boundary edge of Conv(P) is contained in two convex hulls from C(𝒮).
Assume on the contrary and let C_1,C_2∈ C(𝒮) be two convex hulls that contain both of p_1 and p_2. Every other convex hull must be disjoint from the interior of e=Conv({p_1,p_2}). The segment e splits Conv(P) into two convex sets. Except for C_1,C_2, every other convex hull is strictly on one side of e (by strictly we mean that it is disjoint from the other component of Conv(P)∖ e) and contains at most one of p_1,p_2.
We claim that for both sides of e there is a convex hull strictly on that side. Assume on the contrary; then wlog. there is no convex hull strictly on the left side of e. Then 𝒮∖{C_1} restricted to the right side (actually only C_2 needs to be restricted; the rest are already on the right side) must be a convex hull thrackle on at most n-1 points (n-1 comes from the fact that there are points of P on both sides of e besides p_1,p_2, as e is not a boundary edge of Conv(P)), since by restricting to the right side no containments can be introduced. Thus by the minimality of our example we have that m-1≤ n-1 and thus m≤ n, a contradiction.
Thus a convex hull strictly on the right and a convex hull strictly on the left exist; they can only meet at the points p_1,p_2, so every convex hull must contain either p_1 or p_2, while only C_1 and C_2 contain both. Therefore if a convex hull on one side contains p_1, then on the other side all the additional convex hulls must contain p_1, and the same is true the other way. Hence one of p_1 or p_2 will be contained in all of the convex hulls.
Now by Claim <ref> we get that 𝒮 has at most n-1 elements, a contradiction.
No pair of points of P can be in two convex hulls from C(𝒮).
The corollary is a direct consequence of Corollary <ref> and Lemma <ref>.
Corollary <ref> guarantees that whenever we replace a set S in 𝒮 with a subset of S of size 2, the first condition of being a convex hull thrackle still holds. This will be used repeatedly in the remainder of this section.
If there are at least 3 convex hulls from C(𝒮) containing a point p∈ P then there are two among them whose intersection is {p}.
Let C_1=Conv(S_1), C_2=Conv(S_2), C_3=Conv(S_3) be these convex hulls. Let p,p_2,…,p_n be the points of P ordered counterclockwise on the boundary of Conv(P). Now let I_i be the minimal length interval of p_2,…,p_n that contains S_i∖{p}. It is easy to see that if I_1∩ I_2∩ I_3≠∅ then C_1∩ C_2∩ C_3⊄P, a contradiction. Otherwise, if I_1∩ I_2∩ I_3=∅, then two of these intervals must be disjoint, and then the intersection of the two corresponding C_i's is {p}.
If there are two convex hulls whose only intersection is {p} for some p∈ P then both of them are segments.
Indeed, otherwise we can replace them with their appropriate boundary edges incident to p (the edge closer to the other convex hull), and it is easy to see that they must still intersect every other convex hull, thus the family remains a convex hull thrackle. Also, it would be a smaller one, a contradiction. See Figure <ref>.
There cannot be a set S∈𝒮 of size two and a set S'∈𝒮 of size ≥ 3 that have a nonempty intersection.
Assume on the contrary that S∩ S' contains a point p∈ P. By Lemma <ref>, {p} must be a proper subset of Conv(S)∩ Conv(S').
Let S={p,q}. Let f=Conv({p',q'}) be the unique edge on the boundary of Conv(S') whose interior intersects the segment e=Conv(S). If replacing S' with {p',q'} yields a convex hull thrackle then it is a smaller counterexample, a contradiction. Otherwise, if it is not a convex hull thrackle, then there must be a convex hull Conv(Q)∈ C(𝒮) such that Conv(Q)∩ f=∅. As Conv(Q)∩ Conv(S)≠∅ we get that p∈ Q. Wlog. p' and Q∖{p} are contained in the same connected component of Conv(P)∖ e. Let g=Conv({p,p'}). As Conv(Q) is disjoint from f, Conv(Q)∖{p} must be disjoint from g and thus lie in one of the connected components of Conv(P)∖ g, precisely in the one that does not contain q. In this case, we replace S' with {p,p'}. As every other convex hull must intersect both Conv(Q) and Conv(S), they all must intersect the segment g as well, and so this is still a convex hull thrackle, but a smaller one, a contradiction. See Figure <ref>.
The lemmas imply that each point of P is either only in sets from 𝒮 of size 2 or only in sets from 𝒮 of size ≥ 3; moreover, in the latter case, there can be at most two such sets for each point. Thus P can be split into two disjoint subsets P=P_2∪ P_3 such that the sets S∈𝒮 that contain 2 points of P contain only points from P_2, while the sets S∈𝒮 that contain at least 3 points of P contain only points from P_3. Moreover, for every p∈ P_3 at most two sets from 𝒮 contain p. Thus there are at most 2/3|P_3| sets of size ≥ 3, while for P_2 we can apply the result about (convex) linear thrackles (see Corollary <ref> for a proof) to conclude that there are at most |P_2| sets of size 2; altogether there are at most n sets in 𝒮, a contradiction.
If a convex hull thrackle on n points in convex position has n convex hulls, then there is an underlying linear thrackle, i.e., there is an injection S↦ S' with S'⊆ S, |S'|=2, from 𝒮 such that the image of 𝒮 is a linear thrackle.
Consider now an extremal example, i.e., a convex hull thrackle with m=n convex hulls. Going through the proof of Theorem <ref>, we see that we did not directly use that our convex hull thrackle is a counterexample; instead, we always replaced some non-segment convex hull with a proper subset, contradicting its minimality.
Thus repeating the algorithm steps in the proof of Theorem <ref> we repeatedly replace convex hulls with smaller convex hulls to get another convex hull thrackle with the same number of points and convex hulls as before. At the end, according to the computation at the end of the proof of Theorem <ref>, if there are non-segment convex hulls left then we have less than n convex hulls, a contradiction. Thus at the end every convex hull must be a segment, finishing the proof.
There is one exception, where in the replacement we used the fact that m=n+1. This was in Lemma <ref>, where we used m=n+1 to show that there are convex hulls strictly on both sides of the segment e. Let us modify that argument slightly. Now we work with a convex hull thrackle with m=n and assume on the contrary that on the left side there is no convex hull. If the left side (including p_1,p_2) has n_1≥ 4 points then the restriction of 𝒮∖{C_1} to the right side is a convex hull thrackle on at most n-2 points with m-1=n-1=(n-2)+1 convex hulls, a contradiction by Theorem <ref>.
If the restrictions of C_1 and C_2 on the right side do not contain each other then 𝒮 restricted to the right side is a convex hull thrackle on n-1 points and m convex hulls, a contradiction by Theorem <ref>. If the restrictions of C_1 and C_2 on the left side do not contain each other then n_1≥ 4 and we are done. Thus on each side the restrictions of C_1 and C_2 must contain each other and n_1=3. Thus if p_3 is the unique vertex to the left of e, then wlog. C_1 contains p_3 while C_2 does not, and C_2⊋ C_1∖{p_3}. Now applying the proof of Lemma <ref> to the restriction of 𝒮∖{C_1} on the right side, we get that we can remove p_1 or p_2 from C_2 to get a convex hull thrackle. This together with C_1 must also be a convex hull thrackle. See Figure <ref>.
We note that for non-extremal convex hull thrackles it is not always true that there is an underlying linear thrackle; see, e.g., the construction for Theorem <ref> or Figure <ref> for such an example with points in convex position.
§ GENERAL CASE LOWER BOUND CONSTRUCTION
See Figure <ref> for an illustration. The points p,p',q form a triangle which contains q', and there are additional points r_1,…,r_{n-4} inside the triangle pp'q' such that r_1,…,r_{n-4},p',p are in convex position in this order, positioned such that they can be seen from q' also in this order. There are n-1 triangular convex hulls incident to q', one formed by each containment-minimal angle at q' determined by the remaining points.
There are two more convex hulls, both containing p and p'. One contains in addition the points r_i with i even, and the other the points r_j with j odd.
It is easy to check that this is a convex hull thrackle on n points and with n+1 convex hulls.
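Combinatorially, the family of the construction can be generated as below; the sketch uses symbolic labels (pp for p', qq for q') and assumes, as in the figure, that the circular order of the remaining points around q' is r_1,…,r_{n-4},p',p,q.

def counterexample_family(n):
    # The n+1 subsets of the construction on n >= 6 labelled points.
    assert n >= 6
    rs = ["r%d" % i for i in range(1, n - 3)]          # r_1, ..., r_{n-4}
    cycle = rs + ["pp", "p", "q"]                       # assumed order around qq
    # n-1 triangles at qq, one per containment-minimal angle at qq:
    triangles = [{"qq", cycle[i], cycle[(i + 1) % len(cycle)]}
                 for i in range(len(cycle))]
    # two more hulls through p and p', on the even- and odd-indexed r's:
    even = {"p", "pp"} | {r for i, r in enumerate(rs, start=1) if i % 2 == 0}
    odd = {"p", "pp"} | {r for i, r in enumerate(rs, start=1) if i % 2 == 1}
    return triangles + [even, odd]

print(len(counterexample_family(8)))  # 9 = n + 1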
§ GENERAL CASE UPPER BOUND PROOF
If there is a set S∈𝒮 with |S|=1 then |𝒮|=1 and we are done; therefore we can assume that every set in 𝒮 has size at least two. For a convex hull thrackle C(𝒮) on point set P we define its boundary diagram D(𝒮) (or simply D when 𝒮 is clear from the context) the following way: we replace each convex hull of C(𝒮) with at least 3 vertices with the set of its boundary segments. Notice that each such segment joins two points of P. If the convex hull was a segment, then we replace it with 3 copies of this segment. This way we get a multiset of segments. If on a pair of points there are k segments, then we regard this as one segment with weight k. Thus D is a weighted (not multi)set of segments connecting points of P. Notice that by the definition of a convex hull thrackle, a segment e of D that is also present in C(𝒮) as a convex hull (i.e., e=Conv(S) with S∈𝒮, |S|=2) has weight exactly 3 (as by definition no other S' can contain S), while the rest of the segments of D can have weight at most 2 (as by definition a segment can be a boundary segment of at most two convex hulls).
Since every convex hull has been replaced with at least 3 segments when defining the boundary diagram, it is enough to show that the sum of weights of segments in D is at most 6n, where n=|P|.
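The bookkeeping behind the boundary diagram is simple enough to sketch; here each hull is assumed to be given by its vertices in cyclic order.

from collections import Counter

def boundary_diagram(family):
    # family: list of hulls, each a list of its vertices in cyclic order.
    D = Counter()
    for S in family:
        if len(S) == 2:                  # a segment hull is counted 3 times
            D[frozenset(S)] += 3
        else:                            # one unit per boundary segment
            for i in range(len(S)):
                D[frozenset({S[i], S[(i + 1) % len(S)]})] += 1
    return D                             # weighted set: segment -> weight

# sum(boundary_diagram(family).values()) <= 6n then gives |family| <= 2n.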
See Figure <ref> for the next set of definitions.
Given points a,p,b, the angle apb∠ is defined to be the angle at p we get when we rotate the vector pa counterclockwise (around p) until it coincides with the vector pb.
An object (a point, a set of points, or a convex hull) is to the left of the vector pa if for every point b of the object we have 0<|apb∠|≤ 180. Such a point b is also said to be left of pa.
Given a point p∈ P and a point set S∈𝒮 for which p is a vertex of Conv(S), let a and b be the vertices of Conv(S) next to p on the boundary of Conv(S) such that b is left of pa (where a=b is allowed when Conv(S) is a segment). The angle apb∠ is called the wedge at p. We also say that pb is the left side of the wedge.
A segment pa of D is called a leftie from p if there is a point set S∈𝒮 for which S (equivalently, Conv(S)) is to the left of pa. We say that such an S (also Conv(S)) witnesses that pa is a leftie from p. Otherwise, it is called a non-leftie from p.
We note that being a leftie could be defined to any pair of points the same way, not just for pairs that form a segment of D, but we do not need this. Also, usually the convex hull C witnessing that pa is a leftie from p will be incident to p.
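Deciding leftie-ness amounts to orientation tests; a minimal sketch with float coordinates (ignoring exact arithmetic) could read as follows.

def cross(p, a, b):
    # Positive iff b is strictly to the left of the vector pa (0 < |apb| < 180).
    return (a[0] - p[0]) * (b[1] - p[1]) - (a[1] - p[1]) * (b[0] - p[0])

def left_of(p, a, b):
    # 0 < |apb| <= 180: strictly left, or on the line pa but on the opposite ray.
    c = cross(p, a, b)
    if c != 0:
        return c > 0
    return (b[0] - p[0]) * (a[0] - p[0]) + (b[1] - p[1]) * (a[1] - p[1]) < 0

def witnesses_leftie(p, a, hull_points):
    # A hull witnesses that pa is a leftie from p if it lies entirely to its left.
    return all(left_of(p, a, b) for b in hull_points)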
No segment can be a leftie from both of its endpoints.
Assume on the contrary. Then the two convex hulls witnessing that the segment is a leftie from both of its endpoints must be disjoint, a contradiction. See Figure <ref>.
For any two wedges at p that intersect only in p, their left sides cannot be both non-lefties from p.
Call one of these sides pa, the other pb. If 0< |bpa∠|≤ 180 then pb is a leftie from p, but if 0<|apb∠|≤ 180 then pa is a leftie from p.
The restriction to segments of the concept of lefties, together with the previous two statements about it, are the tools in the proof of Perles mentioned in the Introduction <cit.> that linear thrackles have at most n segments; we show the proof for the sake of completeness:
If every convex hull is a segment (i.e., we have a linear thrackle) then there are at most n segments.
If every convex hull is a segment then by Observation <ref>
no two segments incident to the same vertex p can be non-lefties from p (note that the left side of a segment is the segment itself). On the other hand by Lemma <ref> every segment is a non-leftie from one of its endpoints. These together imply that we have at most as many segments as points of P.
Back to the proof of Theorem <ref>, in the general case we also try to do something similar to what is done in the proof of Corollary <ref>. More precisely, it is enough to prove that for each vertex p the sum of the weight of segments of D incident to p that are non-lefties from p is at most 6. Indeed, by Lemma <ref> each segment of the boundary diagram has to be non-leftie from at least one of its endvertices so if each vertex has at most a weight of 6 non-lefties from it, then the whole boundary diagram can have at most 6n weight of segments, hence the convex hull thrackle can contain at most 2n convex hulls. Thus, the following lemma concludes the proof of Theorem <ref>:
For every vertex p the sum of the weight of segments incident to p that are non-lefties from p is at most 6.
The proof of the lemma is split into a few cases.
* Case 1: There exists a segment pa in D which is also a convex hull in C(𝒮) (thus has weight 3) and pa is a non-leftie from p.
In this case, for any q where pq is a non-leftie from p, we know that 0≤ |apq∠|< 180, as otherwise the convex hull pa would witness that pq is a leftie. In addition, this angle cannot have size 0, since then either 3 vertices would be collinear, or the convex hull pa would be contained in another convex hull from C(𝒮), both of which are forbidden. That is, q is left of pa.
Thus, we cannot have another weight 3 non-leftie segment from p or two non-leftie segments from p that are on the boundary of the same convex hull, because that hull would then witness pa being a leftie from p. So for each non-leftie segment incident to p besides pa there must be another leftie segment which is part of the boundary of the same convex hull.
From now on, we think of these convex hulls as if they were the wedges they define.
To rephrase what we have shown already, each non-leftie segment at p is to the left of pa, while the other segment incident to p on the boundary of the same convex hull is not to the left of pa; thus the corresponding wedge contains a non-zero part of the line defined by pa. We also know that no three wedges can overlap, by the definition of a convex hull thrackle. Hence at most one of these wedges can contain the halfline pa, and at most two can contain the complementary halfline of the line defined by pa. This gives at most 3 wedges that can contribute weight 1 to the non-leftie segments incident to p. Together with pa, this is at most weight 6 of segments, as required, finishing this case. See Figure <ref>.
From now on we can and will assume that there is no segment incident to p in D which is also a convex hull in C(𝒮) and is a non-leftie from p.
* Case 2: There is a wedge apb∠ whose both sides are non-lefties from p.
Denote by a' and b' the reflections of a and b to p, respectively. Note that it is impossible for a convex angle at p (thus also a wedge) to contain more than 2 of {pa,pb,pa',pb'}. Moreover, using that pa and pb are non-lefties, it is easy to see that every wedge w that has a non-leftie side fits into one of the following Types (listing also its contribution to the sum of weights of non-leftie segments of D at p):
* w contains pa but not pb'. Only the left side of w can be a non-leftie. Contributes weight ≤ 2.
* w contains pb' but not pa. Contributes weight ≤ 1.
* w contains both pb and pa'. Contributes weight ≤ 2.
* w contains both pa and pb'. Contributes weight ≤ 2.
We cannot have more than one Type 4 wedge (as these and ∠ apb all contain pa).
* Case 2(a): There is no Type 4 wedge at p.
In this case, we can have at most one Type 1 wedge, at most two Type 2, and at most one Type 3 wedge. Their maximal weight contribution together is thus 6. If the Type 1 or Type 3 wedge does not exist then together with the contribution of ∠ apb we have a maximum sum of weight of 6.
Otherwise, notice that the Type 1 and Type 3 wedges must be disjoint apart from point p. Then by Observation <ref> their left sides cannot be both non-lefties and so at least an additional weight of 1 is lost.
Thus, to have a total weight of 7, all the Type 2 wedges must exist too.
Let the left side of the Type 1 wedge be pc, and the left side of the Type 3 wedge be pd. Exactly one of them is a leftie.
The Type 3 wedge must be disjoint apart from p from one of the Type 2 wedges, as they cannot both contain pd. Since they are Type 2, they do not contain pa either, so the disjoint wedge must witness pd is a leftie.
So pc must be a non-leftie. So the Type 3 wedge must contain pc'. Then there is a Type 2 wedge that does not contain pc'. But then the Type 1 wedge witnesses that the right side of this Type 2 wedge is a leftie. So another weight of non-leftie segments must be lost, thus the weight contribution of all non-leftie segments is at most 4 excluding ∠ apb, which together with the contribution of ∠ apb gives a maximum sum of weight of 6.
* Case 2(b): There is one Type 4 wedge at p.
In this case, we have one Type 4, at most one Type 2, at most one Type 3, and no Type 1 wedges. Their maximal weight contribution together is thus 5. If any of them does not exist then together with the contribution of ∠ apb we have a maximum sum of weight of 6.
Otherwise, if the Type 4 and Type 3 wedges are disjoint apart from p, then again by Observation <ref>, their left sides cannot be both non-lefties and so at least an additional weight of 1 is lost.
Otherwise, the Type 3 and Type 4 wedges must have at least a common segment. Then the Type 2 wedge and the Type 3 wedge must be disjoint apart from p as there are no triple intersections. However, the Type 2 wedge does not contain pa and thus it must witness that the left side of the Type 3 wedge is a leftie and so at least an additional weight of 1 is again lost.
Thus the weight contribution of all non-leftie segments is always at most 4 excluding ∠ apb, which together with the contribution of ∠ apb gives a maximum sum of weight of 6.
* Case 3: Every wedge has a leftie side. In addition to that, there is a wedge whose left side is a leftie and whose right side is a non-leftie.
Let apb∠ be a wedge such that pb is a leftie from p, and pa is a non-leftie from p. Denote by a' the reflection of a to p.
Since pa is a non-leftie, every other wedge at p must either be contained entirely in a'pa∠, or must contain pa' or pa.
At most two such wedges can contain pa', and at most one wedge can contain pa. If there are at most two wedges entirely in a'pa∠ with a non-leftie side then the sum of the weight of all wedges at p is at most 6, as required. Otherwise, there are at least 3 wedges entirely in a'pa∠ having a non-leftie side. Notice that among these there must be two wedges that are disjoint apart from p (as there are no triple intersections apart from p). Since these two wedges are contained in an angle ≤ 180, the wedge on the left witnesses that both sides of the one on the right are lefties, a contradiction. See Figure <ref>.
* Case 4:
All wedges with a non-leftie side have their non-leftie side on the left side.
In this case by Observation <ref> we cannot have 2 disjoint wedges with a non-leftie side at p. As among any 4 wedges at p, two must be disjoint (as there are no triple intersections apart from p), we can have at most 3 wedges at p, thus their weight is at most 3<6, as required.
We have exhausted all cases and proved that in each case the sum of the weight of the wedges at p is at most 6, as required.
This finishes the proof of Theorem <ref>.
§ VARIATIONS ON THE DEFINITION
We briefly investigate the necessity of the assumptions in the definition of convex hull thrackles. Allowing C_1∩ C_2=∅ makes a straight-line drawing of the complete graph allowed, having n(n-1)/2 segments. Not requiring C_1∩ C_2∩ C_3⊂ P, the following construction shows that we can have (n/3)^(3-o(1)) convex hulls. Take n points in convex position, split them into 3 equal intervals, and for each triple of points, one from each interval, take its convex hull.
Regarding the assumption that there is no containment between the convex hulls, allowing C_1⊂ C_2 we can get ⌊3/2(n-1)⌋ convex hulls, as observed by Gossett <cit.>, who made this remark in order to disprove the version of Conjecture <ref> that originally appeared in <cit.>, in which containments were allowed. Gossett's construction is the following. Take n points in convex position, connect one of them, p, to all others by segments. Add also every second triangle incident to p determined by segments consecutive in their linear order around p. See Figure <ref>. On the other hand, our proof works in this case too except for Case 1, in which case instead of sum of weight at most 6 we can only guarantee a sum of weight at most 7. Thus, when we allow containments, this gives the upper bound 7n/3 on the number of convex hulls.
Furthermore, if alongside containments we also allow that the family of convex hulls is a multiset (i.e., the same convex hull can appear multiple times in our family of convex hulls), then doubling the segments of a star we can have at least 2n-2 convex hulls, again even if the points are in convex position. On the other hand in Case 1 we can guarantee a sum of weight at most 8, which gives the upper bound 8n/3.
Finally, what if we do not assume that the points are in general position, i.e., there can be 3 or more points on a line? We still do not have a better lower bound than n+1. On the other hand, we can again modify the proof of Lemma <ref> to work for points in non-general position. First, when defining the boundary diagram we need to use maximum length sides, i.e., even if a point lies on a side of some convex hull, in the diagram there is only a single segment associated with this side. With this modification the same proof works, except that in Case 1 we yet again lose a bit, we can only guarantee a sum of weight at most 7, which gives the upper bound 7n/3. If we allow both points in non-general position, containments and multisets then we can still guarantee a sum of weight at most 8, which gives the upper bound 8n/3.
The details of these various modifications to Case 1 of the proof of Lemma <ref> are left to the interested reader.
|
http://arxiv.org/abs/2307.00527v1
|
20230702093843
|
Graph Neural Network based Log Anomaly Detection and Explanation
|
[
"Zhong Li",
"Jiayang Shi",
"Matthijs van Leeuwen"
] |
cs.SE
|
[
"cs.SE",
"cs.AI",
"cs.LG"
] |
Zhong Li, LIACS, Leiden University, Leiden, the Netherlands (ORCID 0000-0003-1124-5778; z.li@liacs.leidenuniv.nl)
Jiayang Shi, LIACS, Leiden University, Leiden, the Netherlands (ORCID 0000-0002-7014-0805; j.shi@liacs.leidenuniv.nl)
Matthijs van Leeuwen, LIACS, Leiden University, Leiden, the Netherlands (ORCID 0000-0002-0510-3549; m.van.leeuwen@liacs.leidenuniv.nl)
Event logs are widely used to record the status of high-tech systems, making log anomaly detection important for monitoring those systems. Most existing log anomaly detection methods take a log event count matrix or log event sequences as input, exploiting quantitative and/or sequential relationships between log events to detect anomalies. Unfortunately, only considering quantitative or sequential relationships may result in many false positives and/or false negatives. To alleviate this problem, we propose a graph-based method for unsupervised log anomaly detection, dubbed Logs2Graphs, which first converts event logs into attributed, directed, and weighted graphs, and then leverages graph neural networks to perform graph-level anomaly detection. Specifically, we introduce One-Class Digraph Inception Convolutional Networks, abbreviated as OCDiGCN, a novel graph neural network model for detecting graph-level anomalies in a collection of attributed, directed, and weighted graphs. By coupling the graph representation and anomaly detection steps, OCDiGCN can learn a representation that is especially suited for anomaly detection, resulting in high detection accuracy. Importantly, for each identified anomaly, we additionally provide a small subset of nodes that play a crucial role in OCDiGCN's prediction as explanations, which can offer valuable cues for subsequent root cause diagnosis. Experiments on five benchmark datasets show that Logs2Graphs performs at least on par with state-of-the-art log anomaly detection methods on simple datasets, while largely outperforming them on complicated datasets.
Graph Neural Network based Log Anomaly Detection and Explanation
Matthijs van Leeuwen
August 1, 2023
================================================================
§ INTRODUCTION
Modern high-tech systems, such as cloud computers or lithography machines, typically consist of a large number of components. Over time these systems have become larger and more complex, making manual system operation and maintenance hard or even infeasible. Therefore, automated system operation and maintenance is highly desirable. To achieve this, system logs are universally used to record system states and important events. By analysing these logs, faults and potential risks can be identified, and remedial actions may be taken to prevent severe problems. System logs are usually semi-structured texts though, and identifying anomalies through log anomaly detection is often challenging.
Since both industry and academia show great interest in identifying anomalies from logs, a plethora of log anomaly detection methods have been proposed. Existing log anomaly detection methods can be roughly divided into three categories: quantitative-based, sequence-based, and graph-based methods. Specifically, quantitative-based methods, such as OCSVM <cit.> and PCA <cit.>, utilise a log event count matrix to detect anomalies, and are therefore unable to capture semantic information of and sequential information between log events. Meanwhile, sequence-based methods, including DeepLog <cit.> and LogAnomaly <cit.>, aim to detect anomalies by taking sequential (and sometimes semantic) information into account. They cannot consider the full structure among log events though. In contrast, graph-based methods, such as GLAD-PAW <cit.>, convert logs to graphs and exploit semantic information as well as the structure among log events, exhibiting the following three advantages over the former two categories of methods <cit.>: 1) they are able to identify problems for which the structure among events is crucial, such as performance degradation; 2) they are capable of providing contextual log messages corresponding to the identified problems; and 3) they can provide the `normal' operation process in the form of a graph, helping end-users find root-causes and take remedial actions. Existing graph-based methods, such as GLAD-PAW, typically transform log events into undirected graphs though, which may fail to capture important information on the order among log events. Moreover, most existing graph-based methods perform graph representation and anomaly detection separately, leading to suboptimal detection accuracy.
Moreover, as highlighted by Li et al. <cit.>, with the growing adoption of anomaly detection algorithms in safety-critical domains, there is a rising demand for providing explanations for the decisions made within those domains. This requirement, driven by ethical considerations and regulatory mandates, underscores the importance of accountability and transparency in such contexts. Furthermore, in practical applications, precise anomaly explanations contribute greatly to the timely isolation and diagnosis of anomalies, which can mitigate the impact of anomalies by facilitating early intervention <cit.>. However, to our knowledge, most existing log anomaly detection methods focus exclusively on detection performance without giving any explanations.
To overcome these limitations, we propose Logs2Graphs, a graph-based unsupervised log anomaly detection approach by designing a novel one-class graph neural network. Specifically, Logs2Graphs first utilises off-the-shelf methods to learn a semantic embedding for each log event, and then assigns log messages to different groups. Second, Logs2Graphs converts each group of log messages into an attributed, directed, and weighted graph, with each node representing a log event, the node attributes containing its semantic embedding, a directed edge representing how an event is followed by another event, and the corresponding edge weight indicating the number of times the events follow each other. Third, by coupling the graph representation learning and anomaly detection objectives, we introduce One-Class Digraph Inception Convolutional Networks (OCDiGCN) as a novel method to detect anomalous graphs from a set of graphs. As a result, Logs2Graphs leverages the rich and expressive power of attributed, directed and edge-weighted graphs to represent logs, followed by using graph neural networks to effectively detect graph-level anomalies, taking into account both semantic information of log events and structure information (including sequential information as a special case) among log events. Importantly, by decomposing the anomaly score of a graph into individual nodes and visualizing these nodes based on their contributions, we provide straightforward and understandable explanations for identified anomalies.
Overall, our contributions can be summarised as follows: (1) We introduce Logs2Graphs, which formalises log anomaly detection as a graph-level anomaly detection problem and represents log sequences as directed graphs to capture more structure information than previous approaches; (2) We introduce OCDiGCN, the first end-to-end unsupervised graph-level anomaly detection method for attributed, directed and edge-weighted graphs. By coupling the graph representation and anomaly detection objectives, we improve the potential for accurate anomaly detection over existing approaches; (3) For each detected anomaly, we identify important nodes as explanations, offering valuable cues for subsequent root cause diagnosis; (4) We empirically compare our approach to eight state-of-the-art log anomaly detection methods on five benchmark datasets, showing that Logs2Graphs performs at least on par and often better than its competitors.
The remainder of this paper is organised as follows. Section 2 revisits related work, after which Section 3 formalises the problem. Section 4 describes Digraph Inception Convolutional Networks <cit.>, which are used for Logs2Graphs in Section 5. We then evaluate Logs2Graphs in Section 6 and conclude in Section 7.
§ RELATED WORK
Graph-based log anomaly detection methods usually comprise five steps: log parsing, log grouping, graph construction, graph representation learning, and anomaly detection. In this paper we focus on graph representation learning, log anomaly detection and explanation, thus only revisiting related work in these fields.
§.§ Graph Representation Learning
Graph-level representation learning methods, such as GIN <cit.> and Graph2Vec <cit.>, are able to learn a mapping from graphs to vectors. Further, graph kernel methods, including Weisfeiler-Lehman (WL) <cit.> and Propagation Kernels (PK) <cit.>, can directly provide pairwise distances between graphs. Both types of methods can be combined with off-the-shelf anomaly detectors, such as OCSVM <cit.> and iForest <cit.>, to perform graph-level anomaly detection.
To improve on these naïve approaches, efforts have been made to develop graph representation learning methods especially for anomaly detection. For instance, OCGIN <cit.> and GLAM <cit.> combine the GIN <cit.> representation learning objective with the SVDD objective <cit.> to perform graph-level representation learning and anomaly detection in an end-to-end manner. GLocalKD <cit.> performs random distillation of graph and node representations to learn `normal' graph patterns. Further, OCGTL <cit.> combines neural transformation learning and one-class classification to learn graph representations for anomaly detection. Although these methods are unsupervised or semi-supervised, they can only deal with attributed, undirected, and unweighted graphs.
iGAD <cit.> considers graph-level anomaly detection as a graph classification problem and combines attribute-aware graph convolution and substructure-aware deep random walks to learn graph representations. However, iGAD is a supervised method, and can only handle attributed, undirected, and unweighted graphs. CODEtect <cit.> takes a pattern-based modelling approach using the minimum description length (MDL) principle and identifies anomalous graphs based on motifs. CODEtect can (only) deal with labelled, directed, and edge-weighted graphs, but is computationally very expensive. To our knowledge, we introduce the first unsupervised method for graph-level anomaly detection that can handle attributed, directed and edge-weighted graphs.
§.§ Log Anomaly Detection and Explanation
Log anomaly detection methods can be roughly subdivided into three categories: 1) traditional, `shallow' methods, such as principal component analysis (PCA) <cit.>, one-class SVM (OCSVM) <cit.>, isolation forest (iForest) <cit.> and histogram-based outlier score (HBOS) <cit.>, which take a log event count matrix as input and analyse quantitative relationships; 2) deep learning based methods, such as DeepLog <cit.>, LogAnomaly <cit.>, and AutoEncoder <cit.>, which employ sequences of log events (and sometimes their semantic embeddings) as input, analysing sequential information and possibly semantic information of log events to identify anomalies; and 3) graph-based methods, such as TCFG <cit.> and GLAD-PAW <cit.>, which first convert logs into graphs and then perform graph-level anomaly detection.
To our knowledge, only a few works <cit.> have capitalised on the powerful learning capabilities of graph neural networks for log anomaly detection. GLAD-PAW <cit.> first transforms logs into attributed and undirected graphs and then uses a Position Aware Weighted Graph Attention Network to identify anomalies. However, converting logs into undirected graphs may result in loss of important sequential information. Further, DeepTraLog <cit.> combines traces and logs to generate a so-called Trace Event Graph, which is an attributed and directed graph. On this basis, they train a Gated Graph Neural Networks based Deep Support Vector Data Description model to identify anomalies. However, their approach requires the availability of both traces and logs, and is unable to handle edge weights. In contrast, like LogGD <cit.>, our proposed Logs2Graphs approach is applicable to generic logs by converting logs into attributed, directed, and edge-weighted graphs. However, LogGD is a supervised method that requires fully labelled training data, which is usually impractical or even impossible to obtain. In contrast, our proposed algorithm OCDiGCN is the first unsupervised graph-level anomaly detection method for attributed, directed, and edge-weighted graphs.
Although anomaly explanation has received much attention in traditional anomaly detection <cit.>, only a few studies <cit.> considered log anomaly explanation. Specifically, PLELog <cit.> offers explanations by quantifying the significance of individual log events within an anomalous log sequence, thereby facilitating improved identification of relevant log events by operators. Similarly, our method provides straightforward explanations for anomalous log groups by identifying and visualising a small subset of important nodes.
§ PROBLEM STATEMENT
Before we state the log anomaly detection problem, we first introduce the necessary notations and definitions regarding event logs and graphs.
Event logs. Logs are used to record system status and important events, and are usually collected and stored centrally as log files. A log file typically consists of many log messages. Each log message is composed of three components: a timestamp, an event type (log event or log template), and additional information (log parameter). Log parsers are used to extract log events from log messages.
Further, log messages can be grouped into log groups (a.k.a. log sequences) using certain criteria. Specifically, if a log identifier is available for each log message, one can group log messages based on such identifiers. Otherwise, one can use a fixed or sliding window to group log messages. The window size can be determined according to timestamp or the number of observations.
Besides, counting the occurrences of each log event within a log group results in an event count vector. Consequently, for a log file consisting of many log groups, one can obtain an event count matrix. The process of generating an event count matrix (or other feature matrix) is known as feature extraction. Extracted features are often used as input to an anomaly detection algorithm to identify log anomalies, i.e., log messages or log groups that deviate from what is considered `normal'.
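As a small illustration of this feature extraction step, the sketch below builds an event count matrix from grouped, parsed logs; the event ids are assumed to come from the log parser.

from collections import Counter

def event_count_matrix(groups, events):
    # groups: list of log groups, each a list of event ids in order of occurrence;
    # events: the ordered vocabulary of unique event ids.
    index = {e: i for i, e in enumerate(events)}
    matrix = []
    for group in groups:
        row = [0] * len(events)
        for event, count in Counter(group).items():
            row[index[event]] = count
        matrix.append(row)
    return matrix

groups = [["open", "write", "close"], ["open", "write", "write"]]
print(event_count_matrix(groups, ["open", "write", "close"]))
# [[1, 1, 1], [1, 2, 0]] -- the second group never closes the file it opened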
Graphs. We consider an attributed, directed, and edge-weighted graph 𝒢 = (𝒱, ℰ, X, Y), where 𝒱 = {v_1,...,v_|𝒱|} denotes the set of nodes and ℰ = {e_1,...,e_|ℰ|}⊆𝒱×𝒱 represents the set of edges between nodes. If (v_i,v_j)∈ℰ, then there is an edge from node v_i to node v_j. Moreover, X∈ℝ^|𝒱|× d is the node attribute matrix, with the i-th row representing the attributes of node v_i, and d is the number of attributes. Besides, Y∈ℕ^|𝒱|× |𝒱| is the edge-weight matrix, where Y_ij represents the weight of the edge from node v_i to v_j.
Equivalently, 𝒢 can be described as (A, X, Y), with adjacency matrix A∈ℝ^|𝒱|×|𝒱|, where A_ij = 𝕀[(v_i,v_j) ∈ℰ] indicates whether there is an edge from node v_i to node v_j, for i,j ∈{1,...,|𝒱|}.
§.§ Graph-based Log Anomaly Detection
Given a set of log files, we let ℒ = {L_1,...,L_|ℒ|} denote the set of unique log events. We divide the log messages into M log groups Q = {q_1,...,q_m,...,q_M}, where q_m={q_m1,...,q_mn,...,q_mN} is a log group and q_mn a log message.
For each log group q_m, we construct an attributed, directed, and edge-weighted graph 𝒢_m = (𝒱_m, ℰ_m, X_m, Y_m) to represent the log messages and their relationships. Specifically, each node v_i∈𝒱_m corresponds to exactly one log event L ∈ℒ (and vice versa). Further, an edge e_ij∈ℰ_m indicates that log event i is at least once immediately followed by log event j in q_m. Attributes x_i∈X_m represent the semantic embedding of log event i, and y_ij∈Y_m is the weight of edge e_ij, representing the number of times event i was immediately followed by event j. In this manner, we construct a set of log graphs {𝒢_1,...,𝒢_m,...,𝒢_M}.
We can use these definitions to define graph-level anomaly detection:
Problem 1 (Graph-based Log Anomaly Detection). Given a set of attributed, directed, and weighted graphs that represent logs, find those graphs that are notably different from the majority of graphs.
What we mean by `notably different' will have to be made more specific when we define our method, but we can already discuss what types of anomalies can potentially be detected. Most methods aim to detect two types of anomalies:
* A log group (namely a graph) is considered a quantitative anomaly if the occurrence frequencies of some events in the group are higher or lower than expected from what is commonly observed. For example, if a file is opened (event A) twice, it should normally also be closed (event B) twice. In other words, the number of event occurrences #A = #B in a normal pattern and an anomaly is detected if #A ≠#B.
* A log group (namely a graph) is considered to contain sequential anomalies if the order of certain events violates the normal order pattern. For instance, a file can be closed only after it has been opened in a normal workflow. In other words, the order of event occurrences A → B is considered normal while B → A is considered anomalous.
An advantage of graph-based anomaly detection is that it can detect these two types of anomalies, but also anomalies reflected in the structure of the graphs. Moreover, no existing unsupervised log anomaly detection approach represents event logs as attributed, directed, and weighted graphs, which allow for even higher expressiveness than undirected graphs (thus limiting the information loss resulting from the representation of the log files as graphs).
§ PRELIMINARIES: DIGRAPH INCEPTION CONVOLUTIONAL NETS
To learn node representations for attributed, directed, and edge-weighted graphs, <cit.> proposed Digraph Inception Convolutional Networks (DiGCN).
Specifically, given a graph 𝒢 described by an adjacency matrix A∈ℝ^|𝒱|×|𝒱|, a node attribute matrix X∈ℝ^|𝒱|× d, and an edge-weight matrix Y∈ℝ^|𝒱|×|𝒱|, DiGCN defines the k-th order digraph convolution as

Z^(k) =
  XΘ^(0)     if k = 0,
  ΨXΘ^(1)    if k = 1,
  ΦXΘ^(k)    if k ≥ 2,

where Ψ = 1/2((Π^(1))^1/2 P^(1) (Π^(1))^-1/2 + (Π^(1))^-1/2 (P^(1))^T (Π^(1))^1/2) and Φ = (W^(k))^-1/2 P^(k) (W^(k))^-1/2. Here, Z^(k)∈ℝ^|𝒱|× f denotes the convolved output with output dimension f, and Θ^(0),Θ^(1),…,Θ^(k) represent the trainable parameter matrices.
Moreover, P^(k) is the k-th order proximity matrix, defined as

P^(k) =
  I                                      if k = 0,
  D̃^-1 Ã                                if k = 1,
  Ins((P^(1))^(k-1) ((P^(1))^T)^(k-1))   if k ≥ 2,

where I∈ℝ^|𝒱|×|𝒱| is the identity matrix, Ã = A + I, and D̃ denotes the diagonal degree matrix with D̃_ii = ∑_j Ã_ij. Here, Ins((P^(1))^(k-1) ((P^(1))^T)^(k-1)) is defined as 1/2 Intersect((P^(1))^(k-1) ((P^(1))^T)^(k-1), ((P^(1))^T)^(k-1) (P^(1))^(k-1)), with Intersect(·) denoting the element-wise intersection of two matrices (see <cit.> for computation details). In addition, W^(k) is the diagonalized weight matrix of P^(k), and Π^(1) is the approximate diagonalized eigenvector of P^(1). In particular, the approximate diagonalized eigenvector is calculated based on personalised PageRank <cit.>, with a parameter α that controls the degree of conversion from a digraph to an undirected graph. We omit the details to conserve space and refer to <cit.> for more details.
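To clarify the notation, here is a NumPy sketch of the first-order quantities; the (approximate) stationary distribution π is assumed to be given (in DiGCN it comes from personalised PageRank), and the k ≥ 2 intersection operator is omitted.

import numpy as np

def first_order_proximity(A):
    # P^(1) = D̃^{-1} Ã with Ã = A + I: row-normalised adjacency with self-loops.
    A_tilde = A + np.eye(A.shape[0])
    return A_tilde / A_tilde.sum(axis=1, keepdims=True)

def digraph_operator(P1, pi):
    # Ψ = 1/2 (Π^{1/2} P^(1) Π^{-1/2} + Π^{-1/2} (P^(1))^T Π^{1/2}), Π = diag(π).
    Pi_h = np.diag(np.sqrt(pi))
    Pi_ih = np.diag(1.0 / np.sqrt(pi))
    return 0.5 * (Pi_h @ P1 @ Pi_ih + Pi_ih @ P1.T @ Pi_h)

A = np.array([[0., 1., 0.], [0., 0., 1.], [1., 0., 0.]])  # a directed 3-cycle
Psi = digraph_operator(first_order_proximity(A), np.full(3, 1 / 3))
print(np.allclose(Psi, Psi.T))  # True: Ψ is symmetric by construction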
After obtaining the multi-scale features {Z^(0),Z^(1),...,Z^(k)}, DiGCN defines an Inception block as
Z = σ(Γ(Z^(0),Z^(1),...,Z^(k))),
where σ represents an activation function, and Γ(·) denotes a fusion operation, which can be summation, normalisation, or concatenation. In practice, we often adopt a fusion operation that keeps the output dimension unchanged, namely Z∈ℝ^|𝒱|× f. As a result, the i-th row of Z (namely Z_i) denotes the learned vector representation of node v_i in a certain layer.
§ GRAPH-BASED ANOMALY DETECTION FOR EVENT LOGS
We propose Logs2Graphs, a graph-based log anomaly detection method tailored to event logs. The overall pipeline consists of the usual main steps, i.e., log parsing, log grouping, graph construction, graph representation learning, and anomaly detection, and is illustrated in Figure <ref>. Note that we couple the graph representation learning and anomaly detection steps to accomplish end-to-end learning once the graphs have been constructed.
First, after collecting logs from a system, the log parsing step extracts log events and log parameters from raw log messages. Since log parsing is not the primary focus of this article, we use Drain <cit.> for this task. Drain is a log parsing technique with fixed depth tree, and has been shown to generally outperform its competitors <cit.>. We make the following assumptions on the log files:
* Log files are written in English;
* Each log message contains at least the following information: date, time, operation detail, and log identifier;
* The logs contain enough events to make the mined relationships (quantitative, sequential, structural) statistically meaningful, i.e., it must be possible to learn from the logs what the `normal' behaviour of the system is.
Second, the log grouping step uses the log identifiers to divide the parsed log messages into log groups. Third, for each resulting group of log messages, the graph construction steps builds an attributed, directed, and edge-weighted graph, as described in more detail in Subsection 5.1. Fourth and last, in an integrated step for graph representation learning and anomaly detection, we learn a One-Class Digraph Inception Convolutional Network (OCDiGCN) based on the obtained set of log graphs. The resulting model can be used for graph-level anomaly detection. This model couples the graph representation learning objective and anomaly detection objective, and is thus trained in an end-to-end manner. The model, its training, and its use for graph-level anomaly detection are explained in detail in Subsection 5.2.
§.§ Graph Construction
We next explain how to construct an attributed, directed, and edge-weighted graph given a group of parsed log messages; this is illustrated in Figure <ref>. The motivation behind this graph construction is to retain everything relevant in the log data.
First, we construct nodes to represent the different log events. That is, the number of nodes depends on the number of unique log events that occur in the log group. Second, starting from the first log message in chronological order, we add a directed edge from log event L_i to L_j and set its edge-weight to 1 if the next event after L_i is L_j. If the corresponding edge already exists, we increase its edge-weight by 1. In this manner, we obtain a labelled, directed, and edge-weighted graph.
However, using only the labels (e.g., open or write) of log events for graph construction may lead to missing important information. That is, we can improve on this by explicitly taking the semantic information of log events into account, by which we mean that we should look at the text of each log event in its entirety. Specifically, we generate a vector representation for each log event as follows:
* Preprocessing: for each log event, we first remove non-character words and stop words, and split compound words into separate words;
* Word embedding: we use GloVe <cit.>, a pre-trained word embedding model with 200 embedding dimensions, to generate a vector representation for each English word in a log event;
* Sentence embedding: we generate a vector representation for each log event. Considering that the words in a sentence are usually not of equal importance, we use Term Frequency-Inverse Document Frequency (TF-IDF) <cit.> to measure the importance of words. As a result, the weighted sum of word embedding vectors composes the vector representation of a log event (a code sketch of this step follows below).
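A minimal sketch of this embedding step, assuming glove is a dictionary from word to 200-dimensional vector (e.g., loaded from the pre-trained GloVe files) and using scikit-learn for the TF-IDF weights:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def template_embeddings(templates, glove, dim=200):
    # TF-IDF-weighted sum of word vectors per preprocessed log event template.
    tfidf = TfidfVectorizer()
    weights = tfidf.fit_transform(templates)       # (num templates, vocab size)
    vocab = tfidf.get_feature_names_out()
    vectors = np.zeros((len(templates), dim))
    for i in range(len(templates)):
        row = weights[i].toarray().ravel()
        for j, word in enumerate(vocab):
            if row[j] > 0 and word in glove:       # skip out-of-vocabulary words
                vectors[i] += row[j] * glove[word]
    return vectors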
By augmenting the nodes with the vector representations of the log events as attributes, we obtain an attributed, directed, and edge-weighted graph.
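Putting the pieces together, one log group can be turned into the graph ingredients (𝒱, X, Y) roughly as follows; the dictionary-based representation is an illustrative choice, not the paper's implementation.

def build_log_graph(event_sequence, embeddings):
    # event_sequence: event ids of one log group, in chronological order;
    # embeddings: event id -> semantic vector (the node attributes X).
    nodes = sorted(set(event_sequence))            # one node per unique event
    X = {e: embeddings[e] for e in nodes}
    Y = {}                                         # (e_i, e_j) -> edge weight
    for a, b in zip(event_sequence, event_sequence[1:]):
        Y[(a, b)] = Y.get((a, b), 0) + 1           # count consecutive pairs
    return nodes, X, Y

nodes, X, Y = build_log_graph(["open", "write", "write", "close"],
                              {"open": [0.1], "write": [0.2], "close": [0.3]})
print(Y)  # {('open', 'write'): 1, ('write', 'write'): 1, ('write', 'close'): 1}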
§.§ OCDiGCN: One-Class Digraph Inception Convolutional Nets
We next describe One-Class Digraph Inception Convolutional Networks, abbreviated as OCDiGCN, a novel method for end-to-end graph-level anomaly detection. We chose to build on Digraph Inception Convolutional Networks (DiGCN) <cit.> for their capability to handle directed graphs, which we argued previously is an advantage in graph-based log anomaly detection.
Considering that DiGCN was designed for node representation learning, we repurpose it for graph representation learning as follows:
z = Readout(Z_i| i∈{1,2,...,|𝒱|}).
That is, at the final iteration layer, we utilise a so-called Readout(·) function to aggregate node vector representations to obtain a graph vector representation. Importantly, Readout(·) can be a simple permutation-invariant function such as maximum, sum or mean, or a more advanced graph-level pooling function <cit.>.
Next, note that the DiGCN work did not explicitly enable learning edge features (i.e., Y). However, as DiGCN follows the Message Passing Neural Network (MPNN) framework <cit.>, incorporating Y into Equation (1) and conducting the computations in Equations (2-4) analogously enables learning edge features.
Now, given a set of graphs {𝒢_1,...,𝒢_m,...,𝒢_M}, we can use Equation (<ref>) to obtain an explicit vector representation for each graph, respectively. We denote the vector presentation of 𝒢_m learned by the DiGCN model as DiGCN(𝒢_m;ℋ).
In graph anomaly detection, anomalies are typically identified based on a reconstruction or distance loss <cit.>. In particular, the One-Class Deep SVDD objective <cit.> is commonly used for two reasons: it can be easily combined with other neural networks, and more importantly, it generally achieves a state-of-the-art performance <cit.>. Therefore, to detect anomalies, we train a one-class classifier by optimising the following One-Class Deep SVDD objective:
min_ℋ1/M∑_m=1^M‖DiGCN(𝒢_m;ℋ)-o‖_2^2+λ/2∑_l=1^L‖H^(l)‖_F^2,
where H^(l) represents the trainable parameters of DiGCN at the l-th layer, namely (Θ^(0)(l),Θ^(1)(l),...,Θ^(k)(l))^T, ℋ denotes {H^(1),...,H^(L)}, λ>0 represents the weight-decay hyperparameter, ‖·‖_2 is the Euclidean norm, and ‖·‖_F denotes the Frobenius norm. Moreover, o is the center of the hypersphere in the learned representation space. Ruff et al. <cit.> found empirically that setting o to the average of the network representations (i.e., graph representations in our case) obtained by performing an initial forward pass is a good strategy.
Ruff et al. <cit.> also pointed out, however, that One-Class Deep SVDD classification may suffer from a hypersphere collapse, which will yield trivial solutions, namely mapping all graphs to a fixed center in the representation space. To avoid a hypersphere collapse, the hypersphere center o is set to the average of the network representations, the bias terms in the neural networks are removed, and unbounded activation functions such as ReLU are preferred.
After training the model on a set of non-anomalous graphs (or with a very low proportion of anomalies), given a test graph 𝒢_m, we define its distance to the center in the representation space as its anomaly score, namely
score(𝒢_m) = ‖DiGCN(𝒢_m;ℋ)-o‖_2.
Training and hyperparameters: In summary, OCDiGCN is composed of an L-layer DiGCN architecture to learn node representations, plus a Readout(·) function to obtain the graph representation. It is trained in an end-to-end manner by optimising the SVDD objective, which can be done using stochastic optimisation techniques such as Adam <cit.>. Overall, OCDiGCN takes a collection of non-anomalous graphs and a set of hyperparameters, outlined in Table <ref>, as inputs. The pseudo-code for Logs2Graphs is given in Algorithm <ref>.
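A minimal sketch of this training and scoring procedure is given below. It assumes a `model` that maps a graph to its vector representation DiGCN(𝒢; ℋ) (an L-layer DiGCN followed by the readout), initialises the center o from an initial forward pass, and uses Adam's weight-decay option to stand in for the Frobenius-norm term of the objective; all names are ours.

```python
import torch

@torch.no_grad()
def init_center(model, graphs):
    # Ruff et al.'s strategy: o = average of the initial graph representations.
    return torch.stack([model(g) for g in graphs]).mean(dim=0)

def train_ocdigcn(model, graphs, epochs=100, lr=1e-3, weight_decay=1e-4):
    # weight_decay approximates the (lambda/2) * ||H||_F^2 regulariser.
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    center = init_center(model, graphs)
    for _ in range(epochs):
        for g in graphs:  # training set: non-anomalous graphs only
            opt.zero_grad()
            loss = ((model(g) - center) ** 2).sum()  # squared distance to o
            loss.backward()
            opt.step()
    return center

def anomaly_score(model, g, center):
    # Distance to the center in representation space.
    with torch.no_grad():
        return torch.norm(model(g) - center, p=2).item()
```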
§.§ Anomaly Explanation
Our anomaly explanation method can be regarded as a decomposition method <cit.>: we build a score decomposition rule to distribute the predicted anomaly score over the input space. Concretely, a graph 𝒢_m is identified as anomalous if and only if its graph-level representation has a large distance to the hypersphere center (Equation <ref>). Further, the graph-level representation is obtained via a Readout(·) function applied to the node-level representations (Equation <ref>). Therefore, if the Readout(·) function is attributable (such as sum or mean), we can easily obtain a small subset of important nodes (in the penultimate layer) whose node embeddings contribute the most to the distance. Specifically, the importance score of node v_j (in the penultimate layer) in a graph 𝒢_m is defined as:
|score(𝒢_m) - score(𝒢_m/{Z_j})|/ score(𝒢_m)
where score(𝒢_m) is defined in Equation <ref> and score(𝒢_m/{Z_j}) is the anomaly score by removing the embedding vector of v_j (namely Z_j) when applying Readout function to obtain the graph-level representation.
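A sketch of this leave-one-out computation, assuming a mean readout and penultimate-layer node embeddings Z as above (names ours):

```python
import torch

def node_importance(Z, center, readout=lambda Z: Z.mean(dim=0)):
    # Z: (num_nodes, d) node embeddings of one graph; center: the hypersphere
    # center o. Requires an attributable readout such as mean or sum.
    full = torch.norm(readout(Z) - center, p=2)
    scores = []
    for j in range(Z.shape[0]):
        keep = [i for i in range(Z.shape[0]) if i != j]
        reduced = torch.norm(readout(Z[keep]) - center, p=2)  # drop node j
        scores.append((torch.abs(full - reduced) / full).item())
    return scores
```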
Next, for each important node in the penultimate layer, we extend the LRP (Layerwise Relevance Propagation) algorithm <cit.> to obtain a small set of important nodes in the input layer (this is not a contribution of our paper; we simply follow the practice in <cit.>). If some of these nodes are connected by edges, the resulting subgraphs can provide more meaningful explanations. As the LRP method generates explanations utilizing the hidden features and model weights directly, its explanation outcomes are deemed reliable and trustworthy <cit.>.
§ EXPERIMENTS
We perform extensive experiments to answer the following questions:
* Detection accuracy: How effective is Logs2Graphs at identifying log anomalies when compared to state-of-the-art methods?
* Directed vs. undirected graphs: Is the directed log graph representation better than the undirected version for detecting log anomalies?
* Node Labels vs. Node Attributes: How important is it to use semantic embedding of log event template as node attributes?
* Robustness analysis: To what extent is Logs2Graphs robust to contamination in training data?
* Ability to detect structural anomalies: Can Logs2Graphs better capture structural anomalies and identify structurally equivalent normal instances than other contenders?
* Explainability Analysis: How understandable are the anomaly detection results delivered by Logs2Graphs?
* Sensitivity analysis: How do the values of the hyperparameters influence the detection accuracy?
* Runtime analysis: What are the runtimes of the different methods?
§.§ Experiment Setup
§.§.§ Datasets
The five datasets that we use, summarised in Table <ref>, were chosen for three reasons: 1) they are commonly used for the evaluation of log anomaly detection methods; 2) they contain ground truth labels that can be used to calculate evaluation metrics; and 3) they include log identifiers that can be used for partitioning log messages into groups. For each group of log messages in a dataset, we label the group as anomalous if it contains at least one anomaly.
More details are given as follows:
* HDFS <cit.> consists of Hadoop Distributed File System logs obtained by running 200 Amazon EC2 nodes. These logs contain block_id, which can be used to group log events into different groups. Moreover, these logs are manually labeled by Hadoop experts.
* Hadoop <cit.> was collected from a Hadoop cluster consisting of 46 cores over 5 machines. The ContainerID variable is used to divide log messages into different groups.
* BGL, Spirit and Thunderbird contain system logs collected from the BlueGene/L (BGL) supercomputing system,
Spirit supercomputing system, and Thunderbird supercomputing system located at Sandia National Labs, respectively. For those datasets, each log message was manually inspected by engineers and labelled as normal or anomalous. For BGL, we use all log messages and group them based on the Node variable. For Spirit and Thunderbird, we only use the first 1 million and first 5 million log messages for evaluation, respectively. Furthermore, for these two datasets, the User variable is used as the log identifier to group log messages. However, considering that an ordinary user may generate hundreds of thousands of logs, we regard every 100 consecutive logs of each user as a group; if fewer than 100 logs remain, we also treat them as a group (see the sketch below).
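For concreteness, the grouping scheme used for Spirit and Thunderbird can be sketched as follows (a simplified illustration; names are ours):

```python
from collections import defaultdict

def group_logs(records, window=100):
    # records: iterable of (log_identifier, event) pairs in temporal order,
    # e.g. (User, event) for Spirit/Thunderbird.
    per_id = defaultdict(list)
    for ident, event in records:
        per_id[ident].append(event)
    groups = []
    for events in per_id.values():
        for i in range(0, len(events), window):
            groups.append(events[i:i + window])  # a final short chunk also forms a group
    return groups
```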
§.§.§ Baselines
To investigate the performance of Logs2Graphs, we compare it with the following seven state-of-the-art log anomaly detection methods: Principal Component Analysis (PCA) <cit.>, One-Class SVM (OCSVM) <cit.>, Isolation Forest (iForest) <cit.>, HBOS <cit.>, DeepLog <cit.>, LogAnomaly <cit.> and AutoEncoder <cit.>, and one state-of-the-art graph-level anomaly detection method: GLAM <cit.>.
We choose these methods as baselines because they are often regarded as representatives of traditional machine learning-based (PCA, OCSVM, iForest, HBOS) and deep learning-based approaches (DeepLog, LogAnomaly and AutoEncoder), respectively. All methods are unsupervised or semi-supervised and do not require labelled anomalous samples for training.
§.§.§ Evaluation Metrics
The Area Under Receiver Operating Characteristics Curve (AUC ROC) and the Area Under the Precision-Recall Curve (AUC PRC) are widely used to quantify the detection accuracy of anomaly detection. Therefore, we employ both to evaluate and compare the different log anomaly detection methods. AUC PRC is also known as Average Precision (AP). For both AUC ROC and AUC PRC, values closer to 1 indicate better performance.
§.§ Model Implementation and Configuration
Traditional machine learning based approaches—such as PCA, OCSVM, iForest, and HBOS—usually first transform logs into log event count vectors, and then apply traditional anomaly detection techniques to identify anomalies. For these methods, we utilise their open-source implementations provided in PyOD <cit.>. Meanwhile, for deep learning methods DeepLog, LogAnomaly, and AutoEncoder, we use their open-source implementations in Deep-Loglizer <cit.>. For these competing methods, we use their default hyperparameter values.
For all deep learning based methods, the experimental design adopted in this study
follows a train/validation/test strategy with a distribution of 70%:5%:25% for normal instances. Specifically, the model was trained using 70% of the normal instances, while 5% of the normal instances and an equal number of abnormal instances were employed for validation (i.e., hyperparameter tuning). The remaining 25% of normal instances and the remaining abnormal instances were used for testing. Table <ref> summarises the hyperparameters involved in OCDiGCN as well as their recommended values.
We implemented and ran all algorithms in Python 3.8 (using PyTorch <cit.> and PyTorch Geometric <cit.> libraries when applicable), on a computer with Apple M1 chip 8-core CPU and 16GB unified memory. For reproducibility, all code and datasets will be released on GitHub.
§.§ Comparison to the state of the art
We first compare Logs2Graphs to the state of the art. The results are shown in Table <ref>, based on which we make the following main observations:
* In terms of AUC ROC, Logs2Graphs achieves the best performance against its competitors on four out of five datasets. Particularly, Logs2Graphs outperforms the closest competitor on BGL by 9.6% and delivers remarkable results (i.e., an AUC ROC larger than 0.99) on Spirit and Thunderbird. Similar observations can be made for Average Precision.
* Deep learning based methods generally outperform the traditional machine learning based methods. One possible reason is that traditional machine learning based methods only leverage log event count vectors as input, which makes them unable to capture and exploit sequential relationships between log events and the semantics of the log templates.
* The performance of (not-graph-based) deep learning methods is often inferior to that of Logs2Graphs on the more complex datasets, i.e., Hadoop, BGL, Spirit, and Thunderbird, which all contain hundreds or even thousands of log templates. This suggests that LSTM-based models may not be well suited for logs with a large number of log templates. One possible reason is that the test dataset contains many unprecedented log templates, namely log templates that are not present in the training dataset.
* In terms of ROC AUC score, all methods except for OCSVM and AutoEncoder achieve impressive results (with ROC AUC > 0.91) on HDFS. One possible reason is that HDFS is a relatively simple log dataset that contains only 48 log templates. Concerning AP, PCA and the LSTM-based DeepLog achieve impressive results (with AP > 0.89) on HDFS. Meanwhile, Logs2Graphs obtains a competitive performance (with AP = 0.87) on HDFS.
§.§ Directed vs. undirected graphs
To investigate the practical added value of using directed log graphs as opposed to undirected log graphs, we convert the logs to attributed, undirected, and edge-weighted graphs, and apply GLAM <cit.>, a graph-level anomaly detection method for undirected graphs. We use the same graph construction method as for Logs2Graphs, except that we use undirected edges. Similar to our method, GLAM also couples the graph representation learning and anomaly detection objectives by optimising a single SVDD objective. The key difference with OCDiGCN is that GLAM leverages GIN <cit.>, which can only tackle undirected graphs, while OCDiGCN utilises DiGCN <cit.> that is especially designed for directed graphs.
The results in Table <ref> indicate that GLAM's detection performance is comparable to that of most competitors. However, it consistently underperforms on all datasets, except for Hadoop, when compared to Logs2Graphs. Given that the directed vs undirected representation of the log graphs is the key difference between the methods, a plausible explanation is that directed graphs have the capability to retain the temporal sequencing of log events, whereas undirected graphs lack this ability. Consequently, GLAM may encounter difficulties in detecting sequential anomalies and is outperformed by Logs2Graphs.
§.§ Node Labels vs. Node Attributes
To investigate the importance of using semantic embedding of log event template as node attributes, we replace the node semantic attributes with one-hot-encoding of node labels (i.e., using an integer to represent a log event template). The performance comparisons in terms of ROC AUC for Logs2Graphs are depicted in Figure <ref>, which shows that using semantic embedding is always superior to using node labels. Particularly, it can lead to a substantial performance improvement on Hadoop, Spirit and HDFS datasets. The PR AUC results show a similar behaviour and thus are omitted.
§.§ Robustness to Contamination
To investigate the robustness of Logs2Graphs when the training dataset is contaminated, we report its performance in terms of ROC AUC under a wide range of contamination levels. Figure <ref> shows that the performance of Logs2Graphs decreases with an increase of contamination in the training data. The PR AUC results show a similar behaviour and thus are omitted. Hence, it is important to ensure that the training data contains only normal graphs (or with a very low proportion of anomalies).
§.§ Ability to Detect Structural Anomalies and Recognise Unseen Normal Instances
To showcase the effectiveness of different neural networks in detecting structural anomalies, we synthetically generate normal and anomalous directed graphs as shown in Figure <ref>. As DeepLog, LogAnomaly and AutoEncoder require log sequences as inputs, we convert directed graphs into sequences by sequentially presenting the endpoint pairs of each edge. Moreover, for GLAM we convert directed graphs into undirected graphs by turning each directed edge into an undirected edge.
Moreover, to investigate their capability of recognising unseen but structurally equivalent normal instances, we generate the following normal log sequences based on the synthetic normal graph as training data: A → B → C → D → A (1000), B → C → D → A → B (1000) and C → D → A → B → C (1000), and the following as test dataset: D → A → B → C → D (1000).
Specifically, the results in Table <ref> indicate that Logs2Graphs, DeepLog and LogAnomaly can effectively detect structural anomalies, while AutoEncoder and GLAM fail in some cases. However, log sequence based methods, namely DeepLog, LogAnomaly and AutoEncoder, can lead to high false positive rates due to their inability to recognise unseen but structurally equivalent normal instances.
§.§ Anomaly Explanation
Figure <ref> provides an example of log anomaly explanation on the HDFS dataset. For each detected anomalous log graph (namely, a group of logs), we first quantify the importance of nodes according to the description in Section <ref>. Next, we visualise the anomalous graph by assigning darker shades of red to more important nodes. In this example, the node "WriteBlock(WithException)" contributes the most to the anomaly score of an anomalous log group and thus is highlighted in red.
§.§ Sensitivity Analysis
We examine the effects of three hyperparameters in OCDiGCN on the detection performance.
The Number of Convolutional Layers: L is a potentially important parameter as it determines how many convolutional layers to use in OCDiGCN. Figure <ref> (top row) depicts PR AUC and ROC AUC for the five benchmark datasets when L is varied from 1 to 5. We found that L=1 yields consistently good performance. As the value of L is increased, there is only a slight enhancement in the resulting performance or even degradation, while the associated computational burden increases substantially. We thus recommend and use L=1.
The Embedding Dimension d: From Figure <ref> (middle row), one can see that d=128 yields good performance on Spirit, Hadoop, HDFS and Thunderbird, while further increasing d obtains negligible performance improvement or even degradation. However, an increase of d on BGL leads to significantly better performance. One possible reason is that BGL is a complex dataset wherein anomalies and normal instances are not easily separable in lower dimensions.
The Proximity Parameter k:
As this parameter increases, a node can gain more information from its further neighbours. Figure <ref> (bottom row) contrasts the detection performance when k is set to 1 and 2, respectively. Particularly, we construct one Inception Block when k = 2, using concatenation to fuse the results.
We observe that there is no significant improvement in performance when using a value of k=2 in comparison to k=1. It is important to recognize that a node exhibits 0th-order proximity with itself and 1st-order proximity with its immediately connected neighbours. If k=2, a node can directly aggregate information from its 2nd-order neighbours. As described in Table <ref>, graphs generated from logs usually contain a limited number of nodes, varying from 6 to 34. Therefore, there is no need to utilise the Inception Block, which was originally designed to handle large graphs in <cit.>.
§.§ Runtime Analysis
Note that traditional machine learning methods, including PCA, OCSVM, IForest and HBOS, usually perform log anomaly detection in a transductive way. In other words, they require the complete dataset beforehand and do not follow a train-and-test strategy. In contrast, neural network based methods, such as DeepLog, LogAnomaly, AutoEncoder, and Logs2Graphs, perform log anomaly detection in an inductive manner, namely following a train-and-test strategy.
Figure <ref> shows that most of the computational time demanded by Logs2Graphs is allocated to the graph generation phase. In contrast, the training and testing phases require a minimal time budget. The graph generation phase is amenable to parallelisation, though, thereby potentially reducing the overall processing time. As a result, Logs2Graphs shows great promise for performing online log anomaly detection. Meanwhile, other neural network based models, such as DeepLog, LogAnomaly, and AutoEncoder, demand considerably more time for the training and testing phases.
§ THREATS TO VALIDITY
We have discerned several factors that may pose a threat to the validity of our findings.
Limited Datasets. Our experimental protocol entails utilizing five publicly available log datasets, which have been commonly employed in prior research on log-based anomaly detection. However, it is important to acknowledge that these datasets may not fully encapsulate the entirety of log data characteristics. To address this limitation, our future work will involve conducting experiments on additional datasets, particularly those derived from industrial settings, in order to encompass a broader range of real-world scenarios.
Limited Competitors. This study focuses solely on the experimental evaluation of eight competing models, which are considered representative and possess publicly accessible source code. However, it is worth noting that certain models, such as GLAD-PAW, did not disclose their source code, and it requires non-trivial effort to re-implement these models. Moreover, certain models, such as CODEtect, would require several months to run on our limited computing resources. For these reasons, we exclude them from the present evaluation. In future work, we intend to re-implement some of these models and obtain more computing resources to test more models.
Purity of Training Data. The purity of training data is usually hard to guarantee in practical scenarios. Although Logs2Graphs is shown to be robust to very small contamination in the training data, it is critical to improve the model robustness by using techniques such as adversarial training <cit.> in the future.
Graph Construction. The graph construction process, especially regarding the establishment of edges and assigning edge weights, adheres to a rule based on connecting consecutive log events. However, this rule may be considered overly simplistic in certain scenarios. Therefore, more advanced techniques will be explored to construct graphs in the future.
§ CONCLUSIONS
We introduced Logs2Graphs, a new approach for unsupervised log anomaly detection. It first converts log files to attributed, directed, and edge-weighted graphs, translating the problem to an instance of graph-level anomaly detection. Next, this problem is solved by OCDiGCN, a novel method based on graph neural networks that performs graph representation learning and graph-level anomaly detection in an end-to-end manner. Important properties of OCDiGCN include that it can deal with directed graphs and learns in an unsupervised manner.
Extensive results on five benchmark datasets reveal that Logs2Graphs is at least comparable to and often outperforms state-of-the-art log anomaly detection methods such as DeepLog and LogAnomaly. Furthermore, a comparison to a similar method for graph-level anomaly detection on undirected graphs demonstrates that directed log graphs lead to better detection accuracy in practice.
Zhong Li and Matthijs van Leeuwen: this publication is part of the project Digital Twin with project number P18-03 of the research programme TTW Perspective, which is (partly) financed by the Dutch Research Council (NWO). Jiayang Shi: This research is co-financed by the European Union H2020-MSCA-ITN-2020 under grant agreement no. 956172 (xCTing).
|
http://arxiv.org/abs/2307.02223v1
|
20230705115946
|
Direct segmentation of brain white matter tracts in diffusion MRI
|
[
"Hamza Kebiri",
"Ali Gholipour",
"Meritxell Bach Cuadra",
"Davood Karimi"
] |
eess.IV
|
[
"eess.IV",
"cs.CV",
"q-bio.NC"
] |
White matter tract segmentation
^1 Computational Radiology Laboratory (CRL), Department of Radiology, Boston Children's Hospital, and Harvard Medical School, USA. ^2 Department of Radiology, Lausanne University Hospital (CHUV) and University of Lausanne (UNIL), Lausanne, Switzerland
Direct segmentation of brain white matter tracts in diffusion MRI
Hamza Kebiri^1,2, Ali Gholipour^1, Meritxell Bach Cuadra^1,2, and Davood Karimi^1
August 1, 2023
===================================================================================
The brain white matter consists of a set of tracts that connect distinct regions of the brain. Segmentation of these tracts is often needed for clinical and research studies. Diffusion-weighted MRI offers unique contrast to delineate these tracts. However, existing segmentation methods rely on intermediate computations such as tractography or estimation of fiber orientation density. These intermediate computations, in turn, entail complex computations that can result in unnecessary errors. Moreover, these intermediate computations often require dense multi-shell measurements that are unavailable in many clinical and research applications. As a result, current methods suffer from low accuracy and poor generalizability. Here, we propose a new deep learning method that segments these tracts directly from the diffusion MRI data, thereby sidestepping the intermediate computation errors. Our experiments show that this method can achieve segmentation accuracy that is on par with state-of-the-art methods (mean Dice Similarity Coefficient of 0.826). Compared with the state of the art, our method offers far superior generalizability to undersampled data that are typical of clinical studies and to data obtained with different acquisition protocols. Moreover, we propose a new method for detecting inaccurate segmentations and show that it is more accurate than standard methods based on estimation uncertainty quantification. The new methods can serve many critically important clinical and scientific applications that require accurate and reliable non-invasive segmentation of white matter tracts.
§ INTRODUCTION
The brain white matter is organized into a set of distinct tracts. These tracts are bundles of myelinated axons that connect different brain regions such as the cerebral cortex and the deep gray matter. Although they are tightly packed and often cross one another, each tract has an entirely different function and connects different regions of the brain <cit.>. Accurate segmentation of these tracts is needed in clinical studies and medical research. For example, in surgical planning one needs to know the precise extent of the individual tracts in order to assess the risk of damage to specific neurocognitive functions that may result from surgical removal of brain tissue. As another prominent example, changes in the micro-structural properties of different tracts is commonly used in studying brain development and disorders.
Magnetic resonance imaging (MRI) is the modality of choice for non-invasive assessment of white matter tracts in vivo. Although some of the tracts may be identifiable on T1, T2, or FLAIR images <cit.>, accurate segmentation of most tracts is only possible with diffusion MRI. Individual tracts may be extracted from whole-brain tractograms by specifying inclusion and exclusion regions of interest (ROIs). This process, which is usually referred to as “virtual dissection”, is time-consuming, subjective, and it has low reproducibility <cit.>. Some prior works have aimed at automating the virtual dissection process by learning to compute the inclusion/exclusion ROIs <cit.>. It is also possible to extract the tracts from a whole-brain tractogram by grouping similar streamlines using a clustering approach. This can be done by comparing individual streamlines with a predefined set of tracts in an atlas <cit.>. Some techniques additionally take into account the location of the streamlines relative to anatomical landmarks in the brain <cit.>. Tractography-based methods are inherently limited by the errors in streamline tractography <cit.>. To avoid these errors, some methods segment the tracts on diffusion tensor or fiber orientation images, thereby avoiding the tractography. Some of the segmentation techniques that have been explored in the past include Markov Random Fields <cit.>, k-nearest neighbors technique <cit.>, template matching <cit.>, and more recently deep learning <cit.>. However, none of these intermediate parameters (e.g., the diffusion tensor) have an unambiguous biophysical meaning and their computation entails unavoidable estimation errors. Moreover, the intermediate computations for most existing methods assume availability of dense multi-shell diffusion MRI measurements, which are not acquired in many clinical and research applications. As a result, existing methods have low accuracy and limited generalizability when applied to typical clinical scans.
In this work, we develop and validate a new method that segments white matter tracts directly from the diffusion MRI data. The new method does not require tractography or computation of other intermediate parameters. Moreover, we present a simple but effective technique for detecting less accurate segmentations. We show that the new methods achieve superior accuracy and generalizability compared with the existing methods.
§ MATERIALS AND METHODS
§.§ Segmentation model
Our method, shown schematically in Figure <ref>, is based on a fully convolutional network (FCN). The network architecture is similar to nnU-Net (we refer to <cit.> for the details of the architecture). Our method predicts tract segmentations directly from the diffusion MRI data. To enhance the generalizability of the method and to enable it to work with scans acquired using different gradient tables (i.e., different gradient strengths and/or different gradient directions): (i) We train the model with measurements that are typically acquired for diffusion tensor imaging (DTI). DTI-style scans include single-shell measurements at a b-value of around 750-1200 s/mm^2 <cit.>. They are the most common acquisition in clinical and research applications. We normalize these measurements by a non-weighted (b=0) measurement. (ii) We project the normalized data onto a fixed spherical harmonics (SH) basis. We use the SH basis formulation of <cit.> with order 2, which results in 6 SH coefficients regardless of the number of measurements. We use these 6 coefficient maps as input to the FCN.
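A sketch of this projection step is given below. The paper does not spell out the real SH convention; this illustration uses one common real, symmetric order-2 basis built on SciPy's spherical harmonics, so the details may differ from the authors' implementation.

```python
import numpy as np
from scipy.special import sph_harm

def real_sh_basis(thetas, phis):
    # Order-2 real, symmetric SH basis: 6 functions (l = 0 and l = 2).
    # thetas: polar angles; phis: azimuthal angles of the gradient directions.
    cols = []
    for l in (0, 2):
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, phis, thetas)  # SciPy: (m, l, azimuth, polar)
            if m < 0:
                cols.append(np.sqrt(2) * Y.imag)
            elif m == 0:
                cols.append(Y.real)
            else:
                cols.append(np.sqrt(2) * Y.real)
    return np.stack(cols, axis=-1)  # shape (num_dirs, 6)

def fit_sh_coefficients(signals, bvecs):
    # signals: (num_voxels, num_dirs) b0-normalised measurements.
    # bvecs:   (num_dirs, 3) unit gradient directions q_i.
    thetas = np.arccos(np.clip(bvecs[:, 2], -1.0, 1.0))
    phis = np.arctan2(bvecs[:, 1], bvecs[:, 0])
    B = real_sh_basis(thetas, phis)
    coeffs, *_ = np.linalg.lstsq(B, signals.T, rcond=None)
    return coeffs.T  # (num_voxels, 6) SH coefficient maps, the FCN input channels
```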
Our approach of using the data as the model input has three advantages:
(1) It eliminates the need to compute intermediate parameters (e.g., fiber orientation distribution or tractogram), thereby avoiding the associated computational errors <cit.>. If the goal is tract segmentation, there is no need to incur those errors by going through intermediate computations.
(2) It improves the generalizability of the method with respect to different acquisition schemes. If, for example, the input is the tractogram, the tract segmentation results can be significantly influenced by the tractography method that is used to compute the tractogram. Moreover, computation of intermediate parameters may demand special measurement schemes that may be unavailable at test time. For example, methods that are based on fiber orientation distribution typically require high angular resolution measurements, which can result in a loss of accuracy if such measurements are not available <cit.>.
(3) It offers a highly effective data augmentation method during both training and test/inference. Data augmentation during training improves the training of large deep learning models with limited data. It is especially common in applications such as medical imaging where labeled data are costly to obtain. Test-time data augmentation, on the other hand, can be used to improve prediction accuracy and also to estimate prediction uncertainty <cit.>. Our train- and test-time data augmentation strategies are explained below.
Let us denote the set of b0-normalized measurements in a scan with { x(q_i) }_i=1^m, where q_i is the unit vector indicating the gradient direction for the i^th measurement. During training, in each iteration we select a subset of size 6-12 from the m measurements { x(q_j) }_j ∈ S ⊆{1, … , m }, chosen uniformly at random without replacement. We select these measurements such that the gradient directions for each pair of measurements are far apart in the q space, using an approach similar to <cit.>. We use the selected measurement subset (after projecting onto the SH basis) as input to the model. This can act as a highly effective and computationally-efficient data augmentation strategy as it presents a different view of the input to the model in each training iteration.
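One plausible way to realise this selection (the cited scheme may differ in detail) is a greedy farthest-point rule on the sphere, treating antipodal directions as equivalent:

```python
import numpy as np

def select_subset(bvecs, k, rng=None):
    # Greedily pick k gradient directions that are mutually far apart in q-space.
    rng = rng or np.random.default_rng()
    n = len(bvecs)
    chosen = [int(rng.integers(n))]  # random seed direction
    while len(chosen) < k:
        remaining = [i for i in range(n) if i not in chosen]
        # next direction: the one whose closest chosen neighbour is farthest away
        best = max(remaining,
                   key=lambda i: min(1.0 - abs(float(bvecs[i] @ bvecs[j]))
                                     for j in chosen))
        chosen.append(best)
    return chosen
```

During training, one would call this with a subset size drawn from 6 to 12; at inference, the n subsets can be drawn in the same way.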
During inference, we use n different measurement subsets, selected similarly to the training procedure described above, to predict n different segmentations. Let us denote the segmentation probability map for a specific tract with each of these measurement subsets as { y_k }_k=1^n. We compute the voxel-wise average of these predictions to obtain a final segmentation prediction, which we denote with y̅. Furthermore, we can compute a measure of disagreement between these n predictions to estimate segmentation uncertainty. Disagreement between segmentation predictions is usually quantified using metrics of volume overlap or surface distance <cit.>. Each of these metrics quantifies the segmentation error from a narrow perspective. Furthermore, these metrics discard the probability information by binarizing the segmentations. Recent segmentation uncertainty quantification methods have also followed a purely voxel-wise approach <cit.>, which ignores the spatial distribution of the segmentation probabilities. To characterize the disagreement in a way that accounts for the complete probability distribution of the predicted segmentations, we use a method based on the Wasserstein Distance, also known as earth mover's distance (EMD) <cit.>. Given two probability distributions p and q defined on the same metric space, this distance is defined as 𝙴𝙼𝙳(p,q)= inf_γ∈Γ(p,q)𝐄_(x,y) ∼γ d(x,y), where d is a distance measure and Γ(p,q) is the set of joint probability distributions whose marginals are equal to p and q. Intuitively, if p and q are considered as two piles of earth, EMD is the cost of turning one into the other. Although EMD can be easily quantified for scalar variables, to the best of our knowledge there are no methods for computing EMD for probability distributions in ℝ^2 or ℝ^3. Here, we adopt an approximation that was originally proposed in <cit.> for comparing multi-dimensional histograms. We demonstrate this computation for a simple 3 × 3 histogram in Figure <ref>. Given a pair of multi-dimensional histograms (or probability distributions), the method first unfolds the histograms as shown in the example in Figure <ref> and finds a minimum distance pairing between the two. The distance between the two histograms is defined as the sum of the pair-wise distances in the pairing.
Based on this approximation, we compute the EMD between two segmentation probability maps in ℝ^3 as 𝙴𝙼𝙳(p,q)= ∑_t=0^1 d ( P(t), Q(t) ), where P is the cumulative sum of unfolded p as shown in Figure <ref> and the same for Q, and d computes the ℓ_2 distance between the paired P and Q. This computation requires that the two inputs have the same mass, which we satisfy by normalizing the segmentations to have a unit sum. Furthermore, to reduce the computation time, we reduce the size of the segmentation volumes by a factor of 4 in each dimension via cubic interpolation. Given the set of n segmentation predictions computed as explained above, we estimate the segmentation uncertainty as u= 1/n∑_k 𝙴𝙼𝙳(y_k, y̅).
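The computation can be sketched as follows; plain row-major flattening stands in here for the snake-like unfolding of the figure, and the names are ours.

```python
import numpy as np

def emd_approx(p, q):
    # Approximate EMD between two (downsampled) 3-D probability maps:
    # unfold to 1-D, normalise to unit mass, cumulate, sum pairwise distances.
    P = np.cumsum(p.ravel() / p.sum())
    Q = np.cumsum(q.ravel() / q.sum())
    return float(np.abs(P - Q).sum())

def segmentation_uncertainty(preds):
    # u = mean EMD between each prediction y_k and the mean prediction y_bar.
    y_bar = np.mean(preds, axis=0)
    return float(np.mean([emd_approx(y, y_bar) for y in preds]))
```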
§.§ Implementation details
The segmentation network was implemented in TensorFlow 1.6 and run on an NVIDIA GeForce GTX 1080 GPU on a Linux machine with 64 GB of memory and 20 CPU cores. The network takes 3D patches of size 96^3 voxels as input and estimates the tract segmentation map for that patch. The network input has 6 channels as described above. The network output has 41 channels for the 41 tracts considered in this work. A complete description of these tracts can be found in <cit.>. We merged the left and right sections of bilateral tracts, such as the arcuate fasciculus, into one label. We trained the network to maximize the Dice similarity coefficient (DSC) between the predicted and ground-truth segmentations of the tracts using Adam <cit.> with a batch size of 1 and a learning rate of 10^-4, which was halved if the validation loss did not decrease after a training epoch. We compare our method with TractSeg <cit.>. TractSeg was shown to be vastly superior to many tractography dissection methods <cit.>. Therefore, we do not compare with those methods.
§ EXPERIMENTAL RESULTS AND DISCUSSION
We applied the method on 105 subjects in the Human Connectome Project study <cit.>. Manual segmentations of 72 tracts for these subjects are publicly available <cit.>. We followed a five-fold cross-validation approach, each time leaving 21 subjects for test and training on the remaining 84 subjects. Table <ref> summarizes the performance of the proposed method and TractSeg. We report DSC, 95 percentile of the Hausdorff Distance (HD95), and average symmetric surface distance (ASSD). In addition to TractSeg, we compare our method with atlas-based segmentation (MAS), whereby 20 training images are registered to the test subject and the registration transforms are used to warp the segmentation labels from the training images to the test image. Voxel-wise averaging is then used to estimate the segmentations for the test image. We implemented this in two ways: MAS-FA, where we computed the registrations based on fractional anisotropy (FA) images using ANTS <cit.>, and MAS-FOD, where we computed the registrations based on fiber orientation density images using mrregister <cit.>.
Segmentation performance results are presented in Table <ref>. Figure <ref> shows example tract segmentations predicted by our method and TractSeg. Our method using only the DTI measurements (b=1000) achieved segmentation accuracy that was very close to TractSeg using the multi-shell data with three times as many measurements. Paired t-tests did not show any significant differences (at p=0.01) between our method and TractSeg in terms of any of the three criteria. When TractSeg was applied on the b=1000 measurements, its performance was worse than our method in terms of all three criteria. To simulate under-sampled clinical scans, we selected 6 of the b=1000 measurements as proposed in <cit.>. As shown in Table <ref>, the performance of our method remained almost unchanged, whereas the performance of TractSeg deteriorated significantly. Paired t-tests with a p=0.01 threshold showed that (1) the performance of our method did not change in terms of any of the three criteria on any of the 41 tracts when 6 measurements were used compared with 90 measurements, and (2) our method achieved significantly higher DSC and significantly lower HD95 and ASSD (all with p<0.01) with both 90 and 6 measurements compared with the other three methods. As shown in Figure <ref>, segmentations produced by our method are almost indistinguishable between 90 and 6 measurements. Although we cannot present the segmentation results for all tracts, Table <ref> shows the mean DSC for six of the tracts, including the anterior commissure and fornix, which were the two most difficult tracts to segment for our method and for TractSeg.
We further tested our method on scans of children between 2-8 years of age from an independent dataset <cit.>. Each scan in this dataset included 30 measurements in a single shell at b=750. We chose six measurements as input to our model as described above. We manually extracted 32 tracts from 12 different subjects on this dataset. Our method achieved DSC, HD95, and ASSD of 0.786 ± 0.076, 2.85 ± 1.20, and 1.017 ± 0.291, respectively. Although this shows a drop in accuracy, it is a highly encouraging result given the fact that this was a completely independent test dataset that was different from our training dataset in two important ways: (1) subject age: young children (2-8 years) versus adults (21-36 years), and (2) measurement b-value of 750 versus 1000. Compared with our method, TractSeg failed on this dataset, completely missing most of the tracts and achieving a mean DSC of 0.070. To further evaluate the reproducibility of our method on this dataset, we selected two disjoint subsets of six measurements from each scan and applied our method to segment the tracts. We computed the DSC between the tracts computed with the two measurement subsets. We did this for 100 scans, each from a different subject. The DSC for our method was 0.867 ± 0.041, whereas it was 0.115 ± 0.109 for TractSeg. Example results for our method on this dataset are shown in Figure <ref>.
Figure <ref> shows a plot of our proposed segmentation uncertainty, u, versus accuracy in terms of DSC. It shows that u is highly effective in identifying the less accurate segmentations. If we choose segmentations with a DSC of 0.70 and lower to be inaccurate, with a threshold of u=0.30 we can detect such segmentations with sensitivity=0.86, specificity=0.92, and accuracy=0.91. In Table <ref> we compare our method with the two standard methods based on segmentation uncertainty estimation: dropout and ensembles. We refer to <cit.> for a description of these methods. Our method achieves overall better results. Note that the ensemble method requires training of multiple models. We trained 10 models in this experiment, which increased the training time by a factor of 10.
§.§ Computational time and other experiments
Training time for our method is approximately 24 hours. Our method segments a test image in 2.4 seconds. TractSeg requires approximately 60 seconds to segment an image. MAS methods require much longer time, approximately 3 minutes for MAS-FA and 12 minutes for MAS-FOD.
In recent years attention-based vision models have become very common in medical image segmentation. To experiment with one such model, we applied the model of <cit.>, which has been developed specifically for 3D medical image segmentation. This model achieved a DSC of 0.740 ± 0.125, a far lower segmentation performance than those reported above.
§ CONCLUSIONS
Our method shows great promise in segmenting various white matter tracts. The appeal of our method is twofold: (1) Superior accuracy on under-sampled data that are typical of clinical scans, as clearly demonstrated by our results in Figure <ref> and Tables <ref> and <ref>. (2) Superior generalizability to multi-center data. This was clearly demonstrated in our experiment with an independent validation dataset, with some examples presented in Figure <ref>.
§ ACKNOWLEDGMENT
This research was supported in part by the National Institute of Biomedical Imaging and Bioengineering, the National Institute of Neurological Disorders and Stroke, and Eunice Kennedy Shriver National Institute of Child Health and Human Development of the National Institutes of Health (NIH) under award numbers R01HD110772, R01NS128281, R01NS106030, R01EB018988, R01EB031849, R01EB032366, and R01HD109395. This research was also partly supported by NVIDIA Corporation and utilized NVIDIA RTX A6000 and RTX A5000 GPUs. The content of this publication is solely the responsibility of the authors and does not necessarily represent the official views of the NIH or NVIDIA.
This work was also supported by the Swiss National Science Foundation (project 205321-182602). We acknowledge access to the facilities and expertise of the CIBM Center for Biomedical Imaging, a Swiss research center of excellence founded and supported by Lausanne University Hospital (CHUV), University of Lausanne (UNIL), Ecole polytechnique fédérale de Lausanne (EPFL), University of Geneva (UNIGE) and Geneva University Hospitals (HUG).
|
http://arxiv.org/abs/2307.01941v1
|
20230704220910
|
A relative orientation for the moduli space of stable maps to a del Pezzo surface
|
[
"Jesse Leo Kass",
"Marc Levine",
"Jake P. Solomon",
"Kirsten Wickelgren"
] |
math.AG
|
[
"math.AG",
"math.AT",
"math.SG",
"Primary 14H10, Secondary 14N35, 14F42, 53D45"
] |
Current: J. L. Kass, Dept. of Mathematics, University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064, United States of America
jelkass@ucsc.edu
https://www.math.ucsc.edu/people/faculty.php?uid=jelkass
Current: M. Levine, University of Duisburg-Essen, Germany
marc.levine@uni-due.de
https://www.esaga.uni-due.de/marc.levine/
Current: J. P. Solomon, Institute of Mathematics, Hebrew University, Givat Ram Jerusalem, 91904, Israel
jake@math.huji.ac.il
http://www.ma.huji.ac.il/~jake/
Current: K. Wickelgren, Department of Mathematics, Duke University, 120 Science Drive
Room 117 Physics, Box 90320, Durham, NC 27708-0320, USA
kirsten.wickelgren@duke.edu
https://services.math.duke.edu/~kgw/
[2020]Primary 14H10; Secondary 14N35, 14F42, 53D45.
We prove orientation results for evaluation maps of moduli spaces of rational stable maps to del Pezzo surfaces over a field, both in characteristic 0 and in positive characteristic. These results and the theory of degree developed in a sequel produce quadratically enriched counts of rational curves over non-algebraically closed fields of characteristic not 2 or 3. Orientations are constructed in two steps. First, the ramification locus of the evaluation map is shown to be the divisor in the moduli space of stable maps where image curves have a cusp. Second, this divisor is related to the discriminant of a branched cover of the moduli space given generically by pairs of points on the universal curve with the same image.
A relative orientation for the moduli space of stable maps to a del Pezzo surface
Jesse Leo Kass, Marc Levine, Jake P. Solomon, and Kirsten Wickelgren
July 2023
=================================================================================
§ INTRODUCTION
An orientation of a map f: X → Y of smooth schemes X and Y over a field k is defined to be the data of a line bundle ℒ on X and an isomorphism ℒ^⊗ 2 ≅ ω_f, where ω_f ≅ Hom(det f^* T^*Y, det T^*X) denotes the relative canonical bundle. An orientation of f is viewed as a relative orientation of X over Y. For example, for k = ℝ, an orientation of f: X → Y gives the data of a topological orientation on the real manifold f^-1(y)(ℝ) for y a regular value of f.
By a del Pezzo surface, we mean a smooth, projective surface S over a field k whose inverse canonical class -K_S is ample. Examples of interest include blow-ups of ℙ^2 at fewer than 9 points, ℙ^1 × ℙ^1, and cubic surfaces. The degree of S is d_S = K_S · K_S. Let D be an effective Cartier divisor on S. A pointed rational map of degree D on S is a map u: 𝒞 → S from an arithmetic genus 0 curve 𝒞 with at worst nodal singularities to S such that u_*[𝒞] = D in CH^1(S), along with marked points p_1,…,p_n of the smooth locus of 𝒞. Such a map is stable if it has finitely many automorphisms. There is a moduli stack M̅_0,n(S, D) parametrizing rational stable maps (u: 𝒞 → S, p_1,…, p_n) of degree D. See <cit.>. This moduli stack is discussed further in Section <ref>. Define the evaluation map
ev: M̅_0,n(S, D) → S^n
by taking the class of (u: 𝒞 → S, (p_1,…, p_n)) to (u(p_1), …, u(p_n)). This paper constructs an orientation of ev away from the preimage of a set A ⊂ S^n of codimension ≥ 2 under appropriate hypotheses. First consider the case where the characteristic of k is 0.
Assume that D is not an m-fold multiple of a (-1)-curve for m>1. Moreover, assume that d_S≥ 4, or d_S=3 and d:= -K_S · D≠ 6, or d_S = 2 and d≥ 7.
Suppose k is a field of characteristic zero and that (S,D) satisfies Hypothesis <ref>. Let n=d-1. Then there is a codimension ≥ 2 closed subset A of S^n such that
ev|_ev^-1(S^n ∖ A): M̅_0,n(S, D) ∖ ev^-1(A) → S^n ∖ A
admits an orientation.
The closed subset A is constructed in Theorem <ref>, and the orientation is constructed in Theorem <ref>.
In positive characteristic, we lift ev to a map over a complete discrete valuation ring Λ with residue field k,
ev: M̅_0,n(𝒮, 𝒟) → 𝒮^n,
and orient ev away from the inverse image of a codimension ≥ 2 subset of 𝒮^n under additional hypotheses which we now describe.
Let M^bir_0,n(S, D) ⊂ M̅_0,n(S, D) denote the locus of stable maps that are birational onto their images with irreducible domain curves. See Definition <ref>. For 𝒞 irreducible and thus smooth, a stable map u: 𝒞 → S over an algebraically closed field is unramified if du: u^* T^*S → T^*𝒞 is surjective.
In addition to Hypothesis <ref>, assume k is perfect of characteristic not 2 or 3. If d_S = 2, assume additionally that for every effective D' ∈ Pic(S), there is a geometric point u in each irreducible component of M^bir_0(S, D') with u unramified.
The existence of unramified maps as in Hypothesis <ref> for d_S ≥ 3 is shown in Appendix <ref> following arguments of <cit.>.
Suppose (S,D) satisfies Hypothesis <ref>. Let n=d-1. Then there is a codimension ≥ 2 closed subset 𝒜 ⊂ 𝒮^n such that
ev|_ev^-1(𝒮^n ∖ 𝒜): M̅_0,n(𝒮, 𝒟) ∖ ev^-1(𝒜) → 𝒮^n ∖ 𝒜
admits an orientation.
Theorem <ref> is shown as Theorem <ref> and Theorem <ref>. See also Construction <ref> for the construction of 𝒜.
Orienting ev enables one to define an appropriate notion of the degree of ev, and we do this in <cit.>. When k = ℂ, this degree of ev is determined by a certain Gromov–Witten invariant <cit.> <cit.> <cit.> <cit.> <cit.> <cit.>. When k = ℝ, this degree contains the additional information of a certain Welschinger invariant <cit.>, which can be viewed as an open Gromov–Witten invariant <cit.>. The open Gromov–Witten invariants of <cit.> were defined as the degree of a relatively oriented evaluation map. See also <cit.>.
Furthermore, we consider twists
ev_σ: M̅_0,n(S, D)_σ → ∏_i=1^r Res_L_i/k S
of ev for σ = (L_1,…,L_r) with L_i a finite separable extension of k and ∑_i=1^r [L_i : k] = n = d-1. When k has characteristic zero and Hypothesis <ref> holds, we construct an orientation of ev_σ away from the preimage of a subset of ∏_i=1^r Res_L_i/k S of codimension ≥ 2. When Hypothesis <ref> holds, we construct an analogous orientation for a lifting of ev_σ to a map over a complete discrete valuation ring Λ with residue field k. See Sections <ref> and <ref>.
Thus, for each of these twists, we are able to define a degree <cit.>. In the case k = ℝ, the twists are needed to obtain the full range of Welschinger or open Gromov–Witten invariants for (S,D) under the above hypotheses.
Restrictions of the twists ev_σ to certain dense opens are pulled back from a symmetrized evaluation map ev_Sym which maps to the quotient Sym^n_0 S of the complement of the pairwise diagonals in S^n by the symmetric group on n letters. Orientation results are obtained for ev_Sym in Section <ref> in characteristic 0 and in Section <ref> in positive characteristic. Relations between degrees of ev_σ and ev_Sym are given in <cit.>.
The relevant notion of degree <cit.> comes from Morel and Voevodsky's 𝔸^1-homotopy theory <cit.> and Morel's degree of a map of spheres <cit.>. Under appropriate hypotheses on f: X → Y, the degree may be computed as a weighted count of the points of f^-1(y) for a general point y. The weights are no longer integers but elements of the Grothendieck–Witt group GW(k(y)), which appears here from Morel's calculation of stable π_0,0 of the sphere spectrum in 𝔸^1-homotopy theory. The Grothendieck–Witt group GW(k(y)) is defined to be the group completion of isomorphism classes of symmetric, nondegenerate bilinear forms over k(y).
We show in <cit.> that the degree of ev_σ is given by a weighted count of the stable maps in the fiber over a chosen tuple of points. Each stable map through the chosen tuple of points is given a weight in GW(k) connected to the field of definition of the curve and the fields of definition of the tangent directions at the nodes. Thus, the degree of ev_σ is a quadratically enriched curve enumerating invariant.
Andrés Jaramillo Puentes and Sabrina Pauli have work in progress that computes the degree of the untwisted evaluation map over toric surfaces via a tropical correspondence theorem, building on their previous work <cit.>. Other quadratic or 𝔸^1-enrichments of enumerative results are found in, e.g., <cit.> <cit.> <cit.> <cit.> <cit.> <cit.>.
The main steps in our construction of relative orientations are as follows. Let D_cusp (respectively D_tac) denote the Cartier divisor on M^bir_0,n(S, D) defined as the closure of the locus of (f: ℙ^1 → S, (p_1,…, p_n)) such that f(ℙ^1) has one simple cusp (respectively tacnode) and nodes, but no other singularities. See Definition <ref> and Lemma <ref>.
Suppose k is a field of characteristic 0 and (S,D) satisfies Hypothesis <ref>. Then, there exists a codimension ≥ 2 subscheme A ⊂ S^n such that ev|_ev^-1(S^n ∖ A) is a map between smooth schemes that is étale on the complement of D_cusp with differential vanishing to order 1 along D_cusp.
With A as in Theorem <ref>, let
M̅_0,n(S,D)^∘ := M̅_0,n(S,D) ∖ ev^-1(A) = ev^-1(S^n ∖ A).
Let 𝒞_n^∘ → M̅_0,n(S,D)^∘ denote the pullback of the universal curve M̅_0,n+1(S, D) → M̅_0,n(S,D) to M̅_0,n(S,D)^∘. In Section <ref>, we define a closed subscheme of the product of universal curves,
𝒞_n^∘ ×_M̅_0,n(S, D)^∘ 𝒞_n^∘,
called the double point locus. By construction, the double point locus comes with a projection map π to M̅_0,n(S, D). Over a point (f: ℙ^1 → S, (p_1,…, p_n)) ∈ M_0,n(S, D) such that f(ℙ^1) has only ordinary double points, the fiber of π consists of pairs of points x_1, x_2 ∈ ℙ^1 such that f(x_1) = f(x_2).
Let the cusp locus denote the locus of points (f,x,x) in 𝒞_n^∘ ×_M̅_0,n(S, D)^∘ 𝒞_n^∘ where f: ℙ^1 → S is a map and x ∈ ℙ^1 is such that f(x) is a simple cusp of f(ℙ^1). The cusp locus is naturally a subscheme of the double point locus, as proven in Lemma <ref>. Let the tacnode locus denote the locus of points (f, x_1, x_2) in the double point locus where f: ℙ^1 → S is a map and x_1, x_2 ∈ ℙ^1 are such that f(x_1) = f(x_2) is a point where f(ℙ^1) has a simple tacnode.
Under the assumptions of Theorem <ref>, we can choose A such that the double point locus is smooth and its projection π to M̅_0,n(S,D)^∘ is finite, flat and generically étale. The ramification of π is supported on the cusp and tacnode loci, where it is simply ramified, and the divisor of the discriminant is given by
disc_π = 1 · D_cusp + 2 · D_tac.
The definition of the discriminant is recalled in (<ref>). Theorem <ref> is proven as Corollary <ref> and Theorem <ref>.
Theorems <ref> and <ref> combine to give an orientation of ev as follows. Discriminant bundles are canonically isomorphic to the square of a line bundle. Thus, Theorem <ref> identifies 𝒪_M̅^bir_0,n(S,D)(D_cusp) canonically as the square of a line bundle, and Theorem <ref> identifies 𝒪_M̅^bir_0,n(S,D)(D_cusp) with the relative canonical bundle of ev|_M̅^bir_0,n(S,D). This orientation is given in Theorem <ref>.
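Schematically, and writing ℒ for the canonical square root of the discriminant line bundle of π (the symbol ℒ is our notation, introduced only to summarize the argument), the two identifications combine as
ω_ev ≅ 𝒪(D_cusp) ≅ 𝒪(disc_π) ⊗ 𝒪(D_tac)^⊗ -2 ≅ (ℒ ⊗ 𝒪(-D_tac))^⊗ 2,
where the first isomorphism comes from the order 1 vanishing of the differential of ev along D_cusp, the second from disc_π = D_cusp + 2 · D_tac, and the third from 𝒪(disc_π) ≅ ℒ^⊗ 2.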
§.§ Acknowledgements
This paper has been a long time in coming. While many of the ideas were present in 2018, understanding enough of the geometry of the moduli stack (and coping with non-mathematical realities) led to a drawn-out battle and we sincerely thank the mathematicians and organizations who supported us during this time.
We thank Rahul Pandharipande, Sho Tanimoto and Ilya Tyomkin, for helpful discussions.
ML is supported by the ERC Grant QUADAG: this paper is part of a project that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 832833).
JS was partially supported by ERC Starting Grant 337560 as well as ISF Grants 569/18 and 1127/22. KW was partially supported by National Science Foundation Awards DMS-1552730, DMS-2001890, and DMS-2103838.
§ RATIONAL CURVES ON A DEL PEZZO SURFACE
§.§ Definitions of moduli spaces of stable maps to del Pezzo surfaces
We will be using the moduli space of pointed, stable maps, and we set up notation and references for this now. Let R be a Noetherian ring. Let 𝒮 → Spec R be a smooth projective R-scheme and let 𝒟 be an effective relative Cartier divisor on 𝒮.
Let M̃_0,n(𝒮, 𝒟) denote the scheme of morphisms f: ℙ^1 → 𝒮 with f_*([ℙ^1]) ∈ |𝒟|, together with n disjoint points (i.e. disjoint sections from the base scheme) p_1,…,p_n of ℙ^1, with its natural PGL_2-action. Let
d := (-𝒟 · K_𝒮)
in H^0(Spec R, ℤ) denote the degree of 𝒟 with respect to -K_𝒮 and suppose that d is everywhere greater than 0. (Intersection numbers are locally constant. See for example <cit.>.) This implies that each f: ℙ^1 → 𝒮 with f_*([ℙ^1]) ∈ |𝒟| is a stable map (in the sense of having finite automorphisms; see <cit.>).
This stability gives a natural map M̃_0,n(𝒮, 𝒟) → M_0,n(𝒮, 𝒟) to a moduli stack M_0,n(𝒮, 𝒟) given by the quotient by PGL_2. M_0,n(𝒮, 𝒟) is an open substack of a compactified moduli stack M̅_0,n(𝒮, 𝒟) of n-pointed, stable maps of a genus zero curve to 𝒮, in the curve class 𝒟. See <cit.> for more information. We use the notation M̃_0(𝒮, 𝒟), M_0(𝒮, 𝒟), and M̅_0(𝒮, 𝒟) to denote M̃_0,n(𝒮, 𝒟), M_0,n(𝒮, 𝒟), and M̅_0,n(𝒮, 𝒟), respectively, with n=0.
The moduli stack M̅_0,n(𝒮, 𝒟) is a proper (in particular, separated) algebraic stack over R with projective coarse moduli space. M̅_0,n(𝒮, 𝒟) is constructed by representing the functor of morphisms of n-pointed curves as a quasi-projective subscheme M̃̅̃_0,n(𝒮,𝒟) of a suitable Hilbert scheme and then defining M̅_0,n(𝒮, 𝒟) as the quotient stack of this scheme by the natural PGL_N-action (for suitable N). In particular, over an open substack V with trivial groupoid structure, M̃̅̃_0,n(𝒮,𝒟) has a free and stable PGL_N-action, so V is isomorphic to its image in the coarse moduli space. In particular, V is a quasi-projective scheme over R.
We will be interested in the case where 𝒮 is a del Pezzo surface. The references <cit.> and <cit.> contain pertinent information on del Pezzo surfaces, and we now give a definition and some description.
An R-scheme 𝒮 is a del Pezzo surface over R if 𝒮 is smooth of relative dimension 2 and projective over R and the anti-canonical sheaf -K_𝒮 is relatively ample. The degree d_𝒮 of a del Pezzo surface is the self-intersection K_𝒮^(2).
§.§ Over a field
Now let k be a perfect base field and let S be a del Pezzo surface over k of degree d_S. Over the algebraic closure of k, we may represent S as the blow-up of ℙ^2 at 9-d_S points or as S = ℙ^1 × ℙ^1 (in this case, d_S=8). In case d_S ≥ 3, the anti-canonical divisor -K_S is very ample and in case d_S=2, the linear system |-K_S| defines a finite, 2-1 cover of ℙ^2. We call S a general del Pezzo of degree d_S if d_S ≥ 5 or if S is the blow-up of ℙ^2 at 9-d_S "general" points (that is, an assertion about S is true for all sets of points outside a closed algebraic subset of S^9-d_S).
§.§ Properties of rational curves
We will need to examine the geometry of M_0,n(S,D) and M̅_0,n(S, D) at some length. To do this, we need to recall quite a number of definitions of properties of rational curves on S which affect the geometry of their neighborhoods in these moduli spaces.
For later use in <ref>, in this section we use a separated Noetherian scheme B as base-scheme. We fix a del Pezzo surface S→ B over B, endowed with relative effective Cartier divisor D. Following <ref>, we have the moduli stack M_0,n(S, D)→ B.
Let C → B be a separated morphism of a noetherian scheme C to B and let f: ℙ^1_C → S_C be a morphism. We say that f is non-constant, resp. separable, if for each geometric point x → C, the base-change f_x: ℙ^1_x → S_x is non-constant, resp. separable to the image curve f_x(ℙ^1_x) ⊂ S_x. For f: ℙ^1_C → S_C non-constant and separable, we have the normal sheaf 𝒩_f defined by the exactness of the sheaf sequence
0 → T_ℙ^1_C/C → f^* T_S_C → 𝒩_f → 0
When additionally f_*([ℙ^1_C]) ∈ |D_C|, we have that d := (-D · K_S) is the degree of the determinant of f^* T_S_C over C. If C = Spec K for some field K, we say that f: ℙ^1_K → S_K is defined over K; to save notation, we call this a morphism f: ℙ^1 → S, defined over K. Similarly, we write f_*([ℙ^1]) ∈ |D| for f_*([ℙ^1_K]) ∈ |D_K|.
Suppose f: ℙ^1 → S is a nonconstant and separable morphism defined over a field F and f_*([ℙ^1]) ∈ |D|. Then the sequence (<ref>) yields 𝒩_f ≅ 𝒪_ℙ^1(m) ⊕ 𝒩_f^tor with m = d - 2 - dim_F H^0(ℙ^1, 𝒩_f^tor), where 𝒩_f^tor denotes the torsion subsheaf of 𝒩_f.
For f: ℙ^1 → S defined over some field F, we call f free if 𝒩_f is generated by global sections and H^1(ℙ^1, 𝒩_f) = 0; equivalently, H^1(ℙ^1, 𝒩_f(-1)) = 0 (see <cit.>). In general, we call f: ℙ^1_C → S_C free if f_x is free for all geometric points x → C.
For f: ℙ^1 → S defined over some field F, we let 𝒩_f^tor ⊂ 𝒩_f be the torsion subsheaf of 𝒩_f. We have 𝒩_f ≅ 𝒪_ℙ^1(m) ⊕ 𝒩_f^tor, where m = d - 2 - ℓ and ℓ is the length of 𝒩_f^tor.
For a morphism f:^1→ S defined over F, we write H^i(^1, _f) for the F-vector space H^i(^1_F, _f) and drop the subscript F in other situations if the context makes the meaning clear.
Let (, p_*) be a semi-stable genus zero curve with n marked points and let f:→ S be a stable map with (reduced) image curve C=f()⊂ S, defined over some field F. We call f birational if f:→ C is a birational map of curves, that is, f^* is an isomorphism on total quotient rings; f is non-birational if f is not birational.
Let M^_0,n(S,D) ⊂ M_0,n(S,D) be the open subscheme with geometric points [(f,p_*)] such that f : ^1 → f(^1) is birational. Let 0n(S,D)⊂M̅_0,n(S,D) denote the closure of M_0,n(S,D)^.
Since a stable map that is birational onto its image has no non-trivial automorphisms, M^_0,n(S,D) is in fact an open subscheme of the moduli stack of stable maps M_0,n(S, D).
We let M^_0,n(S, D)⊂ M_0,n(S, D) be the open subscheme with geometric points [(f,p_*)] such that f:^1→ f(^1) is birational and free.
We say that a map f: → S from a genus 0 curve is unramified if f:→ f() is an unramified map of relative curves. For smooth, this is equivalent to the induced map on cotangent spaces df: f^* T^*S → T^* being surjective. Let M^_0,n(S, D) represent those [(f,p)] in M_0,n(S, D) such that f:→ f() is unramified.
Suppose f:^1→ S is unramified. Then we have the exact sheaf sequence <ref>, _f^≅ 0, and _f≅_^1(d-2).
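As a quick illustration (our own sanity check, not part of the cited results), take f to be the inclusion of a (-1)-curve E⊂ S, which is unramified with d=(-K_S· E)=1. Then
\[
\mathcal{N}_f \cong \mathcal{O}_{\mathbb{P}^1}(d-2) = \mathcal{O}_{\mathbb{P}^1}(-1) = \mathcal{N}_{E/S},
\]
consistent with E· E=-1, since adjunction gives E· E+K_S· E=2g(E)-2=-2.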
In addition, letting M̃^_0,n(S,D), M̃^_0,n(S, D) be the corresponding subschemes of M̃_0,n(S, D), the map M̃^_0,n(S,D)→ M_0,n^(S,D) is a _2-bundle; for F an algebraically closed field, we will often identify an [f]∈ M^_0,n(S,D)(F) with a choice of lifting f∈M̃^_0,n(S,D)(F), leaving the context to make the meaning clear.
M^_0,n(S, D) consists of birational maps.
Since birationality can be detected after base change to the algebraic closure, it suffices to show this for geometric points of M^_0,n(S, D).
Let f : (^1,p_*) → S be a geometric point of M^_0,n(S, D).
By the universal property of normalization, the map f : ^1 → f(^1) factors through the normalization ν: Z → f(^1). So, let g : ^1 → Z be the map such that ν∘ g = f. By Lüroth's theorem, Z ≅^1. Since f is unramified, the differential df_q : T_q ^1 → T_f(q)S is injective for all q ∈^1. It follows that dg_q : T_q ^1 → T_g(q) Z is injective for all q ∈^1 and thus g is unramified. Since g is a non-constant map of curves, whose codomain is normal, g is flat <cit.>. Since g is flat, unramified, and of finite presentation, g is étale <cit.>. Since the étale fundamental group of ^1 is trivial (even in characteristic p>0) <cit.>, g is an isomorphism.
Our desired orientation of will be described in terms of singularities of the image curve f(^1) at (f:^1→ S, (p_1,…, p_n)) in M_0,n(S,D), so we define certain singularities now. Suppose f:^1→ S is an unramified map defined over an algebraically closed field. If the preimage of any point of S consists of at most two points and for all points with two inverse images p_1 and p_2, the subspaces df(T_^1, p_i) are distinct for i=1,2, then the image curve f(^1) has only ordinary double points. We can extend this notion to apply to a map f: → S over an algebraically closed field, where has potentially multiple components. We say that f has only ordinary double points if → f() is unramified, and if the map from the normalization ∐^n ^1 → S satisfies the property that the preimage of any point of S consists of at most two points and for all points with two inverse images p_1 and p_2, the subspaces df(T_^1, p_i) are distinct for i=1,2.
Let M^_0,n(S, D) represent those (f,p_*) in M_0,n(S, D) such that f is unramified, and over every geometric point of the base, f(^1) has only ordinary double points. Dropping the assumption that the genus 0 curve be smooth, let ^_0,n(S,D) represent those f in _0,n(S,D) such that f : → f() is unramified, and has only ordinary double points over every geometric point of the base.
Let f:→ S be a geometric point of M_0,n^(S,D). We say that f has a cusp or worse if there is a point p ∈ such that df_p: T_p→ T_f(p)S is the zero map; we say that f has a cusp if in addition f^-1(f(p))={p}. We say a geometric point f of M_0^(S,D) has a tacnode or worse if there are points p≠ q ∈ such that f(p) = f(q) and df(T_,p) = df(T_,q) in T_f(p)S; if in addition f^-1(f(p))={p,q} we say f has a tacnode. We say that a geometric point f∈ M_0^(S,D) has an m-fold point or worse if there are m distinct points p_1,…, p_m ∈ with f(p_i)=f(p_j) for all i,j; we say that f has an m-fold point if in addition f^-1(f(p_1))={p_1,…, p_m}. For m=3, we use the term triple point instead of m-fold point.
A cusp at p∈ is ordinary if _k(p)(Ω_,p/f^*Ω_S,f(p))=1 and f^-1(f(p))={p}. An m-fold point is ordinary if the images df(T_,p_i) of the tangent spaces in T_S, f(p_i) are pairwise distinct and f^-1(f(p_1))={p_1,…, p_m}. A tacnode is ordinary if f^-1(f(p))={p,q}, and there are generators x, y for the maximal ideal in the complete local ring _S, f(p) such that the image curve f() has local defining equation y(y-x^2)∈_S, f(p).
Forgetting the last marked point defines a proper morphism
π_n:X_0,n: = M_0,n+1(S, D) → M_0,n(S, D)
from the universal curve. Evaluation on the (n+1)st point gives a map f:=ev_n+1: M_0,n+1(S, D) → S. As usual, we write
π:X_0: = M_0,1(S, D) → M_0(S, D)
in case n=0, and let π_:X_0^→ M_0^(S,D) be the restriction over
M_0^(S,D).
Let Δ_S⊂ S×_kS, Δ_X^_0⊂ X^_0×_M^_0(S, D)X^_0 be the diagonals. Define the locally closed subset ^ of X^_0×_M^_0(S, D)X^_0 by
^:=(f× f)^-1(Δ_S)∖Δ_X^_0
^ is closed in X^_0×_M^_0(S, D)X^_0.
Let ^ be the closure of ^ and suppose that
^∖^ is non-empty, equivalently, there is a point (p,p)∈^∩Δ_X^_0. Using the valuative criterion for properness, this means there is a complete discrete valuation ring , with generic point η, closed point a and parameter t, and a map g:→^ with g(η)∈^ and g(a)∈Δ_X^_0. We may assume that the residue field κ of is algebraically closed; after making a base-change to κ and changing notation, we may assume κ=k.
The projection → M^_0(S, D) gives us an unramified -morphism f_:^1_→ S_, together with two sections s_1, s_2: →^1_ such that f_(s_1(η))=f_(s_2(η)). Since S is separated over k, we have f_∘ s_1=f_∘ s_2. Let q∈ S() be the -point f_∘ s_1=f_∘ s_2. Consider the completion _S,q^∧ of _S_ along q. Since is complete and local, and S is smooth over k, we can write the ideal of q in _S,q^∧ as (y_1,y_2), and we have _S,q^∧=[[y_1,y_2]]:=lim_←, n[y_1,y_2]/((t, y_1,y_2)^n). Similarly, we have the k-point p=s_1(a)=s_2(a)∈^1(k), and we may assume that p=0∈^1:=^1∖{(0:1)}, with standard coordinate x:=X_1/X_0. We pass to the completion of [x] at (a,0), which we identify with [[x]].
Thus on [[x]], f_ is given by two elements f_i:=f_^*(y_i)∈[[x]], with f_i≡ 0 mod x, i=1,2. Similarly, the sections s_i are given by s_i∈, with s_i≡ 0 mod t, and f_i(s_1)=f_i(s_2), i=1,2. Translating on [[x]] by s_1, we may assume that s_1=0. Since s_1≠ s_2, s_2 is not zero, and we may write s_2=a_nt^n mod t^n+1 with a_n∈ k, a_n≠0.
Since f_i≡ 0 mod x, we may write f_i(x)=xh_i(x), i=1,2, for some h_i∈[[x]]. Since f_i(s_2)=0, and s_2≠0, we also have h_i(s_2)=0, so h_i is divisible by x-s_2 and f_i is thus divisible by x^2-xs_2. But then the image f̅_i(x)∈ k[[x]] under the quotient map [[x]]→ k[[x]] is divisible by x^2, so
df̅_i/dx(0)=0
and thus f_k:^1→ S is ramified at (1:0), contrary to our assumption that g maps into M^_0(S, D).
We now return to our usual setting over the field k.
* The locus of stable maps with a cusp or worse is a closed subset of M^_0(S, D).
* The locus of stable maps with a tacnode or worse is a closed subset of M^_0(S, D).
* For each m≥3, the locus of stable maps with an m-fold point or worse is a closed subset of M^_0(S, D).
We have the maps
f × f: X_0^×_M^_0(S, D) X_0^→ S × S
f × f × f: X_0^×_M^_0(S, D) X_0^×_M^_0(S, D) X_0^→ S × S × S
Let Δ(f) denote the inverse image of Δ_S under f × f. Then Δ(f) is closed and contains Δ_X_0^; by Lemma <ref>, we have the closed subset ^:=Δ(f)∖Δ_X_0^ of X_0^×_M^_0(S, D) X_0^. Clearly
^ parametrizes unramified maps f:→ S in M_0(S, D) together with a pair of points p≠ q∈ such that f(p)=f(q).
Similarly, for m≥3 an integer, we have the m-fold fiber product (X_0^)^×_M_0^m and the morphism
f^× m: (X_0^)^×_M_0^m→ S^m.
Let Δ^(m)_S⊂ S^m denote the (small) diagonal and let Δ^(m)(f):=(f^× m)^-1(Δ^(m)_S), a closed subset of (X_0^)^×_M_0^m. For 1≤ i<j≤ m, let Δ_X^_0, i,j⊂ (X_0^)^×_M_0^m denote the i,j-diagonal. It follows from repeated applications of Lemma <ref> that
(m-fold)^:=Δ^(m)(f)∖∪_1≤ i<j≤ mΔ_X^_0, i,j
is a closed subset of (X_0^)^×_M_0^m.
Since the projection
π^(m)_:(X_0^)^×_M^_0(S, D)m→ M^_0(S, D)
is proper, we have the closed subset
D^_m-fold:=π^(m)_((m-fold)^)
of M^_0(S, D), parametrizing those f∈ M^_0(S, D) having an m-fold point or worse.
For the case of a tacnode, let p:(T_S)→ S be the projectivization of the tangent bundle of S and let
T_2:^→(T_S)×_S(T_S)
be the map sending (f:→ S, p,q) to the pair of lines (df(T_,p), df(T_,q)), viewed as a pair of points in (T_S). Let ^_:=T_2^-1(Δ_(T_S))⊂^, a closed subset of ^, hence also closed in (X_0^)^×_M^_0(S, D)2. Letting
π^(2)_:(X_0^)^×_M^_0(S, D)2→ M^_0(S, D)
be the projection, we see as above that
D^_:=π^(2)_(^_∖Δ_X_0^)
is a closed subset of M_0^(S,D) that parametrizes maps f∈ M_0^(S,D) with a tacnode or worse.
Finally, for the case of a cusp, we consider the universal curve π:X^_0,1→ M^_0,1(S,D) with section s:M^_0,1(S,D)→ X^_0,1. Consider the universal map over M^_0,1(S, D),
F:X^_0,1→ S×_kM^_0,1(S, D)
and let R⊂ X^_0,1 be the support of the cokernel of the map
dF:F^*(p_1^*Ω_S)→Ω_X^_0,1/M^_0,1(S, D)
Let R̅_0,1:=π(R∩ s(M_0,1(S,D))), a closed subset of M^_0,1(S,D), and let R̅ be the image of R̅_0,1 under the projection M^_0,1(S,D)→ M^_0(S,D). Since π_:M^_0,1(S,D)→ M^_0(S,D) is the universal curve over M^_0(S,D), π_ is proper, and thus R̅ is closed in M^_0(S,D).
Relying on Lemma <ref>, we make the following definition.
We let Z_⊂ M^_0(S,D) be the closed subset of stable maps with a cusp or worse. We let Z_⊂ M^_0(S,D), resp. Z_⊂ M^_0(S,D) be the closed set of stable maps with a tacnode or worse, resp. a triple point or worse.
We have open subschemes
M^_0,n(S, D)⊂ M^_0,n(S, D)⊂ M_0,n(S, D)
Forgetting the last point defines a proper morphism
π:X_0,n: = M_0,n+1(S, D) → M_0,n(S, D)
from the universal curve. Evaluation on the (n+1)st point gives a map f:=ev_n+1: M_0,n+1(S, D) → S, which in turn induces a map of coherent sheaves df: f^* Ω_S →Ω_M_0,n+1(S, D)/M_0,n(S, D). The cokernel of df has closed support on X_0,n, whence closed image under π. The open complement in M_0,n(S, D) of this image is M^_0,n(S, D) by Definition <ref>.
Geometric points of the complement of M^_0(S, D) in M^_0(S, D) are f: ^1 → S with either three distinct points p_1,p_2,p_3 such that f(p_1) = f(p_2) = f(p_3) or two distinct points p_1,p_2 with f(p_1) = f(p_2) and f_* T_p_1^1 = f_* T_p_2^1. These are closed conditions as in Lemma <ref>.
§.§ Some geometry of moduli stacks of birational and/or unramified maps
Here are two fundamental results.
Suppose k is algebraically closed and of characteristic zero, n=d-1≥ 1 and S is a general del Pezzo of degree d_S. Let N_D, S be the Gromov-Witten invariant counting the number of rational curves in the curve class D passing through n general points of S and suppose N_D,S>0. Then N_D, S is equal to the number of integral rational curves C in the curve class D
passing through general points p_1,…, p_n of S. Moreover, for each such C, the associated morphism f:^1→ S with image C is unramified.
This result can be interpreted as follows: let
ev:M̅_0,n(S,D)→ S^n
(f:→ S, (p_1,…, p_n)) ↦ (f(p_1),…, f(p_n))∈ S^n
denote the evaluation map, where is a semi-stable genus 0 curve with n distinct points p_1,…, p_n and f:(, p_1,…, p_n)→ S is a stable map. For S general, if N_D, S>0, then ev:M̅_0,n(S,D)→ S^n is surjective and étale over a dense open subset U of S^n; moreover, for each p_*:=(p_1,…, p_n)∈ U, we have ev^-1(p_*)⊂ M^_0,n(S, D).
Suppose k is algebraically closed and of characteristic zero and that d_S≥2. Then M^_0(S, D) is empty or is irreducible of dimension d-1.
For results of this kind in positive characteristic, see <cit.>.
Recall that _f denotes the normal sheaf as defined by (<ref>).
Suppose that f is a geometric point of M_0(S, D) such that f:^1→ S is birational to the image curve f(^1) and H^1(^1, _f)=0; for instance, f a geometric point of M_0^(S, D) or f unramified. Then M_0(S,D) is a smooth scheme over k of dimension d-1 at f.
Note that if f is unramified, then _f≅_^1(d-2) (Remark <ref>) and H^1(^1, _f)=0, so we may assume f to be birational and H^1(^1, _f)=0.
Since f is birational, f has no automorphisms, so
M_0(S,D) is a k-scheme in a neighborhood of f. Since H^1(^1, _f)=0 and the morphism f:^1→ S has
no automorphisms, standard deformation theory shows that M_0(S,D) is smooth over k at f, with tangent space at f
H^0(^1, _f) ≅ T_fM_0(S, D).
By Remark <ref>, _f≅_^1(m)⊕_f^ with m=d-2-_FH^0(^1, _f^), where F denotes the field of definition of f. Thus H^0(^1, _f) has dimension d-1 over F, which also proves that M_0(S,D) is a smooth scheme over k of dimension d-1 at f, as claimed.
More generally, consider a geometric point (f,p_1,…,p_n) of M_0,n(S,D) where f: ^1 → S is a stable map and p_1,…, p_n are marked points on the domain curve ^1. There is a canonical isomorphism
T_fM_0,n(S,D) ≅ℍ^1(^1, [T_^1(-∑_i p_i) df→ f^* T_S] )
identifying the tangent space T_fM_0,n(S,D) with the hypercohomology of ^1 with coefficients in the two-term complex T_^1(-∑_i p_i) df→ f^* T_S, where T_^1(-∑_i p_i) is the sheaf of those tangent vector fields vanishing at the p_i. See <cit.>.
When f is birational, the map df: T_^1(-∑_i p_i) → f^* T_S is injective, and there is a canonical quasi-isomorphism between T_^1(-∑_i p_i) → f^* T_S and the sheaf _f,p defined by
0 → T_^1(-∑ p_i) → f^* T_S →_f,p→ 0.
Suppose that f is a geometric point of M_0,n(S, D) with field of definition F such that f:^1→ S is birational and H^1(^1, _f,p)=0. Then M_0,n(S,D) is smooth at f of dimension d-1+n and there is a canonical isomorphism
T_f M_0,n(S, D) ≅ H^0(^1, _f,p).
By Remark <ref>, there is a canonical quasi-isomorphism between _f,p and T_^1(-∑_i p_i) → f^* T_S. Since H^1(^1, _f,p)=0, it follows from <cit.> and standard deformation theory that M̅_0,n(S,D) is smooth at f and T_fM̅_0,n(S, D) ≅ H^0(^1, _f,p). Thus the dimension of M̅_0,n(S,D) at f is _F H^0(^1, _f,p), and this equals d-1+n, since _f,p≅_^1(m)⊕_f,p^ with m=n+d-2-_FH^0(^1, _f,p^), arguing as above.
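Explicitly (a bookkeeping step we spell out for convenience), writing \ell = \dim_F H^0(\mathbb{P}^1, \mathcal{N}_{f,p}^{tor}), the count is
\[
\dim_F H^0(\mathbb{P}^1, \mathcal{N}_{f,p}) = (m+1)+\ell = (n+d-2-\ell+1)+\ell = d-1+n .
\]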
Let f : (,p_1,…,p_n) → S be a point of M̅_0,n(S,D) satisfying the following conditions.
* =_1∪_2, with _i≅^1 and _1 ∩_2 = {p}.
* f is unramified and f|__1 is transversal to f|__2 at p.
Then M̅_0,n(S,D) is smooth at f and has dimension d -1 + n.
Let denote the mapping cone of f^*Ω_S →Ω_(∑_i = 1^n p_i), or equivalently is the two-term complex
= f^*Ω_S →Ω_(∑_i = 1^n p_i)
By <cit.> the tangent space of M̅_0,n(S,D) at f is ^1_(,_) and the obstructions are ^2_(,_). It follows from stability <cit.> that
^0(, _) = 0.
We show that
dim ^1_(,_) = d-1+n, ^2_(,_) = 0.
By definition of , there is long exact sequence
0→^0(, _)→^0(Ω_(∑_i = 1^n p_i), _)→^0(f^*Ω_S, _)
→^1(, _)→^1(Ω_(∑_i = 1^n p_i), _) →^1(f^*Ω_S, _)
→^2(, _) →^2(Ω_(∑_i = 1^n p_i), _) →^2(f^*Ω_S, _)→…
We show
^1(f^*Ω_S, _) = 0, dim ^0(f^*Ω_S, _) = d + 2.
Indeed, since f^* Ω_S is locally free,
^i(f^*Ω_S, _) ≅ H^i(, f^* T_S)
for all i.
Let i_j : _j → denote the inclusion and let f_j = f∘ i_j. Let D_j = (f_j)_*([^1]) and let d_j = -K_S · D_j. There is a short exact sequence
0 → f^*T_S → (i_1)_* i_1^* f^*T_S ⊕ (i_2)_* i_2^* f^*T_S → (i_p)_*(i_p)^*f^*T_S → 0.
Since i_j is affine,
H^k((i_1)_* i_1^* f^*T_S ⊕ (i_2)_* i_2^* f^*T_S) = H^k(_1,f_1^*T_S) ⊕ H^k(_2,f_2^*T_S), k = 0,1.
So, by the long exact sequence in cohomology, it suffices to show that
h^1(_i,f_i^*T_S) = 0, h^0(_i,f_i^*T_S) = d_i + 2,
and that the map
H^0(_1,f_1^*T_S) ⊕ H^0(_2,f_2^*T_S) → T_S,f(p)
is surjective. To prove (<ref>), consider the exact sequence
0 → T_^1→ f_i^*T_S →_f_i→ 0.
Observe that _f_i≅_^1(d_i-2). Moreover, since S is del Pezzo, d_i = -K_S · D_i > 0. Thus
h^0(_f_i) = d_i-1, h^1(_f_i) = 0.
Since
h^0(T_^1) = 3, h^1(T_^1) = 0,
equation (<ref>) follows.
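Assembling these via the sequence (<ref>) (a routine check spelled out for convenience):
\[
h^0(f_i^*T_S) = h^0(T_{\mathbb{P}^1}) + h^0(\mathcal{N}_{f_i}) = 3 + (d_i-1) = d_i+2, \qquad h^1(f_i^*T_S)=0 .
\]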
To prove the surjectivity of (<ref>), consider the commutative diagram
H^0(T_^1) → T_^1,p
  ↓ df_i        ↓ (df_i)_p
H^0(f_i^*T_S) → T_S,f_i(p)

in which the horizontal arrows are evaluation at p.
Since the upper horizontal arrow is surjective, the image of the bottom horizontal arrow contains (df_i)_p(T_^1,p). Since f_1 and f_2 are transversal at p, the surjectivity of (<ref>) follows.
Next, we calculate ^k(Ω_(∑_i = 1^n p_i), _). Indeed, since the dualizing sheaf ϖ of is a line bundle, Serre duality gives
^k(Ω_(∑_i = 1^n p_i), _) ≅^k(Ω_(∑_i = 1^n p_i)⊗ϖ, ϖ) ≅^1-k(_, Ω_(∑_i = 1^n p_i)⊗ϖ).
This shows that ^2(Ω_(∑_i = 1^n p_i), _) = 0. It follows from the exact sequence (<ref>) and (<ref>) that ^2(, _) = 0 and M̅_0,n(S,D) is smooth at f as claimed.
On the other hand,
χ(Ω_(∑_i = 1^n p_i)⊗ϖ) = ∑_k = 0^1 (-1)^k h^k(, Ω_(∑_i = 1^n p_i)⊗ϖ)
is constant in flat families.
We smooth to a flat family → k[[t]] with smooth generic fiber ^1_k((t)) and with sections 𝔭_i reducing to p_i over k, i=1,…, n. More precisely, is projective over k[[t]], ∖{p}→ k[[t]] is smooth, and an open neighborhood of p in is isomorphic to k[[t]][x,y]/(xy-t) as k[[t]]-scheme.
An easy computation shows that Ω_/k[[t]] is flat over k[[t]]; since → k[[t]] is an lci morphism, the relative dualizing sheaf ϖ_/k[[t]] is an invertible sheaf, hence is also flat over k[[t]]. Since the sheaf Euler characteristic is locally constant in flat, proper families, we have
∑_k = 0^1 (-1)^k h^k(, Ω_(∑_i = 1^n p_i)⊗ϖ)
=
∑_k = 0^1 (-1)^k h^k(^1_k((t)), Ω_^1_k((t))/k((t))(∑_i = 1^n 𝔭_i, k((t)))⊗Ω_^1_k((t))/k((t)))
=
∑_k = 0^1 (-1)^k h^k(^1_k((t)), _^1_k((t))(n-4)) = n-3.
It follows from the exact sequence (<ref>) and (<ref>) that
dim ^1_(,_) = d-1+n,
as claimed.
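For the reader's convenience, the final count combines the long exact sequence (<ref>), the computation (<ref>), and the Serre duality step above:
\[
\dim \mathrm{Ext}^1 = (d+2) + (n-3) = d-1+n,
\]
where d+2 is the dimension of H^0(, f^*T_S) and n-3 is χ(Ω_(∑_i p_i)⊗ϖ).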
Let (f,p_1,…,p_n) be a geometric point of M_0,n(S,D) such that f: ^1 → S is birational and H^1(^1, _f,p)=0. Let F be the field of definition of (f,p_1,…,p_n). Suppose additionally that
df ⊗ F: T_^1,p_i⊗ F ↪ T_S,q_i⊗ F
is injective for all i; for example, this holds if f is a geometric point of M^_0,n(S, D). Then there is an additional description of the kernel and cokernel of d ev in terms of the exact sequence
0→_f(-∑_ip_i)→_f→⊕_i (f^*T_S, q_i⊗ F)/df(T_^1, p_i⊗ F)→ 0,
where q_i = f(p_i). We give this description now. Applying the snake lemma to the map of short exact sequences
0 → T_^1(-∑_ip_i) → f^*T_S → _f,p → 0
0 → T_^1 → f^*T_S → _f → 0

in which the vertical maps are the natural inclusion T_^1(-∑_ip_i)→ T_^1, the identity on f^*T_S, and the induced map _f,p→_f,
defines a canonical isomorphism
ker(_f,p→_f) ≅coker(T_^1(-∑_ip_i) → T_^1) ≅⊕_i 𝒪_p_i
where 𝒪_p_i := (p_i)_* 𝒪_F is the pushforward of the structure sheaf of the point p_i. We also deduce from (<ref>) that _f,p→_f is surjective, giving the short exact sequence
0 →⊕_i 𝒪_p_i→_f,p→_f → 0.
This gives rise to the map of short exact sequences
0 → H^0(⊕_i 𝒪_p_i) → H^0(_f,p) → H^0(_f) → 0
0 → ⊕_i df(T_^1, p_i⊗ F) → ⊕_i f^*T_S,q_i⊗ F → ⊕_i (f^*T_S,q_i⊗ F)/df(T_^1, p_i⊗ F) → 0

in which the middle vertical map is d ev_f and the left vertical map is the canonical isomorphism.
As the left vertical map is an isomorphism, the snake lemma gives canonical isomorphisms
ker d ev_f ≅ker(H^0(_f) →⊕_i (f^*T_S,q_i⊗ F)/df(T_^1, p_i⊗ F)) ≅ H^0(_f(-∑_ip_i))
and
coker d ev_f ≅coker(H^0(_f) →⊕_i (f^*T_S,q_i⊗ F)/df(T_^1, p_i⊗ F))≅ H^1(_f(-∑_ip_i)).
Let F be an algebraically closed extension field of k. For f:^1_F→ S_F a morphism and p∈^1(F), choose a uniformizing parameter t_p∈𝔪_p⊂_^1, p and coordinates (x, y) at q=f(p ). Define the integer e_p≥0 by f^*(x,y)_^1, p=(t^e_p); we call e_p the ramification index of f at p.
If f^*(x,y)_^1, p=(t^e_p), then after a linear change of coordinates, we may assume that f^*(x)=ut^e_p, f^*(y)=vt^{e_p+r} with u, v∈_^1,p^× and r>0. Thus _f^⊗_^1, p≅ F^{t_p}, with t_p≥ e_p-1, with equality if e_p is prime to the characteristic. Let t(f)=∑_p∈^1t_p. We call t(f) the torsion index of f. For f : → S a possibly reducible stable map, we define the torsion index t(f) to be the sum of the torsion indices of the restrictions of f to the irreducible components.
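For instance (an illustration of these conventions; cf. the ordinary-cusp computations later in this section), in characteristic ≠ 2,3 the cuspidal parametrization f(t)=(t^2,t^3) at p=0 has
\[
f^*(x)=t^2,\qquad f^*(y)=t^3=t^{e_p+r}\ (r=1),\qquad e_p=2,\qquad t_p=e_p-1=1,
\]
so t(f)=1 if f is unramified away from p.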
Remark <ref> generalizes as follows.
Let F be an algebraically closed extension field of k, and let f:^1_F→ S_F a morphism.
* f is unramified in the sense of Definition <ref> if and only if e_p=1 for all points p. This is equivalent to the requirement that t(f) = 0.
* By (<ref>), we have _f/_f^=_^1(d-2-t(f)).
Suppose k is a field of characteristic zero. Let V⊂ M^_0(S, D) be an integral closed subscheme and let f be a geometric generic point of V. Then the composition
T_fV→ T_fM^_0(S, D)≅ H^0(^1_F, _f)→ H^0(^1_F, _f/_f^)
of the displayed canonical maps is injective. Moreover, either
* d-1 - dim V≥ t(f), or
* dim V = 0.
We prove the first assertion following the proof of a closely related result by Tyomkin <cit.>.
Since M̅_0,n(S,D) is a separated Artin stack and M^_0(S, D) is an open subscheme of M̅_0,n(S,D), there is an étale dominant map ϕ:Ṽ→ V and a morphism F:Ṽ×^1→Ṽ× S over Ṽ representing the inclusion V→M̅_0,n(S,D). We consider f as a geometric point of Ṽ.
Sending a point v∈Ṽ to the image curve F(v,^1)⊂ v× S defines a morphism α̃:Ṽ→ |D|≅^N; if F(v, ^1)=F(v', ^1), then since both F(v,-) and F(v',-) are birational maps to F(v, ^1), there is a unique isomorphism ϕ:^1→^1 (defined over k(v)⊗_k(α̃(v))k(v')) with F(v',-)=F(v,-)∘ϕ. Thus, since Ṽ→ V is étale, α̃ descends to a morphism α:V→ |D| and there is a dense open subscheme U of V over which the map α is an isomorphism with an open subscheme of the image scheme α(V)⊂ |D|. For v∈ V, let C_v be the image curve F(ṽ,^1) for ṽ∈Ṽ lying over v.
For v=f a geometric generic point of V, the map f:^1→ C:=f(^1) is birational and C has only finitely many singularities. Choose a smooth curve E on S (defined over k) such that
* H^0(S, _S(D-E))=0,
*
E intersects C transversely.
Then <ref> also holds for C_v for all v in a dense open subset of V. Letting N= E· D, this gives us the morphism β:V_0→_N(E), β(v)=C_v∩ E for V_0⊂ V a dense open subscheme. By <ref>, we may assume that β(V_0) is contained in the open subscheme ^0_N(E) of _N(E) parametrizing reduced closed subschemes of E of length N, which is a smooth scheme over k.
We claim that, after shrinking V_0 if necessary, β defines an isomorphism of V_0 with its image in ^0_N(E). Since the characteristic is zero, it suffices to show that β is injective on geometric points of V_0. (Indeed, by <cit.> it suffices to show that the field extension k(β(η_V)) ⊂ k(η_V) is an isomorphism, where η_V denotes the generic point of V or, equivalently, the image of f. Since k has characteristic 0, this is equivalent to [k(η_V):k(β(η_V))] = 1.)
Take v∈ V_0 a geometric point, giving the curve C_v on S. We have the exact sheaf sequence
0→_S(D-E)→_S(D)→_E(E· D)→0;
since H^0(S, _S(D-E))=0, restriction to E induces an inclusion of linear systems i_E^*: |D|→ |_E(E· D)|. Thus, for v, v' geometric points of V_0 with β(v)=β(v'), we have C_v∩ E= C_v'∩ E, hence C_v=C_v'; since α:U→ |D| is injective on geometric points and β=i_E^*∘α, we conclude that v=v'.
On the other hand, let W⊂^0_N(E) be a smooth locally closed subscheme and let w∈ W be a geometric point. Then w corresponds to N distinct points q_1,…, q_N of E and T_w ^0_N(E) is isomorphic to ⊕_i=1^NT_q_iE. Taking W=β(V_0) and w=β(f), the points q_1,…, q_N are the (transverse) intersection points of C∩ E, so at each q_i, we have T_S, q_i=T_C, q_i⊕ T_E,q_i. Since the points q_i are all smooth points of C=f(^1), and f:^1→ C is birational, we have T_C,q_i≅ T_^1,p_i, where p_i=f^-1(q_i), and the projection T_S, q_i→ T_E,q_i defines an isomorphism π_i:_f⊗ k(p_i)→ T_E, q_i. Since β is an isomorphism of V_0 onto β(V_0)⊂_N^0(E), the map T_f(V)→ T_β(f)^0_N(E)=⊕_i=1^NT_E, q_i given by the composition
T_f(V)⊂ H^0(^1, _f)→⊕_i=1^N_f⊗ k(p_i)→⊕_i=1^NT_E, q_i
is injective. But as all the points q_i∈ C are smooth points of C and f is unramified at each p_i, this latter map factors through H^0(^1, _f)→ H^0(^1, _f/_f^), so T_fV→
H^0(^1, _f/_f^) is injective, as claimed.
As H^0(^1_F, _f/_f^)≅ F^{d-1-t(f)} for d-1-t(f)≥0 and is zero if d-1-t(f)<0 (see Remark <ref>), the second assertion follows from the first, proving the lemma. In the first case, M^_0(S, D) is smooth at f of dimension d-1 by obstruction theory, because the obstruction space H^1(^1_F, _f) ≅ H^1(^1_F, _f/_f^)=0 and H^0(^1_F, _f) ≅ F^{d-1}. In the second case, dim V = 0 because dim V ≤dim T_f V = 0.
For k of characteristic p>0, the first assertion of Lemma <ref> is false: Consider the family of maps f_a:^1→^2
f_a(t_0, t_1)=(t_0^{p-2}t_1^2+at_0^p, t_1^p, t_0^p), a∈^1,
Fixing an a, take the tangent vector corresponding to the morphism f_{a+ϵ} over k(a)[ϵ]/(ϵ^2). Then f_a and f_{a+ϵ} have the same defining equation y^2=x^p-a^p, so the corresponding section of _f_a vanishes away from t=0.
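To see where the common defining equation comes from, one can check directly in the affine chart x=X_0/X_2, y=X_1/X_2, with t=t_1/t_0 (a computation we include for convenience):
\[
x=t^2+a,\quad y=t^p \ \Longrightarrow\ y^2=t^{2p}=(x-a)^p=x^p-a^p,
\]
and (a+\epsilon)^p=a^p in k(a)[\epsilon]/(\epsilon^2), so f_a and f_{a+\epsilon} indeed have the same image.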
Let k be a perfect field, S a del Pezzo surface over k and D an effective Cartier divisor on S. Let n=- K_S· D-1. Let f : (^1,p_*) → S be a geometric point of M_0,n(S,D)^. Then M̅_0,n(S, D) is a smooth scheme of dimension 2n at f, and ev:M̅_0,n(S, D)→ S^n is étale at f.
Since f is unramified, f is birational by Lemma <ref>, so there are no automorphisms of f and M̅_0,n(S, D) is a scheme near (f, p_*). Since f:^1→ f(^1) is unramified, we have _f≅_^1(n-1) (Remark <ref>) and n≥0, so H^1(^1, _f)=0 and H^0(^1, _f)≅ F^n. Lemma <ref> implies that M̅_0,n(S, D) is smooth of dimension 2n at
(f, p_*). By Remark <ref>, the kernel and cokernel of d ev at f are isomorphic to H^0(^1, _f(-∑_i=1^n p_i)) and H^1(^1, _f(-∑_i=1^n p_i)), respectively. Since _f≅_^1(n-1), it follows that _f(-∑_i=1^n p_i)≅_^1(-1), so both of these terms are zero.
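The numerology here is exactly the expected one:
\[
\dim_f \bar{M}_{0,n}(S,D) = (d-1)+n = 2n = \dim S^n ,
\]
so ev is a map between smooth schemes of the same dimension at f, and the vanishing of the kernel and cokernel of d ev makes it étale at f.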
Assume d_S = 2. Then, the anti-canonical map π : S →^2 is a 2-1 finite morphism with branch divisor a smooth quartic curve.
This is <cit.>.
Assume d_S = 2 and char k ≠ 2,3. Let π : S →^2 be the anti-canonical map as in Lemma <ref>. Let f : ^1 → S be birational to its image C = f(^1) and have C · (-K_S) = 2. Then one of the following holds.
*
π|_C : C →π(C) is an isomorphism, π(C) is a smooth conic, and f : ^1 → C is an isomorphism.
*
π|_C : C →π(C) has degree 2 and one of the following holds.
* f is unramified and C has a single ordinary double point.
*
C has a single ordinary cusp and f is ramified at a single point with t(f) = 1. Moreover, π(C) is a line tangent to the branch curve of π at a flex.
Since π has degree 2, it follows that π|_C : C →π(C) is either birational or has degree 2.
If π:C→π(C ) is birational, then π(C ) ·(1) = C · (-K_S) = 2, so π(C) is a smooth conic. Hence, f : ^1 → C and π|_C : C →π(C) are both isomorphisms.
If π : C →π(C) has degree 2, then
2 (π(C ) ·(1)) = π_*(C) ·(1) = C · (-K_S) = 2,
so π(C) is a line ℓ. Let E ⊂^2 be the branch curve of π. There are five possible cases:
ℓ· E = p_1 + p_2 + p_3 + p_4, ℓ· E=2· p_1 +p_2+p_3,
ℓ· E=3· p_1+p_2, ℓ· E = 2 p_1 + 2p_2, ℓ· E=4· p,
with the p_i distinct in the first four cases. If ℓ· E were the sum of 4 distinct points, then C would be a smooth curve of genus 1, contrary to the hypothesis. In the last two cases, C would be geometrically reducible, contrary to the hypothesis. In the second case, C has an ordinary double point, so f must be unramified. In the third case, C has an ordinary cusp, and since char k ≠ 3, it follows that f is ramified at a single point with t(f) = 1.
Often we will want to use the following assumption.
For every effective Cartier divisor D' on S, there is a geometric point f in each irreducible component of M^_0(S, D') with f unramified.
We prove in Appendix <ref> that Assumption <ref> holds for char k > 3 and d_S ≥ 3. See Theorem <ref>. We thank Sho Tanimoto for suggesting the argument.
If char k = 0 and d_S ≥ 2, then Assumption <ref> holds.
Let f be a geometric generic point of M^_0(S,D'). By Theorem <ref> the scheme M^_0(S,D') is irreducible of dimension (-K_S· D')-1. Consequently, Lemma <ref> implies that the map H^0(^1_F, _f)→ H^0(^1_F, _f/_f^) is injective. So, _f^ is trivial and f is unramified.
Suppose that k is a field of characteristic ≠2,3. Furthermore, suppose d_S ≥ 2 and Assumption <ref> holds. Let f∈ M^_0(S, D) be a geometric generic point.
Then f is in M^_0(S, D).
Since the condition of being unramified is open, Assumption <ref> implies that f is unramified. By Remark <ref> we have _f = _^1(d-2). Since d ≥ 1, it follows that H^1(^1,_f) = 0. Therefore, Lemma <ref> gives dim_f M_0^(S,D) = d-1.
Let C:=f(^1). Suppose first that d≥ 4. Since dim_f M^_0(S, D)=d-1, we may apply <cit.>, which gives the result in this case. Since this result is proven under the assumption of characteristic zero, we give a quick sketch of the proof of the relevant portion. Since f is unramified, we have _f≅_^1(d-2). We need to show that any point q of C has at most two preimages, and moreover, if two points of ^1 have the same image under f, the images of their tangent spaces are distinct. Suppose, for the sake of contradiction, that there are three distinct points p_1, p_2, p_3∈^1 with f(p_i)=q for i=1,2,3. Identify H^0(^1, _f) with T_fM^_0(S, D), and consider the first order deformation of f corresponding to s∈ H^0(^1, _f). Since f is birational, there is an open neighborhood of q such that all other points of the neighborhood have at most one preimage under f. Since f is a geometric generic point, the first order deformation must retain the property that there are three points mapping to one. Thus if s(p_1)=s(p_2)=0, then s(p_3)=0 as well. On the other hand, since d-2≥ 2 and _f≅_^1(d-2), we may find an s∈ H^0(^1, _f) with s(p_1)=s(p_2)=0 but s(p_3)≠0, which yields the desired contradiction.
We are now reduced to eliminating the possibility that there are points p_1, p_2∈^1 with f(p_1)=q=f(p_2) and df(T_^1, p_1)=df(T_^1, p_2). Suppose that d≥5. Since f is unramified, _f≅_^1(d-2), and since d-2≥3, we can find a section s∈ H^0(^1, _f) with s having a 2nd order zero at p_1 and a zero of order one at p_2. For the associated deformation f_u of f defined over F[[u]], we have, to first order, f_u(p_1)=f_u(p_2) and df_u(T_^1, p_1)=df(T_^1, p_1), but df_u(T_^1, p_2)≠ df(T_^1, p_2): if we take analytic coordinates (x,y) on S at q:=f(p_1)=f(p_2) so that the image of the branch of f around p_2 is defined by y=0, then for a suitable local parameter t on ^1 at p_2, we have f_u(t)=(t, aut) modulo terms of higher order in t and u, with a≠0. Thus df_u(T_^1, p_2)⊂ T_S,q≅^2 is the span of the vector (1, au), while df_u(T_^1, p_1) is the span of (1,0), both modulo u^2. This eliminates the tacnode in f(^1) at q in the deformation f_u(^1); as above, this implies that there was no tacnode in f(^1) to begin with.
Suppose d=4 and d_S≥3. The anti-canonical map embeds S in a ^d_S, so we may consider f(^1) as a degree four rational curve in ^d_S with a tacnode at q=f(p_1)=f(p_2). We claim that f(^1) is contained in a ^2⊂^d_S. Since every degree four rational curve is a linear projection of a degree four rational normal curve in ^4, the fact that f(^1) is not smooth implies that f(^1) is contained in a ^3⊂^d_S. Let ℓ be the tangent line to the tacnode of f(^1) and consider a plane Π containing ℓ. If f(^1)⊄Π, then since ℓ is tangent to each of the two branches of f(^1) at q, the intersection multiplicity at q of Π and f(^1) in ^3 is at least 4, hence equal to 4 since f(^1) has degree 4. Taking a point q'∈ f(^1), q'≠ q, we can take Π' to be the plane spanned by ℓ and q'. But if f(^1)⊄Π', then Π'· f(^1) has degree ≥5, which is impossible, so f(^1) is contained in Π'.
This implies that the intersection multiplicity in Π' at q of ℓ with f(^1) is 4, and thus ℓ has intersection multiplicity 2 with each of the two branches of f(^1) at q. Using the fact that f(^1) has arithmetic genus 3, one sees that in local analytic coordinates at q on Π', f(^1) has equation of the form (y-x^2)(y-ax^2-bx^3+…)=0, where either a≠1, or a=1 and b≠0 (here we are using the assumption that char k≠2,3). Consider now f(^1) as a curve on the smooth surface S. We claim there is a choice of analytic coordinates at q on S so that f(^1) also has equation of the same form in the completion of _S,q. Indeed, Π' must be tangent to S at q, because the Zariski tangent space of f(^1) at q has dimension 2 since q is a singular point, and therefore it is equal to the Zariski tangent space of S at q. It follows that a projection from S to Π' is a local analytic isomorphism and the form of the equation of f(^1) at q is unchanged.
We may assume that the branch through p_1 has the equation y=x^2 and the branch through p_2 the equation y=ax^2+bx^3+…. Since d=4, we have _f=_^1(2), so there is a section s of _f having a zero of order 2 at p_1 and with s(p_2)≠0. The resulting deformation f_u of f has image curve f_u(^1) with local analytic equation at q of the form (y-x^2)(y-ax^2-bx^3+…-u)=0, modulo higher order terms in u. Thus, the intersection of the two local branches of f_u(^1) coming from a neighborhood of p_1 and a neighborhood of p_2 is given by (1-a)x^2-bx^3=u, which in characteristic ≠ 2,3 shows that the tacnode has separated into two ordinary double points if a≠1, respectively three ordinary double points if a=1, b≠0. As above, this shows that there was no tacnode on f(^1) to begin with.
Suppose that d_S≥ 3 and d ≤ 3. We rule out multiple points and tacnodes by a global argument. When d = 3, as above, f(^1) is a rational cubic curve in ^d_S. Thus, f(^1) is either a smooth twisted cubic curve in a ^3⊂^d_S, or a singular cubic in a ^2⊂^d_S. In the first case, there is nothing to show, and in the second, since f is unramified, f(^1) has a single ordinary double point as singularity. If d_S≥ 3 and d=1,2, then f(^1) is a line (d=1) or a smooth conic (d=2). This completes the proof for d_S≥ 3.
If d_S=2, we are in the situation of Lemma <ref> with anti-canonical double cover π:S→^2 branched along a smooth degree four curve E. We have handled the case d≥5 above.
We handle the case d=4 as follows. Let C=f(^1). Then either π:C→π(C) is a double cover, with π(C) a smooth conic, or π:C→π(C) is birational, with π(C) a rational quartic curve. In the latter case, we need only eliminate the case of C having a tacnode. If C does have a tacnode, say at q'∈ C, then π(C) has a tacnode at q:=π(q'). Since π(C) is a quartic curve, the tacnode on π(C) has local analytic equation as above: (y-x^2)(y-ax^2-bx^3+…) with a≠1, or a=1 and b≠0. If the map π is unramified at q', then C has the same local analytic equation at q' as does π(C) at q, in suitable analytic coordinates x',y'. If π is ramified at q', then a local analytic computation shows that C has local analytic equation at q' of the form (y'-x'^2)(y'-a'x'^2+…) with a'≠1, again in suitable analytic coordinates x',y'. In either case, we proceed exactly as we did above in the case d_S≥3, d=4.
If C→π(C) is a double cover, then π(C) is a smooth conic and π(C)· E must be of the form
π(C)· E=p_1+p_2+2q_1+2q_2+2q_3
with p_1≠ p_2 and the p_i distinct from all the q_j (see the proof of Lemma <ref>). If all the q_j are distinct, then C is smooth outside of ordinary double points at the points q_j' over q_j, j=1,2,3. If however q_1=q_2, then C acquires an ordinary tacnode at q_1'=q_2', and if q_1=q_2=q_3, then C acquires a higher order tacnode at q_1'=q_2'=q_3'. Since d=4, we have dim_f M^_0(S, D)=3, so we need only show that the dimension of the space of smooth conics that have π(C)· E of the form p_1+p_2+2q_1+2q_2+2q_3 with at least two of the q_j equal is at most 2.
For this, fix q_1=q_2=q and consider the linear system on E cut out by degree two curves C' with C'· E-4q-2q_3>0. This is the projective space (H^0(E, _E(2)(-4q-2q_3))). If h^0(E, _E(2)(-4q-2q_3))>0, then C'· E-4q-2q_3=p_1+p_2 is an effective divisor of degree two. By adjunction, the canonical class K_E is _E(1), so (H^0(E, K_E(-p_1-p_2))) is the projective space of lines ℓ in ^2 with ℓ· E≥ p_1+p_2; in other words, (H^0(E, K_E(-p_1-p_2))) is the line through p_1 and p_2 if p_1≠ p_2, or the line tangent to E at p if p=p_1=p_2. Thus h^1(E, _E(2)(-4q-2q_3))=h^0(E, K_E(-p_1-p_2))=1 and by Riemann-Roch, we have
h^0(E, _E(2)(-4q-2q_3))=2+1-3+h^1(E, _E(2)(-4q-2q_3))=1,
assuming that h^0(E, _E(2)(-4q-2q_3))>0.
In other words, for fixed q, q_3∈ E, there is at most one smooth conic C' with C'· E-4q-2q_3=p_1+p_2 with p_1≠ p_2 and the p_j distinct from q, q_3. We can then vary the points q, q_3 over E, to conclude that the space of smooth conics that have π(C)· E of the form p_1+p_2+4q+2q_3 as above has dimension at most 2. The same argument shows that the space of smooth conics that have π(C)· E of the form p_1+p_2+6q with p_1≠ p_2 and p_j≠ q for j=1,2 has dimension at most 1. This finishes the proof in case d_S=2, d=4.
It remains to handle the cases d=1,2,3, with d_S=2 and char k≠ 2,3.
If d=3, then π:C→π(C) is birational and π(C) is a degree 3 integral rational curve in ^2, hence has a single singularity, which is either an ordinary double point or an ordinary cusp. Thus C itself is either smooth or also has a single singularity, which is either an ordinary double point or an ordinary cusp. Since f is unramified, C cannot have an ordinary cusp. Thus f is in M^_0(S, D).
For d=1, π(C ) is a line and thus f : ^1 → C and π:C→π(C ) are isomorphisms. If d=2, then we are in the situation of Lemma <ref>. In all cases of the lemma except <ref><ref> we see immediately that f is in M^_0(S, D). We show that case <ref><ref> does not occur as follows.
Either π:C→π(C) is birational, in which case π(C) is a smooth conic and C is smooth, or π:C→π(C) has degree two, in which case π(C) is a line ℓ. In this latter case, let E⊂^2 be the smooth quartic branch curve of the map π:S→^2. Since f:^1→ C is birational, there are three possible cases: either ℓ· E=2· p_1 +p_2+p_3, ℓ· E=3· p_1+p_2 or ℓ· E=4· p, with the p_i distinct in the first two cases; if ℓ· E were the sum of 4 distinct points, then C would be a smooth curve of genus 1. In the first case C has an ordinary double point, in the second, an ordinary cusp, and in the third a tacnode (in this latter case, π^-1(ℓ) is a union of two (-1)-curves, intersecting at a single point with multiplicity 2, but we will not need this fact).
By Lemma <ref> there are only finitely many possibilities for ℓ=π(C) if ℓ· E=3· p_1+p_2 or ℓ· E=4· p. Since f is a geometric generic point and dim_f M^_0(S, D)=d-1=1, f is not of this form. This completes the proof.
Let k be an algebraically closed field of characteristic ≠ 2, 3. Let E⊂^2 be a smooth degree four curve. Then for all but finitely many points p∈ E, the tangent line ℓ_p to E at p intersects E at p with multiplicity two. Moreover, E has only finitely many bi-tangents.
Since E is a smooth plane quartic curve, E has genus (4-1)(4-2)/2=3; let J(E) denote the Jacobian of E. We first show that for at most finitely many p, one has ℓ_p· E=4· p. Indeed, if this is the case for p and q, then 4p and 4q are both line sections of E, so _E(4· (p-q))≅_E. Thus, if ℓ_p· E=4· p for all but finitely many p∈ E, then the map α:E(k)× E(k)→ J(E)(k), α(p,q)=[_E(p-q)], has image in the 4-torsion subgroup of J(E)(k), plus possibly finitely many additional points of J(E)(k). This is impossible, since the image of α generates J(E)(k) and J(E) is an abelian variety of dimension three.
Suppose that for all p∈ E, we have ℓ_p· E=3· p+p' (possibly p=p'), and choose q∈ E such that ℓ_q· E=3· q+q' with q'≠ q and such that a general line through q intersects E in four distinct points. Let π:E→ be the linear projection from q, where is the ^1 of all lines in ^2 containing q. For ℓ such a line, π^-1(ℓ)=ℓ· E-q. Then π has degree three and since we are assuming the characteristic is different from three, π is a separable morphism. For p≠ q in E, π is ramified at p if and only if the tangent line ℓ_p contains q; at such p, π has ramification index e_p(π)=3. For ℓ=ℓ_q, π^-1(ℓ)=ℓ_q· E-q=2q+q', so e_q(π)=2; at all other points x∈ E, e_x(π)=1. Since the characteristic is ≠ 2,3, π is everywhere tamely ramified. But then the Riemann-Hurwitz formula says
3· (-2)+∑_x∈ E(e_x(π)-1)=2g(E)-2=4
which is not possible, since ∑_x∈ E(e_x(π)-1)=(e_q(π)-1)+∑_{p: e_p(π)=3}(e_p(π)-1) is odd.
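For convenience, the two counts being compared are, with g(E)=3 and deg π=3,
\[
\sum_{x\in E}(e_x(\pi)-1) = (2g(E)-2)-3\cdot(2g(\mathbb{P}^1)-2) = 4+6 = 10,
\]
which is even, while (e_q(π)-1)+∑_{p: e_p(π)=3}(e_p(π)-1)=1+2·#{p: e_p(π)=3} is odd.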
We finish by showing that E has only finitely many bi-tangents (we include as a bi-tangent a line ℓ with ℓ· E=4p). By what we have already shown, for all but finitely many points p∈ E, each line ℓ through p that is also a tangent line to E at some point q≠ p satisfies ℓ· E=p+p'+2q with p'≠ q, and if ℓ is the tangent line to E at p, then ℓ· E=2p+q+q' with p≠ q, p≠ q'. Taking such a point p and considering the projection from p, π:E→^1 as above, we see that each ramified point q of π satisfies e_q(π)=2. By the Riemann-Hurwitz formula, this says that there are exactly 10 such points, so there are 10 lines ℓ through p with ℓ· E=p+p'+2q and with p'≠ q. At most one of these lines can be the tangent line to E at p, so there exist at least nine points q on E such that the tangent line to E at q is not a bi-tangent. Since the set of bi-tangents is a closed subset of the dimension one variety of all tangent lines to E, E has only finitely many bi-tangents.
The Fermat quartic E⊂^2 defined by ∑_i=0^2X_i^4=0 is an example of a smooth quartic curve over a field of characteristic three such that each tangent line ℓ_p has at least a three-fold intersection at p: the Hessian matrix is identically zero. We do not know an example in characteristic two.
Suppose that k is an algebraically closed field of characteristic ≠2,3. Suppose d_S≥ 2 and S satisfies Assumption <ref>. Let f∈ M^_0(S, D) be a geometric generic point over k.
Let C⊂ S be a reduced curve defined over k. Then each point p∈ f(^1)∩ C is a smooth point of C. Moreover, if d≥3, then each point p∈ f(^1)∩ C is a smooth point of f(^1) and f(^1) and C intersect transversely at p.
The assumption that d ≥ 3 is necessary. For example, in characteristic p, let C be the image of the map ^1 →^1 ×^1 given by t ↦ (t, t^p) and let f be the inclusion of ^1 ×{[1,0]}.
Let C'=f(^1). By Proposition <ref>, the map f:^1→ C' is birational and unramified and C' has only ordinary double points as singularities. Since f is free and birational, the degree d:=C'·(-K_S) satisfies d≥ 2.
We first show that for p∈ S(k) a k-point, p is not in C'. Let F be an algebraically closed field of definition for f:^1→ S and suppose that p is in f(^1). There are two cases: f^-1(p)={q_1, q_2} with q_1≠ q_2 in ^1(F), or f^-1(p)=q is a single F-point of ^1. In the first case, p is an ordinary double point of C'; let ℓ_1, ℓ_2 be the two tangent lines, ℓ_i=df(T_^1, q_i). Since d≥2 and f is unramified, _f=_^1(d-2), so there is a section s of _f with s(q_i)≠ 0 for i=1,2. There are thus analytic coordinates x,y for S at p such that C' has equation xy=0 at p, and a deformation f_u corresponding to s, defined over F[[u]], has image curve f_u(^1) with equation (x-au)(y-bu)=0 with ab≠ 0, modulo terms of order ≥2. Considering f_u as a morphism defined over the algebraic closure of F((u)), this shows that p is not in f_u(^1). Since f was already a geometric generic point of M^_0(S, D) over k, p is not in f(^1). A similar argument treats the case where f^-1(p) is a single point.
In particular, this shows that f(^1)∩ C contains no singular point of C.
Now suppose d≥ 3 and take p∈ f(^1)∩ C. If p is a singular point of f(^1), we have f^-1(p)= {q_1, q_2} with q_1≠ q_2 in ^1(F). We first show that df(T_^1,q_i)≠ T_C,p for i=1,2. For this, we already have df(T_^1,q_1)≠ df(T_^1,q_2), so we may assume that df(T_^1,q_1)=T_C,p, df(T_^1,q_2)≠ T_C,p, and that in the local analytic description of f(^1) as xy=0, T_C,p is given by x=0 and df(T_^1,q_2) is given by y=0. This also identifies _f⊗ F(q_1) with df(T_^1,q_2). Since d ≥ 3, there is a section s of _f having a zero of order one at q_1, s(t)=at+…, a≠0, where t is the local coordinate at q_1 given by the pullback of y and we use a local trivialization of _f at q_1 corresponding to a trivialization of df(T_^1,q_2). Letting f_u be a deformation of f over F[[u]] with first order term given by s, we obtain an equation for f_u(^1) of the form (x-auy)(y-bu)=0, modulo terms of higher degree, which shows df_u(T_^1,q_1)≠ T_C,p. As f is already a geometric generic point over k, this shows that df(T_^1,q_1)≠ T_C,p to begin with.
A similar argument shows that df(T_^1,q)≠ T_C,p if p is a smooth point of f(^1) and f(q)=p.
Now suppose that there is a point p∈ f(^1)∩ C that is a singular point of f(^1). Write f^-1(p)={q_1, q_2}. Since d≥3, there is a section s of _f≅_^1(d-2) such that s(q_1)=0, s(q_2)≠ 0. Since df(T_^1,q_i)≠ T_C,p for i=1,2, we have analytic coordinates x,y at p such that f(^1) is defined by xy=0 and C is defined by y=x+…. We identify _f⊗ F(q_i) with T_C,p, i=1,2, and use the pullback of y-x as a local coordinate at q_1. Letting f_u be a deformation of f corresponding to s, we have the equation for f_u(^1) of the form (x-au(y-x))(y-bu)=0 modulo terms of higher order, and with b≠0. This shows that the double point on f_u(^1) is (0, bu) in these coordinates, modulo terms of higher order, and thus the tangent vector describing the 1st order movement of the double point of f(^1) is non-zero in the normal bundle of C at p. This shows that the double point p of f(^1) moves away from C in f_u(^1) over F((u)). As above, this shows that each point of f(^1)∩ C is smooth on f(^1).
Since df(T_^1, q)≠ T_C,f(q) if p=f(q) is in C, this implies that f(^1) and C intersect transversely at each intersection point p.
Let k be a field of characteristic 0. Let V⊂ M^_0(S, D) be an integral closed subscheme, let f∈ V be a geometric generic point and let C:=f(^1)⊂ S. Suppose that C has a cusp at q∈ S and let p∈^1 be the point lying over q. Then:
*
codim V≥ 1.
*
If codim V=1 and either d_S≥3 or d≥ 6, then q is an ordinary cusp and f is unramified on ^1∖{p}.
<ref> Since C has a cusp, f is ramified, and thus _f/_f^≅_^1(d-2-t(f)) with t(f)≥1, as in Remark <ref>. Since the map T_fV→ H^0(^1, _f/_f^) is injective (Lemma <ref>) and M^_0(S, D) is smooth of dimension d-1, we have codim V ≥ 1.
<ref> Now suppose that codim V=1. By the computation above, we have t(f)=1=t_p(f), so f is unramified away from p and e_p(f)=2. Suppose that the cusp at q is a higher order cusp; this implies that C is not a component of any H∈ |-K_S|. Take a standard system of parameters t, (x,y) for the cusp, so C has local equation y^2=x^{2n+1}, n≥2. If d_S≥3, there is a ^1 of curves H∈ |-K_S| passing through q and tangent to the limiting tangent line at q. All such H have local equation of the form y=a_2x^2+a_3x^3+…, so there is at least one such H with a_2=0. This yields a multiplicity of at least 6 for q in H· C, so d≥6. Thus, in any case, we have d≥6.
We have _f/_f^≅_^1(d-3), so as d-3≥3, there is a global section s of _^1(d-3) having a zero of order 3 at p.
In our standard coordinate system t, (x,y), we have f(t)=(t^2, t^{2n+1}) for some n>1. Defining the invertible subsheaf L⊂ f^*T_S as the kernel of f^*T_S→_f/_f^, we have the injective map df:T_^1→ L with image L(-p)⊂ L. Thus L≅_^1(3), H^1(^1, L)=0, so H^0(^1, f^*T_S)→ H^0(^1, _f/_f^) is surjective and we may lift s to s̃∈ H^0(^1, f^*T_S). With respect to our standard parameters t, (x,y), we have L⊗ F(p)=F·∂/∂ x|_p and ∂/∂ y|_p maps to a generator of _f/_f^⊗ F(p). Thus, in f^*T_S⊗__^1_^1, p^∧, we have
s̃=a(t)·∂/∂ x +b(t)· t^3·∂/∂ y
with a(t)∈_^1, p^∧ and b(t) a unit in _^1, p^∧.
The section s̃ defines a 1st order deformation f_ϵ,1 of f, which one can lift to a deformation f_ϵ over F[[ϵ]], since H^1(^1, f^*T_S)=0. From our description of s̃, we have
f_ϵ≡ f_ϵ,1(t)=(t^2, t^{2n+1})+ϵ·(a(t), b(t)· t^3) mod ϵ^2
By a translation in x (congruent to the identity mod ϵ) we may assume that a(t)=0 and thus
f_ϵ=(t^2, t^{2n+1}+ϵ· b(t)· t^3) mod ϵ^2
Working over F((ϵ)) we may replace y with (1/b(0)ϵ)· y-∑_j≥ 2b_jx^j+y·∑_j≥ 1c_jx^j to form a standard coordinate system t, (x, y_ϵ) with
f_ϵ^*(x)=t^2, f_ϵ^*(y_ϵ)=t^3
Over the field F((ϵ)), the image curve C_ϵ:=f_ϵ(^1) has an ordinary cusp at f_ϵ(p), which shows that V is a proper closed subscheme of an irreducible component of Z_; as codim Z_≥1 by Lemma <ref>, this contradicts codim V=1.
Let k be a field of characteristic zero. Let V⊂ M^_0(S, D) be an integral closed subscheme, let f∈ V be a geometric generic point and let C:=f(^1)⊂ S.
*
Suppose that d_S≥ 2 or d≥ 4 and that C has a tacnode. Then codim V≥ 1.
*
Suppose that d_S≥ 4, or d_S=3 and d≠ 6, or d≥ 7. Suppose that C has a tacnode of order ≥ 2. Then codim V≥ 2.
<ref> By Lemma <ref>, we may assume that f is unramified, so _f≅_^1(d-2). Suppose that C has the tacnode at q. We note that C is not a component of any H∈ |-K_S|: if S_k̅≇^1×^1, then S_k̅ is the blow-up of ^2 at 9-d_S points and thus each H∈ |-K_S| projects to a cubic curve in ^2 (containing those points). Since no irreducible component of a cubic plane curve has a tacnode, C cannot be a component of H. In case S_k̅≅^1×^1, take an H∈ |-K_S| and blow up S at a smooth point of H, π:S'→ S. The proper transform of H to S' is in |-K_S'|, so again, no component of H has a tacnode.
If d_S≥2, we may find an H∈ |-K_S| containing q such that, if H is smooth at q, the tangent T_H,q is equal to the common tangent line of the tacnode: this represents at most two linear conditions on |-K_S|≅^d_S. These conditions imply that H intersects each of the two branches of C at q with multiplicity at least two, and thus d= H· C≥ 4, so in any case d≥ 4.
Let p_1, p_2∈^1 be the pre-images of q under f. Since q is a tacnode, we have df(T_^1, p_1)=df(T_^1, p_2), which gives a canonical isomorphism of the normal spaces _f⊗ k(p_1)≅_f⊗ k(p_2).
Since the family of maps parametrized by V is equisingular on a dense open subset and f is a geometric generic point, the family is equisingular in a neighborhood of f. This implies that the tangent map T_fV→ H^0(^1, _f)=T_fM_0(S,D) followed by the restriction map
_p_1, p_2:H^0(^1, _f)→_f⊗ k(p_1)⊕_f⊗ k(p_2)≅(_f⊗ k(p_1))^⊕ 2
has image contained in the diagonal. Since _p_1, p_2 itself is surjective (d-2≥ 2), it follows that codim V≥1.
For <ref>, we first consider the case of a tacnode of order ≥ 2. Let f^-1(q)={p_1, p_2} and choose a standard system of parameters t_1, t_2, (x,y) for the tacnode at q. This gives the local defining equation for C, y(y-x^{n+1})∈_S, q^∧≅ F[[x,y]], with n≥2. If d_S≥3, there is a ^1 of curves H∈ |-K_S| which contain q and with tangent T_H,q equal to the common tangent line of the tacnode (or are singular at q). Thus there is an H∈ |-K_S| with local defining equation g=y+bxy+cy^2+…, and then q appears with multiplicity ≥ 6 in H· C. Thus d≥ 6 if d_S≥3. If d_S≥ 4, there is a ^2 of curves H∈ |-K_S| which contain q and with tangent T_H,q equal to the common tangent line of the tacnode (or are singular at q). Arguing as above, there is a ^1 of H∈ |-K_S| such that H intersects C at q with multiplicity ≥ 6, and thus we can find such an H that also intersects C at a point q'≠ q, hence d≥ 7. Thus, in all cases, we have d≥ 7.
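To see the multiplicity claim concretely, write the two branches of the order-n tacnode as t↦(t,0) and t↦(t,t^{n+1}) with n≥2; a local equation g=y+bxy+cy^2+… as above, whose omitted terms have order ≥3 in x, then pulls back with order at least 3 on each branch:
\[
g(t,0)=O(t^3),\qquad g(t,t^{n+1})=t^{n+1}+O(t^{n+2})+O(t^3),\qquad n\ge 2,
\]
so q contributes at least 3+3=6 to H· C.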
Suppose that f is ramified and d≥ 6. Then by Lemma <ref>, codim V≥1 and if codim V=1, then by Lemma <ref>, t(f)=1 and f is ramified at a single point p_3, with f(p_3) a simple cusp. Thus _f/_f^≅_^1(d-3). Consider the map df:T_^1→ f^*T_S. Since f is ramified to first order at p_3, the kernel L of f^*T_S→_f/_f^ is an invertible subsheaf of f^*T_S containing df(T_^1) and with quotient L/df(T_^1)≅ k(p_3). Thus df(T_^1)=L(-p_3) and L≅_^1(3).
Since d≥6, we have dim_FH^0(^1, _f/_f^)≥ 4, so the restriction map
_p_1,p_2: H^0(^1, _f/_f^)→_f,p_1/(t_1^2)⊕_f,p_2/(t_2^2)
is surjective. Thus there is a section s of _f/_f^ which in the local coordinates t_1 at p_1 and t_2 at p_2 has the form
s(t_1)=at_1+c_1t_1^2+…, s(t_2)=c_2t_2^2+…
with a≠0. As in the proof of Lemma <ref>, this defines a 1st order deformation f_ϵ,1 of f, which we can lift to a deformation f_ϵ of f defined over F[[ϵ]] with q_ϵ:=f_ϵ(p_1)=f_ϵ(p_2) an ordinary double point (over F((ϵ))). Thus
C_ϵ:=f_ϵ(^1) is not an equisingular deformation at q. We claim that we can modify f_ϵ without changing the class of C_ϵ in the local deformation space at q_ϵ, but such that there is an F[[ϵ]]-point p_3ϵ deforming p_3 such that f_ϵ is ramified at p_3ϵ. This will exhibit V as a proper closed subscheme of an irreducible component of Z_, hence codim V> codim Z_≥1.
To verify the claim, write f_ϵ in the standard coordinate system (s, (x', y')) for the ordinary cusp at q'=f(p_3):
f_ϵ(s)=(s^2+∑_i=1^∞ϵ^i∑_j=0^∞ a_i,js^j,
s^3+∑_i=1^∞ϵ^i∑_j=0^∞ b_i,js^j)
In the basis ∂/∂ x', ∂/∂ y' for f_ϵ^*T_S near p_3, the line bundle L has generator λ:=(2, 3s), with df(T_^1)⊂ L the _^1-submodule generated by s·λ. We modify f_ϵ,1 first by adding -∑_i=1^∞ϵ^i·(b_i,1/3)·λ to eliminate the linear term in the y' coordinate. Note that in a neighborhood of p_1 and p_2, L=df(T_^1), so modifying by a section of L acts by a local automorphism of ^1 in a neighborhood of p_1, p_2, which does not affect the class of C_ϵ in the local deformation theory of C near q. Thus we may assume that f_ϵ is of the form
f_ϵ(s)=(s^2+∑_i=1^∞ϵ^i∑_j=0^∞ a_i,js^j,
s^3+∑_i=1^∞ϵ^i∑_j=0^∞ b_i,js^j)
with b_i,1=0 for all i. Making a similar modification by adding -∑_i=1^∞ϵ^i·(a_i,1/2)· s·λ will eliminate the linear terms in the x' coordinate, so we may assume that a_i,1=0 for all i; this modification corresponds to a translation in s, so we have the new origin p_3ϵ. Then it is clear that f_ϵ is ramified at p_3ϵ.
Suppose now that f is unramified. In this case, we use the assumption that d≥7. Then _f≅_^1(d-2), so d-2≥5 and dim_FH^0(^1, _f)≥ 6, and thus the restriction map
_p_1,p_2: H^0(^1, _f/_f^)→_f,p_1/(t_1^3)⊕_f,p_2/(t_2^3)
is surjective. We take a global section s of _f with a zero of order three at p_1 and a zero of order two at p_2 and construct as above a deformation f_ϵ of f with first-order deformation corresponding to s, and of the form
f_ϵ(t_1)=(t_1, ϵ· t_1^3) mod (ϵ^2, ϵ· t_1^4F[[t_1]]), f_ϵ(t_2)=(t_2, t_2^{n+1}+ϵ· t_2^2) mod (ϵ^2, ϵ· t_2^3F[[t_2]])
and with f_ϵ(p_1)=f_ϵ(p_2).
This gives the image curve C_ϵ an ordinary tacnode at q_ϵ=f_ϵ(p_1)=f_ϵ(p_2), so V is a proper closed subscheme of an integral component of Z_, hence codim V≥ 2.
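Indeed, in these coordinates the two branches of C_ϵ at q_ϵ are y=ϵ x^3 and y=x^{n+1}+ϵ x^2, and their contact order is
\[
\operatorname{ord}_x\big(\epsilon x^3-(x^{n+1}+\epsilon x^2)\big)=2 ,
\]
so the two smooth branches are tangent with contact of order exactly two, i.e., an ordinary tacnode.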
Let S be a del Pezzo surface over a field k of characteristic zero and let V⊂ M^_0(S, D) be an integral closed subscheme. Let f∈ V be a geometric generic point and let C:=f(^1)⊂ S.
*
Suppose that d_S≥2, f∈ M^_0(S,D) and C has a singular point q of order m. Then codim V≥ m-2.
*
Suppose that d_S≥2 and C has singular points q≠ q'. Suppose that f is ramified at a point p' with f(p')=q', that f is unramified at all points p with f(p)=q, and that C has multiplicity m>2 at q. Then codim V≥ 2.
*
Suppose that d_S≥3 or d≥ 6. Suppose that C has a singular point q, that f is ramified at a point p with f(p)=q, and that f is unramified at a point p'≠ p with f(p')=q. Then codim V≥ 2.
*
Suppose that d_S≥3, f∈ M^_0(S,D) and C has singular points q, q' of order m, m', respectively. Then
codim V≥ m+m'-4.
*
Suppose that d_S=2, f∈ M^_0(S,D) and C has singular points q, q' of order m, m', respectively. Then codim V≥ m+m'-5. If d≥ 7 and m≥ m'≥ 3, then codim V≥ 2.
<ref> We refer to the exact sequence (<ref>). Since f is unramified, _f≅_^1(d-2). Since d≥1, we have H^1(^1, _f)=0, so by Lemma <ref>, M_0(S, D) is smooth of dimension d-1 at f.
For a general H∈ |-K_S| with q∈ H, H is integral and does not contain C as a component. Since |-K_S| has dimension d_S≥2, there is an H∈ |-K_S| with H∩ C⊃{q, q'}, with q≠ q', so d=(H· C)≥ m+ 1.
Let F be an algebraically closed field over which f is defined. Since f is unramified, f^-1(q)={x_1,…, x_m} with x_i≠ x_j for i≠ j. Given v∈ T_V,f, we have the corresponding first order deformation f_v of f, defined over F[ϵ]/(ϵ^2), the corresponding deformations x_iϵ of x_i and q_ϵ of q, with f_v(x_iϵ)=q_ϵ for all i. Let L_i⊂ T_S, q be the image df(T_^1, x_i). The deformation f_v corresponds to a section s_v of _f, which gives us the affine subspaces L_i+s_v(x_i)⊂ T_S, q, and the conditions f_v(x_iϵ)=q_ϵ, i=1,…, m, imply ∩_i=1^m(L_i+s_v(x_i))≠∅. Let W⊂⊕_i=1^m_f⊗ k(x_i) be the set of (v_i)∈⊕_i=1^m_f⊗ k(x_i) satisfying ∩_i=1^m(L_i+v_i)≠∅; W is a linear subspace of codimension ≥ m-2. Since _f≅_^1(d-2) and d-2≥ m-1, it follows that dim H^0(^1, _f)≥ m and the product of restriction maps
Res:=∏_i=1^m_x_i:H^0(^1, _f)→⊕_i=1^m_f⊗ k(x_i)
is surjective. Letting W'⊂ H^0(^1, _f) be the inverse image of W under Res, we see that W' has codimension ≥ m-2 and s_v∈ W' for all v∈ T_V,f. Thus the image of T_V,f under the map v↦ s_v is contained in a subspace of H^0(^1, _f) of codimension ≥ m-2. Via the identification H^0(^1, _f)=T_M_0(S,D), f, the map v↦ s_v is just the inclusion of T_V,f in T_M_0(S,D), f, so V has codimension ≥ m-2 in M_0(S,D), as claimed.
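Concretely, the codimension bound on W used above can be seen as follows: identifying _f⊗ k(x_i) with T_S,q/L_i, the subspace W is precisely the image of the linear map
\[
T_{S,q}\longrightarrow \bigoplus_{i=1}^m T_{S,q}/L_i,\qquad w\mapsto (w\bmod L_i)_i ,
\]
so dim W≤ 2 and W has codimension at least m-2.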
For <ref>, the fact that f is ramified at p' implies that _f^≠{0} and thus _f/_f^≅_^1(d-s) with s≥3. By Lemma <ref>, the map T_V,f→ H^0(^1, _f/_f^) is injective. If codim V≤ 1, then dim T_V,f≥ d-2, and as dim H^0(^1,_^1(d-s))=d-s+1, we have s=3, codim V= 1 and T_V,f→ H^0(^1,_^1(d-3)) is an isomorphism.
Since f is ramified at p' and d_S≥ 2, there is an H∈ |-K_S| with H· C≥ m· q+2· q', so d≥ m+2. If m≥ 4, then by <ref>, we have codim V≥ 2, so we may assume that m=3, so d-3≥ 2. Letting p_1, p_2, p_3 be the points of ^1 mapping to q, we see that the map
_p_1, p_2, p_3: H^0(^1, _f/_f^)→⊕_i=1^3 _f⊗ k(p_i)
is surjective, and thus the composite T_V,f→⊕_i=1^3 _f⊗ k(p_i) is surjective as well. However, from the proof of <ref>, the condition that the triple point at q deforms along a first order deformation corresponding to a section s∈ H^0(^1, _f) defines a proper subspace of ⊕_i=1^3 _f⊗ k(p_i), which contradicts the fact that T_V,f→⊕_i=1^3 _f⊗ k(p_i) is surjective.
For <ref>, we argue as for <ref> to reduce to the case m=3, t(f)=1 and f is unramified on ^1∖{p}. We consider an analytic neighborhood of C near q as the union of the branch (C, p) corresponding to p∈^1 and the branch (C, p') corresponding to p'∈^1. Since m=3, t(f)=1 and f is unramified on ^1∖{p}, it follows that the branch
(C, p) is a cusp. Since d_S≥ 3, there is an H∈ |-K_S| containing q and singular at q. As the multiplicity of q in the branch (C,p) is 2, this implies that q occurs with multiplicity ≥ 6 in H· C, so d≥6. Also _f/_f^≅_^1(d-3) so d-3≥ 3. Moreover, if (C,p) is not an ordinary cusp, then a small modification of the argument for Lemma <ref><ref> shows that codim V≥2, so we may assume that (C,p) is an ordinary cusp.
Letting t∈_^1, p be a local parameter, the restriction map
_(p,2) p': H^0(^1, _f/_f^)→_f/_f^⊗_^1, p/(t^2)⊕_f⊗ k(p')
is surjective. There is thus a section s∈ H^0(^1, _f/_f^) mapping to zero in
_f/_f^⊗_^1, p/(t^2) and to a non-zero element in _f⊗ k(p'). Since H^1(^1, _f)=0, the corresponding first-order deformation f_ϵ,1 is unobstructed. Arguing as for the proof of Lemma <ref><ref>, we may extend f_ϵ,1 to a deformation f_ϵ over F[[ϵ]] so that f_ϵ (considered over F((ϵ))) is still ramified at p. The conditions on s imply that f_ϵ(p)≡ q mod ϵ^2, while the branch of f_ϵ through p' moves away from q to first order in ϵ, and thus the branch of f_ϵ through p' does not pass through f_ϵ(p ). Thus f_ϵ(p ) has multiplicity two on C_ϵ. This implies that V is a proper closed subscheme of an integral component of Z_, hence codim V≥2.
The proof of <ref> is similar: taking an H∈ |-K_S| passing through q and q' and tangent to one of the branches of C at q, we see that d≥ m+m'+1. We thus have dim H^0(^1, _f)≥ m+m' and for p_1,…, p_m lying over q and p'_1,…, p'_m' lying over q', the evaluation map
_p_*, p_*':H^0(^1, _f)→⊕_i=1^m_f⊗ k(p_i)⊕⊕_i=1^m'_f⊗ k(p_i')
is surjective. Arguing as in <ref>, the subspace of
⊕_i=1^m_f⊗ k(p_i)⊕⊕_i=1^m'_f⊗ k(p_i') corresponding to first order deformations of the local germs of f near p_1,…, p_m, p'_1,…, p'_m' for which q and q' deform to singular points of order m, m' respectively has codimension ≥ m+m'-4,
and thus codim V≥ m+m'-4.
For <ref>, we have the estimate d≥ m+m'. In this case, the map _p_*, p_*' has image of codimension at most one, and the argument of <ref> shows that codim V≥ d-5≥ m+m'-5; in particular, if m≥ m'≥ 3 and m+m'≥ 7, then codim V≥2. If m=m'=3 and d≥7, then as d-2≥ 5, the restriction map
_p_*, p_*':H^0(^1, _f)→⊕_i=1^3_f⊗ k(p_i)⊕⊕_i=1^3_f⊗ k(p_i')
is surjective and the argument of <ref> shows that codim V≥ 2.
Let k be a field of characteristic 0 and suppose that d_S≥ 4 or d_S=3 and d≠6, or d≥ 7. Let V⊂ M^_0(S, D) be an integral closed subscheme, let f∈ V be a geometric generic point and let C:=f(^1)⊂ S. Suppose that codim V=1.
*
If C has a triple point at q∈ S, then q is the only triple point of C, q is an ordinary triple point, f is unramified, and all other singularities of C are ordinary double points.
*
If C has a cusp at q∈ S, then q is the only cusp of C, q is an ordinary cusp and all other singularities of C are ordinary double points.
*
If C has a tacnode at q∈ S, then q is an ordinary tacnode, f is unramified, and all other singularities of C are ordinary double points.
<ref> If f is ramified at some point, then by Lemma <ref><ref><ref>, codim V≥2, so f must be unramified. Similarly, if C has a point q' of multiplicity ≥4, then by Lemma <ref><ref>, codim V≥2, so C has only triple points and double points.
We first show that q is an ordinary triple point. Since f is unramified, f^-1(q) is three distinct points p_1, p_2, p_3 of ^1. If q is not ordinary, then (after reordering) df(T_^1, p_2)=df(T_^1, p_3).
As in the proof of Lemma <ref><ref>, we have d≥ 6, and _f≅_^1(d-2). Moreover, there is a global section s with s(p_1)≠ 0, and s having a second order zero at p_2 and p_3. To first order, this preserves the tacnode at q corresponding to the branches at p_2, p_3, but the branch at p_1 no longer passes through q. Since H^1(N_f)=0, this 1st order deformation is unobstructed, and arguing further as in the proof of Lemma <ref><ref>, we can extend this to a deformation f_ϵ of f over F[[ϵ]] with f_ϵ(^1) having a tacnode at q. Since D_ has codimension one, this implies that V has codimension ≥2, contrary to our assumption.
Now suppose C has a double point at q' and that q' is not an ordinary double point. Since f is unramified, q' must be a tacnode; let p'_1, p'_2∈^1 be the points lying over q'. If d_S≥3, there is an H∈ |-K_S| passing through q' and q, and sharing the common tangent line at q'. Thus d≥ 6. If d_S=3, then by assumption, d≥7. If d_S≥ 4, then we can find an H as above and passing through an additional point of C, so again d≥7, and thus in all cases d≥7.
As _f≅_^1(d-2), H^0(^1, _f) has dimension ≥ 6 and
:H^0(^1, _f)→⊕_i=1^3_f⊗ k(p_i)⊕_f⊗ k(p'_1) ⊕_f⊗_^1, p_2'/𝔪_p_2'^2
is surjective. Taking a section s of H^0(^1, _f) with first order zeros at p_1,p_2, p_3, a second order zero at p_2', but with s(p'_1)≠0, we obtain a first order deformation f_ϵ,1 that is equisingular at q but not so at q'. As before, we can extend f_ϵ,1 to a deformation f_ϵ over F[[ϵ]] so that q deforms to a triple point on f_ϵ(^1), but the deformation near q' is not equisingular. Thus V is a proper closed subscheme of D_, so codim V≥2, contrary to assumption.
For <ref>, suppose f is ramified at p∈^1 with f(p )=q. By Lemma <ref>, q is an ordinary cusp and f is unramified on ^1∖{p}. By Lemma <ref>, C:=f(^1) has only double points. If q'≠ q is a double point of C, then as f is unramified over q', q' must be a tacnode. By Lemma <ref>, q' is an ordinary tacnode. Assuming d_S≥3, let H∈ |-K_S| be chosen so that {q, q'}⊂ H and that H is tangent to the common tangent line at q'; as above, this actually implies that d≥7 and _f/_f^≅_^1(d-3), d-3≥ 4. Letting p'_1, p'_2∈^1 be the points over q', we may find a section s∈ H^0(^1, _f/_f^) with a second order zero at p and at p_1' and s(p'_2)≠ 0. As in the proof of Lemma <ref><ref> and <ref>, the corresponding first order deformation f_ϵ,1 of f can be extended to a deformation f_ϵ over F[[ϵ]] so that f_ϵ is ramified at p, but the deformation C_ϵ:=f_ϵ(^1) is not equisingular at q', which yields codim V≥ 2, contrary to assumption.
For <ref>, Lemma <ref><ref> implies that the tacnode at q must be an ordinary tacnode, by <ref> f is unramified and by <ref> and Lemma <ref>, all other singularities are double points. Applying Lemma <ref><ref>, each double point q'≠ q is either an ordinary tacnode or an ordinary double point, so suppose q' is an ordinary tacnode.
Suppose d_S≥3. Taking an H∈ |-K_S| passing through q and q' and tangent to the common tangent line at q, we see that d≥ 6 and N_f≅_^1(d-2); as above, this implies that d≥7 in all cases. Let p_1, p_2∈^1 be the points lying over q and p_1', p_2'∈^1 be the points lying over q'. Let t_1, t_2,(x,y) be a standard system of parameters for q and let t_1', t_2',(x',y') be a standard system of parameters for q'.
Consider the restriction map
:H^0(^1, _f)→_f⊗_^1, p_1/(t_1^2) ⊕_f⊗_^1, p_2/(t_2^2)⊕_f⊗ k(p_1')⊕_f⊗ k(p_2')=:W
Since d≥7, the restriction map is surjective. In particular, we may find a global section s of _f that has a second order zero at p_1 and p_2, a first order zero at p_1' and is non-zero at p_2'. As H^1(^1, _f)=0, we may extend the corresponding first order deformation of f to a deformation f_ϵ defined over F[[ϵ]].
Using the surjectivity of the restriction map, we may take our extension f_ϵ of the first order deformation so that q deforms to an ordinary tacnode q_ϵ on the image curve C_ϵ, and in the coordinates t_1', t_2',(x',y'), we have
f_ϵ(t'_1)=(t'_1+ϵ· x_1(t'_1, ϵ), ϵ· y_1(t'_1, ϵ)),f_ϵ(t'_2)=(t'_2+ϵ· x_2(t'_2, ϵ), t_2^'2+ ϵ· y_2(t'_2, ϵ))
with y_1(0,ϵ)=0, y_2(0,ϵ)≠ 0.
Translating in x' and then in t'_2 (by translations ≡ 0 mod ϵ) we may rewrite this as
f_ϵ(t'_1)=(t'_1, ϵ· y_1(t'_1, ϵ)),f_ϵ(t'_2)=(t'_2, t_2^'2+ ϵ· y_2(t'_2, ϵ))
still with y_1(0,ϵ)=0, y_2(0,ϵ)≠ 0. Translating by replacing y' with y'-ϵ· y_2(x', ϵ), we reduce to
f_ϵ(t'_1)=(t'_1, ϵ·ỹ(t'_1, ϵ)),f_ϵ(t'_2)=(t'_2, t_2^'2),
where ỹ:=y_1-y_2 satisfies ỹ(0,ϵ)≠ 0. The image curve C_ϵ=f_ϵ(^1) thus has defining equation (y'-ϵ·ỹ(x', ϵ))(y'-x^' 2)∈ F[[x',y', ϵ]]. By Lemma <ref>, C_ϵ does not have a tacnode q'_ϵ specializing to q'. This exhibits V as a proper closed subscheme of an integral component of Z_, forcing codim V≥2, contrary to assumption.
Let f=y(y-x^2)∈ F[[x,y]] define an ordinary tacnode over an algebraically closed field F of characteristic ≠2. Let
f_ϵ(x,y)=(y-∑_i=1^∞ϵ^ig_i(x))(y-x^2)∈ F[[x,y,ϵ]]
define a deformation of f over F[[ϵ]] and suppose that g_1(0)≠0. Then f_ϵ is not equisingular: the curve C_ϵ:= Spec((F[[√(ϵ), x,y]]/(f_ϵ))[1/ϵ]) has two ordinary double points specializing to (0,0) and no other singularities.
Write ∑_i=1^∞ϵ^ig_i(x)=ϵ·∑_i,j≥0a_i,jϵ^ix^j with a_i,j∈ F. Then a_0,0=g_1(0)≠0, so there is an h(x,ϵ)∈ F[[ϵ,x]] with h^2=∑_i,j≥0a_i,jϵ^ix^j. The singular locus of C_ϵ is just the intersection of y=∑_i=1^∞ϵ^ig_i(x) with y=x^2, that is, the subscheme defined by x^2-ϵ· h^2 or, over F[[√(ϵ), x]], (x-√(ϵ)· h)(x+√(ϵ)· h). Since F[[√(ϵ), x]]/(x-√(ϵ)· h) and F[[√(ϵ), x]]/(x+√(ϵ)· h) are both reduced, we have the desired description of C_ϵ.
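As a minimal check of the lemma, take g_1(x)=1 and g_i=0 for i≥2, so f_ϵ=(y-ϵ)(y-x^2). Here h=1, and the singular locus is cut out by y=ϵ, y=x^2, that is, by x^2-ϵ=(x-√(ϵ))(x+√(ϵ)): the two singular points are (±√(ϵ), ϵ), at each of which the line y=ϵ meets the parabola y=x^2 transversally, giving the two ordinary double points specializing to (0,0).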
Note that Z_, Z_ and Z_ each have only finitely many irreducible components, by Definition <ref>.
We define reduced codimension one subschemes D_, D_, D_ on M̅_0(S, D) as follows.
* Let D_ be the closure in M̅_0(S, D) of the union of the codimension one integral components Z_⊂ M_0^(S,D)
* Let D_ be the closure in M̅_0(S, D) of the union of the codimension one integral components Z_⊂ M_0^(S,D)
* Let D_ be the closure in M̅_0(S, D) of the union of the codimension one integral components Z_⊂ M_0^(S,D)
§ NON-BIRATIONAL AND NON-FREE MAPS
Having examined M^_0(S, D), we look more closely at the moduli stack of primary interest, M̅_0,n(S, D) with n=-D· K_S-1; set d=-K_S· D. We have the “forget the marked points” map π_n/0:M̅_0,n(S, D)→M̅_0(S, D), which is a composition of the structure maps for the various universal curves π_i+1/i:M̅_0,i+1(S, D)→M̅_0,i(S, D), hence proper and flat.
The codimension one subschemes D_, D_, D_ of M̅_0,n(S, D) are given by applying π_n/0^-1 to the corresponding closed subschemes of M̅_0(S, D).
The results on M^_0(S,D) of the previous section carry over directly to M^_0,n(S,D): for instance, setting d:=-D· K_S, M^_0,n(S,D) is a smooth finite-type k-scheme with dim_k M^_0,n(S,D)=2d-2, or M^_0,n(S,D) is empty; this follows from Lemma <ref>. We proceed to study the complement M̅_0,n(S, D)∖ M^_0,n(S,D).
Following the construction of M̅_0,n(S, D) given in <cit.>, there is a quasi-projective scheme M̃̅̃_0,n(S,D) with _N-action, presenting
M̅_0,n(S, D) as quotient stack _N\M̃̅̃_0,n(S,D). For an F-point of M̃̅̃_0,n(S,D), F⊃ k an algebraically closed field, we have the corresponding morphism f:→ S, where is a semi-stable genus 0 curve. This gives us the image Cartier divisor f_*(), which we may consider as an F-point of the projective space |D|; this extends to a morphism f̅:M̃̅̃_0,n(S,D)→ |D|. We note that f_*()=(gf)_*(g) for g∈_N. It follows that we have the morphism f̅: M_0,n(S,D)→ |D|≅^N, N=(D^(2)+d)/2, sending the equivalence class [f] of a morphism f:→ S to f_*([]). For V⊂𝐌_0,n(S,D) a locally closed substack, we have the constructible subset f̅(V)⊂ |D| and we may speak of the dimension dim f̅(V), which is at most dim V.
Let f : ^1 → S factor as f = g ∘ q where g : ^1 → S is birational onto its image and q : ^1 →^1 is a finite map. Then we have a short exact sequence
0 →coker(dq) →_f → q^* _g → 0.
Moreover, if g is free then f is free.
We have the following commutative diagram.
The rows are the exact sequences 0→ T_^1→ f^* TS→_f→ 0, with first map df, and 0→ q^* T_^1→ q^* g^* TS→ q^*_g→ 0, with first map q^*dg; the vertical maps are dq, the canonical isomorphism f^* TS≅ q^* g^* TS, and the induced map _f→ q^*_g.
So, the short exact sequence follows by the snake lemma. Since char k = 0, it follows that coker(dq) is torsion. Thus,
_f/_f^≃ q^* (_g/_g^).
If g is free, then _g/_g^≃ O(m) for m ≥ 0, so _f/_f^≃ O(deg(q)· m). So f is free.
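For instance, if q(t)=t^2 (so deg(q)=2 and, in characteristic zero, q is ramified exactly at 0 and ∞) and g is free with _g≅ O(m), m≥ 0, then dq: T_^1≅ O(2)→ q^*T_^1≅ O(4) has torsion cokernel of length 2 supported at the two ramification points, and the exact sequence above yields _f/_f^≅ q^*(_g/_g^)≅ O(2m), illustrating the last sentence with deg(q)=2.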
Suppose k has characteristic zero. Let V⊂M̅_0,n(S, D) be an integral closed substack with geometric generic point f. Suppose that f corresponds to a morphism f:^1→ S with image curve C:=f(^1) and that f is non-free. Then dim f̅(V)=0. Moreover, one of the following cases holds:
*
d=1, n=0, C is a -1 curve on S and f:^1→ C is an isomorphism.
*
d=2, C is a -1 curve on S and f:^1→ C is a 2-1 cover. In this case, codim ev(V)=1.
*
d=2, d_S=2, f:^1→ C is birational, C has an ordinary cusp at q∈ C, f^-1(q) is a single point p∈^1 and f:^1∖{p}→ C∖{q} is an isomorphism. Moreover codim ev(V)=1.
*
d≥ 3 and codim ev(V)≥ d-1≥ 2.
In case (3), V is dense in a component of D_.
If f is not birational to its image, we factor f = f' ∘ c where c : ^1 →^1 has degree e and f': ^1 → S is birational to its image. Let D' = f'_*([^1]) and let V' be the closure of f'. Then D = e D' and dim f̅(V) = dim f̅(V'). Lemma <ref> implies that f' is not free.
Since f' is non-free, we have _f'/_f'^≅_^1(m) with m<0 and thus
H^0(^1, _f'/_f'^)={0}.
Since the tangent map T_f'V'→ H^0(^1, _f'/_f'^) is injective (Lemma <ref>), this says that dim V'=0, so f_*([^1]) is the unique geometric point of f̅(V). Thus each element of ev(V) consists of a sequence of n points of C. So, dim ev(V) ≤ n and codim ev(V) ≥ 2n-n=d-1. If d≥ 3, we are in case <ref>.
If d=1, then f is birational and C is a line and thus a -1 curve, giving us case <ref>.
Suppose d = 2. It follows that n = 1. First assume d_S ≥ 3, so -K_S embeds S. So, either C is a conic and f:^1→ C is an isomorphism or C is a line and f:^1→ C is a double cover. If f:^1→ C is an isomorphism, then _f≅_^1, so f is free contrary to our hypothesis. If f is a double cover,
we apply Lemma <ref> to obtain _f/_f^≅_^1(-2), so f is not free and we are in case <ref>.
If d=2 and d_S=2, then we are in the situation of Lemma <ref>. If f:^1→ C has degree 2, then C · (-K_S) = 1, so C is a -1 curve on S and we are again in case <ref>.
Suppose d=d_S=2 and f:^1→ C is birational. We are in the situation of Lemma <ref>. If π:C→π(C ) is birational, then π(C) is a smooth conic, C is smooth and f:^1→ C is an isomorphism. Thus, _f=_^1 and f is free, contrary to the hypothesis. If π:C→π(C ) is a double cover, then either f:^1→ C is unramified, hence _f=_^1 and f is free, or C has a single ordinary cusp, and t(f)=1 so _f/_f^≅_^1(d-3)=_^1(-1). This is case <ref>.
Let k be a field. Suppose Assumption <ref> holds for S. Let V⊂M̅_0,n(S, D) be an integral closed substack with geometric generic point f. Suppose that f corresponds to a morphism f:^1→ S with image curve C:=f(^1) and let d_f be the degree of f:^1→ C. Then dim f̅(V)≤ (d/d_f)-1.
By passing to an extension, we may assume that k is algebraically closed, and in particular perfect. Let d_C=-C· K_S, so d_C· d_f=d and d/d_f=d_C≥1; in particular, d≠1 if d_f>1. Let F be an algebraically closed field of definition for C and f. Let C̃ be the normalization of C. Then C̃≅^1 (Lüroth's theorem) and f factors as ^1→C̃→ C, where the first map f̃ has degree d_f and the second map g is birational. By Assumption <ref>, there is an unramified map g_0 in the irreducible component of M_0^(S, C) containing g. By Lemma <ref>, dim_g_0 M_0^(S, C) = d_C -1. Thus
dim f̅(V)≤dim_g_0 M_0^(S, C) ≤ d_C-1=(d/d_f)-1.
Let U be a normal, integral scheme over a field F and let → U be an n-marked semi-stable genus zero curve. We say that is treelike if the normalization π : → is a disjoint union with each component isomorphic to ^1_U.
Let → U be a treelike family over U, with U normal and integral.
*
Suppose the number of irreducible components in the normalization of is r. Then for each geometric point x of U, the fiber _x has exactly r-1 double points.
*
Let u be a geometric generic point of U and let y be a double point of _u. Then there is a unique pair of components _i, _j of with y equal to the image of (_i×__j)_u in _u. Let η be the generic point of U and let
(_i×__j)_η denote the closure of (_i×__j)_η in _i×__j. Then the projection
(_i×__j)_η→ U
is an isomorphism.
*
Let _i, _j be distinct irreducible components of and let u be a geometric generic point of U. If (_i×__j)×_Uu=∅, then _i×__j=∅.
*
For each pair of components _i, _j with _i×__j≠∅, the projection _i×__j→ U is an isomorphism, defining two sections σ_i,j^i:U→_i, σ_i,j^j:U→_j via the two projections
_i×__j→_i, _i×__j→_j.
For <ref>, recall that each connected genus zero semi-stable curve P over an algebraically closed field F has χ(_P)=1. Let π:P̃→ P be the normalization of P, and let r be the number of irreducible components of P̃. Then χ(_P̃)=r. On the other hand, we have the exact sheaf sequence on P,
0→_P→π_*_P̃→⊕_i=1^sF(p_i)→0
where {p_1,…, p_s} are the double points of P. Then
r=χ(_P̃)=χ( π_*_P̃)=χ(_P)+s=1+s,
so r=s+1.
Now, since our family is treelike, each geometric fiber _x has normalization P̃_x≅⨿_i=1^r^1_x, so each geometric fiber _x has exactly r-1 double points.
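For example, if P is a chain P_1∪ P_2∪ P_3 of three copies of ^1 with double points P_1∩ P_2 and P_2∩ P_3, then r=3 and s=2, and indeed χ(_P)=χ(π_*_P̃)-s=3-2=1.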
For <ref>, it follows directly from the definition of a semi-stable, genus zero curve that there is a unique pair of components _i u, _j u of _u with y equal to the image of _i u×__u_j u in _u. The first part of (2) thus follows from the fact that is treelike. It is also obvious that the map _i u×__u_j u→_u→ u is an isomorphism, so (_i×__j)_u→ u is an isomorphism.
The morphism u→η is qcqs and faithfully flat, so the map (_i×__j)_η→η is an isomorphism. For each geometric point x of U, _ix×__x_jx is a finite set so (_i×__j)_η→ U is birational, proper and quasi-finite. Since U is normal, and (_i×__j)_η is integral, the projection is an isomorphism, by Zariski's main theorem.
For <ref>, it follows from <ref> that _u has exactly r-1 double points, y_1,…, y_r-1. By (2), there are sections σ_1,…, σ_r-1 of the corresponding fiber products _i×__j over U, with image sections σ̅_1,…,σ̅_r-1:U→ with σ̅_i(η)=y_i. By (1) again, for each geometric point x of U, the points σ̅_1(x),…,σ̅_r-1(x) are exactly the double points of the fiber _x.
If now (_i×__j)×_Uu=∅, but there is a z∈ (_i×__j)_x, then the image of z in is a double point of _x, so z=σ̅_l(x) for some l. But then there is a pair of components P̃_i', P̃_j' with σ_l corresponding to the non-empty fiber product P̃_i'×_P̃_j', and with {i,j,i',j'} having size at least three. But then z is a point of _x of multiplicity at least three, a contradiction.
For <ref>, it suffices by <ref> to show that the inclusion
(_i×__j)_η→_i×__j
is an isomorphism. For this, we may take the basechange from k to its algebraic closure, so we may assume that k is itself algebraically closed. Since the k-points of U are dense, we need only check over a neighborhood of each closed point a∈ U. Let b∈_i×__j be the unique point lying over a (b is automatically a closed point); we consider b simultaneously as a closed point of _i, _j and . We may also pass to the completions A of _U,a and B of _,b. As _U,a is an excellent normal local domain, A is a complete normal local domain. Let 𝔪⊂ A be the maximal ideal.
We consider the versal deformation space of the singularity xy=0, which has base Spec(k[[t]]) and versal family Spec(k[[t,x,y]]/(xy-t)). From this description of the versal family, we find there is an element f∈𝔪 such that B is isomorphic as an A-algebra to A[[x,y]]/(xy-f). In addition, the section σ:U→ with σ(a)=b given by (2) defines a surjection ψ:A[[x,y]]/(xy-f)→ A splitting the inclusion A→ A[[x,y]]/(xy-f). Moreover, since σ(U) is contained in the relative singular locus of → U, the induced closed immersion Spec(A)→Spec(A[[x,y]]/(xy-f)) factors through the closed formal subscheme defined by the vanishing of the section d(xy-f) of the completed relative Kähler differentials Ω̂_A[[x,y]]/A=A[[x,y]]· dx⊕ A[[x,y]]· dy. Since d(xy-f)=xdy+ydx, this says that the kernel of ψ contains the ideal ((x,y)+(xy-f))/(xy-f). But since A[[x,y]]/(x,y)=A, this says that (x,y)⊃ (xy-f) in A[[x,y]], hence f=0 and B≅ A[[x,y]]/(xy). This in turn implies that __i,b≅ A[[x,y]]/(x), __j,b≅ A[[x,y]]/(y) and thus
__i×__j,b≅ A[[x,y]]/(x,y)=A=_(_i×__j)_η,b,
which proves <ref>.
Because (, p_*) is semi-stable, any marked point p_i: U → lands in the smooth locus of . Since normalization commutes with smooth base change, π: → is an isomorphism over the smooth locus and p_i has a unique preimage under π(U): (U) → (U).
We define a tree associated to as follows. Let the vertices V() be the set of components of . Let the half-edges H() ⊂(U) be the preimage under π of the marked points and nodal points of ,
H() = {π(U)^-1(p_i): i=1,…,n }∪{σ^i_i,j, σ^j_i,j : _i, _j ∈ V() such that _i×__j≠∅}
where the σ^i_i,j, σ^j_i,j are the sections constructed in Lemma <ref><ref>. Thus, there is a canonical map ν : H() → V(). Let i : H() → H() be the involution with orbits of length 1 corresponding to marked points and orbits of length 2 corresponding to nodal points. Let the edges E() ⊂ H()× H() be the subset consisting of orbits of i of length 2. Since has genus zero, the map ν induces an inclusion E() → V()× V(). Thus, we obtain a tree T(). Below, by abuse of notation, we may use to refer to T().
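For example, if is treelike with two components _1, _2 meeting in a single double point and carrying n marked points, then V()={_1,_2}, H() consists of the n half-edges over the marked points together with σ^1_1,2 and σ^2_1,2, the involution i fixes the former and swaps the latter pair, and T() is the tree with two vertices joined by a single edge.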
Let → U be a semistable genus zero curve with U integral. There exists a dense open subscheme U_0 and a surjective finite morphism W → U_0 such that ×_U W → W is treelike and W is smooth.
Let η be the generic point of U, and let η: k(η)→ U be a geometric point with image η and residue field an algebraic closure of k(η). The basechange _η of to η has a normalization _η→_η. Normal schemes are regular in codimension 1, whence the curve _η is regular. Since k(η) is algebraically closed, it follows that _η is smooth <cit.>. A smooth genus 0 curve over an algebraically closed field is isomorphic to a disjoint union of ^1's. This isomorphism descends to a finite extension k(η) → L, giving a pullback diagram
in which _η→_η, over k(η), is the basechange of α: ∐_i=1^M ^1_L →_L, over L.
By enlarging L if necessary, we may assume that α: ∐_ i=1^M ^1_L →_L is birational and finite, because the property of being an isomorphism and finite is detected after fpqc basechange. Since ∐_ i=1^M ^1_L is normal, it follows that ∐_ i=1^M ^1_L →_L is canonically the normalization.
We may choose an open subset U_0 of U with U_0 affine. Let W = Spec (W), where (W) is the integral closure of (U_0) in L. Since (U_0) is a finite type k-algebra and an integral domain, (U_0) is a Nagata ring and N-2 <cit.>. Thus W → U_0 is a finite surjective map giving rise to the field extension k(η) → L on generic points. (W) is an integral domain by construction. It follows that W is reduced, whence geometrically reduced because k is perfect <cit.>. (W) is furthermore a finite type k-algebra because it is finite over (U_0). Thus by generic smoothness <cit.>, there is a non-empty open subset W' of W which is smooth over k. Replacing U_0 by the (non-empty, open) complement of the image of W∖ W', and then replacing W by its pullback over the new U_0, we obtain a finite surjective map W → U_0 with W smooth over k <cit.>. Passing to a further open subset of U_0, we may assume that α extends to a map α': ∐_i=1^M ^1_W→_W which we may assume to be finite and birational. As above, it follows that α' is canonically the normalization.
Let U be a normal, integral scheme over a field F and let → U be an n-marked semi-stable genus 0 curve which is treelike. Let f: (,p_*) → S be a stable map of degree D. We say that f is simple if for each geometric point u of U, the restriction of f_u to any component of _u is either constant or birational and no two components of _u have the same image under f_u as a reduced closed subscheme.
Let → V be an n-marked semi-stable genus zero curve and let f : (,p_*) → S be a stable map of degree D. There exist
* a stratification V = V_0 ∪…∪ V_N;
* finite covers W_i → V_i;
* n-marked semi-stable genus zero curves _i → W_i;
* stable maps f_i : _i → S;
such that
* W_i is integral and normal, and _i → W_i is treelike.
* f_i is simple;
*
there exists a function m : V(_i) →_>0 such that
∑_v ∈ V(_i) m(v) D_v = D,
where D_v = (f_i)_* [v] is the Cartier divisor corresponding to the image of v under f_i.
*
Let a ∈ V_i be a geometric generic point, and let f_a denote the restriction of f to _a. If f_a is not simple, then there exists v ∈ V(_i) such that m(v) ≥ 2. Otherwise _i has the same number of components as _a.
* ∪_i ev(f_i) = ev(f).
This is an algebraic version of <cit.>.
Let a be a geometric generic point of V. By Noetherian induction it suffices to find an open neighborhood U of a, a finite surjective map W → U with W integral and normal, an n-marked treelike semi-stable genus zero curve ' → W, and a simple stable map f': ' → S such that there exists a function m : V(') →_>0 with ∑_v ∈ V(') m(v) D_v = D and ev(f_U) = ev(f'), where f_U: (_U, p_*|_U) → S denotes the restriction of f.
Consider the n-marked stable map f_a: (_a, p_*|_a) → S. _a splits into a finite number of components _a1, …, _ar. We aim to rid ourselves of repeated image curves and non-birational components which are not contracted. We may assume that the components _a1, …, _as have different (reduced closed) images under f_a and the images of _a(s+1), …, _ar are all the same as one of the images of _a1, …, _as. For any i such that the restriction f|__ai:_ai→ S of f to _ai is not birational, let _ai' → f(_ai) be defined to be the normalization of the reduced image curve f(_ai). Since _ai≅^1 is normal, f |__ai factors
_aiπ_i→_ai' f'_i→ f(_ai).
For notational simplicity, set _ai'=_ai if f|__ai:_ai→ S is birational.
We will glue together the _ai' for i=1,…, s potentially with some additional (^1)'s to form a treelike n-marked curve _a'. (We will not be gluing in the '_a(s+1), …, '_ar.) Since the curve _a is treelike, we have an associated tree T(_a). Removing the vertices _a(s+1), …, _ar (and resulting edges) produces a forest F. (By a forest, we mean a finite disjoint union of trees.) View _a1 as the root of T(_a). Traveling out from _a1 defines a root of every tree of F. Call the trees of F not containing _a1 the detached trees. Suppose there is a detached tree in F whose root r is attached in T(_a) to a component with the same image as a component c on the tree of F containing _a1. Then attach r to c. If there is no such root, then the component containing _a1 contains all the vertices and the forest is just a tree. Now _a1 is contained in a potentially larger component of a new forest. We again consider any detached tree of this new forest whose root r is attached to a component with the same image as a component c on the new tree containing _a1. Again attach r to c. This process stops when we have formed a new tree T' whose vertices are in canonical bijection with '_a1, …, '_as.
We will modify this tree T' to a new tree T” with some extra vertices. It will have an associated treelike n-marked semi-stable genus 0 curve _a' over a. The extra vertices will correspond to contracted components. For each i=1,…, s, let A_i ⊆{1,…, r} denote the subset of those indices j such that _aj has the same reduced closed image curve as _ai. Let H(_a) denote the half-edges associated to the tree-like _a (Definition <ref>). Define H_i(_a) ⊂ H(_a) to be those half-edges lying in _aj for j∈ A_i. In other words, H_i(_a) contains the marked points and the points where two components are attached for every _aj with j ∈ A_i. Because a is a geometric point, we may choose a preimage under f'_ai in _ai' of every point f_a(p) with p in H_i(_a). Let H'_i denote the multiset of these preimages, i.e. the set of these preimages where repeated preimages are contained with the appropriate multiplicity.
We build _a' by gluing (^1)'s together. Start by putting the component '_a1 in _a'. If H'_1 has points with multiplicity greater than 1, attach a ^1 at the corresponding point p and choose (arbitrarily) smooth points on the new ^1 in bijective correspondence with the multiple copies of p. For points of multiplicity equal to 1, mark the corresponding point on '_a1. This builds a larger marked semi-stable genus 0 curve. The tree T” has a vertex for '_a1 and each of the attached (^1)'s. Extend f_a' to this union by sending any attached (^1) to the corresponding f_a(p) in S.
We continue to build _a' and f_a': _a' → S. For each edge in T' connected to the first vertex, attach the corresponding component _ai' for some i = 1,…, s at the appropriate point of the _a' under construction. For each point p of H'_i with multiplicity greater than one, attach a new ^1 to _a' and choose and mark smooth points on the new ^1 in bijective correspondence with the multiple copies of p. Extend the definition of f_a' by mapping '_ai by f'_ai and contracting the new (^1)'s to the corresponding f_a'(p). Add the _ai' and new (^1)'s to the tree T' and edges corresponding to the attachment points.
Running through the vertices of T', we obtain a treelike semistable n-marked curve (_a', p'_*) with tree T” together with a stable map f_a': (_a', p'_*) → S.
Define m_a : V(_a') →_≥0 by setting m_a(_ai') equal to the sum of the degrees of f_a|__aj:_aj→ S over j in A_i and setting m_a to be 0 on any contracted component. Thus ∑_j ∈ A_i (f_a)_*[_aj] = m_a(_ai') (f_a')_* [_ai'], and
∑_v ∈ V('_a) m_a(v) (f_a')_* [v] = D.
Since _a' is treelike, there is a corresponding curve ' over V. Spreading out, there is an open neighborhood W of a in V such that the marked points of '_a come from sections W →'_W. We thus obtain an n-marked semi-stable treelike (', p'_*) over W. By potentially shrinking W further, the stable map f_a': (_a', p'_*) → S spreads out to a stable map f':(', p'_*) → S over W. The property <ref> follows from (<ref>). This completes the proof.
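To illustrate property <ref>, suppose _a=_a1∪_a2 with both components mapped birationally onto the same curve C_0. Then s=1 and A_1={1,2}, so m_a(_a1') is the sum of the degrees of f_a|__a1 and f_a|__a2, namely 2, while m_a vanishes on the contracted components attached in the construction; accordingly ∑_v∈ V(_a')m_a(v)D_v=2(f_a')_*[_a1']=D.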
Let V be an integral finite type k-scheme, smooth over k, and let → V be a tree-like family with two irreducible components, =_1∪_2. Let v∈ V be a geometric generic point and let f:→ S be a morphism. Let D_i=f_v*(_i,v) and let d_i=f_v*(_i,v)·(-K_S). Let D=D_1+D_2, d=d_1+d_2.
Sending x∈ V to the curve f_i,x*(_i,x) defines morphisms
f̅_i:V→ |D_i|, i=1,2
and we have
f̅:V→ |D|
with f̅(x)=f̅_1(x)+f̅_2(x).
Let F be an algebraically closed field and let f : (,p_*) → S be a simple stable map over F.
Let _cont⊂ be the union of the irreducible components that get mapped to points by f and let _simp⊂ be the union of the remaining irreducible components of . A connected component of _cont is rigid if it intersects two or more components of _simp and movable otherwise. Let _r(f) (resp. _m(f)) denote the set of rigid (resp. movable) connected components of _cont containing at least one marked point. Observe that each such component is a subtree of . For T ∈_m(f), let n_T be the total number of marked points on the components of T. Let _s(f) be the set of irreducible components of _simp.
Suppose that Assumption <ref> holds for S. We take n=d-1. Let p:V→M̅_0,n(S, D) be a map of an integral finite type k-scheme V to M̅_0,n(S, D), giving the stable map f_V:(_V, p_V*)→ S of an n-pointed genus 0 curve (_V, p_V*) over V. Suppose _V is treelike and f_V is simple. Let v be a geometric generic point of V, giving the stable map f:(,p_*)→ S. We consider the image ev(V)⊂ S^n. Then
dim_k ev(V) ≤ d + n - |_s(f)| - |_m(f)| - |_r(f)|,
or more precisely,
dim_k ev(V) ≤ d +n - |_s(f)| - ∑_T ∈_m(f) (n_T-1) - |_r(f)|.
Let T ∈_m(f). We claim that T contains at least two marked points. Indeed, T is a subtree of that connects to ∖ T through a single edge, so T contains a leaf of ; this leaf is a contracted component with a single double point, and stability forces it to carry at least two marked points. On the other hand, if T ∈_r(f), the point f(T) of f() is in the intersection f(P)∩ f(P') with P≠ P' ∈_s(f).
Consider the incidence variety in |D| × S and its preimage over f̅_V(V). Let π_V : _V → V denote the projection. The map f_V:_V→ S induces the morphism
f̅_V:V→ |D|
sending g∈ V to the divisor f_V*(_V,g) on S.
We also have the relative evaluation map
ev_V, |D|=(f̅_V, ev_V):V→ |D|× S^n
factoring the evaluation map ev_V:V→ S^n.
For C in _s(f), define d_C = f_*[C] · (-K_S) to be the degree of f restricted to C.
By Lemma <ref>,
dim_k f̅_V(V)≤∑_C ∈_s(f) (d_C -1) ≤ d- |_s(f)|.
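For instance, if _s(f) consists of two components on which f has degrees d_1 and d_2 with d_1+d_2=d, this bound reads dim_k f̅_V(V)≤ (d_1-1)+(d_2-1)=d-2.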
Since the family _V is treelike, the decomposition of _y into _y,simp and _y,cont, and the further decomposition into the various trees in _m(f_y), _r(f_y), is constant as y varies over V. Namely, there are canonical bijections between, e.g., _m(f_y) and _m(f_y') for y,y' in V. For y∈ V and P a component of _y, let n_P denote the number of marked points on P.
Fix a geometric point x of f̅_V(V) and let V_x⊂ V be the fiber f̅^-1_V(x). We proceed to give a bound on dim_k(x) ev(V_x). For y∈ V_x and P a component of _y,simp, we have n_P marked points, each mapping via f to the curve f_y(P)⊂ S, so over all of V_x these contribute at most n_P to dim_k(x) ev(V_x). If P is a component of some rigid tree T, then each of the n_P marked points of P maps to the intersection of two components of f(_y,simp), so these contribute 0 to dim_k(x) ev(V_x). Finally, taking together all the components P in some movable tree T, the ∑_P ∈ T n_P ≥ 2 marked points in T all map to the same point of the curve f(P_T), where P_T is the curve in _y,simp intersecting T. So altogether, these marked points contribute at most 1 to dim_k(x) ev(V_x). We obtain the bound
dim_k(x) ev(V_x)≤∑_P ∈_s(f)n_P+∑_T ∈_m(f) 1.
Combining the bounds (<ref>) and (<ref>), since n_T ≥ 2 for each T∈_m(f), we get
dim_k ev(V)≤dim_k ev_V, |D|(V) ≤dim_k f̅_V(V) + max_x ∈f̅_V(V)dim_k(x) ev(V_x)
≤ d-|_s(f)| + ∑_P ∈_s(f)n_P+∑_T∈_m(f)1
≤ d +n - |_s(f)| - ∑_T ∈_m(f) (n_T-1) - |_r(f)|
≤ d + n - |_s(f)| - |_m(f)| - |_r(f)|.
Suppose that Assumption <ref> holds for S. We take n=d-1. Assume D is not an m-fold multiple of a -1-curve for m>1. Let p:V→M̅_0,n(S, D) be a map of an integral finite type k-scheme V to M̅_0,n(S, D), giving the stable map f_V:(_V, p_V*)→ S of an n-pointed genus 0 curve (_V, p_V*) over V. Let v be a geometric generic point of V, giving the stable map f:(,p_*)→ S. We consider the image ev(V)⊂ S^n.
* Suppose that has at least 3 irreducible components. Then codim ev(V)≥ 2.
* Suppose that f is non-birational. Then codim ev(V)≥ 2.
* Suppose that =_1∪_2 has 2 irreducible components and f is birational. Then codim ev(V)≥1.
* Suppose that =_1∪_2 has 2 irreducible components, f is birational and codim ev(V)=1. Suppose char k=0, and either
* d_S≥3 or
* d_S=2 and d≠ 2, 4.
Then f is unramified and the image curve C:=f() has only ordinary double points as singularities. In particular, f is an isomorphism to its image in a neighborhood of _1∩_2.
Moreover, if _i has n_i marked points, i=1,2, and d_i=-K_S· f_*(_i), then d_i-1≤ n_i≤ d_i, i=1,2.
We apply Lemma <ref> to f_V:(_V, p_V*)→ S. It suffices to prove the result for each stratum V_i of V, so we may assume from the start that there is only a single stratum V_1=V. Similarly since k is perfect, we may assume that W:=W_1 is smooth over k. Indeed, since W_1 is integral, it is reduced. So, since k is perfect, it is geometrically reduced by <cit.>. Thus, it is generically smooth by <cit.>, and we can apply Noetherian induction.
Denote the pullback family to W by (_W, p_W*) and the induced map to S by f_W:(_W, p_W*)→ S. Let w be a geometric generic point of W lying over v, giving the stable map f_W,w:(_W,w, p_W*,w)→ S, with reduced image curve f_W,w(_W,w)=f()_red and with ev(W)=ev(V).
We now prove <ref> in the case f is simple.
If |_s(f)| ≥ 3, we are done by inequality (<ref>). If |_s(f)| = 2 then |_r(f)| + |_m(f)| ≥ 1, so this case is also covered.
If |_s(f)| = 1 then _r(f) = ∅. If |_m(f)| ≥ 2, we are again done. If |_s(f)| = 1 and |_m(f)| = 1, then for the unique T ∈_m(f) we have n_T ≥ 3 since T has at least two vertices. So, we are done by inequality (<ref>). This completes the proof of <ref> when f is simple.
It remains to discuss the case of non-simple f. In this case, we consider the family _W→ W and map f_W:(_W, p_W*)→ S, with (W)=(V). If _W has at least 3 components, we are done by <ref> applied to the family _W. So, we can assume that _W has at most two components. By properties <ref> and <ref> of Lemma <ref>, since S is del Pezzo, the curve class D':=f_W,w*(_W,w) has degree
d':=D'· (-K_S)<d.
If only a single component P of _W,w is non-collapsed,
then f()_red=f_W,w(P), hence is irreducible. Also, f_W,w*(P) = D'. By assumption f()_red is not a -1 curve, so D'· (-K_S)≥ 2. So, by property <ref> of Lemma <ref>, there exists m ≥ 2 such that D = m D'. Therefore, by property <ref> of Lemma <ref>,
d= m D' · (-K_S) ≥ d' + 2.
So, by inequality (<ref>) applied to f_W, we have
dim_k ev(W)≤ d'+n-1≤ d+n-3.
If _W,w has two non-collapsed components, then by inequalities (<ref>) and (<ref>), we obtain
dim_k ev(W) ≤ d' + n - 2 ≤ d + n - 3.
This completes the proof of <ref>.
We now prove <ref>. By <ref>, we may assume has at most 2 components. So, if one component collapses to a point, that component has at least two marked points. It follows that the image under ev is contained in a diagonal of S^n, whence has codimension at least dim S = 2. Thus, it remains to consider the case when neither component collapses to a point. The assumption implies that there exists a geometric generic point a ∈ V such that f_a is not simple. Let d' be defined as in (<ref>). Then d' < d by (<ref>). Moreover, if _W has only one component, then d' < d-1 by (<ref>). Then apply (<ref>) to f_W. This completes the proof of <ref>.
We now prove <ref>. Define d' as in (<ref>).
If _W has only one component, then d' < d by (<ref>). So, <ref> follows by applying (<ref>) to f_W.
We now prove <ref>. So, we have (V) = 1. As we are only interested in the geometric generic point f and every geometric generic point of V lifts to a geometric generic point of W, we may assume that the family _V→ V is tree-like with two irreducible components _V,1, _V,2, and the map f_V:_V→ S decomposes as f_V,1∪ f_V,2, with f_V,i:_V,i→ S. Fixing a point v∈ V, let D_i be the curve class f_V,i,v*(_i,v), let d_i=D_i·(-K_S) and suppose that n_i of the n marked points of _V are in _V,i, so d=d_1+d_2, n=n_1+n_2. The families f_V,i:_V,i→ S thus determine a morphism
q:V→ M^_0,n_1(S, D_1)× M^_0,n_2(S, D_2).
The subset q(V) is a constructible subset of the product M^_0,n_1(S, D_1)× M^_0,n_2(S, D_2). Let V' be a dense subset of q(V) such that V' is locally closed in the product. Then q^-1(V') is dense and open in V. Thus we may replace V with q^-1(V') and assume from the start that V' = q(V) is locally closed in the product.
Let ev_i: M^_0,n_i(S, D_i) → S^n_i for i=1,2 denote the evaluation maps. The map ev: V → S^n factors through q by ev = (ev_1 ×ev_2) ∘ q. Moreover M^_0,n_1(S, D_1)× M^_0,n_2(S, D_2) is a fine moduli space, so the families _V,i for i=1,2 are pulled back from V'. Let g = (g_1,g_2) be a geometric generic point of V'. To prove <ref>, it suffices to show that g_i is unramified for i=1,2 and the image curves of the g_i have only ordinary double points and intersect transversally in S.
We claim that V' is open in M^_0,n_1(S, D_1)× M^_0,n_2(S, D_2). By construction, codim (ev_1 ×ev_2)(V') = codim ev(V) = 1. Since f is birational, neither component is contracted, so d_1, d_2 ≥ 1. Thus by Assumption <ref> and Lemma <ref>, we have dim M^_0,n_i(S, D_i) = n_i + d_i -1.
2n-1 = dim ev(V) = dim (ev_1 ×ev_2)(V') ≤dim V' ≤
dim M^_0,n_1(S, D_1)× M^_0,n_2(S, D_2) ≤ n_1 + d_1 -1 + n_2 + d_2 -1 = 2n-1.
Thus the inequalities are equalities. It follows that V' is open in M^_0,n_1(S, D_1)× M^_0,n_2(S, D_2) as claimed.
We now prove the desired bounds on n_i. Indeed,
1 = codim (ev_1 × ev_2)(V') = codim ev_1(M^_0,n_1(S, D_1)) + codim ev_2(M^_0,n_2(S, D_2))
So, for i = 1,2,
1 ≥codim ev_i(M^_0,n_i(S, D_i)) ≥ 2n_i - (d_i + n_i -1) = n_i - d_i +1.
So, n_i ≤ d_i. But n_1 + n_2 = n = d - 1 = d_1 + d_2 - 1, so n_i ≥ d_i -1.
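For example, if d=5 (so n=4) and (d_1,d_2)=(2,3), the constraints d_i-1≤ n_i≤ d_i together with n_1+n_2=4 leave exactly the distributions (n_1,n_2)=(2,2) and (1,3).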
It remains to prove that C = f() has only ordinary double points. Since V' is open in M^_0,n_1(S, D_1)× M^_0,n_2(S, D_2), it follows that (g_1,g_2) is a geometric generic point of M^_0,n_1(S, D_1)× M^_0,n_2(S, D_2). By Assumption <ref>, g_i is unramified for i=1,2. Let C_i denote the image curve of g_i. By Proposition <ref>, if d_i≥ 2 then C_i has only ordinary double points; otherwise d_i =1 and g_i is an isomorphism onto a -1 curve, which is smooth. Thus C_1 and C_2 have only ordinary double points. If d_1 or d_2 is at least 3, we apply Lemma <ref> to conclude that C_1 and C_2 intersect transversally in S. Otherwise, d_1,d_2 ≤ 2. We have three cases (d_1, d_2) = (1,1),(1,2),(2,2).
Suppose (d_1, d_2) = (1,1). Then C_1 and C_2 are distinct -1-curves because f is birational. By hypothesis d_S ≥ 3. Then C_1 and C_2 are embedded lines in ^d_S and must intersect transversally.
Suppose d_S≥3, so S is anti-canonically embedded in ^d_S.
If d=3, then we may assume d_1=1, d_2=2, so C_1 is a -1 curve and C_2 is a smooth conic. By the adjunction formula, we have C_2· C_2=0, and thus C_2 is a smooth rational curve on S with trivial normal bundle. Noting the exact sequence
0 →_S →_S(C_2) →i_C_2_* (_C_2(C_2· C_2) ) → 0,
this implies that h^0(S, _S(C_2))=2. Thus the complete linear system |C_2|≅^1 has dimension 1 and has no base-points. Moreover, there is an open subset U⊂ |C_2| such that each u∈ U corresponds to a smooth rational curve C_u in the curve class |C_2|. Since char k=0, Bertini's theorem applied to the restriction of the linear system |C_2| to C_1 implies that for all u in a dense open subset V of U, C_u intersects C_1 transversely. Since g_2 is generic, this implies that C_1 and C_2 intersect transversely. If d_1=d_2=2, the same proof shows that C_1 and C_2 are smooth curves intersecting transversely.
The remaining case is (d_1,d_2) =(1,2), d=3, d_S=2. Let π:S→^2 be the anti-canonical map, with smooth quartic branch curve E. Thus C_1 is a -1 curve. There are two possibilities for C_2: either π induces an isomorphism of C_2 with a smooth plane conic, or C_2→π(C_2) is a double cover, with π(C_2) a line ℓ satisfying ℓ· E=p_1+p_2+2p_3, with the p_i distinct points of E (see Lemma <ref> and its proof); indeed, by Lemma <ref>, such lines are generic in the variety of tangent lines to E. In the first case, we again have C_2· C_2=0 and we proceed as in the case d_S≥3, d=3. Consider the second case. We note that the -1 curve C_1 is one of the two components of π^-1(ℓ'), where ℓ' is a line satisfying ℓ'· E=2p_1+2p_2, with p_1, p_2 points of E, not necessarily distinct. Since g_2 is generic, the lines ℓ and ℓ' intersect at a point q not on E, so π is étale over a neighborhood of q and thus C_1 and C_2 intersect transversely.
If d_S=2 and the branch curve E of the anti-canonical map π:S→^2 admits a line ℓ with a 4-fold intersection at a single point q of E, then π^-1(ℓ)=C_1∪ C_2, where the C_i are -1 curves intersecting with multiplicity 2 at the point q' of S lying over q. This is why we need to exclude the case d_S=d=2 in <ref> above.
Lemma <ref> <ref> is false without the hypothesis that D is not an m-fold multiple of a -1-curve for m>1. For example, consider S = Bl_0 ^2 over a field of characteristic 0, and let D be twice the exceptional divisor E. Then d=2. The family over 𝔸^1 = Spec k[a] given by composing t↦ t^2-a: ^1→^1≅ E with the inclusion E⊂ S has ev(𝔸^1) of codimension 1.
Let k be a field of characteristic not 2 or 3 and let S be a del Pezzo surface over k with d_S ≥ 2. Suppose that Assumption <ref> holds for S, and that D is a Cartier divisor on S which is not an m-fold multiple of a -1-curve for m>1. Then codim ev(M̅_0,d-1(S, D) ∖ M^_0,d-1(S,D)) ≥ 1.
By Lemma <ref> <ref> <ref> <ref>, codim ev(M̅_0,d-1(S, D) ∖ M^_0,d-1(S,D)) ≥ 1. By Proposition <ref>, the generic point of any irreducible component of M^_0,d(S, D) is in the ordinary double point locus. Thus the dimension of any irreducible component of M^_0,d(S, D) is 2(d-1). See Lemmas <ref> <ref> and Remark <ref>. Moreover, M^_0,d-1(S,D) is dense in M^_0,d(S, D). Thus M^_0,d(S, D) ∖ M^_0,d-1(S,D) has dimension <2(d-1), whence so does the closure of ev(M^_0,d(S, D) ∖ M^_0,d-1(S,D)), which proves the claim.
§ FINE STRUCTURE OF THE EVALUATION MAP
Let k be a perfect field, let S be a del Pezzo surface over k with effective Cartier divisor D. Let d=- K_S· D≥1 and let n=d-1. We introduce a list of assumptions, which will be convenient for future reference (but which are not running assumptions throughout this section).
*
char k = 0.
* D is not an m-fold multiple of a -1-curve for m>1.
*
One of the following holds.
* d_S≥ 4
* d_S=3 and d≠ 6
* d_S = 2 and d≥ 7
We do assume that k has characteristic 0 in this section.
The locus of reducible stable maps is
M̅_0,n(S,D)^ = M̅_0,n(S,D) ∖ M_0,n(S,D).
The locus of balanced stable maps
M̅_0,n(S,D)^⊂M̅_0,n(S,D)^
is the closure of the locus of stable maps f : (,p_*) → S satisfying the following conditions.
* =_1∪_2, with _i≅^1 and _1 ∩_2 = {p}.
*
f is unramified and f|__1 is transversal to f|__2 at p.
* Letting n_i denote the number of marked points on _i, and letting
d_i:=-K_S· f_*(_i)
denote the degree of f|__i, we have d_i-1≤ n_i≤ d_i for i=1,2.
Let f : (,p_*) → S be a stable map representing a geometric generic point of an irreducible component of M̅_0,n(S,D)^. Then ev is unramified at f.
Since f is unramified, f is birational (Lemma <ref>) and has no automorphisms, so M̅_0,n(S,D) is a scheme in a neighborhood of f. Applying <cit.>, the claim is equivalent to showing that dev is injective on tangent spaces. By Proposition <ref>, it thus suffices to show that dev is surjective on tangent spaces.
We write
=_1∪_2, with _i≅^1 and _1 ∩_2 = {p}.
Let n_i denote the number of marked points on _i, and let
D_i =f_*([_i]), d_i:=-K_S· D_i.
We may assume that n_1 = d_1 and n_2 = d_2 -1 and p_j lies on _1 for j = 1,…,n_1.
Let ν : M̅_0,n(S,D) →M̅_0,n-1(S,D) denote the map forgetting the first marked point and stabilizing. Let f' = ν(f). Let n_1' = n_1 -1 and n_2' = n_2 be the number of marked points on the respective components of f'. We show that
dev_f': T_f'M̅_0,n-1(S,D)^→ T_ev(f')S^n-1
is surjective. Consider the following diagram.
M̅_0,n_1'+1(S,D_1) ×_S M̅_0,n_2'+1(S,D_2) [ld]_ν_1×ν_2[dd]^ev_1 × ev_2[r]^(.65)τ M̅_0,n-1(S,D)^[dd]^ev
M̅_0,n_1'(S,D_1) ×M̅_0,n_2'(S,D_2)[rd]^ev_1× ev_2
S^n_1'× S^n_2'[r]^∼ S^n-1.
The fiber product over S is taken with respect to the evaluation maps at the (n_i'+1)th marked point on M̅_0,n_i'+1(S,D_i) for i = 1,2. The map τ attaches the domains at the (n_i' + 1)th marked points, forming a node. Observe that f' belongs to the image of τ. Let
f_i' ∈M̅_0,n_i'+1(S,D_i)
be such that τ(f_1',f_2') = f'. By commutativity of the diagram, it suffices to show that
d(ev_1 × ev_2)_(f_1',f_2') is surjective. By Lemma <ref>, d(ev_i)_ν_i(f_i') is surjective for i = 1,2. So, using the commutativity of the diagram, it suffices to show that d(ν_1 ×ν_2)_(f_1',f_2') is surjective. Indeed, let ξ_i ∈Γ(_i,_ν_i(f_i')) represent tangent vectors at ν(f_i'). By condition (<ref>) of Definition <ref>,
df_1'(T_p_n_1'+1_1)⊕ df_2'(T_p_n_2'+1_2) = T_f(p) S.
So, projecting along the respective summands, we obtain canonical isomorphisms
(_f_1')_p_n_1'+1≅ T_p_n_2'+1_2, (_f_2')_p_n_2'+1≅ T_p_n_1'+1_1.
Let v_i ∈ T_p_n_i'+1_i be the tangent vector corresponding to ξ_j(p_n_j' + 1) for j = 3-i. Then the tangent vectors at f'_i corresponding to ξ_i and v_i lift the tangent vectors corresponding to ξ_i. Thus, we have established the surjectivity of dev_f'.
We show that there exists a tangent vector v ∈ T_fM̅_0,n(S,D) ∖ T_fM̅_0,n(S,D)^ such that dev_j(v) = 0 for j = 2,… n.
T_f M̅_0,n(S,D)^[d]^dν_f[r] T_f M̅_0,n(S,D) [r]^(.6)dev_f T_ev(f) S^n [d]^dπ
T_f'M̅_0,n-1(S,D)^[rr]^(.6)dev_f' T_ev(f')S^n-1.
Since f is generic, the domain curve of ν(f) does not undergo stabilization. It follows that dν_f is surjective. By the preceding claim, dev_f' is surjective. So, for any v'∈ T_fM̅_0,n(S,D) ∖ T_fM̅_0,n(S,D)^, we may pick w ∈ T_f M̅_0,n(S,D)^ such that
dev_f'∘ dν_f(w) = dπ∘ dev_f (v').
Thus, we take v = v' - w. To complete the construction of v, we show that such v' exists. Indeed, by Proposition <ref>
dim T_fM̅_0,n(S,D) = d - 1 +n.
On the other hand, we compute
dim T_fM̅_0,n(S,D)^ = d - 2 +n
as a transverse fiber product over S. The dimensions of the factors are given by Lemma <ref>. The transversality over S follows from Lemma <ref>.
We claim that to complete the proof of the proposition it suffices to show that
dev_1(v) ∉ df_p_1(T_p_1).
Indeed, let
V = span{v,T_p_1}⊂ T_f M̅_0,n(S,D).
So, by construction of v, we have a commutative diagram with short exact rows: the top row is
0→ V→ T_f M̅_0,n(S,D)→ Q→ 0,
where Q denotes the quotient vector space T_f M̅_0,n(S,D)/V; the bottom row is
0→ T_f(p_1) S→⊕_i = 1^n T_f(p_i) S→⊕_i = 2^n T_f(p_i)S→ 0,
with second map dπ; the vertical maps are dev_f|_V, dev_f and the induced map c:Q→⊕_i = 2^n T_f(p_i)S; and T_f M̅_0,n(S,D)^ maps to T_f M̅_0,n(S,D) compatibly with dev_f'∘ dν_f. We showed above that dev_f'∘ dν_f is surjective, so dπ∘ dev_f is surjective. Thus c is surjective. It follows from (<ref>) that dev_f|_V is surjective. So, dev_f is surjective as desired.
Finally, we show (<ref>). Let F be the field of definition of f. Let 𝔣 : (𝔓,𝔭_*) → S denote a stable map over F[ϵ]/(ϵ^2) corresponding to v. We identify the domain of f with the closed fiber of 𝔓, so that 𝔣 restricts to f on it. Choose an open set p ∈ U ⊂𝔓 with an open immersion
U ⊂ Spec(F[s,t,ϵ]/(st- ϵ,ϵ^2)).
Choose an open set f(p) ∈ V ⊂ S and maps x,y: V →𝔸^1 such that x × y : V →𝔸^2 is étale and
x ∘ f ∈ s + (s^2,st,t^2), y ∘ f ∈ t + (s^2,st,t^2).
Consider the open subscheme
U' = Spec(F[s,s^-1,ϵ]/(ϵ^2))∩ U ⊂ U.
The advantage of U' is that it carries a well-defined vector field ∂/∂ϵ along the locus {ϵ = 0}. Such a vector field cannot exist on U because the map U → Spec(F[ϵ]/(ϵ^2)) is not smooth.
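One can check this directly: a vector field ∂/∂ϵ on U would be a derivation D with D(ϵ)=1; but ϵ=st forces D(ϵ)=sD(t)+tD(s), which vanishes at the point s=t=0, a contradiction. On U', where t=ϵ s^-1 and s is invertible, the derivation determined by D(s)=0 and D(ϵ)=1 is well defined along {ϵ=0}.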
We claim it suffices to show that the sections
ξ = d𝔣(∂/∂ϵ)|_ϵ = 0∈Γ(U' ∩,f^*TS), η = df(∂/∂ s) ∈Γ(U' ∩,f^*TS),
are generically linearly independent. Indeed, since
dev_1(v) = ξ(p_1), df_p_1(T_p_1) = span_F(η(p_1)),
and p_1 is generic, we obtain (<ref>).
We show the generic linear independence of ξ and η as follows. Observe that
x ∘𝔣|_U∈ s + (ϵ, s^2,st,t^2), y ∘𝔣|_U∈ t + (ϵ, s^2,st,t^2).
Let Q ⊂ F(s)[ϵ]/(ϵ^2) denote the regular functions on U', let Z = U ∩{t = 0} and let R ⊂ F[s]_(s) denote the regular functions on Z. Observe there is an inclusion map R → Q induced by the inclusion F[s]_(s)⊂ F(s)[ϵ]/(ϵ^2). For a_1,a_2 ∈ Q, we denote by (a_1,a_2)_R ⊂ Q the set of all functions obtained by linear combinations of a_1 and a_2 with coefficients in R.
Restricting to U' amounts to replacing t ↦ϵ s^-1, so we obtain,
x ∘𝔣|_U'∈ s + (ϵ, s^2)_R, y ∘𝔣|_U'∈ϵ s^-1 + (ϵ,s^2)_R.
So,
dx(ξ) = d(x ∘𝔣|_U').(∂/∂ϵ)|_ϵ = 0
is regular at p, and
dy(ξ) = d(y ∘𝔣|_U').(∂/∂ϵ)|_ϵ = 0
has a simple pole at p. On the other hand, by equation (<ref>) we have
dx(η) = d(x ∘ f|_Z)(∂/∂ s) ∈ 1 + (s), dy(η) = d(y ∘ f|_Z)(∂/∂ s) ∈ (s).
It follows that
[ dx(ξ) dy(ξ); dx(η) dy(η) ]
has a simple pole at p,
so ξ,η, are generically linearly independent as desired.
Suppose Basic Assumptions <ref> hold for k, S, D, and suppose M^_0(S,D)=∅. Then codim ev(M̅_0,n(S,D)) ≥ 2.
Since char k = 0, it follows by Lemma <ref> that Assumption <ref> holds. By Lemma <ref> and Basic Assumption <ref>
<ref>, we conclude that codim ev(M_0,n(S,D)) ≥ 2. Suppose by way of contradiction that
c := codim ev(M̅_0,n(S,D) ∖ M_0,n(S,D)) ≤ 1.
By Lemma <ref> and Basic Assumption <ref>
<ref> we see that
c = 1
and M̅_0,n(S,D)^≠∅. So, Proposition <ref> shows that c = 0, which is a contradiction.
Suppose Basic Assumptions <ref>
hold for k, S, D.
Then there is a closed subset A⊂ S^n with codim A≥ 2 such that the complement of the inverse image M̅_0,n(S,D)^:=M̅_0,n(S,D) ∖ ev^-1(A) satisfies the following.
*
M̅_0,n(S,D)^=∅ if and only if M^_0(S,D)=∅. If M^_0(S,D)≠∅, then the moduli space M̅_0,n(S,D)^ is a geometrically irreducible smooth finite-type k-scheme, and the restriction of ev, ev:M̅_0,n(S,D)^→ S^n∖ A, is a finite, flat, dominant morphism.
*
The evaluation map is étale in a neighborhood of each f ∈M̅_0,n(S,D)^ with t(f)=0.[See Definition <ref> for the definition of the torsion index t(f).]
*
M̅_0,n(S,D)^ contains a dense
open subset of M_0,n^(S;D).
* Geometric points f of M̅_0,n(S,D)^ correspond to birational maps.
*
Let f be a geometric point of M̅_0,n(S,D)^∖ M_0,n^(S;D), which we consider as a morphism f:→ S for some genus 0 semi-stable curve . Then f satisfies:
* If =^1 is irreducible, then the image curve C:=f(^1) has one singular point q that is not an ordinary double point, and C has either an ordinary cusp, an ordinary tacnode or an ordinary triple point at q. Moreover, the marked points do not map to q and f is free.
* If is not irreducible, then =_1∪_2, with _i≅^1. The image curve C:=f() has only ordinary double points as singularities. Moreover, if n_i of the n marked points of are in _i, and C_i:=f(_i) has degree d_i:=-K_S· C_i, then d_i-1≤ n_i≤ d_i for i=1,2.
In particular, if M̅_0,n(S,D)^≠∅ then ev:M̅_0,n(S,D)^→ S^n∖ A is dominant.
As the assertions are all detected after a field extension of k, we may assume that k is algebraically closed. The moduli stack M̅_0,n(S,D) is a proper Artin stack over k, so the morphism :M̅_0,n(S,D)→ S^n is a proper morphism.
If M_0^(S,D) = ∅, by Lemma <ref> we can take A = ev(M̅_0,n(S,D)) and
M̅_0,n(S,D)^ = ∅.
Since M_0,n^(S,D) ⊂ M_0,n^(S,D), part <ref> holds. The rest of the Proposition is immediate. Thus, for the remainder of the proof, we assume M_0^(S,D) ≠∅.
By Lemma <ref> and Basic Assumption <ref><ref>
we may apply Lemma <ref>. By Lemma <ref> <ref>, we may choose M̅_0,n(S,D)^ so that geometric points f of M̅_0,n(S,D)^ correspond to birational maps, showing <ref>.
We claim that we may choose M̅_0,n(S,D)^ so that geometric points f: ^1 → S in M̅_0,n(S,D)^∖ M_0,n^(S;D) are free. By semi-continuity of cohomology, the non-free locus is closed, and therefore has a finite number of irreducible components. By Lemma <ref> with V the closure of a geometric point with the reduced substack structure, this will be accomplished by eliminating cases 1,2, and 3 of Lemma <ref>. Case 3 of Lemma <ref> contradicts Basic Assumption <ref> <ref>. We now eliminate case 2. Since f: ^1 → C is a two-to-one cover, we have D=f_*[^1] = 2C. By
assumption, we have that M^_0(S,D)≠∅. Since d=2, it follows from Lemma <ref>, Remark <ref>, and the vanishing of H^1(^1, _^1(-1)) that M^_0(S,D)≠∅. Choose a geometric point f' of M^_0(S,2C). Let C':=f'(^1). We cannot have C' = C, because then f' would not be birational. Thus C' · C must be positive, as it is the intersection number of two distinct irreducible curves. On the other hand, C' · C = 2C · C = -2 because C is a -1 curve. In case 1, the map f belongs to M_0,n^(S;D), which we do not consider.
By the preceding claims of birationality and freeness,
(M̅_0,n(S,D)^∩ M_0,n(S,D))∖ M_0,n^(S;D) ⊂ M_0,n(S,D)^.
So Lemma <ref> applies and <ref><ref> follows except for the claim about the marked points. However, the locus in (M̅_0,n(S,D)^∩ M_0,n(S,D))∖ M_0,n^(S;D) where one of the marked points coincides with q has codimension 1, so we can redefine A to remove it.
Since we have chosen M̅_0,n(S,D)^ so that geometric points are birational, we may apply Lemma <ref> part <ref>. Note that by Basic Assumption <ref> <ref>, we have d_S ≥ 2. Thus when the domain curve of f:→ S is reducible, we have =_1∪_2, with _i≅^1, and the image curve C:=f() has only ordinary double points as singularities, and f:→ C is an isomorphism in a neighborhood of _1∩_2. Since has arithmetic genus 0, the intersection _1 ∩_2 consists of a single point.
For the bounds d_i-1≤ n_i≤ d_i in <ref><ref>, proceed as follows.
Let V ⊂M̅_0,n(S,D)^∖ M_0,n(S,D) be an irreducible component. If codim ev(V) ≥ 2, add ev(V) to A. So, we may assume codim ev(V) ≤ 1. Applying Lemma <ref> parts <ref> and <ref> to the generic point of V gives the desired bounds and thus completes the proof of <ref>.
Next, we prove <ref>. By assumption,
we have M^_0(S,D)≠∅. So, it follows from Proposition <ref> that M̅_0,n(S,D)^≠∅. Since d_S ≥ 2, apply Lemma <ref> to deduce that ev is étale on M_0,n^(S;D). Thus, for any proper closed subset A⊂ S^n, the complement of the preimage M_0,n^(S;D)∖ ev^-1(A) is dense in M_0,n^(S;D). So, M̅_0,n(S,D)^ = ev^-1(S^n∖ A) contains a non-empty dense open subset of M_0,n^(S;D), proving part <ref>.
Since ev is unramified on the non-empty space M_0,n^(S;D), and M_0,n^(S;D) is smooth of dimension 2n by Lemma <ref>, it follows that ev:M_0,n^(S;D)→ S^n is dominant. By part <ref> it follows that ev:M̅_0,n(S,D)^→ S^n∖ A is dominant, as claimed in part <ref>.
We now show part <ref>. Let f: → S with t(f)=0 represent a point of M̅_0,n(S,D)^. In the case = ^1, part <ref> follows from Lemma <ref>. In the case ≠^1, it follows from <ref><ref> that f is balanced, so we may apply Proposition <ref>.
The geometric irreducibility in <ref> is proved as follows. Since birationality is an open condition, property <ref> implies M̅_0,n(S,D)^⊂0n(S,D). Since the inclusion M̅_0,n(S,D)^⊂M̅_0,n(S,D) is open and dense, so is the inclusion M̅_0,n(S,D)^⊂0n(S,D). So geometric irreducibility follows from Theorem <ref>.
The fact that ev^-1(S^n∖ A) is a scheme as in <ref> follows from the fact that by construction each f∈ ev^-1(S^n∖ A) is birational and hence has no automorphisms.
We now show the smoothness claim of <ref>. Let f ∈M̅_0,n(S,D)^. If f is irreducible, by property <ref><ref> it is free, so Lemma <ref> asserts that M̅_0,n(S,D)^ is smooth at f of dimension 2n. If f is reducible, then property <ref><ref> and Proposition <ref> imply that M̅_0,n(S,D)^ is smooth at f of dimension 2n.
To finish the proof of <ref> and thus the proof of the proposition, we need to show that the map ev:ev^-1(S^n∖ A)→ S^n∖ A is finite and flat. By <ref>, it is generically finite. It is proper because M̅_0,n(S,D) and S are proper and properness is preserved under pull-back. Since S is smooth, after potentially adding to A a subset of codimension at least 2 in S^n, we can assume that ev:ev^-1(S^n∖ A)→ S^n∖ A is finite and flat as desired. See, for example, Proposition 3.8 of <cit.>.
From Theorem <ref>, we see that :M̅_0,n(S,D)^→ S^n∖ A is unramified on M̅_0,n(S,D)^∖ D_. We conclude our discussion of the structure of the evaluation map by computing the ramification index along D_.
Let f be a geometric point of D_∩_0,n(S, D)^ with field of definition F. We describe a linear map F → T_f_0,n(S, D)^ (which in fact we will show to be injective for any such f, even a geometric generic point). First, we construct vectors 𝒱_a in the tangent space T_f̃_0,n(S, D)_F^ of the base change to F of _0,n(S, D)^ at the canonical point f̃ corresponding to f; our map sends a in F to the image of 𝒱_a in T_f_0,n(S, D)^. This allows us to use the cohomological description of the tangent space to _0,n(S, D)^ at a closed point while constructing tangent vectors at potentially non-closed points.
By Theorem <ref>, the point f̃ corresponds to a morphism f̃:_F^1→ S_F which is birational onto its image C = f(_F^1) together with n=d-1 distinct points p_1,…, p_n∈^1(F). Moreover, there is a unique point p∈^1 where f̃ is ramified, and f̃(^1) has a simple cusp at q=f̃(p). We may assume p=0:=[1:0]∈^1. Let t be the standard coordinate on ^1 ∖{∞}.
We can choose a system of analytic coordinates (x,y) at the ordinary cusp q∈ C ⊂ S such that f is analytically locally of the form
f(t)=(t^2, t^3).
Since q is a simple cusp, we can find x,y ∈_S,q and u,v ∈_^1,p^* such that f^*(x)=ut^2 and f^*(y) = v t^3. After rescaling by constants, we can assume u,v ∈ 1 + 𝔪_p. So,
f^*(x) = t^2 + ∑_i ≥ 3 a_i t^i, f^*(y) = t^3 + ∑_j ≥ 4 b_j t^j.
If a_i ≠ 0 for some i, we proceed by induction. Let k be the minimal integer such that a_k ≠ 0. If k is even, we change coordinates on S by
x ↦ x-a_k x^{k/2}, y ↦ y.
If k is odd, we change coordinates by
x ↦ x- a_k x^{(k-3)/2} y, y ↦ y.
After this procedure, the minimal integer k such that a_k ≠ 0 increases. The infinite composition of these coordinate changes converges formally. Thus, we may assume that f^*(x) = t^2.
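For instance, in the first step with k even one can check the effect of this substitution directly:
f^*(x - a_k x^{k/2}) = (t^2 + a_k t^k + …) - a_k(t^2 + a_k t^k + …)^{k/2} = t^2 + (terms of order > k),
since the leading term of a_k(t^2+…)^{k/2} is exactly a_k t^k. For k odd one uses instead that f^*(x^{(k-3)/2} y) = t^{k-3}· t^3 + (terms of order > k) = t^k + (terms of order > k).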
Similarly, if b_j ≠ 0 for some j, we proceed by induction. Let l be the minimal integer such that b_l ≠ 0. If l is even, we change coordinates on S by
x ↦ x, y ↦ y - b_l x^{l/2}.
If l is odd, we change coordinates by
x ↦ x, y ↦ y - b_l x^{(l-3)/2} y.
Again, an infinite composition of such coordinate changes converges formally, so we may assume f^*(y) = t^3.
We proceed with (x,y) as in the preceding lemma.
Fix an a∈ F^× and let t'=X_0/X_1=1/t be the standard coordinate on U_1:=^1∖{0}. Since ∂/∂ t=-t'^2 ∂/∂ t', the (rational) vector field (1/t)·∂/∂ t extends to a global section of T_^1(p). Let
v_a:=(a/t)·∂/∂ t∈ H^0(^1, T_^1(p)).
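Explicitly, in the coordinate t' = 1/t one has
(a/t)·∂/∂ t = a t'·(-t'^2 ∂/∂ t') = -a t'^3·∂/∂ t',
which is regular on all of U_1; near p = 0 the section (a/t)·∂/∂ t has only a simple pole, which is exactly what the twist by p allows.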
Since df̃ has a zero at p=0, we have df̃(v_a) in H^0(^1, f̃^*TS). Thus df̃(v_a) induces a tangent vector to _0(S, D)^ corresponding to its image in H^0(^1, _f̃) along with a first order deformation f̃_1ϵ of f̃. Let f_1 ϵ denote the first order deformation of f given by projecting f̃_1ϵ along S_F → S.
Suppose Basic Assumptions <ref> hold for k, S, D. Let f be a geometric point of D_∩M̅_0,n(S, D)^ with field of definition F.
Then
* The first order deformation f_1ϵ defined above extends to a deformation f_ϵ: ^1 ⊗ F[[ϵ]] → S_F such that there are local analytic coordinates (x,y) on S such that near p the map f_ϵ is of the form
f_ϵ(t)=(t^2, t^3)+ϵ· (2a, 3at) mod ϵ^2.
* There is a deformation of marked points (p_1ϵ,…,p_nϵ) such that the tangent vector 𝒱_a corresponding to the deformation (f_ϵ, p_1ϵ,…,p_nϵ) is in ker d().
* Let v: Spec(F[[ϵ]]) →_0,n(S, D)^ be the morphism corresponding to (f_ϵ, p_1ϵ,…,p_nϵ). The composition ∘ v has ramification index 2, i.e. e_0(∘ v) = 2.
In particular, the map :_0,n(S, D)^→ S^n has ramification index two along D_, i.e., for f∈ D_∩_0,n(S, D)^ a geometric generic point, e_f()=2.
We may assume that f is a closed point, so f̃=f. Let C,p,q,t,x,y,p_1,…,p_n be as in the above discussion.
Let ⊂ f^*T_S be the kernel of the quotient map f^*T_S→_f/_f^. Then is an invertible subsheaf of f^*T_S containing the image df(T_^1). By the diagram
0 → T_^1 → f^*T_S → _f → 0
     ↓          ↓          ↓
0 →   → f^*T_S → _f/_f^ → 0
and the snake lemma, we have _f^≅/df(T_^1). Since df vanishes to first order at p and nowhere else, the map df:T_^1→ identifies with
≅ T_^1(p )≅_^1(3).
Since (f^*T_S) ≅(d), it follows from the exact sequence
0 →→ f^*T_S →_f/_f^→ 0,
that
_f/_f^≅_^1(d-3).
If d_S≥ 3, then S embeds into ^N by -K_S, and one can choose a hyperplane in ^N containing the tangent line of C = f(^1) at the cusp q and passing through one other point of C. This hyperplane has intersection multiplicity at least 4 with D (at least 3 at q and at least 1 at the other point), so d≥ 4. Thus d≥4 for any (d_S, d) allowed by the hypothesis, and _f/_f^(-1) is generated by global sections and H^1(^1, _f(-1))=0; in particular, f is free.
For a morphism ϕ:Y→ X of smooth varieties over k and a geometric point y∈ Y with image x=ϕ(y), we have the differential
dϕ_y:T_Y,y→ T_X,x.
The second order differential is the map
d^2ϕ_y: Sym^2_k(y)(T_Y,y⊗ k(y))→ coker(dϕ_y ⊗ k(y)),
defined as follows.
The map ϕ^*:_X,x→_Y,y induces the map on jet spaces
^2ϕ^*:^2_X,x=_X,x/𝔪_x^3→^2_Y,y=_Y,y/𝔪_y^3,
and thus ^2ϕ^* induces a map of the kernel of dϕ^*:Ω^1_X,x⊗ k(x)→Ω^1_Y,y⊗ k(y) to the subspace
Sym^2 Ω^1_Y,y⊗ k(y) ≅ Sym^2(𝔪_y/𝔪_y^2) ≅𝔪_y^2/𝔪_y^3 ⊂^2_Y,y.
The map d^2ϕ_y is the dual of this map.
We show that d^2 _f is nonvanishing. Let q_i=f(p_i)∈ S(F). The sheaf sequence
0→_f(-∑_ip_i)→_f→⊕_i f^*T_S, q_i/df(T_^1, p_i)→ 0
identifies the cokernel of d_f:T_f(_0,n(S, D))→ T_(f)(S^n) with H^1(^1, _f(-∑_ip_i)). See Remark <ref>. We will show d^2 _f ≠ 0, by showing its restriction to ker d_f is nonvanishing. Since _f(-∑_ip_i)≅_f^⊕_^1(-2), the sequence (<ref>) similarly gives an identification
ker d_f ≅ H^0(_f^) ≅ H^0(i_p*F).
We are interested in showing that the map
d^2_f: Sym^2 H^0(_f^) ≅ F → H^1(^1, _f(-∑_ip_i))≅ F.
is nonzero. We will consider d^2_f as a quadratic form on F, sending a∈ F to d^2_f(𝒱_a^2) where 𝒱_a is the tangent vector 𝒱_a∈ T_f(_0,n(S, D)^) corresponding to i_p*(a).
Noting that H^1(^1, _f(-∑_ip_i))≅ H^1(^1, _f/_f^(-∑_ip_i)), we will compute d^2_f by composing with the Serre duality pairing
H^1(^1, _f/_f^(-∑_i=1^np_i))× H^0(^1, K_^1⊗ (_f/_f^)^∨(∑_i=1^np_i))→ F.
In fact, since _f/_f^≅_^1(d-3)=_^1(n-2), the invertible sheaf K_^1⊗ (_f/_f^)^∨(∑_i=1^np_i) has degree -2-(n-2)+n=0, so that
K_^1⊗ (_f/_f^)^∨(∑_i=1^np_i)≅_^1
and there is a unique (up to a non-zero scalar) section ω∈ H^0(^1, K_^1⊗ (_f/_f^)^∨(∑_i=1^np_i)). We will show that ⟨ω, d^2_f(𝒱^2_a)⟩≠ 0 for nonzero a.
We construct a deformation (f_ϵ, p_1ϵ,…, p_nϵ) of (f,p_1,…, p_n) over F[[ϵ]] with first order deformation corresponding to 𝒱_a as follows. The invertible sheaf has local generator λ:=2·∂/∂ x+3t·∂/∂ y near p, with t·λ=df(∂/∂ t). Recall above we set
v_a:=(a/t)·∂/∂ t∈ H^0(^1, T_^1(p))
for a∈ F^×,
so df(v_a)=a·λ in ⊗_^1,0. Let s_a:=df(v_a)∈ H^0(^1, ). In particular, s_a in H^0(^1,) ⊆ H^0(^1,_f) is in H^0(^1,_f^) ⊆ H^0(^1,_f).
Fixing the isomorphism i_p*F ≅_f^ to be given by a ↦ a [λ], we see that
i_p*a = [s_a] ∈ H^0(_f^).
Thus 𝒱_a corresponds to the first order deformation f_1ϵ of f corresponding to the image of s_a in H^0(^1, _f) equipped with appropriate marked points.
Since H^1(^1, _f)={0}, the 1st order deformation f_1ϵ extends to a deformation f_ϵ over F[[ϵ]]. Locally in the coordinate system t, (x,y), the map f_ϵ is of the form
f_ϵ(t)=(t^2, t^3)+ϵ· (2a, 3at) mod ϵ^2.
This shows <ref>.
We choose points of ^1(F[[ϵ]]) deforming the p_i such that (f_ϵ, p_1ϵ,…, p_nϵ) determines an element of d() as follows. The global vector field v_a on U_1 gives us the automorphism ϕ_ϵ of U_1× F[[ϵ]]
ϕ_ϵ(t')=t'-ϵ· a t'^3.
In coordinates (t,ϵ), this is
ϕ_ϵ(t)=1/(1/t-ϵ· a/t^3) = t+ϵ·a/t+ϵ^2·a^2/t^3 mod ϵ^3
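(The second equality is the geometric series: 1/(1/t - ϵ· a/t^3) = t·(1 - ϵ· a/t^2)^-1 = t·(1 + ϵ· a/t^2 + ϵ^2· a^2/t^4 + …) = t + ϵ· a/t + ϵ^2· a^2/t^3 mod ϵ^3.)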
Let p_iϵ= ϕ_-ϵ(p_i)∈^1(F[[ϵ]]). Although the t, (x,y) coordinate system is not necessarily valid near p_i, we have
f_ϵ(p_iϵ)≡ f(p_i) mod ϵ^2
because
∂/∂ϵ f_ϵ(p_iϵ)|_ϵ = 0 = ∂/∂ϵ f_ϵ|_ϵ=0(p_i) + df(∂/∂ϵ p_iϵ|_ϵ=0)
= s_a(p_i) + df(∂/∂ϵ ϕ_-ϵ(p_i)|_ϵ=0)
= s_a(p_i) + df(-v_a(p_i))
= s_a(p_i) - s_a(p_i) = 0.
Thus the F[[ϵ]]-point v: Spec(F[[ϵ]]) →M̅_0,n(S, D)^ given by (f_ϵ, p_1ϵ,…, p_nϵ) determines a tangent vector which is in ker d(). It follows from (<ref>) that ker d() is 1-dimensional. Therefore, by (<ref>) the tangent vector corresponding to v
equals 𝒱_a∈ T_f(_0,n(S, D)^). This shows (<ref>).
We now give a cocycle representing d^2_f(𝒱^2_a) in H^1(^1,_f/_f^(-∑_i p_i)) in terms of a morphism h:U_1[[ϵ]]→ S, defined by
h(t', ϵ):=f_ϵ∘ϕ_-ϵ(t').
Note that dh_(t',0)(∂/∂ϵ)=0 by a chain rule calculation similar to the above, whence the canonical map ker(df_t') → ker(dh_(t',0)) is an isomorphism. It follows from (<ref>) that d^2h((∂/∂ϵ|_ϵ=0)^2) is thus a section of H^0(U_1, _f).[Another point of view on this is that for any vector field v on U_1[[ϵ]] extending ∂/∂ϵ|_ϵ =0, including the examples v=∂/∂ϵ and v=∂/∂ϵ+ϵ·∂/∂ t', one obtains a section in H^0(U_1, h^*TS) because the section dh(v) ∈ H^0(U_1[[ϵ]], h^*TS) vanishes along ϵ =0 and a section of a vector bundle has a first derivative which lies in the same vector bundle restricted to the vanishing locus of the section by taking the derivative in some local trivialization. The image of this section in H^0(U_1, _f) is independent of the choice of v.] We let ∂^2h/∂ϵ^2|_ϵ=0∈ H^0(U_1,_f) denote this section d^2h((∂/∂ϵ|_ϵ=0)^2).
Let t_i' be the t' coordinate of p_i. We compute
∂^2h(t', ϵ)/∂ϵ^2|_ϵ=0, t'=t_i'=d^2f_ϵ(p_iϵ)/dϵ^2|_ϵ=0
in f^*T_S, q_i/df(T_^1, p_i)=N_f⊗ F(p_i).
On the other hand,
d^2_f(𝒱_a^2)= (…, d^2f_ϵ(p_iϵ)/dϵ^2|_ϵ=0,…),
where
:⊕_i=1^nf^*T_S, q_i/df(T_^1, p_i)→ H^1(^1, _f(-∑_i=1^np_i))
is the boundary map in the cohomology sequence associated to (<ref>).
Let be the cover of ^1 given by U_1=^1∖{0}, U_0=^1∖{p_1,…, p_n}. This is indeed a cover because the p_i do not coincide with the point p = 0 where f is ramified by Theorem <ref> <ref> <ref>. By (<ref>), (<ref>),
we can represent d^2_f(𝒱_a^2) as the 1-cocycle in C^1(, _f/_f^(-∑_i p_i)) given by
[∂^2h/∂ϵ^2|_ϵ=0]∈ H^0(U_0∩ U_1, _f(-∑_i p_i)).
Moreover, the trace isomorphism H^1(^1, ω_^1/F)≅ F sends a class [η]∈ H^1(^1, ω_^1/F) represented by some η∈ C^1(,ω_^1/F)=H^0(U_0∩ U_1, ω_^1/F) to the residue Res_0 η.
For ω∈ H^0(^1, ω_^1/F⊗(_f/_f^)^∨(∑_ip_i)), the pairing ⟨ω, d^2_f(𝒱_a^2)⟩ is therefore given by
⟨ω,d^2_f(𝒱^2_a)⟩= Res_0 (∂^2h/∂ϵ^2|_ϵ=0·ω)
where ∂^2h/∂ϵ^2|_ϵ=0·ω is to be considered as a section of ω_^1/F over U_0∩ U_1 via the pairing
ω_^1/F⊗(_f/_f^)^∨(∑_ip_i)⊗_f/_f^(-∑_ip_i)→ω_^1/F.
To compute Res_0, we use a trivialization of _f/_f^ in a neighborhood of 0 given as follows: Since has local generator λ=2·∂/∂ x+3t·∂/∂ y, and
_f/_f^=f^*T_^2/, sending a section α·∂/∂ x+β·∂/∂ y of f^*T_^2 to 2β-3α t descends to give an isomorphism of _f/_f^(-∑_ip_i) with _^1 over ^1∖{p_1,…, p_n, ∞}. Define γ∈_^1,0^× so that ω will transform to a 1-form γ(t)dt via this isomorphism.
Combining the previous local expressions for f_ϵ(t) and ϕ_ϵ(t), we have
h(t, ϵ)=(t^2, t^3)+ϵ^2(3a^2/t^2, 3a^2/t) mod ϵ^3
in local coordinates t, (x, y), ϵ.
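As a consistency check, one can expand the composition directly: writing u = ϕ_-ϵ(t) = t - ϵ· a/t + ϵ^2· a^2/t^3 mod ϵ^3, one finds
u^2 = t^2 - 2ϵ a + 3ϵ^2 a^2/t^2 mod ϵ^3, u^3 = t^3 - 3ϵ a t + 6ϵ^2 a^2/t mod ϵ^3,
so that
f_ϵ(u) = (u^2 + 2ϵ a, u^3 + 3ϵ a u) = (t^2 + 3ϵ^2 a^2/t^2, t^3 + 3ϵ^2 a^2/t) mod ϵ^3.
The ϵ-linear terms cancel, as they must, since dh_(t',0)(∂/∂ϵ) = 0 along ϵ = 0.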
Thus ∂^2h/∂ϵ^2|_ϵ=0 maps to -6a^2/t under this local trivialization of _f/_f^(-∑_ip_i). Thus
∂^2h/∂ϵ^2|_ϵ=0·ω=-6a^2/t·γ(t)dt,
which yields
Res_0 (∂^2h/∂ϵ^2|_ϵ=0·ω)=-6γ(0)· a^2.
Thus, the quadratic form d^2_f is nonzero as claimed and hence the ramification index is two.
We now combine our results to compute the relative canonical bundle ω_ of : M̅_0,n(S,D)^→ S^n ∖ A to be _M̅_0,n(S,D)^(D_). Here, :M̅_0,n(S,D)^→ S^n∖ A is as given by Theorem <ref>. Recall that the relative cotangent sheaf is defined by Ω_:= Hom(^*(Ω_S^n/k),Ω_M̅_0,n(S,D)^/k) and the relative canonical bundle ω_ is given by ω_≅ω_M̅_0,n(S,D)^/k⊗^*(ω_S^n/k)^-1.
Let k be a field of characteristic 0. Suppose Basic Assumptions <ref> hold for k, S, D. We suppose that M̅_0,n(S,D)^ is non-empty.
* is ramified along D_ with ramification index two: at each geometric generic point f of D_, there are local analytic coordinates t_1,…, t_2n for M̅_0,n(S,D)^ at f and s_1,…, s_2n for S^n at (f) such that D_ has local defining equation t_1 and ^*(s_1)=t_1^2, ^*(s_i)=t_i for i=2,…, 2n.
* The determinant of the section d():_M̅_0,n(S,D)^→Ω_ of the relative cotangent bundle has divisor 1· D_, and thus determines an isomorphism
d():_M̅_0,n(S,D)^(D_)→ω_.
Noting that M̅_0,n(S,D)^ and S^n are both smooth k-schemes, it follows that ω_ is an invertible sheaf. Moreover, is flat and is unramified over M̅_0,n(S,D)^∖ D_, so we need only show that the ramification index of along D_ is two, which is Lemma <ref>.
(2) is an immediate consequence of (1) and the theorem on purity of the branch locus.
§ THE DOUBLE POINT LOCUS
We define the double point locus using ideas from <cit.>.
Given a composition of closed immersions Z ⊂ W ⊂ X, we define the subscheme of W residual to Z to be the subscheme defined by the ideal sheaf (I_W : I_Z). Recall that this is the ideal sheaf of all local sections s of _X such that s t lies in I_W for all local sections t of I_Z.
Let S be a smooth del Pezzo surface over a perfect field k equipped with an effective Cartier divisor satisfying Assumption <ref> <ref> <ref>. We work mostly in characteristic 0, but also have some analysis in characteristic p. Throughout this section, we assume that M̅_0,n(S,D)^ is non-empty, which implies that M^_0(S,D) is non-empty.
In characteristic 0, M̅_0,n(S,D)^ is non-empty if and only if M^_0(S,D) is non-empty, and these are both equivalent to M̅_0,n(S,D)^ being non-empty by Theorem <ref><ref><ref>. In characteristic p under Assumption <ref>, we also have that M̅_0,n(S,D)^ is non-empty if and only if M^_0(S,D) is non-empty by Proposition <ref>. So we could equally well assume that M^_0(S,D) is non-empty. When M^_0(S,D) is empty but (k,S,D) satisfies Assumption <ref> <ref> <ref> and Assumption <ref>, the associated Gromov-Witten invariants are defined to be 0. Note the consistency with Lemma <ref> and Corollary <ref>; over a dense open of S^n, has empty fiber in this case, so it makes sense to define the degree and the count to be 0.
Let d=- K_S· D≥1. Let n=d-1. Let n→M̅_0,n(S,D) denote the universal curve, n := M̅_0,n+1(S,D) and let : n→ S ×M̅_0,n(S,D) denote the universal map, or in other words, the product of evaluation on the (n+1)st marked point with the canonical projection. If the characteristic of k is 0, we may apply Theorem <ref> and obtain the smooth k-scheme _0,n(S,D)^. Let n^→_0,n(S,D)^ denote the pullback of n→M̅_0,n(S,D) to _0,n(S,D)^. In particular, n^ is a scheme. In positive characteristic, the map n^→_0,n(S,D)^ will be replaced by the pullback n^→_0,n(S,D)^ of n→M̅_0,n(S,D) to the locus _0,n(S,D)^ of parametrized curves with only ordinary double points .
Work in schemes over _0, n(S, D)^. So for example, we have S ×_0, n(S, D)^ and
Δ_S ↪ (S ×_0, n(S, D)^) ×__0, n(S, D)^ (S ×_0, n(S, D)^) ≅ S × S ×_0, n(S, D)^
Consider the product of the universal maps
×n^×__0,n(S,D)^n^→ S × S ×_0, n(S, D)^.
The preimage (×)^-1(Δ_S) of the diagonal Δ_S ⊂ S × S ×_0, n(S, D)^ contains the diagonal Δ_n^⊂n^×__0,n(S,D)^n^ as one irreducible component.
Under the hypotheses of Theorem <ref>, the double point locus is the subscheme
⊂n^×__0, n(S,D)^n^
defined to be the subscheme of (×)^-1(Δ_S) residual to Δ_n^. Let
π : →_0, n(S,D)^
denote the canonical map.
Now drop the assumption that the characteristic of k is 0. For S a smooth del Pezzo surface over k and D an effective Cartier divisor, define
⊂n^×__0, n(S,D)^n^
to be the subscheme of (×)^-1(Δ_S) residual to Δ_n^ and let π denote the projection map π: →_0, n(S,D)^.
Let k be a perfect field. Let S be a smooth del Pezzo surface over k equipped with an effective Cartier divisor D. The projection from the double point locus π: →_0,n^(S,D) is étale.
Over _0,n^(S,D) the product of universal evaluation maps
×n^×__0,n(S,D)^n^∖Δ_n^→ S × S ×_0, n(S, D)^
is transverse to Δ_S over _0, n(S, D)^. So, the morphism
∖Δ_n^ = (×)^-1(Δ_S)∖Δ_n^⟶^π_0, n(S, D)^
is smooth of relative dimension zero and thus étale. A straightforward argument based on Remark <ref> shows that
∩Δ_n^ = ∅.
Alternatively, this follows from Lemma <ref>.
For the remainder of this section, we assume k has characteristic zero. Note that we have
= π^-1(_0,n^(S,D)) ⊂.
We will need the following special loci in .
Let ⊂Δ_n^ denote the locus with geometric points given by a map f : ^1 → S and a point p ∈^1 where f has a simple cusp, together with marked points (p_1,…,p_n) on ^1 such that (f, p_1,… p_n) is in _0, n(S,D)^. Let ⊂ denote the locus with geometric points given by (f : ^1 → S, p_1, …, p_n) a geometric point of _0, n(S,D)^ and a pair of points p,q ∈^1 where f has a simple tacnode. Let ⊂ denote the locus with geometric points given by a geometric point (f : ^1 → S, p_1, …, p_n) of _0, n(S,D)^ and a pair of points p,q ∈^1 where f has a triple point.
Let : B →n^ be a family of stable maps corresponding to a curve → B and a map f : → S. Let
: ×_B →n^×__0, n(S,D)^n^
be the induced map. Let _ denote the ideal sheaf of . Let (p_1,p_2) be a point of ×_B such that f(p_1) = f(p_2). Let t_1,t_2, be local coordinates on at p_1,p_2, respectively. So, locally the ideal sheaf of the diagonal Δ_⊂×_B is generated by t_1 - t_2. Let s = (s_1,s_2) be local coordinates at f(p_1) = f(p_2). Then, since t_1 - t_2 is not a zero divisor, the pull-back of the colon ideal sheaf _ is given by
^*_ = ((s∘ f(t_1) - s∘ f(t_2))/(t_1 - t_2)).
We have ⊂.
Let f : ^1 → S be a map with a single simple cusp at p ∈^1. So (f,p,p) ∈Δ_n^ is an F point of . Let : (F) →n^ be the corresponding map, so we get an induced map
: ^1 ×^1 →n^×__0, n(S,D)^n^.
Choose local coordinates on ^1 at p and on S at f(p) as in Lemma <ref>. Let t_1,t_2, be copies of the local coordinate on . In particular, t_i vanishes at p. By Remark <ref>, we have locally at p,
^*_ = (((t_1^2,t_1^3) - (t_2^2,t_2^3))/(t_1 - t_2)) = (t_1 + t_2, t_1^2 + t_1 t_2 + t_2^2) = (t_1+ t_2, t_1^2),
and it is clear these equations vanish at t_1 = t_2 = 0.
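The last equality of ideals follows by substituting t_2 ≡ -t_1 mod (t_1 + t_2): then t_1^2 + t_1 t_2 + t_2^2 ≡ t_1^2 - t_1^2 + t_1^2 = t_1^2.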
The loci ,,⊂ are closed.
This follows from Theorem <ref><ref><ref>.
In light of Lemma <ref>, we equip ,, and with the reduced induced subscheme structure.
The projection from the double point locus π: →_0, n(S, d)^ is étale over a neighborhood of D_.
The proof is the same as that of Lemma <ref>.
To prove the double point locus is smooth at points of and , we introduce the following lemma.
Let k be a field, and let X and Y be smooth, integral, finite type k-schemes. Let Z⊂ Y be a closed subscheme and take z∈ Z. Suppose that there is a morphism f:X→ Y with z∈ f(X), and an integer ℓ such that
* there is an irreducible component Z_0 of Z containing z and of codimension ≤ℓ on Y,
* The closed subscheme X×_YZ of X is smooth of pure codimension ℓ on X.
Then Z is smooth of pure codimension ℓ in a neighborhood of z.
We may assume k is algebraically closed and z is a k-point of Z. Since X is smooth, Z/k is smooth in a neighborhood of z if and only if X×_k Z is smooth over X in a neighborhood of X ×_k z. Let Γ_f⊂ X×_kY be the graph of f. Note that X×_YZ is isomorphic to the intersection Γ_f ∩ X×_kZ. So changing notation, we may assume that f:X→ Y is a closed immersion, that is, we may assume that X is a smooth closed subscheme of Y.
Since the assertion is local on Y for the étale topology, we may assume that Y is a principal open subscheme of ^n_k. Since X is smooth, we may assume that X is a smooth complete intersection in Y, with ideal I_X=(g_1,…, g_m), where m is the codimension of X in Y.
Let i:X∩ Z→ Z be the inclusion. We have the exact sequence of _X∩ Z,z-modules
I_X,z/I_X,z^2⊗__X,z_X∩ Z,z → i^*Ω_Z/k, z→Ω_X∩ Z, z→ 0
Since X∩ Z is smooth of dimension n-m-ℓ at z, Ω_X∩ Z, z≅_X∩ Z, z^n-m-ℓ. Thus the sequence splits and
i^*Ω_Z/k, z≅Ω_X∩ Z, z⊕(d).
Since the images of g_1,…, g_m in d(I_X,z/I_X,z^2) generate (d), we have a surjection _X∩ Z,z^m→(d). Applying -⊗__X∩ Z, zk(z), we see that
dim_k(z)Ω_Z/k, z⊗__Z,zk(z)= dim_k(z)i^*Ω_Z/k, z⊗__X∩ Z,zk(z)=
dim_k(z)Ω_X∩ Z, z⊗__X∩ Z,zk(z)+ dim_k(z)(d) ⊗__X∩ Z,zk(z)
≤ (n-m-ℓ) + m=n-ℓ.
Choose generators f_1,…, f_s for I_Z, z⊂_Y,z and let x_1,…, x_n be the standard coordinates on ^n. Then
dim_k(z)Ω_Z/k, z⊗__Z,zk(z)=n-rank[∂ f_i/∂ x_j ](z)
Since
n-ℓ≥ dim_k(z)Ω_Z/k, z⊗__Z,zk(z)
by the previous, it follows that
rank[∂ f_i/∂ x_j ](z)≥ℓ
After reordering the f_i, we may assume that the matrix
[∂ f_i/∂ x_j ](z)_1≤ i≤ℓ
has rank ℓ, which implies that (after shrinking Y if necessary) the closed subscheme Z'⊂ Y defined by f_1,…, f_ℓ is smooth of codimension ℓ; shrinking Y again if necessary, we may assume that Z' is integral. But then Z_0 is a closed subscheme of Z' of the same dimension, so Z_0=Z' and Z_0⊂ Z⊂ Z', so Z=Z' and Z is smooth of codimension ℓ on Y.
The double point locus is smooth of dimension d-1 + n at the geometric points of the cuspidal locus and the map π : →_0, n(S,D)^ has ramification index 2 at . The restriction of π to a map → D_∩_0,n(S,D)^ is birational.
Let = (f,p) = ((f,p),(f,p)) represent a geometric point
F→⊂n×__0,n(S,D)^n.
Let q = f(p). We may assume p = 0 ∈^1.
Let t be the standard coordinate on ^1 ∖{∞} and let (x,y) be analytic coordinates on S at q as in Lemma <ref>.
Using Lemma <ref><ref>, choose a family of maps : ^1_F[[ϵ]]→ S such that |_{ϵ = 0} = f and near p,
(ϵ,t) = a_10ϵ + a_02t^2 + a_11ϵ t + a_03t^3 mod (ϵ^2,ϵ t^2,t^4)
where a_ij∈ F^2 are given by
a_02 = (1,0), a_11 = (0,3a), a_03 = (0,1).
Let
: ^1_F[[ϵ]]×_Spec(F[[ϵ]])^1_F[[ϵ]]→n^×__0, n(S,D)^n^
denote the induced map.
We have local coordinates on ^1_F[[ϵ]]×_Spec(F[[ϵ]])^1_F[[ϵ]] at (p,p,0) given by ϵ and two copies t_1,t_2, of the parameter t. Let be the ideal sheaf of on n×__0,n(S,D)^n. So, analytically locally ^* = (t_1,t_2,ϵ).
By Remark <ref>, the pull-back ^*_ is generated analytically locally by
Υ = ((ϵ,t_1) - (ϵ,t_2))/(t_1 - t_2) = a_02(t_1 + t_2) + a_11ϵ + a_03(t_1^2 + t_1 t_2 + t_2^2) mod (^*)^3 + (ϵ^2,t_1ϵ,t_2ϵ).
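Indeed, computing the difference quotient term by term: the constant term a_10ϵ cancels, (t_1^2 - t_2^2)/(t_1 - t_2) = t_1 + t_2, ϵ(t_1 - t_2)/(t_1 - t_2) = ϵ, and (t_1^3 - t_2^3)/(t_1 - t_2) = t_1^2 + t_1 t_2 + t_2^2.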
Since a_02 and a_11 are linearly independent, it follows that the subscheme determined by the ideal sheaf ^*_ is smooth of codimension 2 at (p,p,0). Apply Lemma <ref> with ℓ = 2, z = ,
X = ^1_F[[ϵ]]×_Spec(F[[ϵ]])^1_F[[ϵ]], Z = , Y = n^×__0, n(S,D)^n^,
and X ×_Y Z the subscheme of X determined by ^*_. Since is given by two equations, we can take Z_0 = Z. It follows that is smooth at of dimension d-1 + n.
Now assume that is a geometric generic point of . Let be as in the proof of Lemma <ref>. That is, we set the parameter ϵ to zero in the above. Let 𝔮⊂_(p,p),^1 ×^1 be the ideal sheaf of ^-1(π^-1(f)). By equation (<ref>), the quotient _(p,p),^1 ×^1/𝔮 is given by F[t_1,t_2]/(t_1+t_2,t_1^2), which has length two and induces an isomorphism to F after taking the reduced subscheme. Therefore, the ramification index of π at is 2. By Proposition <ref>(2), the map π: → D_∩_0,n(S,D)^ is generically a bijection; since we are in characteristic zero, this implies that π: → D_∩_0,n(S,D)^ is birational.
Let f : ^1 → S represent a point of D_. Let p ≠ p' ∈^1 such that q = f(p) = f(p') is the tacnode.
*
There exist local coordinates t,t' at p,p', respectively, and local analytic coordinates x,y, on S at q such that near p,
(x,y) ∘ f = (t,t^2)
and near p',
(x,y) ∘ f = (t',0).
*
There exists a family : ^1_F[[ϵ]]→ S such that |_{ϵ = 0} = f, and near p,
(x,y) ∘(t,ϵ) = (t,t^2 + ϵ) mod ϵ^2,
and near p',
(x,y) ∘(t',ϵ) = (t',0) mod ϵ^2.
The double point locus is smooth of dimension d-1+n at the geometric points of the tacnodal locus , the map π : →_0, n(S,D)^ has ramification index 2 at , and the map π|_: → D_ is two to one.
Let = ((f,p),(f,p')) represent a geometric point
F →⊂n×__0,n(S,D)^n.
Let q = f(p). Let t,t', be local coordinates at p,p', respectively, and let x,y, be analytic coordinates at q as in Lemma <ref><ref>. Let : ^1_F[[ϵ]]→ S be the family of Lemma <ref><ref>. Let
: ^1_F[[ϵ]]×_Spec(F[[ϵ]])^1_F[[ϵ]]→n^×__0, n(S,D)^n^
denote the induced map. We have local coordinates on ^1_F[[ϵ]]×_Spec(F[[ϵ]])^1_F[[ϵ]] at (p,p',0) given by t,t',ϵ.
The pull-back ^* _ is generated analytically locally by
(x,y)∘(ϵ,t) - (x,y)∘(ϵ,t') = (t-t',t^2 + ϵ) mod ϵ^2.
It follows that the subscheme determined by ^* _ is smooth of codimension 2 at (p,p',0). Apply Lemma <ref> with ℓ = 2, z = ,
X = ^1_F[[ϵ]]×_Spec(F[[ϵ]])^1_F[[ϵ]], Z = , Y = n^×__0, n(S,D)^n^,
and X ×_Y Z the subscheme of X determined by ^*_. Since is given by two equations, we can take Z_0 = Z. It follows that is smooth at of dimension d-1+n.
Now, assume that is a geometric generic point of . Let
|_0 : ^1 → S
be the restriction to ϵ = 0. Let 𝔮⊂_(p,p'),^1 ×^1 be the ideal sheaf of (|_0)^-1(π^-1(f)). By equation (<ref>), the quotient _(p,p'),^1 ×^1/𝔮 is given by F[t,t']/(t-t',t^2), which has length two. Therefore, the ramification index of π at is 2.
Finally, the map π|_: → D_ is two to one because
π(f,p,p') = π(f,p',p) = f.
The double point locus is smooth of dimension d-1+n. The map π : →_0, n(S,D)^ is finite, flat and generically étale. The ramification of π is supported on and , where it is simply ramified.
The smoothness and dimension of follow from Theorem <ref> <ref> and Lemmas <ref>, <ref>, <ref> and <ref>. The map π is quasi-finite because the fiber over a point of _0,n(S,D)^ represented by a map f : → S is contained in the union of the ramification locus of f and the locus where f is not 1-1. Since π is proper, it follows that π is finite. Since the domain and range of π are smooth of the same dimension and π is quasi-finite, it follows that π is flat. Lemmas <ref> and <ref> show that π is étale over and . In particular, it is generically étale. The ramification over and was computed in Lemmas <ref> and <ref>.
§ ORIENTING THE EVALUATION MAP
In this section, we continue to assume that _0, n(S,D)^ is non-empty.
Let A be a Noetherian ring. Let f:Y→ Z be a finite flat morphism of smooth A-schemes. It follows that f_*_Y is a locally free _Z-module. The multiplication map on _Y gives the morphism of _Z-modules
m:f_*_Y⊗__Zf_*_Y→ f_*_Y,
and since f_*_Y is a finite locally free _Z-module, we have the trace map
_f:f_*_Y→_Z
defined by sending s∈ f_*_Y(U) to the trace of the multiplication map × s: f_*_Y(U)→ f_*_Y(U). Rewriting the composition ∘ m as
δ: f_*_Y→ f_*_Y^∨,
we have the discriminant _f: det f_*_Y→ det( f_*_Y^∨), given by
_f = det δ.
Equivalently,
_f:_Z→ (det f_*_Y)^⊗ -2.
Now suppose that f is étale over each generic point of Z, and that Z is reduced.
Since _f is an isomorphism where f is étale, we see that _f is generically injective, hence injective since Z is reduced. This gives us the effective Cartier divisor
div(_f), supported on the branch locus of f.
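For orientation, here is a standard example of the discriminant (our own illustration, valid when 2 is invertible, as in characteristic 0): let f: Y→ Z be a finite flat double cover with f_*_Y=_Z⊕_Z· y and y^2=g for some g∈_Z(Z). In the basis 1, y the trace of multiplication by 1 is 2, by y is 0, and by y^2 is 2g, so δ has matrix [ 2 0; 0 2g ] and _f = det δ = 4g. Its divisor is the branch locus {g=0}, appearing with multiplicity e-1=1.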
For V a locally free sheaf of rank r on Z, we write det^n V for the n-th tensor power over _Z of the invertible sheaf det V=Λ^rV. Recall Definition <ref>.
Let S be a del Pezzo surface of degree d_S over a field k of characteristic 0, and let D be an effective Cartier divisor. Suppose that Basic Assumptions <ref> <ref> <ref> hold. We may then apply Theorem <ref> and obtain : M̅^_0,n(S,D)→ S^n. By Definition <ref>, we have the map π : →_0, n(S,D)^ from the double point locus, which is finite, flat and generically étale by Corollary <ref>. We therefore have _π: _M̅^_0,n(S,D)→ (π_*_)^⊗ -2 by the above construction. The results of Section <ref> compute the divisor of this section.
We have
div(_π)=1· D_+2· D_
and thus _π defines an isomorphism
_π:_M̅^_0,n(S,D)(D_)→ [π_*_(-D_)]^⊗ -2
A finite, flat map f: X → Y between smooth schemes over a field has a different 𝔇_f, which is the ideal sheaf of an effective Cartier divisor <cit.>. In characteristic 0, the different is computed <cit.> to be the product of ideal sheaves p^{e_p-1} where p runs over the codimension 1 ideal sheaves of X where f is ramified and e_p is the ramification index. By Corollary <ref>, it follows that 𝔇_π = 1· + 1·. By Proposition 14 of Chapter 3 in <cit.>, div(_π) = π_* (𝔇_π). Thus div(_π)= π_* + π_*. By Lemmas <ref> and <ref> respectively,
π_* = D_ and π_* = 2· D_.
We are now in a position to orient the evaluation map in characteristic 0.
: M̅_0,n^→ S^n is a map between smooth schemes by Theorem <ref> <ref> and is therefore a local complete intersection morphism <cit.>. By Theorem <ref>, d() defines an isomorphism
d():_M̅_0,n(S,D)^(D_)→ω_
Let k be a field of characteristic 0 and let S and D be as in Theorem <ref>. Let be the invertible sheaf on M̅^_0,n(S,D) given by
=[π_*_(-D_)]^-1
Then the composition d∘_π^-1:^⊗ 2→ω_ is an isomorphism on M̅^_0,n(S,D).
This is a direct consequence of Theorem <ref> and Theorem <ref>.
§ THE SYMMETRIZED MODULI SPACE
Let k be a field of characteristic 0. Let S be a del Pezzo surface equipped with an effective Cartier divisor D satisfying Basic Assumptions <ref> <ref> <ref> and such that -K_S· D≥1. We continue to assume that _0, n(S,D)^ is non-empty. See Remark <ref>.
The symmetric group _n acts freely on M̅_0,n(S,D) by permuting the marked points, and acting trivially on the underlying curve and the morphism to S. This extends to an action on the universal curve X_0,n→M̅_0,n(S,D) and the usual permutation action on S^n, giving us the following _n-equivariant diagram.
n →M̅_0,n(S,D) →^ S^n
Let S^n_0 denote the complement of the pairwise diagonals in S^n, so the restriction of the _n action to S^n_0 is free. Let n=d-1. Moreover, (_0,n(S,D)^) ⊂ S^n_0 because by Theorem <ref> <ref> there are no contracted components in the stable maps of _0,n(S,D)^. As above, let n^ denote the inverse image of _0,n(S,D)^ under n→M̅_0,n(S,D).
Thus, we obtain the following _n-equivariant diagram in which all actions are free.
 → n^×__0, n(S,D)^n^, π: →M̅^_0,n(S,D), : M̅^_0,n(S,D) → S^n_0
Since S^n is projective over k and the maps and π are quasi-finite (Theorem <ref><ref> and Corollary <ref>), the schemes M̅^_0,n(S,D), S^n_0, and are quasi-projective. So, one may take their quotients by _n in the category of quasi-projective k-schemes. Since the actions are free, these quotients are smooth over k. We denote the respective quotients by _, M̅^_0,n,(S,D), and Sym^n_0 S. We denote the induced maps by π^ and ^. Note that Sym^n_0 S is an open subscheme of the standard nth symmetric product Sym^n S. Thus we obtain the following diagram of smooth quasi-projective k-schemes.
_ →^π_ M̅^_0,n,(S,D) →^_ Sym^n_0 S
Observe that all squares in the following diagram are Cartesian, where the vertical maps are the quotient maps.
 →^π M̅^_0,n(S,D) →^ S^n_0
↓        ↓        ↓
_ →^π_ M̅^_0,n,(S,D) →^_ Sym^n_0 S
For □∈{, , }, we let D^_□ denote the reduced image of D_□ in M̅^_0,n,(S,D).
Let k,S,D be as in Theorem <ref>.
* The canonical section d_:_M̅^_0,n,(S,D)→ω__ has divisor 1· D^_ and induces an isomorphism
d_:_M̅^_0,n,(S,D)(D^_)→ω__.
* The divisor of _π_:_M̅^_0,n,(S,D)→^-2π^_*_^ is D^_+2· D_^ and induces an isomorphism
_π^:_M̅^_0,n,(S,D)(D^_)→ [π^_*_^(-D^_)]^⊗ - 2
*
Letting ^:=[π^_*_^(-D^_)]^-1, we have the isomorphism
d_∘_π_^-1:(^)^⊗ 2→ω__.
This follows from Theorem <ref> and Theorem <ref>, noting that the relative dualizing sheaf ω_f of a morphism f is compatible with étale base-change, as is the divisor of the discriminant of a morphism, and the divisor of a section of an invertible sheaf is detectable after finite étale base-change. Specifically, the fact that div(d)=1· D_ and div(_π)=1· D_+2· D_ implies that
div(d^)=1· D^_ and div(_π^)=1· D^_+2· D^_; the remaining assertions are direct consequences of these two identities.
§ TWISTING THE DEGREE MAP
As before, we let k be a field. Let S be a smooth del Pezzo surface over k equipped with an effective Cartier divisor D. Let k ⊆ denote a separable closure of k. For a k-scheme Y and field extension k ⊂ L, we write Y_L for Y ×_k L. Let
σ= (L_1, …, L_r)
be an r-tuple of subfields L_i ⊂ containing k for i=1,…, r subject to the requirement that ∑_i=1^r [L_i : k] = n. We think of σ as the fields of definition of a list of points of S that our curves will be required to pass through.
The list σ is used to define twists _σ of the evaluation map : M̅_0,n(S, D) → S^n in the following manner. The Galois group (/k) acts on the -points of k-schemes. Thus σ gives rise to a canonical homomorphism ρ(σ) :(/k) →_(σ), where (σ) denotes the -points of the k-scheme ∐_i=1^r Spec L_i and _(σ)≅_n denotes the symmetric group. For convenience, we fix an identification (σ) = {1,2,…, n} and thus a canonical isomorphism _(σ) = _n.
There is a canonical inclusion of _n into (S^n). We include _n into (_0,n(S,D)) by permutation of the marked points, and acting trivially on the underlying curve and the morphism to S: for τ in _n, set
τ(u: C → S, p_1, …, p_n) = (u: C → S, p_τ^-1(1), …, p_τ^-1(n)).
Let X be S^n, _0,n(S,D), or the double point locus . The 1-cocycles
g ↦ρ(σ)(g) × g
(/k) → (X_)
determine twists X_σ of X. Since _ and π_: _→_0, n(S, D)^_ are Galois equivariant for the twisted action, they descend to k-maps denoted _σ and π_σ respectively
_σ: _0,n(S,D)_σ→ (S^n)_σ
π_σ: _σ→_0,n(S,D)_σ.
The twist (S^n)_σ of S^n by σ can be expressed as the restriction of scalars
(S^n)_σ≅∏_i=1^r _L_i/k S,
allowing us to view _σ as a map with this codomain.
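For example, in the simplest case r=1 and σ=(L) with [L:k]=n, this reads (S^n)_σ≅_L/k S_L, whose k-points are the L-points of S; informally, the twisted evaluation map _σ then imposes passage through a single closed point with residue field L together with its n Galois conjugates.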
In this section, we orient an appropriate restriction of _σ in characteristic 0. Assume that k,S, and D are as in the hypotheses of Theorem <ref>. We continue to assume that _0, n(S,D)^ is non-empty. (See Remark <ref>.) We may assume the set A⊂ S^n used to construct M̅^_0,n(S,D) in Theorem <ref> is stable under the action of _n. This action then restricts to an action on S^n -A defining an open k-subscheme (S^n -A)_σ⊂ S^n_σ whose closed complement A_σ has codimension ≥ 2.
Forgetting the marked points determines a k-map M̅_0,n(S,D)^_σ→M̅_0(S,D) from the twisted good moduli space to the untwisted moduli space of stable curves because _n acts trivially on the underlying curve. For □∈{, , }, we let D_□,σ denote the preimage of D_□ under this map.
Let k,S,D be as in Theorem <ref>.
* _σ: M̅^_0,n(S,D)_σ→ (S^n)_σ is a map between smooth k-schemes.
*
The canonical section d_σ:_M̅^_0,n(S,D)_σ→ω__σ has divisor 1· D_, σ and induces an isomorphism
d_σ:_M̅^_0,n(S,D)_σ(D_, σ)→ω__σ.
*
The divisor of _π_σ:_M̅^_0,n(S,D)_σ→ [(π_σ)_*__σ]^⊗ -2 is
D_,σ+2· D_,σ
and induces an isomorphism
_π_σ:_M̅^_0,n(S,D)_σ(D_,σ)→ [ (π_σ)_*__σ(-D_,σ)]^⊗ -2
*
Letting _σ:=[(π_σ)_*__σ(-D_,σ)]^-1, we have the isomorphism
d_σ∘_π_σ^-1:(_σ)^⊗ 2→ω__σ.
Let L be a finite normal extension of k containing L_i for i=1,…, r. Then the cocycle (<ref>) factors through (L/k), and there is a canonical isomorphism X_L ≅ X_σ, L for X = M̅^_0,n(S,D), S^n - A or S^n. Similarly the base-change _σ, L of _σ is identified with _L via these canonical isomorphisms.
<ref> then follows from Theorem <ref> and the smoothness of S because smoothness is fpqc local and may therefore be checked after base-change to L.
Note that k ⊂ L is étale, as k has characteristic 0. The claims <ref> and <ref> follow from Theorem <ref> and Theorem <ref>, respectively, noting that the relative dualizing sheaf ω_f of a morphism f is compatible with étale base-change, as is the divisor of the discriminant of a morphism, and the divisor of a section of an invertible sheaf is detectable after finite étale base-change.
<ref> follows from <ref>, <ref> and <ref>.
For the comparison of the ^1-degrees corresponding to the orientations of Theorem <ref> <ref> and Theorem <ref> <ref> in <cit.>, we note that there is a pullback diagram
M̅^_0,n(S,D)_σ → M̅^_0,n,(S,D)
_σ ↓        ↓ _
S^n_σ → Sym^n S
where the horizontal maps are determined by the quotient maps over the algebraic closure or L (e.g. S^n_L→^n S_L), where L is a finite normal extension of k containing the L_i. Since the bottom horizontal map is étale over Sym^n_0 S, the upper horizontal map is étale.
§ POSITIVE CHARACTERISTIC
In this section we will extend many of our constructions that up to now have been restricted to characteristic zero to del Pezzo surfaces in positive characteristic. The method is to lift to characteristic zero.
§.§ Lifting to characteristic zero
We first recall some basic facts from deformation theory.
Let Λ be a complete discrete valuation ring with residue field k and quotient field K.
* Let X, Y be smooth, proper Λ-schemes and let f_0:Y_k→ X_k be a morphism. Suppose that H^1(Y_k, f_0^*T_X_k/k)=0. Then there is a Λ-morphism f:Y→ X with f_k=f_0. If f_0 is a closed immersion, then so is f.
* Let X_0 be a smooth proper k-scheme. Suppose H^2(X_0, T_X_0/k)=0. Then there is a smooth proper Λ-scheme X with an isomorphism ϕ:X_k X_0 over k. If in addition H^1(X_0, T_X_0/k)=0, then (X, ϕ) is unique up to isomorphism over Λ.
* Let X be a proper Λ-scheme and let _0 be an invertible sheaf on X_k. If H^2(X_k, _X_k)=0, there is an invertible sheaf on X and an isomorphism ψ:_k_0 of coherent sheaves on X_k. If in addition H^1(X_k, _X_k)=0, then (,ψ) is unique up to isomorphism of invertible sheaves on X.
* Let be an invertible sheaf on a proper Λ-scheme p:X→Λ. If H^1(X_k, _k)=0, then p_* is a free Λ-module and the natural map p_*⊗_Λ k→ H^0(X_k, _k) is an isomorphism. In particular, each section s_0∈ H^0(X_k, _k) lifts to a section s∈ H^0(X, ).
<ref> Let X̂, Ŷ denote the formal schemes associated to the Λ-schemes X, Y. By <cit.>, f_0 extends to a morphism of formal schemes f̂:Ŷ→X̂. By <cit.>, there is a unique Λ-morphism f:Y→ X inducing f̂ on the formal schemes. In particular, f_k=f_0.
If moreover f_0 is a closed immersion, then it follows that f̂:Ŷ→X̂ is a (formal) closed immersion. Then <cit.> implies that f:Y→ X is a closed immersion.
<ref> This can be found in <cit.>.
<ref> Use <cit.>.
<ref> Apply <cit.> with E=Λ_X, F=.
Let S be a del Pezzo surface over a field k, with effective Cartier divisor D. As above, let d_S=_k(K_S· K_S) and d=_k(-K_S· D). Then
* H^1(S,_S)=H^2(S, _S)=0.
* Let T_S/k denote the tangent sheaf. Then H^2(S, T_S/k) =0 and
dim_k H^1(S, T_S/k)= 0 if d_S≥ 5
2(5-d_S) if 1≤ d_S< 5
* H^1(S, _S(D))=0.
Since cohomology commutes with flat base-change, we may extend from k to its algebraic closure, and assume from the start that k is algebraically closed. Then S is either ^1×^1 or is a blow-up of ^2 at r:=9-d_S≥0 points.
For <ref>, if S=^1×^1, then the Künneth formula gives H^1(S, _S)≅ H^1(^1, _^1)^2=0, H^2(S, _S)≅ H^1(^1, _^1)^⊗_k2=0.
For S=^2, the vanishing of H^i(^2, _^2) for i>0 may be found in <cit.>.
If π:S→^2 is the blow-up of ^2 at {p_1,…, p_r}, r≥1, let E_i=π^-1(p_i). We compute H^i(S, _S) via the Leray spectral sequence
E_2^p,q=H^p(^2, R^qπ_*_S)⇒ H^p+q(S,_S)
Since π_*_S=_^2, we need only show that R^qπ_*_S=0 for q>0. We use the theorem on formal functions:
(R^qπ_*_S)_p_i=lim_←, n≥0 H^q(E_i, _S/_E_i^n+1)
As E_i≅^1 and _E_i^n/_E_i^n+1≅_^1(n), we find that
(R^qπ_*_S)_p_i=0; clearly (R^qπ_*_S)_p=0 for p not among the p_i, completing the proof of <ref>.
For <ref>, in case S=^1×^1, we have
T_S/k=p_1^*_^1(2)⊕ p_2^*_^1(2), from which <ref> easily follows. If π:S→^2 is the blow-up of ^2 at {p_1,…, p_r}, let E_i=π^-1(p_i). If r=0, we have the Euler sequence
0→_^2→_^2(1)^3→ T_^2/k→0
and H^i(^2, _^2(d))=0 for i>0, d≥-1, giving H^i(^2, T_^2/k)=0 for i>0. This also shows that dim_k H^0(^2, T_^2/k)=8.
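Concretely, the count is: the Euler sequence gives dim_k H^0(^2, T_^2/k)= dim_k H^0(^2, _^2(1)^3) - dim_k H^0(^2, _^2) = 3·3 - 1 = 8, using H^1(^2, _^2)=0.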
For 0<r, we have the exact sequence
0→ T_S/k → π^*T_^2/k→⊕_j=1^ri_j*_E_j(-E_j^(2))→ 0
with i_j:E_j→ S the inclusion.
Identifying E_j with ^1, we have _E_j(-E_j^(2))≅_^1(1), so H^i(_E_j(-E_j^(2)))=0 for i>0. Using the Leray spectral sequence again, we see that H^i(S, π^*T_^2/k)≅ H^i(^2, T_^2/k)=0 for i>0. Thus H^2(S, T_S/k)=0 and we have the exact sequence
H^0(^2, T_^2/k) → ⊕_j=1^rH^0(E_j, _E_j(-E_j^(2)))→ H^1(S, T_S/k)→ 0.
Taking parameters (x, y) at p_j, we identify E_j with ^1, _E_j(-E_j^(2)) with _^1(1), T_^2, p_j with k·∂/∂ x⊕ k·∂/∂ y, and we have i_j^*(∂/∂ x)=-X_1, i_j^*(∂/∂ y)=X_0. This identifies π_*(_E_j(-E_j^(2))) with T_^2, p_j, giving the exact sequence
H^0(^2, T_^2/k) → ⊕_j=1^rT_^2,p_j→ H^1(S, T_S/k)→ 0.
The automorphism group _3 of ^2 acts transitively on 4-tuples of points, no three of which lie on a line. Since S is a del Pezzo surface, S has no -a curves for a>1, so this condition is satisfied for the set {p_1,…, p_r}, and thus the map ∑_ji_p_j^* is surjective for r≤ 4. For r=4, counting dimensions shows ∑_ji_p_j^* is an isomorphism, and for r>4, ∑_ji_p_j^* is injective, giving
dim_k H^1(S, T_S/k)=2r-8=2(5-d_S)
as claimed.
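As a consistency check: for a cubic surface (d_S=3, r=6) this gives dim_k H^1(S, T_S/k)=4, the classical count of moduli of cubic surfaces. The injectivity of ∑_ji_p_j^* for r>4 can be seen directly: a nonzero global vector field on ^2 vanishes either at no more than three points or along a line together with at most one further point, and since no three of the p_j are collinear, any vector field vanishing at all of p_1,…, p_r with r≥ 5 must be zero.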
For <ref>, we have the exact sequence
0→_S→_S(D)→ i_D*_D(D^(2))→ 0
so we reduce to showing H^1(D, _D(D^(2)))=0. Letting ω_D denote the dualizing sheaf on D, Serre duality gives H^1(D, _D(D^(2)))≅ H^0(D, ω_D⊗_D(-D^(2)))^∨. But the adjunction formula says ω_D=K_S(D)⊗__S_D, so ω_D⊗_D(-D^(2))≅_D(K_S· D). Since -K_S is ample, deg _D(K_S· D)=K_S· D<0, so H^0(D, _D(K_S· D))=0.
Let S be a del Pezzo surface over a field k, with effective Cartier divisor D. Let d_S=_k(K_S· K_S) and d=_k(-K_S· D). Let Λ be a complete discrete valuation ring with residue field k and quotient field K. Then
* There is a smooth proper Λ-scheme π:→Λ with an isomorphism ϕ:_k S.
* For each lifting (, ϕ) of S over Λ as in <ref>, letting i:S→ be the closed immersion induced by ϕ, the restriction map i^*:()→(S) is an isomorphism.
* For each lifting (, ϕ) of S over Λ as in <ref>, is a del Pezzo surface over Λ and the generic fiber _K is a del Pezzo surface over K. Moreover, we have d__K=d_S.
* For each lifting (, ϕ) of S over Λ as in <ref>, there is an effective Cartier divisor on with ϕ(_k)=D. Moreover, we have _K(-K__K·_K)=d.
<ref> follows from Proposition <ref><ref> and Lemma <ref><ref>.
For <ref>, we have H^1(S, _S)=H^2(S, _S)=0 by Lemma <ref><ref>. Applying Proposition <ref><ref> shows that i^* is an isomorphism.
For <ref>, -K_S is ample, and K_S lifts canonically to the relative dualizing sheaf ω_/Λ, which restricted to _K is the canonical sheaf K__K. By <cit.>, there is an ample invertible sheaf on with ϕ(_k)=-K_S. But then by <ref>, is isomorphic to ω_/Λ^-1, so -K__K is ample on _K. Thus is a del Pezzo surface over Λ and _K is a del Pezzo surface over K. The assertion that d__K=d_S follows from the conservation of intersection numbers (see e.g. <cit.>).
Finally, to prove <ref>, we have H^1(S, _S(D))=0 by Lemma <ref><ref>, and by <ref>, there is an invertible sheaf on lifting _S(D). Let s_0∈ H^0(S, _S(D)) be the canonical section, so div(s_0)=D. By Proposition <ref> <ref> we may lift s_0 to a section s∈ H^0(, ); letting =div(s), we see that ϕ(_k)=D. The identity _K(-K__K·_K)=d follows as above by conservation of intersection numbers.
Let S be a del Pezzo surface over a field k, with effective Cartier divisor D, and let , be a lifting of (S,D) over Λ as in Lemma <ref>. Suppose that S, D satisfy Basic Assumptions <ref><ref> <ref>. Then _K, _K satisfy
Basic Assumptions <ref><ref>, <ref>, <ref> and Assumption <ref>.
By Lemma <ref>, we have d__K=d_S and d_K=d. Thus Basic Assumption <ref> <ref> for S,D implies this assumption for _K, _K. Suppose E⊂ S is a -1 curve. Then by Lemma <ref><ref>, there is a lifting Ẽ of E to a relative Cartier divisor on , and Ẽ_K is a -1 curve on _K. Moreover, by
Lemma <ref> <ref>, if D=m· E, then _K=m·Ẽ_K, so Basic Assumptions <ref><ref> for S, D imply this assumption for _K, _K. Basic Assumption <ref><ref> is trivially satisfied for _K, _K since K has characteristic zero. Similarly, Lemma <ref> implies that _K, _K satisfy Assumption <ref>.
§.§ The moduli space M̅_0,n(, )^ and its first properties
Let k be a perfect field of characteristic p>3. Let S be a del Pezzo surface over k with effective Cartier divisor D. We assume that D is not the zero divisor; let d= (- K_S· D) ≥1.
Let Λ be a complete discrete valuation ring with residue field k and quotient field K of characteristic 0. We fix a lifting (→Λ, ) of (S, D), which exists by Lemma <ref>; also by that result, the generic fiber _K is a del Pezzo surface with d__K=d_S, and the effective Cartier divisor _K on _K has degree d_K:=_K(-K__K·_K)=d.
The following elementary lemma will be used below.
Let Y →(Λ) be of finite type and let Z' ⊂ Y_K be a closed subscheme. Let Z be the closure of Z' in Y. Then
* Z is flat over Λ. In particular, no irreducible component of Z is contained in the fiber of Y over the closed point of Λ.
* Suppose that Z' is reduced. Then Z is reduced.
* Let W be a reduced closed subscheme of Y such that no irreducible component of W is contained in the fiber of Y over the closed point of Λ. Then W is flat over Λ.
* Let W⊂ Y be a closed subscheme of Y containing Z', with support of W equal to the support of Z and with W_K=Z'. Suppose that the special fiber W_k is reduced. Then W=Z.
We claim that the sheaf _Z is t-torsion free, where t∈Λ is a generator of the maximal ideal. This implies the result, since a Λ-module M is flat if and only if M is t-torsion free.
To see that _Z is t-torsion free, we may assume that Y is affine and finite type over Λ, Y= Spec A. Then Y is a closed subscheme of an affine space over Λ, ^n_Λ, and the closure of Z' in Y is the same as the closure in ^n_Λ, so we may assume that Y=^n_Λ. Let K be the quotient field of Λ and let I'⊂ K[x_1,…, x_n] be the ideal of Z'. Then the ideal I of Z in Λ[x_1,…, x_n] is the largest ideal J such that JK[x_1,…, x_n]=I'.
Take x̅∈Λ[x_1,…, x_n]/I such that tx̅=0. Lifting x̅ to x∈Λ[x_1,…, x_n], we have tx∈ I. But then the image of x in K[x_1,…, x_n] is in tI'=I', so by maximality of I, we have x∈ I and x̅=0. This proves the first assertion of <ref>.
For <ref> suppose that Z' is reduced. Let ⊂_Z be the ideal sheaf of Z_red in Z. Then since Z' is reduced and Z_K=Z', we have _K=0. Again by the maximality of _Z, we must have =0, so Z is reduced.
Let W⊂ Y be as in <ref> and let W'⊂ W be the closure of W_K. By <ref>, <ref>, W' is flat over Λ. But as W' and W have the same support and W is reduced, we must have W'=W, so W is flat over Λ, proving <ref>.
For <ref>, let _W⊂_Y be the ideal sheaf of W. Again by maximality of _Z, we have _W⊂_Z; let ⊂_W be the image of _Z. Since W_K=Z'=Z_K, we have _K=0, that is, is supported on Y_k. Applying -⊗_Λ k to the exact sequence
0→→_W→_Z→ 0
and recalling that Z is flat over Λ, we have the exact sequence
0→/t→_W_k→_Z_k→ 0
But Z_k is a closed subscheme of W_k with the same irreducible components, and W_k is reduced, so Z_k=W_k and thus /t=0. Since is supported on Y_k, it follows from Nakayama's lemma that =0, so W=Z.
We recall the moduli stack M̅_0,n(, ), which was discussed in some more detail in Section <ref>. We have the evaluation map
: M̅_0,n(, ) →^n
lifting _k:M̅_0,n(S, D)→ S^n; here we write ^n for the n-fold fiber product of over Λ.
We now extend the construction of the open subset M̅_0,n(S',D')^ as outlined in Theorem <ref> to the mixed characteristic case.
Suppose that (S, D) satisfies Basic Assumptions <ref> <ref>, <ref> and Assumption <ref>. Then by Lemma <ref>, , satisfy all the Basic Assumptions <ref>.
Thus, we may apply Theorem <ref>, take a closed subset A_K ⊂^n_K as in Theorem <ref>, and let A̅_K⊂^n be its closure. By Lemma <ref>, A̅_K has codimension ≥ 2 in ^n. Recalling that M^_0,n(S,D) is open in M̅_0,n(S,D) by Lemma <ref>, we let A_k be the closed subset _k (M̅_0,n(S,D) ∖ M^_0,n(S,D)) of S^n_k. By Corollary <ref>, A_k has positive codimension in S^n_k, since we are assuming that S, D satisfy Assumption <ref>. Let
: = A̅_K∪ A_k
and define
M̅_0,n(, )^ := M̅_0,n(, ) - ^-1( ).
We may freely enlarge , as long as we ensure that remains closed in ^n, _K satisfies the conditions of
Theorem <ref> for K, _K, _K, and _k has positive codimension in S^n.
For the remainder of <ref>, we will assume that S, D satisfies the conditions of Construction <ref>, that is Basic Assumptions <ref> <ref>, <ref> and Assumption <ref> all hold for S, D.
We show in the Appendix (Theorem <ref>) that del Pezzo surfaces with d_S≥3 in characteristic ≥ 3 satisfy Assumption <ref>.
Our next task is to show that M̅_0,n(, )^ is smooth over Λ; we first need a lemma.
Let f_0:^1→ S be a morphism in M_0(S,D).
If f_0 is in M_0^(S,D), then f_0 lifts to a morphism f∈ M_0^(,).
Since f_0 is unramified, we have _f_0≅_^1(d-2). Since d≥1, we have H^1(^1, _f_0)=0. From the exact sequence
0→ T_^1→ f_0^*T_S→_f_0→0
and the fact that T_^1≅_^1(2), we see that H^1(^1, f_0^*T_S)=0. Applying Proposition <ref>, we see that f_0 lifts to a morphism f:^1_Λ→. By Lemma <ref><ref>, we see that f is in M_0(, ). Since the support of the cokernel of df:f^*Ω_/Λ→Ω_^1_Λ/Λ is closed and has empty intersection with the special fiber ^1_k, the fact that ^1_Λ is proper over Λ implies that this cokernel is zero, hence f is unramified.
Let
M^_0,n(, )^:=M^_0,n(, )∩M̅_0,n(, )^.
M̅_0,n(, )^ is smooth over Λ. Moreover,
M̅_0,n(, )^ is non-empty if and only if M̅_0,n(_K, _K)^ is non-empty.
After replacing Λ with an unramified extension Λ→Λ', with Λ' a complete discrete valuation ring with residue field the algebraic closure of k, applying the base-change to Λ' and changing notation, we may assume that k is algebraically closed.
We first consider the case in which M̅_0,n(_K, _K)^ is empty. We claim that in this case, M̅_0,n(, )^ is itself empty. Indeed, by the construction of M̅_0,n(, )^, this is the same as asserting that
M^_0(S,D) is empty. If not, take a k-point f_0:^1→ S of M^_0(S,D). By Lemma <ref>, f_0 lifts to a morphism f∈ M_0^(,), and thus restricts to f_K∈ M_0^(_K,_K). But then
M_0^(_K,_K)⊃ M_0^(_K,_K)≠, hence by
Theorem <ref><ref>, M̅_0,n(_K, _K)^≠, contrary to our assumption. This proves the second assertion in the statement of the proposition.
Thus, if M̅_0,n(_K, _K)^ is empty, then M̅_0,n(, )^ is empty and hence is smooth over Λ, as desired.
We now assume that M̅_0,n(_K, _K)^ is non-empty.
Since M̅_0,n commutes with base-change, the generic fiber of M̅_0,n(, )^ is smooth by construction and Theorem <ref>, and is non-empty by assumption. Similarly, the special fiber of M̅_0,n(, )^ is contained in M_0,n^(S,D) by construction, which is smooth by Lemma <ref>. Thus the structure map M̅_0,n(, )^→Λ has smooth fibers. By <cit.>, it is then enough to show that M̅_0,n(, )^ is flat over Λ.
Let Z be the closure of the generic fiber M̅_0,n(, )^_K in
M̅_0,n(, )^. By Lemma <ref><ref>, it suffices to show that Z=M̅_0,n(, )^.
Clearly Z is a closed subscheme of M̅_0,n(, )^. We first show that Z and M̅_0,n(, )^ have the same support.
To show this, it suffices to show that for each point x_0 in the special fiber M̅_0,n(, )^_k, there is a point x∈M̅_0,n(, )^_K that specializes to x_0. In particular, if M̅_0,n(, )^_k is empty, there is nothing to prove, so assume that M̅_0,n(, )^_k is non-empty.
Choose a point
x_0:=(f_0, p_*) ∈M̅_0,n(, )^_k⊂ M_0,n^(S,D).
By Lemma <ref>, f_0 lifts to a morphism f∈ M_0^(,). Since Λ is a complete discrete valuation ring, the k-points p_1,…, p_n of ^1_k lift to Λ-points 𝔭_1,…, 𝔭_n of ^1_Λ, giving us the lifting of (f_0, p_*) to a point (f, 𝔭_*) of M̅_0,n(, ).
Because the closure in M̅_0,n(, ) of the complement of M_0,n(_K, _K)^ in M̅_0,n(_K, _K) is disjoint from M_0,n(, )^ by construction, it follows that x:=(f, 𝔭_*) is a point of M̅_0,n(, )^. Thus x_0 is a specialization of x, as desired.
Finally, since M̅_0,n(, )^_k is smooth over k, the special fiber M̅_0,n(, )^_k of M̅_0,n(, )^ is reduced. We have Z_K=M̅_0,n(, )^_K by construction. Thus by Lemma <ref><ref>, we have M̅_0,n(, )^=Z, completing the proof.
For the remainder of <ref>, we assume that M̅_0,n(, )^ is non-empty; equivalently (Proposition <ref>), M̅_0,n(_K, _K)^ is non-empty.
§.§ Divisors and the double point locus for M̅_0,n(, )^
The morphism : M̅_0,n(, )^→^n - is proper because it is the pullback of a proper morphism, and quasi-finite by Theorem <ref><ref> and Lemma <ref>. Thus : M̅_0,n(, )^→^n - is finite.
Define D_ to be the closure of (D_)_K in M̅_0,n(, )^. By Lemma <ref>, D_ is flat over Λ, and the intersection D_∩M̅_0,n(, )^_k has codimension 1 in M̅_0,n(, )^_k. Since is finite, (D_∩M̅_0,n(, )^_k) has codimension at least 1 in S_k^n, whence codimension at least 2 in ^n. Adding (D_∩M̅_0,n(, )^_k) to , we may assume that D_ has empty intersection with M̅_0,n(, )^_k. Since D_ is closed and of codimension 1 in a smooth scheme, D_ is a relative Cartier divisor. We may similarly define D_ to be the closure of (D_)_K and assume that D_ is a Cartier divisor on M̅_0,n(, )^ which has empty intersection with M̅_0,n(, )^_k.
The divisor of the section d():_M̅_0,n(, )^→ω_ is D_, and thus d() defines an isomorphism
d(): _M̅_0,n(, )^ (D_) →ω_
on M̅_0,n(, )^.
The evaluation map is compatible with base change. Suppose that M^_0,n(_k, _k) is non-empty. On the special fiber, M̅_0,n(, )^_k is contained in M^_0,n(_k, _k). By Lemma <ref>, ≅_k is étale on M^_0,n(_k, _k), so _k: M̅_0,n(_k, _k)^→ S^n is flat and unramified. Since ramification can be checked on fibers of a smooth Λ-scheme, is unramified at points of M̅_0,n(_k, _k)^ (<cit.>). Since M̅_0,n(, )^ is flat over Λ (Proposition <ref>), it follows from <cit.> that is flat on M̅_0,n(_k, _k)^. Thus is étale on M̅_0,n(_k, _k)^, whence is étale over an open neighborhood U of the special fiber M̅_0,n(,)_k^⊂M̅_0,n(,)^. Thus div(d())∩ U = ∅.
We recall that _K,_K satisfy Basic Assumptions <ref> by Lemma <ref>. Over the open set of M̅_0,n(_K,_K)^ given by the generic fiber of
M̅_0,n(,)^, the proposition follows from Theorem <ref>.
If M^_0,n(_k, _k) is empty, then we need only check on
M̅_0,n(,)^_K⊂M̅_0,n(_K,_K)^, which follows as above from Theorem <ref>.
Forgetting the last marked point defines a map M̅_0,n+1(, ) →M̅_0,n(, ) from the universal curve to the moduli space. Define X̅_0,n(, )^⊂M̅_0,n+1(, ) to be the inverse image of M̅_0,n(, )^.
Define the double point locus π̃: →M̅_0,n(, )^ using the natural analogue of Definition <ref>.
Let M^_0,n(, )^:=M̅_0,n(, )^∩ M^_0,n(, ). The double point locus satisfies the following.
* By possibly enlarging , we may take to be smooth over Λ.
* The map π̃ is finite and flat.
* The map π̃ is étale over M^_0,n(, )^ and
M^_0,n(, )^ is an open neighborhood of M̅_0,n(, )^_k in M̅_0,n(, )^.
At points of the generic fiber _K, it follows from Corollary <ref> that, after enlarging if necessary, is smooth over Λ; similarly, <ref> holds for
π̃_K: _K →M̅_0,n(, )^_K.
The fact that M^_0,n(, )^ is an open neighborhood of M̅_0,n(, )^_k in M̅_0,n(, )^ follows from the construction of M̅_0,n(, )^.
Let M^_0,n(, )^:=M̅_0,n(, )^∩ M^_0,n(, ) and let ^ be the restriction of over the open subscheme M^_0,n(, )^. Define n^, → M^_0,n(, )^ similarly. Since M^_0,n(, )^ is an open neighborhood of the special fiber M̅_0,n(, )^_k in M̅_0,n(, )^, to complete the proof, it suffices to prove
<ref>, <ref> and <ref> for the restriction
π̃^: ^→ M^_0,n(, )^
of π̃.
Since ^ is closed in n^,×_M^_0,n(, )^n^, and
n^,×_M^_0,n(, )^n^,→ M^_0,n(, )^ is proper, this shows that ^→ M^_0,n(, )^ is proper.
By Lemma <ref>,
^∩Δ_n^,= ∅,
the intersection taking place in n^,×_M^_0,n(, )^n^,.
Thus
^= ((^,×_M^_0,n(, )^^,)^-1(Δ_/M^_0,n(, )^))∖Δ_n^,
where ^,:n^,→ M^_0,n(, )^×_Λ is the universal map and Δ_/M^_0,n(, )^ is the relative diagonal.
Next we recall that M^_0,n(, ) is smooth over Λ and n^ is smooth over M^_0,n(, ), hence n^,×_M^_0,n(, )^n^, is smooth over Λ. To prove
<ref>, it thus suffices to show that ^,×_M^_0,n(, )^^, is transverse to the inclusion Δ_/M^_0,n(, )^↪ M^_0,n(, )^×_Λ×_Λ, at points away from Δ_n^,. This transversality follows immediately from the definition of M^_0,n(, ).
Similarly, this transversality implies that ^→ M^_0,n(, )^ is a smooth morphism and that ^ has codimension two in n^,×_M^_0,n(, )n^,. Since ^→ M^_0,n(, )^ is thus smooth, proper and of relative dimension zero, it follows that
^→ M^_0,n(, )^ is finite and étale. This completes the proof.
As before, we have _π̃: _M̅_0,n(, )^→ (π̃_* _)^⊗ -2. (See Section <ref>. By Lemma <ref> <ref>,<ref>, π̃ admits the claimed discriminant, and π̃_* _ is a line bundle.)
The divisor of _π̃ is computed to be
div(_π̃)=1· D_+2· D_
and thus _π̃ defines an isomorphism
_π̃:_M̅_0,n(, )^ (D_)→ [π̃_*_(-D_)]^⊗ -2
By Lemma <ref> <ref>, π̃ is étale over an open neighborhood U of the special fiber M̅_0,n(,)_k^⊂M̅_0,n(,)^. Thus div(_π̃)∩ U = ∅. Over the open set of M̅_0,n(_K,_K)^ given by the generic fiber, we may apply Theorem <ref>, which proves the claim.
Let be the invertible sheaf on M̅_0,n(,)^ given by
=[π̃_*_(-D_)]^⊗ -1
Then the composition d ∘_π̃^-1:^⊗ 2→ω_ is an isomorphism on M̅_0,n(, )^.
This follows immediately from Lemmas <ref> and <ref>.
§.§ The symmetrized evaluation map in positive characteristic
We continue to assume that M̅_0,n(,)^ is non-empty.
Just as in Section <ref>, let ^n_0 denote the complement of the relative diagonals in ^n, where the product is taken over Λ. The symmetric group _n acts freely on ^n_0, M̅_0,n(,), and the universal curve M̅_0,n+1(,). By enlarging the closed subset of (<ref>) to be invariant under _n, we likewise obtain a free action on M̅_0,n(, )^, X̅_0,n(, )^ and the double point locus π̃: →M̅_0,n(, )^.
Proceeding as in Section <ref>, we take quotients and form the following commutative diagram with Cartesian squares and vertical maps finite étale quotient maps:
 →^π̃ M̅_0,n(,)^ →^ ^n_0
↓        ↓        ↓
_ →^_ M̅_0,n,(,)^ →^_ Sym^n_0 .
For □∈{, , }, we let D^_□ denote the reduced image of D_□ in M̅^_0,n,(,).
Let k be a perfect field of characteristic p>3. Let S be a del Pezzo surface over k with effective Cartier divisor D, satisfying Basic Assumptions <ref> <ref> and Assumption <ref>.
* The canonical section d_:_M̅^_0,n,(,)→ω__ has divisor 1· D^_ and induces an isomorphism
d_:_M̅^_0,n,(,)(D^_)→ω__.
* The divisor of __:_M̅^_0,n,(,)→ [_ *__]^⊗ -2 is D^_+2· D_^ and induces an isomorphism
__:_M̅^_0,n, (,)(D^_)→ [_ *__(-D_)]^⊗ -2
Letting ^:=[_ *__(-D_)]^-1, we have the isomorphism
d_∘__^-1:(^)^⊗ 2→ω__.
The proof is essentially the same as the proof of Theorem <ref>. Replace the uses of Theorem <ref> and Theorem <ref> with Lemmas <ref> and <ref>.
§.§ Twists of the evaluation map in positive characteristic
We continue to assume that M̅_0,n(,)^ is non-empty.
As in Section <ref>, let σ= (L_1, …, L_r) be an r-tuple of subfields L_i ⊂ containing k for i=1,…, r subject to the requirement that ∑_i=1^r [L_i : k] = n. The reduction map defines an equivalence between the category of finite étale extensions of Λ and the analogous category over k <cit.>. Thus the twisting construction from Section <ref> lifts over Λ.
Let Λ⊂Λ^unr be the extension corresponding to the separable closure of k. (Λ^unr is ind-finite.) Let ↪^n be a closed set as constructed in (<ref>). By potentially enlarging we may assume that is invariant under the action of symmetric group _n. Proceeding as in Section <ref>, we obtain a Λ-map
_σ: _0,n(,)^_σ→ (^n∖)_σ
with special fiber _σ and which is canonically identified with after base change to Λ^unr. We similarly twist the double point locus π̃: →M̅_0,n(, )^ producing a Λ-map
_σ: _σ→_0,n(,)^_σ
We again have a forgetful map M̅_0,n(,)^_σ→M̅_0(,). For □∈{, }, we let D_□,σ denote the preimage of D_□ under this map.
Let k be a perfect field of characteristic p>3. Let S be a del Pezzo surface over k with effective Cartier divisor D, satisfying Assumptions <ref> <ref> and Assumption <ref>.
* _σ: M̅_0,n(,)^_σ→ (^n)_σ is a map between smooth Λ-schemes.
*
The canonical section d_σ:_M̅^_0,n(,)_σ→ω__σ has divisor 1· D_, σ and induces an isomorphism
d_σ:_M̅^_0,n(,)_σ(D_, σ)→ω__σ.
*
The divisor of __σ:_M̅^_0,n(,)_σ→ [(_σ)_*__σ]^⊗ -2 is
D_,σ+2· D_,σ
and induces an isomorphism
__σ:_M̅^_0,n(,)_σ(D_,σ)→ [ (_σ)_*__σ(-D_,σ)]^⊗ -2
*
Letting _σ:=[(_σ)_*__σ(-D_)]^-1, we have the isomorphism
d_σ∘__σ^-1:(_σ)^⊗ 2→ω__σ.
The proof is parallel to the proof of Theorem <ref>.
§ UNRAMIFIED MAPS IN POSITIVE CHARACTERISTIC
In this section we
let S be a del Pezzo surface over a field k of characteristic greater than 3 with d_S := K_S · K_S ≥ 3, and we prove the following result.
Let D ∈(S) be effective. If M^_0(S, D) is non-empty, then M^_0(S, D) is irreducible, and there is a geometric point u∈ M^_0(S, D) with u unramified.
It follows from Lemma <ref> that if M^_0(S, D) is irreducible and there is a geometric point u∈ M^_0(S, D) with u unramified, then there is a dense open subset of M^_0(S, D) consisting of unramified maps.
In what follows, we say that a general f∈ M_0(S,D) has property P to mean that property P holds for all geometric points in a dense open subset of M_0(S,D).
Recall that D∈(S) is nef if for every reduced, irreducible curve C on S, the intersection degree D· C is non-negative.
Let D ∈ Pic(S) be effective and let d=-K_S· D. If d≥2 and M^_0(S, D) is non-empty, then D is nef.
Let f:^1→ S be a geometric point of M^_0(S, D), let D_0⊂ S be the image curve f(^1) and let C be a reduced, irreducible curve on S. Since f is birational, D_0 is also reduced and irreducible and the class [D_0]∈(S) is f_*([^1])=D. Thus, if C≠ D_0, then C· D=C· D_0≥0. Also, since ^1→ D_0 is the normalization of D_0, it follows that D_0 has arithmetic genus p_a(D_0)≥ g(^1)=0. By the adjunction formula, we have
D_0·(D_0+K_S)=2p_a(D_0)-2≥ -2.
Since D· (-K_S)≥2, we thus have
D_0· D=D_0· D_0≥ -2+(-K_S· D)≥ 0
so D is nef.
Following <cit.>, we say that a reduced irreducible curve D_0 on S is a -K_S-conic if -K_S· D_0=2. Since d_S≥ 3, -K_S embeds S in a projective space ^d_S and under this embedding a -K_S-conic is an irreducible degree two curve, hence a smooth conic in some plane ^2⊂^d_S.
The following is a consequence of Theorem 1.5 of <cit.>.
Let S be a del Pezzo surface of degree d_S ≥ 3 over a field k of characteristic p > 3. Let D ∈(S) satisfy d := -K_S · D ≥ 2 and suppose that M_0(S,D)^≠∅. Then M_0(S,D) is irreducible, the general point is a free, birational map, and the evaluation map : M_0,1(S,D) → S is dominant.
By Lemma <ref>, D is nef. By <cit.>, if d≥ 3 and D is not a multiple of a -K_S-conic, then M_0(S,D) is irreducible and a general point f:^1→ S is a free, birational map. In particular, the normal sheaf _f has torsion-free quotient isomorphic to (e) with e≥0. This in turn implies that the evaluation map from the universal curve : M_0,1(S,D) → S has surjective differential at a general point x∈ M_0,1(S,D) lying over f, hence : M_0,1(S,D) → S is dominant.
If d = 2 and M_0(S,D)^≠∅, it follows that D is the class of a -K_S-conic. So, it remains to consider the case D = m[D_0] for a -K_S-conic D_0 with m ≥ 1. By the proof of <cit.>, the space M_0(S,D) is irreducible and a general point f:^1→ S is an m-fold cover of a smooth conic. If m ≥ 2, then M_0(S,D)^ = ∅, so we are done. If m = 1, the general point f is an isomorphism onto its image. So, _f ≅_^1, whence f is free and the same argument as above shows that : M_0,1(S,D) → S is dominant.
Suppose that M_0(S, D)^≠∅. Let d:=-K_S· D and suppose that 2≤ d≤ 3. Then a general f∈ M_0(S, D)^ is unramified.
By Lemma <ref> and Theorem <ref>, we need only find a single unramified f in M_0(S, D)^.
For d=2, f being birational implies that f(^1) is a -K_S-conic on S. Thus f(^1) is smooth and f:^1→ f(^1) is an isomorphism.
For d=3, suppose that f:^1→ S is ramified and birational to its image C:=f(^1). Let p∈^1 be a point of ramification of f and let q=f(p). We take the canonical embedding S⊂^d_S, so C is a degree 3 rational curve in ^d_S with singular point q. We claim that C spans a plane P and as a curve on P≅^2, C is a cubic curve with an ordinary cusp. Indeed, we may choose two additional points p_1, p_2 on ^1∖{p} so that q, f(p_1), f(p_2) do not lie on a line in ^d_S. Let Π be a hyperplane in ^d_S containing q, f(p_1) and f(p_2). If Π does not contain C, then since q is a singular point of C, these three points of intersection contribute at least 4 to the total intersection degree Π· C=3, which is impossible. Thus C⊂Π. We repeat this argument, finding in the end that C is contained in a plane P, as claimed. Since f is ramified at p, it follows that q is a cusp on C, and since C is a plane cubic and the characteristic is >3, it follows that q is an ordinary cusp on C, with local equation of the form y^2=x^3 in an open affine plane ⊂ P containing q.
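(Spelling out the degree count used above: assuming for contradiction that C⊄Π, the singular point q contributes at least 2 to Π· C and each of f(p_1), f(p_2) contributes at least 1, so Π· C≥ 2+1+1=4>3=Π· C, which is impossible; hence C⊂Π.)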
Since P intersects S in the curve C with singular point q, it follows that P is tangent to S at q, so there is a local analytic isomorphism (S,q)≅ (^2,(0,0)). This gives us local analytic coordinates x, y on S at q such that C has the equation y^2=x^3 and f is given in these coordinates by f(t)=(t^2, t^3) in an analytic neighborhood of p=0∈^1⊂^1, with local analytic coordinate t. By Lemma <ref>, there is a deformation f_ϵ of f:^1→ S such that
f_ϵ(t)=(t^2+2aϵ, t^3+3aϵ t) mod ϵ^2
Thus
df_ϵ(d/dt)=(2t+ϵ^2g)·/ x +(3t^2+3aϵ+ϵ^2h)·/ y
for some g,h∈ k[[t,ϵ]]. Thus df_ϵ(d/dt)=0⇒ϵ=0, i.e., f_ϵ is unramified in an ϵ-adic neighborhood of p, for ϵ≠0.
Alternatively, letting x_ϵ=x-2aϵ, y_ϵ=y, the curve C_ϵ:=f_ϵ(^1) has local equation near q of the form
y_ϵ^2=x_ϵ^3-6aϵ x_ϵ^2 mod ϵ^2
which has an ordinary double point at (x_ϵ, y_ϵ)=(0,0), since char k>3.
Since M_0(S,D)^ is open in M_0(S,D), we have found an unramified map in M_0(S,D)^, which completes the proof.
Suppose that for i=1,2, M_0(S, D_i)^≠∅ and a general f_i∈ M_0(S, D_i)^ is unramified. For general f_i in M_0(S, D_i)^, and d_i= -K_S · D_i ≥ 2, the maps f_1, f_2 intersect transversely, that is, for any p_1,p_2 such that f_1(p_1) = f_2(p_2)=q, we have
df_1(T_p_1^1) + df_2(T_p_2^1) = T_S,q.
By Theorem <ref>, we may assume that both maps f_i are free. As the corresponding evaluation maps are dominant, we may assume that f_1(_1)∩ f_2(_2) is a finite set.
Suppose first that d_1≥ 3 and that df_1(T_p_1^1) = df_2(T_p_2^1). Since f_1 is free and unramified, we have _f_1≅_^1(d_1-2). Since d_1≥3, we have H^1(_1, _f_1)=0 and there is a section s of _f_1 that has a zero of order one at p_1∈_1. Letting f_1ϵ be the deformation of f_1 (modulo automorphisms of _1) corresponding to s, we see that modulo ϵ^2, we have f_1ϵ(p_1)=f_1(p_1)=q, but df_1ϵ(T_p_1^1)≠ df_1(T_p_1^1). Since we have taken f_1, f_2 general, this implies that we had df_1(T_p_1^1) ≠ df_2(T_p_2^1) to begin with, which completes the proof in case one of d_1, d_2 is at least 3.
Suppose d_1=d_2=2. Since both f_i are unramified, this implies that C_i:=f_i(_i) are both -K_S-conics, hence are plane conic curves on S⊂^d_S, after taking the anti-canonical embedding of S. Let P_i⊂^d_S be the plane spanned by C_i, i=1,2.
Suppose first that P_1≠ P_2. We may therefore take a general hyperplane Π_1⊂^d_S containing P_1, but not containing C_2. Since Π_1 is general, we have
Π_1· S=C_1+D_1
for some effective 1-dimensional algebraic cycle D_1 on S; since Π_1 does not contain C_2, D_1∩ C_2 is a finite set of points of S. By the associativity of intersection product, we have
(C_1+D_1)·_S C_2=Π_1·_^d_S C_2=(Π_1· P_2)·_P_2C_2
Since Π_1· P_2 is a line in P_2, this implies that (C_1+D_1)·_S C_2=2, so 0≤ C_1· C_2≤ 2. Since q is a point on C_1∩ C_2, we thus have 1≤ C_1· C_2≤2.
If C_1· C_2=1, then C_1 and C_2 intersect transversely at q, and we are done. If C_1· C_2=2, and there are two points in the intersection, again C_1 and C_2 intersect transversally at q. Otherwise, we may choose local analytic coordinates x,y on S at q so that C_1 is defined by y=0 and C_2 is defined by y=x^2+.... We have _f_1≅_^1, so we may take a nowhere vanishing section s of _f_1 and deform f_1 according to this section to the map f_1ϵ. In our local coordinates, this gives the equation of C_1ϵ:=f_1ϵ(_1) as y=λϵ+... for some constant λ≠ 0. Thus C_1ϵ and C_2 intersect transversely at two points, since char k≠ 2. Again, since f_1, f_2 were assumed to be general, this means that C_1 and C_2 intersected transversely at q to begin with, which settles the case P_1≠ P_2.
To finish, suppose that P_1=P_2; call this common plane P. Then C_1, C_2 are two smooth conics in the plane P intersecting at a finite set of points, so the intersection multiplicity m(C_1· C_2, q) satisfies 1≤ m(C_1· C_2, q)≤ 4. If m(C_1· C_2, q)=1 we are done. In case 2≤ m(C_1· C_2, q)≤ 4, we again take local analytic coordinates x, y at q on S so that C_1 is defined by y=0 and C_2 is defined by y=x^m+... with m∈{2,3,4}. We make a deformation of f_1 as above, and noting that char k>4, we find that C_1ϵ intersects C_2 transversely at all intersection points close (in the ϵ-adic topology) to q, which completes the proof.
Let (U, u) be a smooth pointed curve over k, and let (, c) be a reduced pointed surface over k. Let π:→ U be a proper, flat, surjective morphism such that π(c)=u, and ∖{c} is smooth over U. In addition, we assume that the fiber _u:=π^-1(u) is the union of two smooth curves C_1, C_2 joined at the single point c, _u:=C_1∪_cC_2, and that _u has an ordinary double point at c.
Let T be a smooth finite-type k-scheme and let f:→ T be a morphism. Suppose that the respective restrictions of f, f_1:C_1→ T, and f_2:C_2→ T, are unramified, and
df_1(T_cC_1) ∩ df_2(T_cC_2) =0,
the intersection taking place in T_T,f(c). Then there is an open neighborhood U'⊂ U of u such that for all v∈ U'∖{u}, the restriction f_v:_v→ T of f to the fiber _v:=π^-1(v) is unramified.
Let f̅:C_1∪_cC_2→ T denote the restriction of f to _u. We have the map
df:f^*Ω_T/k→Ω_/U
and our assumption on the maps f_1, f_2 implies that on _u, the map
df⊗ k(u)=df̅:f̅^*Ω_T/k→Ω_C_1∪_cC_2/k
is surjective. Nakayama's lemma implies that df is surjective over an open neigborhood ' of π^-1(u) in and since π is proper, there is an open neighborhood U' of u such that π^-1(U')⊂', which gives us the open neighborhood we wanted.
We proceed by induction on d = -K_S · D. If d = 1, the moduli space contains a unique map, which is an isomorphism onto a (-1)-curve.
If d ≥ 2, then M_0(S,D)^ is irreducible by Theorem <ref>, so we need only find an unramified map in M_0(S,D)^ to finish the proof.
If d = 2,3, it follows from Lemma <ref> that the general map in M_0(S,D)^ is unramified.
For d≥ 4, Theorem 1.1 and the following paragraph of <cit.> show that the hypotheses for <cit.> are satisfied. By Theorem <ref>, the closure M_0(S,D) of M_0(S,D) in _0(S,D) is an irreducible component of _0(S,D) with a dense open subset
M_0(S,D) parametrizing a dominant family of birational maps of irreducible curves. Thus, we may apply
<cit.> to M_0(S,D). This shows that there is a smooth irreducible pointed curve (U,u), a proper, flat, surjective pointed map π:(, p) → (U,u) defining a semi-stable family of genus 0 curves over U, and a map f:→ S such that
* _t:=π^-1(t) is a smooth ^1 for t∈ U∖{u},
* the map f_t:_t→ S is birational for t∈ U∖{u}.
* the fiber f̅:_u→ S is a reducible stable map f̅ : =_1∪_p_2 → S in M̅_0(S,D) with two irreducible components f_i : _i → S, i=1,2,
* Each f_i is a general member of a dominant family of birational stable maps in M_0(S,D_i).
By induction and Remark <ref>, each f_i is unramified. Write D_i = (f_i)_*([_i]) and let d_i = -K_S · D_i. Since the families are dominant, d_i ≥ 2.
By Lemma <ref> the maps f_1,f_2 intersect transversally at the point q = f(p). By Lemma <ref> there is a neighborhood U' of u such that the map f_v:_v=^1→ S is unramified for all v∈ U'∖{u}; that is, f_v is an unramified map in M_0(S,D)^, so the theorem follows.
Review of Large Vision Models and Visual Prompt Engineering
===========================================================
Visual prompt engineering is a fundamental methodology in the field of visual and image Artificial General Intelligence (AGI). As the development of large vision models progresses, the importance of prompt engineering becomes increasingly evident. Designing suitable prompts for specific visual tasks has emerged as a meaningful research direction. This review aims to summarize the methods employed in the computer vision domain for large vision models and visual prompt engineering, exploring the latest advancements in visual prompt engineering. We present influential large models in the visual domain and a range of prompt engineering methods employed on these models. It is our hope that this review provides a comprehensive and systematic description of prompt engineering methods based on large visual models, offering valuable insights for future researchers in their exploration of this field.
§ INTRODUCTION
Since the introduction of the Transformer architecture by Vaswani et al. <cit.>, deep learning models have experienced remarkable advancements in both parameter size and complexity. Over time, the scale of these models has grown exponentially. Early examples of language models include BERT <cit.>, T5 <cit.>, GPT-1 <cit.>, GPT-2 <cit.> and various BERT variants <cit.>. In addition, there exists a multitude of domain-specific BERT variants that are tailored to optimize performance in distinct fields of study or industry <cit.>. More recently, large language models <cit.> have become building blocks for general-purpose AI systems and are typically trained on extensive datasets through self-supervised learning <cit.>. This exponential growth in the scale and complexity of these models has significantly enhanced their capacity to comprehend natural language, allowing them to adapt to various downstream tasks <cit.>. Notable examples include GPT-3 <cit.>, ChatGPT <cit.>, GPT-4 <cit.>, and others <cit.>, including domain-specific large language models <cit.>. This ability to generalize across multiple downstream tasks without explicit training, commonly known as zero-shot generalization, represents a groundbreaking advancement in the field <cit.>.
Inspired by the success of pre-trained language models in natural language processing (NLP), researchers have ventured into exploring pre-trained visual models in the field of computer vision. These visual models are pre-trained on massive image datasets and possess the ability to understand the content of images and extract rich semantic information. Examples of pre-trained visual models include ViT <cit.>, Swin Transformer <cit.>, VideoMAE V2 <cit.> and others <cit.>. By learning representations and features from a large amount of data, these models enable computers to more effectively comprehend and analyze images for diverse downstream applications <cit.>. Moreover, multi-modal visual models, such as CLIP <cit.> and ALIGN <cit.>, employ contrastive learning to align textual and visual information. This alignment enables pre-trained models to effectively apply learned semantic information to the visual domain, facilitating efficient generalization in downstream tasks. However, despite their remarkable achievements, these models still face limitations in terms of their generalization capabilities.
The rapid advancements in artificial intelligence (AI) have given rise to a plethora of exciting technological breakthroughs, among which the development of AI systems based on foundational models has emerged as a prominent area of research <cit.>. This conceptual framework has been coined and unified by AI experts, representing an emerging paradigm in the field <cit.>. The significance of this concept extends to the notion of emergence, which has become increasingly evident with the rise of machine learning techniques. Emergence manifests itself in the execution of tasks such as automatic inference and the progressive emergence of advanced features and functionalities through deep learning, such as contextual learning. The concept of emergence emphasizes that system behavior is intricately induced rather than explicitly constructed, underscoring the dynamic nature of foundational models in the AI landscape.
Recently, the Segment Anything Model (SAM) <cit.> has brought about a new trend in solving downstream tasks. Models with prompt engineering modules can solve a wide range of downstream tasks through prompts <cit.>. These models' remarkable zero-shot generalization capability highlights the significance of prompt engineering in downstream tasks <cit.>. However, applying large visual models to specific tasks necessitates an effective approach to guide the model's learning and inference processes <cit.>. This is where visual prompt engineering comes into play. It is a methodology that involves designing and optimizing visual prompts to steer large models toward generating the desired outputs.
The emergence of foundation models has unleashed tremendous potential for the advancement of artificial intelligence systems, with particular significance in the domain of computer vision. Visual prompt engineering serves as an adaptive interface and a versatile toolkit that seamlessly integrates with large visual models. By synergistically fusing the capabilities of large visual models with the ingenuity of visual prompt engineering, we empower ourselves to harness the full potential of foundation models, resulting in unparalleled flexibility and efficiency in the realm of image analysis and task resolution. This pioneering integration paves the way for exploring vast frontiers in artificial intelligence applications, unveiling many prospects and ushering in unprecedented opportunities.
§.§ Scope and Focus of the Review
This review focuses primarily on prompt engineering methods in computer vision. A collection of relevant literature was gathered by crawling arXiv using the keyword "visual prompt". As shown in Figure <ref>, articles unrelated to computer vision were subsequently filtered out using ChatGPT, resulting in a total of 500 papers. These papers mainly discuss prompt algorithms pertinent to computer vision, ranging from multi-modal visual-language models to visual and general artificial intelligence models. Prompts take on various forms, including text prompts in multi-modal settings, image prompts, and text-image prompts, each requiring distinct characteristics for different tasks. This paper comprehensively reviews prompt engineering in computer vision, providing insights into different aspects such as multi-modal prompt design, image prompt design, and text-image prompt design, taking into account the specific requirements of different tasks. The aim is to shed light on the advancements and current state of prompt engineering in computer vision, thereby facilitating further research in this field.
§.§ Outline of the Review
This comprehensive review provides a scientific overview of the latest advancements in computer vision prompts and summarizes the existing design methods in the field. The review is structured as follows:
In the introduction, we traced the evolution of foundational AI models from the inception of the Transformer architecture to the development of large-scale vision models, highlighting how the growth and complexity of these models have spurred the innovative use of prompts (including visual prompts).
Section II presents an overview of key models that have contributed significantly to the advancement of visual prompts and AGI, including Transformer, CLIP, Visual Prompt Tuning (VPT), and SAM. These influential models serve as fundamental references for understanding the following discussions on prompt learning and application in AGI.
Section III delves into visual prompt learning, focusing on multi-modal prompts and visual prompt tuning. Different models and their variants specifically designed for multi-modal prompts are explored, highlighting different approaches and applications in this area. In addition, the section also discusses models and their variants that enable the effective tuning of visual prompts to enhance performance in specific tasks or domains, emphasizing the importance of selecting the appropriate prompt modality for different applications, as exemplified by the use of bounding box prompts in medical image segmentation and text prompts in natural image understanding.
Section IV focuses on the application of visual prompts in AGI models, highlighting the integration of prompts within AGI architectures and showcasing their contributions to strong generalization performance. The latest advancements in visual prompts for AGI models are presented, illustrating how proper prompt design can enable improved performance across diverse domains and tasks.
Section V explores future directions and implications of visual prompts research. We discuss potential developments in the field, taking into account advancements in AGI and related areas. Apart from advancements, the section also addresses the challenges and opportunities associated with them in visual prompts technology and provides insights into the broader implications and potential impact of visual prompts.
The conclusion section provides a summary of the main points discussed throughout the review. The critical role of visual prompts in AGI is emphasized, along with their potential for enhancing performance and generalization. The importance of prompt design and different modalities for different applications is reiterated, and future research prospects in this area are highlighted as well. The conclusion presents a concise recap of the significance of visual prompts and their implications for AGI, serving as a closing remark leaving readers with a clear understanding of the importance of visual prompts.
§ BACKGROUND KNOWLEDGE
In this section, we will introduce some fundamental concepts, beginning with an elucidation of prompt engineering, followed by an overview of foundational models in the field of computer vision. Within this comprehensive review, our primary focus will be on the key techniques and important approaches pertaining to prompt engineering for large visual models.
§.§ Prompts in Natural Language Processing
In the field of NLP, to achieve parameter-efficient tuning of pre-trained models, prompt-based methods <cit.> have been proposed that augment the inputs with additional context. Since prompt-based approaches can bridge the gap between pre-training and downstream tasks and unleash the potential of pre-trained models, they have demonstrated remarkable performance across various NLP tasks.
According to the location of prompts within the text, prompts can be grouped into two shapes. The first is the cloze prompt, which fills a slot in the middle of the text, and the other is the prefix prompt, which is usually attached at the end of the text.
Most previous works <cit.> design the desired prompts either by manual definition or by automatic learning.
The simplest way to create prompts is to define them manually according to common knowledge of the downstream task. Brown et al. <cit.> manually define specific prompts for multiple downstream NLP tasks, including machine translation and question answering. Schick et al. <cit.> leverage pre-defined prompts to boost few-shot text classification and generation tasks.
Although manually constructing prompts is simple and intuitive, it relies on different, complex strategies for different tasks and requires substantial professional experience, which is expensive and inefficient. Moreover, pre-designed prompts are usually suboptimal and cannot adapt well to many difficult tasks. To yield more efficient prompt templates, many studies <cit.> propose to automatically learn optimal prompts from sparse supervision; these approaches can be divided into two categories, discrete prompts and continuous prompts.
Discrete prompts are natural-language texts automatically searched for within a pre-defined discrete space of candidate phrases.
For example, Jiang et al. <cit.> present MINE, a mining-based method that automatically finds prompts given both training inputs and outputs.
Wallace et al. <cit.> design a gradient-based search over input tokens to iteratively find short texts that steer a pre-trained model toward the desired predictions. Gao et al. <cit.> regard prompt searching as a sequence-to-sequence (seq2seq) generation task and incorporate a seq2seq pre-trained model into the prompt searching process.
Instead of limiting prompts to natural language in a discrete space, other works <cit.> aim to automatically construct the desired prompts in the continuous text embedding space, which relaxes the search scope and allows prompts to be optimized adaptively as learnable parameters on downstream datasets.
For instance, Li et al. <cit.> prepend a sequence of continuous task-specific vectors to the inputs and keep the pre-trained model frozen. Lester et al. <cit.> prepend the inputs with special tokens to construct a prompt and explicitly tune the token embeddings.
With prompts, the gap between pre-trained tasks and various downstream tasks can be narrowed, and performance can be efficiently boosted, potentially approaching the level of full parameter fine-tuning. This demonstrates that an appropriate parameter initialization can be considerably beneficial for downstream NLP tasks.
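As a concrete illustration, the sketch below shows continuous prompt tuning in the style of Lester et al.: a small set of trainable vectors is prepended to the input embeddings of a frozen language model, and only those vectors receive gradients. The class name, shapes, and initialization scale are illustrative assumptions rather than the original implementation.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable continuous prompt: k vectors prepended to the input
    embeddings of a frozen language model."""
    def __init__(self, k=20, embed_dim=768):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(k, embed_dim) * 0.02)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, embed_dim) from the frozen backbone
        batch = input_embeds.size(0)
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, input_embeds], dim=1)

# During training only self.prompt is updated; the backbone's parameters
# are kept frozen (requires_grad=False).
```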
§.§ Foundation Models
Transformer With the gradual development of large models, the Transformer architecture has emerged as a veritable foundational model, heralding a new era in the field. The Transformer <cit.> was first proposed for translation tasks in NLP; it combines Multi-head Self-Attention (MSA) with Feed-forward Networks (FFN) to offer a global receptive field and multi-channel feature extraction capabilities. The subsequent development of the Transformer-based BERT <cit.> proved to be seminal in NLP, exhibiting exceptional performance across multiple language-related tasks <cit.>. Leveraging the great flexibility and scalability of the Transformer, researchers have started to train larger Transformer models, including GPT-1 <cit.>, GPT-2 <cit.>, GPT-3 <cit.>, GPT-4 <cit.>, T5 <cit.>, PaLM <cit.>, LLaMA <cit.> and others. These models have further advanced the performance and generalization capabilities of the Transformer-based architecture, surpassing human-level performance in certain tasks <cit.>, and there is still potential for further development in terms of improving the effectiveness of training. Meanwhile, the Vision Transformer (ViT) <cit.> has extended the application of the Transformer architecture to the field of computer vision, bridging the gap between Transformer models in textual and image domains, and validating its feasibility as a unified architecture. Subsequent endeavors in computer vision began to improve and extend the ViT model, such as DeiT <cit.>, Swin Transformer <cit.>, TNT <cit.>, MAE <cit.>, MoCo-v3 <cit.>, BeiT <cit.>, etc. These works have successfully applied ViT to diverse vision-related tasks and achieved outstanding performance.
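The MSA-plus-FFN block structure described above can be sketched in a few lines of PyTorch; the pre-norm layout, dimensions, and activation below are illustrative choices, not a reproduction of any particular model.

```python
import torch.nn as nn

class TransformerBlock(nn.Module):
    """Pre-norm Transformer encoder block: Multi-head Self-Attention (MSA)
    followed by a Feed-forward Network (FFN), each with a residual."""
    def __init__(self, dim=512, num_heads=8, mlp_ratio=4.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )

    def forward(self, x):  # x: (batch, tokens, dim)
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual around MSA
        x = x + self.ffn(self.norm2(x))                    # residual around FFN
        return x
```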
In the Transformer architecture <cit.>, the smallest unit of feature is a token. This inherent characteristic of the Transformer makes it well-suited for handling multi-modal data, as embedding layers can convert any modality into tokens. Consequently, numerous works in the multi-modal domain, such as ViTL <cit.>, DALL-E <cit.>, CLIP <cit.>, VLMO <cit.>, ALBEF <cit.> and others, have adopted the Transformer as the primary framework for multi-modal data interaction, including text-to-image and image-to-text retrieval, image captioning and image/text generation, etc. As the era of large models unfolds, researchers have proposed large multi-modal models such as CoCa <cit.>, Flamingo <cit.>, BEiT-v3 <cit.>, PALI <cit.>, GPT-4 <cit.>, with the aim of further enhancing performance on a variety of downstream tasks. In summary, the Transformer, as a fundamental model, continues to dominate today's research in the field.
CLIP OpenAI has unveiled a groundbreaking vision-language model that leverages the association between images and text to perform weakly supervised pre-training, significantly boosting performance by expanding the available data. The work involves collecting a massive dataset of 400 million image-text pairs for training. CLIP <cit.>, short for Contrastive Language-Image Pre-Training, utilizes fixed human-designed prompts that enable zero-shot prediction and demonstrates superior few-shot capabilities surpassing other state-of-the-art models. The success of CLIP highlights the power of combining visual and textual information and underscores the effectiveness of weakly supervised training using large-scale data <cit.>. CLIP's achievement signals a potential breakthrough in the understanding and application of multi-modal techniques, showcasing the ability to capture rich feature representations <cit.>, and has inspired a range of follow-up works such as GroupViT <cit.>, ViLD <cit.>, GLIP <cit.>, CLIPasso <cit.>, CLIP4Clip <cit.>, ActionCLIP <cit.> and so on. It offers new insights into the advancement of prompt engineering in computer vision, providing a promising avenue for future developments <cit.>.
VPT When adapting large vision models to downstream tasks, modifying the input rather than altering the parameters of the pre-trained model itself is often preferred. This approach involves introducing a small number of task-specific learnable parameters into the input space, allowing for the learning of task-specific continuous vectors. This technique, known as prompt tuning, enables efficient fine-tuning of pre-trained models without modifying their underlying parameters. The VPT <cit.> approach was the first to address and investigate the universality and feasibility of visual prompts. The proposed VPT method includes both deep and shallow versions, which attain impressive results by learning prompts in the input space together with a classification head while keeping the parameters of the pre-trained Transformer model fixed. This work primarily aimed to demonstrate the effectiveness of visual prompting and provided a novel prompt design perspective. By showing that satisfactory results can be obtained by simply modifying the input, VPT showcased the potential of prompt tuning as an efficient strategy for downstream tasks and opened up new avenues for prompt engineering research <cit.>.
SAM In 2023, Meta AI released a project aimed at creating a universal image segmentation model capable of addressing a wide range of downstream segmentation tasks on new data through prompt engineering. To achieve this, SAM <cit.> was created. SAM leverages prompt engineering to tackle general downstream segmentation tasks by utilizing the prompt segmentation task as a pre-training objective <cit.>. To enhance the model's flexibility in adapting to prompts and to improve its robustness against interference, SAM is divided into three components: the image encoder, the prompt encoder, and the mask decoder. This division effectively distributes the computational cost, resulting in a sufficiently adaptable and versatile segmentation model <cit.>. SAM's strength lies in its ability to generalize efficiently across different segmentation tasks, thanks to the prompt engineering approach <cit.>. This methodology of pre-training on prompt segmentation and fine-tuning on specific downstream tasks helps SAM leverage the knowledge learned from the prompt segmentation task to improve performance on a wide range of segmentation problems, including medical image analysis <cit.>, video object tracking <cit.>, data annotation <cit.>, 3D reconstruction <cit.>, robotics <cit.>, image editing <cit.>, and more. Furthermore, SAM's modular design allows for flexibility and adaptability to different prompt formats, making it a versatile solution for various segmentation challenges.
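To make the three-component design concrete, the following sketch shows promptable segmentation with the official segment_anything package; the checkpoint filename, image path, and prompt coordinates are assumptions for illustration.

```python
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint (model size and filename are assumptions; any
# released checkpoint can be substituted).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.array(Image.open("scene.jpg").convert("RGB"))
predictor.set_image(image)  # the heavy image encoder runs once per image

# The lightweight prompt encoder and mask decoder run per prompt:
# here one foreground click plus a bounding box.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[320, 240]]),
    point_labels=np.array([1]),        # 1 = foreground, 0 = background
    box=np.array([100, 80, 520, 400]),
    multimask_output=True,             # return several candidate masks
)
```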
§ VISUAL PROMPTS LEARNING
§.§ Multi-Modal Models and Prompts
The research field of multi-modal prompt learning has gained significant attention, with several notable works in the area.
CLIP One such work is CLIP <cit.>, an innovative visual language model that incorporates the concept of manually crafted prompts. In CLIP, prompts take the form of "a photo of a [class]," with [class] denoting the specific data label. This design allows CLIP to comprehend both visual and textual information, establishing meaningful associations between these two modalities. However, results have been found to be highly sensitive to the exact wording of such fixed manual prompts, which can significantly affect outcomes, an issue that has been noted in several studies.
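A minimal zero-shot classification sketch with the open-source clip package illustrates this hand-crafted template; the image path and class list are assumptions for illustration.

```python
import torch
import clip
from PIL import Image

model, preprocess = clip.load("ViT-B/32")
image = preprocess(Image.open("dog.jpg")).unsqueeze(0)

classes = ["dog", "cat", "car"]
# The hand-crafted template used for CLIP-style zero-shot prediction
text = clip.tokenize([f"a photo of a {c}" for c in classes])

with torch.no_grad():
    logits_per_image, _ = model(image, text)
    probs = logits_per_image.softmax(dim=-1)   # class probabilities
print(dict(zip(classes, probs[0].tolist())))
```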
CoOP To address this problem, CoOP <cit.> introduces the concept of automatic prompts. Automatic prompts represent the downstream task's prompt as a trainable continuous vector, enhancing prompt flexibility and adjustability. This approach allows prompts to be optimized based on specific task characteristics, rather than being limited to fixed manual settings. By training learnable prompt vectors, the CoOP model can automatically learn the appropriate prompt representation for different tasks. This flexibility enables the model to better adapt to diverse data and task requirements. However, the CoOP method exhibits lower generalization performance compared to CLIP on new data, which may be due to overfitting on downstream tasks. To address this issue, the authors introduce a lightweight Meta-Net that leverages the outputs of the image encoder and combines them with the trainable prompt. This results in a dynamic prompt that is not only a self-adaptive continuous vector learned for the downstream task but is also conditioned on image features. The introduction of this dynamic prompt has significant implications for achieving better generalization performance.
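The core idea of replacing hand-written context words with learnable vectors can be sketched as follows; the module name, dimensions, and the simplified interface to the frozen text encoder are assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class CoOpContext(nn.Module):
    """CoOp-style learnable context: M continuous vectors shared across
    classes, concatenated with the embedded class-name tokens before
    being fed to the frozen text encoder."""
    def __init__(self, n_ctx=16, ctx_dim=512):
        super().__init__()
        self.ctx = nn.Parameter(torch.empty(n_ctx, ctx_dim))
        nn.init.normal_(self.ctx, std=0.02)

    def forward(self, class_embeds):
        # class_embeds: (n_classes, n_name_tokens, ctx_dim)
        n_cls = class_embeds.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)
        # prompt = [V_1, ..., V_M, CLASS]; only self.ctx is trained
        return torch.cat([ctx, class_embeds], dim=1)
```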
DenseCLIP DenseCLIP <cit.> is a novel approach aimed at addressing the challenge of transferring large pre-trained models to dense tasks. While contrastive image-text pairing-based pre-training models demonstrate impressive performance on downstream tasks, the transferability to dense tasks remains a challenge for researchers that has yet to be explored. To address this gap, the researchers introduce an innovative method that incorporates CLIP and prompt patterns into dense tasks for the first time, along with a context-aware prompt that can adaptively adjust based on the specific task and input context. The utilization of context-aware prompts enables better capture of the semantic correlation between images and text in dense tasks, transforming the image-text matching problem into a pixel-text matching problem that improves model performance. DenseCLIP leverages large pre-trained models, such as CLIP, to learn the contrastive relationship between images and text, optimizing the model by maximizing the similarity between matched image-text pairs. This transformation and training strategy allows the model to better comprehend the fine-grained relationship between images and text in dense tasks. The introduction of DenseCLIP provides an innovative approach to transferring large pre-trained models to dense tasks. This method combines context-aware prompts and pixel-text matching problems, offering valuable insights and techniques for addressing the image-text correlation challenge in dense tasks.
MaPLe The aforementioned work has demonstrated a series of significant advancements in the field of natural language processing, starting from manual text prompts to continuous text prompts and further extending to text-image prompts that incorporate image features. However, relying on prompts from a single modality results in suboptimal model performance. To address this issue, a new approach known as Multi-modal Prompt Learning (MaPLe) <cit.> is proposed, where continuous prompts are used concurrently across multiple modalities. This approach emphasizes the interplay between text and images in prompt construction, enabling models to be enhanced to a certain extent. This dynamic method of prompt construction facilitates interactions and mutual influences between text and image prompts during model training. The introduction of this approach is significant for achieving more accurate and comprehensive multi-modal understanding. It leverages the interrelated information between text and images and incorporates more contextual and semantic information during the model training process, which further improves model performance.
Imagic Imagic <cit.> has proposed a groundbreaking framework for text-guided image editing, introducing complex text-based semantic editing for individual real-world images. For the first time, this framework enables the manipulation of object poses and compositions within an image while preserving their original features. By providing both an original image and a target text prompt, Imagic's framework allows for precise modifications that align with the semantic context of the image.
GALIP Generative Adversarial CLIPs (GALIP) <cit.>, a novel framework, has been proposed to enable text-to-image generation. Building upon the intricate scene understanding abilities and image comprehension of CLIP, GALIP introduces CLIP Visual Encoder (CLIPViT) and a learnable mate discriminator (Mate-D) for adversarial training, harnessing the generalization capabilities of CLIP. Ultimately, the framework employs text-conditioned prompts to adapt to downstream tasks, enhancing the synthesis capabilities for complex images.
PTP To address the limitations of previous visual language model pre-training frameworks in terms of their lack of visual grounding and localization abilities, a new paradigm called Position-Guided Text Prompting (PTP) <cit.> has been proposed. PTP introduces a novel approach by encouraging the model to predict objects within given blocks or regress the blocks corresponding to specific objects. This reformulates visual grounding tasks as fill-in-the-blank problems using the provided PTP. Research findings have demonstrated that incorporating the PTP module into several state-of-the-art visual language model pre-training frameworks has led to significant improvements in representative cross-modal learning architectures and benchmark performance.
§.§ Visual Prompts
The utilization of visual prompts in computer vision tasks can be traced back to interactive segmentation, a technique that requires user input, such as clicks <cit.>, bounding boxes <cit.>, or scribbles <cit.>, to guide the algorithm in accurately identifying object boundaries. These visual prompts provide valuable guidance to the segmentation process, enabling more precise and reliable results. In the context of few-shot image segmentation, the annotated support image can also be considered as another form of visual prompt <cit.>. By leveraging the information in the support image and its corresponding segmentation mask, the algorithm can generalize and adapt its segmentation capabilities to similar target images. Recently, inspired by the success of prompt engineering in Natural Language Processing (NLP), the computer vision domain has witnessed a series of advancements in utilizing prompts. Researchers have started exploring the idea of formulating prompts as continuous vectors tailored to specific visual tasks. This involves designing prompts as guiding signals for computer vision models, helping them to generate or analyze visual content more effectively. By fine-tuning the models with task-specific prompts, notable improvements have been achieved in various visual tasks, including image classification, object detection, and image generation.
VPT In addition to various prompting techniques in different modes, innovative developments in visual image prompting in the field of computer vision have emerged dramatically. VPT <cit.> draws inspiration from large NLP models and employs prompt engineering to guide the fine-tuning process on a frozen pre-trained backbone model. It achieves this by introducing a small number of trainable parameters in the input space to serve as prompts. By optimizing these prompts, VPT enhances model performance regarding specific visual patterns and task requirements during the inference process, as depicted in the diagram. VPT is efficient, adaptable, and applicable to a wide range of visual tasks. By utilizing well-designed prompts, researchers can guide the model to perform better on specific visual tasks.
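A minimal sketch of the shallow variant is given below: learnable prompt tokens are inserted between the [CLS] token and the patch embeddings of a frozen ViT. The timm-style attribute names (patch_embed, cls_token, pos_embed, blocks, norm), the number of prompts, and the initialization are assumptions for illustration.

```python
import torch
import torch.nn as nn

class ShallowVPT(nn.Module):
    """VPT-shallow sketch on top of a frozen ViT backbone."""
    def __init__(self, vit, num_prompts=10, dim=768):
        super().__init__()
        self.vit = vit
        for p in self.vit.parameters():          # freeze the backbone
            p.requires_grad = False
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, dim))
        nn.init.trunc_normal_(self.prompts, std=0.02)

    def forward(self, x):
        x = self.vit.patch_embed(x)                          # (B, N, D)
        cls = self.vit.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.vit.pos_embed
        p = self.prompts.expand(x.size(0), -1, -1)
        x = torch.cat([x[:, :1], p, x[:, 1:]], dim=1)        # [CLS|prompts|patches]
        for blk in self.vit.blocks:
            x = blk(x)
        return self.vit.norm(x)[:, 0]   # [CLS] feature for a task head
```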
AdaptFormer Similarly, AdaptFormer <cit.>, an existing ViT-based model, has been optimized to improve its efficiency on action recognition benchmarks by integrating lightweight modules into its architecture. The design philosophy behind AdaptFormer is to utilize trainable modules that are tailored to the task's specific constraints within the pre-trained ViT model. The experimental results show that AdaptFormer outperforms fully fine-tuned models in action recognition tasks.
Convpass Convpass <cit.> is a methodology aimed at rapidly tailoring pre-trained ViT by implementing convolutional bypasses. The primary goal of Convpass is to reduce the computational expenses associated with fine-tuning while increasing the adaptability of pre-trained models for particular computer vision tasks. Convolutional bypasses, integrated in Convpass, permit accelerated training and inference of models, without compromising performance. This technique aims to simplify the learning process and maximize the efficiency of utilizing pre-trained ViT for targeted computer vision applications.
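The parameter-efficient idea shared by AdaptFormer and Convpass can be sketched as a small trainable bottleneck branch beside a frozen sub-layer; the sizes, scaling factor, and initialization below are illustrative assumptions, not the papers' exact designs.

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Lightweight bottleneck adapter: a trainable down/up projection
    added to the output of a frozen branch."""
    def __init__(self, dim=768, bottleneck=64, scale=0.1):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck, dim)
        self.scale = scale
        nn.init.zeros_(self.up.weight)   # start as a (near-)identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x, frozen_out):
        # x: block input; frozen_out: output of the frozen sub-layer
        return frozen_out + self.scale * self.up(self.act(self.down(x)))
```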
ViPT ViPT <cit.> proposes a prompt-tuning approach to address the scarcity of large-scale training data in downstream multi-modal tracking tasks. Instead of fully fine-tuning the base model, ViPT tunes a lightweight prompt module while retaining the knowledge acquired during pre-training, allowing the foundational RGB-modal feature extraction ability to be reused directly in RGB+auxiliary modality tracking. The prompt module adapts flexibly to task-specific data, so ViPT leverages the prior knowledge embedded in the pre-trained model to overcome data scarcity and maximize performance in RGB+auxiliary modality tracking tasks.
DAM-VP To address the challenge of handling complex distribution shifts from the original pre-training data distribution when using a single dataset-specific prompt, Diversity-Aware Meta Visual Prompting (DAM-VP) <cit.> introduces the concept of diversity-aware meta visual prompts. This approach employs a diversity-adaptive mechanism to cluster the downstream dataset into smaller, homogeneous subsets, each with its own individually optimized prompt. The aim is to tackle the difficulties arising from transferring knowledge between different data distributions. Research has revealed that leveraging prompt knowledge learned from previous datasets can expedite convergence and improve performance on new datasets. The integration of diversity-aware meta visual prompts in DAM-VP enables the model to adaptively exploit the diversity within the downstream dataset, facilitating more effective transfer learning and improved generalization.
§ VISUAL PROMPTS IN AGI
With the impressive generalization capabilities demonstrated by universal models across various domains, significant progress has been made in large computer vision models as well. By training base models on diverse datasets, these models are capable of adapting to downstream tasks through prompt learning. This approach not only alleviates training demands and optimizes resource utilization, but also introduces new avenues for the development of computer vision. For example, the innovative "segment anything model" achieves powerful zero-shot transfer capabilities for downstream tasks by employing appropriate prompts, hence its versatile applications in various domains. This universal artificial intelligence model can learn general concepts and exhibit zero-shot transfer capabilities on unknown data on account of its transferability, showcasing the immense potential of general artificial intelligence. Nevertheless, prompt engineering is still pivotal to generalizing the model to new tasks and cannot be overlooked. Prompt engineering refers to the process of designing prompts that enable the model to adapt and generalize to different tasks. This is crucial, as a properly designed prompt can lead to effective representation learning, thereby enhancing model performance on various tasks. The prompt should be emphasized as the key to guiding models to understand and accommodate the context and requirements of either a seen or unseen task.
Many other models have emerged, including notable models such as OneFormer <cit.>, SegGPT <cit.>, SEEM <cit.>, Uni-Perceiver v2 <cit.>, demonstrating powerful capabilities in general artificial intelligence and providing new possibilities for addressing various tasks. These models employ zero-shot transfer methods, making prompt learning a critical manifestation of model generalization.
In this section, we will delve into a detailed explanation of the prompt construction methods based on promptly, interactive models, encompassing key elements such as object detection, multi-modal fusion, and the combination of various models. These methods provide powerful tools for harnessing the generalization capabilities of large models, thereby offering feasible solutions for achieving downstream tasks.
§.§ Object Detection
In the task of object detection, prompt-based methods play a crucial role in attaining the generalization needed for general artificial intelligence. These methods are considered a fundamental basis for achieving such generalization. Nevertheless, despite SAM claiming its ability to segment any object, its practical application has been questioned. In particular, concerns have been raised regarding the efficacy of SAM for applications such as medical image segmentation, camouflaged object detection, mirror and transparent object detection, and other similar scenarios. As a result, recent studies have concentrated on evaluating SAM's performance in various settings. These studies have demonstrated that point or box prompts are highly effective in various practical scenarios. SAM has achieved robust zero-shot performance in natural images, remote sensing, and medical imaging domains. However, its ability to generalize in complex application scenarios, particularly where semantic information is ambiguous or contrast is low, may not meet task requirements. Therefore, additional research is necessary to enhance SAM's performance in complex environments. In the realm of crater detection, SAM is leveraged for automated image segmentation. Subsequently, the shape of each segmented mask is assessed, and additional processing steps, such as filtering and boundary extraction, are carried out as well.
Object Counting In the domain of object counting <cit.>, researchers adopt SAM by employing bounding boxes as prompts to generate segmentation masks. Dense image features obtained from an image encoder are multiplied and averaged with a reference object's feature vector. Subsequently, a point grid consisting of 32 points per edge is employed as a prompt for segmentation. The resulting mask feature vector is obtained by multiplying and averaging it together with the dense features. Finally, to determine the total count, the cosine similarity between the predicted mask and the reference example's feature vectors is calculated. When the cosine similarity exceeds a predefined threshold, the target object is regarded as recognized. Counting all recognized target objects then yields the total.
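The final thresholding step might look as follows; the threshold value and function signature are assumptions for illustration.

```python
import numpy as np

def count_by_similarity(mask_features, reference_feature, threshold=0.8):
    """Count an object whenever the cosine similarity between its mask
    feature vector and the reference example's vector exceeds a threshold."""
    ref = reference_feature / np.linalg.norm(reference_feature)
    count = 0
    for feat in mask_features:
        sim = float(np.dot(feat / np.linalg.norm(feat), ref))
        if sim >= threshold:
            count += 1
    return count
```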
Remote Sensing SAM In the domain of remote sensing image segmentation <cit.>, due to the top-down perspective of remote sensing images, objects within the scene can have arbitrary orientations. Consequently, a technique was proposed to use the minimum enclosing horizontal rectangle of the Rotated Bounding Box (R-Box) as guidance for SAM segmentation when designing prompts. For the mask prompt, it is defined as the corresponding area enclosed by the bounding box. Previous studies have also demonstrated the suitability and effectiveness of bounding boxes in designing prompts for efficient annotation purposes <cit.>.
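The box-prompt construction reduces to taking the axis-aligned bounding box of the R-Box corners, for example:

```python
import numpy as np

def rbox_to_hbox(corners):
    """Minimum enclosing horizontal rectangle of a rotated box (R-Box),
    usable as a SAM box prompt; corners is a (4, 2) array of (x, y)."""
    xs, ys = corners[:, 0], corners[:, 1]
    return np.array([xs.min(), ys.min(), xs.max(), ys.max()])  # x0, y0, x1, y1
```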
SAM-Adapter SAM-Adapter <cit.> has been developed to infuse specialized domain knowledge into the original SAM model, thereby enhancing its capability to generalize across various downstream tasks. The integration has yielded promising outcomes. The adapter is engineered to acquire relevant knowledge and generate task-specific prompts at the preliminary stage; notably, prompts can be output at each Transformer layer. With this approach, the prompts injected into the segmentation network significantly improve the performance of SAM on challenging tasks. The approach has achieved impressive results in disparate domains such as camouflaged object detection, shadow detection, and medical image segmentation.
§.§ Multi-modal Fusion
In recent research, the introduction of textual information into visual tasks involving images, paintings, frames, and videos has significantly diversified these tasks and improved their fidelity. Inspired by the concept of generative models, the incorporation of textual prompts into CLIP has emerged as an effective approach.
Text2Seg Text2Seg <cit.> introduces a vision-language model that relies on text prompts as the input. The model operates as follows: first, the text prompt serves as an input to Grounding DINO, which generates bounding boxes. These bounding boxes guide SAM in generating segmentation masks. Next, CLIP Surgery generates heatmaps from the text prompts, and the point prompts derived from these heatmaps are fed into SAM. Lastly, a similarity algorithm is applied to obtain the ultimate segmentation map.
SAMText SAMText <cit.> introduces a versatile methodology for generating segmentation masks aimed at scene text in images or video frames. Once the input is provided, the process begins by extracting bounding box coordinates from a scene text detection model, using existing annotations. These extracted bounding box coordinates serve as prompts for SAM, which facilitates the subsequent generation of masks. If the bounding boxes exhibit orientation, SAMText computes their minimum bounding rectangles to obtain horizontal bounding boxes, which, in turn, serve as SAM's prompts for mask generation.
Caption Anything Caption Anything <cit.> has introduced a fundamental model-enhanced image captioning framework, which facilitates multi-modal control encompassing both visual and linguistic aspects. The framework seamlessly integrates SAM and ChatGPT, merging the visual and language modalities so that users can interact with the framework. During usage, users initially utilize various prompts, specifically points or bounding boxes, to flexibly control the input image, thereby enabling interactive user manipulation. The framework further refines the output instructions using large language models, ensuring effective alignment with the user's intended meaning and achieving significant consistency with the user's intent.
SAA+ Segment Any Anomaly + (SAA+) <cit.> introduces a novel technique for zero-shot anomaly segmentation that utilizes hybrid prompt regularization to enhance the adaptability of existing foundational models. The proposed regularization prompt incorporates domain-specific expertise and contextual information from the target image, thereby producing more robust prompts and facilitating more accurate identification of anomalous regions. In addition, many works have similarly concluded that incorporating domain expert knowledge as prior support might offer a potential solution to segmentation problems in complex scenes <cit.>.
§.§ Combination of Various Models
In complex scenarios, SAM's performance often lacks robustness, necessitating a new solution that combines interactive methods with efficient tools. The combination of these functionalities presents several potentials for a wide range of fields and exhibits outstanding performance in various tasks.
Inpaint Anything Image inpainting, an ill-posed inverse problem that involves restoring missing or damaged parts of an image with visually plausible structures and textures, has been extensively studied in the field of computer vision. Inpaint Anything (IA) <cit.> has proposed a conceptual pipeline based on the combination of various foundational models. By leveraging the strengths of these models, IA introduces three key functionalities in image inpainting: Remove Anything, Fill Anything, and Replace Anything. The pipeline follows a precise sequence, as shown in the diagram. Initially, a click prompt is employed to automatically segment designated regions, creating masks. Next, state-of-the-art inpainting models such as LaMa and Stable Diffusion (SD) are utilized to fill these masks, effectively completing the removal task. Following this step, a robust AI model like SD leverages a meticulously designed text prompt to generate the specific content required for filling or replacing the voids, facilitating the successful completion of the entire operation.
Edit Everything Edit Everything <cit.> introduces a generative system that combines SAM, CLIP, and SD to edit images guided by both image and text inputs. The original image is first segmented into multiple fragments using SAM. Then, the image-editing process is guided by text prompts such that it transforms the source image into the target image, aligning with the provided source and target prompts.
SAM-Track SAM-Track <cit.> proposes a video segmentation framework that combines Grounding-DINO, DeAOT, and SAM to enable interactive and automated object tracking and segmentation across multiple modalities. The framework incorporates interactive prompts in the form of click-prompt, box-prompt, and text-prompt in the first frame of the video to guide SAM's segmentation process. Text prompts are subsequently utilized in the following frames for further result refinement. This versatile framework finds applications in a wide range of domains, including unmanned aerial vehicle technology, autonomous driving, medical imaging, augmented reality, and biological analysis.
Explain Any Concept Explain Any Concept (EAC) <cit.> proposes a novel approach to explaining concepts via a three-stage pipeline. While SAM excels in instance segmentation, its integration into Explainable AI poses the computational challenge of excessive complexity. EAC solves this by employing SAM for initial segmentation and introducing a surrogate model for efficient explanation. The first stage of the process employs SAM for instance segmentation, followed by the utilization of a surrogate model that approximates the target deep neural network. In the final stage, the trained network is applied to the results obtained in the first stage, facilitating the effective interpretation of the model's predictions.
§ FUTURE DIRECTIONS AND IMPLICATIONS
With the continuous advancement of powerful large vision models, the significance of prompts within these models has become more prominent. Designing well-crafted prompts to effectively guide downstream tasks has emerged as a burgeoning avenue of research. However, the performance of general artificial intelligence remains constrained by its reliance on domain-specific knowledge. To overcome this limitation, future endeavors should focus on expanding the breadth of knowledge by integrating diverse and comprehensive datasets, employing interdisciplinary methodologies, and fostering fruitful collaborations among experts from various domains. These endeavors will contribute to enhancing the capabilities of AI systems and addressing the challenges associated with leveraging prompts in a more holistic manner.
§.§ Adaption of Large Vision Models
Large vision models have emerged as a prominent trend in the field of AI, prompting the need to address the challenge of effectively adapting these models to downstream tasks. Several key techniques offer potential solutions in this regard.
Prompt fine-tuning, an essential tool in general artificial intelligence, serves as a crucial step in enabling models to better adapt to downstream tasks <cit.>. By designing suitable prompt examples and predefined inputs, models can be guided to align better with the target task, thus enhancing performance through fine-tuning for improved downstream task completion.
Reinforcement learning enables models to continuously learn and adjust their parameters by leveraging feedback signals obtained from experimentation and errors, thereby maximizing their performance <cit.>. When combined with prompt fine-tuning, reinforcement learning demonstrates outstanding effectiveness in optimizing adaptive model performance.
Adapter modules enable efficient adjustments for specific tasks within large models by introducing small, functional modules <cit.>. This approach selectively modifies only certain parts of the model's structure without requiring significant changes to the overall architecture. Incorporating adapter modules in prompt engineering not only maintains the integrity of the larger model but also introduces task-specific functional structures, enabling more targeted prompt construction.
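A minimal sketch of a bottleneck adapter follows; the bottleneck width of 64 and the zero-initialized up-projection (so the adapter starts as an identity map) are common design choices rather than a prescription from any particular paper.

import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        # Zero-init the up-projection so training starts from the frozen model.
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Usage: insert after a frozen transformer block's output.
block_out = torch.randn(2, 16, 768)   # (batch, tokens, hidden)
adapter = Adapter(dim=768)
tuned = adapter(block_out)            # only adapter weights are trained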
Knowledge distillation is a technique that transfers knowledge from a large model to a smaller one. The key lies in compactly representing the knowledge from the larger model and applying it to new tasks, preserving the essential performance and generalization capabilities while facilitating natural deployment in new environments <cit.>. While prompt fine-tuning is a natural choice for large models, the effectiveness of prompt engineering in smaller models remains an open question. Knowledge distillation can assist in applying prompts to small models by transferring the foundational knowledge and generalization abilities from large models, thereby achieving local deployment of smaller models.
Researchers have already devised a range of model adaptation methods, which have proven effective in various domains. These methods encompass techniques such as transfer learning, domain adaptation, and fine-tuning. While these approaches have their merits, they may not fully address the unique challenges posed by different domains. As the pursuit of effective model adaptation continues, the future holds promise for even greater achievements in applying models to specific tasks across various domains.
§.§ Challenges and Considerations for Visual AGI
Pioneered by advancements in the field of NLP, the development of a unified framework for large models is poised to become a critical undertaking in the future of visual models. However, in stark contrast to the progress made in NLP, the challenges faced by large models in the computer vision domain are more pronounced.
First and foremost, a key aspect of achieving AGI lies in interactive engagement with the environment and maximizing rewards <cit.>. While NLP benefits from well-defined learning environments, enabling textual communication and task completion through multi-turn dialogues, the CV domain lacks both a clear path and interactive environments <cit.>. Building a realistic environment for CV proves exceedingly difficult due to the high costs and risks associated with human-agent interactions. On the other hand, constructing a virtual environment poses challenges when it comes to transferring trained agents to real-world scenarios <cit.>.
Moreover, the image space exhibits stronger semantic sparsity, domain variations, and infinite granularity compared to the text space <cit.>. Building upon the immense success of NLP, a foundation for achieving a unified paradigm in the CV domain has been established. Future research endeavors can take inspiration from the development of large NLP models, employing generative pre-training techniques combined with fine-tuning through instructions to achieve a unified approach in the CV field. Additionally, incorporating the capabilities of NLP, and applying multimodal techniques to large vision models can enable the fusion of language and images as an interactive mode for generative pre-training. This, in turn, opens up new avenues for human-machine interaction as a novel pre-training interactive mode.
§.§ Applications Across Multiple Domains
Visual prompts and large visual models have enabled significant progress in fields where visual understanding and analysis are critical. For example, the prompt-driven SAM has unlocked new opportunities in domains like medical imaging, agriculture, image editing, object detection, audio-visual localization, and beyond <cit.>. In the medical domain, visual prompts such as segmentation masks, bounding boxes, and key points are used to help detect diseases, quantify the severity of lesions, and analyze medical scans <cit.>. For the typical treatment sites in radiation oncology, Zhang et al. compared the Dice and Jaccard outcomes between clinical manual delineation and automatic segmentation using SAM with box prompts, and demonstrated SAM's robust generalization capabilities in automatic segmentation for radiotherapy <cit.>. In agriculture, visual prompts could be used to monitor crop growth, detect weeds or pests, and estimate crop yields <cit.>. Yang et al. assessed the zero-shot segmentation performance of SAM on representative chicken segmentation tasks and showed that SAM-based object tracking could provide valuable data on the behavior and movement patterns of broiler birds <cit.>.
By harnessing the advancements in LVM prompts, numerous fields stand to benefit from their integration. The ability to align language and visual data opens doors to improved medical diagnoses <cit.>, empowering healthcare providers with valuable insights from image-based information. Additionally, the flexibility of LVM prompts enables transformative applications in the realm of natural images, facilitating creative image manipulations and empowering users with powerful editing capabilities. Moreover, the utilization of prompts in video tracking introduces new possibilities for seamless human-machine interaction, allowing for enhanced object detection and precise audio-visual localization.
As research in the field progresses, the increasing utilization of LVM prompts is anticipated to revolutionize various industries and domains. The synergy between language and visual models, facilitated by prompts, paves the way for novel solutions, improved efficiency, and enhanced user experiences across multiple domains. In summary, visual prompts provide annotated data for visual understanding across domains. They give context and guidance, enhancing the ability of machines to interpret visual data. Visual prompts have become an important tool for optimizing visual recognition systems with the increasing use of machine vision.
§ CONCLUSION
This review paper provides a comprehensive assessment of the remarkable advancements achieved in the domain of prompt engineering within the field of computer vision. It encompasses a detailed exploration of the design of visual prompts based on the ViT network architecture and the application of prompts leveraging AGI models. From a model-centric standpoint, the study delves into the positive effects of prompts on downstream tasks, highlighting their potential to inspire and enhance the emergent capabilities of large models.
Moreover, this article offers an in-depth discussion of the significance and performance of prompt engineering in various scenarios and domains, emphasizing its pivotal role in the field of computer vision. The article emphasizes the immense potential inherent in prompt engineering, holding promise for groundbreaking advancements in this discipline.
Finally, the paper concludes by providing insights into future research avenues, highlighting the remarkable prospects of utilizing prompt engineering to revolutionize computer vision. This technique has the potential both to improve current models and to enable the creation of novel applications, motivating further research in this area. Given the growing significance of prompts in computer vision across various fields, the outcomes of this study are highly relevant and timely.
|
http://arxiv.org/abs/2307.02514v1
|
20230705124011
|
Exploring Multimodal Approaches for Alzheimer's Disease Detection Using Patient Speech Transcript and Audio Data
|
[
"Hongmin Cai",
"Xiaoke Huang",
"Zhengliang Liu",
"Wenxiong Liao",
"Haixing Dai",
"Zihao Wu",
"Dajiang Zhu",
"Hui Ren",
"Quanzheng Li",
"Tianming Liu",
"Xiang Li"
] |
eess.AS
|
[
"eess.AS",
"cs.AI",
"cs.SD"
] |
Alzheimer's disease (AD) is a common form of dementia that severely impacts patient health. As AD impairs the patient's language understanding and expression ability, the speech of AD patients can serve as an indicator for this disease. This study investigates various methods for detecting AD using patients' speech and transcript data from the DementiaBank Pitt database. The proposed approach involves pre-trained language models and a Graph Neural Network (GNN) that constructs a graph from the speech transcript and extracts features using the GNN for AD detection. Data augmentation techniques, including synonym replacement and a GPT-based augmenter, were used to address the small dataset size. Audio data was also introduced, and the WavLM model was used to extract audio features. These features were then fused with text features using various methods. Finally, a contrastive learning approach was attempted by converting speech transcripts back to TTS audio and using it for contrastive learning with the original audio. We conducted intensive experiments and analysis on the above methods. Our findings shed light on the challenges and potential solutions in AD detection using speech and audio data.
§ INTRODUCTION
Alzheimer's disease (AD), named after German psychiatrist Alois Alzheimer, is the most common form of dementia. AD degenerates brain cells, seriously affecting patients' quality of life <cit.>. Patients with AD have a shorter life expectancy with a median survival time of 3 to 6 years after diagnosis, which is even shorter for those with other underlying diseases <cit.>. Memory loss is a hallmark symptom of AD, with patients getting lost and forgetting family and friends <cit.>. AD also impairs language ability, making it difficult to read, write, and communicate <cit.>, and may cause difficulty swallowing and movement disorders in advanced stages <cit.>. AD is also a significant societal and economic burden, with 50 million people worldwide afflicted by the disease in 2020, projected to rise to 152 million by 2050 <cit.>. The cost of AD diagnosis, treatment, and care worldwide is estimated to be around 1 trillion US dollars per year <cit.>.
While AD is incurable, early diagnosis can slow down its development, making it crucial to detect the disease at its early stages <cit.>. However, medical diagnostic methods are often expensive, invasive, and require specialized equipment, necessitating the development of low-cost, non-intrusive, and simple diagnostic methods. Given that AD can impair patients' speech, their speech patterns exhibit certain characteristics, such as frequent silence, incoherence, word retrieval difficulty, and repetition <cit.>. In this study, we utilized patient-generated speech and corresponding transcribed text data, applying various NLP-based methods to diagnose AD and compare their effectiveness. Through thorough analysis and comparison of various NLP-based methods, we aim to provide valuable insights and help advance the development of more effective diagnostic tools for Alzheimer's disease.
§ RELATED WORK
There is prior work that utilizes patients' speech transcripts for AD detection and diagnosis. A study by Ben Ammar et al. <cit.> proposed an AD detection model that extracts linguistic features from patient speech transcripts and performs feature selection based on the KNN algorithm. The selected features are then used to train a machine learning classifier, such as SVM, for the final diagnosis. Yamanki et al. <cit.> proposed a contrastive learning model based on Siamese BERT to extract discriminative features from both the text of the patient speech transcripts and other features such as demographic, lexical, and semantic information. The extracted features are then used for AD diagnosis using machine learning classifiers. The work by Roshanzamir et al. <cit.> developed a text data processing pipeline for analyzing patient speech transcript data. The pipeline consists of an augmentation module that enriches the input text data and a splitter that breaks text into sentences. The model then uses BERT to encode the sentences, and the output encoded embeddings are used as input for various classification models, including a multi-layer perceptron (MLP), a convolutional neural network (CNN), and a bidirectional LSTM (biLSTM).
Some other work also incorporates speech data of patients to assist in AD diagnosis. A study proposed by Bertini et al. <cit.> uses log mel spectrograms and audio data augmentation techniques. The patients' audio data are first converted into log mel spectrograms, which are then enhanced using data augmentation techniques. Then, an autoencoder learns a condensed representation of the spectrogram, which then serves as input for a multilayer perceptron. Martinc et al. <cit.> use audio feature engineering for diagnosing AD. Specifically, they extract acoustic and textual features from the speech segments using the openSMILE toolkit and GloVe, and further extract Active Data Representation (ADR) features based on them.
A study by Agbavor et al. <cit.> uses pre-trained audio model including data2vec and wav2vec2 to extract audio features from patients, and evaluates its performance on the ADReSSo (Alzheimer's Dementia Recognition through Spontaneous Speech only) dataset.
§ METHODS
In our work, we employ 4 approaches to diagnose AD using audio recordings and transcripts of audio. First, we attempt a GNN-based method, which we call AD-GNN. This method utilizes the pre-trained language model BERT and a GNN to extract features from patient speech transcriptions and performs classification based on these features. Next, in the second approach, we first use text data augmenters to augment the patient speech transcriptions and then use AD-GNN for classification. We hope that data augmentation can partially address the overfitting problem caused by the small dataset size. In the third approach, we use both audio and text data, i.e., patient audio and its transcription. We expect the introduction of audio data to improve the performance. This approach employs a pre-trained speech model to extract audio features and AD-GNN to extract text features, and we attempt various modal fusion methods. The fourth approach, called the CLIPPO-like method, also uses both audio and text data. The innovation of this method lies in its imitation of CLIPPO's <cit.> work. The text data is converted into TTS audio, and then the same model is used to extract features from both the TTS audio and the original audio for contrastive learning. Fig. <ref> illustrates the basic framework of these four methods.
§.§ GNN-based Method
There is a growing trend of developing and applying graph-based representation learning methods to better model complex structural patterns in real-world data. Graph Neural Network (GNN) has achieved great success in both computer vision <cit.> and NLP <cit.>, with applications in the medical domain (e.g., the analysis of medical images <cit.>, medical notes <cit.> and radiology reports <cit.>). Thus, we envision that similar methods can also improve the effectiveness of NLP-based AD detection. At the same time, however, there is limited work related to detecting AD using graph-based methods.
In this work, we propose a lightweight, graph-based classification model for AD diagnosis using patient speech transcripts. The proposed model first computes the text embedding of the patient's speech with a pre-trained language model (BERT) <cit.>, taking advantage of the pre-trained language model that has been trained on a massive text corpus using a large-scale model architecture. After this step, the proposed model constructs the graph representation of the embedding and utilizes Graph Neural Network (GNN) to learn discriminative features for the final disease classification. Our hypothesis is that the graph structure and the corresponding GNN-extracted features would provide more complex and context-rich representations, compared to representations learned from language models alone (which is the typical approach of existing work). The proposed AD-GNN model is evaluated on the patient speech transcripts data from the DementiaBank Pitt database <cit.>. We formulate this problem as a binary (AD vs. normal) classification task and compare the AD-GNN model with previous methods.
§.§.§ Notation
The input of our model is a piece of text 𝐓=[t_1, t_2,⋯,t_n], which can be regarded as a token sequence with length n. 𝐓 gets its initial embedding 𝐇^0=[h_1^0, h_2^0, ⋯, h_n^0] through the embedding initializer where the initial embedding of token t_i is represented as h_i^0. Then, we construct a graph 𝐆=(𝐕,𝐄) according to 𝐓 which consists of a set of n nodes 𝐕={v_1,v_2,⋯, v_n} and edges (v_i,v_j)∈𝐄. The adjacency matrix of 𝐆 is denoted as 𝐀. The feature of node v_i is initialized as h_i^0 and will be updated to h_i^K through a K-layer GNN network. 𝐇^K=[h_1^K, h_2^K, ⋯, h_n^K], i.e., the final embedding of 𝐓, will be fed into the classifier and the model outputs the final one-hot (binary classes) classification results.
§.§.§ Model Architecture
AD-GNN is based on the Graph4NLP project <cit.>, which enables efficient development and experimentation with GNN for NLP tasks such as text classification, semantic parsing, and machine translation. The algorithmic pipeline of AD-GNN is shown in Fig. <ref>. AD-GNN uses raw text as input and passes it to the embedding initializer to embed each token. Then, a graph constructor is adopted to construct the initial graph, where each node represents a token in the original text. After this step, the graph passes through GNN layers, and the node embeddings are updated. Finally, the classification result is obtained through the classifier. We will introduce the model details below.
Embedding Initialization
Text 𝐓 is a sequence of tokens. The first step of the AD-GNN model is to convert the tokens into initialized embeddings 𝐇^0. Word vectors and pre-trained language models like BERT are widely used for embedding initialization. The Graph4NLP library provides many strategies for token embedding, and we chose two of them based on preliminary experiments:
* w2v_bilstm. We look up the word vector (glove.840B.300d.word2vec) for each token and then feed these representations to a BiLSTM to obtain contextual information. The outputs of the BiLSTM are the embeddings used for further processing.
* w2v_bert_bilstm. First, we use word vectors for the initial embedding. Then, BERT is adopted to encode the input text. Finally, we concatenate the word vector embedding and the BERT embedding into one vector and use it as the input to a BiLSTM to compute the final embedding (a minimal sketch of this strategy follows this list).
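Below is a minimal sketch of the w2v_bert_bilstm strategy; the class name is our own, and precomputed GloVe and BERT token embeddings are assumed to be provided as tensors. The hidden size of 150 makes the bidirectional output 300-dimensional, matching the embedding length h used in the experiments.

import torch
import torch.nn as nn

class W2vBertBiLSTM(nn.Module):
    """Concatenate static word vectors with BERT token embeddings, then a BiLSTM."""
    def __init__(self, w2v_dim=300, bert_dim=768, hidden=150):
        super().__init__()
        # hidden=150 so the bidirectional output is 300-dim (the paper's h).
        self.bilstm = nn.LSTM(w2v_dim + bert_dim, hidden,
                              batch_first=True, bidirectional=True)

    def forward(self, w2v_emb, bert_emb):
        # w2v_emb: (batch, n_tokens, 300); bert_emb: (batch, n_tokens, 768)
        x = torch.cat([w2v_emb, bert_emb], dim=-1)
        out, _ = self.bilstm(x)           # (batch, n_tokens, 300)
        return out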
Graph Construction
Graph4NLP provides many strategies to build a graph from text sequences. Here we experiment with three of them: dependency graph, dynamic graph, and fused graph.
The first method is to build a dependency graph based on the dependency relationships between words to describe the text structure. Stanford CoreNLP <cit.> implements a transition-based dependency parser based on a neural network. It is worth noting that if the input text contains multiple sentences, after obtaining the dependency tree of each sentence, we connect the last node of the previous sentence's dependency tree to the head node of the next sentence with an edge to produce a connected graph.
The second method is to build a dynamic graph whose structure can evolve during the training process. The rationale is that static graphs (e.g., dependency graphs) may be incomplete or improper, and the errors produced in the graph construction phase cannot be corrected. These errors may affect the accuracy of the final classification results. To counter this problem, Chen et al. proposed an end-to-end dynamic graph learning framework, Iterative Deep Graph Learning (IDGL) <cit.>, which is used in AD-GNN. IDGL first calculates the similarity between every two nodes to obtain a complete graph and then performs graph sparsification to generate a sparse graph. We use weighted cosine similarity to measure the similarity between nodes.
𝐒_i j=cos(𝐖⊙h_i^0, 𝐖⊙h_j^0),
where 𝐒_i j denotes weighted cosine similarity between node v_i and node v_j, 𝐖 denotes a trainable weight vector, ⊙ denotes the Hadamard product, and h_i^0 and h_j^0 are the embeddings of node v_i and v_j, respectively. There are various graph sparsification techniques in IDGL and ε-neighborhood method is adopted in AD-GNN, because it only retains the connection with a weight greater than a pre-defined threshold ε.
𝐀_i,j = 𝐒_i,j if 𝐒_i,j > ε, and 𝐀_i,j = 0 otherwise,
where 𝐒_i,j is the similarity between node v_i and v_j, and 𝐀_i,j is the weight of the edge between node v_i and v_j in the sparse graph.
Another method for graph construction is to fuse dependency and dynamic graph together to form a new graph. The fusion method implemented by Graph4NLP can be represented as
𝐀_com = λ𝐋_dep + (1-λ)f(𝐀),
where 𝐀_com is the adjacency matrix of the new graph, λ is a hyperparameter with a value between 0 and 1, 𝐋_dep is the normalized Laplacian matrix of the dependency graph, 𝐀 is the adjacency matrix of the dynamic graph and f(·) is the matrix normalization operation (e.g., row normalization).
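A minimal sketch of this dynamic graph construction follows; the helper names are ours, the single weight vector is a simplification (IDGL typically averages several similarity heads), and the row normalization used for f(·) is an assumption.

import torch
import torch.nn.functional as F

def dynamic_graph(h0, w, eps=0.95):
    """IDGL-style dynamic graph: weighted cosine similarity + eps-neighborhood.
    h0: (n, d) initial node embeddings; w: (d,) trainable weight vector."""
    hw = h0 * w                                     # Hadamard product W ⊙ h
    s = F.cosine_similarity(hw.unsqueeze(1), hw.unsqueeze(0), dim=-1)  # (n, n)
    return torch.where(s > eps, s, torch.zeros_like(s))  # drop weak edges

def fuse(a_dyn, lap_dep, lam=0.5):
    """Fuse the dynamic graph with the dependency graph's normalized Laplacian."""
    f_a = a_dyn / a_dyn.sum(dim=1, keepdim=True).clamp(min=1e-8)  # row-normalize
    return lam * lap_dep + (1.0 - lam) * f_a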
GNN Layers
The initialized graph is fed into GNN layers to learn the feature representation of each node. In the proposed AD-GNN model, GraphSAGE (Graph SAmple and aggreGatE) <cit.> and GGNN (Gated Graph Neural Network) <cit.> with K layers are separately tested. For the GraphSAGE model, the forward propagation operation of node v in layer k consists of three steps (a minimal sketch follows these steps):
* Considering calculation efficiency, a random sampling method is adopted to sample a fixed number of neighbors of each node v in the graph, denoted by 𝒩(v).
* Aggregate the embeddings of the neighbor nodes 𝒩(v) through the aggregation function AGGREGATE_k(·) (usually an LSTM) to obtain the embedding of 𝒩(v), i.e.
h_𝒩(v)^k = AGGREGATE_k({h_u^k-1, ∀ u ∈𝒩(v)}).
* Concatenate the embedding of node v at layer k-1 and the embedding of the neighbors of node v at layer k, and pass the result through a fully connected layer to get the embedding of node v at layer k, i.e.
h_v^k = σ(𝐖^k·CONCAT(h_v^k-1, h_𝒩(v)^k)),
where 𝐖 ^ k is a trainable matrix and σ is the activation function.
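The following is a minimal single-layer GraphSAGE sketch; it uses mean aggregation for brevity (the aggregator above is usually an LSTM), and the sampled neighbor indices are assumed to be given.

import torch
import torch.nn as nn

class SAGELayer(nn.Module):
    """One GraphSAGE layer: aggregate sampled neighbors, concatenate, transform."""
    def __init__(self, dim):
        super().__init__()
        self.lin = nn.Linear(2 * dim, dim)

    def forward(self, h, neigh):
        # h: (n, d) node features; neigh: list of index tensors, one per node.
        h_n = torch.stack([h[idx].mean(dim=0) for idx in neigh])   # aggregate
        return torch.relu(self.lin(torch.cat([h, h_n], dim=-1)))  # concat + FC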
For the GGNN model, which is a GRU-based message passing model, the forward propagation operation of node v in layer k consists of three steps (a sketch follows these steps):
* The first step is to perform the message passing operation, in which node v and its adjacent nodes interact through edges. The result of the message passing operation a_v^k is
a_v^k = 𝐀_v:^⊤ [h_1^k-1^⊤ … h_n^k-1^⊤]^⊤ + b,
where 𝐀_v: denotes the sub-matrix of the adjacency matrix corresponding to node v, whose entries encode the features of the incoming edges (from node i to node v) and the outgoing edges (from node v to node i).
* The second step is to generate new information. r_v^k is the reset gate that controls the generation of new information.
r_v^k =σ(𝐖^r a_v^k+𝐔^r h_v^k-1),
where 𝐖^r and 𝐔^r are both trainable matrices. The resulting new information (candidate state) h̃_v^k is
h̃_v^k = tanh(𝐖 a_v^k + 𝐔(r_v^k ⊙ h_v^k-1)),
where 𝐖 and 𝐔 are both trainable matrices and ⊙ denotes Hadamard product.
* The last step is to forget certain old information and remember new information to obtain the final representation. The update gate z_v^k controls what information should be remembered.
z_v^k =σ(𝐖^z a_v^k+𝐔^z h_v^k-1),
where 𝐖^z and 𝐔^z are both trainable matrices. The final feature of node v at layer k is calculated as
h_v^k = (1-z_v^k) ⊙ h_v^k-1 + z_v^k ⊙h̃_v^k.
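A compact GGNN propagation step is sketched below; it folds the reset and update gates of the equations above into PyTorch's GRUCell, represents edge features simply as incoming/outgoing adjacency matrices, and omits the bias b for brevity.

import torch
import torch.nn as nn

class GGNNLayer(nn.Module):
    """One GGNN propagation step: message passing then a GRU-style gated update."""
    def __init__(self, dim):
        super().__init__()
        # GRUCell internally realizes the reset/update gates of the equations above.
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, h, a_in, a_out):
        # h: (n, d); a_in/a_out: (n, n) adjacency for incoming/outgoing edges.
        msg = a_in @ h + a_out @ h   # message passing step (a_v^k), bias omitted
        return self.gru(msg, h)      # gated update of node states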
Classifier
AD-GNN uses a pooling layer and a multi-layer perceptron (MLP) to perform the final classification. An average pooling layer is used to average the features across all nodes to characterize the whole graph, i.e.,
r = 1/n∑_i=1^nh_i^K,
where h_i^K denotes the final feature of node v_i, and r denotes the feature of the whole graph. The MLP layer accepts the averaged graph feature as input and performs the binary classification.
§.§.§ Loss Function
The loss function of the proposed AD-GNN model varies according to the graph construction method. For the dependency graph, the loss is simply the cross-entropy loss L_pred of the classification. For the dynamic or fused graph, smoothness, connectivity, and sparsity are considered for the regularization of the constructed graph:
L_G = α/n^2 tr((𝐇^0)^⊤ 𝐋 𝐇^0) − β/n 𝟏^⊤ log(𝐀𝟏) + γ/n^2 ‖𝐀‖_F^2,
where α, β, γ are hyperparameters, n is the number of nodes, tr(·) denotes the trace of a matrix, 𝐇^0 is the node feature matrix, 𝐀 is the adjacency matrix, 𝐋 is the unnormalized graph Laplacian matrix, 𝟏 denotes the all-ones vector, and ‖·‖_F denotes the Frobenius norm of a matrix. The first, second, and third terms penalize non-smoothness, disconnection, and the density of the graph, respectively. The final loss is defined as the sum of the classification loss and the graph regularization loss, that is,
L = L_pred + L_G.
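A direct translation of this regularizer into Python follows; the hyperparameter values mirror those reported in the experimental settings (α = β = 0.1, γ = 0.3), and the small epsilon inside the logarithm is our addition for numerical stability.

import torch

def graph_reg_loss(h0, a, alpha=0.1, beta=0.1, gamma=0.3):
    """Smoothness + connectivity + sparsity regularization of a learned graph.
    h0: (n, d) node features; a: (n, n) adjacency matrix."""
    n = a.size(0)
    lap = torch.diag(a.sum(dim=1)) - a                        # unnormalized Laplacian
    smooth = alpha / n**2 * torch.trace(h0.t() @ lap @ h0)    # penalize non-smoothness
    conn = -beta / n * torch.log(a.sum(dim=1) + 1e-8).sum()   # penalize disconnection
    sparse = gamma / n**2 * (a ** 2).sum()                    # squared Frobenius norm
    return smooth + conn + sparse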
§.§ Data Augmentation Method
Data augmentation is a method that can expand the variety of data used to train models by generating modified versions of existing data. In order to address the issue of limited data in the Pitt Cookie-Theft dataset, we use a variety of text data augmentation techniques. The goal of these methods is to artificially increase the size of our dataset, thereby providing our models with more diverse training examples, reducing the risk of overfitting, and potentially improving their performance. Here is an introduction to the data augmentation methods we use.
* Synonym Replacement. This method uses a synonym dictionary, such as WordNet <cit.>, to replace words in the original text with their synonyms. It aims to maintain the semantic integrity of the sentences while introducing lexical diversity (a usage sketch follows this list).
* Counter-fitting Embedding Replacement. The data augmentation method based on word embeddings <cit.> replaces words in a sentence with other words that are close to them in the embedding space. However, this poses a problem: two words with similar embeddings only tend to occur in similar contexts, while their semantics may differ and may even be opposite. The counter-fitting embedding data augmentation method <cit.> instead uses counter-fitting embeddings, which reduce the distance between synonyms and increase the distance between antonyms, thus better ensuring semantic consistency.
* Masked Language Model Augmentation. This method <cit.> leverages the power of pre-trained language models such as the RoBERTa model <cit.> to generate new sentences. It uses the RoBERTa model to perform three operations: token swap, token insert, and token merge. Specifically, the token swap operation randomly replaces tokens in the original text with a special "[MASK]" token. The token insert operation randomly inserts "[MASK]" tokens into the original text. The token merge operation randomly merges adjacent tokens into a single "[MASK]" token. Subsequently, RoBERTa predicts the most likely token for the masked positions.
* Random Sentence Deletion. This method randomly removes sentences from the original text. This technique aims to simulate missing or incomplete information, which is common in real-world scenarios. This process helps to improve the model's robustness and generalization ability by training it to make accurate predictions even when confronted with incomplete data.
* Augmentation with GPT Models. This method <cit.> uses ChatGPT to rephrase each sample. With an appropriate prompt, ChatGPT can rephrase the patient's speech transcript data as instructed and thus effectively increase the size of the training set.
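As noted in the experimental settings, most of these augmenters come from the TextAttack library; a usage sketch for the WordNet-based synonym augmenter is shown below, with the swap ratio and number of generated variants chosen for illustration rather than taken from the paper.

from textattack.augmentation import WordNetAugmenter

# Swap ratio and number of generated variants are illustrative choices.
augmenter = WordNetAugmenter(pct_words_to_swap=0.1,
                             transformations_per_example=1)
augmented = augmenter.augment("the boy is taking a cookie from the jar")
print(augmented)  # a list with one synonym-substituted paraphrase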
§.§ Multimodal Method
Audio data can provide additional information that is not captured in transcriptions. For instance, changes in speech patterns such as pace, tone, and rhythm, as well as disfluencies such as stuttering and pauses, are often early indicators of cognitive decline in AD patients. These features can be extracted from audio data but are lost in text transcriptions.
In this work, we use the WavLM <cit.> model to extract audio features and perform classification. WavLM is a universal pre-trained speech model developed by Microsoft. The model initially applies random transformations to the input speech signal, such as mixing and adding noise, to enhance its robustness. Subsequently, a CNN encoder and a Transformer encoder are used to extract audio features. The Transformer encoder employs gated relative position bias to better capture the sequence order of the input speech. Furthermore, WavLM adopts a masked loss similar to BERT's. Specifically, it uses the K-means algorithm to convert speech features into discrete labels, and predicts the labels of masked positions. WavLM is pre-trained on large-scale and diverse speech data, including e-books, YouTube videos, and European Parliament recordings, totaling 94,000 hours. Therefore, compared to traditional audio feature extraction methods, it exhibits excellent robustness and generalization capabilities.
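A minimal feature-extraction sketch with the Hugging Face implementation of WavLM follows; the checkpoint name and the mean-pooling of frame-level features into a clip-level vector are our assumptions.

import torch
from transformers import AutoFeatureExtractor, WavLMModel

extractor = AutoFeatureExtractor.from_pretrained("microsoft/wavlm-base")
model = WavLMModel.from_pretrained("microsoft/wavlm-base")

waveform = torch.randn(16000 * 5)  # placeholder for 5 s of 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    frames = model(**inputs).last_hidden_state   # (1, n_frames, 768)
audio_feat = frames.mean(dim=1)                  # pool to one clip-level vector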
Next, we employ multimodal learning. Multimodal learning, which combines information from different types of data (in this case, text and audio), can lead to more robust and accurate models. This is because different modalities can provide complementary information, allowing the model to learn a more generalizable representation of the data. We pass the text features obtained by AD-GNN and the audio features extracted by WavLM through separate fully connected layers to ensure that the text and audio features have the same dimension. Then we fuse these two features to facilitate sufficient interaction between them. The fused features are then fed into the MLP classifier for classification, yielding the final result. Our multimodal approach is illustrated in Fig. <ref>.
The fusion method is crucial for the effectiveness of multimodal models. We have attempted two multimodal fusion methods: direct concatenation and the cross network. Direct concatenation is the simplest fusion method, which directly concatenates the audio and text features into one tensor. The cross network proposed by Wang et al. <cit.> applies explicit feature crossing (sketched below). It consists of multiple layers, where each layer produces higher-order interactions based on existing ones while keeping the interactions from previous layers.
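A minimal sketch of such a cross network is given below; the two-layer depth is an assumption, and the input is the concatenated text and audio feature vector.

import torch
import torch.nn as nn

class CrossNetwork(nn.Module):
    """Cross network of Wang et al.: x_{l+1} = x_0 * (x_l . w_l) + b_l + x_l."""
    def __init__(self, dim, n_layers=2):
        super().__init__()
        self.w = nn.ParameterList([nn.Parameter(torch.randn(dim) * 0.01)
                                   for _ in range(n_layers)])
        self.b = nn.ParameterList([nn.Parameter(torch.zeros(dim))
                                   for _ in range(n_layers)])

    def forward(self, x0):
        # x0: (batch, dim), the concatenated text and audio features.
        x = x0
        for w, b in zip(self.w, self.b):
            x = x0 * (x @ w).unsqueeze(-1) + b + x   # explicit crossing + residual
        return x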
§.§ CLIPPO-like Method
CLIPPO (CLIP-Pixels Only) <cit.> is a pixel-based multimodal model that can understand both images and alt-text simultaneously without requiring text encoders or tokenizers. Its approach is to render alt-text as images and then encode both images and text using the same network architecture. It uses a contrastive learning loss function to make embeddings of matching images and alt-text as close as possible and all other image and alt-text embeddings as far apart as possible. The importance of CLIPPO lies in its unified modeling of images and text, which is not limited by text tokenizers, simplifying the complexity of multimodal learning and improving model scalability and generalization. Compared to using two completely different models for the text and image modalities, CLIPPO reduces the number of parameters by half while achieving comparable experimental results on various tasks, including zero-shot image classification and image-text retrieval.
In our research, we attempt to replicate the work of CLIPPO. Fig. <ref> illustrates our model architecture. We follow these steps:
* We use the SpeechT5 model <cit.> fine-tuned for text-to-speech (TTS) to convert the patients' transcripts back into speech. The speech features of these TTS audios, such as intonation and speed, are similar to those of normal human speech.
* Next, we use WavLM to extract features from both the original audio and the TTS audio.
* We then employ contrastive learning to compare the generated TTS audio with the patient's original speech data. Similar to CLIPPO, we aim to make the features of the matched original and TTS audio as similar as possible, while keeping the features of other original and TTS audio pairs as different as possible. This approach can help the model extract richer and more discriminative features. We use a contrastive loss function similar to that used in CLIPPO (see the sketch after this list). Specifically, in each batch, there are n samples, and the original audio feature of the i-th sample is represented as a_i^origin, while the TTS audio feature of the i-th sample is represented as a_i^tts. First, we perform L2 normalization on the above features to obtain e_i^origin, i=1,2,⋯,n and e_i^tts, i=1,2,⋯,n. Then, we calculate the cosine similarity matrix S between the original audio features and the TTS audio features. S is an n× n matrix, and the element in the i-th row and j-th column is calculated as S_ij=(e_i^origin· e_j^tts) e^t, where t is a temperature parameter. Among these n× n sample pairs, there are n positive sample pairs (matching original audio and TTS audio pairs) and n^2-n negative sample pairs (all other original audio and TTS audio pairs). We separately calculate the cross-entropy loss of the original audio L_con^origin and the cross-entropy loss of the TTS audio L_con^tts. Finally, we take the average of these two losses as the final contrastive loss L_con. That is:
L_con^origin = cross_entropy_loss(S, labels, axis=0),
L_con^tts = cross_entropy_loss(S, labels, axis=1),
L_con = 1/2(L_con^origin+L_con^tts),
where labels=[1,2,⋯,n].
* At the same time, we record the classification loss L_origin for the original audio and the classification loss L_tts for the TTS audio, both of which are calculated using cross-entropy loss.
* The final loss is a weighted sum of the above three losses, namely,
L=α L_con + β L_origin + γ L_tts.
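A minimal sketch of this symmetric contrastive loss follows; for readability it divides the similarities by a temperature t rather than multiplying by e^t, which is equivalent up to a reparameterization of t, and the value t = 0.07 is an illustrative assumption.

import torch
import torch.nn.functional as F

def clippo_like_loss(a_origin, a_tts, t=0.07):
    """Symmetric contrastive loss between original and TTS audio features.
    a_origin, a_tts: (n, d) batch features; matched pairs share the same index."""
    e_o = F.normalize(a_origin, dim=-1)                # L2 normalization
    e_t = F.normalize(a_tts, dim=-1)
    s = (e_o @ e_t.t()) / t                            # (n, n) similarity matrix
    labels = torch.arange(s.size(0), device=s.device)  # positives on the diagonal
    loss_o = F.cross_entropy(s, labels)                # rows: original -> TTS
    loss_t = F.cross_entropy(s.t(), labels)            # columns: TTS -> original
    return 0.5 * (loss_o + loss_t)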
The advantage of this approach lies in the fact that since the TTS audio is directly generated from text, it contains semantic information that may not be well reflected in the original speech. By using contrastive learning, we can make the features of the original audio and its corresponding TTS audio as similar as possible. This allows our model to learn how to link speech features such as intonation and pauses with textual features such as semantic information, without the need for pre-trained language models such as BERT. This approach reduces the number of parameters required, resulting in a more compact model structure.
§ EXPERIMENTS
§.§ Datasets
In this work, we used patients' speech and corresponding transcript data from the DementiaBank Pitt database <cit.>. Participants were asked to describe what they saw when presented with a picture showing a mother washing dishes in a sink and children trying to steal cookies from a cookie jar (the "Cookie-Theft picture"). The dataset consists of 257 samples marked "probable/possible AD" and 243 samples marked "healthy control." The audio records of these interviews were manually transcribed and annotated. The transcriptions are in the CHAT (Codes for the Human Analysis of Transcripts) format <cit.>, a standard protocol for TalkBank data.
§.§ Experimental Settings
In our experiments, we conducted extensive evaluations of the GNN-based method, the data augmentation-based method, the multimodal method, and the CLIPPO-like method. For the GNN-based method, we separately tested the effects of different choices of embedding initializer, graph constructor, and GNN on model performance. For the embedding initializer, we tested w2v_bilstm and w2v_bert_bilstm. For graph construction methods, we compared the dependency graph, the dynamic graph, and the fusion of the two. For the choice of GNNs, we tested GraphSAGE and GGNN. We also experimented with adjusting the number of GNN layers, as well as completely removing the GNN as an ablation study. The batch size, number of epochs, and learning rate are set to 20, 30, and 0.001, respectively. The ε in Eq. <ref> is set to 0.95. The α, β, γ in Eq. <ref> are set to 0.1, 0.1, and 0.3, respectively. The λ in Eq. <ref> is set to 0.5. The length of the embedding h is set to 300.
For data augmentation methods, we first use various text data augmentation methods to increase the size of the training set. Then, just as in the previous experiment, we use AD-GNN for classification. The data augmentation methods we used are listed below.
* Counter-Fitting Embedding Augmenter. This method replaces words in a paragraph with other words that are top-k similar to them in the counter-fitting embedding space.
* Sentence Deletion Augmenter. This method randomly removes one sentence from the original text.
* RoBERTa Augmenter. This method uses the RoBERTa model to perform three operations: token swap, token insert, and token merge.
* Wordnet Augmenter. This method substitutes words in the original text with their synonyms from the WordNet lexical database.
* GPT-3.5 Augmenter. This method uses the gpt-3.5-turbo model to generate new samples. The prompt we provided to the model is “Please write another paragraph using the speaking style of the following paragraph.”
* GPT-4 Augmenter. This method uses the gpt-4 model to generate new samples. We provide the same prompt to the model as used in GPT-3.5 Augmenter.
For all data augmentation methods except the GPT-4 Augmenter, we attempted to double the size of the dataset by generating one new sample for each original sample. For the GPT-4 Augmenter, due to cost considerations, we randomly selected one sample out of every five for data augmentation. Later, to investigate the impact of the augmentation factor on performance, we also conducted an experiment on the Sentence Deletion Augmenter with an augmentation factor of 5 (that is, generating 5 new samples for each original sample). For data augmentation methods other than those based on GPT-3.5 and GPT-4, we use the implementations provided in the TextAttack <cit.> library.
For the experiments on multimodal methods, we compared the effectiveness of using only AD-GNN to classify transcriptions of patient speech, using only the WavLM-base model to classify patient speech audio, and combining both with the multimodal method. For the multimodal method, we compared the effectiveness of two fusion methods: direct concatenation and the cross network. The batch size, number of epochs, and learning rate are set to 20, 30, and 0.001, respectively, the same as in the GNN-based method experiment. The sampling rate of the audio data is 16,000 Hz. For the experiment on the CLIPPO-like method, the batch size, learning rate, and number of epochs were set to 4, 1.5e-5, and 100, respectively. The parameters α, β, and γ in Eq. <ref> were all set to 1.
For all the experiments mentioned above, we performed 10-fold cross-validation 5 times to obtain stable results. We used accuracy as the evaluation metric, as the numbers of positive and negative samples are largely balanced.
§.§ Results
The results related to AD-GNN are shown in Table <ref>. Apparently, w2v_bert_bilstm leads to better classification performance than w2v_bilstm, indicating that the pre-trained language model is more effective in performing the initial token embedding. However, we noticed that regardless of the graph construction method and GNN used, there was no significant improvement in accuracy, except for a slight gain (0.8504) when using the fused graph and two-layer GraphSAGE. The reason may be that the graph did not capture the key information related to AD detection. For example, dependency graphs mainly focus on the grammatical relationships between words in a sentence, and these relationships may not be relevant to the language features of AD patients, such as vocabulary richness.
The experimental results about data augmentation are shown in Table <ref>. It is shown that all data augmentation methods have a negligible impact on the accuracy of the model. Among them, the Counter-Fitting Embedding Augmenter, Wordnet Augmenter, GPT-4 Augmenter, and Sentence Deletion Augmenter (with an augmentation factor of 5) can improve performance, but the effect is not significant. Moreover, increasing the augmentation factor also has a very limited impact. This may be due to the following reasons. First, AD patients tend to use simple words, and synonym replacement may substitute uncommon or complex ones. Moreover, although the Sentence Deletion Augmenter can simulate real-world information loss, it may also discard important information. Additionally, excessive augmentation may introduce too much noise, making it difficult for the model to learn useful information. Furthermore, some data augmentation methods such as the RoBERTa Augmenter may not preserve the semantics of the original sentence. Finally, data augmentation methods may fail to simulate the speaking style of AD patients. For example, for the GPT-3.5 Augmenter, even when we ask it to mimic the speaking style of the speaker in the prompt, its output is sometimes unsatisfactory, and the generated sentences are too formal.
The experimental results about multimodal methods are shown in Table <ref>. It can be observed that the accuracy of the text modality (0.8460) is much higher than that of the audio modality (0.7714). This may be because the complexity and dimensionality of audio data are often higher than those of text data, which may make it difficult for the model to learn effective feature representations during training. In addition, background noise in the audio may also affect the accuracy of the experiment. The reason might also be that although the WavLM model performs well on many speech tasks, it is only pre-trained on data such as electronic books and European Parliament recordings, and has not been pre-trained on AD patients' data. Therefore, it cannot fully capture audio features related to AD.
When we fused the data from both text and audio modalities, we found that the accuracy of the direct concatenation method (0.8475) is almost the same as that of the cross network method (0.8418), and is very close to the experimental result of using only text. Perhaps this is because the experimental results of the two modalities differ greatly, with strong performance in the text modality but mediocre results in the audio modality, causing the model to overly rely on the stronger modality.
Ultimately, we found that the CLIPPO-like method significantly improves performance compared to using only raw audio. This indicates that aligning TTS audio with the raw audio helps the model capture the semantic information of the audio without the need for pre-trained language models like BERT.
§ CONCLUSION AND DISCUSSION
In this study, we systematically explored various methods for detecting Alzheimer's disease (AD) from patients' speech and transcribed text, including GNN-based, data augmentation, multimodal, and CLIPPO-like methods, and conducted extensive experiments that provide a rich reference for future research. Our in-depth analysis also points to directions for future work. In particular, we found that the CLIPPO-like method enables the model to learn semantic information without introducing a pre-trained language model, significantly improving performance compared to using only raw audio.
For future research, there are several possible directions for improvement. First, in addition to using patient speech and the transcribed text, we can also combine other data, such as the patient's facial expressions, which may provide more information and improve AD detection. Second, we believe that larger datasets may be the best way to improve model performance. In our study, we only used about 500 samples, which may have limited the performance of our models. Finally, we believe that improving data augmentation methods is a promising direction. In our study, we tried various data augmentation methods, but the effects were not ideal. We believe that if the new samples generated by the augmenters can better simulate the language features of AD patients, the model may perform better.
§ REPRODUCIBILITY STATEMENT
* Datasets: The patient audio and transcript data used in this work is provided by the DementiaBank Pitt database at dementia.talkbank.org <cit.>. Data will be available for download upon request.
* Code: Both the code and experiment settings for our model are available at: <https://github.com/shui-dun/multimodal_ad>.
unsrt
|
http://arxiv.org/abs/2307.02847v1
|
20230706082631
|
Towards Efficient Control Flow Handling in Spatial Architecture via Architecting the Control Flow Plane
|
[
"Jinyi Deng",
"Xinru Tang",
"Jiahao Zhang",
"Yuxuan Li",
"Linyun Zhang",
"Fengbin Tu",
"Leibo Liu",
"Shaojun Wei",
"Yang Hu",
"Shouyi Yin"
] |
cs.AR
|
[
"cs.AR"
] |
Spatial architecture is a high-performance architecture that uses control flow graphs and data flow graphs as the computational model and producer/consumer models as the execution models. However, existing spatial architectures suffer from control flow handling challenges. Upon categorizing their PE execution models, we find that they lack autonomous, peer-to-peer, and temporally loosely-coupled control flow handling capability. This leads to limited performance in control-intensive programs.
A spatial architecture, Marionette, is proposed, with an explicitly designed control flow plane. The Control Flow Plane enables autonomous, peer-to-peer and temporally loosely-coupled control flow handling. The Proactive PE Configuration ensures timely and computation-overlapped configuration to improve the handling of Branch Divergence. The Agile PE Assignment enhances the pipeline performance of Imperfect Loops. We develop the full stack of Marionette (ISA, compiler, simulator, RTL) and demonstrate that in a variety of challenging control-intensive programs, compared to state-of-the-art spatial architectures, Marionette outperforms Softbrain, TIA, REVEL, and RipTide by geomean 2.88×, 3.38×, 1.55×, and 2.66×.
§ INTRODUCTION
Moore's Law has propelled progress in conventional processors for decades, but the formerly exponential growth is now diminishing. Fortunately, spatial architectures, such as Coarse-Grained Reconfigurable Arrays (CGRAs) <cit.>, Reconfigurable Dataflow Processors <cit.>, and Systolic Arrays <cit.>, exhibit great promise owing to their inherent flexibility and remarkable performance.
Spatial architectures represent a category of accelerators that harness the immense potential of high computational parallelism through direct intercommunication among an array of processing engines (PEs). The software programs are transformed into control flow graphs (CFGs) and data flow graphs (DFGs) to facilitate their compilation. The CFG captures the program's control dependencies, including loops and conditions. The widely prevalent execution model for spatial architectures is the producer/consumer pipeline model <cit.>. The CFGs and DFGs are assigned to PEs and interconnect networks. This mapping relationship is expressed through carefully distributed instructions (configurations) among the PEs and interconnect networks. Spatial architecture can mitigate data transfers from memory, thus bypassing the storage bottleneck and facilitating the relentless advancement of computing power. Furthermore, its producer/consumer pipeline model excels in accelerating kernels with conventional data-level parallelism, bringing noteworthy performance benefits in many domains such as artificial intelligence (AI), mobile communication, and image processing <cit.>.
However, contemporary applications impose heightened demands on the spatial architecture's capacity to adeptly handle control flow. On the one hand, modern applications across pivotal domains, such as mobile communication, computer vision, bioinformatics, and general-purpose kernels, exhibit intricate control flow patterns, involving branches or nested loops, as shown in Table <ref>. On the other hand, AI also has higher requirements for control flow processing capabilities. Tenstorrent has introduced a cutting-edge framework for big AI models, encompassing Dynamic Sparsity, Conditional Execution, and Dynamic Routing <cit.>. FlexMoE <cit.> has further proposed an innovative scheduling module that enables dynamic mapping of the model-to-hardware allocation based on real-time dataflow over the existing DNN runtime.
Unfortunately, spatial architectures exhibit limitations in effectively handling control flows. To tackle this challenge, this work strives to comprehensively pinpoint the fundamental architectural limitations of the hardware execution model of spatial architecture, at both array-level and PE-level. We survey the representative spatial architectures in the past decade, categorizing their PE execution models into two distinct paradigms, namely von Neumann PE and dataflow PE. Furthermore, we conduct an in-depth analysis of these two models in the context of handling two typical control flow scenarios, namely Branch Divergence and Imperfect Loop.
Our observations indicate that conventional PE execution models cause significant PE idleness while handling the above typical control flows. This limitation stems from the fact that (1) von Neumann PEs lack the capability to autonomously initiate configuration changes in other PEs, and there exists no direct channel for control information transfer between PEs; and (2) the distinctive tokens utilized by dataflow PEs result in a temporally and spatially close-knit coupling of control flow and data flow, consequently constraining control flow transfer. We advocate that an ideal PE for control flow handling is expected to (1) autonomously change the configurations of other PEs, (2) incorporate a peer-to-peer control flow path to enable agile control information transmission, and (3) be temporally loosely-coupled with the dataflow path.
To achieve the above objectives, our insight is that the control flow handling in spatial architectures should stand out from the current hybrid design that mixes control flow handling and data flow handling. This calls for a new architectural scheme, which decouples the control flow handling and dataflow handling.
We introduce Marionette, a spatial architecture design with a decoupled control flow plane and data flow plane. Specifically, we architect a control flow plane for existing spatial architectures, which includes a redefined PE architecture and features. We believe a decoupled control flow plane is naturally beneficial for autonomous, peer-to-peer, and temporally loosely-coupled control flow handling capability.
Control Flow Plane: The control and configuration components of Marionette have been consolidated into a control flow plane, which completely decouples control flow handling from data flow handling. To enable autonomous, peer-to-peer and temporally loosely-coupled control flow handling, the control flow plane incorporates three distinct PE micro-architectures and a specialized control network.
Proactive PE Configuration: To achieve timely and computation-overlapped configuration, we devise the Control Flow Sender, which transfers control flow promptly to hasten the configuration of subsequent PEs. This allows executing current-stage computation and next-stage configuration in the same stage. Consequently, Marionette achieves a high PE utilization rate under Branch Divergence.
Agile PE Assignment: Given the solid foundation for control flow handling capabilities provided by the Marionette control flow plane, we optimize the Marionette scheduling strategy and develop a Control Flow Scheduler. This enhancement renders Marionette highly flexible in constructing basic block pipelines, leading to a significant improvement in PE utilization when processing Imperfect Loops.
The contributions of this work are as follows:
* We present a taxonomy that categorizes PE execution models of spatial architectures based on prior research.
* We summarize the control flow predicaments encountered by existing spatial architectures when confronted with demanding control flow applications, namely the lack of autonomous, peer-to-peer, and temporally loosely-coupled control flow handling capability. To the best of our knowledge, this study represents the first comprehensive reexamination of the role of control flow in execution models for spatial architectures.
* We enhance the execution model of spatial architectures by adopting a decoupled control flow plane and introducing three corresponding innovative features. Marionette, our software-defined hardware solution, realizes each feature, and we implement a full stack including compiler, simulator, and RTL design.
* We conduct a comprehensive evaluation that includes the acceleration effect of each innovative feature. Compared to state-of-the-art architectures, Marionette outperforms Softbrain, TIA, REVEL, and RipTide by geomean 2.88×, 3.38×, 1.55×, and 2.66× across a variety of intensive control flow applications.
§ BACKGROUND
This paper addresses the challenges that control-flow-intensive algorithms pose for Spatial Architectures (SAs). To provide a comprehensive understanding of this problem, we first give an overview of the SA's computational model and execution model.
§.§ SA Computational Model
The upper part of Figure <ref> shows the computational model of a SA. A SA normally uses a Control Data Flow Graph (CDFG) as its computational model, where a program running on the SA is represented as CDFGs. The CDFG consists of a control flow graph (CFG) and data flow graphs (DFGs). A DFG <cit.> is a graph that depicts operations as nodes and data dependencies as edges. As there are no control dependencies in a DFG, it is usually embedded in a basic block (BB), which has a single entry and a single exit. The CFG <cit.> is a graph whose nodes are BBs and whose edges represent the control dependencies between the BBs.
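For concreteness, a minimal sketch of how a CDFG could be represented in C is given below; the struct and field names (DfgNode, BasicBlock, Cdfg) are illustrative assumptions rather than any actual compiler IR.

#include <stddef.h>

/* A DFG node: one operation plus data-dependency edges to its consumers. */
typedef struct DfgNode {
    int op;                         /* opcode, e.g. add/mul/load */
    struct DfgNode *consumer[4];    /* data-dependency edges     */
    size_t n_consumers;
} DfgNode;

/* A basic block: a DFG with a single entry and a single exit. */
typedef struct BasicBlock {
    DfgNode *nodes;
    size_t n_nodes;
    struct BasicBlock *cfg_succ[2]; /* control edges: taken / not-taken */
} BasicBlock;

/* The CDFG: a CFG over basic blocks, each embedding its own DFG. */
typedef struct {
    BasicBlock *blocks;
    size_t n_blocks;
    BasicBlock *entry;
} Cdfg;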
§.§ SA Execution Model
We briefly introduce the essential architecture of a SA and then elaborate on the hardware execution model at the array level and the PE level. We believe that understanding the root cause of the control flow handling problem warrants a re-examination of the hardware execution model.
SA Hardware Architecture:
As shown in the lower left of Figure <ref>, a SA consists of a group of processing elements (PEs) interconnected by an on-chip network. A PE typically includes a set of functional units, such as adders, multipliers, and shifters, and can be designed to support higher-level composite operations. Moreover, the PEs can be reconfigured to perform different tasks, and the interconnect network can be programmed to support various data flows and communication patterns, allowing the PEs to be connected in different ways to form various computational structures.
SA Array-level Execution Model:
A SA must be programmed to run applications. A typical approach is to map the DFGs of the program onto the PE array, along with a set of hardware resources that will be used to execute the tasks. The hardware resources can include functional units, interconnects, and memory blocks. A mapping algorithm is then used to determine the optimal placement of tasks on the PEs and to configure the interconnect network to support the required data flows. Such configurations of PEs and networks are achieved through instructions.
The producer/consumer pipeline model is a crucial characteristic of the SA array-level execution model that enables efficient data transfer and computation. Each PE is assigned a specific operator in the DFG, and multiple PEs are spatially interconnected to form a pipeline. The lower right part of Figure <ref> depicts the producer/consumer pipeline, wherein the pipeline initiation interval (II) equals 1, meaning that a new loop iteration can begin every cycle. As a result, the producer/consumer pipeline model affords two crucial benefits: high parallelism and effective hardware resource utilization.
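As a hypothetical illustration of the II = 1 case, consider mapping the dot-product loop below onto a three-PE chain (load, multiply, accumulate): after the initial fill latency, a new iteration enters the pipeline every cycle and all three PEs stay busy.

/* Hypothetical kernel mapped onto a 3-PE producer/consumer pipeline:
 *   PE0: load a[i], b[i]   PE1: t = a[i] * b[i]   PE2: sum += t
 * With II = 1, iteration i enters PE0 at cycle i, so once the pipeline
 * fills, all three PEs retire one operation per cycle. */
float dot(const float *a, const float *b, int n) {
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += a[i] * b[i];
    return sum;
}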
§.§ PE Execution Model
In this work, our goal is to comprehensively examine the existing SA execution model and pinpoint the root cause of inefficient control flow handling at both the array level and the PE level. To achieve this goal, we first conduct a comprehensive survey of SAs from the past decade, as shown in Table <ref>. We then categorize them into von Neumann architecture-derived <cit.> and dataflow architecture-derived <cit.> PEs according to their control flow handling schemes, as shown in Figure <ref>[The related work is detailed in Section <ref>]. We then delve into the distinct ways they handle control flow and identify their respective limitations.
Control Flow Handling of von Neumann PE:
A typical execution model of von Neumann PE is shown in Figure <ref> (a). Von Neumann PE gets its configurations from the instruction buffer to configure the interconnect input/output, local register, and function unit. In the traditional von Neumann architecture, the execution sequence of instructions is controlled by the program counter (PC) pointer. However, in the evolved von Neumann PE, the PC pointer is often replaced by a finite state machine or a control core. This enables more flexible control flow and quicker reconfiguration of the processor. The logic of switching configurations is pre-set at compile time and can be quickly reconfigured. During execution, each von Neumann PE has distributed and isolated configuration logic, which cannot be directly changed by other PEs. To adapt to the pipeline of the spatial architecture, each instruction may take effect on the data input for a period of time, enabling producer-consumer parallelism.
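A hedged C sketch of this behaviour follows; the type and signal names (Config, lifetime, advance) are assumptions for illustration, since a real PE implements this logic as hardware state machines rather than software.

/* Sketch of one von Neumann PE step, under assumed names. */
typedef struct {
    int fu_op, in_sel, out_sel;  /* FU, interconnect and register setup  */
    int lifetime;                /* cycles this instruction stays active */
} Config;

void von_neumann_pe_step(const Config *ibuf, int *pc, int advance) {
    const Config *c = &ibuf[*pc];  /* configuration from the local buffer */
    (void)c; /* ...drive the datapath from c for c->lifetime cycles, so one
                instruction serves many pipelined data items... */
    if (advance)     /* decided by a local FSM or an external CCU, */
        (*pc)++;     /* never directly by a peer PE                */
}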
Control Flow Handling of dataflow PE: Figure <ref> (b) displays a standard dataflow PE execution model. A token is used as input, consisting of a data value and a tag. The tag activates the configuration, while the data drives the operation. This means that the dataflow PE's configuration can be altered by other PEs during execution. However, the token binds control flow to data flow, which leads to some limitations. Each PE firing requires a configuration step before execution, causing latency overhead. Moreover, this coupling limits the effect of an instruction to the current data, hampering pipeline control.
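A comparable hedged sketch of the tagged-token model is shown below; the Token and TaggedConfig layouts are assumptions. The point made explicit by the code is that the tag (control) and the data arrive fused, so every firing pays a configuration-lookup step before it can compute.

typedef struct { int tag; int data; } Token;
typedef struct { int tag; int fu_op; } TaggedConfig;

/* One firing: the tag selects (activates) a configuration, then the data
 * drives the operation; fu_op + data stands in for a real FU execution. */
int dataflow_pe_fire(Token t, const TaggedConfig *cfgs, int n_cfg) {
    for (int i = 0; i < n_cfg; i++)  /* configuration lookup on every firing */
        if (cfgs[i].tag == t.tag)
            return cfgs[i].fu_op + t.data;
    return 0;                        /* no matching configuration: stall */
}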
§ CHALLENGES AND MOTIVATIONS
In this section, we explain in detail how von Neumann PEs and dataflow PEs handle control flow when executing control-flow-intensive kernels. We use two representative control flow forms, namely Branch Divergence and Imperfect Loop, which are commonly found in various algorithms. These control flows account for over 40% of control flow on average in popular benchmark suites such as MachSuite <cit.>, Rodinia <cit.> and PolyBench <cit.>.
Our goal is to show the root cause of existing spatial architectures' difficulty in handling control flow: the lack of a mechanism for autonomous, peer-to-peer, and temporally loosely-coupled control flow information transfer among PEs in a SA. This observation motivates us to propose a decoupled control flow plane for SAs, which includes a redefined PE architecture and features.
§.§ Two Typical Control Flow Forms
Branch Divergence is a prevalent issue in CFGs, where the program's control flow divides into different execution paths due to the presence of conditional branches. This occurs when the program encounters a decision point and must choose between multiple execution paths based on specific conditions. Branch Divergence is common in kernels such as Sort and Merge, in database and sparse computing workloads. As an example, Figure <ref> (a) shows a code snippet from Merge, where the data flow within the conditional branch dynamically forks into two paths (true and false), i.e., two BBs. This results in divergent execution paths and can lead to poor PE utilization in both existing PE execution models.
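As the figure itself is not reproduced here, the hypothetical merge routine below shows the shape of such code: a data-dependent comparison forks each iteration into one of two BBs.

/* Hypothetical merge of two sorted arrays; each iteration of the main loop
 * forks into one of two BBs depending on a data comparison. */
void merge(const int *a, int n, const int *b, int m, int *out) {
    int i = 0, j = 0, k = 0;
    while (i < n && j < m) {
        if (a[i] <= b[j])       /* BB computing the branch decision */
            out[k++] = a[i++];  /* BB on the taken ('true') path    */
        else
            out[k++] = b[j++];  /* BB on the 'false' path           */
    }
    while (i < n) out[k++] = a[i++];  /* drain the remainders */
    while (j < m) out[k++] = b[j++];
}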
Imperfect Loop is another typical control flow form, characterized by nested loops with computations present in the outer loop bodies. It is a common feature in scientific computing and finite element analysis, particularly in applications such as blocked matrix multiplication (GEMM) and computational fluid dynamics (CFD). Figure <ref> (b) shows a code snippet of Sparse Vec/Mat. Mult. (SPMV), where the inner loop (in green) executes block-size times for each single execution of the outer loop (in yellow). Different nested loops have varying BB execution frequencies, which can cause an imbalanced pipeline and poor PE utilization in existing PE execution models.
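Likewise, the hypothetical CSR-style SPMV routine below illustrates the imperfect-loop shape: the outer BB both performs computation and produces the bound over which the inner BB iterates, so the two BBs execute at very different frequencies.

/* Hypothetical CSR SPMV with computation in the outer loop body. */
void spmv_csr(int n_rows, const int *row_ptr, const int *col_idx,
              const float *val, const float *x, float *y) {
    for (int row = 0; row < n_rows; row++) {
        int start = row_ptr[row];         /* outer BB: runs once per row      */
        int end   = row_ptr[row + 1];     /* its result bounds the inner loop */
        float acc = 0.0f;
        for (int k = start; k < end; k++) /* inner BB: (end - start) times    */
            acc += val[k] * x[col_idx[k]];
        y[row] = acc;
    }
}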
§.§ Control Flow Handling Inefficiency in von Neumann PE
The von Neumann PE's execution model exhibits two limitations in handling control flow: first, it is unable to autonomously initiate configuration changes in other PEs; second, there exists no direct channel for the transmission of control information between PEs. Consequently, in the face of Branch Divergence, von Neumann PEs typically resort to one of two implementation approaches.
Branch Divergence: Switch Configuration
The first method (shown in the left half of Figure <ref> (c)) implements the branch by switching PE configurations in the time dimension. Specifically, the branch target PEs must wait for the control flow information and then switch configurations. Because the branch PE cannot autonomously modify the control information of the target PEs, and there is no direct channel for control flow transmission between them, the branch PE is constrained to convey control information indirectly through an external Centralized Control Unit (CCU). This approach is clumsy: the branch PE transfers the control flow result to the CCU, and the CCU then replies to the branch target PE through the configuration network while the whole array is left idle.
Branch Divergence: Predication
The second approach, referred to as "Predication" (shown in the right half of Figure <ref> (c)), is more prevalent. It converts branches into distinct data paths in the spatial dimension by consuming additional PEs. The configurations for both branch targets are pre-configured in two target PE lanes, and the correct PE lane for the following BB is then selected from these two paths according to the branch result. However, the not-taken PEs are left idle; it would be preferable if this idle resource could be used by other kernels.
Imperfect Loop:
The CFG of an Imperfect Loop can be seen as a variation of Branch Divergence. BB 4 is responsible for making a branch decision, with one branch leading to BB 5 and the other to BB 2 and BB 3. As a result, the von Neumann PE typically employs the Predication method from Branch Divergence. A key point in the SPMV algorithm is that the outcome of BB 3 determines the loop boundary for BB 5, which resembles the Switch Configuration approach in Branch Divergence. In this case, BB 3 transmits the control flow to the Centralized Control Unit, which subsequently configures the loop generator of the inner loop. As shown in Figure <ref> (d), the von Neumann PE exhibits significantly low utilization across various stages, for reasons similar to the idle state under Branch Divergence.
§.§ Control Flow Handling Inefficiency in Data flow PE
The execution model of the dataflow PE exhibits a restriction in managing control flow. The constraint stems from the binding of control flow and data flow within tokens. This close-knit coupling in both the temporal and spatial dimensions restricts control flow transfer.
Branch Divergence:
We expose the challenge caused by the tight temporal coupling between control flow and data flow when dataflow PEs perform Branch Divergence, as shown in Figure <ref> (e). The concurrent arrival of both the tag (control flow information) and the data via the same channel at the same time necessitates a PE configuration stage as a consequence of data entry, leading to an explicit overhead for PE configuration. Unfortunately, this explicit overhead results in suboptimal utilization of the PE.
Imperfect Loop:
When executing an Imperfect Loop, dataflow PEs accentuate the challenges arising from the close coupling of control flow and data flow, which manifests in both the temporal and spatial domains, as shown in Figure <ref> (f). On the one hand, the synchronization of control flow and data flow naturally leads to a longer pipeline II, as elaborated in the preceding paragraph. On the other hand, the close spatial coupling of control flow and data flow means that control information transfer relying on the data path is frequently inflexible. For instance, in the absence of a direct control flow pathway between the second PE and the loop generator, control flow information must traverse the red data path, leading to pipeline idleness. Consequently, even though the dataflow PE has some autonomy, this inherent drawback results in a substantially reduced utilization rate.
§.§ Insights and Motivations
We can observe that both von Neumann PEs and dataflow PEs cause significant PE idleness, as shown in Figure <ref> (i)(j). This is mainly because (1) PEs cannot autonomously change the configuration of other PEs, and (2) control flow transmission between PEs is restricted in both the spatial and temporal dimensions: (i) spatially, control flow transmission is not direct, but is instead realized by switching configurations through centralized control units or by coupling it into the data flow (i.e., tags or predication); (ii) temporally, control flow transfer is also constrained, since the configuration stage is hard to overlap with computation when the configuration function and computation function are temporally tightly-coupled.
Facing intensive control flows, an ideal PE is expected to (1) be able to autonomously change the control information of other PEs, (2) incorporate a peer-to-peer control flow path to enable agile control information transmission, and (3) feature a temporally loosely-coupled control flow path to facilitate the concurrent execution of current-stage computation and next-stage configuration within the same stage, as shown in Figure <ref> (g)(h). To achieve this objective, a critical requirement is to decouple control flow handling and dataflow handling, which calls for a new architectural scheme. Our architecture is predicated on this insight, and the superior performance of our design relative to other state-of-the-art architectures is demonstrated in Table <ref>.
§ DESIGN
The current PE execution models expose a deficiency in control flow handling ability, making it imperative to deploy an autonomous, peer-to-peer, and temporally loosely-coupled control channel. We propose that in spatial architectures, control flow handling should be separated from the current hybrid designs that combine control flow handling with data flow handling. This introduces a new layer of abstraction, called the control flow plane of the spatial architecture, which facilitates the separate design and optimization of control flow handling for each PE. This approach represents the only viable means to swiftly reconfigure a set of PEs and achieve control coordination within the spatial architecture. We introduce Marionette, a spatial architecture design with a decoupled control flow plane and data flow plane. Specifically, we architect a control flow plane for existing spatial architectures, which incorporates three innovative features. This section presents our proposal for Marionette and organizes the discussion around its key features.
* What is the control flow plane of Marionette, and how does establishing a control flow plane realize an autonomous, peer-to-peer, and temporally loosely-coupled control flow handling capability?
* How is the Proactive PE Configuration used to achieve timely and computation-overlapped configuration through the Control Flow Sender, and what solutions does it offer for Branch Divergence and pipeline initiation?
* How does the Agile PE Assignment enhance the pipeline performance of Imperfect Loops through a refined Marionette scheduling strategy and Control Flow scheduler?
§.§ Control Flow Plane
Control flow plane abstraction in Marionette: As shown in Figure <ref> (d), to attain a fully decoupled control flow plane, we have encapsulated all control flow-related components, including the Controller, Control Network, Control FIFO, and the control flow part within PEs, within Marionette's control flow plane. The primary objective of the control flow plane is to establish a correlation between the CFG and the hardware implementation. In this process, the control flow is represented by instruction addresses, and a PE generates and sends new instruction addresses to other PEs to realize autonomous control flow handling. A cluster of PEs operating on a consistent instruction address can depict a BB. Similarly, Marionette's data flow plane encompasses components such as the data network, data SRAM, and the data flow part inside the PE. Upon receiving the corresponding configuration from the control flow part, the data flow plane is responsible for performing data flow computations and accessing memory, thus ensuring the realization of the DFG.
Execution model of Marionette PE: In order to achieve autonomous, peer-to-peer, and temporally loosely-coupled control flow handling within PEs, we decouple and optimize the control flow component of the conventional PE execution model. The Marionette PE execution model, illustrated in Figure <ref> (a), demonstrates the decoupling of the control flow part and data flow part. We design the micro-architecture of three control flow components: the Control Flow Scheduler, Control Flow Trigger, and Control Flow Sender, along with a corresponding ISA that enables independent control flow handling and ports. This permits the free transmission and receipt of control flow, unrestricted by the data flow within the PE.
Control Flow Trigger, shown in Figure <ref>, is the pivotal configuration unit of the Marionette PE framework. It is composed of two phases, namely the check phase and the configuration phase. The Control Flow Trigger is designed to sustain the configuration determined in the configuration phase until a fresh control input is detected during the check phase. This contrasts with data flow PE instructions, which are solely responsible for a single calculation. By virtue of the majority of PEs executing within the confines of the same basic block's producer-consumer pipeline, the Control Flow Trigger obviates the overhead of switching instructions.
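A hedged sketch of this two-phase behaviour, with assumed port and variable names, is as follows.

/* Control Flow Trigger sketch: hold a configuration until a new
 * instruction address arrives on the decoupled control port.     */
void trigger_step(int *live_addr, int ctrl_in, int ctrl_valid) {
    /* check phase: poll the control input */
    if (ctrl_valid && ctrl_in != *live_addr)
        *live_addr = ctrl_in;   /* configuration phase: switch config */
    /* otherwise keep live_addr unchanged: within one BB's
       producer/consumer pipeline, no instruction switching occurs */
}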
Autonomous control flow handling: As previously noted, the autonomy of the Marionette PE stems from its ability to generate instruction addresses for subsequent PEs. The configuration of the control flow part provides a range of instruction addresses.
Temporally loosely-coupled control flow handling:
Given that the Control Flow Scheduler, Control Flow Trigger, and Control Flow Sender on the control flow plane possess distinct instruction sets and execution procedures, the control flow and data flow are inherently decoupled. Consequently, it allows for overlapping of the next configuration phase with the current computation phase through Proactive PE Configuration, as detailed in Section <ref>. This approach facilitates the transfer of control and data flow between PEs, as demonstrated in Figure <ref> (b).
A peer-to-peer Control Network: To enable timely and flexible peer-to-peer control flow transfer with minimal area overhead, we design a control network based on the Benes network. This well-known rearrangeable non-blocking network <cit.> has a butterfly-shaped interconnection structure and far fewer node switches than a crossbar network, making it our design starting point due to its low area overhead and high flexibility (Figure <ref> (a)). However, it lacks broadcasting capabilities. To address this, we incorporate the Consecutive Spreading (CS) network <cit.>, which performs broadcast and has a smaller area overhead than cascading multiple same-sized networks. We present a CS-Benes network that connects PEs, control FIFOs, and the controller, providing configurable network output with a fixed connection and no arbitration during control transfers. Each path in the network contributes one element of throughput every cycle, and we reserve many extensible interfaces. Figure <ref> (c) displays the specific interface design of our CS-Benes network, and in Section <ref> we evaluate the control network's scalability.
§.§ Proactive PE Configuration
In the Marionette control flow plane design, the control flow and data flow are loosely-coupled in the time domain, which allows current-stage computation and next-stage configuration to execute within the same stage. To accelerate the configuration process of subsequent PEs, we introduce Proactive PE Configuration and develop the Control Flow Sender, which sends control information at the earliest possible time. The data flow part of the PE currently implements three distinct operating modes for the Control Flow Sender: DFG operator mode, branch operator mode, and loop operator mode, as shown in Figure <ref> (a). The Control Flow Sender performs Proactive PE Configuration in the DFG operator mode and loop operator mode, which hastens the transmission of control flow.
The Marionette PE's data flow part executes non-branch calculation operators in the DFG operator mode. This mode indicates that the current and subsequent PEs are in the same BB and share the same control flow. To expedite control flow transmission, the current PE's configuration is sent to the subsequent PE in advance, as soon as its own configuration completes. Consequently, when the current PE sends its data flow result to the subsequent PE, the configuration of the latter has already been completed. In contrast, the branch operator mode executes a branch (not loop) operator, indicating that the current and subsequent PEs are in different basic blocks with a control jump in between. Therefore, the control flow part must wait for the data flow calculation result to determine the control information of the subsequent PE, and no proactive control flow is transmitted. Lastly, in the loop operator mode, the data flow part executes the loop operator, and to ensure continuous pipeline generation, the loop configuration is maintained in advance.
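The three operating modes can be condensed into the hedged sketch below; the mode names follow the text, while the function signature and address variables are illustrative assumptions.

typedef enum { DFG_OP, BRANCH_OP, LOOP_OP } SenderMode;

/* Returns 1 when a control address is emitted this cycle. */
int control_flow_sender(SenderMode mode, int configured, int have_result,
                        int result, int addr_taken, int addr_fall, int *out) {
    switch (mode) {
    case DFG_OP:    /* same BB: forward the address as soon as this PE is
                       configured, before any data result is produced    */
        if (configured) { *out = addr_taken; return 1; }
        return 0;
    case BRANCH_OP: /* different BBs: must wait for the dataflow result  */
        if (have_result) { *out = result ? addr_taken : addr_fall; return 1; }
        return 0;
    case LOOP_OP:   /* hold the loop configuration in advance so the
                       pipeline keeps initiating                         */
        *out = addr_taken; return 1;
    }
    return 0;
}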
As shown in Figure <ref> (b), the execution of three Marionette PEs is demonstrated under Branch Divergence, where one PE is in branch operator mode, and others are in DFG operator mode. The Proactive PE Configuration feature allows Marionette PE to achieve a branch target PE utilization rate that is similar to that of an ideal PE. Moreover, Figure <ref> (c) illustrates the functionality of the PE as a loop generator, which is responsible for continuously generating the loop pipeline. (This illustrates the method for minimizing Pipeline II. In reality, Pipeline II is configurable.)
§.§ Agile PE Assignment
Leveraging the Marionette control flow plane architecture, the PE exhibits autonomous, peer-to-peer, and temporally loosely-coupled control flow handling capabilities, aided by a dedicated loop operator that regulates the pipeline II. This serves as the foundation for realigning imbalanced loop pipelines, and the Agile PE Assignment feature we propose enhances PE utilization in Imperfect Loops. The feature encompasses a refined Marionette scheduling strategy and a Control Flow Scheduler.
The Marionette scheduling algorithm is shown in the upper left of Figure <ref>. The frequency of BB execution varies among nested loop levels. Thus, we establish the mapping at each loop level and construct the pipeline with BB granularity. Once all BBs in the current loop level have been scheduled and unassigned PEs remain, we reshape (time-extend) the mapping of both the current layer BB and the inner layer BB, as they satisfy control dependencies in the current state. Time Extended mapping is a widely adopted technique in compilation <cit.>, which entails the folding of the initial spatial domain mapping into the temporal domain, thereby reducing PE resources while also increasing the II. However, reshaping may result in idle PEs due to the diverse DFG shapes of BBs. To address this issue, we select a mapping scheme that minimizes PE waste and expand the original mapping to generate the mapping result of the current loop level. The reshaping scheme, PE waste, and scheduling results of the three-layer nested loop algorithm are illustrated in the upper right of Figure <ref>.
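One plausible reading of this strategy is sketched below in C-style pseudocode; the helper names (map_bbs, time_extend_min_waste, and so on) are hypothetical, and the real algorithm may differ in detail.

typedef struct Mapping Mapping;             /* placement + II, opaque    */
Mapping *map_bbs(int level);                /* BB-granularity mapping    */
Mapping *time_extend_min_waste(Mapping *m); /* fold space into time,     */
                                            /* choosing the least waste  */
int pes_used(const Mapping *m);
int pes_free(void);
void commit(Mapping *m);                    /* grow the overall mapping  */

void schedule_nested_loops(int innermost, int outermost) {
    for (int level = innermost; level >= outermost; level--) {
        Mapping *m = map_bbs(level);
        while (pes_used(m) > pes_free())    /* mapping does not fit:     */
            m = time_extend_min_waste(m);   /* raise II, save PEs        */
        commit(m);
    }
}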
The lower part of Figure <ref> showcases the Agile PE Assignment through Marionette timeline diagrams. The Marionette control flow plane allows for a highly flexible construction of the BB pipeline, encompassing adaptable PE resources, startup time, and pipeline II. This results in significantly enhanced PE utilization. To collect the control information generated by the outer BB pipeline, we design the Control Flow Scheduler and Control FIFOs. When the inner loop BB completes a round of loop iterations, it utilizes the pre-collected outer loop BB's control information to determine whether to initiate the next loop, thereby avoiding frequent configuration switch to the outer loop BB. Furthermore, the Control Flow Arbiter inside the Control Flow Scheduler possesses the capability to arbitrate between the current execution configuration and input control, enabling dynamic adjustment of the execution priority among BBs with varying levels of nested loops.
§.§ Programming and Compilation
The process of programming Marionette entails several tasks: (1) Annotating the branches and loop statements of the algorithm with #pragma tags, (2) Extracting and analyzing the control data flow graph (CDFG), and (3) Mapping the control flow and data flow portions to the corresponding planes of Marionette, while imposing constraints on memory access and communication. To elucidate the mapping of the program onto Marionette, we employ an example from Figure <ref>. Figure <ref> (a) depicts the original kernel's C code, while Figure <ref> (b) illustrates the extracted CDFG. Subsequently, Figure <ref> (c) showcases the specific mapping onto Marionette's control flow and data flow planes, using the Marionette scheduling algorithm explained in Figure <ref>. The CDFG for Branch Divergence may yield a tree-like structure, and the same level BBs are mapped to the same PE lane to the extent possible. Furthermore, Imperfect Loops are partitioned into distinct mappings.
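As a hypothetical example of step (1), an SPMV kernel might be annotated as follows; the pragma spelling is an assumption, since the exact syntax is not specified here.

/* Hypothetical annotation style; "#pragma marionette ..." is assumed. */
void spmv_annotated(int n_rows, const int *row_ptr, const int *col_idx,
                    const float *val, const float *x, float *y) {
    #pragma marionette loop(outer)
    for (int row = 0; row < n_rows; row++) {
        float acc = 0.0f;
        #pragma marionette loop(inner)
        for (int k = row_ptr[row]; k < row_ptr[row + 1]; k++)
            acc += val[k] * x[col_idx[k]];
        y[row] = acc;
    }
}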
§ IMPLEMENTATION
Here we discuss the implementation of the hardware, software stack and simulator used in the evaluation. Figure <ref> shows an overview.
Hardware:
Our Marionette design is parameterizable (e.g., PE array size, FU type, port widths, memory size, etc.) and yields an architectural description shared with the software stack and simulator. Table <ref> shows the hardware parameters. We synthesized a prototype of the Marionette at 500MHz using the 28nm technology library.
Software Stack:
We use the annotated source code to generate an LLVM intermediate representation (IR), which represents low-level operations on data flow and control flow. An automatic tool examines LLVM's IR and generates several DFGs and one CFG based on the PE data flow plane and control flow plane capabilities, respectively. The final bitstream generation step converts CFG and DFG into configuration bitstreams according to the hardware model.
Simulator:
We have developed a cycle-accurate simulator. It uses the binary configuration file output by the compiler to verify the functional correctness of Marionette and to evaluate its performance.
§ EVALUATION METHODOLOGY
§.§ Comparison Methodology
We built a cycle-accurate simulator with the option to enable each innovative feature individually, so that the performance gain of each feature can be compared separately. The simulator optimistically offers high memory access flexibility.
First, we evaluate the performance of the Marionette PE (including Proactive PE Configuration) compared to the von Neumann PE and the dataflow PE. To verify the optimization effect on Branch Divergence and ensure a fair comparison, we do not consider the dedicated control network or Agile PE Assignment, and we unify the data network.
Second, we evaluate the peer-to-peer control network. We also conduct a DC synthesis experiment on the control network delay under different frequencies and network stages, and compare the normalized network area with state-of-the-art architectures.
In addition, we evaluate Agile PE Assignment to verify its optimization effect on Imperfect Loops. We compute the utilization improvement of the PEs that originally executed only the outer BBs, as well as the pipeline utilization, and analyze the relationship between the results of the peer-to-peer control network and Agile PE Assignment.
Finally, we built the performance models of Softbrain <cit.>, TIA <cit.>, REVEL <cit.> (15 systolic PEs, 1 tagged-dataflow PE), Riptide <cit.> (16 fully functional PEs and 25 control flow operators inside network) and Marionette with the simulator and normalized the computing fabric to the same size to compare the performance.
§.§ Benchmark
We selected a wide range of 13 benchmarks to evaluate our work. FFT, NW, Viterbi (VI), Merge-Sort (MS) and GEMM are from MachSuite <cit.>. ADPCM and CRC are from MiBench <cit.>. Hough Transform (HT) is from the HosNa suite <cit.>. We also selected LDPC Decode (LDPC) <cit.> and SC Decode (SCD) <cit.>. Some of the control flow characteristics of these benchmarks are qualitatively described in Table <ref>. Conv-1d (CO), Sigmoid (SI) and Gray Processing (GP) are simple single-layer loop applications, included to provide a fair comparison. Table <ref> shows the data sizes.
§ EVALUATION
We evaluate the performance improvement of each feature in Marionette. In addition, we conduct scalability experiments for the control network and compare the network with other work. Finally, we compare Marionette with state-of-the-art architectures and show that Marionette performs better on intensive control flow applications.
§.§ Advantages of Proactive PE Configuration
Figure <ref> shows the speedup of our Marionette PE with Proactive PE Configuration compared to the von Neumann PE and the dataflow PE. The results show that the Marionette PE outperforms the von Neumann PE by geomean 1.18× and up to 1.45× (Merge Sort). Moreover, it outperforms the dataflow PE by geomean 1.33× and up to 1.76× (GEMM). We also separately count the proportion of operators that follow a branch; this ratio exposes the utilization waste caused by the static mapping of the von Neumann execution model. Merge Sort has the highest ratio of PEs subsequent to branches, so Proactive PE Configuration saves the most PE resources there. Due to the pipeline II, the dataflow PE still performs poorly even though it has some flexibility.
§.§ Advantages of Control Network
As shown in Figure <ref>, the peer-to-peer control network leads to a shorter transfer delay of the control flow, yielding an average speedup of 1.14× and up to 1.36× (CRC). CRC, ADPCM, and Merge Sort are only partially pipelined; hence the overhead of control flow transfer is high and the speedup is apparent.
After adding the control network, we compare the network area overhead with state-of-the-art architectures. As shown in Table <ref>, to ensure a fair comparison, we normalize the computing fabric (Plasticine uses a 3-lane 4-stage PCU and a 4-stage SRAM-free PMU) and measure the network area (including the data network, memory network, and control network) and the ratio of network to computing fabric. Our network area is only 0.0118mm^2, which is 11.5% of the computing fabric. While each architecture may have a unique functional design with varying PE and network functions and sizes, we can infer from our experimental results that a peer-to-peer control flow network can alleviate the burden of using other network structures to transmit control flow. This, in turn, reduces the interconnection complexity of the original network and minimizes overhead.
We also evaluate the scalability of the control network by synthesizing different stages of the control network under various timing constraints. Figure <ref> shows the result. Higher frequencies and larger-scale fabrics increase network latency. However, we believe the small increase in network latency is acceptable because the data flow has more severe constraints than the control flow.
§.§ Advantages of Agile PE Assignment
As shown in Figure <ref>, Agile PE Assignment significantly improves the performance, which achieves an average speedup of 2.03× and up to 5.99×.
We further analyze the improvement of Agile PE Assignment in a fine-grained manner, considering both outer-BB PE utilization and pipeline utilization, in Figure <ref>. We only selected the multi-layer nested loop benchmarks in which the innermost loop can form a pipeline.
"Outer-BB PE utilization" pertains to those PEs initially assigned to solely execute the outer loop BB. By dynamically assignment, they can either join the construction of the outer loop pipeline or reconfigure them as inner loop pipelines, leading to an average improvement of 21.57×. Among them, GEMM forms a dense spatial pipeline structure and obtains a utilization rate of 134×.
Pipeline utilization is determined by the proportion of pipeline initiations to the overall number of executions; this ratio indicates the pipeline's level of idleness. Overall, Agile PE Assignment achieves an average of 1.54× improvement in pipeline utilization across the different benchmarks. Hough Transform, NW, SC Decode and GEMM are well suited because their outer BBs can generate more control flow. FFT and Viterbi have a data-dependent pipeline II and limit the practical pipeline to 33% (II=2). In general, the improvement is limited by the loop structure and by data dependencies between loops (LDPC).
Speedup comparison between control network and Agile PE Assignment: An interesting balance of speedup between the control network and Agile PE Assignment is shown in Figure <ref>. CRC, ADPCM, Merge Sort, and LDPC cannot be well pipelined; therefore, Agile PE Assignment does not create a significant acceleration, but the acceleration from the control network is noticeable. For Viterbi, Hough Transform, SC Decode and GEMM, the control flow is comparatively regular, so the benefit of Agile PE Assignment is evident, while the proportion of control flow on the critical path diminishes and the acceleration effect of the control network declines.
§.§ Marionette Outperforms State-of-the-art architectures
We compare Marionette with other architectures; the results are shown in Figure <ref>. For non-intensive control flow benchmarks, all architectures have similar performance except for TIA, which has a longer pipeline II (dataflow PE). A single BB inside a loop structure is particularly suitable for constructing balanced pipelines, and the innovative features of Marionette do not degrade performance for non-intensive control flow applications. For intensive control flow benchmarks, TIA and Softbrain have similar performance. On average, Marionette's speedup is 2.88× that of Softbrain, 3.38× that of TIA, 1.55× that of REVEL, and 2.66× that of RipTide. In some benchmarks such as Viterbi, Hough Transform, SC Decode and GEMM, the REVEL execution model is comparable to Agile PE Assignment, so its relative speedup is better there. However, Agile PE Assignment retains a clear advantage overall because it is more flexible.
§ RELATED WORK
Spatial architecture taxonomy by execution model: We divide the PEs of SAs and reconfigurable spatial architectures into von Neumann PEs and dataflow PEs according to their execution modes. Table <ref> lists some of them.
Von Neumann PEs only passively execute configurations according to the instruction sequence. Some SAs <cit.> are configured by a unified controller (main processor or co-processor), and some use counters <cit.> or finite state machines <cit.> to control the order of instructions; both have controllers that construct sequential instruction flows. To satisfy certain dynamic properties, some von Neumann PEs request configurations from the processor <cit.>.
A dataflow PE determines its execution state according to the input data, which selects the configuration to execute <cit.>. In essence, this is out-of-order execution of instructions.
The Marionette PE departs from both the von Neumann PE and the dataflow PE: it decouples the configuration process through the control flow plane to achieve timely, proactive configuration, which did not exist before.
Dedicated Control Network Design: Some SAs add control bits to tag data with additional functions, such as SPU <cit.>. In these designs the control signal is coupled with the data network, which cannot provide control flow flexibility. The DRIPS control network <cit.> is essentially a configuration network-on-chip (NoC), not a control network for the control flow characteristics we target. RipTide <cit.> moves most control operations into the network, but the transfer is slow and inflexible, and the data and control information in the network are still coupled. Overall, Marionette has the first independent control network designed for control flow from the perspective of the control plane.
Spatial pipelines on multiple BBs: Most SAs support spatial pipelines of innermost loops. Some SAs support spatial pipelines of different BBs but limit their execution resources. FIFER <cit.> restricts different BBs to pipelining on different computing fabrics. REVEL <cit.> restricts the innermost loop to pipelining on a systolic computing fabric and the outer loop BBs to pipelining on only a few dataflow PEs. The mismatch between the number of pipeline operators and the fixed execution resources can lead to PE underutilization and performance bottlenecks. In our execution model, the BB pipelines are dynamically balanced by the control flow. DRIPS <cit.> balances pipelines passively: a controller sets a fixed time window to obtain the current pipeline state and resends the configuration through the configuration NoC. This passive, centralized balancing approach leads to pipeline pauses. Our dynamically balanced spatial pipeline is active and decentralized, and thus achieves better pipeline balancing.
§ CONCLUSION
This work describes Marionette, a spatial architecture with a decoupled, explicitly designed control flow plane and three corresponding innovative features. We developed the full stack of Marionette (ISA, compiler, simulator, RTL) and demonstrated that, in a variety of challenging control-intensive applications, Marionette outperforms the state-of-the-art spatial architectures Softbrain, TIA, REVEL, and RipTide by geomean 2.88×, 3.38×, 1.55×, and 2.66×.
polybench
M. A. Abella-González, P. Carollo-Fernández, L.-N. Pouchet, F. Rastello, and G. Rodríguez, "Polybench/python: Benchmarking python environments with polyhedral optimizations," in Proceedings of the 30th ACM SIGPLAN International Conference on Compiler Construction (CC 2021), New York, NY, USA: Association for Computing Machinery, 2021, pp. 59–70. [Online]. Available: https://doi.org/10.1145/3446804.3446842
2018PX-CGRA
O. Akbari, M. Kamal, A. Afzali-Kusha, M. Pedram, and M. Shafique, "PX-CGRA: Polymorphic approximate coarse-grained reconfigurable architecture," in 2018 Design, Automation & Test in Europe Conference & Exhibition (DATE), IEEE, 2018, pp. 413–418.
1970CFG
F. E. Allen, "Control flow analysis," ACM Sigplan Notices, vol. 5, no. 7, pp. 1–19, 1970.
arikan2009channel
E. Arikan, "Channel polarization: A method for constructing capacity-achieving codes for symmetric binary-input memoryless channels," IEEE Transactions on Information Theory, vol. 55, no. 7, pp. 3051–3073, 2009.
bae2018auto
I. Bae, B. Harris, H. Min, and B. Egger, "Auto-tuning CNNs for coarse-grained reconfigurable array-based accelerators," IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 37, no. 11, pp. 2301–2310, 2018.
torng2020dac
R. Bahr, C. Barrett, N. Bhagdikar, A. Carsello, R. Daly, C. Donovick, D. Durst, K. Fatahalian, K. Feng, P. Hanrahan, T. Hofstee, M. Horowitz, D. Huff, F. Kjolstad, T. Kong, Q. Liu, M. Mann, J. Melchert, A. Nayak, A. Niemetz, G. Nyengele, P. Raina, S. Richardson, R. Setaluri, J. Setter, K. Sreedhar, M. Strange, J. Thomas, C. Torng, L. Truong, N. Tsiskaridze, and K. Zhang, "Creating an agile hardware design flow," in 2020 57th ACM/IEEE Design Automation Conference (DAC), 2020, pp. 1–6.
balasubramanian2021compiler
M. Balasubramanian, "Compiler design for accelerating applications on coarse-grained reconfigurable architectures," Ph.D. dissertation, Arizona State University, 2021.
bandara2022revamp
T. K. Bandara, D. Wijerathne, T. Mitra, and L.-S. Peh, "REVAMP: A systematic framework for heterogeneous CGRA realization," in Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2022, pp. 918–932.
pact-xpp
V. Baumgarte, G. Ehlers, F. May, A. Nückel, M. Vorbach, and M. Weinhardt, "PACT XPP: A self-reconfigurable data processing architecture," The Journal of Supercomputing, vol. 26, no. 2, pp. 167–184, 2003.
2021hosna
N. N. Bavarsad, H. M. Makrani, H. Sayadi, L. Landis, S. Rafatirad, and H. Homayoun, "HosNa: A DPC++ benchmark suite for heterogeneous architectures," in 2021 IEEE 39th International Conference on Computer Design (ICCD), IEEE, 2021, pp. 509–516.
1962BENESNET
V. E. Beneš, "On rearrangeable three-stage connecting networks," The Bell System Technical Journal, vol. 41, no. 5, pp. 1481–1492, 1962.
ASH
M. Budiu, G. Venkataramani, T. Chelcea, and S. C. Goldstein, "Spatial computation," in Proceedings of the 11th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2004, pp. 14–26.
che2009rodinia
S. Che, M. Boyer, J. Meng, D. Tarjan, J. W. Sheaffer, S.-H. Lee, and K. Skadron, "Rodinia: A benchmark suite for heterogeneous computing," in 2009 IEEE International Symposium on Workload Characterization (IISWC), IEEE, 2009, pp. 44–54.
PADDI
D. C. Chen and J. M. Rabaey, "A reconfigurable multiprocessor IC for rapid prototyping of algorithmic-specific high-speed DSP data paths," IEEE Journal of Solid-State Circuits, vol. 27, no. 12, pp. 1895–1904, 1992.
chen2016eyeriss
Y.-H. Chen, J. Emer, and V. Sze, "Eyeriss: A spatial architecture for energy-efficient dataflow for convolutional neural networks," ACM SIGARCH Computer Architecture News, vol. 44, no. 3, pp. 367–379, 2016.
2014FPCA
J. Cong, H. Huang, C. Ma, B. Xiao, and P. Zhou, "A fully pipelined and dynamically composable architecture of CGRA," in 2014 IEEE 22nd Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), IEEE, 2014, pp. 9–16.
dataflow1983tagged
D. E. Culler, "Dataflow architectures," Annual Review of Computer Science, vol. 1, no. 1, pp. 225–253, 1986.
2019spu
V. Dadu, J. Weng, S. Liu, and T. Nowatzki, "Towards general purpose acceleration by exploiting common data-dependence forms," in Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2019, pp. 924–939.
MP-CGRA
J. Deng, L. Zhang, L. Wang, J. Liu, K. Deng, S. Tang, J. Gu, B. Han, F. Xu, L. Liu, S. Wei, and S. Yin, "Mixed-granularity parallel coarse-grained reconfigurable architecture," in 2022 59th ACM/IEEE Design Automation Conference (DAC), 2022, pp. 1–6.
1974DFG
J. B. Dennis, J. B. Fosseen, and J. P. Linderman, "Data flow schemas," in International Symposium on Theoretical Programming, Springer, 1974, pp. 187–216.
2018i-DPsCGRA
L. Duch, S. Basu, M. Peón-Quirós, G. Ansaloni, L. Pozzi, and D. Atienza, "i-DPs CGRA: An interleaved-datapaths reconfigurable accelerator for embedded bio-signal processing," IEEE Embedded Systems Letters, vol. 11, no. 2, pp. 50–53, 2018.
2009TCPA
H. Dutta, D. Kissler, F. Hannig, A. Kupriyanov, J. Teich, and B. Pottier, "A holistic approach for tightly coupled reconfigurable parallel processors," Microprocessors and Microsystems, vol. 33, no. 1, pp. 53–62, 2009.
fan2018stream
X. Fan, D. Wu, W. Cao, W. Luk, and L. Wang, "Stream processing dual-track CGRA for object inference," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 26, no. 6, pp. 1098–1111, 2018.
2015nda
A. Farmahini-Farahani, J. H. Ahn, K. Morrow, and N. S. Kim, "NDA: Near-DRAM acceleration architecture leveraging commodity DRAM devices and standard memory modules," in 2015 IEEE 21st International Symposium on High Performance Computer Architecture (HPCA), IEEE, 2015, pp. 283–295.
gao2016hrl
M. Gao and C. Kozyrakis, "HRL: Efficient and flexible reconfigurable logic for near-data processing," in 2016 IEEE International Symposium on High Performance Computer Architecture (HPCA), IEEE, 2016, pp. 126–137.
2016HRLl
M. Gao and C. Kozyrakis, "HRL: Efficient and flexible reconfigurable logic for near-data processing," in 2016 IEEE International Symposium on High Performance Computer Architecture (HPCA), IEEE, 2016, pp. 126–137.
gobieski2021snafu
G. Gobieski, A. O. Atli, K. Mai, B. Lucia, and N. Beckmann, "SNAFU: An ultra-low-power, energy-minimal CGRA-generation framework and architecture," in 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA), IEEE, 2021, pp. 1027–1040.
gobieski2022riptide
G. Gobieski, S. Ghosh, M. Heule, T. Mowry, T. Nowatzki, N. Beckmann, and B. Lucia, "A programmable, energy-minimal dataflow compiler and architecture," in 2022 55th IEEE/ACM International Symposium on Microarchitecture (MICRO), IEEE, 2022, pp. 546–564.
piperench
S. C. Goldstein, H. Schmit, M. Budiu, S. Cadambi, M. Moe, and R. R. Taylor, "PipeRench: A reconfigurable architecture and compiler," Computer, vol. 33, no. 4, pp. 70–77, 2000.
2012dyser
V. Govindaraju, C.-H. Ho, T. Nowatzki, J. Chhugani, N. Satish, K. Sankaralingam, and C. Kim, "DySER: Unifying functionality and parallelism specialization for energy-efficient computing," IEEE Micro, vol. 32, no. 5, pp. 38–51, 2012.
2011dyser
V. Govindaraju, C.-H. Ho, and K. Sankaralingam, "Dynamically specialized datapaths for energy efficient computing," in 2011 IEEE 17th International Symposium on High Performance Computer Architecture (HPCA), IEEE, 2011, pp. 503–514.
2001mibench
M. R. Guthaus, J. S. Ringenberg, D. Ernst, T. M. Austin, T. Mudge, and R. B. Brown, "MiBench: A free, commercially representative embedded benchmark suite," in Proceedings of the Fourth Annual IEEE International Workshop on Workload Characterization (WWC-4), IEEE, 2001, pp. 3–14.
Xputer
R. W. Hartenstein, A. G. Hirschbiel, M. Riedmuller, K. Schmidt, and M. Weber, "A novel ASIC design approach based on a new machine paradigm," IEEE Journal of Solid-State Circuits, vol. 26, no. 7, pp. 975–989, 1991.
karunaratne2017hycube
M. Karunaratne, A. K. Mohite, T. Mitra, and L.-S. Peh, "HyCUBE: A CGRA with reconfigurable single-cycle multi-hop interconnect," in Proceedings of the 54th Annual Design Automation Conference (DAC), 2017, pp. 1–6.
karunaratne20194d
M. Karunaratne, D. Wijerathne, T. Mitra, and L.-S. Peh, "4D-CGRA: Introducing branch dimension to spatio-temporal application mapping on CGRAs," in 2019 IEEE/ACM International Conference on Computer-Aided Design (ICCAD), IEEE, 2019, pp. 1–8.
2007RICA
S. Khawam, I. Nousias, M. Milward, Y. Yi, M. Muir, and T. Arslan, "The reconfigurable instruction cell array," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 16, no. 1, pp. 75–85, 2007.
2007TFLEX
C. Kim, S. Sethumadhavan, M. S. Govindan, N. Ranganathan, D. Gulati, D. Burger, and S. W. Keckler, "Composable lightweight processors," in 40th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 2007), IEEE, 2007, pp. 381–394.
1988CSNET
C.-T. Lea, "A new broadcast switching network," IEEE Transactions on Communications, vol. 36, no. 10, pp. 1128–1137, 1988.
2015dynaspam
F. Liu, H. Ahn, S. R. Beard, T. Oh, and D. I. August, "DynaSpAM: Dynamic spatial architecture mapping using out of order instruction schedules," in 2015 ACM/IEEE 42nd Annual International Symposium on Computer Architecture (ISCA), IEEE, 2015, pp. 541–553.
2013REMUS
L. Liu, C. Deng, D. Wang, M. Zhu, S. Yin, P. Cao, and S. Wei, "An energy-efficient coarse-grained dynamically reconfigurable fabric for multiple-standard video decoding applications," in Proceedings of the IEEE 2013 Custom Integrated Circuits Conference (CICC), IEEE, 2013, pp. 1–4.
2017HReA
L. Liu, Z. Li, C. Yang, C. Deng, S. Yin, and S. Wei, "HReA: An energy-efficient embedded dynamically reconfigurable fabric for 13-dwarfs processing," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 65, no. 3, pp. 381–385, 2017.
2006tartan
M. Mishra, T. J. Callahan, T. Chelcea, G. Venkataramani, S. C. Goldstein, and M. Budiu, "Tartan: Evaluating spatial computation for whole program execution," ACM SIGARCH Computer Architecture News, vol. 34, no. 5, pp. 163–174, 2006.
Nguyene2020pipette
Q. M. Nguyen and D. Sanchez, "Pipette: Improving core utilization on irregular applications through intra-core pipeline parallelism," in 2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2020, pp. 596–608.
2021fifer
Q. M. Nguyen and D. Sanchez, "Fifer: Practical acceleration of irregular applications on reconfigurable architectures," in MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture, 2021, pp. 1064–1077.
nicol2017coarse
C. Nicol, "A coarse grain reconfigurable array (CGRA) for statically scheduled data flow computing," Wave Computing white paper, pp. 1–9, 2017.
2017WaveDPU
C. Nicol, "A coarse grain reconfigurable array (CGRA) for statically scheduled data flow computing," Wave Computing white paper, pp. 1–9, 2017.
nie2023flexmoe
X. Nie, X. Miao, Z. Wang, Z. Yang, J. Xue, L. Ma, G. Cao, and B. Cui, "FlexMoE: Scaling large-scale sparse pre-trained model training via dynamic device placement," arXiv preprint arXiv:2304.03946, 2023.
2017stream
T. Nowatzki, V. Gangadhar, N. Ardalani, and K. Sankaralingam, "Stream-dataflow acceleration," in 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), IEEE, 2017, pp. 416–429.
TIA
A. Parashar, M. Pellauer, M. Adler, B. Ahsan, N. Crago, D. Lustig, V. Pavlov, A. Zhai, M. Gambhir, A. Jaleel, R. Allmon, R. Rayess, S. Maresh, and J. Emer, "Triggered instructions: A control paradigm for spatially-programmed architectures," in Proceedings of the 40th Annual International Symposium on Computer Architecture (ISCA '13), New York, NY, USA: Association for Computing Machinery, 2013, pp. 142–153. [Online]. Available: https://doi.org/10.1145/2485922.2485935
2009PPA
H. Park, Y. Park, and S. Mahlke, "Polymorphic pipeline array: A flexible multicore accelerator with virtualized execution for mobile multimedia applications," in Proceedings of the 42nd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2009, pp. 370–380.
pellauer2019buffets
M. Pellauer, Y. S. Shao, J. Clemons, N. Crago, K. Hegde, R. Venkatesan, S. W. Keckler, C. W. Fletcher, and J. Emer, "Buffets: An efficient and composable storage idiom for explicit decoupled data orchestration," in Proceedings of the Twenty-Fourth International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2019, pp. 137–151.
2017plasticine
R. Prabhakar, Y. Zhang, D. Koeplinger, M. Feldman, T. Zhao, S. Hadjis, A. Pedram, C. Kozyrakis, and K. Olukotun, "Plasticine: A reconfigurable architecture for parallel patterns," in 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), IEEE, 2017, pp. 389–402.
2014machsuite
B. Reagen, R. Adolf, Y. S. Shao, G.-Y. Wei, and D. Brooks, "MachSuite: Benchmarks for accelerator design and customized architectures," in 2014 IEEE International Symposium on Workload Characterization (IISWC), IEEE, 2014, pp. 110–119.
richardson2008modern
T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge University Press, 2008.
2013T3
B. Robatmili, D. Li, H. Esmaeilzadeh, S. Govindan, A. Smith, A. Putnam, D. Burger, and S. W. Keckler, "How to implement effective prediction and forwarding for fusable dynamic multicore architectures," in 2013 IEEE 19th International Symposium on High Performance Computer Architecture (HPCA), IEEE, 2013, pp. 460–471.
2004trips
K. Sankaralingam, R. Nagarajan, H. Liu, C. Kim, J. Huh, N. Ranganathan, D. Burger, S. W. Keckler, R. G. McDonald, and C. R. Moore, "TRIPS: A polymorphous architecture for exploiting ILP, TLP, and DLP," ACM Transactions on Architecture and Code Optimization (TACO), vol. 1, no. 1, pp. 62–93, 2004.
morphosys
H. Singh, M.-H. Lee, G. Lu, F. J. Kurdahi, N. Bagherzadeh, and E. M. Chaves Filho, "MorphoSys: An integrated reconfigurable system for data-parallel and computation-intensive applications," IEEE Transactions on Computers, vol. 49, no. 5, pp. 465–481, 2000.
2016HARTMP
J. D. Souza, L. Carro, M. B. Rutzig, and A. C. S. Beck, "A reconfigurable heterogeneous multicore with a homogeneous ISA," in 2016 Design, Automation & Test in Europe Conference & Exhibition (DATE), IEEE, 2016, pp. 1598–1603.
DRP
M. Suzuki, Y. Hasegawa, Y. Yamada, N. Kaneko, K. Deguchi, H. Amano, K. Anjo, M. Motomura, K. Wakabayashi, T. Toi, and T. Awashima, "Stream applications on the dynamically reconfigurable processor," in Proceedings of the 2004 IEEE International Conference on Field-Programmable Technology (FPT), 2004, pp. 137–144.
2003wavescalar
S. Swanson, K. Michelson, A. Schwerin, and M. Oskin, "WaveScalar," in Proceedings of the 36th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO-36), IEEE, 2003, pp. 291–302.
2007wavescalar
S. Swanson, A. Schwerin, M. Mercaldi, A. Petersen, A. Putnam, K. Michelson, M. Oskin, and S. J. Eggers, "The WaveScalar architecture," ACM Transactions on Computer Systems (TOCS), vol. 25, no. 2, pp. 1–54, 2007.
2022drips
C. Tan, N. B. Agostini, T. Geng, C. Xie, J. Li, A. Li, K. J. Barker, and A. Tumeo, "DRIPS: Dynamic rebalancing of pipelined streaming applications on CGRAs," in 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA), IEEE, 2022, pp. 304–316.
tenstorrent2022web
Tenstorrent, "Software and Silicon in Serbia w/ Ljubisa Bajic and Jim Keller," Mar. 17, 2022. [Online]. Available: https://tenstorrent.com/research/software-and-silicon-in-serbia-w-ljubisa-bajic-and-jim-keller/
2021uecgre
C. Torng, P. Pan, Y. Ou, C. Tan, and C. Batten, "Ultra-elastic CGRAs for irregular loop specialization," in 2021 IEEE International Symposium on High-Performance Computer Architecture (HPCA), IEEE, 2021, pp. 412–425.
vasilyev2016evaluating
A. Vasilyev, N. Bhagdikar, A. Pedram, S. Richardson, S. Kvatinsky, and M. Horowitz, "Evaluating programmable architectures for imaging and vision applications," in 2016 49th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), IEEE, 2016, pp. 1–13.
2014sSGMF
D. Voitsechov and Y. Etsion, "Single-graph multiple flows: Energy efficient design alternative for GPGPUs," ACM SIGARCH Computer Architecture News, vol. 42, no. 3, pp. 205–216, 2014.
2018dMT-CGRA
D. Voitsechov, O. Port, and Y. Etsion, "Inter-thread communication in multithreaded, reconfigurable coarse-grain arrays," in 2018 51st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), IEEE, 2018, pp. 42–54.
von1993first
J. von Neumann, "First draft of a report on the EDVAC," IEEE Annals of the History of Computing, vol. 15, no. 4, pp. 27–75, 1993.
2016DORA
M. A. Watkins, T. Nowatzki, and A. Carno, "Software transparent dynamic binary translation for coarse-grain reconfigurable architectures," in 2016 IEEE International Symposium on High Performance Computer Architecture (HPCA), IEEE, 2016, pp. 138–150.
2020REVEL
J. Weng, S. Liu, Z. Wang, V. Dadu, and T. Nowatzki, "A hybrid systolic-dataflow architecture for inductive matrix algorithms," in 2020 IEEE International Symposium on High Performance Computer Architecture (HPCA), IEEE, 2020, pp. 703–716.
PADDI2
A. K. Yeung and J. M. Rabaey, "A reconfigurable data-driven multiprocessor architecture for rapid prototyping of high throughput DSP algorithms," in Proceedings of the Twenty-Sixth Hawaii International Conference on System Sciences, vol. 1, IEEE, 1993, pp. 169–178.
REVIEW ARTICLE
A review of uranium-based thin films
R. SpringellaCorresponding author email: phrss@bristol.ac.uk, lottie.harding@bristol.ac.uk, E. Lawrence Brighta,b, D. A. Chaneya,b, L. M. Hardinga, C. Bella, R. C. C. Wardc and G. H. Landera,d
a School of Physics, University of Bristol, Tyndall Avenue, Bristol BS8 1TL.
b European Synchrotron Radiation Facility, 38040 Grenoble, France.
c Clarendon Laboratory, Oxford Physics, Parks Road, Oxford OX1 3PU, UK.
d European Commission, Joint Research Centre (JRC), Directorate for Nuclear Safety and Security, Postfach 2340, D-76125 Karlsruhe, Germany.
Received July 3, 2023, Accepted xx
Thin films based on silicon and transition-metal elements dominate the semi-conducting industry and are ubiquitous in all modern devices.
Films have also been produced in the rare-earth series of elements for both research and specialized applications.
Thin films of uranium and uranium dioxide were fabricated in the 1960s and 1970s, but there was little sustained effort until the early 2000s. Significant programmes started at Oxford University (transferring to Bristol University in 2011), and Los Alamos National Laboratory (LANL) in New Mexico, USA.
In this review we cover the work that has been published over the last ∼20 years with these materials.
Important breakthroughs occurred with the fabrication of epitaxial thin films of initially uranium metal and UO_2, but more recently of many other uranium compounds and alloys.
These have led to a number of different experiments that are reviewed, as well as some important trends.
The interaction with the substrate leads to differing strain and hence changes in properties.
An important advantage is that epitaxial films can often be made of materials that are impossible to produce as bulk single crystals.
Examples are U_3O_8, U_2N_3 and alloys of U-Mo, which form in a modified bcc structure.
Epitaxial films may also be used in applied research.
They represent excellent surfaces, and it is at the surfaces that most of the important reactions occur in the nuclear fuel cycle.
Examples include the fuel-cladding interactions, and the dissolution of fuel by water in the long-term storage of spent fuel.
To conclude, we discuss possible future prospects, examples include bilayers containing uranium for spintronics, and superlattices that could be used in heterostructures.
Such applications will require a more detailed knowledge of the interface interactions in these systems, and this is an important direction for future research.
Uranium, actinides, thin films, epitaxy
§ INTRODUCTION
Thin films are ubiquitous in modern technology. They form the basis of the semiconductor industry: from light emitting diodes to the millions of transistors in every single computer central processing unit. Digital memory technologies are similarly underpinned by thin films, from the spin-valve heterostructures in hard drive read heads to ferroelectric random-access memory. More recently, several rapidly developing, high-profile quantum computing architectures are based on thin films of superconducting aluminium. It is not unreasonable to argue that thin films have transformed the technology of the late 20th century, and continue to do so to this day. At the same time, for many years thin films have provided researchers with key insights into fundamental condensed matter physics. Examples in this area include the discovery of the integer and fractional quantum Hall effects in GaAs heterostructures <cit.> and oscillatory exchange coupling in magnetic/non-magnetic multilayers <cit.>. These discoveries earned Nobel prizes, but there are a host of other novel effects in thin film layers and heterostructures.
Sitting at the bottom of the periodic table, the actinides are defined by the presence of 5f electrons which give rise to a plethora of weird and wonderful physical properties <cit.> that vary significantly across the series as the nature of the 5f electrons changes from largely itinerant in Th and U, to almost fully localized in Am and beyond, with the notoriously complex Pu separating the two sub-series. The elements up to and including Pu exhibit a vast range of allotropes, whereas Am and beyond crystallise into a double hexagonal close-packed (dhcp) structure and behave akin to the heavier rare-earth metals. Likewise, superconductivity gives way to magnetism across the series. Plutonium and uranium also display two properties unique to single element materials: negative thermal expansion in Pu and ambient pressure charge-density modulations in U. The phenomena found in actinide containing compounds are no less fascinating and include heavy-fermion behaviour <cit.>, spin fluctuation states, large spin-orbit coupling, Jahn-Teller distortions, quadrupolar ordering <cit.>, piezoelectricity, and magnetostriction <cit.>, to name but a few.
The marriage of thin films and actinides provides a vast parameter space for experimental and theoretical exploration. There are distinct areas of study where real advances have already been made, and many exciting opportunities remain for the future, in both fundamental research and investigations of applied nuclear fuel and waste materials, driven by a renewed global appetite for advanced fuel materials for modern 21st-century nuclear reactor fleets. Fig. <ref> highlights the range and connectivity of the materials, properties and phenomena that can be found in the current body of literature. This approach offers some practical experimental and theoretical advantages over more traditional bulk materials, as well as opening new scientific avenues for study.
These sample systems have macroscopic surface areas (typically of the order 1 cm^2), with typical masses of hundreds of micrograms. This provides a basic advantage for active work, in that many facilities and institutions become accessible that otherwise would be restricted, and transport, handling, and storage of these samples are significantly easier than for their bulk counterparts. For example, a typical 1000 Å film of UO_2 would have an activity of only ∼1.5 Bq and would contain a comparable uranium mass to that found in the human body (∼100 μg).
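Those two figures are straightforward to verify. The short sketch below is a back-of-the-envelope check, assuming a 1 cm^2 film area, the bulk UO_2 density of 10.97 g/cm^3, and depleted uranium approximated as pure ^238U:

import math

AVOGADRO = 6.022e23               # atoms/mol
RHO_UO2 = 10.97                   # g/cm^3, bulk UO2 density (assumed)
M_UO2 = 238.0 + 2 * 16.0          # g/mol for (238)UO2
T_HALF_U238 = 4.468e9 * 3.156e7   # 238U half-life in seconds

area_cm2 = 1.0                    # assumed film area
thickness_cm = 1000e-8            # 1000 Angstrom

mass_uo2 = RHO_UO2 * area_cm2 * thickness_cm   # ~1.1e-4 g of UO2
mass_u = mass_uo2 * 238.0 / M_UO2              # uranium mass, ~97 micrograms
n_u = mass_u / 238.0 * AVOGADRO                # number of U atoms
activity_bq = math.log(2) / T_HALF_U238 * n_u  # decays per second

print(f"U mass: {mass_u * 1e6:.0f} micrograms, activity: {activity_bq:.1f} Bq")

Counting only the ^238U parent in this way gives ∼100 μg of uranium and ≈1.2 Bq; the residual ^234U present in depleted uranium brings the alpha activity up to roughly the ∼1.5 Bq quoted.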
Sample synthesis is rapid compared to typical bulk methods, not by mass, but by sample type. This means that one can study a range of compositions quickly, which is ideal for surveys of phase diagrams, alloys, additives etc. These techniques are good for controlling growth parameters in situ, which means that one can mimic surfaces and interfaces of relevant nuclear materials, or design multilayers/interfaces to explore fundamental 2D physics of 5f states. One can engineer the structure, composition, phase, and stoichiometry: one can grow amorphous materials, polycrystalline materials (controlling the grain size), and single crystals (modifying the crystal quality, strain fields etc.). It allows us to experimentally model many complex effects (such as radiation damage) in a much simpler way by more careful control of variables and removal of complexity from bulk systems. This connects much more directly with theoretical modelling in nuclear materials, where these idealised systems are able to feed into computational models and vice versa.
Epitaxial matching to a substrate crystal allows fine control of the crystalline growth. This raises the question of whether different allotropes of the films can be stabilised by using different substrates, which act as "guiding templates". As we shall see, this use of substrates to apply strain and alter the properties has already been done for α-uranium metal. Some success, also with uranium, has been achieved in preparing a hexagonal close-packed (hcp) structure using templating. Notably, the hcp structure does not exist in the bulk. If the interface between film and substrate is fully understood there is the possibility that new allotropes of bulk materials can be produced with unknown properties. The first challenge, of course, is to find the right substrate and method to prepare an epitaxial sample, but this search for new structures could clearly open exciting scientific possibilities.
Broadly, the power of thin films for fundamental physics or materials science can be categorised in terms of (a) surface and dimensional effects; (b) proximity effects; (c) strain effects. Some of these are already being exploited. The interface between the film and the substrate provides not only flexibility in a geometric sense, but also a pathway for electronic interactions. For example, the actinide elements have a large spin-orbit parameter, which is roughly dependent on Z^4, where Z is the atomic number, and this parameter is important for spintronic applications. Preparing and exploiting such samples is clearly a considerable challenge, but understanding the interfaces is the primary one.
New physics can be expected, such as topological ground states. Such states have already been predicted in 2012 <cit.> for Pu and Am compounds. This comes from the fact that the 5f states in Pu are almost exactly at the boundary between itinerant (before Pu) and localized (after Pu), but this condition can also be found for uranium compounds, e.g. UNiSn <cit.> and UTe_2 <cit.>. So far, of course, the effort on actinide films has been confined almost entirely to using the elements Th and U, which can be handled without difficulty in most laboratories. However, in the longer term, facilities that can handle actinides up to at least Cm can be envisaged. There are huge advantages in working with small quantities of these elements, and much that could be discovered, especially about Pu, an element that has six allotropes in the solid state before it melts <cit.>. In this respect it is worth noting that the only transuranium epitaxial films produced so far are those of NpO_2 and PuO_2 at Los Alamos National Laboratory <cit.>. Some of the science deduced from experiments on these samples is discussed below.
The 5f electrons are also the key features of many heavy-fermion compounds, including ferromagnetic superconductors containing uranium <cit.>, where it is commonly assumed that it is the hybridization of the conduction and 5f states that lead to the peculiar properties of these systems. In the famous case of URu_2Si_2 the physics of the system remains unresolved, despite a vast amount of both theory and experiment <cit.> since the discovery of superconductivity in this material in 1986. More recently, UTe_2 has attracted wide attention as a possible topological triplet superconductor <cit.>. The prize for the highest superconducting T_c (18 K) of any heavy-fermion material still goes to PuCoGa_5, which is mainly a mystery 20 years after its discovery <cit.>. All the studies referenced above have been performed on bulk samples. Usually, but not all, the experiments reported used single crystals. If epitaxial films of these materials, especially the heavy fermions, were available, further experiments could be easily envisaged.
The basic research motivation for producing actinide thin films is thus abundantly clear. But there is another motivation, equally important. Beyond their undoubted fundamental interest, understanding actinide compounds has substantial importance due to the property that all are radioactive and, among the 15 elements, one finds 7 fissile isotopes. From this subset, fissile uranium-235 in the chemical form of UO_2 powers the large majority of the world’s operational nuclear reactors, generating ∼10% of the world’s electricity, which is more than a quarter of the world’s low carbon electricity production <cit.>.
Although much is understood about UO_2 <cit.>, there are still questions to be answered, particularly relating to the surface and interface reactions and properties of UO_2. For example, the synthesis of fuel/clad interfaces opens up the possibility of designing experiments to test pellet-clad-interaction, and uranium metal/oxide interfaces can be used to investigate the behaviour of stored metal wastes. UO_2 surfaces can be used to investigate interactions with aqueous environments, simulating 'leakers' (split fuel pins during operation, giving rise to high temperature water and steam exposure) and intermediate longer term spent fuel storage scenarios. Doped UO_2 systems could pave the way for studies of modified fuel types to improve thermal conductivity, to improve structural degradation during operation, or to improve end-of-life behaviour.
In addition, there is intense interest in developing alternative actinide compounds to fuel reactors in a safer and more efficient way. These so-called “advanced technology fuels (ATF)” include uranium silicides, nitrides, metallic uranium alloys, thorium compounds, as well as other more exotic fuel designs. A campaign to qualify any new fuel composition in bulk form, and the subsequent studies of radiation behaviour, thermal properties, or interaction with coolant/storage media, are understandably intensive operations. Using thin films can shortcut many of the typical hurdles and provide a great deal of supporting information in a much shorter time. It is possible, for example, to synthesise a new fuel design and simulate corrosion behaviour in long-term storage to assess its feasibility, without ever having to embark on a full in-reactor fuel performance review.
Hopefully, it should now be clear that thin films, and particularly epitaxial films, have an important and irreplaceable role to play in advancing our understanding of the actinides and their compounds both from a fundamental aspect as well as those compounds of great practical importance to meeting our ongoing energy needs in a decarbonising world. In this review we will cover the growth methods and considerations for various uranium-based films, detail many of the key experiments conducted to date and what they have taught us already, before laying out a roadmap for the future, highlighting the scientific areas we feel hold the most promise and would benefit most from a thin film approach.
§.§ Early efforts (before ∼ 2000) on uranium-based films
Probably the first recorded use of thin films was by Steeb <cit.>, who demonstrated in 1961 that vapour deposition onto heated substrates such as MgO produced an epitaxial film of UO_2 with a thickness of ∼100 that could be further oxidized to U_4O_9. Further work on the structure of UO_2+x was done by electron microscopy at Stuttgart by Steeb et al. This was followed by work with electron microscopy by Navinsek <cit.> and Nasu et al. <cit.> using different substrates - the best being identified as LiF and NaF. They also observed fission tracks after the samples were irradiated in a reactor. These efforts seem to have diminished once suitable bulk single-crystal samples were produced and the quantitative study of the structure of UO_2+x using neutron diffraction was demonstrated. The use of neutrons allowed the positions of the light oxygen atoms to be deduced, and became a major tool in characterizing such systems <cit.>.
For uranium metal, the first production of thin films was reported by T. Gouder in 1993 <cit.> in Karlsruhe, who deposited monolayers of uranium onto various substrates to explore localisation effects in the uranium overlayer. The first effort to produce epitaxial thin films was reported by Molodtsov et al. in 1998 <cit.> in Dresden. The main objective of their work was to measure resonant photoemission from the surface of uranium <cit.>, and scanning tunneling spectroscopy <cit.>. As discussed later, in Section <ref>, a key difficulty in the interpretation of these studies is that there was no X-ray characterization of the samples, as it was not possible to cap the samples and remove them from the preparation chamber. Nonetheless, interest in different structural forms of uranium, as well as the surface layers, was stimulated by these experiments. Theoretically, Hao et al. <cit.> had earlier predicted that the surface 5f states in the bcc form would be more localized than in the α (orthorhombic) form. Later, Stojic et al. <cit.> predicted that such localization, and ordered magnetism, would even occur at the surface of α-U, but no evidence for this has been found.
In a series of experiments, a group in first Darmstadt and then Mainz in Germany grew epitaxial films of the heavy fermion compounds UPd_2Al_3 and UNi_2Al_3, which have hexagonal symmetry, with molecular-beam techniques and used heated (111) oriented LaAlO_3 (LAO) substrates <cit.>. Their interest was in transport measurements, as both systems show antiferromagnetic order, with superconductivity appearing at lower temperatures while the material remains antiferromagnetic. Tunneling spectroscopy was used to demonstrate the crucial role of the antiferromagnetic fluctuations in inducing the superconductivity in UPd_2Al_3 <cit.>, and measurements of the optical conductivity were also made <cit.>. Rather similar measurements were made on epitaxial films of UNi_2Al_3 <cit.>. At the same time one of the thin films of UPd_2Al_3 was used in a series of synchrotron experiments to show how the coherence of the x-ray beam, together with the large absorption at the uranium M_4 edge, allows information to be obtained on the spatial position of the scattering volume <cit.>. This technique is also discussed in Section <ref>, for more recent experiments on UO_2 epitaxial films.
In the 1990's there was also considerable interest in whether memory systems could be based on the magneto-optical Kerr effect (MOKE), and many different systems were studied. This effect requires a bulk ferromagnetic signal. Samples consisting of multilayers of amorphous UAs and elemental Co were produced and the MOKE measurements showed that the uranium had a magnetic moment at room temperature <cit.>. A more detailed experiment later took place <cit.> to measure the XMCD signal at the uranium M_4 edge in a sample of the form [UAs_80/Co_20]_12. The XMCD data confirmed a moment of ∼ 0.80 μ_B per U atom at low temperature, but this rapidly declined at higher temperatures. This study showed relatively poorly defined interfaces, with diffusion between the layers.
§ AN OVERVIEW OF THE GROWTH OF URANIUM-BASED FILMS
§.§ Deposition Techniques
Thin films, in a research sense, typically range from the Angstrom (Å) to the micron scale, and involve the controlled deposition of the material of interest onto a prepared surface of a chosen substrate. There is a range of chemical or physical processes that one can employ, and many other reviews and textbooks have dealt with this subject comprehensively <cit.>. Here we will focus on just the subset of those techniques that have been used for U-based deposition. It is worth noting that deposition of U has some specific considerations, which depend on the materials restrictions of a particular nation, and which centre on the basic radioactivity of the depleted-U starting material.
The choice of deposition method depends on the final application. Whether the aim is a fundamental study of basic physics or an applied nuclear materials investigation will influence the choice of material (metal, oxide, intermetallic etc.) and the required physical and chemical structures. This Section will try to provide a strategic roadmap for new and existing research groups who wish to utilise uranium deposition, by comparing and contrasting the most successful examples in the literature.
Physical vapour deposition (PVD) is by far the most frequently adopted technique in this field and can be generalised as the vaporisation of a starting material that is then condensed onto a substrate <cit.>. PVD methods are flexible, they can be used for metal, compound, and multilayer deposition and one can control stoichiometry, phase and crystalline quality, to some degree. The drawbacks are the need for large apparatus, high or ultra-high vacuum, and bulk solid starting materials, and that the deposited material is often highly energetic, so some additional thermalisation energy at the substrate position is often required.
Of the many PVD options, sputtering is the most prevalent and there are many examples of research groups employing this approach <cit.>. DC magnetron sputtering is commonly a UHV (<10^-9 mbar base vacuum) technique, which consists of discs/ingots of starting material (in these cases, either U metal or UO_2 ceramic, typically many centimetres in diameter) that are bombarded by a plasma of ionised noble gas, such as argon (pressures typically <10^-2 mbar). Note that ceramic targets require the use of pulsed-DC or RF sputtering techniques. Typical deposition rates vary from 0.1 to 2 Å/s. This has been used for U metal deposition <cit.>, bilayer spintronics <cit.>, multilayers <cit.> and intermetallics <cit.>.
A sophisticated modification to this technique is the use of triode sputtering, which employs a tungsten filament for electron emission to stabilise the plasma at the source <cit.>. This has the advantage of using much smaller starting quantities, so more exotic starting materials are accessible, and it makes more efficient use of material; the deposition `racetrack' produced by DC magnetron sputtering can yield efficiencies in the range of only 5%. However, the lateral homogeneity at the substrate position is not as good. Both of these methods can be adapted for reactive sputtering by feeding a small partial pressure of reactive gas into the chamber (pressures typically <10^-4 mbar), and this has been used to successfully grow oxides <cit.>, nitrides <cit.>, oxynitrides <cit.> and hydrides of U <cit.>.
For the simplest polycrystalline films, one has control over magnetron power, sputter gas pressure, target to substrate distance and substrate temperature. For reactively grown compounds, we have the gas partial pressure as an added lever, and for binary and even tertiary systems, the relative powers of the co-depositing magnetrons become the crucial control mechanism for the formation of specific phases. Even then, it may be difficult to achieve phase pure materials. However, here one can utilise epitaxial matching to `lock-in' desired phases, which prove elusive, even in bulk systems: the line compounds of the U-Si system for example <cit.>.
There are important differences with well-known literature examples of epitaxial thin film systems, which have extremely close lattice matches and result in high-quality crystals, often of single domains with mosaics below 0.05^∘ (where the mosaic width is the rocking curve full width at half maximum [FWHM]) <cit.>. Most U-based substrate matches are far from ideal, but are required to stabilise epitaxy of the many compounds, phases and orientations described in this review, hence they can have more complicated crystallographic domains and have mosaics from 0.1 - 2^∘ <cit.>. Also, the multilayers are not of the same quality as some of the famous rare-earth or tunnel junction heterostructures and superstructures <cit.>, although these standards could be possible for UO_2/ThO_2 on CaF_2, for example.
Pulsed laser deposition (PLD) is also a vacuum-based technique, but where the vaporisation is typically performed by a high-power pulsed excimer laser. A plasma `plume' then carries energetic material towards a substrate, which can then be thermalised to aid the crystalline growth of the film. It is also possible to deposit reactively, to make oxides, nitrides etc. The most notable PLD work, at Los Alamos National Laboratory, used a KrF excimer laser (λ=248 nm, repetition rate 1-5 Hz) in varying partial pressures of oxygen, employing substrate heating and rotation, to stabilise UO_2, U_3O_8 and UO_3 oxides of uranium <cit.>. Although there is little work in the literature using PLD for U-based deposition, it has many of the same attributes as sputtering. It could be used for metal deposition, and for multilayer and heterostructure synthesis. It typically requires bulk starting materials; the deposition rates are similar and crystalline quality is comparable. However, binary systems, such as silicides and carbides, might be more complicated, whereby sputtering or an evaporation technique could be more suitable. Also, the energetics of the deposition process must be controlled sufficiently to prevent the insertion of defects into the growing film.
Molecular beam epitaxy (MBE) is a UHV-based technique that uses Knudsen effusion cells or direct e-beam heating (for the more refractory materials such as U) onto small quantities of starting material to provide gradual sublimation. The energetics of this process are lower, and the atoms have longer mean free paths. This results in controlled deposition rates of fractions of an Å per second and near layer-by-layer growth. MBE has an advantage in the ability to monitor the growing surface in real time using electron diffraction (RHEED – Reflection High Energy Electron Diffraction) without the requirement for differential pumping. Gas sources can be added to grow oxides, etc, often with cracker stages to produce confined beams of highly-reactive atomic oxygen, or ozone. This is an expensive technique that is focussed on the synthesis of high-quality epitaxial films and is more commonly found in the manufacture of semiconductor devices and magnetic memory; GMR and magnetic tunnel junctions, for example <cit.>, although MBE has also been used extensively to grow epitaxial rare-earth layers and superlattices <cit.>. This technique can be used to deposit metals, oxides and more complex ternary/quaternary intermetallic materials. Some of the early work at Darmstadt investigated the UPd_2Al_3 and UNi_2Al_3 systems <cit.> as epitaxial thin films. MBE requires a great deal of investment and is not good for high throughput studies, i.e. for fast exploration of phase diagrams or a wide range of compositions, but is useful for particular studies where epitaxial quality is crucial.
Aside from PVD there are also chemical methods of deposition, which generally avoid the requirement for bulk solid U-metal or U compounds, and although the scope of these has been limited in terms of the range of U-based materials, they feature prominently in the literature. Chemical vapour deposition (CVD) is a vacuum deposition technique that encompasses an enormous range of materials; it involves the reaction or decomposition of volatile precursors onto a substrate wafer. In terms of U-based materials, groups in Cologne <cit.> and UC Berkeley <cit.> have successfully used CVD and magnetic-field-assisted CVD to make thin films of UO_2, employing the decomposition of U(IV) amidate and reduction of uranium hexakis-tert-butoxide, respectively. These methods are yet to yield epitaxial films but could be useful for investigating UO_2 grain morphologies in polycrystalline samples. The deposition rates are higher than most PVD techniques and these techniques could be used for efficient growth of >μm thick layers. Although studies have so far been focused on uranium oxides, it may also be possible to modify the methods to deposit metals, nitrides and carbides.
The sol-gel process has also been used to prepare UO_2 films <cit.>. This is most prevalent in metal oxide fabrication, such as TiO_2, where a colloidal solution or ‘sol’ is deposited onto a substrate, then becomes a two-phase wet gel, and liquid is removed slowly to allow for densification and eventual film synthesis. In the case of UO_2, uranyl acetate in methanol and acetic acid were heated together to form the precursor sol, which was then dropped onto substrates that were spun to coat evenly. A final heating stage was used to drive off remaining liquid and form a dense film. This method has some drawbacks in terms of crystal structure control and synthesis of epitaxial films. However, it is possible to synthesise >μm thick layers, and to dope the uranium oxide with typical semiconductor dopant concentrations, which can be very difficult to achieve in most PVD methods.
Polymer assisted deposition (PAD) is another chemical solution method, which is common for metal oxide thin film growth <cit.>. Precursors are made using metal ion-coordinated polymers. In this way, the polymer properties, such as viscosity, can be modified to control the metal ion distribution to form homogeneous films. U-based PAD has been successfully used by groups at Los Alamos National Laboratory <cit.> for more than a decade. They have reported epitaxial synthesis of a number of U-oxides, UN_2 and UC_2 <cit.>, as well as the growth of PuO_2 epitaxial films <cit.>. In the case of uranium oxides, for example, an aqueous solution of UO_2(NO_3)_2 is added to a polymer, which is then spin coated onto carefully chosen and prepared substrates with the desired lattice matches. These are then annealed in the presence of oxygen at 1000 ^∘C to form epitaxial films. This has similar advantages to other chemical processes, where film thicknesses are routinely larger than those made with PVD, but clear progress has been made with the PAD technique for uranium, such that epitaxial films of similar quality to those produced by the best PVD methods are possible.
There are a number of options when embarking on a new research programme in this field. In terms of overall strategy, one needs to consider some important questions. Firstly, what sort of material(s): pure metal, compound, heterostructure etc.? This will be the first limiting step in terms of synthesis choice. What amount of starting material is required/is accessible? Does the starting material need to be a compound first, or is it better to make the composition during the deposition process? Often, even if the correct composition is present in the starting material, it is necessary to adjust one or more of the components during growth. For example, a UO_2 target material will require additional oxygen to reach stoichiometry. How many samples are required? Some techniques are more suited to high throughput than others.
For metallic systems, especially those containing U, oxidation of the surface (which can be the entire depth of the film in some cases) is a major problem. This means that a capping layer is necessary – typical materials are aluminium, chromium, tungsten, etc.; however, the authors recommend niobium, as this layer develops a thin passivating oxide layer of approximately 20 Å. Capping layers can sometimes be necessary for oxides also, as these will become hyper-stoichiometric over time and this can affect physical properties. In some cases, buffer layers are used to provide a chemical barrier and to mediate the lattice mismatch between substrate and film. The requirement of extra layers then has consequences for the choice of synthesis method.
Typical research projects/programmes involve preparation of starting material and substrate surfaces; this could be chemical cleaning, sonication, Ar plasma cleaning, or a combination of these steps. Compositions of desired materials are usually tested first on substrate standards, such as glass or silicon, and where polycrystalline samples are sufficient, this can provide the basis for the remainder of the synthesis. However, where epitaxy is required, the strategy is more refined, as careful substrate matching is required, with considerations of temperature, thermal expansions of the different materials and possible interfacial mixing. It is not always easy to predict which substrate to use.
§.§ Characterisation Techniques
For most chemical-based synthesis routes the characterisation takes place ex-situ, once the films have been made. However, for all of the PVD techniques described, some in-situ characterisation is routinely used during the deposition process. For more detailed investigations of the physical structure of the films it is more likely that an ex-situ measurement will be employed, and this will depend on the length-scale of interest and whether lateral or longitudinal information is important. Here, we present the most common techniques in more detail with particular recommendations for U-based materials.
The deposition rate is in general the primary characterisation parameter for thin films as this determines the layer thicknesses, and there are many instruments and techniques that are found in the literature. The method of choice will depend on whether in- or ex-situ measurements are needed, the thickness regime, and the required precision; of course, it is often the case that a combination of methods is preferred. For thickness determination within deposition chambers/vessels, quartz crystal microbalances (QCM) are commonly employed, which exploit the Sauerbrey equation <cit.>, relating the frequency of oscillation of a piezoelectric crystal (quartz for example) with the mass deposited, i.e. as the thickness increases and more mass is deposited, the frequency decreases. The resolution is typically 1 Hz for resonant frequencies in the MHz range, which means that this technique has approximately, monolayer sensitivity. One major advantage is that this can be used during the growth process. However, this means that it has to be mounted in the deposition system itself, which can be complicated, and for a precise measurement it must be at the same position as the substrate, which can be spatially restrictive. Less common methods of thickness determination during growth include laser interferometry <cit.>, RHEED, utilising the oscillating intensity of the specular diffraction spot <cit.>, and ellipsometry <cit.>.
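As an illustration of how the Sauerbrey equation is applied in practice, the sketch below converts a measured frequency shift into a deposited thickness. The function name, the 5 MHz fundamental and the -20 Hz shift are invented for the example; only the AT-cut quartz constants are standard values.

import math

RHO_Q = 2.648      # g/cm^3, density of quartz
MU_Q = 2.947e11    # g cm^-1 s^-2, shear modulus of AT-cut quartz

def sauerbrey_thickness(df_hz, f0_hz, film_density_g_cm3):
    """Convert a QCM frequency shift df (Hz) to film thickness (Angstrom)."""
    # Sauerbrey: df = -2 f0^2 / sqrt(rho_q * mu_q) * (dm / A)
    dm_per_area = -df_hz * math.sqrt(RHO_Q * MU_Q) / (2.0 * f0_hz ** 2)
    return dm_per_area / film_density_g_cm3 * 1e8  # cm -> Angstrom

# Example: a -20 Hz shift on a 5 MHz crystal while depositing alpha-U
# (density ~19.1 g/cm^3) corresponds to roughly 1.8 Angstrom of metal.
print(f"{sauerbrey_thickness(-20.0, 5e6, 19.1):.1f} Angstrom")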
Both low energy electron diffraction (LEED) <cit.> and reflection high energy electron diffraction (RHEED) <cit.> are used to identify the crystal structure. It is possible to distinguish between polycrystalline, highly textured, and single crystal systems, and in the most advanced cases, even monitor strain as a function of growth. For MBE, electron diffraction can be acquired during growth, however, for PLD and sputtering, typically the synthesis process has to be paused to view the diffraction image, unless a double differentially pumped electron beam path is employed <cit.>. Fig. <ref> shows typical RHEED images from a single crystal lanthanum aluminate (LaAlO_3 or LAO) substrate and UO_2 epitaxial film <cit.>.
X-ray reflectivity (XRR) is used to probe the electron density profile of the film <cit.>, which gives information about the sample morphology; the thickness, the roughness at each surface/interface <cit.>, and the value of the electron density itself, which can be used to infer the composition of the film. The geometry is typically in a specular or longitudinal mode, where the incident and reflected angles are equal (incorporating any offset angle due to sample surface misalignment). Therefore, the wave-vector momentum transfer (usually written Q or q_z) is along the surface normal, with no sensitivity to lateral features. There are many freely available resources for modelling this reflectivity spectrum <cit.>, which use Parratt’s recursive method <cit.> and then employ a range of fitting algorithms in order to explore the parameter space and find the best global minimum.
Fig. <ref> shows two reflectivity curves for thin Si and U films, with data shown as the open black circles and the fits as solid orange and blue lines, respectively <cit.>. The oscillations are known as Kiessig fringes, which depend on the film thickness <cit.>; an approximate relationship between film thickness and fringe separation is highlighted in the figure. Models of the reflected intensity are often sensitive to changes in the layer thickness on the order of a fraction of an Å. However, one should note that for films greater than 1000 Å thick, these fringes become very small and are eventually too difficult to resolve. For a perfectly smooth film the intensity decays as 1/Q^4; surface and interface roughness cause this intensity to decay more rapidly <cit.>. The roughness is usually approximated as the root mean square of the thickness variation of a layer and appears as a Gaussian spread of electron density at an interface <cit.>. This means that interfacial roughness and interdiffusion manifest in the same way for specular XRR.
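The fringe-to-thickness relationship mentioned above can be written as t ≈ 2π/ΔQ_z, with Q_z = 4π sin(θ)/λ, so for two adjacent fringe maxima well above the critical angle the conversion is a one-liner. The angles below are hypothetical values chosen only to illustrate the arithmetic:

import math

WAVELENGTH = 1.5406  # Angstrom, Cu K-alpha

def qz(theta_deg):
    """Specular momentum transfer (1/Angstrom) at grazing angle theta."""
    return 4.0 * math.pi * math.sin(math.radians(theta_deg)) / WAVELENGTH

def thickness_from_fringes(theta1_deg, theta2_deg):
    """Thickness (Angstrom) from two adjacent Kiessig fringe maxima."""
    return 2.0 * math.pi / abs(qz(theta2_deg) - qz(theta1_deg))

# Hypothetical adjacent maxima at 1.00 and 1.09 degrees -> t ~ 490 Angstrom
print(f"{thickness_from_fringes(1.00, 1.09):.0f} Angstrom")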
The roughness also affects the amplitude of the fringes, as do the electron density differences between the layers. The overall electron density of the topmost layers in a sample affects the position of the critical angle, θ_C (shown in Fig. <ref>), where total external reflection ends and the x-rays first start to penetrate the film. There is often a reduction in intensity associated with the smallest angles, below θ_C, and this is known as a footprint effect (also visible in Fig. <ref>), where the incident beam size becomes so large that a sizeable fraction of the photons are no longer incident on the surface of the sample.
XRR can also be used for more complex systems and in more complex geometries in order to extract even more information. For instance, bilayers, trilayers etc. will exhibit multiple periodic intensity oscillations that result from coherent reflections from each of the interfaces <cit.>; see the U metal film in Fig. <ref>, where a thinner oxide layer appears as a much longer wavelength oscillation, more prominent at low Q values. Heterostructures and multilayers with repeat units will result in the appearance of sequential peaks of intensity, known as Bragg peaks, whose position will depend on the bilayer thickness and intensity on the number of repeat units <cit.>. The technique itself can be extended further by allowing the incident and reflected angles to vary at various positions in Q (along the specular ridge) to measure intensity as a function of Q_x. Modelling this intensity is more difficult and one has to employ the distorted-wave Born approximation, which maps height-height correlations to generate lateral coherence lengths, and a jaggedness factor, which describe the distribution of height variation and the smoothness of this variation as a function of lateral dimension in the sample <cit.>.
In summary, XRR is an extremely powerful and versatile technique, which non-destructively investigates the physical structure of a thin film sample. Due to the often sizeable parameter space, and extreme variation in intensity, fitted models may look convincing, exhibiting excellent figures of merit, but they can often be misleading, so it is wise to synthesise a series of samples, where only one or two growth parameters are varied systematically.
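To make the modelling step concrete, the sketch below implements the roughness-free Parratt recursion for a film stack on a substrate. It is deliberately minimal: the optical constants (δ, β) and thickness are illustrative placeholders rather than fitted values, and a real fitting code would add interfacial roughness, resolution convolution and the footprint correction discussed above.

import numpy as np

def parratt_reflectivity(theta_deg, wavelength_A, layers, substrate):
    """Specular |R|^2 of a layer stack, ignoring roughness.

    layers:    list of (delta, beta, thickness_A), top layer first
    substrate: (delta, beta) of the semi-infinite substrate
    """
    theta = np.radians(np.asarray(theta_deg, dtype=float))
    k0 = 2.0 * np.pi / wavelength_A
    deltas = [0.0] + [lay[0] for lay in layers] + [substrate[0]]
    betas = [0.0] + [lay[1] for lay in layers] + [substrate[1]]
    thicks = [0.0] + [lay[2] for lay in layers] + [0.0]
    # z-component of the wavevector in each medium (vacuum first)
    kz = [k0 * np.sqrt(np.sin(theta) ** 2 - 2.0 * d + 2.0j * b)
          for d, b in zip(deltas, betas)]
    # Build the reflected amplitude up from the substrate interface
    R = np.zeros_like(theta, dtype=complex)
    for j in range(len(kz) - 2, -1, -1):
        r = (kz[j] - kz[j + 1]) / (kz[j] + kz[j + 1])  # Fresnel coefficient
        phase = np.exp(2.0j * kz[j + 1] * thicks[j + 1])
        R = (r + R * phase) / (1.0 + r * R * phase)
    return np.abs(R) ** 2

theta = np.linspace(0.05, 3.0, 600)  # grazing angle, degrees
refl = parratt_reflectivity(
    theta, 1.5406,
    layers=[(2.8e-5, 2.5e-6, 500.0)],  # ~500 A UO2-like film (illustrative)
    substrate=(2.0e-5, 1.0e-6))        # LAO-like substrate (illustrative)

Plotting refl on a logarithmic scale reproduces the qualitative features described above: total external reflection below θ_C, the steep fall-off, and Kiessig fringes whose period scales inversely with the film thickness.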
X-ray diffraction (XRD) has been used to study the crystalline nature of materials for over a hundred years and has some very special considerations when used to study thin films and heterostructures <cit.>. Thin films inherently have a small sample volume, and the majority of the photons will pass directly through the sample without scattering. However, the atomic form factor and therefore the scattering amplitude vary as a function of atomic number (Z), which means that the observed intensity varies as Z^2. This is an important advantage when considering U-based thin films, because U is such a strong scatterer that even films of just a few tens of Å have measurable intensities on a standard laboratory x-ray source (Cu K_α, λ∼1.54 Å, for example).
Typical measurements to determine phase and structure are in a similar specular/longitudinal geometry to that described for XRR; however, they use 2θ angles in the range 15 - 140^∘. The positions of the peaks are the first indication of the crystal structure of the materials in the film, although the spectra can often be dominated by intensity from the substrate. At the two crystallographic extremes, glass gives an amorphous signature, which results in a long, damped periodic intensity background, which is relatively weak overall, but persists at all angles, whereas single crystal substrates only exhibit extremely strong intensity peaks at specific angles, relating to the d-spacing along the unique growth axis out of the plane.
Polycrystalline films are grown when there is no obvious lattice match between substrate and film, or when no thermalisation has been used to aid epitaxy. Usually, all of the reflections that one would expect for a powder are visible, and it is even possible to use the Scherrer equation <cit.> to determine grain size, which analyses the peak widths in the same way as for bulk samples; this works well if the grains are smaller than the film thickness, otherwise it is just a measure of the film thickness itself and does not indicate a lateral grain dimension (the grains could be plate-like in shape, for example). Normally, for bulk materials, it is standard practice to measure in a longitudinal geometry, and that is true in the first instance for thin-film measurements; however, if a researcher wants to improve their measured signal to be more surface sensitive, this can be achieved by fixing a small incident angle (a few degrees in 2θ) to fully illuminate the sample, and then scanning the detector angle to achieve the desired 2θ range.
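As a concrete example of the Scherrer analysis mentioned above, D = Kλ/(β cos θ), with K ≈ 0.9 and β the instrument-corrected FWHM in radians; the reflection and width below are hypothetical values chosen to show the arithmetic:

import math

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_A=1.5406, K=0.9):
    """Grain size (Angstrom) from an instrument-corrected FWHM (degrees)."""
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)
    return K * wavelength_A / (beta * math.cos(theta))

# e.g. a UO2-like (111) reflection near 2theta ~ 28.3 deg with a 0.4 deg
# corrected FWHM implies grains of roughly 200 Angstrom.
print(f"{scherrer_size(28.3, 0.4):.0f} Angstrom")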
In many cases, the polycrystalline film will have a preferred orientation, or texture <cit.>, which manifests as a preferential intensity for particular reflections, which deviates strongly from the intensities expected from a theoretically ideal powder pattern <cit.>. For high symmetry structures, this is typically with the closest packed plane flat, facing upwards along the surface normal, so the [110] and [111] orientations for bcc and fcc crystals, respectively, for example. Due to the inherent energetics of most deposition processes and thermalisation, impurities and other structural defects, most polycrystalline samples will exhibit some form(s) of microstrain or residual stress. Microstrain depends on grain orientation, as local lattice spacing variations may vary from grain to grain, and can be analysed using the Williamson-Hall analysis <cit.>, relating peak position, width and grain size to the microstrain. Residual stress results in an average change in lattice spacing and is quantified using sin^2ψ analysis <cit.>, where lattice parameters are calculated from the d-spacing of a selected family of planes, measured as a function of sample inclination.
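A minimal Williamson-Hall separation of size and microstrain broadening fits β cos θ = Kλ/D + 4ε sin θ across several reflections; the peak positions and corrected widths below are hypothetical numbers used only to show the mechanics:

import numpy as np

WAVELENGTH, K = 1.5406, 0.9  # Angstrom, shape factor

two_theta = np.array([28.3, 47.0, 55.8])         # deg, illustrative peaks
fwhm = np.radians(np.array([0.40, 0.48, 0.55]))  # corrected FWHM, radians

theta = np.radians(two_theta / 2.0)
y = fwhm * np.cos(theta)    # beta cos(theta)
x = 4.0 * np.sin(theta)
slope, intercept = np.polyfit(x, y, 1)  # slope -> strain, intercept -> size

print(f"grain size D ~ {K * WAVELENGTH / intercept:.0f} Angstrom, "
      f"microstrain eps ~ {slope:.1e}")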
Epitaxial films have a unique axis along a crystallographic surface normal and they are deposited onto single crystal substrates, which also exhibit a unique axis, see Fig. <ref>. For some sample systems these axes might not be coincident, and an angular offset can be measured between the rocking curve peaks of the film and substrate. Usually, there will already be a predicted lattice match and the first high-angle scan is a survey scan with wide open diffracted beam slits to allow maximum intensity with low resolution. In this way, it is usually possible to observe all of the reflections aligned close to the surface normal of the film. Remember that the crystallographic directions will be slightly misaligned from the flat surface, since the polished substrate surface will not be perfectly aligned along a crystal plane. At this point, the slits can be narrowed to improve resolution and a more detailed measurement of the position of film and substrate peaks allows the determination of the respective lattice parameters. Fig. <ref> also highlights another inherent feature in a high-angle diffraction spectrum, unique to thin films: the phenomenon of Laue fringes. These arise from the interference of x-rays reflected at the interfacial boundaries (not too dissimilar from the Kiessig fringes in XRR). The main peak for thin layers is also a lot more Gaussian in shape as the number of monolayers decreases. In fact, in this regime, it becomes possible to model the whole diffraction spectrum using discrete numbers of lattice planes to generate the observed peak widths <cit.>.
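As a toy version of that discrete-lattice-plane modelling, the kinematic interference function for N identical planes of spacing d reproduces both the main Bragg peak and its Laue fringes, with peak width and fringe period both scaling as 1/(Nd), i.e. with the inverse film thickness (the parameters below are illustrative):

import numpy as np

d = 3.16   # Angstrom, a UO2-like (111) plane spacing (illustrative)
N = 100    # number of coherently scattering planes, i.e. a ~316 A film

q = np.linspace(1.90, 2.08, 2000)  # 1/Angstrom, around q_B = 2*pi/d
x = q * d / 2.0
intensity = np.sin(N * x) ** 2 / np.sin(x) ** 2  # Laue interference function

# Fringe minima sit at intervals of 2*pi/(N*d) either side of the peak,
# and the peak height approaches N^2 at the Bragg condition.
print(f"fringe period ~ {2.0 * np.pi / (N * d):.4f} 1/Angstrom, "
      f"peak height ~ {intensity.max():.0f} (N^2 = {N ** 2})")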
Once the orientation has been determined, a rocking-curve measurement <cit.>, which varies incident and reflected beam angles at a fixed 2θ, is used to align at the maximum of the peak. The full width at half maximum (FWHM) of this rocking curve is also a standard measure of crystal quality (for films and bulk crystals) and is known as the mosaicity. Literature values in U-based studies range from 0.1 to 2^∘, see the left hand panel of Fig. <ref>. For many epitaxial thin film systems an unusual phenomenon is observed in the rocking curves <cit.>: they consist of two peak shapes, a sharp component together with a wider contribution. There are a number of theories as to why this is present, and a recent paper by Wildes <cit.> gives a good discussion of such profiles. It is usual to take the wider component as a reflection of the bulk mosaicity of the deposited film.
To confirm that a film is indeed epitaxial and then to relate the rotational orientation of the substrate to the film, it is necessary to measure off-specular reflections and probe their azimuthal dependence, by rotation around the surface normal, a so-called phi scan <cit.>. Fig. <ref> shows a phi-scan for a [001]-oriented USi_3 film grown on a [001]-oriented CaF_2 single crystal substrate <cit.>. The first point to note is that if this were simply textured then there would be no discrete azimuthal dependence. Second, the number of reflections in the 360^∘ range indicates how many domains are present, as there may be more than one in-plane direction that results in a lattice match; in fact, this is very common for cubic materials. Finally, the azimuthal angular offset between substrate and film determines the in-plane lattice match relationship.
One could go further and measure truly in-plane diffraction, where the whole system is set at a slight tilt angle and the detector is moved out of the scattering plane, until the Q-vector is pointing almost along the surface of the sample, this is known as grazing incidence x-ray diffraction (GIXRD) <cit.>. In this regime, the surface is severely truncated, and the diffraction spots become elongated and more rod-like, since the Fourier transform of a Dirac δ function (analogous to lattice for a single layer) is a constant (hence, rod of scattering). These are known as crystal truncation rods and can be modelled to give detailed in-plane information <cit.>.
To summarise, there are many ways that XRD can be used to give a good understanding of the crystal structure(s), lattice parameters, lattice matches, stresses and strains etc. However, most x-ray spot sizes at the sample position will be several mm^2, which means that these techniques are not suited for local features, individual grain information, stresses, strains etc., and a microscopy method may be better.
Scanning electron microscopy (SEM) uses a focused beam of electrons to raster across the sample, which produces secondary and backscattered electrons, which can be used for imaging or diffraction imaging, achieving magnifications of approx. ×250,000 <cit.>. For epitaxial films, which are often smooth (rms roughness of <10’s Å), the SEM image can look featureless, although it is possible to magnify the edge of a sample to observe the film/substrate interface and estimate the film thickness. This only works for thick films >1000 Å and the resolution is no better than 100 Å. However, there are other operational modes of an SEM, which can be used to give higher resolution images, compositional and crystallographic information. If the SEM system has an additional focussed ion beam column, then it is possible to cut thin cross-sectional foils through the surface and across the film/substrate interface and image them in transmission mode <cit.>.
Compositional information can be gathered using energy-dispersive x-ray spectroscopy (EDX), where the electron beam stimulates the emission of characteristic x-rays, which relate to specific electron shell transitions in given elements <cit.>. This is not highly accurate quantitatively, but it can give a good approximation of the elemental composition. Moreover, the rastered electron beam gives lateral compositional data, so that a map of elemental composition can be constructed, which is especially useful for heterogeneous systems.
Finally, electron back-scatter diffraction (EBSD) can be used to investigate the structure, phase and crystal orientation <cit.>. For bulk materials, the surface has to be extremely smooth, but for thin film synthesis the resulting surface is often far better, in terms of rms roughness, than any mechanically prepared material, as long as the lateral features are larger than the lateral resolution (∼100 Å). For single crystals, EBSD would just give a single orientation and a map of the surface would be uninteresting. However, for polycrystalline samples it is possible to generate a map of the grain structure and individual orientations, see Fig. <ref> for an example UO_2 polycrystalline film, deposited on polycrystalline YSZ <cit.>.
It should be noted that SEM is largely non-destructive for low resolution imaging and EBSD. However, the sample can undergo significant damage if images are taken at high resolution, or it may need to be coated if non-conducting. Also, the probing depth of the EDX process is on the order of microns, which means that signals are often dominated by the substrate materials.
High-resolution transmission electron microscopy (HRTEM) images a thin cross-section of a sample, using a magnetically focussed, highly collimated, high energy (∼200 keV) electron beam in a transmission geometry <cit.>. This is typically a UHV setup, although specialist systems are able to deliver gas environments <cit.>. Typical magnifications can reach ×1,000,000, so it is possible to image in the ∼10 Å regime, and to operate in a variety of different acquisition modes where the image contrast can be simply due to thickness variation, atomic number or crystallographic orientation. It is also possible to generate diffraction images and analyse the crystal structure at a very local level. Fig. <ref> shows an example HRTEM high resolution image and diffraction image from an epitaxial UO_2 film deposited on lanthanum aluminate (LAO) <cit.>.
Electron energy loss spectroscopy (EELS) <cit.>, which uses an electron spectrometer to measure the energy lost by electrons due to inelastic scattering to probe core shell states, much in the same way as EDX, can be used to make high resolution elemental composition maps. Note that HRTEM usually relies on initial preparation via SEM and focussed ion beam milling. A cross-sectional lift out is prepared by careful etching and final platinum adhesion to a movable needle to remove the sample, before attaching it to a standard TEM grid. Specific consideration has to be made for U-based materials, as the electron density of U poses a particularly difficult challenge for the transmission of an electron beam. This means that foils of <1000 Å are preferable, and this requires added preparation time and care, compared with more standard materials.
Ion beam analysis (IBA) <cit.> is a less common, but complementary field of materials characterisation that, particularly for the case of thin films, uses Rutherford backscattering spectroscopy (RBS) to map the depth-dependent profile of sub-micron layers with element specificity. This technique typically uses the energy profile of backscattered helium ions to build a model of the component species in a thin layer, where the depth resolution is limited by the energy resolution of the detector. This can be especially effective for heavy ions, such as uranium <cit.>, where it is possible to achieve isotope detection limits at the ppm level. The most advanced high resolution systems can operate with a depth resolution of less than 20 Å <cit.>, which although not at the same scale as XRR, does provide simultaneous element-specific information.
X-ray photoemission spectroscopy (XPS) is based on the photoelectric effect <cit.>, which is the emission of electrons from a given material, due to an incident light source. Most lab-based facilities use an Al or Mg K_α x-ray source with energies of 1486.6 and 1253.6 eV, respectively, which can be monochromated to improve the final energy resolution of the resulting spectra <cit.>. The electrons are emitted from core shells within the different atomic species in the film and their energies are then determined by an electrostatic hemispherical analyser. This means that the system has to be in UHV conditions, and in fact the main chambers are often in the 10^-11 mbar range. Fig. <ref> shows a schematic of a typical XPS system <cit.>.
At a basic level, the spectra of the core levels of the constituent elements give a fingerprint of the composition, similar in information to EDX, however, the probing depth here is less than 100 Å, which means that the signal will result from interactions with the film in all but the thinnest layers. An analysis of the integrated areas of respective core-level peaks can also give quantitative compositional information <cit.>. Fig. <ref> shows the expected core levels for a U atom <cit.>.
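The integrated-area quantification mentioned above reduces to weighting each core-level area by a relative sensitivity factor (RSF). The sketch below is schematic: the function name and the RSF and area values are placeholders, since real RSFs depend on the instrument, the chosen core levels and the analyser transmission function.

def xps_composition(peaks):
    """peaks: dict of element -> (integrated_area, sensitivity_factor)."""
    weighted = {el: area / rsf for el, (area, rsf) in peaks.items()}
    total = sum(weighted.values())
    return {el: w / total for el, w in weighted.items()}

# Hypothetical U 4f and O 1s areas for a near-stoichiometric UO2 film:
comp = xps_composition({"U": (9000.0, 9.0), "O": (1400.0, 0.7)})
print({el: round(frac, 2) for el, frac in comp.items()})
# -> {'U': 0.33, 'O': 0.67}, i.e. an O/U ratio near 2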
One of the most powerful uses of XPS is in characterising the binding state of the constituent elements, where specific chemical shifts in energy appear due to the existence of particular valence states <cit.>. In some compounds the core levels of the metallic ions have features that are even more sensitive to the local coordination chemistry. For example, UO_2 has so-called `shake-up' satellites that are sensitive to the oxygen stoichiometry, and with careful measurement it is possible to measure excess oxygen to better than about 3%, i.e. x to ±0.06 in UO_2± x <cit.>.
XPS is clearly a very powerful non-destructive technique for chemical analysis, but there are possible modifications from the standard that make it even more useful. Since many of the deposition systems are in UHV conditions, it is also possible to combine these facilities with XPS. Focussed x-ray sources, or focussed analysers, can achieve lateral resolutions of 10’s μm, so that it is possible to laterally map the chemical states of samples. Argon plasma sources can be used to gently etch through the sample, which gives depth profiling information, which could be crucial for understanding complex interfaces. Finally, the use of a much lower-energy ultraviolet source (UPS) means that the valence states are accessible with good resolution, which in conjunction with the XPS spectra, creates a more complete chemical picture <cit.>.
§ URANIUM METAL PHASES AND ALLOYS
§.§ Introduction
Before discussing the progress in creating thin films of U metal and related alloys, we first discuss some of the fundamentals of bulk U metal, including a brief discussion of its phase diagram and some important electronic properties that have been studied over many decades. Much of this discussion will be focused on the basic physics, to frame the importance and power of studies with thin films, but it is important not to forget the use of metallic U in the earliest days of nuclear energy production, as well as, of course, weapons.
The phase diagram of U as a function of pressure and temperature is shown in Fig. <ref>. Similar to many metals, U adopts a high-temperature body-centred cubic, bcc, γ-phase, stable from the melting point of 1132 ^∘C down to 772 ^∘C. At pressures below approximately 3.5 GPa this temperature marks the transition to a tetragonal β phase, stable between 772 ^∘C and 662 ^∘C, with a complex atomic structure and 30 atoms in the unit cell. Efforts were made to solve the β structure as early as the 1950's <cit.>, but it was only in 1988, with the advent of modern neutron powder diffraction together with Rietveld analysis, that the structure was eventually solved <cit.>.
Significantly before this, in 1937, Jacob and Warren <cit.> solved the atomic structure of the room-temperature stable, orthorhombic α-phase, which is adopted below 662 ^∘C at atmospheric pressure or, above approximately 3.5 GPa, directly from the bcc phase at around 800 ^∘C. The α-phase allotrope, defined within the Cmcm space group and shown in Fig. <ref>, is highly open, consisting of a series of corrugated atomic chains nested along the [010] direction. Such a structure produces a series of highly anisotropic interatomic distances, reflecting the complex role of the 5f electrons in stabilizing the structure <cit.>. At ambient pressure this low symmetry, highly open orthorhombic character is unique among the elements, however it has been found as a common stable structure for the high-pressure forms of the light rare-earths and is also closely related to the high-pressure forms of the heavier actinides <cit.>. For a more in-depth description of the structures of the three bulk allotropes the reader is directed to Ref. <cit.>, and for a detailed account of the extensive structural studies conducted on bulk α-U in the 1980's and early 1990's the reader is directed to the review by Lander et al. <cit.>.
Given the complexity of the U phase diagram and the unusual structures contained within it, it is perhaps unsurprising that large single crystals of U metal are difficult to prepare. Indeed, until the production of single crystals of α-U from a molten salt process in the late 1990's <cit.>, only the few crystals made by E. Fisher in the 1950's (known to contain measurable impurities of Si and Fe) were available for use <cit.>. Obtaining single crystals of the two high-temperature allotropes has proved even more elusive. Long-lived metastable single crystals of the β phase were first synthesised by A. N. Holden in 1951 by quenching a 1.36 at.% U-Cr alloy from the β-phase in water <cit.>. However, these crystals were not truly single phase and contained a likely Cr-rich precipitate. Later, the same authors produced phase-pure single crystals by reducing the Cr content to 0.5 at.%; however, such crystals transform to the α-phase within a few hours at room temperature <cit.>. To date, bulk single crystals of the bcc γ phase have never successfully been produced, despite significant efforts.
Despite the challenges in crystal growth, subsequent decades saw a great number of detailed and varied studies into the structural, electronic and thermodynamic properties of the bulk U allotropes. For the room-temperature α-phase, many of the experiments focused on pursuing the various possible long-range ordered states that could exist at low temperature. The major findings of most of these studies (pre-1994) are encompassed by the review article in Ref. <cit.>. Perhaps the greatest experimental effort was expended on probing the unexpected and mysterious phase transition near 43 K, first observed in temperature-dependent measurements of the elastic constants in 1961 <cit.>. Key to understanding the general properties of α-U, as well as the CDW state it hosts, was a pioneering experiment in 1979 <cit.> to measure the phonon dispersion in a bulk single crystal, and a subsequent experiment by Smith et al. in 1980 <cit.> that extended the phonon studies to low temperature and showed significant phonon softening near q_CDW, highlighting the transition as soft-phonon driven. Robust ab-initio calculations that accurately modelled the observed dispersion were not developed until 2008 by J. Bouchet <cit.>.
At even lower temperatures a superconducting state was established, with T_c = 0.7 K <cit.>, but there is still much ongoing debate as to the exact nature of the superconductivity, bulk or filamentary <cit.>, and the importance of sample purity considerations. The coexistence of these two ground states is unique amongst the elements, and reminiscent of more exotic highly correlated electron systems such as the high-T_c cuprate family of superconductors. However, the exact relationship between the ordered states is still unclear. It has been determined that pressure initially suppresses the CDW and enhances the superconductivity, increasing T_c to over 2 K; however, a further increase in pressure to 10 GPa suppressed both phenomena <cit.>. Additionally, magnetism has long been discussed and sought in connection with U metal, and early neutron work on thermal expansion even claimed to have found extra peaks <cit.>. It was later established that these came from multiple-scattering processes, and to date no magnetism has been found in bulk α-U <cit.>.
Considering the relative difficulty of obtaining bulk single crystals for the various allotropes, and the obvious opportunities for epitaxial strain to modify the different types of long-range order present, it becomes clear that the synthesis of U thin films, especially epitaxial films, is of great interest. The following Sections will lay out the general synthesis considerations for the key metallic systems before detailing a number of important case studies where metallic U thin films have been employed to provide scientific insight into this fascinating element and its many allotropes that would not have been possible by relying on bulk synthesis routes alone. In terms of superconductivity, there is ongoing work on elemental and alloy thin films, but at the time of writing the situation in thin films remains unclear.
§.§ Production of metallic uranium films
The number of laboratories worldwide that are sufficiently equipped to deposit metallic U films is understandably small. Aside from the issues of sourcing starting material of appropriate purity, the base vacuum within any potential deposition system must be sufficiently low to avoid the immediate formation of UO_2 or other compounds. As a result the ratio of metal-film-producing laboratories to oxide or other U-compound-producing laboratories is also small. As discussed in Section <ref>, the first U deposition for the explicit purpose of forming metallic films was conducted in the 1990's by T. Gouder in an effort to induce localisation <cit.>, after which there was also early work on heavy-fermion films <cit.>, and attempts to stabilise the metastable hcp form of U in thin layers <cit.>. Following these early examples, we will now discuss a number of the key U thin film systems that have been developed to date, as detailed in Table <ref>. The majority of these systems were first fabricated using the PVD method of DC magnetron sputtering (see Section <ref>), with the optimum growth conditions discovered by an iterative process using many of the characterisation methods discussed in Section <ref>.
§.§.§ Uranium containing multilayers
The deposition of polycrystalline or amorphous U layers is substantially easier than that of the epitaxial systems described below. However, there has been significant and continued interest in such systems in the context of multilayers and bilayers to investigate the possibility of induced behaviour between U and other metals as a direct result of their proximity across the interface. The first work on uranium-containing multilayers was conducted by Beesley et al. <cit.>, which spurred significant further work detailed in Sections <ref> and <ref>. The precise growth details for each system can be found in the references included in Table <ref>; however, in general the U layers are grown by sputtering techniques with minimal substrate cleaning and little or no heating during growth. As induced effects typically exist over relatively short length scales, a key parameter in such films is the interfacial quality between the layers, and this can vary dramatically from system to system. Note that similar growth considerations are valid for both multilayer and bilayer systems; however, significantly more characterisation has been performed on the multilayer systems and thus they will form the focus of the following discussion. As described in Section <ref>, XRR is an invaluable tool for characterising thin film systems. Typical XRR curves from U/Fe and U/Gd multilayers are shown in Fig. <ref>.
Analysis of these and related data demonstrates that the interfacial properties of the U/Fe, U/Co, and U/Ni multilayers are quite different from those of the U/Gd multilayers. In the case of the transition-metal systems, the interfaces are not as chemically sharp, and there is an interdiffused region of thickness ∼ 15 Å at each interface. In contrast, for the U/Gd multilayers, the interfaces are much sharper, and there appears to be no significant interdiffusion between Gd and U <cit.>. This conclusion is supported by transmission electron microscopy studies; a typical image of one such film is shown in Fig. <ref>. Here the layers are well defined with relatively low roughness, and the Gd layers are strongly crystalline, whereas the U layers have small nano-crystallites or are amorphous. In many cases the brighter Gd crystallites extend across the layers, suggesting sizes of as much as 50 Å in the vertical growth direction.
Regarding the growth directions of the layers, in all cases the structure of the 3d transition element follows the expected preferred orientation, i.e. bcc for Fe, hcp for Co, and fcc for Ni. The U is in a poorly defined α structure. Surprisingly, however, in the U/Gd multilayers, although the Gd forms in the expected hcp phase, the U also forms with hcp symmetry, with a unique c (hexagonal axis) growth axis. No ordering could be found within the hexagonal planes, so these consist of a number of random domains of the hexagonal basal plane <cit.>. The value of c_Gd is 5.84 Å, which is close to the bulk value of 5.78 Å, and c_U = 5.60 Å. If we assume that the atomic volume is the same as that of the α-U form, then a_U∼ 2.91 Å, giving a c_U/a_U ratio of 1.92, which is much larger than the hard-sphere value of 1.633. The only element close to this is Cd, with a ratio c/a = 1.89. We return to the topic of hcp-U in Section <ref>.
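As a check on this estimate, the quoted a_U follows from simple volume conservation. The numbers below assume nominal bulk α-U lattice parameters (a ≈ 2.854 Å, b ≈ 5.869 Å, c ≈ 4.955 Å, with four atoms per orthorhombic cell); they are illustrative, rather than the values used in the original analysis:
\[
V_{\alpha} = \frac{abc}{4} \approx 20.8\ \text{Å}^3\ \text{per atom}, \qquad
V_{\mathrm{hcp}} = \frac{\sqrt{3}}{4}\,a_U^2\,c_U ,
\]
\[
V_{\mathrm{hcp}} = V_{\alpha} \;\Rightarrow\; a_U = \sqrt{\frac{4V_{\alpha}}{\sqrt{3}\,c_U}} \approx \sqrt{\frac{4\times 20.8}{1.732\times 5.60}} \approx 2.93\ \text{Å}, \qquad \frac{c_U}{a_U} \approx 1.91 ,
\]
consistent with the ratio of 1.92 quoted above.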
§.§.§ Achieving single crystal films of α
The key breakthroughs in the synthesis of epitaxial metallic U layers came with the realisation of the importance of unreactive epitaxial buffer layers deposited prior to the U layer, and the addition of substrate heating to allow sufficient mobility of the deposited atoms. Refractory-metal seed/buffer layers are routinely implemented in the growth of rare-earth systems to prevent interactions with the substrate and/or to bridge a large mismatch in lattice parameters between the substrate and overlayer in order to achieve successful adhesion of a crystalline thin film. As many substrates contain oxygen, which reacts readily with U metal, the addition of these (nominally) non-reactive buffers opens up a new region of phase space. It is fortunate that the refractory metals that prove to be good chemical buffers also demonstrate excellent epitaxial matches with U. As we will see below, however, the details of these epitaxial matches are not obvious in many cases. To date, high-quality epitaxial layers of orthorhombic and hexagonal U metal have been achieved <cit.>, as well as pseudo-bcc U-Mo alloys <cit.>. As with the multilayer systems described above, each system will be briefly explored here, and further details for each system can be found in the references provided in Table <ref>.
Utilising this approach of high-temperature growth and epitaxial refractory-metal buffers, the epitaxial growth of α-U was first achieved in 2008 via deposition onto thin, single crystal buffer layers of either Nb(110) or W(110) at 600 ^∘C <cit.>. The optimal growth temperature was later refined to 450 ^∘C <cit.>. In these cases, the substrates were single crystals of sapphire, Al_2O_3, epi-polished parallel to the (11.0) plane. This excellent, if non-intuitive, epitaxial match had already been identified <cit.>. The resulting films were capped with a thin layer of one of the two refractory metals to protect the U layer from atmospheric degradation. It was found that Nb serves as the better capping layer, as the passivating oxide, Nb_2O_5, is limited to approximately 20-30 Å and can be stable for many years. Despite the use of similar bcc buffer layers (a_Nb=3.300 Å and a_W=3.165 Å), there are significant differences in the orientations, domain structures and strains induced in the U overlayers for these two buffer materials.
Firstly, for the Nb(110)/α-U(110) epitaxial match the overlayer grows with the orientation relationship illustrated in Fig. <ref>. The Nb buffer has a growth axis [110], and one of the in-plane [1-11] axes is aligned parallel to the sapphire [00.1] <cit.>. The deposited U atoms self-organise in order to align the close-packed rows of each layer, indicated by grey diagonal lines in Fig. <ref>. The epitaxy is driven by the close match between the distances d_Nb = a_Nb = 3.311 Å and d_U = 1/2(a_U^2+b_U^2)^1/2 = 3.264 Å, thus circumventing the conventional wisdom that two systems with almost 6% maximum lattice mismatch should never form an epitaxial system. As the system cools back to room temperature from the elevated growth temperature, the in-plane c-axis is locked into a state of tensile strain due to the positive thermal expansion coefficient of α-U along the [001]-axis (+30×10^-6 K^-1). It is expected that the a-axis, which is closest to the surface normal, should then contract slightly in order to maintain the unit cell volume. The low mosaic spread of both the Nb and U layers (Δω=0.15^∘) in Ref. <cit.> indicated high-quality epitaxial matches between the substrate, buffer and U overlayer.
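As a short worked check of this row-matching argument (using nominal bulk α-U lattice parameters, a_U ≈ 2.854 Å and b_U ≈ 5.869 Å, which are assumed here rather than taken from the original work):
\[
d_U = \tfrac{1}{2}\sqrt{a_U^2 + b_U^2} = \tfrac{1}{2}\sqrt{(2.854)^2 + (5.869)^2} \approx 3.26\ \text{Å},
\]
so the mismatch along the close-packed rows is only (d_Nb - d_U)/d_U ≈ (3.311 - 3.264)/3.264 ≈ 1.4%, far smaller than the almost 6% maximum mismatch of the individual axes quoted above.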
Secondly, the W/U epitaxial system results in layers of α-U(001) with multiple domains, and an a-axis that is held in a state of tensile strain <cit.>. The complexity arises partly because the epitaxy of Al_2O_3/W(110) results in the formation of two domains that are related via a 70.5^∘ in-plane rotation about the W[110] axis. The epitaxial match for one of eight predicted U domains <cit.> is shown in Fig. <ref>, such that the distances d_U=d_110=2.556 Å and d_W=2d_211=2.584 Å are aligned, producing rows of U and W atoms in register. The eight possible domains arise since the condition U[110]∥W(11̅1) can be satisfied in eight equivalent ways. However, the associated experimental data reported by Ward et al. <cit.> were inconclusive and showed only four domains, further reduced to two strong and two weak domains by the underlying sapphire orientation. It is noteworthy that these domains form a pseudo-hexagonal symmetry, with angles between domains of either 52^∘ or 63^∘. These unusual angles arise because of the orthorhombic symmetry of U. However, this pseudo-hexagonal symmetry could have played a role in the earlier experiments reporting hcp-U <cit.>, as X-ray characterisation of those films was not performed.
§.§.§ The hunt for hexagonal close packed uranium
As explained above, and shown in Fig. <ref>, the phase diagram of U does not contain the hexagonal close-packed (hcp) form, despite it being a common structure among elemental metals. However, as was first pointed out by Axe <cit.>, the single soft phonon mode that relates the bcc and orthorhombic α-forms passes first through a hypothetical hcp structure. As such, one could imagine that the hcp structure could be stabilised in thin-film form if an appropriate epitaxial match and growth conditions could be identified. On cooling, these materials might well develop magnetic ordering, superconductivity and/or CDWs; indeed, some of these phenomena have already been predicted <cit.>.
There are reports that U may have been stabilized in such a hcp phase <cit.>. The first paper, in 1998 <cit.>, describes the fabrication of the films and their characterization with low-energy electron diffraction (LEED). After evaporation of the U metal onto a W(110) substrate, the films were annealed at 1127 ^∘C, and the subsequent LEED pattern suggested a close-packed structure with an interatomic spacing of 3.2(1) Å. They then state that: “The LEED data, however, do not allow us to decide definitively whether the close-packed pattern relates to a cubic structure like fcc, or to a hexagonal arrangement, as for most rare-earth metals.” Because the films were fabricated in a closed system, and strongly oxidise if exposed to air, the authors were not able to perform an X-ray analysis on these films. This would, in any case, be difficult, as the films were only ∼ 80 Å thick, which is close to the limit for analysis with a laboratory-based X-ray diffractometer. Subsequent resonant photoemission experiments were performed on these samples, showing that the material had considerable 5f weight at the Fermi level <cit.>, as did scanning tunnelling spectroscopy <cit.>, which again showed the predominant feature of the 5f states near the Fermi energy. In Ref. <cit.> the authors report lattice parameters of a=3.5(3) Å and c=5.4(2) Å, giving c/a ∼ 1.61(8), which is not far from the close-packed ideal value of 1.633. More recently a 70 Å film was also grown by Chen et al. <cit.>, who reported (from LEED) a close-packed structure with a U-U spacing of 3.15(10) Å. The c lattice parameter was not stated, but using angular-resolved photoemission they reported evidence for the 5f states hybridising with the conduction-electron states, as well as f-f hybridisation. Again, a difficulty in this work is the absence of comprehensive X-ray structural characterisation.
Regarding capped systems, which can be removed from UHV conditions for X-ray characterisation, it was reported (as discussed in Section <ref>) that the U layers (usually <100 Å in thickness) in U/Gd multilayers formed in a hexagonal structure with c=5.60(1) Å. If an atomic volume similar to the α-U phase is assumed, then this gives a=2.91(1) Å and a ratio c/a=1.92, a very large value. As reported in <cit.>, an attempt was made to extend this result and form an epitaxial hcp layer by depositing a 500 Å U film onto a 500 Å epitaxial buffer of Gd grown on Nb. The c-axis was found to be 5.625(5) Å, and using the off-specular family of (10.4) reflections a was found to be 2.96(2) Å, giving an atomic volume close to the α phase and c/a = 1.90(2), consistent with the value found in the multilayers. Some diagrams of possible epitaxy are shown in Fig. <ref>, but these were not experimentally confirmed, and attempts to increase the U layer thickness resulted in exfoliation. Furthermore, in the successful, 500 Å thick system, the hcp-U layer displayed a poor mosaicity of 1.5^∘, compared to < 0.2^∘ for the α-U films on Nb. Interestingly, theory <cit.> has predicted a value of c/a = 1.84 with a similar atomic volume to α-U. A more recent theory <cit.> has found a = 3.01 Å and c/a = 1.82. When these authors fixed their a = 2.96 Å to agree with the experiment <cit.>, they calculated c/a=1.864. Thus, the large c/a ratio for the hcp phase seems to be a strong feature of both experiment and theory.
Recently an extensive study of the preparation of hcp-U has been reported by Nicholls et al. <cit.>, to try to understand whether this form of U can be made stable and thick enough to investigate other material properties, e.g. electronic transport or lattice dynamics. In addition to using the substrates discussed above, an effort was also made on Cu(111) and Ir(111) faces, as these fcc materials present a hexagonal face in this orientation that has lattice parameters close to those discussed above for hcp-U. Fig. <ref> shows that the initial phase deposited was indeed hcp, but that a short time after growth (∼20 minutes for a film of 1400 Å) there is a rapid decomposition of the hcp phase, which transitions to the stable orthorhombic structure with the main grain orientations (020) and (110) <cit.>. Thinner hcp films are stable for somewhat longer times, but in all cases phase decomposition of the hcp phase was observed as a function of time. It was also observed in the same study that the specific decay path from hcp to orthorhombic U varies depending on film thickness.
§.§.§ Achieving single crystal films of γ alloys
It has been known since the mid 1950's that α-U makes for a poor nuclear fuel as, due to its orthorhombic structure, it displays both anisotropic thermal expansion and poor dimensional stability under irradiation <cit.>. As such, the high-temperature, bcc, γ phase was highlighted early on as an enticing prospect for increased U density in low-enriched fuel solutions <cit.>. Long-lived metastable γ samples have been obtained through the alloying of various transition metals combined with rapid cooling techniques since the 1960's, up to the present day <cit.>. However, a direct result of such fabrication methods is the inability to produce single crystal samples, thus severely limiting the experimental probes that can be brought to bear when attempting to understand the properties of this allotrope. Adamska et al. <cit.> were the first to demonstrate that epitaxial matching could provide a mechanism to avoid rapid cooling and instead lock in the metastable γ phase using epitaxial strain. This procedure makes use of the Nb/Al_2O_3 match detailed in Section <ref> to provide a [110] surface onto which U and Mo are co-deposited at 800 ^∘C, inside the stable γ region; the similarity of a_Nb = 3.33 Å and a_γ≈ 3.41 Å allows a simple “cube-on-cube” epitaxial match that produces sufficient strain to preserve the γ^s phase upon cooling to room temperature. This work was built on further by Chaney et al. <cit.> and refined to span the large majority of the γ region of the metastable UMo phase diagram, as shown in Fig. <ref>. Mo was chosen as a proof of concept as it provides the strongest stabilising effect of all the transition metals and is also the most likely future fuel candidate; however, this growth approach can reasonably be extended to any γ U-transition-metal alloy system with a high probability of success.
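For orientation, the nominal misfit strain for this cube-on-cube match, using the lattice parameters quoted above, is
\[
\varepsilon = \frac{a_{\mathrm{Nb}} - a_{\gamma}}{a_{\gamma}} \approx \frac{3.33 - 3.41}{3.41} \approx -2.3\%,
\]
i.e. a modest in-plane compression, which is evidently sufficient to lock in the metastable γ^s phase on cooling.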
§.§ Science with uranium metal thin films
This Section discusses: the early attempts to make multilayers containing one layer of pure U metal; the important breakthrough to achieve epitaxial α-U metal; the discovery that interfacial strain changes the properties of the α-U film; experiments attempting to stabilize hcp-U films; and the first efforts to make U/ferromagnet heterostructures for spintronics experiments. We end with a short description of experiments on thin films (not epitaxial) of plutonium and important photoemission experiments showing the delocalisation of the Pu 5f electrons as a function of film thickness.
§.§.§ Induced magnetism in uranium-ferromagnetic multilayers
The fundamental question in multilayer samples which combine ferromagnetic and non-magnetic materials is whether the latter has any magnetic moment induced on it due to the proximity of the former. Early work by Beesley et al. on U/Fe multilayers <cit.> employed Mössbauer spectroscopy and polarized-neutron reflectivity to examine the magnetic moments at the Fe atomic site, and determined that the easy axis of magnetisation was in the plane of the layers. Later, the element-specific synchrotron-based techniques of X-ray magnetic circular dichroism (XMCD) <cit.> and X-ray resonant reflectivity (XRRR) <cit.> were developed, allowing information on the magnitude and profile of the much smaller induced moment at the U atom sites to be obtained. Notably, it was found that U moments are indeed induced, but are only significant close to the interface for multilayer systems containing transition-metal atoms. As mentioned in Section <ref>, the interface sharpness was relatively poor for multilayers containing 3d transition metals, with approximately 15 Å of interdiffusion between the layers. Furthermore, Mössbauer experiments <cit.> suggest that, at least in the case of Fe, the structure of the Fe in this region was amorphous, and it is in this region that the largest induced U moments are found. This sharp decrease in the U moment as a function of distance away from the interface has been supported by theory <cit.>. The magnetic moments of the transition metals were slightly smaller than the bulk values, and the easy direction of magnetisation was in the plane of the films <cit.>.
The work with the transition-metal ferromagnets was summarized in Ref. <cit.> in 2008. It was noted that the maximum U moment in the U/Fe system was ∼ 0.12 μ_B, and that it was reduced for both the U/Co and U/Ni systems. This behaviour is ascribed to the position of the 3d band relative to the Fermi level (E_F). Given that the 5f band is centered close to E_F, a maximum in 3d-5f band overlap, and thus maximal hybridization, occurs when the transition-metal 3d band also lies close to E_F. This criterion is best satisfied for Fe, and less so for the heavier elements Co and Ni.
In contrast, considering now the lanthanide/uranium multilayer system U/Gd <cit.>, there is no overlap in energy between the U 5f band and the localised Gd 4f states, since the latter are well removed from E_F. Hence the induced U moment is much smaller, ∼ 0.02 μ_B per atom, but it does oscillate through the U layer, as might be expected if it is driven by an RKKY-type interaction mediated via the conduction electrons. Notably, the Gd magnetic moments are much reduced, to ∼ 4 μ_B, from the elemental value of 7.5 μ_B known for pure Gd.
This large apparent reduction of the magnetic moment in U/Gd multilayers has been investigated using various techniques <cit.>. One possible scenario to explain the reduced magnetisation is that the particular type of columnar growth observed by TEM, and discussed in Section <ref>, leads to the pinning of Gd moments at the boundaries of the columns. Investigating this requires tools that probe the length scales laterally within the layers, rather than along the vertical growth direction. By tuning polarized X-rays to the M_5 edge of Gd at 1187 eV it was possible to measure sufficiently far away from the specular ridge to learn something about the lengths of both the chemical and magnetic correlations <cit.>. When data are obtained with the X-rays circularly polarized in the two opposite directions, the magnetic components change sign. Hence the charge diffuse scattering, which represents the chemical configuration, can be obtained by adding the two contributions and dividing by two, whereas the magnetic contribution is obtained by subtracting the two contributions. The results are shown in Fig. <ref>, and it can easily be seen that the intensities fall abruptly at the same value of Q_x, the component of the momentum transfer in the plane of the layers. This shows that the chemical and magnetic diffuse scattering define a similar length scale over which these correlations occur, which is not surprising given the proposed link to the columnar growth. A detailed analysis shows that this distance is 120 ± 20 Å, whereas the interlayer distance is ∼ 50 Å. This information then allows a simple model to be constructed, resembling that shown in Fig. <ref>, which explains the strong reduction of the Gd saturated moment and confirms the columnar growth of the Gd <cit.>.
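Schematically, writing I^+(Q) and I^-(Q) for the diffuse intensities measured with the two opposite circular polarizations, the separation described above is
\[
I_{\mathrm{charge}}(\mathbf{Q}) \propto \tfrac{1}{2}\left[I^{+}(\mathbf{Q}) + I^{-}(\mathbf{Q})\right], \qquad
I_{\mathrm{mag}}(\mathbf{Q}) \propto \tfrac{1}{2}\left[I^{+}(\mathbf{Q}) - I^{-}(\mathbf{Q})\right],
\]
since only the magnetic (helicity-dependent) part of the resonant scattering changes sign with the photon polarization.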
§.§.§ Uranium metal bilayers for future spintronic applications
In the field of spintronics, where information is carried by the spin magnetic moment of the electron rather than the charge, one of the key parameters is the spin-orbit coupling of the material, which can have a significant impact on both the generation of spin currents (via the spin Hall effect, for example) and the spin lifetime of carriers in non-magnetic systems. Since the spin-orbit interaction increases through the periodic table as approximately Z^4, it is of interest to see whether U can be used in such devices and be competitive with other materials. The first experiments with U <cit.> were done on a bilayer consisting of a 125 Å ferromagnetic layer of permalloy (Ni_0.8Fe_0.2) deposited on glass, with a 30 Å layer of uranium metal (the non-magnetic layer) on top, capped with a 30 Å film of Nb to prevent oxidation. The key parameter to determine is the spin Hall angle θ_H, defined as the ratio between the injected spin current and the resulting charge current. It is this quantity which should increase strongly for heavier elements. The method applied uses dynamical spin pumping <cit.>. In spin pumping, the precessing magnetisation of an externally excited ferromagnet (FM) undergoing ferromagnetic resonance (FMR) is dynamically coupled to the charge carriers in an adjacent non-magnetic (NM) system, resulting in a net transfer of spin angular momentum across the ferromagnetic/non-magnetic interface. The measured value of θ_H for U is positive, with a value of +0.004. This is smaller than the value of +0.006 measured for Pt <cit.>, which is somewhat surprising. Since both the 5f and 6d shells of U are less than half filled, one would, to a first approximation, expect a negative θ_H for U, as is found for transition metals. In detail, however, the situation is more complex: the extrinsic and intrinsic contributions to the spin Hall angle need to be isolated, which depends on the presence of impurity scattering in the samples <cit.>. The crystal structure and resultant electronic band structure play an important role in the magnitude of the intrinsic contribution, which can be calculated in terms of the Berry phase curvature. Theoretical work by Wu et al. examined the α, γ, and hcp phases of U, and suggested that the hcp phase shows the largest spin Hall conductivity near E_F. In the same work they also showed that the nature of any 3d transition-metal impurities (which might be present at a disordered interface) can significantly affect the magnitude and even the sign of θ_H <cit.>. In more recent work the same authors show the peculiar result that the spin accumulation in uranium films is highest at the side of the film opposite to the impurity position <cit.>. Clearly, there is more to do on this subject to understand the value of U in spintronic applications.
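For orientation, the conversion underlying such spin-pumping measurements is commonly written in the standard inverse spin Hall form (a textbook expression, not one specific to the U study):
\[
\mathbf{J}_c = \theta_H\,\frac{2e}{\hbar}\,\mathbf{J}_s \times \boldsymbol{\sigma},
\]
where J_s is the injected spin-current density, σ the spin-polarization direction, and J_c the resulting transverse charge-current density; the sign of θ_H then follows directly from the sign of the measured voltage.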
In another study, the anisotropic magnetic properties of thin U/Fe and U/Ni bilayers <cit.> were investigated. Here the transition-metal layers of ∼ 100 Å were deposited on glass, and the U layer thickness (d_U) was varied from 0 to 80 Å. The growth was carried out at room temperature, and the layers were polycrystalline, as expected. The U appeared to be mostly [001] textured, this being the preferred growth axis. Room-temperature measurements of the coercive field were made as a function of angle in the plane, to investigate the impact of the heavy U on the magnetic anisotropy of the transition-metal layer: since magnetocrystalline anisotropy is inherently linked to spin-orbit coupling, the proximity of enhanced spin-orbit effects in the U layer might be expected to play a significant role. Some data are shown in Fig. <ref>.
From these results the uniaxial anisotropy coefficient K_eff may be calculated, and this is found to have a maximum in the U/Fe system at d_U ∼ 55 Å. Further examination suggests that quantum-well states in the U may be driving the changes in K_eff, suggesting some impact of the electronic structure of the neighbouring U layer on the magnetic layer. Once again, the interface probably needs further investigation, especially since an induced moment on the U may be present and, as discussed in Section <ref>, an interdiffused layer of at least 15 Å is likely to be present.
§.§.§ Manipulating the CDW States in α using epitaxial strain
As explained in Section <ref>, significant research efforts were expended to understand the low-temperature charge ordering in α-U, through which it was established that there exists a series of CDW transitions starting at 43 K and ending at 22 K with a fully commensurate CDW state described by the wavevector q=1/2a^* + 1/6b^* + 5/27c^* <cit.>. It was also determined that one of the key parameters determining the CDW behaviour is the length of the a-axis, as this is the direction along which the primary component of the CDW distortion lies <cit.>. It is known from work on the phonons under pressure that on compression the soft-phonon mode in the [100] direction hardens (i.e. increases in energy), increasing the energy required for the CDW to form, suppressing the CDW state and enhancing T_c from ∼0.5 K at ambient pressure to a maximum of ∼2 K at 1.2 GPa <cit.>. In a naïve picture of competing CDW/SC ground states, tensile strain along the a-axis should therefore enhance the CDW stability and suppress superconductivity. Given this, the difference in both the magnitude and sign of the strain exerted on the α-U a-axis in the epitaxial Nb/U and W/U systems described in Section <ref> opened the door to achieving something impossible in bulk: manipulating the CDW state via epitaxial strain.
The initial exploratory studies in this area were conducted in 2008 <cit.> and found that the two systems do indeed display vastly different CDW states. Starting with the Nb/U film system, the CDW reflections were found in the bulk positions <cit.>, but with a dramatic change in the relative reflection intensities compared to the bulk, where all satellites are equal in intensity <cit.>. Fig. <ref>b gives the clue as to why the CDW is so close to that found in the bulk, but is in fact “weaker”, as measured by the smaller change of the d_220 spacing in the film compared to the bulk. The d_220 spacing, which includes a component from the a-axis, is smaller for the film than for the bulk, i.e. the a-axis is under compression in the film. This implies a weaker CDW interaction, hence less intensity in the CDW peaks, and less change of lattice parameter on cooling below T_CDW, although the latter stays at about the same value. Since the film was grown on a Nb buffer, which is superconducting at ∼ 9 K, no effort was made to see whether the film was superconducting, since a simple resistive measurement would not detect the lower T_c. Regarding the change in relative reflection intensities: for each layer of satellites, h±1/2, the CDW reflections are defined by the vectors Q_1 to Q_4, shown in Fig. <ref>a, and it was found that for the h+1/2 layer the satellites could be divided into two clear pairs of comparable intensity, a strong pair Q_1/Q_3 and a weak pair Q_2/Q_4. The matching intensity within a pair is expected, as the only difference is ± k, which is a true mirror plane; however, to explain the intensity variation between pairs the authors note that the stronger pair is closely aligned with the growth axis [110]. It can thus be inferred that a major effect of the epitaxial strain is to promote or suppress the formation of CDW domains depending on their relative alignment with the growth axis. The variation of the CDW wavevector was also measured as a function of temperature, and it was found that, unlike the bulk, there were no lock-in transitions of the individual components, q_x, q_y, and q_z <cit.>.
The situation in the U/W system is quite different to that in the U/Nb system, and the complex domain structure discussed in Section <ref> needs to be suppressed in order to simplify reciprocal space for measurements. By depositing a thin (100 Å) layer of niobium between the substrate and tungsten layer, the W(110) buffer is limited to a single domain and the total number of observable α-U domains is reduced from eight to four. The domains are unequally populated, and only two strong reflections (separated by an in-plane rotation of ∼56^∘) tend to be observed <cit.>. As shown in Fig. <ref>a, at 250 K the atomic volumes are comparable, but the a and c axes have expanded, and the b-axis is contracted. The CDW occurs at ∼ 120 K (three times the temperature in the U/Nb film and the bulk) and results in a large further expansion of the a-axis; at the same time the c-axis (which is not constrained, as this is the growth direction) contracts to allow the atomic volume to approach that of the bulk sample. Beyond the dramatic increase in T_CDW, the most remarkable result of the tensile epitaxial strain is to fundamentally change the CDW character, such that it becomes fully commensurate with the lattice and changes from three- to one-dimensional, losing both the q_y and q_z components and retaining only the q_x = 0.50 component. Fig. <ref>b shows the development of this peak, and its width, as a function of temperature.
This work showed conclusively that the expansion of the a-axis is the key to the marked increase in T_CDW. In the U/W films the a-axis is under tension, allowing its expansion. It also confirms the theory developed first in Ref. <cit.> that the soft-mode phonon behaviour, and the associated strong electron-phonon interaction, are crucial to the development of the CDW. In the context of these different types of long-range order, further strain tuning of epitaxial films could provide invaluable information regarding the interplay between these two low-temperature ground states.
§.§.§ Correlated disorder in pseudo-bcc uranium alloys
As introduced in Section <ref>, there has been growing interest in bcc uranium-transition-metal alloys over the last 20 years, with a re-assessment of their potential role as advanced nuclear fuels <cit.>. However, as traditional synthesis methods are incapable of producing single crystals, detailed structural analysis has been severely limited. The first seminal work on this problem was presented by Yakel in 1969 <cit.>, who obtained diffraction from single grains at the tips of polycrystalline needles. He observed significant diffuse scattering in the γ^s phase, and attributed it to local structural modifications which preserved the global bcc structure as a configurational average, refuting previous work suggesting chemical order <cit.>. However, his proposed solution was both complex and largely ignored by the majority of the U-alloy community who, for the most part, have considered γ^s as fully stabilised bcc. This is understandable, as the characteristic diffuse intensity is at least eight orders of magnitude weaker than the Bragg reflections and is therefore impossible to detect when working with polycrystalline samples and standard powder-diffraction refinement techniques <cit.>.
As explained in Section <ref>, the single crystal synthesis problem was recently overcome using epitaxial matching as a stabilising force in place of rapid cooling <cit.>, allowing the γ^s phase to be studied in detail using modern synchrotron techniques for the first time. Using the diffuse X-ray scattering diffractometer at the ESRF <cit.>, Chaney et al. confirmed that the bcc structure is not preserved locally; instead a three-dimensional structural modulation gives rise to a well defined diffuse X-ray pattern, shown in Fig. <ref>. Further, the exceptional flux and resolution provided by modern synchrotrons allowed for another key breakthrough, the separation of the diffuse signal into two distinct types of different origin. The first group (H) were re-indexed as a coherent precursor structure of the intermetallic U_2Mo, allowing the remaining diffuse alloy reflections (N) to be explained by a substantially simplified model. The unique structural solution was given by a local orthorhombic structure, formed by atomic displacements along the ⟨001⟩_s with no requirement for chemical ordering. The superstructure is C-face centered and defined with four atoms in the unit cell at (0,1/4±δ,1/4), where |δ|=0 corresponds to the bcc structure. Not only is this a simpler solution, it also removes the unusual condition present in the previously suggested structure, whereby there are two atomic sets with distinctly different coordination and no chemical ordering.
Given that all atoms are displaced from their bcc positions, the individual displacements, which transform the parent into the superstructure, can be viewed as a frozen phonon. The responsible modes are of the general type TA_1[110]_p, with polarization along the fourfold axis [001]_p, and are exactly the modes predicted to destabilize the high-temperature bcc phase <cit.>. Twelvefold degeneracy generates six equivalent and equally occupied superstructure domains; however, given the diffuse nature of the reflections it is clear that the identity of any one choice of domain is only preserved over a relatively short distance, ∼30 Å along b_s/c_s and ∼22 Å along a_s. As such, Chaney et al. <cit.> suggest that the local superstructure is better thought of, not as local order, but as correlated displacive disorder that lowers the local symmetry, with three-dimensional correlations between nearby atoms governed by rules represented as a frozen phonon. Twelvefold degeneracy maintains the higher average symmetry, while allowing anisotropic neighbour distances reminiscent of α-U to be recovered. This situation can be understood intuitively, as U has a tendency, rare among the elements, to occupy highly open structures that produce extreme anisotropic local environments <cit.>. Thus, by attempting to stabilise U onto a highly symmetric bcc lattice, the mismatch in preferred symmetry creates an intrinsic conflict within the system. In the absence of sufficient thermal energy, this conflict creates a structural instability resolved by the formation of correlated disorder. The same experiments also showed that this correlated disorder is intrinsically tunable. The authors use a qualitative metric combining the correlated volume and the magnitude of the atomic displacement to capture the “correlated disorder strength”, and show strong tunability with alloy composition, as evidenced by the variation of the intensity and FWHM of the diffuse reflections in Fig. <ref> as a function of minor alloy content.
Building on the discovery of correlated displacive disorder, Chaney et al. <cit.> explored the phonon dispersion of the 23 at.% Mo system using the grazing-incidence inelastic X-ray scattering technique pioneered at the ID28 beamline <cit.> of the ESRF <cit.>. It is well understood that correlated disorder can couple to other periodic phenomena such as lattice dynamics <cit.>, and as mentioned in Section <ref> the uranium phonon dispersion has been the subject of many notable theoretical efforts, including high-temperature modelling <cit.> and more recently calculations for the UMo alloy itself <cit.>. However, due to the lack of suitable single crystal samples no experimental data existed, making this a significant gap in understanding. The first dispersion was published in 2019 by Brubaker et al. <cit.>, who extracted a single enlarged grain from an annealed polycrystalline 20 at.% sample by laser cutting. This work and the studies by Chaney et al. <cit.> agree well with each other, as well as with theoretical predictions. The full dispersion as measured by Chaney et al. <cit.>, including room-temperature ab initio calculations, is shown in Fig. <ref>. Both experiments also observed extraordinary phonon lifetime suppression away from the zone centre. As an alloy, some intrinsic phonon broadening is expected; however, this contribution was simulated and determined to be <2 meV for all q, and thus the authors concluded that almost all phonon lifetime suppression in the system is attributable to disorder-phonon coupling.
§.§.§ Thin films of transuranium metals
The number of laboratories that can perform experiments on transuranium films is, of course, extremely limited. To our knowledge the only laboratory to produce epitaxial thin films is Los Alamos National Laboratory, where epitaxial transuranium oxide films have been grown; these will be covered in Section <ref>. However, the controversy over the electronic structure of the light actinides has been such that various efforts in photoemission and related spectroscopies have been undertaken over the past 20-30 years, many of which are discussed in the recent Plutonium Handbook <cit.>. For our purposes in this review we highlight one particular measurement that was able to track the localization of the 5f electrons in Pu metal films by monitoring the photoemission spectra as a function of sample thickness <cit.>.
The key spectra are reprinted in Fig. <ref> and show an appreciable shift for the thinnest sample. Of course, in this work no X-ray characterization could be made of the samples, so the assumption is that the Pu metal was in the α-Pu form and was polycrystalline. These spectra have often been cited in work on the electronic structure of Pu metal. The three-peak (A, B & C) structure is characteristic of 5f localization, together with the peak D that represents the position of the 5f states below E_F.
§ URANIUM OXIDE SYSTEMS
§.§ Introduction
As the difficulties of manufacturing pure U metal fuel elements, as well as the safety issues raised by the low melting temperature of 1132 ^∘C, became more evident in the 1950's, uranium dioxide was introduced as an alternative nuclear fuel in the early 1960's. It remains today the fuel of choice for the majority of power reactors. A complete industry is devoted to the manufacture of such nuclear fuels and their treatment post-irradiation. The properties of UO_2 have thus been studied extensively in both a fundamental and an applied sense <cit.>. A major issue is related to the surface properties of UO_2, as it is at the surface that reactions will initially occur, and, with irradiated fuels, the potential danger arises from material escaping from the surfaces of such fuels. Films in the thin limit are essentially surfaces, so it is reasonable to expect that they can help with understanding the behaviour of UO_2 surfaces. We shall show one example of research in this area. Another key aspect is related to interfaces involving UO_2, for example the interface between the fuel and the cladding, and here too, thin bilayers should offer new insights.
From a more fundamental point of view, UO_2 is a semiconductor with a band gap of ∼ 2 eV, which is almost twice that of silicon. It has garnered interest in applications (especially in satellites) requiring high reflectance in the wavelength range of 40 - 100 Å. Both UO_2 and UN films reflect significantly more <cit.> in this range than all other known materials, such as Au and Ir, and UO_2 is also cheaper. Additionally, there has been some effort to consider UO_2 for solar cells, as, when doped with oxygen, UO_2 is a p-type semiconductor <cit.>.
§.§ Surface studies of UO_2
An excellent review of the chemical reactions at the surface of UO_2 was given by Idriss <cit.> in 2010. Those studies (so far) have not used thin films, but were performed on specially prepared surfaces of single crystals taken from bulk samples. Many of the properties deduced are relevant to thin-film research, and can frequently be extended with suitable films. Earlier work on the chemical structure of the surface was reported in the 1980s <cit.>. An understanding of the chemical structure of the surfaces is important because, with the fluorite CaF_2 structure of UO_2, the only primary surface that is non-polar is the (110). The other two surfaces, (100) and (111), are polar, so that with these latter two surfaces there will always be an atomic rearrangement at the surface to resolve the polar discontinuity with the vacuum <cit.>. We will need some of these results to understand the later studies of the dissolution of UO_2.
Upon further oxidation the chemical formula can be generalised as UO_2+x, where x = 0 at stoichiometry, and the oxidation proceeds through many different phases, U_4O_9 (x = 0.25), U_3O_8 (x = 0.67), finally arriving at UO_3 (x = 1). As oxidation starts at the surface, various experiments have been performed by Stubbs and collaborators using the technique of X-ray crystal truncation rods to understand the initial oxidation process <cit.>. Other notable experiments on surfaces have been reported by Seibert <cit.>, and by Spurgeon et al. <cit.> on UO_2+x. There has also been a series of theoretical investigations of the surface structure of UO_2 <cit.>, with the last two concentrating on the interaction of water with the surface.
Another area that has been studied is the magnetic structure and phase transition at the surface of UO_2. In the bulk this aspect has been studied since the 1960s <cit.>, but the first observation of the magnetic structure from the surface was made with resonant magnetic scattering <cit.>. This work showed an unusual aspect of the phase transition at 30 K from the paramagnetic to the antiferromagnetic state, and later experiments clearly defined this as a so-called “surface transition”, i.e. a transition that is induced directly by the presence of the reduced symmetry at the surface <cit.>. Perhaps counter-intuitively, we shall show an example where a study of the magnetism has been able to reveal some important information about the interface of UO_2 with the substrate.
§.§ Growth of oxides
Given the abundant work reported on UO_2 surfaces in Ref. <cit.>, it is perhaps surprising that it took so long before epitaxial films of UO_2 were made systematically. As reported in Sec. <ref>, epitaxial films were already made in the 1960's <cit.> and 1970's <cit.>, but relatively little was done with them.
The first recorded epitaxial UO_2 films were made at Los Alamos National Laboratory and reported in 2007 <cit.>. The substrates were LAO (LaAlO_3), which can be bought commercially. Since that time a whole series of papers have reported the growth of epitaxial UO_2, and also of higher oxides, with a wide variety of methods on many different substrates <cit.>. These are given in Table <ref>.
Epitaxial UO_2 films were grown by DC-magnetron sputtering at Oxford in 2010 <cit.> using substrates of LAO, following the success of Burrell et al. <cit.>, but a paper on these samples was not published until 2013. In the meantime, a paper by Strehle et al. <cit.> appeared in 2012, also using DC-sputtering, and reporting epitaxial samples of UO_2 and U_3O_8. Strehle et al. <cit.> published much useful information on the growth of UO_2-based epitaxial films, and they also demonstrated that epitaxial films could be grown on yttrium-stabilized zirconia (YSZ), even though the mismatch was a large 6%. This is important in UO_2-film research, as YSZ substrates can be readily obtained in the three important orientations (100), (110) and (111), allowing epitaxial films of UO_2 to be grown in all three principal orientations. Strehle et al. also showed by chemical analysis with RBS and XPS that the (11̅.2) sapphire substrate material is unstable with respect to Al transport into the uranium-oxide overlayers.
§.§ Science with UO_2 thin films
§.§.§ Photoemission experiments on UO_2 and PuO_2 epitaxial films
We shall highlight here the ARPES results obtained from UO_2 and PuO_2 epitaxial films. Whereas polished single crystals of UO_2 exist and have already been used for ARPES measurements, there are no sizable single crystals of PuO_2, so that effort is unique and important. The central question posed by the theory, presented also in Ref. <cit.>, is to what extent the actinide 5f electrons hybridize with the oxygen 2p electrons. The hybrid density functional theory discussed predicts that in UO_2 the mixing between these two orbitals is present, but relatively small, whereas it should be much stronger in PuO_2. Below, in Figs. <ref> and <ref>, we show the UO_2 and PuO_2 results from the ARPES measurements.
Clearly, the much greater dispersion of the 5f level in PuO_2 (∼ 3 eV) <cit.> than in UO_2 (∼ 0.1 eV) is a key finding, confirming the theory used in this investigation of the actinide dioxides.
§.§.§ Antiferromagnetism of UO_2 epitaxial films
The work published by the Oxford/Bristol groups on the low-temperature antiferromagnetic (AF) structure of UO_2 films illustrates the challenges of working with epitaxial films, which must be grown on substrates. The films were produced at ITU, Karlsruhe, as well as at Bristol. All initial films used LAO substrates, and the deposition temperature was 650 ^∘C. A careful examination <cit.> of the UO_2 films showed tetragonal symmetry, even for films over 2000 Å thick. The a and c axes are shown in Fig. <ref> below.
The UO_2 films had a rocking-curve width that exceeded 1^∘ (2^∘ for the thinnest samples), although the substrate peaks were very sharp. Some of the thinner samples showed no signs of AF magnetic ordering. This is certainly related to the tetragonal symmetry demonstrated in Fig. <ref>. The AF structure of UO_2 is associated with a cubic lattice <cit.>, and the distortion, which is due to the interaction with the substrate, breaks that condition. On cooling these samples to base temperature (∼ 10 K) one should see the (001) AF peak of UO_2 along the specular direction, as this direction is [001]. For the thinnest samples (t ≤ 250 Å) no AF peak was observed. T_N of UO_2 is 30 K, and this is a robust value, i.e. it takes an effort to change it, so this result was a surprise. The energy dependence of the AF peaks observed from the thicker films was also measured, as shown in Fig. <ref>.
A model proposed by Bernhoeft et al. <cit.> was used to calculate the profile of the energy dependence of the peak as a function of the depth that is magnetically ordered. The model shows that the peaks come from a volume that is tied to the surface of the UO_2 film, rather than an ordering tied to the interface with the substrate. It appears that there is a “dead magnetic layer” next to the substrate. After performing the experiments, the authors discovered that LAO has a ferroelastic transition at 560 ^∘C, which is below the deposition temperature of 650 ^∘C used for these films. Thus, on cooling, the transition occurs in the substrate, affecting the interface. The thinner UO_2 films have a larger strain (see Fig. <ref>), which prevents magnetic ordering, because the AF structure of UO_2 requires a cubic chemical structure. Above the dead layer of ∼ 500 Å the relaxation allows the local cubic symmetry, and ordering occurs.
A semi-proof of this hypothesis was obtained by depositing UO_2 on CaF_2, which has a lattice parameter and crystal structure almost identical to those of UO_2, and finding that the AF ordering extended throughout the film <cit.>. Of course, in the original choice of LAO as a substrate using the PAD method <cit.>, the films are deposited at room temperature, so this unexpected problem did not arise. This is a cautionary tale; one is not only producing an epitaxial film, but also an interface at which unexpected interactions may occur.
§.§.§ Enhanced paramagnetism in strained epitaxial UO_2 films
An interesting paper appeared in 2022 <cit.> reporting that in thin (< 200 Å) epitaxial UO_2 films prepared by PLD on various perovskite substrates the uranium ions exhibit enhanced paramagnetism. The authors used YAO (YAlO_3), LAO, LSAT ((La,Sr)(Al,Ta)O_3), and STO, all of which have lattice spacings close to that of UO_2 when the epitaxial UO_2 growth adopts a √2 arrangement on the substrate surface. In this configuration the lattice mismatch, compared to UO_2, covers a range of strain from -3.86% (for YAO) to +0.91% (for STO). They performed reciprocal-space mapping to determine the individual in-plane and out-of-plane strains in the UO_2 films. From the unit cell volume, compared to UO_2, the authors propose that these films represent hypo- (i.e. x < 0) or hyper- (x > 0) stoichiometric samples of UO_2±x in the fluorite structure, with x in the range from -0.06 (UO_1.94) for STO to +0.23 (UO_2.23) for YAO substrates. This assignment is based on the lattice parameters of bulk UO_2±x. The films have a large induced paramagnetism, as summarized in Fig. <ref>.
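As an illustration of how these strain values arise, the mismatch for the 45^∘-rotated (√2) arrangement can be estimated as
\[
\varepsilon = \frac{\sqrt{2}\,a_{\mathrm{sub}} - a_{\mathrm{UO_2}}}{a_{\mathrm{UO_2}}} \approx \frac{\sqrt{2}\times 3.905 - 5.470}{5.470} \approx +1.0\% \quad \text{(for STO)},
\]
where nominal room-temperature lattice parameters (a_STO ≈ 3.905 Å, a_UO_2 ≈ 5.470 Å) have been assumed; the small difference from the quoted +0.91% simply reflects the precise lattice parameters adopted.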
A key question here is whether there is true ferromagnetism in these samples, as the authors suggest. When compared to the bulk magnetic behaviour of UO_2 there are a number of important points to note. The sizes of the induced magnetic moments are exceptional, ranging from 1.2 to 3.2 μ_B. In pure stoichiometric UO_2 the antiferromagnetic moment is only 1.74 μ_B; at least 14 T is needed to break the AF coupling, and even then no large induced paramagnetism is observed up to 100 T <cit.>. Indeed, the paramagnetic susceptibilities that they report are at least 100 times greater than those found in pure bulk UO_2, and than those found by susceptibility measurements <cit.> on non-stoichiometric samples.
It is well accepted in bulk UO_2+x that the AF ordering is suppressed for x > ∼ 0.10 <cit.>, and yet the authors report large induced moments of > 1 μ_B for UO_2 on YAO, which they propose is equivalent to UO_2.23. An earlier study (see the previous Section <ref>) of a number of UO_2 films deposited (by sputtering) on LAO was reported by Bao et al. <cit.> in 2013. No magnetization studies were done on those samples, so a direct comparison with Ref. <cit.> is not possible. On the basis of the known Γ_5 triplet ground state of UO_2 <cit.>, the maximum moment is ∼ 2 μ_B. The splitting to the higher levels is such that no moment greater than this should be observed. It remains to be understood whether the crystal field present in non-stoichiometric UO_2 could be so heavily distorted by strain, together with the relatively small departure from stoichiometry.
Since these results are so unexpected, more studies will undoubtedly be needed to provide further insight. Key measurements include temperature-dependent ones to clarify the existence or otherwise of a Curie point, and element-specific ones (e.g. X-ray magnetic circular dichroism at the uranium M_4,5 edges) to demonstrate that the magnetism arises from the U ions <cit.>. We note that the easy direction for the moments is out of plane, which is unusual, as normally in thin films the easy axis is in-plane on account of the shape anisotropy. Interfacial anisotropy is known to drive the moments out of plane, famously in Co/Pt and Co/Pd superlattices <cit.>, but an intriguing point here is that polarised-neutron reflectivity measurements have shown that these moments are distributed uniformly across the film thickness.
§.§.§ Search for exchange bias using UO_2 thin films
Exchange bias is a phenomenon in which the hysteresis loop of a ferromagnetic material is offset in field by an interface interaction with another system, usually an antiferromagnetic material. Discovered more than 60 years ago <cit.>, it has many applications in devices. The production of epitaxial UO_2 films opened the possibility of examining exchange bias with an anisotropic antiferromagnet. The first samples were made with magnetite (Fe_3O_4) as the ferromagnetic material <cit.>. 300 Å of UO_2 were deposited on LAO substrates, followed by varying thicknesses (90 to 700 Å) of Fe_3O_4 on top of the UO_2, with a 500 Å cap of Mg to prevent any further reaction occurring. Cross-sectional TEM scans showed partial coherence across the UO_2/Fe_3O_4 boundary, but there were a number of domains in the Fe_3O_4.
The hysteresis loops were then measured as a function of field at various temperatures down to 5 K. Exchange bias of 2.6 kOe was found in the thinnest (∼ 100 Å) Fe_3O_4 samples at 5 K, but rapidly diminished for thicker samples. An unusual feature was the presence of reasonable exchange bias up to ∼ 50 K, i.e. considerably above the T_N = 30 K of UO_2.
A second attempt was made with (polycrystalline) permalloy (Ni_80Fe_20) films replacing the Fe_3O_4 of the previous study. The LAO substrates were also replaced by CaF_2 because of the known problems with the former <cit.>. In this system the exchange bias was considerably smaller than found in the films with Fe_3O_4, presumably because the UO_2/permalloy interface was not as coherent as the one made with Fe_3O_4, and the effect was absent above the T_N of UO_2. Rather surprisingly, when the magnetization was measured perpendicular to the plane of the film (which is the hard direction of magnetization), there was considerable hysteresis and an order of magnitude larger exchange bias, measured as 220 Oe, though still much smaller than found with Fe_3O_4. Moreover, the t^-1 dependence (where t is the permalloy thickness) of this exchange bias shows that the origin of the effect is interfacial in nature <cit.>.
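The interfacial interpretation can be made quantitative: if the bias arises from an areal exchange energy ΔE stored at the interface, then H_EB = ΔE/(M_F t), which is exactly the observed t^-1 law. A minimal sketch follows, assuming a nominal permalloy saturation magnetisation and a hypothetical 100 Å permalloy thickness (neither value is quoted in the original work).

```python
# Interfacial exchange-bias energy: Delta_E = H_EB * M_F * t (cgs units).
# If the effect is interfacial, Delta_E is roughly independent of t, which
# is equivalent to the observed H_EB ~ 1/t dependence.
M_permalloy = 800.0   # emu/cm^3, nominal Ni80Fe20 magnetisation (assumption)
H_eb = 220.0          # Oe, perpendicular exchange bias quoted in the text
t = 100e-8            # cm, hypothetical 100 A permalloy thickness

delta_E = H_eb * M_permalloy * t      # erg/cm^2
print(f"Delta_E ~ {delta_E:.2f} erg/cm^2")   # ~0.18 erg/cm^2
```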
Although it would appear that neither of these measured effects is particularly dramatic, this project, as with the metal bilayers mentioned in Sec. <ref>, depends to a large extent on the interfacial nature of the bilayers. Until those can be well characterized and improved, the observation of effects such as exchange bias must be taken with some caution. In addition, UO_2 is a complicated 3k antiferromagnet <cit.>, and the effect might be greater with an actinide collinear antiferromagnet.
§.§.§ Dissolution studies of UO_2
In the spirit of thinner UO_2 films approximating to the surface, the interaction of very thin epitaxial films (< 100 Å) with water has been explored as a function of pH. It has been known for many years that the corrosion of UO_2 proceeds through a process converting the stable U^4+ ion, which is almost insoluble in water, into a uranyl U^6+ ion, which is highly soluble in water <cit.>. The strategy in this context is that by treating the surface of thin films, it might be possible to observe, either with Bragg scattering or reflectivity, some change when the surface is treated with oxidising agents. The first experiments simply exposed the UO_2 thin films to deionised water, which normally should have no effect as the U^4+ ion is very stable. It was found, however, that when the synchrotron beam illuminated the film and water, there was strong radiolysis of the water, producing both H_2O_2 and OH radicals, with a concomitant conversion of U^4+ to U^6+ and dissolution of U from the sample. The intensity of the Bragg peak of a sample of thickness ∼ 40 Å immediately decreased. With a bulk sample this reduction of the Bragg intensity would not be visible, and even with a sample of 1000 Å the effect is small.
Using a combination of the reflectivity and Bragg scattering, Springell et al. <cit.> were able to construct a picture of the dissolution as a function of exposure, as pictorially shown in Fig. <ref>.
During the experiment the pH of the water was changed. As expected, the dissolution is faster in acidic water (pH ∼ 2) than in an alkaline (pH ∼ 11) solution; in the latter, dissolution was almost halted. The experiments also tested whether using X-rays at the U L_3 absorption edge at 17.166 keV increased the effect through photocatalytic processes (i.e. the large number of excited electrons emitted at this energy), but no measurable effect was observed.
The authors also remarked on the clean image of the beam on the thin UO_2 film when viewed with a scanning electron microscope. If the main product of the radiolysis were H_2O_2 alone, then one would expect some blurring of the edges of the ridge cut out by the beam in Fig. <ref>, as the lifetime of H_2O_2 is quite long, so the process should continue after the beam is shut off. The sharpness of the profile suggests that other products, possibly OH free radicals, might be important, as they have a short lifetime and would disappear once the beam is turned off. This question has not yet been answered with further experiments.
The next question to be probed with this method was the directional dependence of the dissolution <cit.>. These experiments showed quite conclusively that the most stable surface of UO_2 is the (111) plane, as has been theoretically predicted. As discussed earlier, the surface of UO_2 is non-polar only for the (110) plane, i.e. there are layers containing two oxygen atoms for every uranium atom at the surface, so that the surface layer has no charge (non-polar). For the other two directions, (100) and (111), there must be a re-arrangement of the natural surface layers so that charge neutrality occurs <cit.>. Which plane is the more stable therefore depends on the stability of this latter process.
As can be seen from the figure, the (111) plane is the most stable. Interestingly, the dissolution appears to proceed initially and after a short while is passivated. The most unstable surface appears to be the (110).
These measurements can be compared to oxidation studies (not dissolution) performed by Stubbs and collaborators for both the (111) <cit.> and (100) <cit.> surfaces, as well as a number of theoretical studies that approach this subject <cit.>. All agree on the stability of the (111) surface. The dissolution methodology has also been used for other systems, notably UN and U_2N_3, which we shall cover later in this review.
§.§.§ Studies of phonons with irradiated UO_2
Thin films give another perspective when discussing the properties of irradiated materials. In a reactor the fuel elements are in a high flux of fast neutrons that penetrate deep into all materials and cause damage throughout. Of course, the damage will differ between materials, and also as a function of temperature. The materials in a reactor become extremely “hot”, i.e. highly radioactive, emitting both alpha particles and high-energy gamma radiation from unstable nuclei produced in the fission process and from the fission products themselves. Examining such materials requires high-tech “hot laboratories” that are exceedingly expensive to maintain and need specialised staff.
Some (but not all) of the work on radiation damage can be simulated by using high-energy irradiation sources, such as He^2+ and heavier ions, to cause the damage. Whereas such methods leave the samples essentially inactive, these (charged) particles do not, unlike neutral neutrons, penetrate deep into materials: depending on the energy, they penetrate perhaps 2-10 μm. Such damage is therefore ideally matched to thin films, where the whole sample can be damaged in a homogeneous manner. A good example of such a use for thin films is the work on the examination of the phonons in irradiated thin films of UO_2 by Rennie et al. (2018) <cit.>.
It has been known for many years that the thermal conductivity of UO_2 is strongly reduced on irradiation in a reactor <cit.>. As the thermal conductivity falls, the radial temperature gradient across the fuel pin becomes more substantial, leading to enhanced cracking and deformation. Consequently, the decay in thermal conductivity not only reduces the reactor efficiency but also contributes to the degradation in structural integrity of the fuel; together these effects ultimately act to limit the fuel lifetime. The thermal conductivity of UO_2 is carried almost exclusively by the phonons, at least at temperatures below ∼ 1500 K, where the contributions from polarons are small <cit.>. This is also the region of interest for reactor operations. The phonons in stoichiometric UO_2 were first measured in 1965 in a pioneering experiment by Dolling, Cowley, and Woods at Chalk River National Laboratories, Canada <cit.>. Since then, they have been measured many times, but recently Pang et al. <cit.> have published a study at different temperatures where they show quantitatively how each phonon branch contributes to the thermal conductivity.
The thin epitaxial films chosen for the experiment had a thickness of 3000 Å and were deposited on SrTiO_3 substrates. (YSZ-based materials exhibit a considerable amount of diffuse scattering, so these were avoided.) Fig. <ref> illustrates the damage profile when the films were exposed to an accelerated 2.1 MeV He^2+ beam. The first 1 μm has a roughly homogeneous damage profile, whereas the so-called Bragg peak of the damage is located at just under 4 μm, i.e. well into the substrate in this situation.
One of the most difficult parameters to determine was the amount and type of radiation damage to produce in the films. If the damage is too extensive and the lattice itself is partially destroyed, then clearly the phonon spectra cannot be measured along defined crystal directions; on the other hand, too little damage risks observing only small or no changes in the phonons. After the above irradiation, which calculations determined to be ∼ 0.15 dpa (displacements per atom), a sizeable change in the lattice parameter corresponding to an expansion of Δa/a = +0.56(2)% was observed. Since the full widths at half maxima (FWHMs) were almost the same for the two films, in both the longitudinal and transverse directions, the damage was judged to be uniform across the 3000 Å of the film, and the crystallinity remained almost intact. This can be compared with other experiments on bulk materials <cit.>, where a swelling of ∼ 0.7% corresponded to an irradiation of ∼ 5 × 10^17 α particles/g and the thermal conductivity was reduced after irradiation by ∼ 50%.
The experiments at the ESRF ID28 facility <cit.> used grazing-incidence inelastic X-ray scattering to determine the phonon dispersion curves. Technically this is challenging: to achieve the ∼ 3 meV resolution needed to determine any broadening of the phonons, the incident energy at the instrument is 17.794 keV, using the Si (999) reflections for the analyser. By chance, this energy is close to the U L_3 absorption edge of 17.166 keV. At this energy the 1/e penetration of the photon beam in UO_2 is ∼ 10 μm. The angle of incidence was in all cases < 1 deg, but the film was slightly tilted to give a further penetration of the beam of ∼ 1500 Å. In spite of this, the total mass of the UO_2 illuminated by the X-ray beam can be estimated at ∼ 100 ng. The experimental data for two points close to the zone boundary of the TA[010] phonon (the X point) are shown in Fig. <ref>. The phonons are well defined, their frequencies as a function of wave-vector q do not change compared to the bulk experiments <cit.>, and their widths can be measured. Notice that the central peak is more intense (compared to the phonons) for the irradiated (green) sample, because of the additional elastic diffuse scattering from the defects.
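The ∼ 100 ng figure follows from the elongated beam footprint at grazing incidence. The sketch below reproduces it under assumed, purely illustrative beam dimensions and incidence angle, which are not quoted in the original work.

```python
import math

# Order-of-magnitude estimate of the UO2 mass probed in grazing incidence.
rho = 10.97            # g/cm^3, bulk UO2 density
depth = 1500e-8        # cm, effective probed depth quoted in the text
beam_v, beam_h = 15e-4, 40e-4   # cm, assumed vertical x horizontal beam size
alpha = math.radians(0.6)       # assumed grazing angle (< 1 deg, as stated)

# The footprint is stretched along the beam by 1/sin(alpha).
footprint = (beam_v / math.sin(alpha)) * beam_h   # cm^2
mass = rho * footprint * depth                    # g
print(f"illuminated mass ~ {mass*1e9:.0f} ng")    # ~100 ng
```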
The final results are shown in Fig. <ref>. Appreciable broadening of the TA phonons occurs for the irradiated sample, and it is observed to be a function of the phonon energy. Although the pristine films do appear to have slightly larger widths than found in bulk experiments, the change with irradiation is unmistakable. A similar effect is found for the longitudinal modes, and the net result in terms of thermal conductivity can be judged to be about a factor of two reduction on irradiation.
Weisensee et al. <cit.> irradiated similar thin films with 2 MeV Ar^+ ions (at room temperature) and, using the method of time-domain thermoreflectance, showed that the thermal conductivity is reduced by ∼ 50%, in agreement with the above studies. Unfortunately, for all the power of the X-ray technique, there remains the problem of sensitivity to the oxygen atoms. Because of the large mass difference between uranium and oxygen, the acoustic modes tend to be dominated by U motions, and the optic branches by oxygen motions. The low number of electrons around the oxygen nucleus then means that the X-ray technique is not sensitive to the optic modes. A good example is the study by X-rays of the phonon dispersion curves in NpO_2 <cit.>, which was done with a small bulk sample and not with the grazing-incidence technique.
As Pang et al. showed <cit.>, the LO_1 optic mode is one of the main carriers of heat in the UO_2 system, but this mode was not accessible in our experiments, so a total estimate of the thermal conductivity of the irradiated sample could not be made.
§.§.§ Studies of irradiated films
Another series of experiments was conducted during work for a PhD degree at Cambridge University by A. J. Popel and collaborators. Thin films of UO_2 deposited on YSZ substrates in the three principal directions were irradiated at GANIL (Caen, France) with 110 MeV ^238U^31+ ions to fluences up to about 5 × 10^12 ions/cm^2. The first paper <cit.> showed that this dose was not sufficient to destroy the crystallinity of the films. The most stable orientation (least radiation damage) was the (111) plane of UO_2. There was no mixing of U and Zr at the substrate interface. A second series of UO_2 thin films, deposited on LSAT, was irradiated, again at GANIL, with 92 MeV ^129Xe^23+ ions to a fluence of 4.8 × 10^15 ions/cm^2 <cit.>. In this case considerable damage was done to the UO_2 films. The surface roughness increased and there was evidence of the “cauliflower-like” structures that are a feature of high-dose irradiations. In addition, aluminium was found to have diffused from the substrates into the UO_2 films, a phenomenon already observed by Strehle et al. <cit.>. In both these studies the film thicknesses were ∼ 1000 – 1500 Å and the radiation penetrated the whole film thickness.
A third paper <cit.> was published using the same films as described above, but focusing on analyses with core-level X-ray photoemission spectroscopy (XPS) using the O 1s line at ∼ 530 eV and the U 4f spin-orbit split lines at ∼ 380 and ∼ 391 eV. The positions of these transitions (and the accompanying satellites) allow an estimate to be made of the excess oxygen in the sample, i.e. the value of x in UO_2+x. After irradiation these values ranged from 0.07 to 0.11 for the films on LSAT (Xe irradiation) and 0.17 to 0.23 for those on YSZ substrates (U irradiation). Thus, despite the fluence being three orders of magnitude larger for the Xe ions, and the clear damage to the lattice structure, the excess oxygen was actually greater for the U irradiation of films on YSZ. Whether the substrate has any role in this process is one of the questions raised by this work.
A fourth paper <cit.> examined the Xe-irradiated films for dissolution in water. The experiments found that the irradiated samples showed a decrease in the amount of dissolved uranium compared to the corresponding unirradiated samples. This somewhat counter-intuitive result was ascribed to irradiation-induced chemical mixing of the UO_2 films with the substrate elements, which resulted in stabilization of the UO_2 matrix and increased its aqueous durability. The last paper in this series <cit.> returned to the analysis by XPS (and also valence-band UPS) and used a 1500 Å UO_2 (111) film deposited on YSZ. The film was then irradiated with ^40Ar^+ ions for various times and annealed at various temperatures. On the basis of the measured spectral parameters one can conclude that the annealed film with x = 0.12 contains mostly U^4+ and U^5+ ions, with some small amount of U^6+ ions also present. Embedding Xe into UO_2 films has also been reported by Usov et al. <cit.>.
§.§.§ Use of films to assure high-quality surfaces
As mentioned above, in the case of experiments that are very surface sensitive, it is sometimes easier to use a thin film than go to the lengths of polishing and annealing a single-crystal surface. It is known from a number of studies (e.g. <cit.>) that when exposed to air UO_2 acquires a layer of ∼ 30 Å in which the surface is UO_2+x, quite independent of whether the surface is polar or non-polar. No doubt a careful examination of such effects would show characteristics similar to those measured by Stubbs et al. <cit.>; however, if the measuring technique is extremely surface sensitive, then clearly the first few layers of a sample exposed to air do not represent stoichiometric UO_2. Such effects can be minimised by keeping the sample under vacuum, as in photoemission experiments, but may not be completely eliminated.
Thin epitaxial films give a simple way to eliminate such effects. The films are prepared at high temperature in vacuum, so that if they are removed in a “vacuum suitcase” they may be loaded into another vacuum chamber without exposure to air. This method was used in some recent experiments <cit.> using soft X-ray spectroscopy, as shown in Fig. <ref>. The incident X-ray energy was tuned to the N_4,5 edges of uranium (779 and 737 eV, respectively), corresponding to a wavelength of ∼ 16 Å. At such an energy the penetration depth of the X-ray beam is certainly only a few tens of Å at best, and the presence of any non-stoichiometric UO_2+x and/or roughness at the UO_2 surface may strongly attenuate the outgoing X-rays. The technique, known as Resonant Inelastic X-ray Scattering (RIXS), is able to resolve the high-energy multiplets, a question that has been discussed for UO_2 (and other actinide systems) for the last 50 years. This work has now been followed by experiments on epitaxial films of U_3O_8 and UN <cit.>. More information on this (and other) X-ray techniques may be found in Caciuffo et al. (2022) <cit.>.
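The quoted wavelength follows directly from the standard conversion λ(Å) = 12398.4/E(eV); a one-line check:

```python
# Photon energy to wavelength: lambda(A) = 12398.4 / E(eV).
for E in (737.0, 779.0):     # U N5 and N4 edges quoted in the text
    print(f"E = {E:.0f} eV  ->  lambda = {12398.4 / E:.1f} A")
# 737 eV -> 16.8 A; 779 eV -> 15.9 A, i.e. ~16 A as stated.
```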
§.§.§ Studies of higher oxides
The UO_2 – UO_3 phase diagram consists of a number of different phases. Good summaries of the defect structures that appear up to UO_2+x, where x ∼ 0.20, complementing the pioneering paper by B. T. M. Willis (1963) <cit.>, are those of Garrido et al. (2006) <cit.>, Rousseau et al. (2006) <cit.>, Wang et al. (2014) <cit.>, and Elorrieta et al. (2016) <cit.>.
Distinct phases exist in UO_2+x at x = 0.25 (U_4O_9), x = 0.33 (U_3O_7), x = 0.67 (U_3O_8), and x = 1 (UO_3), some having more than one allotrope. There is a copious literature on these systems, some of it going back to the 1960's. In terms of films, as shown in Table II, only a few have been grown epitaxially: including UO_2, these are U_3O_8 and UO_3 <cit.>.
(i) U_3O_8
U_3O_8 is particularly interesting as it is known to contain a mixture of U^5+ and U^6+, with twice as much of the former as the latter. The structure was first solved in 1964 by Loopstra <cit.>. Recently, two studies of polycrystalline U_3O_8 have been reported, and these raise a number of questions that may be answered using epitaxial films. No bulk crystals exist, which is true for all x > 0 in the U–O phase diagram, so epitaxial films represent the only single-crystal samples available. We have already reported the electronic structure of U_3O_8 in <cit.>, and this (and the associated XPS core-hole spectroscopy) is consistent with the U^5+/U^6+ model mentioned above <cit.>.
Enriquez et al. (2020) <cit.> have used thin films to measure the optical band gaps and report that UO_2 films have a direct band gap of 2.61 eV, whereas epitaxial α-U_3O_8 and α-UO_3 films exhibit indirect band gaps of 1.89 and 2.26 eV, respectively. This value for UO_2 seems somewhat higher than the accepted value of ∼ 2.1 eV, but we agree that the indirect band gap for U_3O_8 is ∼ 1.8 eV <cit.>.
(ii) Transition UO_2 to U_3O_8
Another key question is how the transition occurs between UO_2 and U_3O_8. The paper by Allen & Holmes (1995) <cit.> proposed that the transformation starts from the (111) face of UO_2 and proceeds in a number of steps, so that the short c axis (4.146 Å at RT) of the orthorhombic structure of U_3O_8 resembles the (111) interplanar spacing of cubic UO_2 (3.138 Å) after suitable relaxation.
Experiments by J. Wasik (University of Bristol thesis, 2021) <cit.> show that this idea is incorrect. Instead, oxidation starts from the (100) face of cubic UO_2 and distortions are made to produce a (130) plane of U_3O_8, as shown in Fig. <ref>. The UO_2 films were grown on YSZ for the experiments reported in this thesis because all three principal orientations can be grown on YSZ <cit.>. A further point from these experiments is that no similar observation could be made starting with either (110) or (111) UO_2 thin films. In both cases the oxidation resulted in the product peeling off the substrate, and no phases could be identified.
The process of going from one epitaxial symmetry to another is called “topotaxy” <cit.> and is quite unusual, although important in semiconductor technology <cit.>. Note that there are reports of preparing polycrystalline U_3O_8 films in which the authors find a strong texture, with the (130) diffraction line being the preferred direction.
§ URANIUM HYDRIDES, NITRIDES, AND SILICIDES
§.§ Introduction
Along with the element (Sec. <ref>) and the oxides (Sec. <ref>), the most work on bulk samples incorporating uranium has been done on U compounds, especially those such as UPt_3 or URu_2Si_2, which are representative of the well-known heavy-fermion superconductors. For the moment, with the notable exception of the early work on UPd_2Al_3 and UNi_2Al_3 at Mainz <cit.> starting in the mid 1990's, we are unaware of any work with epitaxial films of U heavy-fermions. This will certainly change, and we will discuss some of the possibilities opened up by fabricating such materials as epitaxial films in the Conclusions (Sec. <ref>).
However, the fabrication and exploration of thin films of the alloys, hydrides, nitrides, and silicides already represents a large and important field. Both nitrides and silicides have been discussed in terms of advanced fuels, primarily because both have much higher thermal conductivities than the presently used UO_2. The nitrides, particularly UN, have been the object of much basic research since the 1960's.
A great deal of work has been reported on the uranium hydrides. As discussed later, UH_3 is a ferromagnet, and has been of considerable interest over the years. Uranium-hydrides have also been thoroughly investigated as the reaction between these two elements is very strong, exothermic, and constitutes a considerable safety challenge if, for example, hydrogen is produced inside a vessel containing U metal. We shall discuss an experiment using bilayers of U metal and oxide that starts to address these reactions.
§.§ Growth of thin films of hydrides, nitrides, and silicides
The growth of U-H and U-N phases has been achieved by reactive growth with the presence of the relevant gas in the chamber during U deposition. The U-Si, as well as a range of other materials, have been prepared by co-deposition. Table <ref> summarises the various thin films in these categories that have been grown to date.
§.§ Science with thin films of hydrides, nitrides, and silicides
§.§.§ Thin films of uranium hydrides
The hydrides of uranium have been of interest for a long time. The most interesting property of UH_3 (which exists in two structural forms) is that it becomes ferromagnetic at ∼ 174 K. This was first discovered in Wroclaw, Poland, in 1952 <cit.>, and was particularly interesting at the time as UH_3 was the first material found to be ferromagnetic while composed of two elements that are themselves non-magnetic. Of course, there were originally doubts whether some impurity was playing a role, but later work <cit.> showed that the ferromagnetism was intrinsic, with a moment of ∼ 1.5 μ_B/U atom. Since then, much work has been done on this compound. The material was first prepared as thin films in 2004 <cit.> by depositing at room temperature on Si(111) wafers. Later, many more samples were produced, including alloys incorporating Zr and Mo, as discussed in the review articles published recently by L. Havela and collaborators <cit.>. The main reason for using thin films is that the properties of interest are deduced from photoemission spectra, which are sensitive just to the top 20 - 50 Å. The films must not be heated above ∼ 350 K as the compound decomposes, and if exposed to air UH_3 catches fire, so it must be handled cautiously! Various attempts were made at Oxford, and later at Bristol, to produce epitaxial films, but they have been unsuccessful to date. Epitaxial growth usually requires high temperatures so that the arriving atoms have considerable mobility, but this is not possible in the case of UH_3, as mentioned above.
(i) Stabilization of UH_2
An interesting development <cit.> in the UH_3 saga occurred when the authors used a Si(100) substrate with a = 5.43 Å. This value is close to the lattice parameters of the known dihydrides NpH_2 (a = 5.34 Å) and PuH_2 (a = 5.36 Å). The idea was to produce UH_2, which had not been reported as a stable compound. By cooling the substrate to T = 177 K, they managed to produce a film (∼ 4000 Å) of UH_2 with a lattice parameter a = 5.36 Å. Surprisingly, the film appears to be polycrystalline, with a random domain distribution and domain sizes of ∼ 500 - 1000 Å. The sample was capped with 30 Å of Mo. The absence of any preferred orientation suggests that the substrate-film interaction is relatively small; certainly at these temperatures the chances of attaining epitaxy would be small. UH_2 exhibits a somewhat lower T_C than UH_3, but, like UH_3, it shows relatively wide hysteresis loops, suggesting a strongly anisotropic ferromagnet despite the cubic structure <cit.>.
(ii) Experiments with hydrogen on bilayers of U and UO_2
As discussed earlier, UH_3 is a dangerous material. When U metal is stored in sealed containers in the presence of either moisture or organic material, hydrogen can be produced over long periods <cit.>. The hydrogen reacts with the U, producing pyrophoric, finely divided, radioactive powder in the container. A major safety incident can occur if the containers are then opened and a large amount of UH_3 is present <cit.>. The hydrogenation of U is obviously a complex reaction, and will depend on the pressure of hydrogen, the form of the uranium, and its temperature. Because a method exists to make epitaxial films of U metal in its α form (see Sec. <ref>), an experiment was undertaken using such a film coated with a layer of UO_2, allowing the hydrogen to react in-situ at the synchrotron beamline while the diffraction pattern was monitored as a function of time and temperature <cit.>. The UO_2 surface layer does not grow epitaxially on the underlying U, but there is a preferred orientation with the (111) reflection dominant. Studies were performed on a nominal 1000 Å metal layer with a deposited UO_2 layer of ∼ 200 Å, and also on a sample left to oxidise in air over two days, forming a layer of ∼ 600 Å oxide.
A number of points are clear from the diffraction patterns (Fig. <ref>) without further analysis.
(a) The U epitaxial layer is mainly of (110) orientation, but there is a ∼ 10 % formation of (002) grains, which are significantly smaller in size (probably of the order of 200 Å) than those oriented in the (110) direction, which clearly extend over the whole thickness of the U film. However, the (110) grains are consumed much faster with hydrogen exposure – a property found throughout the experiment and indicating a significant anisotropy in the consumption of the metal.
(b) The peaks just below the position of the UO_2 (200) come from higher oxides in the case of oxidation in air, and these are rapidly removed by exposure to hydrogen – as expected.
(c) Both UO_2 peaks decrease in intensity; they also move in opposite directions: the UO_2 (111) d-spacing decreases and the UO_2 (200) d-spacing increases with hydrogen doping. Of course, these observations come from different particles, since the experiment samples only those with their scattering vectors exactly along the specular direction. However, this implies a directional effect of the hydrogen passing through the UO_2; previously it was thought that the UO_2 was not changed by H-exposure, but that is clearly incorrect. The change of intensity is not easy to explain, but may be related to the change of particle size.
(d) One mystery was the absence of any intensity from the UH_3 that must be forming, given the clear consumption of U metal. The main diffraction line one would expect to observe would be at 2θ ∼ 30^∘, and there is no sign of a peak at this value at the end of the H exposure. This mystery was partly resolved by EELS work on a separate sample, where the UH_3 was identified at defects and grain boundaries, and was almost certainly either nanocrystalline or amorphous.
These experiments have raised a number of questions about what is clearly a complicated process. The simplest method of hydrogenation would be that the hydrogen arrives from the UO_2 layer and starts consumption of the U from the top towards the bottom of the U film. However, such a process would lead to the particles with (110) getting smaller, and hence the peak widths increasing. This is not observed, and the EELS experiments also rule this out. A more likely model involves grain-boundary corrosion with the interface moving laterally into the grains. A consequence of the consumption of U is a proportional increase in the d-spacing of U(110), and to a lesser extent the U(002), in the direction of film thickness.
The experiment described here <cit.> was a first attempt at a complicated problem. More work needs to be done, including, for example, reflectivity studies and using better samples for the (110) and (002) α-U orientations. In addition, neutrons could be very effective in this study, especially for locating the hydrogen as there is a strong contrast with neutrons between hydrogen and deuterium.
§.§.§ Magnetism and electronic structure of uranium nitride thin films
Uranium mononitride (UN) has been of interest for many years, both for applied projects and for fundamental research.
On the applied side, there has been increased interest in the last two decades <cit.> in using UN as an advanced-technology nuclear fuel to replace UO_2.
Compared to UO_2, UN has a higher U density (thus enabling a lower enrichment to be used), a better thermal conductivity, and an equally high melting point (the last two affecting safety).
However, it has two disadvantages: a high reactivity with water and oxygen above 200 ^∘C, and the large cross section of ^14N, which implies that a substantial amount of ^14C will be produced.
On the fundamental side, UN is part of a large group of actinide pnictides that have the simple fcc rocksalt structure, and many experiments on these materials have been reported since the 1960s.
There is still a debate over the electronic structure of UN.
A recent review <cit.> advances the case that UN is a mixed system, with the 5f electrons partly localized and partly itinerant.
This is not in agreement with earlier work using angular-resolved photoemission <cit.> or with the results from neutron inelastic scattering <cit.>, where broad excitations and no crystal-field levels were reported; both these experiments suggest an itinerant model would be more appropriate.
UN is known from the work of Curry in 1965 <cit.> to be antiferromagnetic below 53 K, with an ordered moment of only 0.75(10) μ_B.
The effective magnetic moment above T_N is ∼ 2.7 μ_B.
Recent resonant inelastic X-ray scattering (RIXS) experiments show that UN cannot be described in terms of a localized 5f^3 configuration, and that a band description is certainly more appropriate, in agreement with the ARPES and neutron experiments <cit.>.
Thin films were first reported from ITU, Karlsruhe, in a paper by Black et al. <cit.>.
Using reactive sputter deposition onto glass, they showed that the stoichiometry of the films deposited, measured by XPS, could be varied by changing the N_2 partial pressure.
Structural analysis of these films showed preferred orientation in the (111) direction, with an average grain size of 170 Å for the films deposited at room temperature, and somewhat larger for films deposited at 400 ^∘C <cit.>.
Other properties also changed with deposition temperature, with the strength of preferred orientation, the residual stress and the density of structural defects all decreasing with increasing temperature.
The magnetic studies presented some puzzling results, with no clear sign of the AF transition in susceptibility measurements.
In the films deposited at higher temperatures, the long-range AF behaviour of crystalline UN is replaced by a ferromagnetic-cluster-glass behaviour resulting from defective antiferromagnetism, while the highly disturbed films (low-temperature deposition) exhibit weak Pauli paramagnetism.
Similar experiments were done on US films (US has the same crystal structure as UN, and in the bulk is a ferromagnet at 177 K), and some suppression of the T_C was observed <cit.>, but less than observed for UN.
The first epitaxial single-crystal UN thin films, fabricated by E. Lawrence Bright et al. <cit.>, used a (001) Nb buffer on a (1-102) Al_2O_3 substrate, with the Nb acting as a physical and chemical buffer to stabilise (001) UN.
These UN films were used in an attempt to find the crystallographic distortion at the AF ordering temperature T_N, which has been controversial. In the original AF model of UN, Curry assumed that the ordering was of type-I, with ferromagnetic layers of atoms stacked in a + – + – sequence, the so-called 1k arrangement, with the layer moments perpendicular to the layers.
Since the overall symmetry is cubic, this implies that any cube axis can be the direction of the moments, so that there are three distinct spatial domains. Each domain will have tetragonal symmetry when the U moments order, so there should be a magnetostrictive distortion at T_N, resulting in a clear splitting of the reflections from the different domains. Marples et al. <cit.> claimed to have found such a distortion, but a careful examination of their paper shows that they did not observe a finite splitting of the peaks at high angle, but simply a broadening of the peaks. Whereas this might indicate a distortion, it could also be a strain effect. A re-examination of this effect <cit.> showed that the distortion, if present, is much smaller than suggested by Ref. <cit.>, and more in line with the experiments reported by Knott et al. <cit.>. However, strain effects were observed in the experiment. A 700 Å film was used and the tensile strain was measured as +20 × 10^-6. We show below in Fig. <ref> the nominal change of the lattice parameter at the magnetic ordering temperature. The AF ordering temperature (T_N) is slightly lower in the film (46 K) than reported in the bulk (53 K).
The point of this figure is to show that although the expansion of the lattice parameter at low temperature is clearly present, the magnitude of this effect is sample dependent and can vary by almost a factor of three between different samples. This expansion is certainly a property of UN, but its magnitude is determined by the macro-strain properties of the individual samples. Marples et al. <cit.> proposed that the distortion at T_N gave a strain such that 2(c - a)/(c + a) = -650 × 10^-6. Lawrence Bright et al. <cit.> have lowered that to < |200 × 10^-6|, in agreement with Ref. <cit.>. With the absence of such a distortion there remains some doubt over the magnetic structure of UN. An alternative interpretation would be that the structure is a 3k type-I structure, as found in USb <cit.>; in such a system the symmetry below T_N remains cubic. Further work on this is warranted; however, neutron experiments such as Ref. <cit.> require large single crystals (∼ 1 g) and cannot be performed on thin films.
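To see why only high-angle reflections can settle this question, differentiate Bragg's law: a fractional spacing change Δd/d produces a peak splitting Δ(2θ) ≈ 2 tan θ · (Δd/d). A short sketch, taking the distortion proposed in the older work and two illustrative reflection angles:

```python
import math

# Expected splitting of a Bragg reflection for a small tetragonal distortion,
# from differentiating Bragg's law: d(2theta) = 2 * tan(theta) * (dd/d).
# The strain is the value proposed by Marples et al.; the choice of
# reflection angles is illustrative only.
strain = 650e-6                  # 2(c-a)/(c+a) proposed in the older work
for two_theta_deg in (60.0, 120.0):
    theta = math.radians(two_theta_deg / 2)
    split = 2 * math.tan(theta) * strain        # radians
    print(f"2theta = {two_theta_deg:5.1f} deg: "
          f"splitting ~ {math.degrees(split):.3f} deg")
```

At 2θ = 120° the proposed distortion corresponds to a splitting of ∼ 0.13°, which should be resolvable; the film measurements bound any splitting well below this, hence the revised upper limit.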
(001) U_2N_3 was stabilised on (001) CaF_2.
The second experiment with synchrotron radiation was performed on the U_2N_3 epitaxial film. The crystal structure is known to be the bixbyite type of body-centered cubic, isostructural with Mn_2O_3 and also rare-earth systems such as Gd_2O_3. Earlier work on U_2N_3 was reported by Troc <cit.> and showed that stoichiometric U_2N_3 has a lattice parameter of 10.70 Å, but that with additional nitrogen this reduces to ∼ 10.60 Å by about UN_1.80. At the same time the stoichiometric material (UN_1.50) orders antiferromagnetically at ∼ 90 K, and with added nitrogen the T_N reduces so that by UN_1.80 there is no ordering. There is no report of a successful neutron experiment finding the magnetic structure of U_2N_3, so that aspect is unknown.
Some strain was found in the U_2N_3 epitaxial film grown with the (001) orientation on CaF_2 substrates. The growth-direction lattice parameter was found to be 10.80(1) Å, whereas the in-plane parameters were 10.60(2) Å (i.e. a strain of +1.8%). The atomic volume corresponds to a lattice parameter of 10.67 Å, suggesting the films are close to stoichiometry. When the X-ray energy was tuned to 3.726 keV, which corresponds to the U M_4 resonance, extra peaks were found at positions forbidden for the bcc structure at temperatures below T_N = 73.5 K. This gives a simple q = 1 AF wave-vector for the magnetic structure, as was found for the isostructural Yb_2O_3 <cit.>. Determining the magnetic configuration is considerably more difficult. In the case of Yb_2O_3 the configuration is non-collinear, but the T_N in that case is 2.3 K, so the interactions are certainly stronger in U_2N_3 than in the rare-earth materials, and probably involve more direct exchange interactions. In principle, the resonant magnetic intensities are related to the arrangement of moments, but the main problem is making reliable absorption corrections for the off-specular reflections when using a film of 2000 Å. There are two independent sites for U atoms in U_2N_3, and there is interest in knowing the magnetic configuration. An attempt was made with neutron diffraction at the WISH instrument at the ISIS spallation source, but although both a film and a 1 g polycrystalline sample were used, no magnetic scattering was observed. This suggests the U moments are below ∼ 0.5 μ_B.
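The 10.67 Å value quoted for the atomic volume is simply the cube root of the strained unit-cell volume; a minimal check:

```python
# Volume check of the film stoichiometry: the cube root of the strained
# unit-cell volume gives an effective relaxed lattice parameter.
a_out = 10.80    # A, growth-direction parameter of the U2N3 film
a_in = 10.60     # A, in-plane parameter
a_eff = (a_out * a_in**2) ** (1.0 / 3.0)
print(f"a_eff = {a_eff:.2f} A")   # ~10.67 A, close to stoichiometric U2N3
```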
As discussed above, the crystal structure of UN is the simple rocksalt structure, that of U_2N_3 being the bixbyite structure, with two independent sites for the uranium atom (U1 and U2), whereas only one exists in UN. A number of methods, including photoemission experiments <cit.>, have been used to estimate the valency of these materials, and for UN this is ∼ 3+, i.e. U(III), but for U_2N_3 the valency is higher. Such methods are not site selective, so leave open the question of the valency at each individual site. This is important as the U(VI) valent state is highly soluble in water, so if that is present in at least one of the sites of U_2N_3, this could explain its high corrosion rates discussed below <cit.>.
The magnetic properties give one clue to the valency; for example, U(VI) has no 5f electrons and so cannot be magnetic. In an effort to extract further information on the valency and bonding of the two separate uranium sites, “diffraction absorption experiments” were performed at the U M_4 edge on a number of Bragg reflections of the bcc 2000 Å U_2N_3 film.
The reflections have different contributions from the two independent uranium atoms, as the atomic sites have different symmetries. For strong Bragg reflections, in which the scattering from the U1 and U2 atoms is in phase, or one set is absent, the expected result is a dispersive curve that reflects the combined effect of both the real (f_o + f') and imaginary (f”) parts of the uranium scattering factor. Such a curve is shown in green in Fig. <ref> for the (112) reflection, in which only the U2 atoms participate. (Scattering from nitrogen is neglected, as it is far weaker than that from uranium; in addition, there is no edge sensitive to nitrogen in the energy range covered.)
However, for reflections in which the strong Thomson scattering (from the 86 core electrons of uranium) is reduced by cancellation between the two separate uranium sites, a very interesting profile is observed (Fig. <ref>): precisely the energy profile at the M_4 edge of the imaginary part (f”) of the U scattering factor. This profile reflects the fact that around one of the U sites there is an aspherical charge density involving the U 5f electrons. For example, for the (013) reflection, which is forbidden and has no contribution from the Thomson (spherical) charge density, this aspherical part is the only contribution to the scattering intensity. Similarly, for the (002) and (022), in which the strong spherical charge-density contributions almost cancel, the aspherical part is also observed. From the pattern of the intensities, it becomes clear that any aspherical contribution from the U1 sites must be small, suggesting that these sites may possibly have the U(VI) valency, in which there are no occupied 5f states.
This effect has been observed before, mainly at the K edge of the transition metals <cit.>. However, at the K edge of the d transition metals there is the possibility of both dipole and quadrupole transitions, making the identification of the underlying physics complicated. For the U M_4 edge this ambiguity is removed; the transition is definitely of dipole symmetry, revealing an aspherical contribution from the 5f states. The local non-centrosymmetric coordination of this distribution around the U nucleus then couples to the imaginary scattering factor (f”), giving rise to scattered intensity, with a distinctive energy profile, at the Bragg position <cit.>. In the case of U_2N_3 the effects are temperature independent, and so not related to the magnetic order at ∼ 75 K. They are almost certainly induced by covalency, probably mixing between the U 5f states and the nitrogen 2p states.
In conclusion, these experiments strongly suggest that the U1 site may have a significantly higher valency, quite possibly U(VI), and this is responsible for the rapid corrosion rates of U_2N_3. In addition, these experiments have opened the way for more quantitative modelling in such systems, based on the observation of an aspherical 5f charge distribution around the U2 nucleus.
§.§.§ Reactivity studies of UN
Thin films have become relatively popular for investigating the oxidation and corrosion behaviour across the uranium-nitrogen phase diagram, with several studies coming from Bristol <cit.>, and from the Science and Technology on Surface Physics and Chemistry Laboratory, Mianyang, China <cit.>. Dissolution experiments comparing ∼ 600 Å films of both UN and U_2N_3 with similar UO_2 films in a 0.1 M H_2O_2 solution gave surprising results, shown in Fig. <ref>. Reflectivity measurements allowed a measurement of the film thickness, and hence of the dissolution rate <cit.>. The dissolution rates were unexpected, being equivalent to 0.033(1), 0.010(2), and 0.19(3) mg/cm^2/hr for UO_2, UN, and U_2N_3, respectively. Although in the literature the UN dissolution rate in water is actually greater than that for UO_2, when the effect of radiolysis is simulated using H_2O_2 the results are different.
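Since XRR measures thickness, it is instructive to convert the quoted mass-loss rates back into film-thinning rates. The sketch below does so assuming nominal bulk densities and that the quoted rates refer to total compound mass loss (both are assumptions); on this basis a 600 Å U_2N_3 film would survive well under an hour in the H_2O_2 solution.

```python
# Convert quoted mass-loss rates (mg/cm^2/hr) into thinning rates (A/hr),
# which is the quantity XRR actually tracks.  Densities are nominal bulk
# values and are assumptions, as is the reading of the rates as total
# compound mass loss rather than uranium mass loss.
rates = {"UO2": (0.033, 10.97), "UN": (0.010, 14.32), "U2N3": (0.19, 11.24)}

for name, (rate_mg_cm2_hr, rho) in rates.items():
    thinning_cm_hr = rate_mg_cm2_hr * 1e-3 / rho   # cm/hr
    print(f"{name:5s}: ~{thinning_cm_hr*1e8:5.0f} A/hr")
# UO2 ~ 300 A/hr, UN ~ 70 A/hr, U2N3 ~ 1700 A/hr
```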
Studies from Mianyang have looked at the surface oxidation of thin films with different N/U ratios produced by reactive RF magnetron sputtering. As UN does not accommodate stoichiometry changes, this produced mixed-phase films. Auger electron spectroscopy (AES) of U, UN_0.23 (composed of U and UN), and UN_0.68 (mainly UN) films before and after oxygen exposure found that an oxide layer of UO_2 formed on the surface <cit.>. However, later work using XPS revealed that the oxidation of the UN and metallic U phases in the films is not a simple combination of two independent oxidation behaviours, but an interactive association <cit.>.
Oxidation studies on UN_1.66 films found that UN_xO_y oxides formed, both when investigated with AES <cit.> and with XPS <cit.>. Further investigations on a UN_1.85 film found a three-layered oxidation surface structure, composed of uranium oxides (U_4O_9, UO_2+x, and UO_2), a U-N-O ternary compound layer, and an N-rich uranium nitride UN_1.85+x layer. This layered structure is proposed to be responsible for the measured long-term stability of the surface oxide layer <cit.>.
Epitaxial films grown by Lawrence Bright at Bristol <cit.> have been used to investigate the oxidation of a UN (001) surface <cit.>. Using such single phase films allowed the reaction of UN to be investigated without the influence of a secondary metallic U phase. XRR measurements of the thickness of the surface layers that formed on exposure to air showed that the surface passivated. The chemistry of these layers was investigated with a XPS depth profile, identifying a surface UO_2+xN_y layer and U_2N_3 intermediate layer, not dissimilar to the oxidised surface of U_2N_3+x described above <cit.>. The epitaxial nature of the sample, producing a single (001) oriented UN single crystal surface, provided further insight into the reaction: XRD measurements of the oxidised surface showed a topotactic relationship (see Fig. <ref>) between the film and surface oxide, which is proposed to play a critical role in the passivation mechanism.
§.§.§ Thin films of uranium silicides
Uranium silicide phases have been of interest since the 1940's.
Work conducted by A.R. Kaufmann, B.D. Cullity, and G. Bitsianes, as reported by W.H. Zachariasen in 1948 <cit.>, described the crystal structure of a tri-silicide phase (USi_3), and proposed additional uranium silicide phases: USi_2, U_2Si_3, USi, U_5Si_3, and U_10Si_3.
Zachariasen presented in the 1948 paper <cit.> the uranium disilicide phase, USi_2, which was found to be isomorphous with ThSi_2 and PuSi_2, all with body-centred tetragonal structures and the I4_1/amd space group.
The uranium-silicon binary phase diagram, provided by Okamoto et al. <cit.> and Middleburgh et al. <cit.>, indicates there are around seven stoichiometric phases, which all exist as line compounds.
The nature of these line compounds suggests that the fabrication of individual phases in the bulk is challenging. Middleburgh et al. <cit.> suggest that the U_3Si_2 phase cannot incorporate additional uranium into the lattice without forming mixed phases. It is this factor that results in the separation and production of multiple phases if the stoichiometric U:Si ratio is not satisfied. As a result, the engineering of U-Si phases is particularly difficult for bulk investigations.
The Reduced Enrichment for Research and Test Reactors (RERTR) Program, initiated by the Department of Energy (US-DOE), suggested the use of U-Si phases in order to implement low-enriched uranium (LEU) fuel compounds within research reactors to prevent proliferation <cit.>.
Having been highlighted as nuclear fuel compounds, the U-Si phases have gained attention with regard to their fundamental behaviour. Remschnig et al. <cit.> investigated the structural and magnetic behaviour of the binary U-Si phases, probing U_3Si, U_3Si_2, USi, U_3Si_5, USi_1.88, and USi_3. Bulk single crystals of U_3Si_2, USi, U_3Si_5, and USi_1.88 were extracted from arc-melted samples in this study. Additional investigations conducted by Antonio et al. <cit.> probed the thermal and transport properties of the prime ATF candidate, U_3Si_2. Understanding the thermal behaviour of fuel candidates is integral to their consideration as commercial fuels and eventual implementation into the nuclear fuel cycle. Here, the heat capacity, electrical resistivity, Seebeck and Hall effects, and thermal conductivity were probed in a temperature range of 2–300 K in magnetic fields up to 9 T. The U_3Si_2 samples used in this investigation were made by arc-melting elemental U and Si powders; impurities of USi and UO_2 were observed using XRD during sample characterisation. These low-temperature thermal investigations on the U-Si phases are complementary to the high-temperature studies conducted by White et al. <cit.> on bulk sintered U_3Si_2 samples. A major roadblock in investigating uranium silicide phases is sample fabrication.
The production of U-Si materials often results in the formation of multi-phased systems <cit.>. As a result, it can be difficult to identify and attribute structural or chemical behaviours to a particular phase.
The adaptability of engineering U-based surfaces on substrates suggests that producing uranium-silicon phases in this form is suitable for both applied and fundamental investigations. An early investigation, conducted by S. Fujimori in 1988 <cit.>, presented the electronic structure, probed using X-ray photoelectron spectroscopy (XPS), of uranium deposited upon a [100]-oriented silicon surface. The study aimed to understand the interactions between the two elements, and to probe the possibility of the epitaxial growth of uranium silicides upon the [100]-silicon surface. The uranium layers were deposited using an `MBE-like' technique, and characterisation of the surfaces was conducted in-situ. The work presented in <cit.> did not establish whether the annealing of uranium deposited on [100]-oriented Si surfaces resulted in the formation of uranium silicide phases.
A second paper, by Fujimori in 2000 <cit.>, described the deposition of U metal onto a prepared (111)-oriented Si surface, as a different way of controlling the 5f electrons when compared to bulk studies, and to further explore the possibility of producing epitaxial uranium silicide phases. As in the initial paper, the uranium surfaces were deposited using a method described as `MBE-like', and were subsequently characterised using in-situ XPS. Valence band spectra collected from the U surfaces suggested structural disorder at the interface between substrate and film, noted as a broadening of the Si sp band states at 3, 7.5, and 10 eV. Additionally, the Si and U atomic cross-sections provided by Yeh et al. <cit.> are 0.13 Mb for U 5f, 0.01 Mb for Si 3s, and 0.0017 Mb for Si 3p, suggesting that even small amounts of U deposited upon the (111)-Si may dominate the valence band spectra.
A large body of work conducted by Harding et al. <cit.> in 2023 showed the stabilisation of four uranium silicide phases as epitaxial thin films (U_3Si, U_3Si_5, α-USi_2, and USi_3), together with polycrystalline U_3Si_2. The U-Si phases presented in <cit.> were all synthesised using DC magnetron sputtering, where a co-sputtering technique was implemented, allowing material from U and Si targets to be deposited simultaneously under UHV conditions. This technique allowed control over the relative U and Si contents, resulting in the formation of U-Si phases that extend over the entire binary phase diagram.
These phases were found, using X-ray diffraction techniques, to be epitaxial with their respective single-crystal substrates (Table <ref>). A deeper study into the chemical bonding of the U-Si phases was also presented in this work. Using XPS, Harding et al. <cit.> demonstrated the metallic nature of these U-Si thin films, with clear asymmetry noted in the U-4f core-level spectra.
Using the area ratios between the U-4f and Si-2s core levels, the U-Si thin films were found to be stoichiometric within error, with the exception of the α-USi_2 phase.
The uranium disilicide, presented in Table <ref>, has a U:Si ratio representative of a uranium monosilicide phase.
From the structural characterisation presented in <cit.>, the data suggested the formation of the tetragonal α-USi_2 phase, similar to the data presented by Sasa et al. <cit.> in their 1976 paper.
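The stoichiometry determination above follows the standard XPS quantification recipe: each core-level area is divided by a relative sensitivity factor (RSF) before ratios are taken. The numbers below are purely hypothetical and serve only to illustrate the arithmetic; real RSFs depend on the instrument, geometry, and photoionisation cross-sections.

```python
# Generic XPS quantification: atomic ratio from core-level areas divided by
# relative sensitivity factors (RSFs).  All numbers here are hypothetical.
A_U4f, A_Si2s = 152_000.0, 4_100.0   # integrated peak areas (arb. units)
RSF_U4f, RSF_Si2s = 74.0, 1.0        # hypothetical sensitivity factors

n_U = A_U4f / RSF_U4f                # quantity proportional to U content
n_Si = A_Si2s / RSF_Si2s             # quantity proportional to Si content
print(f"U : Si = 1 : {n_Si / n_U:.2f}")   # -> 1 : 2.00, i.e. USi2
```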
The work conducted on the uranium silicide phases has demonstrated the ability to control stoichiometry using epitaxial lattice matching.
The understanding of the U-Si epitaxial system, as presented by Harding et al. <cit.> and trialled by Fujimori <cit.>, can form the basis of using epitaxial stabilisation to navigate other phase diagrams.
§ CONCLUSIONS AND FUTURE PROSPECTS
We have attempted in this review to give an account of several decades of work on U-based thin films. Various efforts were made in the period 1960 – 2000, most of which are discussed in this review, but none managed to continue over a long enough period to build up a substantial body of work that encouraged other laboratories to start a significant effort. It is important to distinguish between attempts to produce thin samples to reduce the radioactive inventory, which have been widespread over the years, and thin films on chosen substrates in an effort to make epitaxial (or at least strongly textured) films. Thomas Gouder and his collaborators at the European Commission's laboratory in Karlsruhe, Germany, have been involved primarily in the first effort discussed above, and have pushed beyond U into Pu, Np, Am, and even Cm.
This changed with our own program, first starting at Oxford University in ∼ 2002 and then transferred in 2011 to Bristol University, and also the program at Los Alamos National Laboratory, which started at about the same time <cit.>. Both these programs have concentrated on uranium, and aimed to prepare epitaxial films. As discussed in Sec. <ref>, the epitaxial films of PuO_2 <cit.> were also made at LANL by the polymer assisted deposition (PAD) technique, but all other samples have been with U. More recently, an effort using pulsed laser deposition (PLD) has also been mounted at LANL <cit.>.
In discussing these potential advances in actinide materials, we need to be aware that these materials are radioactive, and not familiar to the general public, except in connection with nuclear fuel (especially irradiated) or nuclear weapons. However, uranium is a natural element found in all soil, and also in the human body to the extent of 50 – 100 μg for an adult; thorium is similar. To put this in perspective, a thin film of UO_2 of 5 × 5 mm^2 with a thickness of 1000 Å contains ∼ 25 μg of uranium and has an activity of 0.3 Bq; a banana has an activity of ∼ 15 Bq. When working with these materials we usually cover them with a 50 Å layer of Nb, so no radioactive particles can escape, and any potential device would be suitably encapsulated. It is clear that the use of thin films of thorium or uranium (which would probably have a thickness less than 1000 Å) does not represent any kind of hazard. Of course, working with actinides heavier than uranium, e.g. plutonium, is a different matter; they can be prepared and used only in specialized laboratories.
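These numbers are easily verified. The sketch below, assuming the bulk UO_2 density and the specific activity of ^238U, reproduces both the ∼ 25 μg uranium mass and the ∼ 0.3 Bq activity.

```python
# Mass and activity of a typical UO2 film, reproducing the quoted numbers.
rho = 10.97                      # g/cm^3, bulk UO2 density
area = 0.5 * 0.5                 # cm^2, i.e. 5 x 5 mm^2
thickness = 1000e-8              # cm, i.e. 1000 A
m_UO2 = rho * area * thickness   # g of UO2
m_U = m_UO2 * 238.0 / 270.0      # uranium mass fraction of UO2

# Specific activity of 238U is ~12.4 Bq per mg (natural U is somewhat
# higher once 234U and 235U are included).
activity = m_U * 1e3 * 12.4      # Bq (m_U converted from g to mg)
print(f"m_U ~ {m_U*1e6:.0f} ug, activity ~ {activity:.1f} Bq")
# -> m_U ~ 24 ug, activity ~ 0.3 Bq
```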
Thin-film methods, capable of producing high-quality single-crystal films with thicknesses ranging from 100-5000 Å, have played an increasing role in the study of actinide-containing materials as they are, by their very nature, extremely low-mass samples, naturally reducing radioactive risks. The recent and dramatic increase in X-ray flux available at many large synchrotron facilities, plus the significant development of grazing- or low-incidence-angle scattering techniques, has mitigated the experimental difficulties previously associated with such extremely low scattering-volume samples, opening the door to a wealth of experimental opportunities, some of which are discussed in this review. A wider perspective on synchrotron use with actinides can be found in <cit.>.
As well as providing a safer method for actinide studies, the use of thin films, in particular epitaxial thin films, has numerous other benefits over bulk crystals, and we expect these to play an increasing role in the future of actinide research. Lattice matching provides an additional dimension to the synthesis phase space, through which various crystal film orientations can be synthesized. Strain (stretching or compressing particular crystallographic directions) can be used as a tuning parameter for various physical phenomena, metastable phases can be stabilized at ambient conditions, and some phases not present in the bulk phase diagram can even be synthesized in thin-film form. We report on early experiments with elemental uranium in this context (see Sec. <ref> et seq.). The work by Sharma et al. <cit.> with UO_2 is another example, see Section <ref>.
Such films are naturally suited to almost all forms of transport measurements, in particular in-plane directionally dependent studies. Interface and proximity effects can be explored systematically through the fabrication of multilayers and heterostructures designed with Ångstrom-scale precision, and such methods naturally extend to the exploration of exotic devices with actinide-containing functional layers for possible advanced applications. The theoretical paper by Dennett et al. <cit.> discusses such a project.
An important step was made at the University of Illinois with the discovery <cit.> that epitaxial UO_2 could be deposited by sputtering U onto yttrium-stabilised zirconia (YSZ). There has been an effort also at Charles University, Prague, especially on the hydrides, and they discovered that UH_2 (with the cubic fluorite structure) could be stabilized as a thin film (see Sec. <ref>). Some experiments have been reported from facilities in China, especially at the Surface Physics and Chemistry Laboratory, Jiangyou. Recent work from them suggests that cubic UN_2 with the CaF_2 structure can be produced as a thin film <cit.>.
More recently, Idaho National Laboratory (INL) has announced that it is starting a thin-film program on the actinides using molecular-beam epitaxy. INL has also published a review of possible work with actinide thin films <cit.>. This is a most useful exercise in setting the stage for further work, and contains an important source of the literature on this subject. We note, however, that the “mineralization” technique discussed in that review is used to prepare single crystals, and does not involve any substrate, so it does not belong in the category of “thin-film samples” as we have discussed in this review. Furthermore, the methods mentioned on transuranium systems do not explore the possibility of producing epitaxial (i.e. single crystal) samples, so the number of bona fide “thin-film samples” produced is smaller than it might first appear.
In covering this wide field of endeavor, it is useful to make a distinction between different samples on the following basis. First, (category A) one can imagine that epitaxial samples can be made that allow the study of essentially bulk properties. Good examples in this category are the epitaxial samples of the bcc alloys with U-Mo (Sec. <ref>). No single crystals of these can be made in the bulk, although there are numerous studies of the alloy system. The production of epitaxial samples thus allowed bulk properties, such as the phonon-dispersion curves, to be examined. In this study, important diffuse scattering was observed <cit.> and led to the discovery of a new type of correlated disorder. A second example would be U_2N_3 (Sec. <ref>), where, again, single crystals of the bulk material have not been produced, so the synchrotron experiments discovered some new effects in this system <cit.> that were not evident earlier. Yet another example is the study of the phonons in radiation-damaged UO_2 (Sec. <ref>); here the epitaxial films allow homogeneous damage of the films by irradiation in beams of charged particles, which can then be examined by synchrotron-radiation techniques <cit.>.
The second type of investigation (category B) is based on creating epitaxial samples that have new properties, some of which may be a consequence of the interaction with the substrate. An excellent example of this is all the work performed so far on the elemental metal α-uranium (Sec. <ref>), where the properties of the charge-density wave can be manipulated by the different strains of the α-U lattice caused by depositing on different substrates. This has added considerably to our understanding of the metal’s exceptional properties, and led to a further effort to prepare hcp-U, which does not exist in the bulk (Sec. <ref>). In this category, we must also include dissolution experiments (Sec. <ref>), initially performed on thin (∼ 100 Å) films of UO_2. Of course, this effect is present at any UO_2 surface, but if one uses a bulk crystal then the effect at the surface will be swamped by the response of the bulk crystal, and it will be essentially impossible to measure such an effect. A third example is the work on bilayers of U/Fe (Sec. <ref>) and multilayers including U (Sec. <ref>).
Quoting this third example, as well as the ideas proposed by Dennett et al. <cit.>, brings us to the question of the interface. As pointed out in this latter reference, the interface is crucial to the operation of any heterostructure. So far, our experience with the interfaces in U systems has been somewhat mixed. This was also an issue in the early attempts (at IBM) to make memory systems from multilayers of amorphous UAs (which is a ferromagnet at T_c ∼ 140 K) and elemental cobalt <cit.>. A close examination of one of these samples <cit.> showed a poor interface and mixing of the two individual elements over a region of ∼ 10 Å between the two components. Following these efforts, a series of multilayers were made at Oxford University (Sec. <ref>) of the ferromagnetic elements and U metal. In this work, the interfaces involving the ferromagnetic 3d elements, Fe, Co and Ni, were poor, with considerable inter-diffusion over a region of at least 15 Å across the nominal interfaces <cit.>, but with U/Gd, the interfaces were extremely sharp and there appeared to be no interdiffusion <cit.>. Similarly, the interfaces of the samples with permalloy/U <cit.> and U/Fe <cit.> are probably also poor. A considerable effort needs to be made to understand how to make these interfaces better, perhaps by using an alloy of U, or even a compound, or by depositing an intermediate thin layer. Dennett et al. <cit.> have suggested that UN_2 and GaAs (created as a superlattice) might form a useful device, but at this stage no work has been done on trying to make such a compound; all we know is that, if it exists, the interfacial strain would be ∼ 1%. In fact, no superlattices of any material containing U have ever been produced! The closest is the work on U/Gd multilayers <cit.>, but even there, the hcp-U was not ordered in-plane, as there is a large misfit between the in-plane parameters of U (∼ 3 Å) and those of Gd (3.64 Å). Much remains to be done, but our progress over the last two decades gives rise to optimism that these challenges can be met.
There are undoubtedly many compounds containing U (Category A) that can be fabricated as thin films. An excellent example is the early work (in the 1990s) done first at the University of Darmstadt and then at Mainz by Adrian and his colleagues. This work produced epitaxial films of the heavy-fermion compounds UPd_2Al_3 and UNi_2Al_3 <cit.> deposited on LAO with the molecular-beam technique. No effort, so far, has been made at either Oxford/Bristol or Los Alamos National Laboratory to prepare any heavy-fermion materials, so this is a completely open field. For example, a great deal could be learnt about actinide electronic structure by having thin epitaxial films of the isostructural UCoGa_5, NpCoGa_5, and PuCoGa_5. The 5f states in these three systems show paramagnetic behavior in the first compound, antiferromagnetic behavior (at 47 K) in the second, and superconducting behavior (at 18 K) in the Pu compound. One could imagine a series of important experiments on such samples, including angular-resolved photoemission, that would shed further light on the electronic structure of the 5f states, and on what features of that electronic structure are responsible for such diverse behaviour in three isostructural compounds.
Theoretical studies have proposed a number of interesting properties in U systems where single crystals would certainly be welcome for further studies and comparisons with theory. Examples include a topological-insulator-to-Weyl-semimetal transition in the system UNiSn <cit.>. Searching for two-dimensional actinide systems (such as can be stabilized in thin films), Lopez-Bezanilla <cit.> has identified the system UB_4 as potentially of considerable interest, as the theory shows that the dispersion in the band states (resulting in Dirac cones) is driven by the hybridization of the uranium 5f orbitals with the p_z orbitals of the B atoms. Another example concerns possible magnetism in U/W(110) superlattices <cit.>. Of course, magnetism in some form has already been predicted at pure U surfaces <cit.>, and this would be interesting, but difficult, to detect. Most of the theoretical techniques used for these predictions have used pure density functional theory (DFT), and it is known that these methods over-estimate the tendency for spin polarization in the actinides. Isaacs and Marianetti <cit.> have argued that a combination of DFT and dynamical mean-field theory (DMFT) is needed to address the electronic structure of correlated electron materials <cit.>. However, such materials need to be prepared and examined before we can draw any definitive conclusions.
In guessing or proposing systems in Category B, the problem is the challenge of the unknown. Indeed, heterostructures may well be of great interest in the actinides. We have started down this path with work reported with U, and also an effort on producing exchange bias with UO_2 films in Sec. <ref>, but difficulties are encountered in both cases in fabricating good interfaces. Ultimately, of course, we should be able to fabricate “superlattices” where the interfaces are of high quality and the strains across them are small by choosing the correct materials. Only then will we be able to answer questions posed by theories, such as that on Pb-Pu superlattices by Rudin <cit.>.
All of these theoretical predictions depend crucially on the unusual behavior of the 5f electrons in the actinides. Situated far from the nucleus, they are not shielded from neighboring electrons, unlike the tightly bound 4f electrons of the lanthanide series, so the 5f electrons readily interact with the electrons of neighboring atoms. This gives them, for example, the property that actinide ions can exist in multiple valence states, depending on their environment. In addition, the 5f electrons carry a large orbital moment. Some properties involving orbital moments are proportional to a higher power (Z^4) of the atomic number, giving the actinide series an obvious advantage. The ability to interact with electrons from neighboring atoms is understood in the word “hybridization”, although this can take many different forms and requires advanced theoretical methods to describe. In some sense, these properties have made the actinides complicated to work with; now, hopefully, with new theory and the capabilities of fabricating thin films, heterostructures, and superlattices, we are on the verge of making these peculiar properties useful in technology!
Although a great deal of the initial work on U-based thin films was devoted to fundamental physics and the exploration of correlated electronic states, it is clear that the last decade has seen a significant shift in activity towards the investigation of applied nuclear materials, as more research groups realise the strength of this approach. Working swiftly on large sets of bulk samples of active materials is difficult, and this can be overcome using thin-film deposition. The controlled engineering of samples is also a contributing factor: we are able to reduce complex systems to simpler, purer experimental analogues, in which variables can be carefully controlled in order to precisely determine mechanisms of degradation in structure, chemistry, thermal transport, etc., all important aspects of nuclear materials research.
Some of the most important aspects of nuclear fuels research, from a storage and operation perspective, concern the interaction of the fuel with air and water. Ambient oxidation processes and dissolution in long-term storage can be slow and yield only small changes that are difficult to detect with bulk samples. By using thin-film analogues of nuclear fuels with modern, sophisticated diffraction techniques, it is possible to measure surface changes on the order of Ångstroms. Research has already begun using thin films to provide important corrosion-rate data to model spent UO_2 nuclear fuels in radiolytic environments, the ambient degradation and oxidation of candidate advanced technology fuels, and the corrosion, oxidation, and hydriding of stored metallic waste forms. This could be extremely useful in the study of spent fuel behaviour in long-term repository conditions; not just for the UO_2 fuel that is already stored by many countries across the globe, but also for possible advanced technology fuels (UN, U_3Si_2 for example) and more complex composites (UCO in TRISO) proposed for the next generations of small and advanced modular reactors.
Looking to the future, the advancement of high resolution physical and chemical characterisation techniques, including laboratory-based equipment, synchrotron and neutron beamline experiments, and the more frequent application of these techniques to U-based thin film systems will yield ever more detailed understanding of these complex systems. This will not only help to underpin our theoretical framework for important nuclear materials, but in some instances could provide crucial data for the utilisation of uranium in device technologies.
§ ACKNOWLEDGMENTS
We would like to acknowledge the funding and support from the Engineering and Physical Sciences Research Council (EPSRC), UK. Recent grants in advanced fuels (ATLANTIC, EP/S011935/1) and nuclear waste and decommissioning (TRANSCEND, EP/S01019X/1) have opened up new research avenues and provided PDRA and PhD student support, which has been invaluable to the continuation of this field. More directly, we thank the EPSRC for the recent award of a new deposition and surface characterisation facility (EP/V035495/1), becoming a national nuclear user facility (FaRMS: https://www.nnuf.ac.uk/farms).
We would like to thank our many industry supporters, particularly Dave Goddard and Rob Burrows of the National Nuclear Laboratory, and Dave Geeson and Norman Godfrey of the AWE for provision of advice and materials over the years. We would also like to acknowledge our international collaborators at the JAEA, INL, CEA and ESRF who have helped enrich and expand the scope of this growing field of research.
We would like to give particular credit to Bill Stirling, Mike Wells, Stan Zochowski, Mike Thomas, Sean Langridge, and the late Roger Cowley for their interest and support over many years. Early funding for this program was obtained from the European Commission's Joint Research Center, Karlsruhe, Germany, and we thank Jean Rebizant for organizing this valuable assistance. Constructive comments on the manuscript were made by Ladia Havela from Prague, and we thank him for these. We also acknowledge direct and indirect input from several generations of PhD students, in particular Rebecca Nicholls.
Search for planets in hot Jupiter systems with multi-sector TESS photometry. III. A study of ten systems enhanced with new ground-based photometry[This research is partly based on: (1) data obtained at the 1.5 m and 0.9 m telescopes of the Sierra Nevada Observatory (Spain), which is operated by the Consejo Superior de Investigaciones Científicas (CSIC) through the Instituto de Astrofísica de Andalucía; (2) observations made with the Liverpool Telescope operated on the island of La Palma by the Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council; and (3) observations obtained with telescopes of the University Observatory Jena, which is operated by the Astrophysical Institute of the Friedrich-Schiller-University.]
G. Maciejewski^1, M. Fernández^2, A. Sota^2, P. J. Amado^2, J. Ohlert^3,4, R. Bischoff^5, W. Stenglein^5, M. Mugrauer^5, K.-U. Michel^5, J. Golonka^1, A. Blanco Solsona^6, E. Lapeña^6, J. Molins Freire^6, A. De los Ríos Curieses^6, J. A. Temprano Sicilia^6
^1Institute of Astronomy, Faculty of Physics, Astronomy and Informatics,
Nicolaus Copernicus University in Toruń, Grudziadzka 5, 87-100 Toruń, Poland,
e-mail: gmac@umk.pl
^2Instituto de Astrofísica de Andalucía (IAA-CSIC), Glorieta de la Astronomía 3, 18008 Granada, Spain
^3Michael Adrian Observatorium, Astronomie Stiftung Trebur, 65428 Trebur, Germany
^4University of Applied Sciences, Technische Hochschule Mittelhessen, 61169 Friedberg, Germany
^5Astrophysikalisches Institut und Universitäts-Sternwarte, Schillergäßchen 2, 07745 Jena, Germany
^6Valencia International University, 46002 Valencia, Spain
June 2023, accepted for publication in Acta Astronomica vol. 73
The loneliness of hot Jupiters supports the high-eccentricity migration as a primary path leading to the formation of systems with those planets stripped of any close-in planetary companions. Here we present the null results of searches for low-mass planets close to hot Jupiters in 10 planetary systems: HAT-P-4, HAT-P-10, HAT-P-12, HAT-P-17, HAT-P-19, HAT-P-32, HAT-P-44, Qatar-6, TrES-4, and WASP-48. We employed multi-sector time-series photometry from the Transiting Exoplanet Survey Satellite enhanced with new ground-based transit light curves to determine the sizes of hypothetical planets that might still avoid being detected. We redetermined transit parameters for the known hot Jupiters using a homogenous approach. We refuted transit timing variations for HAT-P-12 b, claimed recently in the literature. The transit timing data permitted us to place tighter constraints on third bodies in HAT-P-19 and HAT-P-32 systems detected in Doppler measurements. We also study four multi-periodic pulsating variable stars in the field around HAT-P-17.

Key words: Hot Jupiters – Stars: individual: HAT-P-4, HAT-P-10, HAT-P-12, HAT-P-17, HAT-P-19, HAT-P-32, HAT-P-44, Qatar-6, TrES-4, WASP-48, BD+30 4487, Gaia DR3 1849720750852486656, TYC 2717-453-1, Gaia DR3 1849743737517511296 – Planets and satellites: individual: HAT-P-4 b, HAT-P-10 b, HAT-P-12 b, HAT-P-17 b, HAT-P-19 b, HAT-P-32 b, HAT-P-44 b, Qatar-6 b, TrES-4 b, WASP-48 b
§ INTRODUCTION
Orbital architectures of planetary systems with massive planets on tight orbits, so-called hot Jupiters, shed unique light on the mechanisms of planetary formation. Unless those giant planets were formed in situ (Batygin 2016), they must have been transferred from their birthplace beyond the water-ice line into their current orbits. Whilst the early migration in a proto-planetary disk and in-situ formation can preserve nearby low-mass planets, the violent evolution paths leave hot Jupiters in seclusion (Mustill 2015). Statistical studies show that the latter scenario is more probable because hot Jupiters usually constitute single-planet systems or are accompanied by other massive planets on wide orbits. However, the observed lack of eccentric proto-hot Jupiters (Dawson 2015) and a slowly growing number of hot Jupiters in compact orbital configurations (Hord 2022) show that the actual statistics still need clarification.
In our project (Maciejewski 2020, 2022), we use publicly available photometric data acquired with the Transiting Exoplanet Survey Satellite (TESS, Ricker 2015) to search for additional planets with the transit method. In addition, we homogeneously analyse transit times for known hot Jupiters to search for perturbations induced by potential non-transiting planets close to resonant configurations (Wu 2023).
This paper presents results for 10 planetary systems: HAT-P-4, HAT-P-10, HAT-P-12, HAT-P-17, HAT-P-19, HAT-P-32, HAT-P-44, Qatar-6, TrES-4, and WASP-48. The space-borne observations were enhanced with new transit light curves acquired with ground-based instruments. For four systems, we acquired additional photometric time series, which complemented the transit search based on the TESS data.
§ SYSTEMS OF THE SAMPLE
The basic observational properties of the investigated systems are presented in Table 1. Their stellar and planetary properties are collected in Table 2. Below, a brief characterisation of the individual systems is provided.
Table 1. Observational properties of the systems of the sample

System     RA (J2000)   Dec (J2000)  m_G (mag)  Distance (pc)  d_tr (min)         δ_tr (ppth)
           hh:mm:ss.s   ±dd:mm:ss
HAT-P-4    15:19:57.9   +36:13:47    11.1       321.8±1.6      253.2^+2.0_-3.2    7.64^+0.18_-0.19
HAT-P-10   03:09:28.5   +30:40:25    11.6       125.1±2.1      157.1^+3.4_-3.7    17.16^+0.49_-0.44
HAT-P-12   13:57:33.5   +43:29:37    12.4       141.9±0.2      140.7^+5.4_-4.2    20.12^+0.69_-0.73
HAT-P-17   21:38:08.7   +30:29:19    10.3       92.4±0.2       243.4^+6.4_-5.5    15.06^+0.51_-0.52
HAT-P-19   00:38:04.0   +34:42:42    12.5       201.7±0.6      166.2^+7.0_-5.0    19.30^+0.69_-0.84
HAT-P-32   02:04:10.3   +46:41:16    11.1       286.2±1.7      184.9^+2.4_-2.0    23.23^+0.27_-0.27
HAT-P-44   14:12:34.6   +47:00:53    13.0       348.4±1.7      183.7^+5.7_-4.9    18.79^+0.48_-0.47
Qatar-6    14:48:50.5   +22:09:09    11.1       101.0±0.2      95.7^+4.4_-5.1     22.9^+1.5_-1.0
TrES-4     17:53:13.0   +37:12:43    11.5       523.7±7.1      214.7^+9.0_-8.9    9.53^+0.27_-0.23
WASP-48    19:24:39.0   +55:28:23    10.8       462.1±2.2      186.1^+6.4_-5.2    9.06^+0.24_-0.24

Notes: Coordinates were taken from the Gaia Data Release 3 (DR3, Gaia Collaboration 2021). m_G is the apparent brightness in the G band from the DR3. Distance is calculated from the DR3 parallaxes with uncertainties from error propagation. d_tr and δ_tr are the transit duration and transit depth refined in this study. ppth stands for parts per thousand of the normalised out-of-transit flux.
Table 2. Physical properties of the systems of the sample

Star       M_⋆ (M_⊙)           R_⋆ (R_⊙)              T_eff (K)  log g_⋆ (cm s^-2)  [Fe/H]
HAT-P-4    1.26^+0.06_-0.14    1.59±0.07              5860±80    4.14^+0.01_-0.04   +0.24±0.08
HAT-P-10   0.83 ± 0.03         0.79±0.02              4980±60    4.56±0.02          +0.13±0.08
HAT-P-12   0.733 ± 0.018       0.701^+0.017_-0.012    4650±60    4.61±0.01          -0.29±0.05
HAT-P-17   0.857 ± 0.039       0.838 ± 0.021          5246±80    4.52±0.02          0.00±0.08
HAT-P-19   0.842 ± 0.042       0.820 ± 0.048          4990±130   4.54±0.05          +0.23±0.08
HAT-P-32   1.160 ± 0.041       1.219 ± 0.016          6207±88    4.33±0.01          -0.04±0.08
HAT-P-44   0.942 ± 0.041       0.949^+0.080_-0.037    5295±100   4.46±0.06          +0.33±0.10
Qatar-6    0.822 ± 0.021       0.822 ± 0.021          5052±66    4.64±0.01          -0.025±0.094
TrES-4     1.22 ± 0.17         1.738 ± 0.092          6100±100   4.045±0.034        0.0±0.2
WASP-48    1.09 ± 0.08         1.09 ± 0.14            6000±150   4.50±0.15          -0.12±0.12

Planet       M_b (M_Jup)     R_b (R_Jup)            Data source
HAT-P-4 b    0.68 ± 0.04     1.27±0.05              Kovács (2007)
HAT-P-10 b   0.487 ± 0.018   1.005^+0.032_-0.027    Bakos (2009)
HAT-P-12 b   0.211 ± 0.012   0.959^+0.029_-0.021    Hartman (2009)
HAT-P-17 b   0.534 ± 0.018   1.010 ± 0.029          Howard (2012)
HAT-P-19 b   0.292 ± 0.018   1.132 ± 0.072          Hartman (2011a)
HAT-P-32 b   0.86 ± 0.16     1.789 ± 0.025          Hartman (2011b)
HAT-P-44 b   0.352 ± 0.029   1.242^+0.106_-0.051    Hartman (2014)
Qatar-6 b    0.668 ± 0.066   1.062 ± 0.071          Alsubai (2018)
TrES-4 b     0.84 ± 0.10     1.674 ± 0.094          Mandushev (2007)
WASP-48 b    0.98 ± 0.09     1.67 ± 0.10            Enoch (2011)

Notes: M_⋆, R_⋆, T_eff, g_⋆, and [Fe/H] are the mass, radius, effective temperature, gravitational acceleration in cgs, and metallicity of the host star, respectively. M_b and R_b are the mass and radius of the transiting planet in Jupiter units. The data source refers to the parameters of both the star and the planet.
HAT-P-4. This planetary system comprises a low-density, inflated planet and a metal-rich, evolved main-sequence late F or early G star (Kovács 2007). Winn (2011) showed that the planet's 3.05-day orbit is prograde with a sky-projected angle between the planetary orbital and stellar axes of . Furthermore, a linear trend in radial velocities (RVs) was detected and identified as a constant acceleration of the systemic barycentre γ̇ = 0.0246 ± 0.0026 m s^-1 day^-1 due to a third body (a planet or a companion star). The system parameters were refined by Christiansen (2011), who analysed photometric time series acquired with the NASA EPOXI Mission of Opportunity. Then, the transit light curves were reanalysed by Southworth (2011). The system parameters were redetermined again by Wang (2021). Todorov (2013) obtained occultation light curves with the warm Spitzer Space Telescope and found that heat recirculation from the day to the night side of the planet is inefficient. Furthermore, occultation timing placed a tight constraint on zero orbital eccentricity. This finding was confirmed in the RV study by Bonomo (2017). Mugrauer (2014) used common proper motions and radial velocities to find that the host star is likely accompanied by a widely separated G2 star, named HAT-P-4 B. Saffe (2017) found the companion has metallicity lower by ∼0.1 dex compared to that of HAT-P-4 A. They postulated that the giant planet's migration could trigger the fall of planetesimals and rocky planets (with a total mass of ∼10 M_⊕) onto HAT-P-4 A at the time of the system's formation, enriching the star with metals.
HAT-P-10. The planet in this system is another low-density hot Jupiter announced by the HATNet survey (Bakos 2009) and then independently by the WASP project (West 2009). Thus, it is also known as WASP-11 b. It orbits a K dwarf every 3.7 d. Different approaches of both teams resulted in divergent values of the mean planetary density of 0.594 ± 0.052 g cm^-3 (Bakos 2009) and 0.926^+0.09_-0.15 g cm^-3 (West 2009). In their follow-up study, Wang (2014) obtained the value of 0.697^+0.046_-0.062 g cm^-3. Then, Mancini (2015) found the density of 0.672 ± 0.037 g cm^-3. Those authors also determined a spin-orbit alignment with λ equal to 7^∘± 5^∘. Knutson (2014) postulated that the system accelerates with the rate of -0.014 ± 0.032 m s^-1 day^-1. Ngo (2015) demonstrated that a common proper motion M dwarf companion could induce this RV trend. Mugrauer (2019) showed that the system is a member of a hierarchical triple-star system.
HAT-P-12. The planet was found to transit a K4 dwarf every 3.2 days (Hartman 2009). This is a low-density (≈0.3 g cm^-3) globe with a sub-Saturn mass. The sky-projected orbital obliquity angle remains not well constrained with λ = -54^+41_-13 degrees (Mancini 2018). The system's parameters were refined in the follow-ups studies by Lee (2012), Mallonn (2015b), Sada & Ramón-Fox (2016), Mancini (2018), Öztürk & Erdem (2019), and Wang (2021). Although Sing (2016) found a strong optical scattering slope from blue to near-IR wavelengths, Line (2013), Mallonn (2015b), Turner (2017), and Yan (2020) concluded that HAT-P-12 b is covered by a cloudy atmosphere ruling out the presence of the Rayleigh scattering. The scenario invoking a completely clear atmosphere was also refuted by Alexoudi (2018). However, those authors found the Rayleigh scattering slope discernible in the visible transmission spectrum. Finally, Wong (2020) detected both the clouds inferred from weakened water vapour absorption and Rayleigh scattering produced by small particles. Atmospheric models with photochemical hazes composed of soot or tholins were found to reproduce the planetary transmission spectrum. However, Jiang (2021) noticed that conflicting results of previous atmospheric studies could be rendered by stellar contamination of unocculted stellar spots and faculae. Wong (2020) also provided evidence that heat in the planet's atmosphere is efficiently redistributed between the day and night hemispheres.
HAT-P-17. The host star is an early K dwarf, which is orbited by a half-Jupiter-mass transiting planet, HAT-P-17 b, on a 10.3-day eccentric orbit (Howard 2012) and a ≈3 M_ Jup planet, HAT-P-17 c, on a ≈4000 day orbit (Howard 2012, Bonomo 2017). Initially, the orbit of planet b was found to be aligned (Fulton 2013), but then a slight misalignment with λ = -27.5^∘± 6.7^∘ was derived (Mancini 2022). The system's parameters were refined by Mancini (2022), who enhanced their analysis with TESS photometric time series acquired for two transits in sector 15.
HAT-P-19. The K1 dwarf hosts a low-density Saturn-mass planet (Hartman 2011a), moving on a 4.0-day circular orbit (Bonomo 2017). The systemic barycentre accelerates outward (Hartman 2011a) with a rate of , revealing the presence of a third body. The system parameters were refined by Mallonn (2015a), Seeliger (2015), Baştürk (2020), and Wang (2021). The analysis of transmission spectra by Mallonn (2015a) opts for a flat-spectrum (grey) atmosphere of HAT-P-19 b.
HAT-P-32. The transiting planet is a low-density hot Jupiter whipping around its highly active late-F main-sequence star in 2.15 days (Hartman 2011b). Its orbit is circular (Bonomo 2017) and polar with the sky-projected orbital obliquity angle of 85.0^∘± 1.5^∘ (Albrecht 2012). The systemic properties were revised in studies by Seeliger (2014), Tregloan-Reed (2018), Wang (2019), and Baştürk (2022). The presence of a third body was inferred from the linear RV trend of -0.094 ± 0.023 m s^-1 day^-1 (Knutson 2014, Bonomo 2017). Furthermore, Adams (2013) detected an M1.5-dwarf companion (Zhao 2014), which was confirmed to be gravitationally bound by Ngo (2015). However, that stellar companion was too distant to explain the observed RV trend. The analyses of transmission spectra and multicolour broad-band transit light curves show that the planetary spectrum is flat, which could come from grey absorption in clouds of the upper planetary atmosphere (Gibson 2013, Nortmann 2016, Mallonn 2016). Mallonn & Strassmeier (2016) and Alam (2020) confirmed the clouds or hazes. Damiano (2017) reported the presence of water vapour, then confirmed by Alam (2020). Mallonn (2019b) placed an upper constraint on the planetary geometric albedo, which must be lower than 0.2. Czesla (2022) provided hints that the planet is losing its mass via the first Lagrange point due to the Roche lobe overflow.
HAT-P-44. Transits of HAT-P-44 b are observed every 4.3 days (Hartman 2014). The planet is a bloated hot Saturn with a non-transiting massive planetary companion on a wide orbit. The orbital eccentricity of planet b and the orbital parameters of planet c remain poorly constrained in the current RV data. Mallonn (2019a) acquired a new transit light curve and combined it with amateur data to refine the transit ephemeris for HAT-P-44 b. Then, Ivshina & Winn (2022) extracted transit times from TESS sectors 16 and 23 to make this ephemeris even more precise.
Qatar-6. The system comprises a hot sub-Jovian-mass planet on a circular 3.5-day orbit and an early-K dwarf (Alsubai 2018). The orbital geometry results in grazing transits. Rice (2023) showed that the orbit is well aligned. Mugrauer (2019) detected a candidate stellar companion bound to the host star. Rice (2023) demonstrated that both stars likely constitute an edge-on binary system, which implies a configuration of spin-orbit and orbit-orbit alignment.
TrES-4. TrES-4 b is a low-density, bloated hot Jupiter moving on a 3.55-day circular and aligned orbit around an F-type main-sequence star (Mandushev 2007, Sozzetti 2009, Bonomo 2017, Narita 2010). The system properties were refined in follow-up studies by Chan (2011) and Sozzetti (2015).
WASP-48. The slightly evolved late-F dwarf hosts a bloated Jupiter-like planet moving along a 2.1-day circular orbit (Enoch 2011, Bonomo 2017). The system's parameters were revised by Ciceri (2015). In their studies, O'Rourke (2014), Clark (2018), and Murgas (2017) detected the planet's thermal emission in the infrared. They interpreted a flat optical transmission spectrum as a manifestation of a cloud-free atmosphere with titanium oxide and vanadium oxide.
§ OBSERVATIONS AND DATA REDUCTION
§.§ TESS photometric time series
Table 3. Details on TESS observations used

Sector/Mode   from–to                     pnr (ppth)   N_tr

HAT-P-4
24/SC   2020-Apr-16–2020-May-13   2.53   9
50/SC   2022-Mar-26–2022-Apr-22   2.42   5
51/SC   2022-Apr-22–2022-May-18   2.70   4
total: 18

HAT-P-10
18/LC   2019-Nov-02–2019-Nov-27   2.85   –
42/SC   2021-Aug-20–2021-Sep-16   2.82   6
58/SC   2022-Oct-29–2022-Nov-26   2.45   7
total: 13

HAT-P-12
16/LC   2019-Sep-11–2019-Oct-07   4.08   –
23/SC   2020-Mar-18–2020-Apr-16   3.99   5
49/SC   2022-Feb-26–2022-Mar-26   3.61   7
50/SC   2022-Mar-26–2022-Apr-22   3.88   4
total: 16

HAT-P-17
15/SC   2019-Aug-15–2019-Sep-11   1.55   2
55/SC   2022-Aug-05–2022-Sep-01   1.35   2
56/SC   2022-Sep-01–2022-Sep-30   1.27   3
total: 7

HAT-P-19
17/LC   2019-Oct-07–2019-Nov-02   4.83   –
57/SC   2022-Sep-30–2022-Oct-29   4.12   7
total: 7

HAT-P-32
18/LC   2019-Nov-02–2019-Nov-27   2.78   –
58/SC   2022-Oct-29–2022-Nov-26   2.09   13
total: 13

HAT-P-44
16/LC   2019-Sep-11–2019-Oct-07   5.35   –
23/SC   2020-Mar-18–2020-Apr-16   5.85   6
49/SC   2022-Feb-26–2022-Mar-26   6.36   6
50/SC   2022-Mar-26–2022-Apr-22   6.12   5
total: 17

Qatar-6
50/SC   2022-Mar-26–2022-Apr-22   2.63   5
51/SC   2022-Apr-22–2022-May-18   3.15   4
total: 9

TrES-4
25/SC   2020-May-13–2020-Jun-08   2.87   6
26/SC   2020-Jun-08–2020-Jul-04   2.96   6
40/SC   2021-Jun-24–2021-Jul-23   2.91   8
52/SC   2022-May-18–2022-Jun-13   2.54   6
53/SC   2022-Jun-13–2022-Jul-09   2.91   6
total: 32

WASP-48
14/LC   2019-Jul-18–2019-Aug-15   3.14   –
15/LC   2019-Aug-15–2019-Sep-11   2.95   –
16/LC   2019-Sep-11–2019-Oct-07   2.63   –
23/SC   2020-Mar-18–2020-Apr-16   3.23   10
26/SC   2020-Jun-08–2020-Jul-04   2.87   12
40/SC   2021-Jun-24–2021-Jul-23   2.91   12
41/SC   2021-Jul-23–2021-Aug-20   2.64   12
53/SC   2022-Jun-13–2022-Jul-09   2.87   10
54/SC   2022-Jul-09–2022-Aug-05   2.77   11
55/SC   2022-Aug-05–2022-Sep-01   2.90   12
56/SC   2022-Sep-01–2022-Sep-30   2.45   12
total: 91

Notes: Mode specifies long cadence (LC) or short cadence (SC) photometry. pnr is the photometric noise rate in parts per thousand (ppth) of the normalised flux per minute of exposure, see Fulton (2011). N_tr is the number of transits qualified for this study.
Light curves were extracted from TESS science frames following the procedure described in detail in Maciejewski (2020). Here we give a short outline: The main portion of data was acquired with the short cadence (SC) of the 2-minute exposure time. The target pixel files were downloaded from the exo.MAST portal[https://exo.mast.stsci.edu]. In some sectors, only observations with the 30-minute exposure time were available. Those long-cadence (LC) data were extracted from full-frame images with the TESSCut[https://mast.stsci.edu/tesscut/] tool (Brasseur 2019). The final light curves were obtained with the Lightkurve v1.9 package (Lightkurve Collaboration 2018). They were de-trended and normalised to unity outside the transits with the built-in Savitzky-Golay filter. Then, the light curves were visually inspected to remove time-correlated flux ramps and measurements affected by scattered light.
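The extraction steps above can be illustrated with a short, hedged Python sketch based on the Lightkurve package named in the text. The target, sector, window length, and sigma-clipping threshold are illustrative choices, not the exact settings of our pipeline, and the modern Lightkurve API shown here differs slightly from v1.9.

import lightkurve as lk

# Download 2-min target pixel data for one system/sector (illustrative choice)
tpf = lk.search_targetpixelfile("HAT-P-12", mission="TESS", sector=23).download()

# Aperture photometry with the pipeline mask, then normalisation to unity
lc = tpf.to_lightcurve(aperture_mask=tpf.pipeline_mask).normalize()

# flatten() applies a Savitzky-Golay filter, as described in the text;
# the window length must be tuned so that transits are not distorted
flat = lc.flatten(window_length=901)

# Crude stand-in for the visual-inspection step: clip strong outliers
clean = flat.remove_outliers(sigma=5)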
The TESS observations used in this study are summarised for individual targets in Table 3.
§.§ Ground-based transit observations
Table 4. Summary for the ground-based transit observations

ID   Date   UT start–end   Telescope, filter   Airmass change   N_obs   t_exp (s)   Γ   pnr (ppth)
HAT-P-4 b
1 2015-05-06 21:59-03:15 OSN 1.5, R_C 1.17 - 1.00 - 1.18 539 30 1.71 0.65
2 2016-03-28 22:35-04:42 OSN 1.5, R_C 1.66 - 1.00 - 1.06 483 40 1.32 0.63
3 2018-04-19 19:25-02:25 PIW 0.6, clear 1.61 - 1.05 - 1.14 1251 17 3.00 1.08
4 2018-04-25 22:55-05:22 LT 2.0, r' 1.35 - 1.01 - 1.30 315 40 1.02 0.61
5 2018-06-13 21:25-03:24 LT 2.0, r' 1.08 - 1.01 - 1.72 295 40 1.02 0.67
6 2019-03-15 23:17-04:45 OSN 1.5, R_C 1.76 - 1.00 - 1.01 768 20 2.35 0.76
HAT-P-10 b
1 2017-08-28 22:17-02:27 PIW 0.6, clear 1.84 - 1.10 456 25 2.00 1.40
2 2018-08-25 00:15-04:40 OSN 1.5, R_C 2.00 - 1.01 335 40 1.32 0.98
3 2020-11-20 23:03-03:30 OSN 0.9, R_C 1.01 - 1.58 284 50 1.07 1.46
HAT-P-12 b
1 2019-04-12 20:07-02:30 PIW 0.6, clear 1.19 - 1.01 - 1.18 233 57 1.00 1.72
2 2019-04-25 20:00-01:53 PIW 0.6, clear 1.16 - 1.01 - 1.21 348 57 1.00 1.74
3 2020-03-24 23:10-03:09 PIW 0.6, clear 1.04 - 1.01 - 1.13 358 37 1.50 1.54
4 2020-04-06 19:07-00:10 PIW 0.6, clear 1.41 - 1.01 - 1.02 453 37 1.50 2.12
HAT-P-17 b
1 2017-09-19 19:50-01:48 OSN 0.9, clear 1.12 - 1.01 - 1.49 906 20 2.61 0.70
2 2019-08-23 20:40-01:55 JENA 0.9, clear 1.16 - 1.07 - 1.31 324 45 1.05 1.17
HAT-P-19 b
1 2020-10-21 19:56-23:56 JENA 0.9, clear 1.11 - 1.04 - 1.13 237 45 1.07 1.46
2 2020-11-10 20:32-01:49 OSN 1.5, clear 1.02 - 1.00 - 1.60 690 20 2.35 0.64
HAT-P-32 b
1 2016-12-21 17:26-23:30 PIW 0.6, clear 1.03 - 1.01 - 1.39 707 27 2.00 1.33
2 2017-09-20 18:31-02:15 PIW 0.6, clear 1.83 - 1.01 - 1.04 1379 27 2.00 1.97
3 2017-12-13 15:58-20:27 PIW 0.6, clear 1.19 - 1.01 - 1.02 536 27 2.00 1.09
4 2018-10-12 18:29-01:25 PIW 0.6, clear 1.44 - 1.01 - 1.07 830 27 2.00 1.38
5 2018-12-09 19:45-01:49 OSN 0.9, clear 1.05 - 1.02 - 1.62 834 20 2.31 0.69
6 2019-01-06 19:41-00:45 OSN 0.9, clear 1.02 - 1.97 1004 12 3.34 0.79
7 2019-10-06 20:25-01:55 OSN 0.9, R_C 1.67 - 1.02 330 50 1.07 1.65
8 2020-11-09 18:28-23:19 TRE 1.2, clear 1.25 - 1.00 - 1.02 440 30 1.54 0.69
HAT-P-44 b
1 2018-02-24 20:14-04:25 PIW 0.6, clear 1.86 - 1.01 - 1.05 482 57 1.00 2.01
2 2018-04-08 20:18-01:27 PIW 0.6, clear 1.20 - 1.01 - 1.04 213 57 1.00 1.97
3 2019-02-16 20:30-04:41 PIW 0.6, clear 1.96 - 1.01 - 1.04 736 37 1.50 2.49
4 2019-03-31 20:40-01:02 JENA 0.9, clear 1.31 - 1.00 261 45 1.05 1.85
5 2019-04-30 21:36-02:59 TRE 1.2, clear 1.03 - 1.00 - 1.27 152 120 0.47 1.65
6 2020-02-26 00:22-05:51 OSN 1.5, clear 1.34 - 1.02 - 1.08 708 25 2.20 1.01
Notes: Date yyyy-mm-dd is given for the beginning of the observing run in UT. N_obs is the number of useful scientific exposures acquired. t_exp is the exposure time used. Γ is the median number of exposures per minute. pnr is the photometric scatter in ppth of the normalised flux per minute of observation.

Table 4. Concluded

ID   Date   UT start–end   Telescope, filter   Airmass change   N_obs   t_exp (s)   Γ   pnr (ppth)
Qatar-6 b
1 2019-04-09 20:22-02:15 PIW 0.6, clear 1.73 - 1.17 - 1.26 696 27 2.00 1.39
2 2019-04-23 19:34-00:00 PIW 0.6, clear 1.69 - 1.17 379 37 1.50 1.51
3 2019-04-30 22:46-01:10 OSN 1.5, clear 1.12 - 1.04 - 1.05 340 20 2.35 0.54
4 2019-04-30 22:16-00:59 JENA 0.9, clear 1.18 - 1.14 - 1.20 150 45 1.05 1.00
5 2019-05-07 22:53-01:02 JENA 0.9, clear 1.14 - 1.25 126 45 1.05 1.26
6 2019-05-21 23:12-02:31 OSN 1.5, clear 1.04 - 1.45 468 20 2.35 0.69
7 2019-05-28 22:12-02:55 OSN 1.5, clear 1.04 - 1.83 796 15 2.93 0.96
8 2021-02-18 02:11-05:31 OSN 1.5, R_C 1.33 - 1.04 533 20 2.69 0.64
9 2021-03-04 02:43-06:09 OSN 1.5, I_C 1.10 - 1.04 - 1.14 347 30 1.69 0.73
TrES-4 b
1 2019-05-12 20:50-01:07 JENA 0.9, clear 1.64 - 1.04 253 45 1.07 1.39
WASP-48 b
1 2018-04-06 23:34-04:00 TRE 1.2, clear 1.65 - 1.05 166 80 0.68 0.78
2 2018-05-04 20:28-01:28 PIW 0.6, clear 1.71 - 1.04 593 27 2.00 1.61
3 2018-07-18 20:54-02:01 TRE 1.2, clear 1.07 - 1.00 - 1.12 205 80 0.68 0.99
4 2018-08-02 20:39-02:26 TRE 1.2, clear 1.03 - 1.00 - 1.28 224 80 0.68 1.08
5 2018-08-02 20:33-03:01 OSN 0.9, clear 1.15 - 1.05 - 1.41 692 30 1.82 0.83
6 2019-07-26 21:27-02:20 OSN 0.9, clear 1.12 - 1.05 - 1.22 484 30 1.67 0.86
7 2019-10-22 18:23-22:34 JENA 0.9, clear 1.04 - 1.56 217 45 1.05 1.54
The photometric time series for exoplanetary transits were acquired between 2015 and 2020. Six instruments were engaged:
* the 2.0 m Liverpool Telescope (Steele 2004) at Observatorio del Roque de los Muchachos (La Palma, Spain) and the IO:I
near-infrared camera (Barnsley 2012) – LT 2.0,
* the 1.5 m Ritchey-Chrétien Telescope at the Sierra Nevada Observatory (OSN, Spain) equipped with a Roper Scientific VersArray 2048B CCD camera – OSN 1.5,
* the 1.2 m Cassegrain telescope at the Michael Adrian Observatory (Trebur, Germany), equipped with an SBIG STL-6303E CCD camera – TRE 1.2,
* the 0.9 m Ritchey-Chrétien Telescope at the OSN, equipped with a Roper Scientific VersArray 2048B CCD camera – OSN 0.9,
* the 0.9 m telescope at the University Observatory Jena (Germany) and the Schmidt Teleskop Kamera (Mugrauer & Berthold 2010) – JENA 0.9,
* the 0.6 m Cassegrain telescope at the Institute of Astronomy of the Nicolaus Copernicus University (Piwnice near Toruń, Poland), equipped with an SBIG STL-1001M (by June 2018) or FLI ML16803 (from August 2018) CCD camera – PIW 0.6.
The telescopes were automatically or manually guided to minimise field drifts during observing runs with a precision of a few arc seconds. The instrumental set-ups were defocused, spreading starlight over many CCD pixels to reduce flat-fielding errors and minimise the dead time lost for the CCD readout. The observations were primarily performed without any filter to maximise the signal-to-noise ratio for precise transit modelling, and only occasionally were the observations acquired through a red filter.
The observing runs were scheduled in such a way as to acquire complete light curves with the full coverage of the critical phases, such as the transit's ingress and egress. In total, 48 light curves were obtained. The details on the individual runs are given in Table 4.
Data reduction was performed with AstroImageJ software (Collins 2017). The science frames were calibrated following a standard procedure involving de-biasing or dark-current correction and flat-fielding with sky-flat-field frames. Fluxes were obtained with the aperture photometry method with the aperture radius and ensemble of comparison stars optimised in trial iterations. Mid-exposures' timestamps were transformed into barycentric Julian dates and barycentric dynamical time BJD_TDB. Out-of-transit measurements with a trial transit model were used to try de-trending against air mass, time, the x and y position on the chip, and seeing. The final light curves were normalised to a baseline outside the transit.
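The timestamp transformation mentioned above can be sketched with astropy; this is a generic illustration rather than the AstroImageJ routine actually used, and the site coordinates and mid-exposure time below are approximate, illustrative values only.

import astropy.units as u
from astropy.coordinates import EarthLocation, SkyCoord
from astropy.time import Time

# Approximate location of the Sierra Nevada Observatory (illustrative)
site = EarthLocation.from_geodetic(lon=-3.385*u.deg, lat=37.064*u.deg, height=2896*u.m)
target = SkyCoord("13:57:33.5", "+43:29:37", unit=(u.hourangle, u.deg))  # HAT-P-12

# Mid-exposure time in JD (UTC), then shifted to BJD_TDB
t = Time(2458586.45, format="jd", scale="utc", location=site)
bjd_tdb = t.tdb + t.light_travel_time(target, kind="barycentric")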
§.§ Ground-based out-of-transit monitoring
Table 5. Summary for the out-of-transit monitoring

#   Date UT      BJD_TDB start–end (2450000+)   N_obs   t_exp (s)   Γ      pnr (ppth)   t_run (h)

HAT-P-10
1   2018-10-13   8405.4893–8405.6636   503   27   2.00   1.30   4.18
2   2018-10-14   8406.4028–8406.6445   694   27   2.00   1.45   5.80
⋯
9   2019-10-29   8786.2891–8786.6725   799   37   1.57   1.50   9.20
total: 45.50

HAT-P-17
1   2016-08-16   7625.3073–7625.5387   992    17   3.02   1.61   5.55
2   2016-08-17   7626.2990–7626.5497   1057   17   3.02   1.58   6.02
⋯
7   2018-10-24   8416.2079–8416.4517   861    17   3.01   1.11   5.85
total: 37.23

HAT-P-44
1    2016-12-28   7751.4948–7751.6045   142   55   1.00   2.17   2.63
2    2017-01-02   7756.4872–7756.6363   201   55   1.00   2.58   3.58
⋯
29   2019-04-07   8581.3000–8581.5916   618   37   1.50   2.36   7.00
total: 167.30

Qatar-6
1    2018-02-20   8170.4739–8170.5470   207   25   2.00   1.75   1.75
2    2018-03-17   8195.4027–8195.6521   709   25   2.00   1.96   5.99
⋯
18   2019-05-05   8609.3391–8609.5531   457   37   1.50   1.56   5.14
total: 96.90

Notes: Designations as in Table 4. t_run is the time covered by uninterrupted observations. This table is available in its entirety in a machine-readable form at the CDS. A portion is shown here for guidance regarding its form and content.
For four targets, HAT-P-10, HAT-P-17, HAT-P-44, and Qatar-6, additional photometric time series outside transits were acquired with the PIW 0.6 telescope. This photometric monitoring aimed to search for transit-like signals from hypothetical other planets in those systems before the TESS observations. More than 340 hours of observations were acquired in total between 2016 and 2019. The data reduction procedure was the same as for transit observations (Sect. 3.2). The details on individual runs are collected in Table 5.
§ DATA ANALYSIS AND RESULTS
§.§ Transit modelling
The transit parameters in the studied systems were redetermined using the new photometric data. For transits covered with TESS observations, chunks five times the transit duration long and centred at the expected mid-transit times were extracted from the SC time series. The LC data were skipped due to their lower time resolution, which smears the transit shape (Hernandez Camero 2023). The TESS light curves and the new ground-based observations were modelled simultaneously with the Transit Analysis Package (TAP, Gazak 2012). Transit geometry was determined by the planet-to-star radius ratio R_p/R_⋆, the semi-major axis scaled in stellar radii a/R_⋆, and the orbital inclination i_orb. The mid-transit time T_mid was determined for each observed transit. Possible photometric trends in the time domain were accounted for with a second-order polynomial fitted to each light curve separately.
The pixel scale of the TESS cameras is 21 arc seconds per pixel, and photometric measurements may suffer from contaminating flux from nearby sources. Thus, we modified TAP by adding the flux contamination parameter c_ F, defined as
c_ F = Δ F/F_0 ,
where Δ F is the additional flux in an aperture and F_0 is the unaffected target flux. This parameter was free for TESS light curves, and its value was common to all transits. For ground-based observations, c_ F was fixed at zero.
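The role of c_F is to let the fit account for transit dilution: extra flux in the aperture makes a transit appear shallower by a factor of (1 + c_F). A two-line numerical illustration follows; the 12% figure anticipates the HAT-P-4 discussion in Sect. 5.

def diluted_depth(true_depth, c_F):
    """Observed transit depth when contaminating flux c_F = ΔF/F_0 is present."""
    return true_depth / (1.0 + c_F)

print(diluted_depth(7.64, 0.12))  # ≈ 6.8 ppth instead of the true 7.64 ppth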
The limb darkening (LD) was approximated with the quadratic form coded with u_1 and u_2 coefficients. As advocated by Patel & Espinoza (2022), both parameters were free in the transit model with a separate pair for each passband. If a single light curve was acquired in a given filter, LD coefficients were allowed to vary around theoretical predictions from tables of Claret & Bloemen (2011) under the Gaussian penalty with a conservative value of 0.1.
HAT-P-17 b is the only planet in our sample moving on a noncircular orbit. The values of its eccentricity e_ b = 0.342 ± 0.004 and argument of periastron were taken from Bonomo (2017) and allowed to vary under the Gaussian penalties equal to the uncertainties of those parameters. For remaining systems, the circular orbits were assumed following Bonomo (2017) results or from the discovery papers.
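The paragraphs above fully specify the forward model: geometry, limb darkening, contamination, and orbit. As a minimal sketch, an equivalent transit shape can be generated with the batman package (Kreidberg 2015), which is not the TAP code used in this work; the parameter values are the HAT-P-32 b results quoted later in Tables 6, 7, and 9.

import numpy as np
import batman

params = batman.TransitParams()
params.t0 = 0.0                 # mid-transit time (d), phase-folded
params.per = 2.150008111        # orbital period (d), Table 9
params.rp = 0.1524              # R_p/R_star, Table 6
params.a = 6.051                # a/R_star, Table 6
params.inc = 88.12              # orbital inclination (deg), Table 6
params.ecc = 0.0                # circular orbit (Bonomo 2017)
params.w = 90.0                 # argument of periastron (deg), irrelevant for ecc = 0
params.limb_dark = "quadratic"  # quadratic LD law, as in the text
params.u = [0.304, 0.06]        # u_1, u_2 for the TESS band, Table 7

t = np.linspace(-0.1, 0.1, 1000)                        # days from mid-transit
flux = batman.TransitModel(params, t).light_curve(params)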
The parameters of the best-fitting models and their uncertainties were derived with the Markov Chain Monte Carlo (MCMC) method, as described in detail in Maciejewski (2020). They are collected in Table 6. The LD coefficients are listed in Table 7. Figure 1 displays TESS's phase-folded transit light curves and the best-fitting models. The individual ground-based light curves are presented in Figs. 2 and 3.
Table 6. System parameters from transit light curve modeling

Planet       R_p/R_⋆                  a/R_⋆                  i_orb (°)            c_F (%)          c_FluxCT (%)
HAT-P-4 b    0.0874^+0.0010_-0.0011   5.994^+0.033_-0.063    89.09^+0.63_-0.84    6.5^+2.6_-2.8    0.6
HAT-P-10 b   0.1310^+0.0019_-0.0017   12.12^+0.16_-0.22      89.14^+0.55_-0.47    5.1^+2.2_-2.2    1.1
HAT-P-12 b   0.1418^+0.0024_-0.0026   11.50^+0.28_-0.26      88.45^+0.57_-0.38    4.4^+2.4_-2.6    0.1
HAT-P-17 b   0.1227^+0.0021_-0.0021   22.71^+0.37_-0.35      89.35^+0.21_-0.16    6.5^+3.4_-3.6    4.8
HAT-P-19 b   0.1389^+0.0025_-0.0030   11.97^+0.31_-0.25      88.31^+0.53_-0.33    4.0^+2.7_-2.9    0.1
HAT-P-32 b   0.1524^+0.0009_-0.0009   6.051^+0.051_-0.048    88.12^+0.61_-0.42    2.3^+1.0_-1.0    1.0
HAT-P-44 b   0.1371^+0.0018_-0.0017   11.93^+0.22_-0.24      88.86^+0.62_-0.42    4.7^+2.2_-2.3    0.8
Qatar-6 b    0.1512^+0.0049_-0.0033   12.30^+0.15_-0.17      85.86^+0.10_-0.12    -3.2^+4.9_-5.5   1.0
TrES-4 b     0.0976^+0.0014_-0.0012   5.93^+0.11_-0.11       82.54^+0.25_-0.25    0.0^a            2.1
WASP-48 b    0.0952^+0.0012_-0.0013   4.569^+0.081_-0.067    81.55^+0.38_-0.31    2.7^+2.9_-2.4    1.8

Notes: c_FluxCT is the total flux contamination predicted by FluxCT (Schonhut-Stasik & Stassun 2023). ^a) Not fitted because only one ground-based light curve of moderate quality was available.
Table 7. Limb darkening coefficients determined from transit light curve modeling

System     u_1,TESS              u_2,TESS             u_1,clear             u_2,clear            u_1,R                  u_2,R
HAT-P-4    0.41^+0.08_-0.08      0.07^+0.14_-0.14     -                     -                    ^a)0.40^+0.09_-0.09    ^a)0.17^+0.14_-0.14
HAT-P-10   0.48^+0.09_-0.09      0.12^+0.18_-0.18     -                     -                    ^b)0.56^+0.10_-0.10    ^b)0.17^+0.19_-0.18
HAT-P-12   0.51^+0.13_-0.13      0.01^+0.27_-0.25     0.66^+0.13_-0.13      -0.08^+0.26_-0.23    -                      -
HAT-P-17   0.429^+0.064_-0.061   0.19^+0.13_-0.14     0.60^+0.12_-0.13      0.11^+0.21_-0.20     -                      -
HAT-P-19   0.37^+0.19_-0.19      0.22^+0.33_-0.34     0.50^+0.15_-0.14      0.12^+0.30_-0.30     -                      -
HAT-P-32   0.304^+0.064_-0.064   0.06^+0.12_-0.12     0.416^+0.054_-0.054   0.00^+0.10_-0.10     -                      -
HAT-P-44   0.50^+0.13_-0.15      -0.10^+0.27_-0.23    0.64^+0.10_-0.10      -0.16^+0.20_-0.19    -                      -
Qatar-6    0.50^+0.32_-0.33      0.06^+0.36_-0.32     0.38^+0.22_-0.21      0.20^+0.27_-0.31     -                      -
TrES-4     0.42^+0.29_-0.27      0.00^+0.31_-0.33     -                     -                    -                      -
WASP-48    0.23^+0.20_-0.16      0.22^+0.22_-0.26     0.43^+0.28_-0.25      0.11^+0.31_-0.36     -                      -

Notes: ^a) No distinction between R_C and Sloan r' because of their similar spectral bands. ^b) To be specific: R_C.
For the homogeneity of our transit timing analysis, mid-transit times were also redetermined for the literature data. We only considered the light curves that are publicly available, repeating the fitting procedure with TAP. The new and redetermined mid-transit times are given in Table 8.
Table 8. Transit mid-points for the studied planets

Planet       E      T_mid (BJD_TDB)    +σ (d)     -σ (d)     Data source
HAT-P-4 b    950    2457149.512385     0.000418   0.000428   OSN 1.5
HAT-P-4 b    1057   2457476.560467     0.000462   0.000495   OSN 1.5
HAT-P-4 b    1303   2458228.464403     0.000599   0.000594   PIW 0.6
⋯
WASP-48 b    2093   2459851.184428     0.001049   0.001055   TESS

Notes: This table is available in its entirety in a machine-readable form at the CDS. A portion is shown here for guidance regarding its form and content.
§.§ Transit timing
Transit timing data sets were used to refine transit ephemerides in the form:
T_ mid (E) = T_0 + P_ orb· E ,
where E is the transit number counted from the reference epoch T_0, taken from the discovery papers. The best-fitting parameters and their 1σ uncertainties were extracted from posterior probability distributions produced by 100 MCMC walkers with 10^4 steps each and the first 1000 steps discarded. The results are given in Table 9, and the transit timing residuals against the refined ephemerides are plotted in Figs. 4 and 5.
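As a hedged sketch of the linear ephemeris above, the coefficients can also be obtained with a simple weighted least-squares fit (the paper uses MCMC for the uncertainties); the three HAT-P-4 epochs below come from Tables 8 and 9 and stand in for the full timing data set.

import numpy as np

E = np.array([0, 950, 1057])                                      # transit numbers
Tmid = np.array([2454245.81527, 2457149.512385, 2457476.560467])  # BJD_TDB
sigma = np.array([4.1e-4, 4.2e-4, 4.8e-4])                        # timing errors (d)

# np.polyfit minimises sum((w*(y - model))**2), so w = 1/sigma gives a chi^2 fit
P_orb, T0 = np.polyfit(E, Tmid, 1, w=1.0/sigma)
residuals = Tmid - (T0 + P_orb * E)                               # O-C values (d)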
Table 9. Transit ephemeris elements for the investigated planets

Planet       T_0 (BJD_TDB)              P_orb (d)                    χ²_red
HAT-P-4 b    2454245.81527 ± 0.00041    3.05652330 ± 0.00000030      1.01
HAT-P-10 b   2454759.68719 ± 0.00020    3.72247955 ± 0.00000020      0.96
HAT-P-12 b   2454216.77332 ± 0.00016    3.21305813 ± 0.00000014      1.68
HAT-P-17 b   2454801.17059 ± 0.00045    10.3385344 ± 0.0000010       0.83
HAT-P-19 b   2455091.53491 ± 0.00035    4.00878322 ± 0.00000038      0.90
HAT-P-32 b   2454420.44699 ± 0.00010    2.150008111 ± 0.000000056    1.34
HAT-P-44 b   2455696.93746 ± 0.00036    4.30119078 ± 0.00000051      0.88
Qatar-6 b    2457784.03266 ± 0.00016    3.50620150 ± 0.00000047      0.97
TrES-4 b     2454230.90558 ± 0.00048    3.55392915 ± 0.00000037      1.18
WASP-48 b    2455364.55185 ± 0.00023    2.14363663 ± 0.00000015      1.01

Notes: χ²_red is the reduced chi-square for the refined linear ephemeris.
The mid-transit times were probed for possible long-term trends, which could be caused by a monotonic or periodic change of P_orb. Trial quadratic ephemerides of the form:

T_mid = T_0 + P_orb · E + (1/2) (dP_orb/dE) · E² ,

where dP_orb/dE is the change in the orbital period between succeeding transits, were evaluated. The Bayesian information criterion (BIC) disfavours the quadratic ephemerides for all planets of our sample. For four systems, HAT-P-4, HAT-P-10, HAT-P-19, and HAT-P-32, the radial acceleration of the barycentre, γ̇, was detected in the Doppler measurements (Bonomo 2017). For those systems, Table 10 lists the derived values of dP_orb/dE and the predicted values of (dP_orb/dE)_RV, which were calculated with the formula:

(dP_orb/dE)_RV = (γ̇ / c) · P_orb² .
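A worked example of this formula, using the HAT-P-4 acceleration γ̇ = 0.0246 m s^-1 day^-1 quoted in Sect. 2 (Winn 2011); the result is close to the prediction in Table 10, and the small difference presumably reflects the slightly different γ̇ adopted there.

from astropy.constants import c
import astropy.units as u

gamma_dot = 0.0246 * u.m / u.s / u.day      # Winn (2011) value for HAT-P-4
P_orb = 3.05652330 * u.day                  # refined period, Table 9

dP_dE = (gamma_dot / c * P_orb**2).to(u.day)
print(dP_dE)   # ≈ 7.7e-10 d per epoch, cf. (7.0 ± 1.0)e-10 in Table 10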
The current timing data sets do not verify the RV predictions for HAT-P-4 and HAT-P-10; these were consistent within the 1σ level only because of the high relative errors of dP_orb/dE. Interestingly, discrepant results at 6.3σ and 4.4σ were obtained for HAT-P-19 and HAT-P-32, respectively.
Table 10. Constraints on a constant period change from transit timing and radial acceleration

System     dP_orb/dE (10^-10 d)   (dP_orb/dE)_RV (10^-10 d)
HAT-P-4    -1.5 ± 9.5             7.0 ± 1.0
HAT-P-10   -0.4 ± 9.8             -9.2 ± 1.3
HAT-P-19   -13 ± 23               240 ± 33
HAT-P-32   2.6 ± 1.5              -14.5 ± 3.5
The linear ephemerides were subtracted from the timing data, and the residuals were searched for periodic signals employing the analysis of variance algorithm (AoV, Schwarzenberg-Czerny 1996). Periodograms were calculated for trial periods between 2 and 10^4 transit intervals for each planet. The empirical levels of the false alarm probability (FAP) were determined with the bootstrap method, which was based on 10^5 trials. As shown in Fig. 6, no statistically significant signal was detected for any planet.
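The bootstrap FAP estimate can be sketched as follows. Note the stated substitutions: a Lomb-Scargle periodogram stands in here for the AoV statistic actually used, synthetic pure-noise residuals replace the real O-C data, and only 10^3 resamplings are drawn instead of the 10^5 trials in the text.

import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 4000, 60))     # epochs of 60 synthetic transit times (d)
oc = rng.normal(0.0, 0.5, t.size)         # pure-noise timing residuals (min)

freq = np.linspace(1e-4, 0.5, 20000)      # trial frequencies (1/d)
peak_obs = LombScargle(t, oc).power(freq).max()

# Bootstrap: shuffling the residuals destroys any coherent periodicity
peaks = np.array([LombScargle(t, rng.permutation(oc)).power(freq).max()
                  for _ in range(1000)])
fap = np.mean(peaks >= peak_obs)          # empirical false-alarm probability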
§.§ Search for additional transiting planets
Adopting the procedure from Maciejewski (2020), we used the AoVtr code (Schwarzenberg-Czerny & Beaulieu 2006) to search for transit-like flux drops in the joint SC and LC TESS photometric time series. For HAT-P-10, HAT-P-17, HAT-P-44, and Qatar-6, the light curves were enhanced with ground-based photometric monitoring (Sect. 3.3). The refined transit ephemerides were used to mask out transits and occultations of the known giant planets. The algorithm was run for trial periods between 0.2 and 100 days with a resolution in the frequency domain equal to 5 × 10^-5 day^-1. As the algorithm's sensitivity might depend on the number of bins of a phase-folded light curve, the procedure was iterated over the number of bins from 10 to 100 with a step of 10. The periodogram with the highest peak was saved for further analysis. The statistical significance of peaks was estimated with the bootstrap method executed on 10^4 resampled datasets. The periodograms with the FAP levels are plotted in Fig. 7.
No statistically significant signal, with FAP below 0.1%, was detected for any system. For three systems, HAT-P-10, HAT-P-32, and TrES-4, the strongest peaks reached FAP levels close to 1%. We therefore visually inspected the phase-folded light curves to verify these signals and found that none of them had the shape of an actual transit.
Injection-recovery tests were adopted from Maciejewski (2020) to determine the transit detection thresholds for the individual systems; a minimal sketch of one such trial is given below. The transit depths were then converted into upper limits on the radii of potential planets that remain below the detection thresholds of the present photometric time series. The results are displayed in Fig. 8.
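In this sketch, BoxLeastSquares from astropy stands in for the AoVtr statistic actually used, and the noise level, period, duration, and depth are illustrative values only.

import numpy as np
from astropy.timeseries import BoxLeastSquares

rng = np.random.default_rng(0)
t = np.arange(0.0, 54.0, 2.0/60.0/24.0)       # two TESS sectors at 2-min cadence (d)
flux = 1.0 + rng.normal(0.0, 1.5e-3, t.size)  # white noise at a typical SC level

# Inject a box-shaped transit: period (d), duration (d), depth (relative flux)
P_inj, dur, depth = 7.3, 0.10, 6e-4
flux[(t % P_inj) < dur] -= depth

result = BoxLeastSquares(t, flux).autopower(dur)
P_rec = result.period[np.argmax(result.power)]
recovered = abs(P_rec - P_inj) / P_inj < 0.01  # success if within 1% of P_inj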
§.§ Variable stars in the HAT-P-17 field
Whilst reducing the ground-based observations, we identified four multi-periodic pulsating variable stars in the field around HAT-P-17. We extracted their light curves from TESS data for further analysis. We list those variables in Table 11 and the detected pulsation frequencies in Table 12. Their exemplary light curves are plotted in Fig. 9. Below, we give their brief characteristics.
Table 11. Short-period pulsating variable stars identified in the field of HAT-P-17

ID   Name                           RA (J2000)   Dec (J2000)    m_G (mag)
                                    hh:mm:ss.s   ±dd:mm:ss
V1   BD+30 4487                     21:36:50.1   +30:41:01.4    10.68
V2   Gaia DR3 1849720750852486656   21:39:17.7   +30:16:12.1    12.95
V3   TYC 2717-453-1                 21:38:22.2   +30:33:22.3    11.89
V4   Gaia DR3 1849743737517511296   21:39:06.9   +30:28:18.6    13.34

Notes: Coordinates and apparent brightness in the Gaia G band m_G are taken from the DR3.
V1. The star was found to display a 0.08-d periodic variation with a range of 2%. It was observed with TESS in Sector 55 with a cadence of 10 minutes. The periodogram analysis performed with the PERIOD04 package (Lenz and Breger 2005) revealed a wealth of periodicities, demonstrating that this is a multi-periodic δ Scuti star. Using a standard pre-whitening procedure (a minimal sketch of such a loop is given below), we identified 14 frequencies with a signal-to-noise ratio (S/N) above 5.4, as advocated by Baran (2015) for time-series data from space missions. The star has not been previously reported to be variable.
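The sketch below illustrates the pre-whitening idea only: PERIOD04 was used for the actual analysis, and the Lomb-Scargle implementation, frequency grid, and omitted S/N cut are simplified stand-ins.

import numpy as np
from astropy.timeseries import LombScargle

def prewhiten(t, y, n_freq=5, grid=np.linspace(1.0, 50.0, 100000)):
    """Iteratively locate, fit, and subtract the strongest sinusoid."""
    resid, found = np.array(y, float), []
    for _ in range(n_freq):
        ls = LombScargle(t, resid)
        f = grid[np.argmax(ls.power(grid))]  # frequency of the highest peak
        fit = ls.model(t, f)                 # best-fitting sinusoid (+ offset) at f
        found.append((f, 0.5 * (fit.max() - fit.min())))
        resid = resid - fit                  # remove it and search again
    return found

# Synthetic test: one 11.99 d^-1 signal of 7.3 ppth amplitude plus noise
rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 27.0, 4000))
y = 7.3e-3 * np.sin(2 * np.pi * 11.99 * t) + 1e-3 * rng.normal(size=t.size)
print(prewhiten(t, y, n_freq=1))  # should recover f ≈ 11.99, amplitude ≈ 7.3e-3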
V2. A rapid brightness modulation with a period of 0.037 d and an amplitude varying up to 2% was observed. The periodogram analysis of the TESS data, acquired in Sector 56 with the 200-s cadence, revealed 8 frequencies between 22.9 and 28.3 day^-1 with amplitudes up to 0.3%. The star is a multi-periodic δ Scuti variable. No reports on its variability can be found in the literature.
V3. We initially identified this star as a single-period 0.026-d pulsator changing its brightness by 2%. It is registered in the International Variable Star Index (VSX) and assigned a δ Scuti type. The TESS observations, obtained in Sector 56 with the 200-s cadence, allowed us to identify three further frequencies with much lower amplitudes. Therefore, we classify this star as a multi-periodic δ Scuti star.
V4. Brightness variations with a period of 0.093 d and a peak-to-peak amplitude of 4% were found for this star. It is classified in VSX as a δ Scuti variable and is registered in the Czech Variable Star Catalogue as CzeV1766. Our ground-based observations provided a hint for an amplitude modulation which was confirmed in the 200-s cadence TESS light curve from Sector 56. The power spectrum shows 16 frequencies between 10.1 and 28.2 day^-1 and amplitudes up to 1%.
Table 12: Multi-frequency solutions for the pulsating stars. For each star, the frequency (in day^-1) and the amplitude (in ppth) are given. The best-fitting parameters come from least-squares calculations; their uncertainties, given in parentheses, were estimated using Monte Carlo simulations based on 100 processes with 10^4 steps each.

No.   V1: Frequency / Amplitude    V2: Frequency / Amplitude   V3: Frequency / Amplitude      V4: Frequency / Amplitude
F1    11.99059(5) / 7.250(14)      26.81479(34) / 3.351(57)    37.6674986(12) / 11.153(34)    10.790088(22) / 9.85(8)
F2    11.82779(7) / 3.743(16)      25.60031(37) / 3.002(57)    43.735803(14) / 1.257(33)      11.245132(17) / 8.630(64)
F3    13.18320(47) / 2.260(16)     22.9902(38) / 2.36(94)      42.816332(46) / 0.447(33)      10.022355(16) / 7.78(7)
F4    11.36592(11) / 2.149(18)     28.25227(52) / 2.167(56)    42.892795(38) / 0.529(33)      17.40692(16) / 3.305(65)
F5    9.49050(30) / 1.113(17)      24.87537(86) / 1.43(6)                                     17.34429(31) / 1.62(7)
F6    9.35757(28) / 1.009(18)      22.9469(23) / 1.38(33)                                     10.81111(45) / 1.57(8)
F7    11.663(9) / 0.72(12)         24.8129(16) / 0.75(6)                                      21.58108(5) / 1.281(72)
F8    12.600(1) / 0.574(18)        23.025(37) / 0.6(1)                                        16.8804(28) / 1.23(10)
F9    10.0924(6) / 0.502(17)                                                                  10.18861(61) / 1.08(7)
F10   9.9923(11) / 0.423(19)                                                                  16.0711(2) / 1.082(63)
F11   23.8183(9) / 0.237(15)                                                                  21.26679(44) / 0.95(7)
F12   23.9803(41) / 0.179(27)                                                                 10.110(16) / 0.92(14)
F13   5.32494(61) / 0.192(18)                                                                 10.703(37) / 0.80(15)
F14   25.17385(80) / 0.146(15)                                                                28.198905(44) / 0.735(62)
F15                                                                                           21.6010(8) / 0.81(8)
F16                                                                                           22.04(5) / 0.74(13)
§ DISCUSSION
The transit parameters re-determined by us are mostly in 1-2σ agreement with previous works. For HAT-P-44, we provide the first verification of the transit parameters. For HAT-P-4 b, our value of R_p/R_⋆ agrees with the determinations of Christiansen (2011) and Winn (2011) within 0.2 and 0.4σ, respectively, while it is greater than the values of Kovács (2007) and Wang (2021) by 4.7 and 3.1σ, respectively. In those two studies, transits of HAT-P-4 b appear to be shallower, implying flux contamination of ≈12%. Even in the wide TESS aperture, the expected contamination is much lower, ≈0.6%. In addition, our tests show that keeping the LD coefficients free does not overestimate R_p/R_⋆. For TrES-4 b, our value of R_p/R_⋆ agrees with the determinations by Mandushev (2007), Sozzetti (2009), and Chan (2011) within 1σ. However, Sozzetti (2015) reported deeper transits and hence a greater radius for the planet, deviating from our result by 4.1σ. To investigate the source of this discrepancy, we re-analysed the original light curves of Sozzetti (2015). The apparently deeper transits result from the simplifications adopted by those authors: the usage of a constant flux baseline outside the transits in conjunction with a relatively short coverage of out-of-transit observations, and a simplified LD law in the linear form. Our approach applied to those light curves yields a value of R_p/R_⋆ that is consistent with other studies.
Our transit timing analysis revealed no sign of deviation from the linear ephemeris for each system. For two systems, HAT-P-19 and HAT-P-32, the RV accelerations claimed in the literature would produce apparent shortening or lengthening of the orbital period due to the light travel time (LTT) effect (Irwin 1952). Wide stellar companions in these systems can be excluded due to multiplicity studies of exoplanet host stars (see Mugrauer 2019, Michel and Mugrauer 2021). For HAT-P-19, the outward acceleration of the systemic barycentre was detected by Hartman (2011a) and confirmed in the homogenous RV analysis by Bonomo (2017). The phenomenon of this magnitude would manifest as an apparent lengthening of the orbital period, giving a cumulative departure from the linear transit ephemeris by ≈24 minutes over 13 years, the coverage of transit timing observations. We can discard the constant accelerations with γ̇ > 0.058 m s^-1 day^-1 and γ̇ < -0.108 m s^-1 day^-1 at 95% confidence. The reported RV slope must be a fragment of a periodic signal shorter than the span of transit timing observations. Our reanalysis of RV measurements from Hartman (2011a) reveals a planetary signal with an amplitude of ≈ 30 m s^-1, translating into the mass of the planetary companion of ≈0.9 M_ Jup / sin i. The RV data set, however, places weak constraints on the signal's period, which is correlated with an orbital eccentricity: from 260 days for a circular orbit to tens of thousands of days for orbital eccentricities above 0.9.
We performed an analogous analysis for HAT-P-32, for which γ̇ = -0.094± 0.023 m s^-1 day^-1 was reported (Knutson 2014, Bonomo 2017). Thanks to their 13-year span, the transit observations of HAT-P-32 b allowed us to discard constant accelerations with γ̇ < -0.0007 m s^-1 day^-1 and γ̇ > 0.010 m s^-1 day^-1 at 95% confidence. The only RV data set was acquired by Knutson (2014) between 2008 and 2012, providing a time coverage of about 1800 days. Thus, the orbital period of a third body in the system remains poorly constrained. Our numerical experiments show that a low-mass brown dwarf companion (M_ csin i_ c≈ 20 M_ Jup) would produce a detectable TTV signal with a period of about 17 years and a peak-to-peak amplitude of 2 minutes. Thus, the companion is rather a massive planet on a 4–6 au orbit.
The wide-orbit companions to the hot Jupiters in planetary systems such as HAT-P-19 and HAT-P-32 are potentially responsible for driving the migration of those planets. Knowing their nature would be meaningful for theories of the formation of systems with hot Jupiters.
For the remaining systems with non-zero Doppler acceleration, HAT-P-4 and HAT-P-10, the constraints coming from transit timing are too weak to address the values of γ̇ derived from the RV measurements. As summarised in Table 10, our values of d P_orb/ d E are consistent within 1σ with the RV accelerations, as well as with zero.
For all of the planets of our sample, the flux contamination in the TESS aperture was found to be statistically indistinguishable from 0. This finding aligns with the values extracted with the online tool FluxCT (Schonhut-Stasik & Stassun 2023), which are listed in Table 6. For the majority of the systems, these contaminations are at the level of 2% or lower, so they are negligible. The only exception is one system, for which c_ FluxCT = 4.8%. Our determination of c_ F = 6.5^+3.4_-3.6% agrees with that value within 1σ.
Our analysis reveals no transiting planetary companions to the hot Jupiters of our sample. These planets were also found to be perfect clocks, with their transits following the linear ephemerides. This lack of resonant planetary companions completes the picture of the loneliness of hot Jupiters. However, Sariya (2021) postulated the presence of a possible non-sinusoidal TTV for HAT-P-12 b, contrary to the conclusions of Öztürk & Erdem (2019). These perturbations would be induced by a 0.2 M_ Jup companion on an 8.8-day orbit. Whilst this model was found to improve the value of the reduced χ^2 for the transit timing data set of Sariya (2021), we note that it is incompatible with the single-planet RV solution: the purported companion would produce an RV signal with an amplitude similar to that of HAT-P-12 b itself, which is not supported by the RV data. Our best-fitting linear ephemeris has χ^2_ red = 1.7, a value inflated by two outlying points from the literature, one from Alexoudi (2018) and another from Mancini (2018). Rejecting them causes the χ^2_ red value to drop to 1.0. Our investigation failed to identify the reason why those points stand out.
The negative results of our search for nearby planetary companions to hot Jupiters add to the non-detections already discussed in the literature (e.g., Hord 2021, Wang 2021). The known companions to hot Jupiters, including a most recently discovered super-Earth, WASP-132 c (Hord 2022), are smaller than Neptune. The sensitivity of transit detection is related to, among other factors, the photometric precision, the amount of photometric data, and the size of the host star. We went down to Neptune sizes for HAT-P-4 and TrES-4 and entered the super-Earth regime in the HAT-P-17 system. For the remaining systems, we probed down to mini-Neptunes.
§ CONCLUSIONS
As the transit timing data discard the constant acceleration scenario for HAT-P-19 b and HAT-P-32 b, their systems may contain additional planets on wide orbits. Precise Doppler follow-up studies could confirm their existence. Transit times for HAT-P-12 b were consistent with the constant period model, discarding the non-sinusoidal TTV signal, which was recently claimed in the literature. The loneliness of the hot Jupiters of our sample supports the high-eccentricity migration as a primary path leading to the formation of systems with massive planets stripped of any close-in planetary companions.
We would like to thank all participants involved in the observations, especially F. Hildebrandt. GM acknowledges the financial support from the National Science Centre, Poland through grant no. 2016/23/B/ST9/00579. MF and PJA acknowledge financial support from grants PID2019-109522GB-C52/AEI/ 10.13039/501100011033 of the Spanish Ministry of Science and Innovation (MICINN) and PY20_00737 from Junta de Andalucía. MF, AS, and PJA acknowledge financial support from the grant CEX2021-001131-S funded by MCIN/AEI/10.13039/ 501100011033. RB and MM acknowledge the support of the DFG priority program SPP 1992 “Exploring the Diversity of Extrasolar Planets” in projects NE515/58-1 and MU2695/27-1. This paper includes data collected with the TESS mission, obtained from the MAST data archive at the Space Telescope Science Institute (STScI). Funding for the TESS mission is provided by the NASA Explorer Program. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. This research made use of Lightkurve, a Python package for Kepler and TESS data analysis (Lightkurve Collaboration, 2018). This research has made use of the SIMBAD database and the VizieR catalogue access tool, operated at CDS, Strasbourg, France, and NASA's Astrophysics Data System Bibliographic Services. This research has made use of the International Variable Star Index (VSX) database, operated at AAVSO, Cambridge, Massachusetts, USA. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 730890. This material reflects only the authors' views and the Commission is not liable for any use that may be made of the information contained therein.
Adams, E.R., et al. 2013, AJ, 146, 9
Alam, M.K., et al. 2020, AJ, 160, 51
Albrecht, S., et al. 2012, ApJ, 757, 18
Alexoudi, X., et al. 2018, A&A, 620, A142
Alsubai, K., et al. 2018, AJ, 155, 52
Bakos, G.Á., et al. 2009, ApJ, 696, 1950
Baran, A.S., Koen, C., & Pokrzywka, B. 2015, MNRAS, 448, L16
Barnsley, R.M., et al. 2016, J. Astron. Telesc. Instrum. Syst., 2, id. 015002
Baştürk, Ö., et al. 2020, MNRAS, 496, 4174
Baştürk, Ö., et al. 2022, MNRAS, 512, 2062
Batygin, K., et al. 2016, ApJ, 829, 114
Bonomo, A.S., et al. 2017, A&A, 602, A107
Brasseur, C.E., et al. 2019, Astrophysics Source Code Library, ascl:1905.007
Chan, T., et al. 2011, AJ, 141, 179
Christiansen, J.L., et al. 2011, ApJ, 726, 94
Ciceri, S., et al. 2015, A&A, 577, A54
Claret, A. & Bloemen, S. 2011, A&A, 529, A75
Clark, B.J.M., et al. 2018, A&A, 615, A86
Collins, K.A., et al. 2017, AJ, 153, 77
Czesla, S., et al. 2022, A&A, 657, A6
Damiano, M., et al. 2017, AJ, 154, 39
Dawson, M., Murray-Clay, R.A., & Johnson, J.A. 2015, ApJ, 798, 66
Enoch, B., et al. 2011, AJ, 142, 86
Fulton, B.J., et al. 2011, AJ, 142, 84
Fulton, B.J., et al. 2013, ApJ, 772, 80
Gaia Collaboration, et al. 2021, A&A, 649, A1
Gazak, J.Z., et al. 2012, Advances in Astronomy, 2012, 30
Gibson, N.P., et al. 2013, MNRAS, 436, 2974
Hartman, J.D., et al. 2009, ApJ, 706, 785
Hartman, J.D., et al. 2011a, ApJ, 726, 52
Hartman, J.D., et al. 2011b, ApJ, 742, 59
Hartman, J.D., et al. 2014, AJ, 147, 128
Hernandez Camero, J.H., et al. 2023, MNRAS, 520, 4103
Hord, B.J., et al. 2021, AJ, 162, 263
Hord, B.J., et al. 2022, AJ, 164, 13
Howard, A.W., et al. 2012, ApJ, 749, 134
Irwin, J.B. 1952, ApJ, 116, 211
Ivshina, E.S. & Winn, J.N. 2022, ApJS, 259, 62
Jiang, C., et al. 2021, A&A, 656, A114
Knutson, H.A., et al. 2014, ApJ, 785, 126
Kovács, G., et al. 2007, ApJ, 670, L41
Lee, J.W., et al. 2012, AJ, 143, 95
Lenz, P. & Breger, M. 2005, CoAst, 146, 53
Line, M.R., et al. 2013, ApJ, 778, 183
Lightkurve Collaboration, et al. 2018, Astrophysics Source Code Library, ascl:1812.013
Maciejewski, G. 2020, Acta Astron., 70, 181
Maciejewski, G. 2022, Acta Astron., 72, 1
Maciejewski, G., et al. 2018, IBVS, 6243
Mallonn, M. & Strassmeier, K.G. 2016, A&A, 590, A100
Mallonn, M., et al. 2015a, A&A, 580, A60
Mallonn, M., et al. 2015b, A&A, 583, A138
Mallonn, M., et al. 2016, MNRAS, 463, 604
Mallonn, M., et al. 2019a, A&A, 622, A81
Mallonn, M., et al. 2019b, A&A, 624, A62
Mancini, L., et al. 2015, A&A, 579, A136
Mancini, L., et al. 2018, A&A, 613, A41
Mancini, L., et al. 2022, A&A, 664, A162
Mandushev, G., et al. 2007, ApJ, 667, L195
Michel, K.-U. & Mugrauer, M. 2021, Frontiers in Astronomy and Space Sciences, 8, id. 14
Mugrauer, M. 2019, MNRAS, 490, 5088
Mugrauer, M. & Berthold, T. 2010, Astron. Nachr., 331, 449
Mugrauer, M., Ginski, C., & Seeliger, M. 2014, MNRAS, 439, 1063
Murgas, F., et al. 2017, A&A, 605, A114
Mustill, A.J., et al. 2015, ApJ, 808, 14
Narita, N., et al. 2010, PASJ, 62, 653
Ngo, H., et al. 2015, ApJ, 800, 138
Nortmann, L., et al. 2016, A&A, 594, A65
O'Rourke, J.G., et al. 2014, ApJ, 781, 109
Öztürk, O. & Erdem, A. 2019, MNRAS, 486, 2290
Patel, J.A. & Espinoza, N. 2022, AJ, 163, 228
Rice, M., et al. 2023, AJ, 165, 65
Ricker, G.R., et al. 2015, J. Astron. Telesc. Instrum. Syst., 1, id. 014003
Sada, P.V. & Ramón-Fox, F.G. 2016, PASP, 128, 024402
Saffe, C., et al. 2017, A&A, 604, L4
Sariya, D.P., et al. 2021, Res. Astron. Astrophys., 21, 097
Schonhut-Stasik, J. & Stassun, K. 2023, RNAAS, 7, id. 18
Schwarzenberg-Czerny, A. 1996, ApJ, 460, L107
Schwarzenberg-Czerny, A. & Beaulieu, J.-Ph. 2006, MNRAS, 365, 165
Seeliger, M., et al. 2014, MNRAS, 441, 304
Seeliger, M., et al. 2015, MNRAS, 451, 4060
Sing, D.K., et al. 2016, Nature, 529, 59
Southworth, J. 2011, MNRAS, 417, 2166
Sozzetti, A., et al. 2009, ApJ, 691, 1145
Sozzetti, A., et al. 2015, A&A, 575, L15
Steele, I.A., et al. 2004, Proc. SPIE, 5489, 679, doi:10.1117/12.551456
Todorov, K.O., et al. 2013, ApJ, 770, 102
Tregloan-Reed, J., et al. 2018, MNRAS, 474, 5485
Turner, J.D., et al. 2017, MNRAS, 472, 3871
Wang, X.-B., et al. 2014, AJ, 147, 92
Wang, Y.-H., et al. 2019, AJ, 157, 82
Wang, X.-Y., et al. 2021, ApJS, 255, 15
West, R.G., et al. 2009, A&A, 502, 395
Winn, J.N., et al. 2011, AJ, 141, 63
Wong, I., et al. 2020, AJ, 159, 234
Wu, D.-H., Rice, M., & Wang, S. 2023, AJ, 165, 171
Yan, F., et al. 2020, A&A, 642, A98
Zhao, M., et al. 2014, ApJ, 796, 115
|
http://arxiv.org/abs/2307.03048v1
|
20230706151423
|
Origin-Destination Travel Time Oracle for Map-based Services
|
[
"Yan Lin",
"Huaiyu Wan",
"Jilin Hu",
"Shengnan Guo",
"Bin Yang",
"Youfang Lin",
"Christian S. Jensen"
] |
cs.LG
|
[
"cs.LG"
] |
^1School of Computer and Information Technology, Beijing Jiaotong University, China
^2Department of Computer Science, Aalborg University, Denmark
^3Beijing Key Laboratory of Traffic Data Analysis and Mining, Beijing, China
ylincs, hywan, guoshn@bjtu.edu.cn, hujilin, byang, csj@cs.aau.dk
Given an origin (O), a destination (D), and a departure time (T), an Origin-Destination (OD) travel time oracle (ODT-Oracle) returns an estimate of the time it takes to travel from O to D when departing at T. ODT-Oracles serve important purposes in map-based services. To enable the construction of such oracles, we provide a travel-time estimation (TTE) solution that leverages historical trajectories to estimate time-varying travel times for OD pairs.
The problem is complicated by the fact that multiple historical trajectories with different travel times may connect an OD pair, while trajectories may vary from one another. To solve the problem, it is crucial to remove outlier trajectories when doing travel time estimation for future queries.
We propose a novel two-stage framework called Diffusion-based Origin-destination Travel Time Estimation (DOT) to solve this problem. First, DOT employs a conditioned Pixelated Trajectories (PiT) denoiser that enables building a diffusion-based PiT inference process by learning correlations between OD pairs and historical trajectories. Specifically, given an OD pair and a departure time, we aim to infer a PiT. Next, DOT encompasses a Masked Vision Transformer (MViT) that effectively and efficiently estimates a travel time based on the inferred PiT. We report on extensive experiments on two real-world datasets that offer evidence that DOT is capable of outperforming baseline methods in terms of accuracy, scalability, and explainability.
Origin-Destination Travel Time Oracle for Map-based Services
Yan Lin^1,3,
Huaiyu Wan^1,3,
Jilin Hu^2,
Shengnan Guo^1,3,
Bin Yang^2,
Youfang Lin^1,3,
Christian S. Jensen^2
August 1, 2023
======================================================================================================================
§ INTRODUCTION
The diffusion of smartphones and the ongoing digitalization of societal processes combine to enable a wide range of map-based services <cit.>. Many such services rely on the availability of origin-destination (OD) oracles that provide estimates of the travel times, distances, and paths between origin (O) and destination (D) locations, e.g., distance oracles <cit.> and path oracles <cit.>.
Examples include pricing in outsourced transportation services <cit.>, estimation of overall travel costs <cit.>, transportation scheduling <cit.>, delivery services <cit.>, and traffic flow prediction <cit.>.
For example, in flex-transport, taxi companies are paid by a public entity for making trips. The payments are based on pricing models that involve estimating the travel times of trips, while the driver is free to choose any travel path.
We study the problem of constructing an OD travel time oracle (ODT-Oracle) that takes an OD pair and a departure time T as input, and returns a travel time Δ t needed to travel from O to D when departing at time T.
An ODT-Oracle can provide accurate estimates of travel times, which is valuable for helping public entities, taxi companies, and other transportation service providers plan their operations more effectively, while minimizing the need for detailed travel path information.
Figure <ref> shows four trajectories for similar OD pairs: 𝒯_1=⟨ (O_1, 8:00), (g^1_1, t^1_1), (g^2_1, t^2_1), (g^3_1, t^3_1), (D_1, 8:15) ⟩, 𝒯_2=⟨ (O_2, 8:02), (g^1_2, t^1_2), (g^2_2, t^2_2), (g^3_2, t^3_2), (D_2, 8:17) ⟩, 𝒯_3=⟨ (O_3, 8:05), (g^1_3, t^1_3), (g^2_3, t^2_3), (g^3_3, t^3_3), (D_3, 8:20) ⟩, and 𝒯_4=⟨ (O_4, 8:04), (g^1_4, t^1_4), (g^2_4, t^2_4), (g^3_4, t^3_4), (D_4, 8:39) ⟩, where each element (g, t) denotes a GPS point g with timestamp t. Thus, we have four data instances for the ODT-Oracle: [O_1, D_1, 8:00] → 15, [O_2, D_2, 8:02] → 15, [O_3, D_3, 8:05] → 15, and [O_4, D_4, 8:04] → 35, where the travel time is the arrival time minus the departure time, e.g., 𝒯_1 has travel time 8:15-8:00=15 min.
We observe that 𝒯_4 is very different from the other three trajectories since it goes via place B, which makes it an outlier. When given a query Q=[O_5, D_5, 8:10], the ODT-Oracle is to estimate the travel time from O_5 to D_5 at 8:10. Suppose we have an unseen trajectory 𝒯_5=⟨ (O_5, 8:10), (g^1_1, t^1_1), (g^2_1, t^2_1), (g^3_1,t^3_1), (D_5, 8:25) ⟩. Then, we can use the travel time of 𝒯_5, 15min, as the ground truth result for the query Q=[O_5, D_5, 8:10]. The closer the result is to 15min, the more accurate the ODT-Oracle is.
Existing studies that attempt to solve this problem can be classified into two main categories <cit.>: non-machine-learning-based methods <cit.> and machine-learning-based methods <cit.>.
We can further classify non-machine learning-based methods into historical trajectory-based methods and path-based methods. TEMP <cit.> is a representative historical trajectory-based method that estimates the travel time of a given ODT-Input by averaging the travel time of historical trajectories that have a similar origin, destination and departure time. It suffers from poor accuracy when very different trajectories exist for the same O and D. For example, Figure <ref> has four trajectories with similar departure times, where three similar trajectories (𝒯_1, 𝒯_2, and 𝒯_3) have travel time 15 minutes, while the latter (𝒯_4) goes via point B and takes 35 minutes.
In this case, the historical trajectory-based method returns (15×3+35)/4=20 minutes, which is inaccurate due to outlier 𝒯_4.
The path-based methods effectively solve a shortest path problem. They first map the GPS coordinates of the origin and destination onto a road network using map-matching <cit.>, i.e., O→ O' and D→ D'. Then, they calculate the shortest path from O' to D' and return the travel time of this path as the estimated OD travel time. However, this type of method may also have poor accuracy due to two reasons. First, the map-matching results may be inaccurate. Second, the weights in the road network are not accurate. For example, in Figure <ref>, the query origin O_5 and destination D_5, are mapped to O_5' and D_5', respectively. Then, a path-based method finds the shortest path from O_5' to D_5', which is the dashed line in Figure <ref>. However, the underlying trajectories relevant to the query [O_5, D_5, 8:10] do not use the shortest path, which makes the travel time estimate of path-based methods inaccurate.
When considering the machine learning-based methods, most existing studies <cit.> model an ODT-Oracle as a regression problem without considering historical trajectories. Recall that Figure <ref> has four historical trajectories with similar OD pairs, which constitute four independent training data instances for regression models.
When regression models are trained with this data by using the least squares method or mean squared error (MSE) through backpropagation, they are likely to output 20 minutes when fed the same OD pair. Therefore, the regression-based methods also experience poor accuracy.
The recent DeepOD <cit.> attempts to alleviate this problem by introducing an auxiliary loss. Given a historical trajectory, it first learns an OD representation with the given OD pair, departure time, and external features. It also tries to learn a representation of the given trajectory. The next step is to match the two representations, i.e., the OD and affiliated trajectory representations, which is the auxiliary loss. Then the OD representation is used to estimate the travel time that is compared with the ground truth, which is the main loss. Finally, DeepOD is trained with the combination of main loss and the auxiliary loss, which enables performance improvements. However, in the example in Figure <ref>, outlier 𝒯_4 is still used for training in DeepOD, which is the key reason for the reduced accuracy of existing solutions for ODT-Oracles.
In this paper, we propose a new trajectory format, namely that of a Pixelated Trajectory (PiT). We represent a PiT using an image format, which is denoted as ℝ^N× M × C, where N and M are the height and width of the PiT, respectively, and C denotes the number of features in the PiT, which consists of a Mask, a Time of the day (ToD), and a Time offset.
Figure <ref> shows examples of PiTs for 𝒯_1, 𝒯_2, 𝒯_3, and 𝒯_4. In the example, we use N=3, M=4, and C=1, so the PiT has the format X∈ℝ^3× 4 × 1, where X[·, ·, 1] is the mask that captures whether the cells are traversed by the trajectory or not.
Although 𝒯_1, 𝒯_2, and 𝒯_3 are different, their corresponding PiTs are very similar. This similarity helps the model learn common patterns and characteristics from them. On the other hand, PiT_4 is quite different from the other PiTs, which allows for easier identification and removal of this PiT as an outlier. This demonstrates the effectiveness of the proposed PiT representation in handling different trajectories and identifying outliers.
To remove the impact of outliers, we propose a novel, two-stage ODT-Oracle framework, called Diffusion-based Origin-destination Travel Time Estimation (DOT).
In the first stage, we propose a PiT inference model to infer the PiT for a query (O, D, T). Given the example in Figure <ref> and the query Q=[O_5, D_5, 8:10], we aim to generate a PiT that is similar to PiT_1, PiT_2, and PiT_3, but is different from PiT_4.
To do this, we first propose a conditional diffusion model to generate a PiT based on the origin, destination, and departure time. Then, we leverage historical trajectory data to train this model such that it can generate a PiT when (O, D, T) is given.
In the second stage, called PiT travel time estimation, we aim to estimate travel times based on the generated PiTs. For example, given the PiTs in Figure <ref>, we generate a PiT that is similar to PiT_1, PiT_2, and PiT_3 but is different from PiT_4. So, given the query Q=[O_5, D_5, 8:10], the outlier 𝒯_4 is disregarded during estimation.
Since the inferred PiT is formed by cells with spatial-temporal features, we attempt to model the global correlations in PiTs based on self-attention and the vision Transformer (ViT) <cit.> to improve the estimation accuracy.
Considering that a PiT only occupies very few cells in the whole image, we further propose a Masked Vision Transformer (MViT) equipped with an efficient self-attention masking scheme to speed up the estimation process. The efficiency of training and estimation is improved substantially compared to the original ViT.
In summary, we make the following contributions:
* We propose a novel two-stage framework for enabling accurate ODT-Oracles.
* We introduce a PiT inference model that identifies a PiT conditioned on a query (O, D, T), making it possible to reduce the impact of outlier trajectories.
* We propose a Masked Vision Transformer to model global spatial-temporal correlation in the inferred PiT, enabling more effective and efficient travel time estimation.
* We report on extensive experiments on two real-world datasets that offer evidence that the proposed method is capable of outperforming existing methods.
The remainder of the paper is organized as follows. Section <ref> covers related work.
Section <ref> introduces preliminaries and formalizes the problem. Sections <ref> and <ref> detail PiT inference and travel time estimation, respectively. Section <ref> reports on the empirical study, and Section <ref> concludes.
§ RELATED WORK
§.§ Travel Time Estimation
Travel time estimation (TTE) has important applications. In location-based services, TTE can provide users with information on how to plan their trip or on when their packages will arrive. In urban planning, many data analyses, such as flow prediction and congestion prediction, depending on the results of TTE. TTE solutions can be classified into path-based TTE and ODT-Oracles, where the biggest difference is whether a path is given.
Path-based travel time estimation
Early studies implement regression techniques <cit.> or decomposition methods <cit.> to estimate the travel time of trajectories. The accuracies of these methods are limited due to their limited modeling capacities. More recently, deep learning approaches that are capable of modeling complex spatial-temporal correlations in trajectories have been gaining attention. WDR <cit.>, DeepTTE <cit.>, DeepETA <cit.>, TADNM <cit.>, and CompactETA <cit.> all utilize recurrent neural networks <cit.> to model the sequential spatial-temporal information embedded in trajectories.
Considering travel time as distributions rather than scalar values, DeepGTT <cit.> implements a variational encoder to model the travel time distributions of trajectories.
To further improve the prediction performance, TAML <cit.>, DRTTE <cit.>, and WDDRA <cit.> implement a multitask learning framework to make the estimation of timestamps more accurate.
DFTTE <cit.> proposes a fusion network to merge multiple sources of information for prediction. MetaTTE <cit.> incorporates meta-learning techniques to generalize travel time estimation to multi-city scenarios.
STDGCN <cit.> employs neural architecture search techniques to identify the optimal network structure for estimation.
Path-routing methods
The superiority of path-based TTE methods is enabled by their affiliated trajectories. But in the scenario of an ODT-Oracle, paths are unknown to the model, rendering path-based methods inapplicable.
One plausible solution to this issue involves leveraging path-routing methods to infer the potential paths between a given origin-destination pair.
Classical algorithms such as Dijkstra's algorithm <cit.> and various alternative routing methods <cit.> primarily focus on determining the paths associated with minimal travel costs, yet their calculated routes may deviate from the actual paths drivers would choose.
DeepST <cit.> makes use of historical travel behavior derived from trajectory data, thereby enhancing the accuracy of generated paths.
It's noteworthy that route recovery methods <cit.> and map-matching algorithms <cit.>, though bearing structural similarity to path-routing methods, often demand specific details such as arrival time or comprehensive GPS sequences, which are impractical in the context of the ODT-Oracle.
ODT-Oracles
The motivation for building an ODT-Oracle comes from problems related to path-based travel time estimation. During estimation, only origin and destination locations and departure times are given, and it is challenging to build an accurate ODT-Oracle since the underlying trajectories are unknown. TEMP <cit.> averages the travel times of historical travels and does not contain any learnable parameter. ST-NN <cit.> proposes to learn a non-linear mapping of origin-destination location pairs to the corresponding travel times using neural networks. MURAT <cit.> extends the input features with embeddings from road segments, spatial cells, and temporal slots.
Nevertheless, the underlying trajectory of a trip is highly related to the travel time. All the above methods ignore the correlations between origin-destination location pairs and travel trajectories in historical data, so their prediction accuracies are limited. DeepOD <cit.> addresses this problem by incorporating historical trajectories during training and making the embeddings of origin-destination pairs and travel trajectories close in the latent space. However, it is sensitive to outliers in historical trajectories and is unable to provide explainable travel times.
§.§ Diffusion Models
The diffusion model is a generative model recently popularized in computer vision <cit.>. Coming from nonequilibrium thermodynamics <cit.>, diffusion models have a strong theoretical background and have been proven to perform excellently in image generation. There are two Markov processes in diffusion models: the forward diffusion process and the reverse denoising diffusion process. The forward diffusion process adds noise to a clean image through a pre-defined noise schedule until it turns into a Gaussian noise. The reverse denoising diffusion process removes noise from the noisy image step-by-step until the clean image is recovered. After training, one can utilize the reverse process to randomly generate images that follow the distribution of the training dataset.
Due to the effectiveness and flexibility of diffusion models, many ongoing efforts aim to migrate them into other domains and tasks, such as time series imputation <cit.>, audio synthesis <cit.>, shape generation <cit.> and language modeling <cit.>. In this study, we exploit diffusion models as a powerful means of modeling correlations from historical trajectories. Specifically, we propose a conditioned diffusion model for accurate and explainable ODT-Oracle.
§ PRELIMINARIES
§.§ Definitions
A trajectory 𝒯 is a sequence of timestamped GPS points: 𝒯= ⟨ (g_1, t_1), (g_2, t_2), …, (g_|𝒯|, t_|𝒯|) ⟩, where g_i=(lng_i,lat_i), i=1,…,|𝒯| denotes i-th GPS point, and |𝒯| denotes the total number of GPS points in the trajectory.
Given an area of interest on the map, usually the area covering all historical trajectories, we split the longitude and latitude ranges equally into L_G segments each, resulting in a total of L_G^2 spatial cells. A Pixelated Trajectory (PiT), X∈ℝ^L_G× L_G× C, is represented as a tensor, where C is the number of channels. Each trajectory has a corresponding PiT.
In one PiT, the feature X[x,y,k] records the value of k-th feature in cell (x, y).
We utilize three feature channels, i.e., Mask, Time of the day (ToD), and Time offset. If the GPS points in a trajectory 𝒯 never fall into the cell (x, y), then all three channels of the corresponding cell are set to -1. If a GPS point (g_i, t_i) in 𝒯 is the earliest one that falls into the cell (x, y), we calculate the values of three channels as follows.
* Mask. Indicates whether the trajectory contains GPS points located in this cell or not. X[x, y, 1]=1 means there are one or more GPS points in the trajectory that are located in cell (x, y).
* ToD. A normalized value with range [-1, 1] that denotes when the cell is visited. It can be calculated as X[x, y, 2]=2×(t_i % 86400) / 86400 -1, where t_i denotes the Unix timestamp of i-th GPS point in the trajectory 𝒯 and 86400 is the total number of seconds in 24 hours.
* Time offset. Also, a normalized value with range[-1, 1] indicates the visiting order of this cell in the trajectory. It can be calculated as X[x, y, 3]=2× (t_i - t_1) / (t_|𝒯| - t_1) - 1, where t_i denotes the Unix timestamp of i-th GPS point in the trajectory 𝒯.
Figure <ref> demonstrates an example of PiT construction. Suppose we have a trajectory 𝒯=⟨ (g_1, 9:00), (g_2, 9:36), (g_3, 12:00) ⟩, and we split the area of interest into 3× 3 cells. GPS points g_1, g_2, g_3 are located in cells (3,1), (2,2), (1,3), respectively. Thus, we set the mask channels of the three cells to 1. The ToD channels of the cells are calculated as X[3,1,2]=2× 9× 60× 60/86400-1=-0.25, X[2,2,2]=2× (9× 60+36)× 60/86400-1=-0.2, X[1,3,2]=2× (12× 60× 60)/86400-1=0.0, respectively. The time offset channel of the cell (2,2) is calculated as X[2,2,3]=2× (9× 60+36-9× 60)/(12× 60-9× 60)-1=2× 36/180-1=-0.6, while X[3,1,3]=-1.0, X[1,3,3]=1.0. For the other cells, where no GPS point is located, all three channels are set to -1.
In this way, we obtain a PiT, X∈ℝ^3× 3× 3, corresponding to the trajectory 𝒯.
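As an illustration, the construction above can be written in a few lines of Python; build_pit and its bounding-box argument are our own illustrative names and are not claimed to match the authors' implementation.

import numpy as np

def build_pit(trajectory, bbox, L_G=3):
    # trajectory: chronological list of (lng, lat, unix_time) points;
    # bbox = (lng_min, lng_max, lat_min, lat_max) delimits the area of
    # interest. Returns X in R^{L_G x L_G x 3} with channels
    # (Mask, ToD, Time offset); unvisited cells stay at -1.
    X = -np.ones((L_G, L_G, 3))
    lng_min, lng_max, lat_min, lat_max = bbox
    t_first, t_last = trajectory[0][2], trajectory[-1][2]
    for lng, lat, t in trajectory:
        x = min(int((lng - lng_min) / (lng_max - lng_min) * L_G), L_G - 1)
        y = min(int((lat - lat_min) / (lat_max - lat_min) * L_G), L_G - 1)
        if X[x, y, 0] == 1:
            continue                                  # keep the earliest visit
        X[x, y, 0] = 1                                # Mask
        X[x, y, 1] = 2 * (t % 86400) / 86400 - 1      # Time of the day
        X[x, y, 2] = 2 * (t - t_first) / (t_last - t_first) - 1  # Time offset
    return X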
By representing historical trajectories using PiT rather than using raw sequential features, our method can focus on differences in trajectories that will actually affect the travel time, rather than minor diversities that might cause disturbances. On the other hand, transforming sequential trajectories into equally-sized images is better suited for the two-stage framework we introduce later.
The ODT-Input for an ODT-Oracle, odt, is a tuple that consists of three elements: odt=(g_o, g_d, t_o), where g_o and g_d are the GPS coordinates of the origin and destination, respectively. t_o denotes the departure time.
§.§ Problem Formulation
OD Travel Time Oracle. Given the set of historical trajectories 𝕋, we aim to learn an ODT-Oracle f^𝕋_θ that estimates the travel time Δ t and the PiT X for any future ODT-Input, odt. Note that the odt in a query is not required to appear in 𝕋. Formally, we have:
f^𝕋_θ: odt ↦ (Δ t, X)
§.§ Method Overview
In this work, we propose a Diffusion-based Origin-destination Travel Time Estimation (DOT) method to build an accurate and explainable ODT-Oracle, which follows a two-stage framework.
The first stage is the PiT inference stage, shown in Figure <ref>(a); the second stage is the PiT travel time estimation stage, shown in Figure <ref>(b).
In the PiT inference stage, we try to infer the PiT corresponds to a given ODT-Input. The ODT-Input is considered the conditional information incorporated into a conditioned PiT denoiser.
The denoiser samples from standard Gaussian noise at the beginning. Then, it produces the inferred PiT conditioned on the ODT-Input through a multi-step conditioned denoising diffusion process.
In the PiT travel time estimation stage, we estimate the travel time based on the inferred PiT. The PiT is first flattened and mapped into a feature sequence to capture the global spatial-temporal correlation better. To improve the model's efficiency, we propose a Masked Vision Transformer (MViT) to estimate the travel time.
We explain the proposed method in detail in the following sections. The PiT inference stage is introduced in Section <ref>, and the PiT travel time estimation stage is introduced in Section <ref>.
§ MODELING HISTORICAL TRAJECTORIES THROUGH PIT INFERENCE
§.§ Diffusion-based PiT Inference
Given the origin-destination location pair of a future trip, the travel time is highly related to the route that a driver takes.
However, the actual route is unavailable at the time of estimating. To achieve accurate ODT-Oracle, we aim to comprehensively learn the correlation between ODT-Inputs and travel trajectories from historical trips. Then, we can utilize the learned correlation to infer travel time based on possible trajectories given the future ODT-Inputs.
Suppose we have the historical trajectory, denoted as 𝒯, and the corresponding ODT-Input, odt=(g_o, g_d, t_o) of the trip. We aim to learn a posterior probability p(𝒯|odt), which represents the relationships between odt and 𝒯 from the set of historical trajectories 𝕋.
Since we represent a trajectory 𝒯 using the PiT, denoted as X, which is stated in Section <ref>, the probability can be written as p(X|odt). However, this probability is unknown in reality, so we aim to learn it through a set of learnable parameters θ. The estimated probability is then denoted as p_θ(X|odt).
There is no doubt that the learning model p_θ is crucial to achieving accurate ODT-Oracle. If the inferred travel trajectory from p_θ is very different from the expected trajectory given a future ODT-Input, then the estimated travel time cannot be good.
In this paper, we build our learning model based on Denoising Diffusion Probabilistic Models (DDPM) <cit.>.
Since DDPM models the unconditioned data distribution p(X), the generated data is not restricted, which can be any signal from the underlying data distribution. For example, it can generate various PiTs, whose origins and destinations differ. However, in our case, we aim to model the data distribution conditioned on ODT-Input odt. Therefore, we propose a diffusion-based conditioned PiT inference framework, which consists of a diffusion process and a conditioned denoising diffusion process. We detail these two processes in the following Sections.
§.§.§ The diffusion process for adding noise to PiTs
Intuitively, the diffusion process is to add noise to a signal step by step until we reach a simple prior distribution, e.g., Gaussian distribution. Figure <ref>(a) gives a clear illustration of how the diffusion process works in our paper, i.e., we try to obtain a noisy PiT, X_N, by giving a clear PiT, X_0, in a total of N diffusion steps. A single step of this diffusion process can be formulated as follows.
q(X_n|X_n-1)=𝒩(X_n; √(1-β_n)X_n-1,β_n 𝐈),
where 𝒩 (μ;Σ) is the Gaussian distribution with mean value μ and covariance matrix Σ, β_n is the coefficient used for controlling the noise level in the n-th step. In practice, β_n is often fixed for every step and follows a monotonic schedule so that the added noise level increases with n. We follow the linear schedule used in DDPM <cit.>, where β_n scales linearly from 0.0001 to 0.02 with n.
Starting from the clear PiT X_0, we can get the noisy PiT in the n-th step by:
q(X_n|X_0)=∏_m=1^n q(X_m|X_m-1)
Since the noise level in each step is fixed and the added noises follow Gaussian distribution, we can simplify Equation <ref> into the following form by utilizing the property of Gaussian distribution.
q(X_n|X_0)=𝒩(X_n; √(ᾱ_n)X_0, (1-ᾱ_n) 𝐈),
where α_n=1-β_n and ᾱ_n=∏_m=1^nα_m. Therefore, we can sample the noisy PiT at any step of the diffusion process by adding a specific amount of noise, as given by q(X_n|X_0), to the original X_0, without adding noise step-by-step.
Finally, the noisy PiT in the last step of the diffusion process follows a standard Gaussian noise, which is represented as follows.
X_N ∼ q(X_N) = 𝒩(X_N; 0, 𝐈)
Here, X_N is a good start for the following PiT inference process since it contains no prior information. In other words, we can randomly sample noise from the standard Gaussian distribution as the starting point for the inference process.
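A minimal PyTorch sketch of this closed-form forward sampling is given below, assuming N = 1000 diffusion steps; the variable and function names are ours.

import torch

N = 1000
betas = torch.linspace(1e-4, 0.02, N)        # linear schedule, as in DDPM
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)    # cumulative products (alpha-bar)

def q_sample(x0, n, noise=None):
    # Draw X_n ~ q(X_n | X_0) directly:
    # X_n = sqrt(abar_n) * X_0 + sqrt(1 - abar_n) * eps.
    # Steps are 0-indexed here, while the text counts from 1.
    if noise is None:
        noise = torch.randn_like(x0)
    ab = alpha_bars[n].view(-1, *([1] * (x0.dim() - 1)))
    return ab.sqrt() * x0 + (1 - ab).sqrt() * noise, noise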
§.§.§ The conditioned denoising diffusion process for PiT inference
To infer a PiT given a future ODT-Input, we implement a reverse diffusion process under the prior information of ODT-Input. More specifically, we gradually remove noise from the noisy PiT, until we retrieve a noise-free PiT of the travel trajectory corresponding to the ODT-Input. The process is formally called the conditioned denoising diffusion process, which is illustrated in Figure <ref>(b).
A single step of the conditioned denoising diffusion process can be formulated as follows.
p(X_n-1|X_n, odt) = 𝒩(X_n-1; μ_n-1, Σ_n-1),
where μ_n-1 and Σ_n-1 are the mean and variance at step n-1, and X_n-1 follows a conditional probability that relies on the noisy PiT from the previous step, X_n, and the ODT-Input odt. However, we are unaware of the actual form of p in practice, so we implement two neural network modules to learn the mean and variance. Thus, Equation <ref> can be reformulated as follows.
p_θ(X_n-1|X_n, odt) = 𝒩(X_n-1; μ_θ(X_n, n, odt), Σ_θ(X_n, n, odt)),
where μ_θ(·) and Σ_θ(·) denotes the mean and variance estimated by the neural network parameterized by θ.
The complete PiT inferring process can be formulated as follows.
p_θ(X_0|X_N, odt) = ∏_n=1^N p_θ(X_n-1|X_n, odt)
As is denoted in Equation <ref>, X_N can be sampled from a standard Gaussian distribution, and odt is the input from the user. Therefore, we can infer a PiT based on these two inputs. We detail the process of PiT inference in Algorithm <ref>.
Since we utilize a probabilistic model to infer the most plausible PiT given an ODT-Input, we can remove the impact of outliers.
The remaining question is how to train the set of parameters θ with the historical trajectories 𝕋, which we discuss in the following section.
§.§.§ Re-parameterization and training of the PiT inference model.
To simplify the training of θ, we follow DDPM <cit.> and keep the variance in Equation <ref> fixed at any given step, i.e., Σ_θ(X_n, n, odt)=β_n 𝐈, so that only the mean needs to be estimated.
Then, we re-parameterize the mean μ_θ(·). Instead of directly predicting μ_θ(·), we aim to predict the added noise at the n-th step in the diffusion process. We denote the predicted noise as ϵ_θ(X_n, n, odt), such that μ_θ(·) can be re-parameterized as follows.
μ_θ(X_n, n, odt) = 1/√(α_n)(X_n - β_n/√(1-ᾱ_n)ϵ_θ(X_n, n, odt))
Next, we re-parameterize Equation <ref>, and substitute μ_θ(·) with Equation <ref>, which can be formulated as follows.
X_n-1 = μ_θ(X_n, n, odt) + √(β_n)ϵ
= 1/√(α_n)(X_n - β_n/√(1-ᾱ_n)ϵ_θ(X_n, n, odt)) + √(β_n)ϵ,
where ϵ∼𝒩(0, 𝐈).
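Putting the re-parameterized update together, the conditioned sampling loop (Algorithm <ref>) can be sketched in PyTorch as follows; it reuses the schedule tensors from the sketch above, and denoiser stands for the trained ϵ_θ(X_n, n, odt) network.

@torch.no_grad()
def infer_pit(denoiser, odt, shape, device="cpu"):
    # Start from pure Gaussian noise and iteratively denoise,
    # conditioning every step on the ODT-Input.
    x = torch.randn(shape, device=device)
    for n in reversed(range(N)):
        step = torch.full((shape[0],), n, device=device, dtype=torch.long)
        eps_hat = denoiser(x, step, odt)
        coef = betas[n] / (1 - alpha_bars[n]).sqrt()
        x = (x - coef * eps_hat) / alphas[n].sqrt()
        if n > 0:                        # no noise is added at the final step
            x = x + betas[n].sqrt() * torch.randn_like(x)
    return x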
To train the set of parameters θ, we must supervise all steps of the conditioned denoising diffusion process. In practice, we can sample n from a uniform distribution, U(1,N), so that all steps can eventually be trained.
At the n-th step, we use the mean squared error to minimize the difference between the ground truth and the predicted noise, and the loss function can be formulated as follows.
L_n = ∥ϵ - ϵ_θ(X_n, n, odt)∥^2
= ∥ϵ - ϵ_θ(√(ᾱ_n)X_0 + √(1-ᾱ_n)ϵ, n, odt)∥^2,
where the noisy PiT, X_n, comes from the diffusion process and is calculated using the simplified form presented in Equation <ref>. The detailed training algorithm is presented in Algorithm <ref>.
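The corresponding training step (Algorithm <ref>) then reduces to a few lines, reusing q_sample from the earlier sketch.

def diffusion_loss(denoiser, x0, odt):
    # Sample a step n uniformly, corrupt X_0 with the closed-form
    # q(X_n | X_0), and regress the injected noise with an MSE loss.
    n = torch.randint(0, N, (x0.size(0),), device=x0.device)
    x_n, noise = q_sample(x0, n)
    eps_hat = denoiser(x_n, n, odt)
    return torch.mean((noise - eps_hat) ** 2)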
§.§ Conditioned PiT Denoiser
In this section, we introduce the implementation of the denoiser ϵ_θ(X_n, n, odt) in detail. Two basic requirements need to be met by the denoiser. The first is that the predicted noise, i.e., the denoiser's output, should be the same size as the input noisy PiT, X_n. The other is that the denoiser takes as input the noisy PiT X_n, the step indicator n, and the ODT-Input odt. Bearing these requirements in mind, we implement our denoiser based on Unet <cit.>, a neural network widely used in computer vision. Unet is efficient yet powerful due to its bottleneck architecture and residual connection design. Yet, the naive Unet cannot take n and odt as input. Thus, we cast these features into latent spaces and fuse them into our Unet-based conditioned PiT denoiser. The overall architecture of our denoiser is shown in Figure <ref>.
We first implement the positional encoding to encode the step indicator n into an embedding vector, PE(n)∈ℝ^d, which is commonly used in Transformer <cit.>. The detailed encoding operation is formulated as follows:
PE(n)[2i] = sin(n/10000^2i/d)
PE(n)[2i-1] = cos(n/10000^2i/d),
where d is an even value, and i∈{1, …, d/2} is the dimension indicator of the feature vector.
We then implement a fully-connected layer to transform odt into a d-dimensional latent space, formulated as follows.
FC_OD(odt): ℝ^5 →ℝ^d
Next, we introduce how to fuse these latent representations into the proposed PiT denoiser. Following the Unet <cit.>, our denoiser is formed by L_D layers of down-sampling blocks, one layer of middle block and L_D layers of up-sampling blocks, with residual connections between the down-sampling blocks and the up-sampling blocks. Each down-sampling and up-sampling block contains a sequential stack of two ODT-Input Conditioned Convolutional (OCConv) modules that are built based on ConvNeXt <cit.>, a multi-head dot-product attention module, and a convolutional module for up-sampling or down-sampling.
The middle block contains two OCConv modules, with an attention module in the middle.
The down-sampling blocks down-sample the input spatially but expand it channel-wisely, while the up-sampling blocks do the opposite.
The conditional information odt and n is incorporated into every OCConv module in the denoiser; the data flow of the OCConv module is illustrated in Figure <ref>.
Specifically, the OCConv module first takes as input an image, denoted as X_in∈ℝ^L_in× L_in× C_in, which goes through a 2D convolutional layer Conv2D with all dimensions unchanged, resulting in the hidden state X_hid.
X_hid = Conv2D(X_in),
X_hid∈ℝ^L_in× L_in× C_in
Then, we utilize a fully-connected layer to transform the sum of PE(n) and FC_OD(odt) into a vector with a size of C_in, which is then added to every pixel in X_hid, formulated as follows.
FC_Cond(·) : ℝ^d→ℝ^C_in
X_hid'[:, :, i] = X_hid[:, :, i] + FC_Cond(PE(n) + FC_OD(odt))[i],
where i = 1, 2, …, C_in.
Finally, X_hid' goes through a two-layer 2D convolutional network with activation and residual connection to the output state:
X_out = Conv2D(σ(Conv2D(X_hid')))
X_out' = X_out + ResConv(X_in),
where σ(·) is the activation function, and we use the Gaussian Error Linear Units (GELU) <cit.> in this paper. ResConv is the residual connection implemented using 2D convolution. The output state X_out'∈ℝ^L_in× L_in× C_out is then fed into the following modules in the denoiser.
Usually, C_out=2· C_in for the down-sampling blocks, C_out= ⌊ C_in/2 ⌋ for the up-sampling blocks.
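One possible PyTorch rendering of a single OCConv module is sketched below; the 3×3 kernel sizes and the 1×1 residual projection are our assumptions, since the text does not fix these details.

import torch
import torch.nn as nn

class OCConv(nn.Module):
    # ODT-Input Conditioned Convolutional module: a dimension-preserving
    # convolution, an additive conditioning vector broadcast over all
    # pixels, two further convolutions with GELU, and a residual path.
    def __init__(self, c_in, c_out, d):
        super().__init__()
        self.conv_in = nn.Conv2d(c_in, c_in, 3, padding=1)
        self.fc_cond = nn.Linear(d, c_in)       # FC_Cond in the text
        self.conv1 = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, padding=1)
        self.act = nn.GELU()
        self.res = nn.Conv2d(c_in, c_out, 1)    # ResConv in the text

    def forward(self, x, cond):
        # x: (B, c_in, H, W) channels-first; cond = PE(n) + FC_OD(odt), (B, d).
        h = self.conv_in(x)
        h = h + self.fc_cond(cond)[:, :, None, None]
        out = self.conv2(self.act(self.conv1(h)))
        return out + self.res(x)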
§ PIT TRAVEL TIME ESTIMATION
Once we finish the training of the PiT inference stage detailed in Section <ref>, we aim to propose a PiT travel time estimation module based on the inferred PiT, which is detailed in this Section.
Since the inferred PiT, X, is in a pixelated format, it is intuitive to come up with an estimator based on convolutional neural networks (CNNs). Yet, CNNs focus on modeling local properties, which is a poor fit for travel time estimation, where capturing global correlations is essential. The Vision Transformer (ViT) <cit.>, which utilizes self-attention to consider global correlations among all pixels, seems to be a good fit in our case. However, many pixels in a PiT do not contain valid information, making the vanilla ViT perform poorly in our scenario. Therefore, we propose the Masked Vision Transformer (MViT) to improve the efficiency of both training and estimation significantly.
§.§ PiT Flatten and Feature Extraction
Given the inferred PiT X∈ℝ^L_G× L_G× C, we first flatten it into a sequence of length L_G^2:
X_flat =
⟨ X[1, 1, :], X[1, 2, :], …, X[1, L_G, :],
X[2, 1, :], X[2, 2, :], …, X[2, L_G, :],
…
X[L_G, 1, :], X[L_G, 2, :], …, X[L_G, L_G, :]
⟩
After flattening, the cell (x, y) in the PiT becomes the ((x-1)· L_G + y)-th item in the sequence. X_flat follows the arrangement of pixels rather than the chronological order of ordinary temporal sequences.
We then further extract the spatial-temporal features from X_flat using three embedding modules:
* Cell embedding module E[·]. We initialize an embedding matrix E∈ℝ^L_G^2 × d_E, where the column E[(x-1)· L_G + y] is the embedding vector of the cell (x, y), that is, the embedding vector of item X[x, y, :] in the flattened sequence. d_E is the embedding dimension. E can be viewed as the innate spatial features of cells.
* Positional encoding module PE(·). Since self-attention is order-independent, we encode the position of items in the flattened sequence using the positional encoding introduced in Equation <ref>. Here, the encoding vector for item X[x, y, :] is PE((x-1)· L_G + y)∈ℝ^d_E.
* Latent casting module FC_ST(·). Recall that we design three feature channels for each cell in Section <ref>. We utilize a fully-connected layer to cast them into the latent space: FC_ST(X[x, y, :]): ℝ^3 →ℝ^d_E.
The outputs from three modules are summed up to form the latent input vector for each item in X_flat:
X_latent[x, y] = E[(x-1)· L_G + y] + PE((x-1)· L_G + y) + FC_ST(X[x, y, :])
Then, the latent sequence, X_latent=⟨ X_latent[1, 1], X_latent[1, 2], ⋯, X_latent[L_G, L_G] ⟩ is the input to MViT.
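The three embedding modules and their summation can be sketched as follows; PiTEmbedding is an illustrative name, and d_E is assumed to be even.

import torch
import torch.nn as nn

class PiTEmbedding(nn.Module):
    # Learnable cell embedding + fixed positional encoding + a linear
    # map of the three channel features, summed per flattened cell.
    def __init__(self, L_G, d_E):
        super().__init__()
        self.L_G = L_G
        self.cell_emb = nn.Embedding(L_G * L_G, d_E)
        self.fc_st = nn.Linear(3, d_E)
        pos = torch.arange(L_G * L_G, dtype=torch.float)[:, None]
        i = torch.arange(d_E // 2, dtype=torch.float)[None, :]
        pe = torch.zeros(L_G * L_G, d_E)
        pe[:, 0::2] = torch.sin(pos / 10000 ** (2 * i / d_E))
        pe[:, 1::2] = torch.cos(pos / 10000 ** (2 * i / d_E))
        self.register_buffer("pe", pe)

    def forward(self, X):
        # X: (B, L_G, L_G, 3) -> latent sequence (B, L_G^2, d_E).
        B = X.size(0)
        flat = X.reshape(B, self.L_G * self.L_G, 3)   # row-major flattening
        idx = torch.arange(flat.size(1), device=X.device)
        return self.cell_emb(idx) + self.pe + self.fc_st(flat)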
§.§ Masked Vision Transformer
Many items in X_latent do not contain any valid information since their corresponding features in the PiT are set to -1. In the vanilla Vision Transformer (ViT), we can apply an attention mask so that the attention weights of these items are set to zero. Yet, this mask scheme cannot improve computational efficiency since the attention weights are still calculated for all items, as shown in Figure <ref>.
This is particularly problematic in our scenario, where PiT covers the full spatial space, but most cells are not visited for one trajectory.
To speed up the calculation on flattened PiT sequences, we aim to implement a more efficient mask scheme.
We propose the Masked Vision Transformer (MViT). First, we calculate a mask that indicates whether an item in X_latent contains valid information, which we obtain from the first channel of the PiT:
X_mask[x, y] = True if X[x, y, 1] = 1, and False otherwise,
for x∈{1, …, L_G}, y∈{1, …, L_G}. X_latent[x, y] contains valid information if X_mask[x, y] = True.
Following the Transformer <cit.> and ViT, our MViT is a stack of multiple MViT layers, and each layer contains two modules, a self-attention module and a feed-forward network, both with residual connections. Unlike ViT, self-attention in MViT is only applied to items with valid information. It is equivalent to retaining only the items with valid information to form a masked sequence and applying self-attention to that masked sequence alone, as demonstrated in Figure <ref>. Formally, one MViT layer is calculated as:
X_out-seq = FFN(Att(Mask(X_in-seq, X_mask)))
Mask(X_in-seq, X_mask) = ⟨ X_in-seq[x, y, :],
x∈{1, …, L_G}, y∈{1, …, L_G},
X_mask[x,y] = True⟩,
where X_in-seq, X_out-seq represent the input sequence and output sequence of the MViT layer, Mask(X_in-seq, X_mask) denotes the masked input sequence, Att(·), FFN(·) denote the multi-head attention and the feed-forward network, respectively. Since we only apply self-attention on items with valid information, the calculation cost depends on the total number of valid items, not the length of the full sequence. Therefore, we can improve the efficiency of flattening PiT sequences significantly.
Then, we stack a total of L_E MViT layers to form the MViT, and calculate the memory sequence using MViT.
X_latent' = MViT(X_latent, X_mask),
where the memory sequence X_latent' has the same dimension as X_latent, with length equal to the number of valid items in X_latent.
Next, we apply a mean pooling layer and a fully-connected layer to calculate the travel time estimate as follows.
Δt̂ = FC_pre(mean(X_latent')),
where the pooling applies to the dimension of sequence length.
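A minimal sketch of the masking idea is shown below, built on PyTorch's stock Transformer encoder rather than a custom attention kernel; for clarity it processes a single PiT at a time, whereas a batched implementation would pack sequences of different valid lengths.

class MViT(nn.Module):
    # Self-attention is applied only to cells carrying valid information,
    # so the cost scales with the number of visited cells, not with L_G^2.
    def __init__(self, d_E, n_layers=4, n_heads=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=d_E, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.fc_pre = nn.Linear(d_E, 1)

    def forward(self, x_latent, x_mask):
        # x_latent: (L_G^2, d_E) latent sequence; x_mask: (L_G^2,) bool.
        valid = x_latent[x_mask][None]       # keep only the valid items
        memory = self.encoder(valid)         # attention over valid items only
        return self.fc_pre(memory.mean(dim=1)).squeeze(-1)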
Finally, to train the PiT travel time estimation model, we use the mean squared error between the estimate Δt̂ and the ground truth Δ t as the loss function, formulated as follows.
L_pre = ‖Δt̂ - Δ t‖^2
The two stages, PiT inference and PiT travel time estimation, are trained separately. More specifically,
after training of the PiT inference model detailed in Algorithm <ref>, the learnable parameters θ are fixed. During training of the PiT travel time estimation model, p_θ(·) is only used for inferring, without further parameter update.
By properly utilizing the inferred PiT for travel time estimation, we can achieve high estimation accuracy without access to the real trajectories, which are unavailable at estimation time.
§ EXPERIMENTS
To evaluate the effectiveness of the proposed DOT framework, we conduct extensive experiments on two real-world trajectory datasets and compare DOT with existing ODT-Oracle methods.
§.§ Datasets
In our experiments, we utilize two real-world taxi trajectory datasets collected in the cities of Chengdu (provided by Didi Chuxing[https://gaia.didichuxing.com/]) and Harbin <cit.> in China. During pre-processing, we remove trajectories that traveled less than 500 meters, lasted less than 5 minutes, or lasted more than 1 hour. Then, we filter out sparse trajectories by setting the minimum sampling rate to 80 seconds. The statistics of the datasets after pre-processing are listed in Table <ref>.
§.§ Comparison Methods
To prove the superiority of the proposed method, we compare it with eleven baselines: two routing methods, two path-based methods, and seven ODT-Oracle methods.
§.§.§ Routing Methods
These methods identify the optimal path on the road network from origin to destination. We provide them with a weighted road network, where the weights represent the average travel times of road segments, calculated from historical trajectories.
The travel time is the sum of the historical average travel time of the road segments in the identified path.
* Dijkstra Shortest Path (Dijkstra) <cit.>: calculates the path between origin and destination with the smallest weights.
* Deep Probabilistic Spatial Transition (DeepST) <cit.>: generates the most probable traveling path between origin and destination based on the learned historical travel behaviors.
§.§.§ Path-based Methods
These methods predict the travel time given a travel path. Since the real travel path is unavailable in the ODT-Oracle scenario, we feed these methods the paths generated by DeepST.
* Wide-Deep-Double Recurrent model with Auxiliary loss (WDDRA) <cit.>: utilizes multi-tasking auxiliary loss to improve the accuracy of travel time estimation.
* Automated Spatio-Temporal Dual Graph Convolutional Networks (STDGCN) <cit.>: implements neural architecture search techniques to automatically identify the optimal network structure.
§.§.§ ODT-Oracle Methods
These methods aim to predict the travel time based on the ODT-Input, serving as direct comparisons to the proposed method. Four of them are traditional methods; the others are neural network-based methods.
* Temporally weighted neighbors (TEMP) <cit.>: averages the travel times of historical trajectories that have a similar origin, destination and departure time.
* Linear Regression (LR): learns a linear map from ODT-Inputs to travel times from historical travels.
* Gradient Boosted Machine (GBM): a non-linear regression method, which is implemented using XGBoost <cit.>.
* Road Network Vertex Embedding (RNE) <cit.>: calculates the shortest path distances between vertices in the embedding space.
* Spatial Temporal Neural Network (ST-NN) <cit.>: jointly predicts the travel distance and time given origin and destination.
* MUlti-task Representation learning for Arrival Time estimation (MURAT) <cit.>:
jointly predicts the travel distance and travel time given origin, destination and departure time.
* Effective Travel Time Estimation (DeepOD) <cit.>:
incorporates the correlation between ODT-Inputs and travel trajectories from history through an auxiliary loss during training.
§.§ Experimental Settings
For both datasets, we first sort trajectories by their departure date and time. Then, we split them into training, validation and testing sets by 8:1:1. The PiT inference model is trained on the training set for 50 epochs, then used for PiT inference on validation and testing sets. The PiT travel time estimation is trained on the training set, early-stopped on the inferred validation set, and calculated final metrics on the inferred testing set. We use root mean squared error (RMSE), mean absolute error (MAE) and mean absolute percentage error (MAPE) to evaluate the precision of ODT-Oracles.
All methods are implemented using Python and PyTorch <cit.>. Baselines are implemented following the parameters suggested in their original papers. For the hyper-parameters of the proposed method, we consider 5 key parameters in different modules, whose ranges and optimal values are listed in Table <ref>. It is worth mentioning that all hyper-parameters are chosen based on MAE results on the validation set. We also demonstrate the effectiveness of these hyper-parameters on the testing set later.
During model training, we choose the Adam optimizer and an initial learning rate of 0.001 across the board. We run all experiments on Ubuntu 20.04 servers equipped with Intel(R) Xeon(R) W-2155 CPUs and nVidia(R) TITAN RTX GPUs.
§.§ Comparison with Baselines
§.§.§ Overall effectiveness comparison
Table <ref> demonstrates the comparison of the overall effectiveness of the different methods. The proposed method consistently shows superior performance on the two datasets.
The performance of routing methods in estimating travel time is largely dependent on the accuracy of their calculated routes. Dijkstra primarily focuses on finding shortest paths, which results in less accurate travel time estimations. In contrast, DeepST learns travel behavior from historical trajectory data, leading to significantly improved estimation accuracy.
The travel time estimation accuracy of path-based methods heavily relies on the quality of the input travel paths.
Both WDDRA and STDGCN exhibit slightly improved performance compared to their path provider, DeepST, as they are capable of learning complex correlations between travel paths and travel time.
STDGCN achieves higher accuracy compared to WDDRA, taking advantage of the neural architecture search technique.
The four traditional ODT-Oracle methods, TEMP, LR, GBM, and RNE, tackle the ODT-Oracle problem through feature engineering and kernel design.
These methods face difficulties in modeling the intricate correlations between ODT-Inputs and travel times, and they do not take historical trajectories into account, which is essential for accurate ODT-Oracle predictions.
Among them, the linear method LR performs the worst, suggesting that the spatial-temporal features in ODT-Inputs and travel times do not have a linear correlation.
The history average method TEMP gets the second-worst result, primarily due to the imbalance of historical trips under different ODT-Inputs and the presence of outliers in historical trajectories.
GBM outperforms TEMP and LR, attributable to its relatively higher model capacity.
RNE achieves the best performance among traditional ODT-Oracle methods, as it incorporates hierarchical embeddings to capture the distances between locations more effectively.
The four neural network-based ODT-Oracle methods consistently outperform their traditional counterparts.
It indicates that neural network-based methods excel at extracting complex spatial-temporal correlations between ODT-Inputs and travel times, thanks to their higher model capacity.
Therefore, they can learn more appropriate representations for spatial-temporal features.
This conclusion is even more obvious when comparing GBM with ST-NN, whose inputs are the same, with only the origin and destination. We can observe that ST-NN has a much better result than GBM. We can also observe that MURAT outperforms ST-NN in both datasets on all metrics. MURAT considers more comprehensive factors, e.g., road network, spatial cells and temporal slots. However, they have not considered taking advantage of the historical trajectories until DeepOD. We can observe that DeepOD can achieve the second best in most cases but is still worse than our method. It indicates that inappropriate handling of outliers in historical trajectories can hurt the performance of the ODT-Oracle.
The proposed method DOT gets the best results on both datasets. We transform raw trajectories into Pixelated Trajectories (PiTs) to make our model more robust on localized, small differences in trajectories. During training, we explicitly model the spatial-temporal correlation between ODT-Inputs and PiTs from historical trips. During travel time estimation, we infer the most probable PiT given a future ODT-Input and utilize the inferred PiT for accurate travel time estimation. The performance superiority of DOT demonstrates the effectiveness of inferring a robust representative form of travel trajectories and avoiding the impact of outliers in historical trajectories.
§.§.§ Scalability comparison
To evaluate the scalability of various methods, we sampled the training set of Chengdu at 20%, 40%, 60%, 80% and 100%, then tested the MAPE of the different methods trained on the sampled training sets; the results are shown in Table <ref>.
Generally speaking, all methods benefit from larger-scale training data. Since larger datasets have higher density, they improve the generalization and robustness of the trained models.
When the scale of training data decreases, the performance downgrade over different methods demonstrates their scalability. The proposed method remains relatively stable and consistently achieves the best result compared with other methods under the same circumstances. We can also observe that our worst performance at scale 20%, 14.951, is even better than that of DeepOD at scale 100%, 14.997. It indicates that the proposed method can work in more data-scarce scenarios than other methods.
§.§.§ Efficiency comparison
To investigate the efficiency, we calculate the model size, training time and estimation speed of different methods on Chengdu. The model size demonstrates the required memory size when running these methods. The training time and estimation speed show the methods' efficiency during training and real-world travel time estimation. Note that the two stages of the proposed method are trained separately, so we give these two training times separately.
Dijkstra takes up some memory to store the weighted road network. It requires no training, and expediting techniques can be applied to speed up its routing process on the road network.
On the other hand, the data-driven routing method DeepST exhibits a relatively slow training and estimation speed, primarily due to the use of RNN sequential model <cit.>. RNNs cannot parallelize on path sequences, which consequently hinders their estimation speed.
This same reason accounts for the slow estimation speed of the two path-based methods, WDDRA and STDGCN, as they also employ RNNs for processing the input path sequences.
Compared with other neural network methods, the two traditional methods LR and GBM have smaller model sizes. GBM trains and predicts more slowly than LR, since it is a non-linear method and contains multiple decision trees for ensemble learning. TEMP is a history-average method that does not need training. Yet, it needs to cache all historical trips and calculate distances between each query and every trip in the training set during prediction. Thus, the model size and prediction speed of TEMP scale nearly linearly with the dataset size, which is problematic for large-scale datasets.
We can observe that ST-NN has the simplest design and smallest model size among neural network-based methods, so it is the fastest in both training and prediction.
RNE stores embeddings for all segments in the road network, resulting in slightly reduced efficiency compared to ST-NN.
We can also observe that the proposed DOT is relatively slower during its training due to its two-stage framework. However, the prediction speed catches up with the state-of-the-art neural network-based methods since we propose an efficient estimation module, MViT. Since the training phase is always completed offline, we can claim that the efficiency of our method is on par with the existing inference methods.
§.§ Quantitative Analysis
§.§.§ Effectiveness of outlier removal
We assess the potential performance improvements that can be achieved by combining outlier removal methods with existing baselines. We implement the state-of-the-art outlier detection method DeepTEA <cit.> to eliminate outlier trajectories from the training set. Then, we re-train a select set of baselines and evaluate their travel time estimation performance. The results are presented in Table <ref>.
All baselines experience performance improvements, with the exception of RNE on Chengdu. This demonstrates that proper handling of outliers in historical trajectories can be advantageous for achieving accurate travel time estimation.
Nevertheless, the proposed method continues to outperform these baselines even after outlier removal, for two main reasons.
First, as illustrated in Figure <ref>, the proposed PiT representation enables our method to more effectively identify outliers that significantly affect travel time estimation, while ignoring negligible differences in trajectories.
Second, we design our PiT inference stage as a diffusion-based generative process, which is more robust at modeling the distribution of historical trajectories with outliers, separating outliers from normal trajectories. This approach has also been demonstrated to be effective in other studies such as <cit.>.
§.§.§ Impact of grid length
To study the impact of different PiT resolutions, we vary the grid length, L_G, and investigate the efficiency at different grid lengths, i.e., the model size, the training efficiency of the first and second stages, and the inference efficiency. From Figures <ref> and <ref>, we can observe that the model size and the training time of the first stage increase as L_G grows. This is because the PiT becomes larger when L_G increases, which requires larger kernel sizes and more filters in the conditioned PiT denoiser. Figure <ref> shows that the proposed MViT scales well with increasing L_G compared with the vanilla ViT when training the second stage. The total number of grid cells a PiT occupies stays almost unchanged, while the proportion of occupied grid cells shrinks, so MViT performs much better than ViT. We can see from Figure <ref> that MViT also improves the estimation speed significantly compared to ViT.
We can also observe that there is no clear difference between MViT and ViT at L_G=10. A PiT occupies a much larger proportion of the grid when the cells are larger, and the total number of grid cells is smallest at L_G=10, resulting in the shortest training time for ViT.
Finally, we can conclude that the proposed MViT can improve the framework's efficiency in both training and estimating.
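As a minimal PyTorch sketch of the masking idea MViT relies on — feeding only the occupied patches of a PiT to the Transformer encoder — the following helper (its name and patch size are illustrative assumptions, not our exact implementation) selects non-empty patches:

import torch

def select_occupied_patches(pit, patch_size=4):
    # pit: (C, H, W) tensor whose channel 0 is the binary mask channel of the PiT.
    C, H, W = pit.shape
    patches = pit.unfold(1, patch_size, patch_size).unfold(2, patch_size, patch_size)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, C * patch_size * patch_size)
    # A patch is occupied if any mask-channel pixel inside it is non-zero.
    mask = pit[0].unfold(0, patch_size, patch_size).unfold(1, patch_size, patch_size)
    occupied = (mask.reshape(-1, patch_size * patch_size) != 0).any(dim=1)
    return patches[occupied], occupied.nonzero(as_tuple=False).squeeze(1)

pit = torch.zeros(3, 20, 20)
pit[:, 2:6, 2:10] = 1.0                    # a short route occupying a few cells
tokens, idx = select_occupied_patches(pit)
print(tokens.shape)                        # 6 occupied patches instead of all 25

Since a route occupies a roughly constant number of cells regardless of L_G, the token count stays nearly constant while the full patch count grows, which matches the efficiency trend discussed above.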
§.§.§ Impact of hyper-parameters
The optimal hyper-parameters listed in Table <ref> are selected with parameter experiments on validation sets. In this section, we further demonstrate the effectiveness of these key hyper-parameters on testing sets, which is shown in Figure <ref>. We have the following observations.
* As indicated in Figure <ref>, it is optimal to set the grid length L_G to 20. A smaller grid length results in coarse-grained PiTs, which are insufficient for estimating travel time accurately. On the other hand, a bigger grid length increases the sparsity of PiTs, making the model more sensitive to negligible differences in trajectories. Interestingly, increasing the grid length hurts the prediction performance more than decreasing it. This suggests that mapping trajectories into relatively coarse PiTs reduces the disturbance that small trajectory differences cause to accurate travel time estimation.
* Figure <ref> indicates that more diffusion steps lead to better results, which is intuitive, since the noise can be learned better with more steps. However, the gain becomes smaller when N is larger than 1000. Therefore, we select N=1000 as a good trade-off between effectiveness and efficiency.
* L_D, d_E, L_E determine the representation capacity of the PiT inference model and the PiT travel time estimation.
Figures <ref>, <ref> and <ref> demonstrate their effectiveness respectively. All of them have optimal values. A too-small model struggles to extract the complex spatial-temporal correlations in datasets, while a too-big model leads to overfitting.
§.§.§ Ablation study
To verify the effectiveness of features and modules in the proposed method, we conduct an ablation study in the following variants of DOT methods and input features.
* Routing+Est.: combine the routing methods listed in Section <ref> with the PiT travel time estimation stage of our method.
* Infer.+Path-based: integrates the PiT inference stage with the path-based approaches presented in Section <ref>.
* No-t: remove the departure time t_o from the ODT-Input odt.
* No-od: remove the origin and destination coordinates from odt.
* No-odt: remove the conditional information odt completely.
* No-CE: remove the cell embedding module from the PiT travel time estimation.
* No-ST: remove the latent casting module from the PiT travel time estimation.
* Est-CNN: replace MViT with a CNN-based estimator.
* Est-ViT: replace MViT with the vanilla vision Transformer.
We compare the performance of these variants with the DOT method, and the experimental results are given in Table <ref>. We have the following observations:
* Combining the routing methods with the proposed PiT travel time estimation stage failed to yield satisfactory results. This is mainly because the routes inferred by these methods are not accurate enough, which is also demonstrated in Table <ref>.
Additionally, these routing methods do not generate temporal features, i.e., the second and third channels in PiTs, and these features are instead populated based on historical average travel times between cells. In contrast, the proposed method infers both spatial and temporal features of PiTs based on the spatio-temporal information learned from historical trajectories, which benefits travel time estimation accuracy.
* Integrating the proposed PiT inference stage with the path-based methods enhances their estimation performance compared to the results in Table <ref>.
The improvements can be attributed to the higher accuracy of the inferred routes compared to those produced by DeepST.
However, the results still do not surpass those obtained by DOT, as the RNN sequential models employed in WDDRA and STDGCN are not as effective as the proposed MViT in modeling spatio-temporal correlations in PiTs, as also shown by other studies of time series mining <cit.>.
* Removing features from the conditional ODT-Input reduces the estimation performance, since the inferred PiT cannot accurately correspond to the given ODT-Input.
* Removing the embedding modules from the PiT travel time estimation also makes prediction worse, since the spatial-temporal information embedded in PiT is not comprehensively utilized.
* Comparing results from Est-CNN and DOT proves that our Transformer-based MViT is more effective than CNN-based methods when dealing with PiT. Comparing results from Est-ViT and DOT shows that the estimation performance of MViT is very close to ViT. Combining the efficiency comparison in Figure <ref>, the proposed MViT can improve the efficiency over ViT while not compromising on estimation performance.
§.§ Analysis on the Explainability
To evaluate the explainability of the proposed method, we first conduct some quantitative experiments to demonstrate the accuracy of the inferred trajectories. We then conduct a case study to visualize the inferred trajectories.
§.§.§ PiT inference accuracy
We conduct experiments on both datasets to verify the accuracy of the inferred PiTs. Specifically, given the ODT-Inputs from the testing set,
we calculate the RMSE and MAE between the inferred PiTs and the ground truth ones, whose results are listed in Table <ref>.
We also compare the accuracy of the inferred routes from DOT with the planned routes in Dijkstra and DeepST. Specifically, the routes planned by these methods are transformed into the same form as the first (mask) channel in PiT.
We then compare the accuracy of the mask channels calculated by these methods with those inferred by our model, using precision (Pre), recall (Rec), and F1 scores as metrics. The results are presented in Table <ref>.
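For concreteness, a minimal sketch of this mask-channel comparison (the binary-grid inputs and function name are illustrative, not our actual code):

import numpy as np

def mask_channel_scores(pred_mask, true_mask):
    # Precision/recall/F1 between two binary occupancy grids (1 = cell on route).
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    tp = np.logical_and(pred, true).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(true.sum(), 1)
    f1 = 2 * precision * recall / max(precision + recall, 1e-12)
    return precision, recall, f1

# Toy 4x4 grids: the predicted route misses one cell of the true route.
true_mask = [[1,1,0,0],[0,1,1,0],[0,0,1,1],[0,0,0,1]]
pred_mask = [[1,1,0,0],[0,1,1,0],[0,0,1,0],[0,0,0,1]]
print(mask_channel_scores(pred_mask, true_mask))  # (1.0, 0.857..., 0.923...)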
We can observe that both PiT inference and route inference are quite accurate on the two datasets, which helps us achieve good performance in the second stage of travel time estimation.
Accurate route inference also means that our model can provide the users with the most probable route that is chosen by most drivers, given a future travel plan.
§.§.§ Case study
We conduct two case studies on the testing set of Chengdu.
For the first case study, we visualize the raw trajectories and the third channel (time offset) of both the ground truth and inferred PiTs under certain circumstances. We investigate the following circumstances with trajectories that travel from the same origin to the same destination.
* Trajectories departing at the same time: In Figure <ref>(a), we can observe that the two PiTs are almost identical when departing during the same time of day, except that the second PiT has extra cells compared to the first one, which is an outlier. Figure <ref>(b) shows the PiT inferred from the first stage by giving the ODT-Input with the same OD and T. We can observe that the inferred PiT matches the ground truth well, and the small portion of outlier cells in the second PiT is removed.
* Trajectories departing at different times: Figure <ref> demonstrates the case with the same origin-destination pair but different departure times. It indicates that different travel trajectories can be chosen during different times of the day, which makes a big difference in travel time estimation. Since the proposed method considers the departure time, it can infer PiTs at different departure times.
In the second case study, we examine the evolution of the average travel time between pairs of spatial cells during a day. We select the top-3 most frequently traveled pairs of cells and calculate the average travel time between them by dividing a day into two-hour intervals.
Figure <ref> displays the average travel time calculated from the ground truth trajectories and the inferred PiTs of DOT, where the cells are denoted by their coordinates specified in Definition <ref>. The travel time calculated from the inferred PiTs varies throughout the day, indicating that our method accounts for the evolving traffic conditions. Furthermore, it closely matches the travel time calculated from ground truth trajectories, signifying the accuracy of the temporal features in the inferred PiTs.
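A minimal sketch of this aggregation, assuming a trip table with departure timestamps and travel times (all column names and values below are illustrative):

import pandas as pd

trips = pd.DataFrame({
    "origin_cell": [(3, 5), (3, 5), (3, 5), (7, 2)],
    "dest_cell":   [(9, 1), (9, 1), (9, 1), (4, 4)],
    "departure":   pd.to_datetime(["2018-11-01 08:10", "2018-11-01 09:40",
                                   "2018-11-01 18:05", "2018-11-01 08:30"]),
    "travel_time": [820.0, 760.0, 950.0, 430.0],   # seconds
})

# Bucket departures into two-hour intervals and average per OD cell pair.
trips["interval"] = trips["departure"].dt.hour // 2
avg_tt = trips.groupby(["origin_cell", "dest_cell", "interval"])["travel_time"].mean()
print(avg_tt)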
In conclusion, the proposed method's conditioned PiT inference model can learn from history and infer PiT given a future ODT-Input. It ensures the accurate performance of travel time estimation, which is highly dependent on the spatial-temporal information of trajectories. In addition, the inferred PiT gives an intuitive overview of the future trip, improving the explainability of the model, and providing users with useful information.
§ CONCLUSIONS
To build an accurate and explainable ODT-Oracle, we propose a two-stage framework called DOT.
In the first stage, we propose a conditioned PiT denoiser to implement a PiT inference model. This model can learn from historical trajectories and infer a PiT given an OD pair and a departure time, which reduces the impact of outlier historical trajectories.
In the second stage, we propose an MViT to model the global correlation of the inferred PiT efficiently, which can give an accurate estimation of travel time. Comprehensive experiments are conducted on two real-world datasets to demonstrate the performance superiority of the proposed method.
|
http://arxiv.org/abs/2307.01858v1
|
20230704180302
|
Black hole interior Petz map reconstruction and Papadodimas-Raju proposal
|
[
"Niloofar Vardian"
] |
hep-th
|
[
"hep-th",
"gr-qc"
] |
Black hole interior Petz map reconstruction and Papadodimas-Raju proposal
Niloofar Vardian
==========================================================================
§ INTRODUCTION
To describe our universe, we need to seek a theory of quantum gravity. The study of black hole physics may be the simplest route toward this big aim. The black hole information paradox <cit.> has been one of the most important questions of physics in the last few decades <cit.>. It is related to the smoothness of the horizon of the black hole.
Among all the work related to it, I will focus on the Papadodimas-Raju proposal, in which the authors could even construct the interior operators on the boundary side <cit.>; they called them mirror operators
[The mirror operator in the Papadodimas-Raju proposal is different from the mirror operators that are defined in the appendix.].
In parallel, we have the island conjecture, related to the information paradox, which states rather abstractly that we can construct the modes in the island from the Hawking radiation by means of the so-called Petz map <cit.>.
The Petz map has its origin in quantum information theory <cit.>. Recently, it has been found that the best way to understand the semi-classical limit of AdS/CFT is in the language of quantum error correction codes <cit.>. The error-correcting codes then are the isometries from the bulk Hilbert space to the dual boundary theory.
In the case of subregion duality, in the large N limit, following the JLMS argument <cit.>, there must be a recovery channel that maps the operators in the entanglement wedge a = ℰ_A to the given boundary region A.
In <cit.>, the authors found the explicit expression of the recovery channel, known as the Petz map, using the global HKLL map as a global isometry which embeds the entire bulk into the entire boundary. However, it was only very recently that an explicit calculation using the Petz map was carried out <cit.>.
When the geometry contains a black hole, because of the lack of such an isometry, we cannot follow the discussion in <cit.> and write the explicit form of the mapping.
In this paper, instead of following that work and writing the quantum channel by taking a trace over the complementary region, we use the definition of the Petz map in modular theory. It is good to note that the Petz recovery channel has its origin in modular theory, dating roughly to the era when the theory of quantum computation and information was born.
We find the Petz reconstruction of the interior modes and we reach the same result as the Papadodimas-Raju proposal.
In this paper, in Sec. <ref>, we review the Petz recovery channel. In Sec. <ref>, we will describe some aspects of black hole physics in AdS, and then, in Sec. <ref> we will discuss in detail how we can use the notion of Petz map in modular theory to reconstruct the interior modes in the geometries contain a black hole. We start with the two-sided eternal black hole in AdS and then we will generalize the discussion to the one-sided one.
Finally, in Sec. <ref> we will discuss the connection with the Papadodimas-Raju proposal for reconstruction of the black hole interior in AdS/CFT.
§ PETZ MAP
In this section, we introduce the Petz recovery channel and in particular we review its original definition in modular theory.
§.§ Universal recovery channel
In quantum information theory, the evolution of states in the Schrodinger picture is modeled by a quantum channel ℰ: 𝒮(ℋ_A) →𝒮(ℋ_B), where 𝒮(ℋ) is the set of all density matrices on the Hilbert space ℋ.
The channel ℰ is reversible if there exists another quantum channel ℛ, called the recovery channel
such that
ℛ∘ℰ (ρ)=ρ for all ρ∈𝒮(ℋ_A).
The reversibility of ℰ is related to the behavior of the quantum relative entropy of states under the action of ℰ. The relative entropy between two states is a measure of their distinguishability, defined as
<cit.>
S(ρ | σ) = Tr(ρlogρ - ρlogσ).
It is a non-increasing function under the action of any quantum channel, i.e.
S(ρ | σ)≥ S ( ℰ(ρ)| ℰ(σ))
that is known as data processing inequality <cit.>.
For a given channel ℰ, we have exact correctability precisely when equality holds in (<ref>)
<cit.>.
In such a case, the recovery channel is given in terms of the dual channel ℰ^* and
a fixed full-rank density matrix ρ∈𝒮(ℋ_A):
𝒫_ρ,ℰ(.)=ρ ^1/2ℰ^* (ℰ(ρ)^-1/2 (.)ℰ(ρ)^-1/2)ρ^1/2
known as the Petz recovery channel
<cit.>.
In (<ref>), ℰ^* is the (Hilbert-Schmidt) dual channel defined as the solution to
Tr( ρ ℰ^*(O)) = Tr( ℰ(ρ) O)
for all ρ∈𝒮(ℋ_A) and O∈ℒ(ℋ_B). The set of all bounded operators act on ℋ is denoted by ℒ(ℋ).
One can use the Heisenberg picture and map the operators by using the dual of the recovery channel,
ℛ^*: ℒ(ℋ_A) →ℒ(ℋ_B),
which for the quantum channel in (<ref>) is given as
𝒫_ρ,ℰ ^* (.)
=ℰ(ρ )^-1/2ℰ(ρ ^1/2 (.) ρ ^1/2)ℰ(ρ)^-1/2
called the Petz map.
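To make the formula concrete, the following minimal numerical sketch (a toy check, not part of the original discussion) verifies that for a reversible channel — here a unitary channel ℰ(ρ) = UρU^† with dual ℰ^*(X) = U^†XU — the Petz recovery channel inverts ℰ exactly:

import numpy as np

rng = np.random.default_rng(0)
d = 4

def random_density(dim):
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    rho = a @ a.conj().T
    return rho / np.trace(rho)

def psd_power(m, p):
    # m**p for a Hermitian positive-definite matrix, via eigendecomposition.
    w, v = np.linalg.eigh(m)
    return (v * w**p) @ v.conj().T

U = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))[0]
E = lambda x: U @ x @ U.conj().T            # the channel
E_dual = lambda x: U.conj().T @ x @ U       # its Hilbert-Schmidt dual

rho = random_density(d)                     # fixed full-rank reference state
sigma = random_density(d)                   # state to be recovered

s = psd_power(rho, 0.5)
t = psd_power(E(rho), -0.5)
petz = lambda x: s @ E_dual(t @ x @ t) @ s  # the Petz recovery channel above

print(np.allclose(petz(E(sigma)), sigma))   # True: exact recovery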
§.§ Petz map and modular theory
In order to study the recovery of the information in quantum field theories, it would be really helpful to have an alternative description for the Petz recovery channel in the
context of the Tomita-Takesaki theory (A brief review can be found in the Appendix <ref>).
Luckily, the Petz recovery channel indeed has its origin in the study of operator
algebras <cit.>. It has also been studied in some recent works <cit.>.
We will now review the definition of the Petz map in the algebraic approach. All the discussion below is in the Heisenberg picture and largely follows <cit.>.
Consider two Type 1 von Neumann algebras
in their standard forms:
(𝒜, ℋ_A, J_A, 𝒫_𝒜 ) and
(ℬ, ℋ_B, J_B, 𝒫_ℬ ).
Assume two faithful states ρ_A and ρ_B on 𝒜 and ℬ, respectively, whose unique vector representatives we denote by
|ρ _A ^1/2⟩∈𝒫_𝒜
and |ρ_B ^1/2⟩∈𝒫_ℬ.
The corresponding GNS Hilbert spaces of the algebras over the states |ρ _A ^1/2⟩ and |ρ _B ^1/2⟩ are denoted by ℋ_A and ℋ_B.
Let us consider a linear superoperator 𝒯: 𝒜→ℬ and denote its corresponding operator between the corresponding GNS Hilbert spaces by T : ℋ_A →ℋ_B.
One can define a dual of it 𝒯^*_ρ: ℬ→𝒜 as a solution to
⟨ b |𝒯(a)⟩ _ ρ _B = ⟨ b |Ta⟩ _ ρ _B = ⟨ T^† b |a⟩ _ ρ _A = ⟨𝒯^*_ρ(b) |a⟩ _ ρ _A
in the GNS Hilbert space (<ref>) for all a ∈𝒜 and b ∈ℬ .
In the case of matrix algebra, the definition (<ref>) can be rewritten as
Tr(ρ_B b^†𝒯(a)) = Tr( ρ_A 𝒯^* _ρ (b^†) a).
We note it here that if we replace both ρ _A and ρ _B with the identity operators (unnormalized maximally mixed states), the dual map 𝒯^* we will get is the usual dual map in the quantum information theory defined in (<ref>).
One can find 𝒯^*_ρ in (<ref>) in terms of 𝒯^* as
𝒯^*_ρ (b) = ρ_A ^-1𝒯^* ( ρ _ B b).
On the other hand, the GNS Hilbert space ℋ_A can be created by acting with the commutant of the algebra ℋ_A = {𝒜' |ρ_A ^1/2⟩} and the same for another algebra ℋ_B ={ℬ' |ρ_B ^1/2⟩}.
Therefore, given T between the GNS Hilbert spaces, we can in principle associate to it a superoperator between the commutants, 𝒯'_ρ: ℬ' →𝒜', defined as
⟨ b' |𝒯(a)⟩ _ ρ _B = ⟨𝒯'_ρ(b') |a⟩ _ ρ _A
for all a ∈𝒜 and b' ∈ℬ' that is called ρ-dual of the superoperator 𝒯.
Then, one can use the modular conjugations in both Hilbert spaces to define a superoperator between the original algebras 𝒯^P_ρ:ℬ→𝒜 as
𝒯^P_ρ(.) = 𝒥_A ∘𝒯'_ρ∘𝒥_B (.) = J_A 𝒯'_ρ (J_B (.) J_B ) J_A
which is exactly the Petz dual map we are interested in; for Type 1 von Neumann algebras, one can explicitly find its form as
𝒯 ^P_ρ (.)
=ρ _A ^-1/2𝒯^* (ρ_B ^1/2 (.) ρ_B ^1/2)ρ _A ^-1/2.
The Petz dual map can also be realized as the solution to the relation (<ref>)
with respect to the KMS inner product. If the state |Ψ⟩ is cyclic and separating for a von Neumann algebra 𝒜, the KMS inner product on 𝒜 is defined as
⟨ a_1 | a_2⟩_ψ, KMS = ⟨𝒥_ψ(a_1^†)|a_2⟩_ψ
= ⟨Ψ| a_1^† Δ_ψ^1/2 a_2 |Ψ⟩.
where the last expression follows from (<ref>).
In the case of matrix algebra, it reduces to
⟨ a_1 |a_2⟩_ρ = Tr( ρ ^1/2 a_1^†ρ ^1/2 a_2).
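As a consistency check not spelled out above, one can verify directly in the matrix-algebra case that (<ref>) is the adjoint of 𝒯 with respect to the KMS inner products:
⟨ b |𝒯(a)⟩ _ρ_B = Tr( ρ_B^1/2 b^†ρ_B^1/2 𝒯(a)) = Tr( 𝒯^*(ρ_B^1/2 b^†ρ_B^1/2) a) = Tr( ρ_A^1/2 𝒯^P_ρ(b)^†ρ_A^1/2 a) = ⟨𝒯^P_ρ(b) |a⟩ _ρ_A,
where the second equality is the definition of the Hilbert-Schmidt dual 𝒯^* and the third follows by inserting the explicit form of 𝒯^P_ρ.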
While the definition (<ref>) is only for Type 1 von Neumann algebras, the one in (<ref>) can be generalized to the mapping between general von Neumann algebras. In particular, it is helpful for high-energy physics applications where the von Neumann algebras under consideration are of Type 3_1.
To summarize, consider (𝒜, ℋ_A, J_A, 𝒫_𝒜 ), (ℬ, ℋ_B, J_B, 𝒫_ℬ ) and let 𝒯: 𝒜→ℬ be a unital completely positive map between the algebras.
One can choose an arbitrary state ρ_B ∈𝒮(ℬ) and if both ρ _B and ρ _A = 𝒯^*(ρ_B) are faithful states on the corresponding von Neumann algebras, construct the Hilbert space representation of the algebras over them. Then,
if for all ρ , σ∈𝒮(ℬ) we have
S(ρ | σ) = S( 𝒯^*(ρ) | 𝒯^*(σ)),
there exists a unital completely positive map 𝒯^P_ρ: ℬ→𝒜 such that 𝒯^P_ρ∘𝒯 acts as the identity on the corresponding GNS Hilbert space. This map is nothing but the Petz dual map given in (<ref>),
and it can also be shown that
(𝒯^P_ρ)^* = 𝒫 _ρ_B, 𝒯^* in (<ref>).
In <cit.>, one can find the discussion for generalization of the Petz dual map in cases where the states are not faithful.
§.§ Approximate recoverability
The equality of relative entropy is a necessary and sufficient condition for exact recoverability. In the case of approximate quantum error correction, the quality of recovery is controlled by the behavior of the relative entropy under the action of the quantum channel. At the heart of this result is a strengthened version of the monotonicity of the relative entropy: a small change in the relative entropy through the channel guarantees approximate recoverability of the states.
For a given channel ℰ, Junge et al <cit.> found an expression for the recovery channel ℛ_ρ, ℰ, which is closely related to Petz recovery channel and it is also universal. In terms of ℰ and an arbitrary full rank density matrix ρ, it is given by
ℛ_ρ, ℰ(.) = ∫ _-∞^∞ dt p(t) ρ^-it 𝒫_ρ, ℰ( ℰ(ρ)^it (.) ℰ(ρ)^-it) ρ^it
where p(t) = π /( cosh (2π t) +1) and
𝒫_ρ, ℰ is the Petz recovery channel given in (<ref>). Moreover, they gave a lower bound on the difference between the relative entropies in terms of the fidelity between the original state and the recovered one as
S(ρ | σ) - S( ℰ(ρ)| ℰ(σ)) ≥
-2 log F( ρ , ℛ_σ, ℰ∘ℰ (ρ)).
The result above has been found in the context of quantum information theory, i.e., Type 1 von Neumann algebras, while one should go beyond it
in the case of QFTs and gravity.
In <cit.>, authors generalized the previous result to quantum channels between general von Neumann algebras in the context of modular theory.
In the Heisenberg picture, consider 𝒯 : 𝒜→ℬ be a unital, normal and two-positive map between von Neumann algebras.
One can associate a dual Petz map 𝒯 ^P_ψ : ℬ→𝒜 (<ref>).
They found the recovery map in the form of
α (.) = ∫ _-∞^∞ dt p(t) α^t_ψ, 𝒯 (.)
while
α^t_ψ, 𝒯 (.) = ζ ^t _ψ, 𝒜∘𝒯^P_ψ∘ζ ^-t _ψ, ℬ (.)
is called the rotated Petz map, and ζ ^t _ψ, 𝒜, the modular flow for (|Ψ_A⟩, 𝒜), is defined as
ζ ^t _ψ, 𝒜 (a) =Ad Δ^it _ψ, 𝒜 (a) = Δ^it _ψ, 𝒜 (a) Δ^-it _ψ, 𝒜 ∀ a ∈𝒜.
In the case of a finite-dimensional Type 1 factor, the recovery map in (<ref>) reduces to the dual of the recovery channel in (<ref>).
§ BLACK HOLES IN ADS
In this section, we briefly review the geometry of eternal two-sided AdS black holes and the quantization of a free field theory on this background, which is needed for studying bulk reconstruction in AdS/CFT in the strict large N limit.
We also discuss the operator algebra of observables in the case that an eternal black hole in the bulk is dual to two CFTs in the thermo-field double (TFD) state, and in the end, we talk about typical one-sided black holes in AdS.
§.§ AdS eternal black holes
There is a unique spherically symmetric solution of the Einstein equation with a negative cosmological constant, known as the AdS-Schwarzschild geometry. Its metric in (d+1) dimensions (d≥3) is given by
ds^2= - f(r) dt^2 + dr^2/f(r) + r^2 dΩ^2_d-1.
In the AdS unit, we have
f(r) = 1+ r^2 - α/r^d-2
where α is a parameter proportional to the ADM mass M.
Like in flat space, one can maximally extend the solution by introducing AdS-Kruskal coordinates
U≡ -e^2π(r_*-t)/ β
V≡ e^2π(r_*+t) /β
where r_* is the tortoise coordinate, defined by dr_*/dr= f^-1(r).
The metric in the new coordinates can be written as
ds^2= (β/2π)^2 f(r)/UV dU dV + r^2 dΩ^2_d-1.
The metric is originally defined in the region U<0, V>0 corresponding to outside the horizon.
By extending the geometry in a maximal way, assuming that there is no matter anywhere, one can describe eternal black holes in AdS spacetime (Fig. <ref>).
In all regions, one can introduce Schwarzschild coordinates. Their relation with Kruskal coordinates is given as <cit.> :
regions Kruskal coordinates relationship with the AdS-Schwarzschild coordinates
1 U< 0 , V> 0 U= -e^2π/β(r_*-t), V= e^2π/β(r_*+t)
2 U> 0 , V> 0 U= e^2π/β(r_*-t), V= e^2π/β(r_*+t)
3 U> 0 , V< 0 U= e^2π/β(r_*-t), V= -e^2π/β(r_*+t)
4 U< 0 , V< 0 U= -e^2π/β(r_*-t), V= -e^2π/β(r_*+t)
Regions 1 and 3 are two asymptotically AdS regions corresponding to the black hole exteriors; each lies behind the horizon from the point of view of the other.
In the U-V plane, surfaces of constant r_* are hyperboloids that always stay within a single region. On the other hand, the surfaces of constant t are simply straight lines running through the origin which means we can think of time translations as rotations of the Kruskal diagram about the bifurcation point.
We should keep in mind, however, that a line cannot be rotated past the horizon by a finite rotation.
Moreover, since there is no global timelike isometry, the entire geometry is time-dependent.
It is good to note that the natural choice for the vacuum of the bulk effective theory on the AdS eternal black hole is the Hartle-Hawking (HH) state |HH⟩.
It has been conjectured by Maldacena
<cit.> that the AdS eternal black hole has a holographic description in terms of two copies of an identical CFT in the TFD state
|Ψ_TFD⟩ = 1/√(Z_β)∑ _i e^-β E_i /2|i^*⟩_L |i⟩_R.
where β^-1 is the Hawking temperature of the black hole.
Therefore, one can describe each holographic CFT dual to the eternal black hole with the thermal density matrix
ρ_th = 1/Z_β e^-β H, where Z_β is the partition function of the CFT at the temperature β^-1.
Here, the states |i⟩ are the energy eigenstates of one single CFT and
|i^*⟩_L(R) = Θ|i⟩_R(L).
The Θ is an anti-unitary operator that reverses the time direction after exchanging the CFTs.
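Indeed, tracing out the left CFT in (<ref>) gives
Tr_L |Ψ_TFD⟩⟨Ψ_TFD| = 1/Z_β∑ _i e^-β E_i|i⟩⟨i|_R = ρ_th,
since the states |i^*⟩_L remain orthonormal under the anti-unitary Θ.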
Having the states |i^*⟩ for the left CFT comes from the fact that the left CFT is glued to region 3 with a flip in the AdS-Schwarzschild time coordinate. In other words, the times in the left and right CFTs are identified as t_R= t and t_L = -t, respectively.
The isometry of the entire bulk generated by t → t+T also corresponds to the identity
e^i ĤT|Ψ_TFD⟩ = |Ψ_TFD⟩, Ĥ= H_R -H_L.
The time translation on the entire geometry is generated by a Killing vector field denoted by V. It is future-directed timelike on the right exterior and past-directed timelike on the left exterior. The conserved charge associated to the global time translation is
ĥ = ∫ _Σ d Σ^μ V^ν T_μν
where Σ is a Cauchy hypersurface and T_μν is the energy-momentum tensor of the bulk fields.
The boundary dual of the operator ĥ is
ĥ = βĤ.
It comes from the fact that the vector field V on the right boundary reduces to β∂_t and to - β∂_t on the left one. We will discuss the interpretation of these quantities in the algebraic context later.
§.§ Scalar field quantization in AdS eternal black hole background
Consider a free scalar field propagating on a curved spacetime background. The equation of motion is the Klein-Gordon (KG) equation, (□ - m^2)ϕ = 0. The field then has a Heisenberg picture expression as
ϕ (x) = ∑ _n f_n(x) a_n + f^*_n(x) a_n^†,
where f_n are the classical solutions of the KG equation in the given background that should be normalized with respect to the KG norm. We should also impose normalizable boundary conditions at infinity.
To each mode f_n, we associate the annihilation and creation operators a_n, a_n^† with normalized commutation relation. The Hilbert space of QFT at every Cauchy slice of the entire background of interest can be constructed as a Fock space using these ladder operators.
On the other hand, we can always decompose a Cauchy slice Σ into the smaller slices Σ_r such that their union covers the entire Cauchy surface, Σ= ∪_rΣ_r.
We can find an alternative expression for the field in the domain of dependence of each subregion denoted by 𝒟(Σ_r), by solving the equation of motion on the coordinate system that covers only 𝒟(Σ_r)
ϕ (x_r) = ∑ _α f_r, α(x_r) a_r, α + f^*_r, α(x_r) a_r, α^†.
The new operators a_r, α have support only on Σ_r.
Here, we can use the Bogoliubov transformation and write the global mode operators as linear combinations of the a_r, α. Hence, we can in principle expand the field ϕ at each point in terms of the a_r, α.
Now, let us quantize a scalar field on the eternal AdS black hole background. In principle, we can do it by solving the equation of motion in Kruskal coordinates. However, there is another possibility when we take the Cauchy slice Σ such that it passes through the bifurcation point. We can consider Σ as the union of two smaller slices as
Σ= Σ_r ∪Σ_l (Fig. <ref>). Then, if we consider just region 1 (3), Σ_r (Σ_l) is itself a complete Cauchy slice for it. We also know coordinate systems that cover regions 1 and 3, which are nothing but two copies of the AdS-Schwarzschild coordinates. Therefore, we can start with region 1 and solve the KG equation outside the horizon in AdS-Schwarzschild coordinates
(<ref>).
One can find its solutions as
f_ω,m, the modes labeled by the quantum numbers ω,m. To each of them, we associate a pair of creation and annihilation operators with normalized commutation relations, denoted by a_ω,m and a^†_ω,m. Therefore, one can express the field in region 1 as
ϕ_1 (x) = ∑ _m ∫_0^∞dω/2π1/√(2ω) (f_ω,m(x)a_ω,m + f^*_ω,m(x)a^†_ω,m).
We can follow the same analysis in region 3 and find another set of operators ã_ω,m with the same algebra as the a_ω,m, while commuting with all the a_ω,m. One can write the fields in region 3, as in region 1, as
ϕ_3 (x) = ∑ _m ∫_0^∞dω/2π1/√(2ω) (f_ω,m(x) ã_ω,m + f^*_ω,m(x)ã^†_ω,m).
Therefore, we have the expansion of the field on the entire Cauchy slice Σ, and so it is straightforward to find the expression for the fields in regions 2 and 4 by evolving them with respect to the total Hamiltonian.
As it is mentioned, the vacuum of the quantum field in the eternal black hole background is an analog of the HH state
corresponding to the black hole temperature T = 1/β
which is defined to satisfy
(a_ω,m - e^-βω/2ã^† _ω,m) |HH⟩ =0,
and characterized by thermal occupation levels for both sets of modes a_ω,m and ã_ω,m:
⟨ a_ω,m a^†_ω',m'⟩_HH = e^βω/e^βω-1δ(ω-ω')δ_m,m'
⟨ a^†_ω,m a_ω',m'⟩_HH = 1/e^βω-1δ(ω-ω')δ_m,m'
and the same for the modes ã_ω,m <cit.>.
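For completeness, these occupation levels follow directly from (<ref>). Writing ⟨ a^†_ω,m a_ω',m'⟩_HH = n(ω) δ(ω-ω')δ_m,m', the condition a_ω,m|HH⟩ = e^-βω/2ã^†_ω,m|HH⟩ and its conjugate give
n(ω) = e^-βω ( n(ω) + 1 ),
where we used the commutation relation of the ã modes and the left-right symmetry ⟨ã^†ã⟩_HH = ⟨ a^† a⟩_HH. Solving yields n(ω) = 1/(e^βω-1), and ⟨ a a^†⟩_HH = n(ω)+1 = e^βω/(e^βω-1) as above.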
Using this standard formalism of quantization of the field theory on the curved spacetime background, one can also describe the Hilbert space of the theory as the Fock space built on the HH vacuum, denoted as ℋ_BH^(Fock).
§.§ Operator algebra of observables
Consider two copies of the boundary CFT in the TFD state dual to a two-sided eternal black hole in AdS.
As explained in the previous section, small perturbations around the black hole background can be described by quantizing the QFT in curved spacetime background.
The algebra of low-energy effective field theory in the left and right exteriors is denoted by 𝒜_l,0 and 𝒜_r,0 respectively.
In the large N limit, the algebras of observables outside the horizon of the black hole, i.e 𝒜_l,0 and 𝒜_r,0, are of Type 3_1 von Neumann algebra since the algebra of observables of a local region in QFT must be of this Type. As these two spacetime regions are spacelike separated, the corresponding operator algebras are each other's commutants.
One can split ĥ in (<ref>) as a difference between right and left operators as
ĥ = h_r - h_l
while
h_r= ∫ _Σ_r d Σ^μ V^ν T_μν, h_l= ∫ _Σ_l d Σ^μ V^ν T_μν.
To obtain this splitting we should choose the Cauchy hypersurface Σ to pass through the bifurcate horizon, i.e. Σ = Σ_r ∪Σ_l.
The question that arises at this point is whether the operators h_l and h_r belong to the operator algebras 𝒜_l,0 and 𝒜_r,0 or not.
As explained in <cit.>, besides this formal splitting,
due to the ultraviolet divergences near the horizon, i.e.,
‖ h_r(l)|HH⟩‖^2 = ⟨HH| h_r(l)^2 |HH⟩ = ∞,
the operators h_r, h_l do not make sense as operators on the bulk Hilbert space ℋ_BH^(Fock).
There is another way of answering this question: in the Tomita-Takesaki theory, the operator ĥ is related to the modular Hamiltonian of the HH state for the algebra 𝒜_r,0
Δ = e^- ĥ
A modular Hamiltonian of a Type 3_1 algebra never has a splitting as in (<ref>); hence the operators h_r, h_l are not well-defined and do not belong to the operator algebras 𝒜_l,0 and 𝒜_r,0 in the strict large N limit.
The operators dual to the low energy effective field theory are the subtracted single trace operators of the boundary theory which have Gaussian correlation functions in the large N limit and we denote them here as 𝒜_L,0 and 𝒜_R,0 for the left and right CFTs.
Therefore, the commutator of the single trace operators in the large N limit is a c-number.
Since the operators h_l and h_r are not part of the bulk operator algebras, the gauge theory Hamiltonian of the boundary theories, H_L and H_R must not be part of the algebras 𝒜_L,0 and 𝒜_R,0 as well.
Above the Hawking-Page temperature, where the two CFTs in the TFD state are dual to the two-sided eternal black hole in the bulk, H_R and H_L have thermal expectation values and connected two-point functions of order N^2.
Although the operators H_R and H_L do not have a large N limit, their difference does, and its bulk dual is ĥ (<ref>). The modular operator of the TFD state of the boundary for the algebra 𝒜_R,0 is
Δ = e^- βĤ. To obtain the modular operator, one can also start with finite N where the algebra of observables on both sides are of Type 1 von Neumann algebras. The left and right Hamiltonians can be written in terms of the usual Hamiltonian of a single copy of the system H as H_L = H ⊗ I and H_R = I ⊗ H. In the case of Type 1 algebra, each system individually can be described by a reduced density matrix which here for the TFD state, they are obtained to be the thermal density matrices
ρ_L = ρ_R = e^ - β H / Z.
Then, one can find the modular operator of the TFD state by using the relation (<ref>) as
Δ = ρ_L^-1⊗ρ_R = e^ - β H_R + β H_L = e^-βĤ
which is also valid at large N limit.
By subtracting the expectation value of the Hamiltonian, one can define the operator
H_R' = H_R - ⟨ H_R ⟩
and the same for the left side. We have
⟨ H_R'^2 ⟩∼ N^2 and thus H_R' does not have a large N limit.
By dividing it by N, we can introduce an operator
U = 1/N H_R'
with Gaussian correlation function at large N limit, the same as any other single trace operators.
Therefore, U has a large N limit, but at this limit, it is central as
[ U, O]= 1/N [H_R , O] = - i/N∂ O/∂ t
and at N=∞, it commutes with all the rest of single trace operators.
As explained, H_R and thus U is not part of the algebra 𝒜_R,0. Therefore, we can also define 𝒜_R,0 to consist of only the single trace operators that have non-zero commutators.
In other words, the operator algebra of low-energy excitations around the black hole background is dual to the single trace operators of the boundary with a nontrivial commutator,
they describe a generalized free field (GFF) over the thermal state of the CFT.
Using the AdS/CFT argument, we can identify the operator algebra of the bulk and boundary as
𝒜_r,0 =𝒜_R,0, 𝒜_l,0 =𝒜_L,0
which by itself requires that 𝒜_L,0 and 𝒜_R,0 be of Type 3_1 as well <cit.>.
In addition to the argument above about the nature of the algebra 𝒜_R,0, it has been also studied recently by Leutheusser and Liu in <cit.> purely in the boundary theory without requiring the duality with the bulk theory.
Using the half-sided modular inclusion, they argued that above the Hawking-Page temperature, there is an emergent operator algebra which is a von Neumann algebra of Type 3_1.
Take B to be a time band in the right boundary and denote the algebra generated
by subtracted single-trace operators in B as 𝒜^B_R,0
which
is dual to the bulk algebra of operators in the causal wedge of the time band B.
Since the generator of the boundary time translation is not part of the algebra 𝒜_R,0, the algebra 𝒜^B_R,0 does not coincide with the algebra of subtracted single-trace operators on the entire boundary; rather, 𝒜^B_R,0 is just a subalgebra of 𝒜_R,0.
As Ĥ is the generator of the boundary time translation, the modular flow of the algebra 𝒜_R,0 shifts the boundary time t → t+β u. Therefore, the operator algebra in the time band B =(t_0, ∞) maps to itself under conjugation by Δ ^iu for u >0
Δ ^iu 𝒜^B_R,0 Δ ^-iu= 𝒜^B_R,0 u>0.
This structure is called half-sided modular inclusion and exists only if 𝒜_R,0 is a Type 3_1 von Neumann algebra.
The 1/N corrections to this picture have been discussed by Witten in <cit.>. In particular, he showed that it modifies the emergent Type 3_1 algebra to an algebra of Type 2_∞.
At large N limit, one can define algebra 𝒜_R as an extension of the algebra 𝒜_R,0 by adding an additional generator U as
𝒜_R = 𝒜_R,0⊗𝒜_U.
The algebra 𝒜_R is no longer a factor since U is central. Similarly, one can define the algebra 𝒜_L on the left CFT by defining the operator
U'= H_L'/N.
Note that the operators U and U' are not completely independent since U-U' = Ĥ /N. At strict large N limit, U-U' vanishes, and therefore, the algebra 𝒜_L can also be defined in terms of U as
𝒜_L = 𝒜_L,0⊗𝒜_U.
In the large N limit, the algebras 𝒜_L and 𝒜_R are of Type 3_1 von Neumann algebra <cit.>. Once we go beyond the large N limit and consider 1/N corrections, the algebras 𝒜_L and 𝒜_R
become of Type 2_∞. Mathematically, the Type 2_∞ algebra is the crossed product of the Type 3_1 algebra of the strict large N limit by its modular automorphism group. By duality, these boundary algebras are dual to the bulk algebras of observables on each side of the black hole denoted by 𝒜_l and 𝒜_r
𝒜_r = 𝒜_R, 𝒜_l= 𝒜_L
which incorporates the algebra 𝒜_r(l),0, the observable U that is central at large N limit and 1/N corrections. Beyond N=∞, U is no longer central and the 1/N corrections modify the algebra in such a way that its center becomes trivial. More precisely perturbatively in 1/N, the algebra of observables deforms to the factor of Type 2_∞.
§.§ One-sided black holes in AdS/CFT
The full AdS-Schwarzschild geometry in (<ref>) describes an additional asymptotically AdS region that is connected to our universe by a wormhole.
Besides them, like in flat space, black holes can also be created by some sort of collapsing shell. In such a case, there is no wormhole connecting to another universe, since the geometry at earlier times looks nothing like the full AdS-Schwarzschild geometry, as the interior is not vacuum. Nevertheless, the one-sided geometry may share some features, such as the singularity and the future horizon, with the maximally extended solution.
Here, there is a new important feature in comparison with flat space. The Hawking radiation in AdS black hole background will reach the boundary in a finite time and then, it is reflected back by the boundary.
If the black hole is small enough, the entire black hole evaporates before the radiation gets to the boundary.
As the size of the black hole increases, once the Schwarzschild radius reaches the AdS radius, the radiation is reflected back into the black hole very quickly.
So, the black hole will quickly reach equilibrium with the Hawking radiation and remain constant in size up to small fluctuation.
As a result, with the usual reflecting boundary condition, big black holes in AdS never evaporate and they are eternal.
Thus, for a big black hole in AdS formed from collapse, at a late enough time when all the matter has fallen into the black hole and the fluctuations of the horizon have decayed away, the quantum fields start behaving like ones in the eternal black hole background.
It is known that the small black holes in AdS are not stable while the big ones are.
Black hole formation by collapse has a holographic interpretation as the thermalization of the CFT pure state on the boundary of the AdS space. In the dual CFT, we start in a pure state and then allow it to settle down and thermalize after a while. It will evolve to a state that is indistinguishable from a thermal state for the set of interesting observables.
Therefore, the late time CFT correlation functions on a massive pure state can be approximated by correlation functions on the thermal density matrix.
As we said, a big black hole in a pure state is dual to a high-energy pure state in the CFT. For a black hole at fixed energy, as Bekenstein proposed, the number of black hole microstates is counted by black hole entropy S. At sufficiently large energy, the black hole microstates dominate the microcanonical ensemble of the CFT. In other words, almost all high-energy states in the CFT have a bulk description as a single black hole.
In general, we can think of an equilibrium pure state as a typical state.
A typical black hole microstate of energy E_0 in the CFT is defined as a pure state which is a random superposition of energy eigenstates in a narrow energy band
|Ψ_0⟩ = ∑ _E_i ∈ (E_0 - δ E , E_0 + δ E) c_i |E_i⟩,
where c_i are random numbers selected with the uniform Haar measure.
These typical states represent the majority of black hole microstates of a given energy. They are approximately in equilibrium and so,
it is expected that correlators in these states will be the same as thermal correlators
⟨Ψ_0| O(x_1)O(x_2)...O(x_n) |Ψ_0⟩ = 1/Z_β Tr( e^-β H O(x_1)O(x_2)...O(x_n))
where β^-1 is the temperature corresponding to the energy E_0. We note that these typical states are not exactly the same as the late-time configuration of a black hole forming by collapse, as they have a narrower energy band.
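As a toy finite-dimensional illustration of this typicality (a stand-in model, not the CFT itself), one can draw Haar-random coefficients in a narrow band and check that the expectation value of a fixed bounded observable concentrates around the ensemble average:

import numpy as np

rng = np.random.default_rng(1)
d = 2000   # number of eigenstates in the band (E_0 - δE, E_0 + δE); illustrative

# Haar-random coefficients c_i: normalized complex Gaussians.
c = rng.normal(size=d) + 1j * rng.normal(size=d)
c /= np.linalg.norm(c)

# A bounded observable, taken diagonal in the energy eigenbasis for simplicity.
o = rng.uniform(-1.0, 1.0, size=d)

psi_avg = np.sum(np.abs(c) ** 2 * o)    # <Psi_0| O |Psi_0>
micro_avg = o.mean()                    # microcanonical (ensemble) average

print(psi_avg, micro_avg)               # agree up to fluctuations ~ 1/sqrt(d)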
Since these typical states look time-independent for the simple observables, their dual geometry should be characterized by an approximate killing isometry which is timelike near the horizon. It is mostly accepted that the geometry contains one exterior region described by the AdS-Schwarzschild metric.
It was proposed in <cit.> that the entire dual geometry is just the exterior region, terminated at the horizon by a firewall, which is consistent with the time translation symmetry we have.
However, since the curvature near the horizon of a big black hole is low, their proposal demands a modification of general relativity at low curvature.
In addition to this solution, it has been conjectured in <cit.> that if we have a smooth horizon, the dual geometry to a typical pure state contains the black and white hole interiors and part of the left region as well (Fig. <ref>).
Finally, we mention that to study the evaporation of stable black holes in AdS, one can impose absorbing boundary conditions for the big black holes instead of reflecting ones. In this case, the Hawking radiation never returns to the black hole since the outgoing modes are absorbed by the boundary, and so the black hole evaporates.
In the dual theory then, the CFT is not a closed system and it does not evolve unitarily. However, one can as usual add an auxiliary system which here stores the outgoing Hawking radiation when it reaches the boundary.
§ BLACK HOLE EXTERIOR RECONSTRUCTION
In this section, we will discuss the reconstruction of the modes on the two-sided black hole background. First, we review the HKLL procedure and after that, we will explain how we can use the Petz map definition in modular theory to reconstruct the modes on the left exterior from the reconstruction of the modes on the right exterior of the black hole.
§.§ Reconstruction of the black hole exterior using HKLL map
At large N we can treat the bulk theory as a quantum field theory on a curved spacetime background. One can then represent the black hole exterior in terms of the CFT operators using the HKLL reconstruction procedure <cit.>.
It is known that the free scalar field ϕ in the bulk is dual to the scalar conformal primary of the boundary with conformal dimension Δ= d/2 + √(m^2 + d^2/4), which is related to the boundary limit of the field ϕ via the extrapolate dictionary as
lim _r →∞ r^Δϕ (t, r, Ω) = O (t,Ω).
In case we have a gauge theory in the boundary, these primary operators are usually some single trace operators.
Consider scalar conformal primary operator O.
The same as for the vacuum, large N factorization holds for the thermal correlation functions, i.e.
Tr(ρ_th O(x_1)...O(x_2n)) =
1/2^n∑ _π Tr( ρ_th O(x_π_1)O(x_π_2)) ... Tr( ρ_th O(x_π_2n-1)O(x_π_2n)) + O(1/N),
where π runs over the set of permutations.
From (<ref>), one can find out that the large N factorization holds for the typical pure states as well, thus in all cases, each Schwarzschild mode in the bulk is dual to a GFF on the boundary.
We can expand the boundary GFF in terms of its Fourier modes O_ω,m as
O(t,Ω) = ∑ _m ∫_0^∞dω/2π(g_ω,m(t, Ω)O_ω,m + g^*_ω,m(t,Ω)O^†_ω,m).
The thermal expectation values of the Fourier operators also imply that they behave like the unnormalized creation and annihilation operators. One can use the extrapolate dictionary to find the rescaled operators Ô_ω,m= M_ω,m^-1 O_ω,m, which are identified with the bulk modes a_ω,m.
These CFT operators Ô_ω,m are the ones thermally populated at the Hawking temperature of the black hole β^-1
1/Z_β Tr(e^-β HÔ_ω,mÔ_ω',m'^†)
= e^βω/e^βω-1δ(ω-ω')δ_m,m'
1/Z_β Tr(e^-β HÔ^†_ω,mÔ_ω',m') = 1/e^βω-1δ(ω-ω')δ_m,m'.
Having the identification between bulk and boundary modes, we can follow the mode sum approach in <cit.> and find the CFT expression for every bulk field outside the horizon as
Φ _HKLL (t,r,Ω) = ∫ dt' dΩ' K(t,r,Ω| t',Ω') O(t',Ω')
for an appropriate choice of smearing function K. We can also find the field expression in terms of Fourier modes by plugging (<ref>) into (<ref>) as
Φ _HKLL (t,r,Ω) = ∑ _m ∫_0^∞dω/2π(ℱ_ω,m(t,r, Ω)O_ω,m + ℱ^*_ω,m(t,r,Ω)O^†_ω,m)
where
ℱ_ω,m(t,r, Ω)=
∫ dt' dΩ' K(t,r,Ω| t',Ω') g_ω,m(t',Ω').
By comparing with (<ref>), we can find that ℱ_ω,m(t,r, Ω)= M^-1_ω,m f_ω,m(t,r,Ω).
§.§ Coarse-grained vs fine-grained observables
Consider a big AdS black hole in equilibrium. An observer outside the horizon of the black hole has access just to the information in the exterior of the black hole, referred to as region 1. This bulk observer cannot distinguish the microstate of the black hole nor, more generally, whether it is a one-sided black hole or is connected to another universe through a wormhole.
However, in all cases, if we are just interested in the low-energy observables outside the horizon, we find from (<ref>)
that it is enough to describe the system by a thermal density matrix.
On the other hand, having a big black hole in AdS is dual to the thermalization of the boundary theory. In general, the thermalization of a closed quantum system leads to the division of the observables of the theory into two parts:
the coarse-grained or macroscopic observables and the fine-grained or microscopic ones, denoted by 𝒜_c and 𝒜_f, respectively.
The coarse-grained observables are the ones that can be easily measured by the low-energy observer.
More precisely, the thermalization of the system means that if we are just interested in measuring the macroscopic observables, the system can be approximately described by a thermal density matrix, i.e.
ρ|_𝒜_c = ρ_th, c = 1/Z^c_β e^-β H_c,
where H_c is the coarse-grained Hamiltonian and Z^c_β is the coarse-grained partition function
<cit.>.
In AdS/CFT, where we have the duality between the bulk and boundary theories, the bulk Hilbert space is isomorphic to the boundary one. In the bulk, we have a fundamental theory of quantum gravity that at low energies is described
by a local quantum field theory on a curved spacetime background. These are usually the macroscopic degrees of freedom in the bulk, while the stringy and trans-Planckian observables are the non-perturbative microscopic degrees of freedom.
When we have a black hole in the bulk, the coarse-grained observables are just the operators that lie outside the horizon, while the fine-grained ones contain the degrees of freedom of the black hole interior as well as the non-perturbative ones in the entire bulk.
In the rest of the paper, we are interested in studying the bulk gravity at low energies, and we denote the algebras of operators in this regime in the exterior and interior of the black hole by 𝒜_ext and 𝒜_int, respectively.
As it is mentioned above, for the low-energy observables outside the horizon
we can describe the bulk theory by the thermal density matrix for the bulk effective field theory lives in the AdS-Schwarzschild coordinates, in other words
ρ_bulk|_𝒜_ext = ρ_th, 1.
§.§ Reconstruction of the black hole exterior using the Petz map
As previously described, one can map the local bulk field in the exterior of a black hole into the non-local CFT operators using the HKLL procedure
ϕ (t,r,Ω) ⟶Φ _HKLL (t,r,Ω) = ∫_bdy dt' dΩ' K(t,r,Ω| t',Ω') O(t',Ω').
In other words, the HKLL map provides us with an embedding isometry V_HKLL: ℋ_ext→ℋ_CFT which in the Heisenberg picture maps the operators as
ϕ (t,r,Ω) ⟶Φ _HKLL (t,r,Ω)= V_HKLL ϕ (t,r,Ω) V_HKLL^† .
It is equivalent to consider the quantum channel which maps the density matrices in the exterior region to the boundary density matrices ℰ: 𝒮(𝒜_ext) →𝒮(𝒜_CFT) as
ℰ(.) = V_HKLL (.) V^†_HKLL.
However, we should be careful: low-energy observers can only measure the coarse-grained operators. For any GFF O(t, Ω), they can measure only
O_c(t, Ω) = P_coarse O(t, Ω) P_coarse
where P_coarse is the projection onto the coarse-grained Hilbert space which traces out the fine-grained degrees of freedom. Thus, the actual map we have is
ϕ (t,r,Ω) ⟶ P_coarse Φ _HKLL (t,r,Ω) P_coarse
=P_coarse V_HKLL ϕ (t,r,Ω) V_HKLL^† P_coarse
=∫_bdy dt' dΩ' K(t,r,Ω| t',Ω') O_c(t',Ω').
The Hilbert space of these coarse-grained GFF has the Fock space structure which is isomorphic to the Hilbert space of the free fields on the AdS-Schwarzschild background. Therefore, we can introduce the quantum channel
ℰ_c: 𝒮(𝒜_ext) →𝒮(𝒜_coarse) as
ℰ_c(.) = V_HKLL (.) V^†_HKLL|_𝒜_coarse = V_c (.) V^†_c
where V_c = P_coarse V_HKLL. Unlike ℰ, the quantum channel ℰ_c is invertible as the evolution is done via a unitary evolution. In this case, the recovery channel is simply known as
ℛ_c(.) = V^†_c (.) V_c = ℰ^*_c(.),
which one can also find using the Petz recovery channel formula (<ref>).
Therefore, one can use the dual of the recovery channel to map the operators in the Heisenberg picture, ℛ_c^* :𝒜_ext→𝒜_coarse
ℛ^*_c(.) = V_c (.) V_c^†= ℰ_c(.).
As a result, we have
ℛ^*_c( ϕ (t,r,Ω))=∫_bdy dt' dΩ' K(t,r,Ω| t',Ω') O_c(t',Ω')
From now on, we drop the subscript c, but by O we mean the coarse-grained GFF.
Since
𝒜_ext = span{a_ω,m, a^†_ω,m}, it is enough to find the action of the recovery map ℛ_c^* on this set of operators.
Using (<ref>) and (<ref>), one easily arrives at
ℛ^*_c(a_ω,m) = Ô_ω,m, ℛ^*_c(a^†_ω,m) = Ô^†_ω,m.
For now, let us consider a two-sided geometry that is dual to the TFD state of two identical non-interacting CFTs on the boundary, given by (<ref>),
and take the t=0 Cauchy slice in the bulk, Fig. <ref>.
As in Sec. <ref>, the bulk Hilbert space corresponding to quantizing the small fluctuations around the black hole geometry is denoted by ℋ^(Fock)_BH, and the algebras of low-energy observables on the two sides of the black hole by 𝒜_l,0 and 𝒜_r,0. The bulk Hilbert space can be constructed through the action of the operator algebra of only the right exterior on the HH state (one can obtain it equivalently from the algebra of the left exterior), i.e.
ℋ^(Fock)_BH = 𝒜_r,0|HH⟩ = 𝒜_l,0|HH⟩.
The Hilbert space of the full boundary theory is
ℋ = ℋ_CFT_L⊗ℋ_CFT_R.
The bulk states in the Hilbert space ℋ^(Fock)_BH are dual to a set of states in the boundary theory, called the code subspace, corresponding to the excitations around the TFD state. They can be obtained by acting with the dual single-trace boundary operators on the TFD state, and we denote this subspace by ℋ_TFD.
The TFD code subspace is spanned by the states a |Ψ_TFD⟩ with a∈𝒜_R,0.
It has the structure of a Hilbert space that can be built via the GNS construction using only 𝒜_R,0 or 𝒜_L,0 acting on the TFD state (we will discuss this more precisely later).
In the large N limit, where the algebras are of Type III_1, the TFD Hilbert space does not have a tensor product structure; in other words, there are no candidates for ℋ_L and ℋ_R such that
ℋ_TFD = ℋ_L ⊗ℋ_R.
The GFFs are then the representations of the single trace operators on the TFD Hilbert space, and thus the algebras 𝒜_L,0 and 𝒜_R,0 are von Neumann algebras on ℋ_TFD.
We note that these representations are not exactly the same as the original operators, since they are only defined on ℋ_TFD, while the single trace operators act on the full CFT Hilbert space.
We can follow the discussion above and map the algebra of operators in each region to the coarse-grained operators on the boundary via two copies of the Petz recovery channel as
ℛ^*_c,R: 𝒜_r,0⟶𝒜_R,0, ℛ^*_c,L: 𝒜_l,0 ⟶𝒜_L,0
such that
ℛ ^*_c,R(a_ω,m) = Ô_ω,m;R, ℛ^*_c,R(a^†_ω,m) = Ô^†_ω,m;R
ℛ ^*_c,L(a_ω,m) = Ô_ω,m;L, ℛ^*_c,L(a^†_ω,m) = Ô^†_ω,m;L.
Knowing the boundary reconstruction of the Schwarzschild modes, we can find the boundary representation of the bulk field at every bulk point.
Although we can simply map the operator algebra in region 3 to the left CFT through ℛ_c,L^*, there is an alternative way to write this mapping by using the Petz map formula in the GNS Hilbert space (<ref>). This will be helpful later, when we are interested in finding the CFT representation of the one-sided black hole interior modes.
As we mentioned, the effective field theory on the eternal black hole background is described by the HH state, thus
ρ_bulk|_𝒜_r,0 = ρ_th, 1 ρ_bulk|_𝒜_l,0 = ρ_th, 3,
and it is cyclic and separating for the operator algebra both in regions 1 and 3. Moreover,
since these two regions are spacelike separated, we have
[𝒜_l,0, 𝒜_r,0]=0 and so they are each others commutants
( 𝒜_l,0)' = 𝒜_r,0.
On the other hand, the boundary theory is in the TFD state, therefore each CFT itself is described by the CFT thermal state and as we mentioned above, its restriction to the coarse-grained algebra is also a thermal state but this time with respect to the coarse-grained Hamiltonian.
They are also the commutants of each other on the ℋ_TFD
(𝒜_L,0)'=𝒜_R,0.
Therefore, one can use the modular theory expression (<ref>) and write the Petz map ℛ_c,L^* as
ℛ_c,L^*(.) = 𝒥_TFD∘ℛ'_c,R-ρ_th∘𝒥_HH (.) = J_TFD ℛ'_c,R-ρ_th (J_HH (.) J_HH ) J_TFD
where 𝒥_HH and 𝒥_TFD are, respectively, the modular conjugations of the bulk and boundary theories with respect to the low-energy observables, and ℛ'_c,R-ρ_th is defined through the relation in (<ref>).
In the bulk, where the theory is described by the HH state, the modular conjugation operator is the anti-unitary CPT operator (more precisely, the CRT transformation), which in the AdS-Schwarzschild coordinates acts as
𝒥_HH (ϕ_1(t,r,Ω))= J_HH ϕ_1(t,r,Ω) J_HH = ϕ_3(-t,r,Ω).
One then can find that
𝒥_HH : a_ω,m⟷ã_ω,m.
On the boundary side where the theory is described by the TFD state, the modular conjugation acts in the TFD Hilbert space as
𝒥_TFD : O_ω,m;L⟷ O_ω,m;R.
Therefore, one can simply check that
ℛ _c,L^*(a_ω,m) = 𝒥_TFD∘ℛ'_c,R-ρ_th∘𝒥_HH (a_ω,m) = Ô_ω,m;L
ℛ _c,L^*(a_ω,m^†) = 𝒥_TFD∘ℛ'_c,R-ρ_th∘𝒥_HH (a_ω,m^†) = Ô^†_ω,m;L.
Before going ahead, we note that after taking the 1/N corrections into account, the picture needs some modifications.
The algebras of observables on the two sides of the eternal black hole are 𝒜_l and 𝒜_r, which are of Type II_∞. They are dual to the crossed product of 𝒜_L,0 and 𝒜_R,0
with their groups of modular automorphisms, and the mapping above is modified accordingly.
Moreover, since we are working at order 1/N, it is more appropriate to use the approximate version of the recovery channel introduced in Sec. <ref>.
§.§ Vacuum of the GNS Hilbert space
As we saw in the previous section, the Hilbert space of the effective field theory in the bulk is dual to the TFD Hilbert space ℋ_TFD.
In this section, we discuss more precisely the structure of the dual code subspace on the boundary.
The boundary theory is the tensor product of two identical CFTs, ℋ = ℋ_CFT_L⊗ℋ_CFT_R. The algebra of bounded operators on ℋ is
ℒ(ℋ) = ℒ(ℋ_CFT_L) ⊗ℒ(ℋ_CFT_R)
and the algebra of low-energy observables is a subalgebra of the full algebra
𝒜= 𝒜_L,0⊗𝒜_R,0.
In order to define the TFD Hilbert space, as explained in <cit.>, we associate a vector |a⟩ to each operator a ∈𝒜, with the inner product among them defined as
⟨ a |b⟩ = ⟨Ψ_TFD| a^† b |Ψ_TFD⟩
for all a,b ∈𝒜 and in particular if both a,b belong to 𝒜_R,0 or 𝒜_L,0, it is reduced to
⟨ a |b⟩ = (ρ_th a^† b).
The set of vectors |a⟩ does not have a Hilbert space structure, since there exist non-zero operators y ≠ 0 in the algebra 𝒜 such that ⟨ y |y⟩ =0 <cit.>.
In other words, the TFD state is not separating for the algebra 𝒜; it is only cyclic and separating for the full operator algebra of each CFT.
In such a case, to construct a Hilbert space from this set of vectors, we can use the GNS construction, which sets such vectors to zero by introducing equivalence classes. The equivalence relation is defined as
a ∼ a+y a ∈𝒜, y ∈𝒴
where 𝒴 is the set of operators such that ⟨ y |y⟩ =0.
Moreover, since the actions of the single trace operators on the two sides of the TFD state are related to each other,
the algebra 𝒜_R,0 or 𝒜_L,0 alone is enough to generate the full TFD Hilbert space. In other words, all the vectors in ℋ_TFD can be written as |a⟩ with a ∈𝒜_R,0.
Now let us consider just the algebra of single trace operators 𝒜_R,0 on the right boundary theory and build the GNS Hilbert space of the algebra with respect to the thermal density matrix which is denoted as ℋ_ρ_th^ GNS.
In the GNS Hilbert space, the thermal state is represented by a pure state, denoted by |Ω_0⟩, which is called the GNS vacuum. It is also the state in ℋ^GNS_ρ_th corresponding to the identity operator of the algebra 𝒜_R,0.
The GNS construction provides a representation of the algebra 𝒜_R,0 on ℋ^GNS_ρ_th, which we denote here as ℳ_R.
The representation of any operator a ∈𝒜_R,0 is π (a) ∈ℳ_R, which acts only on ℋ^GNS_ρ_th; it is thus state-dependent, since it depends on the state over which the GNS Hilbert space is built. The original operator, on the other hand, acts on the full CFT Hilbert space and is state-independent.
The inner product among the states in the GNS Hilbert space is written as
⟨ a | b ⟩ = ⟨Ω_0|π (a)^†π (b) |Ω_0⟩.
Here the algebra consists of the single trace operators of the CFT. Their representations on the GNS Hilbert space are the GFFs acting only on ℋ^GNS_ρ_th and we also have
⟨Ω_0|π(O_R(x_1))^†π( O_R(x_2)) |Ω_0⟩ = ( ρ_th O(x_1)^† O(x_2)).
As we saw, the TFD Hilbert space can be obtained using 𝒜_R,0 or 𝒜_L,0 alone acting on the TFD state. While the boundary theory is in the TFD state, the CFT_R is described by the thermal state, so there should be some relation between the TFD Hilbert space and the GNS Hilbert space built from the thermal density matrix over 𝒜_R,0. Indeed, it can be shown that they are isomorphic:
ℋ_TFD ≅ ℋ^GNS_ρ_th.
We defined the algebra ℳ_R to be the representation of the 𝒜_R,0 on the GNS Hilbert space. Therefore, its commutant which we denote as ℳ_L can be interpreted as the representation of the 𝒜_L,0 on ℋ^GNS_ρ_th.
The GNS vacuum |Ω_0⟩ is cyclic and separating for both ℳ_R
and ℳ_L.
Therefore, there are modular operators for these algebras which generate automorphisms of them and leave |Ω_0⟩ invariant. If we denote the modular operator for ℳ_R
by Δ_0, then Δ_0^-1 is the modular operator for the algebra ℳ_L.
It can be seen as the representation of Δ, the modular operator for 𝒜_R,0, in the GNS Hilbert space; in particular, it should satisfy
π( Δ^-iu a Δ^iu ) = Δ^-iu_0 π ( a) Δ^iu _0
for all a ∈𝒜_R,0.
Now consider a bulk field ϕ which is dual to the boundary operator O in the AdS/CFT dictionary.
To be more precise from the algebraic point of view, we should say that the bulk field restricted to regions 1 and 3 is dual to the representations of O_R
and O_L in the GNS Hilbert space, which here are nothing but GFFs.
The extrapolate dictionary (<ref>) should also be written more carefully as
π( O_R(t,Ω) ) = lim _r →∞ r^Δ ϕ_R (t, r, Ω)
π( O_L(t,Ω) ) = lim _r →∞ r^Δ ϕ_L (t, r, Ω) .
Under the duality, in the strict large N limit we arrive at the identifications:
ℋ^GNS_ρ_th = ℋ^(Fock)_BH, |Ω_0⟩ = |HH⟩, ℳ_R= 𝒜_r,0,
ℳ_L= 𝒜_l,0.
Moreover, the state-dependence of the GNS representations of the boundary dual operators is indeed a reflection of the fact that when we treat gravity at weak coupling in the bulk, the mode expansion that identifies the bulk operator algebra for us depends on the semi-classical bulk geometry.
In order to create a GNS Hilbert space describing the code subspace on the boundary, one can alternatively start with another cyclic and separating vector |Ω⟩ for the algebra ℳ_R as the vacuum. Then, by duality, we have
|Ω⟩ = |HH⟩.
In general, the new vacuum can be related to |Ω_0⟩ by a unitary as
|Ω⟩ = U |Ω_0⟩.
In particular, the simple cases are in the form of
|Ω⟩ = v_L w_R |Ω_0⟩
while v_L ∈ℳ_L and w_R ∈ℳ_R.
The vacuum |Ω_0⟩ was chosen to build the GNS Hilbert space around the TFD state of the boundary theory. The interpretation of the GNS vacuum |Ω⟩ then depends on whether or not it belongs to
ℋ ^GNS _Ω_0.
If it does, i.e.
ℋ ^GNS _Ω = ℋ ^GNS _Ω_0,
the new GNS vacuum
corresponds to having some small excitations around the eternal black hole background,
and the unitaries w_R and v_L are related to excitations which lie only in regions 1 and 3, respectively.
On the other hand, if the state |Ω⟩ does not lie in this GNS Hilbert space, the bulk geometry is no longer described by the eternal black hole. The new vacuum can be related to excitations created by a unitary that changes the energy of the system by an amount that scales with N, whose backreaction thus changes the geometry of the spacetime.
Another possible example could be the time-shifted TFD state, defined as
|Ψ_T⟩ = e^i (H_L + H_R)T/2|Ψ_TFD⟩ = e^i H_LT|Ψ_TFD⟩ = e^i H_RT|Ψ_TFD⟩
or, more generally, evolving the TFD state with some other global charges. These correspond to large diffeomorphisms in the bulk which, for large enough T, change the bulk geometry (for more detail see <cit.>).
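The equalities in the time-shifted state above follow from a short check in the energy eigenbasis, assuming the standard convention in which the TFD state is annihilated by H_L - H_R. Writing |Ψ_TFD⟩ = 1/√(Z_β)∑ _i e^-β E_i /2|E_i⟩_L |E_i⟩_R, both Hamiltonians act with the same eigenvalue E_i on the i-th term, so
e^i (H_L + H_R)T/2|Ψ_TFD⟩ = 1/√(Z_β)∑ _i e^-β E_i /2 e^i E_i T|E_i⟩_L |E_i⟩_R = e^i H_L T|Ψ_TFD⟩ = e^i H_R T|Ψ_TFD⟩.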
More generally, we can extend the discussion in Sec. <ref> and write the mapping from the bulk algebras of observables to the representations of the corresponding algebras on the boundary as
ℛ^*_c,R: 𝒜_r,0⟶ℳ_R, ℛ^*_c,L: 𝒜_l,0 ⟶ℳ_L
while again, to map the operators lying in region 3 to the left CFT, we can use the Petz map definition in modular theory,
ℛ_c,L^*( . ) = 𝒥_GNS, Ω∘ℛ'_c,R-Ω∘𝒥_HH ( . )
where the bulk mode maps as
a_ω,m ⟶𝒥_GNS, Ω∘ℛ'_c,R-Ω∘𝒥_HH (a_ω,m) = π (Ô_ω,m;L)
a^†_ω,m ⟶𝒥_GNS, Ω∘ℛ'_c,R-Ω∘𝒥_HH (a^†_ω,m) = π (Ô^†_ω,m;L)
and π provides the representation of the single trace operators in ℋ_Ω^GNS.
Let us consider two vacua, |Ω⟩ and |Ω_0⟩, that lie in the same GNS Hilbert space.
In such a case, since the Petz reconstructions of the modes act on the same Hilbert space, it is interesting to compare them and find the relation between them.
Denote the Petz maps corresponding to the two GNS vacua by
ℛ^*_Ω and ℛ^*_Ω_0. From (<ref>), we get
⟨a_1 |Δ_bulk^1/2 |a_2 ⟩ = ⟨ℛ^*_Ω_0 (a_1) |Δ_0^1/2 |ℛ^*_Ω_0(a_2) ⟩ _Ω_0
⟨a_1 |Δ_bulk^1/2 | a_2 ⟩ = ⟨ℛ^*_Ω (a_1) |Δ^1/2 |ℛ^*_Ω(a_2) ⟩_Ω.
Since the two vacua lie in the same GNS Hilbert space, their dual bulk geometry is the same, and the left-hand sides of the equalities (<ref>) coincide. Equating the right-hand sides,
⟨Ω_0| ℛ^*_Ω_0 (a_1^†) Δ_0^1/2ℛ^*_Ω_0(a_2)| Ω_0 ⟩ = ⟨Ω| ℛ^*_Ω (a_1^†) Δ^1/2ℛ^*_Ω(a_2)| Ω⟩,
and using (<ref>), one arrives at
ℛ^*_Ω (a_ω) = U ℛ^*_Ω_0 (a_ω) U^†.
Suppose that we have the eternal black hole in the bulk and that the vacua |Ω_0⟩ and |Ω⟩ correspond, respectively, to the eternal black hole in equilibrium and to some excitations on the eternal black hole background created by U. Then, we get
ℛ^*_Ω (a_ω,m^†) = π( Ô_ω,m;L)
ℛ^*_Ω_0 (a_ω,m^†) = π_0 ( Ô_ω,m;L)
where π_0 and π are the representations of the single trace operators on the respective GNS Hilbert spaces. From (<ref>), we arrive at
π( Ô_ω,m;L) = U π_0 ( Ô_ω,m;L) U^†.
§ INTERIOR PETZ RECONSTRUCTION AND PAPADODIMAS-RAJU PROPOSAL
The idea of reconstructing the bulk modes in the left exterior via the Petz map (<ref>) is helpful even in cases in which we have a one-sided black hole in the bulk.
In this section, we attempt to construct the interior modes of a typical black hole microstate, and we will see that we arrive at the same result as the Papadodimas-Raju proposal.
§.§ Papadodimas-Raju proposal
Consider a big one-sided black hole in AdS. In this case, only the bulk modes outside the horizon can be described in the boundary theory using the HKLL procedure, while describing the interior modes is much more challenging. For this purpose, a remarkable proposal was introduced by Papadodimas and Raju in a series of papers <cit.>
to find a CFT description of the black hole interior when the system is in a pure state. Here we first briefly review the PR proposal.
The main idea of the PR proposal is to focus on a code subspace of the CFT, created by acting with a small algebra on the corresponding pure state, and then to find the CFT description of the interior operators in a state-dependent manner within the chosen code subspace only.
If the CFT pure state describes the stable black hole in AdS, it should be close to the thermal state. More precisely, we consider a typical pure state in the high-temperature phase of the gauge theory denoted by |Ψ_0⟩ (see Sec. <ref>).
The small algebra 𝒜 corresponds to simple experiments in the effective field theory in the bulk, i.e. the observables outside the horizon of the black hole. In the large N limit, 𝒜 can be thought of as the set of products of single trace operators of low conformal dimension, with up to K such operators,
𝒜 = span { O_ω_1, O_ω_1 O_ω_2,..., O_ω_1 O_ω_2... O_ω_K}
where the O_ω_i are the Fourier modes of the single trace operators and
∑_i ω_i ≪𝒩, with 𝒩 the CFT's central charge. Therefore, we do not have too many insertions, and so K≪𝒩.
Taking 𝒜 as the linear span of the products of the operators is equivalent to considering it as the set of all polynomials in the modes of the operators,
A_α = ∑ _Iα (I) ( O_ω_i)^I(ω_i)
where the α(I) are arbitrary coefficients and the sum runs over all functions I. This set of polynomials forms a linear space. The size of the set of all such polynomials scales like 𝒩^K, and to limit the dimension of this space, K should satisfy the constraint
dim(𝒜) = 𝒩^K ≪ e^𝒩.
Given a typical black hole microstate |Ψ_0⟩, one can define the code subspace, also called the small Hilbert space as
ℋ_ψ_0 = span {𝒜|Ψ_0⟩}.
For the observables O ∈𝒜, a typical pure state can be approximated by the thermal state, i.e.
⟨Ψ_0| O^† O |Ψ_0⟩ = 1/Z ( e^-β H O^† O ) + O(1/ 𝒩).
To describe the interior modes in the dual CFT, the PR proposal requires doubling the set of operators O_ω of the small algebra; the doubled operators Õ_ω are called mirror operators. These mirror operators commute with the original operators and, moreover, should be entangled with them in the pure state |Ψ_0⟩ in an appropriate way,
ensuring that they have the right properties in a given state of the CFT.
More concretely, the mirror operators are defined by
Õ_ω|Ψ_0⟩ = e^-βω /2 O^†_ω|Ψ_0⟩
Õ_ω O_ω_1 ... O_ω_n|Ψ_0⟩ = O_ω_1 ... O_ω_n Õ_ω|Ψ_0⟩.
Thus, demanding that the mirror operators have the correct behavior within low-point correlators
in a given pure state leads to a set of linear equations for the mirror operators.
As long as we do not have too many operator insertions, this set of equations can be solved in the full Hilbert space of the CFT.
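As a quick illustration of how these defining relations are used, consider a mixed two-point function of an ordinary operator and its mirror. Using the first relation above together with the typicality property (<ref>), one finds
⟨Ψ_0| O_ω Õ_ω |Ψ_0⟩ = e^-βω /2 ⟨Ψ_0| O_ω O^†_ω |Ψ_0⟩ = e^-βω /2 1/Z ( e^-β H O_ω O^†_ω ) + O(1/ 𝒩),
which has the same form as the correlator between the two exterior regions of the eternal black hole, as expected if the mirror operators describe modes behind the horizon.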
§.§ Black hole interior reconstruction using Petz map
As discussed in Sec. <ref>, we can start with just one black hole exterior. Having access only to the black hole exterior is equivalent to doing simple experiments on the boundary theory. In other words, the algebra of low-energy operators, which we denote here as 𝒜_ex, is identified with the algebra of coarse-grained operators of the CFT, denoted by 𝒜_c (see Sec. <ref>)
𝒜_ex = 𝒜_c.
From (<ref>) and (<ref>), we know that for the black hole in equilibrium both
ρ_bulk|_𝒜_ex and ρ_CFT|_𝒜_c
are thermal and under the duality, we can identify them.
One can follow the discussion in appendix <ref> and build the GNS Hilbert spaces corresponding to these thermal states in the bulk and boundary over the algebras
𝒜_ex and 𝒜_c, we denote them as
ℋ^GNS_ex and ℋ^GNS_c
respectively. We take the vectors
|Ω_ex⟩∈ℋ^GNS_ex |Ω_c⟩∈ℋ^GNS_c
as the cyclic vectors corresponding to
ρ_bulk|_𝒜_ex and ρ_CFT|_𝒜_c
which satisfy (<ref>). Then, we have
ℋ^GNS_ex = span{𝒜_ex|Ω_ex⟩}
ℋ^GNS_c = span{𝒜_c|Ω_c⟩}.
As in Sec. <ref>, we will refer to the cyclic vectors in (<ref>) as the GNS vacua of the GNS Hilbert spaces (<ref>). We also identify the algebras with their representations on the GNS Hilbert spaces.
With access only to the information outside the black hole, the observer cannot distinguish between all possible geometries of the entire bulk.
The bulk can be described as an eternal two-sided black hole or a one-sided black hole. If we know that in the bulk we have an eternal black hole that is dual to the TFD state on the boundary, we will reach exactly the setup that we discussed in Sec. <ref>.
In this case, we have
|Ω_ex⟩ = |HH⟩, |Ω_c⟩ = |Ω_0⟩, ℋ^GNS_ex= ℋ^(Fock)_ρ_th, ℋ^GNS_c= ℋ_TFD
and
𝒜_ex = 𝒜_r,0, (𝒜_ex)' = 𝒜_l,0 , 𝒜_c = ℳ_R ,
(𝒜_c)' = ℳ_L.
There is another possibility, namely that we have a black hole microstate in equilibrium. The CFT dual of such a geometry is a typical state defined in (<ref>). We assume that the geometry corresponding to a typical state has a smooth horizon and contains an interior region.
Here, there does not exist a second copy of the CFT playing the role of the left system, and the entire bulk is dual to just one CFT.
Consider a Cauchy slice Σ in the bulk. We can divide it as
Σ = Σ_ex∪Σ_in.
𝒜_ex is the algebra of observables on Σ_ex, and similarly we denote the algebra of operators on Σ_in by 𝒜_in.
Since the two regions are spacelike separated, they commute with each other, and as they cover the entire Cauchy slice
(𝒜_ex)' = 𝒜_in.
Therefore, the commutant of the algebra 𝒜_ex in the GNS Hilbert space
ℋ^GNS_ex can be interpreted as the representation of 𝒜_in
in the GNS Hilbert space. We identify them, and denote the representation of the operator algebra inside the black hole in the GNS Hilbert space by 𝒜_in as well.
We can build 𝒜_in in the GNS Hilbert space by conjugating with the modular conjugation as
𝒜_in = J_bulk 𝒜_ex J_bulk.
Hence, to each element of the algebra we associate an operator in the commutant as
a_ω∈𝒜_ex⟶ã_ω∈𝒜_in
where ã_ω corresponds to the modes in the black hole interior.
On the other hand, on the boundary, the degrees of freedom inside the black hole are encoded in the fine-grained observables of the CFT. In other words, the image of 𝒜_in on the boundary, which we denote as 𝒜_in-bdy, is a subalgebra of the fine-grained algebra of the CFT,
𝒜_in-bdy⊂𝒜_f⊂ℒ(ℋ_CFT).
Since 𝒜_in is the commutant of 𝒜_ex, under the duality it should map to the commutant of 𝒜_c on the ℋ^GNS_c.
Therefore, we can identify the commutant of the algebra 𝒜_c on the ℋ_c^GNS with the representation of the 𝒜_in-bdy on the GNS Hilbert space
(𝒜_c)' = J_bdy 𝒜_c J_bdy=𝒜_in-bdy
where J_bdy is the modular conjugation for the vacuum |Ω_c⟩ corresponding to the algebra 𝒜_c.
We also note that if the black hole is in the microstate |Ψ_0⟩, the GNS Hilbert space ℋ^GNS_c
is isomorphic to the GNS Hilbert space obtained by acting the elements of 𝒜_c on the state |Ψ_0⟩.
Following the discussion in Sec. <ref>, we can map the algebra of operators outside the horizon, 𝒜_ex, to the coarse-grained algebra 𝒜_c of the boundary theory through the Petz map in (<ref>), with the Schwarzschild modes mapped as in (<ref>).
But here, instead, we do not know the isometry that maps the interior modes to the boundary theory, unlike in the case of the eternal black hole, where the left exterior can also be mapped to the left CFT via a second copy of the HKLL map.
Moreover, we do not even know any global mapping, like the global HKLL map in pure AdS spacetime that maps the entire bulk to the entire boundary, that would let us use the same logic as the one used to find the Petz map reconstructing the entanglement wedge of a given boundary region.
The discussion we had in the previous section suggests that we can instead use the definition of the Petz map in modular theory and map the interior modes to the boundary via (<ref>). In addition, from the bulk perspective, for a black hole in a typical microstate the geometry is locally the same as that of the eternal black hole; even the late-time bulk correlation functions between operators inside and outside the horizon of a collapsing star geometry are known to be approximated very well by the correlators of operators in regions 1 and 2 of an eternal black hole <cit.>. Thus, the interior modes here indeed play the role of the modes coming from the left side of the eternal geometry. The important difference is that the commutant of the image of the operator algebra of region 1 no longer lies in a second CFT, but rather represents a subalgebra of fine-grained operators in the original CFT.
As a consequence of all the discussions above, we introduce the Petz map that encodes the interior modes of a black hole microstate in the dual CFT, ℛ^*_c,in : 𝒜_in→𝒜_in-bdy, as
ℛ^*_c,in (.) = 𝒥_bdy∘ℛ'_c,Ω∘𝒥_bulk (.)
which leads to
a_ω,m ⟶ℛ^*_c,in(a_ω,m) = J_bdy Ô_ω,m;c J_bdy≡Õ_ω,m∈ℒ(ℋ^GNS_c)
a^†_ω,m ⟶ℛ^*_c,in(a^†_ω,m) = J_bdy Ô^†_ω,m;c J_bdy≡Õ^†_ω,m∈ℒ(ℋ^GNS_c).
We can also find the dual operator through its insertion between the vectors in the GNS Hilbert space. From (<ref>), it is clear that every vector |a⟩∈ℋ^GNS_c can be obtained by the action of an element of the algebra a ∈𝒜_c on the GNS vacuum
|a⟩≡π(a) |Ω_c⟩
where π (a) is the representation of a in the GNS Hilbert space. Then, we have
⟨a|Õ_ω,m|b⟩ = ⟨Ω_c|π (a)^† Õ_ω,m π(b) |Ω_c⟩
= ⟨Ω_c|π (a)^† π(b) Õ_ω,m|Ω_c⟩
⟨a|Õ^†_ω,m|b⟩ = ⟨Ω_c|π (a)^† Õ^†_ω,m π(b) |Ω_c⟩
= ⟨Ω_c|π (a)^† π(b) Õ^†_ω,m|Ω_c⟩.
To go ahead, we recall that when the black hole is in equilibrium, from (<ref>) we have
|Ω_c⟩|_𝒜_c = ρ_th,c= 1/Z^c_β e^-β H_c.
One can then show that for every a ∈𝒜_c,
J_bdy a J_bdy|Ω_c⟩ = ρ_th,c^1/2 a^† ρ_th,c^-1/2|Ω_c⟩ = e^-β H_c/2 a^† e^β H_c/2|Ω_c⟩.
Considering (<ref>), roughly speaking, we can also interpret the vacuum |Ω_c⟩ as a TFD state with respect to H_c in the GNS Hilbert space,
|Ω_c⟩ = ∑ _i e^-β E_i /2|E_i⟩_c |E_i⟩_f
where the E_i are the energy eigenvalues of the coarse-grained Hamiltonian.
As it is mentioned, the coarse-grained observables are the GFFs around the thermal state of the CFT. Thus, the coarse-grained Hamiltonian should be in the form of
H_c = ∑_ω,mω O_ω,m;c^† O_ω,m;c
where the operators O_ω,m;c, the projection of the Fourier modes of the single trace operators onto the coarse-grained part of the system, can be identified with the representation of the single trace operators in the GNS Hilbert space ℋ^GNS_c.
They satisfy
[H_c, O_ω,m;c] = - ω O_ω,m;c
[H_c, O^†_ω,m;c] = ω O^†_ω,m;c
and therefore, one can obtain that
e^-β H_c/2 O_ω,m;c e^β H_c/2 = e^βω /2 O_ω,m;c
e^-β H_c/2 O_ω,m;c^† e^β H_c/2
= e^-βω /2 O_ω,m;c^†.
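These relations follow from the commutators above by a one-line argument: defining f(s) = e^-s H_c O_ω,m;c e^s H_c, one finds f'(s) = e^-s H_c [O_ω,m;c, H_c] e^s H_c = ω f(s), so that f(s) = e^ω s O_ω,m;c; setting s = β/2 gives the first relation, and taking the adjoint gives the second.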
In the end, using the definition of the operator Õ_ω,m (<ref>) and the relations (<ref>), we arrive at
⟨a|Õ_ω,m|b⟩
= e^-βω /2 ⟨Ω_c|π (a)^† π(b) O_ω,m;c^†|Ω_c⟩ = e^-βω /2 (ρ_th,c a^† b O^†_ω,m) + O(1/ 𝒩)
⟨a|Õ^†_ω,m|b⟩ = e^βω /2 ⟨Ω_c|π (a)^† π(b) O_ω,m;c|Ω_c⟩ =e^βω /2 ( ρ_th,c a^† b O_ω,m) + O(1/ 𝒩).
Moreover, we also obtain
Õ_ω,m|Ω_c⟩ = e^-βω /2 O^†_ω,m;c|Ω_c⟩
Õ_ω,m O_ω_1,m_1;c ... O_ω_n,m_n;c|Ω_c⟩ = O_ω_1,m_1;c ... O_ω_n,m_n;c Õ_ω,m|Ω_c⟩
which are equivalent to the defining relations of the PR proposal, obtained here in a more concrete way. The black hole microstate is replaced by the GNS vacuum state |Ω_c⟩, and restricting to the code subspace of the PR proposal is equivalent to working in the GNS Hilbert space.
The GNS Hilbert space ℋ^GNS_c can also be constructed through the action of 𝒜_c on other cyclic and separating vectors |Ω_c'⟩ belonging to ℋ^GNS_c. They can be related to |Ω_c⟩ via a unitary operator U ∈ℒ(ℋ^GNS_c) as
|Ω_c'⟩ = U |Ω_c⟩
and the vectors in the GNS Hilbert space can be identified as
ℋ^GNS_c = span{ | a'⟩≡π (a) |Ω_c'⟩ | ∀ a ∈𝒜_c }.
If we denote by Õ_ω,m' the representation of the operator dual to the interior mode a_ω,m in the GNS Hilbert space built over the vector |Ω_c'⟩, then from (<ref>) we arrive at
Õ_ω,m' = U Õ_ω,m U^†
where Õ_ω,m is the representation of the dual operator defined in the GNS Hilbert space over the thermal state (<ref>).
In particular, for the matrix elements of the operator we find
⟨a'|Õ_ω,m' |b'⟩ = ⟨Ω_c'|π (a)^† Õ_ω,m' π(b) |Ω_c'⟩
= ⟨Ω_c| U^†π (a)^† U Õ_ω,m U^†π(b) U|Ω_c⟩,
and, consistently with the PR proposal, we also find
Õ_ω,m' |Ω_c'⟩ = U Õ_ω,m|Ω_c⟩
= U ρ_th,c^1/2 O_ω,m;c^† ρ_th,c^-1/2 U^†|Ω_c'⟩
= U e^-βω/2 O_ω,m;c^† U^†|Ω_c'⟩.
The vectors in (<ref>) correspond to an equilibrium black hole background excited by some sources.
Such an excitation can be produced by turning on a source for some CFT operators; the unitary operator in (<ref>) is then the representation of the composite operator that creates that excitation.
The simplest cases of unitary operators U that result in a cyclic and separating vector |Ω_c'⟩ are the local unitaries
V_c ∈𝒜_c,
related to excitations only in region 1, or
W_f∈𝒜_c'= 𝒜_in-bdy,
corresponding to excitations behind the black hole horizon.
If we have a unitary of the form (<ref>), we get
⟨ a'| Õ_ω,m' | b'⟩ = ⟨ V_c^† a V_c | Õ_ω,m |V_c^† b V_c ⟩
since V_c^† a V_c ∈𝒜_c, while in the case that we have some excitation inside the black hole, i.e. we act with a unitary of the form (<ref>), we get
⟨ a'| Õ_ω,m' | b'⟩ = ⟨ a| Õ_ω,m |b⟩
as W_f and W_f^† commute with every a, b ∈𝒜_c.
As a result, as expected, we see that if we have access only to the coarse-grained observables, we cannot detect what happens behind the black hole horizon.
§ DISCUSSION
In order to study an evaporating black hole in AdS, one can use absorbing boundary conditions. In <cit.>, it has been shown that exactly at the Page time there is a phase transition in the location of the quantum extremal surface: the new Ryu-Takayanagi surface lies slightly inside the black hole event horizon. Thus, after the Page time, part of the interior is now encoded in the early Hawking radiation; in other words, it can be reconstructed through the bath.
In order to study the reconstruction of the interior, let us first consider a general entangled system. The CFT can be entangled with another CFT or with a collection of qubits. We refer to the other system as a bath. Here, we consider an entangled state of the form
|Ψ_en⟩ = ∑ _i α_i |ψ_i⟩⊗|i⟩
where α_i are some coefficients, |ψ_i⟩ are orthonormal states in the original CFT, and |i⟩ are states in the bath.
The sum can be over a small number of states or an exponentially large number.
We denote the coarse-grained algebra of the original CFT as 𝒜_cg and the operator algebra of the bath as ℬ. We define
𝒜_bdy = 𝒜_cg⊗ℬ.
The code subspace here is the set of states obtained by acting with the algebra 𝒜_bdy on the state |Ψ_en⟩. The corresponding subspace has the structure of a Hilbert space that can be built via the GNS construction,
ℋ^GNS_en≅𝒜_bdy|Ψ_en⟩.
This set of states is generally bigger than 𝒜_cg|Ψ_en⟩; in some specific cases, such as when the entangled state is the TFD state, the two sets coincide.
In general, as discussed in <cit.>, the GNS Hilbert space can be decomposed into a direct sum of spaces ℋ^j_Ψ_en, each of which is closed under the action of the coarse-grained algebra,
ℋ^GNS_en = ⊕ _j ℋ^j_ψ_en.
For each j, one can identify a unique state |ψ_en^j⟩∈ℋ^j_ψ_en which is an equilibrium state with respect to 𝒜_cg,
|ψ_en^j⟩|_𝒜_cg = ρ _th ^c,
and the entire ℋ^j_ψ_en can be generated by acting with 𝒜_cg on |ψ_en^j⟩.
For the exterior of the black hole, as in (<ref>), we have the mapping ℛ^*_ext: 𝒜_ext→𝒜_cg, and from its ρ-dual we can find the Petz map from the operator algebra of the interior to the commutant of the representation of the coarse-grained algebra on the boundary,
ℛ^*_in: 𝒜_in→ℳ'_cg.
The interior part of a Cauchy slice at late time can be divided into the island and the remaining part of the interior which can be reconstructed from the original CFT. Thus we consider 𝒜_in = 𝒜_island⊗𝒜_in-CFT.
The algebra ℳ'_cg is the representation of 𝒜'_cg= ℬ⊗𝒜_fg, where 𝒜_fg
is the fine-grained algebra of the original CFT.
In the GNS Hilbert space, ℛ^*_in can be obtained from the direct sum of the mappings in each ℋ_en^j as
ℛ^*_in = ⊕_j ℛ^*_in,j.
Each ℛ^*_in,j can be obtained by the same approach as the Petz dual map in modular theory. In each mapping ℛ^*_in,j, depending on the structure of the entanglement in |ψ^j_en⟩, the interior can be mapped to the commutant of 𝒜_cg in ℋ_en^j, which can be the representation of a subalgebra of the fine-grained algebra or of the algebra of the bath system.
From the island conjecture, it is expected that
ℛ^*_in(a) = ⊕_j ℛ^*_in,j(a) ∈𝒜_fg ∀ a ∈𝒜_in,CFT
ℛ^*_in(a) = ⊕_j ℛ^*_in,j(a) ∈ℬ ∀ a ∈𝒜_island.
To date, there exists no microscopic proof of the island conjecture in the literature.
Carrying out the exact calculation of the Petz map reconstruction for an evaporating black hole would be a good check of the island conjecture.
I would like to thank M. Mirbabayi and K. Papadodimas for useful discussions and comments on the draft. In particular, I would like to thank my supervisor, K. Papadodimas, for initial discussions on a possible connection of the Petz map reconstruction of the black hole interior with the Papadodimas-Raju proposal. I would also like to particularly thank M. Bertolini and M. Serone for their invaluable support during this work.
The research is partially supported by INFN Iniziativa Specifica - String Theory and Fundamental Interactions project.
§ TOMITA-TAKESAKI THEORY IN A NUTSHELL
In this section, we briefly review the Tomita-Takesaki theory. It is mostly based on <cit.>.
The set of all bounded, linear operators acting on a Hilbert space ℋ is denoted by ℒ(ℋ). A subset 𝒜⊂ℒ(ℋ) which is closed under Hermitian conjugation, addition, multiplication, and closed under the weak convergent limit that also contains the unit operator is called a von Neumann algebra.
For a given 𝒜, the set of all bounded operators which commute with every elements of 𝒜 is called the commutant of 𝒜
𝒜' = {b ∈ℒ(ℋ) | ab = ba, ∀ a ∈𝒜}
which itself is a von Neumann algebra. For any von Neumann algebra 𝒜 on ℋ, we have 𝒜” = (𝒜')' = 𝒜.
Another von Neumann algebra which is induced by 𝒜 is the center of the algebra, denoted by Z_𝒜= 𝒜∩𝒜'.
A representation of the algebra 𝒜 in a Hilbert space ℋ is a map π from the algebra to the bounded operators on ℋ such that π(ab) = π (a) π (b) and π (a^*) = π (a) ^†. The map π is unital if π (I)= I.
A linear form over 𝒜 is a function from algebra to the complex numbers ϕ :𝒜→ℂ such that
ϕ (α a + β b) = αϕ (a) + βϕ (b) ∀ a,b ∈𝒜 , α , β∈ℂ.
It is called positive if ϕ (aa^*) ≥ 0, ∀ a ∈𝒜, and normalized if ϕ (I) = 1. A normalized, positive linear form is called a state on a von Neumann algebra.
Following the GNS construction, for each positive linear form
ϕ over 𝒜, one can build a Hilbert space ℋ_ϕ and a representation π _ ϕ of the algebra 𝒜 by linear operators acting on ℋ_ϕ. The state ϕ defines a Hermitian scalar product on 𝒜 as
⟨ a | b ⟩ = ϕ (a^* b) ∀ a,b ∈𝒜.
A vector |Ψ⟩∈ℋ is called cyclic for an algebra 𝒜 if the set of vectors a |Ψ⟩ for a ∈𝒜 is dense in ℋ, and separating if the condition a|Ψ⟩ =0 implies that a=0. If |Ψ⟩ is cyclic and separating for 𝒜, it is also cyclic and separating for 𝒜'. Naturally, a representation π is called cyclic if there exists a vector |Ψ⟩ in the representation space ℋ such that π (𝒜) |Ψ⟩ is dense in ℋ. The GNS construction provides a cyclic representation with cyclic vector |Ψ⟩ such that
ψ (a)= ⟨Ψ | π _ψ (a)| Ψ⟩,
which is the familiar form of expectation values in quantum mechanics. For a faithful linear form, we will identify 𝒜 with π _ ψ (𝒜).
Moreover, there is a correspondence between superoperators on 𝒜 and linear operators acting on ℋ. A linear map from the algebra to itself, 𝒯: 𝒜→𝒜, is called a superoperator. A superoperator is called unital if 𝒯(I)=I and ϕ-preserving if
ϕ (𝒯 (a))= ϕ (a) for all a ∈𝒜. For a generic von Neumann algebra, every normal superoperator has a corresponding operator in the GNS Hilbert space. The converse, however, does not always hold, e.g. for the local algebras of QFT; for matrix algebras the correspondence is one-to-one.
The GNS Hilbert space operator T_ψ∈ℒ(ℋ_ψ) corresponding to the superoperator 𝒯 is defined in such a way that
T_ψ (a|Ψ⟩) = 𝒯(a)|Ψ⟩
for all a∈𝒜.
If 𝒯 is unital, T_ψ leaves |Ψ⟩ invariant and if 𝒯 is ψ-preserving, T^† _ψ also leaves |Ψ⟩ invariant.
Before proceeding, to gain more intuition, let us first consider a Type I von Neumann algebra, i.e. the algebra of d × d complex matrices acting irreducibly on the Hilbert space 𝒦 of a d-level system, denoted by ℒ(𝒦). In such a system, states are described by a positive, semi-definite, Hermitian operator of trace one, ρ∈ℒ(𝒦), which is called the density matrix.
The set of all density matrices on 𝒦 denoted by 𝒮(𝒦).
Corresponding to the state ρ, one can define a map ϕ _ρ : ℒ(𝒦) →ℂ given by ϕ _ ρ (a)= (ρ a), such that for any observable on the system gives us its expectation value on the state ρ.
Given a density matrix
ρ = ∑ _i λ _i ^ 2 |i⟩⟨i| ,
we can always purify the state by coupling it to a second system with Hilbert space 𝒦' such that dim 𝒦' ≥rank ρ. The Schmidt decomposition guarantees that there always exists such a system for which equality holds. Here, we take 𝒦' = span{|i'⟩} to be isomorphic to 𝒦.
The state ρ on 𝒦 can be purified by a vector
|ρ^1/2⟩ = ∑ _ i λ _i |i⟩|i'⟩∈𝒦⊗𝒦'
such that ρ = _𝒦' |ρ^1/2⟩⟨ρ^1/2|.
Therefore, we can consider 𝒜 = ℒ(𝒦) ⊗ I_𝒦' as
a von Neumann algebra on 𝒦⊗𝒦'= ℋ_ ρ. Here, the commutant is simply 𝒜' =I_𝒦⊗ℒ(𝒦') (sometimes, for simplicity, we just refer to them as 𝒜 = ℒ(𝒦) and 𝒜' = ℒ(𝒦')).
The vector |ρ^1/2⟩ is cyclic and separating for the algebra 𝒜 if and only if ρ is full-rank. We have also
ϕ _ ρ (a)= (ρ a) = ⟨ρ ^1/2 |(a ⊗ I )| ρ ^1/2⟩
for all a ∈𝒜. Thus, following the GNS construction, we can find a cyclic representation of 𝒜 with cyclic vector |ρ^1/2⟩. The map from
𝒜 to ℋ_ ρ is defined as
a ⟶|a⟩_ ρ = (a ⊗ I )|ρ^1/2⟩=∑ _ i λ _i (a|i⟩) |i'⟩,
and ℋ_ ρ is nothing but the set of vectors (a ⊗ I )| ρ ^1/2⟩ endowed with the inner product
⟨ a | b ⟩ _ ρ =ϕ _ ρ (a^ † b)= tr (ρ a^ † b) = ⟨ρ^1/2 |(a^† b ⊗ I )| ρ^1/2⟩.
In addition, for every a∈𝒜, there exists an operator a_m' ∈𝒜' that creates the same vector in ℋ_ρ as a,
(a ⊗ I)|ρ^1/2⟩ =(I ⊗ a_m')|ρ^1/2⟩,
which is called the mirror operator of a and is given by
a_m' = ρ ^1/2 a^T ρ ^-1/2,
where transpose is taken in the ρ eigenbasis.
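As a concrete sanity check, the defining property (a ⊗ I)|ρ^1/2⟩ = (I ⊗ a_m')|ρ^1/2⟩ can be verified numerically for a small matrix algebra. The following sketch (ours, not part of the original discussion) works in the ρ eigenbasis and uses a real matrix a, so the transpose needs no extra conjugation:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Full-rank density matrix rho = sum_i lambda_i^2 |i><i|, diagonal in the
# computational basis, which we take to be the rho eigenbasis.
lam = rng.uniform(0.2, 1.0, d)
lam /= np.linalg.norm(lam)                 # enforce sum_i lambda_i^2 = 1
rho_half = np.diag(lam)                    # rho^{1/2}

# Purification |rho^{1/2}> = sum_i lambda_i |i> x |i'>
basis = np.eye(d)
psi = sum(lam[i] * np.kron(basis[i], basis[i]) for i in range(d))

# A generic (real) algebra element and its mirror rho^{1/2} a^T rho^{-1/2}
a = rng.normal(size=(d, d))
a_mirror = rho_half @ a.T @ np.linalg.inv(rho_half)

lhs = np.kron(a, np.eye(d)) @ psi          # (a x I)|rho^{1/2}>
rhs = np.kron(np.eye(d), a_mirror) @ psi   # (I x a'_m)|rho^{1/2}>
print(np.allclose(lhs, rhs))               # True
```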
More generally, one can consider a generic algebra 𝒜 on some Hilbert space ℋ; every cyclic and separating vector |Ψ⟩ for 𝒜 on ℋ then gives rise to a GNS Hilbert space.
There is a classification of von Neumann algebras on finite-dimensional Hilbert space corresponding to the center of the algebra.
One special case is when the center is trivial 𝒵_𝒜 = {λ I }.
In such a case the algebra is called a factor. If 𝒜 is a factor on ℋ, there always exists a tensor factorization of the Hilbert space, ℋ = ℋ_A ⊗ℋ_Ā, such that 𝒜 is just the set of all linear operators on one tensor factor, 𝒜= ℒ(ℋ_A) ⊗ I _Ā.
For a generic case in which 𝒜 is not a factor, there is a decomposition of the Hilbert space as ℋ = ⊕ _α ( ℋ_ A_α⊗ℋ_ Ā_α) in which 𝒜 is block-diagonal, 𝒜 = ⊕ _α( ℒ(ℋ_ A_α) ⊗ I_ Ā_α).
Here, the set of states on the algebra 𝒜 is the intersection of the algebra with the set of states on the Hilbert space 𝒮(𝒜)= 𝒜∩𝒮(ℋ).
Any state ρ∈𝒮 (𝒜) is connected with the standard definition of a state on a von Neumann algebra by the linear form ϕ _ρ (a) = tr(ρ a) for all a ∈𝒜.
Moreover, for any state ρ on ℋ, there exists a unique restriction ρ|_𝒜 on 𝒮(𝒜) such that
ϕ _ ρ |_ 𝒜(a) = ϕ_ ρ (a) ∀ a ∈𝒜.
For now, consider a factor 𝒜 = ℒ (ℋ_A) ⊗ I_ Ā on ℋ = ℋ_A ⊗ℋ_Ā and a state ρ on ℋ. The restriction of any ρ∈𝒮(ℋ) to 𝒜 is
ρ| _ 𝒜 = ρ _ A ⊗1/ |Ā|I_Ā,
where ρ _ A ≡ tr_Āρ is the reduced density matrix of the subsystem A and |Ā|= dim(ℋ_ Ā).
One can follow the discussion above for ρ _ A and create the GNS Hilbert space representation of 𝒜 = ℒ(ℋ_A) as ℋ_ ρ _A = 𝒜 |ρ _A ^1/2⟩. If ρ _A is full-rank and the two tensor factors have the same dimensionality, this GNS Hilbert space is isomorphic to the original ℋ; otherwise, it is isomorphic to a subspace of ℋ.
Let us look at some important superoperators and their corresponding operators in the GNS Hilbert space:
An important anti-linear superoperator, which implements Hermitian conjugation, is the modular map 𝒮(a) = a^†. Its corresponding operator on the GNS Hilbert space, called the Tomita operator S_ρ: ℋ_ρ→ℋ_ρ, acts as
S_ρ(a ⊗ I)|ρ^1/2⟩ = (a^ †⊗ I) |ρ ^1/2⟩.
It is clear that S_ρ^2 = I, thus S_ρ is invertible. We also have S_ρ |ρ ^1/2⟩ = |ρ ^1/2⟩. As S_ρ is anti-linear, its adjoint S^† _ ρ is defined by
⟨a|S_ρ^† b ⟩ _ρ = ⟨S_ρ a|b ⟩ ^* _ρ = ⟨b|S_ρ a ⟩ _ρ
for all a, b ∈𝒜. The Tomita operator for the commutant 𝒜' is S_ρ ' = S _ρ ^ †.
Another important anti-linear superoperator is the one-to-one map 𝒥_ ρ : 𝒜→𝒜' between operators in 𝒜 and 𝒜', such that
𝒥_ρ (|i⟩⟨j|) = |i'⟩⟨j'|.
The operator corresponding to it is the anti-linear map J_ρ :ℋ_ρ→ℋ_ρ called modular conjugation that acts on the GNS Hilbert space as
J_ρ(a ⊗ I)|ρ ^1/2⟩ = (I ⊗ (a^ †)^T)|ρ ^1/2⟩
where the transpose is in the ρ eigenbasis. In other words in this basis, J_ρ acts as
J_ρ c_i |i⟩|j'⟩ = c_i^* |j⟩|i'⟩.
It also leaves |ρ ^1/2⟩ invariant.
We also have the relative modular superoperator 𝒟_σ| ρ : 𝒜→𝒜 defined as 𝒟_σ| ρ (a) = σ a ρ ^-1, where ρ and σ are two full-rank density matrices. Its corresponding operator on the GNS Hilbert space is Δ_σ| ρ : ℋ_ρ→ℋ_ρ; by definition,
Δ_σ| ρ(a ⊗ I)|ρ ^1/2⟩ = (𝒟_σ| ρ (a)⊗ I )|ρ ^1/2⟩.
Using the mirror operator, one finds Δ_σ| ρ = σ⊗ρ ^-1. In the case that σ is the same as ρ, the operator
Δ_ρ = ρ⊗ρ ^-1
corresponding to
𝒟_ρ (a) = ρ a ρ ^-1 is called the modular operator.
It leaves |ρ ^1/2⟩ invariant.
One can check that
Δ _ρ = S _ρ S_ρ ^ †
J_ρ = Δ _ρ ^ 1/2 S_ρ
S _ρ = J_ρΔ _ρ ^ 1/2 = Δ _ρ ^ -1/2 J_ρ.
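In the same finite-dimensional setup, these relations can also be checked numerically; in the sketch below (ours), the anti-linear maps are implemented as complex conjugation followed by a linear operation, with J_ρ realized as conjugation plus a swap of the two tensor factors:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

lam = rng.uniform(0.2, 1.0, d)
lam /= np.linalg.norm(lam)
rho = np.diag(lam**2)
basis = np.eye(d)
psi = sum(lam[i] * np.kron(basis[i], basis[i]) for i in range(d))

delta = np.kron(rho, np.linalg.inv(rho))              # Delta_rho = rho x rho^{-1}
delta_half = np.kron(np.diag(lam), np.diag(1 / lam))  # Delta_rho^{1/2}

def J(v):
    """Modular conjugation: complex-conjugate, then swap |i>|j'> -> |j>|i'>."""
    return np.conj(v).reshape(d, d).T.reshape(d * d)

def S(v):
    """Tomita operator S_rho = J_rho Delta_rho^{1/2}."""
    return J(delta_half @ v)

a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
lhs = S(np.kron(a, np.eye(d)) @ psi)          # S (a x I)|rho^{1/2}>
rhs = np.kron(a.conj().T, np.eye(d)) @ psi    # (a^dagger x I)|rho^{1/2}>
print(np.allclose(lhs, rhs))                  # True
print(np.allclose(delta @ psi, psi), np.allclose(J(psi), psi))  # True True
```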
One can also show that
J _ρ 𝒜 J_ρ = 𝒜' J _ρ 𝒜' J_ρ = 𝒜
and
Δ ^ z _ρ 𝒜 Δ ^ -z_ρ = 𝒜 Δ ^ z _ρ 𝒜' Δ ^ -z_ρ = 𝒜',
for all z ∈ℂ. If we write Δ _ρ = e ^ -K_ρ, where K_ρ is called the modular Hamiltonian, the latter equation can be interpreted, for z = -it, as
e ^ iK_ρ t 𝒜 e ^ -iK_ρ t = 𝒜
e ^ iK_ρ t 𝒜' e ^ -iK_ρ t = 𝒜'
which says that both 𝒜 and 𝒜' are closed under time evolution generated by the modular Hamiltonian.
It is worth noting at this point that for every isometry v' ∈𝒜',
the vector v'|ρ ^1/2⟩ or v'_m |ρ ^1/2⟩ is also a purification of ρ in ℋ_ρ. These vectors are also cyclic and separating for the algebra 𝒜. Thus, one could start from one of them instead of |ρ ^1/2⟩ and build ℋ_ρ by acting with the elements of 𝒜 on it.
This comes from the fact that while the eigenbasis of ρ is the preferred basis for 𝒦, we still have the freedom to choose a basis for 𝒦'. Acting with the isometry v' is indeed related to a change of basis in 𝒦'.
To have just one unique vector corresponding to any state ρ, we can use the modular conjugation operator defined in (<ref>) or (<ref>), fix this operator, and choose the vector which is invariant under J_ρ. The set of all vectors that are invariant under J_ρ is called the natural cone. The states on 𝒜 are in one-to-one correspondence with the vectors in the natural cone.
Take |e⟩ to be the vector corresponding to the maximally mixed state in the natural cone. For every σ∈𝒮(𝒜), the vector
(σ ^ 1/2⊗ I) |e⟩ is also in the natural cone:
J_ρ ( σ ^ 1/2⊗ I) |e⟩ = (I ⊗ (σ ^ 1/2)^T)|e⟩ = ( σ ^ 1/2⊗ I) |e⟩.
The vector |e⟩ itself is given as |e⟩= ( ρ ^ -1/2⊗ I) |ρ ^1/2⟩ in ℋ_ρ. Thus, the unique purification of the state σ in the natural cone is
|σ ^1/2⟩ = (σ ^ 1/2ρ ^ -1/2⊗ I) |ρ ^1/2⟩= Δ _σ | ρ ^ 1/2 |ρ ^1/2⟩.
Thus, we arrive at J_ρ |σ ^1/2⟩ = |σ ^1/2⟩ = Δ _σ | ρ ^ 1/2 |ρ ^1/2⟩, which also holds in infinite-dimensional systems.
We usually consider a von Neumann algebra in its standard form, defined as the quadruple (𝒜, ℋ, J, 𝒫_𝒜 ), where the algebra 𝒜 acts on the Hilbert space ℋ, J is an anti-linear, unitary involution, and 𝒫_𝒜 is the natural cone, which is invariant under J.
Finally, we note that although in a finite-dimensional system the Hilbert space approach and the algebraic approach are equivalent, in infinite dimensions, as in QFT, this is not the case. In QFT there is in general no tensor factorization of the Hilbert space, and the algebraic approach is indeed the appropriate framework to work in.
For an open region, 𝒪 in the Minkowski spacetime, 𝒜_𝒪 is defined to be the algebra of operators supported only in 𝒪 which is called the local algebra of the quantum field theory. 𝒜_𝒪 is also a von Neumann algebra that has the properties below:
* For 𝒪_1 ⊂𝒪_2, we have 𝒜_𝒪_1⊂𝒜_𝒪_2.
* If 𝒪_1 and 𝒪_2 are spacelike seperated, we have [𝒜_𝒪_1, 𝒜_𝒪_2] = 0.
* If 𝒪' denotes the causal complement of 𝒪, then 𝒜_𝒪' = (𝒜_𝒪)', which is called Haag duality.
* If we denote the causal completion of 𝒪 by 𝒪'', then we have 𝒜_𝒪'' = 𝒜_𝒪.
An important statement for local algebra in QFT is the Reeh-Schlieder theorem. It says that the vacuum vector |Ω⟩ is cyclic and separating for the local algebra in any region 𝒪. It means that to generate the full vacuum sector of the Hilbert space, one needs to act just with the operator restricted to any arbitrary open region. Therefore, although there is not any notion of trace or tensor factorization in QFT, the Tomita-Takesaki theory provides us with a powerful tool to define the quantum information quantities also in QFT.
As we saw in Sec. <ref>, an important quantity in the study of the recoverability of quantum channels in the theory of quantum error correction is the relative entropy, defined in (<ref>). But since the expression in (<ref>) can be used only for Type I von Neumann algebras, to apply the theory of QEC to the study of QFTs and gravity we need to generalize the definition of relative entropy so that it can be applied to a generic type of von Neumann algebra.
One can check that the relative entropy can also be rewritten in terms of the relative modular operator in the GNS Hilbert space as
S(ρ | σ) = -⟨ρ^ 1/2| logΔ _σ|ρ|ρ^1/2⟩.
Using the expression (<ref>), the relative entropy was generalized to general von Neumann algebras by Araki <cit.>, using relative modular Hamiltonians.
In the case of the local algebras of QFT, the suitable definition of the relative entropy between two states |Ψ⟩ and |Φ⟩, for measurements in the spacetime region 𝒪, is
S_𝒪(Ψ | Φ) = -⟨Ψ| logΔ _Φ|Ψ(𝒪)|Ψ⟩.
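As a consistency check, for the matrix algebra discussed above this expression reduces to the familiar formula. Since Δ _σ|ρ = σ⊗ρ ^-1, we have logΔ _σ|ρ = logσ⊗ I - I ⊗logρ (with the second factor written in the ρ eigenbasis, where the transpose is immaterial), and using ⟨ρ^1/2| (A ⊗ I) |ρ^1/2⟩ = tr(ρ A) together with ⟨ρ^1/2| (I ⊗ B) |ρ^1/2⟩ = tr(ρ B) for diagonal B, one finds
S(ρ | σ) = -tr(ρlogσ) + tr(ρlogρ) = tr ρ (logρ - logσ),
which is the standard relative entropy of two density matrices.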
http://arxiv.org/abs/2307.00390v1 | 20230701173408 | PersonaGen: A Tool for Generating Personas from User Feedback | ["Xishuo Zhang", "Lin Liu", "Yi Wang", "Xiao Liu", "Hailong Wang", "Anqi Ren", "Chetan Arora"] | cs.SE | ["cs.SE"]
PersonaGen: A Tool for Generating Personas from User Feedback
Xishuo Zhang1,
Lin Liu1,
Yi Wang2,
Xiao Liu2,
Hailong Wang1,
Anqi Ren1,
Chetan Arora3
1College of Computer Science and Technology, Inner Mongolia Normal University, Hohhot, China
xishuozhang163@163.com, liulin@imnu.edu.cn, 9177385@qq.com, 3138363067@qq.com
2School of Information Technology, Deakin University, Geelong, Australia
xve@deakin.edu.au, xiao.liu@deakin.edu.au
3Faculty of Information Technology, Monash University, Melbourne, Australia
chetan.arora@monash.edu
August 1, 2023
Personas are crucial in software development processes, particularly in agile settings. However, no effective tools are available for generating personas from user feedback in agile software development processes. To fill this gap, we propose a novel tool that uses the GPT-4 model and knowledge graph to generate persona templates from well-processed user feedback, facilitating requirement analysis in agile software development processes. We developed a tool called PersonaGen. We evaluated PersonaGen using qualitative feedback from a small-scale user study involving student software projects. The results were mixed, highlighting challenges in persona-based educational practice and addressing non-functional requirements.
Persona, GPT-4 Model, Knowledge Graph, Requirements Engineering, User Feedback.
§ INTRODUCTION
Personas are critical in software development, particularly in agile settings <cit.>. Among various data sources for creating personas, user feedback is indispensable and can be collected through, among others, app reviews, interviews, surveys, and usability testing <cit.>. However, using personas in agile software development based on user feedback has several challenges. For instance, development teams must consistently update and maintain these personas when requirements change <cit.>.
Moreover, in the context of rapid iteration and delivery in agile development, maintaining personas can easily be seen as a waste of time, especially for startups. Thus, it is pivotal to effectively generate personas from user feedback data in agile software development processes.
Personas and user feedback jointly contribute to user experience (UX), user interface design (UI), and software development processes <cit.>. Notably, user feedback can be employed to optimize personas, and combining personas and user feedback can facilitate the iteration of software projects. In recent years, some research has started focusing on applying data-driven personas in various types of software projects, such as B2B (Business to business) software projects <cit.>. However, there is still a gap in exploring the use of feedback data for persona generation in agile software development processes. Therefore, using user feedback to generate personas is an imminent need in agile software development processes.
To address this research gap, we propose a tool called PersonaGen. This tool employs the GPT-4 model and knowledge graphs for generating personas from well-processed user feedback. PersonaGen has three major features. 1. PersonaGen is the first tool to use the GPT-4 model for cleaning, integrating, predicting and analyzing user feedback. 2. PersonaGen is the first to construct a knowledge graph through various data attributes. 3. PersonaGen can classify different persona attributes and establish connections between various attributes to generate persona recommendations.
§ PERSONAGEN DESCRIPTION
The main objective of PersonaGen is to help agile development teams generate personas from user feedback. Personas are a crucial tool in agile software development processes, informing development decisions and project iterations. Our goal is to use the GPT-4 model and construct a knowledge graph to develop a tool capable of generating persona templates from well-processed user feedback. Fig. <ref> illustrates the main processes of PersonaGen.
§.§ Major Features
1. User Feedback Data Cleaning, Integration, Prediction, and Analysis via the GPT-4 Model. An important process in our tool involves cleaning, integrating, predicting, and analyzing user feedback. Effective processing is a key step in generating high-quality persona templates. To facilitate this, we integrated the GPT-4 model into our tool, enabling the generation of varied, high-quality, and detailed persona content. The tool can save the processed data in CSV format.
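The paper does not include implementation code; as an illustration only, a minimal sketch of this processing step might look like the following, assuming the 2023-era OpenAI Python SDK (the prompt wording and CSV schema are our own hypothetical choices, not PersonaGen's actual internals):

```python
import csv
import openai  # assumes openai.api_key has been configured elsewhere

SYSTEM_PROMPT = (
    "Clean and integrate the following raw user feedback items. For each item, "
    "return one line: requirement;requirement_type;demographic_hint;job_role_hint"
)

def process_feedback(raw_items, out_path="feedback_processed.csv"):
    """Ask GPT-4 to clean/analyze feedback and store the result as CSV."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": "\n".join(raw_items)},
        ],
    )
    lines = response["choices"][0]["message"]["content"].splitlines()
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["requirement", "type", "demographic", "job_role"])
        for line in lines:
            writer.writerow([field.strip() for field in line.split(";")])
```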
2. Knowledge Graph Construction-Based User Feedback Data. We constructed a knowledge graph to strengthen the connections among various attributes associated with each persona. We built different nodes with various data attributes, to facilitate persona classifications and recommendations. These include user requirement nodes, requirement type nodes, demographic nodes, and job role nodes. Furthermore, users can define different nodes based on their specific preferences.
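Again as an illustration (the node labels and relationship types below are our assumptions based on the node kinds named above, not PersonaGen's actual schema), the graph construction with the official neo4j Python driver could be sketched as:

```python
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def add_feedback_record(tx, req, req_type, demographic, job_role):
    # One node per attribute kind, linked so that personas can later be
    # classified and recommended by walking shared attributes.
    tx.run(
        "MERGE (r:Requirement {text: $req}) "
        "MERGE (t:RequirementType {name: $req_type}) "
        "MERGE (d:Demographic {name: $demographic}) "
        "MERGE (j:JobRole {name: $job_role}) "
        "MERGE (r)-[:OF_TYPE]->(t) "
        "MERGE (d)-[:RAISED]->(r) "
        "MERGE (d)-[:WORKS_AS]->(j)",
        req=req, req_type=req_type, demographic=demographic, job_role=job_role,
    )

with driver.session() as session:
    session.execute_write(
        add_feedback_record, "dark mode support", "UI", "undergraduate", "developer"
    )
```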
3. Generating Persona Templates. We developed persona classification and recommendation. Our tool can classify different persona templates and recommend similar personas based on the various attributes at different nodes. Moreover, the content of these personas includes demographic information, a visualization, feedback, motivations, and application requirements derived from user feedback. Figure <ref> illustrates an example persona template.
§.§ Implementation
We developed PersonaGen as a web application. PersonaGen has been implemented using common web technologies (HTML, CSS, JavaScript, and Java-Springboot). The database for constructing the knowledge graph uses Neo4j[https://neo4j.com]. We first applied the GPT-4 model[https://openai.com/gpt-4] for cleaning, integrating, predicting, and analyzing user feedback, which was obtained from student software projects. The tool project is publicly available[https://github.com/xishuozhang/PersonaGen/tree/main].
§ EVALUATION RESULTS
We conducted small-scale user studies to evaluate the value of PersonaGen. Specifically, we analyzed the qualitative feedback data from three student software projects, which involved a total of 13 third-year of undergraduate participants/students. We generated a series of persona templates using PersonaGen based on user feedback from student software projects. We found that the results were rather mixed. Some participants considered that the accuracy of persona generation using the GPT-4 model was superior to an independent analysis conducted by the participants themselves, primarily due to their lack of rich experience in qualitative analysis. In addition, some participants had some experience in software development within the industry and had a better understanding of the concept and application of personas. However, some participants considered that there was a lack of education and practical knowledge related to persona-based practices in student projects. While most participants were confident in effectively addressing functional requirements, they found it challenging to analyze non-functional requirements, such as users with accessibility requirements.
§ CONCLUSION AND FUTURE WORK
This paper introduces a tool called PersonaGen to assist agile software development processes in using personas for requirements engineering. The core module of this tool relies heavily on the large language model provided by the GPT-4 model. The construction of a knowledge graph facilitates the classification and recommendation of persona templates based on different nodes. The feedback on the PersonaGen tool was rather mixed, suggesting a need for enhanced persona-based educational practices, and highlighting challenges in addressing non-functional requirements. Furthermore, we plan to focus on addressing accessibility requirements and integrating more human-centric aspects into persona templates. In the future, this PersonaGen tool will be employed in educational practices, such as UI/UX design and requirements engineering related courses.
§ ACKNOWLEDGEMENTS
This work is supported by the Natural Science Foundation of Inner Mongolia under Grant No. 2023LHMS06006 and the 2020 National Key Research and Development Plan under Grant No. 2020YFC1523305.
http://arxiv.org/abs/2307.01200v1 | 20230703175945 | Real-time Monocular Full-body Capture in World Space via Sequential Proxy-to-Motion Learning | ["Yuxiang Zhang", "Hongwen Zhang", "Liangxiao Hu", "Hongwei Yi", "Shengping Zhang", "Yebin Liu"] | cs.CV | ["cs.CV"]
Real-time Monocular Full-body Capture in World Space via Sequential Proxy-to-Motion Learning
August 1, 2023
Learning-based approaches to monocular motion capture have recently shown promising results by learning to regress in a data-driven manner. However, due to the challenges in data collection and network designs, it remains challenging for existing solutions to achieve real-time full-body capture while being accurate in world space. In this work, we contribute a sequential proxy-to-motion learning scheme together with a proxy dataset of 2D skeleton sequences and 3D rotational motions in world space. Such proxy data enables us to build a learning-based network with accurate full-body supervision while also mitigating the generalization issues. For more accurate and physically plausible predictions, a contact-aware neural motion descent module is proposed in our network so that it can be aware of foot-ground contact and motion misalignment with the proxy observations. Additionally, we share the body-hand context information in our network for more compatible wrist poses recovery with the full-body model. With the proposed learning-based solution, we demonstrate the first real-time monocular full-body capture system with plausible foot-ground contact in world space. More video results can be found at our project page: https://liuyebin.com/proxycaphttps://liuyebin.com/proxycap.
§ INTRODUCTION
Capturing full-body motions from monocular videos is an essential technology for various applications such as gaming, VR/AR, and sports analysis.
One ultimate goal is to achieve real-time full-body capture while being accurate and physically plausible in world space.
Despite the recent advancements, this task is still far from being solved.
Compared with optimization-based methods <cit.>, learning-based approaches <cit.> can predict 3D body motions directly, which is quite efficient, and they have become increasingly popular.
As data-driven solutions, the accuracy of learning-based approaches is bounded by the quality of supervision, i.e., the motion annotations.
Meanwhile, their robustness heavily relies on the diversity of training data.
Since the motion capture datasets <cit.> are typically created with marker-based or multi-view systems and are limited in the diversity of body appearances and backgrounds, numerous efforts have been dedicated to enriching such data by generating pseudo labels <cit.> for in-the-wild datasets.
Though these pseudo labels become increasingly aligned with the images, they lack accurate global motions due to the simplified camera assumptions and depth ambiguity.
To tackle this limitation, researchers have also created synthetic data <cit.> by rendering human models with controllable cameras.
Despite the accurate motions and camera parameters of such synthetic data, there are distinct domain gaps between the real-world images and the rendered ones.
For physically plausible motion capture, there have been several attempts <cit.> to correct the motion results in the world space via physics-based trajectory optimization <cit.>.
However, these solutions typically require intensive computational resources, making them unsuitable for real-time applications.
Recent state-of-the-art solutions <cit.> alleviate this issue by incorporating the physics simulation in the learning process via reinforcement learning.
Despite their effectiveness, these methods typically require the modeling of a known 3D scene <cit.>.
To handle the above challenges, we propose a learning-based full-body motion capture method with real-time performance and plausible physical contact with the ground plane.
To this end, we start from the training data and propose a simple yet effective data generation strategy.
Our key insight is to leverage the existing large-scale motion sequence data, e.g., AMASS <cit.>, and synthesize sequential proxy-to-motion pairs with virtual camera settings.
More specifically, we adopt the 2D skeleton sequences as proxy representations and associate them with the 3D rotational motion sequences in world space.
Such a data generation strategy addresses the limitation of previous datasets in the aspects of scale, accuracy, and generalization:
i) the data can be synthesized at a large scale and high quality as both body motions and virtual camera settings are accurate in world space;
ii) as one of the well-studied representations, 2D skeletons can be estimated accurately and robustly by existing state-of-the-art toolkits, which mitigates the generalization issue of previous synthetic datasets;
iii) the proxy data can be synthesized individually and then integrated to form a full-body proxy dataset by combining hand gestures and facial expressions with body motions.
Moreover, the proxy sequences also help handle the depth ambiguity compared to previous work <cit.> using the proxy representations of a single frame.
Though the proposed proxy dataset can be more accurate and large-scale, it remains challenging for regression-based networks to learn physically plausible motions from proxy data.
To address this, we propose a network architecture that takes a proxy sequence as input and produces 3D motions in world space.
Our network first predicts coarse 3D motions and then refines them to be more accurate and physically plausible.
Specifically, we introduce a contact-aware neural motion descent module for iterative motion refinement.
In this module, we explore a cross-attention layer by using the contact status, proxy misalignment, and the current motion state as the query, key, and value.
By doing so, the neural descent module can produce more accurate motion updates based on foot-ground contact and motion misalignment with the proxy observations.
Our solution is well-suited for full-body motion capture by leveraging the full-body proxy dataset.
For better body-hand compatibility, our network further incorporates the context information from the hand part to predict more compatible wrist poses in the full-body model.
As a learning-based solution, our method for full-body motion prediction can be trained end-to-end and run in real-time during inference. Our contributions can be summarized as follows:
* To overcome the accuracy and generalization issues, we adopt 2D skeleton sequences as proxy representations and synthesize proxy data from accurate 3D rotational motions in world space. The proxy data of different body parts can also be integrated as one with full-body motions.
* For accurate and physically plausible predictions, we propose a contact-aware neural motion descent module so that our network can be aware of foot-ground contact and motion misalignment with the proxy observations.
* For full-body motion capture, we leverage the body-hand context information in our network so that the wrist poses can be more compatible with the full-body model.
* We demonstrate a real-time monocular full-body capture with plausible foot-ground contact in world space.
§ RELATED WORK
Monocular motion capture has been an active research field in recent years.
We give a brief review of the works related to ours and refer readers to <cit.> for a more comprehensive survey.
Motion Capture Datasets. Existing motion capture datasets are either captured with marker-based <cit.> or marker-less <cit.> systems.
Due to the requirement of markers or multi-view settings, the diversity of these datasets is limited in comparison with in-the-wild datasets.
To enrich the motion datasets, numerous efforts <cit.> have been dedicated to generating pseudo-ground truth labels with better alignment in the image space but do not consider the motion in world space.
On the other hand, researchers have also resorted to using synthetic data <cit.> by rendering human models with controllable viewpoints and backgrounds.
However, such synthetic datasets are either too expensive to create or have large domain gaps with real-world images.
Proxy Representations for Human Mesh Recovery. Due to the lack of annotated data and the diversity of human appearances and backgrounds, learning accurate 3D motions from raw RGB images is challenging even for deep neural networks.
To alleviate this issue, previous approaches have exploited the different proxy representations, including silhouettes <cit.>, 2D/3D landmarks <cit.>, segmentation <cit.>, and IUV <cit.>.
These proxy representations can provide guidance for the neural network and hence make the learning process easier.
However, the proxy representations simplify the observation and introduce additional ambiguity in depth and scale, especially when using proxy representations in a single frame <cit.>.
In this work, we alleviate this issue by adopting 2D skeleton sequences as proxy representations and propose to generate proxy data with accurate motions in world space.
Full-body Motion Capture. Recent state-of-the-art approaches <cit.> have achieved promising results for the estimation of body-only <cit.>, hand-only <cit.>, and face-only <cit.> models.
By combining the efforts together, these regression-based approaches have been exploited for monocular full-body motion capture.
These approaches <cit.> typically regress the body, hands, and face models by three expert networks and integrate them together with different strategies.
For instance, PIXIE <cit.> learns the integration by collaborative regression, while PyMAF-X <cit.> adopts an adaptive integration strategy with elbow-twist compensation to avoid unnatural wrist poses.
Despite the progress, it remains difficult for existing solutions to run at real-time while being accurate in world space.
In this work, we achieve real-time full-body capture with plausible foot-ground contact by introducing new data generation strategies and novel network architectures.
Neural Descent for Motion Capture. Traditional optimization-based approaches <cit.> typically fit 3D parametric models to the 2D evidence but suffer from initialization sensitivity and the failure to handle challenging poses.
To achieve more efficient and robust motion prediction, there are several attempts to leverage the learning power of neural networks for iterative refinement.
HUND <cit.> proposes a learning-to-learn approach based on recurrent networks to regress the updates of the model parameters.
Song <cit.> propose the learned gradient descent to refine the poses of the predicted body model.
Similar refinement strategies are also exploited in PyMAF <cit.> and LVD <cit.> by leveraging image features as inputs.
In our work, we propose a contact-aware neural descent module and exploit cross-attention for more effective motion updates.
Physical Plausibility of Motion Capture. Though existing monocular motion capture methods can produce well-aligned results, they may still suffer from the artifacts such as ground penetration and foot skating.
For more physically plausible reconstruction, previous works <cit.> have made attempts to leverage more accurate camera models during the learning process.
To encourage proper contact of human meshes, Rempe <cit.> propose a physics-based trajectory optimization to learn the body contact information explicitly.
HuMoR <cit.> introduces a conditional VAE to learn a distribution of pose changes in the motion sequence, providing a motion prior for more plausible human pose prediction.
LEMO <cit.> learns the proposed motion smoothness prior and optimizes with the physics-inspired contact friction term.
Despite their plausible results, these methods typically require high computation costs and are unsuitable for real-time applications.
For more effective learning of the physical constraints, there are several attempts <cit.> to incorporate the physics simulation in the learning process via reinforcement learning.
However, these methods <cit.> typically depend on 3D scene modeling due to the physics-based formulation.
In our work, we achieve real-time capture with plausible foot-ground contact in world space by designing novel networks and leveraging accurate motion supervision from the proxy data.
§ PROXY DATA GENERATION
To address the precision and generalization challenges associated with extant datasets, we leverage 2D skeleton sequences as proxy representations and synthesize sequential proxy-to-motion data utilizing precise 3D rotational motions in world space. In this section, we begin by presenting the camera and motion decoupling approach in Section <ref>. Subsequently, we introduce the synthetic data generation process in Section <ref>.
§.§ Camera and Motion Decoupling
The traditional world-to-image projection typically involves transferring the world coordinates of the object to the camera space and then projecting them to the image plane.
To simplify the process, previous methods <cit.> directly regress human poses in the camera coordinate system and feed them to a projection function.
However, such a simplified process introduces ambiguity, as the same projection function is applied to different scenes.
Motivated by previous work <cit.>, we address this issue by decomposing the camera pose and the human motion, which allows us to predict the camera pose and the human motion in world space and calculate the foot-ground contact more directly.
In the following, we adopt the classical pinhole camera model to describe our camera settings.
As shown in Fig. <ref>, the camera is fixed along the z-axis and positioned at the height of h above the ground.
The camera orientation is set to face squarely towards the front, reducing the camera pose to only two degrees of freedom: pitch α and roll ϕ.
In this setting, the camera rotation R_c and transition T_c can be calculated as following:
R_c = euler2mat(α, ϕ, 0),
T_c = -R_c^⊤· (0, 0, h)^⊤.
For body motion in world space, it can be represented with the rotational parameters θ^w and global transition t^w.
With accurate t^w, the body model can step on the ground z_w = 0 with plausible foot-ground contact.
Let the parameters of body motions in camera space be θ^c, t^c, their relationship with the world-space motion can be written as ℳ(θ^c, t^c) = R_c ·ℳ(θ^w, t^w) + T_c, where ℳ(θ, t) denotes the body model.
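As a concrete illustration, the following numpy sketch assembles R_c and T_c from the equations above and maps world-space points into camera space. The axis composition order inside euler2mat is an assumption on our part; the text only fixes the pitch/roll parameterization.

```python
import numpy as np

def euler2mat(alpha, phi, yaw=0.0):
    """Rotation from pitch alpha (about x), roll phi (about y), zero yaw.
    The composition order is an assumption; the paper does not spell it out."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cp, sp = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def world_to_camera(p_world, alpha, phi, h):
    """Map (N, 3) world-space points to camera space, camera at height h."""
    R_c = euler2mat(alpha, phi)
    T_c = -R_c.T @ np.array([0.0, 0.0, h])   # T_c = -R_c^T (0, 0, h)^T
    return (R_c @ p_world.T).T + T_c
```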
§.§ Data Synthesis and Integration
In our method, we adopt the 2D skeleton sequences as the proxy representation.
To generate proxy-to-motion pairs with accurate human kinematics, we synthesize proxy sequences based on the existing motion datasets with virtual cameras.
In the following, we describe the synthesis and integration of different types of labels in our proxy data, including body motions, hand gestures, and contact labels.
Body proxy data. We adopt the motion sequences from the AMASS dataset <cit.> to generate proxy-to-motion pairs for the body part.
The AMASS dataset is a large-scale body motion sequence dataset that comprises 3772 minutes of motion sequence data, featuring diverse and complex body poses.
We downsample the motion data to 60 frames per second, resulting in 9407K frames.
Integration with hand gestures. Since the hand poses in the AMASS dataset are predominantly static, we augment the proxy data with hand poses from the InterHand <cit.> dataset, which contains 1361K frames of gesture data captured at 30 frames per second.
We employ Spherical Linear Interpolation (Slerp) to upsample the hand pose data to 40, 50, and 60 fps and randomly integrate them with the body poses in the AMASS motion sequences.
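The Slerp upsampling step can be reproduced with off-the-shelf tooling; a minimal sketch using SciPy, assuming per-joint axis-angle rotations as input (function name and data layout are ours):

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def upsample_joint(axis_angles, src_fps=30, dst_fps=60):
    """Upsample one joint's (T, 3) axis-angle sequence from src_fps to dst_fps."""
    key_times = np.arange(len(axis_angles)) / src_fps
    slerp = Slerp(key_times, Rotation.from_rotvec(axis_angles))
    query_times = np.arange(0.0, key_times[-1], 1.0 / dst_fps)
    return slerp(query_times).as_rotvec()
```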
Integration with contact label. To generate the contact label, we follow LEMO <cit.> and select 6 vertices on the foot of the SMPL and calculate the continuous contact indicators as follows:
ind_i = Sigmoid((v_max-v_i)/k_v)· Sigmoid((z_max-z_i)/k_z)
where v_i and z_i denote the velocity in the XY-plane and the height above the ground of the selected vertex. v_max and z_max are set to 0.2 m/s and 0.08 m, and the hyper-parameters k_v and k_z are set to 0.04 and 0.008.
The resulting contact label can be directly integrated into the proxy data.
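A direct transcription of the indicator above, with the stated constants; the function name and per-vertex calling convention are our own:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def contact_indicator(v_xy, z, v_max=0.2, z_max=0.08, k_v=0.04, k_z=0.008):
    """Continuous contact indicator for one foot vertex: close to 1 when the
    vertex moves slowly in the XY-plane and stays near the ground plane."""
    return sigmoid((v_max - v_xy) / k_v) * sigmoid((z_max - z) / k_z)
```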
Camera setting. We adopt the same camera setup used in existing public datasets <cit.> for virtual camera settings.
Specifically, we set the field of view from 50^∘ to 70^∘ and sample the camera pose from α∼𝒰(0^∘, 30^∘), ϕ∼𝒰(-5^∘, 5^∘) to ensure variability in camera viewpoints.
Meanwhile, we apply a random rotation γ_aug∼𝒰(0^∘, 360^∘) around the z-axis to the subject and a random y-axis displacement d_aug∼𝒰(2.5m, 6m) to cover cases where the person is near or far from the camera, further augmenting the data.
We generate pseudo 2D skeleton annotations by adding Gaussian noise Δ X ∼𝒩(0, 0.003m) to the 3D joints of the body model and then project them to the image plane.
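The sampling ranges above translate into a few lines of code; this sketch only draws the random settings (names and dictionary layout are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_augmentation():
    """Draw one virtual camera / subject augmentation as described above."""
    return {
        "fov_deg":   rng.uniform(50.0, 70.0),
        "pitch_deg": rng.uniform(0.0, 30.0),     # alpha
        "roll_deg":  rng.uniform(-5.0, 5.0),     # phi
        "yaw_deg":   rng.uniform(0.0, 360.0),    # gamma_aug, about the z-axis
        "y_disp_m":  rng.uniform(2.5, 6.0),      # d_aug
    }

def perturb_joints(joints3d, sigma=0.003):
    """Add N(0, 0.003 m) noise to 3D joints before projection."""
    return joints3d + rng.normal(0.0, sigma, size=joints3d.shape)
```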
§ METHOD
In our approach, the skeleton poses are first detected from a monocular video and then fed into our network to produce 3D motions in world space, as illustrated in Fig. <ref>.
In the following, we describe the proposed sequential proxy-to-motion learning scheme to achieve accurate and physically plausible motion prediction in world space.
§.§ Sequential Full-body Motion Recovery
At the first stage of the proposed scheme, the skeleton sequences are processed by temporal encoders and then fed into regressors for the predictions of full-body motion in world space.
Specifically, the temporal encoders are used to extract features from sequential 2D skeletons of the body and hand parts.
Following previous baselines <cit.>, we build these encoders upon temporal dilated convolutional networks.
Moreover, inspired by previous skeleton-based 3D human pose estimation <cit.>, we adopt two separate branches in the body-specific regressors to learn the global trajectory and root-relative body motions.
Specifically, the trajectory branch learns the camera parameters α, ϕ, h and the global human transition t_b, while the motion branch learns the shape parameter β_b and pose parameter θ_b of the body model.
Separating these two prediction tasks helps to decouple the global and local motions and hence produce more accurate motions in world space.
For better body-hand compatibility,
we exploit the cross-attention mechanism to facilitate the motion context sharing during the body and hand recovery.
Specifically, we first obtain the initial body and hand features from the temporal encoders and split them as the Query Q_b, Q_h, Key K_b, K_h, and Value V_b, V_h.
Then, these features will be updated by the cross-attention layer as follows:
V'_b = V_b + Softmax(Q_h K_b^⊤/√(d_k)) V_b,
V'_h = V_h + Softmax(Q_b K_h^⊤/√(d_k)) V_h.
The updated body and hand features V'_b, V'_h will then be fed into the corresponding regressors to predict pose parameters.
We demonstrate that this fusion strategy lets the two regressors benefit from each other and produce more compatible wrist poses in the full-body model.
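The residual cross-attention update above reduces to a few matrix operations; a minimal numpy sketch, assuming the body and hand feature sequences have matching lengths and dimensions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attend(Q_other, K_self, V_self):
    """V' = V + softmax(Q_other K_self^T / sqrt(d_k)) V_self."""
    d_k = K_self.shape[-1]
    attn = softmax(Q_other @ K_self.T / np.sqrt(d_k))
    return V_self + attn @ V_self

# Body features attend to hand queries, and vice versa:
# V_b_new = cross_attend(Q_h, K_b, V_b)
# V_h_new = cross_attend(Q_b, K_h, V_h)
```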
§.§ Contact-aware Neural Motion Descent
At the second stage of the proposed scheme, the coarse motion predictions will be refined to be more accurate and physically plausible with our proposed contact-aware neural motion descent module.
This module takes the foot-ground contact and misalignment status as input and produces updating human parameters during iterations.
To refine the predictions, our network needs to perceive the misalignment and the foot-ground contact status of the currently predicted motions J_i, v_i = ℳ(β_i, t_i, θ_i), where the J_i and v_i denote the 3D joints and vertices of the body model at the i-th iteration.
Since we assume the camera is steady, the estimated motion parameters β, t, θ_b in Sec. <ref> should be transformed to the world space with a reference camera pose as the initial predictions β^0, t^0, θ_b^0. In practice, we take the predicted camera pose of the first frame as the reference camera pose and then transform subsequent estimated human motions accordingly.
For the misalignment status, we can project the 3D joints on the image plane and calculate the difference between the reprojected 2D joints and the proxy observations: 𝒮_proj = Π(R_c^⊤ J_i +T_c) -J_2D.
For the contact status, we calculate the foot-ground contact based on the vertex velocity v^xy_i - v^xy_t-1 in XY-plane and the distance v^z_i to the Z-ground plane.
Moreover, we also leverage the temporal features from the input 2D skeletons to predict the contact labels ind_t, which will be used as an indicator to mask the foot-ground contact.
Then, the contact status of the current predictions can be denoted as 𝒮_contact = ind_t⊙ (v^xy_i - v^xy_t-1, v^z_i), where ⊙ denotes the mask operation.
After obtaining the contact and misalignment status, we feed them into the neural motion descent module for motion updates.
As shown in Fig. <ref>, the descent module takes the two groups of tensors as input:
i) the state group includes the current SMPL parameters β_i, t_i, θ_i, camera pose α, ϕ, h and the sequential motion context F_seq={V'_b, V'_h};
ii) the deviation group includes the current misalignment status 𝒮_proj and contact status 𝒮_contact.
A straightforward solution would be using an MLP to process these two groups of tensors.
However, the values of these two groups exhibit significant differences.
For instance, the values of the state tensors change smoothly while the values of the deviation tensors may change rapidly along with the misalignment and contact status.
Simply concatenating them as inputs introduces difficulty in the learning process.
Note that the magnitude of the deviation tensors is highly correlated with the parameter updates.
When the body model is well-aligned without foot skating or ground penetration, the values of the deviation tensors are almost zero, indicating that the refiner should output zeros to prevent further changes in the pose parameters.
Otherwise, the refiner should output larger update values for motion adjustments.
To leverage such a correlation property, we exploit a cross-attention module to build a more effective architecture.
As shown in Fig. <ref>, two fully-connect layers are leveraged to process the tensors of the state and deviation groups and produce the Query, Key, and Value for the cross-attention module.
In this way, our contact-aware neural motion descent module can effectively learn the relationship between the state and deviation groups and hence produce more accurate motion updates.
Moreover, the sequential motion context F_seq is also leveraged in our neural descent module to mitigate the depth uncertainty and improve the motion predictions.
Compared with previous work <cit.>, the proposed contact-aware neural motion descent module offers the advantage of freeing us from the heavy cost of explicit gradient calculations or the manual tuning of hyper-parameters during testing.
Furthermore, the module is capable of learning human motion priors with contact in world space from our synthetic dataset, which provides a more suitable descent direction and step to escape the local minima and achieve faster convergence.
§.§ Loss Function
In our solution, the full-body motion recovery module and the contact-aware neural motion descent module are trained separately.
Benefiting from the proxy-to-motion learning, the ground-truth full-body pose θ_b+h and human body shape β_b can be obtained for supervision from our synthetic dataset.
Overall, the objective function of motion recovery can be written as follows:
ℒ_rec = ℒ_3D + ℒ_2D + ℒ_θ + ℒ_β+ ℒ_cam + ℒ_smooth
Specifically, ℒ_3D involves 3D MPJPE loss and 3D trajectory L1 loss while ℒ_2D is the projected 2D MPJPE loss.
ℒ_θ, ℒ_β, ℒ_cam represents L1 loss between the estimated human pose, shape and camera pose to our synthetic ground truth.
ℒ_smooth is adopted from <cit.> by penalizing the velocity and acceleration between the estimation and the ground truth.
For the neural descent module, the objective loss can be written as:
ℒ_desc = ∑_k u^K-k (ℒ_rec + ℒ_contact + ℒ_ind)
ℒ_contact =∑_iind^gt_i⊙(||v^xy_i||_2 + ||v^z_i||_2)
ℒ_ind = ∑_iEntropy(ind^gt, ind^est)
where k=1,2,..., K is the iteration time and u is the decay ratio to emphasize the last iteration.
We set K=3, u=0.8 in practice.
ℒ_contact penalizes trajectory drifting, foot floating, and ground penetration. ℒ_ind is the loss between the predicted contact labels and the ground truth.
§ EXPERIMENTS
In this section, we evaluate the effectiveness of our method on both the synthetic dataset and the public datasets.
We also conduct ablation studies to demonstrate the effectiveness of each part and report the computational complexity of each module in our real-time system.
§.§ Comparison with the State of the Art
We evaluate our method on the public datasets, Human3.6M <cit.>, 3DPW <cit.>, EHF <cit.> and our synthetic dataset.
Tab. <ref> shows the comparison results of our method with state-of-the-art ones.
Note that the methods reported in Tab. <ref> include skeleton-based methods and mesh-based methods.
Although the skeleton-based methods achieve better MPJPE/P-MPJPE scores, they are limited in the ability to capture vertex-to-ground contact information and human kinematics.
Our algorithm yields comparable results to recent state-of-the-art mesh-based methods and presents several advantages such as real-time efficiency, full-body motion capture capability and plausible ground-contact constraints.
As shown in Fig. <ref>, we demonstrate that our method generalizes well in three public datasets and our captured sequences.
§.§ Ablation Studies
In this section, we conduct ablation studies using various settings to validate each component of our proposed method.
To validate the effectiveness of the proposed synthetic dataset, we adopt our network without the neural motion descent module as the Baseline method.
As reported in Tab. <ref>, we conduct ablation experiments with four distinct settings: “In-door”: training the Baseline method on Human3.6M only; “Synth”: training the Baseline method on the proposed synthetic dataset only; “Fusion”: training the Baseline method on both the Human3.6M and synthetic datasets; “Fusion+ND”: training our complete solution with the neural motion descent module on both datasets. The fusion training schedule demonstrates superior accuracy compared to training solely on the Human3.6M or synthetic dataset, as observed for both CPN-detected 2D inputs and ground-truth (GT) 2D inputs. The reason is that training solely on Human3.6M leads to overfitting due to the limited motion diversity, while training solely on the synthetic dataset may result in a domain gap between the pseudo motion pairs and dataset annotations. Furthermore, we present the results of our complete pipeline with the neural descent module, which shows improved performance.
We also conduct ablation studies of the motion context sharing scheme (“Temp”) and the contact-aware neural motion descent module (“Desc”) in Tab. <ref>.
We report different settings with measurements on body MPJPE, hand MPJPE, global MPJPE (including trajectories), ground penetration <cit.> (GP), and foot floating (FF) for comparison.
Results show that both the motion context sharing scheme and the contact-aware neural motion descent module improve the performance. Evaluations on the FF metric and the GP metric are shown in Fig. <ref>. Here, the FF metric is defined as the percentage of frames whose foot-ground distance exceeds a specific threshold.
Moreover, we present a qualitative comparison with the previous neural descent method <cit.> and the representative full-body motion recovery method PyMAF-X <cit.> in Fig. <ref>.
We demonstrate that our method can avoid unnatural body leaning and knee bending in Fig. <ref> (a) and prevent foot skating and ground penetration in Fig. <ref>(b).
§.§ Computational Complexity
In this section, we compare the inference speed of our method.
Our real-time monocular full-body capture system can be implemented on a single Laptop (Intel i7-12800HX CPU, NVIDIA RTX 3080Ti GPU).
Specifically, for the 2D pose estimator, we leverage the off-the-shelf framework Mediapipe <cit.> and re-implement it on half-precision arithmetic on the NVIDIA TensorRT platform.
We report the inference time of each module in Tab. <ref>.
§ CONCLUSION
In this paper, we present a real-time full-body motion capture method with physically plausible foot-ground contact in world space.
We leverage a sequential proxy-to-motion learning scheme and synthesize a proxy dataset of 2D skeleton sequences with accurate 3D rotational motions in world space.
For more accurate and physically plausible results, we propose a contact-aware neural motion descent module in our network.
Besides, we leverage the body-hand context information in our network so that the wrist poses can be more compatible with the full-body model.
We demonstrate a real-time monocular full-body capture system and show superior results to existing solutions with more plausible foot-ground contact in world space.
Limitations.
Similar to previous work <cit.>, it is difficult for our method to capture the body shape from sparse skeletons.
Meanwhile, though our method shows more plausible results in world space, the mesh-to-image alignment is slightly inferior to state-of-the-art image-based solutions <cit.> especially when the camera setting differs from the training data.
|
http://arxiv.org/abs/2307.03260v1
|
20230706193755
|
A Gaussian Integral Filter with Multivariate Laplace Process Noise
|
[
"Enrico M. Zucchelli",
"Brandon A. Jones"
] |
stat.AP
|
[
"stat.AP"
] |
A Gaussian Integral Filter with Multivariate Laplace Process Noise
This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-19-1-0404. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the United States Air Force.
Enrico M. Zucchelli
Aerospace Engineering and Engineering Mechanics
The University of Texas at Austin
Austin, TX, U.S.A
enricomarco@utexas.edu
Brandon A. Jones
Aerospace Engineering and Engineering Mechanics
The University of Texas at Austin
Austin, TX, U.S.A
brandon.jones@utexas.edu
This paper introduces the concept of the Gaussian integral filter (GIF), the limit of the Gaussian sum filter (GSF) as the number of mixands tends to infinity. The GIF is obtained via a combination of GSF, quadrature, and interpolation. While it is a very general concept, in this paper the GIF is used to represent multivariate Laplace (ML) distributions defining the process noise when tracking a maneuvering target. The filter is first applied to a linear three-dimensional toy problem, and then to a maneuvering target tracking problem in Earth orbit. For the more complex maneuvering target tracking problem, the filter requires only 1.4 times the computational resources of an unscented Kalman filter (UKF), while having errors up to 11 times smaller. For the same problem, the UKF slowly diverges.
maneuvering target tracking, Gaussian scale mixture, Gaussian integral filter, multivariate Laplace, continuous Gaussian mixture model
§ INTRODUCTION
Maneuvering target tracking is a challenging problem that has been widely researched for several decades <cit.>. Common approaches include equivalent process noise <cit.>, adaptive-noise methods <cit.>, variable dimension estimators <cit.>, and interacting multiple model (IMM) filters <cit.>. Most of the above mentioned methods either require fine tuning of parameters, or they adapt to the measurements, causing the approach to be non-Bayesian. A Bayesian method with an explicit transitional prior has the advantage that it can be directly implemented in a multi-target tracking filter such as the probability hypothesis density (PHD) filter <cit.> or the generalized labeled multi-Bernoulli (GLMB) filter <cit.>. In a Bayesian framework it is often convenient to use heavy-tailed distributions, such as the multivariate Laplace (ML) distribution or Student's t-distribution, to represent the maneuvers distribution <cit.>. Heavy-tailed distributions are more responsive than Gaussian distributions to sudden, large maneuvers, and are thus more robust.
An ML distribution can be described by a continuous Gaussian mixture model (CGMM), which is an infinite sum of Gaussian components; specifically, the ML distribution can be represented by a Gaussian Scale Mixture (GSM) <cit.>, which is a subclass of the CGMM.
A Gaussian Sum Filter (GSF) <cit.> is a bank of Gaussian filters working in parallel to reproduce non Gaussian distributions more faithfully than a single Gaussian filter would. Depending on the problem, GSFs may be preferred to particle filters (PFs) because they are not subject to sample impoverishment and particle depletion.
In this paper the Gaussian integral filter (GIF) is introduced, which is the limit of the GSF for when the number of components, or mixands, tends to infinity. The result is a combination of GSF with quadrature and interpolation methods over the mixands of the distribution. The GIF is a Bayesian filter that employs a CGMM representation for the prior of the state, for the process noise, for the measurement noise, or for a combination of those distributions.
While the GIF is very generic, and may be used, for example, as an alternative to Gaussian mixture splitting, this paper focuses on how it can be applied to a problem where the process noise is distributed according to an ML distribution.
Huang et al. <cit.> exploit the GSM formulation of the ML for the process noise to design a Kalman filter based on variational Bayesian methods. The filter is applied to a maneuvering target tracking problem. Wang et al. <cit.> exploit the same concept, but use the ML distribution for the measurement noise instead; the resulting filter is robust to problems where the measurements have large outliers. Both filters are limited to linear systems, are iterative, and make simplifying assumptions; in addition, they provide Gaussian posterior distributions.
There are three main contributions in this paper. First, the GIF is introduced, a filter that uses a CGMM as prior, process noise, and/or measurement noise, by a combination of GSF, interpolation, and quadrature. To the best of the authors' knowledge, there has been no direct use of a CGMM-based filter to date. Second, the ML-GIF, a GIF that employs the description of the ML as a CGMM for process noise, is described. The only approximations made are the interpolation, the quadrature, and the fact that every single mixand is kept Gaussian during propagation and update.
Third, the ML-GIF is applied to a challenging maneuvering target tracking problem. The proposed filter requires approximately 1.4 times the computational time of a UKF when using quadrature and interpolation methods. The method provides a non-Gaussian, possibly heavy-tailed (depending on observability) posterior distribution.
§ THE CONTINUOUS GAUSSIAN MIXTURE MODEL
A finite GMM is defined as follows:
p(x) = ∑_i=1^N w_i 𝒩(x;μ_i,P_i), ∑_i=1^N w_i = 1,
where N is the number of mixands, w_i is the weight of the ith mixand, μ_i and P_i are the corresponding mean and covariance, respectively.
The CGMM consists of the limit of Eq. (<ref>) when N tends to infinity. For this to be properly defined, a parameterization is required:
p(x) = ∫_a^b 𝒩(x;μ(z),P(z)) p_z(z) dz,
where a and b are the boundaries of the integral, z is the parameterization variable, and p_z(z) is the probability density function (p.d.f.) of z.
To numerically evaluate a CGMM, discretization is needed, which leads to a p.d.f. represented as a finite sum of Gaussian distributions like in (<ref>). However, at any time, an approximation to the original integral can be recovered by interpolation. It is thus possible to adaptively change the interpolation nodes and achieve arbitrary precision, as well as to sample from the original distribution.
§ SYMMETRIC ML DISTRIBUTION AS A CGMM
An ML distribution with mean μ∈ℝ^d and variance Σ∈ℝ^d× d has the following p.d.f.:
p(x) = 2/√(|2πΣ|) ((x-μ)^⊤Σ^-1(x-μ)/2)^v/2 K_v(√(2 (x-μ)^⊤Σ^-1(x-μ))),
where v=(2-d)/2, and K_v is the modified Bessel function of the second kind and of order v.
A key feature of the symmetric ML distribution is that its marginal distributions are Laplace distributions.
Sampling from an ML distribution is equivalent to sampling from a normal distribution with stochastic variance, where the variance is the variance of the ML distribution multiplied by a random variable (r.v.) distributed according to an exponential distribution with scale 1 <cit.>.
Let Y be the r.v. from the symmetric ML distribution with mean μ and variance Σ, X is the r.v. from the multivariate Gaussian distribution with mean 0 and variance Σ, and Z is the r.v. distributed according to an exponential distribution with scale 1.
Then:
Y = √(Z)X + μ.
This relationship can trivially be written as the following integral:
∫_0^∞ e^-z 1/√(|2π z Σ|) e^-1/2(x-μ)^T(z Σ)^-1(x-μ) dz,
which in turn is the following CGMM:
∫_0^∞ e^-z 𝒩(x; μ, z Σ) dz,
where the p.d.f. p_z(z) is e^-z. The equation shows that the variance of each mixand increases linearly with the parameter z.
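This representation is easy to sanity-check numerically; a small sketch drawing ML samples via Y = √Z X + μ and verifying the sample covariance:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ml(mu, Sigma, n):
    """Draw n multivariate-Laplace samples via Y = sqrt(Z) X + mu,
    with Z ~ Exp(1) and X ~ N(0, Sigma)."""
    z = rng.exponential(scale=1.0, size=n)                      # mixing variable
    x = rng.multivariate_normal(np.zeros(len(mu)), Sigma, size=n)
    return np.sqrt(z)[:, None] * x + mu

samples = sample_ml(np.zeros(3), np.eye(3), 100_000)
print(np.cov(samples.T))   # ~ Sigma, since E[Z] = 1
```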
The ML as an infinite Gaussian mixture belongs to the class of the GSMs <cit.>, defined as
p(x) = ∫_0^∞𝒩(x;μ+zβ, Σ/κ(z)) p_z(z) dz,
where β is a shape parameter and κ(·) is a positive scale function. In addition to the ML distribution, several others are known to have representations as GSMs, such as the Cauchy distribution and Student's t distribution. GSMs enjoy properties that make them more tractable than general CGMMs. After even a linear time update, however, the ML-CGMM is no longer a GSM, but a general CGMM. The previously mentioned filter by Huang et al. <cit.> approximates the transitional prior of a Gaussian distribution with ML process noise as a GSM.
§ QUADRATURE AND INTERPOLATION
Quadrature allows one to compute an approximation to the p.d.f. of the CGMM in finite time. This computation is required whenever one wants to reduce the CGMM to a single Gaussian distribution. This is different from quadrature or cubature filters such as the cubature Kalman filter (CKF) <cit.> or the Unscented Kalman Filter (UKF) <cit.>, since here the quadrature is done over an independent parameter that describes a non-Gaussian distribution. At the same time, interpolation makes it possible to obtain an approximation to the original CGMM while only saving the values at a few nodes. This way any transformation, such as a time update or measurement update, can be performed in a finite amount of time.
Interpolation is also useful to switch the number of nodes when performing different operations; for example, for astrodynamics problems the time update is generally more time consuming than the measurement update, and thus one may want to have fewer nodes for the time update and more nodes for the measurement update.
Quadrature allows one to efficiently compute the integral (<ref>). Since the integral for the ML-CGMM is improper, particular attention needs to be paid to the choice of the quadrature nodes. Gauss-Laguerre quadrature is used to numerically compute the integral
∫_0^∞ e^-z f(z) dz ≈∑_i=1^n w_i f(z_i).
In this case, f(z)=w(z) 𝒩(x; μ(z),Σ(z)).
The n nodes z_i for Gauss-Laguerre quadrature are the roots of the Laguerre polynomial L_n(x):
L_n(z) = 1/n!(d/dz-1)^n z^n,
for integer n, and the corresponding quadrature weights w_i are computed as
w^i = z_i/(n+1)^2[L_n+1(z_i)]^2.
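In practice the nodes z_i and weights w_i need not be computed by hand; NumPy ships Gauss-Laguerre rules. A quick check on ∫_0^∞ e^{-z} z dz = 1:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

n = 10
z_nodes, w = laggauss(n)      # nodes/weights for int_0^inf e^{-z} f(z) dz

print(np.sum(w * z_nodes))    # f(z) = z  ->  1.0 up to round-off
```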
It is possible to recover an approximation to the full distribution from just the values at a few nodes by interpolation. Spline interpolation is preferred here for simplicity. The interpolation nodes do not need to be the same as the Gauss-Laguerre quadrature nodes; however, one needs to choose the n_i interpolation nodes [z_1, … , z_n_i] such that any following evaluations of the interpolation do not lie outside of the interval [z_1,z_n_i].
Spline interpolation can directly be used for the means of the mixands. The interpolation of the p.d.f. p_z(z) can be done by interpolating its natural logarithm, so that positivity is ensured. The interpolated function needs then to be normalized such that its integral is equal to 1. The covariance can be interpolated in several ways. One way consists of taking the Cholesky decomposition, and interpolate it element-by-element. Another way would consist of, after taking the Cholesky decomposition, generating the σ-points as in <cit.>, and then interpolating those. In both cases positive semidefiniteness and symmetry are preserved, but some of the eigenvalues may still be zero. If the interpolation is done over the σ-points, then it can also be used to recover the means of the mixture mixands.
§ THE GIF WITH ML PROCESS NOISE
Consider the nonlinear stochastic discrete-time system with non-additive process noise
x_k = f_k(x_k-1,v_k-1),
y_k = h_k(x_k) + w_k,
where x_k is the state of the system at time k, f_k(·) is a transition function, y_k is the measurement at time k, h(·) is the measurement function, and v_k-1 and w_k are random variables.
For the case where the random variables are Gaussian, this problem can be approximately solved by an extended Kalman filter (EKF) or a UKF, which perform, respectively, local and statistical linearization. In this paper we consider the case in which v_k-1 is distributed according to an ML distribution. The case where instead w_k follows an ML distribution is not treated here, but the solution method is very similar.
First, the number of interpolation nodes n_t to use during propagation needs to be decided. Then, assuming the prior at time k-1 is Gaussian, the distribution is propagated for every node, either using the UT, like for a UKF, or by linearizing around the mean, like in the EKF:
x_k|k-1 = f(x_k-1|k-1),
P^i_k|k-1,t = F_k P_k-1|k-1 F_k^T + Γ_k (z^i_tQ) Γ_k^T,
where the superscript i, together with the subscript t, means that the value is for the ith time update node, P^i_k|k-1 is the transitional prior covariance at time k, P_k-1|k-1 is the prior covariance at time k-1, F_k is the state transition matrix, z^i_t is the value of z at node i for the time update, Q is the covariance of the ML process noise, and Γ_k is the process noise Jacobian. Note that the components' weights are not considered yet. When using EKFs and starting with a Gaussian distribution at time k-1, the computations of f(x_k-1|k-1), F_k, and Γ_k are the same for any i, since they all take the same input x_k-1|k-1. Those computations can thus be carried out just once, regardless of how many mixands are propagated, making the time update only marginally more expensive than that of a single EKF. In a similar fashion, if a bank of UKFs is used instead of a bank of EKFs, the different mixands share some of the σ-points, since the noise is uncorrelated with the state; specifically, only 2 dim(v) points need to be computed for every mixand other than the first one.
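A linear-dynamics sketch of the shared time update, emphasizing that only the process-noise term differs across the n_t nodes (function and variable names are ours):

```python
import numpy as np

def time_update_bank(x, P, F, Gamma, Q, z_nodes):
    """Propagate one Gaussian prior through linear dynamics for every node.
    The predicted mean and F P F^T are computed once; each node only adds
    its own scaled process-noise term Gamma (z Q) Gamma^T."""
    x_pred = F @ x
    base = F @ P @ F.T
    P_pred = [base + Gamma @ (z * Q) @ Gamma.T for z in z_nodes]
    return x_pred, P_pred
```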
After propagation the time update nodes are switched to the measurement update nodes. The number of mixands n_m for the measurement update is usually larger than n_t. The values at the new nodes can be found by interpolation, as discussed in Sec. <ref>. For the EKF, the only variable to be interpolated is the covariance P^i_k|k-1:
S_L_k|k-1(z) = S_L_k|k-1(z|L^1_k|k-1,t,…,L^n_t_k|k-1,t),
where L^i_k|k-1 is the lower triangular Cholesky decomposition of P^i_k|k-1, and S_y(z|M^1,…,M^n) is a function interpolating the data matrices M^1,…,M^n at nodes z^1,…,z^n, and evaluated at z=z. The transitional prior covariances at the measurement nodes are then computed:
P^i_k|k-1,m=S_L_k|k-1(z^i_m)(S_L_k|k-1(z^i_m))^T.
The measurement update for the bank of EKFs is then:
Δy = y - h(x_k|k-1),
S^i_k|k-1 = H_k P^i_k|k-1,mH_k^T + R_k,
K^i_k = P^i_k|k-1,mH_k^T(S^i_k|k-1)^-1,
x^i_k|k,m = x^i_k|k-1 + K^i_kΔy,
P^i_k|k,m = (I-K_k^iH_k)P^i_k|k-1,m,
l^i_k|k,m = 1/√(|2π S^i_k|k-1|) e^-1/2 Δy^T(S^i_k|k-1)^-1Δy
where the superscript i, together with the subscript m, means that the variable is for the ith measurement update node (the subscript m is avoided for variables that do not show up during time update or quadrature), S^i_k|k-1 is the innovation covariance, K^i_k is the gain matrix, y is the actual measurement, and l^i is the measurement likelihood. Note that, as in the time update, some computations are the same for all components: the expected measurement h(x_k|k-1) and the measurement Jacobian H_k.
Finally, quadrature is needed to obtain an actual approximation to the posterior.
To compute the posterior in finite time, it is represented as a GMM. Nonetheless, at any time, an approximation to the original CGMM can be recovered. The n_q quadrature nodes are interpolated from the measurement update nodes. Since one can never interpolate outside of the data bounds, all successive interpolation extrema must lie inside the previous ones: [z_q,1,z_q,n_q] ⊆ [z_m,1,z_m,n_m] ⊆ [z_t,1,z_t,n_t].
Interpolating functions are used again. The variables to interpolate are the lower triangular Cholesky decompositions L^i_k|k of the posterior covariances P^i_k|k, the means of the posterior distributions x^i_k|k, and the log-likelihoods log l^i:
S_L_k|k(z) = S_L_k|k(z|L^1_k|k,m, …, L^n_m_k|k,m),
s_x_k|k(z) = s_x_k|k(z|x^1_k|k,m, …, x^n_m_k|k,m),
s_l(z) = s_l(z|log l^1_m,…,log l^n_m_m),
The mean of each quadrature component is simply evaluated from the interpolation, and the covariance is computed in a similar fashion as (<ref>). The relative weights are computed as follows:
ŵ^i_k|k,q = w^i_q e^s_l(z^i_q),
where w^i_q is the quadrature weight of the ith quadrature node computed as in (<ref>). Finally, the weights are normalized:
w^i_k|k=ŵ^i_k|k/∑_j=1^n_qŵ^j_k|k.
The method can similarly be applied using UKFs instead of EKFs. In that case, 2dim(x)+1 σ-points can be reused for all components after the first, since only the process noise changes between nodes.
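Putting the per-node update and the quadrature re-weighting together, a linear-measurement sketch (the log-domain weighting guards against underflow of the likelihoods; names are ours):

```python
import numpy as np

def measurement_update_bank(x, P_nodes, H, R, y, w_quad):
    """Kalman update per node, then Gauss-Laguerre re-weighting of the mixands."""
    dy = y - H @ x                               # shared innovation
    posts, log_l = [], []
    for P in P_nodes:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        posts.append((x + K @ dy, (np.eye(len(x)) - K @ H) @ P))
        log_l.append(-0.5 * (dy @ np.linalg.solve(S, dy)
                             + np.log(np.linalg.det(2 * np.pi * S))))
    log_l = np.array(log_l)
    w = w_quad * np.exp(log_l - log_l.max())     # unnormalized weights
    return posts, w / w.sum()
```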
§ RESULTS
In the results section we first analyze a simple linear problem, and look at how the results differ depending on whether rank(H) = dim(x) or rank(H) < dim(x). Then, we look at how the filter behaves in a complex maneuvering target tracking problem in Earth orbit, with large mismatch between the expected maneuver and the actual maneuver.
§.§ Linear Case
The first application is a simple toy problem only aimed at demonstrating the behavior of the ML-GIF with an ML prior and a Gaussian measurement. No tracking is involved here.
Consider the following linear 3-dimensional problem:
x_k = F_k x_k-1 + Γ_kv_k-1,
y_k = H_k x_k + w_k,
with F_k = Γ_k = 𝕀_3× 3, and
H_k = [ 1 0 1; 0 1 0; 0 1 1 ],
and the process noise v_k-1 is distributed according to an ML with variance Q_k=𝕀_3× 3 and mean 0, and the measurement noise w_k is Gaussian with variance R_k=𝕀_3× 3 and mean 0.
At epoch k-1 the prior distribution for the state x_k-1 is set to have mean x_k-1|k-1=0 and covariance P_k-1|k-1=𝕀_3× 3. Here rank(H) = dim(x), and thus we expect the posterior to be sub-Gaussian. Assume now that the measurement y_k=[0,-15,-6] is obtained.
Fig. <ref> shows the posterior weight of each mixand versus the minimum and maximum eigenvalue of the posterior covariance, for several choices of n_m. For increasing magnitude of the eigenvalue, both plots reach what seems to be a vertical asymptote whenever n_m is set to be larger than 5. For this specific problem n_m=10 seems to be large enough, in the sense that all additional mixands for n_m>10 have very small weights. However, in case the deviation were even larger, more nodes may be necessary: the larger the number of nodes, the better a large deviation can be tracked.
Let us consider now the case where the rank of the H_k matrix is smaller than the dimensionality of x:
H_k = [ 1 0 1; 0 -1 0; 1 0 1 ].
Fig. <ref> shows plots for the same variables as the previous figure, but for the latter case. Here the largest eigenvalue of the variance increases linearly with the logarithm of the weight. As per (<ref>), if the variance increases linearly with the logarithm of p_z(z), then the distribution is, along at least one dimension, an ML. If the linear relation only occurs for some values of z larger than a certain threshold, as is the case for the largest eigenvalues, then one can state that the tail is that of an ML distribution. If this is true for at least one eigenvalue, it means that there is a decomposition such that the distribution is heavy-tailed along at least one dimension. Hence, the plot shows that the posterior is still heavy-tailed along at least one of its dimensions. In contrast, the smallest eigenvalue still reaches what seems to be an asymptote, showing that at least one of the dimensions has sub-Gaussian tails, as expected.
§.§ Low-Thrust Maneuvering Spacecraft Tracking with Sparse Observations
Low-thrust maneuvering spacecraft tracking is more challenging than traditional maneuvering target tracking problems because it involves sparse observations and continuous thrust, which keep the uncertainty large for long periods of time <cit.>.
In this subsection, we analyze the results obtained for the tracking of a low-thrust maneuvering spacecraft that is spiraling out with constant in-track thrust. After the scenario description, the results are analyzed for the case where the GIF's nodes are kept constant between time update, measurement update, and quadrature. Then, different combinations of time update nodes and measurement update nodes are tested. In all cases, a bank of UKFs is used, and the integral of mean and covariance is computed after every measurement: the posterior state is always reduced to a Gaussian distribution. The propagation is performed with 19 σ-points, because the state has 6 dimensions and the process noise has 3 dimensions. After the propagation is carried out for the first mixand, all other mixands only need 6 σ-points to be propagated, as the other 13 are shared among all mixands, since they do not include the process noise. Hence, propagation time for 10 nodes only takes about 4 times the computational resources of a single UKF.
The only forces in play in this scenario are the central gravity, perturbation due to J_2, and thrust:
a = -μ/r^3r +a_J_2+T,
where μ is the gravitational parameter of Earth, r is the [x,y,z] position of the spacecraft, T is the thrust, and a_J_2 is the acceleration due to J_2:
a_J_2,x = -3/2μ J_2 R_e^2/r^5(1-5 z^2/r^2) x,
a_J_2,y = -3/2μ J_2 R_e^2/r^5(1-5 z^2/r^2) y,
a_J_2,z = -3/2μ J_2 R_e^2/r^5(3-5 z^2/r^2) z,
where R_e is the Earth's Equatorial radius, and J_2 is the coefficient of degree 2 and order 0 of the spherical harmonics expansion describing Earth's gravity field. The thrust is treated by the filter as the random variable v_k-1 from (<ref>), distributed as an ML. The initial conditions are distributed according to
x_0|0 = [ 0 km 7,000.000 km 0 km ],
ẋ_0|0 = [ 5,335.865 m/s 0 m/s 5,335.865 m/s ],
P_0|0 = [ 100 𝕀_3× 3 m 0; 0 0.1 𝕀_3× 3 m/s ]^2.
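A minimal sketch of the dynamics above; the physical constants are standard published values, not taken from the paper:

```python
import numpy as np

MU = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2 (standard value)
RE = 6378137.0         # Earth's equatorial radius, m (standard value)
J2 = 1.08262668e-3     # J2 coefficient (standard value)

def accel(r_vec, thrust):
    """Central gravity + J2 perturbation + thrust, all in m/s^2."""
    x, y, z = r_vec
    r = np.linalg.norm(r_vec)
    a_grav = -MU / r**3 * r_vec
    c = -1.5 * MU * J2 * RE**2 / r**5
    a_j2 = c * np.array([(1 - 5 * z**2 / r**2) * x,
                         (1 - 5 * z**2 / r**2) * y,
                         (3 - 5 * z**2 / r**2) * z])
    return a_grav + a_j2 + np.asarray(thrust)
```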
One radar measurement is performed every 10,000 s, which is a little less than twice the initial orbital period. To keep the scenario simple, the measurement is simulated as coming from the center of the Earth, and consists of range ρ, range-rate ρ̇, right ascension α, and declination δ. The measurement error variance is
R=(diag[ 3 m 0.03 m/s 0.015 deg 0.015 deg ])^2.
The measurement model provides direct information on the position with a standard deviation of approximately 2.5 km, whereas only one dimension of the velocity is observed at a time. This makes the problem unobservable without a prior.
The spacecraft accelerates with continuous thrust of 300 μm/s^2 in the along-track direction, spiraling out. The magnitude and direction of the thrust are unknown to the filter. The filter assumes that the acceleration is constant between two successive observations, but that it can change after any measurement; moreover, it has no memory of the previous thrust profile, to maximize responsiveness. In this scenario the filter assumes that the standard deviation of the thrust is 10 μm/s^2, 30 times smaller than the actual one, to stress the capability of the ML-GIF when the target's acceleration magnitude is unknown. All computations were performed in Matlab, with a single thread of a 2.8 GHz Quad-Core Intel Core i7 processor.
§.§.§ Constant Nodes
For this case the nodes used for time update, measurement update, and quadrature are the roots of the Laguerre polynomial of order 10. Using a lower number of nodes leads to situations where the highest weighed quadrature component is also the one with the largest initial variance, causing the filter to miss relevant portions of the distributions. The computational time over the 50 runs is 1,532 s.
Fig. <ref> shows the error in position and velocity obtained over 50 Monte Carlo trials, together with the average 3σ filter uncertainty. The error shows a bias, different at every measurement epoch, caused by the fact that the constant thrust introduces a systematic error in the model. About 1.72% of the state estimates fall outside of the predicted 3σ bounds. As the posterior resembles a Laplace distribution along at least one of the dimensions, as implied by Fig. <ref>, around 1.5% of estimates are expected to be outside the 3σ bounds. While the frequency is slightly larger, this is acceptable considering the fact that a large systematic error is involved. Moreover, note that a majority of large deviations occur during the first few estimates, when the filter is still adjusting to the initial variance. Even though from the plot it looks like the uncertainty increases in the beginning, the determinant of the variance actually decreases, because correlation between the states is introduced by the measurements and the dynamics. This is a known occurrence for orbital problems starting with diagonal covariance matrices <cit.>.
The position RMSE over all runs and epochs is 1,037 m, and the velocity RMSE is 1.096 m/s. As a reference, for this problem, after just the first observation the position and velocity of the accelerating satellite differ from those of a ballistic satellite by, respectively, 44 km and 48 m/s.
To compare, Fig. <ref> shows the performance of a single UKF with same process noise variance as the ML-GIF. The RMSE is 8,625 m in position and 9.059 m/s in velocity, and 99.4% of the state estimates fall outside of the 3σ bounds. From the plot, one can clearly deduce that the Gaussian filter is diverging. The computational time required by the single UKF is 377 s.
§.§.§ Interpolated Nodes
The same problem is now solved by interpolating the nodes between time and measurement update. For this case, the time update nodes differ from the measurement update nodes, but the measurement update nodes are chosen to be the same as the final quadrature nodes. Measurement update is not computationally demanding for this problem, and therefore there is no need to change nodes between measurement and quadrature. The first and last propagation nodes are always the same as the first and last chosen update nodes:
z_t,1=z_m,1, and z_t,n_t=z_m,n_m. The time update nodes in between are spaced linearly on a quadratic scale. Note that, for n_t=2 and n_t=3, spline interpolation is not possible, and linear and quadratic interpolation are used instead, respectively. The measurement update nodes are Gauss-Laguerre quadrature nodes, so that quadrature can be operated directly over the computed mixands.
No plots are shown for these cases, because the results all look qualitatively very similar to the previous case. Table <ref> summarizes RMSE and computational time for every analyzed combination of n_m=n_q and n_t. All combinations are evaluated over the same 50 Monte Carlo trials.
The error introduced by the interpolation causes a difference in performance between the filters. Since the cases with 2 and 3 time update nodes use a different interpolation technique, namely linear and quadratic, instead of spline, it is impossible to conclude whether the difference in performance is caused by the different interpolation techniques or by the number of nodes. For the same number of time update nodes, adding measurement nodes improves both accuracy and statistical consistency. Such improvement is smaller when going from 15 to 25 measurement nodes, likely because the acceleration of 30 standard deviations is captured well enough by 15 nodes. As expected, the main driver of the computational cost is the number of propagation nodes. The ML-GIF with n_t=2 and n_m=n_q=25 takes 1.4 times the computational resources of a single UKF, and performs better than the ML-GIF without interpolation with n_t=n_m=n_q=10, at little more than one third the computational cost.
§ CONCLUSIONS
This paper introduces the GIF, the limit for the GSF when the number of mixands tends to infinity. The GIF is computed numerically by building on the framework of a GSF with quadrature and interpolation. Differently from a normal GSF, an approximation to the corresponding continuous mixture can always be obtained by interpolation. The interpolation can be used to reduce or increase the number of discretization nodes, or to sample from the continuous distribution. While the GIF can be used for a variety of applications, this paper demonstrates the case in which an ML distribution is described as a CGMM, and used to represent the process noise of a maneuvering target. The resulting filter is able to discern whether the posterior distribution is heavy-tailed or not. The filter is successful in a simulated scenario consisting of a tracking problem with sparse observations where a satellite maneuvers with an acceleration that is 30 times the expected standard deviation. A Gaussian filter with same process noise variance diverges. The UKF-ML-GIF requires less than 1.5 times the computational cost of a UKF.
00
li_2003 X. R. Li, and V. P. Jilkov, “Survey of maneuvering target tracking. Part I. Dynamic models”, IEEE Trans. on Aerosp. and Electron. Syst., vol. 39, no. 4, pp. 1333–1364, 2003.
li2001survey X. R. Li, and V. P. Jilkov, “Survey of maneuvering target tracking. Part II. Ballistic target models”, in Proc. of Signal and Data Processing of Small Targets (SPIE), pp. 559–581, 2001.
li2001bsurvey X. R. Li, and V. P. Jilkov, “Survey of maneuvering target tracking. Part III. Measurement models”, in Proc. of Signal and Data Processing of Small Targets , pp. 423–446, 2001.
li2002survey X. R. Li, and V. P. Jilkov, “Survey of maneuvering target tracking. Part IV. Decision-based methods”, in Proc. of Signal and Data Processing of Small Targets , pp. 511–534, 2002.
efe1998maneuvering M. Efe, and D. P. Atherton, “Maneuvering target tracking with an adaptive Kalman filter”, in Proc. of the 37th IEEE Conf. on Decis. and Control, pp. 737–742, 1998.
gholson1977maneuvering N. H. Gholson, and R. L. Moose, “Maneuvering target tracking using adaptive state estimation”, IEEE Trans. on Aerosp. and Electron. Syst., vol. 13, no. 3, pp. 310–317, 1977.
bar1982variable Y. Bar-Shalom, and K. Birmiwal, “Variable dimension filter for maneuvering target tracking”, IEEE Trans. on Aerosp. and Electron. Syst., vol.18, no. 5, pp. 621–629, 1982.
goff2015orbit G. M. Goff, J. T. Black, and J. A. Beck, “Orbit estimation of a continuously thrusting spacecraft using variable dimension filters”, J. Guid. Control and Dyn., vol. 38, no. 12, pp. 2407–2420, 2015.
li_2005 X. R. Li, and V. P. Jilkov, “Survey of maneuvering target tracking. Part V. Multiple-model methods”, IEEE Trans. on Aerosp. and Electron. Syst., vol. 41, no. 4, pp.1255–1321, 2005.
zucchelli2020tracking E. M. Zucchelli, Z. McLaughlin, and B. A. Jones, “Tracking maneuvering targets with multi-fidelity interacting multiple model filters”, in Proc. of the Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), Maui, HI, September 2020.
clark2006gmD. E. Clark, K. Panta, and B.-N. Vo, “The GM-PHD filter multiple target tracker”, in Proc. of the Int. Conf. on Inform. Fusion, Florence, Italy, July 10—13, 2006.
vo2016efficientB.-N. Vo, B.-T. Vo, and H. G. Hoang, “An efficient implementation of the generalized labeled multi-Bernoulli filter”, IEEE Trans. on Signal Process., vol. 65, no. 8, pp. 1975–1987, 2016.
yun2022generalized S. Yun, N. Ravago, B. L. Reifler, R. Zanetti, and B. A. Jones, “Generalized labeled multi-Bernoulli filter with kernel-based ensemble Gaussian mixture filtering for orbit determination with sparse data”, in Proc. of the Advanced Maui Optical and Space Surveillance Technologies Conference (AMOS), Maui, HI, September 2022.
roth2013student M. Roth, E. Özkan, and F. Gustafsson, “A Student’s-t filter for heavy tailed process and measurement noise”, in Proc. IEEE Int. Conf. Acoust. Speech Signal Process (ICASSP), May 2013, pp. 5770–5774.
huang2016robust Y. L. Huang, Y. G. Zhang, N. Li, and J. Chambers, “A robust Gaussian approximate fixed-interval smoother for nonlinear systems with heavy-tailed process and measurement noises”, IEEE Signal Process. Lett., vol. 23, no. 4, pp. 468–472, Apr. 2016.
boris2008scale S. T. Boris Choy, and J. S. K. Chan, “Scale mixtures distributions in statistical modelling”, Australian & New Zealand Journal of Statistics, vol. 50, no. 2, pp. 135–146, 2008.
gsf H. W. Sorenson, and D. L. Alspach, “Recursive Bayesian estimation using Gaussian sums”, Automatica, vol. 7, pp. 465–479, 1971.
huang2017robust Y. Huang, Y. Zhang, P. Shi, Z. Wu, J. Qian, and J. A. Chambers, “Robust Kalman filters based on Gaussian scale mixture distributions with application to target tracking", IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol.49, no. 10, pp. 2082–2096, 2017.
wang2021novel G. Wang, C. Yang, and X. Ma, “A novel robust nonlinear Kalman filter based on multivariate Laplace distribution”, IEEE Transactions on Circuits and Systems II: Express Briefs”, vol. 68, no. 7, pp. 2705–2709, 2021.
kotz2001laplace S. Kotz, T. Kozubowski, and K. Podgorski, “The Laplace distribution and generalizations: a revisit with applications to communications”, Economics, Engineering, and Finance, vol. 183, 2001.
cqf I. Arasaratnam, and S. Haykin, “Cubature Kalman filters”, IEEE Trans. on Automatic Control, vol. 54, no. 6, pp 1254–1269, 2009.
julier1997unscented S. J. Julier, and J. K. Uhlmann, “A new extension of the Kalman filter to nonlinear systems”, in Proc. of AeroSense: The 11th Int. Symp. on Aerospace/Defence Sensing, Simulation and Controls, 1997.
sarkka S. Särkkä,
“Bayesian filtering and smoothing”, Cambridge Univeristy Press, 2013.
kelecy2010detection T. Kelecy, and J. K. Moriba, “Detection and orbit determination of a satellite executing low thrust maneuvers”, Acta Astronautica, vol. 66, no. 5-6, pp 798–809, 2010.
woodburn J. Woodburn, and J. Ramrath, “Generation of initial error covariance”, in Proc. AAS/AIAA Space Flight Mechanics Meeting, Williamsburg, VA, January 2015.
entry_id: http://arxiv.org/abs/2307.00942v1
published: 20230703113322
title: On the stochastic inventory problem under order capacity constraints
authors: Roberto Rossi, Zhen Chen, S. Armagan Tarim
primary_category: math.OC
categories: math.OC
We consider the single-item single-stocking location stochastic inventory system under a fixed ordering cost component. A long-standing problem is that of determining the structure of the optimal control policy when this system is subject to order quantity capacity constraints; to date, only partial characterisations of the optimal policy have been discussed. An open question is whether a policy with a single continuous interval over which ordering is prescribed is optimal for this problem. Under the so-called “continuous order property” conjecture, we show that the optimal policy takes the modified multi-(s,S) form. Moreover, we provide a numerical counterexample in which the continuous order property is violated, and hence show that a modified multi-(s,S) policy is not optimal in general. However, in an extensive computational study, we show that instances violating the continuous order property are extremely rare in practice, and that the plans generated by a modified multi-(s,S) policy can therefore be considered, for all practical purposes, optimal. Finally, we show that a modified (s,S) policy also performs well in practice.
Keywords: inventory; stochastic lot sizing; order capacity; modified multi-(s,S) policy.
§ INTRODUCTION
This study focuses on one of the fundamental models in inventory management <cit.>: the periodic review single-item single-stocking location stochastic inventory system under nonstationary demand, complete backorders, and a fixed ordering cost component.
By introducing the concept of K-convexity, <cit.> proved, under mild assumptions, that the optimal control policy takes the well-known (s,S) form: if the inventory level falls below the reorder point s, one should place an order and raise inventory up to level S; otherwise, one should not order.
Compared to this classical setting, in which the order quantity is unconstrained, the capacitated version of the stochastic inventory problem is inherently harder, both structurally and computationally. This work is concerned with this variant of the problem.
If the fixed ordering cost is absent, but ordering capacity constraints are enforced, a so-called modified base stock policy is optimal for both the finite and infinite horizon cases <cit.>.
While in a classical base stock policy one simply orders up to S, in a modified base stock policy, when the inventory level falls below S, one should order up to S, or as close to S as possible, given the ordering capacity. The classical base stock policy is thus “modified” to embed order saturation.
In the presence of a positive fixed ordering cost, <cit.> was the first to investigate the influence of capacity constraints on the structure of the optimal control policy. In analogy to the aforementioned modified base stock policy, this author conjectured that an optimal strategy may feature a so-called modified (s,S) structure: if the inventory level is greater than or equal to s, do not order; otherwise, order up to S, or as close to S as possible, given the ordering capacity.
Unfortunately, both <cit.> and <cit.> provided counterexamples that ruled out the optimality of a modified (s,S) policy. However, <cit.> proved that, under stationary demand and a finite horizon, the optimal policy features a so-called X-Y band structure: when initial inventory level is below X, it is optimal to order at full capacity; when initial inventory level is above Y, it is optimal to not order.
<cit.> introduced CK-convexity, a generalisation of Scarf's K-convexity; by leveraging this property, they extended the analysis in <cit.> and further characterised the optimal policy by identifying four regions: in two of these regions the optimal policy is completely specified, while it is only partially specified in the other two.
<cit.> discussed further properties of the optimal ordering policy when the inventory level falls within the X-Y band, and devised an efficient algorithm to compute the optimal policy parameters.
<cit.> extended the analysis in <cit.> and proved that the optimal policy continues to exhibit the X-Y band structure under an infinite horizon; moreover, they proved that the width of the X-Y band is no more than the capacity.
<cit.> investigated the case in which the fixed ordering cost is large relative to the variable cost of a full order; this assumption allowed them to restrict their analysis to full-capacity orders; under this setting they showed that the optimal policy is a threshold policy: if the inventory level falls below the threshold s, issue a full-capacity order; otherwise, do not order.
Finally, <cit.> developed an approximation algorithm with worst-case performance guarantee.
As mentioned in <cit.>, when order quantity capacity constraints are enforced, only some partial characterization of the structure of the optimal control policy is available in the literature. To the best of our knowledge, the problem of determining the structure of the optimal policy of the capacitated stochastic inventory problem remains open. A long-standing open question in the literature, originally posed by <cit.>, is whether a policy with a single continuous interval over which ordering is prescribed is optimal for this problem. This is the so-called “continuous order property” conjecture, which was later also investigated by <cit.>. To the best of our knowledge, to date this conjecture has never been confirmed or disproved. This gap motivates the present study.
We make the following contributions to the literature on stochastic inventory control.
* In light of the results presented in <cit.>, we show how to simplify the optimal policy structure presented by <cit.>. Moreover, we extend the discussion in <cit.> and provide a full characterisation of the optimal policy for instances for which the continuous order property holds. In particular, we show that the optimal policy takes the modified multi-(s,S) form.
* We provide a numerical counterexample in which the continuous order property is violated. This closes a fundamental and long-standing question in the literature: a policy with a single continuous interval over which ordering is prescribed is not optimal in general. Since generating similar counterexamples is far from trivial, in our Appendix we illustrate the analytical insights we relied upon to generate such rare instances.
* In an extensive computational study, we show that instances violating the continuous order property are extremely rare in practice, and that a modified multi-(s,S) ordering policy can therefore be considered, for all practical purposes, optimal. Moreover, the number of reorder-point/order-up-to-level pairs that this policy features in each period is generally very low (at most 6 in our study), which means that operating the policy in practice will not be too cumbersome for a manager. Finally, we show that a well-known heuristic policy that has been known for decades, the modified (s,S) policy <cit.>, also performs well in practice.
The rest of this paper is organised as follows.
In Section <ref>, we introduce the well-known stochastic inventory problem as originally discussed in <cit.>.
In Section <ref>, we extend the problem description to accommodate order quantity capacity constraints.
In Section <ref> we summarise known properties of the optimal policy from the literature.
In Section <ref> we introduce the so-called “continuous order property,” which has been previously conjectured in the literature, and illustrate the structure that the optimal policy would take if this property were to hold.
In Section <ref> we present a numerical counterexample in which the continuous order property is violated.
In Section <ref> we illustrate results of our extensive computational study aimed at showing that instances violating the continuous order property are extremely rare in practice, that a modified multi-(s,S) ordering policy is optimal from a practical standpoint, and that a modified (s,S) ordering policy is near-optimal.
In Section <ref> we draw conclusions.
§ PRELIMINARIES ON THE (S,S) POLICY
The rest of this work is concerned with a single-item single-stocking point inventory control problem. A finite planning horizon of n discrete time periods, which are labelled in reverse order for convenience, is assumed. Period demands are stochastic, d_t in period t, with known probability density and cumulative distribution functions f_t and F_t, respectively. The cost components that are taken into account include: the ordering cost c(x) for placing an order for x units; the inventory holding cost h for any excess unit of stock carried over to next period; and the shortage cost p that is incurred for each unit of unmet demand in any given period. Unmet demand is backordered. Without loss of generality, it is assumed that there is no lead-time and deliveries are instantaneous.
Let x represent the pre-order inventory level, and C_n(x) denote the minimum expected total cost achieved by employing an optimal replenishment policy over the planning horizon n,…,1; then one can write
C_n(x)≜min_x≤ y{c(y-x)+L_n(y)+∫_0^∞C_n-1(y-ξ)f_n(ξ) dξ},
where C_0≜0 and
L_n(y)≜∫_0^y h(y-ξ)f_n(ξ) dξ + ∫_y^∞ p(ξ-y)f_n(ξ) dξ.
Following <cit.>, it is assumed that the ordering cost takes the form
c(x) ≜
  0        if x = 0,
  K + vx   if x > 0.
For convex L_n(y), <cit.> proved that the optimal policy takes the (s,S) form, and thus features two policy control parameters: s and S. In the (s,S) policy, an order of size S-x is placed if and only if the pre-order inventory level is x<s.
More specifically, <cit.> introduced the concept of K-convexity (Definition <ref>).
Let K≥ 0, g(x) is K-convex if for all x, a>0, and b>0,
(K+g(x+a)-g(x))/a≥ (g(x)-g(x-b))/b;
and proved that G_n(y) is K-convex, where
G_n(y)≜ vy+L_n(y)+∫_0^∞C_n-1(y-ξ)f_n(ξ) dξ.
This observation implies that the (s,S) policy is optimal, and the policy parameters s and S satisfy
G_n(s)=G_n(S)+K.
Note that when the order quantity is not subject to capacity constraints, S incidentally coincides with the global minimizer of G_n(y). In what follows, we will see that this may not be the case when a capacity constraint is enforced on the order quantity.
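As a concrete illustration, the relation G_n(s)=G_n(S)+K pins down the policy parameters once G_n has been tabulated. The following is a minimal sketch, assuming an integer state grid x_grid and a tabulated array G (both hypothetical names), and assuming G_n is decreasing then increasing as in the K-convex case.

import numpy as np

def s_S_from_G(G, x_grid, K):
    """Recover (s, S) from a tabulated G_n via G_n(s) = G_n(S) + K."""
    i_S = int(np.argmin(G))          # S: global minimizer (uncapacitated case)
    S = x_grid[i_S]
    # s: largest level below S at which ordering is still worthwhile
    below = np.where(G[:i_S] >= G[i_S] + K)[0]
    s = x_grid[below[-1]] if below.size else x_grid[0]
    return s, S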
§ CAPACITATED ORDERING
The stochastic inventory problem investigated in <cit.> assumes that order quantity Q in each period can be as large as needed. In practice, one may want to impose the restriction that 0≤ Q ≤ B, where B is a positive value denoting the maximum order quantity in each period.
We generalise C_n(x) and G_n(x) to reflect capacity restrictions
C_n(x)≜min_x≤ y ≤ x+B{c(y-x)+L_n(y)+∫_0^∞ C_n-1(y-ξ)f_n(ξ) dξ};
G_n(y)≜ vy+L_n(y)+∫_0^∞ C_n-1(y-ξ)f_n(ξ) dξ.
Finally, we present a useful result that will be used in the coming sections.
G_n(x) is coercive.
The limiting behaviour of G_n(x) can be characterized as lim_x→∞ G'_n(x) = nh and
lim_x→ -∞ G'_n(x) = -np,
and from the fundamental theorem of calculus it follows that G_n(∞)=G_n(-∞)=∞.
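For concreteness, the capacitated recursion above can be tabulated by plain backward induction. The following is a minimal sketch, assuming integer-valued demand given as per-period (values, probabilities) pairs in time order, and fixed state-space boundaries of the kind used later in the computational study; it is an illustration, not the authors' implementation.

import numpy as np

def capacitated_sdp(pmfs, K, v, h, p, B, x_min, x_max):
    """Tabulate C_t(x) by backward induction under order capacity B."""
    xs = np.arange(x_min, x_max + 1)
    C_next = np.zeros(len(xs))                    # C_0 = 0
    for d, pd in pmfs:                            # one (values, probs) pair per period
        d = np.asarray(d)
        C = np.full(len(xs), np.inf)
        for i, x in enumerate(xs):
            for Q in range(0, min(B, x_max - x) + 1):   # 0 <= Q <= B
                y = x + Q
                eol = y - d                       # end-of-period inventory levels
                L = np.sum(pd * (h * np.maximum(eol, 0) + p * np.maximum(-eol, 0)))
                # states outside the grid are clipped (fixed-boundary approximation)
                j = np.clip(eol - x_min, 0, len(xs) - 1).astype(int)
                cost = (K + v * Q if Q > 0 else 0.0) + L + np.sum(pd * C_next[j])
                C[i] = min(C[i], cost)
        C_next = C
    return xs, C_next                             # C_n over the state grid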
§ REVIEW OF KNOWN PROPERTIES OF THE OPTIMAL POLICY
We next introduce “(K,B)-convexity (i)” for a function g <cit.>. (This was originally called strong CK-convexity in <cit.>; however, in line with <cit.>, in the present work we use the letter C to denote the cost function and the letter B for the ordering capacity, hence the concept has been renamed (K,B)-convexity.)
Let K≥ 0, B≥ 0, g is (K,B)-convex (i) if it satisfies
(K+g(x+a)-g(x))/a≥ (g(y)-g(y-b))/b
for 0<a≤ B, 0<b≤ B, and y≤ x.
Consider a planning horizon of n=4 periods, and a demand d_t distributed in each period t=1,…,n according to a Poisson law with rate λ_t∈{20, 40, 60, 40}. Other problem parameters are K=100, h=1 and p=10; to better conceptualise the example we let v=0.
In Fig. <ref> we plot G_n(y) and illustrate the concept of (K,B)-convexity (i) for the case in which B=65.
If G_n (resp. C_n) is (K,B)-convex (i) and it is optimal to place an order at x_0, then G_n(y) (resp. C_n(y)) is nonincreasing for y≤ x_0.
Since G_n is (K,B)-convex (i), if it is optimal to place an order at x_0, say an order of a units, then 0≥(K+G_n(x_0+a)-G_n(x_0))/a, and G_n is nonincreasing for y≤ x_0, since
0≥(K+G_n(x_0+a)-G_n(x_0))/a≥ (G_n(y)-G_n(y-b))/b,
for y≤ x_0 and 0<b≤ B. The proof for C_n is identical.
If G_n is (K,B)-convex (i), there exists a pair of values S_m and s_m such that
s_m≜sup{x|C_n(x)=G_n(x)-vx}
is the maximum inventory level at which it is optimal to place an order,
and S_m≜ s_m+a, where 0<a≤ B is the order quantity at s_m.
Let x_0 be any point at which it is optimal to order, say, a units, 0< a ≤ B.
G_n(y) is a nonincreasing function for y≤ x_0 (Lemma <ref>).
This result implies that there must exist an upper bound on inventory level beyond which no ordering is optimal.
Otherwise G_n(y) would be a nonincreasing function for all y, which contradicts Lemma <ref>.
Let K≥ 0, B≥ 0, g is (K,B)-convex (ii) if it satisfies
(K+g(x+a)-g(x))/a≥ (K+g(y)-g(y-B))/B
for 0<a≤ B and y≤ x.
In Fig. <ref> we plot C_n(y) and illustrate the concept of (K,B)-convexity (ii) for our numerical example.
g is (K,B)-convex if it satisfies (K,B)-convex (i) and (K,B)-convex (ii).
G_n(x) and C_n(x) are (K,B)-convex.
Proofs are available in <cit.>.
Originally in <cit.>, and then by means of the concept of (K,B)-convexity (ii) in <cit.>, the literature established the existence of a level Y≜ s_m beyond which it is not optimal to order, and of another level X≜ Y-B below which it is optimal to order up to capacity. The optimal policy therefore features a so-called “X-Y band” structure.
If G_n is (K,B)-convex (ii), it is optimal to order up to capacity at any y≤ s_m-B.
Let x_0 be any point at which it is optimal to order something. By (K,B)-convexity (ii),
0>(K+G_n(x_0+a)-G_n(x_0))/a≥ (K+G_n(y)-G_n(y-B))/B,
for all y≤ x_0. Thus, 0>K+G_n(y)-G_n(y-B), because G_n is nonincreasing for y≤ x_0. Hence, it is optimal to order up to capacity at any y≤ x_0 - B.
<cit.> further characterised the structure of the optimal policy within the X-Y band. In particular, they showed that
C_n(x) =
  G^B_n(x)                                       if x < min{s'-B, s};
  α min{-vx+G_n(x), G^B_n(x)} + (1-α) G^S_n(x)   if min{s'-B, s} ≤ x < max{s'-B, s};
  min{-vx+G_n(x), G^S_n(x)}                      if max{s'-B, s} ≤ x ≤ s';
  -vx+G_n(x)                                     if x > s'
where
G^B_n(x) ≜ K - vx + G_n(x+B);
G^S_n(x) ≜ K - vx + min_x≤ y ≤ x+B G_n(y);
s ≜ inf{x | K + min_x≤ y ≤ x+B G_n(y) - G_n(x) ≥ 0};
s' ≜ max{x ≤ S_m | K + min_x≤ y ≤ x+B G_n(y) - G_n(x) ≤ 0}
and α is an indicator variable that takes value 1 if s'-s>B, and 0 otherwise.
s'-B < s ≤ s'
Observe that s_m=s', thus s ≤ s'; by Lemma <ref>, it is optimal to order up to capacity at any x≤ s_m-B;
hence C_n(x)=G^B_n(x) for x< s'-B, and C_n(x)=G^S_n(x) at x=s'-B; therefore s> s'-B.
By leveraging Lemma <ref>, it is possible to further simplify 's structure of the optimal policy as follows.
C_n(x) =
  G^B_n(x)                     if x < s_m - B;
  G^S_n(x)                     if s_m - B ≤ x < s;
  min{-vx+G_n(x), G^S_n(x)}    if s ≤ x ≤ s_m;
  -vx+G_n(x)                   if x > s_m
Observe that s_m=s'; because of Lemma <ref>, it is clear that s_m-s≤ B and α=0.
§ THE MODIFIED MULTI-(S,S) POLICY
Let x_0 be an inventory level at which it is optimal to place an order, C_n is said to have the continuous order property if it is optimal to place an order at y, for all y<x_0.
If C_n has the continuous order property, {x|C_n(x)-(G_n(x)-vx)<0} is a convex set.
If C_n has the continuous order property, then in the policy structure above s=s'; hence for all x≤ s' it is optimal to order, that is, C_n(x)-(G_n(x)-vx)≤ 0, and for all x>s' it is optimal not to order, that is, C_n(x)-(G_n(x)-vx)> 0; hence {x|C_n(x)-(G_n(x)-vx)<0} is a convex set.
In Fig. <ref> we illustrate Lemma <ref> for our numerical example, which incidentally satisfies the continuous order property.
Consider C_n as defined in Eq (<ref>), let this function be (K,B)-convex, and assume that the continuous order property holds. When inventory falls below the reorder threshold s_m, defined in Lemma <ref>, the optimal policy takes the following form: at the beginning of each period, let x be the initial inventory, the order quantity Q is computed as
Q =
  min{S_k - x, B}   if s_{k-1} < x ≤ s_k,
  0                 if x > s_m;
where k=1,…,m and s_0=-∞. In essence, the policy features m reorder thresholds s_1<s_2<…<s_m and associated order-up-to-levels S_1<S_2<…<S_m; at the beginning of each period, if inventory drops between reorder thresholds s_{k-1} and s_k, it is optimal to order a quantity Q=min{S_k-x, B}. For convenience, we refer to the case Q=B as saturated ordering, and to the case 0<Q<B as unsaturated ordering. We shall name this control rule the modified multi-(s,S) policy, or (s_k,S_k) policy in short. This policy structure was also described in <cit.>; however, that work did not establish a relation between the continuous order property and the optimality of this policy.
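Operationally, the control rule above is straightforward. A minimal sketch follows, assuming the thresholds s_1<…<s_m and levels S_1<…<S_m have already been extracted from the SDP tables (all names hypothetical).

def multi_sS_order(x, s, S, B):
    """Order quantity under a modified multi-(s,S) policy.

    s, S: ascending lists of reorder points and order-up-to-levels."""
    m = len(s)
    if x > s[m - 1]:                 # above the largest reorder point: do not order
        return 0
    for k in range(m):               # find k such that s_{k-1} < x <= s_k (s_0 = -inf)
        if x <= s[k]:
            return min(S[k] - x, B)  # order up to S_k, saturated at capacity B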
Consider S_m and s_m as defined in Lemma <ref>, and let S^*≜min_y G_n(y),
* S_m≤ S^*;
* G_n(S_m)≤ G_n(x) for x<S_m;
* G_n(s_m)> G_n(x) for s_m<x≤ S_m.
(a) If s_m≥ S^*-B, then we must necessarily order up to S^* as no point dominates a global minimum.
If s_m< S^*-B, then we do not have sufficient capacity to reach S^*, hence the optimum order quantity will be a value a≤ B; and from s_m we will order up to a point S_m≜ s_m+a≤ S^*.
(b) Assume, ex absurdo, G_n(S_m)> G_n(S) for some S such that s_m<S<S_m; then from s_m it would not be optimal to order up to S_m, which contradicts Lemma <ref>.
(c) Assume, ex absurdo, G_n(s_m)≤ G_n(s) for some s such that s_m<s≤ S_m; then from s it would be optimal to order up to S_m, this contradicts the fact that s_m is the maximum inventory level at which it is optimal to place an order (Lemma <ref>).
Observe that S_m is not necessarily a minimizer of G_n; this is further illustrated in Appendix <ref>.
By building upon (K,B)-convexity of G_n(x) and C_n(x), and upon the assumption that the continuous order property in Definition <ref> holds, we next establish existence of reorder thresholds s_1<s_2<…<s_m and associated order-up-to-levels S_1<S_2<…<S_m that can be used to control the system according to the optimal ordering policy in Eq. (<ref>).
A function g:𝒟→ℝ defined on a convex subset 𝒟∈ℝ is quasiconvex if, for all x,y∈𝒟 and λ∈[0,1],
g(λ x +(1-λ)y)≤max{g(x),g(y)}.
The quasiconvex envelope (QCE) g̃ of a function g on a convex subset 𝒟∈ℝ is defined as
sup{g̃(x)|g̃:ℝ→ℝ , g̃(x)≤ g(x) ∀ x∈𝒟}.
The QCE of G_n on interval (s_m,S_m) is nonincreasing.
From Lemma <ref>b and Lemma <ref>c, it follows
G_n(s_m)> G_n(x)≥ G_n(S_m) for s_m< x< S_m. Hence, the QCE of G_n on interval (s_m,S_m) is a nonincreasing function.
Consider a function g:ℝ→ℝ, a point x in the domain of g is a strict local minimum from the right if there exists δ>0 such that g(y)>g(x) for all y∈(x,x+δ].
Let [a,b], a≤ b, in the domain of a function g be a compact interval such that b is a strict local minimum from the right, g(x)=g(b) for all x∈[a,b], and g(a)=g̃(a); [a,b] nontrivially belongs to the QCE g̃ of g, if there exists δ>0 such that g(y)>g(x) and g(y)=g̃(y) for all y∈(a-δ,a]; [a,b] trivially belongs to the QCE g̃ of g, if there is no δ>0 such that g(y)=g̃(y) for all y∈(a-δ,a).
The concepts introduced in Definition <ref> are illustrated in Fig. <ref>.
Assume G_n is (K,B)-convex; this function must be increasing over some intervals in (s_m,∞), otherwise G_n(y) would be a nonincreasing function for all y, which contradicts Lemma <ref>.
Let 𝒮 be the set of all points a such that interval [a,b]∈(s_m,S_m) nontrivially belongs to the QCE of G_n.
Let x_0 be any point at which it is optimal to place an order; then the minimum of G_n over (x_0,x_0+B] is attained either at x_0+B, or at S_k for some S_k∈𝒮.
Assume that at x_0 it is optimal to place an order. Then the lowest cost is attained either by ordering up to x_0+B, or by ordering up to some local minimum S∈(x_0,x_0+B). Consider the latter case. We first show that S must belong to the QCE of G_n on (s_m,S_m). Assume, ex absurdo, that S does not belong to the QCE of G_n on (s_m,S_m); since the QCE of G_n is nonincreasing on (s_m,S_m) (Lemma <ref>), there must exist some other local minimum S′ such that s_m<S′<S and G_n(S′)<G_n(S), which contradicts the fact that at x_0 it is optimal to order up to S. Finally, assume that the interval [S,b], for some b≥ S, trivially belongs to the QCE of G_n on (s_m,S_m); this means there must exist some other local minimum S′ such that s_m<S′<S and G_n(S′)=G_n(S); hence ordering up to S is no better than ordering up to S′.
Lemma <ref> is further illustrated in a numerical example presented in Appendix <ref>.
In what follows, we list the elements of 𝒮 in increasing order as {S_1,S_2,…,S_{w-1}}, so that s_m<S_1<S_2<…<S_{w-1}<S_m; note that the set may be empty, i.e. |𝒮|≥ 0.
G_n(s_m)>G_n(S_1)>G_n(S_2)>…>G_n(S_{w-1})>G_n(S_m).
Immediately follows from the definition of 𝒮 and from Lemma <ref>.
𝒮 is empty if G_n is quasiconvex on (s_m,S_m).
If G_n is quasiconvex on (s_m,S_m), from Lemma <ref> it follows that G_n is nonincreasing on this interval, and hence it does not admit any strict local minima from the right there.
For the sake of convenience let S_w≜ S_m.
For each S_k∈𝒮 there exists a nonempty set {b | S_k<b<S_{k+1}, G_n(b)≥ G_n(S_k)}.
Consider s_m and S_m as defined in Lemma <ref>.
From Lemma <ref>, G_n(S_k)>G_n(S_{k+1}), for s_m<S_k<S_{k+1}<S_m. The result in this lemma follows from the extreme value theorem, since G_n must attain a local maximum at x^*∈(S_k,S_{k+1}), such that G_n(x^*)>G_n(S_k)>G_n(S_{k+1}).
Note that there cannot be a point S∈𝒮 such that S_k<S<S_{k+1}.
For k=1,…,w-1,
b_k≜max{b | S_k<b<S_{k+1}, G_n(b)≥ G_n(S_k)}, and
s_k≜ b_k-B;
finally, for the sake of convenience, we define s_0≜ -∞.
s_{k-1}<S_k-B<s_k
This follows from Definition <ref>.
C_n(x)=-vx+min{G_n(x),min_x≤ y ≤ x+B G_n(y) + K} takes the general form
C_n(x) =
  K - vx + G_n(x+B)   if s_{k-1} < x ≤ S_k - B,  k = 1,…,w-1;
  K - vx + G_n(S_k)   if S_k - B < x ≤ s_k,      k = 1,…,w-1;
  K - vx + G_n(x+B)   if s_{w-1} < x ≤ S_m - B;
  K - vx + G_n(S_m)   if S_m - B < x ≤ s_m;
  -vx + G_n(x)        if x > s_m.
If at x it is optimal to order a≜ S-x units, where a>0, then C_n(x)=K-vx+G_n(S). We consider each interval for x in order.
x> s_m: this case follows from Lemma <ref>, since s_m denotes an inventory level beyond which no ordering is optimal. Conversely, because of the continuous order property, for x≤ s_m it is always optimal to order;
S_m-B< x ≤ s_m: in this interval the minimum of G_n over (x,x+B] is attained at S_m; this follows from the definition of S_m (Lemma <ref>) and from the fact that G_n is nonincreasing on (-∞,s_m] (Lemma <ref>);
s_{w-1}< x ≤ S_m-B: in this interval, from Definition <ref> it follows that the minimum of G_n over (x,x+B] is attained at x+B, since G_n(S_k)>G_n(S_m) for all k=1,…,w-1;
S_k-B< x ≤ s_k, for all k=1,…,w-1: in this interval, from Definition <ref> and from Lemma <ref>, the minimum of G_n over (x,x+B] is attained at S_k;
s_{k-1}< x ≤ S_k-B, for all k=1,…,w-1: in this interval, from Definition <ref> and from Lemma <ref>, the minimum of G_n over (x,x+B] is attained at x+B, since G_n(S_k)>G_n(S_{k+1});
finally, note that if s_0<x ≤ S_1-B, then the minimum of G_n over (x,x+B] is attained at x+B, since G_n is nonincreasing on (-∞,S_1]: indeed, G_n is nonincreasing on (-∞,s_m] (Lemma <ref>), 𝒮 is an ordered set, hence by definition there exists no point s_m<S<S_1 that is a strict local minimum from the right, and G_n(s_m)>G_n(S_1) (Lemma <ref>), thus G_n is nonincreasing on (s_m,S_1].
Henceforth we retain the labels S_1,…,S_{w-1} for the order-up-to-levels identified above and, recalling that S_w≜ S_m, we let m≜ w for convenience.
By applying Definition <ref>, we can rewrite, for k=1,…,m,
C_n(x) =
  K - vx + G_n(x+B)   if s_{k-1} < x ≤ S_k - B;
  K - vx + G_n(S_k)   if S_k - B < x ≤ s_k;
  -vx + G_n(x)        if x > s_m
where S_1,…,S_m are the order-up-to-levels and s_1,…,s_m the reorder points of the (s_k, S_k) policy.
If the continuous order property holds, the (s_k, S_k) policy generalises the X-Y band discussed in <cit.>.
In that work, Y≜ s_m and X≜ Y-B, where X denotes an inventory level below which it is optimal to order up to capacity; hence, their X-Y band has size B. According to Lemma <ref>, it is optimal to order up to capacity for all x≤ S_1-B. According to Lemma <ref>c, s_m<S_1, and thus s_m-B<S_1-B. By letting X̅≜ S_1-B, we obtain a tighter band X̅-Y.
If the continuous order property holds, the (s_k, S_k) policy generalises the policy discussed in <cit.>.
The optimal policy structure of <cit.> features two thresholds: s and s', where -∞≤ s≤ s'≤ S^*, and S^*=min_y G_n(y).
Clearly, s' is the same threshold we denoted as s_m, and under the assumption that the continuous order property holds, it follows that s=s'. The optimal policy therefore reduces to
C_n(x) =
  K - vx + G_n(x+B)                 if x ≤ s_m - B;
  K - vx + min_x≤ y≤ x+B G_n(y)     if s_m - B < x ≤ s_m;
  -vx + G_n(x)                      if x > s_m,
which is equivalent to the X-Y band structure discussed above.
If the continuous order property holds, the (s_k, S_k) policy generalises the (s,S) policy discussed in <cit.>.
When B=∞, S_m-B=-∞, and from Lemma <ref> it is clear that the optimal policy must feature a single reorder threshold s and order-up-to-level S.
In Fig. <ref> we illustrate G_n(y) for different ordering capacities (B∈{35,65,71,∞}) imposed for the problem in Example <ref>.
The optimal (s_k,S_k) ordering policy under ordering capacity constraints for our numerical example is shown in Table <ref>, and in Fig. <ref> for the case in which B=65.
In Appendix <ref> we characterise the structure of the optimal policy for the open numerical example in <cit.>, for which the continuous order property holds.
§ A COUNTEREXAMPLE
The continuous order property in Definition <ref> has been originally conjectured by <cit.>, and it was later further investigated by <cit.>. <cit.> wrote:
A number of problems still remain. The most vexing is the possibility that under the current structure there could exist a number of intervals [...] where it is optimal to start and stop ordering. An optimal policy with a single continuous interval over which ordering is prescribed, as was found for all of the cases tested [...], is much more analytically appealing. [...]
Unfortunately, the proof of this has thus far eluded us. It should be mentioned that it is likewise possible, although we believe it unlikely, that such a structure simply does not exist. To show this requires a problem instance in which the optimal policy has multiple disjoint intervals in which ordering is optimal. Our computational study suggests that this is not the case.
<cit.> wrote:
If our conjecture [the continuous order property] holds, the computational time for obtaining the optimal ordering policy parameters can be further reduced [...]. We can only show that this conjecture holds for a special case where [the capacity] is large enough [...]. It should be an interesting problem for researchers to prove or disprove the conjecture is true for small [capacity].
In the rest of this section, we introduce a numerical instance that violates the continuous order property. To the best of our knowledge, no such instance has ever been discussed in the literature.
Consider a planning horizon of n=4 periods and a nonstationary demand d_t distributed in each period t according to the probability mass function shown in Table <ref>. Other problem parameters are K=250, B=41, h=1, p=26, and v=0.
In Table <ref> we report an extract of the tabulated optimal policy in which the continuous order property is violated (Fig. <ref>).
Our numerical example confirms that it is possible to construct instances for which it is optimal to start and stop ordering, and that the continuous order property conjectured in <cit.> does not hold for the general case of the stochastic inventory problem under order quantity capacity constraints. In Appendix <ref> we discuss the rationale underpinning the generation of our counterexample.
§ COMPUTATIONAL STUDY
Although in the previous section we demonstrated that it is possible to construct instances for which the continuous order property does not hold, we must underscore that such instances are incredibly rare. This is also the reason why the conjecture in <cit.> remained open for over twenty years. In this section, we consider an extensive test bed comprising a broad family of demand distributions and problem parameters; our aim is threefold.
First, we aim to show empirically that, in practice, instances that violate the continuous order property are extremely rare. In turn, this means that the plans generated by the modified multi-(s,S) ordering policy can be considered, for all practical purposes, optimal.
Second, the modified multi-(s,S) ordering policy may feature, in each period, a variable number of thresholds s_k and associated order-up-to-levels S_k. In our computational study, we show that the number of thresholds in a modified multi-(s,S) policy is typically very low in each period. This means that operating this policy is generally not too cumbersome, as the decision maker only needs to track a few (usually less than 5) reorder thresholds and associated order up to levels.
Finally, as shown in Table <ref>, a modified (s,S) policy <cit.> with parameters (s_m,S_m) appears to perform well in the context of Example <ref>; in our study we proceed to show that this simple policy, which has been known for decades, also performs well in practice across all instances considered.
§.§ Test bed
In our test bed, the planning horizon comprises n=20 periods. We consider 10 different patterns for the expected value of the demand in each period, as shown in Fig. <ref>: 2 life cycle patterns (LCY1 and LCY2), 2 sinusoidal patterns (SIN1 and SIN2), 1 stationary pattern (STA), 1 random pattern (RAND), and 4 empirical patterns (EMP1, EMP2, EMP3, EMP4) derived from demand data in <cit.>. Further details of expected demand rates in each period are given in Table <ref> in Appendix <ref>.
We consider a broad family of demand distributions commonly used in practice: discrete uniform, geometric, Poisson, normal, lognormal, and gamma. Demands in different periods are assumed to be mutually independent. More specifically, letting μ_t denote the mean demand in period t, we investigate a demand that follows a discrete uniform distribution on [0,2μ_t); a demand that follows a geometric distribution with mean μ_t; and a demand that follows a Poisson distribution with rate μ_t. Finally, given the coefficient of variation of the demand in each period, c_v=σ_t/μ_t, where σ_t is the standard deviation of the demand in period t, we consider a demand that follows a normal, a lognormal, and a gamma distribution with mean μ_t and standard deviation σ_t.
Fixed ordering cost K takes values in {250, 500, 1000}; inventory holding cost h is 1; unit variable ordering cost v takes values in {2, 5, 10}; unit penalty cost p ranges in {5, 10, 15}. For the case of normal, lognormal, and gamma distributed demand, the coefficient of variation takes values in {0.1,0.2,0.3}.
Let D denote the average demand rate over the whole n periods horizon for a given demand pattern; the maximum order quantity B takes values in {round(2D), round(3D), round(4D)}, where the round operator rounds the value to the nearest integer.
Since we adopt a full factorial design, we consider 810 instances for discrete uniform, geometric, and Poisson distributed demand, respectively; and 2430 instances for normal, lognormal, and gamma distributed demands, respectively, since in these latter cases we must also consider the three levels of the coefficient of variation. In total, our computational study comprises 9720 instances.
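The instance counts above follow mechanically from the factorial design; a quick sketch (pattern labels as in the text):

from itertools import product

patterns = ["STA", "LCY1", "LCY2", "SIN1", "SIN2", "RAND",
            "EMP1", "EMP2", "EMP3", "EMP4"]
grid = list(product(patterns, [250, 500, 1000], [2, 5, 10],
                    [5, 10, 15], [2, 3, 4]))   # K, v, p, capacity multiplier
assert len(grid) == 810                        # per discrete demand distribution
assert 3 * len(grid) + 3 * 3 * len(grid) == 9720   # 3 discrete + 3 continuous (x3 c_v)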
§.§ Results
We run experiments on an Intel(R) Xeon(R) @ 3.5GHz with 16Gb of RAM.
(The library used in our experiments is jsdp. The Java code is available at <http://gwr3n.github.io/jsdp/>. A self-contained Python implementation is also available at <https://github.com/gwr3n/inventoryanalytics>.)
SDP state space boundaries are fixed — inventory may range in (-10000, 10000) — and in all cases we adopt a unit discretization, therefore running time for each instance is constant; a continuity correction is introduced for continuous distributions. Monte Carlo simulation runs are determined by targeting an estimation error of 0.01% for the mean estimated at 95% confidence level; we adopt a common random number strategy <cit.> across all instances.
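The following is a sketch of a common-random-numbers evaluation, assuming a policy(x, t) callable that returns the order quantity (names hypothetical); fixing the seed reuses the same demand paths for every policy, so cost differences are not masked by sampling noise.

import numpy as np

def simulate_cost(policy, pmfs, K, v, h, p, n_runs, seed=42):
    """Average policy cost over n_runs demand paths (common random numbers)."""
    rng = np.random.default_rng(seed)     # same seed across policies
    total = 0.0
    for _ in range(n_runs):
        x = 0
        for t, (d_vals, d_probs) in enumerate(pmfs):
            Q = policy(x, t)
            total += (K + v * Q) if Q > 0 else 0.0
            x += Q - rng.choice(d_vals, p=d_probs)
            total += h * max(x, 0) + p * max(-x, 0)
    return total / n_runs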
In Tables <ref>–<ref> we present the results of our study for each of the demand distributions under scrutiny.
For all instances investigated, a modified multi-(s, S) policy is optimal. Moreover, the maximum number of thresholds observed in any given period is surprisingly low and never exceeds 6 over the whole test bed. We also report the average and maximum % optimality gap of a modified (s, S) policy with parameters (s_m,S_m) extracted from the SDP tables. This policy is found to be near optimal in our study, since its average % optimality gap is consistently negligible, while the maximum % optimality gap observed never exceeds 2%.
§ CONCLUSIONS
The periodic review single-item single-stocking location stochastic inventory system under nonstationary demand, complete backorders, a fixed ordering cost component, and order quantity capacity constraints is one of the fundamental problems in inventory management.
A long-standing open question in the literature is whether a policy with a single continuous interval over which ordering is prescribed is optimal for this problem. The so-called “continuous order property” conjecture was originally posited by <cit.>, and later also investigated by <cit.>. To the best of our knowledge, to date this conjecture had never been confirmed or disproved.
In this work, we provided a numerical counterexample that violates the continuous order property. This closes a fundamental and long-standing problem in the literature: a policy with a single continuous interval over which ordering is prescribed is not optimal in general.
<cit.> provided a partial characterisation of the optimal policy to the problem. In light of the results presented in <cit.>, we showed how to simplify the optimal policy structure presented by <cit.>. <cit.> also briefly sketched the form that an optimal policy would take under moderate values of K. We formalised this discussion and provided a full characterisation of the optimal policy for instances for which the continuous order property holds. In particular, we showed that under this assumption the optimal policy takes the modified multi-(s,S) form.
By leveraging an extensive computational study, we showed that instances violating the continuous order property are extremely rare in practice. The modified multi-(s,S) ordering policy can therefore be considered, for all practical purposes, optimal. Moreover, we observed that the number of thresholds in a modified multi-(s,S) policy remains low in each period, which means that operating the policy in practice will not be too cumbersome for a manager. Finally, we showed that a well-known heuristic that has been known for decades, the modified (s,S) policy <cit.>, also performs well in practice across all instances considered.
Since a policy with a single continuous interval over which ordering is prescribed is not optimal in general, future works may focus on establishing what restrictions (if any) to the problem statement, e.g. nature of the demand distribution, may ensure that a policy with a single continuous interval over which ordering is prescribed is optimal.
§ ACKNOWLEDGMENTS
The authors would like to thank the China Scholarship Council (CSC) for the financial support provided to Z. Chen under the CSC Postgraduate Study Abroad Program.
§ POSSIBLE SCENARIOS ONE MAY OBSERVE WHEN INVENTORY HITS LEVEL S_M
There are two possible cases one may encounter when inventory hits reorder threshold s_m: either we order less than B, or we order the maximum allowed quantity B. We next illustrate these two possible cases via Example <ref>.
Case 1: The first case (B=65) is shown in Fig. <ref>.
In this case there are m=2 local minima up to (and including) the global minimizer S_m. Let y denote the initial inventory and apply Eq. (<ref>). Since s_2+B≥ S_2, if s_1<y<s_2 we order x=min{S_2-y, B}; if y<s_1 we order x=min{S_1-y, B}. Finally, if y≥ s_2, we do not order.
Case 2: The second case (B=71) is shown in Fig. <ref>.
In this case there are m=3 local minima up to (and including) the global minimizer S^*. Let y denote the initial inventory and apply Eq. (<ref>). Since capacity B is insufficient to reach the global minimizer S^*, if s_2<y<s_3 we order x=S_3-s_3=B; if s_1<y<s_2 we order x=min{S_2-y, B}; and if y<s_1, we order x=min{S_1-y, B}. Finally, if y≥ s_3, we do not order.
These two cases exhaust all possible scenarios one may observe when inventory hits level s_m.
§ NUMERICAL EXAMPLE ILLUSTRATING LEMMA <REF>
Consider a planning horizon of n=12 periods; a demand d_t distributed in each period t=1,…,n according to a Poisson law with rate λ_t∈{151, 152, 58, 78, 134, 13, 22, 161, 43, 55, 110, 37}; K=494, v=0, h=1, p=15, and B=128.
We focus on period 6, and in Fig. <ref> we plot G_6(y) for an initial inventory y∈(60,145). It is clear that at any point x_0 at which it is optimal to place an order, if we have sufficient capacity to order beyond b_1, we should do so; however, if we do not have sufficient capacity, then we would never order up to S′, as this point is clearly dominated by S. Observe that while S belongs to the QCE of G_6 (illustrated as a dashed line where it departs from G_6), S′ does not.
§ EXAMPLE FROM <CIT.>
We hereby illustrate that an (s_k,S_k) ordering policy is optimal for the numerical example originally presented in <cit.> and also investigated in <cit.> under an infinite horizon.
Consider a planning horizon of n=20 periods and a stationary demand d distributed in each period according to the following probability mass function: Pr{d=6}=0.95 and Pr{d=7}=0.05. Other problem parameters are K=22, B=9, h=1, p=10, and v=1; note that, if the planning horizon is sufficiently long, v can safely be ignored. The discount factor is α=0.9.
In Table <ref> we report the tabulated optimal policy as illustrated in <cit.>.
In Fig. <ref> we plot G_n(y) for an initial inventory y∈(-5,50) and n=20. The optimal (s_k,S_k) policy is shown in Table <ref>; this is equivalent to the policy illustrated in <cit.> and to the stationary policy tabulated in <cit.>.
§ GENERATING COUNTEREXAMPLES TO THE CONTINUOUS ORDER PROPERTY
Generating counterexamples to the continuous order property is far from trivial. We believe this is the reason why the continuous order property originally conjectured by <cit.> has not been so far confirmed or disproved. In this section, we outline the reasoning we followed to generate our counterexample. Our analysis was inspired by the work of <cit.>.
Let f be convex, and S be a minimizer of f, then
g(x) ≜ min_y∈[x,x+B] f(y) - f(x) =
  0               if S ≤ x;
  f(S) - f(x)     if S - B ≤ x ≤ S;
  f(x+B) - f(x)   if x ≤ S - B
is nondecreasing.
Following <cit.>, g(x) is constant for S≤ x; it is nondecreasing for S-B≤ x≤ S, since f(S) is constant, and f is nonincreasing in this region; finally, it is nondecreasing for x≤ S-B, since f is convex and hence f(x+B)-f(x) is nondecreasing for all x.
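The lemma is easy to verify numerically; a quick check with a convex quadratic (an arbitrary choice made purely for illustration):

import numpy as np

f = lambda y: (y - 10.0) ** 2                 # convex, minimizer S = 10
S, B = 10.0, 3.0

def g(x):                                     # piecewise form from the lemma
    if S <= x:
        return 0.0
    if S - B <= x:
        return f(S) - f(x)
    return f(x + B) - f(x)

xs = np.linspace(-5.0, 20.0, 500)
vals = np.array([g(x) for x in xs])
assert np.all(np.diff(vals) >= -1e-12)        # nondecreasing, as claimed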
Consider G_n and C_n as defined in Eq (<ref>) and Eq. (<ref>), respectively, and let these functions be (K,B)-convex. To show that the continuous order property holds, one must show that {x|C_n(x)-(G_n(x)-vx)<0} is the convex set (-∞,s_m).
Recall that
C_n(x) = min{ L_n(x) + ∫_0^∞ C_{n-1}(x-ξ)f_n(ξ) dξ,  min_x< y ≤ x+B {K + v(y-x) + L_n(y) + ∫_0^∞ C_{n-1}(y-ξ)f_n(ξ) dξ} },
G_n(x) = vx + L_n(x) + ∫_0^∞ C_{n-1}(x-ξ)f_n(ξ) dξ,
C_n(x) = -vx + min{ G_n(x), K + min_x≤ y ≤ x+B G_n(y) }.
To prove that {x|C_n(x)-(G_n(x)-vx)<0} is a convex set, it is sufficient to show that the function
V_n(x)≜ C_n(x)-(G_n(x)-vx)
is nondecreasing in x for each n. Let [x]^-≜min{0,x}, and note that
V_n(x)=[K + min_x≤ y ≤ x+B G_n(y) - G_n(x)]^-.
One may want to try and show by induction that V_n(x) is nondecreasing in x for each n. Let C_0≜ 0, then
V_1(x) = [K + min_x≤ y ≤ x+B {v(y-x) + L_1(y)} - L_1(x)]^-;
since the unit cost v is linear, and L_1 is convex, from Lemma <ref> it follows that V_1(x) is nondecreasing.
Given this base case, we may then assume that V_n(x) is nondecreasing in x, and try to show that V_{n+1}(x) is nondecreasing in x.
First, observe that
V_{n+1}(x) = [K + min_x≤ y ≤ x+B (vy + L_{n+1}(y) + ∫_0^∞ C_n(y-ξ)f_{n+1}(ξ) dξ) - (vx + L_{n+1}(x) + ∫_0^∞ C_n(x-ξ)f_{n+1}(ξ) dξ)]^-.
To try and prove that V_{n+1}(x) is nondecreasing, we shall first analyse
K + min_x≤ y ≤ x+B {v(y-x) + C_n(y)} - C_n(x) = min_x≤ y ≤ x+B {K + G_n(y) - G_n(x) - V_n(x) + V_n(y)},
since C_n(x)=V_n(x)+G_n(x)-vx. Consider s_m as defined in Lemma <ref>, and recall this value denotes an inventory level beyond which no ordering is optimal.
There are three intervals we need to analyse: x≤ s_m-B, s_m-B< x≤ s_m, and x> s_m. Observe that, from the definition of s_m in Lemma <ref>, if x=s_m, then K + min_x≤ y ≤ x+B G_n(y) - G_n(x)≤ 0; moreover, by induction hypothesis V_n(x) is assumed nondecreasing, hence V_n(x)=K + min_x≤ y ≤ x+B G_n(y) - G_n(x) for x≤ s_m.
Let x≤ s_m-B; in this interval V_n(x)=K + min_x≤ y ≤ x+B G_n(y) - G_n(x), thus
min_x≤ y ≤ x+B{K + G_n(y) - G_n(x) - V_n(x) + V_n(y)}
= min_x≤ y ≤ x+B{K - (min_x≤ z ≤ x+B G_n(z)) + (min_y≤ w ≤ y+B G_n(w))}
= min_x≤ y ≤ x+B{K - G_n(x+B) + min_y≤ w ≤ y+B G_n(w)}
= K + min_x≤ y ≤ x+2B G_n(y) - G_n(x+B),
= K + min_x+B≤ y ≤ x+2B G_n(y) - G_n(x+B),
because G_n(x) is assumed (K,B)-convex and, by Lemma <ref>, it is nonincreasing for x≤ s_m, therefore it is also nonincreasing in (x, x+B), since x≤ s_m-B.
Let s_m-B< x≤ s_m, in this interval V_n(x)=K + min_x≤ z ≤ x+B G_n(z) - G_n(x), thus
min_x≤ y ≤ x+B{K + G_n(y) - G_n(x) - V_n(x) + V_n(y)}
= min_x≤ y ≤ x+B{G_n(y) + V_n(y)} - min_x≤ z ≤ x+B G_n(z)
= min_x≤ y ≤ x+B{C_n(y) + vy} - min_x≤ z ≤ x+B G_n(z)
= min_s_m< y ≤ x+B{C_n(y) + vy} - min_s_m< z ≤ x+B G_n(z)=0,
because G_n(x) and C_n(x) are assumed (K,B)-convex and, by Lemma <ref>, they are nonincreasing for x≤ s_m; and since no ordering is optimal beyond s_m, then min_s_m< y ≤ x+B C_n(y)+vy=min_s_m< z ≤ x+B G_n(z).
Let x> s_m, in this interval K + min_x≤ y ≤ x+B G_n(y) - G_n(x)> 0, hence V_n(x)=0, V_n(y)=0, and
min_x≤ y ≤ x+B{K + G_n(y) - G_n(x) - V_n(x) + V_n(y)}
= K + min_x≤ y ≤ x+B G_n(y) - G_n(x)> 0.
Equipped with Eq. (<ref>), (<ref>), and (<ref>) for the intervals we considered, it is immediate to see that
[K + min_x≤ y ≤ x+B {v(y-x) + C_n(y)} - C_n(x)]^- =
  V_n(x+B)   if x ≤ s_m - B;
  0          if s_m - B < x ≤ s_m;
  0          if x > s_m
is nondecreasing. However, it is not possible to determine whether [K + min_x≤ y ≤ x+B v(y-x) + ∫_0^∞ (C_n(y-ξ)-C_n(x-ξ))f_{n+1}(ξ) dξ]^- is nondecreasing; and reintroducing the term min_x≤ y ≤ x+B L_{n+1}(y) - L_{n+1}(x) only worsens matters. But because of the behavior of [K + min_x≤ y ≤ x+B {v(y-x) + C_n(y)} - C_n(x)]^- on the intervals s_m-B< x≤ s_m and x≤ s_m-B, one may observe that a V_{n+1}(x) function featuring some decreasing regions may be produced by the convolution ∫_0^∞ (C_n(y-ξ)-C_n(x-ξ))f_{n+1}(ξ) dξ, provided the demand is sufficiently “lumpy.” In other words, the instance must feature a demand whose probability mass function places non-negligible probability mass on some values larger than B. A demand so structured may ensure that the convolution “bends” V_{n+1}(x) sufficiently beyond s_m that it turns negative.
On the basis of this observation, we generated several random instances as follows. The fixed ordering cost is a randomly generated value uniformly distributed between 1 and 500; the holding cost is 1; the penalty cost is a randomly generated value uniformly distributed between 1 and 30; the ordering capacity is a randomly generated value uniformly distributed between 20 and 200; the demand distribution in each period is obtained as follows: the probability mass function comprises only four values in the support, one of which must fall below the given order capacity, while the other three must fall above it and be smaller than or equal to 300; probability masses are then allocated uniformly to each of these values.
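A sketch of this generator follows; the parameter ranges are those stated above, the equal 0.25 masses are one reading of “allocated uniformly,” and the acceptance test (solving the SDP and checking for disjoint ordering intervals) is left to the solver.

import numpy as np

def random_lumpy_instance(rng, n_periods=4):
    """One candidate instance following the recipe in the text."""
    K = int(rng.integers(1, 501))             # fixed ordering cost in [1, 500]
    p = int(rng.integers(1, 31))              # penalty cost in [1, 30]
    B = int(rng.integers(20, 201))            # ordering capacity in [20, 200]
    pmfs = []
    for _ in range(n_periods):                # one lumpy pmf per period
        lo = rng.integers(0, B)               # one support point below capacity
        hi = rng.integers(B + 1, 301, size=3) # three support points above, <= 300
        support = np.sort(np.r_[lo, hi])
        probs = np.full(4, 0.25)              # equal masses (one possible reading)
        pmfs.append((support, probs))
    return dict(K=K, h=1, p=p, B=B, pmfs=pmfs)

rng = np.random.default_rng(0)
inst = random_lumpy_instance(rng)             # then solve the SDP and test the property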
The Java code to generate instances that violate the continuous order property is available on <http://gwr3n.github.io/jsdp/>.[File <https://github.com/gwr3n/jsdp/blob/master/jsdp/src/main/java/jsdp/app/standalone/stochastic/capacitated/CapacitatedStochasticLotSizingFast.java>]
§ EXPECTED DEMAND VALUES IN OUR TEST BED
Expected demand values for demand patterns in our test bed are shown in Table <ref>.
Pattern, expected demand in periods t=1,…,20:
STA,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30 ,30
LC1,46 ,49 ,50 ,50 ,49 ,46 ,42 ,38 ,35 ,33 ,30 ,28 ,26 ,23 ,21 ,18 ,14 ,11 ,8 ,6
LC2,7 ,9 ,11 ,13 ,17 ,22 ,24 ,26 ,32 ,34 ,36 ,41 ,44 ,47 ,48 ,50 ,50 ,49 ,47 ,44
SIN1,47 ,30 ,13 ,6 ,13 ,30 ,47 ,54 ,47 ,30 ,13 ,6 ,13 ,30 ,47 ,30 ,15 ,8 ,11 ,30
SIN2,36 ,30 ,24 ,21 ,24 ,30 ,36 ,39 ,36 ,30 ,24 ,21 ,24 ,30 ,36 ,31 ,24 ,21 ,26 ,33
RAND,63 ,27 ,10 ,24 ,1 ,23 ,33 ,35 ,67 ,7 ,14 ,41 ,4 ,63 ,26 ,45 ,53 ,25 ,10 ,50
EMP1,5 ,15 ,46 ,140 ,80 ,147 ,134 ,74 ,84 ,109 ,47 ,88 ,66 ,28 ,32 ,89 ,162 ,36 ,32 ,50
EMP2,14 ,24 ,71 ,118 ,49 ,86 ,152 ,117 ,226 ,208 ,78 ,59 ,96 ,33 ,57 ,116 ,18 ,135 ,128 ,180
EMP3,13 ,35 ,79 ,43 ,44 ,59 ,22 ,55 ,61 ,34 ,50 ,95 ,36 ,145 ,160 ,104 ,151 ,86 ,123 ,64
EMP4,15 ,56 ,19 ,84 ,136 ,67 ,67 ,155 ,87 ,164 ,194 ,67 ,65 ,132 ,35 ,131 ,133 ,36 ,173 ,152
entry_id: http://arxiv.org/abs/2307.02471v1
published: 20230705174536
title: Deep Speech Synthesis from MRI-Based Articulatory Representations
authors: Peter Wu, Tingle Li, Yijing Lu, Yubin Zhang, Jiachen Lian, Alan W Black, Louis Goldstein, Shinji Watanabe, Gopala K. Anumanchipalli
primary_category: eess.AS
categories: eess.AS
In this paper, we study articulatory synthesis, a speech synthesis method using human vocal tract information that offers a way to develop efficient, generalizable and interpretable synthesizers. While recent advances have enabled intelligible articulatory synthesis using electromagnetic articulography (EMA), these methods lack critical articulatory information like excitation and nasality, limiting generalization capabilities. To bridge this gap, we propose an alternative MRI-based feature set that covers a much more extensive articulatory space than EMA. We also introduce normalization and denoising procedures to enhance the generalizability of deep learning methods trained on MRI data. Moreover, we propose an MRI-to-speech model that improves both computational efficiency and speech fidelity. Finally, through a series of ablations, we show that the proposed MRI representation is more comprehensive than EMA and identify the most suitable MRI feature subset for articulatory synthesis.
Index Terms: speech synthesis, articulatory synthesis
§ INTRODUCTION
Deep speech synthesis technology has made significant advancements in recent years, leading to high-performing models for tasks such as text-to-speech <cit.>, voice conversion <cit.>, and speech translation <cit.>. However, the development of brain-to-speech devices <cit.> still poses significant challenges, requiring faster and more data-efficient models. Articulatory synthesis <cit.> offers a potential solution by synthesizing speech from a compact, smooth, and interpretable articulatory space <cit.>.
Electromagnetic articulography (EMA) is a commonly used articulatory representation <cit.>, but it only contains 6 x-y points, making it challenging to comprehensively capture articulatory movements. Real-time magnetic resonance imaging (MRI) is a state-of-the-art tool that captures dynamic information about vocal tract movements and shaping during human speech production, offering a feature-rich alternative to EMA. It contains hundreds of x-y points, including positional information for the hard palate, pharynx, epiglottis, velum, and larynx, all of which are important for speech production but not directly described in raw EMA data. Moreover, recent advances in image acquisition and reconstruction techniques have enabled sufficient temporal and spatial resolutions (e.g., 12ms and 2.4 × 2.4 mm^2) that allow researchers to study the intricate and dynamic interactions during speech production <cit.>.
However, since MRI dataset participants speak from inside a tube-shaped MRI machine, there is a noticeable amount of reverberation in the collected utterances, which leads to unsatisfactory MRI-to-speech synthesis performance. To overcome this problem, we enhance the utterances and propose a generative adversarial network (GAN) based method that directly synthesizes waveforms from articulatory features, producing noticeably more intelligible speech than the baselines.
We summarize our contributions as follows:
* We propose a novel MRI-based representation for articulatory synthesis, along with effective preprocessing strategies for such data.
* We demonstrate that our proposed model outperforms baselines across several evaluation metrics.
* We quantitatively and qualitatively identify the advantages of MRI over EMA and the most important MRI features for articulatory synthesis.
Code and additional related information will be available at https://github.com/articulatory/articulatoryhttps://github.com/articulatory/articulatory.
§ MRI DATASET
We utilize the real-time MRI and its corresponding audio recordings of one native American English speaker (female, 25-year-old), with a total speech duration of approximately 17 minutes and a sampling rate of 20 kHz, which are acquired from a publicly available multispeaker MRI dataset <cit.>. This dataset includes midsagittal real-time MRI videos with a spatial resolution of 2.4 × 2.4 mm^2 (84 × 84 pixels) and a temporal resolution of 12-ms (83 frames per second), capturing the vocal tract movements during the production of a comprehensive set of scripted and spontaneous speech materials.
To prepare the MRI data for our model, we use a semi-automatic method <cit.> to track the contours of vocal tract air-tissue boundaries in each raw MRI frame (Figure <ref>) and segment the contours into anatomical components, as shown in Figure <ref> and Figure <ref>. To mitigate overfitting, we prune the MRI feature set by discarding segments that do not contribute much to understanding how speech production varies across utterances. Figure <ref> presents the full set of segments, while Figure <ref> shows the reduced set. The original set comprises 170 x-y coordinates, whereas the reduced set contains only 115. We then concatenate and flatten the 115 x-y coordinates into a 230-dimensional vector, which we use as input for our MRI-to-speech synthesis task.
Since the raw MRI data is composed of long utterances with lots of silences, we first segment the utterances into sentence-long pieces. Then we employ a pre-trained BERT-based model[<https://huggingface.co/felflare/bert-restore-punctuation>] to estimate sentence boundaries, and align the audio recordings as well as the orthographic and phonological transcriptions using Montreal force aligner <cit.>. The resulting alignments are manually calibrated by professional phoneticians. By utilizing the estimated sentence boundaries and word alignments, we split the audio recordings and MRI data into 236 utterances, totaling 11 minutes. Finally, these utterances are randomly split into a 0.85-0.05-0.10 train-val-test split, resulting in 200, 11, and 25 utterances in the train, val, and test sets, respectively.
Furthermore, the head location is not fixed within the MRI data, which can negatively impact the ability of such models to generalize to unseen positions. Thus, we adopt a centering approach that centers each frame around a relatively fixed point to improve generalizability. Specifically, we calculate the standard deviation (σ_x, σ_y) of each of the 170 points across the training set and center every frame at the point with the lowest √(σ_x^2 +σ_y^2). This center point is located on the hard palate, circled in green in Figure <ref>. We note that the standard deviation results reflect human speech production behavior, as the hard palate is relatively still across utterances whereas the tongue varies noticeably, highlighting the interpretability of our MRI-based articulatory features. Another preprocessing step that we found useful was denoising, which we detail in Section <ref>.
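A minimal sketch of the centering step follows, assuming frames is an array of shape (num_frames, 170, 2) holding the contour coordinates; subtracting the chosen point's per-frame coordinates is one plausible reading of “centering each frame.”

import numpy as np

def find_center_index(train_frames):
    """Index of the most stable contour point across the training set."""
    sd = train_frames.std(axis=0)             # (170, 2): per-point (sigma_x, sigma_y)
    return int(np.argmin(np.hypot(sd[:, 0], sd[:, 1])))

def center_frames(frames, center_idx):
    """Shift every frame so the chosen (hard-palate) point sits at the origin."""
    return frames - frames[:, center_idx:center_idx + 1, :]

# After centering, the reduced 115 points are flattened into the
# 230-dimensional per-frame input vector described above.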
§ MODELS
§.§ Intermediate-Representation Baselines
Currently, a popular speech synthesis approach is to first synthesize an intermediate representation from the input and then map the intermediate representation to the waveform domain <cit.>. Wu et al. <cit.> showed that directly synthesizing speech from EMA outperformed a spectrum-intermediate approach in terms of computational efficiency and yielded comparable synthesis quality. Intuitively, omitting the spectrum intermediate reflects how the human speech production process does not perform this intermediate mapping <cit.>. In this work, we also compare using intermediate representations with directly synthesizing from inputs. We observe two popular types of intermediate representations in the literature: (1) spectrums <cit.>, and (2) deep representations <cit.>. To compare our proposed direct modelling approach in Section <ref> with both intermediate modelling methods, we experiment with Mel spectrogram and HuBERT <cit.> intermediates. For the Mel spectrogram calculation, we use size-240 hops, size-1024 FFTs, Hann windows, and 80 Mels. With HuBERT, we use the output of the model's last hidden layer, linearly interpolated to match the MRI input sampling rate. We denote spectrum-intermediate models with “Spe." and HuBERT ones with “Hub." in our results below for readability. In our MRI-to-speech task, direct modeling is both more computationally efficient and more high-fidelity than the intermediate approaches, as discussed in Sections <ref> and <ref>. We detail the model architectures of our intermediate-representation baselines in Section <ref>.
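For reference, the stated spectrogram configuration could be computed as follows; librosa is our choice here, not necessarily the one used in the paper, and the log compression is a common convention the text does not specify.

import librosa
import numpy as np

def mel_intermediate(wav, sr=20000):
    """80-bin log-Mel spectrogram with the settings stated above."""
    mel = librosa.feature.melspectrogram(
        y=wav, sr=sr, n_fft=1024, hop_length=240,
        window="hann", n_mels=80)
    return np.log(np.maximum(mel, 1e-10))     # floored log compression

# HuBERT intermediates: take the last hidden layer (N, T, D) and linearly
# interpolate along time to the MRI frame rate, e.g. with
# torch.nn.functional.interpolate(feats.transpose(1, 2), size=T_mri, mode="linear")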
§.§ CNN-BiLSTM Baseline
Following Yu et al. <cit.>, we employ the CNN-BiLSTM architecture as a baseline method. This method processes each MRI frame through a sequence of four CNN layers, with two max-pooling layers incorporated in the middle. The extracted features are then aggregated along the time axis and fed to a BiLSTM layer to generate the mel-spectrogram. Since the inputs in our MRI-to-speech task are sequences of vectors rather than the MRI video inputs used in Yu et al. <cit.>, we use 1D convolutions instead of 2D and 3D. Finally, a neural vocoder reconstructs the waveform signal. For this vocoder, we use HiFi-CAR <cit.>, which outperforms the WaveGlow architecture <cit.> used by Yu et al. <cit.>. HiFi-CAR is an autoregressive version of the HiFi-GAN convolutional network <cit.>, detailed in Section <ref>. It is worth noting that Yu et al. used the original speech data, without any denoising, resulting in unsatisfactory performance. For a fair comparison, we also train this model using enhanced speech. In our experiments in Sections <ref> and <ref> below, we refer to this model as CBL for readability.
§.§ HiFi-CAR Model
Similar to the method used by Wu et al. <cit.>, our model directly synthesizes waveforms from articulatory features without the need for an intermediate representation. Specifically, we build on their HiFi-CAR model, which is a HiFi-GAN convolutional neural network <cit.> modified to be autoregressive using the CAR-GAN audio encoder <cit.>. To our knowledge, training models to directly synthesize waveforms from MRI data has not yielded successful results previously. However, we observe that this model outperforms our baselines in terms of both computational efficiency and fidelity, as discussed in Sections <ref> and <ref>. We also use the HiFi-CAR vocoder to map intermediate features to waveforms for our intermediate-representation baselines. For all HiFi-CAR models, we initialize their weights with those of a HiFi-GAN spectrum-to-waveform vocoder pre-trained on LibriTTS.[<https://github.com/kan-bayashi/ParallelWaveGAN>] We note that this initialization approach noticeably improves performance compared to Wu et al. <cit.>. Further modeling details can be found in the accompanying codebase.
§.§ Speech Enhancement Model
The currently available dataset <cit.> suffers from poor quality due to significant reverberation and noise, which poses a significant challenge for accurately modeling the relationship between MRI and speech. To circumvent this issue, we employ the off-the-shelf Adobe Podcast toolkit[<https://podcast.adobe.com/enhance>], which processes speech recordings to enhance their quality and make them sound as if they were recorded in a professional studio. The resulting speech is therefore better suited for our purposes. Unfortunately, due to its proprietary nature, we do not have access to its technical details. Based on our observations, however, we conjecture that it may contain a pipeline of bandwidth extension <cit.> and speech enhancement <cit.>. Specifically, we hypothesize that the toolkit up-samples the speech to 48kHz and leverages HiFi-GAN <cit.> to generate high-quality speech. We downsample the enhanced speech to the waveform sampling rate of our MRI dataset to keep model output lengths the same. In our MRI-to-speech task, we use 0.9*y_e+0.1*y_o as our target waveform, where y_e is the enhanced waveform and y_o is the original one. Using this weighted sum yields more intelligible MRI-to-speech models than using just y_e, which may be because deep speech enhancers add irregular noise; mixing in the original, more natural waveform smooths the target into something more learnable.
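The target construction reduces to a weighted mix after resampling. A minimal sketch, assuming librosa for I/O and that the two recordings are already time-aligned:

```python
import librosa

def make_target(enhanced_path, original_path, target_sr):
    """Blend enhanced and original recordings into the training target
    y = 0.9 * y_e + 0.1 * y_o, after matching sampling rates."""
    y_e, _ = librosa.load(enhanced_path, sr=target_sr)  # downsampled from 48 kHz
    y_o, _ = librosa.load(original_path, sr=target_sr)
    n = min(len(y_e), len(y_o))  # guard against small length drift
    return 0.9 * y_e[:n] + 0.1 * y_o[:n]
```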
§ COMPUTATIONAL EFFICIENCY
Given the importance of computational efficiency for real-time, on-device speech synthesizers, we compare the number of parameters and inference times of our model and the baselines, summarized in Table <ref>. GPU trials use one RTX A5000 GPU, and CPU trials use none. Like Wu et al. <cit.>, we report inference time as the mean and standard deviation of five trials, each calculating the average time to synthesize an utterance in our test set. Our model is faster and uses fewer parameters than both intermediate-representation baselines, reinforcing the idea that directly mapping articulatory features to speech is more efficient than relying on an intermediate representation.
§ SYNTHESIS QUALITY
§.§ Subjective Fidelity Evaluation
We perform a subjective AB preference test on Amazon Mechanical Turk (MTurk). Each participant is asked to choose which of two utterances, one generated by our method and one by a baseline, sounds more natural. We compare our model against two baselines: (1) Yu et al. <cit.>, detailed in Section <ref>, and (2) the same model trained with our denoised waveforms described in Section <ref> as targets. For each of the two AB tests, we asked 6 native English listeners to judge a total of 9 random samples from the test set. To catch listeners who submit random answers, we added a control pair comprising one sample of pure noise and one sample of high-fidelity speech. Table <ref> summarizes the total number of votes for each of the three options in both AB tests. Our model outperforms both baselines, receiving the most votes. The nearly unanimous vote for our model in the AB test against the non-denoised baseline highlights the importance of denoising waveforms accompanying MRI data. Our model also noticeably outperforms the denoised baseline, suggesting that the direct synthesis approach described in Section <ref> is well-suited for articulatory synthesis.
§.§ Objective Fidelity Evaluation
We perform an objective evaluation of synthesis quality by analyzing the mel-cepstral distortions (MCD) <cit.> between ground truths and synthesized samples, as in Wu et al. <cit.>. Table <ref> summarizes these results, reporting the mean and standard deviation of the MCDs across utterances. Our HiFi-CAR approach outperforms both intermediate-representation baselines, suggesting that our direct modelling method is suitable for the MRI-to-speech task.
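As a reference point, MCD is commonly computed as (10/ln 10)·√2 times the mean Euclidean distance between mel-cepstral frames. The sketch below uses MFCCs as a stand-in for mel-cepstra and simple truncation alignment; the authors' exact variant (cepstral extraction, alignment) may differ.

```python
import numpy as np
import librosa

def mel_cepstral_distortion(ref_wav, syn_wav, sr, n_mfcc=13):
    """Simplified MCD between two waveforms, comparing MFCC sequences
    frame by frame with coefficient 0 (energy) excluded."""
    ref = librosa.feature.mfcc(y=ref_wav, sr=sr, n_mfcc=n_mfcc)[1:]
    syn = librosa.feature.mfcc(y=syn_wav, sr=sr, n_mfcc=n_mfcc)[1:]
    n = min(ref.shape[1], syn.shape[1])        # align frame counts by truncation
    diff = ref[:, :n] - syn[:, :n]
    dist = np.sqrt((diff ** 2).sum(axis=0))    # per-frame Euclidean distance
    return (10.0 / np.log(10.0)) * np.sqrt(2.0) * dist.mean()
```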
§.§ Transcription
We also compare our method with the baselines in terms of speech intelligibility. Specifically, we compute the character error rate (CER) of speech transcriptions. We use Whisper <cit.>, a state-of-the-art automatic speech recognition (ASR) model, to generate text from the speech synthesized by each method for all test set utterances. Table <ref> summarizes these results. As in Table <ref>, our model outperforms both baselines, reinforcing the suitability of our model for MRI-to-speech synthesis.
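A hedged sketch of this evaluation loop, assuming the openai-whisper and jiwer packages; the model size and text normalization are our assumptions:

```python
import whisper  # openai-whisper
import jiwer

def transcription_cer(wav_path, reference_text, model_name="base"):
    """Character error rate of a Whisper transcript against the reference."""
    model = whisper.load_model(model_name)
    hypothesis = model.transcribe(wav_path)["text"]
    # Lowercase/strip normalization is our choice, not stated in the text.
    return jiwer.cer(reference_text.lower().strip(), hypothesis.lower().strip())
```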
§ COMPARING MRI AND EMA FEATURES
As mentioned in Section <ref>, MRI provides much more information about the vocal tract than EMA; in this sense, the MRI features are a superset of the EMA features. Specifically, EMA has one x-y coordinate for each of the following locations: upper lip, lower lip, lower incisor, tongue tip, tongue body, and tongue dorsum. Points at all of these locations are present in the MRI data, so we can approximate EMA features from MRI by choosing one MRI point at each EMA location, as visualized in Figure <ref>. In this figure, segments are the connected MRI points and the shaped dots are the estimated EMA locations. We compare these two articulatory feature sets by comparing the outputs of our proposed MRI-to-speech model with those of the same model trained to synthesize speech from our estimated 12-dimensional EMA features. The test set predictions of this EMA-to-speech model yielded an MCD of 6.986 ± 0.587 and an ASR CER of 73.2%± 6.7%. Both of these values are worse than those of our MRI-to-speech model, summarized in Tables <ref> and <ref>. This suggests that MRI features are more complete representations of the human vocal tract than EMA features. Thus, articulatory synthesis models should incorporate features beyond EMA in order to achieve human-like fidelity across all utterances, with MRI features being a potential feature set to extend towards. We identify which of the MRI features would be the most valuable to add to the articulatory feature set in Section <ref>.
§ IDENTIFYING IMPORTANT MRI FEATURES
We also study which of the MRI features are the most useful for synthesis in order to provide insight into which features should be present in an ideal articulatory feature set for articulatory synthesis. Specifically, we created 50 subsets of our 230-dimensional MRI feature set, each composed of a random 90% subset of the 230 features. With each feature subset, we masked the 23 MRI features not in the subset to 0.0 and synthesized the test set utterances. Then, we computed the average MCD between the test set ground truths and the synthesized waveforms. For each MRI feature, we assign it a score equal to the average of the MCD values corresponding to experiments where that feature was unmasked. Since our subsets are chosen randomly, we try each feature an equal number of times in expectation. We rank the MRI features by score, with lower rank values being better and corresponding to a lower score and average MCD. The ranked features are visualized in Figure <ref>, with darker green points corresponding to better MRI features. We note that each of the six EMA locations described in Section <ref> has an MRI point that is ranked as important. Moreover, at these six locations, aside from the upper lip, the number of points ranked as important is fairly sparse. This suggests that the six points chosen in the EMA feature set are all very valuable for articulatory synthesis. The important MRI features also correspond well to the phonetic constriction task variables, e.g., those used in <cit.> to model articulatory synergies from real-time MRI images. Beyond the corresponding EMA locations, points around and in the pharyngeal region (e.g., between the tongue root or epiglottis and the rear pharyngeal wall) and the velic region are also ranked as important. This suggests that these features are also essential for fully-specified, high-fidelity articulatory synthesis. The pharyngeal features are relevant to the production of various speech sounds <cit.>, like /a/ <cit.> and some variants of /r/ in English <cit.>. The velic features are crucial to the production of nasal sounds. Neither of these features is available from EMA. Thus, moving forward, we plan to incorporate pharyngeal and velic features in all of our articulatory synthesis models. Points around constriction locations, whether at the lips, tongue, or throat, are generally ranked as important. Thus, when designing sparse articulatory feature sets, it may be useful to prioritize these constriction locations.
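A minimal sketch of the random-subset scoring procedure; `synthesize` and `mcd` are hypothetical callables standing in for the trained model and the MCD metric above.

```python
import numpy as np

def rank_features(mri_test, synthesize, mcd, num_subsets=50, keep_frac=0.9, seed=0):
    """Score each MRI feature by the average MCD of the random-subset
    experiments in which it was left unmasked; lower score is better."""
    rng = np.random.default_rng(seed)
    dim = mri_test.shape[-1]                  # 230 features
    keep = int(round(keep_frac * dim))        # 207 kept, 23 masked per subset
    scores, counts = np.zeros(dim), np.zeros(dim)
    for _ in range(num_subsets):
        subset = rng.choice(dim, size=keep, replace=False)
        masked = np.zeros_like(mri_test)
        masked[..., subset] = mri_test[..., subset]   # mask the rest to 0.0
        avg_mcd = mcd(synthesize(masked))             # average test-set MCD
        scores[subset] += avg_mcd
        counts[subset] += 1
    return np.argsort(scores / np.maximum(counts, 1))  # best-ranked first
```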
§ CONCLUSION AND FUTURE DIRECTIONS
In this work, we devise a new articulatory synthesis method using MRI-based features, providing preprocessing and modelling strategies for working with such data. Based on MCD, ASR, human evaluation, timing, and memory measurements, our model achieves noticeably better fidelity and computational efficiency than the prior intermediate-representation approach. Through speech synthesis ablations, we also show the advantages of MRI over EMA and identify the most important MRI features for articulatory synthesis. Moving forward, we will extend our work to multi-speaker synthesis and inversion tasks <cit.>.
|
http://arxiv.org/abs/2307.03207v1
|
20230706052155
|
H$α$ Kinematics of Superbubbles and Supernova Remnants of the Dwarf galaxy NGC 4214
|
[
"M. Sánchez-Cruces",
"M. Rosado"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
We analysed the ionised gas kinematics of the dwarf galaxy NGC 4214 using high resolution Fabry-Perot interferometry observations and present a set of narrowband images in the Hα, [Sii] λ6717 Å, [Nii] λ6584 Å and [Oiii] λ5007 Å emission lines.
The high-resolution Fabry-Perot observations of the Hα emission line allowed us to derive the velocity field, the velocity dispersion σ, and the rotation curve of the galaxy. We also present, for the first time, three-dimensional kinematic maps of the complexes NGC 4214-I and NGC 4214-II, and analyse the kinematics of the ionised gas of two new superbubbles, as well as of the supernova remnants previously detected in this galaxy by other authors in radio, optical and X-ray emission. We computed the expansion velocities of the superbubbles and supernova remnants by fitting their velocity profiles and obtained their respective physical parameters.
We found that the superbubbles have an expansion velocity of ∼50 km s^-1, dynamical age about ∼2 Myr and wind luminosity L_W of ∼9×10^38 erg s^-1 produced probably by massive stars in OB associations.
For supernova remnants, their expansion velocities are between ∼48 to ∼80 km s^-1 with ages of about 10^4 years and kinetic energy of about 10^51 erg assuming they are in the radiative phase of evolution.
galaxies: dwarf – galaxies: kinematics and dynamics – ISM: supernova remnants
§ INTRODUCTION
NGC 4214 is a nearby <cit.> irregular dwarf starburst galaxy also known as UGC 7278 and also classified as Magellanic type (NASA/IPAC Extragalactic Database (NED)).
NGC 4214 has two main regions of star formation located in the central part of the galaxy; the northwest (NW) complex, also called NGC 4214-I, is a large complex of H II regions displaying a shell morphology and the southeast (SE) complex known as NGC 4214-II, formed mainly of compact knots <cit.>. Also, <cit.> identified 11 more regions corresponding to nebular knots or stellar clusters with weak or intense nebular emission (named NGC 4214-III to NGC 4214-XIII). NGC 4214-I has several star clusters, the most important is I-As, a massive young <cit.> superstar cluster (SSC). NGC 4214-II contains several Scaled OB Association (SOBAs)[Scaled OB Association (SOBAs) are quite asymmetric extended objects with no well-defined center with sizes larger than 10 pc <cit.>.]. Also, these complexes are known to be rich in Wolf-Rayet stars <cit.> making it a Wolf-Rayet Galaxy. General parameters of the galaxy are shown in Table <ref>.
The study of the ionised gas of this galaxy can help us understand the interaction of its stellar clusters and massive stars with the interstellar medium (ISM), in particular the formation mechanisms of objects at different scales such as bubbles, superbubbles and supershells.
Bubbles are formed by the interaction of strong stellar winds of massive stars with the surrounding interstellar medium. In optical emission, these bubbles can be detected as ring-shaped nebulae <cit.> with diameters D ≲ 10 pc.
Superbubbles (SBs) or giant shells are objects blown by the combination of fast stellar winds and supernova explosions from groups of massive stars <cit.> as ring-shaped structures with D ≲ 400 pc.
Supershells or supergiant shells are the result of the evolution of super star clusters (SSCs) and their interaction with the ISM of a galaxy with D ≈ 1 kpc <cit.>.
The kinematics of the ionised gas provides important clues to understand the hydrodynamic nature of the interrelationship between the ISM and those types of objects, and the physical processes that affect the surrounding ISM.
In their kinematical study of the ionised gas in the nuclear region of this galaxy, <cit.> found that the superbubbles associated with the stellar clusters in the central region are not consistent with the evolutionary state of those clusters. They found that the most massive cluster (cluster A) presents a partial ring feature, while cluster B has one shell that seems to have experienced blowout, plus a complete second shell. <cit.>, using WIYN Integral Field Unit (IFU) spectroscopy, found evidence of high-velocity outflows (50-100 km s^-1) associated with diffuse ionised gas and found a correlation between high-velocity gas and HI holes in this galaxy.
Supernova remnant (SNR) multiwavelength surveys (X-ray, radio and optical) have been carried out for NGC 4214, providing information about the physical properties of these objects. In particular, it has been found that some SNRs present emission in the X-ray, radio and optical wavelengths <cit.>. A brief summary of the population of SNRs in each wavelength band is given below.
Using radio observations from the Very Large Array (VLA), <cit.> classified one radio source as an SNR; four years later, using radio continuum (VLA data at 20, 6, and 3.6 cm) and optical data, <cit.> found six more radio SNR candidates and three objects denoted as SNR/Hii.
Then, <cit.>, based on the X-ray colors or spectra of the SNRs from Chandra archival data, suggested 11 X-ray sources as SNRs or SNR candidates.
Finally, <cit.> found 18 confirmed optical SNRs based on the spectroscopy emission-line flux criterion of [Sii]/Hα >0.4.
Considering all previously published SNR samples in radio, X-ray, and optical emission, and that some of them present emission in two or three wavelength bands, this galaxy harbours a total of 35 SNRs, for which until now there has been no kinematic information (see Section <ref>). In this work we studied the global kinematics of the galaxy and the kinematics at local scales, such as around the SBs and SNRs, in the dwarf galaxy NGC 4214 by using scanning Fabry-Perot (FP) observations.
Also, in this study, we present, for the first time, a three-dimensional kinematic map of the complexes NGC 4214-I and NGC 4214-II.
The structure of the paper is as follows. Section 2 presents the scanning FP observations and data reductions. Section 3 presents the general morphology of the galaxy. In Section 4 we present the kinematic information derived from the FP observations, and in Section 5 we present the conclusions.
§ OBSERVATIONS AND DATA REDUCTION
§.§ Direct imaging
The observations were carried out on November 27, 1998 using the 2.1 m telescope of the Observatorio Astronómico Nacional of the Universidad Nacional Autónoma de México (OAN, UNAM) at San Pedro Mártir, B. C., México. The observations were made in the Hα line, using the UNAM scanning FP interferometer, PUMA <cit.>.
The image size is 512 × 512 px produced by binning a 1024×1024 Tektronix2 CCD detector by a factor of two. The field of view (FoV) of PUMA is 10 arcmin and has a plate scale of 1.16 arcsec pixel^-1 (15.18 pc pixel^-1 at a distance of 2.7 Mpc).
The direct images were obtained with the FP etalon out of the telescope's line-of-sight (PUMA direct-imaging mode) in the Hα, [Sii] λ6717 Å, [Nii] λ6584 Å and [Oiii] λ5007 Å emission lines redshifted to NGC 4214. The exposure time was 60 s for the Hα image and 120 s for the other emission lines. We also obtained direct images of the spectrophotometric standard star Feige92 <cit.> with an exposure time of 60 s in each filter. Table <ref> shows the journal of the PUMA observations.
§.§ High resolution Fabry-Perot spectroscopy
The PUMA scanning FP interferometer mode has a finesse of ∼24, which leads to a sampling spectral resolution of 0.41 Å (equivalent to a velocity resolution of 19.0 km s^-1 at Hα) and a free spectral range of 19.8 Å (equivalent to a velocity range of 908 km s^-1 at Hα).
Then, with the FP interferometer PUMA, we scanned the Hα line through 48 channels getting a data cube of dimensions 512 × 512 × 48. The integration time was 60 s per channel. For the calibration of the data cube, we used a Hydrogen lamp (6562.78 Å wavelength calibration); this calibration data cube has the same dimensions as the object data cube. Observational and instrumental parameters are listed in Table <ref>.
§.§ Data reduction
The reduction and analysis of FP data were performed
using IRAF[`Image Reduction and Analysis Facility' <http://iraf.noao.edu/>. IRAF is distributed by National Optical Astronomy Observatory, operated by the Association of Universities for Research in Astronomic, Inc., under cooperative agreement with the National Science Foundation.] and the ADHOCw[<http://cesam.lam.fr/fabryperot/index/softwares> developed by J. Boulesteix.] software. We made the standard corrections: i.e., removal of cosmic rays and bias subtraction. We computed the wavelength-calibrated data cube and used it to get the velocity[All quoted velocities are in the heliocentric reference frame.] and velocity dispersion maps.
ADHOCw uses the calibration data cube to produce the phase map; this map provides for each pixel, the reference wavelength in the observed line profile. The wavelength-calibrated data cube (velocity or wavelength data cube), is produced when the phase map is applied to the observed data cube.
In order to improve the signal-to-noise (S/N) ratio, we applied to the wavelength data cube a spectral Gaussian smoothing using a FWHM of 3 channels (σ=57 km s^-1) and a spatial Gaussian smoothing using a FWHM of (3,3) pixels, using the ADHOCw software.
The “monochromatic” and “continuum” images were obtained with ADHOCw from the FP data cube by analysing the radial velocity profile pixel by pixel over the FoV. The continuum image is derived as the mean of the three lowest-intensity channels of each pixel. For the monochromatic image, ADHOCw integrates the line profile in each pixel from its maximum intensity (I_max) down to 0.3×I_max, i.e., down to 30% of the peak intensity value.
Finally, for each pixel ADHOCw computes the Hα profile barycenter to obtain the velocity and velocity dispersion (σ_obs) maps.
We performed thermal and instrumental corrections to the velocity dispersion according to σ_corr = √(σ_obs^2 - σ_inst^2 - σ_th^2), where σ_inst = 14.85 km s^-1 is the instrumental broadening and σ_th = √(82.5 (T_4/A)) is the thermal broadening, with T_4 = T/10^4 K and A the atomic weight of the atom; taking T = 10^4 K for hydrogen gas <cit.>, we have σ_th = 9.1 km s^-1.
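For reference, the correction can be implemented in a couple of lines; a minimal Python sketch (all quantities in km s^-1, with the constant and function names being our own):

```python
import numpy as np

SIGMA_INST = 14.85  # instrumental broadening, km/s

def sigma_corrected(sigma_obs, temperature=1.0e4, atomic_weight=1.0):
    """Remove instrumental and thermal broadening from an observed
    velocity dispersion; for hydrogen at 1e4 K, sigma_th ~ 9.1 km/s."""
    sigma_th = np.sqrt(82.5 * (temperature / 1.0e4) / atomic_weight)
    return np.sqrt(sigma_obs**2 - SIGMA_INST**2 - sigma_th**2)
```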
The velocity accuracy for our data is about ±2 km s^-1 obtained from the calibration velocity map. This map was computed with the barycenter of the emission line in each pixel of the calibration data cube.
§.§ Photometric Calibration
For the flux calibration of our direct images, we used the images of the standard star Feige92 taken in each filter with PUMA in its direct image mode. All direct images of the galaxy and standard star
were reduced as described above.
The conversion factor for each emission line is 1 count s^-1 pixel^-1 = (2.92(Hα), 3.27([Sii]), 5.90([Nii]), 4.47([Oiii])) × 10^-4 erg cm^-2 s^-1 sr^-1, which in units of flux is (1.16(Hα), 1.30([Sii]), 2.34([Nii]), 1.77([Oiii])) × 10^-14 ergs cm^-2 s^-1 sr^-1.
§ GENERAL MORPHOLOGY
The morphology of the ionised gas emission in the central parts of NGC 4214 has been described by <cit.> and <cit.>. Here we review the main features given by <cit.> who described in detail the morphology of the central part of the galaxy and we also describe the global morphology of the ionised gas.
The Hα ionised gas distribution of the central part of the galaxy (see inset panel of Fig. <ref>) shows the two large H II complexes known as NGC 4214-I and NGC 4214-II <cit.>. There are also a number of isolated fainter knots scattered in the central part of the galaxy and throughout the galaxy field. In Fig. <ref> it is possible to see some nebulae, filaments and SBs distributed along the galaxy. In addition, the diffuse ionised gas (DIG) surrounds the two main complexes and the isolated knots.
From the direct images we morphologically identified two SBs (named SB-1 and SB-2) located in the galaxy disk, pointed out with arrows in all panels of Fig. <ref> and bounded by the contours overplotted on the Hα image. SB-1 (RA (J2000) = 12:15:33.1, Dec (J2000) = +36:22:06.6) is located at the SE edge of the galaxy, and SB-2 (RA (J2000) = 12:15:42.9, Dec (J2000) = +36:18:23.0) is located 44.5 arcsec (582 pc) from the NGC 4214-II Hii complex. The size of the SBs was measured using the PUMA Hα direct image (see Fig. <ref>), the distance to the galaxy (2.7 Mpc) and the image scale (1.16). Figure <ref> shows the Hα SBs, indicated by the gray dashed ellipses; the semi-major and minor axes of the SBs are 36×29 (546.6 pc × 440.3 pc) for SB-1 and 34×27 (516.2 pc × 409.9 pc) for SB-2.
The global morphology of the galaxy in the [Oiii] line is roughly similar to Hα; i.e., it shows similarities in the central area and small differences in the galaxy outskirts. The knots situated in the outskirts of the central part are detected in both images. The [Sii] image shows a morphology similar to the Hα and [Oiii] images, while the [Nii] direct image has the poorest S/N ratio, showing only the central part of the galaxy.
We have also produced maps of the [Sii]λ6717/Hα and [Nii]λ6584/Hα line ratios (see Fig <ref>); the lines in each ratio are close enough in wavelength that the ratios are essentially unaffected by dust attenuation. Those maps allow us to study the ionisation structure of the galaxy.
Low values of [Sii]λ6717/Hα and [Nii]λ6584/Hα are located in the central part of the galaxy, western region and in SBs, corresponding to the high excitation zone. The
higher ratio values ([Sii]/Hα∼ 0.65) are located at the outskirts of the galaxy (where the DIG is located) depicting the low excitation regions.
When we compared our excitation ratio maps of the two large H II complexes (see the inset panels of Fig. <ref>) with those obtained by <cit.>, we found an excellent agreement with values of 0.1<[Sii]/Hα<0.25 and 0.07<[Nii]/Hα<0.12.
The values of [Sii]/Hα for the SBs are between 0.1 and 0.3 while for [Nii]/Hα are between 0.01 and 0.12. Both line ratio values are located mainly in the shell of the SBs.
Our [Sii]/Hα values are similar to those of SBs in the Large Magellanic Cloud <cit.> and in NGC 1569 <cit.>.
The kinematic analysis of the SBs will be discussed later in Section <ref>.
§ KINEMATICS OF THE IONISED GAS
§.§ Velocity field
Figure <ref> shows some velocity channels (obtained from the velocity data cube) of the Hα emission detected in NGC 4214; velocities range from 202 to 410 km s^-1. Also in this figure we show in each channel map a close-up of the central part of the galaxy with contours superimposed. Contour levels were automatically chosen between minimum and maximum intensity values in order to show the morphology of the region in the respective velocity channel. In general, the contours show the morphology on the H II complexes NGC 4214-I and NGC 4214-II. The receding velocities of NGC 4214-II are at the northwest and the approaching are at southeast. However for NGC 4214-I it is not easy to see a velocity pattern.
Figure <ref> shows the velocity field (VF) of NGC 4214. From this figure we can see that the galaxy does not follow circular rotation, no rotation axis can be found, and the gas motions are disordered. This is consistent with previous studies by <cit.> and <cit.>. The VF displays velocity values ranging from 250 to 350 km s^-1 (a spread of about 100 km s^-1); the extreme values are located mainly on the northwest (NW) side of the galaxy, while the southeast (SE) side presents almost constant velocities (about 300-330 km s^-1). The velocities of the two large H II complexes are about 300 km s^-1 for NGC 4214-I and 290-300 km s^-1 for NGC 4214-II. The VF highlights that NGC 4214-I is surrounded by the highest velocity values, about 320-350 km s^-1. At the NW of the galaxy, there is a strongly blueshifted region with velocities of about V = 270-290 km s^-1.
In general, the velocity dispersion map (right-hand panel of Fig. <ref>) shows values from 30-80 km s^-1, except for the central part, where the high velocity dispersion values (between 90 and 110 km s^-1) are distributed mainly along NGC 4214-I and are in excellent agreement with those determined in similar studies of starburst dwarfs <cit.>. In the center of the high-dispersion region, there is a small region with ∼75 km s^-1, confirming the existence of an expanding shell related to NGC 4214-I.
§.§ Rotation curve
Previous works by <cit.> and <cit.> using Hα Fabry-Perot observations showed that this galaxy does not present any evidence for rotation in its velocity map; hence the Hα rotation curve (RC) could not be computed. However, <cit.> computed the RC of NGC 4214 as part of a sample of 62 galaxies from the HI Survey of Spiral and Irregular Galaxies (WHISP) project. Later, <cit.>, using radio observations with the VLA, computed the RC, made the asymmetric-drift correction, and confirmed that this galaxy shows non-circular motions in its disk, likely associated with the HI spiral arms and/or the inner stellar bar. This bar is located at the place of the two large H II regions and coincides with the galaxy's minor axis of the HI map <cit.>.
<cit.> gave two sets of kinematical parameters (position angle, PA, and inclination, i) for NGC 4214 because its HI disk is close to face-on in the inner parts and strongly warped in the outer parts. They found PA_kin-HI ≃ 65° and i_kin-HI ≃ 30° for R ≲ 3, PA_kin-HI ≃ 84° and i_kin-HI gradually decreasing at larger radii, and that the optical and HI spiral arms wind in opposite directions, clockwise and counter-clockwise, respectively.
In this section, we derive an RC from the Hα VF of the galaxy following the methodology of <cit.>, <cit.> and <cit.>, taking into consideration the kinematical parameters given by <cit.> for R ≲ 3.
The RC was obtained with the ADHOCw software using the standard tilted-ring method. In this case, we considered a width of three pixels (45 pc). As we discussed in <cit.>, for this method to be valid the gas has to move in purely circular orbits, which, as mentioned before, is not the case for the innermost parts of NGC 4214; therefore, we derived its RC considering radii larger than 200 pc and up to 1.51 kpc (100 pix) within an angular sector of 20° along the galaxy major axis (in HI).
To compute the kinematic parameters with ADHOCw, we started with first-guess values for i, PA, the centre (x_0, y_0) and V_sys derived from the literature; those parameters were then iteratively modified to obtain a symmetric superposition of the approaching and receding sides, making sure that the RC had low dispersion values <cit.>. In this case we used as initial parameters those of <cit.>: the kinematical centre (RA_kin-HI (J2000) = 12:15:36.9, Dec_kin-HI (J2000) = +36:19:59.0), i = 30° and a systemic velocity V_sys = 290 km s^-1.
The derived RC is shown in Fig. <ref> and the resulting kinematical parameters are PA = 63°, V_sys = 293 km s^-1 and i = 30°; we used the same kinematical centre as <cit.>. From Fig. <ref> we can see that in the innermost region of the RC (radii smaller than 50 pc), the rotation velocity values of the approaching and receding sides are separated by ∼5 km s^-1, with the rotation velocity of the receding side reaching ∼-8 km s^-1 while that of the approaching side reaches ∼-13 km s^-1. Beyond this radius (R = 50 pc), the difference in velocities between the receding and approaching sides increases (differences of about 30 km s^-1) out to a radius of 0.3 kpc. Between 0.3 and 0.5 kpc, the RC is symmetric with the smallest dispersion; beyond this radius the RC is no longer symmetric, showing oscillations with rotation velocity differences between 20 and 30 km s^-1 and an ascending behaviour, with a maximum rotation velocity of about 82 km s^-1 at R = 1.23 kpc.
In Fig. <ref> we also compare the Hα RC derived from our PUMA observations with the HI RC obtained by <cit.>. We can see that the Hα RC extends up to 1.4 kpc while the HI RC extends beyond 5 kpc. There are only a few points of the HI RC within the extent of the Hα RC due to the difference in spatial resolution between the HI (30 arcsec) and Hα data (1.16 arcsec). Three HI points, at 0.6, 1.04 and 1.46 kpc, are compatible with the Hα data within twice the quoted rms uncertainty.
The differences between the RCs arise because NGC 4214 is very close to face-on and has a strong warp <cit.>. Also, the asymmetric behaviour of the Hα RC in the innermost parts of the galaxy (R < 0.2 kpc) is an indication that the velocity dispersion of the ionised gas in NGC 4214 is mainly determined by the energy injected into the ISM by ongoing star formation <cit.>.
§.§ Evidence for Expanding Superbubbles
The VFs of SB-1 and SB-2 are shown in the top right-hand panels of Fig. <ref> and <ref>, respectively. The velocities inside SB-1 are about 287 km s^-1, and on the edge of the shell there are three knots with velocities >300 km s^-1.
For SB-2, the highest velocities are coming from the central part with velocities >310 km s^-1 while velocities of the shell are about 295 km s^-1.
We extracted the velocity profiles of the SBs over square boxes encompassing the whole extent of each superbubble. As we can see in the bottom right-hand panels of Fig. <ref> and <ref>, the velocity profiles of both SBs present wings or humps, indicating the presence of composite profiles. The decomposition of the profiles was constrained by fitting the minimum number of Gaussian functions, using a combination of visual inspection and the χ^2/doF value of the fit; i.e., when a single-Gaussian fit did not visually match the observed velocity profile, we fitted it with two or three components (depending on the profile), checking the χ^2/doF obtained in each fit <cit.>, which indicates the accuracy of the fit.
A comparison between the velocity profile of a HII region fitted with a single Gaussian component and the complex velocity profiles of shock nebulae fitted with two or three Gaussian components was presented in <cit.>. Additionally, we show in Fig. <ref> the instrumental contribution overplotted on the velocity profile of SB-1. The instrumental profile is represented by the Airy function (a Lorentzian-like function). The velocity profile observed in each pixel is the convolution of the Airy function and the 'intrinsic' emission line. The instrumental function minimally affects the central value and FWHM of the velocity profile, which is calculated with the barycenter of the profile, and as we can see in Fig. <ref>, the shape of the instrumental function does not introduce high-velocity wings into the velocity profile.
The velocity profiles of both SBs were fitted with three Gaussian functions; we interpret the main component (in orange) as the 'systemic' velocity of the SB associated with its rotational motion around the galactic centre. The blueshifted and redshifted components are shown in magenta and cyan, respectively, and are related to the expansion velocity of the SBs. For SB-1 the main component is at 294 km s^-1 and for SB-2 at 302 km s^-1, while the secondary components are at 250 km s^-1 (blueshifted component) and 345 km s^-1 (redshifted component) for both SBs. Furthermore, when we convolve the Gaussian functions with the Airy function, we find no significant changes in the positions of the velocity components of the profile, indicating that the instrumental broadening has a negligible impact on the velocity profiles.
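For concreteness, the three-component decomposition can be reproduced with a standard least-squares fit; a minimal sketch assuming scipy, with initial guesses taken from the SB-1 components quoted above (`velocity_axis` and `profile` are hypothetical arrays holding the extracted profile):

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(v, a1, mu1, s1, a2, mu2, s2, a3, mu3, s3):
    """Sum of three Gaussian components in velocity space."""
    g = lambda a, mu, s: a * np.exp(-0.5 * ((v - mu) / s) ** 2)
    return g(a1, mu1, s1) + g(a2, mu2, s2) + g(a3, mu3, s3)

# Initial guesses: main component near the systemic velocity plus a
# blueshifted and a redshifted wing (values for SB-1, in km/s).
p0 = [1.0, 294.0, 30.0, 0.3, 250.0, 20.0, 0.3, 345.0, 20.0]
popt, pcov = curve_fit(three_gaussians, velocity_axis, profile, p0=p0)
v_exp = (popt[7] - popt[4]) / 2.0   # (V_max - V_min) / 2
```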
In order to support the results obtained from the fitted velocity profiles, we show in Figs <ref> and <ref> the position-velocity diagrams (PVDs) of each SB, centered on their respective geometric centres; these PVDs were extracted from the FP data cube, after subtracting the stellar continuum, along each SB. For SB-1 the pseudo-slit positions used to extract the PVD run from East to West (see Fig. <ref>), while for SB-2 the pseudo-slit positions run from Northwest to Southeast (see Fig. <ref>). For both PVDs, the black slit corresponds to the one through the geometric centre, and the other four pseudo-slits are distributed parallel and equidistant with a separation of 2.6 arcsec between them. In both PVDs (left panels of Figures <ref> and <ref>) it is possible to see half of the so-called Doppler ellipse of the SBs <cit.>, corresponding to the approaching side of the shell, and the velocity variation with respect to the SB center associated with the approaching and receding sides of the SBs. On the other hand, the S/N ratio of the secondary Gaussian components of the SB fits (see Fig. <ref> and <ref>) is marginal, about 2, indicating that these components are faint but are observed both in the velocity profiles and in the PVDs.
For SB-1, the evidence of the Doppler ellipse is clearer at the -2.3 and -4.6 arcsec PVD positions, where half of the so-called Doppler ellipse corresponds to the approaching side of the shell. The PVD at +2.3 arcsec exhibits a large velocity variation (from -100 to +100 km s^-1) at positions between 3 and 6 arcsec; the PVD at +4.6 arcsec presents two large velocity variations, from -100 to 90 km s^-1 (at 5 arcsec) and from -105 to 80 km s^-1 (at 10 arcsec). For SB-2, the PVD at the centre position (black contours) shows half of the Doppler ellipse corresponding to the approaching side of the shell; a second half Doppler ellipse is seen between -30 and -10 arcsec, which is not part of SB-2. The PVD at -5.2 arcsec shows only weak signs of expansion. The PVD at -2.6 arcsec presents an arc structure between -10 and 10 arcsec related to half of the velocity ellipse. At the +2.6 and +5.5 arcsec positions the arc structure is barely seen.
§.§.§ Expansion Velocity
The expansion velocities of the SBs were obtained considering the expansive motion of a shell, V_exp = (V_max - V_min)/2 <cit.>, where V_max and V_min are the redshifted and blueshifted velocity components, respectively. In this case both SBs have the same expansion velocity, V_exp = 47.5±2 km s^-1, a typical expansion velocity for SBs <cit.>.
§.§.§ Physical Parameters
The physical parameters of the SBs (rms electron density (n_e), mechanical energy (E_mec) of the stellar winds, and kinematic age (t)) were computed following the works of <cit.> and <cit.>. The n_e was obtained with the relation:
n^2_e (rms)(cm^-6 ) = 2.74 × 10^18 F(Hα)θ^-2 R^-1
where F(Hα) is the Hα flux distributed in a spherical shell of radius R (in parsecs) and θ is the semiaxis size of the SB in arcsecond units.
The mechanical energy of the superbubbles can be derived from their expansion velocity (E_mec = 1/2 M V_exp^2), assuming that the mass of the SB shell is concentrated in a spherical shell of radius R (in parsecs) and thickness ΔR = R/12, and considering μ = 0.65 for an ionised hydrogen gas <cit.>. Thus, the mechanical energy in the ionised superbubble shell is E_mec (erg) = 1/2 M V_exp^2 = 1.7 × 10^41 n_e(cm^-3) V_exp^2 R^3.
For the kinematic age (t), we considered the energy-conserving model <cit.>:
t(yr)= 4.8 × 10^5 R/V_exp
where R is the radius and V_exp is the expansion velocity of the SB.
We found that the SB-1 has a rms electron density n_e, SB-1 = 2.33±1.30 cm^-3, a mechanical energy E_mec,SB-1 = 9.55±5.40 ×10^51 erg and a kinematic age t_SB-1 = 2.22±0.09 Myr. For SB-2 we found a n_e,SB-2 = 2.63±0.88 cm^-3, an E_mec,SB-2 = 8.68±2.99 ×10^51 erg and a kinematic age t_SB-2 = 2.07±0.09 Myr.
As we can see, both SBs are relatively young, with ages of about ∼2 Myr. These values are consistent with the ages estimated in IC 1613 <cit.>, NGC 1569 <cit.> and DDO 53 <cit.>. The energy values we obtained for the SBs are similar to those of the SBs in IC 1613, NGC 1569 and DDO 53, with values between ∼0.1 and ∼10^51 erg <cit.>.
One of the most plausible scenarios to explain the formation mechanism of a SB is the constant energy injection from multiple stars plus supernovae <cit.>. Because superbubbles are physically very similar to wind-blown interstellar bubbles around individual massive stars, we can consider the standard model of wind-blown bubbles <cit.> to describe the SBs in NGC 4214. The standard model considers an OB-type star at rest that releases a constant mechanical wind luminosity L_W ≈ 10^36 erg s^-1, with a mass-loss rate Ṁ_W ≈ 10^-6 M_⊙ yr^-1 and wind terminal velocity V_W ≈ 2000 km s^-1.
To compute the mechanical wind luminosity, L_W of the SBs, we can consider the standard model of wind-blown bubbles <cit.> given by:
L_W (36) = 3.2 × 10^-7 n_0 V_exp^3 R^2
where n_0 is the ambient pre-shock density, given by n_0 = n_e/4 in units of cm^-3 (considering that the swept-up mass in the shell comes from a homogeneous sphere of radius R, in pc), V_exp is the expansion velocity in km s^-1, and L_W(36) is in units of 10^36 erg s^-1. We found L_W,SB-1 = 9.70±5.4×10^38 erg s^-1 for SB-1 and L_W,SB-2 = 9.47±3.1×10^38 erg s^-1 for SB-2. Table <ref> shows the derived parameters of the SBs.
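The chain from Hα flux to wind luminosity is short enough to collect in one helper; a minimal sketch of the relations above, with our own assumed function signature and flux units:

```python
import numpy as np

def sb_parameters(flux_halpha, theta_arcsec, radius_pc, v_exp):
    """Superbubble physical parameters from the relations above.
    flux_halpha: Halpha flux (assumed erg cm^-2 s^-1); radius in pc;
    v_exp in km/s; theta is the semi-axis size in arcsec."""
    n_e = np.sqrt(2.74e18 * flux_halpha / (theta_arcsec**2 * radius_pc))  # rms density, cm^-3
    e_mec = 1.7e41 * n_e * v_exp**2 * radius_pc**3          # shell kinetic energy, erg
    age_yr = 4.8e5 * radius_pc / v_exp                      # kinematic age, yr
    n_0 = n_e / 4.0                                         # ambient pre-shock density
    l_wind = 3.2e-7 * n_0 * v_exp**3 * radius_pc**2 * 1e36  # wind luminosity, erg/s
    return n_e, e_mec, age_yr, l_wind
```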
The L_W values we obtained for both SBs are in agreement with the L_W value obtained for the N206 complex in the Large Magellanic Cloud by <cit.>; they found that the total mechanical luminosity of N206 is about L_W = 1.7 ×10^38 erg s^-1, generated by the stellar winds of two Wolf-Rayet (WR) stars plus 164 stars in the interior OB association, in which the two WR winds alone release about as much wind luminosity (L_W,W-R = 7.54 ×10^37 erg s^-1) as the whole OB association (L_W,OB = 8.82 ×10^37 erg s^-1).
Since the N206 complex has a similar size (465 pc) to the SBs in this work and an expansion velocity of ∼20-30 km s^-1 <cit.>, we can consider that SB-1 and SB-2 of NGC 4214 could host a similar number of OB stars to N206.
Given that the dynamical ages of the SBs (2-3 Myr) are shorter than the main-sequence lifetime (∼4-8 Myr) of 20-60 M_⊙ stars, the most probable scenario to explain the formation of the SBs is that massive stars in OB associations alone are responsible for forming them; i.e., probably no star has had time to evolve into a supernova. This is supported by the fact that if supernovae had taken place, there would be signs of shock excitation, which is not the case, since the [Sii]/Hα values for the SBs are between 0.1 and 0.3. Also, so far there is no evidence of X-ray emission (with T > 10^6 K) or non-thermal radio sources (with spectral indices between -0.2 and -0.7) around the positions of the SBs that would indicate the presence of shocks.
§.§ Kinematics and Physical Parameters of SNRs
As we mentioned in Section <ref>, NGC 4214 harbours 35 SNRs detected in radio, optical and X-ray emission; some of them present emission in two or three wavelength bands. Of the 35 SNRs, only 5 (SNR-2, SNR-3, SNR-5, SNR-23 and SNR-33) present emission in all three wavelength bands, 7 present emission in radio and optical (SNR-1, SNR-4, SNR-21, SNR-31, SNR-32, SNR-34 and SNR-35), and 5 (SNR-24, SNR-26, SNR-28, SNR-29 and SNR-30) present emission in optical and X-ray (see Table <ref>).
In Fig. <ref> we show the Hα monochromatic image of NGC 4214 obtained from our FP cube, with the positions of the 35 SNRs superimposed: pink crosses mark the SNRs with optical emission, green circles the SNRs with X-ray emission, and orange boxes the radio SNRs. From this figure we can see that all the SNRs are located in the bar region (see Section <ref>). We highlight that seven SNRs (SNR-3, SNR-4, SNR-5, SNR-27, SNR-32, SNR-34 and SNR-35) are located inside the H II complex NGC 4214-I.
With our FP observations we were able to study the kinematics of the SNRs in NGC 4214 thanks to the spatial and spectral resolution of the FP data. As we mentioned in the Introduction, the SNR sample used in this work was assembled from the SNR detections at different wavelengths by previous works. Only the SNR sample of <cit.> has size information; they consider that all the SNRs of their sample have a radius of about 1.4 arcsec (18.3 pc). Therefore, in this work we considered that all the SNRs of our sample have a radius of 18.3 pc. We then extracted the Hα velocity profiles over windows of 3 pix × 3 pix (45.5 pc × 45.5 pc) and found that only 20 of the 35 SNRs have an adequate S/N ratio.
§.§.§ Expansion velocity
The expansion velocities of the SNRs were obtained by fitting the velocity profiles of the SNRs with adequate S/N with the minimum number of Gaussian functions. All profiles present wings or humps, as with the SBs, indicating the presence of composite profiles. Almost all SNR profiles were fitted with three components, except SNR-1 and SNR-23, which present multiple components and two components, respectively. For the velocity profiles fitted with three components, the main component (in orange) is associated with the rotation velocity of the gas around the centre of the galaxy. As an example, we show the fitted velocity profiles of six SNRs (SNR-5, SNR-12, SNR-15, SNR-16, SNR-17 and SNR-19) in Fig. <ref>. The velocity components in green and blue are the blueshifted and redshifted components, respectively, and are related to the expansion velocity of the SNR. The expansion velocities were then obtained considering the expansive motion of a shell and assuming that the optical emission is not located at the centre of the SNR; therefore the expansion velocity is V_exp = (V_max - V_min)/2, where V_max and V_min are the redshifted and blueshifted components, respectively.
For SNR-1 and SNR-23, the expansion velocity was computed from their blueshifted and redshifted components. The expansion velocities of the SNRs with adequate S/N ratio are in general V_exp > 100 km s^-1, except SNR-17 with V_exp = 95 km s^-1. The list of SNRs and the velocity values for each Gaussian component of the fitted velocity profiles, as well as their expansion velocities, are shown in Table <ref>.
§.§.§ Electron density
We determined the rms electron density (n_e) with eq. <ref>, considering a linear radius of 18.3 pc for all SNRs. The rms electron density values range from 73 to 1000 cm^-3.
According to the SNR evolution diagram <cit.>, the SNRs are in the radiative phase <cit.>, given the expansion velocities we found (between 47 and 79 km s^-1) and their sizes (radii of about 18.3 pc). Therefore, to compute the pre-shock electron density n_0, we considered that the shock is radiative, with n_0 given by:
n_0 = n_e(c_s/V_s)^2
where c_s = 10 km s^-1 is the sound speed of the environment at T_e = 10^4 K and V_s = V_exp is the shock velocity in km s^-1.
The pre-shock densities of the SNRs range from 18 to 257 cm^-3. We computed the uncertainty on n_0 by propagating errors through equation <ref>.
§.§.§ The Energy and Age
We can determine the age of the SNRs and the initial energy (E_0) deposited in the ISM by the SN explosion considering they are in the radiative phase with the numerical model of <cit.>:
t(4) = 30.7 R/V_s
E_0(50) = 5.3×10^-7n_0^1.12V_s^1.4R^3.12
where V_s = V_exp is the shock velocity in km s^-1, R is the linear radius in pc, t(4) is the age of the remnant in units of 10^4 yr, and E_0(50) is in units of 10^50 erg. The linear radius for all SNRs was taken as 18.3 pc. The errors on t(4) and E_0(50) were computed by propagating errors through equations <ref> and <ref>, respectively. We were not able to obtain the age and initial energy of 15 SNRs due to their poor detection S/N ratio (no expansion velocity information).
The age of the SNRs found in this work, considering they are in the radiative phase, is about ∼10^4 yr and the initial energy obtained is in the order of 10^50 erg, typical values of SNRs (see Table <ref>).
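A minimal sketch collecting the radiative-phase relations above (function name and defaults are our assumptions):

```python
import numpy as np

def snr_parameters(n_e, v_s, radius_pc=18.3, c_s=10.0):
    """Radiative-phase SNR parameters from the relations above.
    n_e: rms electron density (cm^-3); v_s: shock velocity (km/s);
    c_s: ambient sound speed at T_e = 1e4 K (km/s)."""
    n_0 = n_e * (c_s / v_s) ** 2                            # pre-shock density, cm^-3
    age = 30.7 * radius_pc / v_s                            # t(4), units of 1e4 yr
    e_0 = 5.3e-7 * n_0**1.12 * v_s**1.4 * radius_pc**3.12   # E_0(50), units of 1e50 erg
    return n_0, age * 1e4, e_0 * 1e50                       # density, yr, erg
```

As a sanity check, v_s between 48 and 80 km s^-1 with R = 18.3 pc gives ages between ∼7×10^4 and ∼12×10^4 yr, consistent with the values quoted above.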
§ CONCLUSIONS
In this paper, we have presented Hα high-resolution Fabry-Perot observations of the galaxy NGC 4214. These data were used to study the global kinematics of the galaxy and, locally, the kinematics of two new SBs and of the SNRs in this galaxy. Our main results and conclusions can be briefly summarized as follows:
We obtained the rotation curve of NGC 4214 using the same kinematical parameters of <cit.> HI rotation curve. We found that in the innermost parts of the galaxy (R< 50 pc), the Hα RC has an asymmetric behaviour indicating non-circular velocities in this region and that the kinematics of the ionised gas is probably dominated by the stellar bar motions.
The Hα and HI RCs are in agreement despite the different resolutions of the data (30 arcsec for HI versus 1.16 arcsec for the FP data), the fact that this galaxy is very close to face-on, and the strong warp plus complex non-circular motions found in the HI RC of <cit.>.
We found that the highest velocity dispersions are also related to the H II complexes, with values of about ∼100 km s^-1.
FP data allowed us for the first time to show the three-dimensional kinematic maps of the complexes NGC 4214-I and NGC 4214-II: i.e., we show whole 2D fields of the complexes at different velocities.
The galaxy harbours two SBs detected in the Hα, [Sii] λ6717 Å, [Nii] λ6584 Å and [Oiii] λ5007 Å direct images, with sizes of 36×29 (546.6 pc × 440.3 pc) for SB-1 and 34×27 (516.2 pc × 409.9 pc) for SB-2. The expansion velocity of both SBs determined from their velocity profiles is 47.5 km s^-1.
The mechanical wind luminosities of the SBs are L_W,SB-1 = 9.70±5.4×10^38 erg s^-1 and L_W,SB-2 = 9.47±3.1×10^38 erg s^-1. The dynamical ages of these SBs are about ∼2.0 Myr considering the energy-conserving model, indicating that they are young SBs.
We found that the SBs are probably formed by massive stars or OB associations, with a stellar population similar to that of the N206 complex in the Large Magellanic Cloud, which hosts several WR stars plus more than 100 OB stars.
Further studies of the stellar content of the SBs will help to determine whether or not they host WR stars and OB associations, and to understand the nature of these objects.
With our high spectral resolution observations, we were able to determine the expansion velocities and ages of 20 of the 35 SNRs hosted in this galaxy. The SNRs have V_exp between ∼48 and ∼80 km s^-1, similar to the values obtained in 3C 400.2 (V_exp = 60 km s^-1) by <cit.> and G206.9+2.3 (V_exp = 86 km s^-1) by <cit.>. The ages of the SNRs in NGC 4214 are between 7.1 and 11.8 ×10^4 yr, confirming they are SNRs. We also calculated the initial energy E_0 deposited in the ISM by the SN explosion for 20 of the 35 SNRs, considering they are in the radiative phase. The initial energy ranges from 6.0 to 62.8 ×10^50 erg. Therefore, we confirmed that 20 of the 35 previously classified SNRs are indeed SNRs.
§ ACKNOWLEDGEMENTS
We would like to thank to the anonymous reviewer for the valuable comments and suggestions that helped in improving the article.
The authors acknowledge the financial support from Programa de Apoyo
a Proyectos de Investigación e Innovación Tecnológica (PAPIIT)
from the Universidad Nacional Autónoma de México (UNAM)
IN109919, Consejo Nacional de Ciencia y Tecnologia (CONACYT)
CY-253085 and CF-86367 grants. Based upon observations carried out
at the Observatorio Astronómico Nacional on the Sierra San Pedro
Mártir (OAN-SPM), Baja California, México. We thank the daytime
and night support staff at the OAN-SPM for facilitating and helping
obtain our observations. Facilities: OAN-SPM, México.
§ DATA AVAILABILITY
The data underlying this article will be shared on reasonable request
to the corresponding author.
|
http://arxiv.org/abs/2307.00781v1
|
20230703064904
|
ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution
|
[
"Axi Niu",
"Pham Xuan Trung",
"Kang Zhang",
"Jinqiu Sun",
"Yu Zhu",
"In So Kweon",
"Yanning Zhang"
] |
cs.CV
|
[
"cs.CV",
"eess.IV"
] |
ACDMSR: Accelerated Conditional Diffusion Models for Single Image Super-Resolution
Axi Niu, Pham Xuan Trung, Kang Zhang, Jinqiu Sun*, Yu Zhu, In So Kweon, Member, IEEE,
and Yanning Zhang, Senior Member, IEEE
This work is funded in part by the Project of the National Natural Science Foundation of China under Grant 61871328, the Natural Science Basic Research Program of Shaanxi under Grant 2021JCW-03, as well as the Joint Funds of the National Natural Science Foundation of China under Grant U19B2037. (*Corresponding author: Jinqiu Sun.)
Axi Niu, Yu Zhu, and Yanning Zhang are with the School of Computer Science, Northwestern Polytechnical University, Xi’an, 710072, China, and also with the National Engineering Laboratory for Integrated Aero-Space-Ground-Ocean Big Data Application Technology, Xi’an, 710072, China (email: nax@mail.nwpu.edu.cn, yuzhu@mail.nwpu.edu.cn, ynzhang@nwpu.edu.cn )
Pham Xuan Trung, Kang Zhang, and In So Kweon are with the School of Electrical Engineering, Korea Advanced Institute of Science and Technology (email: trungpx@kaist.ac.kr, kangzhang@kaist.ac.kr, iskweon77@kaist.ac.kr).
Jinqiu Sun is with the School of Astronautics, Northwestern Polytechnical University, Xi’an 710072, China (email: sunjinqiu@nwpu.edu.cn)
August 1, 2023
Diffusion models have gained significant popularity in the field of image-to-image translation. Previous efforts applying diffusion models to image super-resolution (SR) have demonstrated that iteratively refining pure Gaussian noise using a U-Net architecture trained on denoising at various noise levels can yield satisfactory high-resolution images from low-resolution inputs. However, this iterative refinement process comes with the drawback of low inference speed, which strongly limits its applications. To speed up inference and further enhance performance, our research revisits diffusion models in image super-resolution and proposes a straightforward yet significant diffusion model-based super-resolution method called ACDMSR (accelerated conditional diffusion model for image super-resolution). Specifically, our method adapts the standard diffusion model to perform super-resolution through a deterministic iterative denoising process. Our study also highlights the effectiveness of using a pre-trained SR model to provide the conditional image for a given low-resolution (LR) input, leading to superior high-resolution results.
We demonstrate that our method surpasses previous attempts in both qualitative and quantitative results through extensive experiments conducted on benchmark datasets such as Set5, Set14, Urban100, BSD100, and Manga109. Moreover, our approach generates more visually realistic counterparts for low-resolution images, underlining its effectiveness in practical scenarios.
Diffusion Models, Image-to-Image Translation, Conditional Image Generation, Image Super-resolution.
§ INTRODUCTION
Single image super-resolution (SISR) has drawn active attention due to its wide applications in computer vision, such as object recognition and remote sensing <cit.>.
SISR aims to obtain a high-resolution (HR) image containing great details and textures from a low-resolution (LR) image by a super-resolution method, which is a classic ill-posed inverse problem <cit.>.
To establish the mapping between HR and LR images, lots of CNN-based methods have emerged <cit.>.
These methods focus on designing novel architectures by adopting different network modules, such as residual blocks <cit.>, attention blocks <cit.>, non-local blocks <cit.>, transformer layers <cit.>, and contrastive learning <cit.>.
For optimizing the training process, they typically use the MAE or MSE loss (i.e., L_1 or L_2) to optimize the architectures, which often leads to over-smoothed results because these losses provide a straightforward learning objective and optimize for the popular PSNR (peak signal-to-noise ratio) metric <cit.>.
With deep generative models of all kinds exhibiting high-quality samples in a wide variety of data modalities, approaches based on deep generative models have become mainstream, mainly including GAN-based methods <cit.> and flow-based methods <cit.>, which have shown convincing image generation ability.
GAN-based SISR methods <cit.> often introduce a generator and a discriminator in an adversarial setup to push the generator to generate realistic images. The generator produces an SR result for the input LR image, and the discriminator aims to distinguish whether the generated SR result is real. The training process is optimized by combining content and adversarial losses, which have strong learning abilities <cit.>. However, GAN-based methods have an obvious drawback: they easily fall into mode collapse, and the training process is hard to converge due to the complex optimization <cit.>. Furthermore, adversarial losses often introduce artifacts not present in the original clean image, leading to large distortion <cit.>. Flow-based SR methods are another famous line of work based on deep generative models. They directly account for the ill-posed problem with an invertible encoder <cit.>. The flow-based operation transforms a Gaussian distribution into an HR image space instead of modeling one single output, and inherently resolves the pathology of the original "one-to-many" SR problem. Optimized by a negative log-likelihood loss, these methods avoid training instability, but they suffer from enormous footprints and high training costs due to the strong architectural constraints needed to keep the bijection between latents and data <cit.>.
Lately, the broad adoption of diffusion models has shown promising results in image generative tasks <cit.>.
In SRDiff <cit.>, the authors propose a two-stage SR framework: they first design and pre-train a super-resolution network to provide a conditional image for the diffusion process, and then redesign the U-Net in the diffusion model. The training process is relatively complicated, and it does not consider reusing existing pre-trained SR models such as EDSR <cit.>, RCAN <cit.>, and SwinIR <cit.>. Similarly, SR3 <cit.> directly uses the bicubic-upsampled LR image as the conditional image. Nevertheless, the stochastic sampling in the inference phase makes the reconstruction process complex and slow.
Unlike them, we propose a simple but non-trivial method for image super-resolution based on the conditional diffusion model, i.e., ACDMSR (accelerated conditional diffusion model for image super-resolution). Our work shares some similarities with SRDiff, which first applied diffusion models to SR tasks. Different from the existing technique <cit.>, our ACDMSR adopts current pre-trained SR methods to provide the conditional image, which is more plausible than the one in <cit.>.
Moreover, it helps to significantly improve perceptual quality over existing state-of-the-art methods across multiple standard benchmarks.
Furthermore, to accelerate inference, we build an n-th order sampler that reduces the 1000-step inference to 40 steps while maintaining good quality. Compared with the previous diffusion model-based methods SR3 and SRDiff, which need 1000 inference steps, ours obtains the final results significantly faster.
By simply concatenating Gaussian noise with the conditional image and optimizing the diffusion model with an L_1 loss, our method makes the training process more concise than <cit.>. The main contributions of this work are listed as follows:
* To the best of our knowledge, we are the first to combine diffusion models and the existing pre-trained SR models to conduct image super-resolution, which can also be taken as a post-process framework.
* Compared with existing diffusion model-based SR methods, our ACDMSR adopts a deterministic sampling way in the inference phase. It can effectively reduce the inference steps from 1000 to just 40, achieving an improved equilibrium between distortion and perceptual quality.
* Compared to existing SOTA SR methods, our ACDMSR achieves superior perceptive results and can generate more photo-realistic SR results on various benchmarks.
§ RELATED WORK
§.§ Single Image Super-resolution Methods
CNN-based methods.
CNN-based methods are a popular line of work for image super-resolution, and much notable work has emerged. For example, <cit.> employs the ResNet architecture from <cit.> and resolves the time and memory issues with good performance. <cit.> further optimizes it by analyzing and removing unnecessary modules, simplifying the network architecture and producing better results. After them, RCAN <cit.>, MCAN <cit.>, and EMASRN <cit.> adopt the attention mechanism <cit.> and design new residual dense networks. MLRN <cit.>, SRNIF <cit.>, and BSRT <cit.> propose multi-scale fusion or internal-external feature fusion architectures to address the problem that existing SISR methods cannot fully exploit the feature information of intermediate network layers and internal features. In addition, SwinIR <cit.> and ESRT <cit.> apply transformer techniques to further improve performance. These methods pursue higher PSNR (peak signal-to-noise ratio) by designing novel architectures and optimizing with the MSE or MAE loss (i.e., L_1 or L_2), which often leads to over-smooth results because these losses provide a straightforward learning objective <cit.>.
Generative model-based methods.
Because deep generative models have recently exhibited promising results in generating images with rich details, it has become popular to adopt generative models to conduct image super-resolution, such as GAN-based methods <cit.> and flow-based methods <cit.>.
SRGAN <cit.> is the first GAN-based SISR method; it adopts adversarial training to push the generator toward results with better visual quality. Compared with SRGAN, ESRGAN <cit.> trains the discriminator to predict the relative realness of the generated image rather than an absolute real/fake decision. NatSR <cit.> proposes a naturalness loss based on a pre-trained natural-manifold discriminator to strengthen the discriminator and achieves results comparable to recent CNNs.
However, GAN-based methods have an obvious drawback: jointly optimizing the whole training process by combining MAE or MSE with adversarial losses makes the model prone to mode collapse, and training is difficult to converge under the complex optimization <cit.>.
Furthermore, adversarial losses often introduce artifacts not present in the original clean image, leading to large distortion <cit.>.
Flow-based SR methods, as discussed in the introduction, directly address the ill-posed problem with an invertible encoder <cit.>, transforming a Gaussian distribution into the HR image space instead of modeling a single output and thereby resolving the "one-to-many" pathology of SR. Optimized by a negative log-likelihood loss, they avoid training instability, but they suffer from large footprints and high training costs due to the strong architectural constraints required to keep the bijection between latents and data <cit.>.
§.§ Diffusion Models
Diffusion models have achieved promising results in image generation <cit.>. They use a Markov chain to transform latent variables with simple distributions (e.g., Gaussian) into data with complex distributions. The core of their success is the iterative sampling process, which progressively removes noise from a random noise vector. This iterative refinement repeatedly evaluates the diffusion model, allowing compute to be traded for sample quality: with more iterations, a small model unrolls into a larger computational graph and generates higher-quality samples <cit.>.
Inspired by the above works, some researchers apply diffusion models to low-level vision tasks <cit.>. In <cit.>, the authors propose a framework for blind image deblurring based on conditional diffusion models, which employs a stochastic sampler to refine the output of a deterministic predictor and produces a diverse set of plausible reconstructions for a given input, significantly improving perceptual quality over existing state-of-the-art methods. SRDiff <cit.> also discusses the drawbacks of current generative-model-based SR methods. It designs a single-image super-resolution model based on diffusion models that provides diverse and realistic super-resolution predictions while avoiding over-smoothing, mode collapse, and large model footprints. At the same time, it has to combine the output of a pre-trained SR model for the LR input, which makes the whole training and forward diffusion processes complex, and it may struggle with images containing complex textures or patterns. Unlike SRDiff, SR3 <cit.> introduces diffusion models to image super-resolution in a straightforward way: it takes the bicubic-upsampled low-resolution image as the conditional image and uses denoising diffusion probabilistic models to perform super-resolution through iterative refinement with a U-Net trained on denoising at various noise levels, achieving strong performance on face and natural-image super-resolution as well as effective cascaded image generation.
Though these methods achieve plausible visual quality, they share an obvious drawback: the sampling speed at inference time needs to be improved.
§ PRELIMINARIES: OVERVIEW OF DIFFUSION MODELS
In diffusion models, a Markov chain of diffusion steps generates data by progressively perturbing the data with Gaussian noise.
Subsequently, these models aim to learn how to reverse the diffusion process and reconstruct desired data samples from the noise. This section begins by revisiting the standard denoising diffusion probabilistic model (DDPM) <cit.> to provide a basic understanding. A typical probabilistic diffusion model consists of four main components: the forward process, the reverse process, the optimization of the diffusion model, and the inference stage. We will now introduce each of these components in the following sections:
§.§ Forward process
Suppose we have a real data distribution x_0 ∼ q(x).
The forward process gradually adds noise to a sampled image x_0 using a variance (noise) schedule β_1,…,β_T (β_t ∈ (0,1), 1 ≤ t ≤ T) to generate noised versions x_1, x_2, …, x_T of the original image x_0. This process can be defined with a Markovian structure:
q(x_t | x_{t-1}) = 𝒩(x_t; √(1-β_t) x_{t-1}, β_t 𝐈), 1 ≤ t ≤ T.
By leveraging the properties of the Gaussian distribution and marginalizing the intermediate steps, we can sample x_t at any given time step t using the following formulation:
q(x_t | x_0) = 𝒩(x_t; √(α̂_t) x_0, (1-α̂_t) 𝐈),
where α_t = 1-β_t and α̂_t = ∏_{s=1}^{t} α_s. This formulation allows us to express x_t using the reparameterization trick:
x_t(x_0, ϵ) = √(α̂_t) x_0 + √(1-α̂_t) ϵ,
where ϵ ∼ 𝒩(0, 𝐈) is a Gaussian noise vector.
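As a concrete illustration, the closed-form sampling of x_t takes only a few lines. The following is a minimal PyTorch sketch, not the paper's code: the linear β schedule and names such as q_sample are our own assumptions.

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)        # assumed linear variance schedule beta_1..beta_T
alphas = 1.0 - betas
alpha_hat = torch.cumprod(alphas, dim=0)     # alpha_hat_t = prod_{s<=t} alpha_s

def q_sample(x0: torch.Tensor, t: torch.Tensor, eps: torch.Tensor) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) via x_t = sqrt(alpha_hat_t) x_0 + sqrt(1 - alpha_hat_t) eps."""
    a = alpha_hat[t].view(-1, 1, 1, 1)       # broadcast over an image batch (B, C, H, W)
    return a.sqrt() * x0 + (1.0 - a).sqrt() * eps

# usage: noise a batch of images at random timesteps
x0 = torch.randn(4, 3, 64, 64)               # stand-in for real images
t = torch.randint(0, T, (4,))
xt = q_sample(x0, t, torch.randn_like(x0))
```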
§.§ Reverse process
In order to recover a real sample x_0 from a Gaussian noise input x_T ∼ 𝒩(0, 𝐈), the preceding forward process must be reversed. This involves constructing the inverse of Eq. <ref> and iterating with q(x_{t-1} | x_t). It is worth highlighting that if β_t is sufficiently small, q(x_{t-1} | x_t) is also Gaussian. However, estimating q(x_{t-1} | x_t) presents a challenge, as it requires the complete dataset. When conditioned on x_0, however, it becomes tractable:
p(x_{t-1} | x_t, x_0) = 𝒩(x_{t-1}; μ(x_t, x_0), σ(x_t, x_0) 𝐈),
where μ(x_t, x_0) := [√(α̂_{t-1}) β_t / (1-α̂_t)] x_0 + [√(α_t)(1-α̂_{t-1}) / (1-α̂_t)] x_t and σ(x_t, x_0) := [(1-α̂_{t-1}) / (1-α̂_t)] β_t. Substituting Eq. <ref>, x_0 = (x_t - √(1-α̂_t) ϵ)/√(α̂_t), into μ(x_t, x_0), we obtain
μ(x_t, x_0) = (1/√(α_t)) (x_t - [(1-α_t)/√(1-α̂_t)] ϵ).
Following the choice of <cit.>, if we train a model q_θ(x_{t-1} | x_t) = 𝒩(μ_θ(x_t, t), Σ_θ(x_t, t) 𝐈) to learn the above reverse process p(x_{t-1} | x_t, x_0), and set Σ_θ(x_t, t) = σ(x_t, x_0), we can use a network f_θ to predict the noise, ϵ ≈ f_θ(x_t, t), so that the reverse process becomes learnable:
q_θ(x_{t-1} | x_t) = 𝒩( (1/√(α_t)) (x_t - [(1-α_t)/√(1-α̂_t)] f_θ(x_t, t)), [(1-α̂_{t-1})/(1-α̂_t)] β_t 𝐈 ).
§.§ Optimize the diffusion model
In <cit.>, the reweighted evidence lower bound has been shown to be an effective loss function in practice:
L(θ) = 𝔼_{t, x_0, ϵ} ‖ f_θ(x_t, t) - ϵ ‖²,
where the model learns to predict the added noise ϵ. The pseudocode for training is shown in the training part of Algorithm <ref>.
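A single training iteration then amounts to drawing t, noising the image, and regressing the noise. The sketch below reuses the schedule and q_sample from the previous snippet; f_theta stands for the denoising U-Net and, like the function name, is our own illustration.

```python
import torch
import torch.nn.functional as F

def training_step(f_theta, x0, optimizer):
    """One DDPM training step: L(theta) = E || f_theta(x_t, t) - eps ||^2."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)  # t ~ Uniform{1..T}
    eps = torch.randn_like(x0)
    xt = q_sample(x0, t, eps)                  # forward-process sample from the earlier sketch
    loss = F.mse_loss(f_theta(xt, t), eps)     # regress the added noise
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```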
§.§ Inference
After training, inference becomes straightforward: given the start point x_T, the next-step image x_{t-1} follows from the reparameterization trick applied to Eq. <ref>:
x_{t-1} ← (1/√(α_t)) (x_t - [(1-α_t)/√(1-α̂_t)] f_θ(x_t, t)) + σ_t ϵ_t, with σ_t² = [(1-α̂_{t-1})/(1-α̂_t)] β_t,
where ϵ_t ∼ 𝒩(0, 𝐈) is the random noise added at each denoising step, and σ_t matches the choice of Σ_θ above. The final image x_0 is obtained by iteratively applying this equation. The pseudocode for inference is shown in the inference part of Algorithm <ref>.
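The full ancestral sampling loop can be sketched as follows, again reusing the schedule tensors defined earlier; the posterior standard deviation follows the choice of Σ_θ made above, and all names are illustrative.

```python
@torch.no_grad()
def ddpm_sample(f_theta, shape):
    """Iterate the reverse update from t = T-1 down to t = 0 (0-indexed schedule)."""
    x = torch.randn(shape)                                   # x_T ~ N(0, I)
    for t in reversed(range(T)):
        ts = torch.full((shape[0],), t)
        a_t, ah_t = alphas[t], alpha_hat[t]
        ah_prev = alpha_hat[t - 1] if t > 0 else torch.tensor(1.0)
        mean = (x - (1 - a_t) / (1 - ah_t).sqrt() * f_theta(x, ts)) / a_t.sqrt()
        if t > 0:
            sigma = ((1 - ah_prev) / (1 - ah_t) * betas[t]).sqrt()
            x = mean + sigma * torch.randn_like(x)           # fresh noise at every step but the last
        else:
            x = mean
    return x
```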
§ METHODOLOGY
Our method can be viewed as a post-processing framework for single image super-resolution (SISR).
As shown in Fig. <ref>, ACDMSR consists of a stochastic forward diffusion process that gradually adds noise to an image until it becomes pure Gaussian noise, and a deterministic denoising reverse process that conditions on I^C to reconstruct the image from that noise. Algorithm <ref> shows the whole process of ACDMSR. The following sections introduce our method in detail.
§.§ Stochastic Diffusion Process
Given a SISR dataset (I^HR, I^LR) ∼ D, we adopt the diffusion model <cit.> to map Gaussian noise x_T ∼ 𝒩(0, 𝐈) to a high-resolution image x_0 = I^HR with a corresponding conditional image I^C = I^LR; the choice of the conditional image is discussed below. The diffusion model contains latent variables x = {x_t | t = 0, 1, ..., T}, where x_0 = I^HR and x_T ∼ 𝒩(0, 𝐈).
We use the same noise schedule β_1, …, β_T (1 ≤ t ≤ T) as <cit.>.
Forward stochastic diffusion process. We define the forward process q(x_t | I^HR) := q(x_t | x_0) of the diffusion model as a Gaussian process with Markovian structure:
q(x_t | x_{t-1}) = 𝒩(x_t; √(1-β_t) x_{t-1}, β_t 𝐈),
q(x_t | x_0) = 𝒩(x_t; √(α̂_t) x_0, (1-α̂_t) 𝐈).
As in DDPM <cit.>, the forward process gradually adds noise to an image x_0 to generate latent variables x_1, …, x_T. With the Gaussian reparameterization trick, the latent variable x_t can be written as in Eq. <ref>, x_t(x_0, ϵ) = √(α̂_t) x_0 + √(1-α̂_t) ϵ.
Model training. In line with <cit.>, we find that predicting the image, rather than the noise, yields superior outcomes for super-resolution; we verify this in Sec. <ref>. The optimization target of our diffusion model is therefore to denoise x_t ∼ p(x_t | x_0) and estimate x_0 with a U-Net, f_θ(x_t, t, I^C) := x̂_0 ≈ x_0. We use the following loss function to train the model:
L := 𝔼_{t, (x_0, I^C), ϵ} [ ‖ x_0 - f_θ(√(α̂_t) x_0 + √(1-α̂_t) ϵ, t, I^C) ‖² ],
where t is uniformly sampled between 1 and T. With Eq. <ref>, ϵ = (x_t - √(α̂_t) x_0)/√(1-α̂_t), the noise added to the image x_t can easily be recovered from the predicted image.
Here, different from <cit.>, we add an additional input I^C as the conditional image to guide the model f_θ to preserve the content of I^C during the denoising process.
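Concretely, the conditional training step differs from the plain DDPM step only in the extra input and the regression target. A hedged sketch, reusing the helpers from the earlier snippets, assuming the condition enters by channel-wise concatenation (the paper states that noise and conditional image are concatenated) and using the squared-error form of the displayed loss:

```python
def acdmsr_training_step(f_theta, x0, cond, optimizer):
    """Conditional variant: regress x_0 itself instead of the noise."""
    t = torch.randint(0, T, (x0.shape[0],), device=x0.device)
    eps = torch.randn_like(x0)
    xt = q_sample(x0, t, eps)                                  # from the earlier sketch
    x0_pred = f_theta(torch.cat([xt, cond], dim=1), t)         # condition on I^C via concat
    loss = F.mse_loss(x0_pred, x0)                             # || x_0 - f_theta(., t, I^C) ||^2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```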
§.§ Conditional image choice
To obtain realistic super-resolution images, <cit.> also introduced diffusion models conditioned either on features from a pre-trained extractor or on a bicubic-upsampled version of the low-resolution image. In this work, we leverage the progress of SISR to provide a better conditional image. Specifically, given a low-resolution image I^LR and a pre-trained super-resolution model φ_θ, we generate our conditional image as I^C = φ_θ(I^LR), which the ablation study in Sec. <ref> shows to be more effective for obtaining results with better perceptual quality.
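In code, producing the conditional image is a one-liner around any frozen, pre-trained SR network; the helper name below is hypothetical.

```python
@torch.no_grad()
def make_conditional_image(sr_model, lr_image):
    """I^C = phi_theta(I^LR): pre-super-resolve the LR input with a frozen SR model
    (e.g., EDSR/RCAN/SwinIR); a bicubic upsample would be the weaker SR3-style baseline."""
    sr_model.eval()
    return sr_model(lr_image)
```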
§.§ Sampling Process
Diffusion models are known to be slow, requiring thousands of forward evaluation steps to generate images of good quality, and diffusion model-based super-resolution inherits this drawback. To remedy this issue, we propose an n-th order sampler that adapts two accelerated sampling strategies from existing work, i.e., DDIM <cit.> and DPM-Solver <cit.>.
These two sampling strategies have been shown to help the diffusion model achieve good image quality while keeping the sampling time short.
In this section, we first define what a sampler is and then describe the proposed super-resolution sampler in detail.
Iterative super-resolution sampler. Given a model f_θ pre-trained with the objective in Eq. <ref> and a low-resolution conditional image I^C, we define an iterative super-resolution sampler from t = T to t = 0 as:
x_{t-1} = ℱ(f_θ, x_t, t, I^C),
where x_t is the ancestor of x_{t-1}.
First-order deterministic sampling. Unlike SR3 and SRDiff, which sample stochastically, we use a deterministic method to conduct the iterative reverse process x_{t-1} = ℱ(f_θ, x_t, t, I^C) in a DDIM-like manner, which has been shown to achieve high-quality images within a limited number of inference steps. Given the image x_t at step t, the generation of x_{t-1} can be written as:
x_{t-1} = ℱ_1st(f_θ, x_t, t, I^C) = √(α̂_{t-1}) x̂_0 + √(1-α̂_{t-1}) (x_t - √(α̂_t) x̂_0)/√(1-α̂_t),
where x̂_0 = f_θ(x_t, t, I^C) is predicted by the trained denoising model. Compared with the DDPM sampling process in Eq. <ref>, this update adds no noise at each step, making the method deterministic. Since each step needs only one forward model evaluation, we call it a first-order method.
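The first-order update can be read off directly from the equation above. A minimal sketch, reusing alpha_hat from the earlier snippets; the function names are ours.

```python
@torch.no_grad()
def first_order_step(f_theta, x_t, t, t_prev, cond):
    """Deterministic DDIM-like update from step t to step t_prev with an x_0-prediction model."""
    ah_t, ah_prev = alpha_hat[t], alpha_hat[t_prev]
    ts = torch.full((x_t.shape[0],), t)
    x0_pred = f_theta(torch.cat([x_t, cond], dim=1), ts)                 # predicted x_0
    eps_pred = (x_t - ah_t.sqrt() * x0_pred) / (1 - ah_t).sqrt()         # implied noise
    return ah_prev.sqrt() * x0_pred + (1 - ah_prev).sqrt() * eps_pred
```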
Second-order deterministic sampling. <cit.> view the diffusion model as a stochastic differential equation (SDE) with the same transition distribution q(x_t | x_0) as in Eq. <ref> for any t ∈ [0, T]:
dx_t = f(t) x_t dt + g(t) dw_t, x_0 ∼ q_0(x_0),
where w_t ∈ ℝ^D is the standard Wiener process, and
f(t) = d log √(α̂_t)/dt, g²(t) = d(1-α̂_t)/dt - 2 f(t) (1-α̂_t).
Under mild regularity conditions, <cit.> show that the forward SDE in Eq. <ref> has an equivalent reverse process running from the marginal distribution q(x_T) at time T back to time step 0:
dx_t/dt = f(t) x_t - (1/2) g²(t) ∇_x log q(x_t), x_T ∼ 𝒩(0, 𝐈),
where the score function ∇_x log q(x_t) can be replaced with the noise prediction of a model ϵ_θ(x_t, t), such that:
dx_t/dt = f(t) x_t + [g²(t)/(2√(1-α̂_t))] ϵ_θ(x_t, t), x_T ∼ 𝒩(0, 𝐈).
This probability-flow ordinary differential equation (ODE) has the same marginal distribution at each time t as the SDE in Eq. <ref>. Sampling can be done by solving the integral of this ODE from T to 0. <cit.> observe that the ODE in Eq. <ref> has a linear part f(t) x_t, which can be solved exactly, and a nonlinear part [g²(t)/(2√(1-α̂_t))] ϵ_θ(x_t, t), which needs a black-box ODE solver to approximate. Compared with solving the whole ODE using a black-box solver, this semi-linear property eliminates the approximation error of the linear part.
We build our second-order deterministic sampler in the manner of DPM-Solver <cit.>.
To this end, define λ_t = λ(t) = log √(α̂_t/(1-α̂_t)) and its inverse function t_λ(·) such that t = t_λ(λ(t)). We formulate our second-order sampling method on image x_t as follows:
s = t_λ((λ_t + λ_{t-1})/2),
u = ℱ_1st(f_θ, x_t, s, I^C),
x_{t-1} = ℱ_1st(f_θ, u, t, I^C).
Since this applies the first-order update twice, we call the above iterative sampler a second-order deterministic sampler and denote it ℱ_2nd(f_θ, x_t, t, I^C).
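A sketch of the second-order step follows. The paper's equations leave the exact argument order of the two chained first-order calls ambiguous; this sketch interprets them as stepping t → s → t-1 through the log-SNR midpoint, in the spirit of DPM-Solver-2, with the inverse map t_λ(·) approximated by a nearest-grid lookup. It is our illustration, not the released code.

```python
lam = 0.5 * torch.log(alpha_hat / (1.0 - alpha_hat))   # lambda_t = log sqrt(ah_t / (1 - ah_t))

def t_of_lambda(target):
    """Discrete stand-in for the inverse map t_lambda(.): nearest index on the schedule grid."""
    return int(torch.argmin((lam - target).abs()))

@torch.no_grad()
def second_order_step(f_theta, x_t, t, t_prev, cond):
    """Two chained first-order updates through the log-SNR midpoint s."""
    s = t_of_lambda(0.5 * (lam[t] + lam[t_prev]))
    u = first_order_step(f_theta, x_t, t, s, cond)          # t -> s
    return first_order_step(f_theta, u, s, t_prev, cond)    # s -> t_prev
```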
We conducted experiments to compare these three sampling methods.
As shown in Fig. <ref>, the PSNR of the original DDPM sampling method stays below 20 dB even at 500 forward steps, owing to the stochastic nature of the reverse process: DDPM needs many steps to remove the randomness injected at each step. The first-order and second-order deterministic sampling methods perform much better at small step counts. As shown in Fig. <ref>, they exhibit a trade-off between visual quality and image distortion in the low-step region: as the number of sampling steps increases, PSNR decreases while the NIQE visual-quality measure improves. Fig. <ref> shows that the second-order sampler achieves the lowest NIQE scores in just 40 steps. We therefore choose second-order sampling as our sampling method and set T = 40 during inference, since good perceptual quality is reached within 40 feedforward steps.
§ EXPERIMENTS
§.§ Experimental Settings
Dataset. We use the 800 image pairs in DIV2K as the training set, and the public benchmark datasets Set5, Set14, Urban100, BSD100, and Manga109 as test sets for comparison with other methods.
Setups. We set T = 1000 for training and T = 40 at inference time for the diffusion model. We take pre-trained super-resolution models (e.g., EDSR <cit.>, RCAN <cit.>, and SwinIR <cit.>) to provide the initial super-resolution image, i.e., the conditional image. The conditional diffusion model is trained with the Adam optimizer, batch size 16, and a learning rate of 1×10^-4 for 400k steps. The architecture of the model is the same as in <cit.>.
Metrics. Previous studies have shown that distortion and perceptual quality are at odds with each other, with a trade-off between them <cit.>. Since our work focuses on perceptual quality, in addition to the distortion metrics PSNR and SSIM we report the perceptual metrics LPIPS <cit.> and NIQE <cit.>
to show that our method generates perceptually better results than other methods. LPIPS is a recently introduced reference-based image quality metric that computes the perceptual similarity between the ground truth and the SR image. NIQE is a no-reference image quality score built on a "quality aware" collection of statistical features derived from a simple and successful space-domain natural scene statistics model.
§.§ Quantitative and Qualitative Results
To verify the effectiveness of our ACDMSR, we select several SOTA generative methods for comparative experiments, including ESRGAN <cit.>, SRFlow <cit.>, SRDiff <cit.>, and SR3 <cit.>. We use EDSR <cit.>, RCAN <cit.>, and SwinIR <cit.> to provide the conditional image, respectively, and therefore report three variants of our ACDMSR, i.e., EDSR+, RCAN+, and SwinIR+.
In addition, we compare our method with several SOTA traditional CNN-based SR methods to further verify the effectiveness of our ACDMSR, including EDSR <cit.>, RCAN <cit.>, EMASRN <cit.>, SwinIR <cit.>, ESRT <cit.>, and BSRN <cit.>. All results are obtained from the released code or the published papers.
Tab. <ref> reports the PSNR, SSIM, and LPIPS values for the generative methods. Our method achieves superior performance under these quantitative metrics, in terms of both distortion and perceptual quality, across multiple standard datasets. ESRGAN is a typical GAN-based SR method, which pairs an SR generator with a discriminator to push the generator toward more realistic images; it achieves better LPIPS but lower PSNR and SSIM on different datasets and scales compared with SRDiff. The results generated by ESRGAN in Fig. <ref>, Fig. <ref>, and Fig. <ref> appear to include more details than other methods, but they introduce many false artifacts relative to the ground truth. SRFlow adopts a flow model to obtain plausible high-resolution images by learning a conditional distribution given low-resolution images; however, the flow model requires invertible parameterized transformations with a tractable Jacobian determinant, which limits its expressiveness <cit.> and yields worse LPIPS, PSNR, and SSIM than SRDiff and our method, with visibly noisy results. To our knowledge, SRDiff and SR3 are the state-of-the-art diffusion model-based SR methods: SRDiff employs a two-stage structure that first pre-trains an SR model and then optimizes the diffusion model, while SR3 proposes an intuitive SR diffusion model based on the standard formulation in <cit.>. Our method is similar to both, but we use existing SR methods to provide the conditional image instead of pre-training a new conditioning model, and we adjust the optimization to predict the original image instead of the noise, which is more suitable for the SR task. With a better conditional image, our method outperforms SR3 <cit.> both quantitatively and qualitatively. Although SRDiff obtains some comparable numbers in Tab. <ref>, the visual results of our ACDMSR are closer to the ground truth (see especially the forehead in Fig. <ref> and the plants and the building in Fig. <ref>). In Sec. <ref>, ablation studies further confirm that a better conditional image indeed improves the SR performance of the diffusion model.
Tab. <ref> reports the PSNR, SSIM, LPIPS, and NIQE values for the traditional CNN-based SR methods. Because these methods are PSNR-oriented and focus on low distortion <cit.>, they perform well on PSNR and SSIM, metrics that are well known to correspond only partially to human perception and that can favor algorithms producing visibly lower quality in the reconstructed images <cit.>. The SR results of these PSNR-oriented methods are clearly over-smooth, with some details missing. Although the PSNR and SSIM numbers of our method are slightly lower than theirs, it performs better on the metrics more in line with the human visual system.
In addition, we present Fig. <ref>, Fig. <ref>, Fig. <ref>, and Fig. <ref> to illustrate the SR visual results on different datasets at varying scales. Our method performs well on a variety of content, including humans, plants, text, and animals. These results further demonstrate the effectiveness of our approach in achieving both metric and perceptual quality.
§.§ Ablation Study
In this section, we conduct ablation studies to verify the influence of different conditional images on our ACDMSR. In addition, we also investigate how stochastic sampling and deterministic sampling influence the reconstruction results. Furthermore, we conduct experiments to verify the effectiveness of different loss functions.
Different conditional images.
Here, we conduct experiments to verify how different conditional images influence performance. We adopt the LR image and the SR outputs of EDSR, RCAN, SwinIR, and the RRDB trained in SRDiff <cit.> as conditional images, respectively. As shown in Tab. <ref> and Fig. <ref>, without any pre-processing, the LR-conditioned result performs worst both quantitatively and qualitatively. With pre-super-resolution by RRDB, EDSR, RCAN, or SwinIR, the conditional images restore more details, pushing our ACDMSR model to perform better.
Noise-prediction loss vs. image-prediction loss.
We conduct an experiment on the Urban100 dataset with scale factor 4 to verify whether training the model to predict the noise or the image achieves better performance. As shown in Tab. <ref>, the image-prediction model achieves both better distortion metrics (PSNR, SSIM) and better perceptual quality (LPIPS)
compared with SR3 <cit.> and SRDiff <cit.>, whose models predict the added noise. This is because the image-prediction model more directly learns the distribution of image content, which benefits super-resolution reconstruction.
§ CONCLUSION
Our work revisits diffusion models in super-resolution and reveals that taking a pre-super-resolved version of the given LR image as the conditional image helps achieve a better high-resolution image. Based on this, we propose a simple but non-trivial DPM-based super-resolution post-processing framework, i.e., ACDMSR. By taking a pre-super-resolved version of the given LR image and adapting the standard diffusion model to perform super-resolution, our ACDMSR improves both qualitative and quantitative results and generates more photo-realistic counterparts for low-resolution images on benchmark datasets (Set5, Set14, Urban100, BSD100, Manga109). In the future, we will extend our ACDMSR to images with more complex degradations.
Although our method achieves impressive results in generating high-quality images for single image super-resolution, it inherits the natural issue of diffusion models: multiple feedforward passes are required to produce the final output. Recent progress in the research community attempts to resolve this drawback by shortening sampling to a single step with promising results <cit.>, which could benefit the SISR framework proposed here. In future work, we will focus on accelerating the inference process of diffusion models for image super-resolution.
Axi Niu received her B.S. and M.S. degrees from Henan University, Kaifeng, China, in 2014 and 2017. She is currently pursuing the Ph.D. degree with the School of Computer Science, Northwestern Polytechnical University, Xi'an, China. Her research interests include image processing and computer vision.
Kang Zhang received his B.S. degree from Harbin Institute of Technology in 2020. He is currently pursuing the Ph.D. degree at the Korea Advanced Institute of Science & Technology. His research focuses on deep learning, self-supervised learning, and adversarial machine learning.
Pham Xuan Trung received his B.S. degree from the School of Electronics and Telecommunications (SET) at Hanoi University of Science and Technology (HUST) in 2014. He is currently working toward his Ph.D. at KAIST under the supervision of Prof. Chang D. Yoo. His doctoral research interests include speech processing, self-supervised learning, and computer vision.
Jinqiu Sun received her B.S., M.S., and Ph.D. degrees from Northwestern Polytechnical University in 1999, 2004, and 2005, respectively. She is presently a Professor at the School of Astronautics, Northwestern Polytechnical University. Her research focuses on signal and image processing, computer vision, and pattern recognition.
Yu Zhu received the B.S., M.S., and Ph.D. degrees from Northwestern Polytechnical University, Xi'an, China, in 2008, 2011, and 2017, respectively. He is presently an associate researcher at the School of Computer Science, Northwestern Polytechnical University. His current research interests include image processing and image super-resolution.
In So Kweon received the B.S. and M.S. degrees in Mechanical Design and Production Engineering from Seoul National University, Korea, in 1981 and 1983, respectively, and the Ph.D. degree in Robotics from the Robotics Institute at Carnegie Mellon University in 1990. He is currently a Professor of electrical engineering (EE) and the director of the National Core Research Center P3 DigiCar Center at KAIST. He served as the department head of Automation and Design Engineering (ADE) at KAIST in 1995-1998. His research interests include computer vision and robotics. He has co-authored several books, including "Metric Invariants for Camera Calibration," and more than 300 technical papers. He served as a Founding Associate Editor-in-Chief of "The International Journal of Computer Vision and Applications" and has been an Editorial Board Member of "The International Journal of Computer Vision" since 2005. He is a member of many computer vision and robotics conference program committees and has been a program co-chair for several conferences and workshops, most recently a general co-chair of the 2012 Asian Conference on Computer Vision (ACCV). He received several awards from international conferences, including the Best Student Paper Runner-up Award at IEEE CVPR 2009 and the Student Paper Award at ICCAS 2008. He also earned several honors at KAIST, including the 2002 Best Teaching Award in EE. He is a member of KROS, ICROS, and IEEE.
Yanning Zhang received her B.S. degree from the Dalian University of Science and Engineering in 1988, and her M.S. and Ph.D. degrees from Northwestern Polytechnical University in 1993 and 1996, respectively. She is presently a Professor at the School of Computer Science and Technology, Northwestern Polytechnical University. She was the organization chair of ACCV 2009 and the publicity chair of ICME 2012. Her research focuses on signal and image processing, computer vision, and pattern recognition. She has published over 200 papers in these fields, including the ICCV 2011 best student paper. She is a member of IEEE.
arXiv:2307.00610v2 [cs.LG; cs.CL; cs.SI], 2 July 2023
Fraunhofer SIT at CheckThat! 2023: Mixing Single-Modal Classifiers to Estimate the Check-Worthiness of Multi-Modal Tweets
Raphael Frick, Inna Vogel
Notebook for the CheckThat! Lab at CLEF 2023.
Raphael Antonius Frick (raphael.frick@sit.fraunhofer.de) and Inna Vogel (inna.vogel@sit.fraunhofer.de)
Fraunhofer Institute for Secure Information Technology SIT | ATHENE - National Research Center for Applied Cybersecurity, Rheinstrasse 75, Darmstadt, 64295, Germany (https://www.sit.fraunhofer.de/)
Copyright 2023 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CLEF 2023: Conference and Labs of the Evaluation Forum, September 18-21, 2023, Thessaloniki, Greece.
The option of sharing images, videos and audio files on social media opens up new possibilities for distinguishing between false information and fake news on the Internet.
Due to the vast amount of data shared every second on social media, not all data can be verified by a computer or a human expert.
Here, a check-worthiness analysis can be used as a first step in the fact-checking pipeline and as a filtering mechanism to improve efficiency.
This paper proposes a novel way of detecting the check-worthiness of multi-modal tweets.
It takes advantage of two classifiers, each trained on a single modality.
For image data, extracting the embedded text with an OCR analysis has been shown to perform best.
By combining the two classifiers, the proposed solution placed first in CheckThat! 2023 Task 1A with an F_1 score of 0.7297 on the private test set.
Check-Worthiness Detection, Multi-Modality, Optical Character Recognition, Image Captioning
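The abstract does not spell out how the two single-modal classifiers are combined; a common late-fusion baseline is a weighted average of their class probabilities. The sketch below is purely illustrative, not the authors' implementation; the function names, the weighting, and the label convention are all assumptions.

```python
import numpy as np

def fuse_predictions(p_text: np.ndarray, p_ocr: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Late fusion of a tweet-text classifier and an OCR-text classifier.
    p_text, p_ocr: (N, 2) class-probability arrays; w weights the text model."""
    return w * p_text + (1.0 - w) * p_ocr

# usage: pick the fused label for one tweet
fused = fuse_predictions(np.array([[0.3, 0.7]]), np.array([[0.6, 0.4]]))
labels = fused.argmax(axis=1)   # 1 = check-worthy (label convention assumed)
```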
arXiv:2307.02888v1 [hep-ph; hep-ex; nucl-ex], 6 July 2023
Pion PDFs confronted by Fixed-Target Charmonium Production
Wen-Chen Chang^1 (changwc@phys.sinica.edu.tw), Chia-Yu Hsieh^1 (cyhsieh@phys.sinica.edu.tw), Yu-Shiang Lian^1 (yslian@gate.sinica.edu.tw), Jen-Chieh Peng^2 (jcpeng@illinois.edu), Stephane Platchkov^3 (Stephane.Platchkov@cern.ch), Takahiro Sawada^4 (sawada@icrr.u-tokyo.ac.jp)
^1 Institute of Physics, Academia Sinica, Taipei, 11529, Taiwan
^2 Department of Physics, University of Illinois at Urbana-Champaign, Urbana, 61801, USA
^3 IRFU, CEA, Université Paris-Saclay, Gif-sur-Yvette, 91191, France
^4 Institute for Cosmic Ray Research, The University of Tokyo, Gifu, 506-1205, Japan
The pion, as the Goldstone boson of the strong interaction, is the lightest QCD bound state and is responsible for the long-range nucleon-nucleon interaction inside the nucleus. Our knowledge of the pion's partonic structure is limited by the existing Drell-Yan data, which are primarily sensitive to the pion's valence-quark distributions. Recent progress in global analyses of the pion's parton distribution functions (PDFs), utilizing various experimental approaches, is introduced. From comparisons between the pion-induced J/ψ and ψ(2S) production data and theoretical calculations using the CEM and NRQCD models, we show how these charmonium production data could provide useful constraints on the pion PDFs.
§ INTRODUCTION
The pion, being the Goldstone boson of dynamical chiral symmetry
breaking of the strong interaction, is also the lightest QCD bound
state. Because of its light mass, the pion plays a dominant role in
the long-range nucleon-nucleon interaction. Understanding the pion's
internal structure is important to investigate the low-energy,
nonperturbative aspects of QCD <cit.>. Even though the
pion is theoretically simpler than the proton, its partonic structure
is much less explored. As scattering off a pion target is not
feasible, current knowledge on pion PDFs mostly relies on the
pion-induced Drell-Yan data <cit.>. Through the
Drell-Yan reaction, the valence-quark distributions at x >0.2 can be
determined while additional measurements are required to constrain the
sea and gluon densities.
While the prompt-photon production process π N →γ
X <cit.> was used to constrain the gluon content of
pions through the Gq →γ q subprocess, the
experimental uncertainties are large. Production of heavy quarkonia,
like and , with a pion beam has distinctive advantages: the
cross sections are large and their decay can be readily detected via
the dimuon decay channel. These datasets have been shown to be
sensitive to both the quark and gluon distributions of the incident
pion <cit.>. The other interesting approach
of accessing the pion PDFs from the Sullivan
process <cit.> in leading neutron deep inelastic
scattering (DIS) data has been considered with promising
results <cit.>. This method is subject
to large systematic uncertainties due to the off-shell nature of
virtual pion in the fluctuated Fock state, and further theoretical
studies are required to clarify the uncertainties <cit.>.
In the fixed-target energy domain, where the transverse momentum of the J/ψ and ψ(2S) is less than the charmonium mass, the
charmonium production is dominated by the quark-antiquark (q
q̅) and gluon-gluon fusion (GG) partonic processes. The shape
of the longitudinal momentum x_F cross section is sensitive to the
quark and gluon parton distributions of colliding hadrons. Since the
nucleon PDFs are known with good accuracy, the measurement of total as
well as the differential x_F distribution of charmonia with the pion
beam provides, within the theoretical model uncertainties, valuable
information about the pion quark and gluon partonic distributions.
In this article, we present our recent studies about the possibility
to constrain pion gluon density from the existing fixed-target
charmonium data <cit.>. We
start with an introduction of various pion PDFs and their distinctive
features in Sec. <ref>, followed by Sec. <ref>
describing the two theoretical frameworks, CEM and NRQCD, used for
describing the charmonium production. Sec. <ref> shows the
comparison of data and theoretical predictions, from which the
differentiation of the large-x gluon strengths in various pion PDFs
can be observed. We conclude with a summary of the results and a few
remarks.
§ PION PDFS
Pion-induced Drell-Yan data have been included in all global analyses
for the determination of the pion PDFs. However, the Drell-Yan process is mainly sensitive to the valence-quark distribution. Without additional observables, the sea and gluon distributions can only be inferred through the momentum and valence-quark sum rules. Different approaches have been taken to access the gluon and sea-quark distributions: (i) utilizing J/ψ production data in OW <cit.>; (ii) utilizing the direct-photon production data in ABFKW <cit.>, SMRS <cit.>, GRV <cit.>, and xFitter <cit.>; (iii) utilizing the leading-neutron DIS (LN) data in JAM <cit.>; (iv) utilizing production cross sections in the region of large transverse momentum (p_T), which are sensitive to the NLO qG process, in JAM <cit.>.
JAM <cit.>.
In addition, some pion PDFs are constructed based on theoretical
modeling. For example, GRS <cit.> utilized a constituent
quark model to relate the gluon and antiquark density, and
BS <cit.> assumed
quantum statistical distributions for all parton species with a
universal temperature. The soft-gluon threshold-resummation correction is known to modify the extracted valence-quark distribution toward x = 1 <cit.>, and how this effect modifies the large-x behavior of the valence quarks in a global analysis has recently been examined <cit.>. We summarize the data sets used for
various global analyses of pion PDFs in Table <ref>.
Figure <ref> compares the valence, sea, and gluon momentum
distributions of the SMRS, GRV, JAM and xFitter pion PDFs at the scale
of mass <cit.>. Their ratios to SMRS are shown in
the bottom panel. Within the range of x ∼0.1–0.8, the
valence-quark distributions of SMRS, JAM and xFitter are close to each
other, whereas GRV is lower by up to 20%–30%. The sea distribution
shows large variations between the four PDFs. The gluon distributions
also show sizable differences; e.g., in the region of x > 0.2 the
xFitter and JAM distributions are smaller in comparison with SMRS and
GRV, by up to a factor of 2-3. As we will see in
Sec. <ref>, these differences in the large-x gluon
distributions lead to quantitative difference in the data description
of fixed-target charmonium data.
§ CEM AND NRQCD MODELS FOR CHARMONIUM PRODUCTION
Based on factorization, the theoretical description of charmonium
production consists of the pQCD description of the production of c
c̅ pairs at the parton level <cit.>, and their subsequent hadronization into the
charmonium bound state <cit.>. The
latter nonperturbative part is challenging and has been modeled in
theoretical approaches such as the color evaporation model
(CEM) <cit.>, the
color-singlet model (CSM) <cit.>, and the nonrelativistic QCD
(NRQCD) <cit.>.
The CEM assumes a constant probability F^H, specific for each
charmonium H, for the hadronization of c c̅ pairs into the
colorless hadron state. The differential cross section dσ/dx_F
for J/ψ from the π N collision is expressed as an integration
of c c̅ pair production with an invariant mass M_cc̅
up to the D D̅ threshold,
dσ^H/dx_F = F^H ∑_{i,j = q, q̅, G} ∫_{2m_c}^{2m_D} dM_{cc̅} [2M_{cc̅} / (s √(x_F^2 + 4M_{cc̅}^2/s))]
× f^π_i(x_1, μ_F) f^N_j(x_2, μ_F) σ̂[ij → c c̅ X](x_1 p_π, x_2 p_N, μ_F, μ_R),
x_F = 2p_L/√(s), x_{1,2} = [√(x_F^2 + 4M_{cc̅}^2/s) ± x_F]/2,
where i and j denote the interacting partons (gluons, quarks and
antiquarks) and m_c, m_D, and M_c c̅ are the masses of
the charm quark, D meson, and c c̅ pair, respectively. The
f^π and f^N are the corresponding pion and nucleon parton
distribution functions, respectively, evaluated at the corresponding
Bjorken-x, x_1 and x_2, at the factorization scale μ_F. The
short-distance differential cross section of heavy-quark pair
production σ̂[ij → c c̅ X] is calculable as
a perturbation series in the strong coupling α_s(μ_R)
evaluated at the renormalization scale μ_R. The longitudinal
momentum of the experimentally detected dilepton pair, equivalent to
that of the c c̅ pair, is denoted by p_L.
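The kinematic relations above are simple to evaluate numerically. A short sketch (ours, not the authors' code) that computes the parton momentum fractions probed at a given x_F:

```python
import numpy as np

def x1_x2(xF: float, M: float, s: float) -> tuple[float, float]:
    """Momentum fractions for a c-cbar pair of mass M at longitudinal x_F:
    x_{1,2} = (sqrt(x_F^2 + 4 M^2 / s) +/- x_F) / 2, so that x_1 * x_2 * s = M^2."""
    root = np.sqrt(xF**2 + 4.0 * M**2 / s)
    return (root + xF) / 2.0, (root - xF) / 2.0

# e.g. J/psi kinematics for a 515 GeV/c pion beam (sqrt(s) ~ 31.1 GeV):
x1, x2 = x1_x2(xF=0.5, M=3.1, s=31.1**2)
print(x1, x2)   # the beam-side fraction x_1 grows large at forward x_F
```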
The F^H factor is to be determined as the normalization parameter in
the fit to the experimental measurements. The assumption of a common
F^H factor for different subprocesses greatly reduces the number of
free parameters of the CEM. In spite of its well-known
limitations <cit.>, the CEM gives a good account of many
features of fixed-target cross section data with proton beams,
including their longitudinal momentum (x_F)
distributions <cit.> and the collider
data at RHIC, Tevatron, and LHC <cit.>.
To examine a possible model dependence of observations, we carry out a
similar study using NRQCD. The NRQCD factorization formula allows for
a systematic expansion of inclusive quarkonium cross sections in
powers of the strong coupling constant α_s and the relative
velocity v of the heavy quarks. This expansion takes into account
the short-distance production of color-singlet and color-octet
cc̅ precursor states with various spin (S), color (n), and
angular momentum (J) quantum numbers. The long-distance matrix
elements (LDMEs) are non-perturbative parameters that characterize the
probability of a cc̅ pair to evolve into a final quarkonium
state. The LDMEs, assumed to be universal, are extracted from the
experimental data. The differential cross section dσ/dx_F for
from the π N collision is expressed as follows,
dσ^H/dx_F= ∑_i,j=q, q̅, G∫_0 ^1 dx_1 dx_2δ(x_F - x_1 + x_2)
× f^h_i(x_1, μ_F) f^N_j(x_2, μ_F) σ̂[ij → H](x_1 P_h , x_2 P_N , μ_F, μ_R, m_c),
σ̂[ij → H] = ∑_nσ̂[ij → c c̅ [n]] (x_1 P_h , x_2 P_N , μ_F,
μ_R, m_c) ⟨𝒪_n^H[^2S+1L_J] ⟩
where σ̂[ij → c c̅ [n]] denotes the
hard-QCD production cross section for c c̅ pair of color state
n and ⟨𝒪_n^H[^2S+1L_J] ⟩ is the
corresponding LDME. Table <ref> summarizes the
relationships between the LDMEs and the scattering subprocesses for
J/ψ, ψ(2S), χ_c0, χ_c1, and χ_c2, up to
𝒪(α_s^3) in the NRQCD
framework <cit.> adopted for computing J/ψ,
ψ(2S), and χ_cJ production via GG, q q̅ and qG
subprocesses. The J/ψ cross section is estimated taking into
account the direct production of J/ψ and the feed-down from
hadronic decays of ψ(2S) and radiative decays of three
χ_cJ states.
§ RESULTS AND DISCUSSIONS
§.§ Integrated cross sections
We start with the comparison between the data of π^- N →
J/ψ X cross sections integrated over x_F >
0 <cit.> and the NLO CEM
calculations with four pion PDFs, shown in
Fig. <ref>. The evaluation of cross sections is done
with a charm quark mass m_c = 1.5 GeV/c^2 and renormalization and
factorization scales of μ_R = m_c and μ_F = 2 m_c,
respectively. The hadronization factors F in the CEM model are
assumed to be energy independent and determined by the best fit to the
data for the central values of each pion PDF. The differences between
them are visible through the F factors, which vary from 0.05 to
0.09. A similar comparison for the NRQCD calculations is shown in
Fig. <ref>.
In the CEM study, the factor F is determined by the best χ^2
fit to each data set individually. In contrast, in the NRQCD study a global analysis of all data sets was performed, with some color-octet LDMEs obtained as fit parameters. The quality of the data description for each data set in the NRQCD study is indicated by χ^2/ndp, where ndp denotes the number of data points in that data set.
The total cross sections evaluated with the four PDFs exhibit quite
similar √(s) dependencies, and all agree reasonably with the
data. The q q̅ contribution dominates at low energies, whereas
the GG contribution becomes important with increasing
√(s). The relative fractions of q q̅ and GG
contributions as a function of √(s) vary for each pion PDFs,
reflecting the differences between the corresponding parton
distributions. For SMRS and GRV the GG contribution starts to
dominate the cross section around √(s)=15 GeV. For xFitter and
JAM the corresponding values are larger at ∼√(s)=20-30 GeV
because of their relatively reduced gluon strength in the valence
region.
§.§ Differential x_F cross sections
To further investigate the effect of different pion PDFs, we compare the longitudinal x_F distribution of the calculated pion-induced J/ψ production cross section with a selection of fixed-target data from Fermilab and CERN experiments on pion-induced J/ψ production, as listed in Table II of Refs. <cit.>. The beam momenta of the datasets cover the range of
39.5–515 GeV/c, corresponding to √(s) values ranging from 8.6
to 31.1 GeV.
The comparison of our LO and NLO CEM calculations to the E672/E706
data <cit.> with a 515 GeV/c π^- beam scattered off
Be targets is shown in Fig. <ref>. Judging from the
reduced χ^2/ndf values, the NLO calculations with SMRS and GRV
are in better agreement with the data than those with xFitter and
JAM. The NLO calculation improves the description of the E672/E706
data only in the cases of SMRS and
GRV. Fig. <ref> shows the same comparison with
the NRQCD calculations. It is also observed that SMRS and GRV are
favored over JAM and xFitter in both comparisons with the CEM and
NRQCD results.
The fraction of the GG component is maximized around x_F = 0,
corresponding to the gluon distribution G_π(x) around x
∼0.1–0.2. As a result of the rapid drop of the G_π(x)
toward x=1, the GG contribution quickly decreases at large
x_F. In contrast, the q q̅ contribution has a slower fall-off
toward high x_F because of a relatively strong pion valence
antiquark density, in comparison with the gluon one, at large x. The
ratio of q q̅ to GG shows a strong x_F dependence, making
the x_F-differential cross sections at high energies particularly
sensitive to the shape of pion G_π(x).
More information on the charmonium production mechanism can be
obtained by comparing the production of the two charmonium states,
J/ψ and ψ(2S). Fig. <ref> shows the
comparison of the ψ(2S) to J/ψ ratios, R_ψ(x_F), with
the pion beam momentum of 252 GeV/c <cit.> and the
NRQCD calculations. An x_F-independent R_ψ(x_F) is predicted
by the CEM <cit.>, since the fractions of q q̅
and GG components are identical for J/ψ and ψ(2S). In
NRQCD, an x_F-dependent R_ψ(x_F) is possible because
different LDMEs are associated with the q q̅ and GG channels
in evaluating the production of J/ψ and ψ(2S).
Fig. <ref> shows a strong x_F dependence of
R_ψ and this suggests that the relative weights of the
individual subprocesses q q̅ and GG components in J/ψ
and ψ(2S) production are distinctly different. The pronounced
rise in the R_ψ(x_F) data at forward x_F where the q q̅
subprocess dominates the production, indicates that the q q̅
subprocess is more important for the ψ(2S) production than for
the J/ψ production. This comparison again favors the calculations with SMRS and GRV, consistent with the observation from the J/ψ production data.
§ SUMMARY
We examine the existing pion PDFs, which exhibit pronounced differences, particularly in their gluon distributions. Using these PDFs as inputs to the CEM and NRQCD, the total and x_F-differential cross sections of pion-induced J/ψ and ψ(2S) production are calculated and compared with the fixed-target data.
We observe the importance of the gluon-gluon fusion process in
charmonium production, especially at high (fixed-target)
energies. Since the calculated shapes of x_F distributions of GG
and q q̅ contributions are directly related to the parton x
distributions of corresponding PDFs, a proper description of
charmonium production data, especially for x_F>0.5, imposes strong
constraints on the relevant pion's parton densities. Among the four
pion PDFs examined, both CEM and NRQCD calculations clearly favor SMRS
and GRV PDFs whose gluon densities at x > 0.1 are stronger, compared
with xFitter and JAM PDFs. The GG contribution from the latter two
pion PDFs drops too fast toward x_F = 1 to describe the data. While
future theoretical developments are required to reduce the theoretical
uncertainties in describing the charmonium production and thus improve
the precision of the extracted PDFs, we emphasize the importance of
including the pion-induced charmonium data in future pion PDF global
analysis.
In the near future, new measurements of Drell-Yan as well as J/ψ
data in π^- N reactions will be available from the CERN
COMPASS <cit.> and AMBER <cit.>
experiments. For the upcoming electron-ion collider projects in the U.S. and China, the pion as well as kaon structures are to be explored using the tagged DIS process <cit.>. To characterize the recoiling baryon system from collisions with very small four-momentum transfer, as needed for the extraction of on-shell meson PDFs, a high-resolution zero-degree calorimeter is required. A collaboration of East Asian countries on developing this key detector for the U.S. EIC project was recently discussed <cit.>.
Acknowledgments
We thank Nobuo Sato and Ivan Novikov for helping us with the usage of
JAM and xFitter PDFs.
Authors' contributions
All authors equally contributed to the manuscript. Wen-Chen Chang is
the lead author in organizing and writing the draft. All authors read,
polished, and approved the final manuscript.
Funding
This work is supported in part by the U.S. National Science Foundation
and National Science and Technology Council of Taiwan (R.O.C.).
Availability of data and materials
The datasets generated during and/or analysed during the current study
are available from the corresponding author on reasonable request.
§ DECLARATIONS
Ethics approval and consent to participate
N/A
Consent for publication
N/A
Competing interests
The authors declare that they have no competing interests.
arXiv:2307.00266v1 [cs.CL; cs.AI], 1 July 2023
Hierarchical Pretraining for Biomedical Term Embeddings
Bryan Cai, Sihang Zeng, Yucong Lin, Zheng Yuan, Doudou Zhou, Lu Tian
§ INTRODUCTION
Biomedical term representations condense the semantic meanings of terms into a low-dimensional space, which is useful for various downstream applications, such as clinical decision making, patient trajectory modeling, and automated phenotyping.
Current state-of-the-art methods <cit.> employ pretrained language models (PLMs) with contrastive learning loss to generate contextual embeddings from biomedical knowledge graphs like the Unified Medical Language System (UMLS) <cit.>. These methods focus on term normalization or entity linking problems and expect similar terms to be close in the embedding space. While they excel at similarity modeling, even in challenging tasks like unsupervised synonym grouping <cit.>, they do not perform well in modeling hierarchies between biomedical terms <cit.>.
Efforts have been made in recent studies to incorporate hierarchical information into biomedical term representations. For example, Kalyan and Sangeetha (2021) used a retrofitting algorithm and UMLS relationships to incorporate ontology relationship knowledge into term representations <cit.>. However, this method treats all relationships equally. Another approach was proposed by Yang et al. (2022) based on a hierarchical triplet loss with a dynamic margin learned from the hierarchy of ICD codes <cit.>, which improved the performance of the ICD coding task. However, this method is less flexible, as it requires an explicit parametrization of the dynamic margin, which can be difficult in the presence of many different classes of term pairs.
To incorporate specific biomedical term hierarchies into the embedding training, we select a set of terms for each anchor term based on these hierarchies. Our model learns to improve the concordance between the cosine similarities of embedded term pairs and their similarities within the hierarchies. Existing techniques for optimizing the rank loss require the specification of margins between adjacent categories <cit.>, which is delicate and time-consuming <cit.>.
In this paper, we present a novel hierarchical biomedical term representation model that leverages both the synonyms in UMLS and the hierarchies in EHR codified data. To this end, we have gathered medication terms from RxNorm <cit.>, phenotype terms from PheCode <cit.>, procedure terms from CPT <cit.>, and laboratory terms from LOINC <cit.>, and organized them into hierarchical structures for embedding training. Taking advantage of the constructed hierarchies, we adapt the existing contrastive loss function to handle any number of ordered categories without the need to specify between-category margins. We name our model Hierarchical Pretrained BERT (HiPrBERT).
§ RELATED WORKS
Biomedical term representation is the foundation of biomedical language understanding. Word embeddings generally use the word2vec algorithm with a biomedical corpus for training <cit.>. Cui2vec factorizes a shifted positive pointwise mutual information matrix to obtain a lower-dimensional embedding of the words <cit.>. CODER and SapBERT extend the fixed vocabulary of word2vec models to arbitrary inputs by using pretrained language models and contrastive learning to learn from the synonyms in UMLS. To encode hierarchies in biomedical term representations, Yang et al. (2022) design a hierarchical triplet loss with a pre-assigned dynamic margin to learn from the hierarchy of ICD codes <cit.>, while Kalyan and Sangeetha (2021) use a retrofitting algorithm to refine the representations with UMLS relationships <cit.>. These methods facilitate the development of biomedical NLP but remain limited in exploiting the fine-grained information in various types of hierarchies.
§ DATA AND METHODS
We will introduce the structure of the input data, the general model architecture that we use to build embeddings, the hard pair mining strategy, and the loss functions.
§.§ UMLS and Medical Hierarchies
HiPrBERT leverages two main sources of data. The first is the UMLS, a knowledge graph that encodes relations across many different medical vocabularies; its terms have no inherent order, and there are many different types of relations between pairs of terms. In addition to the UMLS knowledge graph, we have a collection of hierarchies that we can leverage. Specifically, PheCode is a hierarchy of ICD codes that can be represented as a forest of trees: the root of each tree is a separate concept, and the children of a node represent more specific concepts. LOINC is another hierarchy, representing laboratory observations, containing 171,191 nodes from 27 trees whose depths vary from 2 to 13. Similarly, RxNorm and CPT are also represented as forests, focusing on medication and procedure terms, respectively. PheCode contains 1,601 nodes, RxNorm contains 192,683 nodes, and CPT contains 10,360 nodes. In these hierarchies, the structure contains more information than UMLS on the "closeness" between biomedical terms, which can be used to obtain a finer embedding that better discriminates closely related terms from moderately related ones.
It is worth noting that although the number of terms in each hierarchy is significantly lower than the number of terms in the UMLS, we expect that we can obtain enough high-quality training pairs from the hierarchy to enhance the embeddings in most relevant regions of the embedding space. In practical terms, each hierarchy consists of two mappings: one from parents to children and one from codes to the biomedical term strings.
§.§ Term Embeddings
HiPrBERT takes in an input term s and outputs a corresponding embedding e_s∈ R^d. Specifically, the input s is first converted into a series of tokens, which are then encoded by HiPrBERT into a series of d dimensional hidden state vectors
[CLS], 𝐭_0, 𝐭_1, ..., 𝐭_n, [SEP] → 𝐡_[CLS], 𝐡_0, 𝐡_1, ..., 𝐡_n, 𝐡_[SEP].
The embedding of s is defined to be the latent vector corresponding to the [CLS] token
s→𝐞_s = 𝐡_[CLS]∈ R^d.
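As a concrete illustration, the following minimal sketch extracts such [CLS] embeddings with the transformers library; the PubMedBERT checkpoint name is a public release used for initialization and is illustrative, not the trained HiPrBERT weights.

```python
# Minimal sketch: embed terms with a BERT-style encoder and take the [CLS]
# hidden state. The checkpoint name is illustrative, not the HiPrBERT model.
import torch
from transformers import AutoTokenizer, AutoModel

name = "microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

def embed(terms):
    """Map a list of term strings to their [CLS] embeddings, shape (batch, d)."""
    batch = tokenizer(terms, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state   # (batch, seq_len, d)
    return hidden[:, 0]                             # [CLS] is the first token

e = embed(["type 1 diabetes", "type 2 diabetes"])
print(torch.nn.functional.cosine_similarity(e[0], e[1], dim=0).item())
```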
§.§ Distance metric
Similar to SapBERT, our approach learns term representations by maximizing the embedding similarity between term-term pairs that are “close" and minimizing embedding similarities between term-term pairs that are “far". We define the embedding similarity between terms s_i and s_j as S_ij = cos(𝐞_i, 𝐞_j). We also define the following distances to quantify the resemblance between terms s_i and s_j. The particular numerical values are not important; only their order matters in training embeddings.
* If s_i and s_j are from the UMLS,
d(s_i, s_j) =
0, if s_i and s_j are synonyms;
3, otherwise.
* If s_i and s_j are from a hierarchy,
d(s_i, s_j) =
0, if s_i and s_j are synonyms;
1, if s_i and s_j have the same parent (a sibling pair);
2, if s_i and s_j are a parent-child pair;
3, otherwise.
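A minimal Python sketch of this distance is given below; the `parent` and `synonyms` mappings are assumed to be derived from the hierarchy files and are illustrative names, not part of any released code.

```python
# Minimal sketch of the hierarchy-induced distance above. `parent` maps each
# term to its parent code (None for roots) and `synonyms` maps each term to
# its synonym set; both names are illustrative assumptions.
def distance(s_i, s_j, parent, synonyms):
    if s_i == s_j or s_j in synonyms.get(s_i, set()):
        return 0                                          # synonyms
    p_i, p_j = parent.get(s_i), parent.get(s_j)
    if p_i is not None and p_i == p_j:
        return 1                                          # sibling pair
    if p_i == s_j or p_j == s_i:
        return 2                                          # parent-child pair
    return 3                                              # otherwise
```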
§.§ Hard Pair Mining
When sampling UMLS term data, we use an online triplet miner to select negative pairs. Specifically, among all triplets of terms (s_a, s_p, s_n), where (s_a, s_p) are synonymous and (s_a, s_n) are non-synonymous, based on initial embeddings, (s_a, s_p, s_n)→ (𝐞_a, 𝐞_p, 𝐞_n), we consider the difference between the cosine similarities cos(𝐞_a, 𝐞_p) and cos(𝐞_a, 𝐞_n), and select the triplets with this difference >0.25 to be included in our minibatch for further training. We do the same for UMLS relational data.
For hierarchical data, we leverage the structure of the tree to construct minibatches. For example, we use distance-0 pairs as positive samples and distance >0 pairs as negative samples. We do this for every distance threshold to encourage separation between varying levels of similarity.
§.§ Loss Function
Given an anchor term s_i and a set of terms Ω_i, we can define the sets
Ω_i^(0)(d_0) = {j ∈ Ω_i | d(s_i,s_j) ≤ d_0}, Ω_i^(1)(d_0) = {j ∈ Ω_i | d(s_i,s_j) > d_0}.
In other words, Ω_i^(0)(d_0) contains all terms that are at most distance d_0 from s_i, while Ω_i^(1)(d_0) contains all terms that are further than d_0 away. Our goal is to create embeddings such that the similarity between s_i and terms in Ω_i^(0)(d_0) is greater than that between s_i and terms in Ω_i^(1)(d_0). We use the multi-similarity (MS) loss <cit.>.
For UMLS data, we have the standard MS loss function.
∑_i=1^k[α^-1log(1+∑_j ∈Ω_i^(0)(0) e^-α(S_ij-λ)) +β^-1log(1+∑_j ∈Ω_i^(1)(0) e^β(S_ij-λ))],
where α=2, β=2, λ=0.5. Note that the terms in Ω_i^(1)(0) come from the triplet mining procedure. For hierarchical data, we use a modified loss:
∑_d_0=0^2∑_i=1^k[α^-1log(1+∑_j ∈Ω_i^(0)(d_0) e^-α(S_ij-λ)) +β^-1log(1+∑_j ∈Ω_i^(1)(d_0) e^β(S_ij-λ))],
with the same set of tuning parameters.
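A minimal PyTorch sketch of this modified loss is given below, assuming the minibatch cosine-similarity matrix S and the matching matrix D of hierarchy distances are precomputed; the UMLS loss is recovered by keeping only the d_0 = 0 term.

```python
# Minimal PyTorch sketch of the modified multi-similarity loss, assuming the
# minibatch similarity matrix S (k x k cosine similarities) and the matrix D
# of hierarchy distances d(s_i, s_j) in {0,1,2,3} are precomputed.
import torch

def hierarchical_ms_loss(S, D, alpha=2.0, beta=2.0, lam=0.5):
    k = S.size(0)
    self_mask = torch.eye(k, dtype=torch.bool, device=S.device)
    loss = S.new_zeros(())
    for d0 in (0, 1, 2):                        # outer sum over thresholds d_0
        pos = ((D <= d0) & ~self_mask).float()  # Omega_i^(0)(d_0), anchor excluded
        neg = (D > d0).float()                  # Omega_i^(1)(d_0)
        pos_term = torch.log1p((pos * torch.exp(-alpha * (S - lam))).sum(1)) / alpha
        neg_term = torch.log1p((neg * torch.exp(beta * (S - lam))).sum(1)) / beta
        loss = loss + (pos_term + neg_term).sum()
    return loss
```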
§ EXPERIMENTS
§.§ Model Training
Our training process is similar to that of SapBERT, with the main difference being the loss functions used. Using PyTorch <cit.> and the transformers library <cit.>, our model was initialized from PubMedBERT <cit.> and trained using AdamW <cit.> with a learning rate of 2× 10^-5, a weight decay rate of 0.01, and a linear learning rate scheduler. We use a training batch size of 256 and train on the preprocessed UMLS synonym data, UMLS relation data, and hierarchical data for one epoch. This equates to about 120 thousand iterations and takes less than 10 hours on a single GPU machine.
§.§ Model Evaluation
To objectively evaluate our models, we randomly selected evaluation pairs from hierarchies that were not used in model training. For each evaluation pair, we calculated the cosine similarity between the respective embeddings to determine their relatedness. The quality of the embedding was measured using the area under the ROC curve (AUC) for discriminating between distance-i pairs and distance-j pairs, where 0≤ i<j≤ 3. In addition, we evaluated the embedding performance via Spearman's correlation and precision-recall curves.
For relatedness tasks, we used pairs of terms in our holdout set for various relations to test the models. There are many different types of relationships, and we report three of clinical importance, as well as the average of the 28 most common relations. We also included performance on the Cadec term normalization task.
We compare HiPrBERT with a set of competitors including SapBERT, CODER, PubMedBERT, BioBERT, BioGPT and DistilBERT, where the SapBERT is retrained without using testing data for generating fair comparisons.
§.§ Evaluation Results
The AUC values for discriminating pairs of different distances are reported in Table <ref>. HiPrBERT, fine-tuned on hierarchical datasets, outperforms all its competitors in every category except 1 vs 3, where its performance is very close to CODER's. The most noteworthy improvement is in the 0 vs 1 task, where models have to distinguish synonyms from very closely related pairs, such as “Type 1 Diabetes” and “Type 2 Diabetes”. We also report the results using Spearman's rank correlation coefficient in Table <ref>, and the conclusions are similar.
We also see significant improvements in all relatedness tasks (Table <ref>). For example, the AUC in the “Causative” category improves from 91.9% to 98.1% in comparison with the second-best embedding, generated by CODER. Similar improvement has also been observed in detecting “May Cause/Treat" and “Method of" relations. Overall, the average performance of the model in detecting the 28 most common relationships improved from 88.6% to 93.7% in comparison with the next-best embedding. This demonstrates a substantial improvement in our ability to capture more nuanced information. It is worth noting that HiPrBERT's performance on Cadec is on par with other existing models, indicating that our model does not compromise performance on similarity tasks while achieving improvements in other areas. Lastly, the comparison results based on Spearman's correlation (Table <ref>) and the precision-recall curve (not reported) are similar.
§ DISCUSSION
Our model is one of the first to include terms from medical term hierarchies (PheCode, LOINC, RxNorm), and these trees contain terms critical for structured EHR data. Existing methods such as CODER and SapBERT do not train on this specific vocabulary. By improving embeddings for these strings in particular, our embeddings have the potential to integrate better with structured EHR data, enhancing the representation of patients.
This directly leads to improvements in downstream tasks such as prediction feature extraction and patient clustering.
The use of induced distance from hierarchies helps improve model performance and can be expanded in several ways. One may consider more pair types within each hierarchy; for example, the distance metric can be expanded to include grandparent-child and uncle-nephew pairs. Alternatively, the distance metric can take into account the global structure of the tree. Currently, pairwise resemblance only takes into account the local information around the term, looking only at immediate connections. However, nodes closer to the root of a hierarchy typically represent broader concepts that are further apart, whereas nodes closer to the leaves represent more specific concepts that are closer together. This can either be explicitly coded into the training process or, ideally, learnt on the fly. In addition, different hierarchies naturally differ in structure and therefore in pairwise distance, so this adjustment would be hierarchy-specific. Our simple choice here is for computational convenience and can be improved.
§ CONCLUSION
In this paper we present a novel method for training embeddings that better discriminate pairs of different similarity levels by taking advantage of additional hierarchical structures. Operationally, the method only requires ordering term-term similarities, which is much simpler than assigning the quantitative margins between similarities used in the rank loss.
The new model outperforms existing ones on separating weakly related terms from closely related terms without sacrificing performance on other metrics.
|
http://arxiv.org/abs/2307.02151v1
|
20230705094957
|
Dixon's asymptotic without CFSG
|
[
"Sean Eberhard"
] |
math.GR
|
[
"math.GR",
"math.CO"
] |
Sean Eberhard, Mathematical Sciences Research Centre, Queen's University Belfast, Belfast BT7 1NN, UK
s.eberhard@qub.ac.uk
SE is supported by the Royal Society.
Without using the classification of finite simple groups, we show that the probability that two random elements of S_n generate a primitive group smaller than A_n is at most exp(-c(n log n)^1/2).
As a corollary we get Dixon's asymptotic expansion
1 - 1/n - 1/n^2 - 4/n^3 - 23/n^4 - ⋯
for the probability that two random elements of S_n (or A_n) generate a subgroup containing A_n.
Dixon's asymptotic without CFSG
Sean Eberhard
===================================
§ INTRODUCTION
We give a CFSG-free proof of the following result.
Let G be the subgroup of S_n generated by two random elements.
The probability that G is contained in a primitive subgroup of S_n smaller than A_n is bounded by exp(-c (n log n)^1/2) for some c>0.
This improves Theorems 1.3 and 1.6 of <cit.>.
By combining with the results of <cit.> we have the following corollary. (See also entry A113869 of <cit.>.)
The probability that two random elements of A_n generate the group is
1 - 1/n - 1/n^2 - 4/n^3 - 23/n^4 - 171/n^5 - ⋯.
The same asymptotic expansion is valid for the probability that two random elements of S_n generate at least A_n.
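As a quick numerical illustration (not part of the proof), one can estimate this probability by Monte Carlo for small n using SymPy; the sketch below relies on the fact that, for n ≥ 5, a subgroup of S_n of order at least n!/2 must contain A_n.

```python
# Monte Carlo sanity check: estimate the probability that two random elements
# of S_n generate a subgroup containing A_n, and compare with the leading
# term 1 - 1/n. Feasible only for small n.
from math import factorial
from sympy.combinatorics import Permutation, PermutationGroup

def contains_alternating(n, trials=200):
    hits = 0
    for _ in range(trials):
        G = PermutationGroup([Permutation.random(n), Permutation.random(n)])
        if G.order() >= factorial(n) // 2:  # for n >= 5 this forces G >= A_n
            hits += 1
    return hits / trials

for n in (5, 6, 7):
    print(n, contains_alternating(n), 1 - 1 / n)
```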
§ SATISFACTION PROBABILITY FOR UNIMODAL WORDS
Let F_2 = F{x,y} be the free group on two letters x, y. We write {x,y}^* for the set of positive words, i.e., the submonoid generated by {x,y}. Let G = S_n = Sym(Ω) for Ω = {1,…,n}.
Let u, v ∈{x,y}^* be distinct and let w = uv^-1∈ F_2.
Let ℓ = ℓ(w) = ℓ(u) + ℓ(v) be the length of w.
For a random evaluation w̅ = w(x̅,y̅) with x̅, y̅∈ S_n uniformly random and independent, we have
ℙ(w̅ = 1) ≤ (2ℓ/n)^{n/2ℓ}.
Write w = w_1 ⋯ w_ℓ with ℓ > 0 and w_i ∈{x^±1, y^±1} for each i.
We may assume this expression is cyclically reduced.
We use the query model for random permutations (see <cit.> or Section A.1 of <cit.>).
We gradually expose a random permutation π∈(Ω) by querying values of our choice.
At every stage x̅ and y̅ are partially defined permutations.
We may query the value of any π∈{x̅^± 1, y̅^± 1} at any point ω∈Ω.
If ω is already in the known domain of π, the known value is returned; this is a forced choice.
Otherwise, a random value is chosen uniformly from the remaining possibilities (the complement of the known domain of π^-1);
this is a free choice.
If the result of a free choice is a point in the known domain of any of x̅^± 1, y̅^± 1 we say there was a coincidence.
It is standard and easy to see that this process results in uniformly random permutations x̅ and y̅ once all values are revealed.
Begin by choosing any ω_1 ∈Ω and exposing the trajectory
ω_1^w̅_1, ω_1^w̅_1 w̅_2, …, ω_1^w̅_1⋯w̅_ℓ.
Let E_1 be the event that ω_1^w̅_1 ⋯w̅_ℓ = ω_1.
For this event to occur, we claim it is necessary that there was some coincidence among our queries of the form ω^w̅_t = ω_1 (this is the crucial part of the argument).
If ℓ(u) = 0 or ℓ(v) = 0 the argument is easy, so assume u and v have positive length.
We may assume w_1 = x and w_ℓ = y^-1 since w is cyclically reduced.
If there is no coincidence of the given form, the trajectory of ω_1 under u̅ does not return to ω_1, so ω_1 cannot be added to the known domain of y̅.
Subsequently, during the negative part of the trajectory, unless there is a coincidence of the given form, ω_1 can be added to the known domains of x̅^-1 and y̅^-1 only.
Therefore at the final step ω_1 is not in the known domain of y̅,
so if the final step is forced then the result is not ω_1,
and if the final step is free then the result is not ω_1 by hypothesis.
This proves the claim.
Since the probability that any given free choice results in ω_1 is at most 1 / (n - ℓ), it follows by a union bound that
ℙ(ω_1^w̅ = ω_1) ≤ ℓ/(n-ℓ).
Conditional on the event E_1 choose a new point ω_2 outside the trajectory of ω_1, examine the trajectory of ω_2, and so on.
In general, at iteration i, conditional on the event ⋂_j < i E_j where E_j = {ω_j^w̅ = ω_j}, choose a point ω_i ∈Ω outside the union of the trajectories of ω_1, …, ω_i-1
and query the trajectory of ω_i.
In order for the event E_i = {ω_i^w̅ = ω_i} to occur it is necessary that there be a coincidence of the form ω^w̅_t = ω_i.
Therefore
ℙ(ω_i^w̅ = ω_i | E_1, …, E_{i-1}) ≤ ℓ/(n - iℓ).
Let k = ⌊n/2ℓ⌋.
Since the event {w̅=1} is contained in E_1 ∩⋯∩ E_k, it follows that
ℙ(w̅ = 1) ≤ ∏_{i=1}^k ℓ/(n-iℓ) ≤ (2ℓ/n)^{n/2ℓ}.
The proof above is essentially that of Section 3 of <cit.>.
An error in that argument was identified in <cit.>,
but the problem does not arise for words of the special form w = uv^-1,
as explained in the third paragraph of the proof.
§ THE ORDER OF THE GROUP
Now let x̅, y̅ ∈ S_n be uniformly random and let G = ⟨x̅, y̅⟩.
There is a constant c > 0 such that
ℙ(|G| < exp(c(n log n)^1/2)) ≤ exp(-c(n log n)^1/2).
Consider the elements of G of the form u̅ with u ∈{x,y}^* and ℓ(u) < r (for some r).
The number of such u is 1 + 2 + ⋯ + 2^{r-1} = 2^r - 1.
Applying the previous proposition, the probability that any two such u̅ and u̅' are equal is bounded by
4^r (4r/n)^{n/4r} ≤ exp(c_1 r - c_2 (n/r) log(n/r))
for some constants c_1, c_2 > 0.
Choosing r = c_3 (n log n)^1/2 for a small enough constant c_3>0, we obtain a bound of the required form.
Failing this event, |G| ≥ 2^r - 1, so the result is proved.
A beautiful recent result of Sun and Wilmes <cit.> (building on seminal work of Babai <cit.>) classifies primitive coherent configurations with more than exp( C n^1/3 (log n)^7/3) automorphisms.
A corollary is a CFSG-free determination of the uniprimitive subgroups of S_n of order greater than the same bound.
Much stronger bounds for the order of 2-transitive groups have been known for a long time <cit.>.
Thus we know there are at most two conjugacy classes of primitive maximal subgroups M < S_n apart from A_n such that |M| > exp(C n^1/3 (log n)^7/3),
and each satisfies |M| = exp O(n^{1/2} log n).
Since the probability that two random permutations lie in a common conjugate of a maximal subgroup M is at most 1/[S_n:M], the theorem follows.
This proof was essentially anticipated in Remark 1 of <cit.>.
|
http://arxiv.org/abs/2307.03389v1
|
20230707045846
|
A Dynamic Equivalent Method for PMSG Based Wind Farms Under Asymmetrical Faults
|
[
"Dongsheng Li",
"Chen shen"
] |
eess.SY
|
[
"eess.SY",
"cs.SY"
] |
A Dynamic Equivalent Method for PMSG Based Wind Farms Under Asymmetrical Faults
Dongsheng Li, Student Member, IEEE, Chen Shen, Senior Member, IEEE,
This work is supported by the National Natural Science Foundation of China under Grant U2166601. (Corresponding author: Chen Shen.)
Dongsheng Li and Chen Shen are with the State Key Laboratory of Power Systems, Department of Electrical Engineering, Tsinghua University, Beijing 100084, China (e-mail: lids19@mails.tsinghua.edu.cn; shenchen@mail.tsinghua.edu.cn).
=====================================================================================================================================================================================================================================================================================================================================================================================================================================================================
In this paper, a three-machine equivalent method applicable to asymmetrical faults is proposed, considering the operating wind speed and fault severity. Firstly, direct-driven permanent magnet synchronous generator wind turbines (PMSGs) are clustered based on their different active power response characteristics, considering the wind speed, the fault severity, and the negative sequence control strategy. Further, single-machine equivalent methods are proposed for each cluster of PMSGs. In particular, for the PMSGs with ramp recovery characteristics, a single-machine equivalent model with multi-segment slope recovery is proposed, which more accurately reflects the characteristics of the wind farm during fault recovery. Moreover, an iterative simulation method is proposed to obtain the required clustering indicators before the actual occurrence of faults. Therefore, the proposed equivalent method can be used to analyze any anticipated contingency. Finally, the effectiveness of the proposed method is verified on a modified IEEE 39-bus system.
PMSG, asymmetrical fault, negative sequence control, dynamic equivalent method, anticipated contingency.
§ INTRODUCTION
Due to the advantages of renewability and environmental friendliness, wind power has been widely developed. However, because of the randomness and fluctuation of wind power, it can have a significant impact on the stability of the power system <cit.>. Therefore, studying the impact of large-scale wind power on power systems has become an important issue. However, a wind farm typically contains tens or hundreds of wind turbines. If every wind turbine is modeled in detail, problems such as large memory consumption and low simulation efficiency arise <cit.>. Hence, to analyze wind farms more efficiently, an accurate dynamic equivalent method for wind farms is urgently needed.
Nowadays, most studies focus on the equivalent modeling of wind farms under symmetrical faults. However, most of the faults in power systems are asymmetrical faults <cit.>, which will cause overvoltage on the non-faulted phase <cit.>. In order to study the response characteristics of a wind farm under asymmetrical faults more efficiently, it is necessary to develop a dynamic equivalent method for wind farms under asymmetrical faults. In addition, when studying the effectiveness of different negative sequence control strategies at the wind farm level <cit.>, equivalent modeling of the wind farm is of great importance to improve the simulation efficiency. As a result, equivalent models for wind farms are required to improve the efficiency of wind farm analysis under asymmetrical faults.
There are two types of equivalent methods for wind farms: single-machine equivalent methods and multi-machine equivalent methods. The first usually utilizes indicators such as wind speed <cit.> and pitch angle <cit.> to aggregate the wind farm into a single equivalent wind turbine. However, these methods ignore the differences in response characteristics among wind turbines, making it difficult to model the wind farm accurately. Multi-machine equivalent methods divide wind turbines into several groups and establish a single-machine equivalent model for each group. For example, some equivalent methods utilize wind speeds <cit.> or geographical locations <cit.> to cluster wind turbines. However, these methods do not consider the impact of fault severity on wind turbine response characteristics. Even when the operating wind speed and geographical location are unchanged, the response characteristics of a wind turbine can belong to different groups depending on the severity of the fault <cit.>. It is therefore inaccurate to cluster wind turbines without considering the fault conditions.
In <cit.>, wind turbines are clustered into different groups according to the operation characteristics of crowbars. However, the wind turbine response characteristics are related not only to the crowbar action characteristic but also to the control strategy and the severity of the fault. It is not accurate to cluster wind turbines by crowbar action characteristics alone. In addition, under asymmetrical faults the crowbar action also depends on the negative sequence voltage and the negative sequence control strategy, making these methods challenging to apply to asymmetrical faults. There are also methods that adopt variables such as wind speeds, terminal voltages, terminal currents <cit.>, and rotational speeds <cit.> as clustering indicators, after which clustering algorithms are applied to cluster the wind turbines. However, these indicators are all positive sequence indicators under symmetrical faults. Due to the negative sequence control, these positive sequence indicators are not sufficient to reflect the differences in operating characteristics among wind turbines when asymmetrical faults occur. Therefore, current wind farm equivalent methods are not applicable to asymmetrical faults.
There are also equivalent methods that consider adaptability to asymmetrical faults. In <cit.>, wind turbines are divided into groups based on the terminal voltage obtained from the power flow calculation, and an equivalent method for the zero-sequence network of the wind farm is proposed to improve the effectiveness of the equivalent model under asymmetrical faults. However, this method improves the effectiveness under asymmetrical faults only by estimating the zero-sequence impedance of the collection network. Without considering the impact of the negative sequence component and the negative sequence control strategy of the wind turbine, the method cannot cluster wind turbines accurately. In <cit.>, an equivalent method based on the density peak clustering algorithm was proposed. The authors suggest that the model can accommodate asymmetrical faults; however, the article does not analyze asymmetrical faults, and the clustering indicators do not consider the negative sequence component and negative sequence control strategy, so the applicability of the method under asymmetrical faults remains in doubt. In <cit.>, the negative sequence control strategy for mitigating power and DC-link voltage oscillations is considered, and an equivalent method suitable for symmetrical and asymmetrical faults is established using an improved BP algorithm. However, only the effectiveness of the method in solving short-circuit currents is proven; the output active and reactive power of the wind farm before and after equivalence are not analyzed. These studies attempt to propose equivalent methods applicable to asymmetrical faults, but all have limitations. Moreover, no literature presents the theoretical correlation between the severity of the fault and the response characteristics of wind turbines under asymmetrical faults, making it hard to cluster wind turbines accurately.
After the fault clears, wind turbines typically limit the recovery rate of the active current to reduce mechanical stress. In this case, the active power of the wind turbine will recover at a certain slope. The durations of the active power recovery of wind turbines operating at different wind speeds and voltage drops are different. In the detailed model, the recovery process of the wind farm will be composed of multiple different slopes. It is necessary to consider the differences among wind turbines during the recovery process. In <cit.>, the active power reference value of each doubly fed wind turbine during the fault recovery period is calculated through analytical calculation, and the sum is used as the active power reference value of the equivalent wind turbine. However, in direct-driven permanent magnet synchronous generator wind turbines (PMSGs), the active power reference value is determined by the constant DC-link voltage control, making it difficult to analytically solve the active power reference value of each wind turbine. As a result, this method cannot be directly applied to PMSGs.
The response characteristics of wind turbines under asymmetrical faults are related to their negative sequence control strategy. Currently, there are two commonly used negative sequence control strategies, one for mitigating active power and DC-link voltage oscillations <cit.> and the other for balancing the grid voltage by reducing the negative sequence voltage <cit.>. In this paper, considering the converter capacity limitation, the above two negative sequence control strategies are theoretically analyzed separately to find the correspondence between external fault conditions and the response characteristics of PMSGs. In the equivalent modeling process, the differences in the recovery characteristics of each PMSG are considered. Moreover, in order to obtain the clustering indicators before the occurrence of the fault, a static equivalent model of the PMSG in the positive and negative sequence networks is constructed, and the clustering indicators of each wind turbine are calculated by an iterative simulation method, making the method applicable to anticipated contingencies. The main contributions are listed as follows:
1) A clustering method considering wind speeds, fault severities, and negative sequence control strategies is proposed for PMSGs. All possible response characteristics under the two negative sequence control strategies are analyzed. Further, the clustering boundaries are derived theoretically.
2) Single-machine equivalent models are introduced for each subgroup of PMSGs. For the cluster of PMSGs with ramp recovery characteristics, a multi-segment slope recovery equivalent model is proposed to reflect the differences of each PMSG during the fault recovery period.
3) An iterative simulation method for solving the clustering indicators is presented, which can analyze anticipated contingencies. The actual PCC voltage and clustering indicators can be obtained before the fault actually occurs.
The rest of the paper is organized as follows: the structure and control strategy of the PMSG is presented in Section II. A clustering method is put forward in Section III. Different single-machine equivalent models are proposed for different clusters of PMSGs in Section IV. An iterative simulation method for solving the clustering indicators is introduced in Section V. The proposed equivalent method is verified in a modified IEEE 39-bus system in Section VI. Conclusions are drawn in Section VII.
§ STRUCTURE AND CONTROL STRATEGY OF PMSGS
An asymmetrical fault causes imbalance in the grid voltage, and due to the presence of negative sequence active power, the DC-link capacitor voltage of the PMSG exhibits fluctuations at twice the fundamental frequency. For PMSGs under asymmetrical faults, there are two commonly used negative sequence control strategies: one aims to balance the grid-side voltage, and the other aims to mitigate the double-frequency oscillations in the DC link. This section introduces the structure of the PMSG used in this paper and the implementation of these two negative sequence control strategies.
§.§ Structure of PMSGs
The machine-side converter of the PMSG consists of a diode rectifier circuit and a boost converter circuit. The grid-side converter consists of a controlled inverter bridge composed of insulated gate bipolar transistors (IGBT). In addition, a chopper circuit is connected in parallel with the DC-link capacitor. The models of other parts of the PMSG, such as the wind turbine, the drive train system, and the synchronous generator, are consistent with common modeling methods, which can be found in <cit.>.
As for the control system, the machine-side converter of the PMSG can control the terminal current of the synchronous generator according to the boost circuit, thereby changing the rotor speed to ensure that the wind turbine operates in the optimal tip speed ratio state, thus achieving the maximum power point tracking (MPPT) control. On the DC side, when the voltage of the DC capacitor reaches the threshold, the chopper circuit will operate to prevent overvoltage. The control strategy of the grid-side converter will be introduced in detail in the following parts. The structure of the PMSG is shown in Fig. <ref>.
§.§ Negative Sequence Control Strategy for Mitigating the Grid Voltage Unbalance
During normal operation, the grid-side converter of PMSG adopts the positive sequence voltage-oriented vector control. The dq-axis components of the grid voltage can be derived as:
v_d=e
v_q=0
where e is the magnitude of grid voltage space vector; v_d and v_q are the dq-axis components of the grid voltage, respectively.
The active and reactive output power of GSC are:
P=3/2ei_d
Q=3/2ei_q
where i_d and i_q are the d-axis and q-axis currents of grid side, respectively.
During normal operation, the reference value of active power is obtained by the constant DC-side voltage control, while the reference value of reactive power is generally maintained at 0 to achieve the unit power factor control. When an asymmetrical fault occurs, the PMSG needs to inject an appropriate amount of positive sequence reactive current into the grid and absorb negative sequence reactive current from the grid to reduce the degree of unbalance in the grid voltage according to the grid code <cit.>. The reference values of the positive and negative sequence reactive currents in the studied PMSG are:
I_t^+=K^+ (0.9- U^+) I_N,(0.6≤ U^+ ≤ 0.9)
I_t^-=K^- U^- I_N
where the superscripts "+" and "-" denote the positive and negative sequence components, respectively; I_t^+ is the reference value of the positive sequence reactive current injected by the PMSG; I_t^- is the reference value of the negative sequence reactive current absorbed by the PMSG; K^+ and K^- are the positive and negative sequence reactive current factors, respectively; U^+ and U^- are the magnitudes of the positive and negative sequence voltages in per unit, respectively; I_N is the rated current of the PMSG.
After meeting the requirement of the grid code, the remaining capacity of the converter should be used to output possible maximum positive sequence active power to maintain the stability of the DC-link capacitor voltage <cit.>. Meanwhile, the negative sequence active current is maintained at 0. Thus, the reference value of negative sequence current and positive sequence q-axis current can be derived by:
I_ref^-=I_t^-
I_qref^+=I_t^+
where I_ref^- and I_qref^+ are the reference value of negative sequence current and positive sequence q-axis current, respectively.
In order to keep the output current within the maximum current limit of the converter, the positive and negative sequence currents should satisfy the following inequality:
| I_ref^- |+| I_ref^+ | ≤ I_max
Combining equations (<ref>), (<ref>) and (<ref>), the reference value of positive sequence active current can be derived by:
I_dref^+=min{I_dref1,I_dmax^+}
I_dmax^+ = √((I_max - K^- U^- I_N)^2 - (I_t^+)^2)
where I_dref^+ is the reference value of the positive sequence active current; I_dref1 is obtained by the constant DC-link voltage control, which is the same as the control strategy during normal operation; I_dmax^+ is the maximum value of positive sequence active current; I_max is the maximum current allowed through the converter.
In summary, the positive and negative sequence dq-axis components of the terminal current of the PMSG can be derived by <cit.>:
I_dref^+=min{I_dref1,I_dmax^+}
I_qref^+=K^+ (0.9- U^+) I_N,(0.2≤ U^+ ≤ 0.9)
I_dref^-=-K^-U_q^-I_N
I_qref^-=K^- U_d^- I_N
where I_dref^- and I_qref^- are reference values of negative sequence dq-axis components of terminal currents of the PMSG, respectively; U_d^- and U_q^- are negative sequence dq-axis components of the terminal voltage in per unit, respectively.
Under the control strategy, the PMSG can inject and absorb the required reactive power while outputting the positive sequence active power to maintain the stability of the DC-link capacitor voltage. If the positive sequence active current is lower than the pre-fault value due to the converter capacity limitation after the fault clears, the recovery rate of the active current is limited to reduce the mechanical stress on the PMSG <cit.>. The control strategy is also shown in Fig. <ref>.
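A minimal per-unit sketch of this current-reference logic is given below; the reactive current factors, rated values, and the DC-voltage-control output i_dref1 are assumed inputs with illustrative defaults.

```python
# Per-unit sketch of the current references above (voltage-balancing control).
# K_pos, K_neg, I_N, I_max are illustrative defaults; i_dref1 is the output of
# the constant DC-link voltage controller and is assumed given.
import math

def current_refs(U_d_pos, U_d_neg, U_q_neg, i_dref1,
                 K_pos=2.0, K_neg=2.0, I_N=1.0, I_max=1.1):
    U_pos = abs(U_d_pos)                      # q-axis positive voltage is 0
    U_neg = math.hypot(U_d_neg, U_q_neg)
    i_q_pos = K_pos * (0.9 - U_pos) * I_N     # grid-code reactive injection
    i_d_max = math.sqrt(max((I_max - K_neg * U_neg * I_N) ** 2 - i_q_pos ** 2, 0.0))
    return {
        "i_d_pos": min(i_dref1, i_d_max),     # capacity-limited active current
        "i_q_pos": i_q_pos,
        "i_d_neg": -K_neg * U_q_neg * I_N,
        "i_q_neg": K_neg * U_d_neg * I_N,
    }
```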
§.§ Negative Sequence Control Strategy for Mitigating the Oscillation in Active Power
When using a negative sequence control strategy for mitigating the twice fundamental frequency oscillations, the reference values of currents can be derived by:
[ I_dref^+; I_qref^+; I_dref^-; I_qref^- ]=
[ U_d^+ U_q^+ U_d^- U_q^-; U_q^+ -U_d^+ U_q^- -U_d^-; U_d^- U_q^- U_d^+ U_q^+; U_q^- -U_d^- -U_q^+ U_d^+ ]^-1×2/3[ P_ref; Q_ref; 0; 0 ]
where U_d^+, U_q^+, U_d^- and U_q^- are positive and negative sequence dq-axis components of terminal voltages, respectively; P_ref and Q_ref are the DC components of the output active and reactive power, respectively.
For convenience of calculation, the reference value of reactive power is set to 0. Additionally, in order to maintain the DC-link voltage, the active power is set to the pre-fault value. If the output active power is limited by the converter capacity, the reference value of active power will be set to the maximum value within the converter capacity. At this time, the reference values for currents can be derived by:
[ I_dref^+; I_qref^+; I_dref^-; I_qref^- ]=2P_ref/3D[ U_d^+; U_q^+; -U_d^-; -U_q^- ]
where P_ref and D can be expressed as:
P_ref=min{P_0,P_max}
D=(U_d^+)^2+(U_q^+)^2-(U_d^-)^2-(U_q^-)^2
where P_0 is the pre-fault active power of the PMSG; P_max is the maximum active power under converter capacity limitation, which will be introduced in the following section.
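The following per-unit sketch illustrates these reference calculations for the oscillation-mitigating control; variable names and the default current limit are illustrative assumptions.

```python
# Per-unit sketch of the references above (oscillation-mitigating control):
# the four dq current references scale the sequence voltages by 2*P_ref/(3*D),
# with P_ref capped at the converter capacity limit P_max.
import numpy as np

def current_refs_osc(U_d_pos, U_q_pos, U_d_neg, U_q_neg, P0, I_max=1.1):
    D = U_d_pos**2 + U_q_pos**2 - U_d_neg**2 - U_q_neg**2
    U_pos = np.hypot(U_d_pos, U_q_pos)
    U_neg = np.hypot(U_d_neg, U_q_neg)
    P_ref = min(P0, 1.5 * (U_pos - U_neg) * I_max)   # P_ref = min(P_0, P_max)
    return (2.0 * P_ref / (3.0 * D)) * np.array(
        [U_d_pos, U_q_pos, -U_d_neg, -U_q_neg])      # [i_d+, i_q+, i_d-, i_q-]
```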
§ CLUSTERING BOUNDARIES FOR ACTIVE POWER DYNAMIC RESPONSE CHARACTERISTICS
The active power dynamic response characteristics of PMSGs can be divided into two parts: the response characteristics during the fault and the response characteristics after the fault clears. In this section, the active power dynamic response characteristics of a single PMSG will be classified based on different wind speeds and terminal voltages, considering the negative sequence control strategy.
§.§ Clustering Boundaries Under the Negative Sequence Control for Mitigating Unbalanced Voltage
If the converter capacity is sufficient, the output active power of the PMSG can restore to the pre-fault value during the fault. If the external fault is severe, most of the converter's capacity is used to mitigate the unbalanced voltage. The output active power of the PMSG is limited by the converter's capacity and unable to reach the pre-fault value. Therefore, the active power characteristics of a PMSG can be divided into two groups during the fault: 1) the active power can restore to the pre-fault value; 2) the active power is limited by the converter's capacity.
Since the energy accumulated by the double-frequency component of active power on the DC-link capacitor in one cycle is almost 0, it can be considered that the DC-link capacitor voltage is only related to the DC component of active power. When the output DC active power during the fault equals the pre-fault value while the converter current is at its maximum, the pre-fault power is the critical active power separating these two groups of response characteristics under the fault. Due to the fast response of PMSGs, it can be assumed that PMSGs are able to adjust the output dq-axis currents to the reference values immediately <cit.>. Therefore, the relationship between the critical initial active power and the voltages is as follows:
P_0=P_fault=P_cri1
I_dref^+=I_dmax^+
P_fault=3/2(I_dref^+U_d^++I_qref^+U_q^++I_dref^-U_d^-+I_qref^-U_q^-)
where P_0 is the pre-fault active power of the PMSG; P_fault is the DC component of the output active power of the PMSG during the fault; P_cri1 is the first critical active power. Due to the positive sequence voltage-oriented vector control, U_q^+ equals 0. Substituting (<ref>) into (<ref>), the first critical active power can be derived as:
I_dmax^+=√((I_max-K^-U^- I_N)^2-(K^+ (0.9-U^+) I_N)^2)
P_cri1=3/2I_dmax^+U_d^+
After the fault clears, the positive sequence voltage amplitude is close to 1. The active power depends on the positive sequence d-axis current according to (<ref>). If the positive sequence d-axis current at the moment of fault clearance (I_d^f) is higher than the pre-fault d-axis current (I_d0), the output active power will rise above the pre-fault value and return to it after a short period of oscillation. If I_d^f < I_d0, the active power will recover at a fixed rate due to the limitation on the recovery rate of the active current. In this case, the second critical initial active power can be derived by:
I_dmax^+=I_dcri2
P_cri2=3/2I_dcri2U_d0^+
where I_dcri2 is the second critical initial d-axis current; P_cri2 is the second critical active power; U_d0^+ is the positive sequence d-axis voltage during the normal operation, which is close to 1.
Based on the above analysis, we can classify the active power response characteristics into the following three categories:
1) When P_0<P_cri1, we can get I_dref^+=I_dref1. The DC component of the active power of PMSG has recovered to the pre-fault value during the fault, which can keep the DC-link voltage at the reference value.
2) When P_cri1≤ P_0<P_cri2, we can get I_dref^+=I_dmax^+. The DC component of the active power of PMSG is lower than the pre-fault value while the positive sequence active current is higher than the pre-fault active current during the fault. Thus, when the voltage restores, the output active power will rise above the pre-fault value and return to the pre-fault value after a short period of oscillation.
3) When P_0 ≥ P_cri2, we can get I_dref^+=I_dmax^+. There is a ramp recovery process after the fault clears.
The three types of active power response characteristics are shown schematically in Fig. <ref>. The blue curve represents the active power response characteristics, while the red curve shows the DC component of the active power. In the figure, t_0 denotes the fault inception time, t_c the fault clearance time, and t_n the time when the PMSG returns to normal operation.
Moreover, the critical wind speeds of each sub-cluster can be calculated from the critical power and the wind power curve of the PMSG. The critical wind speeds can be derived by:
V_cri1=f^-1(P_cri1)
V_cri2=f^-1(P_cri2)
where f^-1 is the inverse function of the wind power curve. The wind power curve is shown in Fig. <ref>.
According to (<ref>) and (<ref>), the clustering boundary conditions of the three types of response characteristics are shown in Fig. <ref>. The two red planes above and below refer to the rated wind speed and the cut-in wind speed of the PMSG, respectively. When the PMSG operates at or above its rated wind speed, the output power is the rated active power due to the existence of pitch angle control. The red plane perpendicular to the U^+U^- plane represents the situation where the positive and negative sequence voltage drops are equal. The positive sequence voltage is usually greater than the negative sequence voltage in an asymmetrical fault, so only the right-hand part of the plane needs to be analyzed. The three clusters labeled in Fig. <ref> correspond to the three clusters in Fig. <ref>.
In order to show the clustering boundary more clearly, we take the case where the negative sequence voltage drops to 0.2 p.u. as an example and draw the clustering boundaries based on the positive sequence voltage and the wind speed as shown in Fig. <ref>. When the positive and negative sequence terminal voltages and the operating wind speed of a PMSG are known, it can be quickly determined which cluster the active power response characteristics of the PMSG belong to by using the clustering boundaries shown in Fig. <ref>.
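A minimal sketch of this clustering rule is given below; the wind power curve f and the per-unit constants are assumed inputs with illustrative defaults.

```python
# Sketch of the clustering rule of this subsection: given a PMSG's operating
# wind speed and sequence voltage magnitudes, return its cluster (1, 2, or 3).
# `wind_power` is the wind power curve f(V_w); all constants are illustrative.
import math

def cluster(V_w, U_pos, U_neg, wind_power,
            K_pos=2.0, K_neg=2.0, I_N=1.0, I_max=1.1, U_d0=1.0):
    P0 = wind_power(V_w)                       # pre-fault active power
    i_q = K_pos * (0.9 - U_pos) * I_N
    i_d_max = math.sqrt(max((I_max - K_neg * U_neg * I_N) ** 2 - i_q ** 2, 0.0))
    P_cri1 = 1.5 * i_d_max * U_pos             # first critical active power
    P_cri2 = 1.5 * i_d_max * U_d0              # second critical active power
    if P0 < P_cri1:
        return 1    # active power restored during the fault
    if P0 < P_cri2:
        return 2    # capacity-limited, short overshoot after clearance
    return 3        # ramp recovery after clearance
```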
§.§ Clustering Boundaries Under the Negative Sequence Control for Eliminating Oscillations in active power
Under the negative sequence control for eliminating the twice-fundamental-frequency oscillations in active power, the response characteristics of the PMSG can also be divided into three categories, as shown in Fig. <ref>. By comparing the maximum output active power during the fault with the initial active power, it can be determined whether the active power characteristics belong to Cluster I or Cluster II in Fig. <ref>. When the PMSG outputs the maximum active power, the positive and negative sequence currents should satisfy the following equation:
√((I_dref^+)^2+(I_qref^+)^2) +√((I_dref^-)^2+(I_qref^-)^2)=I_max
Substituting (<ref>) into (<ref>), we can get:
(2P_max/3D)(U^+ + U^-) = I_max
Simplifying (<ref>), P_cri1 can be derived as:
P_cri1=P_max
P_max=3/2(U^+ -U^- )I_max
Similarly, comparing the magnitude of the positive sequence d-axis current at the moment of fault clearance with the pre-fault d-axis current determines whether the PMSG undergoes a ramp recovery process. If there is a ramp recovery process, the positive sequence active current is lower than the pre-fault value, and the active power cannot rise to the pre-fault value during the fault. As a result, the reference active power equals the maximum active power during the fault (P_ref=P_max) according to (<ref>). The second critical initial active power can be derived by:
I_dcri2=I_dref^+
I_dref^+=2P_max/3DU_d^+
where I_dcri2 is the second critical initial d-axis current.
Substituting (<ref>) into (<ref>), we can get:
I_dcri2=U_d^+I_max/ U^+ +U^-
P_cri2=3/2I_dcri2U_d0^+
The critical wind speeds considering the negative sequence control strategy mentioned above are shown in Fig. <ref>. The critical wind speeds are limited between the cut-in wind speed and the rated wind speed.
Similarly, the critical wind speeds are shown in Fig. <ref> when the negative sequence voltage drops to 0.2 p.u.
§ DYNAMIC EQUIVALENT MODELING METHOD FOR PMSGS
After clustering the PMSGs, a single-machine equivalent model can be adopted for each cluster. This section introduces the dynamic equivalent models for the above three clusters of PMSGs.
§.§ Capacity Weighted Equivalent Method
The capacity-weighted equivalent method is commonly employed to aggregate the wind turbines in the same cluster <cit.>. For the PMSGs in the first and second clusters shown in Fig. <ref>, the characteristics of the active power response after aggregation remain the same as those of a single PMSG. Therefore, the capacity-weighted equivalent method can be used to aggregate these two clusters of PMSGs. The effectiveness of this method has been theoretically analyzed in <cit.>. The equivalent parameters can be calculated by:
S_eq=∑_i=1^NS_i
X_eq=X/N,R_eq=R/N
H_t_eq=1/N∑_i=1^NH_t_i,H_g_eq=1/N∑_i=1^NH_g_i
where N is the number of PMSGs in one cluster; S is the capacity of a PMSG; X and R are the reactance and resistance of stator; H_t is the turbine inertia time constant; H_g is the generator inertia time constant; the subscript “eq” denotes the equivalent value; the subscript “i" denotes the parameter of the ith PMSG.
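A minimal sketch of this aggregation is shown below, assuming identical stator impedance parameters within the cluster.

```python
# Sketch of the capacity-weighted aggregation above for one cluster, assuming
# the machines share the same stator reactance X and resistance R.
def aggregate(S, H_t, H_g, X, R):
    """S, H_t, H_g are per-machine lists; X, R are common stator parameters."""
    N = len(S)
    return {
        "S_eq": sum(S),
        "X_eq": X / N,
        "R_eq": R / N,
        "H_t_eq": sum(H_t) / N,
        "H_g_eq": sum(H_g) / N,
    }
```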
§.§ Equivalent Model with Multi-Segment Slope Recovery
For PMSGs in Cluster III, since the terminal voltage and the operating wind speed of each PMSG are different, the duration of the slope recovery process is also different. The sum of the active power of PMSGs in Cluster III will recover at different rates in different time periods, as shown by the blue line in Fig. <ref>. If the capacity weighted equivalent method is still used for this group of PMSGs, the active power of the equivalent PMSG will recover to the pre-fault value at a fixed slope, as shown by the green line in Fig. <ref>. In order to present an accurate equivalence of the fault recovery process, an equivalent model with multi-segment slope recovery is proposed.
Firstly, we can calculate the duration of the slope process of each PMSG by their terminal voltages and operating wind speeds. Then, based on the recovery duration of each PMSG, we can limit the recovery rate of the equivalent PMSG at different values in different time periods. In this way, an accurate equivalence of the fault recovery process is achieved.
For the PMSGs in Cluster III, the positive sequence d-axis current of each wind turbine is limited by the capacity of the converter according to (<ref>) and (<ref>). Thus, the positive sequence d-axis current of each PMSG at the moment of fault clearance is I_dcri2. Taking the negative sequence control for mitigating unbalanced voltage as an example, the duration of the slope process of each PMSG can be derived by:
t_i=(I_d0i-I_dcri2_i)/k
where t_i is the time duration of the slope recovery process of the ith PMSG in Cluster III; I_d0i is the pre-fault d-axis current of the ith PMSG; I_dcri2_i is the second critical d-axis current of the ith PMSG; k is the maximum recovery rate of the d-axis current.
The pre-fault active power and pre-fault d-axis current of the ith PMSG in Cluster III can be derived as follows:
P_0i=f(V_wi)
I_d0i=2P_0i/3U_d0^+
where V_wi is the operating wind speed of the ith PMSG.
Substituting (<ref>), (<ref>) and (<ref>) into (<ref>), the time duration of the ith PMSG can be derived as follows:
t_i=(2f(V_wi)/3e-I_dcri2_i)/k
I_dcri2_i=√((I_max-K^- U_i^-I_N)^2-(K^+ (0.9-U^+_i) I_N)^2)
We can see that t_i is determined by the PMSG parameters, the terminal voltage, and the operating wind speed of the ith PMSG. Sorting t_i from the smallest to the largest, the d-axis current recovery rate of the equivalent PMSG after the fault clears is limited by:
k_lim =
N_3 k, if t < t_1
(N_3 - j)k, if t_j ≤ t < t_{j+1}, j = 1, …, N_3 - 1
k, if t ≥ t_{N_3}
where N_3 is the number of PMSGs in Cluster III; t is timed from the moment of fault clear. The recovery rate is maintained at k when t≥ t_N_3 to model the overshoot process of the constant DC-link voltage control. The schematic diagram of the proposed method is shown in Fig. <ref>.
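A minimal sketch of the resulting rate limit k_lim(t) is given below; t is measured from the moment of fault clearance and t_sorted holds the durations t_i in ascending order.

```python
# Sketch of the multi-segment rate limit k_lim(t): the equivalent machine's
# d-axis current recovery rate steps down as individual machines finish their
# ramps. `t_sorted` holds the durations t_i in ascending order.
def k_lim(t, t_sorted, k):
    """t: time since fault clearance; k: per-machine recovery rate limit."""
    still_ramping = sum(1 for t_i in t_sorted if t < t_i)
    return max(still_ramping, 1) * k   # rate stays at k once all have finished
```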
§.§ Equivalent Method for Collector Lines
The equal voltage drop method is adopted to calculate the parameters of the equivalent collector lines <cit.>. Since the positive and negative sequence impedance parameters of the collector lines inside a wind farm are generally equal, we only use the equal positive sequence voltage drop method to equate the collector lines under asymmetrical faults. The average positive sequence terminal voltage of PMSGs in the same cluster can be derived by:
U̇_ave^+=1/N∑_i=1^NU̇_i^+
where U̇_ave^+ is the average voltage phasor of PMSGs; N is the number of PMSGs in the same cluster; U̇_i^+ is the positive sequence terminal voltage phasor of the ith PMSG.
Since the average voltage drop before and after equivalence is equal, the parameter of the equivalent collector line can be derived by:
Z_eq=U̇_pcc^+-U̇_ave^+/∑_i=1^Nİ_i^+
where Z_eq is the parameter of the equivalent collector line; U̇_pcc^+ is the positive sequence PCC voltage phasor; İ_i^+ is the output positive sequence current phasor of the ith PMSG.
The proposed method in this section requires parameters such as the terminal voltage and the output current of each PMSG, which will be obtained in the following section.
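A minimal sketch of this calculation with complex phasors is shown below; the PCC voltage, terminal voltages, and injected currents are assumed to come from the solver of Section V.

```python
# Sketch of the equal-voltage-drop collector-line equivalent using Python
# complex numbers as phasors. U_pcc, U (terminal voltages), and I (injected
# currents) are assumed outputs of the solver in Section V.
def equivalent_impedance(U_pcc, U, I):
    U_ave = sum(U) / len(U)            # average terminal voltage phasor
    return (U_pcc - U_ave) / sum(I)    # Z_eq from the equal voltage drop
```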
§ METHOD FOR SOLVING THE CLUSTERING INDICATORS
If wind speeds and terminal voltages of PMSGs are known, PMSGs can be divided into three clusters, as shown in Fig. <ref> according to the clustering boundaries. Then, PMSGs in each sub-cluster can be modeled by a single-machine equivalent method to obtain the three-machine equivalent model of the wind farm. The wind speed of each PMSG can be obtained by prediction or measurement. However, the terminal voltage of each PMSG is not only related to the wind speed but also to the severity of the fault and the response characteristics of other components in the power system, which is difficult to obtain analytically. In this section, a positive and negative sequence terminal voltage calculation method is presented, assuming that the PCC voltage is known. Then, based on this method, the actual PCC voltage is obtained by an iterative simulation method. Eventually, the equivalent model of the wind farm can be obtained.
§.§ Calculation Method for Terminal Voltages
A terminal voltage calculation method is proposed based on a real wind farm topology, as shown in Fig. <ref>. Due to the fast response of PMSGs, the terminal voltages can be calculated by the static model of PMSGs when PCC voltage is known. In this section, we analyze the negative sequence control for mitigating unbalanced voltages as an example.
We decouple the positive and negative sequence networks to calculate the positive and negative sequence terminal voltages of PMSGs, respectively. According to (<ref>), the output dq-axis currents of the PMSG in the negative sequence network are only related to the negative sequence voltage dips, while the d-axis current in the positive sequence network is affected by both positive and negative sequence voltages. Therefore, we can calculate the negative sequence terminal voltage under the negative sequence network first and then further calculate the positive sequence terminal voltage.
§.§.§ Calculation Method for Negative Sequence Terminal Voltages of PMSGs
In the negative-sequence network, the output active and reactive power of each PMSG can be derived by:
P_i^-=1.5( U_di^- I_drefi^-+ U_qi^- I_qrefi^-)
Q_i^-=1.5( U_qi^- I_drefi^– U_di^- I_qrefi^-)
where P_i^- and Q_i^- are the output active and reactive power of the ith PMSG in the negative-sequence network. Substituting (<ref>) into (<ref>), we can get:
P_i^-=0
Q_i^- = -1.5K^-((U_qi^-)^2 + (U_di^-)^2)I_N = -1.5K^-(U_i^-)^2 I_N
The current injected by the ith PMSG can be derived by:
İ_ni^-=(P_i^-+jQ_i^-/U̇_i^-)^*
where İ_ni^- is the injected current phasor of the ith PMSG in the negative sequence network; U̇_i^- is the negative sequence terminal voltage phasor of the ith PMSG.
Further, to establish the node-branch incidence matrix for each feeder of the wind farm, the branch current column vector of a feeder can be derived as:
İ_b^-=C^Tİ_n^-
where C is the node-branch incidence matrix of the feeder line; İ_n^- is the column vector of negative sequence current consisting of İ_ni^-; İ_b^- is the negative sequence branch current column vector. Further, the node voltage drop can be derived by:
ΔU̇_b^-=Z İ_b^-
ΔU̇_n^-=CΔU̇_b^-
where ΔU̇_b^- and ΔU̇_n^- are the branch voltage drop and node voltage drop, respectively; Z is the branch impedance matrix of the feeder, a diagonal matrix whose diagonal entries are the branch impedances and whose off-diagonal entries are 0. Moreover, the voltage of each node on the feeder can be updated to:
U̇^-'=U̇_pcc^-+ΔU̇_n^-
where U̇_pcc^- is the negative sequence voltage of PCC; U̇^-' is the column vector of the updated voltage of each node on the feeder.
Substituting (<ref>) and (<ref>) into (<ref>), we can get:
U̇^-'=U̇_pcc^-+CZC^Tİ_n^-
When the negative sequence voltage at PCC is known, the negative sequence terminal voltage of each PMSG can be obtained by the following steps:
1) Get the branch impedance matrix and the node-branch incidence matrix of each feeder based on the wind farm topology data and set all negative sequence terminal voltages of PMSGs to the PCC negative sequence voltage (U_i^-=U_pcc^-).
2) Calculate the revised negative sequence terminal voltage of each PMSG according to (<ref>), (<ref>) and (<ref>).
3) If |U̇^'- - U̇^-| < σ_1, the vector of negative sequence terminal voltages of all PMSGs is available in U̇^'-. If not, assign the value of U̇^'- to U̇^- and return to step 2, where σ_1 is the allowable error of the terminal voltages. A code sketch of this procedure follows.
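A minimal numpy sketch of this fixed-point iteration for one feeder is given below; C and Z are assumed to be built from the wind farm topology data, and the constants are illustrative.

```python
# numpy sketch of the fixed-point iteration above for one feeder. C is the
# node-branch incidence matrix, Z the diagonal branch impedance matrix; both
# come from the topology data. K_neg, I_N, and tolerances are illustrative.
import numpy as np

def neg_seq_voltages(U_pcc_neg, C, Z, K_neg=2.0, I_N=1.0, tol=1e-6, max_iter=100):
    n = C.shape[0]
    U = np.full(n, U_pcc_neg, dtype=complex)   # step 1: initialize to PCC value
    for _ in range(max_iter):
        Q = -1.5 * K_neg * np.abs(U) ** 2 * I_N    # negative-sequence Q (P = 0)
        I_n = np.conj(1j * Q / U)                  # injected current phasors
        U_new = U_pcc_neg + C @ Z @ C.T @ I_n      # step 2: update node voltages
        if np.max(np.abs(U_new - U)) < tol:        # step 3: convergence check
            return U_new
        U = U_new
    return U
```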
§.§.§ Calculation Method for Positive Sequence Terminal Voltages of PMSGs
In the positive-sequence network, the output active and reactive power of each PMSG can be derived by:
P_0i^+=1.5( U_di^+ I_drefi^++ U_qi^+ I_qrefi^+)
Q_0i^+=1.5( U_qi^+ I_drefi^+- U_di^+ I_qrefi^+)
while the reference values of the current are rewritten here as:
I_qrefi^+=K^+ (0.9- U_i^+) I_N
I_drefi^+=min{I_dref1i,I_dmaxi}
I_dmaxi = √((I_max - K^-|U_i^-|I_N)^2 - (I_qrefi^+)^2)
where I_dref1i is the d-axis current reference value of the constant DC-link capacitor voltage control. In order to maintain the DC voltage, the following equation should be satisfied:
U_d0^+I_d0i=U_i^+I_dref1i
Since U_d0^+ is close to 1, I_dref1i can be derived as:
I_dref1i=I_d0i/U_i^+
Since U_i^- has been solved in the previous section, it can be seen from (<ref>), (<ref>) and (<ref>) that the output active and reactive power of PMSGs in the positive sequence network are only related to the operating wind speeds and the positive sequence terminal voltages. According to (<ref>)-(<ref>), we can also solve the positive sequence terminal voltage of each PMSG using the same method proposed in Section V.A(1). The only difference is that we should replace the negative sequence components with the corresponding positive sequence components.
As a result, if the PCC voltage is known, we can solve the terminal voltage of each PMSG using the proposed method in Section V.A.
§.§ Iterative Simulation Method for Solving the PCC Voltage
When analyzing the anticipated contingencies, we need to consider the effect of fault severity on the response characteristics of PMSGs. However, PCC voltage is difficult to obtain before a fault actually occurs. Therefore, an iterative simulation method to solve the PCC voltage is presented in this section. The method can obtain the PCC voltage before the occurrence of the fault, avoiding the simulation of the detailed model to obtain the clustering indicators. Thus, the proposed equivalent method is applicable to the analysis of anticipated contingencies. The specific steps are shown as follows:
1) Input the wind speed of each PMSG and set U_pcc^+=1,U_pcc^-=0.
2) Let U̇_pcc^+ = U_pcc^+∠0° and U̇_pcc^- = U_pcc^-∠0°. Calculate the positive and negative sequence terminal voltages of each PMSG using the method proposed in Section V.A.
3) Build the equivalent model of the wind farm using the method proposed in Section III and Section IV.
4) Simulation analysis of the anticipated contingency is carried out using the equivalent model established in step 3). The positive and negative sequence PCC voltages at the moment of fault clearance (U_pcc^+' and U_pcc^-') are then obtained from the simulation result.
5) If | U_pcc^+'-U_pcc^+ |<σ_2 and | U_pcc^-'-U_pcc^- |<σ_2, turn to step 6. If not, set U_pcc^+=U_pcc^+', U_pcc^-=U_pcc^-' and turn to step 2. σ_2 is the allowable error of the PCC voltage.
6) Since the actual PCC voltage is obtained, clustering indicators of each PMSG can be calculated by the method presented in Section V.A. Eventually, the equivalent model of the wind farm can be obtained by the method proposed in Section III and Section IV.
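A minimal sketch of this outer loop is given below; `build_equivalent` wraps Sections III-IV, and `run_contingency` is a hypothetical placeholder for a simulation of the equivalent model under the anticipated fault, returning the sequence PCC voltages at fault clearance.

```python
# Sketch of the outer loop above. `build_equivalent` and `run_contingency`
# are hypothetical placeholders for the model construction of Sections III-IV
# and for the contingency simulation, respectively.
def solve_pcc_voltage(wind_speeds, build_equivalent, run_contingency,
                      sigma2=1e-3, max_iter=20):
    U_pos, U_neg = 1.0, 0.0                                  # step 1
    for _ in range(max_iter):
        model = build_equivalent(wind_speeds, U_pos, U_neg)  # steps 2-3
        U_pos_new, U_neg_new = run_contingency(model)        # step 4
        if (abs(U_pos_new - U_pos) < sigma2 and
                abs(U_neg_new - U_neg) < sigma2):            # step 5
            return U_pos_new, U_neg_new
        U_pos, U_neg = U_pos_new, U_neg_new
    return U_pos, U_neg
```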
§ METHOD VERIFICATION
The proposed method is verified under different faults on the CloudPSS platform <cit.>. A wind farm including 100 PMSGs with a rated capacity of 1.5 MW is studied as shown in Fig. <ref>. The wind farm is connected to node 30 of the IEEE 39-bus system through a transformer, as shown in Fig. <ref>. The wind speeds of PMSGs are modeled with the Jensen model, assuming that wind speeds among different feeders do not affect each other. The wind speeds of PMSGs are shown in Fig. <ref>.
§.§ Case I: Two-phase Short Circuit Fault at Terminal of the Wind Farm
A two-phase short-circuit fault between phases B and C starts at 3.0 s and clears at 3.2 s at node 30. Phases B and C are connected to the ground through an impedance. The PCC voltage can be obtained by the iterative simulation method proposed in Section V.B. The iterative process is shown in Table <ref>. The PCC voltage converges after one iteration of the equivalent model. Moreover, the terminal voltage of each PMSG at the moment before fault clearance can be calculated using the method presented in Section V.A. The calculated positive and negative sequence terminal voltage results are compared with the simulation results of the detailed model, as shown in Fig. <ref>. The maximum percentage error in the voltage of each node is 0.089%, which verifies the correctness of the voltage calculation method proposed in Section V.A.
Further, PMSGs are divided into three clusters based on the method proposed in Section III. The clustering result is shown in Table <ref>. The DC components of individual PMSG active power in different clusters are shown in Fig. <ref>. The characteristics are consistent with the analytical results, which verifies the correctness of the proposed method in Section III.
In order to demonstrate the efficiency of the dynamic equivalent method, the active and reactive power of the detailed model (DM), the single-machine equivalent model (SM) <cit.>, and the proposed three-machine equivalent model (TM) are compared, as shown in Fig. <ref> and Fig. <ref>.
Due to the severe fault, the PMSGs belong to Cluster II and Cluster III in Case I. Thus, the active power responses of the three models are essentially the same during the fault. After the fault clears, the active power of the PMSGs in Cluster III recovers at a certain rate, and the TM represents this recovery characteristic more accurately than the SM. The simulation times and the equivalent root mean square errors (RMSEs) are shown in Table <ref>.
§.§ Case II: One-phase Short Circuit Fault at Terminal of the Wind Farm
In Case II, an A-phase short-circuit fault starts at 3.0 s and clears at 3.2 s at node 30. Phase A is connected to the ground through a larger impedance, and the voltage dip is slighter than that in Case I. The wind speeds of the PMSGs are the same as those shown in Fig. <ref>. Similar to Case I, the PCC voltage and the terminal voltage of each PMSG can be obtained by the iterative simulation method proposed in Section V. The PCC voltage results are U^+_PCC=0.646 and U^-_PCC=0.360. Based on the wind speed and terminal voltage of each PMSG, the clustering result can be obtained by the method presented in Section III, as shown in Table <ref>.
The DC components of the active power of individual PMSGs in different clusters are shown in Fig. <ref>. The active and reactive powers of the different models are compared in Fig. <ref> and Fig. <ref>.
In Case II, most of the PMSGs belong to Cluster I, and only a small number of PMSGs undergo a ramp recovery process. Therefore, both the TM and the SM reproduce the active power characteristics accurately after the fault clears. However, during the fault period, the TM presents the active power characteristics more accurately than the SM, because the active powers of PMSGs in different clusters are limited by different factors.
In order to show the effectiveness of the proposed method more clearly, we present the DC components of the active power of the different models, as shown in Fig. <ref>. The SM is inaccurate after the fault clears in Case I and during the fault in Case II. In summary, the traditional equivalent method cannot reflect the active power differences among PMSGs in different clusters, while the proposed method achieves accurate equivalence under different faults.
§ CONCLUSION
Under asymmetrical faults, the proposed equivalent method takes the effects of the operating wind speed, the fault severity, and the negative sequence control into consideration. The PMSGs are clustered based on their active power response characteristics. Further, the equivalent models are proposed for each cluster of PMSGs, respectively. Moreover, an iterative simulation method for calculating the clustering indicators is presented to make the proposed method applicable to the anticipated faults. Thus, the difficulty in obtaining clustering indicators is solved. Eventually, the proposed method is validated on a modified IEEE 39-bus system, and the effectiveness of the proposed method is demonstrated by the significant reduction of simulation time compared with the detailed model and the improvement of equivalence accuracy compared with the traditional equivalent method.
|
http://arxiv.org/abs/2307.01518v1
|
20230704065904
|
Exponential stability of Euler-Bernoulli beam under boundary controls in rotation and angular velocity
|
[
"Alemdar Hasanov"
] |
math.OC
|
[
"math.OC",
"math-ph",
"math.AP",
"math.MP"
] |
Alemdar Hasanov (corresponding author)
alemdar.hasanoglu@gmail.com
Department of Mathematics, Kocaeli University, Turkey
This paper addresses the analysis of a boundary feedback system involving a non-homogeneous Euler-Bernoulli beam governed by the equation m(x)u_tt+μ(x)u_t+(r(x)u_xx)_xx=0, subject to the
initial conditions u(x,0)=u_0(x), u_t(x,0)=u_1(x) and the boundary conditions u(0,t)=0, (-r(x)u_xx(x,t) )_x=0=-k^-_r u_x(0,t)-k^-_a u_xt(0,t),
u(ℓ,t)=0, (-r(x)u_xx(x,t) )_x=ℓ=k^+_r u_x(ℓ,t)+k^+_a u_xt(ℓ,t), with boundary controls at both ends resulting from the rotation and angular velocity. The approach proposed in this study relies on the utilization of regular weak solutions, an energy identity, and a physically motivated Lyapunov function. By imposing natural assumptions on the physical parameters and other inputs, which ensure the existence of a regular weak solution, we derive a uniform exponential decay estimate for the system's energy. The decay rate constant featured in this estimate depends solely on the physical and geometric properties of the beam. These properties encompass crucial parameters such as the viscous external damping coefficient μ(x), as well as the boundary springs k^-_r,k^+_r and dampers k^-_a,k^+_a. To illustrate the practical effectiveness of our theoretical findings, numerical examples are provided. These examples demonstrate the applicability and relevance of the derived results in real-world scenarios.
Exponential stability, Euler-Bernoulli beam, boundary control, regular weak solution, energy identity, Lyapunov function, decay rate.
§ INTRODUCTION
Submarine pipelines and long bridges can be modeled as elastic beams with both ends controlled through the boundary rotation and angular velocity <cit.>. In many studies related to pipeline modeling, the pipes are described as beams resting on a rigid seabed without any penetration (see <cit.> and references therein). However, such hypotheses are not always satisfied in practice. An analysis of the torsional effects on pipe lateral buckling was given in <cit.>, where the essential influence of torsion under some specific boundary conditions was demonstrated analytically. A similar situation arises in bridge models governed by the Euler-Bernoulli beam. Namely, it is very important for the sensitivity analysis of bridges to obtain a relationship between the rotation spring constant and the bridge responses (deflections/slopes). This relationship can then be used for evaluating the support condition of bridges <cit.>. Furthermore, in the modeling of long flexible structures through the Euler-Bernoulli equation, the bending moment at the end of the beam is controlled by the linear feedback of rotation angle and angular velocity, and the shear force at the same end is controlled by the linear feedback of displacement and velocity. We refer to <cit.> and the references therein for a detailed description of such models.
Considering the effect of the above factors on both models, there is a need for a realistic model that takes into account the effects of both the rotation spring and the angular velocity damper at both ends of the beam, within the framework of the Euler-Bernoulli beam equation. In the most natural way, this can be accounted for by the corresponding boundary conditions at both ends of the beam, including linear combinations of the rotation spring and the angular velocity damper. This leads to the following mathematical model:
{[ m(x)u_tt+μ(x)u_t+(r(x)u_xx)_xx=0, (x,t) ∈Ω_T,;
u(x,0)=u_0(x), u_t(x,0)=u_1(x), x∈ (0,ℓ),;
u(0,t)=0, (-r(x)u_xx(x,t) )_x=0=-k^-_r u_x(0,t)-k^-_a u_xt(0,t),;
u(ℓ,t)=0, (-r(x)u_xx(x,t) )_x=ℓ=k^+_r u_x(ℓ,t)+k^+_a u_xt(ℓ,t),;
t∈ [0,T], ].
where Ω_T=(0,ℓ)×(0,T), ℓ>0 is the length of the beam and T>0 is the final time.
Here and below, u(x,t) is the deflection; u_t(x,t), u_x(x,t), u_xt(x,t), u_xx(x,t), -(r(x)u_xx) and -(r(x)u_xx)_x are the velocity, rotation (or slope), angular velocity, curvature, moment and shear force, respectively <cit.>. Further, m(x)=ρ(x)S(x)>0, where ρ(x) is the mass density and S(x) is the cross-section area of the beam, and r(x):=E(x)I(x)>0 represents the flexural rigidity (or bending stiffness) of the beam, where E(x)>0 is the elasticity modulus and I(x)>0 is the moment of inertia. The non-negative coefficient μ(x):=γ m(x) of viscous resistance to transverse motion of the beam represents the viscous external damping, while γ≥ 0 is the damping constant of proportionality <cit.>. Furthermore, the nonnegative constants k^-_r, k^-_a ≥ 0 and k^+_r, k^+_a ≥ 0 are the stiffnesses of the torsional springs and dampers at the left and right ends of the beam, respectively.
The boundary conditions (-r(x)u_xx(x,t) )_x=0=-k^-_r u_x(0,t)-k^-_a u_xt(0,t) and (-r(x)u_xx(x,t) )_x=ℓ=k^+_r u_x(ℓ,t)+k^+_a u_xt(ℓ,t) at the left and right ends of the beam, respectively, represent the controls resulting from the linear combination of rotation and angular velocity. In this context, the above parameters k^-_r, k^-_a, k^+_r, k^+_a are also referred to as the boundary controls.
Geometry of the problem (<ref>) is given in Fig. <ref>.
This work is devoted to the systematic study of the following issues. Under what minimum conditions imposed on the input data is the energy of the system governed by (<ref>) exponentially stable? If the system governed by (<ref>) is stable, how much does each damping parameter γ, k^-_a and k^+_a contribute to this stability? It should be especially noted that the nature of both the external and the boundary damping mechanisms greatly changes the nature of the vibration, and hence controls the response of the beam, as the experimental and theoretical results discussed in <cit.> show.
The modeling of large flexible structures through a class of Euler-Bernoulli beams with structural damping began to be developed with the studies <cit.>. The exponential stability of distributed systems governed by the Euler-Bernoulli beam equation under classical boundary conditions has been discussed starting from the work <cit.>, and more general results were then obtained in <cit.>. Various methods have been developed in the literature for initial boundary value problems for Euler-Bernoulli equations with boundary feedback. Among these methods, the spectral method has turned out to be efficient and useful, since it allows one to establish the Riesz basis property, the most fundamental property of a linear vibrating system <cit.>. In turn, this property means that the generalized eigenvectors of the system form an unconditional basis of the (state) Hilbert space. Combined with the semigroup approach, this allows one to derive the spectrum-determined growth condition and the exponential stability of a system.
In the exponential stability estimate ℰ(t) ≤ M e^-ω tℰ(0) obtained in the studies listed above, the relationship of the decay rate parameter ω >0 to the physical and geometric parameters of the beam, including the damping coefficient μ(x) ≥ 0 and the stiffnesses k^-_a, k^+_a ≥ 0 of the torsional dampers, has not been determined. Since the relationship of this decay rate parameter to the damping parameters is not known, such an estimate is of limited value in concrete applications.
In this paper, we develop an approach based on the weak solution theory for the initial boundary value problem (<ref>), energy estimates, and the Lyapunov method to establish an exponential stability estimate for system (<ref>) under minimal conditions imposed on the input data. Furthermore, this approach allows us to identify the role of both types of parameters in the exponential decay of the solution. To our knowledge, this model, defined by the initial boundary value problem (<ref>), in which the viscous external and boundary (torsional) damping factors are considered together and in the presence of torsional springs, is discussed for the first time in the literature.
The rest of the paper is structured as follows. Energy identity and dissipativity of system (<ref>) are derived in Section 2. In Section 3, the Lyapunov function is introduced and then energy decay estimate for system (<ref>) is derived. Numerical examples are presented in Section 4. Some concluding remarks are given in the final Section 5.
§ NECESSARY ESTIMATES FOR THE WEAK SOLUTION OF PROBLEM (<REF>)
We assume that the inputs in (<ref>) satisfy the following basic conditions:
{[ m, μ, r ∈ L^∞(0,ℓ),;
0<m_0≤ m(x)≤ m_1, 0≤μ_0≤μ(x)≤μ_1,;
0<r_0≤ r(x)≤ r_1, x∈ (0,ℓ),;
u_0∈ H^2(0,ℓ), u_1∈ L^2(0,ℓ),;
k^-_r, k^-_a,k^+_r, k^+_a ≥ 0,;
γ+k^-_r+ k^-_a+k^+_r+k^+_a >0. ].
For the case when all the parameters k^-_r, k^-_a,k^+_r, k^+_a are equal to zero, under conditions (<ref>), the existence of the weak solution u∈ L^2(0,T; 𝒱^2(0,ℓ)), with u_t∈ L^2(0,T;L^2(0,ℓ)) and u_tt∈ L^2(0,T;H^-2(0,ℓ)) of the initial boundary value problem (<ref>) was proved in <cit.>. Here and below,
𝒱^2(0,ℓ):={v∈ H^2(0,ℓ): v(0)=v(ℓ)=0},
and H^2(0,ℓ) is the Sobolev space <cit.>. For system (<ref>), with k^-_r, k^-_a,k^+_r, k^+_a>0, the existence of the weak solution u∈ L^2(0,T; 𝒱^2(0,ℓ)) can be proved in a similar way. In this section we derive the necessary energy identities and estimates for the weak solution of problem (<ref>).
Assume that the inputs in (<ref>) satisfy the basic conditions (<ref>).
Then the following energy identity holds:
ℰ(t) + ∫_0^t ∫_0^ℓμ(x) u_τ^2 (x,τ) dx d τ
=ℰ(0)-k^-_a ∫_0^t u_xτ^2(0,τ) d τ-k^+_a ∫_0^t u_xτ^2(ℓ,τ) d τ, t∈[0,T],
where
ℰ(t)=1/2∫_0^ℓ [ m(x) u^2_t(x,t)+r(x) u^2_xx(x,t) ] dx
+1/2 k^-_r u_x^2(0,t) +1/2 k^+_r u_x^2 (ℓ,t), t∈[0,T],
is the total energy of system (<ref>) and
ℰ(0)=1/2∫_0^ℓ [ m(x) ( u_1(x))^2 +
r(x) ( u”_0(x))^2 ] dx
+1/2 k^-_r ( u'_0(0))^2+1/2 k^+_r ( u'_0(ℓ))^2
is the initial value of the total energy.
Proof. Multiply both sides of equation (<ref>) by u_t(x,t), integrate it over Ω_t:=(0,ℓ)× (0,t), employ the identity
∫_0^t∫_0^ℓ (r(x)u_xx)_xx u_τ dx dτ= ∫_0^t∫_0^ℓ [(r(x)u_xx)_x u_τ-r(x)u_xx u_xτ]_x dx dτ
+ 1/2∫_0^t∫_0^ℓ (r(x)u_xx^2 )_τ dx dτ,
t ∈ (0,T]. Then we obtain the following integral identity:
1/2∫_0^t∫_0^ℓ (m(x) u_τ^2 )_τdx dτ +1/2∫_0^t ∫_0^ℓ (r(x)u_xx^2 )_τdx dτ
+∫_0^t ((r(x)u_xx)_x u_τ-r(x)u_xx u_xτ)_x=0^x=ℓ dτ +∫_0^t ∫_0^ℓμ(x) u_τ^2 dx d τ=0,
for all t ∈ (0,T]. Using here the initial and boundary conditions (<ref>), we get:
1/2∫_0^ℓ [m(x) u^2_t+ r(x) u^2_xx ]dx
+1/2 k^-_r u_x^2(0,t) +1/2 k^+_r u_x^2 (ℓ,t)
+∫_0^t ∫_0^ℓμ(x) u_τ^2 dx d τ
= 1/2∫_0^ℓ [m(x) ( u_1(x))^2 +
r(x) ( u”_0(x))^2 ] dx +1/2 k^-_r ( u'_0(0))^2+1/2 k^+_r ( u'_0(ℓ))^2
-k^-_a ∫_0^t u_xτ^2(0,τ) d τ-k^+_a ∫_0^t u_xτ^2(ℓ,τ) d τ,
for all t ∈ (0,T]. This leads to (<ref>) with (<ref>) and (<ref>).
The integral identity (<ref>), with (<ref>) and (<ref>), clearly shows that the increase in the stiffness of the torsional springs k^-_r and k^+_r leads to an increase in the total energy ℰ(t). Conversely, the increase in the stiffness of the torsional dampers k^-_a and k^+_a leads to a decrease in the total energy.
The sum
1/2 k^-_r u_x^2(0,t) +1/2 k^+_r u_x^2 (ℓ,t), t∈[0,T]
in (<ref>) represents the energy of the rigid motion of the elastic system (<ref>), generated by the spring constants k^-_r,k^+_r≥0.
Assume that the basic conditions (<ref>) hold. Then for the decay rate of the total energy the following integral formula is valid:
d ℰ(t)/dt =-∫_0^ℓμ(x) u^2_tdx- k^-_a u_xt^2 (0,t) - k^+_a u_xt^2(ℓ,t), t∈ (0,T).
Proof. From formula (<ref>) for the total energy we deduce that
d ℰ(t)/dt= ∫_0^ℓ [ m(x) u_tu_tt+r(x) u_xxu_xxt ] dx
[1pt]
+k^-_r u_x(0,t)u_xt(0,t) +
k^+_r u_x(ℓ,t)u_xt(ℓ,t), t∈[0,T].
Using here the identities
∫_0^ℓ m(x)u_t u_tt dx =-∫_0^ℓμ (x) u_t^2 dx
-∫_0^ℓ (r(x) u_xx )_xx u_t dx,
∫_0^ℓ (r(x) u_xx )_xx u_t dx=
∫_0^ℓ r(x) u_xxu_xxtdx +k^-_r u_x(0,t)u_xt(0,t)
+k^-_a u^2_xt(0,t)+k^+_r u_x(ℓ,t)u_xt(ℓ,t)
+k^+_a u^2_xt(ℓ,t), t∈[0,T],
we arrive at the required result (<ref>).
Integrating (<ref>) over (0,t) we arrive at the energy identity introduced in (<ref>), that is
ℰ(t) =ℰ(0)-∫_0^t∫_0^ℓμ(x) u^2_τ(x,τ)dx dτ
- ∫_0^t [k^-_a u_x τ^2 (0,τ) + k^+_a u_x τ^2(ℓ,τ) ]dτ, t∈ [0,T].
In particular,
ℰ(t) ≤ℰ(0), t∈[0,T],
that is, the energy of the system (<ref>) is dissipating.
§ LYAPUNOV FUNCTION AND EXPONENTIAL STABILITY ESTIMATE
Introduce the auxiliary function:
𝒥(t)= ∫_0^ℓ m(x) u u_tdx+1/2∫_0^ℓμ(x) u^2dx+ 1/2 k^-_a u_x^2 (0,t) +1/2 k^+_a u_x^2 (ℓ,t),
for t∈[0,T]; note that 𝒥(t) involves all of the damping parameters.
Assume that the basic conditions (<ref>) are satisfied. Then between the auxiliary function 𝒥(t) and the energy function ℰ(t), the following relationship
holds:
d 𝒥(t)/dt= 2 ∫_0^ℓ m(x) u_t^2dx -2ℰ(t), t∈[0,T].
Proof. Taking the derivative of the function 𝒥(t) with respect to the time variable and using then the equation (<ref>) we find:
d 𝒥(t)/dt= ∫_0^ℓ m(x) u^2_tdx
- ∫_0^ℓ(r(x)u_xx)_xxu dx
+k^-_a u_x(0,t)u_xt(0,t)+k^+_a u_x(ℓ,t)u_xt(ℓ,t), t∈[0,T].
To transform the second right-hand side integral here, we employ the identity
-∫_0^ℓ (r(x) u_xx )_xx u dx=
-∫_0^ℓ r(x) u^2_xx dx -k^-_r u^2_x(0,t)-k^-_a u_x(0,t)u_xt(0,t)
-k^+_r u^2_x(ℓ,t)-k^+_a u_x(ℓ,t) u_xt(ℓ,t), t∈[0,T].
Then we get:
d 𝒥(t)/dt= ∫_0^ℓ m(x) u^2_tdx
-∫_0^ℓ r(x) u^2_xx dx -k^-_r u^2_x(0,t)-k^+_r u^2_x(ℓ,t),
for all t∈[0,T]. This leads to the required result (<ref>).
The next lemma shows another relationship between the auxiliary function 𝒥(t) and the energy function ℰ(t). Namely, it shows that the energy function provides lower and upper bounds for the auxiliary function introduced in (<ref>).
Assume that in addition to the basic conditions (<ref>), the coefficient r(x)
in (<ref>) satisfies the regularity condition: r ∈ H^2(0,ℓ). Then the
following inequalities hold:
-β_0 ℰ(t) ≤𝒥(t) ≤β_1 ℰ(t), t∈[0,T],
where
. [ β_0 =ℓ^2/2 √(m_1/r_0);
β_1=β_0 {1+ 1/√(m_1 r_0) [ℓ^2μ_1+2/ℓ (k_a^-+k_a^+ ) ]} , ].
and m_1, μ_1, r_0>0 are the constants introduced in (<ref>).
Proof. We estimate separately each term on the right hand side of formula (<ref>). For the first term we use the ε-inequality to get
|∫_0^ℓ m(x) u u_tdx|≤ε/2 ∫_0^ℓ m(x) u_t^2dx + 1/2ε ∫_0^ℓ m(x) u^2dx.
Under the condition r ∈ H^2(0,ℓ) there exists a regular weak solution u∈ L^2(0,T; H^4(0,ℓ)), with u_t∈ L^2(0,T; 𝒱^2(0,ℓ)), u_tt∈ L^2(0,T;L^2(0,ℓ)) and u_ttt∈ L^2(0,T;H^-2(0,ℓ)), of problem (<ref>) <cit.>. For this solution we employ the inequality
∫_0^ℓ u^2 dx ≤ℓ^4/4∫_0^ℓ u_xx^2dx, t∈ [0,T],
which can be easily proved owing to the conditions u(0,t)=u(ℓ,t)=0. This yields:
∫_0^ℓ m(x) u^2dx ≤ℓ^4 m_1/4 r_0∫_0^ℓ r(x)u_xx^2dx,
Substituting this in (<ref>) we get:
|∫_0^ℓ m(x) u u_tdx|≤ε/2 ∫_0^ℓ m(x) u_t^2dx + ℓ^4 m_1/8ε r_0∫_0^ℓ r(x)u_xx^2dx.
Choosing here the parameter ε>0 from the condition ε/2=ℓ^4 m_1/(8 r_0 ε), that is,
ε= ℓ^2/2 √(m_1/r_0) ,
we obtain the following estimate:
|∫_0^ℓ m(x) u u_tdx|≤ℓ^2/4 √(m_1/r_0) [ ∫_0^ℓ m(x) u_t^2dx + ∫_0^ℓ r(x) u_xx^2dx ].
Now, we estimate the second right hand side integral in formula (<ref>), using
inequality (<ref>). We have:
∫_0^ℓμ(x) u^2dx ≤ℓ^4 μ_1/4 r_0∫_0^ℓ r(x)u_xx^2dx.
Finally, to estimate the third and fourth terms on the right side of formula (<ref>), we use the same argument as above to conclude that
u^2_x(0,t)=(-∫_0^x̃ u_xx(x,t)dx )^2≤x̃ ∫_0^x̃ u^2_xx(x,t)dx,
u^2_x(ℓ,t)=(∫_x̃^ℓ u_xx(x,t)dx )^2≤ (ℓ-x̃) ∫_x̃^ℓ u^2_xx(x,t)dx,
where x̃∈ (0,ℓ) is a point at which u_x(x̃,t)=0; such a point exists by Rolle's theorem, since u(0,t)=u(ℓ,t)=0.
Hence,
. [ 1/2 k^-_a u_x^2 (0,t) ≤ℓ/2 k^-_a /r_0∫_0^ℓ r(x)u^2_xx(x,t)dx,;
1/2 k^+_a u_x^2 (ℓ,t)≤ℓ/2 k^+_a /r_0∫_0^ℓ r(x) u^2_xx(x,t)dx. ].
In view of (<ref>), (<ref>) and (<ref>) we obtain the following upper estimate for the auxiliary function 𝒥(t):
𝒥(t)≤ℓ^2/4 √(m_1/r_0)∫_0^ℓ m(x) u_t^2dx
+ [ℓ^2/4 √(m_1/r_0)+ℓ^4/4r_0 μ_1+
ℓ/2r_0 (k^-_a+k^+_a ) ]∫_0^ℓ r(x) u_xx^2dx,
for all t∈ (0,T]. This leads to the upper bound
𝒥(t) ≤β_1 ℰ(t), t∈[0,T],
in terms of the energy functional ℰ(t) and the constant β_1>0 introduced in (<ref>).
The lower bound
𝒥(t) ≥ -β_0 ℰ(t), t∈[0,T]
follows from the second part
∫_0^ℓ m(x) u u_tdx ≥ - ℓ^2/4 √(m_1/r_0) [ ∫_0^ℓ m(x) u_t^2dx + ∫_0^ℓ r(x) u_xx^2dx ]
of estimate (<ref>). This leads to the required estimates (<ref>).
The constants β_0,β_1>0 introduced in (<ref>) depend only on the geometric and physical parameters of a beam.
We introduce now the Lyapunov function
ℒ(t)=ℰ(t)+λ𝒥(t), t∈[0,T]
through the energy function ℰ(t) and the auxiliary function 𝒥(t), where λ>0 is the penalty term.
Assume that the inputs in (<ref>) satisfy the basic conditions (<ref>) and
the regularity condition r ∈ H^2(0,ℓ). Suppose, in addition, that the damping constant of proportionality is positive,
γ>0.
Then system (<ref>) is exponentially stable, that is,
ℰ(t)≤ M e^-σ t ℰ(0), t∈[0,T],
where
. [ M= 1+ β_1 λ/1- β_0 λ , σ=2 λ/1+β_1 λ ,;
0<λ <min (1/ β_0, γ m_0/(2m_1)), ].
where m_0, m_1>0 and β_0, β_1>0 are the constants introduced in (<ref>) and (<ref>), respectively, and ℰ(0)>0 is the initial energy defined in (<ref>).
Proof. Using estimates (<ref>) in (<ref>) we get:
(1-β_0 λ ) ℰ(t) ≤ℒ(t) ≤ (1+β_1 λ ) ℰ(t), t∈[0,T].
From the positivity requirement on the first left-hand-side multiplier, we find that the penalty term should satisfy the following condition:
0<λ <1/ β_0, β_0>0.
Differentiate now ℒ(t) with respect to the variable t∈ (0,T) and use formulas (<ref>) and (<ref>). We have:
d ℒ(t)/dt+2 λℰ(t)= -∫_0^ℓ [μ(x)-2λ m(x) ] u_t^2dx
-k^-_a u^2_xt(0,t)-k^+_a u^2_xt(ℓ,t), t∈[0,T].
Assume that, in addition to (<ref>), the penalty term satisfies also the following condition:
λ≤μ_0/(2m_1)
which guarantees positivity of the term in the square brackets under the right-hand-side integral in (<ref>). In view of the relation μ_0=γ m_0, this condition implies
λ≤γ m_0/(2m_1).
This leads to
d ℒ(t)/dt+2 λℰ(t)≤0, t∈[0,T],
or, with ℰ(t) ≥ℒ(t)/(1+λβ_1), to the inequality
d ℒ(t)/dt+2λ/1+λβ_1 ℒ(t)≤ 0, t∈[0,T].
Solving this inequality and using ℒ(0) ≤ (1+β_1 λ) ℰ(0), we find:
ℒ(t) ≤ (1+β_1 λ) e^-σ t ℰ(0), t∈[0,T],
which implies the required estimate (<ref>).
The constant σ>0 in (<ref>), called the decay rate parameter, depends only on the geometric and physical parameters of the beam and on the stiffnesses of the torsional dampers introduced in (<ref>), as formulas (<ref>) show. Hence, the uniform exponential stability estimate (<ref>) can be applied to study exponential stability for Euler-Bernoulli beams with various physical and geometric properties, under boundary controls in rotation and angular velocity. Furthermore, considering formula (<ref>), estimate (<ref>) also clearly shows the contribution of each damping factor μ(x), k_a^- and k_a^+ to the energy decay rate.
§ NUMERICAL RESULTS
Although there is an exponential function e^-σ t on the right side of estimate (<ref>), with the decay rate parameter σ >0 introduced in (<ref>), in some cases this appearance can be misleading. Namely, σ >0 depends on the positive parameters λ and β_1. The specific values of these parameters play a crucial role in determining the decay behavior of the function e^-σ t. Depending on the values of λ and β_1, the decay of this function can exhibit characteristics similar to the decay of a linear function. To see such cases, it is necessary to study the dependence of the decay rate parameter not only on the geometric and physical parameters of the beam, but also on the viscous external damping parameter μ(x) and the torsional dampers k_a^-, k_a^+ separately.
The examples below are provided to illustrate these situations and their causes. Without loss of generality, here we consider the constant coefficient beam equation
m u_tt+μ u_t+r u_xxxx=0, (x,t) ∈Ω_T,
where
m= ρ S, μ=γ m, r= E I,
in accordance with the above notation. For this constant coefficient equation, formulas (<ref>) and (<ref>) for the parameters β_0, β_1, M, σ>0 read as follows:
. [ β_0 =ℓ^2/2 √(m/r);
β_1=β_0 [1+ ℓ^2 √(m/r) γ ]
+ℓ/2r (k_a^-+k_a^+ ) ,;
M= 1+ β_1 λ/1- β_0 λ , σ=2 λ/1+β_1 λ . ].
Here, a beam with rectangular cross section S=b h, where b>0 and h>0 are the width and height, is examined, with the following numerical values of the geometric and physical parameters <cit.>:
. [ ℓ=0.502 m, b=1.7× 10^-3 m, h=0.89× 10^-3 m,;
ρ=1.42× 10^3 kg m^-3, E=3.1× 10^9 N m^-2, γ∈ [0.01, 10] s^-1. ].
With the numerical values in (<ref>) we have:
. [ S=1.51× 10^-6 m^2, I:=bh^3/12=0.1× 10^-12 m^4,;
m=2.14× 10^-3 kg m^-1, r=0.31× 10^-3 N m^2, μ=0.22 kg m^-1 s^-1. ].
We consider three damping levels (weak, medium, and high), corresponding to the values γ=0.1, γ=1.0 and γ=5.0 of the damping constant of proportionality,
using the values ⟨ k^-_a,k^+_a ⟩= ⟨ 0, 0 ⟩ and ⟨ k^-_a,k^+_a ⟩= ⟨ 0.01, 0.01 ⟩ of the stiffnesses of the torsional dampers.
The values of the decay rate parameter σ>0, calculated by the formulas given in (<ref>), are listed in Table 1. The values of the penalty term λ>0 are set according to the requirement 0<λ <min (1/ β_0, γ/2); a minimal script reproducing these values is given below.
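As a check, the entries of Table 1 can be reproduced with the following minimal Python script; the parameter values are those of (<ref>) and (<ref>), and the penalty term λ is passed in explicitly:

    import numpy as np

    def decay_rate(gamma, ka_minus, ka_plus, lam,
                   ell=0.502, m=2.14e-3, r=0.31e-3):
        # Evaluate beta_0, beta_1, M and sigma from the constant-coefficient
        # formulas above; the defaults are the beam parameters of this section.
        beta0 = ell**2 / 2 * np.sqrt(m / r)
        beta1 = beta0 * (1 + ell**2 * np.sqrt(m / r) * gamma) \
                + ell / (2 * r) * (ka_minus + ka_plus)
        M = (1 + beta1 * lam) / (1 - beta0 * lam)
        sigma = 2 * lam / (1 + beta1 * lam)
        return beta0, beta1, M, sigma

    print(decay_rate(0.1, 0.0, 0.0, lam=0.04))   # ~ (0.33, 0.35, 1.03, 0.08)
    print(decay_rate(5.0, 0.01, 0.01, lam=2.4))  # ~ (0.33, 17.62, 211, 0.11)
    # Table 1 reports M = 208.12 here, obtained with the rounded beta_0 = 0.33.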
From the last column of Table 1 it can be seen that, in the absence of the torsional dampers (k^-_a=k^+_a=0), the increase in the value of the damping constant from γ=0.1 to γ=5.0 leads to an increase of the decay parameter σ>0. Thus, for the
weak damping case γ=0.1 the value of the decay parameter is σ=0.08, and
the energy decay is only exponential in appearance; in fact, it is nearly linear (Figure 1, left).
Table 1. The decay rate parameters corresponding to the geometric and physical parameters given in (<ref>).

Damping constant | ⟨ k^-_a,k^+_a ⟩ | ⟨β_0,β_1⟩ | λ | M | σ
γ=0.1 | ⟨ 0, 0 ⟩ | ⟨ 0.33, 0.35 ⟩ | 0.04 | 1.03 | 0.08
γ=0.1 | ⟨ 0.01, 0.01 ⟩ | ⟨ 0.33, 16.55 ⟩ | 0.04 | 1.68 | 0.05
γ=1.0 | ⟨ 0, 0 ⟩ | ⟨ 0.33, 0.55 ⟩ | 0.4 | 1.41 | 0.66
γ=1.0 | ⟨ 0.01, 0.01 ⟩ | ⟨ 0.33, 16.75 ⟩ | 0.4 | 8.87 | 0.10
γ=5.0 | ⟨ 0, 0 ⟩ | ⟨ 0.33, 1.42 ⟩ | 2.4 | 21.19 | 1.09
γ=5.0 | ⟨ 0.01, 0.01 ⟩ | ⟨ 0.33, 17.62 ⟩ | 2.4 | 208.12 | 0.11
Comparing the values of the decay rate parameter in the last column of Table 1 corresponding to zero and non-zero values of the stiffnesses of the torsional dampers, we can observe the role of these boundary controls (Figure 1, right).
§ CONCLUSIONS
This study proposes an approach for the exponential stability analysis of Euler-Bernoulli beams under boundary controls in rotation and angular velocity. By employing the regular weak solution, energy identity, and Lyapunov function, we are able to derive a uniform exponential decay estimate for the system's energy.
Our approach is grounded in natural assumptions concerning physical parameters and other inputs, ensuring the existence of a regular weak solution. The decay rate constant in the derived estimate relies solely on the physical and geometric parameters of the beam, which include the viscous external damping coefficient, as well as the boundary springs and dampers. This feature enables straightforward utilization of decay rate estimation in practical engineering applications.
Furthermore, we have provided preliminary numerical examples that shed light on the role of damping parameters. However, a more detailed analysis, focusing on the individual contributions of each damping parameter to the overall damping behavior, will be pursued in future research.
§ ACKNOWLEDGMENTS
The research has been supported by the Scientific and Technological Research Council of Turkey (TUBITAK) through the Incentive Program for International Scientific Publications (UBYT). The research of the author has also been supported by FAPESP, through the Visiting Researcher Program, proc. 2021/08936-1, in Escola Politécnica, University of São Paulo, Brazil, during the period November 02 - December 18, 2022.
16
Banks:Inman:1991
H.T. Banks, D.J. Inman, On Damping Mechanisms in Beams, Journal of Applied Mechanics, 58(3) (1991) 716–723.
Cai:2022
J. Cai, P. Le Grognec, Lateral buckling of submarine pipelines under high temperature and high pressure - A literature review, Ocean Engineering 244(15) (2022) 110254.
Chen-Krantz:1988
G. Chen, S.G. Krantz, D.W. Ma, C.E. Wayne, H.H. West, The Euler-Bernoulli beam equation with boundary energy dissipation. Report, 1 Sep. 1985 - 31 Aug. 1987, Pennsylvania State Univ., University Park, 1988. https://dx.doi.org/10.21236/ada189517.
Chen-Russell:1982
G. Chen, D.L. Russell, A mathematical model for linear elastic systems with structure
damping, Quart. Appl. Math. 39(1982) 433–454.
Chen-Xu:2014
Y.L. Chen, G. Q. Xu, Exponential stability of uniform Euler-Bernoulli beams with non-collocated boundary controllers, J. Math. Anal. Appl. 409(2014) 851–867.
Clough-Penzien:1975
R.C. Clough, J. Penzien, Dynamics of Structures, McGraw Hill Inc., New York, 1975.
Crandall:1970
S.H. Crandall, The Role of Damping in Vibration Theory, J. Sound Vibr. 11(1970) 3–18.
Evans:2002
L.C. Evans, Partial Differential Equations, 2nd edn., American Mathematical Society, Rhode Island, 2010.
Grognec:2020
P. Le Grognec, A. Néme, J. Cai, Investigation of the torsional effects on the lateral buckling of a pipe-like beam resting on the ground under axial compression, International Journal of Structural Stability and Dynamics 20 (9) (2020) 2050110.
Guo:2001
B.-Z. Guo and R. Yu, On Riesz basis property of discrete operators with application to an
Euler-Bernoulli beam equation with boundary linear feedback control, IMA J. Math. Control
Inform. 18 (2001) 241–251.
Guo:2002
B.-Z. Guo, Riesz basis property and exponential stability of controlled Euler–Bernoulli beam
equations with variable coefficients, SIAM J. Control Optim. 40 (2002) 1905–1923.
F-Guo:2004
F Guo, F Huang, Boundary Feedback Stabilization of the Undamped Euler–Bernoulli Beam with Both Ends Free, SIAM J. Control Optim. 43(1) (2004) 341–356.
Hasanov-Romanov:2021
A. Hasanov Hasanoglu, A.G. Romanov, Introduction to Inverse Problems for Differential Equations, 2nd ed, Springer, New York, 2021.
Hong:2015
Z. Hong, R. Liu, W. Liu, S. Yan, A lateral global buckling failure envelope for a high temperature and high pressure (ht/hp) submarine pipeline, Applied Ocean Research 51 (2015) 117–128.
Huang:1986
F.L. Huang, Some problems for linear elastic systems with damping, Acta Math. Sci. 6
(1986) 101–107.
Huang:1988
F. Huang, On the mathematical model for linear elastic systems with analytic damping, SIAM
J. Control Optim. 26 (1988) 714–724.
Inman:2015
D. J. Inman, Engineering Vibration, 4th Edn., Pearson Education Limited, 2014.
Liu:2018
R. Liu, X. Wang, Lateral global buckling high-order mode analysis of a submarine pipeline with imperfection, Applied Ocean Research 73 (2018) 107–126.
Park:2019
Y.S. Park, S. Kim, N. Kim, J.J. Lee, Evaluation of bridge support condition using bridge responses. Structural Health Monitoring, 18(3) (2019) 767-777.
Repetto:2012
C.E. Repetto, A. Roatta and R.J. Welti, Forced vibrations of a cantilever beam, Eur. J. Phys. 33 (2012) 1187–1195.
Russell:1978
D. L. Russell, Controllability and stabilizability theory for linear partial differential equations: Recent progress and open questions, SIAM Rev., 20 (1978) 639–739.
|
http://arxiv.org/abs/2307.02378v1
|
20230705154553
|
Continuum Limits of Ollivier's Ricci Curvature on data clouds: pointwise consistency and global lower bounds
|
[
"Nicolas Garcia Trillos",
"Melanie Weber"
] |
math.DG
|
[
"math.DG",
"cs.LG",
"math.AP",
"stat.ML"
] |
Let ℳ⊆ℝ^d denote a low-dimensional manifold and let 𝒳= { x_1, …, x_n } be a collection of points uniformly sampled from ℳ. We study the relationship between the curvature of a random geometric graph built from 𝒳 and the curvature of the manifold ℳ via continuum limits of Ollivier's discrete Ricci curvature. We prove pointwise, non-asymptotic consistency results and also show that if ℳ has Ricci curvature bounded from below by a positive constant, then the random geometric graph will inherit this global structural property with high probability. We discuss applications of the global discrete curvature bounds to contraction properties of heat kernels on graphs, as well as implications for manifold learning from data clouds. In particular, we show that the consistency results allow for characterizing the intrinsic curvature of a manifold from extrinsic curvature estimators.
§ INTRODUCTION
The problem of identifying geometric structure in data is central to Machine Learning and Data Science. A frequently encountered structure is low-dimensionality, where high-dimensional data is assumed to lie on or near a low-dimensional manifold (manifold hypothesis). Let 𝒳⊂ℝ^d denote such a data set of size n and ℳ⊆ℝ^d a low-dimensional manifold from which 𝒳 was sampled. Given 𝒳, but without prior knowledge of ℳ, what can we say about the intrinsic geometry of ℳ? In particular, what can we learn about intrinsic notions of curvature of ℳ from 𝒳? One of the central goals of this paper is to study this question through the lens of discrete Ricci curvature. Traditionally, curvature has been studied in continuous spaces such as Riemannian manifolds. Several different notions of curvature relate to the local and global geometric properties of manifolds. Ricci curvature is a classical concept in Riemannian geometry which in particular determines the volume growth of geodesic balls and is closely connected to functional inequalities. In this paper, we study a discrete notion of Ricci curvature originally defined by Ollivier <cit.> and its relationship to the classical Ricci curvature on ℳ.
More specifically, we study the relationship between the curvature of a random geometric graph (short: RGG) built from 𝒳 and the curvature of the manifold ℳ.
An RGG G is constructed from a sample 𝒳 by connecting any two points within distance ε of each other with an edge; as we will discuss below, the choice of distance function plays an important role in our results.
In more concrete terms, we are interested in the following two questions:
Can we give non-asymptotic error bounds for the pointwise estimation of the curvature of ℳ from that of G?
If the manifold ℳ has Ricci curvature bounded from below by a given constant, will an RGG inherit this global structural property with high probability? What are some consequences of the resulting discrete curvature lower bounds?
We will discuss both questions in two different settings. In the first, which is of mostly theoretical value, we have access to the pairwise geodesic distances of points in 𝒳, i.e., in G two points are connected by an edge if they are within distance ε from each other along the manifold. In the second setting, we have no access to geodesic distances but we assume to instead have access to sufficiently accurate data-driven approximations thereof. In studying these two problems we will be able to provide theoretical insights into the relationship between discrete and continuous Ricci curvature and deliver consistent continuum limits of Ollivier's Ricci curvature on data clouds. Recall that a positive global lower bound on the Ricci curvature has several important implications for the manifold's geometry, including a bound on the diameter of complete manifolds (Bonnet-Myers <cit.>), as well as consequences for the coupling of random walks, which we will discuss below. We will explore some novel implications of having discrete curvature lower bounds on the behavior of graph Laplacians built over G. For example, the results presented in section <ref> are novel in the literature of graph Laplacians and do not follow from existing discrete-to-continuum consistency results.
§.§ Outline
In order to precisely state our main results, we first present some background material. In particular, in section <ref> we present some background on differential geometry and, importantly, introduce the notions of Ricci curvature, parallel transport, and second fundamental form of an embedded manifold; all these notions will be used in the sequel. Then, in section <ref>, we discuss the notion of Ollivier Ricci curvature for triplets (𝒳, d, 𝐦) consisting of a metric space (𝒳,d) and a Markov kernel 𝐦 over 𝒳; we will discuss a special setting where the metric space is a Riemannian manifold and discuss the connection between the induced Ollivier Ricci curvature and the classical Ricci curvature discussed in section <ref>. In section <ref> we introduce the RGGs G over 𝒳 that we will study throughout the paper and define associated discrete Ollivier Ricci curvatures up to the choice of a metric d_G over 𝒳. The metric d_G will be explicitly defined in section <ref>, right after discussing the approximation of geodesic distances over ℳ from data at the beginning of section <ref>.
In section <ref> we present our main theoretical results: in section <ref> our pointwise consistency results (Theorems <ref> and <ref>), and in section <ref> our global lower bounds (Theorems <ref> and <ref>). In section <ref> we illustrate the recovery of Ricci curvature from data with a simple numerical experiment.
In section <ref> we discuss some related literature.
In section <ref> we present the proofs of our main results.
In section <ref> we present a brief discussion of some applications of our main theoretical results. One such application is discussed in section <ref>, where we study the Lipschitz contractivity of graph heat kernels over data clouds sampled from manifolds with positive curvature. In section <ref> we discuss some of the avenues that our main results may open up in the field of manifold learning.
We wrap up the paper in section <ref> with some conclusions and some discussion on avenues for future research directions.
§ BACKGROUND AND NOTATION
§.§ Notions from Differential Geometry
We start by recalling some basic definitions and tools from differential geometry that we have collected from Chapters 0-4 and 6 in <cit.>. This will also give us the opportunity to introduce some notation that we use in the sequel.
An m-dimensional manifold ℳ is a locally Euclidean space of dimension m with a differentiable structure. We use T_x to denote the tangent plane at the point x ∈ℳ. Throughout the paper we will only consider smooth, connected, compact Riemannian manifolds without boundary. These are manifolds endowed with a smoothly varying inner product structure g={g_x}_x ∈ℳ defined over their tangent planes. The geodesic distance d_ℳ between two points x,y ∈ℳ is defined according to
d_ℳ(x,y) = inf _γ: [0,1] →ℳ∫_0^1
√( g_γ(t) (γ̇(t), γ̇(t)) ) dt,
where the inf ranges over all smooth paths connecting x to y. Important notions in Riemannian geometry such as geodesic curves (in particular length minimizers) and curvature are defined in terms of connections or covariant derivatives. Informally, given a smooth curve γ on ℳ, the covariant derivative ∇_γ̇ is an operator, satisfying some linearity and Leibniz product rule properties, mapping vector fields along γ into vector fields along γ. Among the multiple choices of connection that can be defined over a manifold, we will work with the Levi-Civita connection, which satisfies some additional compatibility conditions with the Riemannian structure of the manifold; see details in Chapter 2 in <cit.>.
In general, a geodesic is a smooth path γ:[0,1] →ℳ satisfying ∇_γ̇γ̇=0. It is possible to show that for every x∈ℳ and every v ∈ T_x there exists a unique geodesic γ satisfying γ(0)=x and γ̇(0)=v. We may use this fact to define the exponential map exp_x: T_x →ℳ, which maps v to γ(1) for γ the geodesic starting at x in the direction v. It can be shown that there exists a real number ι_ℳ>0, known as ℳ's injectivity radius, for which exp_x : B(0,ε) ⊆ T_x→ B_ℳ(x,ε) is a diffeomorphism for all x ∈ℳ and all ε < ι_ℳ; here and in the remainder, we use B_ℳ(x,ε) to denote the ball of radius ε around x when ℳ is endowed with d_ℳ, and B(0,ε) is the m-dimensional Euclidean ball of radius ε. The inverse of exp_x, the logarithmic map, will be denoted by
log_x: B_ℳ(x, ε) ⊆ℳ→ B(0, ε) ⊆ T_x. For any two points x,y with d_ℳ(x,y) < ι_ℳ, the unique minimizing geodesic between x and y (i.e., a minimizer in the definition of d_ℳ(x,y)) is given by γ:t∈ [0,1] ↦exp_x(tv) where v = log_x(y). This minimizing geodesic can be reparametrized so that it is unit speed (i.e., γ̇(0) has norm one), in which case γ maps the interval [0, d_ℳ(x,y)] into ℳ. From now on we will refer to this curve as the unit speed geodesic between x and y.
With the notion of Levi-Civita connection we can also introduce the concept of parallel transport, one important notion that allows us to relate tangent vectors at different points on ℳ. Precisely, let γ:[0,a] →ℳ be a smooth curve on ℳ and let x=γ(0) and y=γ(a). Given v ∈ T_x, we define V(t) ∈ T_γ(t), the parallel transport of v along γ, to be the (unique) solution to the equation ∇_γ̇(t) V = 0 with initial condition V(0)=v. We will be particularly interested in the case where γ is the unit speed geodesic between x and y (sufficiently close to each other) and we will denote by P_xy the map P_xy: T_x → T_y defined as v ∈ T_x ↦ V(d_ℳ(x,y)) ∈ T_y.
We can locally characterize the curvature of the manifold in a neighborhood of a point x ∈ℳ via Ricci curvature. Formally, let v ∈ T_x denote a unit vector and { u_1, …, u_m-1, v } an orthonormal basis for T_x. We define the Ricci curvature at x along the direction v as
Ric_x(v) := 1/m-1∑_i=1^m-1 g(R(v,u_i)v,u_i) ,
where R(u,v)w := ∇_u ∇_v w - ∇_v ∇_u w - ∇_[u,v]w, for [u,v] the Lie bracket between u and v. Furthermore, we can globally characterize ℳ's geometry via sectional curvature, which is given by
K (u,v) := K_x(u,v) = g(R(u,v)u , v )/|u|^2 |v|^2- (g(u,v))^2
for u,v ∈ T_x and x ∈ℳ; here we use the notation |u|^2=g(u,u).
Let x,y ∈ℳ and let ε>0 be smaller than the injectivity radius ι_ℳ. We define 𝒫: B_ℳ(x,ε) → B_ℳ(y, ε) to be the map given by
𝒫(x̃) = exp_y(P_xy( log_x(x̃ ) )).
That is, x̃ is mapped to x's tangent plane and then transported to y's tangent plane along the geodesic connecting x and y (unique if we assume d_ℳ(x,y) < ι_ℳ) to finally be mapped to B_ℳ(y,ε) via the exponential map at y. One important property of the diffeomorphism 𝒫 that we use in the sequel, originally due to Levi-Civita, is that if we form the quadrilateral illustrated in Figure <ref>, then the distance between x̃ and ỹ can be precisely characterized, up to correction terms of order 4, by the distance between x and y and ℳ's sectional curvature at x. More precisely, we have the following result.
Let ε>0 be a number smaller than ι_ℳ, the injectivity radius of ℳ. Let x, y ∈ℳ be such that d_ℳ(x,y) < ι_ℳ and let x̃∈ B_ℳ(x,ε) and ỹ := 𝒫(x̃) with 𝒫 as in (<ref>). Then
d_ℳ(x̃, ỹ) = d_ℳ(x,y) ( 1- (d_ℳ(x,x̃))^2/2( K(v,w) + O(ε + d_ℳ(x,y) ) ) ),
where v = log_x(y)/|log_x(y)| and w=log_x(x̃).
In the remainder we will consider embedded submanifolds of ℝ^d, which are defined as follows.
We say that ℳ⊆ℝ^d is a smooth embedded submanifold of ℝ^d of dimension m strictly less than d if for every x ∈ℳ there is a ball B(x,r) ⊆ℝ^d and a smooth function h_x: B(x, r) →ℝ^d (termed defining function), such that (i) h_x(y)=0 iff y ∈ℳ∩ B(x,r) and (ii) rank ∇ h_x(x)=d-m.
Moreover, the inner product g_x defined over T_x, the latter now seen as a subspace of ℝ^d, is the restriction of ⟨·, ·⟩, the ℝ^d inner product, to T_x.
In the sequel we use the second fundamental form II_x(·,·) of the embedded manifold in order to discuss data-driven approximations to the geodesic distance on ℳ. Let N_x denote the normal space of ℳ at x, i.e., the orthogonal complement of T_x in ℝ^d. The second fundamental form is given by the map II_x: T_x × T_x → N_x defined by (u,v) ↦ (Id - Proj_x) ( d/dt V(t) |_t=0 ). Here, Proj_x: ℝ^d → T_x denotes the orthogonal projection onto the tangent space; γ is a curve on ℳ with γ(0)=x and γ̇(0)=u; V is a vector field along γ with V(t) ∈ T_γ(t) satisfying V(0)=v.
Lastly, we define the reach of a manifold ℳ. Let S ⊂ℝ^d denote a closed subset and π_S: ℝ^d → S the map that sends z ∈ℝ^d onto its nearest neighbor in S. The reach τ_ℳ of ℳ is defined as the maximal neighborhood radius for which the projection π_ℳ is well-defined, i.e., any point that has distance at most τ_ℳ from ℳ has a unique nearest neighbor on ℳ.
The second fundamental form and the reach are notions that are closely connected to the extrinsic curvature of a manifold. In contrast, Ricci curvature and sectional curvature are intrinsic. A standard way to visualize the difference between the two types of curvature is to imagine a circle drawn on a flat piece of paper, which one can think of as a one dimensional manifold ℳ embedded in ℝ^3, and then the same circle after the paper has been rolled to form a cylinder, which can be thought of as a one dimensional manifold ℳ' embedded in ℝ^3. While ℳ and ℳ' have the same intrinsic curvature (because the paper is not stretched when rolling it), their extrinsic curvatures will be different.
§.§ Ollivier's Ricci curvature
Let (𝒳,d) be a metric space and let μ_1, μ_2 be two probability distributions over 𝒳. Recall that the 1-Wasserstein distance between μ_1, μ_2 is defined as
W_1(μ_1, μ_2) = inf_π∈Γ(μ_1,μ_2)∫_(x,y) ∈𝒳×𝒳 d(x,y) dπ(x,y) ,
where Γ(μ_1,μ_2) is the set of measures on 𝒳×𝒳 with marginals μ_1,μ_2. Now, let 𝐦 denote a Markov kernel over 𝒳, i.e., 𝐦 is a collection {m_x }_x ∈𝒳 of probability measures over 𝒳. The Ollivier Ricci curvature associated to the triplet (𝒳,d, 𝐦) in direction (x,y) is given by <cit.>:
κ(x,y) := 1 - W_1(m_x,m_y)/d(x,y).
Notice that, in general, the notion of Ollivier Ricci curvature only requires the structure of a metric space endowed with a (discrete-time) random walk. In the remainder of this section we will consider the case where the underlying space is a Riemannian manifold ℳ and discuss the relation between (<ref>) and the geometric notion of Ricci curvature introduced in section <ref>. To do this we first need some definitions.
Let d_ℳ(x,y) denote the geodesic distance in ℳ between x and y and let B_ℳ(x,ε), B_ℳ(y,ε) be the closed balls of radius ε (a fixed parameter) around x and y, respectively (termed Ollivier balls). Let further
μ_x^ε = vol⌊_B_ℳ(x,ε)/vol(B_ℳ(x,ε))
μ_y^ε = vol⌊_B_ℳ(y,ε)/vol(B_ℳ(y,ε))
denote the uniform measures on those neighborhoods. Then we define Ollivier's Ricci curvature between x and y as
κ_ε (x,y) = 1 - W_1(μ_x^ε, μ_y^ε)/d_ℳ(x,y) .
Ollivier showed the following fundamental relationship between Ric and κ_ε:
|κ_ε (x,y) - ε^2/2(m+2) Ric_x(v) |≤( C ε^2 d_ℳ(x,y) + C' ε^3
) ,
where y is a point on the geodesic from x in direction v (of norm one) and ε is the radius of the Ollivier balls. C, C' are constants independent of m.
This theorem makes precise the intuition that random walks in ℳ starting at nearby points and suitably coupled draw closer together if the Ricci curvature is positive and further apart if the Ricci curvature is negative. It also provides the motivation for the definition of Ricci curvature of arbitrary triplets (𝒳, d, 𝐦).
In section <ref>, we present the main steps in the proof of Theorem <ref>. This will give us the opportunity to introduce some key estimates and constructions that we use later in section <ref> when proving our main results.
§.§ Curvature on Random Geometric Graphs
We recall the notion of a random geometric graph (short: RGG) on ℳ. Let vol denote ℳ's volume form and let μ be the uniform probability measure over ℳ defined by
μ (A) := ∫_A dvol(x) /∫_ℳ dvol(x)
for all measurable subsets A of ℳ. Let 𝒳 = { x_i }_i=1^n be a collection of i.i.d. samples from μ. In the sequel, we use μ_n to denote the empirical measure associated to these data points. Namely,
μ_n:= 1/n∑_i=1^n δ_x_i.
We construct a random geometric graph G_ε = (𝒳, w_ε) by connecting any pair of samples x,y ∈𝒳 with an edge whenever their geodesic distance, or an approximation thereof, is less than ε. More precisely, we will set w_ε to be either
w_ε(x, y) = w_ε,ℳ(x,y):= 1 if d_ℳ(x,y) ≤ε
0 else,
or
w_ε(x,y) = w_ε,d̂(x,y):= 1 if d̂_g(x,y) ≤ε
0 else.
The first setting, which uses the geodesic distance on ℳ, is only reasonable in applied settings where the manifold is known. In contrast, the second setting only presupposes having access to a function d̂_g that serves as a data-driven local approximation of d_ℳ. In section <ref> we discuss some required properties for this approximation and highlight the need to work with approximations of d_ℳ of high enough order if one desires to recover precise curvature information of the underlying manifold from the graph G_ε as n →∞.
Analogous to the continuous case, we can define Ollivier's Ricci curvature on the RGGs introduced above. In order to do so, we recall that we need two ingredients: a random walk over 𝒳 and a notion of distance over 𝒳.
First, we introduce the graph Ollivier balls
B_G(x,ε) := { z ∈𝒳: w_ε(x,z) =1 } ∀ x ∈𝒳 ,
and we consider the family {μ_x^G }_x∈𝒳 of uniform distributions
μ_x^G (z) = 1/| B_G(x,ε) | , z ∈ B_G(x,ε) .
Observe that the measures μ_x^G define the transition probabilities for a random walk on the RGG. The generator of the discrete-time Markov chain with transition probabilities given by {μ_x^G }_x∈𝒳 is known in the literature as the random walk graph Laplacian; see <cit.>.
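In matrix form, if W denotes the 0/1 weight matrix w_ε evaluated on the data points (with ones on the diagonal, since d(x,x)=0 ≤ε), both the random walk and its Laplacian can be assembled in a few lines; the following is a minimal Python sketch and the helper name is ours:

    import numpy as np

    def random_walk_matrix(W):
        # Row x of P is the uniform measure mu_x^G on the ball B_G(x, eps);
        # every row sum of W is positive because w_eps(x, x) = 1.
        return W / W.sum(axis=1, keepdims=True)

    # The random walk graph Laplacian is then L = I - P, e.g.:
    # L = np.eye(len(W)) - random_walk_matrix(W)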
Notice that the construction of the ball B_G(x,ε) and its associated probability measure over 𝒳, μ_x^G, continue to make sense for any arbitrary base point x ∈ℳ, regardless of whether x is a data point in 𝒳 or not. This observation will be used in our theoretical analysis in subsequent sections.
The second ingredient needed to define Ollivier's Ricci curvature over G_ε is a distance function d_G over 𝒳. Two specific choices for d_G, one useful when d_ℳ is unknown and the other useful when d_ℳ is known, will be discussed in section <ref>; both choices will endow 𝒳 with a suitable geodesic metric space structure.
Either way, once d_G has been fixed, we can define, relative to the family of measures {μ_x^G}_x ∈𝒳 (which in turn, we recall, depends on the choice of w_ε), the Ollivier Ricci curvature between points x, y ∈𝒳 as
κ_G (x,y):= 1 - W_1,G (μ_x^G, μ_y^G)/d_G(x,y).
In the above, W_1,G(μ_x^G, μ_y^G) is the 1-Wasserstein distance induced by the metric d_G over 𝒳. Precisely,
W_1,G (μ_x^G, μ_y^G) = min_π∈Γ(μ_x^G,μ_y^G)∫ d_G(x̃, ỹ) dπ(x̃ , ỹ).
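As a computational illustration (not the scheme used in our proofs), κ_G can be evaluated on a data cloud by solving the linear program defining W_1,G directly. The Python sketch below assumes a precomputed matrix D of pairwise graph distances d_G and, as a simplification, reads the Ollivier balls off from D as well:

    import numpy as np
    from scipy.optimize import linprog

    def ollivier_curvature(D, x, y, eps):
        # kappa_G(x, y) = 1 - W_1(mu_x^G, mu_y^G) / d_G(x, y).
        Bx = np.flatnonzero(D[x] <= eps)            # ball B_G(x, eps)
        By = np.flatnonzero(D[y] <= eps)            # ball B_G(y, eps)
        mu_x = np.full(len(Bx), 1.0 / len(Bx))      # uniform measure mu_x^G
        mu_y = np.full(len(By), 1.0 / len(By))      # uniform measure mu_y^G
        C = D[np.ix_(Bx, By)]                       # transport costs d_G
        n1, n2 = len(Bx), len(By)
        # W_1 as a linear program over couplings pi with the given marginals.
        A_eq = np.zeros((n1 + n2, n1 * n2))
        for i in range(n1):
            A_eq[i, i * n2:(i + 1) * n2] = 1.0      # sum_j pi_ij = mu_x[i]
        for j in range(n2):
            A_eq[n1 + j, j::n2] = 1.0               # sum_i pi_ij = mu_y[j]
        res = linprog(C.ravel(), A_eq=A_eq,
                      b_eq=np.concatenate([mu_x, mu_y]), bounds=(0, None))
        return 1.0 - res.fun / D[x, y]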
§ APPROXIMATION OF GEODESIC DISTANCES FROM DATA
One of the settings that we study in this paper assumes no access to pairwise geodesic distances d_ℳ(x,y) (x,y ∈𝒳) in our data set, but distances between points may still be computed in the ambient Euclidean space. In order to recover the manifold's intrinsic curvature information from the data in this setting, we will assume to have access to an oracle, data-driven estimator of d_ℳ, denoted d̂_g, satisfying the following conditions.
The function d̂_g : ℳ×ℳ→ [0, ∞) is assumed to be a symmetric function satisfying, with probability at least 1 - Cexp( - ζ(β, n, ε)), the following conditions:
* For every x,y ∈ℳ satisfying cε≤ d_ℳ(x,y) ≤ Cε or cε≤d̂_g(x,y) ≤ Cε, we have
|d_ℳ(x,y) - d̂_g(x,y)| ≤ C_1 βε^3 + C_2 ε^4.
* We have
d̂_g(x,y) ≤ cε⟹ d_ℳ(x,y) ≤4/3 cε.
Here, ζ(n, β, ε) is assumed to be of the form C n^q_1ε^q_2β^q_3 for positive powers q_1, q_2, q_3>0. In particular, ζ(n, β, ε) →∞ as n →∞, whenever β >0 and ε>0 are fixed.
We have assumed the function d̂_g(x,y) to be symmetric, but it is not required to satisfy the other axioms of distance functions. Recall that d̂_g has been used in the definition of the weights w_ε,d̂ appearing in (<ref>), and we will use it again in section <ref> to define a data-driven distance d_G,d̂ over 𝒳. The first condition in Assumption <ref> states that, with high probability, d̂_g approximates d_ℳ locally with an error of order four, at least as long as the distance between points is not too small. In general, one should not expect the same type of error estimate to hold at very small length scales, which is why we instead require condition 2, a much more reasonable assumption.
The Euclidean distance is not a valid choice for d̂_g since its error of approximation is only O(ε^3); see the discussion in section <ref>.
In sections <ref> and <ref> we discuss how the problem of constructing an admissible d̂_g can be reduced to the problem of approximating the second fundamental form of ℳ from data, and we review some of the existing literature on this topic. In section <ref>, on the other hand, we introduce two geodesic distance functions d_G over 𝒳 that we may use to induce a Ricci curvature κ_G over the data cloud 𝒳.
§.§ Estimation when geodesic distance for nearby points is not available
Although in the literature there already exist potential avenues for estimating d_ℳ from data (see section <ref>), in this section we suggest an alternative approach that illustrates how it is possible to transform estimators for extrinsic geometric quantities into estimators for intrinsic ones. Our discussion will also allow us to illustrate why the Euclidean distance may not be a good enough estimator for d_ℳ if one wishes to recover ℳ's Ricci curvature from data.
Let γ: [0, ∞) →ℳ⊆ℝ^d be a unit speed geodesic in ℳ. We will assume without loss of generality that x=γ(0) = 0.
At least for small enough times t ≤ t_0, we have:
d_ℳ(x, γ(t) ) = t .
We now compare the above with
|x - γ(t)| = |γ(t)|,
the Euclidean distance between x and γ(t). For that purpose we define the function
h(t) := t^2 - |γ(t)|^2, t ∈ [0,t_0].
A direct computation using the fact that ⟨γ̇(t) , γ̇(t) ⟩=1 reveals the following expressions for the first four derivatives of h:
h'(t) = 2t - 2 ⟨γ̇(t), γ(t) ⟩,
h”(t) = - 2 ⟨γ̈ (t) , γ(t) ⟩,
h”'(t) = - 2 ⟨γ^(3) (t) , γ(t) ⟩ - 2 ⟨γ̈ (t) , γ̇(t) ⟩ = - 2 ⟨γ^(3) (t) , γ(t) ⟩ - d/dt⟨γ̇ (t) , γ̇(t) ⟩ = - 2 ⟨γ^(3) (t) , γ(t) ⟩,
h””(t) = - 2 ⟨γ^(4) (t) , γ(t) ⟩ - 2 ⟨γ^(3) (t) , γ̇(t) ⟩ = - 2 ⟨γ^(4) (t) , γ(t) ⟩ + 2 ⟨γ̈ (t) , γ̈(t) ⟩.
In the above, the last expression for the fourth derivative follows from the following computation:
0= d^2/dt^2⟨γ̇(t) , γ̇(t)⟩ = 2 d/dt⟨γ̈(t), γ̇(t) ⟩ = 2 ⟨γ^(3)(t), γ̇(t) ⟩ + 2 ⟨γ̈(t), γ̈(t) ⟩.
Now, at t=0 we have:
h(0)=0, h'(0)=0, h”(0)=0, h”'(0)=0, h””(0)= 2 ⟨γ̈(0) , γ̈(0) ⟩ ,
since we have assumed γ(0)=0. A Taylor expansion then shows that
h(t) = 1/12⟨γ̈(0) , γ̈(0) ⟩ t^4 + O(t^5).
Hence
t = |γ(t)| √( 1 + 1/12⟨γ̈(0) , γ̈(0)⟩ t^4/|γ(t)|^2 + 1/|γ(t)|^2O(t^5) )
= |γ(t)| ( 1 + 1/24⟨γ̈(0) , γ̈(0)⟩ t^4/|γ(t)|^2 + 1/|γ(t)|^2O(t^5) )
= |γ(t)| + 1/24⟨γ̈(0) , γ̈(0)⟩ t^3 + O(t^4).
The above shows that:
d_ℳ(x,y) = |x-y| + 1/24⟨γ̈(0), γ̈(0) ⟩ |x-y|^3 + O(|x-y|^4), x, y ∈ℳ,
where in the above we interpret γ̈(0) as the acceleration (in the ambient space) at time 0 of the unit speed geodesic going from x to y. The estimation of the term ⟨γ̈(0), γ̈(0) ⟩ can be done through the estimation of the second fundamental form of ℳ, as indeed one can write
γ̈(0) = II_x(γ̇(0) , γ̇(0)).
The above discussion thus motivates introducing the function
d̂_g(x,y) = |x-y| + 1/48( |ÎI_xy|^2 + |ÎI_yx|^2 ) |x-y|^3, x,y ∈ℳ,
where ÎI_xy is an estimator, built from data, for
II_xy:= II_x(γ̇(0), γ̇(0))= II_x ( log_x( y)/|log_x(y)|, log_x( y)/|log_x(y)|).
In section <ref> we discuss some existing approaches in the literature for building these estimators.
The relevant observation is that if |II_xy|^2 can be approximated by |ÎI_xy|^2 within error β, then, using the fact that |II_xy|^2 = |II_yx|^2 + O(|x-y|) (which holds since the manifold is smooth), we would conclude that
|d̂_g(x,y) - d_ℳ(x,y)| ≤ C_1 β |x-y|^3 + C_2 |x-y|^4.
On the other hand, if d̂_g had simply been defined as the Euclidean distance, the error of approximation of d_ℳ would have been O(|x-y|^3), as mentioned in Remark <ref>.
Notice that d̂_g as in definition (<ref>) is symmetric.
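As a quick sanity check of the expansion above, consider the unit circle in ℝ^2, where |II_xy| = |II_yx| = 1 and the geodesic distance is the arc length; the third-order correction removes the leading error of the chordal distance (a small Python computation):

    import numpy as np

    theta = 0.3                              # arc length between x and y
    x = np.array([1.0, 0.0])
    y = np.array([np.cos(theta), np.sin(theta)])
    chord = np.linalg.norm(x - y)            # Euclidean distance |x - y|
    d_hat = chord + (1.0 + 1.0) / 48.0 * chord**3   # estimator with |II| = 1
    print(abs(chord - theta))   # ~ theta^3 / 24 ~ 1.1e-3: Euclidean error
    print(abs(d_hat - theta))   # ~ 1.1e-5: on the circle the error is O(theta^5)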
§.§ Quantitative estimates of second fundamental form from data
In this section we review some related literature on estimating the second fundamental form. Kim et al. <cit.> propose an estimator for the second fundamental form for embedded submanifolds, which is the class of manifolds considered in this paper. Specifically, they suggest constructing an estimator of the Hessian H_h(x) of the defining function h at each point x ∈ℳ. To do this, they fit a quadratic polynomial p_h to the defining function in a small neighborhood of x and assume H_p_h(x)≈ H_h(x). They show that such an approximation indeed converges asymptotically to the second fundamental form:
Let the coefficients Ã_x of the polynomial p_h be determined by solving the weighted least-squares problem
Ã_x = argmin_Q ‖ K_x(XQ-h) ‖≈ A_x ,
where X is the matrix of second-order monomials of the points in 𝒳 centered at x,
A_x =1/2[ [H_h(x)]_1,1, [H_h(x)]_1,2, …, [H_h(x)]_d,d]^T ,
Ã_x =1/2[ [H_p_h(x)]_1,1, [H_p_h(x)]_1,2, …, [H_p_h(x)]_d,d]^T ,
and K_x is a diagonal matrix with diag(K_x)=1_|x_i - x| ≤ε. Then ‖ A_x - Ã_x ‖→ 0 for all x ∈ℳ as n →∞, ε→ 0.
A proof can be found in <cit.>.
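A minimal Python sketch of this weighted least-squares fit is given below; following the theorem's setup, it assumes (idealized) access to defining-function values h(x_i) near x, and the function name is ours:

    import numpy as np
    from itertools import combinations_with_replacement

    def quadratic_coefficients(samples, h_vals, x, eps):
        # Fit the second-order monomials of (x_i - x) to the values h(x_i)
        # over the neighbors of x; the result approximates the packed
        # entries of H_h(x)/2, i.e. the vector A_x above.
        mask = np.linalg.norm(samples - x, axis=1) <= eps   # diag(K_x)
        Z = samples[mask] - x
        pairs = list(combinations_with_replacement(range(Z.shape[1]), 2))
        X = np.stack([Z[:, i] * Z[:, j] for i, j in pairs], axis=1)
        Q, *_ = np.linalg.lstsq(X, h_vals[mask], rcond=None)
        return Q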
While this result holds for any manifold considered in this paper, it provides only asymptotic guarantees.
Aamari and Levrard <cit.> show that, under additional smoothness assumptions on the underlying manifold, one can indeed obtain non-asymptotic error estimates. They give minimax bounds of order O ( n^(2-k)/m) <cit.>
for an estimator of the second fundamental form of a C^k-smooth embedded submanifold.
Below, we briefly recall the minimax bounds for later reference. To state the results, we introduce the following additional notation. We define, as usual, the operator norm of a linear map T on S ⊂ℝ^d as ‖ T ‖_op:= sup_z ∈ S‖ Tz ‖ / ‖ z ‖. Let ℳ be a C^k-manifold with k ≥ 3 and reach τ≥τ_min >0. For x ∈ℳ, one can define the local estimator
(π̂_j,T̂_2,j, …,T̂_k-1,j) ∈argmin_π, sup_2 ≤ i < k‖ T_i ‖_op≤ 1 P_n-1^(j)[
‖ x - π(x) - ∑_i=2^k-1 T_i (π(x)^⊗ i) ‖^2 1_B(0,h)(x)
] ,
where π ranges over orthogonal projections onto d-dimensional subspaces and T_i: (ℝ^m)^i →ℝ^m over bounded symmetric tensors of order i. Moreover, P_n-1^(j) denotes integration with respect to 1/(n-1)∑_i ≠ jδ_(x_i - x_j), z^⊗ i denotes the (m × i)-dimensional vector (z, …, z), and h≤τ_min/8.
Let be a C^k-manifold with k ≥ 3 and reach τ≥τ_min >0. For sufficiently large n, we have with probability at least 1-( 1/n)^k/d:
* Upper bound:
𝔼[ max_1 ≤ j ≤ n_x_jy∘π_T_x_j - T̂_2,j∘π̂_j _op] ≤ C ( log n/n-1)^k-2/d
* Lower bound:
inf 𝔼[ _x_jy∘π_T_x_j - T̂_2,j∘π̂_j _op] ≥ C' ( 1/n-1)^k-2/d
Here, C,C' are constants depending on d,k,τ_min; n is assumed to be sufficiently large, such that C^-1≥( sup_2 ≤ i ≤ k T_i^* _op)^-1 for the estimator.
It should be noted that Eq. <ref> can be difficult to solve in practice, with one approach viewing Eq. <ref> as an optimization task on the Grassmann manifold <cit.>. We further note that similar estimation results were also obtained in related work by Cao et al. <cit.>.
Notice that Theorem <ref> provides error bounds in expectation, which do not immediately translate into concentration bounds. While a development of such concentration bounds is beyond the scope of the present paper, we briefly comment on a possible avenue for deriving them, at least in the large-sample regime. Specifically, given a sufficiently large sample, one may construct approximations of tangent spaces via principal component analysis. Developing a means to track the change in the tangent space as we move along the manifold could deliver an approximation of the second fundamental form, which, in turn, would allow for deriving concentration bounds.
§.§ Geodesic distances on
To define a geodesic distance d_G over G=(,w_) (interpreting w_ as either (<ref>) or as (<ref>)) we first introduce the following “pre-distance” functions:
d̃_G,(x,y) := 0, if x=y,
δ_0 ψ( d_(x,y)/δ_0), if 0<d_(x,y)≤δ_1,
+∞, otherwise,
and
d̃_G,(x,y) := 0, if x=y,
δ_0 ψ( d̂_g(x,y)/δ_0), if 0<d̂_g(x,y)≤δ_1,
+∞, otherwise,
where d̂_g satisfies Assumption <ref>. In the above, we use parameters δ_0<δ_1 that in terms of the parameter will be written as
δ_0 := c_0 , δ_1 := c_1
where c_0 is a fixed but small enough number and c_1 is fixed but large enough; see more details below. Finally, the profile function ψ appearing in both definitions is assumed to satisfy the following conditions:
The function ψ:[0, ∞) → [0, ∞) satisfies the following:
* ψ is C^2, non-decreasing, and convex.
* ψ(t)= t for all t ≥ 1.
* ψ(0) >0 and ψ'(0)>0.
In Figure <ref> we provide an example of an admissible profile function ψ. As we discuss in detail in Remark <ref>, Assumption <ref> guarantees that the geometry of the RGG is suitably glued together when moving between the two lengthscales at which the RGG exhibits different geometric behaviors.
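For concreteness, one admissible choice (our own example; any function satisfying Assumption <ref> works) is ψ(t) = 1/8 + t/2 + 3t^2/4 - t^3/2 + t^4/8 for t ∈ [0,1] and ψ(t)=t for t ≥ 1. Then ψ(0)=1/8>0, ψ'(0)=1/2>0, ψ''(t) = (3/2)(1-t)^2 ≥ 0 on [0,1] and ψ''(1)=0, so ψ is C^2, convex, and non-decreasing. A minimal sketch:

import numpy as np

def psi(t):
    # Admissible profile: C^2, convex, non-decreasing, psi(t) = t for t >= 1,
    # psi(0) = 1/8 > 0 and psi'(0) = 1/2 > 0; psi''(t) = (3/2)(1 - t)^2 on [0, 1].
    t = np.asarray(t, dtype=float)
    poly = 1.0 / 8.0 + t / 2.0 + 3.0 * t**2 / 4.0 - t**3 / 2.0 + t**4 / 8.0
    return np.where(t < 1.0, poly, t)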
We may now use the above pre-distance functions to define two geodesic distances over :
d_G,(x, y) := inf_n ∈, { x_i }_i=0^n ⊆
x_0=x, x_n=y ∑_i=0^n-1d̃_G,(x_i, x_i+1),
and
d_G,(x, y) := inf_n ∈, { x_i }_i=0^n ⊆
x_0=x, x_n=y ∑_i=0^n-1d̃_G,(x_i, x_i+1).
Note that, in contrast to d_G,, the function d_G, is completely data-driven and thus in principle more useful in applications, unless d_ is known, in which case one could directly work with d_G,. Either way, as we show below, both d_G, and d_G, are indeed distance functions over and both of them induce Olivier Ricci curvature functions that recover 's Ricci curvature in the large data limit.
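Computationally, both d_G, and d_G, are shortest-path metrics of a weighted graph whose edge lengths are the pre-distances above, so they can be evaluated with any standard shortest-path routine. The following is a minimal sketch (our own illustration, assuming a precomputed matrix dhat of pairwise distance estimates and reusing the profile psi above):

import numpy as np
from scipy.sparse.csgraph import shortest_path

def graph_geodesic_distances(dhat, delta0, delta1):
    # Pre-distance: delta0 * psi(dhat / delta0) on pairs with 0 < dhat <= delta1,
    # +infinity otherwise (scipy treats inf entries of a dense input as non-edges).
    pre = np.where((dhat > 0) & (dhat <= delta1),
                   delta0 * psi(dhat / delta0), np.inf)
    np.fill_diagonal(pre, 0.0)
    # d_G is the shortest-path metric for these edge lengths (Dijkstra).
    return shortest_path(pre, method="D", directed=False)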
Let d_G = d_G, and d̃_G = d̃_G,, or d_G = d_G, and d̃_G = d̃_G,. In either case the function d_G is a distance over . Moreover, for every x,y ∈ there exists a finite sequence x_1, …, x_k ∈ such that:
* x_1=x, and x_k=y.
* d_G(x_i, x_i+1) = d̃_G(x_i , x_i+1) ≤δ_1.
* d_G(x,y)= ∑_i=1^k-1 d_G(x_i, x_i+1).
The fact that d_G= d_G, is a distance function follows directly from the definitions. Likewise, we can see that d_G= d_G, is a distance function thanks to the fact that d̂_g is symmetric, according to Assumption <ref>.
To prove the second part, notice that for an arbitrary pair x,y ∈ we can find a path x_1, …, x_k ∈ with x_1=x and x_k=y such that d_G(x,y)= ∑_i=1^k-1d̃ _G (x_i , x_i+1). Now, by definition of d_G(x_i, x_i+1), we have d̃_G (x_i, x_i+1) ≥ d_G(x_i, x_i+1) for every i=1, …, k-1. If the inequality were strict for at least one i, then we would actually be able to build a path connecting x,y whose length is strictly smaller than d_G(x,y), which would be a contradiction. It follows that d_G(x_i, x_i+1)= d̃_G(x_i, x_i+1) for all i=1, …, k-1.
Let κ∈ and suppose that
1 - W_1,G(μ_x^G, μ_y^G)/d_G(x,y)≥κ,
for every x,y ∈ satisfying d_G(x,y) = d̃_G(x,y) ≤δ_1. Then (<ref>) holds for any pair x,y ∈.
Thanks to Lemma <ref>, the proof is just as in [Prop. 19, <cit.>]. The emphasis here, however, is the fact that we only need to check the inequality for pairs x,y for which the distance function d_G and the pre-distance function d̃ _G coincide.
We finish this section with the following inequalities relating the metrics d_G,, d_G,, and d_. These inequalities will only be used later on in section <ref> when studying curvature upper bounds.
Under Assumption <ref>, for all small enough and β the following holds:
d_G,(x,y) ≥ d_(x,y),
for all x, y ∈,
and, with probability at least 1- Cexp(- ζ(n, β,)) we have
(1+C(β^2+^3)) d_G,(x,y) ≥ d_(x,y).
for all x, y ∈. Moreover, if x,y ∈ are such that 2δ_0 ≤ d_(x,y) ≤1/2δ_1, where δ_0 and δ_1 are as in the definition of d_G, and d_G,, then
d_G,(x,y) ≤ d_(x,y)( 1+ C( β^2 + ^3))
and
d_G,(x,y) ≤ d_(x,y).
Inequality (<ref>) is immediate from the triangle inequality for d_ and the fact that d̃_G,(x_i,x_i+1) ≥δ_0ψ( d_(x_i,x_i+1) /δ_0 ) ≥ d_(x_i, x_i+1) for any two data points x_i, x_i+1 with d_(x_i, x_i+1) ≤δ_1, since necessarily ψ(t) ≥ t for all t ≥ 0.
To prove inequality (<ref>), consider an arbitrary discrete path { x_i }_i=0^n connecting x and y such that d̃_G, (x_i, x_i+1) <∞ for all i. Then
∑_i=0^n-1d̃_G, (x_i, x_i+1) = ∑_i=0^n-1δ_0ψ(d̂_g(x_i, x_i+1)/δ_0)
= ∑_i ∈ Aδ_0ψ(d̂_g(x_i, x_i+1)/δ_0) + ∑_i ∈ Bδ_0ψ(d̂_g(x_i, x_i+1)/δ_0)
≥∑_i ∈ Ad̂_g(x_i, x_i+1) + ∑_i ∈ Bδ_0ψ(d̂_g(x_i, x_i+1)/δ_0)
,
where A:= { i s.t. d̂_g(x_i, x_i+1) ≤3/4ψ(0)δ_0 } and B:= { i s.t. d̂_g(x_i, x_i+1) > 3/4ψ(0) δ_0 }. For an i ∈ A, we can use 2 in Assumption <ref> to conclude that d_(x_i, x_i+1) ≤δ_0 ψ(0) ≤d̃_G, (x_i, x_i+1), whereas for an i in B we can use 1 in Assumption <ref> to bound the difference between d̂_g(x_i, x_i+1) and d_(x_i, x_i+1). In particular,
∑_i=0^n-1d̃_G, (x_i, x_i+1) ≥∑_i ∈ A ( d̂_g(x_i, x_i+1) - d_(x_i, x_i+1) + d_(x_i, x_i+1) )
+ ∑_i ∈ B d_(x_i, x_i+1)
≥∑_i=0^n-1 (1 - C_1 β^2 - C_2 ^3 ) d_(x_i, x_i+1)
≥ (1 - C_1 β^2 - C_2 ^3 ) d_(x, y) ,
where the last inequality follows from the triangle inequality for d_. From this we deduce that
(1 + C (β^2+ ^3)) d_G, (x,y) ≥ d_(x,y),
for some constant C.
To prove (<ref>), notice that, under the assumption that is small enough, we can guarantee, thanks to (<ref>), that d̂_g(x,y) ∈ [δ_0, δ_1). This means that
d_G, (x,y) ≤d̂_g(x,y).
In turn, applying (<ref>) again we can upper bound d̂_g(x,y) from above by (1+ C(β^2+^3) ) d_(x,y). Inequality (<ref>) now follows.
Inequality (<ref>) is obvious from the definition of d_G, and the assumption on d_(x,y).
§ MAIN RESULTS
We are now ready to state our main results. Throughout this section we will make the following assumptions on the scale of the parameters that determine G, the estimator d̂_g, and the discrete curvature κ_G.
We assume that the following relations hold:
* c_1 ≥ 2 + 4c_0, where c_0 and c_1 are as in (<ref>).
* and β are sufficiently small and n is sufficiently large.
* The ratio log(n)^p_m/n^1/m^3 is sufficiently small, where p_m is a dimension dependent quantity: p_m=3/4 when m=2, and p_m= 1/m when m≥ 3.
§.§ Pointwise Consistency
Van der Hoorn et al. <cit.> were the first to analyze the pointwise consistency of some form of discrete Ricci curvature on an RGG. They give asymptotic convergence guarantees of the following form:
Let ⟨·⟩ denote the expectation with respect to the RGG ensemble G=(, w_,). Suppose that d_G is taken to be d_ in (<ref>). Finally, suppose that = _n ∼ n^-α and δ=δ_n= n^-β with 0< β≤α and α + 2β < 1/m.
Then
lim_n →∞⟨| 1/δ^2κ_G - 1/2(m+2)_x(v) | ⟩ =0 ,
where y= y_n is the point on the geodesic in direction v starting at x with δ=d_(x,y).
It is worth pointing out that in the definition of κ_G in <cit.> one takes d_G = d_, whereas here we work with d_G,. When working with d_G, we are able to obtain global lower bounds for our induced curvature, while such lower bounds cannot be derived for d_.
The authors of <cit.> provide a second analysis which only requires access to pairwise Euclidean distances in the ambient space, but this result relies on a crucial auxiliary result, which is currently only available in dimension 2.
In what follows we address Problem <ref> and provide non-asymptotic error bounds for the approximation of 's Ricci curvature from our notions of discrete Ricci curvature. Our first result is stated in the setting where we have access to d_.
Let be an m-dimensional, compact, boundaryless, connected, smooth manifold embedded in ^d. Let = { x_1, …, x_n } consist of i.i.d. samples from the uniform distribution on . Let w_= w_, be defined according to (<ref>), let d_G= d_G, be the metric defined in (<ref>) for a profile function ψ satisfying Assumptions <ref>, and let κ_G be the Ollivier Ricci curvature induced by these choices of Ollivier balls and metric (see (<ref>)).
Under Assumption <ref>, for every s >1 there is a constant C such that, with probability at least 1- C n^-s, we have
| κ_G(x,y)/^2 - _x(v)/2(m+2)| ≤ C ( + log(n)^p_m/ n^1/m^3),
for all x,y ∈ satisfying 2δ_0 ≤ d_(x,y) ≤1/2δ_1, where δ_0 and δ_1 are defined in (<ref>). In the above, we use v to denote the vector log_x(y)/|log_x(y)|∈ T_x, and p_m =3/4 if m=2, while p_m=1/m if m≥ 3
A second result, which only assumes access to a sufficiently sharp data-driven approximation of the geodesic distance d_, is stated below.
Let be an m-dimensional, compact, boundaryless, connected, smooth manifold embedded in ^d. Let = { x_1, …, x_n } consist of i.i.d. samples from the uniform distribution on . Let w_= w_, be defined according to (<ref>), for a data-driven approximation d̂_g of d_ satisfying Assumption <ref>. Let d_G= d_G, be the metric defined in (<ref>) for a profile function ψ satisfying Assumptions <ref>, and let κ_G be the Ollivier Ricci curvature induced by these choices of Ollivier balls and metric (see (<ref>)).
Under Assumption <ref>, for every s >1 there is a constant C such that, with probability at least 1- C n^-s - Cexp(- ζ(n, β,)), we have
| κ_G(x,y)/^2 - _x(v)/2(m+2)| ≤ C ( β + + log(n)^p_m/ n^1/m^3),
for all x,y ∈ satisfying 3δ_0 ≤d̂_g(x,y) ≤1/3δ_1. The quantities δ_0, δ_1, v and p_m are as in Theorem <ref>.
§.§ Global Curvature Lower Bounds
We turn our attention to Problem <ref> and state two theorems relating global lower bounds for and κ_G. In section <ref>, we complement our analysis with a discussion of applications of the curvature lower bounds stated below.
In our first result we assume access to the geodesic distance d_.
Let be an m-dimensional, compact, boundaryless, connected, smooth manifold embedded in ^d with Ricci curvature lower bounded by 2(m+2) K. Let = { x_1, …, x_n } consist of i.i.d. samples from the uniform distribution on . Let w_= w_, be defined according to (<ref>), let d_G= d_G, be the metric defined in (<ref>) for a profile function ψ satisfying Assumptions <ref>, and let κ_G be the Ollivier Ricci curvature induced by these choices of Ollivier balls and metric (see (<ref>)).
Under Assumption <ref>, for every s >1 there is a constant C such that, with probability at least 1- C n^-s, we have
κ_G(x,y)/^2≥min{ s_K K - C ( + log(n)^p_m/ n^1/m^3), 1/2^2}, ∀ x, y ∈,
where the factor s_K is given by
s_K := ψ'(0) ψ(0) c_0/(12 c_1 C_) if K ≥ 0, and s_K := c_1/(c_0 ψ(0)) if K < 0,
where c_0, c_1 are as in (<ref>), and C_ is a manifold dependent constant that in particular implies c_0/(12 c_1 C_) ≤ 1. Also, p_m =3/4 if m=2, while p_m=1/m if m≥ 3.
The rescaling of κ_G by ^2 is the right scaling when passing to the continuum limit, i.e. n →∞ and → 0. This can already be seen from our pointwise consistency results in section <ref>, but it can also be interpreted as a way to properly rescale the time variable indexing the discrete-time random walk on G. In particular, we will see in section <ref> that (<ref>) implies novel contraction properties for the heat flow (continuous time) on G when we assume the manifold to be positively curved.
The factor s_K makes the lower bound in (<ref>) looser than the lower bound for 's Ricci curvature: when K≥ 0, s_K is necessarily smaller than one (but still strictly positive, since ψ'(0)>0), whereas when K <0 the quantity s_K is greater than one.
The appearance of s_K is due to the fact that in our analysis we must glue together two estimates that hold at different length-scales, and, in doing so, we end up with a suboptimal bound. Presumably, our analysis can be sharpened, but this aim is out of the scope of this paper.
A second result, which only assumes access to a sufficiently sharp data-driven approximation of the geodesic distance d_, is stated below.
Let be an m-dimensional, compact, boundaryless, connected, smooth manifold embedded in ^d with Ricci curvature lower bounded by 2(m+2) K. Let = { x_1, …, x_n } consist of i.i.d. samples from the uniform distribution on . Let w_= w_, be defined according to (<ref>), for a data-driven approximation d̂_g of d_ satisfying Assumption <ref>. Let d_G= d_G, be the metric defined in (<ref>) for a profile function ψ satisfying Assumptions <ref>, and let κ_G be the Ollivier Ricci curvature induced by these choices of Ollivier balls and metric (see (<ref>)).
Under Assumption <ref>, for every s >1 there is a constant C such that, with probability at least 1- C n^-s - Cexp(- ζ(n, β,)), we have
κ_G(x,y)/^2≥min{ s_K K - C ( + β+ log(n)^p_m/ n^1/m^3), 1/4^2}, ∀ x, y ∈,
where the factor s_K is as in Theorem <ref>, and p_m =3/4 if m=2, while p_m=1/m if m≥ 3.
As we will discuss in section <ref>, the above curvature lower bounds provide information on the behavior of Lipschitz seminorms along the heat flow induced by the graph Laplacian associated to the RGG G=(, w_).
§.§ Numerical examples
We illustrate the recovery of global lower bounds (Theorem <ref>) on the example of a unit d-sphere. Since the unit d-sphere has sectional curvature 1, we expect to recover a global lower bound of 1 for Ollivier's Ricci curvature in a random geometric graph, in the large-sample limit. To test this numerically, we sample n points uniformly at random from the unit d-sphere, centered at the origin. Sampling is performed via sphere picking (also known as the Muller-Marsaglia algorithm <cit.>): We sample d independent random variables from the standard normal distribution z=(z_1, …, z_d). The point (∑_i z_i^2 )^-1/2 z lies on the unit d-sphere. We repeat this procedure n times to generate a sample of size n. Figure <ref> shows the curvature distribution of the resulting random geometric graphs with different hyperparameters. We see that almost all edges indeed have a Ricci curvature of 1, as predicted by our theoretical results[The small number of outliers is due to numerical inaccuracies, specifically, (1) the sample distribution is not perfectly uniform and (2) some sample points do not lie exactly on the sphere.].
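A minimal sketch of the sampling step (the function name and interface are ours):

import numpy as np

def sample_sphere(n, d, seed=None):
    # Muller-Marsaglia sphere picking: normalize standard Gaussian vectors,
    # z / |z|, which is uniformly distributed on the unit sphere.
    rng = np.random.default_rng(seed)
    z = rng.standard_normal((n, d))
    return z / np.linalg.norm(z, axis=1, keepdims=True)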
In our numerical experiments, the 1-Wasserstein distance is computed via the Hungarian algorithm, which has cubic complexity. Hence, for large sample sizes it is expensive to compute Ollivier's curvature on each edge in the RGG. However, as a byproduct of our global lower curvature bounds, we develop upper and lower bounds on the 1-Wasserstein distance, which do not require optimizing transport maps, but can be computed from combinatorial arguments. Thus, in applications in which we mainly rely on global lower curvature bounds (see examples in the next section), our approach nevertheless allows for an efficient characterization of the manifold's geometry.
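For a single edge, the computation just described can be sketched as follows (our own illustration; for simplicity it assumes the two Ollivier balls contain equally many points, in which case the transport problem between the two uniform measures reduces to an assignment problem solvable by the Hungarian algorithm):

import numpy as np
from scipy.optimize import linear_sum_assignment

def ollivier_curvature(i, j, d_G, eps):
    # kappa_G(i, j) = 1 - W_1(mu_i, mu_j) / d_G(i, j), mu_i uniform on B_G(i, eps).
    Bi = np.flatnonzero(d_G[i] <= eps)
    Bj = np.flatnonzero(d_G[j] <= eps)
    if len(Bi) != len(Bj):
        raise ValueError("sketch assumes equal ball sizes; use a general OT solver otherwise")
    cost = d_G[np.ix_(Bi, Bj)]
    rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
    W1 = cost[rows, cols].sum() / len(Bi)      # optimal matching cost of the uniform measures
    return 1.0 - W1 / d_G[i, j]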
§ RELATED WORK
Throughout this work, we consider Ricci curvature in the sense of Ollivier <cit.>. Our analysis utilizes several theoretical results that date back to Ollivier's work <cit.>.
As mentioned above, the pointwise consistency of Ollivier's Ricci curvature between RGGs and an underlying manifolds has been previously studied in <cit.>, which gave asymptotic guarantees assuming access to geodesic distances, and also when not having access to geodesic distances, but only in a very special case.
Other popular discrete Ricci curvatures include notions by Forman <cit.> and Maas and Erbar <cit.>, as well as a notion by Lin-Yau <cit.>, which is closely related to Ollivier's Ricci curvature. The relation between Forman's Ricci curvature and that of an underlying manifold has recently been explored in <cit.>. Notably, Maas and Erbar's Ricci curvature allows for a log-Sobolev inequality (via a discrete Bakry-Emery theorem) and the inference of curvature lower bounds <cit.>, although not in a tractable way that would allow one to infer its consistency in the RGG setting. Discrete Ricci curvatures have been related to a range of classical graph characteristics <cit.> and have found applications in network analysis and machine learning <cit.>. Beyond <cit.>, to the best of our knowledge there is no other work rigorously connecting the discrete Ricci curvature of a point cloud and the Ricci curvature of a manifold.
Continuum limits of different geometric characteristics defined on data clouds have been explored in the literature. For example, the analysis of graph Laplacians and their convergence toward Laplace-Beltrami operators on manifolds has received a lot of attention in the last decades, e.g., in <cit.> where pointwise consistency results are presented, and in <cit.>, where spectral consistency is discussed.
Other works explore the discrete-to-continuum convergence of general data-driven distances, e.g., see <cit.> and some of the references therein. The papers <cit.>, for example, discuss convergence of distances defined on random geometric graphs (RGG), either in the i.i.d. setting or for Poisson point processes. The results from <cit.> are asymptotic, while the ones in <cit.> provide high probability convergence rates in terms of an RGG's connectivity parameter. The results in <cit.>, for example, discuss the convergence of the ratio between certain expectations of distances at different scales. When combined with concentration inequalities, this allows the authors to prove rates of convergence, in sparse settings, for a semisupervised learning procedure known as Lipschitz learning; see <cit.>. Finally, <cit.> presents a graph-PDE approach to approximate geodesic distances by analyzing variants of the Eikonal equation on a graph. Many of the approaches discussed in the aforementioned papers could potentially be used to define estimators d̂_g for d_.
The approach for estimating d_ from data outlined in section <ref> relies on the approximation of the second fundamental form from data. Some papers that explore the estimation of the second fundamental form are <cit.>.
§ NONASYMPTOTIC GUARANTEES ON CURVATURE CONSISTENCY
In this section we present proofs for our main results. After some preliminary discussion, we first show the consistency of global curvature bounds, followed by a proof of pointwise, local consistency. A summary of the notation used throughout this section can be found in Table <ref>.
§.§ Preliminaries
In this subsection we collect a series of preliminary results and estimates that we use in the proofs of our main results.
§.§.§ Some lemmas from optimal transport theory
We recall the Kantorovich-Rubinstein duality theorem for the 1-Wasserstein metric between two probability measures over the same Polish metric space.
Let μ_1, μ_2 be two (Borel) probability measures over a Polish metric space (𝒰,d). Then
W_1(μ_1, μ_2) := sup_ f s.t. Lip(f) ≤ 1 ∫ f(x̃ ) d μ_1(x̃) - ∫ f(ỹ) dμ_2(ỹ) .
In the above, Lip(f) stands for the Lipschitz constant (relative to the metric d) of the function f.
Next, we recall the notion of glueing of couplings. Given finite positive measures μ_1,…, μ_k over a Polish space (𝒰,d), all of which have the same total mass, and given couplings π_12, π_23, …, π_k-1, k with π_l, l+1∈Γ(μ_l, μ_l+1), we define Π, the glueing of the couplings π_l, l+1, as the measure over 𝒰^k satisfying
∫φ(x_1, …, x_k) dΠ(x_1, …, x_k) =
∫∫…∫φ(x_1, …, x_k) dπ_k-1, k( x_k | x_k-1 ) … dπ_1,2(x_2| x_1) dμ_1(x_1)
for all regular enough test functions φ; in the above, π_l, l+1 (·| x_l ) must be interpreted as the conditional distribution of x_l+1 given x_l when (x_l,x_l+1) are jointly distributed according to π_l, l+1. For given l,s ∈{ 1, …, k } consider the map T_l,s: (x_1, …, x_k) ↦ (x_s, x_l). It is straightforward to see that T_l,s♯Π∈Γ(μ_s, μ_l).
Next, we present the following lemma.
Let μ_1, μ_2, μ̃_1,μ̃_2 be finite positive measures defined over the same Polish space (𝒰,d), satisfying μ_1(𝒰)= μ̃_1(𝒰) and μ_2(𝒰)= μ̃_2(𝒰). Then
W_1( μ_1 +μ_2 , μ̃_1 + μ̃_2 ) ≤ W_1( μ_1, μ̃_1 ) + W_1( μ_2 , μ̃_2 ).
The desired inequality follows from the observation that for any two couplings π_1 ∈Γ(μ_1, μ̃_1) and π_2 ∈Γ(μ_2, μ̃_2) we have π_1 + π_2 ∈Γ(μ_1+ μ_2 , μ̃_1 + μ̃_2).
§.§.§ Some estimates for the ∞-OT distance between measures
In the proofs of our main results we will make use of the ∞-OT distance W_∞(·,·) between probability measures defined over the same metric space. Precisely, let μ_1, μ_2 be two (Borel) probability measures over a Polish metric space (𝒰,d). We define W_∞(μ_1, μ_2) as
W_∞(μ_1, μ_2) := inf_π∈Γ(μ_1, μ_2)sup_(x̃ , ỹ) ∈(π) d(x̃ , ỹ),
where (π) stands for the support of the measure π.
The following results relate, on the one hand, the ∞-OT distance between two measures with densities with respect to the uniform measure over a Euclidean (or geodesic) ball, and on the other hand the L^∞ distance between the densities.
Let μ_1, μ_2 be two probability measures over B(0,) ⊆^m with densities ρ_1, ρ_2 with respect to the uniform probability measure over B(0,) satisfying:
1/α≤ρ_1(x), ρ_2(x) ≤α,
for some α>1. Then
W_∞(μ_1, μ_2) ≤α C_m ‖ρ_1 - ρ_2 ‖_L^∞(B(0,)),
where C_m only depends on dimension m and not on α, , or ρ_1, ρ_2.
Theorem 1.2 in <cit.> gives the result for =1. The general case follows immediately from a rescaling argument.
Let be a smooth, compact Riemannian manifold without boundary and let x ∈. Let < ι_/2. Let μ_1, μ_2 be two probability measures over B_(x,) with densities ρ_1, ρ_2, with respect to the uniform probability measure over B_(x,), that satisfy:
1/α≤ρ_1(x), ρ_2(x) ≤α,
for some α>1. Then
W_∞(μ_1, μ_2) ≤α C_‖ρ_1 - ρ_2 ‖_L^∞(B_(x,)),
where C_ is a manifold dependent constant that does not depend on x, α, , or ρ_1, ρ_2.
Since for every x∈ the map exp_x : B(0,) → B_(x,) is bi-Lipschitz, with bi-Lipschitz constants uniformly bounded over all 0 < < ι_/2 and all x ∈, the desired inequality follows directly from Proposition <ref>.
We also discuss probabilistic bounds for the ∞-OT distance between μ and the empirical measure μ_n. Specifically, we will use the following result that can be found in <cit.> (see also references therein).
Let μ_n be the empirical measure of n i.i.d. samples from μ. Then, for any s>1 and n ∈, we have
W_∞(μ, μ_n) = min_T : T_♯μ =μ_nsup_x ∈ d_( T(x), x) ≤ A_, s(log(n))^p_m/n^1/m ,
with probability at least 1- C_, s n^-s, where p_m is a dimension dependent power: p_m= 3/4 when m=2, and p_m= 1/m when m≥ 3. The constants C_, s and A_, s only depend on s and on . In the sequel, we use T_n to denote a minimizer in the above formula.
§.§.§ Proof of Theorem <ref>
We now provide a proof of Theorem <ref>, which we restate below for convenience.
| κ_(x,y) - ^2/2(m+2)_x(v) | ≤ C ^2 d_(x,y) + C' ^3 ,
where y is a point on the geodesic from x in direction v∈ T_x and is the radius of Ollivier balls.
Let x,y ∈ and let 𝒫: B_(x,) → B_(y, ) be the map from (<ref>). Then, according to Proposition 6 in <cit.>, we have
d_(x̃ , 𝒫( x̃ )) = d_(x,y) ( 1 - d_(x, x̃)^2(K(v,w)/2 + O( d_(x,y) + ) ) ),
where v= log_x(y)/|log_x(y)|, w=log_x(x̃), and K(v,w) is the sectional curvature in the plane generated by the vectors v,w ∈ T_x. Equation (<ref>) is represented pictorially in Figure <ref>: the distance between x̃ and ỹ:= 𝒫(x̃) is almost equal to the distance between x and y, and the correction term of order 3 is precisely captured by the sectional curvature between vectors v and w.
The measure 𝒫_♯μ_x^, although not exactly equal to μ_y^, has a density ρ_xy with respect to μ_y^ that satisfies
sup_ỹ∈ B_(y,) | ρ_xy (ỹ) - 1 | ≤ C_( d_(x,y)^2 ^2 + d_(x,y) ^2 ),
as follows from the discussion in the proof of Proposition 6 in section 8 in <cit.>. Combining the above estimate with Corollary <ref> we get
W_∞( 𝒫_♯μ_x^ , μ_y^ ) ≤ C_sup_ỹ∈ B_(y,) | ρ_xy (ỹ) - 1 | ≤ C_ d_(x,y) ^3.
We can thus find a map T_y: B_(y, ) → B_(y,) such that T_y ♯ ( 𝒫_♯μ_x^) = μ_y^ and such that
sup_y' ∈ B_(y, ) | y' - T_y(y') | ≤ C_' d_(x,y) ^3;
see <cit.>. If we now define the function : B_(x,) → B_(y,) as
:= T_y ∘𝒫,
we see that
_♯μ_x^ = μ_y^.
Moreover, for every x̃∈ B_(x, ) we have
d_(x̃ , ( x̃ )) = d_(x,y) ( 1 - d_(x, x̃)^2(K(v,w)/2 + O( d_(x,y) + ) ) ),
where we recall v= log_x(y)/|log_x(y)| and w=log_x(x̃). It follows that
W_1(μ_x^, μ_y^) ≤∫_B_(x, ) d_(x̃ , (x̃)) d μ_x^(x̃)
≤ d_(x, y ) -^2 d_(x,y) _x(v)/2(m+2) + O( d_(x,y)^2 ^2 + d_(x,y) ^3 ),
and in turn
1/^2κ_(x,y) + O(d_(x,y) + ) ≥_x(v)/2(m+2),
giving a lower bound for κ_.
To obtain a matching upper bound, we follow <cit.> and construct a function f: → that is 1-Lipschitz with respect to d_(·,· ) and that almost realizes the sup in the Kantorovich-Rubinstein dual formulation of the 1-Wasserstein distance between μ_x^ and μ^_y (see Theorem <ref>). To define this function, let us consider 0<r_0< ι_, and suppose that is small enough and x,y are sufficiently close so that B_(x,), B_(y, ) ⊆ B_(x, r_0/4). Let E_0:= { v' ∈ T_x : ⟨ v' , log_x(y) ⟩=0 }. We first define f : B_(x,r_0) → by
f(z):= (z, exp_x(E_0)) if ⟨log_x(z) , log_x(y) ⟩≥ 0
- (z,exp_x(E_0)) if ⟨log_x(z) , log_x(y) ⟩ < 0,
which is 1-Lipschitz in its domain, and then extend it to a global 1-Lipschitz function using the McShane-Whitney extension theorem. Following the steps in section 8 in <cit.> we can then see that
f( (x̃) ) - f(x̃)= d_(x,y) ( 1 - d_(x, x̃ )^2(K(v,w)/2 + O( d_(x,y) + ) ) )
for every x̃∈ B_(x,), where we recall v= log_x(y)/|log_x(y)|,w=log_x(x̃) and is as in (<ref>). Integrating with respect to μ_x^ and using the Kantorovich-Rubinstein theorem we get
W_1(μ_x^, μ_y^) ≥∫_ f(ỹ) dμ^_y(ỹ) - ∫_ f(x̃) dμ^_x(x̃)
= ∫_ f((x̃)) dμ^_x(x̃) - ∫_ f(x̃) dμ^_x(x̃)
= d_(x, y ) -^2 d_(x,y) _x(v)/2(m+2) + O( d_(x,y)^2 ^2 + d_(x,y) ^3 ),
from where we can now obtain
1/^2κ_(x,y) ≤_x(v)/2(m+2) + O(d_(x,y) + ).
§.§.§ Some additional lemmas
In this section we collect a few lemmas that we use in the proof of our main results.
There is a constant c such that for all small enough _0>0 and all x∈ we have
v_m (1 - c_0^2) _0^m ≤( B_(x, _0) ) ≤ v_m (1 + c_0^2) _0^m,
where v_m is the volume of the m-dimensional Euclidean ball.
Moreover, if _0>0 is such that W_∞(μ, μ_n) ≤1/2_0, then
μ(B_(x, _0 - W_∞(μ, μ_n) )) ≤μ_n(B_(x,_0)) ≤μ(B_(x,_0+ W_∞(μ, μ_n))).
The first part is a standard result in differential geometry (see for example 1.35 in <cit.>). The second part is immediate from the definition of W_∞(μ, μ_n).
Given the assumed compactness and smoothness of the manifold , it is straightforward to show that there exists a constant C_≥ 1 such that
μ_x^( B_(x,) ∖ (B_(x,) ∩ B_(y,) ) ) ≤ C_d_(x,y)/
for all x, y ∈ and all ≤ι_/2; indeed, this type of estimate is easily proved in Euclidean space and can be extended to the manifold setting for all small enough using coarse bounds on the metric distortion by the exponential map around a given point on the manifold. With the aid of standard concentration inequalities we can get a similar estimate to (<ref>) when x ∈ and μ_x^ is replaced with the empirical measure μ_x^G. This is the content of the next lemma.
Provided that W_∞(μ, μ_n)/ is sufficiently small, we have
μ_n(B_(x,) ∖ B_(x,) ∩ B_(y,) )/μ_n(B_(x,))≤ψ(0) c_0/6
for all x,y∈ satisfying 0<d_(x,y) ≤ψ(0)/12 C_δ_0.
This result follows from (<ref>), Lemma <ref>, and the smallness assumption of W_∞(μ, μ_n) relative to .
§.§ Proofs of global curvature bounds
We start by proving Theorem <ref>.
Thanks to Lemma <ref> it is enough to prove the lower bound under the assumption that x,y∈ are two distinct points such that d_G,(x,y) = d̃_G,(x,y) ≤δ_1. Notice that d̃_G, (x,y) = δ_0 ψ( d_(x,y)/δ_0) and thus we may further split the analysis into different cases determined by the value of d_(x,y). It is worth recalling that in the setting considered here we have B_G(x, )= B_(x, ) ∩ and B_G(y, )= B_(y, ) ∩.
Case 1: 0<d_(x,y) ≤ψ(0)/12 C_δ_0, where C_ is as in (<ref>).
We may assume without the loss of generality that | B_G(x,)| ≥ | B_G(y,)|, for otherwise we can swap the roles of x and y. From Lemma <ref> it follows
μ_x^G( B_(x,) ∖ (B_(x,) ∩ B_(y,) ) ) ≤ψ(0) c_0/6.
Also, for all x̃∈ B_G(x,) and ỹ∈ B_G(y, ) we have
d_G, (x̃, ỹ ) ≤ d_G, ( x, x̃ ) + d_G, ( x̃, ỹ ) + d_G, ( ỹ, y ) ≤ 2 δ_0 ψ( /δ_0) + δ_0 ≤ 3.
By selecting a coupling between μ_x^G and μ_y^G that leaves all mass of μ_x^G in B_(x,) ∩ B_(y,) fixed, the above estimates imply
W_1,G(μ_x^G, μ_y^G) ≤ 3 μ_x^G( B_(x, ) ∖ (B_(x, ) ∩ B_(y, ) ) ) ≤1/2ψ(0) δ_0 .
In addition, since by definition we have d_G,(x,y) = d̃_G, (x,y) ≥δ_0 ψ(0), it follows
κ_G(x,y)/^2 = 1/^2(1- W_1,G(μ_x^G, μ_y^G)/d_G, (x,y)) ≥1/2^2.
Case 2: ψ(0)/12 C_δ_0 ≤ d_(x,y) ≤δ_1 - 2.
We start by finding a good upper bound for W_1,G(μ_x^G, μ_y^G ). Without the loss of generality we can assume that
a:= μ( B_(x,))/μ_n( B_G(x,) ) ≤μ( B_(y,))/μ_n( B_G(y,) ) ,
for otherwise we can swap the roles of x and y.
We split the measure μ_x^G into
μ_x^G = μ_x^G⌊_B_(x, ') + μ_x^G ⌊_B_(x,) ∖ B_(x,') ,
where ' := - 3W_∞(μ, μ_n)- C_' ^4 and where the measures on the right hand side represent the restrictions of μ_x^G to B_(x,') and B_(x, ) ∖ B_(x, '), respectively. We decompose the measure μ_y^G as
μ_y^G= μ_y,1^G + μ_y,2^G
for two positive measures μ_y,1^G and μ_y,2^G that we define below, the first of which will be suitably coupled with μ_x^G⌊_B_(x, ') while the second one will be coupled with μ_x^G ⌊_B_(x,) ∖ B_(x,').
Precisely, the measure μ_y,1^G is defined as
μ_y,1^G := a T_n ♯ ( _♯ ( μ_x^⌊_T_n^-1(B_(x,')) ) ),
where T_n : → is an ∞-OT map between μ and μ_n as defined in Theorem <ref> and is the map defined in (<ref>). We will show that μ_y,1^G ≤μ^G_y, which would allow us to take μ_y,2^G := μ_y^G - μ_y,1^G. To see that indeed μ_y,1^G ≤μ^G_y, we first observe that T_n^-1( B_(x,') ) is contained in B_(x, - 2 W_∞(μ, μ_n) - C_' ^4 ). From (<ref>) and ii) in Assumption <ref> it follows that (T_n^-1( B_(x,') )) ⊆ B_(y, - 2W_∞(μ, μ_n) ). Finally, T_n( (T_n^-1( B_(x,') ))) ⊆ B_(y, - W_∞(μ, μ_n)). From this we see that the support of μ_y,1^G is contained in B_(y, - W_∞(μ, μ_n)). Now, let A⊆ B_(y, - W_∞(μ, μ_n)). We see that
μ_y,1^G(A) = a μ_x^( T_n^-1( B_(x,') ) ∩^-1( T_n^-1 (A)) )
≤ a μ_x^( ^-1(T_n^-1(A)))
= aμ_y^( T_n^-1(A) )
= a/μ(B_(y,))μ( T_n^-1(A) )
= a/μ(B_(y,))μ_n( A )
= a μ_n(B_G(y,))/μ(B_(y,))μ_y^G( A )
≤μ_y^G(A).
In the above, the second equality follows from the fact that T_n^-1(A) ⊆ B_(y,) and the fact that _♯μ_x^ = μ_y^; the fourth equality follows from the fact that T_n ♯μ = μ_n; the last inequality follows from the definition of a. Since A was arbitrary, we conclude that indeed μ_y,1^G ≤μ_y^G.
Next, we show that μ_x^G⌊_B_(x, ') and μ_y,1^G have the same total mass and then construct a suitable coupling between them. Indeed, on one hand we have μ_x^G⌊_B_(x, ')()= μ_x^G ( B_(x, ') ) = μ_n( B_(x, ') )/μ_n( B_G(x, ) ). On the other hand,
μ_y,1^G() = a μ_x^( T_n^-1(B_(x, '))) = a/μ( B_(x, ) )μ ( T_n^-1(B_(x,')))
= a/μ( B_(x, ) )μ_n ( B_(x,')) = μ_n( B_(x, ') )/μ_n( B_G(x, ) ),
which implies that the measures indeed have the same total mass. To construct a suitable coupling π_1^G ∈Γ( μ_x^G⌊_B_(x, ') , μ_y,1^G), we first introduce the measure
ν̃_1 := a/μ(B_(x,))μ⌊_ T_n^-1(B_(x,') ) .
Observe that π̃_1:= ( T_n × Id )_♯ν̃_1
belongs to Γ( μ_x^G⌊_B_(x, ') , ν̃_1) and
d_(x̃,x̃') ≤ W_∞(μ, μ_n), ∀ (x̃ , x̃') ∈(π̃_1).
Also, π̃_2:= (Id ×)_♯ν̃_1 ∈Γ(ν̃_1, _♯ν̃_1) satisfies
d_(x̃' , ỹ') = d_(x,y) ( 1 - d_(x, x̃')^2(K(v,w')/2 + O( d_(x,y) + ) ) )
for all (x̃', ỹ ') ∈(π̃_2), according to (<ref>); in the above, v= log_x(y)/|log_x(y)| and w'=log_x(x̃'). Finally, π̃_3 := (Id × T_n)_♯( _♯ν̃_1 ) ∈Γ( _♯ν̃_1, μ_y,1^G ) satisfies
d_(ỹ',ỹ) ≤ W_∞(μ, μ_n), ∀ (ỹ ' , ỹ) ∈(π̃_3).
We can then define π_1^G ∈Γ( μ_x^G⌊_B_(x, ') , μ_y,1^G) as
π_1^G := T_1,4 ♯Π.
where Π is the glueing of the couplings π̃_1, π̃_2, π̃_3 as defined in (<ref>) and T_1,4 is the projection onto the first and fourth coordinates introduced when we defined the glueing of couplings.
We now proceed to estimate W_1,G(μ_x^G, μ_y^G) from above using the coupling π_1^G. First, let (x̃,x̃', ỹ', ỹ) ∈(Π). From the above discussion we have
d_(x̃, ỹ) = d_(x,y) ( 1 - d_(x,x̃ ')^2(K(v,w')/2 + O( d_(x,y) + ) ) ) + O(W_∞(μ, μ_n)).
In particular, given the smallness of W_∞(μ, μ_n) relative to and the fact that d_(x, y) ≤δ_1 - 2 we can assume without the loss of generality that d_(x̃ , ỹ) ≤δ_1 and thus
d_ G, (x̃ , ỹ) ≤d̃_ G, (x̃ , ỹ) = δ_0 ψ( d_(x̃,ỹ)/δ_0).
Using the fact that ψ is non-decreasing combined with a simple Taylor expansion of ψ around d_( x, y)/δ_0, we can bound the right hand side of the above by
δ_0 ψ( d_(x , y)/δ_0) + ψ'( d_( x , y)/δ_0) ( d_(x̃ , ỹ) - d_(x,y) )
+ 1/2δ_0‖ψ”‖_∞ ( d_(x̃ , ỹ) - d_(x,y) )^2
≤δ_0 ψ( d_(x , y)/δ_0) - 1/2ψ'( d_(x,y)/δ_0) d_(x,y) ^2 K(v, w')
+ O(^4 + W_∞(μ, μ_n));
notice that ‖ψ”‖_∞, the supremum norm of the second derivative of ψ, is finite by Assumption <ref>; notice also that this second order correction term is of order O( W_∞(μ, μ_n)^2/ + ^5), which is much smaller than O(^4 + W_∞(μ, μ_n)).
From the above estimates we get
W_1,G(μ_x^G ⌊_B_(x, '), μ_y,1^G ) ≤∫ d_G,(x̃ , ỹ) d π_1^G(x̃ , ỹ)
= ∫ d_G,(x̃, ỹ) d Π(x̃ ,x̃', ỹ', ỹ)
≤δ_0ψ(d_(x,y)/δ_0) - 1/2ψ'( d_(x,y)/δ_0) d_(x,y) ∫ d_(x, x̃')^2 K(v, log_x(x̃')) d ν̃_1(x̃ ')
+ C( ^4 + W_∞(μ, μ_n))
≤δ_0ψ(d_(x,y)/δ_0) - 1/2ψ'( d_(x,y)/δ_0) d_(x,y) ∫ d_(x, x̃')^2 K(v, log_x(x̃')) d μ_x^(x̃ ')
+ C( ^4 + W_∞(μ, μ_n))
≤δ_0ψ(d_(x,y)/δ_0) - ψ'( d_(x,y)/δ_0) d_(x,y) ^2 _x(v)/2(m+2)
+ C( ^4 + W_∞(μ, μ_n)).
In the second to last inequality we have substituted the integral with respect to ν̃_1 with an integral with respect to μ_x^ by introducing an error that is of much smaller order than W_∞(μ, μ_n) + ^4, thanks to Lemma <ref>; in the last inequality we have used (<ref>) and (<ref>).
Next, we find a bound for W_1,G( μ_x^G ⌊_B_(x,) ∖ B_(x,'), μ_y,2^G). We observe that for every x̃∈ B_G(x,) we have d_G,(x̃,x) ≤max{δ_0 , d_(x̃, x ) }≤. Likewise, for every ỹ∈ B_G(y,) we have d_G,(ỹ,y) ≤max{δ_0 , d_(ỹ, y ) }≤. Additionally, d_G,(x,y) ≤ d_(x,y) ≤δ_1 = c_1. It follows that d_G, (x̃ , ỹ) ≤ C for all x̃∈ B_G(x,) and ỹ∈ B_G(y,).
This implies
W_1,G(μ_x^G ⌊_B_(x,) ∖ B_(x,'), μ_y,2^G) ≤ Cμ_n( B_(x,) ∖ B_(x,') )/μ_n( B_(x,) )
≤ C (W_∞(μ, μ_n) + ^4),
thanks to Lemma <ref>.
We may now invoke Lemma <ref> and (<ref>) to get
W_1,G(μ_x^G, μ_y^G ) ≤ W_1,G(μ_x^G ⌊_B_(x, '), μ_y,1^G ) + W_1,G(μ_x^G ⌊_B_(x,) ∖ B_(x,'), μ_y,2^G)
≤δ_0ψ(d_(x,y)/δ_0) - ψ'( d_(x,y)/δ_0) d_(x,y) ^2 _x(v)/2(m+2)
+ C( ^4 + W_∞(μ, μ_n)).
Recalling that d_G, (x,y) = δ_0ψ( d_(x,y)/δ_0) ≥δ_0 ψ(0) = c_0ψ(0), we deduce
κ_G(x,y)/^2 = 1/^2(1 - W_1,G(μ_x^G, μ_y^G)/d_G,(x,y)) ≥ψ'(d_(x,y)/δ_0) d_(x,y) /δ_0ψ( d_(x,y)/δ_0 ) _x(v)/2(m+2)
- C ( + W_∞(μ, μ_n)/^3) .
Under the assumption that _x(v) ≥ 2(m+2) K≥ 0, the first term on the right hand side can be bounded from below by ψ'(0) c_0ψ(0) /12c_1C_ K.
If, on the other hand, K <0, then the first term on the right hand side of (<ref>) can be bounded from below by c_1/c_0ψ(0) K.
Case 3: Here we assume that δ_1 ≥ d_(x,y) ≥δ_1 - 2 .
According to Assumption <ref> we have d_(x,y) ≥ 2 δ_0 and in particular d_G, (x,y) = δ_0 ψ( d_(x,y)/δ_0) = d_(x,y). Let x̅ be the midpoint between x and y along a (manifold) minimizing geodesic connecting them; x̅ may not be a point in , but this is unimportant for our argument. Now, notice that d_(x, x̅)= 1/2 d_(x, y) ∈ [ 2 δ_0, δ_1- 2 ] and also d_(x̅,y)= 1/2 d_(x, y) ∈ [ 2 δ_0, δ_1- 2 ]. Using the triangle inequality for W_1,G and recalling Remark <ref> we get:
W_1,G(μ_x^G, μ_y^G) ≤ W_1,G(μ_x^G, μ_x^G) + W_1,G(μ_x^G, μ_y^G).
Then
κ_G(x,y) = 1 - W_1,G(μ_x^G, μ_y^G)/d_G,(x,y)
≥ 1 - W_1,G(μ_x^G, μ_y^G)/d_(x,y)
≥ 1- W_1,G (μ_x^G, μ_x^G) + W_1,G ( μ_x^G, μ_y^G) /d_(x,y)
= 1/ 2( 1 - W_1,G (μ_x^G, μ_x^G) /d_(x,x̅)) + 1/2( 1 - W_1,G (μ_x̅^G, μ_y^G) /d_(x̅,y)).
Using (<ref>) twice (which can be applied regardless of whether x∈ or not), and noticing that ψ(d_(x, y) / 2 δ_0 ) = d_(x, y) / 2 δ_0 and ψ'( d_(x, y) / 2 δ_0 ) =1,
we can lower bound each of the terms on the right hand side of the above expression by 1/2(s_K^2 K - C (^3 + W_∞(μ, μ_n)/) ).
We would like to highlight the different ways in which W_1,G(μ_x^G, μ_y^G) is bounded in Cases 1 and 2 in the previous proof. Indeed, in Case 1, when d_(x,y) is very small, we choose a coupling between μ_x^G and μ_y^G that leaves most mass fixed, taking advantage of the fact that the overlap between B_G(x,) and B_G(y, ) is large in this case. In Case 2, on the other hand, the coupling that we use mimics the coupling in the proof of Theorem <ref>, where all mass moves parallel to the geodesic connecting x and y. Notice that we do need to split into these two cases: in going from (<ref>) to the final lower bound in Case 2 we need to have a lower bound on d_(x, y) that is O() (for the case K>0).
Notice also that the profile function ψ can not be taken to be the identity map for all t>0. Indeed, when we divide W_1,G(μ_x^G, μ_y^G) by δ_0 ψ(0) to go from (<ref>) to (<ref>), we need ψ(0) >0 to guarantee that the term 1/δ_0 ψ(0 ) ^2(^4+ W_∞(μ, μ_n)) is indeed small regardless of how small d_(x,y) may be. Since the minimum interpoint distance in a data set is much smaller than O(1/n^1/m), the distance d_(x,y) may indeed be quite small. This forces us to consider a profile function ψ that bends away, smoothly (so that the first order Taylor expansion of ψ can reveal the desired curvature term), from the diagonal. The factor s_K in the lower bound (<ref>) arises when lower bounding κ_G for x,y for which O(δ) ≤ d_(x,y) ≤δ_0 . We can think of this range as the transition from the Riemannian lengthscale, where 's geometry can be captured, to a lengthscale where the RGG exhibits complete graph behavior. A somewhat similar separation of scales in an RGG was used in <cit.> to study the convergence of discrete Wasserstein spaces defined over RGGs toward the standard Wasserstein space; see the discussions in Remark 1.16. and section 2.1 in <cit.>.
We now proceed to prove Theorem <ref>. The proof is very similar to the one for Theorem <ref> and thus we will mostly provide details for the steps that need some adjustments. In particular, we highlight the reason for requiring d̂_g to approximate d_ up to an error of order four; see Remark <ref> below.
Thanks to Lemma <ref> we can assume, without the loss of generality, that x,y ∈ are such that d_G , (x,y) = d̃_G, (x,y) = δ_0ψ( d̂_g(x,y)/δ_0) ≤δ_1. As in Theorem <ref> we split our analysis into three different cases. We recall that Ollivier balls in this setting take the form:
B_G(x, ) = {x̃∈ : d̂_g(x, x̃ ) ≤}, B_G(y, ) = {ỹ∈ : d̂_g(y, ỹ ) ≤}.
Case 1: 0<d̂_g(x,y) ≤ψ(0)/12 C_δ_0, where C_ is as in (<ref>).
We may assume without the loss of generality that | B_G(x,)| ≥ | B_G(y,)|, for otherwise we can swap the roles of x and y. For _± := ± (C_1β^3 + C_2 ^4), thanks to (<ref>) and Lemmas <ref> and <ref> we can assume
μ_x^G( B_G(x, ) ∖ (B_G(x, ) ∩ B_G(y, ) ) ) ≤36 ψ(0) c_0/12^2.
Now, for all x̃∈ B_G(x,) and ỹ∈ B_G(y, ) we have
d_G, (x̃, ỹ ) ≤ d_G, ( x, x̃ ) + d_G, ( x̃, ỹ ) + d_G, ( ỹ, y ) ≤ 2 δ_0 ψ( /δ_0) + δ_0 ≤ 3.
By selecting a coupling between μ_x^G and μ_y^G that leaves all mass of μ_x^G in B_G(x,) ∩ B_G(y,) fixed, the above estimates imply
W_1,G(μ_x^G, μ_y^G) ≤ 3 μ_x^G( B_G(x, ) ∖ (B_G(x, ) ∩ B_G(y, ) ) ) ≤3/4ψ(0) δ_0 .
In addition, since by definition we have d_G,(x,y) = d̃_G, (x,y) ≥δ_0 ψ(0), it follows
κ_G(x,y)/^2 = 1/^2(1- W_1,G(μ_x^G, μ_y^G)/d_G, (x,y)) ≥1/4^2.
Case 2: ψ(0)/12 C_δ_0 ≤d̂_g(x,y) ≤δ_1 - 2.
As in the proof of Theorem <ref> we may further assume, without the loss of generality, that
a:= μ( B_(x,))/μ_n( B_G(x,) ) ≤μ( B_(y,))/μ_n( B_G(y,) ) .
The measure μ_x^G is decomposed as
μ_x^G = μ_x^G⌊_B_(x, ') + μ_x^G ⌊_B_G(x,) ∖ B_(x,') ,
where now ' := - 3 W_∞(μ, μ_n) - C_1 β^3 - (C_2+C_')^4 and, we recall, B_G is as in (<ref>). Notice that the additional terms in the definition of ', relative to how ' was defined in the proof of Theorem <ref>, account for the discrepancy between d_ and d̂_g. With this definition we have B_(x, ' ) ∩⊆ B_G(x,).
We define the measure μ_y,1^G as
μ_y,1^G := a T_n ♯ ( _♯ ( μ_x^⌊_T_n^-1(B_(x,')) ) ),
for T_n, and μ_x^ as in Case 2 in the proof of Theorem <ref>. We can follow the same steps there to conclude that μ_y,1^G ≤μ_y^G and then define μ_y,2^G:= μ_y^G - μ_y,1^G. Also, we may introduce analogous couplings Π and π_1^G = T_1,4♯Π∈Γ( μ_x^G⌊_B_(x, ') , μ_y,1^G) for which:
d_(x̃, ỹ) = d_(x,y) ( 1 - d_(x, x̃ ')^2(K(v,w')/2 + O( d_(x,y) + ) ) ) + O(W_∞(μ, μ_n))
for all (x̃, x̃' ,ỹ ', ỹ ) ∈(Π). In turn, we can use the approximation error estimates for d̂_g-d_ to obtain
d̂_g(x̃, ỹ) = d̂_g(x,y) ( 1 - d_(x, x̃ ')^2 (K(v,w')/2 + O( d̂_g(x,y) + ) ) ) + O( β^3 + ^4 + W_∞(μ, μ_n))
for all (x̃, x̃' ,ỹ ', ỹ ) ∈(Π), and d_G, (x̃ , ỹ) ≤d̃_G, (x̃ , ỹ) = δ_0 ψ( d̂_g(x,y)/δ_0) for all (x̃ , ỹ) ∈(π_1^G). From this we can conclude that
W_1,G(μ_x^G ⌊_B_(x, '), μ_y,1^G ) ≤∫ d_G,(x̃ , ỹ) d π_1^G(x̃ , ỹ)
= ∫ d_G,(x̃, ỹ) d Π(x̃ ,x̃', ỹ', ỹ)
≤δ_0ψ(d̂_g(x,y)/δ_0) - ψ'( d̂_g(x,y)/δ_0) d̂_g(x,y) ^2 _x(v)/2(m+2)
+ C( β^3 + ^4 + W_∞(μ, μ_n)).
In addition,
W_1,G(μ_x^G ⌊_B_(x,) ∖ B_(x,'), μ_y,2^G) ≤ Cμ_n( B_(x,) ∖ B_(x,') )/μ_n( B_(x,) )
≤ C (W_∞(μ, μ_n) + β^3 + ^4),
thanks to Lemma <ref>.
We may now invoke Lemma <ref> and (<ref>) to get
W_1,G(μ_x^G, μ_y^G ) ≤ W_1,G(μ_x^G ⌊_B_(x, '), μ_y,1^G ) + W_1,G(μ_x^G ⌊_B_(x,) ∖ B_(x,'), μ_y,2^G)
≤δ_0ψ(d̂_g(x,y)/δ_0) - ψ'( d̂_g(x,y)/δ_0) d̂_g(x,y) ^2 _x(v)/2(m+2)
+ C( β^3 +^4 + W_∞(μ, μ_n)).
Recalling that d_G, (x,y) = δ_0ψ( d̂_g(x,y)/δ_0) ≥δ_0 ψ(0) = c_0ψ(0), we deduce
κ_G(x,y)/^2 = 1/^2(1 - W_1,G(μ_x^G, μ_y^G)/d_G,(x,y)) ≥ψ'(d̂_g(x,y)/δ_0) d̂_g(x,y) /δ_0ψ( d̂_g(x,y)/δ_0 ) _x(v)/2(m+2)
- C (β+ + W_∞(μ, μ_n)/^3) .
The lower bound (<ref>) now follows.
Case 3: Here we assume that δ_1 - 2 ≤d̂_g(x,y) ≤δ_1.
As in Case 3 in the proof of Theorem <ref> we consider the midpoint x̅ between x and y (along the manifold geodesic). It is straightforward to see from 1 in Assumption <ref> that
| d̂_g(x̅, y)/d̂_g(x,y) - 1/2| ≤ C β^2 + C ^3, | d̂_g(x, x̅)/d̂_g(x,y) - 1/2| ≤ C β^2 + C ^3.
Then
κ_G(x,y) ≥ 1- W_1,G (μ_x^G, μ_x^G) + W_1,G ( μ_x^G, μ_y^G) /d_G, (x,y)
= 1- W_1,G (μ_x^G, μ_x^G) + W_1,G ( μ_x^G, μ_y^G) /d̂_g(x,y)
≥d̂_g(x, x̅)/d̂_g(x,y) ( 1 - W_1,G (μ_x^G, μ_x^G) /d̂_g(x,x̅)) + d̂_g(x̅, y)/d̂_g(x,y) ( 1 - W_1,G (μ_x̅^G, μ_y^G) /d̂_g(x̅,y))
- C β^2 - C^3.
As in the proof of Theorem <ref>, we may now use (<ref>) to bound from below each of the terms ( 1 - W_1,G (μ_x^G, μ_x^G) /d̂_g(x,x̅)) and ( 1 - W_1,G (μ_y^G, μ_x^G) /d̂_g(y,x̅)) by 1/2(s_K^2 K - C ( β^2 + ^3 + W_∞(μ, μ_n)/) ).
In the regime d̂_g(x,y) ∼ , i.e., the regime corresponding to Case 2 in the proof of Theorem <ref>, we use the fact that d̂_g satisfies
d̂_g(x,y) = d_(x,y) + O(β^3 + ^4),
whereas an approximation error of order O(^3) would have produced a lower bound on discrete curvature of the form s_K K - C for some constant C that may be larger than s_K K itself. In particular, if the error was of order O(^3), the sign of the discrete lower bound would not be guaranteed to be consistent with the sign of the manifold's curvature lower bound. From our proof it thus seems that d̂_g(x,y) cannot be simply taken to be the Euclidean distance between x and y and a finer estimator seems to be necessary.
§.§ Pointwise consistency
Next, we present the proof of our pointwise consistency results, i.e., Theorems <ref> and <ref>.
Since d_(x,y) is assumed to satisfy 2δ_0 ≤ d_(x,y) ≤1/2δ_1, we may use Proposition <ref>, (<ref>) in Case 2 in the proof of Theorem <ref>, and the fact that ψ(t)=t for t ≥ 1 to conclude that
κ_G(x,y)/^2≥_x(v)/2(m+2) - C ( + W_∞(μ, μ_n)/^3) .
It thus remains to obtain matching upper bounds.
For this purpose, let f: → be the function defined in (<ref>). Using (<ref>) in Proposition <ref> and the fact that f is 1-Lipschitz with respect to d_, we conclude that the function f restricted to is 1-Lipschitz with respect to d_G,. In turn, thanks to the Kantorovich-Rubinstein theorem (i.e., Theorem <ref>) we obtain
∫ f(ỹ) dμ_y^G(ỹ) - ∫ f(x̃) dμ_x^G(x̃) ≤ W_1,G(μ_x^G, μ_y^).
Using again the fact that f is 1-Lipschitz with respect to d_ we deduce
| ∫ f(x̃) dμ_x^G(x̃) - ∫ f(x̃) dμ_x^(x̃) | ≤ W_1(μ_x^G, μ_x^),
and
| ∫ f(x̃) dμ_y^G(ỹ) - ∫ f(ỹ) dμ_y^(ỹ) | ≤ W_1(μ_y^G, μ_y^).
Putting together the above inequalities we conclude that
∫ f(ỹ) dμ_y^(ỹ) - ∫ f(x̃) dμ_x^(x̃) ≤ W_1,G(μ_x^G, μ_y^G) + W_1(μ_x^G, μ_x^) + W_1(μ_y^G, μ_y^).
Using now (<ref>), we can lower bound the left hand side of the above expression and conclude that
1 -^2 _x(v)/2(m+2)≤W_1,G(μ_x^G, μ_y^G)/d_(x,y) + φ ,
where
φ := C (d_(x,y) ^2 + ^3 ) + W_1(μ_x^G, μ_x^)/d_(x,y) + W_1(μ_y^G, μ_y^)/d_(x,y).
Using the fact that d_G,(x,y) = d_(x,y) by Proposition <ref>, and rearranging terms, we conclude that
κ_G(x,y) ≤^2 _x(v)/2(m+2) + C (d_(x,y) ^2 + ^3 + W_1(μ_x^G, μ_x^)/d_(x,y) + W_1(μ_y^G, μ_y^)/d_(x,y) ).
To finish the proof, it remains to notice that the terms W_1(μ_x^G, μ_x^) and W_1(μ_y^G, μ_y^) can be bounded above by C W_∞(μ, μ_n), as it follows easily from an application of Lemma <ref>.
From Case 2 in the proof of Theorem <ref> we have
κ_G(x,y)/^2≥_x(v)/2(m+2) - C (β+ + W_∞(μ, μ_n)/^3) ,
and thus it remains to obtain a matching upper bound.
First of all, notice that, thanks to (<ref>) and Assumption <ref>, we can assume that 2 δ_0 ≤ d_(x,y) ≤δ_1/2. Now, from (<ref>) and (<ref>) in Proposition <ref> we have
| d_(x,y)/d_G,(x,y) -1 | ≤ C(β^2+^3).
On the other hand, thanks to (<ref>), it follows that the function f from (<ref>) restricted to has Lipschitz constant, with respect to d_G,, no larger than 1+ C(β^2 + ^3). This implies that
∫ f(ỹ) dμ_y^G(ỹ) - ∫ f(x̃) dμ_x^G(x̃) ≤ (1+ C(β^2 + ^3) )W_1,G(μ_x^G, μ_y^).
Proceeding as in the proof of Theorem <ref> we can conclude that
1 -^2 _x(v)/2(m+2)≤ ( 1+ C(β^2 + ^3))W_1,G(μ_x^G, μ_y^G)/d_(x,y) + φ ,
for φ as in (<ref>). In turn, multiplying both sides of the above by 1/1+ C(β^2 + ^3)d_(x,y)/d_G, (x,y), using (<ref>), and rearranging terms, we conclude that
κ_G(x,y) ≤^2 _x(v)/2(m+2) + C (β^2 + ^3 + W_1(μ_x^G, μ_x^)/d_(x,y) + W_1(μ_y^G, μ_y^)/d_(x,y) ).
The result now follows from the fact that, thanks to (<ref>) and Lemma <ref>, each of the terms W_1(μ_x^G, μ_x^) and W_1(μ_y^G, μ_y^) is bounded by C(W_∞(μ, μ_n) + β^3 + ^4).
§ APPLICATIONS
§.§ Lipschitz contractivity of the graph heat kernel
In this section we discuss some of the implications of our curvature lower bounds on the heat kernel associated to the unnormalized graph Laplacian Δ_n induced by the graph G=(, w_). We recall that the unnormalized graph Laplacian associated to G is defined as
Δ_n u(x) := 1/n^m+2∑_x̃∈ω_(x,x̃ ) ( u(x) - u(x̃)), u∈ L^2().
We will focus on the choice w_= w_, (see (<ref>)) for simplicity, but we remark that a lot of the discussion presented below can be adapted to the choice w_ = w_,.
The operator Δ_n can be written in matrix form as
Δ_n = 1/^2 (D - W),
where W is the weight matrix induced by the rescaled weights 1/n^m w_ and D is the degree matrix associated to W.
Δ_n plays a central role in graph-based learning, where it is used to define algorithms for supervised, semi-supervised, and unsupervised learning; see, e.g., <cit.> for some discussion. There are several results in the literature that discuss the asymptotic convergence of Δ_n toward 's Laplace-Beltrami operator; see e.g. <cit.> for pointwise convergence and <cit.> for spectral convergence. Here we add to the existing literature on graph Laplacians by providing novel contraction results that are implied by our curvature lower bounds. Specifically, we are interested in the behavior of the heat operator e^-t Δ_n as t →∞. The heat operator e^-t Δ_n can be defined via the spectral theorem or as the operator mapping an initial condition u ∈ L^2() to the solution at time t of the graph heat equation:
∂_s u_s = -Δ_n u_s,
u_0=u.
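For concreteness, the following is a minimal sketch (our own illustration, assuming the 0/1 indicator weights w_, and a precomputed matrix of pairwise geodesic distances) of the matrix form of Δ_n and of the action of the heat operator:

import numpy as np
from scipy.linalg import expm

def graph_heat_flow(d_M, eps, u, t, m):
    # Unnormalized graph Laplacian Delta_n = (D - W) / (n eps^{m+2}) for the
    # eps-graph with indicator weights w(x, y) = 1_{d_M(x, y) <= eps}; the heat
    # operator e^{-t Delta_n} is applied to the initial condition u.
    n = d_M.shape[0]
    W = (d_M <= eps).astype(float)
    np.fill_diagonal(W, 0.0)                  # no self-loops
    L = (np.diag(W.sum(axis=1)) - W) / (n * eps ** (m + 2))
    return expm(-t * L) @ u                   # matrix exponential times u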
In what follows we abuse notation slightly and use D(x) to denote the degree of x∈. Precisely,
D(x) := 1/n ^m∑_y ∈η( d_(x,y)/),
where η(t):= 1_ t ≤ 1. D can be thought of as a kernel density estimator for the distribution used to sample the data set , in this case the uniform measure over . Precisely, one can show via standard concentration arguments that for every r ∈ [^2 , 1] we have
max_x ∈ | α_ - D(x) | ≤ Cr,
with probability at least 1 - c(r)^-m exp(-c r^2 n ^m); e.g., see Corollary 3.7 in <cit.> for a closely related estimate. In the above, α_() is the volume of the m-dimensional Euclidean unit ball.
We study the evolution, along the heat flow, of the Lipschitz seminorm of a function u: → when is endowed with the distance d_G= d_G,. This seminorm is defined as:
Lip_G(u):= max_x, y ∈ , x ≠ y | u(x) - u(y)| /d_G(x,y) .
For a given u : → let
𝒜u (x) : = ∫ u(x̃) dμ_x^G(x̃ ) = 1/ n^m D(x)∑_z ∈ w_(x,z) u(z) , x ∈ .
Under the same assumptions as in Theorem <ref> it follows that
Lip_G( 𝒜u ) ≤ (1 - ^2 K_G) Lip_G(u) , ∀ u ∈ L^2(),
where
K_G := min{ s_K K - C ( + W_∞(μ, μ_n)/^3), 1/2^2}.
This is an immediate consequence of the definition of Ollivier Ricci curvature and the dual representation of the 1-Wasserstein distance. Indeed, by Theorem <ref> and the Kantorovich-Rubinstein theorem, for all x, y ∈ we have
(1- ^2 K_G) d_G(x,y) ≥ W_1(μ_x^G , μ_y ^G ) ≥1/Lip_G(u)( ∫ u( x̃ ) dμ_x^G(x̃ ) - ∫ u( ỹ ) dμ_y^G(ỹ ) )
= 1/Lip_G(u) ( 𝒜u (x) - 𝒜u(y)).
Since the above is true for all x, y ∈ we obtain the desired result.
Using Lemma <ref> we can establish the following contraction of Lip_G along the heat flow e^-tΔ_n.
Under the same assumptions as in Theorem <ref>, and letting K_G be defined as in Lemma <ref>, for all u: → we have
Lip_G (e^-t Δ_n u ) ≤exp( -(K_G - 4‖ D - α_‖_L^∞()diam(G) /c_0 ψ(0)^3) t ) Lip_G(u),
where diam(G):= max_x, y ∈ d_G(x,y).
We start by noticing that inequality (<ref>) is invariant under addition of constants. This is because e^-t Δ_n (u + c) = e^-t Δ_n u + c. Due to this, from now on we can assume without the loss of generality that u is such that ∑_z∈ u(z)=0.
Now, fix t ∈ [0,∞) and let x,y ∈ be a pair of data points such that
e^-t Δ_n u (x) - e^-tΔ_nu(y) /d_G(x,y) = Lip_G(e^-t Δ_n u );
such pair always exists because is a finite set. Notice that
d/dt( e^-t Δ_n u(x) - e^-t Δ_n u(y) )^2/2 d_G(x,y)^2 = e^-t Δ_n u(x) - e^-t Δ_n u(y) /d_G(x,y)^2· ( - Δ_n e^-t Δ_n u(x) + Δ_n e^-t Δ_n u(y) ).
We rewrite the term Δ_n e^-t Δ_n u(x) as
1/^2D(x) e^-t Δ_n u(x) - 1/n^m+2∑_z ∈ w_(x,z) e^-t Δ_n u(z) = α_/^2e^-t Δ_n u (x) -α_/^2𝒜 e^- t Δ_nu(x)
+ 1/^2(D(x) - α_) e^-t Δ_n u(x)
+ 1/n^m+2( α_ - D(x)/D(x)) ∑_ z ∈w_(x,z) e^-t Δ_n u(z).
We plug this expression (and the one corresponding to Δ_n e^-t Δ_n u(y)) in (<ref>) to conclude that
d/dt( e^-t Δ_n u(x) - e^-t Δ_n u(y) )^2/2 d_G(x,y)^2 ≤ - α_(e^-t Δ_n u(x) - e^-t Δ_n u(y))^2 /^2 d_G(x,y)^2 + α_/^2Lip_G( e^-t Δ_n u ) Lip_G(𝒜 e^-t Δ_n u)
+ 4/^2 δ_0 ψ(0)Lip_G(e^-t Δ_n u) ‖ D -α_‖_L^∞()‖ e^-t Δ_n u ‖_L^∞()
≤ -α_ (Lip_G(e^-t Δ_n u ))^2 /^2+ α_/^2( 1- ^2 K_G) (Lip_G(e^-t Δ_n u ))^2
+ 4/^2 δ_0 ψ(0)Lip_G(e^-t Δ_n u) ‖ D -α_‖_L^∞()‖ e^-t Δ_n u ‖_L^∞(),
where in the second inequality we have used Corollary <ref>. By assumption we have 1/n ∑_z ∈ e^-t Δ_n u(z) = 1/n ∑_z ∈ u(z) =0 and thus it follows
| e^-t Δ_nu (z')| = |1/n∑_z ∈ ( e^-t Δ_nu (z') - e^-t Δ_nu (z) )| ≤diam(G) ·Lip_G(e^-t Δ_n u ),
for every z'∈. This allows us to bound ‖ e^-t Δ_n u ‖_L^∞()≤diam(G) ·Lip_G(e^-t Δ_n u ). Hence
d/dt( e^-t Δ_n u(x) - e^-t Δ_n u(y) )^2/ d_G(x,y)^2 ≤ - 2( α_ K_G - 4‖ D - α_‖_L^∞()diam(G)/^2 δ_0 ψ(0)) (Lip_G(e^-t Δ_n u ))^2.
Since (x,y) was an arbitrary pair realizing Lip_G(e^-t Δ_n u) we conclude that
d/dt (Lip_G(e^-t Δ_n u ))^2 ≤ - 2( α_ K_G - 4‖ D -α_‖_L^∞()diam(G) /^2 δ_0 ψ(0)) (Lip_G(e^-t Δ_n u ))^2.
Gronwall's inequality implies that
(Lip_G(e^-t Δ_n u ))^2 ≤exp( -2(α_ K_G - 4‖ D -α_‖_L^∞()diam(G) /^2 δ_0 ψ(0)) t ) (Lip_G(u ))^2.
Taking square roots on both sides we obtain the desired result.
In order for the exponent on the right hand side of (<ref>) to be negative, we certainly need K_G to be strictly positive, which we can guarantee when is a manifold with Ricci curvature bounded from below by a positive quantity and the assumptions of Theorem <ref> are satisfied. We also need to make sure that the quantity ‖ D -α_‖_L^∞()/^3 is sufficiently small, which is implied by the assumptions in Theorem <ref> and the bound (<ref>). The bottom line is that, when is sampled from a manifold with positive Ricci curvature, then, under the assumptions in Theorem <ref>, for all large enough n the Lipschitz seminorm Lip_G contracts along the heat flow associated to the unnormalized Laplacian for the graph G=( , w_, ).
We emphasize that Δ_n in Theorem <ref> is the unnormalized Laplacian of G=(, w_,), which we recall depends on the geodesic distance over . While our curvature lower bound results do not allow us to say anything about Laplacians for RGGs with the Euclidean distance, one can certainly deduce adaptations of Theorem <ref> for proximity graphs built from slight modifications of the Euclidean distance. In particular, it is clear that a similar statement can be derived, mutatis mutandis, for the graph G=(, w_, ) endowed with distance d_G,.
To contrast the content of Theorem <ref> with other contractivity results known in the literature,
let λ_G be the first nontrivial eigenvalue of Δ_n. Using the spectral theorem one can easily show that for all u ∈ L^2()
‖ e^-t Δ_n u - u‖_L^2()^2 ≤ e^- t λ_G‖ u - u‖_L^2()^2,
where u = 1/n∑_z ∈ u(z). Spectral consistency results for Δ_n like the ones in <cit.> guarantee that λ_G does not deteriorate as the graph G is scaled up. Naturally, from these L^2 contraction estimates one cannot derive Lipschitz contraction as in Theorem <ref>, and our results in this paper thus provide new results in the literature on graph Laplacians on data clouds. It is worth highlighting, however, that for λ_G>0 to remain bounded away from zero, one does not require 's Ricci curvature to be positive.
In the literature on graph-based learning it is not unusual to replace a graph Laplacian with a version of it that is obtained by truncating its spectral decomposition, which in particular requires the use of an eigensolver. We emphasize that Theorem <ref> and its Corollary <ref> below are structural properties that hold for the full Laplacian Δ_n and not for a truncation thereof.
An immediate consequence of Theorem <ref> is the following.
Under the same assumptions as in Theorem <ref> we have
‖ e^-t Δ_n u - u‖_L^∞()≤exp( -(K_G - 4‖ D - α_‖_L^∞()diam(G) /c_0 ψ(0)^3) t ) diam(G) Lip_G(u),
where u:= 1/n∑_x ∈ u(x).
Notice that for any function v : → and any x ∈ we have
|v(x) - v| = | 1/n∑_x̃∈ ( v(x) - v(x̃)) | ≤diam(G) Lip_G(v) ,
from where it follows that
‖ v - v‖_L^∞()≤diam(G) Lip_G(v).
The result now follows from Theorem <ref>.
§.§ Manifold Learning
We briefly comment on another class of estimation problems on point clouds and graphs where curvature lower bounds may be utilized.
Recognizing and characterizing geometric structure in data is a cornerstone of Representation Learning. A common assumption is that the data lies on or near a low-dimensional manifold ⊆^d (manifold hypothesis). Suppose we are given a point cloud ⊆^d in a high-dimensional Euclidean space, i.e., our data was sampled from an embedded manifold and we have access to pairwise Euclidean distances between data points. What can we learn about the dimension and curvature of given only pairwise Euclidean distances in ? A rich body of literature has considered this question from different angles. Several algorithms exist for inferring the intrinsic dimension of (i.e., ()). However, such algorithms do not allow inference on intrinsic geometric quantities of such as a global curvature bound. There are several approaches for approximating extrinsic curvature, some of which were reviewed in earlier sections of this paper. However, none of these techniques allow for learning the intrinsic curvature of the manifold. The consistency results developed in this paper allow for such inference, even in the case where one has only access to data-driven estimates of pairwise geodesic distances, as is usually the case in practice.
Manifold learning aims to identify a putative manifold ⊆^d whose geometry agrees with the low-dimensional structure in . That is, one learns a point configuration ϕ(), which is the output of an implicit map ϕ: →, that approximately preserves the pairwise distances (d_(x,y) ≈ d_(ϕ(x),ϕ(y)) for all x,y ∈). A large number of algorithms have been proposed for this task, including Isomap <cit.>, Laplacian Eigenmaps <cit.> and Locally Linear Embeddings <cit.>. While these algorithms have gained popularity in practice, it is often challenging to certify that the geometry of the putative manifold aligns with that of the true manifold . To the best of our knowledge, the strongest guarantees are available for Isomap, which is known to recover the intrinsic dimension, as well as, asymptotically, the pairwise distances, in the large-sample limit <cit.>. However, none of these manifold learning algorithms are guaranteed to recover global curvature bounds. The consistency of global curvature bounds (Theorems <ref>) provides an effective, unsupervised means for testing whether has a curvature lower bound by computing Ollivier's Ricci curvature on a geometric graph constructed from . The resulting tool, complementary to standard manifold learning techniques, could allow for learning a more comprehensive geometric characterization of a given point cloud. Curvature lower bounds may also serve as inductive biases in manifold learning approaches. The choice of manifold learning technique often requires prior knowledge on the type of manifold that is to be learnt, e.g., if the data was sampled from a linear subspace, a linear method, such as Principal Component Analysis, is suitable. On the other hand, if the data is sampled from a nonlinear subspace, such as an embedded submanifold, a nonlinear approach, such as Isomap, is expected to perform better.
§ CONCLUSIONS
In this paper, we have investigated continuum limits of Ollivier's Ricci curvature on random geometric graphs in the sense of local pointwise consistency and in the sense of global lower bounds. Specifically, we consider a data cloud sampled uniformly from a low-dimensional submanifold of ℝ^d. We construct a proximity graph G of the sample that allows us to give non-asymptotic error bounds for the approximation of the manifold's curvature from data. Moreover, we show that if the manifold has curvature bounded below by a positive constant, then so does G with high probability. To the best of our knowledge, our local consistency result presents the first non-asymptotic guarantees of this kind. In addition, we believe that our work provides the first consistency results for global curvature bounds. We complement our theoretical investigation of continuum limits with a discussion of potential applications to manifold learning.
We conclude with a brief discussion of avenues for future investigation. A limitation of the present work is the assumption that the data are a uniform sample. Future work may investigate whether it is possible to adapt these results to other data distributions. Furthermore, we have assumed that the sample is noise-free; it would be interesting to analyze the noisy case with different noise models. In addition, one setting investigated in this work implicitly assumes access to a sufficiently good data-driven estimator for the geodesic distance. While we have suggested some directions for constructing such an estimator, we believe that this question is of interest in its own right and deserves more attention. We would also like to highlight that the “shrinking" factor s_K that appears in our main Theorems <ref> and <ref> should be removable with a much more detailed analysis.
Further applications of the global curvature lower bounds may arise in the study of Langevin dynamics on manifolds, specifically when utilizing graph-based constructions to define suitable discretizations of the infinitesimal generators of the stochastic dynamics of interest.
§ ACKNOWLEDGEMENTS
The authors would like to thank Prasad Tetali for enlightening discussions and for providing relevant references. This work was started while the authors were visiting the Simons Institute to participate in the program “Geometric Methods in Optimization and Sampling" during the Fall of 2021. The authors would like to thank the organizers of this program and the Simons Institute for support and hospitality. During the visit, MW was supported by a Simons-Berkeley Research Fellowship. NGT was supported by NSF-DMS grant 2005797 and would also like to thank the IFDS at UW-Madison and NSF through TRIPODS grant 2023239 for their support.
|
http://arxiv.org/abs/2307.02039v2
|
20230705054804
|
ACA CO($J=2-1$) Mapping of the Nearest Spiral Galaxy M33. I. Initial Results and Identification of Molecular Clouds
|
[
"Kazuyuki Muraoka",
"Ayu Konishi",
"Kazuki Tokuda",
"Hiroshi Kondo",
"Rie E. Miura",
"Tomoka Tosaki",
"Sachiko Onodera",
"Nario Kuno",
"Masato I. N. Kobayashi",
"Kisetsu Tsuge",
"Hidetoshi Sano",
"Naoya Kitano",
"Shinji Fujita",
"Atsushi Nishimura",
"Toshikazu Onishi",
"Kazuya Saigo",
"Rin I. Yamada",
"Fumika Demachi",
"Kengo Tachihara",
"Yasuo Fukui",
"Akiko Kawamura"
] |
astro-ph.GA
|
[
"astro-ph.GA"
] |
Kazuyuki Muraoka (ORCID: 0000-0002-3373-6538)
Department of Physics, Graduate School of Science, Osaka Metropolitan University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531, Japan
Ayu Konishi (ORCID: 0000-0002-4098-8100)
Department of Physics, Graduate School of Science, Osaka Metropolitan University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531, Japan
Kazuki Tokuda (ORCID: 0000-0002-2062-1600)
Department of Earth and Planetary Sciences, Faculty of Science, Kyushu University, Nishi-ku, Fukuoka 819-0395, Japan
National Astronomical Observatory of Japan, National Institutes of Natural Science, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Department of Physics, Graduate School of Science, Osaka Metropolitan University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531, Japan
Hiroshi Kondo (ORCID: 0000-0002-3499-9460)
Department of Physics, Graduate School of Science, Osaka Metropolitan University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531, Japan
Rie E. Miura (ORCID: 0000-0001-8187-7856)
Departamento de Fisica Teorica y del Cosmos, Campus de Fuentenueva, Universidad de Granada, E18071-Granada, Spain
National Astronomical Observatory of Japan, National Institutes of Natural Science, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Tomoka Tosaki (ORCID: 0000-0001-9016-2641)
Joetsu University of Education, Yamayashiki-machi, Joetsu, Niigata 943-8512, Japan
Meisei University, 2-1-1 Hodokubo, Hino, Tokyo 191-0042, Japan
Nario Kuno (ORCID: 0000-0002-1234-8229)
Division of Physics, Faculty of Pure and Applied Sciences, University of Tsukuba, 1-1-1 Tennodai, Tsukuba, Ibaraki 305-8577, Japan
Tomonaga Center for the History of the Universe, University of Tsukuba, Tsukuba, Ibaraki 305-8571, Japan
Masato I. N. Kobayashi (ORCID: 0000-0003-3990-1204)
National Astronomical Observatory of Japan, National Institutes of Natural Science, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
I. Physikalisches Institut, Universität zu Köln, Zülpicher Str 77, D-50937 Köln, Germany
Kisetsu Tsuge (ORCID: 0000-0002-2794-4840)
Department of Physics, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan
Hidetoshi Sano (ORCID: 0000-0003-2062-5692)
Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu 501-1193, Japan
Department of Physics, Graduate School of Science, Osaka Metropolitan University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531, Japan
Shinji Fujita (ORCID: 0000-0002-6375-7065)
Institute of Astronomy, The University of Tokyo, 2-21-1, Osawa, Mitaka, Tokyo 181-0015, Japan
Department of Physics, Graduate School of Science, Osaka Metropolitan University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531, Japan
Atsushi Nishimura (ORCID: 0000-0003-0732-2937)
National Astronomical Observatory of Japan, National Institutes of Natural Science, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Toshikazu Onishi (ORCID: 0000-0001-7826-3837)
Department of Physics, Graduate School of Science, Osaka Metropolitan University, 1-1 Gakuen-cho, Naka-ku, Sakai, Osaka 599-8531, Japan
Kazuya Saigo (ORCID: 0000-0003-1549-6435)
Graduate School of Science and Engineering, Kagoshima University, 1-21-40 Korimoto Kagoshima-city Kagoshima, 890-0065, Japan
Rin I. Yamada (ORCID: 0000-0002-1865-4729)
Department of Physics, Nagoya University, Chikusa-ku, Nagoya 464-8602, Japan
Department of Physics, Nagoya University, Chikusa-ku, Nagoya 464-8602, Japan
Kengo Tachihara (ORCID: 0000-0002-1411-5410)
Department of Physics, Nagoya University, Chikusa-ku, Nagoya 464-8602, Japan
Yasuo Fukui (ORCID: 0000-0002-8966-9856)
Department of Physics, Nagoya University, Chikusa-ku, Nagoya 464-8602, Japan
Institute for Advanced Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
Akiko Kawamura (ORCID: 0000-0001-7813-0380)
National Astronomical Observatory of Japan, National Institutes of Natural Science, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
We present the results of ALMA-ACA 7 m-array observations in ^12CO(J=2-1), ^13CO(J=2-1), and C^18O(J=2-1) line emission
toward the molecular-gas disk in the Local Group spiral galaxy M33 at an angular resolution of 7.31″ × 6.50″ (30 pc × 26 pc).
We combined the ACA 7 m-array ^12CO(J=2-1) data with the IRAM 30 m data to compensate for emission from diffuse molecular-gas components.
The ACA+IRAM combined ^12CO(J=2-1) map clearly depicts the cloud-scale molecular-gas structure over the M33 disk.
Based on the ACA+IRAM ^12CO(J=2-1) cube data, we cataloged 848 molecular clouds with a mass range from 10^3 M_⊙ to 10^6 M_⊙.
We found that high-mass clouds (≥ 10^5 M_⊙) tend to associate with the 8 μm-bright sources in the spiral arm region,
while low-mass clouds (< 10^5 M_⊙) tend to be apart from such 8 μm-bright sources and to exist in the inter-arm region.
We compared the cataloged clouds with GMCs observed by the IRAM 30 m telescope at 49 pc resolution <cit.>,
and found that a small IRAM GMC is likely to be identified as a single molecular cloud even in ACA+IRAM CO data, while a large IRAM GMC can be resolved into multiple ACA+IRAM clouds.
The velocity dispersion of a large IRAM GMC is mainly dominated by the line-of-sight velocity difference between small clouds inside the GMC rather than the internal cloud velocity broadening.
§ INTRODUCTION
The interstellar medium (ISM) is one of the crucial components in galaxies because stars are formed by the contraction of molecular ISM.
In the Milky Way (MW), a large fraction of the molecular ISM is in the form of giant molecular clouds (GMCs) <cit.>, whose typical sizes and masses are a few × 10 - 100 pc and 10^4 - 10^6 M_⊙, respectively.
It is essential to investigate the properties and formation/evolution processes of GMCs because they are known to be major sites of high-mass star formation, which eventually drives the evolution of galaxies.
So far, many studies have investigated various GMC properties and their relationships.
In the MW, <cit.> found that the internal velocity dispersions of the molecular clouds are well correlated with their sizes and masses, and also reported that these correlations (i.e., scaling relations) can be expressed as the power-law form.
<cit.> measured the velocity dispersions, sizes, virial masses, and CO luminosities for 273 GMCs in the Galactic disk, and found that the velocity dispersion is proportional to the 0.5 power of the size.
They also found a tight relationship, over four orders of magnitude, between the virial mass and the CO luminosity with a power-law slope of ∼0.8.
Such GMC studies were expanded to the Local Group galaxies outside the MW.
<cit.> performed a CO survey toward the Large Magellanic Cloud (LMC) at a spatial resolution of ∼40 pc, and identified 272 GMCs with a mass range from 2 × 10^4 M_⊙ to 7 × 10^6 M_⊙ <cit.>.
In addition, <cit.> examined spatial comparisons of these GMCs with young star clusters (YSCs) and H ii regions and found that the GMCs can be classified into three types:
(1) GMCs associated with no H ii regions nor YSCs, (2) GMCs associated only with small H ii regions, but with no YSCs, and (3) GMCs associated with both YSCs and large H ii regions.
Such a classification of GMCs according to the activities of high-mass star formation likely reflects their evolutionary sequence.
In addition, GMC surveys have been often conducted toward M33, which is one of the nearest spiral galaxies <cit.>.
These studies identified more than 100 GMCs <cit.> over the M33 disk at ∼50 pc resolution, and discussed timescales and the evolutionary stages of GMCs based on the comparison with H ii regions and YSCs as well as the LMC studies.
High-angular-resolution observations with millimeter-wave interferometers have enabled unbiased GMC surveys even toward external spiral galaxies.
<cit.> reported the GMC catalog, which contains ∼1500 individual objects in the grand-design spiral galaxy M51 at ∼40 pc resolution using data from the PdBI Arcsecond Whirlpool Survey <cit.>.
They proposed that large-scale dynamical processes and feedback from high-mass star formation cause environmental variations in the GMC properties and mass distributions, and also suggested that ∼30% of GMCs in M51 are unbound.
More recently, the PHANGS-ALMA survey mapped CO(J=2-1) line emission at ∼1″ resolution toward 90 nearby star-forming galaxies <cit.>.
In particular, <cit.> identified 4986 molecular clouds at a common 90 pc resolution and measured their properties for ten subsamples.
They found that the physical properties of clouds vary among galaxies, both as a function of galactocentric radius and as a function of the dynamical environment (e.g., bar, spiral arm, and inter-arm).
However, these earlier studies for external spiral galaxies are likely biased toward the massive (≥ 10^5 M_⊙) population of molecular clouds
except for the case of M33 <cit.>.
To understand the complex hierarchical structures of molecular gas and also to understand the evolution of molecular clouds in galaxies,
smaller and less massive (< 10^5 M_⊙) molecular clouds should be investigated <cit.>.
Thus, we need further molecular-cloud surveys covering such less massive clouds in nearby galaxies as a complementary study to PHANGS-ALMA survey.
In this paper, we present the results of a new CO(J=2-1) survey toward almost the whole molecular-gas disk of M33 conducted with the Atacama Compact Array (ACA) stand-alone mode of ALMA.
The distance to M33 is estimated to be 840 kpc <cit.>; thus, 1″ corresponds to 4 pc.
The inclination of M33 is 55^∘ <cit.>.
Its proximity and relatively small inclination angle have enabled many researchers to study the ISM and high-mass star formation over the wide area of the M33 disk at a few × 10 pc scale
<cit.>.
In addition, recent studies based on ALMA 12-m array observations revealed complicated internal molecular-gas structures within some especially massive (∼ 10^6 M_⊙) GMCs of M33 at 1 – 2 pc scale <cit.>.
Thus, M33 is a unique target to investigate the hierarchical structure of molecular gas in face-on spiral galaxies from parsec to kiloparsec scales.
The basic properties of M33 are summarized in Table <ref>.
The main purposes of the new ACA observations are to obtain the spatial distribution of CO(J=2-1) emission with higher sensitivity and higher angular resolution than earlier studies of M33
and to identify low-mass (< 10^5 M_⊙) clouds as well as high-mass (≥ 10^5 M_⊙) clouds.
This is an important step toward understanding the hierarchical structures of molecular gas and the evolution of molecular clouds in galaxies.
The structure of this paper is as follows.
In Section <ref>, we describe the detail of the ACA observations and data reduction.
Then, we present the overall molecular-gas structures in CO(J=2-1) emission at ∼30 pc resolution in M33 in Section <ref>.
In Section <ref>, we describe the procedure of cloud decomposition based on ^12CO(J=2-1) cube data, and summarize the basic properties of cataloged molecular clouds.
In Section <ref>, we examine the scaling relations for the molecular clouds in M33.
We compare the cataloged molecular clouds in this study with the earlier GMC catalog in M33 summarized by <cit.> in Section <ref>.
Finally, we discuss the relationship between the properties of molecular clouds and the high-mass star formation in M33 in Section <ref>.
Table 1. General properties of M33

  Parameter             Value              Reference
  IR center (J2000)                        (1)
    Right Ascension     1h33m50s.9
    Declination         +30°39′37″
  Distance              840 kpc            (2), (3)
  LSR velocity          -170 km s^-1       (4)
  Inclination           55°                (5)
  Position angle        21°                (5)
  Stellar mass          4.8 × 10^9 M_⊙     (6)
  Molecular gas mass    3.1 × 10^8 M_⊙     (4)

References. (1) <cit.>, (2) <cit.>, (3) <cit.>, (4) <cit.>, (5) <cit.>, (6) <cit.>
§ OBSERVATIONS AND DATA REDUCTION
Observations toward M33 were carried out in Band 6 (211 – 275 GHz) with the ACA 7 m antennas between 2019 August and 2021 August (project code 2018.A.00058.S).
The target molecular lines were ^12CO(J=2-1), ^13CO(J=2-1), and C^18O(J=2-1).
The bandwidths of the correlator settings were 117.19 MHz with 1920 channels for the ^12CO line and 960 channels for ^13CO and C^18O lines.
The target field was a rectangle with a size of 1100″ × 1180″ (4.5 kpc × 4.8 kpc), covering most of the molecular-gas disk of M33.
The total number of mosaic fields is 3129.
In addition to this, we retrieved the ALMA archival data (project codes 2017.1.00901.S and 2019.1.01182.S), which also observed the molecular-gas disk of M33 with ACA 7 m antennas and almost the same spectral settings as our observations.
Prior to the imaging process, we concatenated all the visibilities obtained in each of the 36 science goals.
This data reduction strategy is the same as the previously published large-scale ACA mapping project on the Small Magellanic Cloud (SMC) <cit.>.
Figure <ref> shows the eventually observed field.
We used Common Astronomy Software Application (CASA) package <cit.> version 5.4.0 in the data reduction.
We applied the standard calibration scheme provided by the ALMA observatory while we performed the imaging process.
We used the tclean task with the multiscale deconvolver <cit.> to recover extended emission as much as possible.
In tclean, we applied natural weighting and used the auto-multithresh procedure to automatically identify regions containing emission in the dirty and residual images.
We continued the deconvolution process until the intensity of the residual image reached the ∼1 σ noise level.
The beam size and the rms noise level for each emission are summarized in Table <ref>.
To evaluate the missing flux of the ACA observations, we measured the global ^12CO(J=2-1) luminosities over the M33 disk obtained by the ACA 7 m antennas and by the IRAM 30 m telescope <cit.>.
We found the global ^12CO(J=2-1) luminosity with ACA 7 m antennas L_ CO^ ACA = 7.6 × 10^6 K km s^-1 pc^2 over the observed region,
and that with the IRAM 30 m telescope L_ CO^ IRAM = 2.1 × 10^7 K km s^-1 pc^2 for the same area.
This indicates a global missing flux of 60-70% in the ^12CO(J=2-1) emission, which mainly corresponds to diffuse components of molecular gas.
To compensate for such diffuse components, we combined the ACA 7 m-array ^12CO(J=2-1) data with the IRAM 30 m data using the feather task.
Hereafter, we refer to the pre-combined ACA 7 m-array ^12CO(J=2-1) data as “stand-alone ACA ^12CO(J=2-1)” data, and to the combined ^12CO(J=2-1) data as “ACA+IRAM ^12CO(J=2-1)” data.
The beam size and the rms noise level of ACA+IRAM ^12CO(J=2-1) data are the same as those of the stand-alone ACA ^12CO(J=2-1) data.
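For concreteness, the imaging and combination steps described above can be sketched with the standard CASA tasks; the measurement-set names, CLEAN scales, threshold, and other numerical parameters below are illustrative assumptions, not the values actually used for the published cubes.

```python
from casatasks import tclean, feather  # CASA 6 task interface; built-in in CASA 5.x

# Multiscale CLEAN of the concatenated ACA 7 m visibilities
# (file names and numerical parameters here are placeholders).
tclean(vis="m33_aca7m_concat.ms", imagename="m33_aca_co21",
       specmode="cube", restfreq="230.538GHz", width="0.7km/s",
       gridder="mosaic", deconvolver="multiscale", scales=[0, 5, 15],
       weighting="natural", usemask="auto-multithresh",
       niter=1000000, threshold="0.12Jy", pbcor=True)

# Feather the interferometric cube with the IRAM 30 m single-dish cube
# to restore the extended emission missed by the 7 m array.
feather(imagename="m33_aca_iram_co21.image",
        highres="m33_aca_co21.image",
        lowres="m33_iram30m_co21.image")
```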
Table 2. Properties of each line emission

  Line            Beam Size                        Rms Noise Level    Velocity Resolution
  ^12CO(J=2-1)    7.31″ × 6.50″ (30 pc × 26 pc)    39 mK              0.7 km s^-1
  ^13CO(J=2-1)    7.72″ × 6.86″ (31 pc × 27 pc)    30 mK              1.4 km s^-1
  C^18O(J=2-1)    7.82″ × 6.96″ (31 pc × 28 pc)    22 mK              1.6 km s^-1
§ CO MAPS
From the reduced three-dimensional cube data, we examine the zeroth moment (i.e., velocity-integrated intensity) in ^12CO(J=2-1) and ^13CO(J=2-1) emission.
To minimize the effect of the noise, we determined the velocity channels in which the CO emission is expected to appear using the atomic gas (H i) data <cit.> as follows.
Firstly, we convolved the H i data, whose original angular resolution is 20″, to 40″ in order to reduce the effect of anomalous H i velocity components.
Then, we regridded them to match our CO data and determined the representative H i velocity V_ rep in each pixel.
Finally, we calculated the zeroth moment in ^12CO(J=2-1) emission from V_ rep - 30 km s^-1 to V_ rep + 30 km s^-1.
Although <cit.> reported that 90% of the velocity separation between CO and H i is within 20 km s^-1, each CO line typically has a velocity width of 5 – 10 km s^-1.
In fact, we found that some molecular clouds lost ∼30% of their CO flux if we applied the velocity range of V_ rep ± 20 km s^-1.
To correctly measure the CO intensity in M33, we needed the velocity range of V_ rep ± 30 km s^-1 for the calculation of the ^12CO(J=2-1) zeroth moment.
We also calculated the ^13CO(J=2-1) zeroth moment from V_ rep - 30 km s^-1 to V_ rep + 30 km s^-1.
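In pseudo-code, the masked zeroth-moment computation reduces to integrating each spectrum over a ±30 km s^-1 window centered on the local H i velocity; a minimal numpy sketch (array and function names are ours) is:

```python
import numpy as np

def moment0_hi_window(cube, vaxis, v_rep, half_width=30.0):
    """Zeroth moment within v_rep +/- half_width at every pixel.

    cube  : (nv, ny, nx) brightness temperature cube [K]
    vaxis : (nv,) channel velocities [km/s]
    v_rep : (ny, nx) representative HI velocity per pixel [km/s]
    """
    dv = abs(vaxis[1] - vaxis[0])  # channel width [km/s]
    # boolean window: True where |v - v_rep| <= half_width
    window = np.abs(vaxis[:, None, None] - v_rep[None, :, :]) <= half_width
    return np.where(window, cube, 0.0).sum(axis=0) * dv  # [K km/s]
```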
Figure <ref> shows the integrated intensity maps in ^12CO(J=2-1) from the stand-alone ACA data and the ACA+IRAM data, respectively.
These ^12CO(J=2-1) maps clearly depict the molecular-gas structure within M33 at 30 pc resolution.
We can easily find a lot of individual molecular clouds over the M33 disk.
The ACA+IRAM ^12CO(J=2-1) map properly recovers diffuse components of molecular gas, which are missed in the stand-alone ACA map.
We show an illustrative case, the ^12CO(J=2-1) integrated intensity map of the GMCs associated with the giant H ii region NGC 604, in Figure <ref>.
The integrated intensity map in ^13CO(J=2-1) emission over the M33 disk is shown in the left panel of Figure <ref>.
Many ^13CO(J=2-1) sources are detected;
they correspond to moderately dense gas, with densities ≳10^3 cm^-3, within the ^12CO clouds.
The zoomed-in view of the NGC 604 region is shown in the right panel of Figure <ref>.
Note that we found no significant C^18O(J=2-1) emission in the ACA map.
The rms noise level of 22 mK yields a 3 σ upper limit of 66 mK.
To check the validity of the upper limit, we retrieved the ALMA archival data (project code 2017.1.00461.S) and examined the C^18O(J=2-1) emission in a GMC associated with NGC 604.
We found that the peak temperature of the strongest C^18O(J=2-1) emission is ∼1 K at an angular resolution of 0.3″ (1.2 pc) and that its spatial extent is less than 1″.
Then, we convolved the C^18O(J=2-1) emission to 7.5″ and found that the peak temperature decreases down to ∼30 mK, which corresponds to 1.4 σ in the ACA C^18O(J=2-1) map.
Thus, we consider that the beam smearing effect makes C^18O(J=2-1) emission undetectable in the ACA 30 pc-resolution map.
§ CLOUD DECOMPOSITION
As shown in Figure <ref>, the structure of molecular clouds is highly complex and hierarchical over the M33 disk.
To identify individual emission structures in an objective way and to investigate the properties of molecular clouds,
we employed pycprops <cit.>, a Python implementation of the CPROPS algorithm for cataloging molecular clouds <cit.>.
We use the ACA+IRAM ^12CO(J=2-1) data in the following analyses.
Firstly, we convolved the ACA+IRAM ^12CO(J=2-1) cube data to 7.5″ in order to identify molecular clouds with a circular beam.
Then, we made masked cube data using the masking routine provided by <cit.>.
The criteria of the emission mask are that high-significance emission is required to exceed 4 σ in three continuous velocity channels, and that low-significance emission
adjacent to high-significance emission is required to exceed 3 σ in three continuous velocity channels at least over the size of the ACA 7.5″ beam.
Although the default settings of pycprops are 4 σ and 2 σ for the high and low significance masks, respectively,
we found that the emission masks with these default settings are not suitable for the ACA+IRAM ^12CO(J=2-1) cube data.
In particular, the low significance mask does not reject fake emission (i.e., noise) at the cloud edge.
Thus, we carefully tuned the rms thresholds and finally adopted 3 σ for the low significance mask.
pycprops first searches for all local maxima in the emission-masked cube data and measures the peak temperature of each local maximum, T_ max.
When a local maximum has at least one other neighbor whose peak temperature is T_ merge, pycprops compares T_ max and T_ merge.
The neighbor is rejected if T_ max - T_ merge is less than 2 σ, which means that such a local maximum is likely a noise fluctuation.
The criterion of 2 σ is a default value recommended by <cit.>.
Then, pycprops determines whether the spatial and spectral separations between local maxima are adequate.
We adopted the beam size (7.5″) as a minimum spatial separation and also adopted 7 km s^-1 as a minimum spectral separation,
which corresponds to a typical velocity width of a GMC with the size of ∼30 pc considering the Galactic size-linewidth relation <cit.>.
If either spatial separation or spectral separation between local maxima does not satisfy the above threshold, the local maximum which has a smaller T_ max is rejected.
Through these processes, pycprops identifies a set of significant local maxima.
We treat these local maxima as seeds to assign all the emission to molecular clouds.
To do this, we use a watershed algorithm, which associates all the cube pixels in the emission-masked data with a local maximum.
Some pixels are already assigned to a single local maximum, while the remainder (including rejected local maxima in the above processes) are assigned to any of the local maxima by the watershed algorithm.
More details on the algorithm are summarized in <cit.>.
Finally, pycprops identified 886 molecular clouds.
pycprops gives the basic properties of the identified clouds, including the extrapolated 2nd moment of the emission along the major and minor axes σ_ maj and σ_ min in parsec,
the position angle of the major axis ϕ, the extrapolated velocity dispersion σ_v, ext, and the integrated ^12CO(J=2-1) flux S within each cloud.
The extrapolated cloud properties are calculated to reduce observational bias <cit.>.
In Figure <ref>, we showed frequency distributions of the ratio between the extrapolated value and the observed value for cloud size and velocity dispersion.
The extrapolated value is typically 10 – 20% larger than the observed value.
We calculated the intrinsic spherical radius R by the deconvolution of the ACA+IRAM beam, σ_ beam, as follows:
R = 1.91 √((σ_ maj^2 - σ_ beam^2)^0.5 (σ_ min^2 - σ_ beam^2)^0.5).
Here, σ_ beam is calculated as 7.5″ × 4 pc arcsec^-1 = 30 pc, where the factor of 4 is the spatial scale in parsec of 1″ at the distance of M33 (840 kpc).
The coefficient 1.91 converts the rms size to the effective spherical radius of the cloud <cit.>.
We treat R as the intrinsic spherical radius of each molecular cloud.
Note that if σ_ min is smaller than σ_ beam, the resultant R is not properly defined.
We do not consider such small clouds further in this paper.
We also deconvolved the velocity dispersion σ_v, ext as follows:
σ_v = √(σ_v, ext^2 - σ_v, chan^2/(2π)),
where σ_v, chan is the velocity resolution element, which is related to the velocity channel width (Δ V_ chan = 0.7 km s^-1) as σ_v, chan = Δ V_ chan/(2 √(2 ln 2)).
We do not consider molecular clouds lying at the edge of the ACA field-of-view (FOV) further because the primary beam correction causes larger uncertainties in the obtained properties of molecular clouds.
Thus, we consider 848 molecular clouds after excluding small clouds with undefined radii and clouds at the edge of the ACA FOV from the clouds originally identified by pycprops.
Figure <ref> shows the spatial distribution of 848 molecular clouds, whose deconvolved sizes and the measured position angle are represented, in the M33 disk.
§.§ Luminosities and Masses
From the basic properties of the molecular clouds, we calculated additional properties such as the CO luminosity L_ CO = S D^2, where D = 840 kpc, and the cloud masses.
The luminosity-based mass (hereafter M_ CO), which includes the helium contribution, is calculated as
M_ CO/M_⊙ = 4.35 ( X_ CO / (2.0 × 10^20 cm^-2 (K km s^-1)^-1) ) ( L_ CO / (K km s^-1 pc^2) ) R_21^-1,
where X_ CO is CO-to-H_2 conversion factor and R_21 is the ^12CO(J=2-1)/^12CO(J=1-0) intensity ratio.
We adopted a constant X_ CO of 4.0 × 10^20 cm^-2 (K km s^-1)^-1 <cit.> over the M33 disk.
To determine the appropriate R_21 value in this study, we examined the pre-existing single-dish measurements of ^12CO in M33 for J=1-0 <cit.> and J=2-1 <cit.> transitions.
We found that the average R_21 in M33 is 0.60, but this value is lower than the previously-reported R_21 of 0.8 <cit.>.
In the MW, the reported R_21 is 0.64 <cit.>.
In addition, recent studies reported that the mean of R_21 in nearby galaxies is 0.6 – 0.7 <cit.>.
These R_21 values are consistent with the newly-obtained one in M33, 0.60.
Thus, we adopted a constant R_21 of 0.60 across the M33 disk in this study.
Note that, as pointed out by <cit.>, R_21 varies within an individual galaxy; in fact, R_21 in M33 varies from position to position, typically ranging from 0.4 to 0.8.
Therefore, we consider that the assumption of a constant R_21 over the M33 disk yields an error of about 30%.
From the emission masks and the adopted pycprops parameters, the detection limit of M_ CO is calculated to be 3 × 10^3 M_⊙,
while the actual lowest mass of the molecular clouds is 7 × 10^3 M_⊙.
In Figure <ref>, each molecular cloud is color-coded according to its M_ CO,
i.e., red ellipses represent high-mass clouds (M_ CO≥ 10^5 M_⊙) and blue ellipses indicate low-mass clouds (M_ CO < 10^5 M_⊙).
In addition, the spatial comparison between molecular clouds and Spitzer/IRAC 8 μm emission <cit.> is displayed.
In the spiral arm region, many high-mass clouds are associated with the strong (typically > 2 MJy sr^-1) 8 μm emission, which likely traces high-mass star-forming regions <cit.>.
On the other hand, low-mass clouds tend to be apart from such 8 μm-bright sources and to exist in the inter-arm region.
We examine the mass fraction of the molecular clouds to the total molecular gas over the ACA-observed area.
The total mass of the molecular clouds is derived to be 1.6 × 10^8 M_⊙ by summing up their M_ CO values.
We calculated the global ACA+IRAM ^12CO(J=2-1) luminosity of 2.0 × 10^7 K km s^-1 pc^2, which yields the total molecular gas mass of 2.9 × 10^8 M_⊙[The ^12CO(J=2-1) data obtained by the IRAM 30 m telescope <cit.> gives a luminosity of 2.1 × 10^7 K km s^-1 pc^2 within the ACA FOV (see Section <ref>). If we assume X_ CO of 4.0 × 10^20 cm^-2 (K km s^-1)^-1 and R_21 of 0.6, this luminosity yields a total molecular gas mass of 3.0 × 10^8 M_⊙, which is well consistent with the molecular gas mass of 2.9 × 10^8 M_⊙ derived from the ACA+IRAM ^12CO(J=2-1) data.].
Thus, the mass fraction of molecular clouds to the total molecular gas is 55%.
This is similar to the case in M51; <cit.> reported that about half of the CO luminosity arises from molecular clouds and the other half from diffuse components of molecular gas.
We also calculated the virial mass as M_ Vir = 1040 R σ_v^2 for a spherical and virialized cloud with a density profile of ρ∝ r^-1 <cit.>.
Its relationship with M_ CO is discussed in Section <ref>.
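The full chain from measured moments to masses is short enough to transcribe directly; the sketch below implements Eqs. (1)-(3) and the virial-mass formula with the constants quoted in the text (the example input values are placeholders):

```python
import numpy as np

SIG_BEAM = 30.0                                      # sigma_beam [pc] (7.5" x 4 pc/")
SIG_CHAN = 0.7 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # sigma_v,chan [km/s]

def radius_pc(sig_maj, sig_min):
    """Eq. (1): beam-deconvolved effective spherical radius [pc]."""
    return 1.91 * np.sqrt(np.sqrt(sig_maj**2 - SIG_BEAM**2)
                          * np.sqrt(sig_min**2 - SIG_BEAM**2))

def sigma_v_dec(sig_v_ext):
    """Eq. (2): channel-deconvolved velocity dispersion [km/s]."""
    return np.sqrt(sig_v_ext**2 - SIG_CHAN**2 / (2.0 * np.pi))

def mass_co(l_co, xco=4.0e20, r21=0.6):
    """Eq. (3): luminosity-based mass including helium [Msun]."""
    return 4.35 * (xco / 2.0e20) * l_co / r21

def mass_vir(r_pc, sv):
    """Virial mass for a rho ~ r^-1 sphere [Msun]."""
    return 1040.0 * r_pc * sv**2

# Placeholder cloud: sigma_maj = 40 pc, sigma_min = 35 pc,
# sigma_v,ext = 3.0 km/s, L_CO = 2e5 K km/s pc^2.
R, sv = radius_pc(40.0, 35.0), sigma_v_dec(3.0)
m_lum, m_vir = mass_co(2.0e5), mass_vir(R, sv)
print(R, sv, m_lum, m_vir, m_vir / m_lum)  # last value is the virial parameter
```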
§.§ ^13CO(J=2-1) Emission
We examined ^13CO(J=2-1) emission for each cloud.
The criteria for the “detection” of ^13CO(J=2-1) emission are as follows.
Firstly, we drew the ^12CO(J=2-1) spectrum at the ^12CO(J=2-1) peak of each cloud
and defined the “line channels”, which are successive velocity channels where significant ^12CO(J=2-1) emission exists.
Then, we examined the ^13CO(J=2-1) spectrum within the line channels.
If the ^13CO(J=2-1) emission exceeds 4 σ for two successive channels or exceeds 3 σ for three successive channels,
we treat the ^13CO(J=2-1) emission as tentatively detected.
In addition, we calculated the ^13CO(J=2-1) integrated intensity within the line channels and derived its signal-to-noise (S/N) ratio.
If the S/N ratio of the tentatively detected ^13CO(J=2-1) intensity exceeds 3, we finally treat the ^13CO(J=2-1) emission as significantly detected.
We confirmed significant ^13CO(J=2-1) emission for 173 clouds, and thus the resultant ^13CO(J=2-1) detection rate is 20%.
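These criteria translate into a compact two-step test; the sketch below (function and variable names are ours) assumes the input spectrum has already been restricted to the line channels:

```python
import numpy as np

def detect_13co(spec, rms, dv=1.4):
    """Two-step 13CO(2-1) detection test over the line channels.

    spec : 13CO brightness temperatures in the line channels [K] (numpy array)
    rms  : channel noise [K]; dv : channel width [km/s]
    """
    def run_above(nsigma, length):
        run = 0
        for t in spec:
            run = run + 1 if t > nsigma * rms else 0
            if run >= length:
                return True
        return False
    # tentative detection: 2 successive channels > 4 sigma,
    # or 3 successive channels > 3 sigma
    if not (run_above(4.0, 2) or run_above(3.0, 3)):
        return False
    # final detection: S/N of the integrated intensity > 3
    intensity = spec.sum() * dv
    noise = rms * dv * np.sqrt(len(spec))
    return intensity / noise > 3.0
```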
We examined the ^13CO(J=2-1)/^12CO(J=2-1) intensity ratio (hereafter R_13/12) for the ^13CO(J=2-1) detected clouds.
We found that R_13/12 in M33 is almost constant with galactocentric radius, as shown in Figure <ref>,
and that the typical R_13/12 is ∼0.1.
This value is similar to that in the disk of M51 <cit.>, and also similar to that for J=1-0 transition (i.e., ^13CO(J=1-0)/^12CO(J=1-0) ratio)
measured in nearby galaxy disks <cit.>.
§.§ Catalog Description
We summarized the properties of the 848 clouds in M33 as a catalog, which does not include small clouds with undefined radii or clouds at the edge of the ACA FOV.
We assigned the ID numbers of the clouds in order of increasing galactocentric radius.
Table <ref> presents the first 10 and last 10 clouds of the catalog, and the full version is available online.
The uncertainty of each property was evaluated using a bootstrapping method implemented in pycprops.
For a cloud consisting of N data points, we generated a trial cloud by randomly sampling the data N times, allowing the same data point to be sampled more than once.
Then, we measured the properties of the trial cloud.
We repeated the resampling and remeasuring 10,000 times for each cloud, and evaluated the uncertainties.
The final uncertainty in each property is the median absolute deviation of the bootstrapped values scaled up by the square root of the over-sampling rate, which corresponds to the number of pixels per beam size.
This scaling accounts for the fact that pixels within the same beam are not independent (i.e., correlated with each other).
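In outline, the error estimate for any single cloud property reduces to the following resampling loop (a sketch under our own naming; the statistic argument stands for whichever property is being measured):

```python
import numpy as np

def bootstrap_error(data, statistic, oversampling, n_trials=10_000, seed=0):
    """MAD-based bootstrap uncertainty, scaled by sqrt(pixels per beam).

    data : 1-D numpy array of the cloud's data points
    statistic : callable mapping a resampled array to the measured property
    oversampling : number of pixels per beam (correlated-pixel correction)
    """
    rng = np.random.default_rng(seed)
    n = len(data)
    trials = np.array([statistic(data[rng.integers(0, n, size=n)])
                       for _ in range(n_trials)])
    mad = np.median(np.abs(trials - np.median(trials)))
    # neighbouring pixels within one beam are correlated, hence the scaling
    return mad * np.sqrt(oversampling)

# e.g. bootstrap_error(np.array(velocities), np.std, oversampling=9.0)
```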
In this molecular-cloud catalog, we noted the S/N ratio of ^12CO(J=2-1) brightness temperature at the CO peak position in each GMC.
As reported in <cit.>, the algorithm requires a minimum S/N ratio of 10 for stable recovery of cloud properties.
Since our catalog includes 147 molecular clouds whose S/N ratio is lower than 10, we examine the properties of such low-S/N clouds in the following analyses.
The minimum S/N ratio among the cataloged clouds is 6.1. In addition, we checked for a GMC counterpart identified with the IRAM 30 m telescope <cit.>.
We make a comparison between the two catalogs in Section <ref>.
Table 3. List of cataloged molecular clouds

Columns: (1) ID; (2) R.A. (deg.); (3) Decl. (deg.); (4) R_ gal (kpc); (5) V_ LSR (km s^-1); (6) R (pc); (7) σ_v (km s^-1); (8) M_ CO (10^4 M_⊙); (9) M_ Vir (10^4 M_⊙); (10) I_12 CO (K km s^-1); (11) ^12CO S/N; (12) I_13 CO (K km s^-1); (13) R_13/12; (14) ϕ (°); (15) b/a; (16) IRAM ID
1 23.46415 30.65744 0.07 -168 17.3 ± 4.4 3.7 ± 0.9 3.8 ± 2.0 24.2 ± 8.6 1.83 ± 0.08 10.0 < 0.36 < 0.20 24 5.0 149
2 23.46366 30.66494 0.07 -202 29.2 ± 4.2 4.6 ± 0.6 16.5 ± 3.3 64.4 ± 12.0 6.50 ± 0.13 18.2 < 0.35 < 0.05 -75 2.1
3 23.46754 30.66369 0.10 -187 46.3 ± 10.2 2.5 ± 0.5 10.2 ± 4.5 30.8 ± 8.8 2.76 ± 0.12 10.6 < 0.32 < 0.11 25 3.3 156
4 23.45834 30.65327 0.11 -162 19.0 ± 6.9 2.9 ± 0.9 2.5 ± 1.6 16.7 ± 7.8 1.46 ± 0.06 12.4 < 0.25 < 0.17 -61 1.4
5 23.46560 30.66952 0.14 -203 38.1 ± 3.9 2.9 ± 0.4 18.7 ± 3.8 32.4 ± 6.1 3.61 ± 0.09 22.3 < 0.47 < 0.13 -62 2.0 177
6 23.46754 30.65494 0.16 -168 48.2 ± 2.5 4.0 ± 0.2 125.4 ± 6.9 78.4 ± 5.1 20.94 ± 0.10 95.8 2.64 ± 0.11 0.13 ± 0.01 36 1.2 149
7 23.45495 30.65035 0.17 -163 12.6 ± 10.9 2.4 ± 1.0 1.1 ± 0.1 7.5 ± 5.2 1.20 ± 0.07 7.5 < 0.20 < 0.17 41 4.2
8 23.45301 30.65869 0.17 -165 35.7 ± 3.4 2.4 ± 0.2 18.7 ± 2.6 21.8 ± 3.1 7.37 ± 0.07 41.3 0.52 ± 0.11 0.07 ± 0.01 -86 2.3 151
9 23.45543 30.66535 0.18 -177 39.4 ± 6.7 4.6 ± 0.7 9.2 ± 0.5 88.4 ± 18.4 2.62 ± 0.10 14.9 < 0.39 < 0.15 -27 1.3 160
10 23.45494 30.66452 0.18 -187 25.9 ± 6.0 2.0 ± 0.5 3.7 ± 2.2 10.5 ± 4.0 3.15 ± 0.09 13.5 < 0.23 < 0.07 -64 1.9 160
⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯
⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯ ⋯
839 23.65535 30.58110 4.69 -174 29.5 ± 8.8 2.5 ± 0.5 7.6 ± 3.3 19.4 ± 5.5 3.00 ± 0.12 12.0 < 0.37 < 0.12 43 1.2 326
840 23.65583 30.57610 4.74 -176 32.2 ± 6.8 2.2 ± 0.3 8.7 ± 3.5 16.7 ± 4.0 3.47 ± 0.09 19.3 < 0.33 < 0.10 69 1.5 326
841 23.65965 30.54693 5.05 -154 13.3 ± 9.3 1.8 ± 0.8 1.7 ± 1.1 4.5 ± 3.2 1.38 ± 0.10 7.1 < 0.18 < 0.13 -15 5.7
842 23.66347 30.51901 5.36 -152 38.6 ± 6.4 2.7 ± 0.4 13.2 ± 4.8 30.0 ± 6.7 2.49 ± 0.09 16.5 < 0.34 < 0.14 -56 3.9 252
843 23.67514 30.54941 5.36 -165 17.2 ± 10.2 1.8 ± 0.7 1.7 ± 1.8 5.5 ± 3.7 1.08 ± 0.07 8.0 < 0.35 < 0.33 62 2.0
844 23.67173 30.53649 5.39 -161 16.7 ± 8.7 1.8 ± 0.6 1.8 ± 0.9 5.6 ± 3.2 1.25 ± 0.08 9.3 < 0.27 < 0.22 27 1.5
845 23.67512 30.54024 5.43 -158 24.9 ± 4.2 2.1 ± 0.3 17.2 ± 3.6 11.3 ± 2.6 6.57 ± 0.12 30.8 0.84 ± 0.17 0.13 ± 0.03 3 1.3 291
846 23.67899 30.54273 5.50 -164 35.6 ± 9.6 2.0 ± 0.4 6.4 ± 1.7 14.1 ± 4.5 2.08 ± 0.08 15.5 < 0.46 < 0.22 73 3.2 291
847 23.67461 30.52941 5.51 -156 42.1 ± 7.6 2.1 ± 0.4 8.1 ± 1.5 18.7 ± 5.3 1.74 ± 0.09 11.9 < 0.26 < 0.15 -66 2.0 290
848 23.67362 30.51608 5.59 -155 36.6 ± 14.4 2.2 ± 0.3 9.2 ± 0.9 18.2 ± 5.2 3.10 ± 0.15 11.2 < 0.37 < 0.12 -75 1.2 253
(1) ID number of the cloud.
(2) – (3) ^12CO(J=2-1) peak position of the cloud in equatorial coordinates (J2000) in degree.
(4) Galactocentric radius of the cloud from the optical center of M33 (1h33m50s.9, +30°39′37″) in units of kiloparsec.
(5) Radial velocity in the Local Standard of Rest in units of km s^-1.
(6) Deconvolved radius of the cloud including uncertainty in units of parsec.
(7) Deconvolved velocity dispersion including uncertainty in units of km s^-1.
(8) Luminosity mass based on ^12CO(J=2-1) flux including uncertainty in units of 10^4 M_⊙.
(9) Mass of the cloud inferred from the virial theorem including uncertainty in units of 10^4 M_⊙.
(10) ^12CO(J=2-1) intensity at its peak position of the cloud including uncertainty in units of K km s^-1.
(11) S/N ratio of ^12CO(J=2-1) brightness temperature at its peak position.
(12) ^13CO(J=2-1) intensity at ^12CO(J=2-1) peak position of the cloud including uncertainty in units of K km s^-1, or its 3 σ upper limit.
(13) ^13CO(J=2-1)/^12CO(J=2-1) intensity ratio at ^12CO(J=2-1) peak position of the cloud including uncertainty, or its 3 σ upper limit.
(14) Position angle of the major axis of the cloud, which is measured counterclockwise from north to east, in units of degree.
(15) Ratio between the major and minor axes after the deconvolution by the observing beam (7.5″).
(16) GMC counterpart identified with the IRAM 30 m telescope <cit.>.
§.§ Basic Properties of Cataloged Molecular Clouds
Figure <ref> shows the frequency distributions of the radius R and the velocity dispersion σ_v for the cataloged molecular clouds in M33.
R ranges from 6.8 to 72 pc, and σ_v ranges from 1.0 to 6.1 km s^-1. Their medians are 34 pc and 2.8 km s^-1, respectively.
Note that, as pointed out by <cit.>, such distributions of the cloud radius and the velocity dispersion depend on both the spatial and the velocity resolutions of the input data cube
because ISM in galaxies generally has a hierarchical structure from parsec to kiloparsec scales.
We also examined the frequency distributions of M_ CO and M_ Vir as shown in Figure <ref>.
M_ CO ranges from 6.7 × 10^3 to 2.6 × 10^6 M_⊙, and M_ Vir ranges from 1.1 × 10^4 to 1.9 × 10^6 M_⊙.
Their medians are 9.9 × 10^4 M_⊙ and 2.8 × 10^5 M_⊙, respectively.
Both for M_ CO and M_ Vir, the dynamic range of mass is more than two orders of magnitude, which is wider than in earlier M33 studies <cit.>.
The low-S/N (< 10) clouds typically show smaller radii and smaller velocity dispersions compared to the high-S/N clouds.
However, some low-S/N clouds have large virial masses (≥ 10^5 M_⊙) although their CO luminosity masses are mostly small (< 10^5 M_⊙).
We discuss the origin of the discrepancy between M_ CO and M_ Vir for the low-S/N clouds in subsection <ref>.
§ SCALING RELATIONS
Starting with the pioneering work by <cit.>, many earlier studies have suggested that basic properties of molecular clouds are quantitatively related to each other via scaling relations,
which are often referred to as “Larson's laws”.
In this section, we examine such scaling relations based on our molecular-cloud catalog.
§.§ Size–Line-width Relation
The first Larson's law relates the cloud radius R in parsecs to the velocity dispersion σ_v in km s^-1,
which is expressed as σ_v = 0.72 R^0.5 for the MW <cit.>.
This relation is considered to reflect the turbulent condition inside the molecular clouds.
Figure <ref> shows the relation between R and σ_v for 848 molecular clouds in M33, distinguishing the low-S/N and high-S/N clouds.
The overall distributions in the radius-velocity dispersion plane are similar between the two cloud types;
many clouds show smaller velocity dispersion than the Galactic R-σ_v relation at a given radius regardless of the ^12CO S/N ratio.
This trend can be evaluated more quantitatively by deriving the coefficient of the R-σ_v relation, σ_v/R^0.5.
The average σ_v/R^0.5 for the 848 clouds is 0.48 ± 0.13, which is significantly smaller than the Galactic value of 0.72.
To explore the origin of the smaller velocity dispersions, we examined the R-σ_v relations based on the two earlier GMC catalogs in M33 <cit.>.
We found that the average σ_v/R^0.5 is 0.54 ± 0.11 for the <cit.> catalog and 0.50 ± 0.17 for the <cit.> catalog,
although the linewidth of small (< 20 pc) clouds in the <cit.> catalog is comparable to that in the MW, as pointed out by <cit.>.
This suggests that the velocity dispersions of GMCs in M33 are intrinsically smaller than those of Galactic GMCs.
Here, we consider the physical mechanism to change the velocity dispersion in molecular clouds.
Earlier studies reported that the velocity dispersion at a given cloud radius is higher in the Galactic Center <cit.> and 30 Doradus in the LMC <cit.>,
which are associated with active star-forming regions, compared to the Galactic R-σ_v relation <cit.>.
In contrast to this, the quiescent cloud PGCC G282.92-32.40 <cit.>, which is referred to as the “Planck Cold Cloud (PCC)”, in the LMC shows a smaller velocity dispersion than the Galactic R-σ_v relation <cit.>.
From these observational facts, <cit.> suggested that local energy injection by star formation feedback plays an important role in the turbulence of molecular clouds.
In other words, it is unlikely that a molecular cloud without active star-forming regions increases its velocity dispersion.
Considering that the R-σ_v relation obtained in M33 is similar to that in the PCC,
it is suggested that many molecular clouds in M33 are not associated with active star formation like the PCC.
However, this contradicts the fact that more than 70% of clouds in M33 are associated with star-forming regions <cit.>.
Indeed, Figure <ref> shows that many high-mass clouds are associated with 8 μm-bright sources (see also Figure <ref>).
Alternatively, <cit.> examined the variation in σ_v/R^0.5 for 12 external galaxies
and found a trend that extragalactic GMCs falling under the Galactic R-σ_v relation have lower surface densities (Σ_ GMC) than corresponding clouds in the MW.
GMCs in the SMC and NGC 4605 show Σ_ GMC ∼ 45 M_⊙ pc^-2 and σ_v/R^0.5 = 0.37.
Both values are lower than those of GMCs in the other 10 external galaxies, while similar to those in M33 (Σ_ GMC = 30 - 40 M_⊙ pc^-2 and σ_v/R^0.5 = 0.4 - 0.5).
This suggests that low-surface density molecular clouds can be maintained even by the small turbulence (i.e., small velocity dispersion), which results in the observed R-σ_v relation in M33 <cit.>.
§.§ CO Luminosity Mass–Virial Mass Relation
Figure <ref> shows a relationship between M_ CO and M_ Vir for 848 molecular clouds in M33.
The two masses are well correlated, although M_ Vir is generally larger than M_ CO. A similar trend is also reported by <cit.>.
In particular, the low-S/N clouds show larger M_ Vir at a given M_ CO; the median of the virial parameter α, which is defined as M_ Vir/M_ CO, is 5.3 for the low-S/N clouds.
Since clouds in virial equilibrium show α≈ 1 – 3 <cit.>, most of the low-S/N clouds are gravitationally unbound (α > 3).
On the other hand, the median of α for the high-S/N clouds is 2.0, and 68% of the high-S/N clouds are gravitationally bound (α≤ 3).
Considering that the R-σ_v relation does not differ between the high-S/N and low-S/N clouds, the CO intensity emitted from the low-S/N clouds may simply be weak.
This is consistent with the low surface densities of molecular clouds in M33 (see subsection <ref>).
In Figure <ref>, we can see that most of the clouds are virialized (α ≤ 3) at the high-mass end.
We discuss the physical meaning of this trend in Section <ref>.
§ COMPARISON WITH EARLIER GMC CATALOG
Based on the ACA+IRAM ^12CO(J=2-1) data of M33, we cataloged 848 molecular clouds.
In this section, we compare the ACA+IRAM cloud catalog with the earlier GMC catalog generated by <cit.>.
For a fair comparison between the two catalogs, we extracted the 362 GMCs from the <cit.> catalog that are located within the ACA FOV.
Hereafter, we refer to the GMCs in the <cit.> catalog as “IRAM GMCs”.
This comparison between the two catalogs provides new insights into the hierarchical structure of molecular gas.
§.§ Mass Function
Firstly, we investigate the cloud mass distributions for the two catalogs.
The cumulative mass distribution function can be expressed by truncated power-law functions as follows:
N(M' > M) = N_0 [ (M/M_0)^(γ+1) - 1 ],
where M_0 is the maximum mass in the distribution, γ indicates how the cloud mass is distributed, and N_0 is the number of clouds more massive than 2^(1/(γ+1)) M_0 <cit.>.
To determine the fitting range of the cloud mass distributions, we estimated the completeness limit of molecular clouds by reference to <cit.>.
They reported that the lowest mass molecular cloud in their GMC survey is of order 2 × 10^4 M_⊙ and also estimated the completeness limit of 1.5 × 10^5 M_⊙, which is about seven times larger than the lowest mass.
If we apply such a linear scaling between the two masses to our molecular cloud catalog, the completeness limit is estimated to be 5 × 10^4 M_⊙ because the lowest mass cloud is 7 × 10^3 M_⊙ (see subsection <ref>).
Note that we recalculated the GMC masses in the <cit.> catalog by assuming R_21 = 0.6 and adopted 8.4 × 10^4 M_⊙ as the completeness limit[<cit.> reported that the completeness limit is 6.3 × 10^4 M_⊙ in their GMC catalog. The assumption of R_21 = 0.6 in this study yields the corrected completeness limit of 6.3 × 10^4 × (0.8/0.6) = 8.4 × 10^4 M_⊙.].
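A least-squares fit of this truncated power law above the completeness limit can be sketched as follows (N_0, M_0, and γ are treated as free parameters, as in the text; the initial guesses are arbitrary assumptions):

```python
import numpy as np
from scipy.optimize import curve_fit

def n_cum(M, N0, M0, gamma):
    """Truncated power law: N(M' > M) = N0 * ((M / M0)**(gamma + 1) - 1)."""
    return N0 * ((M / M0) ** (gamma + 1.0) - 1.0)

def fit_mass_function(masses, m_complete):
    """Fit the cumulative mass distribution above the completeness limit."""
    m = np.sort(np.asarray(masses))
    m = m[m >= m_complete]
    n_above = np.arange(len(m), 0, -1)   # number of clouds at or above each mass
    p0 = (10.0, m.max(), -1.6)           # arbitrary starting point (N0, M0, gamma)
    popt, _ = curve_fit(n_cum, m, n_above, p0=p0, maxfev=20000)
    return popt                          # best-fit (N0, M0, gamma)
```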
Figure <ref> shows the cumulative cloud mass functions fitted by the truncated power-law functions.
Note that we treated N_0 as a free parameter (i.e., independent of γ) in this study to achieve the best fitting.
We obtained γ = -1.60 for the ACA+IRAM cloud catalog and γ = -1.48 for the <cit.> catalog, respectively.
Considering that <cit.> obtained γ = -1.65 for all 566 GMCs in the <cit.> catalog, the cloud population of the ACA+IRAM cloud catalog is similar to that of the <cit.> catalog.
However, the truncated power-law fitting for the ACA+IRAM cloud catalog deviates from the mass spectrum at the high-mass side (especially from 5 × 10^5 M_⊙ to 2 × 10^6 M_⊙);
the number of clouds in this mass range is significantly smaller than expected from the truncated power-law fit and is also smaller than in the <cit.> catalog.
Such a decrease in the high-mass clouds in the ACA+IRAM cloud catalog is presumably due to the difference in the spatial resolutions of CO data;
some large IRAM GMCs identified with the 49 pc beam can be resolved into multiple cloud components in the ACA+IRAM CO data at 30 pc resolution.
This yields a decrease in the number of GMCs at the high-mass side.
Indeed, we made a one-to-one comparison between ACA+IRAM clouds and IRAM GMCs and found that 170 IRAM GMCs are resolved into two or more ACA+IRAM clouds.
Figure <ref> shows the comparison of the cloud identification between the two CO data.
A small IRAM GMC is identified as a single molecular cloud even in the ACA+IRAM CO data, while a large IRAM GMC is resolved into multiple ACA+IRAM clouds.
§.§ Origin of Velocity Dispersion in IRAM GMCs
As described above, a large IRAM GMC (typically with M_ CO larger than 3 × 10^5 M_⊙) can be treated as an association of multiple ACA+IRAM clouds.
Investigating such a correspondence is beneficial for the comparison between properties of individual molecular clouds and the average properties of their association.
In particular, we focus on the origin of the observed velocity dispersion (linewidth) of a large IRAM GMC, which is likely composed of two factors:
(1) the line-of-sight relative velocities between the internal ACA+IRAM clouds and (2) the velocity dispersions of the individual ACA+IRAM clouds.
Here we examine which factor mainly contributes to the overall velocity dispersion for the 77 IRAM GMCs that are resolved into three or more ACA+IRAM clouds.
To quantify the line-of-sight velocity difference between multiple ACA+IRAM clouds,
we firstly defined the weighted center of line-of-sight velocities between the clouds as follows:
v_g = ∑_i=1^n I_i v_i/∑_i=1^n I_i,
where I_i and v_i are the ^12CO(J=2-1) intensity at the CO peak position (10th column in Table <ref>) and the line-of-sight velocity (V_ LSR; 5th column in Table <ref>) of the ith ACA+IRAM cloud, respectively.
n is the number of ACA+IRAM clouds included in a large IRAM GMC.
Using this v_g, we calculate the representative velocity difference between internal ACA+IRAM clouds as follows:
v_ diff = √(∑_i=1^n I_i (v_i - v_g)^2/∑_i=1^n I_i).
In addition, we calculate the weighted mean of velocity dispersions of individual ACA+IRAM clouds as follows:
σ_v, mean = ∑_i=1^n I_i σ_v, i/∑_i=1^n I_i,
where σ_v, i is the velocity dispersion of the ith ACA+IRAM cloud.
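The three intensity-weighted quantities can be evaluated in a few lines; the sketch below simply transcribes the formulas for v_g, v_diff, and σ_v,mean above:

```python
import numpy as np

def gmc_velocity_terms(I, v, sigma_v):
    """v_g, v_diff and sigma_v,mean for the clouds inside one IRAM GMC.

    I       : peak 12CO(2-1) intensities of the member clouds [K km/s]
    v       : their LSR velocities [km/s]
    sigma_v : their internal velocity dispersions [km/s]
    """
    I, v, sigma_v = map(np.asarray, (I, v, sigma_v))
    w = I / I.sum()
    v_g = np.sum(w * v)                           # weighted velocity centre
    v_diff = np.sqrt(np.sum(w * (v - v_g) ** 2))  # cloud-to-cloud velocity term
    sigma_mean = np.sum(w * sigma_v)              # mean internal dispersion
    return v_g, v_diff, sigma_mean
```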
Figure <ref> shows the velocity dispersion of the IRAM GMC as a function of v_ diff and that of σ_v, mean.
A clear correlation between v_ diff and the velocity dispersion of the IRAM GMC can be seen, with a Spearman's rank correlation coefficient r_ s of 0.59,
while σ_v, mean is nearly constant (2 – 4 km s^-1) and its correlation with the velocity dispersion of the IRAM GMC is weak (r_ s = 0.28).
This suggests that the velocity dispersion of a large cloud is mainly dominated by the line-of-sight velocity difference between small clouds inside the GMC in the case of v_ diff > 2 km s^-1,
while the velocity dispersion of individual internal clouds determines the overall velocity dispersion of the GMC if v_ diff is less than 2 km s^-1.
§ PROPERTIES OF MOLECULAR CLOUDS AND HIGH-MASS STAR FORMATION
As shown in Figure <ref> (and also described in subsection <ref>),
many high-mass clouds are associated with the strong 8 μm emission in the spiral arm region,
while low-mass clouds tend to be apart from such 8 μm-bright sources and to exist in the inter-arm region.
Here, we quantitatively evaluate the relationship between the molecular clouds and the 8 μm-bright sources.
To do this, we regridded the IRAC 8 μm map <cit.> to match the ACA+IRAM ^12CO(J=2-1) map and obtained the mean 8 μm flux of each molecular cloud by averaging the pixel values within it.
Then, we constructed the cumulative distribution functions (CDFs) both for 423 high-mass clouds and 425 low-mass clouds.
Figure <ref> clearly shows that the 8 μm-bright sources are closely associated with high-mass clouds rather than low-mass clouds;
the strong (> 2 MJy sr^-1) 8 μm emission is found in 72% of the high-mass clouds, but in only 36% of the low-mass clouds.
Note that this trend does not change even if we exclude the diffuse components of 8 μm emission (see appendix).
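The comparison itself is straightforward to reproduce; a numpy sketch (the 2 MJy sr^-1 threshold is the one quoted above; everything else, including names, is illustrative):

```python
import numpy as np

def mean_cloud_flux(flux_map, cloud_masks):
    """Mean 8 um flux within each cloud footprint on the regridded map."""
    return np.array([flux_map[m].mean() for m in cloud_masks])

def frac_above(fluxes, threshold=2.0):
    """Fraction of clouds brighter than `threshold` MJy/sr (i.e. 1 - CDF)."""
    return np.mean(fluxes > threshold)

# e.g. frac_above(mean_cloud_flux(map8um, high_mass_masks))  -> ~0.72 in the text
```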
Our result indicates that high-mass star formation tends to be associated with high-mass clouds rather than low-mass clouds.
Such a trend is consistent with the extensive study by <cit.>; they matched mid-infrared (MIR) sources with GMCs and found that
a GMC with bright MIR sources tends to have a large CO luminosity mass.
Since high-mass star formation generally starts from the gravitational instability of molecular gas,
the virial parameter α, which expresses the degree of gravitational binding, is useful to examine star formation in molecular clouds.
Figure <ref> shows α as a function of M_ CO for each ACA+IRAM cloud in M33.
α generally decreases (i.e., clouds become more unstable against gravitational collapse) with increasing M_ CO.
A similar trend is also observed in the MW <cit.> and external galaxies <cit.>.
In M33, high-mass clouds whose masses are larger than 10^5 M_⊙ seem to be almost virialized;
in other words, self-gravity dominates over the internal turbulence of the cloud.
This indicates that high-mass star formation likely sets in within such high-mass clouds via gravitational instability,
which is well consistent with the observed feature that many high-mass clouds are associated with 8 μm-bright sources (Figures <ref> and <ref>).
This also explains the large α of the low-S/N clouds (median 5.3):
the low-S/N clouds largely correspond to low-mass clouds, which are gravitationally unbound and not associated with star-forming regions.
Finally, we briefly discuss the evolution of molecular clouds.
In M33, many high-mass clouds exist in the spiral arm region, while the inter-arm region is dominated by low-mass clouds.
Since the 8 μm-bright sources lie loosely along the spiral arms in M33, the stellar potential may play a vital role in the accumulation (and the resultant mass growth) of molecular clouds.
The evolution of molecular clouds crossing the spiral arm and the high-mass star formation within them are often discussed for grand-design spiral galaxies such as M51 <cit.> and IC 342 <cit.>
based on the interferometric CO(J=1-0) observations at a spatial resolution of a few × 10 pc.
In M51, <cit.> suggested that smaller molecular clouds collide to form giant molecular associations (GMAs) in the spiral arm regions and that star formation is then triggered in the GMA cores.
<cit.> divided the GMCs in the spiral arm of IC 342 into two categories according to whether they are associated with star formation activity or not,
and reported that the GMCs with H ii regions are typically more virialized and massive compared to the GMCs without H ii regions.
These results are consistent with the picture of molecular clouds and the high-mass star formation in M33 although it is a flocculent galaxy whose spiral arm structures are relatively weak.
In a forthcoming paper, we will report a detailed study on the evolutionary stage of GMCs based on the comparison with H ii regions <cit.>.
Although the GMC evolution in M33 was investigated in earlier studies <cit.>, the new ACA CO(J=2-1) data enable us
to study the dense-gas formation based on ^13CO(J=2-1) emission as well as the evolution of basic properties of clouds (e.g., size, linewidth, mass, and virial parameters).
§ SUMMARY
We have performed ALMA-ACA 7 m-array observations in ^12CO(J=2-1), ^13CO(J=2-1), and C^18O(J=2-1) line emission
toward the molecular-gas disk in M33 at an angular resolution of 7.31″ × 6.50″ (30 pc × 26 pc).
We combined the ACA 7 m-array ^12CO(J=2-1) data with the IRAM 30 m data to compensate for diffuse molecular-gas components.
The summary of this work is as follows:
* The ACA+IRAM combined ^12CO(J=2-1) map clearly depicts the cloud-scale molecular-gas structure over the M33 disk.
In addition, we detected many ^13CO(J=2-1) sources, which correspond to moderately dense molecular gas.
* We decomposed individual cloud components from the ACA+IRAM ^12CO(J=2-1) cube data employing pycprops, and cataloged 848 molecular clouds with a mass range from 10^3 M_⊙ to 10^6 M_⊙.
We found that high-mass clouds (M_ CO≥ 10^5 M_⊙) tend to associate with the 8 μm-bright sources in the spiral arm region,
while low-mass clouds (M_ CO < 10^5 M_⊙) tend to be apart from such 8 μm-bright sources and to exist in the inter-arm region.
* We found that most of the molecular clouds in M33 show smaller velocity dispersions than the Galactic R-σ_v relation at a given radius.
This is presumably due to low-surface density molecular clouds, which may be maintained even by the small turbulence.
* We found that a small IRAM GMC is identified as a single molecular cloud even in ACA+IRAM CO data, while a large IRAM GMC (typically its M_ CO is larger than 3 × 10^5 M_⊙) can be resolved into multiple ACA+IRAM clouds.
The velocity dispersion of a large IRAM GMC is mainly dominated by the line-of-sight velocity difference between small clouds inside the GMC rather than the internal cloud velocity broadening.
* Based on the comparison between M_ CO and M_ Vir for ACA+IRAM clouds, we found that high-mass clouds in M33 are almost virialized.
This indicates that high-mass star formation likely sets in within such high-mass clouds via gravitational instability.
We thank the anonymous referee for helpful comments, which significantly improved the manuscript.
This paper makes use of the following ALMA data: [ADS/JAO.ALMA#2017.1.00461.S], [ADS/JAO.ALMA#2018.A.00058.S], [ADS/JAO.ALMA#2017.1.00901.S], and [ADS/JAO.ALMA#2019.1.01182.S].
ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada), NSC and ASIAA (Taiwan), and KASI (Republic of Korea), in cooperation with the Republic of Chile.
The Joint ALMA Observatory is operated by ESO, AUI/NRAO, and NAOJ.
This work is based on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA.
Data analysis was in part carried out on the Multi-wavelength Data Analysis System operated by the Astronomy Data Center (ADC), National Astronomical Observatory of Japan.
This work was supported by NAOJ ALMA Scientific Research grant Nos. 2022-22B and JSPS KAKENHI (grant Nos. JP18H05440, JP21H00049, and JP21K13962).
CASA (v5.4.0; <cit.>), Astropy <cit.>, APLpy (v1.1.1; <cit.>)
§ DIFFUSE EMISSION SUBTRACTION FOR IRAC 8 μm DATA
In Section <ref>, we measured 8 μm flux as a proxy for star formation activity in M33.
Generally, star formation rates (SFRs) are estimated from Hα (and also far-infrared emission such as 24 μm) luminosities by assuming that all the Hα-emitting gas is ionized by the local star-forming region.
Although the typical size of an H ii region in the MW is ∼0.1 – 10 pc <cit.>, Hα maps of nearby galaxies often show ionized-gas distributions on scales of 100 pc or more.
Theoretical studies have shown that clumpy density structures of the ISM allow for larger escape fractions of ionizing radiation <cit.>.
This indicates that the Hα-emitting gas is not necessarily ionized by the local star-forming region, and thus the diffuse components of Hα emission should be considered in the estimation of SFRs.
Such diffuse components are also observed in the IRAC 8 μm map that we used.
To extract the compact 8 μm emission which directly reflects the star formation from the diffuse components, we applied HIIphot, an IDL software developed by <cit.>.
Following the procedures in <cit.>, we subtracted the diffuse components from the 8 μm map.
Figure <ref> shows the same as Figure <ref>, but using the 8 μm flux without diffuse components.
The general trend found in Figure <ref> does not change; the 8 μm-bright sources are closely associated with high-mass clouds rather than low-mass clouds.
[Astropy Collaboration et al.(2018)]astropy2018
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, , 156, 123
[Braine et al.(2018)]braine2018
Braine, J., Rosolowsky, E., Gratier, P., et al. 2018, , 612, A51. doi:10.1051/0004-6361/201732405
[Bolatto et al.(2008)]bolatto2008
Bolatto, A. D., Leroy, A. K., Rosolowsky, E., et al. 2008, , 686, 948. doi:10.1086/591513
[Bolatto et al.(2013)]bolatto2013
Bolatto, A. D., Wolfire, M., & Leroy, A. K. 2013, , 51, 207. doi:10.1146/annurev-astro-082812-140944
[Calzetti et al.(2005)]calzetti2005
Calzetti, D., Kennicutt, R. C., Bianchi, L., et al. 2005, , 633, 871. doi:10.1086/466518
[Calzetti et al.(2007)]calzetti2007
Calzetti, D., Kennicutt, R. C., Engelbracht, C. W., et al. 2007, , 666, 870. doi:10.1086/520082
[Cao et al.(2017)]cao2017
Cao, Y., Wong, T., Xue, R., et al. 2017, , 847, 33. doi:10.3847/1538-4357/aa88c5
[Cao et al.(2023)]cao2023
Cao, Y., Wong, T., Bolatto, A. D., et al. 2023, arXiv:2306.07640. doi:10.48550/arXiv.2306.07640
[Colombo et al.(2014)]colombo2014
Colombo, D., Hughes, A., Schinnerer, E., et al. 2014, , 784, 3. doi:10.1088/0004-637X/784/1/3
[Corbelli et al.(2017)]corbelli2017
Corbelli, E., Braine, J., Bandiera, R., et al. 2017, , 601, A146. doi:10.1051/0004-6361/201630034
[Corbelli et al.(2014)]corbelli2014
Corbelli, E., Thilker, D., Zibetti, S., et al. 2014, , 572, A23. doi:10.1051/0004-6361/201424033
[Cormier et al.(2018)]cormier2018
Cormier, D., Bigiel, F., Jiménez-Donaire, M. J., et al. 2018, , 475, 3909. doi:10.1093/mnras/sty059
[Crocker et al.(2013)]crocker2013
Crocker, A. F., Calzetti, D., Thilker, D. A., et al. 2013, , 762, 79. doi:10.1088/0004-637X/762/2/79
[Dale et al.(2009)]dale2009
Dale, D. A., Cohen, S. A., Johnson, L. C., et al. 2009, , 703, 517. doi:10.1088/0004-637X/703/1/517
[den Brok et al.(2022)]denbrok2022
den Brok, J. S., Bigiel, F., Sliwa, K., et al. 2022, , 662, A89. doi:10.1051/0004-6361/202142247
[den Brok et al.(2021)]denbrok2021
den Brok, J. S., Chatzigiannakis, D., Bigiel, F., et al. 2021, , 504, 3221. doi:10.1093/mnras/stab859
[Druard et al.(2014)]druard2014
Druard, C., Braine, J., Schuster, K. F., et al. 2014, , 567, A118. doi:10.1051/0004-6361/201423682
[Egusa et al.(2011)]egusa2011
Egusa, F., Koda, J., & Scoville, N. 2011, , 726, 85. doi:10.1088/0004-637X/726/2/85
[Engargiola et al.(2003)]engargiola2003
Engargiola, G., Plambeck, R. L., Rosolowsky, E., et al. 2003, , 149, 343. doi:10.1086/379165
[Freedman et al.(1991)]freedman1991
Freedman, W. L., Wilson, C. D., & Madore, B. F. 1991, , 372, 455. doi:10.1086/169991
[Fukui et al.(2008)]fukui2008
Fukui, Y., Kawamura, A., Minamidani, T., et al. 2008, , 178, 56. doi:10.1086/589833
[Fukui et al.(1999)]fukui1999
Fukui, Y., Mizuno, N., Yamaguchi, R., et al. 1999, , 51, 745. doi:10.1093/pasj/51.6.745
[Galleti et al.(2004)]galleti2004
Galleti, S., Bellazzini, M., & Ferraro, F. R. 2004, , 423, 925. doi:10.1051/0004-6361:20040489
[Garay & Lizano(1999)]garay1999
Garay, G. & Lizano, S. 1999, , 111, 1049. doi:10.1086/316416
[Gratier et al.(2010)]gratier2010
Gratier, P., Braine, J., Rodriguez-Fernandez, N. J., et al. 2010, , 522, A3. doi:10.1051/0004-6361/201014441
[Gratier et al.(2012)]gratier2012
Gratier, P., Braine, J., Rodriguez-Fernandez, N. J., et al. 2012, , 542, A108. doi:10.1051/0004-6361/201116612
[Gratier et al.(2017)]gratier2017
Gratier, P., Braine, J., Schuster, K., et al. 2017, , 600, A27. doi:10.1051/0004-6361/201629300
[Haffner et al.(2009)]haffner2009
Haffner, L. M., Dettmar, R.-J., Beckman, J. E., et al. 2009, Reviews of Modern Physics, 81, 969. doi:10.1103/RevModPhys.81.969
[Hirota et al.(2010)]hirota2010
Hirota, A., Kuno, N., Sato, N., et al. 2010, , 62, 1261. doi:10.1093/pasj/62.5.1261
[Hirota et al.(2011)]hirota2011
Hirota, A., Kuno, N., Sato, N., et al. 2011, , 737, 40. doi:10.1088/0004-637X/737/1/40
[Hughes et al.(2013)]hughes2013
Hughes, A., Meidt, S. E., Colombo, D., et al. 2013, , 779, 46. doi:10.1088/0004-637X/779/1/46
[Kawamura et al.(2009)]kawamura2009
Kawamura, A., Mizuno, Y., Minamidani, T., et al. 2009, , 184, 1. doi:10.1088/0067-0049/184/1/1
[Kennicutt(1984)]kennicutt1984
Kennicutt, R. C. 1984, , 287, 116. doi:10.1086/162669
[Kepley et al.(2020)]kepley2020
Kepley, A. A., Tsutsumi, T., Brogan, C. L., et al. 2020, , 132, 024505
[Kobayashi et al.(2017)]kobayashi2017
Kobayashi, M. I. N., Inutsuka, S.-i., Kobayashi, H., et al. 2017, , 836, 175. doi:10.3847/1538-4357/836/2/175
[Kobayashi et al.(2018)]kobayashi2018
Kobayashi, M. I. N., Kobayashi, H., Inutsuka, S.-i., et al. 2018, , 70, S59. doi:10.1093/pasj/psy018
[Koch et al.(2018)]koch2018
Koch, E. W., Rosolowsky, E. W., Lockman, F. J., et al. 2018, , 479, 2505. doi:10.1093/mnras/sty1674
[Kondo et al.(2021)]kondo2021
Kondo, H., Tokuda, K., Muraoka, K., et al. 2021, , 912, 66. doi:10.3847/1538-4357/abeb65
[Konishi et al.(in preparation)]konishi2023
Konishi, A., Muraoka, K., Tokuda, K., et al. 2023, , in prep.
[Larson(1981)]larson1981
Larson, R. B. 1981, , 194, 809. doi:10.1093/mnras/194.4.809
[Leroy et al.(2022)]leroy2022
Leroy, A. K., Rosolowsky, E., Usero, A., et al. 2022, , 927, 149. doi:10.3847/1538-4357/ac3490
[Leroy et al.(2021)]leroy2021
Leroy, A. K., Schinnerer, E., Hughes, A., et al. 2021, , 257, 43. doi:10.3847/1538-4365/ac17f3
[Liu et al.(2011)]liu2011
Liu, G., Koda, J., Calzetti, D., et al. 2011, , 735, 63. doi:10.1088/0004-637X/735/1/63
[Massey et al.(2006)]massey2006
Massey, P., Olsen, K. A. G., Hodge, P. W., et al. 2006, , 131, 5. doi:10.1086/503256
[Massey et al.(2007)]massey2007
Massey, P., McNeill, R. T., Olsen, K. A. G., et al. 2007, , 134, 6. doi:10.1086/523658
[McMullin et al.(2007)]mcmullin2007
McMullin, J. P., Waters, B., Schiebel, D., et al. 2007, Astronomical Data Analysis Software and Systems XVI, 127
[Miura et al.(2012)]miura2012
Miura, R. E., Kohno, K., Tosaki, T., et al. 2012, , 761, 37. doi:10.1088/0004-637X/761/1/37
[Miura et al.(2014)]miura2014
Miura, R. E., Kohno, K., Tosaki, T., et al. 2014, , 788, 167. doi:10.1088/0004-637X/788/2/167
[Miville-Deschênes et al.(2017)]miville2017
Miville-Deschênes, M.-A., Murray, N., & Lee, E. J. 2017, , 834, 57. doi:10.3847/1538-4357/834/1/57
[Morokuma-Matsui et al.(2020)]morokuma2020
Morokuma-Matsui, K., Sorai, K., Sato, Y., et al. 2020, , 72, 90. doi:10.1093/pasj/psaa084
[Muraoka et al.(2016)]muraoka2016
Muraoka, K., Sorai, K., Kuno, N., et al. 2016, , 68, 89. doi:10.1093/pasj/psw080
[Muraoka et al.(2020)]muraoka2020
Muraoka, K., Kondo, H., Tokuda, K., et al. 2020, , 903, 94. doi:10.3847/1538-4357/abb822
[Ohno et al.(2023)]ohno2023
Ohno, T., Tokuda, K., Konishi, A., et al. 2023, , 949, 63. doi:10.3847/1538-4357/accadb
[Oka et al.(2001)]oka2001
Oka, T., Hasegawa, T., Sato, F., et al. 2001, , 562, 348. doi:10.1086/322976
[Onodera et al.(2010)]onodera2010
Onodera, S., Kuno, N., Tosaki, T., et al. 2010, , 722, L127. doi:10.1088/2041-8205/722/2/L127
[Onodera et al.(2012)]onodera2012
Onodera, S., Kuno, N., Tosaki, T., et al. 2012, , 64, 133. doi:10.1093/pasj/64.6.133
[Paglione et al.(2001)]paglione2001
Paglione, T. A. D., Wall, W. F., Young, J. S., et al. 2001, , 135, 183. doi:10.1086/321785
[Pety et al.(2013)]pety2013
Pety, J., Schinnerer, E., Leroy, A. K., et al. 2013, , 779, 43. doi:10.1088/0004-637X/779/1/43
[Planck Collaboration et al.(2016)]planck2016
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, , 594, A28. doi:10.1051/0004-6361/201525819
[Robitaille & Bressert(2012)]robitaille2012
Robitaille, T., & Bressert, E. 2012, APLpy: Astronomical Plotting Library in Python, ascl:1208.017
[Rosolowsky(2005)]rosolowsky2005
Rosolowsky, E. 2005, , 117, 1403. doi:10.1086/497582
[Rosolowsky et al.(2021)]rosolowsky2021
Rosolowsky, E., Hughes, A., Leroy, A. K., et al. 2021, , 502, 1218. doi:10.1093/mnras/stab085
[Rosolowsky et al.(2007)]rosolowsky2007
Rosolowsky, E., Keto, E., Matsushita, S., et al. 2007, , 661, 830. doi:10.1086/516621
[Rosolowsky & Leroy(2006)]rosolowsky2006
Rosolowsky, E. & Leroy, A. 2006, , 118, 590. doi:10.1086/502982
[Rosolowsky et al.(2008)]rosolowsky2008
Rosolowsky, E. W., Pineda, J. E., Kauffmann, J., et al. 2008, , 679, 1338. doi:10.1086/587685
[Sanders et al.(1985)]sanders1985
Sanders, D. B., Scoville, N. Z., & Solomon, P. M. 1985, , 289, 373. doi:10.1086/162897
[Sano et al.(2021)]sano2021
Sano, H., Tsuge, K., Tokuda, K., et al. 2021, , 73, S62. doi:10.1093/pasj/psaa045
[Schinnerer et al.(2013)]schinnerer2013
Schinnerer, E., Meidt, S. E., Pety, J., et al. 2013, , 779, 42. doi:10.1088/0004-637X/779/1/42
[Skrutskie et al.(2006)]skrutskie2006
Skrutskie, M. F., Cutri, R. M., Stiening, R., et al. 2006, , 131, 1163. doi:10.1086/498708
[Solomon et al.(1987)]solomon1987
Solomon, P. M., Rivolo, A. R., Barrett, J., et al. 1987, , 319, 730. doi:10.1086/165493
[Thilker et al.(2000)]thilker2000
Thilker, D. A., Braun, R., & Walterbos, R. A. M. 2000, , 120, 3070. doi:10.1086/316852
[Tokuda et al.(2021)]tokuda2021
Tokuda, K., Kondo, H., Ohno, T., et al. 2021, , 922, 171. doi:10.3847/1538-4357/ac1ff4
[Tokuda et al.(2020)]tokuda2020
Tokuda, K., Muraoka, K., Kondo, H., et al. 2020, , 896, 36. doi:10.3847/1538-4357/ab8ad3
[Topal(2020)]topal2020
Topal, S. 2020, , 495, 2682. doi:10.1093/mnras/staa1146
[Tosaki et al.(2011)]tosaki2011
Tosaki, T., Kuno, N., Onodera, S. M., et al. 2011, , 63, 1171. doi:10.1093/pasj/63.6.1171
[Watanabe et al.(2011)]watanabe2011
Watanabe, Y., Sorai, K., Kuno, N., et al. 2011, , 411, 1409. doi:10.1111/j.1365-2966.2010.17746.x
[Wong et al.(2017)]wong2017
Wong, T., Hughes, A., Tokuda, K., et al. 2017, , 850, 139. doi:10.3847/1538-4357/aa9333
[Wong et al.(2019)]wong2019
Wong, T., Hughes, A., Tokuda, K., et al. 2019, , 885, 50. doi:10.3847/1538-4357/ab46ba
[Wu et al.(2005)]wu2005
Wu, H., Cao, C., Hao, C.-N., et al. 2005, , 632, L79. doi:10.1086/497961
[Yajima et al.(2019)]yajima2019
Yajima, Y., Sorai, K., Kuno, N., et al. 2019, , 71, S13. doi:10.1093/pasj/psz022
[Yajima et al.(2021)]yajima2021
Yajima, Y., Sorai, K., Miyamoto, Y., et al. 2021, , 73, 257. doi:10.1093/pasj/psaa119
[Yoda et al.(2010)]yoda2010
Yoda, T., Handa, T., Kohno, K., et al. 2010, , 62, 1277. doi:10.1093/pasj/62.5.1277
|
http://arxiv.org/abs/2307.03136v1
|
20230706170643
|
Multiplicative Updates for Online Convex Optimization over Symmetric Cones
|
[
"Ilayda Canyakmaz",
"Wayne Lin",
"Georgios Piliouras",
"Antonios Varvitsiotis"
] |
math.OC
|
[
"math.OC",
"cs.LG",
"stat.ML"
] |
|
http://arxiv.org/abs/2307.00489v1
|
20230702063012
|
Theoretical Limits of Energy Extraction in Active Fluids
|
[
"Shahriar Shadkhoo",
"Matt Thomson"
] |
cond-mat.soft
|
[
"cond-mat.soft",
"cond-mat.other",
"cond-mat.stat-mech",
"physics.flu-dyn"
] |
Systems operating out of equilibrium can be maneuvered to perform mechanical work through feedback loops of information-to-energy conversion. Active materials form a class of far-from-equilibrium systems that are driven internally and have the capability of self-organization, which can be utilized to perform mechanical work. In this Letter we examine limits of work extraction from an active system by considering the transport of a particle coupled to the density field gradient of an active viscoelastic medium. The active energy is provided by a gliding activation front that converts a passive to an active medium. We demonstrate that the maximum extracted power undergoes a discontinuous transition to zero as the ratio of the activity and the activation rate exceeds a critical value. Our model provides a framework for designing efficient mechanisms of transport in synthetic materials.
shahriar@caltech.edu
mthomson@caltech.edu
Division of Bioengineering, California Institute of Technology, Pasadena, CA, 91125
Theoretical Limits of Energy Extraction in Active Fluids
Matt Thomson
========================================================
For more than a century the problem of energy extraction from non-equilibrium systems, e.g. heat engines, has been a subject of extensive research, and has had significant implications in fundamental physics as well as engineering. Thought experiments like Maxwell's demon and the generalizations thereof have established deep connections between information gain and work extraction <cit.>. Experimental and theoretical studies have also advanced our understanding of the optimal protocols of energy extraction <cit.>. The role of feedback loops in the process of information-to-energy conversion was highlighted in an early instantiation of Maxwell's demon suggested by Szilárd <cit.>. The underlying principle of the feedback loop is that the demon gains information from the current state of a system, and drives the system away from equilibrium in a direction that performs work.
Active materials constitute a class of far-from-equilibrium systems that are driven internally via injection of energy at microscopic scales—a commonplace phenomenon in biology. In spite of its prevalence, the mechanisms of energy propagation and conversion in active systems are yet to be elucidated. Unraveling such mechanisms would provide insight into the efficiency of biological pathways and the barriers that impede the conversion of released energy from reaction products into mechanical work. Recent experimental investigations focused on active cytoskeletal materials estimate the efficiency of emergent flows to be approximately one-billionth of the total system's energy <cit.>. Besides natural realizations, synthetic active systems provide minimal test beds for programming novel states of matter that exhibit self-organization <cit.>. Harnessing the power of self-organization in such systems and designing optimal control protocols provide the opportunity of engineering the dynamics of active systems <cit.>.
In this Letter we propose a model of an active system with a built-in mechanical feedback which can be utilized to extract energy; see Fig. (<ref>a). The feedback mechanism operates on the basis of activity-induced contraction in viscoelastic active materials, which in turn amplifies local activity and the resultant contraction (rate). Coupling a second system to the active medium, we learn about the limits of energy extraction by monitoring the dynamics of the second system. One of the omnipresent processes in nature, and an immediate manifestation of the conversion of active energy to mechanical work, is the transport of an external particle <cit.>.
Transport phenomena require net (thermodynamic) forces like pressure gradients. In isolated passive systems, transport of the constituents is driven by minimization of the free energy towards the thermodynamic equilibrium <cit.>. Active materials generate bulk energy at microscopic scales, that can be harnessed to perform work at coarse-grained scales in the presence of a stress-propagating mechanism <cit.>. Important examples of active transport in biology include cytoplasmic cargo transport of proteins and ions <cit.>. The study of active transport has been mostly—with a few exceptions e.g. in <cit.>—limited to driven and self-propelled particles in a passive medium against thermodynamic forces <cit.>. An alternative scenario is for the particles to be embedded and coupled to a reservoir of active constituents that is maintained out of equilibrium and can provide the transport energy.
Here we aim to understand the properties of the active system (and control protocols) that bound the extraction of energy, by analyzing the transport of a particle coupled to an active viscoelastic scalar material. The energy and direction of transport are provided by a gliding activation front (AF) that activates an otherwise passive medium, like a combine harvester machine. The progression of the AF leaves behind a trail of activated material which evolves under its own stress, and produces a stress gradient that drives the particle; see Fig. (<ref>). Our model involves several parameters, among which we choose the following ones to study: (1) a measure of activity, (2) the viscoelastic timescale, (3) the velocity of the activation front (that determines the activation rate), and (4) the coupling to the particle. The first two pertain to the active substance, whereas (3) and (4) are considered macroscopic knobs to vary without touching the underlying active mechanisms, a much preferred set of probes compared to fine-tuning of microscopic parameters.
Using combined analytical and numerical methods we explore the phase diagram of the system, and find that the upper bound on the activation rate, for which the steady transport of the particle is possible, is proportional to (i) the field-particle coupling, and (ii) the magnitude of activity; see Fig. (<ref>).
Model.—The active system we study comprises a two-component mixture of active and passive materials, the total density of which is initially uniform in space: Φ = ϕ_a + ϕ_p, where ϕ_a,p denote the densities of the active and passive components, respectively. To put our model in context we consider a mixture where the passive component is a solution of free-floating filaments and cross-linkers that assemble into an active network (gelation) upon “activation”, which entails recruiting active cross-linkers (motor proteins). Active networks of filaments can be realized naturally in biological systems, or artificially (e.g. using improved light-induced dimerization (iLID) technology <cit.>), and are driven out of equilibrium by active cross-linkers that generate active contractile stress by converting chemical to mechanical energy and walking along the filaments. The magnitude of the active stress is controlled by a parameter that depends on the density of active cross-linkers <cit.>.
Cross-linked networks are known to exhibit viscoelastic behavior <cit.>. The specific model of viscoelasticity to be adopted depends on the timescales in the phenomenology of interest. Preformed cross-linked networks are commonly modeled by Maxwell's viscoelasticity. The response of the network—under external stress—transitions smoothly from elastic- to viscous-dominated over a timescale that is determined by the link-breaking rate. Contrary to the preformed networks, the response of an actively assembling network follows the opposite chronological order, i.e. viscous to elastic, and can be captured by Kelvin-Voigt model of viscoelasticity. At early times, when the connectivity of the network is low, the response is fluid-like. Over a timescale determined by the rate of cross-link recruitment, the elasticity (which originates from the rigidity of cross-linkers) is enhanced with increasing connectivity. Fractional Kelvin-Voigt viscoelasticity has also been observed in cytoplasmic mechanics <cit.>.
In our model of artificial active networks, the process of activation, induced externally, occurs instantaneously by a moving front. Specifically, activation is defined as a process that endows a fraction of the passive filament-motor mixtures with the capacity of cross-linking and forming an active gel of density Δϕ_a = -Δϕ_p = ϕ_i, where ϕ_i is the initial density of the activated component. The gelation process following the activation takes place over the viscoelastic timescale discussed above. Contrary to the activation process, the opposite process of unbinding is assumed to occur over excessively large timescales; hence activation is considered practically irreversible.
In addition to the density field, the active component is characterized by a displacement field u_a(x,t). In this article we use Euler's notation for derivatives: D_t ≡ D/Dt, the material time derivative, and ∂_q ≡ ∂/∂q, the partial derivative with respect to a variable q. From the displacement field we derive the velocity and strain fields, v_a = ∂_t u_a and ε = ∇ u_a. The continuity equation for a compressible field is as follows: ∂_t ϕ_a + ∇· (ϕ_a v_a) = 0. The conservation of momentum and the constitutive equation for the Kelvin-Voigt viscoelastic network read:
D_t(ϕ_a v_a) = ∇·σ + f,
σ = η(τ_K^-1 + D_t) ε.
In the above equations ε and σ represent the strain and stress tensor fields, respectively. The parameter τ_K is the Kelvin timescale. In the spring-dashpot representation, η and κ = η/τ_K are the viscosity of the fluid component and the stiffness of the solid component, respectively.
The body force f contains contributions from the internal stress and a drag force: f = ∇·σ^int - γ v_a. The internal stress is composed of active and passive parts. The former is generated by walking motor proteins and produces contractile stress <cit.>. The passive contribution originates from the collisions of the filaments and is repulsive, which guarantees stability against the collapse of the network under contractile stress. Using a virial-like expansion for the stress in terms of the active density field we get:
σ^int = σ^int(ϕ_a) 𝕀 = -α_1 ϕ_a 𝕀 + α_2 ϕ_a^2 𝕀 + 𝒪(ϕ_a^3),
where α_1,α_2>0, are the constants that depend on active and collision interactions. To the second order in density field, the stress can be written in the following form: σ^int=-α_1ϕ_a (1-ϕ_a/ϕ_0). Since we are interested in isotropic materials and one-dimensional transport, the tensors are reduced to scalars.
The signature of activity encoded in the stress function can be seen by noting that the contractile stress increases the density, which consequently leads to progressively larger contractile stress. Therefore, a feedback loop forms against the conventional equilibrating response in passive systems—except for gravitational systems. The positive feedback holds for ϕ_i≤ϕ_a≤ϕ_0/2. For larger densities the effect of passive stress dominates the contractile part until the equilibrium is reached at ϕ_a=ϕ_e. At equilibrium, where the time derivatives vanish, the internal stress σ^int(ϕ_e) equals the long-term elastic stress with stiffness η/τ_K, corresponding to strain ε_e. Using the relation between the strain and density, ϕ_a(1+ε)=ϕ_i, we can find the equilibrium density and strain. The former satisfies: ϕ_e^3 - ϕ_0 ϕ_e^2 + (ηϕ_0/(α_1τ_K))ϕ_e - ηϕ_0ϕ_i/(α_1τ_K) = 0. We define the dimensionless parameter μ=α_1ϕ_0τ_K/η. Clearly in the absence of the elastic contribution (η/τ_K=0) the equilibrium density equals ϕ_e=ϕ_0=α_1/α_2. We obtain the asymptotic solution for the equilibrium density in terms of μ. In the limit of μ→∞ the asymptotic solution reads: ϕ_e = ϕ_0 - (ϕ_0-ϕ_i) μ^-1 + 𝒪(μ^-2). In the opposite limit μ→ 0, we get the asymptotic solution: ϕ_e=ϕ_i+(ϕ_i^2/ϕ_0)(1-ϕ_i/ϕ_0)μ+𝒪(μ^2). The equilibrium density ϕ_e increases from ϕ_i for μ→0, to ϕ_0 for μ→∞. We will show below that the dependence of ϕ_e on τ_K is crucial to our interpretation of the constraints on power extraction in terms of the viscoelastic timescale.
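As a quick numerical check of these limits, the cubic can be solved directly. The following Python sketch is illustrative only: the parameter values are assumptions, and the placement of τ_K follows the reconstruction above (the linear coefficient equals ϕ_0^2/μ):

import numpy as np

def equilibrium_density(phi_i, phi_0, mu):
    # Real root of phi^3 - phi_0*phi^2 + (phi_0^2/mu)*phi - phi_0^2*phi_i/mu = 0,
    # with mu = alpha_1*phi_0*tau_K/eta the dimensionless Kelvin timescale.
    roots = np.roots([1.0, -phi_0, phi_0**2 / mu, -phi_0**2 * phi_i / mu])
    real = roots[np.abs(roots.imag) < 1e-10].real
    # Take the first equilibrium reached from the initial density phi_i.
    return real[(real >= phi_i - 1e-9) & (real <= phi_0 + 1e-9)].min()

for mu in (1e-3, 1.0, 1e3):
    print(mu, equilibrium_density(phi_i=0.2, phi_0=1.0, mu=mu))

Small μ returns ϕ_e ≈ ϕ_i, while large μ returns ϕ_e ≈ ϕ_0, reproducing the two asymptotic branches.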
Field-Particle Interaction.—The total stress experienced by a particle embedded in the solution depends on its interaction with both active and passive components. In the continuum description of passive and active components the stress fields—obtained by coarse-graining of local isotropic force dipoles—are isotropic tensors, and reduce to scalar pressures. Using virial expansion we express passive and active pressures in powers of the corresponding densities. To linear order the total pressure reads P=P_a+P_p, where P_a,p=g_a,pϕ_a,p, and g_a,p are the coupling constants. The driving force of the particle is provided by the pressure gradient. Given ∇ϕ_a=-∇ϕ_p, we can write the pressure gradient as ∇ P=(g_a-g_p)∇ϕ_a=g∇ϕ_a, where g=g_a-g_p. The signs of g_a,p depend on the specific type of interactions between the active/passive fields and the particle. The initial condition turns out to play an important role in the dynamics of the particle. We assume that the activation front starts off at the same position as the particle, i.e. X_AF(t=0) = X_p(t=0). With this initial condition, in order for the particle to be dragged by the active contractile network in the same direction as the activation front, the phenomenological coarse-grained coupling constants must satisfy g=g_a - g_p >0.
The motion of the particle is resisted by a drag force with contributions from both passive and active components, each proportional to their respective densities and relative velocities. Using the fact that the total density remains constant, the drag force can be shown to be equal to -Γ V_p, with Γ the drag coefficient. With M and V_p denoting the particle's mass and velocity, the equation of motion reads:
M V̇_p + Γ V_p = g_0∇ϕ_a.
Here, V̇_p is the particle's acceleration; g_0 is the redefined coupling constant g_0=gℓ, where ℓ is the particle's linear dimension, assumed to be smaller than all other length scales in the system. The full theory of the coupled fields and particle must respect momentum conservation. Nonetheless, here we use the approximation that the coupling of the particle to the field is negligible compared to the field's internal interactions: g_0≪α_1. Therefore, we assume that the particle is subject to a time-dependent potential induced by the active field evolving according to the viscoelastic response to the active stress, but not affected by the particle. The potential field to which the particle is exposed is obtained by noting that the force satisfies g_0∇ϕ_a = -∇U; thus U = -g_0∫^x dx′ ∇ϕ_a = -g_0ϕ_a is the potential field.
Active Transport.—The hypothesis we follow is that by continuously activating the material in a specific direction, the extracted active energy that appears in the form of contractile stress propagates down the length of the activated region and provides the required power to move the particle against the viscous background. We seek the answer to two major questions: (1) for specific field parameters, what is the maximum amount of power that can be extracted from the active medium? (2) what properties of the active material determine the feasibility and the efficiency of the transport? We attack these questions by exploring the phase diagrams of the particle's transport velocity in terms of the activity α_1, the Kelvin timescale τ_K, and the coupling to the particle g_0. The extracted power is related to the transport velocity in a monotonically increasing fashion; thus it is maximized at the same point in the phase diagram as the velocity.
The short-term dynamics of the particle is a transient state during which the inertia dominates the drag force. By the end of the transient stage the particle reaches its terminal velocity, which can be obtained using the force balance in Eq. (<ref>): V_p=Γ^-1 g_0∇ϕ_a(X_p), in which X_p is the position of the particle. The motion of the particle with a constant velocity is referred to as the steady state. The magnitude of the force required for transporting the particle with constant velocity is equal to the drag force F_drag=-Γ V_p. Therefore, the transport power is the negative of the power dissipated by the drag force, and is given by P_t=-F_drag V_p=Γ V_p^2. Below we derive the conditions required for the transport to be possible, and find the dependence of the extracted energy on the field's parameters in the transport regime.
Solving the equations of motion is analytically intractable. Therefore we resort to numerical integration, the results of which can be seen in Figs. (<ref>) & (<ref>). Before discussing the results, let us first analyze the dynamics of the coupled system. A discretized picture can help us gain intuition into the dynamics. Upon moving the activation front, the activated segments start contracting and their densities increase. The time difference between the activation of different segments creates a gradient in the density. The density gradient, and the force, at the boundary where the particle is initially located is proportional to the density itself. The particle is accelerated as a result of coupling to the density field, while the drag force decelerates it until a terminal velocity is reached. Denoting the position and velocity of the field's boundary by x_b and v_b, in the frame of reference co-moving with v_b, the potential near the boundary of the active region is a step function of height U_b = - g_0ϕ_b(t)Θ(x-x_b(t)), corresponding to a delta-function force f=g_0ϕ_b(t)δ(x-x_b(t)). It is important to note that the force is a delta-function only in the immediate vicinity of the boundary. For x>x_b, the force turns negative because of the negative gradient of the density, pushing the particle back towards the boundary. The negative gradient of the density is due to the retarded activation: ∇ϕ_b = v_AF^-1 D_t ϕ_b. The material time derivative, hence the gradient, declines in time and approaches zero in the long time limit.
Transport Phase Transition.—The active energy generated at the AF is transmitted across the system to move the particle via a traveling stress/strain pulse of density modulation (see Fig. (<ref>)). The propagation of energy imposes constraints on the maximum velocity of the AF. We speculate that there exists a maximum v_AF beyond which the propagation of the energy from the AF to the particle is infeasible; hence no transport. In order to analyze the failure of transport we note that for a particle trapped in the potential well we have V_p = v_b, and X_p(t) = x_b(t). Using force balance in the steady state, the condition for the particle to stay trapped in the potential well is g_0ϕ_b≥Γ V_p for all times. Equivalently, min_t(ϕ_b/v_b) ≥Γ/g_0, where the l.h.s. of the inequality is the minimum of ϕ_b/v_b with respect to time. Both ϕ_b and v_b increase in time, but in a fashion that the ratio decays in time and the minimum is reached in the long-time limit. A trivial upper bound (weak condition) on v_AF is found by noting that (i) ϕ_b≥ϕ_i and (ii) v_b≤ v_AF. Thus the weak condition for persistent transport requires the following inequality to hold: Γ/g_0≤ϕ_i/v_AF; or v_AF ≤ g_0ϕ_i/Γ.
A stricter upper bound can be found by noting that in the long time limit, i.e. t≫τ_K and t≫ M/Γ, the boundary density and velocity read lim_t→∞(ϕ_b)=ϕ_e, and lim_t→∞(v_b)=β_∞ v_AF, where β_∞<1 is a dimensionless parameter that increases with the viscoelastic timescale τ_K. We can find v_b in terms of v_AF using some simplifying assumptions. To that end we transform the coordinate system to the frame which is co-moving with the activation front: z = x - v_AF t; we denote the velocities by w = v - v_AF. In the co-moving frame, the boundary of the active region is ejected with velocity -v_AF at time t=0, namely w_b(t=0) = -v_AF. Its position is also denoted by z_b(t). The boundary velocity at time t is given by w_b(t) = -v_AF + (1/2)∫_0^z_b(t) dz D_t ε(z,t). Replacing z = v_AF t, we get β(t) = -(1/2)ε(t). In the limit of t≫τ_K, and zero drag force γ→ 0, the strain reaches its terminal value: β_∞≈ (1/2)(1 - ϕ_i/ϕ_e), where ϕ_e is the equilibrium density, and also the terminal value of the density at the boundary under free boundary condition. As such, β_∞ is an increasing function of τ_K.
Using the above relations we find an inequality that determines the upper bound on the velocity of the activation front for which the transport is feasible:
lim_t→∞ ϕ_b/v_b = 2ϕ_e/(v_AF(1-ϕ_i/ϕ_e)) ≥ Γ/g_0,
v_AF ≤ v_AF^th = (2g_0/Γ) ϕ_e/(1-ϕ_i/ϕ_e).
In the above equation, the threshold velocity scales linearly with g_0, in agreement with Fig. (<ref>b).
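Building on the numerical sketch above (again with illustrative values for g_0, Γ, ϕ_i and ϕ_0, which are assumptions), the threshold can be tabulated as a function of μ ∝ τ_K:

def v_threshold(g0, Gamma, phi_i, phi_0, mu):
    # v_AF^th = (2*g0/Gamma) * phi_e / (1 - phi_i/phi_e)
    phi_e = equilibrium_density(phi_i, phi_0, mu)
    return 2.0 * g0 / Gamma * phi_e / (1.0 - phi_i / phi_e)

for mu in (0.01, 0.1, 1.0, 10.0, 100.0):
    print(mu, v_threshold(g0=1.0, Gamma=1.0, phi_i=0.2, phi_0=1.0, mu=mu))

The printed values fall steeply at small μ and flatten at large μ, matching the two limits derived next.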
Furthermore, the equilibrium density, as derived previously, satisfies a cubic equation with coefficients proportional to μ^-1, where μ = ϕ_0α_1τ_K/η. As such ϕ_e is an increasing function of τ_K. Thus the value of v_AF^th is a decreasing function of τ_K for small enough τ_K, and diverges like v_AF^th ∼ 1/τ_K as τ_K→ 0:
lim_τ_K→0 v_AF^th = (2g_0α_1^2/(Γα_2η)) · ϕ_i/(ϕ_0/ϕ_i - 1) · 1/τ_K,
where ϕ_0 = α_1/α_2. In the limit of large activity α_1≫α_2ϕ_i, the equilibrium density equals ϕ_e=ϕ_0=α_1/α_2, thus,
lim_τ_K→∞ v_AF^th = 2g_0α_1/(Γα_2),
which explains the linear relation of the phase boundary in Fig. (<ref>a) for large enough α_1. For smaller values of α_1 the sharp transition becomes blurry.
The above results clearly show that the limit of τ_K→0, where the response transitions to elastic immediately after activation, allows for infinitely large velocities of activation. In actuality, the divergence is regularized by a microscopic timescale. It is however important to note that τ_K determines the fluid-to-elastic transition, not the active-to-passive state. They happen to be related because in the fluid state the activity is more easily manifested. But the elasticity per se is not playing a role in the transport of the particle. As a matter of fact, the importance of activity is reflected in the density gradient, not the degree of elasticity.
The transport power can be obtained using: P_t=Γ V_p^2 = Γ v_b^2=Γβ_∞^2 v_AF^2=g_0^2ϕ_e^2/Γ. The power is maximized for max(ϕ_e)=ϕ_0=α_1/α_2, which is achieved for τ_K→∞. Therefore the maximum power extracted equals:
max(P_t) = α_1^2 g_0^2/(Γα_2^2).
The above result seems counter-intuitive at first glance: while the maximum threshold velocity is obtained for τ_K→0, the maximum power is extracted for τ_K→∞. The reason lies in the dependence of the boundary density on τ_K. For τ_K→0, the equilibrium density is close to the initial density. Thus the force on the particle and its velocity remain small compared to v_AF. Therefore, even though the maximum threshold velocity diverges, the particle's velocity is smaller by a factor β_∞ that spoils the power generated in the active region.
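The same toy parameters illustrate this trend in the extracted power; in this sketch the threshold-power relation P_t = g_0^2ϕ_e^2/Γ is evaluated as a function of μ:

def transport_power(g0, Gamma, phi_i, phi_0, mu):
    # P_t = g0^2 * phi_e^2 / Gamma; saturates at alpha_1^2*g0^2/(Gamma*alpha_2^2)
    # as mu (i.e. tau_K) grows and phi_e approaches phi_0 = alpha_1/alpha_2.
    phi_e = equilibrium_density(phi_i, phi_0, mu)
    return g0**2 * phi_e**2 / Gamma

for mu in (0.01, 1.0, 100.0):
    print(mu, transport_power(g0=1.0, Gamma=1.0, phi_i=0.2, phi_0=1.0, mu=mu))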
Discussion.—The phase diagrams portrayed in Fig. (<ref>) illustrate important results regarding the dependence of the transport coefficient on the field's parameters, as well as the external coupling to the particle. In Fig. (<ref>a), the dependence of the transport coefficient on activity shows that for α_1ϕ_i/α_2≳ 3, the transport coefficient undergoes a sharp transition. The separation between the two regimes appears to follow a straight line, namely v_AF ∝ α_1 g_0. For small activities α_1≲ 3, the transition appears to be smooth, suggesting that there exists a critical point that separates the regions of sharp transition and cross-over. The existence of a truly second-order critical point is beyond the scope of this paper.
Using approximations in the limits of τ_K→0,∞ we find that the maximum v_AF for which the transport is feasible diverges like 1/τ_K for small Kelvin timescales, and approaches a constant for τ_K→∞. The active power that is generated in the bulk of the active region is partitioned into the dissipated power along the active region and the transport power. Note that the active energy is generated not only at the AF, but everywhere along the active region. The energy conversion pathway of the transport in an active system is as follows: (1) the active medium converts the microscopic kinetic energy to active stress over a macroscopic timescale, the Kelvin timescale; (2) the stress of the medium pulls on the particle and performs work against the drag force. The transport velocity is bounded by the first stage. Namely, the rate of energy flow from micro- to macro-scale, which converts activity to macroscopic stress, should be large enough to compensate for the dissipation.
This work was supported by Packard Foundation, Rosen Center for Bioengineering, Moore Foundation, and Heritage Medical Research Institute. We would like to thank Foundational Questions Institute and Fetzer Franklin Fund through FQXi 1816 for funding the research.
|
http://arxiv.org/abs/2307.02979v2
|
20230706132642
|
Relativistic mean field model for ultra-compact low mass neutron star of HESS J1731-347
|
[
"Sebastian Kubis",
"Włodzimierz Wójcik",
"David Alvarez Castillo",
"Noemi Zabari"
] |
nucl-th
|
[
"nucl-th",
"astro-ph.HE"
] | |
http://arxiv.org/abs/2307.01152v1
|
20230703164746
|
Conditional partial exchangeability: a probabilistic framework for multi-view clustering
|
[
"Beatrice Franzolini",
"Maria De Iorio",
"Johan Eriksson"
] |
stat.ME
|
[
"stat.ME",
"math.ST",
"stat.TH"
] |
|
http://arxiv.org/abs/2307.02620v2
|
20230705194803
|
Learning when to observe: A frugal reinforcement learning framework for a high-cost world
|
[
"Colin Bellinger",
"Mark Crowley",
"Isaac Tamblyn"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"68T01",
"I.2.0"
] |
C. Bellinger, et al.
National Research Council of Canada,
Ottawa, Canada
colin.bellinger@nrc-cnrc.gc.ca
Department of Physics, University of Ottawa,
Ottawa, Canada
isaac.tamblyn@uottawa.ca
Vector Institute for Artificial Intelligence,
Toronto ON, Canada
Department of Electrical and Computer Engineering,
University of Waterloo,
Waterloo, Canada
mark.crowley@uwaterloo.ca
Learning when to observe: A frugal reinforcement learning framework for a high-cost world
Colin Bellinger10000-0002-3567-7834 Isaac Tamblyn2,31111-2222-3333-4444 Mark Crowley40000-0003-3921-4762
============================================================================================================
Reinforcement learning (RL) has been shown to learn sophisticated control policies for complex tasks including games, robotics, heating and cooling systems and text generation. The action-perception cycle in RL, however, generally assumes that a measurement of the state of the environment is available at each time step without a cost. In applications such as materials design, deep-sea and planetary robot exploration and medicine, however, there can be a high cost associated with measuring, or even approximating, the state of the environment. In this paper, we survey the recently growing literature that adopts the perspective that an RL agent might not need, or even want, a costly measurement at each time step. Within this context, we propose the Deep Dynamic Multi-Step Observationless Agent (DMSOA), contrast it with the literature and empirically evaluate it on OpenAI gym and Atari Pong environments. Our results show that DMSOA learns a better policy with fewer decision steps and measurements than the considered alternative from the literature. The corresponding code is available at: <https://github.com/cbellinger27/Learning-when-to-observe-in-RL>.
§ INTRODUCTION
In many applications of reinforcement learning (RL), such as materials design, computational chemistry, deep-sea and planetary robot exploration and medicine <cit.>, there is a high cost associated with measuring, or even approximating, the state of the environment. Thus, the RL system as a whole faces observation costs in the environment, along with processing and decision making costs in the agent. On both sides, the costs result from a variety of factors including the use of energy, systems, and human resources. In this work, we propose the Deep Dynamic Multi-Step Observationless Agent (DMSOA), the first RL agent in its class to reduce both measurement and decision making costs.
Since standard RL agents require a large number of state-action-reward-next state interactions with the environment during policy learning and application, the measurement and decision making costs can be very high. Traditionally, these underlying costs are hidden from the agent. Indeed, little consideration has been given to the idea that the agent might not need, or even want, a potentially costly observation at each time step. In the real world, however, agents (animal or artificial) are limited by their resources. To save time and energy, decision making associated with common or predictable tasks is believed to be conducted reactively or based on fast, low-resource systems. Only with deliberate cognitive intervention are the slower, resource-intensive planning systems used <cit.>.
Recently, there have been a number of interesting conference papers, workshop discussions and theses discussing how to address this problem in RL <cit.>. The general approach is to augment RL by a) assigning an intrinsic cost to measure the state of the environment, and b) providing the agent with the flexibility to decide if the next state should be measured. Together, these provide a mechanism and a learning signal to encourage the agent to reduce its intrinsic measurement costs relative to the explicit control rewards it receives.
When the agent opts not to measure the state of the environment, it must make its next control decision based on stale information or an estimate. Thus, at the highest level, this constitutes a partially observable Markov decision process (POMDP). Learning a POMDP, however, is much more difficult than learning a Markov decision process (MDP) with RL <cit.>. When designed as shown in Fig. <ref>, the agent's experience is composed of fully observable measurements and partially observable estimates of the state of the environment. Thus, the problem is related to mixed observable Markov decision processes (MOMDPs), which are an easier sub-class of POMDPs <cit.>. The authors in <cit.> denoted this class of RL problem as action-contingent, noiselessly observable Markov decision processes (AC-NOMDPs). Distinct from an MOMDP, action-contingent in AC-NOMDP relates to the fact that the agent explicitly chooses between measuring and not measuring the environment. “Noiselessly observable” relates to the fact that when the agent decides to measure at a cost, the state is fully observable. Although previous works have used different terms to refer to this class of problem, we believe that AC-NOMDP is the most descriptive of the underlying dynamics and use it throughout this paper.
Recently, <cit.> provided a theoretical analysis of the advantages of AC-NOMDP over a general POMDP formulation and found a significant improvement in efficiency for RL with explicit observation costs and actions. All other previous works have carried out limited empirical evaluations in which the proposed algorithm is compared to a baseline MDP <cit.>. Moreover, the previous analyses primarily relied on just a few, or even single, experiments on OpenAI gym classic control environments and grid-worlds <cit.>.
In this work, we compare DMSOA to the one-step memory-based observationless agent (OSMBOA) recently proposed in <cit.>. Our contribution serves to expand the state-of-the-art in AC-NOMDP algorithms and the understanding of how different classes of algorithms impact the observation behaviour. OSMBOA was selected for its demonstrated effectiveness and ease of use. At each time step, OSMBOA selects a control action and makes a decision about measuring the next state of the environment. If no measurement is made, the agent's next control action is selected based on its fixed-size internal memory of the last measured state(s). In contrast, DMSOA selects a control action and the number of times to apply the action before measuring the next state. Therefore, DMSOA learns to reduce both observation costs and decision making costs by dynamically applying its control action multiple times.
To facilitate fair comparison, we implement both agents as double DQN <cit.> with prioritized experience replay <cit.>. We evaluate the agents in terms of the accumulated extrinsic control reward and by the reduction in the number of observations and decision steps made on Atari Pong and OpenAI gym <cit.>. The results show that the proposed method learns a better control policy and requires fewer measurements of the environment and fewer decision steps. Moreover, DMSOA has less variance across independent training runs.
The remainder of the paper is organized as follows. In the next section, Section <ref>, we outline the related work. Section <ref> formalizes the AC-NOMDP problem and Section <ref> presents the proposed algorithms. Section <ref> provides the experimental setup. The results are shown in Section <ref> and discussed in Section <ref>. The environmental impact of this work is described in Section <ref> and our concluding remarks are in Section <ref>.
§ RELATED WORK
This work fits into a small but growing sub-area of RL in which observations are optional at each time step and have an explicit cost to the agent when they are made. In the subsection immediately below, we provide an overview of methods recently applied to AC-NOMDP. Following that, we discuss the literature directly related to the proposed DMSOA algorithm.
§.§ AC-NOMDP Methods
In the existing work, the authors in <cit.> proposed tabular Q-learning-based algorithms for AC-NOMDPs and <cit.> proposed deep RL based methods. In <cit.>, the authors modify TRPO, and in <cit.>, actor-critic frameworks with recurrent neural networks are used. In <cit.>, the authors provide a wrapper class that modifies the underlying environment by expanding the observation and action spaces to allow any off-the-shelf deep RL algorithm to work in the AC-NOMDP setting.
Through our analysis of the literature, we have identified 4 key questions that are addressed when developing algorithms for AC-NOMDPs. These are: 1) the mechanism by which the agent expresses its desire not to measure, 2) how the observation is supplemented when no measurement is made, 3) how the agent is encouraged to reduce its reliance on costly measurements, and 4) how the agent is constructed.
The most common way to handle question 1) is by expanding the action space. In the case of discrete actions, <cit.> expanded the action space to action tuples: ⟨control actions⟩×⟨measure, don't measure⟩. Alternatively, in <cit.>, the agent specifies the control action plus a sample purity value q ∈ℝ^1, where a larger q triggers a less accurate measurement with lower associated costs.
With respect to question 2), when the agent does not request a fresh measurement in <cit.>, the environment sends a Null state observation or an observation composed of zeros. In <cit.>, the agent uses an internal statistical model to estimate the next state, and in <cit.> the agents utilize deep recurrent networks for estimating belief states and encoded states, respectively. In <cit.>, the agent utilizes a fixed-size memory of recent measurements when no measurement is made. To reduce partial observability, each observation is augmented with a flag indicating whether or not it is the result of a fresh measurement of the environment. Since the agent in <cit.> adjusts the noise level rather than turning on and off measurements, it makes its next action selection purely based on the noisy measurement returned.
Question 3) relates to the rewards structure. This is generally divided into intrinsic rewards, which are used to encourage the agent to reduce its reliance on costly measurements and extrinsic rewards that push the agent to achieve the control objective. At each time step in <cit.>, an intrinsic cost is subtracted from the extrinsic reward if the agent measures the state. Alternatively, in <cit.> a positive intrinsic reward is added when the agent foregoes a measurement. A critical point that remains unclear in the literature is how to acquire the extrinsic reward when no measurement is made. In most cases, this is simply assumed to be available. We argue that if no measurement is made, the extrinsic reward cannot be known. As a result, in this work only the intrinsic portion of the reward is provided when no measurement is made.
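To make these design questions concrete, the following minimal Python sketch shows one way the augmentation could look. It is not the wrapper from any of the cited works: the class name, the intrinsic bonus value and the zeroed-observation choice are assumptions, and the classic four-tuple gym step API is assumed:

import gym
import numpy as np
from gym import spaces

class MeasureOrSkipWrapper(gym.Wrapper):
    # Actions become (control action, measure flag). When measure = 0, the agent
    # receives a zeroed observation and only the intrinsic bonus, since the
    # extrinsic reward cannot be known without a measurement.
    def __init__(self, env, intrinsic_bonus=1.1):
        super().__init__(env)
        self.action_space = spaces.Tuple((env.action_space, spaces.Discrete(2)))
        self.intrinsic_bonus = intrinsic_bonus

    def _augment(self, obs, measured):
        # Append a flag telling the agent whether the observation is fresh.
        return np.append(obs, float(measured)).astype(np.float32)

    def reset(self, **kwargs):
        return self._augment(self.env.reset(**kwargs), measured=1)

    def step(self, action):
        a_c, a_m = action
        obs, r_ext, done, info = self.env.step(a_c)
        if a_m == 1:
            return self._augment(obs, 1), r_ext, done, info
        return self._augment(np.zeros_like(obs), 0), self.intrinsic_bonus, done, info

Wrapping, e.g., gym.make("CartPole-v1") in this class exposes the expanded action tuples to any off-the-shelf agent.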
The final question relates to the architecture of the agent. In <cit.>, a single agent policy selects both the control action and the measurement behaviour. Alternatively, in <cit.>, separate policies are learned to determine the control actions and measurement actions. In addition, <cit.> also learn models for estimating the next state.
§.§ Works Related to DMSOA
The proposal of <cit.> is most algorithmically related to DMSOA. In it, the author demonstrated the potential of dynamic action repetition for RL with observation costs using tabular q-learning. The agent learns to forego a sequence of one or more measurements in predictable regions of the state space by repeatedly applying the same action. Their proposed method is found to require fewer measurement steps to reach the goal than the MDP baseline. However, it is only suitable for discrete state and action spaces, and was only evaluated on grid-world problems. In this work, we show how action repetition and measurement skipping can be implemented in deep RL for continuous and image-based observation spaces.
DMSOA is a method that aims to improve the efficiency of RL. To this end, it is weakly related to other techniques to improve the sampling efficiency <cit.>. The classic sample efficiency work, however, aims to reduce the overall number of training steps needed to learn a suitable policy, rather than reducing the measurement or decision steps made by the agent.
DMSOA utilizes concepts from the RL literature on dynamic frame skipping to repeatedly apply the selected control action <cit.>. In DMSOA, however, the agent's measurement skipping policy is shaped by the intrinsic reward. Moreover, unlike frame skipping applications, which are concerned with processing speed rather than measurement costs, DMSOA does not have access to privileged extrinsic control rewards from intermediate steps, which makes the problem more challenging.
In addition, DMSOA has a connection to the options framework <cit.>, and particularly dynamic options <cit.>. Similar to the options framework, at each decision point the DMSOA agent chooses to apply a sequence of actions that will transition the agent through multiple states. In DMSOA, the agent's policy selects a single control action and the number of times to apply the action in order to reduce its measurement costs while still achieving the control objective. Through the incorporation of measurement costs, the agent is able to learn how many times the action should be applied in order to arrive at the next meaningful state. For DMSOA, a meaningful state is one for which the information provided by it is greater than the cost to measure it.
§ PROBLEM SETUP
An AC-NOMDP is defined by ⟨𝒮,𝒜, 𝒪, 𝒫,ℛ_ext,ℛ_int, 𝒫_s_0, γ⟩ where S is the state space, 𝒜=⟨ A_c × A_m⟩ is the set of action tuples composed of control actions a_c and binary measurement actions a_m ∈{0,1} that specify if an observation of the next state is requested.
The observation space 𝒪 is related to 𝒮 by the observation emission function p(o|s^',a) (more on this below).
𝒫: S × A × S →ℝ denotes the transition probabilities,
ℛ_ext: S × A_c →ℝ denotes the extrinsic reward function,
ℛ_int: S × A_m →ℝ denotes the intrinsic reward function that encourages the agent to reduce the number of measurements it makes.
The r_int value is typically set to slightly outweigh the r_ext value to achieve the balance between the need for information to solve the control problem and the cost of information. If r_int is very large or very small relative to r_ext, the agent may never measure at the cost of solving the control objective, or always measure and fail to reduce the observation costs.
The function 𝒫_s_0 : 𝒮→ℝ denotes the probability distribution over the initial state and γ∈ (0, 1] is the discount factor. The observation emission function p(o|s^',a) specifies the probability of observing o∈𝒪 given the action a in state s.
Unlike the more general POMDP, the observation space in a AC-NOMDP is limited to 𝒪 = 𝒮∪{empty}, where empty is the missing measurement of the environment. In this setup, the potential probabilities of p(o|s^',a_m=1) ∈{0,1}, with p(o|s^',a_m=1)=1 if and only if o=s^' and p(o|s^',a_m=1)=0 for all o≠ s^'.
In contrast, p(o|s^',a_m=0)=1 if and only if o=empty.
The agent learns a policy π(o): O → A that maps observations to action tuples. The initial observation o_0=s_0 contains a fresh measurement of the environment. The control action selected by the agent is applied in the environment and the underlying state transitions according to 𝒫. At each time step, the reward r_t is the intrinsic reward, r_t= r_(int, t), if a_m=0; otherwise the extrinsic reward, r_t = r_(ext, t), is given in response to the state s_t and control action a_c selected by the agent. When the measurement action, a_m=1, is selected, the agent receives a fresh measurement of the underlying state o_t+1=s_t+1 of the environment. Alternatively, when a_m=0, the agent does not obtain a fresh measurement and the next action must be selected based on the agent's internal mechanism, such as an internal memory or model.
The OSMBOA agent selects one control and one measurement action, ⟨ a_c,t, a_m,t⟩ per time step, whereas the DMSOA agent moves from decision point to decision point with a frequency less than or equal to the environment's clock. At each decision point, the DMSOA agent selects a control action and the number of times to apply it, k. The state is only measured on the k^th application (e.g. ⟨ a_c, a_m=0⟩_t,...,⟨ a_c, a_m=0⟩_t+(k-1), ⟨ a_c, a_m=1⟩_t+k).
The agent's objective is to learn a policy π that maximizes the discounted expected costed return which incorporates both the intrinsic and extrinsic rewards:
J(π) = 𝔼_a_t ∼π, s_t ∼ P[ ∑_t γ^t r(s_t,a_t) ],
where γ < 1 is the discount factor.
In this work, we focus on deep Q-learning based solutions <cit.> combined with standard improvements such as n-step DQN for better convergence <cit.>, double DQN to improve stability <cit.> and prioritized replay to improve sample efficiency <cit.>. Although this work examines problems with discrete action spaces, the proposed algorithms can be modified for continuous action spaces.
§ DEEP DYNAMIC MULTI-STEP OBSERVATIONLESS AGENT
The Deep Dynamic Multi-Step Observationless Q-learning Agent (DMSOA) for noiselessly observable RL environments with explicit observation costs is presented in Figure <ref>. The framework has three key components: the control policy π_c: o → a_c that maps the observation to a control action, the measurement skipping policy π_m: o, a_c → k that maps the observation and selected control action to k∈{1,...,K}, the number of steps to apply a_c to the environment, and the action-observation scheduler. The action-observation scheduler applies the action pair (a_c, 0) k-1 times and collects the intrinsic rewards r_int from the environment. On the k^th iteration, it applies the action pair (a_c, 1), records the extrinsic reward r_ext, and passes the new observation to π_c and π_m. The extrinsic reward is equal to the control policy reward for applying a_c and arriving in the measured state after step i=k. The intrinsic reward is r_int∈{0, c}, where c is a bonus (i.e. “cost saving”) given to the agent when it chooses not to measure. To ensure the agent is motivated to omit measurements whenever possible, we set c≥ r_ext^max. The optimal setting of c will depend on the application and the requirements of the domain.
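A hedged sketch of the scheduler logic, written against a wrapper like the one sketched earlier (the function name, and the assumption that the wrapper already folds r_int and r_ext into its returned reward, are ours):

def dmsoa_decision_step(env, a_c, k):
    # One decision point: apply a_c for k environment steps, requesting a
    # measurement only on the k-th step. Intermediate steps yield the intrinsic
    # bonus; the final, measured step yields the extrinsic reward and a fresh state.
    total_reward, obs, done = 0.0, None, False
    for i in range(k):
        a_m = 1 if i == k - 1 else 0
        obs, r, done, _ = env.step((a_c, a_m))
        total_reward += r
        if done:
            break
    return obs, total_reward, done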
In this work, the policies are implemented as deep Q networks (DQN), however, other forms of policy learning could be utilized. The agent's objective is to maximize the costed rewards ∑_t=0^∞γ^t r_t. To achieve this we learn parameterized value functions Q_c(o ; θ) and Q_m(o,a; ζ) as feed-forward deep neural networks. As described above, for an m-dimensional observation space and an n-dimensional action space, Q_c is a mapping from an m-dimensional observation to an n-dimensional vector of action values. The function Q_m is a mapping from an m+1-dimensional observation-action to an K-dimensional vector of measurement values. In the case of image data, each channel is augmented with the action details. The argmax of each output indicates the action to apply and the number of times to apply it.
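A minimal PyTorch sketch of the two networks (hidden widths and the example dimensions are assumptions; the paper's exact layer sizes are not restated here):

import torch.nn as nn

def mlp(in_dim, out_dim, hidden=128):
    return nn.Sequential(
        nn.Linear(in_dim, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, out_dim))

m, n, K = 8, 4, 5             # observation dim, control actions, max repetitions (illustrative)
q_control = mlp(m, n)         # Q_c: observation -> one value per control action
q_measure = mlp(m + 1, K)     # Q_m: observation-action input -> one value per repetition count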
During training, the experience tuples (o_t, a_(c,t), a_(m,t), r_t, o_t+1) are stored in a prioritized experience replay buffer. To improve stability, target networks θ^- and ζ^- for Q_c and Q_m are copied from θ and ζ every τ steps. In addition, we use the double DQN <cit.> to improve value estimates. The target for the control network is:
Y_i^Q_c≡ r_t + γ Q_c(o_t+1, argmax_a Q_c(o_t+1, a; θ_t); θ^-_t ).
For the same update step, the target for the measurement network is:
Y_i^Q_m≡ r_t + γ Q_m((o_t+1,a_(c,t+1)), argmax_a Q_m((o_t+1,a_(c,t+1)), a; ζ_t); ζ^-_t ).
The corresponding losses are:
ℒ_i^Q_c(θ_i) = 𝔼_(o_t,a_(c,t)) ∼𝒟[(Y_i^Q_c - Q_c(o_t, a_(c,t); θ_i))^2 ],
and
ℒ_i^Q_m(ζ_i) = 𝔼_(o_t,a_(c,t), a_(m,t)) ∼𝒟[(Y_i^Q_m - Q_m((o_t, a_(c,t)), a_(m,t); ζ_i))^2 ].
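In code, both targets reduce to the same double-DQN pattern. The sketch below assumes PyTorch tensors and adds the standard terminal-state masking that the equations leave implicit:

import torch

def double_dqn_target(online_net, target_net, reward, next_input, gamma, done):
    # Online network selects the greedy action; target network evaluates it.
    with torch.no_grad():
        greedy = online_net(next_input).argmax(dim=1, keepdim=True)
        evaluated = target_net(next_input).gather(1, greedy).squeeze(1)
    return reward + gamma * (1.0 - done) * evaluated

For Y^Q_c, next_input is o_{t+1}; for Y^Q_m, it is o_{t+1} concatenated with the newly selected control action a_{c,t+1}.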
§ EXPERIMENTAL SETUP
In this section, we compare the performance of DMSOA to OSMBOA. In order to highlight the differences in the measurement behaviour of each method, we implement both with double DQN and a prioritized replay buffer. The hyper-parameters were selected via grid search with 3 random trials. For the evaluation, we report the mean and standard deviation of the reward during training and the observation behaviour of the best policy. Each agent is reinitialized with 20 different seeds and trained on the OpenAI gym environments Cartpole, Acrobot, Lunar Lander and Atari Pong. The experiments were run on CentOS with an Intel Xeon Gold 6130 CPU and 192 GB memory. In addition, an NVIDIA V100 GPU was used in the training of the Atari agent.
§ RESULTS
Figure <ref> shows the mean and standard deviation for each agent on the Cartpole and Acrobot environments. The aim in the Cartpole environment is for the agent to operate a cart such that a vertical pole remains balanced for as long as possible. The extrinsic reward is set to 1 and the intrinsic reward is set to 1.1. We truncate each episode at a maximum of 200 time steps. In the Acrobot environment, the objective is to apply torque to swing an arm, consisting of two linearly connected actuated links, above a target height in as few steps as possible. The agent receives an extrinsic reward of -1 or an intrinsic reward of -0.85 at each time step. The episode ends at the first of 200 steps or when the arm is successfully flipped over the line.
DMSOA learns a policy for both environments that produces a higher costed reward than OSMBOA. This indicates that DMSOA requires fewer measurements whilst carrying out the control policy. In addition, the standard deviation is lower, indicating more stability across independent training runs. The episode length plots on the right show that DMSOA learned policies to keep the Cartpole upright longer and flip the Acrobot over the goal faster.
The results for the Lunar Lander environment are presented in Figure <ref>. The Lunar Lander environment is a rocket trajectory optimization problem <cit.>. The objective is to fire the lander's rockets such that it lands squarely in the target area. The fuel supply is infinite, but the best policy uses it sparingly. The environment has four discrete actions available: do nothing, fire left orientation engine, fire main engine, fire right orientation engine. The intrinsic reward is 0.1. The extrinsic reward is -0.3 for firing the main engine and -0.03 for side engines, the reward is also scaled by the lander's distance from the landing pad. Ten points are added to the extrinsic reward for each leg that is in contact with the ground, and an additional 100 points are added for landing, while 100 points are subtracted for crashing. The episode ends when the agent lands or crashes, or is truncated after a maximum of 400 time steps.
Ratio of steps with measurements to steps without measurements of the converged policy during training.

Env.           DMSOA     OSMBOA
Cartpole       1:1.27    1:0.37
Acrobot        1:0.45    1:1.03
Lunar Lander   1:1.56    1:0.33
The plot on the top left in Figure <ref> shows that the mean episode length is longer for OSMBOA than DMSOA, and the top right plot shows that DMSOA has significantly more successful landings. This indicates that DMSOA learns a policy that quickly navigates the ship to a safe landing. The lower plot shows that OSMBOA has a slightly higher costed reward. As suggested by the first two plots, this is because it takes longer to land, not because the policy is superior.
Table <ref> shows the ratio of the number of steps made without measuring for each measurement made. On Cartpole and Lunar Lander, DMSOA makes more than one step without measuring for each measurement step, whereas on Acrobot it makes an average of 0.5 non-measuring steps for each measuring step. This suggests that the dynamics of Acrobot are less predictable, causing DMSOA to measure more frequently. Interestingly, Acrobot is the only environment where OSMBOA does better than a 1:1 ratio.
§.§ Examination of Measurement Policies
Figure <ref> shows the measurement behaviour of the best OSMBOA (left) and DMSOA (right) policies for the Cartpole (top), Acrobot (middle) and Lunar Lander (lower) environments. Each row specifies a 1-episode roll-out of the best policy. Each column in the OSMBOA plots is the environment time step during the episode. For OSMBOA, the number of decision steps is equivalent to the number of steps in the environment. In contrast, each column in the DMSOA plots corresponds to a decision by the agent, with one or more environment time steps associated with it. In addition to highlighting the measurement efficiency, this also shows the decision efficiency. On Cartpole, DMSOA makes approximately 70 action selections (decisions) per episode of 200 environment steps (the mean steps per episode are shown in Figure <ref>).
For OSMBOA, an orange cell indicates that a fresh measurement of the environment was requested, and blue specifies that no measurement was requested at the corresponding time step. In the case of DMSOA, the colour indicates the number of consecutive steps that were taken without a fresh measurement. Blue indicates that a measurement is made after the control action is applied once, yellow indicates that a measurement is made after the control action is applied twice and red indicates that a measurement is made after the control action is applied three times.
The distinct pattern in each plot suggests the different capabilities of each class of AC-NOMDP agent, along with the fact that each environment is unique in terms of its dynamics and complexity. The consistent measurement patterns for OSMBOA and DMSOA on Cartpole suggest that the environment has very regular dynamics, at least for a near optimal policy. OSMBOA switches between selecting the next action from a freshly measured observation and selecting it from a stale observation. Alternatively, DMSOA learns to apply an action 3 times before measuring. This clearly demonstrates the potential of DMSOA to take more environment steps without measuring than OSMBOA.
Acrobot and Lunar Lander show much more complex measurement behaviour. For both OSMBOA and DMSOA on Acrobot, during approximately the first three quarters of each episode they display a pattern of frequently measuring followed by briefly not measuring. Both OSMBOA and DMSOA skip measurements while the Acrobot is in the lower left region of the observation space. This is roughly where the momentum of the Acrobot shifts from heading away from the goal back towards the goal. In this area, it is deemed safe to apply torque back towards the goal without observing. In the last quarter of each episode, both agents take more steps without measuring. It is noteworthy that in most episodes OSMBOA takes significantly more steps without observing than DMSOA. However, this has a negative impact on the total number of action decisions made by OSMBOA en route to achieving the goal. This is particularly visible in episodes 1 and 4. DMSOA does not suffer from similar behaviour.
On the Lunar Lander environment, both methods take few or no measurements near the end of the episode when the agent is close to landing. In addition, DMSOA repeatedly takes 2-3 steps before measuring at the beginning of each episode, whereas OSMBOA repeatedly measures early in each episode. Each method measures frequently during the middle of the episode as the agent attempts to direct the lander safely towards the landing area. OSMBOA generally alternates between measuring and not measuring at each time step, whereas DMSOA typically takes many measurement steps followed by 1 to 2 steps without measuring before returning to measuring again. Similar to Acrobot, once OSMBOA estimates that it is on target to reach the goal, it commits to never measuring again. When this estimate is erroneous, this leads to much longer episodes than necessary and the risk of crashing the ship.
§.§ Image-Based RL Results
The objective in the Atari game Pong is to bounce the ball off of your paddle and past the opponent's paddle into its goal <cit.>. The action space is 6-dimensional, including do nothing, fire, move right, move left, fire right and fire left. The observation space is a (210, 160, 3) image. In the case of OSMBOA, a 210 by 1 vector of ones or zeros is added to each channel to indicate whether the observation is fresh or stale. The agent gets an extrinsic reward of 1 for winning a match and 0 for each intermediate step. Each episode is composed of 21 matches and the intrinsic reward is 0.001.
The results in Figure <ref> show that DMSOA wins significantly more matches than OSMBOA (top left), achieves a higher costed reward (top right) and more intrinsic reward (lower). Thus, DMSOA learns to be a better Pong player and requires fewer measurements. Due to the longer episodes and training times, a measurement behaviour plot similar to Figure <ref> is not feasible within the confines of this paper. However, recordings of each agent and its measurement behaviour are available in the paper's GitHub repository.
From our analysis of the measurement policies of each agent, we found that both learn to measure less frequently when the ball is travelling away from their paddle. Alternatively, if the ball is near their paddle or the opponent's paddle, each agent measures more frequently. In line with the observations on Acrobot and Lunar Lander, when OSMBOA reaches a state from which it expects to win the match, it switches to not measuring for the remainder of the match. If the prediction is correct, it can achieve a greater reduction in measurements than DMSOA. If it is wrong, however, OSMBOA generally loses the match. An erroneous prediction of this nature is particularly risky in a complex and dynamic environment.
§ DISCUSSION
The results indicate that DMSOA has a clear advantage over OSMBOA in terms of its convergence rate and the reduction in measurements and decision steps. We believe that the control action repetition capabilities of DMSOA improve its exploration of the environment and its understanding of the implications (positive and negative) of taking multiple steps without measuring. This helps it to quickly converge to good control and measurement policies. In addition, the fact that DMSOA's multi-step action sequences always end with a measurement of the final state provides it with a good grounding from which to select the next control action. On the other hand, because the extrinsic reward for intermediate steps is not available, there is the potential for more noise in the reward signal for longer DMSOA action repetition trajectories. Because OSMBOA is limited to one-step actions, noise in the reward is less of a concern. Although DMSOA appears to handle the noisy reward signal, future work should examine this in more detail.
For unshaped (or uniform) reward environments, such as Cartpole, Acrobot and Pong, setting the intrinsic reward is simple and the agent is insensitive to the value so long as it is slightly larger than the extrinsic reward. Alternatively, the intrinsic reward requires fine tuning on environments with complex reward shaping, such as Lunar Lander. As a heuristic, we suggest starting the fine tuning from the mean of the extrinsic reward collected over multiple random walks in the environment.
In multiple environments, we found that OSMBOA commits to not measuring towards the end of each episode. This is surprising since, if OSMBOA takes more than one step without measuring, it enters a partially observable state. This is akin to playing the game with its eyes closed. The agent is, thus, unaware if any unexpected event occurs. On Acrobot and Pong, this resulted in it not achieving the goal, or taking much longer than otherwise necessary. An example of this is seen in the Acrobot plot in Figure <ref>.
§ ENVIRONMENTAL IMPACT
This work aims to strike a balance between scientific understanding and energy consumption. To do this, we have selected a number of OpenAI gym classic control environments from which RL policies can be efficiently learned, along with one large image-based RL environment. Although the set is relatively small, the dynamics are diverse enough to illustrate the differences between the two classes of AC-NOMDP algorithms considered in this work. We also note that the work was conducted in a jurisdiction in which the majority of the electricity comes from sources such as hydro-electric and nuclear.
In addition to reducing the measurement costs, this work can lead to a reduction in the associated carbon footprint for RL in this area. Unlike OSMBOA, the multi-step capabilities of DMSOA may robustly lower the number of forward passes through the network for decision making, offer savings in terms of communication with the environment and lower latency. Moreover, it can help with exploration, thereby reducing the time to policy convergence.
§ CONCLUSION
In this work, we consider the problem of RL for environments where agent decision making and measuring the state of the environment have explicit costs, namely AC-NOMDPs. We provide the first survey of methods recently proposed for AC-NOMDPs. Building on the existing work, we propose DMSOA, an RL algorithm that learns a control and a measurement policy to reduce measurement and decision steps. Our empirical results confirm the previously published results for OSMBOA on Cartpole, Acrobot and Lunar Lander, and show that OSMBOA is also capable on the more complex, image-based Atari Pong environment. However, we find that our proposed method DMSOA learns a better control policy than OSMBOA, and requires fewer costly measurement and decision steps.
This demonstrates the great potential to reduce measurement and decision costs associated with RL by allowing the agent to take control of its action and observation behaviour. We expect this to be a necessary capability of RL agents applied in many real-world applications. The next steps that we envision are developing more sophisticated loss functions for DMSOA, incorporating recurrency into the network to deal properly with time, and expanding the analysis to additional methods and more realistic settings, including materials design.
§ ACKNOWLEDGMENTS
This work was supported with funding from the National Research Council of Canada's AI for Design Program.
|
http://arxiv.org/abs/2307.01421v1
|
20230704012626
|
Unsupervised Feature Learning with Emergent Data-Driven Prototypicality
|
[
"Yunhui Guo",
"Youren Zhang",
"Yubei Chen",
"Stella X. Yu"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
Unsupervised Feature Learning with Emergent Data-Driven Prototypicality
Yunhui Guo^1
Youren Zhang^2
Yubei Chen^3
Stella X. Yu^2
^1The University of Texas at Dallas
^2University of Michigan
^3New York University
August 1, 2023
=========================================================================================================================================================================================
Given an image set without any labels, our goal is to train a model that maps each image to a point in a feature space such that not only does proximity indicate visual similarity, but the location of the point directly encodes how prototypical the image is with respect to the dataset.
Our key insight is to perform unsupervised feature learning in hyperbolic instead of Euclidean space, where the distance between points still reflects image similarity, and yet we gain additional capacity for representing prototypicality with the location of the point: the closer it is to the origin, the more prototypical it is. The latter property simply emerges from optimizing the usual metric learning objective: an image similar to many training instances is best placed at the center of the corresponding points in Euclidean space, but closer to the origin in hyperbolic space.
We propose an unsupervised feature learning algorithm in Hyperbolic space with sphere pACKing (HACK). HACK first generates uniformly packed particles in the Poincaré ball of hyperbolic space and then assigns each image uniquely to a particle. Images after congealing are regarded as more typical of the dataset they belong to. With our feature mapper simply trained to spread out training instances in hyperbolic space, we observe that images move closer to the origin with congealing, validating our idea of unsupervised prototypicality discovery. We demonstrate that our data-driven prototypicality provides an easy and superior unsupervised instance selection to reduce sample complexity, increase model generalization with atypical instances and robustness with typical ones.
§ INTRODUCTION
Not all instances are created equal. Some instances are more representative of the class and some instances are outliers or anomalies. Representative examples can be viewed as prototypes and used for interpretable machine learning <cit.>, curriculum learning <cit.>, and learning better decision boundaries <cit.>. With prototypical examples, we can also conduct classification with few or even one example <cit.>. Given an image dataset, it is thus desirable to organize the examples based on prototypicality.
If the features of the images are given, it is relatively easy to find the prototypes by examining the density peaks of the feature distribution. If the features are not given, discovering prototypical examples without supervision is difficult: there is no universal definition or simple metric to assess the prototypicality of the examples. A naive method to address this problem is to examine the gradient magnitude <cit.>. However, this approach is shown to have a high variance which results from different training setups <cit.>. Some methods address this problem from the perspective of adversarial robustness <cit.>: prototypical examples should be more adversarially robust. However, the selection of the prototypical examples highly depends on the adversarial method and the metric used in the adversarial attack. Several other methods exist for this problem but they are either based on heuristics or lack a proper justification <cit.>.
Naturally, given a feature space, prototypical examples can be identified as density peaks. However, prototypicality changes as the feature space changes. In this paper, we propose an unsupervised feature learning algorithm, called HACK, for learning features that reflect prototypicality.
Different from existing unsupervised learning methods, HACK naturally leverages the geometry of hyperbolic space for unsupervised learning. Hyperbolic space is a non-Euclidean space with constant negative curvature <cit.>. Different from Euclidean space, hyperbolic space can represent hierarchical relations with low distortion. The Poincaré ball model is one of the most commonly used models for hyperbolic space <cit.>. One notable property of the Poincaré ball model is that the distance to the origin grows exponentially as we move towards the boundary. Thus, the points located in the center of the ball are close to all the other points while the points located close to the boundary are infinitely far away from other points. With unsupervised learning in hyperbolic space, HACK can learn features which capture both visual similarity and prototypicality (Figure <ref>).
HACK optimizes the organization of the dataset by assigning the images to a set of uniformly distributed particles in hyperbolic space. The assignment is done by minimizing the total hyperbolic distance between the features and the particles via the Hungarian algorithm. The prototypicality arises naturally based on the distance of the example to the others. Prototypical examples tend to locate in the center of the Poincaré ball and atypical examples tend to locate close to the boundary. Hyperbolic space readily facilitates such an organization due to the property of the hyperbolic distance.
Our paper makes the following contributions.
* We propose the first unsupervised feature learning method to learn features which capture both visual similarity and prototypicality. The positions of the features reflect the prototypicality of the examples.
* The proposed method HACK assigns images to particles that are uniformly packed in hyperbolic space. HACK fully exploits the property of hyperbolic space, and prototypicality arises naturally.
* We ground the concept of prototypicality on congealing, which conforms to human visual perception. The congealed examples can be used to replace the original examples for constructing datasets with known prototypicality. We validate the effectiveness of the method by using synthetic data with natural and congealed images. We further apply the proposed method to commonly used image datasets to reveal prototypicality.
* The discovered prototypical and atypical examples are shown to reduce sample complexity and increase the robustness of the model.
§ RELATED WORK
Prototypicality. The study of prototypical examples in machine learning has a long history. In <cit.>, the authors select typical instances based on the fact that typical instances should be representative of the cluster. In <cit.>, prototypical examples are defined as the examples that have maximum mean discrepancy within the data. Li et al. <cit.> propose to discover prototypical examples by architectural modifications: project the dataset onto a low-dimensional manifold and use a prototype layer to minimize the distance between inputs and the prototypes on the manifold. The robustness to adversarial attacks is also used as a criterion for prototypicality <cit.>. In <cit.>, the authors propose multiple metrics for prototypicality discovery. For example, the features of prototypical examples should be consistent across different training setups. However, these metrics usually depend heavily on the training setups and hyperparameters. The idea of prototypicality is also extensively studied in meta-learning for one-shot or few-shot classification <cit.>. No existing works address the prototypicality discovery problem in a data-driven fashion. Our proposed HACK naturally exploits hyperbolic space to organize the images based on prototypicality.
Unsupervised Learning in Hyperbolic Space. Learning features in hyperbolic space has been shown to be useful for many machine learning problems <cit.>. One useful property is that hierarchical relations can be embedded in hyperbolic space with low distortion <cit.>. The wrapped normal distribution, a generalized version of the normal distribution for modeling the distribution of points in hyperbolic space <cit.>, is used as the latent space for constructing hyperbolic variational autoencoders (VAEs) <cit.>. Poincaré VAEs are constructed in <cit.> with an idea similar to <cit.> by replacing the standard normal distribution with a hyperbolic normal distribution. Unsupervised 3D segmentation <cit.> and instance segmentation <cit.> are conducted in hyperbolic space via a hierarchical hyperbolic triplet loss. CO-SNE <cit.> is recently proposed to visualize high-dimensional hyperbolic features in a two-dimensional hyperbolic space. Although hyperbolic distance facilitates the learning of hierarchical structure, how to leverage hyperbolic space for unsupervised prototypicality discovery has not been explored in the current literature.
Sphere Packing. Sphere packing aims to pack a set of particles as densely as possible in space <cit.>. It can serve as a toy model for granular materials and has applications in information theory <cit.> for finding error-correcting codes <cit.>. Sphere packing is difficult due to multiple local minima, the curse of high dimensionality, and complicated geometrical configurations. Packing in hyperbolic space is also studied in the literature. A universal upper bound for the density of sphere packing in an n-dimensional hyperbolic space when n ≥ 2 is given in <cit.>. We are interested in generating uniform packing in a two-dimensional hyperbolic space. Uniformity has been shown to be a useful criterion for learning good features on the hypersphere <cit.>. We opt to find the configuration with an optimization procedure that is easily applicable even with thousands of particles.
§ PROTOTYPICALITY AS DENSITY PEAKS
Given existing features {f(v_i)} obtained by applying a feature extractor for each instance v_i, prototypical examples can be found by examining the density peaks via techniques from density estimation. For example, the K-nearest neighbor density (K-NN) estimation <cit.> is defined as,
p_knn(v_i, k) = (k/n) · 1/(A_d · D^d(v_i, v_k(i)))
where d is the feature dimension, A_d = π^d/2 / Γ(d/2+1), Γ(x) is the Gamma function and v_k(i) is the kth nearest neighbor of example v_i. The nearest neighbors can be found by computing the distances between the features. Therefore, the process of identifying prototypicality through density estimation can be conceptualized as a two-step procedure involving: 1) feature learning and 2) detecting density peaks.
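The estimator is straightforward to compute; the sketch below, assuming Euclidean distances over an (n, d) feature matrix, illustrates it:

```python
import numpy as np
from scipy.special import gamma as gamma_fn

def knn_density(features, k):
    """K-NN density estimate p_knn(v_i, k); a sketch assuming Euclidean
    distances between the rows of an (n, d) feature matrix."""
    n, d = features.shape
    A_d = np.pi ** (d / 2) / gamma_fn(d / 2 + 1)  # unit-ball volume constant
    # O(n^2) pairwise distances; fine for moderate n.
    dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
    dists.sort(axis=1)
    D_k = dists[:, k]  # index k skips the zero self-distance in column 0
    return (k / n) / (A_d * D_k ** d)
```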
In the density estimation approach outlined above, the level of prototypicality depends on the particular features learned. Varying training setups can induce diverse feature spaces, resulting in differing conclusions on prototypicality. Nevertheless, prototypicality is an inherent attribute of the dataset and should remain consistent across various features. The aim of this paper is to extract features that intrinsically showcase the prototypicality of the samples. Specifically, by examining the feature alone within the feature space, we should be able to identify the example's prototypicality.
To determine whether the feature truly captures prototypicality, it is necessary to identify which samples are prototypes. We ground our concept of prototypicality on congealing <cit.>. In particular, we define prototypical examples in the pixel space by examining the distance of the images to the average image in the corresponding class. Our idea is based on a traditional computer vision technique called image alignment <cit.> that aims to find correspondences across images. During congealing <cit.>, a set of images are transformed to be jointly aligned by minimizing the joint pixel-wise entropies. The congealed images are more prototypical: they are better aligned with the average image. Thus, we have a simple way to transform an atypical example into a typical example (see Figure <ref>). This is useful since, given an unlabeled image dataset, the typicality of the examples is unknown; congealed examples can naturally serve as examples with known typicality and be used to validate the effectiveness of our method.
§ UNSUPERVISED HYPERBOLIC FEATURE LEARNING
We aim to develop a method that can automatically discover prototypical examples unsupervisedly. In particular, we conduct unsupervised learning in hyperbolic space with sphere packing (Figure <ref>). We specify ahead of training where the targets should be located via uniform packing, so that by design they are maximally evenly spread out in hyperbolic space. The uniformly distributed particles guide feature learning to achieve maximum instance discrimination <cit.>.
HACK figures out which instance should be mapped to which target through bipartite graph matching as a global optimization procedure. During training, HACK minimizes the total hyperbolic distance between each mapped image point (in the feature space) and its target; those that are more typical naturally emerge closer to the origin of the Poincaré ball. Prototypicality comes for free as a result of self-organization. HACK differs from the existing learning methods in several aspects (Figure <ref>). Different from supervised learning, HACK allows the image to be assigned to any target (particle). This enables the exploration of the natural organization of the data.
On the other hand, existing unsupervised learning methods often employ maximal instance discrimination as a criterion for feature learning. However, if these approaches are directly applied to learning features in hyperbolic space, they will drive all instances towards the boundary to achieve maximal instance discrimination. Instead, HACK specifies a predefined geometrical organization which encourages the corresponding structure to emerge from the dataset.
§.§ Poincaré Ball Model for Hyperbolic Space
Hyperbolic space. Euclidean space has a curvature of zero and a hyperbolic space is a Riemannian manifold with constant negative curvature.
Poincaré Ball Model for Hyperbolic Space. There are several isometrically equivalent models for visualizing hyperbolic space with Euclidean representation. The Poincaré ball model is the commonly used one in hyperbolic representation learning <cit.>. The n-dimensional Poincaré ball model is defined as (𝔹^n, 𝔤_𝐱), where 𝔹^n = {𝐱∈ℝ^n: ‖𝐱‖ < 1 } and 𝔤_𝐱 = (γ_𝐱)^2 I_n is the Riemannian metric tensor. γ_𝐱 = 2/(1- ‖𝐱‖^2) is the conformal factor and I_n is the Euclidean metric tensor.
Hyperbolic Distance. Given two points u∈𝔹^n and v∈𝔹^n, the hyperbolic distance is defined as,
d_𝔹^n(u, v) = cosh^-1(1 + 2‖u-v‖^2/((1-‖u‖^2)(1-‖v‖^2)))
where cosh^-1 is the inverse hyperbolic cosine function and ‖·‖ is the usual Euclidean norm.
Hyperbolic distance has the unique property that it grows exponentially as we move towards the boundary of the Poincaré ball. In particular, the points on the boundary circle represent points at infinity. Hyperbolic space is naturally suitable for embedding hierarchical structure <cit.> and can be regarded as a continuous representation of trees <cit.>. The hyperbolic distance between samples implicitly reflects their hierarchical relation. Thus, by embedding images in hyperbolic space we can naturally organize images based on their semantic similarity and prototypicality.
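The distance formula translates directly into code; a minimal sketch for the unit ball (curvature c = 1), with small clamps added as a numerical-stability assumption, is:

```python
import torch

def poincare_distance(u, v, eps=1e-6):
    """Hyperbolic distance on the unit Poincare ball; a direct transcription
    of the formula above. Inputs broadcast over the leading dimensions."""
    sq = torch.sum((u - v) ** 2, dim=-1)
    un = torch.sum(u ** 2, dim=-1)
    vn = torch.sum(v ** 2, dim=-1)
    x = 1 + 2 * sq / ((1 - un).clamp_min(eps) * (1 - vn).clamp_min(eps))
    return torch.acosh(x.clamp_min(1 + eps))
```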
§.§ Sphere Packing in Hyperbolic Space
Given n particles, our goal is to pack the particles into a two-dimensional hyperbolic space as densely as possible. We derive a simple repulsion loss function to encourage the particles to be equally distant from each other. The loss is derived via the following steps. First, we need to determine the radius of the Poincaré ball used for packing. We use a curvature of 1.0, so the radius of the Poincaré ball is 1.0. The whole Poincaré ball cannot be used for packing since its volume is infinite. We use r < 1 to denote the actual radius used for packing. Thus, our goal is to pack n particles in a compact subspace of the Poincaré ball. Then, the Euclidean radius r is further converted into the hyperbolic radius r_𝔹. Let s = 1/√(c), where c is the curvature. The relation between r and r_𝔹 is r_𝔹 = s log((s + r)/(s - r)). Next, the total hyperbolic area A_𝔹 of a Poincaré ball of radius r_𝔹 can be computed as A_𝔹 = 4π s^2 sinh^2(r_𝔹/(2s)), where sinh is the hyperbolic sine function. Finally, the area per point A_n can be easily computed as A_𝔹/n, where n is the total number of particles. Given A_n, the radius per point can be computed as r_n = 2s sinh^-1(√(A_n/(4π s^2))). We use the following loss to generate uniform packing in hyperbolic space. Given two particles i and j, the repulsion loss V is defined as,
V(i, j) = {1/[2r_n - max(0, 2r_n - d_𝔹(i,j))]^k - 1/(2r_n)^k}· C(k)
where C(k) = (2r_n)^(k+1)/k and k is a hyperparameter. Intuitively, if particles i and j are within 2r_n of each other, the repulsion loss is positive. Minimizing the repulsion loss pushes particles i and j apart. If the repulsion loss is zero, this indicates all the particles are equally distant (Figure <ref> a). Figure <ref> b) shows that the repulsion loss grows significantly when two particles become close.
We also adopt the following boundary loss to prevent the particles from escaping the ball,
B(i; r) = max (0, norm_i - r + margin)
where norm_i is the ℓ_2 norm of the representation of the particle i. Figure <ref> b) shows an example of the generated particles that are uniformly packed in hyperbolic space.
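A sketch of the packing objective, reusing the poincare_distance helper above, could read as follows; the vectorized pairwise form and the specific clamp values are our assumptions:

```python
import math
import torch

def per_particle_radius(n, r=0.76, s=1.0):
    """Hyperbolic radius budget r_n for n particles (formulas above)."""
    r_B = s * math.log((s + r) / (s - r))                       # packing radius
    A_B = 4 * math.pi * s ** 2 * math.sinh(r_B / (2 * s)) ** 2  # total area
    return 2 * s * math.asinh(math.sqrt((A_B / n) / (4 * math.pi * s ** 2)))

def repulsion_loss(particles, r_n, k=1.55):
    """Pairwise repulsion V summed over particle pairs; (n, 2) input."""
    d = poincare_distance(particles[:, None, :], particles[None, :, :])
    overlap = (2 * r_n - torch.clamp(2 * r_n - d, min=0.0)).clamp_min(1e-6)
    C = (2 * r_n) ** (k + 1) / k
    V = (1.0 / overlap ** k - 1.0 / (2 * r_n) ** k) * C
    return torch.triu(V, diagonal=1).sum()  # count each pair once

def boundary_loss(particles, r=0.76, margin=0.01):
    """Penalty keeping particles inside the packing radius."""
    return torch.clamp(particles.norm(dim=-1) - r + margin, min=0.0).sum()
```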
§.§ Hyperbolic Instance Assignment
HACK learns the features by optimizing the assignments of the images to particles (Figure <ref>). Once we generate a fixed set of uniformly packed particles in a two-dimensional hyperbolic space, our next goal is to assign each image to the corresponding particle. The assignment should be one-to-one, i.e., each image should be assigned to one particle and each particle is allowed to be associated with one image. We cast the instance assignment problem as a bipartite matching problem <cit.> and solve it with the Hungarian algorithm <cit.>.
Initially, we randomly assign the particles to the images; thus there is a random one-to-one correspondence between the images and the particles (not yet optimized). Given a batch of samples {(𝐱_1, s_1), (𝐱_2, s_2), ..., (𝐱_b, s_b)}, where 𝐱_i is an image and s_i is the corresponding particle, and an encoder f_θ, we generate the hyperbolic feature for each image 𝐱_i as f_θ(𝐱_i) ∈𝔹^2, where 𝔹^2 is a two-dimensional Poincaré ball. We aim to find the minimum cost bipartite matching of the images to the particles within this batch. It is worth noting that the assignment is done without supervision.
In bipartite matching, the cost is the hyperbolic distance of each image to the particle. Thus, the criterion is to minimize the total hyperbolic distance of the assignment. We achieve this goal with the Hungarian algorithm <cit.>, which has a complexity of 𝒪(b^3), where b is the batch size. It is worth noting that the assignment is limited to the samples in the particular batch, thus the time and memory complexity is tolerable. The one-to-one correspondence between the images and particles is always maintained during training. The details of HACK are shown in Algorithm <ref>.
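A compact sketch of the per-batch assignment, with the cost matrix built from the hyperbolic distances (reusing poincare_distance above) and solved by SciPy's Hungarian implementation, is given below; the exact batching and bookkeeping are assumptions:

```python
from scipy.optimize import linear_sum_assignment

def assign_images_to_particles(features, particles):
    """Minimum-cost bipartite matching of a batch of b hyperbolic features
    to the batch's b particles; cost = pairwise hyperbolic distance."""
    cost = poincare_distance(features[:, None, :], particles[None, :, :])
    row, col = linear_sum_assignment(cost.detach().cpu().numpy())
    return col  # col[i] is the particle index assigned to image i
```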
Due to the property of hyperbolic distance, the images that are more typical tend to be assigned to the particles located near the origin. Thus, HACK implicitly defines prototypicality as the distance of the sample to the others. The prototypicality of the images can be easily reflected by the location of the assigned particles. Moreover, similar images tend to cluster together due to semantic similarity. In summary, with hyperbolic instance assignment, HACK automatically organizes images based on prototypicality by exploiting the hyperbolicity of the space.
§.§ Discussion
How Does HACK Work?
Hyperbolic space can embed tree structures with no distortion. In particular, the root of the tree can be embedded in the center of the Poincaré ball and the leaves are embedded close to the boundary <cit.>. Thus, the root is close to all the other nodes. This agrees with our intuition that typical examples should be close to all other examples. By minimizing the total assignment loss of the images to the particles, we seek to organize the images implicitly in a tree-structure manner. Consider three images A, B, C for an example. Assume image A is the most typical image. Thus the feature of A is close to both the features of B and C. The bipartite matching tends to assign image A to the particle in the center since this naturally reflects the feature distances between the three images.
Connection to Existing Methods. Existing works address the problem of prototypicality discovery with ad hoc metrics <cit.>. These metrics usually have high variances due to different training setups or hyperparameters. In this paper, we take a different perspective by exploiting the natural organization of the data through optimizing hyperbolic instance assignments. The property of hyperbolic space facilitates the discovery of prototypicality. Also, popular contrastive-learning-based unsupervised learning methods such as SimCLR <cit.> and MoCo <cit.> cannot achieve this goal since no predefined structure is specified.
§ EXPERIMENTS
We design several experiments to show the effectiveness of HACK for semantic and prototypical organization. First, we construct a dataset with known prototypicality using the congealing algorithm <cit.>. Then, we apply HACK to datasets with unknown prototypicality to organize the samples based on their semantic and prototypical structure. Finally, we show that the prototypical structure can be used to reduce sample complexity and increase model robustness.
Datasets. We first construct a dataset called Congealed MNIST. To verify the efficacy of HACK for unsupervised prototypicality discovery, we need a benchmark with known prototypical examples. However, there is currently no standard benchmark for this purpose. To construct the benchmark, we use the congealing algorithm from <cit.> to align the images in each class of MNIST <cit.>. The congealing algorithm was initially used for one-shot classification. During congealing, the images are brought into correspondence with each other jointly. The congealed images are more prototypical: they are better aligned with the average image. In Figure <ref>, we show the original images and the images after congealing. The original images are transformed via affine transformations to better align with each other. The synthetic data is generated by replacing 500 original images with the corresponding congealed images. In Section <ref> of the Appendix, we show the results of changing the number of replaced original images. We expect HACK to discover the congealed images and place them in the center of the Poincaré ball. We also aim to discover the prototypical examples from each class of the standard MNIST dataset <cit.> and CIFAR10 <cit.>. CIFAR10 consists of 60000 images from 10 object categories ranging from airplane to truck. CIFAR10 is more challenging than MNIST since it has larger intra-class variations.
Baselines. We consider several existing metrics proposed in <cit.> for prototypicality discovery; the details can be found in Section <ref> of the Appendix.
* Holdout Retraining <cit.>: We consider the Holdout Retraining proposed in <cit.>. The idea is that the features of prototypical examples obtained from models trained on different datasets should be close.
* Model Confidence: Intuitively, the model should be confident in prototypical examples. Thus, it is natural to use the confidence of the model prediction as the criterion for prototypicality.
Implementation Details. We implement HACK in PyTorch and the code will be made public. To generate uniform particles, we first randomly initialize the particles and then run the training for 1000 epochs to minimize the repulsion loss and boundary loss. The learning rate is 0.01. The curvature of the Poincaré ball is 1.0 and the radius r is 0.76, which is used to alleviate numerical issues <cit.>. The hyperparameter k is 1.55, which is shown to generate uniform particles well. For the assignment, we use a LeNet <cit.> for MNIST and a ResNet20 <cit.> for CIFAR10 as the encoder. We apply HACK to each class separately. We attach a fully connected layer to project the feature into a two-dimensional Euclidean space. The image features are further projected onto hyperbolic space via an exponential map. We run the training for 200 epochs using a cosine learning rate scheduler <cit.> with an initial learning rate of 0.1. We optimize the assignment every other epoch. All the experiments are run on an NVIDIA TITAN RTX GPU.
§.§ Prototypicality in the Hyperbolic Feature Norm
We explicitly show that hyperbolic space can capture prototypicality by analyzing the relation between hyperbolic norms and the K-NN density estimate. Taking the learned hyperbolic features, we first divide the range of norms of the hyperbolic features into numerous portions of equal length (50 portions for this plot). The mean K-NN density is calculated by averaging the density estimates of the features within each portion. Figure <ref> shows that the mean density drops as the norm increases, which shows that prototypicality emerges automatically within the norms, an inherent characteristic of hyperbolic space. This validates that prototypicality is reflected in the hyperbolic feature norm.
§.§ Visual Prototypicality: Congealed MNIST
We further apply HACK for visual feature learning on congealed MNIST. Figure <ref> shows that HACK can discover the congealed images from all images. In Figure <ref> a), the red particles denote the congealed images and the cyan particles denote the original images. We can observe that the congealed images are assigned to the particles located in the center of the Poincaré ball. This verifies that HACK can indeed discover prototypical examples from the original dataset. Section <ref> in the Appendix shows that the features of atypical examples gradually move to the boundary of the Poincaré ball during training. In Figure <ref> b), we show the actual images that are embedded in the two-dimensional hyperbolic space. We can observe that the images in the center of the Poincaré ball are more prototypical and the images close to the boundary are more atypical. Also, the images are naturally organized by their semantic similarity. Figure <ref> shows that the features of the original images become closer to the center of the Poincaré ball after congealing. In summary, HACK can discover prototypicality and also organizes the images based on their semantics. To the best of our knowledge, this is the first unsupervised learning method that can be used to discover prototypical examples in a data-driven fashion.
§.§ Prototypicality for Instance Selection
Figure <ref> shows the embedding of class 0 from MNIST and class “airplane" from CIFAR10 in hyperbolic space. We sample 2000 images from MNIST and CIFAR10 for better visualization. We also show the arrangement of the images angularly at different angles. Radially, we can observe that images are arranged based on prototypicality. The prototypical images tend to locate in the center of the Poincaré ball. Especially for CIFAR10, the images become blurry and even unrecognizable as we move toward the boundary of the ball. Angularly, the images are arranged based on visual similarity. The visual similarity of the images transitions smoothly as we move around angularly. Please see Section <ref> in the Appendix for more results.
Comparison with Baselines. Figure <ref> shows the comparison of the baselines with HACK. We can observe that both HACK and Model Confidence (MC) can discover typical and atypical images. Compared with MC, HACK defines prototypicality as the distance of the sample to other samples, which is more aligned with human intuition. Moreover, in addition to prototypicality, HACK can also be used to organize examples by semantic similarity. Holdout Retraining (HR) is not effective for prototypicality discovery due to the randomness of model training.
§.§ Application of Prototypicality
Reducing Sample Complexity. The proposed HACK can discover prototypical images as well as atypical images. We show that with atypical images we can reduce the sample complexity of training the model. Prototypical images are representative of the dataset but lack variation. Atypical examples contain more variations, and it is intuitive that models trained on atypical examples should generalize better to the test samples. To verify this hypothesis, we select a subset of samples based on the norm of the features, which indicates the prototypicality of the examples. We consider using both the most typical and the most atypical examples for training the model. In particular, typical samples correspond to the samples with smaller norms and atypical samples correspond to the samples with larger norms. The angular layout of the hyperbolic features naturally captures sample diversity; thus, for selecting atypical examples, we also consider introducing more diversity by sampling images with large norms along the angular direction.
We train a LeNet on MNIST for 10 epochs with a learning rate of 0.1. Figure <ref> a) shows that training with atypical images can achieve much higher accuracy than training with typical images. In particular, training with the most atypical 10% of the images achieves 16.54% higher accuracy than with the most typical 10% of the images. Thus, HACK provides an easy solution to reduce sample complexity. The results further verify that HACK can distinguish between prototypical and atypical examples.
Increasing Model Robustness. Training models with atypical examples can lead to a model that is vulnerable to adversarial attacks <cit.>. Intuitively, atypical examples lead to a less smooth decision boundary; thus, a small perturbation of an example is likely to change the prediction. With HACK, we can easily identify atypical samples to improve the robustness of the model. We use MNIST as the benchmark and use FGSM <cit.> to attack the model with ϵ = 0.07. We identify the atypical examples with HACK and remove the most atypical X% of the examples. Figure <ref> b) shows that discarding atypical examples greatly improves the robustness of the model: the adversarial accuracy is improved from 84.72% to 93.42% by discarding the most atypical 1% of the examples. It is worth noting that the clean accuracy remains the same after removing a small number of atypical examples.
§ SUMMARY
We propose an unsupervised learning method, called HACK, for organizing images with sphere packing in hyperbolic space. HACK optimizes the assignments of the images to a fixed set of uniformly distributed particles by naturally exploiting the properties of hyperbolic space. As a result, prototypical and semantic structures emerge naturally from the feature learning. We apply HACK to synthetic data with known prototypicality and to standard image datasets. The discovered prototypical and atypical examples can be used to reduce sample complexity and increase model robustness. The idea of HACK can also be generalized to learn other geometrical structures from the data by specifying different geometric patterns.
§ MORE DETAILS ON K-NN DENSITY ESTIMATION ON MNIST
Feature Extraction: We use a LeNet <cit.> without a classifier as the encoder and follow the scheme of MoCo <cit.> to train the feature extractor. We run the training for 200 epochs; the initial learning rate is 0.06. We use a cosine learning rate scheduler <cit.>.
Visualization: Figure <ref> visualizes the K-NN density estimation on MoCo <cit.> features of MNIST <cit.>. The output features have a dimension of 64. To visualize the features, we use t-SNE <cit.> with a perplexity of 40 and 300 iterations for optimization.
§ MORE DETAILS ON HYPERBOLIC INSTANCE ASSIGNMENT
A more detailed description of the hyperbolic instance assignment is given below.
Initially, we randomly assign the particles to the images. Given a batch of samples {(𝐱_1, s_1), (𝐱_2, s_2), ..., (𝐱_b, s_b)}, where 𝐱_i is an image and s_i is the corresponding particle. Given an encoder f_θ, we generate the hyperbolic feature for each image 𝐱_i as f_θ(𝐱_i) ∈𝔹^2, where 𝔹^2 is a two-dimensional Poincaré ball.
We aim to find the minimum cost bipartite matching of the images to the particles. The cost to minimize is the total hyperbolic distance of the hyperbolic features to the particles. We first compute all the pairwise distances between the hyperbolic features and the particles. This is the cost matrix of the bipartite graph. Then we use the Hungarian algorithm to optimize the assignment (Figure <ref>).
Suppose we train the encoder f_θ for T epochs. We run the hyperbolic instance assignment every other epoch to avoid instability during training. We optimize the encoder f_θ to minimize the hyperbolic distance of the hyperbolic feature to the assigned particle in each batch.
§ DETAILS OF BASELINES
Holdout Retraining: We consider the Holdout Retraining proposed in <cit.>. The idea is that the features of prototypical examples obtained from models trained on different datasets should be close. In Holdout Retraining, multiple models are trained on the same dataset. The distances of the features of the images obtained from different models are computed and ranked. The prototypical examples are those examples with the closest feature distances.
Model Confidence: Intuitively, the model should be confident on prototypical examples. Thus, it is natural to use the confidence of the model prediction as the criterion for prototypicality. Once we train a model on the dataset, we use the confidence of the model to rank the examples. The prototypical examples are those examples that the model is most confident on.
§ MORE RESULTS ON PROTOTYPICALITY DISCOVERY
We show the visualization of all the images in Figure <ref> and Figure <ref>. The images are organized naturally based on their prototypicality and semantic similarity. We further conduct retrieval based on the norm of the hyperbolic features to extract the most typical and atypical images on CIFAR10 in Figure <ref>. The hyperbolic features with large norms correspond to atypical images and the hyperbolic features with small norms correspond to typical images. It can be observed that the object in the atypical images is barely visible.
§ GRADUALLY ADDING MORE CONGEALED IMAGES
We gradually increase the number of original images replaced by congealed images from 100 to 500. Still, as shown in Figure <ref>, HACK can learn a representation that captures the concept of prototypicality regardless of the number of congealed images. This again confirms the effectiveness of HACK for discovering prototypicality.
§ DIFFERENT RANDOM SEEDS
We further run the assignment 5 times with different random seeds. The results are shown in Figure <ref>. We observe that the algorithm does not suffer from high variance and the congealed images are always assigned to the particles in the center of the Poincaré ball. This further confirms the efficacy of the proposed method for discovering prototypicality.
§ EMERGENCE OF PROTOTYPICALITY IN THE FEATURE SPACE
Existing unsupervised learning methods mainly focus on learning features for differentiating different classes or samples <cit.>. The learned representations are transferred to various downstream tasks such as segmentation and detection. In contrast, the features learned by HACK aim at capturing prototypicality within a single class.
To investigate the effectiveness of HACK in revealing prototypicality, we can include or exclude congealed images in the training process. When the congealed images are included in the training process, we expect the congealed images to be located in the center of the Poincaré ball while the original images are located near the boundary of the Poincaré ball. When the congealed images are excluded from the training process, we expect the features of the congealed images produced by the trained network to be located in the center of the Poincaré ball.
§.§ Training with congealed images and original images
We follow the same setups as in Section 4.3.1 of the main text. Figure <ref> shows the hyperbolic features of the congealed images and original images in different training epochs. The features of the congealed images stay in the center of the Poincaré ball while the features of the original images gradually expand to the boundary.
§.§ Training only with original images
Figure <ref> shows the hyperbolic features of the congealed images when the model is trained only with original images. As we have shown before, congealed images are naturally more typical than their corresponding original images since they are aligned with the average image. The features of congealed images are all located close to the center of the Poincaré ball. This demonstrates that prototypicality naturally emerges in the feature space.
Without using congealed images during training, we exclude any artifacts and further confirm the effectiveness of HACK for discovering prototypicality. We also observe that the features produced by HACK capture the fine-grained similarities among the congealed images despite the fact that all the images are aligned with the average image.
§ CONGEALED IMAGES
§ DISCUSSIONS ON SOCIETAL IMPACT AND LIMITATIONS.
We address the problem of unsupervised learning in hyperbolic space. We believe the proposed HACK should not raise any ethical considerations. We discuss current limitations below.
Applying to the Whole Dataset Currently, HACK is applied to each class separately. Thus, it would be interesting to apply HACK to all the classes at once without supervision. This is much more challenging since we need to differentiate between examples from different classes as well as the prototypical and semantic structure.
Exploring other Geometrical Structures We consider uniform packing in hyperbolic space to organize the images. It is also possible to extend HACK by specifying other geometrical structures to encourage the corresponding organization to emerge from the dataset.
|
http://arxiv.org/abs/2307.00988v1
|
20230703130654
|
Tellurium emission line in kilonova AT 2017gfo
|
[
"Kenta Hotokezaka",
"Masaomi Tanaka",
"Daiji Kato",
"Gediminas Gaigalas"
] |
astro-ph.HE
|
[
"astro-ph.HE",
"astro-ph.SR"
] |
The late-time spectra of the kilonova AT 2017gfo associated with GW170817 exhibit a strong emission line feature at 2.1 μ m.
The line structure develops with time and there is no apparent blue-shifted absorption feature in the spectra, suggesting that this emission line feature is produced by electron collision excitation. We attribute
the emission line to a fine structure line of Tellurium (Te) III, which is one of the most abundant elements in the second r-process peak.
By using synthetic spectral modeling including fine structure emission lines with the solar r-process abundance pattern beyond the first r-process peak, i.e., atomic mass numbers A≳ 88, we demonstrate that [Te III] 2.10 μ m is indeed expected to be the strongest emission line in the near infrared region. We estimate that the required mass of Te III is ∼ 10^-3M_⊙, corresponding to
a merger ejecta mass of 0.05M_⊙, which is in agreement with the mass estimated from the kilonova light curve.
transients: neutron star mergers
§ INTRODUCTION
The origin of r-process elements is a long-standing problem in astrophysics <cit.>.
Neutron star mergers have been considered as promising sites of r-process nucleosynthesis <cit.>.
A neutron star merger, GW170817, was accompanied by an uv-optical-infrared counterpart, a kilonova (or macronova) AT 2017gfo, which provides strong evidence that r-process nucleosynthesis occurs in neutron star merger ejecta <cit.>.
A series of spectral data of the kilonova AT 2017gfo was obtained in the optical and near infrared bands from 0.5 to 10 days after the merger <cit.>.
The kilonova AT 2017gfo is dominated by the photospheric emission at the early times.
The photospheric emission around a few days after the merger peaks in the near infrared band, indicating the existence of lanthanides, which have strong absorption at optical to near infrared wavelengths <cit.>.
The early spectra also exhibit several absorption structures including: (i) the 0.8 μ m feature attributed to Sr II or He I <cit.> and (ii) the 1.3 μ m and 1.5 μ m features attributed to La III and Ce III, respectively <cit.>. In addition to the elemental identification, <cit.> demonstrated that the spectra in the photospheric phase are useful to study the geometry of the outer part of the kilonova ejecta, ≳ 0.2c.
After the photospheric phase, kilonovae enter the nebular phase, where the ejecta is heated by the charged decay products of the radioactivity of r-process nuclei and the heat is radiated through atomic emission lines.
Examining kilonova nebular spectra provides opportunities to identify atomic species synthesized in the merger ejecta that may not appear as absorption lines during the photospheric phase. For instance, <cit.> interpreted the detection of Spitzer <cit.> at 4.5 μ m at 43 and 74 days after the merger as emission lines of selenium (Se) or tungsten (W).
In the early nebular phase, ∼ 10 days, the infrared emission is of particular interest because the absorption opacity due to atomic transitions is lower compared to the optical region <cit.>, and thus, the emission lines are expected to appear as early as ≲ 10 days. Most infrared emission lines are expected to arise from fine-structure transitions in the ground terms of heavy elements, for which the line wavelengths and transition rates can be obtained with reasonably high accuracy from the experimentally calibrated atomic energy levels. Furthermore, such emission lines can be used to estimate the mass distribution of the emitting ions from the emission line spectra. In fact, the mass distributions of ions in type Ia supernova ejecta have been derived from the infrared nebular spectra <cit.>.
In section <ref>, we study an emission line feature at 2.1 μ m in the kilonova AT 2017gfo spectra from 7.5 to 10.5 days. We attribute this line to a fine-structure line of doubly ionized Tellurium (Te III, atomic number 52). The Te III mass that is required to explain the observed data is estimated as ∼ 10^-3M_⊙. With a synthetic spectral modeling with the solar r-process abundance pattern, we show that [Te III] 2.10 μ m is the strongest fine structure emission line in the near infrared region. In section <ref>, we conclude the results and discuss the uncertainties and implications.
§ TE III LINE IN KILONOVA
The emission lines produced through radiative de-excitation of atoms emerge from the optically thin region of the ejecta.
The optical depth of the kilonova ejecta with an expansion velocity of v_ ej and a mass of M_ ej is
τ ≈ κ M_ej/(4π (v_ej t)^2)
≈ 1 (κ/1 cm^2 g^-1) (M_ej/0.05M_⊙) (v_ej/0.1c)^-2 (t/10 day)^-2,
where κ is the opacity and t is the time since merger.
The opacity is dominated by bound-bound transitions of heavy elements and depends on the composition and wavelengths.
<cit.> show that the expansion opacity decreases with wavelength, e.g., ∼ 10 – 100 cm^2g^-1 around 0.5 μ m and ≲ 1 cm^2g^-1 around 2 μ m.
Therefore, infrared emission lines are expected to emerge at earlier times than optical lines. With the ejecta parameters of AT 2017gfo, we expect emission lines to dominate over the photospheric emission around 2 μ m as early as ∼ 10 days.
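As a quick numerical sanity check of the scaling above (a sketch with standard CGS constants):

```python
import math

M_SUN = 1.989e33   # g
C = 2.998e10       # cm/s
DAY = 86400.0      # s

def optical_depth(kappa=1.0, m_ej=0.05 * M_SUN, v_ej=0.1 * C, t=10 * DAY):
    """Optical depth tau = kappa * M_ej / (4 pi (v_ej t)^2)."""
    return kappa * m_ej / (4 * math.pi * (v_ej * t) ** 2)

print(optical_depth())  # ~1.2, consistent with the scaling relation
```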
Figure <ref> shows the spectral series of the kilonova AT 2017gfo from 7.5 to 10.5 days after the merger taken by X-shooter on the Very Large Telescope <cit.>. The observed spectra are composed of several line features and a continuum component extending from the optical to near infrared bands. We model the underlying continuum spectrum by blackbody radiation, where the photospheric velocity and temperature for 7.5–10.5 days are 0.06 – 0.08c and 1700 – 2400 K, respectively.
The observed spectra clearly show
an emission line at 2.1 μ m (see for a detailed analysis). The expansion velocity of the line emitting region is ∼ 0.07c derived from Doppler broadening of the line, which is consistent with the picture where the emission line is produced outside the photosphere.
The line flux remains roughly constant with time while the continuum flux declines, and thus, the line-to-continuum ratio increases from ∼ 1 at 7.5 days to ∼ 1.5 at 10.5 days. This development of the emission line without a blue-shifted absorption feature indicates that the emission at 2.1 μ m is a forbidden line driven by electron collision rather than an emission line associated with an absorption line, e.g., a P-Cygni line or a fluorescence line.
The wavelength of the peak of the emission line feature indeed coincides with a fine structure line, [Te III] 2.10 μ m, arising from the transition between the ground level ^3 P_0 and the first excited level ^3 P_1.
It is worth noting that
[Te III] 2.10 μ m has been detected in planetary nebulae <cit.>. Note that the transition between the ground level ^3 P_2 and the second excited level ^3 P_1 of Te I also produces an emission line at 2.1 μ m. As discussed later, the contribution of Te I line is weaker than Te III line.
It may not be surprising that Te III produces the strongest emission lines because Te is among the most abundant elements in the second r-process peak.
Figure <ref> shows the mass fraction of each atom at 10 days after the merger. Here we assume that the final abundance pattern matches the solar r-process residual with mass numbers A≥ 88 <cit.>, i.e., the elements beyond the first r-process peak. With this assumption, the most abundant element is Sr and the second most abundant is Te at 10 days.
Note also that [Te III] 2.10 μ m is particularly expected to be strong as long as Te III is abundant outside the photosphere because this line is produced by radiative decay of the first fine structure transition level, which is easily excited by electron collision. For the iron peak elements, [Co III] 11.89 μ m and [Co II] 10.52 μ m represent lines of the same nature. Indeed, these are among the most prominent mid-IR lines observed in SNe Ia and SN 1987A, respectively <cit.>.
Let us first give an estimate of the amount of Te III from the observed line flux assuming that the observed line flux is predominantly produced by Te III and the ejecta is optically thin to the [Te III] 2.10 μ m line. The total line luminosity is given by
L ∼ hν_10 A_10 f_1 N( Te III),
where hν_10, A_10≈ 2 s^-1, and f_1 are the excitation energy, the radiative decay rate, and the fraction of Te III ions in the ^3 P_1 level, respectively, and N( Te III) is the total number of Te III ions in the ejecta <cit.>. The observed flux at 2.1 μ m after subtracting the underlying continuum is ∼ 0.1 mJy, corresponding to the observed line luminosity of L_ obs,line∼ 2· 10^39 erg s^-1 with D=40 Mpc. Equation (<ref>) leads to a total mass of Te III:
M( Te III)∼ 10^-3M_⊙(f_1/0.1)^-1(L_ obs,line/2· 10^39 erg s^-1).
Note that the electron density at t∼ 10.5 day may be comparable to
the critical density of Te III ^3 P_1 <cit.>, and therefore, the level fraction f_1 is comparable to or slightly less than that expected from the thermal distribution, i.e., f_1≈ 0.1 in thermal equilibrium at T_e=2000 K.
Given the total ejecta mass of ∼ 0.05M_⊙, we estimate that the mass fraction of Te is greater than a few per cent.
We now turn to the comparison of the observed spectra with a synthetic spectral model. The synthetic spectrum is composed of fine structure emission lines and a continuum component, where the continuum emission is approximated by blackbody radiation. The blackbody temperature and radius at a given epoch are determined such that the synthetic spectrum roughly agrees with the observed one at the near IR region ≲ 2 μ m.
The emission line spectrum is computed by the one-zone modeling presented in <cit.>,
where the energy level populations are solved by balance between collision and radiative decay for a given electron density, temperature, and ionization state. We use the collision strengths of the fine structure transitions of the ground term of Te III derived by <cit.>. The collision strengths of other elements that are relevant for the nebular modeling at λ≲ 3.5 μ m
are obtained by using an atomic structure code, and
the M1 line list is constructed by using the NIST database <cit.> and the LS selection rules with the single-configuration approximation (Hotokezaka et al. in prep). Note that the wavelengths and radiative transition rates of the M1 lines in the list are sufficiently accurate for our purpose.
In the modeling, the ejecta composition is assumed to be the solar r-process abundance pattern with A≥ 88 (figure <ref>), which is the same as the second and third peak model used in <cit.>.
The model also assumes the electron temperature, T_e=2000 K and the ionization fractions (Y^+0, Y^+1, Y^+2, Y^≥+3)= (0.25, 0.4, 0.25, 0.1)[We neglect the emission lines of ions in Y^≥+3.]. These quantities are assumed to be constant with time for simplicity.
This choice of the ionization fraction is somewhat motivated by <cit.>, where the ionization fractions of Nd atoms in the kilonova nebular phase are studied. With these parameters, the ejecta mass of 0.05M_⊙ and the expansion velocity of 0.07c,
[Te III] 2.10 μ m is the strongest emission line among M1 transitions of all the heavy elements beyond the first r-process peak and
the synthetic spectra can roughly reproduce the emission line structure around 2.1 μ m. However, one should keep in mind that the ionization fraction varies among different atomic species.
The ejecta mass of 0.05M_⊙ agrees with the ejecta mass estimated from the energy budget of the bolometric light curve <cit.>.
If this interpretation is correct, we expect that the 2.1 μ m line remains at the later times while the continuum flux keeps declining.
It is worth noting that
Te III may produce another emission line at 2.93 μ m arising from the transition between the first and second excited levels (^3 P_1-^3 P_2) at the later times because the electron temperature is expected to gradually increase with time <cit.>. Although this line may be hidden by several other lines of Os II, III, and Pd III,
detecting the two lines of Te III in future events can provide solid confirmation of the Te III production in mergers.
Furthermore, the ratio of these line fluxes can be used to diagnose the electron temperature.
§ CONCLUSION AND DISCUSSION
The observed spectra of the
kilonova AT 2017gfo exhibit a strong emission line at 2.1 μ m. The emission line with the lack of an apparent blue-shifted absorption feature suggests that the emission feature is a forbidden line excited through electron collision. We attribute this line to the fine structure line, [Te III] 2.10 μ m, which has also been detected in planetary nebulae <cit.>. Note that Te is one of the most abundant elements in the second r-process peak. We estimate that the mass of Te III is roughly 10^-3M_⊙ to account for the observed line flux.
We compare the observed spectra with a synthetic model, where the spectrum is composed of fine structure emission lines and a continuum component approximated by blackbody radiation. The spectrum of fine structure emission lines is computed with the one-zone model presented in <cit.>.
With the solar r-process abundance beyond the first r-process peak, T_e∼ 2000 K, and Y^+2∼ 0.3, we show that [Te III] 2.10 μ m is the strongest emission line among M1 transitions of all the heavy elements around 10 days after merger.
Our model agrees with the observed spectra for the ejecta of 0.05M_⊙ with the solar r-process abundance pattern with A≥ 88, an expansion velocity of 0.07c, an electron temperature of ∼ 2000 K, and an ionization fraction of Y^+2∼ 0.3. Because blackbody radiation may be a poor approximation to the continuum flux around 2 μ m, we should keep in mind that the amount of Te III in our analysis may be affected by the continuum flux model. It is also interesting to note that the same abundance pattern can reproduce the Spitzer 4.5 μ m detection at 40 days <cit.>, in which the emission is attributed to a fine structure line of W III <cit.>. Note that, if the lighter r-process elements are abundant, they are expected to produce emission lines around 2 μ m such as [Kr III] 2.20 μ m and [Se IV] 2.29 μ m. However, the observed spectra peak at 2.1 μ m, suggesting that these ions are less abundant compared to Te III in the line emitting region of the ejecta.
Our model does not include electric dipole (E1) lines, which may produce strong absorption and emission lines.
Recently, <cit.> suggested that 2.1 μ m may be composed of two lines and an E1 line of Ce III is the best candidate producing this line feature.
Although we cannot quantify the contamination of E1 lines to the 2.1 μ m feature, we emphasize that the M1 emission of Te can account for the observed line flux with reasonable parameters.
To verify this hypothesis we need spectral modelings with E1 lines.
We also note that the M1 lines in our list cannot account for the observed feature at 1.6 μ m. The flux in this line declines with time as the continuum flux declines, indicating that this emission feature may be produced by E1 lines. Interestingly, <cit.> show that Ce III has several strong E1 lines around 1.6 μ m.
The early blue emission in the photospheric phase suggests that the emission is dominated by ejecta composed of light r-process elements, since lanthanides would otherwise cause significant absorption in the optical band. The analyses of the kilonova spectra in the photospheric phase lead to a similar conclusion. The absorption feature around 0.8 μ m is likely caused by one of the light r-process elements, Sr (Z=38), or even He <cit.>. <cit.> propose that La (Z=57) and Ce (Z=58) produce the absorption lines around 1.2 and 1.5 μ m, respectively, but the abundances of La and Ce inferred from the spectral analysis are lower than the solar r-process residuals by a factor of ∼ 10. These results indicate that the outer part of the ejecta (v≳ 0.2c) is predominantly composed of light r-process elements.
In contrast to the early emission, our analysis implies that heavier elements, i.e., the second r-process peak, are likely more abundant in the slower part of the ejecta.
In order to obtain better constraints on the elemental
abundances and ejecta parameters, the spectral modelings should be improved by developing non-LTE radiation transfer modelings <cit.> and by improving atomic data such as the radiative transition rates <cit.>, collision strengths, and recombination rate coefficients.
For future kilonova events, the spectroscopic observations with the JWST as well as ground-based telescopes will be useful to identify more elements in the nebular spectra with a wider wavelength range.
§ ACKNOWLEDGMENTS
We thank Nanae Domoto and Yuta Tarumi for useful discussion. This research was supported by JST FOREST Program (Grant Number JPMJFR212Y, JPMJFR2136), NIFS Collaborative Research Program (NIFS22KIIF005), the JSPS Grant-in-Aid for Scientific Research (19H00694, 20H00158, 21H04997, 20K14513, 20H05639, 22JJ22810).
§ DATA AVAILABILITY
The data presented in this article will be shared on request to the corresponding author.
mnras
|
http://arxiv.org/abs/2307.01529v2
|
20230704073332
|
Shadows and photon rings of a spherically accreting Kehagias-Sfetsos black hole
|
[
"Mohaddese Heydari-Fard",
"Malihe Heydari-Fard",
"Nematollah Riazi"
] |
gr-qc
|
[
"gr-qc"
] |
Shadows and photon rings of a spherically accreting Kehagias-Sfetsos black hole
Mohaddese Heydari-Fard^1
Electronic address: m_heydarifard@sbu.ac.ir, Malihe Heydari-Fard^2 Electronic address: heydarifard@qom.ac.ir, and Nematollah Riazi^1 Electronic address: n_riazi@sbu.ac.ir
^1 Department of Physics, Shahid Beheshti University, Evin 19839, Tehran, Iran
^2 Department of Physics, The University of Qom, 3716146611, Qom, Iran
August 1, 2023
================================================================================================================================================================================================================================================================================================================================================================================================================================================
By considering the Kehagias-Sfetsos black hole in the framework of Hořava-Lifshitz gravity, we study the optical appearance of such black holes surrounded by spherical accretion flow. For the static/infalling spherical accretion flow, we compute the observed specific intensity as a function of the impact parameter. We also investigate the effect of the Hořava parameter and accreting matter on the luminosity of shadows and photon rings. It is found that an increase in the Hořava parameter decreases the shadow size, while the luminosities of the shadows and photon rings increase. Moreover, we constrain the Hořava parameter from the observational data reported by the Event Horizon Telescope for M87* and Sgr A*.
Keywords: Black hole shadow, Spherical accretion, Modified theories of gravity
§ INTRODUCTION
The Event Horizon Telescope (EHT) collaboration released the first image of a black hole shadow <cit.>–<cit.>. The image is formed by null geodesics in the strong gravity regime. The photons with low angular momentum fall into the black hole and form a dark area for a distant observer, while photons with large angular momentum coming from infinity will be deflected by the gravitational potential of the black hole. However, photons with the critical value of angular momentum revolve around the black hole indefinitely and surround the dark interior; these are called the photon ring and the black hole shadow, respectively. In a seminal work, Synge calculated the angular radius of the shadow of the Schwarzschild black hole <cit.>. Then, Bardeen studied the shadow of the Kerr black hole and argued that the black hole angular momentum causes the deformation of its shadow <cit.>.
By modelling M87* with the Kerr geometry in general relativity (GR), the observation was found to be in agreement with the predictions of GR. However, due to the EHT systematic uncertainties it is still possible to test the alternative theories of gravity by simulating the black hole image and observing deviations from the Kerr solution. To this end, one can explore the distortion in the black hole image which contains valuable information about the structure of space-time around a black hole solution. This motivated many authors over the recent years to study the black hole shadow in the context of modified theories of gravity <cit.>–<cit.>.
On the other hand, the astrophysical black holes are expected to be surrounded by sources of the luminous accretion material which makes it possible to investigate the observational appearance of the black hole from the accretion flow. Indeed, before the discovery of the black hole shadow by EHT, the possible observational characteristics of the black hole shadow by considering different accretion flows were studied. Luminet was the first to investigate the optical properties of the Schwarzschild black hole in 1979, and constructed the simulated shadow image of the Schwarzschild black hole surrounded by an emitting thin accretion disk <cit.>. The simulated image obtained by Luminet is remarkably similar to the black hole shadow image captured by EHT <cit.>. He found that the emergence of the shadow and ring depends on the position and profile of the accretion flow and the inner edge of disk can have a remarkable signature in the image. Thereafter, Falcke et al. by considering the radiation of a hot optically thin accretion flow around a supermassive black hole in the center of our galaxy, created a ray-tracing code to obtain the images of Sgr A*, and showed that the black hole shadow is equivalent to the gravitational lensing effect <cit.>. For a geometrically thick and optically thin accretion disk, the gravitational lensing and the shadow of the Schwarzschild black hole was studied by Cunha et al. <cit.>. Gralla et al. by considering the Schwarzschild black hole with both thin and thick accretion disks, investigated the trajectory of light rays and ring that surrounds the black hole shadow. It was found that the shadow size depends on the position of the accretion disk as well as the emission profile of the model <cit.>. However, when the Schwarzschild black hole is surrounded by spherically symmetric accretion flow, Narayan et al. showed that the location of the outer edge of the shadow is independent of the inner radius at which the accreting gas stops radiating <cit.>. Therefore, the size of the shadow depends on the space-time geometry and is not affected by the details of the accretion flow. Also, the optical appearance of black holes surrounded by various accretions in modified gravity theories, have been extensively studied in <cit.>–<cit.>.
Amongst the many modifications of GR that have been suggested is Hořava-Lifshitz gravity, which is motivated by the need to include quantum effects in the low-mass limit. The theory is a renormalizable four-dimensional theory of gravity which reduces to Einstein's gravity with a non-vanishing cosmological constant in the IR limit but with improved UV behavior. A class of static and spherically symmetric black hole solutions of the theory with a cosmological constant was obtained in <cit.>. Among them, the AdS-type solution has an asymptotic behavior which differs from the Schwarzschild-AdS solution in GR; namely, in the IR limit the theory of GR is not always recovered. However, in the context of the modified Hořava model, a static spherically symmetric solution with asymptotically flat behavior, which is a counterpart of the Schwarzschild black hole in GR, has been obtained by Kehagias and Sfetsos <cit.>. This solution is usually known as the KS black hole. Then, in the slow rotation approximation, the black hole solution in the IR regime has been obtained in <cit.>–<cit.>. In the literature, many physical aspects of the KS black hole have already been studied <cit.>–<cit.>. Moreover, for cosmological implications of Hořava-Lifshitz gravity see for instance Refs. <cit.>–<cit.>.
The shadows and rings of the KS black hole surrounded by a thin accretion disk have been studied in <cit.>. However, the optical appearance of the KS black hole surrounded by spherical accretion flow has not yet been studied. So, in the present work, we consider the KS black hole surrounded by static/infalling accretion flows and discuss the effects of the Hořava parameter and spherical accretion on the observed appearance of the black hole.
The paper is organized as follows. In section <ref>, after a brief review of KS black holes, we discuss the photon trajectories in the space-time of such black holes and investigate the effects of the Hořava parameter on them. Then we present the shadow images of KS black hole with static and infalling spherical accretion flows in section <ref> and section <ref>, respectively. The paper ends with concluding remarks in section <ref>.
§ KS BLACK HOLES AND TRAJECTORIES OF SURROUNDING PHOTONS
§.§ A. KS geometry
In the ADM formalism of Hořava-Lifshitz gravity the four-dimensional metric is parameterized as <cit.>
ds^2=-N^2c^2dt^2+g_ij(dx^i+N^idt)(dx^j+N^jdt),
where N, N_i and g_ij are the lapse function, the shift function and three-dimensional spatial metric, respectively.
The action of the IR-modified Hořava gravity is
S = \int dt\, d^3x\, \sqrt{g}\, N \left[ \frac{2}{\kappa^2}\left(K_{ij}K^{ij}-\lambda K^2\right) - \frac{\kappa^2}{2\nu^4} C_{ij}C^{ij} + \frac{\kappa^2\mu}{2\nu^2}\,\epsilon^{ijk} R^{(3)}_{il}\nabla_j R^{(3)l}_{\;\;k} - \frac{\kappa^2\mu^2}{8} R^{(3)}_{ij}R^{(3)ij} + \frac{\kappa^2\mu^2}{8(3\lambda-1)}\left(\frac{4\lambda-1}{4}\left(R^{(3)}\right)^2 - \Lambda_W R^{(3)} + 3\Lambda_W^2\right) + \frac{\kappa^2\mu^2\tilde{\omega}}{8(3\lambda-1)} R^{(3)} \right],
where μ, ν, λ, κ, ω̃ and Λ_W are constant parameters. R^(3) is the three-dimensional curvature scalar for g_ij and the extrinsic curvature, K_ij, is given by
K_ij=1/2N(ġ_ij-∇_iN_j-∇_jN_i),
where the dot represents a derivative with respect to t and ∇_i denotes the covariant derivative with respect to the spatial metric g_ij. C^ij is the Cotton tensor, reads as
C^ij=ϵ^ikl∇_k(R^(3)j_l-1/4R^(3)δ^j_l).
Now, we consider the static and spherically symmetric metric as
ds^2= -g_tt(r)dt^2+g_rr(r)dr^2+r^2 dθ^2+r^2 sin^2θ dφ^2,
where g_tt(r) and g_rr(r) are functions of the radial coordinate r. In the specific case of λ=1, which reduces to the Einstein-Hilbert action in the IR limit, the solution of the vacuum field equations can be obtained as follows
-g_tt(r)=1/g_rr(r)=f(r)=1+(ω̃-Λ_W) r^2-√(r[ω̃(ω̃-2Λ_W)r^3+β]),
where β is an integration constant. By considering β=4ω̃M and Λ_W=0, the KS asymptotically flat solution is given by <cit.>
f(r)=1+ω̃ r^2[1-(1+4M/ω̃ r^3)^1/2],
where M is the mass of the black hole and ω̃ is the Hořava-Lifshitz parameter. By rearranging the parameter ω̃ as ω̃=1/(2ω^2), one can rewrite the metric function in the following form <cit.>
f(r)=1+r^2/2ω^2(1-√(1+8Mω^2/r^3)).
The radius of the outer and inner horizons can be found by solving f(r)=0
r_±=M[1±√(1-ω^2/M^2)].
As is clear, the existence of a black hole solution requires the constraint ω/M≤1, and the extremal black hole, r_+=r_-, corresponds to the case ω/M=1. Also, in the limit ω→0 the above metric reduces to the static solution of GR described by the Schwarzschild metric. Note that the parameter ω always takes positive values. Thus, for the range 0<ω<1, the behavior of the inner horizon r_- and the event horizon r_+ is plotted in the left panel of Fig. <ref>. The points at the beginning of each curve denote the values corresponding to the Schwarzschild solution.
§.§ B. Null trajectory around KS black hole
The trajectory of null geodesics in the space-time of KS black hole can be obtained using the Euler-Lagrange equations. Without loss of generality, we restrict ourselves to the equatorial plane, θ=π/2. Since the metric coefficients of KS black hole are independent of t and φ coordinates, there are two constants of motion correspond to the energy E and angular momentum L of photons
E=f(r)ṫ,
L=r^2φ̇.
Now, taking the condition 2 L=g_μνẋ^μẋ^ν=0 for null geodesics and using above equations, we find the equations of photon motion around a KS black hole as
ṫ=E/f(r),
φ̇=L/r^2,
ṙ^2 = E^2 - V_ eff(r),
where the effective potential is as follows
V_ eff(r)=L^2/r^2f(r)=L^2/r^2[1+r^2/2ω^2(1-√(1+8Mω^2/r^3))].
We have displayed the effective potential for different values of the Hořava parameter, ω, in the right panel of Fig. <ref>. As one can see, the peak of the potential increases with ω. Note that for non-radial geodesics, it is convenient to set L=1 and thus we plot the figure for this value of the angular momentum.
Next, we focus on the photon motion in the vicinity of KS black hole. Combining equations (<ref>) and (<ref>) the differential equation governing the light rays trajectory can be obtained as
(dr/dφ)^2 = r^4/b^2-r^2f(r)
where the impact parameter is defined as b≡ L/E. In particular, for photons with the critical value of the impact parameter, b = b_ ph, an unstable circular orbit occurs at the maximum of the effective potential at r = r_ ph, known as the photon sphere <cit.>. To study circular orbits with constant radius r=r_ ph, from equation (<ref>) we have
V_ eff(r_ ph) = E_ ph^2, V^'_ eff(r_ ph) = 0,
where prime denotes differentiation with respect to the radial coordinate r. Use of equation (<ref>) leads to the following relation
rf'(r)-2f(r)=0
which gives the radius of unstable photon circular orbits as
r_ ph=2√(3)Mcos[1/3cos^-1(-4ω^2/3√(3)M^2)].
The dependence of the radius of the photon sphere r_ ph on the parameter ω is also plotted in the right panel of Fig. <ref>, showing that r_ ph is a decreasing function of the Hořava parameter. Moreover, we see that in the limiting case ω→0, r_ ph=3M, which is the radius of the unstable circular photon orbit for the Schwarzschild black hole. The impact parameter of the photon sphere is also given by
b_ ph = r_ ph/√(f(r_ ph))=(1/r_ ph^2+1-√(1+8Mω^2/r_ ph^3)/2ω^2)^-1/2,
which for an asymptotically flat space-time with a metric in the form (<ref>) is equal to the radius of the black hole shadow. The results for the radii of the event horizon r_+ and photon sphere r_ ph, as well as the impact parameter of the photon sphere b_ ph, are also presented in Table <ref>.
|
http://arxiv.org/abs/2307.01683v1
|
20230704122710
|
Learning Discrete Weights and Activations Using the Local Reparameterization Trick
|
[
"Guy Berger",
"Aviv Navon",
"Ethan Fetaya"
] |
cs.LG
|
[
"cs.LG",
"cs.AI",
"cs.CV"
] |
Learning Discrete Weights and Activations Using the Local Reparameterization Trick
Guy Berger, Aviv Navon, and Ethan Fetaya
August 1, 2023
=================================================================================================================================================================================================================================================================================================================================================================================================
In computer vision and machine learning, a crucial challenge is to lower the computation and memory demands for neural network inference. A commonplace solution to address this challenge is through the use of binarization. By binarizing the network weights and activations, one can significantly reduce computational complexity by substituting the computationally expensive floating-point operations with faster bitwise operations. This leads to more efficient neural network inference that can be deployed on low-resource devices. In this work, we extend previous approaches that trained networks with discrete weights using the local reparameterization trick to also allow for discrete activations. The original approach optimized a distribution over the discrete weights and uses the central limit theorem to approximate the pre-activation with a continuous Gaussian distribution. Here we show that the probabilistic modeling can also allow effective training of networks with discrete activations as well. This further reduces runtime and memory footprint at inference time with state-of-the-art results for networks with binary activations.
§ INTRODUCTION
As neural networks become more powerful and their applications more commonplace, there is an important and growing need to reduce computational costs and memory requirements. This is especially important in edge devices, e.g., smartphones, that have weaker processors and need to optimize energy consumption. To address this challenge, one promising approach is to train binary or ternary networks, where the weights are constrained to take on only a small number of discrete values <cit.>. One can also binarize the activations (e.g., using the sign function) to further improve the efficiency of the network, albeit with a larger reduction in accuracy <cit.>.
Most previous works that have utilized sign activation directly apply it during the forward pass and use heuristic or approximation methods to estimate the gradients during the backward pass, e.g., straight-through gradient or other biased gradient estimators <cit.>.
In this paper, we propose a novel method to compute gradients for networks with discrete activation by employing a smooth approximation. We construct a fully differentiable probabilistic model to approximate the discrete network during training. After training, we sample from our trained probabilistic model to get our discrete weights.
Our method is based on the observation by <cit.> which states that if the weights
of a certain layer are sampled from independent Gaussian distributions, then one can get better stochastic gradient estimation by modeling and sampling the Gaussian distribution of the pre-activations
instead of weights. They called it the local reparameterization trick.
<cit.> extended the local reparameterization trick to train a network with discrete weights. When the weights
are discrete and stochastic, the pre-activations
can still be well approximated by a Gaussian distribution according to the (Lyapunov) central limit theorem (CLT). Then, by using the reparameterization trick, we can compute the derivatives of a smooth distribution.
In this work, we extend <cit.> and train a network with discrete weights and activations.
Based on the observation that the pre-activations
are well approximated by a Gaussian distribution, we
propose sampling discrete values from the induced distribution given our Gaussian approximation. We construct a probabilistic model which is fully differentiable, using the Gaussian CDF function to calculate the activation probabilities and Gumbel-Softmax <cit.> as a smooth approximation for categorical sampling.
We demonstrate the effectiveness of our approach on several benchmark datasets. The experiments show that our proposed method, which we name LAR-nets (Local Activation Reparameterization networks), outperforms previous SoTA binarization approaches. To summarize, we make the following contributions:
* Introduce a novel approach for learning with discrete activations based on the local reparameterization trick.
* Propose a new variant of batch normalization to extend the applicability of the normalization layer to distributions over weights.
* Demonstrate the effectiveness of the proposed approach against multiple baselines for training discrete networks on several vision benchmarks.
§ RELATED WORK
Binary networks have been an active area of research in deep learning over the past few years. <cit.> introduced BinaryConnect, a method for training binary neural networks using binary stochastic weights. They sample binary weights and compute gradients as if it was a deterministic full-precision network.
BNN <cit.> and XNOR-Net <cit.> suggested a different approach; They discretize the weights and activations during the forward pass and back-propagate through this non-continuous discretization using the straight-through estimator.
XNOR-Net also introduces a continuous scaling factor to the binarized weights, which became popular in later works. Other similar works based on the straight-through estimator extend it to ternary networks (TWN <cit.>, TTQ <cit.>).
Motivated by the high potential to reduce model complexity and improve computational efficiency by also discretizing activations, works that discretize both weights and activations became more common.
DoReFa-Net <cit.> extended XNOR-Net to accelerate the training process using quantized gradients. ABC-Net <cit.> suggested reducing the quantization error by linearly combining multiple binary weight matrices and scaling factors to fit the full-precision weights and activations. Other works suggested making modifications in the network architectures, such as adding tailored-made layers, to make them more fit to binary neural network training <cit.>.
Some recent works tried to achieve good quantization functions in forward propagation, which can reduce the gradient error as well. <cit.> presented a differential soft quantization (DSQ) that has more accurate gradients in the backward pass.
Diffenderfer and Kailkhura <cit.> proposed what they called Multi-Prize Tickets (MPT).
They incorporated a binarization scheme with weight pruning (which results in a ternary network).
For the weight and activation binarization, they suggested a modified gradient estimator for the sign function. For the weight pruning, inspired by Frankle and Carbin <cit.>, they proposed a scheme to prune a randomly initialized network using a learnable mask by updating pruning scores during training. <cit.> suggested reshaping the data distribution before binarization by adding a distribution loss for learning the proper binarization (BNN-DL).
<cit.> suggested IR-Net, which aimed to minimize the information loss by maximizing the information entropy of the quantized parameters and minimizing the quantization error. Several other works discretize the activation, but to more than one bit <cit.>.
§ BACKGROUND
In this section, we describe the local reparametrization trick and the LR-net approach.
Notation. Let {(x_i,y_i)}_i=1,...,N denote our training examples. We denote by W the set of all model parameters and let W^(t) denote the weights matrix for layer t. Furthermore, let
w_ij^(t) denote the (i,j)-th element of W^(t), i.e., [W^(t)]_ij=w^(t)_ij.
The pre-activations are defined as z^(t+1)=W^(t)h^(t), where h^(t) is the activations of layer t (we omit the bias term for simplicity). We let h^(0)=x and define
h^(t+1)=ϕ(z^(t+1)), where ϕ (·) is a non-linear activation function.
We assume a stochastic model in which each weight w_ij^(t) is sampled independently from a multinomial distribution 𝒲_ij^(t). The objective is minimizing the expected loss w.r.t. 𝒲, where 𝒲 denotes the distribution over W,
L(𝒲)=_W∼𝒲[∑_i=1^Nℓ(f(x_i,W),y_i)].
The classic approach for minimizing Eq. <ref> with a discrete distribution is the log-derivative trick <cit.>:
∇ L(𝒲)=𝔼_W∼𝒲[∑_i=1^Nℓ(f(x_i,W),y_i)∇log(P(W))].
While this allows us to get an unbiased estimation of the gradient, it suffers from high variance, which makes optimization using this method challenging.
A popular alternative for continuous distributions is the reparameterization trick <cit.> - instead of optimizing 𝔼_x∼ p_φ[f(x)] for distribution parameters φ we parametrize x = g(ϵ;θ) where ϵ is drawn from a known fixed distribution p(ϵ) (usually Gaussian) and optimize 𝔼_p(ϵ)[f(g(ϵ,θ))] for θ. To estimate the gradient w.r.t θ, we sample ϵ_1,...,ϵ_m and use the Monte-Carlo approximation:
∇_θ𝔼_p(ϵ)[f(g(ϵ,θ))]≈1/m∑_i=1^m∇_θ f(g(ϵ_i,θ)).
<cit.> proposed an alternative for the task of
variational approximation for Bayesian neural networks. The authors make the key observation that if a weight matrix W^(t) is sampled from an independent Gaussians w_ij∼𝒩(μ_ij,σ^2_ij), then the pre-activations z^(t+1)=W^(t)h^(t) are distributed according to
z_i^(t+1)∼𝒩(∑_jμ^(t)_ijh^(t)_j,∑_jσ_ij^(t)^2 h_j^(t)^2)
This allows sampling the pre-activations instead of the model weights, which results in lower-variance gradient estimations <cit.> and better optimization. This approach is termed the local reparameterization trick.
The local reparameterization trick has been further extended to training networks with discrete weights.
<cit.> introduced a method to train networks with binary {±1} or ternary {-1,0,1} weights (but it can be applied to a wider range of discrete values). The key idea behind this method is that while the pre-activations z_i=∑_jw_ij h_j [We omit the layer index (t) from that point to simplify the notation.] are discrete, from the (Lyapunov) central limit theorem (CLT) they are well
approximated by the Gaussian distribution z_i ∼𝒩(∑_jμ_ijh_j, ∑_jσ_ij^2h_j^2).
We note that μ_ij and σ_ij are now the mean and variance of a multinomial distribution, and not the mean-field Gaussian distribution.
By sampling ϵ_i∼𝒩(0,1) we can represent the output as z_i=m_i+ϵ_i· v_i,
where m_i=(z_i)=∑_jμ_ijh_j
and v_i^2=Var(z_i)=∑_jσ_ij^2h_j^2.
Let θ_ij be the parameters of the multinomial distribution over w_ij, then, our goal is to minimize the expected loss ℓ w.r.t. θ (where θ denotes the set of all parameters θ_ij),
∇_θ L(θ)=𝔼_p(ϵ)[∑_i=1^N∇_θℓ(f(h_i,ϵ,θ),y_i)].
§ OUR METHOD
In this section, we describe our LAR-net approach. We propose a novel extension to the LR-net <cit.> that allows for learning discrete activations. Furthermore, we describe a novel batch-normalization layer to complement our method. Finally, we outline the inference procedure using our method.
§.§ Learning Discrete Activations
Here we propose a novel extension to <cit.> for training networks with discrete weights and activations. Assume we have a network with a distribution over discrete weights and the sign function as its non-linearity. If we approximate the pre-activation output z_i by the Gaussian distribution, we can directly calculate the distribution of the values of the discrete activation sign(z_i) using the probability that the sampled Gaussian would be positive
sign(z_i)=
+1, with probability p_i=1-Φ(-m_i/v_i),
-1, with probability 1-p_i=Φ(-m_i/v_i),
where Φ is the Gaussian CDF function. We note that the distributions of sign(z_i) are independent, as each one depends on a different row in the matrix W. We also note that if we tried to compute the distribution on subsequent layers, the outputs would not be independent anymore due to the shared inputs. As this would impair our CLT approximation, we instead choose to sample the discrete activation at each layer.
In order to differentiate through the discrete activation sampling, we use the Gumbel-Softmax approximation <cit.>. This method builds on the Gumbel-Max trick <cit.>, presented in eq. (<ref>), which showed that one can get a categorical sample by taking the argmax of the logits with additional Gumbel-distributed noise,
ĥ=one_hot(arg max_k [g_k+logπ_k]), g_k ∼ Gumbel(0,1)
The approximation in <cit.> replaces the discrete argmax with a smooth Softmax function with a temperature parameter τ. It can be shown that as τ→ 0 the Softmax converges to the argmax and we get a sample from the desired categorical distribution.
ĥ_k=exp((log(π_k)+g_k)/τ)/∑^K_j=1exp((log(π_j)+g_j)/τ) for k=1...K
We are interested in discrete variables with two classes ({±1}). We define the class probabilities vector as π_i=[1-p_i,p_i]. Using eq. (<ref>) we can generate a sample vector [ĥ_i,-1,ĥ_i,1] (where ĥ_i,-1+ĥ_i,1=1) and then multiply it by our discrete values [-1, 1] (each discrete value is multiplied with the corresponding element in ĥ_i), resulting in a very simple formula:
h^(t+1)_i= ĥ^(t+1)_i,1 - ĥ^(t+1)_i,-1≈ sign(z_i^(t+1))
When τ→ 0 and the Softmax is replaced with the argmax, we get the exact sign activation. We note that the Gumbel-Softmax has two variants: “hard" and “soft", our reported results are with the “hard" variant, which achieved slightly better results (see <cit.> for details).
This leads to a simple algorithm for the forward pass in the training phase. Let θ_ij be the
parameters of the multinomial distribution over w_ij. At the forward pass we compute the
weights mean μ_ij=_θ_ij[w_ij] and
variance σ_ij^2=Var_θ_ij[w_ij]. Then, we calculate the mean and
variance of z_i, m_i=∑_jμ_ijh_j
and v_i^2=∑_jσ_ij^2h_j^2. Using the mean and variance, we calculate the discrete
distribution of sign(z_i), and then we use the Gumbel Softmax to (approximately) sample from that distribution.
The forward pass during the training phase is summarized in Algorithm <ref>. The backward pass is straightforward, since all the functions are differentiable (see Fig. <ref>).
§.§ Batch Normalization
Batch normalization <cit.> is a common part of convolutional neural networks that is known to accelerate training and improve performance in many cases. Specifically for discrete networks, the authors in <cit.> showed a high gain just by including batch normalization layers. In our model, before the sign activation function, we have the pre-activation distributions instead of deterministic values (see <ref>), and we cannot apply the classic batch normalization layer.
To address this issue, we propose a new batch normalization layer for our training process, which can be applied to distributions. The pre-activations z_bcij (b is the batch index and c is the channel index) of each convolution are approximated by Gaussian random variables and are distributed according to z_bcij∼𝒩(μ_bcij,σ_bcij^2).
The mean of the normalized variable can be easily calculated as,
μ_c=E_bij[Z_bcij]=1/B· H · W∑_b,i,jμ_bcij.
where B, W, H are the batch size, the width, and the height of the feature maps, respectively. Using the law of total variance, Var(Y)=Var[E[Y|X]] + E[Var[Y|X]], we can also calculate the variance of the normalized variable:
σ_c^2=1/(B· H· W)∑_b,i,j(μ_bcij-μ_c)^2+1/(B· H· W)∑_b,i,jσ_bcij^2.
Batch normalization usually also includes learnable affine parameters γ and β, which we incorporate here as well. The pre-activation after batch normalization layer z_BN is distributed according to
z_BN_bcij∼𝒩(γ_c·(μ_bcij-μ_c)/σ_c+β_c, γ^2_cσ_bcij^2/σ_c^2).
§.§ Inference
At inference or test time, we wish to apply our model with the actual discrete weights and activations and not the approximation used for training. To do that, we sample discrete weights based on the probabilities we obtained during training, and perform a standard inference using the sampled weights. This process can be repeated several times to identify the best-performing weights. However, it is worth noting that our trained distribution typically has low entropy, resulting in little variation in test accuracy between weight samples. As a result, only slight improvements in overall performance are observed.
§ IMPLEMENTATION DETAILS
§.§ Network Architecture
We use the ResNet-18 <cit.> and VGG <cit.>, as is standard for experiments on discrete neural networks.
In all our experiments, similar to prior works, the first layer and last fully-connected layer remain in full precision.
§.§ Optimization Details
We observed that incorporating multiple Monte-Carlo samples for each input element significantly enhances the overall results.
In every batch, we run our model on each datum multiple times, each with a different random sample, resulting in the following gradient estimation
∇_θ L(θ)≈1/S∑_i=1^N∑_j=1^S∇_θℓ(f(h_i,ϵ_j,θ),y_i).
Weights Distributions Entropy. Similar to <cit.>, we added probability decay regularization. We used it as L_2 regularization on the distribution parameters. It helps to increase the weight entropy and prevents many weight distributions from converging to a deterministic value, which would degrade the CLT approximation and lead to suboptimal results.
Activations Distributions Entropy. In some cases, we observed that there is a need to reduce the entropy of the activation distributions of the last binary layer. We found that many of the activation probabilities p_i in that layer converge to values around 0.5, which implies an undesirably high amount of randomness. We found that giving a lower learning rate to the fully connected layer (after the last binary layer) helps to reduce the entropy. We show this phenomenon in Fig. <ref>.
Initialization. Similar to <cit.> we use a pre-trained network to initialize the weight distribution parameters θ. We found that using a network with Tanh as an activation function instead of ReLU results in a better initialization for θ, as it is closer to our discrete sign activation. For further details on the initialization scheme, please refer to the supplementary material.
§ EXPERIMENTS
In this section, we detail the conducted experiments. We use two benchmark datasets, CIFAR-10 and CIFAR-100 <cit.>, to verify the effectiveness of our proposed method and compare it with other state-of-the-art methods. Our code is available in the supplementary material.
CIFAR-10. We compare our results with prior works on CIFAR-10, including IR-Net<cit.>, MPT <cit.>, BNN-DL<cit.>, DSQ<cit.>,
BNN<cit.> and XNOR-Net<cit.> using VGG-small and ResNet-18. We compared networks with 1-bit weights and 1-bit activations (1/1) as well as networks with 2-bits weights and 1-bit activations (2/1). The extra bit resulting in weights sparsity, which brings additional efficiency.
The performance comparison using the different methods is shown in Table <ref> (we presented the best performance for each method over VGG-small and ResNet-18). The results show that our approach achieves better results than other existing methods. The entire experimental details are provided in the supplementary material.
CIFAR-100. We also compared our model performance on CIFAR-100 with IR-Net<cit.>, MPT<cit.>, and BNN-DL<cit.>.
The results are shown in Table <ref>. Our approach achieves better results than other existing methods,
with the exception of BNN-DL, which is on par. However, we note that BNN-DL uses a slightly larger network. All the experimental details regarding our training process and the evaluation of MPT and IR-Net
are included in the supplementary material.
They used their own variant of ResNet with 20 layers, which we refer to as ResNet-20 for convenience.
§ CONCLUSIONS
In this paper, we introduced a novel scheme for training a network with discrete weights and activations, for efficient inference of vision and learning applications. We demonstrate that using a probabilistic approach is not limited to networks with discrete weights, but can also be successfully applied to networks with discrete activations. Furthermore, we show how standard DNN layers, such as batch normalization can also be applied in this scenario. Finally, we evaluate our method on various image classification datasets obtaining state-of-the-art results.
plainnat
§ LAR-NET INITIALIZATION
In full-precision networks, it is common to use a random initializer for the weight initialization, e.g., Kaiming Normal <cit.>. We are interested in initializing the distributions over discrete weights.
<cit.> suggested using pretrained continuous deterministic weights for the distributions initialization. Let W be the normalized pretrained weights (by dividing the weights in each layer t by the standard deviation of the weights σ^(t)).
Then, p(w_ij^(t)=0) is initialized by
p(w_ij^(t)=0)=p_max - (p_max - p_min) ·|w_ij^(t)|
where p_max and p_min are hyperparameters (set to 0.95 and 0.05, respectively, in our experiments). Next, p(w_ij^(t)=1 | w_ij^(t)≠ 0) is initialized by
p(w_ij^(t)=1 | w_ij^(t)≠ 0) = 0.5 · ( 1 + w_ij^(t)/1-p(w_ij^(t)=0) )
Then, all the values from equations <ref> and <ref> are clipped to the range [p_min, p_max].
We found that while this initialization works well also to train networks with discrete activations, we achieved better results when we used LR-Net (with only discrete weights) as a baseline.
We first trained a network with discrete weights and then transfer the weights distributions as initialization to a network with discrete activations as well.
§ CIFAR-10 EXPERIMENTAL DETAILS
§.§ Hyperparameters for LAR-Net
CIFAR-10 <cit.> is an image classification benchmark dataset. It consists of 50,000 training images and 10,000 test images distributed among ten different classes. Each image in the dataset is 32 × 32 pixels in RGB space, and it is preprocessed by subtracting its mean and dividing by its standard deviation. During training, we apply a padding of four pixels to each side of the image, and a random 32 × 32 crop is sampled from the padded image. Furthermore, we flip images horizontally at random during training. At test time, we evaluate the original 32 × 32 image without any padding or multiple cropping. The loss is minimized with Adam <cit.>. The weight decay parameter is set to 1e-4.
We noticed that by executing multiple Monte-Carlo samples for every input element, we can improve the results.
In each batch, our model is executed multiple times for each datum, using a different random sample, which results in eq. <ref>.
We use a batch size of 64 and run each batch 2 times (S=2). The initial learning rate is 0.01, and we use a cosine decay learning rate policy for training. The probability decay parameter is set to 1e-12, and the temperature of the Gumbel-Softmax is 1.2 (fixed throughout training). We report the test error rate after 300 training epochs. Our discrete weights are {-1,0,1}, which also brings additional savings. In our method, the sparsity level depends on the learned θ and is not pre-defined. We achieved a sparsity of 44%±1.1%.
§ CIFAR-100 EXPERIMENTAL DETAILS
§.§ Hyperparameters for LAR-Net
CIFAR-100 <cit.> is comprised of 60,000 32x32 color images that are grouped into 100 classes, with 600 images for each class. The dataset is divided into 50,000 training images and 10,000 testing images. The loss is minimized with Adam <cit.>. The weight decay parameter is set to 1e-4. We use a batch size of 32 and run each batch 4 times (S=4). The initial learning rate is 0.01, and we use a cosine decay learning rate policy for training. Similar to CIFAR-10, the probability decay parameter is set to 1e-12, and the temperature of the Gumbel-Softmax is 1.2. We report the test error rate after 300 training epochs. We achieved a weight sparsity of 43%±1.0%.
§.§ Hyperparameters for MPT and IR-Net
We searched over the following hyperparameters to find the best results on CIFAR-100. For IR-Net <cit.>, we used the code from <cit.>, and for MPT <cit.>, we used the code from <cit.>.
We tried to find the best results with ResNet-18 and VGG-small. We used the Adam and SGD optimizers, and searched the learning rate (LR) in the range [0.001, 0.1]. We checked different batch sizes [64, 128, 256] and also different weight decay configurations. For MPT, we used the version in which the batch normalization parameters are also learned (which they called MPT+BN); they consistently achieved better results with this version. The weights were initialized using Kaiming Normal <cit.>.
Table <ref> presents the best configurations for the models we checked.
§ ABLATION STUDY
We investigate the performance of LAR-Net without a Batch Normalization layer. We separate our experiment according to the two parts of the BN layer: the normalization (using the mean and variance) and the affine transform (using the learnable parameters γ and β).
Table <ref> shows the results using the different settings. We use no BatchNorm to indicate no Batch Normalization layer at all, and Affine Transform only to indicate only affine transform using γ and β, but without the normalization.
|
http://arxiv.org/abs/2307.00257v1
|
20230701073908
|
Efficient Subclass Segmentation in Medical Images
|
[
"Linrui Dai",
"Wenhui Lei",
"Xiaofan Zhang"
] |
cs.CV
|
[
"cs.CV",
"cs.AI"
] |
1Shanghai Jiao Tong University
2Shanghai Artificial Intelligence Laboratory
{o.o111, wenhui.lei, xiaofan.zhang}@sjtu.edu.cn
Efficient Subclass Segmentation in Medical Images
Linrui Dai1, Wenhui Lei1,2, Xiaofan Zhang1,2,* (*Corresponding author)
August 1, 2023
=================================================
As research interests in medical image analysis become increasingly fine-grained, the cost for extensive annotation also rises. One feasible way to reduce the cost is to annotate with coarse-grained superclass labels while using limited fine-grained annotations as a complement. In this way, fine-grained data learning is assisted by ample coarse annotations. Recent studies in classification tasks have adopted this method to achieve satisfactory results. However, there is a lack of research on efficient learning of fine-grained subclasses in semantic segmentation tasks. In this paper, we propose a novel approach that leverages the hierarchical structure of categories to design network architecture. Meanwhile, a task-driven data generation method is presented to make it easier for the network to recognize different subclass categories. Specifically, we introduce a Prior Concatenation module that enhances confidence in subclass segmentation by concatenating predicted logits from the superclass classifier, a Separate Normalization module that stretches the intra-class distance within the same superclass to facilitate subclass segmentation, and a HierarchicalMix model that generates high-quality pseudo labels for unlabeled samples by fusing only similar superclass regions from labeled and unlabeled images. Our experiments on the BraTS2021 and ACDC datasets demonstrate that our approach achieves comparable accuracy to a model trained with full subclass annotations, with limited subclass annotations and sufficient superclass annotations. Our approach offers a promising solution for efficient fine-grained subclass segmentation in medical images. Our code is publicly available at https://github.com/OvO1111/EfficientSubclassLearning.
§ INTRODUCTION
In recent years, the use of deep learning for automatic medical image segmentation has led to many successful results based on large amounts of annotated training data. However, the trend towards segmenting medical images into finer-grained classes (denoted as subclasses) using deep neural networks has resulted in an increased demand for finely annotated training data<cit.>. This process requires a higher level of domain expertise, making it both time-consuming and demanding. As annotating coarse-grained (denoted as superclasses) classes is generally easier than subclasses, one way to reduce the annotation cost is to collect a large number of superclasses annotations and then labeling only a small number of samples in subclasses. Moreover, in some cases, a dataset may have already been annotated with superclass labels, but the research focus has shifted towards finer-grained categories<cit.>.
In such cases, re-annotating an entire dataset may not be as cost-effective as annotating only a small amount of data with subclass labels.
Here, the primary challenge is to effectively leverage superclass annotations to facilitate the learning of fine-grained subclasses. To solve this problem, several works have proposed approaches for recognizing new subclasses with limited subclass annotations while utilizing the abundant superclass annotations in classification tasks <cit.>. In general, they assume the subclasses are not known during the training stage and typically involve pre-training a base model on superclasses to automatically group samples of the same superclass into several clusters while adapting them to finer subclasses during test time.
However, to the best of our knowledge, there has been no work specifically exploring learning subclasses with limited subclass and full superclass annotations in semantic segmentation task. Previous label-efficient learning methods, such as semi-supervised learning<cit.>, few-shot learning<cit.> and weakly supervised learning<cit.>, focus on either utilize unlabeled data or enhance the model's generalization ability or use weaker annotations for training. However, they do not take into account the existence of superclasses annotations, making them less competitive in our setting.
In this study, we focus on the problem of efficient subclass segmentation in medical images, whose goal is to segment subclasses under the supervision of limited subclass and sufficient superclass annotations. Unlike previous works such as <cit.>, we assume that the target subclasses and their corresponding limited annotations are available during the training process, which is more in line with practical medical scenarios.
Our main approach is to utilize the hierarchical structure of categories to design network architectures and data generation methods that make it easier for the network to distinguish between subclass categories. Specifically, we propose 1) a Prior Concatenation module that concatenates predicted logits from the superclass classifier to the input feature map before subclass segmentation, serving as prior knowledge to enable the network to focus on recognizing subclass categories within the current predicted superclass; 2) a Separate Normalization module that aims to stretch the intra-class distance within the same superclass, facilitating subclass segmentation; 3) a HierarchicalMix module inspired by GuidedMix <cit.>, which for the first time suggests fusing similar labeled and unlabeled image pairs to generate high-quality pseudo labels for the unlabeled samples. However, GuidedMix selects image pairs based on their similarity and fuses entire images. In contrast, our approach is more targeted. We mix a certain superclass region from an image with subclass annotation into the corresponding superclass region in an unlabeled image without subclass annotation, avoiding confusion between different superclass regions. This allows the model to focus on distinguishing subclasses within the same superclass. Our experiments on the BraTS 2021 <cit.> and ACDC <cit.> datasets demonstrate that our model, with sufficient superclass and very limited subclass annotations, achieves comparable accuracy to a model trained with full subclass annotations.
§ METHOD
§.§.§ Problem Definition
We start by considering a set of R coarse classes, denoted by 𝒴_c={Y_1,...,Y_R}, such as background and brain tumor, and a set of N training images annotated with 𝒴_c, denoted by 𝒟_c={(x^l,y^l)|y^l_i∈𝒴_c}_l=1^N. Each pixel i in image x^l is assigned a superclass label y^l_i. To learn a finer segmentation model, we introduce a set of K=∑_i=1^Rk_i fine subclasses that partition the coarse classes, denoted by 𝒴_f={Y_1,1,...,Y_1,k_1,...,Y_R,1,..., Y_R,k_R}, such as background, enhancing tumor, tumor core, and whole tumor. We assume that only a small subset of n training images has pixel-wise subclass labels z∈𝒴_f, denoted by 𝒟_f={(x^l,z^l)|z_i^l∈𝒴_f}_l=1^n. Our goal is to train a segmentation network f(x^l) that can accurately predict the subclass labels for each pixel in the image x^l, even when n≪ N. Unless specified otherwise, we consider R=2 (background and foreground) and extend the foreground class to multiple subclasses in this work.
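To make the label hierarchy concrete, the following minimal sketch derives a superclass label map y from a subclass label map z via a lookup table; the class names and index assignments here are illustrative, not the exact encoding used in our experiments.

```python
import numpy as np

# Illustrative subclass -> superclass mapping (R = 2: background, tumor).
SUB_TO_SUPER = {
    0: 0,  # background        -> background
    1: 1,  # enhancing tumor   -> tumor
    2: 1,  # tumor core        -> tumor
    3: 1,  # peritumoral edema -> tumor
}

def superclass_labels(z: np.ndarray) -> np.ndarray:
    """Derive the superclass label map y from a subclass label map z."""
    lut = np.array([SUB_TO_SUPER[k] for k in sorted(SUB_TO_SUPER)])
    return lut[z]  # vectorized lookup over all pixels
```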
§.§.§ Prior Concatenation
One direct way to leverage the superclass and subclass annotations simultaneously is to use two 1×1×1 convolution layers as superclass and subclass classification heads on the features extracted by the network. The superclassification and subclassification heads, which produce the predictions P_c(x^l) and P_f(x^l), are individually trained with superclass and subclass labels, respectively. With enough superclass labels, the feature maps corresponding to different superclasses should be well separated.
However, this coerces the subclassification head to discriminate among K subclasses under the guidance of only a few subclass annotations, making it prone to overfitting.
Another common method to incorporate the information from superclass annotations into the subclassification head is negative learning <cit.>. This technique penalizes the prediction of pixels being in the wrong superclass label, effectively using the superclass labels as a guiding principle for the subclassification head. However, in our experiments, we found that this method may lead to lower overall performance, possibly due to unstable training gradients resulting from the uncertainty of the subclass labels.
To make use of superclass labels without affecting the training of the subclass classification head, we propose a simple yet effective method called Prior Concatenation (PC): as shown in Fig. <ref> (a), we concatenate predicted superclass logit scores S_c(x^l) onto the feature maps F(x^l) and then perform subclass segmentation. The intuition behind this operation is that by concatenating the predicted superclass probabilities with feature maps, the network is able to leverage the prior knowledge of the superclass distribution and focus more on learning the fine-grained features for better discrimination among subclasses.
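A minimal PyTorch sketch of Prior Concatenation is given below, under our reading of the text: the superclass logits are concatenated to the feature map along the channel dimension before the subclass head. The module name, channel sizes and 3D convolutions are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PriorConcatHead(nn.Module):
    """Hypothetical head: superclass logits are concatenated as a prior."""
    def __init__(self, feat_ch: int, n_super: int, n_sub: int):
        super().__init__()
        self.super_head = nn.Conv3d(feat_ch, n_super, kernel_size=1)
        # The subclass head sees the features plus the superclass logits.
        self.sub_head = nn.Conv3d(feat_ch + n_super, n_sub, kernel_size=1)

    def forward(self, feats: torch.Tensor):
        s_logits = self.super_head(feats)            # S_c(x): prior knowledge
        fused = torch.cat([feats, s_logits], dim=1)  # concatenate along channels
        z_logits = self.sub_head(fused)              # subclass prediction
        return s_logits, z_logits
```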
§.§.§ Separate Normalization
Intuitively, given sufficient superclass labels in supervised learning, the superclassification head tends to reduce the feature distance among samples within the same superclass, which conflicts with the goal of increasing the distance between subclasses within the same superclass. To alleviate this issue, we aim to enhance the internal diversity of the distribution within the same superclass while preserving the discriminative features among superclasses.
To achieve this, we propose Separate Normalization (SN) to separately process feature maps belonging to the hierarchical foreground and background divided by superclass labels. As a superclass and the subclasses within it share the same background, the original conflict between classifiers is transferred to finding the optimal transformations that separate foreground from background, enabling the network to extract class-specific features while keeping the features of different superclasses well separated.
Our framework is shown in Fig. <ref> (b). First, we use Batch Norm layers<cit.> to perform separate affine transformations on the original feature map. The transformed feature maps, each representing a semantic foreground or background, are then passed through a convolution block for feature extraction before further classification. The classification process is coherent with the semantic meaning of each branch. Namely, the foreground branch includes a superclassifier and a subclassifier that classify the superclass and subclass foreground, while the background branch is dedicated solely to classifying background pixels. Finally, the two separate network branches are jointly supervised by segmentation losses on super- and subclass labels. The aforementioned Prior Concatenation continues to take effect by concatenating predicted superclass logits onto the inputs of the subclassifier.
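The sketch below illustrates Separate Normalization under the same assumptions as the previous snippet: two Batch Norm branches share the backbone features, and the foreground branch reuses Prior Concatenation. The layer choices are illustrative, not the exact configuration.

```python
import torch
import torch.nn as nn

class SeparateNorm(nn.Module):
    """Separately normalized foreground/background branches (a sketch)."""
    def __init__(self, ch: int, n_super: int, n_sub: int):
        super().__init__()
        self.fg_norm = nn.BatchNorm3d(ch)   # foreground affine transform
        self.bg_norm = nn.BatchNorm3d(ch)   # background affine transform
        self.fg_conv = nn.Sequential(nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU())
        self.bg_conv = nn.Sequential(nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU())
        self.fg_super = nn.Conv3d(ch, n_super, 1)        # superclass foreground
        self.fg_sub = nn.Conv3d(ch + n_super, n_sub, 1)  # prior concatenation
        self.bg_head = nn.Conv3d(ch, 1, 1)               # background pixels

    def forward(self, feats: torch.Tensor):
        fg = self.fg_conv(self.fg_norm(feats))
        bg = self.bg_conv(self.bg_norm(feats))
        s_logits = self.fg_super(fg)
        z_logits = self.fg_sub(torch.cat([fg, s_logits], dim=1))
        b_logits = self.bg_head(bg)
        return s_logits, z_logits, b_logits
```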
§.§.§ HierarchicalMix
Given the scarcity of subclass labels, we intend to maximally exploit the existent subclass supervision to guide the segmentation of coarsely labeled samples. Inspired by GuidedMix <cit.>, which provides consistent knowledge transfer between similar labeled and unlabeled images with pseudo labeling, we propose HierarchicalMix(HM) to generate robust pseudo supervision. Nevertheless, GuidedMix relies on image distance to select similar images and performs a whole-image mixup, which loses focus on the semantic meaning of each region within an image. We address this limitation by exploiting the additional superclass information for a more targeted mixup. This information allows us to fuse only the semantic foreground regions, realizing a more precise transfer of foreground knowledge. A detailed pipeline of HierarchicalMix is described below.
As shown in Fig. <ref>, for each sample (x,y) in the dataset that does not have subclass labels, we pair it with a randomly chosen fine-labeled sample (x',y',z'). First, we apply a random rotation and flipping 𝕋 to (x,y) and feed both the original sample and the transformed sample 𝕋x into the segmentation network f. An indirect segmentation of x is obtained by performing the inverse transformation 𝕋^-1 on the segmentation result of 𝕋x. A transform-invariant pseudo subclass label map z_pse is generated according to the following scheme: pixel (i,j) in z_pse is assigned a valid subclass label index (z_pse)_i,j=f(x)_i,j only when f(x)_i,j agrees with [𝕋^-1f(𝕋x)]_i,j with confidence above a threshold τ, and f(x)_i,j falls within the superclass annotated at pixel (i,j).
Next, we adopt image mixup by cropping the bounding box of the foreground pixels in x', resizing it to match the size of the foreground in x, and linearly overlaying it on x by a factor of α. This semantically mixed image x_mix carries subclass labels z=resize(z') from the fine-labeled image x'. Then, we pass it through the network to obtain a segmentation result f(x_mix), which is supervised by the α-weighted superposition of the pseudo label map z_pse and the subclass labels z: ℒ_p=ℒ(f(x_mix), α· z+(1-α)· z_pse).
The intuition behind this framework is to simultaneously leverage the information from both unlabeled and labeled data by incorporating more robust supervision from transform-invariant pseudo labels. Meanwhile, mixing up only the semantic foreground provides a way of exchanging knowledge between similar foreground objects while lifting the confirmation bias in pseudo labeling <cit.>.
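The pseudo-labeling rule can be summarized in a few lines; the sketch below assumes softmax maps from the two views and a 1-D tensor sub_to_super mapping each subclass index to its superclass, with -1 marking ignored pixels. These names are ours, not from the released code.

```python
import torch

def pseudo_labels(p1, p2, y_super, sub_to_super, tau):
    """Transform-invariant pseudo subclass labels (a sketch of the rule).

    p1: softmax of f(x); p2: softmax of T^-1 f(T x); y_super: superclass map.
    A pixel receives a pseudo label only if both views agree on the label
    with confidence >= tau and the label lies in the annotated superclass.
    """
    conf1, lab1 = p1.max(dim=1)
    conf2, lab2 = p2.max(dim=1)
    agree = (lab1 == lab2) & (conf1 >= tau) & (conf2 >= tau)
    same_super = sub_to_super[lab1] == y_super
    return torch.where(agree & same_super, lab1,
                       torch.full_like(lab1, -1))  # -1 = ignore in the loss
```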
§ EXPERIMENTS
§.§.§ Dataset and preprocessing
We conduct all experiments on two public datasets. The first one is the ACDC dataset <cit.> (https://www.creatis.insa-lyon.fr/Challenge/acdc/databases.html), which contains 200 MRI images with segmentation labels for the left ventricle cavity (LV), right ventricle cavity (RV), and myocardium (MYO). Due to the large inter-slice spacing, we use 2D segmentation as in <cit.>. We adopt the processed data and the same data division as in <cit.>, which uses 140 scans for training, 20 scans for validation and 40 scans for evaluation. During inference, predictions are made on each individual slice and then assembled into a 3D volume. The second is the BraTS2021 dataset <cit.> (http://braintumorsegmentation.org/), which consists of 1251 mpMRI scans with an isotropic 1 mm^3 resolution. Each scan includes four modalities (FLAIR, T1, T1ce, and T2), and is annotated for the necrotic tumor core (TC), peritumoral edematous/invaded tissue (PE), and the GD-enhancing tumor (ET). We randomly split the dataset into 876, 125, and 250 cases for training, validation, and testing, respectively. For both datasets, image intensities are normalized to values in [0, 1] and the foreground superclass is defined as the union of all foreground subclasses.
§.§.§ Implementation details and evaluation metrics
To augment the data during training, we randomly cropped the images with a patch size of 256 × 256 for the ACDC dataset and 96 × 96 × 96 for the BraTS2021 dataset. The model loss ℒ is the sum of the cross-entropy loss and the Dice loss.
We trained the model for 40,000 iterations using SGD optimizer with a 0.9 momentum and a linearly decreasing learning rate that starts at 0.01 and ends with 0. We used a batch size of 24 for the ACDC dataset and 4 for the BraTS2021 dataset, where half of the samples are labeled with subclasses and the other half only labeled with superclasses. More details can be found in the supplementary materials. To evaluate the segmentation performance, we used two widely-used metrics: the Dice coefficient (DSC) and 95% Hausdorff Distance (HD_95). The confidence factor τ mentioned in HierarchicalMix starts at 1 and linearly decays to 0.4 throughout the training process, along with a weighting factor α sampled according to the uniform distribution on [0.5, 1].
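For reference, the τ decay and α sampling described above amount to the following small helper (the values are taken from the text; the function name is ours):

```python
import random

def schedule(it: int, total_it: int = 40000):
    """tau decays linearly from 1.0 to 0.4 over training; alpha ~ U[0.5, 1]."""
    tau = 1.0 - 0.6 * it / total_it
    alpha = random.uniform(0.5, 1.0)
    return tau, alpha
```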
§.§.§ Performance comparison with other methods
To evaluate the effectiveness of our proposed method, we first trained two U-Net models <cit.> to serve as upper and lower bounds of performance. The first U-Net was trained on the complete subclass dataset {(x^l,y^l,z^l)}_l=1^N, while the second was trained on its subset {(x^l,y^l,z^l)}_l=1^n. Then, we compared our method with the following four methods, all of which were trained using n subclass labels and N superclass labels. Modified U-Net (Mod): This method adds an additional superclass classifier alongside the subclass classifier in the U-Net.
Negative Learning (NL): This method incorporates superclass information into the loss module by introducing a separate negative learning loss in the original U-Net. This additional loss penalizes pixels that are not segmented as the correct superclass.
Cross Pseudo Supervision (CPS) <cit.>: This method simulates pseudo supervision by utilizing the segmentation results from two models with different parameter initializations, and adapts their original network to the Modified U-Net architecture.
Uncertainty Aware Mean Teacher (UAMT)<cit.>: This method modifies the classical mean teacher architecture<cit.> by adapting the teacher model to learn from only reliable targets while ignoring the rest, and also adapts the original network to the Modified U-Net architecture.
The quantitative results presented in Table <ref> reveal that all methods that utilize additional superclass annotations outperformed the baseline method, which involved training a U-Net using only limited subclass labels. However, the methods that were specifically designed to utilize superclass information or explore the intrinsic structure of the subclass data, such as NL, CPS, and UAMT, did not consistently outperform the simple Modified U-Net. In fact, these methods sometimes performed worse than the simple Modified U-Net, indicating the difficulty of utilizing superclass information effectively. In contrast, our proposed method achieved the best performance among all compared methods on both the ACDC and BraTS2021 datasets. Specifically, our method attained an average Dice score of 87.3% for ACDC and 75.4% for BraTS2021, outperforming the closest competitor by 5.0% and 1.4%, respectively.
§.§.§ Ablation studies
In this study, we performed comprehensive ablation studies to analyze the contributions of each component and the performance of our method under different numbers of images with subclass annotations. The performance of each component is individually evaluated, and is listed in Table <ref>.
Each component has demonstrated its effectiveness in comparison to the naive Modified U-Net method. Moreover, models that incorporate more components generally outperform those with fewer components. The effectiveness of the proposed HierarchicalMix is evident from the comparisons made with models that use only image mixup or pseudo-labeling for data augmentation, while the addition of Separate Normalization consistently improves the model performance. Furthermore, our method was competitive with a fully supervised baseline, achieving comparable results with only 6.5% and 3.4% subclass annotations on ACDC and BraTS2021, respectively.
§ CONCLUSION
In this work, we proposed an innovative approach to address the problem of efficient subclass segmentation in medical images, where limited subclass annotations and sufficient superclass annotations are available. To the best of our knowledge, this is the first work specifically focusing on this problem. Our approach leverages the hierarchical structure of categories to design network architectures and data generation methods that enable the network to distinguish between subclass categories more easily. Specifically, we introduced a Prior Concatenation module that enhances confidence in subclass segmentation by concatenating predicted logits from the superclass classifier, a Separate Normalization module that stretches the intra-class distance within the same superclass to facilitate subclass segmentation, and a HierarchicalMix model that generates high-quality pseudo labels for unlabeled samples by fusing only similar superclass regions from labeled and unlabeled images. Our experiments on the ACDC and BraTS2021 datasets demonstrated that our proposed approach outperformed other compared methods in improving the segmentation accuracy. Overall, our proposed method provides a promising solution for efficient fine-grained subclass segmentation in medical images.
|
http://arxiv.org/abs/2307.02928v1
|
20230706112853
|
AllSight: A Low-Cost and High-Resolution Round Tactile Sensor with Zero-Shot Learning Capability
|
[
"Osher Azulay",
"Nimrod Curtis",
"Rotem Sokolovsky",
"Guy Levitski",
"Daniel Slomovik",
"Guy Lilling",
"Avishai Sintov"
] |
cs.RO
|
[
"cs.RO"
] |
Tactile sensing is a necessary capability for a robotic hand to perform fine manipulations and interact with the environment. Optical sensors are a promising solution for high-resolution contact estimation. Nevertheless, they are usually not easy to fabricate and require individual calibration in order to acquire sufficient accuracy. In this letter, we propose AllSight, an optical tactile sensor with a round 3D structure designed for robotic in-hand manipulation tasks. AllSight is mostly 3D printed, making it low-cost, modular and durable, and it is the size of a human thumb while offering a large contact surface. We show the ability of AllSight to learn and estimate a full contact state, i.e., contact position, forces and torsion. With that, an experimental benchmark between various configurations of illumination and contact elastomers is provided. Furthermore, the robust design of AllSight provides it with a unique zero-shot capability such that a practitioner can fabricate the open-source design and have a ready-to-use state estimation model. A set of experiments demonstrates the accurate state estimation performance of AllSight.
§ INTRODUCTION
The sense of touch endows humans with neural sensory-motor feedback regarding the shape, weight and texture of objects within contact <cit.>. Hence, touch is vital for humans in order to ensure stable grasps and safe object manipulations <cit.>.
Similar to humans, robots require touch sensing in order to acquire information regarding the state of contact events. In order to manipulate objects effectively in complex and changing environments, a robot must be able to perceive when, where and how it is interacting with the objects <cit.>. Touch, or tactile sensing, can augment visual perception or replace it when occlusions occur by the robot fingers themselves or by obstacles. It has the potential to enable robots to infer about the object's relative state, geometry and texture <cit.>. Accurate and low-cost high-resolution tactile sensors that can provide a full state of contact, namely contact locations and forces, would have a significant role in, for example, material handling, assembly <cit.>, in-hand manipulation <cit.> and prosthesis <cit.>.
Tactile sensors are commonly used to measure a range of touch stimuli including contact pressure <cit.>, vibrations <cit.>, deformation of the contact pad <cit.> and surface texture <cit.>. Within the range of tactile usages, a variety of tactile sensing technologies exists including force sensitive resistors <cit.>, capacitive transducers <cit.>, photoelectric sensing <cit.> and piezo-resistors <cit.>. Although they can provide useful data, these sensors are usually designed for specific tasks in limited environments.
Recently, camera-based optical tactile sensors have become increasingly common due to high-resolution signals and soft contact surfaces <cit.>. An optical sensor typically uses an internal camera to track the deformation of a soft elastomer upon contact with an object <cit.>. A captured image can encode information regarding the state of contact, i.e., contact location with respect to the sensor's coordinate frame and contact forces. Despite the abundance in configurations of optical sensors, they yet to provide a robust tactile solution and are limited in various aspects.
While camera-based sensors are not a new notion, significant advancement and integration in robotic systems were achieved in the last few years with the increase of computing capabilities and hardware minimization. Various small sensors with flat contact surfaces were introduced <cit.>. However, these may have difficulties in general manipulation tasks due to the flat contact surfaces. Hence, other sensors introduced spatial surface geometries <cit.>. Nevertheless, these sensors yet to provide complete and reliable contact information. Some were not demonstrated to provide a complete contact state <cit.>, are limited in load forces <cit.> or require complex and expensive fabrication process <cit.>. Moreover, no sensor has demonstrated an ability for zero-shot transfer of a trained contact state estimation model to a new one.
In this paper, we cope with the limitations of previous optical designs and present a novel spatial sensor termed AllSight.
AllSight is designed to be small and low-cost for use on multi-finger hands in in-hand manipulation tasks. The 3D contact surface of AllSight is in the shape of a cylinder with a hemispherical end, as seen in Figure <ref>. Most of the sensor's components, excluding the electronics and elastomer, are printable. In particular, the transparent shell of AllSight is 3D printed, making the sensor low-cost, easily fabricated and more accessible to practitioners. While fabricated in a low-cost process, AllSight is shown to provide an accurate full contact state including position, normal and tangential forces, and torsion. The sensor is the smallest of its kind while able to measure larger forces than previous designs, up to 15 N. In addition, the fabrication process results in a durable sensor able to withstand high and recurring loads. AllSight is modular with easily replaceable components, where different types of elastomers and illumination can be rapidly swapped. Through a comparative analysis of AllSight, we try to answer some fundamental questions in the design of optical tactile sensors, such as the preferred illumination and surface texture. The comparative analysis is conducted through supervised learning. The structure and various tested sensor configurations are seen in Figure <ref>.
The design, trained models, simulation environment, and code are provided open-source[Open-source design, fabrication instructions, trained models, code and simulation: ] for the benefit of the community and to advance research in the field. However, the trained models should provide sufficient accuracy on a newly fabricated sensor. Therefore, we analyze the transfer learning capabilities of AllSight in zero-shot settings and in fine-tuning with limited new data. We show that these can be done by pre-training with real data collected from a source sensor or with simulated data from TACTO, a physics-engine tactile simulation <cit.>. Overall, AllSight is capable of achieving sufficient accuracy in zero-shot transfer and high accuracy with fine-tuning on limited new data. Consequently, advanced and novice practitioners can have access to a low-cost, easy to fabricate and reproducible sensor with a ready-to-use model.
A prominent goal of this paper is to share insights with the robotics community and to help overcome the numerous bottlenecks faced in the fabrication of spatial tactile sensors. We aim to encourage the wider adoption and development of such sensing technology. To summarize, the contributions of this work are as follows. First, we propose a novel design of a spatial optical tactile sensor termed AllSight, which is compact and offers high resolution. Since most of the parts are 3D printed, AllSight is low-cost, easy to fabricate, modular and available open-source. Due to its structure, AllSight can provide a full contact state including contact localization, torsion, normal force and shear. Then, an informative comparative analysis is provided involving various known sensor configurations, which could assist practitioners in design choices. Finally, we exhibit the ability of AllSight to transfer to newly fabricated sensors through zero-shot inference and fine-tuning. To the best of the authors' knowledge, AllSight is the only 3D optical-based sensor that measures the entire contact state, is capable of zero-shot learning, and is available open-source.
§ RELATED WORK
Seminal work on optical sensors introduced the use of a black and white camera for observing the deformation of a soft membrane through a glass plate <cit.>. Later work has shown the ability to use the same technology for a round or finger-shaped sensor <cit.>. The GelSight is the first to present a relatively small tactile finger-tip with a flat pad able to measure high-resolution contact geometry <cit.>. Photometric Stereo (PS) was integrated, where surface normals during contact deformation are estimated by observing the object under different lighting conditions. Hence, contact force, slip and shape were inferred by observing deformation and calculating geometry gradients. While exhibiting good performance, GelSight and similar sensors (e.g., <cit.>) may have difficulties in general dexterous manipulation tasks due to the flat contact surface. Flat sensors require constant alignment with the surface of the object and may not maintain contact during object sliding and rolling <cit.>. Hence, the TacTip set of sensors was introduced, having a variety of different contact pads including flat and hemi-spherical ones <cit.>. However, the contact pads did not include a rigid support for the elastomer and, thus, were reported to be too soft for feasible manipulation tasks <cit.>.
Tactile sensors that can efficiently manipulate objects must have a spatial surface structure. Yet, recent attempts to develop tactile sensors with 3D sensing surfaces have raised a number of challenges. To begin with, creating tactile sensors with round contact surfaces can be difficult from a manufacturing standpoint. Fabrication may require intricate designs and the use of high-budget machinery such as industrial (e.g., Stratasys) <cit.> or Aluminum 3D printers <cit.>. Furthermore, the outer layer of the sensor is in constant contact with objects during use and, therefore, the surface pad is prone to wear and tear over time, adding to the complexity of creating a round tactile sensor that is both sensitive and reliable <cit.>. It can also be challenging to make the sensor modular, for convenient component replacement, and easy to use through a plug-and-play interface <cit.>.
Researchers have conducted extensive application studies with flat sensors regarding contact localization <cit.>, depth reconstruction <cit.> and directional force distribution <cit.>.
Sensors with round contact surface, on the other hand, yet to provide complete and reliable contact information. Some sensors can only provide contact localization with no load information <cit.>. However, manipulation capabilities require also information regarding contact forces. Recent sensor developments have tried to provide full contact state. A cone-shape thumb-sized sensor, for instance, provides a full force map along with contact localization <cit.>. However and due to its skeletal structure, it is sensitive to object penetration and, hence, limited to contact forces of up to 2 N. Similarly, the DenseTact provides contact loads through an hemi-spherical pad. A randomized pattern was added to the surface of the contact pad in order to increase features in the images <cit.>. The ability for transfer learning was also demonstrated in a limited setting without zero-shot and with some portion of new data used for calibrating the target sensor. The hemi-sphere of DenseTact is made of an elastomer without a rigid structure. This and the lack of a cylindrical extension to the hemi-sphere may reduce its applicability in manipulation tasks. Transfer learning was also recently demonstrated in classification on the DIGIT flat sensor <cit.>. In the work, a diffusion model was trained to generate realistic tactile images and later calibrated to unseen sensors.
Table <ref> provides a comparative summary of state-of-the-art work on optical-based tactile sensors.
§ DESIGN AND FABRICATION
AllSight is an optical tactile sensor designed to be compact and suitable for usage on various robotic end-effectors and multi-fingered hands. In addition, the contact region of AllSight is round with full 360^∘ sensing, clearly visible without blind spots or obscurance. While there have been some advances in round-shaped tactile sensors, their reproduction may fail due to complex and sensitive fabrication. AllSight's sensing surface is more robust and easily interchangeable than previous designs, making the sensor more appealing. The estimated manufacturing cost for AllSight is 30 USD per sensor, excluding a micro-controller. The main challenges in fabricating a compact and all-around tactile sensor are related to its small size and curved surface. However, we have devised a fabrication process, based on in-depth experimentation, so that the sensor is easily fabricated and robustly reproduced by novice users.
§.§ General Structure
An illustrative description of AllSight is given in Figure <ref>. Similar to previous optical tactile sensors, the core of the design is a single camera. The camera is covered by a three-layered tube in the shape of a cylinder with a hemispherical end. The inner layer of the tube is a rigid crystal-clear shell. A transparent elastomer covers the shell and is coated on its exterior with a reflective silicone paint. Such a tube formation provides an opaque structure where the camera observes the deformation of the elastomer from within upon contact. For better visibility, the inner surface of the shell is evenly illuminated by an annular printed circuit board (PCB) with embedded LEDs.
Photometric effects and structured lighting enable the camera to detect small deformations of the elastomer in physical contact. Prior work uses either white or RGB lights in different variations for contact localization and shape reconstruction. A collimator covers the LED PCB for channeling the light towards the shell and for minimizing illumination losses. All components are assembled on a mounting plate which is the connecting link to a desired hand. Unlike other sensor designs, AllSight is the smallest all-around tactile sensor that supports various elastomers and illumination configurations with simple assembly. Experiments in this work provide analyses of common variations.
§.§ Fabrication
As described above, AllSight has six main components: camera, mounting plate, customized LED PCB, collimator, shell and elastomer. The fabrication process for these components, illustrated in Figure <ref>, is described below, including design principles and lessons learned.
§.§.§ Camera
To keep the sensor compact and accessible, a Raspberry-Pi Zero camera is used. The camera is inexpensive, costing approximately $16, and has a wide 160^∘ fisheye lens. Video is streamed directly to a PC via USB using a Raspberry-Pi Zero in camera mode for easy plug-and-play support. It operates at a frame rate of 60fps and outputs 640×480 resolution frames. Similar to <cit.>, in order to obtain color images that are uniform and balanced, it is crucial to disable the automatic white balance function and adjust the fixed gains for the red and blue channels, along with the exposure compensation for the RGB channels.
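As a rough illustration, fixing the color pipeline on the Pi side might look as follows with the picamera library; the specific gain and compensation values are placeholders, not the calibrated values we used.

```python
from picamera import PiCamera

camera = PiCamera(resolution=(640, 480), framerate=60)
camera.awb_mode = 'off'            # disable automatic white balance
camera.awb_gains = (1.5, 1.6)      # fixed red/blue gains (illustrative values)
camera.exposure_compensation = 0   # fixed exposure compensation
```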
§.§.§ Rigid transparent shell
The purpose of the shell is to provide rigidity to the structure of the sensor upon contact while enabling clear visibility of the deformed external elastomer. Different methods for fabricating the shell were tested, including clear epoxy resin <cit.>, an off-the-shelf plastic tube <cit.> and a printable skeleton <cit.>. While clear epoxy resin allows complex designs, the resulting shell was not sufficiently clear and required too many fabrication steps. The plastic tube is clear yet not modular. A 3D-printed skeleton provides modularity but is not strong enough to withstand various pressures and point contacts within its gaps. In addition, occlusions by the ribs exist. Therefore, we propose fabrication through Stereolithography (SLA) 3D printing. The shell is designed in a custom size and shape which can be modified and scaled. Then, the shell is printed with clear resin, followed by surface polishing and application of lacquer. Such an approach provides both a crystal-clear shell and modularity. The shell can be easily adapted to additional shapes or scaled.
§.§.§ Elastomer
The elastomer covers the entire exterior of the shell. In such a way, the camera can observe deformations of the elastomer through the shell. Here also, the elastomer is made relatively clear. In this work, we test two designs of the elastomer, both clear, while one has additional dot markers. The elastomer is fabricated through molding with a two-piece mold, seen in Figure <ref>. Different materials were tested for the 3D-printed mold, including SLA resin and Polylactic Acid (PLA). SLA was chosen as it provides a much smoother mold surface, which affects the quality of the elastomer. For the dotted elastomer, our approach does not require any complex laser cutting <cit.> but merely printing a mold with tiny spikes. Smooth-On Solaris™ is a clear and colorless silicone used for molding the elastomer. Also, it was found to be resistant to tearing and better suitable for in-hand manipulation <cit.>. Prior to casting, the interior of the mold was sprayed with lacquer to prevent sticking and allow easy release. After casting, the mold should be placed in a vacuum desiccator at a pressure of 1 bar for removing gas bubbles within the silicone. Having bubbles may damage the clarity of the sensor and prevent its robustness in model transfer. The mold is left to cure for approximately 24 hours. Finally, the elastomer is carefully removed from the mold and glued to the shell using a clear silicone adhesive (e.g., Smooth-On Sil-Poxy™).
§.§.§ Reflective coating
The exterior of the elastomer is coated with an opaque reflective material aimed to contain and intensify the lighting within the shell. Aluminum powder and grey silicone ink were tested for coatings, while the latter proved to be more robust to wear and tear. The silicone base coating is applied by mixing an ink catalyst with Print-On™ gray silicone ink and Smooth-On NOVOCS™ silicone solvent gloss in a 1:10:30 mass ratio. Upon testing, the coating was still prone to tearing. Hence, we formulated a new mixture of silicone by adding Smooth-On EcoFlex™ (00-10) to the mixture, as suggested in <cit.>. A final mixture ratio of 1:10:10:30 was used for the catalyst, paint, gel and solvent, respectively. The acquired coating tested durable and reliable under high contact forces. For the dotted elastomer, prior to applying the reflective coating, the notches formed by the spiked mold were coated in a dip-and-wipe method. The elastomer was covered with a black silicone pigment in the same mass ratio as the reflective coating and then wiped. Only the notches retained the black paint after wiping. The reflective coating was then applied.
§.§.§ Mounting plate
The mounting plate is 3D printed with either FDM or SLA printers. Hence, can be adapted to various robotic hands by simply modifying the design of the interfacing plate.
§.§.§ Illumination
While off-the-shelf LED PCBs are available, they can be bulky and limit the design. Hence, a customized annular PCB was designed. The PCB includes three sets of LEDs with a total of nine LEDs. Note that different combinations of LED colors are supported. Hence, we evaluate and compare between three sequences of LEDs including all white <cit.>, RRRGGGBBB <cit.> and RGBRGBRGB <cit.> as seen in Figure <ref>. These combinations will be analyzed for performance. The PCB is placed between the mounting plate, while surrounding the camera, and the edge of the elastomer. The LEDs produce nine cones of light. In order to provide a uniform illumination pattern, a collimator was designed and 3D printed for light piping <cit.>. The collimator is a ring covering the PCB with holes that adjust the direction of the lighting into the volume of the elastomer.
All components are assembled onto the mounting plate with three screws. The sensor is designed such that each component can easily be replaced or modified. The final shape of the assembled sensor yields a membrane with a hemisphere of 24 mm diameter on a 14 mm tall cylindrical base. The mounting plate with the PCB and collimator has a height of 12 mm. Figure <ref> shows examples of high-resolution and clear tactile images during contact with various objects.
§ TACTILE STATE LEARNING
A contact state s∈ℝ^7 of AllSight is defined by the spatial location of contact x∈ℝ^3 on the shell, the force vector f∈ℝ^3 at the contact point, and the torsion τ∈ℝ with respect to the normal at the contact. Note that the force vector at the contact includes the normal force f_z and the tangential forces f_x and f_y, as seen in Figure <ref>. The proposed approach for training a state estimation model based on real and simulated image datasets is illustrated in Figure <ref> and discussed next.
§.§ Data collection
We use two sources of training data:
§.§.§ Real-world data
Dataset 𝒫_real is collected by labeling images captured by the internal camera during premeditated contacts. A robotic arm equipped with a Force/Torque (F/T) sensor and an indenter touches the surface of the sensor at various contact locations and loads. During contact, an image I_i is taken along with a state measurement s_i. The contact position x_i is calculated through the forward kinematics of the arm. The load at the contact (i.e., force vector f_i and torsion τ_i) is measured by the F/T sensor fixed at the wrist of the robotic arm. In addition to the contact state, the maximum penetration depth d_i of the indenter is also measured.
The acquisition and labeling process yields dataset 𝒫_real={(I_i,x_i,f_i,τ_i,d_i) }_i=1^N of N labeled images. In addition, reference image I_ref is recorded for a sensor without any contact.
[Figure: The contact state is defined by the position x of contact with respect to the sensor's coordinate frame, the normal force f_z at the contact, the tangential forces f_x and f_y, and the torsion τ about the normal axis.]
§.§.§ Simulated data
AllSight was implemented in the TACTO physics-engine simulator for optical-based tactile sensors <cit.>. In TACTO, we calibrated the renderer to sufficiently match the real world by including reference images from real sensors. To enable and optimize sim-to-real pre-training of the state estimation model, we collected different reference images from different sensors and used them for augmentation. The acquired images were augmented by adding noise and varying the lighting conditions. The TACTO simulator does not support marker motion and, therefore, only images from sensors with clear shells were used.
A simulated dataset 𝒫_sim was generated by labeling M images captured in TACTO during random premeditated contacts. During contact, an image I_i is taken along with the contact position x_i such that 𝒫_sim={(I_i,x_i) }_i=1^M. Penetration depth d_i can also be acquired but not used here.
§.§ State estimation
We adopt a modified ResNet-18 architecture <cit.> as the state estimation model. The top layer is removed and the flattened output features are fed through two Fully-Connected (FC) layers of size 512 and 256, with ReLU activation functions. At each iteration, both the reference image I_ref and the contact image I_t are down-sampled to a resolution of 224×224 and stacked along the channel dimension. The stacked image is then passed through the model to get the estimated state s_t. Simulation data offers a means to collect training data with much lower effort. While a simulator often cannot provide data identical to the real world, one can pre-train a model prior to fine-tuning it with real data. In this way, a smaller dataset of real-world data is required. Hence, the decoder of the contact localization model, which approximates x, is pre-trained with 𝒫_sim. Finally, the entire contact model is fine-tuned on the real dataset 𝒫_real.
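A minimal PyTorch sketch of this estimator is shown below: a ResNet-18 backbone whose first convolution accepts the 6-channel stacked input (two RGB images), followed by the two FC layers described above and a final linear output for the 7-dimensional state. Layer details beyond those stated in the text are assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

class ContactStateNet(nn.Module):
    def __init__(self, state_dim: int = 7):
        super().__init__()
        backbone = resnet18(weights=None)
        # Two stacked RGB images -> 6 input channels.
        backbone.conv1 = nn.Conv2d(6, 64, kernel_size=7, stride=2,
                                   padding=3, bias=False)
        backbone.fc = nn.Identity()  # remove the top layer (512-d features)
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(512, 512), nn.ReLU(),
            nn.Linear(512, 256), nn.ReLU(),
            nn.Linear(256, state_dim),  # x (3), f (3), torsion (1)
        )

    def forward(self, img_ref: torch.Tensor, img_t: torch.Tensor):
        x = torch.cat([img_ref, img_t], dim=1)  # stack along channels
        return self.head(self.backbone(x))
```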
§ EXPERIMENTS
§.§ Data collection
Simulated dataset. Dataset 𝒫_sim comprises simulated tactile images and corresponding contact poses involving six types of indenters: three spherical indenters, one rectangular, one elliptical and one squared. These indenters were utilized only on AllSight models with clear shells. To calibrate the simulation, we employed reference images from six different sensors. In addition, Gaussian noise and various illumination settings were used to augment the simulated images. These are intended to make the model independent of the background and focus on capturing the color gradient observed at the contact pixels. For the localization pre-training, our dataset consists of 18,000 samples, with 1,500 samples allocated for each indenter-configuration pair.
Real dataset.
As described in Section <ref>, dataset 𝒫_real is collected in an automated process. The collection setup, seen in Figure <ref>, consists of an AllSight sensor mounted on a fixed frame. An indenter is mounted on an OpenMANIPULATOR-P arm equipped with a Robotiq FT-300 sensor. The system is controlled using the Robot Operating System (ROS). During the collection, the data stream is acquired at a frequency of 60 Hz. The train and test data are collected in episodes where, at each episode, the robot selects a contact point to press on the sensor's surface. Upon contact, the arm either presses perpendicular to the surface, tilts back and forth about the normal to the surface in order to exert tangential forces, or twists the end-effector with respect to the surface normal for torsion samples. These are chosen arbitrarily and in varying magnitudes within the ranges f_z∈[-12N,0.8N], f_x,f_y∈[-5N,5N] and τ∈[-0.05Nm,0.05Nm]. During the pressing, images are taken at 480×480 resolution, after circular masking and centering, along with contact states.
3D-printed indenters are used for generating different contact geometries including round indenters of radius 3, 4 and 5 mm, square (edge length 6 mm), hexagonal (edge length 3 mm) and elliptical (axis lengths 8 mm and 4 mm) heads.
§.§ Contact State Estimation
We assess the precision of state estimation using collected data. To compare performance, we evaluate a series of six sensor configurations seen in Figure <ref>. These configurations involve cross combination of shells with and without markers, along with three illumination setups: all white, RRRGGGBBB, and RGBRGBRGB. Several experiments were conducted to evaluate the contact estimation capabilities of . In each experiment, the dataset was divided, allocating 80% for training and 20% for testing purposes.
In the first experiment, a comparative analysis was conducted among the six different AllSight configurations. Each sensor was trained using optimized hyper-parameters, utilizing N=12,000 collected samples in 𝒫_real featuring a single spherical indenter of 3 mm radius.
Figure <ref> shows state estimation results over the test set with three different spherical indenters of 3 mm, 4 mm and 5 mm radii. The contact location, force magnitude f (with f=(f_x,f_y,f_z)^T) and torsion errors are shown with respect to f. Results show that all configurations achieve low estimation errors with subtle differences. Nevertheless, using markers provides lower errors compared to clear elastomers. Overall, the RRRGGGBBB with markers configuration provides the best estimation. Videos of experiments and demonstrations with the sensor, including in-hand manipulation (Figure <ref>), can be seen in the supplementary material.
The above model was trained on data collected solely from spherical indenters. Hence, we observe the ability to learn a model for contact state estimation with various indenter geometries. A model was trained for the RRRGGGBBB-markers configuration with a data of size N=20,000 samples featuring spherical, hexagonal, ellipse and square headed indenters. Figure <ref> illustrates a heatmap of state estimation errors with respect to the indenter position on the contact surface. The mean position, force and torsion errors are significantly low at 0.59 mm, 0.15 N and 0.0002 Nm, respectively. We also separately trained a model to estimate the penetration depth d_i of the indenter. The low mean error of 0.15±0.14 mm obtained from the sensor indicates its ability to provide reliable geometric information about the contact shape.
§.§ Data Efficiency
The models trained above were based on pre-training with simulated data. We now evaluate the contribution of the simulation and the sim-to-real effort to the fine-tuning of the model. First, 1,000 samples from a real RRRGGGBBB-Clear sensor were taken as test data for evaluation. Two models were trained with up to 10,000 real samples: one without any pre-training, while the second was initially pre-trained with 4,500 simulated samples. Figure <ref> shows the accuracy improvement over the test data for both models with the addition of real training samples. In zero-shot inference, i.e., with no fine-tuning, the pre-trained model already provides good performance. With a small amount of real data (approximately 2,000 samples) for fine-tuning, the pre-trained model achieves good performance. These results emphasize the contribution of the simulation to reducing the effort of real data collection.
§.§ Transfer Learning
The ability of the proposed sensor to provide an accurate contact state estimation has been shown above. Nevertheless, the analysis was performed on the same sensor that was used to collect training data. A practitioner may desire to instantly use a newly fabricated sensor without further training, i.e., zero-shot inference. Alternatively, the practitioner can collect a limited amount of labeled data from the new sensor and fine-tune the model for better results. Hence, we now wish to observe the ability of transferring a learned state estimation model to a newly fabricated sensor. Expanding upon previous findings that demonstrated superior results for the RRRGGGBBB-marker configuration, we assessed the transfer learning capabilities of the sensor in both zero-shot and fine-tuning settings.
We strengthen the state estimation model and train it with N=40,000 samples collected from three distinct sensors using round indenters of 3 mm, 4 mm and 5 mm radii. To enhance the ability of the model to generalize, we augmented the images with lighting randomization. Furthermore, 2,300 image samples labeled with full contact states were collected from a newly fabricated sensor for possible fine-tuning of the model. An additional 600 labeled image samples not included in the training were collected from the new sensor for testing the model.
Figure <ref> exhibits the accuracy of state estimation over the test data of the new sensor with regards to the number of new samples used to fine-tune the model. First, when no data was used to fine-tune the model, i.e., zero-shot inference, the position, force and torsion errors are already low at 3.49±0.41 mm, 2.06±0.23 N and 0.0068±0.0016 Nm, respectively. With the addition of a limited amount of new training data for fine-tuning, accuracy was improved even more. The results provide compelling evidence that the sensor achieves satisfactory zero-shot performance with ability to further improve. We attribute this success to the robust design of the sensor, the incorporation of reference images and the benefits gained from training data of multiple sensors. Evidently, a practitioner can fabricate our open-source sensor and have a ready-to-go operational model.
§ CONCLUSIONS
In this paper, we have proposed AllSight, an optical tactile sensor with a round surface. The fabrication process was discussed, where most of the components, including the clear rigid shell, are 3D printed. The design is fully open-source with fabrication instructions for the benefit of the robotics community. Furthermore, we have presented a comprehensive analysis of the contact state estimation accuracy provided by possible design choices. Results show that RRRGGGBBB illumination with dotted markers on the elastomer provides the best results by a small margin. Furthermore, we show the advantageous use of simulated data to pre-train a state estimation model in order to reduce the amount of real data to collect. With the proposed design and training process, AllSight is shown to have a unique zero-shot learning capability where a newly fabricated sensor can use a model trained on other sensors with sufficient accuracy. Therefore, AllSight is a low-cost, small, modular and open-source sensor with a ready-to-go state estimation model. The next step in future work can be to push the boundaries of size and further miniaturize the sensor for smaller manipulation tasks. In addition, multi-sensor inference should be analyzed in manipulation tasks.
|
http://arxiv.org/abs/2307.00794v1
|
20230703072219
|
Editors handle their collaborators' submissions despite explicit policies
|
[
"Fengyuan Liu",
"Bedoor AlShebli",
"Talal Rahwan"
] |
cs.DL
|
[
"cs.DL"
] |
Editors handle their collaborators' submissions despite explicit policies

Fengyuan Liu^1,2, Bedoor AlShebli^3*, Talal Rahwan^1*

^1 Computer Science, Science Division, New York University Abu Dhabi, UAE
^2 Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
^3 Social Science Division, New York University Abu Dhabi, UAE
^* Joint corresponding authors. E-mails: talal.rahwan@nyu.edu; bedoor@nyu.edu
§ ABSTRACT
Editors are crucial to the integrity of the scientific publishing process, yet they themselves could face conflicts of interest (COIs), whereby their personal interests interfere with their editorial duties. One such COI stems from the fact that, apart from a few exceptions, the vast majority of editors are research-active scientists with many collaborators. Each such editor could potentially handle submissions from their recent collaborators, allowing the editor to use their power, consciously or otherwise, to treat such submissions favourably, thereby jeopardizing the integrity of the editorial decision. Naturally, a number of policies have been put in place to govern such COI, but their effectiveness remains unknown. We fill this gap by analyzing half a million papers handled by 60,000 different editors and published in 500 journals by six publishers, namely Frontiers, Hindawi, IEEE, MDPI, PLOS, and PNAS.
We find numerous papers handled by editors who collaborated recently with the authors; this happens despite policies explicitly prohibiting such behavior. Overall, nearly 3% of journals have a COI rate ≥ 10%, and nearly half of them have a COI rate ≥ 2%. Moreover, leveraging three quasi-experiments, we find that COI policies have a limited, if any, effect on regulating this phenomenon. Finally, we find that editors are faster to accept submissions from their collaborators, raising the possibility of favoritism. These findings highlight the need for policy reform to assure the scientific community that all submissions are treated equally.
§ INTRODUCTION
Academic editors play a crucial role in the scientific community as gatekeepers who ensure the integrity and reliability of published research <cit.>.
Apart from very few exceptions (e.g., professional editors of Cell, Nature and Science), the vast majority of those who serve as editors do so as a community service while focusing on their primary role as research-active scientists.
This could lead to conflicts of interest (COIs), i.e., situations where the editors' private interests interfere with their editorial duties. For instance, some editors may coerce authors to cite their own work <cit.>, or engage in extreme cases of self-publishing <cit.>. Moreover, undisclosed COIs stemming from personal connections could lead to the retraction of scientific articles <cit.>.
All these situations illustrate how editors could potentially compromise the integrity of the editorial decision. Naturally, a number of policies have been put in place to govern such COIs <cit.>.
In this study, we focus on a particular type of COI that all research-active editors could face—handling submissions authored by their recent collaborators. According to the Council of Science Editors, a COI arises when an editor handles a submission (co)authored by a person with whom they collaborated recently <cit.>. In principle, editors could handle such submissions favourably, e.g., by expediting their review process. Despite the self-evident need for policies that govern this type of COI, the degree to which such policies are being enforced remains unclear.
To date, there are very few quantitative studies on policies governing editors' COI, and these studies often focus on understanding the prevalence of such policies, rather than their impact, and are restricted to medical journals <cit.>.
Here, to understand the impact of such policies and the degree to which they are being enforced, we compile a dataset of half a million papers along with their handling editors from six different publishers, namely, Frontiers, Hindawi, IEEE, MDPI, PLOS, and PNAS. By doing so, we provide much-needed evidence to inform the development and implementation of COI policies <cit.>.
§ RESULTS
When it comes to handling the aforementioned type of COI, the six publishers differ in two ways. The first difference is how editors are expected to act in such cases. Apart from Frontiers, all publishers explicitly state that editors should recuse themselves from handling papers with such COI <cit.>.
The second difference is whether and how “recent collaboration” is defined. In particular, when a manuscript is received at time t_1, and an author of that manuscript has previously collaborated with an editor at time t_2, then publishers differ in the way they assess Δ = (t_1 - t_2) to determine whether the collaboration is deemed recent. For instance, according to PNAS, a COI arises when an editor has collaborated with any author during the 48 months that precede the submission, i.e., when Δ≤ 48m <cit.>. In contrast, Frontiers, MDPI, and PLOS consider the threshold to be 24m, 36m, and 60m, respectively <cit.>.
As for the two remaining publishers in our dataset, Hindawi does not provide an explicit definition of what counts as a recent collaboration <cit.>, while the COI policies of IEEE do not mention such COI explicitly <cit.>; see Supplementary Note for more details.
With these policies in mind, we analyze the rate at which editors handle papers submitted by their recent collaborators to understand the degree to which COI policies are enforced. Moreover, we analyze the time spent between the submission and acceptance of papers to determine whether editors are faster to accept those submitted by their recent collaborators.
§.§ Editors often handle their collaborators' submissions
Since the publishers define recent collaborations using different thresholds, ranging from 24m to 60m, we consider all thresholds when quantifying the percentage of papers with COI. The results corresponding to Δ≤ 48m are reported in the main article, while those corresponding to Δ≤ 24m, Δ≤ 36m and Δ≤ 60m are reported in the supplementary materials, showing broadly similar patterns.
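Concretely, the COI rate under a given threshold can be computed as follows; this pandas sketch assumes a table with one row per paper, a 'submitted' date, and a 'last_collab' date for the most recent editor-author collaboration (NaT if none). The column names and the month-to-day conversion are our assumptions.

```python
import pandas as pd

def coi_rate(papers: pd.DataFrame, months: int) -> float:
    """Percentage of papers whose handling editor collaborated with an
    author within `months` before submission (Delta <= threshold)."""
    delta_days = (papers['submitted'] - papers['last_collab']).dt.days
    coi = delta_days.notna() & (delta_days >= 0) & (delta_days <= months * 30.44)
    return 100 * coi.mean()

# e.g., rates = {m: coi_rate(df, m) for m in (24, 36, 48, 60)}
```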
Fig. <ref>a shows that the COI rate varies greatly across publishers and journals. Starting with publishers, PNAS tops the chart with 10.5% of papers involving a COI, followed by Frontiers (5.9%) and PLOS (4.6%). As for the journals, seven of them have a COI rate > 10%, namely PLOS Medicine (17%), Journal of Fungi (14%), PLOS Neglected Tropical Diseases (13%), Frontiers in Neuroinformatics (13%), Frontiers in Pediatrics (13%), Toxins (11%), and PNAS (10.5%); all these journals are ranked in the first quartile (Q1) of their respective disciplines. Overall, nearly 3% of journals have a COI rate ≥ 10%, and nearly half of them have a COI rate ≥ 2%.
To determine whether the COI rate varies with the editor's characteristics, we use a binary gender classifier commonly used in the literature <cit.>. This reveals that men are more likely to handle submissions from their recent collaborators compared to women (Fig. <ref>b). As for the editor's affiliation, we find that the probability of handling papers with COI increases with affiliation rank (Fig. <ref>c). Finally, we find that editors differ across disciplines in their tendency to handle such papers. Specifically, the COI rates in Biology, Chemistry, and Medicine reach nearly 5%, while Engineering has the lowest rate of just over 2% (Fig. <ref>d). Overall, we find more papers with COI in the natural sciences (4.5%) than those in the social sciences (3.8%), humanities (3.5%), and applied sciences (2.4%).
§.§ Policies fail to eliminate papers with COI
The high COI rate that we have observed is particularly alarming, especially since five out of the six publishers explicitly prohibit editors from handling papers with COI. Despite this finding, it remains unclear whether these policies have any effect at all. In other words, what COI rate do we expect to see in the absence of these policies?
Answering this counterfactual question based on observational data alone is challenging, since we cannot observe a parallel universe in which the COI policies were never introduced. Nevertheless, we are able to estimate the policies' effect by leveraging three quasi-experiments whereby certain changes were introduced to the COI policies of PNAS and PLOS at different points in time. Based on this, we set out to compare editors' behaviour before vs. after the changes were introduced.
More specifically, in all three cases of policy change, the scope of papers considered to have COI was broadened. To put it differently, certain behavior that was considered acceptable by the publisher became prohibited as per the new policy. The first policy change that we analyze (Case 1) took place in July 2011, when PNAS introduced a COI policy, prohibiting editors from handling submissions by authors with whom they collaborated during the past 24 months. Importantly, no such policy existed prior to that date. The second policy change that we analyze (Case 2) took place in January 2014, when PNAS updated its COI policy by modifying its definition of “recent collaboration” from the past 24 months to the past 48 months. The third and final policy change (Case 3) took place in May 2015, when PLOS introduced a COI policy, prohibiting editors from handling submissions by authors with whom they collaborated during the past 60 months; no such policy existed in PLOS prior to that date. See supplementary materials for more details regarding these policy changes.
We start by visualizing the percentage of papers with COI that were submitted around the month in which the policy change took place (Fig. <ref>a to <ref>c). To estimate the effect of the policy change, we use a regression discontinuity in time (RDiT) design, a method commonly used in the social sciences to study treatment effects in quasi-experiments <cit.>. Based on this, we estimate that in Case 1, after PNAS prohibited editors from handling submissions by any collaborators from the past 24 months, the percentage of papers with such COI decreased from 10.32% to 8.49% (p = 0.029). Moreover, the percentage of papers with such COI continued to decrease by about 0.5% per year during the five-year period that followed the policy change (p < 0.001). Still, it is worth noting that papers with COI persisted despite the explicit policy prohibiting them, as evidenced by the fact that the COI rate did not drop to zero after the policy change. This finding clearly indicates that the policy in question was not fully enforced. In the other two cases, we found no evidence that the policies had any effect on the COI rate; see Table S1 for regression estimates. Together, our findings suggest that COI policies, if not enforced, have limited effect on editors' tendency to handle submissions by their collaborators.
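For transparency, the RDiT specification we have in mind reduces to a simple interrupted-time-series regression: a post-policy indicator plus separate linear trends on each side of the cutoff. The sketch below uses statsmodels with illustrative column names; it is not the exact specification from our supplementary materials.

```python
import pandas as pd
import statsmodels.formula.api as smf

def rdit_jump(monthly: pd.DataFrame):
    """monthly: columns 'coi_rate' (%) and 't' (months relative to the
    policy change; negative before, non-negative after)."""
    monthly = monthly.assign(post=(monthly['t'] >= 0).astype(int))
    fit = smf.ols('coi_rate ~ post + t + post:t',
                  data=monthly).fit(cov_type='HC1')
    # The coefficient on 'post' estimates the discontinuity at the cutoff.
    return fit.params['post'], fit.pvalues['post']
```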
As is the case with any observational study, an RDiT design, such as ours, comes with some intrinsic limitations that should be carefully considered. Firstly, there is often a need to expand the window (i.e., the period before and after the treatment) in order to obtain sufficient statistical power <cit.>. However, by expanding the window, it becomes harder to attribute any observed change to the treatment in question. One way to alleviate this issue is to perform a sensitivity analysis while varying the window size. Accordingly, we adopt alternative specifications where the bandwidth around the cutoff date is varied; this yields similar results (Table S2).
Secondly, the observed change (or lack thereof) in the outcome around the treatment time could be attributed to other events that coincide with the treatment.
To rule out this possibility, we use a negative control group for each of the three policy changes. These groups consist of papers whose author(s) collaborated with the handling editor, but where the collaboration fell outside the range specified by the policy in question. Fig. <ref>d to <ref>f depict the control groups of Cases 1 to 3, respectively. In Case 1, for example, the treatment group (Fig. <ref>a) involves collaborations within the past 24 months, while the control group (Fig. <ref>d) involves those within the past 25 to 48 months. Hence, the treatment and control groups are influenced by the same exogenous factors (if any) apart from the policy change, which applies only to the treatment group. Now, if the observed pattern around the cutoff date in the treatment group is attributed (at least partially) to the policy change, we would expect to see a different pattern in the control group. Indeed, the two groups differ only in Case 1 (see Table S1), confirming that the decrease in the COI rate in Case 1 is indeed due to the policy change, and that the policy had no effect on the COI rate in Cases 2 and 3. Together, these findings suggest that, at least in some cases, the policy has no detectable impact, and even when it does, it is insufficient to fully deter editors from handling the papers of their recent collaborators.
§.§ Editors are faster to accept papers with COI
Having established that papers with COI are common, we now investigate whether they differ from other papers in terms of the time spent between submission and acceptance. Scientists would clearly benefit from getting their manuscript accepted earlier. This is because the vast majority of them are funded for a fixed period of time or are constantly evaluated, implying that an earlier acceptance could allow the research carried out during the funded period to be published in time before their evaluation <cit.>.
Past studies suggest that reviewing a paper takes an average of five hours, but authors could wait for months or even years before hearing back from editors <cit.>. This implies that any observed difference in the time spent under review is not primarily due to differences in the effort required from the reviewers. Rather, other factors play a significant role, such as whether the editors prioritize the paper, whether they reach out to responsive reviewers, and whether they constantly follow up with the reviewers. Additionally, editors have the liberty to reach out to reviewers who are likely to give favorable comments, or even override reviewers' requests to revise the manuscript <cit.>.
Against this background, we compare the papers with and without COI in terms of their acceptance delay, i.e., the number of days spent between submission and acceptance, while accounting for the temporal and cross-sectional variation in acceptance delay. More specifically, for each paper p, published in journal j and year y, we compute the relative acceptance delay (RAD), measured as the difference in acceptance delay between p and the average paper published in j in year y.
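As a concrete illustration, RAD can be computed in a few lines; the dataframe and column names below are hypothetical stand-ins for the study's records.

import pandas as pd

df = pd.read_csv("papers.csv")                  # hypothetical: journal, year, dates
df["delay"] = (pd.to_datetime(df["accepted"])
               - pd.to_datetime(df["submitted"])).dt.days
# RAD: acceptance delay minus the mean delay of the same journal in the same year.
df["rad"] = df["delay"] - df.groupby(["journal", "year"])["delay"].transform("mean")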
As can be seen in Fig. <ref>a, papers with COI have a shorter RAD than those without. In particular, the RAD of papers without COI is normally distributed around 0 (mean: 1.56, standard deviation: 64.74), whereas the RAD of papers with COI has two modes, with one of them roughly around 0 and the other around 43. Such a bimodal distribution suggests that papers with COI consist of two distinct subpopulations, with one being handled at the usual pace (i.e., just like papers without COI), and the other being handled faster.
Finally, we explore whether those who have a stronger relationship with the editor get their submissions accepted faster than those with a weaker relationship. To this end, for each submission, we calculate the percentage of authors who have recently collaborated with the editor. As can be seen in Fig. <ref>b, the greater the percentage, the shorter the RAD. Another way to examine the strength of the relationship between an author and an editor is to consider the team size of their prior collaboration. Intuitively, if an author has collaborated with the editor as part of a smaller team (e.g., involving, say, three members), then the author-editor relationship is likely to be stronger than if they were part of a larger team (e.g., involving, say, 20 members). As can be seen in Fig. <ref>c, the smaller the team size, the shorter the RAD. These findings suggest that authors with a stronger connection to the editor experience shorter delays between the submission and acceptance of their manuscripts.
§ DISCUSSIONS
In this study, we demonstrated that editors often handle papers from their recent collaborators. Those who are male, affiliated with a highly ranked institution, or conducting research in the natural sciences are more likely to engage in such behavior. More surprisingly, such papers persist despite explicit policies prohibiting them. Leveraging three cases of policy change as quasi-experiments, we found that COI policies, when not enforced, have only a limited, if any, effect on regulating editors' behavior. Lastly, we demonstrated that these papers are accepted faster, especially when the authors have a strong relationship with the editor, raising the possibility of favoritism.
Our study comes with limitations, since we infer collaborations using publication records. While co-authored papers necessarily indicate scientific collaboration, not all collaborations result in publications. Therefore, papers with COI might be more prevalent than what we have demonstrated. Additionally, future studies should consider other types of COI that are explicitly mentioned in the publishers' policies. Examples include situations where an author of the manuscript under consideration has previously written a grant proposal with the editor, or where an author has previously been a member of the editor's research lab, either as a PhD student or as a postdoc. Indeed, traces of such relationships may already be encoded in our dataset: if two people apply for a grant together, or work together in the same lab, then they are likely to have co-authored a paper together. That being said, additional datasets of grant proposals and mentor-mentee relationships would provide more insights into this form of COI.
Although the term “conflict of interest” implies that editors who handle submissions by their recent collaborators have a private interest in doing so, some editors may actually solicit high quality papers from their collaborators for the benefit of the journal rather than for their personal interests. Past research has shown evidence that papers with personal or institutional ties to the editors accumulate more citations compared to other papers, suggesting that the editors' network could potentially boost the journal <cit.>. Still, regardless of whether editors use their personal connections to attract submissions, they should avoid handling such submissions to comply with current COI policies.
Our findings raise the question of how best to govern this COI. Some argue that one should not aim to eliminate COIs since they are “an intrinsic component” of existing as a human being <cit.>, and some believe that non-financial COI is “nebulous and unquantifiable” <cit.>. However, just because everyone has competing interests does not mean they should go unchecked. As we have demonstrated, it is feasible to monitor and quantify at least certain types of COI, using existing digital records of publications, collaborations, and other scientific activities. Managing conflicts of interest, such as the one discussed in this paper, would assure the scientific community that all submissions are treated equally.
§ REFERENCES

1. Siler, K., Lee, K. & Bero, L. Measuring the effectiveness of scientific gatekeeping. Proceedings of the National Academy of Sciences 112, 360–365 (2015).
2. Van Noorden, R. Highly cited researcher banned from journal board for citation abuse. Nature 578, 200–202 (2020).
3. Schiermeier, Q. Self-publishing editor set to retire. Nature 456, 432–433 (2008).
4. Liu, F., Holme, P., Chiesa, M., AlShebli, B. & Rahwan, T. Gender inequality and self-publication are common among academic editors. Nature Human Behaviour, 1–12 (2023).
5. Oransky, I. White House official banned from publishing in PNAS following retraction. Retraction Watch (2022). Accessed March 16, 2023.
6. CSE's White Paper on Promoting Integrity in Scientific Journal Publications. Council of Science Editors (2012). http://www.councilscienceeditors.org/wp-content/uploads/entire_whitepaper.pdf. Accessed August 4, 2022 (WayBack Machine).
7. Editorial board participation. COPE. https://publicationethics.org/resources/guidelines/editorial-board-participation. Accessed May 8, 2023.
8. Disclosure of Financial and Non-Financial Relationships and Activities, and Conflicts of Interest. ICMJE. https://www.icmje.org/recommendations/browse/roles-and-responsibilities/author-responsibilities–conflicts-of-interest.html. Accessed April 10, 2023.
9. Conflict of interest in peer-reviewed medical journals. World Association of Medical Editors (2009). https://www.wame.org/conflict-of-interest-in-peer-reviewed-medical-journals. Accessed March 16, 2023.
10. Haivas, I., Schroter, S., Waechter, F. & Smith, R. Editors' declaration of their own conflicts of interest. CMAJ 171, 475–476 (2004).
11. Bosch, X., Pericas, J. M., Hernandez, C. & Doti, P. Financial, nonfinancial and editors' conflicts of interest in high-impact biomedical journals. European Journal of Clinical Investigation 43, 660–667 (2013).
12. Faggion Jr., C. M. Watching the watchers: A report on the disclosure of potential conflicts of interest by editors and editorial board members of dental journals. European Journal of Oral Sciences 129, e12823 (2021).
13. Smith, E., Potvin, M.-J. & Williams-Jones, B. Accessibility and transparency of editor conflicts of interest policy instruments in medical journals. Journal of Medical Ethics 38, 679–684 (2012).
14. PLOS Medicine Editors. Making sense of non-financial competing interests. PLOS Medicine 5, e199 (2008).
15. Editorial and journal policies. PNAS Author Center. https://www.pnas.org/author-center/editorial-and-journal-policies. Accessed April 26, 2023.
16. Policies and publication ethics. Frontiers. https://www.frontiersin.org/guidelines/policies-and-publication-ethics. Accessed March 15, 2023.
17. Research and publication ethics. MDPI. https://www.mdpi.com/ethics. Accessed April 25, 2023.
18. Competing interests. PLOS One. https://journals.plos.org/plosone/s/competing-interests. Accessed April 12, 2023.
19. Publication ethics. Hindawi. https://www.hindawi.com/publish-research/authors/publication-ethics/. Accessed April 12, 2023.
20. IEEE policies. The Institute of Electrical and Electronics Engineers, Inc., New York, NY (2023). https://www.ieee.org/content/dam/ieee-org/ieee/web/org/about/corporate/ieee-policies.pdf. Accessed April 28, 2023.
21. IEEE publication services and products board operations manual 2023. IEEE Publications, Piscataway, NJ (2023). https://pspb.ieee.org/images/files/PSPB/opsmanual.pdf. Accessed April 28, 2023.
22. AlShebli, B. K., Rahwan, T. & Woon, W. L. The preeminence of ethnic diversity in scientific collaboration. Nature Communications 9, 5163 (2018).
23. Huang, J., Gates, A. J., Sinatra, R. & Barabási, A.-L. Historical comparison of gender inequality in scientific careers across countries and disciplines. Proceedings of the National Academy of Sciences 117, 4609–4616 (2020).
24. Imbens, G. W. & Lemieux, T. Regression discontinuity designs: A guide to practice. Journal of Econometrics 142, 615–635 (2008).
25. Anderson, M. L. Subways, strikes, and slowdowns: The impacts of public transit on traffic congestion. American Economic Review 104, 2763–2796 (2014).
26. Reny, T. T. & Newman, B. J. The opinion-mobilizing effect of social protest against police violence: Evidence from the 2020 George Floyd protests. American Political Science Review 115, 1499–1507 (2021).
27. Hausman, C. & Rapson, D. S. Regression discontinuity in time: Considerations for empirical applications. Annual Review of Resource Economics 10, 533–552 (2018).
28. Bilalli, B., Munir, R. F. & Abelló, A. A framework for assessing the peer review duration of journals: Case study in computer science. Scientometrics 126, 545–563 (2021).
29. Ware, M. & Mabe, M. The STM report: An overview of scientific and scholarly journal publishing (2015).
30. Gans, J. S. & Shepherd, G. B. How are the mighty fallen: Rejected classic articles by leading economists. Journal of Economic Perspectives 8, 165–179 (1994).
31. Brogaard, J., Engelberg, J. & Parsons, C. A. Networks and productivity: Causal evidence from editor rotations. Journal of Financial Economics 111, 251–270 (2014).
32. Medoff, M. H. Editorial favoritism in economics? Southern Economic Journal 70, 425–434 (2003).
33. PNAS Conflict of Interest Policy. http://www.pnas.org/misc/coi.shtml. Accessed May 11, 2008 (WayBack Machine).
34. Information for Authors. PNAS. http://www.pnas.org/site/misc/iforc.shtml. Accessed May 11, 2009 (WayBack Machine).
35. Information for Authors. PNAS. http://www.pnas.org/site/misc/iforc.shtml. Accessed September 18, 2011 (WayBack Machine).
36. Editorial Policies. PNAS. http://www.pnas.org/site/authors/journal.xhtml. Accessed April 19, 2014 (WayBack Machine).
37. Editorial and Journal Policies. PNAS Author Center. https://www.pnas.org/authors/editorial-and-journal-policies. Accessed July 22, 2020 (WayBack Machine).
38. Competing Interests Policy. PLOS Biology. http://www.plosbiology.org/static/competing.action. Accessed May 19, 2009 (WayBack Machine).
39. PLOS Policy on Declaration and Evaluation of Competing Interests. PLOS Biology. http://www.plosbiology.org:80/static/competing.action. Accessed September 20, 2009 (WayBack Machine).
40. Competing Interests. PLOS Biology. https://journals.plos.org/plosbiology/s/competing-interests. Accessed May 11, 2015 (WayBack Machine).
§ ACKNOWLEDGMENTS
F.L. is supported by the New York University Abu Dhabi Global PhD Student Fellowship. The support and resources from the High Performance Computing Center at New York University Abu Dhabi are gratefully acknowledged.
§ DATA AND CODE AVAILABILITY
The anonymized data and all custom code used to analyze those data will be shared upon publication.
Figure: Papers with conflict of interest are accepted faster.
a, Distributions of relative acceptance delay (RAD) of papers with and without COI. These distributions are summarized as boxplots, where boxes extend from the lower to upper quartile values, and whiskers extend until the 5th and the 95th percentiles. The lines represent the median.
b, Correlation between RAD and the percentage of authors that have recently collaborated with the editor.
c, Correlation between RAD and the minimum author count on any papers co-written by the editor and any authors of the focal paper in the past 48 months.
In (b) and (c), lines represent the mean RAD and the shaded regions represent 95% CI; the Pearson correlation coefficients (r) and the associated p-values are calculated using the original data (not binned).
|
http://arxiv.org/abs/2307.03093v2
|
20230706160847
|
Beyond Intuition, a Framework for Applying GPs to Real-World Data
|
[
"Kenza Tazi",
"Jihao Andreas Lin",
"Ross Viljoen",
"Alex Gardner",
"ST John",
"Hong Ge",
"Richard E. Turner"
] |
cs.LG
|
[
"cs.LG",
"stat.ML"
] |
Beyond Intuition, a Framework for Applying GPs to Real-World Data

Kenza Tazi (1,4), Jihao Andreas Lin (1,5), Ross Viljoen (1), Alex Gardner (3), ST John (2), Hong Ge (1), Richard E. Turner (1)

(1) Department of Engineering, University of Cambridge, Cambridge, UK
(2) Department of Computer Science, Aalto University, Espoo, Finland
(3) Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, USA
(4) British Antarctic Survey, Cambridge, UK
(5) Max Planck Institute for Intelligent Systems, Tübingen, Germany

Correspondence: Kenza Tazi <kt484@cam.ac.uk>

Keywords: Gaussian Processes, GP, kernel design, real-world data, scalability, glacier elevation change, machine learning, Bayesian inference
Gaussian Processes (GPs) offer an attractive method for regression over small, structured and correlated datasets. However, their deployment is hindered by computational costs and limited guidelines on how to apply GPs beyond simple low-dimensional datasets. We propose a framework to identify the suitability of GPs to a given problem and how to set up a robust and well-specified GP model. The guidelines formalise the decisions of experienced GP practitioners, with an emphasis on kernel design and options for computational scalability. The framework is then applied to a case study of glacier elevation change yielding more accurate results at test time.
§ INTRODUCTION
A Gaussian Process (GP) is a probabilistic machine learning model usually used for supervised regression or classification tasks. GPs offer many advantages to modellers. First, they are suitable for small, correlated, and non-gridded datasets with missing values. Second, GPs provide a way to encode domain knowledge through prior specification and kernel design. Third and most importantly, they give principled uncertainty quantification which is critical to decision making. GPs are also more interpretable than deep learning methods. They typically have a small number of parameters that directly capture information about the data, such as the lengthscales of variation or timescales of periodicity, unlike the weights of a neural network which have no direct physical correspondence.
However, the application of GPs remains far from straightforward. In its simplest form, an exact GP is computationally expensive for large datasets, scaling cubically with the number of datapoints. Furthermore, there are comparatively few examples of GP applications suitable for non-experts. Of these examples, most are GP applications to toy datasets rather than complex real-world problems. In reality, many decisions relating to model design and implementation are developed through years of practice, reading academic literature, re-implementing experiments in code repositories, and detailed knowledge of software packages supported by small communities.
This paper aims to formalise the intuition of GP practitioners. We present guidelines to address two questions: (i) Is GP regression both computationally and methodologically applicable to a given problem? (ii) If so, how should the GP model be designed? This review is not an exhaustive list of possible GP applications but offers a starting point for using ‘application grade’ methods straightforwardly with current open-source software. For this reason, many recommendations follow a simple but principled approach to dealing with the challenges of applying GPs to real-world data. This framework can also be used as a guide when applying a GP as a baseline to more complex models, where simply choosing a squared exponential kernel does not showcase the performance achievable with a GP. We first define exact GP regression (<ref>) and relate this work to previous research (<ref>). We then present the framework (<ref>) and apply it to a case study of glacier height change using highly-structured satellite data (<ref>).
§ GAUSSIAN PROCESSES
§.§ Definition
Consider the set of observations comprising input-output pairs {x_i, y_i} with i={1,..., N}, x_i ∈ℝ^D and y_i ∈ℝ. These observations are generated by a function f, describing the relationship between inputs and outputs, and modulated by a noise term that accounts for the uncertainty in the observed data:
y_i = f(x_i) + ϵ_i ,
where ϵ_i is assumed to be distributed normally as 𝒩(0; σ_n^2) with standard deviation σ_n. The latent function f can be modelled with a Gaussian Process (GP) <cit.>. A GP is a stochastic process where any finite collection of its random variables is distributed according to a multivariate normal distribution. Generalising to infinity, a GP can be viewed as a distribution over functions. A GP is defined by a mean function μ(x) and covariance or kernel function k(x, x^'):
f(·) ∼𝒢𝒫( μ(x; θ_μ), k(x, x^'; θ_k)) ,
where both mean and kernel function typically depend on hyperparameters θ_μ and θ_k.
A standard method for learning of the kernel hyperparameters is to maximise the marginal likelihood, the probability density of the observations given the hyperparameters. The marginal likelihood is computed by integrating over the values of f. Collecting inputs and outputs into X = (x_i)_i=1^N and Y = (y_i)_i=1^N, the logarithm of the marginal likelihood is given by
log p(Y|X, θ) = -1/2 (Y - μ)^⊤ (K+σ_n^2 I)^-1 (Y-μ) - 1/2log |K + σ_n^2 I| - N/2log (2 π) ,
where the mean vector μ collects (μ(x_i))_i=1^N and the kernel matrix K is constructed from k(x_i, x_j) evaluated on all pairs i,j = 1,…,N.
The hyperparameters of mean and kernel function and of the likelihood (σ_n^2) are collected in the hyperparameter vector θ.
Maximising <ref> w.r.t. θ then gives the maximum likelihood estimate of the hyperparameter values.
Assuming a Gaussian likelihood for ϵ (see <ref>), the posterior predictive distribution is tractable and can be used to calculate predictions for a new output f_*, given a new input x_*, as
p(f_* |Y,X,x_*) = 𝒩(f_* |μ_*(x_*), σ_*^2 (x_* )) .
Predictions are computed using the predictive mean μ_*, while the uncertainty associated with these predictions is quantified through the predictive variance σ_*^2:
μ_* (x_*)
= k_*n^⊤ (K+ σ_n^2 I)^-1 (Y - μ) + μ(x_*) ,
σ_*^2 (x_*)
= k_** - k_*n^⊤ (K + σ_n^2 I)^-1k_n* ,
where k_*n = [k(x_*, x_1), …, k(x_*, x_n)]^⊤
and k_** = k(x_*,x_*).
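The predictive equations above translate almost line by line into code. The sketch below implements them via a Cholesky factorisation, the numerically stable route most libraries take internally; the zero mean function and squared exponential kernel are illustrative choices, not a prescription.

import numpy as np

def se_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared exponential kernel between all pairs of rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X, y, Xs, noise=0.1):
    # Exact GP posterior mean and variance, assuming a zero mean function.
    K = se_kernel(X, X) + noise**2 * np.eye(len(X))
    L = np.linalg.cholesky(K)                         # K + sigma_n^2 I = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = se_kernel(X, Xs)
    mean = Ks.T @ alpha                               # predictive mean
    v = np.linalg.solve(L, Ks)
    var = se_kernel(Xs, Xs).diagonal() - (v**2).sum(axis=0)   # predictive variance
    return mean, var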
§.§ Strengths
GPs have a number of advantages that can make them a judicious choice over other supervised machine learning algorithms. A GP provides:
* Well-calibrated uncertainty estimates. Assuming the specified model is appropriate, a GP ‘knows when it does not know’, increasing the uncertainty away from the training distribution. This is useful for determining the likelihood of extreme events and improving decision making.
* Machine learning for correlated datasets. In many cases such as geospatial problems, neighbouring observations are usually not independent and identically distributed (i.i.d.) but closely correlated to one another <cit.>.
* More interpretable machine learning. Although not as interpretable as simpler methods such as conventional linear regression or random forests, GP covariance functions specify high-level properties of the generated functions which can be conveyed in natural language <cit.>. Compared to deep learning methods, with thousands to trillions of learnable parameters which cannot obviously be linked with given physical features of the model predictions, GPs can be a more trustworthy alternative to the model end users.
* Data-efficient machine learning. A GP is a non-parametric Bayesian method which provides model expressivity while avoiding overfitting. GPs adapt to different datasets and handle data efficiently without requiring a predefined model structure. Furthermore, once a GP model is trained, new data points can be incorporated efficiently without retraining <cit.>.
* Machine learning systems. GP regression is unlikely to fail and can be used reliably as a subpart of a bigger machine learning system, for example in probabilistic numerics <cit.>, reinforcement learning <cit.> and automated statisticians (see <ref>).
§.§ Limitations
It is also important to acknowledge the limitations of GPs. They struggle with:
* Large numbers of datapoints. Training on datasets with N ≳ 10^4 becomes prohibitive <cit.>. The computational complexity of covariance matrix inversion in <ref> and <ref> scales as 𝒪(N^3). The memory for storing the matrices scales as 𝒪(N^2).
* High-dimensional input spaces. Training on datasets with D ≳ 100 becomes difficult due to the need to compute pair-wise elements of the covariance function, which scales as 𝒪(DN^2); e.g., GPs are not best-suited to images <cit.>.
* Complex covariance functions. In situations which require covariance functions with many parameters that must be learned from the data, the covariance function will be hard to design and the model may overfit <cit.>.
* Non-Gaussian distributions. While Gaussian prior and likelihood function assumptions are quite common in classical scientific computing techniques, modern models, such as deep generative models, are increasingly moving towards modelling the target prior/posterior distributions using more flexible distributions parameterised by deep neural networks, like normalising flows <cit.>. These models avoid explicit handcrafted assumptions about the data or model and enable less biased Bayesian inference.
* Misspecified models. Here, misspecification refers to our belief, or lack thereof, in the ability of the proposed model to accurately represent the underlying patterns in the data. A misspecified model will struggle to generate accurate results; for example, an inappropriate kernel function could be chosen for the covariance matrix. Poorly specified models will not only produce an inaccurate posterior mean but also inaccurate confidence intervals and inappropriate samples <cit.>.
§ RELATED WORK
Research focused on overcoming the limitations of GPs is vast, from improving scalability <cit.>, to overcoming the model selection problem <cit.>. However, these methods are usually not beginner-friendly and in most cases applied to clean and well-studied benchmark datasets.
Previous work on the democratisation of GPs is centred around creating an Automatic Statistician <cit.>, which takes in data and outputs results and a model fit in natural language. This framework builds on Automated Bayesian Covariance Discovery <cit.>, which uses the Bayesian Information Criterion to brute-force the design of sensible kernel functions. However, this endeavour sets aside one of the main advantages of GPs: incorporating prior knowledge. It also does not educate modellers about how to use GPs and therefore properly interpret their results.
Instead, our work follows a similar structure to other data science and machine learning guidelines with supporting code and examples to empower the deployment of GPs in the real world <cit.>.
§ FRAMEWORK
The following section gives an overview of the framework, illustrated in Figure <ref>, with the complete framework detailed in Appendix <ref>.
As with any data science problem, the first course of action is to define the task at hand (Step 1). What kind of predictions would we like to make? Will the model interpolate or extrapolate the training data? What should the model output be? Are uncertainties needed? Is the output conditional on other variables? An initial exploration of the data can then be performed (Step 2). The structure, size and dimensions of the dataset should be identified. These first two steps should help the modeller identify the suitability of applying GPs to the task at hand and the best way to set up the model evaluation.
Step 3 formalises the domain and prior knowledge of the modeller in a systematic way. This can help identify early on inputs with strong predictive power, kernel structure such as periodicity, and initialisations or priors for hyperparameters.
In Step 4, from the information collected thus far, the training, validation and test sets can be selected as independently as possible with respect to the problem being addressed. More in-depth analyses can now be performed on the data.
First, if the dataset is large, we can identify scaling structures in the data in Step 5. We suggest three straightforward scaling cases that have robust approximation schemes and existing open-source code: over-sampled functions, timeseries data, and Kronecker covariance structure. If no scaling structure is obvious, dividing the data into manageable chunks using an unsupervised clustering method is a baseline alternative.
Second, data transformations can be ascertained and applied (Step 6). To improve inference, input features should be z-scored. Exact GPs also expect the posterior distribution to be Gaussian. Applying a transformation to the target data, i.e. the marginal likelihood, can help achieve this. Transforms can also help avoid unphysical predictions such as negative values and highlight the areas of the distribution we would like to predict more accurately.
Third, we can analyse the properties of the data that will inform kernel design (Step 7). The smoothness, covariance lengthscales, periodicity, outliers and tails, asymmetry, and stationarity should be assessed for the target variable. Once these features have been identified, the kernel can be built through composition, i.e. adding and multiplying standard base kernels. The most important dimensions and the simplest kernel structures should be tried first. Information from previous steps should also be used to apply constraints and priors, which in many cases are the determining factor in the convergence of the model to the desired results.
To validate the kernel design (Step 8), the kernel should also be assembled iteratively, checking the performance of the model with each new dimension or kernel parameter. For each iteration, the physical consistency of the samples, the structure of residuals, posterior predictive likelihood scores and the Root Mean Square Error (RMSE) should be checked on both the training and validation sets. The modeller should also make use of simple non-GP baselines. The iteration process should continue until the scores start to stagnate or signs of overfitting are observed. If the results of these tests are unsatisfactory, the modeller should return to Step 6 and try a new transformation, design or set of constraints.
Once the kernel design is determined to be appropriate, the model can now be scaled using the special structure found in Step 5 or by combining independent GP models using a Bayesian Committee Machine <cit.> (Step 9). Finally, the metrics used for validation can be reapplied to determine performance on the test set (Step 10).
§ CASE STUDY
The framework is now applied to a case study of glacier elevation change over Greenland, using data derived from the ICESat and ICESat-2 satellites. The implementation details are provided in Appendix <ref>.
In this regression problem, we would like to estimate glacier elevation change at unsampled locations (Step 1). Ocean distance, topographic elevation, slope, aspect, surface glacier velocity, and spatial coordinates x-y are used as inputs, D=7 (Step 2). The dataset is also large with N=5×10^5. We compare the framework with a pre-existing application of GPs to this problem <cit.>.
In the original setup, the study area is split into a 30 by 30 grid. The x-y coordinates, elevation and the logarithm of the velocity are used as predictors, with kernel lengthscales initialised at 30, 200, and 0.3 year, respectively. A zero mean function and squared exponential (SE) kernel with Automatic Relevance Determination (ARD) are used, except along the x-y coordinates, which are constrained to have the same lengthscale and variance. Observational noise is set at 0.1 year.
The main application of these results is sea level change prediction; therefore, the extreme values are the most important to predict accurately. The data is collected along satellite `tracks', where neighbouring points will be highly correlated. Furthermore, observations are more dense at higher latitudes (Step 3). We therefore randomly allocate 70% of tracks to the training set, 10% to the validation set, and 20% to the test set (Step 4). The data exhibits oversampling, which makes this case amenable to sparse variational GPs (Step 5). The distributions of the variables are then examined. The glacier velocity, slope, aspect and elevation change exhibit a high degree of skewness and are transformed to more Gaussian distributions where helpful. The predictive power of the transformations is assessed using a k-nearest neighbour (k-NN) baseline. All the inputs are z-scored (Step 6).
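One way to implement such a track-wise split is grouped splitting, which keeps all points from a given satellite track in the same partition. A sketch with scikit-learn, where the dataframe and its track_id column are hypothetical:

from sklearn.model_selection import GroupShuffleSplit

# 70% of tracks to training, then split the remainder 1:2 into validation and test.
gss = GroupShuffleSplit(n_splits=1, train_size=0.7, random_state=0)
train_idx, rest_idx = next(gss.split(df, groups=df["track_id"]))
rest = df.iloc[rest_idx]
gss2 = GroupShuffleSplit(n_splits=1, train_size=1/3, random_state=0)
val_idx, test_idx = next(gss2.split(rest, groups=rest["track_id"]))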
The properties of the transformed data are then further investigated for the kernel design. Glacier elevation change is minimal at the centre of Greenland but the rate changes rapidly towards the coastline. Elevation and ocean distance show the strongest linear correlation with elevation change. Lengthscales are visually identified for each dimension. From this analysis, a kernel with additive structure that increases function variance near the coastline would work well (Step 7). We then set up a simple kernel, starting with a squared exponential kernel for x and y, and moving towards the final design with each new kernel iteration, using more variables and more complex kernel structures. As the dataset is large, we also try different chunking schemes as a function of the dataset's properties. In this case, k-means clustering results in a similar performance to arbitrarily gridding the data. The samples, residuals and metrics (shown in Table <ref>) are checked for the training and validation sets (Step 8). The final kernel is chosen to be:
k = k_Mat32(lat, lon, elev) + k_Mat32(ocean dist)
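In code, this additive structure can be expressed with active dimensions. The sketch below uses GPflow-style syntax and assumes the input columns are ordered as (lat, lon, elev, ocean dist); it is a hedged illustration, not the authors' exact implementation.

import gpflow

# Matern-3/2 over (lat, lon, elev) plus Matern-3/2 over ocean distance.
kernel = (gpflow.kernels.Matern32(active_dims=[0, 1, 2])
          + gpflow.kernels.Matern32(active_dims=[3]))
model = gpflow.models.GPR((X_train, Y_train), kernel=kernel)
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)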
We also apply a sparse variational GP scheme using this kernel (Step 9). Finally, the results are reported on the test set (Step 10).
Table <ref> compares the performance of the original implementation, the framework model, the sparse GP and two baselines: a k-NN and a GP with an SE kernel with ARD (SE-ARD). The baselines use untransformed x and y inputs only. While the k-NN model significantly outperforms the other models with respect to R^2, it struggles to predict the extreme negative values. The framework model and original implementation have similar RMSE scores. However, the framework model captures the variation of the data more accurately, with a better R^2 and better constrained uncertainty estimates (lower MLL). The sparse GP captures the general distribution of the data with a higher R^2 but misses the extreme values we are looking to predict accurately. Figure <ref> visually compares the model outputs for a subsection of the data.
§ CONCLUSION
In this paper, we presented a framework to apply GP regression robustly to a wide variety of datasets while making the most of GP's strengths and working around its limitations. This formalisation of the modeller's decision process leads to improvements when applied to the case study and highlights the importance of model design when using GPs as baselines. Further work will include extending the workflow for multi-output regression and classification problems, a more in-depth analysis of GP suitability, `research grade' models and approximations, and available open-source software packages.
§ SOFTWARE AND DATA
The code, data and interactive notebooks are available at: <https://github.com/kenzaxtazi/icml23-gpframe.git>.
§ ACKNOWLEDGEMENTS
This work was supported by the UK Engineering and Physical Sciences Research Council [grant number: 2270379] and the University of Cambridge Harding Distinguished Postgraduate Scholars Programme. The authors thank Carl Rasmussen, Will Tebbutt and Damon Wischik for their suggestions and insightful conversations. We also thank all the participants of Cambridge Stochastic Processes Workshops, including Omer Nivron who led the first event. We are grateful to Isaac Reid for proofreading.
icml2023
§ DETAILED FRAMEWORK
§.§ Step 1: Problem definition
As in any data science problem, the first step is to define the task at hand. What kind of predictions would we like to make? Will the model interpolate or extrapolate the training data? What should the model output be (e.g. a report, one model, an ensemble of models, a forecast, a de-noised timeseries)? Is the output conditional on other variables? For example, the time for a PhD student to travel to the Engineering Department will depend on their start location, their mode of transport, the time of day and the time until their thesis submission. In particular, it is important to distinguish between different types of regression. Typically, problems are defined as in Section <ref> but they could also be posed as an auto-regressive task:
y_t = f(y_t-1) + ϵ_t ,
where the value of y at step t-1 is used to predict the observation of y at the next step t. This setup is common for emulation problems, where the modeller is interested in replicating the behaviour of a system that is expensive or difficult to study in real life. The type of task the modeller solves will determine how they design the model(s). For example, forecasting a single time-series into the future would mean concentrating on incorporating the long-term dependencies present in the data. The task also shapes the metrics that should be used at validation and test time: do we care about modelling uncertainties, the mean or the extremes?
§.§ Step 2: Initial data exploration
The next step is to perform an initial exploration of the data in order to understand whether it is suitable for GP regression. One should consider:
* Number of data points N. For N > 10^4-10^5, exact GP computation becomes prohibitively expensive <cit.>. N < 100 may be too small, especially in the case of a complex kernel with many hyperparameters, which can lead to overfitting. Smaller N can be mitigated using Markov Chain Monte Carlo (MCMC) estimation <cit.>.
* Number of input dimensions D. GPs are not immune to the curse of dimensionality. For D > 10, it is hard for the modeller to form a clear image of the problem. It can therefore also be difficult to design an appropriate kernel. The number of parameters used to define the kernel will also increase, meaning it will be easier to overfit. Furthermore, if D is very large (D > 100), constructing the covariance matrix, which scales as 𝒪(N^2D), can become a computational bottleneck.
* Output requirements. Are probability distributions needed for this task? If the modeller simply requires the mean output an alternative method may be more appropriate.
Further considerations:
* We propose a range of values for the upper limit of N. This is because the limit will depend on how the model is used. Higher N can be used for one-off modelling rather than learning hyperparameters through repeated likelihood evaluation.
* Above these limits, it is worth considering the use of a GP as a wrapper for a deep learning model <cit.>. In this case, a deep learning model can be trained on a large subset of the data. The inputs to the GP can be the output from the deep learning model or their residuals. The GP could also be used to refine the predictions for specific locations in the input feature space using held-out data. This will then yield uncertainties which can be used for uncertainty quantification, active learning, etc..
(Footnote: If the models are not trained jointly, the procedure will result in overfitting and the residuals will go to zero, with the GP collapsing to zero uncertainty. If the GP is fit on the training data (and not just the held-out data), using a neural network will be more likely to result in overfitting, since the model might fit the data perfectly even before applying the GP.)
* It may be acceptable to work in the top end of the dimension range if only a few dimensions are doing most of the predictive work. Furthermore, it is also possible to select or generate a set of lower dimensional features to feed into the model using decision trees such as Random Forests or dimensionality reduction methods such as Principal Component Analysis <cit.>.
* For large datasets (see Step 5), two approaches are possible. In the first case, the data can be divided into chunks; independent GPs or `GP experts' are then applied to each part and predictions are made using a (robust) Bayesian Committee Machine <cit.> (a minimal sketch of the combination step follows this list). In the second case, and conditional on specific structure, scaling GP methods can be applied.
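As promised above, a minimal sketch of that combination step, using the standard BCM rule (precision-weighted expert means, corrected by the prior precision); the robust variant only reweights the experts. The arrays are hypothetical per-expert predictive moments at shared test points.

import numpy as np

def bcm_combine(means, variances, prior_var):
    # means, variances: arrays of shape (num_experts, num_test_points).
    M = means.shape[0]
    precision = (1.0 / variances).sum(axis=0) - (M - 1) / prior_var
    var = 1.0 / precision
    mean = var * (means / variances).sum(axis=0)
    return mean, var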
§.§ Step 3: Domain expertise
In this step, the modeller maps out the information that is known about the dataset, prior to a more in-depth analysis. Writing out this information in detail will be key in designing an appropriate kernel with priors and constraints in Step 7. For example, a positive constraint is necessary when modelling rainfall.
§.§ Step 4: Training, validation and test set definition
A held out dataset allows the modeller to check whether they are under- or overfitting, and gives some indication of the performance they would get on `real' unseen data. The separation of the data should reflect the goals and the performance they want to measure in Step 1, and the information from Step 3. Ideally, the dataset should be separated into three groups: a training set, a validation set, and a test set which will not be iterated over. In many cases, the data used in the real world are not i.i.d.: they are dependent and correlated, and come from heterogeneous sources and sampling regimes. If the effective number of data points (the number of independent samples that would be needed to produce the same information content as the given sample) is fairly small, then a cross-validation scheme is strongly recommended to assess the stability and predictability of the model <cit.>.
§.§ Step 5: Scaling structures
When working with a large dataset, the modeller should also analyse the training data to find structure that may lead to the application of scaling methods. Three cases and how to identify them are discussed below.
* Case 1: Oversampled functions. An oversampled function is sampled more frequently than is required to capture its underlying structure and variation. In the case of a periodic function, this would be more than the Nyquist frequency. In this situation, the modeller can use sparse GP regression with variational inference of inducing points <cit.> (see the sketch after this list). Many GP libraries, such as <cit.> or <cit.>, offer built-in functions to perform this approximation.
* Case 2: Timeseries data. For timeseries data, the GP can be mapped to a Stochastic Differential Equation (SDE) <cit.>. This approximation has a linear cost in the number of time points and works well in many situations. However, the mapping between the covariance matrix and the SDE can be expensive, sometimes more expensive than solving the SDE itself. A Julia package provides a framework to apply this method.
* Case 3: Kronecker product structure. For datasets whose kernel factorises across input dimensions, such as gridded data:
Cov((x_1, x_2), (x_1^', x_2^')) = Cov(x_1, x_1^') ⊗ Cov(x_2, x_2^'),
the Kronecker identity can be used to invert the matrices in a piecewise fashion <cit.>. This trick is used in Structured Kernel Interpolation (SKI) <cit.>. Both SKI and SKIP are implementable in existing open-source libraries.
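For Case 1, the inducing-point route might look as follows in GPflow 2.x; the kernel choice, number of inducing points and training loop are placeholders rather than recommendations.

import numpy as np
import tensorflow as tf
import gpflow

Z = X[np.random.choice(len(X), 500, replace=False)]   # initial inducing locations
model = gpflow.models.SVGP(gpflow.kernels.Matern32(),
                           gpflow.likelihoods.Gaussian(),
                           inducing_variable=Z, num_data=len(X))
loss = model.training_loss_closure((X, Y))            # minibatching is also possible
opt = tf.optimizers.Adam(0.01)
for _ in range(1000):
    opt.minimize(loss, model.trainable_variables)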
If no structure is apparent, a baseline that is hard to beat is to divide the data into smaller datasets, as previously mentioned. This can be done naively by separating data chronologically or into tiles. However, it can be useful to make use of clustering algorithms, such as k-means or k-nearest neighbours, to group features that exhibit similar properties.
§.§ Step 6: Data transformations
To improve inference, input features should generally be z-scored, i.e., subtracting the mean and dividing by the standard deviation. This means that most values should lie between -1 and 1. Exact GPs also expect the posterior distribution to be Gaussian. Applying a transformation to the target data, i.e. the marginal likelihood, can help achieve this. Transforms can also help avoid unphysical predictions such as negative values and highlight the areas of the distribution we would like to predict more accurately. Some common transformations are logarithm and power functions such as the Box-Cox transformation <cit.>. These transformations constrain values to be positive but also reduce the weighting of the extreme values. Transformations also have a significant effect on the confidence interval of the model. The modeller should check the training data residuals and samples, iterating over the chosen transform if necessary. It can be helpful to check if the transformation improves baselines such as linear regression or k-nearest neighbour models.
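As a sketch of both steps, assuming strictly positive (or shifted) targets for Box-Cox:

import numpy as np
from scipy.stats import boxcox
from scipy.special import inv_boxcox

X_z = (X - X.mean(axis=0)) / X.std(axis=0)      # z-score each input feature

shift = y.min() - 1e-6                          # Box-Cox requires positive data
y_bc, lam = boxcox(y - shift)                   # fit the GP on y_bc ...
y_bc_pred = y_bc                                # placeholder for GP predictions
y_pred = inv_boxcox(y_bc_pred, lam) + shift     # ... and map predictions back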
§.§ Step 7: Kernel design
In this step, the modeller explores the dataset in order to design the kernel function. To compose a kernel, the following characteristics of the target variable should be considered: smoothness, lengthscales, periodicity, outliers and tails, asymmetry, and stationarity. Tools such as first and second order statistics, scatter plots (with low dimensional projections), covariance and correlation matrices, autocorrelation plots, power density spectrum, and k-Nearest Neighbour analysis can be used to find these properties. The modeller should exercise their common sense when applying these tests, keeping in mind the task they are trying to solve, as defined in Step 1. They should also view this exercise as hypothesis testing and ascertain if the characteristics of the dataset match the domain knowledge, outlined in Step 3, and if not, why.
This step is also important for inference. The covariance matrix is inverted analytically and, in most cases, Cholesky decomposition is used. However, this method requires the covariance matrix to be Hermitian positive definite. This means the decomposition will fail if one of the input dimensions is linearly related to another or to a combination of other inputs. In practice, many GP models are set to have a zero mean function μ(x). However, the stability of inference can also be improved by modelling the most obvious features of the dataset through μ(x) rather than k(x, x^').
Now that we have a clear understanding of the data, we can design the kernel function through `kernel composition'. The kernel can be built by combining standard operators and base kernel functions <cit.>. Adding is equivalent to applying the logical operator ‘OR’, i.e. changes in the amplitude can be explained by either term in the sum. For example, the resulting kernel will have a high value if either of the two base kernels has a high value. Multiplying is similar to an ‘AND’ operation, i.e. changes in the amplitude are explained by both terms in the multiplication. For example, the kernel will have a high value if both base kernels have a high value. It is worth noting that multiplying kernels can result in a lower-dimensional representation of the data, simplify eigenvalue decomposition, result in a sparse matrix, and allow for pre-computation, thus making this procedure a computationally efficient way to combine kernels. The design of the kernel will first and foremost affect the smoothness of the samples. The modeller can enforce this by choosing the smoothness of the mean and covariance functions. This is particularly useful for modelling data with underlying structures, such as financial time series data or image data.
If the model does not initially converge close to the lengthscale values determined previously, they can be initialised manually. If this still does not change where the model converges, strong guidance can be applied through constraints and priors. In real-world cases, including such specifications is not uncommon to achieve the desired results. These include:
* Boundary constraints. The modeller can enforce boundary constraints on the model parameters, such as bounds on the mean function parameters or on the variance and lengthscale of the kernel function. This can be useful when modelling physical processes that have known limits or constraints. In practice, using constraints may be difficult: if they do not match the observations, the model can break down. It is usually better to have small and positive priors (see next point).
* Bayesian priors. The modeller can incorporate prior knowledge about the parameters of the model using prior distributions. For example, a Gaussian prior can be placed on the parameters of the covariance function to encourage the model to converge to a particular solution.
* Constraints on the covariance function. The modeller can enforce constraints on the covariance function, such as stationarity or monotonicity. These constraints can help to ensure that the model has physically meaningful properties.
Note that kernels with many hyperparameters will be more likely to overfit the data. Furthermore, recent literature suggests that even when a large number of terms are used, only a few parameters drive the outputs of the model after inference <cit.>. Regularisation techniques can be applied to the kernel function to limit overfitting <cit.> but are usually quite involved to put in place.
§.§ Step 8: Model iteration
The modeller should start by setting up some simple non-GP baselines such as k-NN or linear regression. They should then build the kernel iteratively, trying the most important dimensions first, and check the physical consistency of the samples, the structure of the residuals, and posterior predictive likelihood scores such as the marginal log likelihood, the mean log loss and the Bayesian Information Criterion. The bias, Root Mean Square Error (RMSE) and Mean Absolute Error should also be evaluated. The checks should be repeated for each iteration on both the training and validation sets. The modeller should keep iterating until the scores start to stagnate or signs of overfitting are observed. If they are using the GP for interpolation, this can simply be done by comparing metrics such as RMSE and log-likelihood for the training and validation sets. However, if extrapolation is the goal, it is normal for the model to perform worse away from the training distribution. In this case, it is useful to look at the GP samples. Are they sensible for both the training and validation sets?
GP samples can also be used as `synthetic data' to check the validity of the model and training procedure. The synthetic data is generated from the model samples: fake data similar to the real data, but for which the ground truth parameter values are known. The samples can be fit using another GP. If the specified model is doing a good job, the modeller should be able to recover the original covariance matrix of the training data. If the results of these tests are unsatisfactory, return to Step 6 and try a new transformation, design or set of constraints.
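The synthetic-data check can be scripted in a few lines: sample from a GP with known hyperparameters, refit a fresh model, and compare the recovered values. A GPflow-flavoured sketch, with arbitrary ground-truth values:

import numpy as np
import gpflow

X = np.random.uniform(0, 10, size=(200, 1))
k_true = gpflow.kernels.Matern32(lengthscales=2.0, variance=1.5)
K = k_true(X).numpy() + 0.05 * np.eye(200)            # known noise variance 0.05
y = np.random.multivariate_normal(np.zeros(200), K)[:, None]

model = gpflow.models.GPR((X, y), kernel=gpflow.kernels.Matern32())
gpflow.optimizers.Scipy().minimize(model.training_loss, model.trainable_variables)
gpflow.utilities.print_summary(model)                 # should recover ~2.0, ~1.5, ~0.05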
If after performing Steps 6 to 8, there is no significant improvement, more involved ‘research grade’ GP methods such as a Deep Gaussian Process <cit.> or a Gaussian Process Latent Variable Model <cit.> may be more suitable for the problem.
§.§ Step 9: Scaling
If the modeller is using a large dataset, they can now apply their findings from Step 5. In particular, they will either apply one of the previously discussed scaling methods or apply independent GPs to chunked data to make predictions using a Bayesian Committee Machine. This can be implemented using the `Guepard' library <cit.>.
§.§ Step 10: Testing
Finally, check the model on the test data using the same metrics as outlined in Step 8. These are the values that should be quoted as the results.
§ CASE STUDY DETAILS
The following section describes implementation details of the case study. The case study code is implemented with all of it executable from notebooks. The most distinctive feature of the glacier elevation dataset is the distribution of the target variable shown in Figure <ref>. The distribution profile changes between the negative and positive elevation change. Any simple transformation struggles to make the distribution more tractable to a GP. After applying Box-Cox, and variations of logarithm and exponential transformations, the normalised raw data was still found to perform the best.
The grid size for the independent GPs is set to approximately 130 km^2. The framework model is also initialised with the lengthscales found in Step 7; these did not help the model converge better. Priors were also applied to see if enforcing a soft constraint to keep the x-y lengthscales more similar during optimisation helped, as they tended to differ significantly for some of the grid tiles.
Two baselines for x-y inputs were chosen. The k-NN baseline used 10 nearest neighbours with distance weighting using the package <cit.>. The k value is closely related to that of GP lengthscales <cit.>, suggesting that a `research grade' application of this dataset could include a non-stationary kernel with respect to elevation or ocean distance, or the implementation of variational nearest-neighbour GPs <cit.>. The SE-ARD kernel was chosen as the second baseline as it is often the baseline of choice for probabilistic modelling <cit.>. This model is trained using the same parameters.
We also included a sparse GP implementation over the whole of Greenland using 1000 inducing points. Increasing the number of inducing points beyond this became significantly more computationally expensive, in particular when compared to chunking with exact GP regression. The GP models were trained using conjugate gradients rather than Cholesky decomposition, for 150 epochs using ADAM with a learning rate of 0.01. The sparse GP used a minibatch size of 1,024.
|
http://arxiv.org/abs/2307.02327v1
|
20230705143734
|
Equivariant graph neural network interatomic potential for Green-Kubo thermal conductivity in phase change materials
|
[
"Sung-Ho Lee",
"Jing Li",
"Valerio Olevano",
"Benoit Sklénard"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci",
"cond-mat.dis-nn",
"physics.comp-ph"
] |
Univ. Grenoble Alpes, CEA, Leti, F-38000, Grenoble, France
Univ. Grenoble Alpes, CEA, Leti, F-38000, Grenoble, France
Institut Néel, CNRS & Univ. Grenoble Alpes, F-38042, Grenoble, France
benoit.sklenard@cea.fr
Univ. Grenoble Alpes, CEA, Leti, F-38000, Grenoble, France
Thermal conductivity is a fundamental material property that plays an essential role in technology, but its accurate evaluation presents a challenge for theory.
In this letter, we demonstrate the application of E(3)-equivariant neural network interatomic potentials within the Green-Kubo formalism to determine the lattice thermal conductivity in amorphous and crystalline materials.
We apply this method to study the thermal conductivity of germanium telluride (GeTe) as a prototypical phase change material.
A single deep learning interatomic potential is able to describe the phase transitions between the amorphous, rhombohedral and cubic phases, with critical temperatures in good agreement with experiments.
Furthermore, this approach accurately captures the pronounced anharmonicity present in GeTe, enabling precise calculations of thermal conductivity. In contrast, the Boltzmann transport equation tends to overestimate it by approximately a factor of two in the crystalline phases.
Equivariant graph neural network interatomic potential for
Green-Kubo thermal conductivity in phase change materials
Benoit Sklénard
August 1, 2023
======================================================================================================================
Thermal conductivity is an intrinsic material property with deep implications in technology since it determines thermal management in the design of electronic devices <cit.>, and specifies the figure of merit in thermoelectric devices <cit.>. Lattice vibrations, i.e. phonons, dominate heat transport in semiconductors and insulators. Much effort has been devoted to accurate calculations of lattice thermal conductivities from a microscopic perspective. The Boltzmann transport equation (BTE) <cit.>, non-equilibrium Green function (NEGF) theory <cit.>, and the Green-Kubo formula (GK) <cit.> are the three major approaches to lattice thermal conductivity calculations. BTE evaluates the response of phonon occupation to a temperature gradient, typically including three-phonon scattering processes, which limits its application to weakly anharmonic crystalline materials. NEGF treats phonons quantum mechanically and takes into account contact-channel interface scatterings and phonon anharmonicity by self-energies. However, it is computationally expensive <cit.>. GK provides the lattice thermal conductivity from the heat flux in an equilibrium molecular dynamics (MD) simulation, accounting for anharmonic effects to all orders <cit.>. Furthermore, recent developments extend GK to low temperatures <cit.>, which makes it a robust approach for a wide range of temperatures and materials. GK theory provides a unified approach to compute the lattice thermal conductivity in ordered and disordered solids. For harmonic amorphous systems, thermal transport can be described by the Allen and Feldman (AF) theory <cit.>. However, it has been shown that AF theory may be inadequate when anharmonic effects become important <cit.>.
The MD simulation in the GK approach requires a relatively long simulation time (up to a few nanoseconds) for adequate statistical sampling and an accurate description of interactions among atoms.
Such long simulation times are affordable for MD with empirical force fields, but at the price of reduced accuracy and universality.
Ab initio MD has better accuracy but is too computationally expensive for large systems or long MD simulations.
Extrapolation schemes have been proposed <cit.> to reduce the computational cost, but they are unsuitable for disordered solids.
In recent years, machine learning (ML) has emerged as a viable alternative for tasks that ab initio methods have faced challenges with. In particular, machine learning interatomic potentials (MLIP) have been successful in predicting energies, forces and stress tensors orders of magnitude faster than first-principle methods, while retaining their accuracy.
Thermal transport GK calculations have been reported with MLIPs relying on descriptor-based approaches, such as Behler-Parrinello neural networks (NN) or kernel-based methods <cit.>.
Graph NN (GNN) interatomic potentials based on message passing architectures (MPNN) <cit.> have been proposed as an alternative to hand-crafted descriptors, whereby structures are encoded as a graph with atoms represented as nodes that are connected by edges.
In initial models, the information at nodes and edges of the GNN was made invariant with respect to the Euclidean group E(3) (i.e. the group of translations, rotations and inversions in Euclidean space), and the atomic representations were limited to scalar interatomic distances <cit.>.
Such models have since been generally superseded by MPNN architectures built on convolution operations that are equivariant with respect to the E(3) group. In equivariant approaches, isometric transformations on the relative atomic displacement vector inputs are propagated through the network to correspondingly transform the outputs.
Equivariant approaches have been shown to achieve substantially improved data efficiency and unprecedented accuracy compared to their invariant counterparts <cit.>.
In MPNNs, many-body interactions are captured by iteratively propagating information along the graph at each layer in the network. This has the effect of extending the local receptive field of an atom to significantly beyond the cutoff radius, which renders parallelization impractical <cit.>. Recently, a strictly local equivariant neural network approach has been proposed to address this drawback <cit.>. In this architecture, information is stored as a per-pair quantity, and instead of nodes exchanging information with its neighbours via edges, a convolution operation acts on the cutoff sphere in the form of a set of invariant (scalar) latent features and a set of equivariant (tensor) latent features that interact at each layer.
In this letter, we demonstrate that the strictly local E(3)-equivariant NN can be employed to compute the temperature-dependent thermal conductivity of germanium telluride (GeTe) in various phases using GK theory. GeTe is a chalcogenide material employed in many technological applications, such as phase change nonvolatile memory storage <cit.>, thermoelectricity <cit.> and spintronics <cit.>. It undergoes a ferroelectric phase transition from the low temperature rhombohedral α-GeTe (spacegroup R3m) to a cubic β-GeTe (spacegroup Fm3̅m) at a Curie temperature of T_c ≈650-700 K <cit.>. Amorphous GeTe also plays an important role in technological applications.
Therefore, GeTe is an ideal prototype phase change material for the study of lattice thermal conductivity using GK theory.
The thermal conductivity tensor within GK theory is defined as:
κ_αβ(T) = 1/(k_B T^2 V) lim_{τ→∞} ∫_0^τ dt ⟨ j_α(t) · j_β(0) ⟩_T ,
where k_B is the Boltzmann constant, T the temperature, V the volume, j_α(t) the α-th Cartesian component of the macroscopic heat flux, and ⟨ j_α(t) · j_β(0) ⟩_T the heat flux autocorrelation function (HFACF), with the symbol ⟨·⟩_T denoting ensemble average over time and over independent MD trajectories.
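As a concrete illustration of this estimator, the following minimal numpy sketch accumulates the single-component GK integral for one trajectory; SI units, a scalar flux series, and uniform sampling are assumed, and the ensemble average over independent trajectories used in this letter is omitted.

import numpy as np

def green_kubo_kappa(j, dt, T, V, kB=1.380649e-23):
    """Cumulative GK integral of the HFACF for one Cartesian component.

    j: heat-flux time series (n_steps,) in SI units; dt: sampling interval [s];
    T: temperature [K]; V: volume [m^3]. Returns kappa(tau) for increasing tau.
    """
    n = len(j)
    # <j(t) j(0)>: one-sided autocorrelation averaged over time origins.
    # (FFT-based estimators are preferable for very long series.)
    acf = np.correlate(j, j, mode="full")[n - 1:] / np.arange(n, 0, -1)
    # In practice kappa is read off the plateau of this cumulative integral.
    return np.cumsum(acf) * dt / (kB * T**2 * V)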
The total heat flux of a system of N atoms is defined as
j(t) = ∑_{i=1}^N d/dt ( r_i E_i ) ,
where E_i = m_i v_i^2 / 2 + U_i is the total energy (i.e. kinetic and potential energy) of atom i with mass m_i, velocity v_i and atomic positions r_i. In MLIPs, the partitioning E = ∑_i E_i of the total energy of the system into atomic contributions E_i allows the total heat flux of a periodic system to be expressed as <cit.> :
j(t) = ∑_{i=1}^N v_i E_i - ∑_{i=1}^N ∑_{j ≠ i} r_ij ( ∂U_i/∂r_ij · v_j ) ,
where the sum over j runs over the atoms that are within the cutoff radius r_c of atom i defined for the MLIP.
We implemented the calculation of Eq. (<ref>) in the LAMMPS code <cit.>.
The term ∂ U_i / ∂r_ij is obtained by automatic differentiation of atomic energies U_i computed by the MLIP.
It was also used for the calculation of the virial tensor <cit.>, which is required to perform simulations in the isothermal-isobaric (NpT) ensemble.
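The following schematic PyTorch sketch shows how such an autodiff-based evaluation of Eq. (3) can be organized; it is not the LAMMPS/Allegro implementation, and atomic_energy (a stand-in for the MLIP's per-atom energy head), the pair list pair_j, and the strict-locality assumption are all illustrative.

import torch

def heat_flux(r_ij, v, E, pair_j, atomic_energy):
    """Schematic evaluation of Eq. (3). Assumes a strictly local model in
    which each displacement row r_ij[p] enters only the atomic energy U_i of
    its central atom i, so autograd of sum(U) w.r.t. r_ij yields exactly the
    rows dU_i/dr_ij."""
    r_ij = r_ij.requires_grad_(True)
    U = atomic_energy(r_ij)                       # (n_atoms,) per-atom energies
    (dU_drij,) = torch.autograd.grad(U.sum(), r_ij)
    convective = (v * E.unsqueeze(-1)).sum(0)     # sum_i v_i E_i
    pair_dot = (dU_drij * v[pair_j]).sum(-1, keepdim=True)  # dU_i/dr_ij . v_j
    return convective - (r_ij * pair_dot).sum(0)  # (3,) total heat flux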
To generate the reference dataset to train the MLIP, ab initio MD simulations based on density functional theory (DFT) were performed with temperatures ranging from 100 K to 2500 K using the VASP code <cit.>. The generalized gradient approximation of Perdew-Burke-Ernzerhof (PBE) <cit.> was used for the exchange-correlation energy and Grimme's D3 dispersion correction <cit.> was applied. The supercells contained 192 and 216 atoms for the initial rhombohedral and cubic structures, respectively. Then, 6000 structures in total were taken from the MD trajectories and recomputed to obtain more accurate energy, forces, and stress tensors. We used an energy cutoff of 400 eV and a 2 × 2 × 2 k-mesh to sample the Brillouin zone. The equivariant NN model was trained on energy, forces and stress using the Allegro package <cit.>. The root mean squared errors (RMSE) and mean absolute errors (MAE) on the predicted energies, forces and stress tensors on the test dataset are 0.90 meV/atom, 29.87 meV/Å, 0.28 meV/Å^3 and 1.07 meV/atom, 42.97 meV/Å, 0.37 meV/Å^3, respectively
(see Suppl. Mat. for more information on the training procedure and dataset partitioning).
To further validate the MLIP, the equilibrium geometries of crystalline GeTe were optimized using the MLIP. For α-GeTe, the lattice parameter was a=4.42 Å and the angle α=57.13 ^∘, close to DFT results of a=4.41 Å and α=57.42 ^∘. Similarly, for β-GeTe, the MLIP yields a = 4.24 Å, in excellent agreement with the lattice parameter from DFT of a=4.23 Å.
Moreover, the phonon dispersion from the MLIP is in excellent agreement with DFT for both α and β-GeTe, as shown in Fig. <ref>.
In particular, our model describes optical phonons well, which is usually challenging for MLIPs <cit.>. Imaginary soft phonon modes in cubic GeTe are also well described by the MLIP, which is essential to capture the phase transition <cit.>. These phonon dispersions were computed using the finite displacement method implemented in Phonopy <cit.> with 3 × 3 × 3 and 5 × 5 × 2 supercells of the conventional unit cells for cubic and rhombohedral phases, respectively. For the DFT calculations, we used the same settings as those used to generate the reference dataset. LO-TO splitting was not included in our calculations as long-range Coulomb interactions tend to be screened by free carriers in real samples <cit.>.
We investigated the lattice dynamics of GeTe through MD simulations across the α→β phase transition with our MLIP.
For each temperature, GeTe supercells were first equilibrated for at least 200 ps in the NpT ensemble at ambient pressure with a 2 fs timestep in order to obtain the averaged temperature-dependent structural parameters shown in Fig. <ref>.
The rhombohedral lattice parameter a and angle α reach cubic values at T ≈ 650 K, in good agreement with experimental data.
By employing the temperature-dependent effective-potential (TDEP) method <cit.>, the temperature-dependent interatomic force constants (IFCs) were extracted from a 600 ps MD simulation in the microcanonical ensemble, after equilibrating the system in the NVT ensemble using the structural parameters depicted in Fig. <ref>. By utilizing these IFCs, we computed phonon spectra as a function of temperature (refer to the Suppl. Mat. for more detailed information).
Fig. <ref> presents the evolution of the longitudinal and transverse optical phonon modes (Γ_6 and Γ_4, respectively) as a function of temperature. The softening of these two modes up to the Curie temperature is corroborated by previous theoretical studies <cit.> and is comparable to experiments <cit.>.
Beyond 650 K, the optical phonons merge, indicating the transition to the cubic phase where optical phonons exhibit three-fold degeneracy.
To compute the GK thermal conductivity of cubic, rhombohedral and amorphous GeTe, MD simulations with the MLIP were performed at different temperatures. The amorphous GeTe structure was generated using a melt-quench process (see Suppl. Mat.).
The heat flux was calculated during MD simulations in the microcanonical ensemble and the ensemble average was performed over independent trajectories of at least 1 ns after equilibration in the NpT ensemble. After testing the convergence with respect to system size (see Suppl. Mat.), we used supercells containing 360 atoms for the rhombohedral phase and 512 atoms for the amorphous and cubic phases.
Although cubic GeTe is metastable below T_c, GK is able to determine its lattice thermal conductivity as it becomes dynamically stable at T ≥ 300 K (see finite temperature phonon spectra in Suppl. Mat.).
Rhombohedral GeTe shows a higher thermal conductivity than cubic GeTe before 650 K (see Fig. <ref>), after which the two curves merge, reflecting the α→β phase transition.
The comparison against experiments is challenging because experimental values of lattice thermal conductivities of crystalline GeTe show a large dispersion. There are two reasons for this.
First, thermal conductivity comprises a lattice contribution and an electronic contribution. Therefore, experimental lattice thermal conductivity is an indirect measurement, which is obtained by removing the electronic contribution, typically evaluated using the Wiedemann-Franz law that introduces an additional approximation from the Lorenz number.
Second, the sample quality varies. Extrinsic scatterings due to defects may alter the thermal conductivity measurements. For example, an extra phonon-vacancy scattering has to be included in order to recover a good agreement with experimental data <cit.>. Despite the significant experimental variations mentioned above, the calculated GK thermal conductivity values are found to fall within the range of experimental values.
The GK lattice thermal conductivity for the amorphous phase (solid green line) is in excellent agreement with the experimental data of Ref. <cit.> (green squares).
This can be regarded as a direct comparison with the experiment since the electronic contribution to the thermal conductivity was found to be negligible in amorphous GeTe <cit.>.
A previous study obtained a similar value of 0.27±0.05 W·m^-1·K^-1 at 300 K from GK simulations with a Behler-Parrinello-type MLIP <cit.>.
The predicted thermal conductivity for amorphous GeTe is constant until ∼ 450 K. It then starts to increase, indicating a transition to a crystalline phase, as evidenced by the evolution of the radial distribution function (see Suppl. Mat.) and consistent with the amorphous-crystalline phase transition temperature observed experimentally <cit.>.
To obtain the BTE thermal conductivity, we used the TDEP 2^nd and 3^rd order IFCs from MD simulations and a 30 × 30 × 30 q-mesh.
This allows a direct comparison between GK and BTE as both calculations were on the same footing, with identical interatomic potential and the same temperature; the only difference being the thermal transport formalism.
BTE overestimates the thermal conductivity by about 1.8 W·m^-1·K^-1, which is about twice the GK result at 300 K, and about three times that at 900 K.
Such overestimation is an indication that BTE cannot capture the strong anharmonicity exhibited by GeTe.
In conclusion, we developed an equivariant graph neural network interatomic potential to study thermal transport in amorphous and crystalline GeTe. The potential describes GeTe at a near-ab initio level of accuracy for the rhombohedral, cubic and amorphous phases with a single model.
Our potential also correctly captures phase transitions with Curie temperatures in good agreement with experimental data.
Combined with the Green-Kubo theory, it can determine the lattice thermal conductivity not only for strongly anharmonic crystals, but also for the amorphous phase.
We thank F. Bottin and J. Bouchet for discussions about TDEP calculation. This work was performed using HPC/AI resources from GENCI–IDRIS (Grant 2022-A0110911995) and was partially funded by European commission through ECSEL-IA 101007321 project StorAIge and the French IPCEI program.
[1] S. Krinner, S. Storz, P. Kurpiers, P. Magnard, J. Heinsoo, R. Keller, J. Lütolf, C. Eichler, and A. Wallraff, "Engineering cryogenic setups for 100-qubit scale superconducting circuit systems," EPJ Quantum Technol. 6, 1–29 (2019).
[2] Q. Zhang, K. Deng, L. Wilkens, H. Reith, and K. Nielsch, "Micro-thermoelectric devices," Nat. Electron. 5, 333 (2022).
[3] X. Zhang, Z. Bu, S. Lin, Z. Chen, W. Li, and Y. Pei, "GeTe thermoelectrics," Joule 4, 986 (2020).
[4] J. J. Gutiérrez Moreno, J. Cao, M. Fronzi, and M. H. N. Assadi, "A review of recent progress in thermoelectric materials through computational methods," Mater. Renew. 9, 16 (2020).
[5] D. A. Broido, M. Malorny, G. Birner, N. Mingo, and D. A. Stewart, "Intrinsic lattice thermal conductivity of semiconductors from first principles," Appl. Phys. Lett. 91, 231922 (2007).
[6] B. Sklénard, F. Triozon, C. Sabbione, L. Nistor, M. Frei, G. Navarro, and J. Li, "Electronic and thermal properties of GeTe/Sb_2Te_3 superlattices by ab initio approach: Impact of van der Waals gaps on vertical lattice thermal conductivity," Appl. Phys. Lett. 119, 201911 (2021).
[7] R. Li, J.-X. Wang, E. Lee, and T. Luo, "Physics-informed deep learning for solving phonon Boltzmann transport equation with large temperature non-equilibrium," npj Comput. Mater. 8, 1–10 (2022).
[8] N. Mingo, "Anharmonic phonon flow through molecular-sized junctions," Phys. Rev. B 74, 125402 (2006).
[9] J. Li, T. C. Au Yeung, C. H. Kam, Y. Peng, Q.-h. Chen, X. Zhao, and C. Q. Sun, "Anharmonic phonon transport in atomic wire coupled by thermal contacts with surface bond reconstruction," J. Appl. Phys. 106, 014308 (2009).
[10] Y. Guo, M. Bescond, Z. Zhang, M. Luisier, M. Nomura, and S. Volz, "Quantum mechanical modeling of anharmonic phonon-phonon scattering in nanostructures," Phys. Rev. B 102, 195412 (2020).
[11] Z. Fan, L. F. C. Pereira, H.-Q. Wang, J.-C. Zheng, D. Donadio, and A. Harju, "Force and heat current formulas for many-body potentials in molecular dynamics simulations with applications to thermal conductivity calculations," Phys. Rev. B 92, 094301 (2015).
[12] L. Isaeva, G. Barbalinardo, D. Donadio, and S. Baroni, "Modeling heat transport in crystals and glasses from a unified lattice-dynamical approach," Nat. Commun. 10, 3853 (2019).
[13] Z. Zhang, Y. Guo, M. Bescond, J. Chen, M. Nomura, and S. Volz, "Heat conduction theory including phonon coherence," Phys. Rev. Lett. 128, 015901 (2022).
[14] D. Tristant, A. Cupo, X. Ling, and V. Meunier, "Phonon anharmonicity in few-layer black phosphorus," ACS Nano 13, 10456–10468 (2019).
[15] P. B. Allen and J. L. Feldman, "Thermal conductivity of disordered harmonic solids," Phys. Rev. B 48, 12581 (1993).
[16] S. Shenogin, A. Bodapati, P. Keblinski, and A. J. H. McGaughey, "Predicting the thermal conductivity of inorganic and polymeric glasses: The role of anharmonicity," J. Appl. Phys. 105, 034906 (2009).
[17] X. Zhu and C. Shao, "Effect of anharmonicity on the thermal conductivity of amorphous silica," Phys. Rev. B 106, 014305 (2022).
[18] C. Carbogno, R. Ramprasad, and M. Scheffler, "Ab initio Green-Kubo approach for the thermal conductivity of solids," Phys. Rev. Lett. 118, 175901 (2017).
[19] C. Verdi, F. Karsai, P. Liu, R. Jinnouchi, and G. Kresse, "Thermal transport and phase transitions of zirconia by on-the-fly machine-learned interatomic potentials," npj Comput. Mater. 7, 1 (2021).
[20] P. Korotaev, I. Novoselov, A. Yanilkin, and A. Shapeev, "Accessing thermal conductivity of complex compounds by machine learning interatomic potentials," Phys. Rev. B 100, 144308 (2019).
[21] G. C. Sosso, D. Donadio, S. Caravati, J. Behler, and M. Bernasconi, "Thermal transport in phase-change materials from atomistic simulations," Phys. Rev. B 86, 104301 (2012).
[22] K. Schütt, P.-J. Kindermans, H. E. Sauceda Felix, S. Chmiela, A. Tkatchenko, and K.-R. Müller, "SchNet: A continuous-filter convolutional neural network for modeling quantum interactions," in Advances in Neural Information Processing Systems, Vol. 30 (Curran Associates, Inc., 2017).
[23] S. Batzner, A. Musaelian, L. Sun, M. Geiger, J. P. Mailoa, M. Kornbluth, N. Molinari, T. E. Smidt, and B. Kozinsky, "E(3)-equivariant graph neural networks for data-efficient and accurate interatomic potentials," Nat. Commun. 13, 2453 (2022).
[24] K. Schütt, O. Unke, and M. Gastegger, "Equivariant message passing for the prediction of tensorial properties and molecular spectra," in Proceedings of the 38th International Conference on Machine Learning (PMLR, 2021), pp. 9377–9388.
[25] J. Brandstetter, R. Hesselink, E. van der Pol, E. J. Bekkers, and M. Welling, "Geometric and physical quantities improve E(3) equivariant message passing" (2022), arXiv:2110.02905 [cs, stat].
[26] A. Musaelian, S. Batzner, A. Johansson, L. Sun, C. J. Owen, M. Kornbluth, and B. Kozinsky, "Learning local equivariant representations for large-scale atomistic dynamics," Nat. Commun. 14, 579 (2023).
[27] M. H. R. Lankhorst, B. W. S. M. M. Ketelaars, and R. A. M. Wolters, "Low-cost and nanoscale non-volatile memory concept for future silicon chips," Nature Mater. 4, 347 (2005).
[28] K. Jeong, H. Lee, C. Lee, L. H. Wook, H. Kim, E. Lee, and M.-H. Cho, "Ferroelectric switching in GeTe through rotation of lone-pair electrons by electric field-driven phase transition," Appl. Mater. Today 24, 101122 (2021).
[29] Q. Zhang, Z. Ti, Y. Zhu, Y. Zhang, Y. Cao, S. Li, M. Wang, D. Li, B. Zou, Y. Hou, P. Wang, and G. Tang, "Achieving ultralow lattice thermal conductivity and high thermoelectric performance in GeTe alloys via introducing Cu_2Te nanocrystals and resonant level doping," ACS Nano 15, 19345 (2021).
[30] Y. Jiang, J. Dong, H.-L. Zhuang, J. Yu, B. Su, H. Li, J. Pei, F.-H. Sun, M. Zhou, H. Hu, J.-W. Li, Z. Han, B.-P. Zhang, T. Mori, and J.-F. Li, "Evolution of defect structures leading to high ZT in GeTe-based thermoelectric materials," Nat. Commun. 13, 6087 (2022).
[31] S. Picozzi, "Ferroelectric Rashba semiconductors as a novel class of multifunctional materials," Front. Phys. 2, 10 (2014).
[32] C. Rinaldi, J. C. Rojas-Sánchez, R. N. Wang, Y. Fu, S. Oyarzun, L. Vila, S. Bertoli, M. Asa, L. Baldrati, M. Cantoni, J.-M. George, R. Calarco, A. Fert, and R. Bertacco, "Evidence for spin to charge conversion in GeTe(111)," APL Mater. 4, 032501 (2016).
[33] S. Varotto, L. Nessi, S. Cecchi, J. Sławińska, P. Noël, S. Petrò, F. Fagiani, A. Novati, M. Cantoni, D. Petti, E. Albisetti, M. Costa, R. Calarco, M. Buongiorno Nardelli, M. Bibes, S. Picozzi, J.-P. Attané, L. Vila, R. Bertacco, and C. Rinaldi, "Room-temperature ferroelectric switching of spin-to-charge conversion in germanium telluride," Nat. Electron. 4, 740–747 (2021).
[34] T. Chatterji, C. M. N. Kumar, and U. D. Wdowik, "Anomalous temperature-induced volume contraction in GeTe," Phys. Rev. B 91, 054110 (2015).
[35] M. Sist, H. Kasai, E. M. J. Hedegaard, and B. B. Iversen, "Role of vacancies in the high-temperature pseudodisplacive phase transition in GeTe," Phys. Rev. B 97, 094116 (2018).
[36] T. Chattopadhyay, J. X. Boucherle, and H. G. von Schnering, "Neutron diffraction study on the structural phase transition in GeTe," J. Phys. C: Solid State Phys. 20, 1431 (1987).
[37] A. P. Thompson, H. M. Aktulga, R. Berger, D. S. Bolintineanu, W. M. Brown, P. S. Crozier, P. J. in 't Veld, A. Kohlmeyer, S. G. Moore, T. D. Nguyen, R. Shan, M. J. Stevens, J. Tranchida, C. Trott, and S. J. Plimpton, "LAMMPS – a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales," Comp. Phys. Comm. 271, 108171 (2022).
[38] A. P. Thompson, S. J. Plimpton, and W. Mattson, "General formulation of pressure and stress tensor for arbitrary many-body interaction potentials under periodic boundary conditions," J. Chem. Phys. 131, 154107 (2009).
[39] G. Kresse and J. Furthmüller, "Efficient iterative schemes for ab initio total-energy calculations using a plane-wave basis set," Phys. Rev. B 54, 11169 (1996).
[40] G. Kresse and D. Joubert, "From ultrasoft pseudopotentials to the projector augmented-wave method," Phys. Rev. B 59, 1758 (1999).
[41] J. P. Perdew, K. Burke, and M. Ernzerhof, "Generalized gradient approximation made simple," Phys. Rev. Lett. 77, 3865 (1996).
[42] S. Grimme, J. Antony, S. Ehrlich, and H. Krieg, "A consistent and accurate ab initio parametrization of density functional dispersion correction (DFT-D) for the 94 elements H-Pu," J. Chem. Phys. 132, 154104 (2010).
[43] P. Liu, C. Verdi, F. Karsai, and G. Kresse, "α–β phase transition of zirconium predicted by on-the-fly machine-learned force field," Phys. Rev. Mater. 5, 053804 (2021).
[44] C. Wang, J. Wu, Z. Zeng, J. Embs, Y. Pei, J. Ma, and Y. Chen, "Soft-mode dynamics in the ferroelectric phase transition of GeTe," npj Comput. Mater. 7, 1 (2021).
[45] D. Dangić, O. Hellman, S. Fahy, and I. Savić, "The origin of the lattice thermal conductivity enhancement at the ferroelectric phase transition in GeTe," npj Comput. Mater. 7, 1 (2021).
[46] A. Togo and I. Tanaka, "First principles phonon calculations in materials science," Scripta Materialia 108, 1 (2015).
[47] E. F. Steigmeier and G. Harbeke, "Soft phonon mode and ferroelectricity in GeTe," Solid State Commun. 8, 1275 (1970).
[48] O. Hellman, I. A. Abrikosov, and S. I. Simak, "Lattice dynamics of anharmonic solids from first principles," Phys. Rev. B 84, 180301 (2011).
[49] O. Hellman and I. A. Abrikosov, "Temperature-dependent effective third-order interatomic force constants from first principles," Phys. Rev. B 88, 144301 (2013).
[50] F. Bottin, J. Bieder, and J. Bouchet, "a-TDEP: Temperature Dependent Effective Potential for Abinit – Lattice dynamic properties including anharmonicity," Comp. Phys. Comm. 254, 107301 (2020).
[51] P. Fons, A. V. Kolobov, M. Krbal, J. Tominaga, K. S. Andrikopoulos, S. N. Yannopoulos, G. A. Voyiatzis, and T. Uruga, "Phase transition in crystalline GeTe: Pitfalls of averaging effects," Phys. Rev. B 82, 155209 (2010).
[52] F. Kadlec, C. Kadlec, P. Kužel, and J. Petzelt, "Study of the ferroelectric phase transition in germanium telluride using time-domain terahertz spectroscopy," Phys. Rev. B 84, 205209 (2011).
[53] P. Acharyya, S. Roychowdhury, M. Samanta, and K. Biswas, "Ultralow thermal conductivity, enhanced mechanical stability, and high thermoelectric performance in (GeTe)_1-2x(SnSe)_x(SnS)_x," J. Am. Chem. Soc. 142, 20502 (2020).
[54] K. Ghosh, A. Kusiak, P. Noé, M.-C. Cyrille, and J.-L. Battaglia, "Thermal conductivity of amorphous and crystalline GeTe thin film at high temperature: Experimental and theoretical study," Phys. Rev. B 101, 214305 (2020).
[55] Y. Xia and M. K. Y. Chan, "Anharmonic stabilization and lattice heat transport in rocksalt β-GeTe," Appl. Phys. Lett. 113, 193902 (2018).
[56] P. Nath and K. L. Chopra, "Thermal conductivity of amorphous and crystalline Ge and GeTe films," Phys. Rev. B 10, 3412 (1974).
|
http://arxiv.org/abs/2307.00974v1
|
20230703124452
|
Over-The-Air Federated Learning: Status Quo, Open Challenges, and Future Directions
|
[
"Bingnan Xiao",
"Xichen Yu",
"Wei Ni",
"Xin Wang",
"H. Vincent Poor"
] |
eess.SP
|
[
"eess.SP",
"cs.DC",
"cs.LG",
"cs.NI"
] |
Over-The-Air Federated Learning: Status Quo, Open Challenges, and Future Directions
Bingnan Xiao, Xichen Yu, Wei Ni, Senior Member, IEEE, Xin Wang, Fellow, IEEE,
and H. Vincent Poor, Life Fellow, IEEE
B. Xiao, X. Yu, and X. Wang are with the Key Laboratory for Information Science of Electromagnetic Waves (MoE), Department of Communication Science and Engineering, Fudan University, Shanghai 200433, China. E-mail: xwang11@fudan.edu.cn.
W. Ni is with the Data61, Commonwealth Scientific and Industrial Research Organisation (CSIRO), Sydney, NSW 2122, Australia. Email: wei.ni@data61.csiro.au.
H. Vincent Poor is with the Department of Electrical and Computer
Engineering, Princeton University, Princeton, NJ 08544, USA (email: poor@princeton.edu).
=================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
The development of applications based on artificial intelligence and implemented over wireless networks is increasing rapidly and is expected to grow dramatically in the future. The resulting demand for the aggregation of large amounts of data has caused serious communication bottlenecks in wireless networks and particularly at the network edge. Over-the-air federated learning (OTA-FL), leveraging the superposition feature of multi-access channels (MACs), enables users at the network edge to share spectrum resources and achieves efficient and low-latency global model aggregation. This paper provides a holistic review of progress in OTA-FL and points to potential future research directions. Specifically, we classify OTA-FL from the perspective of system settings, including single-antenna OTA-FL, multi-antenna OTA-FL, and OTA-FL with the aid of the emerging reconfigurable intelligent surface (RIS) technology, and the contributions of existing works in these areas are summarized. Moreover, we discuss the trust, security and privacy aspects of OTA-FL, and highlight concerns arising from security and privacy. Finally, challenges and potential research directions are discussed to promote the future development of OTA-FL in terms of improving system performance, reliability, and trustworthiness. Specific challenges to be addressed include model distortion under channel fading, the ineffective OTA aggregation of local models trained on substantially unbalanced data, and the limited accessibility and verifiability of individual local models.
Machine learning (ML), federated learning (FL), over-the-air federated learning (OTA-FL), multiple-input multiple-out (MIMO), reconfigurable intelligent surface (RIS), security, privacy.
§ INTRODUCTION
The envisioned sixth-generation (6G) of mobile communication systems has attracted significant attention in academic and industrial communities <cit.>. An important trend in discussions of 6G is the shift of machine learning (ML) tasks from central cloud infrastructures to the network edge, capitalizing on the computational potential of edge devices and the flexibility of network connectivity <cit.>, <cit.>. Federated learning (FL), a distributed learning framework, is particularly well-suited for edge applications <cit.>. Initially proposed in <cit.>, FL has recently gained considerable traction. In FL settings, geographically distributed users train their own models using local data and then transmit their local model parameters or gradients to a base station (BS) for model aggregation. The BS subsequently returns the obtained global model to the users, repeating this process until model convergence <cit.>. Unlike traditional centralized learning settings, FL does not necessitate the transmission of large amounts of training data, thereby reducing communication costs and helping ensure data privacy to a significant extent <cit.>.
§.§ Over-The-Air Federated Learning
The concept of over-the-air (OTA) computation, also known as AirComp, was introduced in <cit.> to leverage the signal superposition characteristics of wireless multiple access channels (MACs) for function computation. OTA computation offers the advantage of resource consumption reduction since the BS only needs to handle functions uploaded by users rather than individual data. Dedicated radio resource allocation for each user is unnecessary, making the OTA communication-computation approach highly suitable for the model aggregation process in FL.
In recent years, a growing body of research has explored the signal superposition capability of OTA for aggregating local models transmitted by users in the context of wireless federated learning, commonly referred to as OTA-FL. OTA-FL enables users to share the same spectral resources, enhancing communication efficiency. In OTA-FL, edge devices can simultaneously transmit their local model updates, aggregating the models over the air in a “one-time" manner, as illustrated in Fig. <ref>. However, due to the inherent effects of channel fading and additive noise, the aggregated signal received at the BS inevitably exhibits some bias. Consequently, it becomes necessary to devise suitable transmission and reception strategies to mitigate the impact of channel fading and noise, thereby improving the convergence of OTA-FL systems.
As an emerging technique, OTA-FL has received only limited consideration of trust, security, and privacy, despite their paramount importance for the sustainability and reliability of ML models, as unveiled in this survey. While these aspects have been extensively studied in traditional FL <cit.>, it is not straightforward to apply or extend the existing solutions to OTA-FL due to its distinct communication and aggregation strategy. For example, individual local models are no longer separately observable at the server in OTA-FL. As a consequence, methods for detecting adversarial local models, e.g., Krum or multi-Krum, would no longer be applicable.
§.§ Contribution and Organization
This survey provides a detailed and comprehensive survey of existing studies on OTA-FL, addresses significant challenges, and outlines potential future research directions. The key contributions of the survey are listed as follows.
* We provide a systematic classification of OTA-FL from a fresh perspective, including single-antenna OTA-FL, multi-antenna OTA-FL, and OTA-FL with the assistance of reconfigurable intelligent surfaces (RISs).
We holistically summarize the optimization objectives and algorithms used in the existing works, highlighting their commonalities and pros and cons.
* We delineate the trust, security, and privacy risks confronted by OTA-FL, uncover the gap in the existing literature and point to novel research perspectives crucial for user concerns.
* We identify critical challenges and future directions that need to be addressed, including efficient aggregation, stringent synchronization requirements, abundant data heterogeneity, trustworthiness, and communication-learning metrics.
It is found through this survey that OTA-FL is still in its infancy. Existing studies have typically aimed to minimize the optimality gap of OTA-FL for individual model aggregations under the assumption of ML models with (strongly) convex and smooth loss functions. Little consideration has been given to understanding the impact of channel fading processes on OTA-FL, especially under multi-antenna settings. Moreover, there has been little consideration of the practical implementation of OTA-FL. Challenges, such as model distortion under channel fading, the ineffective OTA aggregation of local models trained on substantially unbalanced data, and the limited accessibility and verifiability of individual local models, have yet to be addressed.
The rest of this paper is structured as follows. Section II categorizes and discusses single-input single-output (SISO) OTA-FL system designs from the perspectives of user selection and power control. Section III extends the SISO architecture to multiple antenna settings and summarizes strategies addressing optimization problems under multi-antenna settings. In Section IV, the role of RISs in OTA-FL is discussed, and the corresponding system design strategies are analyzed. Section V describes the trust, security and privacy issues and their impacts on OTA-FL. Section VI points out the challenges and future research directions of OTA-FL, followed by conclusions in Section VII. The abbreviations involved in the paper are collated in Table I.
Notation: Boldface upper- and lower-case letters stand for matrices and vectors, respectively; ℝ^N denotes the space of all N×1 real-valued column vectors; 𝔼[·] takes mathematical expectation; h^E denotes the effective channel gain.
§ SISO OTA-FL
We start with an overview of OTA. The primary aim of OTA is to integrate the concurrently transmitted local models or gradients for computing a specific set of nomographic functions <cit.>:
F(d_1, d_2, …, d_K)=ψ(∑_k=1^K φ_k(d_k)),
where F(·): ℝ^K→ℝ is a nomographic function, d_k∈ℝ is a data sample at user k; φ_k(·): ℝ→ℝ and ψ(·): ℝ→ℝ denote the pre-processing function and the post-processing function, respectively. From (<ref>), we can observe that the nomographic function represents the process of signal transmission and aggregation in OTA-FL, i.e., each user's local data d_k undergoes pre-processing through φ_k(·) and is transmitted over wireless channels. The post-processing is subsequently performed through ψ(·) at the BS.
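To make the pre- and post-processing concrete, here is a toy numpy sketch of OTA aggregation of scalar updates under channel-inversion pre-processing; this is one common choice rather than the only design, and per-user power caps and fading truncation are ignored here.

import numpy as np

rng = np.random.default_rng(0)
K, c, sigma = 20, 1.0, 0.05                  # users, scaling factor, noise std (illustrative)
d = rng.normal(size=K)                        # local model updates (scalars for simplicity)
h = 0.5 + rng.rayleigh(scale=1.0, size=K)     # channel gains, bounded away from zero

x = (c / h) * d                               # pre-processing phi_k: channel inversion
y = np.sum(h * x) + rng.normal(scale=sigma)   # MAC superposition plus receiver noise
estimate = y / (c * K)                        # post-processing psi: recover the average
print(estimate, d.mean())                     # agree up to noise of std sigma / (c K)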
Currently, studies on OTA-FL are still in their infancy, with the majority focused on SISO OTA-FL. Critical issues, such as user selection and power control, have been the primary concerns. There are also studies dedicated to the joint design of user selection and power control to address multiple goals at the same time.
In Table <ref>, we categorize the existing studies on SISO OTA-FL into three classes, including user selection, power control and joint design, and summarize their design goals and optimization strategies, as well as their strengths and weaknesses.
§.§ User Selection
In SISO OTA-FL systems, several factors, such as the size of local data and channel quality, often influence the significance of local updates for each user. In scenarios where aggregating data from all users is not feasible, selecting the most “important" users for training participation becomes necessary, as this selection can significantly impact the system's performance. However, the user selection problem is typically a discrete non-convex problem, which necessitates the search for efficient solving methods <cit.>.
A trade-off is identified in <cit.>, where the authors suggest that aggregating training data from a larger number of devices per round can expedite convergence. However, this approach may also lead to increased aggregation error due to the inclusion of devices with poorer channels.
The work presented in <cit.> addresses a user scheduling problem considering the cumulative energy budget of each user over T rounds. By analyzing the convergence performance, the authors design an estimated drift-plus-penalty algorithm using Lyapunov optimization. An estimation method is employed to forecast the norm of local gradients to overcome the challenge of unknown communication energy.
In the context of multiple parallel OTA-FL in cellular networks, discussed in <cit.>, the authors focus on a scenario where a server handles multiple model training tasks from distinct groups utilizing identical radio resources. They define an optimization problem to simultaneously optimize receiver combiner vectors and user selection to minimize the time-varying convergence upper bound. To tackle this problem, they decompose it into two sub-problems, which are addressed using a successive convex approximation (SCA) method and a greedy algorithm.
The authors of <cit.> propose a dynamic device scheduling framework using channel inversion-based power control. They design a measurement factor inspired by the convergence upper bound, which takes into account both the quality and quantity of selected users as the optimization objective. The problem is then solved using Lyapunov optimization.
§.§ Power Control
The design of power control strategies plays a crucial role in mitigating the impact of fading and ensuring robust received signal strength for users in weak channels at the BS, thereby enhancing the performance of OTA-FL systems. Existing research primarily focuses on power control issues through the minimization of the optimality gap and obtaining the optimal power distribution factor. Most existing works, such as <cit.>, <cit.>, and <cit.>, aim to achieve user power control by minimizing the optimality gap in each round.
In <cit.>, an optimality gap minimization method is proposed to address aggregation errors characterized by mean square error (MSE). This is accomplished by optimizing transmission power and a denoising factor, and the problem is solved using an alternating minimization algorithm. Similarly, in <cit.>, a power control strategy is obtained by minimizing the optimality gap while considering unbiased aggregation constraints. The formulation incorporates both average and maximum power constraints for each user, and convex reformulations are employed with structured optimal solutions. In <cit.>, the authors derive the effects of model aggregation errors accumulated among communication rounds using the upper limit of the time-average norm of model parameter gradients. Power control and transceiver policies are obtained by minimizing the derived upper bound through an alternating optimization algorithm.
In <cit.>, OTA-FL is investigated in a multi-cell wireless network where different training tasks are performed in each cell. A convergence analysis is carried out by taking inter-cell interference into account, and a problem is formulated to minimize the error gap across all cells while adhering to power constraints.
In order to decrease the overhead of channel estimations in OTA-FL, the authors of <cit.> put forward two OTA-FL schemes that utilize statistical channel state information (CSI), known as S-CSI. These schemes aim to reduce the associated cost of channel estimations.
The convergence bound of OTA-FL has been analyzed in <cit.>, with a specific focus on the impact of channel fading on the distortion of OTA-FL. A few interesting insights are unveiled:
* The per-round convergence upper bound depends primarily on the mean and variance of the channel fading states, μ_h and σ_h^2.
* With the increase of μ_h and the decrease of σ_h^2, this convergence upper bound can effectively shrink when the ML model has strongly convex or convex loss functions. Meanwhile, a larger μ_h and a smaller σ_h^2 can counteract the side effects caused by channel noise.
In light of the insights, an optimal adaptive power control scheme has been recently proposed to combat the channel-induced model distortion of OTA-FL by leveraging S-CSI in <cit.>.
The objective of this scheme is to minimize the optimality gap of OTA-FL under any unknown channel fading conditions, as given by
min_{ρ(h)≥0, ∀h} G( μ_h^E, σ_h^E^2 )  s.t.  𝔼_h[ ρ^2(h) ] ≤ P_0 ,
where G(·) is the optimality gap of the convergence, and ρ(h) specifies the power control policy.
Based on the findings in <cit.>, this optimization problem can be translated into essentially constructing an effective channel with a large mean μ_h^E and a small variance σ_h^E^2.
By transforming the problem into a nested optimization problem and solving it with the Lagrange dual method, the optimal power control policy can be found to exhibit the following structure:
ρ^*(h) = h(2μ^*_h^E - ν^*) / (2(h^2 + λ^*)), ∀h,
where the optimal dual variables λ^* and ν^* are obtained using a subgradient ascent method, and μ_h^E^* is obtained efficiently using a one-dimensional search method.
Following this policy, at each communication round, a user only needs to know its current channel state and update the dual variables accordingly. Even without the knowledge of future channel variations, the policy has been proven to provide a long-term optimal power control strategy under independent and identically distributed (I.I.D.) channel environments.
The policy can also be extended to the situation where even the channel statistics, i.e., μ_h^E and σ_h^E^2, are unknown a priori. In this case, the policy estimates the channel statistics on the fly from the accumulated historical observations of the channels.
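A small numerical sketch of evaluating the policy structure above follows; the dual and auxiliary variables are set to illustrative values rather than obtained from the subgradient ascent and one-dimensional search described above.

import numpy as np

def rho_star(h, lam, nu, mu_eff):
    # Structure of the optimal policy; lam, nu, mu_eff are the converged
    # dual/auxiliary variables (here taken as given, illustrative values).
    return np.clip(h * (2 * mu_eff - nu) / (2 * (h ** 2 + lam)), 0.0, None)

rng = np.random.default_rng(0)
h = rng.rayleigh(scale=1.0, size=100_000)       # i.i.d. fading states
lam, nu, mu_eff, P0 = 0.5, 0.1, 1.0, 1.0        # illustrative values
rho = rho_star(h, lam, nu, mu_eff)
# Monte Carlo check of the long-term average power constraint E[rho^2] <= P0.
print("E[rho^2] =", np.mean(rho ** 2), "  P0 =", P0)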
§.§ Joint Design of User Selection and Power Control
There has been a growing interest in simultaneously considering and optimizing user selection and power control for OTA-FL, recognizing that joint design may not always guarantee optimality for each objective due to transformations or decompositions of the original problems. However, this joint approach can enhance system robustness by improving the system design from multiple perspectives.
In <cit.> and <cit.>, an optimal co-design strategy for user selection and power control is obtained by minimizing the optimality gap, leveraging system convergence analysis. This joint strategy takes into account both user selection and power control to achieve improved performance. Furthermore, in <cit.>, the authors extend the consideration to include energy harvesting strategies on the user side. They formulate a non-convex non-linear integer programming (NIP) problem to optimize client selection on a per-training-round basis, addressing the energy harvesting aspect of OTA-FL in addition to user selection and power control optimization.
§ MULTI-ANTENNA OTA-FL
Recently, there has been a growing interest in exploring more complex scenarios involving multiple antennas at the transmitters and/or receivers in OTA-FL. In such cases, joint optimization of transmitter and receiver beamforming, along with user selection and power control, becomes crucial to achieve enhanced performance. Compared to the SISO OTA-FL systems, the design of OTA-FL systems with multiple antennas is still at an early stage, offering significant opportunities for further development and exploration.
Given the strong interdependence among variables in multi-antenna OTA-FL systems, existing studies predominantly adopt a joint design approach, aiming to optimize variables such as user selection, power control, and beamforming vectors, which are highly coupled, as illustrated in Fig. <ref>.
These joint design approaches are motivated by the need to exploit the benefits of multiple antennas in OTA-FL systems, such as improved spectral efficiency and enhanced communication reliability. By jointly optimizing user selection, power control, and beamforming, the performance of multi-antenna OTA-FL systems can be significantly improved, thereby unlocking the potential of utilizing multiple antennas effectively for distributed learning tasks. In Table <ref>, we summarize the efforts made by existing studies in exploring MIMO OTA-FL systems and summarize their respective optimization objectives and methods, as well as their strengths and limitations.
In multi-antenna OTA-FL systems, the prevailing approach for optimization is centered around minimizing MSE. For instance, in <cit.>, the authors tackle the problem of optimizing users' transmit beamforming and the BS's receive beamforming to minimize the MSE of the received signals. To address the challenges posed by highly coupled variables, they propose an alternating minimization algorithm that jointly optimizes the transmit and receive beamforming vectors. However, recent studies suggest that exploring alternative communication-learning representation objectives may offer a more promising approach than solely focusing on MSE. While MSE minimization remains the mainstream practice, researchers are starting to recognize the potential benefits of adopting different optimization objectives.
In the context of MIMO OTA-FL systems, <cit.> considers the optimization of wireless power transfer (WPT). They reformulate the non-convex MSE minimization problem into two nested sub-problems, making it tractable and enabling efficient solutions.
Another approach is presented in <cit.>, where equalization and channel feedback techniques are designed to improve OTA computation capabilities. The objective is to minimize computation MSE, and the authors propose an equalization strategy based on differential geometry and a channel feedback mechanism to facilitate OTA computation functions.
Furthermore, there are studies that explore the integration of OTA with other systems. In <cit.>, OTA is combined with an integrated sensing and communication (ISAC) system to leverage the advantages of both sensing and OTA computation. The optimization objective in this case is to minimize radar sensing MSE while maintaining sensing accuracy.
In the context of IoT networks, the authors of <cit.> propose a MIMO OTA scheme for networks with clustered multi-antenna sensors and a receive array at the base station. They introduce an optimal receive beamformer, known as the decomposed aggregation beamformer (DAB), which utilizes a decomposed architecture to reduce the channel dimension and perform joint equalization. Additionally, they propose a low-latency channel feedback framework that leverages OTA to enable simultaneous channel feedback from the sensors.
To address signaling overhead and power consumption concerns, the authors of <cit.> present a joint optimization scheme that minimizes MSE through the optimization of transmit beamformers and hybrid combiners. This scheme achieves local stable convergence by dividing time slots and optimizing two subproblems: short-term and long-term optimizations.
In <cit.>, a joint approach incorporating user selection and receiving beamforming design is utilized to achieve fast model aggregation. This approach employs a difference-of-convex-functions (DC) representation to improve sparsity and accurately detect the fixed-rank constraint.
Similarly, an efficient user selection scheme is achieved in <cit.> by maximizing the number of selected devices and minimizing the aggregation error. A greedy method based on matching pursuit is designed to reduce computational complexity while preserving the training performance achieved by the DC method in <cit.>.
While the majority of existing studies in multi-antenna OTA-FL focus on MSE minimization, there are only a few studies that explore minimizing the optimality gap. For example, the authors of <cit.> propose a strategy to jointly optimize transceiver beamforming and user selection by minimizing the upper bound. They develop an algorithm based on an alternating optimization framework with low complexity, which helps address the optimization problem and alleviate the straggler problem by aligning the upload gradients at the BS.
§ RIS-ASSISTED OTA-FL
In the context of the envisioned development of 6G, the integration of RIS technology is gaining significant attention and is considered a promising solution for OTA-FL systems. RIS, as a cost-effective passive device, consists of multiple reflective elements capable of adjusting the phase shift of incoming signals, consequently altering the propagation direction of the reflected signals <cit.>, as illustrated in Fig. <ref>. This unique capability of RIS enables the superimposition of the direct link signal, effectively mitigating signal energy attenuation resulting from obstacles such as buildings <cit.>.
RIS holds great potential for improving the performance of OTA-FL systems by overcoming signal propagation challenges. By strategically deploying RIS in the wireless environment, it becomes possible to enhance signal strength and quality, leading to improved communication efficiency and higher reliability <cit.>. The use of RIS technology can effectively address issues related to signal attenuation and multipath fading, which are common obstacles in wireless communications. Additionally, RIS can offer flexibility in optimizing signal propagation paths, thus enabling better coverage and reduced interference in OTA-FL systems.
Several studies have focused on incorporating RIS into OTA-FL frameworks, as delineated in the following. We provide a detailed summary of OTA-FL systems assisted by RISs in Table <ref>, where the optimization objectives, variables, and methods of the systems, as well as their strengths and limitations, are presented accordingly.
§.§ RIS-Assisted OTA-FL in Single-Antenna Systems
In OTA-FL systems incorporating RISs, comprehensive optimization of multiple variables is necessary, including RIS configurations, user selection, and power control. However, these optimization problems can be challenging due to their non-convex nature and the close coupling of variables.
In <cit.>, the authors propose the use of RISs to mitigate the impact of fading channels and enable reliable model aggregation. It is important to note that increasing the number of reflecting elements in the RIS can aggregate more devices for training and improve performance. However, this also leads to increased channel estimation overhead. Thus, a co-design of power control and resource distribution is required to strike a balance. Similarly, in <cit.>, the trade-off is investigated between communication and learning, highlighting the essentiality of RIS deployment. The authors propose an algorithm that jointly optimizes user scheduling, receiver beamforming vectors, and RIS phase shifts to achieve the desired performance.
In order to address performance bottlenecks under poor propagation channel conditions, OTA-FL systems with RISs are designed in <cit.> and <cit.>. These works focus on the co-design of transceivers and RIS phase shifts to minimize aggregation errors. In <cit.>, a mixed-timescale penalty-dual-decomposition (MTPDD) algorithm is proposed to reduce signaling overhead caused by a large number of RIS elements. The algorithm jointly minimizes the MSE of computation over time while mitigating signaling overhead.
The potential of OTA-FL in multi-cell networks is explored in <cit.> through the utilization of RISs. The authors propose an alternating minimization scheme to optimize receiving beamforming vectors and RIS phase shifts jointly.
Furthermore, in <cit.>, the concept of a double-RIS (DRIS) OTA-FL system is introduced, where a pair of RISs are placed at the users' and BS ends, respectively. This configuration addresses the problem of unavailable direct links due to obstacles. It is demonstrated that the MSE performance of the DRIS system surpasses that of single RIS systems when a substantial number of RIS elements and receiving antennas are deployed.
While it may be challenging to obtain the global optimum in the works mentioned above, significant performance gains have been achieved through the rational use of RISs and the design of optimization algorithms. These studies contribute to the exploration and advancement of OTA-FL systems, leveraging the benefits of RIS technology.
§.§ RIS-Assisted OTA-FL in Multi-Antenna Systems
Introducing multiple antennas in RIS-assisted OTA-FL systems provides increased degrees of freedom but also leads to a larger number of variables that need to be optimized.
In <cit.>, the authors present a RIS-assisted OTA-FL framework with multiple antennas at the BS. They formulate an energy minimization problem that aims to jointly optimize user selection, phase shifts, decoding vectors, and power control. To handle the complexity of the problem, it is decomposed into several subproblems and solved iteratively.
In <cit.>, user terminals are equipped with multiple antennas to facilitate concurrent model transmission over a millimeter wave (mmWave) network. The objective is to minimize transmission distortion. Thus, the joint optimization of RIS phase shifts and beamforming vectors is considered to achieve this goal.
Similarly, in <cit.>, a DRIS-assisted OTA-FL system with multiple antennas is proposed to enhance the channel quality. The objective is to minimize MSE by jointly optimizing receiving beamforming vectors, denoising factor, power control, and passive beamforming design. By considering all these variables together, the system performance can be improved.
In these works, the authors recognize the importance of optimizing multiple antennas in RIS-assisted OTA-FL systems. By jointly considering variables, such as user selection, phase shifts, beamforming vectors, power control, and passive beamforming design, the performance of the systems can be enhanced and the objectives can be achieved more effectively.
A critical challenge in RIS-assisted OTA-FL systems is the assumption of perfect instantaneous CSI in existing studies. However, obtaining accurate CSI can be computationally expensive or even infeasible, particularly in scenarios where the system is mobile and the channel exhibits high dynamics. This limitation poses a significant hurdle in optimizing the performance of such systems.
The integration of RIS technology in OTA-FL systems represents a promising avenue for research and development. It has the potential to revolutionize wireless communications by mitigating signal attenuation and improving overall system performance. As the exploration of RIS in OTA-FL continues to expand, further advancements and optimizations are expected to unlock its full potential in realizing efficient and reliable next-generation wireless networks.
§ TRUST, SECURITY AND PRIVACY OF OTA-FL
FL has made significant strides in improving privacy compared to centralized learning, as it enables local processing of raw data. However, the baseline OTA-FL model still lacks a formal guarantee of trust, security, and privacy <cit.>. While these terms are often used interchangeably in existing literature, it is important to highlight their distinct meanings.
Trust ensures that processes operate in expected ways, providing assurance to stakeholders. Security, on the other hand, focuses on protecting data from unauthorized access or alteration, including guarding against Byzantine attacks. Privacy is concerned with preventing the disclosure of private information during interactions with other entities <cit.>.
To address the challenges and ensure trust, security, and privacy in OTA-FL, various mechanisms have been proposed and investigated. Researchers have explored different approaches to develop OTA-FL systems that preserve these crucial aspects. These mechanisms aim to establish a robust framework that instills confidence in the system's operations, safeguards against unauthorized access and malicious attacks, and protects the privacy of sensitive information.
By emphasizing the distinctions between trust, security, and privacy, researchers can better understand the multifaceted nature of OTA-FL systems and design comprehensive solutions. By integrating these mechanisms into the OTA-FL model, it becomes possible to establish a trustworthy, secure, and privacy-preserving framework that meets the requirements of various stakeholders involved in the collaborative learning process. Table <ref> presents a summary of the existing studies on the trust, security, and privacy of OTA-FL, where the design objectives and security or privacy guarantees are systematically reviewed.
§.§ Trust and Integrity
Trust is a concept that reflects the level of control and confidence that an entity has in another entity. It can also be seen as an outcome resulting from advancements in achieving security and privacy objectives <cit.>. In the context of FL, trust plays a significant role. For example, enterprises must trust network operators when granting consent for the collection of their data in FL. However, the abundance of valuable private data introduces concerns regarding algorithms that aim to predict specific business states.
The lack of trust among participants in FL remains an ongoing challenge. In an effort to address this issue, FLChain is proposed in <cit.>. FLChain is a decentralized, publicly auditable, and healthy FL ecosystem that incorporates trust and incentives. It aims to create an environment where participants can trust the system and each other. However, to date, no research specifically focuses on establishing trust in the field of trustworthy OTA-FL.
In order to foster trust in OTA-FL systems, it is crucial to explore mechanisms that enhance transparency, accountability, and verifiability. These mechanisms can include decentralized architectures, public auditing, and incentives for participants. By incorporating these elements into the design and implementation of OTA-FL frameworks, researchers can work towards establishing trust among the involved entities and ensuring the reliability and security of the collaborative learning process.
§.§ Security
The security concerns associated with OTA-FL primarily revolve around the potential for poisoning attacks, where malicious participants aim to compromise the federated learning process. These attacks can be classified into two main categories:
* Data Poisoning: One example of a data poisoning attack is the label-flipping attack, as described in <cit.>. In this type of attack, the adversary maintains the same features in their training sample but flips the corresponding label to a different class. The adversary aims to corrupt the training process and influence the learned model by introducing such maliciously labeled data <cit.>.
* Model Poisoning: Adversaries in OTA-FL have the ability to manipulate local model updates before transmitting them to the server. This manipulation can lead to various outcomes, such as causing misclassifications or implanting hidden backdoors <cit.> within the global model. It has been observed that in FL scenarios, model poisoning attacks tend to be more effective compared to data poisoning attacks <cit.>.
These poisoning attacks pose significant security risks to OTA-FL, as they can undermine the integrity and reliability of the federated learning process. Mitigating these threats requires robust security measures and techniques to detect and prevent malicious behavior from compromising the OTA-FL system.
The simplicity of OTA-FL procedures can render the learning process vulnerable to intentional poisoning attacks by adversaries. In recent years, there has been growing interest in Byzantine attacks, where malicious devices aim to disrupt FL convergence or steer it towards a poisoned model, as illustrated in Fig. <ref>. Therefore, it is crucial to design secure OTA-FL systems that can effectively counteract these attacks. Currently, only a few works specifically address Byzantine attacks in the context of OTA-FL, which are described below.
For instance, in <cit.>, the authors propose ROTAF, a novel transmission and aggregation approach that enhances the robustness of OTA-FL against Byzantine attacks in the case of I.I.D. data distribution. In ROTAF, participating devices are divided into different groups during each global training round, and each group is assigned a separate transmit time slot. The local updates from different groups are aggregated using geometric median aggregation. When dealing with non-I.I.D. data, a resampling step is performed before applying geometric median aggregation. The authors provide theoretical convergence analysis of ROTAF under both I.I.D. and non-I.I.D. data assumptions, demonstrating its ability to converge to within a range of the optimum at a linear rate when the number of groups exceeds twice the number of attacks. Numerical results show that ROTAF exhibits robustness against various forms of Byzantine attacks compared to the basic averaging OTA-FL approach.
In <cit.>, the authors focus on reducing the computation complexity of Byzantine-resilient OTA-FL. They address the challenge posed by the complexity of solving a convex problem in the commonly-used geometric median aggregation approach when the model parameter dimension is large. To overcome this, they adopt the improved Weiszfeld algorithm to calculate the smoothed geometric median. By leveraging the additive structure of the Weiszfeld algorithm, which can be combined with OTA, they propose a secure aggregation approach that is jointly designed for computation and communication in OTA-FL.
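To illustrate the aggregation rule itself (not the full joint computation-communication design of the cited work), the sketch below implements a smoothed Weiszfeld iteration for the geometric median; the smoothing constant, iteration count, and toy data are assumptions.

```python
import numpy as np

def smoothed_geometric_median(updates, eps=1e-6, iters=50):
    """Smoothed Weiszfeld iteration for Byzantine-resilient aggregation.

    updates: (K, d) array of K local model updates.
    eps:     smoothing constant keeping the weights finite near an update.
    Returns an approximate geometric median of the K updates.
    """
    z = updates.mean(axis=0)              # initialize at the plain average
    for _ in range(iters):
        dists = np.linalg.norm(updates - z, axis=1)
        w = 1.0 / np.maximum(dists, eps)  # smoothed inverse-distance weights
        z = (w[:, None] * updates).sum(axis=0) / w.sum()
    return z

# Toy usage: eight honest updates near 1.0 and two Byzantine outliers.
rng = np.random.default_rng(1)
honest = 1.0 + 0.05 * rng.standard_normal((8, 4))
byzantine = 50.0 * np.ones((2, 4))
agg = smoothed_geometric_median(np.vstack([honest, byzantine]))
print(agg)  # stays close to 1.0 despite the outliers
```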
In <cit.>, the authors highlight the limitations of the widely-used channel inversion method in OTA-FL for defending against Byzantine attacks. They introduce a novel policy called best effort voting (BEV), which integrates local stochastic gradient descent (SGD) to ensure safe aggregation. BEV allows users to transmit their local gradients with maximum power, maximizing the deterrent effect on OTA-FL convergence. The authors analyze the convergence of BEV and demonstrate its superiority over popular channel inversion methods, particularly under strong adversarial environments.
These works represent important contributions to addressing Byzantine attacks in OTA-FL, providing techniques and mechanisms to enhance the security and resilience of the learning process against malicious behavior.
§.§ Privacy
While FL provides a decentralized approach to model training without the need to share local data with the BS or other participants, it is important to note that FL is not immune to privacy concerns <cit.>. Recent research has demonstrated that FL is susceptible to inference attacks <cit.>, which can potentially lead to the recovery of local training data. This vulnerability arises because the model or gradient updates obtained from local data can unintentionally reveal additional information about the underlying features in the data that were not intended to be disclosed. In the context of OTA-FL, attacks that exploit this privacy vulnerability can be categorized as follows:
* Membership Inference <cit.>: The adversary could test whether a specific data partition of a device has been utilized for training a model.
On the FourSquare location dataset, the authors show that a malicious server can tell with 99% confidence whether particular location metadata was used to train a classifier.
* Property Inference <cit.>: The adversary could test whether the data of a device contains a partition with certain properties. It is important to emphasize that such a property may not be directly associated with the primary learning objective.
* Model Inversion <cit.>: The adversary could reconstruct an input sample of a device's training data from its local updates, e.g., by using a generative adversarial network (GAN) to generate a sample that should have remained private.
It is crucial to design private-by-design OTA-FL systems that are inherently against inference attacks. To this end, differential privacy (DP), one of the perturbation-based methods, has been adopted as a standard solution for preserving privacy in OTA-FL, as illustrated in Fig. <ref>.
DP serves as a robust standard for ensuring privacy in distributed systems <cit.>.
A randomized mechanism 𝒩:𝒟→𝒦, where 𝒟 denotes the domain and 𝒦 the range, satisfies (ϵ,δ)-DP if, for any two adjacent inputs i,i^'∈𝒟 and any subset of outputs O⊆𝒦, it holds that <cit.>:
Pr[𝒩(i)∈ O] ≤ e^ϵ Pr[𝒩(i^')∈ O] + δ.
For suitably small constants ϵ and δ, it is statistically impossible for an adversary to violate privacy, owing to the indistinguishability of the neighboring inputs i and i^'.
The preservation of privacy in OTA-FL can be achieved through the introduction of artificial noise to the local updates <cit.>. For instance, in <cit.>, the authors consider OTA-FL with local DP over flat-fading Gaussian channels, where privacy-constrained artificial Gaussian noise is linearly combined with the local gradients during transmission. Analytical results show a tradeoff between privacy and convergence for certain loss functions, indicating that the training error decreases as the device count K increases. It is also demonstrated that the privacy level per user decreases at a rate of 𝒪(1/√(K)) compared to orthogonal transmission, where privacy leakage remains constant. However, a limitation of this scheme is that the device with the weakest channel constrains the system signal-to-noise ratio (SNR), which can degrade learning accuracy. To address this issue, the MPA-DPFL scheme with misaligned power allocation is proposed in <cit.>. The scheme allocates power in a misaligned manner when a device's channel gain falls below a certain threshold, as opposed to the aligned manner used in <cit.>.
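The following sketch illustrates the general mechanism of DP-style artificial-noise perturbation before OTA aggregation; the clipping step, noise level, power-control coefficient, and channel model are illustrative assumptions and do not reproduce the exact schemes of the cited works.

```python
import numpy as np

def dp_perturbed_transmit(grad, clip, sigma, rho, rng):
    """DP-style transmit signal: clip the gradient, scale it by the
    power-control coefficient rho, and add artificial Gaussian noise.

    clip:  L2 clipping threshold bounding each user's sensitivity (assumed)
    sigma: std of the artificial Gaussian noise, setting the privacy level
    """
    g = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))  # clip
    return rho * g + rng.normal(0.0, sigma, size=g.shape)       # perturb

# Toy OTA round: K users' signals superpose over the channel, plus
# receiver noise; the server only ever sees the noisy sum.
rng = np.random.default_rng(2)
K, d = 10, 5
grads = rng.standard_normal((K, d))
x = sum(dp_perturbed_transmit(g, clip=1.0, sigma=0.3, rho=1.0, rng=rng)
        for g in grads)
y = x + rng.normal(0.0, 0.1, size=d)  # channel/receiver noise
print(y / K)                          # server's model-update estimate
```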
Several works have considered more comprehensive approaches to introducing additional noise to local updates. In <cit.>, the authors propose a novel DP-based method that adds spatially correlated perturbations to the local updates at each device. Compared to traditional DP-based methods that employ uncorrelated noise, this approach achieves higher learning accuracy while preserving defense ability. Additionally, in <cit.>, it is emphasized that CSI at the devices is crucial for transmission and ensuring a DP-based privacy guarantee. The proposed distributed noise generation process is resilient against pilot attacks manipulated by a malicious server and the failure of transmitting nodes.
Another technique, presented in <cit.>, is a so-called A-FADMM algorithm based on the alternating direction method of multipliers (ADMM). By adding a random variable to the local update and multiplying it with a random fading gain, this method preserves the privacy of both the local model trajectory and gradient trajectory. To enhance communication efficiency, the authors in <cit.> propose the differentially private random projection FedSGD scheme, which reduces the dimension of local updates while preserving privacy.
An alternative to artificial noise is that the inherent channel noise can also be leveraged to preserve privacy. In <cit.>, the authors demonstrate that privacy can be obtained “for free” from the channel noise when the privacy constraint level is below a certain threshold that decreases with SNR. It is also highlighted that actively assigning additional power to perturb local gradients is generally suboptimal in OTA-FL scenarios. To address this, a dynamic power control strategy is optimized under power and privacy constraints to minimize the optimality gap. Similarly, in <cit.>, the inherent receiver noise is harnessed to preserve DP against inference attacks, and a novel power control strategy is introduced. The analysis shows that the received SNR is primarily influenced by the number of devices when aiming for a higher level of privacy.
Recent research has been focused on the joint design of secure and private OTA-FL systems. In the work by Xue et al. <cit.>, they consider a scenario where an external attacker equipped with a directional antenna eavesdrops on RF signals from the devices, which poses a more potent attack model that existing approaches struggle to handle. To address this challenge, the authors propose the introduction of pairwise suppressible random, artificial noises. These noises are used to obfuscate private local model parameters and thwart external eavesdroppers. This design can be seen as an integration of DP and secure aggregation at the physical layer for OTA-FL.
In addition, Yan et al. <cit.> propose a secure and private OTA-FL framework, which utilizes noise to preserve privacy and security guarantees. The framework employs DP and MSE-security as the metrics. Specifically, a subset of devices is designated to send Gaussian artificial noise with the aim of degrading the SNR of potential eavesdroppers. To mitigate the impact of noise on learning accuracy, a channel-weighted post-processing mechanism is introduced. Moreover, the authors propose a scheduling algorithm based on the branch-and-bound concept with low complexity. This algorithm ensures the security of the system and the privacy of user data stored on the server.
§ LESSONS LEARNED, OPEN CHALLENGES, AND FUTURE DIRECTIONS
As OTA-FL in wireless environments continues to gain attention, researchers are actively working to tackle the associated challenges and improve system performance. However, there are still several open questions and directions for further research in this area. Some of these challenging questions are discussed in the following.
§.§ Aggregation Distortion
OTA-FL systems face the challenge of distortion introduced by channel fading, noise, and transceiver filtering, which can degrade the quality of the received summation signal <cit.>, <cit.>. Minimizing this distortion has been a persistent challenge in OTA-FL. Coded OTA can help reduce distortion but adds complexity to the system with coding and decoding processes. On the other hand, uncoded OTA requires an advanced transceiver design to achieve optimal amplitude alignment and combat interference. Both approaches necessitate improved design techniques to enhance the training performance of OTA-FL systems.
§.§ Stringent Synchronization Requirement
The assumption of perfect signal synchronization at the receiving end has been commonly made in most existing OTA-FL studies <cit.>. However, this assumption becomes increasingly challenging to achieve in scenarios with large system sizes and high heterogeneity. While some efforts have been made to address this challenge through robust design techniques, such as those proposed in <cit.>, effectively implementing synchronization in complex network environments remains an important and unresolved area that requires further investigation. Overcoming the synchronization challenge is crucial to ensure the reliable and efficient operation of OTA-FL in real-world wireless systems.
§.§ Data Heterogeneity
In different OTA-FL scenarios, user data often have different distributions. It is necessary to consider different, non-I.I.D. settings when testing the performance of different algorithms to ensure a robust design. For example, the authors of <cit.> define several non-I.I.D. distribution policies to serve as benchmarks. Meanwhile, a severely unbalanced data distribution often leads to disparities in the gradient importance of different users, causing some users' updates to be submerged in receiver noise. To this end, an adequate design of aggregation weights under non-I.I.D. distributions is a vital direction for future work.
As a matter of fact, OTA-FL is particularly susceptible to unbalanced volumes of training data among different users. This is because local models trained on significantly larger amounts of local data are typically weighted higher. In the context of OTA-FL, this means those local models would be delivered with much higher received powers at the server (or BS). A near-far effect could occur, leading to the loss of local models trained on smaller amounts of data and delivered with lower transmit powers.
§.§ Secure and Trustworthy OTA-FL
While efforts have been made to protect user data by submerging it within the superimposed signal, thereby mitigating the risk of information theft, there remains an inherent risk of leakage <cit.>. To ensure reliable aggregation computation and minimize aggregation errors caused by active eavesdroppers in different environments, it is crucial to design robust transceiver policies. These policies should effectively counteract the presence of eavesdroppers and maintain the integrity and privacy of the aggregated data.
In addition to addressing security concerns, building users' confidence in local data aggregation is paramount. To achieve this, it is essential to develop reliable algorithms and establish appropriate metrics in trustworthy FL. Researchers should focus their attention on creating robust and verifiable algorithms that instill trust in the aggregation process and provide transparent metrics to assess the reliability and accuracy of the aggregated results. By emphasizing the importance of trustworthy FL and dedicating efforts to its development, researchers can contribute to enhancing users' confidence in the integrity and privacy of their data.
§.§ Other Challenges
In addition, it is crucial to consider the impact of inherent channel fading and additive noise within OTA-FL systems on the convergence upper bound. These factors play a significant role in system performance and should be appropriately captured by a communication-learning metric. Identifying and defining a metric that effectively captures the influence of channel fading and noise is an essential direction for further research in OTA-FL.
Moreover, the implementation of the right to data erasure within the OTA-FL framework presents challenges. The right to data erasure allows users to request the removal of their data from a dataset or model under specific circumstances to protect their privacy <cit.>. However, handling this “unlearning” situation in OTA-FL is not straightforward. Permanently removing user data from the system can lead to significant performance losses, particularly in non-I.I.D. data distributions. Therefore, developing techniques that address data erasure while minimizing the impact on system performance is a challenging aspect that requires further exploration in OTA-FL research.
§ CONCLUSION
This paper has provided a comprehensive overview of the latest studies on the emerging OTA-FL technique. We first categorized OTA-FL systems under different system settings, including single-antenna and multiple-antenna OTA-FL systems, as well as the consideration of RISs. The design objectives and optimization tools were analyzed. Next, we delineated the trust, security, and privacy aspects of OTA-FL systems, provided corresponding performance evaluation metrics, and unveiled critical concerns needed to promote better system design. Additionally, we highlighted the challenges faced by OTA-FL and suggested future research directions. Challenges to be holistically addressed include model distortion under channel fading, the ineffective OTA aggregation of local models trained on substantially unbalanced data, and the limited accessibility and verifiability of individual local models.
ieeetr
|
http://arxiv.org/abs/2307.01499v1
|
20230704061228
|
Comparing dendritic trees with actual trees
|
[
"Roozbeh Farhoodi",
"Phil Wilkes",
"Anirudh M. Natarajan",
"Samantha Ing-Esteves",
"Julie L. Lefebvre",
"Mathias Disney",
"Konrad P. Kording"
] |
q-bio.NC
|
[
"q-bio.NC",
"q-bio.PE"
] |
Roozbeh Farhoodi, Department of Bioengineering at University of Pennsylvania, 404 Richards Building, 3700 Hamilton Walk, Philadelphia, PA 19104
roozbeh@seas.upenn.edu
Phil Wilkes, Department of Geography at University College London, UK
p.wilkes@ucl.ac.uk
Anirudh M. Natarajan, Department of Computer Science at University of California, Berkeley, USA
anirudh.j.12@gmail.com
Samantha Ing-Esteves, Hospital for Sick Children, University of Toronto, Canada
samantha.esteves@mail.utoronto.ca
Julie L. Lefebvre, Hospital for Sick Children at University of Toronto, Canada
julie.lefebvre@sickkids.ca
Mathias Disney, Department of Geography at University College London, UK
mathias.disney@ucl.ac.uk
Konrad Paul Kording, Department of Bioengineering and Department of Neuroscience at University of Pennsylvania
kording@upenn.edu
Comparing dendritic trees with actual trees
Roozbeh Farhoodi, Phil Wilkes, Anirudh M. Natarajan, Samantha Ing-Esteves, Julie L. Lefebvre, Mathias Disney, Konrad P. Kording
=======================================================================================================================================
Since they became observable, neuron morphologies have been informally compared with biological trees, yet they are studied by distinct communities: neuroscientists and ecologists. The apparent structural similarity suggests there may be common quantitative rules and constraints. However, there are also reasons to believe they should differ. For example, while the environments of trees may be relatively simple, neurons are constructed by a complex iterative program where synapses are made and pruned. This complexity may make neurons less self-similar than trees. Here we test this hypothesis by comparing the features of segmented sub-trees with those of the whole tree. We indeed find more self-similarity within actual trees than within neurons. At the same time, we find that many other features are broadly comparable across the two. Investigation of shapes and behaviors promises new ways of conceptualizing the form-function link.
§ INTRODUCTION
At first glance, most neuron morphologies remind us of the structure of actual trees, the ones that have green leaves. Trees have been invoked frequently as an analogy for the complex neuronal structures in the brain. Santiago Ramón y Cajal, the father of modern neuroscience and the first person to visualize the breadth of neuronal arborizations, has described this similarity many times: “The cerebral cortex is similar to a garden filled with innumerable trees, the pyramidal cells, that can multiply their branches thanks to an intelligent cultivation, sending their roots deeper and producing more exquisite flowers and fruits every day.” <cit.>. Neuroanatomists (or botanists) use the similarity between samples of neurons (or trees) to divide them into neuron classes (or species of trees). While neurons and trees have significantly different sizes (from μm for neurons to km for trees), their similarities in structure may reveal some unifying principles underlying their morphology <cit.>. Exploring these similarities requires detailed 3D structural measurements to compare the arborization structures despite the vast scale differences.
The arborization structure of trees and neurons can be seen as the result of processes that act on two different timescales: evolutionary and developmental. Over evolutionary timescales, genetic evolution acts to hardwire arborizations that have survival benefits. This usually directs the development of the overall branch architecture. On the developmental timescale, arborizations are patterned through an intersection of molecular and physical cues. This refines the development of arborization according to local resources. Despite these processes acting on both trees and neurons, there are some clear differences. Neurons grow together and essentially pack the whole brain volume, while trees have free space in their surroundings to respire. The arborizations of neurons touch one another, producing an environment of communication. Conversely, trees can, and do, survive and thrive individually, and their communications are limited to their physical interactions and potentially their interactions through fungi. Recent evidence shows that trees living in close proximity do not simply compete with each other, but can develop complex resource-sharing and collaboration networks, e.g., the so-called ‘wood wide web’ <cit.>. Neurons react to changes in input on the order of milliseconds or seconds through transmission and plasticity <cit.>. Trees do respond on these time scales, e.g., through diffusion and transpiration at the stomatal level, but also on much longer time scales, particularly regarding structural change (weeks to decades) <cit.>. Tree growth is affected by the amount of light, nutrients, and water they receive as well as gravity, prevailing winds, and temperature. Similarly, the elements that affect a neuron’s growth include molecular signals and the activities of other neurons. Both neurons and trees develop in an environment of meaningful stimuli and biological and physical constraints, which affect their arborization structures by the time they mature.
In spite of the differences between their respective environments, they undoubtedly share many relevant factors. First, both trees and neurons process information <cit.>. In trees, this information is about the surrounding environment including the availability of light, water, nutrients, and certain survival factors such as competition and ease of reproduction; in neurons, it is about what function they serve and their movement of electrical and chemical signals. Additionally, both trees and neurons can grow in populations, trees in forests, and neurons in the network of the brain and the nervous system. By comparing the two, we might determine the optimal structures and patterns that trees and neurons have acquired to serve their function or survival.
Comparing neuron and actual tree arborizations requires a measurement of their structures. For neurons, this structure is captured through single neuron labeling followed by microscopic imaging <cit.>. This often requires slicing neural tissue, staining and imaging the tissue, and then 3D reconstruction of the morphology by tracing the detected arbors using software (figure <ref>.a). The structure of trees is most often described using relatively simple whole-tree metrics such as diameter-at-breast height (DBH), tree height, height-to-crown, and crown diameter. More recent measurements from terrestrial laser scanning (TLS) have enabled much more detailed measures of whole-tree branching architecture and topology i.e. branch length, radius, angle, and even path length <cit.>. These measures are revolutionizing our understanding of both trees and neurons <cit.>. We thus finally have data to be able to quantitatively compare actual trees and neuronal arborizations.
Here, we compare the morphologies of various neuron subtypes to distinct species of trees. Neuron subtype is akin to tree species, and here we use ‘class’ to refer to both neural subtypes and tree species. Both data types are represented as geometrical graphs, which means that the shape is broken up into nodes with an associated location as well as a map of how these nodes are connected. To compare them, we need to convert the graphical structure to a feature vector. The extracted features can depend on the general structure of the neuron or tree, such as its size, on its local structures, such as the branching angles, or on the number of stems attached to the soma/root. Features help us to compare classes of neurons with each other, species of trees with each other, and a class of neurons with a species of trees. Our analysis finds a large set of morphological aspects to be shared between trees and neurons. Neurons and trees are often self-similar, i.e., their patterns scale up with their size. We define a self-similarity measurement by using the histograms of the features. In our definition, we first extract all sub-trees of a tree that start from any branching point. To avoid artifacts that may come from small sub-trees, we discard sub-trees that have fewer than ten leaves. Each sub-tree can be seen as a new sample whose cutting branching point is its root. We can then extract the same six features for each sub-tree. We find that self-similarity is stronger for trees, suggesting that there is more functionally and structurally relevant heterogeneity in neurons.
§ RESULTS
To ask how common principles and diverse functional objectives are reflected in the shapes of neurons and leaf-carrying trees, we can rely on newly emerging datasets. Advancements in neuroimaging techniques and ecology enable us to collect 3D structure samples of neurons and trees. The trees in our dataset come from different geographical locations (Gabon, Ghana, Australia, and the UK), are between ten and forty meters in height, and have a few hundred to a few thousand branching points. We selected five subtypes of neurons from a neuron morphology database (neuromorpho) with the requirement that their shape is completely reconstructed and artifacts of reconstruction are minimal (Figure <ref>) <cit.>. We used the same number of neurons and trees per class (twenty) to compare them fairly. We can thus ask if trees exhibit more self-similarity.
The apparent similarities between neuron morphologies and trees suggest there may be common quantitative rules and constraints to be discovered. Both tree structures and neuron morphologies are often described as mathematical graphs composed of straight segments and branch points <cit.>. This helps us to compare them meaningfully. We extract features such as angles at the branching points and normalize the histogram of each feature to obtain a probability distribution that describes the feature. We can measure the similarity between two samples by measuring the distances between these probability distributions. We use this metric to compare classes of neurons and trees. We find that our features distinguish samples of trees from samples of neurons. To summarize this difference, we define a self-similarity measurement for a single neuron or tree. Using this measurement over the feature set, we show that samples of trees are significantly more self-similar than neurons.
To enable a comparison of neurons and trees we need to somewhat normalize them. First, the root of a neuron is the node in the reconstruction that represents the soma (cell body). In contrast, in our tree dataset, we only observe the above-ground part of a tree. This part usually contains a long trunk followed by branches and leaves. We define the lowest point of a tree to be its root. Second, neurons have distinctive axonal and dendritic parts. Since the dendrite often has more arborization, we restrict our analysis to the dendritic part. Third, to simplify the structures, we only consider the skeleton of trees and neurons and neglect finer structures such as spine locations for neurons and leaf locations for trees. These modifications enable us to meaningfully compare actual trees and neurons.
We look at six features in our samples of neurons and trees. Three features measure local properties: angles at the branching points, lengths of segments, and contraction (a measure of local curvature) of segments (figure <ref> right). The other three consider the non-local structure of a sample: distances of nodes from the root, global angle, and directness, i.e., the ratio of geodesic to Euclidean distance (figure <ref> middle and left). The outcome of one feature is a histogram of values in a given range. We normalize a histogram such that the area under it is equal to one. To quantify a class of neurons or trees, we take the average and standard deviation of the normalized histograms of all the samples in that class. Representative histograms of a few classes and their features are shown in figure <ref>. By taking an average of the deviations, we observe that the variance of the features is higher for neurons in comparison with trees (figure <ref>). This reflects that there is more heterogeneity in the neuron classes compared to the relatively homogeneous tree structures.
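As a concrete illustration of this normalization and class-level averaging, the following minimal Python sketch computes density-normalized histograms and their per-class mean and standard deviation; the bin count, value range, and synthetic angle data are arbitrary choices, not the paper's settings.

```python
import numpy as np

def normalized_histogram(values, bins, value_range):
    """Histogram of one feature, normalized so the area under it is 1
    (np.histogram's density=True does exactly this)."""
    hist, edges = np.histogram(values, bins=bins, range=value_range,
                               density=True)
    return hist, edges

def class_histogram(samples, bins=36, value_range=(0, 180)):
    """Mean and standard deviation of the normalized histograms over all
    samples (e.g. all branch-angle lists) in one class."""
    hists = np.array([normalized_histogram(v, bins, value_range)[0]
                      for v in samples])
    return hists.mean(axis=0), hists.std(axis=0)

# Synthetic usage: twenty samples of branch angles in degrees.
angles = [np.random.uniform(0, 180, 300) for _ in range(20)]
mean_h, std_h = class_histogram(angles)
print(mean_h.sum() * (180 / 36))  # ≈ 1: area under the mean histogram
```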
To quantify the differences between neurons and trees, we look for differences in the histograms of their features. To compare the distribution of features within and between trees and neurons, we use the earth mover's distance (EMD). EMD is a distance between distributions. If the distributions are interpreted as piles of sand over a region, then the EMD is the minimum cost to move one pile onto the other (figure <ref>). We define the distance between two samples (of trees or neurons) by summing the EMD of the distributions of each of their features. By measuring this distance between samples of two classes, we can compute the similarity of the two classes.
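A minimal Python sketch of this distance, using SciPy's one-dimensional Wasserstein (earth mover's) distance directly on the raw feature values, which makes the histogram normalization implicit; the two-feature example data are synthetic.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def feature_distance(sample_a, sample_b):
    """Distance between two arborizations: sum of 1-D earth mover's
    distances (Wasserstein-1) over their per-feature value lists.

    sample_a/sample_b: dicts mapping feature name -> 1-D array of values,
    e.g. {'branch_angle': [...], 'segment_length': [...], ...}.
    """
    return sum(wasserstein_distance(sample_a[f], sample_b[f])
               for f in sample_a)

# Synthetic usage with two features:
a = {'branch_angle': np.random.uniform(20, 40, 200),
     'segment_length': np.random.exponential(1.0, 200)}
b = {'branch_angle': np.random.uniform(60, 120, 200),
     'segment_length': np.random.exponential(1.0, 200)}
print(feature_distance(a, b))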
How similar are the branching patterns of neurons and trees? One way to quantify them is to look at the angles between two outgoing stems. Branching with more than two outgoing segments is rare in both datasets. Indeed, we observe that more than 95 percent of branching points in trees have two outgoing stems. We see that the distributions of angles for neurons have a peak between 45 and 120 degrees (figure <ref>). This shows that at the branching points of neurons, two outgoing neurites grow in somewhat opposite directions. In trees, branching angles are often acute, with the peak of their distributions below 30 degrees. The peak at an acute angle shows that outgrowing segments of trees are often almost parallel to each other. Indeed, gravity and the need for illumination may force both outgoing segments to be roughly perpendicular to the ground. Therefore the angles at the branching points clearly differentiate neurons and trees.
How similar is the growth process of neurons and trees? Ideally, we would trace their developmental process. Instead, we can look at angles after growth. Since both are curves in space, we can measure how far they deviate from straight lines, i.e., their curvature. We compute the starting and ending angles relative to the straight line connecting the points. We observe that histograms of these angles are similar for both trees and neurons. While the processes through which trees and neurons grow differ, both stay close to straight. In neurons, the tips of neurites actively sense local information such as the gradients of their desired molecules <cit.>. In trees, the tip of a segment, called the meristem, actively searches for light spots <cit.>. In both trees and neurons, to sense local information, they may extend their tips and then select the one that maximizes their goal (filopodia in neurons and the shoot apical meristem in trees). Comparing local features of neurites and tree segments promises to shed light on their growth processes.
How often do we see perfectly linear outgrowth from the root node? In figure <ref> we observe that the peak in the histogram of global angles for neurons is close to 180 degrees, whereas in trees this peak is centered around 90 degrees. This means that the neurites of a neuron look more like straight lines when compared to tree segments. What we see is that trees tend to grow perpendicular to their stem, which may be good for capturing light. This in particular is a survival factor for some tropical trees, which are in a race with other adolescent trees to reach the canopy, as they are light-limited below it. Neurites of neurons, on the other hand, may best grow straight, as that shortens the overall wiring length, a criterion seen as worthy of optimization in brains <cit.>. Clearly, optimization of trees and neurons is distinct along this axis: neurons minimize length and trees maximize the area covered.
How do neurons and trees fill the surrounding space? Trees and neurons expand their neurites and segments to access the resources in space. To measure their spatial density, we count how many times they intersect with spheres centered at their roots (Sholl analysis). The radius begins at zero and continues up to the sphere that contains almost 90 percent of the total length. We select the upper limit to be 100 μm for neurons and 10 m for trees. We observe that apart from pyramidal neurons, the histogram has one peak close to the root. The histogram for pyramidal neurons is bimodal, as these neurons have distinctive basal and apical dendritic parts. In trees, the peak is relatively far from the root. Indeed, up to a few meters, the spheres intersect with trees only at the trunk. Among the five classes of trees, we observe that Wytham Woods has a bimodal shape. Dendrites of neurons are mostly concentrated around the soma to gather information from nearby neurons. For trees, the main body is often far from the root to compete with other trees in collecting sunlight.
How distant are two consecutive branching points (where no other branching point lies on the curve that connects them) in trees and neurons? To answer this question, we measure the Euclidean distance between all consecutive branching points, called branching segments, and find their histograms (figure <ref>). To make it unit-free, we divide the distances by their mean and only consider the histogram between zero and three. We find that all the tree classes have a sharp peak around 0.3, whereas for neurons the histograms are often monotonically decreasing.
Do neurons and trees show signs of arborization length minimization? All things being equal, we would expect both trees and neurons to have somewhat short connections. Indeed, the contraction feature captures the overall extra length induced. We can see that for both neurons and trees, connections tend to be relatively close to straight. This finding highlights that the minimization of distances is at least one of the criteria for which both trees and neurites are optimized.
Can we use the aforementioned features to define one central measure for comparing neurons and trees? We start with the hypothesis that trees are more self-similar than neurons. We define self-similarity by measuring the distributions of the features of a sample's sub-trees and computing the averaged distance between all sub-trees and the whole tree (figure <ref>). Lastly, we compute the self-similarity measurement for the tree classes and neuron classes by averaging the self-similarity of the samples in each class. To test our hypothesis, we compared the self-similarity of all six features for classes of neurons and trees. In figure <ref>, we found that except for contraction, all features are significantly more self-similar for trees than for neurons. This supports the conclusion that sub-trees of actual trees retain the features of the whole structure, i.e., that trees are highly self-similar. This has of course been observed for trees (and plants) more generally, going back to Da Vinci’s ‘rule’ <cit.>. Further, this result is consistent across all classes of neurons and trees, suggesting that high self-similarity is a conserved feature of most tree species.
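The per-feature self-similarity score can be sketched as follows; sub-tree extraction and the ten-leaf filter are assumed to have happened upstream, and the synthetic data merely illustrate that closely matching distributions yield a small (more self-similar) score.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def self_similarity(whole_tree_values, subtree_value_lists):
    """Self-similarity of one arborization for a single feature: the mean
    EMD between the feature values of the whole tree and of each sub-tree
    (sub-trees with fewer than ten leaves are assumed to have been
    discarded upstream). Smaller score = more self-similar."""
    d = [wasserstein_distance(whole_tree_values, sv)
         for sv in subtree_value_lists]
    return float(np.mean(d))

# Synthetic usage: a self-similar structure's sub-trees have feature
# distributions close to the whole tree's, giving a small score.
whole = np.random.normal(30, 5, 1000)                 # e.g. branch angles
subtrees = [np.random.normal(30, 5, 100) for _ in range(20)]
print(self_similarity(whole, subtrees))
```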
§ DISCUSSION
Here we asked how neurons and trees may have distinct or similar morphologies. We found that their general branch patterns, as measured by six key features, are similar, but that trees tend to branch with more acute angles, have longer segments, and have less bent segments compared to neurons. Moreover, we found that trees are more conserved in their self-similar structure, meaning that the features of their sub-trees are closer to those of the entire tree. By comparing neurons and trees we are implicitly asking questions about the principles that shape their structure.
One of the main characteristics of a growth process is its self-similarity, the degree of similarity between a small subset and the whole structure. Here we compare neurons and actual trees, two examples at extreme ends of the size spectrum, to test whether their structures are shaped self-similarly. By comparing a set of features, we show that trees exhibit a higher degree of self-similarity than neurons. This may arise from the environments in which these two types of structures develop. For example, while most trees continuously search for light in their surroundings, neurons' arbors are sensitive to chemical gradients, neuromodulator concentrations, and electrical inputs. We hope that understanding the similarities and differences between neurons and trees may enlighten our search for computational principles.
The samples used in this paper are limited to five classes of neurons (all from mice) and five species of trees (from several countries). To embrace the diversity of trees and neurons, it would be interesting to analyze more numerous and more diverse data. There is a slight concern that the methods used to reconstruct neurons and trees may affect the results <cit.>. To extract the features we also have to approximate the neurons and trees. This may lead to biases in the features. For example, local tilt angles might be noisy because the reconstruction could not perfectly measure the configuration of local branches of neurons or trees.
There is no clear dictionary between the features that we defined and their environmental correlates. For instance, some features might emerge during development. As an example, the Wytham Woods trees have a history of management, particularly coppicing: they were repeatedly pruned low down earlier in their lives and then left alone over the last few decades. That leads to a relatively short trunk followed by a very bushy crown, which is not a 'natural' shape but obviously one that is still entirely feasible for vigorous growth. This may explain the bimodal histogram of their distance from the root (figure <ref>). In another example, self-similarity informs our search for growth models, while more involved features may relate to gravity or light, the brain's surface, or information. Here we have only reported the differences; uncovering this dictionary will require further studies. We can use these findings to search for the relevant biological processes that may lead to these differences.
The approaches we have used here are well-suited to the examination of much larger datasets of these 3D structures. This in turn may allow the testing of mechanistic models of neuron and tree growth, with the aim of potentially uncovering general rules of neuron and tree structural growth and development under different environmental constraints. Some of the constraints are already under study within the field of neuroscience or ecology. Finding the similarities and differences between neurons and trees opens the door for both of these fields to share their vision and bring novel ideas from one field to another.
We have shown here a quantitative comparison of the 3D structure of neurons and trees. The motivation for this work was to test the commonly-aired assertion that there are, superficially at least, similarities between the structures of these two biological networks. If so, this may uncover common and general underlying principles of growth and development in these networks. In addition, quantifying similarities between neurons and trees opens the possibility of finding mechanistic explanations as to how these similarities arise and provide new insights into how the specific environmental and evolutionary constraints under which each has developed are manifested in their structures.
§ METHOD
§.§ Selecting neuron morphologies
We used the reconstructed neuron morphologies from the neuromorpho database (version 8.0.112). We only consider mouse morphologies since the number of neuron morphologies from mice deposited in neuromorpho is larger than for other species. To ensure that the data come from healthy animals, we only use data from the experimental condition ‘control’. We only performed the analysis on the dendritic part of neurons. To reduce the artifacts of the reconstruction methods, we only used data where the dendrite reconstruction is labeled as 'complete'. We use five classes of neurons: pyramidal, Purkinje, basket, aspiny, and ganglion, as they are representative of many other neurons in the brain. We thus have five sets of neuron morphologies that we can compare to trees.
§.§ Trees
The tree data were generated in Matthew Disney's lab using lidar imaging. Lidar captures the full 3D structure of trees (with green leaves), which can then be processed into a set of branches and segments.
§.§ Preprocessing
§.§.§ Down-sampling neurons and trees
Due to the extraction method and the accuracy used by the experimenter, the nodes of a morphology are not uniformly distributed across it. To overcome this, we process the morphologies in two steps (a minimal sketch of the procedure is given below). We first select a lower and an upper bound for the distance between a node and its parent; here, we set the lower bound to 0.5 μ m and the upper bound to 1 μ m. In the first step, we randomly select a node and, if the distance between the node and its parent is greater than the upper bound, we add an extra node in the middle of the straight line that connects them. The goal of this step is to obtain a dense morphology. We continue this step until the Euclidean distance between every node and its parent is less than the upper bound. In the second step, we randomly select a node and, if the distance between the node and its parent is lower than the lower bound, we remove the parent node. The goal of this step is to ensure that the nodes are uniformly distributed across the morphology. We continue this step until no such node-parent pairs remain. If the upper bound is twice the lower bound, it is guaranteed that the Euclidean distance between all nodes and their parents lies between the lower and upper bound.
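A minimal sketch of this two-step resampling, assuming the morphology is stored as a dictionary of 3D coordinates (coords) and a parent map (parent); both names are illustrative, not from our actual pipeline:

    import numpy as np

    LOWER, UPPER = 0.5, 1.0  # bounds from the text, in micrometers

    def resample(coords, parent):
        # Step 1: densify -- split every edge longer than UPPER at its midpoint.
        next_id = max(coords) + 1
        queue = [n for n in coords if parent[n] is not None]
        while queue:
            n = queue.pop()
            p = parent[n]
            if np.linalg.norm(coords[n] - coords[p]) > UPPER:
                coords[next_id] = (coords[n] + coords[p]) / 2.0
                parent[next_id] = p
                parent[n] = next_id
                queue += [n, next_id]  # both halves may still be too long
                next_id += 1
        # Step 2: thin -- drop a (non-root) parent closer than LOWER to its
        # child, reattaching all of its children to the grandparent.
        removed = True
        while removed:
            removed = False
            for n in list(coords):
                p = parent[n]
                if p is None or parent[p] is None:
                    continue  # never remove the root
                if np.linalg.norm(coords[n] - coords[p]) < LOWER:
                    grand = parent[p]
                    for m in coords:
                        if parent[m] == p:
                            parent[m] = grand
                    del coords[p], parent[p]
                    removed = True
                    break
        return coords, parent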
§.§ Features
We consider six morphological features to quantify the branch organization for trees and neurons. Three features quantify changes in local regions of the arbor, and the other three are global features that are measured relative to the root of the tree.
§.§.§ Branch angles
In the graphical representation of trees and neurons, each node either has no child (a terminal node), one child (an intermediate node), or more than one child (a branching point). At each branching point with two children, we calculate the branching angle as the angle between the vectors connecting the branching node to its children.
§.§.§ Direction change
This feature quantifies the local straightness of segments of trees or neurons by calculating the angle between two vectors: the vector connecting the node to its parent and the vector connecting the node to its child. This feature is only defined for nodes with one child. If the segment is locally a straight line at the node, this value is 180 degrees.
§.§.§ Global angles
Global angles measure how straight the segments of the neuron grow away from the root. The angle is computed for each node between the vector connecting the node to its parent and the vector connecting the node to the root. If a tree has a tendency to explore the surrounding space, this angle is expected to be obtuse at many nodes. A sketch computing the three angle features is given below.
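A sketch of the three angle features, reusing the coords/parent representation above; the exact vector orientation for the global angle is our assumption, as conventions differ:

    import numpy as np

    def angle_deg(u, v):
        # Angle between two vectors, in degrees.
        c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    # Branching angle at a node b with exactly two children c1, c2:
    #   angle_deg(coords[c1] - coords[b], coords[c2] - coords[b])
    # Direction change at an intermediate node n with parent p and child c
    # (180 degrees for a locally straight segment):
    #   angle_deg(coords[p] - coords[n], coords[c] - coords[n])
    # Global angle at a node n with parent p, relative to the root r:
    #   angle_deg(coords[n] - coords[p], coords[r] - coords[n])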
§.§.§ Length of segments
Two branching points are called consecutive if the shortest path between them does not contain any other branching point. We measure the Euclidean distance between all pairs of consecutive branching points and compute its histogram. To make this histogram scale-free, we divide the distances by their mean.
§.§.§ Distance from root
This feature is computed by conceptually drawing concentric spheres around the root (the cell body, in neurons) at incrementally increasing radii and counting the number of times each sphere crosses a neuritic segment. The number of intersections is recorded as a function of radial distance from the root, giving a quantitative representation of how neurite density varies spatially.
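A sketch of this crossing count, assuming an edge crosses radius r whenever its two endpoint distances to the root straddle r:

    import numpy as np

    def crossings(coords, parent, radii):
        root = next(n for n in coords if parent[n] is None)
        dist = {n: np.linalg.norm(coords[n] - coords[root]) for n in coords}
        return [sum(1 for n in coords
                    if parent[n] is not None
                    and min(dist[n], dist[parent[n]]) <= r < max(dist[n], dist[parent[n]]))
                for r in radii]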
§.§.§ Contraction
For each node, the shortest path through the arbor connecting it to the root (the soma, in neurons) is usually close to a straight line. To make this concrete, for each node we calculate the ratio of its shortest path through the arbor to the root divided by the Euclidean distance between the node and the root. Subtracting one from this ratio and taking the mean square over all nodes gives the neuronal/Euclidean ratio.
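A sketch of this contraction measure in the same representation:

    import numpy as np

    def contraction(coords, parent):
        root = next(n for n in coords if parent[n] is None)
        path = {root: 0.0}  # path length along the arbor back to the root

        def path_len(n):
            if n not in path:
                path[n] = path_len(parent[n]) + np.linalg.norm(coords[n] - coords[parent[n]])
            return path[n]

        vals = []
        for n in coords:
            if n == root:
                continue
            eu = np.linalg.norm(coords[n] - coords[root])
            if eu > 0:
                vals.append((path_len(n) / eu - 1.0) ** 2)
        return float(np.mean(vals))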
§.§.§ Measuring distances between features
To measure the distance between two features, we use the earth mover's distance (EMD). In statistics, the EMD is a measure of the distance between two probability distributions over a region. Informally, if the distributions are interpreted as two different ways of piling up a certain amount of dirt over a region, the EMD is the minimum cost of turning one pile into the other, where the cost is assumed to be the amount of dirt moved times the distance by which it is moved. Note that all six features used here are histograms and can therefore be treated as distributions.
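For 1D histograms the EMD coincides with the 1-Wasserstein distance, so a sketch can rely on scipy (the bin-centre and count names are illustrative):

    from scipy.stats import wasserstein_distance

    def emd(h1, h2, centres):
        # h1, h2: histogram counts over common bin centres
        return wasserstein_distance(centres, centres,
                                    u_weights=h1, v_weights=h2)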
§.§ Self-similarity
In mathematics, a self-similar object is exactly or approximately similar to a part of itself (i.e., the whole has the same shape as one or more of its parts). Many objects in the real world, such as coastlines, are statistically self-similar: parts of them show the same statistical properties at many scales. We define a self-similarity measurement using the histograms of the features. In our definition, we first extract all subtrees of a tree that start from any possible branching point (figure <ref>). To avoid artifacts that may come from small subtrees, we discard subtrees that have fewer than ten leaves. Each subtree can be seen as a new sample whose cutting branching point is its root; we can therefore extract the six aforementioned features from it. In figure <ref> two samples are shown (one tree and one neuron) with features of a few of their subtrees. We then compute the distance between the features of all pairs of subtrees. By taking the average of these distances for one feature, we can compare the tree with its subtrees. Finally, by repeating this process for all features and taking their average, we can define the self-similarity of a neuron or a tree.
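A sketch of the per-feature step, reusing the emd helper above; subtree_hists holds one histogram per subtree with at least ten leaves:

    import itertools
    import numpy as np

    def feature_self_similarity(subtree_hists, centres):
        # Average pairwise EMD between the feature histograms of all subtrees;
        # lower values indicate stronger self-similarity for this feature.
        pairs = itertools.combinations(subtree_hists, 2)
        return float(np.mean([emd(a, b, centres) for a, b in pairs]))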
|
http://arxiv.org/abs/2307.02398v1
|
20230705161451
|
A Versatile Hub Model For Efficient Information Propagation And Feature Selection
|
[
"Zhaoze Wang",
"Junsong Wang"
] |
cs.LG
|
[
"cs.LG",
"cs.SI",
"q-bio.NC"
] |
Hub structure, characterized by a few highly interconnected nodes surrounded by a larger number of nodes with fewer connections, is a prominent topological feature of biological brains, contributing to efficient information transfer and cognitive processing across various species. In this paper, a mathematical model of hub structure is presented. The proposed method is versatile and can be broadly applied to both computational neuroscience and Recurrent Neural Networks (RNNs) research. We employ the Echo State Network (ESN) as a means to investigate the mechanistic underpinnings of hub structures. Our findings demonstrate a substantial enhancement in performance upon incorporating the hub structure. Through comprehensive mechanistic analyses, we show that the hub structure improves model performance by facilitating efficient information processing and better feature extraction.
§ INTRODUCTION
Topology plays a crucial role in determining the dynamics of both biological neural networks (BNNs) and artificial neural networks (ANNs). For ANNs, the topology and weight distribution during initialization are critical factors in determining the speed of convergence and the final states of the network <cit.>. In the case of BNNs, it has been well-established that both functional and anatomical brain networks exhibit modularity <cit.>, small-worldness <cit.>, scale-free or log-normal degree distribution <cit.>, and hub structures <cit.>. Research in network neuroscience has demonstrated a strong correlation between cognitive functioning <cit.>, information integration and propagation <cit.>, and cognitive disorders <cit.> with the topology of biological brains. These findings underscore the significance of topology in understanding and modeling neural networks.
Recent advancements in computational neuroscience have frequently employed RNNs as a means to study biological brains <cit.>. Previous research has harnessed the capabilities of RNNs to investigate the emergence of spatial navigation <cit.>, decision-making, neuro-coding <cit.>, and sensorimotor learning <cit.>. Studies have also been proposed that load functional connectivity matrices into RNNs to study the similarities as well as differences between ANNs and BNNs <cit.>. These in-silico simulations allow for a precise examination of the emergence of neuronal activation patterns and wiring patterns, which are difficult to obtain through in-vivo experiments. Therefore, training RNNs to perform cognitive tasks may provide insights into neuroscience. The adoption of biologically observed patterns may also improve the development of ANNs, as these features could be the result of natural selection and represent genetically optimized solutions.
When simulating BNNs using ANNs, a three-layer RNN is typically used <cit.>. The first layer is a linear layer that serves to simulate the input signal to the simulated brain region. This input layer can project signals either to the entire hidden layer or to a subset of hidden layer neurons, depending on the experimental settings and objectives. The second layer is a randomly connected hidden layer, which serves as a simplified representation of the brain. It is often generated as a randomly connected sparse network, with weights following a Gaussian or uniform distribution. The last layer, denoted the readout layer, maps the recurrent states to the target labels. Connection weights in the three layers can be updated using gradient descent <cit.>, intrinsic plasticity rules <cit.>, synaptic plasticity rules <cit.>, or remain unchanged <cit.>.
Traditional RNN configurations, despite their simplicity and ease of implementation, may fall short in capturing the full range of biological characteristics of brain systems. It has been recognized that the RNN architecture, particularly the hidden layer, significantly impacts convergence and should be carefully considered when modeling the brain. To achieve greater biological realism, several adaptations have been proposed. Specifically, <cit.> and <cit.> suggested balancing excitatory and inhibitory synapses in the hidden layer, while <cit.> adopted a modular topology for the hidden layer.
In this work, we introduce the hub structure into the initialization of the hidden layer. Hub structure, also known as rich-club organization, denotes the topological attribute that biological brains contain many low-degree peripheral neurons <cit.> and a few highly connected hubs for efficient information processing and propagation <cit.>. Our proposed hub structure generation method is flexible and broadly applicable to various RNNs, with a mathematical formalization that can also fit experimental data. We then apply the proposed hub structure to the echo state network (ESN) <cit.>, a special type of RNN in the reservoir computing (RC) paradigm whose synaptic weights remain unchanged during training, which emphasizes the topological differences. Upon integrating our proposed hub model into the ESN, our model demonstrates substantial improvements in prediction accuracy on several time-series forecasting tasks and a classification task. We provide extensive mechanistic analysis and identify that hub neurons efficiently regulate the network states. Alongside this, our hub structure demonstrates enhanced feature extraction capabilities. Finally, a preference for peripheral neurons over hub neurons during prediction is observed, denoting the emergence of a functional hierarchy after adopting the hub structure.
§ THE HUB MODEL
§.§ Motivation
The brain topology is often characterized by its modular organization, small-world characteristics, log-normally distributed nodal degrees, and the existence of neuronal hubs. These features are interlinked, each underscoring different facets of the brain's organization. The hub structure hints at modularity, with each hub potentially acting as the central node for its associated peripheral neurons. It also implies a long-tailed distribution of nodal degrees, as it requires only a few highly connected neurons, resulting in a log-normal degree distribution. Furthermore, hub neurons themselves form a small network, which significantly reduces the average path length, indicative of small-world properties. Therefore, in our research, we focus on hub structures as they often imply the presence of the other three properties. Additionally, recent neuroscience and cognitive research suggests that this rich-club organization correlates with individuals' performance on several cognitive tasks <cit.>.
Several factors have been proposed to account for the emergence of hub structures in networks. Wiring cost considerations, which balance network efficiency against metabolic expense and spatial volume, are known to shape hub structures <cit.>. Neurogenetic elements, encompassing genetic predispositions, developmental processes, and neuronal migration and differentiation, are also crucial in hub neuron formation <cit.>. Furthermore, Hebbian learning principles, encapsulated by the postulate "neurons that fire together wire together," may foster hub structure development by reinforcing commonly used connections, further facilitating the creation of highly connected hub neurons.
§.§ Hub generation
In this study, we consider neurogenetic factors <cit.> and wiring cost <cit.> as the main contributors to the emergence of hub structure during network initialization. Our objective is to investigate the impact of hub topology on network dynamics. Consequently, we have excluded Hebbian factors from the analysis, as their inclusion would necessitate a more intricate discussion of synaptic adaptations. Our proposed method involves first constructing a densely connected weight matrix W. A pruning probability matrix P_prune is then constructed based on the constraining factors, with each entry p_ij representing the likelihood that the corresponding edge e_ij in W is removed; edges are deleted according to P_prune until W meets the pre-defined sparsity level.
The wiring cost primarily imposes constraints on neuron density, synapse distance, and the cross-section diameters of axonal projections <cit.>. In our proposed model, the distance factor is considered and termed the distance constraint (DC). Constraints imposed by neurogenesis factors are denoted neurogenetic constraints (NC). Lastly, we introduce a regularization term R to allow for a smooth transition from a hub network to a random network.
To construct DC, each neuron n_i | i ∈ N, where N is the network size, is randomly assigned a 3D coordinate Q_i = Q(x, y, z) | x, y, z ∼𝒩(0, 1) at initialization (as depicted in Fig. 1a). The DC matrix C_d is then constructed from the neuron coordinates, with each entry dc_ij∈ C_d being the Euclidean distance between nodes i and j, as shown in Eq.1.
dc_ij = d(Q_i, Q_j) | ∀ dc_ij∈ C_d; i,j ∈ N
nc_ij = i + j | ∀ nc_ij∈ C_n; i,j ∈ N
p_ij = λ_dc dc_ij^α + λ_nc nc_ij^β + λ_reg r_ij/∑_k, l ∈ N (λ_dc dc_kl^α + λ_nc nc_kl^β + λ_reg r_kl) | ∀ p_ij∈ P_prune; i,j ∈ N
We consider the NC to encompass a confluence of genetic influences, developmental trajectories, neuronal migration patterns, and neuron generation timing. In an effort to construct a simplified yet representative assumption, we define each entry nc_ij in C_n to be the sum of the indices i and j, as indicated in Eq.2. We acknowledge that this assumption simplifies the complex nature of the underlying processes. However, this construction method for NC remains both efficient and effective in generating the hub structure. Additionally, it fosters a symmetrical connection pattern for both pre-synaptic and post-synaptic weights.
Finally, α and β are exponential coefficients (EC) that raise the entries of C_d and C_n to a power so as to fit a log-normal nodal distribution (Fig. 1b). λ_dc and λ_nc are scaling coefficients (SC) for C_d and C_n, while λ_reg is the SC for the regularization term R. Entries r_ij∈ R are random numbers following the same distribution as the entries in W, such that increasing λ_reg gradually transforms the hub network into a random network. As demonstrated in Eq.3, both the SC and EC are applied to the constraints, and the results are combined to generate the pruning probability matrix P_prune. The entries in P_prune are normalized such that they sum to 1 (Fig. 1c). P_prune is subsequently used to prune the densely connected weight matrix W to the pre-defined sparsity.
Fig. 1a shows the impact of DC that the centered neurons are more likely to be connected and therefore have higher nodal degrees, which is computed as the sum of in-degree and out-degree. Fig. 1c illustrates the normalized P_prune, where DC contributes to lattice-like patterns, and the NC contributes to the gradient pattern along the diagonal. Brighter colors indicate a higher probability of deletion. While NC fosters a clear, scale-free distributed pruning probability along the diagonal, a potential drawback is that it might also introduce unconnected neurons by overly emphasizing the deletion of edges that connect peripheral neurons. DC alleviates this unconnected neuron problem by encouraging connections between peripheral neurons and their nearest neighbors (Suppl. 1).
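A minimal sketch of this construction (Eqs. 1-3); the absolute value on the regularizer is our assumption, added to keep the pruning probabilities non-negative:

    import numpy as np

    rng = np.random.default_rng(0)

    def hub_prune(W, sparsity, lam_dc=0.5, lam_nc=0.5, lam_reg=0.0, alpha=2, beta=2):
        N = W.shape[0]
        Q = rng.standard_normal((N, 3))                              # neuron coordinates
        dc = np.linalg.norm(Q[:, None, :] - Q[None, :, :], axis=-1)  # Eq. 1
        idx = np.arange(N)
        nc = idx[:, None] + idx[None, :]                             # Eq. 2
        reg = np.abs(rng.normal(0.0, np.sqrt(1 / 3), (N, N)))
        p = lam_dc * dc ** alpha + lam_nc * nc ** beta + lam_reg * reg
        p = p / p.sum()                                              # Eq. 3
        W = W.copy()
        n_remove = int(round((1 - sparsity) * N * N))
        drop = rng.choice(N * N, size=n_remove, replace=False, p=p.ravel())
        W.ravel()[drop] = 0.0                                        # prune to sparsity
        return W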
It is pertinent to mention that this model is not exclusively designed for RNN initialization; it may also function as a theoretical framework to accommodate experimental data. It could potentially be fitted to experimentally collected functional connectivity matrices by adjusting the SCs and ECs. This adaptability makes the model a potential framework for generalizing and simplifying the complexity of experimentally collected data.
§.§ Topological features and parameter choices
The incorporation of a hub structure precipitates numerous topological changes within the network. We use network heterogeneity, modularity, and the clustering coefficient under different network sparsities to assess the topological differences after adopting the hub structure (Fig. 1d). We use the coefficient of variation (CV) of the nodal degree to quantify network heterogeneity <cit.> (Suppl. 3.1). The network modularity is measured by first partitioning the network via the Louvain method <cit.>, followed by computation using Newman's community detection algorithm <cit.> (Suppl. 3.2). The clustering coefficient <cit.> is approximated by converting the network into a weighted bidirectional graph (Suppl. 3.3). Our findings reveal that the hub network manifests a significantly elevated CV compared to its random counterpart, with NC contributing more to heterogeneity than DC (Fig. 1d, upper left). Furthermore, we noted a surge in network modularity along with a subtle augmentation in the clustering coefficient. This suggests that the introduction of a hub structure encourages the development of sub-networks and network heterogeneity.
On top of providing a versatile mathematical model of hub structure, we aim for this study to provide a mechanistic explanation of the hub structure. Therefore, we have opted for the simplest parameters wherever possible throughout this paper, in order to streamline the discussion and highlight the distinctions between a standard random network and a hub network. We use the most basic ECs, α = β = 2, as they provide a suitable balance between model complexity and interpretability. We have confirmed that the selection of EC does not significantly influence the network's topological properties, nor does it affect our final conclusions (refer to Supplementary Material 2).
While the choice of EC does not substantially impact the network's topological properties, we observed notable differences between various SC values. It is important to note that λ_dc, λ_nc, and λ_reg are scale-invariant, as they will subsequently be normalized; their values only represent their relative importance with respect to one another. As such, we let λ_nc + λ_dc + λ_reg = 1. As we aim to contrast random and hub networks, λ_reg is set to 0 throughout this paper so that the network demonstrates the strongest hub features. Meanwhile, the distance constraint (DC) decreases heterogeneity and the clustering coefficient while increasing modularity (Fig. 1d). To streamline the discussion, we chose λ_dc = λ_nc = 0.5 for the rest of the paper.
§ HUB ECHO STATE NETWORK
§.§ Echo State Network
The Echo State Network <cit.> is adopted to investigate the effects of hub structures on network performance, dynamics, and mechanisms. The architecture has been frequently used in the investigation of machine learning and neuroscience <cit.>. The input and hidden layers of ESNs remain unchanged during training, serving only to map input signals to a higher dimension before being translated to output by the linear regression readout layer. By maintaining a consistent synaptic configuration, we ensure the preservation of the hidden layer's topological features. This enables a more accurate attribution of performance improvements and mechanisms to the network topology rather than to synaptic updates.
s(t+1) = f(W^in u(t+1) + W^rec s(t))
The update function of ESN is shown in Eq.4, where s(t) is the hidden state at time t, u(t) is the input for time t; f( ·) is the non-linear activation function, and W^in and W^rec are weight matrices for input and hidden layer, respectively. During training, the input u(t) is sequentially presented to the ESN. The ESN hidden states are stacked into a state matrix S=[s(0);s(1) ...; s(T)]. At the end of the training epoch, the readout layer computes the output weights using the closed-form solution of linear regression, (S^T S)^-1S^TY, where Y is the label vector. During testing, ESN behaves like a standard RNN that maps input to hidden states and then makes predictions at each timestep.
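A sketch of the state update (Eq. 4) and the closed-form readout; the pseudoinverse is our numerical-safety substitute for the plain inverse in the normal equation:

    import numpy as np

    def esn_states(W_in, W_rec, U):
        # Collect hidden states for an input sequence U of shape (T, d_in); Eq. 4.
        s = np.zeros(W_rec.shape[0])
        S = np.empty((len(U), W_rec.shape[0]))
        for t, u in enumerate(U):
            s = np.tanh(W_in @ u + W_rec @ s)
            S[t] = s
        return S

    def fit_readout(S, Y):
        # Closed-form linear regression, (S^T S)^{-1} S^T Y.
        return np.linalg.pinv(S.T @ S) @ S.T @ Y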
§.§ HubESN
Hub neurons, serving as central nodes within neuronal modules, are recognized in neuroscience research as crucial elements enabling the efficient dissemination of information from various brain regions. In alignment with previous neuroscience findings that hub neurons facilitate communication across different brain regions, we assume that the input layer simulates the signals projected from other brain regions. The input layer cannot be fully connected to the hidden layer, as this would negate the concept of "information transfer between two brain regions". We assume only 10% of the neurons receive input from other brain regions, denoted r_sig=0.1.
We denote the ESN incorporating the hub structure as HubESN. For the standard ESN, the network nodal degrees are homogeneous; signals are therefore injected randomly into 10% of the nodes. For HubESN, we propose two methods of injecting signals: similar to ESN, inject the signal randomly into 10% of the neurons; or inject the signal into the top 10% of neurons with the highest number of connections. We denote the first method HubESN-rand, and the latter HubESN. As hub neurons are characterized by having more connections than other neurons, we assume signals are injected into hub neurons in the second setting. HubESN is illustrated in Fig. 1e, where red arrows indicate where signals are injected, red nodes represent hub neurons, and blue nodes represent peripheral neurons. The complete implementation procedure and parameter choices for HubESN, HubESN-rand, and ESN are included in Suppl. 5.
For the weights w_ij∈ W^rec of ESN, HubESN, and HubESN-rand, w_ij∼𝒩(0, 1/3). We opt for tanh( · ) as our activation function so that the hidden states oscillate within (-1, 1). An additional imperative parameter for the ESN is the spectral radius, which scales the maximum eigenvalue of W^rec. As prior research identifies that a spectral radius slightly less than 1 is required to guarantee the echo state property <cit.>, we accordingly set it to 0.9. The network sparsity is set at 0.2, selected because this value yields the most considerable disparity between the hub network and a random network (Fig. 1d). We use the spectral radius of 0.9 and sparsity of 0.2 consistently throughout this paper. Our conclusions remain unaffected by these parameter choices (Suppl. 4).
§ EXPERIMENTS
We benchmarked HubESN and the standard ESN using three standard tasks: the Mackey-Glass <cit.> prediction task, the nonlinear autoregressive moving average (NARMA10) <cit.> prediction task, and the MNIST <cit.> written digit classification task. All three are standard benchmarks for evaluating ESN performance.
dx_m/dt = β_m x_m(t-τ)/(1+(x_m(t-τ))^k) - γ_m x_m(t)
x_n(t) = α_n x_n(t-1) + β_n x_n(t-1) ∑_i=1^l x_n(t-i) + γ_n u_n(t-l) u_n(t-1) + δ_n
The Mackey-Glass equation is a delayed differential equation exhibiting complex nonlinear dynamics, initially introduced as a model for physiological control systems with time delays. We employ this task to evaluate the model's capacity for retaining long memory and leveraging non-linearity to achieve accurate predictions. The Mackey-Glass time series is updated as in Eq. 5. In alignment with previous literature <cit.>, the parameters τ, γ_m, β_m, k are set to 17, 0.1, 0.2, and 10, respectively. dt is set to 1 to discretize the equation. At each time step t, x_m(t) is presented to the network, which updates its hidden state to s(t+1) and predicts x_m(t+1) using the information it has encoded.
The NARMA10 model, a tenth-order nonlinear difference equation, is employed in our study to evaluate the model's proficiency in capturing complex non-linear dependencies. In conjunction with the Mackey-Glass series, it serves to determine the versatility of the proposed model in predicting different types of time series. The NARMA10 task is defined as in Eq. 6, with l=10, α_n=0.3, β_n=0.05, γ_n=1.5, δ_n=0.1, in consistency with previous research <cit.>. An independent random variable u(t) ∈ [0, 0.5] is presented at each time step, and predictions are made based on the previous l = 10 inputs.
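Sketches of the two series generators (Eqs. 5 and 6); the constant initial history for Mackey-Glass and the random seed are our assumptions:

    import numpy as np

    def mackey_glass(T, tau=17, beta=0.2, gamma=0.1, k=10, x0=1.2):
        # Discretized (dt = 1) Mackey-Glass series, Eq. 5, scaled to (-1, 1).
        x = np.full(T + tau, x0)
        for t in range(tau, T + tau - 1):
            x[t + 1] = x[t] + beta * x[t - tau] / (1 + x[t - tau] ** k) - gamma * x[t]
        x = x[tau:]
        return 2 * (x - x.min()) / (x.max() - x.min()) - 1

    def narma10(T, l=10, a=0.3, b=0.05, g=1.5, d=0.1, seed=0):
        # NARMA10 series, Eq. 6, driven by iid inputs u(t) in [0, 0.5].
        rng = np.random.default_rng(seed)
        u = rng.uniform(0.0, 0.5, size=T)
        x = np.zeros(T)
        for t in range(l, T):
            x[t] = (a * x[t - 1]
                    + b * x[t - 1] * x[t - l:t].sum()
                    + g * u[t - l] * u[t - 1]
                    + d)
        return u, x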
Finally, the MNIST task is employed to test whether the model's performance improvements generalize to non-time-series tasks. The network predicts labels of written digits by scanning each 28x28 image column-wise. During training, each image is input into the network over 28 time steps, with the one-hot label used for training the model at all 28 steps. During testing, the final label is determined by majority voting based on the network's predictions.
Unlike prior studies that use a fixed training sample size for ESN benchmarking, this study assesses models under various training sample sizes to allow for a more comprehensive evaluation of convergence patterns. We train models on differently-sized datasets and subsequently test on a fixed-size test set. The sizes of the training and test sets are denoted n_train and n_test in this paper. For the Mackey-Glass and NARMA10 tasks, models are trained on n_train steps of signal and evaluated on their ability to predict the subsequent 2000 steps, with performance measured by root mean square error (RMSE). For the MNIST task, we randomly select n_train images for training and measure classification accuracy on a test set of 3000 images.
Due to the sensitivity of ESN to random initialization, each trial setting is repeated 100 times for Mackey-Glass and NARMA10, and 10 times for MNIST, to ensure measurement precision. To mitigate potential bias, the same dataset is provided to ESN, HubESN, and HubESN-rand models in each trial (refer to Suppl. 6 for replication specifics).
§ RESULTS
§.§ Performance
Fig. 2 presents the performance of ESN, HubESN, and HubESN-rand. We use a network size of N=1000 for the Mackey-Glass and NARMA10 prediction tasks and a smaller network of N=500 for the MNIST task to reduce training time. The network testing loss for the Mackey-Glass and NARMA10 prediction tasks decreases exponentially as n_train increases; therefore, in the lower plots of Fig. 2a and 2b, HubESN and HubESN-rand performances are presented as ratios with respect to ESN performance to facilitate a clearer visual differentiation of the ESN and HubESN performance trajectories.
On Mackey-Glass and NARMA10, when the number of training samples is small, both HubESN and HubESN-rand significantly outperform ESN, with HubESN-rand performing slightly worse than HubESN. On Mackey-Glass prediction, within the training set size range of 600-2000, HubESN reduces RMSE by more than 37%; at n_train = 1200, HubESN reduces RMSE by 57%. As the training set increases, ESN performance gradually approaches that of HubESN and HubESN-rand. Note, however, that the ESN architecture does not update weights in the hidden and input layers; the improved performance is solely attributable to the readout layer. As the ESN readout layer predicts using linear regression, better performance at the same training set size itself suggests a more linearly separable hidden state.
Although ESN performance gradually approaches HubESN performance on the time-series tasks, this does not apply to the classification task. On MNIST, we benchmarked the models over a wide range of training set sizes. Increasing the training set size did not bring ESN performance up to HubESN performance. Both ESN and HubESN performance stopped increasing once n_train exceeded 12500, while the classification accuracy of HubESN remained greater than that of ESN for all training set sizes.
The size of an Echo State Network also impacts its performance. While larger networks can theoretically accommodate more intricate dynamics, they also amplify the complexity of the hidden states, complicating readout. In Fig. 2e, we depict model performance differences on Mackey-Glass prediction for network sizes N=500, N=750, and N=1000. As increasing the network size introduces more complex reservoir dynamics, the readout layer requires a larger training set to converge. Therefore, the superiority of HubESN becomes more pronounced at larger network sizes.
§.§ Hub neurons for efficient information propagation
In the areas highlighted in green in Figures 2a and 2b, as well as throughout Figure 2c, the HubESN model outperforms both HubESN-rand and ESN. The sole distinction between HubESN and HubESN-rand lies in where the hidden layer receives input. We applied t-Distributed Stochastic Neighbor Embedding (t-SNE) <cit.> and Uniform Manifold Approximation and Projection (UMAP) <cit.> to visualize the high-dimensional hidden layer states over time and investigate the mechanistic differences. The Mackey-Glass time series is used, as it is structured in higher dimensions. The hidden layer states were recorded over time, and the dimensionality reduction algorithms were applied to reduce the hidden state dimensions while preserving their structure in lower dimensions. The neural manifold trajectories of the input neurons alone and of the whole network are visualized in Fig. 3a; the line segments connect the network states to their corresponding input neuron states.
In the ESN and HubESN-rand models, the overall network trajectories do not align closely with the trajectories of the input neurons, whilst the HubESN model shows a close correspondence between the network and input neuron state trajectories (Fig. 3a). This close alignment between the neural manifold trajectories of input neurons and the entire network signifies a more cohesive system within HubESN compared to HubESN-rand and ESN, hence implying an elevated significance of hub neurons in transmitting information throughout the network.
We further group the neurons into two subsets, those receiving direct input from the input layer (input neurons) and those that do not (non-input neurons), to investigate their roles. The models are run (without fitting the readout layer) on a smaller (n_train=1000) and a larger (n_train=4000) Mackey-Glass training set. The hidden layer states are grouped into input neuron states and non-input neuron states. We subsequently fit a linear regression model to the input neuron states and to the non-input neuron states separately, and use each to predict the training and test sets.
In low n_train conditions (Fig. 3b, lower and upper left), input neurons across ESN, HubESN, and HubESN-rand exhibit similar prediction capabilities, while all models display higher testing loss than training loss (observe the y-axis scale difference). Conversely, HubESN outperforms in predicting through non-input neurons on both the training and testing sets. As HubESN differs from HubESN-rand only in its input injection location, and its network trajectory closely follows the input neuron trajectories, we contend that HubESN improves performance through efficient information distribution. This efficiency elicits a quicker response from a greater number of neurons, fostering rich feature generation and improved predictions. As the vast majority of neurons are non-input (90% against 10% input neurons), their effective use significantly augments overall performance.
When n_train increases (Fig. 3b), non-input neurons show a marked improvement in prediction capabilities, implying their complex states require larger training samples to be accurately captured. Conversely, input neurons exhibit less prominent improvement. This trend underscores the importance of efficient information distribution from input neurons throughout the network.
This claim is further supported by the MNIST performance, where HubESN consistently outperforms ESN and HubESN-rand regardless of training set size. The MNIST task requires the model to produce as many correct labels as possible within a given time (n=28, for 28 columns). This delayed time dependency cannot be remedied by increasing the training set size, implying a greater advantage of the hub structure on non-time-series tasks where delayed feature representations cannot be learned.
§.§ Emergence of rich features and functional hierarchy
The efficient information distribution of hub neurons explains the superiority of HubESN over the other models at low n_train levels and on the MNIST classification task. However, this does not fully account for the performance improvement of HubESN-rand across a broad range of n_train values (Fig. 2a and 2b). We contend that the hub structure itself also provides better feature extraction. This is supported by the fact that non-input neurons in HubESN-rand also have a lower prediction RMSE than in ESN for both small and large n_train values (Fig. 3b, lower panels). As HubESN-rand differs from ESN only in its hidden layer topology, this points to an inherent feature extraction and generalization ability of the hub structure.
Existing neuroscience research also suggests a functional difference between hub neurons and peripheral neurons: while hub neurons generally serve information propagation and integration, peripheral neurons are typically specialized for certain types of processing. With the proposed hub structure, we examined whether such functional specialization also emerges in HubESN and HubESN-rand during prediction.
The readout layer of the ESN is a linear regression that maps the hidden states to the output, so the absolute readout weight can serve as an indicator of a neuron's importance to the prediction. We take into account the fact that neurons can oscillate at different magnitudes; that is, neurons with smaller absolute oscillation magnitudes may have larger weights in the readout layer. We therefore normalized the weights using the average absolute magnitude across time to ensure an accurate reflection of a neuron's importance to the prediction. We stack the neuron degrees and normalized weights into two vectors and compute their correlation. Fig. 3c reveals a negative correlation between degree and weight in both HubESN and HubESN-rand, indicating a preference for peripheral neurons in predictions. This pattern is not task-specific and is observed across all tasks (Suppl. 7). The preference for peripheral neurons in prediction indicates a role difference between hub neurons and peripheral neurons. Moreover, the fact that this negative correlation exists in both HubESN and HubESN-rand suggests that this functional difference between nodes of different degrees self-emerges from the hub structure. Finally, Fig. 3c also shows that the neurons receiving input (colored in red) have lower importance in prediction, as they have smaller normalized weights, corroborating the results in Fig. 3b that input neurons have higher RMSE than non-input neurons when training samples are sufficient.
§ DISCUSSION
In this work, we present a biologically plausible hub model that is both versatile and amenable to control over various constraining factors of hub structure and nodal degree distributions. The flexibility of the proposed hub model allows for seamless transitions between different degrees of heterogeneity, modularity, and clustering coefficients. Our hub model can be readily applied to a range of RNNs to create biologically realistic RNNs, fitted onto experimentally collected functional connectivity maps, and used to uncover underlying mechanisms of brain function.
Moreover, as a machine learning model, our HubESN demonstrated significant performance improvements across numerous tasks (Fig. 2a-c), outperforming the ESN when training data was limited and in the classification task. Through extensive experimental analysis, we believe it is fair to conclude that the improved performance is primarily attributable to the efficient information propagation of hub neurons and the better feature extraction ability of the hub structure. The preference for peripheral neurons during prediction aligns with neuroscience insights that peripheral neurons tend to have task-specific responses.
While we chose the ESN architecture to ease the process of analysis, its unchanged synaptic weights also limited deeper mechanistic investigations. For instance, in addition to information propagation, hub neurons may also function to integrate signals from peripheral neurons and play a critical role in multisensory integration. As the ESN hidden layer synaptic weights are unchanged over time, it would be challenging for the simple linear regression layer to read out the integrated signal. Additionally, hub structure is identified in a wide range of brain areas and may serve different functions depending on its anatomical location. Future research could apply our hub model to more advanced RNN structures, training on realistic coherence signals designed for specific cognitive tasks. Moreover, although our conclusions are unaffected by the selected EC and SC values, comprehensive future studies could cover a wider parameter space.
Overall, while the functioning mechanisms of both BNNs and ANNs remain an active field of research, our hub model can serve as an efficient tool for bridging BNNs with ANNs. Our HubESN further validates the advantages of hub structures and hub neurons within ANNs.
We would like to acknowledge the assistance of OpenAI's language model, ChatGPT, which helped in paraphrasing and improving the clarity of this paper's presentation.
§ THE IMPACT OF DC ON UNCONNECTED NEURONS
We define neurogenetic constraints (NC) as Eq. 1. That is, the probability of an edge being deleted depends on the indices of the two neurons it connects.
nc_ij = i + j | ∀ nc_ij∈ C_n; i,j ∈ N
However, as it will subsequently be raised to the power β to fit a log-normal distribution, this setting may over-emphasize deleting edges between higher-index neurons. When β is high and network sparsity is low, NC may result in some neurons being unconnected from the network. This is biologically unrealistic and will also degrade network performance, as the network cannot use all of its neurons for prediction.
On the other hand, DC deletes edges based on the Euclidean distance between two neurons. It tends to preserve the edges that connect neurons to their closest neighbors, thereby alleviating the unconnected-neuron problem. As shown in Suppl. Fig. 1, increasing the value of λ_dc, i.e., increasing the relative emphasis on the DC constraint, lowers the number of unconnected neurons.
§ THE IMPACT OF EC
As highlighted in the paper, implementing the hub structure noticeably improves test performance. This improvement is particularly significant with smaller training sets, while ESN performance slowly catches up with HubESN as the size of n_train increases. To confirm that the choice of EC does not affect this trend, we trained HubESN with both smaller (n_train = 1000-1200) and larger (n_train = 3000-3200) training sets on the Mackey-Glass time series prediction. The network performance is subsequently assessed on a n_test=2000 testing set using RMSE. Suppl. Fig. 2 confirms that the choice of EC does not affect this pattern.
§ HETEROGENEITY, MODULARITY, AND CLUSTERING COEFFICIENT OF HUB NETWORK
§.§ Heterogeneity
We use the coefficient of variation (CV) of the nodal degree to quantify network heterogeneity [35, 36]. CV measures the variability of the node degrees and is defined as CV = σ_deg/μ_deg, where σ_deg is the standard deviation of the nodal degree and μ_deg is the mean of the nodal degree. The node degree is defined as the sum of in-degree and out-degree.
§.§ Modularity
The modularity is computed by first splitting the network into groups using the Louvain network partition method [37], then measured using the Girvan-Newman modularity algorithm [38]. We use the default settings of a standard Python implementation of the Louvain method to assign a community to each node, then use the Girvan-Newman modularity algorithm as defined in Algo. 1 to compute the modularity of the network.
§.§ Clustering coefficient
The clustering coefficient (CC) quantifies the degree to which nodes in a graph cluster together, effectively forming a clique. However, the traditional definition of CC is designed for positive connections and does not accommodate negative edges. We therefore removed all negative edges in the graph and used this converted network to estimate the clustering coefficient of the original network. The clustering coefficient of individual nodes is computed using a standard Python graph library, and the overall network clustering coefficient is determined by taking the mean CC across all nodes in the network.
§ THE IMPACT OF SPECTRAL RADIUS AND SPARSITY
In our study, we consistently used a spectral radius of 0.9 and a sparsity of 0.2. To ensure these parameter choices did not influence our conclusion, similar to Suppl. section 2, we trained the ESN and HubESN on a smaller and a larger training set and assessed them on a n_test=2000 testing set.
As demonstrated in Suppl. Fig. 3a, when the spectral radius is less than 1, it only slightly influences the performance of both ESN and HubESN. When n_train is small, HubESN consistently outperforms ESN; as n_train increases, the performance difference between HubESN and ESN diminishes, which aligns with our findings. On the other hand, when the spectral radius is greater than or equal to 1, the performance of both ESN and HubESN deteriorates significantly, while HubESN has significantly lower RMSE than ESN regardless of the n_train value.
Furthermore, Suppl. Fig. 3b demonstrates that our choice of sparsity does not significantly alter our conclusions, further aligning with our overall findings.
§ IMPLEMENTING ESN, HUBESN, AND HUBESN-RAND
§.§ Model parameters
In our research, the primary focus is on the influence of the hub structure and hub neurons rather than on parameter optimization. For all the experiments involving ESN, HubESN, and HubESN-rand, we only modify the number of training samples (n_train) and the location where signals are injected into the hidden layer. Unless specified otherwise, all other parameters remain constant throughout all tests. The specific parameter choices are specified in Table 1.
§.§ Input and hidden layer initialization
Both the input layer and the hidden layer are first initialized as fully connected matrices with weights following the distribution specified above. The edges in the recurrent weight matrix W^rec are then pruned to the specified sparsity level. If the network is an ESN, pruning is performed randomly; if the network is HubESN or HubESN-rand, the pruning is governed by the deletion probability matrix P_prune. Subsequently, the spectral radius is used to scale the largest absolute eigenvalue of W^rec. Finally, rows of the input layer are dropped until the fraction of neurons receiving input equals r_sig = 0.1. For ESN and HubESN-rand, the rows of the input layer are deleted randomly. For HubESN, the rows corresponding to neurons within the lower 90th percentile of nodal degree are eliminated, so that the signal is injected only into the neurons with the top 10% of nodal degrees.
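A sketch of the final step, selecting which hidden neurons keep their input rows (degree counted as in-degree plus out-degree):

    import numpy as np

    def select_input_rows(W_in_full, W_rec, r_sig=0.1, hub=True, seed=0):
        N = W_rec.shape[0]
        degree = (W_rec != 0).sum(axis=0) + (W_rec != 0).sum(axis=1)
        k = int(round(r_sig * N))
        if hub:                      # HubESN: top-degree neurons receive input
            keep = np.argsort(degree)[-k:]
        else:                        # ESN / HubESN-rand: random neurons
            keep = np.random.default_rng(seed).choice(N, size=k, replace=False)
        W_in = np.zeros_like(W_in_full)
        W_in[keep] = W_in_full[keep]
        return W_in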
§ EXPERIMENTAL PROCEDURES
§.§ Mackey-Glass
The Mackey-Glass time series is updated as indicated in Eq. 2. Consistent with previous literature, the parameters τ, γ_m, β_m, k are set to 17, 0.1, 0.2, and 10, respectively. This parameter setting ensures the system exhibits chaotic behavior. The signal is normalized to (-1, 1) since we are using the tanh(·) activation function.
dx_m/dt = β_m x(t-τ)/1+(x_m(t-τ))^k-γ_m x_m(t)
Observe that the time series updates using x_m(t) and the delayed value x_m(t-τ); the hidden layer is expected to encode and memorize the delayed signal x_m(t-τ). Therefore, at each time step, only x_m(t) is input to the ESN. After the entire training set x_m(t) | t = 0, ..., n is input to the network, the readout layer fits a linear regression from the hidden states s(t) | t = 0, ..., n to the labels x_m(t) | t = 1, ..., n+1.
§.§ NARMA10
NARMA10 is updated as specified in Eq. 3. Similar to the Mackey-Glass task, we expect the hidden layer to encode all previously shown values x_n(t-i) | i = 1, ..., l. Therefore, at time t only the most recent input u_n(t) is presented to the network, and the network is expected to output x_n(t+1). However, as the inputs u_n for NARMA10 are independent random variables u_n(t) ∈ [0, 0.5], they are not normalized, since normalizing them would also affect the corresponding labels.
x_n(t) = α_n x_n(t-1) + β_n x_n(t-1) ∑_i=1^l x_n(t-i) + γ_n u_n(t-l) u_n(t-1) + δ_n
§.§ MNIST
Unlike time-series tasks like Mackey-Glass and NARMA10, the MNIST handwritten digit classification is not a time-series prediction task. To make it compatible with the recurrent setting of ESN, we input each 28x28 image into the network column by column over 28 time steps, using one-hot encoding as the label.
In the testing phase, each image produces 28 predictions. The final prediction is derived by majority vote among these predictions. Model accuracy is computed as the proportion of correctly identified labels.
§ INVERSE CORRELATION BETWEEN READOUT WEIGHTS AND NODAL DEGREE
After the ESN is trained, the readout layer fits the ESN states across all training time and produces a mapping between ESN states and the output labels. This allows for an additional use of the readout layer: its absolute weights can be used to reflect the relative importance of each neuron in prediction. Since each neuron may oscillate at a different magnitude within (-1, 1), and lower-degree nodes tend to have lower magnitudes, the normalized readout weight of neuron n_i is computed as follows:
w^norm_i = |w^out_i| ∑_t=0^n_train |s_i,t|
where w^norm_i is the normalized absolute readout weight of n_i, w^out_i is the actual readout weight of n_i in the readout layer, and s_i,t is the state of n_i at time t.
Upon obtaining the normalized absolute readout weight for each neuron, we stack the w^norm_i and its corresponding degree deg_i into two vectors of length N, where N is the network size. We use v_w and v_deg to represent the two vectors. The correlation between the normalized weight and node degree can be computed using Pearson correlation.
r = ∑(v_w-v̅_̅w̅)(v_deg-v̅_̅d̅e̅g̅)/√(∑(v_w-v̅_̅w̅)^2 ∑(v_deg-v̅_̅d̅e̅g̅)^2)
We observed an inverse correlation between the normalized readout weight and the nodal degree. This indicates that higher-degree neurons represent different information than lower-degree neurons, and that lower-degree neurons are preferred during prediction. This pattern is not task-specific; we observed it in all three tasks (Suppl. Fig. 4).
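A sketch of this analysis, using the mean rather than the sum of |s_i,t| (a constant factor that does not change the Pearson correlation):

    import numpy as np

    def weight_degree_correlation(W_out, S, W_rec):
        # w_i^norm = |w_i^out| * mean_t |s_{i,t}|, then Pearson r against degree.
        w_norm = np.abs(W_out) * np.abs(S).mean(axis=0)
        degree = (W_rec != 0).sum(axis=0) + (W_rec != 0).sum(axis=1)
        return np.corrcoef(w_norm, degree)[0, 1]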
|
http://arxiv.org/abs/2307.01459v1
|
20230704033714
|
The integral Chow ring of weighted blow-ups
|
[
"Veronica Arena",
"Stephen Obinna",
"Dan Abramovich"
] |
math.AG
|
[
"math.AG"
] |
Brown University
Department of Mathematics
Providence
RI 02906
Veronica_Arena@brown.edu
Brown University
Department of Mathematics
Providence
RI 02906
Stephen_Obinna@brown.edu
With an appendix by Dan Abramovich, Veronica Arena, and Stephen Obinna
This research is supported in part by funds from BSF grant 2018193 and NSF grant DMS-2100548
We give a formula for the Chow rings of weighted blow-ups. Along the way, we also compute the Chow rings of weighted projective stack bundles, give a formula for the Gysin homomorphism of a weighted blow-up, and prove a generalization of the splitting principle. In addition, in the appendix we compute the Chern class of a weighted blow-up.
§ INTRODUCTION
Let f:Ỹ→ Y be the weighted blow-up of X ⊂ Y with positive weights a_1, …, a_d, and let X̃ be the exceptional divisor.
For most of the paper we will assume X, Y are algebraic spaces over a field of characteristic 0. In section 7 we will generalize to the case of quotient stacks by a linear algebraic group.
Then we have the commutative diagram
X̃ --j--> Ỹ
g|        |f
 v        v
X  --i--> Y
which is not Cartesian, unlike the ordinary blow-up case (an example of this can be found in <cit.>).
In the case of a classical blow-up, a description of the ring A^*(Ỹ) and of its A^*(Y)-module structure is given in <cit.> or <cit.>.
The purpose of this paper is to give a similar description for the Chow ring of a weighted blow-up.
The formula will follow from the exact sequence in the theorem below, generalizing the key sequence in <cit.>.
[<ref> Key sequence]
Let X, Y, X̃, Ỹ, f be as usual, then we have the following exact sequence.
A^*(X) → A^*(X̃) ⊕ A^*(Y) → A^*(Ỹ) → 0.
Further, if we use rational coefficients, then this becomes a split short exact sequence with g_* left inverse to (f^!, -i_*).
0 → A^*(X,ℚ) → A^*(X̃,ℚ) ⊕ A^*(Y,ℚ) → A^*(Ỹ,ℚ) → 0.
Note that since our blow-up diagram is not Cartesian, the codomain of f^! is A^*(X×_Y Ỹ), but X̃ is the reduction of X×_Y Ỹ so we can identify their Chow groups.
Moreover, when working with integral coefficients, the sequence is no longer exact on the left as shown in example <ref>.
Passing to rational coefficients, however, allows us to maintain exactness on the left and to define a left inverse of (f^!,-i_*) via g_*. In fact, it is enough to pass to ℤ[1/a_1,…,1/a_d]-coefficients.
From the sequence, we can get the following description of A^*(Ỹ).
[<ref> Chow ring of a weighted blow-up] If Ỹ→ Y is a weighted blowup of Y at a closed subvariety X, then the Chow ring A^*(Ỹ) is isomorphic as a group to the quotient
A^*(Ỹ)≅(A^*(X)[t]· t ⊕ A^*(Y))/( ((P(t)-P(0))α, -i_*(α)), ∀α∈ A^*(X))
with P(t)=c_top^𝔾_m(𝒩_XY)(t) and [X̃]=-t.
The multiplicative structure on A^*(Ỹ) is induced by the multiplicative structures on A^*(X) and A^*(Y) and by the pullback map in the following way
(0,β) · (t,0)=(i^*(β)t,0).
Equivalently A^*(Ỹ) can be expressed as a quotient of the fiber product
A^*(Y)×_A^*(X) A^*(X)[t]/((i_*α, P(t)α) ∀α∈ A^*(X))
with i^*: A^*(Y) → A^*(X) and A^*(X)[t] → A^*(X) given by evaluating t at 0.
In order to use the key sequence, we need to give a presentation for the Chow ring of the exceptional divisor X̃.
In the classical case, X̃ is a projective bundle over X and the Chow ring of a projective bundle can be described via the formula <cit.>.
In the case of a weighted blow-up, the exceptional divisor is a projective stack bundle, i.e., the projectivization of a weighted affine bundle (definitions <ref>, <ref>). In section 3 we define the top 𝔾_m-equivariant Chern class of a weighted affine bundle E in terms of its homogeneous pieces as
c^𝔾_m_top(E)=∏_i c^𝔾_m_top(E_i)=∏_i(c_n_i(E_i)+a_itc_n_i-1(E_i)+...+a_i^n_it^n_i)
and we give a formula for the integral Chow ring of a projective stack bundle (which was proven for rational coefficients in <cit.>).
[<ref> Weighted projective bundle formula] Let E be a weighted affine bundle over X of rank n. Let c_top^𝔾_m(E)(t) be its 𝔾_m-equivariant top Chern class.
Then
A^*(𝒫(E)) ≅ A^*(X)[t]/c_top^𝔾_m(E)(t).
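As a quick sanity check (our illustration, not from the original text), take X to be a point and E = 𝔸^n+1 with weights a_0, …, a_n, so that each homogeneous piece E_i is a trivial line bundle with c_1^𝔾_m(E_i) = a_i t. Then
c_top^𝔾_m(E)(t) = ∏_i (a_i t) = (∏_i a_i) t^n+1, hence A^*(𝒫(a_0, …, a_n)) ≅ ℤ[t]/((∏_i a_i) t^n+1),
consistent with the known description of the integral Chow ring of a weighted projective stack.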
Finally, to have a complete description of the exact sequence in theorem <ref>, we need the appropriate generalization of the excess intersection formula <cit.>. Unlike the case of an ordinary blow-up, in the weighted blow-up case we do not have an excess bundle, and we describe f^! as multiplication by a difference quotient of the top 𝔾_m-equivariant Chern class of the normal bundle.
[<ref> Weighted key formula]
Let X, Y, X̃, Ỹ, f be as above. Let us identify A^*(X̃) ≅ A^*(X)[t]/P(t) with P(t)=c_top^𝔾_m(𝒩_XY)(t).
Then we have the following formula for the Gysin homomorphism f^!:A^*(X) → A^*(X̃):
f^!(α)=((P(t)-P(0))/t)·α.
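For instance (our illustration, assuming the normal bundle splits), if d = 2 and 𝒩_XY = L_1 ⊕ L_2 with weights a_1, a_2 and c_1(L_i) = c_i, then
P(t) = (c_1 + a_1 t)(c_2 + a_2 t), so f^!(α) = (a_1 c_2 + a_2 c_1 + a_1 a_2 t)·α.
For a_1 = a_2 = 1 this is multiplication by c_1 + c_2 + t, which is the class one expects, under the convention [X̃] = -t, from the classical excess intersection formula for an ordinary codimension-two blow-up.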
The proof of our formula for the Gysin homomorphism relies on a generalization of the splitting principle, theorem <ref>.
[<ref> The splitting principle] Let T be the standard maximal torus in G_vect. Then the map X” → X in the following diagram
X” ----> BT
 |        |
 v        v
X' ----> BG_vect
 |        |
 v        v
X  ----> BG_aff
induces an injection of Chow rings A^*(X) ↪ A^*(X”) via pullback.
Here G_aff denotes the structure group of the weighted affine bundle E and G_vect the structure group of its associated weighted vector bundle.
Note that the upper square of the diagram is equivalent to the classical splitting principle in <cit.> or <cit.>.
§.§ Acknowledgements
We would like to thank Dan Abramovich for his invaluable help and guidance, as well as Jarod Alper, Paolo Aluffi, Martin Bishop, Samir Canning, Andrea Di Lorenzo, Giovanni Inchiostro, Patrick Jefferson, Michele Pernice and Ming Hao Quek for insightful conversations.
§ EQUIVARIANT INTERSECTION THEORY
From now on Y, X will be smooth quasi-separated algebraic spaces, of finite type over a field k of characteristic 0, with a 𝔾_m action. We will also assume the 𝔾_m action is trivial on X.
Let us first recall the definitions in <cit.> of equivariant Chow groups, for a linear algebraic group G.
<cit.>
Let Y be a d-dimensional quasi-separated algebraic space of finite type over a field k, together with a G action. Let g the dimension of G.
The i-th G-equivariant Chow group of Y is defined as
A_i^G(Y):=A_i+l-g(Y × U/G)
where U is an open subspace of an l-dimensional representation, on which G acts freely and whose complement has codimension greater than d-i.
In this article we will mostly use this definition in the particular case of a 𝔾_m action. In particular, this leaves us with very convenient choices of representations: V will be an l-dimensional vector space with the standard 𝔾_m action with weight 1, and U = V ∖{0}.
A^*_𝔾_m(X)≅ A^*(X)[t].
Indeed, since the 𝔾_m action is trivial on X,
A^i_𝔾_m(X) = A_d-i^𝔾_m(X) = A_d-i+l-1(X × U/𝔾_m) = A^i(X ×ℙ^l-1) = ⊕_k=0^i A^k(X)t^i-k
with t=c_1(𝒪_ℙ^l-1(1)), and the isomorphism follows.
<cit.>
Let E be a G-equivariant vector bundle over Y. The equivariant Chern classes of E are the operators c_j^G: A_i^G(Y) → A^G_i-j(Y) with c^G_j(E) ∩α = c_j(E × U/G) ∩α∈ A_i+l-j-g(Y × U /G)=A^G_i-j(Y).
As Molina-Rojas and Vistoli remark in <cit.>
many of the standard properties of Chow groups still hold in the equivariant case. Below we collect some that will be used later.
The following are true.
* The first Chern class of the tensor product of line bundles is the sum of the first Chern classes of each line bundle.
* Let E be a G-equivariant vector bundle over Y and f:Y' → Y a map such that f^*E has a filtration by G-equivariant vector bundles f^*E ⊃ F_r ⊃ ... ⊃ F_0=0. Let E_i=F_i/F_i-1.
Then
c^G(f^*E)=∏_i c^G(E_i).
* If Z is a closed G-invariant subscheme of Y, we have the following exact sequence
A^*_G(Z) → A^*_G(Y) → A^*_G(Y ∖ Z) → 0.
* Let π: E → Y be a vector bundle over Y, and s_0: Y → E the zero section. Then the Gysin pullback map s_0^*: A^*_G(E) → A^*_G(Y) is an isomorphism, inverse to π^*.
We will prove (3). The proof of the remaining ones is analogous.
Let U have dimension l high enough that A_i^G(Y) is defined as A_i+l-g(Y × U/ G).
Then, also by definition, we have A_i^G(Z):=A_i+l-g(Z × U/G). In particular, Z × U/G and Y × U/G are algebraic spaces, and the localization sequence
A^*(Z × U /G) → A^*(Y × U/G) → A^*(Y× U/G ∖ Z × U /G) → 0
is exact.
Therefore statement (3) holds as well.
Another proposition, that will be very useful later, is <cit.>, for which we will quote the statement and the proof.
<cit.>
Let G be an affine linear group acting on a smooth scheme Y. Let π:E → Y be a G-equivariant vector bundle of rank n. Call E_0 ⊂ E the complement of the zero section s: Y → E. Then the pullback homomorphism π|_E_0^*:A^*_G(Y) → A^*_G(E_0) is surjective, and its kernel is generated by the top Chern class c_n^G(E) ∈ A^n_G(Y).
Consider the diagram whose bottom row is the localization sequence
A^*_G(Y) →(s_*) A^*_G(E) → A^*_G(E_0) → 0,
with s^*: A^*_G(E) → A^*_G(Y) pointing back up and the diagonal map π|_{E_0}^*: A^*_G(Y) → A^*_G(E_0).
Since s^* is an isomorphism, inverse to π^*, we see that π|_{E_0}^* is surjective with kernel generated by the image of s^*s_*. By the self-intersection formula, s^*s_* is multiplication by c_n^G(E).
§ CHOW GROUPS OF WEIGHTED PROJECTIVE STACK BUNDLES
In this section we will give a formula for the Chow ring of a weighted projective stack bundle. Weighted projective stack bundles appear as the exceptional divisor of a weighted blow-up. We start by computing the Chern classes of weighted affine bundles, and then show we can apply lemma <ref> to them. A similar formula for rational coefficients appears in <cit.>.
An affine bundle is a smooth affine morphism E→ X such that E is, locally in the smooth topology, isomorphic to X×𝔸^n.
<cit.>
A weighted affine bundle is a _m equivariant affine bundle E X where locally in the smooth topology _m acts linearly on 𝔸^n with positive weights a_1,...,a_n ∈ℤ.
We will show in lemma <ref> that the structure group is special, so weighted affine bundles over a scheme will be Zariski-locally-trivial.
It will sometimes be convenient to emphasize the distinct weights of a weighted affine bundle. When we do this we will list the distinct weights as a_1,…,a_r, and use n_i to refer to the dimension of the subspace of 𝔸^n where the action has weight a_i.
A weighted vector bundle is a weighted affine bundle whose underlying _m space is a vector bundle. Equivalently, it is a weighted affine bundle with linear transition functions.
Notice that our terminology is slightly different from that of <cit.>. What they call twisted/untwisted weighted vector bundles, we call weighted affine/vector bundles respectively. Also note that the action on a weighted vector bundle makes it a sum of vector bundles with homogeneous actions.
<cit.>
A weighted projective stack bundle over X is the stack-theoretic Proj of a graded algebra corresponding to a weighted affine bundle with strictly positive weights. Precisely, if R is a graded algebra such that E = Spec_X(R), then Proj_X(R) = [Spec_X(R) ∖ V(R_+)/𝔾_m].
§.§ Equivariant Chern classes of a weighted line bundle
Let us denote with L → X a line bundle over X with the trivial action. Let us denote with L^a the same underlying line bundle, endowed with the weight a _m action. This is a notation we will adopt only for subsections 3.1 and 3.2, but abandon later as the weight of the _m action will be clear from context.
Some of the following lemmas are likely already known, but are stated and proven for completeness as we couldn't find specific references.
Let L^a be a 𝔾_m-equivariant line bundle over X with weight a; then the first equivariant Chern class of L^a is c_1^{𝔾_m}(L^a) = c_1(L) + at via the identification in example
<ref>.
We know that
L^a=L^a ⊗𝒪_X = L ⊗𝒪_X^a.
In particular,
c_1^{𝔾_m}(L^a) = c_1^{𝔾_m}(L ⊗ 𝒪_X^a) = c_1^{𝔾_m}(L) + a·c_1^{𝔾_m}(𝒪_X^1).
Now, since the action on L is trivial, (L × U)/𝔾_m = L × ℙ^{l-1}, and since A^1(X × ℙ^{l-1}) = A^1(X) ⊕ A^0(X)t, we get
c_1^{𝔾_m}(L) = c_1(L × ℙ^{l-1}) = c_1(L) ∈ A^1(X).
We only need to prove c_1(𝒪_X^1)=t.
Let us consider the projection to a point P, f: X → P. The map defines a graded ring homomorphism f^*: A^*_{𝔾_m}(P) → A^*_{𝔾_m}(X), i.e., a map f^*: ℤ[t] → A^*(X)[t] defined by 1 ↦ 1 and t ↦ t.
Now 𝒪_X^1=f^*(𝒪_P^1) and
c_1^_m(𝒪_X^1)=c_1^_m(f^*(𝒪_P^1))=f^*c_1^_m(𝒪_P^1).
Therefore it is enough to prove c_1^_m(𝒪_P^1)=t.
By definition c_1^{𝔾_m}(𝒪_P^1) = c_1((𝒪_P × U)/𝔾_m) as a bundle over U/𝔾_m, with U/𝔾_m = (𝔸^2 ∖ {(0,0)})/𝔾_m = ℙ^1.
Now, a nonzero section s of (U × 𝒪_P)/𝔾_m is given by (x_0, x_1) ↦ (x_0, x_1, x_0) and intersects the zero section (x_0, x_1, 0) in x_0 = 0, which gives us 𝒪_{ℙ^1}(1), whose first Chern class is t in A^1(ℙ^1), as desired.
§.§ Equivariant Chern classes of homogeneous bundles
Let E^a be a rank n vector bundle over X with 𝔾_m acting homogeneously with weight a on it. Then there exists f: X' → X such that f^*E^a has a filtration
f^*E^a ⊃ F^a_n ⊃ ... ⊃ F^a_0 = 0
with 𝔾_m-equivariant line bundle quotients L^a_j = F^a_{j+1}/F^a_j, and f^* is injective.
Let us consider the underlying bundle E. Then by splitting construction <cit.> there is a map f:X'→ X with a filtration f^*E = F_n⊃ ... ⊃ F_0=0 with line bundle quotients.
These bundles naturally have the structure of _m-equivariant vector bundles with weight 1. By replacing the weight 1 action with a weight-a action we get the desired sequence.
Let E^a be a homogeneous _m-equivariant vector bundle of rank n with weight a, E the underlying vector bundle endowed with the trivial _m action. Then the top equivariant Chern class in A^*__m(X)=A^*(X)[t] is
c_n^𝔾_m(E^a)=c_n(E)+atc_n-1(E)+...+a^nt^n.
Let f: X' → X be as in proposition <ref>. Then c^{𝔾_m}_n(f^*E^a) = ∏_{i=1}^{n} c^{𝔾_m}_1(L_i^a). By lemma <ref>, c^{𝔾_m}_1(L_i^a) = c_1(L_i) + at.
Therefore c_n^{𝔾_m}(f^*E^a) = c_n(f^*E) + a t c_{n-1}(f^*E) + ... + a^n t^n. By injectivity of f^* we are done.
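For concreteness (a small instance we spell out), when E^a has rank 2 the corollary reads

c_2^{𝔾_m}(E^a) = c_2(E) + a t c_1(E) + a^2 t^2,

and for n = 1 it recovers lemma <ref>: c_1^{𝔾_m}(L^a) = c_1(L) + at.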
§.§ Chern classes of weighted affine bundles
Let E be a weighted affine bundle over X. Let 0 < a_1 < ... < a_r be the distinct weights of the 𝔾_m action. Then there exist subbundles F_i such that
E = F_r ⊃ ... ⊃ F_1 ⊃ F_0 = 0
with well-defined quotients E_i = F_i/F_{i-1}, which are homogeneous vector bundles with weight a_i.
Let E = Spec_X(R) and let {U_i} be a cover of X such that E|_{U_i} is the trivial bundle. Then
we have graded isomorphisms
α_i: R|_{U_i} → 𝒪_{U_i}[x_{i,1}^{(a_1)}, …, x_{i,n_1}^{(a_1)}, x_{i,1}^{(a_2)}, …, x_{i,n_2}^{(a_2)}, …, x_{i,1}^{(a_r)}, …, x_{i,n_r}^{(a_r)}]
with x_{i,1}^{(a_h)}, ..., x_{i,n_h}^{(a_h)} having weight a_h. A general transition map
α_{ij} = α_j|_{U_{ij}} ∘ α_i|_{U_{ij}}^{-1}: 𝒪_{U_{ij}}[x_{i,1}^{(a_1)}, ..., x_{i,n_r}^{(a_r)}] → 𝒪_{U_{ij}}[x_{j,1}^{(a_1)}, ..., x_{j,n_r}^{(a_r)}]
will map x_{i,l}^{(a_h)} to a homogeneous polynomial of degree a_h.
Now let F_k|_U_i:=α_i^-1(𝒪_U_i[x_i,1^(a_1),...,x^(a_k)_i,n_k]) be the locus where _m acts with weights smaller or equal to a_k. This defines a subbundle of E. Moreover the quotients F_k/F_k-1 are well defined. Indeed, while these are affine bundles, they are locally isomorphic to vector bundles so we can at least take quotients locally. By construction, taking quotients locally gives us bundles consisting only of the weight a_k pieces of E, and since the lower degree pieces have been reduced to 0, we are left with linear transition functions making the F_k/F_k-1 homogeneous bundles of weight a_k, as needed.
For an affine bundle E, with E_i as in proposition <ref> we define the G-equivariant total Chern class of E as c^G(E)=∏ c^G(E_i).
Let E be a weighted affine bundle over X as above. Let N_XE be the (non-weighted) normal bundle of X in E. Then, with X↪ E the zero section, we have
N_XE≅ E_1 ⊕ ... ⊕ E_r.
Let I be the ideal sheaf of X in E; then N_XE ≅ Spec_X(⊕_n I^n/I^{n+1}) is a vector bundle over X with the same rank as E and we need only determine the transition maps.
Letting α_ij be the transition maps of E, the corresponding transition maps for N_XE are induced by the α_ij in the natural way. Because of the quotients, all non-linear terms of α_ij become zero and the surviving linear terms are exactly the transition functions of E_1 ⊕ ... ⊕ E_r.
Lemma <ref> also holds in the case where E is a weighted affine bundle.
Note that by definition <ref> and proposition <ref>, we have c_i^G(E)=c_i^G(N_XE).
The rest of the proof is the same as in lemma <ref>, noting that π^*: A^*(X) → A^*(E) is still an isomorphism by <cit.>, and s^*: A^*(E) → A^*(X) is still its inverse.
§.§ The Chow ring of a weighted projective stack bundle
Let E be a weighted affine bundle over X of rank n. Let c_top^{𝔾_m}(E)(t) be its 𝔾_m-equivariant top Chern class.
Then
A^*(𝒫(E)) ≅ A^*(X)[t]/c_top^{𝔾_m}(E)(t).
Note that, by definition, A^*(𝒫(E))=A^*([E_0/_m])=A^*__m(E_0).
By lemma <ref> we only need to prove that the image of c_n^{𝔾_m}(E) via the identification in example <ref> is ∏_i(c_{n_i}(E_i) + a_i t c_{n_i-1}(E_i) + ... + a_i^{n_i} t^{n_i}).
By corollary <ref>, it follows c^_m_n(E)=∏ c^_m_n_i(E_i)=∏_i(c_n_i(E_i)+a_itc_n_i-1(E_i)+...+a_i^n_it^n_i) and we are done.
Below there are some (familiar) special cases.
[The Chow ring of 𝒫(a_1,...,a_n)]<cit.>
We can consider 𝒫(a_1,...,a_n) as a weighted projective bundle over a point. In particular, we have that 𝒫(a_1,...,a_n) splits into n trivial line bundles with weights a_1,...,a_n, each of which has first Chern class equal to zero. It follows that
A^*(𝒫(a_1,...,a_n)) = ℤ[t]/(a_1 ⋯ a_n t^n).
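For instance, with the weights that reappear in the computation of A^*(ℳ̅_{1,2}) in section <ref>:

A^*(𝒫(2,3,4)) = ℤ[t]/(2·3·4·t^3) = ℤ[t]/(24t^3),

matching the presentation ℤ[y]/(24y^3) used there.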
[The Chow ring of a classical projective bundle] <cit.>
Let E be a vector bundle of rank n over X in the classical sense.
In this case, we have _m acting homogeneously on the whole space with weight 1. In particular
A^*(𝒫(E))=A^*(X)[t]/c_n(E)+c_n-1(E)t+...+t^n.
As a toric example, this can be recovered as a consequence of <cit.>.
Consider the exceptional divisor X̃ of the weighted blow-up of X = V(x_1,...,x_n) in ℙ^{m+n}, with (possibly equal) weights a_1,...,a_n.
In this case, the Chow ring of the base will be A^*(X) = A^*(ℙ^m) = ℤ[x]/(x^{m+1}).
The normal bundle of X in ℙ^{n+m}, N_X ℙ^{n+m}, will split into the sum of n copies of 𝒪_{ℙ^m}(1), on each one of which 𝔾_m acts with weight a_i. We will denote the normal bundle together with the 𝔾_m action by N_{a_1,...,a_n}ℙ^{n+m}.
Now c_1(𝒪_{ℙ^m}(1)) = x ∈ ℤ[x]/(x^{m+1}). Therefore c_n^{𝔾_m}(N_{a_1,...,a_n}ℙ^{n+m}) = ∏(x + a_i t) and
A^*(X̃) = A^*(𝒫(N_{a_1,...,a_n}ℙ^{n+m})) = ℤ[x,t]/(x^{m+1}, ∏_{i=1}^n (x + a_i t)).
§ THE SPLITTING PRINCIPLE
The goal of this section is to prove theorem <ref>, an analog of the splitting principle. We start by proving some facts about structure groups and classifying spaces. Using those results we construct, for any weighted affine bundle E→ X, a map X'→ X that allows us to pull back our affine bundle to a vector bundle E' → X' with A^*(X')≅ A^*(X).
§.§ Structure groups
From here on, let E → X be an affine bundle with fibers isomorphic to an affine space V, 𝐧 = (n_1,...,n_r) the dimensions of its homogeneous parts and 𝐚 = (a_1,...,a_r) their distinct weights. Let G_{𝐚,𝐧} be the group Aut_{𝔾_m}(V) of 𝔾_m-equivariant automorphisms of V and L_{𝐚,𝐧} = ∏ GL(n_i). Moreover, define 𝒱_G = [V/G_{𝐚,𝐧}] and 𝒱_L = [V/L_{𝐚,𝐧}].
There is a surjective group homomorphism G_{𝐚,𝐧} → L_{𝐚,𝐧} and a section L_{𝐚,𝐧} → G_{𝐚,𝐧}. The kernel of the surjection is a unipotent group U_{𝐚,𝐧}. This implies that G_{𝐚,𝐧} is special.
In <cit.>, the authors offer an explicit description of G_{𝐚,𝐧}. In fact, they decompose G_{𝐚,𝐧} as the recursive semidirect product
G_{𝐚,𝐧} = (GL_{n_r} × G_{𝐚',𝐧'}) ⋉ 𝔾_a^{n_r N_r}
with 𝐚' = (a_1,...,a_{r-1}), 𝐧' = (n_1,...,n_{r-1}), and N_r the dimension of the degree-a_r piece of a graded polynomial algebra with free variables {x_{i,j} : 1 ≤ i ≤ r-1, 1 ≤ j ≤ n_i}, where x_{i,j} is given weight a_i.
Unraveling the recursion gives:
G_{𝐚,𝐧} = (GL_{n_r} × … × ((GL_{n_1} × {1}) ⋉ 𝔾_a^{n_1 N_1}) ⋉ …) ⋉ 𝔾_a^{n_r N_r} = (…(L_{𝐚,𝐧} ⋉ 𝔾_a^{n_1 N_1}) ⋉ …) ⋉ 𝔾_a^{n_r N_r}.
This expression provides us with the desired surjection and section. The kernel is a successive extension of additive groups and so is unipotent.
§.§ Lemmas on classifying spaces
Given a semi-direct product of groups G=L ⋉ U,
we get a Cartesian diagram:
{*}[r] [d] BL [d]
BU [r] BG.
The datum of a map T→ BL×_BGBU consists of a U-bundle P_U→ T, an L-bundle P_L→ T, and an isomorphism of G-bundles ϕ: (P_U× G)/U → (P_L× G)/L.
We will prove that the bundles P_U and P_L are trivial and ϕ is unique up to unique isomorphism, so that BL×_BGBU≅{*}.
Indeed, transition functions on P_U (respectively on P_L) are given by multiplications by elements of U (respectively of L). In particular, the isomorphism of G- bundles (P_U× G)/U → (P_L× G)/L forces the transition functions to be in the intersection L ∩ U = {e}.
This implies P_U and P_L are trivial bundles and from now on we will denote (P_L× G)/L ≅ (P_U× G)/U by P.
We identify automorphisms of the trivial G-bundle with elements of G through the natural isomorphism, and do the same for U-bundles and L-bundles. Given 2 automorphisms, g,g' of the trivial G-bundle there are unique l∈ L and u∈ U such that lg=g'u, making the following diagram commute
P [r, "l"] [d, "g'"] P [d,"g"]
P [r,"u"] P.
We have shown that for any choice of two objects in BU ×_BGBL there is a unique isomorphism between them, as desired.
BL → BG is a U-bundle; specifically, there is a Cartesian diagram
U → BL
↓       ↓
{*} → BG.
Consequently, given a morphism X → BG, the fiber product X ×_{BG} BL → X is a U-bundle.
Applying lemma <ref> to G, L, U as in lemma <ref>, and appending on the left the Cartesian diagram coming from the standard presentation of BU:
U → {*}
↓       ↓
{*} → BU,
we get the Cartesian diagram
U → {*} → BL
↓      ↓        ↓
{*} → BU → BG
as desired.
Then X ×_{BG} BL → X is the pullback of a U-bundle and hence is a U-bundle.
Let L be a subgroup of a group scheme G acting on a scheme V. Then the following diagram is Cartesian:
[V/L] [r] [d] BL [d]
[V/G] [r] BG.
An object over a scheme S in [V/G] ×_{BG} BL is a triple (P, Q, α) where P → S is a G-torsor together with a G-equivariant map ϕ: P → V; Q → S is an L-torsor; and α is an isomorphism of G-torsors P → (Q × G)/L.
Given such an object we can construct an object in [V/L] by considering the L-torsor Q → S together with the map ψ: Q → V defined as the composition
Q → (Q × G)/L →(α^{-1}) P →(ϕ) V.
To verify this is indeed an object of [V/L], we need to prove that ψ is L-equivariant.
Now, ϕ and α^{-1} are G-equivariant, and in particular L-equivariant. Moreover the quotient map Q → (Q × G)/L maps an element ql to [ql, e] = [qll^{-1}, le] = [q, l]. But L acts on (Q × G)/L through its inclusion into G, hence ql ↦ [q, e]l as desired.
On the other hand, given an L-torsor Q → S together with an L-equivariant map ψ: Q → V in [V/L], we can construct the triple ((Q × G)/L, Q, id) as an object of [V/G] ×_{BG} BL. In order for (Q × G)/L to be an object in [V/G], we must equip it with a G-equivariant map (Q × G)/L → V or, equivalently, with a G-equivariant, L-invariant map Q × G → V. The map Φ: Q × G → V defined by (q, g) ↦ ψ(q)g is L-invariant with respect to the action l · (q, g) = (ql, l^{-1}g) we are quotienting by; indeed (ql, l^{-1}g) ↦ ψ(ql)l^{-1}g = ψ(q)ll^{-1}g = ψ(q)g. Moreover Φ is G-equivariant: Φ((q, g) · h) = ψ(q)gh = (ψ(q)g) · h.
The verification that the functors defined are indeed inverses is standard and will be omitted.
§.§ The Splitting Principle
Given a weighted affine bundle E → X (respectively a weighted vector bundle), we have a natural map X → BG_{𝐚,𝐧} (respectively X → BL_{𝐚,𝐧}) such that E is the pullback of 𝒱_G (respectively 𝒱_L).
We prove the result for 𝒱_G; the result for 𝒱_L is effectively the same, with the obvious modifications.
Let us denote by Isom the sheaf of isomorphisms of affine bundles Isom_X(E, V × X) (respectively, the sheaf of isomorphisms of weighted vector bundles).
By straightforward application of the definitions, it can be seen that Isom is a principal G_{𝐚,𝐧}-bundle over X and Isom ×_X E ≅ V × Isom.
In particular we get the Cartesian diagram
Isom → {*}
↓            ↓
X → BG_{𝐚,𝐧}.
Then we can fit the spaces above in the following commutative cube
Isom ×_X E → V over Isom → {*} (back face),
E → 𝒱_G over X → BG_{𝐚,𝐧} (front face),
with the vertical arrows Isom ×_X E → E (dashed), V → 𝒱_G, Isom → X and {*} → BG_{𝐚,𝐧},
where the bottom, back and side squares are fiber squares.
Notice that Isom ×_X E → E is a principal G_{𝐚,𝐧}-bundle. Moreover, the action of G_{𝐚,𝐧} on Isom gives a G_{𝐚,𝐧}-equivariant map Isom ×_X E → V via the identification of Isom ×_X E with V × Isom. This gives us a map of quotient stacks E → 𝒱_G which makes the top of the cube Cartesian.
It follows that
E → 𝒱_G
↓       ↓
X → BG_{𝐚,𝐧}
is a fiber square, as needed.
Let E → X be a weighted affine bundle, with corresponding map X → BG_{𝐚,𝐧}. Then E', the pullback of E via the map X' = X ×_{BG_{𝐚,𝐧}} BL_{𝐚,𝐧} → X, is a weighted vector bundle.
Consider the following diagram
the commutative cube with
top face E' → 𝒱_L over X' → BL_{𝐚,𝐧},
bottom face E → 𝒱_G over X → BG_{𝐚,𝐧},
back face E' → X' over E → X, and right face X' → BL_{𝐚,𝐧} over X → BG_{𝐚,𝐧}.
The back and right squares are Cartesian by construction.
By lemma <ref> and proposition <ref> we have that the front and bottom squares are Cartesian.
Any such cube with these sides Cartesian is Cartesian, in particular the top square.
Since E'→ X' is the pullback of the vector bundle V B, it is a vector bundle as desired.
Let X' → X be as in corollary <ref>; then the pullback map of Chow rings A^*(X) → A^*(X') is an isomorphism.
By corollary <ref>, X' → X is a U_{𝐚,𝐧}-bundle and U_{𝐚,𝐧} is a unipotent group. In particular, U_{𝐚,𝐧} is a successive extension of the additive group 𝔾_a by itself, and being a U_{𝐚,𝐧}-bundle is equivalent to being a succession of affine bundles; hence by <cit.> we obtain an isomorphism of Chow rings ϕ^*: A^*(X) → A^*(X').
Let T be the standard maximal torus in L_{𝐚,𝐧}. Then the map X'' → X in the following diagram
X'' → BT
↓          ↓
X' → BL_{𝐚,𝐧}
↓          ↓
X → BG_{𝐚,𝐧}
induces an injection of Chow rings A^*(X) ↪ A^*(X'') via pullback.
By the argument in the proof of <cit.> we have an injection A^*(X') ↪ A^*(X''). By composing with the isomorphism in lemma <ref>, we have the desired map.
§ THE GYSIN HOMOMORPHISM INDUCED BY A WEIGHTED BLOW-UP
The goal for this section is to prove theorem <ref>, which replaces the excess bundle formula in the case of weighted blow-ups.
The strategy for the proof is to reduce to the special case of the weighted blow-up of BT in [𝔸^d/T], which will be computed in section 5.3.
The reduction to the special case is performed in two steps: first we reduce to the case of the blow-up of an affine space (section 5.2), and then we apply the splitting principle theorem <ref>.
Some caution is needed when defining f^!, as we don't always have the needed Cartesian diagram. In section 5.1 we address the issue as well as setting some notation for the rest of the paper.
§.§ Notation
Let Ỹ → Y be the weighted blow-up of Y centered at X, and let X̃ be the exceptional divisor.
As observed in <cit.> the commutative square is not always Cartesian
X̃[r] [d] Ỹ[d]
X [r] Y
and when defining f^! we have to make sure to define it with respect to the fiber square
X ×_YỸ[r] [d] Ỹ[d]
X [r] Y
.
Moreover we have X̃=(X ×_YỸ)_red and the diagram below commutes
X̃[r] [dr,"g"] X ×_YỸ[r,"j"][d,"h"] Ỹ[d,"f"]
X [r,"i"] Y
When looking at Chow rings, though, we have a natural isomorphism (red)_*: A^*(X̃) → A^*(X ×_Y Ỹ) induced by the reduction map red: X̃ → X ×_Y Ỹ.
In particular, it makes sense to consider the composition (red)_*^{-1} ∘ f^!; throughout the rest of the paper, we will refer to it simply as f^!.
The map f^!: A^*(X) → A^*(X̃) is of the form f^!(α) = g^*(α) · γ for some element γ ∈ A^*(X̃).
In a similar fashion to what we did in proposition <ref>, we will prove the statement by passing through algebraic spaces.
Precisely, let X̃_U=(𝒩_XY ∖ 0 ) × U / _m with U open as in definition <ref> inducing isomorphisms of Chow groups for X̃ of the appropriate degree.
In fact, if U is chosen large enough we also get the algebraic space Ỹ_U with analogous induced isomorphisms of Chow groups for Ỹ.
For the appropriate degrees, the induced maps f_U: Ỹ_U → Y with (ỹ, u) ↦ f(ỹ) and g_U: X̃_U → X with (x̃, u) ↦ g(x̃) will themselves induce group homomorphisms f^!_U: A^*(X) → A^*(X̃_U) and g^*_U: A^*(X) → A^*(X̃_U).
Now g_U: X̃_U → X is a smooth map and by <cit.> we have isomorphisms
A^p(X̃_U) ≅ A^p(X̃_U → X̃_U) → A^{p-d}(X̃_U → X).
In particular, for the degrees in which A^p(X̃_U) = A^p_{𝔾_m}(𝒩_XY ∖ 0) ≅ A^p(X̃), saying f_U^! = γ_U · [g_U] = γ_U · g^*_U for some γ_U ∈ A^p(X̃_U) is equivalent to saying f^!(α) = γ_U · g^*(α) for some element γ_U ∈ A^p(X̃).
Since the elements γ_U must agree whenever U has high enough dimension, they must coincide. Hence there exists a unique element γ∈ A^*(X̃) such that f^!(α)=γ· g^*(α).
§.§ Specialization to the weighted normal cone
Analogously to <cit.>, Quek and Rydh in <cit.> construct a deformation to the weighted normal cone of X in Y, which is a weighted affine bundle in our case. We will be using their construction to reduce our argument to the case where Y a weighted affine bundle over X.
A similar construction can be found in <cit.>.
Note that when given a weighted embedding that defines a weighted blow-up of smooth varieties, the weighted normal cone is an affine bundle, which we will denote 𝒩_XY.
Let X, Y, X̃, Ỹ be as usual. Let N = 𝒩_X Y be the weighted normal affine bundle of X in Y and f_N: Ñ → N the weighted blow-up with the same weights as f: Ỹ → Y. Then the induced maps f^!: A^*(X) → A^*(X̃) and f_N^!: A^*(X) → A^*(X̃) coincide.
Let M^o be the deformation to the weighted normal cone as defined in <cit.> and let M̃^o be the weighted blow-up of X × 𝔸^1 in M^o with the same weights as f, i.e., the weighted blow-up induced by the weighted embedding in <cit.>. Let M_t and M̃_t, respectively, be the fibers over t. Let Z = X ×_Y Ỹ; note that Z = X ×_N Ñ. Then we have the following diagram.
X̃ × 𝔸^1 → Z × 𝔸^1 → M̃^o over X × 𝔸^1 → M^o (back face, with f_M: M̃^o → M^o),
X̃ → Z → M̃_t over X → M_t (front face, mapping to the back face via the fiber inclusion over t).
By looking at the composition X → M_t → M^o in the subdiagram
Z → M̃_t → M̃^o
↓      ↓         ↓ f_M
X → M_t → M^o
we see that for t≠ 0 we have that f_M^!:A^*(X)→ A^*(Z) is precisely f^!, and for t=0 it is precisely f_N^!.
Now looking at the composition X → X × 𝔸^1 → M^o in the subdiagram
Z → Z × 𝔸^1 → M̃^o
↓        ↓             ↓ f_M
X → X × 𝔸^1 → M^o
we see that f_M^!:A^*(X)→ A^*(Z) is the same for all t.
§.§ The special case of [𝔸^d/T]
Let us now study the particular case of the origin in affine space, equivariantly with respect to the torus. We have the diagram
𝒫(a_1,...,a_d) → 0 ×_{𝔸^d} Bl_{a_1,...,a_d}𝔸^d →(j) Bl_{a_1,...,a_d}𝔸^d
with h: 0 ×_{𝔸^d} Bl_{a_1,...,a_d}𝔸^d → 0, f: Bl_{a_1,...,a_d}𝔸^d → 𝔸^d, i: 0 → 𝔸^d, and g: 𝒫(a_1,...,a_d) → 0 the composition of the reduction map with h.
In order to explicitly give a formula for f^! we need presentations for the equivariant Chow rings A^*_T(-).
Now A^*_T(0) ≅ A^*_T(𝔸^d) ≅ ℤ[x_1,...,x_d]. Details about A^*_T(0) can be found in <cit.> and in <cit.> for equivariant Chow rings of toric stacks.
Let us first observe that, since 𝒫(a_1,...,a_d) is the reduction of 0 ×_{𝔸^d} Bl_{a_1,...,a_d}𝔸^d, there is an isomorphism of Chow rings A^*_T(0 ×_{𝔸^d} Bl_{a_1,...,a_d}𝔸^d) ≅ A^*_T(𝒫(a_1,...,a_d)).
Moreover Bl_{a_1,...,a_d}𝔸^d is a line bundle over 𝒫(a_1,...,a_d), in fact it is the total space of 𝒪_{𝒫(a_1,...,a_d)}(-1), and we have the isomorphism A^*_T(Bl_{a_1,...,a_d}𝔸^d) ≅ A^*_T(𝒫(a_1,...,a_d)).
We are left with computing A^*_T(𝒫(a_1,...,a_d)).
A^*_T(𝒫(a_1,...,a_d)) ≅ ℤ[x_1,...,x_d,t]/P(t) where P(t) := ∏_{i=1}^d (x_i + a_i t).
By construction 𝒫(a_1,...,a_d) = [𝔸^d ∖ 0/𝔾_m] and the actions of 𝔾_m and T on 𝔸^d ∖ 0 commute. In particular A^*_T(𝒫(a_1,...,a_d)) ≅ A^*_T([𝔸^d ∖ 0/𝔾_m]) ≅ A^*_{T × 𝔾_m}(𝔸^d ∖ 0).
Similarly to the computation above, A^*_{T × 𝔾_m}(0) ≅ A^*_{T × 𝔾_m}(𝔸^d) ≅ ℤ[x_1,...,x_d,t], where x_1,...,x_d are given by the action of T and t is given by the action of 𝔾_m. Finally, the image of the first map in the localization sequence
A^*_{T × 𝔾_m}(0) → A^*_{T × 𝔾_m}(𝔸^d) → A^*_{T × 𝔾_m}(𝔸^d ∖ 0) → 0
is generated by P(t) := ∏_{i=1}^d (x_i + a_i t). Indeed the top Chern class of the T × 𝔾_m-equivariant bundle 𝔸^d splits along each component of 𝔸^d. On the i-th component of 𝔸^d the i-th component of T acts with weight 1 and the other components of T act with weight 0, while 𝔾_m acts with weight a_i.
Therefore A^*_T(𝒫(a_1,...,a_d)) and A^*_T(Bl_{a_1,...,a_d}𝔸^d) are both isomorphic to ℤ[x_1,...,x_d,t]/P(t).
Let f: [Bl_{a_1,...,a_d}𝔸^d/T] → [𝔸^d/T] be the blow-up of [0/T] in [𝔸^d/T] with weights a_1,...,a_d. Then
f^!(1) = (c_top^{𝔾_m}([𝔸^d/T])(t) − c_top^{𝔾_m}([𝔸^d/T])(0))/t.
By <cit.>
f^! must satisfy f^*i_*=j_*f^!, making the following diagram commute.
A^*(𝒫(a_1,...,a_d)) →(j_*) A^*(Bl_{a_1,...,a_d}𝔸^d)
↑ f^!(1)·g^*                                        ↑ f^*
A^*(0) →(i_*) A^*(𝔸^d)
Now i_*: A^*_T(0) → A^*_T(𝔸^d) is just multiplication by the top equivariant Chern class of the bundle 𝔸^d over 0; specifically i_*(α) = α · x_1 ⋯ x_d.
Similarly, j_* is multiplication by the top equivariant Chern class of the line bundle Bl_{a_1,...,a_d}𝔸^d over 𝒫(a_1,...,a_d), so we have that j_* is multiplication by −t.
Therefore we must have
f^* i_*(α) = α · x_1 ⋯ x_d = α · c_top^{𝔾_m}([𝔸^d/T])(0) = f^!(1) · α · (−t) = j_* f^!(α),
and since t is not a zero divisor in A^*_T(𝒫(a_1,...,a_d)), we must have
f^!(1) = −c_top^{𝔾_m}([𝔸^d/T])(0)/t = (c_top^{𝔾_m}([𝔸^d/T])(t) − c_top^{𝔾_m}([𝔸^d/T])(0))/t,
as needed.
§.§ A formula for the Gysin homomorphism
Let X, Y, X̃, Ỹ, f be as usual. Let us identify A^*(X̃) ≅ A^*(X)[t]/P(t) with P(t) = c_top^{𝔾_m}(𝒩_XY)(t).
Then we have the following formula for the Gysin homomorphism f^!: A^*(X) → A^*(X̃):
f^!(α) = ((P(t) − P(0))/t) · α.
With the presentation of A^*(X̃) above, the map g^* is the natural inclusion of A^*(X) in A^*(X)[t]/P(t) and, by lemma <ref>, we only need to show
f^!(1) = (P(t) − P(0))/t.
By theorem <ref> we can assume that Y is a weighted affine bundle over X. By the splitting principle in theorem <ref> it is enough to prove the equality for the pullback X”.
Since blow-ups commute with base change, the blow-up f'': Ỹ'' → Y'' sits in the commutative cube with faces
X̃'' →(ϕ̃) [𝒫(a_1,...,a_n)/T] over X'' →(ϕ) BT (back),
Ỹ'' → [Bl_{a_1,...,a_n}𝔸^n/T] over Y'' → [𝔸^n/T] (front),
which
induces a commutative diagram of Chow groups
A^*(X̃'') ←(ϕ̃^*) A^*([𝒫(a_1,...,a_n)/T]) over A^*(X'') ←(ϕ^*) A^*(BT) (back),
A^*(Ỹ'') ← A^*([Bl_{a_1,...,a_n}𝔸^n/T]) over A^*(Y'') ← A^*([𝔸^n/T]) (front),
with the Gysin maps (f'')^!: A^*(X'') → A^*(X̃'') and f^!: A^*(BT) → A^*([𝒫(a_1,...,a_n)/T]) among the vertical arrows.
Since equivariant Chern classes commute with pullbacks and Y'' is a vector bundle over X'', by theorem <ref> the following holds:
(f'')^!(1) = (f'')^!(ϕ^*(1)) = ϕ̃^*((c_top^{𝔾_m}([𝔸^n/T])(t) − c_top^{𝔾_m}([𝔸^n/T])(0))/t)
= (c_top^{𝔾_m}(ϕ̃^*[𝔸^n/T])(t) − c_top^{𝔾_m}(ϕ̃^*[𝔸^n/T])(0))/t = (c_top^{𝔾_m}(Y'')(t) − c_top^{𝔾_m}(Y'')(0))/t,
which is the desired difference quotient.
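As a concrete illustration, take X a point whose weighted normal bundle splits into trivial line bundles of weights a_1,…,a_n, as in the special case above. Then

P(t) = ∏_{i=1}^{n} (a_i t) = a_1 ⋯ a_n t^n, P(0) = 0, f^!(1) = (P(t) − P(0))/t = a_1 ⋯ a_n t^{n-1}.

In particular, for a single weight b (n = 1) the Gysin map is multiplication by b, matching the multiplication by 2 in the weight-2 example of the next section.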
§ THE CHOW RING OF A WEIGHTED BLOW-UP
In this section we generalize the key sequence in <cit.> and then use it to compute a formula for the Chow ring of a weighted blow-up.
§.§ The Grothendieck sequence
Let X, Y, X̃, Ỹ, f, i, j be as usual; then we have the following exact sequence:
A^*(X) → A^*(X̃) ⊕ A^*(Y) → A^*(Ỹ) → 0.
Further, if we use rational coefficients, then this becomes a split short exact sequence with g_* left inverse to (f^!, −i_*):
0 → A^*(X, ℚ) → A^*(X̃, ℚ) ⊕ A^*(Y, ℚ) → A^*(Ỹ, ℚ) → 0.
Exactness at A^*(X̃) ⊕ A^*(Y) is equivalent to having f^*∘ i_*=j_*∘ f^!, which is how we defined f^!.
To see j_*+f^* is surjective, consider the following commutative diagram where the top and bottom are exact
A^*(X̃) → A^*(Ỹ) → A^*(Ỹ ∖ X̃) → 0
A^*(X) → A^*(Y) → A^*(Y ∖ X) → 0,
with upward vertical maps A^*(X) → A^*(X̃), f^*: A^*(Y) → A^*(Ỹ), and the identification A^*(Y ∖ X) = A^*(Ỹ ∖ X̃).
Let α be any cycle in A^*(Ỹ), α̅ the restriction of α to A^*(Ỹ∖X̃), and γ∈ A^*(Y) any cycle that restricts to α̅ in A^*(Y∖ X). By commutativity, α-f^*(γ) restricts to 0 in A^*(Ỹ∖X̃) and must be in the image of j_*. So α is in the image of j_*+f^*.
Lastly, if we use rational coefficients then there is a left inverse of (f^!, −i_*) given by (α, β) ↦ g_*(α).
Indeed, let x ∈ A^*(X), then g_*(f^!(x))=g_*(δ g^*(x))=g_*(δ) x with δ the difference quotient as in theorem <ref>. Now δ is a degree n-1 polynomial in t, of which only the leading term a_1 … a_n t^n-1 will survive the pushforward. We only need to show that g_*(t^n-1)=1/a_1 … a_n.
It is enough to verify this when X is a point. Notice that a_i t = x_i, where the x_i are the usual coordinate divisors, so a_1 ⋯ a_{n-1} t^{n-1} = x_1 ⋯ x_{n-1}, which is the class of a Bμ_{a_n} and so pushes forward to 1/a_n; thus g_*(t^{n-1}) = 1/(a_1 ⋯ a_n).
To see why the sequence with integer coefficients is not short exact,
let us consider X an elliptic curve in Y = ℙ^2. Let Ỹ be the blow-up of Y at X with weight 2.
Let P, Q ∈ X be distinct points of order 2 and consider the difference [P] − [Q] ∈ A^*(X). When pushed forward to A^*(Y) via i_*, all points are rationally equivalent, hence i_*([P] − [Q]) = 0. On the other hand, f^! is multiplication by 2, and 2([P] − [Q]) = 0 in Pic(X) since P and Q have order 2, so f^!([P] − [Q]) = 0. But [P] − [Q] is nonzero, so (f^!, −i_*) is not injective.
§.§ The Chow ring of a weighted blow-up
[Chow ring of a weighted blow-up] If Ỹ → Y is a weighted blow-up of Y at a closed subvariety X, then the Chow ring A^*(Ỹ) is isomorphic as a group to the quotient
A^*(Ỹ)≅(A^*(X)[t])· t ⊕ A^*(Y)/( ((P(t)-P(0))α,-i_*(α)), ∀α∈ A^*(X))
with P(t)=c_top^_m(𝒩_XY)(t) and [X̃]=-t.
The multiplicative structure on A^*(Ỹ) is induced by the multiplicative structures on A^*(X) and A^*(Y) and by the pullback map in the following way
(0,β) · (t,0)=(i^*(β)t,0).
Equivalently A^*(Ỹ) can be expressed as a quotient of the fiber product
A^*(Y)×_A^*(X) A^*(X)[t]/((i_*α, P(t)α) ∀α∈ A^*(X))
with i^*: A^*(Y) → A^*(X) and A^*(X)[t] → A^*(X) given by evaluating t at 0.
The exact sequence in theorem <ref> gives us an isomorphism of groups
A^*(Ỹ)≅A^*(X̃) ⊕ A^*(Y)/((f^!(α),-i_*(α)), ∀α∈ A^*(X)).
If we use theorem <ref> to rewrite A^*(X̃) and also add an explicit factor of [X̃] to represent how A^*(X̃) is mapped into A^*(Ỹ), then as a group we can rewrite A^*(Ỹ) as
((A^*(X)[t])·[X̃]) ⊕ A^*(Y)/((c_top^_m(𝒩_XY)(t)[X̃],0), (f^!(α)[X̃],-i^*(α)) ∀α∈ A^*(X)).
Notice that t[X̃]=-[X̃]^2 (since t is the class of 𝒪_X̃(1)) so that [X̃] can be replaced by -t.
Now we need to determine the ring structure. Since much of the ring structure is inherited from that of A^*(Y) and A^*(X̃), what remains is just to determine how to multiply elements coming from A^*(Y) with those coming from A^*(X̃). Consider the usual commutative square
X̃[r] [d] Ỹ[d]
X [r] Y.
Intersecting some class β ∈ A^*(Y) with X̃ amounts to pulling it back to A^*(X̃). By commutativity we have g^*(i^*(β)) = j^*(f^*(β)), and by pushforward we obtain (0, β) · (t, 0) = (i^*(β)t, 0).
Finally, notice also that c_top^{𝔾_m}(𝒩_XY)(t)·[X̃] is now redundant, as for α = 1 we have
t·(f^!(1)·(−t) − i_*(1)) = t·(−(c_top^{𝔾_m}(𝒩_XY)(t) − c_top(𝒩_XY)) − i_*(1))
= t·(−c_top^{𝔾_m}(𝒩_XY)(t) + c_top(𝒩_XY) − i_*(1)) = (−t)·c_top^{𝔾_m}(𝒩_XY)(t),
where the last equality comes from t·i_*(1) = g^*(i^*(i_*(1)))·t = c_top(𝒩_XY)·t.
Putting everything together, we have that A^*(Ỹ) is the group
A^*(Ỹ)≅(A^*(X)[t])· t ⊕ A^*(Y)/( ((P(t)-P(0))α,-i_*(α)), ∀α∈ A^*(X))
with the desired multiplication.
If i^*:A^*(Y)→ A^*(X) is surjective, then this formula simplifies to resemble a formula of Keel <cit.>
A^*(Ỹ)≅A^*(Y)[t]/(t· ker(i^*), Q(t))
where Q(t) is any polynomial that restricts to c_top^{𝔾_m}(𝒩_XY)(t) on A^*(X) and has constant term equal to [X].
Let i^* be surjective. The fact that any α ∈ A^*(X) can only appear multiplied by t, combined with the terms t·(β − i^*(β)) for all β ∈ A^*(Y), means we can identify every α ∈ A^*(X) with any β such that i^*(β) = α.
This identification allows us to suppress A^*(X) from our presentation, and reduces the terms t·(β − i^*(β)) for all β ∈ A^*(Y) to t·ker(i^*).
Finally, t· f^!(α)+i_*(α)=(c_top^_m(𝒩_XY)(t)-c_top(𝒩_XY))α+i_*(α) and i_*(α)=i_*(i^*(β))=[X]·β for some β∈ A^*(Y).
So t· f^!+i_* is multiplication by (c_top^_m(𝒩_XY)(t)-c_top(𝒩_XY)+[X]) which is precisely the Q(t) desired.
§.§ An example: the Chow ring of ℳ̅_1,2
The Chow ring A^*(ℳ̅_1,2) has been computed in <cit.> and in <cit.>.
The latter uses the construction of ℳ̅_1,2 as the weighted blow-up of 𝒫(2,3,4). We give yet another computation of the ring, using the same blow-up construction.
We start by recalling the following:
<cit.>
There exists an isomorphism ℳ̅_1,2≅ Bl^(4,6)_Z𝒫(2,3,4), where Bl^(4,6)_Z𝒫(2,3,4) is the weighted blow-up of the point Z=[s^2:s^3:0] in 𝒫(2,3,4) with weights (4,6).
Given this, A^*(ℳ̅_1,2) becomes a straightforward computation,
<cit.>
A^*(ℳ̅_{1,2}) ≅ ℤ[y,t]/(ty, 24(t^2+y^2)).
First, since i: Z → 𝒫(2,3,4) is just the inclusion of a point, we have that i^* is surjective, and since A^*(𝒫(2,3,4)) ≅ ℤ[y]/(24y^3) we know the kernel is (y).
By corollary <ref> we then have
A^*(ℳ̅_1,2)≅A^*(𝒫(2,3,4))[t]/(ty,Q(t))≅ℤ[y,t]/(24y^3,ty,Q(t)).
where Q(t) restricts to c_top^{𝔾_m}(𝒩_ZY) and has constant term [Z].
As 𝒩_ZY splits into trivial line bundles, we see c_top^_m(𝒩_ZY)=(4t)(6t).
Writing Z=V(x_3)∩ V(x_1^3-x_2^2) we see [Z]=(4y)(6y), so Q(t)=24t^2+24y^2.
Lastly, the term 24y^3 is now redundant and we have
A^*(ℳ̅_{1,2}) ≅ ℤ[y,t]/(ty, 24(t^2+y^2)).
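The redundancy of 24y^3 can also be checked symbolically; a minimal sketch, assuming SymPy is available, exhibiting integral coefficients for the ideal membership:

import sympy as sp

y, t = sp.symbols('y t')
Q = 24*(t**2 + y**2)                      # the relation Q(t)
# 24*y**3 lies in the ideal (t*y, Q): indeed y*Q - 24*t*(t*y) = 24*y**3
assert sp.expand(y*Q - 24*t*(t*y) - 24*y**3) == 0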
§ GENERALIZATION TO QUOTIENT STACKS
Let us now consider the case of 𝒴 = [Y/G], where Y is an algebraic space and G is a linear algebraic group, hence it is possible to define the G-equivariant Chow ring A^*_G(Y) as in <cit.>.
A weighted embedding of 𝒳 in 𝒴 defines a weighted embedding of X in Y via pullback and, since the quotient maps are smooth, we have 𝒴̃ ≅ [Ỹ/G] and 𝒳̃ ≅ [X̃/G].
The theorems in sections 3, 5, 6 hold for f: 𝒴̃ → 𝒴 the weighted blow-up of 𝒴 at 𝒳.
Let us prove that A^*(𝒳̃) ≅ A^*(𝒳)[t]/P(t) as in theorem <ref>; the proof of the other results will be almost identical.
For any p, let U be as in definition <ref>, of dimension high enough that A^q(X_U) ≅ A^q_G(X) ≅ A^q(𝒳) with X_U := X × U/G, up to degree p.
Since X_U is an algebraic space, by theorem <ref>
A^*(X̃_U) ≅ A^*(X_U)[t]/P_U(t).
Note that P_U(t) = c_top^{𝔾_m}(𝒩_{X_U}Y_U) is the pullback of P(t) = c_top^{𝔾_m}(𝒩_𝒳𝒴), which is a polynomial of finite degree. In particular, for large enough p, P_U(t) does not depend on U and is exactly P(t).
Since X̃_U is an open inside a vector bundle, we have isomorphisms A^q(X̃_U) ≅ A^q(𝒳̃) up to degree p.
Since <ref> holds up to degree p for every p, we have the desired isomorphism of Chow rings.
§ CHERN CLASS OF WEIGHTED BLOW-UP
by Dan Abramovich, Veronica Arena, and Stephen Obinna
§.§ The goal
Consider a smooth subvariety X of a smooth variety Y, with blowup Ỹ and exceptional divisor X̃, as in the following standard diagram:
X̃[r,"j"][d,"g"] Ỹ[d,"f"]
X [r,"i"] Y.
In <cit.> Fulton provides a formula for the total Chern class c(Ỹ):= c(T_Ỹ) in terms of the blowup data. The purpose of this note is to revisit that formula and generalize it to the case of a weighted blowup. Since smoothness is important in these considerations, our weighted blowups are always stack-theoretic.
§.§ Setup and formula
In our setup, X and Y are still smooth varieties, and X is the support of a weighted center with weighted normal bundle N of rank d = codim(X ⊂ Y). The weighted normal bundle is a weighted affine bundle with total Chern class we denote by c(N) ∈ A^*(X) and total 𝔾_m-equivariant Chern class c^{𝔾_m}(N) = Q(t) ∈ A^*_{𝔾_m}(X) = A^*(X)[t], where t is the equivariant parameter corresponding to the standard character of 𝔾_m. In particular Q(0) = c(N).
We recall from theorem <ref> in the main text that
A^*(Ỹ) = (A^*(Y) ⊕ t A^*(X̃) )/ I,
where
I = (i_*(α) ⊕ -(Q(t)-Q(0))α| α∈ A^*(X)).
We denote by
q: A^*(Y) ⊕ t A^*(X)[t] → (A^*(Y) ⊕ t A^*(X̃))/I = A^*(Ỹ)
the natural quotient map.
We have
c(Ỹ)/ f^* c(Y) = q((1-t) Q(t)/Q(0)).
We note that the right-hand side is of the form q(1 ⊕ t·R(t)), with some R(t) ∈ A^*(X)[t].
The formula was proved for Chow groups with rational coefficients by Anca and Andrei Mustaţa, see <cit.>. Our proof in essence verifies that their arguments carry over integrally.
§.§ Approach
Our approach combines the equivariant methods used in the main text to study and compute Chow rings of weighted projective stack bundles and weighted blowups, combined with ideas in Aluffi's paper and lecture <cit.>, especially the user-friendly presentation of the formula in Aluffi's lecture. While Aluffi provides a proof only for complete intersections, the methods of theorem <ref> allow us to reduce the general case to a situation where Aluffi's proof applies.
§.§.§ The quotient class
One first notes that the class c(Ỹ)/ f^* c(Y) appearing on the left-hand side has properties enabling flexible treatment:
The class c(Ỹ)/f^*c(Y) is of the form q(1 ⊕ t·R(t)), with some R(t) ∈ A^*(X)[t], and is functorial for smooth morphisms Y' → Y and closed embeddings Y' → Y that meet X transversely.
To see that it has this form, consider the localization sequence:
A^*(X̃)→ A^*(Ỹ)→ A^*(Ỹ∖X̃)→0
Since c(Ỹ) and f^*c(Y) pull back to the same class on A^*(Ỹ ∖ X̃), their ratio pulls back to one. In particular we see that c(Ỹ)/f^*c(Y) − 1 must be in the image of A^*(X̃), which means c(Ỹ)/f^*c(Y) is of the desired form.
To see functoriality, consider the diagram
Ỹ' →(h̃) Ỹ
↓ f'         ↓ f
Y' →(h) Y
We must show c(Ỹ')/f'^*c(Y') = h̃^*c(Ỹ)/f^*c(Y), but this is equivalent to c(Ỹ')/h̃^*c(Ỹ) = f'^*c(Y')/h^*c(Y). This is true when h is smooth because the relative tangent bundle of h is compatible with pullback under f, and true when h is a closed embedding since the normal bundle of h is compatible with pullback under f.
§.§.§ Degeneration to the weighted normal bundle
Applying the lemma to the degeneration to the weighted normal bundle we obtain
It suffices to prove the theorem, namely to compute R(t) and c(Ỹ)/ f^* c(Y), when Y = 𝒩_XY.
Recall the diagram from theorem <ref>
X̃ × 𝔸^1 → Z × 𝔸^1 → M̃^o over X × 𝔸^1 → M^o (back face, with f_M: M̃^o → M^o),
X̃ → Z → M̃_t over X → M_t (front face, mapping to the back face via the fiber inclusion over t),
where M_t ≅ Y for t ≠ 0 and M_0 = 𝒩_XY.
By the previous lemma, the expression c(M̃_t)/ f^* c(M_t) can be pulled back from M̃^o along the embedding corresponding to t and is determined by a class on X̃. However, neither X̃ nor M̃^o depend on t so it is enough to compute things when t=0, that is for 𝒩_XY.
§.§.§ The universal case
It suffices to prove the theorem when X = BG_{𝐚,𝐧} and Y = 𝒱_G.
By theorem <ref>, the homomorphism A^*(BG_{𝐚,𝐧}) → A^*(BT) is injective. Therefore:
It suffices to prove the theorem when X = BT and Y = [V/T]. Equivalently, it suffices to prove the theorem T-equivariantly when X is a point, the origin in Y = 𝔸^d.
This follows from functoriality and theorem <ref>, since the maps X → BG_{𝐚,𝐧} and BT → BG_{𝐚,𝐧} are smooth.
§.§.§ The toric case of affine space
Finally, let X be the origin of Y = 𝔸^d.
Let A^*_T(0) ≅ A^*_T(𝔸^d) ≅ℤ [x_1, … x_d] and A^*_T(𝒫(a_1,… a_d)) ≅ A^*_T(Bl_(a_1, … a_d)𝔸^d) ≅ℤ [x_1, … x_d, t]/(∏ (x_i+a_it)) as in section <ref>.
We have c^T(Y) = Q(0) and c^T(Ỹ) = q((1-t) Q(t)).
This is essentially the same argument as <cit.>.
Let D=ΣX̃_i +X̃ be the sum of all the irreducible toric divisors on Ỹ. By repeating the argument in <cit.>, we have the exact sequence
0 → Ω^1_Ỹ → Ω^1_Ỹ(log D) → (⊕_{i=1}^d 𝒪_{X̃_i}) ⊕ 𝒪_X̃ → 0
and Ω^1_Ỹ(log D) is trivial. By Whitney's formula,
c^T(Ω^1_Ỹ) = 1/(c^T(𝒪_X̃)·c^T(⊕ 𝒪_{X̃_i})) = (1+t) ∏_i (1 − a_i t − x_i).
By taking the dual we obtain
c^T(TỸ) = (1−t) ∏_i (1 + a_i t + x_i) = (1−t)Q(t).
The same argument works to prove c^T(Y)=Q(0).
The theorem follows.
|
http://arxiv.org/abs/2307.01299v1
|
20230703190904
|
The Evolution of Substance Use Coverage in the Philadelphia Inquirer
|
[
"Layla Bouzoubaa",
"Ramtin Ehsani",
"Preetha Chatterjee",
"Rezvaneh Rezapour"
] |
cs.CL
|
[
"cs.CL"
] |
The media's representation of illicit substance use can lead to harmful stereotypes and stigmatization for individuals struggling with addiction, ultimately influencing public perception, policy, and public health outcomes. To explore how the discourse and coverage of illicit drug use changed over time, this study analyzes 157,476 articles published in the Philadelphia Inquirer over a decade. Specifically, the study focuses on articles that mentioned at least one commonly abused substance, resulting in a sample of 3,903 articles. Our analysis shows that cannabis and narcotics are the most frequently discussed classes of drugs. Hallucinogenic drugs are portrayed more positively than other categories, whereas narcotics are portrayed the most negatively. Our research aims to highlight the need for accurate and inclusive portrayals of substance use and addiction in the media.
§ INTRODUCTION
The portrayal of illicit substance use in the media has long been a topic of concern, given the potential impact it can have on public perception and policy. Previous studies highlighted the harmful effects of stigmatizing coverage on individuals struggling with substance use disorder (SUD) and the potential for biased and inaccurate media coverage to perpetuate harmful stereotypes and stigma <cit.>. Moreover, media coverage often prioritizes criminal justice actions over critical health concerns associated with substance use <cit.>. This bias has been further observed in the case of designer drugs such as “bath salts,” which were portrayed as an “epidemic” with harmful effects while neglecting relevant clinical research and mental health issues associated with them <cit.>. As a result, health legislation was inadequately informed, and underlying issues related to substance use are not addressed.
Given the potential impact of media coverage, it is essential to carefully examine how the media frames discussions about illicit drug use and how that framing changes over time. In this study, we examine news articles from the Philadelphia Inquirer to understand changes in media coverage of illicit substance use over time. Philadelphia was chosen because it has a high rate of overdose deaths (40 per 100K) and is home to a significant open-air drug market <cit.>. Analyzing local media coverage of SUD can reveal insights on its portrayal, identify reporting gaps, and inform policymaking and public education. This type of study can also track the policy process and evolving relationships between actors, like advocacy groups or healthcare professionals, over time <cit.>.
More specifically, we analyze the portrayal of nine main drug classes, including stimulants, narcotics, cannabis, hallucinogens, depressants, designer drugs, drugs of concern, treatment, and miscellaneous over a period of ten years and across 157,476 published articles. The study then focuses on approximately 3,903 articles that mention at least one commonly abused substance and applies various natural language processing (NLP) techniques to extract drug-related themes over time. We aim to answer the following research questions:
- RQ1: How do the occurrences of various drug classes change over time, and in what contexts are different substances co-mentioned?
We use a list of commonly abused drugs, map them to drug classes, extract articles with at least one mention of drugs, and use TF-IDF to find salient words in the subsets of articles discussing co-occurring drug classes.
- RQ2: How has media coverage of illicit substance use evolved over time?
Dynamic topic modeling and sentiment analysis are employed to capture the shift of prevalent themes and tone with respect to drug classes, over time.
Our analysis indicates a shift in the discussion of drug-related topics, with cannabis being the most frequently discussed drug class, followed by narcotics. Articles on narcotics and treatment drugs frequently co-occur, focusing primarily on criminality, legislature, and overdose. News articles discussing cannabis, hallucinogens, and treatment drugs tend to be positive while those about narcotics, stimulants, and depressants are negative.
The insights gained from this work can inform better public health messaging, affect drug policy decisions made by lawmakers, and enable medical professionals to create more targeted patient education and awareness programs while promoting the principles of harm reduction.
§ METHODOLOGY
Data Collection
We utilize ProQuest for data collection and accessing news articles. ProQuest is an online platform consisting of thousands of databases that provide access to a diverse set of publications, including journals and newspapers.
In order to collect data, it was necessary to log in to the ProQuest database using our individualized credentials, after which we were able to extract/scrape the data.
We collect a total of 157,476 news articles from the Philadelphia Inquirer, covering the period from January 1, 2013, to December 31, 2022.
It's important to note that our dataset may not represent the complete set of articles published by the Philadelphia Inquirer, as ProQuest's database may not be comprehensive.
Our data comprises the complete text of news articles, alongside several metadata attributes such as date, author, title, links, and subject keywords.[The meta-data of news articles, list of drugs, and code can be found on <https://github.com/social-nlp-lab/drugs_in_inquirer>]
Drug Name Extraction
We utilize the categorization provided by the National Institute on Drug Abuse (NIDA), which comprises 28 drug categories, each with its corresponding commercial names. To minimize irrelevant articles, we manually verified drug names and removed ambiguous terms with multiple meanings, e.g., the term “pot” could refer to “flowering pot” or “cooking pot” in addition to the drug “marijuana”.
We then assign each drug to one of the nine classes established by the Drug Enforcement Administration (DEA) <cit.>. These classes include:
* Cannabis (C): marijuana is a mind-altering (psychoactive) drug, produced by the cannabis sativa plant.“Cannabis” and “marijuana” are often used interchangeably, resulting in the appearance of cannabis as both the drug class and the drug name.
* Depressants (D): known to induce sleep, relieve anxiety and muscle spasms, and prevent seizures, e.g., barbiturates, and sedative-hypnotic substances like GHB.
* Designer Drugs (DD): produced illicitly with a slightly altered chemical structure to mimic the pharmacological effects of controlled substances, e.g., synthetic marijuana or synthetic cathinones.
* Drugs of Concern (DC): unregulated drugs that can be harmful if abused, e.g., kratom and xylazine.
* Hallucinogens (H): derived from plants and fungi and renowned for their capacity to modify human perception and mood, e.g., LSD, mushrooms, and ecstasy.
* Narcotics (N): refers to opium, opium derivatives, and their semi-synthetic substitutes. “Opioid” is a more current and precise term to describe these drugs, e.g., heroin, OxyContin, codeine, morphine, and fentanyl.
* Stimulants (S): drugs accelerating body's functions, e.g., methamphetamine, cocaine, and amphetamines.
* Treatment (T): substances aiding the treatment of opioid addiction, e.g., methadone, Suboxone, and naloxone.
* Miscellaneous (M): substances that can be abused but don't belong to any classes, e.g., steroids.
Using our mapped list of drug names, we identify news articles that mention any of these drugs in either their title or body. Subsequently, we group these articles into clusters based on the respective drug classes.
If an article mentions drugs from multiple classes, we assign it to all relevant drug classes. The number of articles per drug class and year is shown in Table <ref>. Out of a total of 157,476 articles analyzed, 3,903 referenced at least one drug class within their text.
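A minimal sketch of this filtering step might look as follows (an illustration only: the lexicon entries, file name, and column names are hypothetical, and the authors' released code may differ):

import re
import pandas as pd

# Toy lexicon mapping drug names to DEA-style classes (illustrative entries only)
DRUG_CLASSES = {"marijuana": "C", "heroin": "N", "fentanyl": "N",
                "cocaine": "S", "naloxone": "T", "xanax": "D"}
pattern = re.compile(r"\b(" + "|".join(DRUG_CLASSES) + r")\b", re.IGNORECASE)

def classes_mentioned(text):
    """Return the set of drug classes mentioned in a piece of text."""
    return {DRUG_CLASSES[m.lower()] for m in pattern.findall(text)}

articles = pd.read_csv("inquirer_articles.csv")  # hypothetical dump: title, text, date
full_text = articles["title"].fillna("") + " " + articles["text"].fillna("")
articles["classes"] = full_text.map(classes_mentioned)
subset = articles[articles["classes"].map(len) > 0]  # articles mentioning >= 1 drug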
In addition to assessing the frequency of drugs, we also examine co-occurrences of drugs in the same articles each year.
To better understand these co-occurrences and the surrounding contexts, we extract the most significant words based on their TF-IDF scores from a subset of articles where drugs most frequently co-occurred.
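The salient-word extraction can be sketched with scikit-learn's TfidfVectorizer (a hedged illustration; the two toy documents below are invented):

from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for articles where narcotics and treatment drugs co-occur
co_docs = [
    "city council debates overdose legislation and naloxone access",
    "police report heroin arrests as methadone clinic expands",
]
vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
tfidf = vectorizer.fit_transform(co_docs)
scores = tfidf.mean(axis=0).A1                   # mean TF-IDF score per term
terms = vectorizer.get_feature_names_out()
top_terms = sorted(zip(terms, scores), key=lambda x: -x[1])[:10]
print(top_terms)  # surfaces themes such as criminality, legislature, overdose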
Topic Modeling
To investigate the evolution of drug-related articles, we employ dynamic topic modeling <cit.> to understand the distribution of topics over time. We use BERTopic as it can preserve the semantic structure of texts <cit.>.
Since BERTopic operates on a bag-of-words representation of the text <cit.>, we preprocess the text by converting it to lowercase, removing URLs, extending contractions, and eliminating stopwords. This minimizes noise and dimensionality, allowing our models to recognize important patterns and topics.
It is important to note that BERT has a token limit of 512, while the median length of news articles in our data is 749 tokens with a standard deviation of 573. News articles' opening paragraphs contain the most important information; therefore, we assume the first 512 tokens of news articles in our data represent the key points.
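A dynamic-topic-modeling sketch along these lines (ours; `subset` is the filtered frame from the sketch above, and the parameters are illustrative rather than the paper's exact settings):

from bertopic import BERTopic

# Keep article openings to respect BERT's 512-token limit (rough character cut)
docs = subset["text"].str.slice(0, 2000).tolist()
timestamps = subset["date"].tolist()

topic_model = BERTopic(nr_topics=20)             # iteratively reduce topic count
topics, probs = topic_model.fit_transform(docs)
topics_over_time = topic_model.topics_over_time(docs, timestamps, nr_bins=20)
print(topic_model.get_topic_info().head())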
Tone Detection
VADER <cit.>, a validated rule-based model for sentiment analysis, is used to analyze shifts in the tone of news articles that center on substance use. We calculate the average sentiment scores of articles in six-month periods using VADER's compound score, allowing us to monitor changes in sentiment over time.
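The tone-tracking step, sketched with VADER and pandas (the two example headlines are invented for illustration):

import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
df = pd.DataFrame({
    "date": pd.to_datetime(["2013-03-01", "2013-09-15"]),
    "text": ["naloxone program praised by city health officials",
             "fentanyl overdoses rise sharply across the city"],
})
df["compound"] = df["text"].map(lambda s: analyzer.polarity_scores(s)["compound"])
# Average compound sentiment over six-month windows
six_month_tone = df.set_index("date")["compound"].resample("6MS").mean()
print(six_month_tone)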
§ RESULTS
How do the occurrences of various drug classes change over time, and in what contexts are different substances co-mentioned?
The results show a significant shift in the discussion of illicit substance use over the past decade.
As shown in Table <ref>, cannabis is the most frequently discussed drug class, followed by narcotics. The annual count of news articles indicates that 2018 saw the highest frequency of drug-related mentions in news coverage. In 2018, several significant events occurred, including a former player for the Philadelphia Eagles admitting to recreational cannabis use during his career <cit.>, Philadelphia Mayor Jim Kenney advocating for the legalization of recreational cannabis use <cit.>, and the City of Philadelphia hosting its first-ever Cannabis Opportunity Conference <cit.>. The frequency of cannabis in the news doesn't necessarily imply that it is used more than other substances, as it could be affected by the prevailing social and political environment. Our findings suggest that the majority of articles mentioning cannabis are centered around the legality of its use.
Narcotics (i.e., opioids) are the second most discussed drug class, and also the most prevalent in 2017 and 2018, indicating the severity of discussions and possibly the opioid crisis during those years. Philadelphia experienced some of the highest rates of unintentional overdose in those years, and the synthetic opiate fentanyl became increasingly fatal <cit.>.
As shown in Table <ref>, designer drugs and drugs of concern are mentioned in only 16 and 11 articles, respectively. Due to the sparse data on these topics in our corpus, we exclude them from further analysis.
Further analysis of drug counts per class shows that cannabis is primarily associated with marijuana (Figure <ref>).
Despite the controversy surrounding the use of the term “marijuana” <cit.>, there has been a consistent and steady usage of this word in news articles over the years.
In contrast, depressant drugs like Xanax and Ambien are more frequently mentioned together, likely due to their widespread use for anxiety and sleep disorders and their potential for abuse and addiction. Similarly, cocaine and acid are the most frequently mentioned drugs in the stimulant and hallucinogen drug classes, respectively, indicating their enduring popularity and widespread use.
It is worth noting that our temporal analysis of drug mentions uncovers patterns in drug discussion over time, which can be influenced by a variety of factors, such as changes in drug policies and cultural shifts in attitudes towards various elements of drug use, like policy, treatment, and research.
The co-occurrence analysis (Table <ref>) revealed that certain substances are frequently discussed together, indicating a connection between their use, among other possible associations.
Heroin was once the most discussed substance in Philadelphia until it was surpassed by fentanyl in 2018. This is in line with reports of adulteration and an increase in unintentional overdose deaths containing fentanyl in Philadelphia <cit.>. Stimulants, like cocaine, and cannabis are commonly observed to co-occur with narcotics, and there is growing concern among the public and policymakers over an increasing number of unintentional overdose deaths involving stimulants. Marijuana and heroin are frequently mentioned together in discussions around legalization and concerns about adulteration from fentanyl. Our analysis also shows that narcotics and treatment drugs, including methadone, naloxone, and buprenorphine, are the second most commonly co-located drugs in articles. These articles primarily focus on criminality, legislature, and overdose, as determined by our TF-IDF analysis.
There is growing concern over the co-occurrence of stimulants, like cocaine, with narcotics, like heroin, due to the rising number of fatalities associated with their use <cit.>.
The Philadelphia Inquirer has been particularly active in discussing the “opioid epidemic,” with a focus on treatment. The use of treatment drugs like methadone and naloxone is seen as a positive and proactive response to drug addiction <cit.>. Buprenorphine, another popular substance used to treat opioid use disorder, gained attention in 2022 after President Biden eliminated the restriction on the number of prescriptions providers could issue per month, making it more accessible compared to methadone <cit.>.
How has media coverage of illicit substance use evolved over time?
Utilizing dynamic topic modeling with BERTopic, we were able to generate a diverse set of topics covering various themes. The review of the initial results revealed a number of overlapping topics. Therefore, we iteratively refined the model by decreasing the number of topics until we achieved a high-quality output. Figure <ref> presents temporal changes of topic clusters in our data. The result shows that cannabis-related topics are the most prevalent in our dataset, with a specific focus on legislation. We also observe a significant number of news articles discussing drug use in films and television series, indicating the possible role of popular media in shaping public perceptions and attitudes toward drugs and social issues <cit.>.
The sentiment analysis of news articles demonstrates that the tone of news articles varies across drug classes and evolves over time (Figure <ref>).
News articles with hallucinogenic drugs tend to have a more positive tone compared to other classes.
The positive tone may be attributed to recent studies on the use of psychedelic drugs for the treatment of mental health disorders such as treatment-resistant depression and post-traumatic stress disorder. For instance, we identified an article published in the Philadelphia Inquirer discussing the use of LSD and shrooms to treat PTSD <cit.>. This article reflects the positive sentiment toward the therapeutic potential of hallucinogenic drugs.
Furthermore, the majority of news articles mentioning cannabis, hallucinogens, and treatment drugs exhibit positive sentiments, whereas news articles about narcotics, stimulants, and depressants depict negative sentiments. The sentiment scores assigned to each article range from -1 (most negative) to +1 (most positive), and neutral sentiments are rarely observed in our dataset.
Overall, our analysis indicates a shift in the discussion surrounding drug-related topics, especially hallucinogens, towards a more positive view of medicated-assisted treatment and novel applications of these substances. This emphasizes the importance of reporting drug-related incidents using harm reduction principles to encourage safer and managed use.
Further research is necessary to fully understand the context and implications of these findings. It is also important to acknowledge that our analysis only captures a subset of the wider discussion surrounding these drugs and that additional studies may reveal further insights.
§ CONCLUSION AND FUTURE WORK
Over the past decade, there has been an evolution in how the media portrays illicit substance use, as our study has revealed. We analyzed news articles published in the Philadelphia Inquirer between 2013 and 2022 and found that cannabis was the most frequently discussed drug class, followed by narcotics. News articles about hallucinogenic drugs tend to have a more positive tone compared to other categories of drugs, while articles on narcotics were the most negative. By examining changes in the tone and frequency of drug-related discussions over time, we can gain a better understanding of how societal attitudes towards drugs and drug policies are evolving, and how this may impact public health and well-being. It is important to note that our study focused solely on news articles published in the Philadelphia Inquirer, and therefore our findings are specific to this source and its coverage of substance use within Philadelphia and the United States. Further research is needed to explore how the portrayal of substance use in the media varies across different regions and cultures.
|
http://arxiv.org/abs/2307.01602v1
|
20230704094115
|
Coping with seasons: evolutionary dynamics of gene networks in a changing environment
|
[
"Csenge Petak",
"Lapo Frati",
"Melissa H. Pespeni",
"Nick Cheney"
] |
q-bio.PE
|
[
"q-bio.PE"
] |
Both authors contributed equally to this research.
cpetak@uvm.edu
University of Vermont
Burlington
VT
USA
[1]
lfrati@uvm.edu
University of Vermont
P.O. Box 1212
Burlington
VT
USA
mpespeni@uvm.edu
University of Vermont
Burlington
VT
USA
ncheney@uvm.edu
University of Vermont
Burlington
VT
USA
In environments that vary frequently and unpredictably, bet-hedgers can overtake the population. Diversifying bet-hedgers have a diverse set of offspring so that, no matter the conditions they find themselves in, at least some offspring will have high fitness. In contrast, conservative bet-hedgers have a set of offspring that all have an in-between phenotype compared to the specialists. Here, we use an evolutionary algorithm of gene regulatory networks to de novo evolve the two strategies and investigate their relative success in different parameter settings. We found that diversifying bet-hedgers almost always evolved first, but then eventually got outcompeted by conservative bet-hedgers. We argue that even though similar selection pressures apply to the two bet-hedger strategies, conservative bet-hedgers could win due to the robustness of their evolved networks, in contrast to the sensitive networks of the diversifying bet-hedgers. These results reveal an unexplored aspect of the evolution of bet-hedging that could shed more light on the principles of biological adaptation in variable environmental conditions.
Applied computing Life and medical sciences Computational biology Biological networks
Coping with seasons: evolutionary dynamics of gene networks in a changing environment
Nick Cheney
=====================================================================================
§ INTRODUCTION
No environment ever stays the same. What do you do when you can’t predict what happens next? You hedge your bets to maximize your long-term survival. As it turns out, this is exactly what populations in nature evolve to do as well.
Environmental variability comes in all shapes and sizes. Depending on whether the environment changes during the lifetime of an individual or once every thousand generations, and whether the changes are predictable and the cues are accurate, populations can adapt to these changes in the environment using different strategies <cit.>. If the environmental change is infrequent, the population likely goes through a process called adaptive tracking, during which the population continuously adapts. Specialists (i.e., individuals only fit in one of the recurring environments) for one environment get outcompeted by specialists for the other environment every time the environment changes. On the other hand, if the change is frequent and there is a reliable environmental cue, phenotypic plasticity can evolve, which is the expression of an alternative phenotype in response to an environmental cue. However, in many cases, there is no reliable cue to signal the environmental change to the individual. In these cases, a strategy called bet-hedging can evolve <cit.>.
A biological bet-hedging strategy is one that results in a decreased variance among the fitness of the offspring across all possible environmental conditions compared to the specialists. There are two main ways to achieve this: 1) diversifying bet-hedgers (BHs) increase phenotypic diversity, resulting in different phenotypes among the offspring that are fit in different environmental conditions; 2) conservative BHs adopt a generalist phenotype that is somewhat fit in all environments <cit.>. Thus, when the environment is stable, bet-hedgers are selected against. However, when the environment switches, BH strategies have an advantage over the specialist. Thus, over many environmental switches, BHs can rise in frequency in the population <cit.>.
Examples of bet-hedging strategies in nature span 16 phyla across more than 100 studies <cit.>. One of the most common examples is the timing of insect diapause. Many species of insects grow exponentially during a growing season but occasionally produce an overwintering alternative phenotype (cold resistant, with arrested growth and reproduction) so as to ensure survival when the environment suddenly turns cold <cit.>. Other examples include galactose metabolism in yeast <cit.>, persistence to antibiotics <cit.>, and even cancer cells <cit.>. There has been much theoretical work using mathematical and agent-based models to understand the conditions and scenarios in which the different kinds of bet-hedging strategies could evolve. The general consensus among these studies is that while the two bet-hedging strategies are fundamentally similar in how and when they evolve <cit.>, a higher frequency of environmental change favors the conservative <cit.>, and stronger selection pressure favors the diversifying BH strategy <cit.>. However, these models didn't include a complex genotype-to-phenotype mapping function, and thus the strategies weren't evolved from scratch. Instead, the probability of producing an alternative phenotype like a diversifying BH was part of the genotype as an explicit evolvable variable <cit.>.
The phenotypes of biological organisms are determined from their genotypes through a complex nonlinear mapping. Models of gene regulatory networks (GRNs, where nodes represent genes and edges represent activating or repressing directional regulatory interactions) are commonly used as conceptual proxies to genotype-to-phenotype mapping functions. Since the structure of GRNs are evolvable and shape the kind of phenotypic variation that is available for natural selection, they are often used in studies investigating the evolution of robustness and evolvability <cit.>.
In this study, instead of using traditional mathematical models, we used an evolutionary algorithm to model the evolution of GRNs to investigate the emergence and success of diversifying and conservative BH strategies under different conditions. This approach allowed us to find these strategies without biasing or limiting our model; the strategies evolved without an explicit incentive through the evolution of different network structures, which led to some unexpected results.
§ METHODS
§.§ Genotypes and phenotypes
In most computational models of GRNs, regulatory interactions between genes are simulated by an adjacency matrix W of size N × N, representing a weighted, directed graph. The expression levels of the genes making up the phenotype are then calculated through the iterative multiplication of W by a vector of gene expression levels $\vec{p}$ with a non-linear transformation. In our model, the initial vector $\vec{p}$ (representing gene products coming from the parent, i.e., maternal factors) was a one-hot vector of length N = 50 in all experiments. In order to generate a phenotype for each individual, this fixed input vector was iteratively multiplied 100 times by their individual matrices as such:

\vec{p}_{t+1} = \sigma(W \vec{p}_t)

\sigma(x) = \frac{1}{1 + e^{-10x}}

where $W\vec{p}$ describes the "strength" of interaction between genes and σ is a sigmoid function. The value of $\vec{p}$ is bounded between 0 and 1. The individuals' phenotype was the value of the gene expression levels $\vec{p}$ after this iterative process.
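As a concrete illustration, the genotype-to-phenotype mapping fits in a few lines of Python. This is a minimal sketch under the stated parameters (N = 50, 100 iterations, slope 10 in the sigmoid); which gene carries the one-hot maternal input is not specified in the text, so placing it at index 0 is an assumption.

import numpy as np

def sigmoid(x):
    # steep nonlinearity: sigma(x) = 1 / (1 + exp(-10x)), bounding values in (0, 1)
    return 1.0 / (1.0 + np.exp(-10.0 * x))

def phenotype(W, n_steps=100):
    # W: (N, N) adjacency matrix of the GRN
    N = W.shape[0]
    p = np.zeros(N)
    p[0] = 1.0                    # one-hot "maternal factor" vector (index assumed)
    for _ in range(n_steps):
        p = sigmoid(W @ p)        # p_{t+1} = sigma(W p_t)
    return p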
§.§ The evolutionary algorithm
At the beginning of each experiment, a population of 1000 haploid, asexual individuals was generated along with the two target vectors $\vec{A}$ and $\vec{B}$. $\vec{A}$ was a series of N/2 1s followed by N/2 0s, and $\vec{B}$ was $1 - \vec{A}$; see Fig <ref>B, first and third example phenotypes. Each experiment started with season A, during which the fitness of the individuals was calculated based on their distance from $\vec{A}$:

f_A(\vec{p}) = 1 - \frac{\sum_{i}^{N} |\vec{p}_i - \vec{A}_i|}{N}

Every G generations (season length) the target changed from $\vec{A}$ to $\vec{B}$ or from $\vec{B}$ to $\vec{A}$. After each individual's phenotype and fitness was calculated, they were sorted by fitness, and the top μ individuals were selected to survive to the next generation and generate (popsize/μ)-1 offspring each to keep a constant population size (μ + λ Evolution Strategy). Offspring were mutated at N·m positions by adding a random value drawn from a normal distribution 𝒩(0, 0.5).
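A sketch of one generation of this (μ + λ) scheme is given below. We read "N·m positions" as N·m entries of the W matrix drawn uniformly (possibly with repeats), and take 0.5 to be the standard deviation of the mutation distribution; neither reading is forced by the text.

import numpy as np

def next_generation(pop, fitnesses, mu, m, rng):
    # pop: (popsize, N, N) array of GRN genotypes; truncation (mu + lambda) ES
    popsize, N, _ = pop.shape
    parents = pop[np.argsort(fitnesses)[::-1][:mu]]
    children = [parents]                          # survivors carry over unchanged
    n_mut = max(1, int(N * m))                    # N*m mutated entries per offspring
    for parent in parents:
        for _ in range(popsize // mu - 1):        # (popsize/mu) - 1 offspring each
            child = parent.copy()
            rows = rng.integers(0, N, n_mut)
            cols = rng.integers(0, N, n_mut)
            child[rows, cols] += rng.normal(0.0, 0.5, n_mut)   # ~ N(0, 0.5)
            children.append(child[None])
    return np.concatenate(children, axis=0)

With popsize = 1000 and μ = 100, each of the 100 parents produces 9 offspring, keeping the population size constant.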
§.§ Experiment
Experiments were run for 75 environmental switches for 6 different season lengths (G): 20, 50, 100, 300, 400, 500, and 3 mutation rates (m) and truncation sizes (μ/pop size): 0.05, 0.1, 0.2 for season lengths 50, 300 and 500. Each combination of parameters was repeated 10 times.
In order to calculate the mutational robustness of an evolved diversifying and conservative BH network, we mutated and evaluated the networks cumulatively 20 times. At each step, we quantified how much of a specialist, diversifying, and conservative BH they were based on the following coefficients: the diversifying BH coefficient was the standard deviation of offspring fitnesses given one of the targets, the conservative BH coefficient was the proportion of genes that are half expressed, and the specialist coefficient was the maximum between the average fitness of the offspring calculated for each of the two environments.
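The three coefficients can be computed from the phenotypes of one parent's mutated offspring as sketched below; the tolerance that counts a gene as "half expressed" is our assumption, as the paper does not state one.

import numpy as np

def strategy_coefficients(offspring, A, B, half_tol=0.25):
    # offspring: (n_offspring, N) phenotypes; A, B: the two target vectors
    fit_A = 1.0 - np.abs(offspring - A).mean(axis=1)
    fit_B = 1.0 - np.abs(offspring - B).mean(axis=1)
    diversifying = fit_A.std()                     # fitness spread w.r.t. one target
    conservative = (np.abs(offspring - 0.5) < half_tol).mean()  # half-expressed genes
    specialist = max(fit_A.mean(), fit_B.mean())
    return diversifying, conservative, specialist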
Code is available at: <https://github.com/Cpetak/coping_with_seasons_GRN>
§ RESULTS
Populations evolved to have a lower decrease in average fitness upon a switch in the environment in all of our experiments. The maximum fitness the population reached by the end of a season also decreased over time, Fig <ref>A. This was due to the slow but steady incorporation of in-between 0.5 values to the phenotype, meaning that instead of genes being on or off, more and more genes were half expressed in the individuals over the generations. Therefore, by the end of most experiments, conservative BHs took over the population, Fig <ref>B right most example phenotype.
In the majority of the runs across different settings of the parameters, we also observed the rise and eventual fall of the diversifying BH. Our results showed that during approximately the first third of the simulations, for a short period of time after each environmental switch the diversifying BH quickly grew in frequency (highlighted by the increase of the average standard deviation among the offspring of a single parent in Fig <ref>A, and the second example phenotype in Fig <ref>B). However, in most cases this form of bet-hedging quickly got replaced by specialists and conservative BHs.
We observed the initial success of the diversifying BH in all experiments where the season length was ≤ 100 generations, as well as at season length 300 with a medium or low mutation rate. At season length 500, this strategy was only observed in combination with a low mutation rate or low truncation size. Apart from a single run (G = 500, m = 0.05, μ/pop size = 0.2), the diversifying strategy was eventually lost. In contrast, the incorporation of in-between values into the phenotype was observed in every experiment. However, as we increased the season length, mutation rate, or truncation size, fewer and fewer genes were half expressed after the 75 environmental switches, i.e., the conservative BH didn't appear or appeared later, while the diversifying BH reached higher frequencies and remained longer in the population. In summary, a season length of 300 was found to be ideal for the emergence and success of the diversifying BH, and increasing either the mutation rate or the truncation size favored the conservative strategy.
Next, we looked at the mutational robustness of an evolved diversifying and conservative bet-hedger GRN, see Fig <ref>. The conservative BH strategy was found to be considerably robust to random mutations. When the random mutations did change the phenotype, we found that it became even more of a conservative BH and less of a specialist. This robustness could have been the result of the sparse networks underlying the conservative BH phenotype (few edges with large weights, most edges 0 or small weight, data not shown).
On the other hand, the diversifying BH drastically lost its ability to produce alternative phenotypes. In most replicate experiments the mutated GRNs produced only one type of specialist after a few rounds of mutations. These networks were less sparse, though the degree distribution was similarly power-law.
§ DISCUSSION
In this study, we investigated when and how the bet-hedger strategy evolves in a frequently changing environment. In contrast to previous work, we used an agent-based evolutionary algorithm of gene regulatory networks. This allowed us to evolve diversifying and conservative BHs without explicitly selecting for them or hard-coding the strategies. We found that across different settings of the frequency of environmental change, strength of selection, and mutation rate, the diversifying BH evolved first, followed by the conservative BH. These 3 parameters only changed the degree to which the strategies evolved, in a manner in line with the results of previous studies: a higher frequency of environmental change favored the conservative <cit.>, while a smaller truncation size favored the diversifying BH strategy <cit.>.
At the beginning of every simulation, in the first environment, the populations quickly adapted and found the optimal solution. When the environment changed, suddenly the previously least fit individuals got to survive and reproduce while the previously fit lineages went extinct. After a huge drop in average fitness, the population adapted again to find the new fitness peak. The effect of a new mutation that causes the individual to be a bet-hedger, either by having an in-between phenotype or by having some proportion of the offspring be the opposite phenotype, is at first disadvantageous. If such a mutation appears, it needs to stick around in the population despite being selected against until the environment changes. However, when the environment does change, the bet-hedger has a huge advantage over the specialist. The fitness of the conservative BH remains the same in-between value, which is now much higher than that of the specialist of the previous environment. Similarly, since the diversifying BH produces the alternative phenotype at some proportion, those individuals will now survive and reproduce. This explains why we saw the bet-hedging strategies increase in frequency right after the environment changed (Fig <ref>). While the conservative BH and specialist strategies are purely exploitative, diversifying BHs can be thought of as an interesting solution to the tension between exploration and exploitation, in that there is structure to the variation they create.
Our hypothesis for why we saw the initial success of the diversifying BH followed by its replacement by the conservative BH has to do with how quickly the alternative strategies can be found and the mutational robustness of the evolved GRNs. As the populations traverse the genotype space over the generations, going from parts of the landscape that produce one target phenotype to parts that produce the other phenotype, individuals can end up on the border of this high dimensional space where a few mutations push them into the other phenotype. Thus, the diversifying strategy could have been quickly found and selected for in our simulations for this reason, while the genotype that corresponds to an in-between phenotype could have been further away in the genotype space. Despite this advantage, our results suggest that the diversifying strategy is inherently more unstable. Offspring of a diversifying BH could easily become a specialist for the current environment due to random mutations, which then outcompetes the bet-hedger unless the environment switches right away. In contrast, once the conservative phenotype is found, it is robust to mutations, thus they are less likely to produce specialists that would drive them to extinction (Fig <ref>).
In conclusion, in response to adaptation to environmental variability, we observed the evolution of GRNs that were capable of generating the two alternative optimal phenotypes given random mutations (diversifying BHs), even without the implementation of gene duplication and deletion that was used in previous studies that found the evolution of this behavior in GRNs <cit.>. We also saw the evolution of the conservative BHs, which outcompeted the diversifying strategy in most of our experiments. We argue that this dynamic could be explained by the robustness of the strategies.
This material is based upon work supported by the 2021-2022 University of Vermont Dr. Roberto Fabri Fialho Research Award to C.P. and the National Science Foundation Grant No. 2008413.
Computations were performed on the Vermont Advanced Computing Core supported in part by NSF Award No. 1827314.
|
http://arxiv.org/abs/2307.02996v1
|
20230706135658
|
A luminous precursor in the extremely bright GRB 230307A
|
[
"S. Dichiara",
"D. Tsang",
"E. Troja",
"D. Neill",
"J. P. Norris",
"Y. H. Yang"
] |
astro-ph.HE
|
[
"astro-ph.HE"
] |
A luminous precursor in the extremely bright GRB 230307A

Dichiara S. (ORCID 0000-0001-6849-1270)
Department of Astronomy and Astrophysics, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802, USA

Tsang D. (ORCID 0000-0002-1612-2585)
Department of Physics, University of Bath, Claverton Down, Bath, BA2 7AY, UK

Troja E. (ORCID 0000-0002-1869-7817)
Department of Physics, University of Rome - Tor Vergata, via della Ricerca Scientifica 1, 00100 Rome, IT

Neill D.
Department of Physics, University of Bath, Claverton Down, Bath, BA2 7AY, UK

Norris J. P.
Department of Physics, Boise State University, Boise, ID, USA

Yang Y.-H. (ORCID 0000-0003-0691-6688)
Department of Physics, University of Rome - Tor Vergata, via della Ricerca Scientifica 1, 00100 Rome, IT
GRB 230307A is an extremely bright long duration GRB with an observed gamma-ray fluence of ≳3×10^-3 erg cm^-2 (10–1000 keV), second only to GRB 221009A.
Despite its long duration, it is possibly associated with a kilonova, thus resembling the case of GRB 211211A.
In analogy with GRB 211211A, we distinguish three phases in the prompt gamma-ray emission of GRB 230307A: an initial short-duration, spectrally soft pulse; a main long-duration, spectrally hard burst; and a temporally extended, spectrally soft tail. We interpret the initial soft pulse as a bright precursor to the main burst and compare its properties with models of precursors from compact binary mergers. We find that to explain the brightness of GRB 230307A, a magnetar-like (≳ 10^15 G) magnetic field should be retained by the progenitor neutron star.
Alternatively, in the post-merger scenario, the luminous precursor could point to the formation of a rapidly rotating massive neutron star.
§ INTRODUCTION
Gamma-ray bursts (GRBs) are divided into two main phenomenological classes <cit.>: long-duration (>2 s) GRBs produced by the collapse of massive stars, and short-duration (<2 s) GRBs thought to be caused by the coalescence of two compact objects, such as neutron stars (NSs) or black holes (BHs).
In recent years, new sub-classes of GRBs have also been identified, such as short GRBs with extended emission <cit.> and peculiar long GRBs with hybrid high-energy properties, such as GRB 060614 and GRB 211211A <cit.>.
Their gamma-ray emission is characterized by a main spectrally-hard burst followed by a long-lasting tail of spectrally softer emission. Despite their long duration, these bursts are not followed by bright supernovae and thought to be produced by compact object mergers.
Recently, the extremely bright GRB 230307A was proposed as a possible member of this new GRB class, although its prompt gamma-ray phase does not strictly follow the typical phenomenology of these events <cit.>.
Its gamma-ray emission displays a soft-hard-soft spectral evolution, more typical of standard long GRBs.
In this work, we explore the hypothesis that the early spectrally softer emission of GRB 230307A is instead a precursor, preceding the main bright burst composed by a hard peak and a soft tail.
Numerous studies in the past have emphasized the presence of precursor signals in both long GRBs <cit.> and short GRBs <cit.>. A precursor was also identified in the prompt emission of GRB 211211A <cit.>.
Precursors exhibit lower luminosity and often a shorter duration than the main prompt episode <cit.>.
Several theories were proposed to explain the different characteristics associated with these precursors, encompassing spectral shape, duration, and quiescent times.
A weak precursor with a quasi-thermal spectrum is a direct prediction of the standard fireball model <cit.>.
Other models are instead related to the nature of the central engine, with the precursor marking
the birth of a massive NS later collapsing into a BH <cit.>.
In other
scenarios, the precursor is related to the strong interaction between the two compact objects right before they merge.
Luminous high-energy emission is predicted by models that involve the shattering of the NS crust during the inspiral phase <cit.>
or the magnetospheric interactions between the neutron star and another compact object <cit.>.
It has been suggested <cit.> that, regardless of their power source, precursors produced in the pre-merger phase should display a modulated temporal profile induced by the stars' orbital motion.
While, in general, precursors associated with short GRBs or short GRBs with extended emission are few and faint, the extreme brightness of GRB 230307A allows us to put these models to the test.
The paper is organized as follows: in Section <ref> we describe the data reduction process together with the timing and spectral analysis of GRB 230307A. In Section <ref> we discuss the results, illustrating the different models proposed to explain the nature of the precursor emission and highlighting the possible implications. In Section <ref> we summarize our results and conclusions.
Uncertainties are quoted at the 1-σ confidence level and upper limits are given at a 2-σ level throughout the paper, unless stated otherwise. Standard ΛCDM cosmology <cit.> was adopted.
§ OBSERVATIONS AND DATA ANALYSIS
On March 7, 2023 at 15:44:06.67 UT, hereafter referred to as T_0, the Fermi/GBM triggered on GRB 230307A <cit.>.
The burst was also observed by several other missions including GECAM <cit.>, Konus/WIND <cit.>, AGILE <cit.> and the Solar Orbiter STIX <cit.>. These detections confirmed the extremely bright nature of the transient event and enabled the Interplanetary Gamma-Ray Burst Timing Network (IPN) to provide a refined localization <cit.>.
A preliminary estimate of its broadband (10-1000 keV) fluence reached a remarkable value of approximately 3×10^-3 erg cm^-2 <cit.>, marking the second highest value ever recorded <cit.>.
To conduct our analysis, we use Fermi/GBM’s time-tagged event (TTE) data obtained from three NaI detectors (N6, N8, and Na) and the BGO detector B1.
We processed the data using the HEASoft (version 6.30.1) and the Fermitools (version 2.0.8) software packages following standard procedures [https://fermi.gsfc.nasa.gov/ssc/data/p7rep/analysis/scitools/gbm_grb_analysis.html].
The spectral analysis was conducted using RMFIT v.4.3.2.
Due to the extremely high flux of GRB 230307A, the data suffers from losses caused by electronic bandwidth limits <cit.>. As a result, certain time intervals are affected by pile-up effects <cit.> and were excluded from the analysis.
§.§ Temporal analysis
Figure <ref> shows the GRB light curves extracted with a time bin resolution of 16 ms. We compare the temporal profiles in two energy bands, a soft energy band (7-99 keV) and a hard energy band (>800 keV).
From Figure <ref> we note that most of the emission in the hard energy band is concentrated within the first 18 s, whereas the longer tail of emission, extending up to ∼45 s, consists of softer energy photons.
Using the standard BATSE energy range (50–300 keV), we derive a duration T_90 of ∼33 s.
This value is defined as the time interval over which the cumulative number counts increase from 5% to 95% above background <cit.>, and may thus be slightly overestimated due to pile-up effects. However, even assuming that the count rate is twice the observed value during the bad time interval, the resulting
T_90 would shorten by only 3 s and GRB 230307A would still be classified as a long GRB.
In this respect, it differs from the standard case of short GRBs with extended emission, whose spectrally hard pulse lasts less than 2 s, and resembles the long GRB 211211A.
The inset of Figure <ref> zooms in the first 2.5 sec of emission. The first bright pulse that triggered Fermi/GBM is visible only in the soft energy band, whereas the spectrally hard (>800 keV) emission does not start until ≈ T_0 + 1 s.
The first pulse is characterized by a short duration, lasting from T_0 to T_0+0.4 s, and a double peaked structure. The two peaks are separated by ∼0.1 s. The gamma-ray flux decreases close to the background level after this first short signal and it increases again at ∼ T_0+0.8 s, when the main part of the emission starts.
The short duration of this initial pulse combined with the low flux (in comparison with the extremely bright main emission), the softer spectrum and the delay with respect to the onset of the main emission (∼0.4 s) are distinctive features typical of GRB precursors.
Two quantities commonly used for GRB studies are the spectral lag and minimum time scale variability.
To derive the spectral lag we used the light curve extracted in the two standard energy ranges: 25-50 keV and 100-300 keV.
At first we focused our timing analysis only on the precursor event (from T_0 to T_0+0.18 s). Using the same approach described in <cit.> and a 4-ms binned light curve, we derived a spectral lag of 1.5±3.0 ms in this initial interval. To investigate possible evolution of the lag over the main emission, we also measured this value in different intervals: from T_0+1.4 to T_0+2.9 s, from T_0+7.9 to T_0+13.5 s, and from T_0+19.0 to T_0+23.4 s. We found -6.6^+9.1_-11.4 ms, 3.3^+5.2_-5.0 ms and 2.8±4.5 ms, respectively.
Intervals were selected in such a way that the count rates at the beginning and end points are almost the same. This deliberate choice aimed to enhance the efficacy of cross-correlation analysis and produce robust lag results.
We used the 8-ms binned light curve to study the main part and the extended tail of the prompt emission. This was done in order to ensure the minimum number of counts per bin needed to perform a sensitive fit of the cross-correlation function between the two channels.
These spectral lags are consistent with 0 and are similar to the ones observed for short GRBs and short GRBs with extended emission <cit.>.
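For orientation, a generic CCF-based lag estimate is sketched below in Python. It is not the exact pipeline of <cit.>: the circular shift at the array edges, the parabolic peak interpolation, and the sign convention (positive lag meaning the hard band leads) are all simplifying assumptions.

import numpy as np

def spectral_lag(lc_soft, lc_hard, dt, max_shift=50):
    # lc_soft, lc_hard: background-subtracted light curves binned at dt seconds
    shifts = np.arange(-max_shift, max_shift + 1)
    ccf = np.array([np.corrcoef(np.roll(lc_hard, k), lc_soft)[0, 1]
                    for k in shifts])             # circular shift: crude at the edges
    k0 = int(np.argmax(ccf))
    lo, hi = max(k0 - 3, 0), min(k0 + 4, len(shifts))
    a, b, _ = np.polyfit(shifts[lo:hi], ccf[lo:hi], 2)
    return -b / (2.0 * a) * dt                    # sub-bin lag from the parabola vertex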
Table: Minimum variability time scale (all reported times are in the observer frame)

Time Interval (s)          Minimum variability (s)
                           8-1000 keV        15-350 keV
T_0-0.064 – T_0+0.32       0.017 ± 0.002     0.014 ± 0.002
T_0+0.320 – T_0+3          <0.012            0.010 ± 0.002
T_0+7 – T_0+18             0.015 ± 0.002     0.017 ± 0.002
T_0+18 – T_0+40            0.062 ± 0.004     0.063 ± 0.005
We then derived the minimum time scale variability using the method described in <cit.>. The value was calculated for different time intervals, representative of the different parts of the prompt emission: the precursor (-0.064 s - 0.32 s), the main hard peak (0.32-18 s, excluding the bad time interval 3-7 s), and the extended tail (18 - 40 s). Two energy bands were considered: the standard GBM broadband (8-1000 keV) and the standard Swift band (15-350 keV), to allow for direct comparison with GRB 211211A and the rest of the Swift sample.
Results are reported in Table <ref> and Figure <ref>.
During the precursor episode the minimum variability is 17±2 ms (see Figure <ref>).
As the prompt emission evolves, this variability changes. A fast variability (<12 ms) characterizes the signal between T_0+0.4 s and T_0+3 s (after the first peak and before the start of the bad time interval). The minimum variability timescale stays below ∼20 ms during the main emission
(before T_0+18 s) and it increases up to ∼60 ms during the tail (see Table <ref>).
Our values are slightly higher than those reported by <cit.>, which were derived using a different approach.
It is interesting to note that the minimum variability evolves as observed for GRB 211211A (see Figure <ref>).
It is also worth noting that while the main signal seems to follow the typical hard-to-soft trend observed for other long GRBs, commonly associated with a short-to-long variability, the precursor appears to have a peculiarly short variability and a soft spectrum (see the next Section <ref>). This distinct behaviour deviates from the general trend observed between pulse width and energy <cit.> and is indicative of a different physical process associated with the precursor.
§.§ Spectral analysis
The precursor spectrum was obtained by integrating the signal between T_0-0.064 s and T_0+0.320 s. The background was derived using the time interval from T_0-89 s to T_0-7 s before the burst trigger and from T_0+162 s to T_0+273 s after the end of the prompt emission. The background light curve was modeled using a third-degree polynomial. The best-fit spectral results are presented in Table <ref> and in Figure <ref>. The spectral models used for the analysis are: a simple black body (BB), a Comptonized model <cit.>, a Band function <cit.>, and a combination of a Band function and a BB. The best fit is obtained by a simple Band function (see Figure <ref>). The combined Band+BB model provides a slightly better fit, although the improvement is not significant enough to justify the presence of the thermal component. The average flux measured during the precursor episode is 3.3×10^-5 erg cm^-2 s^-1 (10–1000 keV). Assuming that the nearby galaxy at z∼0.065 <cit.> is the actual host, we derive an average luminosity of ∼3.6×10^50 erg s^-1.
In order to study the evolution of the spectra, we fitted the data over different time intervals before T_0+3 s and after T_0+7 s. As discussed in the previous Section, the variability time scale is related to the spectral hardness (with shorter variability associated with harder spectra), except for the precursor, which has a soft spectrum and a surprisingly short variability.
Table: Best-fit spectral parameters

Time Interval (s)     Model        Mean Flux 10-1000 keV   E_pk (keV)    α index       β index       BB Temp (keV)  C-STAT/d.o.f.
                                   (erg cm^-2 s^-1)

Precursor:
T_0-0.064 – T_0+0.32  Black Body   2.899(±0.038)×10^-5     —             —             —             36.0 ± 0.2     3527.3/482
T_0-0.064 – T_0+0.32  Comptonized  3.288(±0.038)×10^-5     198.7 ± 3.5   -0.80 ± 0.03  —             —              606.1/481
T_0-0.064 – T_0+0.32  Band         3.324(±0.038)×10^-5     170.3 ± 4.7   -0.63 ± 0.04  -2.95 ± 0.09  —              563.1/480
T_0-0.064 – T_0+0.32  Band+BB      3.333(±0.038)×10^-5     203.9 ± 8.9   -0.59 ± 0.08  -3.20 ± 0.16  15.3 ± 1.4     536.1/478

Main Emission:
T_0+0.320 – T_0+3     Band         1.196(±0.003)×10^-4     914.6 ± 5.4   -0.47 ± 0.01  -5.5 ± 0.3    —              954.4/485
T_0+7 – T_0+9         Band         1.748(±0.004)×10^-4     968.0 ± 6.6   -0.71 ± 0.01  -4.5 ± 0.1    —              928.5/485
T_0+9 – T_0+11        Band         1.577(±0.004)×10^-4     946.8 ± 6.7   -0.73 ± 0.01  -4.8 ± 0.2    —              1024.2/485
T_0+11 – T_0+13       Band         1.170(±0.003)×10^-4     804.5 ± 6.8   -0.90 ± 0.01  -5.0 ± 0.4    —              1225.3/485
T_0+13 – T_0+15       Band         9.533(±0.031)×10^-5     781.7 ± 8.4   -1.07 ± 0.01  -4.8 ± 0.4    —              1202.0/485
T_0+15 – T_0+18.5     Band         4.916(±0.016)×10^-5     634.6 ± 8.0   -1.17 ± 0.01  -4.5 ± 0.4    —              1185.2/485
T_0+18.5 – T_0+22     Band         4.592(±0.016)×10^-5     617.2 ± 8.3   -1.23 ± 0.01  -5.0 ± 0.9    —              1260.2/485
T_0+22 – T_0+25.5     Band         3.594(±0.014)×10^-5     468.4 ± 6.8   -1.27 ± 0.01  -5.1 ± 1.5    —              1008.7/485
T_0+25.5 – T_0+29     Band         2.457(±0.012)×10^-5     392.6 ± 9.1   -1.50 ± 0.01  -4.7 ± 1.8    —              967.3/485
T_0+29 – T_0+32.5     Band         1.186(±0.009)×10^-5     319.2 ± 10.4  -1.47 ± 0.01  -5.2 ± 7.4    —              704.8/485
T_0+32.5 – T_0+40     Band         9.870(±0.056)×10^-6     233.0 ± 4.8   -1.50 ± 0.01  -4.7 ± 2.9    —              957.5/485

Note: Best-fit results obtained by integrating the spectra over different time intervals. Different models were used to fit the precursor spectrum.
§ MODELING OF THE PRECURSOR
Models for precursor emission in compact binary mergers can be grouped into two classes: pre-merger models, related to processes occurring before the merger, and post-merger models, which describe the
evolution of the remnant after the merger.
In the pre-merger phase, the energy to power the precursor is extracted from the orbit, either through the NS magnetic field or from its crust.
In the post-merger phase, the energy is instead supplied by the central engine, thus allowing for a broader range of precursor luminosities and timescales.
If GRB 230307A is truly associated with a kilonova at 290 Mpc – and thus is a product of a merger involving at least one neutron star – the extreme energetics of its first pulse (L_ iso∼ 3.6 × 10^50 erg s^-1) challenge most models for precursor flares. In the following, we discuss possible scenarios for GRB 230307A.
§.§ Pre-merger models: a Resonant Shattering Flare
Among the pre-merger models, the resonant shattering flare model <cit.> can
reproduce high luminosity precursors by extending slightly beyond the fiducial values presented in <cit.>, and requiring a significant surface magnetic field for one of the progenitors. Here, the crust-core interface mode (i-mode) of one of the merger progenitors is excited by tidal resonance. This mode is excited past the material breaking strain of the NS crust, causing the crust to fracture and shatter. Seismic oscillations in the NS can then drive perturbations of the surface magnetic field, which, if strong enough, can spark pair-photon fireball shells during the resonance window. Collisions between these shells lead to non-thermal gamma-ray emission.
The total energy available in the resonant shattering flare (RSF) model is set by the tidal energy transfer rate given by <cit.>

\dot{E}_{\rm tidal} \simeq \left(\frac{3\pi}{40}\right)^{1/2} \left(2\pi f_{\rm i\text{-}mode}\right)^2 M_{\rm NS}^{1/2} R_{\rm NS}\, Q\, E_b^{1/2}\, \frac{q}{q+1},

where f_i-mode is the i-mode frequency, Q is the tidal overlap parameter, representing the coupling of the i-mode with the tidal field, q is the binary mass ratio, and E_b is the mode energy at which the breaking strain is reached. For the fiducial values provided in <cit.> this can reach up to ∼10^50 erg/s, which cannot produce the observed precursor with reasonable gamma-ray efficiency (see Figure <ref>, which assumes a 20% gamma-ray efficiency). This value could potentially be increased by an order of magnitude if the neutron star crust breaking strain and tidal overlap integral are a factor of a few larger than the conservative fiducial values assumed.
Additionally, the surface magnetic field plays a significant role in limiting the luminosity; the magnetic field must be strong enough to extract the seismic energy from the crust, otherwise this energy remains trapped within the neutron star <cit.>. If the precursor is indeed from an RSF event, this puts a lower limit on the surface magnetic field of ≳ 1.2 × 10^15 G (see Figure <ref>). This requires that a significant surface field remain present during the binary lifetime, which is unlikely if the field is confined to the crust, where Hall evolution and Ohmic dissipation would cause the field to decay in ≲ 10^6 yr <cit.>. Instead, a field partially frozen into a superconducting core could provide support for a sufficiently long-lived surface field <cit.>.
In the simplified colliding fireball shell model, the spectral hardness of the non-thermal gamma-ray emission depends on the Lorentz factor of the fireball shells. This is determined by the breaking strain of the neutron star crust, as well as mass-loading of the pair-fireball by material from the neutron star surface. The duration of a RSF is closely related to the duration over which the i-mode is in resonance <cit.>. To calculate this we follow <cit.> except that we substitute their use of the stationary phase approximation with solving the equations for the evolution of the binary separation (D) and the orbital angular frequency (Ω) due to the gravitational wave emission of two point masses in a circular binary. Note that we neglect relativistic effects that may become important late in the binary inspiral and effects of tidal interactions on the binary orbit, which will likely act to accelerate the inspiral, and so our results serve as an upper bound on the duration of resonance (particularly for more massive binaries, where the inspiral ends at lower orbital frequencies and thus closer to resonance). From the binary evolution we then calculate the cumulative tidal-field-strength-weighted phase difference between the i-mode and the binary orbit:
\phi(t') = \int_{t_0}^{t'} \frac{1}{D(t)^3} \exp\left(-2i\Phi + i(2\pi f_{\rm i\text{-}mode})\,t\right) dt

(where the orbital phase function $\Phi = \int \Omega\, dt$, assuming the NS which produces the RSF has zero spin, and t_0 is a time long before the resonance), which is proportional to the amplitude of the i-mode's oscillations and shows a sharp increase over a short time period around the resonance: the "resonance window".
We fix the mass of one object in the binary and calculate the inspiral for several different secondary masses and i-mode frequencies, obtaining the duration of the resonance window for each case. By fitting these durations as a function of the secondary object's mass and the i-mode frequency we obtain an approximate relationship for the duration of the i-mode's resonance. We repeat this for several different fixed masses. Using the resulting relationships, in Figure <ref> we show the approximate i-mode frequency required to produce a RSF with the duration of GRB 230307A's precursor as a function of the mass of one object in the binary, assuming four different values for the mass of the other object. Here we use the minimum variability timescale as an uncertainty in the relationship between flare duration and the duration of the resonance, as in the simplified colliding fireball shell model this timescale is the time taken for shocks to cross the fireballs shells, which is also the time by which the flare extends beyond the resonance. We find that an i-mode frequency of ∼ 100 Hz is required to produce a RSF with a 0.4 s duration, which is on the same scale as the values calculated using several different NS models <cit.>.
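A numerical sketch of this procedure is given below (Python). It is not the authors' code: the 1.4 + 1.4 M_⊙ masses, the starting separation, and the 30 km contact cutoff are fiducial assumptions, and, as stated above, relativistic and tidal corrections to the orbit are neglected.

import numpy as np

G, c, Msun = 6.674e-11, 2.998e8, 1.989e30   # SI units

def resonance_window(m1=1.4, m2=1.4, f_imode=100.0,
                     D0=300e3, dt=1e-4, t_end=35.0):
    """Integrate a point-mass circular inspiral and the tidal-field-weighted
    phase integral phi(t'); |phi| rises sharply across the resonance window."""
    m1, m2 = m1 * Msun, m2 * Msun
    M = m1 + m2
    steps = int(t_end / dt)
    D = np.empty(steps)
    D[0] = D0
    last = steps
    for i in range(steps - 1):
        # quadrupole-driven decay: dD/dt = -(64/5) G^3 m1 m2 M / (c^5 D^3)
        D[i + 1] = D[i] - dt * (64 / 5) * G**3 * m1 * m2 * M / (c**5 * D[i]**3)
        if D[i + 1] < 30e3:          # stop near contact; the model breaks down there
            last = i + 2
            break
    D = D[:last]
    t = np.arange(last) * dt
    Omega = np.sqrt(G * M / D**3)    # Kepler's law for the orbital frequency
    Phi = np.cumsum(Omega) * dt      # orbital phase
    integrand = D**-3.0 * np.exp(1j * (2 * np.pi * f_imode * t - 2 * Phi))
    phi = np.cumsum(integrand) * dt  # phi(t'), arbitrary normalization
    return t, np.abs(phi)

Plotting |phi| against t shows the sharp rise whose width approximates the duration of the resonance window.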
Measuring the i-mode frequency directly would allow strong constraints to be placed on the nuclear physics parameters that determine the nuclear matter equation of state <cit.>. This direct measurement requires an EM/GW multimessenger detection of an RSF; however, we have shown above that a precursor duration can provide a combined i-mode frequency/chirp mass constraint that may help to inform future detections.
§.§ Other pre-merger models
Other models for short GRB precursors commonly rely on magnetospheric interactions between a NS and a BH or another NS. A BH binary partner could generate a non-thermal flare as it orbits and spins inside the magnetosphere of the NS, accelerating plasma along the field lines and thus producing gamma-rays <cit.>. This emission could reach the luminosity of the precursor to GRB 230307A for a NS with dipole magnetic field strength ≳ 10^15 G and a relatively low-mass BH. However, such a flare would appear as a very short and sharp peak, a profile completely different from the one observed for GRB 230307A.
A precursor signal could also be produced in binary NS systems, when two strongly magnetized NSs in a close binary are connected by a flux tube. If the NSs have different spins, or their spins are misaligned with their magnetic fields, this flux tube will become twisted. This transfers energy from the NSs' spins into the magnetic field, building the pressure needed to produce a bright non-thermal flare <cit.>. The luminosity of this flare is strongly dependent on the NS magnetic field strength, and it also requires a dipole magnetic field strength ≳ 10^15 G to reach the luminosity of GRB 230307A. This mechanism also requires that both NSs be strongly magnetized, which, as previously mentioned, is difficult to explain.
A common theme for many of these models is the requirement that at least one NS in the binary be very strongly magnetized (surface dipole field strength ≳ 10^15 G) in order to reach a luminosity ≳10^50 erg/s. Thus, if GRB 230307A originated in a binary NS merger, the energetics of this precursor strongly suggest that the NS can maintain a strong magnetic field over the duration of the inspiral phase, indicating that it is preserved by some specific mechanism <cit.>.
§.§ Post-merger models
In the post-merger phase, the precursor emission could be related to either the remnant magnetar <cit.>, the evolution of the relativistic fireball <cit.>, or the interaction of the jet with the merger ejecta <cit.>.
Among these possible models, both the fireball and the jet scenarios fail to reproduce the observed properties.
The fireball is initially optically thick but emission from its photosphere becomes visible as soon as the expanding material becomes transparent <cit.>. However, this would produce a thermal spectral shape, at odds with observations.
A quasi-thermal spectrum would also be expected from the jet breaking out of the ejecta cloud. Although it has been argued that the shock break-out spectrum could differ significantly from a thermal shape <cit.>, this model remains challenged by the high luminosity and short variability timescale of the candidate precursor.
Models related to the central engine offer many more degrees of freedom, and thus can more easily account for a broad range of behaviors.
If the merger remnant is a rapidly spinning, highly magnetized NS, its rotational energy (≈10^52 erg) can easily account for luminosities of the order of 10^50 erg s^-1.
This energy is extracted by some MHD processes and then released into a high-entropy fireball.
The baryon-rich environment of a proto-NS is considered less efficient in accelerating the fireball
to high Lorentz factors <cit.>, and this might explain the softer spectrum of the initial pulse.
Once the path is cleared by the precursor shell,
subsequent fireball shells could attain much higher velocities and produce the hard gamma-ray spectrum of the main episode. These MHD winds could be halted by the infalling material onto the central NS, causing the 0.4 s delay between the soft-energy precursor and the hard-energy main emission.
Continued accretion of matter could cause the NS to collapse into a BH <cit.>, which would then power the main burst and its temporally extended emission. In this scenario, the main open question remains the nature of the power mechanism, since the typical lifetime of the accretion disk, t_visc≈0.2 (0.1/α) s where α is the dimensionless viscosity parameter <cit.>, is much shorter than the GRB duration.
Alternatively, if the NS remains stable, the GRB might be powered by its spin-down luminosity <cit.>.
This depends markedly on the EoS, which determines the NS maximum mass (from ≈2.05 M_⊙ for a soft EoS <cit.> to ≈2.7 M_⊙ for a stiff EoS <cit.>) and its typical lifetime t_col. To reproduce the average luminosity of the temporally extended high-energy emission (∼6.5×10^50 erg s^-1) and its duration (∼40 s), the newly formed NS should have a high poloidal magnetic field, B_p ∼ 6×10^15 ξ^0.5 G, and a short initial spin period, P_0 ∼ 1 ξ^0.5 ms, where ξ is the gamma-ray conversion efficiency. As the derived period is already close to the break-up limit <cit.>, most of the spin-down energy must be efficiently converted into electromagnetic radiation (ξ ≳ 50%). Additionally, the collapse time must be t_col ≳ 40 s.
These electromagnetic constraints on
B_p, P_0, and t_ col, when
combined with the GW measurements of the chirp mass, could place tight constraints on the EoS of dense matter <cit.>.
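The quoted numbers follow from simple energy bookkeeping plus the vacuum-dipole spin-down formula, as the back-of-the-envelope sketch below shows; the moment of inertia I = 10^45 g cm² and radius R = 10 km are fiducial assumptions.

import numpy as np

# CGS constants and fiducial NS parameters (assumed values)
c = 3.0e10           # speed of light, cm/s
I = 1.0e45           # NS moment of inertia, g cm^2
R = 1.0e6            # NS radius, cm

def magnetar_requirements(L_gamma=6.5e50, T=40.0, xi=1.0):
    """Estimate (B_p, P_0) for a vacuum-dipole spin-down engine that must
    radiate L_gamma (erg/s) in gamma-rays for T (s) at efficiency xi."""
    E_req = L_gamma * T / xi              # required rotational energy budget
    Omega = np.sqrt(2.0 * E_req / I)      # from E_rot = I Omega^2 / 2
    P0 = 2.0 * np.pi / Omega              # initial spin period, s
    # vacuum dipole: L_sd = B^2 R^6 Omega^4 / (6 c^3)  -> solve for B
    B = np.sqrt(6.0 * c**3 * (L_gamma / xi) / (R**6 * Omega**4))
    return B, P0

B, P0 = magnetar_requirements()
print(f"B_p ~ {B:.1e} G, P_0 ~ {P0*1e3:.2f} ms")   # ~6e15 G, ~0.9 ms

Both outputs scale as ξ^0.5, reproducing the dependence quoted above.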
§ SUMMARY
The prompt emission of GRB 230307A starts with a brief and luminous pulse of high-energy radiation, which
we interpret as a precursor.
This signal is characterized by a short minimum variability timescale (approximately 17 ms), a soft non-thermal spectrum peaking around 200 keV, and a negligible spectral lag.
This combination of soft spectrum and short variability deviates from the general trend of prompt GRB emission, supporting our hypothesis of a precursor signal powered by a different mechanism.
We explore a wide range of precursor models to explain the high luminosity (∼3.6×10^50 erg s^-1 at 291 Mpc) and duration of this candidate precursor. Resonant shattering flares occurring before the merger are a viable option.
This model requires the progenitor NS to retain a high magnetic field (≳ 10^15 G) within its core.
Alternatively, high luminosity precursors
could be the product of a rapidly rotating millisecond pulsar
formed after the merger.
This model naturally accounts for the long duration of the gamma-ray emission, and favors stiff EoS for the formation of stable supramassive NSs.
A clear discriminant between these two scenarios (pre-merger and post-merger precursors) would be the simultaneous detection of gravitational waves, setting the exact time of the merger before or after the high-energy precursor.
The upcoming LIGO/Virgo/KAGRA observing run O4 holds promise for the possible detection of binary mergers associated with long GRBs. The enhanced sensitivity will allow us to observe gravitational-wave signals from NS mergers out to ∼350 Mpc <cit.>, testing possible associations with high-energy signals originating at the same distance scale as GRB 230307A or GRB 211211A.
Such detections would finally dispel any doubt about the nature of these signals, and the observation of a precursor could then be used to constrain the properties of the merging neutron stars, such as the crust properties probed by tidal resonant shattering, the magnetic field and, ultimately, the equation of state of dense matter.
§ ACKNOWLEDGEMENTS
This research has made use of data and software provided by the High Energy Astrophysics Science Archive Research Center (HEASARC), which is a service of the Astrophysics Science Division at NASA/GSFC and the High Energy Astrophysics Division of the Smithsonian Astrophysical Observatory.
The material is based upon work supported by NASA under award
number 80NSSC22K1516.
E.T. and Y.-H.Y. were supported by the European Research Council through the Consolidator grant BHianca (grant agreement ID 101002761). D.T. and D.N.'s research was supported by UK Science and Technology Facilities Council grant ST/X001067/1 and Royal Society research grant RGS / R1 / 231499
|
http://arxiv.org/abs/2307.01867v1
|
20230704182554
|
Normalized Gompertz wavelets and their application
|
[
"Grzegorz Rzadkowski"
] |
math.CA
|
[
"math.CA",
"42C40, 65T60, 11B83"
] |
Normalized Gompertz wavelets and their application
Grzegorz Rza̧dkowski
Department of Finance and Risk Management, Warsaw University of Technology, Narbutta 85, 02-524 Warsaw, Poland
e-mail: grzegorz.rzadkowski@pw.edu.pl
In the present paper, we define the Gompertz wavelets and show their basic properties. In particular, we prove that the admissibility condition holds for them. We also compute the normalizing factors in the space of square-integrable functions L^2(ℝ) and present an explicit formula for them in terms of the Bernoulli numbers. Then, after implementing the second-order Gompertz wavelets into Matlab's Wavelet Toolbox, we apply them to study the spread of the Covid-19 pandemic in Saudi Arabia.
Keywords: Gompertz function, Gompertz wavelet, Stirling number of the second kind, Covid-19, Continuous Wavelet Transform.
2020 Mathematics Subject Classification: 42C40, 65T60, 11B83
§ INTRODUCTION
The Gompertz function is described by the following autonomous first-order differential equation

x'(t) = s\, x \log\frac{x_{\max}}{x}, \qquad x(0) = x_0 > 0,

with parameters s (growth rate) and x_max (saturation level, asymptote), 0 < x_0 < x_max; log is the natural logarithm. After solving (<ref>) we can write the Gompertz function in the following convenient form

x(t) = x_{\max}\, e^{-e^{-s(t-t_0)}},

where the constant t_0 appears in the integration of (<ref>) and is connected with the initial condition $x(0) = x_0 = x_{\max} e^{-e^{s t_0}}$, thus $t_0 = \frac{1}{s}\log\log(x_{\max}/x_0)$. It is easy to check that t_0 is also the inflection point of x(t) (<ref>).
Function (<ref>) was first described and applied in actuarial mathematics in 1825 by Mr. Benjamin Gompertz <cit.>. Since then, the Gompertz function has found applications in probability theory (Gumbel distribution), biology, medicine, economics, engineering, physics and many other fields. The first hundred years of the use of this function are well described by Winsor <cit.>. The interesting story of the next almost one hundred years can be found in the article by Tjørve and Tjørve <cit.>.
In recent years, many articles have appeared in which the Gompertz function was used to describe Covid-19 cases (infected people who tested positive). Ohnishi et al <cit.> showed that the first waves of Covid-19 cases in 11 selected countries (Japan, USA, Russia, Brazil, China, Italy, Indonesia, Spain, South Korea, UK, and Sweden) can be modeled using the Gompertz function. They also compared the mechanism of the appearance of the Gompertz function with the mechanism of the time dependence of the number of pions produced in nucleus-nucleus collisions, which is also described by the Gompertz function. Dhahbi et al <cit.> used the Gompertz model to describe the first wave of cases in Saudi Arabia. Kundu et al <cit.> proposed an automated COVID-19 detection system based on convolutional neural networks using the Gompertz function. Estrada and Bartesaghi <cit.> linked the networked SIS model with the Gompertz function.
The structure of the article is as follows. In Sec. <ref> we describe the basic properties of the Gompertz function and its derivatives. Sec. <ref> is devoted to the Gompertz wavelets based on the second derivative, that we use later. Then, in the same section, we define the Gompertz wavelets for any derivative and prove that the admissibility condition holds for them. In Sec. <ref> we present some applications of the Gompertz wavelets, in particular for modeling the spread of the Covid-19 pandemic. The paper is concluded in Sec. <ref>. All the data, which we analyze, were taken from the website Our World in Data <cit.>.
§ THE GOMPERTZ FUNCTION AND ITS DERIVATIVES
By ${n \brace k}$ we denote the Stirling number of the second kind (for subsets), which is defined as the number of ways of partitioning a set of n elements into k nonempty subsets; see Graham et al <cit.> and Sloane <cit.> (sequence A008277). The sequence has the boundary conditions: ${n \brace 0} = 0$ if n > 0, ${0 \brace 0} = 1$, ${n \brace k} = 0$ for k > n or k < 0.
Let us recall that the numbers fulfill

{n \brace k} = \frac{1}{k!} \sum_{j=0}^{k} (-1)^{k-j} \binom{k}{j}\, j^n = \frac{1}{k!} \sum_{j=0}^{k} (-1)^{j} \binom{k}{j} (k-j)^n,

{n+1 \brace k} = k {n \brace k} + {n \brace k-1},

and appear in the Taylor expansion

\frac{(e^w - 1)^k}{k!} = \sum_{n=k}^{\infty} {n \brace k} \frac{w^n}{n!}.
The first few Stirling numbers for subsets are given in Table <ref>.
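For reference, the triangle in Table <ref> can be regenerated from recurrence (<ref>) with a few lines of Python:

from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling numbers of the second kind via S(n+1, k) = k S(n, k) + S(n, k-1)
    if k > n or k < 0:
        return 0
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

print([stirling2(4, k) for k in range(1, 5)])   # row n = 4: [1, 7, 6, 1]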
Rza̧dkowski et al <cit.> proved, among other results, that if x(t) is a solution of equation (<ref>), then its nth derivative x^(n)(t) can be expressed as

x^{(n)}(t) = s^n x \sum_{k=1}^{n} (-1)^{n-k} {n \brace k} \log^k \frac{x_{\max}}{x}.
For example,

x''(t) = s^2 x \log\frac{x_{\max}}{x}\left(-1 + \log\frac{x_{\max}}{x}\right),

x'''(t) = s^3 x \log\frac{x_{\max}}{x}\left(1 - 3\log\frac{x_{\max}}{x} + \log^2\frac{x_{\max}}{x}\right).
From formula (<ref>) we obtain the well-known property of the Gompertz function that its value at the inflection point t_0 equals $x(t_0) = x_{\max}/e \approx 0.368\, x_{\max}$. Similarly, denoting by t_1 the smaller of the two zeros of the third derivative (<ref>), we get $x(t_1) = x_{\max}\exp(-(3+\sqrt{5})/2) \approx 0.0729\, x_{\max}$. Comparing this with (<ref>) and solving the equation

x(t_1) = x_{\max}\exp\left(-\frac{3+\sqrt{5}}{2}\right) = x_{\max}\exp\left(-e^{-s(t_1-t_0)}\right)

we obtain (cf. Figure <ref>)

t_1 = t_0 - \frac{1}{s}\log\frac{3+\sqrt{5}}{2}.
Further comments on this can be found in the paper by Rza̧dkowski et al <cit.>.
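These closed-form values are easy to confirm numerically; the short Python check below evaluates the explicit formula for x''' at t_1 (for s = 1, t_0 = 0, x_max = 1):

import numpy as np

def gompertz(t):
    return np.exp(-np.exp(-t))          # x(t) with x_max = 1, s = 1, t_0 = 0

def third_derivative(t):
    # x''' = x L (1 - 3L + L^2), with L = log(x_max / x) = exp(-t)
    L = np.exp(-t)
    return gompertz(t) * L * (1.0 - 3.0 * L + L**2)

t1 = -np.log((3.0 + np.sqrt(5.0)) / 2.0)
print(third_derivative(t1))             # ~0: t_1 is a zero of x'''
print(gompertz(t1))                     # ~0.0729 = x(t_1) / x_max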
§ THE GOMPERTZ WAVELETS
§.§ Wavelets
We briefly outline the basic general properties of wavelets (cf. <cit.>), which we will need later. A wavelet or mother wavelet (see Daubechies <cit.>, p. 24) is a function ψ ∈ L^1(ℝ) such that the following admissibility condition holds:

C_\psi = 2\pi \int_{-\infty}^{\infty} |\xi|^{-1}\, |\hat{\psi}(\xi)|^2\, d\xi < \infty,

where $\hat{\psi}(\xi)$ is the Fourier transform F(ψ) of ψ, i.e.,

F(\psi)(\xi) = \hat{\psi}(\xi) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} \psi(x)\, e^{-i\xi x}\, dx.

Since for ψ ∈ L^1(ℝ) the transform $\hat{\psi}(\xi)$ is continuous, condition (<ref>) is only satisfied if $\hat{\psi}(0) = 0$, which is equivalent to $\int_{-\infty}^{\infty} \psi(x)\, dx = 0$. On the other hand, Daubechies <cit.>, p. 24, points out that the condition $\int_{-\infty}^{\infty} \psi(x)\, dx = 0$, together with the slightly stronger than integrability condition $\int_{-\infty}^{\infty} |\psi(x)|(1+|x|)^{\alpha}\, dx < \infty$ for some α > 0, is sufficient for (<ref>). Usually, in practice, much more is assumed about the function ψ; hence, from a practical point of view, the conditions $\int_{-\infty}^{\infty} \psi(x)\, dx = 0$ and (<ref>) are equivalent. Suppose the function ψ is also square-integrable, ψ ∈ L^2(ℝ), with the norm

||\psi|| = \left(\int_{-\infty}^{\infty} |\psi(x)|^2\, dx\right)^{1/2}.
Having a mother wavelet, we can generate a doubly-indexed family of wavelets (called children wavelets) by dilating and translating:

\psi^{a,b}(x) = \frac{1}{\sqrt{|a|}}\, \psi\!\left(\frac{x-b}{a}\right),

where a, b ∈ ℝ, a ≠ 0. The normalization has been chosen so that $||\psi^{a,b}|| = ||\psi||$ for all a, b. It is usually assumed that $||\psi|| = 1$. The continuous wavelet transform (CWT) of a function f ∈ L^2(ℝ) for this wavelet family is defined as

(T^{\rm wav} f)(a,b) = \langle f, \psi^{a,b} \rangle = \int_{-\infty}^{\infty} f(x)\, \psi^{a,b}(x)\, dx.
§.§ Wavelets based on the second derivative of the Gompertz function
Consider the second derivative of the Gompertz function (<ref>) with parameters x_max = 1, s = 1, t_0 = 0, i.e., $x(t) = e^{-e^{-t}}$. Since $x'(t) = -x \log x = e^{-e^{-t}} e^{-t}$, then by (<ref>), or directly, we get

x''(t) = x \log x\, (1 + \log x) = e^{-e^{-t}} e^{-t} (e^{-t} - 1).
Note that $x'(t) = e^{-e^{-t}} e^{-t}$ is also the probability density function (pdf) of the Gumbel distribution. We calculate three integrals related to (<ref>), each by substituting $x = e^{-e^{-t}}$, $x'(t) = -x\log x = e^{-e^{-t}} e^{-t}$:

\int_{-\infty}^{\infty} x''(t)\, dt = \int_{-\infty}^{\infty} e^{-e^{-t}} e^{-t}(e^{-t}-1)\, dt = -\int_0^1 (\log x + 1)\, dx = \left(-x\log x\right)\Big|_0^1 = 0,

\int_{-\infty}^{\infty} |x''(t)|\, dt = \int_{-\infty}^{0} e^{-e^{-t}} e^{-t}(e^{-t}-1)\, dt + \int_{0}^{\infty} e^{-e^{-t}} e^{-t}(1-e^{-t})\, dt = -\int_0^{1/e} (\log x + 1)\, dx + \int_{1/e}^{1} (\log x + 1)\, dx = \frac{1}{e} + \frac{1}{e} = \frac{2}{e},

\int_{-\infty}^{\infty} \left(x''(t)\right)^2 dt = \int_{-\infty}^{\infty} \left(e^{-e^{-t}}\right)^2 \left(e^{-t}\right)^2 (e^{-t}-1)^2\, dt = -\int_0^1 x \log x\, (1+\log x)^2\, dx = \frac{1}{8}.
Let us now define the Gompertz mother wavelet ψ_2(t) (Figure <ref>) by

\psi_2(t) = 2\sqrt{2}\, x''(t) = 2\sqrt{2}\, e^{-e^{-t}} e^{-t}(e^{-t}-1), \qquad t \in \mathbb{R}.

By definition (<ref>) and (<ref>)–(<ref>) we have

\int_{-\infty}^{\infty} \psi_2(t)\, dt = 0, \qquad ||\psi_2|| = 1, \qquad \psi_2(t) \in L^1(\mathbb{R}) \cap L^2(\mathbb{R}).
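Both normalization facts are easy to verify numerically, e.g. with SciPy (the integration limits ±20 are a practical truncation; the wavelet decays double-exponentially on the left and exponentially on the right):

import numpy as np
from scipy.integrate import quad

def psi2(t):
    # Gompertz mother wavelet: 2*sqrt(2) * x''(t) for x(t) = exp(-exp(-t))
    return 2.0 * np.sqrt(2.0) * np.exp(-np.exp(-t)) * np.exp(-t) * (np.exp(-t) - 1.0)

mean, _ = quad(psi2, -20, 20)
norm_sq, _ = quad(lambda t: psi2(t)**2, -20, 20)
print(mean)      # ~0 : zero mean
print(norm_sq)   # ~1 : unit L^2 norm, since (2*sqrt(2))^2 * 1/8 = 1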
Although it is easy to prove that in our case the sufficient conditions for the admissibility condition (<ref>), described in the previous subsection, are fulfilled, we will show a direct and interesting calculation to prove (<ref>). Let us start with an observation concerning the Euler Gamma function

\Gamma(z) = \int_0^{\infty} x^{z-1} e^{-x}\, dx, \qquad \mathrm{Re}(z) > 0.

After substituting z = 1 + iξ in (<ref>) and changing the integration variable x = e^{-t}, we obtain

\Gamma(1+i\xi) = \int_0^{\infty} x^{i\xi} e^{-x}\, dx = \int_{-\infty}^{\infty} e^{-e^{-t}} e^{-t}\, e^{-i\xi t}\, dt,

which shows that the Fourier transform of the first derivative of the Gompertz function $x'(t) = e^{-e^{-t}} e^{-t}$ is

F(x')(\xi) = \widehat{x'}(\xi) = \frac{1}{\sqrt{2\pi}}\, \Gamma(1+i\xi).
Using the formula for the Fourier transform of a derivative, (<ref>), and then definition (<ref>), we get

F(x'')(\xi) = \widehat{x''}(\xi) = \frac{i\xi}{\sqrt{2\pi}}\, \Gamma(1+i\xi),

and

F(\psi_2)(\xi) = \hat{\psi}_2(\xi) = \frac{2i\xi}{\sqrt{\pi}}\, \Gamma(1+i\xi).
Now we can show that for the Gompertz mother wavelet ψ_2(t) the admissibility condition (<ref>) is satisfied, and the integral can even be expressed in closed form in terms of the Riemann zeta function ζ(z). Namely, using (<ref>), the well-known property of the Gamma function

|\Gamma(1+i\xi)|^2 = \Gamma(1+i\xi)\,\Gamma(1-i\xi) = \frac{\pi\xi}{\sinh \pi\xi},

and the following formula from Dwight's Tables <cit.> (item no. 860.502):

\int_0^{\infty} \frac{x^2}{\sinh ax}\, dx = \frac{7}{2a^3}\, \zeta(3),

we obtain

C_{\psi_2} = 2\pi \int_{-\infty}^{\infty} |\xi|^{-1}\, |\hat{\psi}_2(\xi)|^2\, d\xi = 2\pi \int_{-\infty}^{\infty} \frac{1}{|\xi|} \left|\frac{2i\xi}{\sqrt{\pi}}\, \Gamma(1+i\xi)\right|^2 d\xi = 2\pi \int_{-\infty}^{\infty} \frac{1}{|\xi|} \cdot \frac{4\xi^2}{\pi} \cdot \frac{\pi\xi}{\sinh \pi\xi}\, d\xi = 2\pi \int_{-\infty}^{\infty} \frac{4\xi^2}{|\sinh \pi\xi|}\, d\xi = \frac{56\,\zeta(3)}{\pi^2} < \infty.
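The closed-form constant can be cross-checked by direct numerical quadrature of the last integral (a sketch; the upper limit 50 truncates an integrand that decays like e^{-πξ}):

import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

# 2*pi * int_{-inf}^{inf} 4 xi^2/|sinh(pi xi)| d(xi) = 16*pi * int_0^inf xi^2/sinh(pi xi) d(xi)
numeric, _ = quad(lambda x: 16.0 * np.pi * x**2 / np.sinh(np.pi * x), 1e-12, 50)
print(numeric, 56.0 * zeta(3) / np.pi**2)   # both ~6.82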
We can now generate a doubly-indexed family of Gompertz wavelets from the mother Gompertz wavelet ψ_2 by dilating and translating:

\psi_2^{a,b}(t) = \frac{1}{\sqrt{a}}\, \psi_2\!\left(\frac{t-b}{a}\right),

where a, b ∈ ℝ, a > 0.
§.§ Wavelets based on higher derivatives of the Gompertz function
Similarly to the previous subsection, we consider the nth (n = 3, 4, ...) derivative x^(n)(t) of the Gompertz function $x(t) = e^{-e^{-t}}$ with parameters x_max = 1, s = 1, t_0 = 0. For this particular case formula (<ref>) reads

x^{(n)}(t) = (-1)^n x(t) \sum_{k=1}^{n} {n \brace k} \log^k x(t).
Since

\int_{-\infty}^{\infty} |x^{(n)}(t)|\, dt = \int_{-\infty}^{\infty} \left|(-1)^n x(t) \log x(t) \sum_{k=1}^{n} {n \brace k} \log^{k-1} x(t)\right| dt = \int_0^1 \left|\sum_{k=1}^{n} {n \brace k} \log^{k-1} x\right| dx \le \sum_{k=1}^{n} {n \brace k} \int_0^1 |\log^{k-1} x|\, dx = \sum_{k=1}^{n} {n \brace k}\, (k-1)! < \infty,

we see that $x^{(n)}(t) \in L^1(\mathbb{R})$.
Rza̧dkowski et al <cit.> (Theorem 3.2, p. 377) proved the following formula for the derivatives of the Gompertz function $x(t) = e^{-e^{-t}}$:

\int_{-\infty}^{\infty} \left(x^{(n)}(t)\right)^2 dt = \frac{(-1)^n B_{2n}\left(1-2^{2n}\right)}{2n} = \frac{|B_{2n}|\left(2^{2n}-1\right)}{2n}, \qquad n = 1, 2, \ldots,
where B_n is the nth Bernoulli number. The Bernoulli numbers are well described in the book by Duren <cit.>. For the convenience of the reader, we sketch here only some of their basic properties.
The Bernoulli numbers B_n, n = 1, 2, ... have the following exponential generating function

B_0 + B_1 z + B_2 \frac{z^2}{2!} + \cdots = \frac{z}{e^z - 1}, \qquad |z| < 2\pi,

and vanish for all odd n ≥ 3. The numbers are rational and appear in relations such as

\sum_{k=1}^{m-1} k^n = \frac{1}{n+1} \sum_{j=0}^{n} \binom{n+1}{j} B_j\, m^{n+1-j}, \qquad m, n \ge 1,

or

\sum_{k=1}^{\infty} \frac{1}{k^{2n}} = (-1)^{n+1} \frac{2^{2n-1} \pi^{2n}}{(2n)!}\, B_{2n}, \qquad n = 1, 2, \ldots

The first few nonzero Bernoulli numbers are

B_0 = 1, \quad B_1 = -\frac{1}{2}, \quad B_2 = \frac{1}{6}, \quad B_4 = -\frac{1}{30}, \quad B_6 = \frac{1}{42}, \quad B_8 = -\frac{1}{30}, \quad B_{10} = \frac{5}{66}.
Note that in the case n = 2 formula (<ref>) agrees with calculation (<ref>), because

\frac{(-1)^2 B_4 \left(1-2^4\right)}{4} = \frac{1}{4} \cdot \left(-\frac{1}{30}\right) \cdot (-15) = \frac{1}{8}.
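A numerical check of (<ref>) for n = 2 (a sketch using SciPy's Bernoulli numbers):

import numpy as np
from scipy.integrate import quad
from scipy.special import bernoulli

def xpp(t):
    # second derivative of x(t) = exp(-exp(-t))
    return np.exp(-np.exp(-t)) * np.exp(-t) * (np.exp(-t) - 1.0)

n = 2
lhs, _ = quad(lambda t: xpp(t)**2, -20, 20)
B_2n = bernoulli(2 * n)[-1]                          # B_4 = -1/30
rhs = abs(B_2n) * (2**(2 * n) - 1) / (2 * n)         # |B_4| * 15 / 4 = 1/8
print(lhs, rhs)                                      # both 0.125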
We can now define the Gompertz mother wavelet ψ_n(t), related to the nth derivative of the Gompertz function (<ref>), as

\psi_n(t) = \left(\frac{2n}{|B_{2n}|\left(2^{2n}-1\right)}\right)^{1/2} x^{(n)}(t).

Because of (<ref>), the L^2 norm $||\psi_n|| = 1$.
We will now show that for ψ_n the admissibility condition (<ref>) is fulfilled. Applying the formula for the Fourier transform of a derivative (n-1) times in (<ref>), we get

F(x^{(n)})(\xi) = \widehat{x^{(n)}}(\xi) = \frac{(i\xi)^{n-1}}{\sqrt{2\pi}}\, \Gamma(1+i\xi),

which gives, by definition (<ref>),

F(\psi_n)(\xi) = \hat{\psi}_n(\xi) = \left(\frac{n}{|B_{2n}|\left(2^{2n}-1\right)\pi}\right)^{1/2} (i\xi)^{n-1}\, \Gamma(1+i\xi).
Using (<ref>) and the following formula from Dwight's Tables <cit.> (item no. 860.509):

\int_0^{\infty} \frac{x^{p-1}}{\sinh ax}\, dx = \frac{2\Gamma(p)}{a^p}\left(1 - \frac{1}{2^p}\right)\zeta(p), \qquad a > 0, \; p > 1,

we obtain

C_{\psi_n} = 2\pi \int_{-\infty}^{\infty} |\xi|^{-1}\, |\hat{\psi}_n(\xi)|^2\, d\xi = 2\pi \int_{-\infty}^{\infty} \frac{1}{|\xi|} \cdot \frac{n}{|B_{2n}|\left(2^{2n}-1\right)\pi} \cdot \xi^{2n-2} \cdot \frac{\pi\xi}{\sinh \pi\xi}\, d\xi = \frac{2\pi n}{|B_{2n}|\left(2^{2n}-1\right)} \int_{-\infty}^{\infty} \frac{\xi^{2n-2}}{|\sinh \pi\xi|}\, d\xi = \frac{2\pi n}{|B_{2n}|\left(2^{2n}-1\right)} \cdot \frac{4\Gamma(2n-1)}{\pi^{2n-1}}\left(1 - \frac{1}{2^{2n-1}}\right)\zeta(2n-1) = \frac{n\left(2^{2n-1}-1\right)\Gamma(2n-1)}{|B_{2n}|\left(2^{2n}-1\right) \cdot 2^{2n-4} \cdot \pi^{2n-2}}\, \zeta(2n-1) < \infty,

where ζ(z) is the Riemann zeta function.
As usual, we generate from the mother wavelet ψ_n a doubly-indexed family of wavelets by dilating and translating
ψ_n^a,b(t)=1/√(a)ψ_n(t-b/a),
where a,b∈ℝ, a > 0, n=2,3,….
§ APPLICATIONS
We will look, in a time series (y_n), for points corresponding to zeros of the second or the third derivative of the Gompertz function. This is equivalent to detecting points where the sequence of second differences,
Δ^2y_n=y_n+1-2y_n+y_n-1,
takes a value close to zero or attains a local extremum, respectively.
To calculate the CWT (Continuous Wavelet Transform) coefficients for (Δ^2y_n), we implement the mother wavelet ψ_2(t), (<ref>), in Matlab's wavelet toolbox. Two parameters of the Gompertz wavelet, the shift (translation) b and the dilation a, can be read from the CWT scalogram by finding a point where the sum (<ref>) (denoted on the scalogram by Index) is locally maximal. It remains to determine the third parameter of the wave, i.e., its saturation level y_max. Assume that the time series (y_n) locally follows the Gompertz function y_n≈ y(n)=y_max e^{-e^{-(n-b)/a}}. By definition (<ref>) we have
y”(t)=y_max/2√(2)a^3/2ψ_2^a,b(t).
The continuous wavelet transform CWT (<ref>) of the function y”(t), by using Gompertz wavelets ψ_2^c,d
(T^wavy”)(c,d)=⟨ y”, ψ_2^c,d⟩ =∫_-∞^∞ y”(t) ψ_2^c,d(t) dt,
takes the maximum value when c=a and d=b.
By the Cauchy–Schwarz inequality
|(T^wavy”)(c,d)|=|⟨ y”, ψ_2^c,d⟩ |≤ ||y”|| ||ψ_2^c,d||=y_max/2√(2)a^3/2||ψ_2^a,b|| ||ψ_2^c,d||=y_max/2√(2)a^3/2.
However, the maximum is reached for c=a, d=b, because:
(T^wavy”)(a,b)=⟨ y”, ψ_2^a,b⟩ =y_max/2√(2)a^3/2⟨ψ_2^a,b , ψ_2^a,b⟩=y_max/2√(2)a^3/2.
In view of the Lemma, for the maximal value of Index we get successively
Index=∑_n Δ^2y_nψ_2^a,b(n)≈∑_nΔ^2y(n)ψ_2^a,b(n)
≈∫_-∞^∞y”(t)ψ_2^a,b(t)dt
=∫_-∞^∞y_max/2√(2)a^3/2ψ_2^a,b(t)ψ_2^a,b(t)dt
=y_max/2√(2)a^3/2∫_-∞^∞(ψ_2^a,b(t))^2 dt= y_max/2√(2)a^3/2.
Using (<ref>) we can estimate the saturation level y_max as follows
y_max≈ 2√(2)a^3/2∑_nΔ^2y_nψ_2^a,b(n)= 2√(2)a^3/2Index.
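For illustration, this estimation step is straightforward to code; the sketch below is ours (not taken from the original software) and assumes the scale a and shift b have already been read off the CWT scalogram:

```python
import numpy as np

def psi2(t):
    """Gompertz mother wavelet: normalized second derivative of exp(-exp(-t))."""
    u = np.exp(-t)
    return 2 * np.sqrt(2) * np.exp(-u) * (u**2 - u)   # factor sqrt(8) gives unit L2 norm

def estimate_ymax(y, a, b):
    """Estimate the saturation level of a sampled series y_0, ..., y_N."""
    n = np.arange(1, len(y) - 1)
    d2 = y[2:] - 2 * y[1:-1] + y[:-2]                    # central second differences
    index = np.sum(d2 * psi2((n - b) / a)) / np.sqrt(a)  # Index = <Delta^2 y, psi_2^{a,b}>
    return 2 * np.sqrt(2) * a**1.5 * index
```

Applied to the two-Gompertz example of the next subsection with (a, b) = (8, 25), this reproduces the first saturation estimate obtained there.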
§.§ The case of two exact Gompertz functions
For illustration, consider the function y(t) composed of two Gompertz functions, Fig. <ref>:
y(t)=100000e^{-e^{-(t-25)/8}}+200000e^{-e^{-(t-200)/20}}, t∈ [0, 350].
Denote y_n=y(n), n=0,1,2,…,350, calculate the first differences Δ^1y_n=y_n-y_n-1, n=1,2,…,350 (Fig. <ref>) and the central second differences Δ^2y_n=y_n+1-2y_n+y_n-1, n=1,2,…,349 (Fig. <ref>).
The CWT applied to the second differences produces the scalogram shown in Fig. <ref>. We can read from it two parameters, b=25 and a=8, for the first Gompertz wave and, similarly, b=200 and a=20 for the second. At these points, the value of the Index is locally the largest. Both saturation levels can be calculated using formula (<ref>). The saturation level for the first wave is
y_max≈ 2√(2)a^3/2∑_nΔ^2y_nψ_2^a,b(n)=2√(2)· 8^3/2· 1551=99,264,
and analogously for the second
y_max≈ 2√(2)a^3/2∑_nΔ^2y_nψ_2^a,b(n)=2√(2)· 20^3/2· 789.5=199,729.
Since the second differences (not the second derivatives) were used for the analysis, the calculated saturation levels (28) and (29) cannot be expected to be exactly equal to the theoretical ones, i.e., 100,000 and 200,000, respectively.
§.§ Application for the analysis of the spread of Covid-19 cases on the example of Saudi Arabia
Rza̧dkowski and Figlia <cit.> presented the use of logistic wavelets to model the spread of the Covid-19 pandemic in several countries. It turns out that sometimes it is better to use the Gompertz curve, rather than the logistic one, to model particular waves of the pandemic. Dhahbi et al. <cit.> modeled, using the Gompertz function, the extensive first wave of Covid-19 cases in Saudi Arabia. They considered 264 days starting from March 12, 2020, until November 30, 2020.
Let us now examine the spread of Covid-19 for Saudi Arabia over a longer time period, from March 12, 2020, to July 20, 2022, covering 861 days. The time series of the total number of reported infections has been smoothed with a 7-day moving average and then denoted by (y_n). Therefore, n = 1 in the series (y_n) corresponds to March 18, 2020, and n = 854 is the last day covered by the analysis, i.e., July 19, 2022 (the last day for which the central second difference can be computed). Fig. <ref> shows in turn: the series (y_n) (Fig. <ref>), the first differences (Fig. <ref>) and the second differences (Fig. <ref>).
The scalogram in Fig. <ref> gives the values of the CWT coefficients for the second differences, using Gompertz wavelets. At the point indicated by the label, there is a maximum of Index for the large wave (consisting of several smaller waves) of cases studied by Dhahbi et al. <cit.>. The saturation level of this wave calculated by formula (<ref>) is
y_max≈ 2√(2)a^3/2∑_nΔ^2y_nψ_2^a,b(n)=2√(2)· 41^3/2· 497.8=369,637.
The saturation level (<ref>) is consistent with the estimates given by Dhahbi et al. <cit.> and in accordance with Fig. <ref>. It should be remembered that, at that time, the pandemic had not yet reached saturation.
Let us now examine the large single wave of cases visible in Fig. <ref> after day 600. The asymmetric shapes of the first differences (Fig. <ref>) and the second differences (Fig. <ref>) indicate that the Gompertz function could also be more suitable here for modeling than the logistic curve. This is confirmed by the CWT analysis, as seen in the scalograms of Fig. <ref>: on the left, the CWT analysis using logistic wavelets (Fig. <ref>), and on the right, using Gompertz wavelets (Fig. <ref>).
The maximum Index of 1307 for the logistic wavelet is smaller than the maximum Index of 1426 for the Gompertz wavelet. Note that both wavelets have the same L^2 norm equal to 1. This indicates that the Gompertz wavelet gives a better fit for the points of observation than the logistic one. The saturation level of the wave of cases, calculated by using the formula (<ref>) is
y_max≈ 2√(2)a^3/2∑_nΔ^2y_nψ_2^a,b(n)=2√(2)· 13^3/2· 1426=189,051
which is consistent with the observations, as is the day n=675 (January 21, 2022) of the highest smoothed number of cases.
§ CONCLUSIONS AND FURTHER WORK
In this paper we defined Gompertz wavelets of any order. We have shown that the admissibility condition holds for them. We also calculated their normalizing factors in the space of square integrable functions L^2(ℝ) and showed that they are expressed by an explicit formula in terms of Bernoulli numbers. Next, we illustrated the utility of second-order Gompertz wavelets in a theoretical situation, where the signal consists of two exact Gompertz functions. Then we used them to study the spread of the Covid-19 pandemic in Saudi Arabia.
In further work, we plan to apply the second- or higher-order Gompertz wavelets to some real-world data from the fields such as economics, finance or biology. One could also deal with some generalizations of the basic S-shaped curves to define the corresponding wavelets for them. For this purpose, formulas from the paper Rza̧dkowski and Urlińska <cit.> could be used, allowing for efficient computation of successive derivatives for a large class of S-shaped functions.
Conflict of Interests
The author declares that there is no conflict of interest related to the submitted manuscript.
Funding statement
The research was partially funded by the ’IDUB against COVID-19’ project granted by the Warsaw University of Technology (Warsaw, Poland) under the program Excellence Initiative: Research University (IDUB), grant no 1820/54/201/2020.
10
D Daubechies, I.: Ten Lectures on Wavelets. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, (1992)
DCB Dhahbi, A.B., Chargui, Y., Boulaaras, S., Rahali, S., Mhamdi, A.: The proliferation of Covid-19 in Saudi Arabia according to Gompertz model. Fractals 10, 2240251 (2022). DOI: 10.1142/S0218348X22402514
Dr Duren, P.: Invitation to Classical Analysis. American Mathematical Society, (2012)
Dw Dwight, H. B.: Tables of integrals and other mathematical data, 4th ed. The Macmillan Company, New York, (1961)
EB Estrada, E., Bartesaghi, P.: From networked SIS model to the Gompertz function. Appl. Math. Comput. 419, 126882 (2022). https://doi.org/10.1016/j.amc.2021.126882
G Gompertz, B.: On the nature of the function expressive of the law of human mortality, and on a new method of determining the value of life contingencies. Phil. Trans. Roy. Soc. 1, 513–585 (1825).
GKP Graham, R. L., Knuth, D. E., Patashnik, O.: Concrete mathematics: A foundation for computer science. Reading MA, Addison Wesley, (1994)
KBS Kundu, R., Basak, H., Singh, P.K. et al.: Fuzzy rank-based fusion of CNN models using Gompertz function for screening COVID-19 CT-scans. Sci. Rep. 11, 14133 (2021). https://doi.org/10.1038/s41598-021-93658-y
M Meyer, Y.: Wavelets, Vibrations and Scalings. CRM Monograph Series, American Mathematical Society, Providence, RI, USA, (1997)
MR Meyer, Y., Ryan, D.: Wavelets: Algorithms and Applications. Society for Industrial and Applied Mathematics, Philadelphia, PA, USA, (1996)
OEIS OEIS Foundation Inc. (2022), The On-Line Encyclopedia of Integer Sequences. Published electronically at https://oeis.org
ONF Ohnishi A., Namekawa Y., Fukui T.: Universality in COVID-19 spread in view of the Gompertz function. Progress of Theoretical and Experimental Physics 12, 123J01, (2020). https://doi.org/10.1093/ptep/ptaa148
Our Our World in Data. https://ourworldindata.org/coronavirus-source-data (Access: July 21, 2022).
RF Rza̧dkowski, G., Figlia, G.: Logistic wavelets and their application to model the spread of COVID-19 pandemic. Appl. Sci. 11 , 8147 (2021). https://doi.org/10.3390/app11178147
Rz1 Rza̧dkowski, G., Rza̧dkowski, W., Wójcicki, P.: On some connections between the Gompertz function and special numbers. J. Nonlinear Math. Phys. 3, 374–380 (2015). http://dx.doi.org/10.1080/14029251.2015.1079419
Rz2 Rza̧dkowski, G., Głażewska, I., Sawińska, K.: The Gompertz function and its applications in management. Foundations of Management 7, 185–190 (2015). DOI: 10.1515/fman-2015-0035
Rz3 Rza̧dkowski, G., Urlińska, M.: Some applications of the generalized Eulerian numbers. J. Comb. Theory Ser. A. 163, 85–97 (2019). DOI: https://doi.org/10.1016/j.jcta.2018.11.012
TT Tjørve, K. M. C., Tjørve, E.: The use of Gompertz models in growth analyses, and new Gompertz-model approach: An addition to the Unified-Richards family. PLoS ONE 12(6), e0178691 (2017). https://doi.org/10.1371/journal.pone.0178691
Wi Winsor, Ch. P.: The Gompertz curve as a growth curve. Proceedings of the National Academy of Sciences 1, 1–8 (1932).
|
http://arxiv.org/abs/2307.00572v1
|
20230702135223
|
Exploring the Implications of 2023 Pulsar Timing Array Datasets for Scalar-Induced Gravitational Waves and Primordial Black Holes
|
[
"Sai Wang",
"Zhi-Chao Zhao",
"Jun-Peng Li",
"Qing-Hua Zhu"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"astro-ph.HE",
"gr-qc"
] |
|
http://arxiv.org/abs/2307.00307v1
|
20230701113024
|
Spatio-Temporal Classification of Lung Ventilation Patterns using 3D EIT Images: A General Approach for Individualized Lung Function Evaluation
|
[
"Shuzhe Chen",
"Li Li",
"Zhichao Lin",
"Ke Zhang",
"Ying Gong",
"Lu Wang",
"Xu Wu",
"Maokun Li",
"Yuanlin Song",
"Fan Yang",
"Shenheng Xu"
] |
eess.IV
|
[
"eess.IV"
] |
Spatio-Temporal Classification of Lung Ventilation Patterns using 3D EIT Images: A General Approach for Individualized Lung Function Evaluation
Shuzhe Chen, Li Li, Zhichao Lin, Ke Zhang, Ying Gong, Lu Wang, Xu Wu, Maokun Li, Senior Member, IEEE, Yuanlin Song, Fan Yang, Fellow, IEEE, and Shenheng Xu, Member, IEEE,
This work was supported in part by the Institute for Precision Medicine,
Tsinghua University, National Natural Science Foundation of China (61971263 and 62171259),
Biren Technology, and BGP Inc. Shuzhe Chen and Li Li contributed equally to this work. (Corresponding author: Maokun Li.)
Shuzhe Chen, Zhichao Lin, Ke Zhang, Maokun Li, Fan Yang and Shenheng Xu are with Department of Electronic Engineering, Institute of Precision Medicine, Tsinghua University, Beijing 100084, China, Beijing National Research Center for Information Science and Technology (BNRist), (e-mails: csz21@mails.tsinghua.edu.cn, lzc19@mails.tsinghua.edu.cn, kzhang320@mail.tsinghua.edu.cn, maokunli@tsinghua.edu.cn, fanyang@tsinghua.edu.cn, shxu@tsinghua.edu.cn).
Li Li, Ying Gong, Lu Wang, Xu Wu, and Yuanlin Song are with the Department of Pulmonary and Critical Care Medicine, Zhongshan Hospital, Fudan University, 180 Fenglin Rd, Shanghai 200032, China (e-mails:li.li@zs-hospital.sh.cn, gong.ying@zs-hospital.sh.cn, wu.xu@zs-hospital.sh.cn, bluewang723@163.com, song.yuanlin@zs-hospital.sh.cn).
August 1, 2023
The Pulmonary Function Test (PFT) is a widely utilized and rigorous classification test for lung function evaluation, serving as a comprehensive tool for lung diagnosis. Meanwhile, Electrical Impedance Tomography (EIT) is a rapidly advancing clinical technique that visualizes conductivity distribution induced by ventilation. EIT provides additional spatial and temporal information on lung ventilation beyond traditional PFT. However, relying solely on conventional isolated interpretations of PFT results and EIT images overlooks the continuous dynamic aspects of lung ventilation.
This study aims to classify lung ventilation patterns by extracting spatial and temporal features from the 3D EIT image series.
The study uses a Variational Autoencoder network with a MultiRes block to compress the spatial distribution in a 3D image into a one-dimensional vector. These vectors are then concatenated to create a feature map for the exhibition of temporal features. A simple convolutional neural network is used for classification.
Data collected from 137 subjects were finally used for training. The model is first validated by ten-fold and leave-one-out cross-validation. The accuracy and sensitivity of the normal ventilation mode are 0.95 and 1.00, and the f1-score is 0.94. Furthermore, we check the reliability and feasibility of the proposed pipeline by testing it on nine newly recruited subjects. Our results show that the pipeline correctly predicts the ventilation mode of 8 out of 9 subjects.
The study demonstrates the potential of using image series for lung ventilation mode classification, providing a feasible method for patient prescreening and presenting an alternative form of PFT.
lung ventilation classification, electrical impedance tomography(EIT), pulmonary function test(PFT), variational autoencoder(VAE)
§ INTRODUCTION
Respiratory diseases are the third leading cause of death worldwide and severely impact people's quality of life<cit.>. The prevalence of chronic respiratory diseases (CRDs) has increased by about 40% in the past thirty years<cit.>. However, the diagnosis rate of CRDs is far lower than the prevalence rate, and the treatment rate is even lower than the diagnosis rate<cit.>. Early screening and diagnosis of ventilation diseases are of critical significance, yet they have not received enough attention.
The lung ventilation function is evaluated using the Pulmonary Function Test (PFT), in which the airflow inhaled and exhaled by the lungs is recorded and measured by a flow meter. A well-trained physician instructs the subjects to alternate between tidal breath and forced expiration following the regulations of the American Thoracic Society (ATS)/European Respiratory Society (ERS)<cit.>.
Like other clinical tests, such as blood tests, PFT requires establishing normal values for accurate diagnosis. However, the challenge with PFT is that normal values can vary significantly from person to person<cit.> compared to other tests. Ventilation performance depends on numerous factors, including age, gender, body mass index, and even geography<cit.>.
Multivariate regression based on PFT results of many normal subjects is used to establish predicted values for a specific person. However, recruiting and measuring enough normal people in a specific area is challenging and demanding. Moreover, PFT results can only evaluate lung function without providing spatial information.
The large-scale implementation of PFT is challenging for several reasons. First, it requires a significant workforce and material resources, making it difficult to carry out in underdeveloped areas. Second, the testing cycle is long, and data statistics and analysis are highly demanding. Additionally, the lung function of the population changes objectively and dynamically over time <cit.>, making it difficult to carry out periodic repetitions. Nevertheless, industrialization has contributed to an escalation in environmental pollution, decreasing the number of available healthy individuals. This decline poses a challenge when attempting to recruit sufficient healthy subjects for research or studies <cit.>.
Electrical Impedance Tomography (EIT) is an emerging medical imaging modality that can detect conductivity changes in the measured area. The air content in the lungs varies during ventilation, and the resulting changes, especially the spatial distribution of electrical conductivity, can be captured by electrodes placed around the chest<cit.>. EIT images are of significant clinical value<cit.>, including Positive End-expiratory Pressure (PEEP) titration guidance<cit.>, regional distribution of ventilation<cit.>, and ventilation and perfusion matching<cit.>.
Compared to PFT, which provides an overall evaluation of lung function, EIT can discriminate whether changes in global lung function stem from alterations in ventilation distribution or variations in ventilation magnitude. This distinction enables a more precise assessment of regional lung function and facilitates targeted prescreening efforts<cit.>. Typically, lung functions are assessed both before and after medical procedures. In this process, 2D images taken at specific time points are often compared and analyzed based on manually crafted features relying on prior physiological knowledge. Little attention, however, has been paid to the image series as a whole or to drawing general conclusions about lung function.
In this study, we use 3D EIT image series to classify lung ventilation function modes from a general perspective. We extract spatial-temporal information using a Variational Autoencoder (VAE). Our proposed method achieved the highest accuracy and AUC of 95.6% and 0.96, respectively, in binary classification based on in-vivo measurements. Furthermore, we also applied this method to four-category problems (normal, restrictive, obstructive, and mixed), and the maximum accuracy and f1-score were 86.3% and 0.90, respectively.
Our contributions are as follows:
* Three-dimensional lung ventilation image series are reconstructed in a low-cost, radiation-free, and non-invasive manner by EIT.
* Spatial and temporal information in 3D image sequences during forced exhalation were considered simultaneously, representing an improvement over previous analyses of isolated 2D images.
* A concise and practical VAE network has been proposed for dimensional image reduction.
* A new pipeline has been developed to classify lung ventilation patterns, aiming to facilitate lung function diagnosis without dependence on expensive predicted values.
* The focus is on classifying ventilation patterns rather than on changes in lung ventilation in a specific situation.
The remainder of the article is organized as follows. Section <ref> examines related works on lung ventilation pattern classification and the clinical application of EIT. Section <ref> provides a detailed introduction to the preliminaries of EIT and VAE. In Section <ref>, we describe the proposed method. The EIT measurement system and study protocol are presented in Section <ref>. In Section <ref>, we test and optimize the network's performance. Finally, the work is summarized in Section <ref>.
§ RELATED WORK
§.§ Automated PFT Assisted by Machine Learning
PFT is an established and effective diagnostic tool for assessing lung function. During the test, subjects are instructed to perform tidal breath and forced expiration, allowing for measurement of volume and speed. The results are typically interpreted by physicians using predefined cutoffs in accordance with published guidelines<cit.> to identify a typical pattern. However, this process heavily relies on the doctor's experience and subjective judgment<cit.>. Additionally, many people with chronic obstructive pulmonary disease (COPD) symptoms do not meet the diagnostic criteria<cit.>. PFT results are usually interpreted by clinicians using discrete numeric data according to published guidelines. However, inter-rater variability among clinicians is known to occur, and inaccuracies in interpretation can impact patient care. As a result, many studies have focused on developing automated interpretation systems based on PFT values to reduce misdiagnosis rates and alleviate the burden on doctors.
Automated interpretation of PFT values has been proposed as the first stage in modeling the decision-making process of physicians<cit.>. Moreover, more advanced classification methods have been developed. In <cit.>, a multi-layer perceptron was proposed to classify obstructive and non-obstructive patients, achieving an accuracy of 83.7% with the spirometry data of Forced Vital Capacity (FVC), Forced Expiratory Volume in one second (FEV_1), and Forced Expiratory Flow (FEF_25-75). Disease-specific prediction of COPD and DPLD, as obstructive and non-obstructive, respectively, achieved approximately 90% accuracy in the training dataset.
In addition, researchers in <cit.> have considered the area under the expiratory flow-volume curve as a new indicator, improving the diagnostic classification rates. Furthermore, PFT values are believed to contain adequate information currently neglected by the diagnostic workflow. A fully convolutional network was applied to extract this latent information from the sequence of Flow-Volume loop data in PFT <cit.>, enabling discrimination of the structural phenotype of chronic obstructive pulmonary disease that traditional PFT interpretation cannot accomplish using discrete single values.
§.§ Structure-based prediction of lung function
The static structure of the lungs determines their dynamic function. Modalities such as CT, MRI, and X-ray have been used to assess lung volume, parenchymal change, airway structure, air-trapping, and other structural features. These features are believed to be able to predict the functional parameters of the lungs.
In <cit.>, an end-to-end scheme was used to predict PFT values, including FVC and FEV_1, using low-dose chest CT images, achieving accuracies of 89.6% and 85.9%, respectively. In <cit.>, lung ventilation heterogeneity in COPD patients was predicted using support vector machines (SVM) based on CT texture analysis. The PFT results and ^3He-MRI were considered ground truth, and the predicted ventilation maps had an accuracy of 88% and an AUC of 0.82.
An integrated 3D-CNN and parametric-response mapping model<cit.> is proposed to classify COPD subjects using CT-based variables, achieving an accuracy of 89.3% and a sensitivity of 88.3% in five-fold cross-validation.
Deep learning has also been applied to discover subvisual abnormalities in CT scans related to COVID-19 in an interpretable manner <cit.>.
X-ray images were also used for early assessment of the lung function of coronavirus patients with the help of invariant markers<cit.>. MRI-derived regional flow-volume loops were also applied to detect chronic lung allograft dysfunction at an early stage<cit.>. Meanwhile, dynamic and functional imaging is developing rapidly among traditional modalities; 4D-CT <cit.> and hyperpolarized MRI <cit.> are typical representatives.
§.§ Clinical Application of EIT
EIT is a non-invasive imaging technique that provides real-time information on the distribution of electrical conductivity changes in the lung tissue, which is directly related to the respiration phase. Unlike other imaging modalities, EIT does not involve ionizing radiation and is relatively inexpensive, portable, and can be used at the bedside. Therefore, it has great potential for monitoring and evaluating lung function in various clinical scenarios<cit.>.
2D-EIT was studied on 14 healthy individuals <cit.>, which showed an accuracy of 98% in predicting PFT values using EIT values alone. The device used in the study was portable and may have potential for home lung monitoring. Similarly, other studies have been conducted on 35 children with cystic fibrosis<cit.> and seven healthy individuals conducting forced expiration maneuvers<cit.>.
Regional lung function evaluation is crucial in clinical settings <cit.>, but it is challenging to assess without EIT. Medical hypotheses that are based on logical deduction can be validated by EIT through the distribution of ventilation. Studies have reported the observation of spatial and temporal inhomogeneous ventilation distributions in patients with chronic obstructive pulmonary disease (COPD) <cit.>, cystic fibrosis <cit.>, idiopathic pulmonary fibrosis<cit.>, and smokers <cit.>. These findings demonstrate the potential of EIT to provide valuable information on regional lung function in various respiratory conditions.
The wide acceptance of these physiological findings within the medical community confirms EIT's reliability. Consequently, once the hypothesis is validated, EIT has proven to be a valuable tool for assessing the effectiveness of different treatments. For example, studies have shown that EIT can be used to evaluate the effectiveness of pulmonary rehabilitation<cit.>, bronchodilators<cit.>, position changes from bed to a wheelchair<cit.>, and even cardiac surgery<cit.>. EIT can also guide tracheal tube placement<cit.> and aid in PEEP titration<cit.>.
Although EIT has shown promising results in correlating with PFT results and evaluating lung function in specific disease scenarios, there are still limitations in the current studies:
* The sample size in most studies is relatively small, limiting the generalizability of the findings.
* There is still a lack of standardization in the EIT data acquisition and analysis procedures.
* EIT measurements are sensitive to various factors, such as electrode positioning, patient movement, and chest wall abnormalities, which may affect the accuracy and reproducibility of the results.
* EIT is still considered an emerging technology, and there is a need for further research to establish its clinical utility and cost-effectiveness in routine clinical practice.
Despite these limitations, EIT holds great promise in pulmonary medicine as a non-invasive, radiation-free, and portable lung function assessment and monitoring tool. Further studies with larger sample sizes and standardization of EIT data acquisition and analysis procedures are needed to establish its clinical usefulness.
§ PRELIMINARIES
§.§ EIT Formulation
EIT is a non-invasive medical imaging technique that utilizes small electrical currents to create images of the internal conductivity distribution within an object. In the context of lung imaging, EIT involves the placement of multiple electrodes on the surface of the chest. These electrodes serve as excitation points for applying small alternating currents. By measuring the resulting voltage distribution at each electrode, EIT can generate images depicting the lungs' internal conductivity distribution (see Fig.<ref>). The conductivity distribution within the lungs changes as air is inhaled and exhaled during breathing. This dynamic feature of EIT makes it a valuable tool for monitoring lung function and detecting abnormalities in real-time. Moreover, EIT is radiation-free and non-invasive, making it a safe imaging technique for repeated and continuous monitoring of lung function.
Reconstructing the conductivity distribution from the boundary voltage data using EIT is a highly ill-posed problem <cit.>, particularly in the case of 3D EIT. The algorithm used in this work is based on 3D time-difference image reconstruction, as described in previous studies<cit.>. The basic principle of this algorithm is briefly introduced below. The chest's conductivity distribution, denoted by σ, is defined on a tetrahedral mesh. Let d and d_* denote the simulated and measured boundary voltages, respectively, given the injection-measurement pattern. The function S(·), also known as the forward model, maps the conductivity distribution to the boundary voltage. Around an initial point σ_0, a linear approximation is used:
S(σ) ≈ S(σ_0) + J_σ_0·Δσ
where Δσ is a small conductivity change around σ_0, and J_σ_0 is the Jacobian matrix of S(σ) evaluated at σ_0:
J_σ_0 = ∂ S(σ _0)/∂σ _0.
Here, we focus on the difference in the conductivity distribution. Let the measured data and conductivity distribution at t_1 and t_2 be denoted by d_1*, d_2* and σ_1, σ_2, respectively. The time-difference EIT can be approximated as follows:
d_2*- d_1*≈ J_σ _0·(σ_2-σ_1)
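For illustration, the time-difference image can then be obtained from this linearization by a one-step regularized least-squares solve; the NumPy sketch below is a minimal example assuming a precomputed Jacobian J and an illustrative Tikhonov weight lam (the actual reconstruction matrix and regularization parameter used in the study are not reproduced here):

```python
import numpy as np

def time_difference_eit(J, d1, d2, lam=1e-3):
    """One-step Tikhonov-regularized time-difference reconstruction (sketch).

    Solves min_ds ||J @ ds - (d2 - d1)||^2 + lam^2 * ||ds||^2, i.e.
    ds = (J^T J + lam^2 I)^{-1} J^T (d2 - d1).
    """
    dd = d2 - d1                                   # boundary-voltage difference
    A = J.T @ J + lam**2 * np.eye(J.shape[1])      # regularized normal matrix
    return np.linalg.solve(A, J.T @ dd)            # conductivity change per mesh element
```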
§.§ The Proposed VAE
Variational autoencoder (VAE) is a deep generative network <cit.> that can encode high-dimensional data into a lower-dimensional latent space representation. Herein, VAE is employed to acquire a compressed representation of 3D EIT images. The 3D EIT images are fed into the encoder network, which maps the data to a lower-dimensional latent code z.
Then a decoder would reconstruct the original data from z as shown in Fig.<ref>.
A three-dimensional convolutional neural network (CNN) is applied for both the encoder and decoder blocks. The encoder is stacked with five encoding blocks with different numbers of channels, followed by a flatten layer and a dense layer. The flattened output is then passed through two dense layers to obtain the mean and standard deviation of the latent space distribution. The mean and standard deviation are used to sample from the distribution and obtain the latent representation of the input image. The decoder consists of several layers of 3D transposed convolutional blocks, which perform the opposite operation of the encoder. The transposed convolutional blocks gradually increase the dimensions of the input until the output matches the original input dimensions.
A MultiRes block<cit.> is the common part of the encoder and decoder and the key component of the proposed structure. Three convolutional blocks are connected sequentially to capture spatial features at multiple resolutions. Moreover, a residual connection is introduced by a 1 × 1 convolutional layer to capture some additional spatial information.
The purpose of VAE is to model the real probability distribution of the training data p_r( x) with a latent distribution p( z), which is commonly set as standard distribution 𝒩(0, I). The distribution of the generated samples p( x) can be written as:
p( x) = ∫ p( x| z) p( z) d z
Using the Kullback–Leibler (KL) divergence, which is a measure of the difference between two distributions, the optimization principle of VAE can be written as follows:
min_p( x| z) D_KL(p_r( x) ‖ p( x))
The corresponding objective function is derived as:
ℒ = 𝔼_x∼p̃( x)[D_KL(q( z| x) ‖ p( z)) - ∫ q( z| x) ln p( x| z) d z]
Assume that q( z| x)∼𝒩(μ, Σ), where μ and Σ = diag{σ_i^2} are the mean vector and the diagonal covariance matrix produced by the VAE encoder. Then the first KL term in Eq.<ref> can be written as:
ℒ_KL = 1/2∑_i=1^n(μ_i^2(x)+σ_i^2(x)-lnσ_i^2(x)-1)
where n is the length of the latent code z. The KL loss encourages the distribution in the latent space to be close to a standard normal distribution, which leads to a smoother and more continuous latent space structure. The second term is approximated <cit.> as:
ℒ_MSE = 1/N_x x - x̂_2^2
where x and x̂ represent the raw and reconstructed images respectively, and N_x denotes the total number of pixels in a single 3D image. This term ensures that the reconstructed images are faithful to the input images.
In the total loss, the ℒ_KL is weighted by λ = 10^-3 to prevent it from dominating the reconstruction loss during training.
Loss = ℒ_MSE+λℒ_KL
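In code, the combined objective amounts to only a few lines; a minimal TensorFlow sketch consistent with the equations above (the exact reductions in the authors' implementation may differ):

```python
import tensorflow as tf

def vae_loss(x, x_hat, mu, log_var, lam=1e-3):
    """Total loss = L_MSE + lambda * L_KL for a diagonal-Gaussian VAE (sketch)."""
    l_mse = tf.reduce_mean(tf.square(x - x_hat))   # pixel-wise MSE over the 3D image
    l_kl = 0.5 * tf.reduce_sum(
        tf.square(mu) + tf.exp(log_var) - log_var - 1.0, axis=-1)
    return l_mse + lam * tf.reduce_mean(l_kl)      # lambda = 1e-3 as in the text
```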
The VAE is designed to learn unsupervised representations by extracting latent features with the encoder. These learned features can then be used as inputs to classification tasks, achieving improved performance.
§ THE PROPOSED APPROACH
§.§ Voltage data preprocessing
The total voltage signal obtained from EIT during a PFT is presented in Figure <ref>. The signal captures the dynamics of ventilation, including forced expiration (depicted in light orange) and tidal breathing (depicted in green), which alternate under the guidance of physicians before breath-holding (depicted in yellow). The EIT electrodes are placed on the skin around the chest, resulting in the raw voltage signal containing changes from ventilation and perfusion inside the chest. To obtain accurate information about the ventilation activities, isolating the ventilation-related changes from the raw EIT voltage signal is necessary.
The heart beats at a significantly higher frequency of 60-100 times per minute (1-1.6 Hz) compared to the respiration rate of 10-20 times per minute (0.17-0.33 Hz). Furthermore, the magnitude of cardiac-related signals is much lower than that of ventilation-related signals. Therefore, digital filters are designed to effectively remove cardiac-related signals and noise, as illustrated in the lower row of Fig. <ref>.
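Given this clear frequency separation, a zero-phase low-pass filter suffices in principle; the SciPy sketch below is illustrative only (the cutoff and filter order are our assumptions, not the paper's filter design):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def extract_ventilation(v_raw, fs=20.0, fc=0.7, order=4):
    """Keep the respiration band (~0.17-0.33 Hz) and suppress cardiac
    components (~1-1.6 Hz) in the raw EIT voltage signal (sketch)."""
    b, a = butter(order, fc / (fs / 2.0), btype="low")
    return filtfilt(b, a, np.asarray(v_raw, dtype=float), axis=0)  # zero-phase, per channel
```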
§.§ EIT Image Reconstruction and Code Splicing
The ventilation-related voltage signal is utilized as input to the image reconstruction algorithm (described in Section <ref>), with the end of expiration selected as the reference point. Specifically, for each data record, a sequence of data frames d_i acquired at times t_i (i=1,2,...,T) is reconstructed as a 3D image series P = {p_i} (i=1,2,...,T).
To enhance the visualization of the lungs, the pixels that rank within the lowest 20% are set to 0 for each image p_i, which helps to make the lung outline more clearly visible. Subsequently, the amplitude of the entire image series P is normalized to fall within the [0,1] range before being fed into the VAE. Let P_min and P_max denote the minimum and maximum pixel value of the image series P, and the normalized image series P̂ can be expressed as shown in Eq.<ref>. The images before and after this process are presented in Fig.<ref>.
P̂ = (P - P_min)/(P_max - P_min)
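A minimal sketch of this preprocessing (per-frame 20%-quantile thresholding followed by series-wide min-max scaling), assuming P is stored as a (T, 48, 32, 48) array:

```python
import numpy as np

def preprocess_series(P, frac=0.2):
    """Zero the lowest-ranked pixels of each frame, then normalize the series."""
    P = np.array(P, dtype=float)
    for i in range(P.shape[0]):
        thr = np.quantile(P[i], frac)              # per-frame 20th-percentile threshold
        P[i] = np.where(P[i] < thr, 0.0, P[i])
    return (P - P.min()) / (P.max() - P.min())     # series-wide [0, 1] scaling as in Eq. above
```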
The trained VAE is utilized as a compressor for dimensionality reduction. The resulting latent code of the reconstructed image series is represented as Z = {z_i} (i=1,2,...,T), where each z_i has a length of 32. Since the expiration duration varies from person to person, the T dimension of Z is zero-padded to a length of 93. The resulting zero-padded latent code is denoted as Z_pad = {z_i} (i=1,2,...,93). Subsequently, Z_pad is input to a 2D CNN for classification. See Fig.<ref> for a general overview of this work.
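The padding step can be written compactly; a small sketch (the layout of the resulting feature map follows the description above):

```python
import numpy as np

def build_feature_map(Z, t_max=93):
    """Zero-pad a (T, 32) latent-code sequence to (t_max, 32) for the 2D CNN."""
    Z = np.asarray(Z)
    pad = np.zeros((t_max - Z.shape[0], Z.shape[1]), dtype=Z.dtype)
    return np.concatenate([Z, pad], axis=0)        # frames stacked along the time axis
```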
§ EXPERIMENT DESIGN
§.§ Subjects and Data Acquisition
§.§.§ Measurement system
EIT signals were acquired using the Infivision 1900 (Beijing Huarui Boshi Medical Imaging Technology Co., Ltd., Beijing, China). Two electrode belts were placed around the chest to record the signals, with each belt containing 16 electrodes. The upper electrode belt was placed at the height of the armpit, while the lower electrode belt was placed at the fourth to sixth intercostal space (medioclavicular line).
The EIT measurement system utilized in this study has an input impedance of 40 kΩ at a phase angle of -90^∘ and a frequency of 20 kHz. The instrumentation amplifier has a high common-mode rejection ratio (CMRR) of 120 dB. A 2-loop of electrodes injection-measurement pattern<cit.> was used to record EIT signals, with an injected alternating current of 2 mA (root mean square) at a frequency of 20 kHz. EIT data were collected at a rate of 20 images per second and were reconstructed using a reconstruction matrix with Tikhonov regularization, as described in Section <ref>.
§.§.§ Clinical Study Cohort
From August 2021 to September 2022, a total of 186 subjects were recruited after obtaining written consent. Subjects who did not provide written informed consent (n=4), had contraindications to PFT or EIT (n=9), or had lung diseases such as pulmonary lesions, large bullae, and pleural effusion (n=11) were excluded prior to the test. During the test, 6 patients could not perform PFT adequately, and 19 had poor contact with EIT.
After the exclusions above, forced expiratory data from 137 subjects were included in the subsequent analysis. The participants' physical characteristics and PFT values are presented in Table <ref>.
In general, 67 patients (age 62.36 ± 9.56 yr, body weight 63.15 ± 11.70 kg, body height 165.79 ± 8.04 cm) were classified as the abnormal ventilation group, while 70 patients (age 59.04 ± 10.37 yr, body weight 65.89 ± 11.53 kg, body height 166.60 ± 8.57 cm) were classified as the normal ventilation group. Demographically, no significant differences were observed between the two groups (P > 0.05).
EIT measurements were conducted together with PFT with approval from the Ethics Committee of Zhongshan Hospital, Fudan University (2022-084R), and the study was registered in the Clinical Trials Register.
§ RESULTS AND DISCUSSION
§.§ Performance of VAE
The EIT image series of the 137 subjects were processed and shuffled to create a training dataset for the VAE model, which was implemented and evaluated using Python with TensorFlow on an NVIDIA Tesla V100 GPU card. The image reconstruction was conducted using MATLAB R2021b. The dataset consisted of 2781 images with a size of 48 × 32× 48, with 2508 images used for training and 279 for testing. The Adam optimizer was applied during training with a learning rate of 4 × 10^-4 over 50 epochs.
The performance of the proposed method heavily relies on the effectiveness of VAE for dimensional reduction. To verify the reconstruction quality of VAE, several samples from the test dataset were randomly selected, and their reconstructed images were visually evaluated. As shown in Fig.<ref>, each reconstructed 3D image is presented with four slices: the central coronal slice in the middle, the center section slice on top, and the left and right lung sections on both sides.
The upper row of Figure <ref> shows the 3D EIT images of three subjects at different stages of the respiratory cycle. Figure A displays the image of a normal subject's lung at the apex of inhalation. The image shows both lungs as round and full, with the ventilation range of the left and right lungs being approximately the same. In contrast, Figure B shows the image of an abnormal subject at the start of forced expiration, with defects visible in the ventilation image of the right lung. Figure C shows the end of the expiration of another subject. The corresponding output images reconstructed by VAE are shown in the lower row of Figure <ref>. The VAE effectively reproduces the input images in terms of both value and contour.
Furthermore, to verify the compact and continuous nature of the latent space, we conducted code interpolation between two test samples x_1 and x_2. We first encoded them to obtain their respective latent vectors z_1 and z_2. Then, we created a series of intermediate latent vectors by linearly interpolating between z_1 and z_2 through convex combinations of the two vectors, given by:
z_i = (1-t) z_1 + t z_2
where t ∈ [0,1] is a parameter that controls the degree of interpolation. Subsequently, we decoded these intermediate vectors z_i to obtain the corresponding intermediate image x_i. As depicted in Fig.<ref>, the intermediate images formed a smooth transition between the original images. Thus, we conclude that the latent code z provides a low-dimensional representation of the entire 3D image and that the latent space is compact and continuous.
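The interpolation itself is a one-liner; for completeness, a sketch of the convex-combination step (each resulting vector is then passed through the decoder):

```python
import numpy as np

def interpolate_latents(z1, z2, steps=7):
    """Convex combinations (1 - t) * z1 + t * z2 for t on a uniform grid (sketch)."""
    return [(1.0 - t) * z1 + t * z2 for t in np.linspace(0.0, 1.0, steps)]
```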
The latent code serves as a condensed representation of the spatial distribution of lung ventilation. The sequence of latent vectors obtained by sequentially encoding the 3D lung ventilation images therefore captures the temporal changes in the ventilation distribution, and can be used to store and analyze these changes efficiently.
§.§ Classification performance
The input series were encoded and zero-padded to form a latent code series denoted as Z_pad. As the training dataset consisted of 137 image sequences reconstructed from measured data, it was essential to validate the model to ensure that overfitting did not occur. To accomplish this, we employed ten-fold and leave-one-subject-out validation techniques during the convolutional neural network (CNN) training. Moreover, we recruited nine new subjects in October 2022, and their data were processed following the same pipeline as the training dataset. These data were used as blind data to test the model's generalization capability.
§.§.§ Ten Fold and Leave-One-Out Cross Validation
A ten-fold cross-validation is first conducted to assess the model's fit. The whole training dataset is split into 10 parts; each part is used as the test dataset in turn, while the remaining nine parts are used for training. The average accuracy and AUC are 0.956 ± 0.06, and the f1-score is 0.956 ± 0.06. The coefficients of variation for the accuracy and AUC are 0.0637 and 0.0639, respectively, indicating a relatively low variance and good reproducibility of the results. These results suggest that the model has good generalization ability and can accurately classify ventilation patterns in unseen data.
Next, we performed Leave-One-Out Cross Validation (LOOCV) to evaluate the model's performance further. The accuracy, sensitivity, f1-score, and confusion matrix are shown in Fig.<ref>. The results indicate that the model achieves high accuracy, sensitivity, and f1-score on the test data. Specifically, the model achieves an overall accuracy of 0.953, a sensitivity of 0.941, and an f1-score of 0.945. The confusion matrix shows that the model has a high true positive rate for all classes, indicating that the model can accurately classify the ventilation patterns for each subject. These results confirm the robustness and generalization ability of the proposed model for classifying the ventilation patterns in 3D lung ventilation images.
Furthermore, abnormal ventilation can be classified into obstructive, restrictive, and mixed patterns. In order to test the proposed pipeline, we modified the 2D CNN model from a two-class to a four-class classification. The LOOCV confusion matrix indicates good performance in identifying normal and obstructed patterns, where the obstructed pattern is characterized by slow and uneven expiration. However, the model's ability to distinguish between restrictive and mixed patterns was less satisfactory. In terms of respiratory mechanics, the restrictive pattern is characterized by a reduction in total capacity but a smooth expiration pattern similar to the normal one. The model's accuracy was lowest for the mixed pattern, mainly due to a lack of balanced training samples. Further investigation is required to verify and enhance the model's ability to differentiate among various lung ventilation abnormalities.
§.§.§ Blind Data
The proposed workflow was tested on blind data from nine patients with varying demographics and PFT results. Despite the unbalanced distribution in sex and lung ventilation mode, the results confirmed the reliability and validity of the proposed method. Among the nine subjects, only one was normal, and the remaining eight had obstructed or mixed diagnoses.
EIT records during forced expiration were processed using the proposed pipeline, and the PFT diagnosis was noted in Table <ref>. The classification results of our method are shown in the last line of the table, with only one mistake in subject 3, a 53-year-old female with a PFT diagnosis of restriction. While the flow metrics in PFT, such as the percentage of FEV_1/FVC and FEF_25-75, were close to normal, a decrease was observed in instantaneous flow rates such as PEF and FEF_25.
§ CONCLUSION
In summary, this work studies the general diagnosis of lung ventilation patterns using 3D EIT image series. Unlike previous studies focusing on specific diseases or operations, this work provides a more comprehensive diagnosis of normal or abnormal lung ventilation. Using a well-trained VAE network with MultiRes block, a single subject's spatial and temporal features are integrated into a two-dimensional feature map, which is then classified using a simple CNN network. The model exhibits satisfying accuracy and stability in cross-validation tests and is validated on new data from nine patients.
This study also addresses the need for more attention to individualized lung function assessment, which can provide valuable information for diagnosis and treatment. While PFT is commonly used for lung function diagnosis, the potential of EIT for individualized lung function evaluation is explored in this work. The results suggest that this approach may have promising applications in personalized diagnosis for lung function disorders.
While this work presents promising results for individualized lung function assessment using EIT, some limitations must be acknowledged. Firstly, the sample size of the training data set is relatively small, which may limit the generalizability of the proposed workflow. A larger sample size is needed to validate the results further and assess the model's performance among different populations.
Secondly, the study only focuses on forced expiration and does not consider the potential changes in lung function during normal breathing or other respiratory maneuvers. Incorporating more comprehensive respiratory measurements may provide a more complete assessment of lung function.
Finally, while the proposed workflow provides a two-dimensional feature map for classification, the interpretability of the features extracted by the VAE network and CNN still needs to be improved. Further research is needed to understand better the relationship between the extracted features and the underlying physiological mechanisms of lung function.
|
http://arxiv.org/abs/2307.02508v2
|
20230705035058
|
ZJU ReLER Submission for EPIC-KITCHEN Challenge 2023: TREK-150 Single Object Tracking
|
[
"Yuanyou Xu",
"Jiahao Li",
"Zongxin Yang",
"Yi Yang",
"Yueting Zhuang"
] |
cs.CV
|
[
"cs.CV"
] |
ZJU ReLER Submission for EPIC-KITCHEN Challenge 2023:
TREK-150 Single Object Tracking
Yuanyou Xu, Jiahao Li, Zongxin Yang, Yi Yang, Yueting Zhuang
ReLER, CCAI, Zhejiang University
{yoxu,xljh,yangzongxin,yangyics,yzhuang}@zju.edu.cn
The Associating Objects with Transformers (AOT) framework has exhibited exceptional performance in a wide range of complex scenarios for video object tracking and segmentation <cit.>.
In this study, we convert the bounding boxes to masks in reference frames with the help of the Segment Anything Model (SAM) <cit.> and Alpha-Refine <cit.>, and then propagate the masks to the current frame, transforming the task from Video Object Tracking (VOT) to video object segmentation (VOS).
Furthermore, we introduce MSDeAOT, a variant of the AOT series that incorporates transformers at multiple feature scales. MSDeAOT efficiently propagates object masks from previous frames to the current frame using two feature scales of 16 and 8.
As a testament to the effectiveness of our design, we achieved 1st place in the EPIC-KITCHENS TREK-150 Object Tracking Challenge.
§ INTRODUCTION
Video object tracking is a fundamental task in computer vision that involves automatically localizing and tracking a specific object of interest across consecutive frames in a video sequence.
It plays a crucial role in various applications, such as surveillance systems, autonomous vehicles, and video analysis.
The tracking process typically starts with an initial bounding box or mask annotation around the target object in the first frame. Subsequently, tracking algorithms analyze subsequent frames to predict the object's location, usually by exploiting motion and appearance cues. These algorithms employ a variety of techniques, including correlation filters, deep learning-based models, particle filters, and optical flow-based methods.
In recent years, the field of video object tracking has witnessed remarkable advancements through deep learning-based approaches. SwinTrack <cit.> stands out by leveraging transformers for feature extraction and fusion, enabling seamless interaction between the target object and the search area during tracking. Meanwhile, MixFormer <cit.> adopts a novel approach by integrating template and test samples in the backbone network and directly regressing bounding boxes using a simple head, eliminating the need for post-processing. Another noteworthy method, SeqTrack <cit.>, tackles the visual tracking problem by reformulating it as a sequence generation task, enabling autoregressive prediction of the target bounding box.
Although the methods mentioned above have yielded impressive outcomes, they are constrained in their handling of bounding boxes as both references and outputs. Real-world scenarios present challenges where bounding boxes tend to be prone to inaccuracies, potentially encompassing multiple objects. Additionally, the fixed shape of the bounding box hinders effective adaptation to target shape variations.
Hence, tracking methods that utilize masks tend to achieve higher accuracy owing to their pixel-based alignment mechanism <cit.>. Over the years, the Associating Objects with Transformers (AOT) <cit.> series has demonstrated commendable performance in video object segmentation (VOS) tasks, prompting the natural extension of applying it to video object tracking (VOT) tasks. However, the annotation of masks poses greater difficulty compared to bounding box annotation. Fortunately, the introduction of the Segment Anything Model (SAM) <cit.> has significantly alleviated this challenge by enabling the conversion of bounding boxes into masks with remarkable efficacy.
Specifically, we propose a novel tracking framework that leverages the AOT series to track objects in videos. We first convert the bounding boxes to masks in reference frames with the help of SAM and Alpha-Refine <cit.>, and then feed the masks and frames into the VOS model. The model then propagates the masks to the current frame, and the bounding box is obtained from the predicted mask. We also introduce MSDeAOT, a variant of the AOT series that incorporates transformers at multiple feature scales.
Leveraging above techniques, we achieve the 1st place in the EPIC-KITCHENS TREK-150 Object Tracking Challenge, attaining an impressive success score of 73.4% under the multi-start evaluation protocol.
§ METHOD
§.§ Preliminaries
DeAOT.
DeAOT is an AOT-based video object segmentation (VOS) model <cit.> that incorporates an identification mechanism to associate multiple targets in a shared high-dimensional embedding space. This unique approach enables DeAOT to track multiple objects with the same efficiency as tracking a single object. To preserve object-agnostic visual information in deep propagation layers, DeAOT leverages a hierarchical Gated Propagation Module (GPM) that independently propagates both object-agnostic and object-specific embeddings from previous frames to the current frame. By utilizing GPM, DeAOT achieves effective and accurate object tracking in complex scenarios.
Segment Anything Model (SAM).
SAM has emerged as a prominent and influential model for image segmentation, captivating the attention of researchers in the field. Through extensive training on an extensive dataset comprising millions of images and billions of masks, SAM demonstrates exceptional proficiency. Notably, SAM excels in generating precise object masks using diverse input prompts, including points or boxes. Moreover, SAM possesses the remarkable capability to produce masks for all objects present within an image, further highlighting its versatility and effectiveness in tackling various segmentation tasks.
Alpha-Refine (AR).
AR represents a versatile and innovative refinement module designed to enhance visual tracking performance through precise bounding box estimation. By effectively capturing and preserving intricate spatial details, AR facilitates accurate predictions of the target's location, building upon coarse initial results. At its core, AR leverages key components such as a pixel-wise correlation layer, a corner prediction head, and an auxiliary mask head. Remarkably, AR operates as an independent module, enabling seamless integration with existing trackers in a plug-and-play manner. Notably, AR seamlessly integrates without the need for additional training or modifications to the base tracker, underscoring its adaptability and practicality.
§.§ Multi-Scale DeAOT
The whole architecture of MSDeAOT, as shown in <ref>c, follows an encoder-decoder design similar to classical segmentation networks like U-Net. The encoder consists of multiple blocks that down-sample the input feature maps, yielding features at different scales. These encoder blocks provide multi-scale features that are crucial for accurate object tracking and segmentation.
In the decoder, unlike the FPN module employed in DeAOT ( <ref>b), the Gated Propagation Module (GPM) is integrated with multiple decoder blocks to establish the multi-scale stages of MSDeAOT. Each scale's feature maps from the encoder are fed into the corresponding stage, where the GPM module takes charge of matching the current frame with memory frames and aggregating mask information from the memory frames. The decoder blocks then decode this information.
This innovative design of multi-scale stages brings notable benefits. It effectively harnesses the potential of feature maps at different scales, in contrast to the FPN module used in DeAOT, where multi-scale feature maps solely serve as shortcut connections for residual structures. Specifically, in DeAOT, only the feature maps at the smallest scale are utilized for matching across memory frames using the GPM module. In contrast, MSDeAOT comprehensively engages feature maps from multiple scales during the matching process, thereby enhancing performance and enabling finer details of objects to be captured.
§ IMPLEMENTATION DETAILS
In MSDeAOT, we employ Swin Transformer-Base <cit.> as the backbone for the encoder. For the decoder, the MSDeAOT model incorporates GPM modules in multiple stages. Specifically, we set the number of layers in the GPM to 2 for the 16× scale stage and 1 for the 8× scale stage. To save computational resources, we exclude the 4× scale feature maps and instead duplicate the 16× scale feature maps twice to form the feature pyramid.
The training process comprises two phases, following the AOT framework. In the initial phase, we pre-train the model using synthetic video sequences generated from static image datasets <cit.> by randomly applying multiple image augmentations <cit.>. In the subsequent phase, we train the model on VISOR <cit.>, YouTube-VOS <cit.>, LaSOT <cit.>, MOSE <cit.> and EgoTracks <cit.>, incorporating random video augmentations <cit.>.
For LaSOT, the masks are adopted from RTS <cit.>. For EgoTracks, we sample 300 videos and generate a mask with SAM <cit.> for each image.
During MSDeAOT training, we employ 8 Tesla V100 GPUs with a batch size of 16. For pre-training, we use an initial learning rate of 4 × 10^-4 for 100,000 steps. For main training, the initial learning rate is set to 2 × 10^-4, and the training steps are 200,000. The learning rate gradually decays to 1 × 10^-5 using a polynomial decay schedule <cit.>.
Note that we follow the rules of the competition and use no data from the TREK-150 dataset <cit.> for training.
During the inference process, the box references are transformed into masks, as illustrated in <ref>a, and subsequently provided as input to the VOS model (MSDeAOT) along with the frames. For each frame, MSDeAOT generates predictions of the target object's mask, from which the bounding box is conveniently derived using the minimum external rectangle approach.
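The mask-to-box step at the end of this pipeline is straightforward; the following is a minimal sketch of the minimum external rectangle computation (function name illustrative):

import numpy as np

def mask_to_box(mask):
    """Minimum external (axis-aligned) rectangle of a binary mask.

    mask: (H, W) array whose nonzero pixels belong to the target.
    Returns (x, y, w, h), or None if the mask is empty.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    x0, x1 = xs.min(), xs.max()
    y0, y1 = ys.min(), ys.max()
    return (int(x0), int(y0), int(x1 - x0 + 1), int(y1 - y0 + 1))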
§ EPIC-KITCHENS CHALLENGE: TREK-150 SINGLE OBJECT TRACKING
§.§ Ablation Study
Firstly, we conduct ablation studies on different initial masks for R50-DeAOTL <cit.>, which is trained on YouTube-VOS <cit.> and DAVIS <cit.>. The results, as shown in <ref>, reveal that the initial mask generated by AR <cit.> outperforms that generated by SAM <cit.>. Moreover, the combination of AR and SAM yields the most promising results, highlighting the significance of leveraging multiple initial masks for optimal performance.
Secondly, we perform ablation studies on various training datasets to assess their impact on the task at hand. Utilizing the DeAOTL <cit.> model with R50 and SwinB backbones, we examine the results presented in <ref>, where it becomes evident that larger backbones exhibit superior performance. Notably, the inclusion of MOSE, EgoTracks, and LaSOT datasets proves to be advantageous, further enhancing the overall outcomes.
§.§ Challenge Results
We rank 1st place in the TREK-150 Single Object Tracking Challenge with a success score of 73.4% under the multi-start evaluation protocol. The results are presented in <ref>. Notably, our method achieves the highest success score in all three evaluation protocols, highlighting its robustness and versatility.
§ CONCLUSION
In this paper, we propose MSDeAOT, an AOT-based video object segmentation model. With the help of SAM and AR, we convert the bounding boxes to masks in the reference frames and feed the masks and frames into the VOS model. The model then propagates the masks to the current frame, and the bounding box is derived from the predicted mask.
Masks offer more accurate localization than bounding boxes, and the AOT series has demonstrated remarkable performance in video object segmentation tasks.
Our solution achieves the 1st place in the EPIC-KITCHENS TREK-150 Object Tracking Challenge with a success score of 73.4% on the test set under the multi-start evaluation protocol.
http://arxiv.org/abs/2307.01010v2 | 20230703134323 | The 2021 X-ray outburst of magnetar SGR J1935+2154 -- I. Spectral properties | ["Sheng-Lun Xie", "Yi Zhao", "Wang-Chen Xue", "Yun-Wei Yu", "Shao-Lin Xiong", "Heng Yu", "Ce Cai", "Shuang-Nan Zhang"] | astro-ph.HE | ["astro-ph.HE"]
The 2021 X-ray outburst of magnetar SGR J1935+2154 observed by GECAM - I. Spectral properties
Sheng-Lun Xie, Yi Zhao, Wang-Chen Xue, Yun-Wei Yu, Shao-Lin Xiong, Heng Yu, Ce Cai, Shuang-Nan Zhang
Affiliations: Institute of Astrophysics, Central China Normal University, Wuhan, China; Key Laboratory of Particle Astrophysics, Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049, China; University of Chinese Academy of Sciences, Beijing 100049, China; Department of Astronomy, Beijing Normal University, Beijing, China; College of Physics and Hebei Key Laboratory of Photophysics Research and Application, Hebei Normal University, Shijiazhuang, Hebei 050024, China
Corresponding authors: Yun-Wei Yu (yuyw@ccnu.edu.cn); Shao-Lin Xiong (xiongsl@ihep.ac.cn)
Over a period of multiple active episodes between January 2021 and January 2022, the magnetar SGR J1935+2154 emitted a total of 82 bursts observed by GECAM-B. Temporal and spectral analyses reveal that the bursts have an average duration of ∼145 ms and a fluence ranging from 1.2 × 10^-8 erg · cm^-2 to 3.7 × 10^-5 erg · cm^-2 (30 - 200 keV).
The spectral properties of these bursts are similar to those of earlier active episodes. Specifically, we find that the emission area of the Double Black Body (BB2) model shows a Log-Linear correlation with its temperature, and that there is only a weak relation between fluence and E_peak (or α) in the Cut-Off Power Law (CPL) model. However, we note that the temperature distributions of the BB2/BB models in GECAM-B samples differ from those in GBM-GECAM samples, due to differences in the energy range used for fitting.
To understand this difference, we propose a Multi-Temperature Black Body (MBB) model, which assumes that the BB temperatures follow a power law distribution. Our analysis shows that the minimum temperature of the MBB model, kT_min ∼ 5 keV, is consistent between GECAM-B and GBM-GECAM. This indicates that both samples originated from similar magnetar bursts. We also show that the spectra of magnetar bursts tend to be soft, suggesting that magnetar bursts may be composed of multiple low-temperature BB components, with the majority of the BB temperatures concentrated around the minimum temperature.
§ INTRODUCTION
SGRs are believed to originate from highly magnetized neutron stars, namely magnetars <cit.>. Magnetars are characterized by slow rotation periods (P ∼ 2-12 s), rapid spin-down (Ṗ ∼ 10^-13-10^-11 s · s^-1) and relatively young ages (typically about 1000 yr). So far, 30 magnetars have been detected and 24 of them have been confirmed <cit.>. Most magnetars can brighten in persistent radiation and emit bursts/flares simultaneously in the X-/gamma-ray band during an outburst.
Based on their luminosity and duration, SGR bursts can be divided into three classes <cit.>: the short-duration burst, consisting of single or multiple pulses, is the most typical magnetar burst, with durations of 0.01-1 s and fluences around 10^-10-10^-4 erg · cm^-2; the intermediate burst is brighter, with durations longer than short-duration bursts (> 1 s) and peak luminosities around 10^41-10^43 erg · s^-1; the giant flare, the rarest and most energetic burst, is characterized by a significantly higher luminosity than typical magnetar bursts and a distinctive pulse profile with a hard initial spike and a rapidly decaying tail.
Magnetar SGR J1935+2154 was first discovered and located in the Milky Way Galaxy by Swift Burst Alert Telescope (BAT) in 2014 July <cit.>. Follow-up observations carried out between 2014 July and 2015 March with Chandra and XMM-Newton allowed the measurement of its spin period and spin-down rate, found to be P ∼ 3.24 s and Ṗ∼ 1.43 × 10^-11 s · s^-1, respectively. This indicates a dipole-magnetic field of B ∼ 2.2 × 10^14 G <cit.>.
It has experienced multiple active windows from 2014 to 2021 <cit.>.
April 2020 was recognized as a month of intense bursting activity for SGR J1935+2154, during which a burst forest was observed. These bursts included the X-ray counterpart <cit.> associated with a fast radio burst, FRB 200428 <cit.>. Additionally, 10 candidate bursts had been found before 2014 <cit.> by the Fermi Gamma-ray Space Telescope <cit.>. <cit.> also found numerous bursts from SGR J1935+2154 observed by the Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor <cit.>.
In this paper, we carry out a spectral analysis of magnetar SGR J1935+2154 using GECAM-B observation data from 2021 January to 2022 January. In Section <ref> we briefly introduce GECAM-B and report the temporal analysis of SGR J1935+2154 over this one-year time span. In Section <ref> we report the spectral properties of SGR J1935+2154. Finally, a summary is given in Section <ref>. In Paper 2 (in prep.), we assess the localization method for magnetar bursts using the spectral fitting results.
§ TEMPORAL ANALYSIS
Launched in December 2020, GECAM has been operating in low Earth orbit <cit.>. GECAM consists of twin microsatellites, namely GECAM-A and GECAM-B, and each comprises 25 gamma-ray detectors <cit.> and 8 charged particle detectors <cit.>. GECAM-B can serve as a wide field of view (FOV) gamma-ray monitor with high time resolution (μ s) and large effective area (up to thousands cm^2).
<cit.> developed a pipeline to perform a ground search of GECAM-B daily observation data for GRBs using the traditional signal-to-noise ratio (SNR) method. With this pipeline, GECAM-B observed a total of 82 bursts from SGR J1935+2154 in 2021, as reported in <cit.>. <cit.> found that GECAM-B has visibility of SGR J1935+2154 for approximately half of each day, with a periodic active window of around 127 days. In this paper, temporal and spectral analyses of this burst history are conducted. The burst history is depicted in Fig <ref>.
§.§ Burst Duration
To characterize the SGR's temporal properties, we use the Bayesian Block method <cit.>, which identifies regions of the highest statistical significance, to calculate the duration of each burst. The Bayesian Block method divides the event data into multiple blocks, each with a constant count rate. This is a good approach for characterizing the variability of the GECAM EVT data by finding the optimal segmentation boundaries.
In this analysis, only GRD detectors with an angle to the source of less than 60^∘ are used.
To measure the duration of each burst, we use the sliced event data of a 10 s burst time window, which includes both the pre-burst and post-burst time intervals, with an energy range of 30-200 keV. The false-positive probability (p0) is set to 0.01 (corresponding to 3σ). We treat blocks with a duration longer than 6 s as background and consider blocks with a duration less than the spin period (3.24 s) of SGR J1935+2154 as part of the burst region.
An example of the Bayesian Block analysis is shown in Fig <ref>. The burst durations are used in the time-integrated spectral analysis (see Section <ref>). The burst duration distribution of the SGR bursts is presented in the right panel of Fig <ref>, and is fitted by a Log-Gaussian function, yielding a central value of 119.2 ms. The durations of all bursts are listed in Appendix <ref>.
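The duration estimate can be reproduced, in outline, with astropy's Bayesian Blocks implementation. The sketch below is illustrative only: the events array is a random placeholder standing in for the 30-200 keV event times, and the block-selection thresholds follow the rules stated above.

import numpy as np
from astropy.stats import bayesian_blocks

# Placeholder event times (s) in a 10 s window around a burst,
# standing in for the 30-200 keV GECAM-B EVT data.
events = np.sort(np.random.uniform(-5.0, 5.0, 2000))

# Optimal block edges with false-positive probability p0 = 0.01.
edges = bayesian_blocks(events, fitness="events", p0=0.01)

# Blocks longer than 6 s are treated as background; blocks shorter
# than the 3.24 s spin period are treated as part of the burst region.
widths = np.diff(edges)
burst_blocks = [(edges[i], edges[i + 1])
                for i, w in enumerate(widths) if w < 3.24]
print(edges, burst_blocks)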
§.§ Burst Hardness Ratio
The hardness ratio is the ratio of the net counts of the source in different energy bands. The net counts are estimated as
C = S - B
where S and B are the total counts and the expected background counts of the source. The background counts are estimated using total counts before and after the source time interval.
We compute the hardness ratio of each GECAM-B burst (H3/H2: 50-200 keV / 30-50 keV), and the results are shown in Fig <ref>. The hardness ratios of these bursts range from 0.2 to 1.5, with a median value of 0.56.
The left panel of Fig <ref> shows the evolution of the hardness ratio, which exhibits no significant trend. We also find no correlation between the duration and the hardness ratio (see the right panel of Fig <ref>).
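For concreteness, a minimal sketch of this computation (toy counts; the band definitions follow the text):

def hardness_ratio(counts_h3, counts_h2, bkg_h3, bkg_h2):
    """H3/H2 hardness ratio from net counts, following C = S - B.

    counts_*: total source-interval counts in 50-200 keV (H3) and
    30-50 keV (H2); bkg_*: expected background counts in the same
    bands, estimated from intervals before and after the burst.
    """
    net_h3 = counts_h3 - bkg_h3
    net_h2 = counts_h2 - bkg_h2
    return net_h3 / net_h2

print(hardness_ratio(1200, 300, 900, 150))  # -> 2.0 with these toy numbers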
§ SPECTRAL PROPERTIES
A total of 82 bursts were observed by GECAM-B, of which 39 were also observed by Fermi/GBM. The dataset utilized in this study therefore comprises 82 GECAM-B bursts and 39 joint spectra of the two instruments (hereafter, GBM-GECAM).
Only GRD detectors with an angle to the source of less than 60^∘ are used in the spectral fitting.
The background spectra are accumulated from the event data during the pre- and post-burst time intervals (i.e., from T0 - 10 s to T0 - 5 s and from T0 + 5 s to T0 + 10 s, where T0 is the trigger time of the burst).
For weak bursts, the GRPPHA command[<https://heasarc.gsfc.nasa.gov/ftools/>] is used to group the observed data (e.g., GROUP MIN RCNTS) to ensure the validity of the fit statistics.
Then, we perform a time-integrated spectral analysis using the XSPEC <cit.>[<https://heasarc.gsfc.nasa.gov/xanadu/xspec/>] software and the Poisson data with Gaussian background statistics (PGSTAT).
These burst samples are fitted with 6 models: a single Black Body (BB), a single Power Law (PL), an Optically-Thin Thermal Bremsstrahlung (OTTB), a single Black Body plus a single Power Law (BBPL), a Double Black Body (BB2), and an exponentially Cut-Off Power Law (CPL), over the energy range of 30-200 keV for GECAM-B and 8-200 keV for GBM-GECAM. Finally, we use the Bayesian Information Criterion <cit.> to select the best-fit model among BB2, BB, CPL, OTTB, BBPL and PL.
One should note that the low-energy threshold of GECAM-B changed dynamically over the course of this year. However, for consistency and to control variables, the energy range for GECAM-B samples is set to 30-200 keV in all analyses conducted in this paper and in the localization research of Paper 2.
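As an illustration of this fitting setup, the following PyXspec sketch shows the main steps; the file name is a placeholder and the exact commands of our pipeline may differ.

from xspec import AllData, Fit, Model

# Load a GRPPHA-grouped spectrum (placeholder file name).
AllData("burst_gecamb.pha")
# Restrict to the 30-200 keV band used for the GECAM-B samples.
AllData.ignore("**-30.0 200.0-**")

Fit.statMethod = "pgstat"   # Poisson data with Gaussian background
model = Model("cutoffpl")   # repeated for the other five models; compared via BIC
Fit.perform()
print(Fit.statistic, Fit.dof)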
All results are presented in Appendix <ref>. The equations of these models are as follows:
The exponentially Cut-Off Power Law (CPL) model:
F(E)= K (E/E_piv)^α e^-(2+α)E/E_peak
where K is the amplitude in photon/s/cm^2/keV, E_peak is the ν F_ν peak in keV, α is the power-law index, and E_piv is the pivot energy in keV; we set E_piv = 20 keV in this paper.
The Black Body (BB) model:
F(E)= 1.0344 × 10^-3× K E^2/e^E/kT-1
where K is R_km^2 / D_10^2, R_km is the radius of the emitting region in km, D_10 is the distance to the source in units of 10 kpc, and kT is the temperature in keV. We set the distance to SGR J1935+2154 to 9 kpc in this paper.
The Optically-Thin Thermal Bremsstrahlung (OTTB) model:
F(E)= K (E_piv/E) e^(E_piv-E)/kT
where K is the amplitude in photon/s/cm^2/keV and E_piv is the pivot energy in keV; we set E_piv = 20 keV in this paper.
The Power Law (PL) model:
F(E)= K E^α
where K and α are the same as the corresponding parameters of the CPL model.
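For reference, a minimal numerical sketch of these spectral functions (illustrative only; BB2 is the sum of two BB terms and BBPL the sum of the BB and PL terms):

import numpy as np

E_PIV = 20.0  # keV, pivot energy adopted in this paper

def cpl(E, K, alpha, E_peak):
    """Exponentially cut-off power law (photons/s/cm^2/keV)."""
    return K * (E / E_PIV) ** alpha * np.exp(-(2 + alpha) * E / E_peak)

def bb(E, K, kT):
    """Black body; K = R_km^2 / D_10^2."""
    return 1.0344e-3 * K * E ** 2 / (np.exp(E / kT) - 1.0)

def ottb(E, K, kT):
    """Optically-thin thermal bremsstrahlung."""
    return K * (E_PIV / E) * np.exp((E_PIV - E) / kT)

def pl(E, K, alpha):
    """Simple power law."""
    return K * E ** alpha

E = np.linspace(30.0, 200.0, 5)  # keV, GECAM-B fitting band
print(cpl(E, 1.0, -1.2, 34.0))             # a CPL example
print(bb(E, 50.0, 9.0) + pl(E, 100.0, -2.0))  # a BBPL example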
§.§ Burst Fluence
The fluence is derived from the product of the burst duration and the flux, where the flux is calculated using the best-fit model over the energy range of 30-200 keV for GECAM-B samples. The results are listed in Appendix <ref>.
The left panel of Fig <ref> shows the complementary cumulative distribution of energy fluences, which can be fitted by a broken power law. The break point is 4.69 ± 0.17 × 10^-8 erg · cm^-2 for GECAM-B samples. The slope of the lower fluences is -0.036 ± 0.004. The slope of the higher fluences is -0.65 ± 0.02.
The slope at higher fluences is consistent with previous studies <cit.> and with the Gutenberg-Richter law (N(E) ∝ E^-5/3 ∼ -2), which describes the power-law-like frequency distribution of earthquakes. This similarity implies that the majority of magnetar bursts, akin to earthquakes, possibly originate from cracks in the solid crust of the magnetar <cit.>. A sketch of the broken power law fit is given below.
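A minimal sketch of such a broken power law fit in log space (the data here are synthetic placeholders generated around the quoted best-fit values, not our measurements):

import numpy as np
from scipy.optimize import curve_fit

def broken_pl(logF, logN0, logFb, s1, s2):
    """Broken power law for the complementary cumulative distribution
    N(>F) in log-log space; s1/s2 are the slopes below/above the break Fb."""
    return np.where(logF < logFb,
                    logN0 + s1 * (logF - logFb),
                    logN0 + s2 * (logF - logFb))

# Synthetic placeholder data around the quoted best-fit values.
logF = np.linspace(-8.5, -5.0, 40)
logN = broken_pl(logF, 1.5, np.log10(4.69e-8), -0.036, -0.65)
logN = logN + np.random.normal(0.0, 0.02, logF.size)

popt, pcov = curve_fit(broken_pl, logF, logN, p0=(1.5, -7.3, -0.05, -0.7))
print("break fluence ~ %.2e erg cm^-2" % 10 ** popt[1])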
§.§ The Double Black Body Model
The Double Black Body Model (BB2) is the sum of two BBs (Eq <ref>). Of all burst samples, 41 GECAM-B bursts and 19 GBM-GECAM bursts can be well fitted with the BB2 model. Fig <ref> shows the distribution of the low and high temperatures (kT_low, kT_high) of BB2. Both kT_low and kT_high can be well fitted with a Gaussian function (see Table <ref> for central values and sigmas). The BB2 temperature distribution of the GBM-GECAM samples is similar to that of SGR J1935+2154 in previous reports <cit.>.
Fig <ref> shows that kT_low and kT_high exhibit strong Log-Linear correlations with the corresponding emission areas (R_low, R_high); the results are listed in Table <ref>.
In addition, the emission area dependence spanning both the low and high BB temperatures, namely R_both^2 ∝ (kT_both)^-4.23 ± 0.15 for GECAM-B samples and R_both^2 ∝ (kT_both)^-3.60 ± 0.13 for GBM-GECAM samples, is very similar to that of a single BB obeying the Stefan-Boltzmann law: R^2 ∝ (kT)^-4. This R^2 - kT correlation for the BB2 model is also similar to that observed for the collection of SGR J1550-5418 bursts analyzed in previous studies <cit.>.
Because of the limited sample size, there are some differences between the correlation results of GECAM-B and GBM-GECAM. The correlation (R_low^2 ∝ (kT_low)^α) of GECAM-B bursts exhibits a steeper trend owing to the higher low-energy edge of the fitted range (30-200 keV). This is also why the BB/BB2 temperature distribution of GBM-GECAM differs from that of GECAM-B, as discussed in Subsection <ref>.
The measured results of the Spearman's rank correlation coefficient.
Correlation PL Index (α) Coefficient (ρ)^a p-value Instrument
R_low ^2 ∝ (kT_low )^α -10.92 ± 0.10 -0.63 1.03E-05 GECAM-B
R_high^2 ∝ (kT_high)^α -7.01 ± 0.39 -0.92 1.08E-17 GECAM-B
R_both^2 ∝ (kT_both)^α -4.23 ± 0.15 -0.94 3.33E-40 GECAM-B
R_low ^2 ∝ (kT_low )^α -2.26 ± 0.80 -0.66 2.28E-03 GBM-GECAM
R_high^2 ∝ (kT_high)^α -4.22 ± 0.84 -0.91 8.56E-08 GBM-GECAM
R_both^2 ∝ (kT_both)^α -3.60 ± 0.13 -0.91 8.56E-08 GBM-GECAM
^a The Spearman rank-order correlation coefficient.
The Gaussian distributions of the spectral parameters
Model Parameter μ^a σ^b Instrument
BB2 kT_low (keV) 9.16 0.88 GECAM-B
BB2 kT_high (keV) 49.56 23.17 GECAM-B
BB2 kT_low (keV) 5.72 2.21 GBM-GECAM
BB2 kT_high (keV) 19.30 4.70 GBM-GECAM
CPL α -1.21 0.33 GBM-GECAM
CPL E_peak (keV) 34.26 8.59 GBM-GECAM
BBPL kT (keV) 9.18 1.30 GECAM-B
BBPL α -1.20 0.37 GECAM-B
BBPL kT (keV) 8.82 0.52 GBM-GECAM
BBPL α -2.16 0.13 GBM-GECAM
BB kT (keV) 8.74 0.58 GBM-GECAM
BB kT (keV) 18.15 8.77 GECAM-B
OTTB kT (keV) 35.83 10.22 GBM-GECAM
OTTB kT (keV) 54.02 11.91 GECAM-B
PL α -2.14 0.24 GBM-GECAM
PL α -2.06 0.63 GECAM-B
MBB β 5.65 0.44 GBM-GECAM
MBB β 5.11 0.46 GECAM-B
MBB kT_min 4.47 1.55 GBM-GECAM
MBB kT_min 5.16 1.83 GECAM-B
^a The central value of the Gaussian distribution
^b The sigma of the Gaussian distribution
§.§ The Cut-Off Power Law model
As for the CPL model (Eq <ref>), 5 GECAM-B bursts and 24 GBM-GECAM bursts can be well fitted. When the low-energy edge of the GECAM-B data is above ∼30 keV, it becomes difficult to effectively constrain the CPL model; as a result, only a limited number of bursts can be successfully fitted with it. The CPL model is therefore not recommended for fitting SGR burst data observed by GECAM-B, especially as the low-energy edge may increase in future operations, or by other instruments with a similar energy range.
The distributions of α and E_peak are presented in the upper two panels of Fig <ref>. The GBM-GECAM distributions can be fitted with a Gaussian function (see Table <ref> for central values and sigmas).
Most E_peak values range from approximately 20 to 60 keV. The distributions of the two parameters are similar to those in previous studies. However, the correlation between α and the fluence, as well as that between E_peak and the fluence, is weak, as shown in the lower two panels of Fig <ref>.
§.§ The Other Models
Fig <ref> shows the parameter histograms of the BBPL (the sum of Eq <ref> and Eq <ref>), OTTB (Eq <ref>), BB (Eq <ref>), and PL (Eq <ref>) models. The distributions can be fitted with a Gaussian function (see Table <ref> for central values and sigmas).
The kT of the BBPL is similar to that of the BB, and the Photon Index (α) of the BBPL is similar to that of the PL.
The kT of the OTTB is similar to the E_peak of the CPL since the kT of the OTTB is equivalent to E_peak of the CPL (α=-1).
The BB/OTTB models in GECAM-B and GBM-GECAM have different kT values, which can be attributed to differences in the energy range used for fitting.
§.§ The Multi-Temperature Black Body
The kT distributions of the BB2/BB models observed by GECAM-B differ from those of the GBM-GECAM samples due to differences in the energy range used for fitting. We therefore assume that SGR bursts consist of multiple Black Bodies (BBs), with temperatures following a power law distribution,
f(kT) = kT^-β
Then we perform integration of Eq (<ref>) from kT_min to infinity. The spectrum is defined as,
N(E)=K∫_kT_min^∞f(kT)E^2/exp(E/kT)-1d(kT)
where K, β and kT_min are the normalization coefficient, the power-law index of the temperature distribution, and the minimum temperature, respectively. A minimal numerical sketch of this model is given below.
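A sketch of the MBB spectrum evaluated by direct numerical integration (parameter values are typical of our fits; this is illustrative, not our fitting code):

import numpy as np
from scipy.integrate import quad

def mbb(E, K, beta, kT_min):
    """Multi-Temperature Black Body: BB temperatures distributed as
    f(kT) = kT^-beta for kT >= kT_min, integrated over kT (Eqs. above)."""
    def integrand(kT):
        return kT ** (-beta) * E ** 2 / (np.exp(E / kT) - 1.0)
    val, err = quad(integrand, kT_min, np.inf)
    return K * val

E_grid = np.linspace(30.0, 200.0, 5)            # keV
spec = [mbb(E, 1.0, 5.5, 5.0) for E in E_grid]  # beta ~ 5.5, kT_min ~ 5 keV
print(spec)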
The fit results are shown in Fig <ref> and listed in Appendix <ref>. The β and kT_min distributions of GECAM-B and GBM-GECAM are similar to each other, indicating that the two samples originated from similar thermal radiation even though the kT distributions of their BB2/BB models differ. The MBB model is therefore recommended for analyzing the BB temperatures of magnetar bursts, even when using different instruments with similar detection energy ranges.
As shown in the right panel of Fig <ref>, the kT of the BB model exhibits a Log-Linear correlation with the β of the MBB model. We fit this correlation (β ∝ kT) with a simple power law function using both the GECAM-B and GBM-GECAM samples, and obtain a slope of -0.25 ± 3E-16. This indicates that a higher BB temperature corresponds to a wider temperature distribution, ranging from kT_min to an even higher temperature, as expected.
The left panel of Fig <ref> shows the correlation between β and kT_min. This correlation (β ∝ kT_min) is also fitted by a power law function, which gives a slope of 0.11 ± 1.3E-14.
However, the correlations of β with the burst fluence/duration, as well as those of kT_min with the burst fluence/duration, are weak, as shown in Fig <ref>.
Interestingly, the temperature values (kT_min ∼ 5 keV) of the MBB model are similar to those of the Multicolor Blackbody model <cit.>, which describes a superposition of a series of Black Bodies with different temperatures for GRB 081221. Compared to GRB 081221, the slope (β ∼ 5.5) of the temperature distribution indicates a narrow temperature distribution. This is because magnetar bursts are generally softer than typical GRBs.
§.§ Time-Resolved Spectral Analysis
GECAM-C, a gamma-ray monitor akin to GECAM-B, was launched in July 2022 aboard the SATech-01 satellite <cit.>. To examine the observational capability of GECAM-C for magnetars, we also conduct a spectral analysis of a burst detected by Fermi/GBM, GECAM-B, and GECAM-C, as shown in Fig <ref>. We chose this burst because of the similar detection energy ranges of the three instruments and its significant fluence. Note that this burst only serves to exhibit the spectral analysis of GECAM-C and is not included in the statistical analysis above (2021-01 to 2022-01).
We generate the spectral dataset based on the Bayesian block edges and correct the time delays of the three instruments in order to perform joint spectral fitting, as shown in the left panel of Fig <ref>. The right panel of Fig <ref> shows the time-integrated spectral fit with the CPL model (E_peak = 29.85 ± 0.34 keV, α = -0.12 ± 0.08).
We conduct a separate spectral analysis using GECAM-C for localization research, and the results show that the localization is close to the true position. For more detailed spectral fitting and localization results, please refer to Paper 2.
§ SUMMARY
In this paper, we perform temporal and spectral analyses based on GECAM-B EVT data from January 2021 through January 2022, which is also helpful for the localization study (see Paper 2). Fig <ref> exhibits the multiple active burst episodes of SGR J1935+2154 over this year.
The left panel of Fig <ref> shows that the cumulative distribution of the fluence is well fitted by a broken power law. The break point is 4.69 ± 0.17 × 10^-8 erg · cm^-2 for GECAM-B samples. The slope at lower fluences is -0.036 ± 0.004, and the slope at higher fluences is -0.65 ± 0.02.
The burst durations follow a Log-Gaussian distribution with a central value of 119.2 ms, as shown in the right panel of Fig <ref>. These durations are computed with the Bayesian Block method.
The hardness ratio is computed in different energy ranges (H3/H2: 50-200 keV / 30-50 keV for GECAM-B) and shown in Fig <ref>. The hardness ratios range from 0.2 to 1.5, with a median value of 0.56.
The fluence mentioned above is derived from the product of the burst duration and the flux. The flux is calculated by the best-fit model, which is assessed by spectral fitting. We carry out a time-integrated spectral analysis using the PGSTAT statistics with BB2, CPL, BBPL, OTTB, BB, and PL models.
The distributions of each model parameter (kT, E_peak or α) follow a Gaussian distribution whose central values are listed in Table <ref>.
As for the BB2 model, the emission area exhibits a Log-Linear correlation with each corresponding temperature (see Fig <ref>).
As for the CPL model, we find no correlation between the fluence and E_peak, nor between the fluence and the photon index (α). The CPL model is not recommended for fitting SGR burst data observed by GECAM-B, especially as the low-energy edge may increase in future operations, or by other instruments with a low-energy threshold above 30 keV.
To investigate the thermal radiation emitted during a magnetar burst, we assume the temperatures of the Black Bodies (BBs) follow a power law distribution, as described in Eq <ref>. Based on this assumption, the spectrum is given by Eq <ref>.
We perform a fit of the spectrum using all available datasets, and find that a total of 82 GECAM bursts and 39 GBM-GECAM bursts could be well-fit using this MBB model.
As shown in Fig <ref>, the concentration of kT_min values in our analysis indicates that magnetar burst spectra tend to be soft and may be composed of multiple BB components. The steep slope (β) of the temperature distribution further suggests that the majority of the BB temperatures are concentrated around 5 keV.
Additionally, the β and kT_min distributions of GECAM-B and GBM-GECAM are similar to each other, indicating that the MBB model is suitable for analyzing the BB temperatures of magnetar bursts, even when using different instruments with similar detection energy ranges.
This finding provides important insights into the thermal properties of magnetars, and can help inform future studies of these fascinating objects.
This work is supported by the National Key R&D Program of China (2021YFA0718500), the National Natural Science Foundation of China (Grant No. 11833003, 12273042) and the National SKA program of China (2020SKA0120300). The GECAM (Huairou-1) mission is supported by the Strategic Priority Research Program on Space Science (Grant No. XDA15360000, XDA15360102, XDA15360300) of the Chinese Academy of Sciences.
§ TEMPORAL AND SPECTRAL PROPERTIES OF SGR J1935+2154
The temporal properties of SGR J1935+2154 observed by GECAM-B
ID UTC Duration (ms) Model^a Fluence^b ID UTC Duration (ms) Model^a Fluence^b
1 2021-01-27T06:50:20.750 129.17 PL 45.6_-3.29^+0.75 42 2021-09-14T23:26:34.050 30.37 PL 12.85_-4.45^+0.12
2 2021-01-30T08:39:53.840 187.36 PL 6.34_-2.57^+0.15 43 2021-09-15T02:39:25.700 65.87 PL 11.36_-3.81^+0.09
3 2021-01-30T10:35:35.120 23.29 OTTB 6.64_-1.13^+0.39 44 2021-09-22T02:39:10.200 166.41 PL 5.73_-2.74^+0.16
4 2021-02-16T22:20:39.600 357.11 BBPL 27.79_-3.35^+0.41 45 2021-09-22T20:12:16.500 152.61 BBPL 72.8_-12.07^+1.61
5 2021-07-07T00:33:31.640 121.92 BB2 83.98_-11.97^+0.26 46 2021-10-07T11:57:07.700 153.99 PL 9.77_-1.74^+0.21
6 2021-07-08T00:18:18.560 337.3 PL 6.08_-1.52^+0.11 47 2021-11-01T23:13:41.950 40.93 PL 12.56_-3.83^+0.01
7 2021-07-12T04:32:39.600 29.92 PL 15.56_-6.59^+0.13 48 2022-01-04T04:32:11.200 13.16 PL 11.74_-11.74^+0.02
8 2021-07-12T22:12:58.100 21.84 BBPL 12.63_-5.89^+0.72 49 2022-01-05T06:01:31.450 220.58 PL 8.76_-2.24^+0.03
9 2021-09-09T21:07:12.150 204.57 PL 8.44_-2.52^+0.09 50 2022-01-05T07:06:40.800 118.15 BB2 9.15_-3.9^+0.0
10 2021-09-10T01:04:33.500 11.9 PL 15.01_-7.67^+0.31 51 2022-01-06T02:36:14.100 277.75 PL 13.24_-1.65^+0.19
11 2021-09-10T02:07:56.700 66.12 PL 12.47_-3.34^+0.01 52 2022-01-09T07:39:10.700 65.3 PL 10.96_-4.98^+0.08
12 2021-09-10T02:08:28.800 134.65 PL 10.04_-3.13^+0.14 53 2022-01-10T06:52:40.500 258.9 PL 8.61_-1.9^+0.32
13 2021-09-10T03:22:40.550 302.12 PL 10.06_-1.96^+0.26 54 2022-01-11T08:58:35.450 176.2 PL 8.06_-2.55^+0.09
14 2021-09-10T03:24:47.150 40.72 PL 19.26_-3.95^+0.29 55 2022-01-12T01:03:46.900 304.94 PL 6.77_-1.71^+0.11
15 2021-09-10T03:42:45.750 148.83 PL 9.78_-2.88^+0.31 56 2022-01-12T05:42:51.650 503.61 PL 5.61_-1.91^+0.11
16 2021-09-10T05:05:03.350 221.65 PL 12.7_-2.39^+0.02 57 2022-01-12T08:39:25.450 728.17 BBPL 52.9_-5.56^+0.27
17 2021-09-10T05:35:55.500 30.41 PL 14.89_-2.88^+0.1 58 2022-01-12T17:57:08.500 172.48 PL 9.45_-3.05^+0.11
18 2021-09-11T16:39:21.000 50.54 PL 6.85_-6.85^+0.19 59 2022-01-13T19:36:08.600 52.61 PL 9.65_-3.81^+0.4
19 2021-09-11T16:50:03.850 45.88 PL 17.25_-4.28^+0.04 60 2022-01-13T20:14:58.600 319.95 PL 5.2_-1.56^+0.06
20 2021-09-11T17:01:10.800 974.33 BB2 76.89_-2.92^+0.35 61 2022-01-13T21:41:17.900 38.46 PL 12.36_-5.5^+0.0
21 2021-09-11T17:04:29.800 13.65 PL 8.94_-5.07^+0.04 62 2022-01-14T19:42:08.050 193.26 PL 15.52_-2.11^+0.18
22 2021-09-11T17:10:48.750 398.48 PL 6.14_-2.03^+0.02 63 2022-01-14T19:45:08.100 18.93 PL 13.05_-9.02^+0.07
23 2021-09-11T18:02:13.500 38.57 PL 7.72_-3.97^+0.13 64 2022-01-14T19:56:52.700 382.4 BB2 147.37_-5.45^+0.33
24 2021-09-11T18:04:46.350 354.67 PL 5.65_-1.42^+0.16 65 2022-01-14T20:06:07.400 83.6 PL 10.5_-3.38^+0.32
25 2021-09-11T18:54:36.050 26.54 PL 15.33_-5.56^+0.33 66 2022-01-14T20:07:03.050 452.02 BB2 45.28_-5.07^+0.04
26 2021-09-11T19:43:28.000 116.83 PL 11.94_-2.63^+0.06 67 2022-01-14T20:12:45.300 957.86 PL 2.63_-1.89^+0.15
27 2021-09-11T19:46:50.050 53.73 PL 10.97_-6.05^+0.15 68 2022-01-14T20:15:54.400 305.94 BBPL 37.84_-7.35^+0.86
28 2021-09-11T20:13:40.550 111.79 PL 14.03_-2.67^+0.27 69 2022-01-14T20:21:05.150 1530.34 BB2 239.77_-2.79^+0.42
29 2021-09-11T20:22:59.050 1334.24 PL 4.63_-0.91^+0.04 70 2022-01-14T20:23:35.400 189.16 PL 17.39_-2.87^+0.08
30 2021-09-11T20:33:14.550 123.12 PL 8.23_-2.99^+0.15 71 2022-01-14T20:26:50.300 61.19 PL 6.98_-2.94^+1.89
31 2021-09-11T22:51:41.600 78.74 PL 13.34_-3.45^+0.22 72 2022-01-14T20:29:07.250 398.07 BBPL 28.81_-6.74^+0.69
32 2021-09-12T00:34:37.450 192.29 BBPL 31.48_-9.65^+1.54 73 2022-01-14T20:31:49.900 265.8 PL 6.2_-2.02^+0.02
33 2021-09-12T00:45:49.400 32.17 PL 16.6_-3.54^+0.12 74 2022-01-15T09:26:39.900 40.19 PL 10.81_-5.61^+0.23
34 2021-09-12T05:14:07.950 67.48 PL 10.73_-2.28^+0.17 75 2022-01-15T13:52:26.050 76.73 BB2 17.62_-7.87^+0.52
35 2021-09-12T16:26:08.150 77.84 PL 5.71_-3.26^+0.03 76 2022-01-15T16:31:14.900 225.69 PL 7.44_-2.58^+0.28
36 2021-09-12T22:16:36.200 45.55 PL 8.25_-3.82^+0.29 77 2022-01-15T17:21:59.300 247.88 BBPL 53.91_-19.11^+0.17
37 2021-09-13T00:27:25.200 258.7 BB2 8.84_-4.83^+0.42 78 2022-01-16T10:48:37.650 310.99 BBPL 10.99_-6.84^+0.58
38 2021-09-13T19:51:33.350 14.69 PL 14.02_-4.98^+0.01 79 2022-01-17T01:39:37.300 452.21 PL 4.37_-1.88^+0.14
39 2021-09-14T11:10:36.250 76.83 BBPL 36.07_-9.73^+1.94 80 2022-01-20T18:52:48.950 61.4 PL 10.19_-4.13^+0.3
40 2021-09-14T14:15:42.900 15.02 PL 9.99_-9.99^+0.27 81 2022-01-23T20:06:38.750 358.16 BB2 99.96_-4.15^+0.09
41 2021-09-14T23:21:58.500 110.27 PL 10.8_-2.86^+0.34 82 2022-01-24T02:10:55.050 250.47 PL 12.43_-2.22^+0.23
^a Best model assessed by the BIC criterion
^b Values in units of 10^-7 erg cm^-2, calculated using the best model within 30-200 keV
The spectral properties of the BB2 and BB models observed by GECAM-B
ID | BB2: kT1 (keV) norm1 kT2 (keV) norm2 PGSTAT/DOF | BB: kT (keV) norm PGSTAT/DOF
1 8.22_-0.31^+0.31 149.66_-26.17^+26.17 27.61_-3.01^+3.01 0.21_-0.12^+0.12 198.06/206 10.16_-0.17^+0.17 60.58_-5.07^+5.07 321.83/208
2 11.64_-3.01^+3.01 1.76_-1.29^+1.42 39.79_-12.66^+14.85 0.02_-0.02^+0.03 109.66/171 23.95_-1.53^+1.53 0.19_-0.05^+0.05 123.31/173
3 ... ... ... ... ... 17.78_-1.55^+1.55 0.65_-0.24^+0.24 72.98/109
4 8.25_-0.19^+0.16 106.52_-12.92^+12.92 86.62_-37.63^+37.63 0.00_-0.00^+0.00 228.08/238 8.83_-0.17^+0.17 77.47_-7.48^+7.48 358.63/240
5 8.37_-0.34^+0.34 256.75_-42.26^+42.26 20.64_-2.59^+2.59 1.04_-0.78^+0.78 199.34/199 10.02_-0.14^+0.14 124.19_-8.50^+8.50 281.15/201
6 ... ... ... ... ... 35.64_-2.73^+2.73 0.05_-0.01^+0.01 235.67/200
7 ... ... ... ... ... 20.85_-1.08^+1.08 0.80_-0.16^+0.16 145.91/159
8 7.96_-0.78^+0.78 45.29_-22.61^+22.61 86.47_-39.88^+48.48 0.01_-0.01^+0.01 97.09/111 12.30_-0.69^+0.69 7.08_-1.82^+1.82 170.42/113
9 11.52_-1.34^+1.34 4.24_-1.96^+1.96 47.12_-19.37^+19.37 0.01_-0.01^+0.02 128.47/153 15.90_-0.72^+0.72 1.33_-0.27^+0.27 155.00/155
10 ... ... ... ... ... 22.65_-1.55^+1.55 0.56_-0.15^+0.15 120.71/118
11 ... ... ... ... ... 26.94_-1.85^+1.85 0.25_-0.06^+0.06 152.69/153
12 ... ... ... ... ... 33.65_-2.54^+2.54 0.09_-0.02^+0.02 218.03/194
13 9.42_-0.71^+0.71 12.83_-4.24^+4.24 50.64_-15.00^+15.00 0.01_-0.01^+0.01 232.82/243 12.59_-0.51^+0.51 4.16_-0.76^+0.76 294.99/245
14 8.47_-0.63^+0.63 45.51_-15.76^+15.76 51.31_-17.83^+17.83 0.02_-0.02^+0.02 134.93/154 11.62_-0.46^+0.46 11.92_-2.23^+2.23 207.83/156
15 ... ... ... ... ... 41.29_-3.77^+3.77 0.05_-0.02^+0.02 194.56/198
16 ... ... ... ... ... 27.68_-1.37^+1.37 0.23_-0.04^+0.04 291.15/225
17 11.10_-1.81^+1.81 5.31_-3.42^+4.02 65.39_-21.34^+21.34 0.01_-0.01^+0.02 132.23/162 27.51_-1.72^+1.72 0.27_-0.06^+0.06 166.25/164
18 ... ... ... ... ... 20.02_-1.72^+1.72 0.40_-0.14^+0.14 77.44/93
19 ... ... ... ... ... 9.69_-0.37^+0.37 28.32_-5.61^+5.61 166.43/130
20 9.06_-0.10^+0.10 180.92_-8.50^+8.50 28.04_-3.42^+3.42 0.10_-0.06^+0.06 326.83/291 9.54_-0.06^+0.06 146.02_-4.41^+4.41 458.42/293
21 8.91_-1.30^+1.30 13.82_-8.56^+11.26 108.00_-62.38^+62.38 0.00_-0.00^+0.01 65.59/78 13.65_-1.08^+1.08 2.44_-0.89^+0.89 99.05/80
22 ... ... ... ... ... 13.49_-0.81^+0.81 1.76_-0.48^+0.48 267.20/217
23 ... ... ... ... ... 21.72_-1.63^+1.63 0.34_-0.10^+0.10 71.27/108
24 ... ... ... ... ... 26.11_-1.59^+1.59 0.12_-0.03^+0.03 175.73/184
25 7.98_-0.72^+0.72 49.79_-21.83^+21.83 53.56_-21.87^+23.31 0.01_-0.01^+0.02 101.27/118 11.02_-0.54^+0.54 12.16_-2.86^+2.86 155.33/120
26 ... ... ... ... ... 12.76_-0.57^+0.57 5.03_-1.04^+1.04 117.75/146
27 ... ... ... ... ... 14.81_-0.87^+0.87 2.25_-0.58^+0.58 129.70/118
28 6.84_-0.69^+0.69 79.35_-42.20^+42.20 29.74_-3.56^+3.56 0.11_-0.06^+0.06 137.57/175 14.30_-0.52^+0.52 3.47_-0.57^+0.57 232.22/177
29 ... ... ... ... ... 11.27_-0.45^+0.45 3.32_-0.62^+0.62 298.44/280
30 ... ... ... ... ... 30.55_-2.34^+2.34 0.11_-0.03^+0.03 135.74/145
31 ... ... ... ... ... 24.61_-1.34^+1.34 0.37_-0.08^+0.08 184.32/170
32 8.95_-0.33^+0.33 66.46_-11.15^+11.15 57.91_-16.59^+16.59 0.01_-0.01^+0.01 220.62/242 10.53_-0.24^+0.24 33.45_-3.62^+3.62 350.95/244
33 9.48_-1.07^+1.07 15.46_-7.78^+7.78 61.59_-20.85^+20.92 0.01_-0.01^+0.01 111.13/164 21.27_-1.05^+1.05 0.78_-0.16^+0.16 179.86/166
34 11.14_-1.43^+1.43 4.69_-2.63^+2.63 71.16_-28.11^+28.11 0.01_-0.01^+0.01 142.92/164 22.61_-1.20^+1.20 0.40_-0.09^+0.09 190.24/166
35 ... ... ... ... ... 19.26_-1.50^+1.50 0.39_-0.13^+0.13 100.07/105
36 ... ... ... ... ... 27.35_-2.23^+2.23 0.16_-0.05^+0.05 89.09/116
37 10.22_-1.07^+1.07 5.80_-2.56^+2.56 72.14_-26.46^+29.11 0.00_-0.00^+0.01 225.28/228 20.55_-0.95^+0.95 0.46_-0.09^+0.09 302.27/230
38 8.82_-1.55^+1.55 16.10_-10.94^+13.17 63.00_-23.71^+26.74 0.01_-0.01^+0.02 97.11/110 24.53_-1.72^+1.72 0.39_-0.11^+0.11 136.94/112
39 8.91_-0.34^+0.34 78.62_-14.29^+14.29 70.62_-23.41^+23.41 0.01_-0.01^+0.01 201.29/218 10.72_-0.26^+0.26 35.73_-4.11^+4.11 364.32/220
40 ... ... ... ... ... 16.62_-1.21^+1.21 1.26_-0.40^+0.40 85.63/82
41 10.44_-4.28^+4.28 2.34_-2.09^+3.65 62.91_-17.37^+18.82 0.02_-0.01^+0.01 170.45/192 45.96_-4.89^+4.89 0.05_-0.01^+0.01 179.50/194
42 ... ... ... ... ... 33.66_-2.70^+2.70 0.13_-0.04^+0.04 125.01/154
43 ... ... ... ... ... 25.47_-1.71^+1.71 0.27_-0.07^+0.07 111.31/142
44 11.08_-2.71^+2.71 2.10_-1.61^+1.71 52.62_-24.61^+30.81 0.01_-0.01^+0.01 107.61/129 26.23_-2.15^+2.15 0.12_-0.04^+0.04 123.26/131
45 ... ... ... ... ... 10.43_-0.14^+0.14 86.45_-5.83^+5.83 350.55/225
46 10.07_-1.28^+1.13 6.62_-3.59^+3.59 67.33_-26.74^+29.13 0.01_-0.01^+0.01 116.29/166 23.95_-1.34^+1.34 0.29_-0.06^+0.06 170.81/168
47 8.80_-1.78^+1.78 11.40_-9.24^+10.48 46.54_-11.01^+11.01 0.03_-0.03^+0.03 132.41/140 27.51_-1.97^+1.97 0.23_-0.06^+0.06 159.62/142
48 8.49_-1.52^+1.52 19.39_-13.29^+15.65 44.06_-14.89^+15.80 0.03_-0.03^+0.03 67.34/98 18.47_-1.25^+1.25 0.97_-0.28^+0.28 95.24/100
49 ... ... ... ... ... 25.28_-1.45^+1.45 0.22_-0.05^+0.05 171.54/172
50 9.25_-0.83^+0.83 13.47_-5.77^+5.77 95.64_-49.35^+69.17 0.00_-0.00^+0.00 105.90/119 13.05_-0.72^+0.72 3.21_-0.83^+0.83 169.40/121
51 9.43_-0.63^+0.63 16.89_-5.02^+5.02 47.11_-10.25^+10.25 0.02_-0.01^+0.01 170.20/214 14.16_-0.43^+0.43 3.41_-0.47^+0.47 280.80/216
52 ... ... ... ... ... 12.91_-0.70^+0.70 4.10_-1.05^+1.05 139.16/118
53 8.56_-1.12^+1.12 12.38_-7.71^+7.71 44.38_-8.60^+8.60 0.02_-0.01^+0.01 155.33/180 22.01_-1.12^+1.12 0.34_-0.07^+0.07 218.35/182
54 10.25_-1.31^+1.31 6.13_-3.27^+3.27 47.33_-17.18^+17.18 0.01_-0.01^+0.02 118.49/144 16.37_-0.87^+0.87 1.08_-0.26^+0.26 151.94/146
55 ... ... ... ... ... 25.66_-1.47^+1.47 0.16_-0.04^+0.04 208.81/189
56 ... ... ... ... ... 16.04_-1.13^+1.13 0.72_-0.22^+0.22 317.18/252
57 ... ... ... ... ... 9.94_-0.08^+0.08 81.14_-3.33^+3.33 447.55/279
58 ... ... ... ... ... 31.75_-2.23^+2.23 0.11_-0.03^+0.03 143.17/189
59 ... ... ... ... ... 22.00_-1.51^+1.51 0.40_-0.11^+0.11 121.95/124
60 ... ... ... ... ... 27.75_-1.79^+1.79 0.09_-0.02^+0.02 232.97/214
61 ... ... ... ... ... 21.33_-1.37^+1.37 0.56_-0.15^+0.15 150.75/140
62 9.76_-0.67^+0.67 16.91_-5.12^+5.12 54.07_-15.03^+15.03 0.01_-0.01^+0.01 149.82/198 14.19_-0.46^+0.46 3.94_-0.57^+0.57 244.63/200
63 ... ... ... ... ... 15.77_-1.02^+1.02 2.05_-0.58^+0.58 108.14/95
64 9.65_-0.17^+0.17 220.03_-14.93^+14.93 24.90_-1.71^+1.71 0.74_-0.28^+0.28 336.83/272 ... ... ...
65 ... ... ... ... ... 32.40_-2.40^+2.40 0.11_-0.03^+0.03 153.68/159
66 9.21_-0.25^+0.25 89.11_-10.03^+10.03 29.21_-5.12^+5.12 0.11_-0.08^+0.08 258.21/262 10.24_-0.14^+0.14 58.15_-3.72^+3.72 340.00/264
67 ... ... ... ... ... 11.73_-1.27^+1.27 1.32_-0.63^+0.63 292.81/287
68 9.18_-0.24^+0.24 75.89_-9.12^+9.12 54.32_-12.79^+12.79 0.01_-0.01^+0.01 228.59/236 10.46_-0.17^+0.17 43.45_-3.39^+3.39 408.35/238
69 9.84_-0.12^+0.12 278.70_-10.92^+10.92 20.56_-0.54^+0.54 4.17_-0.71^+0.71 526.86/301 ... ... ...
70 8.47_-0.42^+0.42 48.85_-11.68^+11.68 44.40_-11.16^+11.16 0.02_-0.02^+0.02 169.85/170 10.23_-0.28^+0.28 21.51_-3.00^+3.00 258.69/172
71 ... ... ... ... ... 16.73_-1.32^+1.32 0.82_-0.30^+0.30 94.81/94
72 9.42_-0.24^+0.24 48.93_-5.66^+5.66 45.65_-7.81^+7.81 0.02_-0.01^+0.01 287.64/250 11.00_-0.16^+0.16 25.74_-1.82^+1.82 497.10/252
73 9.15_-1.81^+1.81 5.25_-4.25^+4.96 40.23_-8.47^+8.47 0.02_-0.02^+0.02 143.18/163 24.65_-1.63^+1.63 0.16_-0.04^+0.04 167.93/165
74 ... ... ... ... ... 23.39_-1.57^+1.57 0.36_-0.09^+0.09 97.89/124
75 8.48_-0.60^+0.60 37.17_-13.02^+13.02 79.15_-31.41^+31.89 0.01_-0.01^+0.01 122.53/174 12.34_-0.52^+0.52 7.50_-1.48^+1.48 238.56/176
76 ... ... ... ... ... 32.26_-2.44^+2.44 0.08_-0.02^+0.02 156.12/184
77 9.36_-0.30^+0.30 88.50_-11.94^+11.94 29.74_-3.22^+3.22 0.18_-0.10^+0.10 214.79/241 11.58_-0.15^+0.15 38.18_-2.43^+2.43 388.29/243
78 9.70_-0.75^+0.75 11.25_-4.00^+4.00 69.57_-21.01^+21.01 0.01_-0.01^+0.01 192.17/227 17.69_-0.67^+0.67 1.10_-0.18^+0.18 307.82/229
79 ... ... ... ... ... 29.17_-2.84^+2.84 0.06_-0.02^+0.02 231.17/237
80 ... ... ... ... ... 21.65_-1.46^+1.46 0.45_-0.12^+0.12 140.36/120
81 8.73_-0.12^+0.12 290.84_-18.57^+18.57 35.59_-6.05^+6.05 0.05_-0.03^+0.03 277.77/229 9.23_-0.08^+0.08 225.61_-10.12^+10.12 416.70/231
82 9.89_-0.83^+0.83 11.08_-4.08^+4.08 59.42_-16.90^+16.90 0.01_-0.01^+0.01 195.88/205 17.35_-0.65^+0.65 1.31_-0.21^+0.21 288.45/207
The spectral properties of the CPL and OTTB models observed by GECAM-B
ID | CPL: Photon Index E_peak norm PGSTAT/DOF | OTTB: kT (keV) norm PGSTAT/DOF
1 ... ... ... ... 24.29_-0.79^+0.79 8.60_-0.40^+0.40 230.23/208
2 ... ... ... ... 109.54_-23.47^+23.47 0.26_-0.04^+0.04 110.02/173
3 ... ... ... ... 63.39_-14.93^+14.93 0.41_-0.09^+0.09 62.11/109
4 ... ... ... ... 19.92_-0.67^+0.67 7.09_-0.35^+0.35 312.50/240
5 ... ... ... ... 22.71_-0.60^+0.60 17.63_-0.68^+0.68 212.54/201
7 ... ... ... ... 90.55_-14.75^+14.75 0.72_-0.09^+0.09 120.48/159
8 ... ... ... ... 54.67_-6.99^+6.99 1.18_-0.14^+0.14 135.44/113
9 ... ... ... ... 52.76_-5.69^+5.69 0.62_-0.07^+0.07 134.53/155
10 ... ... ... ... 112.12_-26.68^+26.68 0.60_-0.09^+0.09 102.78/118
11 ... ... ... ... 184.33_-55.97^+55.97 0.39_-0.06^+0.06 129.89/153
13 ... ... ... ... 43.71_-3.80^+3.80 0.89_-0.08^+0.08 258.34/245
14 ... ... ... ... 36.84_-3.03^+3.03 2.07_-0.20^+0.20 168.48/156
16 ... ... ... ... 173.42_-35.15^+35.15 0.41_-0.04^+0.04 235.15/225
17 ... ... ... ... 175.58_-47.33^+47.33 0.47_-0.07^+0.07 138.19/164
18 ... ... ... ... 88.32_-24.70^+24.70 0.32_-0.07^+0.07 71.69/93
19 ... ... ... ... 23.99_-1.71^+1.71 3.27_-0.33^+0.33 141.81/130
20 ... ... ... ... 21.14_-0.24^+0.24 17.91_-0.29^+0.29 464.85/293
21 ... ... ... ... 61.25_-11.75^+11.75 0.57_-0.10^+0.10 81.42/80
22 ... ... ... ... 57.60_-7.81^+7.81 0.41_-0.05^+0.05 241.41/217
23 ... ... ... ... 101.36_-25.17^+25.17 0.33_-0.06^+0.06 61.29/108
24 ... ... ... ... 128.92_-27.13^+27.13 0.21_-0.03^+0.03 152.03/184
25 ... ... ... ... 34.98_-3.42^+3.42 1.76_-0.20^+0.20 128.79/120
26 ... ... ... ... 37.58_-3.52^+3.52 1.27_-0.14^+0.14 104.68/146
27 ... ... ... ... 51.32_-7.03^+7.03 0.82_-0.11^+0.11 117.31/118
28 ... ... ... ... 45.81_-3.65^+3.65 1.21_-0.11^+0.11 168.60/177
29 ... ... ... ... 32.43_-2.68^+2.68 0.57_-0.05^+0.05 277.06/280
31 ... ... ... ... 124.04_-23.59^+23.59 0.51_-0.06^+0.06 152.25/170
32 ... ... ... ... 28.56_-1.27^+1.27 4.75_-0.26^+0.26 287.80/244
33 ... ... ... ... 94.26_-14.43^+14.43 0.75_-0.09^+0.09 132.67/166
34 ... ... ... ... 107.41_-18.30^+18.30 0.44_-0.06^+0.06 154.70/166
35 ... ... ... ... 76.35_-16.26^+16.26 0.30_-0.06^+0.06 88.74/105
36 ... ... ... ... 177.60_-63.32^+63.32 0.26_-0.05^+0.05 76.95/116
37 ... ... ... ... 94.94_-13.33^+13.33 0.39_-0.04^+0.04 249.97/230
38 ... ... ... ... 129.35_-33.70^+33.70 0.52_-0.08^+0.08 110.42/112
39 ... ... ... ... 30.74_-1.44^+1.44 5.10_-0.30^+0.30 291.38/220
40 ... ... ... ... 63.56_-12.30^+12.30 0.62_-0.11^+0.11 68.62/82
43 ... ... ... ... 144.18_-36.35^+36.35 0.40_-0.06^+0.06 89.12/142
44 ... ... ... ... 137.96_-41.47^+41.47 0.20_-0.04^+0.04 109.86/131
45 ... ... ... ... 25.44_-0.68^+0.68 13.05_-0.48^+0.48 268.18/225
46 ... ... ... ... 111.30_-19.82^+19.82 0.39_-0.05^+0.05 131.86/168
47 ... ... ... ... 162.74_-45.58^+45.58 0.41_-0.06^+0.06 137.16/142
48 ... ... ... ... 69.59_-12.71^+12.71 0.67_-0.11^+0.11 76.59/100
49 ... ... ... ... 127.63_-26.02^+26.02 0.33_-0.04^+0.04 150.45/172
50 ... ... ... ... 53.29_-6.20^+6.20 0.68_-0.09^+0.09 136.96/121
51 ... ... ... ... 47.43_-3.15^+3.15 1.10_-0.08^+0.08 206.11/216
52 ... ... ... ... 44.73_-5.37^+5.37 0.95_-0.12^+0.12 115.64/118
53 ... ... ... ... 88.90_-12.62^+12.62 0.40_-0.05^+0.05 172.74/182
54 ... ... ... ... 59.43_-7.85^+7.85 0.53_-0.07^+0.07 126.40/146
55 ... ... ... ... 135.70_-27.56^+27.56 0.24_-0.03^+0.03 183.52/189
56 ... ... ... ... 91.99_-17.62^+17.62 0.25_-0.04^+0.04 292.71/252
57 -0.02_-0.20^+0.20 31.79_-1.42^+1.42 28.80_-1.12^+1.12 393.03/278 22.77_-0.36^+0.36 11.08_-0.25^+0.25 412.19/279
59 ... ... ... ... 102.25_-22.39^+22.39 0.41_-0.07^+0.07 108.05/124
60 ... ... ... ... 145.35_-32.25^+32.25 0.18_-0.03^+0.03 204.42/214
61 ... ... ... ... 100.09_-19.83^+19.83 0.53_-0.08^+0.08 122.06/140
62 ... ... ... ... 48.21_-3.51^+3.51 1.26_-0.10^+0.10 181.43/200
63 ... ... ... ... 58.01_-9.95^+9.95 0.86_-0.14^+0.14 97.53/95
64 -0.92_-0.13^+0.13 28.53_-1.84^+1.84 49.89_-1.38^+1.38 407.75/273 27.32_-0.37^+0.37 24.02_-0.44^+0.44 408.13/274
66 -0.91_-0.28^+0.28 25.72_-3.64^+3.64 19.17_-1.04^+1.04 289.66/263 24.51_-0.63^+0.63 8.43_-0.29^+0.29 289.74/264
67 ... ... ... ... 47.60_-11.65^+11.65 0.20_-0.04^+0.04 288.91/287
68 ... ... ... ... 26.48_-0.82^+0.82 6.41_-0.26^+0.26 316.35/238
69 ... ... ... ... 30.63_-0.19^+0.19 33.76_-0.27^+0.27 781.09/303
70 ... ... ... ... 26.85_-1.43^+1.43 2.83_-0.20^+0.20 211.41/172
71 ... ... ... ... 69.38_-13.96^+13.96 0.39_-0.07^+0.07 79.58/94
72 ... ... ... ... 28.83_-0.82^+0.82 4.36_-0.16^+0.16 371.46/252
73 ... ... ... ... 109.87_-22.59^+22.59 0.25_-0.04^+0.04 146.50/165
74 ... ... ... ... 121.62_-31.06^+31.06 0.41_-0.07^+0.07 85.38/124
75 ... ... ... ... 50.13_-4.59^+4.59 1.32_-0.12^+0.12 182.21/176
77 -1.95_-0.02^+0.02 2.69_-1.20^+1.20 15.92_-1.18^+1.18 238.66/242 29.67_-0.80^+0.80 7.85_-0.28^+0.28 253.66/243
78 ... ... ... ... 71.54_-6.84^+6.84 0.64_-0.06^+0.06 229.82/229
80 ... ... ... ... 109.12_-24.89^+24.89 0.42_-0.07^+0.07 124.46/120
81 0.01_-0.22^+0.22 28.89_-1.52^+1.52 76.44_-3.32^+3.32 375.27/230 19.90_-0.32^+0.32 25.33_-0.62^+0.62 390.62/231
82 ... ... ... ... 66.21_-6.19^+6.19 0.74_-0.07^+0.07 225.02/207
The spectral properties of the BBPL and PL models observed by GECAM-B
ID | BBPL: kT (keV) norm1 Photon Index norm2 PGSTAT/DOF | PL: Photon Index norm PGSTAT/DOF
1 8.80_-0.75^+0.75 39.15_-33.81^+39.46 -3.22_-0.24^+0.24 171263.82_-127297.40^+127297.40 187.21/206 -3.42_-0.07^+0.07 548845.91_-141783.74^+141783.74 195.82/208
2 ... ... ... ... ... -1.85_-0.15^+0.15 106.12_-69.47^+69.47 109.80/173
4 8.14_-0.23^+0.23 111.91_-15.14^+15.14 -0.80_-0.38^+0.38 0.48_-0.48^+1.17 226.85/238 -3.60_-0.07^+0.07 705940.41_-191336.84^+191336.84 274.72/240
5 9.17_-0.42^+0.42 123.84_-41.52^+41.52 -3.09_-0.31^+0.31 102387.61_-98161.45^+168559.52 200.08/199 -3.51_-0.06^+0.06 1427131.21_-331418.52^+331418.52 274.44/201
6 ... ... ... ... ... -1.39_-0.12^+0.12 13.36_-7.83^+7.83 208.78/200
7 11.03_-2.24^+2.24 4.13_-3.36^+3.36 -1.40_-0.64^+0.64 24.03_-24.02^+78.90 115.93/157 -1.95_-0.13^+0.13 413.94_-230.30^+230.30 117.23/159
8 7.82_-1.00^+1.00 47.82_-26.45^+28.64 -0.60_-0.47^+0.56 0.24_-0.24^+0.99 96.75/111 -2.43_-0.14^+0.14 3473.06_-1992.86^+1992.86 115.38/113
9 11.58_-1.23^+1.23 3.39_-1.51^+1.51 -1.47_-0.86^+0.86 11.33_-11.33^+46.32 128.57/153 -2.41_-0.13^+0.13 1598.57_-877.10^+877.10 134.07/155
10 ... ... ... ... ... -1.84_-0.16^+0.16 242.80_-165.13^+165.13 99.67/118
11 ... ... ... ... ... -1.61_-0.14^+0.14 72.56_-44.71^+44.71 126.35/153
12 ... ... ... ... ... -1.56_-0.12^+0.12 46.39_-26.07^+26.07 171.81/194
13 9.03_-0.89^+0.89 12.94_-4.69^+4.69 -1.35_-0.47^+0.47 8.81_-8.78^+22.09 232.48/243 -2.56_-0.11^+0.11 3539.00_-1521.92^+1521.92 240.09/245
14 7.94_-0.82^+0.82 53.60_-22.29^+22.29 -1.34_-0.48^+0.48 15.89_-15.88^+37.43 134.18/154 -2.81_-0.12^+0.12 19159.90_-9113.68^+9113.68 142.70/156
15 6.58_-3.22^+3.22 17.46_-16.53^+39.76 -0.94_-0.23^+0.23 2.06_-2.06^+2.56 168.37/196 -1.22_-0.11^+0.11 9.83_-5.63^+5.63 172.78/198
16 10.65_-1.66^+1.66 5.05_-2.72^+2.72 -0.76_-0.39^+0.39 0.62_-0.62^+2.00 215.60/223 -1.68_-0.10^+0.10 100.73_-42.93^+42.93 223.90/225
17 ... ... ... ... ... -1.65_-0.12^+0.12 102.66_-57.06^+57.06 133.55/164
18 ... ... ... ... ... -1.91_-0.23^+0.23 153.55_-148.60^+148.60 71.50/93
19 ... ... ... ... ... -3.39_-0.14^+0.14 179298.30_-95949.26^+95949.26 123.21/130
20 9.13_-0.08^+0.08 162.79_-10.03^+10.03 -2.54_-0.35^+0.35 3320.13_-3139.00^+6443.09 327.39/291 ... ... ...
21 ... ... ... ... ... -2.33_-0.20^+0.20 1201.31_-975.41^+975.41 73.51/80
22 9.77_-1.27^+1.27 5.75_-2.83^+2.83 -0.79_-0.62^+0.62 0.25_-0.25^+1.34 222.09/215 -2.34_-0.14^+0.14 862.47_-491.91^+491.91 227.87/217
23 ... ... ... ... ... -1.87_-0.18^+0.18 142.08_-113.52^+113.52 60.34/108
24 11.12_-2.92^+2.92 1.15_-1.12^+1.12 -1.36_-0.56^+0.56 7.92_-7.92^+18.61 147.74/182 -1.79_-0.13^+0.13 73.89_-42.67^+42.67 148.39/184
25 7.61_-0.92^+0.92 56.77_-31.28^+31.29 -1.15_-0.56^+0.56 4.77_-4.77^+15.71 100.84/118 -2.86_-0.15^+0.15 18793.55_-11128.23^+11128.23 110.43/120
26 ... ... ... ... ... -2.75_-0.14^+0.14 9465.93_-5280.77^+5280.77 104.16/146
27 11.48_-1.71^+1.71 5.69_-2.57^+2.57 -0.52_-0.42^+2.23 0.06_-0.06^+0.99 107.08/116 -2.38_-0.17^+0.17 1863.77_-1289.24^+1289.24 115.24/118
28 5.08_-1.67^+1.54 206.08_-387.71^+387.71 -2.21_-0.28^+0.28 977.16_-897.79^+1299.59 137.55/175 -2.65_-0.11^+0.11 7361.88_-3152.19^+3152.19 142.73/177
29 ... ... ... ... ... -2.84_-0.12^+0.12 5397.98_-2530.04^+2530.04 270.55/280
30 ... ... ... ... ... -1.51_-0.14^+0.14 31.64_-20.42^+20.42 111.01/145
31 10.29_-2.35^+2.35 4.73_-3.44^+3.44 -1.22_-0.49^+0.49 8.77_-8.77^+23.52 144.69/168 -1.80_-0.12^+0.12 186.37_-98.12^+98.12 146.95/170
32 8.65_-0.39^+0.39 71.71_-13.29^+13.29 -1.24_-0.36^+0.36 10.41_-10.33^+20.00 218.44/242 -3.04_-0.07^+0.07 83890.27_-23932.69^+23932.69 243.85/244
33 9.03_-1.39^+1.39 16.05_-9.14^+9.22 -1.00_-0.41^+0.41 3.12_-3.12^+6.06 111.32/164 -2.01_-0.12^+0.12 573.43_-285.30^+285.30 119.96/166
34 10.61_-1.82^+1.82 4.68_-2.73^+2.73 -0.97_-0.48^+0.48 1.63_-1.63^+5.59 142.40/164 -1.90_-0.12^+0.12 224.20_-117.86^+117.86 146.89/166
35 11.56_-2.59^+2.59 2.04_-1.47^+1.47 -1.07_-0.83^+1.05 1.25_-1.25^+7.58 84.70/103 -2.10_-0.20^+0.20 289.99_-243.46^+243.46 86.93/105
36 ... ... ... ... ... -1.60_-0.17^+0.17 46.74_-35.38^+35.38 75.80/116
37 9.85_-1.38^+1.38 5.87_-2.90^+2.99 -0.90_-0.42^+0.42 0.86_-0.86^+2.59 225.99/228 -1.98_-0.11^+0.11 259.02_-116.85^+116.85 236.73/230
38 8.18_-2.15^+2.15 18.39_-13.64^+20.11 -0.98_-0.46^+0.46 2.70_-2.70^+6.96 97.79/110 -1.83_-0.15^+0.15 219.62_-145.60^+145.60 104.22/112
39 8.63_-0.42^+0.42 86.04_-17.65^+17.65 -1.01_-0.34^+0.34 3.92_-3.92^+7.76 200.85/218 -2.99_-0.08^+0.08 78641.58_-23380.65^+23380.65 232.08/220
40 6.52_-2.29^+2.29 41.65_-38.38^+63.52 -1.55_-0.56^+0.56 33.73_-33.44^+87.42 59.77/80 -2.28_-0.20^+0.20 1088.16_-877.05^+877.05 62.63/82
41 ... ... ... ... ... -1.02_-0.11^+0.11 4.28_-2.59^+2.59 172.31/194
42 ... ... ... ... ... -1.28_-0.13^+0.13 17.28_-10.73^+10.73 111.28/154
43 12.78_-1.86^+1.86 2.70_-1.32^+1.32 -0.04_-0.02^+1.63 0.00_-0.00^+0.13 81.43/140 -1.73_-0.14^+0.14 113.97_-71.76^+71.76 84.89/142
44 ... ... ... ... ... -1.75_-0.17^+0.17 62.17_-47.35^+47.35 107.77/131
45 9.31_-0.23^+0.23 133.54_-12.92^+12.92 -1.50_-0.48^+0.48 44.47_-43.83^+112.54 193.52/223 -3.32_-0.05^+0.05 574088.35_-118860.75^+118860.75 294.40/225
46 8.93_-1.53^+1.53 8.31_-5.54^+11.07 -1.13_-0.37^+0.37 3.93_-3.84^+3.84 116.49/166 -1.92_-0.12^+0.12 226.28_-120.57^+120.57 121.96/168
47 ... ... ... ... ... -1.69_-0.14^+0.14 106.21_-65.55^+65.55 132.96/142
48 8.18_-2.08^+2.08 17.97_-16.14^+17.27 -1.31_-0.65^+0.65 11.03_-11.03^+35.90 67.93/98 -2.22_-0.18^+0.18 1011.82_-771.94^+771.94 71.09/100
49 14.51_-2.23^+2.23 1.04_-0.59^+0.59 -0.92_-0.85^+0.85 0.92_-0.92^+5.40 147.23/170 -1.77_-0.12^+0.12 103.82_-57.91^+57.91 148.33/172
50 ... ... ... ... ... -2.49_-0.14^+0.14 2487.53_-1447.95^+1447.95 120.30/121
51 8.81_-0.84^+0.84 17.65_-6.21^+6.21 -1.58_-0.39^+0.39 36.20_-35.58^+70.01 168.59/214 -2.57_-0.09^+0.09 5021.22_-1794.39^+1794.39 176.48/216
52 ... ... ... ... ... -2.61_-0.15^+0.15 4895.01_-3051.97^+3051.97 101.69/118
53 7.09_-1.88^+1.88 18.83_-14.95^+20.47 -1.51_-0.32^+0.32 24.92_-24.06^+38.52 154.11/180 -2.07_-0.12^+0.12 389.68_-193.11^+193.11 159.53/182
54 9.46_-1.92^+1.92 5.50_-4.30^+4.30 -1.65_-0.61^+0.61 36.83_-36.83^+159.91 117.74/144 -2.32_-0.14^+0.14 1047.50_-616.69^+616.69 119.12/146
55 ... ... ... ... ... -1.74_-0.12^+0.12 72.17_-38.77^+38.77 180.17/189
56 ... ... ... ... ... -1.98_-0.14^+0.14 167.03_-101.94^+101.94 284.01/252
57 9.43_-0.12^+0.12 97.06_-4.85^+4.85 -1.85_-0.44^+0.44 102.35_-101.85^+187.58 326.21/277 ... ... ...
58 ... ... ... ... ... -1.36_-0.13^+0.13 17.80_-10.57^+10.57 142.83/189
59 ... ... ... ... ... -1.88_-0.16^+0.16 186.39_-132.95^+132.95 106.85/124
60 9.96_-2.31^+2.31 2.49_-1.91^+2.03 -1.04_-0.44^+0.44 1.34_-1.34^+3.58 194.48/212 -1.76_-0.12^+0.12 58.85_-33.14^+33.14 198.59/214
61 ... ... ... ... ... -1.96_-0.14^+0.14 346.94_-211.59^+211.59 114.28/140
62 9.22_-0.83^+0.83 18.00_-5.88^+5.88 -1.35_-0.41^+0.41 13.28_-13.26^+25.00 148.02/198 -2.53_-0.09^+0.09 4913.13_-1854.90^+1854.90 156.04/200
63 11.51_-2.13^+2.13 5.68_-3.00^+3.00 -0.90_-0.58^+1.23 0.95_-0.95^+7.32 92.77/93 -2.25_-0.19^+0.19 1272.57_-971.22^+971.22 95.67/95
64 10.34_-0.18^+0.18 129.90_-16.43^+16.43 -2.75_-0.16^+0.16 36038.20_-27097.67^+28316.21 339.36/272 -3.23_-0.03^+0.03 829074.42_-89069.15^+89069.15 806.74/274
65 ... ... ... ... ... -1.48_-0.13^+0.13 34.14_-20.38^+20.38 125.84/159
66 9.34_-0.21^+0.21 71.29_-12.34^+12.34 -2.42_-0.45^+0.45 2389.36_-2331.45^+5329.40 259.90/262 -3.32_-0.05^+0.05 366656.92_-71224.04^+71224.04 379.82/264
67 ... ... ... ... ... -2.31_-0.24^+0.24 323.03_-317.64^+317.64 284.48/287
68 8.98_-0.28^+0.28 78.98_-9.74^+9.74 -1.37_-0.36^+0.36 16.66_-16.24^+32.08 226.85/236 -3.23_-0.06^+0.06 212928.95_-47428.32^+47428.32 284.10/238
70 7.83_-0.56^+0.56 58.20_-16.55^+16.55 -1.92_-0.47^+0.47 173.60_-172.92^+425.24 167.81/170 -3.21_-0.09^+0.09 90638.99_-33238.27^+33238.27 174.82/172
71 ... ... ... ... ... -2.23_-0.20^+0.20 631.07_-272.06^+272.06 73.95/94
72 9.21_-0.27^+0.27 48.66_-5.43^+5.43 -1.66_-0.34^+0.34 57.69_-55.66^+78.61 284.91/250 -3.13_-0.05^+0.05 108382.70_-21750.55^+21750.55 343.73/252
73 ... ... ... ... ... -1.90_-0.14^+0.14 133.26_-83.35^+83.35 142.00/165
74 12.84_-3.07^+3.07 1.71_-1.42^+1.42 -1.07_-0.43^+0.94 3.09_-3.09^+16.06 83.27/122 -1.75_-0.16^+0.16 117.70_-81.27^+81.27 84.62/124
75 ... ... ... ... ... -2.52_-0.11^+0.11 5311.99_-2351.61^+2351.61 148.35/176
76 ... ... ... ... ... -1.43_-0.13^+0.13 19.82_-12.44^+12.44 138.70/184
77 9.68_-0.29^+0.29 55.00_-13.37^+13.37 -2.47_-0.30^+0.30 5779.44_-5361.47^+5887.83 212.47/241 -3.12_-0.05^+0.05 189135.42_-37442.86^+37442.86 269.02/243
78 8.18_-1.02^+1.02 18.64_-9.21^+9.21 -1.26_-0.29^+0.29 8.34_-8.32^+12.26 188.68/227 -2.24_-0.09^+0.09 1059.02_-409.22^+409.22 200.28/229
79 ... ... ... ... ... -1.46_-0.17^+0.17 13.36_-10.33^+10.33 215.65/237
80 ... ... ... ... ... -1.83_-0.16^+0.16 160.83_-111.44^+111.44 122.12/120
81 8.70_-0.12^+0.12 284.98_-16.40^+16.40 -1.97_-0.49^+0.49 274.00_-268.19^+703.12 278.16/229 ... ... ...
82 9.28_-1.06^+1.06 12.12_-5.22^+5.22 -1.17_-0.36^+0.36 4.98_-4.98^+9.69 193.75/205 -2.26_-0.10^+0.10 1261.14_-510.88^+510.88 203.44/207
The spectral properties of the MBB model observed by GECAM-B
ID kT_min (keV) β norm PGSTAT ID kT_min (keV) β norm PGSTAT
1 5.35_-0.12^+0.14 6.95_-0.12^+0.16 47339.11_-11355.99^+23235.18 316.77 40 3.83_-3.33^+1.64 4.87_-0.13^+0.21 45.39_-15.92^+46.89 167.76
2 2.32_-0.63^+0.55 4.89_-0.1^+0.11 22.73_-5.77^+8.71 173.66 41 5.59_-4.86^+0.76 5.88_-0.63^+0.46 2247.44_-1920.32^+5618.35 354.46
3 4.91_-1.26^+1.17 5.27_-0.31^+0.42 72.64_-43.25^+183.51 94.56 42 3.4_-1.66^+1.24 4.76_-0.68^+0.26 37.92_-34.75^+48.78 162.58
4 5.56_-0.14^+0.12 7.62_-0.26^+0.19 153587.51_-69823.95^+83415.83 447.52 43 1.79_-1.44^+2.05 4.97_-0.12^+0.15 51.28_-15.97^+27.3 181.88
6 2.03_-1.36^+3.6 4.3_-0.12^+0.16 2.71_-0.95^+2.14 202.2 44 3.5_-3.25^+5.38 5.75_-0.65^+3.76 13.72_-13.15^+143.64 106.18
7 4.61_-3.21^+1.28 4.99_-0.21^+0.27 79.11_-40.18^+108.67 133.5 46 5.18_-4.68^+3.21 4.52_-0.24^+0.23 8.05_-4.69^+10.95 84.61
11 3.58_-3.0^+1.56 5.13_-0.78^+0.38 128.96_-118.75^+263.48 147.52 47 3.55_-3.46^+2.31 4.85_-0.39^+0.33 27.55_-20.7^+54.24 251.64
12 0.62_-0.59^+4.92 4.94_-0.49^+0.73 32.92_-25.65^+299.74 186.09 48 1.21_-0.7^+3.31 4.89_-0.16^+0.21 49.63_-20.5^+43.09 116.47
13 3.72_-2.82^+2.9 4.43_-0.13^+0.24 8.87_-3.16^+12.72 153.09 49 5.35_-4.24^+0.58 6.06_-1.32^+0.29 4238.83_-4145.99^+5388.21 323.22
15 3.29_-1.51^+0.92 4.77_-0.1^+0.11 25.9_-7.18^+10.26 211.77 52 4.26_-3.63^+1.66 5.18_-0.7^+0.35 86.0_-77.34^+170.58 88.71
16 4.21_-3.33^+1.58 5.38_-0.55^+0.62 171.32_-138.47^+836.75 381.06 53 3.09_-2.57^+3.67 4.14_-0.11^+0.15 2.88_-2.03^+2.88 182.07
17 4.06_-3.22^+1.74 5.28_-1.28^+0.67 222.95_-220.46^+1317.64 191.51 55 6.39_-4.48^+13.33 3.02_-0.18^+0.56 0.01_-0.0^+0.22 237.84
18 2.31_-1.61^+1.97 4.3_-0.08^+0.1 4.88_-1.3^+2.05 184.46 56 1.16_-0.58^+2.05 4.88_-0.11^+0.13 39.84_-12.25^+19.94 100.72
22 2.33_-0.81^+1.71 4.78_-0.11^+0.12 32.7_-9.37^+15.57 246.98 58 2.62_-2.12^+3.55 4.75_-0.18^+0.18 12.65_-5.17^+9.8 120.17
23 1.42_-0.91^+3.0 4.63_-0.08^+0.15 22.68_-4.45^+14.53 141.07 59 2.73_-1.75^+1.32 4.92_-0.28^+0.19 39.77_-25.14^+32.81 174.82
24 7.85_-6.14^+2.26 5.1_-0.61^+0.49 50.47_-44.31^+228.14 92.16 60 6.46_-5.57^+2.34 4.62_-0.36^+0.27 17.6_-13.13^+31.08 133.21
25 0.52_-0.49^+4.75 5.63_-0.98^+0.88 507.7_-473.13^+5416.17 179.01 61 3.6_-2.55^+1.96 5.18_-0.27^+0.36 105.02_-64.19^+194.72 89.53
27 3.03_-2.35^+2.68 5.01_-0.86^+0.4 44.34_-42.44^+105.0 74.15 63 7.72_-1.89^+1.83 4.95_-0.2^+0.22 41.19_-21.52^+46.91 149.81
28 3.47_-2.98^+1.8 5.07_-0.4^+0.26 35.83_-25.69^+45.93 260.37 65 4.11_-3.38^+0.85 5.46_-1.05^+0.31 195.39_-188.42^+283.28 149.35
30 6.97_-6.44^+2.6 5.0_-0.65^+0.47 41.01_-38.35^+156.23 74.0 67 3.45_-3.08^+2.31 5.07_-1.24^+0.56 74.15_-73.54^+367.52 232.13
31 6.83_-1.34^+1.23 4.96_-0.17^+0.19 26.49_-12.01^+24.27 154.28 68 5.08_-3.98^+1.11 5.57_-0.99^+0.39 308.26_-294.66^+628.57 140.83
32 5.45_-0.64^+0.61 5.85_-0.19^+0.3 923.68_-374.9^+1295.0 118.21 71 3.13_-1.93^+1.29 5.02_-0.27^+0.14 44.66_-25.6^+25.38 177.06
34 3.86_-3.47^+2.1 5.42_-1.2^+0.61 215.5_-211.22^+1152.31 140.31 73 4.73_-3.86^+1.24 5.11_-1.42^+0.35 56.21_-56.01^+116.47 144.79
35 6.75_-0.82^+0.78 6.0_-0.3^+0.3 1203.02_-704.41^+1532.82 116.55 74 4.2_-3.59^+2.12 4.74_-0.23^+0.23 14.82_-8.57^+18.46 198.74
36 4.81_-0.7^+0.56 5.71_-0.17^+0.2 601.93_-243.03^+440.27 214.49 78 4.71_-4.25^+1.73 4.9_-0.28^+0.27 20.51_-12.72^+29.07 292.62
37 3.35_-2.78^+1.91 5.36_-1.14^+0.68 69.81_-68.38^+431.03 347.05 79 10.25_-2.53^+2.85 4.76_-0.25^+0.35 27.66_-17.07^+72.52 149.91
38 6.76_-5.61^+2.45 4.64_-0.24^+3.4 6.47_-6.2^+12.59 118.8 82 5.56_-4.93^+2.07 5.11_-0.42^+0.42 74.62_-55.68^+217.34 124.8
The temporal properties of SGR J1935+2154 fitted by joint Fermi/GBM and GECAM-B spectra (two entries per row)
ID UTC Duration (ms) Model^a Fluence^b | ID UTC Duration (ms) Model^a Fluence^b
1 2021-01-30T08:39:53.810 148.35 PL 7.39_-0.71^+0.33 21 2021-09-14T11:10:36.192 106.51 BBPL 56.76_-2.26^+0.19
2 2021-01-30T10:35:35.121 66.05 CPL 4.42_-0.96^+0.28 22 2021-09-14T14:15:42.885 80.48 BBPL 6.85_-3.53^+5.7
3 2021-02-16T22:20:39.573 348.2 BBPL 55.49_-0.76^+0.18 23 2022-01-04T04:32:11.147 49.78 PL 8.13_-0.74^+0.39
4 2021-07-07T00:33:31.633 145.05 BB2 111.28_-3.95^+0.08 24 2022-01-05T06:01:31.350 659.26 PL 4.83_-0.3^+0.17
5 2021-07-08T00:18:18.550 471.23 PL 3.4_-0.33^+0.08 25 2022-01-05T07:06:40.725 204.01 BBPL 9.87_-1.53^+0.0
6 2021-09-10T01:04:33.342 380.73 CPL 4.0_-0.3^+0.28 26 2022-01-06T02:36:14.044 258.51 BBPL 19.15_-1.14^+0.21
7 2021-09-10T05:35:55.483 86.1 PL 13.05_-0.97^+0.41 27 2022-01-09T07:39:10.637 246.44 BBPL 6.97_-1.36^+0.08
8 2021-09-11T16:50:03.802 59.19 BBPL 26.85_-3.39^+0.19 28 2022-01-11T08:58:35.308 207.01 OTTB 8.72_-0.26^+0.28
9 2021-09-11T17:04:29.740 294.74 CPL 3.77_-0.45^+0.29 29 2022-01-12T01:03:46.329 726.62 BBPL 9.54_-2.95^+0.1
10 2021-09-11T17:10:48.619 449.93 BBPL 7.77_-0.83^+0.09 30 2022-01-12T05:42:51.470 1377.56 PL 2.37_-0.19^+0.11
11 2021-09-11T18:54:36.032 127.98 BB2 12.75_-2.36^+0.11 31 2022-01-12T08:39:25.279 1037.37 CPL 12.51_-1.07^+1.02
12 2021-09-11T20:13:40.478 311.8 BB2 14.71_-0.9^+0.08 32 2022-01-12T17:57:07.731 997.41 CPL 75.5_-0.37^+0.4
13 2021-09-11T20:22:58.772 1464.47 CPL 4.89_-0.13^+0.15 33 2022-01-13T19:36:08.511 75.07 CPL 3.34_-0.41^+0.15
14 2021-09-11T22:51:41.562 104.28 BBPL 12.69_-3.39^+0.07 34 2022-01-14T19:42:08.833 1063.15 BBPL 10.25_-2.16^+0.0
15 2021-09-12T00:34:37.167 727.1 BBPL 18.67_-0.35^+0.14 35 2022-01-14T19:45:08.047 53.66 CPL 6.15_-0.18^+0.17
16 2021-09-12T00:45:49.367 70.43 BBPL 14.75_-3.88^+0.14 36 2022-01-15T09:26:39.856 73.74 CPL 8.43_-0.35^+0.77
17 2021-09-12T05:14:07.811 204.11 PL 9.59_-0.48^+0.31 37 2022-01-15T17:21:59.283 288.66 CPL 31.14_-0.58^+0.67
18 2021-09-12T16:26:08.045 60.02 PL 5.69_-0.66^+0.19 38 2022-01-16T10:48:37.617 324.44 BB2 61.98_-0.93^+0.4
19 2021-09-13T00:27:24.956 342.36 CPL 7.96_-0.23^+0.22 39 2022-01-17T01:39:37.185 121.36 BB2 12.76_-1.06^+0.18
20 2021-09-13T19:51:33.154 119.78 CPL 4.38_-0.45^+0.21
^a Best model assessed by the BIC criterion
^b Fluence in units of 10^-7 erg cm^-2, calculated using the best model within 8-200 keV
The spectral properties of the BB2 and BB models fitted by joint Fermi/GBM and GECAM-B spectra
ID | BB2: kT1 (keV) norm1 kT2 (keV) norm2 PGSTAT/DOF | BB: kT (keV) norm PGSTAT/DOF
1 4.56_-0.43^+0.43 73.79_-29.85^+29.85 24.18_-1.80^+1.80 0.11_-0.04^+0.04 269.13/282 12.78_-0.51^+0.51 1.82_-0.33^+0.33 383.13/284
2 ... ... ... ... ... 11.96_-0.81^+0.81 1.70_-0.47^+0.47 189.20/209
3 ... ... ... ... ... 8.21_-0.04^+0.04 112.77_-2.35^+2.35 872.00/533
4 8.58_-0.15^+0.15 165.50_-6.75^+6.75 17.97_-1.43^+1.43 1.56_-0.79^+0.79 630.02/373 ... ... ...
5 9.01_-0.66^+0.66 2.83_-0.69^+0.69 43.53_-9.73^+9.73 0.01_-0.00^+0.00 692.02/495 12.28_-0.49^+0.49 1.10_-0.18^+0.18 758.78/497
6 ... ... ... ... ... 9.59_-0.36^+0.36 3.77_-0.56^+0.56 630.60/415
7 ... ... ... ... ... 10.32_-0.32^+0.32 8.10_-1.02^+1.02 473.43/278
8 ... ... ... ... ... 8.22_-0.12^+0.12 53.29_-3.20^+3.20 506.71/402
9 9.46_-1.02^+1.02 3.55_-1.19^+1.19 35.00_-14.74^+14.74 0.01_-0.01^+0.03 275.35/195 10.69_-0.55^+0.55 2.54_-0.52^+0.52 283.36/197
10 3.26_-0.34^+0.34 236.41_-106.67^+106.67 11.41_-0.60^+0.60 2.70_-0.73^+0.73 414.09/334 8.55_-0.21^+0.21 11.61_-1.11^+1.11 495.33/336
11 8.79_-0.40^+0.40 16.31_-2.49^+2.49 39.15_-13.64^+14.56 0.01_-0.01^+0.02 323.42/219 9.59_-0.28^+0.28 12.63_-1.46^+1.46 353.13/221
12 5.58_-0.19^+0.19 106.42_-13.10^+13.10 18.60_-1.28^+1.28 0.36_-0.12^+0.12 632.85/411 ... ... ...
13 4.86_-0.25^+0.25 40.37_-6.91^+6.91 13.43_-0.71^+0.71 0.74_-0.20^+0.20 566.24/397 8.25_-0.13^+0.13 9.20_-0.53^+0.53 714.67/399
14 ... ... ... ... ... 8.48_-0.22^+0.22 18.58_-1.93^+1.93 575.93/332
16 ... ... ... ... ... 8.11_-0.19^+0.19 27.40_-2.60^+2.60 551.16/378
17 ... ... ... ... ... 8.59_-0.21^+0.21 13.30_-1.30^+1.30 646.36/393
18 4.64_-0.53^+0.53 53.51_-25.45^+25.45 20.31_-2.07^+2.07 0.16_-0.07^+0.07 335.74/317 9.43_-0.41^+0.41 5.22_-0.97^+0.97 383.05/319
19 ... ... ... ... ... 8.79_-0.16^+0.16 10.76_-0.79^+0.79 802.21/530
20 ... ... ... ... ... 10.02_-0.42^+0.42 3.57_-0.58^+0.58 403.49/231
21 ... ... ... ... ... 8.51_-0.07^+0.07 98.33_-3.31^+3.31 773.25/447
22 7.61_-0.64^+0.64 13.12_-3.66^+3.66 32.97_-10.25^+10.25 0.02_-0.02^+0.02 226.80/192 9.02_-0.44^+0.44 7.94_-1.56^+1.56 249.07/194
23 ... ... ... ... ... 9.49_-0.39^+0.39 7.50_-1.27^+1.27 332.45/252
24 4.89_-0.37^+0.37 32.35_-9.43^+9.43 23.40_-1.72^+1.72 0.09_-0.03^+0.03 538.28/350 10.64_-0.32^+0.32 2.71_-0.32^+0.32 684.85/352
25 ... ... ... ... ... 7.08_-0.15^+0.15 31.56_-2.64^+2.64 602.49/406
26 6.93_-0.19^+0.19 54.67_-5.07^+5.07 24.91_-1.63^+1.63 0.14_-0.04^+0.04 448.92/391 8.83_-0.12^+0.12 25.92_-1.39^+1.39 708.74/393
27 ... ... ... ... ... 8.37_-0.24^+0.24 11.29_-1.31^+1.31 617.19/401
28 ... ... ... ... ... 8.66_-0.18^+0.18 13.06_-1.11^+1.11 543.53/380
29 4.73_-0.20^+0.20 122.81_-21.07^+21.07 23.95_-1.78^+1.78 0.10_-0.03^+0.03 459.08/401 6.73_-0.17^+0.17 35.12_-3.53^+3.53 654.18/403
30 3.99_-0.32^+0.32 43.18_-15.10^+15.10 16.52_-1.61^+1.61 0.14_-0.06^+0.06 731.19/552 8.06_-0.32^+0.32 3.89_-0.59^+0.59 790.80/554
33 4.66_-0.67^+0.67 20.46_-11.85^+11.85 16.83_-1.45^+1.45 0.25_-0.10^+0.10 409.16/243 11.11_-0.45^+0.45 1.70_-0.27^+0.27 439.26/245
34 9.54_-0.40^+0.40 8.89_-1.28^+1.28 43.54_-10.92^+10.92 0.01_-0.01^+0.01 704.91/456 10.98_-0.28^+0.28 5.80_-0.60^+0.60 774.86/458
35 4.11_-0.28^+0.28 89.36_-22.48^+22.48 11.92_-0.61^+0.61 1.64_-0.44^+0.44 552.36/395 8.29_-0.15^+0.15 10.94_-0.74^+0.74 664.22/397
36 6.44_-0.58^+0.58 24.92_-7.96^+7.96 23.69_-2.84^+2.84 0.11_-0.06^+0.06 389.49/364 9.94_-0.35^+0.35 6.50_-0.96^+0.96 453.05/366
39 4.63_-0.18^+0.18 197.69_-30.15^+30.15 15.97_-1.23^+1.23 0.61_-0.21^+0.21 453.58/355 6.34_-0.13^+0.13 67.29_-5.40^+5.40 617.70/357
The spectral properties of the CPL and OTTB models fitted by joint Fermi/GBM and GECAM-B spectra
ID | CPL: Photon Index E_peak norm PGSTAT/DOF | OTTB: kT (keV) norm PGSTAT/DOF
1 ... ... ... ... 56.08_-5.21^+5.21 0.32_-0.03^+0.03 282.40/284
2 -0.64_-0.46^+0.46 59.91_-9.10^+9.10 0.29_-0.07^+0.07 179.50/208 65.30_-12.18^+12.18 0.19_-0.02^+0.02 180.12/209
3 0.48_-0.07^+0.07 30.97_-0.22^+0.22 22.01_-0.97^+0.97 658.60/532 ... ... ...
4 0.69_-0.07^+0.07 36.75_-0.26^+0.26 32.53_-1.27^+1.27 668.01/374 ... ... ...
5 -1.45_-0.21^+0.21 94.66_-27.75^+27.75 0.16_-0.02^+0.02 688.82/496 72.03_-7.40^+7.40 0.13_-0.01^+0.01 692.86/497
6 -1.27_-0.28^+0.28 38.44_-4.48^+4.48 0.34_-0.05^+0.05 582.14/414 39.49_-3.44^+3.44 0.23_-0.01^+0.01 583.06/415
7 ... ... ... ... 57.30_-4.44^+4.44 0.57_-0.03^+0.03 354.07/278
8 0.23_-0.19^+0.19 30.95_-0.70^+0.70 8.84_-1.05^+1.05 468.42/401 26.78_-0.88^+0.88 2.00_-0.05^+0.05 513.27/402
9 -0.06_-0.48^+0.48 45.35_-3.83^+3.83 0.47_-0.12^+0.12 277.68/196 49.91_-6.69^+6.69 0.20_-0.02^+0.02 281.49/197
10 -1.42_-0.17^+0.17 30.76_-3.39^+3.39 0.68_-0.07^+0.07 397.51/335 33.17_-1.84^+1.84 0.48_-0.02^+0.02 401.08/336
11 0.92_-0.34^+0.34 38.11_-1.27^+1.27 3.45_-0.66^+0.66 349.57/220 40.43_-2.88^+2.88 0.72_-0.04^+0.04 377.57/221
12 -1.14_-0.14^+0.14 24.78_-1.62^+1.62 2.27_-0.21^+0.21 662.29/412 25.77_-0.81^+0.81 1.14_-0.03^+0.03 663.03/413
13 -0.72_-0.14^+0.14 32.38_-1.02^+1.02 0.75_-0.07^+0.07 563.09/398 31.61_-1.10^+1.10 0.34_-0.01^+0.01 567.15/399
14 ... ... ... ... 35.94_-2.04^+2.04 0.75_-0.03^+0.03 460.15/332
16 -1.63_-0.17^+0.17 27.45_-5.65^+5.65 1.19_-0.11^+0.11 464.49/377 33.15_-1.73^+1.73 0.94_-0.04^+0.04 472.79/378
17 ... ... ... ... 35.22_-1.86^+1.86 0.56_-0.02^+0.02 507.31/393
18 ... ... ... ... 42.52_-4.23^+4.23 0.30_-0.02^+0.02 335.96/319
19 -1.26_-0.14^+0.14 34.20_-2.18^+2.18 0.75_-0.06^+0.06 651.48/529 35.42_-1.47^+1.47 0.49_-0.02^+0.02 654.13/530
20 -0.83_-0.32^+0.32 46.99_-4.56^+4.56 0.38_-0.07^+0.07 382.47/230 48.20_-5.09^+5.09 0.23_-0.02^+0.02 382.71/231
21 0.14_-0.10^+0.10 32.29_-0.42^+0.42 15.95_-0.99^+0.99 625.74/446 28.74_-0.56^+0.56 4.06_-0.06^+0.06 760.62/447
22 -1.14_-0.35^+0.35 41.08_-5.46^+5.46 0.57_-0.11^+0.11 230.92/193 40.82_-4.71^+4.71 0.38_-0.03^+0.03 231.04/194
23 ... ... ... ... 42.67_-4.11^+4.11 0.43_-0.03^+0.03 290.85/252
24 -1.68_-0.17^+0.17 71.95_-21.09^+21.09 0.24_-0.02^+0.02 536.81/351 55.15_-4.15^+4.15 0.22_-0.01^+0.01 550.50/352
25 ... ... ... ... 23.54_-1.04^+1.04 0.74_-0.02^+0.02 496.19/406
26 -1.27_-0.10^+0.10 32.00_-1.73^+1.73 1.87_-0.11^+0.11 438.68/392 33.76_-1.01^+1.01 1.20_-0.03^+0.03 443.37/393
27 -1.38_-0.22^+0.22 28.59_-4.04^+4.04 0.68_-0.09^+0.09 564.55/400 31.28_-1.95^+1.95 0.45_-0.02^+0.02 566.19/401
28 ... ... ... ... 32.22_-1.51^+1.51 0.58_-0.02^+0.02 424.39/380
29 ... ... ... ... 26.51_-1.25^+1.25 0.67_-0.03^+0.03 517.28/403
30 ... ... ... ... 32.64_-2.68^+2.68 0.14_-0.01^+0.01 732.26/554
33 -1.03_-0.27^+0.27 54.74_-6.32^+6.32 0.23_-0.03^+0.03 402.76/244 54.46_-5.65^+5.65 0.16_-0.01^+0.01 402.77/245
34 -0.76_-0.19^+0.19 48.46_-2.50^+2.50 0.86_-0.08^+0.08 726.17/457 49.18_-2.96^+2.96 0.52_-0.03^+0.03 727.44/458
35 -1.03_-0.16^+0.16 29.31_-1.57^+1.57 0.84_-0.08^+0.08 538.61/396 29.47_-1.14^+1.14 0.43_-0.01^+0.01 538.64/397
36 -1.50_-0.23^+0.23 41.52_-6.50^+6.50 0.58_-0.07^+0.07 385.96/365 43.79_-3.53^+3.53 0.44_-0.03^+0.03 389.95/366
39 -1.38_-0.18^+0.18 18.44_-2.68^+2.68 2.01_-0.26^+0.26 495.85/356 21.74_-0.86^+0.86 1.06_-0.03^+0.03 499.28/357
The spectral properties of the BBPL and PL models fitted by joint Fermi/GBM and GECAM-B spectra
ID | BBPL: kT (keV) norm1 Photon Index norm2 PGSTAT/DOF | PL: Photon Index norm PGSTAT/DOF
1 ... ... ... ... ... -2.02_-0.07^+0.07 154.48_-39.80^+39.80 265.27/284
2 10.38_-1.55^+1.55 1.79_-1.36^+1.36 -1.44_-0.42^+0.42 4.83_-4.83^+9.92 176.92/207 -1.75_-0.12^+0.12 35.39_-15.60^+15.60 187.53/209
3 8.29_-0.07^+0.07 88.96_-3.61^+3.61 -2.50_-0.06^+0.06 1295.52_-284.16^+284.16 617.96/531 ... ... ...
4 9.42_-0.08^+0.08 114.90_-4.97^+4.97 -2.15_-0.08^+0.08 575.67_-200.05^+200.05 637.15/373 ... ... ...
5 10.87_-2.14^+2.14 0.52_-0.44^+0.51 -1.72_-0.13^+0.13 17.29_-9.86^+9.86 687.02/495 -1.80_-0.06^+0.06 31.04_-7.34^+7.34 693.76/497
6 12.39_-1.79^+1.79 0.56_-0.34^+0.34 -2.33_-0.20^+0.20 164.99_-96.26^+96.26 583.41/413 -2.13_-0.07^+0.07 128.12_-30.70^+30.70 597.10/415
7 6.98_-1.11^+1.11 10.01_-8.07^+8.07 -1.73_-0.13^+0.13 71.49_-43.44^+43.44 328.21/276 -1.91_-0.05^+0.05 179.16_-36.46^+36.46 333.63/278
8 8.03_-0.20^+0.20 45.79_-5.92^+5.92 -2.22_-0.15^+0.15 299.32_-190.87^+190.87 453.36/400 -2.42_-0.04^+0.04 2477.11_-311.32^+311.32 790.57/402
9 10.25_-1.05^+1.05 2.11_-1.06^+1.06 -1.68_-0.38^+0.38 8.35_-8.24^+14.39 275.30/195 -1.90_-0.09^+0.09 53.59_-18.30^+18.30 299.78/197
10 9.32_-0.83^+0.83 3.23_-1.29^+1.29 -2.22_-0.09^+0.09 224.58_-72.17^+72.17 385.43/334 -2.18_-0.05^+0.05 301.02_-48.87^+48.87 418.32/336
11 8.95_-0.35^+0.35 15.03_-2.06^+2.06 -0.90_-0.60^+0.60 0.39_-0.39^+1.95 324.93/219 ... ... ...
12 6.81_-0.27^+0.27 33.58_-6.69^+6.69 -2.21_-0.09^+0.09 338.51_-122.80^+122.80 636.36/411 -2.44_-0.03^+0.03 1478.93_-167.75^+167.75 780.53/413
13 8.52_-0.33^+0.33 4.53_-0.78^+0.78 -2.19_-0.08^+0.08 101.71_-28.34^+28.34 570.74/397 -2.20_-0.03^+0.03 209.94_-21.62^+21.62 740.15/399
14 6.98_-0.77^+0.77 13.60_-7.72^+7.72 -2.01_-0.11^+0.11 185.74_-88.46^+88.46 434.02/330 -2.17_-0.05^+0.05 472.39_-82.50^+82.50 449.37/332
16 6.73_-0.36^+0.36 33.97_-8.78^+8.78 -1.77_-0.16^+0.16 62.98_-45.06^+48.60 426.78/376 -2.22_-0.05^+0.05 659.51_-109.04^+109.04 480.08/378
17 8.65_-1.27^+1.27 3.15_-2.20^+2.20 -2.19_-0.08^+0.08 293.90_-89.31^+89.31 489.40/391 -2.22_-0.05^+0.05 407.21_-66.17^+66.17 500.60/393
18 ... ... ... ... ... -2.08_-0.08^+0.08 148.66_-42.63^+42.63 329.82/319
19 8.41_-0.49^+0.49 5.83_-1.67^+1.67 -2.08_-0.08^+0.08 130.36_-43.74^+43.74 645.94/528 -2.19_-0.04^+0.04 323.88_-40.52^+40.52 712.19/530
20 9.35_-0.96^+0.96 2.61_-1.33^+1.33 -1.72_-0.21^+0.21 15.87_-14.28^+15.57 377.10/229 -1.92_-0.07^+0.07 65.39_-17.20^+17.20 397.75/231
21 8.17_-0.12^+0.12 90.05_-6.64^+6.64 -2.08_-0.08^+0.08 382.52_-143.74^+143.74 558.22/445 ... ... ...
22 8.20_-0.68^+0.68 6.26_-3.96^+3.96 -1.78_-0.39^+0.39 32.44_-16.75^+16.75 225.27/192 -2.02_-0.09^+0.09 152.08_-48.82^+48.82 238.89/194
23 9.22_-1.75^+1.75 2.69_-2.24^+2.44 -2.01_-0.14^+0.14 119.62_-68.43^+68.43 283.12/250 -2.07_-0.08^+0.08 201.61_-54.45^+54.45 290.12/252
24 ... ... ... ... ... -1.91_-0.05^+0.05 68.01_-12.55^+12.55 539.17/352
25 6.96_-0.52^+0.52 14.50_-5.04^+5.04 -2.40_-0.09^+0.09 501.04_-173.74^+173.74 481.19/404 -2.47_-0.05^+0.05 1037.99_-155.51^+155.51 528.64/406
26 8.32_-0.33^+0.33 15.88_-3.19^+3.19 -2.10_-0.06^+0.06 317.81_-85.33^+85.33 396.16/391 -2.23_-0.03^+0.03 895.69_-90.51^+90.51 537.19/393
27 8.18_-0.72^+0.72 5.73_-2.41^+2.41 -2.17_-0.12^+0.12 154.55_-75.46^+75.46 551.26/399 -2.27_-0.06^+0.06 366.43_-70.64^+70.64 582.13/401
28 8.86_-0.64^+0.64 5.10_-1.71^+1.71 -2.25_-0.09^+0.09 274.62_-91.01^+91.01 423.30/378 -2.26_-0.04^+0.04 462.63_-69.30^+69.30 471.08/380
29 4.76_-0.36^+0.36 67.01_-26.92^+26.92 -2.02_-0.14^+0.14 129.89_-81.07^+81.07 448.67/401 -2.39_-0.05^+0.05 772.14_-122.03^+122.03 472.84/403
30 8.64_-2.61^+2.61 0.55_-0.28^+0.72 -2.21_-0.11^+0.11 82.54_-32.84^+32.84 718.82/552 -2.21_-0.07^+0.07 98.87_-22.39^+22.39 721.59/554
33 11.82_-1.80^+1.80 0.56_-0.37^+0.37 -1.90_-0.15^+0.15 29.34_-16.77^+16.77 400.94/243 -1.88_-0.07^+0.07 42.49_-10.61^+10.61 415.31/245
34 10.23_-0.59^+0.59 4.66_-1.34^+1.34 -1.78_-0.14^+0.14 41.16_-25.83^+25.83 701.17/456 -1.99_-0.05^+0.05 197.19_-34.56^+34.56 779.25/458
35 9.02_-0.44^+0.44 3.96_-0.86^+0.86 -2.42_-0.09^+0.09 308.17_-91.42^+91.42 533.77/395 -2.33_-0.04^+0.04 421.66_-50.52^+50.52 634.07/397
36 9.22_-1.46^+1.46 2.87_-2.27^+2.27 -2.02_-0.13^+0.13 129.09_-70.94^+70.94 386.35/364 -2.09_-0.07^+0.07 235.88_-56.57^+56.57 395.73/366
39 5.20_-0.25^+0.25 87.31_-22.63^+22.63 -2.08_-0.14^+0.14 193.78_-115.66^+117.40 466.52/355 -2.49_-0.04^+0.04 1587.04_-223.74^+223.74 544.37/357
The spectral properties of the MBB model fitted by joint Fermi/GBM and GECAM-B spectra (two entries per row)
ID kT_min (keV) α norm PGSTAT | ID kT_min (keV) α norm PGSTAT
1 2.18_-0.39^+0.37 5.11_-0.08^+0.09 30.57_-6.04^+7.95 330.09 19 3.8_-0.28^+0.32 5.66_-0.09^+0.11 153.18_-34.48^+53.1 652.19
2 5.25_-1.2^+0.77 5.65_-0.35^+0.49 112.06_-73.08^+319.74 220.18 21 5.58_-0.14^+0.14 7.09_-0.16^+0.15 41791.8_-13756.9^+19245.89 647.98
3 5.81_-0.06^+0.05 8.01_-0.1^+0.1 337830.17_-70618.15^+90284.02 971.98 22 4.53_-0.61^+0.74 5.8_-0.26^+0.36 205.88_-103.49^+332.75 252.16
4 7.07_-0.08^+0.11 8.27_-0.15^+0.18 2131152.87_-676165.4^+1197237.36 820.96 23 3.08_-1.49^+0.88 5.29_-0.18^+0.23 56.62_-24.39^+48.29 302.22
5 3.92_-0.99^+0.84 5.03_-0.14^+0.18 13.35_-4.9^+9.75 682.3 25 3.27_-0.22^+0.26 6.05_-0.12^+0.14 380.61_-90.02^+156.5 521.31
6 3.74_-1.27^+0.78 5.59_-0.35^+0.25 67.25_-43.04^+68.06 652.91 26 3.71_-0.23^+0.25 5.69_-0.09^+0.09 368.3_-76.68^+101.19 447.1
7 2.77_-0.6^+0.5 5.06_-0.08^+0.09 48.53_-10.56^+14.75 335.6 27 4.21_-0.38^+0.38 5.99_-0.17^+0.2 398.86_-142.96^+251.89 480.59
8 5.51_-0.23^+0.26 7.29_-0.21^+0.32 29894.85_-12005.7^+36721.16 462.24 28 2.95_-0.31^+0.34 5.82_-0.12^+0.16 211.84_-52.12^+102.8 405.21
9 6.16_-0.74^+0.77 6.22_-0.49^+0.27 476.62_-354.93^+557.73 272.25 29 1.55_-0.99^+0.63 5.11_-0.06^+0.06 10.4_-1.45^+1.9 748.75
10 2.69_-0.46^+0.38 5.4_-0.09^+0.11 65.44_-14.5^+20.66 429.77 30 2.31_-0.68^+0.48 5.6_-0.14^+0.17 32.99_-9.6^+16.45 674.9
11 6.25_-0.44^+0.34 6.84_-0.34^+0.38 6686.16_-3948.16^+10824.81 333.79 32 2.47_-1.41^+1.35 4.81_-0.08^+0.19 6.95_-1.39^+6.23 537.09
12 3.8_-0.2^+0.18 6.17_-0.1^+0.11 917.06_-209.9^+264.05 691.16 33 3.3_-1.18^+0.77 5.49_-0.31^+0.24 90.25_-52.38^+86.17 429.73
13 4.3_-0.25^+0.21 6.08_-0.13^+0.12 287.45_-82.13^+95.79 632.43 34 4.19_-0.3^+0.28 5.85_-0.12^+0.12 220.34_-61.9^+86.59 959.09
14 3.22_-0.37^+0.36 5.47_-0.09^+0.13 140.07_-31.17^+54.61 458.03 35 5.01_-0.54^+0.52 6.02_-0.27^+0.24 455.85_-241.99^+388.16 407.67
15 6.27_-0.14^+0.14 7.43_-0.17^+0.15 38709.37_-13692.74^+18248.35 1701.79 36 3.8_-1.3^+0.97 5.12_-0.21^+0.2 44.86_-21.94^+38.56 316.5
16 3.69_-0.26^+0.32 5.72_-0.11^+0.14 321.48_-81.59^+139.67 466.78 37 6.3_-0.12^+0.11 6.91_-0.09^+0.1 48734.36_-10535.17^+15056.46 989.18
17 3.0_-0.33^+0.28 5.51_-0.09^+0.09 110.69_-22.35^+28.93 535.14 38 3.51_-0.25^+0.25 5.72_-0.09^+0.1 335.23_-70.18^+101.77 499.1
18 2.01_-1.15^+0.77 5.15_-0.1^+0.13 24.45_-5.33^+9.97 337.19 39 1.9_-1.16^+1.04 5.2_-0.11^+0.16 24.4_-5.99^+13.97 448.06
|
http://arxiv.org/abs/2307.02439v1
|
20230705170940
|
Bayesian evidence for two slow-wave damping models in hot coronal loops
|
[
"I. Arregui",
"D. Y. Kolotkov",
"V. M. Nakariakov"
] |
astro-ph.SR
|
[
"astro-ph.SR"
] |
Instituto de Astrofísica de Canarias, E-38205 La Laguna, Tenerife, Spain
iarregui@iac.es
Departamento de Astrofísica, Universidad de La Laguna, E-38206 La Laguna, Tenerife, Spain
Centre for Fusion, Space and Astrophysics, Physics Department, University of Warwick, Coventry CV4 7AL, United Kingdom
Engineering Research Institute Ventspils International Radio Astronomy Centre (VIRAC) of Ventspils University of Applied Sciences, Inzenieru iela 101, Ventspils, LV-3601, Latvia
Centro de Investigacion en Astronomía, Universidad Bernardo O'Higgins, Avenida Viel 1497, Santiago, Chile
We compute the evidence in favour of two models, one based on field-aligned thermal conduction alone and another that includes thermal misbalance as well, in explaining the damping of slow magneto-acoustic waves in hot coronal loops. Our analysis is based on the computation of the marginal likelihood and the Bayes factor for the two damping models. We quantify their merit in explaining the apparent relationship between slow mode periods and damping times, measured with SOHO/SUMER in a set of hot coronal loops. The results indicate evidence in favour of the model with thermal misbalance in the majority of the sample, with a small population of loops for which thermal conduction alone is more plausible. The apparent possibility of two different regimes of slow-wave damping, if due to differences between the loops of host active regions and/or the photospheric dynamics, may help with revealing the coronal heating mechanism.
Bayesian evidence for two slow-wave damping models in hot coronal loops
I. Arregui
1,2
D. Y. Kolotkov3,4
V. M. Nakariakov3,5
Received ; accepted
=================================================================================================
§ INTRODUCTION
Standing and propagating slow magnetohydrodynamic (MHD) waves in the solar corona have been extensively studied over the past two decades <cit.>. A phenomenon that has attracted particular attention is the appearance of strongly damped Doppler-shift oscillations of ultraviolet emission lines in hot coronal loops (> 6 MK), first reported by <cit.> and <cit.> in observations with the SUMER spectrometer onboard SOHO. The oscillations show up in hot lines, e.g. FeXIX and FeXXI, and are related to the hot plasma component of active region loops. Hot loops are typically observed in the X-ray band and in hot ultraviolet and extreme-ultraviolet lines <cit.>. They correspond to those already identified in early rocket missions <cit.>. Oscillations of a similar nature were also detected with the Bragg Crystal Spectrometer on Yohkoh by <cit.>. A quarter-period phase shift between intensity and Doppler-shift perturbations allowed for the interpretation of these observations as standing slow-mode magneto-acoustic waves. The oscillations are frequently associated with small (or micro-) flares that have an occurrence rate of 3 to 14 per hour, and lifetimes that range from 5 to 150 min <cit.>. Many events belong to recurring episodes, with a rate of 2-3 times within a couple of hours <cit.>. The increase in the number of detected events has made it possible to characterise their oscillatory properties statistically, with periods and damping times found in the ranges [10, 30] and [5, 35] min, respectively <cit.>. These oscillations have a proven seismological potential, already demonstrated in applications to the inference of the magnetic field strength <cit.> and the properties of the coronal plasma heating/cooling function <cit.>, for example.
Frequently invoked mechanisms to explain the observed rapid damping of coronal slow-mode waves include thermal conduction <cit.>, compressive viscosity <cit.>, optically thin radiation <cit.>, nonlinear effects <cit.> and their multiple combinations. Different mechanisms seem to be favoured depending on the damping regime (weak/strong) and on the temperature and density ranges. A comprehensive overview of different physical scenarios for the damping of the fundamental mode of slow magneto-acoustic oscillations in coronal loops with different lengths, temperatures, and densities, under different mechanisms, can be found in <cit.>. A table with a summary of proposed damping mechanisms and a discussion, based on the analysis of the scaling of the damping time with the wave period, is presented by <cit.>. On the other hand, the almost linear scaling between the slow-wave damping times and oscillation periods, confidently observed in the solar <cit.> and stellar <cit.> coronae up to periods of 30 min and even longer, cannot be explained by any of those damping mechanisms.
A mechanism that has recently gathered increasing interest concerning the damping of slow MHD waves is the process of thermal misbalance, whereby compressive waves and a heated coronal plasma can exchange energy in a continuous interplay between wave-perturbed cooling and heating processes <cit.>. Such a wave-induced thermal misbalance can enhance or suppress the damping of slow waves, depending on the parameter values of the heating/cooling model <cit.>. As shown by <cit.>, in the regime of enhanced damping, the theoretically obtained damping rates are comparable to those estimated from SUMER observations of hot coronal loops. Furthermore, <cit.> and <cit.> considered a model with field-aligned thermal conduction and wave-induced thermal misbalance to address the scaling of the damping time with period of standing slow waves in coronal loops observed in Doppler-shift with SUMER. In particular, <cit.> showed that accounting for the effect of thermal misbalance makes the relationship between the slow-wave damping time and period of a non-power-law form, unlike the damping mechanisms described above.
In this paper, we quantify the evidence in favour of each of the two damping models considered by <cit.>: one with thermal conduction alone and the other with the addition of thermal misbalance. We compare the plausibility of the newly proposed thermal misbalance mechanism against thermal conduction, which is used as a reference model. The aim is to assess which mechanism better explains, in full or in part, the damping properties of slow magneto-acoustic waves in hot coronal loops in SUMER observations.
§ DAMPING MODELS
The theoretical prediction for the damping time of slow magnetoacoustic waves due to field-aligned thermal conduction in a weakly dissipative limit can be expressed as <cit.>
τ^TC_D = (2/d) P^2,    d = 4π^2 (γ-1) k_∥ / (γ ρ_0 C_v c_s^2).
Here, P is the wave period and d is the thermal conduction parameter <cit.>. We can fix the following set of physical parameters appearing in Eq. (<ref>) using standard coronal values: the adiabatic index γ = 5/3, the field-aligned thermal conduction coefficient k_∥ = 10^-11 T_0^{5/2} [W m^-1 K^-1] (with T_0 = 6.3 MK a typical SUMER oscillation detection temperature), the sound speed c_s = √(γ k_B T_0 / m) (with k_B the Boltzmann constant and m = 0.6 × 1.67 × 10^-27 kg the mean particle mass), and the specific heat capacity C_v = k_B/[(γ-1)m]. This results in a model, M_TC, with the plasma density ρ_0 as the only unknown, which we gather in the parameter vector θ_TC = {ρ_0}. For plasma densities in the range ρ_0 ∈ [0.5, 10] × 10^-12 kg m^-3, characteristic of hot coronal loops <cit.>, values of d in the range d ∼ [8, 176] min are obtained, which leads to model predictions for the damping by thermal conduction in the range τ^TC_D ∼ [1.1, 360] min for periods P between 10 and 40 min, as typically detected in observations.
An alternative theoretical prediction for the damping time of slow magnetoacoustic modes due to a combined effect of field-aligned thermal conduction and wave-induced thermal misbalance is
<cit.>
τ^TM_D = 2 τ_M P^2 / (d τ_M + P^2),
with τ_M being the thermal misbalance time determined by the properties of the coronal heating/cooling function. Equations (<ref>) and (<ref>) were derived under the assumption of weak dissipation, in which the ratios of the oscillation period to the thermal conduction and thermal misbalance times are small. In Eq. (<ref>), τ^TM_D depends on two unknowns that we gather in the parameter vector θ_TM = {ρ_0, τ_M}. For this model, plasma densities in the range ρ_0 ∈ [0.5, 10] × 10^-12 kg m^-3 (as considered before), together with values of τ_M in the range [1, 30] min <cit.>, lead to damping times in the range τ^TM_D ∼ [0.8, 47] min for observed periods P between 10 and 40 min.
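As a numerical illustration, the two predictions can be evaluated directly. The sketch below implements Eqs. (<ref>) and (<ref>) with the coronal values fixed above; variable names and the example parameter values are illustrative assumptions, not from any published code.

import numpy as np

# Fixed coronal parameters from the text (SI units).
K_B = 1.380649e-23            # Boltzmann constant [J/K]
M = 0.6 * 1.67e-27            # mean particle mass [kg]
GAMMA = 5.0 / 3.0             # adiabatic index
T0 = 6.3e6                    # SUMER detection temperature [K]
KAPPA = 1e-11 * T0**2.5       # field-aligned conduction coefficient [W/(m K)]
CS = np.sqrt(GAMMA * K_B * T0 / M)   # sound speed [m/s]
CV = K_B / ((GAMMA - 1.0) * M)       # specific heat capacity [J/(kg K)]

def d_parameter(rho0):
    """Thermal conduction parameter d [s] for a density rho0 [kg/m^3]."""
    return 4.0 * np.pi**2 * (GAMMA - 1.0) * KAPPA / (GAMMA * rho0 * CV * CS**2)

def tau_tc(P, rho0):
    """Damping time [s] by thermal conduction alone, Eq. (1)."""
    return 2.0 * P**2 / d_parameter(rho0)

def tau_tm(P, rho0, tau_m):
    """Damping time [s] with wave-induced thermal misbalance, Eq. (2)."""
    d = d_parameter(rho0)
    return 2.0 * tau_m * P**2 / (d * tau_m + P**2)

# Example: P = 20 min, rho0 = 4e-12 kg/m^3, tau_M = 14.2 min (prior means).
P, rho0, tau_m = 20 * 60.0, 4e-12, 14.2 * 60.0
print(tau_tc(P, rho0) / 60.0)          # ~36 min, conduction alone
print(tau_tm(P, rho0, tau_m) / 60.0)   # ~16 min, misbalance shortens it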
§ EVIDENCE ANALYSIS AND RESULTS
Our analysis makes use of observations of standing slow waves in coronal loops observed in Doppler-shift with SUMER. The whole SUMER spectral window contains a number of lines formed in the temperature range of 0.01–10 MK. They include the transition region line SIII/SiIII at 1113Å (0.03–0.06 MK), the coronal lines CaX at 557Å (0.7 MK) and CaXIII at 1133Å (2 MK), as well as the flare lines FeXIX at 1118Å (6.3 MK) and FeXX at 567Å (8 MK) <cit.>. We restrict our analysis to a selection of events corresponding to detections at 6.3 MK, from those summarised in <cit.> and <cit.>. We deliberately use data obtained with the same instrument and observed in the same emission spectral line to exclude the temperature of the emitting plasma as a free parameter.
These selected SUMER observations were recently employed by <cit.> to validate Eq. (<ref>) for the damping by thermal misbalance. In their analysis, <cit.> fix the plasma temperature to that of the SUMER observational channel in which most standing slow-wave events were observed (6.3 MK). Treating the plasma density and the characteristic time-scale of thermal misbalance as free parameters, they find a reasonable agreement between theory and observations. However, a small number of data-points fall outside the region covered by the posterior predictive distribution of the samples obtained by Bayesian Markov Chain Monte Carlo (MCMC) sampling (see their Figure 1), indicating that the effect of thermal misbalance is apparently less important for the slow-wave damping in those events.
To rigorously quantify the evidence of the two damping models given by Eqs. (<ref>) and (<ref>) in explaining the set of observations, we follow a procedure similar to the one employed by <cit.> for the damping of transverse coronal loop oscillations, based on the application of Bayesian model comparison <cit.>. We first construct a two-dimensional grid over the synthetic data space 𝒟 = (P, τ_D), which covers the ranges of the oscillation period and damping time in observations. The magnitude of the marginal likelihood for the two damping models over that space gives a measure of how well a particular period-damping time combination is predicted by each model. For the model with damping by thermal conduction, M_TC, with the parameter vector θ_TC, the marginal likelihood is computed as
p(𝒟|M_TC) = ∫ dθ_TC p(𝒟|θ_TC, M_TC) p(θ_TC|M_TC),
and likewise for the model with damping by thermal misbalance, M_TM, with the parameter vector θ_TM.
The first factor in the integrand is the likelihood function. Under the assumption of a Gaussian likelihood function and adopting an error model for the damping time alone
p(𝒟|θ_TC, M_TC) = [1/(√(2π) σ)] exp{-[τ_D - τ^TC_D(θ_TC)]^2 / (2σ^2)},
and correspondingly for the thermal misbalance model. In this expression, σ is the uncertainty in the damping time τ_D. In the absence of specific values from the literature, this is fixed to the chosen value σ = 0.1 τ_D. Larger uncertainty values lead to lower levels of evidence. Possible data realisations from the two considered models are generated using the theoretical predictions given by Eqs. (<ref>) and (<ref>) for models M_TC and M_TM, respectively.
The second factor in the integrand of Eq. (<ref>) is the prior probability density of the model parameters. Based on the inference results obtained by <cit.>, we choose the following Gaussian priors for the two unknown parameters: 𝒢(ρ_0 [10^-12 kg m^-3]; 4, 2) and 𝒢(τ_M [min]; 14.2, 5.0), with the numerical values indicating the mean and the standard deviation, respectively.
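For concreteness, a minimal sketch of this computation is given below: a brute-force quadrature of Eq. (<ref>) with the Gaussian likelihood of Eq. (<ref>) and the priors quoted above, reusing the tau_tc and tau_tm functions (and numpy import) from the earlier sketch. Grid limits and resolutions are illustrative assumptions.

from scipy import stats

def marginal_likelihood_tc(P, tau_d, sigma_frac=0.1, n=400):
    """p(D|M_TC): integrate the Gaussian likelihood over the rho0 prior."""
    rho = np.linspace(0.1e-12, 12e-12, n)
    prior = stats.norm.pdf(rho, loc=4e-12, scale=2e-12)   # G(rho0; 4, 2)
    like = stats.norm.pdf(tau_d, loc=tau_tc(P, rho), scale=sigma_frac * tau_d)
    return np.trapz(like * prior, rho)

def marginal_likelihood_tm(P, tau_d, sigma_frac=0.1, n=200):
    """p(D|M_TM): integrate over the (rho0, tau_M) prior plane."""
    rho = np.linspace(0.1e-12, 12e-12, n)
    tm = np.linspace(1.0 * 60, 40.0 * 60, n)              # tau_M grid [s]
    R, T = np.meshgrid(rho, tm)                           # shape (n, n)
    prior = (stats.norm.pdf(R, loc=4e-12, scale=2e-12)
             * stats.norm.pdf(T, loc=14.2 * 60, scale=5.0 * 60))
    like = stats.norm.pdf(tau_d, loc=tau_tm(P, R, T), scale=sigma_frac * tau_d)
    return np.trapz(np.trapz(like * prior, rho, axis=1), tm)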
In Figure <ref> (top), we show the resulting distribution of the marginal likelihood for the two compared damping models over the grid of synthetic data in the damping time and oscillation period. Although the two marginal likelihood distributions overlap, especially in the area with periods and damping times below 20 min, the contours associated with the different levels of evidence for each model and their shapes can be clearly distinguished. Marginal likelihood contours for thermal conduction alone bend upwards towards regions with weaker damping in the lower period range. Marginal likelihood contours for the model with thermal misbalance extend towards longer period values at comparatively lower damping time ranges.
They nicely cover the grey-shaded area in Figure 1 by <cit.>, which represents the posterior predictive distribution obtained from the MCMC samples in their analysis. The figure also shows the location of the SUMER data plotted over the contours. Most of the data-points (42 out of 49) fall over areas where the marginal likelihood for the model with thermal misbalance is larger than the marginal likelihood for the model with thermal conduction alone, and the evidence thus favours the former mechanism. For the remaining seven cases, the opposite happens and the marginal likelihood for the thermal conduction model is larger.
To quantify the relative evidence between the two compared models, we assume that the two models are equally probable a priori, p(M_TC) = p(M_TM), and make use of the Bayes factor, given by
B_TCTM = 2 log[p(𝒟|M_TC) / p(𝒟|M_TM)] = -B_TMTC.
The assessment in terms of levels of evidence is based on the empirical table by <cit.>, applied to the values thus obtained. The evidence in favour of model M_TC over model M_TM is deemed inconclusive for values of B_TCTM from 0 to 2; positive for values from 2 to 6; strong for values from 6 to 10; and very strong for values above 10. A similar tabulation applies to B_TMTC.
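In code, the Bayes factor of Eq. (<ref>) and the corresponding evidence labels reduce to a few lines, again reusing the marginal-likelihood sketches above (purely illustrative):

def bayes_factor_tc_tm(P, tau_d):
    """B_TCTM = 2 log[p(D|M_TC) / p(D|M_TM)]; negative favours misbalance."""
    return 2.0 * np.log(marginal_likelihood_tc(P, tau_d)
                        / marginal_likelihood_tm(P, tau_d))

def evidence_level(b):
    """Empirical scale of Kass & Raftery (1995) applied to |B|."""
    b = abs(b)
    if b < 2.0:
        return "inconclusive"
    if b < 6.0:
        return "positive"
    if b < 10.0:
        return "strong"
    return "very strong"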
Figure <ref> (bottom) shows the corresponding Bayes factor distributions. By definition, the regions where B_TCTM and B_TMTC reach the different levels of evidence are mutually exclusive and cannot overlap.
They are clearly separated by the solid line that connects the points where p(𝒟|M_TC) = p(𝒟|M_TM) and the Bayes factors thus vanish, B_TCTM = B_TMTC = 0. In the surrounding white area, the Bayes factors are not large enough to claim positive evidence in favour of either of the two damping models. Moving towards the top-left corner, the evidence favours thermal conduction with increasing levels of evidence. Moving towards the bottom-right corner, the evidence supports thermal misbalance with increasing levels of evidence. We can calculate numerical values of the Bayes factor for each of the 49 SUMER loop oscillation events. Differently coloured circles are used to represent different levels of evidence. The majority of the observed data points (32 out of 49) fall into the region where the evidence supports a model with wave-induced thermal misbalance in comparison to a model with thermal conduction alone. For several data-points (13 out of 49), the evidence is inconclusive (edge-coloured circles). In four cases, the evidence is positive (even strong in one of them) in favour of damping by thermal conduction alone.
§ CONCLUSION
We considered two models for the damping of standing slow magneto-acoustic waves in hot coronal loops, and computed the evidence in favour of each of them in explaining a set of SUMER observations with measured oscillation periods and damping times. We find a clear separation in the period-damping time data space between the regions with evidence in favour of each of the two models. The majority of the observed data points (∼ 65%) fall into the region where the evidence supports a model which links the oscillation damping with wave-induced thermal misbalance added to thermal conduction. Some data from the sample (∼ 8%) fall into the region where the evidence supports a damping model based on thermal conduction alone. These few cases may be regarded as a separate population of hot coronal loops for which particular physical or wave characteristics may make thermal conduction more plausible or dominant.
The apparent possibility of two different regimes of slow oscillation damping could be attributed to some variation of the coronal heating function among the events appearing in those different regimes. In the model which links the damping with thermal misbalance, we assumed that the radiative losses and the heating function are both uniquely determined by the thermodynamic parameters of the plasma, i.e., the density and temperature. However, the yet unknown heating function could also depend on some other parameters which are not accounted for by the model, for example, the energy supply flux, which may vary in time. This would make the parameter τ_M different in the events which belong to the two different populations (in particular, τ_M → ∞ for the population better described by conductive damping alone). The identification of the differences between the loops and/or the parameters of host active regions and/or the photospheric dynamics in the events which belong to different populations may shed light on the differences in the heating function, and help with revealing the coronal heating mechanism.
This research was conducted while I.A. was a visitor at the Centre for Fusion, Space and Astrophysics, Department of Physics, University of Warwick. It is a pleasure for I.A. to acknowledge the financial support, the warm hospitality, and the friendly atmosphere during his visit. I.A. is supported by project PID2021-127487NB-I00 from Ministerio de Ciencia, Innovación y Universidades and FEDER funds. D.Y.K. and V.M.N. acknowledge support from the STFC consolidated grant ST/T000252/1 and the Latvian Council of Science Project Multi-Wavelength Study of Quasi-Periodic Pulsations in Solar and Stellar Flares No. lzp-2022/1-0017.
SUMER is part of SOHO, the Solar and Heliospheric Observatory, of ESA and NASA.
The SUMER project is financially supported by the Deutsches Zentrum für Luft- und Raumfahrt (DLR), the Centre National d'Etudes Spatiales (CNES), the National Aeronautics and Space Administration (NASA), and the European Space Agency's (ESA) PRODEX program (Swiss contribution).
[anfinogentov22] Anfinogentov, S. A., Antolin, P., Inglis, A. R., et al. 2022, , 218, 9
[arregui21] Arregui, I. 2021, The Astrophysical Journal Letters, 915, L25
[arregui22] Arregui, I. 2022, Frontiers in Astronomy and Space Sciences, 9, 826947
[2016ApJ...830..110C] Cho, I. H., Cho, K. S., Nakariakov, V. M., Kim, S., & Kumar, P. 2016, , 830, 110
[demoortel05] De Moortel, I. 2005, Royal Society of London Philosophical Transactions Series A, 363, 2743
[demoortel03] De Moortel, I. & Hood, A. W. 2003, , 408, 755
[duckenfield21] Duckenfield, T. J., Kolotkov, D. Y., & Nakariakov, V. M. 2021, , 646, A155
[kass95] Kass, R. E. & Raftery, A. E. 1995, JASA, 90, 773
[kliem02] Kliem, B., Dammasch, I. E., Curdt, W., & Wilhelm, K. 2002, , 568, L61
[kolotkov20] Kolotkov, D. Y., Duckenfield, T. J., & Nakariakov, V. M. 2020, , 644, A33
[kolotkov22] Kolotkov, D. Y. & Nakariakov, V. M. 2022, , 514, L51
[kolotkov19] Kolotkov, D. Y., Nakariakov, V. M., & Zavershinskii, D. I. 2019, , 628, A133
[2014ApJ...789..118K] Krishna Prasad, S., Banerjee, D., & Van Doorsselaere, T. 2014, , 789, 118
[2016ApJ...820...13M] Mandal, S., Magyar, N., Yuan, D., Van Doorsselaere, T., & Banerjee, D. 2016, , 820, 13
[mariska05] Mariska, J. T. 2005, , 620, L67
[mariska06] Mariska, J. T. 2006, , 639, 484
[mendoza04] Mendoza-Briceño, C. A., Erdélyi, R., & Sigalotti, L. D. G. 2004, , 605, 493
[nakariakov17] Nakariakov, V. M., Afanasyev, A. N., Kumar, S., & Moon, Y. J. 2017, , 849, 62
[nakariakov19] Nakariakov, V. M., Kosak, M. K., Kolotkov, D. Y., et al. 2019, , 874, L1
[nakariakov00b] Nakariakov, V. M., Verwichte, E., Berghmans, D., & Robbrecht, E. 2000, , 362, 1151
[ofman02c] Ofman, L. & Wang, T. 2002, , 580, L85
[pandey06] Pandey, V. S. & Dwivedi, B. N. 2006, , 236, 127
[prasad21] Prasad, A., Srivastava, A. K., & Wang, T. 2021a, , 296, 105
[2021SoPh..296...20P] Prasad, A., Srivastava, A. K., & Wang, T. J. 2021b, , 296, 20
[reale14] Reale, F. 2014, Living Reviews in Solar Physics, 11, 4
[roberts06] Roberts, B. 2006, Philosophical Transactions of the Royal Society of London Series A, 364, 447
[sigalotti07] Sigalotti, L. D. G., Mendoza-Briceño, C. A., & Luna-Cardozo, M. 2007, , 246, 187
[vaiana73] Vaiana, G. S., Krieger, A. S., & Timothy, A. F. 1973, , 32, 81
[2008ApJ...685.1286V] Verwichte, E., Haynes, M., Arber, T. D., & Brady, C. S. 2008, , 685, 1286
[2011SSRv..158..397W] Wang, T. 2011, , 158, 397
[wang21] Wang, T., Ofman, L., Yuan, D., et al. 2021, , 217, 34
[wang07] Wang, T. J., Innes, D. E., & Qiu, J. 2007, , 656, 598
[wang06] Wang, T. J., Innes, D. E., & Solanki, S. K. 2006, , 455, 1105
[wang02a] Wang, T. J., Solanki, S. K., Curdt, W., Innes, D. E., & Dammasch, I. E. 2002, , 574, L101
[wang03a] Wang, T. J., Solanki, S. K., Curdt, W., et al. 2003a, , 406, 1105
[2003A A...406.1105W] Wang, T. J., Solanki, S. K., Curdt, W., et al. 2003b, , 406, 1105
[wang03b] Wang, T. J., Solanki, S. K., Innes, D. E., Curdt, W., & Marsch, E. 2003c, , 402, L17
|
http://arxiv.org/abs/2307.02586v1
|
20230705183326
|
Mainline Automatic Train Horn and Brake Performance Metric
|
[
"Rustam Tagiew"
] |
cs.RO
|
[
"cs.RO",
"cs.CV",
"68T45",
"C.4"
] |
Mainline Automatic Train Horn and Brake Performance Metric
Rustam Tagiew
==========================================================
This paper argues for the introduction of a mainline rail-oriented performance metric for driver-replacing on-board perception systems. Perception at the head of a train is divided into several subfunctions. This article presents a preliminary submetric for the obstacle detection subfunction. To the best of the author's knowledge, no other such proposal for obstacle detection exists. A set of submetrics for the subfunctions should facilitate the comparison of perception systems among each other and guide the measurement of human driver performance. It should also be useful for a standardized prediction of the number of accidents for a given perception system in a given operational design domain. In particular, for the proposal of the obstacle detection submetric, the professional readership is invited to provide their feedback and quantitative information to the author. The analysis results of the feedback will be published separately later.
§ INTRODUCTION
Driverless and unattended train operations offer multiple advantages <cit.>: increases in capacity, reliability, service flexibility and energy efficiency, as well as relief of the train driver shortage. So far, these advantages can only be enjoyed in the case of fully automated metros in regular operation. Driverless train operation for mainline trains is still an unsolved challenge. The crucial difference is that in mainline railways, the tracks are open to disruptive exogenous influences, and the use of track-side measures such as fencing and cab signaling is not economically justifiable.
The challenge to be solved for mainline railway automation is related to that of fully automated road traffic and can benefit from technology transfer. It requires the development of a vehicle-side AI system performing multi-sensory perception. However, a literature review has shown an order of magnitude lower research activity for rail traffic in comparison to road traffic <cit.>. It also showed insufficient progress: the current Technology Readiness Level (TRL) for rail traffic is 5 and has remained unsurpassed for the last two decades. A key finding was the absence of a widely accepted performance metric that could link rail safety requirements with the AI developer community.
This article attempts to address the absence of such a performance metric. A performance metric would, on the one hand, provide developers with clear application-oriented goals and make their results comparable, and, on the other hand, make progress measurable for outsiders. The article proposes a preliminary performance submetric for the major subfunction of obstacle detection. Based on this proposal, a discussion can be initiated and first perception system performance results can be compared. The readers of this paper are urged to actively submit either the performance data of their systems or their suggestions for improvement to the author. Furthermore, this paper aims to familiarise developers without domain experience with the conditions of the railway, in order to facilitate research.
§ DIVISION INTO SUBFUNCTIONS
AI systems replacing human staff on trains have to perform multiple functions. Considering the current state of the art, these functions will not represent the full range of human abilities one-to-one; they only cover the most relevant tasks at an at least equal performance level. In comparison to state-of-the-art driverless metros, the system replacing the train driver on mainline railways requires extra development. The functions can be divided mainly into two subfunctions: the perception of objects with and without physical contact (fig.<ref>). Fig.<ref> does not include many subfunctions, such as surveillance of door operation, emergency detection, crime detection and so on.
Perception by contact with objects is referred to here as collision detection and replaces the train driver's acoustic and haptic sensation, sometimes referred to as a “popometer”. EN 62267, a standard for driverless metros, already mentions that a collision has to be detected at the latest upon contact with an obstacle. In the special case of shunting, controlled collisions, such as running into a drag shoe or coupling of cars, are part of normal operation. In all other cases, collisions with objects are unwanted, dangerous accidents that cannot always be avoided and must always be detected. Two types of collisions can be identified so far: impact and overrun events. The detection of impact events is referred to here as impact detection. For mainline railways, little research on impact detection and only one seminal study on overrun event detection systems <cit.> are known. Collision detection is therefore at a very early stage of development.
Detection of obstacles without physical contact replaces the human sight from the cab. It includes multiple tasks, of which driveway surveillance for collision prediction is the most challenging <cit.>. It is assumed that collision prediction is always prone to errors, false negatives and false positives, and therefore cannot make collision detection obsolete.
Visual inspection of infrastructure and rolling stock is more important for mainline railways than for metros due to greater exogenous influences and bigger operational areas, and is not only important for predictive maintenance. There are also cases such as sun kinks, catenary damage, broken signals, malfunctioning railroad crossing gates and slipping loads during train meets that require emergency braking and are therefore part of the driving function. Visual odometry complements rotary encoders, Inertial Measurement Units (IMU) and sensors for Global Navigation Satellite System (GNSS).
The railway signals have to be recognized from the vehicle. There are multiple groups of signals, which can be visual or acoustic. In the case of shunting at the lowest Grade of Automation (GoA) 0, relevant signals are, e.g., fouling point indicators at the railway switches. Although the observance of signals is ensured by automatic train stop in the case of GoA1, they still have to be recognized from the vehicle. The challenge of signal detection also includes the detection of tracks and their assignment to the signals <cit.>. From GoA2 on, signals do not need to be detected and are transmitted by cab signalling when used with ETCS. GoA2 can also conceptually be achieved if automatic visual detection of signals assists the driver <cit.>.
Prediction of collisions with obstacles requires algorithms for obstacle detection, distance estimation, Region of Interest (RoI) determination, obstacle trajectory prediction, human intention recognition and other predictive functions. Depending on the choice of operational design domain (ODD), some of the functions, such as human intention recognition, may be unnecessary. Obstacle detection can be further subdivided into object detection, obstacle classification and spatial angle determination. There are internal obstacles, such as railway vehicles and buffer stops. External obstacles can be pedestrians, road cars, big animals, trees, rocks, wrongly placed drag shoes, floods, fires and similar. Obstacles appear not only on the ground; they might also hang on the catenary <cit.> or levitate in the air.
Distance estimation is important for shunting and also for detecting obstacles from long distances in curves, where a relatively small distance error determines whether or not an object intersects with the structure gauge <cit.>. Spatial angle determination together with distance estimation is referred to as obstacle localization. For the RoI determination, a 3D tubular space formed by the predicted train's driveway and the structure gauge should be determined in the scene. Train's driveway is also known as train's path <cit.>. The structure gauge is supplemented with a speed-dependent hazard area for pedestrians, which arises due to aerodynamics around a moving train <cit.>. In the rare case that the states of the switches are not otherwise available to the perception system, they must be extracted from the visual input for the train's path prediction.
§ SAFETY ARGUMENTATION
All subfunctions described in Sec.<ref> require performance-indicating submetrics for all relevant stakeholders, especially the developers and the regulators. Safety-relevant functions for European mainline railways are approved according to the Common Safety Method for Risk evaluation and Assessment (CSM-RA). For this, performance metrics are needed that allow proof of compliance with standards, comparison with human performance, or calculation of resulting hourly fatality rates. Since there are still no standards for this, only two approaches of safety argumentation remain available (fig.<ref>). These are the reference system comparison and the explicit risk assessment according to harmonised design goals.
As depicted in fig.<ref> for collision risks, both approaches need performance data of the system's collision prediction and collision detection for all relevant conditions. Explicit risk assessment requires additional data to calculate whether the probability of an accident with a single fatality is lower than 10^-7 and of an accident with more than one fatality is lower than 10^-9. This additional data includes schedule, braking properties, route geometry, probabilities for obstacles, collision consequences, acoustic properties of the warning horn, transported load and passengers. This data describes the ODD of a train and is called here "ODD-Data". Instead of ODD-Data, the reference system comparison needs performance data of human collision prediction and collision detection for all relevant conditions.
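As a toy illustration of the explicit risk assessment branch, the sketch below composes an hourly fatal-accident rate from ODD-Data quantities and the perception system's miss probability, and checks it against the harmonised design goals. The simple product model and all numerical values are assumptions for illustration only; a real CSM-RA argument is far more detailed.

DESIGN_GOAL_SINGLE_FATALITY = 1e-7   # max rate of single-fatality accidents
DESIGN_GOAL_MULTI_FATALITY = 1e-9    # max rate of multi-fatality accidents

def hourly_fatal_accident_rate(obstacle_rate_per_hour,
                               p_missed_by_perception,
                               p_collision_is_fatal):
    """Fatal-collision rate = exposure x perception miss x severity."""
    return (obstacle_rate_per_hour * p_missed_by_perception
            * p_collision_is_fatal)

# Hypothetical ODD: one obstacle per 1000 h, 1% missed, 50% of hits fatal.
rate = hourly_fatal_accident_rate(1e-3, 1e-2, 0.5)
print(rate, rate < DESIGN_GOAL_SINGLE_FATALITY)   # 5e-06 False -> not safe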
§ REQUIREMENTS FOR OBSTACLE DETECTION
To justify performance metrics, a detailed description of obstacle detection in the railway domain is given, with a focus on safety. Commonly used performance metrics do not correlate well with the safety argumentation. In particular, the performance metric Intersection over Union (IoU), which is oriented to the 2D space of camera images, could mislead the development of a perception system. Even in 3D space, IoU still requires a safety-oriented weighting of the spatial direction of the mismatch between prediction and ground truth. Mean Average Precision (mAP) based on IoU provides a value only for single-shot prediction, not for a sequence of images of a train approaching an obstacle.
According to Eurostat statistics for 2021 in the EU <cit.>, 64.5% of fatalities result from accidents to persons caused by rolling stock in motion, 34.3% from level crossing accidents including pedestrians, and only 1.2% from railway vehicle collisions and other accidents. The portion of pedestrians in level crossing accidents can be assumed to be 14.6%, based on German statistics by Deutsche Bahn <cit.> for 2018. Therefore, the most probable fatal accident scenario is a collision with a person, at roughly 70%. The second most probable scenario is a collision with a passenger car, at roughly 24%.
The two most common scenarios on railways, pedestrian and car collisions, are also the most common on roads <cit.>. In contrast to commonly known road vehicles, emergency braking and the warning horn are the only available reactions on the railway. The braking distance for railway vehicles is approximately 5 times longer than for road vehicles. The ca. 15 dB(A) louder warning horn can and should be heard from larger distances <cit.>. Both have consequences for the minimal acceptable performance of vision-based collision prediction and make Long-Range Object Detection (LROD) necessary. Due to curvatures, weather and light conditions, LROD is not always possible. While for road vehicles collision prediction enables collision prevention, in the domain of railway vehicles it is rather a matter of damage limitation and reverence.
A collision with a person causes a fatality at all ego vehicle speeds in the case of railways, according to DIN VDE V 0831-103. However, out of a total of 695 accidental fatalities and serious injuries caused by rolling stock in motion in 2021 in the EU, 36.5% were seriously injured, i.e. survived <cit.>. When a deadly collision with a person cannot be prevented, braking must still be applied to preserve the dignity of the dead, to facilitate investigation by the authorities and to prevent exposure to casual bystanders. This is also important for the more than 2000 rail suicides in the EU each year, which are not counted as accidents. Warning horn and braking are never too late and have to be applied as soon as possible in this scenario.
A collision with a passenger car is a more intricate scenario than one with a person regarding the consequences of different ego vehicle speeds. Fig.<ref> shows the roughly estimated consequences of the collision of a train travelling at 130 km/h with a stranded passenger car. For simplicity, a uniform emergency braking deceleration of 1 m/s^2 without delay is assumed. More realistic modeling would require consideration of additional modifiers such as the co-functioning of different types of brakes, sanding to improve adhesion, and the surge behavior of liquid loads. In the best case, if the car is recognised at more than 652 m, the emergency braking can prevent the collision. In the worst case, if the car is not recognised before the collision, the impact detection system should recognise the crash and brake to reduce the risk of a potential derailment of the train. LROD cannot always achieve the best case due to obstruction of view in curves or through hilltops, weather conditions, insufficient illumination, as well as due to sudden intrusion of a moving obstacle.
However, earlier braking between the best and worst case reduces harm, as can be shown in our example in fig.<ref>. According to the risk model by ENOTRAC <cit.>, the damage of obstacles to a train grows with their mass and the speed of the train. According to DIN VDE V 0831-103, a crashing train with a speed higher than 40 km/h will cause the fatality of the car driver. If the car driver can escape the car after hearing the warning horn, early braking gives more time for this escape, depicted as a solid curve. The assumption of a maximal distance of 350 m at which the warning horn can be heard by the car driver is derived from the German regulation for the maximal distance between a railroad crossing and a whistle board <cit.>. A lower speed at the obstacle as a consequence of early braking reduces the risk area created by the air stream around the vehicle, depicted as a dashed zigzag line, according to the speed thresholds in the regulation of the German Statutory Accident Insurance <cit.>.
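The kinematics of this example are easy to reproduce. The following back-of-the-envelope sketch assumes the uniform 1 m/s^2 deceleration from 130 km/h stated above and recovers the 652 m braking distance; the list of sample detection distances is hypothetical.

import math

V0 = 130.0 / 3.6   # initial speed [m/s]
A = 1.0            # uniform emergency deceleration [m/s^2]

def braking_distance(v0=V0, a=A):
    """Distance [m] to full stop under uniform deceleration."""
    return v0**2 / (2.0 * a)

def impact_speed_kmh(detection_distance_m, v0=V0, a=A):
    """Speed [km/h] at the obstacle if braking starts at detection."""
    v_squared = v0**2 - 2.0 * a * detection_distance_m
    return math.sqrt(max(v_squared, 0.0)) * 3.6

print(round(braking_distance()))   # 652 -> best case, collision prevented
for d in (652, 400, 200, 0):       # hypothetical detection distances [m]
    # >40 km/h at impact implies car-driver fatality per DIN VDE V 0831-103
    print(d, round(impact_speed_kmh(d)))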
The distribution of distances at which human drivers detect objects on the railway has an irregular bell shape <cit.>. Tab.<ref> shows median distances for human performance at detecting objects on the tracks from all known sources. According to these measurements, a human driver can prevent the collision with the car only if the car has contrasting paint and is encountered in daylight. At night without illumination, in rainy weather and with a subdued car paint, the consequences will be much more severe. The shapes of the obstacle detection distance distributions are much steeper for computer vision systems than for humans <cit.>. One source reports the distances <cit.> at which the first, more or less erroneously placed, boxes appear for target objects.
False-negative and false-positive obstacle detections might occur due to reproducible or irreproducible failures in sensors and algorithms. In certain cases, the failures can be assigned to certain functions. For instance, objects like stones and trees in the perceived space outside of the RoI can be detected as obstacles due to wrong localization of the objects or of the RoI itself. The space perceived by sensors is often larger than the 3D RoI, even in the presence of view obstructions. Another example is small animals that are recognized as obstacles because they are misclassified as humans, or vice versa.
Computationally, moreover, false-positive visual detections, i.e. false alarms, must occur much less frequently than false-negatives, since the case of an absent obstacle is overwhelmingly predominant and obstacles are extremely rare. Additionally, the emergency brakes of mainline railway vehicles cannot be interrupted before full stop in many cases; false alarms thus create jams, damage the vehicles and therefore constitute a significant cost factor, which has to be considered in the performance metrics. Since false-positive detections cannot be ruled out, collision prediction will most probably be complemented by impact detection to refute false visual detections <cit.>.
§ PROPOSED PERFORMANCE SUBMETRIC
Fig.<ref> shows the proposed obstacle detection metric. This metric is designed for a moving train. The abscissa shows the distances at which X% of appearing obstacles are detected while approaching them; (100-X)% are detected at closer distances. X=50 denotes the median distance for obstacle detection. The ordinate shows hourly rates of false-positive detections, which cause unneeded warning horn use and jam-creating emergency braking. The values on the ordinate are negative logarithms of the hourly rate: the lower the rate, the better. The performance values of a system on these two axes are interlinked and can be adjusted by changing detection thresholds and tweaking internal parameters of a system. As with Precision-Recall (PR) and Receiver Operating Characteristic (ROC) curves, increasing performance on one axis will most probably reduce performance on the other.
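The two coordinates of a system's operating point can be computed directly from validation logs; the following is a minimal Python sketch, where the function names, the data layout and the example numbers are our own illustrative assumptions, not part of any reference implementation:

import numpy as np

def detection_distance_at_x(detection_distances_m, x_percent=50.0):
    """Distance at which x_percent of obstacles are detected at the latest;
    (100 - x_percent)% are detected at closer distances. Obstacles that
    were never detected enter as distance 0."""
    d = np.nan_to_num(np.asarray(detection_distances_m, float), nan=0.0)
    return np.percentile(d, 100.0 - x_percent)

def fp_ordinate(false_positive_count, operating_hours):
    """Negative decimal logarithm of the hourly false-positive rate."""
    return -np.log10(false_positive_count / operating_hours)

# Example: 200 encounters with ~400 m median detection, 3 false alarms in 1e4 h.
rng = np.random.default_rng(0)
distances = np.clip(rng.normal(400.0, 120.0, size=200), 0.0, None)
print(detection_distance_at_x(distances, 50.0))   # abscissa value, ~400 m
print(fp_ordinate(3, 1e4))                        # ordinate value, ~3.5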
The results according to this metric depend on the number and type of obstacles, the speed of the ego vehicle, the frame rate of the sensors, the track geometry, the time of day, the weather conditions, and other properties of a data set used to validate a system. The characteristics of the validation dataset will most likely depend on a chosen ODD. The shape of such performance curves is speculative and is shown in Fig.<ref> for hypothetical systems A and B.
Both systems A and B have maximum ranges due to their sensor resolutions. Setting the internal thresholds of one system to the extreme of permanent positive detection will give the maximum range on the abscissa and 10^0 on the ordinate. The opposite extreme, where the system is in permanent negative detection, will result in 0 on the abscissa and 10^-∞ on the ordinate. The shapes of the curves in between are drawn based on intuition for the hypothetical systems. From the shape of the curve for system A, it can be inferred that (100-X)% of car-collision scenarios will result in one or more fatalities with this system adjusted to <10^-4 false-positives (fig.<ref>). System B has to be adjusted to <10^-3 false-positives, i.e. 10x more inappropriate stops, to achieve the same level of safety.
Based on a certain ODD, there will be certain performance minima for each of the axes. If the functions of emergency braking and warning horn are separated, the performance minima for the two functions can differ. In the project KOMPAS, 10^-4 or fewer false-positive emergency brakings per hour is suggested as the minimal acceptable performance <cit.>. Since a false-positive warning horn does not create jams on the railways, the minimal requirements can be much less rigorous. However, extensive false-positive warning horn use will probably not be welcomed by residents living close to the railway. For orientation, this paper proposes a rate of 10^-2 cases per hour, as depicted in fig.<ref>.
The issue of the minima for distances depends more strongly on the ODD. Certain ego vehicle speeds, driveway geometries, weather and illumination conditions either prohibit LROD or do not demand it for the safety argumentation. For instance, in the car-collision scenario, the warning horn is assumed to be effective at most 350 m away. Low ego vehicle speeds or better brakes result in lower distance requirements for obstacle detection. If a typical curved route does not allow sensors to see further ahead than 600 m, a system will not be required to have a higher range. Both minima are depicted in fig.<ref>.
In the pedestrian-collision scenario, the emergency braking function demands that a system simultaneously overcome higher minima on both axes than the warning horn function. In that case, system A is better than system B for both functions. For the pedestrian-collision scenario, the effective distance for the warning horn can be significantly longer than the braking distance <cit.>, which can make system B more appropriate for the warning horn subfunction, while system A is more appropriate for the emergency braking subfunction.
Once the performance minima are met, the order of preference between the two performance values becomes important in the choice of a system and its parameter configuration. This could lead to answers to questions such as how much resident-annoying extra warning horn use is justified to save the life of one unlawful trespasser or one wild animal.
§ CONCLUSION AND CALL FOR DATA
A very important idea of this work is the inaptitude of the concept of a binary false-negative rate for obstacle detection on mainline railways. The non-detection of obstacles is gradual, not binary. The question is not "What percentage of the obstacles is detected?". The question is "At what distance will X% of the obstacles be detected at the latest?". The other important idea is that driving, i.e. minimizing the number of false-positive stops, is the primary and computationally more challenging goal, while safety, i.e. maximizing the timeliness of obstacle detection, is the secondary goal.
This paper is intended to elicit feedback from the research community. It contains a proposal for a submetric for an autonomous train perception system and a rationale for its design. The amount of feedback will be maximized by wide dissemination. The data expected here are lists of measurements that fit within the proposed submetric in fig.<ref> and 4-tuples of the performance minima for braking and warning. An element of the list of measurements contains the name of the system, the value of X, the rate of false-positives per hour and the minimal distance for X% detections. Textual feedback is also welcome, especially reasoning for the suggested performance minima. Human performance measurements as a benchmark are welcome as well. The anonymized data from the feedback will be analyzed and published in a separate paper, for which this paper serves as a draft.
|
http://arxiv.org/abs/2307.01275v1
|
20230703180512
|
Tensionless Tales of Compactification
|
[
"Aritra Banerjee",
"Ritankar Chatterjee",
"Priyadarshini Pandit"
] |
hep-th
|
[
"hep-th"
] |
Tensionless Tales of Compactification
Aritra Banerjee, Ritankar Chatterjee, Priyadarshini Pandit
==========================================================
§ INTRODUCTION
String theory has been a leading candidate for a quantum theory of gravity over the last few decades. This theory generalises the notion of a point particle to a fundamental one-dimensional string characterized by its tension T, given by:
T=1/2πα',
where α' gives the square of the length of the string. The tension T is the only free parameter of non-interacting string theory. Any candidate quantum theory of gravity has to be consistent with general relativity. String theory in the point particle limit (T→∞) reduces to general relativity, and superstring theory in the same limit leads to supergravity. The diametrically opposite limit (T→ 0) then corresponds to the extreme `stringy', or so-called ultra high energy, sector of the theory, and the worldsheet becomes null in this limit. The null (or "tensionless") sector of string theory was first analyzed by Schild <cit.>, and has subsequently found intriguing applications in diverse physical situations. In <cit.> Gross and Mende showed that in the limit α'→∞ string scattering amplitudes become considerably simpler. Massless higher spin symmetry <cit.> is also expected to appear in this sector <cit.>.
The physics of tensionless strings has recently been emerging in other circumstances as well. For instance, in <cit.> it has been shown that a closed string becomes tensionless (i.e. the worldsheet becomes null) when it falls onto the event horizon of a Schwarzschild black hole. Hence studying the tensionless limit of string theory might prove useful for understanding how strings perceive spacetime singularities. Tensionless strings are also expected to emerge when a gas of strings is heated to very near the Hagedorn temperature. A phase transition is expected to occur here and new degrees of freedom to appear <cit.>. As an indication of this, a novel closed-to-open transition was discovered in <cit.> when the string tension was dialled to zero. Finally, tensionless strings have recently been used to build a quantum model of black holes, specifically BTZ black holes in AdS_3 <cit.>, in a manner reminiscent of the black hole membrane paradigm. The entropy and its logarithmic corrections were obtained by counting null string states.
§.§ Tensionless strings: A brief history
The formulation of tensionless string theory has been approached in two different ways. The first approach, taken in <cit.>, involves the construction of the action and formulation of the theory from first principles [For other earlier work on null strings, the reader may look at <cit.>.]. Here, the worldsheet metric turns out to be degenerate, which is incorporated in the action. This action is invariant under a gauge symmetry, namely worldsheet diffeomorphisms, which can only be fixed partially, quite similar to the case of tensile string theory. After gauge fixing, the action still remains invariant under a residual gauge symmetry, the generators of which close to form the BMS_3 (3d Bondi-Metzner-Sachs) algebra <cit.>. The BMS algebras are the asymptotic symmetry algebras of Minkowski spacetime at null infinity, studied in <cit.>. This algebra has also been used intensively in the study of flat space holography <cit.>.
The other approach, namely the limiting approach, considers taking an appropriate limit on the worldsheet coordinates of tensile string theory <cit.>. The limit turns out to be the ultra-relativistic or Carrollian limit <cit.> on the worldsheet, where the worldsheet velocity of light tends to zero. This can be realised in terms of the worldsheet coordinates (τ,σ) scaling as {τ→ϵτ, σ→σ} with ϵ→ 0. Under this scaling, the two copies of the Virasoro algebra (the residual symmetry algebra of tensile string theory) contract to the BMS_3 algebra, making this approach consistent with the intrinsic one. This consistency between the two approaches, related closely by the symmetry algebra, has been the driving force behind recent explorations in this arena. Such studies have even been extended to supersymmetric versions of tensionless string theory in <cit.>.
The worldsheet geometry of the tensionless string naturally carries a 2d version of Carroll geometry, i.e. the geometry of a generic null manifold <cit.>. Such manifolds emerge in physics on various occasions as one departs from the well understood pseudo-Riemannian paradigm. The event horizon of a generic black hole, for instance, happens to be a null manifold, and hence carries a Carrollian structure <cit.>. Carrollian physics from the perspective of holography has been explored in <cit.>. Field theories on null manifolds, having intrinsic Carroll symmetries, have been analysed in <cit.>. Recently, Carrollian physics has found surprising applications in different areas of physics, such as cosmology <cit.>, condensed matter systems <cit.> and fluid dynamics <cit.>. Other aspects of Carrollian physics have been studied in <cit.>. It is then clear, from the Carroll symmetry perspective alone, that delving deep into the formalism of tensionless strings is going to be very important.
In recent years, substantial progress has been made in the quantization of tensionless strings as well <cit.>. It has been found <cit.> that the classical theory based on the action constructed in <cit.> can be quantized in different ways, resulting in three consistent inequivalent quantum theories. These three quantum theories are based on three distinct vacua, which have been named the Oscillator vacuum, the Induced vacuum and the Flipped vacuum in <cit.>. To elucidate, taking the null limit of the usual tensile quantum theory, based on highest weight representations, leads to the tensionless quantum theory built on the Induced vacuum. This theory corresponds to the induced representations of the BMS algebra. One of the intriguing observations in this theory has been the emergence of an open string from closed tensile string theory <cit.>. On the other hand, the quantum theory constructed on the Flipped vacuum belongs to the explicitly constructed highest weight representation of the BMS algebra. As it turns out, this is the tensionless limit of a twisted string theory, a close cousin of the usual tensile string theory. Classically these two theories are identical, having the same action, but quantum mechanically they differ strikingly <cit.>. Unlike these two theories, the Oscillator vacuum is based on a construction akin to that of tensile string theory, relying on (seemingly) decoupled left and right moving oscillators. The theory is nonetheless very interesting due to its connection to the tensile vacuum through a Bogoliubov transformation, as well as the emergence of massive states. As an intriguing example of the usefulness of the oscillator theory, a link between the tensionless limit and the infinite acceleration limit of a string has been explored from the worldsheet perspective in <cit.>. Moreover, the light cone quantization of all three tensionless theories has been studied in <cit.>. Just like in tensile string theory, the critical dimension for the Oscillator and Flipped vacua has been found to be 26, whereas there seems to be no such restriction on the dimension of the target space for the theory defined on the Induced vacuum.[For the path integral quantization of both tensionless bosonic and tensionless superstring theories, one may look at <cit.>.]
§.§ Tensile string compactification
String theory by nature requires multiple target space dimensions to be consistent with Lorentz invariance.
As is well known, the way to make these theories compatible with the four-dimensional world is to compactify the extra dimensions <cit.> on some compact manifold. Compactification of a dimension (say, on a circle) introduces two new quantum numbers into the theory, namely the winding number (W) and the quantized momentum (K). These compactified dimensions give rise to several new states in the spectrum: massless and massive vector states, massless and massive scalar states, etc. One of the most intriguing features of tensile string theory with compactification is that the mass spectrum is symmetric under the interchange of W and K along with the following transformation of the radius of compactification
R→α'/R.
This transformation is called T-duality; it relates string theories compactified on circles of different radii. At a specific point in the moduli space given by R=√(α'), i.e. at the self-dual point of the above transformation, we get new massless scalar and vector states.
[Note that tensile twisted bosonic string theory in a compactified background has been studied in <cit.>. This will be important for the discussions in the current manuscript.]
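As a quick orientation, the invariance can be checked in one line from the standard compactified tensile mass formula (which we will also use in the limiting analysis of the Induced vacuum below):

m^2=K^2/R^2+W^2R^2/α'^2+2/α'(N+N-2).

Under R→α'/R together with K↔W, the first two terms simply exchange, K^2/R^2↔W^2R^2/α'^2, while the oscillator term is untouched, so m^2 is invariant; R=√(α') is the fixed point of the map.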
§.§ Our present work: Compactified tensionless strings
This naturally brings us to the question we address in the current paper: do compactified string theories make sense in the tensionless regime as well? We will answer this question in the affirmative, while noting that, regardless of the quantum vacuum chosen, it is important to work with multiple target space dimensions in the case of null strings. In this work, we confine ourselves to the notion that these target spaces are necessarily (D-dimensional) Minkowski spaces. We should then ask: how does compactification change the spectra of the tensionless theories based on the three vacua we have been discussing?
But even before we jump to the question of the spectrum, the motivation for studying compactified target spaces for tensionless strings is already linked to various applications.
As mentioned earlier, tensionless strings in a compactified background have already been used as a building block in the construction of <cit.>, specifically for the Oscillator vacuum. It has been postulated that the event horizon of a BTZ black hole coincides with an ensemble of tensionless string states. In this setup, the angular direction on the horizon was recognized as the compactified target space coordinate that the null string wraps. The BTZ black hole microstates have thus been identified with the physical states of the tensionless string theory constructed on the Oscillator vacuum. It was found that the combinatorics of these microstates reproduces the Bekenstein-Hawking entropy along with logarithmic corrections. In other developments, string theory in the zero tension limit gives rise to infinitely many massless higher-spin fields with consistent mutual interactions <cit.>. With the recent progress in discussing higher-spin fields in flat space (see <cit.> for example), compact sectors of flat space tensionless strings may be an interesting new realm to explore. Moreover, since the novel phases coming out of Hagedorn transitions are closely connected to tensionless strings, it should be pointed out that the very high energy limit of the string density of states takes the universal Hagedorn form only when one considers a compact target space <cit.>. As shown in <cit.>, the topology of this compact space does not affect the nature of the transition. All of these intriguing ideas, which are still nascent and require dedicated discussion, make taking the first steps towards deciphering compactified null strings an important problem.
§.§ Plan of the paper
The rest of the paper is organised in the following way:
In section (<ref>), we revisit the analysis of classical and quantum tensionless closed string theory. We construct the Weyl invariant classical action following <cit.> and discuss its symmetries. We then briefly review the quantum structure of the tensionless theory following <cit.> by analysing the imposition of constraints.
In section (<ref>), we introduce the machinery to study the effect of compactification on the three inequivalent quantum theories based on the three distinct vacua. We discuss compactification of one or several spatial dimensions, on a circle (S^1) or a d-dimensional torus (T^d) respectively. Sections (<ref>), (<ref>) and (<ref>) are dedicated to detailed discussions of the effect of compactification on the level matching condition and the mass spectrum, separately for each of the three inequivalent vacua, namely the Oscillator, Induced and Flipped vacuum. We discuss the distinct structures of these three theories in detail and focus on potential implications.
In section (<ref>) we summarise and conclude our discussion with further comments and future perspectives. Appendices at the end contain details of computations and extra discussion.
§ REVIEW OF TENSIONLESS STRINGS: CLASSICAL AND QUANTUM
In this section we revisit the classical and quantum aspects of bosonic tensionless string theory. In the first part, we review the classical aspects of the theory both from the intrinsic and the limiting approach. We then move on to quantize the bosonic tensionless closed string and discuss the different ways of imposing quantum constraints on physical states, resulting in three distinct inequivalent quantum theories. We discuss in detail all three consistent quantum tensionless string theories, based on the three distinct vacua, namely the Oscillator, Induced and Flipped vacuum.
§.§ Classical tensionless strings
Following the method introduced in <cit.>, we use the Hamiltonian formalism to construct, from the Nambu-Goto action of the tensile theory, a Weyl invariant action in which the tensionless limit can be taken. In this limit, the metric density T√(-g)g^αβ turns out to be degenerate and hence is replaced by a rank one matrix V^α V^β. This leads to the following form of the tensionless string action:
S=∫ d^2ξ V^αV^β∂_αX^μ∂_βX^νη_μν,
where V^α is the vector density, ξ^α represents the worldsheet coordinates, X^μ are the spacetime coordinates and η_μν is the flat background metric. The above action is invariant under worldsheet diffeomorphisms resulting in the following gauge fixing:
V^α=(v,0),
where v is a constant. However, even after fixing this gauge, there is still a residual symmetry left over, analogous to the tensile theory. This residual symmetry of tensionless string theory turns out to be the BMS_3 (Bondi-Metzner-Sachs) algebra with generators (L_n, M_n) satisfying:
[L_m,L_n]=(m-n)L_m+n, [L_m,M_n]=(m-n)M_m+n, [M_m,M_n]=0.
This residual symmetry algebra is without any central extension and hence can be identified as the classical part of the 3d Bondi-Metzner-Sachs (BMS_3) algebra <cit.>. Recall that the analogous residual symmetry for the tensile closed string is two copies of the Virasoro algebra.
§.§ Mode expansions
The equations of motion obtained for the action (<ref>) are:
∂_α(V^α V^β∂_β X^μ)=0, V^βγ_αβ=0,
where γ_αβ=∂_α X^μ∂_β X^νη_μν is the induced metric on the worldsheet. The second equation in (<ref>) indicates that the metric γ_αβ is degenerate <cit.>. The above equations of motion simplify in the gauge V^α=(1,0) to:
Ẍ^μ=0; Ẋ· X'=0=T_1, Ẋ^2=0=T_2,
where T_1 and T_2 are the components of the energy-momentum tensor of the worldsheet theory. We now concentrate on finding solutions of the equations of motion. The general mode expansion solving the above equations can be written as:
X^μ(τ,σ)=x^μ+√(c'/2)A^μ_0σ+√(c'/2)B^μ_0τ+i√(c'/2)∑_n≠ 01/n(A^μ_n-inτ B^μ_n)e^-inσ.
Note that c' is a parameter with dimension [L]^2, introduced to keep dimensions consistent.
For X^μ to satisfy the closed string boundary condition X^μ(τ,σ)=X^μ(τ, σ+2π), A_0^μ must vanish. We now define the generators of the residual symmetry algebra in terms of the oscillator modes (A,B) as:
L_n=1/2∑_m A_-m· B_m+n, M_n=1/2∑_m B_-m· B_m+n.
Using the above relation on the two constraints in (<ref>), we obtain the expression for the energy momentum tensor in terms of generators of the BMS_3 algebra as follows:
T_1(τ,σ)=1/2π∑_n(L_n-inτ M_n)e^-inσ, T_2(τ,σ)=1/2π∑_n M_ne^-inσ.
We now proceed to compute the algebra satisfied by these modes. The Poisson brackets between X and P require
{A_m^μ,A_n^ν}_P.B = {B_m^μ,B_n^ν}_P.B=0, {A_m^μ,B_n^ν}_P.B=-2imδ_m+nη^μν.
We can clearly see that this is not the harmonic oscillator algebra. In order to obtain one, we define new modes in the following way <cit.>,
C_n^μ=1/2(A_n^μ+B_n^ν), C̃_n^μ=1/2(-A_-n^μ+B_-n^μ),
which satisfy an algebra similar to that of the oscillator modes in the tensile string case. The Poisson brackets now take the following form
{C_m^μ,C_n^ν}=-imδ_m+n,0η^μν, {C̃_m^μ,C̃_n^ν}=-imδ_m+n,0η^μν, {C_m^μ,C̃_n^ν}=0.
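As a quick consistency check, the first of these brackets follows directly from (<ref>):

{C_m^μ,C_n^ν}=1/4({A_m^μ,B_n^ν}+{B_m^μ,A_n^ν})=1/4(-2im+2in)δ_m+nη^μν=-imδ_m+n,0η^μν,

since δ_m+n enforces n=-m; the bracket of the tilded modes works out identically, and in the mixed bracket {C_m^μ,C̃_n^ν} the two cross terms cancel.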
We call this the oscillator basis of the tensionless string. We can now write the mode expansion (<ref>) in terms of these new modes as
X^μ(τ,σ)=x^μ+ √(c'/2)(C^μ_0-C^μ_0)σ+√(c'/2)(C^μ_0+C^μ_0)τ
+i √(c'/2)∑_n≠01/n[(C^μ_n-C^μ_-n)-inτ(C^μ_n+C^μ_-n)]e^-inσ,
where periodicity of X^μ demands that C_0^μ equal C̃_0^μ, so that the second term vanishes. Analogous to the tensile string, we can split the above mode expansion of the tensionless string in terms of the C^μ and C̃^μ oscillators, representing "left" and "right" modes respectively <cit.>:
X^μ_L =x^μ/2+√(c'/2)C^μ_0(τ+σ)+i√(c'/2)∑_n≠01/n(C^μ_n-inτ C^μ_n)e^-inσ,
X^μ_R =x^μ/2+√(c'/2)C^μ_0(τ-σ)+i√(c'/2)∑_n≠01/n(C^μ_n-inτC^μ_n)e^inσ,
where C_0^μ=C̃_0^μ=√(c'/2)k^μ is related to the momentum of the tensionless string.
§.§ Limit from tensile closed strings
The algebra of the modes in the oscillator basis of the tensionless string has been derived from the equations of motion; this is the "intrinsic" approach. It is, however, instructive to check the result from the "limiting" approach. Following <cit.>, we take a suitable limit of the mode expansion of tensile string theory and verify that we arrive at an identical expression in the tensionless case.
For tensile closed string, the expression for the mode expansion is
X^μ(τ,σ)=x^μ+2√(2α')α_0^μτ+i√(2α')∑_n≠01/n[α^μ_ne^-in(τ+σ)+α^μ_ne^-in(τ-σ)].
Here the zeroth modes for left and right moving oscillators are equal. Now in order to get to the tensionless strings, we take the following limit on the worldsheet coordinates
τ→ϵτ, σ→σ and α'→ c'/ϵ, ϵ→ 0.
Here c' is a finite parameter that takes care of the mass dimensions. In this limit, the above mode expansion reduces to the following form:
X^μ(τ,σ)=x^μ+2√(2ϵ c')α_0^μτ+i√(2c')∑_n≠01/n[α_n^μ-α̃_-n^μ/√(ϵ)-inτ√(ϵ)(α_n^μ+α̃_-n^μ)]e^-inσ.
We now compare (<ref>) with (<ref>) to find the relation of (α,α̃) with A's and B's as:
A_n^μ=1/√(ϵ)(α_n^μ-α̃_-n^μ), B_n^μ=√(ϵ)(α_n^μ+α̃_-n^μ).
Using (<ref>), we can also compute the relation between tensile oscillators (α,α̃) and tensionless oscillators (C,C̃). They are related through a Bogoliubov transformation given by,
C^μ_n=1/2(√(ϵ)+1/√(ϵ))α^μ_n+1/2(√(ϵ)-1/√(ϵ))α^μ_-n
C^μ_n=1/2(√(ϵ)-1/√(ϵ))α^μ_-n+1/2(√(ϵ)+1/√(ϵ))α^μ_n.
We can clearly see that at ϵ=1 the oscillators in the C basis reduce to the α basis of the tensile string, while in the ϵ→0 limit one recovers the tensionless oscillators defined in (<ref>). So, as we dial ϵ from 1 towards 0, we systematically interpolate from the tensile string to the tensionless string.
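This interpolation can be made explicit with a short check: inverting (<ref>) gives α^μ_n=1/2(√(ϵ)A^μ_n+B^μ_n/√(ϵ)) and α̃^μ_-n=1/2(-√(ϵ)A^μ_n+B^μ_n/√(ϵ)), and substituting these into the first line of (<ref>) collapses all ϵ dependence:

C^μ_n=1/4[(ϵ+1)+(1-ϵ)]A^μ_n+1/4[(1+1/ϵ)+(1-1/ϵ)]B^μ_n=1/2(A^μ_n+B^μ_n),

which is just the definition (<ref>), valid for any ϵ.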
§.§ Quantization of tensionless strings
Now we proceed to quantize the classical bosonic tensionless string in the usual canonical formalism. We begin with the tensionless action (<ref>), choosing the gauge V^α=(1,0), which results in the constraints Ẋ· X'=0=Ẋ^2. We now promote X^μ and its canonical momentum P^μ to operators obeying the commutation relations
[X^μ(τ,σ),P_ν(τ,σ')]=iδ (σ-σ')δ^μ_ν.
Using these relations in the mode expansion (<ref>), we get the following commutators
[A_m^μ,A_n^ν]=0=[B_m^μ, B_n^ν], [A_m^μ,B_n^ν]=2mδ_m+nη^μν, [x^μ,p^ν]=iη^μν.
As already mentioned, the algebra of (A,B) is not the harmonic oscillator algebra; the commutation relations of the C oscillators (<ref>), which do satisfy the harmonic oscillator algebra, are given by
[C^μ_m,C^ν_n]=[C^μ_m,C^ν_n]=mη^μνδ_m+n,[C^μ_m,C^ν_n]=0.
Now, we use these oscillators to define a vacuum and build a Hilbert space on it. However, in order to get the physical string spectrum, we have to apply constraints on the Hilbert space. In the tensionless case, there are different ways to impose these constraints, leading to distinct but consistent quantum theories which we discuss in detail below.
§.§ Quantum constraints on physical states
In the classical analysis of this section, we have seen that the components of the energy-momentum tensor vanish (<ref>). However, when we quantize the theory, these components T_1 and T_2 are promoted to operators, and setting the entire operators to zero would be too strong a constraint. The most general constraint we impose is that all matrix elements
of these operators in the physical Hilbert space vanish, namely,
⟨phys'|T_1|phys⟩=⟨phys'|T_2|phys⟩=0.
In terms of the generators L_n and M_n these constraints boil down to
⟨phys'|L_n|phys⟩=0, ⟨phys'|M_n|phys⟩=0, ∀ n∈ℤ.
As discussed in <cit.>, there are 9 possible ways to constrain the physical states consistently with the above relations. They can be listed as follows:
L_n|phys⟩=0 (n>0), with M_m|phys⟩=0 for (m>0), (m≠0) or (∀ m);
L_n|phys⟩=0 (n≠0), with M_m|phys⟩=0 for (m>0), (m≠0) or (∀ m);
L_n|phys⟩=0 (∀ n), with M_m|phys⟩=0 for (m>0), (m≠0) or (∀ m).
A detailed calculation <cit.> however shows that out of the 9 possible consistent ways of imposing constraints, only three are compatible with the BMS_3 algebra, resulting in three different quantum theories built on three distinct vacua, namely the Oscillator, Induced and Flipped vacuum. The three consistent sets of constraints are as follows:
L_n|phys⟩= M_n|phys⟩=0 (∀ n>0),
L_n|phys⟩≠0, M_n|phys⟩=0 (∀ n≠ 0),
L_n|phys⟩≠0, M_n|phys⟩≠ 0 (∀ n).
Except for the case of the Flipped vacuum, we assume the vacuum state to be a physical state, i.e. |phys⟩=|0⟩ [We shall see later that the physical state conditions in the Flipped vacuum demand that for a non-compact target spacetime the vacuum itself is not a physical state; the only physical states for a non-compact target spacetime are at level 2.]. The above physical state conditions correspond to the Flipped, Induced and Oscillator vacuum respectively. In what follows, we review the structure of the theories built upon these three vacua. The reader is directed to <cit.> for a very detailed account of the same.
§.§ Oscillator vacuum
In this section we start with the canonical quantization of the tensionless string on the Oscillator vacuum. The physical state condition on which this theory is built is the weakest of the three and is given below in what is known as the "sandwich" form:
⟨phys'|L_n-a_Lδ_n,0|phys⟩=0,⟨phys'|M_n-a_Mδ_n,0|phys⟩=0,
where a_L and a_M are normal ordering constants of L_0 and M_0 respectively. This theory is constructed on the oscillator vacuum which is defined as
C^μ_n|0,k⟩_c=C^μ_n|0,k⟩_c=0∀ n>0 ,
where the oscillators {C,C} satisfy the commutator relations (<ref>).
The expansion of the bosonic field X^μ(σ,τ) in terms of these oscillators is given in (<ref>). Subsequently, the generators of the worldsheet BMS algebra {L_n,M_n} can be expressed in terms of {C,C} as follows:
L_n =1/2∑_m[C_-m· C_m+n-C_-m·C_m-n],
M_n =1/2∑_m[C_-m· C_m+n+C_-m·C_m-n+2C_-m·C_-m-n].
The expressions of L_0 and M_0 become
L_0=𝒩-𝒩, M_0=c'k^2+𝒩+𝒩+X+X^†,
where k^2=-m^2 and the operators are given by:
𝒩=∑_m>0C_-m· C_m;𝒩=∑_m>0C_-m·C_m;X=∑_m>0C_m·C_m.
Here 𝒩 and 𝒩 are number operators and the entire Hilbert space can be spanned using their eigenstates as a basis. A generic eigenstate of 𝒩 and 𝒩 is given by
|r,s⟩=∑_jρ_j(∏_i=1^pC^a_i_-m_i∏_j=1^qC^b_j_-n_j)_j|0,k^μ⟩_c,
where a_i and b_j are the powers of the C_-m_i and C_-n_j oscillators respectively. The level of the state is (r+s), where r and s are given by
r=∑_i^p a_im_i,  s=∑_j^q b_jn_j.
Let us apply the L_0 physical state condition as in (<ref>) with |phys⟩=|phys'⟩=|0,k_0^μ⟩. This immediately leads us to:
⟨0,k^μ_0|L_0|0,k_0^μ⟩=a_L.
This means that the only way to ensure that the vacuum is a physical state is to demand a_L=0. As a consequence, sandwiching L_0 with the general state |r,s⟩ and applying the physical state condition (<ref>) with a_L=0, we see that
⟨r,s|L_0|r,s⟩=⟨r,s|(𝒩-𝒩)|r,s⟩=0,
which gives us the level matching condition for |r,s⟩ to be a physical state. On the other hand, the M_0 physical state condition from (<ref>) on a level matched state |n,n⟩ gives us the mass of the state. As argued in <cit.>, the M_0 condition leads to the following mass spectrum,
m^2|n,n⟩=1/c'(2n-a_M)|n,n⟩.
Hence, all that is left to do is determine the normal ordering constant a_M. Just like in the case of tensile string theory, working in light-cone gauge comes in handy when determining a_M as well as the critical dimension. One way is to calculate the normal ordering of M_0 directly: in light cone gauge we can find its expression in terms of the critical dimension D, then impose spacetime Lorentz symmetry and determine both a_M and D. This approach differs from the one used in <cit.> and has not been attempted previously. We determine a_M and D in this way in Appendix <ref> and find a_M=2 for D=26. Another, more rigorous method of calculating them can be found in <cit.>, and it gives the same result.
§.§ Analysis of spectrum
Based on (<ref>) we can briefly discuss the nature of the states at various levels. Let us start from the vacuum itself, given by n=0, using a_M=2. Like in tensile string theory, we get a tachyonic vacuum with mass given by
m^2|0,k^μ⟩_c=-2/c'|0,k^μ⟩_c.
Generic states with n=1 are given by[In Appendix <ref> we have used light cone quantization only to determine a_M. Here we continue to work in covariant quantization.]
|2⟩=ρ_μνC^μ_-1C^ν_-1|0,k^μ⟩_c.
Just like the level 1 states of tensile string theory, we can decompose these states into a traceless symmetric, an antisymmetric and a singlet (trace) part. The traceless symmetric part corresponds to a massless symmetric tensor field G_μν(X) of spin 2, which can be identified with the spacetime metric <cit.>. The antisymmetric massless tensor field gives the Kalb-Ramond background field B_μν(X). The trace part gives a scalar field Φ(X), which can be identified as the dilaton in this case. Furthermore, the mass spectrum (<ref>) with a_M=2 clearly shows that at levels n>1 we have massive higher spin states (also see <cit.>).
§.§ Induced Vacuum
Our discussion of the Induced vacuum theory is based on <cit.>. Directly taking the tensionless limit of the quantum tensile string theory constructed on a highest weight vacuum leads to the tensionless string theory constructed on the Induced vacuum.
Similarly, taking the ultrarelativistic limit of the highest weight representation of the Virasoro algebra leads to the induced representation of the BMS algebra <cit.>. Under this limit, the physical state conditions of tensile string theory boil down to the following:
⟨phys'|L_n|phys⟩=0∀ n, M_n|phys⟩=0,∀ n≠ 0.
The vacuum on which this theory has been built is the explicit tensionless limit of the tensile vacuum. Let us recall the definition of the tensile vacuum
α^μ_n|0,k^μ⟩_α=α^μ_n|0,k^μ⟩_α=0∀ n>0.
In terms of oscillators {A,B} (<ref>) this definition can be rewritten as
(√(ϵ)A_n^μ+1/√(ϵ)B_n^μ)|0,k^μ⟩_α=(-√(ϵ)A_-n^μ+1/√(ϵ)B_-n^μ)|0,k^μ⟩_α=0 ∀ n>0.
In the above equation, we have used the inverse of relation (<ref>). The new vacuum arising in the explicit tensionless limit (ϵ = 0) is given by
B^μ_n|0,k^μ⟩_I=0∀ n≠ 0,
B^μ_0|0,k^μ⟩_I=k^μ|0,k^μ⟩_I.
This state does satisfy the physical state conditions in (<ref>). The action of M_0 on this state gives the mass of the state. It is worth highlighting here that since the B's commute with each other, the normal ordering constant a_M for this theory is 0. This results in the following:
M_0|0,k^μ⟩_I=∑_nB_-n· B_n|0,k^μ⟩_I=(∑_n≠ 0B_-n· B_n+B^2_0)|0,k^μ⟩_I=0.
Applying (<ref>) on (<ref>) leads us to
B^2_0|0,k^μ⟩_I=k^2|0,k^μ⟩_I=0.
Applying the L_n physical state condition on the induced vacuum state |0,k^μ⟩_I we get
⟨0,k^μ|L_n|0,k^μ⟩=⟨0,k^μ|A_n· B_0|0,k^μ⟩=c'k·⟨0,k^μ|A_n|0,k^μ⟩=0.
Recalling that A^μ_0=0 due to the periodicity condition, the L_0 condition is trivially satisfied by the Induced vacuum.
The fate of the tensile perturbative states in the tensionless limit has been determined in <cit.>. We discuss this in Appendix <ref> and see that all the perturbative states condense onto the Induced vacuum. There we also briefly touch on the non-perturbative degrees of freedom emerging in the tensionless limit.
§.§ Flipped vacuum
The tensionless string theory constructed on the Flipped vacuum corresponds to the highest weight representation of the BMS algebra.
The physical state conditions for this theory mirror those of its tensile cousin:
(L_n-a_Lδ_n,0)|phys⟩=0∀ n≥ 0,
(M_n-a_Mδ_n,0)|phys⟩=0∀ n≥ 0.
The Flipped vacuum itself can be defined in terms of oscillators {C,𝒞} as
C^μ_n|0,k⟩_A=𝒞^μ_n|0,k⟩_A=0∀ n>0,
where we have defined the oscillator 𝒞 as
𝒞_n=C_-n,
i.e. the roles of creation and annihilation operators are "flipped" in this sector [This is much like the parent "twisted" string theory, where the vacuum condition changes to α^μ_n|0,k^μ⟩_α=α^μ_-n|0,k^μ⟩_α=0 ∀ n>0.].
The commutation relations of these new oscillators are given by
[C^μ_m,C^ν_n]=mη^μνδ_m+n[𝒞^μ_m,𝒞^ν_n]=-mη^μνδ_m+n,[C^μ_m,𝒞^ν_n]=0.
The generators of the residual symmetry algebra in terms of these new oscillators have the following form
L_n =1/2∑_m[C_-m· C_m+n-𝒞_m·𝒞_-m+n],
M_n =1/2∑_m[C_-m· C_m+n+𝒞_m·𝒞_-m+n+2C_-m·𝒞_m+n].
The bosonic scalar field X^μ(τ,σ) can be expressed in terms of these new oscillators as
X^μ(τ,σ)=x^μ+2√(c'/2) C^μ_0τ+i√(c'/2)∑_n≠01/n[(C^μ_n-𝒞^μ_-n)-inτ(C^μ_n+𝒞^μ_-n)]e^-inσ.
Now, L_0 and M_0 in terms of the new oscillators become
L_0=𝒩+𝒩,
M_0=c'k^2+𝒩-𝒩+X+Y.
where 𝒩, 𝒩, X and Y are defined as
𝒩=∑_m>0C_-m· C_m, 𝒩=-∑_m>0𝒞_-m·𝒞_m, X=∑_m>0C_-m·𝒞_m, Y=∑_m>0𝒞_-m· C_m.
Note the minus sign in front of the 𝒩 operator in this case.
In <cit.> it has been shown that taking the ultrarelativistic limit of tensile twisted string theory gives a_L=2, a_M=0. The same values of a_L and a_M have been reproduced via light-cone quantization <cit.>, and also via the path integral method <cit.>. The critical dimension D of this theory, too, has been found to be 26 in <cit.>, like in the parent twisted theory.
This value of a_L, together with the L_0 condition (<ref>), demands that only level 2 states can be physical. The other physical state conditions impose further constraints on these level 2 states, which can be obtained from the physical states of the parent twisted theory by taking the tensionless limit. For a more detailed discussion of the physical states of this theory, the reader is referred to Appendix <ref>.
§ COMPACTIFICATION OF TARGET SPACE
This section deals with the aspects of compactification that are common to all three quantum tensionless string theories discussed in the previous section. We consider both the case where one spatial coordinate is compactified on a circle S^1 and the case where multiple spatial coordinates are compactified on a d-dimensional torus T^d.
§.§ Compactification on Circle S^1
We begin with rewriting the solutions of the intrinsic equations of motion of tensionless closed string <cit.> given in (<ref>):
X^μ(τ,σ)=x^μ+√(c'/2)A^μ_0σ+√(c'/2)B^μ_0τ+i√(c'/2)∑_n≠ 01/n(A^μ_n-inτ B^μ_n)e^-inσ.
Here μ∈{0,1,...,25} and D=26 is the dimension of the target spacetime in this case. The algebra satisfied by the modes A_n's and B_n's are given in (<ref>).
We now choose to compactify the coordinate X^25 on a circle of radius R. In that case, we are identifying the following two points
X^25∼ X^25+2π RW, W∈ℤ,
where X^25 parametrizes a 1-dimensional circle S^1 of radius R and the integer W is the winding number of the string. The function X^25(τ,σ) maps the closed string 0≤σ≤ 2π to the 1-dimensional circle (0≤ X^25≤ 2π R). Therefore we need to modify the periodicity condition of the closed string in this direction,
X^25(σ+2π)=X^25(σ,τ)+2π RW.
The extra term 2π RW gives rise to strings that are closed only due to the compactification (i.e. they are closed on the circle S^1 but not on ℝ). When we quantize this theory, this gives rise to winding states, characterised by the winding number W.
We now write the mode expansion of X^25(τ,σ) and examine the consequences of compactification
X^25(τ,σ)=x^25+√(c'/2)A^25_0σ+√(c'/2)B^25_0τ+i√(c'/2)∑_n≠ 01/n(A^25_n-inτ B^25_n)e^-inσ.
As shown in <cit.>, k^μ=√(1/2c')B^μ_0. Since the wave function e^ik^25X^25 must be single-valued under X^25→ X^25+2π R, we need 2π Rk^25∈ 2πℤ, i.e. k^25=K/R with K∈ℤ. This restricts the allowed values of B^25_0 to:
B^25_0=√(2c')(K/R)K∈ℤ.
The modified periodicity condition (<ref>) demands
A^25_0=√(2/c')RW.
Therefore the expansion (<ref>) takes the following form
X^25=x^25+RWσ+(c'K/R)τ+i√(c'/2)∑_n≠ 01/n(A^25_n-inτ B^25_n)e^-inσ.
For μ≠ 25, A^μ_0=0 in order to maintain periodicity in σ. As highlighted in section (<ref>), in order to obtain the harmonic oscillator algebra <cit.> we need to introduce the new modes (C,C̃) defined in (<ref>). Using equations (<ref>) and (<ref>) in (<ref>), we find the modified zero modes in the compactified case,
C^25_0=1/2[√(2c')(K/R)+√(2/c')RW],C^25_0=1/2[√(2c')(K/R)-√(2/c')RW].
We see that for the compactified dimension, C_0^25 and C̃_0^25 take different values, while for the non-compactified dimensions, indexed by μ={0,1,⋯,24}, we have the usual C^μ_0=C̃^μ_0=√(c'/2)k^μ.
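Explicitly, these are just the n=0 case of (<ref>): C^25_0=1/2(A^25_0+B^25_0) and C̃^25_0=1/2(-A^25_0+B^25_0), with A^25_0=√(2/c')RW and B^25_0=√(2c')K/R substituted from (<ref>) and (<ref>).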
§.§ Compactification on Torus T^d
In this subsection we generalise our analysis to a background with d dimensions compactified, resulting in a (26-d)-dimensional effective theory (with D=26). Instead of a circle S^1 we now have a d-dimensional torus T^d, on which we make the following identification for the compactified coordinates
X^I∼ X^I+2π R W^I, I∈{26-d,⋯,25}.
where the winding can be written as:
W^I=∑_i=1^dω^ie^I_i,ω^i∈ℤ.
The components of the metric in the compactified directions are taken to be G_IJ=δ_IJ. Here e_i={e_i^I} form the basis of a d-dimensional lattice Λ_d, and the momentum is denoted by K_I. In order for e^iX^IK_I to be single-valued, we again need W^I(RK_I)∈ℤ, implying that RK_I has to reside in the lattice Λ^*_d dual to Λ_d. This means K_I can be expressed as
RK_I=∑_i=1^dk_ie_I^*i K_I=∑_i=1^dk_i/Re_I^*i,k_i∈ℤ.
where e^*i={e^*i_I} form the basis of the dual lattice Λ^*_d, and are dual to e_i.
e_i· e^*j=e_i^Ie^*j_I=δ_i^j.
The metric on the lattice Λ_d and Λ^*_d are respectively defined as
g_ij= e_i· e_j=e_i^Ie_j^Jδ_IJ,
g^*_ij=e^*i· e^*j=e^*i_Ie^*j_Jδ^IJ=g^ij.
For convenience we define a dimensionless field Y^I as
X^I=√(c'/2)Y^I.
We also use (<ref>). The expansion of Y^I is then in terms of the oscillators A and B:
Y^I=y^I+A^I_0σ+B^I_0τ+i∑_n≠ 01/n(A^I_n-inτ B^I_n)e^-inσ.
Together (<ref>) and (<ref>) imply that
B^I_0=√(2c')K^I,A^I_0=√(2/c')RW^I.
Expressing Y^I in terms of the oscillators {C,C̃} and splitting Y^I into left and right parts, we obtain the following mode expansion
Y^I_L =y^I_L+k^I_L(τ+σ)+i∑_n≠01/n(C^I_n-inτ C^I_n)e^-inσ,
Y^I_R =y^I_R+k^I_R(τ-σ)+i∑_n≠01/n(C^I_n-inτC^I_n)e^inσ,
Here, k^I_L and k^I_R are the dimensionless left and right momenta respectively.
(<ref>) and (<ref>) together imply that
k^I_L,R=1/√(2)(√(c')K^I±1/√(c')W^IR).
With the formalism in place, we are now ready to study the effect of compactification on the different quantum theories of tensionless strings. We deal with the three quantum theories built on the three different vacua individually in the following sections. Our focus is mainly on the effect of circle compactification; although we provide a quick look at the toroidal case, its details will be addressed in a subsequent work <cit.>. Before starting the upcoming sections, let us fix our notation: when summing over repeated indices without an explicit summation symbol, we mean a sum over all coordinates, including the compactified ones; we use an explicit summation symbol when summing over the non-compact coordinates only.
§ EFFECT OF COMPACTIFICATION: OSCILLATOR VACUUM
In this section we compute the level matching condition as well as the mass spectrum for the tensionless string theory constructed on the oscillator vacuum defined in (<ref>). We shall see that the level matching condition is modified due to the difference between the zero modes of the C oscillators (C_0^25, C̃_0^25) derived in the earlier section.
§.§ Modified level matching condition and mass spectrum
As already defined in subsection (<ref>), the oscillator vacuum is annihilated by the oscillators C^μ_n and C^μ_n. As mentioned earlier, we assume the target space to be 26-dimensional Minkowski space for consistency, i.e. μ = 0,1,...,25.
The vacuum still remains an eigenstate of the momentum operator. For compactification on a S^1 along the 25th direction, it should also have a winding number (W) along it, resulting in:
|0⟩_c ≡|0,k^μ,K,W⟩_c,
k̂^μ|0,k^μ,K,W⟩_c =k^μ|0,k^μ,K,W⟩_cμ=0,1,⋯,24,
k̂^25|0,k^μ,K,W⟩_c =K/R|0,k^μ,K,W⟩_c.
Since only the non-compact dimensions are observable at large scales, the measured mass squared must be the sum over only those components of the momentum that belong to the non-compact dimensions
m^2=-∑_μ=0^24k_μk^μ.
For the non-compactified case <cit.>, as already discussed, the normal ordered zero modes L_0 and M_0 follow (<ref>), where the number operators 𝒩, 𝒩 and the sum of annihilation operators X are defined in (<ref>).
We now move on to the compactified case. Using (<ref>) in (<ref>), we get the expressions for the modified normal ordered zero modes,
L_0=𝒩-𝒩+KW, M_0=c'K^2/R^2+c'∑_μ=0^24k_μk^μ+𝒩+𝒩+X+X^†,
where 𝒩, 𝒩, X and X^† are the same as defined in (<ref>). We thus observe that compactification modifies both the level matching condition and M_0. We now determine the physical states using the sandwich conditions (<ref>). Let us consider the case |phys⟩=|phys'⟩=|0,k^μ,K,W⟩.
While for n≠ 0 the physical state conditions are trivially satisfied, for zero modes we have the following
⟨0,k^μ,K,W|L_0|0,k^μ,K,W⟩=⟨0,k^μ,K,W|(𝒩-𝒩+KW)|0,k^μ,K,W⟩ =a_L=0,
⟨0,k^μ,K,W|M_0|0,k^μ,K,W⟩=c'(K^2/R^2+∑_μ=0^24k_μk^μ)=a_M=2 .
In the above we have used the values of a_L and a_M obtained in subsection (<ref>). For the lowest energy state, we immediately have 𝒩=𝒩=0. The state considered above is therefore physical only when KW=a_L=0, i.e. the only way to ensure that |0,k^μ,K,W⟩ is physical is to demand either W=0 or K=0. The mass shell condition in this case becomes
m^2=K^2/R^2-2/c'.
Let us consider a generic state of the form |r,s, k^μ,K,W⟩, where
𝒩|r,s, k^μ,K,W⟩=r|r,s, k^μ,K,W⟩, 𝒩|r,s, k^μ,K,W⟩=s|r,s, k^μ,K,W⟩.
As we saw in subsection (<ref>), in the non-compactified background with the oscillator vacuum, non level-matched states cannot be physical, i.e. we need to impose r=s. In the present case, however, the level matching condition is modified by the winding modes. Applying the sandwich condition (<ref>) with a_L=0 and |phys⟩=|phys'⟩=|r,s,k^μ,K,W⟩ we see that
⟨r,s,k^μ,K,W|L_0|r,s,k^μ,K,W⟩=r-s+KW=0 s=r+KW.
Now we want to check whether the level matched states satisfy the following sandwich condition
⟨phys|L_n|phys⟩=0, n≠ 0.
As shown in <cit.>, the action of L_n on state |r,s⟩ is given by
L_n|r,s⟩=|r-n,s⟩-|r,s+n⟩.
This means that if we take a level matched state with s=r+KW, then operating L_n on it yields a sum of the states |r-n,s⟩ and |r,s+n⟩, both of which are not level-matched and are hence orthogonal to any level-matched state. The inner product of this sum with any level-matched state therefore vanishes, and we conclude that level-matched states satisfy the sandwich condition on L_n for all n.
Now we apply the M_0 sandwich condition on level-matched states in order to find their mass. Following the method outlined in <cit.>, we see that the M_0 physical state condition leads to the following constraint on a state |r,s⟩
(c'K^2/R^2+c'∑_μ=0^24k_μk^μ+𝒩+𝒩) |r,s⟩=2|r,s⟩(c'K^2/R^2+r+s-2)|r,s⟩=c'm^2|r,s⟩
m^2|r,s⟩ =[K^2/R^2+1/c'(r+s-2)]|r,s⟩.
In the above we have used equation (<ref>). Hence, a generic physical state |r,s,k^μ,K,W⟩ must satisfy the level matching condition (<ref>) along with the mass shell condition (<ref>). This matches the result already derived in <cit.>, in the sense that there is no winding number contribution to the mass formula. It is a textbook fact that the winding contribution to the string mass is the energy required to wrap the string around the compact circle; the absence of such a contribution may be attributed to the so-called "long string" states associated with the tensionless regime, where the rest energy associated with wrapping simply vanishes.
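To make the modified conditions concrete, the following minimal Python sketch (the values of c' and R are arbitrary illustrative choices in natural units) enumerates a few low-lying level-matched states and their masses:

c_prime, R = 1.0, 1.0    # arbitrary illustrative values in natural units

def mass_squared(r, s, K):
    """m^2 = K^2/R^2 + (r + s - 2)/c' for a level-matched physical state."""
    return (K / R) ** 2 + (r + s - 2) / c_prime

for K in (-1, 0, 1):
    for W in (-1, 0, 1):
        for r in (0, 1):
            s = r + K * W            # modified level matching condition
            if s < 0:                # the right-moving level must be non-negative
                continue
            print(f"K={K:+d} W={W:+d} (r,s)=({r},{s}) m^2={mass_squared(r, s, K):+.2f}")

At R=√(c') (the choice above), the level 1 entries with KW=±1 come out massless, matching the vector and scalar states discussed in the next subsection.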
§.§ States from compactification
In the following subsection, we discuss the new states arising due to compactification (apart from the states we have already discussed in (<ref>)). In tensile string theory compactified on S^1, we have two massless vector states along with a massless scalar state for any value of the compactification radius R. At the self-dual radius R=√(α'), four additional vector states and eight additional scalar states become massless. There is also an infinite number of vacuum states with either K=0 or W=0, which become massless at particular values of the radius.
We will see, however, that for tensionless string theory on the oscillator vacuum there is an infinite number of massless vector and scalar states for any value of the compactification radius. Moreover, at R=√(c') we have four additional massless vectors and four massless scalars. As in the tensile theory, there are infinitely many vacuum states with internal momenta that become massless at particular values of R.
§.§.§ Level zero states
Let us consider the following states (with r=s=0)
|0,k^μ,K,0⟩,K∈ℤ.
The mass formula in (<ref>) implies that states in (<ref>) will have:
m^2=K^2/R^2-2/c'.
Hence, the state in (<ref>) becomes massless for a given value of the internal momentum K precisely when the radius of compactification is
R=K√(c'/2).
In tensile string theory too, there are states of this kind, which become massless at the compactified radius R=K√(α')/2. In general the above states can be tachyonic, massless or massive depending on the compactification radius, again mirroring the nature of their tensile counterparts. Note that winding vacuum states like |0,k^μ,0,W⟩ remain purely tachyonic with mass squared -2/c' for all values of W.
§.§.§ Level 1 vector states
Now we consider the following physical states at level 1, with either r=1 or s=1,
|V^μ_±⟩=C^μ_-1|0,k^μ,± 1,∓ 1⟩_c,|V^μ_±⟩=C^μ_-1|0,k^μ,± 1,±1⟩_c
where μ={0,1,...,24} and clearly KW=± 1. These are vector states with the mass squared
m^2=1/R^2-1/c'.
These states become massless at the radius R=√(c'). They can be compared to the aforementioned vector states in tensile string theory, which become massless at R=√(α'). In the tensile case, however, the analogous states always have non-negative mass squared, which is not guaranteed here. The implication of this observation is not completely clear to us, and we will come back to it in future correspondences.
§.§.§ Level 1 scalar states
By acting with the oscillators C^25_-1 and C^25_-1 on |0,k^μ,± 1,∓ 1⟩_c and |0,k^μ,± 1,±1⟩_c respectively, we can construct 4 more level 1 scalar states:
|ϕ_±⟩=C^25_-1|0,k^μ,± 1,∓ 1⟩_c,|ϕ_±⟩=C^25_-1|0,k^μ,± 1,±1⟩_c.
These are scalar states with the same mass as in (<ref>), and hence they too become massless at R=√(c'). They can be compared to the aforementioned scalar states in tensile string theory, which become massless at R=√(α').
§.§.§ Level 2 massless vector states
For K=0, at level 2 (i.e. r=s=1) we have a tower of massless vector states, since according to (<ref>) and (<ref>) we have m=0 for any value of the winding number W. These states are denoted by
|V^μ_W⟩=C^μ_-1C^25_-1|0,k^μ,0,W⟩_c, |V^μ_W⟩=C^μ_-1C^25_-1|0,k^μ,0,W⟩_c,
where μ={0,1,⋯,24}. Tensile string theory also has vector states like (<ref>), but they are massless only if both K and W are zero. Hence tensile bosonic string theory has only two vector states that can be massless for any R.
§.§.§ Level 2 massless scalar states
For r=1 with K=0, we also have an infinite number of massless scalar states, denoted by
|ϕ_W⟩=C^25_-1C^25_-1|0,k^μ,0,W⟩_c.
The states given in (<ref>) and (<ref>) are massless for any value of the compactification radius R. Tensile string theory also has similar states, but there they can be massless only for K=W=0.
§.§ Limiting theory
We have seen earlier that in a non-compactified background the tensile and tensionless oscillators are related by a set of Bogoliubov transformations. In this section we examine whether a similar situation occurs when starting from a compactified target space for the tensile theory and consistently taking limits at every step.
It is quite obvious that (<ref>) remains intact for all the non-compactified dimensions, so we rederive the Bogoliubov transformation focusing only on the compactified coordinate. Consider the mode expansion of X^25(τ,σ) in tensile string theory, also taken to be compactified on a circle,
X^25(τ,σ)= x^25+√(α'/2)α^25_0(τ-σ)+√(α'/2)α^25_0(τ+σ)
+i√(α'/2)∑_n≠01/n[α^25_ne^-in(τ+σ)+α^25_ne^-in(τ-σ)].
Taking ultra-relativistic limit (τ→ϵτ, σ→σ, α'→c'/ϵ) of (<ref>) and comparing this with (<ref>), we get the following relation between the (C^25_0,C^25_0) and (α^25_0,α^25_0)
C^25_0 =1/2(√(ϵ)+1/√(ϵ))α^25_0+1/2(√(ϵ)-1/√(ϵ))α^25_0,
C^25_0 =1/2(√(ϵ)-1/√(ϵ))α^25_0+1/2(√(ϵ)+1/√(ϵ))α^25_0.
One can note that the relation between the (C^25_n,C^25_n) and (α^25_n,α^25_n) modes with n≠ 0 remains the same as (<ref>). Now let us make the following identification on X^25 in (<ref>)
X^25∼ X^25+2π R'W', W'∈ℤ
i.e. the target space of the tensile theory is compactified in the 25th dimension with radius R'[Here we consider the possibility that while taking the T→ϵ T limit of tensile string theory we might have to scale the compactification radius as well, R'→ϵ^pR.] and winding number W'. The quantized momentum in the compactified direction is K'/R'. It can easily be shown that
α^25_0=1/2[√(2α')(K'/R')+√(2/α')R'W'],α^25_0=1/2[√(2α')(K'/R')-√(2/α')R'W']
Now, using the expressions of α^25_0 and α^25_0 in the r.h.s. of (<ref>), we end up with the following expressions for C^25_0 and C^25_0
C^25_0=1/2[√(2c')(K'/R')+√(2/c')R'W'],C^25_0=1/2[√(2c')(K'/R')-√(2/c')R'W'].
We see that the expressions of C^25_0 and C^25_0 in (<ref>) are exactly in the same form as in the intrinsically calculated zero modes (<ref>). Comparing the two expressions, we can conclude:
K'/R'=K/R,W'R'=WR.
Hence, if we want to scale R'=ϵ^pR, then we must also scale W' and K' as
K'=ϵ^pK,W'=ϵ^-pW.
Since we want both K and W to be finite integers, one way to ensure this is to demand p=0, which means we should have
R'=R.
This automatically implies K'=K and W'=W, which is one consistent possibility, and we will assume it without loss of generality in what follows. Another implication of this observation is that the tensionless string theory built on the oscillator vacuum resides in a target spacetime identical to that of the tensile theory, with the two vacua connected simply through Bogoliubov transformations.
§.§ A brief look at compactification of multiple dimensions
In this section we briefly discuss the oscillator vacuum theory on a background with d dimensions compactified on a torus T^d. Recalling the discussion in section (<ref>), we express L_0 and M_0 in terms of k^I_L,R to get,
L_0 = 𝒩-𝒩+RK^IW_I = 𝒩-𝒩+∑_i=1^dk_iω^i,
M_0=1/2 (k^I_Lk_IL+k^I_Rk_IR+2k^I_Lk_IR)+c'k^2+𝒩+𝒩+X+X^†
=c'K^IK_I+c'k^2+𝒩+𝒩+X+X^†
M_0 =c'/R^2∑_i,j=1^dk_ig^ijk_j+c'k^2+𝒩+𝒩+X+X^†.
k^I_L,R in terms of W^I and K^I are given in (<ref>). The remaining steps are much the same as in section (<ref>): we apply the level matching condition and finally obtain the mass spectrum
s-r=∑_i=1^dk_iω^i,
m^2= 1/R^2∑_i,j=1^dk_ig^ijk_j+1/c'(r+s-2),
which generalizes our earlier discussion. This is, however, much more intricate, as many parameters are involved, and a thorough investigation of the associated spectrum will be detailed elsewhere <cit.>, as promised earlier.
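As a rough numerical sketch of how the lattice data enter (the basis vectors, radius and c' below are arbitrary illustrative assumptions):

import numpy as np

c_prime, R = 1.0, 1.0
e = np.array([[1.0, 0.0],
              [0.5, np.sqrt(3.0) / 2.0]])   # illustrative basis of a 2d lattice
g = e @ e.T                                 # lattice metric g_ij = e_i . e_j
g_inv = np.linalg.inv(g)                    # dual metric g^ij

def mass_squared(k, r, s):
    """m^2 = (1/R^2) k_i g^ij k_j + (r + s - 2)/c'."""
    k = np.asarray(k, float)
    return float(k @ g_inv @ k) / R ** 2 + (r + s - 2) / c_prime

k, omega = np.array([1, 0]), np.array([0, 1])
r = 1
s = r + int(k @ omega)                      # level matching: s - r = k_i omega^i
print(mass_squared(k, r, s))                # 4/3 for this choice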
§.§ Summary
Given below is the summary of our findings in this section:
* The new level matching condition (<ref>) has been derived from the physical state conditions. It turns out that the level matching condition is modified in a way similar to the tensile compactified case.
* The mass spectrum (<ref>) has been computed. Unlike in tensile string theory, the mass spectrum is not straightforwardly invariant under the T-duality transformation.
* We discussed the new states arising due to compactification and found an infinite tower of massless states. There are vector states (<ref>) which are massless for any value of the compactification radius, as well as states which become massless at specific values of the radius, such as those in (<ref>) and (<ref>).
* The oscillator vacuum in a target space with d dimensions compactified on a torus T^d has been considered, and the corresponding level matching condition and mass spectrum have been derived (<ref>).
§ EFFECT OF COMPACTIFICATION: INDUCED VACUUM
As already highlighted in the earlier section, the theory with the Induced vacuum emerges when we explicitly follow through with the tensionless limit of tensile string theory. In this section, we study the effect of compactification on the theory built upon this vacuum. We also perform a consistent limiting analysis of the tensile perturbative states to ascertain what happens in the explicit limit.
§.§ What happens to the vacuum?
The tensile string vacuum with non-zero internal momentum K and winding number W, in the explicit tensionless limit, will give rise to the Induced vacuum with the same internal momentum and winding number [This can be seen again from a comparison of (<ref>) and (<ref>), where the same compactification radius means the same K and W. This comparison remains valid for all vacua.]
lim_ϵ→ 0|0,k^μ_α,K,W⟩_α=|0,k^μ_I,K,W⟩_I.
We denote the non-compact momentum of the tensile theory as k^μ_α, and the same in the tensionless theory as k^μ_I, in order to distinguish them from each other. We will see later in this section that in the explicit tensionless limit the momentum will change, and hence this distinction is important.
Intrinsically this new vacuum is defined in analogy to (<ref>) as:
B_n|0,k^μ_I,K,W⟩_I =0, n≠0,
B_0^μ|0,k^μ_I,K,W⟩_I =√(2c')k^μ_I|0,k^μ_I,K,W⟩_Iμ=0,1,⋯,24,
B_0^25|0,k^μ_I,K,W⟩_I =√(2c')K/R|0,k^μ_I,K,W⟩_I.
Now we know from <cit.> that the generators of the BMS algebra, L_n and M_n, can be written in terms of the A_n's and B_n's as in (<ref>).
Let us recall from the discussion in section (<ref>) that the vacuum |0,k^μ_I,K,W⟩_I belongs to the Induced representation of the BMS algebra. The physical state conditions satisfied by these states are given in (<ref>). Hence, in order to be a physical state, the vacuum must satisfy the following condition
M_n|0,k^μ_I,K,W⟩_I=1/2∑_mB_-m· B_n+m|0,k^μ_I,K,W⟩_I=0.
As pointed out in <cit.>, the B_n's commute with each other, i.e., there is no normal ordering ambiguity in the expression of the operator M_n, which implies a_M=0. So we can promptly write:
M_0|0,k^μ_I,K,W⟩_I=a_M|0,k^μ_I,K,W⟩_I=0,
which in turn gives
∑_mB_-m· B_m |0,k^μ_I,K,W⟩_I=0 ( ∑_m≠ 0B_-m· B_m+B^2_0)|0,k^μ_I,K,W⟩_I=0
B^2_0|0,k^μ_I,K,W⟩_I=2c'(∑_ν=0^24k_Iνk^ν_I+K^2/R^2)|0,k^μ_I,K,W⟩_I=0.
Hence we find here that the vacuum has a mass spectrum given by:
m^2|0,k^μ_I,K,W⟩_I=-∑_ν=0^24k_Iνk^ν_I|0,k^μ_I,K,W⟩_I=K^2/R^2|0,k^μ_I,K,W⟩_I.
So the string in the Induced vacuum state only has a rest energy contributed by the internal momentum, much like a relativistic massless particle whose energy comes entirely from its momentum.
The L_n physical state condition on the induced vacuum can be written as follows:
⟨0,k^μ_I,K,W|L_n|0,k^μ_I,K,W⟩=0∀ n
For n=0, this would give us the following
⟨0,k^μ_I,K,W|A_0· B_0 |0,k^μ_I,K,W⟩=KW⟨0,k^μ_I,K,W|0,k^μ_I,K,W⟩=0
KW=0.
In the above we have used the fact that A_0^μ=0 for μ={0,1,...24}, along with the expression of A^25_0 as given in (<ref>). This implies the state |0,k^μ_I,K,W⟩_I can be physical iff KW=0. This mirrors the fact that in tensile string theory the physical vacuum must have KW=0.
§.§ Limit from tensile mass formula
Since the Induced vacuum comes directly from taking the tensionless limit of the tensile case, we shall check whether we arrive at the same conclusion by taking the appropriate limit (i.e. α' →∞) of the tensile string theory.
We start from the following mode expansion of the compactified coordinate in the tensile case
X^25(σ,τ)= x^25+α'p^25τ+WRσ
+i√(α'/2)∑_n≠ 01/n(α^25_ne^-in(τ-σ)+α^25_ne^-in(τ+σ)),
where the internal momentum takes the discrete form p^25=K/R. The physical state conditions for the tensile string are
(ℒ_n-aδ_n,0)|phys⟩=0,
(ℒ_n-aδ_n,0)|phys⟩=0 ∀ n>0.
The mass formula and the level matching conditions can now be derived from (<ref>) as,
m^2=K^2/R^2+ 1/α'^2W^2R^2+2/α'(N+N-2), N-N=KW.
Here N and N denote the left and right levels of the tensile string respectively, and W is the winding number. In the tensionless limit, α' gets scaled to c'/ϵ, and hence it is straightforward to see that as ϵ→ 0 the second and third terms in (<ref>) vanish and only the first term survives, which is the same as what we calculated intrinsically (<ref>) [Note that the oscillator contributions do not survive here, giving the spectrum a more particle-like feeling.].
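This limit can also be verified symbolically; a quick sympy sketch (our own check, with α'=c'/ϵ substituted directly into the tensile formula above):

from sympy import symbols, limit

K, W, R, cp, eps, N, Nb = symbols("K W R cprime epsilon N Nbar", positive=True)
alpha_p = cp / eps          # alpha' -> c'/epsilon in the tensionless limit
m2 = K**2/R**2 + W**2*R**2/alpha_p**2 + 2*(N + Nb - 2)/alpha_p
print(limit(m2, eps, 0))    # K**2/R**2: only the internal momentum term survives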
§.§ What happens to the perturbative states?
In section (<ref>), we have seen that in a non-compactified background all the perturbative states of tensile string theory condense, under the tensionless limit, on the Induced vacuum of the tensionless string. In this subsection we study the fate of the tensile perturbative states in the tensionless limit when the target space has one dimension compactified. Note that the corresponding non-compactified computation of <cit.> has been reviewed in Appendix (<ref>), and we will explicitly follow the same procedure in the current case.
§.§ Perturbative states with either of K,W≠ 0 under the tensionless limit
Now, let us have a look at the states having non-zero winding number W. As we have noticed in (<ref>), the winding number did not appear in the mass spectrum. To understand the reason for this, we need to have a look at (<ref>) and (<ref>). We see that, unlike in the case of the oscillator modes C_0 and C_0 (see (<ref>)), the winding number does not appear in the expression of B^25_0. This means that the non-compact mass m^2=-∑_ν=0^24k_Iνk^ν_I becomes independent of the winding number. As we have seen, this is consistent with the tensionless limit of the tensile string as well, since the term containing the winding number W in (<ref>) does vanish in the tensionless limit.
The tensile vacuum (N=N=0) for non-zero internal momentum will essentially have zero winding number, as dictated by the level matching condition. Hence in the tensionless limit, the tensile vacuum with W=0, internal momentum K/R and momentum k^μ_α (m^2=-k^2, whose value can be found from (<ref>) with W=N=N=0) will end up as a state with W=0, internal momentum K/R and a new momentum k^μ_I, where k_I^2=-K^2/R^2. Hence, this vacuum is given by
|0,k^μ_I,K,0⟩_I,
which will satisfy the following equation
W|0,k^μ_I,K,0⟩_I=0.
It can be shown that all the tensile perturbative states with zero winding number and momentum (k^μ_α,K/R) will condense on this state.
§.§ States with W=0, K≠ 0
Since the level matching condition dictates that N-N=KW, for states with W=0, N=N. Hence, we can consider the following perturbative tensile state
|Φ⟩=σ_μνα^μ_-nα^ν_-n|0,k^μ_α,K,0⟩_α.
Following the expansion methods detailed in Appendix (<ref>), we consider the following evolution of the vacuum state with the parameter ϵ
|0,k^μ_α,K,0⟩_α=|0,k^μ_I,K,0⟩_I+ϵ|I_1⟩+ϵ^2|I_2⟩⋯
After this, using the conditions
α_n=1/2[√(ϵ) A_n+1/√(ϵ)B_n],α_n=1/2[-√(ϵ) A_-n+1/√(ϵ)B_-n],
and using the algebra of the A,B modes in (<ref>), we can also find the order by order action of the modes
B_n|0⟩_I=0,∀ n≠ 0
A_n|0⟩_I=-B_n|I_1⟩A_-n|0⟩_I=B_-n|I_1⟩∀ n>0
A_n|I_1⟩=-B_n|I_2⟩A_-n|I_1⟩=B_-n|I_2⟩∀ n>0
⋮
A_n|I_r⟩=-B_n|I_r+1⟩A_-n|I_r⟩=B_-n|I_r+1⟩∀ n>0.
With these in hand, we can easily see that the perturbative state condenses down onto the zero-winding Induced vacuum, i.e.
|Φ⟩→Σ|0,k^μ_I,K,0⟩_I,Σ=2nη^μνσ_μν.
Now let us move to tensile string states with winding number W (W≠ 0), which can be separated into two distinct categories: states with K=0, and states with K≠ 0. In what follows, we will see that states of these two categories have different fates under the tensionless limit.
§.§ States with W≠0, K=0
For such states we evidently have N=N. Hence we start from a state |χ⟩ of very much the same form as (<ref>), with the relevant vacuum state |0,k^μ_α,K,0⟩_α replaced by the zero internal momentum one |0,k^μ_α,0,W⟩_α. The rest of the procedure is very much the same, and we end up with the following
|χ⟩→Θ|0,k^μ_I,0,W⟩_I,Θ=2nη^μνθ_μν,
where θ_μν is the polarization tensor of |χ⟩. Hence, winding states with K=0 will condense to an Induced vacuum with K=0, W≠ 0, which satisfies
W|0,k^μ_I,0,W⟩_I=W|0,k^μ_I,0,W⟩_I.
§.§ States with W≠ 0, K≠ 0
For such states the level matching condition will be N=N+KW. Since the level matching condition has changed here, instead of states of the form (<ref>) we have to consider states of the following form
|ζ_n⟩=ρ_μνα^μ_-nα^ν_-n-KW|0,k^μ_α,K,W⟩_α[This includes tensile states arising due to compactification such as α^25_-nα^25_-n-KW|0,k^μ_α,K,W⟩_α. To get this state we can always take ρ_2525=1 and ρ_μν=0∀μ,ν≠ 25.].
Expanding the tensile vacuum |0,k^μ,K,W⟩_α as in the previous cases, we end up with the following expression
|ζ_n⟩=ρ_μν/ϵ(B^μ_-n+ϵ A^μ_-n)(B^ν_n+KW-ϵ A^ν_n+KW)(|0⟩_I+ϵ|I_1⟩+ϵ^2|I_2⟩+⋯).
Now, we have to be careful about taking the exact limit. Here we have two different cases:
1) KW>0 (i.e. either both K,W>0 or K,W<0) and
2) KW<0 (i.e. either K<0,W>0 or K>0,W<0).
Let us first look at the case KW>0. Here, when we apply the algebra (<ref>) and evaluate the expressions, we shall see that all the terms of orders 𝒪(ϵ^-1) and 𝒪(ϵ^0) vanish. Hence the dominant terms will be of 𝒪(ϵ), and from (<ref>) we can find four such states, as given below:
ϵρ_μν(-A^μ_-nA^ν_n+KW|0⟩_I+A^μ_-nB^ν_n+KW|I_1⟩-B^μ_-nA^ν_n+KW|I_1⟩+B^μ_-nB^ν_n+KW|I_2⟩).
Again applying our usual expansion methods on the states, it can be shown that
B_-nA_n+KW|I_1⟩=-A_-nB_n+KW|I_1⟩=-B_-nB_n+KW|I_2⟩=A_-nA_n+KW|0⟩_I.
As a result, the 𝒪(ϵ) term of (<ref>) as given in (<ref>) simply becomes
-4ϵρ_μνA^μ_-nA^ν_n+KW|0⟩_I.
As discussed in <cit.>, such states with multiple A's acting on the Induced vacuum are unphysical states [States constructed with only actions of A's have an ill-defined norm, as [A,A]=0.]. Hence, the winding states having non-zero internal momentum will end up being unphysical states in the tensionless limit.
Now, let us look at the KW<0 states. Let l=|KW|. Then (<ref>) will become
|ζ_n⟩=ρ_μνα^μ_-nα^ν_-n+l|0,k^μ_α,K,W⟩_α,
and the expansion will become
|ζ_n⟩=ρ_μν/ϵ(B^μ_-n+ϵ A^μ_-n)(B^ν_n-l-ϵ A^ν_n-l)(|0⟩_I+ϵ|I_1⟩+ϵ^2|I_2⟩+⋯).
For the states with n>l, the computation follows the case with KW>0. Using steps similar to (<ref>), (<ref>), the final form of the limiting state |ζ_n⟩ will be
-4ϵρ_μνA^μ_-nA^ν_n-l|0⟩_I,
which again is an unphysical state.
For states with n=l, the expansion in (<ref>) will be replaced by
|ζ_l⟩=ρ_μν/ϵ(B^μ_-l+ϵ A^μ_-l)(B^ν_0-ϵ A^ν_0)(|0⟩_I+ϵ|I_1⟩+ϵ^2|I_2⟩+⋯).
For these states the term with order 𝒪(ϵ^-1) will again vanish, since B's commute with each other. The leading order term that survives the ϵ→ 0 limit is of order 𝒪(ϵ) and is given by
ρ_μν(A^μ_-lB^ν_0|0⟩_I+B^μ_-lB^ν_0|I_1⟩).
Using a bit of algebra on (<ref>) we get
2ρ_μνA^μ_-lB^ν_0|0⟩_I=2(∑_ν=0^24ρ_μνk^ν_I+ρ_μ 25K/R)A^μ_-l|0⟩_I.
So, we again have an unphysical state, although it is different from (<ref>) and (<ref>).
For states with n<l something new happens. For convenience let us take l-n=m, where m>0. Then we will have a state with the following form
|ζ_n⟩=ρ_μν/ϵ(B^μ_-n+ϵ A^μ_-n)(B^ν_-m-ϵ A^ν_-m)(|0⟩_I+ϵ|I_1⟩+ϵ^2|I_2⟩+⋯).
Applying the algebra (<ref>) along with our order by order evolution, we can easily see that 𝒪(ϵ^-1) and 𝒪(ϵ^0) will vanish. The 𝒪(ϵ) terms in (<ref>) are written below
ϵρ_μν(-A^μ_-nA^ν_-m|0⟩_I+A^μ_-nB^ν_-m|I_1⟩-B^μ_-nA^ν_-m|I_1⟩+B^μ_-nB^ν_-m|I_2⟩).
Using (<ref>) it can be shown that
A^μ_-nB^ν_-m|I_1⟩=B^μ_-nA^ν_-m|I_1⟩=B^μ_-nB^ν_-m|I_2⟩=A^μ_-nA^ν_-m|0⟩_I.
Hence, we see that the 𝒪(ϵ) term vanishes too. The order 𝒪(ϵ^r) part in (<ref>) turns out to be
ϵ^rρ_μν(-A^μ_-nA^ν_-m|I_r-1⟩+A^μ_-nB^ν_-m|I_r⟩-B^μ_-nA^ν_-m|I_r⟩+B^μ_-nB^ν_-m|I_r+1⟩).
For r=1 this expression gives (<ref>). Here again, using (<ref>), we can show that (<ref>) vanishes, exactly like (<ref>). Hence, for n<l, |ζ_n⟩ vanishes at all orders of ϵ. This effectively means the tensionless progeny of states like (<ref>) do not contribute to the spectrum.
We can quickly summarise our results below; a minimal classifier sketch implementing this case analysis follows the list:
* The tensile states with K=W=0 will condense at the Induced vacuum with K=W=0. In <cit.> this was pointed out as a Bose-Einstein condensation on the worldsheet theory as α' →∞.
* The tensile states with K≠ 0 but W=0 will condense at the Induced vacuum with internal momentum K/R. The emergent vacua in this case are a family of massive ones, labelled by K values.
* The tensile states with K=0 but W≠ 0 will condense at Induced vacuum with K=0, and winding number W. These are a family of massless vacua, labelled by W values.
* The tensile states with both K,W > 0, or both K,W < 0, will tend to different unphysical states under the tensionless limit.
* The tensile states with K<0, W>0, or K>0, W<0, will end up being different unphysical states provided the level of the state n≥ |KW|.
* The tensile states with K<0, W>0, or K>0, W<0, will altogether vanish under the tensionless limit if the level of the state n<|KW|.
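A minimal Python sketch of the above case analysis (the function name and the phrasing of the outcomes are ours, purely illustrative):

def fate_of_tensile_state(K, W, n):
    # fate of a level-n tensile perturbative state with internal momentum
    # number K and winding number W under the tensionless limit,
    # following the summary above
    if K == 0 and W == 0:
        return "condenses on the Induced vacuum with K = W = 0"
    if W == 0:  # K != 0
        return "condenses on a massive Induced vacuum with momentum K/R"
    if K == 0:  # W != 0
        return "condenses on a massless Induced vacuum with winding W"
    if K * W > 0:
        return "ends up as an unphysical state"
    # K*W < 0: the outcome depends on the level n relative to |KW|
    return "ends up as an unphysical state" if n >= abs(K * W) else "vanishes"

print(fate_of_tensile_state(K=1, W=-2, n=1))   # vanishes, since n < |KW|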
§.§ Nonperturbative states
As we have mentioned in section (<ref>), there are non-perturbatively defined states in the Induced representation of the BMS algebra, which introduce a new kind of physical states in this tensionless theory (see Appendix <ref>). In this subsection we look at the effect of compactification on such states.
Identifying the Induced vacuum winding states |0,k^μ,K,W⟩_I, we can construct the following non-perturbative states
|ϕ⟩=exp(i∑_nω_nL_n)|0,k^μ_I,K,W⟩_I.
This state satisfies the physical state condition (<ref>). Writing L_n in terms of A_n's and B_n's we get
|ϕ⟩=exp(i∑_n,mω_nA_n-m· B_m)|0,k^μ_I,K,W⟩_I.
Now, let us recall the algebra in (<ref>) which says that both A_0 and B_0 will commute with all A_n's and B_n's. Hence the mass operator m^2=∑_μ=0^24B_0^μB_0μ will have the same eigenvalue for the eigenstate in (<ref>) as for the vacuum on which it is built
m^2|ϕ⟩=K^2/R^2|ϕ⟩,
and the winding number operator, which is proportional to A^25_0, has the same eigenvalue for |ϕ⟩ as for |0,k^μ_I,K,W⟩_I:
W|ϕ⟩=W|ϕ⟩.
§.§ A brief look at multiple dimensions compactification
In this subsection we briefly look at the tensionless string theory constructed on the Induced vacuum in a background with d dimensions compactified.
Applying the M_0 physical state condition on |0,k^μ,k^i,ω^i⟩_I we get
∑_mB_-m· B_m |0,k^μ,k^i,ω^i⟩_I=0
(∑_m≠ 0B_-m· B_m+B^2_0)|0,k^μ,k^i,ω^i⟩_I=0
B^2_0|0,k^μ,k^i,ω^i⟩_I=2c'(k^2+K^IK_I)|0,k^μ,k^i,ω^i⟩_I=0.
The mass of the state [Note that I here is the index on K denoting the compactified directions, just to avoid confusion.] |0,k^μ,k^i,ω^i⟩ is given by
m^2=K^IK_I=1/R^2∑_i,j=1^dk_ig^ijk_j.
The L_0 condition for this vacuum will give us the following
⟨0,k^μ,k^i,ω^i| A_0· B_0|0,k^μ,k^i,ω^i⟩=0
∑_i=1^dk_iω^i=0.
Recall that the mass spectrum for the tensile string in a background with d dimensions compactified on a torus was given by
m^2=2/α'(N_L+N_R-2)+1/R^2∑_i,j^dk_ig^ijk_j+1/α'^2∑_i,jω^ig_ijω^j
where N_R-N_L=∑_i=1^dk_iω^i. We can now explicitly take the tensionless limit of this (α'→c'/ϵ, ϵ→ 0), and obtain the same spectrum as in (<ref>).
§.§ Summary
Let us summarize the results in this section below:
* We get a series of vacua labelled by internal momentum K and winding number W (equation (<ref>)). However, their masses (equation (<ref>)) do not contain any contribution from W. Quite naturally, T-duality is absent.
* Since this theory is an explicit tensionless limit of usual string theory, it is important to ensure that the intrinsically derived results are consistent with the tensionless limit of the results in the parent theory. In section (<ref>) we have shown that our results from the intrinsic analysis match those of the limiting one.
* In section (<ref>) we have shown that the perturbative states in the tensile string theory have different fates depending on the values of K and W. There are three possibilities:
(i) Some of them condense on the Induced vacua; this family of vacua, labelled by the K and W values, are the only states in the Induced theory.
(ii) Some of them become unphysical states.
(iii) The rest of them simply vanish.
* The non-perturbative states constructed on a vacuum with internal momentum K and winding number W have the same values of internal momentum and winding number. As a result, the mass of such a state remains the same as that of the vacuum it is constructed on (equations (<ref>) and (<ref>)).
* We also considered d-dimensional compactification of the theory on a torus T^d and computed the mass spectrum (equation (<ref>)). We have also reproduced the same spectrum by directly taking the tensionless limit of the mass spectrum of tensile string theory.
§ EFFECT OF COMPACTIFICATION: FLIPPED VACUUM
In this section we discuss the effect of target space compactification on the flipped vacuum of tensionless string theory. We will analyze the physical states and closely study the constraints imposed on them by the associated non-trivial physical state conditions. We should reiterate that this vacuum is important since it is directly connected to the highest weight representations of the BMS algebra.
§.§ Modified constraint on level
The flipped vacuum for the tensionless string in terms of the oscillators C and 𝒞 has been expressed in (<ref>), with the flipped oscillator 𝒞 defined in (<ref>). The normal ordered zero modes L_0 and M_0 in terms of C and 𝒞 can be written as
L_0=𝒩+𝒩+KW,
M_0=1/2[C_0^2+𝒞_0^2+2C_0·𝒞_0]+ ∑_m>0[C_-m· C_m+𝒞_-m·𝒞_m
+C_-m·𝒞_m+𝒞_-m· C_m].
Now using (<ref>) and (<ref>) we conclude the following
M_0=c'K^2/R^2+c'k^2+𝒩-𝒩+X+Y,
where 𝒩 and 𝒩 are the operators defined in (<ref>). Let us recall from the earlier section (<ref>) that for the flipped case, a_L=2 and a_M=0. Hence, in the non-compactified case only states with level 2 would be physical. However, for the flipped string in a compactified background, the presence of the KW term in the L_0 condition allows states of any level to be physical for appropriate values of K and W. This intriguing result is going to be at the heart of the discussion that follows.
§.§ States at various levels
In the following subsections we impose the physical state conditions (<ref>), along with (<ref>), on a generic state |r,s,k^μ,K,W⟩ at level zero, level one and level two, and examine the mass spectrum. We will also discuss the structure of higher level states.
§.§.§ Level 0 States
When we apply the conditions (<ref>) on |0,0,k^μ,K,W⟩ we note that for n≥ 1, (<ref>) are trivially satisfied, leaving us with only two non-trivial conditions on |0,0,k^μ,K,W⟩, which read [Tensionless string theory on this vacuum can be shown to be equivalent to what is known as the Ambitwistor string <cit.>, hence the subscript A on the states.]:
(L_0-a_L)|0,0,k^μ,K,W⟩_A =(𝒩+𝒩+KW-a_L)|0,0,k^μ,K,W⟩_A=0,
(M_0-a_M)|0,0,k^μ,K,W⟩_A =(c'K^2/R^2+c'∑_μ=0^24k_μk^μ-a_M)|0,0,k^μ,K,W⟩_A=0.
Now, we are looking at states with 𝒩= 𝒩=0. For them to be physical we must then have KW=a_L. Since from earlier works we have already deduced a_L=2, for level zero states to be physical one must have KW=2. That means there will be four possible states depending on the values of K and W: {K=1,W=2}, {K=2,W=1}, {K=-1,W=-2} and {K=-2,W=-1}.
Since a_M=0, the mass of the level zero states is given by:
m^2=K^2/R^2,
where K=± 1, ± 2.
§.§.§ Level 1 States
For level 1 states, (<ref>) is trivially satisfied for n≥ 2. Hence, we have only the L_0, M_0, L_1 and M_1 conditions to satisfy. There are two possibilities: either 𝒩=1, 𝒩=0, or 𝒩=0, 𝒩=1. Applying the L_0 condition on level 1 states, we see that the condition for a level 1 state to be physical is KW=1, i.e. either K=W=1 or K=W=-1. A generic state of level one can be expressed as a linear combination of |1,0⟩ and |0,1⟩ states as given below <cit.>
|1,K,W⟩_A=a_μC^μ_-1|0,0,k^μ,K,W⟩_A+b_μ𝒞^μ_-1|0,0,k^μ,K,W⟩_A.
Now, we apply the remaining three nontrivial conditions on this generic state in order to put restrictions on it,
L_1|1,K,W⟩_A=M_1|1,K,W⟩_A=M_0|1,K,W⟩_A=0.
After a bit of algebra, the above conditions for physical states can be written as:
[K/R(a_25+b_25)+RW/c'(a_25-b_25)+∑_μ=0^24k^μ(a_μ+b_μ)]|0,0,k^μ,K,W⟩_A=0,
[K/R(a_25-b_25)+∑_μ=0^24k^μ(a_μ-b_μ)]|0,0,k^μ,K,W⟩_A=0,
[(c'K^2/R^2+c'∑_ν=0^24k_νk^ν+1)a_μ-b_μ]C^μ_-1|0,0,k^μ,K,W⟩_A
+ [(c'K^2/R^2+c'∑_ν=0^24k_νk^ν-1)b_μ+a_μ]𝒞^μ_-1|0,0,k^μ,K,W⟩_A=0.
As discussed in <cit.>, for a_μ,b_μ≠ 0 the last condition reads
c'K^2/R^2+c'∑_μ=0^24k_μk^μ=0,a_μ=b_μ.
As a result we again have the exact same expression for the mass as in (<ref>). However, in this case the only permissible values are K=± 1. Note that, since a_μ=b_μ, the norm of the state (<ref>) is ⟨1|1⟩=a^2-b^2=0. Hence level 1 states are null states. The other physical state conditions in (<ref>) give us the following:
K/Ra_25+∑_μ=0^24k^μa_μ=0,
which gives the extra condition on the coefficients a_μ.
§.§.§ Level 2 States
In line with what we have seen so far, for level 2 states we will have six non-trivial physical state conditions, namely L_0, M_0, L_1, M_1, L_2 and M_2 conditions. A generic state of level 2 can be expressed as given below
|2,K,W⟩_A =a_μC^μ_-2|0⟩_A+e_μνC^μ_-1C^ν_-1|0⟩_A+h_μνC^μ_-1𝒞^ν_-1|0⟩_A
+b_μ𝒞^μ_-2|0⟩_A+f_μν𝒞^μ_-1𝒞^ν_-1|0⟩_A+j_μνC^μ_-1𝒞^ν_-1|0⟩_A.
Here |0,0,k^μ,K,W⟩_A is shortened to |0⟩_A, and hereafter we will use this shortened notation. e_μν and f_μν are manifestly symmetric, while h_μν and j_μν are assumed to be symmetric and anti-symmetric respectively. For such states the L_0 condition implies that KW=0, i.e. either K=0 or W=0. Hence, for level 2, unlike the other levels, we get an infinite number of states, since for K=0, W can take any value and vice-versa.
The other physical state conditions will be applied on this state in exactly the same way as they were in the non-compactified case. The M_0 condition would yield the following
M_0|2⟩ =[(c'K^2/R^2+c'∑_ν=0^24k_νk^ν+2)a_μ-2b_μ]C^μ_-2|0⟩_A
+[(c'K^2/R^2+c'∑_ν=0^24k_νk^ν-2)b_μ+2a_μ]𝒞^μ_-2|0⟩_A
+[(c'K^2/R^2+c'∑_ν=0^24k_νk^ν+2)e_μν-h_μν]C^μ_-1C^ν_-1|0⟩_A
+[(c'K^2/R^2+c'∑_ν=0^24k_νk^ν-2)f_μν+h_μν]𝒞^μ_-1𝒞^ν_-1|0⟩_A
+[(c'K^2/R^2+c'∑_ν=0^24k_νk^ν)(h_μν+j_μν)+2(e_μν-f_μν)]C^μ_-1𝒞^ν_-1|0⟩_A=0.
The other non-trivial conditions would yield
L_1|2⟩ =[2a_ν+2e_25ν(c'K/R+RW)+2c'∑_μ=0^24e_μνk^μ+(h_25ν-j_25ν)(c'K/R-RW)
+c'∑_μ=0^24(h_μν-j_μν)k^μ]C^ν_-1|0⟩_A+[2b_ν+2f_25ν(c'K/R-RW)
+2c'∑_μ=0^24f_μνk^μ+(h_25ν+j_25ν)(c'K/R+RW)+c'∑_μ=0^24(h_μν+j_μν)k^μ]𝒞^ν_-1|0⟩_A=0,
M_1|2⟩ =2[(a_ν-b_ν)+2 c'K/Re_25ν+2 c'∑_μ=0^24e_μνk^μ-c'K/R(h_25ν-j_25ν)
-c'∑_μ=0^24(h_μν-j_μν)k^μ]C^ν_-1|0⟩_A+2[(a_ν-b_ν)-2 c'K/Rf_25ν
-2 c'∑_μ=0^24f_μνk^μ+c'K/R(h_25ν+j_25ν)+c'∑_μ=0^24(h_μν+j_μν)k^μ]𝒞^ν_-1|0⟩_A=0,
L_2|2⟩=[K/R(a_25+b_25)+RW/c'(a_25-b_25)+∑_μ=0^24k^μ(a_μ+b_μ)+
1/2c'(e^μ_μ-f^μ_μ)]|0⟩_A=0,
M_2|2⟩=[K/R(a_25-b_25)+∑_μ=0^24k^μ(a_μ-b_μ)+1/4c'(e^μ_μ+f^μ_μ-h^μ_μ)]|0⟩_A=0.
The M_0 condition (<ref>) along with the L_1 condition (<ref>) give us a_μ=b_μ=0. Together, (<ref>), (<ref>) and (<ref>) give us h_μν=2e_μν=2f_μν. Additionally, these three equations also lead us to the following constraints on
e_μν and j_μν
e_25νK/R+∑_ν=0^24e_μνk^μ=j_25νK/R+∑_ν=0^24j_μνk^μ=0,
j_25νW=0.
This means that for physical states the coefficients j_25ν can be non-zero only for states with W=0. Gathering all terms, the physical state (<ref>) can be written in the form:
|2,K,W⟩=e_μν[C^μ_-1C^ν_-1|0⟩_A+2C^μ_-1𝒞^ν_-1|0⟩_A+𝒞^μ_-1𝒞^ν_-1|0⟩_A]+j_μνC^μ_-1𝒞^ν_-1|0⟩_A.
In the above equation, KW=0. Here too, the norm of the states vanishes, as for the level 1 states. However, the mass spectrum is modified due to the presence of the internal momentum K. For states having W=0, the mass squared will be the same as (<ref>). Here, unlike level 0 or level 1 states, K can assume any integral value, hence there are infinitely many such states in the spectrum. For winding states (W≠ 0), however, the mass squared will be zero, since for those states K=0, leading to yet another infinite tower of massless states. Note that this is a particular property of level 2 states, since KW=0 in this case.
§.§.§ Higher level states
As already stated, the presence of the KW term in the L_0 physical state condition implies that states of any level can be physical. A generic state of level l (l=r+s) can be written as
|r,s,k^μ,K,W⟩=∑_jρ_j(∏_i=1^pC^a_i_-m_i∏_j=1^q𝒞^b_j_-n_j)_j|0,k^μ,K,W⟩.
In the above, a_i and b_j are powers of the oscillators C and 𝒞 respectively. The level of the state l and KW are given by
l=r+s=∑_i^pa_im_i+∑_i^qb_in_i,KW=2-l.
Hence, the number of possible states at level l depends on the number of (K,W) pairs satisfying KW=2-l. For a state of level l, there will be 2(l+1) nontrivial physical state conditions[For level l, the nontrivial physical state conditions come from L_n and M_m with n,m∈{0,1,2,....,l}. That means we have (l+1) L_n physical state conditions and (l+1) M_n physical state conditions, leaving us with 2(l+1) nontrivial physical state conditions.]. Excluding the L_0 condition, which we have already used to determine KW, we are left with 2l+1 physical state conditions. The M_0 condition for higher levels, too, gives the same mass-shell condition as (<ref>), and the remaining physical state conditions put constraints on the coefficients ρ_j. Lastly, note that for l≠ 2 both K and W have to be non-zero, and as a result we will not get massless states at higher levels.
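The counting of allowed (K,W) pairs at a given level is elementary; a small sketch (our own, purely illustrative) that enumerates the integer solutions of KW=2-l:

def kw_pairs(level):
    # integer (K, W) pairs allowed at level l by KW = 2 - l;
    # at l = 2 the family is infinite (K = 0 or W = 0), reported symbolically
    target = 2 - level
    if target == 0:
        return "infinite family: K = 0 (any W) or W = 0 (any K)"
    return [(K, target // K)
            for K in range(-abs(target), abs(target) + 1)
            if K != 0 and target % K == 0]

print(kw_pairs(0))   # [(-2, -1), (-1, -2), (1, 2), (2, 1)]
print(kw_pairs(1))   # [(-1, -1), (1, 1)]
print(kw_pairs(3))   # [(-1, 1), (1, -1)]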
§.§ Limit from tensile closed twisted string
The twisted parent theory of the flipped tensionless string is already a peculiar theory.
The effect of compactification on such closed bosonic twisted string has been studied in <cit.> and <cit.>. The physical state conditions for the twisted string theory is given below
(ℒ_n-aδ_n,0)|phys⟩=0,
(ℒ_-n-aδ_n,0)|phys⟩=0∀ n>0,
which is seemingly a mixture of lowest weight and highest weight representations. These physical state conditions will lead us to the following level matching condition <cit.> when we compactify the target space on a single circle:
N+N+KW=2.
The level matching condition turns out to be identical to the one we found in our intrinsic analysis in the last section, with tensile oscillators replaced by tensionless ones. From the discussion in <cit.> one can see that when we take the tensionless limit, level 2 tensile twisted string states with finite norm give us the level 2 null physical states of the tensionless twisted string. In a similar vein, for the case of a compactified background we can consider the following level 2 tensile state
|ψ⟩=ξ_μνα^μ_-1α^ν_-1|0,0,k^μ,K,W⟩_A,
where either K=0 or W=0. Under the tensionless limit this state will reduce to the physical states given in (<ref>). This is comparable to what happens in the non-compactified case, and the reader can look at Appendix (<ref>) for details.
However, as we have seen, the new level matching condition dictates that, for suitable values of K and W, we can get a physical state at any level. Since the tensile theory has the same level matching condition, this statement is true for the tensile twisted theory as well.
Moving on, let us consider starting from the following tensile physical state of level 1 instead:
|Φ⟩=l_μα^μ_-1|0,0,k^μ,K,W⟩_A,KW=1.
Using the Bogoliubov relation between {α,α}
and {C,𝒞}, we can rewrite this state as
|Φ⟩=1/2l_μ[(√(ϵ)+1/√(ϵ))C^μ_-1-(√(ϵ)-1/√(ϵ))𝒞^μ_-1]|0⟩_A=a_μ|Φ_1⟩+ϵ a_μ|Φ_2⟩.
where
a_μ=l_μ/2√(ϵ),|Φ_1⟩=C^μ_-1|0⟩_A+𝒞^μ_-1|0⟩_A,|Φ_2⟩=C^μ_-1|0⟩_A-𝒞^μ_-1|0⟩_A.
Clearly, in the strict tensionless limit ϵ→ 0, this state will reduce just to a_μ|Φ_1⟩. Recalling from section (<ref>), we identify this state as a physical state[In section (<ref>) we have seen that a generic level 1 state as given in (<ref>) is a physical state provided a_μ=b_μ (equation (<ref>))]. The combination in the state |Φ_2⟩ is not a physical state combination, as we know from the intrinsic analysis, and from the limiting analysis we see that this state, appearing at subleading order, vanishes in the tensionless limit. Both |Φ_1⟩ and |Φ_2⟩ are null states here, but when we take the tensionless limit of the norm of the total level 1 state ⟨Φ|Φ⟩, it does remain conserved:
lim_ϵ→ 0⟨Φ|Φ⟩=l_μl_ν[cosh^2θ⟨0|C^μ_1C^ν_-1|0⟩+sinh^2θ⟨0|𝒞^μ_1𝒞^ν_-1|0⟩]
=l_μl_νη^μν=l_μl^μ.
The reason for this is that the non-zero part of ⟨Φ|Φ⟩ comes from the cross term ⟨Φ_1|Φ_2⟩. Advancing in the same way, one can easily check that the state
|χ⟩=l_μα^μ_-1|0,k^μ,K,W⟩_A,KW=1
would also give the level 1 null physical state in section (<ref>) under the limit.
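The norm conservation above boils down to cosh^2θ-sinh^2θ=1 for the Bogoliubov coefficients appearing in (<ref>); a one-line sympy check (our own, assuming that ⟨0|𝒞^μ_1𝒞^ν_-1|0⟩ carries a relative minus sign, as the flipped commutators suggest):

from sympy import symbols, sqrt, simplify

eps = symbols("epsilon", positive=True)
ch = (sqrt(eps) + 1/sqrt(eps)) / 2   # cosh(theta)
sh = (sqrt(eps) - 1/sqrt(eps)) / 2   # sinh(theta)
# <CC> = eta while <flipped flipped> = -eta, so the bracket reduces to
print(simplify(ch**2 - sh**2))       # 1, independently of epsilon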
Although we do not provide here the explicit calculation of the tensionless limit of higher level (i.e. level greater than 2) tensile flipped states, we can make some generalised comments. Firstly, it is clear that a tensile physical state of any level will reduce to a tensionless null physical state of the same level. However, the norm of the original state is preserved even in the tensionless limit. Secondly, the internal momentum K and winding number W will also remain intact.
Now, let us recall the mass spectrum of the tensile twisted string, which is given by <cit.>
m^2=K^2/R^2 +1/α'^2W^2R^2+2/α'(N-N).
If we take the limit α'→c'/ϵ, ϵ→0, then the second and third terms in the mass squared expression vanish, much like they did in the case of the Induced vacuum, and we are left with a mass spectrum identical to (<ref>). As we have seen in our intrinsic analysis, (<ref>) happens to be the mass spectrum for all levels, not just for level zero, and hence the tensionless limit of the tensile twisted string theory is consistent with the intrinsically developed tensionless flipped string theory.
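Numerically the approach to this limit is transparent; a small sketch with sample values (all numbers are ours, chosen only for illustration):

# twisted tensile spectrum m^2 = K^2/R^2 + W^2 R^2/alpha'^2 + (2/alpha')(N - Nbar)
cp, R, K, W = 1.0, 2.0, 3, 1
N, Nbar = 5, 2                   # any fixed finite levels
for eps in (1e-1, 1e-3, 1e-6):
    ap = cp / eps                # alpha' = c'/eps
    m2 = K**2 / R**2 + (W * R)**2 / ap**2 + 2 * (N - Nbar) / ap
    print(eps, m2)               # tends to K^2/R^2 = 2.25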
§.§ A brief look at multiple dimensions compactification
Now let us consider this theory on a background with d compactified dimensions; we address it only briefly here. Using (<ref>), (<ref>) and (<ref>), and the redefinition in (<ref>), we obtain an expression for k^I_L,R which is the same as (<ref>), namely
k^I_L,R=1/√(2)(√(c')K^I±1/√(c')W^IR).
L_0 and M_0 in their normal ordered form in terms of k^I_L,R will be
L_0 =N+ N+RK^IW_I=N+ N+∑_i=1^dk_iω^i,
M_0=1/2 (k^I_Lk_IL+k^I_Rk_IR+2k^I_Lk_IR)+c'k^2+N- N+X+Y
=c'K^IK_I+c'k^2+N- N+X+Y
M_0 =c'/R^2∑_i,j=1^dk_ig^ijk_j+c'k^2+N- N+X+Y.
The constraint on the levels of physical states in the flipped vacuum will be
r+s+∑_i=1^dk_iω^i=2,
where as usual r and s respectively denote the eigenvalues of the number operators N and N. The mass of a generic physical state will be given by
m^2|r,s,k^i,ω^i⟩=K^IK_I|r,s,k^i,ω^i⟩=(1/R^2∑_i,j=1^dk_ig^ijk_j)|r,s,k^i,ω^i⟩.
One can similarly show the above formula appears as a consistent limit of the mass operator in toroidal compactifications of the twisted parent theory. Details can be found in Appendix (<ref>).
§.§ Summary
We summarize this section as follows:
* The L_0 physical state condition puts restrictions on the level, the internal momentum K and the winding number W. Unlike the non-compactified case, here we see that the level of physical states is not truncated at two. Instead, for appropriate values of K and W, a state of any level can be physical (section <ref>). At level 2, we get an infinite number of physical states.
* We analyzed the physical states of level 0, level 1 and level 2 and studied the constraints put on them by the non-trivial physical state conditions. We have also calculated the mass spectrum for each level (see equations (<ref>), (<ref>), (<ref>), (<ref>)).
* We took the ultra-relativistic limit of the physical states of the parent tensile theory and found that physical states of level 1 and level 2 in the parent theory reduce respectively to physical states of level 1 and level 2 in the tensionless theory (section <ref>). It is reasonable to speculate that a tensile physical state of any level will reduce to a tensionless physical state of the same level. We also saw that the mass spectrum obtained directly by taking the tensionless limit of the parent theory matches the intrinsically derived mass spectrum.
* We then considered d-dimensional compactification of the theory on a torus T^d and computed the level constraint (<ref>) and mass spectrum (<ref>) for the same.
§ CONCLUSIONS AND FUTURE DIRECTIONS
§.§ Summary of the work
In this paper, we first reviewed classical tensionless closed string theory by studying the action and its symmetries <cit.>. Then we revisited the discussion in <cit.> about the canonical quantization of bosonic tensionless closed string theory. We have discussed all three consistent ways of quantizing tensionless string theory, based on three distinct vacua: the oscillator, Induced, and flipped vacuum. The physical state conditions for all three theories have been analysed. For the oscillator and flipped vacua, the physical state conditions put constraints on the level of the states, a feature absent in the Induced case. In all three theories the physical state conditions give us the mass spectrum.
We then move on to investigate all three tensionless quantum string theories in a target spacetime compactified on a circle S^1. The analysis has been done in a two-pronged approach: both from an intrinsic tensionless theory compactified on a circle, and from taking consistent limits of the respective tensile theories compactified on a circle. The consistency of our theories is established via an explicit matching of results from both approaches. We see that for the oscillator vacuum, the level matching condition gets modified exactly as in the case of tensile string theory. The Induced vacuum, quite expectedly, has no discernible constraints put on the level of states. On the other hand, the constraint on the level of the states in the flipped vacuum case is identical to that in its twisted tensile parent theory.
As for the mass spectrum, we notice that, unlike the tensile theory, none of the tensionless theories with compactification seems to respect T-duality. However, compactification does replace the role of α' with c' in the construction of the zero modes, and there is even a semblance of a self-dual radius at R∼√(c') where extra massless states occur in the oscillator spectrum. The meaning of this symmetry is not clear from the present discussion and needs further investigation. Compactification also introduces interesting new states in the spectra of the other vacua as well. Finally, we have provided a glimpse into the generalisation of our analysis of all three theories to a target space with d dimensions compactified on a torus T^d.
§.§ Future plan
There are several directions that could be pursued in the near future. In tensile string theory, when we add a constant Kalb-Ramond B field to the Polyakov action, any observable effect of it on the spectrum can be realised only in a compactified target spacetime. As a result, the effect of a constant B field is often discussed along with compactification of string theory <cit.>. In a companion work <cit.>, we have considered tensionless strings with a constant B field. Here again, we have seen that compactification is necessary. In this work, we have observed that none of the mass spectra of the three quantum theories respects T-duality as a symmetry. This probably owes to the fact that the definition of T-duality itself involves the tension in such a way that it is not immediately obvious how to define such a transformation in the tensionless limit. Since T-duality maps string theories in two different target spaces, it would be of significance to investigate what happens to the T-dual theory under the tensionless limit. The partition function of the tensile string in a compactified background is <cit.>
Z(τ,τ)=1/|η(τ)^2|∑_K,W∈ℤq^1/4(√(α')K/R-RW/√(α'))^2q^1/4(√(α')K/R+RW/√(α'))^2
This partition function turns out to be invariant under the transformation R→α'/R. That means T-duality does not manifest itself merely in the mass spectrum of tensile string theory but also in the partition function. Hence an obvious future direction for us would be to calculate the partition functions of all three tensionless theories in order to see whether they also violate T-duality, or even become manifestly invariant under some alternative symmetry.
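The invariance is visible term by term in the lattice sum once K and W are also interchanged, which the sum over all integers absorbs; a short sympy sketch of this check (our own):

from sympy import symbols, sqrt, simplify

K, W, R, ap = symbols("K W R alphaprime", positive=True)

def exponents(K, W, R):
    # exponents of q and qbar in the lattice sum of the partition function
    return ((sqrt(ap) * K / R - R * W / sqrt(ap))**2 / 4,
            (sqrt(ap) * K / R + R * W / sqrt(ap))**2 / 4)

orig = exponents(K, W, R)
dual = exponents(W, K, ap / R)   # R -> alpha'/R together with K <-> W
print([simplify(a - b) for a, b in zip(orig, dual)])   # [0, 0]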
Even without an explicit realization of T-duality in the mass formula, we do see a number of extra massless states occurring at the special point R∼√(c'), at least for the oscillator vacuum spectrum. Remember that for compact tensile strings, from the 25-dimensional perspective, one would have two massless Kaluza-Klein gauge fields transforming in the U(1)_L×U(1)_R group, and one extra massless scalar. At the self-dual radius R=√(α'), this gauge symmetry was enhanced to SU(2)_L×SU(2)_R and a plethora of new massless states emerged. It remains an open question whether such a symmetry enhancement happens in the tensionless case as well. To answer this, one must better understand the vertex operator structure associated to the current algebra of tensionless string theory. Since the worldsheet here is Carrollian, one can hope that the Non-Lorentzian Kac-Moody algebras discussed, for example, in <cit.> may be of help in this endeavour.
For the Induced vacuum, there are still more interesting regimes one needs to explore. For example, in <cit.> it was shown that the Induced vacuum state, upon which all the perturbative tensile states condense, can be written as a Neumann boundary state along all directions. Since these states are equivalent to space-filling D-branes, one could show open string degrees of freedom appearing from closed string ones in a Bose-Einstein-like condensation setting. We haven't really touched on the analogous phenomena in the current manuscript, and it will be interesting to see how the physics changes with one (or more) directions compactified. Since such a phase transition can be directly linked with the Hagedorn transition, it remains to be seen how the extra compactified direction sets the corresponding temperature scale.
For the flipped vacuum, since the underlying representation is highest weight BMS_3, one hopes to use well known BMSFT techniques to understand more about the symmetries of the theory. In the compactified case, intriguingly we do not have a truncated spectrum anymore, which makes the theory much nicer to play with. A point to note is that we do not see any symmetry enhancement points in the mass spectrum for this theory, whereas the parent twisted tensile theory was claimed to have an infinite number of those points <cit.>. This surely requires more scrutiny in the future.
We also wish to generalise our analysis to tensionless superstring theories. The underlying supersymmetric algebra on the worldsheet of the closed tensionless superstring could possibly be the Homogeneous <cit.> or the Inhomogeneous Super BMS (SBMS_H/I) <cit.>. These two algebras come about from two different Inönü-Wigner contractions of two copies of the super-Virasoro algebra, and are significantly different from each other at both the classical and quantum level. A proper classification of vacuum structures for supersymmetric tensionless strings is still missing from the literature. It would be interesting to compile a full classification of those for both non-compactified and compactified theories. We hope to come back to this problem soon.
§ ACKNOWLEDGEMENTS
The authors are indebted to Arjun Bagchi and Shailesh Lal for numerous illuminating discussions and constructive comments on the draft.
It is also a pleasure to thank Sudipta Dutta, Kedar Kolekar, Mangesh Mandlik, Punit Sharma and Mirian Tsulaia for interesting discussions and useful comments. ArB is supported by the Quantum Gravity Unit of the Okinawa Institute of Science and Technology (OIST). RC is supported by the CSIR grant File No: 09/092(0991)/2018-EMR-I. PP would like to acknowledge the support provided by SPO/SERB/PHY/2019525.
§ LIGHT-CONE QUANTIZATION
Let us apply the light-cone gauge on X^+, where X^± are defined as
X^±=1/√(2)(X^0± X^D-1).
The light-cone gauge on X^+ is
X^+=x^++c'k^+τ.
This gauge implies
C^+_n=C^+_n=0∀ n≠ 0.
Using the equations of motion (<ref>), we can express C^- in terms of the transverse coordinates X^i (i∈{1,2,...,D-2}) as follows
C_m^-=1/8C_0^+∑_n:(C_m-n^i+C̃_-(m-n)^i)(3C_n^i-C_-n^i):
C̃_m^-=1/8C_0^+∑_n:(C_-(m-n)^i+C̃_m-n^i)(3C_n^i-C_-n^i):.
Applying this light-cone gauge to L_0 and M_0, we can rewrite them in terms of transverse coordinates only. Their final expressions after applying the gauge choice are the same as (<ref>); however, now 𝒩, 𝒩 and X are expressed just in terms of transverse oscillators
𝒩=∑_i=1^D-2∑_m>0C_-m^iC_m^i,𝒩=∑_i=1^D-2∑_m>0C^i_-mC^i_m,X=∑_i=1^D-2∑_m>0C^i_mC^i_m.
Now let us consider the following state of level 2 (n=1):
C^i_-1C^j_-1|0,k^μ⟩.
Using an argument similar to <cit.>, it can be proved that these states have to be massless in order to make the theory Lorentz symmetric[In <cit.>, it has been argued using Wigner's classification that tensile bosonic string theory can respect Lorentz invariance iff the first excited states α^i_-1α^j_-1|0,k^μ⟩ are massless. The conclusion was that the normal ordering constant of L_0 is a=1.]. Let us consider massive particles in ℝ^1,D-1. In the rest frame of a massive particle, the momentum becomes k^μ=(m,0), and it is symmetric under the Wigner little group of SO(D-1) spatial rotations. Hence, any massive particle in ℝ^1,D-1 essentially forms an SO(D-1) representation. However, from (<ref>), we see that there can be only (D-2)^2 states at level 2, and therefore these states cannot be fit into any representation of SO(D-1). The only way to resolve this inconsistency is to demand that the level two states in (<ref>) are massless, and as a consequence there is no rest frame. For a massless particle, we can choose a frame where the momentum of the particle is k^μ=(k,0⋯0,k), which is symmetric under the Wigner little group SO(D-2). There is then no problem in fitting the states in (<ref>) into an SO(D-2) representation.
Equation (<ref>) dictates that setting the mass of the states |1,1⟩ to zero essentially means a_M=2. So, the mass spectrum of tensionless string theory built on the oscillator vacuum is
m^2|n,n⟩=1/c'(2n-2)|n,n⟩.
Both a_L and a_M can also be calculated directly from the expressions of L_0 and M_0 using the commutators (<ref>). It is not hard to see that
1/2∑_i=1^D-2∑_m=-∞^∞C_-m^iC_m^i=1/2∑_i=1^D-2∑_m=-∞^∞:C_-m^iC_m^i:+D-2/2∑_m=1^∞m,
and the same for the right moving oscillators C. Famously, using ζ-function regularisation one can write
∑_m=1^∞m=-1/12.
Using this in the expression of (<ref>), one can see that
𝒩=:𝒩:-D-2/24 and 𝒩=:𝒩:-D-2/24.
Armed with these, we arrive at the following
L_0=𝒩-𝒩=:𝒩:-:𝒩:=:L_0:
M_0=c' k^2+𝒩+𝒩+X+X^†=:M_0:-D-2/12.
Here we have used X=:X: since C and C commute with each other. From (<ref>) we find the following values of a_L and a_M:
a_L=0, a_M=D-2/12.
Since we already know that a_M=2, we can see that the only dimension where this quantum theory can make sense is D=26. Hence this is the critical dimension for the oscillator vacuum tensionless string theory.
Of course, this approach to calculating the critical dimension is rather heuristic; the rigorous way is to demand the closure of the background Lorentz algebra, which has been done in <cit.>.
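The two regularisation inputs used above are easy to reproduce; a two-line sympy sketch (our own):

from sympy import zeta, symbols, solve, Rational

print(zeta(-1))   # -1/12, the regularised value of sum_{m>=1} m
D = symbols("D")
print(solve(Rational(1, 12) * (D - 2) - 2, D))   # [26]: a_M = (D-2)/12 = 2 fixes D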
§ THE FATE OF TENSILE PERTURBATIVE STATES AT TENSIONLESS LIMIT
Following <cit.>, we will show here that under the tensionless limit the perturbative states in a non-compactified background condense onto the Induced vacuum. Let us consider a perturbative state in the tensile string theory
|Ψ⟩=ξ_μνα^μ_-nα^ν_-n|0⟩_α,
where ξ_μν is an arbitrary polarisation tensor. The state |0⟩_α evolves under the tensionless limit (T→ϵ T) as follows
|0⟩_α=|0⟩_I+ϵ|I_1⟩+ϵ^2|I_2⟩+⋯
The conditions defining the tensile vacuum are
α_n|0⟩_α=α_n|0⟩_α=0∀ n>0.
Now, the modes A_n and B_n are related to α_n and α_n as
α_n=1/2[√(ϵ) A_n+1/√(ϵ)B_n],α_n=1/2[-√(ϵ) A_-n+1/√(ϵ)B_-n].
Hence, the conditions in (<ref>), under the tensionless limit, will evolve to the order by order actions mentioned in (<ref>).
Hence the state |Ψ⟩ given in (<ref>) will, in the tensionless limit, emerge as the following state
|Ψ⟩=1/ϵ(B_-n+ϵ A_-n)(B_n-ϵ A_n)(|0⟩_I+ϵ|I_1⟩+ϵ^2|I_2⟩+⋯).
Recalling the algebra satisfied by the A's and B's as given in (<ref>), we see that |Ψ⟩ as given in (<ref>) reduces to the following
|Ψ⟩→Ξ|0⟩_I,Ξ=2nη^μνξ_μν.
Hence, in a non-compactified background spacetime, any perturbative state will condense on the Induced vacuum.
§.§ Non-perturbative States
The Induced vacuum belongs to the Induced representation of the BMS algebra. As discussed in <cit.> and <cit.>, in the Induced representation of the BMS algebra we can non-perturbatively define states on any state |M,s⟩ with a well-defined norm. One such state in the BMS Induced representation is given by
|ϕ⟩=exp(i∑_nω_nL_n)|M,s⟩,
where ω_n are complex numbers satisfying ω^*_n=ω_-n. It can be seen that this state does satisfy the physical state conditions (<ref>). The nature of such non-perturbative states, however, is yet to be determined.
§ PHYSICAL STATES OF FLIPPED VACUUM
With the value of a_L=2, we can rewrite the L_0 physical state condition in (<ref>) as
(N+N-2)|phys⟩=0.
The implication is that any physical state must be of level 2. A generic state of level 2 in the tensionless theory is given by
|2⟩ =a_μC^μ_-2|0⟩_A+e_μνC^μ_-1C^ν_-1|0⟩_A+h_μνC^μ_-1𝒞^ν_-1|0⟩_A
+b_μ𝒞^μ_-2|0⟩_A+f_μν𝒞^μ_-1𝒞^ν_-1|0⟩_A+j_μνC^μ_-1𝒞^ν_-1|0⟩_A.
where we have assumed h_μν to be symmetric and j_μν to be antisymmetric. Applying the physical state conditions (<ref>) on (<ref>), one can put constraints on the coefficients in (<ref>). For states with level 2, the conditions in (<ref>) with n≥3 will be trivially satisfied and hence, excluding the L_0 condition (which has already been used), we are left with only five non-trivial physical state conditions, namely the M_0, L_1, M_1, L_2 and M_2 conditions. The M_0 condition gives
M_0|2⟩ =[(c'k^2+2)a_μ-2b_μ]C^μ_-2|0⟩_A+[(c'k^2-2)b_μ+2a_μ]𝒞^μ_-2|0⟩_A
+[(c'k^2+2)e_μν-h_μν]C^μ_-1C^ν_-1|0⟩_A+[(c'k^2-2)f_μν+h_μν]𝒞^μ_-1𝒞^ν_-1|0⟩_A
+[c'k^2(h_μν+j_μν)+2(e_μν-f_μν)]C^μ_-1𝒞^ν_-1|0⟩_A=0.
The L_1, M_1, L_2 and M_2 conditions respectively give
L_1|2⟩ =[2a_ν+c'(2e_μν+h_μν-j_μν)k^μ]C^ν_-1|0⟩_A
+[2b_ν+c'(2f_μν+h_μν+j_μν)k^μ]𝒞^ν_-1|0⟩_A=0,
M_1|2⟩ =2[(a_ν-b_ν)+c'(2e_μν-h_νμ+j_μν)k^μ] C^ν_-1|0⟩_A
+2[(a_ν-b_ν)-c'(2f_μν-h_μν-j_μν)k^μ]𝒞^ν_-1|0⟩_A=0,
L_2|2⟩ =[2c'k· (a+b)+(e^μ_μ-f^μ_μ)]|0⟩_A=0,
M_2|2⟩ =[4c'k· (a-b)+(e^μ_μ+f^μ_μ)-h^μ_μ]|0⟩_A=0.
Solving the M_0 condition in (<ref>) gives us a_μ=b_μ, and also that k^2=0, implying that the state has to be massless. The equations in (<ref>) lead us to the following constraints on the coefficients
e_μν=f_μν=1/2h_μν,e_μνk^μ=j_μνk^μ=0,a_μ=0.
Hence the resulting level 2 physical state becomes
|2⟩=e_μν[C^μ_-1C^ν_-1|0⟩_A+2C^μ_-1𝒞^ν_-1|0⟩_A+𝒞^μ_-1𝒞^ν_-1|0⟩_A]+j_μνC^μ_-1𝒞^ν_-1|0⟩_A.
The norm of this state vanishes. As discussed in <cit.>, these states are GCA null states having weights Δ=2, ξ=0.
Given the fact that this theory comes from a direct tensionless limit of twisted string theory, the emergence of null physical states in the tensionless limit might sound bizarre. However, it has been shown in <cit.> that positive-norm physical states of the tensile twisted string become null in the tensionless limit. In order to understand this, let us consider a state in the tensile twisted theory
|Ψ⟩=ξ_μνα^μ_-1α^ν_-1|0⟩_A.
Using the Bogoliubov relation between the α and the C oscillators, we can determine the tensionless limit of this state
lim_ϵ→ 0|Ψ⟩=ξ_μν[coshθC^μ_-1-sinhθ𝒞^μ_-1][sinhθC^ν_-1-coshθ𝒞^ν_-1]|0⟩_A=|ϕ_1⟩+|ϕ_2⟩+|ϕ_3⟩,
where
coshθ=1/2(√(ϵ)+1/√(ϵ)), sinhθ=1/2(√(ϵ)-1/√(ϵ)),
and
|ϕ_1⟩ =ϵ/2√(2)ξ_μν[C^μ_-1C^ν_-1-2C^μ_-1𝒞^ν_-1+𝒞^μ_-1𝒞^ν_-1]|0⟩_A
|ϕ_2⟩ =1/2ϵ√(2)ξ_μν[C^μ_-1C^ν_-1+2C^μ_-1𝒞^ν_-1+𝒞^μ_-1𝒞^ν_-1]|0⟩_A
|ϕ_3⟩ =ξ_μν[C^μ_-1𝒞^ν_-1-C^ν_-1𝒞^μ_-1]|0⟩_A.
As pointed out in <cit.>, the norms of all three states in (<ref>) vanish. However, the norm of |Ψ⟩ still remains intact, since the non-zero part of the norm actually comes from the cross term ⟨ϕ_1|ϕ_2⟩. The norm of |Ψ⟩ is found to be
⟨Ψ|Ψ⟩=ξ^μνξ_μν.
Looking at (<ref>) and (<ref>), one can notice that the combination given in |ϕ_1⟩ is not a physical state combination when we look at the theory intrinsically. From the limiting perspective, it is the 𝒪(ϵ) term and hence vanishes in the ϵ→ 0 limit.
§ MULTIPLE DIMENSIONS COMPACTIFICATION OF TWISTED TENSILE STRING
In this section we consider the tensile parent theory of ambitwistor string theory in a target space with d dimensions compactified on a torus T^d. Classically this theory has the same Polyakov action
S=T/2∫ dτ dσ√(-g)g^αβ∂_αX^μ∂_βX_μ.
The solution of the equation of motion of this theory in the conformal gauge is expanded as in (<ref>). We rewrite it in terms of left and right modes, X^μ=X^μ_L+X^μ_R, as
X^μ_L=x^μ+√(α'/2) α^μ_0(τ+σ)+i√(α'/2)∑_n≠ 01/nα^μ_ne^-in(τ+σ),
X^μ_R=x^μ+√(α'/2) α^μ_0(τ-σ)+i√(α'/2)∑_n≠ 01/nα^μ_ne^-in(τ-σ),
where {α,α} satisfy the harmonic oscillator algebra. The physical state condition for this theory is
(ℒ_n-aδ_n,0)|phys⟩=(ℒ_-n-aδ_n,0)|phys⟩=0∀ n≥ 0 ,
where {a,a} are normal ordering constants. The vacuum in this theory is defined as below
α_n|0⟩=α_-n|0⟩=0,∀ n>0.
The number operators in this theory are defined as
N=∑_n=1^∞:α_-n·α_n:and N=∑_n=1^∞:α_-n·α_n:.
Now, let us recall from section (<ref>) that for compactification on a d dimensional torus we need to make the following identification for the compactified coordinates
X^I∼ X^I+2π R W^I, I∈{26-d,⋯,25},
with W^I defined in (<ref>). In order to ensure that e^iX^IK_I is single-valued, we need to impose equation (<ref>) on the momentum components in the compactified directions. Following section (<ref>), here too we define the dimensionless fields Y^I as
X^I=√(α'/2)Y^I,
where the mode expansion of Y^I (splitting into left and right part) is given by
Y^I_L =y^I_L+k^I_L(τ+σ)+i∑_n≠ 01/nα^μ_ne^-in(τ+σ),
Y^I_R =y^I_R+k^I_R(τ-σ)+i∑_n≠ 01/nα^μ_ne^-in(τ-σ).
Here k^I_L,R are given by
k^I_L,R=1/√(2)(√(α')K^I±1/√(α')W^IR).
Now, ℒ_0 and ℒ_0 can be expressed in terms of the number operators as
ℒ_0 =α'/4∑_I=1^d(K^I+1/α'W^IR)^2+α'/4∑_μ=0^25-dk_μk^μ+N,
ℒ_0 =α'/4∑_I=1^d(K^I-1/α'W^IR)^2+α'/4∑_μ=0^25-dk_μk^μ-N.
Using the ℒ_0 and ℒ_0 conditions in (<ref>) with a=-a=1, we have the following constraints on physical states of level (r,s), where r and s are the eigenvalues of the number operators N and N respectively.
r+s+α'/4∑_I=1^d(K^I+1/α'W^IR)^2-α'/4∑_I=1^d(K^I-1/α'W^IR)^2-2=0,
r- s+α'/4∑_I=1^d(K^I+1/α'W^IR)^2+α'/4∑_I=1^d(K^I-1/α'W^IR)^2-α'/2m^2=0.
In the above, we have used m^2=-∑_μ=0^25-dk_μk^μ. Using (<ref>) and (<ref>) on (<ref>) we get
r+s+∑_i=1^dk_iω^i=2, m^2=1/R^2∑_i,j=1^dk_ig^ijk_j+R^2/α'^2∑_i,j=1^dω^ig_ijω^j+2/α'(r-s).
Just like the mass spectrum in tensile string theory, this mass spectrum too is invariant under the T-duality transformation
k^i⟷ω_i, R→α'/R.
Under the tensionless limit (α'=c'/ϵ, ϵ→ 0), this mass spectrum reduces to the mass spectrum for the twisted string theory derived in (<ref>).
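The claimed T-duality invariance can be checked directly; a sympy sketch for d=1 with g_11=1 (our own, using the simultaneous substitution k↔ω, R→α'/R):

from sympy import symbols, simplify

k, w, R, ap, r, s = symbols("k w R alphaprime r s", positive=True)
m2 = k**2 / R**2 + R**2 * w**2 / ap**2 + 2 * (r - s) / ap
m2_dual = m2.subs({k: w, w: k, R: ap / R}, simultaneous=True)
print(simplify(m2 - m2_dual))   # 0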
http://arxiv.org/abs/2307.01892v1 | 2023-07-04 | quant-ph
Systematic Computation of Braid Generator Matrix in Topological Quantum Computing
Abdellah Tounsi, Nacer Eddine Belaloui, Mohamed Messaoud Louamri, Amani Mimoun, Achour Benslama, Mohamed Taha Rouabah
Affiliations: Constantine Quantum Technologies, Frères Mentouri University Constantine 1, Ain El Bey Road, Constantine, 25017, Algeria; Laboratoire de Physique Mathématique et Subatomique, Frères Mentouri University Constantine 1, Ain El Bey Road, Constantine, 25017, Algeria; Theoretical Physics Laboratory, University of Science and Technology Houari Boumediene, BP 32 Bab Ezzouar, Algiers, 16111, Algeria.
Corresponding authors: abdellah.tounsi@umc.edu.dz, a.benslama@umc.edu.dz, m.taha.rouabah@umc.edu.dz
We present a systematic numerical method to compute the elementary braiding operations for topological quantum computation (TQC). Braiding non-Abelian anyons is a crucial technique in TQC, offering a topologically protected implementation of quantum gates. However, obtaining matrix representations for braid generators can be challenging, especially for systems with numerous anyons or complex fusion patterns. Our proposed method addresses this challenge, allowing for the inclusion of an arbitrary number of anyons per qubit or qudit. This approach serves as a fundamental component in a general topological quantum circuit simulator, facilitating the exploration and analysis of intricate quantum circuits within the TQC framework. We have implemented and tested the method using algebraic conditions. Furthermore, we provide a proof of concept by successfully reproducing the CNOT gate.
§ INTRODUCTION
A quantum computer leverages the fundamental principles of quantum mechanics to solve a class of computationally hard mathematical problems <cit.>.
Instead of using classical Boolean bits, a quantum computer utilizes qubits, which are quantum states capable of being in superposition and entangled with each other. Quantum gates are unitary operators that act on the Hilbert space generated by qubits, thus allowing the manipulation of quantum information <cit.>.
Experimental realizations of quantum computers are vulnerable to errors that arise from decoherence, which is the result of the interaction of qubits with their environment <cit.>.
This phenomenon presents a significant challenge to the practical implementation of quantum computers. To address this issue, various approaches are under investigation, including the isolation of qubits from environmental disturbances and the use of cooling systems to minimize the effect of decoherence <cit.>, as well as quantum error mitigation techniques <cit.>. However, to fully harness the advantages of quantum computing, fault-tolerant quantum error correction (QEC) is necessary. Nevertheless, the practical application of QEC is limited by the threshold theorem, which requires a sufficiently low error rate <cit.>.
Furthermore, the field of condensed matter physics has witnessed various experimental and theoretical discoveries in recent decades, revealing intriguing characteristics of topological phases of matter <cit.>.
Hence, topological quantum systems are emerging as a promising platform to store and process quantum information in a robust manner, through quantum evolutions that are immune to decoherence <cit.>.
Topological quantum computation (TQC) deals with two-dimensional quantum systems that support excitations with fractional statistics, known as anyons <cit.>. These anyons exhibit statistics that differ from those of fermions and bosons. A system of N non-abelian anyons possesses a topologically protected Hilbert space that grows exponentially with the number of anyons, and quantum information can be processed through the braiding of anyons <cit.>. Non-abelian anyons promise to offer a fault-tolerant method of performing universal quantum computation, as information is stored in the topological, i.e., non-local features of the system, making the quantum state resilient, in principle, to conventional sources of decoherence <cit.>.
The execution of topological quantum computation can be divided into three main steps:
* Initialization: pairs of non-abelian anyons are created from the vacuum. However, the creation of pairs is vulnerable to noise and not naturally robust. The distillation technique, which relies purely on braiding, can be employed to establish a suitable code space for initialization <cit.>.
* Processing: the anyons are moved in 2-dimensional space to form braids in a 2+1-dimensional space-time, which corresponds to quantum unitary gates performed on the anyons' Hilbert space. The anyons must be kept sufficiently far apart to minimize errors. Nonetheless, a measurement-only scheme that does not require any exchange of anyons has also been proposed <cit.>.
* Readout: the adjacent anyon pairs are fused together to close the computation process, which can be accomplished through the non-abelian anyons interferometry <cit.>.
This study aims to provide a comprehensive procedure for computing the matrix elements of the braiding operators, with a specific focus on applying the foundational principles of anyon model theory in quantum computing. The methodology for constructing qubits by selecting suitable fusion states of a set of identical anyons will be described in detail. Additionally, a generalized formula for systematic calculation of the matrix components of the braiding operations will be derived. The computational complexity of this formula will also be analyzed using concrete examples from the Fibonacci anyon model. Furthermore, the method will be utilized to reproduce the well-known CNOT gate for the same model. It is worth noting that the results of this study have been implemented to develop an open-source package called TQSim <cit.>.
§ QUANTUM COMPUTATION WITH ANYONS
Particles exhibit a unique statistical behavior when existing in two-dimensional space. Unlike in three-dimensional space, where particles behave as either bosons or fermions, the statistical behavior of indistinguishable particles in two-dimensional space is not restricted to these two categories. The exchange of indistinguishable particles in three dimensions is governed by the permutation group, where the paths of exchanging two particles are irrelevant. Namely, the exchange results in a scalar phase factor e^iϕ, with ϕ = 0 for bosons and ϕ = π for fermions.
However, in two dimensions, the path of exchanging two particles falls into different homotopy classes, thereby allowing for particles of any statistical behavior called anyons <cit.>. The statistical quantum evolution of anyons is described by the braid group, which is significant in TQC as the braiding operation is impervious to dynamical and geometrical perturbations.
Two types of anyons can be distinguished: abelian and non-abelian anyons. Abelian anyons acquire an abelian phase factor under exchange and give rise to a non-degenerate state, whereas non-abelian anyons acquire a non-abelian (matrix-valued) phase factor and construct a degenerate quantum state <cit.>. This degeneracy is crucial for encoding non-trivial quantum states.
In the context of TQC, quantum information is encoded in the topology of multiple anyons' state, rather than being encoded in the dynamical properties of a system, which can be easily altered by external perturbations <cit.>. As a result, the only operations that modify the quantum states implemented in TQC are non-local operations, which consist of braiding the world lines of anyonic particles. This approach offers robustness in the implementation of quantum computation, as the movement of particles around each other in an undesired way is less likely to occur when the particles are far apart <cit.>. Therefore, anyons present an opportunity to perform reliable quantum computation without the need for extensive error correction overhead. Additionally, they provide a platform for constructing new quantum algorithms or reformulating existing algorithms in the TQC framework. One such breakthrough is the Aharonov-Jones-Landau (AJL) algorithm, which can approximate the Jones polynomials <cit.>, known to be a BQP-complete problem <cit.>.
Practically, anyons can exist as quasi-particles in effective two-dimensional surfaces. The creation of anyons can be accomplished experimentally by exploiting topological superconducting nanowire devices <cit.> or the fractional quantum Hall effect (FQHE) <cit.>. It has been confirmed that the quasi-particles produced by the latter exhibit anyonic statistics <cit.>.
Moreover, lattice models such as Kitaev toric code <cit.>, Kitaev honeycomb model <cit.>, and Levin-Wen model <cit.> are among the other potential candidates for the emergence of non-abelian anyons, a crucial element for the construction of topological quantum computer.
Anyon models serve as the frameworks for TQC, and the appropriate mathematical language to discuss them is the Unitary Modular Category (UMC) theory, in which the Jones-Wenzl projectors are utilized to model anyons <cit.>. Abelian anyons can be classified based on their braiding statistics, which measure how they interact when they are braided around one another. Non-abelian anyons, on the other hand, can be further classified by their fusion statistics, which measure how they combine to form new anyons.
Consequently, the fusion statistics of non-Abelian anyons can be utilized for encoding information, and it becomes possible to implement a universal set of quantum gates by braiding them around one another. Specifically, the SU(2)_k anyon models for k>2 have been shown to be Turing-complete or universal <cit.>.
The rules for combining two anyons into a larger composite are called fusion rules. For non-Abelian anyons different fusion channels are possible, which determine the fusion rules between different types of anyons denoted as a and b, each associated with distinct charges. The allowed fusion outcomes are determined by the following rule:
a × b = ∑_i N_ab^i i .
The value of the integer N_ab^i refers to the count of distinct ways to generate the anyon i by fusing together the anyons a and b. The vacuum, represented as 1, fuses trivially with any other anyon, such that a × 1 = a for all anyons a. Consequently, an anti-anyon a̅ can be defined for each anyon a, where a × a̅ = 1.
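To make the bookkeeping concrete, the multiplicities N_ab^i of a multiplicity-free model can be stored as a plain lookup table. The following Python sketch uses the standard Fibonacci model (a single nontrivial charge τ and the vacuum 1, discussed further below) as an assumed running example; the table and the helper N are illustrative, not part of any library.

```python
# Fusion rules of the Fibonacci model (multiplicity-free), stored as a table.
FUSION = {
    ('1', '1'):     ['1'],
    ('1', 'tau'):   ['tau'],
    ('tau', '1'):   ['tau'],
    ('tau', 'tau'): ['1', 'tau'],   # tau x tau = 1 + tau
}

def N(a, b, i):
    """Fusion multiplicity N_ab^i: 1 if i is an allowed outcome of a x b."""
    return int(i in FUSION[(a, b)])

assert N('tau', 'tau', '1') == 1    # tau is its own anti-anyon
assert N('tau', '1', '1') == 0      # tau x 1 = tau, not 1
```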
The fusion processes comprise a collection of states that are orthogonal to each other, spanning a Hilbert space denoted as ℱ and referred to as the fusion space. Each state in ℱ specifies the outcomes of the fusion processes of multiple anyons following a specific order.
In the context of anyon models, the quantum information is encoded in the feasible fusion states of the anyons and manipulated by the exchange of anyons, a process known as braiding.
Importantly, the fusion state of a pair of anyons is a non-local collective property of that pair which is not accessible by local observations on either anyon.
As such, the quantum information must be resilient to local perturbations.
Let us consider the fusion space of four anyons described by the following fusion states:
|(((a, b)_i,c)_j,d)_k⟩ ,
where the indexed parenthesis represents the fusion process. In this example, the fusion order is from left to right. This notation is interchangeable with the diagrammatic fusion tree representation, with the exception of a normalization factor <cit.>. The state (<ref>) can also be represented as a tensor product of all associated fusion processes:
|(((a, b)_i,c)_j,d)_k⟩ = |(a,b)_i⟩⊗|(i, c)_j⟩⊗|(j, d)_k⟩ .
Thus, the dimension of the fusion space, ℱ_4, of four-anyon system can be expressed as the summation of all possible ways of fusing anyons a, b, c and d through the formula
(ℱ_4) = ∑_ijk N_ab^i N_ic^j N_jd^k ,
where N_ab^i represents the number of distinct fusion channels of anyons a and b resulting in the anyon i. In the subsequent sections, only multiplicity-free anyon models, which satisfy N_ab^i = 0 or 1 for all a, b, and i, will be considered. It is worth noting that most anyon models encountered in physical contexts fall under this case.
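The sum in the dimension formula above can be evaluated with a single left-to-right pass that tracks how many fusion paths reach each intermediate charge. A minimal sketch for Fibonacci anyons (the fusion table is repeated so the snippet stands alone):

```python
FUSION = {('1', '1'): ['1'], ('1', 'tau'): ['tau'],
          ('tau', '1'): ['tau'], ('tau', 'tau'): ['1', 'tau']}

def fusion_space_dims(anyons):
    """Number of fusion paths per overall outcome, fusing left to right."""
    counts = {anyons[0]: 1}
    for a in anyons[1:]:
        new = {}
        for charge, n_paths in counts.items():
            for out in FUSION[(charge, a)]:
                new[out] = new.get(out, 0) + n_paths   # sum over N (0 or 1)
        counts = new
    return counts

print(fusion_space_dims(['tau'] * 4))   # {'1': 2, 'tau': 3}
```

For a growing number of Fibonacci anyons these two counts are consecutive Fibonacci numbers, which is precisely the exponential growth of the protected Hilbert space mentioned in the introduction.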
The fusion of more than two anyons is not uniquely determined by any specific order of pairwise fusion processes due to the variability of intermediate quantum states that result from different fusion process orders. To account for the various possible ways of fusing more than two anyons, it is necessary to introduce fusion matrices, which enable basis transformations. Specifically, the fusion matrices F_abc^j are defined as linear transformations that relate the only two ways of ordering the fusion of anyons a, b, and c to the resulting anyon j, as follows:
|((a, b)_i,c)_j⟩ = ∑_k (F_a b c^j)_k^i |(a, (b,c)_k)_j⟩.
The index k corresponds to the intermediate charges that are summed over, and the coefficients (F_abc^j)_k^i determine the amplitudes of the possible fusion outcomes. It is essential for these fusion matrices to be unitary to maintain the normalization of fusion states <cit.>.
On the other hand, braiding two anyons that fuse in a particular channel cannot change the fusion channel. This is because the total topological charge of the pair is not a local property, and it does not depend on the evolution of the pair.
A counterclockwise exchange of particles a and b can be viewed as a half twist of their fusion outcome i; that is, the exchange of particles a and b is equivalent to rotating the particle i by 180 degrees. The phase factor R^i_ab is a property of the anyons a, b, and i, and it measures how the anyons interact when they are braided around each other.
The value of the phase factor R^i_ab depends on the type of anyons that are being braided. Therefore, the braiding matrix R_ab is defined by its operation on the state |(a,b)_i⟩ as follows <cit.>:
R_ab|(a,b)_i⟩ = R_ab^i|(b,a)_i⟩.
To ensure consistency in an anyon model, the F and R matrices should satisfy the hexagon and pentagon equations, as demonstrated in any braided monoidal category <cit.>.
Furthermore, by solving the hexagon and pentagon identities, one can determine the numerical values of the components of the F and R matrices <cit.>. Another method of determining the components of those matrices for an SU(2)_k anyon model with an arbitrary integer k, representing the level of the theory, is mentioned in <cit.>. The algebraic theory of such anyon systems is described by the Temperley-Lieb-Jones category TLJ(A) for a fixed value of A =± i exp(± iπ/2(k+2)), and the associated topological quantum field theory is known as the Jones-Kauffman theory at level k <cit.>.
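For concreteness, in the Fibonacci model the pentagon and hexagon equations fix the nontrivial F- and R-data up to gauge and chirality; in one common convention, written in the intermediate-charge basis {1, τ}, they are:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2                 # golden ratio
# Nontrivial fusion matrix F^tau_{tau tau tau} (one gauge choice):
F = np.array([[1/phi,            1/np.sqrt(phi)],
              [1/np.sqrt(phi),  -1/phi]])
# Diagonal R-matrix R_{tau tau} (one of the two chiral solutions):
R = np.diag([np.exp(4j * np.pi / 5),       # R^1_{tau tau}
             np.exp(-3j * np.pi / 5)])     # R^tau_{tau tau}

assert np.allclose(F @ F, np.eye(2))       # this F is real, symmetric, unitary
```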
The process of exchanging two adjacent anyons is known as braiding, which is a fundamental operation in anyon models. In particular, the braiding operation between the n-th and (n+1)-th particles is referred to as the σ_n braiding operator. The braid group is generated by the set of all possible braiding operators between adjacent anyons. These operators satisfy Artin relations, which include the Yang-Baxter equation or the type III Reidemeister move <cit.>. These relations ensure the algebraic consistency of anyon models <cit.>. In addition to the hexagon and pentagon equations, Artin relations are practically useful in the implementation of unit-testing [Unit-testing involves evaluating each unit of the program separately to identify potential errors that could propagate to other parts of the program at an early stage <cit.>.] for numerical packages used to compute braid generators. The verification process will be explained further in the next sections.
The explicit matrix representation of a braid generator can be obtained by choosing a fusion space basis according to the fusion process and then applying the relevant F and R transformations <cit.>.
In general, there are two schemes to encode quantum information in the fusion states of anyons. In the dense encoding scheme, a minimal number of anyons is utilized. This scheme, which is discussed in Ref. <cit.>, enables the systematic construction of two-qubit and three-qubit controlled phase gates. In the sparse encoding scheme, we represent each qubit or qudit with a definite number of anyons. Even though such construction does not allow for optimal use of the fusion space, it preserves the circuit model of quantum computation. In this study, we follow the latter.
In the following section, we will explicitly demonstrate how to compute the matrix representations of braid generators acting on the fusion state of identical anyons grouped in sets of a fixed number of anyons. Each set in our system consists of four anyons, since it is not recommended to represent a qubit with more than four anyons because of leakage, as proven for all anyon models in Ref. <cit.>. Nevertheless, the method can be extended systematically to an arbitrary number of anyons per set. It is important to note that the fusion space of four anyons does not necessarily have the dimension of a qubit. Instead, it generally corresponds to a qudit.
§ CALCULATING BRAID GENERATORS
This study is focused on examining the matrix representation of braid generators that operate on a state |ψ⟩ comprised of identical anyons arranged in groups of four anyons, with each group representing a single qubit/qudit. The diagrammatic representation of a typical state of this kind is depicted in Fig. <ref>. For simplicity, we can do all calculations for three qudits represented by four anyons each. Symbolically, such a state can be written as follows:
|ψ⟩ = | ((
((((a,a)_i_11, a)_i_12, a)_i_13),
((((a,a)_i_21, a)_i_22, a)_i_23))_j_1,
((((a,a)_i_31, a)_i_32, a)_i_33))_j_2⟩.
However, recognizing that this notation can be cumbersome, we express the previous identity in a more concise and simplified form as follows:
|ψ⟩ = (
(|i_11, i_12, i_13⟩⊗|i_21, i_22, i_23⟩)_j_1⊗|i_31, i_32, i_33⟩)_j_2.
In this identity, |i_q1, i_q2, i_q3⟩ is a reduced notation for the state |(((a,a)_i_q1, a)_i_q2, a)_i_q3⟩, which represents the state of the q-th set consisting of four anyons of type a. The symbols i_q_1, i_q_2,i_q_3 denote the successive fusion results of the anyons within the q-th set.
Additionally, the notation (|i_11, i_12, i_13⟩⊗|i_21, i_22, i_23⟩)_j_1 is a more convenient way of writing |i_11, i_12, i_13⟩⊗|i_21, i_22, i_23⟩⊗|i_13, i_23, j_1⟩ as defined in the Eq. (<ref>).
The symbols j_1, j_2, ⋯, j_f-1 represent the fusion tree of the outcomes of the anyons i_q3, as shown in Fig. <ref>, where q=1,2,⋯,f, and f is the number of qudits, or the number of anyon groups present in the state |ψ⟩. Also, i_q3 refers to the overall fusion outcome of the q-th anyon group.
The matrix representation of a braid generator σ_n is established by its constituent components, which are the probability amplitudes for converting a fusion eigenstate |i⟩ into another fusion eigenstate |j⟩ within a specified basis under the specified operation. This is expressed as follows:
[σ_n]_ji = ⟨j|σ_n |i⟩.
Ultimately, applying σ_n to a specific fusion eigenstate falls into two cases. In the first case, σ_n interchanges two adjacent anyons within the same qudit group. The amplitude for this process can be calculated using the standard braiding matrix B <cit.>. In the second case, σ_n exchanges two adjacent anyons belonging to different qudit groups. For this scenario, we introduce a mixing matrix M. The detailed calculations for each case will be demonstrated in the following subsections.
§.§ The braiding matrix B
Consider the case of interchanging two adjacent anyons within the same qudit set, i.e., the braid generator index n is not a multiple of 4. Hence, the braiding operation is only applied to the q-th qudit, where q=n|4+1 and n|4 is the quotient obtained from dividing n by 4. Consequently, applying σ_n to the state |ψ⟩ is equivalent to implementing σ_(n mod 4) on the state of the q-th anyon group, where n mod 4 is the remainder of the Euclidean division of n by 4. Let us say that q = 2. Then,
σ_n |ψ⟩
= (
(|i_11, i_12, i_13⟩⊗σ_(n mod 4)|i_21, i_22, i_23⟩)_j_1⊗|i_31, i_32, i_33⟩)_j_2.
In the case where n satisfies the congruence n = 1 4, the R matrix definition can be used to determine the action of the braiding operator σ_1 on a basis state of four anyons, denoted as |i,j,k⟩. Specifically, the action of σ_1 on |i,j,k⟩ can be expressed using the tensorial form (<ref>) and the definition of the R matrix given in Eq. (<ref>) as
σ_1|i,j,k⟩
= σ_1 |(((a,b)_i,c)_j,d)_k⟩,
= R_ab^i|i,j,k⟩.
Therefore, the diagonal elements of the matrix [σ_1] are equal to R^i_ab.
When n satisfies the congruence n = 2 4, the action of σ_2 on a four-anyon basis state |i,j,k⟩ can be determined using the F and R transformations. Specifically, the action of σ_2 on |i,j,k⟩ can be expressed as
σ_2|i,j,k⟩
= σ_2 |(((a,b)_i,c)_j,d)_k⟩,
= ∑_ml (F_abc^j)_l^i R_bc^l (F_acb^† j)_m^l |(((a,c)_m,b)_j, d)_k⟩,
which involves a linear product of matrices. Since the R matrix is diagonal, Eq. (<ref>) can be written in term of the braiding matrix elements as:
σ_2 |(((a,b)_i,c)_j,d)_k⟩ = ∑_m ( B^j_abc)_m^i |(((a,c)_m,b)_j, d)_k⟩,
where
( B^j_abc)_m^i =
∑_l (F_abc^j)_l^i R_bc^l (F_acb^† j)_m^l ,
with the braiding matrix B_abc^j being defined as B^j_abc = F_abc^j R_bc F_acb^† j.
Note that in contrast to σ_1, which is limited to phase operations and cannot perform superpositions, the braiding operator σ_2 can mix between the fusion eigenstates and execute the desired superposition operations in the chosen fusion basis.
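In code, B^j_abc is just the three-factor product defined above. The sketch below assumes identical anyons a=b=c, for which F_abc^j = F_acb^j and a single fusion matrix suffices; braiding_matrix is a hypothetical helper, not a library routine.

```python
import numpy as np

def braiding_matrix(F, R_diag):
    """B^j_abc = F^j_abc R_bc F^{dagger j}_acb for identical anyons.
    F: unitary fusion matrix; R_diag: the phases R^l_bc, ordered like
    the intermediate-charge basis used for F."""
    return F @ np.diag(R_diag) @ F.conj().T
```

With the Fibonacci F and R quoted earlier, braiding_matrix(F, np.diag(R)) reproduces the familiar 2×2 generator that mixes the two fusion states.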
In the case where n = 3 4, the braiding operator σ_3 can be obtained by applying the operator σ_2 to the state |((i,c)_j, d)_k⟩. This means that, based on the expression in Eq. (<ref>), we have:
σ_3 |i,j,k⟩ = ∑_m ( B^k_icd)_m^j |(((a,b)_i,d)_m,c)_k⟩.
In general, the braiding matrix B_abc^i computes all possible braiding operations between two adjacent anyons within the same qudit group, even if the number of anyons per qudit is greater than four. Furthermore, σ_1 can be represented as B_1ab^i, where 1 denotes the vacuum charge. Namely,
σ_n |((⋯((a_1,a_2)_i_1,a_3)_i_2, ⋯ a_f)_i_f-1, a_f+1)_i_f⟩ = ∑_m ( B^i_n_i_n-2 a_n a_n+1)_m^i_n-1
|((⋯((a_1,a_2)_i_1,a_3)_i_2, ⋯ a_n-1)_i_n-2, a_n+1)_m, a_n)_i_n, ⋯ a_f+1)_i_f⟩.
§.§ The mixing matrix M
In the second case, we assume that n is a multiple of four (n=4m). In this scenario, the operator σ_n acts on the joint state of the m-th and (m+1)-th neighboring groups by exchanging the adjacent edge anyons. To apply this operator, one may first use an F transformation to decouple the joint state of the two groups. For simplicity, let us illustrate the steps for m = 2 in which case the braiding operator writes:
σ_n|ψ⟩
= σ_4 × 2(
(|i_11, i_12, i_13⟩⊗|i_21, i_22, i_23⟩)_j_1⊗|i_31, i_32, i_33⟩)_j_2,
= ∑_k (F_j_0 i_23 i_33^j_2)^j_1_k
(
|i_11, i_12, i_13⟩⊗σ_4
(
|i_21, i_22, i_23⟩⊗|i_31, i_32, i_33⟩)_k
)_j_2.
Notice that in the above relation j_0 = i_13 and in general j_n represents the vacuum charge for all n<0. In doing so, the scenario has shifted to the application of the braiding operation on a shared state of two neighboring qudits. By repeatedly performing F moves, the resulting sequence of transformations on the joint state of these two qudits will be as follows:
σ_4 (
|i_21, i_22, i_23⟩⊗|i_31, i_32, i_33⟩)_k
= σ_4|((((a,a)_i_21,a)_i_22, a)_i_23, (((a,a)_i_31,a)_ i_32, a)_i_33)_k⟩,
= σ_4∑_p_3 p_2 p_1(F_i_23i_32a^† k)^i_33_p_3(F_i_23i_31a^† p_3)^i_32_p_2(F_i_23aa^† p_2)^i_31_p_1
×|(((((((a,a)_i_21,a)_ i_22, a)_i_23, a)_p_1,a)_p_2, a)_p_3, a)_k⟩.
Upon application of the braiding matrix B as defined in Eq. (<ref>) to the state |(i_22,a,a)_p_1⟩, an immediate outcome can be obtained, which is as follows:
σ_4 (
|i_21, i_22, i_23⟩⊗|i_31, i_32, i_33⟩)_k
= ∑_p_3 p_2 p_1(F_i_23i_32a^† k)^i_33_p_3(F_i_23i_31a^† p_3)^i_32_p_2(F_i_23aa^† p_2)^i_31_p_1∑_i'_23(B_i_22aa^p_1)^i_23_i'_23
×|(((((((a,a)_i_21,a)_ i_22, a)_i'_23, a)_p_1,a)_p_2, a)_p_3, a)_k⟩.
The subsequent step involves the conversion of the state back into its original basis form, which can be achieved by employing the suitable F transformations and the braiding matrix B, as follows:
σ_4 (
|i_21, i_22, i_23⟩⊗|i_31, i_32, i_33⟩)_k
= ∑_i'_23i'_31i'_32i'_33(L^k_i_22 i_23 i_31 i_32 i_33)_i'_23 i'_31 i'_32 i'_33
×|((((a,a)_i_21,a)_ i_22, a)_i'_23, (((a,a)_i'_31),a)_ i'_32, a)_i'_33
)_k⟩,
where the resulting transformation is represented by a new braiding matrix L, which represents the effect of braiding two edge anyons in a joint state of neighboring qudits. The L matrix is defined as follows:
(L^k_i_22 i_23 i_31 i_32 i_33)_i'_23 i'_31 i'_32 i'_33
=
∑_p_3 p_2 p_1(F_i_23i_32a^† k)^i_33_p_3(F_i_23i_31a^† p_3)^i_32_p_2(F_i_23aa^† p_2)^i_31_p_1(B_i_22aa^p_1)^i_23_i'_23(F_i'_23aa^ p_2)_i'_31^p_1(F_i'_23i'_31a^p_3)_i'_32^p_2(F_i'_23 i'_32a^k)_i'_33^p_3.
After performing computations on the joint state of the two qudits, it is necessary to transform the resulting state back to the original form specified by Eq. (<ref>) using the inverse F transformations. Additionally, it is necessary to introduce another braiding matrix, denoted by M the mixing matrix, that accounts for all feasible fusion states achieved by exchanging anyons shared by adjacent qudit groups. This is demonstrated in the ensuing equation:
σ_4× 2|ψ⟩
= ∑_j'_1 i'_23i'_31i'_32 i'_33(M^j_0 j_1 j_2_i_22 i_23 i_31 i_32 i_33)_j'_1 i'_23i'_31i'_32 i'_33(
(|i_11, i_12, i_13⟩⊗|i_21, i_22, i'_23⟩)_j'_1⊗|i'_31, i'_32, i'_33⟩)_j_2.
The components of the introduced matrix M are defined by a specific linear combination of the components of L as follows:
(M^j_0 j_1 j_2_i_22 i_23 i_31 i_32 i_33)_j'_1 i'_23i'_31i'_32 i'_33 =
∑_k
(F_j_0 i_23 i_33^j_2)^j_1_k (L^k_ i_22 i_23 i_31 i_32i_33)_i'_23 i'_31i'_32 i'_33(F_j_0 i'_23 i'_33^† j_2)_j'_1^k.
In summary, by computing the B, L, and M matrices from the F and R matrices, we can determine the effect of any braiding operator acting on a state of multiple groups of anyons, where each group represents a qudit. In other words, the calculation of the right-hand side of Eq. (<ref>) has become straightforward for any number of qubits due to the availability of the aforementioned transformations. The general formula for the L and M matrices, along with their application to a fusion state involving an arbitrary number of anyons per qudit and arbitrary size, are provided in the supplementary material.
In order to ensure the reliability of our method, it is crucial to meet rigorous testing conditions that guarantee the accuracy of the formulas and their programmatic implementation. To achieve this, we will refer to the algebra of the braid group known as Artin relations, which serves as a foundation for our approach <cit.>:
σ_iσ_i+1σ_i = σ_i+1σ_iσ_i+1 ,
σ_iσ_j = σ_jσ_i , |i-j| > 1,
σ_i σ_i^-1 = I.
To verify the Artin relations numerically, we should ensure that the distance between the left- and right-hand sides of the relations is negligible, meaning it is on the order of the floating-point precision of the specific device and programming language being used. Additionally, it is necessary to substitute the third relation with the unitarity condition, since our focus is on quantum computing applications <cit.>. In order to calculate the difference between two unitaries, we employ the spectral distance, which will be defined in the upcoming section. We have performed numerical verification to validate the formulas presented in this section using Artin algebra for up to 24 anyons within the Fibonacci and Ising anyon models. The tests reveal a difference on the order of 1e-15 between the left and right sides of the relations.
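A minimal version of such a unit test, for the one-qubit representation carried by three Fibonacci anyons with total charge τ (same F and R data as before, repeated so the snippet is self-contained):

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
F = np.array([[1/phi, 1/np.sqrt(phi)], [1/np.sqrt(phi), -1/phi]])
s1 = np.diag([np.exp(4j*np.pi/5), np.exp(-3j*np.pi/5)])   # sigma_1 = R
s2 = F @ s1 @ F.conj().T                                  # sigma_2 = F R F^dag

# Yang-Baxter relation sigma_1 sigma_2 sigma_1 = sigma_2 sigma_1 sigma_2:
print(np.max(np.abs(s1 @ s2 @ s1 - s2 @ s1 @ s2)))        # ~1e-16 (double precision)
# Unitarity, the substitute for the inverse relation:
print(np.max(np.abs(s2 @ s2.conj().T - np.eye(2))))       # ~1e-16
```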
In addition, it is convenient to reproduce quantum gates previously compiled using well-known anyon models, such as Fibonacci or Ising models. This will be the main goal of the next section.
§ IMPLEMENTATION OF TOPOLOGICAL CNOT GATE
In this section, our primary objective is to replicate the braid compilation of the CNOT gate, which was initially introduced by Bonesteel et al., utilizing the injection method <cit.>. We have specifically selected this gate as it allows for a rigorous demonstration of the composition of six anyons for two-qubit controlled gates into three anyon braiding operations.
*The anyon model. In that study, the CNOT quantum gate is designed using six Fibonacci anyons with three anyons per qubit. The Fibonacci model includes one anyonic charge labeled as τ and the vacuum 1, and the non-trivial fusion rule of these anyons is τ × τ = 1 + τ <cit.>. Hence, given three anyons, the possible fusion states are as shown in Tab. <ref>.
Note that the logical states |0⟩ and |1⟩ are naturally independent of |2⟩ due to their different global fusion outcomes. Taking into consideration that a single qubit can be effectively represented by three Fibonacci anyons, resulting in a collective outcome of τ, it follows that the state |2⟩ is non-computational.
Considering six anyons, the fusion basis can be ordered as shown in Tab. <ref>.
Similarly, we have two separate sectors in the fusion space, which are differentiated by their global fusion outcomes. Consequently, we can choose one of two possible computational bases: {|00⟩_1, |10⟩_1, |01⟩_1, |11⟩_1} or {|00⟩_τ, |10⟩_τ, |01⟩_τ, |11⟩_τ}, while all other states that include one of the qubits in state |2⟩ are considered non-computational.
*Computation of braid operators. Since the F and R matrices are known for the Fibonacci model <cit.>, it is possible to obtain the braid generators that operate on the above fusion basis states using the systematic method explained in the previous section. The explicit matrix representations of the braid generators are shown in Fig. <ref>. Note that the braid operations σ_1 and σ_2 apply to the first qubit, while σ_4 and σ_5 act on the state of the second qubit. Furthermore, the braid operator σ_3 is responsible for the interaction between the two qubits.
However, this interaction comes at a cost, as σ_3 mixes computational states with non-computational states, inducing information leakage <cit.>. For the sake of performance, it is worth noting that the computation of such braid generators can be carried out separately in the two possible sectors.
*Implementation of topological CNOT gate. At this stage, we have attained the capability to simulate the unitary evolution of such an anyonic system under a given braid sequence. As an illustrative instance, we opt for a braid sequence that approximates the CNOT gate, implemented with 280 successive braiding operations using the injection method <cit.>, allowing us to verify our approach.
Computing the matrix representation of the given braid sequence results in a 13 × 13 matrix that includes the two possible computational sectors, as explained earlier.
We obtain a braid-based Controlled-NOT operation, where the second qubit controls the application of the NOT gate on the first qubit, as shown in Fig. <ref>. The accuracy and leakage of this approximation with respect to the conventional CNOT quantum gate <cit.> can be measured directly from the matrix representation.
The spectral distance 𝒟(U_1, U_2) = √(maxEigenvalue(AA^†)) is used to measure accuracy, where A is the difference between the unitaries U_1 and U_2 after eliminating the global phases <cit.>. Furthermore, leakage is measured by the quantity 1 -√(minEigenvalue (UU^†)), where the second term is the minimum factor by which the matrix U can change the norm of a quantum state <cit.>.
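Both figures of merit are direct spectral computations on the matrix U restricted to the computational subspace. A sketch follows; the global-phase alignment uses a convention of our choosing, since the paper does not spell out its phase fix:

```python
import numpy as np

def _fix_phase(U):
    """Divide out the phase of the largest-magnitude entry (a convention)."""
    k = np.unravel_index(np.argmax(np.abs(U)), U.shape)
    return U * np.exp(-1j * np.angle(U[k]))

def spectral_distance(U1, U2):
    """D(U1, U2) = sqrt(max eigenvalue of A A^dagger), A = U1 - U2
    after eliminating global phases."""
    A = _fix_phase(U1) - _fix_phase(U2)
    return np.sqrt(np.max(np.linalg.eigvalsh(A @ A.conj().T)))

def leakage(U):
    """1 - the minimum factor by which U can change the norm of a state."""
    return 1 - np.sqrt(np.min(np.linalg.eigvalsh(U @ U.conj().T)))
```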
The accuracy of the topological CNOT gate as approximated in sector 1 and sector τ is 1.73e-3 and 1.24e-3, respectively, with an estimated leakage of 1.17e-06 and 2.54e-06, respectively.
The preceding example reveals that the systematic method we present in this study yields a level of accuracy commensurate with the original work <cit.>.
§ CONCLUSION
Quantum information can be encoded, manipulated, and protected in a topologically robust manner, by performing braiding operations. In this paper, we presented a systematic approach for generating braid operators in any qubit or qudit-based quantum computation that incorporates anyons. Our approach can be applied to any specified anyon model, providing a structured framework for producing the necessary braid generators. This systematic method enables effective implementation and analysis of quantum computations with qubits or qudits involving anyons.
The significance of our study lies in its relevance to the compilation of quantum circuits on topological systems that utilize anyonic states <cit.>. While our focus is primarily on sparse encoding in qubit-based circuit models, applying a similar approach to different encoding schemes has the potential to yield a general formula for braid matrices, which would be highly valuable.
The formalism introduced in this work ensures a systematic method for computing braid generators with the assistance of computational units. However, it is important to note that this method does not guarantee a reduction in computational complexity since the size of the braid generator grows exponentially with the number of anyons, in the order of O(d_a^2n), where d_a represents the quantum dimension of the anyon a, and n is the number of anyons.
In order to enhance performance, the method can be parallelized. Additionally, the majority of braid generators can be stored as sparse matrices, as only the braid operations between different qubits have a significant number of non-zero components.
Moreover, we validate the introduced formalism through numerical verification using algebraic braid relations. Furthermore, our study demonstrates the successful reproduction of a previously established topological CNOT gate compiled with braiding Fibonacci anyons. The matrix representation and accuracy of our method align consistently with the original work <cit.>.
The method described in this paper has been successfully implemented for the Fibonacci anyon model in an open-source numerical library named TQSim, developed by the authors of this paper <cit.>. Importantly, our method can be systematically extended to encompass all SU(2)_k anyon models and other significant ones by determining the F and R matrices through the solution of consistency equations. In conclusion, the simulation of topological quantum computation using this method proves highly valuable for testing quantum gate compilation and developing topologically protected quantum circuits.
§ ACKNOWLEDGMENTS
This document has been produced with the financial assistance of the European Union (Grant no. DCI-PANAF/2020/420-028), through the African Research Initiative for Scientific Excellence (ARISE), pilot programme. ARISE is implemented by the African Academy of Sciences with support from the European Commission and the African Union Commission.
We are grateful to the Algerian Ministry of Higher Education and Scientific Research and DGRST for the financial support.
§ SUPPLEMENTARY MATERIALS
The following formulas represent the mixing matrices M and L defined for a fusion state of an arbitrary number of qudits with identical N=q+1 anyons per qudit. The M matrix describes the effect of braiding two adjacent anyons shared between two neighboring qudits m and m+1. Moreover, braiding two adjacent anyons within the same qudit is described by the braid matrix B as explained in the paper.
(M^j_m-2 j_m-1 j_m_i_m(q-1)i_mq i_(m+1)1⋯ i_(m+1)q)_j'_m-1i'_mqi'_(m+1)1⋯ i'_(m+1)q =
∑_k(F_j_m-2 i_mq i_(m+1)q^j_m)^j_m-1_k
(L^k_i_m(q-1) i_mq i_(m+1)1⋯ i_(m+1)q)_i'_mq i'_(m+1)1⋯ i'_(m+1)q(F_j_m-2 i'_mq i'_(m+1)q^† j_m)_j'_m-1^k.
(L^k_i_m(q-1)i_mq i_(m+1)1⋯ i_(m+1)q)_i'_mq i'_(m+1)1⋯ i'_(m+1)q =
∑_p_q p_q-1⋯ p_2 p_1(F_i_mqi_(m+1)(q-1)a^† k)^i_(m+1)q_p_q( F_i_mqi_(m+1)(q-2)a^† p_q)^i_(m+1)(q-1)_p_q-1×⋯
⋯×( F_i_mqi_(m+1)1a^† p_3)^i_(m+1)2_p_2( F_i_mqaa^† p_2)^i_(m+1)1_p_1( B_i_m(q-1)aa^p_1)^i_mq_i'_mq( F_i'_mqaa^ p_2)_i'_(m+1)1^p_1( F_i'_mqi'_(m+1)1a^ p_3)_i'_(m+1)2^p_2×⋯
⋯×( F_i'_mq i'_(m+1)(q-2)a^ p_q)_i'_(m+1)(q-1)^p_q-1( F_i'_mqi'_(m+1)(q-1)a^k)_i'_(m+1)q^p_q.
Consequently, for n=mN, the general action of the braid operator σ_n on an arbitrary fusion state is given as follows:
σ_mN (⋯ ( ⋯ ( ⋯⊗| i_m1, i_m2, ⋯ i_mq⟩_m-th qudit
)_j_m-1⊗|i_(m+1)1, i_(m+1)2, ⋯ i_(m+1)q⟩_(m+1)-th qudit
)_j_m⊗⋯ )_j_f-1
= ∑_j'_m-1 i'_mqi'_(m+1)1⋯ i'_(m+1)q(M^j_m-2 j_m-1 j_m_i_m(q-1)i_mq i_(m+1)1⋯ i_(m+1)q)_j'_m-1i'_mqi'_(m+1)1⋯ i'_(m+1)q
(⋯⊗|i_m1, i_m2, ⋯ i_m(q-1), i'_mq⟩)_j'_m-1⊗|i'_(m+1)1, i'_(m+1)2, ⋯ i'_(m+1)q⟩)_j_m⊗⋯)_j_f-1
Jadavpur University, Kolkata, West Bengal, India
{sksahu.etce.rs, as.chowdhury}@jadavpuruniversity.in
MULTI-MODAL MULTI-CLASS PARKINSON DISEASE CLASSIFICATION USING CNN and DECISION LEVEL FUSION
Sushanta Kumar Sahu 0000-0002-4384-8939
Ananda S. Chowdhury0000-0002-5799-3467
============================================================================================
Parkinson’s disease (PD) is the second most common neurodegenerative disorder, as reported by the World Health Organization (WHO).
In this paper, we propose a direct three-Class PD classification using two different modalities, namely, MRI and DTI. The three classes used for classification are PD, Scans Without Evidence of Dopamine Deficit (SWEDD) and Healthy Control (HC). We use white matter (WM) and gray matter (GM) from the MRI and fractional anisotropy (FA) and mean diffusivity (MD) from the DTI to achieve our goal. We train four separate CNNs on the above four types of data. At the decision level, the outputs of the four CNN models are fused with an optimal weighted average fusion technique. We achieve an accuracy of 95.53% for the direct three-class classification of PD, HC and SWEDD on the publicly available PPMI database. Extensive comparisons including a series of ablation studies clearly demonstrate the effectiveness of our proposed solution.
§ INTRODUCTION
Parkinson's disease is the second most common neurological disorder that affects movement and can cause tremors, stiffness, and difficulty with coordination <cit.>. Early diagnosis of PD is important for effective treatment, as there is currently no cure for the disease. However, diagnosis can be challenging due to the variability of symptoms and lack of definitive biomarkers. According to the World Health Organization (WHO), PD affects approximately 1% of people aged 60 years and older worldwide.
However, approximately 10% of clinically diagnosed patients with early-stage PD exhibit normal dopaminergic functional scans. This class, which signifies a medical condition distinct from PD, is known as Scans Without Evidence of Dopamine Deficit (SWEDD) <cit.>. As a result of the evolution of this new class, the difficulty of diagnosing PD has increased manifold, leading to a three-class classification problem of PD vs. SWEDD vs. HC with class overlaps <cit.>.
MRI, SPECT and PET are commonly used imaging techniques for PD diagnosis. However, PET and SPECT are not preferred by doctors due to invasiveness and cost <cit.>. DTI is a newer technique that measures water molecule movement to analyze white matter microstructure, which gets affected in PD. In the literature, quite a few works have been reported on PD classification based on machine learning (ML) and deep learning (DL) models applied to neuroimaging data. Salat et al. <cit.> found correlations between gray or white matter changes and age using voxel-based morphometry (VBM). Adeli et al. <cit.> used a recursive feature elimination approach for two-class classification with 81.9% accuracy. Cigdem et al. <cit.> proposed a total intracranial volume method with 93.7% accuracy. Singh et al. <cit.> presented an ML framework for three two-class classifications. Chakraborty et al. <cit.> presented a DL model with 95.29% accuracy. A DL-based ensemble learning technique was reported in <cit.> with 97.8% accuracy.
Recent research indicates that combining features from more than one imaging modality can improve the classification accuracy. For example, Li et al. <cit.> showed that combining DTI and MRI features improves the classification accuracy in Alzheimer's disease. The authors of <cit.> used MRI and DTI, but only considered MD data from DTI for PD classification. They used a stacked sparse auto-encoder to achieve better classification accuracy. In light of the above findings, we anticipate that MRI and DTI can be effectively combined to better analyze PD. To further improve decision accuracy, we next mention a few decision-level fusion techniques.
The majority voting technique is the most common technique used in late fusion <cit.>. This strategy, however, may not be appropriate for multi-class classification applications. Single classifiers work well on most subjects, but error rates increase for some difficult-to-classify subjects due to overlap across many categories. Instead of using a majority voting strategy, a modified scheme called modulated rank averaging is employed in <cit.>. We feel the accuracy may be enhanced even further by fine-tuning the weights computed in the modulated rank averaging approach.
In summary, there is a clear dearth of direct three-class PD classification strategies, especially ones using multi-modal data. In this paper, we present a direct three-class PD classification using CNNs and decision level fusion. We investigate the full potential of multi-modal data, i.e., FA & MD from DTI and WM & GM from MRI. We use four CNNs to analyze these four types of data. Outputs from all these four models are finally fused using an Optimal Weighted Average Fusion (OWAF) technique at the decision level. Since neuroimaging datasets are small, data augmentation is adopted to ensure proper training with the CNNs <cit.>. We now summarize our contributions as below:
* We address a direct three-class classification task (PD, HC and SWEDD) for Parkinson's disease, which is certainly more challenging than the current trend of a single binary classification (for 2-class problem) or multiple binary classifications (for 3-class problem).
* We make effective use of the underlying potential of multi-modal neuroimaging, namely T1-weighted MRI and DTI. In particular, we train four different CNNs on WM, GM data from MRI and FA, MD data from DTI. Such in-depth analysis of multi-modal neuroimaging data is largely missing in the analysis of PD.
* Finally, at the decision level, the outputs of each CNN model are fused using an Optimal Weighted Average Fusion (OWAF) strategy to achieve state-of-the-art classification accuracy.
§ PROPOSED METHOD
Our solution pipeline for an end-to-end direct three-class classification of PD from DTI and MRI consists of four CNN networks. Each CNN network yields a 3×1 probability vector, which represents the probability that the data falls into one of three classes, i.e., PD, HC or SWEDD. The probability vectors are then combined using the OWAF technique. In section <ref>, we discuss how WM and GM are obtained from MRI data and MD and FA are used from DTI data. Section <ref> describes ADASYN, an oversampling strategy. Section <ref> presents the proposed CNN architecture. In section <ref>, we discuss the decision level fusion. Figure <ref> illustrates the overall pipeline of our solution.
§.§ Data Pre-processing
In this work, voxel-based morphometry (VBM) is used to prepare the MRI data. The data is preprocessed using SPM-12 tools, and images are normalized using the diffeomorphic anatomical registration through exponentiated lie algebra (DARTEL) method <cit.>. This SPM-12 tool segments the whole MRI data into GM, WM and cerebrospinal fluid, and performs the anatomical normalization of all images to the same stereotactic space employing linear affine translation, non-linear warping, smoothing and statistical analysis. After registration, GM and WM volumetric images were obtained, and the unmodulated image is defined as the density map of grey matter (GMD) and white matter (WMD). The PPMI database contains all information regarding DTI indices, including FA and MD. Brain scans from PD groups as well as HC have distinct voxel MD values. MD and FA can be mathematically expressed as <cit.>:
MD=(λ_1+λ_2+λ_3)/3=(D_xx+D_yy+D_zz)/3
FA=√(1/2)√(((λ_1-λ_2)^2+(λ_2-λ_3)^2+(λ_3-λ_1)^2)/(λ_1^2+λ_2^2+λ_3^2))
In equations <ref>, the diagonal terms of the diffusion tensor are D_xx, D_yy and D_zz, and the sum of these diagonal terms constitutes its trace. After preprocessing, four types of data are made available, namely, grey matter (GM), white matter (WM), fractional anisotropy (FA) and mean diffusivity (MD).
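Per voxel, both indices follow directly from the eigenvalues of the fitted 3×3 diffusion tensor, as the two equations above show; a small Python sketch:

```python
import numpy as np

def md_fa(D):
    """Mean diffusivity and fractional anisotropy of a 3x3 diffusion tensor."""
    l1, l2, l3 = np.linalg.eigvalsh(D)          # eigenvalues of the tensor
    md = (l1 + l2 + l3) / 3                     # equals trace(D) / 3
    fa = np.sqrt(0.5) * np.sqrt(((l1-l2)**2 + (l2-l3)**2 + (l3-l1)**2)
                                / (l1**2 + l2**2 + l3**2))
    return md, fa

# Isotropic diffusion has FA = 0; a cigar-shaped tensor has FA close to 1.
print(md_fa(np.diag([1.0, 1.0, 1.0])))          # (1.0, 0.0)
```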
§.§ Data balancing with ADASYN
We find each of GM, WM, FA and MD to be highly imbalanced across the three classes. Further, the number of training samples required to feed a DL model is insufficient. So, ADASYN, an oversampling method, is used to increase the number of samples in each minority class <cit.>. The primary idea behind the ADASYN technique is to compute the weighted distribution of minority samples based on the number of their out-of-class neighbors.
A difficult-to-train minority instance surrounded by more out-of-class examples is given a better chance of being augmented through producing synthetic samples. Using a set of pseudo-probabilistic rules, a predetermined number of instances are generated for every minority class depending on the weighted distribution of its neighbors. Following the implementation of this up-sampling approach, the total number of samples in each of the classes is 141 volumetric images. The details of data set division strategies are shown in Figure <ref>.
Let a dataset consist of m samples of the form (x_i,y_i), where i ∈ [1,m], x_i is the i^th sample in the n-dimensional feature space X and y_i is the label (class) of the sample x_i. Let m_min and m_maj be the numbers of minority and majority class samples, respectively. Then the ratio of minority to majority samples is expressed as: d = m_min /m_maj. Let β be the balance level of the synthetic samples, so that β = 1 represents that the classes are balanced after applying ADASYN. The number of synthetic minority samples to be created is given by equation <ref>.
G=(m_maj - m_min) ×β
For β≠ 1, synthetic samples will be created for each set of minority data based on the Euclidean distance of their k-nearest neighbors. The dominance of the majority class in each neighborhood is expressed as r_i= Δ_i/k, where Δ_i is the number of examples in the k-nearest neighbors of x_i that belong to the majority class. A higher value of r_i in a neighborhood indicates more examples of the majority class, which makes that neighborhood harder to learn. We next determine how many synthetic samples per neighborhood need to be generated as G_i= G ×r̂_i, where r̂_i represents the normalized version of r_i. This captures the adaptive nature of the ADASYN algorithm, which means more data is created for harder-to-learn neighbourhoods. We generate G_i synthetic data points s_i,j, j=1,2, ⋯, G_i for each neighbourhood i as shown in equation <ref> below:
s_i,j= x_i + (x_zi- x_i ) ×λ
where x_i and x_zi are two minority occurrences in the same neighborhood and λ is a random number between 0 and 1. In our work, these synthetic images are generated with the help of the Scikit-Learn library such that all the classes become balanced in nature.
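In practice, this balancing step amounts to one call to an off-the-shelf ADASYN implementation, such as the one in the imbalanced-learn companion package of Scikit-Learn. The sketch below uses synthetic stand-in features (the real inputs are the flattened, preprocessed GM/WM/FA/MD volumes), with class proportions that only roughly mirror the cohort:

```python
from sklearn.datasets import make_classification
from imblearn.over_sampling import ADASYN

# Stand-in for the flattened volumetric training features (225 scans).
X, y = make_classification(n_samples=225, n_classes=3, n_informative=8,
                           weights=[0.63, 0.24, 0.13], random_state=0)

# k = 30 neighbours, as selected on the training set; every minority
# class is oversampled until the three classes are balanced.
ada = ADASYN(sampling_strategy='not majority', n_neighbors=30, random_state=0)
X_bal, y_bal = ada.fit_resample(X, y)
```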
§.§ Proposed CNN Architecture
We use four CNN models, each with ten convolutional layers and four dense (FC) layers, for the direct three-class classification task of PD. The proposed network's architecture is depicted in Figure <ref>. We chose fewer parameters in the proposed architecture than in the original VGG16 <cit.> by decreasing the network depth. Our proposed network is similar to that proposed in <cit.>. The number of layers is limited to maintain a trade-off between accuracy and computational cost. To generate feature representations of brain MR scans, convolution layers are used. The final FC layer and a softmax operation are used for the classification task. The volumetric input data is processed slice-wise, with each slice having a size of 176 × 176 pixels.
We have used max-pooling in the pooling layers to reduce the image size. The flattened layers convert the reduced feature maps to a one-dimensional feature map. The fully connected layers classify this feature map into three classes: HC, PD and SWEDD. The cross-entropy loss function (CELF) is the most common loss function used for classification problems since it has better convergence speeds for training deep CNNs than MSE and hinge loss.
As a result, we consider CELF for this work, which is mathematically expressed as:
L_CELF = - ∑_i = 1^N∑_j = 1^K p(y_j|𝐱) log p̂(y_j|𝐱)
where p(y_j|𝐱) is the original class label distribution and p̂(y_j|𝐱) is the predicted label distribution from the CNN network. Here N and K represent the number of models and the number of classes, respectively. For our problem, N=4 and K=3.
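A compact realization of one of the four branch networks is sketched below with Keras. The filter and unit counts are our assumptions: the paper fixes only the ten convolutional and four dense layers, the 176 × 176 slice input, max-pooling, ReLU, the softmax output over three classes and the Adam settings reported in the experiments section.

```python
from tensorflow import keras
from tensorflow.keras import layers

def make_branch_cnn(input_shape=(176, 176, 1), n_classes=3):
    """One per-modality CNN (GM, WM, FA or MD): a VGG-style stack with
    10 convolutional and 4 dense layers."""
    m = keras.Sequential()
    m.add(keras.Input(shape=input_shape))
    for n_filters, n_convs in [(32, 2), (64, 2), (128, 3), (256, 3)]:  # 10 convs
        for _ in range(n_convs):
            m.add(layers.Conv2D(n_filters, 3, padding='same', activation='relu'))
        m.add(layers.MaxPooling2D(2))      # halve the feature-map size
    m.add(layers.Flatten())
    for units in [512, 128, 32]:           # dense layers 1-3
        m.add(layers.Dense(units, activation='relu'))
    m.add(layers.Dense(n_classes, activation='softmax'))  # dense layer 4
    m.compile(optimizer=keras.optimizers.Adam(1e-4),
              loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return m
```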
§.§ Decision Level Fusion of CNN Networks
We use four CNN models, one each for GM, WM, FA and MD.
We fuse these predicted probabilities with the help of suitable weights. The weights are generated in two stages. In the first stage, the weights are generated using the modulated rank averaging (MRA) method <cit.>. The weights in the MRA method are given by:
w_i=f_i/∑_i=1^N-1f_i + R_max
In equation <ref>, f_i and R_max indicate the normalizing factor and the rank of the model having highest accuracy respectively. The normalizing factor is calculated based on the rank of the current model and the difference between the accuracy of the current and next model. In the second stage, these weights are optimised using the grid search method <cit.>. Let the final optimized weights be denoted by w_i^'. Note that this weight vector is fixed for all 3 classes.
We combine this optimal weight vector with the respective probability vectors to obtain the overall probability of occurrence of respective classes. Let us denote by PF_j; j= 1,2,3, the overall probability of occurrence of the j^th class as a result of fusion.
PF_j = ∑_i=1^4 w_i^'× p_ij
The final class will be the one for which PF_j is maximum.
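The fusion rule above is a weighted sum of the four softmax outputs followed by an argmax over classes; a minimal sketch (owaf_predict is a hypothetical helper):

```python
import numpy as np

def owaf_predict(probs, weights):
    """probs: four (n_samples, 3) softmax outputs for GM, WM, FA, MD;
    weights: the optimized w'_i, one scalar per model (fixed across classes).
    Returns the fused class labels argmax_j PF_j."""
    PF = sum(w * p for w, p in zip(weights, probs))    # PF_j = sum_i w'_i p_ij
    return np.argmax(PF, axis=1)
```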
§ EXPERIMENTAL RESULTS
In this section we present the experimental results for three class PD classification. The section is divided into three subsections. In the first subsection, we provide an overview of the PPMI database. We then discuss data preparation, computing configuration, parameter settings of ADASYN and CNN and evaluation metrics. The second subsection shows a series of ablation studies to demonstrate separate impacts of both MRI and DTI data and the proposed fusion strategy. Finally, we include comparisons with several state-of-the-art methods to demonstrate the effectiveness of the proposed solution.
§.§ Data preparation and implementation details
In our study, we included 281 subjects with baseline visits having both DTI and MRI data from PPMI. This includes 67 HC, 177 PD and 37 SWEDD subjects. Table <ref> shows the demographics of the individuals used in this investigation. For all experiments, we use a system with 16.0 GB DDR4 RAM, an Intel® Core™ i7-10750H CPU @ 2.60GHz and GPU of NVIDIA GeForse RTX 3060 @ 6GB. Here, 80% of the data were randomly selected from the PD, SWEDD and HC groups to produce the training set from 225 volumetric image scans. The remaining 56 volumetric image scans, representing approximately 20% of each class, were utilized to create the test set. For ADASYN, we experiment with different neighbor counts (k) on the training set while keeping other settings unaltered. We find that k = 30 produced the best results. Figure <ref> shows the details of data set division strategies. As a result of this, the total number of volumetric images becomes 423, where each of the three classes have 141 volumetric images. For the four CNN models, one each on GM, WM, FA and MD data, we train the network for 100 epochs. ADAM optimizer and ReLU activation functions are employed. The learning rate is initialized at 1×10^-4 with a batch size of 32.
We use the same training parameters for each model (WM, GM, FA and MD) such that there are no conflicts when we combine the outputs of the models and make a fusion at the decision level.
We evaluate the classification performance using four different metrics. These are accuracy, precision, recall and F1 score.
All the measures are calculated using the Scikit Learn packages <cit.>.
§.§ Ablation Studies
We include two ablation studies. The first study demonstrates the utility of using both MRI and DTI data. The second study conveys the benefit of OWAF, the proposed fusion strategy.
The four CNNs are trained and evaluated on both single- and multi-modal data from MRI and DTI. Our goal is to investigate the effects of using single and multi-modal data on the direct three-class classification. The results are presented in Table <ref>. This table clearly illustrates that the use of both DTI and MRI yields superior results as compared to using MRI or DTI in isolation. Note that the magnitude of improvement from the use of multi-modal data is clearly more significant for the more challenging three-class classification problem.
The four CNNs are combined using four different fusion strategies at the decision level, namely: majority voting, model average fusion, modulated rank averaging <cit.> and the proposed optimal weighted average fusion (OWAF) based on the grid search approach. When voting techniques are applied, the universal decision rule is established simply by fusing the decisions made by the separate models.
In the model average fusion method, the output probabilities of each model are simply multiplied by the weight provided to that model based on its accuracy. In the modulated rank averaging method (MRA), the output probabilities of each model are updated using a weight generated based on their rank and the difference in probabilities between the models. This method gives better results than the model average fusion method and the majority voting method. In this work, we take the weights generated using MRA as the initial weights. These base weights are further optimised using the grid search method. In grid search, we fine tune the weights (w_i) in the range of w_i± 0.05 with a step size of 0.01. The final weights are used in our OWAF technique.
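The second stage is an exhaustive search over a small hypercube around the MRA weights: eleven candidate values per model, hence 11^4 fused evaluations. A sketch, assuming the search is scored on a held-out validation split (the exact protocol is not stated in the paper):

```python
import numpy as np
from itertools import product
from sklearn.metrics import accuracy_score

def refine_weights(val_probs, y_val, w_mra, delta=0.05, step=0.01):
    """Exhaustive search in [w_i - delta, w_i + delta] around the MRA
    weights; returns the combination with the best fused accuracy."""
    grids = [np.arange(w - delta, w + delta + 1e-9, step) for w in w_mra]
    best_w, best_acc = list(w_mra), 0.0
    for cand in product(*grids):                      # 11^4 combinations
        fused = sum(w * p for w, p in zip(cand, val_probs))
        acc = accuracy_score(y_val, np.argmax(fused, axis=1))
        if acc > best_acc:
            best_acc, best_w = acc, list(cand)
    return best_w, best_acc
```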
Table <ref> illustrates the effects of various fusion strategies. For fair comparison, we use both MRI and DTI data in all cases. The experimental results in Table <ref> clearly reveal that the proposed OWAF outperforms the other fusion strategies.
§.§ Comparisons with State-of-the-art Approaches
We compare our method with ten state-of-the-art approaches. There are no results available for a direct three-class PD classification. So, we compare our results with those papers that have addressed PD classification on the PPMI database using single or multiple modalities and with three or fewer two-class classifications. The results of the comparisons are shown in Table <ref>. Out of the ten methods we have considered, five are based on machine learning (ML) and the rest are based on deep learning (DL). Further, in four out of the five DL-based approaches, only a single modality, namely MRI, is used for classification. Also note that eight of these ten techniques have only addressed a single two-class classification problem between PD and HC and did not consider the challenging SWEDD class at all. The remaining two approaches did consider SWEDD as a third class but divided the three-class classification problem into multiple binary classifications <cit.>. However, Li et al. <cit.> did not report the classification results for PD vs. SWEDD in their paper. In order to have fair comparisons, we have also included the three binary classifications as obtained from our method in this table.
Our direct three-class classification accuracy turns out to be superior to the two-class classification accuracy of at least eight out of the ten methods. It is also higher than two out of the three binary classification accuracies of <cit.>. Note that in <cit.>, the authors used a somewhat different experimental protocol by considering the two publicly available databases ADNI and PPMI. In our work, we explicitly consider data with both MRI and DTI for the same individual, as available solely in the PPMI database. Though the authors in <cit.> reported superior classification accuracy for males, the accuracy is much lower for females and also for the average case (both male and female taken into account). Only <cit.> has reported a higher classification accuracy than ours. However, they have considered only a single binary classification of PD vs. HC and ignored the more challenging SWEDD class. If we consider the binary classification results of our method, then we straightaway outperform nine of the ten state-of-the-art competitors and even beat the remaining method <cit.> in two out of three classifications.
§ CONCLUSION
In this paper, we presented an automated solution for the direct three-class classification of PD using both MRI and DTI data. Four different CNNs were used for separate classifications with WM and GM data from MRI and MD and FA data from DTI. An optimal weighted average decision fusion method was applied next to integrate the individual classification outcomes. An overall three-class classification accuracy of 95.53% is achieved. Extensive testing, including a number of ablation studies on the publicly available PPMI database clearly establishes the efficacy of our proposed formulation.
In future, we plan to deploy our model in clinical practice with data from other neuro-imaging modalities. We also plan to extend our model for classifying other Parkinsonian syndromes.
Spanning trees in the square of pseudorandom graphs
Matías Pavez-SignéSupported by the European
Research Council (ERC) under the European Union Horizon 2020 research and innovation programme (grant agreement No. 947978). Mathematics Institute, Zeeman Building, University of Warwick, Coventry CV4 7AL, UK. Matias.pavez-signe@warwick.ac.uk
==================================================================================================================================================================================================================================================================================================
We show that for every Δ∈ℕ, there exists a constant C such that if G is an (n,d,λ)-graph with d/λ≥ C and d is large enough, then G^2 contains every n-vertex tree with maximum degree bounded by Δ. This answers a question of Krivelevich.
§ INTRODUCTION
A pseudorandom graph G on n vertices is a sparse graph that “resembles” many of the properties that are typically present in the binomial random graph G(n,p) with edge density p=e(G)/\binom{n}{2}. Arguably, the most crucial characteristic of random graphs that pseudorandom graphs try to capture is the uniform edge distribution property, that is, that all large subsets of vertices span approximately the expected number of edges that appear in the truly random case.
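Quantitatively, this property is captured by the expander mixing lemma in its standard form: if G is an n-vertex d-regular graph whose non-trivial eigenvalues are bounded by λ in absolute value, then for all S,T⊆ V(G),

|e(S,T) - d|S||T|/n| ≤λ√(|S||T|),

where e(S,T) denotes the number of edges with one endpoint in S and the other in T.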
In this paper, we will take a widely used approach to pseudorandom graphs based on a spectral gap condition. Say that a graph G is an (n,d,λ)-graph if G is an n-vertex d-regular graph such that all of the non-trivial eigenvalues of G are bounded by λ in absolute value, in which case, the so-called expander mixing lemma implies that G enjoys the uniform edge distribution property. We refer to the excellent paper by Krivelevich and Sudakov <cit.> for a comprehensive survey about pseudorandom graphs. In this article, we are interested in the following extremal question.
For an n-vertex graph H, how large must d/λ be so that every (n,d,λ)-graph contains a copy of H?
One of the most important directions here is when H is a Hamilton cycle, in which case we have the following beautiful conjecture posed by Krivelevich and Sudakov nearly 20 years ago.
There exists a positive constant C such that the following holds. If G is an (n,d,λ)-graph with d/λ≥ C, then G contains a Hamilton cycle.
Krivilevich and Sudakov <cit.> proved that d/λ≥ Clog n^1-o(1) is enough to guarantee Hamiltonicity in (n,d,λ)-graphs, and quite recently Glock, Munhá Correira, and Sudakov <cit.> improved this result by showing that d/λ≥ C(log n)^1/3 is sufficient to force Hamiltonicity in (n,d,λ)-graphs. Moreover, they also proved that Conjecture <ref> is true when d≥ n^α for some fixed α>0.
Besides Hamiltonicity, probably the most natural problem here is to study when (n,d,λ)-graphs contain all n-vertex trees with bounded maximum degree. If we believe that Conjecture <ref> is correct, then we should expect to find Hamiltonian paths in (n,d,λ)-graphs, as long as d/λ is large enough. Indeed, in 2007, Alon, Krivelevich, and Sudakov <cit.> asked the following question.
[<cit.>] Is it true that for any Δ∈ℕ, there exists a positive constant C=C(Δ) such that if G is an (n,d,λ)-graph with d/λ≥ C, then G contains a copy of every spanning tree T with Δ(T)≤Δ?
As pointed out by Glock, Munhá Correia, and Sudakov <cit.>, it is not even known how to find paths of length longer than n-O(λ n/d) in (n,d,λ)-graphs when d/λ≥ C for some large C. Therefore, looking for general spanning trees in optimal pseudorandom graphs seems to be a quite challenging problem. For almost-spanning trees, however, Alon, Krivelevich, and Sudakov <cit.> showed that for any Δ∈ℕ and ε>0, there exists a constant C=C(ε,Δ) such that if G is an (n,d,λ)-graph with d/λ≥ C, then G contains a copy of each tree with maximum degree bounded by Δ and at most (1-ε)n vertices. A subsequent work by Balogh, Csaba, Pei, and Samotij <cit.> improved the dependency of the constant C=C(ε,Δ). Regarding spanning trees, the only explicit attempt to answer Question <ref> that we are aware of was due to Han and Yang <cit.>, who showed that d≥ 2λΔ^5√(n) suffices for an (n,d,λ)-graph to contain all n-vertex trees with maximum degree bounded by Δ.
A new twist to this problem was recently introduced by Krivelevich <cit.>, who considered a weakened version of Question <ref> by replacing the (n,d,λ)-graph G with its square G^2. Krivelevich <cit.> proved that Conjecture <ref> holds if the host graph is replaced by its square, that is, it is shown that if G is an (n,d,λ)-graph with d/λ≥ C and d large enough, then G^2 contains a Hamilton cycle. He also asked a similar question about spanning trees.
[<cit.>] Is it true that for every Δ∈ℕ, there exists a positive constant C=C(Δ) such that if d/λ≥ C and T is an n-vertex tree with Δ(T)≤Δ, then the square G^2 of an (n,d,λ)-graph G contains a copy of T?
Our main result is a positive answer to Question <ref>.
For every Δ∈ℕ, there exists a positive constant C such that the following holds for every sufficiently large d∈ℕ. If G is an (n,d,λ)-graph with d/λ≥ C, then G^2 contains a copy of every n-vertex tree with maximum degree at most Δ.
A well-known result says that trees contain either a large collection of leaves or many vertex-disjoint induced paths of some fixed length. If we are given a tree T with few leaves, and therefore many long induced paths, then we can define a new tree T̃ which is obtained from T by replacing a single edge from each of the long induced paths in T with a spike. The new tree T̃ has the following two main features. Firstly, T̃ has bounded maximum degree and contains many leaves and, secondly, if G contains a copy of T̃, then G^2 contains a copy of T. Therefore, it will be possible to deduce Theorem <ref> from the following result, which might be of independent interest.
For every Δ∈ℕ and α>0 sufficiently small, there exists a positive constant C such that the following holds for every sufficiently large d∈ℕ. If G is an (n,d,λ)-graph with d/λ≥ C, then G contains a copy of every n-vertex tree with maximum degree at most Δ and at least α n leaves.
The paper is organised as follows. In Section <ref>, we give a detailed overview of the proof of Theorem <ref>. In Section <ref> we introduce the main tools that we need here, and we prove Theorems <ref> and <ref> in Section <ref>. Lastly, we give some concluding remarks in Section <ref>.
§ OUTLINE OF THE PROOF OF THEOREM <REF>
Suppose we are given an (n,d,λ)-graph G and a tree T which contains a set of leaves L of size |L|≥α n for some 0<α≪ 1/Δ. In order to embed T, we will follow a similar approach as it has been done before for trees with many leaves (see <cit.> for instance). Roughly speaking, the idea is to first embed T-L and then find a matching between the image of the parents of L and the unoccupied vertices in G.
Assume for a moment that we can actually embed T-L and let us discuss how to complete the embedding of T. When the host graph is a truly random graph, this can be easily done by just sprinkling a few more edges and then showing that with high probability there exists a matching between the set of parents and the rest of the uncovered vertices in the graph. However, when working with pseudorandom graphs, this strategy is not possible anymore. To overcome this issue, we will use the idea of matchmakers as introduced by Montgomery <cit.> and recently implemented by Krivelevich <cit.>.
We first pick pairwise disjoint small random subsets V_1,V_2,V_3⊂ V(G), called matchmakers, and show that with positive probability every vertex in V(G) has Θ(d) neighbours in each of the V_i's (this is done by using the Lovász local lemma). The crucial observation here is that we can ensure that small sets expand into each of the V_i's (see Lemma <ref>). We will use each of these sets V_1,V_2 and V_3 for different purposes. Firstly, we use V_3 to show that even after removing V_1 from G, we still have good expansion properties. Secondly, we show that if we embed T-L outside V_1, then the image of the set of parents of L will expand into the set of unoccupied vertices in G. Lastly, we will use V_2 in order to show that the set of unused vertices in G also has good expansion properties in the image of the parents of L. In order to perform this last step, however, we need to embed T-L while ensuring that the image of the parents of L covers every vertex from V_2. We will explain now how to do this.
The main tool is a powerful embedding technique, sometimes called extendability methods or tree embeddings with rollbacks, which was first introduced by Friedman and Pippenger <cit.> in 1987 and subsequently improved by Haxell <cit.> in 2001. Here we will use a modern reformulation of this technique which is attributed to Glebov, Johannsen, and Krivelevich <cit.>, and which has played a major role in the solution of several problems in the last few years (see <cit.> for instance). Roughly speaking, the extendability method (Lemma <ref>) says that if we are given a subgraph S_i⊂ G which is 'extendable' and G has good expansion properties, then we can extend S_i by adding a leaf edge e_i with one endpoint in V(S_i) and the other in V(G)∖ V(S_i) so that S_i+e_i remains extendable. This method will work smoothly as long as |S_i|≤ |G|-Θ(Δλ n/d), and therefore, since |L|≥α n≫λ n/d, we will be able to iterate this process until we embed all of T-L. However, as we mentioned earlier, we will need to ensure that the matchmaker V_2 is completely contained in the image of the parents of L.
In order to cover V_2, we will use some further ideas from the work of Montgomery <cit.>. We first set aside a small subtree T'⊂ T-L containing a large set Q of parents of leaves which are far apart from each other in the tree. Using the extendability method, we will embed T' in rounds so that at each round we cover more and more of V_2 using only vertices from Q. After this is completed, every vertex from V_2 is covered and we then just finish the embedding of T-L using extendability methods. To complete the embedding of T, we use Hall's theorem to find a matching between the image of the parents of leaves and the rest of the graph. The properties of the matchmakers will guarantee that Hall's matching criterion is satisfied and thus we can complete the embedding of T.
§ PRELIMINARIES
We will use standard graph theory notation. For a graph G, we denote by V(G) and E(G) the set of vertices and edges of G, respectively, and write |G|=|V(G)| and e(G)=|E(G)|. For a vertex v∈ V(G), we denote by N(v) the set of neighbours of v and let d(v)=|N(v)| denote the degree of v. Given a subset S⊂ V(G), the set of neighbours of S is Γ(S)=⋃_s∈ SN(s) and the external neighbourhood of S is N(S)=Γ(S)∖ S. For a vertex v∈ V(G) and sets U,S⊂ V(G), we write d(v,U)=|N(v)∩ U| and N(S,U)=N(S)∩ U. When working with more than one graph, we will use a subscript to specify which graph we are working with. For example, if H is a subgraph of G and v∈ V(H), then d_H(v) denotes the degree of v in H. Given a subset S⊂ V(G), we write G[S] to denote the graph with vertex set S and all the edges from G with both endpoints in S, and we write G-S for the graph G[V(G)∖ S]. For two sets A,B⊂ V(G), we let e(A,B) denote the number of edges with one endpoint in A and the other endpoint in B, and, if A and B are disjoint, we let G[A,B] denote the bipartite graph induced by A and B in G, in which case e(A,B) is just the number of edges in G[A,B]. For a graph H and an edge e∉E(H), we let H+e denote the graph obtained from H by adding the edge e.
For n∈ℕ, we write [n]={1,…,n}. We will use the standard hierarchy notation, that is, for real numbers a,b∈ (0,1], we will write a≪ b to mean that given b fixed, there exists a_0>0 such that if a≤ a_0 then the following statement holds. If 1/x appears in such a hierarchy, we will assume that x is a natural number. Hierarchies with more constants are defined in a similar way and are to be read from right to left.
§.§ Probabilistic tools
We will use the following standard probabilistic results (see <cit.> and <cit.>).
Let X be a sum of independent random variables with values in {0,1}. Then, for all 0<ε≤3/2,
ℙ(|X-𝔼[X]|≥ε𝔼[X])≤ 2exp(-ε^2/3𝔼[X]).
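To see how this bound is used later, note that in the proof of Lemma <ref> below, X counts the neighbours of a fixed vertex inside a uniformly random colour class out of k colours, so that 𝔼[X]=d/k; applying the bound with ε=1/2 gives ℙ(X<d/2k)≤ 2exp(-d/12k), which is exactly the estimate appearing there.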
Let A_1,…, A_n be events in a probability space. Suppose that each event A_i is independent of all but at most d of the other events A_j. If ℙ(A_i)≤ p for all i∈ [n] and ep(d+1)≤ 1, then ℙ(⋀_{i=1}^{n}A_i^c)>0, that is, with positive probability none of the events occur.
§.§ Dividing trees
Given a tree T, say that two subtrees S_1,S_2⊂ T divide T if S_1 and S_2 share exactly one vertex and T=S_1∪ S_2.
Let T be a tree and let Q⊂ V(T) be a fixed subset. Then, there exist subtrees S_1 and S_2 that divide T and |V(S_1)∩ Q|,|V(S_2)∩ Q|≥ |Q|/3.
The next result states that we can always find a small subtree containing a significant proportion of some target set.
For all α,β>0, there exists a constant δ>0 such that the following holds for all sufficiently large n∈ℕ. Let T be an n-vertex tree and let Q⊂ V(T) be a subset of size |Q|≥β n. Then there exist two subtrees S_1,S_2⊂ T dividing T such that |S_1|≤α n and |V(S_1)∩ Q|≥δ n.
We assume without loss of generality that α≤2/3 and set ℓ=⌈log_{2/3}α⌉. For 1≤ i≤ℓ, we will iteratively find subtrees S_1^i,S_2^i⊂ T such that
* S_1^i and S_2^i divide T,
* |S_1^i|≤(2/3)^in, and
* |V(S_1^i)∩ Q|≥β n/3^i.
Note that the case i=1 follows immediately by taking S_1^1 as the smallest tree from the output of Lemma <ref>. Suppose that, for some 1≤ i<ℓ, we have found a decomposition T=S_1^i∪ S_2^i satisfying <ref>–<ref>. Apply Lemma <ref> to S_1^i to find a decomposition S_1^i=F_1∪ F_2 with |F_1∩ Q|,|F_2∩ Q|≥ |V(S_1^i)∩ Q|/3≥β n/3^i+1. Letting S_1^i+1 be the smallest of F_1, F_2, we have
|S_1^{i+1}|≤ (|S_1^i|+1)/2 ≤ 2|S_1^i|/3 ≤ (2/3)^{i+1}n.
Taking S_2^i+1 as the smallest tree in T containing T∖ S_1^i+1, we complete this step. Let S_1=S_1^ℓ. Firstly, we note that
|S_1|≤(2/3)^ℓ n≤α n,
by the definition of ℓ, and, secondly, we have
|V(S_1)∩ Q|≥β n/3^ℓ≥β n/(3· 3^{log_{2/3}α})=α^{log_{2/3}(1/3)}·β n/3=δ n.
For a tree T, say a subset X⊂ V(T) is k-separated if every pair of vertices from X are at distance at least k. The following result says that large subsets of bounded degree trees contain a large separated subset.
Let Δ∈ℕ and k≥ 0. Let T be a tree with Δ(T)≤Δ which contains a subset X⊂ V(T) of size |X|≥ 3Δ^k. Then, there exists a subset Q⊂ X which is (2k+2)-separated in T and |Q|≥ |X|/((8k+8)Δ^k).
§.§ Expansion properties of pseudorandom graphs
In this section, we state and prove the expansion properties of (n,d,λ)-graphs that we will use throughout the paper. The main ingredient that we use is the well-known expander mixing lemma (see <cit.> for a proof).
Let G be an (n,d,λ)-graph. Then, for every pair of (not necessarily disjoint) sets A,B⊂ V(G), we have
|e(A,B)-(d/n)|A||B|| < λ√(|A||B|).
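As a quick illustration of the uniform edge distribution property mentioned in the introduction (a direct instantiation of the lemma, with the convention that e(A,A) counts the edges inside A with multiplicity two), taking B=A of size |A|=cn yields
|e(A,A)-c^2dn|<λ cn,
so when d/λ≥ C the number of edges inside A deviates from its random-like value c^2dn by a relative error of at most λ/(cd)≤ 1/(cC).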
Say that a graph G is m-joined if for every pair of disjoint subsets A,B⊂ V(G), each of size m, there exists at least one edge between them.
A direct consequence of the expander mixing lemma is that (n,d,λ)-graphs are λ n/d-joined.
If G is an (n,d,λ)-graph, then G is λ n/d-joined.
Given two disjoint subsets A,B⊂ V(G) of size |A|=|B|=λ n/d, by the Expander Mixing Lemma we have
e(A,B)> (d/n)|A||B|-λ√(|A||B|)=0,
as |A||B|=(λ n/d)^2.
The following lemma states that in (n,d,λ)-graphs we can translate minimum degree conditions into an expansion property for small sets.
Let 1/d≪ 1/C≪μ and let λ>0 satisfy d/λ≥ C. Let G be an (n,d,λ)-graph which contains a subset X⊂ V(G) such that every vertex v∈ V(G) has at least μ d neighbours in X. Then, if D=μ C/4, every subset S⊂ V(G) of size |S|≤λ n/d satisfies |N(S)∩ X|≥ D|S|.
Suppose that there exists a subset S⊂ V(G) of size 1≤ |S|≤λ n/d such that |N(S)∩ X|<D|S|. Let Y=N(S)∩ X. Using the Expander Mixing Lemma, we have
μ d|S|≤ e(S,Y)<(d/n)|S|·|Y| +λ√(|S||Y|)≤λ· D|S|+λ |S|√(D),
and thus
μ d<λ D+λ√(D)≤ 2λ D<2dD/C,
which is a contradiction.
§.§ Finding matchmakers
The following result was originally proved in <cit.>, however, we include its proof here in order to clarify the hierarchy of the constants that we will use later on.
Let 1/d≪μ≪ 1/ℓ and let G be a d-regular graph. Then, there exist disjoint subsets S_1,…,S_ℓ⊂ V(G), each with at most μ n vertices, such that every vertex v∈ V(G) has at least μ d/4 neighbours in each of the S_i's.
Let k=⌊ 3/μ⌋. We colour each vertex from G uniformly at random with an element from [k], making all choices independently. For v∈ V(G) and a colour i∈ [k], let A_v,i denote the event where v has less than d/2k neighbours in colour i. Then, using Lemma <ref>, we have
ℙ(A_v,i)≤ 2exp (-d/12k).
Note that the event A_{v,i} is independent of all the events A_{u,j} except those for which u and v have a common neighbour or u=v, and the number of such events is at most 2kd^2. Since kd^2e^{-d/12k}≪ 1 for large enough d, we can use Lemma <ref> to deduce that ℙ(⋀_{v,i}A_{v,i}^c)>0. Therefore, there is a colouring of V(G) such that every vertex has at least d/2k neighbours in each colour. Pick the ℓ smallest colour classes V_1,…, V_ℓ with |V_1|≤…≤ |V_ℓ| and thus
|V_i|≤ |V_ℓ|≤ n/(k-ℓ)≤ 2n/k<μ n.
§.§ The extendability method
Here we state some of the main tools of the extendability method, we refer to <cit.> for a more comprehensive exposition of this technique.
Let d,m∈ℕ be such that d≥ 3. Let G be a graph and let S⊂ G be a subgraph. Say that S is (d,m)-extendable if S has maximum degree at most d and for all U⊂ V(G) with 1≤ |U|≤ 2m one has
|Γ_G(U)∖ V(S)|≥ (d-1)|U|-∑_u∈ U∩ V(S)(d_S(u)-1).
The following lemma says that it is enough to control the external neighbourhood of small sets in order to verify extendability.
Let d,m∈ℕ satisfy d≥ 3 and m≥ 1. Let G be a graph and let S⊂ G be a subgraph with Δ(S)≤ d. If for all U⊂ V(G), with 1≤|U|≤ 2m, we have
|N(U,V(G)∖ V(S))|≥ d|U|,
then S is (d,m)-extendable in G.
The next lemma states that we can add vertices to an extendable subgraph while remaining extendable.
Let d,m∈ℕ be such that d≥ 3, and let G be an m-joined graph. Let S be a (d,m)-extendable subgraph of G such that |G|≥ |S|+(2d+3)m+1. Then for every s∈ V(S) with d_S(s)≤ d-1, there exists y∈ N_G(s)∖ V(S) such that S+sy is (d,m)-extendable.
A direct consequence of Lemma <ref> is that we can embed large trees and remain extendable.
Let d,m∈ℕ be such that d≥ 3, and let G be an m-joined graph. Let T be a tree with Δ(T)≤ d/2 and let H be a (d,m)-extendable subgraph of G with maximum degree at most d/2. If |H|+|T|≤|G|-(2d+3)m, then for every vertex t∈ V(T) and v∈ V(H), there is a copy S of T in G-V(H-v) in which t is copied to v and, moreover, S∪ H is a (d,m)-extendable subgraph of G.
Given a graph G and a subset X⊂ V(G), we let I(X) denote the independent subgraph induced by X, that is, the subgraph of G with vertex set X and no edges. The last ingredient we need is a covering result due to Montgomery <cit.>.
Let k,d,m∈ℕ with d≥ 20. Let G be an m-joined graph and H⊂ G be a subgraph with Δ(H)≤ d/4. Let X⊂ V(G)∖ V(H) be a subset such that H∪ I(X) is (d,m)-extendable in G and let T be a tree with Δ(T)≤ d/4 that satisfies |H|+|X|+|T|≤ |G|-10dm-2k.
Suppose that Q⊂ V(T) is a (4k+4)-separated set in T which satisfies |Q|≥ 3|X|, and let t∈ V(T) and v∈ V(H). Then, there is a copy S of T in G-V(H-v) so that t is copied to v, H∪ I(X)∪ S is (d,m)-extendable in G, |X∖ V(S)|≤ 2m/(d-1)^k, and all vertices in X∩ V(S) have a vertex in Q copied to them.
§ PROOFS
We start by fixing the constants
1/d≪ 1/C≪μ≪β≪ξ≪α≪ 1/Δ.
Let λ>0 satisfy d/λ≥ C, let G be an (n,d,λ)-graph and let T be an n-vertex tree with Δ(T)≤Δ which contains at least α n leaves.
Step 1. Setting the matchmakers: Apply Lemma <ref> to find pairwise disjoint sets V_1,V_2,V_3⊂ V(G) such that |V_1|,|V_2|,|V_3|≤μ n, and
A for every v∈ V(G) and i∈ [3], d(v,V_i)≥μ d/4.
Set V'=V(G)∖ V_1 and G'=G[V'], and set D=μ C/10^3. For a fixed vertex v_0∈ V'∖ (V_2∪ V_3), we claim that I(V_2∪{v_0}) is (D,λ n/d)-extendable in G'. Indeed, given a subset S⊂ V(G') of size 1≤ |S|≤λ n/d, using A and Lemma <ref>, we deduce that
|N_G'(S)|≥ |N_G(S)∩ V_3|≥ 10D|S|.
If λ n/d≤ |S|≤2λ n/d, then, similarly, we have
|N_G'(S)|≥ |N_G(S)∩ V_3|≥ 10D·λ n/d-|S|≥ 8D·λ n/d≥ 4D|S|.
Therefore, by Lemma <ref>, we conclude that I(V_2∪{v_0}) is (D,λ n/d)-extendable in G'.
Step 2. Covering the matchmaker: Use Corollary <ref> to divide T into subtrees T=T_1∪ T_2 so that
B1 |T_1|≤α n, and
B2 T_1 contains a set L of leaves from T of size |L|≥ξ n.
We will use T_1 to carefully cover each vertex from V_2. Let t∈ V(T) be the unique vertex that belongs to T_1 and T_2. We claim that there is a 12-separated set of parents of leaves Q⊂ V(T_1) of size |Q|≥β n which does not contain t.
Indeed, let P be the set of parents of L and note that, as T_1 is a tree with Δ(T_1)≤Δ, we have P⊂ V(T_1) and |P|≥ |L|/Δ≥ξ n/Δ. Then, use Lemma <ref> to find a 12-separated set Q⊂ P of size
|Q|≥ |P|/(48Δ^5)≥ξ n/(48Δ^6)≥ 2β n.
By potentially removing t from Q, we may assume without loss of generality that t∉Q and |Q|≥β n. Let T' be obtained from T_1 by removing all those leaves which are children of vertices in P. Our goal now is to find a copy of T' in G' so that every vertex from V_2 is covered by the image of a vertex from Q. Let
ℓ=⌊log_{D-1}(2λ n/d)⌋+1.
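For later reference, note that the definition of ℓ gives (D-1)^ℓ≥ 2λ n/d, and hence 2λ n/(d(D-1)^{ℓ+1})≤ 1/(D-1)<1; this is the calculation that will allow us to conclude, at the end of this step, that every vertex of V_2 is covered.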
There is a sequence of subtrees T'_1,…, T'_ℓ⊂ T_1 such that
C1 T_1=T'_1∪…∪ T'_ℓ,
C2 T'_i intersects T'_i+1 at a vertex t_i+1, and
C3 for each i∈ [ℓ], there is a (4i+8)-separated set Q_i⊂ V(T'_i) which does not contain t_i such that |Q_i|≥β n/(2(D-1)^i), if i≥ 2, and |Q_1|≥β n/3.
We say that a sequence of subtrees T'_1,…, T'_{i-1},S_i⊂ T_1 and (not necessarily distinct) vertices t_0,…, t_i∈ V(T_1) is a good sequence of length i if the following properties hold:
* t_0∈ V(T_1').
* T_1=T_1'∪…∪ T'_i-1∪ S_i.
* For 1≤ j≤ i-1, T'_j intersects T'_j-1 exactly at t_j, and S_i intersects T'_i-1 exactly at t_i.
* |V(S_i)∩ Q|≥ |Q|/3^{i-1} and |V(T'_j)∩ Q|≥ |Q|/3^j for j∈ [i-1].
First, note that S_1=T_1 and t_1=t gives a good sequence of length 1. We now show that we can find a good sequence of length ℓ. Suppose that, for some 1≤ i<ℓ, we have found a good sequence T'_1,…,T'_{i-1},S_i⊂ T_1 and t_0,…, t_i∈ V(T_1). Use Lemma <ref> to find subtrees T_i,S_{i+1}⊂ S_i such that S_{i+1} and T_i divide S_i and |V(S_{i+1})∩ Q|,|V(T_i)∩ Q|≥ |V(S_i)∩ Q|/3. Moreover, we may assume that t_i∈ V(T_i) and let t_{i+1} be the unique vertex in V(S_{i+1})∩ V(T_i). Finally, from <ref> we deduce that
|V(S_{i+1})∩ Q|,|V(T_i)∩ Q|≥ |V(S_i)∩ Q|/3≥ |Q|/3^i.
This implies that T'_1,…, T'_i,S_{i+1} and t_0,…, t_{i+1} is a good sequence of length i+1, and thus, iterating, we can find a good sequence of length ℓ. Let T'_1,…,T'_{ℓ-1},S_ℓ and t_0,…, t_ℓ be such a sequence and set T'_ℓ=S_ℓ. We claim that this sequence satisfies C1–C3. Indeed, let Q_1=Q∩ V(T'_1) and note that Q_1 is 12-separated as Q is 12-separated. For each 2≤ i≤ℓ, use Lemma <ref> to find a (4i+8)-separated set Q_i⊂ V(T'_i) of size
|Q_i|≥ |Q∩ V(T'_i)|/((16i+32)Δ^{2i+3})≥β n/(3^i(16i+32)Δ^{2i+3})≥β n/(D-1)^i,
provided D is sufficiently large compared to Δ. By potentially removing t_i from Q_i, we can assume that t_i∉Q_i and |Q_i|≥β n/(2(D-1)^i).
Now we find the copy of T' while covering every vertex from V_2. For 1≤ i≤ℓ, say that we have a Stage i situation if we have a subgraph F_i⊂ G' and a subset X_i⊂ V_2 disjoint from F_i such that the following properties hold:
D1F_i is a copy of T'_1∪…∪ T'_i with t copied to v_0.
D2 F_i∪ I(X_i) is (D,λ n/d)-extendable.
D3 every vertex in V_2∩ V(F_i) is covered by the image of a vertex from Q_1∪…∪ Q_i, and |V_2∖ V(F_i)|≤ 2λ n/(d(D-1)^{i+1}).
Let us first produce a Stage 1 situation. Firstly, from C3, we have that Q_1 is a 12-separated set in T'_1 of size
|Q_1|≥β n/3≥ 3|V_2|,
as |V_2|≤μ n and μ≪β. Secondly, since
|T'|+|V_2|≤α n+μ n≤ |G'|-20D·λ n/d,
we can use Lemma <ref> to find a copy F_1 of T'_1 in G' with t copied to v_0 such that, if X_1=V_2∖ V(F_1), we have that I(X_1)∪ F_1 is (D,λ n/d)-extendable in G' and |X_1|≤ 2λ n/(d(D-1)^2). Moreover, every vertex in V_2∩ V(F_1) is covered by the image of some vertex of Q_1, which proves that we have a Stage 1 situation. Assume that we have a Stage i situation, for some 1≤ i<ℓ, and let us show how to produce a Stage i+1 situation. Indeed, let F_i and X_i⊂ V_2 be such that D1–D3 hold. Again, from C3, we have a (4(i+1)+8)-separated set Q_{i+1}⊂ V(T'_{i+1}) which does not contain t_{i+1} such that
|Q_{i+1}|≥β n/(2(D-1)^{i+1})≥ 6λ n/(d(D-1)^{i+1})≥ 3|X_i|,
where we used D3, the inequality λ/d≤ 1/C≪β, and that X_i⊂ V_2∖ V(F_i). Then, since T'_1∪…∪ T'_{i+1}⊂ T and X_i⊂ V_2, equation (<ref>) implies that we can use Lemma <ref> to find a subgraph F_{i+1}⊃ F_i such that D1 is satisfied and, letting X_{i+1}=X_i∖ V(F_{i+1}), we have that F_{i+1}∪ I(X_{i+1}) is (D,λ n/d)-extendable, which shows that D2 holds. Moreover, every vertex in X_i∩ V(F_{i+1}) is covered by the image of some vertex from Q_{i+1} and |X_{i+1}|≤ 2λ n/(d(D-1)^{i+2}), showing that D3 holds.
This finally proves that we can reach a Stage ℓ situation. In this scenario, we have a subgraph F_ℓ⊂ G' which is a copy of T'_1∪…∪ T'_ℓ=T' (because of C1) such that t is copied to v_0. Moreover, by D3 and the definition of ℓ, we have
|V_2∖ V(F_ℓ)|≤ 2λ n/(d(D-1)^{ℓ+1})<1,
which implies that every vertex of V_2 is covered by the image of a vertex from Q_1∪…∪ Q_ℓ⊂ Q.
Step 3. Finishing the embedding: Recall that F_ℓ is a copy of T' so that F_ℓ is (D,λ n/d)-extendable in G', t is copied to v_0, and every vertex from V_2 is covered by the image of some vertex from Q. Recall that P is a set of parents of leaves of size |P|≥ξ n/Δ. Take a set L' of leaves such that there is a perfect matching in T between P and L', and set T”=T-L'. We first find a copy of T”. Note that
|T'|+|T”-(T'-t)|≤ n+1-|P|≤ |G'|-2μ n-ξ n/Δ≤ |G'|-(2D+3)·λ n/d,
as μ≪ξ and D·λ n/d≤ Dn/C≪ξ n. Therefore, we can use Corollary <ref> to find a subgraph F⊃ F_ℓ such that
F1 F is a copy of T”,
F2 F is (D,λ n/d)-extendable in G', and
F3 V_2 is contained in the image of P.
It is thus only left to embed the leaves of T that are adjacent to P. To this end, we only need to find a perfect matching between the image of P and the leftover vertices in G, for which we use the well-known Hall's matching theorem.
Let H be a bipartite graph with parts A and B. If for every subset U⊂ A we have |N(U)|≥ |U|, then H contains a matching covering A.
Let A be the image of P, let B=V(G)∖ V(F), and let H=G[A,B] be the bipartite graph induced by A and B. Note that in order to finish the embedding of T, it is enough to check the conditions of Lemma <ref> for H. Since F is (D,λ n/d)-extendable by F2, for any subset U⊂ A with |U|≤λ n/d, we have
|N_H(U)|≥ |Γ_G(U)∖ V(F)| ≥ (D-1)|U|-∑_u∈ U∩ V(F)(d_F(u)-1)
≥ (D-Δ)|U|
≥ |U|
as D≫Δ. For the sake of contradiction, suppose that we can find a subset U⊂ A with λ n/d<|U|≤ |A| such that |N_H(U)|<|U|. Firstly, note that, by Corollary <ref>, we have
|U|>|N_H(U)|≥ |B|-λ n/d.
Secondly, let W=B∖ N_H(U) and note that, as |A|=|B|, we have |A∖ U|<|W|≤λ n/d. Finally, use F3 and Lemma <ref> to see that
|W|>|A∖ U|≥ |N_H(W)|≥ |N_G(W)∩ V_2|≥ D|W|,
a contradiction. Therefore, we can use Lemma <ref> to complete the embedding of T and thus finish the proof.
We will now prove Theorem <ref>. Given a tree T, we say that P⊂ T is a bare path if all vertices of P have degree exactly 2 in T. The last ingredient we need is the following structural result about trees.
Let n,k,ℓ∈ℕ and let T be an n-vertex tree with at most ℓ leaves. Then, T contains a collection of at least n/(k+1)-(2ℓ-2) vertex-disjoint bare paths, each of length k.
For Δ∈ℕ fixed, let α>0 be sufficiently small and let C>0 and d∈ℕ be large enough. Let G be an (n,d,λ)-graph with d/λ≥ C and let T be an n-vertex tree with Δ(T)≤Δ. By Theorem <ref>, we can assume that T has less than α n leaves. Then, Lemma <ref> implies that T contains a collection of at least n/4-2(α n-2)≥ n/8 vertex-disjoint bare paths, each of length 3. Therefore, for k=n/8, we can find vertex-disjoint bare paths P_i=a_ib_ic_id_i, for i∈ [k]. For each i∈ [k], we let P̃_i denote the tree with vertex set V(P̃_i)={a_i,b_i,c_i,d_i} and edges E(P̃_i)={a_ib_i,b_ic_i,b_id_i} (see Figure <ref>).
Let T̃ be the tree obtained from T by replacing each bare path P_i with the tree P̃_i. Note that T̃ has n vertices, maximum degree at most Δ(T)≤Δ, and at least k≥α n leaves, one from each of the k bare paths we modified. Therefore, by Theorem <ref>, G contains a copy of T̃, and thus G^2 contains a copy of T.
§ CONCLUDING REMARKS
In this paper, we solved a question of Krivelevich (Question <ref>) about whether the square of (n,d,λ)-graphs contain spanning bounded degree trees. While doing so, we also made progress towards a question of Alon, Krivelevich, and Sudakov (Question <ref>), giving an affirmative answer provided the tree has linearly many leaves. Therefore, in view of Lemma <ref>, it only remains to show that trees with linearly many bare paths are present in optimal (n,d,λ)-graphs. However, this question is deeply related to Conjecture <ref> about the Hamiltonicity of (n,d,λ)-graphs which has remained open for over 20 years.
In a forthcoming paper together with Montgomery and Yan <cit.>, we further develop the extendability method in order to obtain, among other things, the following result for almost spanning trees in graphs with very weak expansion properties.
For every Δ∈ℕ, there exists a positive constant C such that the following holds for all n,m∈ℕ with n≥ Cm. If G is an m-joined graph on n+m-1 vertices, then G contains every n-vertex tree with maximum degree at most Δ.
Note that Theorem <ref> implies that (n,d,λ)-graphs, with d/λ≥ C for some large constant C, contain all bounded degree trees on n-λ n/d vertices. On the other hand, Theorem <ref> is tight, as shown for example by considering an n-vertex clique together with m-1 isolated vertices. However, (n,d,λ)-graphs have even better expansion properties than just being λ n/d-joined, and therefore we expect that the techniques from <cit.> can be adapted to (n,d,λ)-graphs in order to give a complete answer to Question <ref>.
|
http://arxiv.org/abs/2307.02347v3
|
20230705150310
|
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
|
[
"Peter Lorenz",
"Ricard Durall",
"Janis Keuper"
] |
cs.CV
|
[
"cs.CV",
"cs.CR"
] |
Detecting Images Generated by Deep Diffusion Models using their Local Intrinsic Dimensionality
Peter Lorenz, Ricard Durall, Janis Keuper
==============================================================================================
Diffusion models have recently been applied successfully for the visual synthesis of strikingly realistic images. This raises strong concerns about their potential for malicious purposes.
In this paper, we propose using the lightweight multi Local Intrinsic Dimensionality (multiLID), which was originally developed in the context of detecting adversarial examples, for the automatic detection of synthetic images and the identification of the corresponding generator networks.
In contrast to many existing detection approaches, which often only work for GAN-generated images, the proposed method provides close to perfect detection results in many realistic use cases.
Extensive experiments on known and newly created datasets demonstrate that the proposed multiLID approach exhibits superiority in diffusion detection and model identification.
Since the empirical evaluations of recent publications on the detection of generated images are often mainly focused on the “LSUN-Bedroom” dataset, we further establish a comprehensive benchmark for the detection of diffusion-generated images, including samples from several diffusion models with different image sizes.
The code for our experiments is provided at https://github.com/deepfake-study/deepfake_multiLIDhttps://github.com/deepfake-study/deepfake_multiLID.
§ INTRODUCTION
Recently, denoising diffusion probabilistic models (DDPMs) <cit.> have established a new paradigm in image generation thanks to their solid ability to synthesize high-quality images.
As a result, a large number of studies have been exploring novel network architectures <cit.>, alternative noise schedules to accelerate the sampling during inference <cit.> and state-of-the-art text-to-image approaches <cit.>.
Furthermore, numerous image generation platforms, both commercial and open-source, such as Midjourney <cit.>, Dall-e 2 <cit.>, Imagen <cit.>, Dreambooth <cit.>, and Stable Diffusion <cit.>, have contributed to bringing this technology closer to people, boosting significantly its popularity.
However, with the ease of generating content through diffusion models (DMs) at the click of a button, the amount of high-quality tampered content is growing, leading to potential privacy issues <cit.>.
As the consumption of media expands to social media and deliberate modifications are made to spread false information <cit.>, it becomes crucial to detect synthesized imagery.
Although there are several detectors available for identifying non-natural images, most have not been designed for diffusion content due to fundamental differences in the generation process.
For example, frequency-based approaches <cit.> have shown high detection scores when applied to images generated by generative adversarial networks (GANs), but they fail when DMs are employed.
The main reason for the phenomenon appears to be that GAN-generated images often exhibit distinct artifacts, characterized by a periodic, grid-like pattern, which is not present anymore in diffusion samples.
In order to circumvent this problem, Wang et al. <cit.> introduced a novel representation for effectively detecting DM-generated images.
Their approach involves analyzing the reconstruction error between synthetic and real images.
Nonetheless, although the aforementioned methods exhibit promising results, they rely on a vast amount of data to be trained on.
As a consequence, these systems might struggle when facing new scenarios with data scarcity.
Additionally, none of them has proven to be able to distinguish different DM-generated images within the same context, i.e., dataset.
In this paper, our main objective is to identify synthetic content, in particular, DM-generated images.
To that end, we introduce a novel pipeline consisting of i) forwarding the input images through an untrained ResNet <cit.> and extracting their feature-map representations; ii) computing the multi local intrinsic dimensionality (multiLID) <cit.>, a variant of the LID <cit.>, on the resulting lower-dimensional spaces; and iii) running a classifier to determine the nature of the input images given their multiLID.
We show that this proposal can successfully distinguish between synthetic and natural images, as well as among different DM-generators, while requiring a relatively small training dataset, i.e., around 1,600 samples per class.
To assess the effectiveness of our multiLID approach, we conduct an extended evaluation that encompasses images generated by various DMs, including unconditional and text-to-image generation setups, e.g., Glide <cit.>, DDPM <cit.>, Latent Diffusion <cit.>, Palette <cit.>, Stable Diffusion <cit.>, VQ Diffusion <cit.> and Diffusion Transformer (DiT) <cit.>.
We demonstrate that the multiLID representation has an effective identification capability through extensive experiments. The three main contributions of our work can be summarized as follows:
* We introduce a lightweight method, i.e., multiLID, for diffusion-generated content identification, whose capabilities extend beyond real and synthetic image classification, as it can also determine the specific generative model.
* We evaluate the performance of our proposed method on numerous datasets from standardized ones, such as LSUN-Bedroom, to state-of-the-art such as CiFake and ArtiFact.
* We conduct a thorough study to assess and characterize the proposed methodology.
§ RELATED WORK
In this section, we provide a brief overview of recent diffusion models for image generation and discuss various DM-detection approaches.
§.§ Diffusion Models for Image Generation
Diffusion models have emerged as a powerful image generation paradigm, which was originally inspired by non-equilibrium thermodynamics <cit.>.
Denoising diffusion probabilistic models (DDPMs), introduced by Ho et al. <cit.>, have exhibited notable generative capabilities compared to the advanced GAN paradigm <cit.>.
Song et al. <cit.> introduced the use of denoising diffusion implicit models (DDIMs) to speed up image generation while keeping a reasonable image quality trade-off.
A later work, ablated diffusion model (ADM) <cit.> finds a much more effective architecture with classifier guidance.
Finally, considering DDPMs as differential equations on manifolds, Liu et al. <cit.> proposed pseudo-numerical methods for diffusion models (PNDMs), which further enhance sampling efficiency and generation quality.
In the quest for progress, the vector quantized diffusion model (VQD) <cit.> proposed a conditional variant of DDPM incorporating a vector quantized variational auto-encoder (VQ-VAE) <cit.> to model the latent space.
Notably, the latent diffusion model (LDM) <cit.> has demonstrated superior robustness and efficiency compared to other diffusion models.
LDMs employ a cross-attention mechanism inspired by transformers <cit.> to effectively combine text and image input sequences within the latent space.
Building upon the foundation of LDM, the popular Stable Diffusion v2 has further enhanced generation performance while reducing computational requirements.
Recently, Peebles and Xie <cit.> were able to replace the U-Net <cit.> backbone in LDMs with a vision transformer, establishing a new paradigm called Diffusion Transformers (DiT).
Built upon DiT, Gao et al. <cit.> proposed a Masked Diffusion Transformer (MDT) that uses a masked latent modeling scheme to explicitly enhance the DMs' ability to learn contextual relations among object semantic parts in an image.
§.§ Detectors for Synthetic Images
The distinction between natural and synthetic images has captivated researchers since the advent of image generation.
Durall <cit.> discovered an approach to detect GAN-generated images based on classical frequency domain analysis.
Later, Qian et al. introduced the Frequency in Face Forgery Network (F3-Net) <cit.>.
The proposed framework is composed of two frequency-aware branches, one focused on mining subtle forgery patterns through frequency components partition, and the other aimed at extracting small-scale discrepancies of frequency statistics between real and synthetic images.
For each branch, a pre-trained classifier based on the Xception architecture <cit.> extracts the features, and subsequently both feature sources are combined into the final deepfake detector.
CNNDet <cit.> is another FFT-based detector, employing a pre-trained ResNet50 reused as a binary classifier.
Their objective was to show whether there is a common pattern in the Fourier domain of images generated by different GAN models that transfers to unknown synthetic data.
Chai et al. <cit.> introduced the detector Patch-Forensics (Pa-Fo) and found that splitting images into patches to limit the receptive field of a classifier improves the ability to detect manipulated parts in an image.
Self-Blended Images (SBI) <cit.> builds on the idea of creating its own forgeries to learn a generic and robust representation.
To achieve this, they used a pre-trained EfficientNet-B4 <cit.> classifier and fine-tuned it on landmark mismatch, blending boundary, color mismatch, and frequency inconsistency features.
With the emergence of DMs and their increasing dominance, traditional generative solutions like GANs have gradually been replaced.
Studies by Dong et al. <cit.> and Ricker et al. <cit.> have shown that tailored GAN-generated image detectors have also become outdated, as they rely on extracting synthetic artifacts using frequency-aware features or trainable noise patterns within the amplitude and phase spectra domains <cit.>, which are not that prominent in DM-generated images anymore.
Wang et al. <cit.> discovered that DM-generated images exhibit features that are more easily reconstructed by pre-trained diffusion models compared to natural images.
To identify such features, they presented Diffusion Reconstruction Error (DIRE).
Guo et al. <cit.> and Guarnera et al. <cit.> proposed hierarchical fine-grained labeling approaches for forged or synthetic images, utilizing carefully designed training sets.
The hierarchical formulation requires an extensive inclusion of forgery techniques in the training set, which can be challenging when the training data has limited diversity.
Amoroso et al. <cit.> explored the decoupling of semantic and style features in images, and demonstrated that synthetic images can display greater separability in the style domain.
Nonetheless, the practicality of semantic-style disentangling is challenging, as it necessitates tailored training sets.
§ METHOD
In this paper, we conduct a thorough investigation of the multiLID method <cit.>, originally developed for detecting adversarial examples, and validate its detection capability in the context of diffusion models.
Note that the direct application of multiLID to the images yields unsatisfactory results; therefore, we first employ an untrained ResNet18 <cit.> to extract low-dimensional features from the input images.
Then, we apply multiLID to these extracted features and finally train a classifier, specifically a random forest model.
The conceptual framework of our proposal is illustrated in <ref>.
§.§ Preliminaries
In this section, we explain the background of feature maps and intrinsic dimensionality.
Both are crucial for understanding the multiLID method.
The relevance of CNN feature maps cannot be overstated in the framework of our method.
Indeed, applying multiLID scores to the raw, high-dimensional data results in ineffective and uninformative outcomes.
However, if we employ the extracted lower-dimensional and structured feature maps, the performance improves dramatically.
The usage of neural networks to extract features is not new.
In fact, extensive research has been conducted on the properties of CNN feature maps, primarily focused on natural images.
In this regard, it is worth mentioning that the hypothesis suggesting that natural images lie on or near a low-dimensional manifold remains a topic of debate.
However, as argued by Goodfellow et al. <cit.>, there is at least some correctness in that assumption when it comes to images.
This assertion is supported by two noteworthy observations.
First, natural images exhibit local connectivity.
In other words, each image is surrounded by other highly similar images that can be reached through image transformations such as contrast and brightness adjustments.
Second, natural images appear to conform to a low-dimensional structure as the probability distribution of images is highly concentrated, i.e., randomly sampled pixels alone cannot assemble a meaningful image.
The combination of natural scenes and sensor properties is widely believed to result in sparse and concentrated image distributions, as supported by several empirical studies on image patches <cit.>.
In their seminal work, Olshausen et al. <cit.> demonstrated that natural images exhibit distinctive statistical regularities that differentiate them from random images.
Understanding these regularities has practical implications, such as more efficient coding of natural images and serving as a valuable prior in the field of computer vision <cit.>.
Furthermore, the low-dimensional manifold hypothesis has been extensively validated through rigorous experiments conducted on diverse image datasets <cit.>.
In addition, Fefferman et al. <cit.> proposed novel algorithms for systematically verifying the validity of this manifold hypothesis.
In the context of neural networks, Zhu et al. <cit.> presented a new neural network architecture that incorporates a low-dimensional manifold regularization term to improve the generalization performance of the model.
The authors argued that the high-dimensional nature of neural networks can lead to overfitting and poor generalization.
Moreover, neural networks heavily rely on low-dimensional textures and not on the shape information <cit.>.
In the same vein, it has been suggested that natural images can be represented as mixtures of textures residing on a low-dimensional manifold <cit.>.
Gont et al. <cit.> discovered that neural network features possess low-dimensional characteristics, which are easy to learn.
They also observed a decrease in the intrinsic dimension of features in the last layers of neural networks, with interesting dimensionality trends in the first layers.
Shortly after, Pope et al. <cit.> found that common natural image datasets indeed have very low intrinsic dimensions relative to the high number of pixels in the images.
In particular, they showed it with GAN-generated synthetic data.
Local Intrinsic Dimensionality (LID) is a method used to estimate the intrinsic dimensionality of a learned representation space.
LID measures the average distance between a point and its neighboring points <cit.> as illustrated in <ref>.
This is achieved through a maximum likelihood estimation that can be calculated as follows: Consider a mini-batch ℬ of N examples, and let d_i(x) denote the Euclidean distance between the sample x and its i-th nearest neighbor in ℬ.
Then the LID can be approximated as:
LID(x) = -( (1/k)∑_{i=1}^{k} log( d_i(x)/d_k(x) ) )^{-1},
where k is a hyper-parameter that determines the number of nearest neighbors, and d is the distance metric employed.
Ma et al. <cit.> introduced LID to characterize adversarial examples.
They argued that the average distance between samples and their neighbors in the learned latent space of a classifier exhibits distinct properties for adversarial and natural (not modified) samples.
They assessed LID on the j-dimensional latent representations of a neural network f(x), using the L_2 distance:
d_ℓ(x,y) = ‖ f_ℓ^{1..j}(x) - f_ℓ^{1..j}(y) ‖_2,
where ℓ∈ L represents the feature maps, and computed a vector of LID values sample-wise:
LID(x) = { LID_{d_ℓ}(x) }_{ℓ∈ L}.
They repeated this procedure for both natural and adversarial examples.
Finally, a logistic regression classifier was trained to detect adversarial samples.
The formal mathematical definition of the LID can be found in <ref>.
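To make the estimator concrete, the following minimal sketch (our own illustration, not code from the original implementation) computes the maximum likelihood LID estimate from the formula above for every sample in a mini-batch of flattened feature vectors; the choice of torch.cdist and the numerical clamping are implementation assumptions:

import torch

def lid_mle(feats: torch.Tensor, k: int = 10) -> torch.Tensor:
    # feats: (N, D) mini-batch of flattened feature vectors.
    # Returns one LID score per sample.
    dists = torch.cdist(feats, feats)            # pairwise Euclidean distances, (N, N)
    knn, _ = dists.topk(k + 1, largest=False)    # k+1 smallest, incl. the zero self-distance
    knn = knn[:, 1:]                             # distances d_1(x), ..., d_k(x)
    d_k = knn[:, -1:]                            # distance to the k-th nearest neighbour
    log_ratios = torch.log(knn / d_k.clamp_min(1e-12))
    # LID(x) = -( (1/k) * sum_i log(d_i(x)/d_k(x)) )^(-1)
    return -1.0 / log_ratios.mean(dim=1).clamp_max(-1e-12)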
§.§ Method - multiLID
The multiLID method <cit.> was designed to detect adversarial examples and is based on the LID. In this section, we explain what advantage multiLID has over the original LID method and its accompanying benefits.
In practice, the statistical estimate of intrinsic dimensionality (ID) is not solely dependent on the chosen neighborhood size.
Typically, the ID is evaluated on a mini-batch basis, where the k-th nearest neighbors are determined from a random sample of points in the latent space.
Although this approach might introduce some noise, it provides broader coverage of the space, while considering only a few neighbors for each ID evaluation.
Consequently, the summation aggregates the relative growth rate over potentially large distances in the latent space, see <ref>.
Lorenz et al. <cit.> argue that this summation step mixes locally discriminative information about the growth rate in close proximity with the growth rates computed from more distant points.
To address this, they propose “unfolding” <cit.> the growth rate estimation.
Instead of computing an aggregated (semi) local ID, they suggest calculating a feature vector, referred to as multiLID, for every sample x.
The length of this feature vector is k, and it is defined as:
multiLID_d(x)[i] = -log( d_i(x)/d_k(x) ),
where d represents the Euclidean distance.
By using this feature vector, multiLID aims to capture more fine-grained information about the relative growth rates at different distances for each sample.
For example, let the number of nearest neighbors be k=10 and suppose we extract eight feature maps (from ReLU activation layers) per sample.
Then, the multiLID feature vector has a length of k × 8 = 80, while the LID algorithm would yield a feature vector of length 8 because it sums over the nearest neighbors.
This approach allows us to consider the local growth rate information separately for each neighbor, without the need for aggregation.
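Under the same illustrative assumptions as the LID sketch above, the unfolded estimator keeps one entry per neighbour instead of aggregating them; concatenating these vectors over all extracted feature maps yields the final multiLID descriptor (length k × 8 = 80 in the example above):

import torch

def multilid(feats: torch.Tensor, k: int = 10) -> torch.Tensor:
    # One multiLID vector per sample for a single feature map: shape (N, k).
    dists = torch.cdist(feats, feats)
    knn, _ = dists.topk(k + 1, largest=False)
    knn = knn[:, 1:]                             # drop the zero self-distance
    d_k = knn[:, -1:]
    # multiLID_d(x)[i] = -log(d_i(x) / d_k(x)); no summation over neighbours.
    return -torch.log(knn / d_k.clamp_min(1e-12))

def multilid_descriptor(layer_feats: list[torch.Tensor], k: int = 10) -> torch.Tensor:
    # Flatten each feature map to (N, C*H*W) and concatenate the per-layer vectors.
    flat = [f.flatten(start_dim=1) for f in layer_feats]
    return torch.cat([multilid(f, k) for f in flat], dim=1)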
§ EXPERIMENTS
In this section, we first introduce the used datasets, then the experimental setup, and finally we present and discuss an extensive collection of experiments.
§.§ Datasets
This subsection provides an overview of the datasets used in our study, including details on those that are publicly available and those that we created from pre-trained models.
The datasets cover a range of image sizes, spanning from 32×32 to 768×768 pixels, and heterogeneous domains, such as faces, animals, places, and even images with artistic style.
§.§.§ Public Datasets
The following datasets are publicly available:
CiFake dataset <cit.> offers a collection of real and synthetic images, comprising a total of 120,000 images.
It combines 60,000 images sourced from the existing CIFAR-10 dataset <cit.> with an additional 60,000 DM-generated images.
The generation of the synthetic images is carried out by an LDM model <cit.>.
The dataset maintains the same classes as the original CIFAR-10 dataset.
ArtiFact is a large-scale image dataset <cit.>, which includes a diverse collection of real and synthetic images from multiple categories: human/human faces, animal/animal faces, places, vehicles, art, and many other real-life objects.
The real dataset comprises 8 subdatasets (ImageNet, AFHQ, CelebaHQ, COCO, FFHQ, Landscape, MetFaces, and LSUN (Bedroom, Car, Cat, Horse)) <cit.> to ensure diversity.
On the other hand, the synthetic dataset consists of DM-generated images from 25 distinct methods, including 13 GANs, 7 Diffusion, and 5 other miscellaneous generators.
For our evaluation, we randomly select images from six diffusion models (Glide, DDPM, Latent Diffusion, Palette, Stable Diffusion, VQ Diffusion) <cit.> and six GAN models (Big GAN, Gansformer, Gau GAN, Projected GAN, StyleGAN3, Taming Transformer) <cit.> to conduct our evaluations.
In total, we select 10,500 real and generated images with 5,250 images per category.
DiffusionDB is one of the first large-scale text-to-image datasets <cit.>.
The images are generated by Stable Diffusion using prompts from users in a Discord channel, and they exhibit an artistic style.
In our study, we work with the subset “2m_random_5k” <cit.>.
Since DiffusionDB does not provide a collection of real images, inspired by Xie et al. <cit.>, we employ LAION-5B and SAC datasets (see below).
LAION-5B is a large-scale web-based dataset <cit.>, which has over 5 billion images crawled from the Internet.
The images are annotated by CLIP <cit.> in many different languages.
Although this dataset provides different image sizes, we focus only on the high-resolution <cit.> subset.
Note that the images are center cropped to fit the synthetic datasets.
We use this dataset to compare synthetic images from DiffusionDB.
SAC (Simulacra Aesthetic Captions) dataset[The images in version 1.0 of SAC are provided as a subset in <https://s3.us-west-1.wasabisys.com/simulacrabot/sac.tar>. We only filter the images with size 512×512 pixels.] <cit.>
is created from various text-to-image diffusion models, such as CompVis latent GLIDE and Stable Diffusion.
It comprises over 40,000 user-generated prompts, predominantly consisting of images with artistic styles.
Xie et al. <cit.> observed that this dataset shares similarities with DiffusionDB and therefore, we use it as a real dataset to compare to DiffusionDB.
§.§.§ New Datasets
Additionally, we create new datasets from different models to further diversify and scale our evaluation. Further details on these datasets are given in <ref>.
SDv21: we sample 2,000 images using the pre-trained model <cit.>.
In order to generate the samples, we collect and utilize prompts from LAION-5B.
We employ the images from the LAION-5B dataset as a real dataset <cit.>.
LSUN-Bedroom: we sample 2,000 images (for each method) using several pre-trained models from diffusers <cit.>.
In particular, we leverage the following methods:
* {DDPM, DDIM, PNDM}-ema: The pre-trained model with the id “google/ddpm-ema-bedroom-256” includes DDPM, DDIM, and PNDM samplers.
* ADM: We download the pre-trained LSUN-Bedroom model of ADM <cit.> from the official repository <cit.>.
* sdv21: The pre-trained text-to-image model with the id “stabilityai/stable-diffusion-2-1” <cit.>.
SDv21 uses LDMs as a backend and additionally has integrated cross-attention to enable multi-modal conditioning <cit.>.
* LDM: We use the pre-trained text-to-image model with the id “CompVis/ldm-text2im-large-256” <cit.>.
* VQD: We use the pre-trained text-to-image model with the id “microsoft/vq-diffusion-ithq” <cit.>.
As a real dataset, we employ the images from LSUN-Bedroom dataset <cit.> from huggingface <cit.>.
We center-crop them to 256 × 256 pixels.
§.§ Experimental Setup
Data pre-processing:
All experiments are conducted on the aforementioned datasets.
First of all, we calculate the mean and standard deviation of each dataset and normalize the inputs.
Once we have a homogeneous data distribution, we feed the images into an untrained ResNet18 model[As a ResNet18 implementation, we use the model provided by the TIMM library <https://huggingface.co/docs/timm/index>. The selected layers are called: 1_conv2_1, 1_conv2_2, 2_conv2_1, 2_conv2_2, 3_conv2_1, 3_conv2_2, 4_conv2_1, 4_conv2_2.] <cit.>, which has the advantage of handling all the different image sizes, to extract their features.
Although the network is not trained <cit.>, it already suffices to distill the main characteristics of the data; we have not observed a difference in the detector's accuracy between untrained and trained weights (see <ref>).
Then, we compute the multiLID scores from the extracted feature maps.
Our training set contains 1,600 samples per class unless otherwise specified in the experiments, and finally we train a random forest classifier on the labeled multiLID scores.
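Putting the pieces together, a minimal sketch of this pipeline could look as follows; the features_only extraction (which exposes five stage outputs rather than the eight layers listed in the footnote), the stand-in batches, and the random forest hyper-parameters are our own simplifying assumptions, and multilid_descriptor refers to the sketch given in the method section:

import numpy as np
import timm
import torch
from sklearn.ensemble import RandomForestClassifier

# Untrained (randomly initialised) ResNet18 used as a fixed feature extractor.
extractor = timm.create_model("resnet18", pretrained=False, features_only=True)
extractor.eval()

def featurize(images: torch.Tensor, k: int = 10) -> np.ndarray:
    # images: normalized batch (N, 3, H, W) -> multiLID scores (N, k * #layers).
    with torch.no_grad():
        layer_feats = extractor(images)
    # multilid_descriptor as defined in the earlier sketch.
    return multilid_descriptor(layer_feats, k=k).numpy()

# Stand-in batches; in practice these are the normalized real/synthetic images.
x_real = torch.randn(64, 3, 224, 224)
x_fake = torch.randn(64, 3, 224, 224)

X = np.concatenate([featurize(x_real), featurize(x_fake)])
y = np.concatenate([np.zeros(len(x_real)), np.ones(len(x_fake))])
clf = RandomForestClassifier(n_estimators=100).fit(X, y)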
Evaluation metrics:
Following previous detection methods <cit.>, we also report the accuracy (ACC) in our experiments.
§.§ Classification
In this subsection, we present our findings across various datasets (refer to <ref>). In real-world situations, images might undergo unidentifiable post-processing operations, such as compression and resizing.
To assess the detectability of DM-generated images even after undergoing post-processing, we apply blurring and JPEG-compression techniques to both synthetic and authentic images, following the established procedure outlined in Wang et al. <cit.>.
Following a similar approach to Wang et al. (2023) <cit.>, we assess the resilience of multiLID under two classes of degradation: Gaussian blur and JPEG compression. The perturbations consist of five levels for Gaussian blur (σ = 0, 0.15, 0.5, 1, 3) and four levels for JPEG compression (quality = 100, 90, 60, 30).
Furthermore, we augment the training data by incorporating these perturbations.
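A possible implementation of the two degradations is sketched below (our own illustration; in particular, interpreting σ directly as the PIL blur radius is an assumption, since the paper only specifies the σ and quality levels):

import io
from PIL import Image, ImageFilter

BLUR_SIGMAS = [0, 0.15, 0.5, 1, 3]       # Gaussian blur levels
JPEG_QUALITIES = [100, 90, 60, 30]       # JPEG compression levels

def gaussian_blur(img: Image.Image, sigma: float) -> Image.Image:
    return img if sigma == 0 else img.filter(ImageFilter.GaussianBlur(radius=sigma))

def jpeg_compress(img: Image.Image, quality: int) -> Image.Image:
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).convert("RGB")

def augment(img: Image.Image) -> list[Image.Image]:
    # Augmented copies of one training image under every perturbation level.
    return [gaussian_blur(img, s) for s in BLUR_SIGMAS] + \
           [jpeg_compress(img, q) for q in JPEG_QUALITIES]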
In both scenarios, the multiLID algorithm demonstrates notably high accuracies when the training process involves data augmentation.
Notice that the accuracy results remain consistent regardless of the image size and dataset domain.
In <ref>, we provide additional results from other datasets and conduct an ablation study on the effects of Gaussian blur and JPEG compression degradation.
§.§ Model Strength Assessment
In this subsection, we investigate the boundaries of our approach.
In other words, we aim to gain more insight into the strength of the algorithm depending on the number of samples and the entries (multiLID scores) of the feature vectors.
Each extracted feature map of the ℓ=8 selected ResNet18 layers results in 10 multiLID scores, because we choose to compute the multiLID over the 10 nearest neighbors.
Note that the total length of the feature vector is 10 × ℓ = 80.
The first entries correspond to the first layers and the latter to the last layers of the network.
We evaluate the detection rates, in terms of accuracy, when using different numbers of samples and accumulating the entries over the feature vectors.
In <ref>, we benchmark our multiLID across two dimensions: i) the number of features; ii) the number of samples.
We run this experiment five times to ensure reproducibility.
We employ 2,000 samples per class, and our starting training-test split is 60-40%.
This implies that the training split is equal to 4,000× 0.6 = 2,400 and hence, 1,600 samples for the test set.
Notice that while the training data will be decreased, the test set size always remains the same (1,600 samples).
We can observe how, independently of the dataset, our model only needs 800 synthetic images to learn to distinguish real and DM-generated images.
In addition, one can notice that the first eight entries of the feature vectors do not contribute to the detection, as the detection rate is always around 0.5 across the evaluations (see <ref>).
Similar results were observed by <cit.> as discussed in <ref>.
Refer to the <ref> for an in-depth analysis.
As the number of training samples increases, and the feature vector entries are larger than eight, then the detection accuracy becomes uniformly accurate.
Moreover, we include the strength assessment over the variance in <ref>.
§.§ Identification and Transferability Capability Evaluation
In this section, we investigate the identification and transferability capabilities of the multiLID method. To address this objective, we raise the following questions: Can we achieve a dependable identification of each diffusion model through a multilabel classifier? If so, does the multiLID approach retain its transferability when applied to unfamiliar data originating from different models but belonging to the same domain?
To start answering the identification question, we explore the abilities of our approach on LSUN-Bedroom, as it has been widely used in previous literature <cit.>.
In <ref>, we plot the confusion matrix from different DMs: {DDPM, DDIM, PNDM}-ema, LDM, SDv21, and VQD.
The identification results indicate significantly high accuracy scores.
Furthermore, we investigate other datasets, such as CelebaHQ (<ref>), LSUN-Cat (<ref>), LSUN-Church (<ref>), to examine the generalizability of the identification.
Similarly to LSUN-Bedroom, the accuracy is perfect.
Refer to the <ref> for the results.
Limitation of the Identification.
We conduct a series of experiments on the ArtiFact dataset, which comprises 8 authentic datasets, 6 datasets generated from distinct GANs, and 6 datasets produced by different DMs.
For these experiments, we utilize a total of 10,500 real images and an equivalent number of 10,500 images generated by GANs and DMs.
We deviate from training a binary classifier solely for real and synthetic samples. Instead, we explore the classification of synthetic images originating from GANs and DMs separately. While accurately distinguishing between real and synthetic images (i.e., GAN or Diffusion) poses no challenge for our approach, we encounter difficulty in reliably differentiating between GAN- and DM-generated images (refer to the left part of <ref>).
Limitation of the Transferability.
On the other hand, when it comes to transferability, we evaluate it in the form of a matrix.
We conduct again our experiments on LSUN-Bedroom with different DM-generated images: {DDPM, DDIM, PNDM}-ema, LDM, SDv21, and VQD (see right <ref>).
Each classifier is trained on real and one of the diffusion-generated datasets.
We then evaluate each classifier on the datasets generated by the other DMs.
As expected, the accuracy within the same dataset is high; however, the transferability is very low.
As in the identification investigation, we validate our results on other datasets: CelebaHQ (<ref>), LSUN-Cat (<ref>), LSUN-Church (<ref>) datasets.
We obtain the same pattern as for LSUN-Bedroom.
Refer to the <ref> for these results.
Comparison to other Detectors.
In <ref>, it is evident that our method exhibits superior performance compared to other approaches, particularly when employing a limited quantity of training samples, namely 800 samples per class for both training and testing purposes.
The reason is that multiLID does not necessitate fine-tuning neural networks, eliminating the need for extensive datasets.
MultiLID operates without pre-trained networks, distinguishing it from other methodologies that rely on pre-trained weights from established classification architectures such as Xception, ResNet50, or EfficientNet-B4.
Another existing method, as presented by Durall <cit.>, operates without pre-trained weights and is based on simple Fourier analysis.
However, as explained by <cit.>, this type of frequency discrimination technique is unsuitable for working with DMs.
We extend our comparison by changing the classification
from a binary to a multi-class scenario.
From now on the detectors not only need to distinguish the real and synthetic samples but also among the different synthetics, i.e., different DMs.
To that end, we keep the training parameters the same as for the binary case in <ref>, but we modify the last layer in all the classifiers.
<ref> shows the accuracy scores for all the methods.
In general, we can see that the classification scores have decreased since having more classes usually poses a more complex problem for the classifiers.
It is noteworthy that the “Real” class is the most troublesome.
The multiLID solution exhibits solid detection results.
To further assess this claim, we evaluate the experiment on more datasets in sections <ref> and <ref> in the appendix.
§ CONCLUSION
This paper focuses on the detection of diffusion-generated images.
Driven by the observation that the grid-like pattern in the Fourier domain is no longer prominent, we propose the usage of a local intrinsic dimensionality variant called multiLID for the examination of diffusion-generated images.
By leveraging multiLID, we seek to gain insights and improve the detection performance specifically in the context of diffusion model-generated images.
Moreover, we aim to enhance the detection and identification of diffusion-generated images, addressing the shortcomings observed in previous detectors designed for GAN-generated images, such as FFT dependency or the need for large training datasets.
To conduct an in-depth study, we train on publicly available as well as self-constructed datasets consisting of images from different types of diffusion models, such as unconditional, conditional, and text-to-image models.
These datasets are specifically curated to enable the evaluation and analysis of DM-generated images.
By including images from various DMs, we provide a more comprehensive and diverse set of data for studying and assessing the identification and transferability of diffusion-generated images.
Our extensive experimental results show that the multiLID image representation significantly enhances the identification of DM-generated images, resulting in a highly effective approach for this particular task.
On the other hand, our solution does not offer good transferability, which might reduce the detector's applicability to unseen synthetics.
Nonetheless, we believe that future work can mitigate this drawback.
Acknowledgement
Thanks to Jay Wang, who suggested comparing his DiffusionDB with the artistic SAC dataset.
[Sohl-Dickstein et al.(2015)Sohl-Dickstein, Weiss, Maheswaranathan, and
Ganguli]sohl2015deep
Jascha Sohl-Dickstein, Eric Weiss, Niru Maheswaranathan, and Surya Ganguli.
Deep unsupervised learning using nonequilibrium thermodynamics.
In International Conference on Machine Learning, pages
2256–2265. PMLR, 2015.
[Ho et al.(2020)Ho, Jain, and Abbeel]ho2020denoising
Jonathan Ho, Ajay Jain, and Pieter Abbeel.
Denoising diffusion probabilistic models.
Advances in Neural Information Processing Systems,
33: 6840–6851, 2020.
[Song et al.(2020)Song, Meng, and Ermon]song2020denoising
Jiaming Song, Chenlin Meng, and Stefano Ermon.
Denoising diffusion implicit models.
arXiv preprint arXiv:2010.02502, 2020.
[Nichol and Dhariwal(2021)]nichol2021improved
Alexander Quinn Nichol and Prafulla Dhariwal.
Improved denoising diffusion probabilistic models.
In International Conference on Machine Learning, pages
8162–8171. PMLR, 2021.
[Dhariwal and Nichol(2021)]dhariwal2021diffusion
Prafulla Dhariwal and Alexander Nichol.
Diffusion models beat gans on image synthesis.
Advances in Neural Information Processing Systems,
34: 8780–8794, 2021.
[Liu et al.(2022)Liu, Ren, Lin, and Zhao]liu2022pseudo
Luping Liu, Yi Ren, Zhijie Lin, and Zhou Zhao.
Pseudo numerical methods for diffusion models on manifolds.
arXiv preprint arXiv:2202.09778, 2022.
[Rombach et al.(2022)Rombach, Blattmann, Lorenz, Esser, and
Ommer]rombach2022high
Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn
Ommer.
High-resolution image synthesis with latent diffusion models.
In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 10684–10695, 2022.
[Lu et al.(2022)Lu, Zhou, Bao, Chen, Li, and Zhu]lu2022dpm
Cheng Lu, Yuhao Zhou, Fan Bao, Jianfei Chen, Chongxuan Li, and Jun Zhu.
Dpm-solver: A fast ode solver for diffusion probabilistic model
sampling in around 10 steps.
arXiv preprint arXiv:2206.00927, 2022.
[Watson et al.(2022)Watson, Chan, Ho, and Norouzi]watson2022learning
Daniel Watson, William Chan, Jonathan Ho, and Mohammad Norouzi.
Learning fast samplers for diffusion models by differentiating
through sample quality.
In International Conference on Learning Representations, 2022.
[Salimans and Ho(2022)]salimans2022progressive
Tim Salimans and Jonathan Ho.
Progressive distillation for fast sampling of diffusion models.
arXiv preprint arXiv:2202.00512, 2022.
[Chen et al.(2022)Chen, Hu, Saharia, and Cohen]chen2022re
Wenhu Chen, Hexiang Hu, Chitwan Saharia, and William W Cohen.
Re-imagen: Retrieval-augmented text-to-image generator.
arXiv preprint arXiv:2209.14491, 2022.
[Ramesh et al.(2022)Ramesh, Dhariwal, Nichol, Chu, and
Chen]ramesh2022hierarchical
Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.
Hierarchical text-conditional image generation with clip latents.
arXiv preprint arXiv:2204.06125, 2022.
[Saharia et al.(2022a)Saharia, Chan, Saxena, Li, Whang,
Denton, Ghasemipour, Gontijo Lopes, Karagol Ayan, Salimans,
et al.]saharia2022photorealistic
Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily L
Denton, Kamyar Ghasemipour, Raphael Gontijo Lopes, Burcu Karagol Ayan, Tim
Salimans, et al.
Photorealistic text-to-image diffusion models with deep language
understanding.
Advances in Neural Information Processing Systems,
35: 36479–36494, 2022a.
[Gu et al.(2022)Gu, Chen, Bao, Wen, Zhang, Chen, Yuan, and
Guo]gu2022vector
Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan,
and Baining Guo.
Vector quantized diffusion model for text-to-image synthesis.
In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 10696–10706, 2022.
[Ruiz et al.(2023)Ruiz, Li, Jampani, Pritch, Rubinstein, and
Aberman]ruiz2023dreambooth
Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and
Kfir Aberman.
Dreambooth: Fine tuning text-to-image diffusion models for
subject-driven generation.
In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 22500–22510, 2023.
[Holz(2022a)]midjourney
David Holz.
Midjoureny.
<https://docs.midjourney.com/docs/model-versions>,
2022a.
[Online; accessed 26-June-2023].
[Holz(2022b)]dalle2
David Holz.
Dall-e 2.
<https://labs.openai.com>, 2022b.
[Online; accessed 27-June-2023].
[Carlini et al.(2023)Carlini, Hayes, Nasr, Jagielski, Sehwag, Tramer,
Balle, Ippolito, and Wallace]carlini2023extracting
Nicholas Carlini, Jamie Hayes, Milad Nasr, Matthew Jagielski, Vikash Sehwag,
Florian Tramer, Borja Balle, Daphne Ippolito, and Eric Wallace.
Extracting training data from diffusion models.
arXiv preprint arXiv:2301.13188, 2023.
[Zhu et al.(2023)Zhu, Chen, Grossklags, and Fritz]zhu2023data
Derui Zhu, Dingfan Chen, Jens Grossklags, and Mario Fritz.
Data forensics in diffusion models: A systematic analysis of
membership privacy.
arXiv preprint arXiv:2302.07801, 2023.
[for Information Security(2023)]dangerdf
German Federal Office for Information Security.
Deep Fakes – Threats and Countermeasures.
<https://www.bsi.bund.de/EN/Themen/Unternehmen-und-Organisationen/Informationen-und-Empfehlungen/Kuenstliche-Intelligenz/Deepfakes/deepfakes_node.html>,
2023.
[Online; accessed 14-June-2023].
[Zhang et al.(2019)Zhang, Karaman, and Chang]zhang2019detecting
Xu Zhang, Svebor Karaman, and Shih-Fu Chang.
Detecting and simulating artifacts in gan fake images.
In 2019 IEEE international workshop on information forensics
and security (WIFS), pages 1–6. IEEE, 2019.
[Durall et al.(2019)Durall, Keuper, Pfreundt, and
Keuper]durall2019unmasking
Ricard Durall, Margret Keuper, Franz-Josef Pfreundt, and Janis Keuper.
Unmasking deepfakes with simple features.
arXiv preprint arXiv:1911.00686, 2019.
[Ricker et al.(2022)Ricker, Damm, Holz, and Fischer]ricker2022towards
Jonas Ricker, Simon Damm, Thorsten Holz, and Asja Fischer.
Towards the detection of diffusion model deepfakes.
arXiv preprint arXiv:2210.14571, 2022.
[Wang et al.(2023)Wang, Bao, Zhou, Wang, Hu, Chen, and
Li]wang2023dire
Zhendong Wang, Jianmin Bao, Wengang Zhou, Weilun Wang, Hezhen Hu, Hong Chen,
and Houqiang Li.
Dire for diffusion-generated image detection.
arXiv preprint arXiv:2303.09295, 2023.
[Roweis and Saul(2000)]roweis2000nonlinear
Sam T Roweis and Lawrence K Saul.
Nonlinear dimensionality reduction by locally linear embedding.
Science, 290 (5500): 2323–2326, 2000.
[Lorenz et al.(2022)Lorenz, Keuper, and Keuper]lorenz2022unfolding
Peter Lorenz, Margret Keuper, and Janis Keuper.
Unfolding local growth rate estimates for (almost) perfect
adversarial detection.
arXiv preprint arXiv:2212.06776, 2022.
[Ma et al.(2018)Ma, Li, Wang, Erfani, Wijewickrema, Schoenebeck, Song,
Houle, and Bailey]ma2018characterizing
Xingjun Ma, Bo Li, Yisen Wang, Sarah M Erfani, Sudanthi Wijewickrema, Grant
Schoenebeck, Dawn Song, Michael E Houle, and James Bailey.
Characterizing adversarial subspaces using local intrinsic
dimensionality.
arXiv preprint arXiv:1801.02613, 2018.
[Nichol et al.(2021)Nichol, Dhariwal, Ramesh, Shyam, Mishkin, McGrew,
Sutskever, and Chen]nichol2021glide
Alex Nichol, Prafulla Dhariwal, Aditya Ramesh, Pranav Shyam, Pamela Mishkin,
Bob McGrew, Ilya Sutskever, and Mark Chen.
Glide: Towards photorealistic image generation and editing with
text-guided diffusion models.
arXiv preprint arXiv:2112.10741, 2021.
[Saharia et al.(2022b)Saharia, Chan, Chang, Lee, Ho,
Salimans, Fleet, and Norouzi]saharia2022palette
Chitwan Saharia, William Chan, Huiwen Chang, Chris Lee, Jonathan Ho, Tim
Salimans, David Fleet, and Mohammad Norouzi.
Palette: Image-to-image diffusion models.
In ACM SIGGRAPH 2022 Conference Proceedings, pages 1–10,
2022b.
[Peebles and Xie(2022)]peebles2022scalable
William Peebles and Saining Xie.
Scalable diffusion models with transformers.
arXiv preprint arXiv:2212.09748, 2022.
[Karras et al.(2019)Karras, Laine, and Aila]karras2019style
Tero Karras, Samuli Laine, and Timo Aila.
A style-based generator architecture for generative adversarial
networks.
In Proceedings of the IEEE/CVF conference on computer vision
and pattern recognition, pages 4401–4410, 2019.
[Van Den Oord et al.(2017)Van Den Oord, Vinyals, et al.]van2017neural
Aaron Van Den Oord, Oriol Vinyals, et al.
Neural discrete representation learning.
Advances in neural information processing systems, 30, 2017.
[Esser et al.(2021)Esser, Rombach, and Ommer]esser2021taming
Patrick Esser, Robin Rombach, and Bjorn Ommer.
Taming transformers for high-resolution image synthesis.
In Proceedings of the IEEE/CVF conference on computer vision
and pattern recognition, pages 12873–12883, 2021.
[Ronneberger et al.(2015)Ronneberger, Fischer, and
Brox]ronneberger2015u
Olaf Ronneberger, Philipp Fischer, and Thomas Brox.
U-net: Convolutional networks for biomedical image segmentation.
In Medical Image Computing and Computer-Assisted
Intervention–MICCAI 2015: 18th International Conference, Munich, Germany,
October 5-9, 2015, Proceedings, Part III 18, pages 234–241. Springer, 2015.
[Gao et al.(2023)Gao, Zhou, Cheng, and Yan]gao2023masked
Shanghua Gao, Pan Zhou, Ming-Ming Cheng, and Shuicheng Yan.
Masked diffusion transformer is a strong image synthesizer.
arXiv preprint arXiv:2303.14389, 2023.
[Qian et al.(2020)Qian, Yin, Sheng, Chen, and Shao]qian2020thinking
Yuyang Qian, Guojun Yin, Lu Sheng, Zixuan Chen, and Jing Shao.
Thinking in frequency: Face forgery detection by mining
frequency-aware clues.
In Computer Vision–ECCV 2020: 16th European Conference,
Glasgow, UK, August 23–28, 2020, Proceedings, Part XII, pages 86–103.
Springer, 2020.
[Chollet(2017)]fran2017deep
François Chollet.
Xception: Deep learning with depthwise separable convolutions.
In IEEE conference on computer vision and pattern recognition
(CVPR), 2017.
[Wang et al.(2020)Wang, Wang, Zhang, Owens, and Efros]wang2020cnn
Sheng-Yu Wang, Oliver Wang, Richard Zhang, Andrew Owens, and Alexei A Efros.
Cnn-generated images are surprisingly easy to spot... for now.
In Proceedings of the IEEE/CVF conference on computer vision
and pattern recognition, pages 8695–8704, 2020.
[Chai et al.(2020)Chai, Bau, Lim, and Isola]chai2020makes
Lucy Chai, David Bau, Ser-Nam Lim, and Phillip Isola.
What makes fake images detectable? understanding properties that
generalize.
In Computer Vision–ECCV 2020: 16th European Conference,
Glasgow, UK, August 23–28, 2020, Proceedings, Part XXVI 16, pages 103–120.
Springer, 2020.
[Shiohara and Yamasaki(2022)]shiohara2022detecting
Kaede Shiohara and Toshihiko Yamasaki.
Detecting deepfakes with self-blended images.
In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 18720–18729, 2022.
[Tan and Le(2019)]tan2019efficientnet
Mingxing Tan and Quoc Le.
Efficientnet: Rethinking model scaling for convolutional neural
networks.
In International conference on machine learning, pages
6105–6114. PMLR, 2019.
[Dong et al.(2022)Dong, Kumar, and Liu]dong2022think
Chengdong Dong, Ajay Kumar, and Eryun Liu.
Think twice before detecting gan-generated fake images from their
spectral domain imprints.
In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 7865–7874, 2022.
[Asnani et al.(2021)Asnani, Yin, Hassner, and Liu]asnani2021reverse
Vishal Asnani, Xi Yin, Tal Hassner, and Xiaoming Liu.
Reverse engineering of generative models: Inferring model
hyperparameters from generated images.
arXiv preprint arXiv:2106.07873, 2021.
[Sinitsa and Fried(2023)]sinitsa2023deep
Sergey Sinitsa and Ohad Fried.
Deep image fingerprint: Accurate and low budget synthetic image
detector.
arXiv preprint arXiv:2303.10762, 2023.
[Guo et al.(2023)Guo, Liu, Ren, Grosz, Masi, and
Liu]guo2023hierarchical
Xiao Guo, Xiaohong Liu, Zhiyuan Ren, Steven Grosz, Iacopo Masi, and Xiaoming
Liu.
Hierarchical fine-grained image forgery detection and localization.
In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 3155–3165, 2023.
[Guarnera et al.(2023)Guarnera, Giudice, and
Battiato]guarnera2023level
Luca Guarnera, Oliver Giudice, and Sebastiano Battiato.
Level up the deepfake detection: a method to effectively discriminate
images generated by gan architectures and diffusion models.
arXiv preprint arXiv:2303.00608, 2023.
[Amoroso et al.(2023)Amoroso, Morelli, Cornia, Baraldi, Del Bimbo, and
Cucchiara]amoroso2023parents
Roberto Amoroso, Davide Morelli, Marcella Cornia, Lorenzo Baraldi, Alberto
Del Bimbo, and Rita Cucchiara.
Parents and children: Distinguishing multimodal deepfakes from
natural images.
arXiv preprint arXiv:2304.00500, 2023.
[Krizhevsky et al.(2009)Krizhevsky, Hinton,
et al.]krizhevsky2009learning
Alex Krizhevsky, Geoffrey Hinton, et al.
Learning multiple layers of features from tiny images.
arXiv, 2009.
[Goodfellow et al.(2016)Goodfellow, Bengio, and
Courville]goodfellow2016deep
Ian Goodfellow, Yoshua Bengio, and Aaron Courville.
Deep learning.
MIT press, 2016.
[Lee et al.(2003)Lee, Pedersen, and Mumford]lee2003nonlinear
Ann B Lee, Kim S Pedersen, and David Mumford.
The nonlinear statistics of high-contrast patches in natural images.
International Journal of Computer Vision, 54:
83–103, 2003.
[Donoho and Grimes(2005)]donoho2005image
David L Donoho and Carrie Grimes.
Image manifolds which are isometric to euclidean space.
Journal of mathematical imaging and vision, 23
(1): 5–24, 2005.
[Carlsson et al.(2008)Carlsson, Ishkhanov, De Silva, and
Zomorodian]carlsson2008local
Gunnar Carlsson, Tigran Ishkhanov, Vin De Silva, and Afra Zomorodian.
On the local behavior of spaces of natural images.
International journal of computer vision, 76: 1–12,
2008.
[Olshausen and Field(1996)]olshausen1996natural
Bruno A Olshausen and David J Field.
Natural image statistics and efficient coding.
Network: computation in neural systems, 7
(2): 333–339, 1996.
[Peyré(2009)]peyre2009manifold
Gabriel Peyré.
Manifold models for signals and images.
Computer vision and image understanding, 113
(2): 249–260, 2009.
[Ruderman(1994)]ruderman1994statistics
Daniel L Ruderman.
The statistics of natural images.
Network: computation in neural systems, 5
(4): 517, 1994.
[Schölkopf et al.(1998)Schölkopf, Smola, and
Müller]scholkopf1998nonlinear
Bernhard Schölkopf, Alexander Smola, and Klaus-Robert Müller.
Nonlinear component analysis as a kernel eigenvalue problem.
Neural computation, 10 (5): 1299–1319,
1998.
[Tenenbaum et al.(2000)Tenenbaum, Silva, and
Langford]tenenbaum2000global
Joshua B Tenenbaum, Vin de Silva, and John C Langford.
A global geometric framework for nonlinear dimensionality reduction.
Science, 290 (5500): 2319–2323, 2000.
[Brand(2002)]brand2002charting
Matthew Brand.
Charting a manifold.
Advances in neural information processing systems, 15, 2002.
[Fefferman et al.(2016)Fefferman, Mitter, and
Narayanan]fefferman2016testing
Charles Fefferman, Sanjoy Mitter, and Hariharan Narayanan.
Testing the manifold hypothesis.
Journal of the American Mathematical Society, 29
(4): 983–1049, 2016.
[Zhu et al.(2018)Zhu, Qiu, Huang, Calderbank, Sapiro, and
Daubechies]zhu2018ldmnet
Wei Zhu, Qiang Qiu, Jiaji Huang, Robert Calderbank, Guillermo Sapiro, and
Ingrid Daubechies.
Ldmnet: Low dimensional manifold regularized neural networks.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 2743–2751, 2018.
[Geirhos et al.(2018)Geirhos, Rubisch, Michaelis, Bethge, Wichmann, and
Brendel]geirhos2018imagenet
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A
Wichmann, and Wieland Brendel.
Imagenet-trained cnns are biased towards texture; increasing shape
bias improves accuracy and robustness.
arXiv preprint arXiv:1811.12231, 2018.
[Vacher and Coen-Cagli(2019)]vacher2019combining
Jonathan Vacher and Ruben Coen-Cagli.
Combining mixture models with linear mixing updates: multilayer image
segmentation and synthesis.
feedback, 19: 15, 2019.
[Vacher et al.(2020)Vacher, Davila, Kohn, and
Coen-Cagli]vacher2020texture
Jonathan Vacher, Aida Davila, Adam Kohn, and Ruben Coen-Cagli.
Texture interpolation for probing visual perception.
Advances in neural information processing systems,
33: 22146–22157, 2020.
[Gong et al.(2019)Gong, Boddeti, and Jain]gong2019intrinsic
Sixue Gong, Vishnu Naresh Boddeti, and Anil K Jain.
On the intrinsic dimensionality of image representations.
In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 3987–3996, 2019.
[Pope et al.(2021)Pope, Zhu, Abdelkader, Goldblum, and
Goldstein]pope2021intrinsic
Phillip Pope, Chen Zhu, Ahmed Abdelkader, Micah Goldblum, and Tom Goldstein.
The intrinsic dimension of images and its impact on learning.
arXiv preprint arXiv:2104.08894, 2021.
[Amsaleg et al.(2015)Amsaleg, Chelly, Furon, Girard, Houle,
Kawarabayashi, and Nett]amsaleg2015estimating
Laurent Amsaleg, Oussama Chelly, Teddy Furon, Stéphane Girard, Michael E
Houle, Ken-ichi Kawarabayashi, and Michael Nett.
Estimating local intrinsic dimensionality.
In Proceedings of the 21th ACM SIGKDD International Conference
on Knowledge Discovery and Data Mining, pages 29–38, 2015.
[Houle(2017)]houle2017local
Michael E Houle.
Local intrinsic dimensionality i: an extreme-value-theoretic
foundation for similarity applications.
In Similarity Search and Applications: 10th International
Conference, SISAP 2017, Munich, Germany, October 4-6, 2017, Proceedings 10,
pages 64–79. Springer, 2017.
[Bird and Lotfi(2023)]bird2023cifake
Jordan J Bird and Ahmad Lotfi.
Cifake: Image classification and explainable identification of
ai-generated synthetic images.
arXiv preprint arXiv:2303.14126, 2023.
[Awsafur Rahman et al.(2023)Awsafur Rahman, Paul, Haque Sarker, Hakim,
and Anowarul Fattah]awsafur2023artifact
Md Awsafur Rahman, Bishmoy Paul, Najibul Haque Sarker, Zaber Ibn Abdul Hakim,
and Shaikh Anowarul Fattah.
Artifact: A large-scale dataset with artificial and factual images
for generalizable and robust synthetic image detection.
arXiv e-prints, pages arXiv–2302, 2023.
[Deng et al.(2009)Deng, Dong, Socher, Li, Li, and
Fei-Fei]deng2009imagenet
Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei.
Imagenet: A large-scale hierarchical image database.
In 2009 IEEE conference on computer vision and pattern
recognition, pages 248–255. Ieee, 2009.
[Choi et al.(2020)Choi, Uh, Yoo, and Ha]choi2020stargan
Yunjey Choi, Youngjung Uh, Jaejun Yoo, and Jung-Woo Ha.
Stargan v2: Diverse image synthesis for multiple domains.
In Proceedings of the IEEE/CVF conference on computer vision
and pattern recognition, pages 8188–8197, 2020.
[Karras et al.(2017)Karras, Aila, Laine, and
Lehtinen]karras2017progressive
Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen.
Progressive growing of gans for improved quality, stability, and
variation.
arXiv preprint arXiv:1710.10196, 2017.
[Lin et al.(2014)Lin, Maire, Belongie, Hays, Perona, Ramanan,
Dollár, and Zitnick]lin2014microsoft
Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva
Ramanan, Piotr Dollár, and C Lawrence Zitnick.
Microsoft coco: Common objects in context.
In Computer Vision–ECCV 2014: 13th European Conference,
Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, pages
740–755. Springer, 2014.
[Logacheva et al.(2020)Logacheva, Suvorov, Khomenko, Mashikhin, and
Lempitsky]logacheva2020deeplandscape
Elizaveta Logacheva, Roman Suvorov, Oleg Khomenko, Anton Mashikhin, and Victor
Lempitsky.
Deeplandscape: Adversarial modeling of landscape videos.
In Computer Vision–ECCV 2020: 16th European Conference,
Glasgow, UK, August 23–28, 2020, Proceedings, Part XXIII 16, pages
256–272. Springer, 2020.
[Karras et al.(2020)Karras, Aittala, Hellsten, Laine, Lehtinen, and
Aila]karras2020training
Tero Karras, Miika Aittala, Janne Hellsten, Samuli Laine, Jaakko Lehtinen, and
Timo Aila.
Training generative adversarial networks with limited data.
Advances in neural information processing systems,
33: 12104–12114, 2020.
[Yu et al.(2015)Yu, Seff, Zhang, Song, Funkhouser, and
Xiao]yu2015lsun
Fisher Yu, Ari Seff, Yinda Zhang, Shuran Song, Thomas Funkhouser, and Jianxiong
Xiao.
Lsun: Construction of a large-scale image dataset using deep learning
with humans in the loop.
arXiv preprint arXiv:1506.03365, 2015.
[Brock et al.(2018)Brock, Donahue, and Simonyan]brock2018large
Andrew Brock, Jeff Donahue, and Karen Simonyan.
Large scale gan training for high fidelity natural image synthesis.
arXiv preprint arXiv:1809.11096, 2018.
[Hudson and Zitnick(2021)]hudson2021generative
Drew A Hudson and Larry Zitnick.
Generative adversarial transformers.
In International conference on machine learning, pages
4487–4499. PMLR, 2021.
[Park et al.(2019)Park, Liu, Wang, and Zhu]park2019gaugan
Taesung Park, Ming-Yu Liu, Ting-Chun Wang, and Jun-Yan Zhu.
Gaugan: semantic image synthesis with spatially adaptive
normalization.
In ACM SIGGRAPH 2019 Real-Time Live!, pages 1–1, 2019.
[Sauer et al.(2021)Sauer, Chitta, Müller, and
Geiger]sauer2021projected
Axel Sauer, Kashyap Chitta, Jens Müller, and Andreas Geiger.
Projected gans converge faster.
Advances in Neural Information Processing Systems,
34: 17480–17492, 2021.
[Karras et al.(2021)Karras, Aittala, Laine, Härkönen, Hellsten,
Lehtinen, and Aila]Karras2021
Tero Karras, Miika Aittala, Samuli Laine, Erik Härkönen, Janne Hellsten,
Jaakko Lehtinen, and Timo Aila.
Alias-free generative adversarial networks.
In Proc. NeurIPS, 2021.
[Wang et al.(2022)Wang, Montoya, Munechika, Yang, Hoover, and
Chau]wang2022diffusiondb
Zijie J Wang, Evan Montoya, David Munechika, Haoyang Yang, Benjamin Hoover, and
Duen Horng Chau.
Diffusiondb: A large-scale prompt gallery dataset for text-to-image
generative models.
arXiv preprint arXiv:2210.14896, 2022.
[Wang(2022)]diffusiondb2mfirst
Jay Wang.
DiffusionDB: “2m_first_5ktrain”.
<https://huggingface.co/datasets/poloclub/diffusiondb/viewer/2m_first_5k/train>,
2022.
[Online; accessed 10-July-2023].
[Xie et al.(2023)Xie, Pan, Ma, Jie, and Mei]xie2023prompt
Yutong Xie, Zhaoying Pan, Jinge Ma, Luo Jie, and Qiaozhu Mei.
A prompt log analysis of text-to-image generation systems.
In Proceedings of the ACM Web Conference 2023, pages
3892–3902, 2023.
[Schuhmann et al.(2022)Schuhmann, Beaumont, Vencu, Gordon, Wightman,
Cherti, Coombes, Katta, Mullis, Wortsman, et al.]schuhmann2022laion
Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross
Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell
Wortsman, et al.
Laion-5b: An open large-scale dataset for training next generation
image-text models.
arXiv preprint arXiv:2210.08402, 2022.
[Radford et al.(2021)Radford, Kim, Hallacy, Ramesh, Goh, Agarwal,
Sastry, Askell, Mishkin, Clark, et al.]radford2021learning
Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh,
Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark,
et al.
Learning transferable visual models from natural language
supervision.
In International conference on machine learning, pages
8748–8763. PMLR, 2021.
[LAION(2022)]laionhq
LAION.
LAION High Resolution.
<https://huggingface.co/datasets/laion/laion-high-resolution>,
2022.
[Online; accessed 10-July-2023].
[Pressman et al.(2022)Pressman, Crowson, and
Contributors]pressmancrowson2022
John David Pressman, Katherine Crowson, and Simulacra Captions Contributors.
Simulacra aesthetic captions.
Technical Report Version 1.0, Stability AI, 2022.
URL: https://github.com/JD-P/simulacra-aesthetic-captions.
[StabilityAI(2023)]stablediffmodel
StabilityAI.
Stable Diffusion version 2.1.
<https://huggingface.co/stabilityai/stable-diffusion-2-1>, 2023.
[Online; accessed 10-July-2023].
[von Platen et al.(2022)von Platen, Patil, Lozhkov, Cuenca, Lambert,
Rasul, Davaadorj, and Wolf]von2022diffusers
Patrick von Platen, Suraj Patil, Anton Lozhkov, Pedro Cuenca, Nathan Lambert,
Kashif Rasul, Mishig Davaadorj, and Thomas Wolf.
Diffusers: State-of-the-art diffusion models, 2022.
[OpenAI(2022)]guideddiff
OpenAI.
Guided Diffusion.
<https://github.com/deepfake-study/guided-diffusion>, 2022.
[Online; accessed 10-July-2023].
[Alammar(2022)]stablediff
Jay Alammar.
The Illustrated Stable Diffusion.
<https://jalammar.github.io/illustrated-stable-diffusion>, 2022.
[Online; accessed 10-July-2023].
[Gu et al.(2021)Gu, Chen, Bao, Wen, Zhang, Chen, Yuan, and
Guo]gu2021vector
Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan,
and Baining Guo.
Vector quantized diffusion model for text-to-image synthesis.
arXiv preprint arXiv:2111.14822, 2021.
[Cuenca(2023)]hflsunbedroom
Pedro Cuenca.
LSUN Bedroom dataset.
<https://huggingface.co/datasets/pcuenq/lsun-bedrooms>, 2023.
[Online; accessed 10-July-2023].
[He et al.(2016)He, Zhang, Ren, and Sun]he2016deep
Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun.
Deep residual learning for image recognition.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 770–778, 2016.
[Ansuini et al.(2019)Ansuini, Laio, Macke, and
Zoccolan]ansuini2019intrinsic
Alessio Ansuini, Alessandro Laio, Jakob H Macke, and Davide Zoccolan.
Intrinsic dimension of data representations in deep neural networks.
Advances in Neural Information Processing Systems, 32, 2019.
[Cao et al.(2019)Cao, Ma, Xiao, Zhang, Liu, Zhang, Nie, and
Yang]cao2019seernet
Shijie Cao, Lingxiao Ma, Wencong Xiao, Chen Zhang, Yunxin Liu, Lintao Zhang,
Lanshun Nie, and Zhi Yang.
Seernet: Predicting convolutional neural network feature-map sparsity
through low-bit quantization.
In Proceedings of the IEEE/CVF Conference on Computer Vision
and Pattern Recognition, pages 11216–11225, 2019.
[Cheon et al.(2022)Cheon, Baek, and Paik]cheon2022invariance
Jeonghwan Cheon, Seungdae Baek, and Se-Bum Paik.
Invariance of object detection in untrained deep neural networks.
bioRxiv, pages 2022–09, 2022.
[Zhou et al.(2018)Zhou, Han, Morariu, and Davis]zhou2018learning
Peng Zhou, Xintong Han, Vlad I Morariu, and Larry S Davis.
Learning rich features for image manipulation detection.
In Proceedings of the IEEE conference on computer vision and
pattern recognition, pages 1053–1061, 2018.
[Wang et al.(2019)Wang, Wang, Owens, Zhang, and
Efros]wang2019detecting
Sheng-Yu Wang, Oliver Wang, Andrew Owens, Richard Zhang, and Alexei A Efros.
Detecting photoshopped faces by scripting photoshop.
In Proceedings of the IEEE/CVF International Conference on
Computer Vision, pages 10072–10081, 2019.
[Breiman(2001)]breiman2001random
Leo Breiman.
Random forests.
Machine learning, 45: 5–32, 2001.
[Nembrini et al.(2018)Nembrini, König, and
Wright]nembrini2018revival
Stefano Nembrini, Inke R König, and Marvin N Wright.
The revival of the gini importance?
Bioinformatics, 34 (21): 3711–3718, 2018.
|
http://arxiv.org/abs/2307.01655v1
|
20230704112551
|
Decentralized optimization with affine constraints over time-varying networks
|
[
"Demyan Yarmoshik",
"Alexander Rogozin",
"Alexander Gasnikov"
] |
math.OC
|
[
"math.OC"
] |
[1,3] Demyan Yarmoshik (yarmoshik.dv@phystech.edu)
[1] Alexander Rogozin (aleksandr.rogozin@phystech.edu)
[1,2,3] Alexander Gasnikov (gasnikov@yandex.ru)
[1]Moscow Institute of Physics and Technology, Dolgoprudny, Russia
[2]Skoltech, Moscow, Russia
[3]Institute for Information Transmission Problems, Moscow, Russia
Decentralized optimization with affine constraints over time-varying networks
==============================================================================
The decentralized optimization paradigm assumes that each term of a finite-sum objective is privately stored by the corresponding agent.
Agents are only allowed to communicate with their neighbors in the communication graph.
We consider the case when the agents additionally have local affine constraints and the communication graph can change over time.
We provide the first linearly convergent decentralized algorithm for time-varying networks by generalizing the optimal decentralized algorithm ADOM to the case of affine constraints.
We show that its rate of convergence is optimal for first-order methods by providing the lower bounds for the number of communications and oracle calls.
§ INTRODUCTION
Decentralized optimization is a popular approach for high-dimensional machine learning problems and control of distributed systems, such as drone swarms. It is common for these problems to have interconnections between variables, usually posed as affine constraints, e.g., direct-current (DC) power flow constraints in control problems related to electrical energy systems. It is also often the case that the communication graph between computation nodes is subject to change during the optimization process.
Over the past decade, constrained distributed optimization has attracted the attention of researchers.
Among the first applications of first-order methods to constrained decentralized optimization was the projected subgradient algorithm in <cit.>, where the time-varying case was also analyzed.
A systematic review of main problem classes falling into the definition of distributed constrained optimization along with algorithms working on various levels of decentralization was given in <cit.>.
More recent works use first-order methods to deal with a broad range of problem variants, including the
nonconvex objectives <cit.>,
the composite objectives <cit.>,
the inequality constraints <cit.>
and other assumptions on problem's structure <cit.>.
The ADMM-based approaches are also popular <cit.>.
However, to the best of our knowledge, no linearly convergent decentralized first-order algorithms for affine-constrained problems have been proposed.
In this work, we close this gap by providing a linearly convergent dual algorithm for decentralized affine-constrained optimization of the sum of smooth strongly convex functions over time-varying networks.
We build on the recently developed optimal algorithms for decentralized optimization over time-varying networks <cit.>,
and extend these results to the affine-constrained case.
This paper could also be seen as a generalization of <cit.> to the time-varying networks.
We also show that our new algorithm inherits the optimality of ADOM by constructing lower bounds on the number of communications and oracle calls in time-varying case.
During this analysis we also prove lower bounds for the static communication graph setup, thus showing the optimality of algorithms in <cit.>.
§.§ Basic definitions and assumptions
* Differentiable function f is μ-strongly convex if
f(y) ≥ f(x) + ⟨∇f(x), y − x⟩ + (μ/2)‖y − x‖²  ∀ x, y ∈ ℝ^d.
* Differentiable function f is L-smooth (or has L-Lipschitz continuous gradient) if
f(y) ≤ f(x) + ⟨∇f(x), y − x⟩ + (L/2)‖y − x‖²  ∀ x, y ∈ ℝ^d.
* λ_min(A), λ_min^+(A), λ_max(A) are the minimal, the minimal positive and the maximal eigenvalues of a matrix A, respectively.
* σ_min(A), σ_min^+(A), σ_max(A) are the minimal, the minimal positive and the maximal singular values σ_i(A) = √(λ_i(A^⊤A)) of a matrix A, respectively.
Notation
* “Dimension-lifted” vectors and matrices are written in bold: 𝐱, 𝐀.
* x_[i] denotes i-th component of a vector x.
* The identity matrix of order d is denoted by I_d. Sometimes the subscript is omitted if the order of I is clear from the context.
* _d denotes the column vector of ones in ^d.
* ⊷ A denotes the image of a linear operator A.
§ PROBLEM FORMULATION
§.§ Objective and constraints
We consider the following affine constrained optimization problem, where f_i are assumed to be L_F-smooth (have Lipschitz-continuous gradient with constant L_F) and μ_F-strongly convex:
min_x_1,…,x_n∈^d ∑_i=1^n f_i(x_i)
s.t. A_i x_i = b_i, i = 1, …, n
x_1 = … = x_n.
Practical examples of this type of finite-sum affine constrained optimization problems include constrained estimation problems, such as constrained least squares problems <cit.>.
The matrix-vector form of the problem is
min_{𝐱 ∈ ℝ^nd} F(𝐱)
s.t. 𝐀𝐱 = 𝐛,
𝐱 ∈ 𝒞,
where
𝐱 is the column vector of x_1, …, x_n, x_i ∈ ℝ^d,
F(𝐱) = ∑_{i=1}^n f_i(x_i), and
𝒞 denotes the consensus hyperplane defined by constraint (<ref>).
Note that F(𝐱) is also μ_F-strongly convex and L_F-smooth.
In the case when the A_i and b_i are all different, we set
𝐀 = diag(A_1, …, A_n), 𝐛 = (b_1^⊤, …, b_n^⊤)^⊤.
If all A_i = A and b_i = b are the same,
then 𝐀, 𝐛 can be chosen in various ways, e.g.:
* 𝐀 = I_n ⊗ A = diag(A, …, A), 𝐛 = 1_n ⊗ b, where ⊗ denotes the Kronecker product, or
* 𝐀 = (A 0 … 0), 𝐛 = b.
For definiteness, we will assume that the first variant is chosen (a small sketch of both constructions is given below).
This logic also applies if there are clusters of agents with the same affine constraints in each group.
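The following minimal numpy/scipy sketch builds the two stacked constraint forms described above; shapes and helper names are illustrative.

import numpy as np
from scipy.linalg import block_diag

def stack_different(A_list, b_list):
    # A_i, b_i all different: bold-A = diag(A_1, ..., A_n), bold-b stacked
    return block_diag(*A_list), np.concatenate(b_list)

def stack_identical(A, b, n):
    # all A_i = A, b_i = b: bold-A = I_n kron A, bold-b = 1_n kron b
    return np.kron(np.eye(n), A), np.kron(np.ones(n), b)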
§.§ Decentralized communication
We assume that the problem is distributed over a computational network consisting of n agents (or nodes).
Each agent privately holds f_i, A_i and b_i.
Agents are connected through a communication network, represented by a time-varying undirected graph, i.e., a sequence of undirected graphs.
Agents are only allowed to exchange information with their current neighbors in the communication graph.
The limitations imposed on the communication process are formally described in Definition <ref>.
In further developments, we will heavily rely on the notion of
gossip matrix.
W is a gossip matrix of an undirected graph G = (V, E), |V| = n, if it satisfies the following properties:
* W is symmetric and positive semi-definite;
* (Network compatibility) [ W]_ij≠ 0 if and only if (i, j) ∈ E or i = j;
* (Kernel property) For any v = [v_1,…,v_n]^⊤∈^n, W v = 0 if and only if v_1 = … = v_n.
A typical example of a gossip matrix is the Laplacian matrix of a graph: W ∈ ℝ^n×n,
[W]_ij =
-1, if (i,j) ∈ E,
deg(i), if i = j,
0, otherwise.
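A minimal sketch of this construction, building the Laplacian of an undirected graph from an edge list and checking the kernel property (consensus vectors are annihilated):

import numpy as np

def laplacian(n, edges):
    W = np.zeros((n, n))
    for i, j in edges:
        W[i, j] = W[j, i] = -1.0
        W[i, i] += 1.0
        W[j, j] += 1.0
    return W

W = laplacian(4, [(0, 1), (1, 2), (2, 3), (3, 0)])  # ring graph on 4 nodes
assert np.allclose(W @ np.ones(4), 0.0)  # the all-ones vector lies in the kernel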
Later we will use the dimension-lifted gossip matrix W ⊗ I_d.
From the third property of gossip matrices, we have that (W ⊗ I_d)𝐱 = 0 if and only if 𝐱 ∈ 𝒞.
Since we assume that the communication network is time-varying, we denote the communication graph at the k-th step by G(k) = (V, E(k)) and the associated gossip matrix by W(k).
Note that the existence of a gossip matrix for each step implies, by the kernel property, that G(k) is connected for all k.
According to the second property of gossip matrices, multiplication W(k) ⊗ I_d can be performed in a decentralized way at step k.
The convergence rate of decentralized optimization algorithms depends on the spectrum of the gossip matrices.
Therefore we assume that
λ̃_min^+≤λ_min^+(W(k))≤λ_max(W(k))≤λ̃_max.
§ LOWER BOUNDS
We consider the class of first-order decentralized algorithms defined as follows.
Denote by ℳ_i(k), where ℳ_i(0) = {x_i^0}, the local memory of the i-th agent at step k.
The set of allowed actions of a first-order decentralized algorithm at step k is restricted to the three options
* Local computation: ℳ_i(k) ← ℳ_i(k) ∪ {x, ∇f_i(x), ∇f_i^*(x) : x ∈ ℳ_i(k)};
* Decentralized communication: ℳ_i(k) ← ℳ_i(k) ∪ {ℳ_j(k) : edge (i, j) ∈ E(k)};
* Matrix multiplication: ℳ_i(k) ← ℳ_i(k) ∪ {b_i, A_i^⊤A_i x : x ∈ ℳ_i(k)}.
After each step k, an algorithm must provide a current approximate solution x_i^k ∈ ℳ_i(k) and set ℳ_i(k+1) = ℳ_i(k).
Using the standard approach for constructing lower bounds in smooth strongly convex optimization <cit.>, we consider d=∞.
Then, following <cit.>, we consider the affine-constrained problem with constraint (W' ⊗ I_{d/m})x = 0, where W' ∈ ℝ^{m×m} is a gossip matrix of some (static) communication graph, thus interpreting an affine-constrained problem as a decentralized optimization problem.
Let W' be a gossip matrix associated with the graph G'(V', E'), V' = {1, …, m}.
Set A_i = A = √(W' ⊗ I_m) and b_i = 0.
This leads to a two-level decentralized optimization problem.
On the upper level we have the conventional decentralized optimization problem over the communication network G:
min_x∈^d∑_i=1^n f_i(x),
but each f_i = ∑_j=1^m f_ij is distributed among the subnodes of the inner computational network G' located inside the node i.
This forms the inner level of our problem, as shown in Fig. <ref>.
Thus, instead of thinking about affine constraints, we can think about the inner decentralized computational network.
From this perspective, subnodes exchange the d/m-dimension vectors, and nodes exchange the d-dimension vectors, which are stacked from the vectors of their subnodes.
We use this construction to obtain lower bounds for both static and time-varying setups.
For every L_F ≥ μ_F > 0, χ_W > 0 and χ_A > 0 there exists a decentralized optimization problem (<ref>) ((<ref>)) over a static communication graph G
with a gossip matrix W: λ_max(W)/λ_min^+(W) = χ_W, and a matrix 𝐀: λ_max(𝐀^⊤𝐀)/λ_min^+(𝐀^⊤𝐀) = χ_A, such that any decentralized first-order algorithm, as per Definition <ref>, requires at least
N = Θ(√(L_F/μ_F) log(1/ε)) sequential local computations,
Θ(N √(χ_W)) communications,
Θ(N √(χ_A)) multiplications by 𝐀^⊤𝐀,
to achieve
‖x_i^N − x^*‖ ≤ ε  ∀ i ∈ [1, n].
The proof is based on the technique from <cit.> and is provided in the Appendix.
We now present a variant of Theorem <ref> for the time-varying communication networks.
For every L_F ≥ μ_F > 0, χ_A > 0, λ̃_min^+ and λ̃_max: λ̃_max/λ̃_min^+ = χ_W ≥ 3 there exists a decentralized optimization problem (<ref>) ((<ref>)),
a sequence of gossip matrices W(k) satisfying (<ref>), and a matrix 𝐀: λ_max(𝐀^⊤𝐀)/λ_min^+(𝐀^⊤𝐀) = χ_A, such that any decentralized first-order algorithm, as per Definition <ref>, requires at least
N = Θ(√(L_F/μ_F) log(1/ε)) sequential local computations,
Θ(N χ_W) communications,
Θ(N √(χ_A)) multiplications by 𝐀^⊤𝐀,
to achieve
‖x_i^N − x^*‖ ≤ ε  ∀ i ∈ [1, n].
The proof is based on <cit.> and is provided in the Appendix.
§ APPLICATION OF ADOM
By standard duality arguments we rewrite problem (<ref>) as
min_𝐱 max_{𝐲, 𝐳 ∈ 𝒞^⊥} F(𝐱) − ⟨𝐲, 𝐀𝐱 − 𝐛⟩ − ⟨𝐳, 𝐱⟩
= max_{𝐲, 𝐳 ∈ 𝒞^⊥} −max_𝐱 (⟨𝐳 + 𝐀^⊤𝐲, 𝐱⟩ − F(𝐱)) + ⟨𝐛, 𝐲⟩
= max_{𝐲, 𝐳 ∈ 𝒞^⊥} −F^*(𝐳 + 𝐀^⊤𝐲) + ⟨𝐛, 𝐲⟩
= −min_{𝐲, 𝐳 ∈ 𝒞^⊥} F^*(𝐳 + 𝐀^⊤𝐲) − ⟨𝐛, 𝐲⟩,
where F^* is the convex (Fenchel) conjugate of F.
Therefore, problem (<ref>) is equivalent to
min_{𝐲, 𝐳 ∈ 𝒞^⊥} F^*(𝐳 + 𝐀^⊤𝐲) − ⟨𝐛, 𝐲⟩.
Introduce
𝛌 = [ 𝐲; 𝐳 ],
𝐛̄ = [ 𝐛; 0 ],
𝐁 = [ 𝐀; I_nd ],
𝐏 = [ I_p 0; 0 I_nd − (1/n)1_n1_n^⊤ ⊗ I_d ],
𝐖(k) = [ I_p 0; 0 W(k) ⊗ I_d ],
where p denotes the number of rows of 𝐀, so that 𝐁^⊤𝛌 = 𝐀^⊤𝐲 + 𝐳.
Also denote H(𝛌) = F^*(𝐁^⊤𝛌) − ⟨𝐛̄, 𝛌⟩. Now we can equivalently rewrite optimization problem (<ref>) as
min_{𝛌 ∈ ⊷𝐏} H(𝛌).
After that, we apply ADOM <cit.> to problem (<ref>).
Note that b_i ∈ ⊷A_i and therefore 𝐛 ∈ ⊷𝐀. Hence, for any 𝛌 we have ∇H(𝛌) = 𝐁∇F^*(𝐁^⊤𝛌) − 𝐛̄ ∈ ⊷𝐀 × ℝ^nd, and we conclude that the iterates 𝛌^k, Δ^k, 𝛌_f^k, 𝛌_g^k of Algorithm <ref> lie in ⊷𝐀 × ℝ^nd as well.
On this subspace we estimate the strong convexity and smoothness constants as
μ_H = (1 + (σ_min^+(A))²)/L_F,  L_H = (1 + σ_max²(A))/μ_F.
From (<ref>) we have
λ_min^+ ≔ min(1, λ̃_min^+) ≤ λ_min^+(𝐖(k)) ≤ λ_max(𝐖(k)) ≤ max(1, λ̃_max) ≕ λ_max.
Now we can formulate the key convergence result.
Set parameters α, η, θ, σ,τ of Algorithm <ref> to α = /2, η = 2/7√(), θ = 1/, σ = 1/, and τ = /7√(/).
Then there exists C > 0 such that for Algorithm <ref> applied to problem (<ref>) it holds
F^*(𝐁^⊤𝛌_g^N) − F^*(𝐁^⊤𝛌^*) ≤ C(1 − τ)^N,
where 𝛌^* is the solution of problem (<ref>).
The proof is a minor modification of the original ADOM convergence proof in <cit.>, and is provided in the appendix for the reader's convenience.
As a corollary of Theorem <ref> we have the following communication, dual-oracle call and matrix multiplication complexity:
N ≤ O( (max(1, λ̃_max)/min(1, λ̃_min^+)) √(L_F/μ_F) √((1 + σ_max²(A))/(1 + (σ_min^+(A))²)) log(1/ε) ),
where ε is the desired accuracy of the approximate solution: F^*(𝐁^⊤𝛌_g^N) − F^*(𝐁^⊤𝛌^*) ≤ ε.
§ SEPARATING COMPLEXITIES
As shown in <cit.>, one can separate oracle (computation) and communication complexities in the time-varying setup by using the multi-consensus procedure.
This will not change (up to a ln 2 factor) the number of communications but decreases the number of oracle calls.
In the time-varying setup, acceleration over χ = λ_max/λ_min^+ is not applicable, as stated by Theorem <ref>, so we can separate communication complexity but cannot improve it.
However, since 𝐀 is constant, we can use Chebyshev acceleration over 𝐀 to separate matrix multiplication complexity and decrease the number of multiplications by 𝐀^⊤𝐀, as was done for communication complexity in the static communication graph setup in <cit.>.
For this section we will assume that the W(k) are divided by their maximum eigenvalue and therefore λ_max = λ̃_max = 1 and λ_min^+ = λ̃_min^+.
In practice this could be achieved by using I − M(k) instead of W(k), where the M(k) are the Metropolis weight matrices <cit.>.
Let P(x) be a polynomial such that P(0) = 0 and P(λ_i) ≠ 0 for all nonzero eigenvalues λ_i of 𝐀^⊤𝐀.
Then, using the fact that ker 𝐀 = ker 𝐀^⊤𝐀, we can do the following sequence of equivalent reformulations
𝐀𝐱 = 𝐛 ⇔ 𝐀𝐱 = 𝐀𝐱_0
⇔ 𝐀^⊤𝐀𝐱 = 𝐀^⊤𝐀𝐱_0
⇔
P(𝐀^⊤𝐀)𝐱 = P'(𝐀^⊤𝐀)𝐀^⊤𝐀𝐱_0
⇔
P(𝐀^⊤𝐀)𝐱 = P'(𝐀^⊤𝐀)𝐀^⊤𝐛,
where 𝐱_0 is any vector satisfying 𝐀𝐱_0 = 𝐛 (here we used the consistency of the constraints),
and P'(x) = P(x)/x is correctly defined since P(0) = 0.
Thus the idea is to replace 𝐀^⊤𝐀 with P(𝐀^⊤𝐀) to improve the spectral properties of the matrix.
The polynomials of choice are shifted and scaled Chebyshev polynomials <cit.>, because Chebyshev polynomials increase magnitude more quickly than any other polynomials of the same degree satisfying |P(x)| ≤ 1 ∀ x ∈ [-1,1].
This allows to significantly compress the spectrum, using polynomials of a relatively low degree.
In particular, let P(x) be defined as
P(x) = 1 − T_K(ν − 2x/(λ_max(𝐀^⊤𝐀) − λ_min^+(𝐀^⊤𝐀)))/T_K(ν),  K = ⌈√(χ_A)⌉,
where χ_A = λ_max(𝐀^⊤𝐀)/λ_min^+(𝐀^⊤𝐀), ν = (χ_A + 1)/(χ_A − 1) and T_K are Chebyshev polynomials defined by T_0(x) = 1, T_1(x) = x and T_{K+1}(x) = 2xT_K(x) − T_{K−1}(x) for K ≥ 1.
If P has degree K = ⌈√(χ_A)⌉, then χ(P(𝐀^⊤𝐀)) ≤ 4 <cit.>, which allows to quadratically improve the dependence of the convergence rate on χ_A
by replacing 𝐀^⊤𝐀 with P(𝐀^⊤𝐀) and 𝐀^⊤𝐛 with P'(𝐀^⊤𝐀)𝐀^⊤𝐛.
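To make the acceleration concrete, here is a minimal sketch of applying P(𝐀^⊤𝐀) to a vector through the Chebyshev three-term recurrence, assuming the extreme positive eigenvalues of 𝐀^⊤𝐀 are known; the function names are illustrative.

import numpy as np

def cheby_apply(matvec, x, lam_min, lam_max, K):
    # returns P(M) x with P(t) = 1 - T_K(nu - c*t) / T_K(nu), M = A^T A,
    # without forming P(M) explicitly; assumes lam_max > lam_min > 0
    nu = (lam_max + lam_min) / (lam_max - lam_min)
    c = 2.0 / (lam_max - lam_min)
    t_prev, t_curr = x, nu * x - c * matvec(x)       # T_0(L)x, T_1(L)x with L = nu*I - c*M
    T_prev, T_curr = 1.0, nu                         # scalar T_0(nu), T_1(nu)
    for _ in range(K - 1):
        t_prev, t_curr = t_curr, 2.0 * (nu * t_curr - c * matvec(t_curr)) - t_prev
        T_prev, T_curr = T_curr, 2.0 * nu * T_curr - T_prev
    return x - t_curr / T_curr

Here matvec(x) should compute 𝐀^⊤(𝐀x), so each application of P costs K multiplications by 𝐀^⊤𝐀.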
To separate communication complexity, a multi-consensus procedure should be used, i.e., W(k) should be replaced with D(W(k)) = I − (I − W(k))^K, where K = ⌈χ ln 2⌉, which makes χ = O(1) at the cost of K communication rounds <cit.>.
Finally, by replacing 𝐀^⊤𝐀 → P(𝐀^⊤𝐀), 𝐀^⊤𝐛 → P'(𝐀^⊤𝐀)𝐀^⊤𝐛 and W(k) → D(W(k)) we obtain the following complexity estimates to reach F^*(𝐁^⊤𝛌_g^k) − F^*(𝐁^⊤𝛌^*) ≤ ε for the ADOM algorithm with affine constraints, Chebyshev acceleration, and multi-consensus:
N = O(√(L_F/μ_F) log(1/ε)) oracle calls at each node,
O(N χ) communications,
O(N √(χ_A)) multiplications by 𝐀^⊤𝐀.
These upper bounds match the lower bounds of Theorem <ref>, thus the obtained algorithm is optimal among first-order decentralized algorithms for strongly convex problems with affine constraints on time-varying networks.
§ NUMERIC VALIDATION
We verify Theorem <ref> with numeric experiments on problems with the quadratic objective:
∑_{i=1}^n f_i(x) → min_x
s.t. Ax = b,
f_i(x) = (1/2)x^⊤C_i x + d_i^⊤x,  μ_F I ≼ C_i ≼ L_F I.
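A minimal sketch of generating such random quadratic local objectives, with each C_i symmetric and its spectrum inside [μ_F, L_F]; this is an illustration under our own naming, not necessarily the exact generation procedure used in the experiments.

import numpy as np

def random_quadratic(d, mu_F, L_F, rng):
    Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal basis
    eigs = rng.uniform(mu_F, L_F, size=d)
    eigs[0], eigs[-1] = mu_F, L_F                     # hit both ends of the spectrum
    C = Q @ np.diag(eigs) @ Q.T
    dvec = rng.standard_normal(d)
    return C, dvec

rng = np.random.default_rng(0)
C_i, d_i = random_quadratic(20, mu_F=1.0, L_F=100.0, rng=rng)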
From the results of Section <ref>, the influence of the gossip matrix's spectrum and the affine constraint matrix spectrum on the convergence rates of Algorithm <ref> is straightforward to comprehend.
Therefore, we focus only on the impact of the objective's condition number on the convergence rates.
Our numeric experiments are not meant as trials on real-world problems, but rather as illustrations of the theoretical properties of the algorithm, because quadratic objectives are a good representative of the smooth and strongly convex problem class.
It is not difficult to implement an exact dual oracle for this objective, but we do not want to exploit the simplicity of the quadratic problem and, following <cit.>, we obtain an approximation 𝐱^k ≈ ∇F^*(𝐁^⊤𝛌_g^k) by using a few gradient steps at each iteration: 𝐱^k = 𝐱^{k−1} − (1/L_F)(∇F(𝐱^{k−1}) − 𝐁^⊤𝛌_g^k).
So in fact the implemented algorithm uses a primal oracle, because the dual oracle call in Algorithm <ref> is replaced with a primal oracle call.
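A minimal sketch of this inexact dual oracle, assuming grad_F and L_F are given; s stands for the current dual slope (here 𝐁^⊤𝛌_g^k), and the warm start is the previous primal iterate.

def approx_dual_grad(grad_F, L_F, s, x_warm, n_steps=5):
    # approximate argmax_x <s, x> - F(x), i.e. x ≈ grad F*(s),
    # by a few gradient steps on the strongly concave inner problem
    x = x_warm
    for _ in range(n_steps):
        x = x - (1.0 / L_F) * (grad_F(x) - s)
    return x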
Experiment parameters are d = 20, A ∈ ℝ^{10×d}, n = 20, μ_F = 1.
Communication graphs G(k) are random ring graphs at each iteration.
We run Algorithm <ref> for N = 2500 iterations for different values of L_F, and do a linear regression to obtain the coefficient κ in the dependence ‖𝐱^k − 𝐱^*‖_2 = C_1 exp(−κk), using only the last N/2 iterations.
This is illustrated in Figure <ref>.
In all cases a steady linear convergence to the solution is present.
Then we do a linear regression in the log-log scale to obtain the coefficient ν in the dependence
κ = C_2 (L_F/μ_F)^{−ν}, as shown in Figure <ref>.
The resulting value is ν ≈ 0.54 with a standard error of ≈ 0.03, which is rather close to the value ν = 1/2 in Theorem <ref>.
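The two regressions described above can be sketched as follows; the residual and condition-number arrays are assumed to be collected from the runs.

import numpy as np

def fit_kappa(residuals):
    # residuals[k] = ||x^k - x*||_2; semi-log fit over the last half of iterations
    N = len(residuals)
    k = np.arange(N // 2, N)
    slope, _ = np.polyfit(k, np.log(residuals[N // 2:]), 1)
    return -slope

def fit_nu(cond_numbers, kappas):
    # fit kappa = C_2 * (L_F/mu_F)^(-nu) in log-log scale
    slope, _ = np.polyfit(np.log(cond_numbers), np.log(kappas), 1)
    return -slope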
§ CONCLUSION
By viewing the affine-constrained problem as a decentralized optimization problem, similarly to <cit.>, and combining the lower-bound constructions for the static <cit.> and time-varying <cit.> setups, lower bounds for decentralized optimization with affine constraints over static and time-varying networks via first-order methods were obtained.
As we found, the ADOM algorithm can be straightforwardly extended to the affine-constrained case.
For this problem class, we were also able to apply Chebyshev acceleration over A, and the resulting complexity estimates match the lower bounds.
However, a lot of questions are left for the future work.
We did not succeed in extending the ADOM+ algorithm <cit.> to affine-constrained problems, thus no linearly convergent primal algorithm is known for this problem class.
It is also of interest to obtain optimal algorithms for time-varying networks in the case of shared affine inequality constraints (∑ A_i x_i ≤ b_i, where A_i and b_i are held privately by the i-th agent). This problem variant has more practical applications <cit.>, but also brings additional difficulties to the theoretical analysis, e.g., in this case b_i might not belong to ⊷A_i.
This means that one cannot apply our approach, because it needs the gradient method to stay in the subspace where the objective is strongly convex.
§ ACKNOWLEDGEMENTS
This work was supported by a grant for research centers in the field of artificial intelligence, provided by the Analytical Center for the Government of the Russian Federation in accordance with the subsidy agreement (agreement identifier 000000D730321P5Q0002) and the agreement with the Moscow Institute of Physics and Technology dated November 1, 2021 No. 70-2021-00138.
§ APPENDIX
§.§ Proof of Theorem <ref>
Let the affine constraint in problem <ref> be A_i x_i = 0, with A_i=A=√(W' ⊗ I_m).
Then the affine constrained decentralized problem can be seen as two-level decentralized problem, as explained above.
Select sets of subnodes S_1, S_2 and S_3 such that S_1, S_2 are at the distance ≥Δ_A through the inner graph,
and S_2, S_3 are at the distance ≥Δ_W through the outer graph.
Consider the following splitting of Nesterov's “bad” function
f_ij(x) = α/(2mn)‖x‖²
+ ((β − α)/8) ·
(1/|S_1|)(x^⊤M_1x − 2x_[1]), (i,j) ∈ S_1,
(1/|S_2|) x^⊤M_2x, (i,j) ∈ S_2,
(1/|S_3|) x^⊤M_3x, (i,j) ∈ S_3,
0, otherwise,
where
M_1 = diag(1, 0, M_0, 0, M_0, …),
M_2 = diag(M_0, 0, M_0, 0, …),
M_3 = diag(0, M_0, 0, M_0, …),
M_0 = [ 1 −1; −1 1 ].
Then increasing the number of nonzero components in x^k on any subnode by three requires one local computation on a node in S_1, Δ_A inner communications a.k.a. multiplications by A^⊤ A, one local computation on a node in S_2, Δ_W communications in the outer graph and one local computation on a node in S_3.
Denote by κ_g = β/α the “global” condition number of ∑_ij^mnf_ij.
Since the solution is
x^*_[k] = ((√κ_g − 1)/(√κ_g + 1))^k, we have
‖x^N − x^*‖² ≥ ∑_{k=N+2}^∞ (x^*_[k])² ≥ ((√κ_g − 1)/(√κ_g + 1))^{2(N+2)},
where N is the number of iterations, each including 3 sequential computational steps, Δ_A multiplications by A^⊤A and Δ_W communications.
To finish the proof we need to construct communication graphs G, G', where distances between S_1, S_2 and S_2, S_3 are close to Δ_A, Δ_W, and equip the graphs with gossip matrices with given condition numbers χ_A, χ_W.
We should also choose α and β such that f_i are L_F-smooth and μ_F-strongly convex,
and choose S_1, S_2, S_3 so that κ_g is similar to L_F/μ_F.
Denote γ(M) = σ_min^+(M) / σ_max(M), γ_W = 1/χ_W, γ_A = 1/χ_A.
Let γ_n = (1 − cos(π/n))/(1 + cos(π/n)) be a decreasing sequence of positive numbers.
Since γ_2 = 1 and lim_n γ_n = 0, there exists n ≥ 2 such that γ_n ≥ γ_W > γ_{n+1} and
m ≥ 2 such that γ_m ≥ γ_A > γ_{m+1}.
First, construct graph G.
The cases n=2 and n ≥ 3 are treated separately.
If n ≥ 3, let G be the linear graph of size n ordered from node 1 to n, and weighted with
w_i, i+1=1-a, i=1
1, otherwise.
Then set
S_2G={1, …, ⌈ n / 32⌉} and Δ_W=(1-1 / 16) n-1, so that
S_3G = {⌈ n / 32⌉ + ⌈Δ_W⌉, …, n}.
Take W_a as the Laplacian of the weighted graph G.
A simple calculation gives that, if a=0, γ(W_a)=γ_n and, if a=1, the network is disconnected and γ(W_a)=0.
Thus, by continuity of the eigenvalues of a matrix, there exists a value a ∈[0,1] such that γ(W_a)=γ_W.
Finally, by definition of n, one has γ_W > γ_{n+1} ≥ 2/(n+1)², and Δ_W ≥ (15/16)(√(2/γ_W) − 1) − 1 ≥ 1/(5√(γ_W)) when γ_W ≤ γ_3 = 1/3.
For the case n=2, we consider the totally connected network of 3 nodes, reweight only the edge (1, 3) by a ∈[0,1], and let W_a be its Laplacian matrix.
If a=1, then the network is totally connected and γ(W_a)=1.
If, on the contrary, a=0, then the network is a linear graph and γ(W_a)=γ_3.
Thus, there exists a value a ∈ [0,1] such that γ(W_a) = γ_W.
Set S_2G = {1}, S_3G = {2}, then Δ_W = 1 ≥ 1/√(3γ_W).
Second, do the same for graph G', obtaining m, S_1G', S_2G' and Δ_A ≥ 1/(5√(γ_A)).
Define S_1 = S_2G× S_1G', S_2 = S_2G× S_2G' and S_3 = S_3G× S_2G', see Fig. <ref>.
In all cases we have |S_k| ≥ |S_2| ≥⌈n/32⌉⌈m/32⌉ for k∈{1,3}.
Because μ_F = α/n, we set α = μ_F n.
Since 0 ≼ M_k ≼ 2I for k ∈ {1,2,3}, L_F = α/n + (β − α)m/(2|S_2|), thus we set β = 2|S_2|(L_F − μ_F)/m + μ_F n to make all f_i be L_F-smooth and μ_F-strongly convex.
Then κ_g = β/α = 1 + 2|S_2|(L_F − μ_F)/(μ_F mn) ≥ L_F/(512 μ_F).
Combining this with (<ref>) and the inequalities between Δ_A, γ_A and Δ_W, γ_W we conclude the proof.
§.§ Proof of Theorem <ref>
As in the proof of Theorem <ref> we set the affine constraint in problem <ref> to be A_i x_i = 0, with A_i=A=√(W' ⊗ I_m), where W' is a gossip matrix of some inner communication graph G'.
Let the sequence of outer communication graphs G(k) be the same as in the proof of Theorem 1 in <cit.>: n = 3⌊χ_W/3⌋ nodes are split into three disjoint sets V_1, V_2, V_3 of equal size, and G(k) = (V, E(k)) are star graphs with the center nodes cycling through V_2.
Choose the inner communication graph G' = (V', E') as in the proof of Theorem <ref>.
Use Nesterov's function splitting given by (<ref>), choose S_1G' and S_2G' as in the proof of Theorem <ref>.
Set S_1 = V_1 × S_1G', S_2 = V_1 × S_2G' and S_3 = V_3 × E'.
Setting W(k) to be the Laplacian of the star graph G(k) we have λ_max(W(k))/λ_min^+(W(k)) = n ≤ χ_W.
Also (Lemma 2 <cit.> and proof of Theorem <ref>), increasing the number of nonzero components of x_k on any subnode requires local computation on a node in S_1, Θ√(χ_A) communications in the inner graph G' (i.e. multiplications by A^⊤ A), one local computation on a node in S_2, Θχ_W communications in the outer graph G and one local computation on a node in S_3.
The same reasoning as in the proof of the previous theorem gives κ_g = Θ(L_F/μ_F); then using (<ref>) we conclude the proof.
§.§ Auxiliary lemmas for Theorem <ref>
For θ≤1/ we have the inequality
H(_f^k+1) ≤ H(_g^k) - θ/2 H(_g^k)_.
We start with -smoothness of H on ⊷:
H(_f^k+1) ≤ H(_g^k) + H(_g^k), _f^k+1 - _g^k> + /2_f^k+1 - _g^k.
Using line <ref> of Algorithm <ref> together with (<ref>) we get
H(_f^k+1) ≤ H(_g^k) - θ H(_g^k)_(k) + θ^2/2 H(_g^k)_^2(k)
≤
H(_g^k) - θ/2 H(_g^k)_ - θ/2 H(_g^k)_(k) + θ^2/2 H(_g^k)_(k)
=
H(_g^k) - θ/2 H(_g^k)_ +θ/2(θ- 1) H(_g^k)_(k).
Using condition θ≤1/ we get
H(_f^k+1)
≤
H(_g^k) - θ/2 H(_g^k)_.
For σ≤1/ we have the inequality
^k_≤(1 - σ/4)4/σ^k_
-
4/σ^k+1_
+
8η^2/(σ)^2 H(_g^k)_.
Using = ^2 and (k) = (k) = (k)
together with lines <ref> and <ref> of Algorithm <ref> we obtain
^k+1_ =
^k - η H(_g^k) - Δ^k_
=
( -σ(k))(^k - η H(_g^k))
=
^k - η H(_g^k)_
-
2σ^k - η H(_g^k)_(k)
+
σ^2^k - η H(_g^k)_^2(k).
Using (<ref>) we obtain
^k+1_ ≤^k - η H(_g^k)_
-
σ^k - η H(_g^k)_
-
σ^k - η H(_g^k)_(k)
+
σ^2^k - η H(_g^k)_(k)
=
^k - η H(_g^k)_
-
σ^k - η H(_g^k)_
+
σ(σ - 1)^k - η H(_g^k)_(k).
Using condition σ≤1/ we get
^k+1_ ≤
(1-σ)^k - η H(_g^k)_.
Using Young's inequality we get
^k+1_ ≤
(1-σ)(
(1 + σ/2(1-σ))^k_
+
(1 + 2(1-σ)/σ)η H(_g^k)_)
=
(1 - σ/2)^k_
+
η^2(1-σ)(2-σ)/σ H(_g^k)_
≤(1 - σ/2)^k_
+
2η^2/σ H(_g^k)_.
Rearranging concludes the proof.
Let
α = /2,
η = 2/7√(),
θ = 1/,
σ = 1/,
τ = /7√(/).
Define the Lyapunov function
Ψ^k ^k - ^* + 2η(1-ηα)/τ(F^*(_f^k) - F^*(^*) )+6^k_,
where ^k is defined as
^k = ^k + ^k.
Then the following inequality holds:
Ψ^k+1≤(1-/7√(/))Ψ^k.
Using (<ref>) together with lines <ref> and <ref> of Algorithm <ref>, we get
^k+1 = ^k+1 + ^k+1
= ^k + ηα(_g^k - ^k) + Δ^k + ( ^k - η H(_g^k) - Δ^k)
=
^k + ^k + ηα(_g^k - ^k) - η H(_g^k) + Δ^k - Δ^k.
From line <ref> of Algorithm <ref> and (k) = (k) it follows that Δ^k = Δ^k, which implies
^k+1 =
^k + ^k + ηα(_g^k - ^k) - η H(_g^k)
=
^k + ηα(_g^k - ^k) - η H(_g^k).
Hence,
^k+1 - ^* =
^k - ^* + ηα(_g^k - ^k) - η H(_g^k)
=
(1 - ηα)(^k - ^* )+ ηα(_g^k + ^k - ^*)
+
η^2 H(_g^k)_
-
2η H(_g^k), ^k + ^k- ^* + ηα(_g^k - ^k)>
≤.
Using the inequality
‖a + b‖² ≤ (1 + γ)‖a‖² + (1 + 1/γ)‖b‖²,  γ > 0
with γ = ηα/(1 − ηα) we get
^k+1 - ^* =
(1-ηα)^k - ^* + ηα_g^k + ^k - ^*
+
η^2 H(_g^k)_
-
2η H(_g^k),(_g^k - ^*)>
+
2η(1-ηα) H(_g^k), (_g^k - ^k)>
-
2η H(_g^k),^k>
≤
(1-ηα)^k - ^* + 2ηα_g^k - ^*
+
2ηα^k_
+
η^2 H(_g^k)_
-
2η H(_g^k),(_g^k - ^*)>
+
2η(1-ηα) H(_g^k), (_g^k - ^k)>
-
2η H(_g^k),^k>
One can observe, that ^k,_g^k,^* ∈⊷. Hence,
^k+1 - ^* ≤
(1-ηα)^k - ^* + 2ηα_g^k - ^*
+
2ηα^k_
+
η^2 H(_g^k)_
-
2η H(_g^k),_g^k - ^*>
+
2η(1-ηα) H(_g^k), _g^k - ^k>
-
2η H(_g^k),^k>.
Using line <ref> of Algorithm <ref> we get
^k+1 - ^* ≤
(1-ηα)^k - ^* + 2ηα_g^k - ^*
+
2ηα^k_
+
η^2 H(_g^k)_
-
2η H(_g^k),_g^k - ^*>
+
2η(1-ηα)(1-τ)/τ H(_g^k), _f^k - _g^k>
-
2η H(_g^k),^k>.
Using convexity and -strong convexity of H() on ⊷ we get
^k+1 - ^* ≤
(1-ηα)^k - ^* + 2ηα_g^k - ^*
+
2ηα^k_
+
η^2 H(_g^k)_
-
2η(H(_g^k) - H(^*)) - η_g^k - ^*
+
2η(1-ηα)(1-τ)/τ(H(_f^k) - H(_g^k))
-
2η H(_g^k),^k>
=
(1-ηα)^k - ^*
+
(2ηα - η)_g^k - ^*
+
η^2 H(_g^k)_
-
2η(H(_g^k) - H(^*))
+
2η(1-ηα)(1-τ)/τ(H(_f^k) - H(_g^k))
-
2η H(_g^k),^k>
+
2ηα^k_.
Using α defined by (<ref>) we get
^k+1 - ^* ≤(1-η/2)^k - ^*
+
η^2 H(_g^k)_
-
2η(H(_g^k) - H(^*))
+
2η(1-ηα)(1-τ)/τ(H(_f^k) - H(_g^k))
-
2η H(_g^k),^k>
+
2ηα^k_.
Since H(_g^k) ≥ H(^*), we get
^k+1 - ^* ≤(1-η/2)^k - ^*
+
η^2 H(_g^k)_
-
2η(1-ηα)(H(_g^k) - H(^*))
+
2η(1-ηα)(1-τ)/τ(H(_f^k) - H(_g^k))
-
2η H(_g^k),^k>
+
2ηα^k_
=
(1-η/2)^k - ^*
+
η^2 H(_g^k)_
+
2η(1-ηα)(
(1-τ)/τH(_f^k) + H(^*) - 1/τH(_g^k)
)
-
2η H(_g^k),^k>
+
2ηα^k_.
Using (<ref>) and θ defined by (<ref>) we get
^k+1 - ^* ≤(1-η/2)^k - ^*
+
(η^2-(1-ηα)η/τ) H(_g^k)_
+
(1-τ)2η(1-ηα)/τ(H(_f^k) - H(^*) )
-
2η(1-ηα)/τ(H(_f^k+1) - H(^*) )
-
2η H(_g^k),^k>
+
2ηα^k_.
Using Young's inequality we get
^k+1 - ^* ≤(1-η/2)^k - ^*
+
(η^2-(1-ηα)η/τ) H(_g^k)_
+
(1-τ)2η(1-ηα)/τ(H(_f^k) - H(^*) )
-
2η(1-ηα)/τ(H(_f^k+1) - H(^*) )
+
η^2/ H(_g^k)_ + /^k_
+
2ηα^k_
=
(1-η/2)^k - ^*
+
(η^2+η^2/-(1-ηα)η/τ) H(_g^k)_
+
(1-τ)2η(1-ηα)/τ(H(_f^k) - H(^*) )
-
2η(1-ηα)/τ(H(_f^k+1) - H(^*) )
+
(/+2ηα)^k_.
Using (<ref>) and (<ref>), that imply ηα≤/4, we obtain
^k+1 - ^* ≤(1-η/2)^k - ^*
+
(η^2+η^2/-3η/4τ) H(_g^k)_
+
(1-τ)2η(1-ηα)/τ(H(_f^k) - H(^*) )
-
2η(1-ηα)/τ(H(_f^k+1) - H(^*) )
+
3/2^k_.
Using (<ref>) and σ defined by (<ref>) we get
^k+1 - ^* ≤(1-η/2)^k - ^*
+
(η^2+η^2/-3η/4τ) H(_g^k)_
+
(1-τ)2η(1-ηα)/τ(H(_f^k) - H(^*) )
-
2η(1-ηα)/τ(H(_f^k+1) - H(^*) )
+
(1 - /4)6^k_
-
6^k+1_
+
12η^2/ H(_g^k)_
≤(1-η/2)^k - ^*
+
(14η^2/-3η/4τ) H(_g^k)_
+
(1-τ)2η(1-ηα)/τ(H(_f^k) - H(^*) )
-
2η(1-ηα)/τ(H(_f^k+1) - H(^*) )
+
(1 - /4)6^k_
-
6^k+1_.
Using η defined by (<ref>) and τ defined by (<ref>) we get
^k+1 - ^* ≤(1-/7√(/))^k - ^*
+
(1 - /4)6^k_
-
6^k+1_
+
(1-/7√(/))2η(1-ηα)/τ(H(_f^k) - H(^*) )
-
2η(1-ηα)/τ(H(_f^k+1) - H(^*) )
≤(1-/7√(/))
(^k - ^* + 2η(1-ηα)/τ(H(_f^k) - H(^*) )+6^k_)
-
2η(1-ηα)/τ(H(_f^k+1) - H(^*) )
-
6^k+1_.
Rearranging and using (<ref>) concludes the proof.
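As a quick numerical sanity check, the scalar conditions invoked in the proof can be tested directly for sample values of L, μ, and χ. The Python sketch below assumes the parameter definitions stated above together with α = μ/2; it only illustrates the bookkeeping and is not part of the original argument.

```python
import numpy as np

# Check the scalar conditions used in the proof, assuming (as stated above)
# eta = 2*sigma/(7*sqrt(L*mu)), tau = sqrt(mu/L)/7, sigma = 1/chi, alpha = mu/2.
for L, mu, chi in [(1.0, 0.1, 1.0), (10.0, 0.5, 4.0), (100.0, 1.0, 50.0)]:
    sigma = 1.0 / chi
    eta = 2.0 * sigma / (7.0 * np.sqrt(L * mu))
    tau = np.sqrt(mu / L) / 7.0
    alpha = mu / 2.0
    rate = 1.0 - (sigma / 7.0) * np.sqrt(mu / L)  # claimed contraction factor
    assert eta * alpha <= 0.25                    # bounds (1 + 2*eta*alpha) by 3/2
    assert 14.0 * eta**2 / sigma <= 3.0 * eta / (4.0 * L * tau)  # gradient term <= 0
    # each remaining per-term factor folds into the overall contraction rate:
    assert max(1.0 - eta * mu / 2.0, 1.0 - tau, 1.0 - sigma / 4.0) <= rate + 1e-12
print("parameter constraints satisfied")
```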
§.§ Proof of Theorem <ref>
From the derivation of the reformulated problem and the Demyanov-Danskin theorem it follows that ∇F^*(A^⊤ z^*) = x^*.
Therefore ∇H(z^*) = (0_d, x^*)^⊤.
Using L-smoothness of H on Q we get
‖∇F^*(A^⊤ z_g^k) - x^*‖^2
= ‖∇F^*(A^⊤ z_g^k) - ∇F^*(A^⊤ z^*)‖^2 ≤ ‖∇H(z_g^k) - ∇H(z^*)‖^2 ≤ L^2 ‖z_g^k - z^*‖^2.
Using line <ref> of Algorithm <ref>
and the inequality
‖a + b‖^2 ≤ (1+γ)‖a‖^2 + (1 + 1/γ)‖b‖^2, γ > 0,
with γ = 1/τ - 1 we get
‖∇F^*(A^⊤ z_g^k) - x^*‖^2 ≤ τ L^2 ‖z^k - z^*‖^2 + (1-τ) L^2 ‖z_f^k - z^*‖^2.
Using μ-strong convexity of H on Q we get
‖∇F^*(A^⊤ z_g^k) - x^*‖^2 ≤ τ L^2 ‖z^k - z^*‖^2
+ (2(1-τ)L^2/μ)(H(z_f^k) - H(z^*)).
Using (<ref>) we get
‖∇F^*(A^⊤ z_g^k) - x^*‖^2
≤ 2τ L^2 ‖w^k - z^*‖^2
+ 2τ L^2 ‖m^k‖^2
+ (2(1-τ)L^2/μ)(H(z_f^k) - H(z^*))
= 2τ L^2 ‖w^k - z^*‖^2
+ (τ(1-τ)L^2/(μη(1-ηα)))·(2η(1-ηα)/τ)(H(z_f^k) - H(z^*))
+ (τ L^2/3)·6‖m^k‖^2
≤ max{2τ L^2, τ(1-τ)L^2/(μη(1-ηα)), τ L^2/3}(‖w^k - z^*‖^2 + (2η(1-ηα)/τ)(F^*(z_f^k) - F^*(z^*)) + 6‖m^k‖^2)
= max{2τ L^2, τ(1-τ)L^2/(μη(1-ηα))}(‖w^k - z^*‖^2 + (2η(1-ηα)/τ)(F^*(z_f^k) - F^*(z^*)) + 6‖m^k‖^2).
Using the definition of Ψ^k (<ref>) and denoting
C = Ψ^0 max{2τ L^2, τ(1-τ)L^2/(μη(1-ηα))}
we get
‖∇F^*(A^⊤ z_g^k) - x^*‖^2 ≤ (C/Ψ^0) Ψ^k.
Applying Lemma <ref> concludes the proof.
|
http://arxiv.org/abs/2307.00220v1
|
20230701043628
|
Solving one-dimensional penetration problem for fission channel in the statistical Hauser-Feshbach theory
|
[
"Toshihiko Kawano",
"Patrick Talou",
"Stephane Hilaire"
] |
nucl-th
|
[
"nucl-th"
] |
kawano@lanl.gov
Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Los Alamos National Laboratory, Los Alamos, NM 87545, USA
CEA, DAM, DIF, F-91297 Arpajon, France
Université Paris-Saclay, CEA, LMCE, 91680 Bruyères-le-Châtel, France
LA-UR-23-25089
We solve the Schrödinger equation for an arbitrary one-dimensional
potential energy to calculate the transmission coefficient in the
fission channel of compound nucleus reactions. We incorporate the
calculated transmission coefficients into the statistical
Hauser-Feshbach model calculation for neutron-induced reactions on
^235,238U and ^239Pu. The one-dimensional model reproduces the
evaluated fission cross section data reasonably well considering the
limited number of model parameters involved. A resonance-like
structure appears in the transmission coefficient for a double-humped
fission barrier shape that includes an intermediate well, which is
understood to be a quantum mechanical effect in the fission channel.
The calculated fission cross sections for the neutron-induced
reactions on ^235,238U and ^239Pu all exhibit a similar
structure.
Solving one-dimensional penetration problem for fission channel in the statistical Hauser-Feshbach theory
T. Kawano, P. Talou, and S. Hilaire
August 1, 2023
=========================================================================================================
§ INTRODUCTION
The statistical compound nucleus theory describes the probability for
a formed compound nucleus to decay into a channel a by the partial
width Γ_a, and the Hauser-Feshbach theory <cit.>
tells us that the energy-average of width ⟨Γ_a ⟩
can be replaced by the optical model transmission coefficient T_a in
the time-reverse process. This is intuitive for particle or
photon-induced reactions, as the interpretation reads the strength to
decay into the channel a is proportional to the compound nucleus
formation probability from the same channel. For the fission channel,
however, the reverse process is not at all trivial. Several
approximations and models are then employed, which significantly
complicate the comparison and interpretation with experimental fission
cross-section data. Studies on the nuclear fission have a long
history, and comprehensive review articles of the fission calculation
are given by Bjørnholm and Lynn <cit.>,
Wagemans <cit.>, and more recently Talou and
Vogt <cit.>.
A traditional approach is to calculate a penetrability (transmission
coefficient) through the fission barrier by adopting the
semi-classical Wentzel–Kramers–Brillouin (WKB)
approximation <cit.>. We often assume that one-dimensional
(1-D) potential energy forms a double-humped fission barrier shape,
which is predicted by the liquid drop model with the microscopic
(shell and pairing energies) corrections, and apply WKB to each of the
barriers separately. By decoupling these two fission barriers, an
effective (net) transmission coefficient T_f through the whole
potential energy is calculated as
T_f = T_A T_B / (T_A + T_B) ,
where T_A and T_B are the WKB penetrability through the barriers.
Obviously this treatment over-simplifies the fission penetration
problem, as it ignores potential wells between barriers, which gives
rise to the so-called class-II and class-III (in the triple humped
case) states. Some attempts were made in the past to calculate the
fission transmission coefficient by considering the potential well
between barriers. For example, Sin et al. <cit.> defined a continuous fission barrier shape and applied WKB
for each segment to calculate the effective transmission
coefficient. Bouland, Lynn, and Talou <cit.> implemented
the transition states in the class-II well, through which the
penetrability is expressed in terms of the R-matrix
formalism. Romain, Morillon, and Duarte <cit.> reported an
anti-resonant transmission due to the class-II and class-III
states. Some recent developments in the fission calculations are
summarized in Ref. <cit.>.
Segmentation of the potential energy along the nuclear elongation
axis, where the inner barrier, class-II state, outer barrier,
class-III states, …, are aligned, still implies that the
penetration through the entire potential energy surface is obtained by
assembling its piece-wise components. Although limited to an
analytical expression of potential energy, Cramer and
Nix <cit.> obtained an exact solution of wave function in
terms of the parabolic-cylinder functions for the double-humped
potential shape. Sharma and Leboeuf <cit.> extended this
technique to the triple-humped potential barrier case. By solving the
Schrödinger equation numerically, an extension of the Cramer-Nix
model to an arbitrary shape of 1-D potential energy is
straightforward. This was reported by Morillon, Duarte, and
Romain <cit.> and by ourselves <cit.>,
where the effective transmission coefficient in
Eq. (<ref>) is no longer involved. The solution of
Schrödinger equation for 1-D potential is, however, just one of
all the possible fission paths, whereas the dynamical fission process
takes place through any excited states on top of the fission barrier
in a strongly deformed compound nucleus. To calculate the actual
fission transmission coefficient that can be used in the
Hauser-Feshbach theory calculations, we have to take into account the
penetration through the excited states as well.
Eventually we describe the nuclear fission process from two extreme
points of view: either the compound nucleus evolves through a fixed,
albeit large, number of fission paths, or the configuration is fully
mixed in the potential well so that the penetration through the
multiple barriers can be totally decoupled as in
Eq. (<ref>).
Our approach follows the more general former case; the fission process
takes place along an eigenstate of the compound nucleus, which is
continuous along the nuclear deformation coordinate. In this paper, we
revisit the Cramer-Nix model and its extension to the arbitrary
potential energy shape, and introduce nuclear excitation to calculate
T_f. The obtained T_f is used in the Hauser-Feshbach theory to
calculate the fission cross section, which can be compared with
available experimental data. We perform the cross-section calculations
for two distinct cases, the neutron-induced fission on ^238U where
the total excitation energy is still under the fission barrier, and
that for ^235U and ^239Pu where the system energy is higher
than the barrier. In this paper we limit ourselves to the first-chance
fission only, where no neutron emission occurs prior to
fission. However, extension to the multi-chance fission process is not
complicated at all.
§ THEORY
§.§ Fission transmission coefficient for double-humped fission barrier
First we briefly summarize the standard technique to calculate the
fission transmission coefficient T_f for the double-humped fission
barrier. The objective is to emphasize the distinction between the
conventional fission calculation and our approach. The fission barrier
is approximated by an inverted parabola characterized by the barrier
parameters; the heights V_A for the inner barrier and V_B for the
outer barrier, and their curvatures C_A and C_B (the curvature is
often denoted by ħω), as shown schematically in
Fig. <ref>. By applying the WKB
approximation to the parabolic-shaped barriers, the transmission
coefficient is given by the Hill-Wheeler expression <cit.>
T_i(E) = 1 / { 1 + exp[ 2π(V_i + E - E_0)/C_i ] } , i = A, B ,
where E_0 is the initial excitation energy and E is the nuclear
excitation energy measured from the top of each barrier. The
"lumped" transmission coefficient T_i is the sum over all possible
excited states, at E_k for the discrete levels and at E_x in the
continuum,
T_i
= ∑_k T_i(E_k)
+ ∫_E_c^∞ T_i(E_x) ρ_i(E_x) dE_x , i = A, B ,
where ρ(E_x) is the level density on top of each barrier, and
E_c is the highest discrete state energy. Although we didn't specify
the spin and parity of the compound nucleus, the summation and
integration are performed for the same spin and parity states. Often
some phenomenological models are applied to ρ(E_x) to take the
nuclear deformation effect into account, which is the so-called
collective enhancement <cit.>. A standard technique in
calculating fission cross sections, e.g. as adopted by
Iwamoto <cit.>, assumes typical nuclear deformations at
the inner and outer barriers. Generally speaking the collective
enhancement is model and assumption dependent, which makes fission
model comparison difficult.
When the fission barriers V_A and V_B are fully decoupled, T_A
represents a probability to go through the inner barrier, and a
branching ratio from the intermediate state to the outer direction is
T_B/(T_A+T_B). The effective fission transmission coefficient is
thus given by Eq. (<ref>). This expression implies that
the dynamical process in the class-II well is fully adiabatic, and it
virtually forms a semi-stable compound state. It should be noted that
there is no explicit fission path in this model, since integration
over the excited states in Eq. (<ref>) is performed before
connecting T_A and T_B.
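As a concrete illustration, the Hill-Wheeler penetrabilities and their decoupled combination in Eq. (<ref>) can be evaluated in a few lines. The Python sketch below uses illustrative barrier values, not the fits obtained later in this paper.

```python
import numpy as np

def hill_wheeler(E0, V, C, E=0.0):
    """Hill-Wheeler penetrability T = 1/(1 + exp[2 pi (V + E - E0)/C]).

    E0: compound-state excitation energy, V: barrier height,
    C: curvature (hbar omega), E: excitation on top of the barrier; all in MeV.
    """
    return 1.0 / (1.0 + np.exp(2.0 * np.pi * (V + E - E0) / C))

def decoupled_tf(TA, TB):
    """Effective transmission for fully decoupled barriers, T_f = TA*TB/(TA+TB)."""
    return TA * TB / (TA + TB)

E0 = 5.8  # MeV, illustrative compound-state energy
TA = hill_wheeler(E0, V=6.0, C=0.6)
TB = hill_wheeler(E0, V=5.5, C=0.6)
print(decoupled_tf(TA, TB))
# Note that decoupled_tf(1, 1) = 1/2, whereas the 1-D model discussed next
# gives T = 1 once the system energy exceeds both barriers.
```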
§.§ Fission transmission coefficient for 1-D shape
§.§.§ Concatenated parabolas
The Schrödinger equation for an arbitrary one-dimensional (1-D)
potential energy shape can be solved exactly without the WKB
approximation by applying the numerical integration
technique. Although our purpose is to solve problems for any fission
barrier shape, it is still convenient to employ the parabolic
representation to compare with the double-humped barrier
cases. Similar to the three-quadratic-surface parameterization of
nuclear shape <cit.>, the 1-D barrier is
parameterized by smoothly connected parabolas
V(i,x) = V_i + (-1)^i 1/2 c_i (x - x_i)^2 , i = 1, 2, … ,
where i is the region index for the segmented parabola (odd i for
barriers, and even for wells), x is a dimensionless deformation
coordinate, c_i = μ C_i^2 / ħ^2, V_i is the top (bottom)
energy of the barrier (well), x_i is the center of each parabola,
and μ is the inertial mass parameter. Note that the region index
adopted here corresponds to the double-humped case as A = 1 and B =
3. Because the deformation coordinate is dimensionless, the
calculated result is insensitive to μ, and we take
μ/ħ^2 = 0.054 A^5/3
as suggested by Cramer and Nix <cit.>. The region index i
runs from 1 to 3 for the double-humped shape, and 5 for the
triple-humped shape. The double-humped case is shown in
Fig. <ref> by the solid curve.
By providing the barrier parameters V_i and C_i, the junction
point (ξ_i) and the parabola center (x_i) for each adjacent
region are automatically determined through continuity
relations. Since the abscissa is arbitrary in the 1-D model, we first
fix the center of the first barrier at
x_1 = x_ min + √(2 V_1/c_1) ,
where x_ min is an arbitrary small offset. The consecutive
central points are given by
x_i = x_i-1
+ √(2|V_i-1 - V_i|(c_i-1+c_i)/c_i-1c_i) ,
and the junction points are
ξ_i = c_i x_i + c_i+1 x_i+1/c_i + c_i+1 .
With the central points of Eq. (<ref>) and the junction
points of Eq. (<ref>), the segmented parabolas in
Eq. (<ref>) are smoothly concatenated.
In the class-II and/or class-III well between the barriers, it is
possible to add a small imaginary potential that accounts for flux
absorption <cit.>
W(i,x) = Δ V - W_i for Δ V ≤ W_i ,
W(i,x) = 0 for Δ V > W_i ,
where Δ V = V(i,x) - V_i, i=2 for class-II and i=4 for
class-III. We assume the potential shape is the same as the real part,
while the strength is given by a parameter W_i.
§.§.§ Solution of 1-D Schrödinger equation
The 1-D Schrödinger equation for the fission channel of compound
nucleus at the system energy E is written
as <cit.>
d^2/dx^2ϕ(x)
+ 2μ/ħ^2{ E - ( V(x) + iW(x) ) }ϕ(x) = 0 ,
where the wave function ϕ(x) satisfies the following boundary
condition <cit.>
ϕ(x) ≃ u^(-)(kx) - S u^(+)(kx) for x > x_max , and ϕ(x) ≃ A u^(-)(kx) for x < x_min .
[x_min, x_max] is the entire range of the fission
barrier considered, k = √(2μ E)/ħ is the wave number, A is the
amplitude of wave function in the class-I well, and
u^(±)(kx) = cos(kx) ± i sin(kx) .
The Schrödinger equation in the internal region can be solved
numerically by a standard technique such as the Numerov method or
Fox-Goodwin method <cit.>. The solution at the matching point
x_m in the external region (x_m > x_ max) is written as
ψ(x_m) = u^(-)(kx_m) - S u^(+)(kx_m) ,
and the internal solution ϕ(x_m) is smoothly connected with the
external solution at x_m. Analog to the scattering matrix element in
the single-channel optical model, the coefficient S is then given by
S = [ f u^(-)(x_m) - g^(-) ] / [ f u^(+)(x_m) - g^(+) ] ,
where
f ≡ (dϕ/dx) / ϕ |_x_m ,
g^(±) ≡ d u^(±)/dx |_x_m .
When the potential is real everywhere, the fission transmission
coefficient is given by
T = 1 - |S|^2 .
In the case of the complex potential, the amplitude A in
Eq. (<ref>) is given by the normalization factor of the
internal wave function at x_m,
A = . u^(-) - S u^(+)/ϕ|_x_m ,
and the transmission coefficient through the barrier is T_d = |A|^2.
Because of the loss of flux due to the imaginary potential, T_d is
smaller than T, and T_d goes into the statistical Hauser-Feshbach
theory instead of T.
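The integration and matching described above can be sketched for a real potential as follows, using the Numerov (Fox-Goodwin) three-point recurrence on a uniform grid. This is an illustrative implementation, not the authors' code.

```python
import numpy as np

def transmission_1d(E, x, V, A):
    """T = 1 - |S|^2 through a real 1-D barrier (illustrative sketch).

    x must be a uniform grid with V ~ 0 at both ends. Starts from a purely
    transmitted wave u(-) = exp(-ikx) on the left, propagates the Fox-Goodwin
    (Numerov) recurrence to the right, and matches phi ~ u(-) - S u(+) there.
    """
    mu = 0.054 * A ** (5.0 / 3.0)            # mu / hbar^2 (dimensionless x)
    k = np.sqrt(2.0 * mu * E)
    h = x[1] - x[0]
    f = 2.0 * mu * (E - np.asarray(V))       # phi'' = -f(x) phi
    w = 1.0 + h * h * f / 12.0
    phi = np.empty(len(x), dtype=complex)
    phi[0] = np.exp(-1j * k * x[0])
    phi[1] = np.exp(-1j * k * x[1])
    for n in range(1, len(x) - 1):
        phi[n + 1] = ((12.0 - 10.0 * w[n]) * phi[n] - w[n - 1] * phi[n - 1]) / w[n + 1]
    dphi = (phi[-1] - phi[-3]) / (2.0 * h)   # derivative at x[-2]
    a = 0.5 * (phi[-2] - dphi / (1j * k)) * np.exp(1j * k * x[-2])   # u(-) part
    b = 0.5 * (phi[-2] + dphi / (1j * k)) * np.exp(-1j * k * x[-2])  # u(+) part
    return 1.0 - abs(-b / a) ** 2            # S = -b/a, T = 1 - |S|^2

# e.g., with the grid from the previous sketch:
# x, v = potential_1d([6.5, 1.0, 5.5], [0.6, 0.4, 0.5], A=239)
# print(transmission_1d(4.5, x, v, 239))
```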
§.§.§ Potential energy for excited states
Since penetration through the potential defined by
Eq. (<ref>) is merely one of all the possible fission
paths, we have to aggregate such possible trajectories (paths) to
calculate the lumped transmission coefficient, which is analogous to
Eq. (<ref>). While the fission penetration for the ground
state takes place through the shape of potential energy in
Eq. (<ref>), each of the excited states would be
constructed on top of the ground state trajectory. This is a critical
difference between the double-humped and 1-D models, as an adiabatic
intermediate state assumed in the double-humped model conceals an
actual fission path along the deformation coordinate, while it is
explicit in the 1-D model.
To define the fission trajectories for the excited states, one of the
most naive assumptions is that the potential energy is shifted by the
excitation energy E_x as V(x) = V_0(x) + E_x, where V_0 is the
potential for the ground state. This, however, ignores distortion of
the eigenstate spectrum in a compound nucleus as it changes shape. At
the limit of adiabatic change in the nuclear shape, the excitation
energy of each of the eigenstates changes slightly due to shell,
pairing, and nuclear deformation effects. This results in distortion
of the trajectories, as opposed to a simple shift in energy.
We empirically know that calculated fission cross sections
underestimate experimental data if we simply adopt the level density
ρ(E_x) for an equilibrium shape in the lump-sum of
Eq. (<ref>). Therefore we often employ some models to
enhance the level densities on top of each of the barriers, which
account for increasing collective degree-of-freedom in a strongly
deformed nucleus. Instead of introducing the collective enhancement in
our 1-D penetration calculation, we assume the excitation energies of
the states will be lowered due to the nuclear deformation. In other
words, the eigenstates in a compound nucleus at relatively low
excitation energies are distorted by deformation effects. An
illustration of the distortion effect corresponding to a compression
is schematically shown in Fig. <ref> by the
dotted curves—trajectories 2 and 3. This trajectory compression
should be mitigated for the higher excitation energies, which is also
phenomenologically known as the damping of collectivity. Although the
compression might depend on the deformation as it changes the pairing
and shell effects, we model the compression in a rather simple way to
eliminate unphysical over-fitting to observed data. We assume the
eigenstates in the compound nucleus are compressed by a factor that
depends on the excitation energy only. Our ansatz reads
ε_x =
{
f_0 + (1 - e^-f_1 E_x) (1 - f_0)
} E_x ,
where the parameter f_0 is roughly 0.8 and the damping
f_1 is ∼ 0.2 MeV^-1 as shown later. The corresponding fission
trajectory for the excited states is now
V(x) = V_0(x) + ε_x .
The transmission coefficient for this trajectory is
T(ε_x), and the lumped transmission coefficient T_f is
given by
T_f = ∑_k T(ε_k)
+ ∫_E_c^∞ T(ε_x) ρ(ε_x) dE_x ,
where the summation and integration are performed for the spin and
parity conserved states. Although the integration range goes to
infinity, or some upper-limit value could be
considered <cit.>, this converges quickly with increasing
excitation energy. Generally it is safe to truncate the integration at
E_x = E_0.
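Schematically, the aggregation in Eq. (<ref>) with the compression ansatz of Eq. (<ref>) might be implemented as below. The spin-parity bookkeeping of the full Hauser-Feshbach calculation is omitted, and all names are ours (Python sketch).

```python
import numpy as np

def compressed_energy(Ex, f0=0.8, f1=0.2):
    """Trajectory compression: eps_x = {f0 + (1 - exp(-f1 Ex)) (1 - f0)} Ex."""
    return (f0 + (1.0 - np.exp(-f1 * Ex)) * (1.0 - f0)) * Ex

def lumped_tf(T_of_eps, levels, Ec, E0, rho, f0=0.8, f1=0.2, n=400):
    """T_f = sum_k T(eps_k) + integral_{Ec}^{E0} T(eps_x) rho(eps_x) dEx.

    T_of_eps: single-path transmission for the trajectory V0(x) + eps;
    levels: discrete excitation energies E_k (MeV); rho: level density.
    The integral is truncated at E0, as suggested in the text.
    """
    tf = sum(T_of_eps(compressed_energy(Ek, f0, f1)) for Ek in levels)
    Ex = np.linspace(Ec, E0, n)
    eps = compressed_energy(Ex, f0, f1)
    integrand = np.array([T_of_eps(e) * rho(e) for e in eps])
    return tf + np.trapz(integrand, Ex)
```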
§ RESULTS AND DISCUSSION
§.§ Wave function and transmission coefficient for a single fission path
As an example of the 1-D model, the calculated wave functions for
connected parabolas are shown in Fig. <ref>, which is for
the A=239 system. The assumed barrier and well energies are V_1=6.5,
V_2=1, and V_3=5.5 MeV, with curvatures C_1=0.6, C_2=0.4, and
C_3=0.5 MeV. We depict three cases of the system energy E:
(a) below both barriers, (b) between V_1 and V_3, and (c) above both.
Since the 1-D potential penetration problem is invariant whether
numerical integration is performed from the right or left side, the
wave function is normalized to the external function that has unit
amplitude. The penetrability is seen as the amplitude of wave function
inside the potential region. Apparently the wave function penetrates
through the potential barrier when the system has enough energy to
overcome the both barriers E>V_1 and E>V_3, and it is blocked if
the barrier is higher than the system energy. However, although the
wave function damps rapidly, quantum tunneling is still seen beyond
the barrier.
One of the remarkable differences from the double-humped model in
Eq. (<ref>) with the Hill-Wheeler expression of
Eq. (<ref>) is that the 1-D model sometimes exhibits
resonating behavior due to the penetration through the class-II
well. This was already reported by Cramer and Nix <cit.> in
their parabolic-cylinder function expression. It should be noted that
this is not an actual compound nucleus resonance, but a sort of size
effect in which the traveling and reflected waves happen to have the
same phase. As a result the wave function is amplified significantly
at a resonating energy.
This amplification can be seen easily in the transmission coefficient
as in Fig. <ref>. The top panel is for the same
potential as the one in Fig. <ref>. The first sharp
resonance appears below the inner barrier of V_1=5.5 MeV, and the
second broader resonance is just above the barrier. We also depicted
the transmission coefficients calculated with the WKB approximation in
Eq. (<ref>) for the inner and outer barriers. As it is known,
the WKB approximation works reasonably well when the energy is close
to the fission barrier. However, it deviates notably from the 1-D
model when an interference effect of penetrations through the inner
and outer barriers becomes visible.
This effect becomes more remarkable when the inner and outer barriers
have a similar magnitude, which results in a special circumstance that
the penetration and reflection waves are in phase. The bottom panel in
Fig. <ref> is the case where these barriers have the
same height of 6.0 MeV. A broad resonance appears just below the
fission barrier, which enhances the fission cross section even if the
compound state is still below the fission barrier. Then the
penetration drops rapidly as the excitation energy decreases. On the
contrary, the penetration by WKB stays higher in the sub-threshold
region. Under these circumstances, the Hill-Wheeler expression may
give unreliable fission cross sections, albeit their magnitude would
be quite small. Nuclear reaction codes sometimes introduce a
phenomenological class-II (and class-III) resonance effect to
compensate for this deficiency <cit.>.
The difference in the WKB curves in Fig. <ref> (b) is
due to the curvatures, and both curves approach T_A = T_B = 1
once the system energy exceeds the barrier energy. However, the
effective transmission coefficient becomes 1/2, when
Eq. (<ref>) is applied. This is also an important
difference between the double-humped and 1-D models, as the 1-D model
always gives T = 1 when the system energy can overcome all the
barriers.
§.§ Fission path through complex potential
The wave function is absorbed by the potential when a complex class-II
well is given, which results in a reduction of the fission transmission
coefficient. When we add a small imaginary part (W_2 = 0.5 MeV) to
class-II in the potentials shown in Fig. <ref>, the
calculated transmission coefficients are compared with the real
potential cases in Fig. <ref>. The imaginary
potential acts on the wave function as amplitude damping so that the
asymptotic transmission coefficients at higher energies will be less
than unity. In this case, the asymptotic value of T_d = |A|^2 is
∼ 0.5, which is determined by W_2. The imaginary potential also
shifts the phase of wave function slightly, and the resonance-like
shape is less pronounced.
When a larger imaginary potential is provided, the fission
transmission coefficient goes almost to zero. The physical meaning of
the amplitude damping is not well defined, since the imaginary strength
is arbitrary. This is analogous to the optical model; an incident
particle disappears in the optical potential by its imaginary part
regardless of the nuclear reaction mechanisms. A possible
interpretation is that the system is trapped by a shape isomeric state
that might be long-lived.
§.§ Hauser-Feshbach model calculation
We incorporate the lumped fission transmission coefficient in
Eq. (<ref>) into the statistical Hauser-Feshbach model
calculation to demonstrate the applicability of the 1-D model in actual
compound nucleus calculations. We do not include the imaginary
potential, so that the calculated results will be tightly constrained
by the potential shape characterized by a limited number of inputs.
The calculation is performed with the CoH_3 statistical
Hauser-Feshbach code <cit.>, which properly combines the
coupled-channels optical model and the statistical Hauser-Feshbach
theory by performing the Engelbrecht-Weidenmüller
transformation <cit.> of the
optical model penetration matrix <cit.>. This is
particularly important for nuclear reaction modeling in the actinide
mass region. We employ the coupled-channels optical potential by
Soukhovitskii et al. <cit.> for producing the
neutron penetration matrix and the generalized transmission
coefficients <cit.>.
To look at the fission channel more carefully, we take some reasonable
model inputs for other reaction channels from the literature and do not
attempt fine-tuning, as the purpose of this study is not
parameter fitting. Since the curvature parameter C = ħω is
relatively insensitive to fission cross section calculation, we fix
them to a typical value of 0.6 MeV, and roughly estimate the heights
of inner and outer barriers as well as the trajectory compression
parameters in Eq. (<ref>) by comparing with
experimental fission cross section data. The class-II depth also has a
moderate impact on the calculated transmission coefficients, as long
as a reasonable value is provided. We fix it to 0.5 MeV. Other model
parameters are set to default internal values in CoH_3. The
γ-ray strength function is taken from Kopecky and
Uhl <cit.> with the M1 scissors mode <cit.>,
the Gilbert-Cameron composite formula <cit.>
for the level density, and the discrete level data taken from
RIPL-3 <cit.>.
First, we perform the statistical model calculations for
neutron-induced reaction on ^238U, where sub-threshold fission may
be seen below about 1 MeV of incident neutron energy. The ground
state rotational band members, 0^+, 2^+, 4^+, and 6^+ are
coupled with the deformation parameters taken from the Finite Range
Droplet Model (FRDM) <cit.>. The calculated fission cross
sections are shown in Fig. <ref> by comparing with the
evaluated fission cross sections in ENDF/B-VIII.0 <cit.> and
JENDL-5 <cit.>. The reason for showing the evaluations instead
of actual experimental data is that the evaluated data often include
more experimental information than the direct measurement of ^238U
fission cross section, e.g. cross section ratio
measurements. The accuracy of the evaluations is good enough to test
the relevance of this new model. We found that the case of V_1=6,
V_2=0.5, and V_3=5 MeV reasonably reproduce the evaluated fission
cross section in the energy range of our interest. The compression
parameters f_0=0.8 and f_1=0.2 MeV^-1 were needed to reproduce
the fission cross section plateau above 2 MeV. We also calculated the
V_3=5.5 MeV case, which produces a more resonance-like structure below
1 MeV, despite the fact that they tend to underestimate the
evaluations on average.
Since the resonating behavior seen in the sub-threshold region
originates from the wave function in between the inner and outer
barriers, their locations and amplitudes strongly depend on the shape
of the potential energy surface. Because the 1-D potential energy
constructed by smoothly concatenating segmented parabolas is a crude
approximation, we naturally understand that such structure in the
experimental data cannot be predicted exactly by the model unless we
modify the potential shape freely. This being said, the fission cross
sections calculated with the 1-D model in the sub-threshold region are
not so far from reality, which is usually not so obvious in the
Hill-Wheeler case.
The neutron-induced reaction on ^235U does not have a threshold in
the fission channel. The neutron separation energy is 6.55 MeV, and
the compound nucleus already has enough energy to fission even for a
thermal-energy neutron incident. We adopt the same trajectory
compression parameters, V_2, and curvature parameters as those in
the ^238U calculation, and just look for V_1 and V_3. We found
that the set of V_1=5.9 and V_3=5.7 MeV gives a reasonable fit to
the experimental ^235U fission cross section, as compared with the
evaluated values in Fig. <ref>. The resonance-like
structure, which is seen in the sub-threshold fission of ^238U, is
also seen near 60 keV. The evaluated data also show a small bump near
30 keV, which might be attributed to enhancement of the wave-function
amplitude in between the barriers. However, it is hard to claim that
our predicted peak at 60 keV corresponds to the observed bump, as the
potential energy shape is over-simplified in this study.
To show the sensitivity to the inner barrier (the higher one), the
range of calculated fission cross sections obtained by changing V_1 by ±
100 keV is shown by the dashed curves. More resonance-like structure
appears when V_1 is reduced to 5.8 MeV, because there is only a
100 keV difference between V_1 and V_3. When the difference is
larger, V_1+100 keV, the structure becomes less pronounced. A
similar sensitivity study was performed by Neudecker et
al. <cit.>, where a 100–150 keV change in the fission
barrier height changes the calculated fission cross sections by 10%
or so, while the cross section shape remains the same in the
conventional fission model.
While V_1 has such a large sensitivity, the outer barrier (or the
lower one) does not change the calculated fission cross section much,
as long as V_3 is lower than V_1 by a few hundred keV or
more. Figure <ref> includes the case of V_1=5.9 and
V_3=5.4 MeV, where the resonance-like structure is fully washed
out. We do not show the sensitivity to V_3 from further lowering of the
outer barrier, since the resulting curves are hardly
distinguishable. Astonishingly, the calculated fission cross sections remain
almost identical even if V_3=1 MeV, which implies that the fission
calculation is totally governed by the single-humped fission barrier
shape.
Figure <ref> shows the calculated fission cross
section of ^239Pu. In this case, it was difficult to obtain a
reasonable fit to the evaluations by employing the same compression
parameters, and a reduction of f_0 to 0.55 was needed (f_1 is the
same as before). The barrier height parameters are V_1=5.9,
V_2=5.7 MeV. The resonance-like structure also appears, although it
is not so noticeable like in the uranium cases. We also show the
cross-section band when V_1± 100 keV. The sensitivity of V_1 to
the fission cross section is similar to ^235U. The evaluated cross
sections are roughly covered by the ±100 keV band. However, again,
we emphasize that the objective of the present study is not to fit
the model calculation perfectly to the experimental data, but to
demonstrate that the simple 1-D model is capable of capturing the
gross features of the fission reaction process, producing calculated
fission cross sections in reasonable agreement with experimental data
without the need for a large number of fitting model parameters.
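For bookkeeping, the parameters quoted in this section can be collected in one place (values in MeV, except f_1 in MeV^-1; the curvatures were fixed at 0.6 MeV and the class-II depth at 0.5 MeV, as stated above). This summary is ours, transcribed from the text.

```python
# Barrier and compression parameters quoted in the text; bookkeeping only.
FITTED_PARAMETERS = {
    "n+238U":  dict(V1=6.0, V2=0.5, V3=5.0, C=0.6, f0=0.80, f1=0.2),
    "n+235U":  dict(V1=5.9, V2=0.5, V3=5.7, C=0.6, f0=0.80, f1=0.2),
    "n+239Pu": dict(V1=5.9, V2=0.5, V3=5.7, C=0.6, f0=0.55, f1=0.2),
}
```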
§.§ Possible refinements
Although we employed the parameterized potential shape, which is
constructed by segmented parabolas, the experimental fission cross
sections are reasonably reproduced by a few model parameters that
characterize the shape itself. This is already a significant
improvement of the statistical Hauser-Feshbach calculations for
fission compared to the traditional Hill-Wheeler expression for the
double-humped fission barriers connected by
Eq. (<ref>). For better reproduction of available
experimental data, as well as prediction of unknown fission cross
sections, we envision further improvement by incorporating a few
theoretical ingredients.
First, the potential energy shape could be taken from the potential
energy surface calculated microscopically <cit.> or by
semi-microscopic approaches <cit.>. Because the potential energy
surface is often defined in a multi-dimensional deformation coordinate
space, either we have to project the surface onto a one-dimensional
axis (it is, however, known that the projection often causes
discontinuity problems <cit.>), or our 1-D model should be
extended to a set of coupled-equations for the multi-dimensional
coordinate. Second, we should employ a better trajectory compression
model rather than the simple damping of Eq. (<ref>),
where nuclear deformation effect is ignored, nevertheless it is known
that the single-particle spectrum depends on the nuclear
deformation. Because our trajectory compression model is constant
along the deformation axis, the potential penetration calculation
becomes invariant for exchange of the inner and outer barriers, while
the calculated potential energy surface often indicates that the inner
barrier tends to be higher than the outer barrier for the U and Pu
isotopes. Such a property might be seen by introducing the trajectory
distortion that is deformation dependent. The nuclear
deformation can be calculated with the full- or semi-microscopic
approaches, where broken symmetries in the nuclear shape are naturally
taken into account. We could estimate possible trajectories by
calculating the microscopic level densities based on the single
particle energies in the deformed one-body potential.
§ CONCLUSION
We proposed a new model to calculate fission cross sections in the
statistical Hauser-Feshbach framework. Instead of applying the WKB
approximation for uncoupled fission barriers as often done in the
past, we solved the Schrödinger equation for a one-dimensional
(1-D) potential model to calculate the penetration probabilities
(transmission coefficients) in the fission channel of compound nucleus
reactions. Because we took continuity of the fission path into
consideration, the expression to combine several penetrabilities for
different barriers, like T = T_A T_B/(T_A+T_B), is no longer
involved in our model. Although the potential shape was parameterized
by smoothly concatenated parabolas for the sake of convenience, the
model can be applied to any arbitrary shape, as we obtain the wave
function by the numerical integration technique.
We showed that a resonance-like structure manifests in the calculated
transmission coefficients for a double-humped fission barrier that
includes a potential well between the two humps, which is understood to be a
quantum mechanical effect in the fission channel. The resonance-like
structure becomes more remarkable when these barriers have a similar
height, where the penetration and reflection waves are in phase. The
structure becomes less sharp when an imaginary part is introduced in
the potential well. The complex potential also absorbs the flux of
fission channel, resulting in lower transmission coefficients.
The 1-D potential model was incorporated into the statistical
Hauser-Feshbach model to calculate neutron-induced reactions on
^235,238U and ^239Pu. In this case we did not include the
imaginary part in the potential. In order to calculate the potential
penetration for the excited states, we introduced a simple trajectory
compression model to account for change in the nuclear structure due
to the nuclear deformation. By aggregating the fission transmission
coefficients for all the possible fission paths that are energetically
allowed, calculated fission cross sections for ^235,238U and
^239Pu were compared with the evaluated data that represent the
experimental cross sections. We showed that reasonable reproduction
of the data can be obtained by a limited number of model
parameters. Although the detailed structure seen in the experimental
fission cross section is hardly reproduced by the 1-D model due to a
crude approximation for the potential adopted, further improvement
could be made by more careful studies on the potential shape, together
with more realistic trajectory compression models.
§ ACKNOWLEDGMENTS
TK thanks B. Morillon and P. Romain of CEA for valuable discussions on
this subject. TK and PT were supported by the Advanced Simulation and
Computing (ASC) Program, National Nuclear Security Administration,
U.S. Department of Energy. This work was carried out under the
auspices of the National Nuclear Security Administration of the
U.S. Department of Energy at Los Alamos National Laboratory under
Contract No. 89233218CNA000001.
[1] W. Hauser and H. Feshbach, The inelastic scattering of neutrons, Phys. Rev. 87, 366 (1952).
[2] S. Bjørnholm and J. E. Lynn, The double-humped fission barrier, Rev. Mod. Phys. 52, 725 (1980).
[3] C. Wagemans, The Nuclear Fission Process (CRC Press, 1991).
[4] P. Talou and R. Vogt, Nuclear Fission: Theories, Experiments and Applications (Springer, 2023).
[5] D. L. Hill and J. A. Wheeler, Nuclear constitution and the interpretation of fission phenomena, Phys. Rev. 89, 1102 (1953).
[6] M. Sin, R. Capote, A. Ventura, M. Herman, and P. Obložinský, Fission of light actinides: ^232Th(n,f) and ^231Pa(n,f) reactions, Phys. Rev. C 74, 014608 (2006).
[7] M. Sin, R. Capote, M. W. Herman, and A. Trkov, Extended optical model for fission, Phys. Rev. C 93, 034605 (2016).
[8] O. Bouland, J. E. Lynn, and P. Talou, R-matrix analysis and prediction of low-energy neutron-induced fission cross sections for a range of Pu isotopes, Phys. Rev. C 88, 054612 (2013).
[9] P. Romain, B. Morillon, and H. Duarte, Bruyères-le-Châtel neutron evaluations of actinides with the TALYS code: The fission channel, Nuclear Data Sheets 131, 222 (2016).
[10] J. D. Cramer and J. R. Nix, Exact calculation of the penetrability through two-peaked fission barriers, Phys. Rev. C 2, 1048 (1970).
[11] R. C. Sharma and J. N. Leboeuf, Three-hump potential barrier in the ^234Th nucleus, Phys. Rev. C 14, 2340 (1976).
[12] B. Morillon, H. Duarte, and P. Romain, Petits problèmes de transmission quantique, Tech. Rep. (CEA, 2010), private communication.
[13] T. Kawano, Exact solution of fission penetration through arbitrary complex fission barrier, Tech. Rep. LA-UR-15-24956 (Los Alamos National Laboratory, 2015).
[14] A. Junghans, M. de Jong, H.-G. Clerc, A. Ignatyuk, G. Kudyaev, and K.-H. Schmidt, Projectile-fragment yields as a probe for the collective enhancement in the nuclear level density, Nuclear Physics A 629, 635 (1998).
[15] O. Iwamoto, Development of a comprehensive code for nuclear data evaluation, CCONE, and validation using neutron-induced cross sections for uranium isotopes, Journal of Nuclear Science and Technology 44, 687 (2007).
[16] J. R. Nix, Further studies in the liquid-drop theory on nuclear fission, Nuclear Physics A 130, 241 (1969).
[17] P. Möller, A. J. Sierk, T. Ichikawa, A. Iwamoto, R. Bengtsson, H. Uhrenholt, and S. Åberg, Heavy-element fission barriers, Phys. Rev. C 79, 064304 (2009).
[18] B. B. Back, J. P. Bondorf, G. A. Otroschenko, J. Pedersen, and B. Rasmussen, Fission of U, Np, Pu and Am isotopes excited in the (d, p) reaction, Nuclear Physics A 165, 449 (1971).
[19] L. Fox and E. T. Goodwin, Some new methods for the numerical integration of ordinary differential equations, Mathematical Proceedings of the Cambridge Philosophical Society 45, 373 (1949).
[20] S. Hilaire, C. Lagrange, and A. J. Koning, Comparisons between various width fluctuation correction factors for compound nucleus reactions, Annals of Physics 306, 209 (2003).
[21] R. Capote, M. Herman, P. Obložinský, et al., RIPL – Reference Input Parameter Library for calculation of nuclear reactions and nuclear data evaluations, Nuclear Data Sheets 110, 3107 (2009).
[22] T. Kawano, CoH_3: The coupled-channels and Hauser-Feshbach code, Springer Proceedings in Physics 254, 27 (2021); CNR2018: International Workshop on Compound Nucleus and Related Topics, LBNL, Berkeley, CA, USA, September 24–28, 2018, J. Escher, Y. Alhassid, L. A. Bernstein, D. Brown, C. Fröhlich, P. Talou, and W. Younes (Eds.).
[23] C. A. Engelbrecht and H. A. Weidenmüller, Hauser-Feshbach theory and Ericson fluctuations in the presence of direct reactions, Phys. Rev. C 8, 859 (1973).
[24] T. Kawano, P. Talou, and H. A. Weidenmüller, Random-matrix approach to the statistical compound nuclear reaction at low energies using the Monte Carlo technique, Phys. Rev. C 92, 044617 (2015).
[25] T. Kawano, R. Capote, S. Hilaire, and P. Chau Huu-Tai, Statistical Hauser-Feshbach theory with width-fluctuation correction including direct reaction channels for neutron-induced reactions at low energies, Phys. Rev. C 94, 014612 (2016).
[26] G. R. Satchler, Average compound nucleus cross sections in the continuum, Physics Letters 7, 55 (1963).
[27] E. S. Soukhovitskii, S. Chiba, J.-Y. Lee, O. Iwamoto, and T. Fukahori, Global coupled-channel optical potential for nucleon-actinide interaction from 1 keV to 200 MeV, Journal of Physics G: Nuclear and Particle Physics 30, 905 (2004).
[28] T. Kawano, P. Talou, J. E. Lynn, M. B. Chadwick, and D. G. Madland, Calculation of nuclear reaction cross sections on excited nuclei with the coupled-channels method, Phys. Rev. C 80, 024611 (2009).
[29] J. Kopecky and M. Uhl, Test of gamma-ray strength functions in nuclear reaction model calculations, Phys. Rev. C 41, 1941 (1990).
[30] M. R. Mumpower, T. Kawano, J. L. Ullmann, M. Krtička, and T. M. Sprouse, Estimation of M1 scissors mode strength for deformed nuclei in the medium- to heavy-mass region by statistical Hauser-Feshbach model calculations, Phys. Rev. C 96, 024612 (2017).
[31] A. Gilbert and A. G. W. Cameron, A composite nuclear-level density formula with shell corrections, Can. J. Phys. 43, 1446 (1965).
[32] T. Kawano, S. Chiba, and H. Koura, Phenomenological nuclear level densities using the KTUY05 nuclear mass formula for applications off-stability, Journal of Nuclear Science and Technology 43, 1 (2006).
[33] P. Möller, J. R. Nix, W. D. Myers, and W. J. Swiatecki, Nuclear ground-state masses and deformations, Atomic Data and Nuclear Data Tables 59, 185 (1995).
[34] D. A. Brown, M. B. Chadwick, R. Capote, et al., ENDF/B-VIII.0: the 8th major release of the nuclear reaction data library with CIELO-project cross sections, new standards and thermal scattering data, Nuclear Data Sheets 148, 1 (2018).
[35] O. Iwamoto, N. Iwamoto, S. Kunieda, et al., Japanese evaluated nuclear data library version 5: JENDL-5, Journal of Nuclear Science and Technology 60, 1 (2023).
[36] D. Neudecker, O. Cabellos, A. R. Clark, M. J. Grosskopf, W. Haeck, M. W. Herman, J. Hutchinson, T. Kawano, A. E. Lovell, I. Stetcu, P. Talou, and S. Vander Wiel, Informing nuclear physics via machine learning methods with differential and integral experiments, Phys. Rev. C 104, 034611 (2021).
[37] S. Goriely, S. Hilaire, A. J. Koning, M. Sin, and R. Capote, Towards a prediction of fission cross sections on the basis of microscopic nuclear inputs, Phys. Rev. C 79, 024612 (2009).
[38] P. Möller and J. Randrup, Calculated fission-fragment yield systematics in the region 74 ≤ Z ≤ 94 and 90 ≤ N ≤ 150, Phys. Rev. C 91, 044316 (2015).
[39] M. Verriere and M. R. Mumpower, Improvements to the macroscopic-microscopic approach of nuclear fission, Phys. Rev. C 103, 034617 (2021).
[40] P. Jachimowicz, M. Kowal, and J. Skalski, Properties of heaviest nuclei with 98 ≤ Z ≤ 126 and 134 ≤ N ≤ 192, Atomic Data and Nuclear Data Tables 138, 101393 (2021).
[41] N. Dubray and D. Regnier, Numerical search of discontinuities in self-consistent potential energy surfaces, Computer Physics Communications 183, 2035 (2012).
|
http://arxiv.org/abs/2307.02549v1
|
20230705180005
|
From the tabletop to the Big Bang: Analogue vacuum decay from vacuum initial conditions
|
[
"Alexander C. Jenkins",
"Jonathan Braden",
"Hiranya V. Peiris",
"Andrew Pontzen",
"Matthew C. Johnson",
"Silke Weinfurtner"
] |
cond-mat.quant-gas
|
[
"cond-mat.quant-gas",
"astro-ph.CO",
"gr-qc",
"hep-ph",
"hep-th"
] |
alex.jenkins@ucl.ac.uk
Department of Physics and Astronomy, University College London, London WC1E 6BT, UK
Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, ON, M5S 3H8, Canada
Department of Physics and Astronomy, University College London, London WC1E 6BT, UK
The Oskar Klein Centre for Cosmoparticle Physics, Department of Physics, Stockholm University, AlbaNova, Stockholm, SE-106 91, Sweden
Department of Physics and Astronomy, University College London, London WC1E 6BT, UK
Department of Physics and Astronomy, York University, Toronto, ON, M3J 1P3, Canada
Perimeter Institute for Theoretical Physics, 31 Caroline St. N, Waterloo, ON, N2L 2Y5, Canada
School of Mathematical Sciences, University of Nottingham, University Park, Nottingham NG7 2RD, UK
Centre for the Mathematics and Theoretical Physics of Quantum Non-Equilibrium Systems, University of Nottingham, Nottingham NG7 2RD, UK
Ultracold atomic gases can undergo phase transitions that mimic relativistic vacuum decay, allowing us to empirically test early-Universe physics in tabletop experiments.
We investigate the physics of these analogue systems, going beyond previous analyses of the classical equations of motion to study quantum fluctuations in the cold-atom false vacuum.
We show that the fluctuation spectrum of this vacuum state agrees with the usual relativistic result in the regime where the classical analogy holds, providing further evidence for the suitability of these systems for studying vacuum decay.
Using a suite of semiclassical lattice simulations, we simulate bubble nucleation from this analogue vacuum state in a 1D homonuclear potassium-41 mixture, finding qualitative agreement with instanton predictions.
We identify realistic parameters for this system that will allow us to study vacuum decay with current experimental capabilities, including a prescription for efficiently scanning over decay rates, and show that this setup will probe the quantum (rather than thermal) decay regime at temperatures T≲10 nK.
Our results help lay the groundwork for using upcoming cold-atom experiments as a new probe of nonperturbative early-Universe physics.
From the tabletop to the Big Bang:
Analogue vacuum decay from vacuum initial conditions
Alexander C. Jenkins, Jonathan Braden, Hiranya V. Peiris, Andrew Pontzen, Matthew C. Johnson, and Silke Weinfurtner
August 1, 2023
=======================================================================================
§ INTRODUCTION
The decay of metastable `false vacuum' states via the nucleation of `true vacuum' bubbles (as illustrated in Fig. <ref>) is a quintessential problem in nonperturbative quantum field theory <cit.>.
This process has a broad range of applications in cosmology, including eternal inflation and multiverse scenarios <cit.>, Higgs vacuum decay <cit.>, and the production of strong gravitational-wave signals <cit.> (and potentially primordial black holes <cit.>) from bubble collisions.
These gravitational-wave signals in particular are a candidate source for the gravitational-wave background recently detected by various pulsar timing arrays, including NANOGrav <cit.> and the European Pulsar Timing Array <cit.>, and are also one of the key observational targets of the planned space-based interferometer LISA <cit.>.
Since the pioneering early work of Coleman and collaborators <cit.>, false vacuum decay (FVD) has primarily been studied using instanton methods, in which one obtains a semiclassical approximation of the decay rate by solving the equations of motion in imaginary time.
These methods are made tractable by imposing O(d+1) symmetry on the resulting Euclidean `bounce' solutions which describe the bubble nucleation event (with d the number of spatial dimensions).
However, this symmetry assumption is broken on dynamical and/or inhomogeneous spacetimes that are relevant to cosmology, and precludes us from studying interesting and observationally important issues such as correlations between multiple bubbles <cit.>.
Furthermore, additional assumptions are required to interpret the instanton in real time; specifically, it is assumed that a critical bubble `appears' at some instant in time.
This prevents any study of the precursors of such an event in terms of the real-time dynamics of the field.
Recently, a promising new method for addressing these questions has emerged: the use of ultracold atomic Bose gases as quantum simulators of relativistic bubble nucleation <cit.>.
These systems exhibit coherent quantum behavior on scales that can be directly imaged in the laboratory, and can be manipulated into mimicking the dynamics of a Klein-Gordon field in a potential with true and false vacua.
Such atomic FVD simulators are now under active development by several groups, including the Quantum Simulators for Fundamental Physics (QSimFP) consortium,[<https://qsimfp.org/>] offering the near-future prospect of studying vacuum decay in real time and in a controlled and reproducible manner, giving us new insights that complement those from long-established Euclidean techniques.
Previous analyses of these analogues have focused on their classical equations of motion, showing that these are equivalent to the Klein-Gordon equation for a relativistic field in the appropriate limit.
Here we go further by calculating the spectrum of quantum vacuum fluctuations in the analogue false vacuum state.
This fluctuation spectrum is a crucial input for lattice simulations of the cold-atom system, in which the fluctuations are represented as classical stochastic variables in order to obtain a semiclassical approximation of the decay process.
These simulations are our main theoretical tool for guiding the development of the analogue experiments, and ultimately for helping us interpret the experimental data.
After describing our proposed analogue system in Sec. <ref>, we show in Sec. <ref> that the false-vacuum fluctuation spectrum matches that of a Klein-Gordon field on scales where the classical analogy holds.
This result was not guaranteed by the existing classical analogy, and thus provides further evidence for the suitability of this system as a relativistic analogue.
After an exhaustive search of the cold-atom literature, we identify a homonuclear potassium-41 mixture as the most promising experimental setup, and in Sec. <ref> we present a realistic set of parameters for a 1D realization of this system.
This includes a protocol for scanning over parameters that allows us to vary the decay rate while keeping all other scales in the effective relativistic theory fixed.
In Sec. <ref> we then carry out a suite of semiclassical lattice simulations of this system, using our results for the fluctuation spectrum to generate realistic vacuum initial conditions.
We verify that the field undergoes exponential decay as expected, and that the decay rate scales exponentially with the amplitude of the initial fluctuations, in qualitative agreement with the instanton prediction.
Finally, in Sec. <ref> we explore the impact of finite temperatures on the decay rate, and argue that current experimental technologies can probe the regime of quantum rather than thermal decays.
We summarize our results in Sec. <ref>, and discuss avenues for further development of this work.
§ THE ANALOGUE FALSE VACUUM
In this section we review the essential details of the analogue FVD system we are interested in, as first proposed by <cit.>, and subsequently studied in Refs. <cit.>.
This system consists of a two-component Bose-Einstein condensate (BEC), with each atomic species described by a complex bosonic field[Here and throughout, objects with hats denote quantum operators.]
ψ̂_i(*x)=√(n̂_i(*x)) exp(iϕ̂_i(*x)), i=1,2.
The operators ψ̂_i^†(*x) and ψ̂_i(*x) create and annihilate atoms of species i in the position eigenstate |*x⟩, respectively.
Their amplitudes therefore determine the local number density of each species, n̂_i(*x)=ψ̂_i^†ψ̂_i, while their phases ϕ̂_i(*x) encode coherent wavelike behavior and interference effects.
The dynamics of these fields are described by the Hamiltonian
Ĥ_0=∫_V d*x∑_iψ̂_i^†(-ħ^2∇^2/2m+1/2gψ̂_i^†ψ̂_i)ψ̂_i,
which consists of a nonrelativistic kinetic term for each species, as well as a quartic self-interaction of strength g>0 due to repulsive s-wave contact interactions between atoms.
This interaction sets the characteristic energy scale of the BEC, E=gn, where n=⟨n̂⟩ is the mean number density.
The integral in Eq. (<ref>) is over a finite spatial volume V that is either one- or two-dimensional, with the BEC confined tightly along the remaining dimensions, rendering them non-dynamical.
We have specialized here to the case where both species have equal masses (m_1=m_2=m), equal intra-species scattering (g_11=g_22=g), and zero inter-species scattering (g_12=g_21=0).
These conditions can be realized in practice by letting our two species be two different hyperfine states of the same atomic isotope, and applying an external magnetic field at the zero-crossing of a Feshbach resonance in the inter-species channel g_12 <cit.>.
Another possibility is to trap a single atomic species in a double-well potential; the atoms in each of the two wells then act as the two species, and only scatter with other atoms in the same well <cit.>.
The Hamiltonian (<ref>) excludes the usual external potential term that describes the trapping of the atoms along the extended direction(s).
Our proposed experiment uses a `box trap' which effectively approximates an infinite-well potential <cit.>, so that the given Hamiltonian is accurate inside the trap.
This is desirable for simulating relativistic physics as it maintains translation invariance in the interior region, with a near-homogeneous density profile.
The density rapidly tapers to zero at the walls of the trap on a characteristic scale called the healing length,
ξ=ħ/√(mgn).
For the experimental parameters we consider here, this scale is smaller than the size of the BEC by a factor of 500 (see Table <ref>).
We therefore treat the field as homogeneous with periodic boundaries throughout this paper, as in most previous studies of this system <cit.>.
(This setup is also a reasonable approximation to a 1D ring trap, as used in e.g. Ref. <cit.>.)
Extending our results below to include the box trap and corresponding boundary conditions requires a calculation of the full spectrum of inhomogeneous eigenmodes, which has yet to be carried out for this system.
We will present this calculation and its impact on bubble nucleation in an upcoming companion paper.
The two condensed species are coupled via a linear interaction term in the Hamiltonian,
Ĥ =Ĥ_0-ħν(t)Ĥ_int, Ĥ_int=∫_V d*x(ψ̂_1^†ψ̂_2+ψ̂_2^†ψ̂_1),
which allows atoms of species 1 to convert into species 2 (and vice-versa) at a rate ν that undergoes rapid modulation at some angular frequency ω,
ħν(t)=ϵ gn+λħω√(ϵ/2)cos(ω t),
where ϵ≪1 and λ∼1 are dimensionless constants.
In the setup with two hyperfine states, this coupling is introduced by applying a modulated radio-frequency (RF) field; in the double-well case, ν instead represents the tunnelling rate between the two wells.
We integrate out the fast oscillation to obtain an effective Hamiltonian Ĥ_eff that is valid on timescales much longer than ω^-1 <cit.>.
At linear order in ϵ, we find
Ĥ_eff =Ĥ_0+ϵ gnĤ_int
+1/4ϵλ^2g∫_V d*x(4ψ̂_1^†ψ̂_2^†ψ̂_1ψ̂_2-ψ̂_1^†ψ̂_1^†ψ̂_1ψ̂_1
-ψ̂_2^†ψ̂_2^†ψ̂_2ψ̂_2-ψ̂_1^†ψ̂_1^†ψ̂_2ψ̂_2-ψ̂_2^†ψ̂_2^†ψ̂_1ψ̂_1).
This time-averaged picture fails to capture the presence of Floquet instabilities induced in modes whose natural frequencies are close to the driving frequency ω <cit.>.
One expects that setting ω sufficiently large (i.e., making the wavelengths of the unstable modes sufficiently short) will cause these instabilities to be quenched by damping effects on small scales; however, the exact nature of this process is still an open question.
The relevance of the effective Hamiltonian (<ref>) for quantum simulation comes from considering the field
φ̂≡φ_0(ϕ̂_1-ϕ̂_2), φ_0≡√(ħ^2n/2m),
which is proportional to the relative phase between the two species.[
The normalization here is arbitrary at the classical level, but is chosen such that the quantum fluctuations in φ̂ around the false vacuum exactly match those of the corresponding canonically-normalized Klein-Gordon field.]
On scales much larger than the healing length, the classical equation of motion for this degree of freedom is identical to that of a relativistic scalar field,
(c^-2∂_t^2-∇^2)φ+U'(φ)=0,
where we identify the `speed of light' as[In reality this is the sound speed of phonons in the BEC, roughly eleven orders of magnitude smaller than the speed of light in vacuum.]
c=√(gn/m),
and the potential as
U(φ)=4ϵφ_0^2m^2c^2/ħ^2[1-cos(φ/φ_0)+1/2λ^2sin^2(φ/φ_0)].
This potential, as shown in Fig. <ref>, contains a series of true vacua at φ_tv/φ_0=2πj, j∈ℤ, and for λ>1, a series of false vacua at φ_fv/φ_0=(2j+1)π.
These correspond to the two atomic species being in phase and in antiphase, respectively; the linear coupling means that there is an additional energy density of order ϵ gn^2 associated with being in antiphase, while the modulation generates an effective potential barrier that makes this state metastable.
Increasing the amplitude of the modulation via λ creates a deeper potential barrier, and increases the mass of fluctuations in the false vacuum,
m_fv^2=ħ^2/c^2 U''(φ_fv)=4ϵ m^2(λ^2-1).
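As a concrete check of this mapping, the potential and the mass formula can be evaluated numerically. The following minimal Python sketch (our own illustration, in units ħ=m=c=φ_0=1, with the values ϵ=2.5×10^-3 and λ=√2 used in the simulations of Sec. <ref>) compares a finite-difference curvature at the false vacuum against the mass formula above:

import numpy as np

# Analogue potential U(phi) in units hbar = m = c = phi_0 = 1.
eps, lam = 2.5e-3, np.sqrt(2)

def U(phi):
    return 4 * eps * (1 - np.cos(phi) + 0.5 * lam**2 * np.sin(phi)**2)

# False vacuum at phi = pi: numerical curvature vs m_fv^2 = 4*eps*(lam^2 - 1).
h = 1e-4
curvature = (U(np.pi + h) - 2 * U(np.pi) + U(np.pi - h)) / h**2
print(curvature, 4 * eps * (lam**2 - 1))  # both ~0.01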
§ QUANTUM FLUCTUATIONS IN THE FALSE VACUUM
We have reviewed the known result that, on scales much larger than the healing length, an atomic Bose-Bose mixture can reproduce the classical equation of motion of a Klein-Gordon field (<ref>) with a false vacuum potential (<ref>).
However, vacuum decay is inherently quantum-mechanical, so it is important to test whether these systems are also analogous at the quantum level.
Here we perform this test by calculating the power spectrum of fluctuations in the false vacuum state |Ω_fv⟩,
𝒫_φ(k)≡⟨φ̂_*k^†φ̂_*k⟩_Ω_fv,
where φ̂_*k are the Fourier modes[
Note that we have discrete Fourier frequencies, as we are working in a finite volume V.
Our conventions for the Fourier transform and its inverse are f_*k=V^-1/2∫_V d*x e^-i*k·*x f(*x) and f(*x)=V^-1/2∑_*k e^i*k·*x f_*k.] of the effective relativistic field (<ref>).
Below we find that, on scales much larger than the healing length (ξ k≪1), this spectrum asymptotically matches that of the corresponding Klein-Gordon field,
𝒫_φ(k)=ħ c^2/2ω_k,
with corrections suppressed by powers of (ξ k)^2 and ϵ.
To derive this result, we adopt the standard mean-field approximation <cit.> in which each atomic field consists of small quantum fluctuations around a highly-occupied classical condensate wavefunction,
ψ̂_1=√(n)e^-iμ t/ħ+δψ̂_1, ψ̂_2=-(√(n)e^-iμ t/ħ+δψ̂_2).
The factor (-1) here reflects the fact that the two species are in antiphase in the false-vacuum state.
We expand around a homogeneous mean-field wavefunction, whose phase evolves at a rate set by the chemical potential, μ=(1+ϵ)gn.
To study the dynamics of the fluctuations, it is convenient to remove this time evolution with a canonical transformation ψ̂_i→e^iμ t/ħψ̂_i.
This modifies the Hamiltonian to
K̂_eff=Ĥ_eff-∑_i∫_V d*x μψ̂_i^†ψ̂_i.
Expanding this new Hamiltonian to quadratic order in the fluctuations, we find that it can be written as
K̂_eff =K_0+K̂_++K̂_-,
K̂_± =gn/2∑_*k≠0{[ξ^2k^2+2-(2∓2)ϵ]δψ̂_*k^±†δψ̂_*k^±
+[1-(1∓1)ϵλ^2](δψ̂_*k^±†δψ̂_-*k^±†+δψ̂_*k^±δψ̂_-*k^±)},
with K_0 a constant energy offset associated with the mean-field solution, and separate terms K̂_± governing the total and relative fluctuation modes,[
Note that this is only true because we have truncated the Hamiltonian at quadratic order in the fluctuations.
At higher order there are interactions between the total and relative modes, and these can in principle spoil the relativistic analogy if the fluctuations are sufficiently large.]
δψ̂_*k^±≡1/√(V)∫_V d*x e^-i*k·*x 1/√(2)(δψ̂_1±δψ̂_2),
with the normalization chosen such that the modes obey canonical bosonic commutation relations.
The field we are interested in is defined solely in terms of the relative modes, and at linear order in the fluctuations is given by
φ̂_*k=iħ c/2√(gn)(δψ̂_-*k^-†-δψ̂_*k^-).
We can therefore ignore the dynamics of the total modes for now, given that they are decoupled in the linear regime.
(We return to them in Sec. <ref>, as they play a significant role in the presence of thermal noise.)
To calculate the power spectrum (<ref>), we must determine the eigenstates of the relative Hamiltonian K̂_- and identify |Ω_fv⟩ as the lowest-lying of these states.[
Restricting ourselves to linear fluctuations around the false vacuum means that the lower-lying states near the true vacuum are not in the spectrum.]
We can do this by writing the Hamiltonian in diagonalized form,
K̂_-=∑_*k≠0ħω_kâ_*k^†â_*k,
so that each normal mode, described by the ladder operators â_*k, â_*k^†, acts as an independent harmonic oscillator.
The false vacuum |Ω_fv⟩ is then identified as the state annihilated by â_*k for all wavenumbers *k.
In Appendix <ref> we identify the appropriate Bogoliubov transformation relating the normal modes to the relative atomic field modes δψ̂_*k^-, δψ̂_*k^-†.
The energy associated with excitations of the normal modes is given by
ħω_k=gn/2√(ξ^2k^2+4ϵ(λ^2-1))√(ξ^2k^2+4-4ϵ(λ^2+1)),
which, on scales much larger than the healing length (ξ k≪1), reduces to the dispersion relation of a Klein-Gordon field of the same false vacuum mass (<ref>) we found in our classical analysis of the equations of motion,
ω_k^2≃ c^2k^2+c^4m_fv^2/ħ^2.
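This limit can be checked directly from the dispersion relation above: for ξ k≪1 the second square root is 2+O(ϵ,ξ^2k^2), and using ξ^2=ħ^2/mgn and c^2=gn/m,

\hbar\omega_k \simeq gn\sqrt{\xi^2k^2+4\epsilon(\lambda^2-1)}
\quad\Longrightarrow\quad
\omega_k^2 \simeq \frac{(gn)^2\xi^2}{\hbar^2}\,k^2 + \frac{4\epsilon(gn)^2(\lambda^2-1)}{\hbar^2}
= c^2k^2 + \frac{c^4}{\hbar^2}\,m_{\rm fv}^2,

where the last equality uses the expression for m_fv given in Sec. <ref>.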
We can directly evaluate the power spectrum (<ref>) by writing the Fourier modes φ̂_*k in terms of the normal modes â_*k and using standard ladder operator identities.
In the same IR limit as before we find Eq. (<ref>), which is exactly what we expect for the corresponding Klein-Gordon field.
We already know from our classical understanding of the system that the relativistic analogy breaks down on scales much smaller than the healing length (ξ k≫1).
In this limit, we recover the usual nonrelativistic dispersion relation,
ħω_k≃ħ^2k^2/2m,
and a white-noise fluctuation spectrum,
𝒫_φ(k)≃ħ^2/4m=const.
The latter represents an excess of power at small scales compared to the Klein-Gordon spectrum (<ref>), due to nonrelativistic, high-momentum excitations of individual atoms.
The interpolation between this regime and the Klein-Gordon-like results on large scales is shown in Fig. <ref>.
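The full interpolating spectrum has a simple closed form: combining the definition of φ̂_*k with the Bogoliubov coefficients of Appendix <ref> gives 𝒫_φ(k)=(ħ^2c^2/4gn)(u_k+v_k)^2=(ħ c^2/8ω_k)[ξ^2k^2+4-4ϵ(λ^2+1)], a rearrangement of the expressions above rather than an independent result. A minimal numerical sketch (code units ħ=m=c=ξ=gn=1):

import numpy as np

eps, lam = 2.5e-3, np.sqrt(2)  # illustrative values

def omega(k):
    # Bogoliubov dispersion in code units hbar = m = c = xi = gn = 1
    return 0.5 * np.sqrt((k**2 + 4*eps*(lam**2 - 1))
                         * (k**2 + 4 - 4*eps*(lam**2 + 1)))

def P_phi(k):
    # P_phi(k) = (xi^2 k^2 + 4 - 4 eps (lam^2 + 1)) / (8 omega_k)
    return (k**2 + 4 - 4*eps*(lam**2 + 1)) / (8 * omega(k))

for k in (0.01, 0.1, 1.0, 10.0):
    # full spectrum vs its IR (Klein-Gordon) and UV (white-noise) limits
    print(k, P_phi(k), 1 / (2 * omega(k)), 0.25)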
§ EXPERIMENTAL PARAMETERS
Our results for the false vacuum power spectrum are a general feature of the modulated Bose-Bose mixture system described in Sec. <ref>, regardless of its precise experimental realization.
In this section, we describe a concrete set of experimental parameters (summarized in Table <ref>) that is achievable with current cold-atom experiments, and which will allow us to probe the physics of relativistic vacuum decay.
As highlighted in Sec. <ref>, among the key requirements for our system are that both atomic species have equal masses (m_1=m_2), equal intra-species scattering lengths (a_11=a_22), and negligible inter-species scattering (a_12=0).[
These 3D scattering lengths a_ij determine the corresponding 1D interaction strength, g_ij=2ħω_⊥a_ij, where ω_⊥ is the frequency of the transverse harmonic potential, V_trap=1/2mω_⊥^2(y^2+z^2).]
It is easy to select equal masses by using two hyperfine states of the same atomic isotope (i.e., a homonuclear mixture).
However, the conditions on the scattering lengths are more difficult to arrange.
It is possible to set a_12 to zero by applying an external magnetic field at the zero-crossing of a Feshbach resonance <cit.>, but there is then no further freedom to tune a_11 and a_22 in order to set them equal to each other.
Fortunately, as pointed out by <cit.>, ^41K (potassium-41) possesses a Feshbach resonance between the |F,m_F⟩=|1,0⟩ and |1,+1⟩ states with a zero crossing at B≃675.256 G, where the condition a_11≃ a_22 is realized naturally with a precision of ∼1%.
We have performed an exhaustive search of other known Feshbach resonances in homonuclear mixtures of stable bosonic isotopes of the alkali metals (^7Li <cit.>, ^23Na <cit.>, ^39K <cit.>, ^85Rb <cit.>, ^87Rb <cit.>, and ^133Cs <cit.>), and have not found any other inter-state resonances where the condition a_11≃ a_22 is satisfied at the zero-crossing of a_12.
The ^41K resonance specified above is therefore the optimal candidate system for simulating relativistic vacuum decay.
The main technical challenge with this setup is that the resonance has a width of only 155.8 mG <cit.>, necessitating a very high level of magnetic field stability in order to stay at the zero-crossing of a_12, as illustrated in Fig. <ref>.
Nonetheless, this level of stability is achievable with current experimental technologies.
In particular, <cit.> have recently demonstrated magnetic field stability at the level of ∼2 ppm in a cold-atom experiment.
For our proposed system this corresponds to a_12≤0.53 a_0 (where a_0=5.292×10^-11 m is the Bohr radius).
This is less than 1% of the mean intra-state scattering length a=60.24 a_0, which should be sufficient precision for our purposes.
Given the 3D scattering properties of the two atomic species, the behaviour of the effective 1D system is set by the number of condensed atoms, the size of the trap along the elongated and transverse directions, and the strength and modulation of the applied radio-frequency (RF) field.
We have explored this parameter space with the goal of maximizing the natural condensate energy scale gn relative to the thermal energies k_BT that can be achieved in current experiments, as this will allow us to investigate the regime of quantum (rather than thermal) decays.
At the same time, we have ensured that this energy scale is not so high that transverse modes of energy ħω_⊥ are excited, where ω_⊥ is the frequency of the harmonic trapping potential in the transverse directions.
(We plan to test this explicitly in future work with 3D simulations that resolve the transverse directions.)
In order to facilitate comparisons with instanton predictions (which are challenging to calibrate at any single point in parameter space), it is useful to vary the system parameters to scan over a broad range of bubble nucleation rates.
The instanton decay rate per unit volume in this model scales as
log(Γ/V)∝-ϵ^(1-d)/2 n̅,
where n̅≡ξ^d n is the dimensionless condensate number density (i.e., the number of atoms in a region of size equal to the healing length).
In d=1 dimensions the dependence on ϵ vanishes, and the decay rate is thus primarily controlled by n̅.
This parameter also sets the size of fluctuations in the field relative to the characteristic value φ_0,
σ_φ^2/φ_0^2∝1/n̅.
We find that it is possible to vary n̅ while keeping the energy scale gn (and therefore all other dimensionless parameters of the system) fixed, by simultaneously increasing the number of atoms of each species N and decreasing the transverse trapping frequency ω_⊥.
This allows us to perform a controlled test of how the bubble nucleation rate scales with the amplitude of the initial fluctuations.
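A sketch of the resulting scanning rule (placeholder numbers, not the values of Table <ref>): since g∝ω_⊥ and n∝N in the quasi-1D limit, multiplying N by a factor f while dividing ω_⊥ by f leaves gn (and hence c and ξ) unchanged and scales n̅=ξ n linearly with f.

def scan_point(N0, omega_perp0, nbar0, f):
    # Rescale (N, omega_perp) at fixed gn; nbar scales linearly with f.
    return {"N": f * N0, "omega_perp": omega_perp0 / f, "nbar": f * nbar0}

base_N, base_wperp, base_nbar = 1.0e5, 2.0e3, 10.0  # placeholder values only
for f in (1, 2, 3, 4, 5):
    print(scan_point(base_N, base_wperp, base_nbar, f))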
Our proposed parameters are summarized in Table <ref>.
We vary n̅ by a factor of 5, which is sufficient to see a significant variation in the decay rate.
As we show in Sec. <ref> below, the energy scale gn here is large enough that the quantum-decay regime is readily accessible to current or near-future experiments.
§ LATTICE SIMULATIONS
Part of the value of our results on the vacuum power spectrum in Sec. <ref> is that they can be used as an input for semiclassical lattice simulations of the cold-atom system.
These simulations are a powerful tool for exploring the real-time dynamics of bubble nucleation, and are a crucial ingredient for developing and interpreting analogue FVD experiments.
The key idea is to encode the nonclassical nature of the problem in the initial conditions of the simulation, by drawing an ensemble of random field realizations that sample vacuum fluctuations around the homogeneous false vacuum state <cit.>.
These realizations are then evolved forward by numerically integrating the classical equations of motion.
This approach is widely used in the context of atomic physics and quantum optics (where it is referred to as the `truncated Wigner approximation' <cit.>), and also underpins cosmological lattice simulations of inflation and preheating <cit.> as well as vacuum decay <cit.>.
It is common for lattice simulations of cold-atom systems to initialize the fluctuations using a white-noise power spectrum (<ref>) <cit.>, particularly in situations where the processes of interest are insensitive to the precise form of this spectrum.
Bubble nucleation, however, is extremely sensitive to the statistics of the initial fluctuations, as different initial states can decay at exponentially different rates.
(For example, we see from Eq. (<ref>) that there is an exponential sensitivity on n̅.)
The vacuum fluctuation spectrum derived above is therefore a crucial ingredient for realistic simulations of analogue vacuum decay.
In this section we use a suite of lattice simulations to study bubble nucleation from vacuum initial conditions in the 1D cold-atom system described in Sec. <ref>.
We extract decay rates for different values of the fluctuation-amplitude parameter n̅, and verify that the rates depend exponentially on this parameter, in agreement with the scaling found in the instanton approach.
We perform the same test with white-noise initial conditions, and find decay rates that are globally larger than in the vacuum case.
This confirms that vacuum decay in semiclassical lattice simulations is indeed sensitive to the statistics of the initial fluctuations, and that for the cold-atom system these must be correctly specified using Bogoliubov theory, as we have done here.
We additionally investigate the conservation of the Noether charges of the effective Klein-Gordon theory in our simulations of the cold-atom system, as these are a useful diagnostic for the faithfulness of the relativistic analogy.
§.§ Code setup
We use a Fourier pseudospectral code with an eighth-order symplectic time-stepping algorithm <cit.> (see Appendix <ref> for details), and work in units where the atomic mass m, healing length ξ, and sound speed c are set to unity (which is equivalent to also setting ħ=gn=1).
Our simulations work at the level of the time-dependent Hamiltonian (<ref>), resolving the modulation of the inter-species coupling so that we can test for the emergence of the effective time-averaged dynamics.
We simulate a system with the experimental parameters specified in Table <ref>.
In code units, this setup is realized by evolving a periodic region of volume V/ξ^d=L/ξ=500, and setting ϵ=2.5×10^-3 and λ=√(2) so that the false vacuum mass is m_fv/m=0.1.
We additionally set the dimensionless modulation frequency to ωξ/c=680, which is sufficiently large that the Floquet instability bands are above the Nyquist frequency for all of our simulations.
This allows us to model the expected experimental situation where these instabilities are damped by the small-scale dynamics of the BEC, and do not affect the evolution of the IR modes; the actual experimental value of ω is unimportant so long as the Floquet instabilities are quenched.
Our simulations use 2048 lattice sites and a timestep that is 1/16 times the modulation period 2π/ω, giving spatial and temporal resolution of Δx/ξ≈0.244 and cΔt/ξ≈5.77×10^-4, respectively.
§.§ Bubble nucleation rates
We extract decay rates for the analogue system using ensembles of 1024 simulations, with each simulation corresponding approximately to a different possible classical history drawn from the path integral describing the full evolution of the many-body quantum state.
We initialize each simulation as the homogeneous false vacuum φ=πφ_0 plus independent random draws of the vacuum fluctuations δφ.
We treat the latter as a zero-mean Gaussian random field with a power spectrum that (as shown in Fig. <ref>) interpolates between a relativistic spectrum in the IR and a white-noise spectrum in the UV.
We have checked that this power spectrum remains statistically stationary over time by averaging over the ensemble of non-decayed trajectories, effectively testing that our initial state is indeed an eigenstate of the Hamiltonian near the false vacuum.
As well as the relative phase, we also initialize the relative density and the total phase and density using random draws from their corresponding vacuum spectra.
It is crucial to initialize all four fields in this way to correctly capture the vacuum state.
For example, neglecting the relative density fluctuations corresponds to initializing the effective Klein-Gordon field with zero momentum everywhere, when in fact this momentum field should also contain vacuum fluctuations.
In practice, we initialize the total and relative atomic field modes in our code, which is equivalent at the linear level to working in terms of the density and phase fields.
We find that it is crucial that the positive- and negative-momentum Fourier modes ψ_*k and ψ_-*k are not treated as statistically independent random variables.
Instead, one must draw the positive- and negative-momentum normal modes a_*k, a_-*k independently, and then obtain the Fourier modes of the atomic fields via a reverse Bogoliubov transformation.
This induces a nontrivial correlation between ψ_*k and ψ_-*k that appropriately captures the quantum statistics of the false vacuum state.
Failing to include these correlations in the initial conditions puts the system into an excited state that nucleates bubbles much more rapidly than the false vacuum state, and even more rapidly than the white-noise state.
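A minimal sketch of the correct draw for a single ±*k pair, using the transformation of Appendix <ref> (the inverse map δψ_*k=i(u_k a_*k-v_k a_-*k^*) is our rearrangement of that transformation):

import numpy as np

rng = np.random.default_rng(1)

def draw_relative_pair(u_k, v_k):
    # Normal modes: independent complex Gaussians,
    # variance 1/4 in each quadrature (zero-temperature vacuum).
    a_k  = rng.normal(0.0, 0.5) + 1j * rng.normal(0.0, 0.5)
    a_mk = rng.normal(0.0, 0.5) + 1j * rng.normal(0.0, 0.5)
    # Inverse Bogoliubov transform: mixes +k and -k, inducing the
    # correlations between psi_k and psi_-k required by the vacuum state.
    psi_k  = 1j * (u_k * a_k  - v_k * np.conjugate(a_mk))
    psi_mk = 1j * (u_k * a_mk - v_k * np.conjugate(a_k))
    return psi_k, psi_mk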
We truncate all of the fluctuation spectra at a maximum wavenumber of ξ k_UV≈3.22, which is a factor of 4 smaller than the Nyquist frequency of our simulations, ξ k_Nyq=πξ/Δx≈12.9.
Evidence from pure Klein-Gordon lattice simulations <cit.> suggests that changing this cutoff modifies the decay rate in a way that can be absorbed into a renormalization of the bare model parameters.
We leave a detailed investigation of this effect in the analogue system for future work, and here use a fixed UV cutoff for all of our simulations.
The amplitude of the fluctuations relative to the homogeneous value of the field is set by the dimensionless number density n̅, which we scan over in the experimentally accessible range 10≤n̅≤50.
We measure a decay rate from each ensemble of simulations by counting the number of non-decayed trajectories as a function of time, dividing by the total number of simulations to obtain an estimate of the time-dependent survival probability.
In doing so, it is necessary to choose a definition for when an individual realization has decayed.
We do this by setting a threshold on the volume average of the cosine of the relative phase, ⟨cos(φ/φ_0)⟩_V.
This quantity fluctuates near to -1 in the false vacuum, and grows rapidly after a bubble nucleates before saturating near +1 once the transition has percolated, as illustrated in Fig. <ref>.
We compute the decay threshold separately for each ensemble as the lowest possible value of ⟨cos(φ/φ_0)⟩_V for which no more than 1% of the simulations cross back below the threshold in any given timestep.[
A more obvious choice would be to allow zero downward crossings through the threshold, as this would capture the notion that vacuum decay is an irreversible process.
However, we find that enforcing zero downward crossings makes the algorithm easily confused by small fluctuations in ⟨cos(φ/φ_0)⟩_V, and results in a choice for the threshold that is far too conservative.
Manual inspection of the results with a 1% allowance for downward crossings confirms that this accurately captures the common-sense notion of when the field has decayed (e.g. see Fig. <ref>).
We have checked that varying this allowed fraction between 0.5% and 2% does not significantly impact our measured decay rates.]
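In code, this threshold search amounts to the following sketch (array shapes and names are our own illustration):

import numpy as np

def decay_threshold(cosbar, candidates, allowed=0.01):
    # cosbar: (n_sims, n_steps) array of <cos(phi/phi_0)>_V histories.
    # Return the lowest trial threshold such that no more than `allowed`
    # of the simulations cross back below it in any single timestep.
    n_sims = cosbar.shape[0]
    for thr in np.sort(candidates):
        above = cosbar >= thr
        downward = above[:, :-1] & ~above[:, 1:]   # per-sim, per-step crossings
        if downward.sum(axis=0).max() <= allowed * n_sims:
            return thr
    return None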
Our resulting estimates of the survival probability are shown in the left panel of Fig. <ref>.
As expected, the ensembles with smaller n̅, and therefore larger initial fluctuations, decay on much shorter timescales.
After an initial transient, each ensemble reaches a regime of exponential decay,
P(survive)∼exp(-Γ t).
We fit a decay rate Γ to each curve, restricting the fit to survival probabilities between 50% and 1% in order to exclude the non-exponential regime at early times and noisy small-number statistics at late times, respectively.
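The fit itself is a straight line in log space; a minimal sketch:

import numpy as np

def fit_decay_rate(t, survival, hi=0.50, lo=0.01):
    # Fit P(survive) ~ exp(-Gamma t) over the window hi >= P >= lo.
    mask = (survival <= hi) & (survival >= lo)
    slope, _ = np.polyfit(t[mask], np.log(survival[mask]), 1)
    return -slope  # Gamma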
The resulting decay rates (in dimensionless units, and measured per unit volume) are shown in blue in the right panel of Fig. <ref>, and are well-described by an exponential scaling with respect to n̅, in qualitative agreement with the instanton prediction (<ref>).
It is important to note, however, that the proportionality constant linking log(Γ/V) and n̅ does not agree with the instanton prediction; our simulations decay significantly faster than predicted in the instanton approach.
This same behaviour has been observed in pure Klein-Gordon lattice simulations <cit.>, and is an expected consequence of performing instanton calculations using the bare lattice parameters, rather than the renormalized theory <cit.>.
It is also worth pointing out that our instanton calculations are based on the effective Klein-Gordon theory, rather than the full analogue system, and therefore neglect effects such as the excess small-scale power identified in Sec. <ref>.
We plan to explore these issues in the context of the analogue system in future work.
As well as our simulations using vacuum initial conditions, we carry out a suite of simulations using white-noise initial conditions.
This corresponds to the nonrelativistic UV limit (<ref>) of the full power spectrum derived from Bogoliubov theory, and matches the prescription used by several previous studies of vacuum decay in cold-atom analogue systems <cit.>.
The resulting decay rates are shown in red in the right panel of Fig. <ref>.
These are fit only to survival probabilities between 20% and 1%, as we find that it takes longer for these initial states to settle into a period of steady exponential decay.
We see that, while the resulting decay rates also follow the expected exponential scaling with n̅, they are globally larger for white-noise initial conditions than for the vacuum case, despite the fact that the actual amplitudes of the fluctuations are smaller in the IR in the white-noise case (compare the blue and purple curves in Fig. <ref>).
We interpret this as evidence that white-noise fluctuations correspond to an excited state of the analogue system, and thus lead to faster decays, on average, than the vacuum initial conditions we have derived here.
Note that this does not imply that the white-noise spectrum is somehow unphysical.
In fact, such a spectrum is the vacuum state for an alternative system with zero atomic scattering, g=0.
The enhanced decay rates shown in red in Fig. <ref> can thus be interpreted as being due to a mismatch between the Hamiltonian describing the initial conditions and the Hamiltonian describing the time evolution.
§.§ Verifying Klein-Gordon behavior
While our results for the decay rates are in broad agreement with our expectations for relativistic vacuum decay, we can also directly test whether the relative phase field φ is indeed analogous to a relativistic Klein-Gordon field by computing the Noether charges for the corresponding Klein-Gordon theory,
H =∫_V d*x[1/2c^2φ̇^2+1/2(∇φ)^2+U(φ)],
*P =-∫_V d*x 1/c φ̇∇φ.
Since the Noether charges for the underlying nonrelativistic Hamiltonian are conserved with extremely high precision in our simulations (see Appendix <ref>), any non-conservation of the Klein-Gordon charges (<ref>) should be interpreted as being due to limitations of the relativistic analogy, rather than numerical errors.
In Fig. <ref> we show the violation of these charges for a series of simulations with a broad range of dimensionless number densities n̅.
We find that violations in the Klein-Gordon energy and momentum are roughly stationary over time, and reach a regime where they scale like ΔH/H∼n̅^-1 and Δ*P/*P∼n̅^-1/2 respectively, so that in the limit of small fluctuations the analogy holds with high accuracy.
However, in the experimentally accessible regime n̅∈[10,50] that we are interested in here, the violation is on the order of at least a few percent in the energy.
In the momentum, the relative errors reach order unity, although this reflects the fact that the total momentum of the field averaged over the entire volume V is intrinsically close to zero.
While we do not believe these errors invalidate the mapping onto the Klein-Gordon theory, further improvements in the accuracy of the analogue may be possible.
Specifically, so far we have ignored the backreaction of the fluctuations onto the mean-field dynamics, which would modify this mapping in a way that could plausibly be absorbed into a renormalization of the parameters of the effective Klein-Gordon theory.
(Similar effects have recently been investigated in the case of pure Klein-Gordon theory <cit.>.)
This would be consistent with our finding that the level of charge violation scales with the fluctuation amplitudes.
We conjecture that accounting for these corrections and identifying the appropriate Klein-Gordon parameters could substantially improve the level of charge violation over that shown in Fig. <ref>, and also bring our decay rates into closer quantitative agreement with the instanton prediction.
We plan to explore this in detail in future work.
§ FINITE-TEMPERATURE EFFECTS
Thus far we have considered only zero-temperature states of the analogue system.
However, any realistic experiment will inevitably be at some finite temperature, and will therefore contain thermal as well as quantum fluctuations.
These are potentially a nuisance factor in studying quantum vacuum decay, giving an excess contribution to the decay rate and altering the phenomenology of the nucleated bubbles <cit.>.
It is therefore valuable to estimate the temperature threshold at which these deviations from the zero-temperature case become significant, as this can then guide the development and interpretation of the analogue experiments.
In the framework of the truncated Wigner approximation, we can model the thermal bath by including additional fluctuation power in our initial conditions.[
Other prescriptions and theoretical frameworks exist, including modeling the effects of the thermal bath by adding a stochastic driving term to the Gross-Pitaevskii equations <cit.>.
However, our treatment here allows us to model quantum and thermal fluctuations in a simple and conceptually unified way.
A detailed comparison against alternative simulation methods would be interesting, but is beyond our present scope.]
This amounts to replacing vacuum expectation values with traces over a thermal density matrix, resulting in a scale-dependent enhancement to the relative phase power spectrum,
𝒫_φ(k;T)=coth(ħω_k/2k_BT) 𝒫_φ(k;0),
as well as for the relative density, and the total phase and density.
(Here coth x=(1+e^-2x)/(1-e^-2x) is the hyperbolic cotangent function.)
It is convenient to work in terms of the dimensionless temperature,
T̅=k_BT/gn≈T/(182 nK),
where the numerical value corresponds to our particular choice of experimental parameters (c.f. Table <ref>).
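As a numerical illustration of the thermal enhancement (code units ħ=gn=1, so that ħω_k/2k_BT=ω_k/2T̅):

import numpy as np

def thermal_power(P0, omega_k, Tbar):
    # P(k; T) = coth(hbar omega_k / 2 k_B T) * P(k; 0)
    return P0 / np.tanh(omega_k / (2 * Tbar))

# A soft (massless) total mode with omega_k ~ 0.01 at Tbar = 0.06 is
# enhanced by coth(0.01/0.12) ~ 12, while a mode at the healing scale
# (omega_k ~ 1) is essentially unaffected.
print(thermal_power(1.0, 0.01, 0.06), thermal_power(1.0, 1.0, 0.06))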
Fig. <ref> shows the survival probability in ensembles of simulations at various temperatures, with n̅=40.
For dimensionless temperatures T̅≲0.06 we see that, notwithstanding some differences in the initial non-exponential transient phase, the exponential decay rates are all consistent with the zero-temperature result.
At higher temperatures, rather than finding an enhanced rate of relativistic decays, we instead find that the exponential decay model becomes an increasingly poor fit to the empirical survival probabilities.
We interpret this finding as indicating the breakdown of the relativistic analogy at high temperatures, and conjecture that this breakdown is due to the impact of thermal noise on the total phonon modes.
In contrast to the relative modes, which have an effective mass m_fv due to the potential barrier around the false vacuum, the total modes have a massless dispersion relation ω_k≃ ck in the IR, allowing them to become excited to very large amplitudes by the thermal bath, as illustrated in Fig. <ref>.
The coupling between the total and relative modes then becomes significant, and spoils the effective relativistic dynamics of the relative modes.
As evidence for this interpretation, we note that the T̅≲0.06 threshold determined empirically from our simulations is just below the theoretically-predicted threshold at which the total modes of this system should lose phase coherence, T̅_ϕ=n̅/L̅=0.08 <cit.>.
Our results show that dimensionless temperatures of T̅≲0.06 should give us access to a setting closely resembling the zero-temperature dynamics of the analogue vacuum decay process.
This translates into physical temperatures of T≲10.9 nK for our proposed parameters.
Note that our interpretation in terms of the phase coherence temperature T̅_ϕ=n̅/L̅ implies that this threshold should scale proportionally with the fluctuation-amplitude parameter n̅, so that the T≲10.9 nK benchmark should be viewed as a minimal requirement, with lower temperatures giving us access to vacuum decay rates over a broader range of parameter space.
This benchmark is readily accessible with current experimental setups, which routinely reach temperatures on the order of a few nK, and have even recorded temperatures as low as tens of pK <cit.>.
§ SUMMARY AND OUTLOOK
Quantum analogue experiments present a powerful new tool for understanding relativistic vacuum decay.
Here we have carried out a detailed study of one such proposed experimental setup, which uses a rapidly modulated coupling between two atomic Bose-Einstein condensates to engineer a metastable false vacuum state for the relative phase.
We have derived the spectrum of quantum fluctuations around this state, and have shown that this spectrum asymptotically matches that of the effective Klein-Gordon field in the IR.
As well as providing further evidence for the suitability of the cold-atom analogue for studying relativistic physics, this vacuum fluctuation spectrum is also a crucial input for semiclassical lattice simulations of this system.
By carrying out a suite of such simulations, we have confirmed the key theoretical expectations for the analogue false vacuum: that it undergoes exponential decay, at a rate that is exponentially sensitive to the amplitude of the vacuum fluctuations.
We have also shown that using an alternative fluctuation spectrum — in this case, white noise, which has been used in several previous studies of this system — leads to an enhanced decay rate compared to the pseudorelativistic vacuum fluctuations, as this corresponds to putting the system in an excited initial state.
In carrying out these simulations, we have identified a realistic set of parameters that will allow us to study vacuum decay with current experimental capabilities.
This includes a protocol for scanning over fluctuation amplitudes, and thus decay rates, while keeping all other natural scales of the system fixed, enabling detailed and controlled experimental studies of the decay rate.
As well as the zero-temperature fluctuation spectrum, we have derived the enhancement of the fluctuation power due to thermal noise at finite temperature.
We find that, so long as the system is below a given temperature threshold (which we argue is set by the coupling between the total and relative phase degrees of freedom), the decay rate extracted from our simulations is consistent with that at zero temperature.
For our proposed parameters, this threshold lies well within reach of current experiments, meaning that we should be able to empirically test the physics of quantum bubble nucleation in the near future.
Our results here rely on several simplifying assumptions, which we plan to relax in future work.
In particular, we have treated the BEC system as periodic, neglecting boundary effects due to the external trapping potential.
In a forthcoming companion paper, we will generalize our Bogoliubov analysis to derive the inhomogeneous vacuum fluctuations in a box trap, and investigate the impact of these boundary effects on the bubble nucleation rate.
We have also neglected in our calculations the backreaction of the fluctuations onto the mean-field dynamics of the BEC, and corresponding renormalization of the bare parameters of the effective relativistic theory.
Incorporating these effects should allow for a more precise understanding of the validity of the relativistic analogy, improve the initialization and interpretation of our lattice simulations, and enable more detailed comparisons with instanton predictions.
These developments will enable the first experimental tests of relativistic vacuum decay.
We thank Tom Billam, Nishant Dogra, Christoph Eigen, Zoran Hadzibabic, Konstantinos Konstantinou, Ian Moss, and Dalila Pîrvu for valuable discussions.
This work was supported by the Science and Technology Facilities Council through the UKRI Quantum Technologies for Fundamental Physics Programme [grant number ST/T005904/1], and was partly enabled by the UCL Cosmoparticle Initiative.
This work used computing equipment funded by the Research Capital Investment Fund (RCIF) provided by UKRI, and partially funded by the UCL Cosmoparticle Initiative.
This work used facilities provided by the UCL Cosmoparticle Initiative.
The work of JB was partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).
JB was supported in part by the Simons Modern Inflationary Cosmology program.
MCJ is supported by the National Science and Engineering Research Council through a Discovery grant.
SW acknowledges support provided by the Leverhulme Research Leadership Award (RL2019-020), the Royal Society University Research Fellowship (UF120112, RF\ERE\210198) and the Royal Society Enhancements Awards and Grants (RGF\EA\180286, RGF\EA\181015), and partial support by the Science and Technology Facilities Council (Theory Consolidated Grant ST/P000703/1).
This research was supported in part by Perimeter Institute for Theoretical Physics.
Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities.
The data that support the findings of this study are available from the corresponding author, ACJ, upon reasonable request.
§ BOGOLIUBOV ANALYSIS
The Hamiltonian (<ref>) is diagonalized by applying a Bogoliubov transformation to the atomic field modes,
â_*k=-i(u_kδψ̂^-_*k+v_kδψ̂^-†_-*k),
where the coefficients are given by
u_k^2 =1/2[gn(ξ^2k^2+2-4ϵ)/2ħω_k+1],
v_k^2 =1/2[gn(ξ^2k^2+2-4ϵ)/2ħω_k-1],
with ω_k the dispersion relation given in Eq. (<ref>).
Since the condition u_k^2-v_k^2=1 is satisfied, one can verify that these normal modes obey the standard bosonic commutation relations,
[â_*k,â^†_*k']=δ_*k,*k', [â_*k,â_*k']=[â^†_*k,â^†_*k']=0.
This, combined with the diagonalized Hamiltonian (<ref>), allows us to interpret â_*k and â^†_*k as ladder operators for a set of independent harmonic oscillators, one for each normal mode.
The false vacuum |Ω_fv⟩ is then naturally defined as the ground state of these oscillators.
Inserting the normal modes into Eq. (<ref>), we find that the Fourier modes of the relative phase can be written as
φ̂_*k=ħ c/2√(gn)(u_k+v_k)(â_*k+â_-*k^†).
In the IR (ξ k≪1), this corresponds exactly to the equivalent expression for a canonically-normalized Klein-Gordon field <cit.>,
φ̂_*k≃√(ħ c^2/2ω_k)(â_*k+â_-*k^†),
which automatically guarantees that all expectation values will match those of the Klein-Gordon case in this regime, including the power spectrum (<ref>).
To simulate white-noise initial conditions, we simply replace the coefficients in Eq. (<ref>) with u_k=1 and v_k=0.
In our lattice simulations, we represent the normal modes â_*k as classical stochastic variables a_*k, with expectation values defined by symmetrizing over classically-equivalent operator orderings; e.g.,
⟨|a_*k|^2⟩=1/2⟨â^†_*kâ_*k+â_*kâ^†_*k⟩_Ω_fv=1/2.
A simple calculation then shows that each a_*k is an independent draw from a circularly-symmetric complex Gaussian distribution, with real and imaginary parts each having variance 1/4.
In the finite-temperature case, this variance is enhanced by a factor of coth(ħω_k/2k_BT).
Notice that, while a_*k and a_-*k are statistically independent, the Bogoliubov transformation mixes positive and negative momenta together so that ψ_*k and ψ_-*k are not independent.
Initializing ψ_*k and ψ_-*k independently leads to nontrivial correlations between a_*k and a_-*k, and therefore fails to correctly capture the statistics of the false vacuum state.
§ NUMERICAL METHODS AND CONVERGENCE TESTS
Our code solves the classical equations of motion for the atomic fields *ψ(*x,t)=(ψ_1,ψ_2)^𝖳, corresponding to the time-modulated Hamiltonian (<ref>),
iħ∂_t*ψ=𝒪*ψ,
where we define the differential operator,
𝒪(*x,t) =𝒪_lin+𝒪_nlin,
𝒪_lin(*x,t) =-ħ^2∇^2/2m-ħν(t)([ 0 1; 1 0 ])+μ_lin,
𝒪_nlin(*x,t) =g([ |ψ_1(*x,t)|^2 0; 0 |ψ_2(*x,t)|^2 ])+μ_nlin,
which we have split into a linear and a nonlinear piece.
Each piece has its own chemical potential, which can be chosen for convenience — e.g., to minimize sinusoidal oscillations in the homogeneous mode of the total phase — as these have no effect on the relative phase φ.
Evolution under each of these operators individually can be solved exactly; the nonlinear piece conserves the amplitude of each field and simply performs a local phase rotation,
*ψ(*x,t)=exp[-i(t-t_0)𝒪_nlin(*x,t_0)/ħ]*ψ(*x,t_0),
while the linear piece can be solved by going to Fourier space,
*ψ(*x,t) =ℱ^-1_*k→*x{exp[-i(t-t_0)/ħ(ħ^2k^2/2m+μ_lin)]
×([ cos R(t,t_0) i sin R(t,t_0); i sin R(t,t_0) cos R(t,t_0) ])ℱ_*x→*k*ψ(*x,t_0)},
R(t,t_0) =ϵ gn(t-t_0)/ħ+λ√(ϵ/2)[sin(ω t)-sin(ω t_0)],
where ℱ_*x→*k represents a Fourier transform, and ℱ^-1_*k→*x its inverse.
(These are implemented numerically as fast Fourier transforms, so that in practice Eq. (<ref>) is only exact under the assumption that the fields are band-limited with maximum wavenumber less than or equal to the Nyquist frequency on the lattice.)
While there is no exact solution for the evolution under 𝒪=𝒪_lin+𝒪_nlin from generic initial data, we can approximate this full evolution by chaining together a series of short steps with each of the individual operators,
*ψ(*x, t_0+Δt)=e^-ia_1𝒪_linΔt/ħ e^-ib_1𝒪_nlinΔt/ħ×⋯
×e^-ia_k𝒪_linΔt/ħ e^-ib_k𝒪_nlinΔt/ħ*ψ(*x,t_0)+O(Δt^(n+1)),
where the dimensionless coefficients a_i, b_i, (i=1,…,k) are chosen such that the integrator is exact to order n in the small timestep Δt.
Integrators of this form are symplectic, in the sense that they exactly conserve phase space volume.
We implement an efficient realization of this integrator from <cit.>, which uses k=16 steps and is accurate to order n=8.
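For orientation, the structure of such a splitting can be illustrated with the familiar second-order Strang composition (a half linear step on either side of a full nonlinear step); this is a toy stand-in for the eighth-order scheme, whose coefficients we do not reproduce here, and it simplifies the coupling to a constant ν rather than integrating the modulation as in R(t,t_0):

import numpy as np

def strang_step(psi, dt, k2, nu, g, hbar=1.0, m=1.0):
    # One 2nd-order split step for a two-component field psi of shape (2, N);
    # k2 holds the squared lattice wavenumbers (chemical potentials omitted).
    def linear(psi, dt):
        f = np.fft.fft(psi, axis=1)
        f = f * np.exp(-1j * dt * hbar * k2 / (2 * m))   # kinetic phase
        R = nu * dt                                      # coupling rotation angle
        f0, f1 = f[0].copy(), f[1].copy()
        f[0] = np.cos(R) * f0 + 1j * np.sin(R) * f1
        f[1] = 1j * np.sin(R) * f0 + np.cos(R) * f1
        return np.fft.ifft(f, axis=1)

    psi = linear(psi, dt / 2)
    # Nonlinear piece: exact local phase rotation, |psi_i(x)| is conserved.
    psi = psi * np.exp(-1j * dt * g * np.abs(psi)**2 / hbar)
    return linear(psi, dt / 2)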
In Fig. <ref> we show convergence tests of our code for increasing spatial and temporal resolution, measuring numerical errors in terms of pointwise differences in the cosine of the relative phase field, cos(φ/φ_0).
For the level of resolution used in our simulations in Secs. <ref> and <ref>, we see that the maximum error is on the order of ∼10^-7 prior to bubble nucleation, and at most ∼10^-5 even long after bubble nucleation.
This indicates that our simulations are numerically converged, even in the highly dynamical nonlinear regime.
We also test our code by checking for violations in conservation of the Noether charges associated with the cold-atom Hamiltonian (<ref>),
N =∫_V d*x∑_i=1,2|ψ_i|^2,
*P =∫_V d*x∑_i=1,2 iħ/2(ψ_i∇ψ_i^*-ψ_i^*∇ψ_i),
which correspond to the total number of atoms and the total momentum of the system, respectively <cit.>.
(Note that the total energy is not exactly conserved, due to the explicit time-dependence of the RF modulation term in the Hamiltonian.)
As shown in Fig. <ref>, both charges are conserved to the level of a few parts per billion in simulations at our fiducial resolution.
Constraining the cosmic-ray mass composition by measuring the shower length with SKA
S. Buitink, A. Corstanje, J. Bhavani, M. Desmet, H. Falcke, B. M. Hare, J. R. Hörandel, T. Huege, N. Karasthatis, G. K. Krampah, P. Mitra, K. Mulrey, A. Nelles, K. Nivedita, H. Pandya, J. P. Rachen, O. Scholten, S. Thoudam, G. Trinh, S. ter Veen
July 6, 2023
=======================================================================================
§ INTRODUCTION
The low-frequency part of the Square Kilometre Array <cit.> will have roughly 60,000 antennas within an area of one square kilometer. The antennas are omni-directional and have a large bandwidth of 50-350 MHz. This makes the SKA a unique site for radio detection of air showers. While the antenna density of LOFAR <cit.> is large within the circular antenna fields, there are also large gaps between those stations. At the SKA, on the other hand, the radio footprint will be homogeneously sampled with thousands of antennas. Such extremely high-resolution measurements offer new reconstruction possibilities that can contribute to cosmic-ray source identification.
The size of the SKA determines the upper limit on the cosmic-ray energy at ∼ 10^18 eV. At lower energies, the strength of the radio signal is the limiting factor. The design and antenna density of SKA allow for a considerable gain in sensitivity by using beamforming. We estimate that the lower energy range will lie around 10^16 eV. This part of the cosmic-ray spectrum, between the knee and ankle, is very complex. It is likely to contain the transition from Galactic to extra-galactic origin. Moreover, it may contain a secondary Galactic component consisting of cosmic rays that are re-accelerated at the Galactic termination shock or that gain their energy in the strongly magnetized shocks of Wolf-Rayet supernovae <cit.>.
Determining the cosmic-ray mass composition is key to understanding the astrophysics between the knee and ankle. This needs to be measured with high accuracy as some models predict transitions between elements of similar masses. For example, in scenarios featuring helium-rich Wolf-Rayet stars, the transition from Galactic to extra-galactic flux is marked by a change from a helium to proton dominated flux <cit.>. The observational challenge is made even larger by the uncertainties in hadronic interaction models. Available models predict different relations between the primary cosmic-ray mass and the average X_max of the showers they will generate.
The extremely high antenna density of SKA allows reconstruction of air showers in unprecedented detail. From Monte Carlo studies <cit.> it is already clear that traditional radio reconstruction methods will yield a precision of 6 – 8 g/cm^2 on X_max and 3% on primary energy. However, the true potential of cosmic-ray science with SKA can only be understood by developing new analysis ideas that allow us to extract more information from the air showers. Here, we argue that reconstruction of the shower length L is a powerful way to determine the proton fraction of the cosmic-ray flux.
§ PARAMETRIZATION OF THE LONGITUDINAL DEVELOPMENT
Several parametrizations exist to describe the longitudinal development of air showers. Here we use <cit.>:
N(X) = exp(-(X - X_max)/RL) (1 + R/L(X - X_max))^(1/R^2),
where N is the number of particles in the shower, and X is the traversed depth in the atmosphere in g/cm^2. The parameter L scales with the width of the profile and is a measure for the length of the shower, while R is a measure for the asymmetry in the profile shape before and after the shower maximum.
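A direct implementation of Eqn. <ref> reads as follows (a sketch; the parameter values are illustrative only, of the order of those discussed later in the text):

import numpy as np

def N_profile(X, Xmax, L, R):
    # Longitudinal profile of Eqn. above, normalized so that N(Xmax) = 1.
    x = np.asarray(X, dtype=float) - Xmax
    return (1.0 + R * x / L)**(1.0 / R**2) * np.exp(-x / (R * L))

X = np.linspace(300.0, 1100.0, 5)        # slant depth grid in g/cm^2
print(N_profile(X, Xmax=700.0, L=220.0, R=0.25))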
The Pierre Auger Observatory has performed a measurement of the average L and R for a large set of showers based on the fluorescence technique <cit.>. There are now several indications that it is possible to use radio observations to reconstruct L for individual showers.
The first indication comes from LOFAR data. To reconstruct X_max, the radio emission profiles of a large set of simulated showers are fitted to the data <cit.>. Figure <ref> shows that the reduced χ^2 of the fit critically depends on both X_max and L. Reconstructing both parameters would require a much larger set of simulations, which are very computationally expensive. Future analysis possibilities will critically depend on the development of faster simulation techniques, such as the template synthesis method <cit.>.
The second indication comes from CORSIKA/CoREAS simulations for the SKA <cit.>. First results show that it is indeed possible to simultaneously reconstruct X_max and additional information on the longitudinal development of the shower <cit.>. In particular, it has been shown that the most sensitive parameter after X_max is a linear combination of L and R. It should be noted that this result is from a first exploration of the possibilities and there is ample room for improvement: the optimal reconstruction techniques have yet to be determined.
In the remainder of this contribution, we will investigate the impact of measurements of L on constraining the mass composition. We assume that SKA will be able to reconstruct L with a resolution of ∼ 10 g/cm^2 <cit.>.
§ CONEX SIMULATIONS
We have set up CONEX simulations at three energies (10^16 eV, 10^17 eV, 10^18 eV) for three hadronic interaction models (EPOS-LHC, QGSJETII-04, and Sibyll 2.3d), and five elements (proton, helium, carbon, silicon, iron). For each combination of parameters we have generated 2500 showers. Their longitudinal profiles are fitted with Eqn. <ref> to obtain X_max, L, and R.
In most cases the parametrization fits the longitudinal profile very well, but there are exceptions. Occasionally, showers have a double-bump structure. These structures occur more often for proton and helium showers at the lowest energies. Even then, however, the fraction of showers with a bad fit is below 0.5%.
Figure <ref> shows the average values of X_max, R, L, and the number of muons at shower maximum, for different elements and hadronic interaction models. Whereas most parameters feature a monotonic trend from light to heavy elements, L increases sharply from proton to helium and then drops steadily for further increasing primary mass. A combined measurement of L and any of the other parameters allows us to isolate protons from all other elements.
Figure <ref> shows the average X_max and L for all generated elements, energies, and hadronic interaction models. For each combination of energy and interaction model, the elements form a triangle with protons on one corner, and all other elements on the opposite side. For some triangles, this side is curved, but the protons never lie on the same curve as the other elements.
The average L predicted by the three hadronic interaction models diverges towards higher energies. In particular, Sibyll 2.3d produces higher values for L than the other two models. The real cosmic-ray flux is some mixture of elements and would therefore have an average X_max and L that falls somewhere inside the triangle. Observations of X_max and L can thus be used to test the validity of models, even with incomplete knowledge of the mass composition. SKA can in this way provide a unique measurement to probe the hadronic processes in the shower.
§ ROBUST MEASUREMENTS OF L
While L can provide important information about the mass of the primary, it remains to be proven that it can be reconstructed robustly enough to use it in a mass composition analysis. The differences in average L between different elements are of the order of a few g/cm^2. We do not have a solid prediction yet for the expected systematic uncertainty on reconstructing L with SKA, but the level of accuracy needed seems very challenging. However, there are more robust features in the distribution of L.
Figure <ref> shows histograms of L for 10^17 eV showers of different primary masses. There is good agreement between the three hadronic interaction models. While the peaks of the distributions of all elements are close together, the high-L tails are very different. Determining the width of the distribution or the shape of the tail therefore gives a much more robust observable that can tolerate higher systematic uncertainties. For illustration purposes, we will here adopt a simple test parameter: the fraction of showers with L > 225 g/cm^2.
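Computing this test parameter from a set of fitted L values is a one-liner; the threshold below is the one adopted in the text.

import numpy as np

def tail_fraction(L_values, threshold=225.0):
    """Fraction of showers with L above the threshold (in g/cm^2)."""
    return float(np.mean(np.asarray(L_values) > threshold))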
§ DETERMINING THE PROTON FRACTION
To test the impact of reconstructing L on the mass composition analysis, we use the CONEX shower library to generate random astrophysical models. Each model consists of the five elements in our set, but the mixing ratios are different. For each model, we randomly select 1500 CONEX showers and add random measurement errors to the X_max and L values of each individual shower. For both parameters, we add a random error following a Gaussian distribution of σ=10 g/cm^2, which is a reasonable choice for SKA <cit.>.
For each model, the average X_max and the fraction of L > 225 g/cm^2 are plotted in the left panel of Fig. <ref>. The color coding indicates the proton fraction in the model. We again recognize the triangle shape. Every model with 0% protons will fall somewhere on the left side of the triangle. The position on this (slightly curved) line is determined by the actual mixing ratios of helium, carbon, silicon, and iron. Models with larger proton fraction will fall on a similar curve, but shifted towards the right. This is seen by the clean trend in color in the Figure.
We can now set up a simple toy analysis to reconstruct the proton fraction. The analysis is based on the simulated positions of a pure proton, helium, and iron sample. First, a line is reconstructed that passes through the pure iron and helium points. Next, we determine the distance D of the pure proton point to this line. Finally, we determine the distance d to the line for each of the generated mock data sets. The reconstructed proton fraction for this data set is then given by d/D. From Fig. <ref> it is clear that intermediate mass elements can be on the left side of the line connecting iron and helium. These are assigned negative values for d, resulting in a negative reconstructed proton fraction.
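A sketch of this toy reconstruction is given below. The inputs p_He, p_Fe, p_p, and point are hypothetical 2D points of (mean X_max, tail fraction of L), and the smearing step applies the assumed 10 g/cm^2 resolution.

import numpy as np

rng = np.random.default_rng(0)

def smear(values, sigma=10.0):
    """Add Gaussian measurement errors (sigma in g/cm^2) to X_max or L values."""
    values = np.asarray(values, dtype=float)
    return values + rng.normal(0.0, sigma, size=values.shape)

def proton_fraction(point, p_He, p_Fe, p_p):
    """Signed distance ratio d/D of `point` to the He-Fe line, normalised so the
    pure-proton point maps to 1; points left of the line come out negative."""
    point, p_He, p_Fe, p_p = map(np.asarray, (point, p_He, p_Fe, p_p))
    line = p_Fe - p_He
    normal = np.array([-line[1], line[0]])        # a normal to the He-Fe line
    return float(np.dot(point - p_He, normal) / np.dot(p_p - p_He, normal))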
The results are shown in the right panel of Fig. <ref>. The proton fraction is retrieved with a precision of ∼ 10%. Since this analysis is completely unoptimized, it is likely that even better results could be achieved. While this result is obtained with a 5-element model, the precision is likely to be similar when including more elements, since all elements other than proton lie on the same curve.
§ CONCLUSION
Radio observation of air showers with the SKA will provide unprecedented detail and precision. Existing reconstruction techniques will not use the observatory to its full potential. Monte Carlo simulations have demonstrated that it will be possible to reconstruct the shower length L for individual showers. Unlike most other shower observables, L does not scale monotonically with the primary mass, but is largest for helium. This feature can be used to design new analysis techniques that separate the proton showers from all other mass components. This is invaluable for understanding the cosmic-ray sources in the Galactic-to-extra-galactic transition region. In addition, measurements of L will provide a unique measurement that can put constraints on hadronic interaction models.
§ ACKNOWLEDGEMENTS
TNGT acknowledges funding from Vietnam National Foundation for Science and Technology Development (NAFOSTED) under grant number 103.01-2019.378. ST acknowledges funding from the Khalifa University Startup grant, project code 8474000237-FSU-2020-13.
Tan:2015 G. Tan et al., “The square kilometre array baseline design v2.0”, 2015 1st URSI Atlantic Radio Science Conference (URSI AT-RASC), pp. 1–1, 2015.
LOFAR M. P. van Haarlem et al., “LOFAR: The Low Frequency Array”, Astronomy and Astrophysics 556 (2013) A2.
Thoudam S. Thoudam et al., “Cosmic-ray energy spectrum and composition up to the ankle: the case for a second Galactic component”, Astron. & Astrophys. 595 (2016) A33.
Arthur A. Corstanje et al., “Prospects for measuring the longitudinal particle distribution of cosmic-ray air showers with SKA”, these proceedings (2022).
Andringa:2011 S. Andringa, R. Conceição, and M. Pimenta, “Mass composition and cross-section from the shape of cosmic ray shower longitudinal profiles”, Astroparticle Physics 34 (2011) 360–367.
Auger_RL A. Aab et al., “Measurement of the average shape of longitudinal profiles at the Pierre Auger Observatory”, JCAP 03 (2019) 018.
Buitink14 S. Buitink et al., “Method for high precision reconstruction of air shower X_max using two-dimensional radio intensity profiles”, Phys. Rev. D 90 (2014) 082003.
template_synthesis M. Desmet et al., “Template synthesis approach for radio emission from extensive air showers”, these proceedings (2022).
CORSIKA D. Heck et al., “CORSIKA: a Monte Carlo code to simulate extensive air showers”, Feb. 1998.
CoREAS T. Huege, M. Ludwig, and C. W. James, “Simulating radio emission from air showers with CoREAS”, ARENA 2012, AIP Conf. Proc. 1535 (2013) 128–132.
http://arxiv.org/abs/2307.02156v1 | 20230705095738 | Perimeter control, autonomous vehicle, and urban spatial structure | ["Takao Dantsuji", "Yuki Takayama"] | eess.SY | ["eess.SY", "cs.SY"]
Takao Dantsuji (corresponding author, Takao.Dantsuji@monash.edu): Institute of Transport Studies, Monash University, Australia, and Institute of Science and Engineering, Kanazawa University, Japan
Yuki Takayama (takayama.y.af@m.titech.ac.jp): Department of Civil and Environmental Engineering, Tokyo Institute of Technology, Japan
This paper examines the effects of hypercongestion mitigation by perimeter control and the introduction of autonomous vehicles on the spatial structures of cities. By incorporating a bathtub model, we develop a land use model where hypercongestion occurs in the downtown area and interacts with land use. We show that hypercongestion mitigation by perimeter control decreases the commuting cost and results in a less dense urban spatial structure. Furthermore, we reveal that the impact of autonomous vehicles depends on the perimeter control implementation. Introduction of autonomous vehicles may increase the commuting cost in the presence of hypercongestion and cause an increase in downtown population in the long-run. This result contradicts that of the standard bottleneck model. When perimeter control is implemented, the introduction of autonomous vehicles decreases the commuting cost and results in a less dense urban spatial structure. These results show that hypercongestion is a key factor that can change urban spatial structures.
hypercongestion, perimeter control, autonomous vehicle, bathtub model, land use model
§ INTRODUCTION
§.§ Background
Traffic demand is highly concentrated during rush hours in urban cities, which causes hypercongestion,[The downward-sloping part of the inverted U-shaped relationship between traffic flow and traffic density.] and decentralizing traffic demand can change the spatial distribution of residents in the long-run. Recent research has shown that downtown areas experience network-wide hypercongestion <cit.> and that the temporal concentration of traffic demand leads to network capacity drop <cit.>. Capacity drop is a phenomenon where throughput (e.g., flow or outflow) decreases with an increase in the number of vehicles circulating in networks.
A number of studies have proposed traffic demand management (TDM) strategies, such as congestion pricing <cit.> and perimeter control <cit.>, to decentralize traffic demand during peak periods for hypercongestion mitigation as capacity drop causes the inefficient use of transportation systems. However, most of them focused on problems under the assumption that commuters do not relocate (i.e., fixed origin–destination demand). That is, they ignored changes in residents' spatial distribution in response to their commuting behavior changes and instead focused on the short-run effects of TDM strategies. Since commuting and land use patterns influence each other substantially <cit.>, the long-run impacts of TDM strategies on land use should be understood to promote efficient and sustainable urban developments.
Models of urban spatial structures can describe the interaction between commuting and residential locations <cit.>. Traditional models that employ static congestion models <cit.> have been extended to incorporate dynamic bottleneck congestion <cit.>. These extended models highlight the significance of the temporal distribution of traffic demand and dynamic congestion phenomena in long-run equilibrium; however, they cannot incorporate hypercongestion, where capacity can drop over time, unlike in bottleneck congestion. A suitable model is yet to be developed to examine the long-run effectiveness of TDM strategies for hypercongestion mitigation. Furthermore, whether hypercongestion is a key factor that can alter urban spatial structures remains an open question in the literature.
In this new era of autonomous vehicles, hypercongestion can play a more important role in TDM strategies. According to <cit.>, autonomous vehicles can enhance network capacity because they safely drive closer to each other than vehicles driven by humans (“network capacity” effect hereafter), and passengers in autonomous vehicles are less concerned about travel times because they no longer need to drive (“VOT effect” hereafter). <cit.> showed that both network capacity and VOT effects decrease the commuting cost in fully automated traffic bottleneck congestion. Furthermore, existing works <cit.> demonstrated that network flow can be enhanced by the network capacity effect. However, the VOT effect on network efficiency has been insufficiently studied because it is not as simple as the network capacity effect. The VOT effect reduces the cost of travel time; however, an increase in the VOT effect is expected to worsen capacity drop due to the temporal concentration of traffic demand. Thus, traffic demand patterns with autonomous vehicles will become increasingly complex in the presence of hypercongestion. A model that systematically analyzes these impacts on the temporal and spatial distributions of traffic demand should be developed to investigate the long-run effects of TDM strategies in this new era of autonomous vehicles properly.
In this paper, by incorporating a bathtub model, we develop a land use model where hypercongestion occurs in the downtown area and interacts with land use. Our findings show that the implementation of perimeter control for hypercongestion mitigation decreases the commuting cost, and results in a less dense urban spatial structure. Furthermore, we find that the use of autonomous vehicles may increase the commuting cost due to the severe capacity drop caused by the temporal concentration of traffic demand, and make cities more compact in the long-run. This result contradicts that of the standard bottleneck model. When perimeter control is implemented, the introduction of autonomous vehicles decreases the commuting cost, and results in a less dense urban spatial structure. These results show that hypercongestion is a key factor that can change urban spatial structures.
§.§ Literature Review
Network-wide hypercongestion, namely Macroscopic Fundamental Diagrams (MFDs) is a powerful tool for describing network-wide traffic dynamics; this approach relates network flow (or trip completion rate) to network density (or accumulation of vehicles). The idea of macroscopic traffic theory was proposed by <cit.>, further investigated by <cit.>, and empirically analyzed by <cit.>. Traffic management approaches based on MFDs have been studied, such as pricing <cit.>, perimeter control <cit.>, route guidance <cit.>, and road space allocation <cit.>. MFDs have also been utilized for other purposes, such as dynamic traffic demand estimation <cit.> and network performance indication <cit.>.
Perimeter control is a successful application of MFDs in TDM strategies as an effective and easy-to-implement tool. It aims to control the entry flow at the perimeter boundary of a target area to maximize the trip completion rate. This approach has been extended to multiregion networks <cit.>, perimeter control with boundary queues <cit.> and route guidance <cit.>, and bimodal transportation systems <cit.>. Considerable effort has been dedicated to perimeter control schemes, but there are no studies that examined the impacts of autonomous vehicles (i.e., VOT and network capacity effects) on their effectiveness. Furthermore, despite the significant influence of perimeter control on traffic demand, most of the studies made the assumption that origin-destination demand is fixed. This hinders a comprehensive understanding of the interplay between urban and transportation developments. A critical gap is the lack of a methodology that connects short-run commuters' decisions (e.g., trip timing) with their residential location choices in the long-run.
Traditional land use models that consider the trade-off between land rent and commuting costs in a monocentric city effectively analyze residential location patterns. Static congestion models <cit.> have been extended to dynamic bottleneck models <cit.> to incorporate the impact of the trip timing decisions of commuters into their residential location choices in the long-run. Several aspects have been incorporated into the standard bottleneck model as extensions, such as an incentive for commuters to spend time at home <cit.>, bimodal transportation systems <cit.>, an open city model <cit.>, tandem bottlenecks <cit.>, a bottleneck with a stochastic location <cit.>, and commuters' heterogeneity <cit.>. These aspects can alter urban spatial structures in the long-run, but whether hypercongestion has the same effect remains an open question in the literature.
The trip timing decisions of commuters in the presence of hypercongestion can be described using bathtub models, namely dynamic user equilibrium models for departure time choices in urban cities with hypercongestion <cit.>. Bathtub models have been extended to trip length heterogeneity <cit.>, cruising-for-parking <cit.>, and bimodal transportation systems <cit.>. However, none of them considered changes in residents' spatial distribution in response to their commuting behavior changes. To the best of our knowledge, our study is the first to systematically analyze a model in which commuters choose their trip timing in the presence of hypercongestion in the short-run and residential location in the long-run.
The remainder of this paper is organized as follows. Section 2 presents the development of the model. In Section 3, we characterize the model equilibrium. Section 4 presents the formulation of the model under perimeter control. The equilibrium conditions under perimeter control are studied in Section 5. Numerical examples are provided in Section 6, and we conclude the paper in Section 7.
§ MODEL
Consider a monocentric city that has downtown and suburban areas. All job opportunities are found in the downtown area, whereas there are residential areas in both the downtown and suburban areas. The downtown area is at the center of the city, and a residential location in the suburban area is indexed by its distance x from the edge of the downtown area (Fig. <ref>). The downtown area and the suburban area at location x have fixed land areas A_d and A_s(x), respectively. We assume that the land is owned by absentee landlords, as is common in the literature. A continuum of N homogeneous commuters have identical preferences, and the numbers of downtown commuters and suburban commuters at location x are denoted by N_d and N_s(x), respectively.
§.§ Hypercongestion and commuting cost
We assume that the downtown area has homogeneous topological characteristics, obtained by proper partitioning approaches <cit.>, and thus exhibits a well-defined relationship between space-mean flow and density. The congestion dynamics in the downtown area are described using a bathtub model, whereas we assume that one can travel at the free-flow speed in the suburban area. To incorporate hypercongestion, we employ the Greenshields model for the space-mean speed as follows.
v(t) = v_f ( 1 - n(t)/n_j),
where v(t) is the space-mean speed at time t, v_f is the free-flow speed, n(t) is the vehicle accumulation in the downtown area at time t and n_j is jam accumulation.
The downtown area is modeled as a system with inflows and outflows whose traffic conditions are governed by bathtub congestion dynamics; the time evolution of the vehicle accumulation, ṅ(t), is
ṅ(t) = I (t) - G(t),
where I(t) is the inflow and G(t) is the outflow at time t. The outflow is formulated using the network exit function (NEF) as follows:
G(t) = n(t) v(t)/L,
where L is the average trip length in the downtown area.
The travel time in the downtown area is assumed to be determined by a single instant of time for tractability <cit.> and calculated as
T(t) ≈L/v(t).
All suburban commuters have identical trip lengths L in the downtown area.
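As an illustration, the accumulation dynamics above can be integrated with a simple forward-Euler step; this is a sketch only, with default parameter values taken from the numerical examples of Section 6 and an inflow profile left as an input.

import numpy as np

def simulate_bathtub(inflow, dt=1.0/60.0, v_f=20.0, n_j=100.0, L=5.0):
    """Forward-Euler integration of ndot(t) = I(t) - G(t) with the Greenshields
    speed v = v_f (1 - n/n_j) and the NEF outflow G = n v / L.
    `inflow` holds inflow rates in veh/h; dt is the time step in hours."""
    n = np.zeros(len(inflow) + 1)
    for k, I in enumerate(inflow):
        v = v_f * (1.0 - n[k] / n_j)
        G = n[k] * v / L
        n[k + 1] = max(n[k] + dt * (I - G), 0.0)
    return n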
We assume that the downtown commuters commute by walking or cycling and the suburban commuters travel by car, which is similar to the modeling of <cit.>. The commuting costs of a downtown commuter and of a suburban commuter at location x who arrive at work at time t (C_d(t) and C_s(x, t), respectively) are expressed as
C_d(t) = α T_d + s(t),
C_s(x, t) = α( T(t) + τ x ) + s(t),
s(t) =
β( t^* - t ) if t ≤ t^*
γ( t - t^* ) if t > t^*
,
where T_d represents the constant travel time of the downtown commuters, T(t) represents the travel time in the downtown area at time t, τ x represents the suburban free-flow travel time of commuters residing at x (i.e., τ = 1 / v_f), t^* represents the desired arrival time, and α, β and γ are marginal costs of travel time, earliness, and lateness. The first and second terms on the RHS of Eq. (<ref>) are the travel time cost and schedule delay cost, respectively. That is, we assume that commuters have “α-β-γ” type preference <cit.>.
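The schedule delay term s(t) is a direct transcription into code (the β and γ defaults are the values used in the numerical examples later):

import numpy as np

def schedule_delay(t, t_star=0.0, beta=10.0, gamma=40.0):
    """alpha-beta-gamma preference: beta per hour of earliness, gamma per hour of lateness."""
    t = np.asarray(t, dtype=float)
    return np.where(t <= t_star, beta * (t_star - t), gamma * (t - t_star))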
We also consider a situation where every vehicle is an autonomous vehicle. Autonomous vehicles are expected to have two effects: the VOT effect and the network capacity effect <cit.>. In this study, the former effect is represented by a reduction in the VOT to ηα, where η is the VOT effect parameter (β/α<η≤ 1). The latter effect is captured by an increase in network capacity through the jam accumulation ξ n_j, where ξ is the network capacity effect parameter (ξ≥1). This effect increases not only the network capacity but also the critical accumulation, which is consistent with the simulation analysis by <cit.>.
§.§ Commuter preferences
We next incorporate the commuting cost in the presence of hypercongestion into commuter preferences. The utility of downtown commuters is expressed as the following Cobb–Douglas utility function:
u_d(z_d(t), a_d(t)) = {z_d(t)}^1-μ{a_d(t)}^μ,
where μ∈ (0,1), z_d(t) represents the consumption of the numéraire good and a_d(t) represents the lot size of housing which downtown commuters consume. The budget constraint is
w = z_d(t) + ( r_d + r_A ) a_d(t) + C_d(t),
where w represents their income, r_A > 0 represents the exogenous agricultural rent, and r_d + r_A represents the downtown land rent. The first-order conditions of the utility maximization problem are
z_d(t) = ( 1 - μ) y_d(t), a_d(t) = μ y_d(t)/r_d + r_A, y_d(t) = w - c_d(t),
where y_d(t) represents the income net of commuting cost earned by a downtown commuter who arrives at work at time t. Substituting this into the utility function, we obtain the indirect utility function as
U_d(y_d(t), r_d+r_A) = (1-μ)^1-μμ^μ y_d(t) ( r_d + r_A )^-μ.
Similarly, we formulate a land use model for the suburban commuters. Their utility and budget constraint are respectively expressed as
u_s(z_s(x,t), a_s(x,t)) = {z_s(x,t)}^1-μ{a_s(x,t)}^μ,
w = z_s(x,t) + ( r_s(x) + r_A ) a_s(x,t) + C_s (x,t),
where z_s(x,t) represents the consumption of the numéraire good, a_s(x,t) represents the lot size of housing consumed by the suburban commuters at x, and r_s(x) + r_A represents the land rent at x. Then, the first-order conditions of the utility maximization problem are
z_s(x,t) = ( 1 - μ) y_s(x,t), a_s(x,t) = μ y_s(x,t)/r_s(x) + r_A, y_s(x,t) = w - c_s(x,t),
where y_s(x,t) represents the income net of commuting cost earned by a commuter who resides at x and arrives at work at time t. The indirect utility function is
U_s(y_s(x,t), r_s(x)+r_A) = (1-μ)^1-μμ^μ y_s(x,t) ( r_s(x) + r_A )^-μ.
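Since the downtown and suburban indirect utilities share the same Cobb-Douglas form, a single helper covers both cases; this is a sketch with μ defaulting to the value used in the numerical examples.

def indirect_utility(y, r, mu=0.25):
    """U = (1-mu)^(1-mu) * mu^mu * y * r^(-mu), where y is the income net of
    commuting cost and r the gross land rent (r_d + r_A or r_s(x) + r_A)."""
    return (1.0 - mu)**(1.0 - mu) * mu**mu * y * r**(-mu)

def lot_size(y, r, mu=0.25):
    """Housing demand a = mu * y / r from the first-order conditions."""
    return mu * y / r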
§ EQUILIBRIUM
§.§ Equilibrium conditions
We assume that the commuters decide their residential locations in the long-run, whereas they choose their trip timings in the short-run, which is similar to the modeling of <cit.>, <cit.>, and <cit.>. That is, the downtown commuters and suburban commuters at location x minimize commuting cost C_d(t) and C_s(x,t), respectively by selecting their arrival time t at work taking their residential locations as given in the short-run. Each commuter chooses a residential location (the downtown area or location x in the suburban area) so as to maximize his/her utility in the long-run.
§.§.§ Short-run equilibrium conditions
In the short-run, the commuters only decide their trip timing, which implies that the numbers of downtown commuters and suburban commuters residing at x are assumed to be fixed. The short-run equilibrium conditions differ according to their residential locations. According to Eq. (<ref>), all downtown commuters arrive at t=t^*, and they incur a constant commuting cost α T_d. Therefore, the short-run equilibrium cost C_d^* is expressed as
C_d^* = α T_d.
As for the suburban commuters, Eq. (<ref>) states that the commuting cost consists of the cost ατ x of the suburban free-flow travel time, which depends only on the residential location x, and the bathtub cost C_s^b(t) owing to the travel time in the downtown area and the schedule delay cost, which depends only on the arrival time t (i.e., C_s^b(t)=α T(t) + s(t)). Thus, these commuters choose their arrival time such that the bathtub cost C_s^b(t) is minimized. That is, no suburban commuter residing at x can reduce their bathtub cost by changing their departure time at the short-run equilibrium. The equilibrium conditions are
C_s^b(t) = C_s^b* if n(t) > 0
C_s^b(t) ≥ C_s^b* if n(t) = 0
∀ t ∈ℝ,
∫_t∈ℝn(t)v(t)/L dt = N_s,
where C_s^b* is the short-run equilibrium bathtub cost of the suburban commuters and N_s is the suburban population.
Condition (<ref>) states that if the bathtub cost at time t is greater than the equilibrium bathtub cost, then no one will arrive at their destination at time t. Condition (<ref>) is the conservation law for traffic demand in the suburban area. From these conditions, the congestion dynamics and the short-run equilibrium bathtub cost (n(t) and C_s^b*, respectively) are endogenously determined at the short-run equilibrium. The short-run equilibrium cost C_s^*(x) is expressed as
C_s^*(x) = C_s^b* + ατ x.
§.§.§ Long-run equilibrium conditions
Each commuter chooses a residential location (the downtown area or a location x in the suburban area) to maximize their indirect utility in the long run.
As the short-run equilibrium cost C_d^* of the downtown commuters is given by Eq. (<ref>), the income net of downtown commuting cost is
y_d = w - α T_d.
Similarly, as the short-run equilibrium bathtub cost depends on the total number of suburban commuters, the income net of suburban commuting cost at x is expressed as
y_s(x) = w - C_s^b* (N_s) - ατ x.
The equilibrium conditions are
U_d(y_d, r_d+r_A) = U^* if N_d > 0
U_d(y_d, r_d+r_A) ≤ U^* if N_d = 0
,
U_s(y_s(x), r_s(x)+r_A) = U^* if N_s(x) > 0
U_s(y_s(x), r_s(x)+r_A) ≤ U^* if N_s(x) = 0
∀ x ∈ℝ_+,
a_d(y_d, r_d+r_A) N_d = A_d if r_d > 0
a_d(y_d, r_d+r_A) N_d ≤ A_d if r_d = 0
,
a_s(y_s(x), r_s(x)+r_A) N_s(x) = A_s(x) if r_s(x) > 0
a_s(y_s(x), r_s(x)+r_A) N_s(x) ≤ A_s(x) if r_s(x) = 0
∀ x ∈ℝ_+,
N_d + ∫_x ∈ℝ_+ N_s(x) dx = N,
where U^* is the long-run equilibrium utility level and a_d(y_d, r_d+r_A) and a_s(y_s(x), r_s(x)+r_A) denote the lot sizes of downtown commuters and suburban commuters who reside at location x, respectively. These lot sizes are given by
a_d(y_d, r_d+r_A) = μ y_d/r_d + r_A , a_s(y_s(x), r_s(x)+r_A) = μ y_s(x)/r_s(x) + r_A.
Conditions (<ref>) and (<ref>) are the equilibrium conditions for the downtown and suburban residential location choices, respectively. These conditions state that no commuter has the incentive to change residential locations unilaterally. Conditions (<ref>) and (<ref>) are the land market clearing conditions, which state that if the total land demand for housing equals the land size, then the land rent is (weakly) larger than the agricultural rent r_A. Condition (<ref>) is the population constraint.
§.§ Equilibrium properties
§.§.§ Short-run equilibrium properties
As the short-run equilibrium conditions (<ref>) coincide with those in the bathtub model of <cit.>, we have the following properties.
The short-run equilibrium has the following properties
* The short-run equilibrium bathtub cost satisfies
F(θ) ≡ N_s - α n_j ( 1/β + 1/γ) ( lnθ + 1/θ - 1 ) = 0,
where θ≡C_s^b* v_f/α L.
* Hypercongestion exists if θ>2.
See Appendix A.
As cases without hypercongestion are outside our interest, we impose the assumption that θ > 2 from Lemma <ref>. All variables except the equilibrium bathtub cost C_s^b* are exogenous because the total number of suburban commuters N_s is given in the short-run. As the function F(θ) is strictly monotone with respect to θ when θ > 2, the equilibrium bathtub cost C_s^b* is uniquely determined.
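Numerically, finding C_s^b* amounts to a one-dimensional root search on θ > 2; the sketch below uses scipy and presumes hypercongestion exists (i.e., F has a root with θ > 2). Default parameter values follow the numerical examples of Section 6.

import numpy as np
from scipy.optimize import brentq

def short_run_bathtub_cost(N_s, alpha=20.0, beta=10.0, gamma=40.0,
                           n_j=100.0, v_f=20.0, L=5.0):
    """Solve F(theta) = 0 of Lemma 1 on theta > 2 and return C_s^b* = theta*alpha*L/v_f."""
    A = alpha * n_j * (1.0 / beta + 1.0 / gamma)
    F = lambda th: N_s - A * (np.log(th) + 1.0 / th - 1.0)
    theta = brentq(F, 2.0 + 1e-9, 1e9)   # F is strictly decreasing on this range
    return theta * alpha * L / v_f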
The replacement of all human driven vehicles with autonomous vehicles influences the equilibrium bathtub cost. From Eq. (<ref>), we obtain the following equation:
d C_s^b*/ d n_j = - (∂θ/∂ C_s^b*)^-1( ∂ F/∂θ)^-1( ∂ F/∂ n_j) < 0.
As (∂θ/∂ C_s^b*)^-1 is positive and both ( ∂ F/∂θ)^-1 and ( ∂ F/∂ n_j) are negative, the network capacity effect (i.e., the increase in n_j) always decreases the equilibrium bathtub cost. This is because the network can process more vehicles due to the capacity expansion, so more commuters arrive at their destinations near their desired arrival times.
We also obtain the following equation from Eq. (<ref>):
d C_s^b*/ dα = - (∂θ/∂ C_s^b*)^-1( ∂ F/∂θ)^-1( ∂ F/∂α) ≷ 0 if ∂ F/∂α≶ 0,
which indicates that the VOT effect (i.e., the reduction in α) may increase or decrease the equilibrium bathtub cost. Differentiating F with respect to α can be split into two terms:
∂ F/∂α = ∂ f(θ, α)/∂α + ∂ f(θ, α)/∂θ∂θ/∂α,
where f(θ, α)=- α n_j ( β^-1 + γ^-1) ( lnθ + θ^-1 - 1 ) is the second term of F(θ).
The first term on the RHS of Eq. (<ref>) is negative and means that the VOT reduction simply decreases the bathtub congestion cost. The second term on the RHS of Eq. (<ref>) is positive and means that a higher temporal concentration of traffic demand near the desired arrival time causes a more severe capacity drop and results in an increase in the bathtub congestion cost. Therefore, whether the equilibrium bathtub cost increases or decreases depends on the magnitudes of the network capacity and VOT effects.
Although the network capacity effect on hypercongestion has been extensively studied <cit.>, the VOT effect on hypercongestion has been insufficiently explored. Most of the existing works investigating the network capacity effect consider autonomous vehicles to be beneficial <cit.>. The only exception is <cit.>, who examined both the effects of network capacity and VOT in the standard bottleneck model and showed that they always decrease the equilibrium travel cost. However, this may not hold in the presence of hypercongestion due to the capacity drop caused by the VOT effect. In other words, hypercongestion can worsen and the travel cost can increase when autonomous vehicles are introduced if the VOT effect is large.
The short-run equilibrium properties are summarized as follows.
The short-run equilibrium has the following properties.
* The short-run equilibrium bathtub cost is uniquely determined.
* The introduction of autonomous vehicles may increase or decrease the short-run equilibrium bathtub cost in the presence of hypercongestion.
§.§.§ Long-run equilibrium properties
We examine the properties of the urban spatial structure at the long-run equilibrium.
At the long-run equilibrium,
d {r_s(x)+r_A}/dx = - ατ r_s(x)+r_A/μ y_s(x) < 0,
d {N_s(x)/A_s(x)}/dx = - ατ(1-μ) (r_s(x)+r_A)/{μ(w - C_s^b* - ατ x)}^2 < 0.
See Appendix B.
Lemma <ref> indicates that the land rent and population density in the suburban area decrease with an increase in location x. These properties are consistent with those in the literature <cit.>. Then, the long-run equilibrium properties are as follows.
At the long-run equilibrium,
* N_d^* and N_s^*(x) are given by
N_d^* =1/μ{y_d}^1-μ/μ( w - C_s^b*)^-1/μ( r_s(0) + r_A ) A_d,
N_s^*(x) = 1/μ{w - C_s^b* - ατ x }^1-μ/μ{ y_d }^-1/μ( r_s(0) + r_A ) A_s(x).
* The city boundary x_f is given by
x_f = w-C_s^b*/ατ( { r_A }^-μ - { r_s(0) + r_A }^-μ){ r_A }^μ.
* r_s(0) is determined from ∫_0^x_f N_s^*(x)dx=N - N_d^*.
See Appendix C.
Eq. (<ref>) states that the suburban population decreases with an increase in the short-run equilibrium bathtub cost. This occurs because a lower commuting cost encourages more commuters to choose the suburban area as their residential location.
Combining Lemma <ref> with Proposition <ref> suggests that the introduction of autonomous vehicles does not necessarily result in an increase in the suburban population (i.e., suburbanization). That is, the downtown population may increase due to the VOT effect, and thus the total car traffic demand decreases in the long-run.
It should be noted that the VOT effect of autonomous vehicles decreases the cost of free-flow travel time in the suburban area, causing suburban commuters to reside farther from the downtown. This leads to a decrease in the population density in the suburban area, as indicated by Eq. (<ref>). This is consistent with the standard results obtained in the literature. Therefore, even if the introduction of autonomous vehicles leads to a decrease in the suburban population, the city can spatially expand outward. In Section 6, we will demonstrate that such a case actually occurs.
The results are summarized as follows.
* The introduction of autonomous vehicles (i.e., the VOT and network capacity effects) may increase the short-run equilibrium bathtub cost, resulting in a decrease in the suburban population (i.e., total car traffic demand) in the long-run.
* Even if the introduction of autonomous vehicles decreases the suburban population, the city may spatially expand outward.
§ PERIMETER CONTROL
§.§ Perimeter control scheme
During perimeter control, the inflow rate to the downtown area is restricted to maintain the maximum throughput in the downtown area. A queue will develop outside the perimeter boundary if the arrival rate at the boundary exceeds the restricted inflow rate to the downtown area.
We derive the critical accumulation n_cr, where the throughput is maximized from Eq. (<ref>).
n_cr = n_j/2.
The aim of perimeter control is to restrict the inflow at the perimeter boundary so as not to exceed the critical accumulation in the downtown area. Therefore, the inflow rate during perimeter control I (t) is determined by
I(t) =
I_p if n(t) = n_cr
A_b(t) if n(t) < n_cr,
where I_p is the inflow rate during perimeter control and A_b(t) is the arrival rate at the perimeter boundary at time t. If the accumulation is below the critical level, then no restriction is implemented; all vehicles at the boundary can enter the downtown area. Once accumulation reaches the critical accumulation, the inflow rate is restricted to I_p. The throughput can be maximized during perimeter control if the inflow rate I(t) is set to the NEF at the critical accumulation. Thus, from Eqs. (<ref>)-(<ref>) and (<ref>), we have
I_p = n_jv_f/4L.
§.§ Queuing dynamics at perimeter boundaries
A queue will develop outside the perimeter boundary if the arrival rate at the perimeter boundary exceeds the inflow rate I_p. We model the queuing dynamics as a point queue and assume a first-arrived-first-in property. Therefore, the waiting time of a commuter who arrives at their destination at time t, T_w(t), is
T_w(t) = q(t)/I_p,
where q(t) is the number of vehicles queued at the perimeter boundary when a commuter who arrives at their destination at time t reaches the boundary.
§.§ Schedule preferences of suburban commuters
The schedule preferences of the downtown commuters are the same as those without perimeter control in Eq. (<ref>). Given the queue dynamics during perimeter control, suburban commuting cost C_s^p(x,t) of a commuter who resides at x and arrives at work at time t is
C_s^p(x, t) = α( T^p(t) + τ x ) + s(t) ,
T^p(t) =
T(t) if t ≤ t_s^p
L/v_f/2 + T_w(t) if t_s^p < t ≤ t_e^p
T(t) if t_e^p < t
,
s(t) =
β( t^* - t ) if t ≤ t^*
γ( t - t^* ) if t > t^*
,
where T^p(t) is the downtown travel time under perimeter control, which includes the travel time in the downtown area and the waiting time at the perimeter boundary.
t_s^p and t_e^p are the start and end times of perimeter control, respectively. The difference between the schedule preferences at the equilibrium without and with perimeter control appears in the downtown travel time under perimeter control T^p(t). The downtown travel time before and after perimeter control implementation (t ≤ t_s^p and t_e^p < t, respectively) are the same as Eq. (<ref>). During perimeter control, the travel time in the downtown area is L/v_f/2 because the NEF is maintained at the maximum. Furthermore, waiting time at the perimeter boundary (Eq. (<ref>)) is incurred in addition to the travel time in the downtown area.
§ EQUILIBRIUM UNDER PERIMETER CONTROL
§.§ Equilibrium conditions
§.§.§ Short-run equilibrium
The short-run equilibrium conditions for the downtown commuters are the same as those without perimeter control (Section <ref>). Therefore,
C_d^p* = α T_d,
where C_d^p* is the short-run equilibrium cost of the downtown commuters.
As at the short-run equilibrium without perimeter control, the suburban commuters choose their arrival times such that the bathtub cost (C_s^bp(t)= α T^p(t) + s(t)) is minimized. Therefore, the equilibrium conditions are
C_s^bp(t) = C_s^bp* if n(t) > 0
C_s^bp(t) ≥ C_s^bp* if n(t) = 0
∀ t ∈ℝ,
n(t) = n_j/2 if q(t) > 0
n(t) ≤n_j/2 if q(t) = 0
∀ t ∈ℝ,
∫_t∈ℝn(t)v(t)/L dt = N_s,
where C_s^bp* is the short-run equilibrium bathtub cost during perimeter control and N_s is the number of suburban commuters under perimeter control.
Conditions (<ref>) and (<ref>) are the same as Conditions (<ref>) and (<ref>), respectively. Condition (<ref>) reflects the restriction of the inflow to the downtown area during perimeter control; accumulation is at the critical level if there is a queue at the perimeter boundary. Otherwise, accumulation is lower than the critical level. Then, the short-run equilibrium cost C_s^p*(x) is
C_s^p*(x) = C_s^bp* + ατ x.
§.§.§ Long-run equilibrium
In the long-run, the difference between the cases with and without perimeter control appears only in the income net of suburban commuting cost.
Specifically, under the perimeter control, the income net of suburban commuting cost at location x is
y^p_s(x) = w - C_s^p*(x).
The long-run equilibrium conditions are thus represented as (<ref>) with the use of (<ref>).
§.§ Equilibrium properties
§.§.§ Short-run equilibrium
Similar to the model of <cit.>, we have the following properties.
The short-run equilibrium under perimeter control has the following properties
* The short-run equilibrium bathtub cost satisfies
F^p(θ^p) ≡ N_s - α n_j ( 1/β + 1/γ) ( θ^p/4 + ln 2 - 1 ) = 0,
where θ^p ≡C_s^pb* v_f/α L.
* A queue develops at the perimeter boundary, and its length increases toward the desired arrival time.
See Appendix D.
All variables except the equilibrium bathtub cost C_s^pb* are exogenous. The equilibrium bathtub cost C_s^pb* is uniquely determined because the function F^p(θ^p) is strictly monotone with respect to θ^p. Eq. (<ref>) is rewritten as
C_s^pb* = βγ/β + γN_s/n_jv_f/4L + 4 α L/v_f( 1 - ln 2 ).
When autonomous vehicles are introduced, as shown in Eq. (<ref>), both the network capacity and VOT effects decrease the short-run equilibrium cost of the suburban commuters under perimeter control (i.e., d C_s^pb* / d n_j <0 and d C_s^pb* / dα >0, respectively). The network capacity effect increases the number of vehicles traveling in the downtown area during perimeter control and the restricted inflow rate at the perimeter boundary. Consequently, more suburban commuters arrive at their destinations near their desired arrival times. This decreases the short-run equilibrium cost of the suburban commuters.
Without perimeter control, the VOT effect may increase the short-run equilibrium cost for the suburban commuters due to capacity drop. With perimeter control, the VOT effect always decreases the short-run equilibrium cost because capacity drop never occurs. Even though the queue length at the perimeter boundary increases due to the VOT effect, the equilibrium cost decreases because the inflow rate is constant regardless of the queue length. This mechanism is the same as that in the standard bottleneck model <cit.> because this situation involves a bottleneck with a fixed capacity (i.e., the value of NEF at critical accumulation) between the downtown and suburban areas.[The first term on the RHS of Eq. (<ref>) represents the bottleneck cost where the bottleneck capacity is n_jv_f/4L, which is identical to the value of NEF at critical accumulation.] This indicates that the impacts of autonomous vehicles in the presence of hypercongestion contradict those in the standard bottleneck model.
Next, we compare equilibria with and without perimeter control to demonstrate the effects of perimeter control on the urban spatial structure. The only difference between the cases with and without perimeter control appears in the short-run equilibrium bathtub cost (C_s^b* and C_s^bp*, respectively). Eqs. (<ref>) and (<ref>) show that the short-run equilibrium bathtub cost is reduced by perimeter control when hypercongestion exists (see Appendix E for the proof). The short-run equilibrium bathtub cost decreases because network capacity drop never occurs under perimeter control. Thus, more commuters arrive at their destinations near their desired arrival times than at user equilibrium where capacity drop occurs. Therefore, although queuing congestion occurs at the perimeter boundary, the short-run equilibrium bathtub cost decreases.
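The closed-form cost under perimeter control can be evaluated directly and compared with the no-control root-finding sketch given earlier; the demand level used in the comment below is arbitrary.

import numpy as np

def perimeter_control_cost(N_s, alpha=20.0, beta=10.0, gamma=40.0,
                           n_j=100.0, v_f=20.0, L=5.0):
    """C_s^pb*: a bottleneck term with capacity n_j v_f/(4L) plus a constant
    in-town travel-time term, as in the closed form above."""
    capacity = n_j * v_f / (4.0 * L)              # NEF at the critical accumulation
    return (beta * gamma / (beta + gamma)) * N_s / capacity \
           + 4.0 * alpha * L / v_f * (1.0 - np.log(2.0))

# e.g. perimeter_control_cost(400.0) < short_run_bathtub_cost(400.0),
# consistent with C_s^b* > C_s^pb* whenever hypercongestion would occur.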
The properties of the short-run equilibrium with perimeter control are summarized as follows.
The short-run equilibrium with perimeter control has the following properties.
* The short-run equilibrium bathtub cost with perimeter control is uniquely determined.
* The introduction of autonomous vehicles decreases the short-run equilibrium bathtub cost under perimeter control.
* Hypercongestion mitigation by perimeter control decreases the short-run equilibrium bathtub cost.
§.§.§ Long-run equilibrium
We investigate the properties of the urban spatial structure at the long-run equilibrium with perimeter control. In the long-run, the difference between cases with and without perimeter control is the income net of commuting cost. Thus, we have the following properties.
At the long-run equilibrium under perimeter control
* N_d^p* and N_s^p*(x) are given by
N_d^p* =1/μ{y_d}^1-μ/μ( w - C_s^bp*)^-1/μ( r_s(0) + r_A ) A_d,
N_s^p*(x) = 1/μ{w - C_s^bp* - ατ x }^1-μ/μ{ y_d }^-1/μ( r_s(0) + r_A ) A_s(x).
* The city boundary x_f is given by
x_f = w-C_s^bp*/ατ( { r_A }^-μ - { r_s(0) + r_A }^-μ){ r_A }^μ.
* r_s(0) is determined from ∫_0^x_f N_s^p*(x)dx=N - N_d^p*.
The urban spatial structure at the long-run equilibrium under perimeter control has the same properties as the cases without perimeter control. Therefore, as the introduction of autonomous vehicles always decreases the short-run equilibrium bathtub cost under perimeter control (Proposition <ref>), the downtown population decreases under perimeter control due to the introduction of autonomous vehicles.
As the short-run equilibrium bathtub cost is reduced by perimeter control (Proposition <ref>), the downtown population decreases in the long-run, which can be seen from Eq. (<ref>). Since the income net of the suburban commuting cost increases, more commuters select the suburban area as their residential locations. Cities are expanded by hypercongestion mitigation.
The results are summarized as follows.
* The introduction of autonomous vehicles under perimeter control decreases the short-run equilibrium bathtub cost, and results in a decrease in the downtown population in the long-run.
* Hypercongestion mitigation by perimeter control decreases the short-run equilibrium cost, and results in a decrease in the downtown population in the long-run.
§ NUMERICAL EXAMPLES
We numerically investigate the equilibrium patterns of the model and the effects of autonomous vehicles and perimeter control. The parameters are as follows: v_f=20 [mph], n_j=100 [veh], α, β, γ = 20, 10, 40 [$/h], t^*=0, L=5 [mile], N=500 [pax], T_d=5 [min], r_A = 30, w=60, μ=0.25, A_d=2, and A_s(x)=1.
We compare the equilibrium patterns of three cases: the base equilibrium (Base case); a case with autonomous vehicles that leads to an increase in the downtown population (Case I); and a case with autonomous vehicles that leads to a decrease in the downtown population (Case II). In addition to the above parameters, we set the VOT effect parameter to η=0.6 (Case I) and 0.95 (Case II), and the network capacity effect parameter to ξ=1.01 (Case I) and 1.11 (Case II).
We investigate the effect of autonomous vehicles without perimeter control. Accumulation between the cases is compared in Fig. <ref>. In all cases, a similar evolution of car accumulation is obtained. More vehicles circulate in the downtown area at the desired arrival time when autonomous vehicles are introduced. The maximum accumulation during the rush hour in Cases I and II is almost the same (90 [veh]). However, the speed at the desired arrival time in Case I is lower than that in Case II, as depicted in Fig. <ref>. This is because the network capacity effect in Case II is higher than that in Case I. Therefore, the network can process more vehicles in Case II for a given accumulation. This can be seen from the MFDs experienced during the rush hour in Fig. <ref>. When the maximum accumulation is reached, the NEF in Case II is 1.8 times higher than that in Case I (see points 1 and 2 in Fig. <ref>). Moreover, the NEF in Case II is higher than that in the Base case (with human-driven vehicles) at their maximum accumulation even though the maximum accumulation in Case II is higher than that in the Base case in the hypercongested regime. This leads to a shorter rush hour and results in a decrease in the short-run equilibrium bathtub cost, as depicted in Fig. <ref>. On the other hand, the higher temporal concentration of traffic demand (Fig. <ref>[Note that the negative values of inflow are omitted from the figure. In reality, such negative values never occur, but this inconsistency is present in the accumulation-based model, as discussed in <cit.>]) causes a more severe capacity drop in Case I compared with that in the Base case, which leads to an increase in the short-run equilibrium bathtub cost. Therefore, the introduction of autonomous vehicles may increase or decrease the short-run equilibrium bathtub cost in the presence of hypercongestion.
Fig. <ref> depicts the short-run equilibrium pattern of the NEF without perimeter control. The figure shows that when the introduction of autonomous vehicles increases the short-run equilibrium bathtub cost (Case I), the higher temporal concentration of traffic demand causes a more severe capacity drop compared with that in the Base case (with human-driven vehicles). This lengthens the rush hour and increases the short-run equilibrium bathtub cost. By contrast, due to the high network capacity effect, the capacity drop is mitigated and the rush hour is shortened in Case II compared with those in the Base case. As more suburban commuters arrive at their destinations near their desired arrival times, the short-run equilibrium cost decreases.
In the long-run, the introduction of autonomous vehicles decreases and increases the suburban population in Cases I and II, respectively, as shown in Table <ref>. Although the suburban population decreases in Case I, the city spatially expands outward and its boundary is farther than that in Case II. This is because, as the VOT effect is higher in Case I, suburban commuters care less about free-flow travel time in the suburban area, which results in the farther city boundary, as shown in Figs. <ref>–<ref>. Interestingly, Table <ref> reveals that the utility level may decrease even though commuters can relocate in response to the changes in their commuting behaviors due to the introduction of autonomous vehicles. Therefore, the introduction of autonomous vehicles may decrease the utility level in the long-run due to a severe capacity drop in the short-run.
Perimeter control implementation decreases the short-run equilibrium bathtub cost in all cases, as depicted in Figs. <ref>–<ref>. Since hypercongestion never occurs under perimeter control, more suburban commuters arrive at their destinations near their desired arrival times, which results in a shorter rush hour, as depicted in Figs. <ref>–<ref>. A decrease in the short-run equilibrium bathtub cost leads to an increase in the suburban population and causes cities to expand, as depicted in Table <ref> and Figs. <ref>–<ref>. Thus, hypercongestion mitigation by perimeter control decreases the short-run equilibrium bathtub cost, and results in a decrease in the downtown population in the long-run.
The short-run equilibrium bathtub cost in Case I is higher than that in the Base case with and without perimeter control, but the utility level is lower without perimeter control and higher with perimeter control in Case I than those in the Base case. This is because unlike the case in the presence of hypercongestion, the short-run equilibrium bathtub cost decreases with perimeter control and more commuters reside in the suburban area in the long-run. Furthermore, the high VOT effect decreases the suburban population density at location x. Thus, both the short-run equilibrium bathtub cost and the utility level are higher in Case I than in the Base case. The same effects exist in Case II. However, the high network capacity effect causes more suburban commuters to arrive at their destinations in the short-run, but the low VOT effect causes a higher suburban population density and results in fewer commuters residing in the suburban area in the long-run compared with Case I. Thus, the short-run equilibrium bathtub cost is lower, but the utility level is higher in Case II than in the Base case.
Next, we conduct a sensitivity analysis of utility with respect to both the network capacity and VOT effects, as depicted in Fig. <ref>. The patterns of the utility levels with and without perimeter control differ entirely. Without perimeter control,
both the network capacity and the VOT effects increase the utility level when the VOT effect is low (i.e., high value of time in Fig. <ref>). However, the utility level begins to decrease when the VOT effect exceeds the threshold values. This is because a high VOT effect leads to a severe capacity drop, resulting in an increase in the short-run equilibrium bathtub cost. Consequently, the income net of the suburban commuting cost decreases and the downtown population increases, leading to a lower utility level in the long-run.
When the VOT effect is low, the short-run equilibrium bathtub cost decreases, and the free-flow travel time cost in the suburban area also decreases, resulting in a higher utility level. When perimeter control is implemented, both network capacity and VOT effects always increase the utility level, as depicted in Fig. <ref>. As discussed earlier, both effects consistently decrease the short-run equilibrium bathtub cost, resulting in an increase in income net of the suburban commuting cost and a decrease in the downtown population. They contribute to a higher utility level.
§ CONCLUSIONS
In this paper, by incorporating a bathtub model, we develop a land use model where hypercongestion occurs in the downtown area and interacts with land use to examine the effects of hypercongestion mitigation by perimeter control and the introduction of autonomous vehicles on the spatial structures of cities. The results indicate that (I) hypercongestion mitigation decreases the commuting cost and results in a less dense urban spatial structure, (II) the introduction of autonomous vehicles may increase the commuting cost in the presence of hypercongestion and results in a decrease in the suburban population (i.e., total car traffic demand) in the long-run, and (III) the introduction of autonomous vehicles under perimeter control decreases the commuting cost and results in a less dense urban spatial structure. We also show that the introduction of autonomous vehicles may decrease the utility level in the long-run due to a severe capacity drop in the short-run. These results show that hypercongestion is a key factor that can change urban spatial structure. Specifically, the effect of autonomous vehicles on the urban spatial structures depends on perimeter control implementation. Moreover, the effect of autonomous vehicles on the urban spatial structure in the presence of hypercongestion contradicts that in the standard bottleneck model.
This paper has several future directions. First, we assume commuter homogeneity. Incorporating heterogeneity such as in trip length <cit.> and preferences <cit.> is an interesting direction. Second, we assume a unimodal transportation system. Future models can be extended to multimodal transportation systems <cit.>. Finally, the monocentric city structure in this work can be extended to a polycentric city structure.
§ ACKNOWLEDGEMENTS
We thank Tatsuhiko Kono, Minoru Osawa, and Koki Satsukawa for their valuable comments. This work was supported by JST ACT-X, Japan (grant #JPMJAX21AE), by JST FOREST Program, Japan (grant #JPMJFR215M), by JSPS KAKENHI, Japan (grant #23K13422 and #22H01610).
§ APPENDIX A: PROOF OF LEMMA 1
As <cit.> assumed the Ardekani–Herman formula (i.e., v(t)=v_f ( 1 - n(t)/n_j)^1+ρ, where ρ is a parameter) for the space-mean speed in the downtown area, the Greenshields model in this paper can be regarded as a special case of the <cit.> model where ρ=0. Here, we briefly review the solution of the bathtub model. Differentiating the bathtub cost C_s^b(t)=α T(t) + s(t) with respect to time at equilibrium (i.e., d C_s^b(t)/dt=0) yields
d T(t)/dt =
β/α if t ≤ t^*
- γ/α if t > t^*.
From Eqs. (<ref>) and (<ref>), we have
d T(t)/dt = L/n_j v_f( 1 - n(t)/n_j)^-2.
Combining Eqs. (<ref>) and (<ref>), grouping the terms in n(t) and the other terms on the LHS and RHS, respectively, and integrating both sides yields
n(t) =
n_j ( 1 - 1/β v_f/α Lt + C_e) if t ≤ t^*
n_j ( 1 - 1/-γ v_f/α Lt + C_l) if t > t^*,
where C_e and C_l are constants of integration for earliness and lateness, respectively. Since the accumulation at the start and end of the rush hour is zero (i.e., n(t_s)=n(t_e)=0),
n(t) =
n_j ( 1 - 1/1 + β v_f/α L (t - t_s) ) if t ≤ t^*
n_j ( 1 - 1/1 + γ v_f/α L (t_e - t) ) if t > t^*,
From Condition (<ref>),
N_s = α n_j ( 1/β + 1/γ) ( lnθ + 1/θ - 1 ),
where θ = C_s^b* v_f/α L.
When θ=2, the maximum accumulation reached during the rush hour is the critical accumulation (n_j/2) at the desired arrival time (n(t^*)=n_j/2). Therefore, hypercongestion exists if θ>2.
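The closed-form trajectory can be evaluated directly; t_s, t_e, and t^* must satisfy the equilibrium relations above, and the default parameter values below are simply those of the numerical examples.

import numpy as np

def equilibrium_accumulation(t, t_s, t_e, t_star, alpha=20.0, beta=10.0,
                             gamma=40.0, n_j=100.0, v_f=20.0, L=5.0):
    """Piecewise closed form for n(t) at the no-control equilibrium (Appendix A)."""
    t = np.asarray(t, dtype=float)
    early = n_j * (1.0 - 1.0 / (1.0 + beta * v_f / (alpha * L) * (t - t_s)))
    late = n_j * (1.0 - 1.0 / (1.0 + gamma * v_f / (alpha * L) * (t_e - t)))
    return np.where(t <= t_star, early, late)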
§ APPENDIX B: PROOF OF LEMMA <REF>
As d U_s(x)/dx=0 at long-run equilibrium, we have from Eq. (<ref>),
d {r_s(x)+r_A}/dx = - ατ r_s(x)+r_A/μ y_s(x).
The RHS is negative.
Combining Condition (<ref>) and Eq. (<ref>) yields
N_s(x)/A_s(x) = r_s(x)+r_A/μ y_s(x).
By differentiating Eq. (<ref>) with respect to x and substituting Eq. (<ref>) into Eq. (<ref>), we obtain
d {N_s(x)/A_s(x)}/dx = - ατ(1-μ) (r_s(x)+r_A)/{μ(w - C_s^b* - ατ x)}^2.
The RHS is negative.
§ APPENDIX C: PROOF OF LEMMA <REF>
At the long-run equilibrium, the indirect utility satisfies U^*_d=U_s^*(x) for all x ∈ [ 0, x_f ] , and this condition gives
r_d+r_A/r_s(x)+r_A = (y_d/y_s(x))^1/μ.
As we have a_d=A_d/N_d from Condition (<ref>), substituting these into Eq. (<ref>) results in N^*_d as follows.
N^*_d =1/μ{y_d}^1-μ/μ( w - C_s^b*)^-1/μ( r_s(0) + r_A ) A_d.
Similarly, we have
N_s(x) = 1/μ{w - C_s^b* - ατ x }^1-μ/μ{ y_d }^-1/μ( r_s(0) + r_A ) A_s(x).
Furthermore, as U_s^*(0)=U_s^*(x_f), we obtain
x_f = w-C_s^b*/ατ( { r_A }^-μ - { r_s(0) + r_A }^-μ){ r_A }^μ.
§ APPENDIX D: PROOF OF LEMMA <REF>
As the NEF is maintained at the maximum during perimeter control, the number of suburban commuters who arrive at their destinations during perimeter control is
N_s^p = n_jv_f/4L(t_e^p - t_s^p ).
Since C_s^p*= 2 α L/v_f + β (t^* - t_s^p) = 2 α L/v_f + β ( t_e^p - t^*), substituting it into Eq. (<ref>) produces
N_s^p = α n_j/4( 1/β + 1/γ) ( θ^p - 2 ).
According to Eq. (<ref>), the start and end times of perimeter control (t_s^p and t_e^p, respectively; n(t_s^p)=n(t_e^p)=n_j/2) are
t_s^p - t_s = 1/βα L/v_f,
t_e - t_e^p = 1/γα L/v_f.
Therefore, the number of suburban commuters who arrive at their destinations before and after perimeter control is computed using Eqs. (<ref>), (<ref>), (<ref>), and (<ref>) as follows:
N_s^op = ∫_t_s^t_s^pn(s)v(s)/L ds + ∫_t_e^p^t_en(s)v(s)/L ds
= α n_j ( 1/β + 1/γ) ( ln 2 - 1/2).
We combine Eqs. (<ref>) and (<ref>), and the number of suburban commuters is expressed as
N_s = α n_j ( 1/β + 1/γ) ( θ^p/4 + ln 2 - 1 ).
Next, we prove that a queue develops at the perimeter boundary, and its length increases toward the desired arrival time. Eq. (<ref>) and Condition (<ref>) yield
αd T_w(t)/d t - β = 0
αd T_w(t)/d t + γ = 0.
Combining this with Eq. (<ref>), where I_p is constant, we obtain
d q(t)/d t = β/α
d q(t)/d t = - γ/α.
Thus, a queue starts to develop once perimeter control is implemented, and its length increases toward the desired arrival time. Then its length decreases after the desired arrival time.
§ APPENDIX E: PROOF OF C_S^B* > C_S^PB*
When hypercongestion exists without perimeter control, θ > 2 from Lemma <ref>. Consider a set of parameters (including N_s), Ψ, where there exists θ(>2) such that F(θ)=0. Under Ψ, it always holds that F^p(θ)<0. This leads to θ > θ^p under Ψ if there exists θ^p(>2) such that F^p(θ^p)=0. θ > θ^p yields C_s^b* > C_s^pb*, which completes the proof.
|
http://arxiv.org/abs/2307.00778v1
|
20230703064503
|
Modelling cargo transport in crowded environments: effect of motor association to cargos
|
[
"Sutapa Mukherji",
"Dhruvi K. Patel"
] |
physics.bio-ph
|
[
"physics.bio-ph",
"q-bio.SC"
] |
sutapa.mukherji@ahduni.edu.in
Mathematical and Physical Sciences division, School of Arts and Sciences, Ahmedabad University, Navrangpura,
Ahmedabad 380009, India
In intracellular transports, motor proteins transport macromolecules as cargos
to desired locations by moving on
biopolymers such as microtubules.
Recent experiments suggest that cargos that
can associate motor proteins during their translocation have larger
run-lengths and association times, and can overcome the motor traffic on microtubule tracks.
Here, we model the dynamics of a cargo that can associate at most m
free motors, which are present on the track as obstacles to its motion.
The proposed models display
competing effects of association and crowding, leading to a peak in
the run-length with the free motor density. This result is consistent with
past experimental observations.
For m=2 and 3, we show that this feature is governed by the largest eigenvalue of
the transition matrix describing the cargo dynamics.
In all the above cases, free motors are assumed
to be present as stalled obstacles. We finally compare simulation results for the
run-length for general scenarios where the free motors undergo processive motion in addition
to binding and unbinding to or from the microtubule.
Modelling cargo transport in crowded environments: effect of motor
association to cargos
Dhruvi K. Patel
August 1, 2023
===========================================================================================
§ INTRODUCTION
Intracellular transport often involves directional movements of motor proteins
on biopolymers
such as microtubules or actin filaments <cit.>. Three major classes of
motor proteins known as kinesin, dynein and myosin are responsible for such transports.
Using the energy derived from the hydrolysis of adenosinetriphosphate (ATP) molecules, motor
proteins transport different types of
cargos such as cellular organelles, protein complexes, mRNAs etc. to
desired locations in the cell. Such cargo movements are essential for various cellular functions
such as cell morphogenesis, cell division, cell growth etc. This motion is processive in the
sense that motor proteins typically move over several successive
steps before detaching from the microtubule.
Early studies <cit.> on intracellular
transport revealed the underlying mechanism behind motor transport and
how various properties such as the run-length, velocity etc. depend, for example,
on the external force or the
concentration of ATP molecules.
While many of these studies concern transport by a single motor, it is believed that cargos
are often transported by multiple motors <cit.>,
which help cargos remain bound to the biopolymer for
a longer time.
the presence of several motors helps the cargo overcome the viscous drag of the cytoplasm and have
larger velocity as compared to transport by single motors. The cooperation of several motors also
leads to longer run-length of the cargo before it detaches from the microtubule. Further, in vitro
experiments indicate that transport processes by multiple motors can be efficiently regulated by controlling the
number of engaging motors <cit.>. Besides these studies, there have been extensive
experimental and theoretical studies attempting to understand the collective nature of transports
involving many motors under diverse conditions
<cit.>.
Quite often such transport processes take place in a crowded environment of
the intracellular space. This is particularly true for the axon region of the neuron
cell, where a dense network of biopolymers, pre-existing organelles, and the narrow geometry of
the axon together give rise to a crowded environment that can impede cargo movements.
However, despite crowding, it is found that the cargo transport happens in a robust manner without
significant jamming or cargo dissociation. Experiments elucidating cargo transport in crowded
environments indicate that motor proteins can adapt alternative strategies that
might help them circumvent the crowding problem <cit.>.
In a recent experimental study aimed at understanding the motion of a cargo
in a crowded environment, Conway et al. <cit.> studied
the motile properties of quantum dot (Qdot) cargos, that can associate multiple kinesins,
on a microtubule crowded with free kinesin motors. While comparing the motile properties of free kinesins
and the Qdot cargos in crowded conditions, cargos were found to display
longer run-lengths and association times as
compared to free kinesins as the motor density increased.
This difference prompted the prediction that
the property of a cargo to associate multiple motors helps increase its run-length,
association time and overcome the motor traffic.
It was observed that while translocating, Qdot cargos could associate kinesins from
the microtubule pool, dissociate kinesins attached to themselves, or associate kinesins that were already
moving along the microtubule and subsequently move together with them.
Motivated by this work, here, we propose
mathematical and computational models to characterise the motion
of a single cargo on a track crowded by free kinesin motors.
During its translocation, the cargo
can associate free motors, which otherwise impede its motion by occupying
the forward sites on the microtubule.
We assume that upon such association, a kinesin detaches itself from the microtubule,
rendering the forward site free.
kinesin-association property of the cargo and the crowding along the track affects
cargo's motile properties, for example, its run-length and association time etc.
To our knowledge, this is the first modelling study of cargo transport where
the cargo has the ability to associate kinesins present on a crowded
microtubule track.
To this end, we study cargo transport under different scenarios described below.
(1) In the simplest scenario, we assume that
the cargo is always bound to the microtubule.
Along the path of the cargo,
the microtubule binding sites are randomly occupied by free kinesin motors.
We assume that the kinesin that gets associated to the cargo during its translocation
plays no specific role in facilitating the forward motion of the cargo
other than freeing the forward site.
This is equivalent to assuming that the cargo removes
the kinesin molecule occupying the forward site via the association process.
For this model, referred to as "Model 1" below,
we find the average velocity of the cargo.
(2) In model 2, we assume that the cargo can bind
more than one kinesin, say at most m kinesins. This is based on the
prediction that the cargo may have a finite number of kinesin binding sites <cit.>.
Hence, the cargo can associate a kinesin occupying the forward site
provided it has a free binding site available.
We consider m=2, 3, and 4 in the following analysis. Kinesins
attached to the cargo can detach from it and a free kinesin from the intracellular
space can attach to the cargo at given rates. Finally, we implement the condition
that a cargo can no longer remain
on the microtubule track once all the kinesin molecules detach from it.
A generalised version of the mathematical formulation of model 1
allows us
to analyse the cargo motion obeying the above rules for m=2 and 3.
Finally, run-lengths for m=2, 3, and 4 are found
upon numerically simulating the cargo dynamics.
The motion of the cargo following different dynamical rules is shown in figure <ref>.
(3) In models 1 and 2, free kinesins are assumed to be stalled on the microtubule.
In model 3,
we simulate cargo dynamics in the presence of moving kinesins as well as random processes of
kinesin binding and unbinding to or from the microtubule.
We compare the run-length of the cargo (with m=3) in the presence or absence of
various processes mentioned above.
§ MODELS AND RESULTS
§.§ Model 1
The motion of the cargo is modelled considering the following dynamical rules.
(a) The cargo transported by a kinesin starts its forward
journey from a given point on a one-dimensional track (often referred below as a lattice)
representing the microtubule.
(b) For all the lattice sites ahead, we assume an initial, random distribution of free
kinesins. The average kinesin density on the lattice is represented by r_m. These kinesins are
assumed to be stalled.
(c) The cargo moves to the forward site provided the forward site is not occupied
by a free kinesin.
(d) If the forward site in front of the cargo is occupied by a free
kinesin, the cargo can associate the kinesin with itself at
rate r_an rendering the forward site free.
In order to build the mathematical model, we consider possible configurations that two
neighbouring sites can have when the first one of them is occupied by the cargo.
For i and (i+1)-th sites, with the cargo being at the i-th site, the (i+1)-th site can be
either empty or occupied by a free kinesin molecule. We denote the
probability of finding (i+1)-th site empty with the
i-th site occupied by the cargo at time t by P(i,t). Similarly,
the probability of finding (i+1)-th site
occupied by a free kinesin while the cargo is at the i-th site at time t is Q(i,t).
Figure (<ref>) shows these configurations as well as possible transitions from one
configuration to the other as the cargo translocates forward.
The following equations describe how these two configurations evolve with time <cit.>.
dP(i)/dt=r_an Q(i)+(1-r_m) P(i-1)-
P(i),
dQ(i)/dt=-r_an Q(i)+r_m P(i-1).
The first term on the RHS of equation (<ref>)
indicates a cargo-association process due to which a Q-type
configuration transitions to a P-type configuration. The term with the pre-factor
(1-r_m) indicates the motion of the cargo from
(i-1)-th site to
i-th site while the (i+1)-th site is vacant. While the forward motion happens with unit rate,
the factor (1-r_m) indicates the probability that after
the forward motion, (i-1)→ i,
the cargo lands in a P-type configuration i.e. the (i+1)-th site is
unoccupied by a kinesin. The last term in (<ref>) is a loss term which
indicates that a cargo has moved from the i-th site to the (i+1)-th site.
In equation (<ref>), the first term on the RHS indicates a
loss of a Q-type configuration due to the association process. The second term is a gain term due to
the hopping of the cargo from (i-1)→ i where (i+1)-th site is occupied by a free kinesin.
To find the average properties of the cargo motion,
we define generating functions corresponding to the two probabilities as
P̃(γ)=∑_i=-∞^∞γ^i P(i), and Q̃(γ)=∑_i=-∞^∞γ^i Q(i).
In terms of these generating functions, the time evolution equations are
d/dtP̃(γ)=r_anQ̃(γ)+
(1-r_m) γP̃(γ) - P̃(γ), and
d/dtQ̃(γ)=-r_anQ̃(γ)+r_m γP̃(γ).
The average position of the cargo can be found from the probabilities as
⟨ i⟩=∑_i=-∞^∞ i [P(i)+Q(i)] =d/dγ[P̃(γ)+Q̃(γ)]|_γ=1.
The average velocity of the cargo is obtained from
v=⟨ i⟩/t where t is the time taken to travel an
average distance ⟨ i⟩.
Solving equations (<ref>) and (<ref>), the average velocity of the
cargo is found as (see Appendix <ref> for details)
v=r_an/(r_an+r_m).
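This result can be checked against a direct stochastic simulation of rules (a)-(d). Below is a minimal discrete-time sketch in which the rates are interpreted as per-step probabilities; the mean time per forward step is then 1 + r_m/r_an, reproducing the velocity above:

```python
# Monte Carlo check of v = r_an / (r_an + r_m) for Model 1.
import numpy as np

rng = np.random.default_rng(1)

def model1_velocity(r_m, r_an, steps=200_000):
    pos = 0
    blocked = rng.random() < r_m            # state of the site in front of the cargo
    for _ in range(steps):
        if blocked:
            if rng.random() < r_an:         # associate the blocking kinesin
                blocked = False
        else:
            pos += 1                        # forward hop at unit rate
            blocked = rng.random() < r_m    # a fresh site enters the window
    return pos / steps

for r_m in (0.2, 0.5, 0.8):
    print(r_m, model1_velocity(r_m, 0.3), 0.3 / (0.3 + r_m))
```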
§.§ Model 2
Here we generalize the mathematical framework discussed in the previous section to higher
values of m, taking into account the possibility that the cargo detaches from the microtubule. In the following, we discuss the mathematical model for m=2 and simulation results
for m=2, 3, and 4. The cargo dynamics for m=3 is discussed in
Appendix <ref>.
For m=2, the cargo has two kinesin binding sites.
Hence it can associate at the most two kinesins.
The basic rules for cargo transport in this case are listed below.
(a) As before, we begin with an initial, random distribution of stalled free kinesins
on a one-dimensional lattice. The average density of free kinesins is r_m.
(b) The cargo attached to a kinesin starts its forward
journey from a given point on the lattice.
(c) If the forward site is blocked by a free kinesin, the cargo can associate the
kinesin with itself at a rate r_an provided the cargo has only one kinesin
bound to it.
(d) A kinesin bound to the cargo can detach from the cargo at rate ω_d
and a free kinesin from the intracellular space can bind to the cargo at
rate ω_a provided the cargo has only one kinesin attached to it.
(e) A cargo is not attached to the microtubule if all the kinesins detach from the cargo.
Thus, we are not being specific about how many kinesins are actively
transporting the cargo or how many remain bound to the cargo without participating in
cargo transport actively.
As before, we begin with two possible configurations of, say, {i, i+1}-th sites
where i-th site is occupied by the cargo. However,
here the cargo can be in two possible states - bound to one kinesin or bound to two kinesins.
Hence, the probabilities are defined in the following way. P_n(i,t) (n=1, 2) represents the
probability, at time t,
of the configuration where the cargo, located at i-th site, is bound to n kinesins
and the (i+1)-th site is empty.
Similarly, Q_n(i,t) (n=1, 2)
represents the probability, at time t, of the configuration where the cargo, located
at i-th site, is bound to n kinesins and the (i+1)-th site is occupied
by a free kinesin.
The probabilities of various configurations change with time as per the following equations,
d/dtP_2(i)=(1-r_m) P_2(i-1)-P_2(i)+r_anQ_1(i)-ω_d P_2(i)+ω_aP_1(i),
d/dtP_1(i)=(1-r_m) P_1(i-1)-
P_1(i)-ω_a P_1(i)+ω_d (P_2(i)- P_1(i)),
d/dtQ_2(i)=r_m P_2(i-1)+ω_a Q_1(i)-ω_d Q_2(i), and
d/dtQ_1(i)=r_m P_1(i-1)-r_an Q_1(i)+ω_d (Q_2(i)-Q_1(i))-
ω_a Q_1(i).
The r_ an-dependent term in equation (<ref>) represents a process of kinesin
association by the cargo. Due to this process, a Q_1-type configuration transitions to a
P_2-type configuration.
ω_a(ω_d) dependent terms represent kinesin attachment(detachment) processes
to(from) the cargo. For example, the ω_d dependent term in equation (<ref>) represents
detachment of a kinesin due to which the cargo transitions from P_2 state to P_1 state.
In addition to the above equations, we introduce
the probabilities P_0(i,t) and Q_0(i,t) that a cargo bound to a single kinesin
and residing at the i-th site has lost that kinesin by time t.
These probabilities change with time as per the equations
d/dt P_0(i)=ω_d P_1(i) and d/dt Q_0(i)=ω_d Q_1(i) .
Defining generating functions as P̃_n(γ,t)=
∑_i=-∞^∞γ^i P_n(i,t), and
Q̃_n(γ,t)=∑_i=-∞^∞γ^i Q_n(i,t) (where n=0, 1, 2),
we can rewrite equations (<ref>)-(<ref>) as
d/dt H(γ,t)= S H(γ,t),
where H is a column matrix
H(γ,t)=[ P̃_2(γ,t); P̃_1(γ,t); Q̃_2(γ,t); Q̃_1(γ,t); ]
and S is a 4× 4 matrix
S=[ (1-r_m) γ-1-ω_d ω_a 0 r_an; ω_d (1-r_m) γ-1-ω_a - ω_d 0 0; r_mγ 0 -ω_d ω_a; 0 r_mγ ω_d -(ω_a+ω_d+r_an); ].
In order to have an estimate of the association time of the cargo
and how it is impacted by various processes, we have studied
the quantity [P̃_0(γ,t)+Q̃_0(γ,t)]|_γ=1.
This quantity being identical to ∑_i=-∞^∞[P_0(i,t) +Q_0(i,t)]
indicates the total probability of cargo being left with no kinesin bound
to it while being at any point on the lattice. Plots of this quantity for different
parameter values are shown in figures (<ref>) and (<ref>).
At large times, this quantity approaches unity, indicating that the cargo has lost all its kinesins and detached from the microtubule. The rate at which this quantity approaches unity
provides an estimate of the association time of the cargo to the microtubule.
A fast approach to unity
indicates a low association time of the cargo. For both figures,
we have chosen the same sets of values for the kinesin-association rate, r_an.
An increase in ω_a or a decrease in ω_d is expected to increase the association time of the cargo.
Figures show that reducing the kinesin detachment rate, ω_d, from the cargo has much
stronger effects on the association time as compared to increasing the kinesin
attachment rate, ω_a.
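For illustration, the matrix equation above can be integrated numerically at γ = 1, where conservation of total probability gives P̃_0+Q̃_0 = 1 - (P̃_2+P̃_1+Q̃_2+Q̃_1). A minimal sketch with hypothetical rate values:

```python
# Time evolution of the detachment probability P0 + Q0 for the m = 2 model.
import numpy as np
from scipy.integrate import solve_ivp

r_m, r_an, w_a, w_d = 0.3, 0.5, 0.05, 0.02   # hypothetical rates
g = 1.0                                      # gamma = 1

S = np.array([
    [(1 - r_m) * g - 1 - w_d, w_a,                           0.0,  r_an],
    [w_d,                     (1 - r_m) * g - 1 - w_a - w_d, 0.0,  0.0],
    [r_m * g,                 0.0,                          -w_d,  w_a],
    [0.0,                     r_m * g,                       w_d, -(w_a + w_d + r_an)],
])

H0 = np.array([0.0, 1 - r_m, 0.0, r_m])      # cargo starts bound to one kinesin
sol = solve_ivp(lambda t, H: S @ H, (0, 400), H0, dense_output=True)

for t in (10, 50, 100, 200, 400):
    print(f"t = {t:4d}:  P0 + Q0 = {1 - sol.sol(t).sum():.4f}")   # approaches unity
```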
In figure (<ref>), we have shown how the total probability
of cargo detachment at any point on the lattice is influenced by the kinesin density, r_m.
The figure shows that at
a low value of the kinesin-association rate by the cargo, r_ an,
the extent of crowding influences the cargo-association time to the microtubule only mildly.
The situation changes significantly when the kinesin-association rate is high. In this
case, the association time of the cargo to the microtubule, in general, increases significantly.
Further, for large r_ an,
the crowding density of free kinesins affects the association time of the cargo
significantly with the association time being larger for lower crowding density.
The dependence of the run-length of the cargo on the crowding density
can be obtained upon solving equations (<ref>)-(<ref>) numerically.
Figure (<ref>) shows run-length plots for different values of
the kinesin-association rate, r_ an,
and kinesin attachment and detachment rates, ω_a and ω_d, respectively.
For small r_ an, the run-length decreases monotonically.
However, for large r_ an, the run-length increases initially for low crowding. In this case,
due to large r_ an, the cargo benefits from
the kinesin-association process at low crowding. As the
crowding density increases, due to limited number of
binding sites, the cargo no longer benefits from kinesin association
and the run-length decreases. This variation of the
run-length with the crowding density is
consistent with earlier experimental predictions <cit.>.
With an increase in the kinesin detachment rate, ω_d,
the run-length of the cargo decreases significantly. However, as found earlier,
a decrease in the rate of kinesin attachment, ω_a,
to the cargo has only a mild effect on the cargo's run-length.
The fact that the run-length of the cargo for high kinesin-association rates increases with
the crowding density initially is consistent with the estimates obtained from the analysis of the largest
eigenvalue of the transition matrix S and the association time. In the limit of large time, the
solutions for the probabilities are given by
H≈ c_1 e^λ_l t X,
where c_1 is a constant, λ_l is the largest of the four eigenvalues
of the transition matrix S with all of them being negative
and X=(x_1, x_2, x_3, x_4)^T is the corresponding eigenvector.
The average distance travelled by the cargo and its average velocity can be obtained from
⟨ i⟩=∑_i=-∞^∞ i [P_1(i,t)+P_2(i,t)+Q_1(i,t)+Q_2(i,t)]
=γd/dγ[P̃_1(γ,t)+P̃_2(γ,t)+Q̃_1(γ,t)+Q̃_2(γ,t)]|_γ=1 and v=⟨ i⟩/t, respectively.
In the large time limit, the dominant contribution to the velocity is of the
form v≈ [γ c_1 e^λ_l tdλ_l/dγ∑_i=1^4x_i]|_γ=1.
Using 1/λ_l as an estimate of the association time, t_ assoc,
and finding dλ_l/dγ|_γ=1
numerically for given parameter values, we have plotted
t_ assocdλ_l/dγ|_γ=1 as a function of the crowding density, r_m,
in figure (<ref>). Plots in figure
(<ref>) display similar trends as found in figure (<ref>) for the
run-length.
Although t_ assoc v gives an estimate of the run-length, the
variation in the run-length with the crowding density as seen in figure (<ref>)
essentially arises from t_ assocdλ_l/dγ|_γ=1. It can be
verified numerically that the variation in the remaining factors in v is almost negligible over
the entire range of r_m, [0,1].
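A short numerical sketch of this estimate is given below (hypothetical rates; dλ_l/dγ is computed by a central finite difference, and -1/λ_l is used for t_assoc since λ_l is negative):

```python
# t_assoc * dlambda_l/dgamma at gamma = 1 versus the crowding density (m = 2).
import numpy as np

def S_matrix(g, r_m, r_an, w_a, w_d):
    return np.array([
        [(1 - r_m) * g - 1 - w_d, w_a,                           0.0,  r_an],
        [w_d,                     (1 - r_m) * g - 1 - w_a - w_d, 0.0,  0.0],
        [r_m * g,                 0.0,                          -w_d,  w_a],
        [0.0,                     r_m * g,                       w_d, -(w_a + w_d + r_an)],
    ])

def run_length_proxy(r_m, r_an=0.5, w_a=0.05, w_d=0.02, h=1e-6):
    lam = lambda g: np.linalg.eigvals(S_matrix(g, r_m, r_an, w_a, w_d)).real.max()
    dlam = (lam(1 + h) - lam(1 - h)) / (2 * h)   # dlambda_l/dgamma at gamma = 1
    return -dlam / lam(1.0)                      # t_assoc * dlambda_l/dgamma

for r_m in np.linspace(0.05, 0.95, 7):
    print(f"r_m = {r_m:.2f}:  {run_length_proxy(r_m):.2f}")
```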
The dynamical equations for a cargo that can bind at the most three kinesins, i.e.
m=3, are shown in
Appendix <ref>. The variation of t_ assocdλ_l/dγ|_γ=1
with the crowding
density as obtained from the analysis of the largest eigenvalue is shown in figure (<ref>).
Next we simulate the cargo dynamics with the cargo having m=2, 3, and 4 kinesin
binding sites.
Figure (<ref>) shows the change
in the run-length of the cargo with
free-kinesin density, r_m, for m=2, 3, and 4.
Simulations show an
initial increase in the run-length with the free-kinesin density for m=3 and 4,
a trend that was shown earlier in figure (<ref>).
§.§ Processive motion of free kinesins for m=3
In this section, we study the motion of the cargo in the presence of free kinesins which move processively
on the microtubule track. In addition, kinesins from
the intracellular environment can attach to the microtubule and those walking on the microtubule
can leave the microtubule at given rates.
Here we simulate this system using the cellular automaton method. As before, the
microtubule is represented by a one-dimensional lattice.
We begin with the cargo positioned at one end of the lattice. The lattice sites
are randomly occupied by free kinesins with an average density, r_m.
The cargo moves following the
association mechanism mentioned earlier.
We assume that the kinesins move unidirectionally in the same
direction as that of the cargo.
The motion of the free kinesins follows the rules of the paradigmatic totally asymmetric
simple exclusion process <cit.>. Accordingly, each kinesin can
walk to the neighbouring site forward provided
the target site is not occupied by another kinesin.
The attachment and detachment of kinesins are as per the Langmuir kinetics
considered in <cit.>.
A kinesin can attach to a lattice site at rate ω_a, kin provided
the site is empty and a free kinesin can detach from the lattice
at rate ω_d, kin.
A kinesin located at the boundary site can exit from the lattice at rate β.
We follow a random sequential updating scheme, with probability p for the cargo update and 1-p
for updating the rest of the sites.
Depending on the site chosen, the state of the site (or of the cargo) is updated
following the aforementioned rules. The description of various parameters is provided in
Appendix <ref>.
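A condensed re-implementation of this update scheme is sketched below. It is illustrative only, not the authors' code: all parameter values are hypothetical, and the boundary exit rate β is omitted for brevity.

```python
# Cellular-automaton sketch of Model 3: cargo with at most m binding sites on a
# lattice of processive free kinesins with Langmuir kinetics.
import numpy as np

rng = np.random.default_rng(0)

def run_length(N=2000, m=3, r_m=0.3, r_an=0.5, w_a=0.05, w_d=0.02,
               w_a_kin=0.001, w_d_kin=0.01, p=0.5, T=200_000):
    kin = rng.random(N) < r_m                    # True: site holds a free kinesin
    x, bound = 0, 1                              # cargo position, bound kinesins
    kin[0] = False
    for _ in range(T):
        if rng.random() < p:                     # cargo update
            if rng.random() < w_d:               # a bound kinesin detaches
                bound -= 1
                if bound == 0:
                    return x                     # cargo leaves the microtubule
            elif bound < m and rng.random() < w_a:
                bound += 1                       # kinesin from the bulk binds the cargo
            if x + 1 >= N:
                return x                         # end of the track
            if kin[x + 1]:                       # forward site blocked
                if bound < m and rng.random() < r_an:
                    kin[x + 1] = False           # associate the blocker; site freed
                    bound += 1
            else:
                x += 1                           # processive step of the cargo
        else:                                    # free-kinesin update at a random site
            i = int(rng.integers(N - 1))
            if kin[i]:
                if rng.random() < w_d_kin:
                    kin[i] = False               # unbinding from the microtubule
                elif not kin[i + 1] and i + 1 != x:
                    kin[i], kin[i + 1] = False, True   # TASEP hop to the right
            elif rng.random() < w_a_kin and i != x:
                kin[i] = True                    # Langmuir attachment
    return x

print("mean run-length:", np.mean([run_length() for _ in range(10)]))
```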
In figure (<ref>), we have plotted run-lengths for three different scenarios,
(i) free kinesins are stalled (static obstacles) (ii) free kinesins are in motion and (iii) free kinesins
are in motion and they can attach to (or detach from) the microtubule at rates
ω_a, kin (or ω_d, kin).
Plots indicate that in case of processive free kinesins (case (ii)), the run-length of the cargo
peaks at a higher crowding density with a higher maximum value as
compared to the stalled case. The run-length reduces significantly in case of random
attachment and detachment of free kinesins (case (iii)).
Figure (<ref>) in Appendix <ref> shows that
the attachment processes lower the run-length significantly.
Although, due to increased effective crowding density, the cargo
remains bound to the microtubule for a large span of time by associating free kinesins,
the crowding restricts the run-length.
As a result of this, the run-length attains its maximum value at a lower value of the
crowding density, r_m.
Figure (<ref>) shows the variation in the run-length as the
association rate r_ an is changed. The peaks in the run-lengths are similar to
what have been observed earlier.
Figure (<ref>) shows a comparison of how the association time
of the cargo to the microtubule depends on r_m for different
values of the kinesin-association rate, r_ an. The association time is
expressed as the total number of discrete time steps of simulation till the cargo leaves the
microtubule. An increase in r_ an helps cargo stay attached to the microtubule for a
longer span of time while as per figure (<ref>), the velocity of the cargo
decreases monotonically with the crowding density, r_m. Further, no significant variation in
the velocity is seen with r_ an.
The association time and the velocity vary with r_m in such a manner that their
product exhibits a peak at a specific value of r_m.
§ SUMMARY
The motion of cargos on biopolymeric tracks crowded due to free or
cargo-bound motor proteins is a subject of immense experimental and theoretical investigations.
The central goal of these studies is to understand how cargos manage to
overcome the motor traffic in order to transport necessary materials in a robust manner.
Motivated by some of the experimental observations on
translocation of quantum dot cargos
in crowded environments, we have modelled mathematically and computationally
the motion of a cargo that can bind kinesins present along its trajectory
on the microtubule.
In the mathematical modelling, the kinesins on the microtubule track are assumed to be stalled.
Besides taking into account the kinesin-association property of the cargo,
our model incorporates the following dynamical rules.
(i) The cargo has a limited number of kinesin-binding sites as
a result of which it can bind at the most a given number of kinesins, (ii) bound kinesins can
detach from the cargo and kinesins from the
intracellular space can bind to the cargo at certain rates and (iii) the cargo leaves the
microtubule if all the kinesins detach from the cargo.
Upon finding the cargo velocity for a toy model where the cargo never leaves the
microtubule and keeps moving forward by removing obstacles via association, we
generalize the mathematical formulation to take into account the aforementioned
aspects of the cargo dynamics.
We show that the two features, namely, the
crowding along the microtubule and the ability of the cargo to associate kinesins
have competing effects on the run-length of the cargo. For low crowding density,
as the crowding density
increases, the cargo benefits due to its ability to associate kinesins. This leads to
an increase in the run-length with the crowding density. However, as the
crowding density increases further, due to
its limited number of kinesin binding sites, the cargo does not benefit anymore
through kinesin association. As a consequence, the run-length decreases for large values
of the crowding density. This nature of the run-length has been predicted earlier from
experimental observations. We show that this property of the run-length is governed
by the largest eigenvalue of the transition matrix describing the
dynamics of the cargo. The model can be generalized further to incorporate other
features such as reversals of the cargo, bidirectional movements of the cargo,
pausing of the cargo etc. with the frequencies of such events depending on
the crowding density.
The present work lays a foundation for such studies. Additionally, this analysis may
also lead to testable predictions for cargo's motile properties
once appropriate parameter values are available.
Next, we have simulated cargo transport with
processive motion of free kinesins as well as binding and unbinding
of motors to or from the
microtubule. For different values of the rate of kinesin association to the cargo,
the run-lengths show prominent peaks as the crowding density is changed.
However, overall, the run-length decreases significantly due to binding of motors
to the microtubule, a process that increases the effective crowding density.
As a consequence of the cargo's ability to associate kinesins, the
association time of the cargo to the microtubule increases with the crowding density.
The velocity of the cargo, on the other hand, decreases with the crowding density and it remains
approximately unchanged with the kinesin-association rate of the cargo.
Incorporating the processive motion in the mathematical model would add another
level of complexity which can be a subject of future studies.
§ MODEL 1
In the matrix form the differential equations (<ref>) and (<ref>)
appear as
d/dt G(γ,t)= R G(γ,t),
where
G(γ,t)=
[ P̃(γ,t); Q̃(γ,t); ] and
R=[ (1-r_m) γ-1 r_an; r_m γ -r_an; ].
Here R is a transition matrix. For γ=1, the sum of all the elements in a column is 0.
One may find out the solutions of these equations upon finding the eigenvalues and eigenvectors
of R. The eigenvectors corresponding to the eigenvalues λ_± are, respectively,
[ 1; λ_++1-(1-r_m)γ/r_an; ] and [ 1; λ_-+1-(1-r_m)γ/r_an; ],
where λ_+,-=1/2[-(r_an+1-(1-r_m) γ)± A] with
A=√((r_an+1-(1-r_m) γ)^2-
4 r_an (1-γ)).
The solutions for the generating functions are
[ P̃(γ,t); Q̃(γ,t); ]=c_1e^λ_+t[ 1; λ_++1-(1-r_m)γ/r_an; ]+
c_2e^λ_-t[ 1; λ_-+1-(1-r_m)γ/r_an; ],
where c_1, c_2 are integration constants.
We consider the initial conditions P(i,t=0)=Q(i,t=0)=1/2 for i=0.
Using these conditions, we find
c_1=1/2A[r_an-λ_- - 1+(1-r_m)γ] and
c_2=1/2-c_1=1/2A[A-r_an+λ_-+1-(1-r_m)γ].
Since in the large time limit, the solutions are governed by the largest eigenvalue, we have
P̃(γ,t)+Q̃(γ,t)≈ c_1
e^λ_+t[1+λ_++1-(1-r_m)γ/r_an].
Upon taking derivatives of P̃(γ,t)+Q̃(γ,t) with respect to γ, we have
⟨ i⟩=[γ(dP̃/dγ+dQ̃/dγ)]_γ=1
={γdc_1/dγe^λ_+t(1+λ_++1-(1-r_m)γ/r_an)}_γ=1+
{γ c_1 e^λ_+tdλ_+/dγ t(1+λ_++1-(1-r_m) γ/r_an)}_γ=1+
{γ c_1 e^λ_+t1/r_an(dλ_+/dγ-(1-r_m))}_γ=1.
In the large time limit, we finally have
⟨ i⟩/t=dλ_+/dγ|_γ=1.
Using
dλ_+/dγ |_γ=1=[ (1-r_m)/2 + (1/2) dA/dγ]_γ=1,
where
dA/dγ |_γ=1=(r_ an+r_ an r_m-r_m+r_m^2)/(r_ an+r_m),
we have
v=dλ_+/dγ|_γ=1=r_an/(r_an+r_m).
Figure (<ref>) shows plots of velocity obtained mathematically and through simulations.
§ MODEL 2
§.§ m=3
A cargo that can bind at the most three kinesins can be in four possible states, namely,
bound to one, two or three kinesins or not bound to any kinesin.
Possible configurations of two neighbouring sites can be of
P type or Q type depending on whether the site in front of the cargo is occupied by a free kinesin or
empty. For example, P_n(i,t) (n=1, 2, or 3) indicates the probability of
a configuration where
a cargo, bound to n number of kinesins,
is present at the i-th site at time t while the (i+1)-th site is
empty. Similarly, Q_n(i,t) (n=1, 2, or 3) represents the probability of
a configuration where
a cargo, bound to n number of kinesins, is present at the i-th site at time t while the (i+1)-th site is
occupied by a free kinesin. The change in these probabilities with time is described by the equations
d/dtP_3(i)=(1-r_m) P_3(i-1)-P_3(i)+r_anQ_2(i)-ω_d P_3(i)+ω_aP_2(i),
d/dtP_2(i)=(1-r_m) P_2(i-1)-P_2(i)+r_anQ_1(i)+ω_d P_3(i)+
ω_aP_1(i)-(ω_a+ω_d) P_2(i),
d/dtP_1(i)=(1-r_m) P_1(i-1)-P_1(i)+ω_d P_2(i)-
(ω_a+ω_d) P_1(i),
d/dtQ_3(i)=r_m P_3(i-1)+ω_a Q_2(i)-ω_d Q_3(i),
d/dtQ_2(i)=r_m P_2(i-1)+ω_a Q_1(i)+
ω_d Q_3(i)-(ω_d +ω_a)Q_2(i)-r_ an Q_2(i), and
d/dtQ_1(i)=r_m P_1(i-1)-r_an Q_1(i)+ω_d Q_2(i)-(ω_a+ω_d)Q_1(i).
Additionally, as in m=2 case, we have
d/dt P_0(i)=ω_d P_1(i) and d/dt Q_0(i)=ω_d Q_1(i) .
Defining the generating functions as P̃_n(γ,t)=∑_i=-∞^∞γ^i P_n(i,t) and
Q̃_n(γ,t)=∑_i=-∞^∞γ^i Q_n(i,t), we have
d/dt H(γ,t)= S H(γ,t),
where H is a column matrix
H(γ,t)=[ P̃_3(γ,t); P̃_2(γ,t); P̃_1(γ,t); Q̃_3(γ,t); Q̃_2(γ,t); Q̃_1(γ,t); ],
and S is a 6× 6 matrix
S=[ (1-r_m) γ-ω_d -1 ω_a 0 0 r_an 0; ω_d (1-r_m) γ-ω_a - ω_d-1 ω_a 0 0 r_ an; 0 ω_d (1-r_m)γ-ω_a-ω_d-1 0 0 0; r_mγ 0 0 -ω_d ω_a 0; 0 r_mγ 0 ω_d -Ω ω_a; 0 0 r_mγ 0 ω_d -Ω; ],
where Ω=(ω_a+ω_d+r_ an).
As in case of m=2, here again the variation in the run-length is governed by the quantity
t_ assocdλ_l/dγ|_γ=1 where λ_l is the largest
eigenvalue of matrix S. Figure (<ref>) shows the variation in the
t_ assocdλ_l/dγ|_γ=1 with r_m.
§ PROCESSIVE MOVEMENT OF FREE KINESINS
Descriptions of parameters used in simulations
are provided in table <ref>.
Figure (<ref>) shows how the run-length varies with the crowding density
in the three cases - Processive movement of free kinesins and
(i) binding of kinesins to the microtubule at rate ω_a, kin,
(ii) unbinding of kinesins from the microtubule at rate ω_d, kin, and (iii) no
binding or unbinding of kinesins to or from the microtubule.
99
howard1J. Howard, Mechanics of Motor Proteins and the Cytoskeleton (Sinauer Associates, Massachusetts, 2001).
schliwa M. Schliwa, Molecular Motors (Wiley-VCH, Germany, 2003).
howard2J. Howard, A. J. Hudspeth, and R. D. Vale, Nature 342, 154 (1989).
block K. Visscher, M. J. Schnitzer, and S. M. Block, Nature 400, 184 (1999).
cross N. J. Carter, and R. A. Cross, Nature 435, 308 (2005).
grossS. P. Gross, M. A. Welte, S. M. Block, and E. F. Wieschaus, J. Cell Biol. 156, 715 (2002).
holzwarth D. B. Hill, M. J. Plaza, K. Bonin, and G. Holzwarth, Eur. Biophys. J. 33, 623 (2004).
gelfand V. Levi, A. S. Serpinskaya, E. Gratton, and V. Gelfand, Biophys. J. 90, 318 (2006).
unger K. J. Böhm, R. Stracke, P. Mühlig, and E. Unger, Nanotechnology 12, 238 (2001).
klumpp S. Klumpp, and R. Lipowsky, Proc. Natl. Acad. Sci. U. S. A. 102, 17284 (2005).
beeg J. Beeg, S. Klumpp, R. Dimova, R. S. Graciá,
E. Unger, and R. Lipowsky, Biophys. J. 94, 532 (2008).
vershinin M. Vershinin, B. C. Carter, D. S. Razafsky, and S. P. Gross, Proc. Natl. Acad. Sci. U. S. A.
104, 87 (2007).
segregation M. E. Schneider, et al., J. Neurosci. 26, 10243 (2006).
clogging P. I. Zhuravlev, B. S. Der, and G. A. Papoian, Biophys. J. 98, 1439 (2010).
leducC. Leduc et al., Proc. Natl. Acad. Sci. U. S. A. 109, 6100 (2012).
helical V. Bormuth, et al., Biophys. J. 103, L4-L6 (2012); M. Bugiel, E. Böhl, and E. Schäffer, Biophys. J. 108, 2019 (2015).
tasep1C. T. MacDonald, J. H. Gibbs, and A. C. Pipkin,
Biopolymers 6, 1 (1968).
multi-opposite M. R. Evans, D. P. Foster, C. Godrèche, and D. Mukamel,
Phys. Rev. Lett. 74, 208 (1995).
frey-lang A. Parmeggiani, T. Franosch, and E. Frey, Phys. Rev.
Lett. 90, 086601 (2003).
multi-same Y. Chai, S. Klumpp, M. J. I. Müller, and R. Lipowsky, Phys.
Rev. E 80, 041928 (2009).
laneswitch A. I. Curatolo, M. R. Evans, Y. Kafri, and J. Tailleur,
J. Phys. A 49, 095601 (2016).
gov I. Pinkoviezky and N. S. Gov, Phys. Rev. Letts. 118, 018102 (2017).
freygait P. Wilke, E. Reithmann, and E. Frey
Phys. Rev. X 8, 031063 (2018).
surrey A. Seitz, and T. Surrey, EMBO J. 25, 267 (2006).
conway L. Conway, et al., Proc. Natl. Acad. Sci. U. S. A. 109, 20814 (2012).
conway1 L. Conway and J. Ross, Comm. and Integ. Biol. 6, e25387.
sm Sutapa Mukherji, Phys. Rev. E 77, 051916 (2008).
|
http://arxiv.org/abs/2307.02297v1
|
20230705135540
|
RIS with insufficient phase shifting capability: Modeling, beamforming, and experimental validations
|
[
"Lin Cao",
"Haifan Yin",
"Li Tan",
"Xilong Pei"
] |
eess.SP
|
[
"eess.SP"
] |
RIS with Insufficient Phase Shifting
Capability: Modeling, Beamforming,
and Experimental Validations
Lin Cao, Haifan Yin, Member, IEEE, Li Tan, and Xilong Pei
This work was supported in part by the National Key Research and Development Program of China under Grant 2020YFB1806904, in part by the National Natural Science Foundation of China under Grants 62071191, 62071192, and 1214110.
The corresponding author is Haifan Yin.
L. Cao, H. Yin, L. Tan and X. Pei are with School of Electronic Information and Communications, Huazhong University of Science and Technology,
430074 Wuhan, China (e-mail: lincao@hust.edu.cn; yin@hust.edu.cn; ltan@hust.edu.cn; pei@hust.edu.cn
).
Received March 10, 2023; accepted May 12, 2023
==========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
Most research works on reconfigurable intelligent surfaces (RIS) rely on an idealized model of the reflection coefficients, i.e., uniform reflection amplitude for any phase and sufficient phase shifting capability. In practice, however, such models are oversimplified. This paper introduces a realistic reflection coefficient model for RIS based on measurements. The reflection coefficients are modeled as discrete complex values that have non-uniform amplitudes and suffer from insufficient phase shifting capability. We then propose a group-based query algorithm that takes these practical imperfections into consideration while calculating the reflection coefficients. We analyze the performance of the proposed algorithm, and derive closed-form expressions to characterize the received power of an RIS-aided wireless communication system. The performance gains of the proposed algorithm are confirmed in simulations. Furthermore, we validate the theoretical results by experiments with our fabricated RIS prototype systems. The simulation and measurement results match well with the theoretical analysis.
Reconfigurable intelligent surface, practical reflection coefficient, performance analysis, wireless propagation measurements.
§ INTRODUCTION
As fifth-generation (5G) mobile communication has gradually matured, sixth-generation (6G) mobile communication has appeared on the horizon, calling for much higher data rates, connection density, and energy efficiency.
With the expansion of the network scale and the ever-increasing throughput requirements, mobile communications are facing the challenges of high energy consumption and low cost efficiency <cit.>.
The recently proposed reconfigurable intelligent surface (RIS) provides new paradigms for wireless communication, owing to its potential to re-design the wireless propagation environment and counteract adverse radio conditions. As a result, RIS is actively being discussed as a prospective technology for 6G <cit.>.
RIS is a two-dimensional array of sub-wavelength elements that can be configured using a large number of passive components <cit.>. The passive electromagnetic response of the structure of each component (e.g., phase and amplitude) is controlled by simple programmable components, such as
positive-intrinsic-negative (PIN) diode <cit.>, varactor diode <cit.>, micro electro mechanical systems (MEMS) switch <cit.>, etc. By jointly manipulating these elements, RIS will be able to build a programmable wireless environment with low additional power or hardware expense <cit.>, thereby further increasing the spectrum efficiency, energy efficiency, and physical layer security aspects of wireless communication systems.
To explore the potential of RIS techniques, RIS-aided wireless communication systems have recently been investigated in various applications/setups. Researchers propose new cost-effective solutions with RIS that lead to high beamforming gains and effective interference suppression using only low-cost reflecting elements, such as physical layer security <cit.>, orthogonal frequency division multiplexing (OFDM) <cit.>, and integrated sensing and communications (ISAC) <cit.>, etc.
The existing works are mostly based on the following three assumptions to make the system model more concise yet idealized:
* Continuous phase shifts at its reflecting elements,
* Uniform reflection amplitude at any phase shift,
* Sufficient phase shifting capability covering the range from 0 to 2π.
However, an RIS element with continuous phase shifts is very challenging to implement due to its limited size and cost.
Taking RISs with varactor diodes or PIN diodes as an example, an element with finely tuned phase shifts requires a wide range of biasing voltages for the varactor diodes, or a large number of PIN diodes together with the corresponding controlling signals from the RIS controller.
As such, for practical RISs, it is more cost-effective to consider discrete phase shifts with a small number of control bits for each element.
Besides, due to hardware limitations <cit.>, it is difficult and unrealistic to implement an RIS satisfying the ideal reflection model, in which the reflection amplitude of each element is uniform at any phase shift.
The experimental results reported in <cit.> show that the amplitude and phase shift of the reflected signal in a practical RIS system with varactor diodes are related to both the frequency of the incident signal and the biasing voltage of the varactor diode.
This is due to the fact that changing the frequency of incident signal or the control voltage will shift the equivalent impedance of the RIS element, leading to a variation in the ohmic loss in the system, which subsequently affects the amplitude and phase shift of the reflected signal.
In reality, this has long been a problem in RIS implementation<cit.>.
Moreover, the phase shifting capability is not always sufficient. The experimental results in <cit.> show that the phase response of an RIS element is sensitive to the angle of the incident signal, which is due to the RIS being spatially dispersive <cit.>. Besides, most existing RIS elements have limited phase shifting capability and cannot cover the range from 0 to 2π.
There have been some previous studies to investigate the practical system model of RIS-aided systems <cit.>.
In these studies, however, the authors focused on discrete phase shifts at the RIS elements or on a non-ideal reflection model in which the amplitude of the received signal varies with the phase shift.
Few previous studies have focused on a practical reflection coefficient model, especially one with insufficient phase shifting capability.
Motivated by the above, we study in this paper an RIS-aided wireless communication system by establishing a practical system model with discrete reflection coefficients and limited phase shift range.
We formulate and solve the problems to maximize the received power at the user by proposing a group-based query algorithm to optimize the reflection coefficients in the scenario that the above three idealized assumptions are not valid.
To analyze the effect of non-ideal reflection coefficients, the asymptotic performance is analyzed, and the corresponding closed-form expressions are derived.
We validate the theoretical results by both numerical simulations and experiment measurements using our prototypes of RIS system.
The main contributions of this paper are as follows:
* Based on experimental measurements, we introduce a realistic model for the reflection coefficients, which, to the best of our knowledge, is the first model taking into account the limited phase shifting capability.
* With the above model, we formulate a maximization problem for the received power, which is non-convex and difficult to solve. We propose a group-based query algorithm to find the solutions efficiently by calculating the corresponding phase range of each discrete reflection coefficient.
* We analyze the performance of the proposed algorithm. The closed-form expressions for the performances of RIS-aided communication systems are derived, including the cases of uniform reflection amplitude and non-uniform reflection amplitude.
* We conduct experiments with our fabricated RIS prototypes, to evaluate and validate the theoretical performance under practical deployment conditions. Two different RIS prototypes working on 5.8 GHz and 2.6 GHz are employed in the measurements. The experimental results match well with our theoretical analysis.
The rest of this paper is organized as follows.
Section <ref> introduces the system model for RIS-aided communication systems and derives the received power.
In Section <ref>, we propose a realistic reflection coefficient model and a group-based query algorithm for solving the received power maximization problem.
In Section <ref> we validate our theoretic results using both simulations and experimental measurements.
Section <ref> concludes the paper.
§ SYSTEM MODEL
We consider an RIS-aided wireless communication system as shown in Fig.<ref>, where an RIS is adopted to reflect the signal from the base station (BS) towards the user. The RIS is composed of M reflecting elements. By utilizing varactor diodes or PIN diodes, each element can shift the phase of the reflected signal. We show two examples of the structure of the element in Fig.<ref>. Tunable phase shift to the reflected signal is achieved by varying the bias voltage of the diode.
For ease of exposition, we assume that there is no line-of-sight (LoS) path between the BS and the user. However, the following derivation can be easily extended to scenarios with a LoS path.
The signal received at the user is expressed as
y = √(P_t)f^HΦhs + n,
where n∼𝒞𝒩(0,σ^2 ) is the additive white Gaussian noise (AWGN) with zero mean and variance σ ^2, P_t is the transmit power, s is the transmitted signal with | s |^2 = 1, and Φ≜ diag(A_1e^ - jθ _1,⋯,A_Me^ - jθ _M) ∈ℂ^M × M denotes the reflection coefficient matrix of the RIS, where A_m and θ _m are the reflection amplitude and the phase shift on the incident signal, respectively. h=[ h_1,⋯,h_M]^T ∈ℂ^M × 1 denotes the channel from the BS to the RIS, and f=[ f_1,⋯,f_M]^T ∈ℂ^M × 1 denotes the channel from the RIS to the user.
Although the real direct path between the BS and the user is blocked, the LoS components exist in practical implementation due to the directed reflection of RIS. For this reason, the Rician fading is used to model the channels between the BS and the RIS, as well as the RIS and the user, as signified by h and f, which are written as
𝐡= √(K_1/K_1 + 1)𝐡̅+√(1/K_1 + 1)𝐡̃,
and
𝐟= √(K_2/K_2 + 1)𝐟̅+√(1/K_2 + 1)𝐟̃,
where 𝐡̅ and 𝐟̅ are the LoS components of each channel; 𝐡̃ and 𝐟̃ are the non-LoS (NLoS) components; K_1 and K_2 are Rician K-factors of h and f, respectively.
Since the distances between the BS and the RIS, as well as the RIS and the user, are significantly greater than the distances between any two RIS elements, we assume that the path loss of the BS-RIS link and the RIS-user link via different RIS elements is identical. The reflected LoS components of each channel via the m-th RIS element are denoted by <cit.>
𝐡̅ = √(G_aD_1^-α)·[ e^-j2π/λD_1,⋯,
e^-j2π/λD_m,⋯,e^-j2π/λD_M]^T ,
and
𝐟̅ = √(d_1^-α)[ e^-j2π/λd_1,⋯,
e^-j2π/λd_m,⋯,e^-j2π/λd_M]^T ,
where α is the path loss factor, G_a is the antenna gain, and λ is the wavelength of the signal. D_m and d_m are the distances between the BS and the m-th RIS element and between the m-th RIS element and the user, respectively, as illustrated in Fig. <ref>. During a channel coherence interval, the LoS components are constant, whereas the NLoS components of 𝐡 and 𝐟 follow an i.i.d. complex Gaussian distribution <cit.>. The NLoS components of each channel are respectively denoted by
𝐡̃ =L(D_1)[g_1,⋯,g_m,⋯,g_M]^T,
and
𝐟̃ =L(d_1)[ b_1,⋯,b_m,⋯,b_M]^T,
where L( · ) is the channel gain of the NLoS component, g_m∼𝒞𝒩(0,1 ) and b_m∼𝒞𝒩(0,1 ) denote the small-scale NLoS components.
We then derive the analytical expression for the maximum received power of the system. The instantaneous received power is given by
P_r =P_t𝐟^HΦ𝐡^2.
The instantaneous received power is an exponentially distributed random variable.
The long-term average received power (LARP) Γ is denoted by
Γ = 𝔼{P_r} = P_t 𝔼{𝐟 ^HΦ 𝐡^2}.
To have a deeper understanding of the LARP Γ, we provide another form in the following proposition.
The LARP Γ is given by
Γ = κ _NLoS∑_m A_m^2+ κ _LoS∑_m,m'A_mA_m'e^ - j[ϕ _m - ϕ _m' + θ _m - θ _m'],
where ϕ _m = 2π/λ(D_m + d_m) denotes the total phase shift induced by the LoS components of each channel; κ _LoS and κ_NLoS are constants defined as
κ _LoS = K_1K_2η _LoS/(K_1 + 1)(K_2 + 1),
κ_NLoS = K_1η _NLoS1+K_2η _NLoS2+η _NLoS3/(K_1 + 1)(K_2 + 1),
where η _LoS, η _NLoS1, η _NLoS2 and η _NLoS1 are constants related to the path loss of the channels, which are defined as
η _LoS = √(D_1^-αd_1^-α)P_tG_a ,
η _NLoS1 = √(G_aD_1^-α)P_tL( d_1 ),
η _NLoS2 = √(G_ad_1^-α)P_tL( D_1 ),
η _NLoS3 = P_tL( D_1 )L( d_1 ).
See Appendix <ref>.
We aim to maximize the received power at the user by optimizing the response of each RIS element. The problem is formulated as
(P0) : max_Φ P_t 𝔼{ 𝐟 ^HΦ𝐡^2 }
s.t. ϕ_m = A_me^ - jθ_m, m=1,⋯,M,
0 ≤θ_m ≤2π, m=1,⋯,M.
The maximum LARP is obtained when ϕ _m - ϕ _m' + θ _m - θ _m' = 0 and A_m = 1 for any m and m'. In other words, the optimal continuous-valued phase shift θ _m^* of the m-th RIS element should satisfy the following constraint:
θ _m^* + ϕ _m = C,
where C is an arbitrary constant. When the phase shift of each RIS element satisfies (<ref>) and all elements share the same amplitude value A_m = 1, the maximum LARP is obtained:
Γ _max= max_ΦP_t𝔼{𝐟 ^HΦ 𝐡^2}
= κ _NLoSM + κ _LoS M^2.
Γ _max will, therefore, serve as an upper bound of the received power.
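Both the proposition and the maximum value above can be checked by Monte Carlo. In the sketch below, the path-loss constants η are normalized to one, and the LoS phases, Rician factors, and array size are hypothetical placeholders; with phase-aligned unit-amplitude coefficients the closed form reduces to κ_NLoS M + κ_LoS M².

```python
# Monte Carlo check of the LARP expression (path-loss constants normalized to 1).
import numpy as np

rng = np.random.default_rng(0)
M, K1, K2 = 32, 10.0, 5.0
h_bar = np.exp(-2j * np.pi * rng.random(M))   # unit-modulus LoS components
f_bar = np.exp(-2j * np.pi * rng.random(M))
Phi = np.conj(h_bar) * f_bar                  # phase-aligned coefficients, A_m = 1

def larp_mc(trials=20_000):
    a1, b1 = np.sqrt(K1 / (K1 + 1)), np.sqrt(1 / (K1 + 1))
    a2, b2 = np.sqrt(K2 / (K2 + 1)), np.sqrt(1 / (K2 + 1))
    acc = 0.0
    for _ in range(trials):
        h = a1 * h_bar + b1 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        f = a2 * f_bar + b2 * (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
        acc += abs(np.vdot(f, Phi * h)) ** 2   # |f^H diag(Phi) h|^2
    return acc / trials

kappa_los = K1 * K2 / ((K1 + 1) * (K2 + 1))
kappa_nlos = (K1 + K2 + 1) / ((K1 + 1) * (K2 + 1))
print("Monte Carlo :", larp_mc())
print("closed form :", kappa_nlos * M + kappa_los * M**2)
```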
According to (<ref>), the maximum LARP increases with the Rician K-factors of the channels.
The relationship between RIS size M and the maximum LARP for different values of K_1 and K_2 is as follows.
When considering a pure LoS channel, i.e., K_1,K_2→∞, an asymptotic squared maximum LARP of O( M^2) can be achieved.
When considering a Rayleigh channel, i.e., K_1,K_2→ 0, an asymptotic linear LARP of O( M) can be achieved.
§ PERFORMANCE ANALYSIS FOR PRACTICAL SYSTEM
Since continuous phase shifts are difficult to realize due to hardware limitations, discrete phase shifts are usually employed in practical systems. In this section, we will introduce a realistic discrete phase shifting model, and discuss how the employment of realistic discrete phase shifters affects the maximum LARP of an RIS-aided system.
§.§ The realistic discrete phase shifting model
We first assume that the RIS has a phase shifting capability of ω, which means it can generate a phase shift covering the range from 0 to ω, and that the phase shift is k-bit uniformly quantized.
In other words, we control the programmable components such as varactor diodes or PIN diodes, to generate 2^k patterns of the reflection coefficients, which are denoted by:
Φ _i = A_θ _ie^ - jθ _i, i = 1,2,⋯,2^k,
where A_θ _i is the amplitude when the phase shift is θ _i.
To investigate the effect of phase shifting capability on the performance, RIS systems are divided into two categories: systems with sufficient phase shifting capability and systems with insufficient phase shifting capability.
We consider the phase shifting capability to be sufficient when it meets the quantization requirements, e.g., when it reaches at least 180° in a 1-bit quantized system or at least 270° in a 2-bit quantized system.
Otherwise, the RIS system is said to have insufficient phase shifting capability.
For systems having sufficient phase shifting capability, i.e., ω≥ (2^k - 1)/2^k· 2π, the uniform phase interval is Δθ = 2π/2^k. For systems with insufficient phase shifting capability, i.e., ω < (2^k - 1)/2^k· 2π, the uniform phase interval after quantization is Δθ = ω/(2^k - 1). θ _i is given by
θ _i ={ ω/(2^k - 1)·( i - 1 ), ω < (2^k - 1)/2^k· 2π;
2π/2^k·( i - 1), ω≥ (2^k - 1)/2^k· 2π .
.
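The quantization rule above amounts to a few lines of code; a sketch:

```python
# Discrete phase shifts theta_i for a given capability omega and k control bits.
import numpy as np

def discrete_phases(k, omega):
    n = 2 ** k
    if omega < (n - 1) / n * 2 * np.pi:          # insufficient capability
        return omega / (n - 1) * np.arange(n)
    return 2 * np.pi / n * np.arange(n)          # sufficient capability

print(np.degrees(discrete_phases(2, np.radians(200))))  # [0., 66.67, 133.33, 200.]
print(np.degrees(discrete_phases(2, np.radians(300))))  # [0., 90., 180., 270.]
```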
We will analyze the performance of an RIS-aided system employing the realistic phase shifter, with or without the assumption of ideal reflection, in the following subsections.
§.§ Analysis on the realistic discrete phase shifting model with uniform amplitude
In this subsection, we will discuss the impact of limited phase shift range on the maximum LARP, using the ideal reflection model with uniform reflective amplitude,
which means each discrete phase shift θ _i corresponds to an amplitude of A_θ_i=1. In this scenario, the problem (P0) of maximizing the received power is transformed to
(P1) : max_Φ̂ P_t 𝔼{ 𝐟 ^HΦ𝐡^2 }
s.t. ϕ̂_m = e^ - jθ̂ _m, m=1,⋯,M,
θ̂_m ∈(θ_1,⋯,θ_i,⋯,θ_2^k), m=1,⋯,M.
Under the constraints in (<ref>) and (<ref>), the LARP of the RIS-aided system is written as
Γ = κ _NLoSM + κ _LoS∑_m,m'e^-j[(ϕ _m+θ _m)-(ϕ _m'+θ _m')].
To obtain the maximum LARP in this scenario, for the m-th RIS element, we select the discrete phase shift which is closest to the optimal one θ _m^* as given in (<ref>), and denote it by θ̂_m. The phase errors resulting from discrete phase shifts are defined as
δ _m = θ _m^* - θ̂_m, m = 1,⋯,M.
The maximum LARP Γ̂_max with discrete phase shifts is given by
Γ̂_max=κ _NLoSM +κ _LoS∑_m,m'e^-j[C + δ _m - C - δ _m']
=κ _NLoSM+ κ _LoS∑_m,m'(sinδ_msinδ_m' + cosδ_mcosδ_m') .
Since ϕ _m is jointly determined by the wavelength of the incident signal, the distance between the BS and the m-th RIS element, and the distance between the m-th RIS element and the user, and since θ _m^* + ϕ _m = C, we assume that the optimal phase shift θ _m^* in the practical system is uniformly distributed in [0,2π ).
The following theorem shows the expectation of the maximum LARP 𝔼(Γ̂_max) in RIS-aided wireless communication systems in this scenario.
Assuming that all elements of RIS have the same reflection amplitude A, the closed-form expression for the expectation of the maximum LARP at the user is given by
𝔼 (Γ̂_max ) =
{ κ _NLoSM +4κ _LoSM^2
×[P_1sin b + P_2(sin a - sin b) ]^2, ω < (2^k-1)π/2^{k-1} ;
κ _NLoSM + κ _LoSM^2· (2^{2k}/π ^2) sin ^2(π/2^k), ω≥ (2^k-1)π/2^{k-1}.
.
where a = π - ω/2, b = ω/(2(2^k - 1)), P_1 = 2^k/(2π), and P_2 = 1/(2π).
See Appendix <ref>.
Theorem <ref> indicates that the maximum LARP expectation 𝔼(Γ̂_max) is determined by the size and topology of the system, the propagation environment, the number of quantization bits k and the phase shift capability ω. We will further show in the next section that the phase shift capability ω is a key factor for performance.
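The theorem can also be checked numerically. The sketch below compares the closed-form phase-alignment factor, i.e., the term multiplying κ_LoS M² above, against a Monte Carlo average in which θ^* is drawn uniformly and quantized to the circularly nearest available phase (uniform amplitude A = 1):

```python
# Closed-form vs Monte Carlo phase-alignment factor under quantization.
import numpy as np

rng = np.random.default_rng(0)

def factor_mc(k, omega, trials=400_000):
    n = 2 ** k
    step = omega / (n - 1) if omega < (n - 1) / n * 2 * np.pi else 2 * np.pi / n
    theta = rng.random(trials) * 2 * np.pi
    diff = (theta[:, None] - step * np.arange(n)[None, :] + np.pi) % (2 * np.pi) - np.pi
    delta = diff[np.arange(trials), np.abs(diff).argmin(axis=1)]   # phase errors
    return np.sin(delta).mean() ** 2 + np.cos(delta).mean() ** 2

def factor_th(k, omega):
    n = 2 ** k
    if omega >= (n - 1) / n * 2 * np.pi:
        return (n / np.pi) ** 2 * np.sin(np.pi / n) ** 2
    a, b = np.pi - omega / 2, omega / (2 * (n - 1))
    return 4 * (n / (2 * np.pi) * np.sin(b) + (np.sin(a) - np.sin(b)) / (2 * np.pi)) ** 2

for k, deg in [(1, 90), (1, 180), (2, 200), (2, 300)]:
    w = np.radians(deg)
    print(k, deg, factor_mc(k, w), factor_th(k, w))
```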
§.§ Analysis on the realistic discrete phase shifting model
In practical systems, the amplitude response of RIS reflecting elements generally depends on the phase shift value. In this section, we further consider the influence of phase shifting capability on the maximum LARP based on the non-ideal reflection model in which the reflection amplitude varies with the phase shift. In this scenario, based on the constraint of the realistic discrete phase shifting model, the problem of maximizing the LARP is written as
(P2) : max_Φ̂ P_t 𝔼{ 𝐟 ^HΦ𝐡^2 }
s.t. ϕ̂_m = A_θ̂ _me^ - jθ̂ _m, m=1,⋯,M,
θ̂ _m ∈(θ_1,⋯,θ_i⋯,θ_2^k), m=1,⋯,M.
Although the objective function of (P2) is convex in this scenario, solving (P2) is difficult due to the non-convex constraint in (<ref>). When using the non-ideal reflection model, the reflection design should strike an appropriate balance between the amplitude and phase of the reflected signal. To solve this problem, we propose a low-complexity algorithm based on vector quantization of reflection coefficients to find an approximate solution to (P2).
We begin by defining a quantization loss function ℒ to assess the difference between the quantized and desired reflection coefficients:
ℒ(θ_i,m) = 1 - A_θ_icos (θ _i - θ^*_m ),
which can be thought of as the difference between the desired reflection coefficient and the projection of the quantized reflection coefficient onto it. The optimization problem for the reflection coefficient of the m-th element is then simplified to (P3):
(P3) : max_ϕ̂_m a_NLoS A_θ̂ _m^2 + a_LoSA_θ̂_mcos(θ̂_m - θ^*_m)
s.t. θ̂ _m ∈(θ_1,⋯,θ_i,⋯,θ_2^k), m=1,⋯,M.
where a_NLoS and a_LoS denote the constants related to the channel path loss as
a_NLoS = K_1η _NLoS1+K_2η _NLoS2+η _NLoS3/(K_1 + 1)(K_2 + 1),
a_LoS =MA̅K_1K_2η _LoS/(K_1 + 1)(K_2 + 1).
Here, A̅ = ∑A_θ̂ _i^2/∑A_θ̂ _i is a constant related to the power loss caused by the reflection coefficients of the RIS, which is derived based on the observation that the optimized phase shifts are usually concentrated towards the phase shifts with larger reflection amplitudes <cit.>.
Note that (P3) can be solved by the exhaustive search method. Since in practical systems, the number of control bits k is generally not greater than 3, the complexity of this method is not exceptionally high.
Moreover, since all RIS elements can generate the same patterns of reflection coefficients, any RIS elements with identical expected phase shifts share the same optimal reflection coefficient when solving problem (P3).
Therefore, we may build a look-up table by calculating the expected phase shift range c_i for each quantized reflection coefficient. This table provides the optimized reflection coefficients for any possible value of expected phase shifts. In other words, the reflection coefficients are quantized using the look-up table, which will further reduce the computational complexity of solving (P3).
Below we will show how this table is developed. For notational simplicity, we omit the subscript m of the expected phase θ_m^*.
Firstly, substitute each quantized reflection coefficient into the objective function (<ref>) and obtain a series of equations as follows:
f_1(θ ) = a_NLoSA_θ _1^2 + a_LoSA_θ _1cos (θ _1 - θ),
⋯
f_i(θ) = a_NLoSA_θ _i^2 + a_LoSA_θ _icos (θ _i - θ),
⋯
f_2^k(θ) = a_NLoSA_θ _2^k^2 + a_LoSA_θ _2^kcos (θ _2^k - θ).
As shown in Fig. <ref>, the solution to problem (P3) when the expected phase θ^* = θ is the reflection coefficient corresponding to the envelope of the set of curves at phase shift θ.
As a result, determining the expected phase range c_i is equivalent to finding which of the equations (<ref>) maximizes the objective function, and calculating the corresponding range of θ.
In order to solve this problem, we can start by seeking the intersection of every two curves in [0,2π). For instance, for f_i(θ ) and f_i'(θ ), we have
a_NLoSA_θ _i^2 + a_LoSA_θ _icos (θ _i - θ )
= a_NLoSA_θ _i'^2 + a_LoSA_θ _i'cos (θ _i' - θ ).
The above formula can be converted into
√(C_sin^2 + C_cos^2)sin (θ±ϑ )= a_NLoS(A_θ _i'^2 - A_θ _i^2)/a_LoS,
where C_sin and C_cos are constants, which are determined by
C_sin=A_θ _isinθ _i - A_θ _i'sinθ _i',
C_cos=A_θ _icosθ _i - A_θ _i'cosθ _i'.
The value of ϑ is given by tanϑ = C_cos/C_sin, with its quadrant determined by the signs of C_sin and C_cos. Taking C_sin>0 as an example, we solve the intersection of f_i(θ) and f_i'(θ) as
θ _ii' = arcsina_NLoS(A_θ _i'^2 - A_θ _i^2)/a_LoS√(C_sin^2 + C_cos^2) - arctanC_cos/C_sin.
Then we need to determine if the following equality holds:
f_i(θ _ii') = max [f_1(θ _ii'),⋯,f_i(θ _ii'),⋯,f_2^k(θ _ii')].
If it does, we call it a valid intersection. We then compare the derivatives of the two curves at θ_ii'. If df_i(θ_ii')/dθ > df_i'(θ_ii')/dθ, the phase shift range between θ_ii' and the next valid intersection belongs to c_i, and the range between the last valid intersection and θ_ii' belongs to c_i', and vice versa. By computing all the valid intersections between the curves and comparing the derivatives of the corresponding curves at each valid intersection, the expected phase range corresponding to each quantized state is obtained.
Once the phase range c_i is obtained,
we may easily compute the quantized coefficient with the expected phase shift, which will greatly reduce the computational complexity.
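As a numerical alternative to the closed-form intersections above, the following sketch builds the look-up table on a dense grid of expected phase shifts by taking the upper envelope of the curves f_i(θ); contiguous runs of the same winning index approximate the ranges c_i. All numeric values are assumed for illustration only.

```python
import numpy as np

# Same hypothetical 2-bit states and path-loss constants as before.
amps = np.array([1.0, 0.5, 0.32, 0.71])
phases = np.deg2rad([0.0, 90.0, 180.0, 270.0])
a_nlos, a_los = 0.1, 1.0

# Dense grid over the expected phase shift theta* in [0, 2*pi); the state on
# the upper envelope of the curves f_i at each grid point approximates c_i.
grid = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
obj = (a_nlos * amps[:, None] ** 2
       + a_los * amps[:, None] * np.cos(phases[:, None] - grid[None, :]))
table = obj.argmax(axis=0)          # look-up table indexed by gridded theta*

def quantize(theta_star):
    """Return the optimized reflection coefficient for an expected phase."""
    idx = table[int(theta_star % (2.0 * np.pi) / (2.0 * np.pi) * len(grid))]
    return amps[idx] * np.exp(-1j * phases[idx])

print(quantize(np.deg2rad(75.0)))
```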
The overall procedure to solve (P2) is summarized in Algorithm <ref> which is referred to as group-based query algorithm.
Below we will analyze the performance of the proposed algorithm and derive the expectation of the maximum LARP 𝔼(Γ̂_max) at the user in RIS-aided wireless communication systems.
Assuming that each RIS element can produce 2^k discrete reflection coefficients as defined in (<ref>), the expectation of the maximum LARP 𝔼(Γ̂_max) at the user is given by
𝔼(Γ̂_max)=Mκ _NLoS∑_iμ_i/2πA_θ _i^2 + M^2κ _LoS/4π ^2∑_i,i'A_θ _iA_θ _i'
×∫_δ _m∈d_i∫_δ _m'∈d_i'(sinδ _msinδ _m' + cosδ _mcosδ _m')dδ _mdδ _m',
where d_i denotes the quantization error range associated with the i-th quantized reflection coefficient Φ_i.
See Appendix <ref>.
The proposed algorithm shows that the expected phase shift range c_i is related to the parameters of the RIS-aided system, such as the phase shifting capability ω, the reflective amplitude A_θ_i, etc.
Therefore, whenever we compute the expectation of the maximum LARP 𝔼(Γ̂_max) of different systems, we need to recalculate the c_i according to the parameters of the system.
However, c_i has a simpler solution when the number of bits is k=1 and the channels contain pure LoS paths, which means the power of the RIS-reflected signal dominates the total received power. In other words, the Rician K-factors of the channels satisfy K_1,K_2→∞. In this case, the expectation of the maximum LARP 𝔼(Γ̂_max) at the user is given in Corollary <ref>.
Assuming that each RIS element can produce 2 discrete reflection coefficients Φ _1 = A_θ_1e^ - jθ _1 and Φ _2= A_θ_2e^ - jθ _2, and that the channels contain pure LoS paths, the optimal phase shift ranges are as follows:
c_1∈[ 0,ψ _1) ∪[ ψ _2,2π),
c_2∈[ ψ _1,ψ _2),
where ψ _1 = - arctanA_θ_2cosω ' - A_θ_1/A_θ_2sinω '∈[ 0, π/2)∪( π/2, π) , ψ _2 = π - arctanA_θ_2cosω ' - A_θ_1/A_θ_2sinω '∈[ π, 3π/2)∪( 3π/2, 2π).
The closed-form expression for the expectation of the maximum LARP 𝔼(Γ̂_max) at the user is given by
𝔼(Γ̂_max) =
η _LoSM^2/π ^2[A_θ_1^2 + A_θ_2^2 - 2A_θ_1A_θ_2cosω],  if ω < π,
η _LoSM^2/π ^2[A_θ_1^2 + A_θ_2^2 + 2A_θ_1A_θ_2],  if ω ≥ π.
See Appendix <ref>.
The above corollary shows that, given the amplitudes of the quantized reflection coefficients, the LARP of a 1-bit RIS-aided system tends to decrease as the phase shifting capability ω declines. However, for a given phase shifting capability ω, the impact of the reflection amplitudes on the RIS-aided system is not straightforward; we will illustrate this through simulations in the following section.
§ SIMULATION AND EXPERIMENTAL RESULTS
In this section, we make both simulations and experimental measurements to evaluate the performance of the group-based query algorithm and validate the theoretic results presented in this work.
§.§ Simulation Results
In this subsection, we analyze the performance of an RIS-aided communication system with various users distributed randomly and uniformly on a quarter sphere of radius d_0 centered on the RIS. The simulation results in all figures are averaged over 2000 independent realizations of the different user locations. The channel parameters and RIS system parameters are chosen in accordance with the 3GPP propagation environment outlined in <cit.>. Unless otherwise notified, the simulation parameters are as follows. D_0 = 90 m is the distance between the BS and the RIS center, and d_0 = 70 m is the distance between the user and the RIS center. The number of RIS elements is M = 4096, and the sizes of RIS elements are d_h = d_v = 0.05 m. The transmit power is 20 dBm, and the noise power is -90 dBm. The path loss of LoS and NLoS is configured based on the UMa model defined in <cit.>. The carrier frequency is 2.6 GHz, and the Rician factor is K_1 = K_2 = 4.
Fig. <ref> shows the maximum LARP with continuous phase shifts versus the number of elements M. As shown in Fig. <ref>, our theoretical results are very close to the simulated ones. The figure also shows that for all three channel conditions, the maximum received power increases with the number of elements M. Furthermore, the slope of the maximum received power curve for the pure LoS channel is 20 dB per decade, indicating that the received power is proportional to the square of the number of RIS elements M. Similarly, the slope of the maximum LARP curve for the Rayleigh channel is 10 dB per decade, meaning that the received power is proportional to M. In addition, we can see that the maximum LARP increases with the Rician factors K_1 and K_2.
To quantify the performance degradation in this scenario, we define a loss factor ε:
ε=log _10𝔼(Γ̂_max)/Γ _max.
The loss factor is the ratio of the average received power based on the realistic discrete phase shifting model to the maximum received power with continuous phase shift given by (<ref>), which represents the performance degradation caused by practical phase shifters.
Fig. <ref> shows the expectation of the LARP as a function of the decrement of phase shifting capability c under the ideal reflection model.
Here, c is defined as the difference between the phase capability of the RIS and the necessary phase shifting capability of the quantization bits, i.e., c = max[0, ((2^k - 1)/2^k)·2π - ω].
In Fig. <ref>, the numbers of quantization bits are set to 1, 2, and 3, respectively.
As can be seen, more quantization bits lead to higher performance for the same phase shifting capability, which is expected because a larger number of bits reduces phase quantization error.
Furthermore, according to Fig. <ref>, for systems with different numbers of control bits, the LARP decreases significantly with the decrement of the phase shifting capability of the system, especially when the number of quantization bits is smaller.
For the 1-bit, 2-bit, and 3-bit quantized reflection coefficients, a 3 dB LARP degradation is caused by a 90°, 140°, and 175° phase capability decrement, respectively.
Besides, the theoretical results obtained according to Theorem <ref> are in good agreement with the simulation results in Fig. <ref>.
Next, by varying the decrement of phase shifting capability c, the LARP is compared in Fig. <ref> for the following two schemes of computing the discrete RIS phase shifts: (i) group-based query algorithm, and (ii) the ideal model (i.e., assuming reflection amplitude A = 1 and phase shifting capability is sufficient).
`AM [dB]’ in this figure denotes the reflected signal amplitudes, which are determined by the amplitude response and phase response of the prototypes.
The curves (1) and (2) in Fig. <ref> represent 2-bit quantized systems with states `00’, `01’, `10’ and `11’ corresponding to reflection amplitudes 0 dB, -6 dB, -10 dB, and -3 dB, respectively.
The curves (3) and (4) in Fig. <ref> represent 3-bit quantized systems, with corresponding reflection amplitudes of 0 dB, -3 dB, -6 dB, -9 dB, -10 dB, -7 dB, -3 dB, and -2 dB in the `000’, `001’, `010’, `011’, `100’, `101’, `110’, `111’ states, respectively.
This figure shows that the proposed group-based query algorithm outperforms the scheme based on the ideal model thanks to its ability to strike a balance between the reflected signal amplitude by individual elements and phase alignment over all the elements so as to achieve the maximum signal power at the receiver.
When the number of bits k increases, the performance gap between these two schemes increases, especially when the phase shifting capability is relatively small.
In Fig. <ref>, we evaluate the differences of the maximum LARP between two 2-bit quantized RIS-aided systems in which the RIS elements have different amplitudes of the reflected signal. The reflection coefficients of the RISs are determined by the proposed group-based query algorithm.
The `00’, `01’, `10’, `11’ states in curve (1) correspond to the reflection amplitudes 0 dB, -5 dB, -6 dB, and -2 dB, respectively. In curve (2) the states `00’, `01’, `10’, `11’ correspond to reflection amplitudes 0 dB, -6 dB, -10 dB, and -3 dB, respectively.
It can be seen from this figure that the derived theoretical expressions match well with the simulation results, which validates the proposed theorem.
Besides, from the comparison of the curves (1) and (2), we can see that when the RIS system has sufficient phase shifting capability or a comparatively large phase shifting capability, the lower reflection amplitudes lead to a lower LARP, which is expected because lower reflection signal amplitudes mean a lower reflection signal power.
However, when the phase shifting capability of the RIS system falls below a certain threshold, lower reflection amplitudes instead lead to a greater LARP.
This is because, when the phase shifting capability of the system is reduced to a certain level, the quantization error δ exceeds 90° for some desired phases, meaning that the reflected signals of the corresponding RIS elements have a negative impact on the LARP; in this case, lower reflection amplitudes reduce this negative impact.
§.§ Experimental Measurements
In this subsection, the experimental results validate the effect of realistic discrete phase shifters on the performance of RIS-aided communication systems.
We established a measurement system and employed two different RISs with non-ideal reflection coefficients. Fig. <ref> illustrates the measurement system, which includes RIS, an RF vector signal generator (Tektronix TSG4106A), a Tx horn antenna, an RF signal spectrum analyzer (Rohde & Schwarz ZNB 8), cables, and blockages (electromagnetic absorbers). The RIS, Tx horn antenna, and Rx horn antenna are horizontally polarized and well matched in the experimental measurement. As shown in Fig. <ref> (a), the transmitting and receiving antennas are positioned on a semicircle with RIS at the center and a radius of d = 2.5 m. The transmitting and receiving horn antennas are aligned with the center of the RIS. The RF vector signal generator provides the RF signal to the Tx horn antenna. The signal reflected by the RIS propagates over the distance d and is received by the Rx horn antenna and the RF signal spectrum analyzer, which gives the measured received signal power.
Fig. <ref> shows the RISs used in different scenarios. The RIS in Fig. <ref> (a) operates at 5.8 GHz with element sizes of d_h = 14.3 mm, d_v = 10.27 mm, and the number of elements M = 1100. More details of this RIS can be found in our previous work <cit.>. The RIS in Fig. <ref> (b) operates at 2.6 GHz, and has M = 256 elements with the element sizes d_h = 45 mm and d_v = 45 mm. Both RISs are 1-bit regulated by varactor diodes. Since altering the bias voltage changes the impedance of the varactor diode as well as the loss induced by the dielectric substrate, metal plate, etc., the reflection phase and the amplitude of the reflected signal vary jointly and cannot be controlled independently.
We measured the phase and amplitude differences of the two states of 5.8 GHz RIS at different incident angles at 3 V and 7 V bias voltages, representing the state `0’ and state `1’, respectively.
As shown in Table <ref>, the reflection coefficient of RIS is sensitive to incident angle, implying that an RIS system with sufficient phase shift capability at a specific incident angle may become less than satisfactory when the incident angle is altered.
Then, using the 5.8 GHz RIS, we conduct experiments to investigate the impact of incident angle changes on the received power and compare the measured results to those calculated according to Proposition <ref>.
In this scenario, to evaluate the system performance degradation, the system performance at 10° incidence serves as the baseline.
We move the transmitting antenna to various angles, select 12 random positions on the circular arc R illustrated in Fig. <ref> (a), measure the power after RIS beamforming at each of these positions, and average the results.
As shown in Table <ref>, the system with sufficient phase shifting capability at 10° incidence has a diminishing trend in beamforming capability as the incident angle increases.
As shown in Fig. <ref>, the system performance decreases as the angle of incidence increases.
Besides, the measured curve follows the same trend as the theoretical curve, with the biggest difference being only about 0.3 dB. This discrepancy may be due to environmental factors.
Furthermore, based on the 2.6 GHz RIS prototype, we emulate an RIS system that lacks sufficient phase shifting capability due to the element design by varying the bias voltages corresponding to state `0’ and state `1’. We then evaluate the performance of the RIS system under different phase shifting capabilities and compare it with the theoretical results.
We fixed the incident angle at 10° and measured the relationship between the 2.6 GHz RIS control voltage and the phase shift and amplitude of each RIS element; the results are presented in Fig. <ref>. The bias voltage is then manually adjusted to vary the phase difference between the state `0’ and the state `1’ of the RIS.
Six sets of bias voltages based on the measured results in Fig. <ref> are chosen to make the phase difference 180°, 150°, 120°, 90°, 60°, and 30°, respectively.
In this scenario, the received power of the RIS at the set of bias voltages producing a 180° phase difference serves as the baseline for performance comparison.
For each pair of bias voltages, 12 positions are chosen at random on the circular arc R shown in Fig. <ref> (a). The received power after RIS beamforming is measured at these positions and the results are averaged to obtain the measurement results as shown in Fig. <ref>.
We observe that the beamforming capability of the RIS diminishes as the phase shifting capability of the system decreases. Hence, sufficient phase shift capability must be ensured at the RIS element design stage. The measured curve has the same trend as the theoretical curve, and the biggest difference is only about 0.3 dB. This discrepancy in results, like the previous experiment, could be attributed to environmental influences.
§ CONCLUSION
In this paper, we proposed a realistic reflection coefficient model for RIS-aided wireless communication systems, which takes into account the discreteness of the phase shift, the attenuation of the reflected signal, and the limited phase shifting capability.
The maximum received power of the user based on this model was derived.
We then proposed a group-based query algorithm to maximize the received power for the RIS-aided system with the realistic reflection coefficient model.
We analyzed the asymptotic performance of the proposed algorithm and derived the closed-form expression for the maximum long-term average received power.
Finally, by conducting both simulations and corresponding experiments with the fabricated RIS prototype systems, we verified the proposed theoretical results.
Both the simulation and measurement results match the analytical results well.
§ PROOF OF PROPOSITION 1
By applying (<ref>) and (<ref>) in |𝐟^HΦ𝐡|^2, Eq. (<ref>) at the bottom of this page holds.
Therefore, 𝔼{|𝐟^HΦ𝐡|^2} is given by
𝔼{|𝐟^HΦ𝐡|^2} = 1/( K_1 + 1)( K_2 + 1)
×( ∑_i = 1^4 𝔼{|x_i|^2} + ∑_i = 1,j = 1,i≠j^4 𝔼{x_i^Hx_j}).
Since the NLoS components are independent of each other and have zero means, any cross-correlation between the channel terms is zero. We observe that
𝔼{x_i^Hx_j} = 0, i,j = 1,2,3,4, i≠j.
For the LoS channel, by applying (<ref>) and (<ref>) in 𝔼{|x_1|^2}, we may derive
𝔼{|x_1|^2} = K_1K_2( 𝐟̅ ^HΦ𝐡̅)^H( 𝐟̅ ^HΦ𝐡̅)
= K_1K_2√(D_1^-αd_1^-α)G_a
×( ∑_m,m'A_mA_m'e^ - j[ϕ _m - ϕ _m' + θ _m - θ _m']).
According to the random matrix theory, it is easy to obtain
𝔼{Φ ^HΦ} = diag( A_1^2,⋯,A_m^2,⋯,A_M^2),
𝔼{𝐟̃𝐟̃^𝐇} = M,
𝔼{𝐡̃^H𝐡̃} = 1.
Thus, we may derive
𝔼{|x_2|^2} =K_1𝔼{𝐡̅^HΦ^H 𝐟̃𝐟̃^HΦ𝐡̅}
= K_1√(G_aD_1^-α)L( d_1 )∑_m A_m^2.
Similarly,
𝔼{|x_3|^2} = K_2𝔼{𝐡̃^HΦ ^H𝐟̅𝐟̅^HΦ𝐡̃}
= K_2√(G_ad_1^-α)L( D_1 )∑_m A_m^2,
𝔼{|x_4|^2} = 𝔼{𝐡̃^HΦ ^H𝐟̃𝐟̃^HΦ𝐡̃}
=L( D_1 )L( d_1 )∑_m A_m^2.
Then, by applying (<ref>), (<ref>)-(<ref>) to (<ref>), we obtain:
Γ = κ _NLoS∑_m A_m^2 + κ _LoS∑_m,m'A_mA_m'e^ - j[ϕ _m - ϕ _m' + θ _m - θ _m'].
This ends the proof.□
§ PROOF OF THEOREM 1
We first define a function φ(δ _m,δ _m') as
φ(δ _m,δ _m') = sinδ _msinδ _m' + cosδ _mcosδ _m'.
Then, according to the expression of the LARP shown in (<ref>), we obtain the expectation of the maximum LARP 𝔼( Γ̂_max):
𝔼( Γ̂_max) = κ _NLoSM+ κ _LoSM^2𝔼[ φ(δ _m,δ _m')].
Since K_1, K_2, η _NLoS1, η _NLoS2, η _NLoS3, η _LoS and M are constant, we will focus on 𝔼[ φ(δ _m,δ _m')] in the following.
Because the expected phase shift θ _m^* is uniformly distributed in [0,2π ) and the discrete phase shift closest to the expected one is chosen to achieve the maximum LARP, the quantization error δ is, depending on the phase shifting capability ω, either uniformly distributed on [ - π/2^k,π/2^k), or uniformly distributed over each of the three contiguous subintervals [ ω/2 - π, -ω/(2( 2^k - 1))), [ -ω/(2( 2^k - 1)), ω/(2( 2^k - 1))) and [ ω/(2( 2^k - 1)), π - ω/2). When ω < ((2^k - 1)/2^k)·2π, the probability density function (PDF) of δ is obtained as
f_δ( δ) =
2^k/2π,  δ ∈ [ -ω/(2( 2^k - 1)), ω/(2( 2^k - 1))),
1/2π,  δ ∈ [ ω/2 - π, -ω/(2( 2^k - 1))) ∪ [ ω/(2( 2^k - 1)), π - ω/2),
0,  otherwise.
Otherwise, when ω ≥ ((2^k - 1)/2^k)·2π, the PDF of δ is obtained as
f_δ( δ) =
2^k/2π,  δ ∈ [ - π/2^k,π/2^k),
0,  otherwise.
𝔼[ φ(δ _m,δ _m')] is expressed as (<ref>) in the RIS-aided system with phase shifting capability ω in terms of the PDF of δ, which is shown at the bottom of this page.
Applying (<ref>) to (<ref>) and through some basic algebraic manipulations, we derive that
𝔼[ φ(δ _m,δ _m')] =
2^2k/π ^2 sin ^2(π/2^k),  if ω ≥ ((2^k - 1)/2^k)·2π,
4[P_1sin b + P_2(sin a - sin b) ]^2,  if ω < ((2^k - 1)/2^k)·2π.
By substituting (<ref>) into (<ref>), 𝔼( Γ̂_max) is obtained as (<ref>). This ends the proof.□
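As a quick numerical sanity check of the quantization-error distribution used in this proof, the following Monte-Carlo sketch (with assumed values of k and ω) draws uniform expected phase shifts, quantizes each to the nearest available phase state, and compares the histogram of δ against the inner plateau value 2^k/2π predicted by the PDF above.

```python
import numpy as np

k, omega = 2, np.deg2rad(220.0)                 # assumed bits and phase capability
states = np.arange(2**k) * omega / (2**k - 1)   # evenly spaced phases in [0, omega]

rng = np.random.default_rng(0)
theta_star = rng.uniform(0.0, 2.0 * np.pi, 200_000)

# Circular quantization error to the closest available phase state.
diff = states[None, :] - theta_star[:, None]
diff = (diff + np.pi) % (2.0 * np.pi) - np.pi   # wrap into (-pi, pi]
delta = diff[np.arange(len(theta_star)), np.abs(diff).argmin(axis=1)]

hist, edges = np.histogram(delta, bins=100, density=True)
# Inner plateau should sit near 2^k/(2*pi); outer tails near 1/(2*pi).
print(hist.max(), 2**k / (2.0 * np.pi))
```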
§ PROOF OF THEOREM 2
According to the expression of LARP shown in (<ref>), the maximum LARP expectation 𝔼(Γ̂_max) in this scenario is given by
𝔼(Γ̂_max)
= κ _NLoSM𝔼[ A_m^2]+ κ _LoSM^2𝔼[ ϱ(δ _m,δ _m')],
where
ϱ(δ _m,δ _m') = A_mA_m'( sinδ _msinδ _m' + cosδ _mcosδ _m').
As K_1, K_2, η _NLoS1, η _NLoS2, η _NLoS3, η _LoS and M are constant, the key to derive 𝔼(Γ̂_max) is to derive 𝔼[ A_m^2] and 𝔼[ ϱ(δ _m,δ _m')].
Since the expected phase shift θ^* is uniformly distributed in [0,2π ), its probability density over [0,2π ) is always 1/2π.
Therefore, the probability of using the i-th quantized reflection coefficient is obtained as
P_i = μ _i/2π,
where μ _i denotes the length of c_i which is the optimal phase shift range corresponding to each quantized reflection coefficient Φ_i and is obtained from the group-based query algorithm.
Thus, 𝔼[ A_m^2] is obtained as
𝔼[ A_m^2]=∑_iP_iA_θ _i^2 = ∑_iμ_i/2πA_θ _i^2.
For each c_i, the corresponding quantization error d_i is denoted by
d_i = c_i - θ _i.
Since the probability density of θ^* in [0,2π ) is always 1/2π, the joint probability density of δ _m and δ _m' is 1/4π^2 when (δ_m ∈ d_i) ∩ (δ_m'∈ d_i'), i,i'=1,⋯,2^k.
Thus, 𝔼[ ϱ(δ _m,δ _m')] is obtained as
𝔼[ ϱ( δ _m,δ _m')]
= ∑_i,i'∫_δ _m∈d_i∫_δ _m'∈d_i'1/4π ^2ϱ( δ _m,δ _m')dδ _mdδ _m'.
By substituting (<ref>) and (<ref>) into (<ref>), the LARP expectation 𝔼( Γ̂_max) is obtained as (<ref>). This ends the proof.□
§ PROOF OF COROLLARY 1
We first define ω ' = min( ω ,π). When the RIS is 1-bit coded and the channels are pure LoS paths, the optimization problem (P3) of the reflection coefficient of the m-th element is simplified to:
ϕ̂^*_m = arg max[ A_θ_1cos ( - θ ^*_m),A_θ_2cos (ω ' - θ ^*_m)],
which can be rewritten as
ϕ̂^*_m =
Φ _1,  if A_θ_1cos ( - θ ^*_m) - A_θ_2cos (ω ' - θ ^*_m) > 0,
Φ _2,  if A_θ_1cos ( - θ ^*_m) - A_θ_2cos (ω ' - θ ^*_m) < 0.
To solve the optimal phase shift range c_i, we define a function as
ζ(θ) = A_θ_1cos ( - θ) - A_θ_2cos (ω ' - θ).
Eq. (<ref>) can be converted to
ζ(θ)= -√(A_θ_2^2sin^2ω ' + ( A_θ_2cosω ' - A_θ_1)^2)
×sin( θ + arctanA_θ_2cosω ' - A_θ_1/A_θ_2sinω ').
Since (<ref>) is a sinusoid in θ, we rewrite (<ref>) as
ϕ̂^*_m =
A_θ_1e^ - jω '· 0,  if θ ^*_m ∈ [ 0,ψ _1) ∪ [ ψ _2,2π),
A_θ_2e^ - jω '· 1,  if θ ^*_m ∈ [ ψ _1,ψ _2),
where ψ _1 and ψ _2 are denoted by
ψ _1 = - arctanA_θ_2cosω ' - A_θ_1/A_θ_2sinω '∈[ 0, π/2)∪( π/2, π),
ψ _2 = π - arctanA_θ_2cosω ' - A_θ_1/A_θ_2sinω '∈[ π, 3π/2)∪( 3π/2, 2π).
Applying c_1 ∈[ 0,ψ _1) ∪[ ψ _2,2π) and c_2 ∈[ ψ _1,ψ _2) in (<ref>), we derive that
𝔼[ ϱ( δ _m,δ _m')] = A_θ _1^2 + A_θ _2^2 - 2A_θ _1A_θ _2cosω '/π ^2.
By substituting (<ref>) into (<ref>), 𝔼( Γ̂_max) is obtained as (<ref>). This ends the proof.□
Provably Efficient UCB-type Algorithms For Learning Predictive State Representations
Ruiquan Huang, Yingbin Liang, Jing Yang
====================================================================================
The general sequential decision-making problem, which includes Markov decision processes (MDPs) and partially observable MDPs (POMDPs) as special cases, aims at maximizing a cumulative reward by making a sequence of decisions based on a history of observations and actions over time. Recent studies have shown that the sequential decision-making problem is statistically learnable if it admits a low-rank structure modeled by predictive state representations (PSRs). Despite these advancements, existing approaches typically involve oracles or steps that are not computationally efficient. On the other hand, the upper confidence bound (UCB) based approaches, which have served successfully as computationally efficient methods in bandits and MDPs, have not been investigated for more general PSRs, due to the difficulty of optimistic bonus design in these more challenging settings. This paper proposes the first known UCB-type approach for PSRs, featuring a novel bonus term that upper bounds the total variation distance between the estimated and true models. We further characterize the sample complexity bounds for our designed UCB-type algorithms for both online and offline PSRs. In contrast to existing approaches for PSRs, our UCB-type algorithms enjoy computational efficiency, last-iterate guaranteed near-optimal policy, and guaranteed model accuracy.
§ INTRODUCTION
As a general framework of reinforcement learning (RL), the sequential decision-making problem aims at maximizing a cumulative reward by making a sequence of decisions based on a history of observations and actions over time.
This framework is powerful in that it includes and generalizes Markov Decision Processes (MDPs) and Partially Observable Markov Decision Processes (POMDPs), capturing a wide range of real-world applications such as recommender systems <cit.>, business management <cit.>, economic simulation <cit.>, robotics <cit.>, strategic games <cit.>, and medical diagnostic systems <cit.>.
However, tackling POMDPs alone presents significant challenges, not to mention general sequential decision-making problems. Many hardness results have been developed <cit.>, showing that learning POMDPs is already computationally and statistically intractable in the worst case. The reason is that the non-Markovian property of these problems implies that the sufficient statistics or belief about the current environmental state encompasses all observations and actions from past interactions with the environment. This dramatically increases the computational burden and statistical complexity, since even for finite observation and action spaces, the number of possible beliefs about the environment state is exponentially large in the number of observations and actions.
In order to tackle these challenges, recent research has introduced various structural conditions for POMDPs and general sequential decision-making problems, such as reactiveness <cit.>, decodability <cit.>, revealing conditions <cit.>, hindsight observability <cit.>, and low-rank representations with regularization conditions <cit.>. These conditions have opened up new possibilities for achieving polynomial sample complexities in general sequential decision-making problems. Among them, predictive state representations (PSRs) <cit.> has been proved to capture and generalize a rich subclass of sequential decision-making problems such as MDPs and observable POMDPs.
Yet, most existing solutions for PSRs involve oracles that might not be computationally efficient. For instance, Optimistic MLE (OMLE) <cit.> involves a step that maximizes the optimal value function over a confidence set of models. Typically, such a confidence set does not exhibit advantageous structures, resulting in a potentially combinatorial search within the set. Another popular posterior sampling based approach <cit.> requires to maintain a distribution over the entire set of models, which is highly memory inefficient. In addition, most existing results lack a last-iterate guarantee and produce only mixture policies, which often exhibit a very large variance in practical applications. On the other hand, the upper confidence bound (UCB) based approach has been proved to be computationally efficient and provide last-iterate guarantee in many decision-making problems such as bandits <cit.> and MDPs <cit.>. However, due to the non-Markovian property of POMDPs and general sequential decision-making problems, designing an explicit UCB is extremely challenging. To the best of our knowledge, such a design has never been explored in POMDPs and beyond. Thus, we are motivated to address the following important open question:
Q1: Can we design a UCB-type algorithm for learning PSRs that (a) is both computationally and statistically efficient, and (b) enjoys the last-iterate guarantee?
Another important research direction in RL is offline learning <cit.>, where the learning agent has access to a pre-collected dataset and aims to design a favorable policy without any interaction with the environment. While offline MDPs have been extensively studied <cit.>, there exist very limited studies of offline POMDPs from the theoretical perspective <cit.>. To our best knowledge, offline learning for a more general model of PSRs has never been explored. Thus, we will further address the following research question:
Q2: Can we design a UCB-type algorithm for offline learning of PSRs with guaranteed policy performance and sample efficiency?
Main Contributions:
We provide affirmative answers to both aforementioned questions by making the following contributions.
* We introduce the first known UCB-type approach to learning PSRs with only a regularization assumption, characterized by a novel bonus term that upper bounds the total variation distance between the estimated and true models. The bonus term is designed based on a new confidence bound induced by a new model estimation guarantee for PSRs, and is computationally efficient.
* We theoretically characterize the performance of our UCB-type algorithm for learning PSRs online, called PSR-UCB. In contrast to existing approaches, PSR-UCB is computationally efficient, guarantees a near-optimal policy in the last iteration, and ensures model accuracy. When the rank of the PSR is small, our sample complexity matches the best known upper bound in terms of the rank and the accuracy level.
* We further extend our UCB-type approach to the offline setting,
and propose the PSR-LCB algorithm. We then develop an upper bound on the performance difference between the output policy of PSR-LCB and any policy covered by the behavior policy. The performance difference scales in O(C_∞/√(K)), where C_∞ is the coverage coefficient and K is the size of the offline dataset. This is the first known sample complexity result on offline PSRs.
* Technically, we develop two key properties for PSRs to establish the sample complexity guarantees: (a) a new estimation guarantee on the distribution of future observations conditioned on empirical samples, enabled by the constrained model estimation step, and (b) a new relationship between the empirical UCB and the ground-truth UCB. We believe these insights advance the current understanding of PSRs, and will benefit future studies on this topic.
§ RELATED WORK
Learning MDPs and POMDPs.
The MDP is a basic model in RL that assumes Markovian property in the model dynamics, i.e. the distribution of the future states only depends on the current system state. Researchers show that learning tabular MDPs (with finite state and action spaces) is both computationally and statistically efficient in both online setting <cit.> and offline setting <cit.>. Learning MDPs with function approximations is also well-studied by establishing favorable statistical complexity and computation efficiency <cit.>. Notably, several algorithms designed for learning MDPs with general function approximations can be extended to solve a subclass of POMDPs. In particular, OLIVE <cit.> and GOLF <cit.>, which are originally designed for MDPs with low Bellman rank and low Bellman-Eluder dimension, respectively, can efficiently learn reactive POMDPs, where the optimal policy only depends on the current observation. Besides, <cit.> study decodable RL where the observations determine the underlying states, and <cit.> investigate latent MDPs where there are multiple MDPs determined by some latent variables.
To directly address the partial observability in POMDPs, some works assume exploratory data or reachability property and provide polynomial sample complexity for learning these POMDPs <cit.>. Others tackle the challenge of exploration and exploitation tradeoff in POMDPs by considering various sub-classes of POMDPs such as low-rank POMDPs <cit.>, observable POMDPs <cit.>, hindsight observability <cit.>, and weakly-revealing POMDPs <cit.>. Furthermore,
<cit.> propose computationally efficient algorithms for POMDPs with deterministic latent transitions.
Notably, <cit.> propose a provably efficient algorithm for learning observable tabular POMDPs without computationally intractable oracles. Finally, <cit.> study the offline POMDPs in the presence of confounders, and <cit.> provide provably efficient algorithm for offline linear POMDPs.
Learning PSRs and general sequential decision-making problems.
The PSR was first introduced by <cit.> and is considered a general representation for modeling dynamical systems. A line of research <cit.> obtains polynomial sample complexity under observability assumptions using spectral techniques. Later, <cit.> demonstrate that learning regular PSRs is sample efficient and can avoid poly(|𝒪|^m) in the sample complexity. <cit.> propose a PO-bilinear class that captures a rich class of tractable RL problems with partial observations, including weakly revealing POMDPs and PSRs, and design an actor-critic style algorithm.
For the works most closely related to ours, <cit.> propose a universal algorithm known as OMLE, which is capable of learning PSRs and its generalizations under certain conditions; <cit.> enhance the sample complexity upper bounds for three distinct algorithms, including OMLE, the model-based posterior sampling, and the estimation-to-decision type algorithm <cit.>; <cit.> address the general sequential decision-making problem by posterior sampling under a newly proposed low generalized Eluder coefficient. However, as elaborated in <Ref>, those approaches are not efficient in terms of computational complexity or memory.
§ PRELIMINARIES
§.§ Problem Setting
We consider a finite horizon episodic sequential decision-making problem, defined by a tuple 𝙿 = (𝒪, 𝒜, H, ℙ, R), where 𝒪 represents the observation space, 𝒜 is a finite action space, H is the number of time steps within an episode, ℙ = {ℙ_h} determines the model dynamics, i.e., ℙ_h(o_h|o_1,…,o_h-1,a_1,…,a_h-1), where o_t∈𝒪 is the observation at time step t, and a_t∈𝒜 is the action taken by the agent at time step t for all t∈{1,…,h}, and R: (𝒪×𝒜)^H → [0,1] is the reward function defined on trajectories of one episode. We denote a historical trajectory at time step h as τ_h:=(o_1,a_1,…,o_h,a_h), and denote a future trajectory as ω_h:=(o_h+1,a_h+1,…, o_H,a_H). The set of all τ_h is denoted by ℋ_h = (𝒪×𝒜)^h and the set of all future trajectories is denoted by Ω_h = (𝒪×𝒜)^H-h.
In addition, let ω_h^o = (o_h+1,…,o_H) and ω_h^a = (a_h+1,…,a_H) be the observation sequence and the action sequence contained in ω_h, respectively. Similarly, for a history τ_h, we denote τ_h^o and τ_h^a as the observation and action sequences in τ_h, respectively. Notably, the general framework of sequential decision-making problems subsumes not only fully observable MDPs but also POMDPs as special cases, because in MDPs, ℙ_h(o_h|τ_h-1) = ℙ_h(o_h|o_h-1,a_h-1), and in POMDPs, ℙ_h(o_h|τ_h-1) can be factorized as ℙ_h(o_h|τ_h-1) = ∑_s 𝕆_h(o_h|s)ℙ_h(s|τ_h-1), where s represents the unobserved state and 𝕆_h is the emission distribution.
The interaction between an agent and 𝙿 proceeds as follows. At the beginning of each episode, the environment initializes a fixed observation o_1 at time step 1. After observing o_1, the agent takes action a_1, and the environment transits to o_2, which is sampled according to the distribution _1(o_2|o_1,a_1). Then, at any time step h≥ 2, due to the non-Markovian nature of the problem, the agent takes action a_h based on all past information (τ_h-1,o_h), and the environment transits to o_h+1, sampled from _h(o_h+1|τ_h). The interaction terminates after time step H.
The policy π={π_h} of the agent is a collection of H distributions where π_h(a_h|τ_h-1,o_h) is the probability of choosing action a_h at time step h given the history τ_h-1 and the current observation o_h. For simplicity, we use π(τ_h) = π(a_h|o_h,τ_h-1)⋯π(a_1|o_1)
to denote the probability of the sequence of actions τ_h^a given the observations τ_h^o. We denote ℙ^π as the distribution of trajectories induced by policy π under dynamics ℙ. The value
of a policy π under ℙ and the reward R is denoted by V_ℙ,R^π = 𝔼_τ_H∼ℙ^π[R(τ_H)].
The goal of the agent is to find an ϵ-optimal policy π̂ that satisfies max_πV_ℙ,R^π - V_ℙ,R^π̂≤ϵ.
Since finding a near-optimal policy for a general decision-making problem incurs exponentially large sample complexity in the worst case, in this paper we follow the line of research in <cit.> and focus on the low-rank class of problems. To define a low-rank problem, we introduce the dynamic matrix 𝔻_h ∈ℝ^|ℋ_h|× |Ω_h| for each h, where the entry at the τ_h-th row and ω_h-th column of 𝔻_h is ℙ(ω_h^o, τ_h^o | τ_h^a, ω_h^a).
A sequential decision-making problem is rank r if for any h, the model dynamic matrix 𝔻_h has rank r.
Predictive State Representation (PSR). To exploit the low-rank structure, we assume that for each h, there exists a set of future trajectories, namely, core tests (known to the agent) 𝒬_h = {𝐪_h^1,…, 𝐪_h^d_h}⊂Ω_h,
such that the submatrix restricted to these tests 𝔻_h[𝒬_h] has rank r, where d_h≥ r is a positive integer. This special set 𝒬_h allows the system dynamics to be factorized as ℙ(ω_h^o,τ_h^o | τ_h^a,ω_h^a) = 𝐦(ω_h)^⊤ψ(τ_h), where 𝐦(ω_h), ψ(τ_h)∈ℝ^d_h and the ℓ-th coordinate of ψ(τ_h) is the joint probability of τ_h and the ℓ-th core test 𝐪_h^ℓ. Mathematically, if we use 𝐨_h^ℓ and 𝐚_h^ℓ to denote the observation sequence and the action sequence of 𝐪_h^ℓ, respectively, then ℙ(𝐨_h^ℓ, τ_h^o | τ_h^a, 𝐚_h^ℓ) = [ψ(τ_h)]_ℓ. By Theorem C.1 in <cit.>, any low-rank decision-making problem admits a (self-consistent) predictive state representation θ = {ϕ_h,𝐌_h}_h=1^H given core tests {𝒬_h}_h=0^H-1, such that for any τ_h∈ℋ_h, ω_h∈Ω_h,
ψ(τ_h) = 𝐌_h(o_h,a_h)⋯𝐌_1(o_1,a_1)ψ_0, 𝐦(ω_h)^⊤ = ϕ_H^⊤𝐌_H(o_H,a_H)⋯𝐌_h+1(o_h+1,a_h+1),
∑_o_h+1ϕ_h+1^⊤𝐌_h+1(o_h+1,a_h+1) = ϕ_h^⊤, ℙ(o_h,…,o_1 | a_1,…, a_h) = ϕ_h^⊤ψ(τ_h),
where 𝐌_h: 𝒪×𝒜→ℝ^d_h× d_h-1, ϕ_h∈ℝ^d_h, and ψ_0∈ℝ^d_0. For ease of the presentation, we assume ψ_0 is known to the agent[The sample complexity of learning ψ_0 if it is unknown is relatively small compared with learning other parameters.]. Notably, if we employ a bar above the vector ψ(τ_h) to denote its normalization by ϕ_h^⊤ψ(τ_h), i.e., ψ̅(τ_h) = ψ(τ_h)/(ϕ_h^⊤ψ(τ_h)), this is known as the prediction vector <cit.> or prediction feature of τ_h, since [ψ̅(τ_h)]_ℓ = ℙ(𝐨_h^ℓ | τ_h, 𝐚_h^ℓ). As illustrated in <Ref>, the prediction feature plays an important role in our algorithm design.
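For intuition, the following toy Python sketch computes ψ(τ_h) by chaining the operators 𝐌_h and then normalizes it into the prediction feature ψ̅(τ_h). The dimensions are assumed square and all parameters are randomly generated placeholders, not a learned PSR.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_obs, n_act, H = 3, 4, 2, 5            # toy dimensions (assumed)

# Hypothetical PSR parameters: one operator M_h(o, a) per step, plus phi_h, psi_0.
M = rng.uniform(0.0, 0.3, size=(H, n_obs, n_act, d, d))
phi = rng.uniform(size=(H + 1, d))
psi0 = rng.uniform(size=d)

def predictive_state(history):
    """psi(tau_h) = M_h(o_h,a_h) ... M_1(o_1,a_1) psi_0 for a history of (o, a) pairs."""
    psi = psi0
    for h, (o, a) in enumerate(history):
        psi = M[h, o, a] @ psi
    return psi

history = [(0, 1), (2, 0)]
psi = predictive_state(history)
pred_feature = psi / (phi[len(history)] @ psi)  # normalized prediction feature
print(pred_feature)
```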
In the following context, we use ℙ_θ to indicate the model determined by the representation θ = {ϕ_h,𝐌_h}_h=1^H. For simplicity, we denote V_ℙ_θ, R^π = V_θ,R^π. Moreover, let 𝒬_h^A = {𝐚_h^ℓ}_ℓ=1^d_h be the set of action sequences that are part of core tests, constructed by eliminating any repeated action sequences. 𝒬_h^A, known as the set of core action sequences, plays a crucial role during the exploration process. Selecting from these sequences is sufficient to sample core tests, leading to an accurate estimate of θ.
We further assume that the PSRs studied in this paper are well-conditioned, as specified in <Ref>. Such an assumption and its variants are commonly adopted in the study of PSRs <cit.>.
[γ-well-conditioned PSR]
A PSR θ is said to be γ-well-conditioned if
∀ h, max_x∈ℝ^d_h:‖x‖_1≤ 1max_π∑_ω_hπ(ω_h)|𝐦(ω_h)^⊤x|≤1/γ.
<Ref> requires that the error of estimating θ does not significantly blow up when the estimation error x of estimating the probability of core tests is small.
§.§ Notations
We denote the complete set of model parameters as Θ and the true model parameter as θ^*. For a vector x, ‖x‖_A stands for √(x^⊤Ax), and the i-th coordinate of x is represented as [x]_i. For measures ℙ and ℚ (not necessarily probability measures) over a set 𝒳, the total variation distance between these two measures is 𝙳_𝚃𝚅(ℙ(x),ℚ(x)) = ∑_x|ℙ(x)-ℚ(x)|, while the Hellinger-squared distance is defined as 𝙳_𝙷^2(ℙ(x),ℚ(x)) = 1/2∑_x (√(ℙ(x)) - √(ℚ(x)))^2. Note that our definition of the total variation distance is slightly different from convention by a constant factor. We define d = max_h d_h and Q_A = max_h|𝒬_h^A|.
We use ν_h(π,π') to denote the policy that takes π at the initial h-1 steps and switches to π' from the h-th step. Lastly, 𝚞_𝒳 represents the uniform distribution over the set 𝒳.
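For concreteness, here is a minimal sketch of the two distances in the convention used in this paper; note that the total variation distance is ∑_x|ℙ(x)-ℚ(x)|, without the conventional 1/2 factor.

```python
import numpy as np

def d_tv(p, q):
    """Total variation distance as defined in the paper: sum_x |p(x) - q(x)|
    (twice the conventional TV distance)."""
    return float(np.abs(p - q).sum())

def d_h2(p, q):
    """Squared Hellinger distance: 0.5 * sum_x (sqrt(p) - sqrt(q))^2."""
    return float(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum())

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])
print(d_tv(p, q), d_h2(p, q))
```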
§ ONLINE LEARNING FOR PREDIVTIVE STATE REPRESENTATIONS
In this section, we propose a model-based algorithm PSR-UCB,
which features three main novel designs: (a) a constrained model estimation step to control the quality of the estimated model such that its prediction features of the empirical data are useful in the design of UCB,
(b) a novel design of an upper confidence bound that captures the uncertainty of the estimated model,
and (c) a termination condition that guarantees the last-iterate model is near-accurate and the corresponding greedy policy is near-optimal.
§.§ Algorithm
The pseudo-code for the PSR-UCB algorithm is presented in <Ref>. We highlight the key idea of PSR-UCB: in contrast to the design of UCB-type algorithms in MDPs that leverage low-rank structures, we exploit the physical meaning of the prediction feature ψ̅(τ_h) to design bonus terms that enable efficient exploration. In particular, since each coordinate of a prediction feature of τ_h represents the probability of visiting a core test conditioned on τ_h and taking the corresponding core action sequence, it is sufficient to explore a set of τ_h whose prediction features can span the entire feature space, and use core action sequences to learn those “base” features.
We next elaborate the main steps of the algorithm in greater detail as follows.
Exploration.
At each iteration k, PSR-UCB constructs a greedy policy π^k-1 based on a previous dataset 𝒟^k-1 = {𝒟_h^k-1}_h=0^H-1, together with an estimated model θ̂^k-1. Intuitively, to enable efficient exploration, π^k-1 is expected to sample τ_h that “differs” the most from previously collected samples τ_h∈𝒟_h^k-1. How to quantify such differences forms the core foundation of our algorithm design and will be elaborated later.
Then, for each h∈[H], PSR-UCB first uses π^k-1 to get a sample τ_h-1^k,h, then follows the policy 𝚞_𝒬_h-1^exp to get ω_h-1^k,h, where 𝒬_h-1^exp = (𝒜×𝒬_h^A) ∪𝒬_h-1^A. In other words, PSR-UCB adopts the policy ν_h(π^k-1, 𝚞_𝒬_h-1^exp) to collect a sample trajectory (τ_h-1^k,h, ω_h-1^k,h), which, together with 𝒟_h-1^k-1, forms the dataset 𝒟_h-1^k.
The importance of uniformly selecting actions from 𝒬_h-1^exp can be explained as follows. First, the actions in 𝒬_h-1^A assist us to learn the prediction feature ψ̅^*(τ_h-1^k,h), as its ℓ-th coordinate equals ℙ_θ^*(𝐨_h-1^ℓ | τ_h-1^k,h, 𝐚_h-1^ℓ). Second, the action sequence (a_h,𝐚_h^ℓ) ∈𝒜×𝒬_h^A helps us to estimate 𝐌_h^*(o_h,a_h)ψ̅^*(τ_h-1^k,h), because its ℓ-th coordinate represents ℙ_θ^*(𝐨_h^ℓ,o_h |τ_h-1^k,h, a_h, 𝐚_h^ℓ). Therefore, through uniform exploration over 𝒬_h-1^exp, given that τ_h-1^k,h differs from the previous dataset 𝒟_h-1^k-1, PSR-UCB collects the most informative samples for estimating the true model θ^*.
With the updated dataset 𝒟^k = {𝒟_h^k}, we estimate the model by maximizing the log-likelihood functions with constraints. Specifically, PSR-UCB extracts any model θ̂^k from ℬ^k defined as:
Θ_min^k = {θ: ∀ h, (τ_h,π)∈𝒟_h^k, ℙ_θ^π(τ_h) ≥ p_min},
ℬ^k = {θ∈Θ_min^k: ∑_(τ_H,π)∈𝒟^klog ℙ_θ^π(τ_H) ≥max_θ'∈Θ_min^k∑_(τ_H,π)∈𝒟^klog ℙ_θ'^π(τ_H) - β},
where the estimation margin β is a pre-defined constant. Here the threshold probability p_min is sufficiently small to guarantee that, with high probability, θ^*∈Θ_min^k. Note that compared to existing vanilla maximum likelihood estimators <cit.>, PSR-UCB has an additional constraint Θ_min^k. This is a crucial condition, as τ_h^k,h+1 is sampled to infer the conditional probability ℙ_θ^*(ω_h^o|τ_h^k,h+1,ω_h^a) or the prediction feature ψ̅^*(τ_h^k,h+1), and learning this feature is useless or even harmful if ℙ_θ^*^π^k-1(τ_h^k,h+1) is too small.
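A minimal sketch of this constrained estimation step over a finite candidate class is given below; the prob(traj, policy) interface is an assumption made for illustration, standing in for whatever likelihood oracle is available.

```python
import numpy as np

def constrained_mle(models, histories, trajectories, p_min, beta):
    """Sketch of the constrained MLE step (assumed interface: each model
    exposes prob(traj, policy)). First enforce the Theta_min constraint on
    every collected history, then keep the models whose total log-likelihood
    over full trajectories is within beta of the constrained maximum."""
    feasible = [m for m in models
                if all(m.prob(tau, pi) >= p_min for (tau, pi) in histories)]
    if not feasible:
        raise ValueError("Theta_min is empty; decrease p_min")
    loglik = [sum(np.log(m.prob(tau, pi)) for (tau, pi) in trajectories)
              for m in feasible]
    best = max(loglik)
    return [m for m, ll in zip(feasible, loglik) if ll >= best - beta]
```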
Design of UCB with prediction features. From the discussion of the previous two steps, we see that the prediction features are vital since (a) actions in 𝒬_h^exp can efficiently explore the coordinates of these features, and (b) the constraint Θ_min^k ensures the significance of the learned features of the collected samples. Therefore, in the next round of exploration, our objective is to sample τ_h whose prediction feature exhibits the greatest dissimilarity compared with those of the previously collected samples τ_h∈𝒟_h^k. Towards that, PSR-UCB constructs an upper confidence bound V_θ̂^k,b̂^k^π for 𝙳_𝚃𝚅(ℙ_θ̂^k^π(τ_H), ℙ_θ^*^π(τ_H)), where the bonus b̂^k is defined as:
Û_h^k = λ I + ∑_τ_h∈𝒟_h^kψ̅̂̅^k(τ_h)ψ̅̂̅^k(τ_h)^⊤, b̂^k(τ_H) = min{α√(∑_h=0^H-1‖ψ̅̂̅^k(τ_h)‖^2_(Û_h^k)^-1), 1 },
with pre-specified regularizer λ and UCB coefficient α. Note that a large b̂^k(τ_H) indicates that τ_H is “perceived” to be under-explored, since the estimated prediction feature ψ̅̂̅^k(τ_h) is significantly different from {ψ̅̂̅^k(τ_h)}_τ_h∈𝒟_h^k.
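The bonus itself is simple to compute once the regularized Gram matrices Û_h^k are maintained, as the following sketch illustrates; the dimensions and the collected features are randomly generated placeholders.

```python
import numpy as np

def ucb_bonus(pred_features, gram_invs, alpha):
    """b(tau_H) = min(alpha * sqrt(sum_h ||psi_bar(tau_h)||^2_{U_h^{-1}}), 1).
    pred_features[h] is the estimated prediction feature of the step-h prefix;
    gram_invs[h] is the inverse of U_h = lam*I + sum of feature outer products."""
    total = sum(f @ U_inv @ f for f, U_inv in zip(pred_features, gram_invs))
    return min(alpha * np.sqrt(total), 1.0)

# Toy usage with assumed dimensions: build U_h from collected features.
rng = np.random.default_rng(2)
lam, d, n = 0.1, 3, 50
feats = rng.dirichlet(np.ones(d), size=n)       # collected psi_bar(tau_h)
U = lam * np.eye(d) + feats.T @ feats
U_inv = np.linalg.inv(U)
print(ucb_bonus([rng.dirichlet(np.ones(d))], [U_inv], alpha=1.0))
```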
Design of the greedy policy and last-iterate guaranteed termination. The construction of the UCB implies that θ̂^k is highly uncertain on the trajectories τ_H with large bonuses. Thus, PSR-UCB finds a greedy policy π^k = arg max_π V_θ̂^k,b̂^k^π and terminates if V_θ̂^k,b̂^k^π^k is sufficiently small, indicating that the estimated model θ̂^k is sufficiently accurate on any trajectory. Otherwise, π^k serves for the next iteration k+1, as it tries to sample the most dissimilar ψ̅̂̅^k(τ_h) compared with the previous samples ψ̅̂̅^k(τ_h^k,h+1) for efficient exploration. We remark that the termination condition favors the last-iterate guarantee, where a single model and a greedy policy are identified. Compared with algorithms that output a mixture policy (e.g., uniform selection from a large policy set), this guarantee may present lower variance in practical applications.
Our algorithm naturally handles reward-free PSRs since during the exploration, no reward information is needed.
In the reward-free setting, <Ref> exhibits greater advantage compared with existing reward-free algorithms <cit.> for PSRs, which require a potentially combinatorial optimization oracle over a pair of models in the (non-convex) confidence set and examination of the total variation distance[Computation complexity of calculating the total variation distance is still an open question <cit.> and only some specific forms can be approximated in polynomial time.], and are computationally intractable in general <cit.>.
§.§ Theoretical Results
In this section, we present the theoretical results for PSR-UCB. To make a general statement, we first introduce the notion of the optimistic net of the parameter space.
Consider two bounded measures ℙ and ℚ over a set 𝒳. Then, ℚ is ε-optimistic over ℙ if (a) ℚ(x)≥ℙ(x), ∀ x∈𝒳, and (b) 𝙳_𝚃𝚅(ℙ,ℚ)≤ε. The ε-optimistic net of Θ is a finite space Θ̅_ε such that for all θ∈Θ, there exists θ̅∈Θ̅_ε such that ℙ_θ̅ is ε-optimistic over ℙ_θ.
Note that if Θ is the parameter space for tabular PSRs (including finite observation and action spaces) with rank r, we have log|Θ̅_ε|≤ r^2|𝒪||𝒜|H^2log(H|𝒪||𝒜|/ε) (see <Ref> or Theorem 4.7 in <cit.>). Now, we are ready to present the main theorem for PSR-UCB.
Suppose <Ref> holds. Let p_min = O(δ/(KH|𝒪|^H|𝒜|^H))[|𝒪| can be the cardinality of 𝒪 if it is finite, or the measure of 𝒪 if it is a measurable set with positive and bounded measure.], β = O(log|Θ̅_ε|), where ε=O(p_min/(KH)), λ = γ |𝒜|^2Q_A βmax{√(r), Q_A√( H)/γ}/√(d H), and α = O( Q_A√( H d)/(γ^2√(λ)) + |𝒜|Q_A√(β)/γ). Then, with probability at least 1-δ, PSR-UCB outputs a model θ^ϵ and a policy π̅ that satisfy
V_θ^*,R^π^* - V_θ^*,R^π̅≤ϵ, and ∀π, 𝙳_𝚃𝚅( ℙ_θ^ϵ^π(τ_H), ℙ_θ^*^π(τ_H)) ≤ϵ.
In addition, PSR-UCB terminates with a sample complexity of
Õ( ( r + Q_A^2 H/γ^2) r d H^3 |𝒜|^2 Q_A^4 β/γ^4ϵ^2).
The following remarks highlight a few insights conveyed by <Ref>. First, <Ref> explicitly states that PSR-UCB features last-iterate guarantee, i.e., the guaranteed performance is on the last output of the algorithm. This is in contrast to the previous studies <cit.> on POMDP and/or PSRs, where the performance guarantee is on a mixture of policies obtained over the entire execution of algorithms. Such policies often have a large variance. Second,
in the regime with a low PSR rank such that r< Q_A^2H/γ^2, our result matches the best known sample complexity <cit.> in terms of rank r and ϵ. Third,
thanks to the explicitly constructed bonus function b̂, whose computation complexity is polynomial in terms of the iteration number and the dimension d, PSR-UCB is computationally efficient, provided that there exist oracles for planning in POMDPs (e.g. Line 9 in PSR-UCB) <cit.> and maximum likelihood estimation.
Proof Sketch of <Ref>. The proof relies on three main steps corresponding to three new technical developments, respectively. (a) A new MLE guarantee. Due to the novel constrained model estimation design, we establish a new MLE guarantee (which has not been developed in the previous studies) that the total variation distance between the conditional distributions of future trajectories conditioned on (τ_h,π)∈𝒟_h^k is small. Mathematically, the summation ∑_(τ_h,π)∈𝒟_h^k𝙳_𝚃𝚅(ℙ_θ̂^k^π(ω_h|τ_h), ℙ_θ^*^π(ω_h|τ_h)) is upper bounded by a constant. (b) A new confidence bound. Based on (a) and the observation that the estimation error of the prediction feature ψ̅̂̅^k(τ_h) is upper bounded by the total variation distance 𝙳_𝚃𝚅(ℙ_θ̂^k^π(ω_h|τ_h), ℙ_θ^*^π(ω_h|τ_h)), the estimation error of other prediction features is captured by the function ‖ψ̅̂̅^k(τ_h)‖_(Û_h^k)^-1 up to some constant, leading to the valid UCB design.
(c) A new relationship between the empirical bonus and the ground-truth bonus. To characterize the sample complexity, we need to show that V_θ̂^k,b̂^k^π^k can be small for some k. One approach is to prove that the summation ∑_kV_θ̂^k,b̂^k^π^k is sub-linear. We validate this sub-linearity by establishing b̂^k(τ_H) ≤ O(∑_h ‖ψ̅^*(τ_h)‖_(U_h^k)^-1), where U_h^k = λ I + ∑_τ_h∈𝒟_h^kψ̅^*(τ_h)ψ̅^*(τ_h)^⊤. This inequality bridges the empirical bonus and the ground-truth bonus, and yields the final result when combined with the elliptical potential lemma <cit.>.
§ OFFLINE LEARNING FOR PREDICTIVE STATE REPRESENTATIONS
In this section, we develop a computationally efficient algorithm PSR-LCB to learn PSRs in the offline setting. In offline PSRs, a dataset contains K pre-collected trajectories that are independently sampled from a behavior policy π^b. We slightly generalize the learning goal to be finding a policy that can compete with any target policy whose coverage coefficient (See Eqn. <ref>) is finite. We highlight that PSR-LCB is the first offline algorithm for learning PSRs. When specialized to POMDPs, our proposed algorithm enjoys better computational complexity than confidence region based algorithms <cit.>.
§.§ Algorithm
We design a PSR-LCB algorithm for offline PSRs, whose pseudo-code is presented in <Ref>. In the offline setting, we also leverage the prediction feature ψ̅(τ_h) to design lower confidence bound (LCB) of the true value function V_θ^*,r^π. We explain the main steps of the algorithm in detail as follows.
Constrained model estimation. Inspired by PSR-UCB, where the prediction features ψ̅(τ_h) at each step h are learned through separate datasets, PSR-LCB first randomly and evenly divides 𝒟 into H datasets 𝒟_0,…, 𝒟_H-1. The goal of this division is to separately learn ψ̅^*(τ_h) for each h∈{0,1,…,H-1}. Then, PSR-LCB extracts a model θ̂ from ℬ_𝒟 defined as:
Θ_min = {θ: ∀ h, τ_h ∈𝒟_h, ℙ_θ^π^b(τ_h) ≥ p_min},
ℬ_𝒟 = {θ∈Θ_min: ∑_τ_H ∈𝒟log ℙ_θ^π^b(τ_H) ≥max_θ'∈Θ_min∑_τ_H ∈𝒟log ℙ_θ'^π^b(τ_H) - β̂},
where the estimation margin β̂ is a pre-specified constant, and p_min guarantees that, with high probability, θ^*∈Θ_min.
Following similar reasons of the design of PSR-UCB, the constraint Θ_min controls the quality of the estimated model such that the prediction features of the behavior samples are non-negligible and useful for learning.
Design of LCB with prediction features. Given the estimated model θ̂, PSR-LCB constructs an upper confidence bound of 𝙳_𝚃𝚅(ℙ_θ̂^π(τ_H), ℙ_θ^*^π(τ_H)) in the form of V_θ̂,b̂^π, where b̂(τ_H) is defined as:
Û_h = λ̂ I + ∑_τ_h∈𝒟_hψ̅̂̅(τ_h)ψ̅̂̅(τ_h)^⊤, b̂(τ_H) = min{α̂√(∑_h ‖ψ̅̂̅(τ_h)‖^2_(Û_h)^-1), 1 },
with pre-defined regularizer λ̂ and LCB coefficient α̂. Note that we refer to α̂ as the LCB coefficient, as we adopt the pessimism principle in offline learning. Unlike PSR-UCB, we aim to select a policy under which the estimated model exhibits the least uncertainty. Therefore, the output policy should allocate high probability to τ_H if b̂(τ_H) is small.
Output policy design. Building on the discussion above, PSR-LCB outputs a policy π̅ = arg max_π V_θ̂, R^π - V_θ̂, b̂^π, which maximizes a lower confidence bound of V_θ^*,R^π.
§.§ Theoretical Results
In this section, we develop the theoretical guarantee for PSR-LCB. To capture the distribution shift in offline PSRs, we adopt the following assumption that requires that the ℓ_∞ norm of the ratio between the probabilities over the entire trajectory under the target policy and under the behavior policy is finite. Mathematically, if π is the target policy, then,
C_π^b,∞^π := max_h max_τ_hℙ_θ^*^π(τ_h)/ℙ_θ^*^π^b(τ_h) < ∞.
We note that, for general sequential decision-making problems, no Markovian property or hidden state is assumed. Therefore, defining the coverage assumption on a single observation-action pair as in <cit.> for offline MDP, or on hidden states as in <cit.> for offline POMDP, is not suitable for PSRs.
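As an illustration, the coverage coefficient can be computed directly when the trajectory distributions are enumerable, as in the following toy sketch; the probability tables are hypothetical, and the behavior probabilities are assumed strictly positive.

```python
import numpy as np

def coverage_coefficient(p_target, p_behavior):
    """C^pi_{pi^b, infty} = max over steps h and histories tau_h of the ratio
    P^pi(tau_h) / P^{pi^b}(tau_h); each argument is a list of arrays, one per
    step, holding the trajectory probabilities (assumed enumerable and > 0)."""
    return max(float(np.max(pt / pb))
               for pt, pb in zip(p_target, p_behavior))

p_tgt = [np.array([0.6, 0.4]), np.array([0.5, 0.3, 0.2])]
p_beh = [np.array([0.5, 0.5]), np.array([0.4, 0.4, 0.2])]
print(coverage_coefficient(p_tgt, p_beh))   # finite => the assumption holds
```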
Suppose <Ref> holds. Let ι = min_𝐚_h∈𝒬_h^expπ^b(𝐚_h), p_min = O(δ/(KH(|𝒪||𝒜|)^H)), ε=O(p_min/(KH)), β̂ = O(log|Θ̅_ε|), λ̂ = γ C_π^b,∞^πβ̂max{√(r), Q_A√(H)/γ}/(ι^2Q_A√(dH)), and α̂ = O( Q_A√(dH)/(γ^2√(λ̂)) + √(β̂)/(ιγ)). Then, with probability at least 1-δ, the output π̅ of <Ref> satisfies that
∀π, V_θ^*, R ^π - V_θ^*,R^π̅≤Õ( (√(r) + Q_A√(H)/γ) C_π^b,∞^π Q_A H^2√(d)/ιγ^2 √(rβ̂/K)).
<Ref> states that for any target policy with finite coverage coefficient C_π^b,∞^π, the performance degradation of the output policy π̅ with respect to the target policy π is at most O( C_π^b,∞^π/√(K)), which is negligible if the size of the offline dataset K is sufficiently large.
Proof Sketch of <Ref>. Similar to the online setting, the proof first shows that V_θ̂, b̂^π serves as a valid upper confidence bound of the total variation distance 𝙳_𝚃𝚅(ℙ_θ̂^π, ℙ_θ^*^π) for any policy π. We emphasize that the validity is again due to the development of the estimation guarantee of the distributions of future trajectories conditioned on empirical samples τ_h∈𝒟_h. Then, we translate the empirical bonus term b̂ to ∑_h‖ψ̅^*(τ_h)‖_U_h^-1, where U_h = λ̂ I + ∑_τ_h∈𝒟_hψ̅^*(τ_h)ψ̅^*(τ_h)^⊤. Since ψ̅^* is a fixed representation function, we leverage the coverage assumption to change the distribution induced by the target policy π to the distribution induced by the behavior policy π^b, and then apply the Azuma-Hoeffding inequality to obtain the final result.
§ CONCLUSION
We studied learning predictive state representations (PSRs) for low-rank sequential decision-making problems. We developed a novel upper confidence bound for the total variation distance of the estimated model and the true model that enables both computationally and statistically efficient learning for PSRs. Specifically, we proposed PSR-UCB for online learning for PSRs with last-iterate guarantee, i.e. producing not only a near-optimal policy, but also a near-accurate model. The statistical efficiency was validated by a polynomial sample complexity in terms of the model parameters. In addition, we extended this UCB-type approach to the offline setting and proposed PSR-LCB. Our theoretical result offers an initial perspective on offline PSRs by demonstrating that PSR-LCB outputs a policy that can compete with any policy with finite coverage coefficient.
Provably Efficient UCB-type Algorithms For Learning PSRs: Supplementary Materials
§ PROPERTIES OF PSRS
In this section, we present a few important properties of PSRs, which will be intensively used in the algorithm analysis.
First, for any model θ = {ϕ_h, 𝐌_h(o_h,a_h)}, we have the following identity:
𝐌_h(o_h,a_h)ψ̅(τ_h-1) = ℙ_θ(o_h|τ_h-1)ψ̅(τ_h).
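For completeness, here is a short derivation of this identity from the relations ℙ(o_1,…,o_h | a_1,…,a_h) = ϕ_h^⊤ψ(τ_h) and 𝐌_h(o_h,a_h)ψ(τ_h-1) = ψ(τ_h) introduced in the preliminaries (the conditional probability implicitly conditions on the executed action a_h):

```latex
\begin{align*}
\mathbf{M}_h(o_h,a_h)\bar{\psi}(\tau_{h-1})
  &= \frac{\mathbf{M}_h(o_h,a_h)\psi(\tau_{h-1})}{\phi_{h-1}^{\top}\psi(\tau_{h-1})}
   = \frac{\psi(\tau_h)}{\phi_{h-1}^{\top}\psi(\tau_{h-1})} \\
  &= \underbrace{\frac{\phi_h^{\top}\psi(\tau_h)}{\phi_{h-1}^{\top}\psi(\tau_{h-1})}}_{=\,\mathbb{P}_{\theta}(o_h\mid\tau_{h-1})}
     \cdot \frac{\psi(\tau_h)}{\phi_h^{\top}\psi(\tau_h)}
   = \mathbb{P}_{\theta}(o_h\mid\tau_{h-1})\,\bar{\psi}(\tau_h).
\end{align*}
```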
The following proposition is directly adapted from Lemma C.3 in <cit.>. Note that ψ_0 is known to the agent.
Consider two γ-well-conditioned PSRs θ,θ̂∈Θ. We have
𝙳_𝚃𝚅( ℙ_θ̂^π , ℙ_θ^π) ≤∑_h=1^H ∑_τ_H| 𝐦̂(ω_h)^⊤( 𝐌̂_h(o_h,a_h) - 𝐌_h(o_h,a_h) ) ψ(τ_h-1) | π(τ_H),
𝙳_𝚃𝚅( ℙ_θ̂^π , ℙ_θ^π) ≤∑_h=1^H ∑_τ_H| 𝐦(ω_h)^⊤( 𝐌̂_h(o_h,a_h) - 𝐌_h(o_h,a_h) ) ψ̂(τ_h-1) | π(τ_H).
The next proposition characterizes well-conditioned PSRs <cit.>, which is obtained by noting that 𝐦(𝐪_h^ℓ)^⊤ is the ℓ-th row of 𝐌_h+1(o_h+1,a_h+1).
For a well-conditioned (self-consistent) PSR θ, we have
max_x∈ℝ^d_h-1: ‖x‖_1=1max_π∑_ω_hπ(ω_h) ‖𝐌_h+1(o_h+1,a_h+1)x‖_1 ≤Q_A/γ.
The following proposition characterizes the log-cardinality of the minimal optimistic net of the rank-r PSRs. We note that this notion is closely related to the bracketing number, a typical complexity measure in the maximum likelihood estimation (MLE) anslysis <cit.>. The proof follows directly from Theorem C.9 in <cit.>.
Given any ϵ, there exists a finite parameter space Θ̅_ϵ satisfying the following property: for any θ∈Θ, we can find a θ̅∈Θ̅_ϵ associated with a measure ℙ_θ̅ such that
∀π,h, ℙ_θ̅^π(τ_h) ≥ℙ_θ^π(τ_h),
∀π, h, ∑_τ_h| ℙ_θ̅^π(τ_h) - ℙ_θ^π(τ_h)|≤ϵ.
Moreover, log|Θ̅_ϵ|≤ 2r^2OAH^2log(OA/ϵ).
§ GENERAL MLE ANALYSIS
In this section, we present four general propositions that characterize the performance of MLE.
We start with a proposition that states that the log-likelihood of the true model is relatively high compared to any model.
Fix ε<1/KH. With probability at least 1-δ, for any θ̅∈Θ̅_ε and any k∈[K], the following two inequalities hold:
∀θ̅∈Θ̅_ε, ∑_h∑_(τ_h,π)∈_h^klog_θ̅^π(τ_h) - 3logK|Θ̅_ε|/δ≤∑_h∑_(τ_h,π)∈_h^klog_θ^*^π(τ_h),
∀θ∈Θ, ∑_(τ_H,π)∈^klog_θ^π(τ_H) - 3logK|Θ̅_ε|/δ≤∑_(τ_H,π)∈^klog_θ^*^π(τ_H).
We start with the first inequality. Suppose the data in _h^k is indexed by t. Then,
[ exp(∑_h∑_(τ_h,π)∈_h^klog_θ̅^π( τ_h ) /_θ^*^π(τ_h ))]
= [ ∏_t≤ k∏_h _θ̅^π^t(τ_h^t) /_θ^*^π^t(τ_h^t) ]
= [ ∏_t≤ k-1∏_h _θ̅^π^t(τ_h^t) /_θ^*^π^t(τ_h^t) [ _θ̅^π^k(τ_h^k) /_θ^*^π^k(τ_h^k) ] ]
= [ ∏_t≤ k-1∏_h _θ̅^π^t(τ_h^t) /_θ^*^π^t(τ_h^t) ∏_h∑_τ_h_θ̅^π^k(τ_h ) ]
(a)≤(1 + ε)^H [ ∏_t≤ k-1∏_h _θ̅^π^t(τ_h^t) /_θ^*^π^t(τ_h^t) ]
≤(1+ε)^KH
(b)≤ e,
where (a) follows because ∑_τ_h_θ̅^π(τ_h) ≤ 1 + ε, and (b) follows because ε≤1/KH.
By the Chernoff bound and the union bound over Θ̅_ϵ and k∈[K], with probability at least 1-δ, we have,
∀ k∈[K], θ̅∈Θ̅_ϵ, ∑_h∑_(τ_h,π)∈_h^klog_θ̅^π( τ_h ) /_θ^*^π(τ_h )≤ 3logK|Θ̅_ϵ|/δ,
which yields the first result of this proposition.
To show the second inequality, we follow an argument similar to that for the first inequality. We have
[ exp( ∑_(τ_H,π)∈^klog_θ^π(τ_H) /_θ^* ^π(τ_H))] ≤[ exp( ∑_(τ_H,π)∈^klog_θ̅^π(τ_H) /_θ^* ^π(τ_H))]
(a)≤ (1+ε)^KH≤ e,
where (a) follows from the tower rule of the expectation and because ∑_τ_H_θ̅^π(τ_H)≤ 1+ε.
Thus, with probability at least 1-δ, for any k∈[K] and any θ∈Θ, the following inequality holds
∑_(τ_H,π)∈^klog_θ^π(τ_H) /_θ^* ^π(τ_H)≤ 3logK|Θ̅_ϵ|/δ,
which completes the proof.
The following proposition upper bounds the total variation distance between the conditional distributions over the future trajectory conditioned on the empirical history trajectories. This proposition is crucial to ensure that the model estimated by PSR-UCB is accurate on those sample trajectories.
Fix p_min and ε≤p_min/KH.
Let Θ_min^k = {θ: ∀ h, (τ_h,π)∈_h^k, _θ^π(τ_h) ≥ p_min}. Consider the following event
_ω = {∀ k∈[K], ∀θ∈Θ_min^k, ∑_h ∑_ (τ_h,π) ∈_h^k 𝙳_^2 ( _θ^π(ω_h|τ_h ), _θ^*^π(ω_h|τ_h ) ) .
. ≤ 6∑_h∑_ (τ_H,π)∈_h^k log_θ^*^π(τ_H)/_θ^π(τ_H) + 31logK|Θ̅_ε|/δ}.
Then, (_ω) ≥ 1- δ.
We start with a general upper bound on the total variation distance between two conditional distributions. Note that for any θ,θ'∈Θ∪Θ̅_ϵ and fixed (τ_h,π), we have
𝙳_ ( _θ^π(ω_h|τ_h ), _θ'^π(ω_h|τ_h ))
= ∑_ω_h| _θ'^π(ω_h,τ_h ) _θ^π(τ_h ) - _θ^π(ω_h,τ_h )_θ'^π(τ_h ) /_θ^π(τ_h )_θ'^π(τ_h )|
= ∑_ω_h| (_θ'^π(ω_h,τ_h ) - _θ^π(ω_h,τ_h ) ) _θ^π(τ_h ) + _θ^π(ω_h,τ_h ) (_θ^π(τ_h ) -_θ'^π(τ_h ) ) /_θ^π(τ_h ) _θ'^π(τ_h )|
≤|_θ^π(τ_h ) - _θ'^π(τ_h )|/_θ'^π(τ_h ) + 1/_θ'^π(τ_h )∑_ω_h|(_θ'^π(ω_h,τ_h ) - _θ^π(ω_h,τ_h ) )|
≤2/_θ'^π(τ_h )𝙳_( _θ^π(τ_H), _θ'^π(τ_H) ).
By symmetry, we also have
𝙳_( _θ^π(ω_h|τ_h ), _θ'^π(ω_h|τ_h )) ≤2/max{_θ^π(τ_h ), _θ'^π(τ_h ) }𝙳_( _θ^π(τ_H), _θ'^π(τ_H) ).
We replace θ' by a θ̅∈Θ̅_ε that is ε-optimistic over θ (recall <Ref>), i.e. 𝙳_( _θ^π , _θ̅^π) ≤ε, and _θ̅^π(τ_h)≥_θ^π(τ_h) holds for any π and τ_h.
Then, due the construction of Θ_min^k, we have
∀ (τ_h,π)∈_h^k, 𝙳_( _θ^π(ω_h|τ_h), _θ̅^π(ω_h|τ_h) ) ≤2ϵ/p_min≤2/KH,
which implies
∑_h∑_(τ_h,π)∈_h^k 𝙳_^2( _θ^π(ω_h|τ_h ), _θ^*^π(ω_h|τ_h ))
(a)≤∑_h∑_(τ_h,π)∈_h^k 2𝙳_^2( _θ^π(ω_h|τ_h ), _θ̅^π(ω_h|τ_h )) + 2𝙳_^2( _θ̅^π(ω_h|τ_h ), _θ^*^π(ω_h|τ_h ))
≤4 /KH + 2 ∑_h∑_(τ_h,π)∈_h^k𝙳_^2( _θ̅^π(ω_h|τ_h ), _θ^*^π(ω_h|τ_h )).
Here (a) follows because the total variation distance satisfies the triangle inequality and (a+b)^2≤ 2a^2 + 2b^2.
Moreover, note that
𝙳_^2 ( _θ̅^π(ω_h|τ_h ), _θ^*^π(ω_h|τ_h ))
(a)≤ 4(2 + 2/(KH))𝙳^2_𝙷( _θ̅^π(ω_h|τ_h ), _θ^*^π(ω_h|τ_h ) )
≤ 6( 1+1/KH - _ω_h∼_θ^*^π√(_θ̅^π(ω_h|τ_h )/_θ^*^π(ω_h|τ_h )))
(b)≤ - 6 log_ω_h∼_θ^*^π(·|τ_h ) √(_θ̅^π(ω_h|τ_h )/_θ^*^π(ω_h|τ_h )) + 6/KH,
where (a) is due to <Ref> and (b) follows because 1-x≤ -log x for any x>0.
Thus, the summation of the total variation distance between conditional distributions conditioned on (τ_h,π)∈_h^k can be upper bounded by
∑_h∑_(τ_h,π)∈_h^k 𝙳_^2( _θ^π(ω_h|τ_h ), _θ^*^π(ω_h|τ_h ))
≤18/KH - 12∑_h∑_(τ_h,π)∈_h^klog_ω_h∼_θ^*^π(·|τ_h ) √(_θ̅^π(ω_h|τ_h )/_θ^*^π(ω_h|τ_h )).
In addition, we have
_∀ h,(τ_h,π)∈_h^k,
ω_h ∼_θ^*^π(·|τ_h) [exp( 1/2∑_h∑_(ω_h,τ_h,π)∈_h^klog_θ̅^π(ω_h |τ_h )/_θ^*^π(ω_h |τ_h ) - ∑_h∑_(τ_h,π)∈_h^klog_ω_h∼_θ^*^π(·|τ_h ) √(_θ̅^π(ω_h|τ_h )/_θ^*^π(ω_h|τ_h )))]
= _∀ h,(τ_h,π)∈_h^k,
ω_h ∼_θ^*^π(·|τ_h) [∏_h∏_(ω_h,τ_h)∈_h^k√(_θ̅^π(ω_h |τ_h ) /_θ^*^π(ω_h |τ_h ))] /∏_h∏_(τ_h,π)∈_h^k_ω_h∼_θ^*^π(·|τ_h ) [√(_θ̅^π(ω_h|τ_h )/_θ^*^π(ω_h|τ_h ))] = 1,
where the last equality is due to the conditional independence of ω_h ∈_h^k given (τ_h,π)∈_h^k.
Therefore, by the Chernoff bound, with probability 1-δ, we have
-∑_h∑_(τ_h,π)∈_h^klog_ω_h∼_θ^*^π(·|τ_h ) √(_θ̅^π(ω_h|τ_h )/_θ^*^π(ω_h|τ_h ))≤1/2∑_h∑_(ω_h,τ_h,π)∈_h^klog_θ^*^π(ω_h |τ_h )/_θ̅^π(ω_h |τ_h ) + log1/δ.
Taking the union bound over Θ̅_ϵ, k∈[K], and rescaling δ, we have, with probability at least 1-δ, ∀ k∈[K], the following inequality holds:
∑_h ∑_(τ_h,π)∈_h^k𝙳_^2(_θ^π(ω_h|τ_h ), _θ^*^π(ω_h|τ_h ) )
≤18 /KH + 6∑_h∑_(ω_h,τ_h,π)∈_h^klog_θ^*^π(ω_h |τ_h )/_θ̅^π(ω_h |τ_h ) + 12logK|Θ̅_ε|/δ
≤ 6∑_h∑_(ω_h, τ_h,π)∈_h^klog_θ^*^π( ω_h ,τ_h )/_θ̅^π(ω_h ,τ_h ) + 6∑_h∑_(τ_h,π)∈_h^klog_θ̅^π( τ_h ) /_θ^*^π(τ_h ) + 13logK|Θ̅_ε|/δ.
Note that, following from <Ref>, with probability at least 1-δ, we have for any k∈[K],
∑_h∑_(τ_h,π)∈_h^klog_θ̅^π( τ_h ) /_θ^*^π(τ_h )≤ 3logK|Θ̅_ε|/δ.
Hence, combining with the optimistic property of θ̅ and rescaling δ, we have that the following inequality holds with probability at least 1-δ:
∑_h∑_(τ_h,π)∈_h^k𝙳_^2(_θ^π(ω_h|τ_h ), _θ^*^π(ω_h|τ_h ) ) ≤ 6∑_h∑_(τ_H,π)∈_h^klog_θ^*^π(τ_H )/_θ^π(τ_H ) + 31logK|Θ̅_ε|/δ,
which yields the final result.
The following proposition is standard in the MLE analysis, and we provide the full analysis here for completeness.
Fix ε<1/K^2H^2. Define the following event:
_π = {∀θ∈Θ, ∀ k∈[K], ∑_π∈^k𝙳_𝙷^2 ( _θ^π(τ_H) , _θ^*^π (τ_H) ) ≤1/2∑_(τ_H,π)∈^klog_θ^*^π(τ_H) /_θ^π(τ_H) + 2logK|Θ̅_ε|/δ}.
We have (_π) ≥ 1-δ.
First, by the construction of Θ̅_ε, for any θ, let θ̅ be optimistic over θ, i.e., ∑_τ_H|_θ^π(τ_H) - _θ̅^π(τ_H) |≤ε. We translate the distance between θ and θ^* to the distance between θ̅ and θ^* as follows.
𝙳_𝙷^2 ( _θ^π(τ_H) , _θ^*^π (τ_H) )
= 1 - ∑_τ_H√(_θ^π(τ_H) _θ^*^π (τ_H) )
= 1 - ∑_τ_H√(_θ̅^π(τ_H) _θ^*^π (τ_H) + (_θ^π(τ_H) - _θ̅^π(τ_H) )_θ^*^π (τ_H))
(a)≤ 1 - ∑_τ_H√(_θ̅^π(τ_H) _θ^*^π (τ_H) ) + ∑_τ_H√(|_θ^π(τ_H) - _θ̅^π(τ_H) | _θ^*^π (τ_H))
(b)≤ - log_τ_H∼_θ^*^π(·)√(_θ̅^π(τ_H) /_θ^*^π(τ_H) ) + √(∑_τ_H|_θ^π(τ_H) - _θ̅^π(τ_H) | )
≤ - log_τ_H∼_θ^*^π(·)√(_θ̅^π(τ_H) /_θ^*^π(τ_H) ) + √(ε),
where (a) follows because √(a+b)≥√(a) - √(|b|) if a>0 and a+b>0, and (b) follows from the Cauchy's inequality and the fact that 1-x≤ - log x.
Hence, in order to upper bound ∑_π∈_k𝙳_𝙷^2 ( _θ^π(τ_H) , _θ^*^π (τ_H) ), it suffices to upper bound
∑_π∈_k - log_τ_H∼_θ^*^π(·)√(_θ̅^π(τ_H) /_θ^*^π(τ_H) ). To this end, we observe that,
[exp( 1/2∑_(τ_H,π)∈^klog_θ̅^π(τ_H) /_θ^*^π(τ_H) - ∑_π∈^klog_τ_H∼_θ^*^π(·)√(_θ^π(τ_H) /_θ^*^π(τ_H) ))]
(a)= [∏_(τ_H,π)∈^k√(_θ^π(τ_H) /_θ^*^π(τ_H) )] /[∏_(τ_H,π)∈^k√(_θ^π(τ_H) /_θ^*^π(τ_H) )] = 1,
where (a) follows because (τ_H,π)∈^k form a filtration.
Then, by the Chernoff bound,
we have
( 1/2∑_(τ_H,π)∈^klog_θ̅^π(τ_H) /_θ^*^π(τ_H) - ∑_π∈^klog_τ_H∼_θ^*^π(·)√(_θ^π(τ_H) /_θ^*^π(τ_H) )≥log1/δ)
= ( exp(1/2∑_(τ_H,π)∈^klog_θ̅^π(τ_H) /_θ^*^π(τ_H) - ∑_π∈^klog_τ_H∼_θ^*^π(·)√(_θ^π(τ_H) /_θ^*^π(τ_H) )) ≥1/δ)
(a)≤δ[exp( 1/2∑_(τ_H,π)∈^klog_θ̅^π(τ_H) /_θ^*^π(τ_H) - ∑_π∈^klog_τ_H∼_θ^*^π(·)√(_θ^π(τ_H) /_θ^*^π(τ_H) ))]
≤δ,
where (a) is due to the Markov inequality.
Finally, rescaling δ to δ/(K|Θ̅_ε|) and taking the union bound over Θ̅_ϵ and k∈[K], we conclude that, with probability at least 1-δ, ∀θ∈Θ, k∈[K],
∑_π∈^k 𝙳_𝙷^2 ( _θ^π(τ_H) , _θ^*^π (τ_H) )
≤ KH√(ε) + 1/2∑_(τ_H,π)∈^klog_θ^*^π(τ_H) /_θ̅^π(τ_H) + logK|Θ̅_ϵ|/δ
(a)≤1/2∑_(τ_H,π)∈^klog_θ^*^π(τ_H) /_θ^π(τ_H) + 2logK|Θ̅_ϵ|/δ,
where (a) follows because ε≤1/K^2H^2.
The following proposition states that the constraint Θ_min^k does not rule out the true model.
Fix p_min≤δ/KH(|𝒪||𝒜|)^H. Consider the following event:
_min = {∀ k∈[K], ∀ h,(τ_h,π)∈_h^k, _θ^*^π(τ_h) ≥ p_min} = ⋂_k∈[K]{θ^*∈Θ_min^k }.
We have (_min) ≥ 1- δ.
For any k∈[K], h∈{0,1,…,H-1} and (τ_h,π)∈_h^k, we have
( _θ^*^π(τ_h ) < p_min)
= [ ( _θ^*^π(τ_h ) < p_min| π)]
= [ ∑_τ_h _θ^*^π(τ_h)1{_θ^*^π(τ_h) < p_min}]
≤[∑_τ_h p_min]
≤δ/KH .
Thus, taking the union bound over k∈[K], h∈{0,1,…,H-1} and (τ_h,π)∈_h, we conclude that
(_min) ≥ 1 - δ.
§ PROOF OF <REF> (FOR ONLINE PSR-UCB)
In this section, we present the full analysis for the online algorithm PSR-UCB to show <Ref>. In particular, the proof of <Ref> consists of three main steps. Step 1. We prove that the estimated model θ̂^k is not only accurate under the exploration policies, but also accurate conditioned on empirical samples τ_h∈_h^k. Step 2. Building on the first step, we are able to show that V_θ̂^k,b̂^k^π is a valid upper bound on the total variation distance between θ̂^k and θ^*. Step 3. Based on a newly developed inequality that translates V_θ̂^k,b̂^k^π to the ground-truth prediction feature ψ̅^*(τ_h), we show that the summation of V_θ̂^k,b̂^k^π^k over the iterations k grows sublinearly in K. This finally characterizes the optimality of the output policy, the accuracy of the output model, and the sample complexity of PSR-UCB, and finishes the proof.
Before we proceed, we use to denote the event _ω∩_π∩_min, where these three events are defined in <Ref>. Due to <Ref> and union bound, we immediately have ()≥ 1-3δ.
§.§ Step 1: Estimation Guarantee
We show that the estimated model is accurate with the past exploration policies and dataset.
Under event , the following two inequalities hold:
∑_h∑_(τ_h,π)∈_h^k𝙳_^2 ( _θ̂^k^π(ω_h|τ_h), _θ^*^π(ω_h|τ_h) ) ≤ 7β ,
∑_π∈^k𝙳_𝙷^2( _θ̂^k^π(τ_H), _θ^*^π (τ_H) ) ≤ 7β ,
where β = 31logK|Θ̅_ε|/δ and ε≤δ/K^2H^2(|𝒪||𝒜|)^H.
To show the first inequality, note that by the selection of θ̂^k, we have θ̂^k∈ℬ^k (defined in <Ref>).
Following from <Ref>, we have
∑_h ∑_(τ_h,π)∈_h^k𝙳_^2 ( _θ̂^k^π(ω_h|τ_h), _θ^*^π(ω_h|τ_h) )
≤ 6 ∑_h∑_(τ_H,π)∈_h^klog_θ^*^π(τ_H) - 6∑_h∑_(τ_H,π)∈_h^klog_θ̅^k^π(τ_H) + 31logK|Θ̅_ε|/δ
(a)≤ 6max_θ'∈Θ_min^k∑_h∑_(τ_H,π)∈_h^klog_θ'^π(τ_H) - 6∑_h∑_(τ_H,π)∈_h^klog_θ̅^k^π(τ_H) + 31logK|Θ̅_ε|/δ
≤ 7β,
where (a) follows from θ^*∈Θ_min^k (<Ref>).
To show the second inequality, <Ref> implies that
∑_π∈^k 𝙳_𝙷^2( _θ̂^k^π(τ_H), _θ^*^π (τ_H) )
≤∑_π∈^klog_θ^*^π(τ_H) - ∑_π∈^klog_θ̂^k^π(τ_H) + 2logK|Θ̅_ε|/δ
(a)≤max_θ'∈Θ_min^k ∑_π∈^klog_θ'^π(τ_H) - ∑_π∈^klog_θ̂^k^π(τ_H) + 2logK|Θ̅_ε|/δ
≤ 7β,
where (a) follows from θ^*∈Θ_min^k (<Ref>).
§.§ Step 2: UCB for Total Variation Distance
Following from <Ref>, the total variation distance between two PSRs is controlled by the estimation error. Hence, we first characterize the estimation error of _h^*(o_h,a_h) in the following lemma.
Under event , for any k∈[K] and any policy π, we have
∑_τ_H| 𝐦^*(ω_h)^(_h^k(o_h,a_h) - _h^*(o_h,a_h) ) ψ̂_h-1^k(τ_h-1) | π(τ_H)
≤_τ_h-1∼_θ̂^(k)^π[ α_h-1^k ψ̅̂̅_h-1^k(τ_h-1)_(Û_h-1^k)^-1],
where
Û_h-1^k = λ I + ∑_τ_h-1∈_h-1^kψ̅̂̅^k(τ_h-1 )ψ̅̂̅^k(τ_h-1 )^,
(α_h-1^k)^2 = 4λ Q_A^2 d /γ^4 + 4|𝒜|^2 Q_A^2 /γ^2∑_τ_h-1∈_h-1^k𝙳_^2( _θ̂^k ^𝚞_𝒬_h-1^exp(ω_h-1 | τ_h-1 ) , _θ^* ^𝚞_𝒬_h-1^exp (ω_h-1 | τ_h-1 ) ).
We index future trajectory ω_h-1 = (o_h,a_h,…,o_H,a_H) by i, and history trajectory τ_h-1 by j. For simplicity, we denote 𝐦^*(ω_h)^(_h^(k)(o_h,a_h) - _h^*(o_h,a_h) ) as w_i^, and ψ̅̂̅_h-1^(k)(τ_h-1) = ψ̂_h-1^k(τ_h-1)/ϕ̂_h-1^kψ̂_h-1^k(τ_h-1) as x_j. We also denote π(ω_h-1|τ_h-1) as π_i|j.
Then, the LHS of <Ref> can be written as
LHS = ∑_τ_H| 𝐦^* (ω_h)^(_h^k(o_h,a_h) - _h^*(o_h,a_h) ) ψ̂_h-1^k(τ_h-1) | π(τ_H)
(a) = ∑_i ∑_j |w_i^x_j|π_i|j_θ̂^k ^π(j)
= ∑_j ∑_i (π_i|j·𝚜𝚐𝚗(w_i^x_j)· w_i)^ x_j·_θ̂^k ^π(j)
= ∑_j( ∑_iπ_i|j·𝚜𝚐𝚗(w_i^x_j )· w_i)^ x_j·_θ̂^k ^π(j)
(b)≤_j∼_θ̂^k ^π[ x_j _(Û_h-1^k)^-1√(∑_iπ_i|j·𝚜𝚐𝚗(w_i^x_j )· w_i_Û_h-1^k^2 )],
where (a) follows because _θ^π(τ_h) = ϕ_h^ψ(τ_h), and (b) follows from the Cauchy's inequality.
Fix τ_h-1 = j_0. We aim to upper bound the term I_1:= ∑_iπ_i|j_0·𝚜𝚐𝚗(w_i^x_j_0 )· w_i_Û_h-1^k^2. Then, we have
I_1 = λ∑_iπ_i|j_0·𝚜𝚐𝚗(w_i^x_j_0 )· w_i_2^2_I_2 + ∑_j∈_h-1^τ[ (∑_iπ_i|j_0·𝚜𝚐𝚗(w_i^x_j_0 )· w_i)^ x_j]^2 _I_3.
We first upper bound the first term I_2 as follows:
√(I_2) = √(λ)max_x∈ℝ^d_h-1: x_2=1|∑_iπ_i|j_0𝚜𝚐𝚗(w_i^x_j_0)w_i^ x|
≤√(λ)max_x∈ℝ^d_h-1:x_2=1∑_ω_h-1| 𝐦^*(ω_h)^( _h^k(o_h,a_h) - _h^*(o_h,a_h) ) x | π(ω_h-1|j_0)
(a)≤√(λ)max_x∈ℝ^d_h-1:x_2=1∑_ω_h-1|𝐦^*(ω_h)^_h^k(o_h,a_h) x | π(ω_h-1|j_0)
+ √(λ)max_x∈ℝ^d_h-1:x_2=1∑_ω_h-1|𝐦^*(ω_h)^_h^*(o_h,a_h) x | π(ω_h-1|j_0)
(b)≤√(λ)/γmax_x∈ℝ^d_h-1:x_2=1∑_o_h,a_h_h^k(o_h,a_h) x _1 π(a_h|o_h,j_0) + √(λ)/γmax_x∈ℝ^d_h-1:x_2=1 x_1
(c)≤2Q_A√(dλ)/γ^2,
where (a) follows because |a+b|≤ |a|+|b|, (b) follows from <Ref>, and (c) follows from <Ref> and because max_x∈ℝ^d_h-1:x_2=1 x_1 ≤√(d).
We next upper bound the second term I_3 as follows.
I_3 ≤∑_τ_h-1∈_h-1^k( ∑_ω_h-1| 𝐦^* (ω_h)^(_h^k(o_h,a_h) - _h^*(o_h,a_h) )ψ̅̂̅^k(τ_h-1) | π(ω_h-1|j_0) )^2
(a)≤∑_τ_h-1∈_h-1^k( ∑_ω_h-1|𝐦^*(ω_h)^( _h^k(o_h,a_h) ψ̅̂̅^k(τ_h-1) - _h^*(o_h,a_h) ψ̅^*(τ_h-1) ) | π(ω_h-1|j_0) .
+ . ∑_ω_h-1| 𝐦^*(ω_h)^_h^*(o_h,a_h) (ψ̅̂̅^(k)(τ_h-1) - ψ̅^*(τ_h-1) ) | π(ω_h-1|j_0) )^2
(b)≤∑_τ_h-1∈_h-1^k( 1/γ∑_o_h,a_h_θ̂^k(o_h|τ_h-1)ψ̅̂̅_h^k(τ_h) - _θ(o_h|τ_h-1) ψ̅_h^*(τ_h) _1 π(a_h|o_h,j_0) .
+ . 1/γ∑_τ_h-1∈_h-1^τψ̅̂̅_h-1^k(τ_h-1) - ψ̅_h-1^*(τ_h-1) _1 )^2
(c)= 1/γ^2∑_τ_h-1∈_h-1^k( ∑_o_h,a_h∑_ℓ=1^|𝒬_h|| _θ̂^k(𝐨_h^ℓ,o_h|τ_h-1,a_h,𝐚_h^ℓ) - _θ^* (𝐨_h^ℓ,o_h|τ_h-1,a_h,𝐚_h^ℓ) |π(a_h|o_h,j_0) .
+ . ∑_ℓ=1^|𝒬_h-1|| _θ̂^k(𝐨_h-1^ℓ | τ_h-1, 𝐚_h-1^ℓ ) - _θ^* (𝐨_h-1^ℓ | τ_h-1, 𝐚_h-1^ℓ ) | )^2
≤1/γ^2∑_τ_h-1∈_h-1^k( ∑_𝐚_h-1∈𝒬_h^exp∑_ω_h-1^o |_θ̂^k(ω^o_h-1|τ_h-1, 𝐚_h-1 ) - _θ^*(ω^o_h-1|τ_h-1, 𝐚_h-1 ) | )^2
≤4|𝒜|^2Q_A^2/γ^2∑_τ_h-1∈_h-1^k𝙳_^2( _θ̂^k ^𝚞_𝒬_h-1^exp(ω_h-1 | τ_h-1 ) , _θ^* ^𝚞_𝒬_h-1^exp (ω_h-1 | τ_h-1 ) ),
where (a) follows because |a+b|≤ |a|+|b|, (b) follows from <Ref> and <Ref>, and (c) follows from the physical meaning of the prediction feature, i.e. [ψ̅(τ_h)]_ℓ = _θ(𝐨_h^ℓ|τ_h,𝐚_h^ℓ).
By combining the upper bounds for I_2 and I_3, we conclude that
I_1≤4λ Q_A^2 d/γ^4 + 4|𝒜|^2Q_A^2/γ^2∑_τ_h-1∈_h-1^k𝙳_^2( _θ̂^k ^𝚞_𝒬_h-1^exp(ω_h-1 | τ_h-1 ) , _θ^* ^𝚞_𝒬_h-1^exp (ω_h-1 | τ_h-1 ) ) = (α_h-1^k)^2,
which completes the proof.
The following lemma validates that V_θ̂^k,b̂^k^π is an upper bound on the total variation distance between the estimated model θ̂^k and the true model θ^*.
Under event , for any π, we have
𝙳_( _θ̂^k^π (τ_H), _θ^*^π (τ_H) ) ≤α_τ_H∼_θ̂^k^π[ √(∑_h=0^H-1ψ̅̂̅^k(τ_h)_ ( Û_h^k )^-1^2 )],
where
α^2 = 4 λ H Q_A^2 d /γ^4 + 28|𝒜|^2Q^2_Aβ/γ^2.
We proceed the proof as follows:
∑_h(α_h-1^k)^2
≤4λ HQ_A^2d /γ^4 + 4|𝒜|^2Q_A^2/γ^2∑_h∑_(τ_h-1,π)∈_h-1^k𝙳_^2( _θ̂^k ^π(ω_h-1 | τ_h-1 ) , _θ^* ^π (ω_h-1 | τ_h-1 ) )
(a)≤ 4 λ H Q_A^2 d /γ^4 + 28β |𝒜|^2 Q_A^2 /γ^2,
where (a) follows from <Ref>.
The proof then follows directly from <Ref>, <Ref> and the Cauchy's inequality.
Note that the reward function R is within [0,1]. We hence obtain the following corollary.
Under event , for any k∈[K] and any reward R, we have
|V_θ̂^k , R ^π - V_θ^* , R ^π| ≤ V_θ̂^k, b̂^k^π,
where b̂^k(τ_H) = min{α√(∑_h ψ̅̂̅_h^k(τ_h)^2_ (Û_h^k )^-1) , 1 }.
§.§ Step 3: Sublinear Summation
To prove that ∑_k V_θ̂^k,b̂^k^π^k is sublinear, i.e., scales as O(√(K)), we first prove the following lemma that relates the estimated feature and the ground-truth feature via the total variation distance between the estimated model and the true model.
Under the event , for any k∈[K] and any policy π, we have
_τ_H∼_θ^*^π [√(∑_h=0^H-1ψ̅̂̅^k(τ_h) ^2_(Û_h^k)^-1)]
≤(1 + 2|𝒜|Q_A√(7rβ)/√(λ)) ∑_h=0^H-1_τ_h∼_θ^*^π[ ψ̅^* (τ_h) _(U_h^k)^-1] + 2HQ_A/√(λ)𝙳_( _θ^*^π(τ_H) , _θ̂^k^π(τ_H)).
Recall that
Û_h^k = λ I + ∑_τ∈_h^kψ̅̂̅^k(τ_h)ψ̅̂̅^k(τ_h)^.
We define the ground-truth counterpart of Û_h^k as follows:
U_h^k = λ I + ∑_τ∈_h^kψ̅^*(τ_h) ψ̅^*(τ_h)^.
Then, following from <Ref>, we have
√(∑_h=0^H-1ψ̅̂̅^k(τ_h) ^2_(Û_h^k)^-1)
≤∑_h=0^H-1ψ̅̂̅^k(τ_h) _(Û_h^k)^-1
≤1/√(λ)∑_h=0^H-1ψ̅̂̅^k(τ_h) - ψ̅^* (τ_h) _2 + ∑_h=0^H-1(1 + √(r)√(∑_τ_h∈_h^k ψ̅̂̅^k(τ_h) - ψ̅^* (τ_h) _2^2 )/√(λ)) ψ̅^* (τ_h) _(U_h^k)^-1.
Furthermore, note that
ψ̅̂̅^k(τ_h) - ψ̅^* (τ_h) _2
≤ψ̅̂̅^k(τ_h) - ψ̅^* (τ_h) _1
(a)≤ 2|𝒜|Q_A 𝙳_( _θ̂^k ^𝚞_𝒬_h^exp(ω_h|τ_h), _θ^* ^𝚞_𝒬_h^exp(ω_h|τ_h) ),
where (a) follows from the physical meaning of the prediction feature.
Following from <Ref>, we conclude that
√(∑_h=0^H-1ψ̅̂̅^k(τ_h) ^2_(Û_h^k)^-1)
≤1/√(λ)∑_h=0^H-1ψ̅̂̅^k(τ_h) - ψ̅^* (τ_h) _2 + (1 + 2 |𝒜|Q_A√(7rβ)/√(λ)) ∑_h=0^H-1ψ̅^* (τ_h) _(U_h^k)^-1.
For the first term, taking expectation, we have
∑_h=0^H-1 _τ_h∼_θ^*^π[ ψ̅̂̅^k(τ_h) - ψ̅^*(τ_h) _1 ]
≤∑_h=0^H-1∑_τ_h( ψ̅̂̅^k(τ_h)( _θ^π(τ_h) - _θ̂^k^π(τ_h) ) + ψ̅̂̅^k(τ_h) _θ̂^k^π(τ_h) - ψ̅^*(τ_h) _θ^*^π(τ_h) _1 )
≤∑_h=0^H-1∑_τ_h( ψ̅̂̅^k(τ_h) _1 | _θ^π(τ_h) - _θ̂^k^π(τ_h) | + ψ̂^k(τ_h) - ψ^*(τ_h) _1 π(τ_h))
(a)≤ 2∑_h=0^H-1 Q_A 𝙳_( _θ^*^π(τ_h) , _θ̂^k^π(τ_h))
≤ 2 H Q_A 𝙳_( _θ^*^π(τ_H) , _θ̂^k^π(τ_H)),
where (a) follows from ψ̅̂̅^k(τ_h)_1 ≤ |𝒬_h^A|≤ Q_A, and the physical meaning of ψ(τ_h).
Thus,
_τ_H∼_θ^*^π [√(∑_h=0^H-1ψ̅̂̅^k(τ_h) ^2_(Û_h^k)^-1)]
≤(1 + 2|𝒜|Q_A√(7rβ)/√(λ)) ∑_h=0^H-1_τ_h∼_θ^*^π[ ψ̅^* (τ_h) _(U_h^k)^-1] + 2HQ_A/√(λ)𝙳_( _θ^*^π(τ_H) , _θ̂^k^π(τ_H)).
The following lemma can be proved via the ℓ_2 Eluder argument <cit.>. Since we have a slightly different estimation oracle and guarantee, we provide the full proof here for completeness.
Under event , for any h∈{0,…,H-1}, we have
∑_k 𝙳_( _θ^* ^π^k(τ_h) , _θ̂^k^π^k(τ_h)) ≲ |𝒜|Q_A√(β)/γ√(rHKlog(1+dQ_AK/γ^4)).
Here, a≲ b indicates that there is an absolute positive constant c such that a≤ c· b.
First, by the first inequality in <Ref>, we have
𝙳_( _θ^* ^π^k(τ_h) , _θ̂^k^π^k(τ_h)) ≤∑_h∑_τ_H| ^k(ω_h)^(_h^k(o_h,a_h) - _h^*(o_h,a_h)) ψ^* (τ_h-1) | π^k(τ_H).
It suffices to upper bound ∑_τ_H| ^k(ω_h)^(_h^k(o_h,a_h) - _h^*(o_h,a_h)) ψ^* (τ_h-1) | π(τ_H) for any policy π.
For simplicity, we use similar notations as in <Ref>.
We index the future trajectory ω_h-1 = (o_h,a_h,…,o_H,a_H) by i, and the history trajectory τ_h-1 by j. We represent ψ̅^*(τ_h-1) as x_j, and ^k(ω_h)^(_h^k(o_h,a_h) - _h^* (o_h,a_h)) as w_i. We also denote π(ω_h-1|τ_h-1) by π_i|j.
Let λ_0 be a constant determined later and define the matrix
Λ_h^k = λ_0 I + ∑_t<k_j∼_θ^*^π^t[ x_jx_j^].
Then, for any π, we have
∑_τ_H | ^k(ω_h)^(_h^k(o_h,a_h) - _h^*(o_h,a_h)) ψ^* (τ_h-1) | π(τ_H)
= _j∼_θ^*^π^k[ ∑_i | π_i|j w_i ^ x_j|]
= _j∼_θ^*^π^k[ (∑_iπ_i|j𝚜𝚐𝚗(w_i ^ x_j) w_i )^ x_j]
(a)≤_j∼_θ^*^π^k[ x_j_Λ_h^†∑_iπ_i|j𝚜𝚐𝚗(w_i ^ x_j) w_i _Λ_h],
where (a) follows from the Cauchy's inequality.
Now we fix j=j_0 and consider the following term.
∑_iπ_i|j_0𝚜𝚐𝚗(w_i ^x_j_0) w_i^2_Λ_h
= λ_0 ∑_iπ_i|j_0𝚜𝚐𝚗(w_i ^x_j_0) w_i^2_2_I_1 + ∑_t<k_j∼_θ^*^π^t[ ( ∑_iπ_i|j_0𝚜𝚐𝚗(w_i ^x_j_0) w_i ^ x_j )^2 ] _I_2
For the first term I_1, we have
√(I_1) = √(λ_0)max_x∈ℝ^d_h-1:x_2=1|∑_iπ_i|j_0𝚜𝚐𝚗(w_i ^x_j_0) w_i^ x |
≤√(λ_0)max_x∈ℝ^d_h-1:x_2=1∑_ω_h-1π(ω_h-1 | j_0 ) | ^k(ω_h-1)^ x |
+ √(λ_0)max_x∈ℝ^d_h-1:x_2=1∑_ω_h-1π (ω_h-1 | j_0 ) | ^k(ω_h)^_h^*(o_h,a_h) x |
(a)≤√(dλ_0)/γ + Q_A√(d λ_0 )/γ^2
≤ 2Q_A√(d λ_0 )/γ^2,
where (a) follows from <Ref>, <Ref>, and the fact that max_x∈ℝ^d_h-1:x_2=1x_1≤√(d_h-1)≤√(d).
For the second term I_2, we have
I_2 ≤∑_t<k_τ_h-1∼_θ^*^π^t[ ( ∑_ω_h-1π(ω_h-1 | j_0) | ^k(ω_h)^( _h^k(o_h,a_h) - _h^*(o_h,a_h) ) ψ̅^*(τ_h-1) | )^2 ]
≤∑_t<k_τ_h-1∼_θ^*^π^t[ ( ∑_ω_h-1π(ω_h-1 | j_0) | ^k(ω_h-1)^( ψ̅^*(τ_h-1) - ψ̅̂̅^k (τ_h-1) ) | . .
+ . . ∑_ω_h-1π(ω_h-1|j_0)| ^k(ω_h)^( _h^k(o_h,a_h)ψ̅̂̅^k (τ_h-1) - _h^*(o_h,a_h) ψ̅^*(τ_h-1) ) | )^2 ]
(a)≤∑_t<k_τ_h-1∼_θ^*^π^t[ ( 1/γψ̅^* (τ_h-1) - ψ̅̂̅^k (τ_h-1) _1 . .
+ . . 1/γ∑_o_h,a_hπ^k(a_h|o_h,j_0)^k(o_h,a_h)ψ̅̂̅^k (τ_h-1) - _h^*(o_h,a_h) ψ̅^*(τ_h-1) _1 )^2 ]
= 1/γ^2∑_t<k_τ_h-1∼_θ^*^π^t[ ( ∑_ℓ=1^|𝒬_h-1|| _θ̂^k(𝐨_h-1^ℓ | τ_h-1, 𝐚_h-1^ℓ ) - _θ^*(𝐨_h-1^ℓ | τ_h-1, 𝐚_h-1^ℓ )| . .
+ ..∑_o_h,a_h∑_ℓ=1^|𝒬_h|π^k(a_h|o_h,j_0)| _θ̂^k(𝐨_h^ℓ, o_h | τ_h-1 ,a_h, 𝐚_h^ℓ ) - _θ^* (𝐨_h^ℓ, o_h | τ_h-1 ,a_h, 𝐚_h^ℓ ) | )^2]
≤|𝒬_h-1^exp|^2 /γ^2∑_t<k_τ_h-1∼_θ^*^π^t[ 𝙳^2_( _θ̂^k ^𝚞_𝒬_h-1^exp (ω_h-1 | τ_h-1 ) , _θ^* ^𝚞_𝒬_h-1^exp (ω_h-1 | τ_h-1 )) ]
(b)≤4|𝒜|^2Q_A^2/γ^2∑_t<k𝙳^2_𝙷( _θ̂^k ^ν_h(π^t, 𝚞_𝒬_h-1^exp) ( τ_H ) , _θ^* ^ν_h(π^t, 𝚞_𝒬_h-1^exp) ( τ_H ) ),
where (a) follows from <Ref>, and (b) follows because |𝒬_h-1^exp|=|𝒜||𝒬_h^A| + |𝒬_h-1^A| ≤ 2|𝒜|Q_A.
Thus, we have
∑_τ_H | ^k(ω_h)^(_h^k(o_h,a_h) - _h^*(o_h,a_h))ψ^*(τ_h-1) | π(τ_H)
≤_τ_h-1∼_θ^*^π[ α̃_h-1^k ψ̅^* (τ_h-1) _Λ_h-1^†],
where
(α̃_h-1^k)^2 = 4λ_0Q_A^2d/γ^4 + 4|𝒜|^2Q_A^2/γ^2∑_t<k𝙳^2_𝙷( _θ̂^k ^ν_h(π^t, 𝚞_𝒬_h-1^exp) ( τ_H ) , _θ^* ^ν_h(π^t, 𝚞_𝒬_h-1^exp) ( τ_H ) ) .
By choosing λ_0 = γ^4/4Q_A^2d, and recalling <Ref>, we further have
∑_h( α̃_h-1^k)^2≤28 |𝒜|^2Q_A^2 β/γ^2≜α̃^2.
Hence, with the Cauchy's inequality, we have
𝙳_( _θ̂^k^π^k(τ_H), _θ^*^π^k (τ_H) ) ≤min{α̃√(∑_h=1^H _τ_h-1∼_θ^*^π^k[ ψ̅^*(τ_h-1) ^2_Λ_h-1^-1] ) , 2}.
Taking the summation and applying the elliptical potential lemma (i.e., <Ref>), we have
∑_k=1^K 𝙳_( _θ̂^k^π^k(τ_H), _θ^*^π^k (τ_H) )
≤√(K)√(∑_k=1^K ∑_h=1^H min{α̃_τ_h-1∼_θ^*^π^k[ ψ̅^*(τ_h-1) ^2_Λ_h-1^†] , 4 })
≲|𝒜|Q_A/γ√(rHKβlog(1+dQ_AK/γ)),
which yields the final result.
The next lemma shows that the summation ∑_k=1^K V_θ̂^k, b̂^k^π^k grows sublinearly in K.
Under the event , with probability at least 1-δ, we have
∑_k=1^K V_θ̂^k, b̂^k^π^k≲( √(r) + Q_A√(H)/γ) |𝒜|Q_A^2H√(drHβ K β_0)/γ^2,
where β_0 = max{log(1+K/λ), log(1+dQ_AK/γ) }, and λ = γ |𝒜|^2Q_A βmax{√(r), Q_A√( H)/γ}/√(d H).
First, since b̂^k(τ_H)∈[0,1], we have
V_θ̂^k, b̂^k^π^k ≤ V_θ^* , b̂^k^π^k + 1/2𝙳_( _θ̂^k^π^k, _θ^*^π^k).
Then, following from <Ref>, we have
∑_k=1^K V_θ^* , b̂^k^π^k
≤∑_k=1^Kmin{α(1 + 2|𝒜|Q_A√(7rβ)/√(λ))∑_h=0^H-1_τ_H∼_θ^*^π^k[ ψ̅^*(τ_h) _(U_h^k)^-1] + ∑_k=1^Kα HQ_A/√(λ)𝙳_( _θ^*^π^k(τ_H) , _θ̂^k^π^k(τ_H)) ,1 }
≤∑_k=1^K min{α(1 + 2|𝒜|Q_A√(7rβ)/√(λ)) ∑_h=0^H-1_τ_h∼_θ^*^π^k[ ψ̅^*(τ_h) _(U_h^k)^-1] , 1 }_I_1 + ∑_k=1^Kα HQ_A/√(λ)𝙳_( _θ^*^π^k(τ_H) , _θ̂^k^π^k(τ_H)).
We next analyze the term I_1. Recall that U_h^k = λ I + ∑_τ_h∈_h^kψ̅^*(τ_h)ψ̅^*(τ_h)^. Further note that
{_τ_h∼_θ^*^π^k[ψ̅^*(τ_h)_(U_h^k)^-1] - ψ̅^*(τ_h^k+1,h+1)_(U_h^k)^-1}_k=1^K
forms a martingale. Applying the Azuma-Hoeffding inequality (see <Ref>), we have, with probability at least 1-δ,
I_1 ≤√(2Klog(2/δ)) + ∑_k=1^Kmin{α(1 + 2|𝒜|Q_A√(7rβ)/√(λ)) ∑_h=0^H-1ψ̅^*(τ_h^k+1,h+1) _(U_h^k)^-1 , 1 }
(a)≲√(2Klog(2/δ)) + α(1 + 2|𝒜|Q_A√(7rβ)/√(λ)) H √(rKlog(1+K/λ))
≲α(1 + |𝒜|Q_A√(rβ)/√(λ)) H √(rKlog(1+K/λ)),
where (a) follows from <Ref>.
Let β_0:=max{log(1+K/λ), log(1+dQ_AK/γ) } for simplicity. By <Ref>, we have,
∑_k V_θ̂^k,b̂^k ^π^k
≲α(1 + |𝒜|Q_A√(rβ)/√(λ)) H√(rKβ_0) + α H /√(λ)|𝒜|Q_A^2√(β)/γ√(rHKβ_0)
≤α(1 + |𝒜|Q_A√(rβ)/√(λ) + |𝒜|Q_A^2√(β H)/γ√(λ)) H√(rKβ_0)
≲( |𝒜|Q_A√(β)/γ + Q_A√(dH)/γ^2√(λ))( 1 + |𝒜|Q_A√(β)max{√(r), Q_A√( H)/γ}/√(λ)) H√(rKβ_0)
= ( 1 + √(dH)/ |𝒜|√(β)γ√(λ))(1 + |𝒜| Q_A√(β)max{√(r), Q_A√( H)/γ}/√(λ)) |𝒜| Q_A H √(rβ Kβ_0)/γ
(a)≲( 1 + Q_A √(dH)max{√(r), Q_A√( H)/γ}/γ) |𝒜| Q_A H√(rβ K β_0)/γ
≤( √(r) + Q_A√(H)/γ) |𝒜|Q_A^2H√(drHβ K β_0)/γ^2,
where (a) follows by choosing
λ = γ |𝒜|^2Q_A βmax{√(r), Q_A√( H)/γ}/√(d H).
This completes the proof.
§.§ Proof of <Ref>
Suppose <Ref> holds. Let p_min = O(δ/KH|𝒪|^H|𝒜|^H ), β = O(log|Θ̅_ε|), where ε=O(p_min/KH), λ = γ |𝒜|^2Q_A βmax{√(r), Q_A√( H)/γ}/√(d H), and α = O( Q_A√( H d)/γ^2√(λ) + |𝒜|Q_A√(β)/γ). Then, with probability at least 1-δ, PSR-UCB outputs a model θ^ϵ and a policy π̅ that satisfy
V_θ^*,R^π^* - V_θ^*,R^π̅≤ϵ, and ∀π, 𝙳_( _θ^ϵ^π(τ_H), _θ^*^π(τ_H)) ≤ϵ.
In addition, PSR-UCB terminates with a sample complexity of
Õ( ( r + Q_A^2 H/γ^2) r d H^3 |𝒜|^2 Q_A^4 β/γ^4ϵ^2).
The proof is under event , which occurs with probability at least 1-3δ.
Following from <Ref>, if PSR-UCB terminates, we have
∀π, 𝙳_(_θ^ϵ^π(τ_H), _θ^*^π(τ_H) ) = 2max_R|V_θ^ϵ,R^π - V_θ^*,R^π| ≤ 2 V_θ^ϵ,b̂^ϵ^π≤ϵ,
where the last inequality follows from the termination condition of PSR-UCB.
In addition,
V_θ^*,R^π^* - V_θ^*,R^π̅
= V_θ^*,R^π^* - V_θ^ϵ,R^π^* + V_θ^ϵ,R^π^* - V_θ^ϵ,R^π̅ + V_θ^ϵ,R^π̅ - V_θ^*,R^π̅
(a)≤ 2max_π V_θ^ϵ,b̂^ϵ^π≤ϵ,
where (a) is due to the design of π̅ and <Ref>.
Finally, recall that <Ref> states that
∑_k=1^K V_θ̂^k,b̂^k ≲( √(r) + Q_A√(H)/γ) |𝒜|Q_A^2H√(drHβ K β_0)/γ^2.
By the pigeon-hole principle and the termination condition of PSR-UCB, if
K = Õ( ( r + Q_A^2 H/γ^2) r d H^2 |𝒜|^2 Q_A^4 β/γ^4ϵ^2),
PSR-UCB must terminate within K episodes, implying that the sample complexity of PSR-UCB is at most
Õ( ( r + Q_A^2 H/γ^2) r d H^3 |𝒜|^2 Q_A^4 β/γ^4ϵ^2).
§ PROOF OF <REF> (FOR OFFLINE PSR-LCB)
In this section, we present the full analysis for the offline algorithm PSR-LCB to show <Ref>. In particular, the proof of <Ref> consists of three main steps. Step 1: We provide the offline estimation guarantee for the estimated model θ̂. Step 2: Building on the first step, we are able to show that V_θ̂,b̂^π is a valid upper bound of the total variation distance between θ̂ and θ^*. Hence, V_θ̂, R^π - V_θ̂,b̂^π is a valid LCB for the true value V_θ^*,R^π. Step 3: We translate V_θ̂,b̂^π to the ground-truth prediction feature ψ̅^*(τ_h). Finally, we show that V_θ̂,b̂^π scales in the order of 1/√(K), which characterizes the performance of PSR-LCB and completes the proof.
We first introduce the following definitions for good events.
Good Events. Recall that Θ_min = {θ: ∀ h, τ_h ∈_h, _θ^π^b(τ_h) ≥ p_min}, where p_min≤δ/KH(|𝒪||𝒜|)^H. Let ε≤p_min/KH. Analogous to the proof for online learning, we introduce three events defined as follows.
_ω^o = {∀θ∈Θ_min, ∑_h ∑_τ_h ∈_h 𝙳_^2 ( _θ^π^b(ω_h|τ_h ), _θ^*^π^b(ω_h|τ_h ) ) .
. ≤ 6∑_h∑_τ_H ∈_h log_θ^*^π^b (τ_H)/_θ^π^b (τ_H) + 31log3K|Θ̅_ε|/δ},
_π^o = {∀θ∈Θ, K 𝙳_𝙷^2 ( _θ^π^b(τ_H) , _θ^*^π (τ_H) ) ≤∑_τ_H ∈log_θ^*^π^b(τ_H) /_θ^π^b(τ_H) + 2log3K|Θ̅_ε|/δ},
_min^o = {∀ h, τ_h ∈_h , _θ^*^π^b(τ_h) ≥ p_min}.
Following the proof steps similar to those in <Ref>, we can conclude that ^o:= _ω^o∩_π^o∩_min^o occurs with probability at least 1-δ.
§.§ Step 1: Estimation Guarantee
Under event ^o, the estimated model θ̂ by PSR-LCB
satisfies
∑_h∑_τ_h∈_h𝙳_^2( _θ̂^π^b (ω_h|τ_h), _θ^* ^π^b (ω_h|τ_h) ) ≤ 7β̂,
𝙳_𝙷^2( _θ̂^π^b (τ_H), _θ^* ^π^b (τ_H) ) ≤ 7β̂/K,
where β̂ = 31log3K|Θ̅_ε|/δ.
The proof follows the steps similar to those in <Ref>, except that we replace the exploration policies ν_h(π^k,𝚞_𝒬_h-1^exp) by the behavior policy.
§.§ Step 2: UCB for Total Variation Distance and LCB for Value Function
The following lemma provides an explicit upper bound on the total variation distance between the estimated model and the true model.
Under event ^o, for any policy π, we have
𝙳_ ( _θ^*^π(τ_H) , _θ̂^π(τ_H)) ≲√(β̂/Hι^2γ^2)∑_h=1^H _τ_h-1∼_θ^*^π[ ψ̅^* (τ_h-1)_Λ_h-1^-1],
where Λ_h-1 = λ_0 I + K/H_τ_h-1∼_θ^*^π^b[ ψ̅^*(τ_h-1) ψ̅^*(τ_h-1)^], and λ_0 = γ^4/4Q_A^2d.
Similarly to the analysis in that for <Ref>, we index ω_h-1 = (o_h,a_h,…,o_H,a_H) by i, and τ_h-1 by j. In addition, we denote 𝐦̂(ω_h)^(_h (o_h,a_h) - _h^*(o_h,a_h) ) by w_i^, denote ψ̅^* (τ_h-1) by x_j, and denote π(ω_h-1|τ_h-1) by π_i|j. Then, we have
∑_τ_H | 𝐦̂ (ω_h)^(_h (o_h,a_h) - _h^*(o_h,a_h) ) ψ^* (τ_h-1) | π(τ_H)
= ∑_i ∑_j |w_i^x_j|π_i|j_θ^* ^π(j)
= ∑_j ∑_i (π_i|j·𝚜𝚐𝚗(w_i^x_j)· w_i)^ x_j·_θ^*^π(j)
= ∑_j( ∑_iπ_i|j·𝚜𝚐𝚗(w_i^x_j )· w_i)^ x_j·_θ^* ^π(j)
≤_j∼_θ^*^π[ x_j _Λ_h-1^-1√(∑_iπ_i|j·𝚜𝚐𝚗(w_i^x_j )· w_i_Λ_h-1^2 )],
where Λ_h-1 = λ_0 I + K/H_τ_h-1∼_θ^*^π^b[ ψ̅^*(τ_h-1) ψ̅^*(τ_h-1)^] and λ_0 will be determined later. We fix τ_h-1 = j_0 and aim to analyze the coefficient of x_j_0_Λ_h-1^-1. We have
∑_iπ_i|j_0·𝚜𝚐𝚗(w_i^x_j_0 )· w_i_Λ_h-1^2
= λ_0 ∑_iπ_i|j_0·𝚜𝚐𝚗(w_i^x_j_0 )· w_i_2^2_I_1 + K/H_j∼_θ^*^π^b[ (∑_iπ_i|j_0·𝚜𝚐𝚗(w_i^x_j_0 )· w_i)^ x_j]^2 _I_2.
For the first term I_1, we have
√(I_1) = √(λ_0)max_x∈ℝ^d_h-1: x_2=1|∑_iπ_i|j_0𝚜𝚐𝚗(w_i^x_j_0)w_i^ x|
≤√(λ_0)max_x∈ℝ^d_h-1:x_2=1∑_ω_h-1| 𝐦^*(ω_h)^( _h (o_h,a_h) - _h^*(o_h,a_h) ) x | π(ω_h-1|j_0)
≤√(λ_0)max_x∈ℝ^d_h-1:x_2=1∑_ω_h-1| 𝐦̂ (ω_h)^_h (o_h,a_h) x | π(ω_h-1|j_0)
+ √(λ_0)max_x∈ℝ^d_h-1:x_2=1∑_ω_h-1| 𝐦̂(ω_h)^_h^*(o_h,a_h) x | π(ω_h-1|j_0)
(a)≤√(dλ_0)/γ + √(λ_0)/γmax_x∈ℝ^d_h-1:x_2=1∑_o_h,a_h_h (o_h,a_h) x _1 π(a_h|o_h,j_0)
(b)≤2Q_A√(dλ_0)/γ^2,
where (a) follows from <Ref>, and (b) follows from <Ref>.
For the second term I_2, we have
I_2 ≤K/H_τ_h-1∼_θ^*^π^b[( ∑_ω_h-1| 𝐦̂ (ω_h)^(_h (o_h,a_h) - _h^*(o_h,a_h) )ψ̅^* (τ_h-1) | π(ω_h-1|j_0) )^2 ]
≤K/H_τ_h-1∼_θ^*^π^b[ ( ∑_ω_h-1| 𝐦̂(ω_h)^( _h (o_h,a_h) ψ̅̂̅ (τ_h-1) - _h^*(o_h,a_h) ψ̅^*(τ_h-1) ) | π(ω_h-1|j_0) . .
+ . . ∑_ω_h-1| 𝐦̂ (ω_h)^_h(o_h,a_h) (ψ̅̂̅ (τ_h-1) - ψ̅^*(τ_h-1) ) | π(ω_h-1|j_0) )^2 ]
(a)≤K/H_τ_h-1∼_θ^*^π^b[ ( 1/γ∑_o_h,a_h_θ̂ (o_h|τ_h-1)ψ̅̂̅_h (τ_h) - _θ(o_h|τ_h-1) ψ̅_h^*(τ_h) _1 π(a_h|o_h,j_0) . .
+ . . 1/γψ̅̂̅ (τ_h-1) - ψ̅^*(τ_h-1) _1 )^2 ]
= K/Hγ^2_τ_h-1∼_θ^*^π^b[( ∑_o_h,a_h∑_ℓ=1^|𝒬_h|| _θ̂(𝐨_h^ℓ,o_h|τ_h-1,a_h,𝐚_h^ℓ) - _θ^* (𝐨_h^ℓ,o_h|τ_h-1,a_h,𝐚_h^ℓ) |π(a_h|o_h,j_0) . .
+ . . ∑_ℓ=1^|𝒬_h-1|| _θ̂(𝐨_h-1^ℓ | τ_h-1, 𝐚_h-1^ℓ ) - _θ^* (𝐨_h-1^ℓ | τ_h-1, 𝐚_h-1^ℓ ) | )^2 ]
≤K/Hγ^2_τ_h-1∼_θ^*^π^b[ ( ∑_𝐚_h-1∈𝒬_h^exp∑_ω_h-1^o |_θ̂(ω^o_h-1|τ_h-1, 𝐚_h-1 ) - _θ^*(ω^o_h-1|τ_h-1, 𝐚_h-1 ) | )^2 ]
(b)≤ K / H ι^2 γ^2_τ_h-1∼_θ^*^π^b[𝙳_^2( _θ̂^π^b (ω_h-1 | τ_h-1 ) , _θ^* ^π^b (ω_h-1 | τ_h-1 ) ) ]
(c)≤64K/Hι^2γ^2𝙳_𝙷^2 (_θ̂^π^b(τ_H), _θ^*^π^b(τ_H) )
(d)≲β̂/Hι^2γ^2,
where (a) follows from <Ref> and <Ref>, (b) follows because π^b(𝐚_h-1)≥ι for all 𝐚_h-1∈𝒬_h-1^exp, (c) follows from <Ref>, and (d) follows from <Ref>.
Thus, by choosing λ_0 = γ^4/4Q_A^2d, we have
∑_τ_H | 𝐦̂(ω_h)^(_h (o_h,a_h) - _h^*(o_h,a_h) ) ψ^* (τ_h-1) | π(τ_H)
≲√(β̂/Hι^2γ^2)_τ_h-1∼_θ^* ^π[ ψ̅^* (τ_h-1)_Λ_h-1^-1].
Following from <Ref>, we conclude that
𝙳_ ( _θ^*^π(τ_H) , _θ̂^π(τ_H)) ≲min{√(β̂/Hι^2γ^2)∑_h=1^H _τ_h-1∼_θ^*^π[ ψ̅^* (τ_h-1)_Λ_h-1^-1] ,2 }.
Following from <Ref>, we provide the following lemma that upper bounds the estimation error by the bonus function.
Under event ^o, for any policy π, we have
∑_τ_H | 𝐦^*(ω_h)^(_h (o_h,a_h) - _h^*(o_h,a_h) ) ψ̂ (τ_h-1) | π(τ_H) ≤_τ_h-1∼_θ̂^π[ α̂_h-1ψ̅̂̅ (τ_h-1)_(Û_h-1)^-1],
where
Û_h-1 = λ̂ I + ∑_τ_h-1∈_h-1ψ̅̂̅ (τ_h-1 )ψ̅̂̅ (τ_h-1 )^,
(α̂_h-1)^2 = 4λ̂ Q_A^2d/γ^4 + 1 /ι^2 γ^2∑_τ_h-1∈_h-1𝙳_^2( _θ̂^π^b (ω_h-1 | τ_h-1 ) , _θ^* ^π^b (ω_h-1 | τ_h-1 ) ).
We index ω_h-1 = (o_h,a_h,…,o_H,a_H) by i, and τ_h-1 by j. In addition, we denote
𝐦^*(ω_h)^(_h (o_h,a_h) - _h^*(o_h,a_h) ) by w_i^, ψ̅̂̅_h-1 (τ_h-1) = ψ̂_h-1(τ_h-1)/ϕ̂_h-1^ψ̂_h-1 (τ_h-1) by x_j, and π(ω_h-1|τ_h-1) by π_i|j. Then, we have
∑_τ_H | 𝐦^* (ω_h)^(_h (o_h,a_h) - _h^*(o_h,a_h) ) ψ̂_h-1 (τ_h-1) | π(τ_H)
= ∑_i ∑_j |w_i^x_j|π_i|j_θ̂^π(j)
= ∑_j ∑_i (π_i|j·𝚜𝚐𝚗(w_i^x_j)· w_i)^ x_j·_θ^π(j)
= ∑_j( ∑_iπ_i|j·𝚜𝚐𝚗(w_i^x_j )· w_i)^ x_j·_θ̂^π(j)
≤_j∼_θ̂^π[ x_j _(Û_h-1^k)^-1√(∑_iπ_i|j·𝚜𝚐𝚗(w_i^x_j )· w_i_Û_h-1^2 )].
We fix an index j = j_0, and aim to analyze ∑_iπ_i|j_0·𝚜𝚐𝚗(w_i^x_j_0 )· w_i_Û_h-1^k^2, which can be written as
∑_iπ_i|j_0·𝚜𝚐𝚗(w_i^x_j_0 )· w_i_Û_h-1^k^2
= λ̂∑_iπ_i|j_0·𝚜𝚐𝚗(w_i^x_j_0 )· w_i_2^2_I_1 + ∑_ j∈_h-1[ (∑_iπ_i|j_0·𝚜𝚐𝚗(w_i^x_j_0 )· w_i)^ x_j]^2_I_2 .
For the first term I_1, we have
√(I_1) = √(λ̂)max_x∈ℝ^d_h-1: x_2=1|∑_iπ_i|j_0𝚜𝚐𝚗(w_i^x_j_0)w_i^ x|
≤√(λ̂)max_x∈ℝ^d_h-1:x_2=1∑_ω_h-1| 𝐦^*(ω_h)^( _h (o_h,a_h) - _h^*(o_h,a_h) ) x | π(ω_h-1|j_0)
≤√(λ̂)max_x:x_2=1∑_ω_h-1|𝐦^*(ω_h)^_h (o_h,a_h) x | π(ω_h-1|j_0)
+ √(λ̂)max_x:x_2=1∑_ω_h-1|𝐦^*(ω_h)^_h^*(o_h,a_h) x | π(ω_h-1|j_0)
≤√(λ̂)/γmax_x:x_2=1∑_o_h,a_h_h (o_h,a_h) x _1 π(a_h|o_h,j_0) + √(λ̂)/γ x_1
≤ 2Q_A√(dλ̂)/γ^2.
For the second term I_2, we have
I_2 ≤∑_τ_h-1∈_h-1( ∑_ω_h-1| 𝐦^* (ω_h)^(_h (o_h,a_h) - _h^*(o_h,a_h) )ψ̅̂̅ (τ_h-1) | π(ω_h-1|j_0) )^2
≤∑_τ_h-1∈_h-1( ∑_ω_h-1|𝐦^*(ω_h)^( _h (o_h,a_h) ψ̅̂̅ (τ_h-1) - _h^*(o_h,a_h) ψ̅^*(τ_h-1) ) | π(ω_h-1|j_0) .
+ . ∑_ω_h-1| 𝐦^*(ω_h)^_h^*(o_h,a_h) ( ψ̅̂̅ (τ_h-1) - ψ̅^*(τ_h-1) ) | π(ω_h-1|j_0) )^2
(a)≤∑_τ_h-1∈_h-1( 1/γ∑_o_h,a_h_θ̂ (o_h|τ_h-1)ψ̅̂̅ (τ_h) - _θ(o_h|τ_h-1) ψ̅^*(τ_h) _1 π(a_h|o_h,j_0) .
+ . 1/γψ̅̂̅ (τ_h-1) - ψ̅^*(τ_h-1) _1 )^2
= 1/γ^2∑_τ_h-1∈_h-1( ∑_o_h,a_h∑_ℓ=1^|𝒬_h|| _θ̂(𝐨_h^ℓ,o_h|τ_h-1,a_h,𝐚_h^ℓ) - _θ^* (𝐨_h^ℓ,o_h|τ_h-1,a_h,𝐚_h^ℓ) |π(a_h|o_h,j_0) .
+ . ∑_ℓ=1^|𝒬_h-1|| _θ̂(𝐨_h-1^ℓ | τ_h-1, 𝐚_h-1^ℓ ) - _θ^* (𝐨_h-1^ℓ | τ_h-1, 𝐚_h-1^ℓ ) | )^2
≤1/γ^2∑_τ_h-1∈_h-1( ∑_𝐚_h-1∈𝒬_h-1^exp∑_ω_h-1^o |_θ̂(ω^o_h-1|τ_h-1, 𝐚_h-1 ) - _θ^*(ω^o_h-1|τ_h-1, 𝐚_h-1 ) | )^2
(b)≤ 1 /ι^2 γ^2∑_τ_h-1∈_h-1𝙳_^2( _θ̂^π^b (ω_h-1 | τ_h-1 ) , _θ^* ^π^b (ω_h-1 | τ_h-1 ) ),
where (a) follows from <Ref>, and (b) follows from the condition π^b(𝐚_h)≥ι for any 𝐚_h∈𝒬_h-1^exp.
Thus, we conclude that
∑_τ_H | 𝐦^*(ω_h)^(_h (o_h,a_h) - _h^*(o_h,a_h) ) ψ̂ (τ_h-1) | π(τ_H)
≤_τ_h-1∼_θ̂^π[ α̂_h-1ψ̅̂̅ (τ_h-1)_( Û_h-1)^-1],
where
( α̂_h-1 )^2 = 4λ̂Q_A^2d/γ^4 + 1 /ι^2 γ^2∑_τ_h-1∈_h-1𝙳_^2( _θ̂^π^b(ω_h-1 | τ_h-1 ) , _θ^* ^π^b ( ω_h-1 | τ_h-1 ) ).
Under event ^o, for any reward R, we have,
| V_θ̂ , R ^π - V_θ^* , R^π| ≤ V_θ̂, b̂^π,
where b̂(τ_H) = min{α̂√(∑_h ψ̅̂̅_h (τ_h) ^2_ (Û_h )^-1) , 1 }, and α̂ = √( 4λ̂ H Q_A^2d /γ^4 + 7β̂/ι^2 γ^2).
By the definition of the total variation distance, we have
| V_θ̂ , R ^π - V_θ^* , R^π| ≤𝙳_( _θ̂^π, _θ^*^π)
(a)≤∑_h=1^H ∑_τ_H| 𝐦^*(ω_h)^(_h (o_h,a_h) - _h^*(o_h,a_h) ) ψ̂ (τ_h-1) | π(τ_H)
(b)≤min{∑_h=1^H _τ_h-1∼_θ̂^π[ α̂_h-1ψ̅̂̅ (τ_h-1)_( Û_h-1)^-1] , 1 }
(c)≤min{√(∑_h=1^H α̂_h-1^2)√(∑_h = 0^H-1ψ̅̂̅_h (τ_h) ^2_ (Û_h )^-1) , 1 },
where (a) follows from <Ref>, (b) follows from <Ref> and because R(τ_H)∈[0,1], and (c) follows from the Cauchy's inequality.
By <Ref>, we further have
∑_h=1^H α̂_h-1^2
≤ 4λ̂ H Q_A^2d /γ^4 + 1/ι^2 γ^2∑_h∑_τ_h-1∈_h-1𝙳_^2( _θ̂^k ^π^b (ω_h-1 | τ_h-1 ) , _θ^* ^π^b (ω_h-1 | τ_h-1 ) )
≤ 4λ̂ H Q_A^2d /γ^4 + 7 β̂/ι^2 γ^2,
which concludes the proof.
§.§ Step 3: Relationship between Empirical Bonus and Ground-Truth Bonus
Under event ^o, for any π, we have
_τ_H∼_θ^*^π [√(∑_h=0^H-1ψ̅̂̅_h (τ_h) ^2_(Û_h)^-1)]
≤(1 + 2 √(7 rβ̂)/ι√(λ̂)) ∑_h=0^H-1_τ_h∼_θ^*^π[ψ̅_h^* (τ_h) _(U_h)^-1] + 2HQ_A/√(λ̂)𝙳_( _θ^*^π(τ_H) , _θ̂^π(τ_H)).
Recall that
Û_h = λ̂ I + ∑_τ_h∈_hψ̅̂̅(τ_h)ψ̅̂̅(τ_h)^.
We define the ground-truth counterpart of Û_h as follows:
U_h = λ̂ I + ∑_τ_h∈_hψ̅(τ_h)ψ̅(τ_h)^.
Then, by <Ref>, we have
√(∑_h ψ̅̂̅ (τ_h) ^2_( Û_h )^-1)
≤∑_h ψ̅̂̅_h (τ_h) _( Û_h )^-1
≤1/√(λ̂)∑_h ψ̅̂̅ (τ_h) - ψ̅_h^* (τ_h) _2 + ∑_h (1 + √(r)√(∑_τ_h∈_h ψ̅̂̅(τ_h) - ψ̅^* (τ_h) _2^2 )/√(λ̂)) ψ̅^* (τ_h) _(U_h)^-1 .
Furthermore, note that
ψ̅̂̅ (τ_h) - ψ̅^* (τ_h) _2 ≤ψ̅̂̅_h (τ_h) - ψ̅_h^* (τ_h) _1
≤1/ι𝙳_( _θ̂^π^b (ω_h|τ_h), _θ^* ^π^b (ω_h|τ_h) ),
where the second inequality follows because π^b(𝐚_h) ≥ι for all 𝐚_h∈𝒬_h^exp.
By <Ref>, we conclude that
√(∑_h ψ̅̂̅_h (τ_h) ^2_(Û_h )^-1)
≤1/√(λ̂)∑_h ψ̅̂̅_h (τ_h) - ψ̅_h^* (τ_h) _2 + (1 + √(7rβ̂)/ι√(λ̂)) ∑_h ψ̅_h^* (τ_h) _(U_h^k)^-1.
For the first term, following an argument similar to those in <Ref> and taking the expectation, we have
∑_h _τ_h∼_θ^*^π[ ψ̅̂̅(τ_h) - ψ̅(τ_h) _1 ]
≤∑_h ∑_τ_h( ψ̅̂̅(τ_h)( _θ^π(τ_h) - _θ̂^π(τ_h) ) + ψ̅̂̅(τ_h) _θ̂^π(τ_h) - ψ̅(τ_h) _θ^*^π(τ_h) _1 )
≤∑_h ∑_τ_h( ψ̅̂̅(τ_h) _1 | _θ^π(τ_h) - _θ̂^π(τ_h) | + ψ̂(τ_h) - ψ̅^*(τ_h) _1 π(τ_h))
≤ 2∑_h |𝒬_h^A| 𝙳_( _θ^*^π(τ_h) , _θ̂^π(τ_h))
≤ 2HQ_A 𝙳_( _θ^*^π(τ_H) , _θ̂^π(τ_H)),
which completes the proof.
§.§ Proof of <Ref>
Now, we are ready to prove <Ref>.
Suppose <Ref> holds. Let ι = min_𝐚_h∈𝒬_h^expπ^b(𝐚_h), p_min = O(δ/KH(|𝒪||𝒜|)^H), ε=O(p_min/KH), β̂ = O(log|Θ̅_ε|), λ̂ = γ C_π^b,∞^πβ̂max{√(r), Q_A√(H)/γ}/ι^2Q_A√(dH), and α̂ = O( Q_A√(dH)/γ^2√(λ̂) + √(β̂)/ιγ). Then, with probability at least 1-δ, the output π̅ of <Ref> satisfies that
∀π, V_θ^*, R ^π - V_θ^*,R^π̅≤Õ( (√(r) + Q_A√(H)/γ) C_π^b,∞^π Q_A H^2 /ιγ^2 √(r d β̂/K)).
Under event ^o, the performance difference can be upper bounded as follows.
V_θ^*,r^π - V_θ^*,r^π̂
= V_θ^*,r^π - (V_θ̂, r^π - V_θ̂, b̂^π) + ( V_θ̂, r^π - V_θ̂, b̂^π ) - ( V_θ̂, r^π̂ - V_θ̂, b̂^π̂ )_I_1 + V_θ̂, r^π̂ - V_θ̂, b̂^π̂ - V_θ^*,r^π̂_I_2
(a)≤1/2𝙳_(_θ̂^π , _θ^*^π) + V_θ̂, b̂^π
(b)≤𝙳_(_θ̂^π , _θ^*^π) + V_θ^*, b̂^π,
where (a) follows from the design of π̅ that results in I_1≤ 0, and from <Ref> which implies I_2≤ 0, and (b) follows from <Ref>.
By <Ref>, we have
𝙳_(_θ̂^π , _θ^*^π) ≲√(β̂/Hι^2γ^2)∑_h=1^H _τ_h-1∼_θ^*^π[ ψ̅^* (τ_h-1)_Λ_h-1^-1]
(a)≤ C_π^b,∞^π√(β̂/Hι^2γ^2)∑_h=1^H _τ_h-1∼_θ^*^π^b[ ψ̅^* (τ_h-1)_Λ_h-1^-1]
≤ C_π^b,∞^πH√(β̂/Hι^2γ^2)√(rH/K) = C_π^b,∞^πH√(rβ̂/Kι^2γ^2),
where (a) follows from the definition of the coverage coefficient.
By <Ref>, we have
V_θ^*, b̂^π
≤min{α̂(1 + 2 √(7 rβ̂)/ι√(λ̂)) ∑_h=0^H-1_τ_h∼_θ^*^π[ψ̅^* (τ_h) _(U_h)^-1] + 2α̂HQ_A/√(λ̂)𝙳_( _θ^*^π(τ_H) , _θ̂^π(τ_H)), 1 }
≤min{α̂(1 + 2 √(7 rβ̂)/ι√(λ̂)) ∑_h=0^H-1_τ_h∼_θ^*^π[ψ̅^* (τ_h) _(U_h)^-1], 1 } + 2α̂HQ_A/√(λ̂)𝙳_( _θ^*^π(τ_H) , _θ̂^π(τ_H))
≤min{α̂C_π^b,∞^π(1 + 2 √(7 rβ̂)/ι√(λ̂)) ∑_h=0^H-1_τ_h∼_θ^*^π^b[ψ̅^* (τ_h) _(U_h)^-1], 1 }_I_3 + 2α̂HQ_A/√(λ̂)𝙳_( _θ^*^π(τ_H) , _θ̂^π(τ_H)).
Recall that U_h = λ̂ I + ∑_τ_h∈_hψ̅^*(τ_h)ψ̅^*(τ_h)^ and the distribution of τ_h∈_h follows _θ^*^π^b. Therefore, by the Azuma-Hoeffding's inequality (<Ref>), with probability at least 1-δ, we have
KI_3 ≤√(2Klog(2/δ)) + min{α̂C_π^b,∞^π(1 + 2 √(7 rβ̂)/ι√(λ̂)) ∑_h=0^H-1∑_τ_h∈_hψ̅^* (τ_h) _(U_h)^-1 , 1 }
(a)≲√(Klog(2/δ)) + α̂C_π^b,∞^π(1+√(rβ̂)/ι√(λ̂)) H √(rK/H)
≲α̂C_π^b,∞^π(1+√(rβ̂)/ι√(λ̂)) √(rHKlog(2/δ)),
where (a) follows from the Cauchy's inequality.
Combining the above results, we have
V_θ^*,r^π - V_θ^*,r^π̂
≲α̂C_π^b,∞^π(1+√(rβ̂)/ι√(λ̂)) √(rHlog(2/δ)/K) + α̂HQ_A/√(λ̂)C_π^b,∞^π H/ιγ√(rβ̂/K)
≲ C_π^b,∞^π(Q_A√(dH)/γ^2√(λ̂) + √(β̂)/ιγ) ( 1 + max{√(r) , √(H)Q_A/γ}/ι√(λ̂)) H√(rHβ̂log(2/δ)/K)
= C_π^b,∞^π(ι Q_A√(dH)/γ√(β̂)√(λ̂) + 1 ) ( 1 + max{√(r) , √(H)Q_A/γ}/ι√(λ̂)) H√(β̂)/ιγ√(rHβ̂log(2/δ)/K)
(a)≲max{√(r) , √(H)Q_A/γ} C_π^b,∞^πQ_AH^2√(d)/ιγ^2√(rβ̂log(2/δ)/K),
where (a) follows because
λ̂ = γ√(β̂)max{√(r), Q_A√(H)/γ}/ι^2Q_A√(dH).
This completes the proof.
§ AUXILIARY LEMMAS
Consider a domain 𝒳, and a filtration ℱ_1⊂…⊂ℱ_k⊂… on the domain 𝒳. Suppose that {X_1,…,X_k,…}⊂[-B,B] is adapted to the filtration (ℱ_t)_t=1^∞, i.e., X_k is ℱ_k-measurable. Then
( |∑_t=1^k-1( X_t - [ X_t |ℱ_t-1] ) | ≥√(2kB^2log(2/δ))) ≤δ.
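As a quick numerical illustration (not part of the proof), the following Python sketch estimates the deviation probability for i.i.d. variables uniform on [-B,B], a special case of a bounded martingale difference sequence; the parameters B, k, δ and the number of trials are arbitrary choices.

import math, random

# Monte Carlo sanity check of the deviation bound: i.i.d. X_t uniform
# on [-B, B], so the centered sum is a bounded martingale.
random.seed(0)
B, k, delta, trials = 1.0, 200, 0.1, 20000
radius = math.sqrt(2 * k * B ** 2 * math.log(2 / delta))
violations = sum(
    1 for _ in range(trials)
    if abs(sum(random.uniform(-B, B) for _ in range(k - 1))) >= radius
)
# The empirical violation frequency should not exceed delta = 0.1;
# for such sums it is typically far smaller.
print(violations / trials, "<=", delta)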
The following lemma characterizes the relationship between the total variation distance and the squared Hellinger distance. Note that the result for probability measures has been proved in Lemma H.1 in <cit.>. Since we consider more general bounded measures, we provide the full proof for completeness.
Given two bounded measures P and Q defined on the set 𝒳. Let |P| = ∑_x∈𝒳 P(x) and |Q| = ∑_x∈𝒳Q(x). We have
𝙳_^2(P,Q) ≤ 4(|P|+|Q|)𝙳_𝙷^2(P,Q)
In addition, if P_Y|X, Q_Y|X are two conditional distributions over a random variable Y, and P_X,Y = P_Y|XP, Q_X,Y= Q_Y|XQ are the joint distributions when X follows the distributions P and Q, respectively, we have
_X∼ P [ 𝙳_𝙷^2(P_Y|X,Q_Y|X)] ≤ 8𝙳_𝙷^2 (P_X,Y,Q_X,Y).
We first prove the first inequality. By the definition of total variation distance, we have
𝙳_^2(P,Q) = (∑_x |P(x) - Q(x)| )^2
= (∑_x (√(P(x)) - √(Q(x)))(√(P(x)) + √(Q(x))) )^2
(a)≤(∑_x(√(P(x)) - √(Q(x)))^2) ( 2∑_x( P(x) + Q(x)) )
≤ 4(|P| + |Q|) 𝙳_𝙷^2(P,Q),
where (a) follows from the Cauchy's inequality and because (a+b)^2≤ 2a^2+2b^2.
For the second inequality, we have,
_X∼ P [ 𝙳_𝙷^2(P_Y|X,Q_Y|X)]
= ∑_xP(x)( ∑_y (√(P_Y|X(y)) - √(Q_Y|X(y)))^2 )
= ∑_x,y( √(P_X,Y(x,y)) - √(Q_X,Y(x,y)) + √(Q_Y|X(y) Q(x)) - √(Q_Y|X(y) P(x)))^2
≤ 2∑_x,y( √(P_X,Y(x,y)) - √(Q_X,Y(x,y)))^2 + 2∑_x,y Q_Y|X(y)( √( Q(x)) - √( P(x)))^2
= 4𝙳_𝙷^2(P_X,Y,Q_X,Y) + 2(|P|+|Q| - 2∑_x√(P(x)Q(x)))
(a)≤ 4𝙳_𝙷^2(P_X,Y,Q_X,Y) + 2(|P|+|Q| - 2∑_x∑_y√(P_Y|X(y)P(x)Q_Y|X(y)Q(x)))
= 8 𝙳_𝙷^2(P_X,Y,Q_X,Y),
where (a) follows from the Cauchy's inequality that applies on ∑_y√(P_Y|X(y)Q_Y|X(y)).
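The first inequality is easy to test numerically. The following Python sketch checks it on random bounded measures, taking 𝙳_𝙷^2(P,Q) = (1/2)∑_x (√(P(x)) - √(Q(x)))^2, the normalization consistent with the constant 4(|P|+|Q|) in the proof above.

import numpy as np

# Check D_tv(P,Q)^2 <= 4 (|P| + |Q|) D_H(P,Q)^2 on random bounded
# measures, with D_H^2(P,Q) = (1/2) sum_x (sqrt P(x) - sqrt Q(x))^2.
rng = np.random.default_rng(0)
for _ in range(1000):
    n = rng.integers(2, 50)
    P, Q = rng.random(n) * rng.random(), rng.random(n) * rng.random()
    tv = np.abs(P - Q).sum()
    hell2 = 0.5 * ((np.sqrt(P) - np.sqrt(Q)) ** 2).sum()
    assert tv ** 2 <= 4 * (P.sum() + Q.sum()) * hell2 + 1e-9
print("inequality verified on 1000 random instances")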
Consider two vector sequences {x_i}_i∈ℐ and {y_i}_i∈ℐ and an index subset 𝒥⊂ℐ. Suppose A = λ I + ∑_j∈𝒥 x_jx_j^ and B = ∑_j∈𝒥y_jy_j^, and 𝚛𝚊𝚗𝚔( {x_i}_i∈ℐ) = 𝚛𝚊𝚗𝚔( {y_i}_i∈ℐ) = r. Then
∀ i∈ℐ, x_i_A^-1≤1/√(λ) x_i - y_i _2 + ( 1 + 2√(r)√(∑_j∈𝒥x_j-y_j_2^2 )/√(λ)) y_i_B^-1.
We first write
x_i_A^-1 - y_i_B^-1
= x_i_A^-1 - y_i_A^-1 + y_i_A^-1 - y_i_B^-1
= x_i^2_A^-1 - y_i^2_A^-1^I_1/x_i_A^-1 + |y_i_A^-1 + y_i^2_A^-1 - y_i^2_B^-1^I_2/y_i_A^-1 + y_i_B^-1.
For the first term I_1, we repeatedly apply the Cauchy's inequality, and have
x_i _A^-1^2 - y_i_A^-1^2
= x_i^ A^-1 (x_i - y_i) + y_i^ A^-1 (x_i - y_i)
≤x_i_A^-1x_i - y_i_A^-1 + y_i_A^-1x_i-y_i_A^-1
≤1/√(λ)(x_i_A^-1 + y_i_A^-1) x_i - y_i_2.
For the second term I_2, we repeatedly apply the Cauchy's inequality, and have
y_i _A^-1^2 - y_i_B^-1^2
= y_i^A^-1(B-A)B^-1y_i
= ∑_j∈𝒥(y_i^A^-1x_j (y_j - x_j)^B^-1y_i + y_i^A^-1(y_j -x_j)y_j^B^-1y_i)
≤∑_j∈𝒥x_j_A^-1y_j-x_j_B^-1y_i_A^-1y_i_B^-1
+ ∑_j∈𝒥y_j-x_j_A^-1y_j_B^-1y_i_A^-1y_i_B^-1
≤1/√(λ)y_i_A^-1y_i_B^-1√(∑_j∈𝒥x_j-y_j_2^2 )√(∑_j∈𝒥x_j^2_A^-1)
+ 1/√(λ)y_i_A^-1y_i_B^-1√(∑_j∈𝒥x_j-y_j_2^2 )√(∑_j∈𝒥y_j_B^-1^2 )
≤2√(r)/√(λ)y_i_A^-1y_i_B^-1√(∑_j∈𝒥x_j-y_j_2^2 ).
Therefore, the lemma follows from the fact that y_i_A^-1≤y_i_A^-1 + y_i_B^-1.
The following lemma and its variants has been developed in <cit.>. We slightly generalize the padding term from 1 to an arbitrary positive number B and provide the full proof for completeness.
For any sequence of vectors 𝒳 = {x_1,…,x_n,…}⊂ℝ^d, let U_k = λ I + ∑_t<kx_kx_k^, where λ is a positive constant, and B>0 is a real number. If the rank of 𝒳 is at most r, then, we have
∑_k=1^K min{x_k_U_k^-1^2 , B}≤ (1+B)rlog(1+K/λ),
∑_k=1^K min{x_k_U_k^-1 , √(B)}≤√((1+B)rKlog(1+K/λ)).
Note that the second inequality is an immediate result from the first inequality by the Cauchy's inequality. Hence, it suffices to prove the first inequality. To this end, we have
∑_k=1^K min{x_k_U_k^-1^2 , B } (a)≤ (1+B)∑_k=1^K log(1 + x_k _U_k^-1^2 )
= (1+B)∑_k=1^K log( 1+ 𝚝𝚛𝚊𝚌𝚎( (U_k+1 - U_k ) U_k^-1) )
= (1+B)∑_k=1^K log( 1+ 𝚝𝚛𝚊𝚌𝚎( U_k^-1/2(U_k+1 - U_k ) U_k^-1/2) )
≤ (1+B)∑_k=1^K log𝚍𝚎𝚝(I_d + U_k^-1/2(U_k+1 - U_k ) U_k^-1/2)
= (1+B)∑_k=1^K log𝚍𝚎𝚝(U_k+1)/𝚍𝚎𝚝(U_k)
= (1+B) log𝚍𝚎𝚝(U_K+1)/𝚍𝚎𝚝(U_1)
= (1+B)log𝚍𝚎𝚝( I + 1/λ∑_k=1^Kx_kx_k^)
(b)≤ (1+B)rlog(1+K/λ),
where (a) follows because x≤ (1+B)log(1+x) if 0<x≤ B, and (b) follows because 𝚛𝚊𝚗𝚔(𝒳) ≤ r.
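As a numerical illustration (not part of the proof), the following Python sketch verifies the first bound for a random rank-r sequence with x_k_2 ≤ 1; the constants d, r, K, λ and B are arbitrary choices.

import numpy as np

# Numerical check of sum_k min{ ||x_k||^2_{U_k^{-1}}, B }
#                      <= (1 + B) r log(1 + K / lam)
# for vectors of rank r with ||x_k||_2 <= 1.
rng = np.random.default_rng(1)
d, r, K, lam, B = 20, 5, 500, 0.1, 2.0
basis = np.linalg.qr(rng.standard_normal((d, r)))[0]   # fixed r-dim subspace
xs = basis @ rng.standard_normal((r, K))
xs /= np.maximum(np.linalg.norm(xs, axis=0), 1.0)      # enforce norm <= 1
U, total = lam * np.eye(d), 0.0
for k in range(K):
    x = xs[:, k]
    total += min(x @ np.linalg.solve(U, x), B)          # ||x||^2_{U^{-1}}
    U += np.outer(x, x)
print(total, "<=", (1 + B) * r * np.log(1 + K / lam))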
Some applications of representation theory to the sum-product phenomenon

Ilya D. Shkredov
Annotation.
In our paper, we introduce a new method for estimating incidences via representation theory.
We obtain several applications to various sums with multiplicative characters and to Zaremba's conjecture from number theory.
§ INTRODUCTION
Given two finite sets A and B of an abelian ring, define the sumset, and the product set of A and B as
A+B={a+b:a∈ A, b∈ B}, A· B ={ab: a∈ A, b∈ B} .
The sum-product phenomenon was introduced by Erdős and Szemerédi in <cit.>, where they proved that for an arbitrary finite subset A of integers one has
max{ |A+A|, |A· A| }≫ |A|^1+c .
Here c>0 is an absolute constant, and Erdős and Szemerédi conjectured that any c<1 is admissible, at the cost of the implicit constant.
As a general heuristic, the conjecture suggests that either A+A or AA is significantly larger than the original set, unless A is close to a subring.
Even more generally, the sum–product phenomenon predicts that an arbitrary subset of a ring cannot have good additive and multiplicative structures simultaneously.
The interested reader may consult <cit.> for a rather thorough treatment of sumsets and related questions, including some prior work on the sum-product problem.
The sum-product phenomenon has been extensively studied in the last few decades, the current records as of writing being
<cit.> for real numbers, and <cit.> for sufficiently small sets in finite fields.
In our paper we consider the case of the ring _q := /q and
we have deal with
large sets A ⊆_q (basically, it means that |A|>q^1-κ for a certain constant κ>0).
In the case of a prime q the behaviour of the maximum from (<ref>) is fully known thanks to the beautiful result of Garaev <cit.>, who used some classical exponential sum bounds in his proof. Another approach was suggested in <cit.> and in <cit.>, where some finite-geometry considerations were applied.
For example, Vinh <cit.> proved that for an arbitrary prime q and any two sets 𝒜⊆ℤ_q ×ℤ_q, ℬ⊆ℤ_q ×ℤ_q one has
| |{ ((a_1,a_2), (b_1,b_2)) ∈𝒜×ℬ : a_1 b_1 - a_2 b_2 ≡ 1 (mod q) }| - |𝒜||ℬ|/q | ≤√(q |𝒜| |ℬ|) .
In the proof he used the fact that equation (<ref>) can be interpreted as a question about points/lines incidences.
Clearly, the result above has the sum–product flavour and indeed one can use (<ref>) to derive some lower bounds for the maximum from (<ref>) (in the case of large subsets of ℤ_q, of course).
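Since (<ref>) is a purely counting statement, it can also be checked directly for small parameters. The following Python sketch performs such a brute-force verification; the prime q=11 and the random sets are illustrative choices only.

import itertools, math, random

# Brute-force check of the incidence bound for a small prime q and
# random point sets in Z_q x Z_q.
random.seed(0)
q = 11
plane = list(itertools.product(range(q), repeat=2))
A = random.sample(plane, 40)
B = random.sample(plane, 35)
count = sum(1 for (a1, a2) in A for (b1, b2) in B
            if (a1 * b1 - a2 * b2) % q == 1)
error = abs(count - len(A) * len(B) / q)
print(error, "<=", math.sqrt(q * len(A) * len(B)))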
In this paper we introduce a new method of estimating sum–product quantities as in (<ref>) which uses neither exponential sums nor any considerations from incidence geometry. It turns out that representation theory makes it possible to obtain (almost automatically) asymptotic formulae for the number of solutions to systems of equations that are preserved by the actions of certain groups.
For example, equation (<ref>) can be interpreted as the equation det ( [ a b; c d ] ) =1, where (a_1,a_2) ∈𝒜 and (b_1,b_2) ∈ℬ, and hence the equation respects the usual action of SL_2 (ℤ_q).
The advantage of
our
approach is its generality and (relative) simplicity.
First of all, having a certain equation, the method makes it possible to obtain
an asymptotic formula for the number of solutions to the equation
for composite q due to the fact that representation theory for composite q is usually not so complicated and can be reduced to the case of prime powers.
We should mention that the question about the sum–product phenomenon for general ℤ_q and large sets is considered to be difficult and there are few results in this direction, see <cit.> and <cit.>, where the case of finite valuation rings was considered (also, see <cit.>).
Another statement of the problem concerning sum–product results in ℤ_q is contained in <cit.>, <cit.>, <cit.>.
Let us remark that
in <cit.> Fish
also uses the property of equation invariance, but combines it with classical Fourier analysis.
Secondly, due to the obvious observation that representation theory deals with properties of the acting group rather than with the sets themselves, in all our results the sets involved (such as 𝒜, ℬ in (<ref>)) are completely general and are not required to have any special structure, for example, to be Cartesian products of other sets.
The last constraint is sometimes crucial for Fourier-analytic manipulations, see, e.g., <cit.>, <cit.>, although it usually allows one to obtain better error terms in asymptotic formulae.
To be more specific, let us mention just one result here (see Theorem <ref> of Section <ref> below).
Given positive integers q,n,m, d=n+m, an element λ∈ℤ_q and sets 𝒜⊆ (ℤ^d_q)^n, ℬ⊆ (ℤ^d_q)^m, define by 𝒟_λ (𝒜, ℬ) the number of solutions to the equation
det (a_1,…, a_n, b_1,…, b_m) ≡λ (mod q) ,
(a_1,…, a_n) ∈𝒜, (b_1,…, b_m) ∈ℬ .
Let q be an odd prime number and λ≠ 0. Then
|𝒟_λ (𝒜, ℬ) - |𝒜| |ℬ|/q-1| ≪
q^d^2/2 - d/4 - 3/4√(|𝒜| |ℬ|) .
In Section <ref> we obtain further applications of our approach to some problems of number theory.
Our main observation is that the representation theory of SL_2 (ℤ_q) makes it easy to insert multiplicative characters into all formulae with incidences and, therefore, to obtain non–trivial estimates for the corresponding exponential sums. In the author's opinion this is a rather interesting phenomenon, due to the widely known fact that results with multiplicative characters are usually very difficult to obtain.
As an example, we formulate the following theorem concerning summation over a hyperbolic surface.
Denote by 𝒟⊂ℂ the unit disk.
Let q be a prime number, δ>0 be a real number, A,B, X,Y⊆ℤ_q be sets, let χ be a non–principal multiplicative character and |X||Y| ≥ q^δ.
Also, let
c_A : A →𝒟, c_B : B →𝒟
be some weights.
Then there is κ = κ(δ)>0 such that
| ∑_a∈ A, b∈ B, x∈ X, y∈ Y : (a+x)(b+y)=1 c_A (a) c_B (b) χ(a+x) |
≤√(|A||B|) (|X||Y|)^1-κ .
Another application of the approach allows us to generalize <cit.> (also, see Theorem 33 from this paper). Let χ be a non–principal multiplicative character over a finite field 𝔽.
Consider the Kloosterman sum twisted by the character χ, namely,
K_χ (n,m) = ∑_x ∈𝔽∖{0}χ(x) e( nx + mx^-1) ,
where e(·) is an additive character on 𝔽.
We are interested in bilinear forms of Kloosterman sums (motivation can be found, say, in <cit.>, <cit.>, <cit.>), that is, the sums of the form
S_χ (α,β) = ∑_n,mα(n) β (m) K_χ (n,m) ,
where α : 𝔽→ℂ, β : 𝔽→ℂ are arbitrary functions.
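For concreteness, the following Python sketch evaluates K_χ(n,m) and a small bilinear form S_χ(α,β) by brute force; the prime p=13, the choice of χ as the Legendre symbol, and the weights α, β are illustrative assumptions only.

import cmath

p = 13                        # illustrative odd prime

def chi(x):                   # Legendre symbol modulo p
    s = pow(x, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

def e(x):                     # additive character e(x) = exp(2 pi i x / p)
    return cmath.exp(2j * cmath.pi * (x % p) / p)

def K(n, m):                  # twisted Kloosterman sum K_chi(n, m)
    return sum(chi(x) * e(n * x + m * pow(x, -1, p)) for x in range(1, p))

alpha = {1: 1.0, 2: -0.5}     # weights supported on short intervals
beta = {3: 1.0, 4: 1.0}
S = sum(a * b * K(n, m) for n, a in alpha.items() for m, b in beta.items())
print(abs(K(1, 1)), abs(S))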
Let c>0, χ be a non–principal multiplicative character and q be a prime number.
Let t_1, t_2 ∈ℤ_q, N,M be integers, N,M ≤ q^1-c,
and let
α,β : ℤ_q →ℂ be functions supported on {1,…, N} +t_1 and {1,…, M} +t_2, respectively.
Then there exists δ = δ(c) >0
such that
S_χ (α,β ) ≲‖α‖_2 ‖β‖_2 q^1-δ .
Besides, if
M^2 < qN, then
S_χ ({1,…, N}+t_1,β) ≲‖β‖_2 (N^3/7 M^1/7 q^13/14 + N^3/4 q^3/4 + N^1/4 q^13/12) .
Our bounds (<ref>) and (<ref>) (also, see the much more general Corollary <ref> below) are non–trivial and do not seem to be covered by the results of <cit.>, <cit.>, which require some additional restrictions on χ.
Finally, we obtain an application to Zaremba's
conjecture
<cit.>.
Recall the main result of <cit.>.
Let q be a sufficiently large positive integer with sufficiently large prime factors.
Then there is a positive integer a, (a,q)=1, and
M= O(log q/loglog q)
such that
a/q = [0;c_1,…,c_s] = 1/(c_1 + 1/(c_2 + 1/(c_3 + ⋯ + 1/c_s))) , c_j ≤ M , ∀ j∈ [s] .
Also, if q is a sufficiently large square–free number, then (<ref>), (<ref>) take place.
Finally, if q=p^n, p is an arbitrary prime, then (<ref>), (<ref>) hold for sufficiently large n.
Using an idea from representation theory, one can generalize Theorem <ref>.
Let q be a sufficiently large prime number and Γ≤ℤ_q^* be a multiplicative subgroup,
|Γ| ≫ q/log^κ q ,
where κ>0 is an absolute constant.
Then there is a∈Γ and
M= O(log q/loglog q)
such that
a/q = [0;c_1,…,c_s] , c_j ≤ M , ∀ j∈ [s] .
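The conclusion is easy to explore numerically. The following Python sketch computes the partial quotients of a/q via the Euclidean algorithm and scans a multiplicative subgroup for the element whose maximal partial quotient is smallest; the prime q=1009 and the generator g=11 are arbitrary illustrative choices.

def partial_quotients(a, q):
    # continued fraction a/q = [0; c_1, ..., c_s] for 0 < a < q
    cs = []
    while a:
        cs.append(q // a)
        q, a = a, q % a
    return cs

q, g = 1009, 11                  # illustrative prime and subgroup generator
Gamma, x = [], 1
while True:                      # Gamma = multiplicative subgroup <g> mod q
    x = x * g % q
    Gamma.append(x)
    if x == 1:
        break
best = min(Gamma, key=lambda a: max(partial_quotients(a, q)))
print(best, max(partial_quotients(best, q)), partial_quotients(best, q))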
Some results of this type
concerning
restrictions of the numerators of fractions (<ref>) to multiplicative subgroups were obtained in
<cit.>,
<cit.> and <cit.>.
We thank Nikolai Vavilov and Igor Shparlinski
for useful discussions and references.
§ DEFINITIONS AND PRELIMINARIES
Let 𝐆 be a group (commutative or not) and A,B be some subsets of 𝐆.
The sumset (and the product set) of A and B was defined in (<ref>). Let us write A ∔ B if for finite sets A, B one has |A+B| = |A||B|.
We use a representation function notation such as r_AB (x) or r_AB^-1 (x), which counts the number of ways x ∈𝐆 can be expressed as the product ab or ab^-1 with a∈ A, b∈ B, respectively.
For example, |A| = r_AA^-1(1).
Let us write r^(k)_A for r_A… A, where the set A is taken k times.
Having real functions f_1,…, f_2k : 𝐆→ℝ (let k be an even number for concreteness), we put
𝖳_k (f_1,…, f_2k) = ∑_a_1 a^-1_2 … a_k-1 a^-1_k = a_k+1 a^-1_k+2… a_2k-1 a^-1_2k f_1(a_1) … f_2k (a_2k) .
We denote the Fourier transform of a function f : 𝐆→ℂ by f̂, namely,
f̂(ξ) = ∑_x ∈𝐆 f(x) ξ(x) ,
where ξ is an additive character on 𝐆.
In this paper we use the same letter to denote a set A⊆𝐆 and its characteristic function A: 𝐆→{0,1 }.
Finally, if |𝐆| < ∞, then we consider the balanced function f_A of A, namely, f_A (x) := A(x) - |A|/|𝐆|.
In this paper we deal with the group SL_2 (ℤ_q) ≤GL_2 (ℤ_q) of matrices
g = ( [ a b; c d ] ) = (ab|cd) = (a,b|c,d) , a,b,c,d∈ℤ_q , det (g) = ad-bc=1 ,
which acts on the projective line (in the case of a prime number q) via the formula gx = (ax+b)/(cx+d), and which acts naturally on ℤ_q ×ℤ_q for an arbitrary q.
Now we give a simplified version of the special case of <cit.>
(also, see
<cit.>).
Let p be a prime number, d be a positive integer, and V(ℤ_p^d) be a vector space over ℤ_p^d with dim V(ℤ_p^d) = n, on which a non–degenerate symmetric bilinear form Φ (·, ·) is given.
The group of isometries of V is called the orthogonal group of V(ℤ_p^d), denoted O_n (ℤ_p^d), and the subgroup of isometries with determinant one is called the special orthogonal group of V(ℤ_p^d), denoted SO_n (ℤ_p^d).
Let p be a prime number, p≥ 5, d be a positive integer,
V(ℤ_p^d) be a vector space over ℤ_p^d,
and Φ(x_1,…,x_n; y_1,…, y_n) = x_1 y_1 + … + x_n y_n be defined on V(ℤ_p^d) × V(ℤ_p^d).
Suppose that Γ is a normal subgroup of SO_n (ℤ_p^d), where n≥ 3, n≠ 4.
Then Γ is a congruence subgroup with the quotient isomorphic to SO_n (ℤ_p^r), r< d.
Indeed, in <cit.> one needs to calculate the center of SO_n (ℤ_p^d), which is trivial, as one can easily check (or consult <cit.> for general Φ).
Further, one needs to find an isotropic vector x=(x_1,…,x_n) ≠ 0 such that Φ(x,x) = 0, and this is an easy task, since the equation x^2_1+ x_2^2 + x_3^2 ≡ 0 (mod p) has a nonzero solution (and hence a solution modulo p^d by Hensel's lemma).
Finally, notice that in the case n=4 one can in principle use <cit.> (in <cit.> the author considers the case n=4 under some additional assumptions which exclude the case of the sum of two hyperbolic planes).
Basic facts of representation theory can be found in <cit.>. Recall that a representation ρ of a group
is called faithful if it is injective.
We need some number–theoretic functions. Given a positive integer n, we write τ (n) for the number of all divisors of n and ω (n) for the number of its prime divisors.
Also, denote by J_k (n) = n^k ∏_p|n (1-p^-k) the Jordan totient function, which equals the number of k–tuples of positive integers that are less than or equal to n and that, together with n, form a coprime set of k+1 integers. For example, it is easy to see that |SL_2 (ℤ_q)| = q J_2 (q).
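As an illustration, the following Python sketch computes J_k(n) from this formula and verifies the identity |SL_2(ℤ_q)| = q J_2(q) by brute force; the modulus q=6 is an arbitrary small choice.

import itertools
from fractions import Fraction

def prime_divisors(n):
    ps, p = set(), 2
    while p * p <= n:
        while n % p == 0:
            ps.add(p)
            n //= p
        p += 1
    if n > 1:
        ps.add(n)
    return ps

def jordan(k, n):
    # J_k(n) = n^k * prod_{p | n} (1 - p^{-k})
    val = Fraction(n ** k)
    for p in prime_divisors(n):
        val *= 1 - Fraction(1, p ** k)
    return int(val)

q = 6                            # small composite modulus
size = sum(1 for a, b, c, d in itertools.product(range(q), repeat=4)
           if (a * d - b * c) % q == 1)
print(size, q * jordan(2, q))    # both equal 144 for q = 6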
The signs ≪ and ≫ are the usual Vinogradov symbols.
When the constants in the signs depend on a parameter M, we write ≪_M and ≫_M.
If a≪_M b and b≪_M a, then we write a∼_M b.
All logarithms are to base 2.
We write ℤ_q = ℤ/qℤ and let ℤ^*_q be the group of all invertible elements of ℤ_q.
By 𝔽_p we denote the field ℤ/pℤ for a prime p.
Finally, let us denote by [n] the set {1,2,…, n}.
§ APPLICATIONS TO INCIDENCE PROBLEMS
We start with the simplest question about points/hyperplanes incidences (see equation (<ref>) below).
This problem was considered before in <cit.>, <cit.>, where the authors obtained better asymptotic formulae for the quantity ℐ_λ (𝒜, ℬ) using other approaches.
We commence with equation (<ref>) because it allows us to demonstrate our method transparently, and because we will use some of the calculations from the proof below.
As we will see, the proof of Theorem <ref> exploits some facts about the representation theory of SO_n (ℤ_q), which preserves the distance x_1^2 + … +x_n^2 in ℤ^n_q.
Thus, our approach is applicable in principle to all distance problems, for example, to the well–known Erdős–Falconer distance problem, see, e.g., <cit.>.
Given positive integers q, n≥ 2, an element λ∈ℤ_q and sets 𝒜⊆ℤ^n_q, ℬ⊆ℤ^n_q consisting of tuples all coprime to q,
define by ℐ_λ (𝒜, ℬ) the number of solutions to the equation
a_1 b_1 + … +a_n b_n ≡λ (mod q) .
Let q, n≥ 2 be positive integers, 𝒜⊆ℤ^n_q, ℬ⊆ℤ^n_q be sets and λ∈ℤ^*_q.
Let m be the least prime divisor of q and suppose that m≥ 5.
Then
|ℐ_λ (𝒜, ℬ) - |𝒜| |ℬ|/(q ∏_p|q (1-p^-n))| ≤ 2
q^n-1√(|𝒜| |ℬ|)· (Θ (n) m^-n_* )^1/4 ,
where n_* = 1 for n=2,3 and n_* = n-3 for n≥ 4,
and further,
Θ (2) ≪min{τ (q), log_m q}, Θ (3) ≪min{logω (q), 1 + ω(q)/m } and Θ (n) ≪ 1 for n≥ 4.
Let q=p_1^ø_1… p_t^ø_t, where m=p_1<p_2<… < p_t are primes and ø_j are positive integers.
Also, let a=(a_1,…,a_n), b=(b_1,…,b_n) and let M(a,b) = 1 iff the pair (a,b) satisfies equation (<ref>).
Considering the unitary decomposition of the hermitian matrix M(a,b), we obtain
M(a,b) = ∑_j=1^q^nμ_j u_j (a) u_j (b) ,
where μ_1 ≥μ_2 ≥… are the eigenvalues and u_j are correspondent orthonormal eigenfunctions.
Clearly,
ℐ_ (𝒜, ℬ) = ∑_a∈𝒜, b∈ℬ M(a,b) = ∑_j=1^q^nμ_j ⟨𝒜, u_j ⟩⟨ℬ, u_j ⟩ .
Let N=J_n (q).
By the definition of the Jordan totient function the number of vectors a = (a_1,…, a_n) such that a_1, …, a_n, q are coprime is exactly N.
It is easy to see that μ_1 = q^n-1 and u_1 (x) = N^-1/2 (1,…,1)∈ℝ^N.
Indeed, equation (<ref>) is a linear one and it is enough to solve the homogeneous equation (one can check that the corresponding non–homogeneous one has a solution).
Thus, fixing b=(b_1,…, b_n) = (q_1 b'_1, …, q_n b'_n), (b_j,q)=1, q_j|q, j∈ [n], (q_1,…, q_n,q) = 1, we obtain
q_1 b'_1 a_1 + (q_2 b'_2 a_2 + … + q_n b'_n a_n) = q_1 b'_1 a_1 + l(a_2,…, a_n) ≡ 0 (mod q) .
Hence l(a_2,…, a_n) ≡ k q_1 (mod q), k∈ [q/q_1], and for any fixed k we find a_1 modulo q/q_1. By induction it follows that we have q_1 · q/q_1 · q^n-2 = q^n-1 solutions.
Alternatively, we fix b and, thanks to the Chinese remainder theorem, solve the linear equation (<ref>) modulo p_j^ø_j, j∈ [t].
Since λ∈ℤ^*_q and hence λ∈ℤ^*_p_j^ø_j for all j∈ [t], it follows that not all coefficients of (<ref>) are divisible by p_j, and hence there are p_j^ø_j(n-1) solutions modulo p_j^ø_j. Hence there are q^n-1 solutions in total.
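As a quick aside (not part of the proof), this count is easy to confirm numerically. The following Python sketch verifies by brute force that for a composite q, a tuple b coprime to q, and λ∈ℤ^*_q, the equation has exactly q^n-1 solutions; the values q=12, n=2, λ=5 and the tuples b are illustrative choices.

import itertools
from math import gcd

q, n, lam = 12, 2, 5                       # gcd(lam, q) = 1
for b in [(1, 0), (2, 3), (4, 9)]:         # tuples coprime to q
    assert gcd(gcd(b[0], b[1]), q) == 1
    count = sum(1 for a in itertools.product(range(q), repeat=n)
                if sum(x * y for x, y in zip(a, b)) % q == lam)
    print(b, count, q ** (n - 1))          # count equals q^(n-1) = 12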
Thus we obtain
ℐ_λ (𝒜, ℬ) - q^n-1|𝒜| |ℬ|/N = ∑_j=2^q^nμ_j ⟨𝒜, u_j ⟩⟨ℬ, u_j ⟩ := ℰ .
By the orthonormality of u_j and the Hölder inequality, we get
|ℰ| ≤ |μ_2| √(|𝒜| |ℬ|) .
Thus it remains to estimate the second eigenvalue μ_2, and to do this we calculate the rectangular norm of the matrix M, that is,
∑_j=1^q^n |μ_j|^4 = ∑_a,a'| ∑_b M(a,b) M(a',b) |^2 := σ ,
and then use the bound |μ_2|^4 ≤σ.
Fixing a pair (a,a') ∈^n_q ×^n_q, we need to solve the system of two linear equations
a_1 b_1 + … + a_n b_n ≡ λ (mod q),
a'_1 b_1 + … + a'_n b_n ≡ λ (mod q).
It implies, in particular, that
∑_{j=2}^n b_j (a'_1 a_j − a_1 a'_j) ≡ λ(a'_1 − a_1) (mod q),
and if a_1' - a_1 ∈_q^*, say, then we obtain q^n-2 solutions by the previous argument.
If not, then consider all possible determinants of 2×2 matrices consisting of the elements of the 2×(n+1) matrix (1, a_1,…, a_n | 1, a'_1,…, a'_n).
Further, given a tuple (r_1,…, r_t), where 0 ≤ r_j ≤ ø_j, we consider the set 𝒜(r_1,…, r_t) of pairs
(a, a') ∈ ℤ^n_q × ℤ^n_q
such that, for every j ∈ [t], p_j^{r_j} is the maximal power of p_j dividing all these determinants.
If (a, a') ∈ 𝒜(r_1,…, r_t), then in particular a_i ≡ a'_i (mod p_j^{r_j}) for all i ∈ [n] and j ∈ [t], and hence
|𝒜(r_1,…, r_t)| ≤ q^{2n} / ∏_{j=1}^t p_j^{r_j n}.
To solve (<ref>) (recall that we consider the case when (a, a') ∈ 𝒜(r_1,…, r_t))
one can use
the Chinese remainder theorem again, and we see that there are
∏_{j=1}^t p_j^{ø_j(n−2) + r_j} = q^{n−2} ∏_{j=1}^t p_j^{r_j}
solutions to equation (<ref>).
Combining the last bound with (<ref>), we obtain
σ ≤ q^{4n−4} ∑_{r_1 ≤ ø_1,…, r_t ≤ ø_t} ∏_{j=1}^t p_j^{−r_j(n−2)}
=
q^{4n−4} Θ(n),
where Θ(n) = O(1) for n ≥ 4, Θ(3) = O(log t) and Θ(2) ≪ ∏_{j=1}^t (1 + ø_j) = τ(q).
Let us remark on other bounds for Θ(2) and for Θ(3): namely, from m^{τ(q)} ≤ q one has Θ(2) ≪ τ(q) ≪ log_m q, and, clearly, Θ(3) ≪ 1 + t/m.
It is instructive to consider the case n=2 separately.
Redefining the set 𝒜, we need to solve the equation
a_1 b_1 − a_2 b_2 ≡ λ (mod q), (a_1, a_2) ∈ 𝒜, (b_1, b_2) ∈ ℬ,
where a=(a_1,a_2) and b=(b_1,b_2).
It is clear that our equation (<ref>) has the form (a|b) ≡ λ (mod q), and hence we enjoy the following invariance property:
M(a,b) = M(ga, gb), ∀ g ∈ SL_2(ℤ_q).
Hence if f is an eigenfunction of M with the eigenvalue μ, then for f^g (x) := f(gx) one has
∑_a,b M(a,b) f^g(b) = ∑_a,b M(a,b) f(gb) =
∑_a,b M(ga,gb) f(gb) = μ f(ga) = μ f^g (a) ,
where we have used (<ref>) and the transitivity of the natural action of SL_2(ℤ_q).
In other words, SL_2(ℤ_q) preserves the eigenspace L_μ which corresponds to μ.
Now consider an arbitrary eigenfunction u_j, j>1.
We know that ∑_x u_j (x) =0 and hence u_j is not a constant function.
There are many ways to see that dim(L_{μ_j}) > 1 or, in other words,
that ⟨{u_j^g}_{g ∈ SL_2(ℤ_q)}⟩ ≠ ⟨u_j⟩ = L_{μ_j}.
For example,
one can use the transitivity again.
Another approach is to notice that the group SL_2(ℤ_q) has no non-trivial one-dimensional representations but, on the other hand, any one-dimensional eigenspace would give us a character
(the same holds in the general case which will be considered below).
Thus anyway we conclude
that for any j > 1 the multiplicity of each μ_j is at least the minimal dimension of the non-trivial representations of SL_2(ℤ_q).
Now we essentially repeat the argument from <cit.>. Another way is to use the first part of <cit.>, which says exactly the same.
So, let us
recall what is known about the
representation theory of the group SL_2(ℤ_q), see <cit.>.
First of all, any irreducible representation ρ_q of SL_2(ℤ_q) factors as ρ = ρ_q = ρ_{p_1^{ø_1}} ⊗ … ⊗ ρ_{p_t^{ø_t}}, and hence it is sufficient to understand the representation theory of SL_2(ℤ_{p^d}), where p is a prime number and d is a positive integer.
Now by <cit.> we know that for any odd prime the dimension of any faithful irreducible representation of SL_2(ℤ_{p^d}) is at least 2^{−1} p^{d−2}(p−1)(p+1) (a similar proof for SL_n(ℤ_{p^d}), n ≥ 2, can be found in <cit.>).
If d = 1, then the classical result of Frobenius <cit.> says that the minimal dimension of any non-trivial representation is at least (p−1)/2.
For an arbitrary positive integer r ≤ d we can consider the natural projection π_r : SL_2(ℤ_{p^d}) → SL_2(ℤ_{p^r}) and let H_r = Ker π_r. One can show that the set {H_r}_{r ≤ d} gives all normal subgroups of SL_2(ℤ_{p^d}), and hence any non-faithful irreducible representation arises as a faithful irreducible representation of SL_2(ℤ_{p^r}) for a certain r < d. In any case, we see that the multiplicity (dimension) d_ρ of any non-trivial irreducible representation ρ of SL_2(ℤ_{p^d}) is at least (m−1)/2 ≥ m/3.
Returning to the quantity σ,
we get
(below n=2)
|μ_2|^4 · m ≤ 3σ ≤ 3 q^{4n−4} Θ(n),
and hence
|μ_2| ≤ (3 m^{−1} q^{4n−4} Θ(n))^{1/4}
=
q^{n−1} (3 Θ(n) m^{−1})^{1/4}.
Recalling (<ref>), (<ref>), we obtain the required result for n=2.
Now let n > 2. It remains only to find a good lower bound for the multiplicity of μ_j, j > 1 (the fact that dim(L_{μ_j}) > 1 is an immediate consequence of the fact that SO_n(ℤ_q) has no non-trivial one-dimensional representations; alternatively, see the paper <cit.>).
In the higher-dimensional case n > 2 our form Φ(a,b) = a_1b_1 + … + a_nb_n is preserved by the group of orthogonal transformations O_n(ℤ_q) (as well as SO_n(ℤ_q)), and hence our task is to find a good lower bound
for the dimension of any non-trivial irreducible representation of SO_n(ℤ_{p^d}).
Using Theorem <ref> and the arguments above, we see that it is enough to deal with faithful representations, and this problem was solved in <cit.>.
The authors prove that the minimal dimension of any faithful representation coincides (up to constants) with the classical lower bound for the minimal dimension of an arbitrary non-trivial representation of a
split Chevalley group over 𝔽_{p^d}, see <cit.>, <cit.>.
These results, combined with the existence of isomorphisms between low-dimensional classical groups (see <cit.>, for example), give us
d_ρ ≥ 2^{−2} p^{n−3} for n ≥ 4 and d_ρ ≥ 2^{−2} p for n = 3.
For n = 4 one cannot apply Theorem <ref>, but it is easy to see that in this case the multiplicity of μ_2 is at least d_ρ ≥ 2^{−1}(p−1), due to the fact that the group SL_2(ℤ_{p^d}) × SL_2(ℤ_{p^d}) acts on the quadruples (a_1,…, a_4), and we can use the previous arguments concerning SL_2(ℤ_{p^d}) and the case n = 2.
It follows that
for any n ≥ 2 the multiplicity of μ_2 is at least Ω(m^{n_*}).
We have
m μ_2^2 ≤ ∑_{j>1} |μ_j|^2 = ℐ_λ(𝒜, ℬ) − μ_1^2 < ℐ_λ(𝒜, ℬ),
and hence thanks to (<ref>) one has
|ℐ_λ(𝒜, ℬ) − |𝒜||ℬ|/q|
< √(m^{−1} |𝒜||ℬ| · ℐ_λ(𝒜, ℬ)).
If ℐ_λ(𝒜, ℬ) ≥ 2|𝒜||ℬ|/q, then
ℐ_λ(𝒜, ℬ) ≤ 4|𝒜||ℬ|/m.
Otherwise
|ℐ_λ(𝒜, ℬ) − |𝒜||ℬ|/q|
<
|𝒜||ℬ| (mq)^{−1/2}.
This completes the proof.
Of course, one can improve bound
(<ref>) by not using the crude bound |𝒜| ≤ q^n in the second inequality of (<ref>).
Let q, n≥ 2 be positive integers, 𝒜⊆^n_q.
Then
max{ |𝒜 + 𝒜|, |𝒜𝒜|}≫min{} .
Thus, as the reader can see, our method almost automatically gives some asymptotic formulae for the number of solutions to systems of equations that are preserved by the actions of certain groups.
The only thing we need to calculate is the first eigenfunction of the correspondent operator and its rectangular norm.
After that we use quasi–random technique in the spirit of
papers <cit.>, <cit.> and <cit.>.
Now we are ready to obtain Theorem <ref> from the introduction, and for simplicity we consider the case of a prime number q. We assume that the sets 𝒜, ℬ consist of linearly independent tuples, because otherwise there are no solutions to equation (<ref>).
Let q be an odd prime number and λ ≠ 0. Then
2^{−3}|𝒟_λ(𝒜, ℬ) − |𝒜||ℬ|/q| ≤
q^{d²/2 − d/4 − 3/4} √(|𝒜||ℬ|)
+
|𝒜||ℬ|/q².
The case n=m=1 was considered in Theorem <ref>, so we assume that max{ n,m}≥ 2.
Let
a=(a_1,…,a_n), b=(b_1,…,b_m) and let M(a,b) = 1 iff the pair (a,b) satisfies our equation (<ref>).
Considering the singular value decomposition of the matrix M(a,b), we obtain
M(a,b) = ∑_{j=1}^{q^d} s_j u_j(a) v_j(b),
where s_1 ≥ s_2 ≥ … ≥ 0 are the singular values and u_j, v_j are the corresponding orthonormal singular functions.
Let
𝒩 = (q^d − q^m)(q^d − q^{m+1}) ⋯ (q^d − q^{d−1})
= q^{dn} ∏_{j=1}^n (1 − q^{−j})
and ℳ = q^{dm} ∏_{j=1}^m (1 − q^{−j}).
It is easy to calculate s_1 and to show that u_1(a) = 𝒩^{−1/2}(1,…,1) ∈ ℝ^𝒩, as well as v_1(b) = ℳ^{−1/2}(1,…,1) ∈ ℝ^ℳ.
Indeed, for any fixed a or b we need to solve the equation
(a|b) = λ in b or a, correspondingly.
It is easy to see that the equation (a|b) = λ, a fixed, has q^{dm−1}∏_{j=2}^m (1 − q^{−j}) = ℳ/(q−1) solutions, due to the number of independent vectors over 𝔽_q.
Similarly, the second equation has
q^{dn−1}∏_{j=2}^n (1 − q^{−j}) = 𝒩/(q−1) solutions in a.
Thus these numbers do not depend on a and b, and hence indeed u_1(a) = 𝒩^{−1/2}(1,…,1) ∈ ℝ^𝒩, v_1(b) = ℳ^{−1/2}(1,…,1) ∈ ℝ^ℳ and
s_1 = ⟨Mu_1, v_1⟩ = (ℳ/(q−1)) · 𝒩 · (ℳ𝒩)^{−1/2}
= √(ℳ𝒩)/(q−1).
Thus we get
|𝒟_λ(𝒜, ℬ) − |𝒜||ℬ|/(q−1)|
=
|∑_{j=2}^{q^d} s_j ⟨𝒜, u_j⟩⟨ℬ, v_j⟩| ≤ s_2 √(|𝒜||ℬ|).
As above we need to estimate the rectangular norm of the matrix M, that is,
∑_{j=1}^{q^d} s_j^4 = ∑_{a,a'} |∑_b M(a,b) M(a',b)|²,
and thus
we arrive at the system of equations
(a'|b) = (a|b) = λ with fixed a and a'.
Fixing the vectors b_1,…, b_{m−1}, we
have exactly
equation (<ref>), which has at most q^{md−2} solutions.
Thus
∑_{j=1}^{q^d} s_j^4 ≤ q^{2nd} q^{2md−4} = q^{2d²−4}.
Now it is easy to see that
M(ga, gb) = M(ga_1, …, ga_n, gb_1,…, gb_m) = M(a,b)
for an arbitrary g ∈ SL_d(𝔽_q), and thus any s_j, j > 1, has multiplicity at least the minimal dimension
of any non-trivial irreducible representation of SL_d(𝔽_q).
Thus the multiplicity of s_2 is at least 2^{−2} q^{d−1}, and hence
s_2 ≤ 2 q^{d²/2 − 1} q^{−(d−1)/4} =
2 q^{d²/2 − d/4 − 3/4}.
Using the last estimate,
and returning to formula (<ref>), we obtain
2^{−3}|𝒟_λ(𝒜, ℬ) − |𝒜||ℬ|/q|
≤ q^{d²/2 − d/4 − 3/4} √(|𝒜||ℬ|)
+
|𝒜||ℬ|/q²,
as required.
Finally, we consider an example with the cross-ratio [a,b,c,d] := (a−c)(b−d)/((a−d)(b−c)).
As one can see, representation theory almost immediately gives asymptotic formula (<ref>) with an acceptable error term.
Let q be a prime number, λ ∈ 𝔽_q, and let 𝒜, ℬ ⊆ 𝔽_q × 𝔽_q be sets.
Define
𝒞_λ(𝒜, ℬ) := |{(a_1,a_2) ∈ 𝒜, (b_1,b_2) ∈ ℬ : [a_1, a_2, b_1, b_2] ≡ λ (mod q)}|.
Let q be a prime number, λ ∈ 𝔽_q, λ ≠ 0,1, and let 𝒜, ℬ ⊆ 𝔽_q × 𝔽_q be sets. Then
|𝒞_λ(𝒜, ℬ) − |𝒜||ℬ|/q| ≤ 4 q^{3/4} √(|𝒜||ℬ|).
As usual let a=(a_1,a_2), b=(b_1,b_2) and let M(a,b)=1
iff the pair (a,b) satisfies our equation (<ref>).
It is well known that SL_2(𝔽_q) preserves the cross-ratio, in the sense that
M(ga, gb) = M(ga_1, ga_2, gb_1, gb_2) = M(a,b).
Considering the unitary decomposition of the hermitian matrix M(a,b) as in (<ref>), we see that the property u_1(a) = q^{−1}(1,…,1) ∈ ℝ^{q²}
automatically follows from (<ref>) and the 2-transitivity of the action of SL_2(𝔽_q) on the projective line.
It remains to calculate the rectangular norm of the matrix M, that is to solve the system [x,y,c,d]=[x,y,c',d']=.
It follows that
xy(1−λ) + (λc − d)x + (λd − c)y + dc(1−λ) = 0,
and
xy(1−λ) + (λc' − d')x + (λd' − c')y + d'c'(1−λ) = 0.
Subtracting (<ref>) from (<ref>), we arrive at the equation
x(λ(c−c') + d' − d) + y(λ(d−d') + c' − c) + (1−λ)(dc − d'c') = 0,
and this is a non-trivial equation excluding two cases: c = c', d = d', and λ = −1, c = d', d = c'.
If equation (<ref>) is non-trivial, then we substitute, say, x into (<ref>) and obtain at most 4 solutions in x, y (one can check that we obtain a non-trivial equation thanks to our condition λ ≠ 0,1).
In the exceptional cases we have just one equation, say, (<ref>), and it is easy to see that our equation has at most 2q solutions.
Thus
∑_j=1^q^2μ_j^4 = ∑_a,a'| ∑_b M(a,b) M(a',b) |^2
≤ 16 q^4 + 2 q^2 (2q)^2 = 24 q^4
.
It remains to use the Frobenius theorem <cit.> on the minimal representations of SL_2(𝔽_q).
This result gives us the bound
μ_2 ≤ 4 q^{3/4}, and we can apply the arguments as in the proofs of Theorems <ref>, <ref>.
This completes the proof.
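As an illustration of the statement just proved, the following Python sketch (ours; the prime and the set sizes are arbitrary choices) counts 𝒞_λ(𝒜, ℬ) for random sets and compares the result with the main term |𝒜||ℬ|/q and the error bound 4q^{3/4}√(|𝒜||ℬ|).

# Count pairs with prescribed cross-ratio and compare with |A||B|/q.
import itertools, random

q, lam = 101, 5
random.seed(1)
A = random.sample(list(itertools.product(range(q), repeat=2)), 400)
B = random.sample(list(itertools.product(range(q), repeat=2)), 400)

def cross_ratio(a, b, c, d):
    den = ((a - d) * (b - c)) % q
    if den == 0:
        return None                      # cross-ratio undefined
    return ((a - c) * (b - d) * pow(den, q - 2, q)) % q

count = sum(1 for (a1, a2) in A for (b1, b2) in B
            if cross_ratio(a1, a2, b1, b2) == lam)
main = len(A) * len(B) / q
bound = 4 * q ** 0.75 * (len(A) * len(B)) ** 0.5
print(count, main, abs(count - main) <= bound)   # deviation well within the bound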
§ ON SUMS WITH MULTIPLICATIVE CHARACTERS OVER SOME MANIFOLDS AND OTHER APPLICATIONS
In this section we want to extend representation theory methods to some sums with multiplicative characters.
Below p is a prime number and 𝔽 is a finite field of characteristic p.
Let us consider
a basic
example.
We know that SL_2(𝔽) acts on the projective line, and this gives us an irreducible representation of this group; but from <cit.>, say, it is well known that there are other irreducible representations of SL_2(𝔽), and a half of them are connected with "projective lines" equipped with multiplicative characters χ.
More precisely, it means that we consider the family of functions f : 𝔽 × 𝔽 → ℂ such that
f(λx, λy) = χ(λ) f(x,y), ∀ λ ∈ 𝔽^*,
∀ (x,y) ∈ (𝔽 × 𝔽) ∖ {0},
and now SL_2(𝔽) acts on this family, as well as on 𝔽 × 𝔽, in a natural way.
In our results below we do not need to use the knowledge of the concrete irreducible representations of SL_2(𝔽) (and other groups); we will only use definition (<ref>).
Let us start with
the following
auxiliary proposition
concerning summation over a hyperbolic surface (twisted by a multiplicative character) in the spirit of paper
<cit.>, say.
Let A, B ⊆ 𝔽_p and G ⊆ SL_2(𝔽_p) be sets,
and let χ be a non-trivial multiplicative character.
Also, let
c_A : A → 𝒟, c_B : B → 𝒟
be some weights.
Then for any integer k ≥ 2 the following holds:
2^{−2}|∑_{a,b} c_A(a) c_B(b) ∑_{g∈G : ga = b} χ(γa + δ)|
≤ √(|A||B||G|) · E_{2k}^{1/8k}(f_G)
+ √(|A||B|) |G| · (max{|A|, |B|})^{−1/2k},
where γ, δ denote the bottom-row entries of g.
Consider the functions 𝒜(λa, λ) = c_A(a) χ(λ) = 𝒜(x), ℬ(μb, μ) = c_B(b) χ(μ) = ℬ(y), where a ∈ A, b ∈ B, x = (x_1, x_2), y = (y_1, y_2), and μ, λ run over 𝔽^*_p.
It is easy to see that we always have ∑_{a,λ} 𝒜(λa, λ) = 0, as well as ∑_{b,μ} ℬ(μb, μ) = 0,
since
χ is a non-trivial character.
Notice that
σ:= ∑_x,y𝒜 (x) ℬ (y) ∑_g∈ G : g x = y 1 =
(p-1)∑_a,b c_A (a) c_B (b) ∑_g∈ G : ga =bχ(γ a+)̣
for any trivial/non–trivial multiplicative character χ.
We can interpret the left–hand side of (<ref>) as
the number of some
points
on a hyperbolic surface counting with weights 𝒜 (x), ℬ (y).
The Hölder inequality (see <cit.>) gives us
σ^{2k} ≤ ‖𝒜‖_2^{2k}
‖ℬ‖_2^{2k−2}
∑_h f^{(k)}_{GG^{−1}}(h) ∑_x ℬ(x) ℬ(hx).
Applying identity (<ref>), it is easy to see that the contribution of the terms with ∑_xℬ(x) ℬ(h x) ≤ 32 p, say, corresponds to the second term from (<ref>).
Now using
<cit.>
(we notice that, say, 4 different points uniquely determine the transformation from _2 (_p)),
combining with
the Hölder inequality again, we derive
σ^{2k} ≤
(|A|(p−1))^k (|B|(p−1))^{k−1}
× (∑_h (f^{(k)}_{GG^{−1}}(h))²)^{1/4} (∑_h |f^{(k)}_{GG^{−1}}(h)|)^{1/2} (∑_h (∑_x ℬ(x) ℬ(hx))⁴)^{1/4}
≤ 4^k (|A|(p−1))^k (|B|(p−1))^{k−1} E_{2k}^{1/4}(f_G) |G|^k · |B|(p−1).
Recalling (<ref>), we see that estimate (<ref>) is equivalent to the required bound (<ref>).
This completes the proof.
In particular, if one considers the affine group Aff(𝔽_p) and puts the weights c_A(a) = χ_1(a), c_B(b) = χ_2(b), then one obtains the following.
Let A, B ⊆ 𝔽_p and G ⊆ 𝔽^*_p × 𝔽_p be sets.
Also, let χ_1, χ_2 be non-principal multiplicative characters.
Then
|∑_{a∈A} ∑_{b∈B} ∑_{g∈G : ga = b} χ_1(a) χ_2(b)| ≤ √(p|A||B|).
Now we obtain some concrete applications of Proposition <ref>, which correspond to Theorems <ref>, <ref> of the introduction.
Let
A, B, X, Y ⊆ 𝔽_p be sets.
Consider the equation
(a+x)(b+y) ≡ 1 (mod p)
or, in other words,
y = −b + 1/(a+x) = g_{a,b} x, where det(g_{a,b}) = −1.
The energy E_{2k}(f_G) of the corresponding family of transformations G = {g_{a,b}}_{a∈A, b∈B} was estimated many times, see, e.g., paper
<cit.>.
Applying Proposition <ref> to this particular case of equation (<ref>), we obtain
Let δ > 0 be a real number, let A, B, X, Y ⊆ 𝔽_p be sets, let χ be a non-principal multiplicative character and |X||Y| ≥ p^δ.
Also, let
c_A : A → 𝒟, c_B : B → 𝒟
be some weights.
Then there is c(δ) > 0 such that
|∑_{a,b,x,y : (a+x)(b+y)=1} c_A(a) c_B(b) X(x) Y(y) χ(a+x)|
≤ √(|A||B|) (|X||Y|)^{1−c(δ)}.
The above corollary immediately implies Theorem <ref> from the introduction (compare with <cit.>) which we recall here for the
reader's convenience.
The proof repeats the argument of <cit.>, the only small difference is the absence of the third term in formula (72) of <cit.> due to the fact that our χ is non–principal.
Below the sign ≲ means a multiple of the form log^{O(1)}(MN ‖α‖_∞ ‖β‖_∞).
Let c > 0, let χ be a non-principal multiplicative character and let p be a prime number.
Let t_1, t_2 ∈ 𝔽_p, let N, M be integers, N, M ≤ p^{1−c},
and let
α, β : 𝔽_p → ℂ be functions supported on {1,…, N} + t_1 and {1,…, M} + t_2, respectively.
Then there exists δ = δ(c) > 0
such that
S_χ(α, β) ≲ ‖α‖_2 ‖β‖_2 p^{1−δ}.
Besides,
S(α, β) ≲ ‖β‖_2 (‖α‖_{L^{4/3}} N^{7/48} M^{7/48} p^{23/24}
+ (‖α‖_2 ‖α‖_1)^{1/2} p^{3/4}),
and if
M² N² ‖α‖^{12}_{L^{4/3}} < p ‖α‖_2^{12},
then
S(α, β) ≲ ‖β‖_2 (‖α‖^{6/7}_{L^{4/3}} ‖α‖^{1/7}_2 N^{1/7} M^{1/7} p^{13/14}
+ (‖α‖_2 ‖α‖_1)^{1/2} p^{3/4} + p^{13/12} ‖α‖_{L^{4/3}}).
Now consider the case when our set A is a collection of disjoint intervals. It is an important family of sets, including discrete fractal sets see, e.g., papers <cit.>, <cit.>,
<cit.>—<cit.> and
<cit.>.
Let Λ⊂_p, I=[N], A=I ∔Λ, |A|>p^1-ϵ, and χ be a non–principal multiplicative character. Then there is an absolute constant c_*>0 such that
|∑_x∈ A∩ A^-1χ (x)|
≤ |A∩ A^-1| · N^-c_*≪|A|^2/p· N^-c_* ,
provided N≥ p^ϵ/c_*.
We combine an appropriate version of Corollary <ref> and the well–known Bourgain–Gamburd machine <cit.> applied to equation (<ref>) see, e.g., <cit.>.
Indeed, for any x∈ A∩ A^-1, we have x=i+ such that
1=(i+)(i'+'), where i,i'∈ I and ,'∈Λ.
Thus we in very deed arrive to equation
(<ref>). Now I(i) ≤ N^-1 (I*I)(i), where I = [-N,N] and hence the number of solutions to the equation 1=(i+)(i'+') can be bounded above as 1=(j+a)(j'+a') with a,a'∈ A and j,j'∈I (times N^-2, of course).
In particular (see <cit.> or just Proposition <ref> and Corollary <ref> above), we get for an absolute constant c ∈ (0,1] that
|A∩ A^-1| ≤|A|^2 |I|^2/N^2 p + O(N^-2 |A| · N^2-c)
≪|A|^2/p
and hence the second estimate of (<ref>) follows from the first one. Here we have used the conditions that |A|>p^1-ϵ and N ≥ p^ϵ/c, which is satisfied if we put
c_* = c/4, say.
Similarly, let h ∈ [N] be an integer parameter and write I(i) = h^{−1}(H*I)(i) + ε(i), where H = [h], ‖ε‖_∞ ≤ 1 and |supp(ε)| ≤ 2h.
In particular, we have ‖ε‖²_2 ≤ 2h, and one can treat ε as a sum of two functions ε_1, ε_2 with supports on some shifts of the interval H.
Put ε = ε_1 + ε_2 : H → [−1, 1].
As always let us write
∑_{x∈A∩A^{−1}} χ(x) =
∑_{1=(i+λ)(i'+λ')} χ(λ+i) Λ(λ) Λ(λ') I(i) I(i')
=
∑_{1=(i+λ)(i'+λ')} χ(λ+i) Λ(λ) Λ(λ') (h^{−1}(H*I)(i) + ε(i))(h^{−1}(H*I)(i') + ε(i'))
=
h^{−2} ∑_{a,a',h,h' : (a+h)(a'+h')=1} A(a) A(a') H(h) H(h') χ(a+h) + ℰ
= σ + ℰ,
where the error term ℰ can be estimated as (there are better bounds, as the set A is I-invariant and not just H-invariant)
|ℰ| ≤
2h^{−1} ∑_{a,a',h,h' : (a+h)(a'+h')=1} Λ(a) A(a')
|ε(h)| H(h')
+
∑_{a,a',h,h' : (a+h)(a'+h')=1} Λ(a) Λ(a') |ε(h) ε(h')|
≪ (|A|²/p)(h/N + h²/N²) + |A| · (h^{1−c}/√N + h^{2−c}/N)
≪ |A|² h/(pN) + |A| h^{1−c}/√N.
Here we have assumed that h≤√(N) and applied the well–known Bourgain–Gamburd machine <cit.>, <cit.>.
Recall that this result replaces Corollary <ref> in the case when X, Y
are intervals and χ≡ 1 (that is why we need two additional main terms in (<ref>)).
It remains to estimate the sum σ and to do this we can use
the Bourgain–Gamburd machine one more time, namely, we
apply our
Corollary <ref> and get σ≪ |A| h^-c.
Finally, combining the estimate for σ and bound (<ref>) for the error term ℰ, choosing the parameter h=[√(N)], we obtain
∑_x∈ A∩ A^-1χ (x)
≪
|A| h^-c
+ |A|^2 h/pN + |A| h^1-c/√(N)≪
|A| h^-c≪ |A| N^-c/2≪|A|^2/p· N^-c/4
thanks to our assumptions |A|>p^1-ϵ and N ≥ p^4ϵ/c.
The same calculations show that there is an asymptotic formula for |A∩ A^-1| and, in particular, the inverse inequality to (<ref>) takes place. It gives us the first inequality in (<ref>).
This completes the proof.
It is well known, and easy to see, that the multiplicative equation (<ref>)
almost coincides
(up to some transformation) with the additive equation
1/(x+a) − 1/(y+b) ≡ 1 (mod p),
where a ∈ A, b ∈ B, x ∈ X, y ∈ Y.
Thus we obtain an analogue of Theorem <ref>.
Let Λ⊂_p, I=[N], A=I ∔Λ, |A|>p^1-ϵ, and χ be a non–principal multiplicative character. Then there is an absolute constant c_*>0 such that
|∑_x∈ A^-1∩ (A^-1+1)χ (x)|
≤ |A^-1∩ (A^-1+1)| · N^-c_*≪|A|^2/p· N^-c_* ,
provided N≥ p^ϵ/c_*.
Now we are ready to prove Theorem <ref> from the introduction.
We follow the scheme and the notation of the proof
from paper
<cit.>.
It was shown that the set of a ∈ A satisfying (<ref>) contains a set of the form
Z_M ∩ Z^{−1}_M, |A| ∼ |Z_M|²/p,
|Z_M| ∼ p^{w_M + 2κ(1−w_M)}, and the set Z_M is a disjoint union of some shifts of an interval of length N ∼ p^{2κ}, where κ ≫ 1/M is a parameter and the Hausdorff dimension w_M enjoys the asymptotic formula w_M = 1 − O(1/M), M → ∞.
Thus we can apply Theorem <ref> and write
|A∩| = (p-1)^-1∑_χ(∑_x∈ Aχ(x) ) (∑_x∈χ(x) ) ≥|A|||/p-1 - C|A| N^-c_* >0 ,
where C,c_*>0 are some absolute constants.
Here we have used conditions (<ref>), (<ref>), the fact that M ∼ log p/loglog p, and κ ≫ 1/M.
It remains to check that N ≥ p^{ϵ/c_*} or, in other words, that κ ≫ ϵ.
Since |Z_M| ∼ p^{w_M + 2κ(1−w_M)} = p^{1−ϵ}, it follows that ϵ = (1−w_M)(1−2κ) ≪ 1/M, and
thus the required condition takes place.
This completes the proof.
Let us make a final remark.
Loosely,
Theorem <ref> gives us a non–trivial bound for the multiplicative energy of the set Z_M, see formula (<ref>) below.
Nevertheless, the last fact follows from the circumstance that Z_M is an Ahlfors–David set, <cit.>, that is
for an arbitrary z∈ Z_M one has
|Z_M ∩ (𝒟+z)| ∼_M |𝒟|^w_M N^1-w_M
for any interval 𝒟, |𝒟| ≥ N with the center at the origin.
Recall that
in <cit.> a non–trivial upper bound was obtained for the additive energy of any Ahlfors–David set.
Let us briefly
prove an upper estimate for the multiplicative energy of
an arbitrary
Ahlfors–David set Z_M, having large Hausdorff dimension w_M.
The advantage of bound (<ref>) below is that the power saving can be expressed in terms of |Z_M| and not just N.
Namely, write Z = Z_M, w = w_M, and then |Z| ∼_M p^w N^{1−w}.
Also, put δ ∼_M κ^w N^{1−w}, where
κ is a parameter.
By the points/planes incidences in 𝔽_p (see <cit.>) and property (<ref>), one has
E^×(Z) := |{(z_1, z_2, z_3, z_4) ∈ Z⁴ : z_1 z_2 = z_3 z_4}|
≪ δ^{−2} |{(z_1, z_2, z'_1, z'_2, d, d') ∈ Z⁴ × [κ]² : z_1(z_2 + d) ≡ z'_1(z'_2 + d')}|
≪ δ^{−2}(|Z|⁴ κ²/p + |Z|³ κ^{3/2})
≪ δ^{−2} p³,
where the optimal choice for κ is κ = (p/|Z|)².
Thus
E^×(Z) ≪_M |Z|³ (p/|Z|)^{3−4w} N^{−2(1−w)} ∼_M |Z|³ · |Z|^{−(4w−3)(1−w)/w} N^{−(1−w)(3−2w)/w}
< |Z|^3
for w >3/4.
Thus, we have a power saving in terms of |Z| for the multiplicative energy of any Ahlfors–David set.
§ DATA AVAILABILITY AND CONFLICTS OF INTEREST
No data was used for the research described in the article.
Ilya D. Shkredov declares no conflicts of interest.
I.D. Shkredov
London Institute for Mathematical Sciences
21 Albemarle St., W1S 4BS, UK
ilya.shkredov@gmail.com
|
http://arxiv.org/abs/2307.00747v1
|
20230703044647
|
On Three-Term Linear Relations for Theta Series of Positive-Definite Binary Quadratic Forms
|
[
"Rahul Saha",
"Jonathan Hanke"
] |
math.NT
|
[
"math.NT",
"11F27 (Primary) 11E16, 11Y50, 11Y99 (Secondary)"
] |
* Differentiated contributions as first author and senior author respectively. Rahul Saha implemented the extended refinement algorithm in , established <ref> and <ref>, and handled the a+b ≥ 4 case. Jonathan Hanke introduced the extension of Schiemann's method to determine all theta series satisfying a given linear relation, and supervised the first author's Princeton undergraduate junior paper and reading courses on this topic. This paper was jointly written, with numerous unattributed comments between the authors to improve the clarity of exposition.
In this paper, we investigate three-term linear relations among theta series of positive-definite integral binary quadratic forms. We extend Schiemann's methods to characterize all possible three-term linear relations among theta series of such forms, providing necessary and sufficient conditions for such relations to exist. To accomplish this, we develop, implement, and execute a novel extended refinement algorithm on polyhedral cones. We show that there is exactly one non-trivial three-term linear relation: it involves quadratic forms with discriminants -3, -12, -48, all in the same rational squareclass -3(ℚ^×)^2.
§ INTRODUCTION
Let Q be a positive-definite integer-valued binary quadratic form, which we denote by
Q(x⃗) := Q(x,y) := ax^2 + bxy + cy^2
where a, b, c ∈ ℤ and Disc(Q) := Δ(Q) := b² − 4ac < 0. For any non-negative integer m ∈ ℤ_{≥0}, we consider the sets
ℛ_Q(m) := {x⃗ ∈ ℤ^⊕2 |
Q(x⃗) = m}
of representation vectors x⃗ := (x, y), whose cardinality defines the representation numbers
r_Q(m) := #ℛ_Q(m).
We note that the representation numbers r_Q(m) are unchanged when we perform any
ℤ-invertible linear change of variables x⃗' := Ax⃗ (i.e. where both A and A^{−1} ∈ GL_{d=2}(ℤ))
on Q(x⃗) to obtain another ℤ-equivalent quadratic form Q'(x⃗'),
and we denote this ℤ-equivalence by Q ∼_ℤ Q'.
The representation numbers r_Q(m) of a positive-definite integer-valued quadratic form Q collectively define its theta series
θ_Q := θ_Q(z) := ∑_{m ≥ 0} r_Q(m) q^m,
where q := e^{2πiz} and z := x + iy ∈ ℍ with y > 0. The theta series θ_Q(z) encodes much of the arithmetic of the quadratic form Q, and determining its properties has been a central question of Number Theory and many other related areas of mathematics.
One such question about the spectrum of the Laplacian on a Riemannian manifold M popularized by Kac in the 1960s is “Can you hear the shape of a drum?”, which asks if one can determine the underlying manifold from its Laplacian eigenvalues (with multiplicity).
In the special case where M is a flat torus of dimension d (i.e. M = ^d / Λ for some full rank lattice Λ⊂^d), this is equivalent to asking
“Can we determine (the -equivalence class of) a positive-definite integer-valued
quadratic form Q in d variables from its theta series _Q(z)?”
This was definitively answered by Schiemann in the 1990s using a novel computational approach of polyhedral cone decompositions,
where he defined and executed an algorithm to answer this question. Using this approach, he showed that the
uniqueness result is true when d = 3,
and gave an example when d=4 where uniqueness fails. (See <cit.> for a recent survey of this topic.)
In this paper we propose to view Schiemann's results from the perspective of linear relations among theta series θ_Q(z) of quadratic forms, and observe that in this context he answers the question "Are there any non-trivial 2-term linear relations
α_1 θ_{Q_1} + α_2 θ_{Q_2} = 0
among the theta series of positive-definite integer-valued quadratic forms of dimension d?" (No when d ≤ 3, and yes when d ≥ 4, where necessarily α_1 = −α_2.) We then supplement and substantially extend his polyhedral cone approach to determine all (trivial and non-trivial) 3-term linear relations
α_1 θ_{Q_1} + α_2 θ_{Q_2} + α_3 θ_{Q_3} = 0
that exist between theta series of positive-definite integer-valued binary quadratic forms (i.e. d = 2), with our main result being:
Suppose Q_1, Q_2, Q_3 are positive-definite integer-valued binary quadratic forms satisfying the 3-term linear relation
α_1 θ_{Q_1} + α_2 θ_{Q_2} + α_3 θ_{Q_3} = 0
for some α_1, α_2, α_3 ∈ ℝ. Then, up to reordering of the indices i ∈ {1,2,3},
exactly one of the following holds:
1) (Degenerate Solutions) All α_i = 0, and all Q_i are arbitrary.
2) (Trivial 2-term Solutions) α_1 = −α_2 ≠ 0, α_3 = 0, Q_1 ∼_ℤ Q_2, and Q_3 is arbitrary.
3) (Trivial 3-term Solutions) All α_i ≠ 0, α_1 + α_2 + α_3 = 0, and Q_1 ∼_ℤ Q_2 ∼_ℤ Q_3.
4) (Non-Trivial 3-term Solutions) 6α_1 = 3α_2 = −2α_3 ≠ 0, giving the relation
(1/3) θ_{Q_1} + (2/3) θ_{Q_2} = θ_{Q_3},
where
Q_1 ∼_ℤ c(x² + xy + y²),
Q_2 ∼_ℤ 4c(x² + xy + y²),
Q_3 ∼_ℤ c(x² + 3y²),
for some positive integer c ∈ ℤ_{>0}.
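The non-trivial relation in case 4) can be checked numerically to any desired precision. A small brute-force Python verification (ours) for c = 1; the relation is multiplied by 3 to keep it integral:

# (1/3) theta_{Q1} + (2/3) theta_{Q2} = theta_{Q3} with
# Q1 = x^2 + xy + y^2, Q2 = 4(x^2 + xy + y^2), Q3 = x^2 + 3y^2.
def r(a, b, c, m, box=60):
    return sum(1 for x in range(-box, box + 1) for y in range(-box, box + 1)
               if a * x * x + b * x * y + c * y * y == m)

for m in range(0, 40):
    r1, r2, r3 = r(1, 1, 1, m), r(4, 4, 4, m), r(1, 0, 3, m)
    assert r1 + 2 * r2 == 3 * r3, (m, r1, r2, r3)
print("relation verified for all m < 40")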
§ THE GL_2(ℤ) REDUCTION DOMAIN 𝒱
In this section we define the (usual) fundamental domain 𝒱 for the action of GL_2() on real-valued
positive-definite binary quadratic forms, and give its explicit description as a polyhedral cone.
[Binary Quadratic Forms as Tuples]
Throughout this paper, we adopt the convention of identifying real-valued binary quadratic forms with ℝ³ via the bijection
Q(x, y) = q_11 x² + q_12 xy + q_22 y²
↦
(q_11, q_22, q_12) ∈ ℝ³.
We also identify Q with the symmetric matrix [[ q_11 q_12/2; q_12/2 q_22; ]], noting that
Q(x, y) = [ x y; ][ q_11 q_12/2; q_12/2 q_22; ][ x; y; ].
Using the identification in <ref>, we define the set 𝒱 ⊂ ℝ³ of GL_2(ℤ)-reduced positive-definite binary quadratic forms by:
(q_11, q_22, q_12) ∈ 𝒱 if and only if
q_22 - q_11≥ 0,
q_12≥ 0,
q_11 - q_12≥ 0, and
q_11 > 0,
and we also write 𝒱̄ for the closure of 𝒱.
We say that a positive-definite binary quadratic form Q(x,y) is reduced if the corresponding vector (q_11, q_22, q_12)
is in 𝒱.
Given any real positive-definite binary quadratic form Q, there exists a unique (reduced) binary quadratic form
Q' ∈ 𝒱 so that Q ∼_ℤ Q'.
We define the (usual) reduction domain 𝒱' ⊂ ℝ³ of SL_2(ℤ)-reduced positive-definite binary quadratic forms as follows:
(q_11, q_22, q_12) ∈𝒱'
q_22≥ q_11≥ |q_12|,
q_11 > 0,
q_11 = q_22 q_12≥ 0, and
q_11 = q_12 q_12≥ 0.
By <cit.>, given any real positive-definite binary quadratic form Q, there exists Q' ∈ 𝒱' so that Q = X^T Q' X for some X ∈ SL_2(ℤ). Furthermore, we observe that
[ 1 0; 0 -1; ][ q_11 q_12/2; q_12/2 q_22; ][ 1 0; 0 -1; ] = [ q_11 -q_12/2; -q_12/2 q_22; ],
which implies that (q_11, q_22, q_12) ∼_ℤ (q_11, q_22, −q_12), and they are related by the linear transformation Y := [[1, 0; 0, −1]] with determinant −1. Since det(GL_2(ℤ)) = {±1}, we also know that any g ∈ GL_2(ℤ) has either g ∈ SL_2(ℤ) or gY = gY^{−1} ∈ SL_2(ℤ), giving the disjoint union
GL_2(ℤ) = SL_2(ℤ) ⊔ Y · SL_2(ℤ).
Thus for any positive-definite binary quadratic form Q, there must exist Q' ∈ 𝒱' ∩ {(q_11, q_22, q_12) ∈ ℝ³ | q_12 ≥ 0} so that Q ∼_ℤ Q'. To see that Q' is unique, assume that there exists another Q'' ∈ 𝒱' ∩ {(q_11, q_22, q_12) ∈ ℝ³ | q_12 ≥ 0} so that Q ∼_ℤ Q''. Then Q' ∼_ℤ Q''. By the uniqueness in <cit.>, they cannot be related by an element of SL_2(ℤ). Therefore they must be related by an element of Y·SL_2(ℤ); but then Y^T Q' Y ∼_ℤ Q'', which also violates the uniqueness in <cit.>. Therefore 𝒱' ∩ {(q_11, q_22, q_12) ∈ ℝ³ | q_12 ≥ 0} = 𝒱, as desired.
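The reduction in the lemma is effective. The following Python sketch is our own illustration of the classical Gauss-style reduction (not code from this paper): it maps an arbitrary positive-definite integral form into the domain 0 ≤ q_12 ≤ q_11 ≤ q_22 and spot-checks uniqueness on random GL_2(ℤ)-transforms of a fixed reduced form.

import random

def gl2_reduce(a, b, c):
    assert a > 0 and b * b - 4 * a * c < 0
    while True:
        t = (b + a) // (2 * a)        # x -> x - t*y sends b to b - 2at in [-a, a)
        b, c = b - 2 * a * t, a * t * t - b * t + c
        if a > c:
            a, c = c, a               # swap x and y (determinant -1, allowed in GL_2(Z))
        else:
            break
    return (a, abs(b), c)             # y -> -y makes the middle coefficient >= 0

def transform(f, X):                  # coefficients of Q(px + qy, rx + sy)
    (a, b, c), ((p, q), (r, s)) = f, X
    return (a*p*p + b*p*r + c*r*r,
            2*a*p*q + b*(p*s + q*r) + 2*c*r*s,
            a*q*q + b*q*s + c*s*s)

random.seed(0)
for _ in range(100):
    X = [[1, 0], [0, 1]]
    for _ in range(8):                # random word in two GL_2(Z) generators
        g = random.choice([[[1, 1], [0, 1]], [[0, 1], [1, 0]]])
        X = [[sum(X[i][k] * g[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
    assert gl2_reduce(*transform((3, 1, 5), X)) == (3, 1, 5)
print("reduction recovers (3, 1, 5) on all random transforms")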
We note that 𝒱 can be thought of as a polyhedral cone, in the following sense:
Suppose that A and B are finite subsets of ℝ^n. Then we define the polyhedral cone Cone(A, B) ⊆ ℝ^n as
Cone(A, B) := {x⃗ ∈ ℝ^n | a⃗·x⃗ ≥ 0 for all a⃗ ∈ A,
and b⃗·x⃗ > 0 for all b⃗ ∈ B}.
Using the notation for polyhedral cones, we can write
𝒱 = Cone(𝐀, 𝐁),
where
𝐀 := {(−1, 1, 0), (1, 0, −1), (0, 0, 1)} and 𝐁 := {(1, 0, 0)}.
We can compute that the edges of 𝒱̄ are given by (0, 1, 0), (1, 1, 0), and (1, 1, 1); let f_1, f_2, f_3 denote the associated quadratic forms under <ref>, that is,
f_1(x, y) := y², f_2(x, y) := x² + y², and f_3(x, y) := x² + xy + y².
§ THE PARTIAL ORDERING ON STRONGLY PRIMITIVE VECTORS
In this section we define a relation ≼ on ^⊕ 2 induced by 𝒱, and the notion of a minimal subset.
§.§ Strongly Primitive Vectors
In this subsection we define the notion of strongly primitive objects in several related contexts.
Let ℤ^⊕2_*̃ be the set of all strongly primitive vectors (x, y), i.e. those with gcd(x, y) = 1 whose last non-zero coordinate is positive. For m ∈ ℤ_{≥0}, the strongly primitive representation set ℛ*̃_Q(m) of m is defined as the set of integral solutions (x, y) ∈ ℤ^⊕2_*̃ to Q(x, y) = m. Then the strongly primitive representation number r*̃_Q(m) is defined as the number of elements in ℛ*̃_Q(m). This lets us define the strongly primitive theta series θ*̃_Q as
θ*̃_Q := θ*̃_Q(z) := ∑_{m ≥ 0} r*̃_Q(m) q^m.
In the literature, it is common to define the set of primitive vectors ℤ^⊕2_* as the set of all (x, y) ∈ ℤ^⊕2 with gcd(x, y) = 1. The associated primitive objects ℤ^⊕2_*, ℛ^*_Q(m), r^*_Q(m), and θ^*_Q(q) are defined in terms of these primitive vectors. The relationship between primitive and strongly primitive representation numbers is given by r^*_Q(m) = 2r*̃_Q(m) for m ≥ 0. Note that 0⃗ = (0, 0) is neither a primitive nor a strongly primitive vector, since gcd(0, 0) ≠ 1.
The next lemma allows us to pass from linear relations among theta series to linear relations among strongly primitive theta series. Note that since θ_Q always has constant term 1 for a positive-definite quadratic form Q, it forces the coefficients α_i in <ref> to satisfy the relation α_1 + α_2 + α_3 = 0.
Suppose Q_1, Q_2, Q_3 are positive-definite integer-valued binary quadratic forms and α_1, α_2, α_3 ∈ ℝ are real numbers satisfying α_1 + α_2 + α_3 = 0. Then
α_1 θ_{Q_1} + α_2 θ_{Q_2} + α_3 θ_{Q_3} = 0 ⟺ α_1 θ*̃_{Q_1} + α_2 θ*̃_{Q_2} + α_3 θ*̃_{Q_3} = 0.
Case 1 (m ≥ 1). Suppose m ≥ 1. Since every x⃗ with Q_i(x⃗) = m can be written uniquely as x⃗ = d x⃗' with d ∈ ℤ_{>0} and x⃗' primitive, and then m = d² Q_i(x⃗'), we have
r_{Q_i}(m) = ∑_{d² | m} 2 r*̃_{Q_i}(m/d²)
for i ∈ {1, 2, 3}, giving
α_1 r_{Q_1}(m) + α_2 r_{Q_2}(m) + α_3 r_{Q_3}(m) = ∑_{d² | m} 2(α_1 r*̃_{Q_1}(m/d²) + α_2 r*̃_{Q_2}(m/d²) + α_3 r*̃_{Q_3}(m/d²))
= ∑_{d² | m} 0
= 0.
Conversely, by Möbius inversion we have
r*̃_Q(m) = (1/2) ∑_{m = d²d'} μ(d) r_Q(d'),
where the Möbius function μ is defined by
μ(m)
+1 if m is a square-free positive integer with an even number of prime factors,
-1 if m is a square-free positive integer with an odd number of prime factors,
0 if m is not square-free.
Therefore
α_1 r*̃_{Q_1}(m) + α_2 r*̃_{Q_2}(m) + α_3 r*̃_{Q_3}(m) = (1/2) ∑_{m = d²d'} μ(d)(α_1 r_{Q_1}(d') + α_2 r_{Q_2}(d') + α_3 r_{Q_3}(d'))
= (1/2) ∑_{m = d²d'} 0
= 0.
Case 2 (m = 0). Suppose m = 0. Then α_1 r_Q_1(m) + α_2 r_Q_2(m) + α_3 r_Q_3(m) = α_1 + α_2 + α_3 = 0. Conversely, α_1 r^*̃_Q_1(m) + α_2 r^*̃_Q_2(m) + α_3 r^*̃_Q_3(m) = α_1 · 0 + α_2 · 0 + α_3 · 0 = 0, as desired.
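Both the divisor identity and its Möbius inversion are easy to confirm by brute force; a Python check (ours) for Q = x² + y²:

from math import gcd, isqrt

def r_all(m):
    return sum(1 for x in range(-isqrt(m), isqrt(m) + 1)
                 for y in range(-isqrt(m), isqrt(m) + 1)
                 if x * x + y * y == m)

def r_sp(m):  # strongly primitive: gcd 1, last non-zero coordinate positive
    return sum(1 for x in range(-isqrt(m), isqrt(m) + 1)
                 for y in range(-isqrt(m), isqrt(m) + 1)
                 if x * x + y * y == m and gcd(x, y) == 1
                 and (y > 0 or (y == 0 and x > 0)))

def mu(n):    # Moebius function
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            res = -res
        d += 1
    return -res if n > 1 else res

for m in range(1, 200):
    sq_divs = [d for d in range(1, isqrt(m) + 1) if m % (d * d) == 0]
    assert r_all(m) == sum(2 * r_sp(m // (d * d)) for d in sq_divs)
    assert 2 * r_sp(m) == sum(mu(d) * r_all(m // (d * d)) for d in sq_divs)
print("both identities verified for m < 200")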
As a consequence of this lemma, we can focus on strongly primitive vectors and strongly primitive representation numbers throughout the rest of the paper, simplifying complexity of our computations.
§.§ The Partial Ordering
In this subsection we define the relation ≼ and give an explicit algorithm for determining it.
Suppose x⃗, y⃗ ∈ ℤ^⊕2. Then we define the relation
x⃗ ≼ y⃗ :⟺ Q(x⃗) ≤ Q(y⃗) for all Q ∈ 𝒱.
We also define the opposite relation ≽ by y⃗ ≽ x⃗ :⟺ x⃗ ≼ y⃗.
The next lemma gives us an explicit algorithm for determining when the relation ≼ holds.
Suppose x⃗, y⃗ ∈ ℤ^⊕2. Then x⃗ ≼ y⃗ if and only if f_i(x⃗) ≤ f_i(y⃗) for each i ∈ {1, 2, 3},
where the f_i are the quadratic forms defined in <ref>.
(⟸) Assume f_i(x⃗) ≤ f_i(y⃗) for each i = 1, 2, 3. Since 𝒱 ⊂ 𝒱̄, any form Q ∈ 𝒱̄ can be written as an ℝ_{≥0}-linear combination of the edges, i.e. Q = λ_1 f_1 + λ_2 f_2 + λ_3 f_3 for λ_i ∈ ℝ_{≥0}. Therefore
Q(x⃗) = x⃗^T Q x⃗ = ∑_{j=1}^3 λ_j x⃗^T f_j x⃗ ≤ ∑_{j=1}^3 λ_j y⃗^T f_j y⃗ = y⃗^T Q y⃗ = Q(y⃗),
which proves one direction.
(⟹) Assume x⃗ ≼ y⃗ (i.e. Q(x⃗) ≤ Q(y⃗) for all Q ∈ 𝒱). First note that f_2 and f_3 are contained in 𝒱, so it suffices to prove that f_1(x⃗) ≤ f_1(y⃗). Since f_1 lies on the boundary of 𝒱, we can construct a sequence of forms Q^{(1)}, Q^{(2)}, Q^{(3)}, … ∈ 𝒱 with limit lim_{i→∞} Q^{(i)} = f_1. Since each Q^{(i)}(x⃗) ≤ Q^{(i)}(y⃗), we must have f_1(x⃗) ≤ f_1(y⃗).
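The lemma reduces the infinite family of conditions defining ≼ to three integer comparisons; transcribed directly into Python (our illustration):

# x ⪯ y iff the three edge forms f_1 = y^2, f_2 = x^2 + y^2,
# f_3 = x^2 + xy + y^2 do not decrease.
def preceq(v, w):
    (x1, x2), (y1, y2) = v, w
    return (x2 * x2 <= y2 * y2
            and x1 * x1 + x2 * x2 <= y1 * y1 + y2 * y2
            and x1 * x1 + x1 * x2 + x2 * x2 <= y1 * y1 + y1 * y2 + y2 * y2)

print(preceq((1, 0), (2, 1)))   # True
print(preceq((1, 1), (2, -1)))  # True: f_1: 1 <= 1, f_2: 2 <= 5, f_3: 3 <= 3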
Note that the relation ≼ is not a partial ordering on ^⊕ 2, since ≼ - and -≼, violating antisymmetry of a partial ordering. However, it turns out the relation ≼ is a partial ordering on the set ^⊕ 2_*̃ of strongly primitive vectors, as we show in the next lemma.
The relation ≼ is a partial ordering on ^⊕ 2_*̃.
We verify reflexivity, antisymmetry, and transitivity of ≼ on ℤ^⊕2_*̃. Below, we suppose x⃗ = (x_1, x_2), y⃗ = (y_1, y_2), z⃗ = (z_1, z_2) ∈ ℤ^⊕2_*̃.
1) Reflexivity. x⃗ ≼ x⃗ since Q(x⃗) = Q(x⃗) for all Q ∈ 𝒱.
2) Antisymmetry. Suppose x⃗ ≼ y⃗ and y⃗ ≼ x⃗; this implies Q(x⃗) ≤ Q(y⃗) and Q(y⃗) ≤ Q(x⃗) for all Q ∈ 𝒱, so that Q(x⃗) = Q(y⃗) for all Q ∈ 𝒱. By <ref>, this implies f_i(x⃗) = f_i(y⃗) for each i ∈ {1,2,3}, giving
x_2^2 = y_2^2,
x_1^2 + x_2^2 = y_1^2 + y_2^2, and
x_1^2 + x_1x_2 + x_2^2 = y_1^2 + y_1 y_2 + y_2^2.
From the first equation, we get x_2 = ± y_2. From the second equation, we get x_1 = ± y_1. From the third equation, we get x_1x_2 = y_1y_2 which implies that either (x_1, x_2) = (y_1, y_2) or (x_1, x_2) = (-y_1, -y_2). However since (y_1, y_2) is strongly primitive, we know that (-y_1, -y_2) cannot be strongly primitive, giving = as desired.
3) Transitivity. This follows from <ref> since Q(x⃗) ≤ Q(y⃗) and Q(y⃗) ≤ Q(z⃗) for all Q ∈ 𝒱 implies Q(x⃗) ≤ Q(z⃗) for all Q ∈ 𝒱, hence x⃗ ≼ z⃗.
§.§ The Minimal Subset
In this subsection we define the notion of a minimal subset, and give some useful properties.
Suppose X ⊆ ℤ^⊕2. Then we define the minimal subset MIN(X) of X as
MIN(X) := {x⃗ ∈ X | x⃗ ⋡ y⃗ for all y⃗ ∈ X ∖ {x⃗}}.
Next we prove a technical lemma and give some useful properties of (X) that will allow us to compute it in <ref>.
Suppose X ⊂ ℤ^⊕2. Then for any a ∈ ℤ, the set
S := MIN(X) ∩ {(x, y) ∈ ℤ^⊕2 | y = a}
is finite.
Assume by way of contradiction that S is an infinite set. Then there exist x_1, x_2 ∈ ℤ having the same sign so that (x_1, a), (x_2, a) ∈ MIN(X) and |x_1| + |x_2| > |a|. Without loss of generality we also assume that x_1 ≤ x_2.
Case 1: Assume x_1, x_2 ≥ 0; then we show (x_1, a) ≼ (x_2, a) by verifying the conditions in <ref>:
f_1(x_1, a) ≤ f_1(x_2, a) ⟺ a² ≤ a²,
f_2(x_1, a) ≤ f_2(x_2, a) ⟺ x_1² + a² ≤ x_2² + a²,
which holds since 0 ≤ x_1 ≤ x_2, and
f_3(x_1, a) ≤ f_3(x_2, a) ⟺ x_1² + x_1a + a² ≤ x_2² + x_2a + a² ⟺ (a + x_1 + x_2)(x_1 − x_2) ≤ 0,
which holds since x_1 − x_2 ≤ 0 and a + x_1 + x_2 ≥ 0 (since by assumption |x_1| + |x_2| > |a| and x_1, x_2 ≥ 0). Thus (x_1, a) ≼ (x_2, a) ∈ MIN(X), which is a contradiction.
Case 2: Assume x_1, x_2 ≤ 0, then one can similarly show (x_2, a) ≼ (x_1, a) ∈(X), which is once again a contradiction. This completes our proof.
Suppose X ⊂ ℤ^⊕2. Then the following properties hold:
1) MIN(X) is a finite set.
2) MIN(X) ≠ ∅ when X ⊆ ℤ^⊕2_*̃ is non-empty.
3) If W ⊆ ℤ^⊕2_*̃ and there exists some non-empty auxiliary set Y_W satisfying both Y_W ⊆ X ⊆ ℤ^⊕2_*̃ and W ⊇ {x⃗ ∈ X | x⃗ ⋡ y⃗ for all y⃗ ∈ Y_W} ∪ Y_W, then MIN(X) = MIN(X ∩ W).
4) Suppose (a, 1) ∈ X ⊆ ℤ^⊕2_*̃, and let W_{0,a} := {x⃗ ∈ ℤ^⊕2_*̃ | x⃗ ⋡ (a, 1)} ∪ {(a, 1)}. Then
i) MIN(X) = MIN(X ∩ W_{0,a}), and
ii) W_{0,a} ⊆ {x⃗ ∈ ℤ^⊕2_*̃ | ‖x⃗‖_∞ ≤ √(2(a² + max(a, 0) + 1))}, where ‖(x, y)‖_∞ := max{|x|, |y|}.
Proof of 1): Assume otherwise that MIN(X) is an infinite set. Choose some arbitrary (x_0, y_0) ∈ MIN(X). Since f_2 is a positive-definite form, there exists an infinite subset S ⊆ MIN(X) so that for any (x, y) ∈ S we have f_2(x, y) ≥ f_2(x_0, y_0). Similarly, since f_3 is a positive-definite form, there exists an infinite subset S' ⊆ S ⊆ MIN(X) so that for any (x, y) ∈ S' we have f_3(x, y) ≥ f_3(x_0, y_0). If there is a point (x, y) ∈ S' with y² ≥ y_0², then we have a contradiction, since (x_0, y_0) ≼ (x, y) (by <ref>) and so (x, y) cannot be in MIN(X). Thus for all (x, y) ∈ S' we have y² < y_0², which implies that y is bounded. But since S' is infinite, there must be some coordinate a ∈ ℤ so that there are infinitely many x for which (x, a) ∈ S' ⊆ MIN(X), which by <ref> is a contradiction.
Proof of 2): Assume otherwise. Then for any x⃗ ∈ X ⊆ ℤ^⊕2_*̃, we can find x⃗' ∈ X ∖ {x⃗} such that x⃗' ≼ x⃗. This lets us construct an infinite descending chain x⃗_1 ≽ x⃗_2 ≽ ….
By the antisymmetry of the partial order on the set of strongly primitive vectors, no vector appears twice in this chain. But by <ref>, this implies that the lengths of the vectors in this chain are non-increasing, which cannot continue forever since there are only finitely many vectors of each length, a contradiction.
Proof of 3): The proof in <cit.> can be used with only one minor modification, which is that we are working with strongly primitive vectors of dimension 2 instead of dimension 3.
Proof of 4): Part 4i) follows from part 3) by letting Y_W := {(a, 1)}. For part 4ii), suppose we have |x|, |y| > √(2(a² + max(a, 0) + 1)); then we can verify that (x, y) ≽ (a, 1) by applying <ref>, and thus (x, y) ∉ W_{0,a}.
For a finite set X ⊆ ℤ^⊕2_*̃, we can use part 4i) of <ref> to compute MIN(ℤ^⊕2_*̃ ∖ X), since there always exists some vector (a, 1) ∈ ℤ^⊕2_*̃ ∖ X. Here the computation is simpler because W_{0,a} is finite by part 4ii).
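A sketch (ours) of this computation: enumerate the finite box guaranteed by part 4ii), restrict to W_{0,a}, and discard dominated candidates. The helpers re-implement the three-form test from the earlier sketch.

from math import gcd

def fvec(v):
    x, y = v
    return (y * y, x * x + y * y, x * x + x * y + y * y)

def preceq(v, w):  # v ⪯ w via the three edge forms
    return all(p <= q for p, q in zip(fvec(v), fvec(w)))

def strongly_primitive(v):
    x, y = v
    return gcd(x, y) == 1 and (y > 0 or (y == 0 and x > 0))

def MIN_complement(removed=frozenset()):
    """MIN of the complement of a finite set `removed`, via part 4)."""
    a = 0
    while (a, 1) in removed:          # pick (a, 1) outside `removed`
        a += 1
    R = int((2 * (a * a + a + 1)) ** 0.5) + 1   # part 4ii) bound (a >= 0 here)
    cand = [(x, y) for x in range(-R, R + 1) for y in range(-R, R + 1)
            if strongly_primitive((x, y)) and (x, y) not in removed
            and ((x, y) == (a, 1) or not preceq((a, 1), (x, y)))]
    return [v for v in cand if not any(w != v and preceq(w, v) for w in cand)]

print(MIN_complement())                      # [(1, 0)]
print(MIN_complement(frozenset({(1, 0)})))   # [(0, 1)]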
Next we observe that (X) satisfies the desirable property that any GL_2()-reduced positive-definite binary quadratic form Q achieves its minimum over X at some element of (X).
Suppose X ⊆ ℤ^⊕2. Then for any Q ∈ 𝒱, we have
min(Q(MIN(X))) = min(Q(X)).
Suppose otherwise that there exist Q ∈ 𝒱 and y⃗ ∈ X ∖ MIN(X) so that Q(y⃗) ≤ Q(x⃗) for all x⃗ ∈ X, and Q(y⃗) < Q(x⃗) for all x⃗ ∈ MIN(X). But then y⃗ is a vector such that x⃗ ⋠ y⃗ for all x⃗ ∈ X ∖ {y⃗} (any such x⃗ would satisfy Q(x⃗) ≤ Q(y⃗) and would lead, by a descending chain as in <ref>, to an element of MIN(X) of value at most Q(y⃗), which is impossible), and thus, by definition, y⃗ should be in MIN(X). This is a contradiction.
We now define the different but related notion of successive minima. For two singleton sets A = {a}, B = {b}, we use the notation A ≤ B to mean a ≤ b.
Suppose Q is a positive-definite binary quadratic form, and (X_i)_i=1^∞ is a sequence of finite sets X_i ⊆^⊕ 2_*̃. We define (X_i)_i=1^∞ to be a successive minima sequence of Q if and only if the following conditions hold:
1) For all x⃗ ∈ ℤ^⊕2_*̃, there exists a unique i so that x⃗ ∈ X_i.
2) For all i, there exists an m ∈_>0 so that Q(X_i) ∈{∅, {m}}.
3) For all i < j such that X_i and X_j are non-empty, we have Q(X_i) ≤ Q(X_j).
Let Q : ℤ^⊕2_*̃ → ℤ_{≥0} be the quadratic form defined by Q(x, y) := x² + y². Then consider the sequence (X_i)_{i=1}^∞ given by X_i := Q^{−1}({i}). Every strongly primitive vector appears in exactly one X_i, each set maps under Q to the squared Euclidean norm of the vectors in the set (and the norm is constant on each set), and the sets are arranged in increasing order of the norm. Therefore (X_i)_{i=1}^∞ is a successive minima sequence of Q.
The following lemma gives us a way to construct a successive minima sequence that satisfies a desired property under of subsequences.
Suppose Q ∈ 𝒱 is a GL_2(ℤ)-reduced form. Then there exists a sequence of vectors (x⃗_i)_{i=1}^∞ so that ({x⃗_i})_{i=1}^∞ is a successive minima sequence of Q, and
x⃗_{i+1} ∈ MIN(ℤ^⊕2_*̃ ∖ {x⃗_1, …, x⃗_i}) for i ≥ 0.
When i = 0, this reads x⃗_1 ∈ MIN(ℤ^⊕2_*̃).
We define the sequence inductively. By <ref>, there exists x⃗_1 ∈ MIN(ℤ^⊕2_*̃) so that Q(x⃗_1) = min Q(ℤ^⊕2_*̃), i.e. Q(x⃗_1) ≤ Q(y⃗) for all y⃗ ∈ ℤ^⊕2_*̃. Suppose we have constructed the sequence up to x⃗_1, …, x⃗_i. Then by yet another application of <ref>, we can find x⃗_{i+1} ∈ MIN(ℤ^⊕2_*̃ ∖ {x⃗_1, …, x⃗_i}) so that Q(x⃗_{i+1}) = min Q(ℤ^⊕2_*̃ ∖ {x⃗_1, …, x⃗_i}).
We prove that this is a successive minima sequence. At each step we choose a vector that minimizes Q over the vectors that have not been chosen yet, which implies that the values Q(x⃗_i) are non-decreasing. So it suffices to show that every vector in ℤ^⊕2_*̃ appears in the sequence.
Suppose some vector x⃗ ∈ ℤ^⊕2_*̃ does not appear in the sequence. Since Q is positive-definite, there are finitely many ℤ^⊕2_*̃-solutions to the equation Q(y⃗) = m for any m ∈ ℤ_{>0}. Thus there exists an index i such that Q(x⃗_i) ≤ Q(x⃗) < Q(x⃗_{i+1}). By construction, Q(x⃗_{i+1}) = min Q(ℤ^⊕2_*̃ ∖ {x⃗_1, …, x⃗_i}), but x⃗ ∈ ℤ^⊕2_*̃ ∖ {x⃗_1, …, x⃗_i} and Q(x⃗) < Q(x⃗_{i+1}), a contradiction.
Thus the constructed sequence is a successive minima sequence.
Let n ∈ ℤ_{≥0}. For the extended refinement algorithm in <ref>, we need to iterate over all possibilities for the next n minimal vectors. In other words, we want to find vectors x⃗_1, …, x⃗_n so that x⃗_i ∈ MIN(X ∖ {x⃗_1, …, x⃗_{i−1}}) for all i. This motivates us to define the useful generalization MIN_n(X) as the set of all such possible choices (up to permutation). Formally,
Suppose X ⊆ ℤ^⊕2 and n ∈ ℤ_{≥1}. Then we define the n-th order minimal subset MIN_n(X) of X by
MIN_n(X) := {𝒳 ⊆ ℤ^⊕2_*̃ | #𝒳 = n and, for some ordering 𝒳 = {x⃗_1, …, x⃗_n}, we have x⃗_{i+1} ∈ MIN(X ∖ {x⃗_1, …, x⃗_i}) for each 0 ≤ i ≤ n−1}.
For n = 1 the notions of MIN_n(X) and MIN(X) coincide up to one level of unpacking (i.e., if MIN(X) = {x⃗_1, …, x⃗_{#MIN(X)}} then MIN_1(X) = {{x⃗_1}, …, {x⃗_{#MIN(X)}}}).
We can compute MIN_n(X) recursively: first compute MIN_{n−1}(X); then, for each {x⃗_1, …, x⃗_{n−1}} ∈ MIN_{n−1}(X), compute M_n := MIN_1(X ∖ {x⃗_1, …, x⃗_{n−1}}) (as in <ref>) and add {x⃗_1, …, x⃗_{n−1}, y⃗} to MIN_n(X) for each {y⃗} ∈ M_n.
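Transcribing this recursion directly (our sketch, built on MIN_complement from the previous listing, with X taken to be a complement of a finite set):

def MIN_n(n, removed=frozenset()):
    """All n-element sets {x_1, ..., x_n} with x_{i+1} minimal in the complement."""
    if n == 0:
        return [frozenset()]
    out = set()
    for prefix in MIN_n(n - 1, removed):
        for v in MIN_complement(removed | prefix):
            out.add(prefix | {v})
    return [set(s) for s in out]

print(MIN_n(2))  # [{(1, 0), (0, 1)}]: the only choice of the first two minima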
The following corollary of <ref> follows directly from the definition of _n.
Suppose (x⃗_i)_{i=1}^∞ is a sequence satisfying the properties in <ref>. Then for all m, n ∈ ℤ_{≥0}, we have
{x⃗_{m+1}, …, x⃗_{m+n}} ∈ MIN_n(ℤ^⊕2_*̃ ∖ {x⃗_1, …, x⃗_m}).
§ THE 𝒦-SET 𝒦(X; X_1, …, X_K)
In the previous section we defined a successive minima sequence with respect to a quadratic form Q. In this section we flip the question to ask, “Can we find all quadratic forms Q ∈𝒱 which have a given successive minima sequence?”. The -set is the answer to this question.
Given a sequence X_1, …, X_k, we define the 𝒦-set of this sequence as the set of all quadratic forms Q so that X_1, …, X_k is a (truncated) successive minima sequence of Q. Formally,
Suppose X ⊆ ℤ^⊕2_*̃, and X_1, …, X_k ⊆ X are finite pairwise-disjoint sets. We define the 𝒦-set 𝒦(X; X_1, …, X_k) by
𝒦(X; X_1, …, X_k) := {Q ∈ 𝒱 | for each 1 ≤ i ≤ k we have Q(X_i) ∈ {∅, {m_i}}, where m_i := min(Q(X ∖ (X_1 ∪ … ∪ X_{i−1})))}.
We show that the 𝒦-set is a polyhedral cone, and provide the explicit inequalities required to compute it algorithmically. First, we prove a special case.
Suppose x⃗_1, …, x⃗_k ∈ X ⊆ ℤ^⊕2_*̃ is a sequence of strongly primitive vectors in X. Then 𝒦(X; {x⃗_1}, …, {x⃗_k}) is a polyhedral cone, explicitly given by
𝒦(X; {x⃗_1}, …, {x⃗_k}) = {Q ∈ 𝒱 | Q(x⃗_1) ≤ … ≤ Q(x⃗_k) ≤ Q(y⃗) for all y⃗ ∈ MIN(X ∖ {x⃗_1, …, x⃗_k})}.
This is a rephrasing of <cit.>, replacing x⃗_i by {x⃗_i} for all 1 ≤ i ≤ k.
We now use this to prove the general case.
Suppose X_1, …, X_k ⊆ X ⊆ ℤ^⊕2_*̃ are finite pairwise-disjoint subsets of X with some fixed ordering X_i = {x⃗_{i,1}, …, x⃗_{i,#X_i}} for each 1 ≤ i ≤ k. Then 𝒦(X; X_1, …, X_k) is a polyhedral cone, explicitly given by
𝒦(X; X_1, …, X_k) = K'_{X; X_1, …, X_k} ∩ L_{X; X_1, …, X_k},
where
K'_{X; X_1, …, X_k} := 𝒦(X; {x⃗_{1,1}}, …, {x⃗_{1,#X_1}}, …, {x⃗_{k,1}}, …, {x⃗_{k,#X_k}}),
L_{X; X_1, …, X_k} := {Q ∈ 𝒱 | Q(x⃗_{i,1}) = ⋯ = Q(x⃗_{i,#X_i}) for each 1 ≤ i ≤ k},
and the sets X_{i,j} := {x⃗_{i,j}} are ordered in increasing lexicographic order on the indexing pair (i, j).
We can rewrite the definition as
𝒦(X; X_1, …, X_k) = {Q ∈ 𝒱 | Q(x⃗_{i,j}) = min(Q(X ∖ {x⃗_{1,1}, …, x⃗_{i,j−1}})) and Q(x⃗_{i,1}) = ⋯ = Q(x⃗_{i,#X_i})}
= K'_{X; X_1, …, X_k} ∩ L_{X; X_1, …, X_k}.
K'_{X; X_1, …, X_k} is a polyhedral cone (by <ref>), and L_{X; X_1, …, X_k} is a polyhedral cone (since it is an intersection of a subspace with the cone 𝒱). Therefore their intersection is also a polyhedral cone.
In <ref>, it is not necessary to order the sets X_{i,j} = {x⃗_{i,j}} in increasing lexicographic order. They only need to be in increasing order of the index i, with ties among the index j broken arbitrarily.
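Because Q(x, y) = q_11x² + q_12xy + q_22y² is linear in the coefficient vector (q_11, q_22, q_12), every condition Q(v⃗) ≤ Q(w⃗) appearing above is a single homogeneous half-space, so a 𝒦-set can be handed to any polyhedral-geometry library as a list of rows. A sketch (ours; the helper names are our own) for singleton sets:

def form_row(v):
    x, y = v
    return (x * x, y * y, x * y)          # coefficients of (q11, q22, q12)

def k_set_rows(chain, minimal_rest):
    """Half-space rows for K(X; {x_1}, ..., {x_k}).

    chain: x_1, ..., x_k (the prescribed successive minima);
    minimal_rest: MIN(X \ {x_1, ..., x_k}).
    Each returned row a means a . (q11, q22, q12) >= 0, on top of the rows
    defining the reduction domain V itself (plus the strict row q11 > 0).
    """
    rows = [(-1, 1, 0), (1, 0, -1), (0, 0, 1)]   # closure of V
    pairs = list(zip(chain, chain[1:])) + [(chain[-1], w) for w in minimal_rest]
    for v, w in pairs:                            # encode Q(v) <= Q(w)
        rv, rw = form_row(v), form_row(w)
        rows.append(tuple(rw[i] - rv[i] for i in range(3)))
    return rows

print(k_set_rows([(1, 0), (0, 1)], [(-1, 1)]))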
§ THE EXTENDED REFINEMENT
§.§ Irrational and Normalized Linear Relations
Suppose Q_1, Q_2, Q_3 are positive-definite integer-valued binary quadratic forms satisfying
α_1 θ_{Q_1} + α_2 θ_{Q_2} + α_3 θ_{Q_3} = 0
for some α_1, α_2, α_3 ∈ ℝ, where one of the ratios α_i/α_j ∉ ℚ for some 1 ≤ i, j ≤ 3 with α_j ≠ 0. Then θ_{Q_1} = θ_{Q_2} = θ_{Q_3}.
Without loss of generality, we can assume α_1, α_2 ≥ 0 and α_3 < 0. Then, letting α := −α_1/α_3, we can rewrite <ref> as
α θ_{Q_1} + (1 − α) θ_{Q_2} = θ_{Q_3}.
Note that α ∉ ℚ: since α_1 + α_2 + α_3 = 0 (comparing constant terms), we have −α_2/α_3 = 1 − α, so if α were rational then every ratio α_i/α_j would be rational, contrary to our hypothesis. Since α ∉ ℚ, we must have θ_{Q_1} = θ_{Q_2}: if c_1, c_2 ∈ ℤ are the coefficients of q^m in θ_{Q_1} and θ_{Q_2} respectively for some m, then c_1 ≠ c_2 would give α c_1 + (1 − α) c_2 = c_2 + α(c_1 − c_2) ∉ ℚ, while the corresponding coefficient of θ_{Q_3} is an integer. Plugging θ_{Q_1} = θ_{Q_2} into the equation gives us θ_{Q_1} = θ_{Q_2} = θ_{Q_3}, as required.
<ref> shows that solving irrational 3-term linear relations is equivalent to solving the rational 2-term relation _Q_1 = _Q_2.
From <ref> we reduce to solving rational linear relations, which we now normalize.
Given positive-definite binary quadratic forms Q_1, Q_2, Q_3 satisfying the relation
α_1 θ_{Q_1} + α_2 θ_{Q_2} + α_3 θ_{Q_3} = 0
with α_1, α_2, α_3 ∈ ℚ, we define its normalized rational 3-term linear relation by
β_1 θ_{Q_σ(1)} + β_2 θ_{Q_σ(2)} = θ_{Q_σ(3)},
with β_1, β_2 ∈ ℚ_{≥0} and some permutation σ of {1, 2, 3}.
We can write the normalized rational 3-term linear relation with β_1 := a/c and β_2 := b/c, where a, b, c ∈ ℤ_{≥0}, c > 0, gcd(a, c) = gcd(b, c) = 1, and necessarily c = a + b (by looking at the first term of the theta series).
Given the integral form of the normalized rational 3-term linear relation
(a/(a+b)) θ_{Q_1} + (b/(a+b)) θ_{Q_2} = θ_{Q_3},
as described in <ref>, we define its solution set 𝒟_{a,b} by
𝒟_{a,b} := {(Q_1, Q_2, Q_3) ∈ 𝒱 × 𝒱 × 𝒱 | <ref> holds}.
We further define the diagonal (solution set) Δ⊆𝒟_a, b by
Δ{(Q_1, Q_2, Q_3) ∈𝒱×𝒱×𝒱| Q_1 = Q_2 = Q_3}.
§.§ The Key Lemma
In this subsection we characterize non-negative integer solutions for three-term linear diophantine equations. This will be useful for iterating over minimal vectors in algo:extended_refinement_algorithm.
We begin with the following definition.
Suppose a, b ∈ ℤ_{≥0} with (a, b) ≠ (0, 0). Then we define the linset ℒ_{a,b} by
ℒ_{a,b} := {L_1, L_2, L_3},
where
L_1 := (1, 1, 1), L_2 := (a+b, 0, a)/gcd(a, b), and L_3 := (0, a+b, b)/gcd(a, b).
The following lemma highlights the significance of the linset.
Suppose a, b ∈ ℤ_{≥0} with (a, b) ≠ (0, 0). Then a non-negative tuple (x_0, y_0, z_0) ∈ ℤ^⊕3_{≥0} is a solution to the equation
(a/(a+b)) x + (b/(a+b)) y = z
if and only if it is a ℤ_{≥0}-linear combination of the elements of ℒ_{a,b} (i.e.
(x_0, y_0, z_0) = c_1 L_1 + c_2 L_2 + c_3 L_3
for some c_1, c_2, c_3 ∈ ℤ_{≥0}).
Without loss of generality we can assume that gcd(a, b) = 1, since both <ref> and <ref> are invariant under the scaling (a, b) ↦ (a, b)/gcd(a, b).
(⟸) Note that (1, 1, 1), (a+b, 0 ,a), (0, a+b, b) are non-negative integer solutions to <ref>. Thus any _≥ 0-linear combination of these solutions is also a non-negative integral solution this equation, which suffices.
(⟹) Suppose (x, y, z) = (x_0, y_0, z_0) is a solution to <ref> with x_0, y_0, z_0 ∈_≥ 0. We claim that min{x_0, y_0, z_0}∈{x_0, y_0}. Assume otherwise min{x_0, y_0, z_0} = z_0. Then (x_0 - z_0, y_0 - z_0, 0) is also a non-negative integer solution to <ref>. Thus
a(x_0 - z_0) + b(y_0 - z_0) = 0
but since a, b > 0 we must have x_0 = z_0 and y_0 = z_0. Thus min{x_0, y_0, z_0}∈{x_0, y_0}.
Case 1: Suppose min{x_0, y_0, z_0} = x_0. Then (0, y_0 - x_0, z_0 - x_0) is also a non-negative integer solution to <ref>. This implies that
b(y_0 - x_0) = (a+b)(z_0 - x_0),
but since (a+b, b) = (a, b) = 1, by fundamental theorem of arithmetic we must have y_0 - x_0 = (a+b)k and z_0 - x_0 = bk for some k ∈_≥ 0. This shows that we have the desired _≥ 0-linear combination
(x_0, y_0, z_0) = x_0 · (1, 1, 1) + k · (0, a+b, b).
Case 2: Suppose min{x_0, y_0, z_0} = y_0, then similarly we have
(x_0, y_0, z_0) = y_0 · (1, 1, 1) + k · (a+b, 0, a)
for some k ∈_≥ 0, which completes the proof.
When both a ≠ 0 and b ≠ 0, the vectors L_1, L_2, L_3 in ℒ_{a,b} are a minimal set of generators for the ℤ_{≥0}-span of ℒ_{a,b}, given by
ℤ_{≥0}[ℒ_{a,b}] := {c_1 L_1 + c_2 L_2 + c_3 L_3 | c_1, c_2, c_3 ∈ ℤ_{≥0}}.
When a = 0 or b = 0, the vectors L_1, L_2, L_3 are not a minimal set of generators for ℤ_{≥0}[ℒ_{a,b}], since L_1 = L_2 + L_3. A minimal set ℒ'_{a,b} of generators for ℤ_{≥0}[ℒ_{a,b}] is given by ℒ'_{a,b} := {L_2, L_3}.
§.§ The Extended Refinement Algorithm
In this subsection we state the extended refinement algorithm and prove its correctness for any given pair a, b ∈_≥ 0 with (a, b) ≠ (0, 0).
We begin by defining the notion of a covering.
Suppose S ⊆ ℝ^n. Then we say that a family ℱ of subsets U ⊆ ℝ^n is a covering of S if
S ⊆ ⋃_{U ∈ ℱ} U.
Next we define invariant properties which allow us to iteratively refine coverings.
We define a covering parameter P as a tuple P := ((X_i)_{i=1}^k, (Y_i)_{i=1}^k, (Z_i)_{i=1}^k), where X_i, Y_i, Z_i ⊆ ℤ^⊕2_*̃ for 1 ≤ i ≤ k and k ∈ ℤ_{≥0}.
Suppose a, b ∈ ℤ_{≥0} with (a, b) ≠ (0, 0) and T ⊆ 𝒱 × 𝒱 × 𝒱. Suppose also that P := ((X_i)_{i=1}^k, (Y_i)_{i=1}^k, (Z_i)_{i=1}^k) is a covering parameter with k ∈ ℤ_{≥0}. Then we say that T admits covering parameter P (for the pair (a, b)) if the following properties hold:
1) n⃗ := (#X_i, #Y_i, #Z_i) ∈ ℒ_{a,b} for each 1 ≤ i ≤ k,
2) T ⊆ 𝒦(ℤ^⊕2_*̃; X_1, …, X_k) × 𝒦(ℤ^⊕2_*̃; Y_1, …, Y_k) × 𝒦(ℤ^⊕2_*̃; Z_1, …, Z_k), and
3) T ⊆ {(Q_1, Q_2, Q_3) ∈ 𝒱 × 𝒱 × 𝒱 | for each 1 ≤ i ≤ k, there exists m_i ∈ ℤ_{≥0} so that we have Q_1(X_i), Q_2(Y_i), Q_3(Z_i) ∈ {∅, {m_i}}}.
Suppose P := ((X_i)_{i=1}^k, (Y_i)_{i=1}^k, (Z_i)_{i=1}^k) is a covering parameter with k ∈ ℤ_{≥0}. Further suppose n⃗ := (n_1, n_2, n_3) ∈ ℒ_{a,b} and
𝒳 := 𝒳_n⃗ ∈ MIN_{n_1}(ℤ^⊕2_*̃ ∖ (X_1 ∪ … ∪ X_k)),
𝒴 := 𝒴_n⃗ ∈ MIN_{n_2}(ℤ^⊕2_*̃ ∖ (Y_1 ∪ … ∪ Y_k)),
𝒵 := 𝒵_n⃗ ∈ MIN_{n_3}(ℤ^⊕2_*̃ ∖ (Z_1 ∪ … ∪ Z_k)).
Then we define auxiliary sets (associated to the parameters 𝒳, 𝒴, 𝒵, P, n⃗) by
𝒦_{𝒳,𝒴,𝒵} := 𝒦_{𝒳,𝒴,𝒵; P, n⃗} := 𝒦(ℤ^⊕2_*̃; X_1, …, X_k, 𝒳) × 𝒦(ℤ^⊕2_*̃; Y_1, …, Y_k, 𝒴) × 𝒦(ℤ^⊕2_*̃; Z_1, …, Z_k, 𝒵),
𝒬_{𝒳,𝒴,𝒵} := 𝒬_{𝒳,𝒴,𝒵; P, n⃗} := {(Q_1, Q_2, Q_3) ∈ 𝒱 × 𝒱 × 𝒱 | Q_1(𝒳), Q_2(𝒴), Q_3(𝒵) ∈ {∅, {m}} for some m ∈ ℤ_{≥0}}.
Suppose a, b ∈ ℤ_{≥0} with (a, b) ≠ (0, 0), and T ⊆ 𝒱 × 𝒱 × 𝒱 admits covering parameter P := ((X_i)_{i=1}^k, (Y_i)_{i=1}^k, (Z_i)_{i=1}^k) with k ∈ ℤ_{≥0}. Then we define the refinement ℳ_{a,b}((T, P)) of the pair (T, P) as the set of pairs
ℳ_{a,b}((T, P)) := ⋃_{(n_1, n_2, n_3) ∈ ℒ_{a,b}} ⋃_{𝒳 ∈ MIN_{n_1}(ℤ^⊕2_*̃ ∖ (X_1 ∪ … ∪ X_k)),
𝒴 ∈ MIN_{n_2}(ℤ^⊕2_*̃ ∖ (Y_1 ∪ … ∪ Y_k)),
𝒵 ∈ MIN_{n_3}(ℤ^⊕2_*̃ ∖ (Z_1 ∪ … ∪ Z_k))} {(T_{𝒳,𝒴,𝒵}, P_{𝒳,𝒴,𝒵; T})},
where we define the refined set T_{𝒳,𝒴,𝒵} by
T_{𝒳,𝒴,𝒵} := T ∩ 𝒦_{𝒳,𝒴,𝒵} ∩ 𝒬_{𝒳,𝒴,𝒵}
and the covering parameter P_{𝒳,𝒴,𝒵; T} associated to T_{𝒳,𝒴,𝒵} by
P_{𝒳,𝒴,𝒵; T} := ((X_i)_{i=1}^{k+1}, (Y_i)_{i=1}^{k+1}, (Z_i)_{i=1}^{k+1}),
where X_{k+1} := 𝒳, Y_{k+1} := 𝒴, and Z_{k+1} := 𝒵. Finally, we define the refinement of a set 𝒯 of pairs (T, P) by
𝚛𝚎𝚏𝚒𝚗𝚎_{a,b}(𝒯) := ⋃_{(T, P) ∈ 𝒯} ℳ_{a,b}((T, P)).
Suppose two associated covering parameters P_{𝒳,𝒴,𝒵; T} and P_{𝒳',𝒴',𝒵'; T} agree. Then the triples of sequences ((X_i)_{i=1}^{k+1}, (Y_i)_{i=1}^{k+1}, (Z_i)_{i=1}^{k+1}) and ((X'_i)_{i=1}^{k+1}, (Y'_i)_{i=1}^{k+1}, (Z'_i)_{i=1}^{k+1}) must agree, implying 𝒳 = 𝒳', 𝒴 = 𝒴', 𝒵 = 𝒵'. Thus every covering parameter appears exactly once in ℳ_{a,b}((T, P)).
Suppose T ⊆ 𝒱 × 𝒱 × 𝒱 admits covering parameter ((X_i)_{i=1}^k, (Y_i)_{i=1}^k, (Z_i)_{i=1}^k) with k ∈ ℤ_{≥0}. Then T_{𝒳,𝒴,𝒵} admits the covering parameter P_{𝒳,𝒴,𝒵; T} associated to T_{𝒳,𝒴,𝒵}.
We will verify that properties 1) - 3) in <ref> hold.
1) By assumption, (#X_i, #Y_i, #Z_i) ∈ ℒ_{a,b} for 1 ≤ i ≤ k. For i = k+1, recall that X_{k+1} = 𝒳 ∈ MIN_{n_1}(ℤ^⊕2_*̃ ∖ (X_1 ∪ … ∪ X_k)),
Y_{k+1} = 𝒴 ∈ MIN_{n_2}(ℤ^⊕2_*̃ ∖ (Y_1 ∪ … ∪ Y_k)), and
Z_{k+1} = 𝒵 ∈ MIN_{n_3}(ℤ^⊕2_*̃ ∖ (Z_1 ∪ … ∪ Z_k)), where (n_1, n_2, n_3) ∈ ℒ_{a,b}. So (#X_{k+1}, #Y_{k+1}, #Z_{k+1}) = (n_1, n_2, n_3) ∈ ℒ_{a,b}.
2) Recall that T_{𝒳,𝒴,𝒵} = T ∩ 𝒦_{𝒳,𝒴,𝒵} ∩ 𝒬_{𝒳,𝒴,𝒵}, and therefore
T_{𝒳,𝒴,𝒵} ⊆ 𝒦_{𝒳,𝒴,𝒵} = 𝒦(ℤ^⊕2_*̃; X_1, …, X_k, 𝒳) × 𝒦(ℤ^⊕2_*̃; Y_1, …, Y_k, 𝒴) × 𝒦(ℤ^⊕2_*̃; Z_1, …, Z_k, 𝒵).
3) We have
T_{𝒳,𝒴,𝒵} ⊆ T ⊆ {(Q_1, Q_2, Q_3) ∈ 𝒱 × 𝒱 × 𝒱 | for each 1 ≤ i ≤ k, there exists m_i ∈ ℤ_{≥0} so that we have Q_1(X_i), Q_2(Y_i), Q_3(Z_i) ∈ {∅, {m_i}}}.
For i = k+1, since T_{𝒳,𝒴,𝒵} = T ∩ 𝒦_{𝒳,𝒴,𝒵} ∩ 𝒬_{𝒳,𝒴,𝒵}, we have
T_{𝒳,𝒴,𝒵} ⊆ 𝒬_{𝒳,𝒴,𝒵} = {(Q_1, Q_2, Q_3) ∈ 𝒱 × 𝒱 × 𝒱 | Q_1(𝒳), Q_2(𝒴), Q_3(𝒵) ∈ {∅, {m_{k+1}}} for some m_{k+1} ∈ ℤ_{>0}},
as desired.
Suppose a, b ∈_≥ 0 with (a, b) ≠ (0, 0). Then we define the refinement sequence (𝒯_i)_i=0^∞ (associated to (a, b)) by
𝒯_i = {(T_0, P_0)} if i = 0,
𝚛𝚎𝚏𝚒𝚗𝚎_a, b(𝒯_i-1) if i ≥ 1,
where T_0 𝒱×𝒱×𝒱, P_0 ((), (), ()), and 𝚛𝚎𝚏𝚒𝚗𝚎_a, b(𝒯_i-1) is as defined in <ref>.
Next we define the Extended Refinement Algorithm.
The sets 𝒮_i in algo:extended_refinement_algorithm differ slightly from the sets 𝒯_i in the refinement sequence, since 𝒮_i does not contain any pairs (T, P) satisfying T ⊆𝚂𝚃𝙾𝙿_𝚂𝙴𝚃.
We now formally prove that the extended refinement algorithm gives a set of coverings of the desired solution set 𝒟_a, b. First we prove some technical lemmas.
Suppose (Q_1, Q_2, Q_3) ∈𝒱×𝒱×𝒱. Then (Q_1, Q_2, Q_3) ∈𝒟_a, b if and only if for each i ∈{1, 2, 3} there exists a successive minima sequence S_i (S_ij)_j=1^∞ of Q_i so that for all j ≥ 1, the following properties hold:
1) (#S_1j, #S_2j, #S_3j) ∈ℒ_a,b,
2) Q_i(S_ij) ∈{∅, {m_j}} for some m_j ∈_> 0, and
3) S_ij∈_#S_ij(^⊕ 2_*̃∖ (S_i1⋯ S_i(j-1))).
(⟹) By <ref>, for each i ∈{1, 2, 3} there exists a sequence S'_i (s⃗_ij)_j=1^∞ of vectors in ^⊕ 2_*̃ such that ({s⃗_ij})_j=1^∞ is a successive minima sequence satisfying
s⃗_i(j+1)∈(^⊕ 2_*̃∖{s⃗_i1, …, s⃗_ij}) for all j ≥ 1.
Fix m ∈ ℤ_{>0}. The elements of the ℤ^⊕2_*̃-preimage of m under Q_i can be viewed as a contiguous subsequence of S'_i with r*̃_{Q_i}(m) elements. Therefore for each i ∈ {1, 2, 3}, there exist indices k_i := k_{i,m} and ℓ_i := ℓ_{i,m} such that
Q_i({s⃗_{ij}}_{j=k_i}^{ℓ_i}) ∈ {∅, {m}} and r*̃_{Q_i}(m) = ℓ_i − k_i + 1.
By examining the coefficient of q^m in <ref>, we see that the triple (r*̃_{Q_1}(m), r*̃_{Q_2}(m), r*̃_{Q_3}(m)) satisfies <ref>. Therefore by <ref> it must be a ℤ_{≥0}-linear combination of the elements of ℒ_{a,b} = {L_1, L_2, L_3}, i.e., there exist non-negative integers c_1, c_2, c_3 ∈ ℤ_{≥0} so that
(r^*̃_Q_1(m), r^*̃_Q_2(m), r^*̃_Q_3(m)) = c_1 L_1 + c_2 L_2 + c_3 L_3 = c_1 · (1, 1, 1) + c_2 · (a+b, 0, a) + c_3 · (0, a+b, b).
Next we partition the subsequence (s⃗_{ij})_{j=k_i}^{ℓ_i} of S'_i into a sequence (X_{m,n;i})_{n=1}^{c_1+c_2+c_3} of c_1 + c_2 + c_3 (possibly empty) sets as follows: starting from the beginning of the subsequence (s⃗_{ij})_{j=k_i}^{ℓ_i}, create a sequence of c_1 sets
(X_{m,n;i} := {s⃗_{ij}}_{j = k_i + (n−1)L_{1,i}}^{k_i + nL_{1,i} − 1})_{n=1}^{c_1},
each having cardinality L_{1,i}; then create a sequence of c_2 sets
(X_{m,n;i} := {s⃗_{ij}}_{j = k_i + c_1L_{1,i} + (n−1−c_1)L_{2,i}}^{k_i + c_1L_{1,i} + (n−c_1)L_{2,i} − 1})_{n=c_1+1}^{c_1+c_2},
each having cardinality L_{2,i}; and finally create a sequence of c_3 sets
(X_{m,n;i} := {s⃗_{ij}}_{j = k_i + c_1L_{1,i} + c_2L_{2,i} + (n−1−c_1−c_2)L_{3,i}}^{k_i + c_1L_{1,i} + c_2L_{2,i} + (n−c_1−c_2)L_{3,i} − 1})_{n=c_1+c_2+1}^{c_1+c_2+c_3},
each having cardinality L_{3,i}. (Recall from <ref> that we denote the i'-th coordinate of L_{j'} by L_{j',i'}. For a more explicit definition of X_{m,n;i}, see <ref>.)
Finally, we define the sequence S_i = (S_ij)_j=1^∞ by arranging the sets X_m, n; i by the indexing pair (m, n), in increasing lexicographic order. (I.e. S_ij X_m, n; i where X_m, n; i is the j-th set under this ordering.)
The sequences S_i satisfy property 1) since by <ref>, <ref>, and <ref>, we have
(#X_m, n; 1, #X_m, n; 2, #X_m, n; 3) =
(L_1, 1, L_1, 2, L_1, 3) = L_1 if 1 ≤ n ≤ c_1,
(L_2, 1, L_2, 2, L_2, 3) = L_2 if c_1 + 1 ≤ n ≤ c_1 + c_2,
(L_3, 1, L_3, 2, L_3, 3) = L_3 if c_1 + c_2 + 1 ≤ n ≤ c_1 + c_2 + c_3.
The sequences S_i also satisfy property 2) since Q_i (X_m, n; i) = {m} (if non-empty) or ∅. By <ref> the sequences S_i satisfy property 3), as desired.
(⟸) Suppose the sequences S_i satisfying properties 1) - 3) exist. Fix m ∈_> 0. Then there exist indices k_m, ℓ_m so that
Q_i(S_ij) ∈{∅, {m}}⟺ k_m ≤ j ≤ℓ_m,
since S_i is a successive minima sequence and by property 2). Furthermore, since the sequence S_i is a successive minima sequence, every vector x⃗∈^⊕ 2_*̃ that satisfies Q_i(x⃗) = m must appear in exactly one S_ij where k_m ≤ j ≤ℓ_m, showing that
∑_j = k_m^ℓ_m# S_ij = r_Q_i^*̃(m).
Therefore
(r_Q_1^*̃(m), r_Q_2^*̃(m), r_Q_3^*̃(m)) = ∑_j=k_m^ℓ_m (# S_1j, #S_2j, #S_3j),
and by property 1) we have (# S_1j, #S_2j, #S_3j) ∈ℒ_a, b. Hence the sum is a _≥ 0-linear combination of elements of ℒ_a, b, which by <ref> satisfies the equation
a/a+b r_Q_1^*̃(m) + b/a+b r_Q_2^*̃(m) = r_Q_3^*̃(m),
proving the lemma.
Suppose (Q_1, Q_2, Q_3) ∈𝒟_a, b, and for each i ∈{1, 2, 3} let S_i be a successive minima sequence of Q_i as defined in <ref>. Then for each k ∈_≥ 0, there exists some (T, P) ∈𝒯_k where P = ((S_1j)_j=1^k, (S_2j)_j=1^k, (S_3j)_j=1^k).
We proceed by induction on k. For k = 0, we have ((S_1j)_j=1^k, (S_2j)_j=1^k, (S_3j)_j=1^k) = ((), (), ()) and since 𝒯_0 = {(𝒱×𝒱×𝒱, ((), (), ()))}, the base case holds.
Next we establish the lemma for k = k'+1 assuming it holds for k = k', that is, we know there exists some (T, P) ∈𝒯_k' so that P = ((S_1j)_j=1^k', (S_2j)_j=1^k', (S_3j)_j=1^k'). Note that by property 1) of <ref>, we have (n_1, n_2, n_3) (#S_1(k'+1), #S_2(k'+1), #S_3(k'+1)) ∈ℒ_a, b, and by property 3) of <ref> we have
S_1(k'+1)∈_n_1(^⊕ 2_*̃∖ (S_11⋯ S_1k')),
S_2(k'+1)∈_n_2(^⊕ 2_*̃∖ (S_21⋯ S_2k')),
S_3(k'+1)∈_n_3(^⊕ 2_*̃∖ (S_31⋯ S_3k')).
Thus in <ref> of the <ref>, we can let
𝒳 S_1(k'+1), 𝒴 S_2(k'+1), 𝒵 S_3(k'+1),
which gives the corresponding refinement T_, , with associated covering parameter
P = P_, , ; T = ((S_1j)_j=1^k'+1, (S_2j)_j=1^k'+1, (S_3j)_j=1^k'+1),
as desired.
Suppose a, b ∈_≥ 0 with (a, b) ≠ (0, 0), and (𝒯_k)_k=0^∞ is the refinement sequence associated to (a, b), and let
𝒰_k ⋃_(T, P) ∈𝒯_k T
denote the union of the refined sets in each 𝒯_k. Then we have
𝒰_0 ⊇𝒰_1 ⊇…⊇𝒟_a, b.
Part 1) We first show that 𝒰_k+1⊆𝒰_k by showing that for any (T, P) ∈_k+1, there exists (T', P') ∈_k with T ⊆ T'. Suppose (T, P) ∈_k+1 with P ((X_j)_j=1^k+1, (Y_j)_j=1^k+1, (Z_j)_j=1^k+1). Then there must exist (T', P') ∈_k with P' ((X_j)_j=1^k, (Y_j)_j=1^k, (Z_j)_j=1^k). But by <ref> this gives us
T = T' _X_k+1, Y_k+1, Z_k+1_X_k+1, Y_k+1, Z_k+1
and so T ⊆ T', as desired.
Part 2) Now we show that 𝒰_k ⊇𝒟_a, b. Let (Q_1, Q_2, Q_3) ∈𝒟_a, b, and for each i ∈{1, 2, 3} let S_i be a successive minima sequence of Q_i as defined in <ref>. By <ref>, for each k ∈_≥ 0 there exists (T_k, P_k) ∈_k with P_k = ((S_1j)_j=1^k, (S_2j)_j=1^k, (S_3j)_j=1^k). We now use induction on k to show that T_k contains (Q_1, Q_2, Q_3).
For k = 0 we have _0 = {(T_0, P_0)}. Since (Q_1, Q_2, Q_3) ∈𝒟_a, b⊆ T_0 = 𝒱×𝒱×𝒱, the base case holds.
Now we show the theorem is true for k = k'+1 assuming that it is true for k = k', i.e., we know (Q_1, Q_2, Q_3) ∈ T_k' for (T_k', P_k') ∈𝒯_k' where P_k' = ((S_1j)_j=1^k', (S_2j)_j=1^k', (S_3j)_j=1^k'). Now consider (T_k'+1, P_k'+1) where P_k'+1 = ((S_1j)_j=1^k'+1, (S_2j)_j=1^k'+1, (S_3j)_j=1^k'+1). By <ref> we have
T_k'+1 = (T_k'_S_1(k'+1), S_2(k'+1), S_3(k'+1)_S_1(k'+1), S_2(k'+1), S_3(k'+1))
where
_S_1(k'+1), S_2(k'+1), S_3(k'+1)(^⊕ 2_*̃; S_11, …, S_1(k'+1)) ×(^⊕ 2_*̃; S_21, …, S_2(k'+1)) ×(^⊕ 2_*̃; S_31, …, S_3(k'+1)),
𝒬_S_1(k'+1), S_2(k'+1), S_3(k'+1){(Q_1', Q_2', Q_3') ∈𝒱×𝒱×𝒱 |[ there exists m ∈_≥ 0 so that; Q_i'(S_i(k'+1)) ∈{∅, {m}} for each i ∈{1, 2, 3}; ]}.
By definition of -sets, we have Q_1 ∈(^⊕ 2_*̃; S_11, …, S_1(k'+1)), Q_2 ∈(^⊕ 2_*̃; S_21, …, S_2(k'+1)), Q_3 ∈(^⊕ 2_*̃; S_31, …, S_3(k'+1)), and thus (Q_1, Q_2, Q_3) ∈_S_1(k'+1), S_2(k'+1), S_3(k'+1). By property 2) of <ref>, we have Q_1(S_1(k'+1)), Q_2(S_2(k'+1)), Q_3(S_3(k'+1)) ∈{∅, {m}} for some m ∈_≥ 0, and thus (Q_1, Q_2, Q_3) ∈𝒬_S_1(k'+1), S_2(k'+1), S_3(k'+1). By the induction hypothesis, we have (Q_1, Q_2, Q_3) ∈ T_k', which implies
(Q_1, Q_2, Q_3) ∈(T_k'_S_1(k'+1), S_2(k'+1), S_3(k'+1)_S_1(k'+1), S_2(k'+1), S_3(k'+1)) = T_k'+1.
We finally note that (Q_1, Q_2, Q_3) ∈ T_i ⊆𝒰_i, thus 𝒰_i ⊇𝒟_a, b as desired.
§.§ Algorithmic Considerations
In this subsection we describe key implementation details and algorithmic aspects for this project.
§.§.§ Implementation
We implemented algo:extended_refinement_algorithm in (<cit.>, <cit.>). has a polyhedron interface to cdd (and several other packages), however we chose to build a custom polyhedral cone library for two key reasons: First, the standard polyhedra packages in do not accept open halfspace conditions, but the GL_2()-reduction domain 𝒱 requires strict inequalities in its definition. Second, without caching the edge computations from each stage of the refinement, the default class becomes inefficient and does not scale well as the dimension of the problem increases (e.g. ternary quadratic forms as in Schiemann's work or n-term linear relations). Therefore we chose to implement it with these additional use cases in mind. For reference, <cit.> contains a treatment of the theory of polyhedral cones required for our implementation.
§.§.§ Computing Refinement Objects
For clarity, we summarize how to compute objects from the refinement algorithmically:
1) _, ,: This is a cross-product of three -sets. We know each -set is a polyhedral cone that can be explicitly computed using <ref>. Then _, , is the direct sum of these polyhedral cones.
2) _, ,: There are three possibilites on (#, #, #), each requiring that Q_1(), Q_2(), Q_3() ∈{∅, {m}} for some m ∈_> 0:
i) If (#, #, #) = (1, 1, 1), then Q_1() = Q_2() = Q_3().
ii) If (#, #, #) = (a+b, 0, a), then Q_1() = Q_3() and Q_2 is arbitrary.
iii) If (#, #, #) = (0, a+b, b), then Q_2() = Q_3() and Q_1 is arbitrary.
In each case, the set of forms (Q_1 ,Q_2, Q_3) satisfying these equations is a polyhedral cone.
3) _n(X): See <ref> for details.
See sec:appendix for examples of explicit computations of different refinement objects.
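For concreteness, the following minimal Python sketch (class and method names are our own illustration, not the actual implementation discussed above) shows the (A, B) matrix representation of polyhedral cones used in the appendix examples, where refining a cone amounts to stacking new constraint rows; tracking the strict inequalities in the definition of 𝒱 would need extra bookkeeping and is omitted here.

import numpy as np

class PolyhedralCone:
    """Cone(A, B) = { x in R^9 : A x >= 0 componentwise, B x = 0 }.

    A holds the inequality rows and B the equality rows, matching the
    (A, B) matrix convention of the appendix examples; the coordinates
    are the nine entries of (Q_1, Q_2, Q_3).
    """

    def __init__(self, A, B, dim=9):
        self.A = np.asarray(A, dtype=float).reshape(-1, dim)
        self.B = np.asarray(B, dtype=float).reshape(-1, dim)

    def intersect(self, other):
        # a refinement P' = P ∩ K ∩ Q just stacks constraint rows
        return PolyhedralCone(np.vstack([self.A, other.A]),
                              np.vstack([self.B, other.B]))

    def contains(self, x):
        x = np.asarray(x, dtype=float)
        return bool(np.all(self.A @ x >= -1e-12) and np.allclose(self.B @ x, 0))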
§.§.§ Termination Conditions
In Schiemann's original refinement algorithm (<cit.>), termination occurs when a polyhedral cone is contained in the diagonal Δ' ⊂^6 ×^6. This is because in Schiemann's case, it was widely believed that the only reduced solutions (Q_1, Q_2) of _Q_1 = _Q_2 should satisfy Q_1 = Q_2.
For our three-term linear relations, there exist non-trivial solutions not contained in the diagonal Δ⊂^3 ×^3 ×^3. Therefore, for practical purposes, it is important to impose an additional terminating condition, which we include as 𝚂𝚃𝙾𝙿_𝚂𝙴𝚃 and 𝙼𝙰𝚇_𝙸𝚃𝙴𝚁𝙰𝚃𝙸𝙾𝙽𝚂 in algo:extended_refinement_algorithm.
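Schematically, the main loop of the extended refinement algorithm with both termination conditions can be sketched as follows; this is a hedged Python outline, with the helper callables refine_pair and in_stop_set standing in for the actual refinement step and the containment test T ⊆ 𝚂𝚃𝙾𝙿_𝚂𝙴𝚃, which are assumptions of this sketch.

def extended_refinement(S0, refine_pair, in_stop_set, max_iterations):
    """Schematic main loop of Algorithm 1 (helper callables are assumptions).

    S0 is the initial collection {(T_0, P_0)}; refine_pair maps one pair
    (T, P) to the list of its refinements; in_stop_set(T) tests whether
    T is contained in the stop set.  Mirroring the sets S_i, pairs whose
    cone is empty (None in this sketch) or inside the stop set are dropped.
    """
    S = [(T, P) for (T, P) in S0 if not in_stop_set(T)]
    for _ in range(max_iterations):
        S = [(T, P)
             for pair in S
             for (T, P) in refine_pair(pair)
             if T is not None and not in_stop_set(T)]
        if not S:
            break  # every refined cone was empty or landed in the stop set
    return S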
§ RESULTS
In this section we present results obtained from implementing algo:extended_refinement_algorithm, proving <ref>.
Suppose a, b ∈_≥ 0 with (a, b) ≠ 0 and (a, b) = 1. Then recall the normalized rational 3-term linear relation
a/a+b_Q_1 + b/a+b_Q_2 = _Q_3.
We now give our results, grouped by the value of a+b.
§.§ a + b = 1
In this case (a, b) = (1, 0) or (0, 1), and by symmetry it suffices to consider only (a, b) = (1, 0). Then the normalized rational 3-term linear relation is given by
_Q_1 = _Q_3.
We apply algo:extended_refinement_algorithm with 𝚂𝚃𝙾𝙿_𝚂𝙴𝚃Δ', where
Δ' {(Q_1, Q_2, Q_3) ∈𝒱×𝒱×𝒱| Q_1 = Q_3}
and 𝙼𝙰𝚇_𝙸𝚃𝙴𝚁𝙰𝚃𝙸𝙾𝙽𝚂 13. Note that by <ref>, we can use ℒ_1,0' instead of ℒ_1, 0.
The algorithm terminates after 𝙼𝙰𝚇_𝙸𝚃𝙴𝚁𝙰𝚃𝙸𝙾𝙽𝚂 iterations. By analyzing the pairs (T, P) in 𝒮_13 with T ⊈Δ', we write P = ((X_i)_i=1^13, (Y_i)_i=1^13, (Z_i)_i=1^13), and notice that each such P has at least one index i' (with 1 ≤ i' ≤ 13) where #Y_i' > 0. However, since the coefficient of _Q_2 in <ref> is 0, choosing the set Y_i' of strongly primitive vectors of Q_2 does not give us any additional information about the linear relation, because every such refinement can also be realized (with the same projection π_1, 3: (Q_1, Q_2, Q_3) ↦ (Q_1, Q_3)) by removing the triple (X_i', Y_i', Z_i') from P. By applying this logic repeatedly, we can ignore all such pairs (T, P) and assume #Y_i = 0 for all 1 ≤ i ≤ 13. Thus all the solutions are in the 𝚂𝚃𝙾𝙿_𝚂𝙴𝚃 = Δ', showing Q_1 ∼_ Q_3, as desired.
Suppose Q_1, Q_2, Q_3 are positive-definite integer-valued binary quadratic forms satisfying
α_1 _Q_1 + α_2 _Q_2 + α_3 _Q_3 = 0,
for some α_1, α_2, α_3 ∈ ℝ, where one of the ratios α_i/α_j ∉ ℚ for some 1 ≤ i, j ≤ 3 with α_j ≠ 0. Then Q_1∼_Q_2∼_Q_3.
By <ref>, it suffices to solve _Q_1 = _Q_2 = _Q_3. However, we know from this subsection that any pairwise equality _Q_i = _Q_j for i ≠ j implies Q_i ∼_ Q_j, giving Q_1 ∼_ Q_2 ∼_ Q_3, as desired.
§.§ a + b = 2
In this case (a, b) = (1, 1). Then the normalized rational 3-term linear relation is given by
1/2_Q_1 + 1/2_Q_2 = _Q_3.
We apply algo:extended_refinement_algorithm with 𝚂𝚃𝙾𝙿_𝚂𝙴𝚃Δ and 𝙼𝙰𝚇_𝙸𝚃𝙴𝚁𝙰𝚃𝙸𝙾𝙽𝚂 13. The algorithm terminated after iteration i = 6: there were no remaining pairs (T, P), since all (T, P) ∈𝒮_6 satisfied T ⊆Δ. Therefore Q_1 ∼_ Q_2 ∼_ Q_3.
The following table summarizes the number of pairs (T, P) ∈𝒮_i.
State after iteration i:

  i                            0    1    2    3    4    5    6
  #𝒮_i                         1    3    9   29   58   30    0
  #{(T, P) ∈ 𝒮_i | T ≠ ∅}      1    3    9   16    6    0    0
§.§ a + b = 3
In this case (a, b) = (1, 2) or (2, 1), and by symmetry it suffices to consider only (a, b) = (1, 2). Then the normalized rational 3-term linear relation is given by
1/3_Q_1 + 2/3_Q_2 = _Q_3.
We apply algo:extended_refinement_algorithm with 𝚂𝚃𝙾𝙿_𝚂𝙴𝚃Δ and 𝙼𝙰𝚇_𝙸𝚃𝙴𝚁𝙰𝚃𝙸𝙾𝙽𝚂 13. The algorithm terminated after 𝙼𝙰𝚇_𝙸𝚃𝙴𝚁𝙰𝚃𝙸𝙾𝙽𝚂 iterations. The following table summarizes the number of pairs (T, P) ∈𝒮_i after iteration i.
State after iteration i:

  i                            0    1    2    3    4    5    6    7    8    9   10   11   12    13
  #𝒮_i                         1    3   11   21   13   24   16   48   33   57   27   77   42   287
  #{(T, P) ∈ 𝒮_i | T ≠ ∅}      1    3    5    2    1    1    1    1    1    1    1    1    3     3
Analyzing the unique pair (T, P) ∈𝒯_4 with T ≠∅, we see that T = _≥ 0v⃗ where
v⃗ (1/4, 1/4, 1/4) × (1, 1, 1) × (1/4, 3/4, 0) ∈𝒱×𝒱×𝒱.
This gives us a potential candidate for a non-trivial solution up to scaling and equivalence, namely
Q_1 x^2 + xy + y^2,
Q_2 4(x^2 + xy + y^2),
Q_3 x^2 + 3y^2.
We now prove that this candidate indeed satisfies <ref>.
Suppose Q_1, Q_2, Q_3 are positive-definite integer-valued binary quadratic forms such that
Q_1 ∼_ c(x^2 + xy + y^2),
Q_2 ∼_ 4c(x^2 + xy + y^2), and
Q_3 ∼_ c(x^2 + 3y^2)
for some positive integer c ∈_>0. Then we have the following relation among their theta series:
1/3_Q_1 + 2/3_Q_2 = _Q_3.
Let (a, b) ∈^⊕ 2, and let
m a^2 + 3b^2.
It suffices to show that
1/3 r_Q_1(m) + 2/3 r_Q_2(m) = r_Q_3(m).
Case 1. If a ≡ b 2, then the map (a, b) ↦ ((a - b)/2, b) is a bijection from _Q_3(m) to _Q_2(m), with the inverse map given by (u, v) ↦ (2u + v, v). Therefore r_Q_2(m) = r_Q_3(m). Similarly, the map (a, b) ↦ (a-b, 2b) is a bijection from _Q_3(m) to _Q_1(m), with the inverse map given by (u, v) ↦(u + v/2, v/2) (which is an integral vector since u^2 + uv + v^2 ≡ 0 2 implies u, v ≡ 0 2). Therefore r_Q_1(m) = r_Q_2(m) = r_Q_3(m), which suffices.
Case 2. If a ≢b 2, then r_Q_2(m) = 0 since Q_2(x, y) ∈ 2 when x, y ∈. To show that r_Q_1(m) = 3r_Q_3(m), we consider the map φ from _Q_3(m) to subsets of _Q_1(m) given by
φ : (a, b) ↦{(a-b, 2b), (-a+b, a+b), (2b, -a+b)},
which we show has the following two properties:
1) φ is injective.
2) For any (u, v) ∈_Q_1(m) there exists some (a, b) ∈_Q_3(m) so that (u, v) ∈φ((a, b)).
To show part 1), it suffices to construct an inverse map ψ so that ψ(φ((a, b))) = (a, b). Suppose φ((a, b)) = {(u_1, v_1), (u_2, v_2), (u_3, v_3)}. Since a ≢b 2, from the definition of φ we see that exactly one of the elements (u_i, v_i) where 1 ≤ i ≤ 3 satisfies v_i ≡ 0 2. Then we define the inverse map ψ by
ψ : {(u_1, v_1), (u_2, v_2), (u_3, v_3)}↦(u_i + v_i/2, v_i/2),
which we see is in _Q_1(m) and also satisfies ψ(φ((a, b))) = (a, b).
Next we show part 2). Note that the set φ((a, b)) is invariant under the left-multiplication action (u, v) ↦ g · (u, v) for all g in the cyclic group
G {[ 1 0; 0 1; ], [ -1 0; 1 1; ], [ 0 1; -1 0; ]}.
Suppose (u, v) ∈_Q_1(m). Then define
F G · (u, v) = {(u, v), (-u, u + v), (v, -u)}.
We now show that there exists some (a, b) ∈_Q_3(m) so that φ((a, b)) = F. Consider the set
H [ 1 1/2; 0 1/2; ]· F = {(u + v/2, v/2), ((v - u)/2, (u + v)/2), (v - u/2, -u/2)}.
Each element (x, y) ∈ H satisfies x^2 + xy + y^2 = m. However, since m ≡ 1 2, both u and v cannot be even. So exactly one of the elements (a, b) ∈ H satisfies (a, b) ∈^⊕ 2, giving φ((a, b)) = F.
Together, parts 1) and 2) imply r_Q_1(m) = 3 r_Q_3(m). Since r_Q_2(m) = 0, this proves the result.
While the proof of <ref> is given in terms of an explicit construction
on the representation vectors for the given quadratic forms, by <cit.> we could also verify this identity
by a (less insightful) finite computation with the associated theta series _Q_i(z) as modular forms in the space M_k(N, χ) = M_1(12, χ_-3)
where χ_-3: (/3)^×→{±1} is the non-trivial Dirichlet character. Here identities between modular forms can be verified by checking that all Fourier coefficients agree up to the Sturm bound, which by <cit.> (applied for infinitely many prime ideals 𝔪 = p) requires us to verify the identity
1/3 r_Q_1(m) + 2/3 r_Q_2(m) = r_Q_3(m)
for m = 0, 1, and 2. This holds, which proves the lemma.
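Independently of both arguments, the identity is easy to spot-check numerically; the following small Python sketch brute-forces the representation numbers for the three forms (with c = 1) and verifies r_Q_1(m) + 2 r_Q_2(m) = 3 r_Q_3(m), an equivalent integral form of the relation, for all m below a chosen bound.

def r(Q, m, bound=40):
    # representation number r_Q(m) by brute force over a box of vectors
    return sum(1 for x in range(-bound, bound + 1)
                 for y in range(-bound, bound + 1)
                 if Q(x, y) == m)

Q1 = lambda x, y: x*x + x*y + y*y        # Q_1 = x^2 + xy + y^2
Q2 = lambda x, y: 4*(x*x + x*y + y*y)    # Q_2 = 4(x^2 + xy + y^2)
Q3 = lambda x, y: x*x + 3*y*y            # Q_3 = x^2 + 3y^2

# (1/3) r_{Q_1}(m) + (2/3) r_{Q_2}(m) = r_{Q_3}(m) for all small m
assert all(r(Q1, m) + 2*r(Q2, m) == 3*r(Q3, m) for m in range(40))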
§.§ a + b ≥ 4
In this case we show that there are no non-trivial solutions.
For all a+b ≥ 4, algo:extended_refinement_algorithm can be represented using the following refinement diagram:
∅ <--(a+b, 0, a)-- T_0 --(0, a+b, b)--> ∅
                   | (1, 1, 1)
                   v
∅ <--(a+b, 0, a)-- T_1 --(0, a+b, b)--> ∅
                   | (1, 1, 1)
                   v
∅ <--(a+b, 0, a)-- T_2 --(0, a+b, b)--> ∅
                   | (1, 1, 1)
                   v
                  T_3 ⊆ Δ,
where T_0, T_1, T_2, T_3 are non-empty polyhedral cones independent of a and b, with (T_i, P_i) ∈𝒮_i for some covering parameter P_i, and also T_3 ⊆Δ. In this diagram, the refinement algorithm is represented as a rooted edge-labelled directed graph. The root node T_0 = 𝒱×𝒱×𝒱 represents the initial state of the algorithm. For any node T, its children are given by the refinements T_, , across all possible , , from <ref>. The edge from T to T_, , is labelled (#, #, #) ∈_a, b.
We first prove the claim for a+b = 4, and then generalize it for all a+b ≥ 4.
Case 1 (a + b = 4): Since (a, b) = 1, we have either (a, b) = (1, 3) or (3, 1). Now by applying algo:extended_refinement_algorithm, we see that there exist T_0, T_1, T_2, T_3 (with T_3 ⊆Δ) such that in either case the refinement algorithm is represented by the refinement diagram (15). (See <ref> for explicit polyhedral cone descriptions of T_0, T_1, T_2, T_3 and their respective covering parameters P_0, P_1, P_2, P_3.)
Case 2 (a + b > 4): We use Case 1 to show that for all a + b > 4, the refinement algorithm is represented by the refinement diagram (15). When an edge is labelled (1, 1, 1), the refinement is independent of the choices of a and b, and so the path T_0 → T_1 → T_2 → T_3 is always in the directed graph. Therefore it suffices to show that any edge from T_i ∈{T_0, T_1, T_2} that is labelled (a+b, 0, a) or (0, a+b, b) is directed to an empty polyhedral cone.
In <ref>, if n⃗ = (a+b, 0, a) then 𝒳 = 𝒳(n⃗) ∈MIN_a+b(ℤ_*̃^⊕ 2∖(X_1… X_k)), and if n⃗ = (0, a+b, b) then 𝒴 = 𝒴(n⃗) ∈MIN_a+b(ℤ_*̃^⊕ 2∖(Y_1… Y_k)). We show that in these cases, respectively, either (^⊕ 2_*̃; X_1, …, X_k, ) or (^⊕ 2_*̃; Y_1, …, Y_k, ) is the zero-cone (i.e., the polyhedral cone {(0, 0, 0)}). Since the binary quadratic form given by Q = 0 is not in 𝒱, the refinement of (T_i, P_i) will be empty. Below, we execute the refinement for (T_i, P_i) with T_i ∈{T_0, T_1, T_2}.
Case 2a (Refining (T_0, P_0)). T_0 has covering parameter P_0 = ((), (), ()), where () denotes an empty sequence. Since a + b > 4, for any X_1 ∈_a+b(^⊕ 2_*̃), there exists X'_1 ∈_4(^⊕ 2_*̃) so that X'_1 ⊆ X_1. Therefore (^⊕ 2_*̃; X_1) ⊆(^⊕ 2_*̃; X'_1). We will show that (^⊕ 2_*̃; X'_1) is the zero-cone, implying (^⊕ 2_*̃; X_1) is also the zero-cone.
Since _4(_*̃^⊕ 2) = {{(1, 0), (0, 1), (-1, 1), (1, 1)}}, we can use our -set algorithm described in <ref> to obtain
(^⊕ 2_*̃; {(1, 0), (0, 1), (-1, 1), (1, 1)}) = Polyhedral Cone {(0, 0, 0)},
as required.
After refining (T_0, P_0), our refinement diagram (15) now looks like this:
∅ <--(a+b, 0, a)-- T_0 --(0, a+b, b)--> ∅
                   | (1, 1, 1)
                   v
                  T_1.
Case 2b (Refining (T_1, P_1)). T_1 has covering parameter P_1 = (({(1, 0)}), ({(1, 0)}), ({(1, 0)})). For any X_2 ∈_a+b(^⊕ 2_*̃∖ X_1) where X_1 {(1, 0)}, there exists some X'_2 ∈_4(^⊕ 2_*̃∖ X_1) so that X'_2 ⊆ X_2. As in Case 2a, we show that (^⊕ 2_*̃; X_1, X'_2) is the zero-cone. Since _4(^⊕ 2_*̃∖{(1, 0)}) = {{(0, 1), (-1, 1), (1, 1), (-2, 1)}}, we again use our K-set algorithm in <ref> to obtain
(^⊕ 2_*̃; {(1, 0)}, {(0, 1), (-1, 1), (1, 1), (-2, 1)}) = Polyhedral Cone {(0, 0, 0)},
as required. After refining (T_1, P_1), our refinement diagram (15) now looks like this:
∅ <--(a+b, 0, a)-- T_0 --(0, a+b, b)--> ∅
                   | (1, 1, 1)
                   v
∅ <--(a+b, 0, a)-- T_1 --(0, a+b, b)--> ∅
                   | (1, 1, 1)
                   v
                  T_2.
Case 2c (Refining (T_2, P_2)). T_2 has covering parameter
P_2 = (({(1, 0)}, {(0, 1)}), ({(1, 0)}, {(0, 1)}), ({(1, 0)}, {(0, 1)})).
For any X_3 ∈_a+b(^⊕ 2_*̃∖ (X_1 X_2)) where X_1 {(1, 0)} and X_2 {(0, 1)}, there exists some X'_3 ∈_4(^⊕ 2_*̃∖ (X_1 X_2)) so that X'_3 ⊆ X_3. As in the previous two cases, we show that (^⊕ 2_*̃; X_1, X_2, X'_3) is the zero-cone. Since
_4(^⊕ 2_*̃∖{(1, 0), (0, 1)}) = {{(-1, 1), (1, 1), (-2, 1), (2, 1)}, {(-1, 1), (1, 1), (-2, 1), (-1, 2)}},
there are two choices of X'_3. If X'_3 = {(-1, 1), (1, 1), (-2, 1), (2, 1)}, then
(^⊕ 2_*̃; {(1, 0)}, {(0, 1)}, {(-1, 1), (1, 1), (-2, 1), (2, 1)}) = Polyhedral Cone {(0, 0, 0)},
and if X'_3 = {(-1, 1), (1, 1), (-2, 1), (-1, 2)}, then
(^⊕ 2_*̃; {(1, 0)}, {(0, 1)}, {(-1, 1), (1, 1), (-2, 1), (-1, 2)}) = Polyhedral Cone {(0, 0, 0)},
as required. After refining (T_2, P_2), the refinement diagram (15) is complete:
∅ <--(a+b, 0, a)-- T_0 --(0, a+b, b)--> ∅
                   | (1, 1, 1)
                   v
∅ <--(a+b, 0, a)-- T_1 --(0, a+b, b)--> ∅
                   | (1, 1, 1)
                   v
∅ <--(a+b, 0, a)-- T_2 --(0, a+b, b)--> ∅
                   | (1, 1, 1)
                   v
                  T_3 ⊆ Δ.
§ ACKNOWLEDGEMENTS
We would like to thank Sabrina Reguyal for translating Schiemann's original paper from German. This was invaluable for understanding Schiemann's complete perspective on these computations.
§ INDEX OF TERMINOLOGY AND NOTATION
Terminology Notation Defined in
Admissibility of Covering Parameter — <ref>
Covering — <ref>
Covering Parameter ((X_i)_i=1^k, (Y_i)_i=1^k, (Z_i)_i=1^k) <ref>
Covering Parameter Auxiliary Sets 𝒦_, , , 𝒬_, , <ref>
Diagonal Δ <ref>
Edges of 𝒱 _1, _2, _3 <ref>
Extended Refinement Algorithm — [algo:extended_refinement_algorithm]Algorithm 1
Extended Refinement Algorithm Sets 𝒮_i [algo:extended_refinement_algorithm]Algorithm 1
GL_2()-Reduced Positive-Definite Binary Quadratic Forms 𝒱 <ref>
Integral Form of Normalized Rational 3-Term Linear Relation — <ref>
-set (X; X_1, …, X_k) <ref>
Linset _a, b <ref>
Max Iterations 𝙼𝙰𝚇_𝙸𝚃𝙴𝚁𝙰𝚃𝙸𝙾𝙽𝚂 [algo:extended_refinement_algorithm]Algorithm 1
Minimal Subset (X) <ref>
n^th-order Minimal Subset _n(X) <ref>
Normalized Rational 3-Term Linear Relation — <ref>
Ordering Relation(s) ≼, ≽ <ref>
Polyhedral Cone (A, B) <ref>
Refinement of a Pair ℳ_a, b <ref>
Refinement of Set of Pairs 𝚛𝚎𝚏𝚒𝚗𝚎_a, b <ref>
Refinement Process — <ref>
Refinement Sequence (𝒯_i)_i=0^∞ <ref>
Representation Number r_Q(m) <ref>
Representation Set _Q(m) <ref>
Solution Set of <ref> 𝒟_a, b <ref>
Stop Set 𝚂𝚃𝙾𝙿_𝚂𝙴𝚃 [algo:extended_refinement_algorithm]Algorithm 1
Strongly Primitive Representation Number r^*̃_Q(m) <ref>
Strongly Primitive Representation Set ^*̃_Q(m) <ref>
Strongly Primitive Theta Series _Q^*̃(z) <ref>
Strongly Primitive Vectors ^⊕ 2_*̃ <ref>
Successive Minima Sequence — <ref>
Theta Series _Q(z) <ref>
-equivalence ∼_ <ref>
Rahul Saha, New York, NY, USA
E-mail address:
URL: rahulsaha.net
Jonathan Hanke, Princeton, NJ 08542, USA
E-mail address:
URL: jonhanke.com
§ APPENDIX
§.§ Example MIN Computation
Example 1. We will compute (^⊕ 2_*̃) using <ref>. First of all, note that for a = 1, (a, 1) ∈^⊕ 2_*̃. Then, we define W_0, 1 = {x ∈^⊕ 2_*̃ : x ⋡(1, 1)}∪{(a, 1)}. By the lemma, W_0, 1⊆{x ∈^⊕ 2_*̃ : ||x||_∞≤√(6)}, and the latter is a finite set. Then, iterating gives us W_0, 1 = {(-1, 1), (0, 1), (1, 0), (1, 1)}. We know from the lemma that (^⊕ 2_*̃) = (W_0, 1) = ({(-1, 1), (0, 1), (1, 0), (1, 1)}).
This is much better, because now we only have to compute the minimal subset of a finite set. For each element in this set, we can check using <ref> if any other element of the set precedes it. If not, then it is in the minimal subset. Doing this gets us (^⊕ 2_*̃) = {(1, 0)}.
Example 2. We will compute _n(^⊕ 2_*̃) for n = 6. We will do this inductively. We know from the previous example that
_1(^⊕ 2_*̃) = {{(1, 0)}}
Now, we can use <ref> to obtain
(^⊕ 2_*̃∖{(1, 0)}) = {(0, 1)}
and thus
_2(^⊕ 2_*̃) = {{(1, 0), (0, 1)}}
Similarly, since
(^⊕ 2_*̃∖{(1, 0), (0, 1)}) = {(-1, 1)}
we get
_3(^⊕ 2_*̃) = {{(1, 0), (0, 1), (-1, 1)}}
This continues for n = 4 and n = 5,
_4(^⊕ 2_*̃) = {{(1, 0), (0, 1), (-1, 1), (1, 1)}}
_5(^⊕ 2_*̃) = {{(1, 0), (0, 1), (-1, 1), (1, 1), (-2, 1)}}.
The next step, however, is a little different. This time, we see that there are two elements in the minimal subset.
(^⊕ 2_*̃∖{(1, 0), (0, 1), (-1, 1), (1, 1), (-2, 1)}) = {(2, 1), (-1, 2)}
This gives us two sets in _6,
_6(^⊕ 2_*̃) = {{(1, 0), (0, 1), (-1, 1), (1, 1), (-2, 1), (2, 1)}, {(1, 0), (0, 1), (-1, 1), (1, 1), (-2, 1), (-1, 2)}}.
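The inductive pattern in this example can be phrased generically. The sketch below (in Python) assumes a helper min_set that returns the minimal subset of a finite set with respect to the ordering ≼ (reducing to a finite set first, as in Example 1, is the caller's responsibility), and builds the n-th order minimal subsets one layer at a time by adjoining a single minimal vector, exactly as above.

def min_n(ground_set, n, min_set):
    """Return MIN_n(ground_set) as a set of frozensets.

    Each family in layer k+1 extends a family S from layer k by one
    vector of MIN(ground_set \\ S), matching the computation of MIN_6.
    """
    families = {frozenset()}              # MIN_0 = { {} }
    for _ in range(n):
        families = {S | {v}
                    for S in families
                    for v in min_set(ground_set - S)}
    return families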
§.§ Example K-set Computation
Below, we will adopt the convention to represent sets A and B while defining a polyhedral cone (A, B) as matrices. As an example, the set {(1, 2, 1), (-1, 1, 0)} becomes [ 1 2 1; -1 1 0; ]. Note that the ordering of the rows is irrelevant.
Example. We will compute (^⊕ 2_*̃; {(1, 0), (0, 1)}, {}, {(-1, 1)}). We will use <ref>. Suppose a quadratic form Q (Q_11, Q_22, Q_12) is in K(^⊕ 2_*̃; {(1, 0), (0, 1)}, {}, {(-1, 1)}). Then, for our lemma to work, we will start by ignoring the empty sets, and picking a representative element from each set. Suppose we choose (1, 0) and (-1, 1). Then, we will compute (^⊕ 2_*̃; (1, 0), (0, 1), (-1, 1)) using <ref>. Since (^⊕ 2_*̃∖{(1,0), (0,1), (-1, 1)}) = {(1, 1)}, we need
Q(1, 0) ≤ Q(0, 1) ≤ Q(-1, 1) ≤ Q(1, 1)
which gives
(^⊕ 2_*̃; (1, 0), (0, 1), (-1, 1)) = ([[ -1 1 0; 1 0 -1; 0 0 2 ]], [ ] )
The second part is to compute the equalities; in our case, we only care about the equality constraint in {(1, 0), (0, 1)}, since all the other sets have fewer than two elements and so the equality conditions hold vacuously. We want
Q(1, 0) = Q(0, 1)
Adding this condition to (^⊕ 2_*̃; (1, 0), (0, 1), (-1, 1)) gives us
(^⊕ 2_*̃; {(1, 0), (0, 1)}, {}, {(-1, 1)}) = ([[ -1 1 0; 1 0 -1; 0 0 2; 1 -1 0; -1 1 0 ]], [ ]).
§.§ Example Refinement Computation
Suppose (a, b) = (1, 2). Then _a, b = _1, 2 = {(1, 1, 1), (3, 0, 1), (0, 3, 2)}. Below, we will show two examples of refinement. For each polyhedral cone, we will only provide the face description using A and B; one can obtain the edge description by following any edge-computing algorithm.
Example 1. Suppose we start with a polyhedral cone P with the following description,
P ([ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 0 1; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 0 0 1 0 0 0; 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ], [ 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ])
with covering parameter ((), (), ()). Note that this implies that we have not refined this polyhedral cone yet. Indeed, the above polyhedral cone corresponds to 𝒱×𝒱×𝒱. Now, we will refine the polyhedral cone for each vector n⃗ in _1, 2.
n⃗ = (1, 1, 1). Then, we have
_1(^⊕ 2_*̃) = {{(1, 0)}}
_1(^⊕ 2_*̃) = {{(1, 0)}}
_1(^⊕ 2_*̃) = {{(1, 0)}}
Now, we will iterate over each triplet (𝒳, 𝒴, 𝒵) ∈{{(1, 0)}}×{{(1, 0)}}×{{(1, 0)}}. The only such triplet is (𝒳, 𝒴, 𝒵) = ({(1, 0)}, {(1, 0)}, {(1, 0)}). Then,
_,, = ([ -1 1 0 0 0 0 0 0 0; 1 -1 0 0 0 0 0 0 0; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 -1 0 0 0 0; -1 1 0 0 0 0 0 0 0; 1 -1 0 0 0 0 0 0 0; ], [ ]),
_,, = ([ 1 0 0 -1 0 0 0 0 0; -1 0 0 1 0 0 0 0 0; 1 0 0 0 0 0 -1 0 0; -1 0 0 0 0 0 1 0 0; ], [ ]).
Here, we obtained _,, by following the same steps as in <ref>. The _,, was obtained directly from the definition, since we want
Q_1({(1, 0)}) = Q_2({(1, 0)}) = Q_3({(1, 0)})
which gives us two independent linear equalities.
Finally, we get the refinement P' as follows,
P' = P _,,_,,
which obtains
P' = ([[ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 0 1; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 0 0 1 0 0 0; 1 0 0 0 0 0 -1 0 0; -1 0 0 0 0 0 1 0 0; 0 0 0 1 0 0 -1 0 0; 0 0 0 -1 0 0 1 0 0; 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]], [[ 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]]).
n⃗ = (3, 0, 1). Then, we have
_3(^⊕ 2_*̃) = {{(1, 0), (0, 1), (-1, 1)}}
_0(^⊕ 2_*̃) = {{}}
_1(^⊕ 2_*̃) = {{(1, 0)}}
Now, we will iterate over each triplet (𝒳, 𝒴, 𝒵) ∈{{(1, 0), (0, 1), (-1, 1)}}×{{}}×{{(1, 0)}}. The only such triplet is (𝒳, 𝒴, 𝒵) = ({(1, 0), (0, 1), (-1, 1)}, {}, {(1, 0)}). Then,
_,, = ([ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 2 0 0 0 0 0 0; 1 -1 0 0 0 0 0 0 0; -1 1 0 0 0 0 0 0 0; 0 -1 1 0 0 0 0 0 0; 0 1 -1 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; ], [ ] ),
_,, = ([ 1 0 0 0 0 0 -1 0 0; -1 0 0 0 0 0 1 0 0; 0 1 0 0 0 0 -1 0 0; 0 -1 0 0 0 0 1 0 0; 1 1 -1 0 0 0 -1 0 0; -1 -1 1 0 0 0 1 0 0 ] , [ ]).
Here, we can obtain _,, from the definition by noting that
Q_1({(1, 0), (0, 1), (-1, 1)}) = Q_3({(1, 0)})
which gives us 3 independent equalities. However, there is an optimization that we can make here. Note that since the K-set already enforces equalities on each of {(1, 0), (0, 1), (-1, 1)}, {(1, 0)}, {}, we can in fact get away with just one equality, given by Q_1((1, 0)) = Q_3((1, 0)). Thus, we can instead use
([ 1 0 0 0 0 0 -1 0 0; -1 0 0 0 0 0 1 0 0; ] , [ ])
as _,,. Indeed, for our implementation, that is what we do.
Finally, we get the refinement P' as follows,
P' = P _,,_,,,
which obtains
P' = ([[ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 0 1; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 0 0 1 0 0 0; -1 0 1 0 0 0 0 0 0; 1 0 0 0 0 0 -1 0 0; -1 0 0 0 0 0 1 0 0; -1 -1 1 0 0 0 1 0 0; 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]], [[ 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]] ).
n⃗ = (0, 3, 2). Then, we have
_0(^⊕ 2_*̃) = {{}}
_3(^⊕ 2_*̃) = {{(1, 0), (0, 1), (-1, 1)}}
_2(^⊕ 2_*̃) = {{(1, 0), (0, 1)}}
Now, we will iterate over each triplet (𝒳, 𝒴, 𝒵) ∈{{}}×{{(1, 0), (0, 1), (-1, 1)}}×{{(1, 0), (0, 1)}}. The only such triplet is (𝒳, 𝒴, 𝒵) = ({}, {(1, 0), (0, 1), (-1, 1)}, {(1, 0), (0, 1)}). Then,
_,, = ([[ 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 0 0 2 0 0 0; 0 0 0 1 -1 0 0 0 0; 0 0 0 -1 1 0 0 0 0; 0 0 0 0 -1 1 0 0 0; 0 0 0 0 1 -1 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 1 -1 0; 0 0 0 0 0 0 -1 1 0 ]], [ ] ),
and using the optimization described in the previous case, this time we will use a single equality to obtain
_,, = ([ 0 0 0 1 0 0 -1 0 0; 0 0 0 -1 0 0 1 0 0; ] , [ ]).
Here, we can obtain _,, using the equality Q_2({(1, 0)}) = Q_3({(1, 0)}).
Finally, we get the refinement P' as follows,
P' = P _,,_,,,
which obtains
P' = ([[ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 0 1; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 -1 0 1 0 0 0; 0 0 0 1 0 0 -1 0 0; 0 0 0 -1 0 0 1 0 0; 0 0 0 -1 -1 1 1 0 0; 0 0 0 1 0 0 0 -1 0; 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]], [[ 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]] ).
Thus, the refinement of P is given by the collection of all the different P' for different values of n⃗∈_1, 2.
Example 2. Suppose we start with a polyhedral cone P with the following description,
P = ([[ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 0 1; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 -1 0 1 0 0 0; 0 0 0 1 0 0 -1 0 0; 0 0 0 -1 0 0 1 0 0; 0 0 0 -1 -1 1 1 0 0; 0 0 0 1 0 0 0 -1 0; 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]], [[ 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]] )
with covering parameter (({}), ({(1, 0), (0, 1), (-1, 1)}), ({(1, 0), (0, 1)})). We will show an example of refining P for when n⃗ = (3, 0, 1). Then,
_3(^⊕ 2_*̃∖{}) = {{(1, 0), (0, 1), (-1, 1)}}
_0(^⊕ 2_*̃∖{(1, 0), (0, 1), (-1, 1)}) = {{}}
_1(^⊕ 2_*̃∖{(1, 0), (0, 1)}) = {{(-1, 1)}}
Now, we will iterate over each triplet (𝒳, 𝒴, 𝒵) ∈{{(1, 0), (0, 1), (-1, 1)}}×{{}}×{{(-1, 1)}}. The only such triplet is (𝒳, 𝒴, 𝒵) = ({(1, 0), (0, 1), (-1, 1)}, {} , {(-1, 1)}). Then,
_,, = ([[ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 2 0 0 0 0 0 0; 1 -1 0 0 0 0 0 0 0; -1 1 0 0 0 0 0 0 0; 0 -1 1 0 0 0 0 0 0; 0 1 -1 0 0 0 0 0 0; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 0 0 2 0 0 0; 0 0 0 1 -1 0 0 0 0; 0 0 0 -1 1 0 0 0 0; 0 0 0 0 -1 1 0 0 0; 0 0 0 0 1 -1 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 0 2; 0 0 0 0 0 0 1 -1 0; 0 0 0 0 0 0 -1 1 0 ]], [ ] ),
We will use the equality Q_1((1, 0)) = Q_3((-1, 1)) to obtain
_,, = ([ 1 0 0 0 0 0 -1 -1 1; -1 0 0 0 0 0 1 1 -1; ] , []).
Finally, we get the refinement P' as follows,
P' = P _,,_,,,
which obtains
P' = ([[ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 0 1; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 -1 0 1 0 0 0; 0 0 0 1 0 0 -1 0 0; 0 0 0 -1 0 0 1 0 0; 0 0 0 -1 -1 1 1 0 0; 0 0 0 1 0 0 0 -1 0; -1 0 1 0 0 0 0 0 0; 1 0 0 0 0 0 -1 -1 1; -1 0 0 0 0 0 1 1 -1; -1 -1 1 0 0 0 1 1 -1; 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]], [[ 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]] ).
§.§ T_0, T_1, T_2, T_3 and P_0, P_1, P_2, P_3
Here, we will give polyhedral cone descriptions of T_0, T_1, T_2, T_3 as described in <ref>.
T_0 = ([[ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 0 1; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 0 0 1 0 0 0; 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]], [[ 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]] ),
T_1 = ( [[ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 0 1; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 0 0 1 0 0 0; 1 0 0 0 0 0 -1 0 0; -1 0 0 0 0 0 1 0 0; 0 0 0 1 0 0 -1 0 0; 0 0 0 -1 0 0 1 0 0; 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]], [[ 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]] ),
T_2 = ( [[ -1 1 0 0 0 0 0 0 0; 1 0 -1 0 0 0 0 0 0; 0 0 1 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 0 0 0 0 0 1; 0 0 0 -1 1 0 0 0 0; 0 0 0 1 0 -1 0 0 0; 0 0 0 0 0 1 0 0 0; 1 0 0 0 0 0 -1 0 0; -1 0 0 0 0 0 1 0 0; 0 0 0 1 0 0 -1 0 0; 0 0 0 -1 0 0 1 0 0; 0 1 0 0 0 0 0 -1 0; 0 -1 0 0 0 0 0 1 0; 0 0 0 0 1 0 0 -1 0; 0 0 0 0 -1 0 0 1 0; 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]], [[ 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]]),
T_3 = ([[ -1 1 0 0 0 0 0 0 0; 0 0 0 0 0 0 -1 1 0; 0 0 0 0 0 0 1 0 -1; 0 0 0 -1 1 0 0 0 0; 0 0 0 0 0 1 0 0 0; 1 0 0 0 0 0 -1 0 0; -1 0 0 0 0 0 1 0 0; 0 0 0 1 0 0 -1 0 0; 0 0 0 -1 0 0 1 0 0; 0 1 0 0 0 0 0 -1 0; 0 -1 0 0 0 0 0 1 0; 0 0 0 0 1 0 0 -1 0; 0 0 0 0 -1 0 0 1 0; 1 1 -1 0 0 0 -1 -1 1; -1 -1 1 0 0 0 1 1 -1; 0 0 0 1 1 -1 -1 -1 1; 0 0 0 -1 -1 1 1 1 -1; 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]], [[ 1 0 0 0 0 0 0 0 0; 0 0 0 1 0 0 0 0 0; 0 0 0 0 0 0 1 0 0 ]] ).
The covering parameters P_0, P_1, P_2, P_3 as described in <ref> are given by
P_0 = ((), (), ()),
P_1 = (({(1, 0)}), ({(1, 0)}), ({(1, 0)})),
P_2 = (({(1, 0)}, {(0, 1)}), ({(1, 0)}, {(0, 1)}), ({(1, 0)}, {(0, 1)})),
P_3 = (({(1, 0)}, {(0, 1)}, {(-1, 1)}), ({(1, 0)}, {(0, 1)}, {(-1, 1)}), ({(1, 0)}, {(0, 1)}, {(-1, 1)})).
§.§ Explicit Indexing in <ref>
The indexing procedure from <ref> as described in <ref>, <ref>, and <ref> is explicitly illustrated as follows:
s⃗_ik_i, …, s⃗_i(k_i + L_1, i - 1)_L_1, i elements , …, s⃗_i(k_i + (c_1 - 1)L_1, i), … ,s⃗_i(k_i + c_1L_1, i - 1)_L_1, i elements _c_1 sets,
s⃗_i(k_i + c_1L_1, i), …, s⃗_i(k_i + c_1L_1, i + L_2, i - 1)_L_2, i elements , …, s⃗_i(k_i + c_1L_1, i + (c_2 - 1)L_2 , i), … ,s⃗_i(k_i + c_1L_i , 1 + c_2L_2, i - 1)_L_2, i elements _c_2 sets,
s⃗_i(k_i + c_1L_1, i + c_2L_2, i), …, s⃗_i(k_i + c_1L_1, i + c_2L_2, i + L_3, i - 1)_L_3, i elements , …, s⃗_i(k_i + c_1L_1, i + c_2L_2, i + (c_3 - 1)L_3, i), … ,s⃗_i(k_i + c_1L_1, i + c_2L_2, i + c_3L_3, i - 1)_L_3, i elements _c_3 sets.
|
http://arxiv.org/abs/2307.03678v1
|
20230705035008
|
Evaluating the Effectiveness of Large Language Models in Representing Textual Descriptions of Geometry and Spatial Relations
|
[
"Yuhan Ji",
"Song Gao"
] |
cs.CL
|
[
"cs.CL",
"cs.AI",
"cs.LG",
"I.2"
] |
Evaluating the Effectiveness of Large Language Models in Representing Textual Descriptions of Geometry and Spatial Relations
Yuhan Ji, Song Gao
==================================================================================================================================================================================
(A preprint and the final version will be available in the Proceedings of the 12th International Conference on Geographic Information Science (GIScience 2023), <https://www.giscience.org/>.)

This research focuses on assessing the ability of large language models (LLMs) to represent geometries and their spatial relations. We utilize LLMs including GPT-2 and BERT to encode the well-known text (WKT) format of geometries and then feed their embeddings into classifiers and regressors to evaluate the effectiveness of the LLMs-generated embeddings for geometric attributes. The experiments demonstrate that while the LLMs-generated embeddings can preserve geometry types and capture some spatial relations (up to 73% accuracy), challenges remain in estimating numeric values and retrieving spatially related objects. This research highlights the need for improvement in terms of capturing the nuances and complexities of the underlying geospatial data and integrating domain knowledge to support various GeoAI applications using foundation models.
§ INTRODUCTION
Deep learning methods have exhibited great performance to tackle many challenging tasks in geographical sciences <cit.>. However, the models often depend on handcrafted features for specific downstream tasks, thus being hard to be generalized into different tasks. The emergence of representation learning largely mitigated the issue by decomposing the learning process into two steps (task-agnostic data representation and downstream task) <cit.>.
Therefore, an effective location-based representation should preserve key spatial information (e.g., distance, direction, and spatial relations) and make classifiers or other predictors easy to extract useful knowledge <cit.>. In geospatial artificial intelligence (GeoAI) research, although the geospatial data are usually well-formatted and can be readily understood by GIS software, not all of them can be directly integrated into a deep learning model.
The success of ChatGPT has been a milestone that attracted the general public's attention to Large Language Models (LLMs). With enormous numbers of parameters trained on large text corpora, LLMs have learned profound knowledge across many domains. Other well-known LLMs include the Bidirectional Encoder Representations from Transformers (BERT) <cit.>, the Generative Pre-trained Transformer (GPT) series <cit.>, etc.
Despite the differences in network architectures, these LLMs can achieve state-of-the-art performance on natural language processing (NLP) benchmarks.
Consequently, researchers have begun the early exploration of integrating LLMs into GIS research, such as geospatial semantic tasks <cit.> and automating spatial analysis workflows <cit.>. These studies have demonstrated the ability of LLMs to understand and reason about geospatial phenomena from a semantic perspective as learned from human discourse or formalized programming instructions. In contrast, accurate geometries and spatial relations in GIS are not necessarily expressed in natural languages. Therefore, it can be challenging for LLMs to reconstruct the physical world solely from the textual description of these building blocks, which is the motivation of this research.
In GIScience, spatial relations refer to the connections between spatial objects regarding their geometric properties <cit.>, which play an important role in spatial query, reasoning, and question answering. Using natural language to describe spatial relations is essential for humans to perceive our surroundings and navigate through space. Attempts have been made to formalize the conversion between quantitative models and qualitative human discourse <cit.>. For topological spatial relations, the RCC-8 (region connection calculus <cit.>) and the Dimensionally Extended 9-intersection (DE-9IM) model <cit.> are widely used.
Based on the DE-9IM model, five predicates are named by <cit.> for complex geometries, including crosses, disjoint, touches, overlaps, within. On top of them, the Open Geospatial Consortium (OGC) further added the predicates equals, contains, intersects for computation convenience. In addition, predicates can also be used to describe the distance or direction between a subject and an object. Fuzzy logic can also be adopted to convert precise metrics into narrative predicates such as near and far <cit.>.
However, there remains a gap between the contextual semantics of predicates in everyday language and the abovementioned formalization procedures, yielding disagreement and vagueness in the understanding. It is yet to be determined whether the LLMs can fully capture how people describe spatial objects with predicates in natural language. If so, how we can leverage such knowledge to represent geospatial contexts with LLMs.
§ METHODOLOGY
§.§ Workflow
This research focuses on assessing the ability of LLMs to represent geometries and their spatial relations through a set of downstream tasks. Figure <ref> illustrates the workflow we employed, which consists of three primary modules. The first module utilizes a GIS tool to extract the attributes, such as geometry type, centroid, and area, of individual geometries and their spatial relations, including predicates and distances between pairs of geometries. The second module applies LLMs to encode the well-known text (WKT) format of geometries, e.g., LINESTRING (30 10, 10 30, 40 40), which includes the geometry type and the ordered coordinates; the map projection is not considered in this work.
Finally, the obtained embeddings from LLMs, along with the ground-truth attributes or spatial relations, are fed into classifiers or regressors to evaluate the effectiveness of the LLMs-based embeddings.
§.§ Notation
The notations used in this paper are listed in Table <ref>.
§.§ Evaluation Tasks
The downstream tasks are designed for deriving the geometric attributes or identifying spatial relations, as described in Table <ref>. The targets of Tasks 1-5 are straightforward, that is, to train a neural network classification/regression model that can best approximate the ground-truth values computed from a GIS tool. All of these tasks use a Multilayer Perceptron (MLP) as the classifier or regressor.
Task 6 aims to investigate whether a geometry g_i can be predicted based on its neighbor g_j and their spatial relation Rel(g_i, g_j).
We employ the nearest neighbor retrieval approach to evaluate whether LLMs have learned the meaning of spatial predicates properly. During inference, given an object g_j and a spatial relation rel, we retrieve the top-k nearest neighbors of Enc(rel, g_j) and examined whether they belong to the set of subjects {g_i| Rel(g_i, g_j)=rel}. This approach assesses the ability of the LLMs to relate geographic objects through spatial predicates.
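As a hedged illustration of this retrieval step (the cosine-similarity metric and the names below are our assumptions; the text does not pin down the exact distance function), the ranking might look like:

import torch
import torch.nn.functional as F

def top_k_subjects(query_vec, subject_vecs, k=5):
    """Rank candidate subject embeddings against Enc(rel, g_j).

    query_vec: (d,) embedding of the string "<predicate> <object WKT>".
    subject_vecs: (N, d) embeddings of candidate subject geometries.
    Returns indices of the k most similar candidates.
    """
    sims = F.cosine_similarity(query_vec.unsqueeze(0), subject_vecs, dim=1)
    return sims.topk(min(k, subject_vecs.shape[0])).indices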
§ EXPERIMENTS
§.§ Dataset and Preprocessing
Since there is no available benchmark dataset, we constructed real-world multi-sourced geospatial datasets for our case study in Madison, Wisconsin, United States. We downloaded the OpenStreetMap road network data (including links and intersections) using OSMnx [http://osmnx.readthedocs.io/], points of interest (POIs) categorized by SLIPO [http://slipo.eu/], and Microsoft Building Footprints [http://www.microsoft.com/maps/building-footprints]. Our evaluation tasks focus on the spatial objects with Point, LineString, and Polygon geometry types and assessing their spatial relations, respectively. The datasets are created as follows.
1) For each geometry type, we randomly select 4,000 samples, including 2,000 road intersections and 2,000 POIs for Point data, 4,000 road links for LineString data, and 4,000 building footprints for Polygon data. In total 12,000 samples are used for performing the downstream tasks. The area and centroid of each polygon are also computed.
2) For the spatial predicate disjoint, we randomly generate pairs of geometries and check whether their spatial relation is disjoint. For other predicates, we identify spatially related objects using spatial join. Given each combination of subject/object geometry type and their spatial predicate, we keep 400 triplets (subject, predicate, object) for each category for the task of predicate prediction and distance measure. Then we compute the minimum distance between the subjects and the objects.
3) We further construct data for the task of location prediction. In addition to the subjects and objects that are spatially joined in step 2), we also relate neighboring disjoint geometries using a buffer radius of 0.003°. The predicate of “disjoint” is replaced by “disjoint but near”. For each predicate except disjoint, we select 200 objects of each geometry type that are related to more than 5 subjects by the same predicate.
All the computations are performed by using the GeoPandas package in Python. We consider the predicates of crosses, disjoint (but near), touches, overlaps, within, equals, contains in this work but not
intersects as it is the opposite of disjoint. The data for downstream tasks are further split into 80% training, 5% validation, and 15% test sets.
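As a rough sketch of step 2) above (file names are hypothetical; GeoPandas' sjoin supports the DE-9IM predicates used here, while equals pairs would need a separate geometry-equality check):

import geopandas as gpd

# hypothetical input layers; any point/line/polygon GeoDataFrames work
subjects = gpd.read_file("subjects.geojson")
objects_ = gpd.read_file("objects.geojson")

triplets = []
for pred in ["crosses", "touches", "overlaps", "within", "contains"]:
    joined = gpd.sjoin(subjects, objects_, how="inner", predicate=pred)
    for subj_idx, row in joined.iterrows():
        triplets.append((subj_idx, pred, row["index_right"]))

# minimum distance between one related pair, as used for Task 5
d = subjects.geometry.iloc[0].distance(objects_.geometry.iloc[0])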
§.§ Encoding
In this work, we perform the evaluation tasks based on two LLMs: GPT-2 and BERT. Due to the computational and memory resources required to train and use the models, GPT-2 and BERT have a maximum input sequence length (i.e., 1024 and 512 tokens respectively). Therefore, a sliding window approach is employed to tackle the issue as the WKT of LineString and Polygon types can exceed the length limitation. The long input sequences are broken down into smaller segments of 512 tokens with an overlap of 256 tokens between adjacent segments. Each segment is processed by the LLMs separately. We then take the average of the token embeddings to generate the final embedding for the whole sequence of geometries.
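A minimal sketch of this encoding step (one straightforward reading of the description, assuming the HuggingFace transformers API; the model choice and variable names are illustrative):

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

def encode_wkt(wkt, window=512, overlap=256):
    """Embed one WKT string: split into overlapping token windows, run
    each window through the model, then average all token embeddings."""
    ids = tokenizer(wkt, add_special_tokens=False)["input_ids"]
    step = window - overlap
    chunks = [ids[i:i + window] for i in range(0, max(len(ids), 1), step)]
    token_embs = []
    with torch.no_grad():
        for chunk in chunks:
            out = model(input_ids=torch.tensor([chunk]))
            token_embs.append(out.last_hidden_state.squeeze(0))
    return torch.cat(token_embs).mean(dim=0)  # one vector per geometry

vec = encode_wkt("LINESTRING (30 10, 10 30, 40 40)")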
§.§ Training MLPs
As we hypothesize that the learned embeddings from LLMs can be effectively utilized in downstream geometry-related tasks, we use a simple neural network architecture (i.e., MLP) across all tasks. Specifically, the input layer of the MLP is the embedding layer generated from LLMs, followed by a dropout layer for regularization purposes. Following the dropout layer is a single hidden layer, which employs the Rectified Linear Unit (ReLU) activation function. Finally, the MLP is concluded with the output linear layer. The number of neurons in the output layer varies depending on the specific task.
To facilitate the training process, we apply a logarithmic function to the target values for the area computation and distance measure tasks. In the centroid derivation task, we use the min-max normalization for the target values. The loss function combines the Mean Squared Error (MSE) on both the transformed and original scales. However, for reporting the performance, we only use the original scale of the target values.
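In code, the regressor and the combined loss just described might look like the sketch below (layer sizes and the dropout rate are illustrative assumptions; the loss is one literal reading of combining MSE on the transformed and original scales, for strictly positive targets such as areas and distances):

import torch
import torch.nn as nn
import torch.nn.functional as F

class MLP(nn.Module):
    def __init__(self, d_in=768, d_hidden=256, d_out=1, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Dropout(p_drop),          # regularization on the embedding
            nn.Linear(d_in, d_hidden),   # single hidden layer
            nn.ReLU(),
            nn.Linear(d_hidden, d_out),  # output size depends on the task
        )

    def forward(self, x):
        return self.net(x)

def regression_loss(pred_log, target):
    # MSE on the log-transformed scale plus MSE on the original scale
    return (F.mse_loss(pred_log, torch.log(target))
            + F.mse_loss(torch.exp(pred_log), target))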
§.§ Results
As shown in Table <ref>, the performance of the downstream tasks based on the embeddings generated by GPT-2 and BERT are similar, which can be understood from the similarity in their subword tokenization and transformer-based architecture.
For T1-T3, the assessment is conducted on individual geometries. The 100% accuracy achieved on both the validation and the test dataset of T1 is expected, as the geometry types are words that often occur in text documents. Considering the unit of degree in longitude and latitude, significant errors (measured by Mean Absolute Percentage Error (MAPE) and Root Mean Square Error (RMSE)) are observed in area and centroid computations, and increasing or reducing the model complexity does not alleviate the issue, suggesting a potential loss of information when averaging the token embeddings or fragmentation of coordinates during tokenization. Training the regressor on all geometries for T2 does not successfully capture that Point and LineString geometries have an area of 0. Even when training the regressor on Polygons separately, the results remain unsatisfactory. In T3, the centroids computed from the high-dimensional embeddings often fall outside the study area.
T4-T6 evaluate the embeddings' ability to capture spatial relations. One interesting finding is that the spatial predicate can be better predicted when combined with the geometry type, with accuracy increasing from 62%-68% to 71%-73%. This can be attributed to the imbalanced spatial relations among different combinations of geometry types. However, the distance measure task T5 still faces challenges in accurately estimating numeric values even when restricted to the “disjoint” relation only. The poor performance on T6 shows that even though the LLMs can encode the spatial relations and geometries in a consistent way, generating embeddings using an average approach alone is insufficient to support spatial reasoning and conduct geometric manipulations directly. Therefore, a different design to enhance the function of localizing spatial objects from textual descriptions <cit.> can improve the applications of LLMs in GeoAI.
Overall, the results indicate that the LLMs-generated embeddings have encoded the geometry types and coordinates present in the WKT format of geometries.
However, it should be noted that the performance of the embeddings does not consistently meet expectations across all evaluation tasks. While the LLMs-generated embeddings can preserve geometry types and capture some spatial relations, challenges remain in estimating numeric values and retrieving spatially related objects due to the loss of magnitude during tokenization <cit.>. Despite the possibility of ameliorating the issue by modifying notations or applying chain-of-thought prompting <cit.>,
this research highlights the need for improvement in terms of capturing the nuances and complexities of the underlying geospatial data and integrating domain knowledge to support various GeoAI applications using LLMs.
|
http://arxiv.org/abs/2307.00454v1
|
20230702022715
|
Structural, vibrational and electronic properties of Nb substituted orthovanadates LaV$_{1-x}$Nb$_x$O$_4$
|
[
"Ashok Kumar",
"Anurag Sharma",
"Madhav Sharma",
"Vinod Singh",
"Anita Dhaka",
"Rajendra S. Dhaka"
] |
cond-mat.mtrl-sci
|
[
"cond-mat.mtrl-sci",
"cond-mat.str-el"
] |
Department of Applied Physics, Delhi Technological University, Delhi-110042, India
Department of Physics, Atma Ram Sanatan Dharma College, University of Delhi, New Delhi-110021, India
Department of Physics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110016, India
Department of Physics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110016, India
Department of Applied Physics, Delhi Technological University, Delhi-110042, India
Department of Physics, Hindu College, University of Delhi, New Delhi-110007, India
rsdhaka@physics.iitd.ac.in
Department of Physics, Indian Institute of Technology Delhi, Hauz Khas, New Delhi-110016, India
We investigate the structural, vibrational, morphological, and electronic properties of Nb substituted orthovanadate LaV_1-xNb_xO_4 samples prepared by the solid-state reaction method. The x-ray diffraction (XRD) analysis reveals the presence of three crystal structures [monoclinic monazite (m-m) type for the x= 0, two-phase equilibrium of monoclinic monazite (m-m) and tetragonal scheelite (t-s) type for the 0.2≤x≤0.8, and monoclinic fergusonite (m-f) type for the x= 1 samples] with an increase in Nb^5+ concentration. Raman spectroscopy and x-ray photoelectron spectroscopy (XPS) were employed to study the vibrational and electronic properties of all the samples, respectively. In order to choose an excitation wavelength that does not cause undesirable fluorescence and has observable intensities of all the vibrational modes, the Raman spectra are collected using 532 nm, 633 nm, and 785 nm laser lines. With increasing Nb^5+ concentration, new Raman modes associated with Nb bonds become clearly visible while the intensity of the modes assigned to V bonds decreases. The XPS analysis shows the unchanged 3+ oxidation state of the La ion, where the intensity of the V 2p core-level decreases while that of the Nb 3d core-level increases with x. The equal spin-orbit energy splitting of the states is confirmed by the average energy difference (across the La core-level spectra for all the samples) for state I as well as for the bonding and anti-bonding of state II. Interestingly, the relative intensity of the La 3d state I and state II shows a systematic change with Nb doping, altering the metal-ligand overlap. We discuss and provide insight into the evolution of the structural, morphological, and chemical features with Nb substitution in LaV_1-xNb_xO_4 samples.
Structural, vibrational and electronic properties of Nb substituted orthovanadates LaV_1-xNb_xO_4
Rajendra S. Dhaka
August 1, 2023
=================================================================================================
§ INTRODUCTION
In various polycrystalline oxides, rare-earth orthovanadates (RVO_4; R = rare-earth element) are interesting because of their potential applications in catalysis, polarizers, luminescent materials, and laser host materials <cit.>. Also, researchers have reported that complex oxide materials show interesting structural, magnetic and electronic properties <cit.>, and may be utilized for various applications such as solid oxide fuel cells and as an electrode material for Lithium-ion batteries because of their high specific capacity and cycle stability <cit.>. It is interesting to note that the lanthanum-based orthovanadate LaVO_4 illustrates the structural trend in the rare-earth family: it crystallizes in the tetragonal–zircon (t-z) type polymorph with space group I4_1/amd and the monoclinic–monazite (m-m) type polymorph with space group P2_1/n. However, it thermally stabilizes in the m-m type, whereas the t-z structure remains in a metastable state at room temperature; because La^3+ has the largest ionic radius in the lanthanide series, it has a higher oxygen coordination number (9) in the m-m type structure as compared to 8 in the t-z type <cit.>. The zircon structure contains a pattern of VO_4 tetrahedra (having four identical V-O bonds) <cit.> and RO_8 dodecahedra (coordination no. 8), sharing their edges alternately and linked together in chains along the c-axis. In the monazite structure, deformed VO_4 tetrahedra with four different V-O bonds <cit.> are connected to RO_9 polyhedra (coordination no. 9), sharing their edges. The zircon-type LaVO_4 sample is difficult to prepare at ambient conditions by the conventional solid-state reaction method, but a few reports say that it can be synthesized and stabilized by hydrothermal and precipitation methods <cit.>.
The structural and electronic properties of lanthanum orthovanadate with pentavalent niobium substitution are vital to understand for its practical use. Though the parent compound LaVO_4 with substitution at the La site has been extensively explored <cit.>, there are very few studies on the effect of substitution at the V site <cit.>. Niobium is located just below vanadium in the periodic table and offers several advantages: vanadium prices have recently risen by about 300%, whereas niobium (Nb^5+) is biocompatible, isoelectronic with the vanadium ion, and has a larger ionic radius (0.48 Å) for the coordination number four in comparison to the vanadium ion (0.36 Å) <cit.>. LaNbO_4 is a rare-earth niobate and shows a well-known temperature- and composition/substitution-induced structural transformation. For example, LaNbO_4 undergoes a thermally induced structural transition from the monoclinic fergusonite (m-f, space group I2/a) to the tetragonal scheelite (t-s, space group I4_1/a) phase at ∼495C <cit.>. Similarly, it undergoes a structural transformation on substituting Nb^5+ at the V^5+ site <cit.>. It has been reported that lanthanum niobate shows interesting properties and is very useful for technological applications, such as proton conductivity <cit.>, good dielectric behavior, high-energy emission under X-ray excitation <cit.>, and its potential for applications in a variety of fields, including sensors <cit.>, contrast agents, waveguides, ferroelectrics <cit.>, phosphors <cit.>, laser crystals <cit.>, luminophores, LEDs <cit.>, etc.
In this paper, we study the structural, vibrational, morphological, and electronic properties of LaV_1-xNb_xO_4 using various experimental tools such as x-ray powder diffraction (XRD), scanning electron microscopy (SEM), high-resolution transmission electron microscopy (HR-TEM), selected area electron diffraction (SAED), Raman spectroscopy, and x-ray photoelectron spectroscopy (XPS). We determine the phase purity and structural transition by performing the Rietveld refinement of the measured XRD patterns at room temperature. The Raman spectra of the LaV_1-xNb_xO_4 samples are measured with different excitation wavelengths of 532 nm, 633 nm, and 785 nm, where we find significant intensity of all the Raman-active modes as well as interesting changes with Nb substitution. The Raman spectra exhibit a pattern of maximum-intensity peaks that is compatible with Badger's rule. The structural phase transition observed in the XRD analysis of LaV_1-xNb_xO_4 is also supported by the intensity variation of the Raman modes observed in the samples with increasing Nb concentration. Through the SEM micrographs, we identify that the samples contain fine particles along with pores; changes in particle size and shape can also be seen in the surface images of the samples. The core-level photoemission reveals the oxidation state and electronic structure of the constituent elements in these samples. The intensity of the core-level spectra of all the samples varied systematically with an increase in Nb^5+ concentration, as shown by the XPS analysis. The average energy difference (for the La core-level spectra of all the samples) for state I, state II bonding, and state II anti-bonding verified the equal spin-orbit energy splitting of the states. Moreover, we find a systematic change in the relative intensity of the La 3d state I and state II with Nb doping, which suggests an alteration of the metal-ligand overlap.
§ EXPERIMENTAL
We use the solid-state reaction method to prepare LaV_1-xNb_xO_4 (x= 0 to 1) samples by mixing V_2O_5 (99.6%, Sigma), Nb_2O_5 (99.99%, Sigma), and La_2O_3 (99.99%, Sigma) as precursors in stoichiometric proportions. The La_2O_3 was pre-dried for 6 hrs at 900C to remove moisture. After that, the mixture was ground evenly for 8 hours and then heated for 17 hrs at 1000C. The mixture was then reground and sintered at 1250C for 13 hrs to improve the crystallinity of the samples. The phase purity and structural parameters of LaV_1-xNb_xO_4 were determined at room temperature using a Panalytical XPert^3 powder x-ray diffractometer with a Cu Kα radiation source (λ = 1.5406 Å). We use a step size of 0.033 for each XRD scan taken in the 2θ range from 10 to 90. The lattice parameters are extracted by the Rietveld refinement of the XRD patterns using the FullProf software, where linear interpolation is used to fit the background. We use a Jeol JSM-7800F Prime field-emission scanning electron microscope (FE-SEM) with an LN_2-free SDD X-max 80 EDS detector in high-vacuum mode to produce the scanning electron microscopy (SEM) micrographs of the materials' surfaces. The analysis of particle size and change in morphology of LaV_1-xNb_xO_4 was done using the ImageJ software by analyzing SEM micrographs of the surface of the pellet samples. In order to perform FE-SEM, the non-conducting LaV_1-xNb_xO_4 pellets were made conducting by coating the surface with a thin layer of Au using a sputter coater. We use the JEOL/JEM-F200 microscope, equipped with thermal field emission and a OneView CMOS camera (4k × 4k pixels), to collect HR-TEM data by operating the system at an acceleration voltage of 200 keV.
The Raman spectra were recorded at room temperature with a Renishaw inVia confocal Raman microscope using a 2400 lines/mm grating, a 10X objective, and three different wavelengths: (i) a 532 nm gas laser with a power of 1 mW, (ii) a 633 nm semiconductor diode laser with a power of 1 mW, and (iii) a 785 nm semiconductor diode laser with a power of 0.1 mW. The samples can be identified by their particular Raman fingerprint, and their structural and chemical information can be extracted through examination of the various Raman active modes of LaV_1-xNb_xO_4. The x-ray photoelectron spectroscopy (XPS) measurements were done using an AXIS Supra instrument (Kratos Analytical Ltd). The survey spectra and core-level spectra (La 3d, Nb 3d, V 2p, and O 1s for each sample) were recorded at room temperature using a monochromatic Al Kα x-ray source (1486.6 eV; step size 1 eV for the survey and 0.1 eV for the core-level spectra), with a charge neutralizer used to offset the charging effects in these insulating materials. The pass energy of the analyzer was 160 eV and 20 eV for the survey and core-level spectra, respectively. For all the wide scans and core-level spectra, the C 1s peak was fitted to obtain the peak binding energy (BE), and the calibration for charge correction was done using the C 1s BE reference at 284.6 eV for each sample. We utilize the Igor Pro 9 software to analyze the observed Raman spectra, fitting the modes with a Lorentzian peak function, and to fit the XPS spectra using the Voigt function.
§ RESULTS AND DISCUSSION
The Rietveld refined room-temperature x-ray diffraction (XRD) patterns of the polycrystalline LaV_1-xNb_xO_4 (x= 0–1) samples are displayed in Fig. <ref> and the lattice parameters of the samples are summarised in Table <ref>, where we can see that the angle β increases in the m-m type phase of the LaV_1-xNb_xO_4 samples with Nb^5+ substitution due to the larger ionic radius of Nb^5+ as compared to V^5+. The crystallization of LaV_1-xNb_xO_4 is clearly observed in three different phases depending on the substitution of Nb^5+ at the V^5+ site, as also reported by Aldred et al. in ref. <cit.>. We observe that the structure changes from m-m to m-f as the Nb^5+ concentration increases from 0 to 100%. For x= 0 and 1, a pure monoclinic phase is obtained with no impurity peaks. Between x= 0.2 and 0.8, a monoclinic monazite (m-m) and a tetragonal scheelite (t-s) type phase coexist. Moreover, all the Bragg reflections of LaVO_4 and LaNbO_4 can easily be indexed to the m-m and m-f phases with the space groups P2_1/n and I2/a for the x= 0 and 1 samples, respectively. We find that the contribution of the space group I4_1/a increases from the x= 0.2 to 0.8 samples (see Table <ref>) due to the growth of the t-s phase with the substitution of Nb^5+ at the V^5+ site in the LaV_1-xNb_xO_4 samples. It can therefore clearly be seen that the LaV_1-xNb_xO_4 samples crystallize in the monoclinic monazite (m-m) type (x= 0), a coexistence of monoclinic monazite (m-m) and tetragonal scheelite (t-s) type (0.2≤x≤0.8), and the monoclinic fergusonite (m-f) type (x= 1) <cit.>.
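For completeness, the interplanar spacings used in this indexing follow from the standard plane-spacing relations (textbook crystallography, not specific to this work): for the monoclinic cells (unique axis b),
\[
\frac{1}{d_{hkl}^{2}}=\frac{1}{\sin^{2}\beta}\left(\frac{h^{2}}{a^{2}}+\frac{k^{2}\sin^{2}\beta}{b^{2}}+\frac{l^{2}}{c^{2}}-\frac{2hl\cos\beta}{ac}\right),
\]
and for the tetragonal t-s cell,
\[
\frac{1}{d_{hkl}^{2}}=\frac{h^{2}+k^{2}}{a^{2}}+\frac{l^{2}}{c^{2}}.
\]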
Moreover, for the x= 0 sample, the m-m type crystal structure shows high-intensity diffraction peaks corresponding to the (200) and (120) crystal planes at 26.17° and 27.78°, respectively. The t-s type structure contains a peak corresponding to the (112) plane at 28.08°, and the m-f type structure shows high-intensity peaks for the (-121) and (121) planes at 27.5° and 28.9°, respectively. In the measured XRD patterns of the x= 0.2 to 0.8 samples, the diffraction peaks of the (200), (120), and (112) planes are all present, which clearly indicates the coexistence of the m-m and t-s type structures. The presence of the (110) plane at 17.65° for the x= 0.2 and 0.4 samples is due to the dominance of the m-m type structure in LaV_1-xNb_xO_4. The (200) and (120) peaks are also present in these samples; however, their intensity decreases at higher Nb substitution and becomes negligible for the x≥ 0.6 samples. As the Nb^5+ concentration exceeds the V^5+ concentration, the t-s type structure dominates, which results in the reduction/absence of the diffraction peaks corresponding to the (200) and (120) planes. The variation in the peak intensities of the (200) and (120) crystal planes and the presence of the (112) plane indicate the coexistence of the t-s and m-m type structures for the x= 0.2 to 0.8 samples. This also validates that the m-m type structure (P2_1/n) diminishes and the t-s type structure (I4_1/a) grows with increasing Nb concentration, i.e., from the x= 0.2 to 0.8 samples. The phase percentages determined by Rietveld refinement of the XRD data are presented in Table <ref>. For the x= 1 sample, the presence of the (-121) and (121) peaks further confirms the m-f type structure of LaNbO_4, consistent with the literature <cit.>.
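As a quick plausibility check on this indexing, Bragg's law λ = 2d sinθ converts the quoted 2θ positions into interplanar spacings. A minimal sketch (peak list transcribed from the text; λ from the Experimental section):

```python
import math

WAVELENGTH = 1.5406  # Cu K-alpha, in angstroms (from the Experimental section)

# 2-theta positions (degrees) quoted in the text, with their assigned planes
peaks = {
    "(200) m-m": 26.17,
    "(120) m-m": 27.78,
    "(112) t-s": 28.08,
    "(110) m-m": 17.65,
}

for plane, two_theta in peaks.items():
    theta = math.radians(two_theta / 2.0)
    d = WAVELENGTH / (2.0 * math.sin(theta))  # Bragg's law
    print(f"{plane}: 2theta = {two_theta:5.2f} deg -> d = {d:.3f} A")
```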
Note that pure m-m and m-f phases are observed for the x= 0 and 1 samples, respectively. However, for the x= 0.2–0.8 samples, the monoclinic and scheelite-tetragonal phases coexist in a certain ratio. These results reveal that the LaV_1-xNb_xO_4 samples pass through three phases with increased substitution of Nb^5+ at the V^5+ site: monoclinic monazite (m-m) type (x= 0), a two-phase equilibrium of monoclinic monazite (m-m) and tetragonal scheelite (t-s) type (0.2≤x≤0.8), and monoclinic fergusonite (m-f) type (x= 1). It is quite interesting to note that a small amount of Nb^5+ substitution can transform LaVO_4 from the m-m phase to a mixture of m-m and t-s phases. It has also been observed that LaNbO_4 shows a structural transition from a monoclinic to a tetragonal phase at ∼495 °C. This structural transformation is very important in governing the protonic conductivity of LaNbO_4 <cit.>. For some compositions of LaV_1-xNb_xO_4 this transition temperature shifts close to room temperature. The reported temperature-dependent XRD measurements suggest that the x= 0.75 sample (25% substitution of V^5+ at the Nb^5+ sites in LaNbO_4) <cit.> possesses a tetragonal structure at room temperature, as its transition temperature is 250 K. The XRD pattern below 250 K shows some residual intensity (broadened lines) of the tetragonal structure because of precursor effects. Similarly, we see broad peaks in the XRD patterns of the x= 0.8 sample due to the above-mentioned effect <cit.>. As we increase the Nb concentration, we find some new peaks appearing in the x= 0.2 sample at 33.56°, 52.68°, 56.69°, and 58.06°. All these peaks are signatures of the t-s structure, belonging to the (020), (116), (312), and (224) planes, respectively <cit.>. These peaks persist up to the x= 0.8 sample, which confirms the presence of some t-s phase and also indicates the substitution-induced phase transformation. This is an important finding: LaNbO_4 can possess a tetragonal structure at room temperature upon replacing just 20% of the Nb^5+ sites with V^5+. This result opens the possibility of a wide range of room-temperature applications of LaNbO_4. All the patterns discussed above suggest that the substitution of the larger Nb^5+ (r= 0.48 Å) ion for V^5+ (r= 0.36 Å) affects the lattice constants of LaV_1-xNb_xO_4 and confirms the transformation across three different phases with increasing Nb^5+ concentration.
The scanning electron microscope images of LaV_1-xNb_xO_4 for x= 0–1 are shown in Fig. <ref>, which depict a close-packed surface morphology in all the samples; some variation in the particle size is clearly visible, and pores can be seen in the top view of the surface. With increasing Nb^5+ concentration, the particle size decreases slightly from the x= 0 to the x= 0.4 sample, then increases to a maximum at x= 0.8, and decreases again for the x= 1 sample.
The average particle size (D) of LaV_1-xNb_xO_4 is 5.14 μm for the x= 0, 4.22 μm for the x= 0.2, 3.56 μm for the x= 0.4, 8.73 μm for the x= 0.6, 11.31 μm for the x= 0.8, and 5.70 μm for the x= 1 samples. The change in crystal surface morphology of the LaV_1-xNb_xO_4 samples with increasing Nb^5+ concentration thus causes the variation in particle size and shape.
Further, in Figs. <ref>(a, b) we display the HR-TEM images indicating distinct sets of planes with characteristic spacings for the x= 0.2 and 0.8 samples. The images in Figs. <ref>(c, d) and (e, f), for the x= 0.2 and x= 0.8 samples respectively, show these plane sets in magnified view. The spacing between the planes is determined using the ImageJ software: we find d-spacings of 0.43 and 0.32 nm for the (-111) and (120) planes of the P2_1/n phase in the x= 0.2 sample, and 0.28 and 0.31 nm for the (004) and (112) planes of the I4_1/a phase in the x= 0.8 sample. These planes correspond only to the dominant phase of each mixed-phase sample. The selected area electron diffraction (SAED) patterns in Figs. <ref>(g, h) indicate contributions from both phases. The indexed (h, k, l) planes belonging to P2_1/n are coloured white, while yellow designates the I4_1/a space group, as marked in Figs. <ref>(g, h). We find that the HR-TEM and SAED analysis is consistent with the XRD refinement data for these samples, as presented in Figs. 1(b, e).
The Raman spectra of LaV_1-xNb_xO_4 measured at three different excitation wavelengths, 532 nm, 633 nm, and 785 nm, are presented in Fig. <ref> for all the samples (x= 0–1). Three different excitation wavelengths are used to distinguish fluorescence effects on the Raman signal and to avoid background effects from the sample. We use a Lorentzian line shape function to deconvolute and fit the observed individual Raman peaks, as marked in Table <ref>. We find that the Raman peak positions (Raman shifts) of a given sample are independent of the excitation wavelength, confirming that they are inherent characteristics of that particular sample, as shown in Fig. <ref>. The intensity of the modes may vary for several reasons, such as the polarizability of the molecule, the excitation wavelength of the laser source, and the concentration of the active group <cit.>. Though there are minor variations in the intensities of the Raman modes measured with different excitation wavelengths, Fig. <ref> shows that the Raman active peaks change systematically across all the measured samples.
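The peak deconvolution described here is ordinary least-squares fitting of Lorentzian profiles. A minimal sketch on synthetic data (the peak centre, width, and noise level below are illustrative, not fitted values from this work):

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, amplitude, center, fwhm, offset):
    """Single Lorentzian line with a constant background."""
    return amplitude * (fwhm / 2) ** 2 / ((x - center) ** 2 + (fwhm / 2) ** 2) + offset

# synthetic Raman peak: centre 857 cm^-1, FWHM 12 cm^-1 (illustrative values)
rng = np.random.default_rng(0)
x = np.linspace(800, 900, 400)
y = lorentzian(x, 1000, 857, 12, 50) + rng.normal(0, 10, x.size)

popt, pcov = curve_fit(lorentzian, x, y, p0=[900, 855, 10, 40])
perr = np.sqrt(np.diag(pcov))
print(f"center = {popt[1]:.2f} +/- {perr[1]:.2f} cm^-1")
print(f"FWHM   = {popt[2]:.2f} +/- {perr[2]:.2f} cm^-1")
```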
In the measured spectra we observe 20 peaks for LaVO_4 and 17 peaks for LaNbO_4. According to group theory calculations, LaVO_4 has 72 vibrational modes, of which 36 are Raman active (18A_g + 18B_g) <cit.> (here A and B denote symmetric and antisymmetric vibrations about the principal axis of symmetry, and the subscript g indicates that the vibrations are symmetric relative to a symmetry center). All 20 Raman peaks of the x= 0 sample are labelled S_0 to S_19, as shown in Table <ref>. Theory predicts 8A_g+10B_g modes for the m-f structure and 13 Raman-active modes for the t-s structure (as observed in the x= 0.6 sample), which are summarized in Table <ref>. The absence of some of the peaks could be due to the overlap of several A_g and B_g modes and their low Raman scattering cross-sections. All the assignments of the Raman peaks of LaV_1-xNb_xO_4 are summarised in Table <ref>. We can see in Table <ref> that the S_0 mode (127.24 cm^-1) is present only in the LaVO_4 and LaNbO_4 samples and absent in all the intermediate samples. The S_0 mode originates from the translational motion of the La atoms in the monoclinic phase. All the concentrations from x= 0.2 to 0.8 in LaV_1-xNb_xO_4 result in a t-s type structure or an m-m and t-s equilibrium structure, so the formation of the mixed phase may cause the disappearance of the S_0 mode. The S_18 mode is the most intense for the LaVO_4 sample and weakens with Nb substitution, whereas the intensity of the S_16 mode increases with Nb substitution, becoming the most intense mode for the LaNbO_4 sample, as can be seen in Fig. <ref>(a). For the x= 0.8 sample, the S_18 mode disappears completely, which indicates the transformation from a mixed phase of m-m and t-s in equilibrium into an approximately pure (96%) t-s phase <cit.>. This behaviour of the S_0, S_16, and S_18 modes corroborates the structural phase transformation with Nb^5+ substitution observed in the XRD analysis. Furthermore, the presence of the S_8, S_9, S_10, S_13, S_14, S_15, S_17, and S_18 modes in the x= 0 sample confirms the existence of VO_4^3- ions, since none of these modes is visible in LaNbO_4 <cit.>.
All the Raman peaks arise from different vibrational modes, i.e., bonds between the constituent elements La^3+, V^5+, Nb^5+, and O^2-. The experimentally observed positions of the distinct Raman modes, fitted using the Lorentzian function, agree closely with the reported data <cit.>, as presented in Table <ref>. In the m-m structured LaVO_4 crystal, nine O^2- atoms are linked to La^3+, whereas four O^2- atoms and V^5+ are joined in a tetrahedral arrangement. There are four distinct O^2- sites: at the first site, O^2- is bound in a 3-coordinate geometry to two equivalent La^3+ and one equivalent V^5+ atom; at the second site, it is bound to two equivalent La^3+ and one equivalent V^5+ atom in a distorted single-bond geometry; at the third site, O^2- is linked in a 3-coordinate geometry to three equivalent La^3+ and one equivalent V^5+ atom; and at the fourth site, it is bound in a distorted single-bond geometry to three equivalent La^3+ and one equivalent V^5+ atom <cit.>. In the m-f structured LaNbO_4 crystal, La^3+ is joined to eight O^2- atoms in an 8-coordinate geometry, and six O^2- atoms are bound to Nb^5+ to create the distorted, edge-sharing NbO_6 octahedra. There are two distinct O^2- sites: at the first, O^2- is linked in a 4-coordinate geometry to two equivalent La^3+ and two equivalent Nb^5+ atoms; at the second, it is bound in a 3-coordinate geometry to two equivalent La^3+ and one Nb^5+ atom. In the analysis of the vibrational modes, it has been assumed that the LaNbO_4 crystal is made up of La^3+ cations and NbO_4^3- molecular anions <cit.>. It is revealed experimentally that the added Nb^5+ replaces V^5+ at its site and distorts the LaVO_4 unit cell <cit.>. The vibrational modes of LaVO_4 are categorized as follows: (I) the high-wavenumber zone (765–874 cm^-1), arising from the stretching vibration of the O-V-O bonds; (II) the intermediate region (305–436 cm^-1), arising from the bending vibration of the O-V-O bonds; and (III) the low-wavenumber zone (< 285 cm^-1), arising from the translational modes of the heavy La atoms <cit.>; the results are presented in Table <ref>. Similarly, the vibrational modes of LaNbO_4 are categorized as follows: (I) a high-wavenumber zone (623–803 cm^-1) for the stretching modes of the Nb-O bonds, (II) an intermediate zone (322–422 cm^-1) for the deformation/scissor modes of NbO_4^3-, and (III) a low-wavenumber zone (121–282 cm^-1) for the rotational modes of NbO_4^3- and the translational lattice modes, which include the relative translations of anions and cations <cit.>.
LaNbO_4 has three kinds of modes in total: rotational modes of NbO_4^3-, vibrational modes of NbO_4^3-, and translational modes of the La–O and O–La–O bonds. The S_0 and S_22 peaks are visible, corresponding to the combined translation-rotational (B_g) and rotational (A_g) modes, respectively, while the third, rotational B_g mode (S_21) is absent from the observed experimental Raman spectra. The vibrational modes can be categorized into (I) doubly degenerate scissor modes, (II) a triply degenerate deformation mode, which further splits into a pair of degenerate rocking modes and one twist mode, and (III) stretching modes, one non-degenerate and one triply degenerate, in increasing order of wavenumber <cit.>. The remaining modes are all translational. From Table <ref> we can easily see that the LaNbO_4 Raman modes match well with the reported ones. Two NbO_4^3- scissor modes with almost degenerate wavenumbers are expected in the A_g spectrum; the most obvious candidates are S_26 and S_27, because the wavenumbers of the remaining A_g bands are too low for this assignment. In LaNbO_4, as already discussed, the deformation modes are believed to split into two almost degenerate rocking modes (S_11 and S_29) with B_g symmetry and a twist mode (S_30) with A_g symmetry; these modes also lie in the intermediate-wavenumber region. The stretching modes are high-energy vibrations and are recognised here as the S_16, S_31, S_32, and S_33 peaks. As the non-degenerate symmetric mode is expected to give the strongest band, S_16 is assigned to it, and the remaining S_31, S_32, and S_33 peaks are assigned to the three degenerate stretching modes. The invariance of the S_4, S_11, and S_16 peak positions throughout the x= 0 to 1 samples indicates no effect on the translational mode along the b-axis or on the B_g frequencies of the VO_4^3- and NbO_4^3- rocking and stretching modes. The S_8 peak disappears only in the LaNbO_4 spectrum because of the absence of O-V-O bending vibrations <cit.>. Interestingly, the S_2, S_3, S_5, S_6, S_9, S_12, and S_19 peaks vanish just before the Nb concentration exceeds that of V (at x= 0.4), and the S_1, S_10, S_13, S_14, S_15, S_16, S_17, and S_18 peaks vanish just after the Nb concentration exceeds that of V. It is quite possible that the diminishing concentration of V results in the weakening and eventual disappearance of some of the spectral peaks. For the same reason, some new peaks (S_20 and S_33) appear in the x= 0.6–1 samples: the S_20 peak arises from the translational mode along the b-axis, and the S_33 peak from one of the three triply degenerate stretching modes of NbO_4^3-.
The most intense peaks of LaNbO_4 (S_16) and LaVO_4 (S_18) at higher wavenumbers are due to the stretching of the Nb-O_t and V-O_t bonds, where O_t denotes the oxygen atoms in the terminal position <cit.>. The terminal position of oxygen is where it connects the LaO_8 dodecahedra and NbO_6 octahedra in LaNbO_4, and the LaO_9 muffin <cit.> and VO_4 tetrahedra in LaVO_4 <cit.>. Since the peak broadening appears to be intrinsic to the VO_4 tetrahedra, it is found to spread across the samples with intermediate Nb and V compositions; in certain samples, however, variables related to the Nb^5+ and V^5+ cations may also play an important role in increasing the broadening. The broad peaks are made up of multiple modes that are normally difficult to distinguish from one another <cit.>. The strongest peak of LaVO_4 (S_18) lies in the high-wavenumber region, approximately 52.5 cm^-1 above the strongest peak of LaNbO_4 (S_16). This difference in wavenumber (Δ) is related to the average bond length (d) of the atoms by Δ∝ 1/d^3/2, as stated by Badger's rule <cit.>. The V–O_t bond length in LaVO_4 is ∼1.72 Å <cit.> and the Nb–O_t bond length in LaNbO_4 is ∼1.90 Å <cit.>. The changes observed in the Raman spectra of the samples are thus quite consistent with Badger's rule.
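A rough numerical reading of this statement, using only the numbers above plus one assumption (the V–O_t stretch is taken near the middle of the 765–874 cm^-1 window, since the exact S_18 position is not restated here):

```python
d_V = 1.72      # V-O_t bond length in LaVO4, angstrom (from text)
d_Nb = 1.90     # Nb-O_t bond length in LaNbO4, angstrom (from text)
delta = 52.5    # cm^-1, S_18 (LaVO4) minus S_16 (LaNbO4) shift (from text)
nu_V = 820.0    # cm^-1, assumed V-O_t stretch position (mid window)

# Badger-type scaling nu ~ d^(-3/2): the shorter V-O_t bond should vibrate
# at the higher wavenumber, as observed
print(f"predicted nu_V/nu_Nb = {(d_Nb / d_V) ** 1.5:.3f}")
print(f"measured fractional shift = {delta / nu_V:.3f}")
# the rule gets the sign and order of magnitude right; exact agreement is not
# expected since the two modes belong to different anion units
```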
Finally, we use x-ray photoelectron spectroscopy (XPS) to investigate the electronic structure by measuring the survey scans and elemental core-level spectra of all the prepared samples. The peaks identified in the survey spectra are labeled according to their binding energies and agree with the reported values, as shown in Fig. <ref>. The characteristic La features are the 3d peak cluster (830–870 eV), the 4d peaks (4d_5/2 at 101 eV and 4d_3/2 at 104 eV), and the 4p peak (centered around 195 eV) <cit.>. These La peaks are clearly visible for every synthesised sample and are all remarkably comparable. A consistent rise of the Nb 3d (discussed later) and Nb 3p (3p_3/2 at 364 eV and 3p_1/2 at 379 eV) peaks is observed with increasing Nb doping, and this Nb feature is absent in the x= 0 sample <cit.>. The reverse behavior is expected for the V 2p (2p_3/2 at 517 eV and 2p_1/2 at 525 eV) and V 2s (630 eV) core-level peaks, as is clearly visible in Fig. <ref> <cit.>. The Voigt function has been used to fit the core-level spectra of the constituent elements. The fitted La 3d core-levels are shown in Fig. <ref>(a). The spin-orbit split peaks, present in all the samples, have been de-convoluted at binding energies of 834.3±0.2 eV, 836.0±0.3 eV, 838.7±0.1 eV, 847.9±0.1 eV, 851.1±0.2 eV, 853.0±0.3 eV, 855.6±0.1 eV, and 863.4±0.2 eV (average BE of all the samples ± ΔBE, calculated for the x= 0–1 samples). The broad diffuse satellite peaks at 847.9 eV and 863.4 eV in the vicinity of the La 3d core-level arise from plasmons. The two final states I and II, together with the spin-orbit splitting of each state, make the structure complex. The primary strong peaks (3d_5/2 at 834.3 eV and 3d_3/2 at 851.1 eV) are associated with the final state I (La^4+ 3d^94f^0, L), which involves electron transfer from the 3d core-level to the continuum. The peaks at higher binding energies are features of the final state II (La^3+ 3d^94f^1, L, -e); this feature is experimentally unresolved, indicating a multiplet structure, as suggested by Mullica et al. <cit.>. It corresponds to electron transfer from the ligand valence band (L, O 2p in our case) to the empty 4f orbitals of La <cit.>. This multiplet structure of state II is composed of bonding and anti-bonding states: the prominent signals at higher binding energies (3d_5/2 at 838.7 eV and 3d_3/2 at 855.6 eV) are due to the bonding of state II, and the weak signals at lower binding energies (3d_5/2 at 836.0 eV and 3d_3/2 at 853.0 eV) are due to the anti-bonding. The average energy difference (over the La core-level spectra of all the samples) between these three pairs of peaks is nearly the same (≈16.9 eV) for state I, state II bonding, and state II anti-bonding, respectively. This verifies that the spin-orbit energy splitting of the La states is unaltered by Nb substitution <cit.>. Interestingly, we find a significant and systematic variation of the intensity of the peak at 838.7 eV (I_2) relative to the primary peak at 834.3 eV (I_0) with Nb doping. Metal-ligand orbital overlap is reported to be responsible for such doping-induced intensity variations <cit.>, as strong ligands are found to populate the (La^3+ 3d^94f^1, L, -e) state, intensifying I_2 <cit.>. The intensity ratio I_2/I_0 is shown in Fig. <ref>(b) and decreases consistently as a function of doping x.
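The Voigt deconvolution and the I_2/I_0 extraction can be mimicked schematically with scipy's voigt_profile; the two centres below are the average binding energies quoted above, while the amplitudes and widths are purely illustrative:

```python
import numpy as np
from scipy.special import voigt_profile

def voigt(x, area, center, sigma, gamma):
    """Voigt line: Gaussian width sigma, Lorentzian half-width gamma."""
    return area * voigt_profile(x - center, sigma, gamma)

# La 3d_5/2 region: state I at 834.3 eV (I_0), state II bonding at 838.7 eV (I_2)
x = np.linspace(828, 845, 2000)
I0 = voigt(x, 1.0, 834.3, 0.5, 0.6)   # illustrative widths
I2 = voigt(x, 0.6, 838.7, 0.5, 0.6)

# the reported ratio uses fitted peak areas; on a uniform grid a plain sum works
print(f"I_2/I_0 = {I2.sum() / I0.sum():.2f}")
```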
This signifies that with Nb substitution the extent of overlap between the La(4f) and O(2p) orbitals decreases monotonically. The same conclusion can be drawn from the trend in the energy separation between I_2 and I_0 as a function of x, shown in Fig. <ref>(c). The separation changes only minutely between successive samples, but between the x= 0 and x= 1 samples the change in the I_2-I_0 energy difference is of the order of 0.3 eV. The value of I_2 - I_0 is found to vary across a variety of La-containing compounds, mainly because of the crystal structure, e.g., 3.8 eV for La_0.5Sr_0.5Co_1-xNb_xO_3 and 5.3 eV for La_1.85Ba_0.15CuO_4 <cit.>. Notably, this energy separation could be related to the ease of electron transfer between the ligand and the more ionic state of La, and therefore shows a trend opposite to the tendency of the ligand to overlap with the La 4f orbitals <cit.>.
The Nb 3d core-level spectra are shown in Fig. <ref>, where the spin-orbit doublet of the Nb 3d core-level is fitted with a single peak for each component; the peak positions for the Nb-doped samples are found to be 3d_5/2 at 206.2±0.2 eV and 3d_3/2 at 209.0±0.2 eV <cit.>. This confirms the prevailing 5+ oxidation state of the Nb atoms <cit.> in all the samples. However, for the x= 1 sample the Nb 3d_5/2 lies at a higher binding energy than in the other Nb-containing samples, which could be due to charging effects and the change in chemical environment. For this reason, Atuchin et al. characterized the Nb state using the energy difference Δ (Nb 3d_5/2 – O 1s) instead of relying solely on the Nb 3d_5/2 binding energy position <cit.>. The evaluated Δ (Nb 3d_5/2 – O 1s) values are found to be around 323.5 eV. This energy difference, calculated with respect to O 1s, is independent of the carbon correction, and the obtained value of ≈323.5 eV is among the highest reported for the 5+ oxidation state of Nb. We can also see that the scatter in Δ is only 0.1 eV in this case, while for the Nb 3d_5/2 and O 1s positions it is 0.3 and 0.2 eV, respectively. In Fig. <ref> we can also see that the O 1s peak shifts to higher binding energy for the x= 1 sample as compared to x= 0, and the Nb 3d_5/2 core-level shifts similarly. The Δ(Nb 3d_5/2 - O 1s) value of the x= 1 sample is thus quite consistent with those of all the other samples, which strongly supports the electronic characterization using the energy difference with respect to O 1s instead of absolute peak positions.
In Fig. <ref>, we present the V 2p core-level spectra of all the samples, which show the spin-orbit components 2p_3/2 and 2p_1/2 at 516.9 and 524.8 eV, respectively, indicating V in the 5+ state. Interestingly, an unusual broadening of the V 2p_1/2 component is observed for all the samples, whereas no such additional component is evident in the V 2p_3/2 peak at 516.9 eV. More importantly, the deconvolution of the V 2p_1/2 component reveals that the FWHM of the higher-energy feature (denoted I) (1.2 eV) is nearly the same as that of the 2p_3/2 component (1.1 eV), while the lower-energy feature (II) is significantly broader (2.8 eV). Moreover, the area ratio of the combined features I and II to the 2p_3/2 peak is close to 1/2, which clearly indicates the intrinsic vanadium origin of these two features. In contrast to the metallic V 2p core-level, vanadium-based compounds have often been reported to exhibit an anomalous V 2p_1/2 width as a consequence of Coster-Kronig (C-K) transitions <cit.>. The C-K transition is a class of Auger transition in which an electron from a higher sub-shell of the same shell fills the core hole <cit.>. In the present case, the filling of the 2p_1/2 core hole by an electron from 2p_3/2 may give rise to C-K transitions, which can produce an additional feature in the 2p_1/2 component. It is therefore likely that component I is attributed to core-hole recombination with the screening electrons, analogous to the 2p_3/2, whereas an additional L_2-L_3 (C-K) relaxation process gives rise to feature II in the 2p_1/2 peak <cit.>. No significant change in these components is observed with Nb substitution, indicating the robust nature of the underlying system. Further, the O 1s energy-difference approach is also applied here since, for vanadium oxides, the energy difference Δ(V 2p_3/2 - O 1s) is an advantageous reference <cit.>. The average Δ(V 2p_3/2 - O 1s) magnitude is 12.8±0.1 eV, in good agreement with the literature for the V^5+ oxidation state <cit.>.
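The 1/2 area ratio used in this argument is simply the spin-orbit degeneracy ratio of the 2p manifold; neglecting any j-dependence of the photoionization cross-section (our simplification),
\[
\frac{A(2p_{1/2})}{A(2p_{3/2})}=\frac{2\cdot\tfrac{1}{2}+1}{2\cdot\tfrac{3}{2}+1}=\frac{2}{4}=\frac{1}{2},
\]
so finding the combined area of features I and II equal to half of the 2p_3/2 area is exactly what is expected if both features carry intrinsic V 2p_1/2 spectral weight.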
§ CONCLUSIONS
In conclusion, the solid-state reaction method was used to successfully prepare LaV_1-xNb_xO_4 samples with systematically varied Nb^5+ concentration. The XRD measurements established that the substitution of the larger Nb^5+ ion for V^5+ affects the lattice constants of LaV_1-xNb_xO_4, which passes through three different structural phases [monoclinic monazite (m-m) type (x= 0), a two-phase equilibrium of monoclinic monazite (m-m) and tetragonal scheelite (t-s) type (0.2≤x≤0.8), and monoclinic fergusonite (m-f) type (x= 1)]. The SEM micrographs showed that the particle size and shape change with the crystal phase of these samples as the Nb^5+ concentration increases. The analysis of the HR-TEM and SAED data was found to be consistent with the XRD refinement. The Raman spectra of LaV_1-xNb_xO_4 were studied using 532 nm, 633 nm, and 785 nm excitation wavelengths, and all the Raman assignments show a well-ordered enhancement/diminution with increasing Nb^5+ doping. The variations in intensity as well as the appearance/disappearance of Raman modes with Nb concentration coincide with the structural phase changes observed in the XRD analysis. This further confirms that the phase transformation in LaV_1-xNb_xO_4 agrees with the maximum-intensity peak patterns in the Raman spectra of these samples, consistent with Badger's rule. The XPS analysis reveals systematic changes in the Nb 3d and V 2p core-level spectral intensities of the samples with increasing Nb^5+ concentration. The equal spin-orbit energy splitting of the La states was confirmed by the average energy differences (over the La core spectra of all samples) for state I, state II bonding, and state II anti-bonding, and the observed changes in their relative intensities with Nb substitution are attributed to the metal-ligand orbital overlap. These findings provide valuable insights into the structural and electronic properties of LaV_1-xNb_xO_4 and their potential use in a range of practical applications.
§ ACKNOWLEDGMENT
AS and MS thank MHRD and CSIR, respectively, for the fellowships. The authors acknowledge IIT Delhi's FIST (DST, Govt. of India) UFO scheme for providing the physics department with the Raman facility. We thank the Physics Department at IIT Delhi for the XRD and the Central Research Facility (CRF) for the FESEM, EDX, and XPS. We also thank Ambuj Mishra for providing the HR-TEM facility at IUAC, New Delhi. The preparation of the samples was done in a high-temperature furnace (from Nabertherm GmbH, Germany), funded by BRNS through the DAE Young Scientist Research Award (Project Sanction No. 34/20/12/2015/BRNS). RSD acknowledges SERB–DST for the financial support through a core research grant (project reference no. CRG/2020/003436).
99
Varghese_PRB_20 E. Varghese, S. Kumar, B. Pathak, and S. Sen, Temperature-induced crystallinity and vibrational properties in samarium orthovanadate, Phys. Rev. B 101 (2020) 174112.
Huang_JALCOM_19 S. Huang, Z. Wang, Q. Zhu, X. Shi, X. Wang, X. Li, X. Sun, and J.-G. Li, A new protocol for templated synthesis of YVO_4:Ln luminescent crystallites (Ln=Eu, Dy, Sm), Journal of Alloys and Compounds 776 (2019) 773.
Carbonati_JALCOM_21 T. Carbonati, C. Cionti, E. Cosaert, B. Nimmegeers, D. Meroni, and D. Poelman, NIR emitting GdVO_4:Nd nanoparticles for bioimaging: The role of the synthetic pathway, Journal of Alloys and Compounds 862 (2021) 158413.
Kumar_JPCL_22 Ajay Kumar, A. Jain, S. M. Yusuf, and R. S. Dhaka, Observation of Anisotropic Thermal Expansion and the Jahn-Teller Effect in Double Perovskites Sr_2-xLa_xCoNbO_6 Using Neutron Diffraction, J. Phys. Chem. Lett. 13 (2022) 3023.
Kumar_PRB3_22 Ajay Kumar, R. Shukla, R. Kumar, R. J. Choudhary, S. N. Jha, and R. S. Dhaka, Probing the electronic and local structure of Sr_2-xLa_xCoNbO_6 using near-edge and extended x-ray absorption fine structures, Phys. Rev. B 105 (2022) 245155.
Kumar_PRB1_20 Ajay Kumar and R. S. Dhaka, Unraveling magnetic interactions and the spin state in insulating Sr_2-xLa_xCoNbO_6, Phys. Rev. B 101 (2020) 094434.
Kumar_PRB2_20 Ajay Kumar, B. Schwarz, H. Ehrenberg, and R. S. Dhaka, Evidence of discrete energy states and cluster-glass behavior in Sr_2-xLa_xCoNbO_6, Phys. Rev. B 102 (2020) 184414.
YiJAL17 M. Yi, S.-K. Park, C.-Y. Seong, Y. Piao, and T. Yu, The general synthesis and characterization of rare earth orthovanadate nanocrystals and their electrochemical applications, Journal of Alloys and Compounds 693 (2017) 825.
SunJAP10 L. Sun, X. Zhao, Y. Li, P. Li, H. Sun, X. Cheng, and W. Fan, First-principles studies of electronic, optical, and vibrational properties of LaVO_4 polymorph, J. Appl. Phys. 108 (2010) 093519.
ChakoumkosJSSC94 B. C. Chakoumakos, M. M. Abraham, and L. A. Boatner, Crystal structure refinements of zircon-type MVO_4 (M = Sc, Y, Ce, Pr, Nd, Tb, Ho, Er, Tm, Yb, Lu), J. Solid State Chem. 109 (1994) 197.
RiceACB76 C. E. Rice and W. R. Robinson, Lanthanum Orthovanadate, Acta Crystallogr B Struct. Sci. 32 (1976) 2232.
FanJPCB06 W. Fan, X. Song, Y. Bu, S. Sun, and X. Zhao, Selected-Control Hydrothermal Synthesis and Formation Mechanism of Monazite- and Zircon-Type LaVO4 Nanocrystals, J. Phys. Chem. B 110 (2006) 23247.
RastogiJPCC17 C. K. Rastogi, S. K. Sharma, A. Patel, G. Parthasarathy, R. G. S. Pala, J. Kumar, and S. Sivakumar, Dopant Induced Stabilization of Metastable Zircon-Type Tetragonal LaVO_4, J. Phys. Chem. C 121 (2017) 16501.
Xie_JALCOM_12 B. Xie, G. Lu, Y. Wang, Y. Guo, and Y. Guo, Selective synthesis of tetragonal LaVO_4 with different vanadium sources and its luminescence performance, Journal of Alloys and Compounds 544 (2012) 173.
suzukiJAC14 Suzuki, N., Noritake, T., and Hioki, T. Structural analysis and physical properties of Sr_2-xLa_xVO_4-δ. Journal of Alloys and Compounds 612 (2014) 114.
LiuJSSC12 Liu, H., Yuan, J., Jiang, Z., Shangguan, W., Einaga, H., and Teraoka, Y., Roles of Bi, M and VO_4 tetrahedron in photocatalytic properties of novel Bi_0.5M_0.5VO_4 (M=La, Eu, Sm and Y) solid solutions for overall water splitting. Journal of Solid State Chemistry 186 (2012) 70.
HimanshuPRB21 Himanshu Dua, Rishabh Shukla, and R. S. Dhaka, Structural phase transition and its consequences for the optical behavior of LaV_1-xNb_xO_4, Phys. Rev. B 103 (2021) 174107.
VermaACAG01 S. Verma, B. N. Wani, and N. M. Gupta, Synthesis, characterisation, TPR/TPO and activity studies on LaMn_xV_1-xO_4-δ–catalysts, Appl. Catal. A: Gen. 205 (2001) 295.
ErrandoneaPMS08 D. Errandonea and F. J. Manjón, Pressure effects on the structural and electronic properties of ABX_4 scintillating crystals, Progress in Materials Science 53 (2008) 711.
TakeiJCG77 H. Takei and S. Tsunekawa, Growth and properties of LaNbO_4 and NdNbO_4 single crystals, Journal of Crystal Growth 38 (1977) 55.
AldredML83 A. T. Aldred, Unusual cell volume behavior in the LaNb_1-xV_xO_4 system, Materials Letters 1 (1983) 197.
HaugsrudNM06 R. Haugsrud and T. Norby, Proton conduction in rare-earth ortho-niobates and ortho-tantalates, Nature Mater. 5 (2006) 193.
Hakimova_CI_19 L. Hakimova, A. Kasyanova, A. Farlenkov, J. Lyagaeva, D. Medvedev, A. Demin, and P. Tsiakaras, Effect of Isovalent Substitution of La^3+ in Ca-Doped LaNbO_4 on the Thermal and Electrical Properties, Ceramics International 45 (2019) 209.
BlasseCPL90 G. Blasse and L. H. Brixner, Ultraviolet emission from ABO_4-type niobates, tantalates and tungstates, Chem. Phys. Lett. 173 (1990) 409.
Liu_SABC_22 H. Liu, H. Yu, J. Wang, F. Xia, C. Wang, and J. Xiao, LaNbO_4 as an electrode material for mixed-potential CO gas sensors, Sensors and Actuators B: Chemical 352 (2022) 130981.
Zhou_ICF_21 D. Zhou, H.-H. Guo, M.-S. Fu, X.-G. Yao, H.-X. Lin, W.-F. Liu, L.-X. Pang, C. Singh, S. Trukhanov, A. Trukhanov, and I. M. Reaney, Anomalous dielectric behaviour during the monoclinic to tetragonal phase transition in La(Nb_0.9V_0.1)O_4, Inorg. Chem. Front. 8 (2021) 156.
Xue_CEJ_21 J. Xue, Z. Yu, H. M. Noh, B. R. Lee, B. C. Choi, S. H. Park, J. H. Jeong, P. Du, and M. Song, Designing multi-mode optical thermometers via the thermochromic LaNbO_4:Bi^3+/Ln^3+ (Ln = Eu, Tb, Dy, Sm) phosphors, Chemical Engineering Journal 415 (2021) 128977.
DingRSCA17 S. Ding, Q. Zhang, W. Liu, J. Luo, F. Peng, X. Wang, G. Sun, and D. Sun, Crystal growth and characterization of a mixed laser crystal: Nd-doped Gd_0.89La_0.1NbO_4, RSC Adv. 7 (2017) 35666.
Xiong_APA_20 F. B. Xiong, F. X. Xu, H. F. Lin, Y. P. Wang, E. Ma, and W. Z. Zhu, Synthesis and luminescent properties of novel thermal-stable orangish-red-emitting LnNbO_4: Sm^3+ (Ln=La, Y) phosphors, Appl. Phys. A 126 (2020) 908.
SunCI15 P. Sun, P. Dai, J. Yang, C. Zhao, and X. Zhang, Enhanced upconversion luminescence induced by structural evolution of lanthanum niobate phosphor, Ceramics International 41 (2015) 3009.
Wachowski S. Wachowski, A. Mielewczyk-Gryn, and M. Gazda, Effect of isovalent substitution on microstructure and phase transition of LaNb_1-xM_xO_4 (M=Sb, V or Ta; x= 0.05–0.3), J. Solid State Chem. 219 (2014) 201.
HuseJSSC12 M. Huse, A. W. B. Skilbred, M. Karlsson, S. G. Eriksson, T. Norby, R. Haugsrud, and C. S. Knee, Neutron Diffraction study of the monoclinic to tetragonal structural transition in LaNbO_4 and its relation to proton mobility, Journal of Solid State Chemistry 187 (2012) 27.
DavidMRB83 W. I. F. David, The high-temperature paraelastic structure of LaNbO_4, Mater. Res. Bull. 18 (1983) 749.
ShuklaPRB22 Rishabh Shukla, Clemens Ulrich, and R. S. Dhaka, Investigation of lattice dynamics, magnetism and electronic transport in β-Na_0.33V_2O_5, Phys. Rev. B 106 (2022) 125148.
ChengOM15 X. Cheng, D. Guo, S. Feng, K. Yang, Y. Wang, Y. Ren, and Y. Song, Structure and stability of monazite- and zircon-type LaVO_4 under hydrostatic pressure, Opt. Mater. 49 (2015) 32.
SantosJAP07 C. C. Santos, E. N. Silva, A. P. Ayala, I. Guedes, P. S. Pizani, C.-K. Loong, and L. A. Boatner, Raman investigations of rare earth orthovanadates, J. Appl. Phys. 101 (2007) 053511.
PanchalPRB11 V. Panchal, S. López-Moreno, D. Santamaría-Pérez, D. Errandonea, F. J. Manjón, P. Rodríguez-Hernandez, A. Muñoz, S. N. Achary, and A. K. Tyagi, Zircon to monazite phase transition in CeVO_4: X-ray diffraction and Raman-scattering measurements, Phys. Rev. B 84 (2011) 024111.
OkramMNL11 R. Okram, N. R. Singh, and Ak. M. Singh, Simple preparation of Eu^3+-doped LaVO_4 by ethylene glycol route: A luminescence study, Micro Nano Lett. 6 (2011) 165.
ClavierJECS11 N. Clavier, R. Podor, and N. Dacheux, Crystal chemistry of the monazite structure, J. Eur. Ceram. Soc. 31 (2011) 941.
SelvanJCS09 R. K. Selvan, A. Gedanken, P. Anilkumar, G. Manikandan, and C. Karunakaran, Synthesis and characterization of rare earth orthovanadate (RVO_4; R = La, Ce, Nd, Sm, Eu and Gd) nanorods/nanocrystals/nanospindles by a facile sonochemical method and their catalytic properties, J. Cluster Sci. 20 (2009) 291.
HuangN12 Z. Huang, S. Huang, G. Ou, and W. Pan, Synthesis, phase transformation and photoluminescence properties of Eu:La_1-xGd_xVO_4 nanofibers by electrospinning method, Nanoscale 4 (2012) 5065.
WangJL17 L. Wang, Q. Xu, L. Liu, Q. Song, H. Lv, G. Zhu, and D. Zhang, Single-hole hollow tetragonal LaVO_4:Eu^3+ microspheres prepared by Ostwald ripening and their luminescence property, J. Lumin. 192 (2017) 1020.
IshiiPSSA89 K. Ishii, N. Morita, H. Nakayama, S. Tsunekawa, and T. Fukuda, Raman spectra of LaNbO_4 in the ferroelastic phase and the relaxation after the state shift, Phys. Stat. Sol. (a) 112 (1989) 207.
JiaEJIC10 C. J. Jia, L. D. Sun, Z. G. Yan, Y. C. Pang, S. Z. Lü, and C. H. Yan, Monazite and zircon type LaVO_4:Eu nanocrystals-synthesis, luminescent properties, and spectroscopic identification of the Eu^3+ sites, Eur. J. Inorg. Chem. 18 (2010) 2626.
ErrandoneaJPCC16 D. Errandonea, J. Pellicer-Porres, D. Martínez-García, J. Ruiz-Fuertes, A. Friedrich, W. Morgenroth, C. Popescu, P. Rodríguez-Hernández, A. Muñoz, and M. Bettinelli, Phase stability of lanthanum orthovanadate at high-pressure, J. Phys. Chem. C 120 (2016) 13749.
JainAPLM13 A. Jain, S. P. Ong, G. Hautier, W. Chen, W. D. Richards, S. Dacek, S. Cholia, D. Gunter, D. Skinner, G. Ceder, and K. A. Persson, Commentary: the materials project: A materials genome approach to accelerating materials innovation, APL Materials 1 (2013) 011002.
ErrandoneaPMS18 D. Errandonea and A. B. Garg, Recent progress on the characterization of the high-pressure behaviour of AVO_4 orthovanadates, Prog. Mater. Sci. 97 (2018) 123.
PellicerJSSC17 J. Pellicer-Porres, A. B. Garg, D. Vázquez-Socorro, D. Martínez-García, C. Popescu, and D. Errandonea, Stability of the fergusonite phase in GdNbO_4 by high pressure XRD and Raman experiments, Journal of Solid State Chemistry 251 (2017) 14.
HerzbergVNNY87 G. Herzberg, Infrared and Raman spectra of polyatomic molecules, van Nostrand, New York, 22 1987.
IshaqueSSS99 M. Ishaque Khan, T. Hope, and S. Tabassum, Synthesis, reactivity, X-ray structure and thermal study of the mixed-metal oxide hydrate [Mn(H_2O)_2V_2O_6], Solid State Sciences 1 (1999) 163.
HardcastleJPC91 F. D. Hardcastle and I. E. Wachs, Determination of vanadium-oxygen bond distances and bond orders by Raman spectroscopy, J. Phys. Chem. 95 (1991) 5031.
LiuJECS17 L. Liu, M. Knapp, H. Ehrenberg, L. Fang, H. Fan, L. A. Schmitt, H. Fuess, M. Hoelzel, H. Dammak, M. P. Thi, and M. Hinterstein, Average vs. local structure and composition-property phase diagram of K_0.5Na_0.5NbO_3-Bi_0.5Na_0.5TiO_3 system, Journal of the European Ceramic Society 37 (2017) 1387.
PenaJSSC20 J. P. Peña, P. Bouvier, and O. Isnard, Structural properties and Raman spectra of columbite-type NiNb_2-xV_xO_6 synthesized under high pressure, Journal of Solid State Chemistry 291 (2020) 121607.
RuizCEJ08 A. Ruiz-Martínez, D. Casanova, and S. Alvarez, Polyhedral structures with an odd number of vertices: Nine-coordinate metal compounds, Chem. Eur. J. 14 (2008) 1291.
BadgerJCP34 R. M. Badger, A relation between internuclear distances and bond force constants, The Journal of Chemical Physics 2 (1934) 128.
Mullica_PRB_85 D. F. Mullica, C. K. C. Lok, H. O. Perkins, and V. Young, X-ray photoelectron final-state screening in La(OH)_3: A multiplet structural analysis, Phys. Rev. B 31 (1985) 4039.
Steiner_ZPB_79 P. Steiner and H. Höchst, X-ray excited photoelectron spectra of LiNbO_3: a quantitative analysis. Z Physik B 35 (1979) 51.
Lebugle_PS_81 A. Lebugle, U. Axelsson, R. Nyholm, and N. Mårtensson, Experimental L and M core level binding energies for the metals ^22Ti to ^30Zn, Phys. Scr. 23 (1981) 825.
Shukla_JPCC_19 R. Shukla, A. Jain, M. Miryala, M. Murakami, K. Ueno, S. M. Yusuf, and R. S. Dhaka, Spin dynamics and unconventional magnetism in insulating La_(1-2x)Sr_2xCo_(1–x)Nb_xO_3, J. Phys. Chem. C 123 (2019) 22457.
Shukla_PRB_2023 Rishabh Shukla and R. S. Dhaka, Evolution of complex magnetic phases and metal-insulator transition through Nb substitution in La_0.5Sr_0.5Co_1-xNb_xO_3, Phys. Rev. B 107 (2023) 165108.
Kamath_IJC_1984 P. V. Kamath and D. D. Sarma, Charge Transfer Satellites in X-Ray Photoelectron Spectra of Lanthanum Compounds, Indian J. Chem. 23A (1984) 292.
Vasquez_PRB_1996 R. P. Vasquez, X-Ray Photoemission Measurements of La_1-xCa_xCoO_3 (x= 0, 0.5), Phys. Rev. B 54 (1996) 14938.
Signorelli_PRB_1973 A. J. Signorelli and R. G. Hayes, X-Ray Photoelectron Spectroscopy of Various Core Levels of Lanthanide Ions: The Roles of Monopole Excitation and Electrostatic Coupling, Phys. Rev. B 8 (1973) 81.
Shukla_PRB_18 Rishabh Shukla and R. S. Dhaka, Anomalous magnetic and spin glass behavior in Nb-substituted LaCo_1-xNb_xO_3, Phys. Rev. B 97 (2018) 024430.
Isawa_PRB_94 K. Isawa, R. Itti, J. Sugiyama, N. Koshizuka, and H. Yamauchi, Photoelectron spectroscopic study of Sr_xNbO_3, Phys. Rev. B 49 (1994) 3534.
Atuchin_JESRP_05 V. V. Atuchin, I. E. Kalabin, V. G. Kesler, and N. V. Pervukhina, Nb 3d and O 1s core levels and chemical bonding in niobates, Journal of Electron Spectroscopy and Related Phenomena 142 (2005) 129.
Antonides_PRB_77 E. Antonides, E. C. Janse, and G. A. Sawatzky, LMM Auger spectra of Cu, Zn, Ga, and Ge, II. Relationship with the L_23 photoelectron spectra via the L_2L_3M_45 Coster-Kronig process, Phys. Rev. B 15 (1977) 4596.
Sawastzky_PRB_79 G. A. Sawatzky and D. Post, X-Ray photoelectron and Auger spectroscopy study of some vanadium oxides, Phys. Rev. B 20 (1979) 1546.
Ohno_JESRP_04 M. Ohno, The effect of Coster–Kronig transition on the Auger-photoelectron coincidence spectroscopy spectra of early 3d-transition metals, Journal of Electron Spectroscopy and Related Phenomena 136 (2004) 221.
Mendialdua_JESRP_95 J. Mendialdua, R. Casanova, and Y. Barbaux, XPS studies of V_2O_5, V_6O_13, VO_2 and V_2O_3, Journal of Electron Spectroscopy and Related Phenomena 71 (1995) 249.
Silversmit_JESRP_04 G. Silversmit, D. Depla, H. Poelman, G. B. Marin, and R. De Gryse, Determination of the V 2p XPS binding energies for different Vanadium oxidation states (V^5+ to V^0+), Journal of Electron Spectroscopy and Related Phenomena 135 (2004) 167.
entry_id: http://arxiv.org/abs/2307.00308v1
published: 20230701113037
title: Quantum non-demolition measurement of an electron spin qubit through its low-energy many-body spin environment
authors: Harry E. Dyte, George Gillard, Santanu Manna, Saimon F. Covre da Silva, Armando Rastelli, Evgeny A. Chekhovich
primary_category: cond-mat.mes-hall
categories: cond-mat.mes-hall, quant-ph
Department of Physics and Astronomy, University of Sheffield, Sheffield S3 7RH, United Kingdom
Institute of Semiconductor and Solid State Physics, Johannes Kepler University Linz, Altenberger Str. 69, 4040 Linz, Austria
e.chekhovich@sheffield.ac.uk
The measurement problem dates back to the dawn of quantum mechanics. Here, we measure a quantum dot electron spin qubit through off-resonant coupling with thousands of redundant nuclear spin ancillae. We show that the link from quantum to classical can be made without any “wavefunction collapse”, in agreement with the Quantum Darwinism concept. Large ancilla redundancy allows for single-shot readout with high fidelity ≈99.85%. Repeated measurements enable heralded initialization of the qubit and probing of the equilibrium electron spin dynamics. Quantum jumps are observed and attributed to burst-like fluctuations in a thermally populated phonon bath.
Quantum non-demolition measurement of an electron spin qubit through its low-energy many-body spin environment
Evgeny A. Chekhovich
August 1, 2023
==============================================================================================================
High fidelity qubit readout is essential in quantum information processing. Usually, such readout starts with conversion of a fragile quantum state into a more robust form, detectable by a classical apparatus. Some readout techniques rely on high-energy excitations, making this conversion dissipative (irreversible). Examples include spin-to-charge conversion <cit.>, single photon detection <cit.>, optical readout of spin in defects <cit.> and quantum dots (QDs) <cit.>. An alternative is unitary (reversible) conversion. One example is the off-resonant (Ising) coupling between the main and ancilla electron spin qubits, which enables quantum non-demolition (QND) measurement <cit.>. Other QND demonstrations include superconducting qubits <cit.> and mechanical resonators <cit.>.
Here, we implement unitary conversion of a QD electron spin, but the ancilla is of a different nature, consisting of ≈10^4-10^5 low-energy nuclear spin qubits. The large redundancy of the ancilla results in a very high readout fidelity, which is what an observer perceives as a deterministic classical measurement. The only departure from an ideal quantum-to-classical conversion comes from random qubit jumps. However, unlike in previous studies <cit.>, the jumps are not caused by the measurement itself. Instead, the electron spin jumps are attributed to spontaneous bursts of electric fields, produced by the equilibrium vibrations of the crystal lattice (phonons). Our readout method is particularly robust and simple to implement, since the nuclei are essentially the same in all QDs, eliminating the need for QD-specific calibrations.
We study lattice-matched epitaxial GaAs QDs grown by in-situ etching and infilling of nanoholes in AlGaAs <cit.>. The QD can be charged with a single electron from the n-type Fermi reservoir by adjusting the bias in a p-i-n diode structure [Fig. <ref>]. A static magnetic field B_z is applied along the growth axis z. A typical QD consists of N≈10^5 atoms, whose nuclei are spin-3/2 particles. The sample is subject to uniaxial stress, which induces nuclear quadrupolar shifts. This way, the two-level subspace with nuclear spin projections I_z=-3/2,-1/2 is isolated, allowing the nuclei to be treated as spin-1/2 particles. Individual QDs are addressed optically using focused laser excitation and photoluminescence (PL) spectroscopy. A copper coil is used to generate a radiofrequency (RF) magnetic field orthogonal to B_z. Further details can be found in Supplementary.
The quantum system of a QD charged with a single electron (1e) is described with reference to the level diagram in Fig. <ref>. The hyperfine interaction Hamiltonian is ℋ_hf = Σ_ka_kŝ·Î_k, where a_k describes the coupling between the spin vector s of the resident electron and the k-th nuclear spin vector I_k. This interaction has a twofold effect. Firstly, in addition to the bare Larmor frequency ν_N, each nucleus acquires a Knight <cit.> frequency shift s_z a_k/(2h). Secondly, the electron states with s_z=±1/2 acquire the (Overhauser) hyperfine shifts ± E_hf/2, arising from the net polarization of the nuclear spin ensemble [Fig. <ref>]. The average hyperfine shift is defined as E_hf=Σ_ka_k⟨Î_z,k⟩, where ⟨...⟩ is the expectation value. The electron spin energy splitting hν_e is the sum of E_hf and the bare Zeeman splitting hν_e,0=μ_B g_e B_z, where g_e is the electron g-factor and μ_B is the Bohr magneton. The optically excited trion contains a spin-singlet pair of electrons and an unpaired valence-band hole with momentum projection j_z=±3/2. Due to the selection rules, there are two dipole-allowed circularly polarized (σ^±) optical transitions with photon energies hν_ph^±. The optically-detected spectral splitting Δ E_PL = h(ν_ph^+-ν_ph^-) yields the hyperfine shift E_hf, up to a constant offset <cit.>.
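As a rough consistency check of these energy scales, one can combine the numbers quoted later in the text (N ≈ 10^5 nuclei, typical Knight shift a/(2h) ≈ 70 kHz, ≈80% optical nuclear polarization) under the simplifying assumption of uniform couplings a_k = a, which is ours:

```python
H_PLANCK = 4.1357e-15       # Planck constant, eV*s

N = 1e5                     # nuclei in the QD (from text)
knight_shift = 70e3         # a/(2h) in Hz, typical Knight shift (from text)
polarization = 0.8          # optically pumped nuclear polarization (from text)

a_over_h = 2 * knight_shift             # per-nucleus coupling a/h in Hz
# Overhauser shift of effective spin-1/2 nuclei: E_hf = sum_k a_k <I_z,k>
E_hf = N * (a_over_h * H_PLANCK) * (0.5 * polarization)
print(f"estimated |E_hf| ~ {E_hf * 1e6:.0f} ueV")
# ~23 ueV: tens of ueV, consistent with the hyperfine shifts and the
# hv_e ~ 50 ueV electron splitting quoted in the text
```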
The traditional readout uses a cyclic optical transition [e.g. σ^+ in Fig. <ref>] to convert the electron spin state into the presence or absence of scattered photons <cit.>. However, there is a finite probability for the measurement process to destroy the spin qubit if the recombination goes via one of the “forbidden” channels [e.g. from j_z=+3/2 to s_z=-1/2 in Fig. <ref>]. Here, we take a different approach, using the long coherence of the nuclear spins <cit.> and the large disparity of the energy scales ν_ph^±≫ν_e≫ν_N to turn the nuclei into a non-invasive measurement apparatus.
Fig. <ref> shows the timing diagram of the measurement cycle. It starts with a long (few seconds) circularly-polarized optical pumping of an empty (0e) QD, which polarizes the nuclear spins up to ≈80% <cit.>. Next, an electron is loaded from the Fermi reservoir (1e) and is allowed to equilibrate for a time T_Load. Nuclear magnetic resonance (NMR) is performed by applying an RF pulse with a total duration T_RF, calibrated to induce a π rotation of the nuclear spins. In some experiments, a second RF pulse is applied, following a free evolution time T_Evol. The final step is the illumination of the QD with a short (tens of milliseconds) optical probe in order to collect the PL spectrum and derive E_hf. Importantly, all measurements are done in one cycle (i.e. single-shot), thus avoiding any averaging.
The readout of the electron spin qubit is explained in Fig. <ref>. An electron in state s_z=-1/2 (s_z=+1/2) Knight-shifts the QD NMR spectrum to the higher (lower) frequency side of ν_N. A single RF pulse is applied at a radiofrequency ν_N-a/(2h), where a is a weighted average of a_k in a QD. For the electron in the s_z=+1/2 (-1/2) state, the RF pulse is in (out of) resonance, so the QD nuclei are flipped (remain in the initial state) <cit.>. Statistics of the single-shot PL probe spectra [Fig. <ref>] show a clear bimodality in the spectral splitting (red and black traces), arising from the bimodal distribution of the RF-induced hyperfine shifts Δ E_hf. A systematic dependence of Δ E_hf on the RF detuning from ν_N is shown in Fig. <ref>, where the two branches corresponding to s_z=+1/2 and -1/2 are traced by the dashed and dotted lines, respectively. The broadening of these traces arises from the inhomogeneous distribution of a_k, whereas the empty-QD (0e) NMR spectrum is much narrower (solid line). The optimal resolution of the two electron spin states (the maximum difference in Δ E_hf) is observed when the RF detuning matches the typical Knight shift a/(2h)≈70 kHz.
Using the optimal detuning, we collect detailed statistics of the single-shot Δ E_hf. In an empty QD [0e, Fig. <ref>] the distribution of Δ E_hf is a single mode, broadened by the noise in the probe PL spectra. The mode is centred at a small value Δ E_hf≈1.7 μeV, indicating partial rotation of the nuclei by the detuned RF pulse. The same measurement in a charged QD [1e, Fig. <ref>] shows a bimodal distribution. One mode is centered at Δ E_hf≈0.4 μeV and corresponds to the s_z=-1/2 electron state, which Knight-shifts the nuclei out of resonance with the RF pulse. The mode at Δ E_hf≈13.2 μeV corresponds to the s_z=+1/2 state, which brings the nuclei into resonance with the RF pulse.
These results match the Quantum Darwinism perspective <cit.>, which recognizes that a direct measurement of a qubit is rarely possible. Instead, the observer uses the environment to acquire information about the qubit states indirectly. The observer then relies on a large number of redundant copies in order to arrive at the classical (deterministic) notion of objective reality <cit.>. In our experiments, the nuclear spin ensemble is such an environment. The RF pulse is essentially an electron-controlled CNOT gate acting simultaneously on multiple nuclear qubits <cit.> to copy the electron state s_z into thousands of nuclear states I_z,k. At the final step, the gate bias ejects the electron from the QD, thus disconnecting the measurement apparatus (the nuclei) from the qubit. Illumination by the probe laser gradually destroys the individual nuclear spin copies, but their arithmetic sum [see Fig. <ref>] is robust enough to collect thousands of PL photons and measure the hyperfine shift E_hf. This summation of redundant copies into essentially a classical variable (nuclear magnetization) is what enables a single-shot measurement of a quantum variable ŝ_z by a classical instrument (optical spectrometer with a photo-detector).
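The role of redundancy can be made quantitative in the crudest way: the answer is stored in the summed polarization of N nominally identical copies, with a contrast of order N (each flipped spin-1/2 changes I_z by 1) against a spin-projection spread of order √N/2, so the effective signal-to-noise grows as √N. A sketch under that independent-spin assumption:

```python
import math

for N in (1e4, 1e5):
    contrast = N               # each flipped nucleus changes I_z by 1
    noise = math.sqrt(N) / 2   # binomial spread of N spin-1/2 projections
    print(f"N = {N:.0e}: SNR ~ {contrast / noise:.0f}")
```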
Returning to the single-shot NMR histograms, we note a small number of events where the NMR signal deviates from either of the modes [8 μeV≲Δ E_hf≲21 μeV in Fig. <ref>]. We ascribe such intermediate readouts to electron spin flips during the RF pulse, resulting in partial rotation of the nuclear spins. We model this process by assuming a probability p_Flip for the electron spin to be flipped during T_RF, leaving a probability 1-p_Flip for the electron spin to maintain its s_z. The optical readout noise is also included in the full model (see Supplementary Information). The best-fit results are shown by the solid lines in Figs. <ref>(b) and <ref>(c). Using the fitted mode positions Δ E_hf^- and Δ E_hf^+, we set the detection threshold in the middle (Δ E_hf^-+Δ E_hf^+)/2 and calculate the probability that the detected Δ E_hf is below (above) the threshold when the true electron state is s_z=-1/2(+1/2). This probability is the qubit readout fidelity, found to be F≈0.9985, matching or exceeding the state of the art in a range of qubit systems <cit.>. Since the two histogram modes are well resolved, the loss of fidelity is dominated by the random electron spin flips, leading to F ≈ 1-p_Flip/2 ≈ 1-T_RF/(4T_1,e), where T_1,e is the electron spin lifetime. Resolution of the s_z=±1/2 Knight-shifted NMR spectra with a short (spectrally broad) RF pulse imposes the lower limit T_RF≳ h/a, where a∝ N^-1. Our experiments with T_RF≈10-20 μs are already close to the lower limit imposed by N≈ 10^5 in the studied QDs. On the other hand, nuclear spin relaxation <cit.> and decoherence <cit.> times are much longer than h/a and therefore do not limit F. The other limitation comes from T_1,e, which ranges from milliseconds to tens of milliseconds for temperature T≈4.2 K and our typical electron spin splitting hν_e≈50 μeV. Further increase in T_1,e (and hence increase in F) can be achieved by lowering the temperature towards k_BT≈ hν_e, and by lowering hν_e through reduced magnetic field and nuclear spin polarization.
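The fidelity arithmetic can be spelled out with a toy version of this mixture model: Gaussian optical readout noise around the two mode centres plus a flat background of partial rotations from mid-pulse spin flips. The mode centres are the values quoted above; the noise width and p_Flip are illustrative (p_Flip is chosen so that F ≈ 1 − p_Flip/2 reproduces the quoted 0.9985):

```python
from scipy.stats import norm, uniform

dE_minus, dE_plus = 0.4, 13.2   # mode centres in ueV (from text)
sigma = 1.0                     # optical readout noise in ueV (illustrative)
p_flip = 3e-3                   # electron flip probability during T_RF (illustrative)

threshold = (dE_minus + dE_plus) / 2

# true state s_z = -1/2: with prob 1-p_flip the signal sits near dE_minus;
# a mid-pulse flip leaves the nuclei partially rotated, i.e. roughly uniform
# between the two modes (the s_z = +1/2 case is symmetric)
p_error = (1 - p_flip) * norm.sf(threshold, loc=dE_minus, scale=sigma) \
    + p_flip * uniform.sf(threshold, loc=dE_minus, scale=dE_plus - dE_minus)

print(f"readout fidelity F ~ {1 - p_error:.4f}")   # ~0.9985
```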
Immediate repeatability is a key requirement for any quantum measurement <cit.>, which we verify in an experiment with two RF pulses [Fig. <ref>]. The first pulse applied to ^75As nuclei records the initial state, while the second pulse on ^69Ga stores the state after the interpulse delay T_Evol. The optically-measured Δ E_hf is the total NMR signal produced by the two pulses. Fig. <ref> shows a two-dimensional histogram of Δ E_hf measured at different T_Evol. A cross-section at short T_Evol≈1 μs [Fig. <ref>] reveals the same bimodal distribution as in Figs. <ref>(b) and <ref>(c), with only two “no-flip” modes corresponding to s_z=±1/2. The two additional “spin-flip” modes, corresponding to s_z inversion during T_Evol, emerge only at long T_Evol [≈30 ms in Fig. <ref>]. Analysis of the entire T_Evol dependence reveals the spin lifetime T_1,e≈0.58 ms at B_z=7 T, measured in equilibrium without any active initialization of the electron spin. Instead, a heralded initialization is performed by the first RF pulse, which stores the initial electron state in the ^75As polarization, to be retrieved by the optical probe afterwards. The repeatability in the two-pulse experiments further highlights the unitarity of the measurement process – although the final ejection of the electron can be seen as a qubit “collapse”, it only occurs after s_z has been measured and recorded in the nuclei. The unitarity in our system is made possible by the low energy of the nuclei and the excellent fidelity of the RF coherent control, arising from a precise description of the microscopic electron-nuclear interactions. We argue that the non-unitary “wavefunction collapse” can be a mere simplification, invoked when the microscopic picture is missing (for example if the measurement involves coupling of the qubit to a high-energy environment).
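Extracting T_1,e from such data amounts to fitting the growth of the spin-flip mode population with T_Evol. A minimal sketch on synthetic counts, assuming the simplest relaxation convention ⟨s_z(t)⟩ = ⟨s_z(0)⟩ e^{-t/T_1,e} (so the flip probability saturates at 1/2); the shot number and delay grid are illustrative:

```python
import numpy as np
from scipy.optimize import curve_fit

def p_flip(t, T1):
    """Probability that s_z has inverted after time t (t, T1 in ms)."""
    return 0.5 * (1.0 - np.exp(-t / T1))

# synthetic two-pulse data with the T_1,e = 0.58 ms quoted in the text
rng = np.random.default_rng(1)
t_evol = np.logspace(-3, 1.5, 12)                # delays in ms
shots = 200                                      # single-shot repetitions per delay
flips = rng.binomial(shots, p_flip(t_evol, 0.58))

popt, _ = curve_fit(p_flip, t_evol, flips / shots, p0=[1.0])
print(f"fitted T_1,e = {popt[0]:.2f} ms (input 0.58 ms)")
```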
The readout time T_RF=20 μs is short enough to follow the electron spin evolution on the timescale of T_1,e. However, the measured histograms show that the electron spin is nearly always detected in one of the eigenstates s_z=±1/2, with very rare intermediate NMR readouts Δ E_hf. This can be explained only if the evolution of the electron spin is a random telegraph process, where the electron is in one of the eigenstates s_z=±1/2 most of the time, occasionally experiencing quantum jumps (much faster than T_RF).
We identify the origin of the jumps by combining our experiments with first-principles numerical modelling, where the Schrödinger equation is propagated from the initial wavefunction ψ_Init into the final state ψ_Fin (see details in Supplementary Information). We simulate the measurement process by initializing the nuclei (up to N=12) into a polarized state and initializing the electron spin in an arbitrary superposition ψ_Init=α|+1/2⟩ + β|-1/2⟩ with the z-projection expectation value s_z,Init=(|α|^2-|β|^2)/2. Following the RF measurement pulse, we find that (i) the final polarization of each nucleus equals the initial electron polarization, I_z,k,Fin≈ s_z,Init, and (ii) the electron polarization is nearly unchanged, s_z,Fin≈ s_z,Init. Such non-demolition copying of the quantum variable ŝ_z comes at the expense of completely erasing the conjugate variable <cit.>, which manifests in s_x,Fin≈ s_y,Fin≈ 0 regardless of s_x,Init. This result can be understood qualitatively through the large difference in the nuclear and electron precession frequencies ν_N≪ν_e, meaning that the nuclei sense only the average electron polarization ⟨ s_z⟩. Moreover, the disparity in the energy scales Nν_N<ν_e means that the electron follows adiabatically the evolution of the nuclear spin polarization <cit.>. In other words, the nuclei rotated by the RF do not have enough energy to flip the electron spin, ensuring the QND nature of the measurement.
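The essence of this mechanism can be captured by a toy model. The sketch below is ours, not the paper's full simulation: it treats a few spin-1/2 nuclei in the rotating frame under the rotating-wave approximation, with the RF tuned to the Knight-shifted nuclear resonance of the s_z=+1/2 electron; all coupling values are illustrative assumptions. In this toy convention the nuclei start in I_z=+1/2, so the final nuclear polarization copies s_z,Init up to a sign, while ŝ_z is conserved and ŝ_x is erased:

import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

N = 4                                 # nuclei; total dimension 2**(N+1) = 32
dim = 2 ** (N + 1)

def embed(m, site):
    """Single-spin operator `m` acting on `site` of the (N+1)-spin register."""
    ops = [I2] * (N + 1)
    ops[site] = m
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

Sz, Sx = embed(sz, 0), embed(sx, 0)   # site 0 is the electron
Iz = [embed(sz, k) for k in range(1, N + 1)]
Ix = [embed(sx, k) for k in range(1, N + 1)]

a, Omega = 1.0, 0.05                  # Knight shift >> Rabi frequency (selective pulse)
T_pi = 1 / (2 * Omega)                # pi-pulse duration in these units

# Rotating frame at the Knight-shifted resonance of the s_z=+1/2 electron.
# The rotating-wave (secular) approximation drops electron-nuclear flip-flops,
# which the energy mismatch nu_e >> N*nu_N suppresses in the full model.
H = sum(a * (Sz - 0.5 * np.eye(dim)) @ Izk + Omega * Ixk
        for Izk, Ixk in zip(Iz, Ix))
U = expm(-2j * np.pi * H * T_pi)

up, dn = np.array([1, 0], complex), np.array([0, 1], complex)
for label, psi_e in [("+1/2", up), ("-1/2", dn), ("sup ", (up + dn) / np.sqrt(2))]:
    psi = psi_e
    for _ in range(N):
        psi = np.kron(psi, up)        # nuclei initialized in I_z = +1/2
    psi = U @ psi
    ev = lambda O: np.real(psi.conj() @ O @ psi)
    print(f"s_z,init={label}: I_z,fin={ev(Iz[0]):+.3f}, "
          f"s_z,fin={ev(Sz):+.3f}, s_x,fin={ev(Sx):+.3f}")

For eigenstate inputs the first nucleus ends near ∓1/2 (a conditional π rotation), while for the superposition input s_z,Fin≈0 is preserved, the nuclear polarization responds linearly, and s_x,Fin≈0 because the two branches become entangled with nearly orthogonal nuclear registers.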
The linear response of the measurement apparatus, I_z,k,Fin≈ s_z,Init, revealed by the numerical modelling, provides the following insight into the origin of the quantum jumps. If all electron spin superpositions had equal probabilities, the single-shot NMR signals would have had a uniform distribution, calculated and shown by the dashed line. And yet the measurements yield a sharp bimodal distribution, revealing the energy eigenstates s_z=±1/2 as a preferential basis. Quantum mechanics does not prescribe any preferential eigenbasis towards which superpositions should decohere. Such a preferential basis can arise from the interaction of the qubit with its environment, known as einselection <cit.>. The nuclear spin environment has been ruled out above: its energy is too small to “project” the high-energy electron spin qubit into the s_z=±1/2 eigenstates. By contrast, the lattice vibrations (phonons) can act as a high-energy environment, leading to einselection and quantum jumps.
The inverse dependence of T_1,e on B_z (see Supplementary Information) confirms the dominant role of the phonons <cit.>. The effective spin-phonon coupling is ∝(ŝ_xℰ_y-ŝ_yℰ_x), where ℰ_x,y are the Cartesian components of the phonon-induced piezo-strain electric field <cit.>. This spin-resonance form suggests that the electron spin quantum jumps and einselection are driven by quasi-resonant electric fields, occurring in the form of short (≪10 μs) random bursts separated by long (millisecond) random intervals. Notably, these jumps are a spontaneous equilibrium process, as opposed to previous studies <cit.>, where the observation process (continuous optical excitation) could itself induce the qubit jumps. Spontaneous collapses and burst-like revivals have been investigated in bosonic systems, such as photons <cit.> and phonons <cit.>, and are typically associated with high mode population numbers n̅≳100. The appearance of spontaneous revivals at the much lower average phonon number n̅≈6.8 (for T=4.2 K and hν_e≈50 μeV used here) is somewhat unexpected, calling for further experiments at variable T and hν_e. Higher-order correlations of the electron spin quantum jumps can be studied using three or more readout pulses. Sensitive detection of low-energy phonons is itself an interesting application in the context of particle detection <cit.> and dark matter searches <cit.>.
Finally, recent studies on the same GaAs QDs <cit.> have revealed electron spin coherence times as long as ≈ 100 μs, significantly exceeding our measurement time T_RF≈ 10 μs. Thus, this QND readout method should allow for single-shot probing of the electron spin coherence without the need for dynamical decoupling, required in time-averaged measurements. Conversely, a detuned RF pulse can be used to generate and study the Greenberger–Horne–Zeilinger (Schrödinger cat) nuclear states.
Acknowledgements: H.E.D. was supported by EPSRC doctoral training grants. E.A.C. was supported by a Royal Society University Research Fellowship. G.G. and E.A.C. were supported by EPSRC award EP/V048333/1. A.R. acknowledges support of the Austrian Science Fund (FWF) via the Research Group FG5, I 4320, I 4380, I 3762, the Linz Institute of Technology (LIT), and the LIT Secure and Correct Systems Lab, supported by the State of Upper Austria, the European Union's Horizon 2020 research and innovation program under Grant Agreements No. 899814 (Qurope), No. 871130 (Ascent+), the QuantERA II project QD-E-QKD and the FFG (grant No. 891366). Author contributions: S.M., S.F.C.S. and A.R. developed, grew and processed the quantum dot samples. H.E.D, and G.G. conducted the experiments. H.E.D., G.G. and E.A.C. analysed the data. H.E.D., E.A.C. and G.G. drafted the manuscript with input from all authors. H.E.D. and G.G. contributed equally to this work. E.A.C. performed numerical modelling and coordinated the project.
§ REFERENCES

[1] J. M. Elzerman, R. Hanson, L. H. Willems van Beveren, B. Witkamp, L. M. K. Vandersypen, and L. P. Kouwenhoven, Single-shot read-out of an individual electron spin in a quantum dot, Nature 430, 431 (2004).
[2] B. Hensen, W. Wei Huang, C.-H. Yang, K. Wai Chan, J. Yoneda, T. Tanttu, F. E. Hudson, A. Laucht, K. M. Itoh, T. D. Ladd, A. Morello, and A. S. Dzurak, A silicon quantum-dot-coupled nuclear spin qubit, Nat. Nanotechnol. 15, 13 (2020).
[3] T. Meunier, I. T. Vink, L. H. Willems van Beveren, F. H. L. Koppens, H. P. Tranitz, W. Wegscheider, L. P. Kouwenhoven, and L. M. K. Vandersypen, Nondestructive measurement of electron spins in a quantum dot, Phys. Rev. B 74, 195303 (2006).
[4] M. Veldhorst, J. C. C. Hwang, C. H. Yang, A. W. Leenstra, B. de Ronde, J. P. Dehollain, J. T. Muhonen, F. E. Hudson, K. M. Itoh, A. Morello, and A. S. Dzurak, An addressable quantum dot qubit with fault-tolerant control-fidelity, Nat. Nanotechnol. 9, 981 (2014).
[5] R. H. Hadfield, Single-photon detectors for optical quantum information applications, Nat. Photon. 3, 696 (2009).
[6] L. Jiang, J. S. Hodges, J. R. Maze, P. Maurer, J. M. Taylor, D. G. Cory, P. R. Hemmer, R. L. Walsworth, A. Yacoby, A. S. Zibrov, and M. D. Lukin, Repetitive readout of a single electronic spin via quantum logic with nuclear spin ancillae, Science 326, 267 (2009).
[7] L. Robledo, H. Bernien, T. van der Sar, and R. Hanson, Spin dynamics in the optical cycle of single nitrogen-vacancy centres in diamond, New J. Phys. 13, 025013 (2011).
[8] M. Raha, S. Chen, C. M. Phenicie, S. Ourari, A. M. Dibos, and J. D. Thompson, Optical quantum nondemolition measurement of a single rare earth ion qubit, Nat. Commun. 11, 1605 (2020).
[9] J. M. Kindem, A. Ruskuc, J. G. Bartholomew, J. Rochman, Y. Q. Huan, and A. Faraon, Control and single-shot readout of an ion embedded in a nanophotonic cavity, Nature 580, 201 (2020).
[10] R. E. Evans, M. K. Bhaskar, D. D. Sukachev, C. T. Nguyen, A. Sipahigil, M. J. Burek, B. Machielse, G. H. Zhang, A. S. Zibrov, E. Bielejec, H. Park, M. Lončar, and M. D. Lukin, Photon-mediated interactions between quantum emitters in a diamond nanocavity, Science 362, 662 (2018).
[11] M. K. Bhaskar, R. Riedinger, B. Machielse, D. S. Levonian, C. T. Nguyen, E. N. Knall, H. Park, D. Englund, M. Lončar, D. D. Sukachev, and M. D. Lukin, Experimental demonstration of memory-enhanced quantum communication, Nature 580, 60 (2020).
[12] A. N. Vamivakas, C. Y. Lu, C. Matthiesen, Y. Zhao, S. Fält, A. Badolato, and M. Atatüre, Observation of spin-dependent quantum jumps via quantum dot resonance fluorescence, Nature 467, 297 (2010).
[13] A. Delteil, W.-b. Gao, P. Fallahi, J. Miguel-Sanchez, and A. Imamoğlu, Observation of quantum jumps of a single quantum dot spin using submicrosecond single-shot optical readout, Phys. Rev. Lett. 112, 116802 (2014).
[14] N. O. Antoniadis, M. R. Hogg, W. F. Stehl, A. Javadi, N. Tomm, R. Schott, S. R. Valentin, A. D. Wieck, A. Ludwig, and R. J. Warburton, Cavity-enhanced single-shot readout of a quantum dot spin within 3 nanoseconds, arXiv:2210.13870 (2022).
[15] J. Yoneda, K. Takeda, A. Noiri, T. Nakajima, S. Li, J. Kamioka, T. Kodera, and S. Tarucha, Quantum non-demolition readout of an electron spin in silicon, Nat. Commun. 11, 1144 (2020).
[16] A. Blais, R.-S. Huang, A. Wallraff, S. M. Girvin, and R. J. Schoelkopf, Cavity quantum electrodynamics for superconducting electrical circuits: An architecture for quantum computation, Phys. Rev. A 69, 062320 (2004).
[17] M. Rossi, D. Mason, J. Chen, Y. Tsaturyan, and A. Schliesser, Measurement-based quantum control of mechanical motion, Nature 563, 53 (2018).
[18] C. Heyn, A. Stemmann, T. Koppen, C. Strelow, T. Kipp, M. Grave, S. Mendach, and W. Hansen, Highly uniform and strain-free GaAs quantum dots fabricated by filling of self-assembled nanoholes, Appl. Phys. Lett. 94, 183113 (2009).
[19] P. Atkinson, E. Zallo, and O. G. Schmidt, Independent wavelength and density control of uniform GaAs/AlGaAs quantum dots grown by infilling self-assembled nanoholes, J. Appl. Phys. 112, 054303 (2012).
[20] M. Gurioli, Z. Wang, A. Rastelli, T. Kuroda, and S. Sanguinetti, Droplet epitaxy of semiconductor nanostructures for quantum photonic devices, Nat. Mater. 18, 799 (2019).
[21] G. Gillard, E. Clarke, and E. A. Chekhovich, Harnessing many-body spin environment for long coherence storage and high-fidelity single-shot qubit readout, Nat. Commun. 13, 4048 (2022).
[22] L. Zaporski, N. Shofer, J. H. Bodey, S. Manna, G. Gillard, M. H. Appel, C. Schimpf, S. F. Covre da Silva, J. Jarman, G. Delamare, G. Park, U. Haeusler, E. A. Chekhovich, A. Rastelli, D. A. Gangloff, M. Atatüre, and C. Le Gall, Ideal refocusing of an optically active spin qubit under strong hyperfine interactions, Nat. Nanotechnol. 18, 257 (2023).
[23] W. D. Knight, Nuclear magnetic resonance shift in metals, Phys. Rev. 76, 1259 (1949).
[24] B. Urbaszek, X. Marie, T. Amand, O. Krebs, P. Voisin, P. Maletinsky, A. Högele, and A. Imamoğlu, Nuclear spin physics in quantum dots: An optical investigation, Rev. Mod. Phys. 85, 79 (2013).
[25] E. A. Chekhovich, A. Ulhaq, E. Zallo, F. Ding, O. G. Schmidt, and M. S. Skolnick, Measurement of the spin temperature of optically cooled nuclei and GaAs hyperfine constants in GaAs/AlGaAs quantum dots, Nat. Mater. 16, 982 (2017).
[26] P. Millington-Hotze, H. E. Dyte, S. Manna, S. F. C. da Silva, A. Rastelli, and E. A. Chekhovich, Approaching a fully-polarized state of nuclear spins in a semiconductor quantum dot, arXiv:2302.05489 (2023).
[27] E. A. Chekhovich, K. V. Kavokin, J. Puebla, A. B. Krysa, M. Hopkinson, A. D. Andreev, A. M. Sanchez, R. Beanland, M. S. Skolnick, and A. I. Tartakovskii, Structural analysis of strained quantum dots using nuclear magnetic resonance, Nat. Nanotechnol. 7, 646 (2012).
[28] W. H. Zurek, Quantum theory of the classical: quantum jumps, Born's rule and objective classical reality via quantum Darwinism, Phil. Trans. R. Soc. A 376, 20180107 (2018).
[29] W. H. Zurek, Quantum Darwinism, Nat. Phys. 5, 181 (2009).
[30] C.-P. Yang, Y.-x. Liu, and F. Nori, Phase gate of one qubit simultaneously controlling n qubits in a cavity, Phys. Rev. A 81, 062323 (2010).
[31] Q. Zhang, Y. Guo, W. Ji, M. Wang, J. Yin, F. Kong, Y. Lin, C. Yin, F. Shi, Y. Wang, and J. Du, High-fidelity single-shot readout of single electron spin in diamond with spin-to-charge conversion, Nat. Commun. 12, 1529 (2021).
[32] P. Millington-Hotze, S. Manna, S. F. Covre da Silva, A. Rastelli, and E. A. Chekhovich, Nuclear spin diffusion in the central spin system of a GaAs/AlGaAs quantum dot, Nat. Commun. 14, 2677 (2023).
[33] T. C. Ralph, S. D. Bartlett, J. L. O'Brien, G. J. Pryde, and H. M. Wiseman, Quantum nondemolition measurements for quantum information, Phys. Rev. A 73, 012113 (2006).
[34] I. A. Merkulov, A. L. Efros, and M. Rosen, Electron spin relaxation by nuclei in semiconductor quantum dots, Phys. Rev. B 65, 205309 (2002).
[35] M. Schlosshauer, Decoherence, the measurement problem, and interpretations of quantum mechanics, Rev. Mod. Phys. 76, 1267 (2005).
[36] A. V. Khaetskii and Y. V. Nazarov, Spin-flip transitions between Zeeman sublevels in semiconductor quantum dots, Phys. Rev. B 64, 125316 (2001).
[37] G. Gillard, I. M. Griffiths, G. Ragunathan, A. Ulhaq, C. McEwan, E. Clarke, and E. A. Chekhovich, Fundamental limits of electron and nuclear spin qubit lifetimes in an isolated self-assembled quantum dot, npj Quantum Inf. 7, 43 (2021).
[38] J. H. Eberly, N. B. Narozhny, and J. J. Sanchez-Mondragon, Periodic spontaneous collapse and revival in a simple quantum model, Phys. Rev. Lett. 44, 1323 (1980).
[39] V. Hizhnyakov, Relaxation jumps of strong vibration, Phys. Rev. B 53, 13981 (1996).
[40] O. V. Misochko, M. Hase, K. Ishioka, and M. Kitajima, Observation of an amplitude collapse and revival of chirped coherent phonons in bismuth, Phys. Rev. Lett. 92, 197401 (2004).
[41] B. Young, B. Cabrera, A. Lee, and B. Dougherty, Detection of elementary particles using silicon crystal acoustic detectors with titanium transition edge phonon sensors, Nucl. Instrum. Methods Phys. Res. A 311, 195 (1992).
[42] I. Alkhatib et al. (SuperCDMS Collaboration), Light dark matter search with a high-resolution athermal phonon detector operated above ground, Phys. Rev. Lett. 127, 061801 (2021).
§ SUPPLEMENTARY INFORMATION
§ SAMPLE STRUCTURE
The sample used in this work is from the same semiconductor wafer studied previously in Refs. <cit.>. The sample is grown using molecular beam epitaxy (MBE) on a
semi-insulating GaAs (001) substrate. The layer sequence of the semiconductor structure is shown in Supplementary Fig. <ref>. The growth starts with a
layer of Al_0.95Ga_0.05As followed by a single pair of
Al_0.2Ga_0.8As and Al_0.95Ga_0.05As layers acting
as a Bragg reflector in optical experiments. Then, a 95 nm thick
layer of Al_0.15Ga_0.85As is grown, followed by a 95 nm thick layer of Al_0.15Ga_0.85As
doped with Si at a volume concentration of 1.0×10^18 cm^-3. The low Al concentration of 0.15
in the Si doped layer mitigates the issues caused by the deep DX
centers <cit.>. The n-type doped layer is followed by the
electron tunnel barrier layers: first a 5 nm thick
Al_0.15Ga_0.85As layer is grown at a reduced temperature of 560 ^∘C to suppress Si segregation, followed by a 10 nm thick
Al_0.15Ga_0.85As and then a 15 nm thick
Al_0.33Ga_0.67As layer grown at 600 ^∘C. Aluminium droplets are grown on the surface of the Al_0.33Ga_0.67As layer and are used to etch the nanoholes <cit.>. Atomic force
microscopy shows that typical nanoholes have a depth of ≈6.5 nm and are
≈70 nm in diameter <cit.>. Next, a 2.1 nm thick layer of GaAs is
grown to form QDs by infilling the nanoholes as well as to form
the quantum well (QW) layer. Thus, the maximum height of the QDs
in the growth z direction is ≈9 nm. The GaAs layer is followed by a
268 nm thick Al_0.33Ga_0.67As barrier layer. Finally, the
p-type contact layers doped with C are grown: a 65 nm thick
layer of Al_0.15Ga_0.85As with a
5×10^18 cm^-3 doping concentration, followed by a 5 nm thick layer of Al_0.15Ga_0.85As with a 9×10^18 cm^-3 concentration, and a 10 nm thick layer of GaAs with a 9×10^18 cm^-3 concentration.
The sample is processed into a p-i-n diode structure. Mesa
structures with a height of 250 nm are formed by etching away the
p-doped layers and depositing Ni(10 nm)/AuGe(150 nm)/Ni(40 nm)/Au(100 nm) on the etched areas. The sample is then annealed to
enable diffusion down to the n-doped layer to form the ohmic
back contact. The top gate contact is formed by depositing Ti(15 nm)/Au(100 nm) on to the p-type surface of the mesa areas. Quantum dot photoluminescence (PL) is excited and collected through the top of the sample. The
sample gate bias V_Gate is the bias of the p-type top
contact with respect to the grounded n-type back contact. Due to the
large thickness of the top Al_0.33Ga_0.67As layer, the tunneling of holes is suppressed, whereas tunnel
coupling to the n-type layer enables deterministic charging of
the quantum dots with electrons by changing V_Gate.
In order to resolve the quadrupolar components of the nuclear magnetic resonance (NMR) spectra, the semiconductor sample is subject to a uniaxial mechanical stress. To this end, the semiconductor wafer is first cleaved into a small piece with a rectangular surface area of 0.7 mm × 2.35 mm. The edges of the rectangular profile are aligned along the [110] and [11̅0] crystallographic directions. The thickness of the sample along the [001] growth direction is 0.35 mm. Thus, the sample is shaped as a parallelepiped. The sample is then inserted into a home-made stress cell. This is done in such a way that the two 0.7 mm × 0.35 mm surfaces of the sample are contacted to the flat titanium surfaces of the stress cell bracket. A titanium screw is then used to apply compressive stress, directed along the 2.35 mm long edge of the sample.
§ ELECTRON-NUCLEAR SPIN SYSTEM
The Hamiltonian describing the nuclear spin system includes the magnetic dipole (Zeeman) and the electric quadrupolar terms. We also consider the magnetic dipole-dipole interactions between the nuclei. The Zeeman term accounts for the coupling of the QD nuclear spins I_k to the static magnetic field B_z directed along the z axis:
ℋ_Z,N = -∑_k=1^Nħγ_k B_zÎ_z,k,
where the summation goes over all individual nuclei 1≤ k ≤ N, ħ=h/(2π) is the reduced Planck's constant, γ_k is the gyromagnetic ratio of the k-th nuclear spin and Î_k is a vector of spin operators with Cartesian components (Î_x,k,Î_y,k,Î_z,k). The result of the Zeeman term alone is a spectrum of equidistant single-spin eigenenergies -I_zħγ_kB_z. These 2I+1 states are also the eigenstates of the Î_z spin projection operator with eigenvalues I_z satisfying -I≤ I_z≤ +I.
The interaction of the nuclear electric quadrupolar moment
with the electric field gradients is described by the term (Ch. 10
in Ref. <cit.>):
ℋ_Q,N = ∑_k=1^N q_k/6 [3Î_z',k^2 - Î_k^2 + η_k(Î_x',k^2 - Î_y',k^2)],
where q_k and η_k describe the magnitude and asymmetry
of the electric field gradient tensor, whose principal axes are
x'y'z'. The strain is inhomogeneous
within the QD volume, so that q_k and η_k vary
between the individual nuclei. The axes x'y'z' are different for
each nucleus and generally do not coincide with crystallographic
axes or magnetic field direction. For the as-grown GaAs/AlGaAs QDs the quadrupolar shifts are around | q_k|/h≈20 kHz <cit.>, reaching q_k/h≈200 kHz for a small fraction of the nuclei <cit.>. In the studied structure, q_k are dominated by the extrinsic uniaxial stress. All experiments are conducted under sufficiently strong magnetic fields, where |ħγ_k B_z|≫ |q_k| and quadrupolar effects can be treated perturbatively. In this perturbative regime, the main effect of the quadrupolar shifts is the anharmonicity of the nuclear spin eigenenergies and the resulting quadrupolar NMR multiplet of 2I magnetic-dipole transitions, split by ν_Q≈ q_k/h. The I_z=±1/2 states of a half-integer nuclear spin are influenced by quadrupolar effects only in the second order. These second order shifts scale as ∝ν_Q^2/ν_N, where ν_N=γ B_z/(2π) is the nuclear spin Larmor frequency.
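For illustration, the first-order quadrupolar triplet can be reproduced numerically. The sketch below assumes η=0, z'∥ z, a ^75As gyromagnetic ratio γ/2π≈7.29 MHz/T and B_z=5.31 T; these are simplifying assumptions, not the full description of the strained dot:

import numpy as np

I = 1.5
m = np.arange(I, -I - 1, -1)          # m = +3/2, +1/2, -1/2, -3/2
Iz = np.diag(m)

nu_N = 7.2919e6 * 5.31                # 75As Larmor frequency at 5.31 T (Hz); assumed
nu_Q = 260e3                          # first-order quadrupolar shift (Hz), from the text

# Zeeman term plus first-order quadrupolar term with eta = 0 and z' || z
H = -nu_N * Iz + (nu_Q / 6) * (3 * Iz @ Iz - I * (I + 1) * np.eye(4))
E = np.diag(H)

# The 2I = 3 allowed magnetic-dipole transitions m <-> m-1 form a triplet split by nu_Q
for i in range(3):
    print(f"{m[i]:+.1f} <-> {m[i+1]:+.1f}: {abs(E[i+1] - E[i]) / 1e6:.4f} MHz")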
The nuclear spin energy spectrum is probed using optically-detected NMR spectroscopy of individual QDs. Supplementary Fig. <ref> shows the NMR spectra of ^75As (a) and ^69Ga (b) spin-3/2 nuclei measured on an empty (0e) QD. The insets show the corresponding diagrams of the nuclear spin energy levels and the allowed magnetic dipole NMR transitions. As expected for I=3/2, each NMR spectrum is a triplet. The first order quadrupolar shifts are ν_Q≈+260 kHz and ν_Q≈-125 kHz for ^75As and ^69Ga, respectively. The signs of ν_Q are opposite due to the opposite signs of the gradient elastic tensors of the group-III and group-V elements <cit.>. The quadrupolar shifts are the witnesses of the strain induced by the external stress. The strain is estimated to be ϵ_b≈0.0025.
Direct interaction between the nuclei is described by the dipole-dipole Hamiltonian:
ℋ_DD = ∑_1≤ j<k≤ N b_j,k(3Î_z,jÎ_z,k - Î_j·Î_k),
b_j,k = (μ_0ħ^2/4π)(γ_jγ_k/2)(1-3cos^2θ_j,k)/r_j,k^3.
Here, μ_0=4π× 10^-7 N A^-2 is the magnetic constant, and r_j,k denotes the length of the vector connecting the two spins j and k, which forms an angle θ_j,k with the z axis. The Hamiltonian of Supplementary Eq. (<ref>) has been truncated to eliminate all spin non-conserving terms; this is justified for static magnetic fields exceeding ≳1 mT. The typical magnitude of the interaction constants for nearby nuclei in GaAs is max(|b_j,k|)/h≈100 Hz. Consequently, the typical timescales of the processes driven by the many-body dipole-dipole interactions are on the order of 1 ms. This is much slower than the duration of the QND measurement, T_RF≈10 μs. As a result, nuclear-nuclear interactions play only a minor role in the context of the current work.
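As a numerical sanity check of this scale (a sketch; the As-As distance assumes the GaAs lattice constant of 5.65 Å, with nearest same-sublattice neighbours at a/√2, and larger-γ isotopes such as ^71Ga give proportionally larger couplings):

import numpy as np

mu0 = 4e-7 * np.pi                 # vacuum permeability (N/A^2)
hbar = 1.054571817e-34             # reduced Planck constant (J s)
h = 6.62607015e-34                 # Planck constant (J s)
gamma_As = 2 * np.pi * 7.2919e6    # 75As gyromagnetic ratio (rad/s/T)
r = 5.6533e-10 / np.sqrt(2)        # nearest As-As distance in GaAs (m); assumed geometry

def b_over_h(gj, gk, r, theta):
    """Secular dipole-dipole coupling constant b_jk/h in Hz (Supplementary Eq.)."""
    return mu0 * hbar**2 / (4 * np.pi) * gj * gk / 2 \
           * (1 - 3 * np.cos(theta)**2) / r**3 / h

print(f"|b|/h at theta=0:    {abs(b_over_h(gamma_As, gamma_As, r, 0.0)):.1f} Hz")
print(f"|b|/h at theta=pi/2: {abs(b_over_h(gamma_As, gamma_As, r, np.pi/2)):.1f} Hz")

This yields couplings of a few tens of Hz for As-As pairs, consistent with the ≈100 Hz order of magnitude quoted above once the larger gyromagnetic ratios of the Ga isotopes are taken into account.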
The interaction of the conduction band electron spin s with
the ensemble of the QD nuclear spins is dominated by the contact
(Fermi) hyperfine interaction, with the following Hamiltonian:
ℋ_hf=∑_k=1^Na_k(ŝ_xÎ_x,k+ŝ_yÎ_y,k+ŝ_zÎ_z,k),
where the hyperfine constant of an individual nucleus k is
a_k=A^(k)|ψ(r_k)|^2v. Unlike a_k, the hyperfine constant A^(k) is a parameter describing only the
material and the isotope type to which nucleus k belongs,
|ψ(r_k)|^2 is the density of the electron envelope
wavefunction at the nuclear site r_k of the
crystal lattice, and v is the crystal volume per one
cation or one anion.
The definitions of the hyperfine constants differ between sources. With the definition adopted here, a fully polarized isotope with spin I, hyperfine constant A and 100% abundance (e.g. ^75As) would shift the energies of the electron spin states s_z=±1/2 by ± AI/2, irrespective of the shape of |ψ(r)|^2. With this definition, the typical values in GaAs are A≈50 μeV (Ref. <cit.>). The frequency Knight shift of an individual nucleus coupled to a spin-polarized electron s_z=±1/2 is ± a_k/(2h). The typical Knight shift can be estimated from the frequency-detuned single-shot NMR spectra of Fig. 2(b) of the main text. We find a_k/h≈140 kHz, or a_k≈0.58 neV, for the ^69Ga isotope. Taking the ratio A/a_k, we roughly estimate the effective number of nuclei coupled to the QD electron spin to be N≈10^5.
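As a sketch of this estimate, using only the constants quoted above:

h = 6.62607015e-34                    # Planck constant (J s)
eV = 1.602176634e-19                  # J per eV
A = 50e-6 * eV                        # total hyperfine constant, ~50 ueV, in J
a_k = h * 140e3                       # single-nucleus Knight shift, h * 140 kHz, in J

print(f"a_k = {a_k / eV * 1e9:.2f} neV")   # ~0.58 neV
print(f"N ~ A / a_k = {A / a_k:.2e}")      # ~1e5 nuclei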
The hyperfine interaction of the valence band holes is an order of magnitude smaller <cit.> and can be ignored in the context of this work.
§ EXPERIMENTAL DETAILS AND ADDITIONAL RESULTS
The sample is placed in a liquid helium bath cryostat. A
superconducting coil is used to apply magnetic field up to
B_z=8 T. The field is parallel to the sample growth
direction [001] and the optical axis z (Faraday geometry). The field and the optical axis are orthogonal to the direction of the applied mechanical stress. We use a confocal microscopy configuration. An aspheric lens with a focal
distance of 1.45 mm and NA = 0.58 is used as an objective for
optical excitation of the QD and for photoluminescence (PL)
collection. The excitation laser is focused into a spot
with a diameter of ≈1 μm. The collected PL is dispersed
in a two-stage grating spectrometer, each stage with a 1 m
focal length, followed by a pair of achromatic lens doublets, which transfers the spectral image onto a charge-coupled device (CCD) photo-detector with a magnification of 3.75. The changes in the spectral splitting Δ E_PL of either a neutral exciton X^0 or a negatively
charged trion X^-, derived from the PL spectra, are used to
measure the hyperfine shifts E_hf proportional to the
nuclear spin polarization degree.
Supplementary Fig. <ref> is a detailed version of Fig. 1(e) of the main text and shows the timing of a pulsed single-shot NMR measurement. In what follows we describe the individual steps of the timing sequence.
§.§ Optical pumping of the quantum dot nuclear spins
Optical pumping of the QD nuclear spin polarization (labelled Pump in Supplementary Fig. <ref>) is achieved using the emission of a tunable single-mode circularly polarized diode laser. Optical dynamical nuclear spin polarization is a well known process, that has been observed in many types of QDs <cit.>, see Ref. <cit.> for a review. In the context of the present study, we simply rely on the fact that optical pumping is a reliable tool for achieving nuclear spin polarizations exceeding 50% on a timescales of a few seconds. The physics of nuclear spin pumping in the semiconductor wafer studied here are discussed in Ref. <cit.>. In brief, dynamic nuclear polarization is a three-stage cyclic process. At the first stage a spin polarized electron is created optically. This is made possible by the selection rules, which allow conversion of the circularly polarized photons into spin-polarized electron-hole pairs in group III-V semiconductors. At the second stage, the electron exchanges its spin with one of the nuclei through the flip-flop term of the electron-nuclear hyperfine Hamiltonian (Supplementary Eq. <ref>). The third stage is the electron-hole optical recombination or tunnel escape, which removes the flipped electron. This final step is required in order to let the QD accept new spin-polarized electrons and continue polarizing the ensemble of N≈ 10^5 nuclear spins of the QD. During the optical pump the sample gate is set to a large reverse bias, typically V_Gate=-2 V. The pump power is ≈1 mW, which is three orders of magnitude higher than the ground-state PL saturation power. The photon energy of the pump laser is typically ≈5-10 meV above the X^0 PL energy. The pump duration is T_Pump=3.5 s.
§.§ Nuclear magnetic resonance
The oscillating magnetic field B_x⊥ z, required to perform NMR, is produced by a copper wire coil placed at a distance of ≈0.5 mm
from the QD sample. The coil is made of 10 turns of a 0.1 mm diameter enameled copper wire wound on a ≈0.4 mm diameter spool in 5 layers, with 2 turns in each layer. The coil is driven by a class-AB RF amplifier (Tomco BT01000-AlphaSA rated up to 1000 W) which is fed by the output of an arbitrary waveform generator Keysight M8190.
Supplementary Fig. <ref> shows the timing diagram of a two-pulse experiment, which we now discuss in more detail. We consider the case of σ^+ polarized optical pump, which produces a Boltzmann distribution of the nuclear spin z projections <cit.>, populating predominantly the I_z=-3/2 nuclear spin state, while leaving the I_z=+3/2 state the least populated. The electron to nuclear spin state transfer is performed using the -3/2↔-1/2 NMR transition. In order to increase the NMR signal, we maximize the initial population difference of the I_z=-3/2,-1/2 nuclear states. This is achieved through state population transfer <cit.>. A pair of RF pulses performing π rotation (i.e. inversion) is applied to each of the two isotopes used in the experiment. First, the +1/2↔+3/2 transition is driven to exchange the populations of the I_z=+1/2,+3/2 states of the first isotope (labelled “Iso1” in Supplementary Fig. <ref>), making I_z=+1/2 the least populated state. The second π pulse applied to the -1/2↔+1/2 transition transfers the smallest population to I_z=-1/2. The same population-transfer sequence is applied to the second isotope (labelled “Iso2” in Supplementary Fig. <ref>). The state transfer is performed under reverse bias, which keeps the QD empty of all charges (0e). The absence of charges eliminates the Knight shifts, thus maximizing the fidelity of the state transfer performed by the NMR pulses.
Following the NMR state transfer, the sample gate bias is increased in order to load a single electron from the Fermi reservoir into the QD (1e charge state). The actual electron tunneling takes place on a submicrosecond timescale. However, the electron is then left to equilibrate for a time interval T_Load. We typically use T_Load∈[30, 90] ms. This is much longer than the measured electron spin lifetimes T_1,e, ensuring that any transient effects decay before the electron spin state is measured.
Following the electron equilibration, a frequency-detuned RF π pulse is applied to the -3/2↔-1/2 transition of the first isotope (“Iso1”). The detuning is chosen to be close to the weighted average nuclear hyperfine (Knight) shift and is typically ≈70 kHz for ^69Ga in the studied QDs. This pulse performs a QND measurement of the electron spin, storing the outcome in the long-lived longitudinal nuclear spin polarization of “Iso1”. The electron is then left for a time T_Evol to evolve freely without any optical or RF excitation. This is followed by the second frequency-detuned RF π pulse applied to “Iso2” in order to perform the second QND measurement of the electron spin. The pulses have cosine (near-Gaussian) envelopes and a total duration T_RF, counted between the zero-amplitude points at the start and the end of the pulse (the corresponding full width at half maximum is T_RF/2). Depending on the measurement, T_RF is chosen to be between 10 and 40 μs.
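As a quick numerical check of this envelope convention (a sketch; the peak Rabi frequency for a π rotation is not quoted in the text and is derived here for the stated cosine envelope only):

import numpy as np

T_RF = 20e-6                                    # total pulse duration (s)
t = np.linspace(0, T_RF, 20001)
env = 0.5 * (1 - np.cos(2 * np.pi * t / T_RF))  # cosine envelope, zero at both ends

area = np.trapz(env, t)                         # = T_RF/2 for this envelope
above = t[env >= 0.5]
fwhm = above[-1] - above[0]                     # = T_RF/2, as stated in the text

# A pi rotation requires 2*pi * integral(f_Rabi dt) = pi, so the peak Rabi
# frequency is f_peak = 0.5/area = 1/T_RF for the cosine envelope.
f_peak = 0.5 / area
print(f"area = {area / T_RF:.3f} * T_RF, FWHM = {fwhm / T_RF:.3f} * T_RF, "
      f"f_peak = {f_peak / 1e3:.1f} kHz")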
The results of the QND measurements are encoded in the I_z=-3/2,-1/2 nuclear spin subspaces where either the I_z=-3/2 or the I_z=-1/2 states are more populated, depending on the electron spin projection state s_z. At this stage, the nuclear spin polarization can already be retrieved optically. However, it is beneficial to multiply the NMR signal, by exploiting the entire I=3/2 Hilbert space. To this end, the sample is biased back into an empty-QD state (0e) and the reverse population transfer is performed on both isotopes. Each reverse population transfer consists of a π pulse on -1/2↔+1/2, followed by a π pulse on the +1/2↔+3/2 NMR transition. This provides a factor of ≈3 amplification in the NMR signal, which can be understood as follows. If the QND frequency-detuned pulse leaves the I_z=-3/2 state as the most populated, then the reverse population transfer has only a small effect on the nuclear spin polarization. In the opposite case, if QND leaves the I_z=-1/2 state as the most populated, the reverse population transfer makes the I_z=+3/2 state the most populated. Thus, instead of the I_z=-3/2,-1/2 subspace, the reverse population transfer encodes the NMR signal in the I_z=-3/2,+3/2 subspace, which approximately triples the optically-detected hyperfine shift Δ E_hf.
In some cases it is more convenient to perform QND using the I_z=+1/2,+3/2 nuclear spin subspaces, or a combination, where the I_z=+1/2,+3/2 states are used on one of the isotopes, while the I_z=-3/2,-1/2 states are used for the other isotope. In such cases the initial and reverse population transfer sequences are altered to match the chosen transitions, however, the principle described above remains the same. For the two-pulse experiments shown in Fig. 3(e) of the main text, the first RF pulse is detuned to the higher frequencies from the -3/2↔-1/2 high-frequency satellite of ^75As, while the second pulse is detuned to the lower frequencies from the -3/2↔-1/2 low-frequency satellite of ^69Ga. In those experiments where only one RF pulse is used on a single isotope, the timing diagram is the same as in Supplementary Fig. <ref>, but omitting the pulses for the second isotope.
The number of the nuclei used in the QND measurement can be estimated as follows. The root mean square number of nuclei N ≈ 6.5 × 10^4 has been estimated previously from the electron spin dephasing dynamics in the same QD sample <cit.>. Half of these nuclei are arsenic. Moreover, due to the incomplete polarization of the nuclei, only ≈80% are in the two-level subspace (such as I_z=-3/2,-1/2) used for the electron spin readout. Therefore, when ^75As is used for the readout, the number of active nuclei is ≈1/2× 0.8 × 6.5 × 10^4 ≈ 2.6 × 10^4. When the readout is carried out via ^69Ga, we need to take into account the ≈60% natural abundance of the isotope, which leads to ≈ 1.6 × 10^4 nuclei actively used in the electron spin QND measurement.
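The same bookkeeping can be written out explicitly (a short sketch of the estimate above; the 60% figure is the approximate natural abundance of ^69Ga):

N_rms = 6.5e4          # r.m.s. number of nuclei from electron spin dephasing
f_subspace = 0.8       # fraction within the two-level readout subspace
abund_69Ga = 0.60      # approximate natural abundance of 69Ga

n_As = 0.5 * f_subspace * N_rms                # ~2.6e4 active 75As nuclei
n_Ga = 0.5 * f_subspace * abund_69Ga * N_rms   # ~1.6e4 active 69Ga nuclei
print(f"75As: {n_As:.2e}, 69Ga: {n_Ga:.2e}")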
All NMR measurements are differential. In addition to the actual single-shot measurements with the timing shown in Supplementary Fig. <ref>, we periodically collect the reference PL probe spectra. These reference probe spectra are measured with exactly the same RF pulse sequence, including population transfers, but without the detuned RF pulses. The difference of the spectral splittings from the two measurements yields the differential NMR signal Δ E_hf.
§.§ Optical probing of the quantum dot nuclear spins
For optical probing of the nuclear spin polarization we use a diode laser emitting at 760 nm. Sample forward bias, typically V_Gate=+0.9 V, and the probe power are chosen to maximize (saturate) the PL intensity of the ground-state neutral exciton X^0. Fig. 2(a) of the main text shows neutral exciton (X^0) PL probe spectra measured at B_z=5.31 T following optical pumping with a σ^- circularly polarized laser. The PL arises from recombination of the electron-hole pairs in a QD (neutral exciton X^0). The PL spectrum is a doublet, where each component corresponds to the optically-excited electron in a spin-up (s_z=+1/2) or spin-down (s_z=-1/2) projection state. Consequently, the splitting Δ E_PL of the PL spectral doublet is sensitive to polarization of the nuclei along the z axis. The variation of this spectral splitting reveals the variation of the hyperfine shift Δ E_hf. These shifts are used to monitor the average QD nuclear spin polarization in NMR experiments. It is also possible to probe the nuclear spin polarization using PL of a negatively charged trion X^-. The charge state for optical probing is selected for each individual QD and magnetic field strength. This choice is governed by the linewidths and the brightness of PL.
Illumination with a probe laser inevitably acts back on the nuclear spin polarization. Each optically excited electron has a finite probability to flip one of N≈10^5 nuclear spins. Eventually, the nuclear spin polarization reaches its steady-state, governed only by the power, wavelength and polarization of the probe laser. Therefore, the duration of the probe T_Probe must be short enough to retain sufficient information about the nuclear polarization at the start of the probe. On the other hand, at the optical probing stage we are only interested in the average nuclear spin polarization, i.e. the arithmetic sum of all the copies transferred by the detuned RF pulse from the electron spin qubit into the nuclear spin z projections (see Supplementary Fig. <ref>). In other words, optical probing measures the z projection of a large total spin ≈ N I formed by thousands of nuclei. Due to the large N, this total spin is essentially a classical variable. Consequently, the QD can be excited optically many times, and a large number of photons can be collected before the decay of the average nuclear spin polarization causes any significant distortion in the measurement outcome. In order to determine the optimal T_Probe we perform calibration measurements, with a typical result shown in Supplementary Fig. <ref>. In this experiment the QD is first pumped with a σ^+ or σ^- polarized laser in order to create large initial nuclear polarization. A probe laser pulse is then applied. The PL spectral splitting Δ E_PL is measured at the end of this probe. It can be seen that the probe induces decay of the nuclear spin polarization on a timescale of a few hundred milliseconds. The probe time T_Probe used in the single-shot NMR experiments is chosen to ensure that any distortion of the measured Δ E_hf is small enough to resolve the Δ E_hf values corresponding to the opposite electron spin states. For example, for the data shown in Supplementary Fig. <ref> we choose T_Probe=0.05 s.
We note that due to the weak backaction it is possible in principle to apply multiple optical probe pulses. This could be useful for isotope-selective retrieval of the NMR signals. For example, one can use a (Probe - RF π pulse - Probe) sequence to measure selectively the NMR transition chosen by the RF pulse. This could be beneficial in situations such as those shown in Supplementary Figs. <ref>(d)–<ref>(f), where the “no-flip” NMR signals arising from s_z=±1/2 overlap. However, in the present work we rely on a simple implementation with one probe pulse, which is sufficient to demonstrate the concept.
§ READOUT FIDELITY MODELLING
Here we discuss how the fidelity of the spin readout is derived from the histograms, such as those shown in Figs. 3(b) and 3(c) of the main text. To do so, we construct the model probability distribution of the single-shot NMR signal amplitudes Δ E_hf. When constructing the probability density, we take into account the finite probability for the electron spin to flip during the RF pulse duration T_RF. The flips are modelled as a random telegraph process, where the flips are instantaneous (infinitely fast) and uncorrelated. In principle, the electron spin can flip m times during T_RF, with the probability of such events scaling as ∝ (T_RF/T_1,e)^m, where T_1,e is the electron spin lifetime. In all our experiments T_RF≪ T_1,e, so we only account for the terms up to first order, m≤1 (in other words, we ignore the possibility for the electron to flip more than once during T_RF).
For a telegraph process, the probability of zero flips occurring during any given time interval T_RF is:
p_m=0 = exp[-(1-ρ_e^2) T_RF/(2 T_1,e)],
where -1≤ρ_e≤ 1 is the equilibrium electron spin polarization degree. For the probability to have exactly one electron spin flip we write:
p_m=1 = 1-p_m=0,
where we have used the simplifying assumption that two or more flips (m≥2) are not possible.
In the case of zero flips (m=0), the electron is either in a spin up (s_z=+1/2) or spin down (s_z=-1/2) state throughout the entire RF measurement pulse. In the absence of noise, these two spin states will result in two discrete NMR readout values: Δ E_hf^+ and Δ E_hf^-, respectively. The relevant probability distributions are described by the delta functions centered at Δ E_hf^+ and Δ E_hf^- for s_z=+1/2 and -1/2. The total probabilities of detecting s_z=+1/2 and -1/2 in equilibrium are (1+ρ_e)p_m=0/2 and (1-ρ_e)p_m=0/2, respectively.
Next, we consider the m=1 case, where the electron flips at a timepoint t_Flip. The distribution of t_Flip is uniform in the interval [0,T_RF]. In order to evaluate the effect of the electron spin flip on the measured NMR signal we use simple geometrical calculations. We assume that all nuclear spins are exactly on resonance (completely out of resonance) with the RF pulse when the electron is in the s_z=+1/2 (s_z=-1/2) spin state. Consider the case where the electron is in the s_z=+1/2 state at the start of the RF pulse. Then the nuclei will be rotated from their initial I_z=+1/2 states towards the inverted I_z=-1/2 states. However, this rotation is interrupted at t_Flip. We then calculate the change in I_z arising from such an interrupted rotation in case of an RF pulse with a cosine shaped envelope. Linear rescaling of this change in I_z yields the NMR signal as a function of t_Flip:
Δ E_hf(t_Flip) = Δ E_hf^- + (Δ E_hf^+ - Δ E_hf^-)·(1/2){1 - cos[(1/2)sin(2π t_Flip/T_RF) - π t_Flip/T_RF]}.
This function is a scaled sigmoid, and, given that t_Flip is distributed uniformly, its appropriately scaled inverse is a cumulative distribution function (CDF) of the single-shot NMR signals Δ E_hf detected in the case of exactly one electron flip (m=1). There is no analytical inverse. The numerically evaluated CDF is shown in Supplementary Fig. <ref>. Its derivative is the probability density function (PDF) and also has to be evaluated numerically. Some properties of this PDF can be noted from Supplementary Fig. <ref>. Namely, it is approximately a sum of two sharp modes at Δ E_hf^- and Δ E_hf^+, and a relatively uniform background in between these modes. This shape can be understood qualitatively as originating from the cosine envelope of the RF pulse. If the electron flips near the start or the end of the pulse, where the RF amplitude is ≈0, the nuclei either have not rotated much, or have already been flipped, respectively. Thus, the flips at the start and the end of the RF pulse give rise to NMR signals very close to Δ E_hf^- and Δ E_hf^+, respectively. The case where the electron is in the s_z=-1/2 state at the start of the RF pulse gives exactly the same distribution of the NMR signals Δ E_hf.
Summarising, we have the following three contributions to the distribution of the NMR signals Δ E_hf:
* (i) The delta-peak mode at Δ E_hf^+ with probability [(1+ρ_e)/2]·exp[-(1-ρ_e^2) T_RF/(2 T_1,e)], corresponding to the s_z=+1/2 electron state with no flips.
* (ii) The delta-peak mode at Δ E_hf^- with probability [(1-ρ_e)/2]·exp[-(1-ρ_e^2) T_RF/(2 T_1,e)], corresponding to the s_z=-1/2 electron state with no flips.
* (iii) The bimodal distribution given by Supplementary Eq. <ref> with probability 1-exp[-(1-ρ_e^2) T_RF/(2 T_1,e)], corresponding to the case where the electron spin flips once during the RF pulse.
These three contributions, weighted by their relevant probabilities, are added together to obtain the complete ideal PDF. The final step is to take into account the non-ideal nature of the optical readout of the single-shot NMR signals. The readout noise is modelled by assuming that the detected Δ E_hf values have a Gaussian distribution with a full width at half maximum w and centred on the true Δ E_hf. Thus, we convolve the complete ideal PDF with a Gaussian PDF. This convolved PDF is then fitted to the experimental single-shot NMR data, such as shown in the histograms of Figs. 3(b) and 3(c) of the main text. The best fit is found by maximizing the likelihood estimator, with Δ E_hf^-, Δ E_hf^+, ρ_e, T_1,e and w used as fitting parameters.
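For illustration, the model PDF can also be assembled by Monte-Carlo sampling instead of explicit convolution. The sketch below is ours; the mode positions, noise width and lifetime are assumptions close to the values quoted in the text, not the fitted parameters themselves:

import numpy as np

rng = np.random.default_rng(0)
n = 200_000                          # number of simulated single shots

dE_m, dE_p = 8.0, 21.0               # histogram mode positions (ueV); assumed
rho_e = 0.0                          # equilibrium electron polarization
T_RF, T1e = 10e-6, 0.58e-3           # pulse length and electron lifetime (s); assumed
w = 2.5                              # optical readout noise FWHM (ueV); assumed

p0 = np.exp(-(1 - rho_e**2) * T_RF / (2 * T1e))   # no-flip probability

# m = 0: delta peaks at dE_m / dE_p with weights (1 -/+ rho_e)/2
no_flip = rng.choice([dE_m, dE_p], size=n,
                     p=[(1 - rho_e) / 2, (1 + rho_e) / 2])

# m = 1: flip at a uniform t_Flip, mapped through the sigmoid of Supplementary Eq.
t = rng.uniform(0, T_RF, n)
phase = 0.5 * np.sin(2 * np.pi * t / T_RF) - np.pi * t / T_RF
one_flip = dE_m + (dE_p - dE_m) * 0.5 * (1 - np.cos(phase))

dE = np.where(rng.random(n) < p0, no_flip, one_flip)
dE += rng.normal(0.0, w / 2.355, n)               # Gaussian optical noise, FWHM -> sigma

# Threshold readout and the flip-limited fidelity estimate
threshold = (dE_m + dE_p) / 2
F = 1 - (1 - p0) / 2
print(f"p0 = {p0:.4f}, F ~ {F:.4f}")
hist, edges = np.histogram(dE, bins=120, density=True)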
The best fits are shown by the solid lines in Figs. 3(b) and 3(c) of the main text. The best fit Gaussian FWHM is w≈2.4-2.7 μeV, which is a very good match to w≈2.4 μeV found in the reference measurement on a neutral QD [0e, Fig. 3(a) of the main text] where the data is fitted with a single Gaussian. This agreement confirms that optical readout noise is the main source of the histogram mode broadening and validates the model used for the 1e single-shot NMR. The fitted equilibrium electron spin polarization is small |ρ_e|≤0.1, owing to the small electron g-factor in the studied QDs <cit.>. The best fit values for the electron spin lifetimes are T_1,e≈1.7 ms and T_1,e≈0.58 ms at B_z=1.6 T and 5.3 T, respectively. These values are somewhat smaller than the more accurate measurements T_1,e≈8.7 ms and T_1,e≈0.71 ms obtained at the same magnetic fields via direct measurement of the electron spin relaxation. The T_1,e parameter used in the model probability distribution essentially encodes the qubit readout infidelity 1-F. The underestimated fitted values of T_1,e probably mean that the true qubit readout fidelities are even closer to unity than derived from fitting.
Once the fitting parameters are obtained, the qubit readout criterion is defined by setting the threshold value in the middle of the two modes, at (Δ E_hf^- + Δ E_hf^+)/2. Any single-shot NMR signal Δ E_hf below (above) this threshold is interpreted as s_z=-1/2 (s_z=+1/2). The readout fidelity F is defined as the average probability of detecting the true s_z correctly. We have F = p_{m=0} p_Opt + p_{m=1}/2, where p_Opt denotes the probability that the optical readout noise does not push the measured Δ E_hf across the detection threshold. The two modes in the measured histograms are well resolved, especially at high magnetic field [Fig. 3(c) of the main text], so that p_Opt≈1. Therefore, the fidelity is F = p_{m=0} p_Opt + p_{m=1}/2 ≈ p_{m=0} + p_{m=1}/2 = 1 - p_{m=1}/2, dominated by the probability that the electron flips during the readout RF pulse. The factor of 1/2 multiplying p_{m=1} accounts for the fact that, even when the electron spin is flipped at a random time, there is still a 50% probability that the measured Δ E_hf stays on the correct side of the detection threshold, so that the electron spin projection s_z is measured correctly. We find F≈0.9985 at both B_z=1.6 T and 5.3 T. We note that the same value of F is found despite the longer electron spin lifetime at B_z=1.6 T. This is likely explained by the longer measurement pulse T_RF≈20 μs, as opposed to T_RF≈10 μs used at B_z=5.3 T, and by the smaller separation of the histogram modes, which means that p_Opt is not as close to unity as it is at high magnetic field.
§ ADDITIONAL DATA
Supplementary Figs. <ref>(a)–<ref>(c) replicate Figs. 3(d)–3(e) of the main text, where the results are shown for an experiment with two RF pulses.
The first pulse, applied to ^75As nuclei, records the initial state of the electron spin, while the second pulse, on ^69Ga, stores the state after the interpulse free-evolution delay T_Evol. The optically-measured Δ E_hf is the total NMR signal produced by the two pulses. Supplementary Fig. <ref>(b) shows a two-dimensional histogram of the single-shot NMR signals Δ E_hf measured at different T_Evol. A cross-section at short T_Evol≈1 μs is shown in Supplementary Fig. <ref>(c), while the result for a long T_Evol≈30 ms is shown in Supplementary Fig. <ref>(a). The relative weights of the “no-flip” and “spin-flip” modes reveal the probability for the electron spin to relax. We model this relaxation probability by an exponential function of T_Evol to derive the spin lifetime T_1,e≈0.58 ms for this experiment, conducted at B_z=7 T.
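The lifetime extraction amounts to a one-parameter exponential fit. A sketch with invented flip fractions is shown below (the measured values are read off the histogram mode weights); the saturation at 1/2 assumes negligible equilibrium electron polarization, consistent with the small fitted |ρ_e|.

```python
import numpy as np
from scipy.optimize import curve_fit

# Telegraph-process flip probability with negligible equilibrium polarization.
def flip_probability(t_evol, T1e):
    return 0.5 * (1.0 - np.exp(-t_evol / T1e))

t_evol = np.array([1e-6, 1e-4, 5e-4, 2e-3, 30e-3])   # delays (s), illustrative
p_flip = np.array([0.00, 0.08, 0.29, 0.48, 0.50])    # mode weights, illustrative

popt, _ = curve_fit(flip_probability, t_evol, p_flip, p0=[1e-3])
print(f"T_1,e = {popt[0] * 1e3:.2f} ms")             # ~0.58 ms for these inputs
```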
Supplementary Figs. <ref>(d)–<ref>(f) show additional results from the same two-pulse experiment, but conducted at a reduced magnetic field of B_z=2.4 T. In this dataset, the two “no-flip” outcomes merge into one mode at Δ E_hf≈25 μeV. This overlap is due to a slight nonlinearity in the dependence of the optically-probed hyperfine shift Δ E_hf on the QD nuclear spin polarization. Such nonlinearity is a result of the well-known feedback effect occurring in the electron-nuclear spin system under optical pumping <cit.>. Apart from that, the results in Supplementary Figs. <ref>(d)–<ref>(f) qualitatively match the results in Supplementary Figs. <ref>(a)–<ref>(c), with the gradual emergence of the “spin-flip” modes at increasing T_Evol. However, at B_z=2.4 T the electron spin lifetime is significantly longer, found to be T_1,e≈5.2 ms from the exponential model fitting. The inverse dependence of T_1,e on the applied magnetic field is a clear indication that the electron spin relaxation is dominated by acoustic phonons <cit.>.
The readout of the electron spin via nuclear spin environment is possible in a wide range of magnetic fields, as demonstrated in Figs. 2(b) and 2(c) of the main text, where similarly high fidelities are achieved at B_z=1.6 and B_z=5.3 T. At high magnetic fields the readout fidelity is fundamentally limited by the shortening of the electron spin lifetime T_1,e. In the experiments, we have verified our readout techniques for magnetic fields up to B_z=7 T [Supplementary Figs. <ref>(a)–<ref>(c)].
At low magnetic fields, the readout is fundamentally limited by the backaction, as discussed in more detail in the next section. Backaction becomes particularly strong if the hyperfine shift ≈ Na of the nuclei used for the measurement (here N is the number of polarized nuclei rotated by the RF pulse, rather than the total number of nuclei in a QD) is comparable to or larger than the electron Zeeman splitting hν_e,0. Under these conditions, the electron spin energy splitting hν_e, which is the sum of hν_e,0 and the hyperfine shift, can become very small or even zero at some point during the RF pulse. Then, electron-nuclear spin flip-flops become energetically allowed, disrupting the electron spin qubit state. Such backaction can be remedied by using a smaller number of nuclei in the measurement, though at the cost of a reduced NMR signal Δ E_hf. In our experiments, electron spin readout has been verified down to B_z=0.98 T. At this magnetic field the backaction is not yet the limiting factor. Instead, the limitation comes from the reduction of the optically-pumped nuclear polarization at small magnetic fields. This reduction leads to smaller NMR signals Δ E_hf, and to less resolved spin-up and spin-down modes in histograms such as those shown in Figs. 2(b) and 2(c) of the main text. However, this limitation is technical rather than fundamental, since the PL collection efficiency in our setup can still be improved by one or two orders of magnitude, e.g. by using a solid-immersion lens (SIL) <cit.>. With better photon collection, the statistical noise in Δ E_hf can be reduced. Alternatively, a more accurate measurement of Δ E_hf can be achieved through resonance fluorescence <cit.>, rather than PL. We therefore expect that our readout method should be applicable in GaAs QDs at least down to a few hundreds of mT.
§ NUMERICAL SIMULATION OF THE ELECTRON-NUCLEAR SPIN EVOLUTION
We perform exact numerical simulations on a system where the central electron spin s is coupled to an ensemble of N nuclei via the contact hyperfine interaction (Supplementary Eq. <ref>):
ℋ_hf=∑_k=1^N a_k(ŝ_x Î_x,k+ŝ_y Î_y,k+ŝ_z Î_z,k).
The Zeeman terms are:
ℋ_Z,e = hν_e,0ŝ_z,
ℋ_Z,N =-h∑_k=1^N ν_N,kÎ_z,k,
where the nuclear Zeeman term of Supplementary Eq. <ref> is rewritten in terms of the nuclear Larmor frequencies ν_N,k. The bare electron Larmor frequency is ν_e,0=μ_B g_e B_z/h, where μ_B is the Bohr magneton and g_e is the electron g-factor. For completeness, we include the nuclear-nuclear dipolar interaction (Supplementary Eq. <ref>). The term that describes the time-dependent radiofrequency (RF) field ν_1(t) acting on the nuclei is given by:
ℋ_RF,N =-h∑_kν_1(t) Î_x,k,
where the summation goes over only those nuclei that belong to the isotope that is resonant with the RF.
Direct numerical modelling is carried out for N ≤ 12 nuclei with spin I=1/2. We simplify the problem by assuming uniform nuclear Zeeman frequencies ν_N,k=ν_N and hyperfine constants a_k=a. The evolution of the system is simulated through numerical propagation of the Schrödinger equation from an initial wavefunction ψ_Init. The computation is carried out using the software package Wolfram Mathematica 13.2. We choose ψ_Init as a product state of the electron and the nuclear spins, which means that the spins are initially not entangled. Moreover, the nuclear spin ensemble is initialized into a product of identical single-nucleus states. Next, we describe the numerical simulations under different settings.
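While the published computation used Wolfram Mathematica, the model itself is compact enough to restate as a Python sketch. The helper below builds the central-spin Hamiltonian of the Supplementary equations above (uniform couplings; nuclear-nuclear dipolar terms omitted for brevity); all operators are dense matrices, so it is practical only for small N.

```python
import numpy as np

# spin-1/2 operators (hbar = 1); H is expressed as H/h, i.e. in frequency units
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def embed(op, site, n_sites):
    """Single-spin operator at position `site` (site 0 is the electron)."""
    ops = [id2] * n_sites
    ops[site] = op
    out = np.array([[1.0 + 0j]])
    for o in ops:
        out = np.kron(out, o)
    return out

def central_spin_hamiltonian(N, a, nu_e0, nu_N):
    """H/h for one electron coupled to N spin-1/2 nuclei (all rates in Hz)."""
    n_sites = N + 1
    H = nu_e0 * embed(sz, 0, n_sites)                  # electron Zeeman
    for k in range(1, n_sites):
        H -= nu_N * embed(sz, k, n_sites)              # nuclear Zeeman
        for s_op, i_op in ((sx, sx), (sy, sy), (sz, sz)):
            H += a * embed(s_op, 0, n_sites) @ embed(i_op, k, n_sites)
    return H
```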
§.§ Measurement contrast and backaction on the qubit
First, we model the QND measurement process for the case where the electron is initially in the measurement (energy) basis eigenstate. The nuclei are initially in a fully-polarized state with I_z,k=+1/2 for all k. An RF pulse with a total duration of T_RF is applied. In order to match the experiments, we use a cosine amplitude envelope ν_1(t)∝ 1 - cos(2π t/T_RF), where the proportionality factor is chosen to produce a π rotation (inversion) of the nuclei when the RF frequency is in resonance with the nuclei. The RF frequency is ν_N-a/(2h), detuned from the bare NMR frequency ν_N. For an electron in the s_z=+1/2 (s_z=-1/2) state the RF pulse is resonant (detuned), resulting in a full (partial) inversion of the nuclear spins. We note certain similarities of this spin-to-spin conversion via off-resonant NMR with the dispersive (frequency-detuned) readout of a superconducting qubit coupled to a microwave cavity <cit.> – the role of the bosonic cavity mode is similar to that of the fermionic nuclear spin ensemble in our simulations and experiments on QDs.
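Continuing the sketch above, the pulse itself can be propagated with a piecewise-constant time discretization. For transparency the nuclei are treated in the frame rotating at the RF frequency, keeping only the secular hyperfine part a ŝ_z Î_z, a simplification valid for ν_e,0 ≫ a that illustrates the contrast mechanism but not the backaction; this is not the full lab-frame model of the published simulations. The amplitude is normalized so that the cosine envelope produces a π rotation on resonance.

```python
from scipy.linalg import expm

def propagate_pulse(N, a, nu_N, nu_rf, T_rf, n_steps=400):
    """Evolve electron-up, nuclei-up through the cosine-envelope RF pulse.

    Rotating-frame sketch with secular hyperfine coupling a*sz*Iz only;
    feasible for small N (the 2^(N+1)-dimensional matrices are dense).
    """
    n_sites = N + 1
    dim = 2 ** n_sites
    H0 = np.zeros((dim, dim), dtype=complex)
    Ix_tot = np.zeros((dim, dim), dtype=complex)
    for k in range(1, n_sites):
        H0 -= (nu_N - nu_rf) * embed(sz, k, n_sites)      # detuned nuclear Zeeman
        H0 += a * embed(sz, 0, n_sites) @ embed(sz, k, n_sites)
        Ix_tot += embed(sx, k, n_sites)
    nu1_amp = 0.5 / T_rf            # integral of nu1(t) dt = 1/2 -> pi rotation
    psi = np.zeros(dim, dtype=complex)
    psi[0] = 1.0                    # |s_z=+1/2> x |all nuclei up>
    dt = T_rf / n_steps
    for i in range(n_steps):
        t = (i + 0.5) * dt
        nu1 = nu1_amp * (1.0 - np.cos(2.0 * np.pi * t / T_rf))
        psi = expm(-2j * np.pi * (H0 - nu1 * Ix_tot) * dt) @ psi
    return psi

def total_Iz(psi, N):
    """Total nuclear polarization Sum_k <I_z,k> in the final state."""
    n_sites = N + 1
    return sum(np.real(psi.conj() @ embed(sz, k, n_sites) @ psi)
               for k in range(1, n_sites))

# Example: nu_rf = nu_N - a/2 makes the pulse resonant for s_z = +1/2, so
# total_Iz flips from +N/2 towards -N/2, while for s_z = -1/2 it barely changes.
```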
Using the wavefunction ψ_Fin in the final state, the final polarization of each nucleus is calculated from the expectation value I_z,k,Fin=⟨ψ_Fin|Î_z,k|ψ_Fin⟩, with a total nuclear polarization defined as Σ I_z,Fin=∑_k=1^N I_z,k,Fin. The measurement contrast ΔΣ I_z is the difference in Σ I_z,Fin obtained under s_z=-1/2 and s_z=+1/2 electron states. We conduct simulations for a wide range of parameters T_RF, a, N, ν_N and ν_e,0, under the condition ν_N<ν_e,0. Despite these variations, we find that the normalized contrast ΔΣ I_z/N depends on a single combination (aT_RF/h)^2, as plotted by the symbols in Supplementary Fig. <ref>(a). For short measurement times and/or weak hyperfine interaction (aT_RF/h)^2≪ 1, the NMR resonances under s_z=±1/2 electron states are resolved only partially, leading to a partial measurement contrast ΔΣ I_z/N≪ 1. In the opposite limit of slow measurement (aT_RF/h)^2≥ 1 the contrast saturates at ΔΣ I_z/N≈ 1, which is the regime used in our experiments. The transition from partial to full contrast is well described by the following empirical expression (plotted by the solid line):
ΔΣ I_z/N ≈(1 + (1.418×(aT_RF/h)^2)^-2)^-1/2
The contrast ΔΣ I_z/N is the desired effect produced by the RF measurement pulse. We now evaluate the undesired effect of the same pulse, namely the disturbance of the electron spin qubit state. Following the RF pulse, the final electron spin polarization is calculated as s_z,Fin=⟨ψ_Fin|ŝ_z|ψ_Fin⟩. Its deviation Δ s_z from the initial s_z is a measure of backaction in the form of mixing between the measurement basis states. If the electron is in the s_z=-1/2 state, Δ s_z is found to be small. This is because the NMR is shifted out of resonance by the electron Knight field and the nuclei are not flipped; in other words, the RF pulse causes minimal evolution of the electron-nuclear spin system. For the opposite electron spin state s_z=+1/2 the backaction Δ s_z is found to be larger. In this case the nuclei are flipped by the RF pulse, and the transient hyperfine (Overhauser) field produced by the nuclei is what causes the backaction on the electron spin. Consequently, we focus on the worst-case scenario of s_z=+1/2. Once again, for a wide range of model parameters we find a universal functional dependence:
Δ s_z ≈ 0.214 × N (a/(hν_e,0))^2,
as can be seen in Supplementary Fig. <ref>(b). The mixing backaction is seen to be a perturbative effect, which vanishes when the electron-nuclear coupling ∝ a is small compared to the electron spin energy gap hν_e,0. Extrapolating the exact results obtained at N≤ 12 we estimate Δ s_z for our experiments on GaAs QDs. The typical electron spin splitting is h ν_e≈50 μeV, arising both from the Zeeman splitting hν_e,0 and the polarized nuclei that are not used in the electron spin readout. We note that the numerator in Supplementary Eq. <ref> can be rearranged as Na^2=a × (Na). The typical Knight shift is a/(2h)≈70 kHz. The product Na is the electron hyperfine (Overhauser) splitting due to the nuclei that are used in the electron spin readout (i.e. rotated by the RF pulse), and is typically ≲ 15 μeV. Substituting these numbers we find Δ s_z≈6× 10^-7. The smallness of the backaction on the measured variable s_z is a key defining property of the quantum non-demolition (QND) measurement <cit.>. The backaction Δ s_z can be interpreted as QND infidelity 1-F_QND. The estimated Δ s_z is also small compared to the overall measurement infidelity 1-F≈ 1-0.9985, confirming that backaction is not the limiting factor, and that the sub-unity fidelity is caused by the electron-spin qubit relaxation.
Conjugate to the observable s_z is the electron spin coherence, which can be written in terms of the azimuthal angle <cit.> of the electron spin in the xy plane. In order to evaluate the measurement backaction in the form of electron spin qubit decoherence we perform the same numerical simulations of the QND process. The only difference is that the electron is now initialized in a superposition of the spin up and down states ψ_Init=2^-1/2 (|+1/2⟩ + |-1/2⟩), which is the s_x=+1/2 eigenstate. Following the RF pulse, the s_x,Fin=⟨ψ_Fin|ŝ_x|ψ_Fin⟩ expectation is calculated to find the deviation Δ s_x, which characterizes the degree of the measurement-induced electron spin qubit decoherence. The universal functional dependence is found to be of the form:
Δ s_x≈((1/2)^-2 + (0.353× N(aT_RF/h)^2)^-2)^-1/2,
as shown in Supplementary Fig. <ref>(c). There are two distinct cases. In the limit of a short measurement, N(aT_RF/h)^2≪ 1, the decoherence Δ s_x scales quadratically with aT_RF/h. In the case of one nuclear spin (N=1), Δ s_x can be interpreted as the phase acquired by the electron spin through its interaction with the nucleus over the measurement time T_RF. Note that N(aT_RF/h)^2≪ 1 implies (aT_RF/h)^2≪ 1. Taking into account Supplementary Eq. <ref>, this means that small decoherence Δ s_x can be achieved only at the expense of a reduced measurement contrast ΔΣ I_z (i.e. reduced measurement fidelity). This is the case of a weak measurement, where the amount of information obtained is small and the backaction is small for both the measured variable and the conjugate variable <cit.>. In the opposite case, N(aT_RF/h)^2≥ 1, the decoherence is nearly complete: s_x,Fin≈0, Δ s_x≈1/2. Moreover, we find that the expectation values of the other electron spin projections also vanish: s_y,Fin≈ s_z,Fin≈0. This indicates that electron spin coherence is lost through the measurement-induced entanglement of the electron spin with the nuclear spins, rather than through coherent electron spin precession. Substituting the experimental parameters, we find N(aT_RF/h)^2≈ 8×10^5≫1, indicating that our quantum-dot measurement of s_z is associated with a complete loss of the conjugate variable (i.e. complete qubit decoherence). In other words, our experiments realize a strong QND measurement.
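The following few lines verify, using only the experimental numbers quoted in the text, the order-of-magnitude estimates for the mixing backaction and for the strong-measurement parameter.

```python
# energies in ueV; 1 ueV = h x 241.8 MHz
h_nu_e = 50.0                       # electron spin splitting
a = 140e3 / 241.8e6                 # Knight shift a/(2h) = 70 kHz -> a in ueV
Na = 15.0                           # Overhauser splitting of the read-out nuclei

dsz = 0.214 * a * Na / h_nu_e**2    # mixing backaction, Supplementary Eq. above
print(f"Delta s_z ~ {dsz:.1e}")     # ~7e-7; matches the ~6e-7 quoted for Na < 15 ueV

N, a_over_h, T_rf = 1e5, 140e3, 20e-6   # number of rotated nuclei, a/h (Hz), pulse (s)
print(f"N (a T_RF/h)^2 ~ {N * (a_over_h * T_rf)**2:.1e}")   # ~8e5 >> 1
```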
§.§ Ruling out the measurement-induced “wavefunction collapse”
Experiments, such as those shown in Fig. 3(b,c) of the main text, indicate that in the vast majority of the single-shot measurements the electron is detected in one of its two energy eigenstates s_z=±1/2. This observation brings up the following questions: Why are coherent superpositions not observed? Does the measurement itself cause the “collapse” of the wavefunction, projecting the electron spin onto the s_z energy basis? At present, it is not possible to create coherent electron spin states in our experiments; we therefore use numerical modelling to study these regimes. Once again, we model the QND measurement process with the nuclei initialized in a fully-polarized state with I_z,k=+1/2 for all k. The electron spin is initialized in a general superposition ψ_Init=α|+1/2⟩ + β|-1/2⟩ with real α and β, which corresponds to an electron spin eigenstate in the xz plane. The z-projection expectation value is initially s_z,Init=(|α|^2-|β|^2)/2, while the initial x-projection expectation is s_x,Init=αβ. In this calculation we use N=10, a/(2h)=100 kHz, ν_N=2 MHz, T_RF=25 μs and ν_e,0=20 or 200 MHz. These parameter sets correspond to the case of well-resolved Knight-shifted NMR resonances (aT_RF/h>1), where the measurement contrast ΔΣ I_z is close to its maximum value. Following the detuned RF pulse we evaluate the expectation value of the spin z projection of the k-th nucleus: I_z,k,Fin=⟨ψ_Fin|Î_z,k|ψ_Fin⟩. With good accuracy we find that the spin polarization of each nucleus replicates the initial electron spin polarization, I_z,k,Fin≈ s_z,Init. This result can be understood qualitatively by noting the large difference in the nuclear and electron spin precession frequencies, ν_N≪ν_e,0, meaning that the fast electron spin precession is averaged out and the nuclei effectively sense only the average polarization s_z,Init of the electron spin.
Moreover, the electron spin z polarization is essentially unchanged by the measurement RF pulse, s_z,Fin≈ s_z,Init, as expected for a QND measurement. This observation allows us to rule out the possibility that the measurement RF pulse itself “collapses” the electron wavefunction onto the s_z eigenbasis. On the other hand, regardless of the initial s_x,Init, the final transverse electron spin components are found to be small, s_x,Fin≈ s_y,Fin≈0, signifying electron spin decoherence through entanglement with the nuclear spins.
§.§ Ruling out the nuclei as a source of einselection
Histograms of the experimentally measured single-shot NMR signals [Figs. 3(b) and 3(c) of the main text] reveal sharp bimodal distributions of Σ I_z,Fin, indicating that the electron is found preferentially in the s_z=±1/2 energy eigenstates. In order to verify this interpretation, we use the same model parameters as in Fig. 3(c), but assume a uniform distribution of the electron spin vector on a Bloch sphere (i.e. we assume that the electron is in a random superposition of the s_z=±1/2 states). Given the linear response of the measurement, I_z,k,Fin≈ s_z,Init, established above (<ref>), the single-shot NMR signals Δ E_hf should have the same distribution as s_z. In other words, the NMR measurement is in principle capable of detecting electron spin superposition states. The distribution calculated under the assumption of uniformly distributed electron spin superpositions is plotted by the dashed line in Fig. 3(c) of the main text. It is nearly uniform and is incompatible with the experimental histogram, confirming that electron spin superpositions are not realized under equilibrium conditions in our experiments. Quantum mechanics gives no a priori preference to the energy eigenbasis or any other basis. Such a preferential basis, to which the system decoheres from a superposition, can arise from the interaction of the qubit with its environment. This phenomenon is known as einselection <cit.>. The bimodality of the NMR readouts confirms that such einselection takes place in the experiments. However, the results of the numerical modelling presented in <ref> rule out RF manipulation of the nuclear spins as a mechanism of einselection.
In order to complete this analysis, we consider the possibility of einselection induced by the slow equilibrium electron-nuclear spin dynamics, as opposed to the fast spin dynamics during the short (tens of μs) RF measurement pulse considered above. Here, we construct the initial wavefunction ψ_Init as a product state, where the nuclei and the electron are initially polarized in the xy plane (orthogonal to the static magnetic field direction z). The nuclei are aligned along the x axis, so that each nucleus is initially in a superposition 2^-1/2 (|+1/2⟩ + |-1/2⟩) of its single-particle eigenstates I_z,k=±1/2. We consider different initial orientations of the electron spin, by taking the initial superposition of the s_z=±1/2 states as 2^-1/2 (|+1/2⟩±|-1/2⟩) (the s_x=±1/2 eigenstates) or 2^-1/2 (|+1/2⟩± i |-1/2⟩) (the s_y=±1/2 eigenstates). For all these states the z-projection expectation value is s_z,Init=0. There is no RF pulse in this simulation: the electron-nuclear system is allowed to evolve freely for a few tens of milliseconds, which is much longer than all the relevant interaction timescales and is therefore sufficient to reach the steady state. We then calculate the final electron spin polarization s_z,Fin=⟨ψ_Fin|ŝ_z|ψ_Fin⟩ that emerges from the electron-nuclear interaction. The value of s_z,Fin depends on the initial mutual orientation of the electron and nuclear spins. In the case of orthogonal orientation (s_y=±1/2) we find s_z,Fin≈0. In the case where the electron spin s_x=-1/2 (s_x=+1/2) is initially aligned (anti)parallel to the nuclei, we find negative (positive) s_z,Fin. For a wide range of parameters we find that the magnitude of this emergent polarization, s_z,Emerg, follows an empirical relation:
| s_z,Emerg| ≈ 0.231 × N a/(hν_e,0).
Up to a factor of order unity, this result has a simple interpretation as a ratio of two energies: the initial mutual electron-nuclear hyperfine energy, ± Na/4 according to Supplementary Eq. <ref>, and the electron spin energy splitting hν_e,0. We now extrapolate these results to the case of a real GaAs QD. In thermal equilibrium any non-zero nuclear spin polarization is due to statistical fluctuations, which scale as ≈√(N). Thus we substitute N with √(N) and ν_e,0 with ν_e in Supplementary Eq. <ref>, and use the realistic values h ν_e≈50 μeV, a/(2h)≈70 kHz, N≈10^5 to find a small emergent electron spin polarization | s_z,Emerg|≈10^-3. The emergence of a small | s_z,Emerg|≪1/2 has a simple interpretation: the hyperfine energy of an equilibrium nuclear spin fluctuation is much smaller than the electron Zeeman energy, making it energetically impossible for the nuclei to “collapse” the electron spin superposition into its energy eigenbasis. Thus we rule out the low-energy nuclear spin environment as a source of einselection for the electron spin.
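The same extrapolation can be written out explicitly (numbers from the text; √N replaces N for a thermal fluctuation):

```python
import numpy as np

N, a, h_nu_e = 1e5, 140e3 / 241.8e6, 50.0      # a and h*nu_e in ueV, as above
s_z_emerg = 0.231 * np.sqrt(N) * a / h_nu_e    # Supplementary Eq. with N -> sqrt(N)
print(f"|s_z,Emerg| ~ {s_z_emerg:.1e}")        # ~1e-3, as quoted in the text
```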
Unlike the low-energy nuclear spins, the crystalline environment of the QD electron can act as a high-energy environment responsible for einselection. Coupling between the QD electron spin and the phonons manifests itself in electron spin relaxation. At low temperatures (T=4.2 K), the electron spin relaxation is dominated by single-phonon processes <cit.>. The reduction of the electron spin lifetime T_1,e with increasing magnetic field B_z indicates that phonon coupling is the dominant electron spin relaxation channel. By contrast, relaxation of the QD electron spin due to its cotunneling coupling with the Fermi reservoir would have resulted in B_z-independent spin lifetimes <cit.>. Following Eq. 4 of Ref. <cit.>, the effective spin-phonon coupling can be written as ∝(ŝ_xℰ_y-ŝ_yℰ_x), where ℰ_x,y are the Cartesian components of the phonon-induced piezo-strain electric field. This form is akin to the magnetic spin resonance Hamiltonian ∝(ŝ_xB_x+ŝ_yB_y). Therefore, we expect that only the resonant or nearly-resonant spectral components of ℰ_x,y can cause the electron spin flips. The two-pulse QND measurement experiments, shown in Supplementary Fig. <ref> and in the main text, indicate that electron spin relaxation is well described by a random telegraph process. The phonon-induced microwave electric fields, which drive this telegraph process, must therefore occur in the form of short bursts (much shorter than the RF pulse, ≲10 μs), separated by long (milliseconds) random intervals. Such electric field bursts can rotate the electron spin at random times and with random phases, which would explain both the spin relaxation and the einselection. Spontaneous collapses and burst-like revivals have long been investigated in bosonic systems, such as photons <cit.> and phonons <cit.>, and are typically associated with high mode population numbers n̅≳ 100. For our experiments at T=4.2 K and hν_e≈50 μeV, the average phonon number is n̅≈6.8. The appearance of spontaneous revivals at such low excitations is somewhat unexpected and calls for further investigation.
[Oshiyama and Ohnishi (1986)] A. Oshiyama and S. Ohnishi, "DX center: Crossover of deep and shallow states in Si-doped Al_xGa_{1-x}As", Phys. Rev. B 33, 4320 (1986). https://doi.org/10.1103/PhysRevB.33.4320
[Mooney (1990)] P. M. Mooney, "Deep donor levels (DX centers) in III-V semiconductors", J. Appl. Phys. 67, R1 (1990). https://doi.org/10.1063/1.345628
[Zhai et al. (2020)] L. Zhai, M. C. Löbl, G. N. Nguyen, J. Ritzmann, A. Javadi, C. Spinnler, A. D. Wieck, A. Ludwig, and R. J. Warburton, "Low-noise GaAs quantum dots for quantum photonics", Nat. Commun. 11, 4745 (2020). https://doi.org/10.1038/s41467-020-18625-z
[Slichter (1990)] C. P. Slichter, Principles of Magnetic Resonance (Springer, 1990).
[Ulhaq et al. (2016)] A. Ulhaq, Q. Duan, E. Zallo, F. Ding, O. G. Schmidt, A. I. Tartakovskii, M. S. Skolnick, and E. A. Chekhovich, "Vanishing electron g factor and long-lived nuclear spin polarization in weakly strained nanohole-filled GaAs/AlGaAs quantum dots", Phys. Rev. B 93, 165306 (2016). https://doi.org/10.1103/PhysRevB.93.165306
[Chekhovich et al. (2018)] E. A. Chekhovich, I. M. Griffiths, M. S. Skolnick, H. Huang, S. F. Covre da Silva, X. Yuan, and A. Rastelli, "Cross calibration of deformation potentials and gradient-elastic tensors of GaAs using photoluminescence and nuclear magnetic resonance spectroscopy in GaAs/AlGaAs quantum dot structures", Phys. Rev. B 97, 235311 (2018). https://doi.org/10.1103/PhysRevB.97.235311
[Chekhovich et al. (2013)] E. A. Chekhovich, M. M. Glazov, A. B. Krysa, M. Hopkinson, P. Senellart, A. Lemaître, M. S. Skolnick, and A. I. Tartakovskii, "Element-sensitive measurement of the hole-nuclear spin interaction in quantum dots", Nat. Phys. 9, 74 (2013). https://doi.org/10.1038/nphys2514
[Gammon et al. (2001)] D. Gammon, A. L. Efros, T. A. Kennedy, M. Rosen, D. S. Katzer, D. Park, S. W. Brown, V. L. Korenev, and I. A. Merkulov, "Electron and nuclear spin interactions in the optical spectra of single GaAs quantum dots", Phys. Rev. Lett. 86, 5176 (2001). https://doi.org/10.1103/PhysRevLett.86.5176
[Eble et al. (2006)] B. Eble, O. Krebs, A. Lemaître, K. Kowalik, A. Kudelski, P. Voisin, B. Urbaszek, X. Marie, and T. Amand, "Dynamic nuclear polarization of a single charge-tunable InAs/GaAs quantum dot", Phys. Rev. B 74, 081306 (2006). https://doi.org/10.1103/PhysRevB.74.081306
[Skiba-Szymanska et al. (2008)] J. Skiba-Szymanska, E. A. Chekhovich, A. E. Nikolaenko, A. I. Tartakovskii, M. N. Makhonin, I. Drouzas, M. S. Skolnick, and A. B. Krysa, "Overhauser effect in individual InP/Ga_xIn_{1-x}P dots", Phys. Rev. B 77, 165338 (2008). https://doi.org/10.1103/PhysRevB.77.165338
[Ragunathan et al. (2019)] G. Ragunathan, J. Kobak, G. Gillard, W. Pacuski, K. Sobczak, J. Borysiuk, M. S. Skolnick, and E. A. Chekhovich, "Direct measurement of hyperfine shifts and radio frequency manipulation of nuclear spins in individual CdTe/ZnTe quantum dots", Phys. Rev. Lett. 122, 096801 (2019). https://doi.org/10.1103/PhysRevLett.122.096801
[Chekhovich et al. (2015)] E. A. Chekhovich, M. Hopkinson, M. S. Skolnick, and A. I. Tartakovskii, "Suppression of nuclear spin bath fluctuations in self-assembled quantum dots induced by inhomogeneous strain", Nat. Commun. 6, 6348 (2015). https://doi.org/10.1038/ncomms7348
[Serrels et al. (2008)] K. A. Serrels, E. Ramsay, P. A. Dalgarno, B. Gerardot, J. A. O'Connor, R. H. Hadfield, R. J. Warburton, and D. T. Reid, "Solid immersion lens applications for nanophotonic devices", J. Nanophotonics 2, 021854 (2008). https://doi.org/10.1117/1.3068652
[Munsch et al. (2014)] M. Munsch, G. Wust, A. V. Kuhlmann, F. Xue, A. Ludwig, D. Reuter, A. D. Wieck, M. Poggio, and R. J. Warburton, "Manipulation of the nuclear spin ensemble in a quantum dot with chirped magnetic resonance pulses", Nat. Nanotechnol. 9, 671 (2014). https://doi.org/10.1038/nnano.2014.175
[Dubois et al. (2023)] J. Dubois, U. Saalmann, and J. M. Rost, "Symmetry-induced decoherence-free subspaces", Phys. Rev. Res. 5, L012003 (2023). https://doi.org/10.1103/PhysRevResearch.5.L012003
[Hatridge et al. (2013)] M. Hatridge, S. Shankar, M. Mirrahimi, F. Schackert, K. Geerlings, T. Brecht, K. M. Sliwa, B. Abdo, L. Frunzio, S. M. Girvin, R. J. Schoelkopf, and M. H. Devoret, "Quantum back-action of an individual variable-strength measurement", Science 339, 178 (2013). https://doi.org/10.1126/science.1226897
[Cujia et al. (2019)] K. S. Cujia, J. M. Boss, K. Herb, J. Zopes, and C. L. Degen, "Tracking the precession of single nuclear spins by weak measurements", Nature 571, 230 (2019). https://doi.org/10.1038/s41586-019-1334-9
[Pfender et al. (2019)] M. Pfender, P. Wang, H. Sumiya, S. Onoda, W. Yang, D. B. R. Dasari, P. Neumann, X.-Y. Pan, J. Isoya, R.-B. Liu, and J. Wrachtrup, "High-resolution spectroscopy of single nuclear spins via sequential weak measurements", Nat. Commun. 10, 594 (2019). https://doi.org/10.1038/s41467-019-08544-z
|
http://arxiv.org/abs/2307.01635v1
|
20230704103918
|
Exploring the vibrational series of pure trilobite Rydberg molecules
|
[
"Max Althön",
"Markus Exner",
"Richard Blättner",
"Herwig Ott"
] |
physics.atom-ph
|
[
"physics.atom-ph"
] |
Department of Physics and Research Center OPTIMAS, Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau, 67663 Kaiserslautern, Germany
Corresponding author: Herwig Ott (ORCID 0000-0002-3155-2719), ott@physik.uni-kl.de
We report on the observation of two vibrational series of pure trilobite rubidium Rydberg molecules. They are created via three-photon photoassociation and lie energetically more than 15 GHz below the atomic 22F state of rubidium. In agreement with theoretical calculations, we find an almost perfect harmonic oscillator behavior of six vibrational states. We show that these states can be used to measure electron-atom scattering lengths for low energies in order to benchmark current theoretical calculations. The molecules have extreme properties: their dipole moments are in the range of kilo-Debye and the electronic wave function is made up of high angular momentum states with only little admixture from the nearby 22F state. This high-l character of the trilobite molecules leads to an enlarged lifetime as compared to the 22F atomic state. The observation of an equidistant series of vibrational states opens an avenue to observe coherent molecular wave-packet dynamics.
Exploring the vibrational series of pure trilobite Rydberg molecules
Richard Blättner
July 4, 2023
====================================================================
§ INTRODUCTION
Creating controllable molecules at ultralow temperatures offers a pathway to engineered ultracold quantum chemical reactions <cit.> and tests of fundamental physics and symmetries <cit.>.
Molecules that possess sizeable electric dipole moments can be controlled by external electric fields, making them candidates for quantum information processing <cit.> and the production of strongly correlated many-body systems <cit.>. For dipolar molecules with multiple vibrational states, electric field pulses have been proposed to create superposition states <cit.> and observe coherent wave-packet dynamics.
Ultralong-range Rydberg molecules (ULRMs) <cit.> are a platform for creating such dipolar molecules in ultracold environments. In these molecules a neutral ground state atom is trapped inside the giant electronic wavefunction of a Rydberg state by a binding mechanism stemming from the electron-ground state scattering interaction.
ULRMs have been found to be an ideal testbed for low-energy electron-ground state scattering <cit.> and could be used for the investigation of diabatic coupling schemes in molecules <cit.>. They can also be used as a starting point for the creation of ultracold anions <cit.>.
Homonuclear ULRMs can have a permanent electric dipole moment due to the distinguishability of the ground state and Rydberg electron <cit.>.
For ULRMs corresponding to low-l (S, P, D) Rydberg states this can reach about one Debye.
There are also two classes of molecules emerging from the mixing of multiple high-l Rydberg states. These so-called butterfly <cit.> and trilobite molecules can have dipole moments on the order of kilo-Debye <cit.>, which are in special cases even larger than the bond length <cit.>.
Due to the high-l nature of their electronic wave function, trilobite molecules are in general not accessible with standard one- or two-photon photoassociation. Nevertheless, states with significant trilobite admixture have been produced via two-photon excitation both in Cs <cit.> and Rb <cit.>. In Cs, the almost integer quantum defect of the S states leads to a mixing with the high-l states, whereas in Rb a sizable admixture exists only for a specific principal quantum number, where the splitting between the S state and the high-l manifold matches the ground state hyperfine splitting.
Here, we use three-photon excitation to produce pure trilobite molecules in Rb over a wide range of frequencies and characterize their binding energies, lifetimes and dipole moments. We observe two vibrational series which are energetically split because of different angular momentum couplings, and show that their lifetimes exceed that of the adjacent 22F state. Even for this relatively low principal quantum number, we find kilo-Debye dipole moments.
§ RESULTS
ULRMs form due to the elastic scattering interaction of the Rydberg electron with a neutral ground state atom. To describe the scattering process, Fermi pseudo potentials <cit.> with energy-dependent scattering lengths are used. In atomic units, the interaction is given by
V̂ = A 𝐬̂_2·𝐈̂ + ∑_S,T ℙ̂_S,T [ 2π a_s^S,T(k) δ(𝐑 - 𝐫) + 6π (a_p^S,T(k))^3 δ(𝐑 - 𝐫) ∇·∇⃗ ],
where 𝐫 is the position of the Rydberg electron and 𝐑 is the internuclear vector between the Rydberg core and the ground state atom, as shown in Fig. <ref>.
The s- and p-wave scattering lengths a_s/p^S,T depend on the spins of the two electrons, resulting in singlet and triplet channels with the corresponding projection operators ℙ_S,T. To explain the observed spectra, the hyperfine interaction A 𝐬̂_2·𝐈̂ of the ground state atom needs to be taken into account. The scattering interaction depends on the Rydberg electron's momentum k relative to the ground state atom, which is calculated semi-classically for every internuclear distance R as k = √(-1/n^2 + 2/R) (in atomic units). For the k-dependence of the singlet scattering lengths we use data provided by I. Fabrikant <cit.>.
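As an aside, the semiclassical momentum is a one-liner; the snippet below (with n = 22 for the hydrogenic manifold used here) inverts it to show which internuclear distance the momentum k = 0.0175 a.u. quoted later corresponds to, a value in the region of the outer trilobite well.

```python
import numpy as np

def k_semiclassical(R, n):
    """Semiclassical Rydberg-electron momentum (atomic units)."""
    return np.sqrt(-1.0 / n**2 + 2.0 / R)

n, k = 22, 0.0175
R = 2.0 / (k**2 + 1.0 / n**2)       # inverted relation
print(f"R = {R:.0f} a.u.")          # ~840 Bohr radii
```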
For the triplet channels we employ a model potential consisting of a polarization potential with an inner hard wall at variable distance from the ground state atom, which captures the short-range physics <cit.>. By varying the position of the hard wall, the scattering interaction can be tuned.
We diagonalize the Hamiltonian given in <cit.>, which includes spin-orbit coupling of the p-wave scattering, at each internuclear distance. We consider a finite basis set consisting of two hydrogenic manifolds below the state of interest (n=22) and one manifold above it. The resulting Born-Oppenheimer potential energy curves are shown in Fig. <ref>. The energy curves belong to different types of molecule and show avoided crossings where the molecular character changes. Of particular interest for this work is the crossing between the trilobite and butterfly curves resulting in three mutually shifted potential wells which support multiple vibrational states.
While the lower potential well can be assigned to the F=1 ground state and triplet s-wave scattering, the middle potential curve consists of a mixture of the two hyperfine states and shows both singlet and triplet s-wave scattering <cit.>. This mixture is due to the interplay between the hyperfine interaction and the electron scattering interaction, as both depend on the spin state of the ground state electron. The upper potential well for the F=2 ground state cannot be excited in our experiment, as we prepare the sample in the F=1 state. The bound states are then calculated from the potential curves with a shooting method by analyzing the density of states when varying the inner boundary condition <cit.>.
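The vibrational states can equally be obtained by a direct finite-difference diagonalization of the radial Schrödinger equation, which is a convenient cross-check of the shooting method. The sketch below uses an assumed harmonic well with roughly the measured level spacing, not the actual computed curves.

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def vibrational_states(R, V, mu):
    """Eigenstates of -(1/2mu) d^2/dR^2 + V(R) on a uniform grid (atomic units)."""
    dR = R[1] - R[0]
    diag = 1.0 / (mu * dR**2) + V
    off = -0.5 / (mu * dR**2) * np.ones(len(R) - 1)
    return eigh_tridiagonal(diag, off)

t_au = 2.4189e-17                            # atomic unit of time (s)
mu = 79_200.0                                # reduced mass of Rb2 (a.u.)
R = np.linspace(750.0, 950.0, 4000)
omega = 2 * np.pi * 1.5e9 * t_au             # assumed ~1.5 GHz level spacing
V = 0.5 * mu * omega**2 * (R - 850.0)**2     # harmonic stand-in for the well
E, _ = vibrational_states(R, V, mu)
print((E[1] - E[0]) / t_au / (2 * np.pi) / 1e9, "GHz spacing")
```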
To photoassociate the trilobite Rydberg molecules in the lower two wells we use a three-photon setup with lasers at 780 nm, 776 nm and 1288 nm. This allows us to couple to the 22F state, which makes up about 3% of the electronic state. The first two lasers are blue-detuned from the intermediate states (5P_3/2 and 5D_5/2). The three-photon Rabi frequency is 2π×250 kHz. Our sample consists of ^87Rb atoms in the F=1 ground state prepared in an optical dipole trap at 1064 nm. The peak density is 4×10^13 cm^-3 and the temperature is about 150 μK. Per experimental cycle we perform 800 excitation pulses of 1 μs duration. Before every excitation pulse the dipole trap is switched off. To detect the Rydberg excitations, an extraction field is switched on after the excitation pulse and, after a variable delay time, a CO_2 laser pulse ionizes all Rydberg states. The resulting ions are guided via a reaction microscope <cit.> to a space- and time-resolved multi-channel plate detector. This allows us to measure the momentum of the Rydberg core prior to ionization. Note that the recoil upon ionization with the CO_2 laser is negligible.
Because of the large dipole moments of the trilobite molecules, precise electric field compensation during the excitation pulses is necessary.
To achieve this we use the momentum imaging capabilities of our reaction microscope. In this field compensation measurement, the atoms are ionized and accelerated in the residual electric field for a variable wait time. Afterwards, we measure the momenta of the ions and extract the electric field from the linear dependence on the wait time.
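Since an ion of charge e acquires momentum p = eE t during the wait time t, the residual field follows from a straight-line fit; a sketch with invented numbers:

```python
import numpy as np

e = 1.602e-19                                    # elementary charge (C)
t = np.array([0.0, 1e-6, 2e-6, 3e-6])            # wait times (s)
p = np.array([0.1, 2.6, 5.0, 7.4]) * 1e-25       # ion momenta (kg m/s), illustrative

E_res = np.polyfit(t, p, 1)[0] / e               # field from the fitted slope
print(f"residual field ~ {E_res * 10:.0f} mV/cm")  # 1 V/m = 10 mV/cm
```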
Fig. <ref> shows the molecular spectrum red-detuned from the 22F_7/2 state, covering the two trilobite potential wells. We observe a vibrational series of six equally spaced bound states in each of the potential wells. The anharmonicity is less than 10 percent, confirming the harmonic-oscillator shape of the potential wells. The position of the highest vibrational state coincides with the crossing of the trilobite and the butterfly state, confirming that the well depth of both series is appropriately captured by theory.
Next, we analyze in detail the position of the vibrational states and the conclusions one can draw for the molecular potential. Inspecting the different terms in Eq. <ref> shows that the molecular potential for the trilobite curve is directly proportional to the respective scattering length. High precision molecular Rydberg spectroscopy is therefore a tool to determine the electron-atom scattering lengths. Because of the crossing with the butterfly curves, both the triplet p-wave as well as the dominant triplet s-wave scattering channels have to be considered. Since the mixed trilobite has a small singlet admixture, the splitting of the two potential curves also depends on the singlet s-wave scattering length. Singlet scattering lengths calculated by a two-active-electron model <cit.> fit the observed splitting well.
For the triplet p-wave scattering we find the J=0 shape resonance energy at 24.7±0.5 meV, which agrees with the measurement of Engel et al. <cit.> within the error limits. The large margin of error is due to the relative insensitivity of the trilobite states to changes in the p-wave scattering.
For the more prominent triplet s-wave scattering we find a value of a_s^T(k=0.0175)=-7.75±0.03 at the position of the potential minimum. Using the model potential to extrapolate this result to zero momentum yields a_s^T(k=0)=-14.2. This asymptote differs significantly from previous experimental values (-15.2 to -16.1) <cit.> measured at k values near zero.
Given the high precision of the presented measurement, which is due to binding energies on the order of 10 GHz, this points to an incorrect k dependence of the scattering length as calculated from the model potential. We note that previous ab initio calculations <cit.> cannot explain the measured spectrum and therefore do not present an alternative. To resolve this, measurements of trilobite spectra at different principal quantum numbers can be used to probe different ranges of the electron momentum and thus present an opportunity to map out this dependence. With such measurements one can also test whether the semiclassical calculation of k plays a role in the discrepancy. In fact, if the actual electron momentum is assumed to be about 10% larger than the semiclassical value, the binding energy of the triplet trilobite can be brought into line with the previously measured scattering length asymptote of Engel et al. <cit.>. Therefore, these exotic molecules could lead to a better theoretical understanding of the more general process of electron-atom scattering.
A peculiar property of trilobite Rydberg molecules is their large permanent electric dipole moment. This stems from the large concentration of the electron density at the position of the ground state atom (see Fig. <ref>). The dipole moments are measured by applying an electric field and observing the broadening of the molecular line. As the rotational splitting cannot be resolved in our experiment, we fit the spectra with the convolution of a Lorentzian with a step function of width 2dE <cit.>.
From the fitted widths dE for different electric fields we can deduce the dipole moment as shown in Fig. <ref>. We find electric dipole moments up to 1735 Debye, which corresponds to 0.8 times the internuclear distance. This reflects the highly efficient binding mechanism, which accumulates the electron density at the location of the ground state atom.
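The lineshape used in these fits has a closed form, and the dipole moment then follows from the linear slope of the fitted widths; a sketch with assumed numbers chosen to land near the kilo-Debye scale reported here:

```python
import numpy as np

def pendular_lineshape(delta, gamma, dE, A):
    """Lorentzian of FWHM gamma convolved with a box of full width 2*dE."""
    return A / (2.0 * np.pi * dE) * (np.arctan((delta + dE) / (gamma / 2))
                                     - np.arctan((delta - dE) / (gamma / 2)))

E_field = np.array([0.0, 0.05, 0.10, 0.15])      # applied field (V/cm), assumed
dE_fit = np.array([0.0, 44.0, 87.0, 131.0])      # fitted widths (MHz), assumed

h, debye = 6.626e-34, 3.336e-30
slope = np.polyfit(E_field, dE_fit, 1)[0]        # MHz per (V/cm)
d = h * slope * 1e6 / 100.0 / debye              # 1 V/cm = 100 V/m
print(f"dipole moment ~ {d:.0f} D")              # ~1.7 kD
```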
For the theoretical calculation of the dipole moments we write the electronic wavefunction at internuclear distance R in the basis of the unperturbed states
|Ψ_mol^(R)⟩ = ∑_i c_i^(R)|i⟩
and then integrate over the vibrational wavefunction Φ:
⟨ d ⟩ = ∫ |Φ(R)|^2 ∑_i,j c_i^(R)* c_j^(R)⟨i|d̂|j⟩ dR.
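Given the coefficients c_i^(R) and the dipole matrix elements on a grid of internuclear distances, this average is a single quadrature. A minimal sketch, where the electronic dipole curve d_el(R) = Σ_ij c_i* c_j ⟨i|d̂|j⟩ is assumed to be precomputed:

```python
import numpy as np

def averaged_dipole(R, Phi, d_el):
    """Vibrational average of the electronic dipole curve d_el(R)."""
    w = np.abs(Phi)**2
    w /= np.trapz(w, R)          # enforce normalization on the grid
    return np.trapz(w * d_el, R)
```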
The experimental and theoretical results for selected vibrational states are presented in Table <ref>.
Experiment and theory are in good agreement; however, the experimentally determined dipole moments are systematically 10-15% larger than the theoretical values. This could be due to an unidentified systematic measurement error or to inaccurate theoretical dipole matrix elements, as small systematic deviations add up owing to the many states that contribute to the trilobite wave function.
The second important characteristic of Rydberg molecules is their lifetime, which reflects the available decay channels: spontaneous emission, black-body induced transitions and molecular decay via tunneling towards shorter internuclear distances, the latter leading to l-changing collisions or to associative ionization. The lifetimes are measured by varying the delay time between excitation and ionization. We then count the number of ions that have zero momentum; this way, we also account for l-changing collisions, which result in an ion but come along with a large momentum <cit.>. Another possible outcome is associative ionization, resulting in Rb_2^+, which can be distinguished by its larger time of flight (TOF). Both associative ionization and l-changing collisions stem from a tunneling process out of the potential well into the butterfly potential curve.
The measured lifetimes are given in Table <ref>. We first note that even the shortest measured lifetime exceeds the lifetime of the atomic 22F Rydberg state, and the ground state in each potential well has twice that lifetime. This reflects the multitude of high-l states involved in the trilobite molecules, resulting in slower radiative decay. As expected, the tunneling processes are more prominent for higher vibrational states, resulting in shorter lifetimes than for the deeply bound vibrational ground states. This is corroborated by a roughly 60 % increase in the rates of both l-changing collisions and associative ionization when comparing ν_m=6 with the ground state. However, the quantitative dependence on the vibrational quantum number needs further investigation, as e.g. vibronic coupling effects <cit.> lead to a superposition of butterfly and trilobite states and thus influence the decay dynamics.
§ CONCLUSION AND OUTLOOK
We have measured two vibrational series of pure trilobite Rydberg molecules by employing three-photon photoassociation. With this method the creation of trilobite molecules in any element that has a negative s-wave scattering length should be possible, as the quantum defects for the admixed atomic state (here F-state) are rather small and the coupling with the trilobite state is sizable.
We find kilo-Debye dipole moments and lifetimes longer than that of the coupled atomic state. The observed spectra can be theoretically explained by adjusting the triplet s-wave scattering length. While the resulting agreement is excellent, the extrapolated scattering length asymptote disagrees with previous measurements and merits further theoretical and experimental work. As a logical next step one can extend the measurements to different principal quantum numbers and thus probe different ranges of electron momenta. This allows one to map out the dependence of the scattering length on k. The discrepancy found in the scattering length asymptote might also be due to the semi-classical treatment of the electron momenta, and an extended measurement series might support or discard this explanation.
Additionally, for higher principal quantum numbers vibronic coupling effects between the trilobite and butterfly curves become more pronounced, and these molecules could serve as a benchmark for theoretical calculations <cit.>. It has also been predicted that at certain principal quantum numbers conical intersections essentially stop the l-changing collision processes <cit.>, which could be checked with our reaction microscope. Finally, the shape of the potential well is suitable for studying coherent wave-packet dynamics. Note that the quality factor of the potential well is 3×10^4. Using ns and ps laser pulses in a pump-probe scheme provides the required time resolution for such experiments.
§ ACKNOWLEDGEMENTS
We would like to thank Frederic Hummel, Peter Schmelcher and Matt Eiles for helpful discussions. This project is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – project numbers 316211972 and 460443971.
§ AUTHOR CONTRIBUTIONS
M.A., M.E., R.B. performed the experiments. M.A. and M.E. analyzed the data. M.A. performed the theoretical calculations and prepared the initial version of the manuscript. H.O. conceived and supervised the project. All authors contributed to the data interpretation and manuscript preparation.
§ DATA AVAILABILITY
The data that support the findings of this study are available from the corresponding
author upon reasonable request.
[Carr et al. (2009)] L. D. Carr, D. DeMille, R. V. Krems, and J. Ye, "Cold and ultracold molecules: science, technology and applications", New J. Phys. 11, 055049 (2009). https://doi.org/10.1088/1367-2630/11/5/055049
[Hu et al. (2019)] M.-G. Hu, Y. Liu, D. D. Grimes, Y.-W. Lin, A. H. Gheorghe, R. Vexiau, N. Bouloufa-Maafa, O. Dulieu, T. Rosenband, and K.-K. Ni, "Direct observation of bimolecular reactions of ultracold KRb molecules", Science 366, 1111 (2019). https://doi.org/10.1126/science.aay9531
[Ye et al. (2018)] X. Ye, M. Guo, M. L. González-Martínez, G. Quéméner, and D. Wang, "Collisions of ultracold ^23Na^87Rb molecules with controlled chemical reactivities", Sci. Adv. 4, eaaq0083 (2018). https://doi.org/10.1126/sciadv.aaq0083
[Ospelkaus et al. (2010)] S. Ospelkaus, K.-K. Ni, D. Wang, M. H. G. de Miranda, B. Neyenhuis, G. Quéméner, P. S. Julienne, J. L. Bohn, D. S. Jin, and J. Ye, "Quantum-state controlled chemical reactions of ultracold potassium-rubidium molecules", Science 327, 853 (2010). https://doi.org/10.1126/science.1184121
[DeMille (2002)] D. DeMille, "Quantum computation with trapped polar molecules", Phys. Rev. Lett. 88, 067901 (2002). https://doi.org/10.1103/PhysRevLett.88.067901
[Lukin et al. (2001)] M. D. Lukin, M. Fleischhauer, R. Cote, L. M. Duan, D. Jaksch, J. I. Cirac, and P. Zoller, "Dipole blockade and quantum information processing in mesoscopic atomic ensembles", Phys. Rev. Lett. 87, 037901 (2001). https://doi.org/10.1103/PhysRevLett.87.037901
[Baranov et al. (2012)] M. A. Baranov, M. Dalmonte, G. Pupillo, and P. Zoller, "Condensed matter theory of dipolar quantum gases", Chem. Rev. 112, 5012 (2012). https://doi.org/10.1021/cr2003568
[Kadau et al. (2016)] H. Kadau, M. Schmitt, M. Wenzel, C. Wink, T. Maier, I. Ferrier-Barbut, and T. Pfau, "Observing the Rosensweig instability of a quantum ferrofluid", Nature 530, 194 (2016). https://doi.org/10.1038/nature16485
[Hummel et al. (2021)] F. Hummel, K. Keiler, and P. Schmelcher, "Electric-field-induced wave-packet dynamics and geometrical rearrangement of trilobite Rydberg molecules", Phys. Rev. A 103, 022827 (2021). https://doi.org/10.1103/PhysRevA.103.022827
[Greene et al. (2000)] C. H. Greene, A. S. Dickinson, and H. R. Sadeghpour, "Creation of polar and nonpolar ultra-long-range Rydberg molecules", Phys. Rev. Lett. 85, 2458 (2000). https://doi.org/10.1103/PhysRevLett.85.2458
[Fey et al. (2020)] C. Fey, F. Hummel, and P. Schmelcher, "Ultralong-range Rydberg molecules", Mol. Phys. 118, e1679401 (2020). https://doi.org/10.1080/00268976.2019.1679401
[Eiles (2019)] M. T. Eiles, "Trilobites, butterflies, and other exotic specimens of long-range Rydberg molecules", J. Phys. B: At. Mol. Opt. Phys. 52, 113001 (2019). https://doi.org/10.1088/1361-6455/ab19ca
[Engel et al. (2019)] F. Engel, T. Dieterle, F. Hummel, C. Fey, P. Schmelcher, R. Löw, T. Pfau, and F. Meinert, "Precision spectroscopy of negative-ion resonances in ultralong-range Rydberg molecules", Phys. Rev. Lett. 123, 073003 (2019). https://doi.org/10.1103/PhysRevLett.123.073003
[Böttcher et al. (2016)] F. Böttcher, A. Gaj, K. M. Westphal, M. Schlagmüller, K. S. Kleinbach, R. Löw, T. C. Liebisch, T. Pfau, and S. Hofferberth, "Observation of mixed singlet-triplet Rb_2 Rydberg molecules", Phys. Rev. A 93, 032512 (2016). https://doi.org/10.1103/PhysRevA.93.032512
[Bendkowsky et al. (2010)] V. Bendkowsky, B. Butscher, J. Nipper, J. B. Balewski, J. P. Shaffer, R. Löw, T. Pfau, W. Li, J. Stanojevic, T. Pohl, and J. M. Rost, "Rydberg trimers and excited dimers bound by internal quantum reflection", Phys. Rev. Lett. 105, 163201 (2010). https://doi.org/10.1103/PhysRevLett.105.163201
[Saßmannshausen et al. (2015)] H. Saßmannshausen, F. Merkt, and J. Deiglmayr, "Experimental characterization of singlet scattering channels in long-range Rydberg molecules", Phys. Rev. Lett. 114, 133201 (2015). https://doi.org/10.1103/PhysRevLett.114.133201
[Hummel et al. (2023)] F. Hummel, P. Schmelcher, and M. T. Eiles, "Vibronic interactions in trilobite and butterfly Rydberg molecules", Phys. Rev. Res. 5, 013114 (2023). https://doi.org/10.1103/PhysRevResearch.5.013114
[Hummel et al. (2020)] F. Hummel, P. Schmelcher, H. Ott, and H. R. Sadeghpour, "An ultracold heavy Rydberg system formed from ultra-long-range molecules bound in a stairwell potential", New J. Phys. 22, 063060 (2020). https://doi.org/10.1088/1367-2630/ab90d7
[Li et al. (2011)] W. Li, T. Pohl, J. M. Rost, S. T. Rittenhouse, H. R. Sadeghpour, J. Nipper, B. Butscher, J. B. Balewski, V. Bendkowsky, R. Löw, and T. Pfau, "A homonuclear molecule with a permanent electric dipole moment", Science 334, 1110 (2011). https://doi.org/10.1126/science.1211255
[Niederprüm et al. (2016)] T. Niederprüm, O. Thomas, T. Eichert, C. Lippe, J. Pérez-Ríos, C. H. Greene, and H. Ott, "Observation of pendular butterfly Rydberg molecules", Nat. Commun. 7, 12820 (2016). https://doi.org/10.1038/ncomms12820
[Booth et al. (2015)] D. Booth, S. T. Rittenhouse, J. Yang, H. R. Sadeghpour, and J. P. Shaffer, "Production of trilobite Rydberg molecule dimers with kilo-debye permanent electric dipole moments", Science 348, 99 (2015). https://doi.org/10.1126/science.1260722
[Kleinbach et al. (2017)] K. S. Kleinbach, F. Meinert, F. Engel, W. J. Kwon, R. Löw, T. Pfau, and G. Raithel, "Photoassociation of trilobite Rydberg molecules via resonant spin-orbit coupling", Phys. Rev. Lett. 118, 223001 (2017). https://doi.org/10.1103/PhysRevLett.118.223001
[Fermi (1934)] E. Fermi, "Sopra lo spostamento per pressione delle righe elevate delle serie spettrali", Il Nuovo Cimento 11, 157 (1934). https://doi.org/10.1007/BF02959829
[Omont (1977)] A. Omont, "On the theory of collisions of atoms in Rydberg states with neutral particles", J. Phys. France 38, 1343 (1977). https://doi.org/10.1051/jphys:0197700380110134300
[Fabrikant (1986)] I. I. Fabrikant, "Interaction of Rydberg atoms and thermal electrons with K, Rb and Cs atoms", J. Phys. B: At. Mol. Phys. 19, 1527 (1986). https://doi.org/10.1088/0022-3700/19/10/021
1986)NoStop
[Bahrim et al.(2001)Bahrim,
Thumm, and Fabrikant]Bahrim_2001
author author C. Bahrim, author U. Thumm, and author I. I. Fabrikant, title title 3se and 1se scattering lengths for e-
+ rb, cs and fr collisions, https://doi.org/10.1088/0953-4075/34/6/107 journal journal Journal of Physics B: Atomic, Molecular and Optical Physics volume 34, pages L195 (year
2001)NoStop
[Markson et al.(2016)Markson, Rittenhouse, Schmidt,
Shaffer, and Sadeghpour]https://doi.org/10.1002/cphc.201600932
author author S. Markson, author S. T. Rittenhouse, author R. Schmidt, author J. P. Shaffer, and author H. R. Sadeghpour, title title Theory of
ultralong-range Rydberg molecule formation incorporating spin-dependent
relativistic effects: Cs(6s)–cs(np) as case study, https://doi.org/https://doi.org/10.1002/cphc.201600932 journal journal ChemPhysChem volume
17, pages 3683 (year 2016)NoStop
[Eiles and Greene(2017)]PhysRevA.95.042515
author author M. T. Eiles and author C. H. Greene, title title Hamiltonian for the
inclusion of spin effects in long-range Rydberg molecules, https://doi.org/10.1103/PhysRevA.95.042515 journal journal Phys. Rev. A volume 95, pages 042515 (year 2017)NoStop
[Anderson et al.(2014)Anderson, Miller, and Raithel]Anderson_2014_angular_momentum_couplings
author author D. A. Anderson, author S. A. Miller, and author G. Raithel, title title Angular-momentum
couplings in long-range Rb_2 Rydberg molecules, https://doi.org/10.1103/PhysRevA.90.062518 journal journal Phys. Rev. A volume 90, pages 062518 (year 2014)NoStop
[Niederprüm et al.(2016)Niederprüm, Thomas, Eichert, and Ott]Niederpruem_2016_spin_flips
author author T. Niederprüm, author O. Thomas, author T. Eichert, and author H. Ott, title title Rydberg molecule-induced remote spin flips, https://doi.org/10.1103/PhysRevLett.117.123002 journal
journal Phys. Rev. Lett. volume 117, pages 123002 (year 2016)NoStop
[Nguyen et al.(2004)Nguyen,
Fléchard, Brédy, Camp, and DePaola]remi1
author author H. Nguyen, author X. Fléchard,
author R. Brédy, author H. A. Camp, and author B. D. DePaola, title title Recoil ion momentum spectroscopy using
magneto-optically trapped atoms, https://doi.org/10.1063/1.1775310 journal journal Review of Scientific Instruments volume
75, pages 2638 (year 2004)NoStop
[Blieck et al.(2008)Blieck,
Fléchard, Cassimi, Gilles,
Girard, and Hennecart]remi2
author author J. Blieck, author X. Fléchard,
author A. Cassimi, author H. Gilles, author
S. Girard, and author
D. Hennecart, title title A new magneto-optical trap-target recoil ion momentum spectroscopy
apparatus for ion-atom collisions and trapped atom studies, journal journal Review of Scientific Instruments volume 79, https://doi.org/10.1063/1.2994151
10.1063/1.2994151 (year 2008), note
103102NoStop
[Hubele et al.(2015)Hubele,
Schuricke, Goullon, Lindenblatt, Ferreira, Laforge,
Brühl, de Jesus, Globig,
Kelkar, Misra, Schneider,
Schulz, Sell, Song,
Wang, Zhang, and Fischer]remi3
author author R. Hubele, author M. Schuricke,
author J. Goullon, author H. Lindenblatt, author
N. Ferreira, author
A. Laforge, author E. Brühl, author V. L. B. de Jesus, author D. Globig, author A. Kelkar,
author D. Misra, author K. Schneider, author
M. Schulz, author M. Sell, author Z. Song, author X. Wang, author S. Zhang, and author
D. Fischer, title title Electron and recoil ion momentum imaging with a magneto-optically
trapped target, journal journal Review of
Scientific Instruments volume 86, https://doi.org/10.1063/1.4914040 10.1063/1.4914040 (year
2015), note 033105NoStop
[Geppert et al.(2021)Geppert, Althön, Fichtner, and Ott]State_changing
author author P. Geppert, author M. Althön,
author D. Fichtner, and author H. Ott, title title Diffusive-like redistribution in state-changing
collisions between Rydberg atoms and ground state atoms, https://doi.org/10.1038/s41467-021-24146-0 journal journal Nature Communications volume 12, pages 3900 (year 2021)NoStop
[Hummel et al.(2021b)Hummel, Eiles, and Schmelcher]conical_intersections
author author F. Hummel, author M. T. Eiles, and author P. Schmelcher, title title Synthetic dimension-induced conical
intersections in Rydberg molecules, https://doi.org/10.1103/PhysRevLett.127.023003 journal
journal Phys. Rev. Lett. volume 127, pages 023003 (year 2021b)NoStop
|
http://arxiv.org/abs/2307.01122v1
|
20230703155156
|
Design, fabrication, and characterization of electrostatic comb-drive actuators for nanoelectromechanical silicon photonics
|
[
"Thor August Schimmell Weis",
"Babak Vosoughi Lahijani",
"Konstantinos Tsoukalas",
"Marcus Albrechtsen",
"Søren Stobbe"
] |
physics.optics
|
[
"physics.optics",
"physics.app-ph"
] |
taswe@dtu.dk
Department of Electrical and Photonics Engineering, DTU Electro, Technical University of Denmark, Building 343, DK-2800 Kgs. Lyngby, Denmark
Department of Electrical and Photonics Engineering, DTU Electro, Technical University of Denmark, Building 343, DK-2800 Kgs. Lyngby, Denmark
NanoPhoton - Center for Nanophotonics, Technical University of Denmark, Ørsteds Plads 345A, DK-2800 Kgs. Lyngby, Denmark.
Department of Electrical and Photonics Engineering, DTU Electro, Technical University of Denmark, Building 343, DK-2800 Kgs. Lyngby, Denmark
Department of Electrical and Photonics Engineering, DTU Electro, Technical University of Denmark, Building 343, DK-2800 Kgs. Lyngby, Denmark
ssto@dtu.dk
Department of Electrical and Photonics Engineering, DTU Electro, Technical University of Denmark, Building 343, DK-2800 Kgs. Lyngby, Denmark
NanoPhoton - Center for Nanophotonics, Technical University of Denmark, Ørsteds Plads 345A, DK-2800 Kgs. Lyngby, Denmark.
Design, fabrication, and characterization of electrostatic comb-drive actuators for nanoelectromechanical silicon photonics
Søren Stobbe
August 1, 2023
===========================================================================================================================
Nanoelectromechanical systems offer unique functionalities in photonics: the ability to elastically and reversibly deform dielectric beams with subwavelength dimensions enables electrical control of the propagation of light with a power consumption orders of magnitude below that of competing technologies, such as thermo-optic tuning. We present a study of the design, fabrication, and characterization of compact electrostatic comb-drive actuators tailored for integrated nanoelectromechanical silicon photonic circuits. Our design has a footprint of 1.2×10^3 μm^2 and is found to reach displacements beyond 50 nm at 5 V with a mechanical resonance above 200 kHz, or, using different spring constants and skeletonization, a mechanical resonance above 2.5 MHz with displacements beyond 50 nm at 28 V. This is sufficient to induce very large phase shifts and other optical effects in nanoelectromechanical reconfigurable photonic circuits.
§ INTRODUCTION
Microelectromechanical systems (MEMS) have become ubiquitous in communication systems <cit.> and sensing <cit.>, and large-scale MEMS micromanipulators have been employed in areas ranging from biomedical applications <cit.> to scanning probe microscopy <cit.>. A central element of MEMS is actuation, and several mechanisms can be employed, such as electrostatic, piezoelectric, thermal, magnetic, or electrochemical actuation. One of the most commonly used mechanisms is electrostatic actuation <cit.>, where the electrostatic force arising from the capacitive coupling between two electrodes is used to transduce between the electrical and mechanical energy domains. Electrostatic actuators scale well with large surface areas and small gap sizes, are very power efficient, and can be realized with relatively simple nanofabrication involving lithography, etching, and underetching <cit.>. The simplest geometry of an electrostatic actuator is that of two parallel electrodes of which at least one is mechanically compliant, and an applied voltage generates an electrostatic force similar to that of a parallel-plate capacitor. However, the electrostatic force in such direct electrostatic actuators grows nonlinearly with both displacement and voltage, and they are therefore prone to electrostatic pull-in instabilities <cit.>. This has motivated the development of sliding actuators for which the electrodes are displaced laterally and slide next to each other, such that the electrostatic force, in the ideal case, is independent of displacement. This makes sliding actuators resilient against the pull-in instability, which limits the practical travel range of direct electrostatic actuators, in the direction of actuation. The actuation force can then be increased by constructing arrays of sliding electrodes, and such actuators are referred to as comb-drive actuators <cit.>.
The miniaturization of MEMS combined with advances in nanofabrication technologies have led to the vision of nanoelectromechanical systems (NEMS), which are not just scaled-down versions of MEMS but offer a wide range of new and different functionalities. For example, NEMS have less mass and higher surface-to-volume ratio compared to their MEMS counterparts, making them ideal for many sensing applications in fields such as biochemistry <cit.>, thermal sensing <cit.>, and particle mass spectroscopy <cit.>.
The prospects of NEMS are particularly strong within photonics because NEMS enable direct integration with silicon photonics or other planar photonics platforms where elastic deformation of photonic waveguides through electrostatic actuators built in the photonic device layer allows for filtering <cit.>, control of phase <cit.>, and routing of guided light <cit.>. This evidences great promise for photonic NEMS as a fast, compact, and energy-efficient platform for reconfigurable photonics, which can potentially outperform commonly used approaches such as thermo-optic technology on essentially all figures of merit <cit.>.
Most previous works on electrostatic NEMS have so far assumed that NEMS are governed by the same physics as MEMS, but it turns out that both the electrostatic and the material properties change at the nanoscale. For example, the electrostatic force in thin actuators is dominated by fringing fields rather than parallel fields <cit.>. It is therefore essential to establish accurate models and design rules for electrostatic NEMS. This in turn requires a direct comparison between experiment and theory for the displacement, operating bandwidth, and several other important parameters, which has so far been missing in the literature.
The ideal NEMS comb drive should have a compact footprint, low driving voltage, large travel range, and high operating frequency. However, the extreme sensitivity of optical fields to the displacement of boundaries of materials with high refractive indices such as silicon, means that the travel range is often less important for photonic NEMS. Balancing these parameters calls for a compact design with a high density of comb fingers.
Here we focus on developing compact comb-drive actuators with sub-micrometer displacements and high operating frequencies fabricated on silicon-on-insulator (SOI) platforms with a device-layer thickness of 220 nm, i.e., the standard thickness for silicon photonics. To this end, we use a compact folded-cantilever spring design together with a battering-ram body, as illustrated in Fig. <ref>(a). The gap between opposing comb fingers, g (as shown in Fig. <ref>(b)), determines the density of the comb fingers and strikes a balance between the electrostatic force at a given actuation voltage and the risk of lateral collapse and stiction.
§ THEORY
The electrostatic force, F_E, acting on an actuator is found as the derivative of the stored electrostatic potential energy <cit.>, F_E = (1/2)(∂C/∂x)V^2. In MEMS, the capacitance between fingers is often well approximated by the parallel-plate approximation due to the large thickness of the device layer compared to the gap between fingers. However, due to the thin device layer of our platform, fringing fields contribute significantly to the capacitance. While this changes the scaling of the capacitance with respect to g, the resulting electrostatic force remains largely independent of the displacement of the comb drive <cit.>. Figure <ref>(c) shows a contour plot of the electrostatic force generated by a comb drive with 72 finger pairs as a function of voltage, V, and gap size, g, calculated using finite-element modeling.
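As a rough cross-check of the magnitudes involved (not taken from the paper's finite-element model), the parallel-plate limit of the comb-drive force can be evaluated directly; the finger count and dimensions below are those quoted in the text, and since fringing fields are neglected this underestimates the force for a 220 nm device layer.

```python
import numpy as np

eps0 = 8.854e-12                 # vacuum permittivity, F/m
N, T, g = 72, 220e-9, 200e-9     # finger pairs, device-layer thickness, gap

def comb_force(V):
    """Parallel-plate estimate: each engaged finger pair has two sidewall
    capacitors, so dC/dx = 2*N*eps0*T/g, and F_E = 0.5 * dC/dx * V**2."""
    dCdx = 2 * N * eps0 * T / g  # ~1.4 nF/m for these dimensions
    return 0.5 * dCdx * V**2

print(f"{comb_force(5.0):.2e} N")  # ~1.8e-8 N at 5 V, before fringing fields
```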
Our comb-drive design is suspended by four folded cantilevers of width W and length L as shown in Fig. <ref>(d). Each folded cantilever consists of two long beams connected by a short truss. Assuming stiff connecting trusses, these folded cantilevers behave as four parallel connections of two guided cantilevers in series, resulting in a total spring constant estimated by Euler-Bernoulli beam theory <cit.> as
k_EB = 2ET(W/L)^3,
where E = 169 GPa is Young's modulus of silicon in the ⟨110⟩ direction <cit.> and T = 220 nm is the thickness of the device layer. This analytical model provides a simple initial estimate, but it cannot accurately capture the behavior of the system. A numerical finite-element approach is necessary to more precisely estimate the spring constants. Figure <ref>(e) shows a contour plot of the spring constant, k_FEM, of four folded-cantilever springs in parallel calculated with the finite-element method as a function of the width, W, and the length, L, of the folded cantilevers. This relation is important for designing comb-drive actuators because the spring constant directly impacts the trade-off between low-voltage and high-frequency operation.
The displacement, d, of a comb drive in steady state is determined by balancing the electrostatic force against the spring force,
d = ξ V^2,
where we define the comb-drive constant ξ = (1/(2k)) ∂C/∂d. Figure <ref>(f) shows a contour plot of the steady-state displacement of comb drives with spring constants ranging from 0.5 to 32 N/m and a range of forces varying as specified in Fig. <ref>(c).
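A minimal sketch of this steady-state relation, using the differential capacitance reported later in the paper (~2.4 nF/m for the doped devices) together with the measured spring constants; the 50 nm target stroke is the figure used in the conclusion.

```python
import numpy as np

dCdd = 2.4e-9                  # F/m, measured differential capacitance
for k in (0.4, 2.5, 17.7):     # N/m, measured spring constants k_m
    xi = dCdd / (2 * k)        # comb-drive constant, m/V^2
    V = np.sqrt(50e-9 / xi)    # voltage for a 50 nm steady-state stroke
    print(f"k = {k:5.1f} N/m -> 50 nm at {V:4.1f} V")
```

Consistent with the text, the softest springs reach 50 nm below 5 V, while the stiffest require roughly 28 V.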
Figure <ref> maps the design space for comb-drive actuators with thin device layers and illustrates the trade-off between displacement, spring constant, and actuation voltage over a parameter range that is typical for silicon photonics. This serves as a design guide for comb-drive actuators in silicon photonics and enables identifying the geometrical parameters that yield a desired performance. However, this model neglects a number of effects of both theoretical and practical nature. First, the small aspect ratio of silicon photonic NEMS makes them more sensitive to unwanted out-of-plane forces, such as the levitation force or the out-of-plane pull-in instability, depending on whether the substrate is grounded or floating <cit.>. Additionally, a number of works have reported a significant reduction <cit.> of Young's modulus of silicon relative to the bulk value for suspended devices with critical dimensions below 350 nm. In practice, we focus our work on springs with a width of 200 nm as this ensures that the in-plane spring constant exceeds the out-of-plane spring constant for a single suspended beam while minimizing unwanted effects associated with smaller critical dimensions. In any case, this calls for careful experimental studies, which we present in the following.
§ FABRICATION
We fabricate comb drives with N = 72 finger pairs, a gap between fingers of g = 200 nm, a spring width of W = 200 nm, and a spring length varying between L = [2.55, 3.21, 4.04, 5.09, 6.42, 8.08, 10.18] μm, corresponding to spring constants k_EB = [0.5, 1, 2, 4, 8, 16, 32] N/m according to (<ref>). We have also investigated finger gaps down to 50 nm, but we find that this often leads to in-plane collapses, and we therefore only present the results on 200 nm finger gaps.
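As a sanity check, Eq. (<ref>) can be evaluated for the fabricated spring lengths; the values below land roughly 10% above the quoted targets (in reverse order, since longer springs are softer), a residual plausibly absorbed by the truss correction of the Castigliano model introduced later.

```python
import numpy as np

E, T, W = 169e9, 220e-9, 200e-9   # Pa, m, m (values from the text)
L = np.array([2.55, 3.21, 4.04, 5.09, 6.42, 8.08, 10.18]) * 1e-6  # m
k_EB = 2 * E * T * (W / L)**3
print(np.round(k_EB, 2))  # ~[35.9, 18.0, 9.0, 4.5, 2.3, 1.1, 0.6] N/m
```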
Our comb drives are fabricated from nominally undoped SOITEC SOI wafers with a 220 nm device layer, a resistivity of ρ ∼ 10 Ω·cm specified by the manufacturer, and a 2 μm buried-oxide (BOX) layer. The substrate is spin-coated (ALLRESIST CSAR 62) and patterned by electron-beam lithography (JEOL JBX-9500FSZ, 100 kV). The pattern is then transferred into the device layer by reactive-ion etching (SPTS Pegasus). Afterwards, metal contacts are formed by liftoff using a negative UV-sensitive resist (AZ nLOF 2020) exposed with a maskless aligner (Heidelberg MLA100). The metal contacts are defined by electron-beam evaporation (Ferrotec Temescal FC-2000) of a 5 nm Cr adhesion layer followed by a 200 nm Au film. As the last step, the structures are released by a selective etch of the BOX layer with a temperature- and pressure-controlled vapor-phase HF etch (SPTS Primaxx uEtch).
Figure <ref>(a) shows a scanning electron microscope (SEM) image of a fabricated comb drive. The light gray areas in the corners of the image show the two metal contacts used to drive the actuator. The dark gray areas are the silicon device layer supported by the BOX layer underneath, and the gray areas are the sections of the device layer that have been released from the BOX layer by isotropic selective underetching. Built-in stress in the device layer strains the comb drive after underetching, and while the springs help absorb some of this strain, the built-in stress has caused the comb drive to buckle slightly up along its spine. This buckling causes the comb-drive fingers to disengage by a few nanometers, which in turn slightly lowers the electrostatic force. While the effect in this case is likely negligible, it could become significant if the spine is extended to accommodate more rows of fingers, which would require additional stress-release structures or other types of stress management. Figure <ref>(b) shows a zoom-in of the comb-drive fingers. The difference in brightness between the two sets of fingers is caused by SEM charging effects and shows that they are electrically isolated from each other. Figure <ref>(c) shows a periodic scale with a 200 nm period that runs along the spine of the comb drive and along the two anchored islands. This scale can be automatically located in an SEM image by searching for its known periodicity, and it allows measuring the displacement of the comb drive from scanning electron micrographs with single-nanometer precision by Fourier analysis <cit.>. Figure <ref>(d) shows a fabricated spring. Besides their mechanical function, the springs at the bottom of the comb drive serve as electrical connections between a nearby metal contact and the moving part of the comb drive.
Conventionally, comb drive actuators are doped, which for many applications can be done without introducing additional complexity to the fabrication process, whether by working with pre-doped SOI wafers or by fabricating actuators in polysilicon, which can be deposited with doping by low-pressure chemical vapor deposition (LPCVD) <cit.>. However, in photonic NEMS, indiscriminate doping of the device layer induces optical losses, and as such, a patterned doping process is needed, which increases the complexity of the fabrication process <cit.>. This motivates an investigation of when it is necessary to dope comb-drive actuators. One concern with undoped silicon device layers is the formation of Schottky barriers where metal contacts interface with the intrinsic silicon, as these barriers behave as diodes <cit.>. A minimum of two electrical contacts are needed to operate a comb drive, implying that the equivalent electrical circuit has two opposing Schottky diodes in series, such that one Schottky diode is always reversely biased.
This is normally not an issue for electrostatic actuators during quasi-static operations since the reverse leakage current is usually enough to drive the actuator, but the high impedance of the reversely biased diode, and thus the increased circuit response time (RC-time) can restrict the dynamic response of the actuator to frequencies far below the mechanical resonance frequencies <cit.>.
§ CHARACTERIZATION OF DYNAMIC PERFORMANCE
To study the effect of doping, we fabricate two sets of nominally identical comb-drive actuators and dope one set by diffusion doping using boron-doped silica to obtain a resistivity of ρ = 6×10^-4 Ω·cm.
The doped comb drives are not limited by the electrical impedance, and we can excite their mechanical resonances by applying an alternating-current (AC) signal. We measure the dynamic response by applying the AC drive in-situ in an SEM. To illustrate the capabilities of this in-situ technique, we refer to supplementary Visualization 1 for an animation composed of single SEM images of the comb-drive obtained for different values of an applied constant bias. Returning to the AC-measurements, we measure the amplitude of the blur <cit.> of the periodically spaced void features in the body of the comb drives.
Applying a voltage across two electrodes on a sample in an SEM deflects the electron beam much like the deflector coils do. With a direct-current (DC) drive, this can be corrected for by adjusting the deflector coils, but with AC signals with frequencies above the scan rate of the SEM, the beam deflection shows up as image blur that makes it harder to measure the blur created by the mechanical vibrations of the comb drive. We find that keeping the AC amplitude in the 0.1-1 range reduces the beam deflection to an acceptable level, but such a voltage range is too low to noticeably actuate the comb drives. We therefore apply a DC bias to operate the comb drive at a working point with larger d d/d V in order to increase the sensitivity. Since the electrostatic force acting on the comb drives scales with the square of the voltage, F_E∝ V^2, the force generated by an AC signal with a DC bias, V = V_0 + Asin(ω t), has two frequency components
F_E ∝ V^2 = V_0^2 + 2V_0 A sin(ω t) + A^2 (1 - cos(2ω t))/2,
where V_0 is the DC bias, A is the AC amplitude, ω is the angular frequency, and t is time. For an AC signal with zero DC bias, the force will oscillate at twice the frequency of the input signal, but for V_0 ≫ A the frequency of the generated force is dominated by a single frequency component, ω.
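The two frequency components can be made explicit numerically; the bias and amplitude below are illustrative only. For a real tone, each FFT bin holds half the corresponding amplitude, so the ω bin dominates the 2ω bin by the factor 4V_0/A.

```python
import numpy as np

V0, A = 3.0, 0.5                              # illustrative bias and AC amplitude, V
t = np.linspace(0, 1, 4096, endpoint=False)   # one period of a 1 Hz drive
F = (V0 + A * np.sin(2 * np.pi * t))**2       # F_E up to a constant prefactor
spec = np.abs(np.fft.rfft(F)) / t.size
# spec[0] = V0^2 + A^2/2, spec[1] = V0*A (half of 2*V0*A), spec[2] = A^2/4
print(spec[:3])                               # -> [9.125, 1.5, 0.0625]
```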
The increased sensitivity enabled by the DC bias allows us to image the oscillating comb drives without significant interference from beam deflections. The mechanical resonance frequencies corresponding to the observed peak amplitudes of the image blur are plotted in Fig. <ref>(a) as a function of the beam length, L. From the resonance frequency, we extract the measured spring constant as k_m = (2π f_0)^2m, where f_0 is the resonance frequency and m is the mass of the comb drive. The plot also shows the prediction from three analytical solutions commonly employed in the design and analysis of MEMS in order to probe their validity in the regime of photonic NEMS. The first model is Euler-Bernoulli beam theory, cf. Eq. (<ref>), but we include also calculations using Castigliano's theorem <cit.>, which takes the connecting trusses into account,
k_C = 2ET(W/L)^3 (4L + (W^3/W_T^3) L_T) / (4(L + (W^3/W_T^3) L_T)),
Finally, we consider Timoshenko beam theory <cit.> which takes shearing of the spring beams into account,
k_T = (L/(2κ TWG) + L^3/(2ETW^3))^-1,
where κ = 5/6 is the Timoshenko shear coefficient for a rectangular cross section and G = 79 GPa is the shear modulus of silicon in the ⟨110⟩ direction <cit.>. All three models give very similar results, in reasonable agreement with the measured eigenfrequencies, although with significant deviations in both the scaling and the prefactor. To ensure a comprehensive analysis, we also include finite-element calculations and compare them with the experimental and analytical models presented in Fig. <ref>(a). The spring constants calculated with the finite-element method capture the observed scaling but with a different prefactor, which could be due to built-in stress, or it may originate from the fabricated springs having a lower Young's modulus than expected due to surface effects associated with the small critical dimensions in our devices <cit.>. To find the exact reduction in stiffness, the numerically calculated spring constants are fitted with a power law, as indicated with a red curve in Fig. <ref>(a). The fitted exponent of this power law is then kept fixed as the power law is fitted to the experimental data with the prefactor as a free parameter. This results in an excellent agreement between theory (black line) and experiment (black dots). The prefactor extracted by this method corresponds to a 34±1% drop in stiffness compared to the expected value.
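A short sketch of the spring-constant extraction described above; the mass value is an assumed placeholder of the right order for a silicon comb drive of this footprint, not the measured one.

```python
import numpy as np

def k_from_resonance(f0, m):
    """Measured spring constant from a resonance: k_m = (2*pi*f0)^2 * m."""
    return (2 * np.pi * f0)**2 * m

m = 2.5e-13                          # kg, assumed comb-drive mass
print(k_from_resonance(200e3, m))    # ~0.39 N/m at a 200 kHz resonance
```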
§ CHARACTERIZATION OF STEADY-STATE BEHAVIOUR
We measure the displacement of the actuators as a function of applied DC voltage using image analysis of SEM images acquired at zero tilt angle of the periodic structures shown in Fig. <ref>(c). The result is shown in Fig. <ref>(b). For linear springs, the displacement scales with the square of the voltage, and all displacement curves would be parallel lines on log-log axes, which is confirmed by our experiments. While subtle, the displacement at a given applied voltage is slightly less for the undoped comb drives as compared to the doped devices. This is more apparent in a plot with linear axes, as shown in Fig. <ref>(c). The doped comb drives displace 20%, 37%, and 20% more than undoped, nominally identical comb drives for measured spring constants of k_m = 0.4, 2.5, and 17.7 N/m, respectively. This extra displacement could be due to a lower Young's modulus of the highly doped springs. However, it has been reported that even heavy doping only reduces Young's modulus of silicon by up to 3% <cit.>. Another explanation could be a reduced Debye screening in the undoped silicon <cit.>, which would result in a smaller differential capacitance, ∂C/∂d. To the best of our knowledge, this effect has not yet been experimentally investigated in electrostatic actuators. In any case, this shows that (nominally) undoped silicon works very well for DC operation of photonic NEMS.
We now turn to the extraction of the comb-drive constant, ξ = (1/(2k)) ∂C/∂d, which we have defined as the proportionality between V^2 and d, i.e., it can readily be extracted by fitting a quadratic function to the measured displacement curves in Fig. <ref>(b). The extracted comb-drive constants are plotted as a function of the measured spring constants, k_m, in Fig. <ref>(d), from which we can directly determine the differential capacitances. We find ∂C/∂d = 2.4 ± 0.1 nF/m for the doped comb drives and ∂C/∂d = 2.1 ± 0.1 nF/m for the undoped comb drives, where we have assumed identical spring constants for doped and undoped actuators.
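The fit itself is a one-parameter least-squares problem through the origin in V^2; a sketch with synthetic data standing in for the measured curves of Fig. <ref>(b) (the 3 nm/V^2 slope and 0.4 N/m spring constant are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
V = np.linspace(1, 10, 10)                        # drive voltages, V
d = 3.0e-9 * V**2 + rng.normal(0, 1e-9, V.size)   # synthetic displacements, m

xi = np.sum(d * V**2) / np.sum(V**4)              # least squares for d = xi*V^2
k = 0.4                                           # N/m, assumed spring constant
print(f"xi = {xi:.2e} m/V^2, dC/dd = {2 * xi * k:.2e} F/m")
```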
§ HIGH-FREQUENCY ACTUATORS
For high-speed applications, a comb drive with a high mechanical resonance frequency is required, and doping is needed to minimize the circuit impedance. The resonance frequency of a comb drive can be increased either by using stiffer springs or by reducing the mass of the comb drive. Skeletonizing a comb drive reduces mass at the cost of decreasing the stiffness of the comb-drive body. Figure <ref> presents a skeletonized comb drive based on our proposed design. For this comb drive, we measure a resonance frequency of 2.7 MHz when suspended by springs with a spring constant of 17.7 N/m, which, to the best of our knowledge, is the highest resonance frequency ever reported for a comb-drive actuator.
§ CONCLUSION
In conclusion, we propose and experimentally validate a comb-drive actuator designed for reconfigurable photonic integrated circuits. Our design concept allows for a wide range of customization balancing the desired performance and trade-offs between speed, displacement, and actuation voltage. For example, by choosing a low spring constant of k_m = 0.4 N/m, a displacement of 50 nm can be reached with actuation voltages below 5 V. Such a displacement is sufficient to reconfigure tunable photonic components such as phase shifters <cit.> and directional couplers <cit.> with low actuation voltages while still operating at frequencies above 200 kHz. Moreover, the proposed actuator can actuate stiff springs, enabling operation at MHz frequencies with moderate actuation voltages thanks to the tightly arranged interdigitated comb fingers. It is important to note that in a practical application, the displacement and the operating frequency of the comb-drive actuator can be reduced depending on the mass and stiffness of an attached external load. We observe that doping is necessary for high-speed applications, as the high impedance of Schottky barriers otherwise results in an electrical response time significantly slower than the mechanical response time.
§ ACKNOWLEDGEMENTS
The authors gratefully acknowledge financial support from
Innovation Fund Denmark (Grant No. 0175-00022 – NEXUS and Grant No. 2054-00008 – SCALE),
the Danish National Research Foundation (Grant No. DNRF147 – NanoPhoton),
Independent Research Fund Denmark (Grant No. 0135-00315 – VAFL),
the European Research Council (Grant No. 101045396 – SPOTLIGHT),
and the European Union's Horizon research and innovation programme (Grant No. 101098961 – NEUROPIC).
§ DATA AVAILABILITY
Data is available upon reasonable request.
§ COMPETING FINANCIAL INTERESTS
The authors declare no competing financial interests.
naturemag-etalnoitalics.bst
|
http://arxiv.org/abs/2307.02573v1
|
20230705181506
|
Analysis of a Programmable Quantum Annealer as a Random Number Generator
|
[
"Elijah Pelofske"
] |
quant-ph
|
[
"quant-ph"
] |
Analysis of a Programmable Quantum Annealer as a Random Number Generator
Elijah Pelofske
========================================================================
Quantum devices offer a highly useful function - that is generating random numbers in a non-deterministic way since the measurement of a quantum state is not deterministic. This means that quantum devices can be constructed that generate qubits in some uniform superposition and then measure the state of those qubits. If the preparation of the qubits in a uniform superposition is unbiased, then quantum computers can be used to create high entropy, secure random numbers. Typically, preparing and measuring such quantum systems requires more time compared to classical pseudo random number generators (PRNGs) which are inherently deterministic algorithms. Therefore, the typical use of quantum random number generators (QRNGs) is to provide high entropy secure seeds for PRNGs.
Quantum annealing (QA) is an analog type of quantum computation that is a relaxed form of adiabatic quantum computation and uses quantum fluctuations in order to search for ground state solutions of a programmable Ising model. In this article we present extensive experimental random number results from a D-Wave 2000Q quantum annealer, totaling over 20 billion bits of QA measurements, which is significantly larger than previous D-Wave QA random number generator studies have used. Modern quantum annealers are susceptible to noise from environmental sources and calibration errors, and are not in general unbiased samplers. Therefore, it is of interest to quantify whether noisy quantum annealers can effectively function as an unbiased QRNG. The amount of data that was collected from the quantum annealer allows a comprehensive analysis of the random bits to be performed using the NIST SP 800-22 Rev 1a testsuite. The randomness tests show that the generated random bits from the D-Wave 2000Q are biased, and not unpredictable random bit sequences.
§ INTRODUCTION
Random number generation (RNG) is a very important capability in information computing. In particular unbiased random number generation is extremely important in many computing applications. Pseudo-Random Number Generators (PRNGs) are deterministic and very fast software level algorithms that can reliably generate random numbers. True Random Number Generators (TRNGs) are based on a physical property of a system that makes the random number generation inherently non-deterministic. Quantum systems have this property of non-determinism where it is not possible to know deterministically what the measured state of a quantum system will be before it has been measured.
Testing for randomness, in particular secure and unbiased randomness, is not directly possible. Instead, one can test for patterns and biases that are clearly not random <cit.>. If a proposed RNG passes enough of these tests, which are designed to detect non-random data, then one can be reasonably confident in the ability of the RNG to generate uniformly random numbers.
One of the types of programmable quantum computers that have become available to test, typically as cloud computing resources, are D-Wave quantum annealers. Quantum annealing is a specialized type of quantum computation that aims to sample the optimal solution of a combinatorial optimization problem, and is typically implemented using the transverse driving Hamiltonian <cit.>. D-Wave quantum annealers are physically implemented using programmable superconducting flux qubits <cit.>. Quantum annealers, and more generally quantum computers, are potentially interesting as secure entropy sources for generating random numbers because of the inherent stochasticity of quantum states - there is not a deterministic way to compute what the measured state will be of an arbitrary quantum state. For this reason, quantum computers, and more generally physical sources of measurements of quantum information, are True Random Number Generators (TRNG's) (or QRNG's) <cit.>.
The primary reason that modern quantum annealers are not perfect random number generators is because there are a large number of sources of error and bias in the computation, for example the spin bath polarization effect <cit.> can cause sequential anneal-readout cycles to have self correlations (in time), and programmed coefficients (even if they are 0) have slightly different effective weights on the hardware <cit.>. Furthermore, it has been shown that modern D-Wave quantum annealers have a measurable performance change over time <cit.>. There have also been cross-qubit correlations observed on a D-Wave 2000Q chip <cit.> possibly due to cross-talk errors. There have been studies which aim to reduce biases and noise present in minor-embedded QA computations, which in the case of reducing biases in the constraint of the graph partitioning problem results in effectively attempting to create an unbiased QA RNG sampler ref. <cit.>. D-Wave quantum annealers have been evaluated for the possibility of utilizing them as TRNG [Sometimes the acronym that is used for quantum devices generating random numbers is Quantum Random Number Generator (QRNG)] <cit.>.
Quantum random number generators in general are a topic of much interest, for example there have been several studies which examined using gate model quantum computers as random number generators <cit.>, boson sampling <cit.>, using quantum walks to generate random numbers <cit.>, and device-independent secure random number generation <cit.>. The idea of using random dense quantum volume circuits as random number generators in the gate model setting has also been proposed <cit.>.
This paper presents the most comprehensive review of using a quantum annealer as a random number generator that has been performed to date, totalling over 20 billion bits of qubit measurements and testing 8 different QA device settings for how they impact the measured bits. In particular, this very large dataset allows all of the NIST SP 800-22 randomness tests to be executed on the data (some of the tests have a minimum bit-length requirement). This has not been possible before for quantum annealing random bits <cit.> or, more generally, for cloud-accessible quantum computers.
§ METHODS
Section <ref> details the Quantum Annealing implementation details, and Section <ref> details the randomness testsuite that is used.
§.§ Quantum Annealing
The computation performed by D-Wave quantum annealers is described by eq. (<ref>), and eq. (<ref>) describes the discrete optimization Ising model that a user can program to be sampled by the quantum annealer (the quadratic coefficients are subject to the constraint of the native connectivity of the quantum annealing hardware). Eq. (<ref>) is just a slight re-formulation of the Ising model defined in the second summation term of eq. (<ref>). The goal of the quantum annealer is to find a minimum variable assignment vector z given the objective function Eq.(<ref>). The variable states can be either {0, 1}^n, in which case the combinatorial optimization problem is called a Quadratic Unconstrained Binary Optimization (QUBO) problem, or the variables can be spins {+1, -1}^n, in which case the combinatorial optimization problem is called an Ising model.
H = - A(s)/2 ( ∑_i σ̂_x^(i) ) + B(s)/2 ( ∑_i h_i σ̂_z^(i) + ∑_i>j J_i,j σ̂_z^(i) σ̂_z^(j) )
f(z_1,…,z_n) = ∑_i=1^n h_i z_i + ∑_i<j J_ij z_i z_j,
The D-Wave quantum annealer that is used to generate random bits is a D-Wave 2000Q; the Chimera hardware graph for this device is shown in Figure <ref>. The simplest way to generate random bits using a quantum annealer is simply to set the user-programmed coefficients for all linear terms (e.g. hardware qubits) and quadratic terms (e.g. hardware couplers) to 0, meaning that only the transverse-field terms are present in the computation (ideally), which means that the qubits are in a uniform superposition, while the computation is coherent, during the anneal. There are certainly more complicated approaches that could be employed with the goal of extracting good random bits, such as random circuit sampling on gate-model devices <cit.> or tuning device biases to improve sampling of balanced partitions for the graph partitioning problem <cit.>; however, RNGs generally need to be as fast as possible, so minimizing the complexity of the computation is a reasonable design goal. Explicitly, the Ising model (variable states are ∈{+1, -1}) that we will sample is given in eq. (<ref>).
f(z_1,…,z_n) = ∑_i=1^n 0 z_i + ∑_i<j 0 z_i z_j,
In many QRNG systems the readout time is much longer than that of comparable PRNGs, and for cloud-based quantum annealers this is also an important aspect to consider. In the experiments we perform, the total annealing time will be varied from 1 microsecond up to 2000 microseconds (1 microsecond is the shortest annealing time available on the D-Wave 2000Q devices). A practically relevant question is whether the 1-microsecond annealing times can give high-quality random bits, because this utilizes a relatively small amount of compute time (certainly compared to longer annealing times).
The main D-Wave parameter that will be varied is the annealing time, which will take the values 1, 10, 100, and 2000 microseconds. These annealing times span the range allowed on the chip: 1 microsecond is the shortest and 2000 microseconds the longest available annealing time. The other parameter that will be tested is turning on server-side classical post-processing, which aims to improve sampling (although in this specific case, the sampling is being done on an all-zero-coefficient Ising model). In order to turn on this server-side post-processing, the user-facing post-processing option was enabled [<https://docs.dwavesys.com/docs/latest/c_qpu_pp.html>]. We test both turning this option on and off. The reasoning is that we would ideally want the quantum annealer to be able to produce unbiased random bits without this post-processing; however, it may be the case that the classical post-processing helps reduce bias in the samples, in which case it would be interesting to quantify this. Therefore, in total there will be 8 datasets, each using a different quantum annealer parameter choice. Each of the datasets will be strictly sequential in time - e.g. the order of the bits will not be changed by some other entropy source. This time-series representation of the data is especially important since it has been shown that there are long-term trends that can be observed in current D-Wave quantum annealers <cit.>. Additionally, the exact ordering of the bits within each anneal (e.g. whole-chip readout cycle) is strictly based on the logical qubit indexing within the hardware, which is fixed for all samples but is arbitrarily set. Each anneal-readout cycle is concatenated with the next anneal-readout cycle that was executed in time - no other source of entropy is present in the data. This is relevant because, for example, it could be the case that a better ordering of the qubit readouts could be used - e.g. based on how close neighboring qubits are to each other, as determined by the hardware couplers. In particular, the measured bits corresponding to qubits which are connected by couplers to neighboring qubits in the hardware graph (or have an unknown source of cross-talk error) may have slightly more correlation with those qubits, and therefore produce biased bitstrings.
In order to mitigate spin bath polarization correlations, all of the data is constructed by sequentially calling the D-Wave backend for a single anneal-readout cycle (i.e. instead of measuring many anneals in a single job). Each job is sampled as an Ising model, meaning that spins are the measured states (although whether the model was specified as QUBO or Ising would in principle have no impact on the results). Additionally, although the bits are all time ordered, because of network interruptions or device power losses there are gaps in the time of the sequential random bit sampling.
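For concreteness, the sampling loop just described might look as follows with D-Wave's Ocean SDK; the parameter names follow the public Ocean interface, but solver selection, authentication, and the exact post-processing flag used in the study are omitted or assumed here.

```python
from dwave.system import DWaveSampler

sampler = DWaveSampler()                    # connects to an available QPU
h = {q: 0.0 for q in sampler.nodelist}      # all linear coefficients zero
J = {}                                      # no couplers programmed

bits = []
for _ in range(1000):                       # one anneal-readout cycle per job
    ss = sampler.sample_ising(h, J, num_reads=1, annealing_time=1,
                              programming_thermalization=0,
                              readout_thermalization=0)
    sample = ss.first.sample                # {qubit index: measured spin}
    bits.extend(0 if sample[q] == -1 else 1 for q in sorted(sample))
```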
The parameters used for the 8 datasets are as follows:
* Test 1 uses server side classical post-processing, and an annealing time of 1 microsecond.
* Test 2 uses server side classical post-processing, and an annealing time of 2000 microseconds.
* Test 3 uses server side classical post-processing, and an annealing time of 10 microseconds.
* Test 4 uses server side classical post-processing, and an annealing time of 100 microseconds.
* Test 5 uses default sampling with no server side classical post-processing, and an annealing time of 1 microsecond.
* Test 6 uses default sampling with no server side classical post-processing, and an annealing time of 2000 microseconds.
* Test 7 uses default sampling with no server side classical post-processing, and an annealing time of 10 microseconds.
* Test 8 uses default sampling with no server side classical post-processing, and an annealing time of 100 microseconds.
For all 8 datasets, the programming_thermalization and readout_thermalization parameters are set to 0 microseconds in order to remove any thermalization effects (beyond the thermalization that occurs after the qubits lose coherence; the qubit coherence times are estimated to be on the order of tens of nanoseconds <cit.>). All other parameters are set to their defaults. The motivation for evaluating these different parameter choices is the following.
* Although in general increasing annealing times on D-Wave quantum annealers results in a better sampling success rate for combinatorial optimization problems, these long annealing times are well outside the qubit coherence times of current D-Wave quantum annealers <cit.>. Therefore, the longer annealing times are using thermalization to marginally improve the sampling success probability <cit.>. However, in this case there is no combinatorial optimization problem being sampled - therefore, in principle, longer anneal times may accumulate more errors in the computation, in particular more biases in the random sampling computation we aim to perform. It may therefore be advantageous to sample using the shortest annealing times available on the hardware - testing whether this is true is the purpose of varying the annealing time. Furthermore, the sampling rate is extremely important for random number generation - faster random bit sampling is more useful than slower sampling. Therefore, if the shorter annealing times produce high-entropy random bits, this would be preferable to using longer annealing times.
* The server-side classical post-processing is not ideal, since we wish to evaluate whether the bare quantum annealing hardware can produce good random bits. However, this post-processing is intended to improve sampling of combinatorial optimization problems, and in this case there is nothing to optimize with respect to energy. Nevertheless, it is an interesting question to consider whether there is a clear difference between the QA sampling with and without the server-side post-processing for sampling optimization.
§.§ Testing for randomness
The randomness tests that will be applied to the data are all of the tests from SP 800-22 Rev 1a by the National Institute of Standards and Technology <cit.>, titled A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications. This testsuite contains 15 randomness tests, two of which contain several sub-tests. In total, each of the 8 datasets will be tested against 38 randomness tests, each giving a p-value output. For the purposes of maintaining consistency, and following the original NIST SP 800-22 test definitions <cit.>, a computed P-value which is ≥ 0.01 accepts the sequence as being random, and otherwise the sequence is considered non-random. This p-value criterion is applied to all of the randomness tests. In the tests where there are multiple computed P-values, such as where there is a forward and a backward mode of operation, all P-values must be greater than or equal to 0.01 in order for the sequence to be considered random. The Serial test, for instance, outputs two P-values.
The implementation used for this analysis is the Python 3 package nistrng [<https://github.com/InsaneMonster/NistRng>]; this package was chosen primarily for its compatibility with NumPy <cit.> arrays, which was necessary for the size of the datasets being tested. Other implementations that are, for example, based on casting the bits to integers do not scale well to these large dataset sizes.
The randomness test implementation details and references are not enumerated here - all details can be found in ref. <cit.> along with the linked open source code implementations.
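A minimal sketch of driving the battery with the nistrng package mentioned above; the call pattern follows that package's documented interface, and the random integers stand in for the QA-generated data.

```python
import numpy as np
from nistrng import (pack_sequence, check_eligibility_all_battery,
                     run_all_battery, SP800_22R1A_BATTERY)

sequence = np.random.randint(-128, 128, 125_000, dtype=int)  # placeholder data
binary = pack_sequence(sequence)               # unpack into a binary sequence
eligible = check_eligibility_all_battery(binary, SP800_22R1A_BATTERY)
for result, _elapsed in run_all_battery(binary, eligible, False):
    status = "PASS" if result.passed else "FAIL"
    print(f"{result.name}: score={result.score:.3f} {status}")
```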
§ RESULTS
Table <ref> shows the complete randomness testsuite data for the 8 QA implementation variations. Table <ref> shows the total size of each of the 8 datasets. The threshold for failing each randomness test varies depending on the test, but a p-value less than 0.01 definitely shows that the dataset fails that randomness test. The result is that there is no QA device setting that generates random bit strings that pass all of the randomness tests.
Notably, the server-side classical post-processing did improve the random bitstrings, in the sense that more of the tests passed when that post-processing was applied. Also very notable is that the raw, non-post-processed QA data failed the monobit test, arguably the most basic randomness test that can be applied. This shows clearly that there was too much bias in the computation on the D-Wave 2000Q device to produce high-quality random bit sequences.
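The monobit failure is easy to reproduce in miniature: at these dataset sizes, even a per-bit bias of a few parts in 10^4 is decisively rejected. The bias value below is illustrative, not the measured one.

```python
import numpy as np
from math import erfc, sqrt

def monobit_p_value(bits):
    """NIST SP 800-22 frequency (monobit) test."""
    s_obs = abs(np.sum(2 * bits.astype(np.int64) - 1)) / sqrt(bits.size)
    return erfc(s_obs / sqrt(2.0))

n = 10_000_000
fair = (np.random.random(n) < 0.5).astype(np.int8)
biased = (np.random.random(n) < 0.5005).astype(np.int8)
print(monobit_p_value(fair))     # typically well above 0.01: passes
print(monobit_p_value(biased))   # typically below 0.01: fails
```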
§ DISCUSSION AND CONCLUSION
Even if QRNGs based on near-term devices are fundamentally non-deterministic, noise present in the computation can still produce biased random bitstrings. This is what is observed in this D-Wave quantum annealer data. This is not unexpected, especially given the observed trends over time on multiple D-Wave quantum annealers <cit.>. However, it is important to note that this type of biased random sampling is very likely to occur with other Noisy Intermediate-Scale Quantum (NISQ) computers <cit.>. It is necessary that extensive tests, such as the ones presented in this paper, be executed in order for such quantum devices to pass the threshold of being unbiased random-bit samplers. Importantly, even large testsuites cannot absolutely determine that the generator is indeed random - there are only tests which can show that a bit sequence is not random; in other words, the null hypothesis can never be proven to be true, it can only be observed to fail. Indeed, it has been shown that the NIST SP 800-22 testsuite is not sufficiently rigorous for verifying randomness <cit.>. However, it does serve as a reasonable minimum-threshold test, which in this case the D-Wave 2000Q device did not pass.
Interestingly, within each QA dataset there are sometimes only a few tests which failed, but on the whole many of the tests were passed with p-values much greater than 0.01. Evaluating these sources of random numbers using more comprehensive testsuites, such as dieharder <cit.> would be good - the limitation is that those tools require a significant amount of data to be analyzed (more than used in this analysis), which is currently not feasible to obtain using cloud based quantum computer access.
A potentially interesting question is how ideal quantum annealing, with no noise or environment interaction, would sample the same Ising model with all Ising coefficients set to 0 (i.e. only the transverse-field Hamiltonian).
Another potentially interesting analysis of this existing data would be to determine whether there are strong cross-qubit correlations on the chip. If such correlations exist, then this would indicate cross-talk errors.
§ ACKNOWLEDGMENTS
This work was supported by the U.S. Department of Energy through the Los Alamos National Laboratory. Los Alamos National Laboratory is operated by Triad National Security, LLC, for the National Nuclear Security Administration of U.S. Department of Energy (Contract No. 89233218CNA000001). Research presented in this article was supported by the NNSA's Advanced Simulation and Computing Beyond Moore's Law Program at Los Alamos National Laboratory. This research used resources provided by the Darwin testbed at Los Alamos National Laboratory (LANL) which is funded by the Computational Systems and Software Environments subprogram of LANL's Advanced Simulation and Computing program (NNSA/DOE). The research presented in this article was supported by the Laboratory Directed Research and Development program of Los Alamos National Laboratory under project numbers 20220656ER and 20190065DR. This research used resources provided by the Los Alamos National Laboratory Institutional Computing Program, which is supported by the U.S. Department of Energy National Nuclear Security Administration under Contract No. 89233218CNA000001. This work has been assigned the technical report number LA-UR-23-23112.
|
http://arxiv.org/abs/2307.01457v1
|
20230704033239
|
Scrutinizing the Primordial Black Holes Interpretation of PTA Gravitational Waves and JWST Early Galaxies
|
[
"Yann Gouttenoire",
"Sokratis Trifinopoulos",
"Georgios Valogiannis",
"Miguel Vanvlasselaer"
] |
astro-ph.CO
|
[
"astro-ph.CO",
"hep-ph"
] |
yann.gouttenoire@gmail.com
School of Physics and Astronomy, Tel-Aviv University, Tel-Aviv 69978
trifinos@mit.edu
Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
gvalogiannis@g.harvard.edu
Department of Physics, Harvard University, Cambridge, MA, 02138, USA
miguel.vanvlasselaer@vub.be
Theoretische Natuurkunde and IIHE/ELEM, Vrije Universiteit Brussel, & The International Solvay Institutes, Pleinlaan 2, B-1050 Brussels, Belgium
Recent observations have granted us two unique insights into the early universe: the presence of a low-frequency stochastic gravitational wave background detected by the NANOGrav and Pulsar Timing Array (PTA) experiments, and the emergence of unusually massive galaxy candidates at high redshifts reported by the James Webb Space Telescope (JWST). In this letter, we consider the possibility that both observations have a common origin, namely primordial black holes (PBHs) in the mass range between 10^6 M_⊙ and 10^13 M_⊙. While superheavy PBHs act as seeds of accelerated galaxy formation capable of explaining the extreme JWST galaxies, they can also form binaries whose mergers source gravitational waves that can potentially be identified as the PTA signal. The analysis is performed taking into account the constraints on the relevant region of the PBH parameter space, including the novel bound imposed by the so-called Ultraviolet Luminosity Function of galaxies observed by the Hubble Space Telescope. We conclude that the PTA and JWST interpretations in terms of PBH binary mergers and a Poissonian gas of PBHs, respectively, are strongly excluded.
Scrutinizing the Primordial Black Hole Interpretation of PTA Gravitational Waves and JWST Early Galaxies
Miguel Vanvlasselaer
August 1, 2023
========================================================================================================
§ INTRODUCTION
The North American Nanohertz Observatory for Gravitational Waves (NANOGrav)<cit.> combined with the other Pulsar Timing Array (PTA) Collaborations <cit.>, EPTA <cit.>, PPTA <cit.>, CPTA <cit.> and IPTA <cit.> have recently released further evidence for the Hellings-Downs angular correlation in the common-spectrum process. This points toward the existence of a Gravitational Wave (GW) background in the nHz range permeating the universe. However, the origin of this background has not yet been established with certainty. Consequently, these groundbreaking results have sparked a lot of new physics speculation <cit.>.
A different lens through which we can gain glimpses into the early cosmic evolution is presented by the James Webb Space Telescope (JWST). Initial observations have already reported photometric evidence of massive galaxies at unexpectedly high redshifts 7<z<12 <cit.>. A subset of them has been recently spectroscopically confirmed <cit.>, and large cosmological hydrodynamical simulations have demonstrated compatibility with existing models of galaxy formation <cit.>. However, the status of extreme galaxy candidates with stellar mass as high as 10^11 M_⊙ <cit.> still remains under investigation, and if their distance estimates prove accurate, they would pose a significant challenge to ΛCDM itself <cit.>. Also in this case, explanations beyond the standard cosmological paradigm have been postulated <cit.>.
Primordial black holes (PBHs) emerge as one of the most long-studied scenarios <cit.>, capable of leaving distinctive imprints on cosmic history. Depending on the fraction of their abundance with respect to the total dark matter (DM), f_PBH = Ω_PBH/Ω_DM, the spectrum of possible PBH masses M_PBH spans an enormous range which has been tested by various experiments (for comprehensive reviews see refs. <cit.>). Moreover, as PBHs are formed in the early universe, they can form gravitationally bound binaries via several mechanisms <cit.>. Once decoupled from the expansion of the universe, those binaries continuously emit GWs until a last spectacular burst when they finally merge. For stellar-mass BHs, such mergers have already been observed by earth-based interferometers <cit.>, and several of those mergers could be due to the coalescence of PBHs <cit.>.
PBHs heavier than 100 M_⊙ are of particular interest due to their influence on the growth of massive objects and structures in the early universe. For example, it is a well-known fact that supermassive black holes (SMBHs) with masses above 10^5 M_⊙ reside within galactic nuclei <cit.>. Naturally, it has been proposed that PBHs can be their progenitors, reaching these masses either by merging and accretion <cit.> or by direct collapse of primordial fluctuations <cit.>. In the latter case, supermassive PBHs are constrained to be less than 𝒪(0.1%) of the DM, and since they would already be present from the dawn of matter domination, they can act as cosmic seeds boosting galaxy formation <cit.>. Moreover, a subdominant component of DM can consist of stupendously large PBHs, which are heavier than 10^12 M_⊙ <cit.> and may traverse the intergalactic medium.
In this letter, we focus on supermassive PBHs in the mass range 10^6<M_ PBH/M_⊙<10^13 and assess the viability of the following two scenarios: i) they bind in binary systems which leads to late-time merging and radiation of GWs that are detectable by PTA experiments <cit.>,
ii) they are responsible for the generation of the early galaxies reported by JWST <cit.>.
For the first case, we calculate the GW energy spectrum induced by binary mergers and then perform a Bayesian analysis to determine the posterior distribution compatible with the NANOGrav <cit.> and IPTA DR2 <cit.> signals.
For the second case, the accelerated galaxy formation in the presence of PBHs is investigated using both the Poisson and seed effects <cit.> and we identify the PBH populations that can sufficiently seed a large number of massive galaxies at z ∼ 10.
The results of our analysis are presented in combination with all relevant cosmological and astrophysical constraints. Notably, we consider for the first time in the PBH literature a constraint inferred from measurements of the Hubble Space Telescope (HST) and encoded in the so-called UV galaxy luminosity function (UV LF) <cit.>. Observations of UV-bright galaxies collected by the HST <cit.> trace the universe at redshifts z = 4-10, significantly overlapping with the ones performed by the JWST. In the context of our combined analysis, we find that the HST UV LF together with the large-scale structure (LSS) constraints rule out the PTA interpretation in terms of GWs from PBH mergers as well as the JWST interpretation in terms of Poissonian density fluctuations sourced by PBHs.
§ FITTING THE PTA SIGNALS WITH GRAVITATIONAL WAVES FROM PBH BINARY MERGERS
As we already mentioned, supermassive PBHs could form in the early universe via several mechanisms, with a mass set by the horizon scale at formation, M_PBH ≃ M_⊙ (5 × 10^12/(1+z_f))^2 <cit.>, where z_f is the redshift at the formation time. This means that PBHs of mass up to 𝒪(10^18 M_⊙) can form during the radiation-dominated era. Immediately after their formation in the early universe, PBHs are sparsely distributed in space, with mean separation much larger than the Hubble scale, l_PBH(z_f) ≫ H^-1(z_f). However, due to the expansion of the universe, the mean separation between PBHs, l_PBH(t) ∝ t^(1/2), falls below the Hubble distance H^-1 ∝ t before matter-radiation equality, and several PBHs can coexist in the same Hubble patch, leading to the possibility of forming decoupled pairs of two PBHs. In principle, PBH binaries can form both before and after matter-radiation equality (early <cit.> and late-time <cit.> formation); however, the early-formation contribution dominates for the PBH populations considered here, see e.g. <cit.>. We present the calculation of the distribution of binaries and of the associated merging rate, ℛ(z), in appendix <ref>.
After their formation, the two PBHs will orbit around each other and merge approximately after a time <cit.>
t_m ≃ (3/170) a^4 (1-e^2)^(7/2)/(G^3 M_PBH^3) ,
which depends on the eccentricity e of the orbit of the two PBHs and on its semi-major axis a. Binaries with abundance f_PBH ≃ 5 × (10^11 M_⊙/M_PBH)^(5/16) typically merge today, t_m ≃ 13.8 Gyr.
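As a quick numerical illustration of the merging time above, the following is a minimal Python sketch (not part of the original analysis pipeline) that restores the factors of c absorbed by the G = c = 1 convention used in the text; the input values in the example are purely illustrative.

```python
import numpy as np

# Physical constants (SI units)
G = 6.674e-11       # m^3 kg^-1 s^-2
C = 2.998e8         # m s^-1
M_SUN = 1.989e30    # kg
PC = 3.086e16       # m
GYR = 3.156e16      # s

def merger_time_gyr(a_pc, e, m_pbh_msun):
    """Coalescence time of an equal-mass PBH binary (eq. above), with
    the factors of c restored that the G = c = 1 convention absorbs."""
    a = a_pc * PC
    m = m_pbh_msun * M_SUN
    t_m = (3.0 / 170.0) * C**5 * a**4 * (1.0 - e**2) ** 3.5 / (G**3 * m**3)
    return t_m / GYR

# Illustrative example: a wide, highly eccentric 10^9 M_sun binary
print(merger_time_gyr(a_pc=20.0, e=0.99, m_pbh_msun=1e9))
```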
*Gravitational waves from PBH binary mergers.
The energy density of the stochastic gravitational waves from PBH binaries reads <cit.>
h^2 Ω_GW(f) = f/(ρ_c/h^2) ∫_0^∞ dz ℛ(z)/[(1+z)H(z)] dE_GW(f')/df' |_{f'=(1+z)f} ,
where ρ_c/h^2 = (3.0 meV)^4 is the Hubble-rescaled critical density at present time, dE_GW(f')/df' is the GW power emitted by a circular, GW-driven binary, f' = (1+z)f is the rest-frame frequency, and f the observed one.
We find that the GW signal is well fitted by the broken power-law
Ω_GW h^2 ≃ Ω_peak S(f) Θ(2f_peak - f) ,
where the spectral function is
S(f) = f_peak^b f^a / (b f^((a+b)/c) + a f_peak^((a+b)/c))^c ,
the peak amplitude is
Ω_peak ≃ 0.05 f_PBH^3 (1.5 M_PBH/10^12 M_⊙)^(-0.3) ,
and the peak frequency is f_peak ≃ 5000 Hz × (M_⊙/M_PBH). The fitting parameters are a = 0.7, b = 1.5, c = 0.9. We refer the reader to appendix <ref> for more details.
The step function Θ is introduced to cut the GW spectrum at frequencies above the ring-down frequency f > 2f_ peak.
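For concreteness, the fit above can be evaluated with a few lines of Python; this is a sketch assuming the reconstructed form of S(f) and the peak frequency in Hz given above, with illustrative inputs.

```python
import numpy as np

def omega_gw_fit(f_hz, m_pbh_msun, f_pbh, a=0.7, b=1.5, c=0.9):
    """Broken power-law fit to h^2 Omega_GW from PBH binary mergers
    (eqs. above); f_hz in Hz, m_pbh_msun in solar masses."""
    f_peak = 5000.0 / m_pbh_msun                                  # Hz
    omega_peak = 0.05 * f_pbh**3 * (1.5 * m_pbh_msun / 1e12) ** (-0.3)
    s = f_peak**b * f_hz**a / (
        b * f_hz ** ((a + b) / c) + a * f_peak ** ((a + b) / c)
    ) ** c
    # step function: cut above the ring-down frequency 2 f_peak
    return np.where(f_hz < 2.0 * f_peak, omega_peak * s, 0.0)

# Illustrative PTA-band evaluation for M_PBH = 10^9 M_sun, f_PBH = 10^-2
f = np.logspace(-9.5, -6.5, 200)
print(omega_gw_fit(f, 1e9, 1e-2).max())
```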
On top of this estimate, we consider the following two effects:
Environmental effects.
At very low frequencies (IR), the assumption of GW-driven energy loss breaks down and interactions with the environment generate additional energy losses which change the spectral shape of the GW background <cit.>. Following <cit.>, we discard the region of frequencies lower than
f_min = (T_max/δ_2)^(-3/8) , δ_2 = 5/(256 π^(8/3)) (G M_c)^(-5/3) ,
where the choice T_max = 75 Myr corresponds to the transition from the stellar-scattering-dominated phase to the GW-dominated one <cit.>.
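A minimal sketch of this IR cut-off, with factors of c restored (the text uses G = c = 1); the example mass is illustrative.

```python
import numpy as np

G = 6.674e-11; C = 2.998e8; M_SUN = 1.989e30
MYR = 3.156e13   # s

def f_min_hz(m_pbh_msun, t_max_myr=75.0):
    """IR cut-off of eq. above, f_min = (T_max/delta_2)^(-3/8), with
    delta_2 = 5/(256 pi^(8/3)) (G M_c/c^3)^(-5/3) in SI units."""
    m_c = m_pbh_msun * M_SUN / 2**0.2        # chirp mass M_c = M_PBH/2^(1/5)
    delta_2 = 5.0 / (256.0 * np.pi ** (8.0 / 3.0)) \
        * (G * m_c / C**3) ** (-5.0 / 3.0)
    return (t_max_myr * MYR / delta_2) ** (-3.0 / 8.0)

print(f_min_hz(1e9))   # ~ sub-nHz for a 10^9 M_sun population
```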
Discrete versus continuous signal.
Eq. (<ref>) assumes a smooth distribution of sources. However, above a frequency f_max, the number of sources per frequency bin can become smaller than one, N(f, Δf) < 1, and the assumption of continuity in eq. (<ref>) breaks down <cit.>. The stochastic signal is replaced by a popcorn noise which could in principle be subtracted <cit.>. We follow the approach of <cit.> to determine when N(f, Δf) < 1.
The coalescence rate ℛ can be related to the comoving number of binaries emitting in a logarithmic frequency interval, within the redshift interval [z, z+dz], via
dN_mergers/(dz dlog f) = -ℛ(t) (dV_c/dz) (dt/dlog f) .
We can further define τ_r = dt/dlog f, known as the residence time <cit.>, the amount of time that each binary spends emitting in a logarithmic frequency interval, which takes the form <cit.>
τ_r = 5/(96 π^(8/3)) (G M_c)^(-5/3) f^(-8/3) ,
where M_c = M_PBH/2^(1/5) is the chirp mass. Since dV_c/dz = 4π d_c^2(z)/H(z), where d_c is the comoving distance, the number of mergers can be obtained by integrating over f and z,
N(f_0, Δf) = ∫_(f_0-Δf/2)^(f_0+Δf/2) (df/f) ∫_0^∞ dz [4π d_c^2(z)/H(z)] ℛ(z) τ_r ,
where f_0 ≃ 2 nHz is the frequency of the first bin and Δf = 14 f_0 spans the 14 frequency bins considered. We notice that the computation depends only very weakly on Δf.
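The source-count integral can be estimated numerically as sketched below; the merger rate is left as a user-supplied callable (a constant placeholder stands in for the ℛ(t_m) of the appendix), the cosmology is a hard-coded flat ΛCDM, and factors of c are restored.

```python
import numpy as np
from scipy.integrate import quad

G = 6.674e-11; C = 2.998e8; M_SUN = 1.989e30
H0 = 2.18e-18                  # s^-1 (h = 0.6736)
OM, OL = 0.3153, 0.6847        # flat LCDM
MPC = 3.086e22                 # m

def hubble(z):
    return H0 * np.sqrt(OM * (1.0 + z) ** 3 + OL)

def d_comoving(z):
    return C * quad(lambda zz: 1.0 / hubble(zz), 0.0, z)[0]   # m

def n_sources(f_lo, f_hi, m_pbh_msun, rate_of_z, z_max=20.0):
    """Number of binaries emitting between f_lo and f_hi (Hz), following
    the N(f, Delta f) integral above; rate_of_z(z) is the comoving merger
    rate in m^-3 s^-1."""
    m_c = m_pbh_msun * M_SUN / 2**0.2                         # chirp mass
    kappa = 5.0 / (96.0 * np.pi ** (8.0 / 3.0)) \
        * (G * m_c / C**3) ** (-5.0 / 3.0)
    # int df/f tau_r(f) with tau_r = kappa f^(-8/3)
    f_int = kappa * (3.0 / 8.0) * (f_lo ** (-8.0 / 3.0) - f_hi ** (-8.0 / 3.0))
    z_int = quad(lambda z: 4.0 * np.pi * d_comoving(z) ** 2 * C / hubble(z)
                 * rate_of_z(z), 0.0, z_max)[0]
    return f_int * z_int

# Toy constant rate of 1e-9 Mpc^-3 Gyr^-1 (illustrative placeholder)
rate = lambda z: 1e-9 / MPC**3 / 3.156e16
print(n_sources(2e-9, 30e-9, 1e9, rate))
```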
*PTA interpretation.
We perform a Bayesian analysis of the PBH-merger interpretation of the PTA GW signal using the GW spectrum in eq. (<ref>), which we truncate below the IR cut-off frequency f_min where the orbital energy loss is controlled by stellar slingshots. We impose the condition of having more than one emitting source per frequency bin, N(Δf) > 1, cf. eq. (<ref>), as a prior in our analysis. We follow the strategy of NANOGrav <cit.> and IPTA <cit.>, as referenced in <cit.>. We modified the software tools enterprise <cit.> and enterprise_extensions <cit.> to search for a GW power spectrum from supermassive PBH binaries in the timing-residual power spectrum. We generate chains with the parallel-tempering Markov Chain Monte-Carlo sampler PTMCMC <cit.> and use the GetDist software <cit.> to visualize the posterior distribution in fig. <ref>.
§ PBH-ENHANCED STRUCTURE FORMATION AND THE CURRENT JWST OBSERVATIONS
Assuming a monochromatic mass function, the PBH population can be parameterized by the common mass, M_ PBH, and the fraction of DM f_ PBH. At scales where the discrete nature of PBHs becomes relevant, there are two different effects that can influence structure formation and each of them will dominate in different parts of the parameter space.
*Poisson effect. In a region of mass M̃, the condition f_PBH > M_PBH/M̃ implies that one expects to find more than one PBH, Ñ_PBH > 1. For initially Poisson-distributed PBHs, the fluctuations in their number √(Ñ_PBH) generate isocurvature density perturbations of magnitude δ_PBH,i ≈ (f_PBH M_PBH/M̃)^(1/2) (see appendix A in ref. <cit.> for a detailed derivation). The perturbations remain frozen during the radiation-dominated era <cit.> but evolve linearly right after matter-radiation equality (i.e. z < z_eq ≈ 3400). As a result, the matter power spectrum is modified at smaller scales by the addition of an isocurvature component <cit.>
P(k) = P_ ad(k) + P_ iso(k) ,
P_ iso(k) ≃(f_ PBH D_0)^2/n̅_ PBH , if k≤ k_ cut
0 , otherwise ,
where P_ ad(k) is the adiabatic mode in ΛCDM[We generate a prediction for the linear matter power spectrum using the Boltzmann solver CAMB<cit.>, for a fiducial ΛCDM cosmology corresponding to the following Planck 2018 <cit.> values : n_s=0.9649, σ_8=0.8111, Ω_b=0.0493, Ω_ DM=0.266 and h=0.6736.], n̅_ PBH = f_ PBHρ_ crit,0Ω_ DM/M_ PBH is the co-moving average number density of PBHs, and D_0 ≃ 1 + 3 γ (1+z_ eq)/4 with γ = (Ω_ DM - Ω_ b)/(Ω_ m + Ω_ b) is the growth factor until today.
The isocurvature term is truncated at the scale k_cut where we expect the linear Press-Schechter (PS) theory <cit.> to break down. Two different cut-off scales k_cut have been discussed, i.e. the inverse mean separation between PBHs, k̄_PBH = (2π^2 n̄_PBH)^(1/3) <cit.>,
and the approximate scale k_NL ≈ (n̄_PBH/f_PBH)^(1/3) <cit.>, where non-linear dynamics (see the seed effect below and the case of mode mixing in ref. <cit.>) starts to dominate. In the absence of a reliable description of the transition between the linear and the non-linear regimes, we perform our analysis reporting both benchmarks, the conservative scenario k_cut = k̄_PBH being shown in fig. <ref> and the more aggressive one reported in fig. <ref> of appendix <ref>.
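A minimal sketch of how the modified spectrum of eq. (<ref>) can be assembled numerically; P_ad would be supplied by a Boltzmann code such as CAMB (as in our analysis), the cosmological values are the Planck 2018 ones quoted in the footnote, and both cut-off choices are implemented.

```python
import numpy as np

H_PLANCK = 0.6736
OMEGA_B, OMEGA_DM, OMEGA_M = 0.0493, 0.266, 0.3153
RHO_CRIT0 = 2.775e11 * H_PLANCK**2      # M_sun / Mpc^3
Z_EQ = 3400.0

def p_iso(k, m_pbh_msun, f_pbh, cut="conservative"):
    """PBH isocurvature term of the equations above (k in Mpc^-1, output
    in Mpc^3, i.e. the same convention as the adiabatic P_ad)."""
    n_pbh = f_pbh * RHO_CRIT0 * OMEGA_DM / m_pbh_msun          # Mpc^-3
    gamma = (OMEGA_DM - OMEGA_B) / (OMEGA_M + OMEGA_B)
    d0 = 1.0 + 3.0 * gamma * (1.0 + Z_EQ) / 4.0                # growth to z = 0
    if cut == "conservative":
        k_cut = (2.0 * np.pi**2 * n_pbh) ** (1.0 / 3.0)        # k_bar_PBH
    else:
        k_cut = (n_pbh / f_pbh) ** (1.0 / 3.0)                 # k_NL
    return np.where(np.asarray(k) <= k_cut, (f_pbh * d0) ** 2 / n_pbh, 0.0)

def p_total(k, p_ad, m_pbh_msun, f_pbh, cut="conservative"):
    """Total matter power spectrum: adiabatic (callable) + isocurvature."""
    return p_ad(k) + p_iso(k, m_pbh_msun, f_pbh, cut)
```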
Equipped with the enhanced power spectrum, we utilize the Sheth-Tormen (ST) <cit.> modification to the PS formalism with a top-hat window function (see appendix <ref>) and calculate the halo mass function dn(M_h,z)/dM_h. The expected number density of galaxies with stellar mass above the observational threshold, M_⋆^ obs, is given by <cit.>
n_ gal(M_⋆≥ M_⋆^ obs) = ∫_M^ cut_h^∞dn(z_ obs, M_h)/dM_h dM_h .
Under the assumption that each dark-matter halo contains a single central galaxy, the relation between the halo and total stellar mass is M_h (M_∗) = M_∗ / (f_ bϵ_∗), where f_b = Ω_ b/(Ω_ DM+Ω_ b) = 0.157 is the baryon fraction and ϵ_∗ the star formation efficiency. The lower limit of the integral in eq. (<ref>) is defined then as M^ cut_h = M_h (M_⋆^ obs). It is also useful to define the co-moving cumulative stellar mass density as
ρ(M_⋆≥ M_⋆^ obs) = f_b ϵ_∗×∫_M^ cut_h^∞ M_h dn(z_ obs, M_h)/dM_h dM_h .
The JWST signature can then be expressed as ρ(M_⋆ ≥ 10^10.8 M_⊙) ≃ 5 × 10^5 M_⊙ Mpc^(-3) at z_obs ∼ 8 <cit.>.
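Both observables reduce to one-dimensional integrals over the halo mass function; the sketch below takes dn/dM_h as a callable (e.g. built from the ST expressions of the appendix), and the upper integration limit is an arbitrary regulator.

```python
from scipy.integrate import quad

F_B = 0.157   # baryon fraction Omega_b/(Omega_DM + Omega_b)

def n_gal(dn_dM, m_star_obs, eps_star=0.3, m_halo_max=1e18):
    """Eq. above: number density of galaxies with M_* >= M_*^obs.
    dn_dM(M_h) is a halo mass function in Mpc^-3 M_sun^-1."""
    m_cut = m_star_obs / (F_B * eps_star)   # halo mass hosting the galaxy
    return quad(dn_dM, m_cut, m_halo_max)[0]

def rho_star(dn_dM, m_star_obs, eps_star=0.3, m_halo_max=1e18):
    """Eq. above: cumulative stellar mass density in M_sun Mpc^-3."""
    m_cut = m_star_obs / (F_B * eps_star)
    return F_B * eps_star * quad(lambda m: m * dn_dM(m), m_cut, m_halo_max)[0]
```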
*Seed effect. In the opposite limit, f_PBH < M_PBH/M̃, PBHs make up only a small fraction of the dark matter and form isolated halos. The density perturbations in this case are δ_PBH,i ≈ M_PBH/M̃, gravitationally binding regions of mass
M̃(M_ PBH,z) ≃z_ eq/(z+1)M_ PBH .
Due to its highly non-linear nature, this effect can be examined properly only using simulations <cit.>.
However, following the calculation of refs. <cit.> we can determine the part of the parameter space that can potentially accommodate accelerated early galaxy formation compatible with the JWST observations solely with the seed effect. The three basic requirements are: (i) f_ PBH<M_ PBH/M̃ = (z+1)/z_ eq, (ii) n̅_ PBH > n_ gal≃ 2× 10^-5 Mpc^-3 <cit.>, and (iii) M̃(M_ PBH,z_ obs∼ 8) ≃ M_h(M_∗^ obs∼ 10^11M_⊙), assuming also in this case that the whole bound region hosts a single galaxy.
§ OBSERVATIONAL CONSTRAINTS
In this section we review the constraints on the PBH populations with masses above 10^6 M_⊙ and introduce the UV LF bound on the PBH parameter space.
* Large-scale structure. As already discussed in section <ref>, PBHs can play a crucial role in the generation of cosmic structures. However, the non-observation of different types of cosmic structures above certain redshifts yields relevant constraints on the mass of the initial PBH seed. Using either the Poisson or the seed effect, one then restricts M̃ < 10^10, 10^12, 10^14 M_⊙ at z = 7, 3, 1 for dwarf galaxies, Milky Way-type galaxies and clusters of galaxies, respectively <cit.>.
We notice also that those constraints concern the bulk of each type of structure; the limits may still vary, within an order of magnitude, for rare instances (e.g. the JWST early galaxies).
* CMB μ distortion.
According to the standard hypothesis of PBH formation, large-amplitude primordial fluctuations undergo spherical gravitational collapse upon horizon reentry <cit.>. Fluctuations that dissipate via Silk damping <cit.> during the photon diffusion era, i.e. 5×10^4 < z < 2×10^6, inject energy into the photon bath and modify the number of photons at different frequencies w.r.t. the black-body equilibrium. This process leaves imprints in the CMB spectrum, called μ distortion <cit.>. The defining observable μ is strictly constrained by the COBE/FIRAS measurements to be smaller than 9×10^-5 <cit.>, which translates into a strong bound on the PBH abundance over the mass range 10^5 M_⊙ < M_PBH < 10^12 M_⊙. If the primordial fluctuations are Gaussian, then practically all PBH models of interest are excluded <cit.>. [See however the models of refs. <cit.>, which postulate different PBH formation mechanisms and may avoid bounds from CMB spectral distortion altogether.] On the other hand, if one allows for significant non-Gaussianities (NG) in the curvature power spectrum, the bound can be avoided. In this work, we adopt the phenomenological description of ref. <cit.> and characterize NGs by the parameter p ≤ 2 (where p = 2 corresponds to the Gaussian case). We give all the relevant expressions in appendix <ref>.
* HST UV luminosity function.
The UV LF of galaxies observed by the Hubble Space Telescope (HST) <cit.> has emerged as a sensitive early universe probe to constrain not only ΛCDM <cit.>, but also a broad portfolio of extensions about it <cit.>.
More importantly, as was recently pointed out in ref. <cit.>, any model that attempts to modify the high-redshift halo mass function in relation to the JWST excess of massive galaxies <cit.> is already strongly limited by the HST UV LF, since the latter traces galaxies at the same redshifts and distances as the ones in the JWST sample. Such is the case of the massive PBHs that we consider in this work, which predict an enhancement of dark matter clustering as discussed in section <ref>, and which have indeed been found theoretically capable of producing the desired excess of massive galaxies in the range z_obs = 7-10 <cit.>.
In this letter, we take advantage of the above connection and use the HST UV LF measurements <cit.> to constrain the properties of PBH populations. In particular, we proceed as follows: combining eqs. (<ref>),(<ref>) and the ST formalism further explained in appendix <ref>, we produce theoretical predictions for the galaxy number density above a stellar mass cut-off M_∗ in the presence of a PBH population with mass M_PBH and abundance f_PBH, for ϵ_⋆ = 0.3 at z_obs ∼ 8. These predictions are then contrasted against the maximum deviation from the corresponding ΛCDM result that is allowed by the HST UV LF measurements, as reported by the likelihood analysis of <cit.> at the 95% C.L. (see fig. 3 of that work).
We should note, at this point, that the likelihood analysis of ref. <cit.> considers model-agnostic power spectrum enhancements that were general enough to encompass modifications of the specific form in eq. (<ref>), that we are using, marginalized over variations in the power spectrum amplitude and various astrophysical parameters characterizing the galaxy-halo connection. As a consequence, their findings are considered robust enough for constraining the PBH-induced isocurvature component, which is simply a constant function. We defer the task of directly constraining our PBH model parameters through a separate full likelihood re-analysis of the HST UV LF data to future work.
* Other constraints. The dynamical friction (DF) limit concerns the accumulation of halo BHs into the galactic nucleus. The effect is induced by dynamical friction from stellar populations, and the subsequent merging of those BHs in the nucleus would result in SMBHs heavier than the currently observed ones unless f_PBH is rather small. The relevant derivation can be found in ref. <cit.>. We mention also that there are additional constraints due to X-ray binaries <cit.>, galaxy tidal distortions <cit.>, and high-z Lyman-α forest data <cit.>, which are however not discussed further here because they are sub-leading to the LSS bounds.
§ DISCUSSION AND CONCLUSIONS
The collective results of our analysis can be found in fig. <ref>. This includes the regions on the M_ PBH-f_ PBH plane that can account for both JWST and PTA signatures as discussed in sections <ref> and <ref>, in conjunction with the constraints discussed in section <ref>.
By means of a full Bayesian analysis of the first 14 frequency bins of NANOGrav 12.5-year <cit.> and of the first 13 frequency bins of IPTA DR2 <cit.>, we find a region of the parameter space, i.e. 10^8 ≲ M_PBH/M_⊙ ≲ 10^9 and f_PBH ∼ 10^-2, that can accommodate both signals at 90% C.L. (blue and orange ovals). The GW spectrum was truncated at low frequencies to account for environmental effects, and we included the condition for the signal to be stochastic as a prior. We calculated the credible intervals for the PBH interpretation of the stochastic GW background recently confirmed by the analysis of the NANOGrav 15-year dataset <cit.>.
Assuming a Poissonian gas of PBHs and using linear cosmological perturbation theory and the ST approach, we model the enhancement to the galaxy abundance as a function of the PBH model parameters. Two different wavenumber cut-offs k_cut are used, the conservative one being shown in fig. <ref> of the main text, and the more aggressive one being reported in appendix <ref>. The observation of massive galaxies at redshift z ∼ 8 by the JWST is explained along the thick green line. The maximum value of M_PBH corresponds to a bound region mass of 𝒪(10^12 M_⊙), which is the typical upper theoretical limit for halo masses <cit.>. Additionally, using the same assumption for the linear halo evolution, we project the 95% exclusion bound imposed by the UV LF on the parameter space (red shaded region).
Regarding PBHs evolving in isolation, we display within the green trapeze region of fig. <ref> the region that could explain the JWST data with the seed effect. The upper and lower lines correspond to conditions (i) and (ii) discussed in section <ref> (see the paragraph on the seed effect), respectively. The two vertical lines constrain the resulting halo masses to be 𝒪(10^10 M_⊙ - 10^13 M_⊙) (using eq. (<ref>)), allowing for sufficient variation around M_h(M_∗^obs ∼ 10^11 M_⊙) in order to compensate for unknown uncertainties due to the simplicity of the calculation.
Finally, it is worth noticing that a common requirement for all scenarios of interest involving PBH populations in the superheavy mass region is the formation of supermassive PBHs from NG primordial fluctuations, in order to circumvent the strict bound from the CMB μ distortion. In particular, in fig. <ref> we depict the μ bound for p = 0.3, which is marginally compatible with the depicted JWST solution based on the seed effect. This is, in fact, an extremely large degree of NG. For instance, the benchmark p = 0.5 (also shown in fig. <ref>), which already corresponds to the infinite limit of more traditional NG measures <cit.>, excludes all of the interesting PBH populations.
*Conclusions. In fig. <ref>, we demonstrate that the PBH populations needed to source the stochastic GW background observed in PTAs are partly excluded by LSS and decisively excluded by the UV LF constraint derived in this work. Similarly, the PBH population required to explain the anomalous JWST observations with the Poisson effect is excluded by the same constraint. The novel UV LF bound is more stringent than the earlier LSS bound on PBHs, because it is inferred from an extended likelihood analysis of the HST data in the redshift range contemporaneous with the JWST extreme galaxy sample. The tight constraints we obtain are in agreement with the findings of ref. <cit.>.
On the other hand, the PBH solution of the JWST observation based on the seed effect is in principle still viable for f_PBH ≲ 10^-3, up to the caveat of extreme primordial NGs. A clearer picture of the reach of the seed effect necessitates dedicated simulations in the M_PBH > 10^7 M_⊙ range, which to our knowledge have not been performed up until now. As a matter of fact, it is not a priori obvious that the HST dataset cannot be used to derive a bound on the non-linear seed effect.
*Future prospects. Looking forward, a near-future spectroscopic analysis will provide the final verdict on whether the preliminary results of ref. <cit.> constitute a ΛCDM anomaly. Furthermore, the wealth of observational data arriving from current and upcoming missions such as the JWST and the Nancy Grace Roman Space Telescope <cit.> will trace galaxies at increasingly high redshifts and apparent magnitudes, giving us the opportunity to further elucidate the evolution of cosmic structures, including the eras of cosmic dawn and reionization. As far as constraining NGs is concerned, a PIXIE-like mission <cit.> would further lower the current upper limit on μ by two orders of magnitude.
Different early-universe GW interpretations of the PTA signal have been proposed <cit.>. However, they confront important PBH overproduction constraints <cit.>, in addition to the ones emanating from Big-Bang Nucleosynthesis predictions <cit.>. The stochastic GW background detected by PTA might instead originate from SMBH binaries, the value of the strain amplitude measured by PTA <cit.> being close to the predicted value <cit.>.
A future increase in observation time and in the number of detected pulsars might facilitate the resolution of individual sources at larger frequencies <cit.>, providing new information on the galaxy power spectrum <cit.>. But it might also allow the detection of lower-frequency deviations from the spectral slope h_c ∝ f^(-2/3), which assumes GW-driven circular binaries. The latter would offer unprecedented information about the environment around SMBHs <cit.> and imply constraints on the density of dark matter <cit.> or the presence of a fifth force <cit.>.
§ ACKNOWLEDGEMENTS
We would like to thank Sebastien Clesse, Edoardo Vitagliano, Daniel Eisenstein and Julian Munoz for useful discussions throughout the completion of this work, as well as Omer Katz for sparing computational resources on the TAU cluster during the preparation of this work. YG is grateful to the Azrieli Foundation for the award of an Azrieli Fellowship. ST is supported by the Swiss National Science Foundation - project n. P500PT_203156, and by the Center of Theoretical Physics at MIT (MIT-CTP/5538). GV recognizes partial support by the Department of Energy (DOE) Grant No. DE-SC0020223. MV is supported by the "Excellence of Science - EOS" - be.h project n. 30820817, and by the Strategic Research Program High-Energy Physics of the Vrije Universiteit Brussel.
§ NOTE
After our analysis was completed and as this paper was being prepared for final submission, another paper appeared on the arXiv <cit.> that computes the GW prediction from supermassive PBH binaries including the case of initial clustering. Clustered PBH populations could in principle evade the LSS and UV LF constraints. However, it is worth mentioning that the utilization of a more aggressive cut-off k_cut, as depicted in fig. <ref>, presents additional challenges for such scenarios. We leave the study of the viability of clustering scenarios, in light of the JWST and PTA interpretations together with the UV LF constraints, for future work.
§ HALO MASS FUNCTION
We consider a spherical dark matter halo at redshift z, containing an average mass M̃ within a region of effective radius R. If, additionally, ρ_m is the average matter density of the universe at z, it follows that
R = (3 M̃/4 πρ_m)^1/3 .
The root-mean-square of matter density fluctuations in this region can be further calculated through a convolution of the linear matter power spectrum, P(k, z=0), with a spherical top-hat smoothing kernel of radius R:
W(k R) = 3[sin(k R)-k Rcos(k R)]/(k R)^3 ,
which results in the (Fourier space) integral
σ^2(M̃) = ∫ dk k^2/(2π^2) W^2(kR) P(k, z=0) .
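A direct transcription of eqs. (<ref>)-(<ref>) in Python; p_of_k is assumed to return the z = 0 linear power spectrum in Mpc^3 with k in Mpc^-1, and the integration limits are illustrative.

```python
import numpy as np
from scipy.integrate import quad

RHO_M0 = 2.775e11 * 0.6736**2 * 0.3153   # mean matter density, M_sun Mpc^-3

def sigma_of_m(m_msun, p_of_k, k_min=1e-4, k_max=1e3):
    """rms top-hat density fluctuation of eqs. above; p_of_k(k) is the
    z = 0 linear power spectrum in Mpc^3, k in Mpc^-1."""
    r = (3.0 * m_msun / (4.0 * np.pi * RHO_M0)) ** (1.0 / 3.0)   # Mpc

    def window(x):
        # spherical top-hat kernel in Fourier space
        return 3.0 * (np.sin(x) - x * np.cos(x)) / x**3

    integrand = lambda k: k**2 / (2.0 * np.pi**2) * window(k * r) ** 2 * p_of_k(k)
    return np.sqrt(quad(integrand, k_min, k_max, limit=200)[0])
```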
The PS theory and its generalizations <cit.> postulate that the abundance of halos in the universe can be evaluated by counting all the regions of radius R (and, equivalently, mass M̃) that have gravitationally collapsed by the time of interest z. According to the excursion set approach, these will be the regions with a smoothed density exceeding the critical overdensity for spherical collapse at redshift z, δ_cr. For an Einstein De-Sitter (EDS) cosmology, it is always δ_cr=1.686<cit.>, which turns out to be a very good approximation for ΛCDM cosmologies and will be adopted in this work as well. Assuming, finally, density fluctuations that are accurately described by linear perturbation theory and a Gaussian probability distribution, we can derive an analytical prediction for the comoving mean number density of halos n_h(M) per logarithmic mass interval dln M, given by:
dn_h/dln M = (ρ_m/M) ν_c(M) f(ν_c(M)) dln ν_c(M)/dln M ,
where the quantity ν_c(M), called the peak significance, is defined as
ν_c(M)=δ_cr/σ(M,z)=δ_cr/D(z)σ(M) .
In (<ref>), σ(M, z) denotes the density variance at redshift z, evaluated through the linear evolution of σ(M) (obtained from eq. (<ref>)) to the time of collapse z, using the linear growth factor D(z). For the latter we adopt the standard normalization D(z=0) = D_0 = 1. We note again that for the critical overdensity for collapse we use the EDS solution δ_cr = 1.686.
In the original PS theory, the multiplicity function, f(ν_c(M)), had the simple form:
ν_cf(ν_c)=√(2/π)ν_ce^-ν_c^2/2 ,
which is exact in an EDS universe described by a power-law power spectrum. While this prescription, often referred to as the universal mass function, has been used to describe the halo mass function for a broad range of cosmologies, it lacks the necessary accuracy for predictions in the era of precision cosmology. As a result, ST <cit.> later introduced an alternative function:
ν_cf(ν_c)=√(2/π)A(p)[1+1/(q ν_c^2)^p]√(q)ν_ce^-qν_c^2/2 ,
with A(p)=[1+π^-1/22^-pΓ(0.5-p)]^-1 and where q,p are free parameters to be fitted over N-body simulations, reducing to the vanilla PS function for q=1,p=0. The best fit pair was initially proposed to be (q,p)=(0.707,0.3) which was later updated to (q,p)=(0.75,0.3), commonly considered to be the “standard" ST parameters <cit.>. When focusing on higher redshift probes like in our work, however, a slightly different choice of values, (q,p)=(0.85,0.3), has been found to provide a more accurate fit <cit.>, which is the one we adopt in order to match the analysis of <cit.> that we compare against. We also comment on the fact that we experimented with both choices and found our results to change minimally across the tests, demonstrating the robustness of our analysis against the modeling details of the halo mass function, as was also reported by <cit.>.
To summarize, given a specific choice of the multiplicity function (<ref>), Eqs. (<ref>)-(<ref>) provide a prediction for the halo mass function for a given cosmology, that we subsequently use to obtain our galaxy observables of interest from Eqs. (<ref>) and (<ref>). Its sensitivity to the underlying cosmological model manifests through the linear matter power spectrum entering eq. (<ref>), which we obtain from the Boltzmann solver CAMB<cit.> in the ΛCDM case and additionally using eq. (<ref>) to account for the PBH-induced effects.
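The following sketch assembles the pieces above into a halo mass function; σ(M) and D(z) are left as user-supplied callables (e.g. the σ(M) sketch above and a standard ΛCDM growth factor), the (q, p) = (0.85, 0.3) values adopted in our analysis are the defaults, and dlnν/dlnM is evaluated by a finite difference.

```python
import numpy as np
from scipy.special import gamma as gamma_fn

DELTA_CR = 1.686   # EDS critical overdensity for collapse

def st_multiplicity(nu, q=0.85, p=0.3):
    """Sheth-Tormen nu*f(nu) of eq. above; q = 1, p = 0 recovers PS."""
    a_norm = 1.0 / (1.0 + np.pi ** (-0.5) * 2.0 ** (-p) * gamma_fn(0.5 - p))
    return (np.sqrt(2.0 / np.pi) * a_norm * (1.0 + (q * nu**2) ** (-p))
            * np.sqrt(q) * nu * np.exp(-q * nu**2 / 2.0))

def dn_dlnM(m_msun, z, sigma_of_m, growth_d, rho_m0, dlnm=0.01):
    """Halo mass function of eqs. above; sigma_of_m(M) and growth_d(z)
    are callables, rho_m0 in M_sun Mpc^-3."""
    nu = lambda m: DELTA_CR / (growth_d(z) * sigma_of_m(m))
    # d ln(nu) / d ln(M) by central finite difference
    dln_nu = (np.log(nu(m_msun * np.exp(dlnm)))
              - np.log(nu(m_msun * np.exp(-dlnm)))) / (2.0 * dlnm)
    return rho_m0 / m_msun * st_multiplicity(nu(m_msun)) * dln_nu
```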
§ PBHS FROM NON-GAUSSIAN PERTURBATIONS AND Μ DISTORTION
The probability density function for the primordial curvature perturbations can be parameterized as <cit.>
P(ζ) = 1/(2√2 σ̃ Γ(1+1/p)) exp[-(|ζ|/(√2 σ̃))^p] ,
where p=2 corresponds to the Gaussian limit of ref. <cit.>.
The variance is
σ^2 = ∫_-∞^∞ ζ^2 P(ζ) dζ = 2Γ(1+3/p)/(3Γ(1+1/p)) σ̃^2 ,
and the abundance of PBH is
β = ∫_ζ_c^∞ P(ζ) dζ = Γ(1/p, 2^(-p/2) (ζ_c/σ̃)^p)/(2p Γ(1+1/p)) ,
where ζ_c≃ 0.67 is the threshold for PBH formation <cit.>, Γ(a) is the gamma function and Γ(a,z) the incomplete gamma function.
It is shown in ref. <cit.> that a modification to the curvature power spectrum of the form
Δ P_ζ= 2π^2 σ^2 k^-2δ(k-k_δ) ,
which exhibits an extremely sharp feature at some scale k_δ, generates the following μ distortion
μ ≃ 2.2σ^2 [exp(-k̂_δ/5400) - exp(-(k̂_δ/31.6)^2)] ,
where k̂_δ = k_δ Mpc.
For a PBH population of mass M_ PBH[Eq. (<ref>) does not necessarily correspond to a monochromatic PBH mass function since critical collapse may broaden the mass spectrum <cit.>.] the scale k_δ is given by <cit.>
k_δ≃ 13 Mpc^-1γ^1/2(g/10.75)^-1/12(M_ PBH/10^11M_⊙)^-1/2 ,
where γ gives the size of the PBH in units of
the horizon mass at formation (we take it to be γ=1) and g is the number of degrees of freedom of relativistic particles. The initial abundance β is related to f_ PBH <cit.>, via
β≃ 6× 10^-4(g/10.75)^1/4(Ω_DM/0.27)^-1(M_ PBH/10^11M_⊙)^1/2f_ PBH .
Replacing this expression in eq. (<ref>), we can solve for σ̃ and substitute in (<ref>) to get σ and finally calculate μ using (<ref>).
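The chain just described can be sketched as follows, using scipy's regularized incomplete gamma (so that Γ(a,x) = gammaincc(a,x)·Γ(a)) and a bracketing root-finder for σ̃; the bracket endpoints and the example inputs are illustrative.

```python
import numpy as np
from scipy.special import gamma as gamma_fn, gammaincc
from scipy.optimize import brentq

ZETA_C = 0.67   # threshold for PBH formation

def beta_of_sigma_tilde(st, p):
    """Eq. above; Gamma(a, x) = gammaincc(a, x) * Gamma(a) in scipy."""
    x = 2.0 ** (-p / 2.0) * (ZETA_C / st) ** p
    return gammaincc(1.0 / p, x) * gamma_fn(1.0 / p) / (2.0 * p * gamma_fn(1.0 + 1.0 / p))

def mu_distortion(m_pbh_msun, f_pbh, p, g=10.75, omega_dm=0.27):
    """Chain f_PBH -> beta -> sigma_tilde -> sigma^2 -> mu (gamma = 1)."""
    beta = (6e-4 * (g / 10.75) ** 0.25 * (omega_dm / 0.27) ** (-1)
            * (m_pbh_msun / 1e11) ** 0.5 * f_pbh)
    st = brentq(lambda s: beta_of_sigma_tilde(s, p) - beta, 1e-4, 10.0)
    sigma2 = 2.0 * gamma_fn(1.0 + 3.0 / p) / (3.0 * gamma_fn(1.0 + 1.0 / p)) * st**2
    k_hat = 13.0 * (g / 10.75) ** (-1.0 / 12.0) * (m_pbh_msun / 1e11) ** (-0.5)
    return 2.2 * sigma2 * (np.exp(-k_hat / 5400.0) - np.exp(-((k_hat / 31.6) ** 2)))

print(mu_distortion(1e9, 1e-2, p=0.5))   # compare to the FIRAS bound 9e-5
```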
We notice also that it is not straightforward to connect the expression of eq. (<ref>) to known models of inflation <cit.>. In more realistic scenarios that exhibit local NGs, one can examine also modifications to the shape of the overdensity and thus the critical threshold for collapse <cit.>. However, NGs corresponding to values p<0.5 cannot be achieved by traditional quadratic f_ NL and cubic g_ NL parameters <cit.>.
§ PBH BINARY DISTRIBUTION DURING RADIATION DOMINATION
In this appendix, we present the computation of the distribution of binaries as a function of eccentricity and major axis. We follow closely <cit.> (see also refs. <cit.> and ref. <cit.> for the earlier proposals). We need first to compute the moment during radiation domination when the PBH pair decouples from the expansion of the universe. This occurs when the energy density enclosed by the two PBHs becomes larger than the radiation energy density in the same sphere
1+z_dec = (1+z_eq) (x_max/x)^3 ,
where x is the comoving separation between the two PBHs and x_max = f_PBH^(1/3) l_PBH(z=0). The closest third PBH, at comoving distance y, exerts a torque on the binary, preventing a head-on collision and transmitting angular momentum to the now-rotating pair. The distribution of binaries, parameterized by the semi-major axis a and the eccentricity e, can be described by
dP ≃ (4π x^2 dx/n_PBH^(-1)) (4π y^2 dy/n_PBH^(-1)) Θ(y - x) Θ(y_max - y)
= (4π^2/3) n_PBH^(1/2) f_PBH^(3/2) (1+z_eq)^(3/2) a^(1/2) e (1-e^2)^(-3/2) de da ,
with a = ρ_c,0 Ω_DM x^4/((1+z_eq) M_PBH), e = √(1 - (x/y)^6), and y_max = (4π n_PBH/3)^(-1/3). This leads to maximum values of a and e,
a_max = x_max/(1+z_eq) ,
e_max = √(1 - (4π n_PBH/3)^2 ((1+z_eq) M_PBH a/(ρ_c,0 Ω_DM))^(3/2)) .
The analysis of binary formation presented and used in this letter relies on the three-body approximation, and several effects might bring corrections to the rate we estimated. However, it was shown in <cit.> that those effects are 𝒪(1) corrections and would not significantly alter the results of our analysis. Moreover, for low enough f_PBH ≪ 1, the disruption of binaries formed in the early universe by close encounters with a third PBH is expected to be small <cit.>.
§ DERIVATION OF GRAVITATIONAL WAVE SIGNAL
After their formation during the radiation era, the two PBHs of a binary orbit around each other and lose energy mostly via gravitational radiation. We can estimate their time of merging, assuming e → 1, by <cit.>
t_m(a, e) ≃ (3/170) a^4 (1-e^2)^(7/2)/(G^3 M_PBH^3) .
This expression for the merging time t_m(a, e) can be inverted to eliminate a(t_m, e) in eq. (<ref>) and express the probability distribution as a function of t_m and e. This leads to <cit.>
dP/dt_m ≃ (3/58) (t_m/T)^(3/8) (1/t_m) [1/(1-e_up^2)^(29/16) - 1] ,
where the typical time of coalescence is
T = (3/170) (1/(G^3 M_PBH^3)) (3 y_max/(4π f_PBH (1+z_eq)))^4 ,
and the maximal eccentricity reads
e_up = √(1 - (t_m/T)^(6/37))  for t_m < t_c ,
e_up = √(1 - (4π f_PBH/3)^2 (t_m/t_c)^(2/7))  for t_m > t_c .
The time of the transition between the two regimes is of the form t_c = T (4π f_ PBH/3)^37/3.
The final expression for the PBH merger rate is obtained by multiplying this probability by the PBH number density n_PBH, which gives
ℛ(t_m) ≃ (3 n_PBH/58) (t_m/T)^(3/8) (1/t_m) [1/(1-e_up^2)^(29/16) - 1] .
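A direct transcription of the rate, with the two branches of e_up above supplying 1 - e_up^2; all inputs are assumed to be in consistent units chosen by the caller.

```python
import numpy as np

def merger_rate(t_m, big_t, f_pbh, n_pbh):
    """Comoving PBH merger rate of eq. above; t_m and T (big_t) in the
    same time units, n_pbh a comoving number density."""
    t_c = big_t * (4.0 * np.pi * f_pbh / 3.0) ** (37.0 / 3.0)
    if t_m < t_c:
        one_minus_e2 = (t_m / big_t) ** (6.0 / 37.0)
    else:
        one_minus_e2 = (4.0 * np.pi * f_pbh / 3.0) ** 2 * (t_m / t_c) ** (2.0 / 7.0)
    return (3.0 * n_pbh / 58.0 * (t_m / big_t) ** (3.0 / 8.0) / t_m
            * (one_minus_e2 ** (-29.0 / 16.0) - 1.0))
```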
The stochastic GW energy density emitted by the population of merging PBH binaries is then <cit.>
h^2 Ω_GW(f) = f/(ρ_c/h^2) ∫_0^∞ dz ℛ(z)/[(1+z)H(z)] dE_GW(f')/df' |_{f'=(1+z)f} ,
where dE_ GW(f')/df' is the GW power emitted by an individual binary <cit.>
dE_GW(f)/df = (Gπ)^(2/3) M_c^(5/3)/3 ×
f^(-1/3) ,  f < f_1 ,
f^(2/3) f_1^(-1) ,  f_1 < f < f_2 ,
f_2^(-4/3) f_1^(-1) (f/(1 + 4((f-f_2)/σ)^2))^2 ,  f > f_2 ,
where M_c = M_PBH/2^(1/5) is the chirp mass and σ sets the width of the ring-down peak. In order to speed up our analysis, we approximate the GW power spectrum in eq. (<ref>) by the fitting function given in the main text. The goodness of the approximation is shown in Fig. <ref>.
§ PBH PARAMETER SPACE WITH A HIGHER CUT-OFF
In this appendix, we discuss in fig. <ref> the results of the main analysis using the cut-off k_cut ≈ (n̄_PBH/f_PBH)^(1/3) on the isocurvature component in eq. (<ref>), as assumed in the heuristic approach of ref. <cit.>. To illustrate the difference from the cut-off k_cut = k̄_PBH used in the main text, we plot in fig. <ref> the modification to the power spectrum for both cut-off choices. As is evident, this choice allows higher wavenumbers to contribute and thus the boost to the halo mass function is significantly increased. As a result, the curve in fig. <ref> where the JWST observations are satisfied, as well as the bound derived from the UV LF, are both displaced towards lower f_PBH values, as clearly seen in fig. <ref>.
It is noteworthy that in this case the solution based on the seed effect is partially subject to the UV LF bound. The underlying assumption is that at intermediate scales between the linear and non-linear regimes the two effects co-exist, and even though the seed effect dominates, the Poisson effect may still contribute at a smaller percentage (see also fig. 6 in ref. <cit.>, but notice that the authors employ linear perturbation theory only above k_cut = k̄_PBH).
§ SGWB FROM COMBINED PBH AND SMBH
We perform a Bayesian analysis of the combined GW spectrum from primordial and astrophysical black holes (PBHs and ABHs). We parameterize the GW spectrum from ABHs by its strain amplitude A_ABH. We show the credible intervals in fig. <ref>.
NANOGrav:2020bcs NANOGrav Collaboration, Z. Arzoumanian et al. Astrophys. J. Lett. 905 (2020), no. 2 L34, [http://arxiv.org/abs/2009.04496
arXiv:2009.04496].
NANOGrav:2023gor NANOGrav Collaboration, G. Agazie et al. Astrophys. J. Lett.
951 (2023), no. 1 L8, [http://arxiv.org/abs/2306.16213
arXiv:2306.16213].
Goncharov:2021oub
B. Goncharov et al. Astrophys. J. Lett. 917 (2021), no. 2 L19,
[http://arxiv.org/abs/2107.12112 arXiv:2107.12112].
Chen:2021rqp
S. Chen et al. Mon. Not. Roy. Astron. Soc. 508 (2021), no. 4
4970–4993, [http://arxiv.org/abs/2110.13184 arXiv:2110.13184].
Antoniadis:2023ott
J. Antoniadis et al. http://arxiv.org/abs/2306.16214
arXiv:2306.16214.
Reardon:2023gzh
D. J. Reardon et al. Astrophys. J. Lett. 951 (2023), no. 1 L6,
[http://arxiv.org/abs/2306.16215 arXiv:2306.16215].
Xu:2023wog
H. Xu et al. Res. Astron. Astrophys. 23 (2023), no. 7 075024,
[http://arxiv.org/abs/2306.16216 arXiv:2306.16216].
Antoniadis:2022pcn
J. Antoniadis et al. Mon. Not. Roy. Astron. Soc. 510 (2022), no. 4
4873–4887, [http://arxiv.org/abs/2201.03980 arXiv:2201.03980].
NANOGrav:2023hvm NANOGrav Collaboration, A. Afzal et al. Astrophys. J. Lett.
951 (2023), no. 1 L11, [http://arxiv.org/abs/2306.16219
arXiv:2306.16219].
Adams2022
N. J. Adams, C. J. Conselice, L. Ferreira, D. Austin, J. A. A. Trussler,
I. Juodžbalis, S. M. Wilkins, J. Caruana, P. Dayal, A. Verma, and
A. P. Vijayan Monthly Notices of the Royal Astronomical Society
518 (nov, 2022) 4755–4766.
Naidu2022
R. P. Naidu, P. A. Oesch, P. van Dokkum, E. J. Nelson, K. A. Suess, G. Brammer,
K. E. Whitaker, G. Illingworth, R. Bouwens, S. Tacchella, J. Matthee,
N. Allen, R. Bezanson, C. Conroy, I. Labbe, J. Leja, E. Leonova, D. Magee,
S. H. Price, D. J. Setton, V. Strait, M. Stefanon, S. Toft, J. R. Weaver, and
A. Weibel The Astrophysical Journal Letters 940 (nov, 2022) L14.
Finkelstein2023
S. L. Finkelstein, M. B. Bagley, H. C. Ferguson, S. M. Wilkins, J. S.
Kartaltepe, C. Papovich, L. Y. A. Yung, P. A. Haro, P. Behroozi,
M. Dickinson, D. D. Kocevski, A. M. Koekemoer, R. L. Larson, A. L. Bail,
A. M. Morales, P. G. Pérez-González, D. Burgarella,
R. Davé, M. Hirschmann, R. S. Somerville, S. Wuyts, V. Bromm, C. M.
Casey, A. Fontana, S. Fujimoto, J. P. Gardner, M. Giavalisco, A. Grazian,
N. A. Grogin, N. P. Hathi, T. A. Hutchison, S. W. Jha, S. Jogee, L. J.
Kewley, A. Kirkpatrick, A. S. Long, J. M. Lotz, L. Pentericci, J. D. R.
Pierel, N. Pirzkal, S. Ravindranath, R. E. Ryan, J. R. Trump, G. Yang,
R. Bhatawdekar, L. Bisigello, V. Buat, A. Calabrò, M. Castellano, N. J.
Cleri, M. C. Cooper, D. Croton, E. Daddi, A. Dekel, D. Elbaz, M. Franco,
E. Gawiser, B. W. Holwerda, M. Huertas-Company, A. E. Jaskot, G. C. K. Leung,
R. A. Lucas, B. Mobasher, V. Pandya, S. Tacchella, B. J. Weiner, and J. A.
Zavala The Astrophysical Journal Letters 946 (mar, 2023) L13.
2023Natur.616..266L
I. Labbé, P. van Dokkum, E. Nelson, R. Bezanson, K. A. Suess,
J. Leja, G. Brammer, K. Whitaker, E. Mathews, M. Stefanon, and
B. Wang 616 (Apr., 2023) 266–269,
[http://arxiv.org/abs/2207.12446 arXiv:2207.12446].
curtislake2023spectroscopic
E. Curtis-Lake, S. Carniani, A. Cameron, S. Charlot, P. Jakobsen, R. Maiolino,
A. Bunker, J. Witstok, R. Smit, J. Chevallard, C. Willott, P. Ferruit,
S. Arribas, N. Bonaventura, M. Curti, F. D'Eugenio, M. Franx, G. Giardino,
T. J. Looser, N. Lützgendorf, M. V. Maseda, T. Rawle, H.-W. Rix, B. R. del
Pino, H. Übler, M. Sirianni, A. Dressler, E. Egami, D. J. Eisenstein,
R. Endsley, K. Hainline, R. Hausen, B. D. Johnson, M. Rieke, B. Robertson,
I. Shivaei, D. P. Stark, S. Tacchella, C. C. Williams, C. N. A. Willmer,
R. Bhatawdekar, R. Bowler, K. Boyett, Z. Chen, A. de Graaff, J. M. Helton,
R. E. Hviding, G. C. Jones, N. Kumari, J. Lyu, E. Nelson, M. Perna,
L. Sandles, A. Saxena, K. A. Suess, F. Sun, M. W. Topping, I. E. B. Wallace,
and L. Whitler, Spectroscopic confirmation of four metal-poor galaxies
at z=10.3-13.2, 2023.
Robertson:2022gdk
B. E. Robertson et al. Nature Astron. 7 (2023), no. 5 611–621,
[http://arxiv.org/abs/2212.04480 arXiv:2212.04480].
Keller:2022mnb
B. W. Keller, F. Munshi, M. Trebitsch, and M. Tremmel Astrophys. J. Lett. 943 (2023), no. 2 L28, [http://arxiv.org/abs/2212.12804
arXiv:2212.12804].
McCaffrey:2023qem
J. McCaffrey, S. Hardin, J. Wise, and J. Regan
http://arxiv.org/abs/2304.13755 arXiv:2304.13755.
Boylan-Kolchin:2022kae
M. Boylan-Kolchin Nature Astron. 7 (2023), no. 6 731–735,
[http://arxiv.org/abs/2208.01611 arXiv:2208.01611].
Lovell:2022bhx
C. C. Lovell, I. Harrison, Y. Harikane, S. Tacchella, and S. M. Wilkins
Mon. Not. Roy. Astron. Soc. 518 (2022), no. 2 2511–2520,
[http://arxiv.org/abs/2208.10479 arXiv:2208.10479].
Haslbauer:2022vnq
M. Haslbauer, P. Kroupa, A. H. Zonoozi, and H. Haghi Astrophys. J. Lett. 939 (2022), no. 2 L31, [http://arxiv.org/abs/2210.14915
arXiv:2210.14915].
Liu:2022bvr
B. Liu and V. Bromm Astrophys. J. Lett. 937 (2022), no. 2 L30,
[http://arxiv.org/abs/2208.13178 arXiv:2208.13178].
Gong:2022qjx
Y. Gong, B. Yue, Y. Cao, and X. Chen Astrophys. J. 947 (2023),
no. 1 28, [http://arxiv.org/abs/2209.13757 arXiv:2209.13757].
Hutsi:2022fzw
G. Hütsi, M. Raidal, J. Urrutia, V. Vaskonen, and H. Veermäe Phys.
Rev. D 107 (2023), no. 4 043502,
[http://arxiv.org/abs/2211.02651 arXiv:2211.02651].
Menci:2022wia
N. Menci, M. Castellano, P. Santini, E. Merlin, A. Fontana, and F. Shankar
Astrophys. J. Lett. 938 (2022), no. 1 L5,
[http://arxiv.org/abs/2208.11471 arXiv:2208.11471].
Biagetti:2022ode
M. Biagetti, G. Franciolini, and A. Riotto Astrophys. J. 944
(2023), no. 2 113, [http://arxiv.org/abs/2210.04812
arXiv:2210.04812].
Yuan:2023bvh
G.-W. Yuan, L. Lei, Y.-Z. Wang, B. Wang, Y.-Y. Wang, C. Chen, Z.-Q. Shen, Y.-F.
Cai, and Y.-Z. Fan http://arxiv.org/abs/2303.09391
arXiv:2303.09391.
Ilie:2023zfv
C. Ilie, J. Paulin, and K. Freese http://arxiv.org/abs/2304.01173
arXiv:2304.01173.
Parashari:2023cui
P. Parashari and R. Laha http://arxiv.org/abs/2305.00999
arXiv:2305.00999.
Jiao:2023wcn
H. Jiao, R. Brandenberger, and A. Refregier
http://arxiv.org/abs/2304.06429 arXiv:2304.06429.
Guo:2023hyp
S.-Y. Guo, M. Khlopov, X. Liu, L. Wu, Y. Wu, and B. Zhu
http://arxiv.org/abs/2306.17022 arXiv:2306.17022.
Su:2023jno
B.-Y. Su, N. Li, and L. Feng http://arxiv.org/abs/2306.05364
arXiv:2306.05364.
Hawking:1971ei
S. Hawking Mon. Not. Roy. Astron. Soc. 152 (1971) 75.
Carr:1974nx
B. J. Carr and S. W. Hawking Mon. Not. Roy. Astron. Soc. 168 (1974)
399–415.
Meszaros:1974tb
P. Meszaros Astron. Astrophys. 37 (1974) 225–228.
Carr:1975qj
B. J. Carr Astrophys. J. 201 (1975) 1–19.
Chapline:1975ojl
G. F. Chapline Nature 253 (1975), no. 5489 251–252.
Carr:2016drx
B. Carr, F. Kuhnel, and M. Sandstad Phys. Rev. D 94 (2016), no. 8
083504, [http://arxiv.org/abs/1607.06077 arXiv:1607.06077].
Green:2020jor
A. M. Green and B. J. Kavanagh J. Phys. G 48 (2021), no. 4 043001,
[http://arxiv.org/abs/2007.10722 arXiv:2007.10722].
Sasaki:2018dmp
M. Sasaki, T. Suyama, T. Tanaka, and S. Yokoyama Class. Quant. Grav.
35 (2018), no. 6 063001, [http://arxiv.org/abs/1801.05235
arXiv:1801.05235].
Carr:2020gox
B. Carr, K. Kohri, Y. Sendouda, and J. Yokoyama Rept. Prog. Phys.
84 (2021), no. 11 116902, [http://arxiv.org/abs/2002.12778
arXiv:2002.12778].
Quinlan:1987qj
G. D. Quinlan and S. L. Shapiro, STAR CLUSTER COLLAPSE TO A SUPERMASSIVE
BLACK HOLE: BINARIES AND GRAVITATIONAL RADIATION, 1987.
Mouri:2002mc
H. Mouri and Y. Taniguchi Astrophys. J. Lett. 566 (2002) L17–L20,
[http://arxiv.org/abs/astro-ph/0201102 astro-ph/0201102].
Bird:2016dcv
S. Bird, I. Cholis, J. B. Muñoz, Y. Ali-Haïmoud, M. Kamionkowski, E. D.
Kovetz, A. Raccanelli, and A. G. Riess Phys. Rev. Lett. 116
(2016), no. 20 201301, [http://arxiv.org/abs/1603.00464
arXiv:1603.00464].
Sasaki:2016jop
M. Sasaki, T. Suyama, T. Tanaka, and S. Yokoyama Phys. Rev. Lett.
117 (2016), no. 6 061101, [http://arxiv.org/abs/1603.08338
arXiv:1603.08338]. [Erratum: Phys.Rev.Lett. 121, 059901 (2018)].
Kashlinsky:2016sdv
A. Kashlinsky Astrophys. J. Lett. 823 (2016), no. 2 L25,
[http://arxiv.org/abs/1605.04023 arXiv:1605.04023].
Ali-Haimoud:2016mbv
Y. Ali-Haïmoud and M. Kamionkowski Phys. Rev. D 95 (2017),
no. 4 043534, [http://arxiv.org/abs/1612.05644
arXiv:1612.05644].
Raidal:2017mfl
M. Raidal, V. Vaskonen, and H. Veermäe JCAP 09 (2017) 037,
[http://arxiv.org/abs/1707.01480 arXiv:1707.01480].
Raidal:2018bbj
M. Raidal, C. Spethmann, V. Vaskonen, and H. Veermäe JCAP 02
(2019) 018, [http://arxiv.org/abs/1812.01930 arXiv:1812.01930].
LIGOScientific:2016dsl LIGO Scientific, Virgo Collaboration, B. P. Abbott et al. Phys. Rev.
X 6 (2016), no. 4 041015, [http://arxiv.org/abs/1606.04856
arXiv:1606.04856]. [Erratum: Phys.Rev.X 8, 039903 (2018)].
LIGOScientific:2020ibl LIGO Scientific, Virgo Collaboration, R. Abbott et al. Phys. Rev. X 11 (2021) 021053, [http://arxiv.org/abs/2010.14527
arXiv:2010.14527].
Hall:2020daa
A. Hall, A. D. Gow, and C. T. Byrnes Phys. Rev. D 102 (2020)
123524, [http://arxiv.org/abs/2008.13704 arXiv:2008.13704].
Jedamzik:2020ypm
K. Jedamzik JCAP 09 (2020) 022,
[http://arxiv.org/abs/2006.11172 arXiv:2006.11172].
DeLuca:2020qqa
V. De Luca, G. Franciolini, P. Pani, and A. Riotto JCAP 06 (2020)
044, [http://arxiv.org/abs/2005.05641 arXiv:2005.05641].
Franciolini:2021tla
G. Franciolini, V. Baibhav, V. De Luca, K. K. Y. Ng, K. W. K. Wong, E. Berti,
P. Pani, A. Riotto, and S. Vitale Phys. Rev. D 105 (2022), no. 8
083526, [http://arxiv.org/abs/2105.03349 arXiv:2105.03349].
Romero-Rodriguez:2021aws
A. Romero-Rodriguez, M. Martinez, O. Pujolàs, M. Sakellariadou, and
V. Vaskonen Phys. Rev. Lett. 128 (2022), no. 5 051301,
[http://arxiv.org/abs/2107.11660 arXiv:2107.11660].
Ferrarese:2004qr
L. Ferrarese and H. Ford Space Sci. Rev. 116 (2005) 523–624,
[http://arxiv.org/abs/astro-ph/0411247 astro-ph/0411247].
Gultekin:2009qn
K. Gultekin et al. Astrophys. J. 698 (2009) 198–221,
[http://arxiv.org/abs/0903.4897 arXiv:0903.4897].
Kormendy:2013dxa
J. Kormendy and L. C. Ho Ann. Rev. Astron. Astrophys. 51 (2013)
511–653, [http://arxiv.org/abs/1304.7762 arXiv:1304.7762].
Bean:2002kx
R. Bean and J. Magueijo Phys. Rev. D 66 (2002) 063505,
[http://arxiv.org/abs/astro-ph/0204486 astro-ph/0204486].
Kawasaki:2012kn
M. Kawasaki, A. Kusenko, and T. T. Yanagida Phys. Lett. B 711
(2012) 1–5, [http://arxiv.org/abs/1202.3848 arXiv:1202.3848].
Clesse:2016vqa
S. Clesse and J. García-Bellido Phys. Dark Univ. 15 (2017)
142–147, [http://arxiv.org/abs/1603.05234 arXiv:1603.05234].
Clesse:2017bsw
S. Clesse and J. García-Bellido Phys. Dark Univ. 22 (2018)
137–146, [http://arxiv.org/abs/1711.10458 arXiv:1711.10458].
Serpico:2020ehh
P. D. Serpico, V. Poulin, D. Inman, and K. Kohri Phys. Rev. Res. 2
(2020), no. 2 023204, [http://arxiv.org/abs/2002.10771
arXiv:2002.10771].
Nakama:2016kfq
T. Nakama, T. Suyama, and J. Yokoyama Phys. Rev. D 94 (2016),
no. 10 103522, [http://arxiv.org/abs/1609.02245
arXiv:1609.02245].
Nakama:2017xvq
T. Nakama, B. Carr, and J. Silk Phys. Rev. D 97 (2018), no. 4
043525, [http://arxiv.org/abs/1710.06945 arXiv:1710.06945].
Carr:2018rid
B. Carr and J. Silk Mon. Not. Roy. Astron. Soc. 478 (2018), no. 3
3756–3775, [http://arxiv.org/abs/1801.00672 arXiv:1801.00672].
Inman:2019wvr
D. Inman and Y. Ali-Haïmoud Phys. Rev. D 100 (2019), no. 8
083528, [http://arxiv.org/abs/1907.08129 arXiv:1907.08129].
Carr:2020erq
B. Carr, F. Kuhnel, and L. Visinelli Mon. Not. Roy. Astron. Soc.
501 (2021), no. 2 2029–2043, [http://arxiv.org/abs/2008.08077
arXiv:2008.08077].
Atal:2020yic
V. Atal, A. Sanglas, and N. Triantafyllou JCAP 06 (2021) 022,
[http://arxiv.org/abs/2012.14721 arXiv:2012.14721].
2018ApJ...855..105O
P. A. Oesch, R. J. Bouwens, G. D. Illingworth, I. Labbé, and
M. Stefanon The Astrophysical Journal 855 (Mar., 2018) 105,
[http://arxiv.org/abs/1710.11131 arXiv:1710.11131].
2021AJ....162...47B
R. J. Bouwens, P. A. Oesch, M. Stefanon, G. Illingworth,
I. Labbé, N. Reddy, H. Atek, M. Montes, R. Naidu,
T. Nanayakkara, E. Nelson, and S. Wilkins The Astrophysical
Journal 162 (Aug., 2021) 47,
[http://arxiv.org/abs/2102.07775 arXiv:2102.07775].
Bouwens:2014fua
R. J. Bouwens et al. Astrophys. J. 803 (2015), no. 1 34,
[http://arxiv.org/abs/1403.4295 arXiv:1403.4295].
2015ApJ...810...71F
S. L. Finkelstein, J. Ryan, Russell E., C. Papovich, M. Dickinson,
M. Song, R. S. Somerville, H. C. Ferguson, B. Salmon,
M. Giavalisco, A. M. Koekemoer, M. L. N. Ashby, P. Behroozi,
M. Castellano, J. S. Dunlop, S. M. Faber, G. G. Fazio, A. Fontana,
N. A. Grogin, N. Hathi, J. Jaacks, D. D. Kocevski, R. Livermore,
R. J. McLure, E. Merlin, B. Mobasher, J. A. Newman, M. Rafelski,
V. Tilvi, and S. P. Willner The Astrophysical Journal 810
(Sept., 2015) 71, [http://arxiv.org/abs/1410.5439
arXiv:1410.5439].
Atek:2015axa
H. Atek et al. Astrophys. J. 814 (2015), no. 1 69,
[http://arxiv.org/abs/1509.06764 arXiv:1509.06764].
Livermore:2016mbs
R. C. Livermore, S. L. Finkelstein, and J. M. Lotz Astrophys. J.
835 (2017), no. 2 113, [http://arxiv.org/abs/1604.06799
arXiv:1604.06799].
2017ApJ...843..129B
R. J. Bouwens, P. A. Oesch, G. D. Illingworth, R. S. Ellis, and
M. Stefanon The Astrophysical Journal 843 (July, 2017) 129,
[http://arxiv.org/abs/1610.00283 arXiv:1610.00283].
2017ApJ...838...29M
V. Mehta, C. Scarlata, M. Rafelski, T. Gburek, H. I. Teplitz,
A. Alavi, M. Boylan-Kolchin, S. Finkelstein, J. P. Gardner,
N. Grogin, A. Koekemoer, P. Kurczynski, B. Siana, A. Codoreanu,
D. F. de Mello, K.-S. Lee, and E. Soto The Astrophysical Journal 838 (Mar., 2017) 29, [http://arxiv.org/abs/1702.06953
arXiv:1702.06953].
2018ApJ...854...73I
M. Ishigaki, R. Kawamata, M. Ouchi, M. Oguri, K. Shimasaku, and
Y. Ono The Astrophysical Journal 854 (Feb., 2018) 73,
[http://arxiv.org/abs/1702.04867 arXiv:1702.04867].
Atek:2018nsc
H. Atek, J. Richard, J.-P. Kneib, and D. Schaerer Mon. Not. Roy. Astron.
Soc. 479 (2018), no. 4 5184–5195,
[http://arxiv.org/abs/1803.09747 arXiv:1803.09747].
2020ApJ...891..146R
S. Rojas-Ruiz, S. L. Finkelstein, M. B. Bagley, M. Stevans, K. D.
Finkelstein, R. Larson, M. Mechtley, and J. Diekmann The
Astrophysical Journal 891 (Mar., 2020) 146,
[http://arxiv.org/abs/2002.06209 arXiv:2002.06209].
Nakamura:1997sm
T. Nakamura, M. Sasaki, T. Tanaka, and K. S. Thorne Astrophys. J. Lett. 487 (1997) L139–L142,
[http://arxiv.org/abs/astro-ph/9708060 astro-ph/9708060].
Peters:1963ux
P. C. Peters and J. Mathews Phys. Rev. 131 (1963) 435–439.
LIGOScientific:2016fpe LIGO Scientific, Virgo Collaboration, B. P. Abbott et al. Phys. Rev.
Lett. 116 (2016), no. 13 131102,
[http://arxiv.org/abs/1602.03847 arXiv:1602.03847].
PhysRevLett.116.131102 LIGO Scientific Collaboration and Virgo Collaboration Collaboration
Phys. Rev. Lett. 116 (Mar, 2016) 131102.
Kelley:2017lek
L. Z. Kelley, L. Blecha, L. Hernquist, A. Sesana, and S. R. Taylor Mon.
Not. Roy. Astron. Soc. 471 (2017), no. 4 4508–4526,
[http://arxiv.org/abs/1702.02180 arXiv:1702.02180].
Burke-Spolaor:2018bvk
S. Burke-Spolaor et al. Astron. Astrophys. Rev. 27 (2019), no. 1 5,
[http://arxiv.org/abs/1811.08826 arXiv:1811.08826].
Rosado:2015epa
P. A. Rosado, A. Sesana, and J. Gair Mon. Not. Roy. Astron. Soc.
451 (2015), no. 3 2417–2433, [http://arxiv.org/abs/1503.04803
arXiv:1503.04803].
Taylor:2020zpk
S. R. Taylor, R. van Haasteren, and A. Sesana Phys. Rev. D 102
(2020), no. 8 084039, [http://arxiv.org/abs/2006.04810
arXiv:2006.04810].
Sesana:2008mz
A. Sesana, A. Vecchio, and C. N. Colacino Mon. Not. Roy. Astron. Soc. 390 (2008) 192, [http://arxiv.org/abs/0804.4476
arXiv:0804.4476].
Taylor:2021yjx
S. R. Taylor http://arxiv.org/abs/2105.13270 arXiv:2105.13270.
Maggiore:2018sht
M. Maggiore, Gravitational Waves. Vol. 2: Astrophysics and Cosmology.
Oxford University Press, 3, 2018.
NANOGrav:2020spf NANOGrav Collaboration, N. S. Pol et al. Astrophys. J. Lett.
911 (2021), no. 2 L34, [http://arxiv.org/abs/2010.11950
arXiv:2010.11950].
Ferreira:2022zzo
R. Z. Ferreira, A. Notari, O. Pujolas, and F. Rompineve JCAP 02
(2023) 001, [http://arxiv.org/abs/2204.04228 arXiv:2204.04228].
Dandoy:2023jot
V. Dandoy, V. Domcke, and F. Rompineve
http://arxiv.org/abs/2302.07901 arXiv:2302.07901.
Gouttenoire:2023ftk
Y. Gouttenoire and E. Vitagliano http://arxiv.org/abs/2306.17841
arXiv:2306.17841.
enterprise
J. A. Ellis, M. Vallisneri, S. R. Taylor, and P. T. Baker, “Enterprise:
|
http://arxiv.org/abs/2307.02808v1
|
20230706065530
|
Advancing Zero-Shot Digital Human Quality Assessment through Text-Prompted Evaluation
|
[
"Zicheng Zhang",
"Wei Sun",
"Yingjie Zhou",
"Haoning Wu",
"Chunyi Li",
"Xiongkuo Min",
"Xiaohong Liu",
"Guangtao Zhai",
"Weisi Lin"
] |
eess.IV
|
[
"eess.IV",
"cs.CV",
"cs.DB"
] |
Advancing Zero-Shot Digital Human Quality Assessment through Text-Prompted Evaluation
Zicheng Zhang, Wei Sun, Yingjie Zhou, Haoning Wu, Chunyi Li, Xiongkuo Min,
Xiaohong Liu, Guangtao Zhai, Senior Member, IEEE, and Weisi Lin, Fellow, IEEE
Zicheng Zhang, Wei Sun, Yingjie Zhou, Chunyi Li, Xiongkuo Min, and Guangtao Zhai are with the Institute of Image Communication and Network Engineering, Shanghai Jiao Tong University, 200240 Shanghai, China. E-mail: {zzc1998, sunguwei, zhouyingjie, lcysyzxdxc, minxiongkuo, zhaiguangtao}@sjtu.edu.cn.
Haoning Wu is with S-Lab, Nanyang Technological University, Singapore. E-mail: haoning001@e.ntu.edu.sg.
Xiaohong Liu is with John Hopcroft Center, Shanghai Jiao Tong University, Shanghai 200240, China. E-mail: xiaohongliu@sjtu.edu.cn.
Weisi Lin is with School of Computer Science and Engineering, Nanyang Technological University, Singapore. E-mail: wslin@ntu.edu.sg.
(Corresponding author: Guangtao Zhai.)
August 1, 2023
Digital humans have witnessed extensive applications in various domains, necessitating related quality assessment studies. However, there is a lack of comprehensive digital human quality assessment (DHQA) databases. To address this gap, we propose SJTU-H3D, a subjective quality assessment database specifically designed for full-body digital humans. It comprises 40 high-quality reference digital humans and 1,120 labeled distorted counterparts generated with seven types of distortions. The SJTU-H3D database can serve as a benchmark for DHQA research, allowing evaluation and refinement of processing algorithms. Further, we propose a zero-shot DHQA approach that focuses on no-reference (NR) scenarios to ensure generalization capabilities while mitigating database bias. Our method leverages semantic and distortion features extracted from projections, as well as geometry features derived from the mesh structure of digital humans. Specifically, we employ the Contrastive Language-Image Pre-training (CLIP) model to measure semantic affinity and incorporate the Naturalness Image Quality Evaluator (NIQE) model to capture low-level distortion information. Additionally, we utilize dihedral angles as geometry descriptors to extract mesh features. By aggregating these measures, we introduce the Digital Human Quality Index (DHQI), which demonstrates significant improvements in zero-shot performance. The DHQI can also serve as a robust baseline for DHQA tasks, facilitating advancements in the field. The database and the code are available at https://github.com/zzc-1998/SJTU-H3D.
Digital humans, quality assessment, database, zero-shot, no-reference
§ INTRODUCTION
Digital humans are computer-based simulations and models of human beings, extensively utilized in various applications such as gaming, the automotive industry, and the metaverse. The current research endeavors primarily focus on the generation, representation, rendering, and animation of digital humans <cit.>. However, with the rapid advancement of virtual reality (VR) and augmented reality (AR) technologies, there is an increasing demand from users for higher visual quality of digital humans. Consequently, it has become imperative to conduct quality assessment studies on digital humans.
Regrettably, acquiring digital human models is laborious and costly compared with 2D media such as images and videos, requiring specialized three-dimensional (3D) scanning devices and professional post-production, which makes it difficult to construct digital human quality assessment (DHQA) databases. Consequently, few subjective DHQA studies exist in the literature, and the absence of large-scale subjective experiments for assessing the visual quality of digital humans further hinders progress in this domain.
Therefore, in this paper, we propose a comprehensive subjective quality assessment database (SJTU-H3D) targeted at digital humans, aiming to address this research gap and contribute to the advancement of DHQA. The SJTU-H3D database comprises 40 high-quality reference digital humans, represented as full-body textured meshes, together with 1,120 distorted digital humans generated using seven types of distortions. The perceptual mean opinion scores (MOSs) of the distorted digital humans are collected through a meticulously controlled subjective experiment.
Notably, the SJTU-H3D database is the first large-scale database specifically designed for digital human quality assessment (DHQA) that focuses on full-body representations. The primary objective of this database is to advance the research and development of DHQA within the scientific community. Furthermore, it serves as an ideal platform for evaluating and refining various processing algorithms, including but not being limited to denoising and compression techniques.
By providing a comprehensive database consisting of high-quality reference models and distorted counterparts, the proposed SJTU-H3D database offers researchers and practitioners an opportunity to explore and enhance their DHQA methodologies. The availability of such a resource is expected to significantly contribute to the growth and advancement of the DHQA research community.
During recent years, data-driven image and video quality assessment (I/VQA) approaches <cit.> have garnered significant attention and have demonstrated remarkable performance in various application domains. The success of these approaches can be partly attributed to the availability of large-scale I/VQA databases such as the SPAQ database (containing 11,125 labeled images) <cit.> and the LSVQ database (comprising up to 38,811 annotated videos) <cit.>. These databases have also contributed to ensuring the generalization capability and robustness of data-driven methods.
However, in the realm of DHQA research, the availability of suitable perceptual quality assessment databases is limited. With the exception of the proposed SJTU-H3D database, only one perceptual quality assessment database, DHHQA <cit.>, focusing solely on digital human heads rather than full-body representations, exists.
This scarcity of databases makes it challenging to develop data-driven DHQA methods and ensure their generalization ability in practical scenarios.
This challenge motivates us to devise a zero-shot DHQA method that does not require training on labeled DHQA databases. Since pristine references are rarely available in practical applications, we focus exclusively on no-reference (NR) methods.
To extract both semantic and distortion features for evaluating the visual quality of digital humans, we employ projection rendering techniques. From a semantic perspective, we utilize the Contrastive Language-Image Pre-training (CLIP) model <cit.> to measure the correlation between the input projections and quality-related texts. Our hypothesis is that high-quality digital human projections should exhibit a strong correlation with positive quality-related texts and a weak correlation with negative ones. To determine the quality levels of the input projections, we design several positive-negative text pairs. The semantic affinity quality measure is then derived by computing the difference in affinity between positive and negative texts.
However, CLIP operates on low-resolution images, which limits its ability to capture low-level distortion information. To address this limitation, we incorporate the completely blind Naturalness Image Quality Evaluator (NIQE) <cit.> to extract low-level quality representations from the raw resolution.
To further enhance the accuracy of quality prediction, we also extract features from the mesh modality. For robustness and effectiveness, we choose the dihedral angle as the geometry descriptor, as it has been widely recognized for effectively capturing geometric features relevant to visual quality <cit.> and its values are confined within the range of [0, π]. By analyzing the changing tendency of dihedral angles corresponding to geometry compression and simplification levels, we average-pool the dihedral angles to derive the geometry loss quality measure.
Finally, all three quality measures (semantic affinity quality measure, spatial naturalness quality measure, and geometry loss quality measure) are aggregated using a sum function to form the proposed Digital Human Quality Index (DHQI). Experimental results demonstrate that DHQI significantly improves zero-shot performance and even achieves competitiveness with supervised methods. In summary, our contributions are as follows:
* We propose the first large-scale full-body DHQA database, SJTU-H3D, which consists of 40 high-quality digital humans represented by textured meshes and 1,120 distorted digital humans generated by 7 types of distortions.
* We carry out a well-controlled subjective experiment: 40 human subjects are invited and a total of 44,800 ratings are collected to gather the mean opinion scores (MOSs) for 1,120 distorted digital humans.
* We propose a novel text-prompted zero-shot digital human quality index. Extensive experiments are conducted, which include the general performance comparison, detailed distortion-specific performance comparison, statistical test, and ablation studies.
§ RELATED WORKS
In this section, we give a brief introduction to the development of 3D model quality assessment (3DQA) and no-reference image quality assessment (NR-IQA) methods.
§.§ 3DQA Development
§.§.§ 3DQA Databases
Early subjective 3D quality assessment (3DQA) databases primarily employ colorless point clouds and are relatively
small in scale <cit.>. However, recent efforts have been directed towards addressing the challenge of assessing visual quality in colored 3D models, resulting in the development of substantial 3DQA databases <cit.>. A detailed comparison between these databases and the proposed database is presented in Table <ref>. From the table, it is evident that the recent 3DQA databases, with the exception of DHHQA, encompass general 3D objects and do not specifically focus on 3D digital humans. Although the DHHQA database comprises real human heads, it neglects the consideration of the body part. This highlights the significance of the proposed SJTU-H3D database.
§.§.§ 3DQA Methods
In the field of 3D quality assessment (3DQA), metrics can be broadly categorized into model-based and projection-based methods. Model-based methods <cit.> involve extracting features directly from the 3D model, which offers the advantage of being viewpoint-invariant and relatively straightforward. However, due to the inherent complexity of 3D models, these methods can be computationally expensive and time-consuming.
On the other hand, projection-based methods <cit.> infer the visual quality of a 3D model based on its corresponding projections. These methods leverage mature and successful 2D media analysis tools, which often lead to excellent performance. However, projection-based methods are highly dependent on the selection of viewpoints and can be susceptible to instability when subjected to various rendering setups.
More recently, some 3DQA methods <cit.> have emerged that aim to combine features from multiple modalities. By leveraging the advantages of both model-based and projection-based modalities, these methods attempt to enhance overall performance. This integration of features from different modalities allows for a more comprehensive assessment of 3D model quality.
§.§ NR-IQA Methods
§.§.§ Supervised NR-IQA
In general, NR-IQA methods aim to evaluate image quality without reference information. They can be classified into handcrafted-based methods, which extract features using manual techniques, and deep learning-based methods, which employ deep neural networks for feature extraction; both families have demonstrated effectiveness in common IQA tasks.
One representative handcrafted-based method is BRISQUE <cit.>, which utilizes natural scene statistics (NSS) in the spatial domain to analyze image quality. CPBD <cit.> estimates blur levels by computing the cumulative probability of blur detection. BMPRI <cit.> predicts image quality by generating multiple pseudo-reference images obtained through further degradation of the distorted image and comparing their similarities. NFERM <cit.> investigates image quality using the free energy principle.
Deep learning-based IQA methods have gained momentum with the advancement of deep neural networks. DBCNN <cit.> consists of two streams of deep neural networks to address both synthetic and authentic distortions. HyperIQA <cit.> employs a self-adaptive hyper network to handle challenges arising from distortion diversity and content variation in IQA tasks. MUSIQ <cit.> utilizes a multi-scale image quality transformer to represent image quality at different levels of granularity. StairIQA <cit.> hierarchically integrates features extracted from intermediate layers to leverage low-level and high-level visual information.
§.§.§ Zero-shot NR-IQA
Zero-shot IQA methods, also known as opinion-unaware methods, have emerged, which do not rely on training on subjective-rated quality assessment databases and can operate on unseen images. The earliest zero-shot NR-IQA methods are NIQE <cit.> and IL-NIQE <cit.>. NIQE extracts handcrafted natural scene statistics (NSS) features from raw-resolution images and quantifies naturalness quality by computing the Multivariate Gaussian (MVG) distance to high-quality images. IL-NIQE enhances the feature set by incorporating additional quality-aware features, including gradient features, log Gabor filter responses, and color statistics.
§ DATABASE CONSTRUCTION
In this section, we mainly present the construction details of the proposed SJTU-H3D database, which includes reference collection, reference characterization, distortion generation, and subjective experiment.
§.§ Reference Collection
In order to ensure the visual quality and content diversity of the reference 3D digital humans, a manual selection process is conducted to choose all reference digital humans from HumanAlloy[https://humanalloy.com/], a platform that provides high-quality 3D human models. A total of 40 digital humans are purchased and collected for this study. These digital humans are represented as textured meshes, with texture resolutions of 2048×2048. Fig. <ref> illustrates the rendered projections of the selected digital humans, and Table <ref> provides detailed information regarding the number of vertices and faces for each model.
§.§ Reference Charaterization
The primary objective of our study is to curate a database that exhibits high diversity and generality while minimizing biases associated with the selection of source models. Therefore, we propose an approach to quantitatively characterize the geometry and color complexity since these two aspects are crucial for the visual quality of 3D digital humans.
§.§.§ Geometry Information
In the domain of image quality assessment (IQA), the analysis of spatial information often involves computing the standard deviation of the Sobel-filtered image. Motivated by this concept, we propose a novel approach to quantify the geometry information by utilizing the standard deviation of the dihedral angles in a mesh. The dihedral angle is a fundamental metric employed in computer graphics and geometric modeling to characterize the shape and curvature of meshes <cit.>, thus drawing a parallel to the Sobel-filtering process in image analysis. It denotes the angle between two neighboring faces that share an edge within the mesh, providing valuable insights into the smoothness or sharpness of the surface. Specifically, the geometry information can be obtained as:
GI = std(Mesh_Dihedral),
where GI represents the geometry information and std(·) stands for the standard deviation function.
By leveraging the standard deviation of dihedral angles, we aim to capture and assess the geometric characteristics of the mesh, enabling a more comprehensive evaluation of its structure and shape.
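To make the computation concrete, the following is a minimal sketch of the geometry-information measure, assuming the mesh is loadable with the trimesh library (the file name is hypothetical). trimesh exposes the angle between adjacent face normals, which equals π minus the dihedral angle, so the standard deviation is identical under either convention.

```python
import numpy as np
import trimesh

# hypothetical mesh file; any manifold textured mesh works here
mesh = trimesh.load("digital_human.obj", force="mesh")

# angle between the normals of each adjacent face pair; the dihedral angle
# is pi minus this value, and std() is invariant under that reflection
angles = mesh.face_adjacency_angles

GI = np.std(angles)
print(f"geometry information (GI): {GI:.4f}")
```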
§.§.§ Colorfulness
To evaluate the color characteristics, we focus solely on the texture map. Following the common color calculation process <cit.>, <cit.>, we first convert the texture from RGB channels to LAB channels and combine the standard deviation of A and B channels, which can be mathematically expressed as:
CF = √(std(A)^2 + std(B)^2),
where CF represents the colorfulness measure, A and B denote the corresponding color channels of the texture. Similar colorfulness measures are also employed in many IQA works <cit.> as one of the metrics or features for assessing the quality of images.
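A minimal sketch of the colorfulness computation with OpenCV is given below (the texture file name is hypothetical); note that OpenCV stores the A and B channels with a +128 offset, which does not affect the standard deviation.

```python
import cv2
import numpy as np

texture = cv2.imread("texture_2048.png")  # hypothetical texture map, BGR
lab = cv2.cvtColor(texture, cv2.COLOR_BGR2LAB).astype(np.float32)

# split channels; the constant offset on A and B leaves std() unchanged
_, A, B = cv2.split(lab)
CF = np.sqrt(np.std(A) ** 2 + np.std(B) ** 2)
print(f"colorfulness (CF): {CF:.2f}")
```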
§.§.§ Characterization Visualization
We apply the extracted geometry information and colorfulness measure to the collection of 40 reference digital humans. The results are visualized in Fig. <ref>. The analysis demonstrates that the selected reference 3D digital humans exhibit a wide spectrum of geometry information and colorfulness. Notably, model #24 positioned in the top-right corner showcases intricate geometry details and vibrant colorfulness. In contrast, model #15 portrays simpler geometry information and relatively subdued colorfulness.
The proposed measures thoroughly capture the distinctiveness of 3D digital humans concerning their geometry and color characteristics. It is important to emphasize that these measures are directly computed from the underlying model files, thereby ensuring their stability and viewpoint invariance.
§.§ Distortion Generation
To account for the common sources of distortion, we incorporate distortions arising from both the generation process and the transmission process. During the generation process, we consider geometry noise resulting from erroneous scanning procedures, as well as color noise introduced by cameras. Furthermore, compression and simplification techniques are widely employed during the transmission process. Hence, these factors are also taken into consideration in our assessment. By considering the full range of distortion sources, we aim to provide a comprehensive evaluation of the quality of 3D digital humans.
Therefore, to degrade the quality of the reference 3D digital humans, we apply seven types of distortions; the specific settings for each distortion type are listed in Table <ref>. We manually select the distortion parameters to cover most of the visual quality range, and the details are as follows:
* Geometry Noise (GN): Gaussian noise with standard deviations σ_g of 0.05, 0.1, 0.15, and 0.2 is added to the vertices' geometry coordinates of the digital humans.
* Color Noise (CN): Gaussian noise with standard deviations σ_c of 20, 40, 60, and 80 is introduced to the texture maps.
* Face Simplification (FS): We utilize the simplification algorithm proposed in <cit.> to simplify the digital human. The simplification rate (number of faces remaining / number of original faces) is set to 0.4, 0.2, 0.1, and 0.05.
* Position Compression (PC): The Draco library[https://github.com/google/draco] is employed to quantize the position attributes of digital humans. The compression parameter Q_p is varied as 6, 7, 8, and 9.
* UV Map Compression (UMC): Similarly, the Draco library is used to quantize the texture coordinate attributes with the compression parameter Q_t set to 6, 7, 8, and 9.
* Texture Down-sampling (TD): The original texture maps with a resolution of 2048×2048 are down-sampled to resolutions of 1024×1024, 512×512, 256×256, and 128×128.
* Texture Compression (TC): JPEG compression is employed to compress the texture maps. The quality levels are set to 3, 10, 15, and 20.
In all, a total of 40×7×4 = 1,120 distorted 3D digital humans are obtained. The distortion samples are exhibited in Fig. <ref>.
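As an illustration, the following minimal sketch generates two of the texture-domain distortions (CN and TC) with OpenCV under the parameter settings listed above; the file names are hypothetical, and the geometry-domain distortions (GN, FS, PC, UMC) would instead rely on mesh-processing tools such as the Draco library.

```python
import cv2
import numpy as np

texture = cv2.imread("texture_2048.png").astype(np.float32)  # hypothetical

# Color Noise (CN): additive Gaussian noise on the texture map
sigma_c = 40
noisy = np.clip(texture + np.random.normal(0, sigma_c, texture.shape), 0, 255)
cv2.imwrite("texture_cn.png", noisy.astype(np.uint8))

# Texture Compression (TC): JPEG compression at a fixed quality level
quality = 10
cv2.imwrite("texture_tc.jpg", texture.astype(np.uint8),
            [cv2.IMWRITE_JPEG_QUALITY, quality])
```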
§.§ Subjective Experiment
§.§.§ Rendering Setting
In accordance with the recommended procedure outlined in <cit.>, passive watching is chosen over interactive watching for the subjective experiment to mitigate potential viewing bias. The 3D digital humans are rendered into video sequences for exhibition purposes. The open3d library is utilized to generate the projections <cit.>. The rendering window is configured with a resolution of 1080 × 1920. To capture the video frames, a horizontal and a vertical circle are employed as the predefined camera paths. Each 3D digital human is captured at one frame every 3 degrees, resulting in a total of 240 frames (360 × 2 ÷ 3). These frames are then compiled into an 8-second video with a framerate of 30 frames per second. This approach ensures that the viewers can effectively perceive the significant quality information. The rendering process is depicted in Fig. <ref>.
§.§.§ Experiment Process
A total of 40 human subjects, comprising 20 males and 20 females, are recruited to participate in the subjective experiment. Prior to the experiment, a training session is conducted, wherein additional videos generated using the aforementioned process are presented to familiarize the subjects with the tasks. The rating process takes place within a well-controlled laboratory environment, maintaining a normal level of illumination. The viewers are seated at a distance of twice the screen height. The videos are displayed on an iMac monitor capable of supporting resolutions up to 4096 × 2304. The order of video presentations is randomized. To facilitate the evaluation process, a double stimuli strategy is employed, where the reference and distorted videos are simultaneously displayed on the screen. The rating interface is exhibited in Fig. <ref> and the quality score ranges from 0 to 5. In order to mitigate viewer fatigue, the entire experiment is divided into 20 sessions, with each session featuring 56 digital humans. Ultimately, a total of 44,800 subjective ratings (1,120 × 40) are collected.
§.§.§ Subjective Data Analysis
After the subjective experiment, we calculate the z-scores from the raw ratings as follows:
z_ij = (r_ij - μ_i)/σ_i,
where μ_i = (1/N_i)∑_j=1^N_i r_ij, σ_i = √((1/(N_i-1))∑_j=1^N_i (r_ij - μ_i)^2), and N_i is the number of digital humans judged by subject i.
In accordance with the ITU-R BT.500-13 <cit.> standard, ratings from unreliable subjects are excluded from the analysis.
The corresponding z-scores are linearly rescaled to the range of [0,5]. Finally, the mean opinion scores (MOSs) are computed by averaging the rescaled z-scores.
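The z-score normalization and MOS computation can be sketched as follows (toy ratings stand in for the real subjective data, and the ITU-R BT.500-13 subject-rejection step is omitted):

```python
import numpy as np

def compute_mos(ratings: np.ndarray) -> np.ndarray:
    """ratings: (num_subjects, num_stimuli); NaN marks unrated stimuli."""
    mu = np.nanmean(ratings, axis=1, keepdims=True)            # per-subject mean
    sigma = np.nanstd(ratings, axis=1, ddof=1, keepdims=True)  # per-subject std
    z = (ratings - mu) / sigma
    # linearly rescale z-scores to [0, 5], then average over subjects
    z = 5 * (z - np.nanmin(z)) / (np.nanmax(z) - np.nanmin(z))
    return np.nanmean(z, axis=0)

mos = compute_mos(np.random.uniform(0, 5, size=(40, 1120)))  # toy data
```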
Fig. <ref> illustrates the distribution of MOSs and the corresponding probability distributions for different distortion types. Interestingly, the probability distributions reveal that visual quality is less sensitive to varying levels of FS distortion than to the other distortion types. Even when the number of faces is reduced to a ratio of 0.05 (only about 2k faces are preserved), the visual quality score remains higher than that of other distortions at comparable levels. This observation indicates that visual quality is relatively resilient to FS distortion, implying that the reduction in face complexity may not significantly impact the perceived quality.
§ PROPOSED METHOD
In this section, we introduce the three indexes that make up the whole proposed digital human quality index (DHQI), which includes the text-prompted semantic affinity quality measure, spatial naturalness quality measure, and geometry loss quality measure. These three indexes are then aligned and aggregated into the proposed DHQI quality index. The framework is exhibited in Fig. <ref>.
§.§ Pre-processing
We acquire the cube-like projection set of the given digital human as follows:
𝒫 = ψ(𝒟ℋ),
𝒫 = { P_k|k =1,⋯, 6},
where 𝒫 represents the set of the 6 rendered projections and ψ(·) stands for the rendering process.
Such rendering process has been employed in the popular point cloud compression standard MPEG VPCC <cit.> and many other 3DQA works <cit.>. The projections are utilized as the input information for the text-prompted semantic affinity and spatial naturalness measure.
§.§ Text-prompted Semantic Affinity Quality Measure
To assess the perception of quality related to semantic content, specifically evaluating the quality of contents and the ability to discern semantic distortions, we design the text-prompted semantic affinity quality measure. Inspired by CLIP <cit.>-based quality assessment tasks <cit.>, we hold the hypothesis that the projections of the high-quality digital humans should have higher affinity with positive quality-related descriptions (e.g. good, perfect) and lower affinity with negative quality-related descriptions (e.g. bad, distorted).
§.§.§ Text Prompt Format
In accordance with the official recommendation provided by CLIP <cit.> and drawing from established practices, our text prompts are designed as a concatenation of three components: a prefix, a description, and a suffix. To be more precise, the text prompt T corresponding to the raw description D is defined as:
T = “a" + D + “projection of 3d human model",
where the suffix “projection of 3d human model" is specifically designed to fit the task of DHQA. This carefully chosen suffix can encourage the CLIP model to prioritize and focus its attention on the detection and evaluation of content-aware distortions that may arise in the context of 3D digital humans.
§.§.§ Description Selection
We have identified descriptions pertaining to quality assessment that encompass broad evaluation aspects to ensure robustness. In this study, the general quality-related descriptions employed comprise the contrasting pairs of high quality ↔ low quality, good ↔ bad, and perfect ↔ distorted. The utilization of the high quality ↔ low quality as well as the good ↔ bad text pair assists in directing the attention of the CLIP model towards general subjective impressions. Conversely, the perfect ↔ distorted pair compels the CLIP model to prioritize the existence of distortions.
§.§.§ Affinity Difference Computation
Given the input image I and text T, the semantic affinity can be calculated with the assistance of CLIP as:
F_I = E_I(I), F_T = E_T(T),
A(I,T) = (F_I · F_T')/(‖F_I‖‖F_T‖),
where E_I and E_T stand for the image and text encoders of CLIP, F_I and F_T represent the CLIP-encoded features, and A(I,T) indicates the affinity between the input image and text.
Afterward, the computation of zero-shot quality affinity can be derived from the aforementioned selected descriptions by calculating the disparity between the probabilities assigned to positive and negative textual inputs:
𝒜(𝒫, T) = 1/6∑_k=1^6 A(P_k,T),
𝒜_diff(𝒫,T_+, T_-) = 1/N_T∑_i=1^N_T(𝒜(𝒫,T_+^i) - 𝒜(𝒫,T_-^i)),
where the averaged affinity to the given text T, denoted by 𝒜(𝒫, T), is calculated by CLIP across the six projections 𝒫. In this context, T_+^i and T_-^i refer to the positive and negative text descriptions, respectively, from the i-th text pair. The variable N_T represents the total number of text pairs. Furthermore, 𝒜_diff signifies the cumulative difference between the averaged positive and negative affinity.
The sigmoid remapping technique is then used to map the raw difference scores 𝒜_diff obtained from perceptual quality evaluation into a range of [0, 1]. This remapping is done based on the guidance provided by the Video Quality Experts Group (VQEG) <cit.>.
The purpose of sigmoid remapping is to transform the raw difference scores into a perceptually meaningful range that is easier to interpret, and the final text-prompted semantic affinity quality score can be derived as:
Q_A = 1/(1 + e^(-𝒜_diff)).
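A minimal sketch of the whole semantic affinity measure with the open_clip package is given below; the pretrained tag is one plausible choice for the LAION-2B ViT-B-32 weights mentioned in the implementation details, and the projection file names are hypothetical.

```python
import torch
import open_clip
from PIL import Image

model, _, preprocess = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")  # assumed pretrained tag
tokenizer = open_clip.get_tokenizer("ViT-B-32")

pairs = [("high quality", "low quality"), ("good", "bad"),
         ("perfect", "distorted")]
prompts = [f"a {d} projection of 3d human model" for p in pairs for d in p]
texts = tokenizer(prompts)

# six hypothetical cube-like projections of one digital human
images = torch.stack([preprocess(Image.open(f"proj_{k}.png"))
                      for k in range(6)])

with torch.no_grad():
    f_img = model.encode_image(images)
    f_txt = model.encode_text(texts)
f_img = f_img / f_img.norm(dim=-1, keepdim=True)
f_txt = f_txt / f_txt.norm(dim=-1, keepdim=True)

affinity = (f_img @ f_txt.T).mean(dim=0)           # average over projections
a_diff = (affinity[0::2] - affinity[1::2]).mean()  # positive minus negative
Q_A = torch.sigmoid(a_diff).item()
```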
§.§ Spatial Naturalness Quality Measure
Apart from evaluating semantic affinity, we incorporate the use of NIQE (Naturalness Image Quality Evaluator <cit.>) as a blind quality evaluator to assess the spatial naturalness of the digital humans. The purpose of employing NIQE is to identify and quantify common low-level distortions encountered in practical digital humans, including Gaussian noise, blur, and JPEG compression artifacts. By incorporating NIQE alongside semantic affinity evaluation, we aim to complement the assessment of high-level information with an evaluation of low-level technical quality.
The NIQE index operates by quantifying the disparity between the characteristics of the input image features and the anticipated distribution of features observed in "high-quality" images, which are derived from a diverse set of pristine natural images. Since the raw NIQE scores and the raw affinity difference scores are on different scales, it is necessary to normalize the NIQE scores to facilitate meaningful comparison. To achieve this, we divide the NIQE scores by a constant value, denoted as c_1, which effectively restricts the majority of NIQE scores to the range of [0,1]. Consequently, the spatial naturalness quality measure can be computed as follows:
𝒩(𝒫) = 1/6∑_k=1^6 N(P_k),
Q_N = 1/(1 + e^(𝒩/c_1)),
where N(P_k) denotes the NIQE value for the k-th projection, 𝒩(𝒫) represents the average NIQE value across the 6 projections, and Q_N stands for the spatial naturalness quality measure. It is worth noting that NIQE scores are inversely correlated with quality, so the sigmoid is applied to the negated score −𝒩/c_1, which yields a consistent interpretation and aligns the NIQE scores with the quality evaluation framework.
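A minimal sketch of this measure is shown below, assuming the pyiqa package (IQA-PyTorch), which ships a NIQE implementation; the input-tensor layout is an assumption on our part, and c_1 = 100 follows the implementation details.

```python
import torch
import pyiqa  # IQA-PyTorch; assumed to provide a NIQE metric

niqe = pyiqa.create_metric("niqe")
c1 = 100.0  # scale constant from the implementation details

def spatial_naturalness(projections: torch.Tensor) -> float:
    """projections: (6, 3, H, W) float tensor in [0, 1] (assumed layout)."""
    scores = torch.stack([niqe(p.unsqueeze(0)) for p in projections])
    n_mean = scores.mean()
    # sigmoid of the negated, rescaled NIQE mean: lower NIQE -> higher Q_N
    return torch.sigmoid(-n_mean / c1).item()
```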
§.§ Geometry Loss Measure
The aforementioned measures are applied to projections, specifically the image modality. In order to enhance the model's understanding of digital humans, it is proposed to directly extract features from the mesh modality to capture the loss in geometry with respect to visual quality.
§.§.§ Descriptor Selection
Various geometry attributes have been utilized to describe the quality-related geometric characteristics of meshes <cit.>, including curvature, dihedral angle, face angle, face area, etc. For the purpose of preserving stability and improving the robustness of the proposed zero-shot method, the dihedral angle is selected as the geometry descriptor for the following reasons: a) Extensive evidence supports the effectiveness of the dihedral angle in describing geometric features relevant to visual quality <cit.>. b) Unlike other geometry attributes, the dihedral angle is invariant to scale. Its values are confined within the range of [0, π], thereby contributing to its robustness.
The dihedral angle is the angle between two adjacent faces, which can be calculated as the dot product of corresponding normal vectors:
Θ = {θ_j/π | cos(θ_j) = (n_j1·n_j2)/(‖n_j1‖‖n_j2‖)},
where θ_j/π indicates the scaled dihedral angle corresponding to the j-th edge of the mesh, Θ indicates the set of the scaled dihedral angle values, n_j1 and n_j2 stand for the normal vectors of the two adjacent faces whose co-edge is the j-th edge.
§.§.§ Quality Correlation with Dihedral Angle
Lossy mesh compression and simplification techniques can potentially diminish a mesh's structural details, resulting in a smoother and simpler surface representation. In such cases, the faces comprising the smoother and simpler surface tend to exhibit dihedral angles that approach π, leading to an inherent inclination for larger dihedral angles. To substantiate this observation, we present the tendencies of the mean values of the dihedral angles in Fig. <ref>, which shows a consistent upward trend in dihedral-angle means as compression/simplification levels increase. Therefore, the mean value of the dihedral angles can generally be taken as an indicator of geometry detail loss caused by compression/simplification. The geometry loss quality measure can then be calculated as:
Q_G = 1/(1 + e^Θ̄),
where Q_G represents the geometry loss quality measure, Θ̄ denotes the mean value of the scaled dihedral angles, and the sigmoid is applied to the negated mean Θ̄ because of the positive correlation between the dihedral angles' mean values and compression/simplification levels.
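A minimal sketch of the geometry loss measure with trimesh follows; the dihedral angle is recovered as π minus trimesh's adjacent-normal angle, and the mesh file name is hypothetical.

```python
import numpy as np
import trimesh

def geometry_loss(mesh: trimesh.Trimesh) -> float:
    # scaled dihedral angles in [0, 1]: pi minus the adjacent-normal angle
    theta = (np.pi - mesh.face_adjacency_angles) / np.pi
    mean_theta = theta.mean()
    # sigmoid of the negated mean: smoother surfaces -> lower quality
    return 1.0 / (1.0 + np.exp(mean_theta))

Q_G = geometry_loss(trimesh.load("digital_human.obj", force="mesh"))
```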
§.§ Quality Measure Aggregation
In order to develop a reliable zero-shot perceptual quality index, we adopt a direct aggregation approach wherein we sum up the scale-aligned scores of various indices without performing any fine-tuning processes. Considering that the Q_A, Q_N, and Q_G have undergone sigmoid rescaling, all three measures are bounded within the range of [0, 1]. Consequently, we define the comprehensive unified DHQI (digital human quality index) as follows:
Q_DHQI = Q_A + Q_N + Q_G,
where Q_DHQI indicates the final quality values for the digital humans.
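Assembling the three sketches above, the final index is a plain unweighted sum, which is well defined because each component has already been sigmoid-rescaled:

```python
def dhqi(Q_A: float, Q_N: float, Q_G: float) -> float:
    # all components are sigmoid-rescaled, so no per-term weights are needed
    return Q_A + Q_N + Q_G
```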
§ EXPERIMENT
§.§ Validation Setup
§.§.§ Benchmark Databases
In addition to the proposed SJTU-H3D database, we have incorporated the digital human quality assessment (DHHQA) database <cit.> as an additional resource for benchmark validation. The DHHQA database comprises a total of 55 scanned digital human heads that serve as reference samples, along with 1,540 labeled distorted digital human heads. These distorted samples have been intentionally degraded through the introduction of noise and compression/simplification.
§.§.§ k-fold Cross-Validation
To ensure robust evaluation, we adopt a k-fold cross-validation strategy. This approach involves dividing the database into k equally sized folds. The model is then trained on k-1 of these folds and subsequently tested on the remaining fold. This process is repeated k times, with each fold being used as the test set once. By averaging the performance across these k iterations, we obtain a more reliable estimate of the model's effectiveness, minimizing the impact of random variations.
For both the SJTU-H3D and DHHQA databases, we have selected a value of k=5 to conduct the k-fold cross-validation, ensuring a balanced evaluation across multiple subsets. It's worth mentioning that there is no content overlap between the training and testing folds.
To facilitate a direct and fair comparison between zero-shot and supervised methods, we validate their performance in the following way. Zero-shot methods are directly applied to the testing folds, as they do not require any training. The performance is then averaged across the testing folds and reported as the final performance. On the other hand, supervised methods undergo training on the training folds and are subsequently tested on the testing folds. Similar to zero-shot methods, the average performance is calculated and reported as the final performance. Adopting this methodology enables a direct and unbiased comparison of the performance between zero-shot and supervised methods, providing insights into their respective strengths and limitations.
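One way to realize this protocol with scikit-learn is sketched below; GroupKFold enforces the no-content-overlap constraint by grouping the distorted samples by their reference model, and the feature/label arrays are assumed to be precomputed.

```python
import numpy as np
from sklearn.model_selection import GroupKFold
from sklearn.svm import SVR
from scipy.stats import spearmanr

def cross_validate(features, mos, model_ids, k=5):
    """features: (1120, d) quality measures; model_ids: reference-model
    index of each distorted sample, keeping train/test contents disjoint."""
    srccs = []
    for train, test in GroupKFold(n_splits=k).split(features, mos,
                                                    groups=model_ids):
        svr = SVR(kernel="rbf").fit(features[train], mos[train])
        pred = svr.predict(features[test])
        srccs.append(spearmanr(pred, mos[test]).correlation)
    return float(np.mean(srccs))
```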
§.§.§ Implementation Details
The cube-like projection process described in Section <ref> is conducted with the open3d <cit.> library at a resolution of 1080P. The white background is cropped out. The projections are downsampled to 224×224 as the input of the CLIP <cit.> image encoder. The ViT-B-32 <cit.> backbone with LAION-2B <cit.> pretrained weights is utilized as the CLIP model. To fit the DHHQA database, we replace the suffix “projection of 3d human model" described in Equation <ref> with “projection of 3d human face". The scale constant c_1 described in Section <ref> is set to 100. The supervised variant of the proposed DHQI is trained with a Support Vector Regression (SVR) model with an RBF kernel.
The official source code is used for the competitors and default parameters are maintained. The default 5-fold cross-validation is strictly followed for the competitors to make the comparison fair. In addition, the predicted scores of all the methods are mapped to the MOS scale via a five-parameter logistic regression.
§.§.§ Evaluation Criteria
Four mainstream criteria are employed for evaluation, which include Spearman Rank Correlation Coefficient (SRCC), Pearson Linear Correlation Coefficient (PLCC), Kendall’s Rank Order Correlation Coefficient (KRCC), and Root Mean Squared Error (RMSE). SRCC gauges the correlation of ranks, PLCC represents linear correlation, KRCC reflects the likeness of the orderings, while RMSE measures the quality prediction accuracy. A top-performing model should have SRCC, PLCC, and KRCC values that approach 1 and RMSE values close to 0.
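The four criteria, together with the five-parameter logistic mapping applied before PLCC and RMSE, can be sketched as follows (the logistic form is the common VQEG variant, and the initial parameters are heuristic):

```python
import numpy as np
from scipy.stats import spearmanr, pearsonr, kendalltau
from scipy.optimize import curve_fit

def logistic5(x, b1, b2, b3, b4, b5):
    # common five-parameter logistic used to map scores to the MOS scale
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

def evaluate(pred, mos):
    srcc = spearmanr(pred, mos).correlation
    krcc = kendalltau(pred, mos).correlation
    p0 = [np.max(mos), 1.0, np.mean(pred), 0.1, 0.1]  # heuristic start
    params, _ = curve_fit(logistic5, pred, mos, p0=p0, maxfev=10000)
    mapped = logistic5(pred, *params)
    plcc = pearsonr(mapped, mos)[0]
    rmse = np.sqrt(np.mean((mapped - mos) ** 2))
    return srcc, plcc, krcc, rmse
```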
§.§ Competitors Selection
The competitors' selection is conducted to ensure high diversity, which includes the zero-shot FR methods, zero-shot NR methods, and the supervised NR methods.
§.§.§ Zero-shot FR Methods
We consider several classical projection-based FR methods: PSNR, SSIM <cit.>, MS-SSIM <cit.>, and GMSD <cit.>. These methods are applied to the six perpendicular projections, and the resulting scores are averaged and recorded. Additionally, we incorporate three popular point-based FR metrics proposed by MPEG: PSNR_p2po <cit.>, PSNR_p2pl <cit.>, and PSNR_yuv <cit.>. For the purpose of validation, we convert the digital human models into point clouds. Furthermore, we utilize G-LPIPS* <cit.>, which is a projection-based FR metric modified from LPIPS <cit.> and is designed for textured meshes. The official pretrained weights are employed for this metric.
§.§.§ Zero-shot NR Methods
These methods comprise CPBD <cit.>, pretrained BRISQUE* <cit.>, NIQE <cit.>, and IL-NIQE <cit.>.
§.§.§ Supervised NR Methods
These methods encompass handcrafted approaches such as BRISQUE <cit.>, NFERM <cit.>, and BMPRI <cit.>, which are supervised using the Support Vector Regression (SVR) model. Additionally, we include deep learning-based methods, namely DBCNN <cit.>, HyperIQA <cit.>, MUSIQ <cit.>, and StairIQA <cit.>, which have been retrained for our evaluation.
§.§ Performance Discussion
The overall performance on the SJTU-H3D and DHHQA databases are exhibited in Table <ref>, from which we can draw several conclusions.
§.§.§ Zero-shot Performance
a) Among all the zero-shot methods compared on the SJTU-H3D database, the proposed DHQI method outperforms every competitor. Additionally, it remains competitive even against FR metrics on the DHHQA database.
b) Nevertheless, the FR metrics that exhibit the highest performance on the DHHQA database, namely MS-SSIM & GMSD, suffer significant performance degradation when applied to the SJTU-H3D database. This decline suggests that these metrics lack robustness in handling diverse digital human content.
c) In contrast, all the competing zero-shot NR methods consistently exhibit lower performance compared to the proposed DHQI method. The reason for this disparity lies in the focus of these methods on addressing low-level distortions, which restricts their ability to effectively capture and model high-level semantic quality representations. By leveraging the semantic affinity quality measure, the DHQI method can enhance the performance of zero-shot NR approaches even further.
§.§.§ Supervised Performance
Due to the significant advancements achieved by deep neural networks, deep learning-based methods such as HyperIQA and MUSIQ have demonstrated superior performance compared to traditional handcrafted methods. Despite this, the proposed DHQI method, which is solely supervised by Support Vector Regression (SVR) model, achieves the top-ranking performance on the SJTU-H3D database.
One notable advantage of the proposed supervised DHQI index is its cost-effectiveness in terms of time and computational resources. The calibration process of an SVR model requires considerably less time and computational overhead compared to training and optimizing deep neural networks. This attribute enhances the practical viability and efficiency of the proposed DHQI method.
§.§ Distortion-specific Performance
To investigate the specific effects of zero-shot methods, we present the distortion-specific performance in Table <ref>, from which we can make several observations:
a) The proposed DHQI method achieves first place in three types of distortions: PC, UMC, and TD, which demonstrates its effectiveness in handling these distortions.
b) The point-based methods proposed by MPEG exhibit high sensitivity to noise-related distortions. This can be attributed to the direct impact of geometry and color noise on the point-level quality characteristics. Additionally, the PSNR_yuv metric demonstrates a strong discriminative ability in distinguishing quality differences within CN, PC, UMC, and TD distortions. However, it is less effective in handling cross-distortion content from a general perspective (its overall SRCC performance is just 0.5247).
c) The zero-shot NR methods NIQE and IL-NIQE show competitive performance for UMC, TD, and TC distortions. This can be attributed to the fact that UMC and TD distortions introduce blurring effects to digital human projections, which aligns with the strengths of these methods. TC distortion, on the other hand, introduces typical JPEG artifacts to digital humans, which can be easily quantified by these methods as well.
d) FS distortion proves to be the most challenging distortion to evaluate. This is due to the fact that the MOS distribution for FS distortion tends to be more centered, as shown in Fig. <ref>, indicating a more fine-grained quality level that is less distinctive. FS distortion primarily causes digital humans to exhibit more geometric characteristics, which may lead to small differences in NSS reflected by the projections and result in the poor performance of NIQE and IL-NIQE.
Despite the less competitive performance of the proposed method in handling FS distortion, it significantly advances the performance of NR methods in general.
§.§ Ablation Study
In this section, we present an analysis of the effects of different quality measures: Q_A, Q_N, and Q_G, on the experimental performance. The combinations of these quality measures are tested, and the results are summarized in Table <ref>. Throughout the experiments, we maintain the default experimental setup.
Table <ref> clearly demonstrates that among the single quality measures, Q_A achieves the highest performance. This finding indicates a strong correlation between quality-aware semantic affinity and the visual quality of digital humans. It suggests that considering the quality of semantic representations is crucial for accurately assessing the visual fidelity of digital human models.
Furthermore, excluding any of the three quality measures leads to a drop in performance compared to utilizing all quality measures together. This observation implies that each quality measure contributes significantly to the final results. The effectiveness of the proposed framework is thereby validated by the consistent performance improvements achieved when all quality measures are incorporated.
§.§ Statistical Test
To further analyze the performance of the proposed method, we conduct a statistical test in this section. We follow the same experimental setup as in <cit.> and compare the predicted quality scores with the subjective ratings. All possible pairs of models are tested and the results are listed in Fig. <ref>.
Our method demonstrates remarkable superiority over 12 zero-shot methods and 5 supervised methods when compared on the SJTU-H3D database. On the DHHQA database, our method exhibits substantial outperformance compared to 9 zero-shot methods and 3 supervised methods.
§ CONCLUSION
The increasing applications of digital humans across various domains have highlighted the need for comprehensive quality assessment studies. However, the limited availability of comprehensive digital human quality assessment (DHQA) databases has posed challenges in this area. To address this gap, we have introduced the SJTU-H3D subjective quality assessment database, specifically designed for full-body digital humans. This database consists of 40 high-quality reference digital humans and 1,120 labeled distorted counterparts created with seven types of distortions.
Nonetheless, the scarcity of suitable DHQA databases remains a hindrance to the development of data-driven methods. To overcome this limitation and enhance generalization capabilities, we propose a zero-shot DHQA approach that focuses on no-reference (NR) scenarios. Our approach leverages semantic and distortion features obtained from projections, as well as geometry features derived from the mesh structure of digital humans.
The proposed DHQI not only serves as a robust baseline for DHQA tasks but also facilitates advancements in the field. We hope our work can contribute to the establishment of effective evaluation frameworks and methodologies for digital humans, enabling their widespread application in diverse domains.
|
http://arxiv.org/abs/2307.00635v1
|
20230702184303
|
Analyzing Lack of Concordance Between the Proteome and Transcriptome in Paired scRNA-Seq and Multiplexed Spatial Proteomics
|
[
"Jai Prakash Veerla",
"Jillur Rahman Saurav",
"Michael Robben",
"Jacob M Luber"
] |
q-bio.TO
|
[
"q-bio.TO",
"q-bio.GN"
] |
Analyzing Lack of Concordance Between the Proteome and Transcriptome in Paired scRNA-Seq and Multiplexed Spatial Proteomics
Jai Prakash Veerla12, Jillur Rahman Saurav12, Michael Robben12, Jacob M. Luber12
1Department of Computer Science, University of Texas at Arlington
2 Multi-Interprofessional Center for Health Informatics, University of Texas at Arlington
Email: jxv6663@mavs.uta.edu, {mdjillurrahman.saurav, michael.robben, jacob.luber}@uta.edu
August 1, 2023
In this study, we analyze discordance between the transcriptome and proteome using paired scRNA-Seq and multiplexed spatial proteomics data from HuBMAP. Our findings highlight persistent transcripts in key immune markers, including CD45-RO, Ki67, CD45, CD20, and HLA-DR. CD45-RO is consistently expressed in memory T cells, while Ki67, associated with cell proliferation, also displays sustained expression. Furthermore, HLA-DR, part of the MHC class II molecules, demonstrates continuous expression, possibly crucial for APCs to trigger an effective immune response. This investigation provides novel insights into the complexity of gene expression regulation and protein function.
Transcriptional Bursting, Spatial Proteomics, scRNA-Seq
§ INTRODUCTION
Pioneering advancements in multi-omics technologies have revolutionized our capacity to scrutinize gene expression at the cellular level <cit.>. Through the integration of scRNA-Seq and multiplexed imaging like CO-Detection by indEXing (CODEX), we are now able to track both transcripts and corresponding protein expressions. However, inconsistencies have been observed between spatial proteomics and scRNA-Seq data, potentially arising due to batch or technical effects during data collection.
Despite considerable progress in scRNA-Seq technologies, data generated often contains substantial noise, with significant dropout events leading to undetected transcripts due to either biological or technical issues. This typically results in an overrepresentation of zero values, necessitating the use of zero-inflation statistical models <cit.>. Two primary sources of these zero values exist: technical zeros, when a gene is expressed but remains undetected, and biological zeros, occurring when a gene is simply not expressed in a cell.
By visualizing and measuring the activity of various gene expressions in CODEX images, compared with scRNA-Seq count matrix data from HuBMAP, we aim to explore the role of batch and technical effects on transcriptional bursting. Our preliminary findings indicate certain genes showing no expression in scRNA-Seq data, yet present in CODEX images, and others where transcripts are overexpressed relative to proteins. This hints at the potential to measure transcriptional bursting across platforms such as scRNA-Seq and spatial proteomics, using paired scRNA-Seq SALMON data and CODEX imaging data from HuBMAP across multiple patient samples.
§ METHODS
We began our investigation by obtaining both transcriptomic and proteomic data from HuBMAP, specifically, paired scRNA-Seq (Salmon) <cit.> and CODEX datasets <cit.> performed on tissue from the same donors. This resulted in the acquisition of four paired datasets collected by the University of Florida from organs including the spleen and thymus.
The scRNA-Seq (Salmon) dataset was processed using the "vpolo" tool to retrieve the gene expression matrix, which is characterized by rows of cells and columns of genes, with values representing the gene expression of each cell. Given the zero-inflated nature of scRNA-Seq data, we noted a significant number of zeros in the matrix.
For proteomic data analysis, we selected a subset of protein markers used in CODEX corresponding to the genes in our gene expression matrix. We utilized CytoKit <cit.> for image segmentation to identify protein marker expression. Following the acquisition of both the gene expression matrix from scRNA-Seq and protein marker expression from CODEX, we conducted comparative visualizations. All code is available at: https://github.com/jacobluber/TranscriptionalBursting.
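As a rough illustration of this comparison (a sketch under assumed inputs: the file names and the marker-to-gene re-keying are ours for illustration, not HuBMAP identifiers), per-gene dropout rates from the expression matrix can be set against mean CODEX marker intensities:

import pandas as pd

# Sketch only: `expr` is the cells-by-genes scRNA-Seq matrix; `codex` holds
# mean per-cell intensities of segmented CODEX markers, re-keyed by gene symbol.
expr = pd.read_csv("salmon_gene_expression.csv", index_col=0)
codex = pd.read_csv("codex_marker_intensity.csv", index_col=0)["mean_intensity"]

markers = ["PTPRC", "MKI67", "MS4A1", "HLA-DRA"]   # CD45, Ki67, CD20, HLA-DR
dropout = (expr[markers] == 0).mean()              # zero fraction per gene
summary = pd.DataFrame({"transcript_dropout": dropout,
                        "protein_intensity": codex.reindex(markers)})
print(summary)  # strong protein signal despite transcript zeros = discordance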
§ RESULTS
Several immune cell surface markers, such as CD45-RO, Ki67, CD45, and CD20, showed persistent levels of transcript expression (Fig. <ref>). These markers play crucial roles in immune function and may require persistent transcripts to fulfill their functions effectively (Fig. <ref>). For example, CD45-RO, an isoform of CD45, is typically expressed on memory T cells. Memory T cells, which have previously encountered an antigen, can respond more quickly upon subsequent encounters with the same antigen. The persistent expression of CD45-RO is necessary to maintain their memory function.
Ki67, another marker that displayed sustained expression (Fig. <ref>), is a nuclear protein associated with cellular proliferation. In immune cells, especially lymphocytes, Ki67 is used as a marker for cell division and growth. Continuous transcript expression of Ki67 ensures that rapidly dividing cells can maintain their proliferation.
We also observed sustained expression of HLA-DR (Fig. <ref>, Fig. <ref>), which is part of the Major Histocompatibility Complex (MHC) class II molecules. Antigen-presenting cells (APCs) present antigens to CD4+ T cells, initiating an immune response. To carry out this function effectively, these cells may need to constantly express HLA-DR on their surface, requiring stable transcripts for HLA-DR.
The concept of MHC complex segregation in the context of immune responses involves the presentation of MHC molecules loaded with foreign peptides on the cell surface to be recognized by T cells. The segregation of MHC complexes may increase the diversity of antigens presented, thereby enhancing the likelihood of triggering an appropriate immune response. Therefore, the presence of persistent transcripts for MHC molecules enables immune cells to respond quickly to pathogens by ensuring the continuous availability of proteins required for immune responses.
§ CONCLUSION
As a future direction, combining multiplexed imaging techniques such as CODEX with single-cell RNA sequencing (scRNA-Seq) could provide a more detailed understanding of immune cell dynamics. Pseudotime algorithms, which infer dynamic changes of single cells along a hypothetical timeline, would allow us to track temporal changes in gene expression and visualize transcriptional bursting events. This approach could reveal how immune response is regulated at the single-cell level, highlighting key temporal changes in immune cell markers like CD45-RO, Ki67, CD45, CD20, and HLA-DR. Consequently, these integrated techniques have the potential to revolutionize our understanding of immune system regulation and provide insights for therapeutic interventions.
§ ACKNOWLEDGMENT
This work was supported by a University of Texas System Rising STARs Award (J.M.L) and the CPRIT First Time Faculty Award (J.M.L)
|
http://arxiv.org/abs/2307.02946v1
|
20230706123132
|
Finding Favourite Tuples on Data Streams with Provably Few Comparisons
|
[
"Guangyi Zhang",
"Nikolaj Tatti",
"Aristides Gionis"
] |
cs.DB
|
[
"cs.DB",
"cs.DS"
] |
Finding Favourite Tuples on Data Streams with Provably Few Comparisons
This work was done while the author was with KTH Royal Institute of Technology.
Shenzhen Institute of Computing Sciences
Shenzhen
China
zhangguangyi@sics.ac.cn
HIIT, University of Helsinki
Helsinki
Finland
nikolaj.tatti@helsinki.fi
KTH Royal Institute of Technology
Stockholm
Sweden
argioni@kth.se
One of the most fundamental tasks in data science is to assist
a user with unknown preferences in finding high-utility tuples within a large database.
To accurately elicit the unknown user preferences,
a widely-adopted way is by asking the user to compare pairs of tuples.
In this paper, we study the problem of identifying one or more high-utility tuples
by adaptively receiving user input on a minimum number of pairwise comparisons.
We devise a single-pass streaming algorithm,
which processes each tuple in the stream at most once,
while ensuring that the memory size and the number of requested comparisons
are in the worst case logarithmic in n,
where n is the number of all tuples.
An important variant of the problem,
which can help to reduce human error in comparisons,
is to allow users to declare ties when confronted with pairs of tuples of nearly equal utility.
We show that the theoretical guarantees of our method can be maintained
for this important problem variant.
In addition, we show how to enhance existing pruning techniques in the literature
by leveraging powerful tools from mathematical programming.
Finally, we systematically evaluate all proposed algorithms over both synthetic and
real-life datasets,
examine their scalability, and
demonstrate their superior performance over existing methods.
[500]Information systems Users and interactive retrieval
[500]Theory of computation Database theory
[300]Theory of computation Active learning
Aristides Gionis
August 1, 2023
====================
§ PROOFS FOR SECTION <REF>
The proof is similar to that of <ref>,
except that we need a new proof for the key <ref>,
since in the presence of ties, we may not be able to totally sort a sample S.
Instead, we show that a partially sorted set S of a sufficient size can also be effective in pruning.
From now on, we treat the sample S as a sequence instead of a set,
as a different arrival order of S may result in a different filter by <ref>.
Let S ⊆ be a sequence of length 16.
Let be the groups constructed by <ref>.
Under <ref>, we have
|{∈ S: - }| ≥3/4 |S|,
where =,
is the largest size of a pairwise -similar subset of ,
and - are the groups with removed from its group.
[Proof of <ref>]
Note that by the definition of ,
for any particular tuple ∈ S,
there are at most 2(-1) tuples that are -similar with tuple .
Thus, must contain at least 8 groups, and
we split all groups in into two parts, those with an odd index and those with an even index.
In each part, we can extract a totally sorted list L of size at least , by picking exactly one tuple from each group.
We remove one tuple ∈ L from S such that L -, whose existence is guaranteed by <ref>.
<ref> guarantees that -.
We repeatedly do so until less than groups remain in each part,
which means that the number of remaining tuples is at most 2 in each part.
As a result, we are able to remove at least 16 - 4 tuples,
concluding the claim.
Although the above lemma appears similar to <ref>,
a crucial difference is that the set of prunable tuples in S now depends on the arrival order of S,
which causes non-trivial technical challenges in the analysis.
A critical observation that enables our analysis is the following result.
Fix a sequence S of size 16. Then there exist at least 1/4 |S| tuples in S that satisfy
S - .
[Proof of <ref>]
Let be the groups constructed by <ref>.
Write
S' = {∈ S: - }.
By <ref>, we know that |S'| ≥3/4 |S|.
For an arbitrary tuple ∈ S', suppose is assigned to a group G ∈.
We call a tuple good if |G| = 1 or is not a representative in R in <ref>.
Let ' be the groups constructed by <ref> using S -. If is good,
then ' = -.
Therefore, for a good tuple we always have
S - .
By definition, it is easy to see that there are at most |S|/2 tuples in S that are not good,
proving the lemma.
Denote by (S) the set of tuples that can be pruned by S, that is,
(S) = {∈^d: S }.
We now prove a similar lemma to <ref> by a generalized symmetrization argument over sequences.
Given a set of tuples , and
a random sequence S
of at least 16
tuples from ,
we have
[ |(S) ∩| ]
≥1/4 ||,
where =,
and is the largest size of a pairwise -similar subset of .
Moreover, the expectation is taken over S.
[Proof of <ref>]
Let be the last tuple added into S. Write T = S -.
Given T, the distribution of is a uniform distribution from ∖ T.
Let be a random sample from X.
Since T ⊆(T), we have
[ T | T ] ≥ [ T | T ].
Then,
_S [ |(S) ∩|/||]
= _S [ [ S | S ] ]
= _S [ [ T + | S ] ]
≥_S [ [ T | S ] ]
= _T [ [ T | T ] ]
≥_T [ [ T | T ] ]
= _S [ [ S - | S ] ].
Fix S, let ∈ S be a uniformly random tuple in S, and we have
_S [ [ S - | S ] ]
= _S [ [ S - | S ] ]
≥ 1/4,
where the last step is by <ref>, and the first step is due to double counting,
as every sequence S appears |S| times in the right-hand side,
completing the proof.
The proof is similar to <ref> on a high level, and is deferred to Appendix.
[Proof of <ref>]
The proof is similar to <ref> on a high level.
We only elaborate on their differences.
We first prove the guarantee on the regret.
If the optimal tuple ^* is in the pool once the algorithm is done, then the regret is at most .
If ^* is not in the pool, then
the proof of <ref> shows that there is in one of the sample, say S,
that yields a regret of /c.
The top representative of that sample yields /c + regret. Finally, the final top tuple
yields /c + 2 regret.
Next, we upper bound the size of every sample and the number of samples similarly to
the proof of <ref>.
We require every sample to prune at least 1/8 fraction of the remaining tuples instead of 1/2,
which leads to a demand for log_8/7 () samples.
The total failure probability is bounded by 2 e^-2 /16^2≤ 1/.
Consequently, with probability at least 1-1/,
we will use at most
log_8/7 () sample sets, each of size 16.
[more details]
For any sample S with at least 16 samples and any subset ⊆,
let ' = (S) ∩ and by <ref> we have
[ |'| ] ≥1/4 ||.
In particular, let is a random subset of and we have
[|'|] ≥1/4 ||.
Then,
[ |'| < 3/16 || ]
= [ |'| < 1/4 || - 1/16 ||]
≤[ |'| < [|'| - 1/16] ]
≤ e^-2 /16^2,
where the last step invokes <ref>.
Since there can be at most samples,
the probability that any sample fails to pass the pool test is upper bounded by e^-2 /16^2.
We continue to upper bound the number of sample sets.
At most log_8/7() sample sets suffice if every sample can prune at least 1/8 fraction of the remaining tuples.
Fix an arbitrary sample S, and let to be the set of remaining tuples.
The pool is a random sample from of size . Thus,
[|'|] / = |'| / ||. Consequently, if |'| < ||/8, then [|'|] < /8
and
[ |'| ≥3/16 || ]
≤[ |'| ≥[|'|] + 1/16]
≤ e^-2 /16^2.
Similar to the above,
the probability that any bad sample passes the test is upper bounded by e^-2 /16^2.
Building one filter requires at most (16log(16)) comparisons, because
sorting an new tuple within R by binary search costs at most (log(16)) comparisons.
Finally, finding the best tuple among all filters and the pool requires additional + log_8/7 () comparisons.
§ IMPROVING BASELINE FILTERS
In this section,
we improve existing filters by <cit.>,
by using linear and quadratic programs.
Previously, their filters rely on explicit computation of convex hulls,
which is feasible only in very low dimension.
For example, the convex hull size, and consequently
the running time of these existing techniques,
have an exponential dependence on d <cit.>.
§.§ Improving constrained utility space filter
One of the most natural strategies is to iteratively compare a pair of random tuples.
The feasible space for the utility vector is constrained by the list of pairs
A = {a_i} that have been compared,
where a_i = (,) such that () < ().
Note that every pair of tuples ,∈ forms a halfspace in ^d, i.e.,
= {∈^d: ^T(-) < 0 }.
Specifically, the unknown ∈ is contained in the intersection U of a set of halfspaces, one by each pair.
<cit.> propose to prune a tuple if
for every possible ∈ U there exists a tuple in some pair of A such that () ≥().
They first compute all extreme points of U, and then check if the condition holds for every extreme point.
However, this approach is highly inefficient,
as potentially there is an exponential number of extreme points.
Instead, we propose to test the pruning condition by asking to find
a vector that satisfies LP (<ref>).
If there is no such vector, we prune .
This test can be done with a linear program (LP).
Note that the test is stronger than that by <cit.> as it has been extended to handle ϵ-regret.
We claim that a given tuple can be safely pruned if there is no vector satisfying LP (<ref>).
[Proof of <ref>]
Let be the utility vector.
The assumptions imply
^T - ^T > 0
and ^T((1-) - ) > 0.
Next, note that, by definition, for every (,) ∈ A,
^T(-) > 0.
The inequalities in Eqs. (<ref>)–(<ref>) are all strict.
Consequently, we can scale so that the left-hand sides in Eqs. (<ref>)–(<ref>) are at least 1,
that is, there exists a solution to LP (<ref>).
Notice that the second set of constraints in LP (<ref>) (i.e., ^T(-) ≥ 1) is redundant provided () ≥ 0.
Actually, even if () < 0, the test only lets in that is slightly worse than the best tuple in A,
which is unlikely since () < 0.
Thus, in practice we recommend to omit the second set of constraints to speed up the test.
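A minimal sketch of this feasibility test, in our own encoding of the pruning condition (not necessarily the exact constraint set of LP (<ref>), and with the second set of constraints omitted as recommended):

import numpy as np
from scipy.optimize import linprog

def lp_prune(q, pairs):
    # pairs: list of (winner, loser) arrays from answered comparisons.
    # Keep q only if some utility vector u is consistent with every
    # comparison and strictly prefers q to every compared tuple.
    rows = [loser - winner for (winner, loser) in pairs]  # u^T(w - l) >= 1
    for winner, loser in pairs:                           # u^T(q - y) >= 1
        rows.append(winner - q)
        rows.append(loser - q)
    A_ub, b_ub = np.asarray(rows), -np.ones(len(rows))    # as A_ub u <= b_ub
    d = A_ub.shape[1]
    res = linprog(np.zeros(d), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * d, method="highs")
    return not res.success                                # infeasible => prune

The scaling of the right-hand sides to 1 follows the argument in the proof above: all inequalities are strict, so a consistent utility vector can be rescaled to satisfy them with margin 1.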
[sorted list vs. pairs]
It is not true that sorting a list of tuples is a much more efficient way to generate compared pairs, at least for the purpose of maintaining the constrained utility space.
There is a quadratic number of pairs in a list, but the “effective” pairs are those made by actual comparisons.
A filter for maintaining the constrained utility space is conceptually different from the filter proposed in <ref>.
A small utility space of is the key for such a filter to be effective,
while a filter in <ref> maintains no explicit knowledge about and mainly relies on the geometry of the tuples.
§.§ Improving conical hull of pairs filter
Another pruning strategy proposed by <cit.>
is the following.
Consider again a list of compared pairs A = {a_i},
where a_i = (,) such that () < (),
and consider a cone formed by all pairs in A.
A tuple can now be pruned
if there is another tuple kept by the algorithm,
such that
= + ∑_a_i = (,) ∈ Aβ_i (-)
such that β_i ≥ 0
for all i.
Instead of actually constructing all facets of the conical hull,
as done by <cit.>,
we propose to solve the following quadratic program (QP).
If the optimal value of the QP is at most , we prune .
[Proof of <ref>]
We only discuss the case = 0.
When > 0, for any pruned tuple, there exists a tuple in some pair of A that is at most a distance of away from it, and
thus A maintains at least one / c-regret tuple.
The first sum in QP (<ref>) can be seen as an aggregated tuple by convex combination,
whose utility is no better than the top tuple in A.
The second term only further decreases the utility of the first term.
Thus, if a tuple can be written as a sum of the first and second terms,
its utility is no better than the top tuple in A, and
can be pruned.
Similar to <ref>, a weaker but computationally more efficient filter can be used, by replacing the QP with an LP solver.
That is, we prune tuple if there is a solution to
= ∑_a_i = (,) ∈ Aν_i + ∑_a_i = (,) ∈ Aβ_i (-)
such that ∑_a_i = (,) ∈ Aν_i = 1 ,
and ν_i1, ν_i2, β_i ≥ 0 for all i.
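The LP variant can be sketched as follows (our own encoding; pairs are stored as (winner, loser) arrays, so the utility-decreasing directions are loser - winner):

import numpy as np
from scipy.optimize import linprog

def conical_lp_prune(q, pairs):
    # Prune q if it equals a convex combination of the compared tuples plus
    # a non-negative combination of the utility-decreasing directions.
    tuples = [t for pair in pairs for t in pair]          # 2m compared tuples
    dirs = [loser - winner for (winner, loser) in pairs]
    A = np.column_stack(tuples + dirs)                    # q = A [nu; beta]
    simplex_row = np.hstack([np.ones(len(tuples)), np.zeros(len(dirs))])
    A_eq = np.vstack([A, simplex_row])
    b_eq = np.concatenate([q, [1.0]])                     # ... and sum(nu) = 1
    k = A_eq.shape[1]
    res = linprog(np.zeros(k), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * k, method="highs")
    return res.success                                    # feasible => prune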
As a final remark about the above QP,
we compare its pruning power with that of the proposed filter (<ref>) in <ref>.
Obviously, its pruning power increases as the number of compared pairs in A increases.
For a fixed integer s,
a number of s comparisons result in s pairs for the above QP,
while in <ref>,
s comparisons can produce a sorted list of s/log(s) tuples and s/log(s) 2 pairs.
Hence, the above QP is less “comparison-efficient” than the one in <ref>.
Also, for a fixed number of compared pairs, the number of parameters is larger in QP (<ref>) than in the proposed filter,
which means that the QP is less efficient to solve.
These drawbacks are verified in our empirical study in the next section.
§ ADDITIONAL EXPERIMENTS
Datasets.
A summary of the real-life datasets we use for our evaluation can be found in <ref>.
The datasets contain a number of tuples up to 1M and a dimension up to 100.
Previous studies are mostly restricted to a smaller data size and a dimension size less than 10, and
a skyline operator is used to further reduce the data size in advance <cit.>.
Note that running a skyline operator itself is already a time-consuming operation, especially for high-dimension data <cit.>, and
becomes even more difficult to apply with limited memory size in the streaming setting.
Besides, a fundamental assumption made by a skyline operator, namely,
pre-defined preference of all attributes,
does not hold in our setting.
According to this assumption, it is required to know beforehand whether an attribute is better with a larger or smaller value.
This corresponds to knowing beforehand whether utility entry _i is positive or negative for the i-th attribute.
As we mentioned in <ref>, we do not make such an assumption about , and allow an arbitrary direction.
This is reasonable, as preference towards some attributes may be diverse among different people.
One example is the floor level in the housing market, where some may prefer a lower level, while others prefer higher.
Hence, we do not pre-process the data with a skyline operator.
Details on the data generation process and the actual synthesized data
can be found in our public Github repository.
Baselines.
We do not consider methods that synthesize fake tuples in pairwise comparisons, such as <cit.>.
Over a random-order stream, the algorithm by <cit.> is the same as the baseline when adapted to find the top tuple instead of a full ranking.
The UH-Simplex method <cit.> that simulates the simplex method by pairwise comparisons is not included,
as it is mainly of theoretical interest, designed for offline computation, and has been shown to have inferior empirical performance compared to other baselines.
We do not consider baselines that iteratively compare a greedy pair (among all 2 pairs) with respect to some measure of interest,
such as <cit.>,
because they are designed for offline computation and it is computationally prohibited to decide even the first greedy pair for the adopted datasets.
Misc.
We adopt the OSQP solver <cit.> and the HIGHS LP solver <cit.>.
The maximum number of iterations for the solvers is set to 4000,
which is the default value in the OSQP solver.
All experiments were carried out on a server equipped with 24 processors of AMD Opteron(tm)
Processor 6172 (2.1 GHz), 62GB RAM, running Linux 2.6.32-754.35.1.el6.x86_64.
The methods are implemented in Python 3.8.5.
§.§ Effect of parameters
Recall that in <ref>, a pool of tuples is used to test the performance of a new filter.
A new filter will be ready when it can prune at least a fraction of tuples in .
In <ref>, we run <ref> with a filter on a dataset of 10k tuples.
We fix one parameter (=100 or =0.5) and vary the other.
Parameter roughly specifies the expected fraction of tuples a filter should be able to prune.
A larger implies a need for fewer filters but a larger sample size for each filter.
It is beneficial to use a large , which leads to a smaller number of comparisons overall.
Nevertheless, as we will see shortly, such a large filter can be time-consuming to run, especially when the dimension d is large.
A larger value of improves the reliability of the testbed ,
which helps reduce the number of comparisons.
However, a larger also results in a longer time to run filters over the testbed .
§ INTRODUCTION
One of the most fundamental tasks in data science is to assist
a user with unknown preferences in finding high-utility tuples within a large database.
Such a task can be used, for example,
for finding relevant papers in scientific literature,
or recommending favorite movies to a user.
However, utility of tuples is highly personalized.
“One person's trash is another person's treasure,” as the saying goes.
Thus, a prerequisite to accomplishing this task is to efficiently and accurately
elicit user preferences.
It has long been known,
both from studies in psychology <cit.>
as well as from personal experience,
that humans are better at performing relative comparisons
than absolute assessments.
For instance,
it is typically easy for a user to select a favorite movie between two given movies,
while it is difficult to score the exact utility of a given movie.
This fact has been used in many applications,
such as classification <cit.>,
ranking <cit.>,
and clustering <cit.>.
In this paper we leverage the observation that humans are better at comparing
rather than scoring information items,
and use relative comparisons to facilitate preference learning and help users find relevant tuples
in an interactive fashion,
i.e., by adaptively asking users to compare pairs of tuples.
To cope with the issue of information overload,
it is usually not necessary to identify all relevant tuples for a user.
Instead, if there exists a small set of high-utility tuples in the database,
a sensible goal is to identify at least one high-utility tuple
by making a minimum number of comparisons.
In particular, assuming that a user acts as an oracle,
the number of requested comparisons,
which measures the efficiency of preference learning,
is known as query complexity.
More specifically, in this paper we focus on the following setting.
We consider a database consisting of tuples,
each represented as a point in ^d.
User preference is modeled by an unknown linear function
on the numerical attributes of the data tuples.
Namely, we assume that a user is associated with an unknown utility
vector ∈^d, and
the utility of a tuple ∈^d for that user is defined to be
() = ^T.
A tuple is considered to be of high-utility
if its utility is close to that of the best tuple,
or more precisely,
if compared to the best tuple its utility loss is bounded by
an fraction of the best utility,
(^*) - () ≤ (^*),
where ^* = max_∈() is the best tuple in .
We call the user-defined parameter the “regret” ratio,
a terminology used earlier in database literature <cit.>.
We demonstrate this setting with a concrete example below.
Every tuple being a point in ^3 represents a computer with three attributes:
price, CPU speed, and hard disk capacity.
It is reasonable to assume that the utility of a computer grows linearly in,
for example, the hard disk capacity.
Thus, a user may put a different weight on each attribute, as one entry in the utility vector ∈^3,
which measures its relative importance.
For the setting described above
with a linear utility function,
it is obvious that at most -1 comparisons suffice to find the best tuple,
by sequentially comparing the best tuple so far with the next tuple.
Surprisingly, despite the importance of this problem in many applications,
improvement over the naïve sequential strategy, in the worst case, has remained elusive.
A positive result has only been obtained in a very restricted case of two attributes,
i.e., a tuple is a point in ^2 <cit.>.
Other existing improvements rely on strong assumptions <cit.>,
for example, when every tuple is almost equally probable to be the best.
To the best of our knowledge,
we are the first to offer an improvement on the query complexity
that is logarithmic in , in the worst case.
We refer the reader to <ref> for a detailed comparison with existing work.
There exist heuristics in the literature that are shown to perform empirically better
than the naïve sequential strategy, in terms of the number of requested comparisons.
For example, a popular idea is to compare a carefully-chosen pair in each round of interaction with the user <cit.>.
However, these methods are computationally expensive, and require multiple passes over the whole set of tuples.
To illustrate this point, finding a “good” pair with respect to a given measure of interest can easily take (^2) time, as one has to go over all quadratically many candidate pairs.
Furthermore, while such heuristics may work well in practice,
they may require Ω() pairwise comparisons, in the worst case.
We also address the problem of finding a high-utility tuple reliably,
where we do not force a user to make a clear-cut decision
when confronted with two tuples that have nearly equal utility for the user.
In this way we can avoid error-prone decisions by a user.
Instead, we allow the user to simply declare a tie between the two tuples.
To our knowledge, this is the first paper that
considers a scenario of finding a high-utility tuple with ties and
provides theoretical guarantees to such a scenario.
We systematically evaluate all proposed algorithms over synthetic and real-life datasets,
we demonstrate their superior performance, and
we examine their scalability with respect to the data size , dimension d,
and parameter .
Our contributions in this paper are summarized as follows:
(i) We devise a single-pass streaming algorithm
that processes each tuple only once,
and finds a high-utility tuple by making adaptive pairwise comparisons;
(ii) The proposed algorithm requires a memory size and has query complexity
that are both logarithmic in , in the worst case,
where is the number of all tuples;
(iii) We show how to maintain the theoretical guarantee of our method,
even if ties are allowed when comparing tuples with nearly equal utility;
(iv) We offer significant improvement to existing pruning techniques in the literature,
by leveraging powerful tools from mathematical programming;
(v) We systematically evaluate all proposed algorithms over synthetic and
real-life datasets, and demonstrate their superior performance compared to existing methods.
The rest of the paper is organized as follows.
We formally define the problem in <ref>.
We discuss related work in <ref>.
Then, we describe the proposed algorithm in <ref>, and
its extension in <ref> when ties are allowed in a comparison.
Enhancement to existing techniques follows in <ref>.
Empirical evaluation is conducted in <ref>,
and we conclude in <ref>.
§ PROBLEM DEFINITION
In this section, we formally define the interactive regret minimization () problem.
The goal of the problem is to find a good tuple among all given tuples ⊆^d in a database.
The goodness, or utility, of a tuple is determined by an unknown utility vector ∈^d via the dot-product operation () = ^T.
However, we assume that we do not have the means to directly compute (),
for a given tuple .
Instead, we assume that we have access to
an oracle that can make comparisons between pairs of tuples:
given two tuples and the oracle will return the tuple with the higher utility.
These assumptions are meant to model users
who cannot quantify the utility of a given tuple on an absolute scale,
but can perform pairwise comparisons of tuples.
In practice, it is usually acceptable to find a sufficiently good tuple ' in , instead of the top one ^*.
The notion of “sufficiently good” is measured by the ratio in utility loss
(^*) - (')/(^*),
which is called “regret.”
This notion leads to the definition of the problem.
[Interactive Regret Minimization ()]
Given a set of tuples in a database ⊆^d,
an unknown utility vector ∈^d, and
a parameter ∈ [0,1],
find an -regret tuple ', such that
(^*) - (') ≤ (^*),
where () = ^T and ^* = max_∈().
In addition we aim at performing the minimum number of pairwise comparisons.
Problem <ref> is referred to as “interactive” due to the fact
that a tuple needs to be found via interactive queries to the oracle.
The parameter measures the regret.
When = 0, the problem requires to find the top tuple ^* with no regret.
We refer to this special case as interactive top tuple () problem.
For example, when tuples are in 1-dimension,
reduces to finding the maximum (or minimum) among a list of distinct numbers.
We illustrate the and problems with a concrete example in <ref>.
We elaborate on the example below.
The tuples of a given database are represented as points within a unit circle in ^2.
The unknown utility vector , which is drawn in orange, satisfies _2 = 1.
Thus, as the unique top tuple in , the red point has utility value 1.
The green points are feasible -regret tuples for the problem,
as they are within a distance from the red point,
along the direction of the utility vector .
Clearly, the definition for the problem is meaningful only when (^*) ≥ 0,
which is an assumption made in this paper.
Another important aspect of the problem is whether or not the oracle
will return a tie in any pairwise comparison.
In this paper, we study both scenarios.
In the first scenario, we assume that the oracle never returns a tie,
which implies that no two tuples in have the same utility.
We state our assumptions for the first (and, in this paper, default) scenario below.
We discuss how to relax this assumption for the second scenario in <ref>.
No two tuples in have the same utility.
Moreover, the best tuple ^* has non-negative utility, i.e., (^*) ≥ 0.
Without loss of generality, we further assume that _2 = 1 and
_2 ≤ 1, for all ∈, which can be easily achieved by scaling.
As a consequence of our assumptions, we have c = (^*) ≤ 1.
The proposed method in this paper essentially finds an /c-regret tuple,
which is feasible for the problem when c = 1.
Our solution still makes sense, i.e., it yields a relatively small regret /c, if c is not too small or a non-trivial lower bound of c can be estimated in advance.
On the other hand, if c is very small,
there exists no tuple in that can deliver satisfactory utility in the first place,
which means that searching for the top tuple itself is also less rewarding.
For simplicity of discussion, we assume that c = 1 throughout the paper.
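For concreteness, a tiny synthetic instance of this setting (our own toy code, not taken from any experiment) looks as follows: a hidden unit-norm utility vector, tuples in the unit ball, an oracle that only answers pairwise comparisons, and the regret check of Problem <ref>:

import numpy as np

rng = np.random.default_rng(0)
d, n, eps = 3, 1000, 0.05
u = rng.normal(size=d); u /= np.linalg.norm(u)            # hidden from algorithm
D = rng.normal(size=(n, d))
D /= np.maximum(1.0, np.linalg.norm(D, axis=1))[:, None]  # tuples in unit ball

def oracle(i, j):                     # the algorithm's only access to u
    return i if D[i] @ u > D[j] @ u else j

best = max(range(n), key=lambda i: D[i] @ u)
def is_eps_regret(i):                 # feasibility check of Problem <ref>
    return (D[best] - D[i]) @ u <= eps * (D[best] @ u)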
For all problems we study in this paper,
we focus on efficient algorithms under the following computational model.
An algorithm is a one-pass streaming algorithm if
its input is presented as a (random-order) sequence of tuples and is examined by the algorithm in a single pass.
Moreover, the algorithm has access to limited memory, generally logarithmic in the input size.
This model is particularly useful in the face of large datasets.
It is strictly more challenging than the traditional offline model,
where one is allowed to store all tuples and examine them with random access
or over multiple passes.
A random-order data stream is a natural assumption in many applications,
and it is required for our theoretical analysis.
In particular, this assumption will always be met in an offline model, where one can easily simulate a random stream of tuples.
Extending our results to streams with an arbitrary order of tuples
is a major open problem.
One last remark about the problem is the intrinsic dimension of the database .
Tuples in are explicitly represented by d variables, one for each dimension, and
d is called the ambient dimension.
The intrinsic dimension of is the number of variables that are needed in a minimal representation of .
More formally,
we say that has an intrinsic dimension of d' if
there exist d' orthonormal vectors b_1, …, b_d'∈^d such that
d' is minimal and
every tuple ∈ can be written as a linear combination of them.
It is common that the intrinsic dimension of realistic data is much smaller than its ambient dimension.
For example, images with thousands of pixels can be compressed into a low-dimensional representation with little loss.
The proposed method in this paper is able to adapt to the intrinsic dimension of without constructing its minimal representation explicitly.
In the rest of this section,
we review existing hardness results for the and problems.
Lower bounds.
By an information-theoretical argument, one can show that Ω(log) comparisons are necessary for the problem <cit.>.
Consider an instance in ^2, where tuples are points on the unit circle, and each tuple has the possibility to be the top tuple.
Every pairwise comparison as a binary question can eliminate at most half of the remaining possibilities.
The argument is similar to thinking about a binary decision tree, where each leaf represents a possibility.
It is similar to how one lower-bounds the number of comparisons for sorting a list of numbers.
Suppose any algorithm asks a sequence of k = (log) questions.
There always exists a sequence of answers such that more than one possibility remains after the k questions.
Say the first question has an answer a ∈{0,1}.
Let
_1 = {∈: agrees with a=1 } and _0 analogously.
Repeatedly choose an answer so that min{ |D_0|,|D_1| }≤ ||/2.
By letting d= and
= {_i} for i ∈ [d], where _i is a vector in the standard basis,
Ω(d) comparisons are necessary to solve the problem,
as a comparison between any two dimensions reveals no information about the rest dimensions.
Therefore, one can expect a general lower bound for the problem to somewhat depend on both d and log.
Thanks to the tolerance of regret in utility,
a refined lower bound Ω(d log (1/)) for the problem is given by <cit.>.
In theory, one could consider an -net of the database , instead of the entire ,
whose size is only Ω((1/)^d) and does not depend on .
Any tuple in is within a distance of from some tuple in the -net.
Indeed, the refined lower bound
Ω(log(1/)^d) = Ω(d log (1/)) is obtained
by lower bounding the cardinality of the -net of a d-sphere.
§ RELATED WORK
Interactive regret minimization.
A database system provides various operators that return a representative subset of tuples (i.e., points in ^d) to a user.
Traditional top-k operators <cit.> return the top-k tuples
according to an explicitly specified scoring function.
In the absence of a user utility vector for a linear scoring function,
the skyline operators <cit.> return a tuple if it has the potential to be the top tuple for at least one possible utility vector.
In the worst case, a skyline operator can return the entire dataset.
<cit.> introduce a novel k-regret operator that achieves a balance between the previous two problem settings,
by returning k tuples such that the maximum regret over all possible utility vectors is minimized.
<cit.> further minimize regret in an interactive fashion by making pairwise comparisons.
They prove an upper bound on the number of requested comparisons by using synthesized tuples for some comparisons.
In fact, their method learns approximately the underlying utility vector.
However, synthesized tuples are often not suitable for practical use.
<cit.> deal with a more general task of finding a full ranking of tuples.
By assuming that every possible ranking is equally probable,
they show that (d log) comparisons suffice to identify the full ranking in expectation.
Nevertheless, in the worst case, one cannot make such an assumption, and
their algorithm may require Ω(^2) comparisons for identifying a full ranking or Ω() comparisons for identifying the top tuple.
Another similar problem assumes a distribution over the utility vector without access to the embedding of the underlying metric space <cit.>.
The problem of combinatorial nearest neighbor search is also related,
where one is to find the top tuple as the nearest neighbor of a given tuple without access to the embedding <cit.>.
<cit.> observe that the problem is equivalent to a special linear program,
whose pivot step for the simplex method can be simulated by making a number of comparisons.
Thus, an immediate guarantee can be obtained by leveraging the fact that
(^1/d) pivot steps are needed in expectation for the simplex method <cit.>.
Here the expectation is taken over some distribution over .
Also in the special case when d=2, they develop an optimal binary search algorithm <cit.>.
<cit.> suggest letting a user sort a set of displayed tuples in each round of interaction,
but their approaches are similar to <cit.>, and do not use a sorted list the way we do.
There are other attempts to the problem that adaptively select a greedy pair of tuples with respect to some measure of interest.
<cit.> iteratively select a hyperplane (i.e., pair) whose normal vector is the most orthogonal to the current estimate of .
<cit.> maintain disjoint regions of over ^d, one for each tuple, where a tuple is the best if is located within its region.
Then, they iteratively select a hyperplane that separates the remaining regions as evenly as possible.
However, these greedy strategies are highly computationally expensive, and do not have any theoretical guarantee.
Compared to aforementioned existing work,
our proposed algorithm makes minimal assumptions, is scalable,
and enjoys the strongest worst-case guarantee.
It is worth mentioning that existing research often assumes that increasing any tuple attribute always improves utility,
by requiring ⊆_+^d and ∈_+^d <cit.>.
We do not make such an assumption in this paper.
Active learning.
The problem can be viewed as a special highly-imbalanced linear classification problem.
Consider a binary classification instance, where the top tuple is the only one with a positive label and the rest are all negative.
Such labeling is always realizable by a (non-homogeneous) linear hyperplane, e.g.,
= {∈^d: ^T = ^T^* - } for any sufficiently small ≥ 0.
Note that non-homogeneous can be replaced by a homogeneous one (i.e., without the offset term ) by lifting the tuples into ^d+1.
Active learning aims to improve sample complexity that is required for learning a classifier by adaptive labeling.
Active learning with a traditional labeling oracle has been extensively studied.
The above imbalanced problem instance happens to be a difficult case for active learning with a labeling oracle <cit.>.
We refer the reader to <cit.> for a detailed treatment.
Active learning with additional access to pairwise comparisons has been studied by <cit.>.
That is, one can use both labeling and comparison oracles.
Importantly, <cit.> introduce a notion of “inference dimension,” with which they design an algorithm to effectively infer unknown labels.
However, due to technical conditions, the inference technique is only useful for classification in low dimension (d ≤ 2) or special instances.
As one of our main contributions,
we are the first to show that the inference technique can be adapted for the problem.
<cit.> also utilize pairwise comparisons in active learning,
but their main goal is to reduce labeling queries and labeling noise
by first sorting all data points into a list and then locating the decision boundary via binary search.
Ranking with existing pairwise comparisons.
A different problem setting,
is to rank collection of tuples by aggregating a set of
(possibly incomplete and conflicting) pairwise comparisons,
instead of adaptively selecting which pair of tuples to compare.
This problem has been extensively studied in the literature within different abstractions.
From a combinatorial perspective, it is known as the feedback arc-set problem on tournaments, where the objective is to find a ranking by removing a minimum number of inconsistent comparisons <cit.>.
There also exist statistical approaches to find a consistent ranking, or the top-k tuples, by estimating underlying preference scores <cit.>.
In machine learning, the problem is known as “learning to rank” with pairwise preferences <cit.>,
where the aim is to find practical ways to fit and evaluate a ranking.
[X+Y sorting: https://en.wikipedia.org/wiki/X_%2B_Y_sorting]
The sorting approach we propose in this paper is also relevant to the X+Y sorting problem <cit.>.
Given a sample set S ⊆,
let X = {^T : ∈ S}, Y = -X, and
X+Y = { x+y : x ∈ X, y ∈ Y }.
Sorting X+Y reveals all pairwise comparisons in S.
It remains open if one can do this faster than (s^2 log(s)) where s = |S|.
§ FINDING A TUPLE: ORACLE WITH NO TIES
In this section, we present our single-pass streaming algorithm for the problem.
Our approach, presented in <ref>,
uses the concept of filters to prune sub-optimal tuples without the need of further comparisons.
<ref> is a general framework for managing filters,
while <ref> specifies a specific filter we propose.
As we will see in <ref> the framework can also be used for other filters.
The filter we propose relies on a remarkable inference technique introduced by <cit.>.
Note that the technique was originally developed for active learning in a classification task, and
its usage is restricted to low dimension (d ≤ 2) or special instances under technical conditions.
We adapt this technique to devise a provably effective filter for the problem.
In addition, we strengthen their technique with a high-probability guarantee and a generalized symmetrization argument.
The core idea
is to construct a filter from a small random sample of tuples.
It can be shown that the filter is able to identify
a large fraction of sub-optimal tuples in without further comparisons.
Fixing a specific type of filter with the above property,
<ref> iteratively constructs a new filter in a boosting fashion
to handle the remaining tuples.
Finally, one can show that, with high probability,
at most (log) such filters will be needed.
We proceed to elaborate on the mechanism of a filter.
The idea is to maintain a random sample S of tuples, and
sort them in order of their utility.
The total order of the tuples in S can be constructed by pairwise comparisons,
e.g., by insertion sort combined with binary search.
Suppose that S = {_1,…,_}, where _1 has the best utility.
Notice that ^T(_j+1 - _j) ≤ 0 for any j.
Thus, a sufficient condition for an arbitrary tuple to be worse than _1 is
= _1 + ∑_j=1^-1α_j (_j+1 - _j)
such that α_j ≥ 0 for all j.
This condition amounts to verifying whether lies within a cone with apex _1,
along direction .
The parameters α_j can be efficiently computed by a standard Linear Program (LP) solver.
If Condition (<ref>) can be satisfied for ,
then can be pruned for further consideration.
Actually, it is possible to act more aggressively and prune tuples slightly better than _1,
as long as it is assured that not all feasible tuples will be pruned.
Specifically, we can remove any that deviates from the aforementioned cone within a distance of , that is,
min_α - _1 - ∑_j=1^-1α_j (_j+1 - _j) _2 ≤ s.t. α_j ≥ 0 for all j .
To test whether a given tuple satisfies the above condition,
one needs to search for parameters α_j over [0,∞) for all j.
The search can be implemented as an instance of constrained least squares,
which can be efficiently solved via a quadratic program (QP).
Given a sorted sample S where _1 is the top tuple, we write
S
if a tuple can be approximately represented by vectors in S in a form of <ref>.
An example that illustrates the mechanism of a filter is displayed in <ref>,
on which we elaborate below.
In <ref>,
a random sample S = {_1,_2,_3} of three blue points is collected and sorted,
where _1 has the highest utility.
This means that
(_j+1) - (_j) = ^T (_j+1 - _j) < 0, for any j ∈{1,2}.
Compared to the point _1,
a new point in the form of
= _1 + ∑_j ∈{1,2}α_j (_j+1 - _j) with α_j ≥ 0
can only have a lower utility than (_1),
since
() = ^T [ _1 + ∑_j ∈{1,2}α_j (_j+1 - _j) ] ≤(_1).
Thus, such a point can be safely pruned.
Geometrically, all such prunable points form a cone with apex _1,
as highlighted in the blue region in <ref>.
According to <ref>,
any point that is sufficiently close to (within a distance of ) the blue cone can also be pruned.
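In code, the relaxed test above is a non-negative least-squares problem; a minimal sketch (our naming, assuming the sorted sample holds at least two tuples as numpy arrays):

import numpy as np
from scipy.optimize import nnls

def can_prune(x, sorted_sample, eps):
    # sorted_sample: tuples sorted with the highest-utility tuple first.
    S = np.asarray(sorted_sample)
    D = (S[1:] - S[:-1]).T            # columns s_{j+1} - s_j, utility-decreasing
    b = x - S[0]                      # deviation from the apex s_1
    _, residual = nnls(D, b)          # min_{alpha >= 0} ||D alpha - b||_2
    return residual <= eps            # within distance eps of the cone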
Upon a random-order stream of tuples, <ref> collect a pool of initial tuples as a testbed for filter performance.
Then, subsequent tuples are gradually added into the first sample set S_1,
until a filter based on S_1 can prune at least a = 5/8 fraction of .
Then, S_1 is ready, and is used to prune tuples in the pool and future tuples over the stream.
Future tuples that survive the filter formed by S_1 will be gradually added into the pool and a second sample set S_2, and the process is repeated iteratively.
Finally, the algorithm returns the best tuple among all samples.
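A hedged sketch of this loop under our own naming (better(u, v) stands for one oracle query, can_prune is the cone test above, and bookkeeping such as the sample-size cap from the analysis is simplified):

def insert_sorted(lst, x, better):
    lo, hi = 0, len(lst)              # binary insertion: O(log |lst|) queries
    while lo < hi:
        mid = (lo + hi) // 2
        if better(x, lst[mid]): hi = mid
        else: lo = mid + 1
    lst.insert(lo, x)

def stream_top_tuple(stream, better, eps, pool_size, alpha=5/8):
    pool, filters, sample = [], [], []
    for x in stream:
        if any(can_prune(x, f, eps) for f in filters):
            continue                          # pruned for free, no queries
        if len(pool) < pool_size:
            pool.append(x); continue          # testbed of surviving tuples
        insert_sorted(sample, x, better)
        if len(sample) >= 2:
            keep = [not can_prune(p, sample, eps) for p in pool]
            if keep.count(False) >= alpha * len(pool):
                filters.append(sample)        # sample passed the pool test
                pool = [p for p, k in zip(pool, keep) if k]
                sample = []
    cands = [f[0] for f in filters] + sample + pool
    best = cands[0]                           # final tournament via the oracle
    for c in cands[1:]:
        if better(c, best): best = c
    return best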
The following theorem states our main result about <ref>.
Assume ϵ > 0 and let = || be the size of data.
Let c = (^*) ∈ [0,1] be the utility of the best tuple ^*.
Under <ref>,
with a pool size = 64 ln 2 and = 5/8,
<ref> return an /c-regret tuple for the problem.
Let =, where
d is the intrinsic dimension of .
Then, with probability at least 1-1/, at most
(log() 4t log (4t)) +
comparisons
are made.
Moreover,
with probability at least 1-1/,
at most log(n) sample sets will be needed,
each of size 4,
where =,
d is the intrinsic dimension of .
Thus,
with probability at least 1-1/,
at most (log(n)) tuples will be kept during the execution, and
at most 2log(n) g(4) comparisons will be made,
where g(a)=alog(a).
The memory size, i.e., the number of tuples that will be kept by the algorithm during the execution,
is (log() 4t),
which is also logarithmic in .
In fact, <ref> constitute an anytime algorithm,
in the sense that the data stream can be stopped anytime,
while the algorithm is still able to return a feasible solution among all tuples that have arrived so far.
Under <ref>,
the data stream may terminate at any moment during the execution of <ref>, and
an /c-regret tuple will be returned for the problem among all tuples that have arrived so far.
Proofs of <ref> are deferred to <ref>.
In the rest of this section we prove <ref>.
For simplicity of discussion, we assume that c = 1.
§.§ Proofs
<cit.> proved a powerful local lemma, which states that
among a sufficiently large set of vectors from the unit d-ball ^d,
there must exist some vector that can be approximately represented as a special non-negative linear combination of others.
Given _1,…,_∈^d, for any > 0, if ≥,
then there exists a ∈ [] such that
_a = _1 + ∑_j = 1^a-2α_j (_j+1 - _j) + ,
where _2 ≤ and α_j ∈{0,1,2}.
Let S = {_1, …, _}.
<ref> can be easily extended to hold for the intrinsic dimension of S,
by first applying <ref> to the minimal representation _1,…,_∈^d' of S.
Suppose
_a = _1 + ∑_j = 1^a-2α_j (_j+1 - _j) + ',
where '_2 ≤.
By definition, we know that
_i = M _i where M ∈^d,d' and columns of M are orthonormal.
Then, we have _2 ≤,
where = M'.
In <ref>, we have S-_a _a,
where S- is a shorthand for S ∖{}.
Note that this is exactly the condition we use in Step <ref> in <ref> for pruning.
Denote by (S) the set of all such pruned tuples, i.e.,
(S) = {∈^d: S }.
Given any set S of size 4,
at least 3/4 fraction of S can be pruned by other tuples in S, by repeatedly applying <ref>.
Given a sorted set S of size at least 4, where =, we have
|{∈ S: S-}| ≥3/4 |S|.
since |S| ≥ 4,
we can apply <ref> repeatedly to S until only entries remain.
Importantly, {∈ S: S-} do not depend on the arrival order of tuples in S.
As a consequence of <ref>,
the same fraction of current tuples can be pruned by a random sample set S of a sufficient size in expectation.
Given a set of tuples , and
a random sample set S ⊆ of size 4 where =,
we have
[ |(S) ∩| ] ≥3/4 ||,
where the expectation is taken over S.
[Proof of <ref>]
The proof is by a symmetrization argument introduced by <cit.>.
Let be the last tuple added into S. Write T = S -.
Given T, the distribution of is a uniform distribution from ∖ T.
Let be a random sample from X.
Since T ⊆(T), we have
[ T | T ] ≥[ T | T ].
Then,
_S [ |(S) ∩|/||]
= _S[ [ S | S ] ]
= _S[ [ T + | S ] ]
≥_S[ [ T | S ] ]
= _T[ [ T | T ] ]
≥_T[ [ T | T ] ]
= _S[ [ S - | S ] ].
In order to bound the right-hand side, notice that
when conditioned on S, every permutation of S is equally probable over a random-order stream,
which implies that every tuple in S is equally probable to be the last tuple .
Hence, we have [ S - | S ] ≥ 3/4 by <ref>,
proving immediately the claim.
It is important to note that the above proof does not imply that
for every S, when conditioning on it, one can achieve
[ S | S ] ≥ 3/4,
that is, that every S can prune at least 3/4 of , which is too strong to be true.
After we replace by , the 3/4 is obtained by taking the average over , which is part of the outer expectation over S.
Another important issue to handle is to ensure that our pruning strategy will not discard all feasible tuples.
This is prevented by keeping track of the best tuple in any sample set so far, and guaranteed by <ref>.
Denote by all tuples that have arrived so far.
Suppose ^* is the best tuple among .
Tuple ^* is either collected into our sample sets,
or pruned by some sample set S.
In the former case, our statement is trivially true.
In the latter case, suppose S = {_1,…}, where _1 is the best tuple in S.
If _1 is feasible, then is feasible as well, as it is at least as good as _1.
If _1 is infeasible, i.e., (^*) - (_1) >, then ^* cannot be pruned by S by design, a contradiction.
This completes the proof.
Before proving <ref>,
we briefly summarize the hypergeometric tail inequality below <cit.>.
Draw n random balls without replacement from a universe of N red and blue balls, and
let i be a random variable of the number of red balls that are drawn.
Then, for any t > 0, we have
[i ≥[i] + t n] ≤ e^-2t^2n,
and
[i ≤[i] - t n] ≤ e^-2t^2n.
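A quick numeric sanity check of the first tail bound (our own snippet; scipy's hypergeom is parameterized by population size, number of red balls, and number of draws):

import math
from scipy.stats import hypergeom

N, R, n, t = 1000, 400, 50, 0.1        # population, red balls, draws, slack
mean = n * R / N                       # E[i] = 20 here
tail = hypergeom.sf(math.ceil(mean + t * n) - 1, N, R, n)   # P[i >= E[i] + t*n]
print(tail, "<=", math.exp(-2 * t**2 * n))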
The feasibility of the returned tuple is due to <ref>.
In the rest of the proof, we upper bound the size of every sample and the number of samples we keep in the sequence .
For any sample S with at least 4 samples and any subset ⊆,
let ' = (S) ∩ and by <ref> we have
[ |'| ] ≥3/4 ||.
In particular, let = and we have
[|'|] ≥3/4 || and || =.
Then,
[ |'| < 5/8 || ]
= [ |'| < 3/4 || - 1/8]
≤[ |'| < [|'| - 1/8] ]
≤ e^-2 /8^2,
where the last step invokes <ref>.
Since there can be at most samples,
the probability that any sample fails to pass the pool test is upper bounded by e^-2 /8^2.
We continue to upper bound the number of sample sets.
At most log() sample sets suffice if every sample can prune at least half of the remaining tuples.
Fix an arbitrary sample S, and let be the set of remaining tuples.
The pool is a random sample from of size . Thus,
[|'|] / = |'| / ||. Consequently, if |'| < ||/2, then [|'|] < /2
and
[ |'| ≥5/8 || ]
≤[ |'| ≥[|'|] + 1/8]
≤ e^-2 /8^2.
Similar to the above,
the probability that any bad sample passes the test is upper bounded by e^-2 /8^2.
Combining the two cases above,
the total failure probability is 2 e^-2 /8^2≤ 1/.
Hence, with probability at least 1-1/,
it is sufficient to use log() sample sets, each of size 4.
Keeping one sample set requires 4log(4) comparisons.
Finally, finding the best tuple among all filters and the pool requires additional + log() comparisons.
§ FINDING A TUPLE: ORACLE WITH TIES
In this section,
we first introduce a natural notion of uncomparable pairs to avoid error-prone comparisons, and then
we show how this new setting affects our algorithms.
It is clearly more difficult for a user to distinguish a pair of tuples with nearly equal utility.
Thus, it is reasonable to not force the user to make a choice in the face of a close pair, and
allow the user to simply declare the comparison a tie instead.
We make this intuition formal below.
Two tuples ,∈ D are -similar if
|() - ()| ≤,
for some fixed value .
We write if they are uncomparable.
A query about a -similar pair to the oracle will be answered with a tie.
Besides, as before, we assume that the best tuple ^* has non-negative utility,
(^*) ≥ 0.
Typically, the value is fixed by nature, unknown to us, and cannot be controlled.
Note that when is sufficiently small, we recover the previous case in <ref> where every pair is comparable under <ref>.
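For concreteness, the following minimal Python sketch simulates such an oracle; the linear utility vector `u`, the tolerance `eps`, and the function name are our own illustrative choices, not part of the paper's implementation.

```python
import numpy as np

def tie_oracle(u, x, y, eps):
    """Compare tuples x and y under linear utility u.

    Returns +1 if x is strictly better, -1 if y is strictly better,
    and 0 (a tie) when |u.(x - y)| <= eps, i.e., the pair is eps-similar.
    """
    diff = float(np.dot(u, x) - np.dot(u, y))
    if abs(diff) <= eps:
        return 0
    return 1 if diff > 0 else -1
```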
By allowing the user to not make a clear-cut comparison for a -similar pair, one can no longer be guaranteed total sorting.
Indeed, it could be that every pair in is -similar.
In <ref>, we provide a filter to handle ties under <ref>.
We maintain a totally sorted subset R of representative tuples in a sample set S.
For each representative ∈ R, we create a group G_.
Upon the arrival of a new tuple , we sort into R if no tie is encountered.
Otherwise, we encounter a tie with a tuple ∈ R such that , and
we add into a group G_.
In the end, the best tuple in R will be returned.
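One possible rendering of this construction in Python is sketched below, assuming a tie-aware comparison function like the one above; a tie not encountered on the binary-search path simply results in a new representative, matching the streaming construction. Names and the exact bookkeeping are ours.

```python
def insert_tuple(R, groups, t, compare):
    """Insert tuple t into the representative list R (sorted, best first).

    `compare(a, b)` returns +1/-1/0 like `tie_oracle` above. If the binary
    search meets a representative tied with t, t joins that group;
    otherwise t becomes a new representative with a fresh group.
    """
    lo, hi = 0, len(R)
    while lo < hi:
        mid = (lo + hi) // 2
        c = compare(t, R[mid])
        if c == 0:                  # tie: t is eps-similar to R[mid]
            groups[mid].append(t)
            return
        if c > 0:                   # t is better, search the front half
            hi = mid
        else:
            lo = mid + 1
    R.insert(lo, t)                 # no tie met: t is a new representative
    groups.insert(lo, [t])
```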
To see whether a filter in <ref> can prune a given tuple ,
we test the following condition.
Let R = _1, … be the sorted list of representative tuples,
where _1 is the top tuple.
Let = G_1, … be the corresponding groups.
A tuple can be pruned if
there exists ' such that - '_2 ≤, where
' = ∑_∈ G_1∪ G_2ν_
+ ∑_j=1∑_∈ G_j∑_∈ G_j+2α_, ( - )
such that ∑_∈ G_1∪ G_2ν_ = 1 and all ν, α≥ 0.
The idea is similar to <ref>, except that
the top tuple _1 in <ref> is replaced by an aggregated tuple by convex combination, and
every pair difference _j+1 - _j is replaced by pair differences between two groups.
We avoid using pair differences between two consecutive groups, as tuples in group G_j may not have higher utility than tuples in G_j+1.
If the above condition is met, then
we write
and, if is constructed using S,
S .
The number of comparisons needed by <ref> depends on the actual input,
specifically, , the largest size of any pairwise -similar subset of .
Note that the guarantee below recovers that of <ref> up to a constant factor, if assuming <ref> where =1.
However, in the worst case, = () and the guarantee becomes vacuous.
Assume ϵ > 0 and let = || be the size of data.
Let c = (^*) ∈ [0,1] be the utility of the best tuple ^*.
Under <ref>,
with a pool size = 256ln 2 and = 3/16,
<ref> returns an (/c + 2)-regret tuple for the problem.
Let =, where
d is the intrinsic dimension of , and
be the largest size of a pairwise -similar subset of .
Then, with probability at least 1-1/, at most
(log() 16 log(16 )) +
comparisons
are made.
In the rest of this section, we prove <ref>.
§.§ Proofs
The proof is similar to that of <ref>,
except that we need a new proof for the key <ref>,
since in the presence of ties, we may not be able to totally sort a sample S.
Instead, we show that a partially sorted set S of a sufficient size can also be effective in pruning.
From now on, we treat the sample S as a sequence instead of a set,
as a different arrival order of S may result in a different filter by <ref>.
Let S ⊆ be a sequence of length 16.
Let be the groups constructed by <ref>.
Under <ref>, we have
|{∈ S: - }| ≥3/4 |S|,
where =,
is the largest size of a pairwise -similar subset of ,
and - are the groups with removed from its group.
[Proof of <ref>]
Note that by the definition of ,
for any particular tuple ∈ S,
there are at most 2(-1) tuples that are -similar with tuple .
Thus, must contain at least 8 groups, and
we split all groups in into two parts, those with an odd index and those with an even index.
In each part, we can extract a totally sorted list L of size at least , by picking exactly one tuple from each group.
We remove one tuple ∈ L from S such that L -, whose existence is guaranteed by <ref>.
<ref> guarantees that -.
We repeat this until fewer than groups remain in each part,
which means that the number of remaining tuples is at most 2 in each part.
As a result, we are able to remove at least 16 - 4 tuples,
concluding the claim.
Although the above lemma appears similar to <ref>,
a crucial difference is that the set of prunable tuples in S now depends on the arrival order of S,
which causes non-trivial technical challenges in the analysis.
A critical observation that enables our analysis is the following result.
Fix a sequence S of size 16; then there exist at least 1/4 |S| tuples in S that satisfy
S - .
[Proof of <ref>]
Let be the groups constructed by <ref>.
Write
S' = {∈ S: - }.
By <ref>, we know that |S'| ≥3/4 |S|.
For an arbitrary tuple ∈ S', suppose is assigned to a group G ∈.
We call a tuple good if |G| = 1 or is not a representative in R in <ref>.
Let ' be the groups constructed by <ref> using S -. If is good,
then ' = -.
Therefore, for a good tuple we always have
S - .
By definition, it is easy to see that there are at most |S|/2 tuples in S that are not good,
proving the lemma.
Denote by (S) the set of tuples that can be pruned by S, that is,
(S) = {∈^d: S }.
We now prove a similar lemma to <ref> by a generalized symmetrization argument over sequences.
Given a set of tuples , and
a random sequence S
of at least 16
tuples from ,
we have
[ |(S) ∩| ]
≥1/4 ||,
where =,
and is the largest size of a pairwise -similar subset of .
Moreover, the expectation is taken over S.
[Proof of <ref>]
Let be the last tuple added into S. Write T = S -.
Given T, the distribution of is a uniform distribution from ∖ T.
Let be a random sample from X.
Since T ⊆(T), we have
[ T | T ] ≥ [ T | T ].
Then,
_S [ |(S) ∩|/||]
= _S [ [ S | S ] ]
= _S [ [ T + | S ] ]
≥_S [ [ T | S ] ]
= _T [ [ T | T ] ]
≥_T [ [ T | T ] ]
= _S [ [ S - | S ] ].
Fix S, let ∈ S be a uniformly random tuple in S, and we have
_S [ [ S - | S ] ]
= _S [ [ S - | S ] ]
≥ 1/4,
where the last step is by <ref>, and the first step is due to double counting,
as every sequence S appears |S| times in the right-hand side,
completing the proof.
[Proof of <ref>]
The proof is similar to <ref> on a high level.
We only elaborate on their differences.
We first prove the guarantee on the regret.
If the optimal tuple ^* is in the pool once the algorithm is done, then the regret is at most .
If ^* is not in the pool, then
the proof of <ref> shows that there is a tuple in one of the samples, say S,
that yields a regret of /c.
The top representative of that sample yields /c + regret. Finally, the final top tuple
yields /c + 2 regret.
Next, we upper bound the size of every sample and the number of samples similarly to
the proof of <ref>.
We require every sample to prune at least 1/8 fraction of the remaining tuples instead of 1/2,
which leads to a demand for log_8/7 () samples.
The total failure probability is bounded by 2 e^-2 /16^2≤ 1/.
Consequently, with probability at least 1-1/,
we use at most log_8/7 () sample sets, each of size at most 16.
[more details]
For any sample S with at least 16 samples and any subset ⊆,
let ' = (S) ∩ and by <ref> we have
[ |'| ] ≥1/4 ||.
In particular, let be a random subset of , and we have
[|'|] ≥1/4 ||.
Then,
[ |'| < 3/16 || ]
= [ |'| < 1/4 || - 1/16 ||]
≤[ |'| < [|'|] - 1/16 ]
≤ e^-2 /16^2,
where the last step invokes <ref>.
Since there can be at most samples,
the probability that any sample fails to pass the pool test is upper bounded by e^-2 /16^2.
We continue to upper bound the number of sample sets.
At most log_8/7() sample sets suffice if every sample can prune at least 1/8 fraction of the remaining tuples.
Fix an arbitrary sample S, and let to be the set of remaining tuples.
The pool is a random sample from of size . Thus,
[|'|] / = |'| / ||. Consequently, if |'| < ||/8, then [|'|] < /8
and
[ |'| ≥3/16 || ]
≤[ |'| ≥[|'|] + 1/16]
≤ e^-2 /16^2.
Similar to the above,
the probability that any bad sample passes the test is upper bounded by e^-2 /16^2.
Building one filter requires at most (16log(16)) comparisons, because
sorting a new tuple within R by binary search costs at most (log(16)) comparisons.
Finally, finding the best tuple among all filters and the pool requires additional + log_8/7 () comparisons.
§ IMPROVING BASELINE FILTERS
In this section,
we improve existing filters by <cit.>,
by using linear and quadratic programs.
We will use these baselines in the experiments.
Previously, their filters rely on explicit computation of convex hulls,
which is feasible only in very low dimension <cit.>.
Technical details are deferred to <ref>.
Existing filters iteratively compare a pair of random tuples, all of which are kept in
A = {a_i},
where a_i = (,) such that () < (),
and use them to prune potential tuples.
Filter by constrained utility space
Given a tuple , we try to find
a vector that, for all (,) ∈ A,
^T(-) ≥ 1, ^T(-) ≥ 1, ^T((1-) - ) ≥ 1.
eq:lp
We claim that a given tuple can be safely pruned if there is no vector satisfying LP (<ref>).
propositionproplp
Consider a tuple with () > () and
() - () > () for every (, ) ∈ A.
Then there is a solution to LP (<ref>).
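Since the extraction dropped several symbols from LP (<ref>), the sketch below fixes one plausible reading of the constraint set as an explicit assumption: `pairs` holds compared pairs (a, b) with U(a) < U(b), `x` is the candidate to prune, `top` is the best tuple kept so far, and `eps` is the regret parameter. Feasibility is tested with SciPy's HiGHS backend, in line with the solvers used later in the experiments.

```python
import numpy as np
from scipy.optimize import linprog

def can_prune_lp(pairs, x, top, eps):
    """Prune x if no utility vector u is consistent with all compared
    pairs while still preferring x to the (1 - eps)-discounted top tuple.

    This is an assumed reconstruction of LP (<ref>): the exact roles of
    the elided symbols are our guesses.
    """
    d = len(x)
    # linprog uses A_ub @ u <= b_ub, so flip signs of the ">= 1" constraints.
    rows = [-(np.asarray(b) - np.asarray(a)) for (a, b) in pairs]
    rows.append(-(np.asarray(x) - (1.0 - eps) * np.asarray(top)))
    A_ub = np.vstack(rows)
    b_ub = -np.ones(len(rows))
    res = linprog(c=np.zeros(d), A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * d, method="highs")
    return not res.success  # infeasible => x can be safely pruned
```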
Filter by conical hull of pairs
Given a tuple , we propose to solve the following quadratic program (QP),
min_ν,β - ∑_a_i = (,) ∈ A (ν_i1 + ν_i2 ) - ∑_a_i = (,) ∈ Aβ_i (-)
such that∑_a_i = (,) ∈ Aν_i1+ν_i2 = 1
and ν_i1, ν_i2, β_i ≥ 0 for all i eq:qp-pairs.
If the optimal value of the QP is at most , we prune .
propositionpropqppairs
Let ^T ^* = c.
A tuple ∈ can be pruned if
the objective value of the quadratic program (<ref>) is at most / c.
If we set ϵ = 0, then we can use an LP solver (similar to <ref>) instead of a QP solver.
This results in a weaker but computationally more efficient filter.
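A hedged sketch of QP (<ref>) follows, using SciPy's SLSQP in place of OSQP so that the snippet is self-contained. The variable layout (one convex weight per pair endpoint, one cone weight per pair difference) follows the program's description; since the pair-difference direction was elided in extraction, we use the worse-minus-better direction, which is the choice that makes pruning sound, and all names are ours.

```python
import numpy as np
from scipy.optimize import minimize

def qp_distance_to_cone(pairs, x):
    """Distance from x to the set of convex combinations of pair endpoints
    shifted by nonnegative multiples of (worse - better) pair differences,
    as in QP (<ref>). Prune x if the returned distance is at most eps / c.
    """
    x = np.asarray(x, dtype=float)
    A = np.array([a for a, b in pairs] + [b for a, b in pairs], dtype=float)
    D = np.array([np.asarray(a) - np.asarray(b) for a, b in pairs], dtype=float)
    m, k = len(A), len(D)

    def objective(w):
        nu, beta = w[:m], w[m:]
        r = x - nu @ A - beta @ D
        return float(r @ r)                          # squared distance (smooth)

    cons = [{"type": "eq", "fun": lambda w: np.sum(w[:m]) - 1.0}]
    bounds = [(0.0, None)] * (m + k)                 # nu, beta >= 0
    w0 = np.concatenate([np.full(m, 1.0 / m), np.zeros(k)])
    res = minimize(objective, w0, bounds=bounds, constraints=cons,
                   method="SLSQP")
    return float(np.sqrt(max(res.fun, 0.0)))
```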
§ EXPERIMENTAL EVALUATION
In this section, we evaluate key aspects of our method and the proposed filters.
Less important experiments and additional details are deferred to <ref>.
In particular, we investigate the following questions.
(i) How accurate is the theoretical bound in <ref>?
More specifically, we want to quantify the sample size required by <ref> to prune at least half of the tuples,
and understand its dependence on the data size , dimension d,
and regret parameter .
(ii) Effect of parameters of <ref>.
(<ref>)
(iii) How scalable are the proposed filters?
(iv) How do the proposed filters perform over real-life datasets?
(v) How do ties in comparisons affect the performance of the proposed filters?
Our implementation is available at
Next, let us introduce the adopted datasets and baselines.
Datasets.
A summary of the real-life datasets we use for our evaluation can be found in <ref>.
To have more flexible control over the data parameters,
we additionally generate the following two types of synthesized data.
sphere: Points sampled from the unit d-sphere uniformly at random.
clusters: Normally distributed clustered data, where each cluster is centered at a random point on unit d-sphere .
To simulate an oracle, we generate a random utility vector on the unit d-sphere for every run.
More details about datasets can be found in <ref>.
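The two generators admit a compact implementation; the following sketch (our own, with arbitrary parameter choices) normalizes Gaussian vectors for the sphere data and offsets Gaussian noise by random sphere points for the clustered data.

```python
import numpy as np

rng = np.random.default_rng(42)

def unit_sphere(n, d):
    """n points drawn uniformly at random from the unit d-sphere."""
    g = rng.normal(size=(n, d))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def clusters(n, d, k=10, scale=0.05):
    """Normally distributed clusters centered at random sphere points."""
    centers = unit_sphere(k, d)
    idx = rng.integers(k, size=n)
    return centers[idx] + rng.normal(scale=scale, size=(n, d))

# A random utility vector on the unit d-sphere simulates the oracle.
u = unit_sphere(1, 8)[0]
```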
Baselines.
A summary of all algorithms is given in <ref>.
We mainly compare with (enhanced) pruning techniques (, and ) by <cit.>,
halfspace-based pruning (), and
a random baseline ().
Discussion of other baselines is deferred to <ref>.
We instantiate every filter (except for the and ) in the framework provided in <ref>,
that is, we iteratively create a new filter that can prune about half of the remaining tuples.
This is a reasonable strategy, and will be justified in detail in <ref>.
For pair-based filters, a new pair is made after two consecutive calls of the add function.
The pool size and threshold in <ref> are set to be 100 and 0.5, respectively.
Since the proposed algorithm only guarantees a regret of /(x^*), where x^* is the best tuple in the dataset,
we pre-compute the value of (x^*) ∈ [0,1], and adjust the regret parameter of to be (x^*).
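Putting the pieces together, the evaluation framework of <ref> can be sketched as follows, with the pool size 100 and threshold 0.5 quoted above; `new_filter`, `filter.add`, and `filter.prunes` are placeholder hooks for whichever filter is being instantiated, so this is an illustrative skeleton rather than the exact experimental code.

```python
def stream_framework(stream, new_filter, pool_size=100, tau=0.5):
    """Iteratively build filters until each prunes >= tau of a random pool.

    `new_filter()` returns an object with .add(t) (consume a sample tuple,
    return True once the filter is ready) and .prunes(t) -> bool.
    """
    pool, filters, f = [], [], new_filter()
    for t in stream:
        if any(g.prunes(t) for g in filters):
            continue                        # t is pruned by an older filter
        if len(pool) < pool_size:
            pool.append(t)                  # fill the testbed pool first
            continue
        if f.add(t):                        # filter ready: test it on the pool
            kept = [p for p in pool if not f.prunes(p)]
            if len(pool) - len(kept) >= tau * len(pool):
                filters.append(f)           # passes: keep it, start a new one
                pool, f = kept, new_filter()
    return filters, pool
```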
§.§ Sample size in practice
<ref> proves a theoretical bound on the size of a random sample
required by <ref> to prune at least half of a given set of tuples in expectation.
This bound is 2 where =.
Importantly, the bound does not depend on the data size ||,
which we verify later in <ref>.
In <ref> (in Appendix), we compute and present the exact required size for synthesized data, and illustrate
how the size changes with respect to the dimension d and regret parameter .
As can be seen, the bound provided in <ref> captures a reasonably accurate dependence on d and , up to a constant factor.
§.§ Scalability
The running time required for each filter to prune a given tuple depends heavily on its memory size,
i.e., the number of tuples it keeps.
In <ref>, we compute and show the required memory size for a filter to prune half of a given set of tuples, and
how the size changes with respect to the data size = ||.
Impressively, most competing filters that adopt a randomized approach only require constant memory size, regardless of the data size .
This also confirms the effectiveness of randomized algorithms in pruning.
Based on the above observation,
it is usually not feasible to maintain a single filter to process a large dataset .
If a filter requires s tuples in memory to prune half of ,
then at least s log(||) tuples are expected in order to process the whole dataset .
However, the running time for both LP and QP solvers is superlinear in the memory size of a filter <cit.>,
which means that running a filter with s log(||) tuples is considerably slower than running log(||) filters, each with s tuples.
The latter approach enables also parallel computing for faster processing.
Therefore, we instantiate each competing filter (except for and ) in the framework provided in <ref>, and
measure the running time it takes to solve the problem.
In the rest of this section,
we investigate the effect of the data dimension d and regret parameter on the running time.
Effect of data dimension d.
In <ref>,
we fix a regret parameter =0.01, and
examine how the running time of a filter varies with respect to the data dimension d on synthesized data.
The first observation from <ref> is that LP-based filters are more efficient than their QP counterparts.
Particularly, is too slow to be used, and we have to settle for its LP counterpart in subsequent experiments.
Let us limit the comparison to those LP-based filters.
and are more computationally expensive than .
For , the reason is obvious:
as discussed at the end of <ref>,
makes relatively more comparisons and
every compared pair of tuples adds two more parameters to the LP.
For , the number of parameters in its LP depends linearly on both the dimension d and number of compared pairs,
while only depends on the latter.
Thus, is less scalable by design.
When d increases, the impact of the number of parameters needed for compared pairs dominates that of the parameters modeling , so its running time converges to that of .
Effect of regret parameter .
The effect of the regret parameter can be found in <ref> for all real-life datasets.
Generally, a larger value of decreases the running time,
as each filter benefits from more aggressive pruning.
The running time of deteriorates dramatically for a small value of , and
the number of comparisons needed also rises considerably.
The reason is that,
most numerical methods for solving a mathematical program have a user-defined precision parameter.
Small precision gives a more accurate solution, and at the same time causes a longer running time.
When gets close to the default precision, or to the actual precision after the maximum number of iterations is exceeded, fails to prune tuples.
Thus, is advised to be used for a relatively large regret value .
In regard to the memory size,
as we can see in <ref>,
and consistently use a much smaller memory size than and .
This also demonstrates the advantage of using a sorted list over a set of compared pairs.
§.§ The case of oracles with no ties
The performance of competing filters can be found in <ref> for all real-life datasets.
The average and standard error of three random runs are reported.
We instantiate each competing filter (except for and ) in the framework provided in <ref> to solve the problem.
Meanwhile, we vary the regret parameter to analyze its effect.
We also experimented with a smaller value such as 0.005,
the observations are similar except that the filter is significantly slower for reasons we mentioned in <ref>.
Except and ,
every reasonable filter succeeds in returning a low-regret tuple.
We limit our discussion to only these reasonable filters.
In terms of the number of comparisons needed,
outperforms the rest on most datasets provided that the regret value is not too small.
We rate as the runner-up, and it becomes the top one when the regret value is small.
Besides, is the fastest to run.
The number of comparisons needed by and is similar, and
they sometimes perform better than others, for example, over the youtube dataset.
Let us make a remark about the regret value .
Being able to exploit a large value of in pruning is the key to improving performance.
Notice that both and cannot benefit from a large regret value by design.
Though is designed with in mind,
it is more conservative as its pruning power depends on ^T instead of ^T^*,
where is the tuple to prune.
In summary, we can conclude that
the filter is recommended for a not too small regret parameter (i.e., ≥ 0.1), and
the filter is recommended otherwise.
In practice, since both and follow an almost identical procedure,
one could always start with , and switch to if the pruning takes too long time.
§.§ Effect of ties
According to <ref>,
the oracle returns a tie if the difference in utility between two given tuples is within a parameter .
For filters like and , the most natural strategy to handle a tie for a pair of tuples is to simply discard one of them.
It is expected that ties worsen the performance of a filter, as they fail to provide
additional information required by the method for pruning.
In <ref>, we vary the value of parameter to see
how it affects the performance of the proposed filters.
It is not surprising that
as the value of increases, the number of ties encountered and the number of comparisons made by all algorithms both increase.
Notably, the running time of and grows significantly as increases.
This is because one parameter is needed in their solvers for every pair of tuples between two consecutive groups G_i,G_j, and
the total number of parameters can increase significantly if the size of both groups increases.
This behavior also reflects the fact that a partially sorted list is less effective for pruning.
However, how to handle a large remains a major open problem.
Hence, we conclude that the proposed algorithms work well provided that the parameter is not too large.
Summary
After the systematical evaluation,
we conclude with the following results.
(i) LP-based filters are more efficient than their QP counterparts, but less effective in pruning.
(ii) is the most scalable filter.
The runner-up is , provided that the data dimension is not too large (d < 128) and the regret parameter is not too small (≥ 0.1).
(iii) To minimize the number of requested comparisons,
is recommended for a not too small (≥ 0.1).
When is small, we recommend .
(iv) Good performance can be retained if the oracle is sufficiently discerning (≤ 0.01).
Otherwise, a better way to handle ties will be needed.
§ CONCLUSION
We devise a single-pass streaming algorithm for finding a high-utility tuple by making adaptive pairwise comparisons.
We also show how to maintain the guarantee when ties are allowed in a comparison between two tuples with nearly equal utility.
Our work suggests several future directions to be explored.
Those include
finding a high-utility tuple in the presence of noise,
incorporating more general functions for modeling tuple utility,
devising methods with provable guarantees for arbitrary-order data streams,
and
devising more efficient algorithms to handle ties.
This research is supported by the Academy of Finland projects MALSOME (343045) and MLDB (325117),
the ERC Advanced Grant REBOUND (834862),
the EC H2020 RIA project SoBigData++ (871042),
and the Wallenberg AI, Autonomous Systems and Software Program (WASP)
funded by the Knut and Alice Wallenberg Foundation.
§ PROOFS FOR SECTION <REF>
<cit.> proved a powerful local lemma, which states that
among a sufficiently large set of vectors from the unit d-ball ^d,
there must exist some vector that can be approximately represented as a special non-negative linear combination of others.
Given _1,…,_∈^d, for any > 0, if ≥,
then there exists a ∈ [] such that
_a = _1 + ∑_j = 1^a-2α_j (_j+1 - _j) + ,
where _2 ≤ and α_j ∈{0,1,2}.
Let S = {_1, …, _}.
<ref> can be easily extended to hold for the intrinsic dimension of S,
by first applying <ref> to the minimal representation _1,…,_∈^d' of S.
Suppose
_a = _1 + ∑_j = 1^a-2α_j (_j+1 - _j) + ',
where '_2 ≤.
By definition, we know that
_i = M _i where M ∈^d,d' and columns of M are orthonomal.
Then, we have _2 ≤,
where = M'.
In <ref>, we have S-_a _a,
where S- is a shorthand for S ∖{}.
Note that this is exactly the condition we use in Step <ref> in <ref> for pruning.
Denote by (S) the set of all such pruned tuples, i.e.,
(S) = {∈^d: S }.
Given any set S of size 4,
at least 3/4 fraction of S can be pruned by other tuples in S, by repeatedly applying <ref>.
Given a sorted set S of size at least 4, where =, we have
|{∈ S: S-}| ≥3/4 |S|.
since |S| ≥ 4,
we can apply <ref> repeatedly to S until only entries remain.
Importantly, {∈ S: S-} do not depend on the arrival order of tuples in S.
As a consequence of <ref>,
the same fraction of current tuples can be pruned by a random sample set S of a sufficient size in expectation.
Given a set of tuples , and
a random sample set S ⊆ of size 4 where =,
we have
[ |(S) ∩| ] ≥3/4 ||,
where the expectation is taken over S.
[Proof of <ref>]
The proof is by a symmetrization argument introduced by <cit.>.
Let be the last tuple added into S. Write T = S -.
Given T, the distribution of is a uniform distribution from ∖ T.
Let be a random sample from X.
Since T ⊆(T), we have
[ T | T ] ≥[ T | T ].
Then,
_S [ |(S) ∩|/||]
= _S[ [ S | S ] ]
= _S[ [ T + | S ] ]
≥_S[ [ T | S ] ]
= _T[ [ T | T ] ]
≥_T[ [ T | T ] ]
= _S[ [ S - | S ] ].
In order to bound the right-hand side, notice that
when conditioned on S, every permutation of S is equally probable over a random-order stream,
which implies that every tuple in S is equally probable to be the last tuple .
Hence, we have [ S - | S ] ≥ 3/4 by <ref>,
proving immediately the claim.
It is important to note that, the above proof doesn't imply that
for every S, when conditioning on it, one can achieve
[ S | S ] ≥ 3/4,
that is, every S can prune at least 3/4 of , which is too strong to be true.
After we replace by , the 3/4 is obtained by taking average over , which is part of outer expectation over S.
Another important issue to handle is to ensure that our pruning strategy will not discard all feasible tuples.
This is prevented by keeping track of the best tuple in any sample set so far, and guaranteed by <ref>.
Denote by all tuples that have arrived so far.
Suppose ^* is the best tuple among .
Tuple ^* is either collected into our sample sets,
or pruned by some sample set S.
In the former case, our statement is trivially true.
In the latter case, suppose S = {_1,…}, where _1 is the best tuple in S.
If _1 is feasible, then is feasible as well, as it is at least as good as _1.
If _1 is infeasible, i.e., (^*) - (_1) >, then ^* cannot be pruned by S by design, a contradiction.
This completes the proof.
Before proving <ref>,
we briefly summarize the hypergeometric tail inequality below <cit.>.
Draw n random balls without replacement from a universe of N red and blue balls, and
let i be a random variable of the number of red balls that are drawn.
Then, for any t > 0, we have
[i ≥[i] + t n] ≤ e^-2t^2n,
and
[i ≤[i] - t n] ≤ e^-2t^2n.
The feasibility of the returned tuple is due to <ref>.
In the rest of the proof, we upper bound the size of every sample and the number of samples we keep in the sequence .
For any sample S with at least 4 samples and any subset ⊆,
let ' = (S) ∩ and by <ref> we have
[ |'| ] ≥3/4 ||.
In particular, let = and we have
[|'|] ≥3/4 || and || =.
Then,
[ |'| < 5/8 || ]
= [ |'| < 3/4 || - 1/8]
≤[ |'| < [|'| - 1/8] ]
≤ e^-2 /8^2,
where the last step invokes <ref>.
Since there can be at most samples,
the probability that any sample fails to pass the pool test is upper bounded by e^-2 /8^2.
We continue to upper bound the number of sample sets.
At most log() sample sets suffice if every sample can prune at least half of the remaining tuples.
Fix an arbitrary sample S, and let to be the set of remaining tuples.
The pool is a random sample from of size . Thus,
[|'|] / = |'| / ||. Consequently, if |'| < ||/2, then [|'|] < /2
and
[ |'| ≥5/8 || ]
≤[ |'| ≥[|'|] + 1/8]
≤ e^-2 /8^2.
Similar to the above,
the probability that any bad sample passes the test is upper bounded by e^-2 /8^2.
Combining the two cases above,
the total failure probability is 2 e^-2 /8^2≤ 1/
Hence, with probability at least 1-1/,
it is sufficient to use log() sample sets, each with a size 4.
Keeping one sample set requires 4log(4) comparisons.
Finally, finding the best tuple among all filters and the pool requires additional + log() comparisons.
§ IMPROVING BASELINE FILTERS
In this section,
we improve existing filters by <cit.>,
by using linear and quadratic programs.
Previously, their filters rely on explicit computation of convex hulls,
which is feasible only in very low dimension.
For example, the convex hull size, and consequently
the running time of these existing techniques,
have an exponential dependence on d <cit.>.
§.§ Improving constrained utility space filter
One of the most natural strategies is to iteratively compare a pair of random tuples.
The feasible space for the utility vector is constrained by the list of pairs
A = {a_i} that have been compared,
where a_i = (,) such that () < ().
Note that every pair of tuples ,∈ forms a halfspace in ^d, i.e.,
= {∈^d: ^T(-) < 0 }.
Specifically, the unknown ∈ is contained in the intersection U of a set of halfspaces, one by each pair.
<cit.> propose to prune a tuple if
for every possible ∈ U there exists a tuple in some pair of A such that () ≥().
They first compute all extreme points of U, and then check if the condition holds for every extreme point.
However, this approach is highly inefficient,
as potentially there is an exponential number of extreme points.
Instead, we propose to test the pruning condition by asking to find
a vector that satisfies
If there is no such vector we prune .
This test can be done with a linear program (LP).
Note that the test is stronger than that by <cit.> as it has been extended to handle ϵ-regret.
We claim that a given tuple can be safely pruned if there is no vector satisfying LP (<ref>).
*
[Proof of <ref>]
Let be the utility vector.
The assumptions imply
^T - ^T > 0
and ^T((1-) - ) > 0.
Next, note that, by definition, for every (,) ∈ A,
^T(-) > 0.
The inequalities in Eqs. (<ref>)–(<ref>) are all strict.
Consequently, we can scale so that the left-hand sides in Eqs. (<ref>)–(<ref>) are at least 1,
that is, there exists a solution to LP (<ref>).
Notice that the second set of constraints in LP (<ref>) (i.e., ^T(-) ≥ 1) is redundant provided () ≥ 0.
Actually, even if () < 0, the test only lets in that is slightly worse than the best tuple in A,
which is unlikely since () < 0.
Thus, in practice we recommend to omit the second set of constraints to speed up the test.
[sorted list vs. pairs]
It is not true that
sorting a list of tuples is a much more efficient way to generate compared pairs.
At least for the purpose of maintaining utility space.
There are quadratic number of pairs in a list, but the “effective” pairs are those made by actual comparisons.
A filter for maintaining the constrained utility space is conceptually different from the filter proposed in <ref>.
A small utility space of is the key for such a filter to be effective,
while a filter in <ref> maintains no explicit knowledge about and mainly relies on the geometry of the tuples.
§.§ Improving conical hull of pairs filter
Another pruning strategy proposed by <cit.>
is the following.
Consider again a list of compared pairs A = {a_i},
where a_i = (,) such that () < (),
and consider a cone formed by all pairs in A.
A tuple can now be pruned
if there is another tuple kept by the algorithm,
such that
= + ∑_a_i = (,) ∈ Aβ_i (-)
such that β_i ≥ 0
for all i.
Instead of actually constructing all facets of the conical hull,
as done by <cit.>,
we propose to solve the quadratic program (QP) in (<ref>).
If the optimal value of the QP is at most , we prune .
*
[Proof of <ref>]
We only discuss the case = 0.
When > 0, for any pruned tuple, there exists a tuple in some pair of A that is at most a distance of away from it, and
thus A maintains at least one / c-regret tuple.
The first sum in QP (<ref>) can be seen as an aggregated tuple by convex combination,
whose utility is no better than the top tuple in A.
The second term only further decreases the utility of the first term.
Thus, if a tuple can be written as a sum of the first and second terms,
its utility is no better than the top tuple in A, and
can be pruned.
Similar to <ref>, a weaker but computationally more efficient filter can be used, by replacing the QP with an LP solver.
That is, we prune tuple if there is a solution to
= ∑_a_i = (,) ∈ Aν_i + ∑_a_i = (,) ∈ Aβ_i (-)
such that ∑_a_i = (,) ∈ Aν_i = 1 ,
and ν_i1, ν_i2, β_i ≥ 0 for all i.
As a final remark about the above QP,
we compare its pruning power with that of the proposed filter (<ref>) in <ref>.
Obviously, its pruning power increases as the number of compared pairs in A increases.
For a fixed integer s,
a number of s comparisons result in s pairs for the above QP,
while in <ref>,
s comparisons can produce a sorted list of s/log(s) tuples and s/log(s) 2 pairs.
Hence, the above QP is less “comparison-efficient” than the one in <ref>.
Also, for a fixed number of compared pairs, the number of parameters is larger in QP (<ref>) than in the proposed filter,
which means that QP is more inefficient to solve.
These drawbacks are verified in our empirical study in the next section.
§ ADDITIONAL EXPERIMENTS
Datasets.
A summary of the real-life datasets we use for our evaluation can be found in <ref>.
The datasets contain a number of tuples up to 1M and a dimension up to 100.
Previous studies are mostly restricted to a smaller data size and a dimension size less than 10, and
a skyline operator is used to further reduce the data size in advance <cit.>.
Note that running a skyline operator itself is already a time-consuming operation, especially for high-dimension data <cit.>, and
becomes even more difficult to apply with limited memory size in the streaming setting.
Besides, a fundamental assumption made by a skyline operator, namely,
pre-defined preference of all attributes,
does not hold in our setting.
According to this assumption, it is required to know beforehand whether an attribute is better with a larger or smaller value.
This corresponds to knowing beforehand whether utility entry _i is positive or negative for the i-th attribute.
As we mentioned in <ref>, we do not make such an assumption about , and allow an arbitrary direction.
This is reasonable, as preference towards some attributes may be diverse among different people.
One example is the floor level in the housing market, where some may prefer a lower level, while others prefer higher.
Hence, we do not pre-process the data with a skyline operator.
Details on the data generation process and the actual synthesized data
can be found in our public Github repository.
Baselines.
We do not consider methods that synthesize fake tuples in pairwise comparisons, such as <cit.>.
Over a random-order stream, the algorithm by <cit.> is the same as the baseline when adapted to find the top tuple instead of a full ranking.
The UH-Simplex method <cit.> that simulates the simplex method by pairwise comparisons is not included,
as it is mainly of theoretical interest, designed for offline computation, and has been shown to have inferior empirical performance compared to other baselines.
We do not consider baselines that iteratively compare a greedy pair (among all 2 pairs) with respect to some measure of interest,
such as <cit.>,
because they are designed for offline computation and it is computationally prohibited to decide even the first greedy pair for the adopted datasets.
Misc.
We adopt the OSQP solver <cit.> and the HIGHS LP solver <cit.>.
The maximum number of iterations for the solvers is set to 4000,
which is the default value in the OSQP solver.
All experiments were carried out on a server equipped with 24 processors of AMD Opteron(tm)
Processor 6172 (2.1 GHz), 62GB RAM, running Linux 2.6.32-754.35.1.el6.x86_64.
The methods are implemented in Python 3.8.5.
§.§ Effect of parameters
Recall that in <ref>, a pool of tuples is used to test the performance of a new filter.
A new filter will be ready when it can prune at least a fraction of tuples in .
In <ref>, we run <ref> with a filter on a dataset of 10k tuples.
We fix one parameter (=100 or =0.5) and vary the other.
Parameter roughly specifies the expected fraction of tuples a filter should be able to prune.
A larger implies a need for fewer filters but a larger sample size for each filter.
It is beneficial to use a large , which leads to a smaller number of comparisons overall.
Nevertheless, as we will see shortly, such a large filter can be time-consuming to run, especially when the dimension d is large.
A larger value of improves the reliability of the testbed ,
which helps reduce the number of comparisons.
However, a larger also results in longer time to run filters over the testbed .
| http://arxiv.org/abs/2307.02178v1 | 20230705101430 | Non-Concave Utility Maximization with Transaction Costs | ["Shuaijie Qian", "Chen Yang"] | q-fin.MF | ["q-fin.MF"] |
Non-Concave Utility Maximization with Transaction Costs
Shuaijie QianCMSA, Harvard University, Cambridge, MA 02139, USA; Department of Mathematics, HKUST, Hong Kong. Email: sjqian@ust.hk. Corresponding author
Chen YangDepartment of Systems Engineering and Engineering Management, The Chinese University of Hong Kong, Hong Kong. Email: cyang@se.cuhk.edu.hk
This paper studies a finite-horizon portfolio selection problem with non-concave terminal utility and proportional transaction costs. The commonly used concavification principle for terminal value is no longer valid here, and we establish a proper theoretical characterization of this problem. We first give the asymptotic terminal behavior of the value function, which implies any transaction close to maturity only provides a marginal contribution to the utility. After that, the theoretical foundation is established in terms of a novel definition of the viscosity solution incorporating our asymptotic terminal condition. Via numerical analyses, we find that the introduction of transaction costs into non-concave utility maximization problems can prevent the portfolio from unbounded leverage and make a large short position in stock optimal despite a positive risk premium and symmetric transaction costs.
Keywords: utility maximization, portfolio selection, transaction costs, concavification principle
§ INTRODUCTION
The utility maximization framework is widely used for studying individuals' decisions in problems such as portfolio selection theory or consumer theory. For example, the classic Merton problem (cf. <cit.>) studies the optimal portfolio selection in which an investor aims at maximizing the expected utility over terminal wealth and intertemporal consumption.
In the classic utility maximization literature, the utility function is typically chosen as a concave function (e.g. CRRA or CARA utilities), which represents the individual's risk aversion. However, in many practical problems, the individual's utility has non-concave dependence on the terminal wealth level. For example, the investor can have an investment objective and gains a sudden boost in her utility level if the wealth breaks through such an objective. This creates a jump discontinuity in the utility and makes it non-concave (see, e.g., the goal-reaching problem in Example <ref> and aspiration utility in Example <ref>). Another example is from the S-shaped utility in behavioral economics (see Example <ref>). More examples can be found in delegated portfolio choice problems (see, e.g., <cit.>, <cit.>, <cit.>).
The non-concave utility maximization is commonly tackled in the literature using the concavification principle. Using this principle, the optimal investment strategy can be equivalently obtained by solving a “concavified” problem with the utility U replaced by its concave envelope U. The basic idea behind this principle is that when the end of investment horizon approaches, it is optimal for the investor to avoid reaching a terminal wealth level Z_T where U(Z_T)>U(Z_T) via taking (positively or negatively) unbounded leverage (c.f. <cit.>). With one-sided portfolio bounds, <cit.> show that this principle still remains valid, since the investor can still establish unbounded leverage in the permitted direction. However, <cit.> prove that this no longer holds with two-sided portfolio bounds. Indeed, such bounds directly prohibit unbounded leverage, and they show that the non-concavity of the terminal utility has significant impacts on the investor's strategy, both theoretically and practically. For example, the investor may choose to gamble by short-selling stocks of positive risk-premium, or take extreme positions that attain the portfolio bounds and deviate significantly from the frictionless optimum.
In this paper, we show that the concavification principle also fails when there are transaction costs incurred by trading the stocks, and we provide a rigorous theoretical characterization for this problem. To our best knowledge, this is the first paper studying continuous-time non-concave utility maximization problems with transaction costs.
Our main contribution is three-fold. First, from the theoretical perspective, the classical definition of viscosity solution (e.g. <cit.>) requires continuous value functions. Given the intrinsic discontinuity of value function near the end of horizon, we provide a rigorous treatment of the asymptotic terminal value and propose a novel definition of viscosity solution to characterize the investor's optimal value as the unique viscosity solution of the corresponding HJB equation (see Definition <ref> and Theorem <ref>).
Second, our theoretical terminal condition (see Proposition <ref>) also unveils the fundamental reason for the inapplicability of the concavification principle in the presence of transaction costs. Unlike the portfolio bounds that directly prohibit unbounded leverage, transaction costs impose “soft” bounds, making it diminishingly worthwhile for investors to transact and establish unbounded leverage,
as the end of investment horizon approaches. While the transaction costs in our setting and the two-sided portfolio bounds in <cit.> both result in the inapplicability of the concavification principle, the fundamental reasoning and underlying economic intuitions are significantly different.
Third, our numerical result demonstrates many intriguing financial insights for the optimal portfolio strategy. For example, when the remaining time to beat the target performance is short, a small magnitude of transaction costs can trigger a very high, although finite, level of optimal leverage, which can be much higher than <cit.> where the leverage is limited by the imposed portfolio bounds. Also, holding a large short position of the risky asset can be optimal despite its positive risk premium, since switching to a long position is costly due to the transaction cost.
Related Literature. With concave utilities, there is a large body of literature studying the continuous-time utility maximization problems with transaction costs, starting from the seminal papers <cit.>. They found that the transaction costs, however small, virtually prohibit continuous portfolio rebalancing for optimal diversification. Instead, the investor should strike a balance between achieving the optimal risk exposure and diversification, and minimizing the transaction costs. The transaction costs have since been widely used to model the bid-ask spread in a limit order book (e.g. <cit.>), or to model the general liquidity cost when trading in an illiquid market (e.g. <cit.>). Transaction costs also have been widely studied in portfolio selection (e.g. <cit.>), in the explanation of liquidity premium (e.g. <cit.>), and in derivative pricing (e.g. <cit.>).
Our proposed model is related to the literature studying non-concave utility functions. In addition to the goal-reaching utility (e.g. <cit.>), aspiration utility (e.g. <cit.>) and S-shaped utility (<cit.>) that will be discussed in details later, the non-concave utility functions are also widely used for modeling general objective related to the distribution of wealth (e.g. <cit.>).
Our result is also linked to the notion of viscosity solutions (see <cit.>). Unlike the classical notion that requires continuity, our definition of viscosity solution admits discontinuity. <cit.> consider the portfolio selection problem with both fixed and proportional transaction costs and smooth, concave utilities. Their derived value function may be discontinuous, and it is the unique viscosity solution up to a semicontinuous envelope. Our definition is closest to that of <cit.>, but their techniques for verifying the terminal boundary condition fail here, and we derive our condition by delicate analysis.
The remainder of this paper is organized as follows. Section <ref> describes the basic model setup and the assumptions. Section <ref> carries out the theoretical studies of the model, by characterizing the value function as the unique viscosity solution of the HJB equation, as well as identifying and proving the suitable terminal condition. Section <ref> presents several numerical examples of our model and discusses their financial implications. Section <ref> concludes the paper.
§ MODEL SETUP
We consider a finite investment horizon T>0 and assume that there are a risk-free asset (cash) and a risky asset (stock) in the market.
The cash position grows at the constant risk-free interest rate r and the stock price follows
dS_t = μ S_t dt + σ S_t d ℬ_t,
where μ >r is the expected stock return rate, σ is the stock volatility, and {ℬ_t}_0≤ t ≤ T is a standard one dimensional Brownian motion on a filtered probability space (Ω, ℱ, {ℱ_t}_0≤ t ≤ T, ℙ) with ℬ_0 = 0. The filtration {ℱ_t}_0≤ t ≤ T is generated by this Brownian motion, and ℱ_t contains all the ℙ-null sets of ℱ.
Trading the stock incurs proportional transaction costs. We denote X_t and Y_t as the amount of wealth in the cash and stock, respectively. Let θ_1 ∈ (0,1) and θ_2 ∈ (0, +∞) be the rates of the proportional
costs incurred on the stock sale and purchase, respectively. The dynamics of X_s and Y_s, 0≤ s ≤ T are
d X_s = r X_s ds - (1+θ_2)d L_s + (1-θ_1) dM_s,
d Y_s = μY_s ds + σY_s d ℬ_s +d L_s - dM_s,
where L_t and M_t represent the cumulative dollar amounts of stock purchase and
sale, respectively. They are both right-continuous with left limits, non-negative,
non-decreasing, and adapted to {ℱ_t}_0≤ t≤ T.
As <cit.>, we consider the forward wealth in cash and stock, which are defined as
X_s = e^-r(s-T)X_s, Y_s = e^-r(s-T)Y_s.
Then
d X_s = - (1+θ_2)d L_s + (1-θ_1) dM_s,
d Y_s = ηY_s ds + σY_s d ℬ_s + d L_s - dM_s,
where η: = μ -r is the excess rate of return, and L_s = ∫_t^s e^-r(u-T) d L̅_u, M_s = ∫_t^s e^- r(u-T) d M̅_u.
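As a concrete illustration (ours, not the authors'), the forward cash/stock dynamics above can be simulated under any given trading strategy; here `dL` and `dM` are arrays of per-step purchase and sale amounts, and all parameter names mirror the state equations.

```python
import numpy as np

def simulate_forward(x0, y0, dL, dM, eta, sigma, theta1, theta2, dt, rng):
    """Euler scheme for the forward cash/stock dynamics above.

    dL, dM are nonnegative per-step purchase and sale amounts; the
    strategy itself is an input, so this only illustrates the state
    equations, not an optimal policy.
    """
    n = len(dL)
    X, Y = np.empty(n + 1), np.empty(n + 1)
    X[0], Y[0] = x0, y0
    for k in range(n):
        dB = rng.normal(scale=np.sqrt(dt))
        X[k + 1] = X[k] - (1 + theta2) * dL[k] + (1 - theta1) * dM[k]
        Y[k + 1] = Y[k] + eta * Y[k] * dt + sigma * Y[k] * dB + dL[k] - dM[k]
    return X, Y
```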
§.§ The Investor's Problem
Denote by {Z_t}_0≤ t≤ T the forward wealth process, i.e.,
Z_t = X_t+(1-θ_1)Y_t^+ - (1+θ_2)Y_t^- ,
where Y_t^+ := max{0, Y_t} and Y_t^- := max{0, -Y_t} are the positive and negative parts of Y_t, respectively. Furthermore, there exists a liquidation boundary K. If the forward wealth Z_t is no greater than K at some time point t, the stock position is immediately liquidated and the account is closed, and the investor can only hold cash in [t, T].
The solvency region is
𝒮 = {(x, y)∈ℝ^2| x+(1-θ_1)y^+ - (1+θ_2)y^- ≥ K }.
Given an initial time t∈ [0, T] and position (X_t-,Y_t-)=(x, y) ∈𝒮, an investment strategy (L_s, M_s)_t≤ s ≤ T is admissible if (X_s, Y_s) given by (<ref>) is in 𝒮 for all s∈[t, T]. Denote by 𝒜_t(x, y) the set of all admissible strategies with initial time t and initial position (x, y).
The investor's objective is to choose an admissible strategy maximizing the expected utility over the terminal wealth Z_T, i.e.,
max_(L_s, M_s)_0≤ s ≤ T∈𝒜_0(x, y)𝔼_0^x, y[U(Z_T)],
subject to (<ref>), where 𝔼_t^x, y denotes the conditional expectation given X_t- = x and Y_t- = y. Finally, U(·) is the utility function, which satisfies the following assumption throughout this paper.
The utility function U:[K, +∞) →ℝ is monotonically non-decreasing, right-continuous, and it satisfies
U(z)≤ C_1 + C_2 z^p,
for some constants C_1>0, C_2>0 and 0<p<1.
The following are some examples of non-concave utility functions satisfying this assumption.
Goal-Reaching Utility. <cit.> considers a fund manager whose objective is maximizing the probability that the portfolio value z beats some benchmark of z̅ in a given finite time horizon. Then the corresponding utility function is
U(z) = 1_z≥z̅,
where z̅ is the benchmark. This utility function is discontinuous at z=z̅ and hence non-concave.
The Aspiration Utility. <cit.> study the type of discontinuous utility functions
U(z) =
z^p if z< z̅,
c_1+c_2 z^p if z≥z̅.
Here, 0<p<1 indicates the risk aversion level, c_1>0,c_2 are constant such that U(z̅-)<U(z̅), and z̅ denotes the aspiration level. As a result, the utility function U has an upward jump at z̅, meaning that the investor achieves a boost in her utility once the wealth reaches z̅. For example, this can be due to a change in the investor's social status. More theoretical and empirical evidence can be found in the above two papers.
The S-shaped Utility of Prospect Theory. <cit.> consider the following S-shaped utility function:
U(z) =
(z-z_0)^p if z > z_0
-λ (z_0-z)^p if z ≤ z_0,
where z_0 is the wealth at time 0 to distinguish gains from losses, p ∈ (0, 1) since the investor is risk-averse over gains, and λ>1 because the pain from loss is higher than the pleasure from the same amount of gain.
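For reference, the three example utilities can be coded directly from the displays above; the parameter values below are purely illustrative (the paper only requires 0 < p < 1, c_1 > 0, and λ > 1), and the power utilities assume z ≥ 0.

```python
import numpy as np

def goal_reaching(z, z_bar=1.0):
    """Indicator utility of beating the benchmark z_bar."""
    return np.where(z >= z_bar, 1.0, 0.0)

def aspiration(z, z_bar=1.0, p=0.5, c1=0.5, c2=1.0):
    """Discontinuous aspiration utility with an upward jump at z_bar."""
    return np.where(z < z_bar, z**p, c1 + c2 * z**p)

def s_shaped(z, z0=1.0, p=0.5, lam=2.0):
    """S-shaped utility: concave over gains, loss-averse below z0."""
    gain = np.maximum(z - z0, 0.0) ** p
    loss = np.maximum(z0 - z, 0.0) ** p
    return np.where(z > z0, gain, -lam * loss)
```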
§ THEORETICAL ANALYSIS
In the following, we denote
z: = x+(1-θ_1)y^+ - (1+θ_2)y^-.
We define the value function by
V(t, x, y) = max_(L_s, M_s)_t≤ s ≤ T∈𝒜_t(x, y)𝔼_t^x, y[U(Z_T)]
for (x, y) ∈𝒮, 0≤ t ≤ T. Formally, in the interior of 𝒮, V(t, x, y) satisfies the following Hamilton-Jacobi-Bellman
(HJB) equation
ℒ V: = min{-V_t -1/2σ^2 y^2 V_yy - η y V_y , V_y - (1-θ_1)V_x, (1+θ_2) V_x - V_y } = 0.
On the boundary z = K, the stock is liquidated and therefore we have the following boundary condition
V(t, x, y) = U(K), when z= K.
§.§ Terminal Condition
The classical definition of viscosity solution (e.g. <cit.>) requires the continuity of value function in the whole region including the terminal boundary. Without transaction costs, the investor can take infinite leverage near the terminal time and the concavification principle holds. Therefore, the terminal utility can be replaced by its concave envelope, which is continuous. With portfolio bounds that put a hard constraint on the leverage, the concavification principle is proved to be invalid by <cit.>, but the intuition of taking the maximum allowed leverage around terminal time still holds.
By introducing transaction costs, the behavior of the value function and the strategy near terminal time become more intriguing. While the investor has the incentive to take the maximum leverage allowed as mentioned above, transaction costs virtually prohibit the investor from taking infinite leverage. Consequently, the concavification principle becomes no longer applicable in the presence of transaction costs. Intuitively, compared to the hard constraint on leverage imposed by the portfolio bound, transaction costs impose a “soft” constraint. Indeed, the following proposition characterizes the asymptotic behavior of the value function as the time approaches maturity T, which confirms that the value function can be discontinuous if it is both close to the terminal in time and close to the jump points of the utility function.
The value function V defined in (<ref>) satisfies
lim_(t, x, y) → ( T-, x̂, ŷ) V(t, x, y) - U(ẑ-) - 2 Φ(min{z-ẑ, 0}/ (|ẑ-x| σ√(T-t))) (U(ẑ) - U(ẑ-)) = 0,
where U(ẑ-) is the left limit of U at ẑ, U(K-) = U(K), Φ is the standard normal cumulative distribution function, and ẑ is defined by (<ref>) with (x,y,z) replaced by (x̂,ŷ,ẑ).
In the case |ẑ-x| = 0, we set
Φ(min{z-ẑ, 0}/ |ẑ-x| σ√(T-t) ) =
0 when z<ẑ,
1 when z≥ẑ.
The proof of Proposition <ref> will be relegated to <ref>. The proof is significantly different from <cit.> from the technical perspective. Indeed, they verify the discontinuous terminal condition by reducing the original problem to a one-dimensional problem. However, such kind of homotheticity does not apply here.
Instead, we directly estimate the contribution of transactions to the total value function by delicate mathematical analysis, and we show that when the time is close to maturity, this contribution is marginal, if not negative. The technical difficulty lies in the arbitrariness of the trading strategy, and we are able to build a uniform estimate over all trading strategies.
Intuitively, to make the wealth increase to the threshold ẑ, either the investor needs to hold a large (positive or negative) amount of stock, or the stock price needs to fluctuate sufficiently. But the value of holding a large amount of stock is eroded by the transaction costs, while the contribution of the stock price fluctuation becomes smaller and smaller as time approaches maturity. In this way, we can show that the value of transacting is marginal.
In the following, we provide the intuition behind the terminal condition (<ref>).
In the special case U(ẑ-) = U(ẑ), i.e., U is continuous around ẑ, (<ref>) degenerates to
lim_(t, x, y) → ( T-, x̂, ŷ) V(t, x, y) = U(ẑ),
which implies a continuous terminal condition consistent with the classical definition. To elaborate on the more interesting case when U(ẑ-)< U(ẑ), we consider the goal-reaching problem by letting U(z) = 1_z≥ẑ with ẑ = 1. Consequently, (<ref>) degenerates into
lim_(t, x, y) → ( T-, x̂, ŷ) V(t, x, y) -2 Φ(min{z-1, 0}/ |1-x| σ√(T-t)) = 0.
The equation (<ref>) indicates the failure of the concavification principle, since this principle would imply the boundary condition
lim_(t, x, y) → ( T-, x̂, ŷ) V(t, x, y) = ẑ.
We discuss (<ref>) in two cases.
When z is always higher than 1 in the limiting process: this equation becomes
lim_(t, x, y) → ( T-, x̂, ŷ) V(t, x, y) - 1 = 0.
The interpretation is that, when wealth is higher than the target, the investor can always liquidate the entire stock position and reach the goal.
When z is always lower than 1 in the limiting process: this equation becomes
lim_(t, x, y) → ( T-, x̂, ŷ) V(t, x, y) -2 Φ(z-1/ |1-x| σ√(T-t)) = 0.
In the limiting process, the second term has a singularity around (T, x̂, ŷ), which cancels out the singularity of V around this point.
The second term is nothing but the leading term around (T, x̂, ŷ) of the value function under the following strategy: the investor makes no transaction before reaching the goal and liquidates the entire stock position immediately after reaching the goal.
Because when T-t is short,
the stock account wealth dynamics are approximately
d Ỹ_s = σ y d ℬ_s, t≤ s ≤ T, Ỹ_t = Y_t= y.
Therefore, (Ỹ_s-y)/(σ y) is a standard Brownian motion. Taking x<1 as an example, we have
ℙ( Z_T≥1 ) = ℙ( max_t≤ s ≤ T (1-θ_1)Y_s ≥ 1-x)
≈ℙ( max_t≤ s ≤ T (1-θ_1)Ỹ_s ≥ 1-x )
= ℙ( max_t≤ s ≤ T (Ỹ_s-y)/(σ y) ≥ (1-z)/( (1-θ_1) σ y) ).
Since x+(1-θ_1)y → 1,
ℙ( max_t≤ s ≤ T (Ỹ_s-y)/(σ y) ≥ (1-z)/( (1-θ_1) σ y) )≈ℙ( max_t≤ s ≤ T (Ỹ_s-y)/(σ y) ≥ (1-z)/( (1-x) σ) ) = 2 Φ( (z-1)/( (1-x) σ√(T-t)) ).
The case x>1 is analogous.
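The reflection-principle formula above is also easy to check by simulation. A minimal sketch (ours; parameter values are illustrative, and time discretization slightly underestimates the running maximum):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
tau, b = 0.01, 0.05                      # tau = T - t; b = (1 - z)/((1 - x) sigma) > 0, illustrative
n_paths, n_steps = 20_000, 500

dB = rng.normal(0.0, np.sqrt(tau / n_steps), size=(n_paths, n_steps))
path_max = np.cumsum(dB, axis=1).max(axis=1)     # running maximum of each Brownian path

print(np.mean(path_max >= b))                    # simulated P(max_{s <= tau} B_s >= b)
print(2.0 * norm.cdf(-b / np.sqrt(tau)))         # reflection principle: 2 Phi(-b / sqrt(tau))
```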
It is worth clarifying that the condition (<ref>) does not imply that it is optimal not to transact before reaching the goal when T-t is sufficiently small. On the contrary, numerical results in Section <ref> illustrate that it can still be optimal to buy or sell in this case, which may even lead to a very high leverage.
The high leverage ratio does not appear in the terminal condition (<ref>) because, when the time to maturity T-t is small, the contribution of transactions to the utility is marginal. This will be confirmed again in Section <ref>.
§.§ Viscosity Solution and Comparison Principle
In this subsection, we show that the value function V(t, x, y) is the unique viscosity solution to the PDE problem (<ref>) with boundary condition (<ref>) and terminal condition (<ref>).
In the following, we simply refer to the PDE together with the boundary and terminal conditions as the HJB equation (<ref>) – (<ref>).
We first introduce our notion of viscosity solution. Since the classical notion of viscosity solution requires continuity of the value function, while our terminal condition (<ref>) implies discontinuity, we present our new definition as follows.
Define the lower semicontinuous envelope and upper semicontinuous envelope of the value function V as
V_*(t, x, y)=lim inf_(t_1,x_1, y_1)→(t, x, y)V(t_1,x_1, y_1), V^*(t, x, y)=lim sup_(t_1,x_1, y_1)→(t, x, y)V(t_1,x_1, y_1).
(i).
We say that V is a viscosity subsolution of the HJB equation (<ref>) – (<ref>) if it satisfies the following conditions:
a) For all smooth ψ such that V^* ≤ψ and
V^*(t̂, x̂, ŷ)=ψ(t̂, x̂, ŷ) for some (t̂, x̂, ŷ)∈ [0, T)×𝒮,
ℒψ(t̂, x̂, ŷ) ≤ 0.
b) For all 0≤t̂< T,
lim sup_(t, x, y) → (t̂, x̂, ŷ)V^*(t, x, y) - U(K) ≤ 0, if ẑ = K.
c) For all (x̂, ŷ)∈𝒮 with ẑ≥ K,
lim sup_(t, x, y) → ( T-, x̂, ŷ) V^*(t, x, y) - U(ẑ-)-2 Φ(min{z-ẑ, 0}/ |ẑ-x| σ√(T-t)) (U(ẑ) - U(ẑ-))≤ 0.
(ii).
We say that V is a viscosity supersolution of the HJB equation (<ref>) – (<ref>)
if it satisfies the following conditions:
a) For all smooth ψ such that V_* ≥ψ and
V_*(t̂, x̂, ŷ)=ψ(t̂, x̂, ŷ) for some (t̂, x̂, ŷ)∈ [0, T)×𝒮,
ℒψ(t̂, x̂, ŷ) ≥ 0.
b) For all 0≤t̂< T,
lim inf_(t, x, y) → (t̂, x̂, ŷ)V_*(t, x, y) - U(K) ≥ 0, if ẑ = K.
c) For all (x̂, ŷ)∈𝒮 with ẑ≥ K,
lim inf_(t, x, y) → ( T-, x̂, ŷ) V_*(t, x, y) - U(ẑ-)-2 Φ(min{z-ẑ, 0}/ |ẑ-x| σ√(T-t)) (U(ẑ) - U(ẑ-))≥ 0.
(iii).
We say that V is a viscosity solution if it is both a viscosity supersolution and subsolution.
Define the set
𝒞: = { v:[0, T]×𝒮→ℝ| lim sup_x+y → +∞sup_0≤ t ≤ Tv(t, x, y)/(x+y)^p < +∞}.
With the notion of viscosity subsolution (supersolution) in Definition <ref>, we have the following comparison principle.
Assume that u and v are a viscosity subsolution and a supersolution to HJB equation
(<ref>) – (<ref>), respectively. If u and v are both in 𝒞, then u≤ v in [0, T]×𝒮.
The proof of this theorem will be given in <ref>. The comparison principle is essential to guarantee that our definition is reasonable. In the proof of this comparison principle, we pay special attention to the terminal condition, which differs from the classical proof.
The following theorem summarizes our result.
(i) There is at most one viscosity solution to (<ref>) – (<ref>) in 𝒞.
(ii) The value function V(t, x, y) is a viscosity solution to (<ref>) – (<ref>) and V∈𝒞.
(iii) V(t, x, y) is the unique viscosity solution to (<ref>) – (<ref>) in 𝒞.
The proof of this theorem will be given in <ref>.
Theorem <ref> (i) follows from the comparison principle, Theorem <ref>: any viscosity solution is both a subsolution and a supersolution, so any two viscosity solutions must coincide. Also, Theorem <ref> (iii) is a direct corollary of (i) and (ii). The proof of Theorem <ref> (ii) consists of two parts. The first part is to verify that V satisfies Definition <ref>, i.e., that it is both a subsolution and a supersolution.
The second part is to check that the value function satisfies V(t, x, y) ∈𝒞. This can be proved using Assumption <ref>, which implies
U(K) ≤ V(t, x, y) ≤ C_1+ C_2 V_CRRA(t, x+y),
where V_CRRA(t, z) is the value function of the Merton problem with terminal utility U(z) = z^p and initial wealth Z_t = z.
§ NUMERICAL EXAMPLE
Based on the above theoretical framework, in this section we provide numerical examples that illustrate interesting financial insights. In the following, we consider a stock with positive risk premium η=0.04 and volatility σ=0.3. Recall that x and y denote the dollar value in cash and stock, respectively, and the wealth z is the liquidation value of the portfolio defined in (<ref>).
§.§ Goal-Reaching Problem with Short-selling Prohibited
First, we consider the goal-reaching problem with the constraint that short-selling the risky asset is prohibited, with z̅=1. We verify the terminal condition (<ref>) in Figure <ref>. To illustrate, we plot the left-hand-side difference in (<ref>) against a range of the wealth z while fixing the dollar investment in stock at y = 20, for various times to maturity T-t. Since the goal-reaching utility jumps at z = 1, the difference jumps from 1 to 0 at z = 1. This figure confirms that the difference converges to 0 in a pointwise manner. However, such convergence is not uniform, as the maximum difference is always 1.
Next, we study the action regions illustrated in Figure <ref>. The left and right figures plot the same action regions in terms of the proportion of wealth in stock (z-x)/z and the dollar investment in stock y, respectively. When y is large or small, it is optimal to sell or buy, as indicated by the sell and buy regions, respectively. When y is at a moderate level, it is optimal to avoid transactions and hold on to the current position, as indicated by the no-trading region.
Let us first focus on the sell region. We see that the lower boundary of the sell region has a strictly positive limit as z increases to 1, which is in stark contrast to <cit.> and <cit.>, where the limit is 0. In these two papers, the positive risk premium leads to the incentive to stay in the market for a longer time; therefore, the leverage is significantly lowered as z increases, to reduce bankruptcy risk. In our case, the bankruptcy risk is negligible compared with the future purchase cost incurred. Therefore, the investor would rather keep a strictly positive stock position and delay the liquidation to maturity. The intuition for the strictly positive sell boundary around z = 0 is similar.
As for the buy region, from the left panel of Figure <ref>, we see that when close to the target or the liquidation boundary, the investor will buy fewer stocks due to the transaction cost. But around z=0 the investor will be more risk-seeking, since the wealth is farther from the target, and thus the buy boundary is skewed. When the wealth is far from the target and the liquidation boundary, the upper boundary of the buy region is higher than the target position without transaction costs. This is to compensate for the future lower stock position around the target or the liquidation boundary.
§.§ Goal-Reaching Problem without Short-selling Constraint
We study the goal-reaching problem without short-selling constraint. Figure <ref> verifies the terminal condition at y=20 and -20 in a similar way to the no-shorting case, which exhibits a similar pattern to Figure <ref>.
Figure <ref> illustrates the action regions. While the regions in y>0 are qualitatively similar to the case with the short-selling constraint, we have an interesting observation regarding y<0, namely, there is no buy region when y is very negative. This means that even if the investor is already deep in the short region, she never buys back to reduce her short leverage. This is significantly different from the strategy in the y>0 region, where the investor will reduce the long leverage if it is too high. This can be explained as follows. When y>0, both the high variance from the leverage and the positive risk premium of the risky asset contribute towards achieving the goal. Therefore, when y>0 and the leverage is too high, the investor has the incentive to reduce the leverage in order to stay in the market and take advantage of the risk premium. In contrast, starting from y<0, the positive risk premium works against the investor and gradually drags down the wealth level. However, switching to a long position is also prohibitively costly due to the transaction cost. As a result, the investor can only resort to the high variance created by the large short position to achieve the goal and therefore has no incentive to reduce the risk exposure.
For the goal-reaching problem, <cit.> also document investors' risk-seeking behavior. In their case, the trigger for such behavior is not the transaction costs, but rather the imposed two-sided portfolio bounds, which is the limit on the level of permitted leverage. In contrast, transaction costs do not put a direct limit on leverage; rather, the intrinsic limit that prevents taking infinite leverage is the large potential transaction cost that needs to be paid upon liquidation. Consequently, Figure <ref> indicates that it is possible that the investor takes and holds on to much higher leverage compared to <cit.>; for example, starting from the buy region, the investor will buy to the upper boundary of this region and keep the position if it subsequently moves into the no-trading region. Furthermore, unlike their model, our model does not produce a sudden switch between large long and short positions as the wealth changes, since this would trigger a very large amount of transaction cost.
§.§ Aspiration Utility
As a third example, we discuss the strategy under the aspiration utility (<ref>) with p=0.5, c_1=0, c_2=1.5, z̅=1, and without the short-selling constraint. Again, we verify the terminal condition (<ref>) in Figure <ref>, at y=5 and -5. Similar to the previous cases, both figures again illustrate that (<ref>) holds in a pointwise but not uniform manner. The convergence for z<1 appears much slower than for z>1, especially when z is just below 1. As will be illustrated below, this can be attributed to the risk-seeking behavior for z<1, as opposed to the risk-averse behavior for z>1.
Figure <ref> plots the optimal strategy under the aspiration utility. The top left figure shows that when it is very close to maturity and the wealth z is below 1 but away from 0, the strategy is locally similar to the goal-reaching problem (compare regions I – IV with the right figure of Figure <ref>). Indeed, regions I – IV show that it is optimal to achieve and maintain a high leverage by either longing or shorting, based on the initial position. However, unlike the goal-reaching problem, region IV is now bounded from below, which suggests that the investor should not allow arbitrarily large short positions. Intuitively, the investor will still gain utility from the terminal wealth even if z=1 cannot be eventually reached, and therefore the investor should not take arbitrarily large leverage and risk. On the other hand, when z is very close to 0, the strategy is to keep a small leverage to avoid bankruptcy; when z is sufficiently large, the optimal strategy resembles the classic Merton strategy with transaction costs (e.g. <cit.>), that is, performing minimal trading to keep the position sufficiently close to the Merton line.
As the time to maturity increases, two effects occur for y>0 and y<0. In the case of y>0 (long position), it is optimal to reduce the leverage to avoid extreme volatility, as indicated by the shrinking of region I and region II in y>0 (note the scales of the vertical axis). In the case of y<0 (short position), a transition is initiated by the enlargement of region II: first, it expands downwards and gradually replaces region III, and then it expands further downwards and starts piercing through region IV. This means that when z is not close to 0.2 or 1 and maturity increases, the positive risk premium plays a more and more important role, and it is optimal for the investor to switch to a long position.
§.§ The S-Shaped Utility
Finally, we present the results for the S-shaped utility (<ref>), with parameters λ=2.25,p=0.5,z_0=1, and without the short-selling constraint. Due to the similarity with the above results, we only present the verification of the asymptotic terminal utility in the left figure of Figure <ref> and the action region at T-t=0.01 in the right figure. Due to the continuity of the S-shaped utility, the difference in the left panel converges uniformly. For the right panel, we see that the shape of the action region resembles that of the aspiration utility, and it can still be optimal to take large negative leverage despite the positive risk premium. The only major difference is that when z is close to 0, the investor does not actively reduce the leverage to avoid bankruptcy as for the aspiration utility. The reason is that, unlike the aspiration utility where the investor is risk-averse for z<z_0, here the investor is risk-seeking, and therefore she would gamble for a smaller loss rather than trying to reduce leverage to avoid bankruptcy.
§ CONCLUSION
In this paper, we study the non-concave utility maximization problem under proportional transaction costs. Since the concavification principle is no longer applicable, we derive a rigorous theoretical characterization of the value function in terms of discontinuous viscosity solutions. In particular, we establish the asymptotic behavior of the value function as time approaches maturity.
As numerical illustrations, we study the optimal strategies for the goal-reaching problem with and without the short-selling constraint, as well as the aspiration utility and S-shaped utility maximization problems. We find that, when facing transaction costs, an investor with a non-concave utility can take a very high, but finite, leverage when the remaining time to beat the target performance is short, and the investor may also hold on to a large short position in the risky asset despite its positive risk premium.
From a theoretical perspective, this paper is among the strand of work on the discontinuous viscosity solutions arising from some mathematical finance problems. It will be of interest to further build a unified theoretical framework incorporating general frictions, such as capital gains tax and fixed costs. The joint impact of frictions and risk-seeking incentive predicted by our numerical results can also inspire future empirical work for real-world analyses.
§ APPENDICES
§.§ Proof of Proposition <ref>
We decompose the proof into three steps. We first show that the proposition holds in the special case of the goal-reaching problem, then prove a result bridging the goal-reaching problem to the general case, and finally prove the general case.
§.§.§ The Special Case of Goal-Reaching Problem
For the goal-reaching problem with K = 0, we have
lim_(t, x, y) → (T^-, x̂, ŷ) V(t, x, y) - 2Φ( min{z-1, 0}/ |1-x| σ√(T-t)) = 0, when ẑ≤ 1 ,
where z := x+(1-θ_1)y^+- (1+θ_2) y^- and ẑ := x̂+(1-θ_1)ŷ^+- (1+θ_2) ŷ^-.
The result is straightforward when z ≥ 1. Therefore, we focus on z<1 in what follows.
We only consider the case y>0; the case y≤ 0 can be proved similarly.
1. We first show that for this terminal condition, we only need to consider strategies without buying or shorting stock in [t, T].
For any strategy π = (L_s, M_s), s≥ t, let us consider another strategy π' = (0, M_s), s≥ t, that is, never buy any stock. Since the investor has to sell all stock before time T, which is subject to transaction costs, the only possibility for π to be superior to π' is that the gains from the increase in the stock price are high enough to cover the transaction cost. Mathematically,
ℙ (Z_T^π≥ Z_T^π') ≤ℙ(max_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1≥1/1-θ_1) → 0, as t→ T,
where Z^π_T and Z^π'_T are the terminal wealth levels with initial condition (X^π_t, Y^π_t)= (X^π'_t, Y^π'_t) = (x, y) under strategies π and π', respectively. Then
ℙ(Z_T^π≥ 1) - ℙ(Z_T^π'≥ 1) ≤ℙ (Z_T^π≥ Z_T^π') → 0 , as t→ T.
Similarly, if the investor short-sells stock, the only possibility for the resulting gain to be superior to π” = (L_s1_t≤ s≤τ_0, M_s1_t≤ s≤τ_0) is that it covers the transaction cost of rebalancing the stock position to 0 at maturity, where τ_0: = inf{s≥ t | Y^π”_s = 0}. Mathematically,
ℙ (Z_T^π≥ Z_T^π”) ≤ℙ(min_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1≤1/1+θ_2) → 0, as t→ T.
Consequently,
ℙ(Z_T^π≥ 1) - ℙ(Z_T^π”≥ 1) ≤ℙ (Z_T^π≥ Z_T^π”) → 0 , as t→ T.
2. Therefore, in the following, we only consider strategies without purchases or short-sales in [t, T]. In this case, since the entire stock position should be liquidated no later than maturity, we must have
ℙ(Z_T ≥ 1) ≤ℙ((1-θ_1)y (max_t≤ s ≤ TS_s/S_t -1) ≥ 1-z )
= ℙ(max_t≤ s ≤ TS_s/S_t≥1-x/(1-θ_1)y).
Denote a = |η -1/2σ^2|. For any constant C>1, we have
ℙ(max_t≤ s ≤ TS_s/S_t≥ C) = ℙ(max_t≤ s ≤ TlnS_s/S_t≥ln C)
≤ ℙ(a(T-t) + σmax_t≤ s ≤ T (ℬ_s - ℬ_t) ≥ln C)
≤ ℙ(max_t≤ s ≤ T (ℬ_s - ℬ_t) ≥ln C-a(T-t)/σ)
= 2 Φ(min{-ln C+a(T-t), 0}/σ√(T-t)).
Therefore, according to the inequality ln w≥ (w-1)/w, ∀ w > 0,
(<ref>) ≤ 2 Φ(min{-ln (1-x/(1-θ_1)y) +a(T-t), 0}/σ√(T-t))
≤ 2 Φ(min{-1-z/1-x +a(T-t), 0}/σ√(T-t))
= 2 Φ(min{z-1+ a(1-x)(T-t), 0}/σ(1-x)√(T-t)).
As a result, on the one hand,
lim sup_(t, x, y) → ( T^-, x̂, ŷ)V(t, x, y) -2 Φ(z-1/ (1-x) σ√(T-t))
≤ lim sup_(t, x, y) → ( T^-, x̂, ŷ)ℙ(max_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1≥1/1-θ_1) + ℙ(min_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1≤1/1+θ_2)
+2 Φ(min{ z-1+ a(1-x)(T-t), 0}/σ(1-x)√(T-t)) -2 Φ(z-1/ (1-x) σ√(T-t))
= 0.
On the other hand, consider the strategy π' : = (L_s, M_s) = (0,0), t≤ s≤ T. Since ln w ≤ w-1, ∀ w >0, we have
V(t, x, y)≥ ℙ(Z^π'_T ≥ 1)
= ℙ(max_t≤ s ≤ TS_s/S_t≥1-x/(1-θ_1)y)
≥ 2 Φ(min{-ln (1-x/(1-θ_1)y) - a(T-t), 0}/σ√(T-t))
≥ 2 Φ(min{-1-z/(1-θ_1)y - a(T-t), 0}/σ√(T-t))
= 2 Φ(min{z-1- a(1-θ_1)y(T-t), 0}/σ(1-θ_1)y√(T-t)) .
Therefore,
lim inf_(t, x, y) → ( T^-, x̂, ŷ) V(t, x, y) - 2 Φ (z-1/ (1-x) σ√(T-t) )
≥ lim inf_(t, x, y) → ( T^-, x̂, ŷ) 2 Φ(min{z-1- a(1-θ_1)y(T-t), 0}/σ(1-θ_1)y√(T-t)) - 2 Φ(z-1/ (1-x) σ√(T-t))
≥ lim inf_(t, x, y) → ( T^-, x̂, ŷ) 2 Φ(min{z-1- a(1-θ_1)y(T-t), 0}/σ(z-x)√(T-t)) -2 Φ(min{z-1- a(1-θ_1)y(T-t), 0}/σ(1-x)√(T-t))
+ lim inf_(t, x, y) → ( T^-, x̂, ŷ) 2 Φ(min{z-1- a(1-θ_1)y(T-t), 0}/σ(1-x)√(T-t)) - 2 Φ(z-1/ (1-x) σ√(T-t))
= 0,
because lim_ϵ→ 0sup_w∈ℝ|Φ((1+ϵ) w) -Φ(w)| = 0 and lim_ϵ→ 0sup_w∈ℝ|Φ(w+ϵ) -Φ(w)| = 0. Consequently, we have proved the proposition for the case y> 0.
§.§.§ Bridging the Goal-Reaching Problem with the General Case
Before we prove the terminal condition (<ref>) for general utility functions, we also need the following proposition.
For any constants 0<q<1, α>0, C>z, n ∈ℕ^+, there exists δ_n>0, such that for any (x, y)∈𝒮,
T-t ≤min{1, ln 2/2|η-1/2σ^2|, 1/(16nσ)^2, (C-z/4 |η-1/2σ^2| (1+θ_2) (|y|+ C^α) )^4/3},
and for any admissible strategy (L_s, M_s)_t≤ s ≤ T∈𝒜_t(x, y),
ℙ(Z_T ≥ C| (X_t, Y_t) = (x, y)) ≤ 2 e^q Λ_q (T-t)z^q/(min{θ_1, θ_2})^q (T-t)^q/4 C^-α q
+ 8/√(2π)δ_n[8σ (1+θ_2) (C^α+|y| (T-t)^1/4)/C-z]^n (T-t)^n/4,
where Λ_q: = sup_u ∈ℝ{η u -1-q/2σ^2 u^2 } < +∞.
For any strategy Π = (L_s, M_s), t≤ s≤ T, consider the process with no transaction cost under strategy Π. Denote the corresponding cash, stock, and wealth processes by X^(0)_s, Y^(0)_s, Z^(0)_s. Then
L_T- L_t ≤1/θ_1max_t≤ s≤ T Z^(0)_s,
since the stock position is subject to transaction cost θ_1 upon liquidation, and the cumulative transaction cost cannot exceed max_t≤ s≤ T Z^(0)_s. Similarly, we have
M_T- M_t ≤1/θ_2max_t≤ s≤ T Z^(0)_s.
Given (x, y, z) at time t, to make Z_T ≥ C, we need either that the long or short leverage is sufficiently high, or that the stock price fluctuates sufficiently in the remaining time. Therefore,
for any constant B>0, we have
ℙ(Z_T ≥ C| (X_t, Y_t) = (x, y))
≤ ℙ(L_T- L_t≥ B) + ℙ((1-θ_1)(B+|y|)(max_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1-1) ≥ C-z)
+ ℙ(M_T- M_t ≥ B) + ℙ((1+θ_2)(B+|y|)(1- min_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1) ≥ C-z)
≤ ℙ(max_t≤ s≤ T Z^(0)_s ≥θ_1 B) + ℙ((1-θ_1)(B+|y|)(max_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1-1) ≥ C-z)
+ ℙ(max_t≤ s≤ T Z^(0)_s ≥θ_2 B) + ℙ((1+θ_2)(B+|y|)(1- min_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1) ≥ C-z).
We need the following lemmas for the proof of the proposition.
For any constant C>z>0 and 0<q<1, we have
max_Πℙ(max_t≤ s≤ TZ^(0)_s ≥ C|z_t = z) = max_Πℙ(Z^(0)_T ≥ C|z_t = z) ≤ e^q Λ_q (T-t)z^q/C^q.
Since U(w) : = w^q/C^q≥1_w≥ C, we have
max_Πℙ(Z^(0)_T≥ C|z_t = z) ≤max_Π𝔼[U(Z^(0)_T)| z_t = z] = e^q Λ_q (T-t)z^q/C^q
from the closed-form solution for CRRA utilities.
For any constant ϵ>0,
max{ℙ(max_t≤ s_1≤ s_2 ≤ TlnS_s_2/S_s_1≥ϵ), ℙ(min_t≤ s_1≤ s_2 ≤ TlnS_s_2/S_s_1≤ -ϵ) }≤ 4 ·4σ√(T-t)/√(2π)ϵ· e^-1/2ϵ^2/16σ^2 (T-t),
when |η -1/2σ^2| (T-t)≤ϵ/2.
When |η- 1/2σ^2|(T-t) ≤ϵ/2,
ℙ(max_t≤ s_1≤ s_2 ≤ TlnS_s_2/S_s_1≥ϵ)
= ℙ(max_t≤ s_1≤ s_2 ≤ T (η- 1/2σ^2)(s_2-s_1) + σ (ℬ_s_2 -ℬ_s_1)≥ϵ)
≤ ℙ(max_t≤ s_1≤ s_2 ≤ Tσ (ℬ_s_2 -ℬ_s_1)≥ϵ - (η- 1/2σ^2)(T-t) )
≤ ℙ(max_0≤ s ≤ T-tℬ_s≥1/2σ[ϵ - (η- 1/2σ^2)(T-t)] ) + ℙ(min_0≤ s ≤ T-tℬ_s≤ -1/2σ[ϵ - (η- 1/2σ^2)(T-t)] )
≤ 2Φ(-ϵ + (η- 1/2σ^2)(T-t)/2σ√(T-t))+ 2Φ(-ϵ + (η- 1/2σ^2)(T-t)/2σ√(T-t))
= 4Φ(-ϵ + (η- 1/2σ^2)(T-t)/2σ√(T-t))
≤ 4Φ(-ϵ/4σ√(T-t)).
Noticing when w<0,
Φ(w) = 1/√(2π)∫^∞_|w| e^-1/2 t^2 dt
≤1/√(2π)∫^∞_|w|t/|w|e^-1/2 t^2 dt
= 1/√(2π) |w| e^-1/2 w^2,
we obtain our estimate. Similarly, we can prove
ℙ(min_t≤ s_1≤ s_2 ≤ TlnS_s_2/S_s_1≤ -ϵ)≤ 4 ·4σ√(T-t)/√(2π)ϵ· e^-1/2ϵ^2/16σ^2 (T-t).
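The Gaussian tail bound Φ(w) ≤ e^-w^2/2/(√(2π)|w|) for w<0 used above (a Mills-ratio bound) can be sanity-checked numerically; a quick sketch of ours:

```python
import numpy as np
from scipy.stats import norm

# Quick numerical check (ours) of the tail bound Phi(w) <= exp(-w^2/2) / (sqrt(2 pi) |w|), w < 0.
w = -np.linspace(0.05, 8.0, 400)
bound = np.exp(-0.5 * w**2) / (np.sqrt(2.0 * np.pi) * np.abs(w))
assert np.all(norm.cdf(w) <= bound)          # the bound holds on the whole grid
print(np.max(norm.cdf(w) / bound))           # ratio < 1, approaching 1 as |w| grows
```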
For any integer n>0, there is a constant δ_n>0, such that
(1+A)^w ≥ 1+ δ_n (wA)^n, ∀ w≥ 2n, A ≥ 0.
Denote by ⌊ w ⌋ the largest integer no larger than w; we have
(1+A)^w ≥ (1+A)^⌊ w ⌋≥ 1+ C_⌊ w ⌋^n A^n.
Noticing that
inf_w≥ 2nC_⌊ w ⌋^n/w^n = : δ_n >0,
we finish the proof.
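The positivity of δ_n can also be seen numerically; for instance, for n = 3 the infimum is attained just below w = 7 (a rough sketch of ours, with C_⌊ w ⌋^n the binomial coefficient):

```python
from math import comb, floor

# Rough numerical check (ours): delta_n = inf_{w >= 2n} C(floor(w), n) / w^n, here for n = 3.
n = 3
ws = [2 * n + 0.001 * k for k in range(500_000)]        # grid for w in [6, 506)
delta = min(comb(floor(w), n) / w**n for w in ws)
print(delta)   # about 0.058 > 0; the ratio tends to 1/n! as w grows
```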
Now we turn back to the proof of Proposition <ref>. We first focus on (<ref>).
According to Lemma <ref>, we have
ℙ(max_t≤ s≤ T Z^(0)_s ≥θ_1 B) ≤ e^q Λ_q (T-t)z^q/(θ_1)^q B^-q.
Denote ϵ = ln( C-z/(1-θ_1) (B+|y|)+1 ). According to Lemma <ref>, when
T-t≤ϵ/2|η-1/2σ^2|,
we have
ℙ((1-θ_1)(B+|y|)(max_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1-1) ≥ C-z) ≤ 4 ·4σ√(T-t)/√(2π)ϵ· e^-1/2ϵ^2/16σ^2 (T-t).
In summary, we have
ℙ(max_t≤ s≤ T Z^(0)_s ≥θ_1 B) + ℙ((1-θ_1)(B+|y|)(max_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1-1) ≥ C-z)
≤ e^q Λ_q (T-t)z^q/θ_1^q B^-q
+ 4 ·4σ√(T-t)/√(2π)ϵ· e^-1/2ϵ^2/16σ^2 (T-t).
By choosing B = C^α (T-t)^-1/4,
the condition (<ref>) is satisfied. Indeed, if
ϵ > ln 2,
then the assumed bound T-t ≤ln 2/2|η-1/2σ^2| already gives (<ref>); if ϵ≤ln 2, then by the definition of ϵ,
ϵ≥1/2·C-z/(1-θ_1) (B+|y|) = 1/2·C-z/(1-θ_1) (C^α (T-t)^-1/4+|y|)≥1/2·C-z/(1-θ_1) (C^α +|y|)· (T-t)^1/4.
Thus (<ref>) guarantees (<ref>).
Under this selection of B, we also have
(<ref>)
= e^q Λ_q (T-t)z^q/(θ_1)^q (T-t)^q/4 C^-α q≤ e^q Λ_q (T-t)z^q/(min{θ_1,θ_2})^q (T-t)^q/4 C^-α q.
Denote ξ = ϵ/4σ√(T-t)>0. We have
(<ref>) = 4/√(2π)ξ e^-1/2ξ^2
= 4/√(2π) e^1/2ξ/e^ξ e^-1/2(ξ +1)^2≤4/√(2π) e^1/2 e^-1/2(ξ +1)^2≤4/√(2π) e^1/2 e^-1/2(ξ +1)
= 4/√(2π) e^-1/2ξ.
Noticing ξ = ϵ/4σ√(T-t) = ln( C-z/(1-θ_1) (B+|y|)+1 )/4σ√(T-t),
4/√(2π) e^-1/2ξ = 4/√(2π)( C-z/(1-θ_1) (B+|y|)+1 )^- 1/8σ√(T-t)
= 4/√(2π)( C-z/(1-θ_1) (C^α (T- t)^-1/4+|y|)+1 )^- 1/8σ√(T-t).
For any integer n>0, according to Lemma <ref>, when T-t ≤ 1/(16nσ)^2
4/√(2π)/( C-z/(1-θ_1) (C^α (T-t)^-1/4+|y|)+1 )^1/8σ√(T-t)
≤ 4/√(2π)/1+ δ_n [C-z/(1-θ_1) (C^α (T- t)^-1/4+|y|)(8σ√(T-t)) ]^n
≤ 4/√(2π)δ_n[(1-θ_1) (C^α (T-t)^-1/4+|y|)(8σ√(T-t))/C-z]^n
= 4/√(2π)δ_n[8σ (1-θ_1) (C^α+|y| (T-t)^1/4)/C-z]^n (T-t)^n/4
≤ 4/√(2π)δ_n[8σ (1+θ_2) (C^α+|y| (T-t)^1/4)/C-z]^n (T-t)^n/4.
Similarly, we can handle (<ref>). We have
ℙ(max_t≤ s≤ T Z^(0)_s ≥θ_2 B) ≤ e^q Λ_q (T-t)z^q/θ_2^q B^-q,
and by choosing B = C^α (T-t)^-1/4,
ℙ((1+θ_2)(B+|y|)(1- min_t≤ s_1≤ s_2 ≤ TS_s_2/S_s_1) ≥ C-z)
≤ 4/√(2π)4σ√(T-t)/ϵ e^-1/2ϵ^2/16σ^2 (T-t)
≤ 4/√(2π)( max{1-C-z/(1+θ_2) (B+|y|), 0})^ 1/8σ√(T-t)
≤ 4/√(2π)1/(1+ C-z/(1+θ_2) (B+|y|))^ 1/8σ√(T-t)
≤ 4/√(2π)δ_n[8σ (1+θ_2) (C^α+|y| (T-t)^1/4)/C-z]^n (T-t)^n/4
where ϵ = - ln(max{1- C-z/(1+θ_2) (B+|y|), 0 }), with the convention that the bound equals 0 when ϵ = +∞. Consequently,
(<ref>)≤ 2 e^q Λ_q (T-t)z^q/(min{θ_1, θ_2})^q (T-t)^q/4 C^-α q + 8/√(2π)δ_n[8σ (1+θ_2) (C^α+|y| (T-t)^1/4)/C-z]^n (T-t)^n/4.
This finishes the proof.
§.§.§ Proof of Proposition <ref>
We prove the equality of (<ref>) by showing that inequalities hold in both directions.
We first show the left side of (<ref>) is no greater than 0.
For any constant ϵ>0, there is a 0< δ <ẑ, s.t. U(w)≤ U(ẑ)+ϵ, when w≤ẑ +δ, and U(w) ≥ U(ẑ-)-ϵ when w≥ẑ-δ. We have
𝔼[U(Z_T)1_Z_T≥ẑ+ δ]
= ∫_ẑ+δ^+∞ U(w) d ℙ(Z_T≤ w)
≤ ∫_ẑ+δ^+∞ C_1+C_2 w^p d ℙ(Z_T≤ w)
= C_1ℙ(Z_T≥ẑ+δ) + C_2 (ẑ+δ)^p ℙ(Z_T≥ẑ+δ) +C_2 ∫_ẑ+δ^+∞ p w^p-1ℙ (Z_T≥ w) dw.
We let p<q<1, p/q<α<1, and n∈ℕ^+∩ (1/(1-α), +∞). Then, according to Proposition <ref>,
lim sup_(t, x, y) → ( T^-, x̂, ŷ)∫_ẑ+δ^+∞ p w^p-1ℙ (Z_T≥ w) dw
≤ lim sup_(t, x, y) → ( T^-, x̂, ŷ)[∫_ẑ+δ^+∞ p w^p-1 2 e^q Λ_q (T-t)z^q/(min{θ_1, θ_2})^q (T-t)^q/4 w^-α q dw
+ ∫_ẑ+δ^+∞ p w^p-1 2 e^q Λ_q (T-t)8/√(2π)δ_n(8σ (1+θ_2) (w^α+|y| (T-t)^1/4)/w-z)^n (T-t)^n/4 dw]
≤ lim sup_(t, x, y) → ( T^-, x̂, ŷ)∫_ẑ+δ^+∞ p 2 e^q Λ_q (T-t)z^q/(min{θ_1, θ_2})^q (T-t)^q/4 w^-α q+p-1 dw
+ lim sup_(t, x, y) → ( T^-, x̂, ŷ)∫_ẑ+δ^+∞ p w^p-1 2 e^q Λ_q (T-t)8/√(2π)δ_n(8σ (1+θ_2) (w^α+|y| (T-t)^1/4)/w-z)^n (T-t)^n/4dw.
Since α q >p and p-1+(α-1) n ≤ p-1-1 <-1, when T-t→0,
lim sup_(t, x, y) → ( T^-, x̂, ŷ)∫_ẑ+δ^+∞ p w^p-1ℙ (Z_T≥ w)dw = 0.
Analogously,
lim sup_(t, x, y) → ( T^-, x̂, ŷ)ℙ (Z_T≥ẑ+δ) = 0.
Consequently, we have from (<ref>),
lim sup_(t, x, y) → ( T^-, x̂, ŷ)𝔼[U(Z_T)1_Z_T≥ẑ+ δ] = ∫_ẑ+δ^+∞ U(w) d ℙ(Z_T≤ w)=0.
Set
U̅(x) :=
U(ẑ-) K≤ x< ẑ
U(ẑ) x≥ẑ,
from (<ref>) we have
lim sup_(t, x, y) → ( T^-, x̂, ŷ) E[U(Z_T)] -U(ẑ-) - 2 (U(ẑ) - U(ẑ-)) Φ(min{z-ẑ,0}/ |ẑ-x| σ√(T-t))
= lim sup_(t, x, y) → ( T^-, x̂, ŷ)∫_K^+∞ U(w) d ℙ(Z_T≤ w) - U(ẑ-) - 2 (U(ẑ) - U(ẑ-)) Φ(min{z-ẑ,0}/ |ẑ-x| σ√(T-t))
≤ lim sup_(t, x, y) → ( T^-, x̂, ŷ)∫_ẑ+δ^+∞ U(w) d ℙ(Z_T≤ w)+ ∫_K^ẑ+δU(w) d ℙ(Z_T≤ w)
- U(ẑ-) - 2 (U(ẑ) - U(ẑ-)) Φ(min{z-ẑ,0}/ |ẑ-x| σ√(T-t))
= lim sup_(t, x, y) → ( T^-, x̂, ŷ)∫_K^ẑ+δ U(w) d ℙ(Z_T≤ w)- U(ẑ-) - 2 (U(ẑ) - U(ẑ-)) Φ(min{z-ẑ,0}/ |ẑ-x| σ√(T-t))
≤ ϵ+ lim sup_(t, x, y) → ( T^-, x̂, ŷ)∫_K^ẑ+δU̅(w) d ℙ(Z_T≤ w)- U(ẑ-) - 2 (U(ẑ) - U(ẑ-)) Φ(min{z-ẑ,0}/ |ẑ-x| σ√(T-t)).
According to Proposition <ref>, since K≥ 0,
lim sup_(t, x, y) → ( T^-, x̂, ŷ)∫_K^ẑ+δU̅(w) d ℙ(Z_T≤ w) - U(ẑ-) - 2 (U(ẑ) - U(ẑ-)) Φ(min{z-ẑ,0}/ |ẑ-x| σ√(T-t)) ≤ 0.
Since ϵ is arbitrary,
lim sup_(t, x, y) → ( T^-, x̂, ŷ)∫_K^+∞ U(w) d ℙ(Z_T≤ w)- U(ẑ-) - 2 (U(ẑ) - U(ẑ-)) Φ(min{z-ẑ,0}/ |ẑ-x| σ√(T-t)) ≤ 0.
We next show the left side of (<ref>) is no less than 0.
When ŷ >0, consider the strategy π^* which neither sells nor buys in [t, τ_ẑ), and sells all stock at τ_ẑ, where τ_ẑ: = inf{ s∈ [t, T]| Z^π^*_s≥ẑ}.
We have
ℙ(Z^π^*_T ≤ẑ-δ) = ℙ((1-θ_1)Y^π^*_T ≤ẑ - δ-x )
= Φ( lnẑ - δ-x/1-θ_1-ln y -(η-1/2σ^2 ) (T-t) /σ√(T-t))
→ 0, when (t, x, y) → ( T^-, x̂, ŷ),
and
ℙ( Z^π^*_T ≥ẑ) = ℙ((1-θ_1)Y^π^*_T ≥ẑ -x )
= ℙ( ln Y_T^π^*≥ln (ẑ -x/1-θ_1) )
= ℙ( lnY^π^*_T/y≥lnẑ -x/(1-θ_1)y)
≥ℙ( σmax_t≤ s≤ T (ℬ_s -ℬ_t)≥lnẑ -x/(1-θ_1)y+ |η-1/2σ^2|(T-t))
= 2 Φ(min{- lnẑ -x/(1-θ_1)y - |η-1/2σ^2|(T-t), 0}/σ√(T-t)).
Therefore,
lim inf_(t, x, y) → ( T^-, x̂, ŷ) V(t, x, y) - U(ẑ-) - 2 (U(ẑ) - U(ẑ-)) Φ(min{z-ẑ,0}/ |ẑ-x| σ√(T-t))
≥ lim inf_(t, x, y) → ( T^-, x̂, ŷ) U(K) ℙ(Z_T^π^*≤ẑ-δ) +U(ẑ-)(1- ℙ(Z^π^*_T ≤ẑ-δ) ) + (U(ẑ) - U(ẑ-)) ℙ( Z^π^*_T ≥ẑ)
- U(ẑ-) - 2 (U(ẑ) - U(ẑ-)) Φ(min{z-ẑ,0}/ |ẑ-x| σ√(T-t))
≥ lim inf_(t, x, y) → ( T^-, x̂, ŷ) 2 (U(ẑ) - U(ẑ-)) ( Φ(min{- lnẑ -x/(1-θ_1)y - |η-1/2σ^2|(T-t), 0}/σ√(T-t)) - Φ(min{z-ẑ,0}/ |ẑ-x| σ√(T-t))).
Noticing that lim_ϵ→ 0sup_w∈ℝ|Φ((1+ϵ) w) -Φ(w)| = 0 and (w-1)/w≤ln w ≤ w-1, ∀ w> 0, we have (<ref>)→ 0.
The case ŷ≤ 0 is proved similarly, which finishes the proof.
§.§ Proof of Theorem <ref>
We prove by contradiction. Consider ψ(t, x, y) = e^β(t-T) u(t, x, y) and ϕ(t, x, y) = e^β(t-T) v(t, x, y), where β>η/θ_1>0; then ψ (resp. ϕ) is a viscosity subsolution (resp. supersolution) to
min{-F_t -1/2σ^2 y^2 F_yy - η y F_y+ β F , F_y - (1-θ_1)F_x, (1+θ_2) F_x - F_y } = 0,
with the boundary condition
F(t, x, y) = e^β(t-T) U(K), when z = K,
and the terminal condition (<ref>).
Assume on the contrary that there is some point (t̅, x̅, y̅) ∈ [0, T)×𝒮 such that
ψ(t̅, x̅, y̅) -ϕ(t̅, x̅, y̅) = 2δ >0.
Then consider
M_α (t, x, y, s, u, v): = ψ(t, x, y) - ϕ(s, u, v) -φ(t, x, y, s, u, v),
where φ(t, x, y, s, u, v): = ϵ_1/t + ϵ_2/(T-t) + ϵ_3 (x+y) + ϵ_4 (u+v) + α/2( (t-s)^2+ (y-v)^2 +(x-u)^2 ), and ϵ_1, ϵ_2, ϵ_3, ϵ_4 are four positive constants which are sufficiently small, s.t. M_α (t̅, x̅, y̅, t̅, x̅, y̅) > δ>0.
First, we show that for any α>0, we can find an interior point (t, x, y, s, u, v) ∈ (0, T) ×𝒮× (0, T) ×𝒮 at which M_α attains its global maximum.
Notice that when x+y is sufficiently large,
M_α (t, x, y, s, u, v) ≤ C'_1 + C'_2 (x+y)^p + |U(K)| - ϵ_3 (x+y) <0< δ,
for some constants C'_1 and C'_2.
Therefore, we only need to focus on the set 𝒮_C' : = 𝒮∩{(x, y)| x+y ≤ C'}. Due to the upper semicontinuity of ψ and the lower semicontinuity of ϕ, the maximum of M_α can be attained in [0, T]×𝒮_C'× [0, T]×𝒮_C'. Because of the term ϵ_1/t+ ϵ_2/(T-t), the maximum of M_α is in fact attained in (0, T)×𝒮_C'× (0, T)×𝒮_C'. We denote one of the maximizers by (t_α, x_α, y_α, s_α, u_α, v_α).
Second, we show that we can find a sufficiently small ϵ_2 >0, s.t.
ϵ_1/t_α^2≥ϵ_2/(T-t_α)^2, for sufficiently large α.
Notice that for any ϵ_1, ϵ_2, ϵ_3, ϵ_4, there exists a subsequence such that
lim_α→ +∞α ( (t_α-s_α)^2 +(x_α-u_α)^2+ (y_α-v_α)^2 ) = 0,
and both (t_α, x_α, y_α)
and (s_α,u_α, v_α) converge to some interior point (t̂, x̂, ŷ) (see, e.g. <cit.>). Let (t̂_0, x̂_0, ŷ_0) be a limit of (t̂, x̂, ŷ) as ϵ_2→ 0.
We next show that t̂_0<T, and (<ref>) is proved accordingly. We prove by contradiction.
If t̂_0=T, according to the terminal condition (<ref>), we have
lim sup_(t̂, x̂, ŷ) → (T-, x̂_0, ŷ_0)(ψ(t̂, x̂, ŷ) - ϕ(t̂, x̂, ŷ))
≤ lim sup_(t̂, x̂, ŷ) → (T-, x̂_0, ŷ_0)ψ(t̂, x̂, ŷ) - U(ẑ_0-) - 2 Φ(min{z-ẑ, 0}/ |ẑ-x| σ√(T-t)) (U(ẑ_0) - U(ẑ_0-) )
- lim inf_(t̂, x̂, ŷ) → (T-, x̂_0, ŷ_0)ϕ(t̂, x̂, ŷ) - U(ẑ_0-) - 2 Φ(min{z-ẑ, 0}/ |ẑ-x| σ√(T-t)) (U(ẑ_0) - U(ẑ_0-) )
≤ 0,
which contradicts the fact that, for each ϵ_2 and α,
ψ(t_α, x_α, y_α) - ϕ(s_α,u_α, v_α)≥ M_α(t_α, x_α, y_α, s_α, u_α, v_α)≥ M_α(t̅, x̅, y̅, t̅, x̅, y̅)>δ>0.
Third, we apply Ishii's lemma to prove the theorem. For notational simplicity, we write (t, x, y, s, u, v) for (t_α, x_α, y_α, s_α, u_α, v_α). By Ishii's lemma, for any γ>0, there are constants M and N, s.t.
min{-φ_t -1/2σ^2 y^2 M - η y φ_y+βψ , φ_y - (1-θ_1)φ_x, (1+θ_2) φ_x - φ_y }≤ 0,
min{φ_s -1/2σ^2 v^2 N + η v φ_v+βϕ , -φ_v + (1-θ_1)φ_u, -(1+θ_2) φ_u + φ_v }≥ 0,
where
( M, 0; 0, -N ) ≤∇^2_y, v φ+ γ (∇^2_y, v φ)^2
with
∇^2_y, v φ = ( ∂^2 φ/∂ y^2, ∂^2 φ/∂ y∂ v; ∂^2 φ/∂ v∂ y, ∂^2 φ/∂ v^2 ).
From the definition of φ, we have
φ_t = -ϵ_1/t^2 + ϵ_2/(T-t)^2+ α (t-s), φ_s = -α (t-s),
φ_x = ϵ_3 +α (x-u), φ_y = ϵ_3 + α (y-v),
φ_u = ϵ_4 - α (x-u), φ_v = ϵ_4 - α (y-v),
and
∇^2_y, v φ = α ( 1, -1; -1, 1 ).
According to (<ref>),
M y^2 - N v^2 = (y, v) ( M, 0; 0, -N ) (y, v)^⊤
≤ (y, v) ( ∇^2_y, v φ + γ ( ∇^2_y, v φ)^2 ) (y, v)^⊤
= α (y-v)^2 + γ (y, v) ( ∇^2_y, v φ)^2 (y, v)^⊤.
We can choose γ sufficiently small such that
M y^2 - N v^2 ≤α (y-v)^2 + o(1), as α→ +∞.
According to (<ref>), at least one of the following three inequalities must hold.
(i) φ_y - (1-θ_1) φ_x ≤ 0.
We have from (<ref>) that
0≥ [φ_y - (1-θ_1) φ_x] - [ -φ_v + (1-θ_1) φ_u ] =θ_1( ϵ_3 + ϵ_4) >0.
Contradiction.
(ii) (1+θ_2) φ_x -φ_y ≤ 0.
We have from (<ref>) that
0≥ [(1+θ_2)φ_x - φ_y] - [-(1+θ_2)φ_u + φ_v] =θ_2 (ϵ_3 + ϵ_4) >0.
Contradiction.
(iii) -φ_t -1/2σ^2 y^2 M - η y φ_y+βψ≤ 0.
We have from (<ref>) that
0≥ [-φ_t -1/2σ^2 y^2 M - η y φ_y+βψ] - [ φ_s -1/2σ^2 v^2 N + η v φ_v+βϕ ]
= [ϵ_1/t^2 - ϵ_2/(T-t)^2 -α (t-s) -1/2σ^2 y^2 M - η y (ϵ_3 +α (y-v)) +βψ]
- [-α(t-s) -1/2σ^2 v^2 N + η v (ϵ_4 - α (y-v)) +βϕ ]
≥ -1/2σ^2 (y^2 M - v^2 N) -αη (y-v)^2 -η(ϵ_3 y + ϵ_4 v ) +β (ψ - ϕ)
= -1/2σ^2 α(y-v)^2 -αη (y-v)^2+ o(1) -η(ϵ_3 y + ϵ_4 v) +β (ψ - ϕ).
According to (<ref>), if ŷ: = lim_α→ +∞ y_α = lim_α→ +∞ v_α≤ 0, we already reach a contradiction, since ψ(t̂, x̂, ŷ) - ϕ(t̂, x̂, ŷ)≥δ. If ŷ>0, since x_α + (1-θ_1) y_α≥ 0, we have y_α≤ (x_α+y_α)/θ_1. Therefore, due to the choice of β,
-η(ϵ_3 y + ϵ_4 v) +β (ψ - ϕ) ≥ -η/θ_1 (ϵ_3 (x + y) + ϵ_4 (u+ v)) +β(ψ(t, x, y) - ϕ(t, x, y))
≥β(ψ(t, x, y) - ϕ(t, x, y) - ϵ_3 (x + y) - ϵ_4 (u+ v) )
≥βδ
>0.
This leads to a contradiction and concludes the proof.
§.§ Proof of Theorem <ref>
As indicated in the main body, we only need to show Theorem <ref>(ii), i.e., V is a viscosity solution. Also, condition c) in Definition <ref> is a direct result of Proposition <ref> and has been proved in <ref>. Therefore, in the following we focus on verifying condition a) and b) in Definition <ref>.
§.§.§ Verifying Condition a)
Condition a) is from the following weak dynamic programming principle.
Denote (X̂_s, Ŷ_s) as the state processes (X_s, Y_s) starting from X_t=x, Y_t = y under the portfolio π: = (L_s, M_s)_t≤ s≤ T.
For any stopping time τ taking values within [t, T], and (t, x, y) ∈ [0, T) ×𝒮, we have
V(t, x, y)≤sup_π∈𝒜_t(x, y)E[V^*(τ, X̂_τ, Ŷ_τ)]
and
V(t, x, y)≥sup_π∈𝒜_t(x, y)E[V_*(τ, X̂_τ, Ŷ_τ)].
The proof of this proposition is identical to that of <cit.>; Condition a) is then verified by Corollary 5.6 of <cit.>.
§.§.§ Verifying Condition b)
In this part, we prove the continuity of the value function around z = K. More precisely, we have the following result.
We have
lim_(t, x, y) →(t_0, x̂, ŷ) V(t, x, y) = U(K), when ẑ = K.
On the one hand, it is easily found that
lim inf_(t, x, y) →(t_0, x̂, ŷ) V(t, x, y) ≥ U(K).
On the other hand, denoting by V̂(t, z) the value function for given wealth z at time t without transaction costs, we have
V̂(t, x+(1-θ_1)y^+- (1+θ_2)y^-) ≥ V(t, x, y).
According to the result for non-concave utility maximization without transaction costs (<cit.>), we have
lim sup_(t, x, y) →(t_0, x̂, ŷ) V(t, x, y) ≤lim sup_(t, x, y) →(t_0, x̂, ŷ)V̂(t, x+(1-θ_1)y^+- (1+θ_2)y^-) = U(K).
Then we have proved this proposition.
§.§ Numerical Procedure
Define the change of variable z=x+(1-θ_1)y^+-(1+θ_2)y^- as in (<ref>), v=√(T-t)· y, W(t,z,v)=V(t,x,y). Under this transformation, the HJB equation (<ref>) becomes
min{-W_t-ℒ̃_-θ_1W, W_v, (θ_1+θ_2)W_z-√(T-t)· W_v} =0, for v≥ 0,
min{-W_t-ℒ̃_θ_2W, (θ_1+θ_2)W_z+√(T-t)· W_v, -W_v} =0, for v<0,
where
ℒ̃_θW = 1/2σ^2 v^2 ( W_vv+2(1+θ)/√(T-t) W_vz+(1+θ)^2/T-t W_zz)-(1/2(T-t)-η) v W_v +η(1+θ) v/√(T-t) W_z.
We then solve the above variational inequalities numerically via the penalty method (c.f. <cit.>). The corresponding penalty formulation is
W_t+ℒ̃_-θ_1W+λ (-W_v)^++λ(√(T-t)· W_v-(θ_1+θ_2)W_z)^+ =0, for v≥ 0,
W_t+ℒ̃_θ_2W+λ (W_v)^++λ(-√(T-t)· W_v-(θ_1+θ_2)W_z)^+ =0, for v< 0,
where the penalty constant λ>0 is a large number. The nonlinear terms (√(T-t)· W_v-(θ_1+θ_2)W_z)^+ and (-√(T-t)· W_v-(θ_1+θ_2)W_z)^+ are linearized using the non-smooth Newton iteration (c.f. <cit.>), and the linearized equations are solved using the implicit finite-difference scheme (see <cit.>).
For the boundary conditions, in the case of the goal-reaching problem, we set W(t,0,v) = 0 due to bankruptcy, and W(t,1,v) = 1 since the goal is reached by liquidating the whole risky asset position. As |v|→∞, we impose the boundary condition W(t,z,v) = z.
If short-selling of risky assets is prohibited, then only the equation in the region v≥ 0 remains, and the buy strategy is imposed on v=0. In the case of aspiration utility and S-shaped utility problems, when z is very large, the problem is asymptotically a classic Merton optimal investment problem with proportional transaction costs (up to a shifting and scaling in the utility function). Therefore, we set a Dirichlet boundary condition at a large value of z that W equals the classic Merton problem with transaction costs up to the same shifting and scaling.
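To make the above ingredients concrete, the following self-contained Python sketch applies the same machinery, a penalty term, Newton linearization of the penalty, and an implicit finite-difference scheme, to a simpler one-dimensional obstacle problem, min{-W_t - 1/2σ^2 s^2 W_ss - r s W_s + r W, W - g} = 0 (an American put). It is only a schematic illustration of the technique, not the authors' two-dimensional solver, and all parameter values are ours:

```python
import numpy as np

# Schematic 1D illustration (ours) of the penalty + non-smooth Newton + implicit FD scheme.
sig, r, K, T = 0.3, 0.05, 1.0, 1.0           # illustrative parameters
ns, nt, lam = 200, 100, 1e6                  # grid sizes and penalty constant
s = np.linspace(0.0, 3.0, ns); ds = s[1] - s[0]
dt = T / nt
g = np.maximum(K - s, 0.0)                   # obstacle (early-exercise payoff)

# Tridiagonal operator A ~ -0.5 sig^2 s^2 W_ss - r s W_s + r W (central differences)
lower = -0.5 * sig**2 * s**2 / ds**2 + 0.5 * r * s / ds
diag = sig**2 * s**2 / ds**2 + r
upper = -0.5 * sig**2 * s**2 / ds**2 - 0.5 * r * s / ds
A = np.diag(diag) + np.diag(lower[1:], -1) + np.diag(upper[:-1], 1)
A[0, :] = 0.0; A[0, 0] = r                   # boundary s = 0
A[-1, :] = 0.0                               # boundary s large: W = 0 (Dirichlet)

W, I = g.copy(), np.eye(ns)
for _ in range(nt):                          # implicit Euler, marching backwards in time
    W_old = W.copy()
    for _ in range(50):                      # non-smooth Newton iteration on the penalty term
        active = (W < g).astype(float)       # where the penalty lam*(g - W)^+ is switched on
        M = I + dt * A + dt * lam * np.diag(active)
        rhs = W_old + dt * lam * active * g
        rhs[-1] = 0.0
        W_new = np.linalg.solve(M, rhs)
        done = np.max(np.abs(W_new - W)) < 1e-10
        W = W_new
        if done:
            break

print("American put value at s = K:", np.interp(K, s, W))
```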
|
http://arxiv.org/abs/2307.00232v1
|
20230701054409
|
H-Unitality of Smooth Groupoid Algebras
|
[
"Michael Francis"
] |
math.OA
|
[
"math.OA",
"math.DG"
] |
H-Unitality of Smooth Groupoid Algebras
Michael Francis
========================================
We show that the convolution algebra of smooth, compactly supported functions on a Lie groupoid is homologically unital in the sense of Wodzicki. We also prove homological unitality of certain ideals associated to invariant closed subsets of the unit space, leading to an excision property in Hochschild and cyclic homology around invariant, closed subsets. This extends previous work of the author establishing the Dixmier-Malliavin theorem in this setting.
§ INTRODUCTION
Let A denote an associative algebra over ℂ. We do not assume A is commutative or unital. Let us say that A has the weak factorization property if every a ∈ A can be expressed as a finite sum a = ∑ b_i c_i, where b_i,c_i ∈ A. Notice that every unital algebra has the weak factorization property, so this notion is only of interest in the nonunital setting.
Recall that, given a Lie group G equipped with Haar measure, the space C_c^∞(G) of smooth, compactly-supported functions on G becomes an algebra with respect to convolution. This algebra is nonunital unless G is discrete (the unit wants to be the Dirac mass at 1). In a 1978 paper, Dixmier-Malliavin proved the following striking result.
For any Lie group G, the smooth convolution algebra C_c^∞(G) has the weak factorization property.
The main technical ingredient of their proof is the following lemma whose own proof is an intricate piece of hard analysis.
For any sequence (c_m)_m=0^∞ of positive scalars, there exist f_0,f_1 ∈ C_c^∞(ℝ) and scalars (a_m) with |a_m| ≤ c_m such that δ_0 = f_0 + ∑_m=0^∞ a_m f_1^(m). Here, δ_0 denotes the Dirac mass at 0 and the series converges in the sense of compactly-supported distributions. The functions f_0, f_1 may be chosen to be supported in any fixed neighbourhood of the origin.
The (already nontrivial) G=ℝ case of Theorem <ref> follows quite directly from Lemma <ref>. Given any φ∈ C_c^∞(ℝ), by choosing the coefficients a_m to converge sufficiently quickly to 0, one has that φ_1 := ∑_m a_m φ^(m) belongs to C_c^∞(ℝ), and so
φ = δ_0 * φ = f_0 * φ + ∑_m=0^∞ a_m f_1^(m) * φ = f_0 * φ + f_1 * φ_1.
To prove Theorem <ref> for a general Lie group, one writes G (locally) as a product of 1-parameter subgroups and factors one group at a time.
In <cit.>, the author extended Dixmier-Malliavin's result to the setting of Lie groupoids:
For any Lie groupoid G, the smooth convolution algebra C_c^∞(G) has the weak factorization property.
We always assume our Lie groupoids G are equipped with a smooth Haar system so that a convolution product on C_c^∞(G) is defined (this arbitrary choice can be avoided by working with appropriate densities in place of functions, see <cit.>, Section 2.5). The total space of G (but not its unit space) is permitted to be non-Hausdorff. As usual, in the non-Hausdorff case, C_c^∞(G) is defined somewhat differently as the span of all C_c^∞(U), where U is a chart neighbourhood in G (see, e.g. <cit.>, <cit.>, <cit.>).
In the case of groupoids, there is an additional phenomenon (absent in the group case) of ideals associated to invariant subsets of the unit space. If Z is a closed, invariant subset of the unit space of G and G_Z ⊂ G denotes the closed subgroupoid consisting of arrows whose source and target lie in Z, we denote by J_Z^∞⊂ C_c^∞(G) the ideal (with respect to convolution) of functions which vanish to infinite order on G_Z. Weak factorization was also established by the author for these ideals, in the case where Z is a closed invariant submanifold:
For any Lie groupoid G, for any closed, invariant submanifold Z of the unit space of G, the corresponding ideal J_Z^∞⊂ C_c^∞(G) has the weak factorization property.
The purpose of this article is to strengthen Theorem <ref> and Theorem <ref> by showing that C_c^∞(G) and J^∞_Z are in fact homologically unital in the sense defined by Wodzicki. This immediately implies excision results for the Hochschild and cyclic homology of these algebras.
The present article is also more general than <cit.> in that Z is permitted to be an arbitrary invariant, closed subset of the unit space, and not necessarily a submanifold. Furthermore, an increased amount of care is taken to ensure that the results and arguments apply in the case of non-Hausdorff Lie groupoids; the results in <cit.> do apply in the non-Hausdorff setting, but the verification of this is mostly left to the reader.
We give a brief review of the homological notions at play. For simplicity, we only consider complex scalars, ℂ being the ground field for all algebras of interest here. In general, this restriction is not necessary, though working with a field does streamline definitions somewhat.
Given an associative algebra A over ℂ, the bar complex of A is the chain complex
⋯→^d' A^⊗ 3→^d' A^⊗ 2→^d' A → 0,
where the differential d' is determined by
d'(a_0⊗…⊗ a_n) = ∑_i=0^n-1 (-1)^i a_0 ⊗…⊗ a_ia_i+1⊗…⊗ a_n n ≥ 1.
If one modifies the above definition of d' to include the “wrap around term” a_n a_0 ⊗…⊗ a_n-1, then one gets the differential d involved in the definition of Hochschild and cyclic homology.
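As a quick illustration (ours, not from the references), the identity d'∘ d' = 0 can be checked mechanically over the free algebra of words under concatenation; a small Python sketch, encoding a chain in A^⊗ n as a dictionary from tuples of words to coefficients:

```python
# Illustration (ours): the bar differential d' on the free algebra of words,
# with string concatenation as the (associative) product.

def d_prime(chain):
    """Bar differential: sum over adjacent merges with alternating signs."""
    out = {}
    for tensor, coef in chain.items():
        for i in range(len(tensor) - 1):
            merged = tensor[:i] + (tensor[i] + tensor[i + 1],) + tensor[i + 2:]
            out[merged] = out.get(merged, 0) + (-1) ** i * coef
    return {t: c for t, c in out.items() if c != 0}

x = {("a", "b", "c", "d"): 1}
print(d_prime(x))                 # {('ab','c','d'): 1, ('a','bc','d'): -1, ('a','b','cd'): 1}
assert d_prime(d_prime(x)) == {}  # d' squares to zero
```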
The following terminology was introduced by Wodzicki.
An associative algebra A over ℂ is said to be homologically unital or H-unital if its bar complex is acyclic. That is, if
⋯→^d' A^⊗ 3→^d' A^⊗ 2→^d' A → 0
is an exact sequence.
Every unital algebra A is H-unital. Indeed, x ↦ 1 ⊗ x : A^⊗ n→ A^⊗ (n+1) defines a contracting chain homotopy for the bar complex. More generally, if A is locally unital in the sense that, for any finite set of elements, there is an element acting as a unit for those elements, then A is H-unital. Any Banach algebra with a bounded approximate unit is H-unital. In particular, all C*-algebras are H-unital. More in the spirit of the present article, if M is a smooth manifold and Z ⊂ M is any closed subset, then the ideal I_Z^∞⊂ C^∞(M) of smooth functions which vanish with all their derivatives on Z is H-unital.
From Wodzicki's groundbreaking paper on excision in cyclic homology and algebraic K-theory, H-unitality has the following relationship to excision.
Let A be an associative algebra over ℂ. Then, the following are equivalent:
* A has the excision property for Hochschild homology.
* A has the excision property for cyclic homology.
* A is H-unital.
For a precise explanation of the terminology above, one may see <cit.>, or Section 1.4 of <cit.>. The key point is the following: if I is an H-unital algebra, and B contains I as an ideal, then the short exact sequence of ℂ-algebras
0 → I → B → B/I → 0
induces a corresponding long exact sequence in Hochschild/cyclic homology.
Note that, if A is H-unital, then, in particular, the mapping A⊗ A → A is surjective. That is, H-unital algebras have the weak factorization property.
In this article, we prove the following result which may be viewed as a generalization of the Dixmier-Malliavin theorem for Lie groupoids of <cit.>.
For any Lie groupoid G, the smooth convolution algebra C_c^∞(G) is H-unital.
The author would like to thank Xiang Tang for pointing out the similarity of Theorem <ref> above to Proposition 2 of <cit.>. We note, however, that, because the tensor products in this article are algebraic tensor products and the tensor products in <cit.> are completed tensor products ⊗ satisfying C_c^∞(M) ⊗C_c^∞(N) = C_c^∞(M × N) for all smooth manifolds M and N, the results are distinct.
The second purpose of this article is to prove the following generalization of Theorem <ref>.
For any Lie groupoid G, for any closed, invariant subset Z of the unit space of G, the corresponding ideal J_Z^∞⊂ C_c^∞(G) is H-unital.
Theorems <ref>, <ref> and <ref> have the following important corollary.
The algebras C_c^∞(G) and J_Z^∞ have the excision property in Hochschild homology and cyclic homology.
The main practical consequence of our results is that any extension of the form
0 → J_Z^∞→ C_c^∞(G) → C_c^∞(G)/J_Z^∞→ 0
induces a corresponding long exact sequence in Hochschild/cyclic homology (since ℂ is a field, the purity hypothesis in <cit.> is automatically satisfied). It is hoped that this will lead to an improved understanding of localization around invariant subsets in calculations of the cyclic and Hochschild homology of convolution algebras of Lie groupoids. Examples of calculations utilizing this excision principle will be considered elsewhere. Such calculations fall squarely within Connes' noncommutative geometry program. One may see <cit.> for recent progress in this area.
We now discuss the criterion that will be used to establish H-unitality. As mentioned above, if an algebra A is unital, then x ↦ 1 ⊗ x : A^⊗ n→ A^⊗ (n+1) defines a contracting homotopy for the bar complex. More generally, if there exists a -linear map ϕ : A → A ⊗ A which is right A-linear (where the right A-module structure on A⊗ A is such that (a ⊗ b)· c = a⊗(bc) for a,b,c ∈ A) and makes the diagram
A [rd,equals] [r,"ϕ"] A ⊗ A [d,"multiplication"]
A
commutative, then a simple calculation shows that ϕ⊗𝕀 : A^⊗ n→ A^⊗ (n+1) gives a contracting homotopy for the bar complex. In fact, because any cycle only involves finitely any elements of A, it is not actually necessary to have a globally-defined map ϕ; one can get by with a family of locally-defined maps. This is formalized in the following result
from <cit.> (specialized to the case k=) and we refer the reader to that source for the proof.
[4.1 Proposition, <cit.>]
Let A be an associative -algebra. Suppose that, for every finite set 𝒫⊂ A, there exists a right ideal A_0 ⊂ A such that 𝒫⊆ A_0 and a -linear map ϕ : A_0 → A ⊗ A such that:
* ϕ is a map of right A-modules (the right A-module structure on A⊗ A is such that (a ⊗ b)· c = a⊗(bc) for a,b,c ∈ A).
* The diagram
A_0 →^ϕ A ⊗ A →^mult A, with diagonal arrow the inclusion A_0 ↪ A,
is commutative (that is, multiplication ∘ϕ is the inclusion).
One straightforward consequence of Proposition <ref> is the H-unitality of "locally unital" algebras: if every finite subset of A admits a common left unit, then A is H-unital. While the latter statement suffices for many algebras of interest in noncommutative geometry, the algebras considered here are not locally unital. Already, the convolution algebra C_c^∞(ℝ) is not locally unital (under the Fourier transform, it is an algebra of analytic functions under pointwise multiplication). Nonetheless, our proofs of H-unitality will pass through the sufficient condition of Proposition <ref>. As an instructive example, we establish H-unitality for C_c^∞(ℝ) below.
Let X=d/dx, considered as an operator on C_c^∞(ℝ).
Given a formal series P(z) = ∑_m≥ 0 a_m z^m ∈ℂ[[z]], one may define P(X) : Dom(P(X)) → C_c^∞(ℝ) by
Dom(P(X)) = {φ∈ C_c^∞(ℝ) : ∑_m ≥ 0 |a_m| ‖ X^m+rφ‖ < ∞ for all r ≥ 0 }
P(X) φ = ∑_m ≥ 0 a_m X^mφ,
where ‖·‖ denotes the uniform norm. One may check that Dom(P(X)) is an ideal in C_c^∞(ℝ). Given any finite subset 𝒫⊂ C_c^∞(ℝ), one may use Lemma <ref> to find a representation δ_0 = f_0 + ∑_m=0^∞ a_m f_1^(m) of the Dirac measure in terms of f_0,f_1 ∈ C_c^∞(ℝ) such that P(z) = ∑_m ≥ 0 a_m z^m has 𝒫⊂ Dom(P(X)). Then, the map
φ↦ f_0 ⊗φ + f_1 ⊗ P(X) φ : Dom(P(X)) → C_c^∞(ℝ) ⊗ C_c^∞(ℝ)
satisfies the conditions of Proposition <ref> by the same calculation given below the statement of Lemma <ref>, so C_c^∞(ℝ) is H-unital by Proposition <ref>.
We give a brief summary of the contents of this article. Section 2 reviews certain aspects of the theory of non-Hausdorff smooth manifolds. A number of examples are included with the intention being to clarify where complications can arise, especially as pertains to supports of functions and flows of vector fields. Section 3 is a technical section devoted to domains of operators obtained by taking power series in a fixed set of vector fields. The goal here is to confirm that, by taking series coefficients to be sufficiently small, domains of such operators can be made to contain any finite set of bump functions. In Section 4, we establish conventions for Lie groupoids and smooth actions of Lie groupoids. Section 5 is concerned with our first main result, the H-unitality of the smooth convolution algebras of groupoids. Section 6 is concerned with factorization with respect to pointwise multiplication of smooth functions vanishing to infinite order on a closed set, relative to a submersion. Section 7 contains our second main result, the H-unitality of infinite-order vanishing ideals in smooth convolution algebras of groupoids.
§ NON-HAUSDORFF SMOOTH MANIFOLDS
It is frequently necessary in noncommutative geometry to work with non-Hausdorff manifolds. In this section, we highlight some of the complications that can arise and review the standard workarounds. Readers who are only interested in Hausdorff Lie groupoids may safely ignore this section. Readers who are already well-versed in the techniques needed to deal with non-Hausdorff Lie groupoids may also prefer to skip this material.
In this article, a smooth manifold refers to a second-countable topological space M equipped with a smooth atlas (more precisely, an equivalence class of smooth atlases or, alternatively, a maximal smooth atlas) of some fixed dimension. The topology of M is not required to be Hausdorff. Of course, M is locally Hausdorff, being locally Euclidean.
Actually, it would do little harm to drop the assumption of second-countability as well, provided we at least assume that every component of M is second-countable. This amounts to working with possibly uncountable disjoint unions of second-countable manifolds.
Much of the basic theory of smooth manifolds goes through unchanged without the Hausdorff hypothesis. For example, one can talk about smooth vector fields on and smooth maps between non-Hausdorff smooth manifolds by requiring smoothness in every chart. Many of the issues that do arise are due to the nonexistence of partitions of unity. Ultimately, this stems from an undersupply of scalar-valued functions that look smooth in every chart. To some extent, it is possible to define one's way around this issue by modifying the definition of bump functions. The simple definition below is due to Connes. More sophisticated approaches with better functorial properties are also possible. See the unpublished manuscript <cit.>.
Let M be a possibly non-Hausdorff smooth manifold. Then, C_c^∞(M) denotes the linear span of all functions φ : M →ℂ that are given as the extension by zero of a function of the form f ∘χ : U →ℂ, where U ⊂ M is open, χ : U →ℝ^d is a diffeomorphism, and f ∈ C_c^∞(ℝ^d).
If M is Hausdorff, the usual meaning of C_c^∞(M) is recovered. On the other hand, if M is non-Hausdorff, the notation C_c^∞(M) is misleading in several ways:
* A function φ∈ C_c^∞(M) need not be smooth in every chart (or even continuous).
* The support of φ∈ C_c^∞(M), defined in the usual way as the complement of the largest open set where φ is zero (equivalently, the closure of the nonvanishing locus of φ), need not be compact.
* C_c^∞(M) need not be closed under the operation of pointwise product.
Let M=((-∞,0) ×{0}) ∪⋃_n ∈ℤ([0,∞) ×{n}),
with smooth manifold structure determined by the atlas {χ_n :U_n →ℝ}_n ∈ℤ defined by
U_n = ((-∞,0) ×{0}) ∪([0,∞) ×{n}), χ_n = proj_1|_U_n, n ∈ℤ.
Choose some f ∈ C_c^∞(ℝ) with supp(f)=[-1,1] and define φ_n to be the extension by zero to all of M of f ∘χ_n, so that φ_n ∈ C_c^∞(M) by definition. Then,
* The restriction of φ_m to U_n is not smooth if m ≠ n.
* The support of φ_0 is ([-1,1] ×{0 }) ∪{ (0,n) : n ∈}, which is not compact.
* The pointwise product of φ_m and φ_n does not belong to C_c^∞(M) if m ≠ n.
[Figure: the manifold M, a line that branches at the origin into countably many rays [0,∞) ×{n}.]
Note as well that the space in the example above does arise naturally in geometry. For example, foliate the lower half of the cylinder S^1 ×ℝ into circles and the upper half into spirals so that there is one-sided holonomy at the equator S^1 ×{0}. The restriction of the holonomy groupoid of this foliation to a vertical line transversal is diffeomorphic to M.
Although it can occur that supp(φ), the closure of φ^-1(ℂ^×), is noncompact for φ∈ C_c^∞(M) when M is noncompact, we do at least have the following.
Let τ:M → B be a smooth map of smooth manifolds, where B is Hausdorff and M is possibly non-Hausdorff. Then, for all φ∈ C_c^∞(M), we have that τ(supp(φ)) is compact.
Write φ = ∑_i=1^n (f_i ∘χ_i)_0, where χ_i : U_i →ℝ^d are charts, f_i ∈ C_c^∞(ℝ^d), and the 0 subscripts denote the operation of extension by zero. Then, K = ⋃_i=1^n χ_i^-1(supp(f_i)) is a compact (but possibly not closed) subset of M containing φ^-1(ℂ^×). The conclusion follows from the elementary point-set topological lemma below, with S = φ^-1(ℂ^×).
Let f : X → Y be a continuous map of topological spaces, where Y is Hausdorff and X is possibly non-Hausdorff. Suppose that S ⊂ K ⊂ X and K is compact (but possibly not closed). Then, f(cl S) = cl f(S) and this is a compact subset of Y.
We have f(cl S) ⊂ cl f(S) ⊂ cl f(cl S∩ K) = f(cl S∩ K) ⊂ f(cl S), so all these sets are equal to the compact set f(cl S∩ K). (Here cl S∩ K is a closed subset of the compact set K, hence compact, and compact subsets of the Hausdorff space Y are closed.)
Another issue with Definition <ref> of relevance to us is the poor control that the support of an element φ∈ C_c^∞(M) exerts over the supports of the summands in the possible decompositions of φ into functions coming from charts. One might expect that, if U is open and supp(φ) ⊂ U, then φ∈ C_c^∞(U) ⊂ C_c^∞(M), i.e. φ can be expressed in terms of bump functions defined on chart neighbourhoods contained in U. In general, this is not assured, as the example below illustrates.
Fix a smooth function θ : ℝ → ℝ with θ(x)=0 for x ≤ 0 that restricts to a diffeomorphism θ_+:(0,∞)→(0,∞). Consider the non-Hausdorff smooth manifold M = ℝ ∪_θ_+ ℝ obtained by using θ_+ to glue two copies of ℝ along (0,∞). More precisely,
M =((-∞,0] ×{-1,1})∪( (0,∞) ×{0})
U_+ = ((-∞,0] ×{1})∪( (0,∞) ×{0})
U_- = ((-∞,0] ×{-1})∪( (0,∞) ×{0})
[Figure: M as two half-lines glued along (0,∞); the open sets U_+ and U_- each contain one of the two origin points.]
with non-Hausdorff smooth manifold structure determined by the two charts:
χ_+: U_+ → ℝ, χ_+(x,y) = x,
χ_-: U_- → ℝ, χ_-(x,y) = x for x ≤ 0 and χ_-(x,y) = θ_+(x) for x > 0.
Fix g ∈ C_c^∞(ℝ) with supp(g)=[-1,1] such that g(x)=x for all x in a neighbourhood of 0. Define f ∈ C_c^∞(ℝ) by f(x)=g(θ(x)) for x>0 and f(x)=0 for x ≤ 0. Let φ∈ C_c^∞(M) be given by φ = (g ∘χ_-)_0 - (f ∘χ_+)_0, where the subscript 0 denotes extension by zero.
[Figure: the graphs of f and g on the two copies of ℝ, and of the resulting φ, which vanishes identically on U_+ and equals g on (-∞,0] ×{-1}.]
Then, supp(φ) = [-1,0] ×{-1} (note the point (0,1) does not belong to the support because φ vanishes identically on U_+). In particular, supp(φ) ⊂ U_-. However, φ does not belong to C_c^∞(U_-) ⊂ C_c^∞(M).
Another issue with Definition <ref> of relevance to us relates to smooth vector fields. On the one hand, there is no difficulty defining the tangent bundle TM of a non-Hausdorff smooth manifold M and defining a vector field X to be a section of TM which is smooth in every chart. On the other hand, integral curves of X may not be unique, preventing one from talking about the flow of X. In fact things already go wrong at the infinitesimal level, i.e. one does not even have a well-defined linear map X:C_c^∞(M)→ C_c^∞(M). The operators defined by X in charts need not patch together. This issue arises already in the simplest examples, as shown in Example <ref> below.
Let M = ℝ^× ∪{0_1,0_2}, the “line with two origins” with its non-Hausdorff smooth manifold structure coming from the two obvious charts χ_i : ℝ^× ∪{0_i}→ ℝ, i = 1,2. Let X be the (global) smooth vector field on M which coincides with d/dx in both charts. Fix f ∈ C_c^∞(ℝ) with f(0)=0 and f'(0)=1. For i=1,2, let φ_i ∈ C_c^∞(M) be the extension by zero of f ∘χ_i and let ψ_i ∈ C_c^∞(M) be the extension by zero of f' ∘χ_i. Then, φ_1 - φ_2 = 0, but ψ_1 - ψ_2 is the function which is zero on ℝ^×, 1 at 0_1 and -1 at 0_2. Thus φ_1 - φ_2 = 0 while ψ_1 - ψ_2 ≠ 0. This shows that the result of applying X chartwise to the zero function is not unambiguously defined.
A flow for a smooth vector field X on a smooth manifold M is a smooth map (t,m) ↦ϕ_t(m) : W → M, where W ⊂ ℝ × M is an open set containing {0}× M whose intersection with ℝ ×{m} is connected for all m ∈ M, such that ϕ_0(m)=m for all m ∈ M and d/dt ϕ_t(m) = X(ϕ_t(m)) for all (t,m) ∈ W. It is easy to see that, if X has a flow, then any two flows agree on the intersection of their domains, and there is a unique maximal flow (t,m)↦ϕ_t(m), defined on a maximal such W ⊂ ℝ × M, which furthermore satisfies ϕ_s+t(m)=ϕ_s(ϕ_t(m)) whenever (t,m), (s,ϕ_t(m)) ∈ W.
Let M be a smooth manifold, not necessarily Hausdorff, and let X be a smooth vector field on M. We say that X is non-branching if it has a flow.
A non-branching vector field X with flow ϕ does determine a well-defined linear map X:C_c^∞(M)→ C_c^∞(M). One way to see this is to note that the locally defined operators on charts agree with the global one defined by (Xf)(p)=lim_t→0 (f(ϕ_t(p))-f(p))/t, so they patch together. Indeed, one may check that existence of the flow of X is equivalent to its well-definedness as an operator on C_c^∞(M). Note the equation (Xf)(p)=lim_t→0 (f(ϕ_t(p))-f(p))/t can also be used to justify the following expected fact, that X, viewed as an operator on C_c^∞(M), does not increase supports.
If X is a smooth, non-branching vector field on a possibly non-Hausdorff smooth manifold M, then supp(X ψ) ⊂ supp(ψ) for all ψ∈ C_c^∞(M).
If X is a complete non-branching vector field, we consider its flow as a smooth, 1-parameter group of diffeomorphisms of M and denote it by t ↦ e^tX. In other words:
d/dt e^tX m |_t=0 = X(m), m ∈ M
In spite of the shortcomings highlighted in Examples <ref>, <ref> and <ref>, Definition <ref> does allow one to bypass many issues relating to nonexistence of partitions of unity. The following simple lemma will suffice for many purposes.
Let M be a possibly non-Hausdorff smooth manifold. Let (U_i)_i ∈ I be a cover of M by Hausdorff open sets. Then, every φ∈ C_c^∞(M) can be expressed as a finite sum ∑φ_i where φ_i is the extension by zero to all of M of some f_i ∈ C_c^∞(U_i).
Without loss of generality, φ is the extension by zero to all of M of some f ∈ C_c^∞(U), where U ⊂ M is open and Hausdorff. Then, by usual theory of Hausdorff smooth manifolds, we can write f as a finite sum ∑ f_i where f_i ∈ C_c^∞(U_i ∩ U) ⊂ C_c^∞(U_i). Then, letting φ_i be the extension by zero to all of M of f_i, we have φ = ∑φ_i.
The following proposition shows some other important respects in which C_c^∞(M) is well-behaved.
* If M and N are possibly non-Hausdorff smooth manifolds, f ∈ C_c^∞(M), g ∈ C_c^∞(N), then f ⊗ g ∈ C_c^∞(M× N) (here, (f⊗ g)(m,n)=f(m)g(n)).
* If M is a possibly non-Hausdorff smooth manifold, and N ⊂ M is a closed submanifold, then restriction gives a surjective map C_c^∞(M) → C_c^∞(N).
* If M is a possibly non-Hausdorff smooth manifold and θ : M → M is a self-diffeomorphism, then pullback along θ determines a linear bijection C_c^∞(M) → C_c^∞(M).
* If τ:M→ N is a smooth map of manifolds, with N Hausdorff, then C_c^∞(M) is a C^∞(N)-module with respect to f ·φ ≔ (f ∘τ)φ, f ∈ C^∞(N), φ∈ C_c^∞(M).
The above statements are either immediate from Definition <ref>, or follow from an application of Lemma <ref>.
§ INFINITE SERIES OF VECTOR FIELDS
The following section is somewhat technical, but more or less elementary. The results obtained will be somewhat stronger than strictly necessary, but will enable us to streamline arguments later on. Essentially we need elaborated versions of the following elementary fact: if (f_n)_n ≥ 0 is a sequence in C_c^∞(ℝ) with uniformly bounded supports, then there exist positive scalars (c_n)_n ≥ 0 such that, for any sequence of scalars (a_n)_n ≥ 0 with |a_n| ≤ c_n, the series ∑_n ≥ 0 a_n f_n converges absolutely and uniformly to a function in C_c^∞(ℝ). To see this, one may take, for instance, c_n = min_k ≤ n (2^n ‖f^(k)_n‖)^-1, where ‖·‖ denotes the uniform norm.
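For completeness, here is a sketch of why this choice of (c_n) works. Fix k ≥ 0. For n ≥ k and |a_n| ≤ c_n, the definition of c_n gives
|a_n| ‖f_n^(k)‖ ≤ (2^n ‖f_n^(k)‖)^-1 ‖f_n^(k)‖ = 2^-n,
so ∑_n ≥ 0 a_n f_n^(k) converges absolutely and uniformly for every k. Together with the uniform bound on the supports, this yields convergence of ∑_n ≥ 0 a_n f_n in C_c^∞(ℝ), by the standard theorem on uniform convergence of derivatives.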
The following definition is well-suited to our purposes.
Let P(z) = ∑_m ≥ 0 a_m z^m ∈ ℂ[[z]] be a formal series and X a smooth vector field on ℝ^d, thought of as a differential operator X : C_c^∞(ℝ^d) → C_c^∞(ℝ^d). Then, we define dom(P(X)) to be the set of all φ∈ C_c^∞(ℝ^d) such that ∑_m ≥ 0 |a_m| ‖∂^α/∂x^α X^m φ‖ < ∞ for every multi-index α∈ ℕ^d and put
P(X) φ = ∑_m ≥ 0 a_m X^m φ
for all φ∈ dom(P(X)).
Some basic consequences of this definition are collected in the following proposition.
Suppose P(z) = ∑_m ≥ 0 a_m z^m ∈ ℂ[[z]] and X is a smooth vector field on ℝ^d. Define P(X) : dom(P(X)) → C_c^∞(ℝ^d) as above.
* ∂^α/∂x^α P(X) φ = ∑_m≥0 a_m ∂^α/∂x^α X^m φ for all α∈ ℕ^d, φ∈ dom(P(X)), where the series converges absolutely and uniformly.
* supp(P(X)φ) ⊂ supp(φ) for all φ∈ dom(P(X)).
* The definition of P(X) is coordinate-independent; if θ : ℝ^d → ℝ^d is a diffeomorphism, then dom(P(θ^*(X))) = θ^*(dom(P(X))) and P(θ^*(X)) = θ^*(P(X)).
* If P(z) = ∑_m ≥ 0 a_m z^m, Q(z) = ∑_m ≥ 0 b_m z^m ∈ ℂ[[z]] and |a_m|≤|b_m| for m≥ 0, then dom(Q(X)) ⊂ dom(P(X)).
The proof of the above proposition is straightforward and we omit it. For example, property (3) comes down to the fact that ∂^α/∂x^α (f ∘θ) can be expressed as a finite sum ∑_|β|≤|α| [(∂^β/∂x^β f) ∘θ] ·θ_β, where each θ_β is a smooth function made up of partial derivatives of components of θ.
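To illustrate the shape of this identity (the first-order case only; the general case follows by iterating it together with the Leibniz rule), for α = e_i the chain rule gives
∂/∂x_i (f ∘θ) = ∑_j=1^d [(∂f/∂x_j) ∘θ] · ∂θ_j/∂x_i,
so that here θ_e_j = ∂θ_j/∂x_i, a smooth function built from first partials of the components of θ.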
Property (3) permits us to make the following definition:
Suppose that M is a not-necessarily-Hausdorff smooth manifold, X is a smooth, non-branching (Definition <ref>) vector field on M and P(z) = ∑_m ≥ 0 a_m z^m ∈ ℂ[[z]]. Then, we define dom(P(X)) to be the linear span of functions φ on M given as the extension by zero of functions f ∘χ, where χ : U → ℝ^d is a Euclidean chart and f ∈ C_c^∞(ℝ^d) belongs to dom(P(Y)), where Y=χ_*(X|_U). We define P(X) : dom(P(X)) → C_c^∞(M) by
P(X)φ = ∑_m ≥ 0 a_m X^m φ,
this series being absolutely and uniformly convergent for φ∈ dom(P(X)).
The lemma below will be used to control the domains of compositions of operators P(X).
Suppose that (a_m)_m ≥ 0 is a sequence of nonnegative scalars. Then, there exists a non-increasing sequence of positive scalars (c_m)_m ≥ 0 such that
∑_m_1,…,m_n ≥ 0 c_m_1⋯ c_m_n a_m_1+…+m_n+r < ∞
for all integers n,r with n ≥ 1 and r ≥ 0.
Without loss of generality, the terms of (a_m) are positive. We make repeated use of the fact that, given positive scalars (a_m^(i))_m, i≥ 0, there is a positive sequence (b_m)_m≥ 0 such that ∑_m ≥ 0 b_m a_m^(i) < ∞ for every i ≥ 0. One may use, for instance, b_m = ( 2^m max_i ≤ m a_m^(i))^-1.
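To spell out why this choice of (b_m) works: fix i ≥ 0. For m ≥ i, the maximum in the definition of b_m includes the term a_m^(i), so b_m a_m^(i) ≤ (2^m a_m^(i))^-1 a_m^(i) = 2^-m. Hence
∑_m ≥ 0 b_m a_m^(i) ≤ ∑_m < i b_m a_m^(i) + ∑_m ≥ i 2^-m < ∞.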
Applying the aforementioned fact, there is a sequence of positive scalars (c^(1)_m)_m ≥ 0 such that a_r^(2) ≔ ∑_m ≥ 0 c^(1)_m a_m+r < ∞ for all r ≥ 0. Applying this fact again, there is a sequence of positive scalars (c^(2)_m)_m ≥ 0 such that a_r^(3) ≔ ∑_m ≥ 0 c^(2)_m a_m+r^(2) = ∑_m_1,m_2 ≥ 0 c^(1)_m_1 c^(2)_m_2 a_m_1+m_2+r < ∞ for all r ≥ 0. Continuing in this way, we obtain positive sequences (c_m^(i))_m ≥ 0 for every i ≥ 1 such that
∑_m_1,…,m_n ≥ 0 c_m_1^(1)⋯ c_m_n^(n) a_m_1+…+m_n+r < ∞
for all r ≥ 0.
Next, put c_m ≔ min(c_m^(1),…, c_m^(m)). We show that (c_m) satisfies the conclusion of the lemma by induction on n ≥ 1. Since c_m ≤ c_m^(1) for all m, the statement holds for n=1. Let n ≥ 2. Then,
∑_m_1,…,m_n ≥ 0 c_m_1⋯ c_m_n a_m_1+…+m_n+r ≤ ∑_m_1,…, m_n ≥ n c_m_1⋯ c_m_n a_m_1+…+m_n+r + n∑_m=0^n-1 c_m ∑_m_1,…,m_n-1≥ 0 c_m_1⋯ c_m_n-1 a_m_1+…+m_n-1+m+r.
The first term is finite because c_m ≤ c^(n)_m for m ≥ n. The second term is finite by the induction hypothesis.
Suppose X_1,X_2,X_3,… is a sequence of smooth vector fields on ℝ^d and φ∈ C_c^∞(ℝ^d). Then, there exists a sequence (c_m)_m ≥ 0 of positive real numbers such that, for any formal series P(z) = ∑_m ≥ 0 a_m z^m ∈ ℂ[[z]] satisfying |a_m|≤ c_m for all m≥0, one has φ∈ dom(P(X_i_1) ⋯ P(X_i_n)) for all n, i_1,…,i_n ≥ 1.
Set
M_m = max{‖∂^α/∂x^α X_i_1^m_1⋯ X_i_n^m_nφ‖ : m_1+…+m_n +|α| ≤ m and n,i_1,…,i_n ≤ m },
where ‖·‖ denotes the uniform norm. By the preceding lemma, there exists a sequence (c_m)_m ≥ 0 of positive real numbers such that
∑_m_1,…,m_n ≥ 0 c_m_1⋯ c_m_n M_m_1+…+m_n+r < ∞
for all integers n,r with n ≥ 1 and r ≥ 0. Then, provided |a_m|≤ c_m, one has that
∑_m_1,…,m_n ≥ 0 |a_m_1| ⋯ |a_m_n| ‖∂^α/∂x^α X_i_1^m_1⋯ X_i_n^m_nφ‖ < ∞
for all n, i_1,…,i_n ≥ 1 and α∈ ℕ^d. Indeed, for all but finitely many terms of the above sum, we have n,i_1,…,i_n ≤ m_1+…+m_n+|α|, whence ‖∂^α/∂x^α X_i_1^m_1⋯ X_i_n^m_nφ‖ ≤ M_m_1+…+m_n+|α|, by definition. It follows that φ∈ dom(P(X_i_1) ⋯ P(X_i_n)). Indeed,
P(X_i_1)⋯ P(X_i_n)φ = ∑_m_1,…,m_n ≥ 0 a_m_1⋯ a_m_n X^m_1_i_1⋯ X^m_n_i_nφ,
where the above series is uniformly and absolutely convergent to a function in C_c^∞(ℝ^d).
As a direct corollary, we have:
Suppose M is a possibly non-Hausdorff smooth manifold and X_1,X_2,X_3,… are smooth, non-branching vector fields on M. Fix a finite subset 𝒫⊂ C_c^∞(M). Then, there exists a sequence (c_m)_m ≥ 0 of positive real numbers such that, for any formal series P(z) = ∑_m ≥ 0 a_m z^m ∈ ℂ[[z]] satisfying |a_m|≤ c_m for all m≥0, one has φ∈ dom(P(X_i_1) ⋯ P(X_i_n)) for all φ∈𝒫 and all n, i_1,…,i_n ≥ 1.
§ LIE GROUPOIDS AND LIE GROUPOID ACTIONS
In this somewhat lengthy section, we lay out the notations and conventions that will be used for Lie groupoids and their actions. For the most part, our conventions coincide with those of <cit.> and <cit.>. We identify sections of the Lie algebroid with right-invariant vector fields, as is done in <cit.>. On the other hand, it will be convenient to have our vector fields and measures defined along the same fibers (namely the source fibers), so we will use right Haar systems instead of the left Haar systems used in <cit.>. Although the material to follow is rather standard, we do sometimes include proofs, mainly in order to ensure that all arguments used make sense in the non-Hausdorff setting as well.
§.§ Basic notions
We will denote a typical Lie groupoid by G and its unit space by B. The source and target maps are denoted s,t : G → B. We always assume they are submersions. Multiplication is a smooth map from G^(2) ≔ G×_s,tG to G that is performed from right to left and denoted by juxtaposition; given γ_1,γ_2 ∈ G, the product γ_1γ_2 is defined if and only if s(γ_1)=t(γ_2). The inverse of γ∈ G is denoted by γ^-1. For convenience, it is always assumed that B is embedded in G as a closed submanifold. We write k ≔ dim(G) - dim(B). In other words, k is the dimension of the source and target fibers, for which we shall use the standard notations G_x ≔ s^-1(x) and G^x ≔ t^-1(x), x ∈ B.
We allow the arrow space G, but not the unit space B, of a Lie groupoid to be non-Hausdorff. This is needed to accommodate certain examples of interest, such as groupoids arising from foliations.
Dealing with the non-Hausdorff case requires modifications to the definitions one would use in the Hausdorff case. These kinds of issues are well understood, see <cit.>, <cit.>, <cit.>. We recall that Hausdorffness of the unit space automatically implies that of the source and target fibers.
Let G ⇉ B be a possibly non-Hausdorff Lie groupoid. Then, for any x ∈ B, the source and target fibers G_x and G^x are Hausdorff.
See <cit.>, Proposition 2.8.
We also recall that the space of units always admits a Hausdorff open neighbourhood.
Let G ⇉ B be a Lie groupoid. Then, there exists a Hausdorff open set W ⊂ G with B ⊂ W.
This follows from <cit.>, Lemma 4.18. See also the discussion in Section 7 of <cit.>.
§.§ The Lie algebroid of a Lie groupoid
The Lie algebroid of a Lie groupoid G ⇉ B is the vector bundle AG → B obtained by restricting the source-fiber tangent bundle ker(ds) ⊂ TG to B.
There is a canonical bundle map AG → TB called the anchor map given by restricting the differential of the target map dt: TG → TB to AG. Significantly, there is also a natural bracket operation on the sections of AG, but we will not need this.
A vector field X on a Lie groupoid G is said to be right-invariant if it is tangent to the source fibers of G and, for all γ∈ G, one has
(R_γ)_* X = X,
where R_γ denotes right-multiplication by γ. Note that R_γ is a diffeomorphism G_t(γ)→ G_s(γ) and the above equation is implicitly understood to refer to the restriction of X to the relevant source fibers.
It is easy to see that a right-invariant vector field on G is completely determined by its restriction to B, which is naturally a section of AG. Conversely, every section of AG extends uniquely to a right-invariant vector field on G. We will freely denote a section of the Lie algebroid and its extension to a right-invariant vector field by the same symbol. We note that the bracket of two right-invariant vector fields is easily seen to be right-invariant, leading to the bracket operation on sections of AG (that we will not be needing).
Because right-invariant vector fields are tangent to source fibers and source fibers are Hausdorff (Lemma <ref>), we have the following.
Every right-invariant vector field on a possibly non-Hausdorff Lie groupoid is non-branching, i.e. has a well-defined flow.
It is common practice to denote the anchor map of a Lie algebroid by #, but we will not do this here because it conflicts with the standard notation for fundamental vector field in the context of smooth actions, which we are also using. Instead, given a right-invariant vector field X on G ⇉ B, we write X^B for the corresponding vector field on B arising from the anchor map. To be more specific, the vector fields X and X^B are t-related.
In general, the flow of a right-invariant vector field on a Lie groupoid need not be complete. A sufficient condition for the flow of a right-invariant vector field to be complete is that the associated smooth section of the Lie algebroid is compactly-supported. See, for instance, Proposition 3.6 in <cit.>. If X is a complete, right-invariant vector field on G ⇉ B, then the induced vector field X^B on the base is complete as well. Indeed, its flow is determined by:
e^rX^B b = t(e^rX b), r ∈ ℝ, b ∈ B.
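Here is a routine verification that this formula does define the flow of X^B, using only the fact (noted above) that X and X^B are t-related. Writing ψ_r(b) ≔ t(e^rX b), one has ψ_0(b) = t(b) = b and
d/dr ψ_r(b) = dt(X(e^rX b)) = X^B(t(e^rX b)) = X^B(ψ_r(b)),
so r ↦ψ_r(b) is the integral curve of X^B through b, defined for all r ∈ ℝ since X is complete.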
§.§ Smooth Haar systems
We now turn to Haar systems. Their general theory appears in <cit.> and their routine adaptation to the smooth case is discussed in various sources such as <cit.>.
Let π : M → N be a submersion of possibly non-Hausdorff manifolds. Assume that the fibers of π are Hausdorff and λ_y is a smooth measure on π^-1(y) for each y ∈ N. We say that λ=(λ_y)_y ∈ N is a smooth system of measures for π if the following condition holds: if x ∈ M, U ⊂ M is a neighbourhood of x, V ⊂ N is a neighbourhood of π(x), π(U)=V and there are diffeomorphisms χ_U : U → ℝ^d × ℝ^k, χ_V : V → ℝ^d (so dim(M)=d+k, dim(N)=d) intertwining π with the projection onto the ℝ^d factor, i.e. χ_V ∘π = proj_ℝ^d ∘χ_U,
then the family of measures pushed forward through χ_U is the standard volume measure of ℝ^k copied on vertical fibers, multiplied by a smooth, positive-valued function on ℝ^d × ℝ^k.
Working locally, one sees that the above definition allows for integration along fibers of bump functions, even in the non-Hausdorff case where Definition <ref> is in effect.
Let π : M → N be a submersion of possibly non-Hausdorff manifolds with Hausdorff fibers that is equipped with a smooth system of measures λ = (λ_y)_y ∈ N. Then, integration along fibers with respect to λ defines a linear map π_!: C_c^∞(M) → C_c^∞(N).
Use Lemma <ref> to reduce to charts on which π is a projection.
Let G ⇉ B be a Lie groupoid. A (smooth, right) Haar system for G is a smooth system of measures λ = (λ_b)_b ∈ B for the source submersion s : G → B which is furthermore right-invariant in the sense that, for every γ∈ G, the right multiplication map R_γ is a measure preserving diffeomorphism G_t(γ)→ G_s(γ).
Fixing a Haar system λ for a Lie groupoid G turns C_c^∞(G) into a (generally noncommutative) algebra C_c^∞(G,λ) with respect to the convolution product * defined by either of the following equivalent integrals:
(f*g)(γ_0) = ∫_G_t(γ_0) f( γ^-1) g (γγ_0) dλ_t(γ_0) = ∫_G_s(γ_0) f( γ_0γ^-1) g (γ) dλ_s(γ_0).
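The equality of the two integrals is simply the substitution γ↦γγ_0, i.e. an application of right-invariance: R_γ_0 is a measure-preserving diffeomorphism G_t(γ_0)→ G_s(γ_0), so
∫_G_t(γ_0) f(γ^-1) g(γγ_0) dλ_t(γ_0)(γ) = ∫_G_s(γ_0) f((γ'γ_0^-1)^-1) g(γ') dλ_s(γ_0)(γ') = ∫_G_s(γ_0) f(γ_0γ'^-1) g(γ') dλ_s(γ_0)(γ').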
The isomorphism class of this algebra does not depend on the choice of Haar system. Indeed, if λ and λ' are two Haar systems for G⇉ B, then there is a unique smooth, positive-valued function ρ on B such that λ' = (ρ∘ t) λ. Moreover ρ can be used to define a canonical algebra isomorphism C_c^∞(G,λ) → C_c^∞(G,λ'). Specifically:
f ↦ (ρ∘ s)^-1/2 (ρ∘ t)^-1/2 f : C_c^∞(G,λ) → C_c^∞(G,λ').
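As a consistency check (not needed in the sequel), one can verify directly that this map is multiplicative. Write Φ(f) ≔ (ρ∘ s)^-1/2(ρ∘ t)^-1/2 f, take the convolution on the left in C_c^∞(G,λ'), and use λ'_s(γ_0) = (ρ∘ t)λ_s(γ_0) together with s(γ_0γ^-1) = t(γ) and s(γ) = s(γ_0):
(Φ(f) * Φ(g))(γ_0) = ∫_G_s(γ_0) Φ(f)(γ_0γ^-1) Φ(g)(γ) ρ(t(γ)) dλ_s(γ_0)(γ)
= ρ(t(γ_0))^-1/2 ρ(s(γ_0))^-1/2 ∫_G_s(γ_0) f(γ_0γ^-1) g(γ) dλ_s(γ_0)(γ) = Φ(f*g)(γ_0),
the point being that the two factors ρ(t(γ))^-1/2 contributed by Φ(f)(γ_0γ^-1) and Φ(g)(γ) cancel against the density ρ(t(γ)) relating λ' to λ.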
Since the convolution algebra of G is independent of λ, we will tend to write C_c^∞(G) instead of C_c^∞(G,λ). Note it is also possible to skirt the discussion of Haar systems completely in defining the convolution algebra of G if one works with appropriate densities instead of functions, see <cit.>, Section 2.5.
§.§ Smooth groupoid actions
We now turn our attention to groupoid actions.
Let G⇉ B be a Lie groupoid. Let M be a possibly non-Hausdorff manifold together with a smooth map τ : M → B. Note the fiber product
G⋉ M ≔ G×_s,τM
is a closed submanifold of G × M because it is the preimage of the diagonal under the submersion s ×τ : G × M → B × B. A left action of G on M is a smooth product G⋉ M ∋ (γ, x) ↦γ· x ∈ M, such that τ(γ· x) = t(γ) for all (γ, x) ∈ G ⋉ M, τ(x) · x = x for all x ∈ M and (γ_1γ_2)· x = γ_1 · (γ_2 · x) for all γ_1,γ_2 ∈ G, x ∈ M with s(γ_1) = t(γ_2), s(γ_2) = τ(x).
Similarly, if M is equipped with a smooth map σ : M → B, a right action of G on M is a smooth product M ⋊ G ∋ (x,γ) ↦ x ·γ∈ M, where M⋊ G ≔ M ×_σ,t G, such that σ(x ·γ)=s(γ) for all (x,γ) ∈ M ⋊ G, x ·σ(x) =x for all x ∈ M and x · (γ_1γ_2) = (x·γ_1)·γ_2 for all γ_1,γ_2 ∈ G, x ∈ M with σ(x) = t(γ_1), s(γ_1)=t(γ_2).
Supposing G_1⇉ B_1 acts on M from the left with respect to a map τ : M → B_1 and G_2⇉ B_2 acts on M from the right with respect to the map σ : M → B_2, we say that the left and right action commute with one another if γ_1 · (x ·γ_2) = (γ_1 · x) ·γ_2 for all γ_1 ∈ G_1, γ_2 ∈ G_2, x ∈ M satisfying s_1(γ_1)=τ(x), σ(x)=t_2(γ_2). In particular, σ(γ_1 · x) = σ(x), τ(x ·γ_2)=τ(x).
Of course, we use the notation G⋉ M for G×_s,τM because it is the transformation groupoid. There is a small snag here because we have insisted that our groupoids have Hausdorff unit spaces and M, which is the unit space of G⋉ M, may be non-Hausdorff. In any event, we do not need to view G⋉ M as a groupoid in its own right here.
The following summarizes some of the data in Definition <ref>: G_1 acts on M from the left along τ : M → B_1 (with s_1 : G_1 → B_1), and G_2 acts on M from the right along σ : M → B_2 (with t_2 : G_2 → B_2).
One obvious example of commuting left and right actions are the left and right multiplication action of a Lie groupoid on itself. Importantly, commuting left/right actions also appear in one formulation of the notion of Morita equivalence for Lie groupoids, see <cit.>.
If G ⇉ B acts from the left on τ:M → B, then a right-invariant vector field X on G determines a corresponding vector field X^M on M (called the fundamental vector field) determined by
X^M(m) = d/dr (e^rXτ(m)) · m |_r=0, m ∈ M.
If X is complete, then X^M is complete and, indeed, the flow of X^M is given by:
e^rX^Mm = (e^rXτ(m)) · m r ∈, m ∈ M.
If a Lie groupoid G⇉ B with given right Haar system acts on τ : M→ B from the left, then C_c^∞(M) becomes a left C_c^∞(G)-module with respect to the product
C_c^∞(G) × C_c^∞(M) ∋ (f,ψ) ↦ f * ψ∈ C_c^∞(M)
defined by
(f*ψ)(x) = ∫_G_τ(x) f(γ^-1) ψ(γ· x) dλ_τ(x).
To see this formula does indeed define a product (especially in the non-Hausdorff case where things may be less clear), it is helpful to cast the formula (<ref>) in slightly more abstract form, as outlined below:
* Let G⇉ B be a Lie groupoid acting smoothly from the left on τ:M→ B. Here, G and M are possibly non-Hausdorff. Let π ,α : G ⋉ M → M be the (restrictions of) the second factor projection and the action map, respectively. Let ι : G ⋉ M → G ⋉ M be defined by ι(γ,m)=(γ^-1,γ· m). Then, ι is an order-2 diffeomorphism satisfying α=π∘ι. Indeed, although we do not make explicit use of it because its unit space M may be non-Hausdorff, α, π and ι are the target, source and inversion map for the transformation groupoid G ⋉ M. The full set of structure maps are as follows:
source: (γ,m) ↦ m
target: (γ,m) ↦γ· m
product: (γ',γ· m) (γ,m) = (γ'γ,m)
inversion: (γ,m) ↦ (γ^-1,γ· m)
* Let λ be the Haar system for G and let λ̃ be the obvious smooth system of measures for π determined by copying λ on fibers (one has π^-1(m)=G_τ(m)×{m} for m ∈ M). In other words, (π, λ̃) is the pullback of (s, λ) along τ, via the square with top map pr_1 : G ⋉ M → G and bottom map τ : M → B.
* Using the map ι, we also endow α : G⋉ M→ M with a smooth system of measures (indeed, this is the left Haar system for the transformation groupoid) so that there is a fiberwise integration map α_! : C_c^∞(G⋉ M) → C_c^∞(M) (Proposition <ref> (3)). By construction,
π_! ∘ι_* = α_! : C_c^∞(G ⋉ M) → C_c^∞(M).
* Given f ∈ C_c^∞(G) and φ∈ C_c^∞(M), define f⋉φ to be the restriction of f ⊗φ to G⋉ M ⊂ G× M.
Using Proposition <ref> (1) and (2), one has that f ⋉φ∈ C_c^∞(G⋉ M).
* In terms of the above notations, the left C_c^∞(G)-module structure (<ref>) of C_c^∞(M) may be expressed as follows:
f * ψ ≔ α_!(f ⋉ψ), f ∈ C_c^∞(G), ψ∈ C_c^∞(M).
Similarly, if G⇉ B acts on σ:M → B from the right, then C_c^∞(M) becomes a right C_c^∞(G)-module with respect to the product C_c^∞(M) × C_c^∞(G) ∋ (ψ,f) ↦ψ * f ∈ C_c^∞(M) defined by
(ψ * f)(x) = ∫_G_σ(x)ψ(x ·γ^-1) f(γ) dλ_σ(x).
Symmetrically to the case of left actions, there is a canonical way to equip the right action map β : M ⋊ G → M with a smooth system of measures in terms of which the right C_c^∞(G)-module structure may be alternatively expressed as follows:
ψ * f ≔ β_!(ψ⋊ f), f ∈ C_c^∞(G), ψ∈ C_c^∞(M).
Again, although we do not make explicit use of it, we remark that the structure maps of the right transformation groupoid M ⋊ G are given by:
source: (m,γ) ↦ m ·γ
target: (m,γ) ↦ m
product: (m,γ) (m ·γ,γ') = (m,γγ')
inversion: (m,γ) ↦ (m·γ,γ^-1)
If M carries commuting actions of G_1 from the left and G_2 from the right, then (f_1 * ψ)*f_2 = f_1 * (ψ * f_2) is satisfied for all f_1 ∈ C_c^∞(G_1), f_2∈ C_c^∞(G_2), ψ∈ C_c^∞(M) so that C_c^∞(M) has the structure of a C_c^∞(G_1)-C_c^∞(G_2)-bimodule. In the special case of a groupoid acting on itself from the left and right, this recovers the usual convolution product on C_c^∞(G).
In order to check the associativity of the bimodule structure, it is useful to pass through the two-sided transformation groupoid:
G_1 ⋉ M ⋊ G_2 ≔ { (γ_1,m,γ_2) ∈ G_1 × M × G_2 : s_1(γ_1)=τ(m), σ(m)=t_2(γ_2) }
whose structure maps are listed below:
source: (γ_1,m,γ_2) ↦ m ·γ_2
target: (γ_1,m,γ_2) ↦γ_1 · m
inversion: (γ_1,m,γ_2) ↦ (γ_1^-1, γ_1 · m ·γ_2, γ_2^-1)
product: (γ_1,m,γ_2) (γ_1',m',γ_2')
= (γ_1γ_1',(γ_1')^-1· m,γ_2γ_2') (where m ·γ_2 = γ_1'· m')
= (γ_1γ_1',m'·γ_2^-1,γ_2γ_2').
Because the actions commute, the square formed by α×𝕀 : G_1 ⋉ M ⋊ G_2 → M ⋊ G_2, 𝕀×β : G_1 ⋉ M ⋊ G_2 → G_1 ⋉ M, β and α commutes, with common diagonal the two-sided action map
μ(γ_1,m,γ_2) ≔ γ_1 · m ·γ_2;
that is, β∘(α×𝕀) = α∘(𝕀×β) = μ.
Introducing fiberwise measures in the natural way, commutativity holds as well at the level of bump functions and integration maps:
β_! ∘ (α×𝕀)_! = α_! ∘ (𝕀×β)_! = μ_! : C_c^∞(G_1 ⋉ M ⋊ G_2) → C_c^∞(M),
leading to the desired associativity property:
f*(ψ*g) = (f*ψ)*g = μ_!(f ⋉ψ⋊ g) f∈ C_c^∞(G_1),ψ∈ C_c^∞(M), g ∈ C_c^∞(G_2).
§.§ Dixmier-Malliavin for ℝ-actions
Suppose the additive group ℝ acts smoothly on a possibly non-Hausdorff smooth manifold M. Let X be the (non-branching, complete) vector field on M generating the action. The integrated form of the action is the representation π of C_c^∞(ℝ) on C_c^∞(M) defined by:
(π(f)ψ)(m) = ∫_ℝ f(-t) ψ(e^tXm) dt.
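That π is indeed a representation, i.e. multiplicative for the convolution product on C_c^∞(ℝ), is a direct computation; substituting u = s+t and using (f*g)(-u) = ∫_ℝ f(-t) g(t-u) dt, one finds
(π(f)π(g)ψ)(m) = ∫_ℝ∫_ℝ f(-t) g(-s) ψ(e^(s+t)Xm) ds dt = ∫_ℝ (f*g)(-u) ψ(e^uXm) du = (π(f*g)ψ)(m).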
Of course, this is a very special case of the integrated forms of groupoid actions just discussed. As in the more general case, especially when M is non-Hausdorff, in order to see that π(f) does indeed define a mapping C_c^∞(M) → C_c^∞(M), it helps to express this action in the more abstract form:
π(f) φ = α_!(f ⊗φ), f ∈ C_c^∞(ℝ), φ∈ C_c^∞(M).
Here, α : ℝ × M → M is the action map, made into a measured submersion so as to satisfy
(pr_2)_! ∘ι_* = α_! : C_c^∞(ℝ × M) → C_c^∞(M),
where ι(t,m)=(-t,e^tXm) and pr_2 is a measured submersion in the obvious way (copying Lebesgue measure).
As one expects, integration by parts enables one to move differentiation across the convolution map C_c^∞(ℝ) × C_c^∞(M) → C_c^∞(M). That is, π(f')φ = π(f) Xφ holds, where X is the vector field generating the ℝ-action. We sketch a proof of this (quite routine) fact mainly to indicate how it works in the formalism π(f) φ = α_!(f ⊗φ), which is convenient for rigorous treatment of the non-Hausdorff case.
Let ℝ act smoothly on a possibly non-Hausdorff smooth manifold M via the complete vector field X. Let π be the associated representation of C_c^∞(ℝ) on C_c^∞(M). Then, π(f')φ = π(f)Xφ for all f ∈ C_c^∞(ℝ), φ∈ C_c^∞(M).
It is clear that (pr_2)_!:C_c^∞(ℝ × M) → C_c^∞(M) maps the image of the (non-branching) vector field ⟨d/dt , 0 ⟩ on ℝ × M to zero. Conjugating by ι(t,m)=(-t,e^tXm), this gives that integration along the action map, α_! :C_c^∞(ℝ × M) → C_c^∞(M), maps the image of ⟨ -d/dt,X ⟩ to zero, giving the desired result.
The above lemma leads to the following preliminary factorization result. This was also the essential ingredient in <cit.>.
Let ℝ act smoothly on a smooth manifold M via a complete vector field X and let π be the representation of C_c^∞(ℝ) on C_c^∞(M) defined by (<ref>). Suppose f_0,f_1 ∈ C_c^∞(ℝ) and P(z) = ∑_m ≥ 0 a_m z^m ∈ ℂ[[z]] are such that δ_0 = f_0 + ∑_m ≥ 0 a_m f_1^(m), as in Lemma <ref>.
Then, for any φ∈ dom(P(X)) ⊂ C_c^∞(M), one has:
φ = π(f_0)φ + π(f_1) P(X) φ.
See Theorem 8.1 in <cit.>.
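For orientation, the formal computation behind this result runs as follows (the cited proof makes it rigorous; absolute convergence is exactly what membership in dom(P(X)) provides). Iterating Lemma <ref> gives π(f^(m))φ = π(f)X^mφ, so pairing the identity δ_0 = f_0 + ∑_m ≥ 0 a_m f_1^(m) against φ yields
φ = π(f_0)φ + ∑_m ≥ 0 a_m π(f_1^(m))φ = π(f_0)φ + π(f_1) ∑_m ≥ 0 a_m X^m φ = π(f_0)φ + π(f_1)P(X)φ.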
It will be important for us to know that, if M carries a (left) ℝ-action generated by a vector field X that furthermore commutes with a given right action of a groupoid G', then the operators P(X) : dom(P(X)) → C_c^∞(M) for P(z) ∈ ℂ[[z]] are right-linear for the C_c^∞(G')-module structure of C_c^∞(M). The case of interest for us is where M carries commuting actions of groupoids G and G' from the left and the right, respectively, and the vector field on M is the fundamental vector field X^M associated to some section X of the Lie algebroid AG. The desired statement is Corollary <ref> below, which follows directly from the three lemmas preceding it. The proofs of these lemmas are straightforward, and we omit them.
Let π : M → N be a measured submersion of possibly non-Hausdorff smooth manifolds. Let X be a non-branching smooth vector field on M and Y a non-branching smooth vector field on N and suppose X and Y are π-related. Let P(z) ∈ ℂ[[z]]. Then, π_! maps dom(P(X)) into dom(P(Y)) and
π_! ∘ P(X) = P(Y) ∘π_! on dom(P(X)).
Regarding the above, we note that, if X is non-branching, it is not automatic that Y is non-branching, as may be seen in the case where π is the quotient map from two disjoint copies of ℝ to the line with two origins. Of course, if Y is non-branching, it is also not automatic that X is non-branching (one could take N to be a single point).
Let M be a possibly non-Hausdorff smooth manifold and let N be a closed submanifold of M. Let X be a non-branching smooth vector field on M that is tangent to N and denote the restriction of X to N by Y. Let P(z) ∈ ℂ[[z]]. Then, the restriction map C_c^∞(M) → C_c^∞(N) maps dom(P(X)) into dom(P(Y)) and intertwines P(X) with P(Y).
Let M and N be possibly non-Hausdorff smooth manifolds. Let X be a non-branching smooth vector field on M and let Y = ⟨X,0⟩, a non-branching smooth vector field on M × N. Let P(z) ∈ ℂ[[z]]. Then, if f ∈ dom(P(X)) ⊂ C_c^∞(M) and g ∈ C_c^∞(N), we have f ⊗ g ∈ dom(P(Y)) and P(Y)(f ⊗ g) = (P(X)f) ⊗ g.
Let G'⇉ B be a possibly non-Hausdorff Lie groupoid acting smoothly from the right on a possibly non-Hausdorff smooth manifold M with respect to a smooth map σ : M → B. Let X be a smooth vector field on M which commutes with the right action of G' on M. Let P(z) ∈ ℂ[[z]]. Then, dom(P(X)) is a right C_c^∞(G')-submodule of C_c^∞(M), and P(X)(φ * f) = (P(X)φ) * f for all φ∈ dom(P(X)) and f ∈ C_c^∞(G').
Let β : M ⋊ G' → M be the action map. Put Y ≔ ⟨X,0⟩. Observe that Y is a vector field on M × G' which is tangent to M ⋊ G'. Put Z ≔ Y|_M ⋊ G'. Then we have
(P(X) φ) * f = β_!( (P(X) φ )⋊ f)
= β_!( (P(Y) (φ⊗ f))|_M ⋊ G') by Lemma <ref>
= β_!( P(Z) (φ⋊ f)) by Lemma <ref>
= P(X) β_!( φ⋊ f) by Lemma <ref>
= P(X) ( φ * f),
as required.
§.§ Ideals associated to invariant subsets
Next, we turn our attention to ideals in smooth groupoid algebras. A subset Z of the unit space of a groupoid is called invariant if every arrow with source in Z also has target in Z. Given a Lie groupoid G⇉ B with Haar system, a closed, invariant set Z ⊂ B and p ∈ ℕ∪{∞}, one may consider the pth order vanishing ideal J_Z^p ⊂ C_c^∞(G) consisting of the functions vanishing to (at least) pth order on G_Z ≔ s^-1(Z) = t^-1(Z). We discuss these ideals in greater detail below.
Let M be a possibly non-Hausdorff smooth manifold, let Z ⊂ M be a closed set, and let p ∈ ℕ∪{∞}. We say that φ∈ C_c^∞(M) vanishes to pth order on Z if it can be decomposed as φ = ∑_i=1^N (f_i ∘χ_i)_0 where U_i ⊂ M are chart neighbourhoods, χ_i : U_i → ℝ^d are diffeomorphisms, and f_i ∈ C_c^∞(ℝ^d) are such that f_i and all its partial derivatives of order <p vanish on χ_i(U_i ∩ Z). The subscript 0 above indicates the operation of extension by zero.
Let G⇉ B be a Lie groupoid with given Haar system acting from the left on a smooth manifold M with respect to a smooth map τ:M→ B so that C_c^∞(M) is a left C_c^∞(G)-module. Let Z ⊂ B be an invariant, closed subset and let p,q ∈ ℕ∪{∞}. If f∈ C_c^∞(G) vanishes to pth order on G_Z ≔ s^-1(Z) = t^-1(Z) and φ∈ C_c^∞(M) vanishes to qth order on M^Z ≔ τ^-1(Z), then f * φ vanishes to (p+q)th order on M^Z.
Let α : G ⋉ M → M be the action map. It is simple to check in charts that f ⊗φ∈ C_c^∞(G × M) vanishes to order p+q on G_Z × M^Z. Restricting to G ⋉ M ⊂ G × M, one has that f ⋉φ vanishes to order p+q on (G_Z × M^Z) ∩ (G ⋉ M) =α^-1(M^Z) (the latter equality uses the invariance of Z). The conclusion follows by expressing the convolution of f and φ in the form f * φ = α_!(f ⋉φ), as explained above.
As a corollary of the above proposition, one has that the pth order vanishing ideals
J^p_Z ≔ { f ∈ C_c^∞(G) : f vanishes to pth order on G_Z}
are indeed ideals for Z⊂ B closed and invariant and p ∈ ℕ∪{∞}. Moreover, J^p_Z * J^q_Z ⊂ J^p+q_Z. In <cit.>, it was shown that this containment is an equality in the case where Z ⊂ B is a closed, invariant submanifold. In particular, J_Z^∞*J_Z^∞ = J_Z^∞. One of our goals in the present article is to show, more generally, that J_Z^∞ is H-unital for Z⊂ B any closed, invariant subset.
We give one final definition/notation. As a point of clarification, what follows are straightforward groupoid analogs of the fact that the 2-by-2 matrices with vanishing bottom row, respectively with vanishing second column, constitute a right ideal, respectively a left ideal, in the algebra of 2-by-2 matrices. Assume as above that M carries commuting actions of G_1 from the left and G_2 from the right. Then, given a closed set K ⊂ B_1, we define
M^K ≔ τ^-1(K)
C_c^∞(M)^K ≔ {ψ∈ C_c^∞(M) : supp(ψ) ⊂ M^K}.
It is straightforward to check that C_c^∞(M)^K is a right C_c^∞(G_2)-submodule of C_c^∞(M). Symmetrically, if K ⊂ B_2 is a closed set, one may define
M_K ≔ σ^-1(K)
C_c^∞(M)_K ≔ {ψ∈ C_c^∞(M) : supp(ψ) ⊂ M_K}
and check that C_c^∞(M)_K is a left C_c^∞(G_1)-submodule of C_c^∞(M).
§ H-UNITALITY OF THE CONVOLUTION ALGEBRA OF A LIE GROUPOID
In this section, we prove the first of our main results, the H-unitality of C_c^∞(G) for any Lie groupoid G (Theorem <ref> from the introduction). Indeed we prove a slightly stronger statement involving smooth actions of groupoids. The latter results are applicable, for instance, in the context of Morita equivalences of Lie groupoids.
Throughout this section, G⇉ B and G'⇉ B' are Lie groupoids with given Haar systems and M is a smooth manifold carrying commuting actions of G from the left and G' from the right with respect to smooth maps τ:M → B and σ:M→ B'. Thus, C_c^∞(M) has the structure of a C_c^∞(G)-C_c^∞(G')-bimodule. Recall as well the notations:
M^K ≔ τ^-1(K)
C_c^∞(M)^K ≔ {ψ∈ C_c^∞(M) : supp(ψ) ⊂ M^K}
and the fact that C_c^∞(M)^K is a right C_c^∞(G')-submodule of C_c^∞(M).
Let G⇉ B be a Lie groupoid with given Haar system. Let X_1,…,X_k ∈ C_c^∞(B,AG), viewed as complete, right-invariant vector fields on G. Define u:ℝ^k × B → G by
u(t_1,…,t_k,b) = e^t_1X_1⋯ e^t_kX_kb.
Suppose that W̃ ⊂ ℝ^k × B is an open set which is mapped diffeomorphically by u onto an open set W ⊂ G. Then there is a linear bijection θ from C_c^∞(W̃) ⊂ C_c^∞(ℝ^k× B) to C_c^∞(W) ⊂ C_c^∞(G), given as pushforward by u followed by multiplication by a suitable Jacobian factor, with the property described below.
Let G act smoothly from the left on τ:M → B and define
(π(f)ψ)(m) = ∫_ℝ^k f(-t_k,…,-t_1,τ(e^t_1X_1^M⋯ e^t_k X_k^Mm)) ψ(e^t_1X_1^M⋯ e^t_k X_k^Mm) dt_1 ⋯ dt_k.
Then, one has:
π(f) ψ = θ(f) * ψ
for all f ∈ C_c^∞(W̃) and all ψ∈ C_c^∞(M).
We will generalize the following result from <cit.>.
Suppose G ⇉ B is a Lie groupoid with a given Haar system and M is a smooth manifold equipped with a left action of G. Thus, C_c^∞(M) has the structure of a C_c^∞(G)-module. Then, for every φ∈ C_c^∞(M), there exist f_1,…,f_N ∈ C_c^∞(G) and ψ_1, …, ψ_N ∈ C_c^∞(M) such that
φ = f_1 * ψ_1 + … + f_N * ψ_N.
Moreover, this factorization can be taken such that, for all i, supp(ψ_i) ⊂supp(φ) and supp(f_i) ⊂ W, where W is a prescribed open subset of G containing τ( supp(φ)).
As in the result above, we shall consider the more general situation of a smooth groupoid action. This is done with an eye to certain applications to smooth Morita equivalences which may be explored elsewhere. The case of interest in this article is that of a groupoid acting on itself.
Let 𝒫 be any finite subset of C_c^∞(M). Define K = ⋃_φ∈𝒫τ( supp(φ)) and let W be any open subset of G containing K. Then, there exist:
* f_1,…,f_N ∈ C_c^∞(W) ⊂ C_c^∞(G).
* a right C_c^∞(G')-submodule A_0 with 𝒫⊂ A_0 ⊂ C_c^∞(M)^K,
* right C_c^∞(G')-linear maps Ψ_1,…,Ψ_N : A_0 → C_c^∞(M) that do not increase supports
such that, for all φ∈ A_0, we have φ = f_1 * Ψ_1(φ) + … + f_N * Ψ_N(φ).
We remark that, in view of Example <ref>, (1) can in general be a stronger assertion than: f_i ∈ C_c^∞(G) and supp(f_i) ⊂ W. That being said, since the unit space of G admits a Hausdorff neighbourhood (Proposition <ref>), this distinction is not important here as W may without loss of generality be assumed to be Hausdorff.
Use the map τ : M → B to endow C_c^∞(M) with the structure of a left C^∞(B)-module as follows:
(ρ·φ)(x) = ρ(τ(x))φ(x) ρ∈ C^∞(B), φ∈ C_c^∞(M), x ∈ M.
It is simple to confirm that the left C^∞(B)-module structure on C_c^∞(M) commutes with the right C_c^∞(G')-module structure.
Note that K ⊂ B is compact by Proposition <ref>. First we argue that it suffices to prove this theorem under the additional hypothesis that the Lie algebroid AG is trivial (as a vector bundle) over some open neighbourhood of K. Indeed, assume this special case is already proven, and choose smooth, compactly-supported functions ρ_1,…, ρ_N : B → [0,1] such that ∑_i=1^N ρ_i = 1 holds on K and AG is trivial on a neighbourhood of supp(ρ_i) for each i. Define 𝒫^(i) = {ρ_i ·φ : φ∈𝒫} for i=1,…,N. By hypothesis, for 1 ≤ i ≤ N, there exist
* f_1^(i),…,f^(i)_N_i∈ C_c^∞(G) with supports contained in W,
* A right C_c^∞(G')-submodule A_0^(i)⊂ C_c^∞(M) with 𝒫^(i)⊂ A_0^(i)⊂ C_c^∞(M)^K,
* Right C_c^∞(G')-linear maps Ψ_1^(i),…,Ψ_N_i^(i) : A_0^(i)→ C_c^∞(M) that do not increase supports
such that, for all φ∈ A_0^(i),
φ = ∑_j=1^N_i f_j^(i) * Ψ_j^(i)(φ)
Define:
A_0 = {φ∈ C_c^∞(M)^K : ρ_i ·φ∈ A_0^(i) for i=1,…,N }.
By definition, 𝒫⊂ A_0 ⊂ C_c^∞(M)^K, and it is easy to check that A_0 is a right C_c^∞(G')-submodule.
For 1 ≤ i ≤ N, 1 ≤ j ≤ N_i, define maps Ψ_ij :A_0 → C_c^∞(M) by
Ψ_ij(φ) = Ψ_j^(i)(ρ_i ·φ).
It is easy to check these are linear maps which do not increase supports and are right C_c^∞(G')-linear. Moreover, if φ∈ A_0,
φ = ∑_i=1^N ρ_i ·φ
= ∑_i=1^N ∑_j=1^N_i f^(i)_j * Ψ_j^(i)(ρ_i ·φ)
= ∑_i=1^N ∑_j=1^N_i f_j^(i) * Ψ_ij(φ).
It remains to check the case where AG is trivial over a neighbourhood of K. Then, by the inverse function theorem, there exists an open set U ⊂ B with K ⊂ U and (complete, by Lemma <ref>) right-invariant vector fields X_1,…,X_k ∈ C_c^∞(B,AG) such that
u : ℝ^k × B → G, u(t_1,…,t_k,b) = e^t_1X_1⋯ e^t_k X_k b
maps W̃ ≔ (-1,1)^k × U diffeomorphically onto an open subset of G contained in W. Indeed, by shrinking W we can, and do, assume that W=u(W̃).
Let X_1^M,…, X_k^M denote the corresponding complete vector fields on M and let π_1,…, π_k denote the corresponding representations of C_c^∞(ℝ) on C_c^∞(M) given by (<ref>). Because the action of G on M commutes with the action of G' on M, the actions determined by the fundamental vector fields X_1^M,…,X_k^M commute with the action of G' as well.
By Lemma 4.2 in <cit.>, there is a linear bijection θ_W̃ from C_c^∞(W̃) ⊂ C_c^∞(ℝ^k× B) to C_c^∞(W) ⊂ C_c^∞(G), given as pushforward by u followed by multiplication by a suitable smooth Jacobian factor, such that
π(f) ψ = θ_W̃(f) * ψ
for all f ∈ C_c^∞(W̃) and all ψ∈ C_c^∞(M), where
(π(f)ψ)(m) ≔ ∫_ℝ^k f(-t_k,…,-t_1,τ(e^t_1X_1^M⋯ e^t_k X_k^Mm)) ψ(e^t_1X_1^M⋯ e^t_k X_k^Mm) dt_1 ⋯ dt_k.
In particular, fixing ρ∈ C_c^∞(B) such that ρ=1 on K (so that ρ·φ = φ for all φ∈ C_c^∞(M)^K) and supp(ρ) ⊂ U, we get that
π_1(f_1) ⋯π_k(f_k) ψ = θ_W̃(f_k ⊗…⊗ f_1 ⊗ρ) * ψ
for all f_1, …,f_k ∈ C_c^∞(-1,1)⊂ C_c^∞(ℝ) and all ψ∈ C_c^∞(M)^K (see also Lemmas 4.1 and 4.3 in <cit.>).
By Lemma <ref>, there exists a sequence of positive reals (c_m)_m ≥ 0 such that, for any formal series P(z) = ∑_m ≥ 0 a_m z^m with |a_m|≤ c_m, we have
𝒫⊂ dom(P(X_i_1^M) ⋯ P(X_i_n^M))
for all n, i_1,…,i_n ≥ 1. Using Lemma <ref>, write δ_0 = f_0 + ∑_m ≥ 0 a_m f_1^(m) where f_0,f_1 ∈ C_c^∞(-1,1) ⊂ C_c^∞(ℝ) and |a_m| ≤ c_m. By repeated application of Theorem <ref>, we obtain
φ
= π_1(f_0) φ + π_1(f_1) P(X^M_1) φ
= π_1(f_0) π_2(f_0) φ + π_1(f_0) π_2(f_1) P(X^M_2) φ
+ π_1(f_1) π_2(f_0) P(X^M_1) φ + π_1(f_1)π_2(f_1)P(X^M_2)P(X^M_1) φ
⋮
= ∑_i_1, …,i_k ∈{0,1}π_1(f_i_1) ⋯π_k(f_i_k) P(X^M_k)^i_k⋯ P(X^M_1)^i_1φ
for all φ∈𝒫, with the understanding that P(X_i)^0=𝕀. Thus, using (<ref>),
φ = ∑_i_1, …,i_k ∈{0,1} f_i_1,…,i_k * Ψ_i_1,…,i_k ( φ )
where
f_i_1,…,i_k ≔ θ_W̃(f_i_k ⊗⋯⊗ f_i_1 ⊗ρ) ∈ C_c^∞(W) ⊂ C_c^∞(G)
A_0 ≔ ⋂_i_1, …,i_k ∈{0,1} dom(P(X^M_k)^i_k⋯ P(X^M_1)^i_1) ∩ C_c^∞(M)^K
Ψ_i_1,…,i_k ≔ P(X^M_k)^i_k⋯ P(X^M_1)^i_1|_A_0
By Corollary <ref>, we have that A_0 is a right C_c^∞(G')-submodule of C_c^∞(M)^K and that the maps Ψ_i_1,…,i_k are right C_c^∞(G')-linear, as required. They do not increase supports by Proposition <ref>.
Specializing to the case where G acts on itself from the right and left, Theorem <ref> amounts to the following.
Let G ⇉ B be a Lie groupoid with a given Haar system. Let 𝒫 be a finite subset of C_c^∞(G) and put K ≔ ⋃_φ∈𝒫 t(supp(φ)) (by Proposition <ref>, K is a compact subset of B) and let W⊂ G be an open set with K ⊂ W (by Lemma <ref>, we may assume W is Hausdorff). Then, there exist:
* f_1,…,f_N ∈ C_c^∞(W) ⊂ C_c^∞(G),
* a right ideal A_0 ⊂ C_c^∞(G) with 𝒫⊂ A_0 ⊂ C_c^∞(G)^K,
* right C_c^∞(G)-linear maps Ψ_1,…,Ψ_N:A_0 → C_c^∞(G) that do not increase supports
such that, for all φ∈ A_0, we have φ = f_1*Ψ_1(φ)+…+f_N*Ψ_N(φ).
The desired result follows.
For any Lie groupoid G with given Haar system, the smooth convolution algebra C_c^∞(G) is H-unital.
This follows from Corollary <ref> and Proposition <ref>, taking ϕ to be the map
φ↦ f_1 ⊗Ψ_1(φ) + … + f_N ⊗Ψ_N(φ).
§ FACTORIZATION OF FLAT FUNCTIONS RELATIVE TO A SUBMERSION
In this section, we extend the following known factorization result for functions that are “flat” on a given closed set so as to allow factorization to be performed relative to a submersion.
Suppose M is a Hausdorff smooth manifold and Z ⊂ M is closed. Let I_Z^∞ denote the ideal in C^∞_c(M) consisting of functions that vanish to infinite order (i.e. vanish with all derivatives) on Z. Then, given φ_1,…,φ_N ∈ I_Z^∞, there exist ρ, ψ_1,…,ψ_N ∈ I_Z^∞ such that φ_i=ρψ_i for i=1,…,N.
This follows from Theorem 3.2 of <cit.>. See also the proof of Theorem 6.2 in <cit.> (“property [F]” is defined on pp. 612).
Our extended factorization result is Theorem <ref> below, essentially a “with parameters” version of the preceding result. It seems likely that Theorem <ref> is known to experts but, as a reference could not be located, we provide a proof.
Let us first record the following elementary extension principle.
Let Z ⊂ ℝ^d be a closed set and let f ∈ C^∞(W), W ≔ ℝ^d ∖ Z. If f and all its partial derivatives vanish at the boundary of W, then f extends by zero to a smooth function on ℝ^d vanishing to infinite order on Z.
Let g be the extension by zero of f.
Using induction and the standard result that a function on ℝ^d is C^1 if and only if each of its first order partials exists and is continuous, one only needs to check that the first order partials of g exist and vanish on Z. One may therefore reduce to the case d=1, where the desired conclusion may be deduced from the mean value theorem.
The preceding lemma leads directly to a condition under which one may form the quotient of two smooth functions vanishing to infinite order on a given closed set.
Let Z ⊂ ℝ^d be a closed set. Suppose f, g ∈ C^∞(ℝ^d) vanish to infinite order on Z and g > 0 on W ≔ ℝ^d ∖ Z. If, for every α∈ ℕ^d and m ∈ ℕ, the function ∂^α f/g^m ∈ C^∞(W) vanishes at the boundary of W, then the extension by zero of f/g is a smooth function on ℝ^d vanishing to infinite order on Z.
Let ℱ⊂ C^∞(W) denote the C^∞(ℝ^d)-linear span of the functions ∂^α f/g^m for α∈ ℕ^d, m ∈ ℕ. By assumption, the functions in ℱ all vanish at the boundary of W. Observe that ℱ is closed under taking partial derivatives. Indeed, if α∈ ℕ^d is a multi-index, m ∈ ℕ, h ∈ C^∞(ℝ^d) and ∂ is one of the first-order partials, then
∂(h ∂^α f/g^m) = (∂ h) ∂^α f/g^m + h (∂∘∂^α) f/g^m - m(∂ g)h ∂^α f/g^m+1.
Thus, thinking of f/g as a smooth function on W, we have by induction that all of its higher order partial derivatives vanish at the boundary of W. Thus, f/g extends to a smooth function on ℝ^d that vanishes to infinite order on Z by Lemma <ref>.
We use the existence of a regularized distance function for an arbitrary closed subset of Euclidean space. Such distance functions appear in the proof of the Whitney extension theorem given in <cit.> and can be explicitly constructed using cubical meshes. Note that Wodzicki <cit.> also uses these functions by way of an appeal to <cit.>.
Let Z ⊂ ℝ^d be closed, W ≔ ℝ^d ∖ Z and let δ(x) denote the distance from x to Z. Then, there is a smooth function Δ : W → (0,∞) such that:
* there exist constants c_2>c_1>0 such that c_1 δ(x) ≤Δ(x) ≤ c_2 δ(x), x ∈ W,
* for each α∈ ℕ^d, there is a constant B_α≥ 0 such that |∂^αΔ(x)| ≤ B_αδ(x)^1-|α|, x ∈ W.
The constants c_1,c_2, B_α are independent of Z.
This is Theorem 2 on pp. 171 of <cit.>.
Next, we apply Theorem <ref> to construct smooth defining functions ρ∈ C^∞(ℝ^d) for arbitrary closed sets Z ⊂ ℝ^d with rate of vanishing governed by an arbitrary smooth function g : (0,∞) → (0,∞) vanishing together with all derivatives at 0.
Let Z ⊂ ℝ^d be closed and put W ≔ ℝ^d ∖ Z. Let Δ:W → (0,∞) be a regularized distance function for Z, as in Theorem <ref>. Then, given any g ∈ C^∞(0,∞) vanishing together with all derivatives at 0, the extension by zero of g∘Δ is a smooth function ρ on ℝ^d vanishing to infinite order on Z.
In view of Lemma <ref>, we just need to show that all of the partial derivatives of g∘Δ vanish at the boundary of W. For nonzero α∈ ℕ^d, we have the following formula for the αth partial derivative of a composition:
∂^α(g ∘Δ) = ∑_k=1^|α| (g^(k)∘Δ) ∑_β∈ J(α,k) C(α,k,β) ∏_j=1^k ∂^β(j)Δ
where C(α,k,β) ∈ ℕ and
J(α,k) ≔ {β = (β(1),…, β(k)) ∈ (ℕ^d)^k : β(j) ≠ 0, j=1,…,k and ∑_j=1^kβ(j) = α}.
See the proof of Lemma 3.1 in <cit.> as well as <cit.>. Applying the estimates of Theorem <ref> to the above expression for ∂^α(g∘Δ), it is straightforward to derive an estimate of the form
|∂^α (g ∘Δ)| ≤∑_k=1^|α| C(k) (g^(k)∘Δ) Δ^k-|α|,
where C(k) ≥ 0. Since g(t)t^-p→ 0 as t→ 0^+ for all p ∈ ℕ and Δ vanishes at the boundary of W, the estimate above shows that ∂^α(g ∘Δ) vanishes at the boundary of W as needed.
Lemma <ref>, together with the following result, allows one to construct smooth defining functions for a closed set Z ⊂^d which vanish to infinite order on Z “as slowly as desired”.
Let (f_k) be a sequence of functions on (0,∞) such that lim_t → 0^+ f_k(t) t^-p = 0 for all p ∈ ℕ. Then, there exists a smooth, positive-valued function g on (0,∞) that vanishes together with all its derivatives at 0 such that lim_t → 0^+ f_k(t)/g(t) = 0 for all k.
See Lemma 6.7 in <cit.> for a complete proof and references to the literature.
Let Z ⊂ ℝ^k be closed, W ≔ ℝ^k ∖ Z. Suppose f ∈ C^∞(ℝ^k × ℝ^ℓ) vanishes to infinite order on Z × ℝ^ℓ. Then, there exist ρ∈ C^∞(ℝ^k) vanishing to infinite order on Z and strictly positive on W, and g ∈ C^∞(ℝ^k × ℝ^ℓ) vanishing to infinite order on Z × ℝ^ℓ, such that f(x,y) = ρ(x) g(x,y) for all (x,y) ∈ ℝ^k × ℝ^ℓ.
Because f vanishes with all its derivatives on Z × ℝ^ℓ, it follows (e.g. from Taylor's theorem) that (x,y)↦ f(x,y)δ(x)^-p vanishes at the boundary of W × ℝ^ℓ for any p ∈ ℕ, where δ(x) denotes the distance from x to Z. The same is true if δ is replaced by a regularized distance function Δ : W → (0,∞) (Theorem <ref>). Given α∈ ℕ^k × ℕ^ℓ, m ∈ ℕ, r>0, define f_α,m,r : (0,∞)→(0,∞) by
f_α,m,r(t) = sup{|∂^α f(x,y)|^1/m : |x|,|y| ≤ r and Δ(x) ≤ t }.
By design, f_α,m,r is an increasing, continuous function satisfying
lim_t→0^+ f_α,m,r(t) t^-p = 0
for all p ∈ ℕ. Thus, by Lemma <ref>, there exists a smooth function g : (0,∞)→(0,∞) that vanishes with all its derivatives at 0 such that
lim_t→0^+ f_α,m,r(t)/g(t) = 0
for all α∈ ℕ^k× ℕ^ℓ, m∈ ℕ, r>0. By Lemma <ref>, g ∘Δ extends by zero to a smooth function ρ : ℝ^k → [0,∞) which vanishes to infinite order on Z. The estimate
|∂^α f(x,y)/ρ(x)^m| ≤ (f_α,m,r(Δ(x))/g(Δ(x)))^m
(valid for |x|,|y| ≤ r) shows that (x,y) ↦∂^α f(x,y)/ρ(x)^m vanishes at the boundary of W × ℝ^ℓ for all α∈ ℕ^k× ℕ^ℓ, m∈ ℕ and so, by Theorem <ref>, (x,y) ↦ f(x,y)/ρ(x) extends by zero to a smooth function on ℝ^k× ℝ^ℓ vanishing to infinite order on Z × ℝ^ℓ.
We are now in a position to state and prove the main result of this section.
Let π : M → B be a submersion where B is Hausdorff and M is possibly non-Hausdorff. View C_c^∞(M) as a C^∞(B)-module with module structure given by f ·φ ≔ (f ∘π)φ for f ∈ C^∞(B), φ∈ C_c^∞(M). Let Z ⊂ B be a closed set and let φ_1,…, φ_N ∈ C_c^∞(M) vanish to infinite order on π^-1(Z) (Definition <ref>). Then, there exist ρ∈ C^∞(B) vanishing to infinite order on Z and strictly positive on W ≔ B ∖ Z, and ψ_1,…,ψ_N ∈ C_c^∞(M) vanishing to infinite order on π^-1(Z), such that φ_i=ρ·ψ_i for i=1,…,N.
Suppose f,ρ_1,ρ_2 are smooth functions on ℝ^k × ℝ^ℓ vanishing to infinite order on {0}× ℝ^ℓ and that 0 < ρ_1 < ρ_2 on the complement of {0}× ℝ^ℓ. We remark that, if f/ρ_1 extends to a smooth function on ℝ^k × ℝ^ℓ vanishing to infinite order on {0}× ℝ^ℓ, then f/ρ_2 also extends to a smooth function on ℝ^k × ℝ^ℓ vanishing to infinite order on {0}× ℝ^ℓ (see Lemma 6.9 in <cit.>). This remark, together with the fact that B is Hausdorff and therefore admits smooth partitions of unity, allows one to (i) consider only the local problem, where M and B are Euclidean spaces and π is a projection, and (ii) consider the case of only a single function φ∈ C_c^∞(M). The local case of a single function is given by Theorem <ref>.
§ H-UNITALITY OF IDEALS ARISING FROM INVARIANT SUBSETS
In this final section, we prove the second of our main results, the H-unitality of infinite order vanishing ideals in smooth groupoid algebras (Theorem <ref> from the introduction). This leads directly to an excision principle for invariant subsets.
Applications of this excision result will be considered elsewhere.
Recall from Wodzicki's seminal paper on H-unitality that, if Z is a closed subset of a Hausdorff smooth manifold M, then the ideal in C^∞(M) consisting of functions that vanish together with all derivatives on Z is H-unital (Theorem 6.1, <cit.>). The goal here is to obtain the noncommutative analogue.
The main ingredient of the proof is the following direct corollary of Theorem <ref> (take the submersion π to be the target submersion).
If G ⇉ B is a Lie groupoid, Z ⊂ B is an invariant, closed subset and φ_1,…,φ_N belong to the infinite order vanishing ideal J_Z^∞⊂ C_c^∞(G) (see Section <ref>), then there exist a smooth function ρ∈ C^∞(B), vanishing to infinite order on Z and positive on B∖ Z, and ψ_1,…,ψ_N ∈ J_Z^∞ such that φ_i=ρ·ψ_i for i=1,…,N.
Note that, if ρ∈ C^∞(B) is nonvanishing on B∖ Z, then φ↦ρ·φ : J^∞_Z → J^∞_Z is clearly injective. Indeed, if G is Hausdorff and Z is a closed submanifold, then φ↦ρ·φ is injective on all of C_c^∞(G). When G is non-Hausdorff and Z⊂ B is a closed submanifold (or more generally has empty interior), then injectivity of φ↦ρ·φ on all of C_c^∞(G) may fail, as the following example shows.
Let B = ℝ and let G = ℝ^× ⊔ ℤ, the “line with infinitely many origins” with its obvious non-Hausdorff smooth manifold structure. Then, G is a Lie groupoid over B where s=t is the obvious projection G → B and multiplication G^(2) = ℝ^× ⊔ ℤ^2 → G is (𝕀_ℝ^× ⊔ addition). Define φ = 0 ⊔ f where f : ℤ →{-1,0,1} is given by f(1)=1, f(-1)=-1, f(n)=0 for n ≠± 1. It is easy to see that φ is a (nonzero) element of C_c^∞(G). However, ρ·φ = 0 for any ρ∈ C^∞(ℝ) vanishing at 0.
As in Section 5, we deduce H-unitality from a technical result designed to be used with Proposition <ref>.
Let G ⇉ B be a Lie groupoid with a given Haar system and let Z ⊂ B be a G-invariant, closed subset. Let 𝒫 be a finite subset of J_Z^∞. Put K ≔ ⋃_φ∈𝒫 t(supp(φ)) and let W⊂ G be an open set with K ⊂ W. Then, there exist:
* f_1,…,f_N ∈ J_Z^∞⊂ C_c^∞(G),
* a right ideal A_0 ⊂ J_Z^∞ with 𝒫⊂ A_0 ⊂ C_c^∞(G)^K,
* right C_c^∞(G)-linear maps Ψ_1,…,Ψ_N:A_0 → J_Z^∞ that do not increase supports
such that, for all φ∈ A_0, we have φ = f_1*Ψ_1(φ)+…+f_N*Ψ_N(φ).
Let 𝒫 = {φ_1,…,φ_n}. From Corollary <ref> above, there exists ρ∈ C^∞(B), vanishing to infinite order on Z and positive on B∖ Z, and ψ_1,…,ψ_n ∈ J^∞_Z such that φ_i = ρ·ψ_i for i=1,…,n. We have that φ↦ρ·φ is a right C_c^∞(G)-linear bijection of J_Z^∞ onto the right ideal ρ· J_Z^∞⊂ J_Z^∞. Let M_1/ρ: ρ· J_Z^∞→ J_Z^∞ denote the inverse isomorphism of right C_c^∞(G)-modules. By design, M_1/ρ(ψ_i)=φ_i for i=1,…,n.
From Corollary <ref>, there exist:
* g_1,…,g_N ∈ C_c^∞(W) ⊂ C_c^∞(G),
* a right ideal B_0 ⊂ C_c^∞(G) with 𝒫⊂ B_0 ⊂ C_c^∞(G)^K,
* right C_c^∞(G)-linear maps Φ_1,…,Φ_N:B_0 → C_c^∞(G) that do not increase supports
such that, for all φ∈ B_0, we have φ = g_1*Φ_1(φ)+…+g_N*Φ_N(φ). Define:
f_i ≔ g_i ·ρ, A_0 ≔ B_0 ∩ρ· J_Z^∞, Ψ_i ≔ M_1/ρ∘Φ_i|_A_0, i=1,…,N
so that
φ = g_1*Φ_1(φ)+…+g_N*Φ_N(φ)
= g_1*(ρ·Ψ_1(φ))+…+g_N*(ρ·Ψ_N(φ))
= f_1*Ψ_1(φ)+…+f_N*Ψ_N(φ)
and the f_i, A_0 and Ψ_i are as needed.
As a corollary, we obtain the desired H-unitality result and its consequence for excision.
For any Lie groupoid G⇉ B with a given Haar system and any G-invariant, closed subset Z ⊂ B, the associated ideal J_Z^∞⊂ C_c^∞(G) is H-unital. Consequently, the short exact sequence
0 → J_Z^∞→ C_c^∞(G) → C_c^∞(G)/J_Z^∞→ 0
induces a corresponding long exact sequence in cyclic/Hochschild homology.
Follows from Proposition <ref> and Corollary <ref>.
|
http://arxiv.org/abs/2307.01555v1
|
20230704081221
|
A comparative study of MOND and MOG theories versus the κ-model: An application to galaxy clusters
|
[
"Gianni Pascoli"
] |
astro-ph.CO
|
[
"astro-ph.CO"
] |
A comparative study of MOND and MOG theories versus the κ-model: An application to galaxy clusters
G. Pascoli
Email: pascoli@u-picardie.fr
Faculty of Sciences
Department of Physics
Université de Picardie Jules Verne (UPJV)
33 Rue Saint Leu, Amiens, France
Many models have been proposed to minimize the
dark matter (DM) content in various astronomical objects at every scale in the Universe. The most widely
known model is MOdified Newtonian Dynamics (MOND). MOND was first published by Mordehai
Milgrom in 1983 (Milgrom, 1983; 2015; see also Banik and Zhao, 2022 for a
review). A second concurrent model is modified gravity (MOG), which is a covariant
scalar-tensor-vector (STVG) extension of general relativity (Moffat, 2006;
2020). Other theories also exist but have not been broadly applied to a large list of astronomical objects (Mannheim and
Kazanas, 1989; Capozziello and De Laurentis, 2012; O'Brien and Moss, 2015;
Verlinde, 2017). A new model, called the κ-model, based on very elementary phenomenological considerations, has recently been proposed in the astrophysics field. This model shows that the presence of dark matter can be considerably minimized with regard to the dynamics of galaxies (Pascoli, 2022a,b). The κ-model belongs to
the general family of theories descended from MOND. Under this family of theories, there is
no need to develop a highly uncertain dark matter sector of physics to
explain the observations.
Keywords: dark matter, MOND, modified gravity, κ-model, galaxies,
galaxy clusters
§ INTRODUCTION
The dark matter (DM) paradigm is considered to simply explain the dynamics in individual galaxies and galaxy clusters. Astrophysicists have been searching for DM evidence for years. However, the
quantity of dark matter required is immense, and the ratio of the dark
matter (DM) to the baryonic component (B), (DM/B), could largely exceed 10 in a few
galaxy clusters, raising serious doubts about the validity of this hypothesis. A rapid explanation to
remedy this problem is to say that a large quantity of invisible baryonic
matter is not counted in the galaxies and galaxy clusters. Unfortunately, this immediate
solution is may be acceptable up to a factor of 2, but fully irrealistic up
to a factor of 10. What is then the nature of dark matter ? To
date, no particle of dark matter has never been detected in lab
(DAMA/LIBRA Collaboration, 2022), even though researchers have built larger and
more sensitive detectors (Xenon Collaboration, 2023). This is rather an
intriguing state of affairs, and the DM paradigm sounds suspiciously similar to phlogiston and the
aether of the nineteenth century. Additionally, some agnostic physicists believe
that DM does not exist at all and have instead proposed alternative models of
gravity. As a result, all sorts of theories have been built in order to remove dark
matter in all astrophysical systems. There are two theories that stand out because they have
been applied to numerous concrete situations. The first is modified
Newtonian dynamics (MOND). MOND diverges from the standard Newton's laws at extremely low
accelerations, which are a characteristic of the outer regions of galaxies
(Milgrom, 1983, 2015). MOND postulates a modification to Newton's second
law, such that the
force applied on a particle is no longer proportional to the
acceleration a, but rather to its square a^2, when the accelerations are smaller than the critical limit a_0=10^-10 m s^-2 (Milgrom,
1983). This model effectively explains the dynamics of individual galaxies without dark
matter (Famaey and McGaugh, 2012).
In modified gravity (MOG) (STVG or scalar-tensor-vector gravity), the approach is
very different. The structure of space-time is described by the usual metric
tensor g_μν, complemented by a vector field, ϕ_μ, and two scalar
fields, G and μ, which represent a dynamical version of
the Newtonian gravitational constant and the mass of the vector field,
respectively (Moffat, 2006; 2020). The vector field part produces a Yukawa-like modification of
the gravitational force
due to a point source. This model accurately explains not only the
dynamics of galaxies but also the dynamics in galaxy clusters without dark
matter (Moffat, 2020).
The common and main objective of MOND and MOG theories is to eliminate in whole or
in part the dark matter in the Universe. The agreement between these two
models and the observational data is very remarkable for galactic dynamics; however, the situation is distinct for galaxy clusters, where MOG appears to have an advantage over its competitor MOND (Brownstein and Moffat,
2006). MOG manages to eliminate all dark matter content in these
objects, but MOND still needs to invoke some form of invisible matter in galaxy clusters (McGaugh, 2015).
However, this does not mean that MOND has been falsified, because
the MONDian world is very rich and there are numerous extensions, such as
the promising extended MOND (EMOND) (Hodson and Zhao, 2017a, b)[The
most complete theory regarding the MONDian world is that of Skordis and
Złośnik (2021), but the EMOND formalism is much easier to manipulate (the
theory of Skordis and Złośnik belongs to the large class of TeVeS
theories, such as MOG, with many free parameters and some
disguised aspects of DM).]. Moreover, we can speculate whether MOG does not reintroduce DM
in a disguised manner (the vector field ϕ_μ is massive).
Following this simple statement, only MOND is truly free of DM,
and the path followed by EMOND is potentially preferable.
The third model studied here is the κ-model; it is based on a
phenomenological and MONDian procedure whose main aim is to
simultaneously explain the dynamics of individual galaxies by minimizing the
dark matter content and by maintaining the formal aspect of the Newtonian law of
gravitation. It is based on a relational consideration; the mean density of
matter is estimated at a very large scale, and the surroundings of a given observer
influence the measurements made for the determination of the velocities and accelerations. In this regard, the
observations depend not only on the reference frame of
the observer but also on his environment. Thus, the κ-model uses a
holistic or Machian approach. The velocities and accelerations that are
environment-dependent are renormalized, and the observer measures apparent
quantities depending on a coefficient denoted κ. An empirical (and universal)
relationship is provided between the coefficient κ and the mean
density. The coefficient κ intervenes in the acceleration
term of the dynamics equation and imparts a MOND-type appearance to the κ-model.
However, a physical support is now
provided; this is the environment of the observer, which distorts the
measurements. Thus, a naive analogy is that of an observer placed in a medium of given
refraction index, who sees a magnification of both the size and the velocity of any object.
However, this very simplistic comparison is not to be taken at face value because in a medium of given refraction index, the light is attenuated and the
images can be extremely blurred. On the contrary, in the κ-model framework, no such
medium exists; the light is not attenuated and still
propagates in a straight line with the speed c, constant
and independent of frequency, as measured in vacuum by every observer (Pascoli, 2022a)[Further information is summarized in the two appendices A and B placed at the end of this paper. Initially, we start with an isotropic and homogeneous base space Σ. In this space the light propagates in straight line. However, any observer is located at the center of a homogeneous and isotropic universe κΣ, homothetic to the base space Σ; the coefficient κ is dependent on the environment of this observer (the mean density of matter surrounding the observer). Unfortunately, it is very difficult to have a global view of the situation with a unique ℝ^3-type space. Rather, the image needs to be that of a fiber bundle Σ×κ with Σ as the base and κ as the fiber (specifically, each observer projects all structures present in the Universe on his own stratum labelled by κ). Within the framework of this fiber bundle, DM effects can be re-interpreted as a κ-lensing, whose the Bullet Cluster is an illustrative example (paragraph 3).]. The great
interest of the κ-model is that no arbitrary parameter is introduced,
contrary to MOG, which has two outer free parameters
(these two models having a universal relationship for the fits).
Another advantage of the κ-model is that it permits passing directly,
for any type of galaxy, from the data on spectroscopic velocity measurements
to the mean densities (and inversely), without the ambiguous intermediary of the
mass-to-light ratio. The relationship between the spectroscopic velocities and the mean densities is direct in the κ-model. Strongly contrasting with this view, DM can fit almost all
rotational curves with any mean baryonic density profiles due to its very large flexibility, and for this reason the DM paradigm
has unfortunately no predictive value. Eventually the κ-model
can easily help to understand the weaker "DM effect" in the regions where the mean density is high (globular clusters, core
of the galaxies) and, conversely, the stronger "DM effect" in the regions where the mean
density is weaker (outer regions of the galaxies, low brightness galaxies and
galaxy clusters). More specifically, the κ-model predicts that when the mean density ρ̅ in a large-scale object (galaxy or galaxy cluster) is smaller than a critical value ∼ 4 × 10^-24 g cm^-3[This value corresponds to the mean density of the baryonic matter detected in the solar neighborhood (or, correspondingly, ∼ 70 M_⊙ pc^-2, assuming a vertical thickness ∼ 1 kpc; Famaey and McGaugh, 2012, Fig. 19).], the measured velocities appear magnified compared to the estimated Newtonian velocities.
The κ-model has already been applied to various types of galaxies, such as large or small low surface brightness galaxies (LSBs) and high surface brightness galaxies (HSBs) (Pascoli, 2022 a,b). The κ-model fits and the observational curves have been shown to be in fairly good agreement. Even though this study needs to be extended to a larger database, such as SPARC (Lelli,
McGaugh and Schombert; 2017), these initial results and deductions provided by the κ-model are highly encouraging.
While the κ-model appears to succeed at explaining the dynamics of numerous
individual galaxies (Pascoli, 2022), to date this model has not been
tested in the galaxy cluster field, and this is the goal of the present
study. We show that the κ-model can greatly lower the major
part of the dark matter content in the galaxy clusters. The current ratio DM/B
amounts to around 10 in the outskirts of these objects, and the κ-model
can reduce this ratio to approximately 0-1. The situation is more difficult in the inner regions of the galaxy clusters, but by lowering the gas temperature in these regions, the problem can be easily solved. The κ-model provides a good prediction for the
ratio DM/B in the outer regions of galaxy clusters, without introduction of numerous free parameters other than those linked to the mean density of baryonic
matter (Pascoli, 2022 a,b). Likely, this result is not a mere coincidence, and
supports the consideration of our new proposal. Here, the κ-model is applied
to a sample of galaxy clusters (paragraph 2) and then, eventually, to the
Bullet Cluster (paragraph 3).
§ GALAXY CLUSTERS
The gas density profile for a galactic cluster can be approximately fitted by
the following function (Cavaliere, and Fusco-Femiano, 1976):
ρ(r)=ρ_M[1+(r/r_c)^2]^-3β/2
where ρ(r) is the intracluster medium (ICM) mass density profile and ρ_M is the maximal value taken by ρ(r). r_c and
β are fit parameters for the distribution of the density.
Due to this isotropic density distribution, the gas mass contained in a sphere of radius r is as follows:
M_gas(r)= 4π∫_0^r dr' r'^2 ρ(r') = (4πρ_M r^3/3) _2F_1(3/2, 3β/2, 5/2, -(r/r_c)^2)
where _2F_1 is the Gauss hypergeometric function.
When r ≫ r_c and β<1, we can approximate this formula by the more manipulable relationship, as follows:
M_gas(r)=4 πρ_M r_c^3/3(1-β)(r/r_c)^3(1-β)
Unfortunately, this relationship clearly diverges as r ⟶∞. A cut-off for the distribution of gas, necessary for a finite spatial extent, needs to be introduced. Following Brownstein and Moffat (2006), this cut-off, denoted r_out, is chosen equal to the radius at which the density drops to 10^-28 g cm^-3, or 250 times the mean cosmological density of the baryons.
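For concreteness, here is a minimal numerical sketch (ours, not the paper's; all function names are hypothetical) of the β-model density of eq. (1), the enclosed gas mass of eq. (2), and the location of the cut-off r_out, using COMA's parameters from Table 1; it assumes NumPy and SciPy are available, and should land close to the r_out and M_gas values quoted for COMA in Table 1.

import numpy as np
from scipy.special import hyp2f1
from scipy.optimize import brentq

KPC_CM = 3.0857e21            # centimetres per kiloparsec
MSUN_G = 1.989e33             # grams per solar mass
rho_M, beta, r_c = 0.06e-25, 0.654, 242.3   # COMA (Table 1); r_c in kpc

def rho(r_kpc):
    """Beta-model ICM density, eq. (1), in g cm^-3."""
    return rho_M * (1.0 + (r_kpc / r_c) ** 2) ** (-1.5 * beta)

def M_gas(r_kpc):
    """Enclosed gas mass, eq. (2), in solar masses."""
    r_cm = r_kpc * KPC_CM
    return (4.0 * np.pi / 3.0) * rho_M * r_cm**3 \
        * hyp2f1(1.5, 1.5 * beta, 2.5, -(r_kpc / r_c) ** 2) / MSUN_G

r_out = brentq(lambda r: rho(r) - 1e-28, 10.0, 1e5)   # density cut-off radius
print(f"r_out ~ {r_out:.0f} kpc, M_gas(r_out) ~ {M_gas(r_out):.2e} Msun")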
On the other hand, assuming that the
cluster is in hydrostatic (isothermal) equilibrium, the Newtonian dynamical mass is as follows (Brownstein and Moffat, 2006, eq. 19):
M_N(r)=3βk_B T/μm_pG(r^3/r^2+r_c^2)
where T is the temperature, k_B is Boltzmann's constant, μ ≈ 0.609
is the mean atomic weight and m_p is the proton mass.
In eq.4 the quantities β, k_B, T, μ, m_p
and G are κ-invariant, while both r and r_c[The temperature T is measured in situ and is observer-independent; likewise for the dispersion velocities σ_r,σ_θ,σ_ϕ (for instance, in eq. 9 of Brownstein and Moffat, 2006, σ_r^2= (k_B T)/μ m_p is observer-independent, as is the gravitational potential Φ, which is also measured in situ).] are transformed as r⟶(κ/κ_E) r. Then the κ-mass
profile for a cluster
is given by the following relationship:
M_κ(r)=(κ/κ_E) M_N(r)
where M_N(r) is the Newtonian mass evaluated at the radius r. Then the mean density ρ(r) is inserted
in the relationship (κ, ρ) (see the appendix A eq. 18
placed at the end of the article) and this leads to the magnification ratio, as follows:
κ_E/κ = 1+Ln(ρ_E/ρ) = 1+Ln[(ρ_E/ρ_M)(ρ_M/ρ)] = 1+Ln(ρ_E/ρ_M) + (3β/2) Ln[1+(r/r_c)^2]
with the mean mass density ρ_E near the Sun estimated at 4 × 10^-24 g cm^-3. We used the sample of galaxy clusters from the paper of Brownstein and Moffat (2006). The relevant properties are listed in Table 1. By convention, column (6) is
the radius, r_out, at which the density drops to ρ_out≃10^-28 g cm^-3, or 250 times the mean cosmological density of the baryons.
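A companion sketch (again ours, with hypothetical names, cgs constants, and COMA's Table 1 parameters) of the isothermal Newtonian mass of eq. (4) and the magnification of eq. (6), combined per eq. (5); with these numbers it reproduces M_N ≈ 11.6 × 10^14 M_⊙ and M_κ ≈ 1.0 × 10^14 M_⊙ at r_out, in line with Table 1.

import numpy as np

G, K_B, M_P, MU = 6.674e-8, 1.381e-16, 1.673e-24, 0.609   # cgs constants
KPC_CM, KEV_K, MSUN_G = 3.0857e21, 1.1605e7, 1.989e33
T_keV, rho_M, beta, r_c = 8.38, 0.06e-25, 0.654, 242.3    # COMA (Table 1)
rho_E = 4e-24                  # solar-neighbourhood mean density (see text)

def M_N(r_kpc):
    """Isothermal Newtonian dynamical mass, eq. (4), in solar masses."""
    r, rc = r_kpc * KPC_CM, r_c * KPC_CM
    kT = T_keV * KEV_K * K_B
    return 3 * beta * kT / (MU * M_P * G) * r**3 / (r**2 + rc**2) / MSUN_G

def kappa_E_over_kappa(r_kpc):
    """Magnification ratio kappa_E/kappa of eq. (6)."""
    return 1 + np.log(rho_E / rho_M) + 1.5 * beta * np.log(1 + (r_kpc / r_c) ** 2)

r = 1954.0                     # COMA's r_out in kpc (Table 1)
print(f"M_N ~ {M_N(r):.2e} Msun, M_kappa ~ {M_N(r) / kappa_E_over_kappa(r):.2e} Msun")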
Figure 1 shows the results of our analysis applied to the COMA
cluster. The κ-curve (shown in amber) has approximately the same
profile as MOND (shown in dashed-dotted cyan) when r>100 kpc, but the κ-curve is much
closer to the observational curve and even merges with the latter
one in the outer regions of the cluster. The κ-curve is not fully
merged with the ICM gas curve in the inner regions; however, the apparent
gravitational mass is largely lowered (by a factor in the range 7-10 along the curve). Finally, a residual gap remains to be filled between the κ-curve and
the observational one in the inner regions of the COMA cluster. To do this, we
propose to decrease the temperature in the inner regions. This proposal can be
applied in the same way to the other galaxy clusters (Figure 3). Notably, in most of the cases presented in this array of figures, a comparison of the curves shows that MOG leads
to exactly the same prediction as the adjusted temperature
profiles (we can refer to A0085, A0133, NGC 507 and A0262 as examples). Assuming a non-isothermal temperature
profile, the Newtonian mass has to be re-calculated as follows (Brownstein and Moffat, 2006, eq. 18):
M_N(r)=(r k_B T(r)/μ m_p G)(3β r^2/(r^2+r_c^2) - dLn T(r)/dLn r)
Then we used the following easy-to-manipulate temperature profile:
T(r)=T_outexp [-α(r_out-r/r_out)]
where T_out designates the temperature in the outer regions of the cluster. This parameter is provided in column (2) of Table 1. The coefficient α=1 is used when M_N≲10 M_gas. This is the case for most
situations under study. For the few cases where this ratio is far beyond 10,
α=2 is used. The temperature profile (eq. 8) has been selected such
that the outer temperature, T_out (column (2) of Table 1), is that provided by Brownstein and Moffat (2006), as directly resulting from observations (Reiprich and Böhringer, 2002).
M_N(r)=(r k_B T(r)/μ m_p G)[3β r^2/(r^2+r_c^2) - α r/r_out]
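A short sketch (ours, self-contained, with the same hedges as the sketches above) of this non-isothermal variant: the exponential temperature profile of eq. (8) inserted into the mass formula of eq. (9), with α = 1 (the COMA case).

import numpy as np

G, K_B, M_P, MU = 6.674e-8, 1.381e-16, 1.673e-24, 0.609   # cgs constants
KPC_CM, KEV_K, MSUN_G = 3.0857e21, 1.1605e7, 1.989e33
beta, r_c, r_out, T_out_keV, alpha = 0.654, 242.3, 1954.0, 8.38, 1.0

def M_N_noniso(r_kpc):
    """Eq. (9): Newtonian mass with T(r) = T_out exp(-alpha (r_out-r)/r_out)."""
    r, rc = r_kpc * KPC_CM, r_c * KPC_CM
    T = T_out_keV * KEV_K * np.exp(-alpha * (r_out - r_kpc) / r_out)
    bracket = 3 * beta * r**2 / (r**2 + rc**2) - alpha * r_kpc / r_out
    return r * K_B * T / (MU * M_P * G) * bracket / MSUN_G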
The green curve in Figure 1 is the κ-profile with a non-isothermal temperature
profile given by eq. 8. The κ-curve associated with a non-isothermal temperature profile becomes virtually superimposed
on the ICM gas profile when r < 500 kpc (note that for r > 500 kpc, the κ-curves associated with the isothermal and non-isothermal temperature profiles flank the ICM gas profile). Thus, a prediction of the κ-model is that the
temperature is lowered by a factor of two (at r_out/2) in the inner regions compared to the outskirts of the COMA cluster. More precisely, the most appropriate description is that of a non-isothermal core for r < 1000 kpc, surrounded by a quasi-isothermal shell for 1000 kpc < r < 1954 kpc. A very similar situation is encountered in the other clusters collected in Table 1. Thus, since the κ-model has no outer parameters, an optimal fit can be achieved by finely adjusting the temperature profile and, in return, the model can eventually predict this profile (whereas this refinement and prediction are not possible within the ad hoc, and definitely not falsifiable, DM paradigm, which is easily adapted to fit the ICM gas curve with any temperature profile, indicating no predictive value).
Figure 1 COMA cluster profile. The horizontal axis is the radius in kpc and the
vertical axis is mass in units of the solar mass M_⊙. The red long
dashed curve is the ICM gas mass derived from X-ray observations
(compilation of Reiprich, 2001; Reiprich and Böhringer, 2002); the short
dashed blue curve is the Newtonian dynamic mass; the dashed-dotted cyan curve is
the MOND dynamic mass; the solid black curve is the MSTG dynamic mass (Brownstein and Moffat, 2006). Our contribution is displayed as the
amber curve, showing the κ-model dynamic mass with the temperature T=8.38 keV.
The solid green curve displays the κ-model dynamic mass,
assuming a non-isothermal temperature profile with α=1 in eq. 8.
A perfect superposition with the ICM gas mass curve of the COMA cluster can still be achieved by taking the temperature profile displayed in Figure 2. We note an increase of the mean temperature from the inner regions up to 1500 kpc and then a slow decrease toward the outskirts. However, a comparison of this profile with observational data is not very conclusive. The main reason is that the COMA cluster modeled here, as in other research (Brownstein and Moffat, 2006), takes the form of a spherical and well-relaxed distribution of gas. The reality is much more complex. The density in the inner regions is relatively high, and the cooling through thermal bremsstrahlung emission must be much more efficient than in the outskirts. However, heating by active galactic nuclei in the inner regions can compensate for the cooling processes. Thus, the energetics are very complex and the temperature profile is not easily predicted (Bykov et al., 2015). In a general manner for galaxy clusters, all temperature profiles found in the literature are model-dependent, with a very large amount of dark matter. Moreover, even when limited to the outskirts, which are directly accessible, the physics is poorly understood (Walker et al., 2019). Thus, the temperature map of the outskirts of the COMA cluster simultaneously exhibits cool and hot regions with various substructures (Watanabe et al., 1999).
Figure 2 Predicted mean temperature profile in the COMA cluster in the framework of the κ-model
We have suggested here solving the problem of the very large excess of (Newtonian) mass in the galaxy clusters by using a two-stage procedure. First, the apparent attractive mass is significantly reduced with the κ-model. With the help of this operation, the ratio M_DM/M_gas is reduced to rather more credible values, i.e. ∼ 0 in the outer regions and ∼ 2 in the inner regions. The second step then consists in adapting the temperature profile to cancel the residual excess of mass. It seems that MOG faces the same situation, even though Brownstein and Moffat (2006) did not try to perform this second step (for instance for A0085, ...). Another option for this second step would be to follow the MOND galaxy cluster analysis of Banik and Zhao (2022), that is to say, to add a dense core composed of sterile neutrinos (see Giunti and Lasserre (2019) for a review on these hypothetical particles). The observational data not being perfect, it is very likely that a two-stage solution is also needed in the framework of other theories (for instance MOND) that have been proposed to eliminate dark matter (taking into account the measurement uncertainties of inclination, thickness, and mass-to-light ratio for individual spiral galaxies; density/temperature profiles and clumpiness in the case of galaxy clusters; and so on). The only model which runs directly in one step is dark matter, given its undue flexibility (compared to the κ-model, which has no flexibility).
Figure 3 Plot of the radial mass profile
for clusters of the sample in Table 1. For the legend, see
Figure 1. The solid red curve displays the κ-model dynamic mass,
with a non-isothermal temperature profile of α=2.
Figure 3 Continued galaxy cluster mass profiles
Figure 3 Continued galaxy cluster mass profiles
Table 1 Galaxy cluster properties
Note - This compilation is issued from Brownstein and Moffat (2006).
We have added a column for the mass M_κ.
Column (1) Galaxy cluster name
Column (2) X-ray temperature
Column (3) ICM central mass density
Column (4) model parameter β
Column (5) model core radius parameter
Column (6) radius where ρ_gas ≃ 10^-28 g cm^-3
Column (7) ICM gas mass integrated to r_out
Column (8) Newtonian dynamic mass integrated to r_out
Column (9) MSTG dynamic mass integrated to r_out
Column (10) convergent MOND dynamic mass
Column (11) M_κ integrated to r_out
All masses in columns (7)-(11) are in units of 10^14 M_⊙.

Cluster      T      ρ_M                β      r_c     r_out   M_gas   M_N     M_MSTG  M_MOND  M_κ
             keV    10^-25 g cm^-3            kpc     kpc
(1)          (2)    (3)                (4)    (5)     (6)     (7)     (8)     (9)     (10)    (11)
A0085        6.90   0.34               0.532  58.5    2241    1.48    9.02    1.15    1.83    0.77
A0119        5.60   0.03               0.675  352.8   1728    0.73    6.88    0.73    1.76    0.60
A0133        3.80   0.42               0.530  31.7    1417    0.37    3.13    0.28    0.55    0.27
NGC507       1.26   0.23               0.444  13.4    783     0.05    0.48    0.02    0.04    0.04
A0262        2.15   0.16               0.443  29.6    1334    0.26    1.39    0.11    0.13    0.12
A0399        7.00   0.04               0.713  316.9   1791    0.90    9.51    1.07    3.07    0.82
FORNAX       1.20   0.02               0.804  122.5   387     0.009   0.373   0.011   0.102   0.026
NGC1550      1.43   0.15               0.554  31.7    632     0.034   0.548   0.024   0.086   0.047
A1060        3.24   0.09               0.607  66.2    790     0.07    1.69    0.10    0.50    0.21
A1367        3.55   0.03               0.695  269.7   1234    0.27    3.19    0.26    0.75    0.40
MKW4         1.71   0.57               0.440  7.7     948     0.09    0.78    0.05    0.08    0.10
ZwCl1215     5.68   0.05               0.819  303.5   1485    0.59    7.15    0.72    2.5     0.91
NGC4636      0.76   0.33               0.491  4.2     216     0.001   0.088   0.001   0.019   0.011
A3526        3.68   0.29               0.495  26.1    1175    0.20    2.35    0.17    0.45    0.24
A3266        8.00   0.05               0.796  397.2   1915    1.22    12.82   1.56    4.79    1.12
A3395s       5.00   0.03               0.964  425.4   1223    0.32    5.77    0.49    2.34    0.50
COMA         8.38   0.06               0.654  242.3   1954    1.13    11.57   1.38    3.81    0.99
A2065        5.50   0.04               1.16   485.9   1302    0.49    8.01    0.76    3.83    0.69
A2142        9.70   0.27               0.591  108.5   2537    2.39    15.93   2.32    4.36    1.37
A2244        7.10   0.23               0.607  88.7    1773    0.84    8.36    0.92    2.45    0.71
UGC03957     2.58   0.09               0.740  100.0   764     0.08    1.57    0.09    0.47    0.13
S636         1.18   0.01               0.752  242.3   742     0.06    0.65    0.03    0.09    0.05
M49          0.95   0.26               0.592  7.7     177     0.001   0.109   0.002   0.041   0.009
A1689        9.23   0.33               0.690  114.8   1898    1.23    13.21   1.61    5.14    1.13
A1800        4.02   0.04               0.766  276.1   1284    0.34    4.14    0.36    1.15    0.38
A1914        10.5   0.22               0.751  162.7   1768    1.08    15.21   1.79    7.44    1.31
NGC5813      0.52   0.18               0.766  17.6    166     0.001   0.072   0.001   0.021   0.006
NGC5846      0.82   0.47               0.599  4.9     152     0.001   0.082   0.001   0.031   0.007
A2151w       2.40   0.16               0.564  47.9    957     0.12    1.42    0.09    0.25    0.12
TRIANGULUM   9.60   0.1                0.61   196.5   2385    1.98    15.22   2.11    4.48    1.32
OPHIUCHUS    10.3   0.13               0.747  196.5   1701    0.91    14.11   1.59    6.91    1.21
ZwC174       5.23   0.1                0.717  163.4   1354    0.43    5.49    0.50    1.79    0.47
A3888        8.84   0.1                0.928  282.4   1455    0.71    12.61   1.33    7.14    1.08
HGC94        3.45   0.11               0.514  60.6    1237    0.24    2.40    0.19    0.43    0.21
RXJ2344      4.73   0.07               0.807  212.0   1222    0.34    4.97    0.43    1.78    0.43
By comparing columns 9 and 11 in Table 1, MOG
and the κ-model provide reasonably close values to each other for the masses
integrated to r_out, respectively M_MSTG and M_κ. By comparing columns 7 and 11 in Table 1, the agreement between M_κ
and M_gas is fairly good, and it is clear that, in most
cases, the κ-model does not necessitate dark matter in the outer
regions of the galaxy clusters. In Table 1, M_κ is smaller than M_MSTG when the temperatures are higher than 5-6 keV, whereas the reverse is true when the temperatures are lower than 5-6 keV; for T ∼ 5-6 keV, M_κ∼ M_MSTG. Eventually, when the temperatures are smaller than 1 keV, M_κ is larger than M_MSTG.
By contrast, the convergent MOND dynamic mass, M_MOND, is systematically too high. Even though MOND substantially decreases the dark matter content for the convergent
MOND dynamic mass, this theory is not able to completely remove dark
matter in the galaxy clusters. By comparing columns 10 (M_MOND) and 7 (M_gas) of Table
1, M_MOND exceeds M_gas by
a mean factor of the order of 2. Galaxy cluster
stability cannot be fully explained by the current MOND formulation alone. MOND, at least in
its initial form (Milgrom, 1983), still needs to invoke a residual content of invisible matter for galaxy clusters.
However, owing to the great success of MOND for the individual galaxies, a proper
direction is to build a multiscale MOND, which is able to simultaneously explain both individual
galaxies and galaxy clusters. Fortunately, this model has been
built. This is the main motivation of EMOND; EMOND assumes that there is an increase in the fundamental parameter a_0 of the MOND paradigm in galaxy clusters compared to the value selected for this coefficient in the case of individual galaxies (Zhao and Famaey, 2012; Hodson and Zhao, 2017 a, b). This rescaling adapts the parameter a_0 to the size of the object under consideration and appears to be quite natural. In this case, MOND can eventually and adequately fit the ICM gas mass integrated to r_out. Note that, in the context of the κ-model, a physical interpretation of this rescaling in MOND is supplied. The mean density in a galaxy cluster is much smaller than the corresponding one in an individual galaxy, by a factor ∼ 40 (center of the cluster) to ∼ 200-1000 (in the outskirts of the cluster). In our study, the rescaling is given by an estimate of κ, and κ_G/κ_cluster∼ 5-10.
Despite these promising considerations for the κ-model, MOG appears to be slightly better at fitting the observational curves. We might argue that MOG has two free outer parameters, whereas the κ-model has none; it is therefore easier to fit a curve with a sufficient number of outer free parameters than without any free parameter. However, on closer examination of MOG, we can see that a multifit procedure is used throughout the calculations. MOG relies on two parameters M_0 and r_0, but these parameters are not constant across the series of galaxy clusters. Moreover, M_0 depends on the ICM gas mass for each individual cluster (Brownstein and Moffat, 2006, paragraph 4). Thus, it seems that in MOG the results are already included at the start of the calculations, and the method appears somewhat post hoc. MOG is thus not a fully ab initio procedure. For these reasons, the κ-model and EMOND are more efficient than MOG in terms of their predictive power.
§ BULLET CLUSTER
The Bullet Cluster (1E 0657-56) is very often presented as a clear proof of the
existence of dark matter (Figure 4). The Bullet Cluster is composed of two
colliding clusters: a main cluster (Mc) and a small or sub-cluster (Sc) (the bullet per se)
(Bradač el al, 2005, 2006).
Figure 4
To determine how the κ-model reinterprets the observational data for the
Bullet Cluster, we initially start with the known (but apparent) surface densities for both the
hot gas and the visible galaxies (Figure 5). Figures 5a and 5b are reproduced from
Brownstein and Moffat (2006). In addition, Table 2 provides the masses for the different
components. The assumed DM content is adjusted such that the total mass ratio DM/B is ∼ 6.
Figure 5 Abscissa: distances, unit 500 kpc. Ordinate:
surface densities, unit 3.1 × 10^3 M_⊙ pc^-2.
The abscissa axis passes through the galaxy cluster centers. Figure 5a: scaled plot of hot gas
density, Figure 5b: scaled plot of apparent galaxy density observed from Earth (Main cluster + Bullet) and Figure 5c: scaled plot of real galaxy
density measured by a hypothetical observer located inside the Main or the Sub group of
visible galaxies (represented by blue disks in Figure 4). The mass is
invariant under the transform apparent ⟷ real surface densities, i.e. ∫ dxdy Σ_s(x,y)=∫ dxdy Σ_s'(x,y).
Component      Main cluster (Main)   Subcluster (Sub)   Diffuse component   Total
M_gas          4.38 10^13            1.93 10^13         8.01 10^13          1.43 10^14
M_galaxies     4.67 10^12            3.46 10^12         –                   8.32 10^12
M_DM           6.54 10^14            1.67 10^14         –                   8.21 10^14
Table 2 The masses are expressed in solar mass M_⊙
In the usual treatment of the galaxy clusters (paragraph 2), only the gaseous
component is considered (the stellar fraction is negligible in
galaxy clusters). In the specific case of the Bullet Cluster, the situation is very different. We have two
clearly identified parts: a very massive gaseous component and a galactic
component; however, these two components are separated. Based on the lensing diagram, only the gaseous component needs to be considered in the calculation of κ. Thus, if we assume that the Main and
Sub groups of visible galaxies (the disks displayed in blue in
Figure 6) have a low content of gas, the lensing due to the κ effect is much
higher than that in the hot gas (the disk displayed in red in
Figure 6). The usual relationship of the κ-model is as follows:
κ_M/κ=1+Ln(Σ_M/Σ_gas)
where Σ_M ∼ 0.075 × 3.1 × 10^3 M_⊙ pc^-2 = 232.5 M_⊙ pc^-2 is the maximal value taken by the surface density of hot
gas (Figure 5a). Figure 5a can be used to provide the following relationship:
(Σ_gas^Main)_outer , (Σ_gas^Sub)_outer∼ 0.7× 232.5 M_⊙ pc^-2=162.7 M_⊙ pc^-2
The latter quantity represents the surface density of the hot gas located in the immediate
environment of the Main and Sub groups of visible galaxies; each of them being displayed by a blue
disk in Figure 4.
However, following the current interpretation, both the Main and Sub galaxy clusters were stripped of their hot gas during the
collision. The amount of gas remaining inside
the groups of visible galaxies after this collision needs to be determined. Clearly, the hot gas initially located inside the Main and Sub groups of visible galaxies is removed by the strong
shock during the collision, and each of these groups is now located in a subdense bubble with a low content of hot gas. A
pre-analysis of the α-diagram in the κ-model context, compared to that
provided by observations (Bradač et al., 2005, 2006), makes it possible to predict that
the amount of hot gas inside the subdense bubbles is ∼ 15% of the amount of hot gas surrounding them, i.e. (Σ_gas^Main)_inner or
(Σ_gas^Sub)_inner=24.4 M_⊙ pc^-2. With
equal temperatures, the gas pressure in
the groups of visible galaxies (blue regions in the Figure 4) is thus predicted to be lower
than that in the immediate region surrounding them (red region). Evidently, the
system is not in hydrostatic equilibrium, and this situation
cannot indefinitely persist. The required time to re-equalize the densities
is approximately R_c/3c_s∼100Myears, where R_c∼0.5 Mpc is the mean characteristic size of the Main or Sub groups of visible galaxies and c_s is the speed of sound in the hot gas (T_e∼10^8 K)[After this short period of time, the κ-model predicts that the lensing diagram will no longer be centered on the groups of visible galaxies, but on the hot gas and will likely resemble Figure 7a. Thus, we see an instantaneous phase of a rapidly evolutionary process.]. With the relationship
(10) and the aforementioned values of (Σ_gas^Main)_inner or
(Σ_gas^Sub)_inner, we can determine the
amplification factor κ inside the two groups of visible galaxies as follows:
κ_M/κ_gas^Main=κ_M/κ_gas^Sub∼κ_M/κ_gas^galactic=3
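The numbers behind this factor of ~3 can be recomputed in a few lines (a sketch of ours, not the paper's code, using the surface densities quoted above):

import numpy as np

sigma_M = 0.075 * 3.1e3            # peak hot-gas surface density, ~232.5 Msun/pc^2
sigma_outer = 0.7 * sigma_M        # gas around the galaxy groups, eq. (11)
sigma_inner = 0.15 * sigma_outer   # ~15% filling of the subdense bubbles
print(1 + np.log(sigma_M / sigma_inner))   # ~3.25, consistent with eq. (12)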
Figure 6 Basic illustration of the combined action of both gravitational and kappa lensings. The
lensing by the hot gas is used as a reference for the two figures a and b.
In Figure 6a, the combined product of both the gravitational and
kappa lensing effects is clearly centered on the Main and Sub groups of visible galaxies. In Figure 6b, as expected, the lensing is clearly
centered on the hot gas (which contains 90% of the baryonic mass against 10%
for the visible galaxies in a galaxy cluster). However, the reality shown in
Figure 6b cannot be viewed by a unique observer (for instance, a terrestrial
observer surrounded by its own environment). Specifically, two distinct observers are needed to determine the effect.
The part located in the blue area (groups of visible galaxies) is perceived by a
hypothetical observer situated inside any region where the mean gas density is the same as that in the Main or Sub group of visible galaxies; the part
located in the red area (hot gas) is perceived by a hypothetical observer
situated inside any region where the mean density is the same as that in
the hot gas. If
the κ-model paradigm is on the right track, then the Bullet Cluster is an illustrative (but rare) example showing that the perception of objects in the Universe is observer-dependent.
Beyond these qualitative considerations, calculations are evidently needed to support our claim. The procedure is
well known. The field equations of general relativity can be linearized if the
gravitational field is weak. Then, the deflection angle of a set of masses is
simply the vectorial sum of the deflections due to individual lenses. The
plane of the sky is (x, y). The deviation angle α can be written
using the thin lens approximation as follows (Bartelmann and Schneider, 2001):
α(x,y)=4G/c^2∫_S dx'dy' Σ(x',y') (r-r')/|r-r'|^2
where G is the gravitational constant and c is the speed of light. The density distribution Σ is
integrated over all the surface S of the cluster system. The total
surface density Σ(x,y) is the sum of the gas and stellar
surface densities.
Σ(x,y)=Σ_g(x,y)+Σ_s'(x,y)
where Σ_g(x,y) and Σ_s'(x,y) are the fits of the
distributions represented in Figures 5a and 5c, respectively; these are assumed to be
approximately circular and Gaussian.
We know that the κ-model does not change the local physics, apart from a
magnification factor locally applied. Thus, in the κ-model, the
relationship (14) eventually becomes the following:[A multiplicative factor,
κ_E/κ_M, needs to be applied to
α(x,y); however, the factor is a global and constant coefficient
independent of x,y, which does not affect the relative position of the peaks on the
lensing diagram.]
α(x,y)=4G/c^2∫_S dx'dy' [Σ_g(x',y')+(κ_M/κ_gas^galactic) Σ_s'(x',y')] (r-r')/|r-r'|^2
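A hypothetical numerical sketch (ours, not the paper's) of this modified deflection integral for two circular Gaussian surface densities, with the galaxy term boosted by κ_M/κ_gas^galactic = 3; the amplitudes, widths and grid are illustrative only, and the overall constant 4G/c^2 is dropped.

import numpy as np

def gaussian(x, y, x0, y0, amp, width):
    return amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / width**2)

xs, ys = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
dA = (xs[0, 1] - xs[0, 0]) ** 2                  # surface element of the grid

sigma = gaussian(xs, ys, 0.0, 0.0, 1.0, 1.0)            # hot gas at the origin
sigma += 3.0 * gaussian(xs, ys, 1.5, 0.0, 0.5, 0.5)     # boosted galaxy group

def alpha(x, y, eps=1e-3):
    """Deflection vector at (x, y); eps softens the 1/r singularity."""
    dx, dy = x - xs, y - ys
    r2 = dx**2 + dy**2 + eps
    return np.sum(sigma * dx / r2) * dA, np.sum(sigma * dy / r2) * dA

for x in (0.0, 0.75, 1.5, 2.0):                  # sample |alpha| along the axis
    ax, ay = alpha(x, 0.0)
    print(f"x = {x:+.2f}: |alpha| = {np.hypot(ax, ay):.3f}")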
Figure 7 shows the lensing diagrams |α| of both dark matter
and κ-model and Figure 8
shows the components α_x and α_y. Even though
the physical interpretation is very different, the κ-model
and DM diagrams are very similar and both show that the lensing is centered
on the visible galaxies and not on the hot gas, which is centered at (0,0). The
essential difference between the two models is that the κ-model does not
need dark matter to explain the same facts.
Figure 7 Comparative plots of the lensing diagram. The crosses indicate the
positions of the centers of the different distributions (hot gas in red and groups of visible galaxies in blue)
Figure 8 The same as Figure 7 but for the components of α
The method applied to the Bullet Cluster can be extended to other similar cases, such as the Train Wreck Cluster (Abell 520). In the latter situation, the lensing is centered on the hot gas. Even though the morphology of the Train Wreck Cluster is much more complex than that of the Bullet Cluster (Jee et al., 2014), a natural explanation in the framework of the κ-model is that the intergalactic gas filling rate of the bubbles containing the visible galaxies is different. With a filling rate of 60% instead of 15% (Bullet Cluster), we have been able to ascertain that the lensing is no longer centered on the visible galaxies.
§ CONCLUSION
The extensively tested MOND and MOG models were designed to understand the
dynamics of the Universe without dark matter. By contrast, the dark
matter paradigm is an ad hoc concept, where the dark matter content has to be
adapted to each situation and, consequently, has no predictive value. Paradoxically enough, however, the DM paradigm enjoys near-unanimous recognition among astrophysicists. The main reason is that the DM paradigm is very easy to use and effectively works at all times due to its extreme flexibility.
On the other hand, in the framework of the κ-model, the baryonic mass alone
should be sufficient to understand the dynamics of galaxies and that of
the galaxy clusters, without dark matter or artificial ingredients, such as the
introduction of new parameters into the calculations. Specifically, the κ-model
aims to determine a one-to-one relationship directly
linking the sole observational data, i.e. the estimated mean density, to either
the spectroscopic velocities in the galaxies or the X-ray temperatures in galaxy
clusters. The κ-model is a MOND-type model, and a behavior very similar
to MOND is expected for the galaxy clusters. In reality, even though the shapes of the curves are effectively similar, the κ-model
curves are strongly parallel-displaced and systematically lie
closer to the observational curves than MOND. The agreement between the κ-model
prediction and the observational data for the total mass of the hot gas in a galaxy cluster is satisfactory when the mass
ratio DM/B is less than 10, but for values exceeding this
ratio the observational total mass of gas cannot be effectively predicted in the framework of an isothermal model.
In this case, a lower temperature for the hot gas is predicted in the inner regions of the
galaxy clusters.
Finally, the ordered series is as follows: the κ-model has no outer
parameter (except the internal parameters relative to the system type
(individual galaxies or galaxy clusters), i.e. mean density and temperature); MOND is similar, with just a single outer parameter; MOG has two outer parameters (unfortunately, these are in fact disguised gas-mass-dependent
multifit parameters for the galaxy clusters, i.e. the results are already included in the hypotheses), and DM is an artificial and ad hoc
procedure that works at all times. Our proposed logical program was developed to fit the observational curves with only the observational
parameters in a self-consistent manner (i.e. through the triplet mean
densities, spectroscopic velocities, and temperatures), and this ultimate goal is now possible in the
framework of the κ-model.
Data availability statement: The author confirms that the data supporting the findings of this study are available within the article and the reference list.
Conflicts of Interest: The author declares no conflict of interest.
§ REFERENCES
Banik, I., & Zhao, H., 2022, Symmetry, 14, 1331
Bartelmann, M., & Schneider, P., 2001, Physics Reports, 340, 291
Bradač, M., Schneider, P., Lombardi, M., & Erben, T., 2005, A & A,
436, 39
Bradač, M., Clowe, D., Gonzalez, A.H., Marshall, P., Forman, W., Jones, C., Markevitch, M., Randall, S., Schrabback, T., & Zaritsky, D., 2006, ApJ, 652, 937
Brownstein, J.R., & Moffat, J., 2006, MNRAS, 367, 527
Brownstein, J.R., & Moffat, J., 2007, MNRAS, 382, 29
Bykov, A.M., Churazov, E.M., Ferrari, C.,
Forman, W.R. Kaastra, J.S., Klein, U.,
Markevitch, M., & J. de Plaa, J., 2015, Space Sci Rev, 188:141
Capozziello, S., & De Laurentis, M., 2012, Ann. D. Physik, 524, 545
Cavaliere, A. L. & Fusco-Femiano, R. 1976, A&A, 49, 137
DAMA/LIBRA Collaboration, 2022, arXiv:2208.05158
Famaey, B, & McGaugh, S., 2012, Living Rev. Relativity 15, 10
Freundlich, J., Famaey, B., Oria, P.A., Bílek, M., Müller O, & Ibata, R., 2022, A&A, 658, A24
Giunti, C, & Lasserre, T., 2019, Annual Review of Nuclear and Particle Science, Volume 69, 163
Hodson, A.O., & Zhao, H., 2017a, A&A, 598, A127
Hodson, A.O., & Zhao, H., 2017b, A&A, 608, A109
Jee, M. J., Hoekstra, H., Mahdavi, A., & Babul, A., 2014, ApJ. 783 (2): 78
Mc Gaugh, S., 2015, Canadian Journal of Physics. 93 (2): 250
McGaugh, S.S., Lelli, F., & Schombert, J. M. 2016, Physical Review Letters,
117, 201101
Mannheim, P.D., & Kazanas, D., 1989, ApJ, 342, 635
Milgrom, M. 1983, Astrophys. J. 270, 365
Milgrom, M., Canadian Journal of Physics, 2015, 93(2): 107
Moffat, J. W.,2006, Journal of Cosmology and Astroparticle Physics, 3, 4
Moffat, J.W., 2020, Eur. Phys. J. C, 80(10), 906
Niikura, H., Takada, M., Yasuda, N., Lupton, R.H., Sumi, T., More, S.,
Kurita, T., Sugiyama, S., More, A., Oguri, M., & Chiba, M., 2019, Nature
Astronomy, 3, 524
O'Brien, J.G., & Moss, R.J., 2015, J. Phys.: Conf. Ser. 615, 012002
Pascoli, G. & Pernas, L., 2020, hal: 02530737
Pascoli, G, 2022a, Astrophys. and Space Sci., 367, 121
Pascoli, G., 2022b, arXiv: 2205.03062
Reiprich, T.H, 2001, Ph.D.Dissertation, Cosmological Implications and Physical Properties of an X-Ray Flux-Limited Sample of Galaxy
Clusters, Ludwig-Maximilians-Universität München
Reiprich, T.H. & Böhringer, H. 2002, ApJ, 567, 716
Schödel,R. Gallego-Cano, E., Dong, Nogueras-Lara, F., Gallego-Calvente, A.T., Amaro-Seoane, P. & Baumgardt, H., 2018, A&A, 609, A27
Skordis, C, & Złośnik, T., 2021, Phys. Rev. Lett., 127, 161302
Verlinde, E.P., 2017, SciPost Phys. 2, 016
Walker, S., Simionescu, A., Nagai D., Okabe, N., Eckert, D., Mroczkowski, T.,
Akamatsu, H., Ettori, S., & Ghirardini, V., 2019, Space Science Reviews, 215: 7
Watanabe, M., Yamashita, K., Furuzawa, A., Kunieda, H., & Tawara, Y., 1999, ApJ, 527: 80
Xenon Collaboration, 2023, arXiv:2303.14729
Zhao, H., & Famaey, B., 2012, Phys. Rev. D, 86, 067301
§ MOTION OF AN INDIVIDUAL PARTICLE
This appendix is a summary of the preceding papers (Pascoli and Pernas, 2020;
Pascoli, 2022 a, b). In a current manner for a terrestrial observer E the motion of any
particle of mass m taken in an extended system of particles is simply
determined by the following dynamic equation:
d(m κ_Eσ̇)/dt=f_E
where σ̇ is the bare velocity of the particle (here, the derivative of the bare position
given by the vector σ) and f_E is the sum of forces applied to it[The force f is the resultant force of the gravitational, electrostatic and magnetic forces, and also of the centrifugal and Coriolis forces if the frame of reference is not
inertial.], as
evaluated by the considered observer (the coefficient κ_E is defined below). Additionally, the κ-model
assumes that at a very large (galactic) scale, the immediate environment of a particle plays a
role in the determination of its motion. For instance, the particle "feels" the
presence of the other particles in such a way that its velocity is modified
by a scale factor κ that is proportional to the mean density ρ̅; more exactly, this factor is
proportional to the logarithm of the mean density. In a more
concrete manner, all structures of the Universe are usually
inserted in a homogeneous and isotropic base space Σ with coordinates σ. A
phase space Π(σ, σ̇) is attached to Σ (the
dot over the letter designates the time
derivative). However, every observer O has his own measuring
gauge, which is dependent on his environment. Thus, this observer applies an apparent
isotropic and homogeneous dilation to the phase space Π(σ, σ̇)[Once this operation is achieved, the quantity
κσ forms an inseparable unit R, where κ and
σ are no longer separately measurable for any observer (similarly for
κσ̇, which forms an inseparable unit Ṙ). Within the κ-model framework the Newtonian law is formally maintained, as follows: m dP/dt=-∇_R[(Φ(R)] where P=mṘ is the impulsion of the particle and Φ is the gravitational potential measured in situ.
For all these reasons the variation of κ is hidden to any observer, and
the space Σ of vectors σ is not reachable. However, the ratios of the type
κ'/κ are always measurable for any observer, an
operation which helps to exchange the information with other observers (Pascoli, 2022).]
(σ, σ̇) ⟶(κσ,
κσ̇)
where κ is a renormalization coefficient (for the terrestrial observer κ is denoted κ_E). For any observer O, the coefficient κ is considered to be independent of the point, and any other observer O’, placed in a distinct environment, follows the same reasoning, but with the essential difference that this one applies a coefficient κ'≠κ. The basic idea is that any observer does not directly
access to the phase space Π but rather visualizes an apparent homothetic replica κΠ. The
κ-model is reduced to this sole operation. Consequently,
this model is not a "theory" by itself. An underlying theory is needed, such as the
Newtonian mechanics or the general relativity.
For any extended objects (a galaxy or a galaxy cluster) with a mean density profile with both definite upper ρ_M
and lower ρ_m bounds, such as ρ_M > ρ_m, there exists a
universal relationship for κ; this
factor is linked to the local mean density ρ̅, as follows:
κ_M/κ=1+Ln (ρ_M-ρ_m/ρ̅-ρ_m)
For a galaxy or a cluster of galaxies, ρ_M≫ρ_m≃0, and this relationship
simplifies to the form κ_M/κ=1+Ln (ρ_M/ρ̅) (the function ρ̅ has a
definite
upper bound ρ_M and ρ_m≃0)[However, in order to
perform the analysis of the CMB in the framework of the κ-model, the
complete form has to be used because ρ_M≃ρ_m
((ρ_M-ρ_m)/ρ_m∼10^-5).]. The quantity ρ_M is a reference
value for the mean density. For any local observer the density
measured, ρ̅_l, is equal to the following:
ρ̅/[1+Ln (ρ_M/ρ̅)]^3
The density ρ̅_l, measured in situ, can be considered as real, whereas the density ρ̅ measured by a terrestrial observer is only an "apparent" quantity.
Eventually, the mean density around a point, labelled by σ, can be obtained
by integrating over a suitably sized volume ω surrounding this point as follows:
ρ̅(σ)=∫_ωd^3σ'w(σ-σ')ρ(σ')
For simplicity, we can assume that the spread function w(σ) is isotropic and Gaussian, i.e. w(σ)=(πδ^2)^(-3/2)e^-(σ^2/δ^2). This spatial averaging is used to smooth the strongly varying density distribution of matter in a galaxy over distances of a few parsecs around each point.
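A one-dimensional, purely illustrative sketch (ours, not the paper's) of this Gaussian spatial averaging; the positions, density profile, and smoothing length delta are hypothetical.

import numpy as np

def smooth_density(x, rho, delta):
    """Gaussian-weighted mean density along a 1-D cut."""
    w = np.exp(-((x[None, :] - x[:, None]) / delta) ** 2)
    w /= w.sum(axis=1, keepdims=True)      # row-wise kernel normalisation
    return w @ rho

x = np.linspace(0.0, 50.0, 501)                   # positions, e.g. in parsecs
rho = 1.0 + 0.5 * np.sin(2 * np.pi * x / 5.0)     # strongly varying density
rho_bar = smooth_density(x, rho, delta=10.0)      # smoothed over ~10 pc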
With these conditions in view, the dynamic equation (16) becomes[Equation (16) is evidently not correct, and the
well known consequence of its misuse constitutes the missing mass problem.
The solution acknowledged today by a common consensus is to add dark matter. Thus, we have the following relationship:
md/dt(κ_Eσ̇)=f_E+f_DM=(1+α)f_E
The observations provide α∼1-10, depending on the type of extended
object under consideration (the coefficient α, or more generally
the profile of the function α when α varies in the
objects, is predominantly adapted in an ad hoc manner). For instance, α is small in dense systems, such as globular clusters, and, conversely, high
in diffuse low-density galaxies.
In the framework of the κ-model formalism, equation (16) can be rewritten as follows:
md/dt(κσ̇)=(κ_E/κ)^2f_E=f
where the second equality is consistent with the following relationship:
f_C⟶P=-G M m (κ CP)/(κ^3 CP^3)=(κ_E/κ)^2 f_E,C⟶P
This relationship applies for a test particle P of mass m under the action of a central attractor C of mass M (σ=CP). The force is evaluated where the particle P resides (the real force is measured in situ).
For a particle placed on a circular orbit and subjected to a central attractor of mass M, the coefficient κ is quasi-constant, and the following useful formula can be used:
v_spectro^2=(κσ̇)^2=κ_E/κ(GM/κ_E σ)
Two very distinct interpretations of the magnification factor κ_E/κ are then possible: an apparent magnification of the gravitational constant G (as in MOG) or an apparent magnification of the attractive mass M (as in DM with the identification 1+α=κ_E/κ).
Note that v_spectro^2 is observer-independent (we can replace κ_E with any value).
]
d(mκσ̇)/dt=f
or, since the mass m is invariable:
md(κσ̇)/dt=f
As for the second member of the dynamic equation, the real force f, i.e.
that measured by a local observer where the force is really
acting, is obtained from the apparent force f_E (evaluated by a terrestrial observer) by applying
the same transform (17). In the following, in order to simplify
the writing, ρ̅ will be replaced by the symbol ρ.
§ CASE OF AN EXTENDED SET OF POINTS
As discussed in the preceding paragraph, in the κ-model, the dynamic
equation, formally written for an infinitesimal element of matter
of mass dm and subjected to a force df, is as follows:
dmd/dt(κσ̇)=df
where κ is a local normalization coefficient applied to the spatial lengths and df is measured in situ, i.e. where the element of matter resides.
However, this equation has no meaning without a reference frame
predefined in advance. First, the internal motions of any particle (a star, a
nebula) in a galaxy need to be studied in the reference frame R_A, in which the
barycenter of the galaxy is at rest. Such a reference frame can be
built by considering a collection S_{A_i} of open sets U_A_i which entirely
covers the galaxy. A couple (A_i, κ_A_i),
composed of an observer A_i and a normalization coefficient applied to
the spatial lengths, κ_A_i, is associated with each open set U_A_i.
The observers A_i are assumed to be at rest relatively to each
other, as follows:
v(A_i)_A_j=0
Then, the collection S_{A_i} composes a reference frame R_A. The observers A_i do not
use the same unit of length along R_A, but they are at rest
relatively to each other as imposed by eq. (28). Next, for any point P in the
set U_A_i, the following relationship is used:
v(P)_A_i=κ_A_iσ̇(P)_A_i
Second, the motion of any galaxy as a whole has to be treated as if the galaxy were a point coinciding
with its barycenter. Any galaxy evolves in a cluster of galaxies. A second collection S_{B_l} of open sets U_B_l is considered, which once again covers the entire galaxy cluster. The observers B_l are assumed to be
at rest relatively to each other, as follows:
v(B_l)_B_m=0
Then, the collection S_B_l constitutes a second reference of frame R_B. A couple (B_l, κ_B_l),
composed of an observer B_l and a normalization coefficient applied to the
spatial lengths, κ_B_l, is associated to each open set U_B_l.
For any couple of sets U_A_i and U_A_j located in a set U_B_l, we have the following relationships:
σ̇(A_i)_B_l=σ̇(A_j)_B_l
and
v(A_i)_B_l=κ_B_lσ̇(A_i)_B_l
Figure 9
The law of addition of velocities is applied, as follows:
v(P)_B_l=v(P)_A_i
+v(A_i)_B_l
with the following: v(P)_A_i=κ_A_iσ̇(P)_A_i, v(A_i)_B_l=κ_B_lσ̇(A_i)_B_l.
Based on these relationships, the expression (25)
can now be properly rewritten (df_i represents the internal forces, df_inertial, df_e are the
inertial forces and the external forces, respectively, applied
on the mass dm), as follows:
dmd/dt(κ_A_iσ̇(P)_A_i)=df_i+df_inertial+df_e
This equation treats the case of the internal motions within the galaxy (for instance, the
rotation). We can integrate eq. 34 over the volume Ω of the galaxy, as follows:
∫_Ωdmd/dt(κ_A_iσ̇(P)_A_i)=∫_Ωdf_i+ ∫_Ωdf_inertial+∫_Ωdf_e=F_i+F_inertial+F_e
For an isolated (F_e=0, F_inertial=0) and symmetric (F_i a sym=0) galaxy, the following relationship is obtained:
∫_Ωdmκ_A_iσ̇(P)_A_i=K
where the constant K can eventually be considered null.
The dynamic equation for a symmetric galaxy studied as a whole and covered by a set B_l is as follows: [For
an isolated and asymmetric galaxy for which F_i asym≠0, this equation becomes the following:
Md/dt(κ_B_lσ̇(A_i)_B_l)
=F_i asym
In this case, the galaxy can be auto-accelerated.]
Md/dt(κ_B_lσ̇(A_i)_B_l)
=F_e
where M is the total mass of the galaxy (M=∫_Ωdm). Equation (38) can be used to treat
the motion of a galaxy (viewed as a point here) in a galaxy cluster. The dynamic equation (34) relative to any point P of mass m in the galaxy becomes:[Let P be contained in the
set A_i and P' in the
set A_i'. The internal forces are calculated as follows:
f_P'⟶P=-G (κ_A_i P'P)/(κ_A_i^3 P'P^3) and f_P⟶P'=-G (κ_A_i'
PP')/(κ_A_i'^3 PP'^3)
m(d/dtκ_A_iσ̇(P)_A_i)=f_i+(f_e-m/MF_e)
The force f_e-m/MF_e is the tidal force produced by an outer mass. Equation (40) can be
used to treat the internal motions in a galaxy.
|
http://arxiv.org/abs/2307.03357v1
|
20230707024009
|
Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms
|
[
"Ming Yang",
"Xiyuan Wei",
"Tianbao Yang",
"Yiming Ying"
] |
cs.LG
|
[
"cs.LG",
"stat.ML"
] |
Stability and Generalization of Stochastic Compositional Gradient Descent Algorithms
Ming Yang^† (^†Equal contribution. First version: May 15, 2023)
University at Albany, SUNY
Albany, NY, United States
Xiyuan Wei^†
Texas A&M University
College Station, TX, USA
Tianbao Yang
Texas A&M University
College Station, TX, USA
Yiming Ying
University at Albany, SUNY
Albany, NY, United States
Abstract
Many machine learning tasks can be formulated as a stochastic compositional optimization (SCO) problem such as reinforcement learning, AUC maximization, and meta-learning, where the objective function involves a nested composition associated with an expectation. While a significant amount of studies has been devoted to studying the convergence behavior of SCO algorithms, there is little work on understanding their generalization, i.e., how these learning algorithms built from training examples would behave on future test examples. In this paper, we provide the stability and generalization analysis of stochastic compositional gradient descent algorithms through the lens of algorithmic stability in the framework of statistical learning theory. Firstly, we introduce a stability concept called compositional uniform stability and establish its quantitative relation with generalization for SCO problems. Then, we establish the compositional uniform stability results for two popular stochastic compositional gradient descent algorithms, namely SCGD and SCSC. Finally, we derive dimension-independent excess risk bounds for SCGD and SCSC by trading off their stability results and optimization errors. To the best of our knowledge, these are the first-ever-known results on stability and generalization analysis of stochastic compositional gradient descent algorithms.
§ INTRODUCTION
Recently, stochastic compositional optimization (SCO) has gained considerable interest <cit.> in machine learning. It has the following form:
min_x∈𝒲{F(x) = f∘ g(x) = 𝔼_ν[ f_ν ( 𝔼_ω[ g_ω(x) ] ) ]},
where f∘ g(x) = f(g(x)) denotes the function composition, f: ℝ^d →ℝ and g: ℝ^p →ℝ^d are differentiable functions, ν,ω are random variables, and 𝒲 is a convex domain in ℝ^p.
SCO generalizes the classic (non-compositional) stochastic optimization where its objective function F(·) involves nested compositions of functions and each composition is associated with an
expectation.
SCO problem (<ref>) instantiates a number of application domains. For instance, reinforcement learning <cit.> aims to get
the value function of a given policy, which can be regarded as an SCO problem <cit.>. The risk-averse
portfolio optimization <cit.>, bias-variance issues in supervised learning <cit.>, and group distributionally robust optimization <cit.> can also be formulated in similar SCO forms. Model-agnostic meta learning (MAML) <cit.> finds a common initialization for quick adaptation to new tasks, which is essentially an SCO problem, as pointed out in <cit.>. The recent task of AUC maximization <cit.> for imbalanced classification aims to rank positive examples above negative ones. In <cit.>, it can be regarded as an SCO problem: min_𝐰∈ℝ^d𝔼[(h_𝐰(𝐱) - a(𝐰))^2|y=1]+𝔼[(h_𝐰(𝐱')-b(𝐰))^2|y'=-1]+(1-a(𝐰)+b(𝐰))^2, where h_𝐰(·) is the decision function, a(𝐰) = 𝔼[h_𝐰(𝐱)|y=1], and b(𝐰) = 𝔼[h_𝐰(𝐱')|y'=-1]. Likewise, other important learning tasks such as the maximization of the area under precision-recall curves (AUCPRC) and other compositional performance measures can be cast in a similar fashion <cit.>.
There is a substantial amount of work devoted to studying the convergence behavior of stochastic compositional optimization algorithms for solving (<ref>). <cit.> pioneered the non-asymptotic analysis of the so-called stochastic compositional gradient descent algorithm (SCGD), which employs two time scales: a slower stepsize for updating the variable x_t and a faster one in the moving-average sequence y_t+1 used to track the inner function value g(x_t). An accelerated version of SCGD was
analyzed in <cit.> and its adaptive variant was studied in <cit.>. In particular, <cit.> proposed the stochastically corrected SCGD, called SCSC, which was shown to enjoy the same convergence rate as the standard SGD in the non-compositional setting. Further extensions and their convergence analysis were investigated in different settings such as the single timescale <cit.>, variance reduction techniques <cit.>, and applications to non-standard learning tasks <cit.>.
On another important front, a crucial aspect of machine learning is the development of learning algorithms that achieve strong generalization performance. Generalization refers to the ability of a learning algorithm to perform well on unseen or future test data, despite being trained on a limited set of historical training data. In the last couple of years, we have witnessed a large amount of work addressing the generalization analysis of the vanilla stochastic gradient descent (SGD), with a focus on the classical ERM formulation in the non-compositional setting. In particular, stability and generalization of SGD have been studied using uniform argument stability <cit.> and on-average model stability <cit.>. In <cit.>, different stability and generalization measures are investigated for minimax optimization algorithms. However, to the best of our knowledge, there is no work on understanding the important stability and generalization properties of stochastic compositional optimization algorithms, despite their surging popularity in solving many machine learning tasks <cit.>.
Our Contributions. In this paper, we are mainly interested in the stability and generalization of stochastic compositional optimization algorithms in the framework of Statistical Learning Theory <cit.>.
Our main contributions are summarized as follows.
* We introduce a stability concept called compositional uniform stability, which is tailored to handle the function composition structure in SCO problems. Furthermore, we show the quantitative connection between this stability concept and the generalization error for randomized SCO algorithms. Regarding the technical contributions, we show that this connection can mainly be derived by estimating the stability terms involving the outer function f_ν and the vector-valued generalization term of the inner function g_ω, which is further estimated using the sample-splitting argument <cit.>.
* More specifically, we establish the compositional uniform stability of SCGD and SCSC in the convex and smooth case. Our stability bound mainly involves two terms, i.e., the empirical variance associated with the inner function g_ω and the convergence of the moving-average sequence used to track g_S(x_t). Then we establish the excess risk bounds 𝒪(1/√(n)+1/√(m)) for both SCGD and SCSC by balancing the stability results and optimization errors, where n and m denote the numbers of training data associated with ν and ω, respectively. Our results demonstrate that, to achieve the excess risk rate of 𝒪(1/√(n)+1/√(m)), SCGD requires a larger number of iterations, approximately T≍max(n^3.5,m^3.5), while SCSC only needs T≍max(n^2.5,m^2.5).
* We further extend the analysis of stability and generalization for SCGD and SCSC to the strongly convex and smooth case. Specifically, we demonstrate that SCGD requires approximately T≍max(n^5/3,m^5/3) iterations, while SCSC only needs T≍max(n^7/6,m^7/6) iterations, to achieve the excess risk rate of 𝒪(1/√(n)+1/√(m)).
§.§ Related Work
In this section, we review related works on algorithmic stability and generalization analysis of stochastic optimization algorithms, and algorithms for compositional problems.
Stochastic Compositional Optimization.
The seminal work by <cit.> introduced SCGD with two time scales, and <cit.> presented an accelerated version. <cit.> incorporated variance reduction, while <cit.> proposed a modified SCGD with a single timescale. <cit.> introduced SCSC, a stochastically corrected version with the same convergence rate as vanilla SGD. <cit.> explored problems with multiple levels of compositions, and <cit.> proposed SOX for compositional problems. Recently, there has been growing interest in applying stochastic compositional optimization algorithms to optimize performance measures in machine learning, such as AUC scores <cit.>. Most of these studies have primarily focused on convergence analysis.
Algorithmic Stability and Generalization for the Non-Compositional Setting. Uniform stability and generalization of ERM were established by <cit.> in the strongly convex setting. <cit.> studied stability of randomized algorithms, and <cit.> derived high-probability generalization bounds for uniformly stable algorithms. <cit.> established uniform argument stability and generalization of SGD in expectation for smooth convex functions. <cit.> established data-dependent stability results for SGD. On-average model stability and generalization of SGD were derived in <cit.> for convex objectives in both smooth and non-smooth settings. Stability and generalization of SGD with convex and Lipschitz continuous objectives were studied in <cit.>. For non-convex and smooth cases, stability of SGD was investigated in <cit.>. Further extensions were conducted for SGD in pairwise learning <cit.>, Markov Chain SGD <cit.>, and minimax optimization algorithms <cit.>. However, existing studies have primarily focused on SGD algorithms and their variants for the standard ERM problem in the non-compositional setting.
Recently, <cit.> studied the generalization and uniform stability of the exact minimizer of the ERM counterpart for the SCO problem using the uniform convergence approach <cit.>. They also showed uniform stability of its ERM minimizer under the assumption of a Hölderian error bound condition that instantiates strong convexity. Their bounds are algorithm-independent. To the best of our knowledge, there is no existing work on stability and generalization for stochastic compositional optimization algorithms, despite their popularity in solving machine learning tasks.
Organization of the Paper. The paper is organized as follows. Section 2 formulates the learning problem and introduces necessary stability concepts. Two popular stochastic compositional optimization algorithms, SCGD <cit.> and SCSC <cit.>, for solving (<ref>) are presented. The main results on stability and generalization for SCGD and SCSC algorithms are illustrated in Section 3. Finally, Section 4 concludes the paper.
§ PROBLEM SETTING AND TARGET OF ANALYSIS
In this section, we illustrate the target of generalization analysis and the stability concept used in the framework of Statistical Learning Theory <cit.>. Then, we describe two popular optimization schemes, i.e., SCGD and SCSC, for solving the SCO problems as well as other necessary notations.
Target of Generalization Analysis. For simplicity, we are mainly concerned with the case where the random variables ν and ω are independent, which means that g(x) = 𝔼_ω[g_ω(x)] = 𝔼_ω[g_ω(x) | ν] for any ν. This is the case considered in <cit.>. In practice, we do not know the population distributions of ν and ω for SCO problem (<ref>) but only have access to a set of training data S = S_ν ∪ S_ω, where both
S_ν = {ν_i: i = 1, …, n} and S_ω = {ω_j: j = 1, …, m} are distributed independently and identically (i.i.d.).
As such, SCO problem (<ref>) is reduced to the following nested empirical risk for SCO:
min_{x∈𝒳}{F_S(x) := f_S(g_S(x)) = 1/n∑_i=1^n f_ν_i( 1/m∑_j=1^m g_ω_j(x) )},
where g_S: ℝ^p → ℝ^d and f_S: ℝ^d → ℝ are the empirical versions of g and f in (<ref>), defined, respectively, by g_S(x) = 1/m∑_j=1^m g_ω_j(x) and f_S(y) = 1/n∑_i=1^n f_ν_i(y).
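As a quick illustration, the nested empirical risk F_S can be evaluated by a direct transcription of the display above; the helper below is a sketch with hypothetical callables rather than code from any referenced implementation.

```python
import numpy as np

def nested_empirical_risk(x, f_list, g_list):
    """Evaluate F_S(x) = (1/n) sum_i f_{nu_i}( (1/m) sum_j g_{omega_j}(x) ).

    f_list: callables f_{nu_i}: R^d -> R, one per outer sample nu_i
    g_list: callables g_{omega_j}: R^p -> R^d, one per inner sample omega_j
    """
    g_S = np.mean([g(x) for g in g_list], axis=0)  # inner empirical mean g_S(x)
    return np.mean([f(g_S) for f in f_list])       # outer empirical mean f_S(g_S(x))
```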
We refer to F(x) and F_S(x) as the (nested) true risk and empirical risk, respectively, in this stochastic compositional setting.
Denote the least (nested) true and empirical risks, respectively, by F(x_*) = inf_{x∈𝒳} F(x) and F_S(x_*^S) = inf_{x∈𝒳} F_S(x). For a randomized algorithm A, denote by A(S) its output model based on the training data S. Then, our ultimate goal is to analyze the excess generalization error (i.e., excess risk) of A(S), which is given by
F(A(S)) - F(x_∗).
It can be decomposed as follows:
𝔼_S,A[F(A(S)) - F(x_*)] = 𝔼_S,A[F(A(S)) - F_S(A(S))] + 𝔼_S,A[F_S(A(S)) - F_S(x_*)]
≤ 𝔼_S,A[F(A(S)) - F_S(A(S))] + 𝔼_S,A[F_S(A(S)) - F_S(x_*^S)],
where we have used the fact that F_S(x_*^S)≤ F_S(x_*) by the definition of x_*^S. The first term on the right hand side of (<ref>) is called the generalization (error) gap (i.e., estimation error) and the second term is the optimization error. The optimization error (convergence analysis) in our study builds upon the analysis conducted in previous works such as <cit.>. However, our main focus is on estimating the generalization gap using the algorithmic stability approach <cit.>. In order to achieve this, we introduce a proper definition of stability in the compositional setting, which will be outlined below.
Uniform Stability for SCO. Existing work of stability analysis <cit.> focused on SGD algorithms in the non-compositional ERM setting. We will extend the algorithmic stability analysis to estimate the estimation error (i.e., generalization gap) for SCO problems.
In our new setting, when we consider neighboring training datasets differing in a single data point, the change can happen in either S_ν or S_ω. In particular, for any i∈[1,n], let S^i,ν be the i.i.d. copy of S where only the i-th data point ν_i in S_ν is changed to ν'_i while S_ω remains the same. Likewise, for any j∈[1,m], denote by S^j,ω the i.i.d. copy of S where only the j-th data point ω_j in S_ω is changed to ω'_j while S_ν remains unchanged. Throughout the paper, we also denote by S' = S'_ν ∪ S'_ω the i.i.d. copy of S where S'_ν = {ν'_1,…,ν'_n} and S'_ω = {ω'_1,…,ω'_m}. We say that a randomized algorithm A is (ϵ_ν, ϵ_ω)-uniformly stable for SCO problem (<ref>) if, for any i∈[1,n] and j∈[1,m], there holds
𝔼_A[‖A(S) - A(S^i,ν)‖] ≤ ϵ_ν, and 𝔼_A[‖A(S) - A(S^j,ω)‖] ≤ ϵ_ω,
where the expectation 𝔼_A[·] is taken w.r.t. the internal randomness of A, not the data points, and the uniform bound holds true for any such pair of neighboring datasets.
We will show the relationship between the compositional uniform stability (i.e., Definition <ref>) and the generalization error (gap), which holds true for any randomized algorithm. To this end, we need the following assumption.
We assume that f_ν and g_ω are Lipschitz continuous with parameters L_f and L_g, respectively, i.e.,
* sup_ν |f_ν(y) - f_ν(ŷ)| ≤ L_f‖y - ŷ‖ for all y, ŷ ∈ ℝ^d.
* sup_ω ‖g_ω(x) - g_ω(x̂)‖ ≤ L_g‖x - x̂‖ for all x, x̂ ∈ ℝ^p.
The following theorem establishes the relationship between the stability of SCGD and its generalization.
If Assumption <ref> holds true and the randomized algorithm A is (ϵ_ν, ϵ_ω)-uniformly stable, then
𝔼_S,A[F(A(S)) - F_S(A(S))] ≤ L_fL_gϵ_ν + 4L_fL_gϵ_ω + L_f√(m^-1𝔼_S,A[Var_ω(g_ω(A(S)))]),
where the variance term Var_ω(g_ω(A(S))) = 𝔼_ω[‖g_ω(A(S)) - g(A(S))‖^2].

Theorem 1 describes the relationship between the compositional uniform stability and the generalization (gap) for any randomized algorithm for SCO problems. It can be regarded as an extension of the counterpart for the non-compositional setting <cit.>. Indeed, if we let g_ω(x) = x, then g_S(x) = g_ω(x) = x for any ω and S, and the SCO problem is reduced to the standard non-compositional setting, i.e., F(x) = 𝔼_ν[f_ν(x)] and F_S(x) = 1/n∑_i=1^n f_ν_i(x). In this case, since there is no randomness w.r.t. ω, our result in Theorem 1 indicates that 𝔼_S,A[F(A(S)) - F_S(A(S))] ≤ L_fϵ_ν, which is exactly the bound in the non-compositional setting <cit.>.
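In practice, the variance term in the bound can be approximated by replacing the population mean g(A(S)) with its empirical counterpart over S_ω; the following sketch (illustrative names, plug-in approximation) shows one way to compute it.

```python
import numpy as np

def empirical_variance_term(x, g_list):
    """Plug-in estimate of Var_omega(g_omega(x)) = E_omega ||g_omega(x) - g(x)||^2,
    the quantity entering the m^{-1/2} slack in the generalization bound."""
    G = np.stack([g(x) for g in g_list])  # row j: g_{omega_j}(x)
    g_bar = G.mean(axis=0)                # empirical surrogate for g(x)
    return np.mean(np.sum((G - g_bar) ** 2, axis=1))
```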
There are major technical challenges in deriving the relation between stability and generalization for SCO algorithms. To see this clearly, recall that, in the classical (non-compositional) setting, given i.i.d. data S = {z_1, …, z_n}, the empirical and population risks are given by F_S(A(S)) = 1/n∑_i=1^n f(A(S); z_i) and F(A(S)) = 𝔼_z[f(A(S); z)], respectively. Let S^i = {z_1, …, z_i-1, z'_i, z_i+1, …, z_n} be the i.i.d. copy of S that differs in the i-th data point. Using the symmetry between the i.i.d. datasets S = {z_1,…,z_n} and S' = {z'_1, z'_2, …, z'_n}, one can immediately relate 𝔼_S,A[F(A(S)) - F_S(A(S))] = 𝔼_S,A,S'[1/n∑_i=1^n f(A(S^i); z_i) - 1/n∑_i=1^n f(A(S); z_i)] ≤ L_f‖A(S^i) - A(S)‖. However, in our compositional setting, 𝔼_S,A[F(A(S)) - F_S(A(S))] = 𝔼_S,A[𝔼_ν[f_ν(g(A(S)))] - 1/n∑_i=1^n f_ν_i(g(A(S)))] + 𝔼_S,A[1/n∑_i=1^n (f_ν_i(g(A(S))) - f_ν_i(1/m∑_j=1^m g_ω_j(A(S))))]. The first term on the right-hand side of the above equality can be handled similarly to the non-compositional setting. The main challenge comes from the second term which, by the Lipschitz property of f_ν, involves a vector-valued generalization term 𝔼_S,A[‖g(A(S)) - 1/m∑_j=1^m g_ω_j(A(S))‖], because one cannot interchange the expectation and the norm. We will overcome this obstacle using the sample-splitting argument in <cit.>.

Optimization Algorithms. We will study two popular optimization algorithms for solving (<ref>), i.e., SCGD <cit.> and SCSC <cit.>. Their pseudo-code is given in Algorithm <ref>, where a sequence of
y_t+1=(1-β_t)y_t+β_t g_ω_j_t(x_t)
is used to track the empirical mean g_S(x_t) = 𝔼_j_t[g_ω_j_t(x_t)] = 1/m∑_j=1^m g_ω_j(x_t) (see Line 5 in Algorithm <ref>).
As shown in <cit.>, SCGD needs to choose a smaller stepsize η_t than the stepsize β_t to be convergent. This prevents SCGD from using the same stepsize as SGD for non-compositional stochastic problems. To address this issue, <cit.> proposed a stochastically corrected version of SCGD, referred to as SCSC. In particular, the sequence y_t+1 is now given as follows (see Line 6 in Algorithm <ref>):
y_t+1 = (1-β_t)(y_t + g_ω_j_t(x_t) - g_ω_j_t(x_t-1)) + β_t g_ω_j_t(x_t).
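For concreteness, both updates can be implemented in a few lines. The following is a hedged sketch of Algorithm <ref>: the sampling oracles, the Jacobian convention (∇g_ω(x) stored as a d×p matrix), and the optional projection are our own interface choices.

```python
import numpy as np

def scgd_scsc(x0, y0, grad_f, g, jac_g, sample_nu, sample_omega,
              eta, beta, T, correction=False, proj=lambda x: x):
    """One-sample SCGD (correction=False) / SCSC (correction=True) sketch.

    grad_f(nu, y):   gradient of f_nu at y, a vector in R^d
    g(omega, x):     inner map g_omega(x) in R^d
    jac_g(omega, x): Jacobian of g_omega at x, shape (d, p)
    sample_nu(), sample_omega(): draw the samples nu_{i_t}, omega_{j_t}
    proj: projection onto the domain (identity if unconstrained)
    """
    x, x_prev, y = x0.copy(), x0.copy(), y0.copy()
    for _ in range(T):
        i_t, j_t = sample_nu(), sample_omega()
        if correction:  # SCSC: moving average with stochastic correction
            y = (1 - beta) * (y + g(j_t, x) - g(j_t, x_prev)) + beta * g(j_t, x)
        else:           # SCGD: plain moving average tracking g_S(x_t)
            y = (1 - beta) * y + beta * g(j_t, x)
        x_prev = x.copy()
        x = proj(x - eta * jac_g(j_t, x).T @ grad_f(i_t, y))  # chain-rule step
    return x
```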
We next list the definitions of strong convexity and smoothness that will be used in subsequent sections.
A function F: ℝ^p →ℝ is σ-strongly convex with some σ≥ 0 if, for any u, v∈ℝ^p, we have F(u)≥ F(v)+ ⟨∇ F(v), u-v⟩ +σ/2u-v^2.
If σ=0, we say that F is convex.
The following smoothness property of F leads to a bound on the gradient update.
A function F: ℝ^p → ℝ is L-smooth if, for any u, v ∈ ℝ^p, we have ‖∇F(u) - ∇F(v)‖ ≤ L‖u - v‖.
In general, smoothness implies that the gradient update of F cannot be overly expansive. Moreover, convexity together with L-smoothness of F implies that the gradients are co-coercive; hence we have
⟨∇F(u) - ∇F(v), u - v⟩ ≥ 1/L‖∇F(u) - ∇F(v)‖^2.
Note that if F is σ-strongly convex, then φ(x) = F(x) - σ/2‖x‖^2 is convex and (L-σ)-smooth. Applying (<ref>) to φ then yields the following inequality:
⟨∇F(u) - ∇F(v), u - v⟩ ≥ Lσ/(L+σ)‖u - v‖^2 + 1/(L+σ)‖∇F(u) - ∇F(v)‖^2.
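For completeness, the short calculation behind the last inequality is recorded below; it only applies (<ref>) to φ and rearranges terms.

```latex
% With G = \nabla F(u)-\nabla F(v) and d = u-v, co-coercivity of the convex,
% (L-\sigma)-smooth function \varphi (note \nabla\varphi(u)-\nabla\varphi(v) = G-\sigma d) gives
\langle G-\sigma d,\, d\rangle \;\ge\; \tfrac{1}{L-\sigma}\|G-\sigma d\|^2 .
% Expanding both sides and multiplying by (L-\sigma):
(L-\sigma)\langle G,d\rangle - \sigma(L-\sigma)\|d\|^2
  \;\ge\; \|G\|^2 - 2\sigma\langle G,d\rangle + \sigma^2\|d\|^2 ,
% and collecting the inner-product terms,
(L+\sigma)\langle G,d\rangle \;\ge\; \|G\|^2 + \sigma L\,\|d\|^2 ,
% which is the stated bound after dividing by L+\sigma.
```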
§ STABILITY AND GENERALIZATION
In this section, we present our main results on the stability bounds for SCGD and SCSC, which subsequently lead, via Theorem <ref>, to estimates of their generalization gaps. Then, starting from the error decomposition (<ref>), we derive bounds for their excess risks by trading off the bounds for the above generalization (error) gaps and the optimization errors. We present results for two different cases, i.e., the convex and strongly convex settings, in separate subsections. For brevity, we summarize our results on the excess risks for both SCGD and SCSC in Table <ref>. Before illustrating our main results, we list some assumptions.
We assume that the following conditions hold true.
* With probability 1 w.r.t. S, there holds sup_{x∈𝒳} 1/m∑_j=1^m ‖g_ω_j(x) - g_S(x)‖^2 ≤ V_g.
* With probability 1 w.r.t. S, there holds sup_{x∈𝒳} 1/m∑_j=1^m ‖∇g_ω_j(x) - ∇g_S(x)‖^2 ≤ C_g.
* With probability 1 w.r.t. ν, the function f_ν(·) has Lipschitz continuous gradients, i.e., ‖∇f_ν(y) - ∇f_ν(ȳ)‖ ≤ C_f‖y - ȳ‖ for all y, ȳ ∈ ℝ^d.
* With probability 1 w.r.t. ν and S, the function f_ν(g_S(·)) is L-smooth, i.e., ‖∇g_S(x)∇f_ν(g_S(x)) - ∇g_S(x')∇f_ν(g_S(x'))‖ ≤ L‖x - x'‖ for any x, x' ∈ 𝒳.
§.§ Convex Setting
In this subsection, we present our main results for SCGD and SCSC in the convex setting.
Stability Results. The following theorem establishes the compositional uniform stability (see Definition <ref>) of SCGD and SCSC in the convex setting.
Suppose that Assumptions <ref> and <ref> hold true. Consider Algorithm <ref> with η_t = η ≤ 1/(2L) and β_t = β ∈ (0,1) for any t ∈ [0, T-1]. Then the outputs A(S) = x_T of both SCGD and SCSC at iteration T are compositional uniformly stable with
ϵ_ν + ϵ_ω = 𝒪( L_fL_gη T/n + L_fL_gη T/m + √(C_g)L_fη√(T) + C_fL_gsup_S∑_j=0^T-1η(𝔼_A[‖y_j+1 - g_S(x_j)‖^2])^1/2 ).
The proof of the above theorem is given in Appendix <ref>.
In this remark, we discuss how the function composition plays a role in the stability analysis for SCGD and SCSC, and then compare our results with those for SGD in the non-compositional setting <cit.>. To this end, considering the step sizes η_t = η and n = m, (<ref>) reduces to the following estimate:
ϵ_ν + ϵ_ω = 𝒪( η T/n + √(C_g)η√(T) + η(sup_S∑_j=0^T-1𝔼_A[‖y_j+1 - g_S(x_j)‖^2])^1/2 ).
It was shown in <cit.> that the uniform stability of SGD with convex and smooth losses is of the order 𝒪(η T/n). Comparing these two results, we can see how the compositional structure plays a role in the stability analysis. Indeed, in contrast to the result for SGD, there are two extra terms in (<ref>) for SCGD and SCSC, i.e., √(C_g)η√(T) and η(sup_S∑_j=0^T-1𝔼_A[‖y_j+1 - g_S(x_j)‖^2])^1/2. Here, C_g is the (empirical) variance of the gradient of the inner function, i.e., sup_{x∈𝒳}1/m∑_j=1^m‖∇g_ω_j(x) - ∇g_S(x)‖^2 ≤ C_g as given in Assumption <ref>, and the other extra term arises because the moving-average sequence y_t+1 is used to track g_S(x_t). Notice that if we let g_ω(x) = x, then g_S(x) = g_ω(x) = x for any ω and S, SCGD and SCSC reduce to the classical SGD, and our stability result (<ref>) is the same as that of SGD, since the two extra terms mentioned above vanish due to the fact that y_j+1 = g_S(x_j) = x_j and C_g = 0 in this special case.
Combining (<ref>) with the estimate for 𝔼_A[‖y_j+1 - g_S(x_j)‖^2] <cit.> (see also Lemma <ref> and its self-contained proof in Appendix <ref>), one can get the following explicit stability results.
Let Assumptions <ref> and <ref> hold true. Consider Algorithm <ref> with η_t = η ≤ 1/(2L) and β_t = β ∈ (0,1) for any t ∈ [0, T-1], and let the output be A(S) = x_T. Let c > 0 be an arbitrary constant. Then we have the following results:
* SCGD is compositional uniformly stable with
ϵ_ν + ϵ_ω= 𝒪(η T n^-1+η T m^-1+η T^1/2+η T^-c/2+1β^-c/2+η^2 β^-1T+ηβ^1/2T).
* SCSC is compositional uniformly stable with
ϵ_ν + ϵ_ω=𝒪(η T n^-1+η T m^-1+η T^1/2+η T^-c/2+1β^-c/2+η^2 β^-1/2T+ηβ^1/2T)
Generalization results. Using the error decomposition (<ref>), Corollary <ref> and Theorem <ref>, we can derive the excess risk rates. To this end, we need the following results for estimating the optimization error, i.e., F_S(A(S))- F_S(x_*^S).
Suppose Assumptions <ref> and <ref> <ref>, <ref> hold for the empirical risk F_S, 𝔼_A[‖x_t - x_*^S‖^2] is bounded by D_x for all t ∈ [0, T-1], and 𝔼_A[‖y_1 - g_S(x_0)‖^2] is bounded by D_y. Let A(S) = 1/T∑_t=1^T x_t be the solution produced by Algorithm <ref> with the SCGD or SCSC update and constant step sizes η_t = η and β_t = β. Let c > 0 be an arbitrary constant.
* For the SCGD update, there holds
𝔼_A[F_S(A(S)) - F_S(x_*^S)]
= 𝒪(D_x(η T)^-1 + L_f^2L_g^2η + C_fD_y(β T)^1-c(η T)^-1 + C_fV_gβ^2η^-1 + C_fL_f^2L_g^3D_xηβ^-1).
* For the SCSC update, there holds
𝔼_A[F_S(A(S)) - F_S(x_*^S)]
= 𝒪(D_x(η T)^-1 + L_f^2L_g^2η + C_fD_y(β T)^-cβ^-1/2 + C_fV_gβ^1/2 + C_fL_f^2L_g^3η^2β^-3/2 + C_fL_g^2D_xβ^1/2).
The boundedness assumptions are satisfied if the domain 𝒳 is bounded in ℝ^p. The detailed proofs are given in Appendices <ref> and <ref>. Note that the upper bounds for the optimization error given in the above theorem hold true uniformly over the training data S.
Combining the above results with the stability bounds in Corollary <ref> and Theorem <ref>, we can derive the following excess risk bounds for SCGD and SCSC.
Suppose Assumptions <ref> and <ref> hold true, 𝔼_A[‖x_t - x_*^S‖^2] is bounded by D_x for all t ∈ [0, T-1], and 𝔼_A[‖y_1 - g_S(x_0)‖^2] is bounded by D_y. Let A(S) = 1/T∑_t=1^T x_t be a solution produced by Algorithm <ref> with the SCGD or SCSC update and η = T^-a and β = T^-b for some a, b ∈ (0,1].
* If we select T ≍max(n^3.5,m^3.5), η=T^-6/7 and β=T^-4/7, then, for the SCGD update, we have that _S,A[F(A(S)) - F(x_*) ]= 𝒪(1/√(n)+1/√(m)).
* If we select T ≍max(n^2.5,m^2.5), η=T^-4/5 and β=T^-4/5, then, for the SCSC update, there holds _S,A[F(A(S)) - F(x_*) ]= 𝒪(1/√(n)+1/√(m)).
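The parameter choices in the theorem can be packaged as a small helper; this is only a convenience sketch of the stated schedules (the function name and the integer rounding are ours).

```python
def convex_schedules(n, m, algo="SCSC"):
    """Iteration count and step sizes from the convex-case theorem.
    SCGD: T = max(n, m)^3.5, eta = T^(-6/7), beta = T^(-4/7)
    SCSC: T = max(n, m)^2.5, eta = beta = T^(-4/5)
    """
    if algo == "SCGD":
        T = int(max(n, m) ** 3.5)
        return T, T ** (-6 / 7), T ** (-4 / 7)
    T = int(max(n, m) ** 2.5)
    return T, T ** (-4 / 5), T ** (-4 / 5)

# e.g. convex_schedules(100, 100, "SCSC") -> (100000, 1e-4, 1e-4)
```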
In the recent work <cit.>, uniform convergence based on concentration inequalities and covering numbers <cit.> was used to study the generalization gap (estimation error) of the ERM minimizer of the SCO problem. Applied to our case, assuming that 𝒳 is a bounded domain and that f_ν and g_ω are both Lipschitz continuous and bounded, their results yield, with high probability,
F(A(S)) - F_S(A(S)) ≤ sup_{x∈𝒳}|F(x) - F_S(x)| = 𝒪(√(p/(m+n))), which is highly dependent on the dimension p of the domain 𝒳 ⊆ ℝ^p.
Compared with their bounds, our excess risk bounds are dimension-independent. Dimension-independent generalization bounds were also provided in <cit.>, which require a Hölderian error bound condition (e.g., strong convexity). The proof there heavily depends on properties of the ERM minimizer of the SCO problem and does not apply to SCGD and SCSC.
Theorem <ref> shows that the excess risk of SCGD can achieve the rate 𝒪(1/√(n) + 1/√(m)) in the convex case after appropriately selecting the iteration number T and step sizes η and β. Recall that
in the non-compositional setting, <cit.> established generalization error bounds 𝒪(1/√(n)) by choosing T ≍ n for SGD in the convex and smooth case. To achieve a similar rate, our results indicate that SCGD and SCSC need more iterations. The reason may be the use of the moving-average sequence y_t+1 to track g_S(x_t) and the (empirical) variance term for the inner function g_ω, as mentioned in Remark <ref>.
Note that in Theorem <ref> we present the stability result of the last iterate A(S)= x_T. While in Theorem <ref> we present the generalization bound of A(S)= 1/T∑_t= 1^T x_t, which is the average of the intermediate iterates x_1, …, x_T. This stems from the fact that generalization is a combination of stability and optimization, and the main focus of optimization is the average of intermediate iterates in the convex setting (see e.g. <cit.>).
§.§ Strongly Convex Setting
Stability Results. The following theorem establishes the compositional uniform stability (see Definition <ref>) of SCGD and SCSC in the strongly convex setting.
Suppose that Assumptions <ref> and <ref> hold true and f_ν(g_S(·)) is σ-strongly convex. Consider Algorithm <ref> with η_t = η ≤ 1/(2L+2σ) and β_t = β ∈ (0,1) for t ∈ [0, T-1], and let the output be A(S) = x_T. Then SCGD and SCSC are compositional uniformly stable with
ϵ_ν + ϵ_ω = 𝒪( L_gL_f(L+σ)/(σLm) + L_gL_f(L+σ)/(σLn) + L_f√(C_g(L+σ)η)/√(σL)
+ C_fL_gηsup_S∑_j=0^T-1(1 - ηLσ/(L+σ))^T-j-1(𝔼_A[‖y_j+1 - g_S(x_j)‖^2])^1/2 ).
The proof for Theorem <ref> is given in Appendix <ref>.
The stability of SGD with σ-strongly convex and smooth losses is of the order 𝒪(1/(σn)), as established in <cit.>. Comparing the result for SGD with ours for SCGD and SCSC, we have two extra terms when n = m, i.e., ηsup_S∑_j=0^T-1(1 - ηLσ/(L+σ))^T-j-1(𝔼_A[‖y_j+1 - g_S(x_j)‖^2])^1/2 and L_f√(C_g(L+σ)η)/√(σL), where C_g is the (empirical) variance of the gradient of the inner function, i.e., sup_{x∈𝒳}1/m∑_j=1^m‖∇g_ω_j(x) - ∇g_S(x)‖^2 ≤ C_g. We can see that if g_ω(x) = x, then g_S(x) = g_ω(x) for any ω and S. In this case, 𝔼_A[‖y_j+1 - g_S(x_j)‖^2] and C_g are both zero. Therefore, our stability result in Theorem <ref> matches that of SGD in the non-compositional setting <cit.>.
Combining Theorem <ref> with the estimate for 𝔼_A[‖y_j+1 - g_S(x_j)‖^2] in Lemma <ref> and using Lemma <ref>, which is given in Appendix <ref>, we can derive the explicit stability bounds in the following corollary. Its detailed proof is given at the end of Section <ref> in the appendix.
Let Assumptions <ref> and <ref> hold true and let f_ν(g_S(·)) be σ-strongly convex. Consider Algorithm <ref> with η_t = η ≤ 1/(2L+2σ) and β_t = β ∈ (0,1) for t ∈ [0, T-1], and let the output be A(S) = x_T. Let c > 0 be an arbitrary constant. Then, we have the following results:
* SCGD is compositional uniformly stable with
ϵ_ν + ϵ_ω= 𝒪(n^-1+m^-1+η^1/2+ηβ^-1+β^1/2+T^-c/2β^-c/2).
* SCSC is compositional uniformly stable with
ϵ_ν + ϵ_ω=𝒪(n^-1+m^-1+η^1/2+ηβ^-1/2+β^1/2+T^-c/2β^-c/2).
Generalization results. Using the error decomposition (<ref>), Corollary <ref> and Theorem <ref>, we can derive the excess risk rates. To this end, we need the following results for estimating the optimization error, i.e., F_S(A(S))- F_S(x_*^S).
Suppose Assumptions <ref> and <ref> <ref>, <ref>-<ref> hold for the empirical risk F_S, and F_S is σ-strongly convex. Let A(S) = x_T be the solution produced by Algorithm <ref> with the SCGD or SCSC update and constant step sizes η_t = η and β_t = β. Let c > 0 be an arbitrary constant.
* For SCGD update, there holds
𝔼_A[F_S(A(S))- F_S(x_*^S)]
= 𝒪( D_x(η T)^-c+ LL_f^2L_g^2η+ C_f^2L_g^2D_y (η T)^-cη+ C_f^2L_g^2D_y(β T)^-c+ C_f^4L_g^5C_gη^2β^-2+ C_f^2L_g^2V_gβ).
* For SCSC update, there holds
𝔼_A[F_S(A(S))- F_S(x_*^S)]
= 𝒪( D_x(η T)^-c+ LL_f^2L_g^2η+ C_f^2L_g^2D_y (η T)^-cη+ C_f^2L_g^2D_y(β T)^-c+ C_f^4L_g^5C_gη^2β^-1+ C_f^2L_g^2V_gβ).
Suppose Assumptions <ref> and <ref> hold true, and F_S is σ-strongly convex and L-smooth. Denote D_x := 𝔼_A[F_S(x_0) - F_S(x_*^S)] and D_y := 𝔼_A[‖y_1 - g_S(x_0)‖^2]. Let A(S) = x_T be a solution produced by Algorithm <ref> with the SCGD or SCSC update and η = T^-a and β = T^-b for some a, b ∈ (0,1].
* If we select T ≍max(n^5/3,m^5/3), η=T^-9/10 and β=T^-3/5, then, for the SCGD update, we have that _S,A[F(A(S)) - F(x_*) ]= 𝒪(1/√(n)+1/√(m)).
* If we select T ≍max(n^7/6,m^7/6), η= β=T^-6/7, then, for the SCSC update, there holds _S,A[F(A(S)) - F(x_*) ]= 𝒪(1/√(n)+1/√(m)).
Theorem <ref> shows that the excess risk of SCGD achieves the rate 𝒪(1/√(n)+1/√(m)) in the strongly convex case after carefully selecting the iteration number T and constant step sizes η and β. It is worth noting that, to achieve the rate 𝒪(1/√(n)+1/√(m)), SCGD needs T≍max(n^5/3,m^5/3) iterations in the strongly convex case, while Theorem <ref> shows that it needs more iterations, i.e., T≍max(n^3.5,m^3.5), in the convex case. SCSC further improves the results, as it only needs T≍max(n^7/6,m^7/6) iterations in the strongly convex case.
§ CONCLUSION
In this paper, we conduct a comprehensive study on the stability and generalization analysis of stochastic compositional optimization (SCO) algorithms. We introduce the concept of compositional uniform stability to handle the function composition structure inherent in SCO problems. By establishing the connection between stability and generalization error, we provide stability bounds for two popular SCO algorithms: SCGD and SCSC. In the convex case with standard smooth assumptions, we demonstrate that both SCGD and SCSC achieve an excess generalization error rate of 𝒪(1/√(n)+1/√(m)), with SCSC requiring fewer iterations than SCGD. Furthermore, we extend our analysis to the strongly convex case, where we show that SCGD and SCSC achieve the same rate of 𝒪(1/√(n)+1/√(m)) with even fewer iterations than in the convex case.
There are several directions for future research. Firstly, our analysis only considers the convex and smooth cases; an interesting avenue for future research is the case where the inner function and/or the outer function are non-smooth or non-convex, e.g., neural networks with the Rectified Linear Unit (ReLU) activation function. Secondly, it would be interesting to obtain the optimal excess risk rate 𝒪(1/√(n)+1/√(m)) with linear time complexity T = 𝒪(max(n, m)) for SCGD and SCSC.
The work is partially supported by NSF grants under DMS-2110836, IIS-2103450, and IIS-2110546.
bartlett2002rademacher
Peter Bartlett and Shahar Mendelson.
Rademacher and gaussian complexities: Risk bounds and structural
results.
Journal of Machine Learning Research, 3:463–482, 2002.
bassily2020stability
Raef Bassily, Vitaly Feldman, Cristóbal Guzmán, and Kunal Talwar.
Stability of stochastic gradient descent on nonsmooth convex losses.
Advances in Neural Information Processing Systems, 33, 2020.
bousquet2004introduction
Olivier Bousquet, Stéphane Boucheron, and Gábor Lugosi.
Introduction to statistical learning theory.
Advanced Lectures on Machine Learning: ML Summer Schools 2003,
Canberra, Australia, February 2-14, 2003, Tübingen, Germany, August 4-16,
2003, Revised Lectures, pages 169–207, 2004.
bousquet2002stability
Olivier Bousquet and André Elisseeff.
Stability and generalization.
Journal of Machine Learning Research, 2(Mar):499–526, 2002.
bousquet2020sharper
Olivier Bousquet, Yegor Klochkov, and Nikita Zhivotovskiy.
Sharper bounds for uniformly stable algorithms.
In Conference on Learning Theory, pages 610–626, 2020.
bousquet2019sharper
Olivier Bousquet, Yegor Klochkov, and Nikita Zhivotovskiy.
Sharper bounds for uniformly stable algorithms.
In Conference on Learning Theory, pages 610–626, 2020.
charles2018stability
Zachary Charles and Dimitris Papailiopoulos.
Stability and generalization of learning algorithms that converge to
global optima.
In International Conference on Machine Learning, pages
744–753, 2018.
chen2021solving
Tianyi Chen, Yuejiao Sun, and Wotao Yin.
Solving stochastic compositional optimization is nearly as easy as
solving stochastic optimization.
IEEE Transactions on Signal Processing, 69:4937–4948, 2021.
chen2021tighter
Tianyi Chen, Yuejiao Sun, and Wotao Yin.
Tighter analysis of alternating stochastic gradient method for
stochastic nested problems.
arXiv preprint arXiv:2106.13781, 2021.
dentcheva2017statistical
Darinka Dentcheva, Spiridon Penev, and Andrzej Ruszczyński.
Statistical estimation of composite risk functionals and risk
optimization problems.
Annals of the Institute of Statistical Mathematics,
69:737–760, 2017.
devraj2019stochastic
Adithya M Devraj and Jianshu Chen.
Stochastic variance reduced primal dual algorithms for empirical
composition optimization.
Advances in Neural Information Processing Systems, 32, 2019.
elisseeff2005stability
Andre Elisseeff, Theodoros Evgeniou, and Massimiliano Pontil.
Stability of randomized learning algorithms.
Journal of Machine Learning Research, 6(Jan):55–79, 2005.
farnia2021train
Farzan Farnia and Asuman Ozdaglar.
Train simultaneously, generalize better: Stability of gradient-based
minimax learners.
In International Conference on Machine Learning, pages
3174–3185. PMLR, 2021.
feldman2019high
Vitaly Feldman and Jan Vondrak.
High probability generalization bounds for uniformly stable
algorithms with nearly optimal rate.
In Conference on Learning Theory, pages 1270–1279, 2019.
finn2017model
Chelsea Finn, Pieter Abbeel, and Sergey Levine.
Model-agnostic meta-learning for fast adaptation of deep networks.
In International conference on machine learning, pages
1126–1135. PMLR, 2017.
ghadimi2020single
Saeed Ghadimi, Andrzej Ruszczynski, and Mengdi Wang.
A single timescale stochastic approximation method for nested
stochastic optimization.
SIAM Journal on Optimization, 30(1):960–979, 2020.
hardt2016train
Moritz Hardt, Ben Recht, and Yoram Singer.
Train faster, generalize better: Stability of stochastic gradient
descent.
In International Conference on Machine Learning, pages
1225–1234, 2016.
hu2019efficient
Wenqing Hu, Chris Junchi Li, Xiangru Lian, Ji Liu, and Huizhuo Yuan.
Efficient smooth non-convex stochastic compositional optimization via
stochastic recursive gradient descent.
Advances in Neural Information Processing Systems, 32, 2019.
hu2020sample
Yifan Hu, Xin Chen, and Niao He.
Sample complexity of sample average approximation for conditional
stochastic optimization.
SIAM Journal on Optimization, 30(3):2103–2133, 2020.
jiang2022optimal
Wei Jiang, Bokun Wang, Yibo Wang, Lijun Zhang, and Tianbao Yang.
Optimal algorithms for stochastic multi-level compositional
optimization.
In International Conference on Machine Learning, pages
10195–10216. PMLR, 2022.
kar2013generalization
Purushottam Kar, Bharath Sriperumbudur, Prateek Jain, and Harish Karnick.
On the generalization ability of online learning algorithms for
pairwise loss functions.
In International Conference on Machine Learning, pages
441–449, 2013.
kuzborskij2018data
Ilja Kuzborskij and Christoph Lampert.
Data-dependent stability of stochastic gradient descent.
In International Conference on Machine Learning, pages
2820–2829, 2018.
lei2022nonsmooth
Yunwen Lei.
Stability and generalization of stochastic optimization with
nonconvex and nonsmooth problems.
arXiv preprint arXiv:2206.07082, 2022.
lei2022stability
Yunwen Lei, Rong Jin, and Yiming Ying.
Stability and generalization analysis of gradient methods for shallow
neural networks.
In Advances in Neural Information Processing Systems, 2022.
lei2021stability
Yunwen Lei, Zhenhuan Yang, Tianbao Yang, and Yiming Ying.
Stability and generalization of stochastic gradient methods for
minimax problems.
In International Conference on Machine Learning, pages
6175–6186, 2021.
lei2020fine
Yunwen Lei and Yiming Ying.
Fine-grained analysis of stability and generalization for stochastic
gradient descent.
In International Conference on Machine Learning, pages
5809–5819, 2020.
lei2021stochastic
Yunwen Lei and Yiming Ying.
Stochastic proximal auc maximization.
The Journal of Machine Learning Research, 22(1):2832–2876,
2021.
lian2017finite
Xiangru Lian, Mengdi Wang, and Ji Liu.
Finite-sum composition optimization via variance reduced gradient
descent.
In Artificial Intelligence and Statistics, pages 1159–1167.
PMLR, 2017.
lin2018improved
Tianyi Lin, Chenyou Fan, Mengdi Wang, and Michael I Jordan.
Improved oracle complexity for stochastic compositional variance
reduced gradient.
arXiv preprint arXiv:1806.00458, 2018.
liu2018fast
Mingrui Liu, Xiaoxuan Zhang, Zaiyi Chen, Xiaoyu Wang, and Tianbao Yang.
Fast stochastic AUC maximization with O(1/n)-convergence rate.
In International Conference on Machine Learning, pages
3195–3203, 2018.
qi2021stochastic
Qi Qi, Youzhi Luo, Zhao Xu, Shuiwang Ji, and Tianbao Yang.
Stochastic optimization of areas under precision-recall curves with
provable convergence.
Advances in Neural Information Processing Systems,
34:1752–1765, 2021.
ruszczynski2021stochastic
Andrzej Ruszczynski.
A stochastic subgradient method for nonsmooth nonconvex multilevel
composition optimization.
SIAM Journal on Control and Optimization, 59(3):2301–2320,
2021.
schmidt2011convergence
Mark Schmidt, Nicolas Roux, and Francis Bach.
Convergence rates of inexact proximal-gradient methods for convex
optimization.
Advances in neural information processing systems, 24, 2011.
shapiro2021lectures
Alexander Shapiro, Darinka Dentcheva, and Andrzej Ruszczynski.
Lectures on stochastic programming: modeling and theory.
SIAM, 2021.
shen2019stability
Wei Shen, Zhenhuan Yang, Yiming Ying, and Xiaoming Yuan.
Stability and optimization error of stochastic gradient descent for
pairwise learning.
Analysis and Applications, pages 1–41, 2019.
sutton2018reinforcement
Richard S Sutton and Andrew G Barto.
Reinforcement learning: An introduction.
MIT press, 2018.
szepesvari2010algorithms
Csaba Szepesvári.
Algorithms for reinforcement learning.
Synthesis lectures on artificial intelligence and machine
learning, 4(1):1–103, 2010.
tolstaya2018nonparametric
Ekaterina Tolstaya, Alec Koppel, Ethan Stump, and Alejandro Ribeiro.
Nonparametric stochastic compositional gradient descent for
q-learning in continuous markov decision problems.
In 2018 Annual American Control Conference (ACC), pages
6608–6615. IEEE, 2018.
tutunov2020compositional
Rasul Tutunov, Minne Li, Alexander I Cowen-Rivers, Jun Wang, and Haitham
Bou-Ammar.
Compositional adam: An adaptive compositional solver.
arXiv preprint arXiv:2002.03755, 2020.
vapnik2013nature
Vladimir Vapnik.
The nature of statistical learning theory.
Springer, 2013.
wang2022finite
Bokun Wang and Tianbao Yang.
Finite-sum compositional stochastic optimization: Theory and
applications.
arXiv preprint arXiv:2202.12396, 2022.
wang2017stochastic
Mengdi Wang, Ethan X Fang, and Han Liu.
Stochastic compositional gradient descent: algorithms for minimizing
compositions of expected-value functions.
Mathematical Programming, 161(1):419–449, 2017.
wang2016accelerating
Mengdi Wang, Ji Liu, and Ethan Fang.
Accelerating stochastic composition optimization.
Advances in Neural Information Processing Systems, 29, 2016.
wang2022stability
Puyu Wang, Yunwen Lei, Yiming Ying, and Ding-Xuan Zhou.
Stability and generalization for markov chain stochastic gradient
methods.
arXiv preprint arXiv:2209.08005, 2022.
yang2022algorithmic
Tianbao Yang.
Algorithmic foundation of deep x-risk optimization.
arXiv preprint arXiv:2206.00439, 2022.
AUCSurvey2023
Tianbao Yang and Yiming Ying.
Auc maximization in the era of big data and ai: A survey.
ACM Computing Surveys, 55(8), 2022.
yang2021simple
Zhenhuan Yang, Yunwen Lei, Puyu Wang, Tianbao Yang, and Yiming Ying.
Simple stochastic and online gradient descent algorithms for pairwise
learning.
Advances in Neural Information Processing Systems,
34:20160–20171, 2021.
ying2016stochastic
Yiming Ying, Longyin Wen, and Siwei Lyu.
Stochastic online AUC maximization.
In Advances in Neural Information Processing Systems, pages
451–459, 2016.
zhang2021generalization
Junyu Zhang, Mingyi Hong, Mengdi Wang, and Shuzhong Zhang.
Generalization bounds for stochastic saddle point problems.
In International Conference on Artificial Intelligence and
Statistics, pages 568–576. PMLR, 2021.
zhang2020optimal
Z Zhang and G Lan.
Optimal algorithms for convex nested stochastic composite
optimization.
Mathematical programming, 2020.
zhao2011online
Peilin Zhao, Steven CH Hoi, Rong Jin, and Tianbao Yang.
Online AUC maximization.
In International Conference on Machine Learning, pages
233–240. Omnipress, 2011.
zhou2002covering
Ding-Xuan Zhou.
The covering number in learning theory.
Journal of Complexity, 18(3):739–767, 2002.
§ TECHNICAL LEMMAS
First, we list some notation in Table <ref> for our setting. To derive the stability and generalization bounds, we need the following lemmas.
The following lemma is directly adapted from <cit.>, where both population distributions of the random variables ν and ω are the uniform distributions over S_ν = {ν_1,…,ν_n} and S_ω = {ω_1,…,ω_m}. It states that y_t+1 closely tracks g_S(x_t).
Let Assumptions <ref> and <ref> <ref> hold, and let (x_t, y_t) be generated by Algorithm <ref>. Let η_t = η and β_t = β for η, β > 0. Let c > 0 be an arbitrary constant.
* With the SCGD update, we have
𝔼_A[‖y_t+1 - g_S(x_t)‖^2]
≤ (c/e)^c(tβ)^-c𝔼_A[‖y_1 - g_S(x_0)‖^2] + L_f^2L_g^3η^2/β^2 + 2V_gβ.
* With the SCSC update, we have
𝔼_A[‖y_t+1 - g_S(x_t)‖^2]
≤ (c/e)^c(tβ)^-c𝔼_A[‖y_1 - g_S(x_0)‖^2] + L_f^2L_g^3η^2/β + 2V_gβ.
The next lemma was established in <cit.> and this lemma was used in <cit.>.
Assume that the non-negative sequence {u_t: t∈ℕ} satisfies the following recursive inequality for all t ∈ ℕ:
u_t^2 ≤ S_t + ∑_τ=1^t-1α_τu_τ,
where {S_τ: τ∈ℕ} is an increasing sequence, S_0 ≥ u_0^2, and α_τ ≥ 0 for any τ∈ℕ. Then, the following inequality holds true:
u_t ≤ √(S_t) + ∑_τ=1^t-1α_τ.
For any ν, c > 0 and any x > 0, we have
e^-νx ≤ (c/(νe))^c x^-c.
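This elementary bound follows by maximizing x ↦ x^c e^{-νx} over x > 0; the one-line calculation reads:

```latex
% h(x) = x^c e^{-\nu x} satisfies h'(x) = x^{c-1}e^{-\nu x}(c - \nu x),
% which vanishes at x = c/\nu, so
\max_{x>0}\, x^{c} e^{-\nu x} = \Big(\frac{c}{\nu}\Big)^{c} e^{-c}
  = \Big(\frac{c}{\nu e}\Big)^{c},
\qquad\text{hence}\qquad
e^{-\nu x} \le \Big(\frac{c}{\nu e}\Big)^{c} x^{-c}\quad\text{for all } x>0 .
```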
Let {a_i}_i= 1^T, {b_i}_i= 1^T be two sequences of positive real numbers such that a_i≤ a_i+ 1 and b_i≥ b_i+ 1 for all i. Then we have
∑_i= 1^T a_ib_i/∑_i= 1^T a_i≤∑_i= 1^T b_i/T.
To show (<ref>), it suffices to show
∑_i= 1^T a_ib_i ∑_j= 1^T 1≤∑_j= 1^T a_j∑_i= 1^T b_i.
Rearranging the summation, it suffices to show
∑_i= 1^T ∑_j= 1^T a_ib_i- ∑_i= 1^T ∑_j= 1^T a_j b_i≤ 0.
The above inequality can be rewritten as
0≥∑_i= 1^T ∑_j= 1^T (a_i- a_j)b_i= ∑_i= 1^T ∑_j= i+ 1^T (a_i- a_j)(b_i- b_j),
where the last equality holds due to the symmetry between i and j. Since for i < j we have a_i ≤ a_j and b_i ≥ b_j, each summand is non-positive, so the above inequality holds, and thus (<ref>) holds. This completes the proof.
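A quick numerical sanity check of this Chebyshev-type inequality, with random monotone sequences (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
T = 50
a = np.sort(rng.random(T))          # nondecreasing a_i > 0
b = np.sort(rng.random(T))[::-1]    # nonincreasing b_i
lhs = np.sum(a * b) / np.sum(a)     # a-weighted average of b
rhs = np.mean(b)                    # plain average of b
assert lhs <= rhs + 1e-12, (lhs, rhs)
```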
§.§ Proof of Lemma <ref>
The proof of Lemma <ref> leverages the following results.
Suppose Assumptions <ref> <ref> and <ref> <ref> hold for the empirical risk F_S. By running Algorithm <ref> with the SCGD update, we have
𝔼_A[‖y_t+1 - g_S(x_t)‖^2 | ℱ_t] ≤ (1-β_t)‖y_t - g_S(x_t-1)‖^2 + L_g^2/β_t‖x_t - x_t-1‖^2 + 2V_gβ_t^2.
Suppose Assumptions <ref> <ref> and <ref> <ref> hold for the empirical risk F_S. By running Algorithm <ref> with the SCSC update, we have
𝔼_A[‖y_t+1 - g_S(x_t)‖^2 | ℱ_t] ≤ (1-β_t)‖y_t - g_S(x_t-1)‖^2 + L_g^2‖x_t - x_t-1‖^2 + 2V_gβ_t^2.
Now we are ready to prove Lemma <ref>.
We first present the proof for the SCGD update. Taking the expectation with respect to the internal randomness of the algorithm over (<ref>) and noting that 𝔼_A[‖x_t - x_t-1‖^2] ≤ L_f^2L_g^2η_t-1^2, we get
𝔼_A[‖y_t+1 - g_S(x_t)‖^2] ≤ (1-β_t)𝔼_A[‖y_t - g_S(x_t-1)‖^2] + L_f^2L_g^3η_t-1^2/β_t + 2V_gβ_t^2.
Telescoping the above inequality from 1 to t yields
𝔼_A[‖y_t+1 - g_S(x_t)‖^2]
≤ ∏_i=1^t(1-β_i)𝔼_A[‖y_1 - g_S(x_0)‖^2] + L_f^2L_g^3∑_i=1^t∏_j=i+1^t(1-β_j)η_i-1^2/β_i + 2V_g∑_i=1^t∏_j=i+1^t(1-β_j)β_i^2.
Note that ∏_i=K^N(1-β_i) ≤ exp(-∑_i=K^Nβ_i) for all K ≤ N and β_i > 0. Setting η_t = η and β_t = β, we thus have
𝔼_A[‖y_t+1 - g_S(x_t)‖^2]
≤ exp(-βt)𝔼_A[‖y_1 - g_S(x_0)‖^2] + ∑_i=1^t(1-β)^t-i(L_g^3L_f^2η^2/β + 2V_gβ^2).
Using Lemma <ref> with ν = 1, we get
𝔼_A[‖y_t+1 - g_S(x_t)‖^2]
≤ (c/e)^c(tβ)^-c𝔼_A[‖y_1 - g_S(x_0)‖^2] + L_g^3L_f^2η^2/β^2 + 2V_gβ,
where the inequality holds since ∑_i=1^t(1-β)^t-i ≤ 1/β. This gives the desired result for the SCGD update. Next, we present the proof for the SCSC update. Taking the total expectation with respect to the internal randomness of the algorithm over (<ref>) and noting that 𝔼_A[‖x_t - x_t-1‖^2] ≤ L_f^2L_g^2η_t-1^2, we get
𝔼_A[‖y_t+1 - g_S(x_t)‖^2] ≤ (1-β_t)𝔼_A[‖y_t - g_S(x_t-1)‖^2] + L_f^2L_g^3η_t-1^2 + 2V_gβ_t^2.
Telescoping the above inequality from 1 to t yields
𝔼_A[‖y_t+1 - g_S(x_t)‖^2]
≤ ∏_i=1^t(1-β_i)𝔼_A[‖y_1 - g_S(x_0)‖^2] + L_f^2L_g^3∑_i=1^t∏_j=i+1^t(1-β_j)η_i-1^2 + 2V_g∑_i=1^t∏_j=i+1^t(1-β_j)β_i^2.
Note that ∏_i=K^N(1-β_i) ≤ exp(-∑_i=K^Nβ_i) for all K ≤ N and β_i > 0. Setting η_t = η and β_t = β, we thus have
𝔼_A[‖y_t+1 - g_S(x_t)‖^2]
≤ exp(-tβ)𝔼_A[‖y_1 - g_S(x_0)‖^2] + ∑_i=1^t(1-β)^t-i(L_g^3L_f^2η^2 + 2V_gβ^2).
Using Lemma <ref> with ν = 1, we get
𝔼_A[‖y_t+1 - g_S(x_t)‖^2]
≤ (c/e)^c(tβ)^-c𝔼_A[‖y_1 - g_S(x_0)‖^2] + L_g^3L_f^2η^2/β + 2V_gβ,
where the inequality holds since ∑_i=1^t(1-β)^t-i ≤ 1/β.
This gives the desired result for the SCSC update and completes the proof.
§ PROOF FOR SECTION <REF>
Write
𝔼_S,A[F(A(S)) - F_S(A(S))] = 𝔼_S,A[𝔼_ν[f_ν(g(A(S)))] - 1/n∑_i=1^n f_ν_i( 1/m∑_j=1^m g_ω_j(A(S)) )]
= 𝔼_S,A[𝔼_ν[f_ν(g(A(S)))] - 1/n∑_i=1^n f_ν_i(g(A(S)))]
+ 𝔼_S,A[1/n∑_i=1^n f_ν_i(g(A(S))) - 1/n∑_i=1^n f_ν_i( 1/m∑_j=1^m g_ω_j(A(S)) )]
≤ 𝔼_S,A[𝔼_ν[f_ν(g(A(S)))] - 1/n∑_i=1^n f_ν_i(g(A(S)))]
+ 𝔼_S,A[1/n∑_i=1^n ( f_ν_i(g(A(S))) - f_ν_i( 1/m∑_j=1^m g_ω_j(A(S)) ) )].
Now we estimate the two terms on the right-hand side of (<ref>). Define S^',ν = {ν_1^',ν_2^',…,ν_n^',ω_1,ω_2,…,ω_m}. In particular, we have that
𝔼_S,A[𝔼_ν[f_ν(g(A(S)))] - 1/n∑_i=1^n f_ν_i(g(A(S)))]
= 𝔼_S,A,S^',ν[1/n∑_i=1^n f_ν_i(g(A(S^i,ν))) - 1/n∑_i=1^n f_ν_i(g(A(S)))]
= 𝔼_S,A,S^',ν[1/n∑_i=1^n ( f_ν_i(g(A(S^i,ν))) - f_ν_i(g(A(S))) )]
≤ L_f𝔼_S,A,S^',ν[‖g(A(S^i,ν)) - g(A(S))‖] ≤ L_fL_g𝔼_S,A,S^',ν[‖A(S^i,ν) - A(S)‖].
Furthermore,
𝔼_S,A[1/n∑_i=1^n ( f_ν_i(g(A(S))) - f_ν_i( 1/m∑_j=1^m g_ω_j(A(S)) ) )]
≤ L_f𝔼_S,A[‖g(A(S)) - 1/m∑_j=1^m g_ω_j(A(S))‖].
Now it suffices to estimate the term 𝔼_S,A[‖g(A(S)) - 1/m∑_j=1^m g_ω_j(A(S))‖].
Note that, in general, g is a mapping from ℝ^p to ℝ^d, and we will use some ideas from <cit.>. To this end, we write
g(A(S)) - 1/m∑_j=1^m g_ω_j(A(S))
= 1/m∑_j=1^m𝔼_ω,ω_j^'[g_ω(A(S)) - g_ω(A(S^j,ω))]
+ 1/m∑_j=1^m𝔼_ω_j^'[𝔼_ω[g_ω(A(S^j,ω))] - g_ω_j(A(S^j,ω))]
+ 1/m∑_j=1^m𝔼_ω_j^'[g_ω_j(A(S^j,ω)) - g_ω_j(A(S))].
It then follows that
‖g(A(S)) - 1/m∑_j=1^m g_ω_j(A(S))‖ ≤ 1/m∑_j=1^m𝔼_ω,ω_j^'[‖g_ω(A(S)) - g_ω(A(S^j,ω))‖]
+ ‖1/m∑_j=1^m𝔼_ω_j^'[𝔼_ω[g_ω(A(S^j,ω))] - g_ω_j(A(S^j,ω))]‖ + 1/m∑_j=1^m𝔼_ω_j^'[‖g_ω_j(A(S^j,ω)) - g_ω_j(A(S))‖].
Note that S and S^j,ω differ by a single example. By the stability assumption and Definition <ref>, we further get
𝔼_S,A[‖g(A(S)) - 1/m∑_j=1^m g_ω_j(A(S))‖]
≤ 𝔼_S,A[‖1/m∑_j=1^m𝔼_ω_j^'[𝔼_ω[g_ω(A(S^j,ω))] - g_ω_j(A(S^j,ω))]‖] + 2L_gϵ_ω.
In the next step, we need to estimate ∑_j=1^m𝔼_ω_j^'[𝔼_ω[g_ω(A(S^j,ω))] - g_ω_j(A(S^j,ω))].
Following a proof technique similar to that of <cit.>, we define ξ_j(S) as a function of S as follows:
ξ_j(S) = 𝔼_ω_j^'[𝔼_ω[g_ω(A(S^j,ω))] - g_ω_j(A(S^j,ω))].
Notice that
𝔼_S,A[‖∑_j=1^mξ_j(S)‖^2] = ∑_j=1^m𝔼_S,A[‖ξ_j(S)‖^2] + ∑_j,i∈[m]: j≠i𝔼_S,A[⟨ξ_j(S), ξ_i(S)⟩].
According to the definition of ξ_j(S) and the Cauchy-Schwarz inequality, we know
∑_j=1^m𝔼_S,A[‖ξ_j(S)‖^2]
= ∑_j=1^m𝔼_S,A[‖𝔼_ω_j^'[𝔼_ω[g_ω(A(S^j,ω))] - g_ω_j(A(S^j,ω))]‖^2]
≤ ∑_j=1^m𝔼_S,A[‖𝔼_ω[g_ω(A(S^j,ω))] - g_ω_j(A(S^j,ω))‖^2]
= ∑_j=1^m𝔼_S,A[‖𝔼_ω[g_ω(A(S))] - g_ω_j^'(A(S))‖^2] = m𝔼_S,A[Var_ω(g_ω(A(S)))],
where the variance term Var_ω(g_ω(A(S))) = 𝔼_ω[‖g(A(S)) - g_ω(A(S))‖^2].
Next, we estimate the second term on the right-hand side of (<ref>).
To this end, we define
S^i,ω = {ω_1, …, ω_i-1, ω_i^', ω_i+1, …, ω_m, ν_1, …, ν_n};
S^i,j,ω = {ω_1, …, ω_i-1, ω_i^', ω_i+1, …, ω_j-1, ω_j^', ω_j+1, …, ω_m, ν_1, …, ν_n}.
Due to the symmetry between ω and ω_j, we have
𝔼_ω_j[ξ_j(S)] = 0, ∀ j ∈ [m].
If j≠ i,we have
𝔼_S,A[⟨ξ_j(S^i,ω), ξ_i(S)⟩]
=𝔼_S,A𝔼_ω_i[⟨ξ_j(S^i,ω), ξ_i(S)⟩]
=𝔼_S, A[⟨ξ_j(S^i,ω), 𝔼_ω_i[ξ_i(S)]⟩]=0,
where the second equality holds since ξ_j(S^i,ω) is independent of ω_i, and the last identity follows from 𝔼_ω_i[ξ_i(S)] = 0 due to (<ref>). In a similar way, we can get the following equations for j ≠ i:
𝔼_S, A[⟨ξ_j(S), ξ_i(S^j,ω)⟩]
=𝔼_S,A𝔼_ω_j[⟨ξ_j(S), ξ_i(S^j,ω)⟩]
=𝔼_S, A[⟨𝔼_ω_j[ξ_j(S)], ξ_i(S^j,ω)⟩]=0,
and
𝔼_S, A[⟨ξ_j(S^i,ω), ξ_i(S^j,ω)⟩]
=𝔼_S,A𝔼_ω_j[⟨ξ_j(S^i,ω), ξ_i(S^j,ω)⟩]
=𝔼_S, A[⟨𝔼_ω_j[ξ_j(S^i,ω)], ξ_i(S^j,ω)⟩]=0.
Combining the above identities, we have for j ≠ i
𝔼_S,A[⟨ξ_j(S), ξ_i(S)⟩] = 𝔼_S,A[⟨ξ_j(S) - ξ_j(S^i,ω), ξ_i(S) - ξ_i(S^j,ω)⟩]
≤ 𝔼_S,A[‖ξ_j(S) - ξ_j(S^i,ω)‖ · ‖ξ_i(S) - ξ_i(S^j,ω)‖]
≤ 1/2𝔼_S,A[‖ξ_j(S) - ξ_j(S^i,ω)‖^2]
+ 1/2𝔼_S,A[‖ξ_i(S) - ξ_i(S^j,ω)‖^2],
where the last inequality uses ab ≤ 1/2(a^2 + b^2). With the definitions of ξ_j(S), S^i,ω and S^i,j,ω, we have the following identity for j ≠ i:
𝔼_S,A[‖ξ_j(S) - ξ_j(S^i,ω)‖^2]
= 𝔼_S,A[‖𝔼_ω_j^'[𝔼_ω[g_ω(A(S^j,ω))] - g_ω_j(A(S^j,ω))]
- 𝔼_ω_j^'[𝔼_ω[g_ω(A(S^i,j,ω))] - g_ω_j(A(S^i,j,ω))]‖^2]
= 𝔼_S,A[‖𝔼_ω_j^'𝔼_ω[g_ω(A(S^j,ω)) - g_ω(A(S^i,j,ω))] + 𝔼_ω_j^'[g_ω_j(A(S^i,j,ω)) - g_ω_j(A(S^j,ω))]‖^2].
Then using the elementary inequality (a+b)^2 ≤ 2(a^2+b^2) and the Cauchy-Schwarz inequality, we get
𝔼_S,A[‖ξ_j(S) - ξ_j(S^i,ω)‖^2]
≤ 2𝔼_S,A[‖g_ω(A(S^j,ω)) - g_ω(A(S^i,j,ω))‖^2] + 2𝔼_S,A[‖g_ω_j(A(S^i,j,ω)) - g_ω_j(A(S^j,ω))‖^2]
≤ 2𝔼_S,A[L_g^2‖A(S^j,ω) - A(S^i,j,ω)‖^2] + 2𝔼_S,A[L_g^2‖A(S^i,j,ω) - A(S^j,ω)‖^2].
Since S^j,ω and S^i,j,ω differ by one example, it follows from the definition of stability that
𝔼_S,A[‖ξ_j(S) - ξ_j(S^i,ω)‖^2] ≤ 4L_g^2ϵ_ω^2, ∀ j ≠ i.
In a similar way, we can get
𝔼_S,A[‖ξ_i(S) - ξ_i(S^j,ω)‖^2] ≤ 4L_g^2ϵ_ω^2, ∀ j ≠ i.
Combining the above two inequalities with (<ref>), we get
∑_j,i∈[m]: j≠i𝔼_S,A[⟨ξ_j(S), ξ_i(S)⟩] ≤ 4m(m-1)L_g^2ϵ_ω^2.
Then combining (<ref>) and (<ref>) with (<ref>), we have
𝔼_S,A[‖∑_j=1^mξ_j(S)‖^2] ≤ m𝔼_S,A[Var_ω(g_ω(A(S)))]
+ 4m(m-1)L_g^2ϵ_ω^2.
Then we get
𝔼_S,A[‖∑_j=1^mξ_j(S)‖] ≤ (𝔼_S,A[‖∑_j=1^mξ_j(S)‖^2])^1/2 ≤ √(m𝔼_S,A[Var_ω(g_ω(A(S)))]) + 2mL_gϵ_ω.
Plugging the above inequality back into (<ref>), we get
𝔼_S,A[‖g(A(S)) - 1/m∑_j=1^m g_ω_j(A(S))‖] ≤ √(m^-1𝔼_S,A[Var_ω(g_ω(A(S)))]) + 4L_gϵ_ω.
Using (<ref>) in (<ref>) and then combining (<ref>) with (<ref>), we get the final result
𝔼_S,A[F(A(S)) - F_S(A(S))] ≤ L_fL_gϵ_ν + 4L_fL_gϵ_ω + L_f√(m^-1𝔼_S,A[Var_ω(g_ω(A(S)))]),
where Var_ω(g_ω(A(S))) = 𝔼_ω[‖g(A(S)) - g_ω(A(S))‖^2]. This completes the proof.
§ PROOF FOR THE CONVEX SETTING
§.§ Stability
For any k∈[n], define S^k,ν = {ν_1,…,ν_k-1, ν_k^', ν_k+1,…,ν_n, ω_1,…,ω_m} as formed from S by replacing the k-th element of S_ν. For any l∈[m], define S^l,ω = {ν_1,…,ν_n, ω_1,…,ω_l-1, ω_l^', ω_l+1,…,ω_m} as formed from S by replacing the l-th element of S_ω. Let {x_t+1} and {y_t+1} be produced by SCGD based on S, {x_t+1^k,ν} and {y_t+1^k,ν} be produced by SCGD based on S^k,ν, and {x_t+1^l,ω} and {y_t+1^l,ω} be produced by SCGD based on S^l,ω. Let x_0 = x_0^k,ν and x_0 = x_0^l,ω be the
starting points in 𝒳. Since the change of one sample can happen in either S_ν or S_ω, we estimate 𝔼_A[‖x_t+1 - x_t+1^k,ν‖] and 𝔼_A[‖x_t+1 - x_t+1^l,ω‖] as follows.
Estimation of 𝔼_A[‖x_t+1 - x_t+1^k,ν‖].
We begin with the estimation of the term 𝔼_A[‖x_t+1 - x_t+1^k,ν‖]. For this purpose, we consider two cases, i.e., i_t ≠ k and i_t = k.
Case 1 (i_t ≠ k). If i_t ≠ k, we have
‖x_t+1 - x_t+1^k,ν‖^2 ≤ ‖x_t - η_t∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - x_t^k,ν + η_t∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν)‖^2
= ‖x_t - x_t^k,ν‖^2 - 2η_t⟨∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν), x_t - x_t^k,ν⟩
+ η_t^2‖∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν)‖^2.
Taking the expectation w.r.t. j_t on both sides of (<ref>) implies that
𝔼_j_t[‖x_t+1 - x_t+1^k,ν‖^2]
≤ 𝔼_j_t[‖x_t - x_t^k,ν‖^2] - 2η_t𝔼_j_t[⟨∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν), x_t - x_t^k,ν⟩]
+ η_t^2𝔼_j_t[‖∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν)‖^2].
We first estimate the second term on the right hand side of (<ref>). It can be decomposed as
- 2η_t_j_t[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν),x_t-x_t^k,ν⟩]
= -2η_t_j_t[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^k,ν⟩]
-2η_t_j_t[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^k,ν⟩]
-2η_t_j_t[⟨∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν)),x_t-x_t^k,ν⟩]
-2η_t_j_t[⟨∇ g_S(x_t^k,ν) ∇ f_ν_i_ t(g_S(x_t^k,ν))-∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν)),x_t-x_t^k,ν⟩]
-2η_t_j_t[⟨∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν)) -∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν),x_t-x_t^k,ν⟩] .
Now we estimate the terms on the right-hand side of (<ref>) one by one. To this end, noticing that j_t is independent of i_t and x_t, the identity 𝔼_j_t[∇g_ω_j_t(x_t)∇f_ν_i_t(g_S(x_t))] = ∇g_S(x_t)∇f_ν_i_t(g_S(x_t)) holds true. Consequently,
-2η_t_j_t[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^k,ν⟩]=0,
-2η_t_j_t[⟨∇ g_S(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν)) -∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν)),x_t-x_t^k,ν⟩]=0.
Then by Part <ref> of Assumption <ref>, we know f_ν(g_S(·)) is L-smooth. By inequality (<ref>), we get
⟨∇g_S(x_t)∇f_ν_i_t(g_S(x_t)) - ∇g_S(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν)), x_t - x_t^k,ν⟩
≥ 1/L‖∇g_S(x_t)∇f_ν_i_t(g_S(x_t)) - ∇g_S(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν))‖^2.
Furthermore, noticing that x_t is independent of j_t, we get
-2η_t𝔼_j_t[⟨∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t)∇f_ν_i_t(g_S(x_t)), x_t - x_t^k,ν⟩]
≤ 2η_t𝔼_j_t[|⟨∇g_ω_j_t(x_t)(∇f_ν_i_t(y_t+1) - ∇f_ν_i_t(g_S(x_t))), x_t - x_t^k,ν⟩|]
≤ 2η_t𝔼_j_t[‖∇g_ω_j_t(x_t)(∇f_ν_i_t(y_t+1) - ∇f_ν_i_t(g_S(x_t)))‖ ‖x_t - x_t^k,ν‖]
≤ 2η_t𝔼_j_t[‖∇g_ω_j_t(x_t)‖ ‖∇f_ν_i_t(y_t+1) - ∇f_ν_i_t(g_S(x_t))‖ ‖x_t - x_t^k,ν‖]
≤ 2C_fL_gη_t𝔼_j_t[‖y_t+1 - g_S(x_t)‖] ‖x_t - x_t^k,ν‖,
where the last inequality holds by the L_g-Lipschitz continuity of g_ω in Assumption <ref> <ref> and the C_f-Lipschitz continuity of the gradients of f_ν in Assumption <ref> <ref>.
Analogous to (<ref>), we get
-2η_t𝔼_j_t[⟨∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν)) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν), x_t - x_t^k,ν⟩]
≤ 2C_fL_gη_t𝔼_j_t[‖y_t+1^k,ν - g_S(x_t^k,ν)‖] ‖x_t - x_t^k,ν‖.
Putting (<ref>), (<ref>), (<ref>) and (<ref>) into (<ref>), we get that
-2η_t𝔼_j_t[⟨∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν), x_t - x_t^k,ν⟩]
≤ 2C_fL_gη_t𝔼_j_t[‖y_t+1 - g_S(x_t)‖] ‖x_t - x_t^k,ν‖ + 2C_fL_gη_t𝔼_j_t[‖y_t+1^k,ν - g_S(x_t^k,ν)‖] ‖x_t - x_t^k,ν‖
- 2η_t/L ‖∇g_S(x_t)∇f_ν_i_t(g_S(x_t)) - ∇g_S(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν))‖^2.
We estimate the third term on the right-hand side of (<ref>) as follows:
‖∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν)‖
≤ ‖∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t)∇f_ν_i_t(g_S(x_t))‖
+ ‖∇g_ω_j_t(x_t)∇f_ν_i_t(g_S(x_t)) - ∇g_S(x_t)∇f_ν_i_t(g_S(x_t))‖
+ ‖∇g_S(x_t)∇f_ν_i_t(g_S(x_t)) - ∇g_S(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν))‖
+ ‖∇g_S(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν)) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν))‖
+ ‖∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν)) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν)‖.
Taking the square on both sides of the above inequality, we have that
η_t^2‖∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν)‖^2
≤ 4η_t^2C_f^2‖∇g_ω_j_t(x_t)(y_t+1 - g_S(x_t))‖^2 + 4η_t^2C_f^2‖∇g_ω_j_t(x_t^k,ν)(g_S(x_t^k,ν) - y_t+1^k,ν)‖^2
+ 8η_t^2‖(∇g_ω_j_t(x_t) - ∇g_S(x_t))∇f_ν_i_t(g_S(x_t))‖^2
+ 8η_t^2‖(∇g_ω_j_t(x_t^k,ν) - ∇g_S(x_t^k,ν))∇f_ν_i_t(g_S(x_t^k,ν))‖^2
+ 4η_t^2‖∇g_S(x_t)∇f_ν_i_t(g_S(x_t)) - ∇g_S(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν))‖^2,
where we have used the fact that (∑_i=1^5 a_i)^2 ≤ 4a_1^2 + 4a_2^2 + 4a_3^2 + 8a_4^2 + 8a_5^2 and Part <ref> of Assumption <ref>, i.e., the C_f-Lipschitz continuity of ∇f_ν. Taking the expectation w.r.t. j_t on both sides of (<ref>), there holds
𝔼_j_t[η_t^2‖∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t(y_t+1^k,ν)‖^2]
≤ 4η_t^2C_f^2𝔼_j_t[‖∇g_ω_j_t(x_t)‖^2‖y_t+1 - g_S(x_t)‖^2] + 4η_t^2C_f^2𝔼_j_t[‖∇g_ω_j_t(x_t^k,ν)‖^2‖g_S(x_t^k,ν) - y_t+1^k,ν‖^2]
+ 8η_t^2𝔼_j_t[‖∇g_ω_j_t(x_t) - ∇g_S(x_t)‖^2‖∇f_ν_i_t(g_S(x_t))‖^2]
+ 8η_t^2𝔼_j_t[‖∇g_ω_j_t(x_t^k,ν) - ∇g_S(x_t^k,ν)‖^2‖∇f_ν_i_t(g_S(x_t^k,ν))‖^2]
+ 4η_t^2‖∇g_S(x_t)∇f_ν_i_t(g_S(x_t)) - ∇g_S(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν))‖^2
≤ 4η_t^2C_f^2L_g^2𝔼_j_t[‖y_t+1 - g_S(x_t)‖^2] + 4η_t^2C_f^2L_g^2𝔼_j_t[‖y_t+1^k,ν - g_S(x_t^k,ν)‖^2] + 16η_t^2L_f^2C_g
+ 4η_t^2‖∇g_S(x_t)∇f_ν_i_t(g_S(x_t)) - ∇g_S(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν))‖^2,
where the second inequality follows from the Lipschitz continuity of f_ν and g_ω according to Assumption <ref>, as well as Part <ref>
of Assumption <ref>.
Putting (<ref>) and (<ref>) back into (<ref>) implies that
𝔼_j_t[‖x_t+1 - x_t+1^k,ν‖^2]
≤ ‖x_t - x_t^k,ν‖^2 + 2C_fL_gη_t𝔼_j_t[‖y_t+1 - g_S(x_t)‖]‖x_t - x_t^k,ν‖
+ 2C_fL_gη_t𝔼_j_t[‖y_t+1^k,ν - g_S(x_t^k,ν)‖]‖x_t - x_t^k,ν‖
+ (4η_t^2 - 2η_t/L)‖∇g_S(x_t)∇f_ν_i_t(g_S(x_t)) - ∇g_S(x_t^k,ν)∇f_ν_i_t(g_S(x_t^k,ν))‖^2
+ 4η_t^2C_f^2L_g^2𝔼_j_t[‖y_t+1 - g_S(x_t)‖^2] + 4η_t^2C_f^2L_g^2𝔼_j_t[‖y_t+1^k,ν - g_S(x_t^k,ν)‖^2] + 16η_t^2L_f^2C_g
≤ ‖x_t - x_t^k,ν‖^2 + 2C_fL_gη_t𝔼_j_t[‖y_t+1 - g_S(x_t)‖]‖x_t - x_t^k,ν‖
+ 2C_fL_gη_t𝔼_j_t[‖y_t+1^k,ν - g_S(x_t^k,ν)‖]‖x_t - x_t^k,ν‖
+ 4η_t^2C_f^2L_g^2𝔼_j_t[‖y_t+1 - g_S(x_t)‖^2] + 4η_t^2C_f^2L_g^2𝔼_j_t[‖y_t+1^k,ν - g_S(x_t^k,ν)‖^2] + 16η_t^2L_f^2C_g,
where in the second inequality we have used the fact that η_t ≤ 1/(2L).
Case 2 (i_t = k). If i_t = k, we have
‖x_t+1 - x_t+1^k,ν‖ = ‖x_t - η_t∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - x_t^k,ν + η_t∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t^'(y_t+1^k,ν)‖
≤ ‖x_t - x_t^k,ν‖ + η_t‖∇g_ω_j_t(x_t)∇f_ν_i_t(y_t+1) - ∇g_ω_j_t(x_t^k,ν)∇f_ν_i_t^'(y_t+1^k,ν)‖
≤ ‖x_t - x_t^k,ν‖ + η_t‖∇g_ω_j_t(x_t)‖‖∇f_ν_i_t(y_t+1)‖ + η_t‖∇g_ω_j_t(x_t^k,ν)‖‖∇f_ν_i_t^'(y_t+1^k,ν)‖
≤ ‖x_t - x_t^k,ν‖ + 2L_gL_fη_t,
where in the last inequality we have used Assumption <ref>, i.e., the Lipschitz continuity of f_ν and g_ω. Taking the square on both sides of the above inequality and taking the expectation w.r.t. j_t yield that
𝔼_j_t[‖x_t+1 - x_t+1^k,ν‖^2] ≤ ‖x_t - x_t^k,ν‖^2 + 4L_gL_fη_t‖x_t - x_t^k,ν‖ + 4L_g^2L_f^2η_t^2.
Combining Case 1 and Case 2 together, we have that
𝔼_j_t[‖x_t+1 - x_t+1^k,ν‖^2] ≤ ‖x_t - x_t^k,ν‖^2 + 2C_fL_gη_t𝔼_j_t[‖y_t+1 - g_S(x_t)‖]‖x_t - x_t^k,ν‖
+ 2C_fL_gη_t𝔼_j_t[‖y_t+1^k,ν - g_S(x_t^k,ν)‖]‖x_t - x_t^k,ν‖
+ 4η_t^2C_f^2L_g^2𝔼_j_t[‖y_t+1 - g_S(x_t)‖^2] + 4η_t^2C_f^2L_g^2𝔼_j_t[‖y_t+1^k,ν - g_S(x_t^k,ν)‖^2] + 16η_t^2L_f^2C_g
+ 4L_gL_fη_t‖x_t - x_t^k,ν‖𝕀_[i_t=k] + 4L_g^2L_f^2η_t^2𝕀_[i_t=k].
Taking the expectation w.r.t. A on both sides of (<ref>), we get that
𝔼_A[‖x_t+1 - x_t+1^k,ν‖^2]
≤ 𝔼_A[‖x_t - x_t^k,ν‖^2] + 2C_fL_gη_t𝔼_A[𝔼_j_t[‖y_t+1 - g_S(x_t)‖]‖x_t - x_t^k,ν‖]
+ 2C_fL_gη_t𝔼_A[𝔼_j_t[‖y_t+1^k,ν - g_S(x_t^k,ν)‖]‖x_t - x_t^k,ν‖]
+ 4η_t^2C_f^2L_g^2𝔼_A[‖y_t+1 - g_S(x_t)‖^2] + 4η_t^2C_f^2L_g^2𝔼_A[‖y_t+1^k,ν - g_S(x_t^k,ν)‖^2] + 16η_t^2L_f^2C_g
+ 4L_fL_gη_t𝔼_A[‖x_t - x_t^k,ν‖𝕀_[i_t=k]] + 4L_g^2L_f^2η_t^2𝔼_A[𝕀_[i_t=k]]
≤ 𝔼_A[‖x_t - x_t^k,ν‖^2] + 2C_fL_gη_t(𝔼_A[‖y_t+1 - g_S(x_t)‖^2])^1/2(𝔼_A[‖x_t - x_t^k,ν‖^2])^1/2
+ 2C_fL_gη_t(𝔼_A[‖y_t+1^k,ν - g_S(x_t^k,ν)‖^2])^1/2(𝔼_A[‖x_t - x_t^k,ν‖^2])^1/2
+ 4η_t^2C_f^2L_g^2𝔼_A[‖y_t+1 - g_S(x_t)‖^2] + 4η_t^2C_f^2L_g^2𝔼_A[‖y_t+1^k,ν - g_S(x_t^k,ν)‖^2] + 16η_t^2L_f^2C_g
+ 4L_fL_gη_t𝔼_A[‖x_t - x_t^k,ν‖𝕀_[i_t=k]] + 4L_g^2L_f^2η_t^2𝔼_A[𝕀_[i_t=k]],
where the second inequality holds by the Cauchy-Schwarz inequality. Observe that
𝔼_A[‖x_t - x_t^k,ν‖𝕀_[i_t=k]] = 𝔼_A[‖x_t - x_t^k,ν‖𝔼_i_t[𝕀_[i_t=k]]] = 1/n𝔼_A[‖x_t - x_t^k,ν‖] ≤ 1/n(𝔼_A[‖x_t - x_t^k,ν‖^2])^1/2.
Note that ‖x_0 - x_0^k,ν‖^2 = 0. Combining the above observation with (<ref>) implies that
𝔼_A[‖x_t+1 - x_t+1^k,ν‖^2]
≤ 2C_fL_g∑_j=1^tη_j(𝔼_A[‖y_j+1 - g_S(x_j)‖^2])^1/2(𝔼_A[‖x_j - x_j^k,ν‖^2])^1/2
+ 2C_fL_g∑_j=1^tη_j(𝔼_A[‖y_j+1^k,ν - g_S(x_j^k,ν)‖^2])^1/2(𝔼_A[‖x_j - x_j^k,ν‖^2])^1/2
+ 4C_f^2L_g^2∑_j=0^tη_j^2𝔼_A[‖y_j+1 - g_S(x_j)‖^2] + 4C_f^2L_g^2∑_j=0^tη_j^2𝔼_A[‖y_j+1^k,ν - g_S(x_j^k,ν)‖^2]
+ 16L_f^2C_g∑_j=0^tη_j^2 + 4L_gL_f/n∑_j=1^tη_j(𝔼_A[‖x_j - x_j^k,ν‖^2])^1/2
+ 4L_f^2L_g^2/n∑_j=0^tη_j^2.
For notational convenience, we denote by u_t=(𝔼_A[x_t-x_t^k,ν^2])^1/2. Using this notation, from (<ref>) we get that
u_t^2 ≤
2C_fL_g∑_j=1^t-1η_j(_A[y_j+1-g_S(x_j)^2])^1/2u_j+2C_fL_g∑_j=1^t-1η_j(_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2u_j
+4C_f^2L_g^2 ∑_j=0^t-1η_j^2 _A[y_j+1-g_S(x_j)^2]+4C_f^2L_g ^2∑_j=0^t-1η_j^2 _A[y_j+1^k,ν-g_S(x_j^k,ν)^2]
+16L_f^2C_g∑_j=0^t-1η_j^2+4L_gL_f/n∑_j=1^t-1η_j u_j
+4L_f^2 L_g^2/n∑_j=0^t-1η_j^2.
We will apply Lemma <ref> to get the desired estimation from the above recursive inequality. To this end, we define
S_t =4C_f^2L_g^2 ∑_j=0^t-1η_j^2 _A[y_j+1-g_S(x_j)^2]+4C_f^2L_g^2 ∑_j=0^t-1η_j^2 _A[y_j+1^k,ν-g_S(x_j^k,ν)^2]
+4L_f^2 L_g^2/n∑_j=0^t-1η_j^2+16L_f ^2C_g∑_j=0^t-1η_j^2,
α_j =2C_fL_gη_j(_A[y_j+1-g_S(x_j)^2])^1/2+2C_fL_gη_j(_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2+4L_gL_f/nη_j.
Now applying Lemma <ref> with u_t, S_t and α_j defined above, we get
u_t ≤√(S_t)+∑_j=1^t-1α_j
≤(4C_f^2L_g ^2∑_j=0^t-1η_j^2 _A[y_j+1-g_S(x_j)^2])^1/2+(4C_f^2L_g^2 ∑_j=0^t-1η_j^2 _A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2
+(4L_f^2 L_g^2/n∑_j=0^t-1η_j^2)^1/2+(16L_f ^2C_g∑_j=0^t-1η_j^2)^1/2+2C_fL_g∑_j=1^t-1η_j(_A[y_j+1-g_S(x_j)^2])^1/2
+2C_fL_g∑_j=1^t-1η_j(_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2+4L_fL_g/n∑_j=1^t-1η_j
≤ 4C_fL_g∑_j=0^t-1η_j(_A[y_j+1-g_S(x_j)^2])^1/2
+4C_fL_g∑_j=0^t-1η_j(_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2
+4L_f√(C_g)(∑_j=0^t-1η_j^2)^1/2+(4L_f^2L_g^2/n∑_j=0^t-1η_j^2)^1/2+4L_fL_g/n∑_j=0^t-1η_j,
where the second inequality uses the fact that (∑_i=1^4a_i)^1/2≤∑_i=1^4(a_i)^1/2 and the last inequality holds by the fact that
(4C_f^2L_g ^2∑_j=0^t-1η_j^2 _A[y_j+1-g_S(x_j)^2])^1/2 ≤2C_fL_g∑_j=0^t-1η_j(_A[y_j+1-g_S(x_j)^2])^1/2,
(4C_f^2L_g ^2∑_j=0^t-1η_j^2 _A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2 ≤2C_fL_g∑_j=0^t-1η_j(_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2.
Furthermore, if η_t=η, it is easy to see that η∑_j=0^T-1 (𝔼_A[y_j+1-g_S(x_j)^2])^1/2≤sup_Sη∑_j=0^T-1 (𝔼_A[y_j+1-g_S(x_j)^2])^1/2 and η∑_j=0^T-1 (𝔼_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2≤sup_Sη∑_j=0^T-1 (𝔼_A[y_j+1-g_S(x_j)^2])^1/2. Consequently, with T iterations, we obtain that
u_T≤ 8C_fL_gsup_Sη∑_j=0^T-1 (𝔼_A[y_j+1-g_S(x_j)^2])^1/2+4L_f√(C_g)(∑_j=0^T-1η^2)^1/2+(4L_f^2L_g^2/n∑_j=0^T-1η^2)^1/2
+4L_gL_f/n∑_j=0^T-1η
≤ 8C_fL_gsup_Sη∑_j=0^T-1 (𝔼_A[y_j+1-g_S(x_j)^2])^1/2+4L_f√(C_g)η√(T)+6L_gL_f/nη T,
where the last inequality holds by the fact that
(4L_f^2L_g^2/n∑_j=0^T-1η^2)^1/2= 2L_gL_f/√(n)η√(T)≤2L_fL_g/nη T, since typically T≥ n.
Since
𝔼_A[x_T-x_T^k,ν]≤ u_T=(𝔼_A[x_T-x_T^k,ν^2])^1/2, we further get
𝔼_A[x_T-x_T^k,ν]
≤ 8C_fL_gsup_Sη∑_j=0^T-1 (𝔼_A[y_j+1-g_S(x_j)^2])^1/2
+4L_f√(C_g)η√(T)+6L_fL_g/nη T.
We thus obtain the desired estimate for 𝔼_A[x_T-x_T^k,ν]:
𝔼_A[x_T-x_T^k,ν]= 𝒪( L_fL_gTη/n
+L_f√(C_g)η√(T)+ C_fL_gsup_S∑_j=0^T-1η(𝔼_A[ y_j+1-g_S(x_j)^2 ] )^1/2).
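As an aside, the self-bounding inequality invoked above (Lemma <ref>, stated here in the form in which it is applied: if u_t^2≤ S_t+∑_j=1^t-1α_j u_j with α_j≥ 0 and S_t≥ 0 nondecreasing, then u_t≤√(S_t)+∑_j=1^t-1α_j) is easy to sanity-check numerically. The following throwaway Python script (not part of any proof; the instance sizes and scales are arbitrary) drives u_t to the largest value the recursion allows and verifies the claimed bound:

import random

def check_once(T=50):
    alpha = [0.1 * random.random() for _ in range(T)]  # alpha_j >= 0
    S, u = 0.0, [0.0]                                  # u_0 = 0
    for t in range(1, T):
        S += random.random()                           # S_t nondecreasing
        # Extreme case: u_t^2 = S_t + sum_{j<t} alpha_j u_j.
        u.append((S + sum(a * v for a, v in zip(alpha, u))) ** 0.5)
        assert u[-1] <= S ** 0.5 + sum(alpha[:t]) + 1e-9
    return True

print(all(check_once() for _ in range(100)))  # prints True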
Next we move on to the estimation of 𝔼_A[x_t+1-x_t+1^l,ω].
Estimation of 𝔼_A[x_t+1-x_t+1^l,ω]. We will estimate it by considering two cases, i.e., j_t ≠ l and j_t =l.
Case 1 (j_t ≠ l). If j_t ≠ l, we have
x_t+1-x_t+1^l,ω^2 ≤ x_t-η_t ∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-x_t^l,ω+η_t ∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω)^2
= x_t-x_t^l,ω^2-2 η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω),x_t-x_t^l,ω⟩
+η_t^2∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω)^2 .
We will estimate the second term and the third one on the right hand side of (<ref>) as follows. Let us first estimate the second term. To this end, using similar arguments in (<ref>), it can be decomposed as
- 2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω),x_t-x_t^l,ω⟩
= -2η_t ⟨∇ g_ω_j_t(x_t) (∇ f_ν_i_ t(y_t+1)-∇ f_ν_i_ t(g_S(x_t))),x_t-x_t^l,ω⟩
-2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩
-2η_t⟨∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)), x_t-x_t^l,ω⟩
-2η_t⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩
-2η_t⟨∇ g_ω_j_t(x_t^l,ω) (∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ f_ν_i_t(y_t+1^l,ω)),x_t-x_t^l,ω⟩.
Using part <ref> of Assumption <ref> and inequality (<ref>), we have
⟨∇ g_S(x_t) ∇ f_ν_i_t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩
≥1/L∇ g_S(x_t) ∇ f_ν_i_t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω))^2.
Furthermore, using part <ref> of Assumption <ref> and part <ref> of Assumption <ref>, we get
-2η_t ⟨∇ g_ω_j_t(x_t) (∇ f_ν_i_ t(y_t+1)-∇ f_ν_i_ t(g_S(x_t))),x_t-x_t^l,ω⟩
≤2η_t∇ g_ω_j_t(x_t) (∇ f_ν_i_ t(y_t+1)-∇ f_ν_i_ t(g_S(x_t)))x_t-x_t^l,ω
≤ 2η_t∇ g_ω_j_t(x_t)∇ f_ν_i_ t(y_t+1)-∇ f_ν_i_ t(g_S(x_t))x_t-x_t^l,ω
≤ 2η_t C_fL_gy_t+1-g_S(x_t)x_t-x_t^l,ω.
Likewise,
-2η_t⟨∇ g_ω_j_t(x_t^l,ω) (∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ f_ν_i_t(y_t+1^l,ω)),x_t-x_t^l,ω⟩
≤ 2η_tC_fL_gy_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω.
Putting (<ref>), (<ref>) and (<ref>) into (<ref>) yields that
- 2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω),x_t-x_t^l,ω⟩
≤ 2η_t C_fL_gy_t+1-g_S(x_t)x_t-x_t^l,ω+ 2η_t C_fL_gy_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω
-2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩
-2η_t1/L∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω))^2
-2η_t⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩.
Next we will estimate the third term on the right hand side of (<ref>). In analogy to the argument in (<ref>), one can show that
η_t^2∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω)^2
≤ 4η_t^2 C_f^2∇ g_ω_j_t(x_t) (y_t+1-g_S(x_t))^2+4η_t^2 C_f^2∇ g_ω_j_t(x_t) (g_S(x_t^l,ω)-y_t+1^l,ω)^2
+8η_t^2(∇ g_ω_j_t(x_t)-∇ g_S(x_t)) ∇ f_ν_i_ t(g_S(x_t))^2
+8η_t^2(∇ g_ω_j_t(x_t^l,ω)-∇ g_S(x_t^l,ω)) ∇ f_ν_i_ t(g_S(x_t^l,ω))^2
+4η_t^2∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω))^2
≤ 4η_t^2 C_f^2L_g^2 y_t+1-g_S(x_t)^2+4η_t^2 C_f^2L_g^2g_S(x_t^l,ω)-y_t+1^l,ω^2
+8L_f^2η_t^2∇ g_ω_j_t(x_t)-∇ g_S(x_t) ^2+8L_f^2η_t^2∇ g_ω_j_t(x_t^l,ω)-∇ g_S(x_t^l,ω) ^2
+4η_t^2∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω))^2,
where, in the second inequality, we have used Assumption <ref>.
Putting the results (<ref>) and (<ref>) into (<ref>) implies that
x_t+1-x_t+1^l,ω^2
≤x_t-x_t^l,ω^2 +2η_t C_fL_gy_t+1-g_S(x_t)x_t-x_t^l,ω+ 2η_t C_fL_gy_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω
-2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩
-2η_t⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩
+(4η_t^2-2η_t1/L)∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω))^2
+4η_t^2 C_f^2L_g^2 y_t+1-g_S(x_t)^2+4η_t^2 C_f^2L_g^2g_S(x_t^l,ω)-y_t+1^l,ω^2
+8L_f^2η_t^2∇ g_ω_j_t(x_t)-∇ g_S(x_t) ^2+8L_f^2η_t^2∇ g_ω_j_t(x_t^l,ω)-∇ g_S(x_t^l,ω) ^2
≤x_t-x_t^l,ω^2 +2η_t C_fL_gy_t+1-g_S(x_t)x_t-x_t^l,ω+ 2η_t C_fL_gy_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω
-2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩
-2η_t⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩
+4η_t^2 C_f^2L_g^2 y_t+1-g_S(x_t)^2+4η_t^2 C_f^2L_g^2g_S(x_t^l,ω)-y_t+1^l,ω^2
+8L_f^2η_t^2∇ g_ω_j_t(x_t)-∇ g_S(x_t) ^2+8L_f^2η_t^2∇ g_ω_j_t(x_t^l,ω)-∇ g_S(x_t^l,ω) ^2,
where we have used the fact that η_t≤1/2L in the second inequality.
Case 2 (j_t=l). If j_t=l, from Assumption <ref> we have that
x_t+1-x_t+1^l,ω = x_t-η_t ∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-x_t^l,ω+η_t ∇ g_ω_j_t^'(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω)
≤x_t-x_t^l,ω+η_t∇ g_ω_j_t(x_t)∇ f_ν_i_ t(y_t+1)+η_t∇ g_ω_j_t^'(x_t^l,ω)∇ f_ν_i_t(y_t+1^l,ω)
≤x_t-x_t^l,ω+2η_t L_g L_f.
Therefore,
x_t+1-x_t+1^l,ω^2≤x_t-x_t^l,ω^2+4L_gL_fη_tx_t-x_t^l,ω+4η_t^2L_g^2L_f^2.
Combining Case 1 and Case 2 together, we obtain
x_t+1-x_t+1^l,ω^2
≤x_t-x_t^l,ω^2+2C_fL_gη_ty_t+1-g_S(x_t)x_t-x_t^l,ω
+2C_fL_gη_ty_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω
-2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩𝕀_[j_t≠ l]
-2η_t⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩𝕀_[j_t≠ l]
+8L_f^2η_t^2∇ g_ω_j_t(x_t)-∇ g_S(x_t) ^2+8L_f^2η_t^2∇ g_ω_j_t(x_t^l,ω)-∇ g_S(x_t^l,ω) ^2
+4η_t^2 C_f^2L_g^2 y_t+1-g_S(x_t)^2+4η_t^2 C_f^2L_g^2g_S(x_t^l,ω)-y_t+1^l,ω^2
+4η_tL_gL_fx_t-x_t^l,ω𝕀_[j_t=l]+4η_t^2L_g^2 L_f^2𝕀_[j_t= l] .
Taking the expectation w.r.t. A on both sides of (<ref>) yields that
_A[x_t+1-x_t+1^l,ω^2]
≤_A[x_t-x_t^l,ω^2]+2C_fL_gη_t_A[y_t+1-g_S(x_t)x_t-x_t^l,ω]
+2C_fL_gη_t_A[y_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω]
-2η_t_A[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩𝕀_[j_t≠ l]]
-2η_t_A[⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩𝕀_[j_t≠ l]]
+8L_f^2η_t^2_A[∇ g_ω_j_t(x_t)-∇ g_S(x_t) ^2]+8L_f^2η_t^2_A[∇ g_ω_j_t(x_t^l,ω)-∇ g_S(x_t^l,ω) ^2]
+4η_t^2 C_f^2L_g^2_A[ y_t+1-g_S(x_t)^2]+4η_t^2 C_f^2L_g^2_A[g_S(x_t^l,ω)-y_t+1^l,ω^2]
+4η_tL_fL_g_A[x_t-x_t^l,ω𝕀_[j_t=l]]+4η_t^2L_g^2 L_f^2_A[𝕀_[j_t= l]].
We will estimate the terms on the right hand side of the above inequality. To this end, denote
T_1: =⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩,
T_2: =⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩.
Taking the expectation of T_1 w.r.t. A, we have
_A[T_1] =_A[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩]
=_A[⟨_j_t[∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))]-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩]
=_A[⟨∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩]
=0,
where the second identity holds true since j_t is independent of i_t and x_t.
Therefore,
-2η_t_A[T_1𝕀_[j_t≠ l] ] = -2η_t_A[T_1𝕀_[j_t≠ l] ]+ 2η_t_A[T_1𝕀_[j_t= l] ] -2η_t_A[T_1𝕀_[j_t= l] ]
=(-2η_t_A[T_1𝕀_[j_t≠ l] ]-2η_t_A[T_1𝕀_[j_t= l]] )+2η_t_A[T_1𝕀_[j_t= l] ]
=-2η_t_A[T_1 ]+2η_t_A[T_1𝕀_[j_t= l] ]
=2η_t_A[T_1𝕀_[j_t= l] ].
We further get the following estimation
-2η_t_A[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩𝕀_[j_t≠ l]]
=2η_t_A[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩𝕀_[j_t= l]]
≤ 2η_t_A[∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))x_t-x_t^l,ω𝕀_[j_t= l]]
≤2η_t_A[(∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))+∇ g_S(x_t)∇ f_ν_i_ t(g_S(x_t)))x_t-x_t^l,ω𝕀_[j_t= l]]
≤ 4η_t L_g L_f_A[x_t-x_t^l,ω𝕀_[j_t= l]],
where the last inequality holds true due to Assumption <ref>. Similar to the estimations in (<ref>), (<ref>) and (<ref>), one can show that
-2η_t_A[T_2𝕀_[j_t≠ l] ]
=-2η_t_A[⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩𝕀_[j_t≠ l]]
=2η_t_A[⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩𝕀_[j_t= l]]
≤ 4η_tL_g L_f_A[x_t-x_t^l,ω𝕀_[j_t= l]].
Substituting (<ref>)
and (<ref>) into (<ref>) and noting that C_g represents the empirical variance associated with the gradient of the inner function as given in part <ref> of Assumption <ref>, we obtain
_A [x_t+1-x_t+1^l,ω^2]
≤ _A[x_t-x_t^l,ω^2]+2C_fL_gη_t_A[y_t+1-g_S(x_t)x_t-x_t^l,ω]
+2C_fL_gη_t_A[y_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω]
+4η_t^2 C_f^2L_g^2_A[y_t+1-g_S(x_t)^2]+4η_t^2 C_f^2L_g^2_A[y_t+1^l,ω-g_S(x_t^l,ω)^2]
+16η_t^2L_f^2C_g+12η_tL_gL_f_A[x_t-x_t^l,ω𝕀_[j_t=l]]
+4η_t^2L_g^2 L_f^2_A[𝕀_[j_t= l]]
≤ _A[x_t-x_t^l,ω^2]+2C_fL_gη_t(_A[y_t+1-g_S(x_t)^2])^1/2(_A[x_t-x_t^l,ω^2])^1/2
+2C_fL_gη_t(_A[y_t+1^l,ω-g_S(x_t^l,ω)^2])^1/2(_A[x_t-x_t^l,ω^2])^1/2
+4η_t^2 C_f^2L_g^2_A[y_t+1-g_S(x_t)^2]+4η_t^2 C_f^2L_g^2_A[y_t+1^l,ω-g_S(x_t^l,ω)^2]
+16η_t^2L_f^2C_g+12η_t L_gL_f_A[x_t-x_t^l,ω𝕀_[j_t= l]]
+4η_t^2L_g^2 L_f^2_A[𝕀_[j_t= l]],
where the second inequality holds by the Cauchy-Schwarz inequality. Observe that
_A[x_t-x_t^l,ω𝕀_[j_t= l]] =_A[x_t-x_t^l,ω_j_t[𝕀_[j_t= l]]]
=1/m_A[x_t-x_t^l,ω]≤1/m(_A[x_t-x_t^l,ω^2])^1/2.
Note that x_0-x_0^l,ω^2=0. Combining the above two estimations together implies that
_A[x_t+1-x_t+1^l,ω^2]≤ 2C_fL_g∑_i=1^tη_i(_A[y_i+1-g_S(x_i)^2])^1/2(_A[x_i-x_i^l,ω^2])^1/2
+2C_fL_g∑_i=1^tη_i(_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2(_A[x_i-x_i^l,ω^2])^1/2
+4∑_i=0^tη_i^2 C_f^2L_g^2_A[y_i+1-g_S(x_i)^2]+4∑_i=0^tη_i^2 C_f^2L_g^2_A[y_i+1^l,ω-g_S(x_i^l,ω)^2]
+16L_f^2C_g∑_i=0^tη_i^2+12L_gL_f/m∑_i=1^tη_i(_A[x_i-x_i^l,ω^2])^1/2
+4L_g^2 L_f^2/m∑_i=0^tη_i^2.
Again, for notational convenience, let u_t=(_A[x_t-x_t^l,ω^2])^1/2. The above estimation can be rewritten as
u_t^2 ≤ 2C_fL_g∑_i=1^t-1η_i(_A[y_i+1-g_S(x_i)^2])^1/2u_i+2C_fL_g∑_i=1^t-1η_i(_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2u_i
+4∑_i=0^t-1η_i^2 C_f^2L_g^2_A[y_i+1-g_S(x_i)^2]+4∑_i=0^t-1η_i^2 C_f^2L_g^2_A[y_i+1^l,ω-g_S(x_i^l,ω)^2]
+16L_f^2C_g∑_i=0^t-1η_i^2+12L_f L_g/m∑_i=1^t-1η_i u_i
+4L_g^2 L_f^2/m∑_i=0^t-1η_i^2.
We will use Lemma <ref> to get the desired estimation. For this purpose, define
S_t =4∑_i=0^t-1η_i^2 C_f^2L_g^2_A[y_i+1-g_S(x_i)^2]+4∑_i=0^t-1η_i^2 C_f^2L_g^2_A[y_i+1^l,ω-g_S(x_i^l,ω)^2]
+16L_f^2C_g∑_i=0^t-1η_i^2+4L_g^2 L_f^2/m∑_i=0^t-1η_i^2,
α_i =2C_fL_gη_i(_A[y_i+1-g_S(x_i)^2])^1/2+2C_fL_gη_i(_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2+12L_gL_f/mη_i.
Now applying Lemma <ref> with u_t, S_t and α_i defined as above to (<ref>), we get
u_t ≤√(S_t)+∑_i=1^t-1α_i
≤ 2C_fL_g∑_i=1^t-1η_i(_A[y_i+1-g_S(x_i)^2])^1/2+2C_fL_g∑_i=1^t-1η_i(_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2
+(4∑_i=0^t-1η_i^2 C_f^2L_g^2_A[y_i+1-g_S(x_i)^2])^1/2+(4∑_i=0^t-1η_i^2 C_f^2L_g^2_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2
+(16L_f^2 C_g∑_i=0^t-1η_i^2)^1/2+12L_fL_g/m∑_i=1^t-1η_i
+(4L_g^2 L_f^2/m∑_i=0^t-1η_i^2)^1/2
≤
4C_fL_g∑_i=0^t-1η_i(_A[y_i+1-g_S(x_i)^2])^1/2
+4C_fL_g∑_i=0^t-1η_i(_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2
+(16L_f^2C_g∑_i=0^t-1η_i^2)^1/2+12L_gL_f/m∑_i=0^t-1η_i
+(4L_g^2 L_f^2/m∑_i=0^t-1η_i^2)^1/2,
where the second inequality uses the fact that (∑_i=1^4a_i)^1/2≤∑_i=1^4(a_i)^1/2 and the last inequality holds by the fact that (4C_f^2L_g^2 ∑_i=0^t-1η_i^2 _A[y_i+1-g_S(x_i)^2])^1/2≤2C_fL_g∑_i=0^t-1η_i(_A[y_i+1-g_S(x_i)^2])^1/2 and (4∑_i=0^t-1η_i^2 C_f^2L_g^2_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2≤ 2C_fL_g∑_i=0^t-1η_i(_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2.
If η_i =η, note that η∑_i=0^T-1 (𝔼_A[y_i+1-g_S(x_i)^2])^1/2≤sup_Sη∑_i=0^T-1 (𝔼_A[y_i+1-g_S(x_i)^2])^1/2 and η∑_i=0^T-1 (𝔼_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2≤sup_Sη∑_i=0^T-1 (𝔼_A[y_i+1-g_S(x_i)^2])^1/2. Consequently, with T iterations, we further obtain that
u_T≤ 8C_fL_gsup_Sη∑_i=0^T-1 (𝔼_A[y_i+1-g_S(x_i)^2])^1/2+(16L_f^2C_g∑_i=0^T-1η^2)^1/2+12L_fL_g/m∑_i=0^T-1η
+(4L_g^2 L_f^2/m∑_i=0^T-1η^2)^1/2
≤ 8C_fL_gsup_Sη∑_i=0^T-1 (𝔼_A[y_i+1-g_S(x_i)^2])^1/2+4L_f√(C_g)η√(T)+14L_gL_f/mη T,
where the last inequality holds by the fact that
(4L_f^2L_g^2/m∑_i=0^T-1η^2)^1/2= 2L_fL_g/√(m)η√(T)≤2L_fL_g/mη T, since typically T≥ m. Noting that
𝔼_A[x_T-x_T^l,ω]≤ u_T=(𝔼_A[x_T-x_T^l,ω^2])^1/2, we further get
𝔼_A[x_T-x_T^l,ω]
≤ 8C_fL_gsup_Sη∑_i=0^T-1 (𝔼_A[y_i+1-g_S(x_i)^2])^1/2
+4L_f√(C_g)η√(T)+14L_fL_g/mη T.
Equivalently,
𝔼_A[x_T-x_T^l,ω] = 𝒪(L_fL_g/mη T
+L_f√(C_g)η√(T) + C_fL_gsup_S∑_i=0^T-1η(𝔼_A[ y_i+1-g_S(x_i)^2 ] )^1/2).
Now we combine the above results for estimating
𝔼_A[x_T-x_T^k,ν] and 𝔼_A[x_T-x_T^l,ω] and conclude that
ϵ_ν+ϵ_ω= 𝒪( L_fL_g/nη T
+L_fL_g/mη T+L_f√(C_g)η√(T) +C_fL_g sup_S∑_j=0^T-1η(𝔼_A[ y_j+1-g_S(x_j)^2 ] )^1/2).
The proof is completed.
Next we move on to the proof of Corollary <ref>.
Considering the constant step size η_t=η, and with the result of SCGD update in Lemma <ref>, we have
ϵ_ν+ϵ_ω = 𝒪(η T n^-1+η T m^-1+η T^1/2+η∑_j=1 ^T-1(j^-c/2β^-c/2+η/β+β^1/2))
=𝒪(η T n^-1+η T m^-1+η T^1/2+η T^-c/2+1β^-c/2+η^2 β^-1T+ηβ^1/2T).
With the result for the SCSC update in Lemma <ref>, we have
ϵ_ν+ϵ_ω = 𝒪(η T n^-1+η T m^-1+η T^1/2+η∑_j=1 ^T-1(j^-c/2β^-c/2+ηβ^-1/2+β^1/2))
=𝒪(η T n^-1+η T m^-1+η T^1/2+η T^-c/2+1β^-c/2+η^2 β^-1/2T+ηβ^1/2T).
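Before turning to optimization, it may help to see the iteration that all of these recursions track written out as code. The following Python sketch is a minimal, hypothetical illustration rather than a reference implementation: the oracles g_val, jac_g and grad_f are stand-ins for samples of g_ω, ∇ g_ω and ∇ f_ν; the inner step uses the standard SCGD moving average y_t+1=(1-β_t)y_t+β_t g_ω_j_t(x_t), which is an assumption here since the y-update is not restated in this appendix; and the same inner sample is reused for the value and the Jacobian purely for brevity. The SCSC variant changes only the y-update; the x-update x_t+1=x_t-η_t∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1) is the one appearing verbatim in the proofs above.

import numpy as np

def scgd_step(x, y, eta, beta, omega, nu, g_val, jac_g, grad_f):
    # Inner tracking variable: running average estimating g_S(x_t).
    y_next = (1.0 - beta) * y + beta * g_val(x, omega)
    # Outer variable: jac_g(x, omega) is the d-by-p Jacobian of g_omega
    # at x, so its transpose times grad_f (a d-vector) is the p-vector
    # grad g_omega(x_t) grad f_nu(y_{t+1}) used in the update.
    x_next = x - eta * jac_g(x, omega).T @ grad_f(y_next, nu)
    return x_next, y_next

# Example with a linear inner map g(x) = Wx and outer loss f(y) = ||y - 1||^2 / 2:
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))
x, y = np.zeros(5), np.zeros(3)
x, y = scgd_step(x, y, 0.1, 0.5, W, None,
                 g_val=lambda x, w: w @ x,
                 jac_g=lambda x, w: w,
                 grad_f=lambda y, nu: y - 1.0)

With the constant choices η_t=η=T^-a and β_t=β=T^-b used throughout, iterating this step T times on two neighboring datasets is exactly the coupling whose divergence ϵ_ν+ϵ_ω controls.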
§.§ Optimization
Suppose Assumptions <ref> and <ref><ref> hold for the empirical risk F_S. By running Algorithm <ref>, we have for any γ_t> 0
_A[x_t+1- x_*^S^2| ℱ_t]
≤ (1+ C_fL_g^2η_t/γ_t)x_t- x_*^S^2+ L_f^2L_g^2η_t^2 - 2η_t(F_S(x_t)- F_S(x_*^S))
+ γ_t C_fη_t_A[g_S(x_t)- y_t+1^2| ℱ_t].
where ℱ_t is the σ-field generated by {ω_j_0, …, ω_j_t-1, ν_i_0, …, ν_i_t- 1}.
The proof of Lemma <ref> is deferred to the end of this subsection. Now we are ready to prove the convergence of Algorithm <ref> for the convex case.
We first present the proof for the SCGD update. Taking total expectation with respect to the internal randomness of A on both sides of (<ref>), we get
𝔼_A[x_t+1- x_*^S^2]≤ 𝔼_A[x_t- x_*^S^2]+ L_f^2L_g^2η_t^2- 2η_t𝔼_A[F_S(x_t)- F_S(x_*^S)]
+ γ_t C_fη_t𝔼_A[g_S(x_t)- y_t+1^2]+ C_fL_g^2η_t/γ_t𝔼_A[x_t- x_*^S^2].
Setting η_t= η, β_t= β and γ_t= β_t/η_t= β/η, plugging Lemma <ref> into (<ref>), we have
𝔼_A[x_t+1- x_*^S^2]≤ 𝔼_A[x_t- x_*^S^2]+ L_f^2L_g^2η^2- 2η𝔼_A[F_S(x_t)- F_S(x_*^S)]
+ C_fβ( (c/e)^cD_y (tβ)^-c +L_g^3 L_f^2η^2/β^2+2V_gβ)+ C_fL_g^2𝔼_A[x_t- x_*^S^2]η^2/β.
Setting η= T^-a, β= T^-b, telescoping the above inequality for t= 1, ⋯, T, and noting that 𝔼_A[x_t- x_*^S^2] is bounded by D_x, we get
2 η∑_t= 1^T𝔼_A[F_S(x_t)- F_S(x_*^S)]≤ D_x+ L_f^2L_g^2 η^2 T+ (c/e)^cC_fD_y β^1- c∑_t= 1^T t^-c + 2C_fV_g β^2 T
+ C_fL_f^2L_g^3 η^2β^-1 T+ C_fL_g^2D_x η^2β^-1 T.
From the choice of A(S) and the convexity of F_S, noting that ∑_t= 1^T t^-z= 𝒪(T^1- z) for z∈ (0, 1)∪ (1, ∞) and ∑_t= 1^T t^-1= 𝒪(log T), as long as c≠ 1 we get
𝔼_A[F_S(A(S))- F_S(x_*^S)]
= 𝒪(D_x(η T)^-1+ L_f^2L_g^2η+ C_fD_y(β T)^1-c(η T)^-1+ C_fV_gβ^2 η^-1+ C_fL_f^2L_g^3D_xηβ^-1).
Then we get the desired result for the SCGD update. Next we present the proof for the SCSC update. Setting η_t= η, β_t= β and γ_t= 1/√(β_t)= 1/√(β), plugging Lemma <ref> into (<ref>), we have
𝔼_A[x_t+1- x_*^S^2]≤ 𝔼_A[x_t- x_*^S^2]+ L_f^2L_g^2η^2- 2η𝔼_A[F_S(x_t)- F_S(x_*^S)]
+ C_fη/√(β)( (c/e)^cD_y (tβ)^-c+L_g^3 L_f^2η^2/β+2V_gβ)+ C_fL_g^2𝔼_A[x_t- x_*^S^2]η√(β).
Setting η= T^-a, β= T^-b, telescoping the above inequality for t= 1, ⋯, T, and noting that 𝔼_A[x_t- x_*^S^2] is bounded by D_x, we get
2 η∑_t= 1^T𝔼_A[F_S(x_t)- F_S(x_*^S)]≤ D_x+ L_f^2L_g^2 η^2 T+ (c/e)^cC_fD_y ηβ^-1/2- c∑_t= 1^T t^-c + 2C_fV_g ηβ^1/2 T
+ C_fL_f^2L_g^3 η^3β^-3/2 T+ C_fL_g^2D_x ηβ^1/2 T.
From the choice of A(S) and the convexity of F_S, noting that ∑_t= 1^T t^-z= 𝒪(T^1- z) for z∈ (0, 1)∪ (1, ∞) and ∑_t= 1^T t^-1= 𝒪(log T), as long as c> 2 we get
𝔼_A[F_S(A(S))- F_S(x_*^S)]
= 𝒪(D_x(η T)^-1+ L_f^2L_g^2η+ C_fD_y(β T)^-cβ^-1/2+ C_fV_gβ^1/2+ C_fL_f^2L_g^3η^2 β^-3/2+ C_fL_g^2D_x β^1/2).
We have completed the proof.
From Algorithm <ref> we have
x_t+1- x_*^S^2
≤ x_t- η_t∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1)- x_*^S^2
= x_t- x_*^S^2+ η_t^2∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1)^2- 2η_t⟨ x_t- x_*^S, ∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1)⟩
= x_t- x_*^S^2+ η_t^2∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1)^2- 2η_t⟨ x_t- x_*^S, ∇ g_ω_j_t(x_t)∇ f_ν_i_t(g_S(x_t))⟩ + u_t,
where
u_t:= 2η_t⟨ x_t- x_*^S, ∇ g_ω_j_t(x_t)∇ f_ν_i_t(g_S(x_t))- ∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1)⟩.
Let ℱ_t be the σ-field generated by {ω_j_0, …, ω_j_t-1, ν_i_0, …, ν_i_t- 1}. Taking expectation with respect to the internal randomness of the algorithm and using Assumption <ref>, we have
_A[x_t+1- x_*^S^2| ℱ_t]
≤ x_t- x_*^S^2+ L_f^2L_g^2η_t^2- 2η_t_A[⟨ x_t- x_*^S, ∇ g_ω_j_t(x_t)∇ f_ν_i_t(g_S(x_t))⟩|ℱ_t]+ _A[u_t| ℱ_t]
= x_t- x_*^S^2+ L_f^2L_g^2η_t^2- 2η_t⟨ x_t- x_*^S, ∇ F_S(x_t)⟩+ _A[u_t| ℱ_t]
≤ x_t- x_*^S^2+ L_f^2L_g^2η_t^2- 2η_t(F_S(x_t)- F_S(x_*^S))+ _A[u_t| ℱ_t],
where the last inequality comes from the convexity of F_S. From the Cauchy-Schwarz inequality, Young's inequality, Assumption <ref><ref> and <ref><ref> we have, for all γ_t> 0, that
2η_t⟨ x_t- x_*^S, ∇ g_ω_j_t(x_t)∇ f_ν_i_t(g_S(x_t))- ∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1)⟩
≤ 2η_tx_t- x_*^S∇ g_ω_j_t(x_t)∇ f_ν_i_t(g_S(x_t))- ∇ f_ν_i_t(y_t+1)
≤ 2C_fη_tx_t- x_*^S∇ g_ω_j_t(x_t)g_S(x_t)- y_t+1
≤ 2C_fη_t(x_t- x_*^S^2∇ g_ω_j_t(x_t)^2/2γ_t+ γ_t/2g_S(x_t)- y_t+1^2)
≤ C_fL_g^2η_t/γ_tx_t- x_*^S^2+ γ_t C_fη_tg_S(x_t)- y_t+1^2.
Substituting (<ref>) into (<ref>), we get
_A[x_t+1- x_*^S^2| ℱ_t]
≤ (1+ C_fL_g^2η_t/γ_t)x_t- x_*^S^2+ L_f^2L_g^2η_t^2- 2η_t(F_S(x_t)- F_S(x_*^S))
+ γ_t C_fη_t_A[g_S(x_t)- y_t+1^2| ℱ_t].
The proof is completed.
§.§ Excess Generalization
We first present the proof for the SCGD update. Setting η_t= η, β_t= β for η, β> 0, from (<ref>) and (<ref>) we get for all t
_A[x_t- x_t^k, ν]
≤ 8C_fL_gsup_Sη∑_j=0^t-1 (𝔼_A[y_j+1-g_S(x_j)^2])^1/2
+4L_f√(C_g)η√(t)+6L_fL_g/nη t.
and
_A[x_t- x_t^l, ω]
≤ 8C_fL_gsup_Sη∑_i=0^t-1 (𝔼_A[y_i+1-g_S(x_i)^2])^1/2+4L_f√(C_g)η√(t)+14L_fL_g/mη t.
Plugging Lemma <ref> with SCGD update into (<ref>) and (<ref>), then we have
_A[x_t- x_t^k, ν]≤ 8C_fL_g η∑_j= 1^t- 1√((c/e)^cD_y (jβ)^-c+ L_fL_g^2η^2/β^2+2V_gβ)
+ 4L_f√(C_g)η√(t)+ 6L_fL_g/nη t+ 8C_fL_gD_yη.
and
_A[x_t- x_t^l, ω]≤ 8C_fL_g η∑_j= 1^t- 1√((c/e)^cD_y (jβ)^-c+ L_fL_g^2η^2/β^2+2V_gβ)
+ 4L_f√(C_g)η√(t)+ 14L_fL_g/mη t+ 8C_fL_gD_yη.
From the fact that √(a+ b)≤√(a)+ √(b) we get
_A[x_t- x_t^k, ν]≤ 8C_fL_g√((c/e)^cD_y)ηβ^-c/2∑_j= 1^t- 1 j^-c/2+ 8C_f√(L_f) L_g^2η^2/β t+ 8C_fL_g√(2V_g)η√(β) t
+ 4L_f√(C_g)η√(t)+ 6L_fL_g/nη t+ 8C_fL_gD_yη.
and
_A[x_t- x_t^l, ω]≤ 8C_fL_g√((c/e)^cD_y)ηβ^-c/2∑_j= 1^t- 1 j^-c/2+ 8C_f√(L_f) L_g^2η^2/β t+ 8C_fL_g√(2V_g)η√(β) t
+ 4L_f√(C_g)η√(t)+ 14L_fL_g/mη t+ 8C_fL_gD_yη.
Thus we get
_A[x_t- x_t^k, ν]+ 4_A[x_t- x_t^l, ω]
≤ 40C_fL_g√((c/e)^cD_y)ηβ^-c/2∑_j= 1^t j^-c/2+ 40C_f√(L_f) L_g^2η^2/β t+ 40C_fL_g√(2V_g)η√(β) t
+ 20L_f√(C_g)η√(t)+ 6L_fL_g/nη t+ 56L_fL_g/mη t+ 40C_fL_gD_y η.
Using Theorem <ref>, we have
_S, A[F(x_t)- F_S(x_t)]
≤ 40C_fL_g√((c/e)^cD_y)L_fL_g ηβ^-c/2∑_j= 1^t j^-c/2+ 40C_f√(L_f) L_fL_g^3η^2/β t+ 40C_f√(2V_g)L_fL_g^2η√(β) t
+ 20√(C_g)L_f^2L_g η√(t)+ 6L_fL_g/nη t+ 56L_fL_g/mη t+ 40C_fL_gD_y η+ L_f√(_S, A[_ω(g_ω(x_t))]/m).
From (<ref>) we get
∑_t= 1^T𝔼_S, A[F_S(x_t)- F_S(x_*^S)]≤ D_x η^-1+ L_fL_g η T+ (c/e)^cC_fD_y η^-1β^1- c∑_t= 1^T t^-c + 2C_fV_g η^-1β^2 T
+ C_fL_fL_g^2 ηβ^-1T+ C_fL_gD_x ηβ^-1T.
Setting η= T^-a and β= T^-b in (<ref>) with a, b∈ (0, 1] and telescoping from t= 1, …, T, then adding the result with (<ref>), and using the fact F_S(x_*^S)≤ F_S(x_*), we get
∑_t= 1^T𝔼_S, A[F(x_t)- F(x_*)]
≤ 40C_fL_g√((c/e)^cD_y)L_fL_g T^-a+ bc/2∑_t= 1^T∑_j= 1^t j^-c/2+ 40C_f√(L_f) L_fL_g^3 T^b- 2a∑_t= 1^T t
+ 40C_f√(2V_g)L_fL_g^2 T^-a- b/2∑_t= 1^T t+ 20√(C_g)L_f^2L_g T^-a∑_t= 1^T √(t)+ 6L_f^2L_g^2/n T^-a∑_t= 1^T t
+ 56L_f^2L_g^2/m T^-a∑_t= 1^T t+ 40C_fL_gD_y T^1- a+ L_f∑_t= 1^T√(_S, A[_ω(g_ω(x_t))]/m)
+ D_xT^a+ L_fL_g T^1- a+ (c/e)^cC_fD_y T^-b(1- c)+ a∑_t= 1^T t^-c
+ 2C_fV_g T^1- 2b+ a+ C_fL_fL_g^2 T^1+ b- a+ C_fL_gD_x T^1+ b- a.
Noting that ∑_t= 1^T t^-z= 𝒪(T^1- z) for z∈ (-1, 0)∪ (-∞, -1) and ∑_t= 1^T t^-1= 𝒪(log T), we have
∑_t= 1^T ∑_j= 1^t j^-c/2= 𝒪(∑_t= 1^T t^1- c/2 (log t)^𝕀_c= 2)= 𝒪(T^2- c/2 (log T)^𝕀_c= 2).
With the same derivation we can get the bounds on other terms on the right hand side of (<ref>). Then we get
∑_t= 1^T𝔼_S, A[F(x_t)- F(x_*)]
= 𝒪(T^2- a- c(1- b)/2 (log T)^𝕀_c= 2+ T^2+ b- 2a+ T^2- a- b/2+ T^3/2- a.
.+ n^-1T^2- a+ m^-1T^2- a+ T^1- a+ m^-1/2T+ T^a+ T^1- a+ T^(1- b)(1- c)+ a (log T)^𝕀_c= 1.
.+ T^1- 2b+ a+ T^1+ b- a).
Dividing both sides of (<ref>) with T, then from the choice of A(S) we get
_S,A[F(A(S)) - F(x_*) ]
= 𝒪(T^1- a- c(1- b)/2 (log T)^𝕀_c= 2+ T^1+ b- 2a+ T^1- a- b/2+ T^1/2- a.
.+ n^-1T^1- a+ m^-1T^1- a+ T^-a+ m^-1/2+ T^a- 1+ T^- a+ T^(1- b)(1- c)+ a- 1 (log T)^𝕀_c= 1.
.+ T^- 2b+ a+ T^b- a).
Since a, b∈ (0, 1], as long as we have c> 2, the dominating terms are
𝒪(T^1- a- b/2), 𝒪(T^1+ b- 2a), 𝒪(n^-1T^1- a), 𝒪(m^-1T^1- a), 𝒪(T^a-1), 𝒪(T^a-2b).
Setting a= 6/7 and b= 4/7 yields
_S,A[F(A(S)) - F(x_*) ]= 𝒪(T^-1/7+ T^1/7/n+ T^1/7/m+ 1/√(m)).
Setting T= 𝒪(max{n^3.5, m^3.5}) yields the following bound
_S,A[F(A(S)) - F(x_*) ]= 𝒪(1/√(n)+ 1/√(m)).
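The exponent bookkeeping behind the last two steps can be double-checked mechanically. The short Python check below (throwaway exact-rational arithmetic, independent of everything else in the paper) confirms that a= 6/7, b= 4/7 equalizes the dominating exponents at -1/7, that the term T^a- 2b is of strictly lower order, and that T= n^7/2 turns both T^-1/7 and T^1/7/n into n^-1/2:

from fractions import Fraction as F

a, b = F(6, 7), F(4, 7)
print(1 - a - b/2, 1 + b - 2*a, a - 1)  # -1/7 -1/7 -1/7
print(1 - a, a - 2*b)                   # 1/7 (inside n^{-1}T^{1-a}), -2/7 dominated
print(F(7, 2) * (-F(1, 7)))             # -1/2: T = n^{7/2} makes T^{-1/7} = n^{-1/2}
print(F(7, 2) * F(1, 7) - 1)            # -1/2: and T^{1/7}/n = n^{-1/2} as well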
Then we get the desired result for the SCGD update. Next we present the proof for the SCSC update. Plugging Lemma <ref> with SCSC update into (<ref>) and (<ref>), then we have
_A[x_t- x_t^k, ν]≤ 8C_fL_g η∑_j= 1^t- 1√((c/e)^cD_y (jβ)^-c+ L_fL_g^2η^2/β+2V_gβ)
+ 4L_f√(C_g)η√(t)+ 6L_fL_g/nη t+ 8C_fL_gD_yη.
and
_A[x_t- x_t^l, ω]≤ 8C_fL_g η∑_j= 1^t- 1√((c/e)^cD_y (jβ)^-c+ L_fL_g^2η^2/β+2V_gβ)
+ 4L_f√(C_g)η√(t)+ 14L_fL_g/mη t+ 8C_fL_gD_yη.
From the fact that √(a+ b)≤√(a)+ √(b) we get
_A[x_t- x_t^k, ν]≤ 8C_fL_g√((c/e)^cD_y)ηβ^-c/2∑_j= 1^t- 1 j^-c/2+ 8C_f√(L_f) L_g^2η^2/√(β) t+ 8C_fL_g√(2V_g)η√(β) t
+ 4L_f√(C_g)η√(t)+ 6L_fL_g/nη t+ 8C_fL_gD_yη.
and
_A[x_t- x_t^l, ω]≤ 8C_fL_g√((c/e)^cD_y)ηβ^-c/2∑_j= 1^t- 1 j^-c/2+ 8C_f√(L_f) L_g^2η^2/√(β) t+ 8C_fL_g√(2V_g)η√(β) t
+ 4L_f√(C_g)η√(t)+ 14L_fL_g/mη t+ 8C_fL_gD_yη.
Thus we get
_A[x_t- x_t^k, ν]+ 4_A[x_t- x_t^l, ω]
≤ 40C_fL_g√((c/e)^cD_y)ηβ^-c/2∑_j= 1^t j^-c/2+ 40C_f√(L_f) L_g^2η^2/√(β) t+ 40C_fL_g√(2V_g)η√(β) t
+ 20L_f√(C_g)η√(t)+ 6L_fL_g/nη t+ 56L_fL_g/mη t+ 40C_fL_gD_y η.
Using Theorem <ref>, we have
_S, A[F(x_t)- F_S(x_t)]
≤ 40C_fL_g√((c/e)^cD_y)L_fL_g ηβ^-c/2∑_j= 1^t j^-c/2+ 40C_f√(L_f) L_fL_g^3η^2/√(β) t+ 40C_f√(2V_g)L_fL_g^2η√(β) t
+ 20√(C_g)L_f^2L_g η√(t)+ 6L_fL_g/nη t+ 56L_fL_g/mη t+ 40C_fL_gD_y η+ L_f√(_S, A[_ω(g_ω(x_t))]/m).
From (<ref>) we get
∑_t= 1^T𝔼_S, A[F_S(x_t)- F_S(x_*^S)]≤ D_x η^-1+ L_fL_g η T+ (c/e)^cC_fD_y β^-1/2- c∑_t= 1^T t^-c + 2C_fV_g β^1/2T
+ C_fL_fL_g^2 η^2β^-3/2 T+ C_fL_gD_x β^1/2 T.
Setting η= T^-a and β= T^-b in (<ref>) with a, b∈ (0, 1] and telescoping from t= 1, …, T, then adding the result with (<ref>), and using the fact F_S(x_*^S)≤ F_S(x_*), we get
∑_t= 1^T𝔼_S, A[F(x_t)- F(x_*)]
≤ 40C_fL_g√((c/e)^cD_y)L_fL_g T^-a+ bc/2∑_t= 1^T∑_j= 1^t j^-c/2
+ 40C_f√(L_f) L_fL_g^3 T^b/2- 2a∑_t= 1^T t + 40C_f√(2V_g)L_fL_g^2 T^-a- b/2∑_t= 1^T t
+ 20√(C_g)L_f^2L_g T^-a∑_t= 1^T √(t)
+ 6L_f^2L_g^2/n T^-a∑_t= 1^T t+ 56L_f^2L_g^2/m T^-a∑_t= 1^T t+ 40C_fL_gD_y T^1- a
+ L_f∑_t= 1^T√(_S, A[_ω(g_ω(x_t))]/m)+ D_xT^a+ L_fL_g T^1- a+ (c/e)^cC_fD_y T^b(1/2+ c)∑_t= 1^T t^-c
+ 2C_fV_g T^1- b/2+ C_fL_fL_g^2 T^1+ 3/2b- 2a+ C_fL_gD_x T^1- b/2.
Noting that ∑_t= 1^T t^-z= 𝒪(T^1- z) for z∈ (-1, 0)∪ (-∞, -1) and ∑_t= 1^T t^-1= 𝒪(log T), we have
∑_t= 1^T ∑_j= 1^t j^-c/2= 𝒪(∑_t= 1^T t^1- c/2 (log t)^𝕀_c= 2)= 𝒪(T^2- c/2 (log T)^𝕀_c= 2).
With the same derivation for estimating other terms on the right hand side of (<ref>), we get
∑_t= 1^T𝔼_S, A[F(x_t)- F(x_*)]
= 𝒪(T^2- a- c(1- b)/2 (log T)^𝕀_c= 2+ T^2+ b/2- 2a+ T^2- a- b/2+ T^3/2- a.
.+ n^-1T^2- a+ m^-1T^2- a+ T^1- a+ m^-1/2T+ T^a+ T^1- a+ T^1- (1- b)c+ b/2 (log T)^𝕀_c= 1.
.+ T^1- b/2+ T^1+ 3/2b- 2a+ T^1- b/2).
Dividing both sides of (<ref>) with T, then from the choice of A(S) we get
_S,A[F(A(S)) - F(x_*) ]
= 𝒪(T^1- a- c(1- b)/2 (log T)^𝕀_c= 2+ T^1+ b/2- 2a+ T^1- a- b/2+ T^1/2- a.
.+ n^-1T^1- a+ m^-1T^1- a+ T^-a+ m^-1/2+ T^a- 1+ T^- a+ T^- (1- b)c+ b/2 (log T)^𝕀_c= 1.
.+ T^- b/2+ T^3/2b- 2a+ T^- b/2).
Since a, b∈ (0, 1], as long as we have c> 4, the dominating terms are
𝒪(T^1- a- b/2), 𝒪(T^1+ b/2- 2a), 𝒪(n^-1T^1- a), 𝒪(m^-1T^1- a), 𝒪(T^a-1), and 𝒪(T^3/2b- 2a).
Setting a= b= 4/5 yields
_S,A[F(A(S)) - F(x_*) ]= 𝒪(T^-1/5+ T^1/5/n+ T^1/5/m+ 1/√(m)).
Choosing T= 𝒪(max{n^2.5, m^2.5}) yields the following bound
_S,A[F(A(S)) - F(x_*) ]= 𝒪(1/√(n)+ 1/√(m)).
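The same mechanical check (again a throwaway script, not part of the proof) confirms the SCSC exponent balance: a= b= 4/5 equalizes the dominating exponents at -1/5, leaves T^3/2b- 2a of lower order, and T= n^5/2 yields the n^-1/2 rate:

from fractions import Fraction as F

a = b = F(4, 5)
print(1 - a - b/2, 1 + b/2 - 2*a, a - 1)  # -1/5 -1/5 -1/5
print(1 - a, F(3, 2)*b - 2*a)             # 1/5 (inside n^{-1}T^{1-a}), -2/5 dominated
print(F(5, 2) * (-F(1, 5)), F(5, 2) * F(1, 5) - 1)  # -1/2 -1/2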
Therefore, we get the desired result for the SCSC update. The proof is completed.
§ PROOF FOR THE STRONGLY CONVEX SETTING
§.§ Stability
The proof is analogous to the convex case. For any k∈ [n], define S^k,ν={ν_1,...,ν_k-1, ν_k^',ν_k+1,...,ν_n,ω_1,...,ω_m} as formed from S_ν by replacing the k-th element. For any l∈ [m], define S^l,ω={ν_1,...,ν_n,ω_1,...,ω_l-1, ω_l^',ω_l+1,...,ω_m} as formed from S_ω by replacing the l-th element. Let {x_t+1} and {y_t+1} be produced by SCGD based on S, {x_t+1^k,ν} and {y_t+1^k,ν} be produced by SCGD based on S^k,ν, and {x_t+1^l,ω} and {y_t+1^l,ω} be produced by SCGD based on S^l,ω. Let x_0=x_0^k,ν and x_0=x_0^l,ω be starting points in 𝒳. Since the changed sample can lie in either S_ν or S_ω, we need to consider both 𝔼_A[x_t+1-x_t+1^k,ν] and 𝔼_A[x_t+1-x_t+1^l,ω].
Estimation of 𝔼_A[x_t+1-x_t+1^k,ν]. We begin with the estimation of the term 𝔼_A[x_t+1-x_t+1^k,ν]. For this purpose, we will consider two cases, i.e., i_t≠ k and i_t=k.
Case 1 (i_t≠ k). If i_t ≠ k, we have
x_t+1-x_t+1^k,ν^2 ≤ x_t-η_t ∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-x_t^k,ν+η_t ∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν)^2
= x_t-x_t^k,ν^2-2 η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν),x_t-x_t^k,ν⟩
+η_t^2∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν)^2.
Taking the expectation w.r.t. j_t on both sides of (<ref>) implies that
_j_t[x_t+1-x_t+1^k,ν^2 ]
≤_j_t[x_t-x_t^k,ν^2]-2 η_t_j_t[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν),x_t-x_t^k,ν⟩]
+η_t^2_j_t[∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν)^2].
We first estimate the second term on the right hand side of (<ref>). It can be decomposed as
- 2η_t_j_t[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν),x_t-x_t^k,ν⟩]
= -2η_t_j_t[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^k,ν⟩]
-2η_t_j_t[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^k,ν⟩]
-2η_t_j_t[⟨∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν)),x_t-x_t^k,ν⟩]
-2η_t_j_t[⟨∇ g_S(x_t^k,ν) ∇ f_ν_i_ t(g_S(x_t^k,ν))-∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν)),x_t-x_t^k,ν⟩]
-2η_t_j_t[⟨∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν)) -∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν),x_t-x_t^k,ν⟩] .
We will estimate the terms on the right hand side of the above equality. Indeed, from part <ref> in Assumption <ref>, we know that f_ν(g_S(·)) is L-smooth. Combined with the strong convexity of f_ν(g_S(·)) and inequality (<ref>), this implies that
⟨∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν)), x_t-x_t^k,ν⟩
≥Lσ/L+σx_t-x_t^k,ν^2+1/L+σ∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν))^2.
Substituting (<ref>), (<ref>), (<ref>) and (<ref>) into (<ref>), we get that
-2η_t_j_t[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν),x_t-x_t^k,ν⟩]
≤2C_fL_gη_t_j_t[y_t+1-g_S(x_t)]x_t-x_t^k,ν-2Lη_tσ/L+σx_t-x_t^k,ν^2
-2η_t1/L+σ∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν))^2
+2C_fL_gη_t_j_t[y_t+1^k,ν-g_S(x_t^k,ν)]x_t-x_t^k,ν.
Furthermore, similar to the argument for (<ref>), we take the expectation w.r.t. j_t of the third term on the right hand side of (<ref>) and then obtain that
𝔼_j_t[η_t^2∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^k,ν) ∇ f_ν_i_t(y_t+1^k,ν)^2]
≤4η_t^2 C_f^2L_g^2 _j_t[y_t+1-g_S(x_t)^2]+4η_t^2 C_f^2L_g^2_j_t[y_t+1^k,ν-g_S(x_t^k,ν)^2]+16η_t^2L_f^2 C_g
+4η_t^2∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν))^2.
Putting (<ref>) and (<ref>) back into (<ref>) implies that
𝔼_j_t[x_t+1-x_t+1^k,ν^2]
≤ (1-2Lση_t/L+σ) x_t-x_t^k,ν^2+2C_fL_gη_t_j_t[y_t+1-g_S(x_t)]x_t-x_t^k,ν
+2C_fL_gη_t_j_t[y_t+1^k,ν-g_S(x_t^k,ν)]x_t-x_t^k,ν
+(4η_t^2-2η_t1/L+σ)∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^k,ν) ∇ f_ν_i_t(g_S(x_t^k,ν))^2
+4η_t^2 C_f^2L_g ^2 _j_t[y_t+1-g_S(x_t)^2]+4η_t^2 C_f^2L_g^2_j_t[y_t+1^k,ν-g_S(x_t^k,ν)^2]+16η_t^2L_f ^2C_g
≤ (1-2Lση_t/L+σ) x_t-x_t^k,ν^2+2C_fL_gη_t_j_t[y_t+1-g_S(x_t)]x_t-x_t^k,ν
+2C_fL_gη_t_j_t[y_t+1^k,ν-g_S(x_t^k,ν)]x_t-x_t^k,ν
+4η_t^2 C_f^2L_g^2 _j_t[y_t+1-g_S(x_t)^2]+4η_t^2 C_f^2L_g^2_j_t[y_t+1^k,ν-g_S(x_t^k,ν)^2]+16η_t^2L_f^2 C_g,
where in the second inequality we have used the fact that η_t≤1/2(L+σ).
Case 2 (i_t = k). If i_t = k, in analogy to the argument in (<ref>), we have
_j_t[x_t+1-x_t+1^k,ν^2]≤x_t-x_t^k,ν^2+4L_gL_fη_tx_t-x_t^k,ν+4L_g^2L_f^2η_t^2.
Combining the results of Case 1 and Case 2 and taking the expectation w.r.t. A, we have that
𝔼_A[x_t+1-x_t+1^k,ν^2]
≤(1-2η_tLσ/L+σ+2η_tLσ/n(L+σ))_A[x_t-x_t^k,ν^2]
+2C_fL_gη_t(_A[y_t+1-g_S(x_t)^2])^1/2(_A[x_t-x_t^k,ν^2])^1/2
+2C_fL_gη_t(_A[y_t+1^k,ν-g_S(x_t^k,ν)^2])^1/2(_A[x_t-x_t^k,ν^2])^1/2
+4η_t^2 C_f^2L_g^2 _A[y_t+1-g_S(x_t)^2]+4η_t^2 C_f^2L_g^2_A[y_t+1^k,ν-g_S(x_t^k,ν)^2]
+16η_t^2L_f^2 C_g+4η_tL_gL_f_A[x_t-x_t^k,ν 𝕀 _[i_t= k]]
+4η_t^2L_f^2 L_g^2_A[ 𝕀 _[i_t= k]].
Note that η_tLσ/L+σ≥2η_tLσ/n(L+σ) as n ≥ 2. We further get that 1-2η_tLσ/L+σ+2η_tLσ/n(L+σ)≤ 1-η_tLσ/L+σ. Observe that _A[x_t-x_t^k,ν 𝕀 _[i_t= k]]=1/n_A[x_t-x_t^k,ν]≤1/n(_A[x_t-x_t^k,ν^2])^1/2.
If η_t=η, combining the above observations with (<ref>) implies that
𝔼_A[x_t+1-x_t+1^k,ν^2]
≤
2C_fL_g∑_j=1^t(1-ηLσ/L+σ)^t-jη(_A[y_j+1-g_S(x_j)^2])^1/2(_A[x_j-x_j^k,ν^2])^1/2
+2C_fL_g∑_j=1^t(1-ηLσ/L+σ)^t-jη(_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2(_A[x_j-x_j^k,ν^2])^1/2
+4C_f^2L_g^2 ∑_j=0^t(1-ηLσ/L+σ)^t-jη^2 _A[y_j+1-g_S(x_j)^2]
+4C_f^2L_g ^2∑_j=0^t(1-ηLσ/L+σ)^t-jη^2 _A[y_j+1^k,ν-g_S(x_j^k,ν)^2]
+16L_f^2 C_g∑_j=0^t(1-ηLσ/L+σ)^t-jη^2+4L_gL_f/n∑_j=1^t(1-ηLσ/L+σ)^t-jη(_A[x_j-x_j^k,ν^2])^1/2
+4L_f^2 L_g^2/n∑_j=0^t(1-ηLσ/L+σ)^t-jη^2 .
Again, for notational convenience, let u_t=( 𝔼_A[x_t-x_t^k,ν^2])^1/2. The above estimation can be equivalently rewritten as
u_t^2≤
2C_fL_g∑_j=1^t-1(1-ηLσ/L+σ)^t-j-1η(_A[y_j+1-g_S(x_j)^2])^1/2u_j
+2C_fL_g∑_j=1^t-1(1-ηLσ/L+σ)^t-j-1η(_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2u_j
+4C_f^2L_g^2 ∑_j=0^t-1(1-ηLσ/L+σ)^t-j-1η^2 _A[y_j+1-g_S(x_j)^2]
+4C_f^2L_g ^2∑_j=0^t-1(1-ηLσ/L+σ)^t-j-1η^2_A[y_j+1^k,ν-g_S(x_j^k,ν)^2] +16L_f^2 C_gη^2∑_j=0^t-1(1-ηLσ/L+σ)^t-j-1
+4L_gL_f/nη∑_j=1^t-1(1-ηLσ/L+σ)^t-j-1u_j +4L_f^2 L_g^2/nη^2∑_j=0^t-1(1-ηLσ/L+σ)^t-j-1.
Note that
16L_f^2 C_gη^2∑_j=0^t-1(1-ηLσ/L+σ)^t-j-1≤16L_f^2 C_gη^2L+σ/Lησ=16L_f^2 C_gL+σ/Lση and 4L_f^2 L_g^2/nη^2∑_j=0^t-1(1-ηLσ/L+σ)^t-j-1≤4L_f^2 L_g^2/nη^2L+σ/Lησ=4L_f^2 L_g^2/nL+σ/Lση.
Furthermore, define
S_t =4C_f^2L_g^2 ∑_j=0^t-1(1-ηLσ/L+σ)^t-j-1η^2 _A[y_j+1-g_S(x_j)^2]
+4C_f^2L_g ^2∑_j=0^t-1(1-ηLσ/L+σ)^t-j-1η^2_A[y_j+1^k,ν-g_S(x_j^k,ν)^2]+16L_f^2 C_gL+σ/Lση+4L_f^2 L_g^2/nL+σ/Lση,
α_j =2C_fL_g(1-ηLσ/L+σ)^t-j-1η(_A[y_j+1-g_S(x_j)^2])^1/2
+2C_fL_g(1-ηLσ/L+σ)^t-j-1η(_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2+4L_gL_f/n(1-ηLσ/L+σ)^t-j-1η.
Now applying Lemma <ref> with u_t, S_t and α_j defined above to (<ref>), we get
u_t≤√(S_t)+∑_j=1^t-1α_j
≤2C_fL_g(∑_j=0^t-1 (1-ηLσ/L+σ)^t-j-1η^2_A[y_j+1-g_S(x_j)^2] )^1/2
+2C_fL_g(∑_j=0^t-1(1-ηLσ/L+σ)^t-j-1η^2_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2
+2C_fL_g∑_j=1^t-1(1-ηLσ/L+σ)^t-j-1η(_A[y_j+1-g_S(x_j)^2])^1/2
+2C_fL_g∑_j=1^t-1(1-ηLσ/L+σ)^t-j-1η(_A[y_j+1^k,ν-g_S(x_j^k,ν)^2])^1/2 +4L_f√( C_gL+σ/L σ)√(η)
+2L_gL_f√(L+σ/Lσ)√(η/n)
+4L_gL_f(L+σ)/nLσ
where the last inequality uses the fact that (∑_i=1^4a_i)^1/2≤∑_i=1^4(a_i)^1/2 and we use the fact that 4L_gL_f/nη∑_j=1^t-1(1-ηLσ/L+σ)^t-j-1≤4L_gL_f(L+σ)/nLσ.
Note that _A[y_j+1-g_S(x_j)^2]≤sup_S𝔼_A[y_j+1-g_S(x_j)^2] and _A[y_j+1^k,ν-g_S(x_j^k,ν)^2]≤sup_S𝔼_A[y_j+1-g_S(x_j)^2].
Consequently, with T iterations, since 𝔼_A[x_T-x_T^k,ν]≤ u_T=(𝔼_A[x_T-x_T^k,ν^2])^1/2, we further obtain
𝔼_A[x_T-x_T^k,ν]≤ 4C_fL_gηsup_S(∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1_A[y_j+1-g_S(x_j)^2] )^1/2
+4C_fL_gηsup_S∑_j=0^T-1(1-ηLσ/L+σ)^T-j-1(𝔼_A[ y_j+1-g_S(x_j)^2 ] )^1/2+4L_f√( C_gL+σ/L σ)√(η)
+2L_fL_g√(L+σ/Lσ)√(η/n)
+4L_gL_f(L+σ)/ nLσ.
Estimation of 𝔼_A[x_t+1-x_t+1^l,ω]. Likewise, we will estimate 𝔼_A[x_t+1-x_t+1^l,ω] by considering two cases, i.e., j_t≠ l and j_t=l.
Case 1 (j_t≠ l). If j_t≠ l, we have
x_t+1-x_t+1^l,ω^2 ≤ x_t-η_t ∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-x_t^l,ω+η_t ∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω)^2
= x_t-x_t^l,ω^2-2 η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω),x_t-x_t^l,ω⟩
+η_t^2∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω)^2 .
We first estimate the second term on the right hand side of (<ref>). It can be decomposed as
- 2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω),x_t-x_t^l,ω⟩
= -2η_t ⟨∇ g_ω_j_t(x_t) (∇ f_ν_i_ t(y_t+1)-∇ f_ν_i_ t(g_S(x_t))),x_t-x_t^l,ω⟩
-2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩
-2η_t⟨∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)), x_t-x_t^l,ω⟩
-2η_t⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩
-2η_t⟨∇ g_ω_j_t(x_t^l,ω) (∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ f_ν_i_t(y_t+1^l,ω)),x_t-x_t^l,ω⟩.
From part <ref> of Assumption <ref> and inequality (<ref>), we have
⟨∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩
≥Lσ/L+σx_t-x_t^l,ω^2+1/L+σ∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω))^2.
Plugging (<ref>), (<ref>) and (<ref>) into (<ref>) implies that
- 2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω),x_t-x_t^l,ω⟩
≤ 2η_t C_fL_gy_t+1-g_S(x_t)x_t-x_t^l,ω+ 2η_t C_fL_gy_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω
-2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩
-2η_tLσ/L+σx_t-x_t^l,ω^2-2η_t1/L∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω))^2
-2η_t⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩.
Next we estimate the last term on the right hand side of (<ref>). Using arguments similar to that for (<ref>), we have
η_t^2∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(y_t+1)-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(y_t+1^l,ω)^2
≤ 4η_t^2 C_f^2L_g^2y_t+1-g_S(x_t)^2+4L_g^2η_t^2 C_f^2 g_S(x_t^l,ω)-y_t+1^l,ω^2
+8L_f^2η_t^2∇ g_ω_j_t(x_t)-∇ g_S(x_t) ^2+8L_f^2η_t^2∇ g_ω_j_t(x_t^l,ω)-∇ g_S(x_t^l,ω) ^2
+4η_t^2∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω))^2.
Putting (<ref>) and (<ref>) into (<ref>) and noting that η_t≤1/2(L+σ), we get
x_t+1-x_t+1^l,ω^2≤(1-2Lση_t/L+σ) x_t-x_t^l,ω^2 +2η_t C_fL_gy_t+1-g_S(x_t)x_t-x_t^l,ω
+ 2η_t C_fL_gy_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω
-2η_t⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩
-2η_t⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩
+4η_t^2 C_f^2L_g^2 y_t+1-g_S(x_t)^2+4η_t^2 C_f^2L_g^2g_S(x_t^l,ω)-y_t+1^l,ω^2
+8L_f^2η_t^2∇ g_ω_j_t(x_t)-∇ g_S(x_t) ^2+8L_f^2η_t^2∇ g_ω_j_t(x_t^l,ω)-∇ g_S(x_t^l,ω) ^2.
Case 2 (j_t= l). If j_t= l, using an argument similar to (<ref>), it is easy to see that
x_t+1-x_t+1^l,ω^2≤x_t-x_t^l,ω^2+4L_gL_fη_tx_t-x_t^l,ω+4η_t^2L_g^2L_f^2.
Combining Case 1 and Case 2, taking the expectation w.r.t. A on both sides, and using part <ref> of Assumption <ref>, we have
_A[x_t+1-x_t+1^l,ω^2]
≤(1-2Lση_t/L+σ+2η_tLσ/m(L+σ))_A[x_t-x_t^l,ω^2]+2C_fL_gη_t_A[y_t+1-g_S(x_t)x_t-x_t^l,ω]
+2C_fL_gη_t_A[y_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω]+4η_t^2 C_f^2L_g^2_A[y_t+1-g_S(x_t)^2]
+4η_t^2 C_f^2L_g^2_A[y_t+1^l,ω-g_S(x_t^l,ω)^2]+16C_gL_f^2η_t^2
-2η_t_A[⟨∇ g_ω_j_t(x_t) ∇ f_ν_i_ t(g_S(x_t))-∇ g_S(x_t) ∇ f_ν_i_ t(g_S(x_t)),x_t-x_t^l,ω⟩𝕀_[j_t≠ l]]
-2η_t_A[⟨∇ g_S(x_t^l,ω) ∇ f_ν_i_ t(g_S(x_t^l,ω))-∇ g_ω_j_t(x_t^l,ω) ∇ f_ν_i_t(g_S(x_t^l,ω)),x_t-x_t^l,ω⟩𝕀_[j_t≠ l]]
+4η_tL_fL_g_A[x_t-x_t^l,ω𝕀_[j_t=l]]+4η_t^2L_g^2L_f^2_A[𝕀_[j_t= l]].
Note that η_tLσ/L+σ≥2η_tLσ/m(L+σ) as m ≥ 2. We further get that 1-2η_tLσ/L+σ+2η_tLσ/m(L+σ)≤ 1-η_tLσ/L+σ. Plugging (<ref>) and (<ref>) into (<ref>) implies that
_A[x_t+1-x_t+1^l,ω^2]
≤(1-Lση_t/L+σ)_A[x_t-x_t^l,ω^2]+2C_fL_gη_t_A[y_t+1-g_S(x_t)x_t-x_t^l,ω]
+2C_fL_gη_t_A[y_t+1^l,ω-g_S(x_t^l,ω)x_t-x_t^l,ω]+4η_t^2 C_f^2L_g^2_A[y_t+1-g_S(x_t)^2]
+4η_t^2 C_f^2L_g^2_A[y_t+1^l,ω-g_S(x_t^l,ω)^2]+16C_gL_f^2η_t^2
+12η_tL_fL_g_A[x_t-x_t^l,ω𝕀_[j_t=l]]+4η_t^2L_g^2L_f^2_A[𝕀_[j_t= l]]
≤(1-Lση_t/L+σ) _A[x_t-x_t^l,ω^2]+2C_fL_gη_t(_A[y_t+1-g_S(x_t)^2])^1/2(_A[x_t-x_t^l,ω^2])^1/2
+2C_fL_gη_t(_A[y_t+1^l,ω-g_S(x_t^l,ω)^2])^1/2(_A[x_t-x_t^l,ω^2])^1/2
+4η_t^2 C_f^2L_g^2_A[y_t+1-g_S(x_t)^2]+4η_t^2 C_f^2L_g^2_A[y_t+1^l,ω-g_S(x_t^l,ω)^2]+16C_g L_f^2η_t^2
+12η_tL_fL_g_A[x_t-x_t^l,ω𝕀_[j_t=l]]
+4η_t^2L_g^2 L_f^2_A[𝕀_[j_t= l]],
where the second inequality holds by the Cauchy-Schwarz inequality.
In addition, observe that
_A[x_t-x_t^l,ω𝕀_[j_t=l]] =_A[x_t-x_t^l,ω_j_t[𝕀_[j_t=l]]]
=1/m_A[x_t-x_t^l,ω]≤1/m(_A[x_t-x_t^l,ω^2])^1/2.
If η_t=η, using the above observations, noting x_0-x_0^l,ω^2=0, we can obtain
𝔼_A[x_t+1-x_t+1^l,ω^2]
≤
2C_fL_g∑_i=1^t(1-Lση/L+σ)^t-iη(_A[y_i+1-g_S(x_i)^2])^1/2(_A[x_i-x_i^l,ω^2])^1/2
+2C_fL_g∑_i=1^t(1-Lση/L+σ)^t-iη(_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2(_A[x_i-x_i^l,ω^2])^1/2
+4C_f^2L_g^2 ∑_i=0^t(1-Lση/L+σ)^t-iη^2 _A[y_i+1-g_S(x_i)^2]
+4C_f^2L_g^2 ∑_i=0^t(1-Lση/L+σ)^t-iη^2 _A[y_i+1^l,ω-g_S(x_i^l,ω)^2]
+16L_f^2 C_g∑_i=0^t(1-Lση/L+σ)^t-iη^2+12L_gL_f/m∑_i=1^t(1-Lση/L+σ)^t-iη(_A[x_i-x_i^l,ω^2])^1/2
+4L_f^2 L_g^2/m∑_i=0^t(1-Lση/L+σ)^t-iη^2 .
For notational convenience, let u_t=( 𝔼_A[x_t-x_t^l,ω^2])^1/2. Therefore, (<ref>) can be equivalently rewritten as
u_t^2 ≤2C_fL_g∑_i=1^t-1(1-Lση/L+σ)^t-i-1η(_A[y_i+1-g_S(x_i)^2])^1/2u_i
+2C_fL_g∑_i=1^t-1(1-Lση/L+σ)^t-i-1η(_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2u_i
+4C_f^2L_g^2 ∑_i=0^t-1(1-Lση/L+σ)^t-i-1η^2 _A[y_i+1-g_S(x_i)^2]
+4C_f^2L_g^2 ∑_i=0^t-1(1-Lση/L+σ)^t-i-1η^2 _A[y_i+1^l,ω-g_S(x_i^l,ω)^2] +16L_f^2 C_g∑_i=0^t-1(1-Lση/L+σ)^t-i-1η^2
+12L_gL_f/m∑_i=1^t-1(1-Lση/L+σ)^t-i-1η u_i +4L_f^2 L_g^2/m∑_i=0^t-1(1-Lση/L+σ)^t-i-1η^2 .
We will use Lemma <ref> to get the desired result. To this end, notice that
16L_f^2 C_gη^2∑_i=0^t-1(1-ηLσ/L+σ)^t-i-1 ≤16L_f^2 C_gη^2L+σ/Lησ=16L_f^2 C_gL+σ/Lση,
4L_f^2 L_g^2/mη^2∑_i=0^t-1(1-ηLσ/L+σ)^t-i-1 ≤4L_f^2 L_g^2/mη^2L+σ/Lησ=4L_f^2 L_g^2/mL+σ/Lση.
Moreover, we define
S_t =4C_f^2L_g^2 ∑_i=0^t-1(1-Lση/L+σ)^t-i-1η^2 _A[y_i+1-g_S(x_i)^2]
+4C_f^2L_g^2 ∑_i=0^t-1(1-Lση/L+σ)^t-i-1η^2 _A[y_i+1^l,ω-g_S(x_i^l,ω)^2]
+16L_f^2 C_gL+σ/Lση+4L_f^2 L_g^2/mL+σ/Lση,
α_i =2C_fL_g(1-Lση/L+σ)^t-i-1η(_A[y_i+1-g_S(x_i)^2])^1/2
+2C_fL_g(1-Lση/L+σ)^t-i-1η(_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2
+12L_gL_f/m(1-Lση/L+σ)^t-i-1η.
Applying Lemma <ref> with u_t, S_t and α_i defined as above to (<ref>), we get
u_t≤√(S_t)+∑_i=1^t-1α_i
≤ (4C_f^2L_g^2 ∑_i=0^t-1(1-Lση/L+σ)^t-i-1η^2 _A[y_i+1-g_S(x_i)^2])^1/2
+(4C_f^2L_g^2 ∑_i=0^t-1(1-Lση/L+σ)^t-i-1η^2 _A[y_i+1^l,ω-g_S(x_i^l,ω)^2] )^1/2
+2C_fL_g∑_i=1^t-1(1-Lση/L+σ)^t-i-1η(_A[y_i+1-g_S(x_i)^2])^1/2
+2C_fL_g∑_i=1^t-1(1-Lση/L+σ)^t-i-1η(_A[y_i+1^l,ω-g_S(x_i^l,ω)^2])^1/2
+4L_f√(C_gL+σ/Lσ)√(η)+2L_fL_g√(L+σ/Lσ)√(η/m)+12L_gL_f(L+σ)/m Lσ,
where we have used the fact that (∑_i=1^4a_i)^1/2≤∑_i=1^4(a_i)^1/2 and ∑_i=1^t-112L_gL_f/m(1-Lση/L+σ)^t-i-1η≤12L_gL_f(L+σ)/m Lσ.
Note that _A[y_i+1-g_S(x_i)^2]≤sup_S𝔼_A[y_i+1-g_S(x_i)^2] and _A[y_i+1^l,ω-g_S(x_i^l,ω)^2]≤sup_S𝔼_A[y_i+1-g_S(x_i)^2].
Consequently, with T iterations, since 𝔼_A[x_T-x_T^l,ω]≤ u_T=(𝔼_A[x_T-x_T^l,ω^2])^1/2, we further obtain
𝔼_A[x_T-x_T^l,ω]
≤ 4C_fL_gηsup_S(∑_i=0^T-1(1-Lση/L+σ)^T-i-1_A[y_i+1-g_S(x_i)^2])^1/2
+4C_fL_gηsup_S∑_i=0^T-1(1-Lση/L+σ)^T-i-1η(_A[y_i+1-g_S(x_i)^2])^1/2
+4L_f√(C_gL+σ/Lσ)√(η)+2L_fL_g√(L+σ/Lσ)√(η/m)+12L_gL_f(L+σ)/m Lσ.
Combining the estimations for 𝔼_A[x_T-x_T^k,ν] and 𝔼_A[x_T-x_T^l,ω], we obtain
ϵ_ν+ϵ_ω ≤ 8C_fL_gηsup_S(∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1_A[y_j+1-g_S(x_j)^2] )^1/2
+8C_fL_gηsup_S∑_j=0^T-1(1-ηLσ/L+σ)^T-j-1(𝔼_A[ y_j+1-g_S(x_j)^2 ] )^1/ 2
+8L_f√( C_gL+σ/L σ)√(η)+2L_fL_g√(L+σ/Lσ)√(η/n)
+4L_gL_f(L+σ)/ nLσ
+2L_fL_g√(L+σ/Lσ)√(η/m)+12L_gL_f(L+σ)/m Lσ
≤16C_fL_gηsup_S∑_j=0^T-1(1-ηLσ/L+σ)^T-j-1(𝔼_A[ y_j+1-g_S(x_j)^2 ] )^1/2
+8L_f√( C_gL+σ/L σ)√(η)+2L_fL_g√(L+σ/Lσ)√(η/n)
+4L_gL_f(L+σ)/ nLσ
+2L_fL_g√(L+σ/Lσ)√(η/m)+12L_gL_f(L+σ)/m Lσ.
Next we will verify why the second inequality of (<ref>) holds true.
With the result of SCGD update in Lemma <ref>, we have
η(∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1_A[y_j+1-g_S(x_j)^2] )^1/2
≤η(∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1((c/e)^c (jβ)^-c𝔼_A[ y_1- g_S(x_0)^2]+ L_f^2 L_g^3η^2/β^2+2V_gβ) )^1/2
≤η(∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1(L_f^2 L_g^3η^2/β^2+2V_gβ))^1/2+η((c/e)^cD_y∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1(jβ)^-c)^1/2
≤L_fL_g√(L_g(L+σ))/√(Lσ)η^3/2/β+√(2V_g(L+σ)/Lσ)√(ηβ)+(c/e)^c/2√(D_y)√((L+σ)η)/√(Lσ)T^-c/2β^-c/2,
where the last inequality holds by the fact that ∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1≤L+σ/η Lσ and Lemma <ref>. To see this, (∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1(jβ)^-c)^1/2≤ (∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1∑_j=0^T-1(jβ)^-c/T)^1/2≤ (T^-c+1β^-c(L+σ)/Tη Lσ)^1/2=T^-c/2β^-c/2√(L+σ)/√(η Lσ).
And
η∑_j=0^T-1(1-ηLσ/L+σ)^T-j-1(𝔼_A[ y_j+1-g_S(x_j)^2 ] )^1/2
≤η∑_j=0^T-1(1-ηLσ/L+σ)^T-j-1(((c/e)^c (jβ)^-c𝔼_A[ y_1- g_S(x_0)^2]+ L_f^2 L_g^3η^2/β^2+2V_gβ))^1/2
≤η∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1(√(L_g)L_gL_fη/β+√(2V_g)√(β))+(c/e)^c/2√(D_y)η∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1(jβ)^-c/2
≤√(L_g)L_gL_f(L+σ)/Lση/β+√(2V_g)(L+σ)/Lσ√(β)+(c/e)^c/2√(D_y)(L+σ)/Lσ T^-c/2β^-c/2,
where the last inequality holds by the fact that ∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1≤L+σ/η Lσ and Lemma <ref>. To see this, ∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1(jβ)^-c/2≤∑_j=0^T-1 (1-ηLσ/L+σ)^T-j-1∑_j=0^T-1(jβ)^-c/2/T≤T^-c/2β^-c/2(L+σ)/η L σ. Comparing the results in (<ref>) and (<ref>), the dominating terms are those in (<ref>). One can show that with the result of the SCSC update in Lemma <ref>, the dominating term is η∑_j=0^T-1(1-ηLσ/L+σ)^T-j-1(𝔼_A[ y_j+1-g_S(x_j)^2 ] )^1/2.
Since typically η≤min(1/n, 1/m), we have √(η)/√(n)≤1/n and hence √(L+σ/Lσ)√(η/n)≤(L+σ)/nLσ; likewise √(η)/√(m)≤1/m and √(L+σ/Lσ)√(η/m)≤(L+σ)/mLσ. We thus arrive at the final stability result for the σ-strongly convex setting, which holds for both SCGD and SCSC, as stated in Theorem <ref>:
ϵ_ν+ϵ_ω= 𝒪(L_gL_f(L+σ)/σ L m+ L_gL_f(L+σ)/σ Ln+L_f√(C_g(L+σ)η)/√(σ L)
+C_fL_gηsup_S∑_j=1^T(1-ηLσ/L+σ)^T-j(𝔼_A[ y_j+1-g_S(x_j)^2 ] )^1/2).
This completes the proof.
Next we move on to the proof of Corollary <ref>.
Putting the result (<ref>) to (<ref>), we get stability result of SCGD for strongly convex problems
ϵ_ν + ϵ_ω= 𝒪(n^-1+m^-1+η^1/2+ηβ^-1+β^1/2+T^-c/2β^-c/2).
With SCSC update in Lemma <ref>, with a same progress, we have stability result of SCSC for strongly convex problems
ϵ_ν+ϵ_ω=𝒪(n^-1+ m^-1+η^1/2+ηβ^-1/2+β^1/2+T^-c/2β^-c/2).
§.§ Optimization
Suppose Assumptions <ref><ref> and <ref><ref>-<ref> hold and F_S is σ-strongly convex. By running Algorithm <ref>, we have for any x
𝔼_A[F_S(x_t+1)|ℱ_t]≤ F_S(x_t)- η_t/2∇ F_S(x_t)^2+ LL_f^2L_g^2/2η_t^2+ C_f^2L_g^2η_t/2𝔼_A[y_t+1- g_S(x_t)^2|ℱ_t].
where 𝔼_A denotes the expectation taken with respect to the randomness of the algorithm, and ℱ_t is the σ-field generated by {ω_j_0, …, ω_j_t-1, ν_i_0, …, ν_i_t- 1}.
The proof of Lemma <ref> is deferred to the end of this subsection. Now we are ready to prove the convergence of Algorithm <ref> for strongly convex problems.
We first present the proof for the SCGD update. Note that strong convexity implies the Polyak-Lojasiewicz (PL) inequality
1/2∇ F_S(x)^2≥σ(F_S(x)- F_S(x_*^S)), ∀ x.
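(For completeness, this implication takes one line: σ-strong convexity gives F_S(x')≥ F_S(x)+⟨∇ F_S(x), x'- x⟩+σ/2x'- x^2 for all x', and minimizing the right hand side over x', attained at x'= x-1/σ∇ F_S(x), yields F_S(x_*^S)≥ F_S(x)-1/2σ∇ F_S(x)^2, which rearranges to (<ref>).)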
Taking full expectation over (<ref>), subtracting both sides with F_S(x_*^S), and plugging in the PL inequality, we get
𝔼_A[F_S(x_t+1)- F_S(x_*^S)]≤ (1- ση_t)𝔼_A[F_S(x_t)- F_S(x_*^S)]+ LL_f^2L_g^2/2η_t^2+ C_f^2L_g^2η_t/2𝔼_A[y_t+1- g_S(x_t)^2].
Setting η_t= η and β_t= β, plugging Lemma <ref> into (<ref>), and letting D_y:= 𝔼_A[ y_1- g_S(x_0)^2], we have for t≥ 1
𝔼_A[F_S(x_t+1)- F_S(x_*^S)]
≤ (1- ση)𝔼_A[F_S(x_t)- F_S(x_*^S)]+ LL_f^2L_g^2/2η^2+ C_f^2L_g^2η/2( (c/e)^cD_y (tβ)^-c+L_g^3 C_f^2 η^2/β^2+ 2V_g β).
Telescoping the above inequality for t= 1, ⋯, T- 1, we get
𝔼_A[F_S(x_T)- F_S(x_*^S)]
≤ (1- ση)^T- 1𝔼_A[F(x_1)- F_S(x_*^S)]+ LL_f^2L_g^2/2η^2 ∑_t= 1^T- 1 (1- ση)^T- t- 1
+ C_f^2L_g^2η/2(c/e)^cD_y β^-c∑_t= 1^T- 1 t^-c(1- ση)^T- t- 1+ C_f^4L_g^5η^3/2β^2∑_t= 1^T- 1 (1- ση)^T- t- 1
+ C_f^2L_g^2V_gηβ∑_t= 1^T- 1 (1- ση)^T- t- 1.
For t= 0 we have
𝔼_A[F_S(x_1)- F_S(x_*^S)]≤ (1- ση)𝔼_A[F_S(x_0)- F_S(x_*^S)]+ LL_f^2L_g^2/2η^2+ C_f^2L_g^2D_yη/2.
Combining the above two inequalities yields
𝔼_A[F_S(x_T)- F_S(x_*^S)]
≤ (1- ση)^T𝔼_A[F(x_1)- F_S(x_*^S)]+ LL_f^2L_g^2/2η^2 ∑_t= 1^T (1- ση)^T- t+ (1- ση)^T- 1C_f^2L_g^2D_yη/2
+ C_f^2L_g^2η/2(c/e)^cD_y β^-c∑_t= 1^T- 1 t^-c(1- ση)^T- t- 1+ C_f^4L_g^5η^3/2β^2∑_t= 1^T (1- ση)^T- t+ C_f^2L_g^2V_gηβ∑_t= 1^T (1- ση)^T- t.
Note that we have ∑_t= 1^T( 1- ση/2)^T- t≤2/ση. From Lemma <ref> we have
∑_t= 1^T- 1( 1- ση/2)^T- t- 1t^-c≤∑_t= 1^T- 1( 1- ση/2)^T- t- 1/T- 1∑_t= 1^T- 1 t^-c≤4/Tση∑_t= 1^T- 1 t^-c.
And from Lemma <ref> we know (1- ση/2)^T≤exp(-ση T/2)≤ (2c/eσ)^c (η T)^-c. Letting D_x:= 𝔼_A[F(x_0)- F_S(x_*^S)], we get
𝔼_A[F_S(x_T)- F_S(x_*^S)]≤ (2c/eσ)^c (η T)^-c D_x+ LL_f^2L_g^2/ση+ (2c/eσ)^c (η T)^-c C_f^2L_g^2D_yη
+ 2C_f^2L_g^2/σ(c/e)^cD_y β^-cT^-1∑_t= 1^T- 1 t^-c+ C_f^4L_g^5η^2/σβ^2 + 2C_f^2L_g^2V_g/σβ.
Noting that ∑_t= 1^T- 1 t^-z= 𝒪(T^1- z) for z∈ (0, 1)∪ (1, ∞) and ∑_t= 1^T- 1 t^-1= 𝒪(log T), as long as c≠ 1 we get
𝔼_A[F_S(x_T)- F_S(x_*^S)]
= 𝒪(D_x(η T)^-c+ LL_f^2L_g^2η+ C_f^2L_g^2D_y (η T)^-cη+ C_f^2L_g^2D_y(β T)^-c+ C_f^4L_g^5C_gη^2β^-2+ C_f^2L_g^2V_gβ).
Then we get the desired result for the SCGD update. Next we present the proof for the SCSC update. Setting η_t= η and β_t= β, plugging Lemma <ref> into (<ref>), and letting D_y:= 𝔼_A[ y_1- g_S(x_0)^2], we have for t≥ 1
𝔼_A[F_S(x_t+1)- F_S(x_*^S)]
≤ (1- ση)𝔼_A[F(x_t)- F_S(x_*^S)]+ LL_f^2L_g^2/2η^2+ C_f^2L_g^2η/2( (c/e)^cD_y (tβ)^-c+L_g^3 C_f^2 η^2/β+ 2V_g β).
Telescoping the above inequality for t= 1, ⋯, T- 1, we get
𝔼_A[F_S(x_T)- F_S(x_*^S)]
≤ (1- ση)^T- 1𝔼_A[F(x_1)- F_S(x_*^S)]+ LL_f^2L_g^2/2η^2 ∑_t= 1^T- 1 (1- ση)^T- t- 1
+ C_f^2L_g^2η/2(c/e)^cD_y β^-c∑_t= 1^T- 1 t^-c(1- ση)^T- t- 1+ C_f^4L_g^5η^3/2β∑_t= 1^T- 1 (1- ση)^T- t- 1+ C_f^2L_g^2V_gηβ∑_t= 1^T- 1 (1- ση)^T- t- 1.
For t= 0 we have
𝔼_A[F_S(x_1)- F_S(x_*^S)]≤ (1- ση)𝔼_A[F_S(x_0)- F_S(x_*^S)]+ LL_f^2L_g^2/2η^2+ C_f^2L_g^2D_yη/2.
Combining the above two inequalities yields
𝔼_A[F_S(x_T)- F_S(x_*^S)]
≤ (1- ση)^T𝔼_A[F(x_1)- F_S(x_*^S)]+ LL_f^2L_g^2/2η^2 ∑_t= 1^T (1- ση)^T- t+ (1- ση)^T- 1C_f^2L_g^2D_yη/2
+ C_f^2L_g^2η/2(c/e)^cD_y β^-c∑_t= 1^T- 1 t^-c(1- ση)^T- t- 1+ C_f^4L_g^5η^3/2β∑_t= 1^T (1- ση)^T- t+ C_f^2L_g^2V_gηβ∑_t= 1^T (1- ση)^T- t.
Note that we have ∑_t= 1^T( 1- ση/2)^T- t≤2/ση. From Lemma <ref> we have
∑_t= 1^T- 1( 1- ση/2)^T- t- 1t^-c≤∑_t= 1^T- 1( 1- ση/2)^T- t- 1/T- 1∑_t= 1^T- 1 t^-c≤4/Tση∑_t= 1^T- 1 t^-c.
And from Lemma <ref> we know (1- ση/2)^T≤exp(-ση T/2)≤ (2c/eσ)^c (η T)^-c. Letting D_x:= 𝔼_A[F(x_0)- F_S(x_*^S)], we get
𝔼_A[F_S(x_T)- F_S(x_*^S)]≤ (2c/eσ)^c (η T)^-c D_x+ LL_f^2L_g^2/ση+ (2c/eσ)^c (η T)^-c C_f^2L_g^2D_yη
+ 2C_f^2L_g^2/σ(c/e)^cD_y β^-cT^-1∑_t= 1^T- 1 t^-c+ C_f^4L_g^5η^2/σβ + 2C_f^2L_g^2V_g/σβ.
Noting that ∑_t= 1^T- 1 t^-z= 𝒪(T^1- z) for z∈ (0, 1)∪ (1, ∞) and ∑_t= 1^T- 1 t^-1= 𝒪(log T), as long as c≠ 1 we get
𝔼_A[F_S(x_T)- F_S(x_*^S)]
= 𝒪(D_x(η T)^-c+ LL_f^2L_g^2η+ C_f^2L_g^2D_y (η T)^-cη+ C_f^2L_g^2D_y(β T)^-c+ C_f^4L_g^5C_gη^2β^-1+ C_f^2L_g^2V_gβ).
Then we get the desired result for the SCSC update. The proof is completed.
From Assumption <ref><ref> we know f_ν(g_S(·)) is L-smooth for all ν, and hence F_S(·)= 1/n∑_i= 1^n f_ν_i(g_S(·)) is also L-smooth. Thus we have
F_S(x_t+1)≤ F_S(x_t)+ ⟨∇ F_S(x_t), x_t+1- x_t⟩+ L/2x_t+1- x_t^2
≤ F_S(x_t)- η_t⟨∇ F_S(x_t), ∇ g_ω_j_t(x_t)∇ f_ν_i_t(g_S(x_t))⟩+ Lη_t^2/2∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1)^2
-η_t⟨∇ F_S(x_t), ∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1)- ∇ g_ω_j_t(x_t)∇ f_ν_i_t(g_S(x_t))⟩.
Let ℱ_t be the σ-field generated by {ω_j_0, …, ω_j_t-1, ν_i_0, …, ν_i_t- 1}. Taking expectation with respect to the randomness of the algorithm conditioned on ℱ_t, we have
𝔼_A[F_S(x_t+1)|ℱ_t]≤ F_S(x_t)- η_t∇ F_S(x_t)^2+ LL_f^2L_g^2/2η_t^2
+η_t𝔼_A[⟨∇ F_S(x_t), ∇ g_ω_j_t(x_t)∇ f_ν_i_t(g_S(x_t))- ∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1)⟩|ℱ_t].
Note that from the Cauchy-Schwarz inequality, Young's inequality, Assumption <ref><ref> and <ref><ref> we have
𝔼_A[⟨∇ F_S(x_t), ∇ g_ω_j_t(x_t)∇ f_ν_i_t(g_S(x_t))- ∇ g_ω_j_t(x_t)∇ f_ν_i_t(y_t+1)⟩|ℱ_t]
≤ C_f∇ F_S(x_t)𝔼_A[∇ g_ω_j_t(x_t)y_t+1- g_S(x_t)| ℱ_t]
≤ C_f(L_g^2∇ F_S(x_t)^2/2γ_t+ γ_t/2𝔼_A[y_t+1- g_S(x_t)^2|ℱ_t])
= C_fL_g^2/2γ_t∇ F_S(x_t)^2+ C_fγ_t/2𝔼_A[y_t+1- g_S(x_t)^2|ℱ_t]
for any γ_t> 0. Thus we get
𝔼_A[F_S(x_t+1)|ℱ_t] ≤ F_S(x_t)- η_t(1- C_fL_g^2/2γ_t)∇ F_S(x_t)^2+ LL_f^2L_g^2/2η_t^2
+ C_fγ_tη_t/2𝔼_A[y_t+1- g_S(x_t)^2|ℱ_t].
Setting γ_t= C_fL_g^2, we have
𝔼_A[F_S(x_t+1)|ℱ_t]≤ F_S(x_t)- η_t/2∇ F_S(x_t)^2+ LL_f^2L_g^2/2η_t^2+ C_f^2L_g^2η_t/2𝔼_A[y_t+1- g_S(x_t)^2|ℱ_t].
We have completed the proof.
§.§ Generalization
We first present the proof for the SCGD update.
From (<ref>) we get
𝔼_A[x_T- x_T^k, ν]+ 4𝔼_A[x_T- x_T^l, ω]
≤ 40C_fL_gηsup_S∑_j=0^T-1(1-ηLσ/L+ σ)^T-j(𝔼_A[ y_j+1-g_S(x_j)^2 ] )^1/2+ 20L_f√( C_gL+σ/L σ)√(η)
+2L_fL_g√(L+σ/Lσ)√(η/n)
+4L_gL_f(L+σ)/ nLσ+ 8L_fL_g√(L+σ/Lσ)√(η/m)+48L_gL_f(L+σ)/m Lσ.
Plugging (<ref>) into the above inequality, we get
𝔼_A[x_T- x_T^k, ν]+ 4𝔼_A[x_T- x_T^l, ω]
≤ 40C_f L_g √(L_g)L_gL_f(L+ σ)/Lση/β
+ 40C_fL_g √(2V_g)(L+ σ)/Lσ√(β) + 40C_fL_g(c/e)^c/2D_y(L+ σ)/Lσ T^-c/2β^-c/2
+2L_fL_g√(L+σ/Lσ)√(η/n)+ 4L_gL_f(L+σ)/ nLσ+ 8L_fL_g√(L+σ/Lσ)√(η/m)+48L_gL_f(L+σ)/m Lσ.
Using Theorem <ref>, we have
𝔼_S, A[F(x_T)- F_S(x_T)]
≤ 40C_f √(L_g)L_g^3L_f^2(L+ σ)/Lση/β+ 40C_fL_g^2L_f √(2V_g)(L+ σ)/Lσ√(β)
+ 40C_fL_g^2L_f(c/e)^c/2D_y(L+ σ)/Lσ T^-c/2β^-c/2+2L_f^2L_g^2√(L+σ/Lσ)√(η/n)+ 4L_g^2L_f^2(L+σ)/ nLσ
+ 8L_f^2L_g^2√(L+σ/Lσ)√(η/m)+48L_g^2L_f^2(L+σ)/m Lσ +L_f√(𝔼_S, A[Var_ω(g_ω(x_T))]/m).
From Theorem <ref> we get
𝔼_A[F_S(x_T)- F_S(x_*^S)]≤ (2c/eσ)^c (η T)^-c D_x+ LL_f^2L_g^2/ση+ (2c/eσ)^c C_f^2L_g^2D_y(η T)^-cη
+ 2C_f^2L_g^2/σ(c/e)^cD_y β^-cT^-c(1- b)+ C_f^4L_g^5η^2/σβ^2 + 2C_f^2L_g^2V_g/σβ.
Adding (<ref>) with (<ref>), setting η= T^-a, β= T^-b with a, b∈ (0, 1], and using the fact F_S(x_*^S)≤ F_S(x_*), we get
𝔼_S, A[F(A(S))- F(x_*)]
≤ 40C_f √(L_g)L_g^3L_f^2(L+ σ)/Lσ T^b- a+ 40C_fL_g^2L_f √(2V_g)(L+ σ)/Lσ T^-b/2
+ 40C_fL_g^2L_f(c/e)^c/2D_y(L+ σ)/Lσ T^c/2(b- 1) +2L_f^2L_g^2√(L+σ/Lσ)1/√(n) T^-a/2+ 4L_g^2L_f^2(L+σ)/ nLσ
+ 8L_f^2L_g^2√(L+σ/Lσ)1/√(m) T^-a/2+48L_g^2L_f^2(L+σ)/m Lσ +L_f√(𝔼_S, A[Var_ω(g_ω(x_T))]/m)
+ (2c/eσ)^c+1D_x T^-c(1- a)+ LL_f^2L_g^2/σ T^-a+ (2c/eσ)^c C_f^2L_g^2D_y T^-c(1- a)- a
+ 2C_f^2L_g^2/σ( c/e)^c T^-c(1- b)+ 2C_f^2L_gV_g/σ T^-b+ C_f^4L_g^5/σ T^2b- 2a.
Thus we have
𝔼_S, A[F(A(S)) - F(x_*) ]
=𝒪(T^b- a+ T^-b/2+ T^c/2(b- 1)+ n^-1+ n^-1/2T^-a/2+ m^-1.
.+ m^-1/2 T^-a/2+ T^-c(1- a)+ T^-a+ T^-c(1- a)- a+ T^-c(1- b)+ T^-b+ T^2b- 2a).
Since a, b∈ (0, 1], setting c= 3, the dominating terms are
𝒪(T^b- a), 𝒪(T^-b/2), 𝒪(T^3/2(b- 1)), 𝒪(T^-a/2), 𝒪(T^3(a- 1)).
Setting a= 9/10 and b= 3/5 yields
𝔼_S, A[F(A(S)) - F(x_*) ]= 𝒪(T^-3/10).
Setting T= 𝒪(max{n^5/3, m^5/3}) yields the following bound
𝔼_S, A[F(A(S)) - F(x_*) ]= 𝒪(1/√(n)+ 1/√(m)).
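As before, the exponent arithmetic can be double-checked with a few lines of exact rational arithmetic (a throwaway script, not part of the proof): with c= 3, the choice a= 9/10, b= 3/5 equalizes the three slowest exponents at -3/10, leaves the remaining two of lower order, and T= n^5/3 converts T^-3/10 into n^-1/2:

from fractions import Fraction as F

a, b, c = F(9, 10), F(3, 5), 3
print(b - a, -b/2, 3*(a - 1))   # -3/10 -3/10 -3/10
print(F(c, 2) * (b - 1), -a/2)  # -3/5 -9/20, both dominated
print(F(5, 3) * (-F(3, 10)))    # -1/2, so T = n^{5/3} gives n^{-1/2}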
Then we get the desired result for the SCGD update. Next we present the proof for the SCSC update. With the same derivation as the SCGD case, we get
𝔼_A[x_T- x_T^k, ν]+ 4𝔼_A[x_T- x_T^l, ω]
≤ 40C_f L_g √(L_g)L_gL_f(L+ σ)/Lση/√(β)
+ 40C_fL_g √(2V_g)(L+ σ)/Lσ√(β)+ 40C_fL_g(c/e)^c/2D_y(L+ σ)/Lσ T^-c/2β^-c/2
+2L_fL_g√(L+σ/Lσ)√(η/n)+ 4L_gL_f(L+σ)/ nLσ+ 8L_fL_g√(L+σ/Lσ)√(η/m)+48L_gL_f(L+σ)/m Lσ.
Using Theorem <ref>, we have
𝔼_S, A[F(x_T)- F_S(x_T)]
≤ 40C_f √(L_g)L_g^3L_f^2(L+ σ)/Lση/√(β)+ 40C_fL_g^2L_f √(2V_g)(L+ σ)/Lσ√(β)
+ 40C_fL_g^2L_f(c/e)^c/2D_y(L+ σ)/Lσ T^-c/2β^-c/2+2L_f^2L_g^2√(L+σ/Lσ)√(η/n)+ 4L_g^2L_f^2(L+σ)/ nLσ
+ 8L_f^2L_g^2√(L+σ/Lσ)√(η/m)+48L_g^2L_f^2(L+σ)/m Lσ +L_f√(𝔼_S, A[Var_ω(g_ω(x_T))]/m).
From Theorem <ref> we get
𝔼_A[F_S(x_T)- F_S(x_*^S)]≤ (2c/eσ)^c (η T)^-c D_x+ LL_f^2L_g^2/ση+ (2c/eσ)^c C_f^2L_g^2D_y(η T)^-cη
+ 2C_f^2L_g^2/σ(c/e)^cD_y β^-cT^-c(1- b)+ C_f^4L_g^5η^2/σβ + 2C_f^2L_g^2V_g/σβ.
Adding (<ref>) with (<ref>), setting η= T^-a, β= T^-b with a, b∈ (0, 1], and using the fact F_S(x_*^S)≤ F_S(x_*), we get
𝔼_S, A[F(A(S))- F(x_*)]
≤ 40C_f √(L_g)L_g^3L_f^2(L+ σ)/Lσ T^b/2- a+ 40C_fL_g^2L_f √(2V_g)(L+ σ)/Lσ T^-b/2
+ 40C_fL_g^2L_f(c/e)^c/2D_y(L+ σ)/Lσ T^c/2(b- 1)+2L_f^2L_g^2√(L+σ/Lσ)1/√(n) T^-a/2+ 4L_g^2L_f^2(L+σ)/ nLσ
+ 8L_f^2L_g^2√(L+σ/Lσ)1/√(m) T^-a/2+48L_g^2L_f^2(L+σ)/m Lσ +L_f√(𝔼_S, A[Var_ω(g_ω(x_T))]/m)
+ (2c/eσ)^c+1D_x T^-c(1- a) + LL_f^2L_g^2/σ T^-a+ (2c/eσ)^c C_f^2L_g^2D_y T^-c(1- a)- a
+ 2C_f^2L_g^2/σ( c/e)^c T^-c(1- b)+ 2C_f^2L_gV_g/σ T^-b+ C_f^4L_g^5/σ T^b- 2a.
Thus we have
𝔼_S, A[F(A(S)) - F(x_*) ]
= 𝒪(T^b/2- a+ T^-b/2+ T^c/2(b- 1)+ n^-1+ n^-1/2T^-a/2+ m^-1.
.+ m^-1/2 T^-a/2+ T^-c(1- a)+ T^-a+ T^-c(1- a)- a+ T^-c(1- b)+ T^-b+ T^b- 2a).
Since a, b∈ (0, 1], setting c= 3, the dominating terms are
𝒪(T^b/2- a), 𝒪(T^-b/2), 𝒪(T^3/2(b- 1)), 𝒪(T^-a/2), and 𝒪(T^3(a- 1)).
Setting a= b= 6/7 yields
𝔼_S, A[F(A(S)) - F(x_*) ]= 𝒪(T^-3/7).
Setting T= 𝒪(max{n^7/6, m^7/6}) yields the following bound
𝔼_S, A[F(A(S)) - F(x_*) ]= 𝒪(1/√(n)+ 1/√(m)).
Then we get the desired result for the SCSC update. The proof is completed.
|
http://arxiv.org/abs/2307.01970v1
|
20230705004400
|
Understanding Resolution of Multi-Language Bugs: An Empirical Study on Apache Projects
|
[
"Zengyang Li",
"Wenshuo Wang",
"Sicheng Wang",
"Peng Liang",
"Ran Mo"
] |
cs.SE
|
[
"cs.SE"
] |
Understanding Resolution of Multi-Language Bugs:
An Empirical Study on Apache Projects
Zengyang Li, Wenshuo Wang, Sicheng Wang, Peng Liang, Ran Mo
Zengyang Li, Wenshuo Wang, Sicheng Wang, and Ran Mo are with the School of Computer Science & Hubei Provincial Key Laboratory of Artificial Intelligence and Smart Learning, Central China Normal University, Wuhan, China. Peng Liang is with the School of Computer Science, Wuhan University, Wuhan, China.
{zengyangli, moran}@ccnu.edu.cn, {scwang1998, wenshuowang}@mails.ccnu.edu.cn, liangp@whu.edu.cn
This work was funded by the Natural Science Foundation of Hubei Province of China under Grant No. 2021CFB577, the National Natural Science Foundation of China under Grant Nos. 62176099 and 62172311, and the Knowledge Innovation Program of Wuhan-Shuguang Project under Grant No. 2022010801020280.
August 1, 2023
Background: In modern software systems, more and more systems are written in multiple programming languages (PLs). There is no comprehensive investigation on the phenomenon of multi-programming-language (MPL) bugs, whose resolution involves source files written in multiple PLs.
Aim: This work investigated the characteristics of bug resolution in MPL software systems and explored the reasons why bug resolution involves multiple PLs.
Method: We conducted an empirical study on 54 MPL projects selected from 655 Apache OSS projects, of which 66,932 bugs were analyzed.
Results: (1) the percentage of MPL bugs (MPLBs) in the selected projects ranges from 0.17% to 42.26%, and the percentage of MPLBs for all projects as a whole is 10.01%; (2) 95.0% and 4.5% of all the MPLBs involve source files written in 2 and 3 PLs, respectively; (3) the change complexity resolution characteristics of MPLBs tend to be higher than those of single-programming-language bugs (SPLBs); (4) the open time of MPLBs is significantly longer than that of SPLBs by 19.52% to 529.57% for 9 PL combinations; (5) the reopen rate of bugs involving the PL combination of JavaScript and Python reaches 20.66%; (6) we found 6 causes why bug resolution involves multiple PLs and identified 5 cross-language calling mechanisms.
Conclusion: MPLBs are related to increased development difficulty.
Multi-Programming-Language Software System, Bug Resolution Characteristic, Open Source Software
§ INTRODUCTION
In modern software systems, more and more systems are developed in multiple programming languages (PLs) <cit.>. We call such systems multi-programming-language (MPL) software systems, which can take advantage of each PL and reuse existing code and libraries to meet various quality requirements and to improve software development efficiency <cit.>. During the development of an MPL system, a bug may be fixed by one or more bug-fixing commits in which source files written in multiple PLs are involved, and such a bug is called an MPL bug (MPLB). In contrast, if a bug is fixed by one or more commits in which source files in the same PL are involved, then such a bug is called a single-PL bug (SPLB).
MPLBs may impact a software system at the architecture level, given that the resolution of an MPLB may involve comprehending, modifying, debugging, and testing inter-language communications and inter-component interactions.
To our knowledge, there are only two works closely related to MPLBs and their impact on software development. Li et al. explored the impact of MPL commits (MPLCs) on development difficulty and software quality in 18 Apache MPL OSS projects <cit.>. They found that the issues (including new features, bugs, improvements, etc.) fixed in MPLCs take longer to fix than issues fixed in non-MPLCs, and there are no significant differences between MPLCs and non-MPLCs with respect to the impact on issue reopening in most projects <cit.>.
However, MPLBs were not the core research objects in that work, and thus no in-depth or comprehensive investigation on MPLBs was performed.
In the other work, Li et al. looked into 1,497 bugs with 406 MPLBs included in 3 MPL deep learning frameworks <cit.>. Although some bug resolution characteristics of MPLBs were investigated in that work, the numbers of MPLBs and projects are relatively small, and PL combinations involved and in-depth analysis of causes for involving multiple PLs in MPLB resolution were not considered.
To understand the resolution of MPLBs, we comprehensively investigated the bug resolution characteristics of Apache OSS projects in an MPL context and explored the reasons why bug resolution involves multiple PLs. We conducted a large-scale empirical study on 54 non-trivial MPL projects selected from 655 Apache OSS projects, of which 66,932 bugs were analyzed, including 6,700 MPLBs.
The main contributions of this paper are: (1) This work is the first attempt to comprehensively explore the phenomenon of MPLBs in real-world projects.
(2) This work presents a large-scale empirical study on 54 non-trivial Apache MPL OSS projects.
(3) This work investigated resolution characteristics of MPLBs with 13 PL combinations in terms of change complexity, open time, and reopen.
(4) This work explored the reasons why bug resolution involves multiple PLs and the cross-language calling mechanisms used.
§ RELATED WORK
The related work centers around the phenomenon of MPL software systems and bug resolution characteristics.
§.§ Phenomenon of MPL Software Systems
Several studies investigated the practice of PL selection in MPL systems. In 2014, Tomassetti et al. found that 96% of projects used at least two PLs, usually a combination of PLs, such as C/C++, Makefile and HTML, CSS, JavaScript, and over 50% of projects used at least two general purpose languages (GPLs) <cit.>. In 2015, Mayer et al. identified three language ecosystems that are associated with the GPL: (a) Shell and Make, (b) XML, and (c) HTML and CSS <cit.>. Similar to the study by Bissyandé et al. <cit.>, in 2021, Li et al. studied the PL selection and usage of multilingual OSS projects on GitHub spanning 10 years (2010-2019) <cit.>, and they found that there exists a verifiable correlation between software domains and sets of mainstream PLs. Studies also show that the increasing adoption of multiple PLs is to take advantage of different PLs and meet development requirements <cit.>. Although in our work we also explored PLs used in Apache MPL OSS projects, we paid more attention to resolution characteristics of bugs involving different PL combinations.
In 2017, Mayer et al. launched a survey of 139 professional software developers <cit.>. Respondents saw benefits of multi-language development for the motivation of developers and the translation of requirements, but the existence of cross-language links in many cases prevents them from making necessary changes to the source code.
The presence of cross-language links has prompted researchers to explore ways to recognize and mitigate the associated threats of multiple PLs.
Some studies investigated cross-language relations between specific PL combinations, while others attempted to develop a taxonomy of cross-language links. Cross-language analysis between Java and C (i.e., using Java Native Interface (JNI)) has been studied relatively extensively. In 2020, Grichi et al. proposed two approaches for multi-language dependency analysis of JNI systems <cit.>: one approach is to use JNI rules to statically parse a software system and generate a model that incorporates all of its constituent parts; the other approach is to analyze the MPL source code as it changes together, at the commit level. There are some similar JNI studies <cit.>. There was also cross-language analysis between Python and C <cit.>, which uses multi-language static syntax analysis for Python and C to parse Python programs with native C modules.
In 2017, Mayer analyzed 22 open-source frameworks and classified the cross-language linking mechanisms among GPLs and DSLs, resulting in a taxonomy of cross-language linking mechanisms <cit.>. In 2022, Li et al. defined four language interface categories to represent the multilingual calling mechanisms between languages <cit.>. In contrast, our work also defines cross-language calling mechanisms, and gives real-world examples to help stakeholders better understand these mechanisms.
§.§ Bug Resolution Characteristics
There are a number of studies related to bug resolution characteristics.
In 2011, Bhattacharya et al. analyzed four OSS projects written in C and C++ <cit.>. They used the number of lines of code changed during bug resolution to measure maintainability and found that using C++ improved software quality and reduced maintenance effort, and the code base is shifting from C to C++.
In 2019, Zhang et al. examined three bug-resolution characteristics of 10 GPLs, namely SLOC changed, files touched, and bug-resolution time <cit.>. Their results revealed that bug-resolution time was shorter for projects written in Java compared to other PLs. In contrast, for projects written in Ruby, the bug-resolution time was longer. Their study focused on bug resolution from a single-language perspective, whereas our work focuses on the resolution characteristics of MPLBs, exploring the potential impact of MPLBs on software development through six bug resolution characteristics.
In 2020, Li et al. investigated whether bug severity is in line with code change complexity of bug resolution, which was measured by four bug resolution characteristics for Java projects: the number of modified lines of code, the number of modified source files, the number of modified packages, and the entropy of the change process <cit.>. They found that bug severity was not consistently in line with the code change complexity of bug resolution. In 2023, Li et al. used open time, three change complexity measures of pull requests, and two communication complexity measures in bug resolution, to explore the impact of bugs (including MPLBs) on development in three deep learning frameworks <cit.>.
Building on the aforementioned studies on bug resolution characteristics, our study incorporates the bug resolution characteristics of 4 code change complexity measures and open time. We specifically include the resolution characteristic of bug reopen due to its key impact on software maintainability.
§ STUDY DESIGN
We designed and report this empirical study following the guidelines proposed by Runeson and Höst <cit.>.
§.§ Objective and Research Questions
The goal of this study described using the Goal-Question-Metric (GQM) approach <cit.> is: to analyze bugs as well as their corresponding modified source files for the purpose of exploration with respect to the state of MPLBs as well as their resolution characteristics, from the point of view of software developers in the context of MPL OSS development.
Based on the above-mentioned goal, we formulated 5 research questions (RQs) as shown in TABLE <ref>. RQ1 and RQ2 investigate the state of MPLBs. RQ3 and RQ4 center around the resolution characteristics of MPLBs and explore the possible impact of MPLBs on development difficulty. RQ5 explores possible reasons why the resolution of MPLBs involves multiple PLs.
§.§ Project Selection
In this study, we only investigated Apache MPL OSS projects, since the links between issues and corresponding commits tend to be well recorded in the commit messages of those projects. For selecting each project included in our study, we applied the following inclusion criteria:
C1: The project uses JIRA <cit.> as its issue tracking system.
This criterion is set to ensure that the issues from different projects have the same format so that issues can be handled in the same way. It is convenient to export issue data through the REST API provided by JIRA <cit.>.
C2: The project has more than 1000 issue records in JIRA. This criterion was set to ensure the dataset was big enough to be statistically analyzed.
C3: At least 2 out of the 18 PLs listed in TABLE <ref> are used in the project. All the 18 listed PLs are mainstream general-purpose PLs, which are adopted from <cit.>. The percentage of each PL is greater than 5%, and the percentage of the main PL does not exceed 90%.
§.§ Data Collection
§.§.§ Data Items to be Collected
To answer the RQs, we took a bug as the unit of analysis, and the data items to be collected are listed in Table <ref>. All the data items to be collected except for D2 and D9 are straightforward; thus, we only explain the D2 (IsMPLB) and D9 (Entropy) data items in detail.
First, we explain data item D2. A bug is fixed by one or more commits. There are two types of commits: (1) MPL commit (MPLC), in which source files in multiple PLs are modified, and (2) Single-PL commit (SPLC), in which source files in the same single PL are modified.
A bug is classified as an MPLB if it satisfies one of the following conditions: 1) the bug is resolved in one or more commits, in which at least one commit is an MPLC; 2) the bug is resolved in multiple SPLCs involving different PLs.
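For illustration, these two conditions can be expressed as a minimal check over per-commit PL sets (a sketch; the input representation is an assumption, not the tooling used in this study):

from typing import List, Set

def is_mplb(commit_pls: List[Set[str]]) -> bool:
    # commit_pls: for each bug-fixing commit, the set of PLs of the
    # source files modified in that commit (assumed representation).
    # Condition 1: at least one fixing commit is itself an MPLC.
    if any(len(pls) > 1 for pls in commit_pls):
        return True
    # Condition 2: several SPLCs involve different PLs.
    distinct = set().union(*commit_pls) if commit_pls else set()
    return len(distinct) > 1

# Example: two single-PL commits in different PLs -> MPLB
assert is_mplb([{"Java"}, {"Python"}])
assert not is_mplb([{"Java"}, {"Java"}])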
Then, we explain the definition of the entropy of the modified source files in a bug resolution (i.e., D9) <cit.>.
Suppose that the modified source files of commit c are {f_1, f_2, ⋯, f_n}, and file f_i (1≤ i≤ n) was modified in t_i commits during a period of time before the commit. Let p_i = t_i/∑_i=1^n t_i. Then, the entropy H(m) = -∑_i=1^m p_i log_2 p_i, where m indicates the number of files modified in the commit(s) to fix the bug.
Since the number of modified source files differs between different periods, we need to normalize the entropy to be comparable. Given that H(m) achieves the maximum of log_2 m when p_i = 1/m (1≤ i≤ m), the normalized entropy is H(m)/log_2 m when m > 1, and 0 when m = 1.
In this study, the period is set to 60 days (including the day when commit c happened), which is chosen according to <cit.>.
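A minimal computation of the normalized entropy follows, assuming the modification history is given as a per-file commit count over the 60-day window (the input format is an assumption for illustration):

import math
from collections import Counter

def normalized_entropy(modified_files, history):
    # history: Counter mapping file -> number of commits touching it
    # in the 60-day window before the bug-fixing commit.
    m = len(modified_files)
    if m <= 1:
        return 0.0
    touches = [history[f] for f in modified_files]
    total = sum(touches)
    probs = [t / total for t in touches if t > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(m)  # normalize by the maximum, log2(m)

history = Counter({"a.java": 3, "b.py": 1})
print(normalized_entropy(["a.java", "b.py"], history))  # ~0.811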
§.§.§ Data Collection Procedure
The data collection procedure for each selected project consists of six steps.
Step 1: Export bug reports. We used JIRA REST API to export all the bugs of the project.
Step 2: Store the exported bug reports from Step 1 in MySQL.
Step 3: Clone the Git repository of the project from GitHub.
Step 4: Extract the commit records from the project's Git repository into a text file, which is formatted for further parsing. In this step, we only exported the commit records of the master branch and the commit records that were merged into the master branch. The commit records corresponding to MERGE operations were excluded, because they duplicate the merged commit records.
Step 5: Match each bug report with corresponding commit record(s). If a commit resolves a certain bug, the committer often explicitly mentions the bug ID in the commit message. Thus, a bug can be matched with corresponding commit record(s) through the bug ID, as illustrated after these steps.
Step 6: Calculate data items listed in TABLE <ref> for each bug.
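As an illustration of Step 5, a bug can be linked to commits by scanning commit messages for JIRA issue keys (a sketch; the key pattern and data layout are assumptions):

import re

BUG_ID = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")  # e.g., "AMBARI-14948"

def match_commits_to_bugs(commits, bug_ids):
    # commits: iterable of (sha, message); bug_ids: known bug keys from JIRA.
    links = {bid: [] for bid in bug_ids}
    for sha, message in commits:
        for key in BUG_ID.findall(message):
            if key in links:
                links[key].append(sha)
    return links

commits = [("a1b2c3", "AMBARI-14948: Config consistency checker")]
print(match_commits_to_bugs(commits, {"AMBARI-14948"}))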
§.§ Data Analysis
The answers to RQ1 and RQ2 can be obtained by descriptive statistics. To answer RQ3, the resolution characteristics of MPLBs were compared with those of SPLBs in order to observe the impact of the introduction of multiple PLs on bug resolution.
To answer RQ4, we first present the resolution characteristics of MPLBs with different PL combinations, and then compare the resolution characteristics of MPLBs involving different PL combinations with those of SPLBs involving a single PL from the corresponding PL combinations.
To do the comparisons for RQ3 and RQ4, we performed the Mann-Whitney U test to check whether two sample groups of data are significantly different <cit.>. We also conducted the Chi-squared test, and the two variables of a Chi-squared test are: whether a fixed bug was reopened or not and whether it is an MPLB or not.
A test result is considered significant at the 0.05 level; for the Chi-squared test, a significant result means that the two variables are associated. Both tests are sketched below.
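Both tests are available in standard libraries; a minimal sketch with illustrative numbers (not data from this study):

from scipy.stats import mannwhitneyu, chi2_contingency

# Mann-Whitney U: are two samples (e.g., open time in days) different?
mplb_ot = [12.0, 30.5, 7.2, 44.0]
splb_ot = [3.1, 9.0, 5.5, 2.4]
u_stat, p_u = mannwhitneyu(mplb_ot, splb_ot, alternative="two-sided")

# Chi-squared on a 2x2 table: rows {MPLB, SPLB}, columns {reopened, not}.
table = [[40, 360], [150, 2850]]
chi2, p_chi, dof, expected = chi2_contingency(table)
print(p_u < 0.05, p_chi < 0.05)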
To answer RQ5,
we needed to obtain the category of the resolution of each MPLB and the cross-language calling mechanism used in the source files modified. According to <cit.>, the proportion of commits involving 3 or more PLs is rather small; thus, we divided all the MPLBs into 3 parts, i.e., MPLBs whose resolution involves 2 PLs, 3 PLs, and 4 or more PLs.
If the number of any of the 3 parts of the MPLBs is not greater than 500, we conducted manual analysis on all the MPLBs; otherwise, we manually analyzed a sample of the MPLBs, and the sample size was determined by taking a 99% confidence level and a 5% margin of error <cit.>.
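The sample size can be reproduced with Cochran's formula plus a finite-population correction (an assumed method that matches the reported numbers; z = 2.576 for 99% confidence):

import math

def sample_size(population, z=2.576, margin=0.05, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / margin ** 2  # infinite-population size
    return round(n0 / (1 + (n0 - 1) / population))  # finite correction

print(sample_size(6370))  # -> 601, the sampled MPLBs involving 2 PLs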
The first three authors worked together to label the causes of bug resolution as tags, and each tag represents a category. The process of manual analysis on each MPLB is described as follows:
Step 1: Check the bug summary. We first read the bug summary to get a rough idea on what the MPLB is about.
Step 2: Check the bug-fixing commit(s). We carefully read the commit messages and code changes of the bug-fixing commit(s) of the MPLB. If the tag can be determined, then the analysis of this MPLB is finished; otherwise, go to the next step.
Step 3: Check the description of the MPLB. We synthesized the description with the knowledge obtained in Step 2. If the tag can be determined, then the analysis of this bug is finished; otherwise, go to the next step.
Step 4: Repeat Steps 2 and 3 till the tag can be determined.
Then, we describe the classification process of all the sampled MPLBs as follows.
Step S1: Construct a preliminary set of tags. The second and third authors independently labeled 100 MPLBs out of the sampled MPLBs. Then the first author joined them to discuss and address any inconsistencies, and to construct a preliminary set of tags for the MPLBs.
Step S2: Conduct a pilot MPLB labelling. The second and third authors labeled 100 MPLBs with the preliminary set of tags independently. In this step, new MPLB tags might arise, and existing tags might be merged or removed. If there was any disagreement on MPLB labelling, they discussed with the first author to get a consensus.
Fleiss Kappa was used to measure the consistency between the MPLB labelling results of the two authors <cit.>; a computation sketch is given after these steps. If the Kappa value is less than 0.75, the two authors needed to discuss to resolve disagreements, and randomly selected another 100 MPLBs for another round of labelling. This iterative labelling process would stop when the Kappa value exceeds 0.75, indicating substantial agreement.
Step S3: Classify the remaining MPLBs. The second author labelled the remaining MPLBs with the updated set of tags.
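A rough illustration of the agreement computation in Step S2 (the statsmodels call and the toy ratings are assumptions for illustration):

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# One row per labelled MPLB, one column per rater; values are tag indices.
ratings = np.array([[0, 0], [1, 1], [2, 1], [0, 0]])
table, _ = aggregate_raters(ratings)
print(fleiss_kappa(table))  # proceed to Step S3 once kappa > 0.75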
After labelling the sampled MPLBs with causes for involving multiple PLs, we further identified the cross-language calling mechanisms used by the modified source files in different PLs during bug resolution. This may help further understand the underlying reasons for involving multiple PLs in bug resolution <cit.>. Since the cross-language calling mechanism used in an MPLB resolution is unambiguous, it could be identified by analyzing the source files.
§ STUDY RESULTS
Based on the criteria defined in Section <ref>, 655 Apache projects were obtained by criterion C1, 184 projects were left by C2, and 54 projects were finally included for data extraction and analysis according to C3.
The basic information of the 54 projects is shown in an online table <cit.>. We then collected the data items described in Table <ref> from the 54 projects, and the data were collected from July to August 2022.
§.§ Proportion of MPLBs in the Selected Projects (RQ1)
Fig. <ref> shows the percentage of the number of MPLBs over the number of bugs explicitly associated with commits in each project. In this figure, for each project, the three numbers denote the number of bugs that are explicitly associated with commits, the number of MPLBs, and the percentage of the latter over the former. There are 6,700 MPLBs over 66,932 bugs in all the selected Apache projects, and the percentage of MPLBs is 10.01% when considering the bugs of all projects as a whole. The percentage of MPLBs in each project varies from 0.17% (project Falcon) to 42.26% (project CarbonData).
§.§ Number of PLs Used in the Resolution of MPLBs (RQ2)
We explored the distribution of the 18 PLs used in the resolution of MPLBs, and the results are presented in an online table <cit.>. Taking all the selected projects as a whole, 95.07% and 4.51% of the MPLBs involve source files in 2 and 3 PLs, respectively; this means that the vast majority of MPLBs involve source files written in only 2 PLs, and it is uncommon to modify source files in 4 or more PLs in MPLB resolution.
§.§ Resolution Characteristics of MPLBs (RQ3)
To better understand the resolution characteristics of MPLBs, we conducted a comparison of the resolution characteristics for the 6,700 MPLBs and 60,232 SPLBs, which is summarized in Table <ref>. Columns MPLB and SPLB present the medians of LOCM, NOFM, NODM, OT (in days), Entropy, and Reopen Rate in the corresponding rows of the MPLBs and SPLBs, respectively.
Column %Diff presents the percentage of difference between MPLB and SPLB, i.e.,
%Diff=(MPLB-SPLB)/SPLB×100% .
Column p-value reports the results of the Mann-Whitney U tests on the first five resolution characteristics between MPLBs and SPLBs. For Reopen Rate, p-value reports the result of the Chi-squared test, and the test results indicate whether a bug having been reopened or not is related to whether the bug resolution involves multiple PLs.
As shown in Table <ref>, the LOCM, NOFM, NODM, OT, and Entropy of MPLB resolution are significantly larger than those of SPLB resolution. In addition, there is a significant connection between bug reopen and resolution involving multiple PLs.
§.§ Resolution Characteristics of MPLBs with Different PL Combinations (RQ4)
We grouped all MPLBs by different PL combinations, and obtained 104 PL combinations in total. To ensure that the dataset for each PL combination is big enough for statistical analysis, only the PL combinations with more than 50 MPLBs are considered. The resulting 13 PL combinations with more than 50 MPLBs are shown in Table <ref>, where #MPLB denotes the number of MPLBs whose bug-fixing commits involve the corresponding PL combination.
Based on the 13 PL combinations shown in Table <ref>, we analyzed bug resolution characteristics for the bugs of each PL combination, including the four measures for change complexity of bug-fixing commits (i.e., LOCM, NOFM, NODM, and Entropy), OT, and Reopen Rate of bugs. The distributions of the four change complexity measures are shown in Fig. <ref>, and the distributions of Reopen Rate and OT are shown in Fig. <ref>. In these two figures, the horizontal coordinates indicate the 13 PL combinations in Table <ref> and the vertical coordinates indicate the four change complexity measures, the values of Reopen Rate and OT, respectively.
In the rest of this section, we first present the results on the resolution characteristics of MPLBs with different PL combinations, and then compare the resolution characteristics of MPLBs and those of SPLBs with different PL combinations.
§.§.§ Resolution characteristics of MPLBs
From Fig. <ref> and Fig. <ref>, we can see that the distributions of LOCM, NOFM, NODM, and OT of MPLBs differ apparently between different PL combinations involved.
The Reopen Rate of MPLBs are also obviously different between different PL combinations involved.
However, Fig. <ref>
shows relatively large Entropy medians of the MPLBs involving all the 13 PL combinations, given that all the Entropy medians are either near 0.900 or greater than 0.900.
In Fig. <ref> and Fig. <ref>, it can be observed that PL combinations involving Java (Java, Scala; Java, Python; Java, JavaScript; C#, Java) have a larger median LOCM, NOFM, and NODM than other PL combinations, and tend to have a lower median OT.
PL combinations involving C/C++ (C/C++, Python; C/C++, Java) demonstrate a greater median OT than other PL combinations, and tend to have a smaller median LOCM, NOFM, and NODM.
The combinations of three PLs (Java, JavaScript, Python; C/C++, Java, Python) show a relatively large median of LOCM, NOFM, and NODM.
§.§.§ Comparison of Resolution Characteristics between MPLBs and SPLBs
Next, we investigated the differences on the resolution characteristics between MPLBs and SPLBs.
In Table <ref>, MedianSPLB denotes the maximum median of the corresponding resolution characteristic of the SPLBs involving each single PL in the corresponding PL combination, and MedianMPLB denotes the median of the corresponding resolution characteristic of the MPLBs involving the corresponding PL combination. %Diff represents the percentage of difference between MedianMPLB and MedianSPLB, i.e.,
%Diff=(MedianMPLB-MedianSPLB)/MedianSPLB×100% .
#RMPLB and #RSPLB denote the numbers of reopened bugs out of MPLBs and SPLBs, respectively. #MPLB and #SPLB denote the numbers of MPLBs and SPLBs, respectively. %RMPLB and %RSPLB denote the percentages of #RMPLB over #MPLB and #RSPLB over #SPLB, respectively. P-value denotes the p-value of Mann-Whitney U or Chi-squared test.
Change Complexity:
As shown in Table <ref>, the medians of the four resolution characteristics regarding change complexity for MPLBs involving the 13 PL combinations tend to be larger than that of the corresponding SPLBs. Specifically, LOCM for MPLBs is 17.07%-623.86% larger than that for SPLBs, NOFM for MPLBs is 50.00%-450.00% larger than that for SPLBs, NODM for MPLBs is 50.00%-250.00% larger than that for SPLBs, and Entropy for MPLBs is 1.31%-82.94% larger than that for SPLBs except for the PL combination of Clojure and Java. We further performed Mann-Whitney U tests on the four change complexity measures for MPLBs and SPLBs, and the p-value for each measure is less than 0.05 for each PL combination (except p-value of 0.267 for Entropy of the PL combination of Clojure and Java). It indicates that the change complexity of the resolution of MPLBs is significantly greater than that of SPLBs.
Open Time:
As presented in Table <ref>, the Mann-Whitney U test results reveal that the median OT of the MPLBs involving most (12 out of 13, 92.3%) PL combinations is not significantly shorter than that of the corresponding SPLBs.
Specifically, there are no significant differences in OT between MPLBs involving 3 PL combinations (Java, Python; C#, Java; JavaScript, Python) and the corresponding SPLBs; the median OT of the MPLBs involving one PL combination (Erlang, JavaScript) is significantly smaller than that of the corresponding SPLBs; and the median OT of MPLBs involving the remaining 9 PL combinations is significantly (from 19.52% to 529.57%) longer than that of the corresponding SPLBs.
Reopen Rate: The percentages of the reopened MPLBs and SPLBs are shown in TABLE <ref>, in which the percentages of reopened MPLBs and SPLBs range from 1.75% to 20.66% and from 4.89% to 6.28%, respectively. In addition, MPLBs involving 9 out of 13 PL combinations are not significantly associated with bug reopen, MPLBs involving one PL combination (C/C++, Python) are significantly negatively associated with bug reopen, and MPLBs involving 3 PL combinations (Java, Python; Java, JavaScript; JavaScript, Python) are significantly positively associated with bug reopen.
§.§ Causes for Involving Multiple PLs in Bug Resolution (RQ5)
To understand the causes why the resolution of MPLBs involves multiple PLs, we manually analyzed a sample set of the MPLBs (i.e., 931 MPLBs), including all 28 MPLBs involving more than 3 PLs, all 302 MPLBs involving 3 PLs, and a sampled subset of 601 MPLBs out of all 6,370 MPLBs involving 2 PLs.
After two rounds of pilot bug labelling, the Fleiss Kappa value for the labelling results of the second and third authors was 0.76, greater than the threshold 0.75.
§.§.§ Bug Resolution Categories
After several rounds of manual analysis, we identified the following 6 categories of bug resolution involving multiple PLs, as shown in Fig. <ref>:
(1) Algorithm Implementation (AI).
This category of bug resolution solves an MPLB by new algorithm (function) or code logic implementation. For example, MPLB “AMBARI-14948: Config consistency checker” is resolved by implementing the check_database() function in Java and Python.
(2) Algorithm Implementation Modifications (AIM).
This category of bug resolution solves an MPLB by modifying existing algorithm implementation, function call, etc. For instance, MPLB “SPARK-19134: Fix several sql, mllib and status api examples not working” is resolved by modifying algorithm implementation by Java, Python, and Scala.
(3) Data-related Changes (DC).
This category of bug resolution indicates an MPLB resolved through data-related changes, including but not limited to: time format, data type, type precision, and data structure changes (adding variables to the data structure). For example, MPLB “DISPATCH-284: Added a connection id to link which can be used to linked back to identity of connection" is fixed by adding a uint64_t management_id variable in C and Python.
(4) Configuration-related Changes (CRC).
This category of bug resolution indicates that an MPLB is solved via configuration-related changes to the software system, such as making configuration option (file)-related changes, fixing dependency issues between components, compatibility issues, and hardcoded issues. For example, MPLB “AMBARI-6484: Hbase RegionServer -Xmn must be configurable" is resolved by adjusting the hbase_regionserver_xmn parameter from hardcoded to adjustable in JavaScript and Python.
(5) Non-functional Modifications (NFM).
This category of bug resolution fixes an MPLB by non-functional modifications, such as specification of function/variable names, removal of dead code, optimization of code. For example, MPLB “SPARK-2739: Rename registerAsTable to registerTempTable” was assigned a Jira priority of “Blocker”; users complained that it was difficult to differentiate between registerAsTable and saveAsTable, thus registerAsTable function name was changed to registerTempTable in Java, Python, and Scala.
(6) Documentation Updates (DU).
This category of bug resolution solves an MPLB by updating documents, such as annotated documents or examples. For instance, MPLB “SPARK-32035: Inconsistent AWS environment variables in documentation" is fixed through changing annotations, i.e., changing annotation AWS_SECRET_KEY to AWS_SECRET_ACCESS_KEY. This annotation change involves source files written in Java, Python, and Scala.
§.§.§ Cross-language Calling Mechanisms
We looked further into cross-language calls to explain the causes for involving multiple PLs in bug resolution, and found 5 cross-language calling mechanisms used by the involved source files written in multiple PLs. These mechanisms are described as follows:
(1) Local Library Mechanisms (LLM). Through an LLM, code in one PL can directly call another PL's native libraries, e.g., Java Native Interface (JNI) and Dynamic-link library (DLL). We briefly introduce two LLMs in the following.
(a) JNI is a native programming interface and it allows Java code that runs inside a Java Virtual Machine (JVM) to interoperate with applications and libraries written in other PLs, such as C, C++, and assembly <cit.>. For example, MPLB “HARMONY-6642: [classlib][luni] FileInputStream doesn't close FD in native code” is resolved by implementing native library calls via JNI methods, as shown in Fig. <ref>.
(b) Cython is a PL that makes writing C extensions for the Python language as easy as Python itself <cit.>. Cython
code is translated into optimized C/C++ code (essentially DLLs) and compiled as Python extension modules. For example, to solve MPLB “ARROW-2270: [Python] ForeignBuffer doesn't tie Python object lifetime to C++ buffer lifetime”, as shown in Fig. <ref>, Foreign Buffer is defined and implemented in C++ and wrapped in Cython, which enables Python to call the Foreign Buffer across PLs.
(2) Common Runtime Mechanisms (CRM).
Code in various PLs is converted into intermediate code (e.g., bytecode for JVM or intermediate language code for .NET) that enables the code to run on the same runtime platform, allowing cross-language calls.
For example, as shown in Fig. <ref>, in the resolution of MPLB “SAMZA-940: TestProcessJob.testProcessJobKillShouldWork fails occasionally”, Scala code can call Java code since both can run on JVM.
(3) Communication Protocol Mechanisms (CPM). Cross-language calls between different PLs are accomplished through specific communication protocols, e.g., HTTP. We briefly introduce three CPMs in the following.
(a) Thrift is a cross-language service framework, which can automatically generate code in different PLs based on the service interface and data types defined in Interface Definition Language (IDL) <cit.>. Thrift enables clients and servers written in different PLs to communicate through Remote Procedure Call (RPC). For example, in the resolution of MPLB “AIRAVATA-1281: experiments returned by searchExperimentByName don't have applicationId”, 7: optional string applicationId is added to the Thrift IDL file, and Thrift automatically generates code for C++, PHP, and Java, allowing for cross-language calling via RPC.
(b) Py4J is an inter-process communication (IPC) technology widely used in Spark's pyspark library <cit.>. JVM acts as the server side of the IPC, starting a socket port to provide the service, and Python code acts as the client side of the IPC, calling the client interface provided by Py4J. For example, in the resolution of MPLB “SPARK-31710: Fail casting numeric to timestamp by default", as shown in Fig. <ref>, the method timestamp_seconds (e:Column): Column is defined in Scala, and Python code utilizes Py4J to enable cross-language calls.
(c) HTTP refers to cross-language calls between different PLs through the HTTP protocol. The client in a PL sends an HTTP GET/POST request to the server in a distinct PL, which processes the request and returns a response. This is done using a wrapped HTTP library, such as Python's Request library <cit.>. For example, in the resolution of MPLB “AMBARI-20517: make home directory check as optional in hive20 view”, as shown in Fig. <ref>, JavaScript is used to implement the new getServiceCheckPolicy() method, which uses GET requests to make a cross-language call to the service-check-policy interface in Java.
(4) Inter-language Testing (ILT).
The cause for involving multiple PLs in inter-language testing is that source files in one or more PLs (e.g., C) are modified,
while test cases written in other PLs (e.g., Python) for black-box testing need to be modified accordingly.
For example, in the resolution of MPLB “SVN-1581: svn cp url onto missing file corrupts wc”, the C source file now includes detection of logical obstructions, while the Python source file incorporates test cases for the corresponding black-box tests.
(5) MPL Definition and Implementation (MDI).
Source files in multiple PLs are modified for the same reason, since there is a link between MPL modifications, such as implementation or modification of the same algorithm in multiple PLs, unified definition or modification of data structures in multiple PLs. For example, in the resolution of MPLB “THRIFT-3276: Binary data does not decode correctly using the TJSONProtocol when the base64 encoded data is padded”, the ignore padding algorithm was implemented in three different PLs, i.e., C++, C#, and Java.
Due to our limited knowledge, experience, and resources in comprehending the source code of the 54 non-trivial projects across a wide range of application domains, we only identified 515 cross-language calls in the resolution of 505 MPLBs, which account for 54.2% of the 931 MPLBs in total under manual analysis.
The distribution of MPLBs of each bug resolution category over the 5 cross-language calling mechanisms is shown in TABLE <ref>.
We found that the MDI mechanism was used most frequently, with a total of 195 times. The ILT mechanism is the most commonly used mechanism in the AIM category, accounting for 37.8% (107/283) of all the usages.
§ DISCUSSION
§.§ Interpretation of Study Results
RQ1:
Taking all projects as a whole, only about 10% of the bugs are MPLBs, which indicates that a majority of bugs in MPL software development can be solved by modifying source code in a single PL. It further implies that inter-language dependencies tend to be well designed and implemented so that the impact of most bugs does not propagate across PLs.
RQ2: The vast majority (95%) of MPLBs involve source files written in only 2 PLs. A possible reason is that modifying source files involving more PLs in a single bug resolution may result in higher code change complexity, which requires a more comprehensive consideration and thus more effort when performing change impact analysis on the software system.
RQ3: (1) The resolution characteristics of MPLBs show a more complex resolution process than SPLBs. This means that resolving MPLBs will increase the difficulty of software development. MPLBs involve source files written in multiple PLs, and the source files in different PLs are usually distributed in different components; therefore, MPLBs tend to have a global impact on the software system, and the change complexity of MPLBs will be higher than that of SPLBs.
(2) Bug resolution involving multiple PLs is related to the reopen of bugs, resulting in increased rework.
RQ4:
(1) Higher code change complexity of resolution of MPLBs does not necessarily mean longer OT of the MPLBs. For instance, among the MPLBs involving two PLs, MPLBs involving Java tend to have greater LOCM, NOFM, and NODM but take shorter OT; in contrast, MPLBs involving C/C++ tend to have smaller LOCM, NOFM, and NODM but take longer OT.
This observation is similar to the conclusion on SPLBs reached by Zhang et al. <cit.>.
(2) MPLBs involving C/C++, Java, and Python show a much larger OT than MPLBs involving other PL combinations. This may be related to almost the highest code change complexity of the resolution of those MPLBs.
(3) The percentage of reopened MPLBs involving different PL combinations fluctuates strongly while the percentage of reopened SPLBs involving different PL combinations is relatively stable. This is partially because the numbers of MPLBs and reopened MPLBs involving different PL combinations are too small, compared with the numbers of SPLBs and reopened SPLBs involving the corresponding PL combinations.
RQ5: (1) The classification of causes of bug resolution involving multiple PLs shows that involving multiple PLs in the resolution of 54.78% of the MPLBs results from modifications to algorithm implementation (i.e., AIM). Inter-language testing (ILT) is used by the modified source files for resolving 107 out of 283 MPLBs by AIM, the largest proportion. This can be explained by the fact that changes in algorithm implementation are often followed by corresponding testing. (2) MPL definition and implementation (MDI) is used in the resolution of 195 out of 505 MPLBs, accounting for the largest proportion; the possible reason is that common implementations or modifications of multiple PLs aim to provide interfaces to multiple PLs to meet the calls from different PLs.
§.§ Implications for Practitioners
Practitioners should be cautious of bug resolution involving the PL combination of JavaScript and Python.
MPLBs involving the PL combination of JavaScript and Python have a reopen rate of 20.66%, which is much higher than that of MPLBs involving other PL combinations. A reopened bug can take considerably more time and effort to fix <cit.>.
The code change complexity and OT are related to specific PLs. For instance, MPLBs involving PL combinations with Java show a higher code change complexity and shorter OT while MPLBs involving PL combinations with C/C++ show a lower code change complexity and longer OT.
Reopen of MPLBs is in association with specific PL combinations. There are significantly positive associations between bug reopen and modifying source files in 3 PL combinations (Java, Python; Java, JavaScript; JavaScript, Python).
The complexity of the resolution of MPLBs involving most PL combinations is manageable. This is evidenced by the fact that there is no significant association between bug reopen and modifying source files in 9 out of 13 PL combinations.
The identified causes for involving multiple PLs in bug resolution and cross-language calling mechanisms can help practitioners to better understand the links between MPL source code. There has been limited research on the classification of cross-language calling mechanisms.
§.§ Implications for Researchers
More PL combinations need to be explored with respect to the resolution characteristics of MPLBs.
It is valuable to be informed of the impact of different PL combinations on bug resolution, so that developers can make appropriate decisions on release planning and task assignment when facing MPLBs involving different PL combinations.
There is a need for a more complete picture of cross-language calling mechanisms. Due to the core role of cross-language calling mechanisms in the research on MPL topics, such a complete picture will save researchers considerable effort in MPL analysis.
§ THREATS TO VALIDITY
Construct validity is concerned with whether the data we collected (listed in Table <ref>) is consistent with the true values we expect. A possible threat to construct validity is that not all resolved bugs are associated with the corresponding commits. Due to various reasons, committers may not explicitly mention the resolved bug ID in the corresponding commit message, which may negatively influence the representativeness of the collected bugs and further affect the accuracy of entropy and bug OT. Through our manual check, we confirmed that the bugs with explicit links to corresponding commits are not reported in a narrow time span and also not resolved by a small group of specific developers. Therefore, this threat is partially alleviated.
Another potential threat is biases held by different researchers during manual analysis, which may result in incorrect classification of MPLB resolution. To mitigate this threat, we adopted a three-step bug classification process, in which a pilot bug labelling was adopted to identify and resolve possible disagreements between researchers.
External validity centers around the generalizability of the study results. First, a possible threat to external validity is whether the selected projects are sufficiently representative. As described in Section <ref>, we used a set of criteria to select the projects. All Apache projects satisfying the selection criteria were included. Thus, this threat was eliminated. Second, another threat was that only Apache MPL OSS projects were selected. This means that we cannot claim the validity of the study results for MPL projects from other OSS ecosystems.
Internal validity is the extent to which a piece of evidence supports a claim about cause and effect. In the data analysis for answering RQ5, we explored the causes why bug resolution involves multiple PLs, and selected a sample of 931 MPLBs out of the total 6,700 MPLBs for our qualitative analysis. A threat is that other bugs that were not included in our studied sample may contain causes beyond what we identified in our analysis. However, since we selected the statistically representative sample with a 99% confidence level and a 5% margin of error, this sample selection bias could be alleviated.
Reliability refers to whether the same results can be produced when other researchers replicate this study. One potential threat is related to the implementation of the associated software tools used for data collection. These tools were primarily implemented by the second author, and the code for key functions was regularly reviewed by the first author. Another threat is related to the dataset used in this study. To increase the reliability of the study results, we provided the used dataset and classification results of manual analysis of bug resolution and cross-language calling mechanisms online <cit.>. Consequently, the threat to reliability was reduced.
§ CONCLUSIONS AND FUTURE WORK
This work aims to understand the resolution of MPLBs in MPL software systems and the reasons why bug resolution involves multiple PLs.
To this end, we conducted a large-scale empirical study on 66,932 bugs with 6,700 MPLBs included, coming from 54 MPL projects that were selected from 655 Apache OSS projects according to a set of inclusion criteria.
The main findings are summarized as follows:
(1) The percentage of MPLBs in the selected projects ranges from 0.17% to 42.26%, and the percentage of MPLBs for all projects as a whole was 10.01%.
(2) 95.0% and 4.5% of all the MPLBs involve source files in 2 and 3 PLs, respectively; this means that the vast majority of MPLBs involve source files written in only 2 PLs, while it is uncommon to modify source files in 4 or more PLs in MPLB resolution.
(3) The change complexity resolution characteristics of MPLBs tend to be higher than those of SPLBs, and the open time of MPLBs tends to be longer.
(4) There is significant association between MPLBs and bug reopen for specific PL combinations. The reopen rate of the combination of JavaScript and Python reaches 20.66%.
(5) Among the six identified causes for involving multiple PLs in bug resolution, Algorithm Implementation Modifications (AIM) accounts for the highest percentage (54.78%), and five cross-language calling mechanisms were identified to deepen the understanding of MPLB resolution in MPL software systems.
We plan to further: (1) investigate the impact of MPLBs on MPL software architectures, e.g., whether MPLBs are relevant to the introduction of architectural technical debt, and (2) explore how to automatically identify the cross-language calling mechanisms manually identified in this study.
IEEEtran
|
http://arxiv.org/abs/2307.00393v1
|
20230701174418
|
Using joint training speaker encoder with consistency loss to achieve cross-lingual voice conversion and expressive voice conversion
|
[
"Houjian Guo",
"Chaoran Liu",
"Carlos Toshinori Ishi",
"Hiroshi Ishiguro"
] |
eess.AS
|
[
"eess.AS"
] |
Using joint training speaker encoder with consistency loss to achieve cross-lingual voice conversion and expressive voice conversion
Houjian Guo
Chaoran Liu
Carlos Toshinori Ishi
Hiroshi Ishiguro
============================================================================================================================================================================================================================
Voice conversion systems have made significant advancements in terms of naturalness and similarity in common voice conversion tasks. However, their performance in more complex tasks such as cross-lingual voice conversion and expressive voice conversion remains imperfect. In this study, we propose a novel approach that combines a jointly trained speaker encoder and content features extracted from the cross-lingual speech recognition model Whisper to achieve high-quality cross-lingual voice conversion. Additionally, we introduce a speaker consistency loss to the joint encoder, which improves the similarity between the converted speech and the reference speech. To further explore the capabilities of the joint speaker encoder, we use the Phonetic posteriorgram as the content feature, which enables the model to effectively reproduce both the speaker characteristics and the emotional aspects of the reference speech. The code and pre-trained model are open-sourced [https://github.com/ConsistencyVC/ConsistencyVC-voive-conversion].
cross-lingual voice conversion, expressive voice conversion, joint speaker encoder, speaker consistency loss
§ INTRODUCTION
Voice conversion (VC) is a task that aims to modify a speaker's voice characteristics, such as speaker identity<cit.>, emotion<cit.>, and accent<cit.> while preserving the linguistic content. In this study, we aim to address two challenges:
* Cross-lingual voice conversion (XVC), where the source speech and reference speech are in different languages<cit.>.
* Expressive voice conversion (EVC), which refers to the research conducted by Du et al.<cit.>, involves converting an input speech into a referenced speech in terms of both speaker identity and emotional style.
Decoupling and reconstructing the information in speech is currently the most popular approach for high-quality voice conversion models<cit.><cit.><cit.>. In detail, during training, content and speaker information is extracted from speech and then used for speech reconstruction. During inference, voice conversion is achieved by generating new speech using the content information from the source speech and the speaker information from the reference speech.
Phonetic posteriorgram (PPG)-based voice conversion, such as PPG-VC<cit.>, is the classical voice conversion method based on this principle. However, in the past, limited automatic speech recognition (ASR) performance and insufficient speech synthesis model capability resulted in limited speech quality of the synthesized output<cit.>. With the emergence of non-autoregressive (NAR) text-to-speech (TTS) models, such as VITS<cit.>, FastSpeech2<cit.>, DiffSinger<cit.>, and the availability of large-scale pre-trained self-supervised learning (SSL) models, such as Hubert<cit.> and WavLM<cit.>, high-quality voice conversion models have become possible. The underlying principle of FreeVC<cit.>, ACE-VC<cit.>, and the widely adopted open-source Singing Voice Conversion (SVC) model SO-VITS-SVC[https://github.com/svc-develop-team/so-vits-svc] is to leverage SSL models to extract content features from the original speech. Then, speaker ID or speaker classification models are employed to extract speaker-specific information from the speech. Finally, both sets of information are used to reconstruct the speech by the TTS model. The quality of VC results depends heavily on the synthesis capabilities of the speech reconstruction model, especially when accurate and unambiguous content information is available.
However, there is potential to improve the method of extracting speaker features. Previous VC models, such as PPG-VC and FreeVC, have typically used speaker encoders pre-trained on speaker classification tasks to acquire speaker embeddings that are then used to guide speech synthesis. It is important to note that the primary goal of training speaker encoders is not speech synthesis, but speaker recognition. Consequently, this approach may miss valuable information present in the reference speech, such as emotion. In addition, training speaker classification models requires large datasets.
FreeVC-s<cit.>, QuickVC<cit.>, and NVC-Net<cit.> use a jointly trained speaker encoder to ensure that the output of the speaker encoder contains only speaker-related information. This is achieved by implementing a bottleneck structure and carefully excluding speaker information from the content features. However, there is a lack of a more detailed loss function specifically designed to capture speaker-related features.
The concept of speaker consistency loss has been used in several studies, including YourTTS<cit.> and CyclePPG-XVC<cit.>, with the aim of improving the speaker similarity between model-generated speech and real speech. This is achieved by comparing the outputs of a speaker encoder that processes both the generated and the real speech. However, these studies used pre-trained speaker encoders that were specifically trained for speaker classification tasks. This method has certain limitations, such as the neglect of emotion information, as discussed above. Furthermore, the speaker consistency loss used in these studies only updates the speech synthesis module and has no impact on the speaker encoder itself. Therefore, there is still room for improvement in the application of speaker consistency loss, especially for XVC and EVC tasks.
In this study, we introduce a novel VC model called ConsistencyVC to address issues related to speaker feature extraction. The main contributions of our research are summarised as follows:
* A new method for speaker feature extraction is proposed, where the speaker consistency loss is applied to the joint speaker encoder. Experimental results show that the inclusion of speaker consistency loss improves both speaker similarity and emotion similarity.
* Implementation of XVC using Whisper: We use the intermediate features of the cross-lingual speech recognition model Whisper <cit.>, which can help to generate high-quality speech without foreign accents, to implement XVC.
* Implementation of EVC with PPG as input: In order to imitate the emotional and speaker information in the reference speech, we implement EVC with PPG as input, which results in accurate conversion of both emotional and speaker characteristics.
§ METHOD
§.§ Motivation
In recent VC research, state-of-the-art VC systems have shown impressive performance, particularly in single-language (typically English) scenarios without emotion alteration. The speech samples that are generated show a remarkable degree of naturalness and similarity to human voices. We believe it is time to shift the focus of VC research from traditional VC to more complex applications. Therefore, we chose the XVC and EVC tasks to demonstrate the flexibility of our designed speaker information encoding approach, which can handle more tasks beyond traditional VC.
The proposed ConsistencyVC is inspired by FreeVC-s<cit.>, LoraSVC[https://github.com/PlayVoice/lora-svc], VALL-e<cit.> and YourTTS<cit.>. The model is based on the infrastructure of FreeVC-s due to its end-to-end structure, which enables high-quality VC. The bottleneck structure of FreeVC-s's joint speaker encoder ensures that only speaker features are encoded, without including content information. In addition, the non-autoregressive design improves inference speed. However, unlike FreeVC-s, we take inspiration from LoraSVC and choose content features that perform better than the raw SSL model output for the XVC task. This eliminates the need for data augmentation, which consumes a lot of storage space. VALL-e is a zero-shot speech synthesis model that uses 3-second segments of target speech as reference speech input during training. The model can mimic the speaker's features and even emotional features from any 3-second reference speech segment and synthesize high-quality speech. This slicing concept inspired us to assume that a sub-segment of speech should contain similar speaker and emotional features as the whole speech. Building on this assumption, and inspired by YourTTS, we apply speaker consistency loss to the training of the joint speaker encoder VC model. This application of speaker consistency loss improves speaker similarity and emotion similarity.
§.§ Model architecture
As shown in Figure 1, the main structure of the ConsistencyVC model follows the VITS speech synthesis model. However, the text encoder and duration predictor are replaced by a content encoder similar to the posterior encoder in VITS. The content encoder uses WaveNet residual blocks. Instead of text, the content encoder takes content features from a pre-trained ASR model as input. Speaker information is encoded from Mel-spectrogram using a jointly trained speaker encoder. In the experiment part, the XVC and EVC tasks use different datasets and different types of content features to train the VC model. However, the structure of the VC model remains the same.
In the following, the content feature selection and the structure of the speaker encoder are explained in detail.
§.§.§ Content feature
Whisper is an ASR model that has achieved remarkable results in cross-lingual speech recognition tasks. Its model architecture is based on an encoder-decoder transformer. We choose the output of the transformer encoder blocks, called Whisper Encoder's Output (WEO), as the content feature. Compared to the content features in previous XVC studies, the WEO provides more accurate and comprehensive information, including the accent of the source speech, which is crucial for achieving foreign accent-free XVC. Therefore, we choose WEO as the content feature for the XVC task.
For the EVC task, we choose to use the PPG obtained from the wav2vec model trained on the phoneme recognition task[https://huggingface.co/speech31/wav2vec2-large-english-TIMIT-phoneme_v3]. This choice is based on the fact that different emotions within the same utterance have different prosody. PPG provides clear content information and contains less prosodic information than WEO, so the prosody of the reconstructed speech relies entirely on the output of the speaker encoder. This makes PPG more suitable for EVC.
Conversely, WEO is more suitable than PPG for XVC. This is because the same pronunciation may have different prosody or accent in different languages. In the XVC task, the prosody of the reconstructed speech should depend on the source speech, which comes from a native speaker.
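As a sketch of how WEO can be extracted with the openai-whisper package (the model size, file name, and exact tensor shape are assumptions for illustration):

import torch
import whisper

model = whisper.load_model("base")
audio = whisper.pad_or_trim(whisper.load_audio("source.wav"))
mel = whisper.log_mel_spectrogram(audio).to(model.device)
with torch.no_grad():
    weo = model.encoder(mel.unsqueeze(0))  # (1, frames, d_model)
# Whisper uses a 160-sample hop at 16 kHz and the encoder halves the
# frame rate, i.e., one WEO frame per 320 samples.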
§.§.§ Speaker Encoder
The coded speaker embedding is generated by the speaker encoder using the Mel-spectrogram as input. The speaker encoder is trained together with the rest of the model. It consists of a 3-layer LSTM module and a fully connected layer, similar to FreeVC-s. We feed the Mel-spectrogram derived from the speech into the LSTM layer of the speaker encoder. The final hidden state of the LSTM is passed to the fully connected layer, allowing the transformation of variable-length Mel-spectrogram inputs into fixed-size embeddings, which achieves a bottleneck structure. The output of the content encoder is assumed to be speaker independent. To synthesize speech, the model replaces the missing speaker information by using the input from the speaker encoder.
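A minimal PyTorch sketch of this bottleneck design (the hidden sizes are illustrative assumptions):

import torch
import torch.nn as nn

class SpeakerEncoder(nn.Module):
    def __init__(self, n_mels=80, hidden=256, emb_dim=256):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=3, batch_first=True)
        self.fc = nn.Linear(hidden, emb_dim)

    def forward(self, mel):            # mel: (batch, frames, n_mels)
        _, (h, _) = self.lstm(mel)     # h: (num_layers, batch, hidden)
        return self.fc(h[-1])          # final hidden state -> fixed-size embedding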
§.§ Training strategy
Following the training strategy of VITS, the ConsistencyVC model incorporates VAE and adversarial training during the training process.
For the generator part, the loss can be expressed as:
L_vae = L_recon + L_kl + L_adv(G) + L_fm(G) + L_SCL ,
where L_recon is the reconstruction loss, L_kl is the KL loss, L_adv(G) is the adversarial loss, and L_fm(G) is the feature matching loss. These losses are the same as in VITS, so their details are not reiterated here. Instead, we focus on the speaker consistency loss L_SCL.
In the implementation of VITS, the researchers adopt windowed generator training<cit.>, a technique that generates only a part of the raw waveform during training to reduce computational requirements. They randomly extract segments of the latent representation z to feed into the HiFi-GAN-based decoder, and the corresponding audio segments are extracted from the ground-truth raw waveforms as training targets. As a result, the length of the speech generated during training is only a small part of the length of the input speech.
In FreeVC-s, for the VC task, the content information input is also segmented, limiting the maximum size of the content features. We assume that the speech segment corresponding to the input content features and the speech segment output by the model should contain the same emotion and speaker features. Based on this assumption, we can design the speaker consistency loss for the jointly trained speaker encoder.
Formally, let ϕ(·) be the speaker encoder function that outputs the speaker embedding of its input. The speaker consistency loss is defined as the L1 distance between the speaker embeddings of the ground-truth speech segment and the generated speech segment:
L_SCL = ‖ϕ(t) - ϕ(h)‖_1 ,
where t and h denote the ground-truth speech segment and the generated speech segment, respectively. Similar to YourTTS and other work, we do not introduce the speaker consistency loss at the beginning of training but only after the model has learned a basic speech synthesis capability.
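A schematic implementation of this loss could look as follows (a minimal sketch assuming the speaker encoder above; note that F.l1_loss is mean-reduced, whereas the equation uses the plain L1 norm):

import torch.nn.functional as F

def speaker_consistency_loss(speaker_encoder, mel_truth, mel_generated):
    # phi(t) and phi(h): embeddings of the ground-truth and generated segments.
    # Gradients flow into both the generator and the jointly trained encoder.
    emb_t = speaker_encoder(mel_truth)
    emb_h = speaker_encoder(mel_generated)
    return F.l1_loss(emb_h, emb_t)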
To the best of our knowledge, we are the first to introduce a consistency loss to a jointly trained speaker encoder.
§ EXPERIMENT
§.§ Cross-lingual Voice Conversion
§.§.§ Dataset
In the XVC experiment, we used several datasets, including Aishell-3<cit.>, LibriTTS-100<cit.>, JVS<cit.>, ESD<cit.>, VCTK<cit.>, Aishell-1<cit.>, and JECS<cit.>. Of these, LibriTTS-100, ESD, Aishell-3, and JVS contain speech samples in English, Chinese, and Japanese; these datasets were used to train the XVC model. The VCTK, Aishell-1, and JECS datasets were used to provide unseen speaker samples in English, Chinese, and Japanese, respectively. These unseen speaker samples were used to evaluate the ability of the model to imitate speakers not present in the training set.
§.§.§ Experimental setup
For our experiments, we used a sampling rate of 16,000 Hz. The utterances of each speaker were randomly split into training and test sets in a 9:1 ratio. The model is based on FreeVC-s, but several parameters are configured differently.
The most significant difference is that for the XVC task we choose WEO as the content information; the hop size of WEO is 320. In terms of other inputs to the model, both the linear spectrogram and the 80-band Mel-spectrogram are computed using the Short-Time Fourier Transform (STFT) with FFT size, window size, and hop size set to 1024, 1024, and 320, respectively. The upsampling scale of the four residual blocks in the HiFi-GAN-based decoder is factorized as 320 = 10 × 8 × 2 × 2, i.e., the upsampling scales for the four blocks are [10, 8, 2, 2]. To avoid potential checkerboard artifacts caused by the "ConvTranspose1d" upsampling layer<cit.>, kernel sizes of [20, 16, 4, 4] are used. The AdamW optimizer is used with the same weight decay and learning rate as in FreeVC-s.
In our experiments, we compare two versions of ConsistencyVC: ConsistencyXVC and ConsistencyXVC-w/o loss. Both models are trained on a single NVIDIA 3090 GPU for up to 300k steps using fp16 training. For the first 100k steps, the batch size is 108, and the utterance lengths used for training range from 24,000 to 96,000 samples, corresponding to 1.5 to 6 seconds. The latent variables z input to the HiFi-GAN-based decoder are sliced into 28 segments, resulting in a speech length of 28 × 320 = 8,960 samples. However, for ConsistencyXVC, starting from the 100k step mark, an additional training phase with speaker consistency loss is introduced for the next 200k steps. In this phase, the latent variables z are sliced into 75 segments, resulting in a speech length of 75 × 320 = 24,000 samples. Due to the larger size of the latent variables z, the batch size is reduced to 42. ConsistencyXVC-w/o loss, on the other hand, continues training with the same parameters without introducing speaker consistency loss up to 300k steps.
We also chose BNE-PPG-VC as a baseline, which uses F0, PPG and speaker embeddings as inputs to a seq2seq model for speech reconstruction. Since the model has access to the F0 information of the source speech, it is also able to perform XVC without foreign accents. We trained BNE-PPG-VC on the same tri-lingual dataset as ConsistencyXVC.
§.§.§ Subjective evaluation
In the subjective experiments, we adopted the Mean Opinion Score (MOS) as the metric for the naturalness and similarity of the converted utterances. We invited 47 native English speakers from Amazon Mechanical Turk [https://requester.mturk.com] to evaluate the speech. The source speech is always English, while the reference speech is English, Chinese, or Japanese.
Each subject was required to evaluate the naturalness of 6 original utterances from the dataset and 60 converted utterances. Additionally, they were asked to evaluate the similarity of 32 converted utterances to the utterances of the target speakers. Some audio samples are available on the demo page[https://consistencyvc.github.io/ConsistencyVC-demo-page/].
The naturalness results in Table 1 show that a non-English reference voice does not degrade the naturalness of the converted English speech, indicating that the model succeeds in the XVC task. Meanwhile, the speaker similarity experiments show that introducing the speaker consistency loss improves the similarity between the speakers of the generated speech and speakers that appear in the model's training set. However, for speakers not seen in the training set, the speaker consistency loss did not improve speaker similarity.
§.§.§ Objective evaluation
For objective evaluation, we consider the word error rate (WER) and character error rate (CER) of the converted speech for intelligibility, the equal error rate (EER) of a speaker verification model for speaker similarity, and the F0 Pearson correlation coefficient (F0-PCC) for prosody consistency.
§.§ Expressive Voice Conversion
§.§.§ Dataset
For the EVC task, we conducted the experiments using English datasets. We selected the English data from the ESD dataset<cit.> and the VCTK dataset<cit.> to train the models. Additionally, we selected samples from the Emov-db dataset<cit.> as the reference speech to consider the model's ability to imitate emotional speech from speakers that were not present in the training set.
§.§.§ Experimental setup
In the EVC task, we compared different variations of the ConsistencyVC model, including the presence or absence of speaker consistency loss, and using different types of content features, including PPG and WEO.
The variations of the model that were compared are as follows:
1. ConsistencyEVC: This model uses PPG as the content feature input. It is trained without speaker consistency loss for the first 100k steps with a batch size of 108. Then, it continues training with speaker consistency loss for the next 200k steps with a reduced batch size of 42.
2. ConsistencyEVC-w/o loss: This model also uses PPG as the content feature input. It is trained without speaker consistency loss for the entire duration of 300k steps with a batch size of 108.
3. ConsistencyEVC-whisper: This model uses WEO as the content feature input. Similar to ConsistencyEVC, it is trained without speaker consistency loss for the first 100k steps with a batch size of 108. Then, it continues training with speaker consistency loss for the next 200k steps with a reduced batch size of 42.
Apart from the dataset and content features, all other training parameters remain the same as those used in the XVC task.
§.§.§ Subjective evaluation
In the subjective experiments, following Du et al.<cit.>, we used the Mean Opinion Score (MOS) to assess the naturalness of the speech and an ABX preference test to compare the different methods in terms of style similarity. We invited 45 subjects from Amazon Mechanical Turk to participate. Each subject evaluated the naturalness of six original utterances from the dataset and 60 converted utterances, and completed 36 preference tests. In each test, the subjects were asked to choose which utterance was more similar to the reference utterance in terms of both speaker and emotion.
Similar to the XVC task, the use of speaker consistency loss improved the model's ability to imitate the reference speech of the seen speakers. Furthermore, the experimental results showed that WEO as a content feature outperformed PPG in terms of naturalness as measured by MOS. WEO contains more information suitable for reconstruction, resulting in more natural synthesized speech.
However, in the ABX preference test, ConsistencyEVC-whisper performed worse than ConsistencyEVC. This is because WEO contains additional information beyond content, such as intonation, which is helpful for XVC tasks to eliminate accents when foreign speakers speak in a different language. For EVC tasks, however, we want to retain only the content features of the source speech and let the reference speech determine the intonation of the converted speech.
§ CONCLUSION, LIMITATIONS, AND FUTURE WORK
In this study, we built on FreeVC-s to implement cross-lingual voice conversion and expressive voice conversion using the cross-lingual speech recognition model Whisper and the wav2vec-based phoneme recognition model, respectively. To improve the imitation of reference speech, we introduced the speaker consistency loss to the joint speaker encoder. Experimental results showed that this loss contributed to improvements in both speaker and emotion features. However, there are still some limitations to our research:
Speaker similarity decreases when the reference speech belongs to an unseen speaker in the dataset. Training the model with a more diverse set of speakers could potentially improve its ability to imitate unseen speakers. For example, training with the LibriTTS-R dataset<cit.>, which consists of 585 hours of speech data from 2,456 speakers, could allow for better zero-shot and higher-quality voice conversion.
The content features for XVC and EVC tasks are inconsistent. The choice of content features is flexible. If the focus is on speech quality and maintaining a similar pitch in the converted speech, then Whisper can be chosen. However, if the focus is on emotional expression in speech, the use of PPG as content features can ensure that the VC model generates speech with the same style as the reference speech. It is likely that the two tasks are not mutually exclusive, as XVC requires the preservation of intonation, which needs to be modified in the EVC tasks. In future research, it would be worthwhile to explore approaches to decouple emotion and speaker information in speech.
IEEEbib
|
http://arxiv.org/abs/2307.03045v1
|
20230706151029
|
Track Mix Generation on Music Streaming Services using Transformers
|
[
"Walid Bendada",
"Théo Bontempelli",
"Mathieu Morlon",
"Benjamin Chapus",
"Thibault Cador",
"Thomas Bouabça",
"Guillaume Salha-Galvan"
] |
cs.IR
|
[
"cs.IR",
"cs.LG",
"cs.SD",
"eess.AS"
] |
Deezer
France
This paper introduces Track Mix, a personalized playlist generation system released in 2022 on the music streaming service Deezer.
Track Mix automatically generates “mix” playlists inspired by initial music tracks, allowing users to discover music similar to their favorite content.
To generate these mixes, we consider a Transformer model trained on millions of track sequences from user playlists. In light of the growing popularity of Transformers in recent years, we analyze the advantages, drawbacks, and technical challenges of using such a model for mix generation on the service, compared to a more traditional collaborative filtering approach.
Since its release, Track Mix has been generating playlists for millions of users daily, enhancing their music discovery experience on Deezer.
[300]Information systems Recommender systems
[300]Information systems Personalization
Track Mix Generation on Music Streaming Services using Transformers
Guillaume Salha-Galvan
August 1, 2023
===================================================================
§ INTRODUCTION
The French music streaming service Deezer <cit.> provides unlimited access to a large catalog of 90 million music tracks.
To help its 16 million active users navigate through this catalog and discover new content they might like, the service integrates a variety of large-scale music recommender systems <cit.>. In this paper, we present the latest addition to these systems: Track Mix, a personalized playlist generation tool designed to enhance the music discovery experience on Deezer.
Released in 2022 on the Deezer homepage, Track Mix generates “mix” playlists inspired by a selected initial music track.
This allows users to discover new tracks similar to their favorite ones on the service.
To generate a playlist from an initial track, we consider in this paper a Transformer <cit.> trained on millions of track sequences obtained from user playlists.
We show that, while deploying this model in production entails facing important technical challenges, it improves most of our performance indicators in online A/B tests compared to a more traditional latent model for collaborative filtering <cit.>. Transformer-based mixes consistently result in longer listening times for all users. They are also associated with a significant increase in the number of “collect” actions (i.e., additions to the list of favorite tracks or to personal playlists) among new users. This is a valuable result, as facilitating the acquisition of preference information on new users contributes to addressing cold start problems <cit.>. Nonetheless, this Transformer also reduces the number of collect actions for more regular users. We interpret this uneven performance in terms of popularity bias and user expectation regarding music discovery.
This paper is organized as follows. Section <ref> introduces the Track Mix feature on Deezer. Section <ref> details the motivations, development, and deployment of our Transformer for mix generation. We analyze our online experiments on Deezer in Section <ref>, and conclude in Section <ref>.
§ TRACK MIX GENERATION ON DEEZER USING TRANSFORMERS
§.§ Track Mix, a Personalized Playlist Generation System on Deezer
§.§.§ The Track Mix Feature
Track Mix is a playlist generation system, accessible by millions of Deezer users since its worldwide release on the homepage of this service in 2022.
As illustrated in Figure <ref>, it materializes as a personalized shortlist of up to 12 music tracks, selected[ Technical details on the selection process for initial track and on playlist reordering are voluntarily omitted in this paper for confidentiality reasons.] from the ones previously liked or regularly listened to by each user. They are dynamically updated at each connection to the service.
As illustrated in Figure <ref>, a click on one of them generates a “mix” playlist composed of the selection and 39 other similar tracks. Besides serving as an online jukebox, Track Mix aims to support users in discovering music similar to their favorite content.
Unlike the Flow algorithm on Deezer <cit.>, a personalized radio mixing the user’s favorite tracks along with new recommendations, Track Mix does not automatically enforce the addition of favorites within playlists. Hence, Track Mix puts a stronger emphasis on music discovery.
§.§.§ A Baseline Model for Track Mix Generation
To generate these playlists, Track Mix has been historically relying on “Mix-SVD”, an internal latent model for collaborative filtering <cit.> that will act as a baseline for the Transformer from Section <ref>.
By factorizing a pointwise mutual information matrix based on track co-occurrences in user playlists and lists of favorites, using singular value decomposition (SVD) <cit.>, this model learns vector representations of tracks in an embedding space where proximity reflects user preferences <cit.>. When a user selects an initial track in Track Mix, this model identifies its closest neighbors in the embedding space, and reorders them using internal rules<ref> to generate playlists. Mix-SVD is suitable for large-scale production use on a service like Deezer. Embedding vectors of millions of tracks undergo weekly updates and are exported to a Cassandra cluster. Computation services run on a Kubernetes cluster. We compute playlist generation operations on a Scala server. In particular, we use approximate nearest neighbor techniques <cit.> for efficient similarity search, via a Golang application incorporating the Faiss library <cit.>.
§.§ Leveraging Transformers for Track Mix Generation
§.§.§ Motivation
In this paper, we consider replacing Mix-SVD with a Transformer <cit.>.
In its general formulation, the term Transformer refers to a family of neural architectures leveraging attention mechanisms <cit.> to process sequential data.
In recent years, Transformers have emerged as a competitive approach for sequence modeling and generation in various domains <cit.>. Besides the sequential nature of music playlists, we explore the use of these models for Track Mix generation for two main reasons.
Firstly, they have achieved promising results on recommendation tasks in recent research (see, e.g., SASRec and BERT4Rec <cit.>). Secondly, a Transformer has already been successfully deployed on Deezer to improve automatic playlist continuation (APC) <cit.>, i.e., to better recommend lists of tracks for users to extend their own playlists (we refer to Bendada et al. <cit.> for details). Our study in this paper aims to build upon these successes. Nonetheless, previous work has also emphasized that the performance of Transformers on APC tends to decrease when the number of tracks in the playlist to extend diminishes <cit.>. In the extreme case of the Track Mix feature, a Transformer would only have access to a single initial track to generate a playlist. For this reason, the empirical superiority of a Transformer on Track Mix generation compared to Mix-SVD still needs to be fully demonstrated.
§.§.§ Mix-Transformer
The specific model we consider in our study, denoted “Mix-Transformer”, is a Decoder-only Transformer <cit.> with one hidden layer.
Our decision to solely retain the Decoder component is driven by the popularity of Generative Pre-trained Transformers (GPT), a family of Decoder-only Transformers with state-of-the-art performances on sequence generation <cit.>.
We train Mix-Transformer on a playlist completion task, using millions of track sequences obtained from user playlists on Deezer. Formally, we denote by 𝒯 the set of tracks from the Deezer catalog and by z_t∈ℝ^d some embedding vector representing each track t ∈𝒯, with d ∈ℕ^*. A playlist of length l ∈ℕ^+ is a sequence of l distinct tracks from 𝒯. For each playlist p of length l > 1 and k ∈{1, …, l-1}, we create a sub-playlist p_:k consisting of the k first elements of p, and a sub-playlist p_(l-k): consisting of its l-k last elements. We train Mix-Transformer to associate, to each p_:k, an embedding vector z_p_:k∈ℝ^d whose inner product nearest neighbors should be the embedding vectors of tracks in p_(l-k):. For this purpose, we minimize the same logistic loss as our previously mentioned APC model <cit.>, using tracks randomly picked from 𝒯∖ p as negative samples, and initial track embedding vectors retrieved from Mix-SVD.
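As an illustration of this training-pair construction, the following hypothetical sketch builds (prefix, suffix, negatives) triplets from a single playlist; it is a simplification of the actual data pipeline.

import random

def training_pairs(playlist, catalog, num_negatives):
    # For a playlist p of length l, every prefix p[:k] should retrieve the
    # suffix p[k:]; negatives are tracks sampled from the catalog outside p.
    in_playlist = set(playlist)
    candidates = [t for t in catalog if t not in in_playlist]
    pairs = []
    for k in range(1, len(playlist)):
        negatives = random.sample(candidates, num_negatives)
        pairs.append((playlist[:k], playlist[k:], negatives))
    return pairs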
Finally, to generate a mix playlist in Track Mix, the trained Mix-Transformer treats the selected initial track as a 1-track playlist, computes its embedding vector, and then identifies and reorders<ref> its closest neighbors.
§.§.§ Deployment
We stress that deploying Mix-Transformer on Deezer entails overcoming engineering challenges.
Our internal tests have revealed that directly replacing Mix-SVD with Mix-Transformer in Track Mix would result in four times longer inference times for mix generation.
This latency would be deemed unacceptable in production, as it would be noticeable to users. Additionally, the maximum throughput, i.e., the number of users that could be served simultaneously, would severely deteriorate. To obtain latency and throughput levels comparable to Mix-SVD, without incurring additional infrastructure costs (e.g., without adding GPUs), we closely follow the recently proposed “represent-then-aggregate” framework <cit.> for scalable APC.
We leverage ONNX model merging <cit.> to integrate all operations from data processing to track ranking into the deployed Mix-Transformer. This reduces its infrastructural complexity and speeds up inferences.
Lastly, and perhaps most importantly, we dynamically quantize <cit.> ONNX models. Our tests have demonstrated that this quantization significantly reduces computation costs, with a minor impact on performances.
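For reference, dynamic quantization of an exported ONNX model takes only a few lines with ONNX Runtime; the file names below are placeholders, not Deezer's actual artifacts.

from onnxruntime.quantization import quantize_dynamic, QuantType

# Quantize the merged end-to-end model to int8 weights for cheaper CPU inference.
quantize_dynamic(
    model_input="mix_transformer.onnx",
    model_output="mix_transformer_int8.onnx",
    weight_type=QuantType.QInt8,
)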
§.§ Online Evaluation on Deezer
§.§.§ Setting
We now present the online A/B test we conducted on millions of Deezer users in March and April 2023.
During this test, we used Mix-Transformer to generate Track Mix playlists for a randomly selected cohort of test users, and Mix-SVD for others.
Table <ref> reports the relative performance of Mix-Transformer compared to Mix-SVD.
Four metrics compare listening times for each group, and two metrics evaluate the music discovery aspect via collect actions[ The number of “collect” actions is the number of recommended tracks that users added to their list of favorites or to their personal playlists.].
§.§.§ Results
We observe that using Mix-Transformer enhances the listening times of Track Mix playlists, according to all four metrics under consideration. This positive result indicates a higher usage of the feature on Deezer. However, the number of collect actions simultaneously diminishes in the Mix-Transformer cohort.
A close examination of results reveals that this decrease primarily affects users with over a month of activity (seniority > 30 days). Conversely, Mix-Transformer boosts collect actions for new users (seniority ≤ 30 days).
Hence, using Mix-Transformer would facilitate the acquisition of preference information on these new users, which could benefit all usage-based recommender systems on Deezer by helping to overcome cold start issues <cit.>. We note that Mix-Transformer tends to recommend more popular tracks than Mix-SVD. This increased mainstreamness might explain the lower collection levels among regular users. Indeed, they might have already liked these popular tracks. Moreover, they might have different expectations regarding music discovery. Having an already established library of favorite content, they might be more open to and even looking for specialized/niche recommendations.
While Table <ref> supports these assumptions, more investigations will be required in future work for confirmation. Our test also encourages the development of an improved Mix-Transformer addressing popularity biases <cit.> on regular users, e.g., by controlling popularity levels in the training dataset and the optimized loss.
§ CONCLUSION
Since its release in 2022, Track Mix has been generating mix playlists for millions of Deezer users daily. At the time of writing, Mix-SVD remains in use on the service for mix generation. However, our online A/B test prompts us to consider adopting Mix-Transformer for new users in the near future. Despite being more challenging to deploy, this method showcases a promising performance with these users. Our test also opens up interesting avenues for future research on Transformer-based mix generation, to further enhance the music discovery experience of our more regular users.
§ SPEAKER BIO
Guillaume Salha-Galvan is a research coordinator at Deezer, where he conducts fundamental and applied research projects on music recommendation. He holds a Ph.D. in Computer Science from École Polytechnique in France.
ACM-Reference-Format
|
http://arxiv.org/abs/2307.01856v1
|
20230704180007
|
Long-Lived Particles and the Quiet Sun
|
[
"R. Andrew Gustafson",
"Ryan Plestid",
"Ian M. Shoemaker",
"Albert Zhou"
] |
hep-ph
|
[
"hep-ph",
"astro-ph.HE",
"astro-ph.SR",
"nucl-th"
] |
CALT-TH/2023-023
gustafr@vt.edu
Center for Neutrino Physics, Department of Physics,
Virginia Tech, Blacksburg, VA 24061, USA
rplestid@caltech.edu
Walter Burke Institute for Theoretical Physics, California Institute of Technology, Pasadena, CA 91125, USA
shoemaker@vt.edu
Center for Neutrino Physics, Department of Physics,
Virginia Tech, Blacksburg, VA 24061, USA
Institut für Astroteilchenphysik, Karlsruher Institut für Technologie (KIT), D-76021 Karlsruhe, Germany
The nuclear reaction network within the interior of the Sun is an efficient MeV physics factory, and can produce long-lived particles generic to dark sector models. In this work we consider the sensitivity of satellite instruments, primarily the RHESSI Spectrometer, that observe the Quiet Sun in the MeV regime where backgrounds are low. We find that Quiet Sun observations offer a powerful and complementary probe in regions of parameter space where the long-lived particle decay length is longer than the radius of the Sun, and shorter than the distance between the Sun and Earth. We comment on connections to recent model-building work on heavy neutral leptons coupled to neutrinos and high-quality axions from mirror symmetries.
Long-lived particles and the Quiet Sun
Albert Zhou
August 1, 2023
=======================================
§ INTRODUCTION
It has long been recognized that the solar interior can serve as an efficient factory for keV-scale physics beyond the Standard Model (BSM), e.g. solar axions and dark photons <cit.>. In addition to thermal production mechanisms, nuclear reactions within the Sun may also source BSM particles up to masses and energies of roughly 15 MeV <cit.>. If a flux of long-lived particles (LLPs) in this energy regime emanates from the solar interior, they may transit toward the Earth and their decay products can leave detectable signatures. It is important to emphasize that LLPs are generic consequences of a dark sector with relatively light particles and feeble couplings to the SM <cit.>. As decay lengths become long, LLPs become increasingly difficult to detect and strategies to attack this “lifetime frontier” are valuable tools in the search for BSM physics. This idea has been previously investigated, largely considering Fermi-LAT, in the high energy, i.e. ≳ 100 MeV, regime for annihilating dark matter <cit.>.
In this work we point out that existing data from the RHESSI satellite spectrometer <cit.>, which observed the Quiet Sun,[Time periods without intense surface activity such as solar flares.] can place interesting limits on dark sectors with LLPs in the range of 𝒪(100 keV) - 𝒪(1 MeV). This is an old idea, first proposed by Stodolsky and Raffelt in 1982 in the context of a 200 keV axion <cit.>; however, it has remained unexplored despite new data in the intervening decades <cit.>. We illustrate the potential sensitivity of Quiet Sun data with a number of BSM examples, emphasizing different production mechanisms which may operate in this mass window. A conservative analysis of existing data from RHESSI is capable of offering complementary constraints on production mechanisms involving neutrino upscattering, and can probe previously untouched regions of parameter space for axion-like particles (ALPs) with masses close to ∼ 1 MeV. Upcoming missions, such as the COSI satellite <cit.>, may be able to substantially improve on the capabilities of RHESSI by i) taking advantage of a larger instrument surface area, ii) making use of dead time to carefully study backgrounds, and iii) taking advantage of distinctive spectral features.
We focus on LLPs that decay primarily to photons,[We could also consider decays to e^+e^- pairs, but the analysis is complicated by the magnetic fields that surround the Earth.] and have decay lengths, ℓ_LLP, that satisfy

R_⊙ ≪ ℓ_LLP ≪ d_⊙ ,
where R_⊙ is the radius of the Sun and d_⊙ is the distance from the Sun to the Earth. This allows an O(1) fraction of the LLPs to decay en route to the satellite instrument. In this limit, the flux of LLPs will never reach any terrestrial experiment since they will decay in flight and their daughter photons will be absorbed in the upper atmosphere. In this sense, Quiet Sun observations are complementary to terrestrial searches for LLPs from the Sun such as those that have been performed by CAST <cit.> and Borexino <cit.>.
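The O(1) decay fraction in this window is easy to verify numerically; the following sketch evaluates the factor e^(-R_⊙/ℓ) − e^(-d_⊙/ℓ) appearing throughout this work, assuming radial emission from the solar core.

import numpy as np

R_SUN, D_SUN = 6.96e8, 1.496e11  # solar radius and Sun-Earth distance [m]

def decay_fraction(ell):
    # Fraction of LLPs decaying after escaping the Sun but before reaching Earth.
    return np.exp(-R_SUN / ell) - np.exp(-D_SUN / ell)

for ell in (1e8, 1e9, 1e10, 1e12):  # decay lengths in meters
    print(f"l = {ell:.0e} m: fraction = {decay_fraction(ell):.3f}")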
We perform a straightforward (and conservative) rate-only analysis, the details of which can be found at the end of <ref>. In the body of the paper we organize our discussion along the lines of specific BSM scenarios. We discuss neutrino upscattering in <ref> and solar axion production in <ref>. We also spend time focusing on model-independent LLP constraints in <ref>. In <ref> we discuss the physics potential for dark sector searches using future missions such as COSI. We close by summarizing our results in <ref>.
§ NEUTRINO UPSCATTERING - TRANSITION DIPOLE
We begin by considering a production mechanism involving the upscattering of solar neutrinos transiting through the Sun, e.g. ν A → LLP A with A a nucleus such as hydrogen or helium (see e.g. <cit.> for results on neutrino upscattering in the Earth). This mechanism leverages the large solar neutrino flux which is copious in the few-hundred keV region, and extends up to ∼ 15 MeV. Solar neutrinos have a small probability of being absorbed in the SM because of the small charged current scattering cross section at E_ν∼ MeV energies. It is, however, possible to have BSM cross sections that exceed the weak interaction at low energies if neutrinos couple via a transition magnetic dipole moment <cit.>. This can lead to sizable conversion probabilities into an unstable right-handed neutrino, N (also called a heavy neutral lepton or HNL), for neutrinos transiting from the center to the surface of the Sun. As it is unstable, N may decay in flight, supplying a broad flux of photons in RHESSI. Similar phenomena may occur in the aftermath of SN 1987A <cit.>, leading to tight limits below the supernova floor derived in <cit.>.
This “dipole portal” can dominate low energy phenomenology since it is a dimension-five operator and competes with the dimension-six four-Fermi contact interaction at low energies. The effective Lagrangian is given by
ℒ_int ⊃ ∑_α d_α F^μν N̅ σ_μν P_L ν_α .
Here, d_α represents the coupling between N and each of the 3 SM neutrinos. In this work, we consider the cases where N couples to a single flavor. This effective interaction has been studied recently in the context of accelerator, solar, atmospheric, and collider neutrinos as well as in the context of early universe cosmology and constraints from SN 1987A <cit.>.
Unlike the monoenergetic LLP cases discussed later in this paper, the spectrum of E_ν (and hence E_N) spans several orders of magnitude. For that reason, we implement a Monte Carlo integration to sample neutrino energy, production location, and upscattering location. We also account for flavor transformation during the neutrino propagation (both due to adiabatic conversion and oscillations).
We consider the Sun to be solely comprised of ^1H and ^4He with densities given by the Standard Solar Model <cit.>. All scattering is calculated off free nucleons, ignoring the coherent enhancement due to helium. This only leads to a ∼ 10% change in the bounds, which we will see is a much smaller effect than the uncertainty in the detector opening angle/background. The cross section for scattering on a free proton is given by σ_dip = σ_1 + σ_2, with

dσ_1/dE_r = α (2d)^2 F_1^2 [ 1/E_r - 1/E_ν + m_N^2 (E_r - 2E_ν - m_p)/(4 E_ν^2 E_r m_p) + m_N^4 (E_r - m_p)/(8 E_ν^2 E_r^2 m_p^2) ]

and

dσ_2/dE_r = α d^2 μ_n^2 F_2^2 [ (2 m_p/E_ν^2) ( (2E_ν - E_r)^2 - 2 E_r m_p ) + m_N^2 (E_r - 4E_ν)/E_ν^2 + m_N^4/(E_ν E_r) ] .
Here, F_1 and F_2 are electromagnetic form factors, μ_n is the magnetic moment of the nucleon in question, E_r is the recoil energy, and m_p is the proton mass <cit.>. Since the neutrino energy is much less than the proton mass, the HNL energy E_N is nearly identical to the neutrino energy E_ν. Thus, the flux of HNLs has similar features to the solar neutrino flux (see <ref>).
The HNL has decay channels N →ν_αγ. We consider the ν to be massless, and the decays to be isotropic in the rest frame of the HNL.[In complete generality the HNL may have some angular correlation with its polarization, but this depends on the details of the model e.g. Dirac vs. Majorana neutrinos <cit.> and we neglect this in what follows.] The decay length is calculated as
λ = 4π γβ / (d_α^2 m_N^3) .
The Monte Carlo simulation samples locations for N decays along with the energy and direction of the decay photon. This is used to calculate the resulting photon flux with respect to energy and angle observed by RHESSI. We consider opening angles of 1^∘ and 90^∘, where we reject all photons arriving at larger angles. The background flux observed by RHESSI is calculated by using the reported number of counts and effective area of the front segment (ignoring narrow peaks) <cit.>. We reject a parameter point if the flux from N decays exceeds the observed flux at any energy (see <ref>).
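A stripped-down version of such a Monte Carlo is sketched below. The energy sampler, conversion probability, and decay length are model-dependent inputs (assumptions of this sketch), propagation is taken to be radial, and the boost of the decay photon to the lab frame is omitted.

import numpy as np

R_SUN, D_SUN = 6.96e8, 1.496e11  # [m]
rng = np.random.default_rng(0)

def sample_decay_photons(n_events, sample_E_nu, conversion_prob, decay_length):
    photons = []
    for _ in range(n_events):
        E = sample_E_nu(rng)                    # E_N ~ E_nu since E_nu << m_p
        if rng.random() > conversion_prob(E):   # neutrino transits unconverted
            continue
        s = rng.exponential(decay_length(E))    # decay point along the line of sight
        if not (R_SUN < s < D_SUN):             # keep decays between Sun and Earth
            continue
        cos_theta = rng.uniform(-1.0, 1.0)      # isotropic decay in the N rest frame
        photons.append((E, s, cos_theta))
    return photons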
Our resulting exclusion curves from the RHESSI data are shown in <ref> for a muon-neutrino dipole coupling. We find that RHESSI data can offer a complementary (and direct) probe of regions of parameter space that are already probed by SN 1987A. Constraints are strongest in the low-mass (sub-MeV) region, which may also be probed using coherent elastic neutrino-nucleus scattering. We see that the exclusions for the three neutrino flavors all have similar values in <ref>.
§ HEAVY SOLAR AXIONS
Another production mechanism is solar axions with energies in excess of E_a ≳ 500 keV. These energies are too high to allow for thermal production (except in exponentially suppressed tails), and so the background photon fluxes are much smaller than for typically considered keV solar axion searches. The study of MeV-scale solar axions has a long history, and they have been searched for in terrestrial experiments such as Borexino and CAST <cit.>. As we discuss below, satellite measurements of the Quiet Sun provide a complementary probe that excels for decay lengths that are short relative to the Earth-Sun distance.
It is worth highlighting recent work on model building for axions with an extended matter content <cit.>. These models are motivated by the axion quality problem and seek to protect the axion against Planck-suppressed corrections. The simplest mechanism to achieve this is to simply break the canonical relation f_a m_a ≈ f_π m_π and to allow for m_a to be “heavy” relative to predictions of conventional (i.e. DFSZ <cit.> or KSVZ <cit.>) axion models. It is interesting to note that these independent model-building considerations often push the mass and couplings of the axion into regions of parameter space that are well suited for solar axion detection; we will comment on this in great detail below. For instance, following the benchmark scenarios presented in <cit.>, one finds that masses in the ∼ 10 MeV regime with axion decay constants f_a ∼ 10^-5 GeV^-1 fall squarely within the “natural” window of parameter space whilst simultaneously predicting a sizeable coupling to nucleons and a decay length that is a few times longer than the radius of the Sun. For slightly lighter axions, solar production and detection is a useful complementary probe.
The primary production mechanism for heavy solar axions is the p d → ^3 He γ reaction which takes place in the solar pp chain. Other mechanisms are energetically allowed, such as M1 transitions in the CNO chain <cit.>, and e^+e^- annihilation from ^8 B neutrinos in the solar interior, however, we find that the production rates are sufficiently small so as to be uninteresting.
The flux of axions (prior to decay) can be related to the flux of pp neutrinos, and depends on the isovector coupling of axions to nucleons g_3aN <cit.>. The axions must first escape the Sun and then decay before reaching Earth. The escape probability depends both on axion absorption and decay processes. Putting all of this together and setting BR_a γγ=1, we arrive at the flux of axions arriving at a detector orbiting the Earth,
Φ_γ/Φ_ν^(pp) = 0.54 |g_3aN|^2 [p_a/p_γ]^3 [ e^(-R_⊙/ℓ_abs) - e^(-d_⊙/ℓ_dec) ] ,
where ℓ_abs^-1 = ℓ_MFP^-1 + ℓ_dec^-1, with ℓ_MFP the averaged mean free path in the Sun and ℓ_dec the axion decay length. The coupling g_3aN is the isovector coupling strength of the axion to nucleons, and p_a/p_γ is the ratio of three-momenta between an axion and a photon emitted with E = 5.49 MeV. The pp neutrino flux is Φ_ν^(pp) = 6 × 10^10 cm^-2 s^-1. We account for axion absorption, Primakoff scattering, and axion-electron scattering in our calculation of ℓ_MFP^-1.
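For concreteness, the flux formula above can be evaluated directly; the sketch below assumes BR_{a→γγ} = 1 and takes the absorption and decay lengths as user-supplied inputs.

import numpy as np

PHI_NU_PP = 6e10                  # pp neutrino flux at Earth [cm^-2 s^-1]
R_SUN, D_SUN = 6.96e10, 1.496e13  # [cm]

def axion_photon_flux(g3aN, p_ratio, l_mfp, l_dec):
    # p_ratio = p_a/p_gamma at E = 5.49 MeV; l_mfp and l_dec in cm.
    l_abs = 1.0 / (1.0 / l_mfp + 1.0 / l_dec)
    window = np.exp(-R_SUN / l_abs) - np.exp(-D_SUN / l_dec)
    return 0.54 * abs(g3aN) ** 2 * p_ratio ** 3 * window * PHI_NU_PP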
Our results are shown in <ref>. We note that our exclusions depend on the axion-nucleon coupling, captured by g_3aN, and the decay constant g_aγγ. If g_aγγ vanishes at some scale μ=μ_0 but g_aee ≠ 0, then an effective g_aγγ ∼ (α/4π) g_aee/m_e will be generated via a 1-loop triangle diagram, and in this way one can re-cast our limits[This requires accounting for the branching ratio to photons, as well as adjusting the decay length.] in terms of those on g_aee. We do not include exclusions from SN 1987A typically plotted in the m_a-g_aγγ plane because the values of g_3aN that are required to produce a sufficient axion flux in the Sun lead to axion trapping within a core-collapse supernova <cit.>.[This occurs because axion-nucleon scattering leads to mean free paths much shorter than the typical size of a supernova, trapping the axions.] This is an important distinction between the hadronically coupled axion models we consider here and an axion-like particle which couples exclusively to photons (see e.g. <cit.>). The solar axion constraints we discuss here are therefore complementary to supernova cooling ones. If the axion-nucleon coupling, g_aN, is large enough to evade SN 1987A bounds via self-trapping, then it is also large enough to be probed with RHESSI data. Low-energy supernova observations have been used to place constraints on axions which decay in flight and deposit energy into the ejecta <cit.>. These constraints also disappear in the strong coupling regime, and are complementary to ours. Constraints from NA62 <cit.>, E787 <cit.>, and E949 <cit.> are subject to O(m_K^4/m_ρ^4) hadronic uncertainties in the prediction of K → a π <cit.>. Finally, our constraints on g_aγγ lie above the ceiling of searches performed with the Borexino collaboration <cit.> because we are sensitive to decay lengths much shorter than d_⊙. This is demonstrative of the way in which constraints from solar axions may complement existing search techniques using accelerator-based experiments, underground detectors, and astrophysical constraints.
Constraints from big bang nucleosynthesis (BBN) will generically apply, both because the axions we consider have lifetimes in the vicinity of a few seconds, and because the same reaction, p d → ^3 He γ, is a key driver of BBN. In the absence of any additional dark sector decay modes, measurements of N_eff will generically exclude axions with masses below 5 MeV or so. These constraints can be alleviated if the dark sector contains additional degrees of freedom, see e.g. <cit.>. Searches for gamma rays from the Quiet Sun offer a complementary direct probe of axion (or other light particle) production that is independent of early universe cosmology.
We consider a 90^∘ opening angle for our signal, meaning all decays between the Sun's surface and Earth's orbit contribute. The monoenergetic nature of the axion means the photon flux is constant in energy (see <ref> for more details on monoenergetic production). A parameter point is excluded if this flux exceeds 1.8 × 10^-3 s^-1 cm^-2 keV^-1, the background flux observed by RHESSI in the front segments for photon energies above 1 MeV.
§ MODEL-INDEPENDENT SEARCHES
Let us now consider a model-independent production of LLPs (here called ϕ) which decay via ϕ→γγ. In this simplified model, we consider all production to occur at the solar center, and ϕ only interacts with SM physics through its decay. We also assume there is no preferential direction for decay in the rest frame of ϕ, so the flux of photons is a uniform distribution between E_γ, min and E_γ, max where E_γ, max/min = 1/2 × ( E_ϕ±√(E_ϕ^2 - m_ϕ^2) ). Inverting this equation, we find E_ϕ≥ E_γ + m_ϕ^2/(4E_γ) (we will call this lowest energy E_ϕ,min). Therefore, if we know the rate of production R_ϕ and decay length λ as a function of E_ϕ, then we can determine the BSM flux of photons at Earth.
dΦ_γ/dE_γ = 2/(4π d_⊙^2) × ∫_{E_ϕ,min}^∞ dE_ϕ [ e^(-R_⊙/λ(E_ϕ)) - e^(-d_⊙/λ(E_ϕ)) ] / √(E_ϕ^2 - m_ϕ^2) · dR_ϕ/dE_ϕ .
One particularly well motivated morphology is where ϕ has a mono-energetic production spectrum. This would occur if ϕ is produced via a 2-body decay χ→ϕ X or via annihilation χχ→ϕ X for v_χ≪ 1. Performing the integral in <ref> with a delta-function distribution leads to a flux of photons that is constant in energy between E_γ, min and E_γ, max.
Remaining more agnostic to the source of ϕ production, we may consider a power-law distribution with respect to energy for E_b ≤ E_ϕ ≤ E_u:

dR_ϕ/dE_ϕ |_power = R_c E_ϕ^c Θ(E_u - E_ϕ) Θ(E_ϕ - E_b) .
For m_ϕ≪ E_γ, E_ϕ the photon flux is calculable in closed form,
Φ_γ/ E_γ|_power = 2 R_c/4 π d_⊙^2
× [ (R_⊙E/λ )^c (Γ (-c, R_⊙E/λ E_u ) - Γ (-c, R_⊙E/λ E_l) )
- (d_⊙E/λ)^c (Γ(-c, d_⊙E/λ E_u) - Γ (-c, d_⊙E/λ E_l) ) ] ,
with Γ(a, x) the incomplete gamma function, E_l = max{E_b,E_ϕ,min}, and λ is the decay length at characteristic energy E. We normalize to the total rate of ϕ produced, N_ϕ. For the mono-energetic case, we have N_ϕ = R_ϕ, while for the power-law production, we have
R_c = N_ϕ (c+1)/(E_u^(c+1) - E_b^(c+1)) for c ≠ -1 , and R_c = N_ϕ/log(E_u/E_b) for c = -1 .
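The general flux integral above can also be evaluated by direct quadrature instead of the closed form, e.g. as in the following sketch (energies in MeV, lengths in cm; the production spectrum and decay length are user-supplied callables and thus assumptions of the sketch):

import numpy as np
from scipy.integrate import quad

R_SUN, D_SUN = 6.96e10, 1.496e13  # [cm]

def photon_flux(E_gamma, m_phi, dR_dE, decay_length, E_max=20.0):
    # dR_dE(E_phi): production spectrum [s^-1 MeV^-1]; decay_length(E_phi) in cm.
    E_min = E_gamma + m_phi**2 / (4.0 * E_gamma)
    def integrand(E):
        lam = decay_length(E)
        window = np.exp(-R_SUN / lam) - np.exp(-D_SUN / lam)
        return window * dR_dE(E) / np.sqrt(E**2 - m_phi**2)
    val, _ = quad(integrand, E_min, E_max)
    return 2.0 * val / (4.0 * np.pi * D_SUN**2)  # [cm^-2 s^-1 MeV^-1]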
Constraints on the number of ϕ produced per second in the Sun are shown in <ref>. Constraints are set as described at the end of <ref>.
§ FUTURE PROSPECTS
In the above discussion, we have found that re-purposing existing RHESSI data is able to provide interesting constraints on light dark sectors with MeV scale LLPs. Our analysis should be viewed as a proof of principle and certainly underestimates the sensitivity of experiments like RHESSI to new physics models. The major limitations in our analysis are a lack of reliable peak-subtracted spectra and the ability to suppress backgrounds (see <cit.> for recent work in the keV regime for a more sophisticated statistical analysis). For example, much of the background for RHESSI comes not from solar activity but rather from cosmic ray interactions with the Earth's atmosphere i.e. the radiation comes from the rear rather than forward field of view. Much of this background can presumably be suppressed (or perhaps eliminated) with a future instrument, especially if a dedicated search is performed. In what follows we sketch potential improvements using a near-term MeV telescope. For concreteness we will anchor our discussion around the COSI satellite.[We thank Albert Shih for pointing out the COSI mission to us.]
RHESSI operated with minimal shielding to minimize weight. This made the instrument an effectively “all sky” observation with a high level of cosmic ray background activity. In contrast COSI will operate with active shielding, and its further use of Compton kinematic discrimination offers further background reduction <cit.>. Moreover, ongoing work to better understand gamma ray emission from the Quiet Sun will further improve on irreducible backgrounds <cit.>.
Other strategies that could be pursued with a future instrument go beyond the rate-only analysis presented above. For example, COSI will have 25% sky coverage and excellent angular resolution. One could image the MeV photon flux differential in both energy and angular position. Depending on the lifetime of the LLPs, a “halo” of photons could be searched for outside the solar corona. The shape of the photon distribution will be model dependent, but can be computed using the Monte Carlo simulations outlined above. Similarly, taking advantage of COSI's large field of view, other local planetary systems could be used to search for LLPs. This was suggested recently in the context of Jupiter, where the capture of light dark matter is better motivated <cit.>.
Finally, let us comment on a second channel of interest: LLP → e^+e^-. This may occur for a dark vector which dominantly decays via V → e^+e^-, and has recently been considered (in the context of large-volume underground detectors) for the same p d → ^3 He γ reaction considered here <cit.>. A search for electrons and/or positrons would require accurate modeling of propagation through magnetic fields in the vicinity of the Earth.
§ CONCLUSIONS AND OUTLOOK
We have discussed simple particle physics models that predict an MeV flux of photons produced by the Sun. The generic requirement is the existence of some LLP which can efficiently transport energy from the interior (fueled by nuclear reactions) to beyond the Sun's surface. Provided the LLP has a sizeable branching ratio to final states including at least one photon e.g. γγ, νγ, and/or e^+e^-γ final states, one can search for energetic gamma rays emanating from the Quiet Sun.
We find that constraints from existing data from RHESSI, with a very conservative analysis strategy, can probe small pockets of untouched parameter space for both MeV-scale axions and a neutrino dipole portal. In both cases, the RHESSI analysis provides complementary coverage to existing search strategies (including cosmological probes such as BBN).
Our major motivation is a simple proof of principle that MeV-scale LLPs with decay lengths larger than the radius of the Sun can be efficiently searched for using solar telescopes. The analysis presented here is conservative and fairly crude; we define exclusions by the condition that the BSM signal prediction exceeds the total signal observed in any energy window by RHESSI. Constraints and/or discovery potential could be substantially improved with a better understanding of instrument backgrounds and more sophisticated analysis techniques. For example, one could make use of angular profiles of incident photons to search for new physics, as an LLP flux will produce a photon flux outside the stellar corona with a predictable angular shape/morphology. We encourage future missions with MeV-scale instrumentation below the cut-off of Fermi-LAT, such as COSI <cit.>, to consider searches for BSM particles, with the Sun being a well-motivated engine for MeV-scale LLPs.
§ ACKNOWLEDGMENTS
This work benefited from feedback at the Simons Center for Geometry and Physics, and RP would like to specifically thank Simon Knapen, Rebecca Leanne, and Jessie Shelton for useful discussions. We thank Albert Shih for helpful discussions regarding the RHESSI instrument. We benefited from feedback on this manuscript from Rebecca Leanne and Elena Pinetti.
This work is supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632 and by the Walter Burke Institute for Theoretical Physics. RP acknowledges support from the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632 and the Neutrino Theory Network Program Grant under Award Number DEAC02-07CHI11359 and the US DOE under Award Number DE-SC0020250.
§ INEFFICIENT PRODUCTION MECHANISMS
In this section we discuss production mechanisms which we have found to be too inefficient to allow for detection prospects with our RHESSI analysis.
§.§ Mass-Mixing portal for HNLs
Another BSM model involving HNLs couples N directly to the active neutrinos through additional elements of the PMNS matrix <cit.>. The active neutrinos then contain a small admixture of the HNL along with the three known mass states,
ν_α = U_α N N + ∑_i=1^3 U_α iν_i ,
where U_α N represents the coupling of HNLs to active neutrinos. Since the Sun only has nuclear reactions that produce electron neutrinos, our constraint is on U_e N. The N flux from upscattering is subdominant by orders of magnitude to that from direct production. Therefore, the flux is given by rescaling the neutrino flux
Φ_N = |U_eN|^2 Φ_ν √(1 - m_N^2/E_N^2) .
For the masses considered here, there are only three decay channels: i) N → 3ν, ii) N → νγ, and iii) N → ν e^+ e^-. As with other production mechanisms, we only consider signals from photons. The geometry of this decay (into a massless neutrino and a photon) is identical to the case of the dipole portal. The decay rate for each of the processes follows the general form
Γ_N → SM∝ G_F^2 |U_e N|^2 m_N^5 ,
which has the steep power-law dependence on mass typical of weak decays. We find that, since the decay lengths always fall outside the range given in <ref>, the sensitivity of RHESSI is subdominant to searches at Borexino (which benefits from a large detector volume) and to direct laboratory searches (see <ref>).
§.§ Captured dark matter in the Sun
If heavy dark matter, χ, has interactions beyond gravity, it may scatter within large celestial bodies and become gravitationally captured. The Sun, being by far the most massive object in the solar system, is a strong candidate in searching for the signals from captured χ <cit.>.
For the case of symmetric dark matter with a long-lived particle mediator, there is the interaction χχ→LLPs. The energies of these final observable particles are 𝒪 (m_χ). However, as discussed in <cit.>, for thermal relic annihilation cross sections, short range interactions with SM, and m_χ below a few GeV, most of the χ evaporates from the Sun before annihilating. We note that for dark matter with long range interaction, evaporation may be suppressed <cit.> and RHESSI data could provide interesting limits on lighter dark matter in these scenarios. Also, Jupiter has low evaporation for m_χ > 30 MeV for long-range interactions (m_χ > 200 MeV - 1 GeV for short-range interactions) <cit.>.
We also considered the case of asymmetric dark matter with self-interactions via a scalar ϕ with a Yukawa-like interaction ℒ ⊃ χ̅χϕ. As there is no annihilation, in the absence of evaporation, the χ population grows indefinitely. Virialized dark matter passing through the Sun can scatter on the trapped overdensity and produce LLPs via the bremsstrahlung-like reaction χχ→χχϕ. In order to produce MeV gamma rays, we require heavy dark matter, m_χ ≳ 1 TeV, such that there is sufficient available kinetic energy m_χ v_χ^2 ≳ 1 MeV.[Dark matter-nucleon scattering cannot induce MeV bremsstrahlung (i.e. via χ N → χ N ϕ), because the available kinetic energy is set by m_N v_χ^2 ∼ 1 keV × (v_χ/10^-3)^2. This is most easily seen in the rest frame of the heavy dark matter.] In order to produce a sufficiently large flux of LLPs, we require a sizeable χχ→χχ cross section. This can only be achieved with a light mediator for TeV-scale (or heavier) dark matter. The cross section relies on small momentum transfers. Non-relativistic kinematics, however, demand a parametrically larger momentum transfer in the bremsstrahlung-like reaction than in elastic scattering. For example, demanding E_ϕ ∼ 𝒪(MeV) bremsstrahlung requires a momentum transfer on the order of Δp^2 ∼ m_χ E_ϕ ∼ (1 GeV)^2. Due to this kinematic suppression, we find that RHESSI is incapable of setting competitive limits even with the most generous/optimistic model-building choices to maximize the bremsstrahlung-like cross section.
|
http://arxiv.org/abs/2307.01198v1
|
20230703175826
|
Improved sampling via learned diffusions
|
[
"Lorenz Richter",
"Julius Berner",
"Guan-Horng Liu"
] |
cs.LG
|
[
"cs.LG",
"math.OC",
"math.PR",
"stat.ML"
] |
Improved sampling via learned diffusions

Lorenz Richter (Zuse Institute Berlin; dida Datenschmiede GmbH)*
Julius Berner (University of Vienna)*
Guan-Horng Liu (Georgia Institute of Technology)

*Equal contribution. Correspondence: Lorenz Richter <lorenz.richter@dida.do>

Keywords: Schrödinger bridge, sampling from densities, stochastic optimal control, diffusion-based generative modeling
Recently, a series of papers proposed deep learning-based approaches to sample from unnormalized target densities using controlled diffusion processes. In this work, we identify these approaches as special cases of the Schrödinger bridge problem, seeking the most likely stochastic evolution between a given prior distribution and the specified target. We further generalize this framework by introducing a variational formulation based on divergences between path space measures of time-reversed diffusion processes. This abstract perspective leads to practical losses that can be optimized by gradient-based algorithms and includes previous objectives as special cases. At the same time, it allows us to consider divergences other than the reverse Kullback-Leibler divergence that is known to suffer from mode collapse. In particular, we propose the so-called log-variance loss, which exhibits favorable numerical properties and leads to significantly improved performance across all considered approaches.
§ INTRODUCTION
Given a function ρ: ℝ^d → [0,∞), we consider the task of sampling from the density

p_target := ρ/Z with Z := ∫_{ℝ^d} ρ(x) dx,

where the normalizing constant Z is typically intractable. This represents a crucial and challenging problem in various scientific fields, such as Bayesian statistics, computational physics, chemistry, or biology <cit.>. Fueled by the success of denoising diffusion probabilistic modeling <cit.> and deep learning approaches to the Schrödinger bridge (SB) problem <cit.>, there is a significant interest in tackling the sampling problem by using stochastic differential equations (SDEs) which are controlled with learned neural networks to transport a given prior density p_prior to the target p_target.
Recent works include the Path Integral Sampler (PIS) and variations thereof <cit.>, the Time-Reversed Diffusion Sampler (DIS) <cit.>, as well as the Denoising Diffusion Sampler (DDS) <cit.>. While the ideas for such sampling approaches based on controlled diffusion processes date back to earlier work, see, e.g., <cit.>, the development of corresponding numerical methods based on deep learning has become popular in the last few years.
However, up to now, more focus has been put on generative modeling, where samples from p_target are available.
As a consequence, it seems that for the classical sampling problem, i.e., having only an analytical expression for ρ∝ p_target, but no samples, diffusion-based methods cannot reach state-of-the-art performance yet. Potential drawbacks might be stability issues during training, the need to differentiate through SDE solvers, or mode collapse due to the usage of objectives based on reverse Kullback-Leibler (KL) divergences, see, for instance, <cit.>.
In this work, we overcome these issues and advance the potential of sampling via learned diffusion processes toward more challenging problems. Our contributions can be summarized as follows:
* We provide a unifying framework for recently developed sampling methods based on learned diffusions, i.e., DIS, DDS, and PIS, from the perspective of measures on path space and time-reversals of controlled stochastic processes.
* This path space perspective, in consequence, allows us to consider arbitrary divergences for the optimization objective, whereas existing methods are solely based on minimizing a reverse KL divergence, which is prone to mode collapse.
* In particular, we propose the log-variance divergence that avoids differentiation through the SDE solver and allows to balance exploration and exploitation, resulting in significantly improved numerical stability and performance, see <Ref>.
§.§ Related work
We build our theoretical foundation on the variational formulation of SB problems proposed by <cit.>. While the numerical treatment of SB problems has classically been approached via iterative nested schemes, the approach in <cit.> uses backward SDEs (BSDEs) to arrive at a single objective based on a KL divergence. This objective includes the (continuous-time) ELBO of diffusion models <cit.> as a special case, which can also be approached from the perspective of optimal control <cit.>. For additional previous work on optimal control in the context of generative modeling, we refer to <cit.>.
We extend the variational formulation of SBs to different divergences and, in particular, propose the log-variance loss that has originally been introduced in <cit.>. Variants of this loss have previously only been analyzed in the context of variational inference <cit.> and neural solvers for partial differential equations (PDEs) <cit.>. Different from previous works, our objective incorporates the path space measures of two controlled SDEs.
However, we also show that this objective, as well as the one in <cit.>, does in general not have a unique solution as it lacks the entropy constraint of classical SB problems. Specifically, we provide a new derivation only relying on time-reversals of diffusion processes. The underlying ideas were established decades ago <cit.>, however, only recently applied to diffusion models <cit.> and SBs <cit.>. In special cases, we recover unique objectives corresponding to recently developed sampling methods, i.e., DIS, DDS, and PIS.
The most common methods to sample from unnormalized densities and compute normalizing constants are arguably Monte Carlo (MC) techniques. Specialized variations of, e.g., Annealed Importance Sampling (AIS) <cit.> or Sequential Monte Carlo <cit.> (SMC) are often regarded as the gold standard in the literature.
Even though MCMC methods are guaranteed to converge to the target distribution under mild assumptions, the convergence speed might be too slow in many practical settings <cit.>.
Variational methods such as mean-field approximations <cit.> and normalizing flows <cit.> provide an alternative. By fitting a parametric family of tractable distributions to the target density, the problem of density estimation is cast into an optimization problem.
As already observed in <cit.>, we note that one cannot directly leverage the connection of diffusion models to score matching <cit.> for the application of sampling from densities. However, the score matching objective has been employed to approximate the extended target distribution needed in the importance sampling step of AIS methods <cit.> and also in combination with importance sampling using the likelihood of the partially-trained diffusion model as proposal distribution <cit.>.
In this work, however, we want to focus on variational methods that directly fit a parametric family of tractable distributions (given by controlled SDEs) to the target density.
§.§ Outline of the article
The rest of the article is organized as follows. In <Ref> we provide an introduction to diffusion-based sampling from the perspective of path space measures and time-reversed SDEs. This can be understood as a unifying framework allowing for generalizations to divergences other than the KL divergence. We propose the log-variance divergence and prove that it exhibits superior properties. In <Ref>, we will subsequently outline multiple connections to known methods, such as SBs in <Ref>, diffusion-based generative modeling (i.e., DIS) in <Ref>, and approaches based on reference processes (i.e., PIS and DDS) in <Ref>. For all considered methods, we can find compelling numerical evidence for the superiority of the log-variance divergence, see <Ref>.
§ DIFFUSION-BASED SAMPLING
In this section, we will reformulate our sampling problem as a time-reversal of diffusion processes. This perspective can be understood as a change of measure on path space and we present two divergences that lend themselves to practical objectives. Let us first define our notation and setting.
§.§ Notation and setting
We denote the density of a random variable X by p_X. For a suitable ℝ^d-valued stochastic process X=(X_t)_t∈[0,T] we define its density p_X w.r.t. the Lebesgue measure by
p_X(·,t) := p_X_t, t∈[0,T].
For suitable functions f∈ C(ℝ^d × [0,T], ℝ) and w∈ C(ℝ^d × [0,T], ℝ^d), we further define
R_f(X) := ∫_0^T f(X_s,s) ds
and
S_w(X) := ∫_0^T w(X_s,s) ·dW_s,
where W is a standard d-dimensional Brownian motion.
We denote by 𝒫 the set of all probability measures on the space of continuous functions C([0,T],ℝ^d) and define the path space measure ℙ_X∈𝒫 as the law of X. For a time-dependent function f, we denote by f⃖ its time-reversal, given by
f⃖(t) := f(T-t).
Finally, we assume that the coefficient functions of all appearing SDEs are sufficiently regular such that Novikov's condition is satisfied and such that the SDEs admit
unique strong solutions with smooth and strictly positive densities p_X_t for t∈ (0,T).
§.§ Sampling as time-reversal problem
The goal of diffusion-based sampling is to sample from the density p_target = ρ/ by transporting a prior density p_prior via controlled stochastic processes. We consider the processes described by the generative SDE
d X_s^u = (μ + σ u)(X_s^u, s) ds + σ(s) dW_s,
X^u_0 ∼ p_prior,
and the inference SDE
d Y_s^v = (-μ⃖ + σ⃖ v)(Y_s^v, s) ds + σ⃖(s) dW_s,
Y^v_0 ∼ p_target,
where we aim to identify control functions u, v∈𝒰 in a suitable space of admissible controls
𝒰⊂ C(ℝ^d× [0,T],ℝ^d)
in order to achieve X_T^u ∼ p_target and Y_T^v ∼ p_prior. Specifically, we seek controls satisfying
p_prior ⇄ p_target (forward via X^u, backward via Y^v)
in the sense that Y^v is the time-reversed process of X^u and vice versa, i.e., p_X^u = p⃖_Y^v. In this context, we recall the following well-known results on the time-reversal of stochastic processes <cit.>.
The time-reversed SDE Y⃖^v given by
dY⃖_s^v = (μ + σũ⃖)(Y⃖^v_s, s) ds + σ(s) dW_s,
Y⃖_0^v ∼ Y^v_T,
with ũ := σ⃖^⊤∇log p_Y^v - v satisfies that p_Y⃖^v = p⃖_Y^v.
The result can be derived by comparing the Fokker-Planck equations governing p_Y⃖^v and p⃖_Y^v, see, e.g., <cit.>.
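For concreteness, the transport described by the generative SDE can be simulated with a simple Euler–Maruyama scheme. The following minimal sketch is our own illustration (function names and signatures are assumptions, not from any referenced codebase); it integrates dX_s = (μ + σu)(X_s, s) ds + σ(s) dW_s for user-supplied drift mu, scalar diffusion sigma, and control u.

```python
import numpy as np

def simulate_generative_sde(mu, sigma, u, x0, T=1.0, n_steps=200, rng=None):
    """Euler-Maruyama sketch for dX_s = (mu + sigma * u)(X_s, s) ds + sigma(s) dW_s.

    mu(x, s) -> (d,) drift; sigma(s) -> scalar diffusion; u(x, s) -> (d,) control;
    x0 -> (d,) initial sample X_0 ~ p_prior. Returns trajectory of shape (n_steps + 1, d).
    """
    rng = np.random.default_rng() if rng is None else rng
    dt = T / n_steps
    x = np.asarray(x0, dtype=float).copy()
    path = [x.copy()]
    for k in range(n_steps):
        s = k * dt
        x = x + (mu(x, s) + sigma(s) * u(x, s)) * dt \
              + sigma(s) * np.sqrt(dt) * rng.normal(size=x.shape)
        path.append(x.copy())
    return np.stack(path)
```

With u given by a trained network, the terminal slice of the returned trajectory provides approximate samples from p_target.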
We can now view <Ref> from the perspective of path space measures on the space of trajectories C([0,T],^d), as detailed in the sequel.
[Time-reversal]
Let ℙ_X^u be the path space measure of the process X^u defined in (<ref>) and ℙ_Y⃖^v the path space measure of Y⃖^v, the time-reversal of Y^v, given in (<ref>). Further, let
D:𝒫×𝒫→ℝ_≥ 0
be a divergence, i.e., a non-negative function satisfying that D(ℙ,ℚ)=0 if and only if ℙ=ℚ. We aim to find optimal controls u^*, v^* s.t.
u^*, v^* ∈ argmin_u, v ∈𝒰×𝒰 D(ℙ_X^u | ℙ_Y⃖^v).
Let us note that <Ref> aims to reverse the processes X^u and Y^v with respect to each other while obeying the respective initial values X^u_0 ∼ p_prior and Y^v_0 ∼ p_target, as defined in (<ref>) and (<ref>). For the actual computation of suitable divergences, the following formula will be helpful.
Let X^w be a process as defined in (<ref>) with u being replaced by w∈𝒰. We can compute the Radon-Nikodym derivative as
dℙ_X^u/dℙ_Y⃖^v(X^w) = Z exp(R_f_u,v,w^SB + S_u+v + B)(X^w)
with
B(X^w) := log p_prior(X_0^w)/ρ(X_T^w)
and
f_u,v,w^SB := (u + v) ·(w + (v-u)/2) + ∇·(σ v - μ),
where S and R are defined as in (<ref>) and (<ref>).
The proof mainly relies on Girsanov's theorem, Itô's lemma, and the HJB equation governing log p_Y^v, see <Ref>.
Using the representation of the Radon-Nikodym derivative in <Ref>, we may, in principle, choose any suitable divergence in order to approach <Ref>. In the following we will analyze the default setting, i.e., a KL divergence, and propose an alternative, the so-called log-variance divergence, which offers several numerical advantages.
§.§ Comparison of the KL and log-variance divergence
Most works in the context of diffusion-based sampling rely on the KL divergence. Choosing D = D_KL, which implies w = u in (<ref>), we can readily compute
D_KL( ℙ_X^u | ℙ_Y⃖^v) = 𝔼[ (R_f_u,v,u^SB + B)(X^u) ] + log Z
with
f_u,v,u^SB = ‖u + v‖^2/2 + ∇·(σ v - μ),
where we used the fact that the stochastic integral S_u+v has vanishing expectation.
Note that in practice we minimize the objective
ℒ_KL(u, v) := D_KL( ℙ_X^u | ℙ_Y⃖^v) - log Z.
This objective is identical to the one derived in <cit.> for the SB problem, see also <Ref> and <Ref>. However, the KL divergence is known to have some evident drawbacks, such as mode collapse <cit.> or a potentially high variance of Monte Carlo estimators <cit.>. To address those issues, we propose another divergence that has been originally suggested in <cit.>.
Let ℙ¹, ℙ² ∈ 𝒫 and let ℚ ∈ 𝒫 be a reference measure. The log-variance divergence between the measures ℙ¹ and ℙ² w.r.t. the reference ℚ is defined as
D_LV^ℚ(ℙ¹, ℙ²) := 𝕍_ℚ[ log dℙ¹/dℙ²].
Note that the log-variance divergence is symmetric in ℙ¹ and ℙ² and actually corresponds to a family of divergences, parametrized by the reference measure ℚ. Obvious choices in our setting are
ℚ := ℙ_X^w, ℙ¹ := ℙ_X^u, and ℙ² := ℙ_Y⃖^v,
resulting in the log-variance loss
ℒ_LV^w(u, v) := D_LV^ℙ_X^w(ℙ_X^u, ℙ_Y⃖^v)
= 𝕍[ (R_f_u,v,w^SB+S_u+v+ B)(X^w) ].
Since the variance is shift-invariant, we can omit log Z in the above objective.
Compared to the KL-based loss (<ref>), the log-variance loss (<ref>) exhibits the following beneficial properties.
First, by the choice of the reference measure ℙ_X^w, one can balance exploitation and exploration. To exploit the current control u, one can set
w = u,
but one can also deviate from this control to prevent mode collapse. Next, note that the log-variance loss in (<ref>) does not require the derivative of the process X^w w.r.t. the control w (which, for the case w=u, is implemented by detaching or stopping the gradient, see <Ref>). In contrast, the KL-based loss in (<ref>) demands differentiating X^u w.r.t. the control u, requiring differentiation through the SDE solver and resulting in higher computational costs. Particularly interesting is the following property, sometimes referred to as sticking-the-landing <cit.>. It states that the gradients of the log-variance loss have zero variance at the optimal solution. This property does, in general, not hold for the KL-based loss, such that variants of gradient descent might oscillate around the optimum.
Let ℒ̂_LV be the Monte Carlo estimator of the log-variance loss in (<ref>) and let the controls u=u_θ and v=v_γ be parametrized by θ and γ. The variances of the respective derivatives vanish at the optimal solution (u^*,v^*)=(u_θ^*,v_γ^*), i.e.,
𝕍[∇_θℒ̂_LV^w(u_θ^*, v_γ^*)] = 0
and
𝕍[∇_γℒ̂_LV^w(u_θ^*, v_γ^*)] = 0,
for all w ∈𝒰. For the Monte Carlo estimator ℒ̂_KL of the KL-based loss in (<ref>) the above variances are, in general, not vanishing.
The derivative and its variance can be calculated using <Ref>, see <Ref> and <cit.>.
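To illustrate how the log-variance loss can be optimized in practice, the following PyTorch sketch evaluates a Monte Carlo version of ℒ_LV^w with w = u along Euler–Maruyama trajectories. All names are illustrative assumptions; the running cost f and boundary term B are passed in by the user so that the same skeleton covers the SB, DIS, and PIS variants. Note the detach of the state, which prevents differentiation through the SDE solver, and that log Z can be dropped by shift-invariance.

```python
import torch

def log_variance_loss(u, v, f, boundary, x0, mu, sigma, T=1.0, n_steps=100):
    """Monte Carlo sketch of L_LV^w(u, v) = Var[(R_f + S_{u+v} + B)(X^w)] with w = u.

    u, v: control networks, (x, s) -> (batch, d); f: running cost f(x, s, u_x, v_x) -> (batch,);
    boundary: B(x_0, x_T) -> (batch,), e.g. log p_prior(x_0) - log rho(x_T);
    x0: (batch, d) samples from p_prior; mu(x, s) -> (batch, d); sigma(s) -> float.
    """
    dt = T / n_steps
    x = x0
    r_int = torch.zeros(x0.shape[0], device=x0.device)  # accumulates R_f
    s_int = torch.zeros(x0.shape[0], device=x0.device)  # accumulates S_{u+v}
    for k in range(n_steps):
        s = k * dt
        dW = torch.randn_like(x) * dt**0.5
        u_x, v_x = u(x, s), v(x, s)
        r_int = r_int + f(x, s, u_x, v_x) * dt
        s_int = s_int + ((u_x + v_x) * dW).sum(dim=-1)
        # detach: no gradients flow through the SDE solver (trajectory is X^w with w = u)
        x = (x + (mu(x, s) + sigma(s) * u_x) * dt + sigma(s) * dW).detach()
    log_rnd = r_int + s_int + boundary(x0, x)  # log Z omitted (shift-invariance)
    return log_rnd.var()
```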
§ CONNECTIONS AND EQUIVALENCES OF DIFFUSION-BASED SAMPLING APPROACHES
In general, there are infinitely many solutions to <Ref> and, in particular, to our objectives (<ref>) and (<ref>). In fact, Girsanov's theorem shows that the objectives only enforce Nelson's identity <cit.>, i.e.,
u^* + v^* = σ^⊤∇log p_X^u^* = σ^⊤∇log p⃖_Y^v^*,
see also the proof of <Ref>.
In this section, we show how our setting generalizes existing diffusion-based sampling approaches which in turn ensure unique solutions to <Ref>. For each approach, we will derive the corresponding versions of the log-variance loss (<ref>).
§.§ Schrödinger bridge problem (SB)
Out of all possible solutions u^* fulfilling (<ref>), the Schrödinger bridge problem considers the solution u^* that minimizes the KL divergence
D_KL(ℙ_X^u^* | ℙ_X^r)
to a given reference process X^r, defined as in (<ref>) with u replaced by r∈𝒰. Traditionally, the choice r=0, i.e., the uncontrolled process X^0 is considered.
Defining
f_u,r,w^ref := (u - r) ·(w - (u+r)/2),
Girsanov's theorem shows that
dℙ_X^u/dℙ_X^r(X^w) = exp(R_f_u,r,w^ref + S_u-r)(X^w),
which implies that
D_KL(ℙ_X^u | ℙ_X^r) = 𝔼[ R_f^ref_u,r,u(X^u)],
see, e.g., <cit.> and the proof of <Ref>. The SB objective can thus be written as
min_u ∈𝒰 𝔼[ R_f_u,r,u^ref(X^u) | X_T^u∼ p_target],
see <cit.>. We note that the above can also be interpreted as an entropy-regularized optimal transport problem <cit.>. The entropy constraint in (<ref>) can also be combined with our objective in (<ref>) by considering, for instance,
min_u, v ∈𝒰×𝒰{𝔼[ R_f_u,r,u^ref(X^u)] + λ D(ℙ_X^u | ℙ_Y⃖^v)},
where λ∈(0,∞) is a sufficiently large Lagrange multiplier. In <Ref> we show how the SB problem (<ref>) can be reformulated as a system of coupled PDEs or BSDEs, which can alternatively be used to regularize <Ref>, see also <cit.>. Interestingly, the BSDE system recovers our KL-based objective in (<ref>), as originally derived in <cit.>.
Note that via Nelson's identity (<ref>), an optimal solution u^* to the SB problem uniquely defines an optimal control v^* and vice versa. For special cases of SBs, we can calculate such v^* or an approximation
v̅≈ v^*.
Fixing v = v̅ in (<ref>) and only optimizing for u appearing in the generative process (<ref>) then allows us to attain unique solutions to (an approximation of) <Ref>. We note that the approximation v̅≈ v^* incurs an irreducible loss given by
dℙ_X^u^*/dℙ_Y⃖^v̅(X^w) =
dℙ_Y⃖^v^*/dℙ_Y⃖^v̅(X^w),
thus requiring an informed choice of v̅ and p_prior, such that Y⃖^v̅≈ Y⃖^v^*. We will consider two such settings in the following sections.
§.§ Diffusion-based generative modeling (DIS)
We may set
v̅ := 0,
which can be interpreted as a SB with
u^*=r=σ^⊤∇log p⃖_Y^0
and p_prior = p_Y^0_T, such that the entropy constraint (<ref>) can be minimized to zero. Note though, that this only leads to feasible sampling approaches if the functions μ and σ in the SDEs are chosen such that the distribution of p_Y^0_T is (approximately) known and such that we can easily sample from it. In practice, one chooses functions μ and σ such that
p_Y^0_T≈ p_prior := 𝒩(0, ν^2 I),
see Section <ref>. Related approaches are often called diffusion-based generative modeling or denoising diffusion probabilistic modeling since the (optimally controlled) generative process X^u^* can be understood as the time-reversal of the process Y^0 that moves samples from the target density to Gaussian noise <cit.>.
Let us recall the notation from <Ref> and define
f^DIS_u,w := f^SB_u,0,w = u · w - ‖u‖^2/2 - ∇·μ.
Setting v = 0 in (<ref>), we directly get the loss
ℒ_KL(u) = 𝔼[ (R_f^DIS_u,u + B)(X^u) ],
which corresponds to the Time-Reversed Diffusion Sampler (DIS) derived in <cit.>. From (<ref>), we analogously obtain the related log-variance loss
ℒ_LV^w(u) = 𝕍[ (R_f^DIS_u,w +S_u + B)(X^w)].
§.§ Time-reversal of reference processes (PIS & DDS)
In general, we may also set
v̅ := σ⃖^⊤∇log p⃖_X^r - r⃖.
Via <Ref> this choice implies
ℙ_X^r = ℙ_Y⃖^v̅, ref,
where Y^v, ref is the process Y^v as in (<ref>), however, with p_target replaced by the density p_ref := p_X^r_T, i.e.,
d Y^v, ref_s = (-μ⃖ + σ⃖ v)(Y^v, ref_s, s) ds + σ⃖(s) dW_s,
Y^v, ref_0 ∼ p_ref.
Since Y⃖^v̅, ref is the time-reversal of the reference process X^r, we note that the optimal control v^*, corresponding to the solution u^* of the SB problem in (<ref>), minimizes
D_KL(ℙ_Y^v | ℙ_Y^v̅, ref)
among all controls v with Y^v_T ∼ p_prior.
Using (<ref>) with p_ref instead of p_target=ρ/Z, we obtain that
1 = dℙ_X^r/dℙ_Y⃖^v̅, ref(X^w)
= p_prior(X_0^w)/p_ref(X_T^w) exp(R_f_r,v̅,w^SB + S_r+v̅)(X^w).
This leads to the following alternative representation of <Ref>.
Assuming (<ref>), it holds that
dℙ_X^u/dℙ_Y⃖^v̅(X^w) = Z exp(R_f_u,r,w^ref + S_u-r + B^ref)(X^w),
where f_u,r,w^ref is defined as in (<ref>) and
B^ref(X^w) := log p_ref/ρ(X^w_T).
The result follows from dividing (<ref>) by (<ref>).
Note that computing the Radon-Nikodym derivative in <Ref> requires choosing r, p_prior, μ, and σ such that p_ref = p_X^r_T is tractable[In general, it suffices to be able to compute p_X_T^r up to its normalizing constant.]. For suitable choices of r (see below), one can, for instance, use the SDEs with tractable densities stated in <Ref> with p_prior = δ_x_0, p_prior = 𝒩(0,ν^2 I), or a mixture of such distributions. Recalling (<ref>) and the choice
v̅ := σ⃖^⊤∇log p⃖_X^r - r⃖,
we also need to guarantee that Y⃖^v̅≈ Y⃖^v^*. Let us now outline two such cases in the following.
PIS: We first consider the case r := 0. <Ref> and choosing D = D_KL in <Ref> then yield the objective
ℒ_KL(u) = D_KL( ℙ_X^u | ℙ_Y⃖^v̅) - log Z
=𝔼[(R_f_u,0,u^ref + B^ref)(X^u) ].
This objective has previously been considered by <cit.> and corresponding numerical algorithms, referred to as Path Integral Sampler (PIS) in <cit.>, have been independently presented in <cit.>. Choosing D = D_LV, we get the corresponding log-variance loss
ℒ_LV^w(u) = 𝕍[ (R_f_u,0,w^ref+ S_u + B^ref)(X^w)],
which has already been stated in <cit.>. Typically, this objective is used with
p_prior := δ_x_0,
since Doob's h-transform guarantees that v̅=v^*, i.e., we can solve the SB exactly, see <cit.> and also <Ref>. In this special case, the SB is often referred to as a Schrödinger half-bridge.
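For r = 0 and p_prior = δ_x_0, the reference marginal p_X^0_T is Gaussian with mean x_0 and covariance (∫_0^T σ²(s) ds) I (cf. the VE SDE in <Ref>), so the boundary term B^ref is available in closed form. A small sketch of its evaluation follows; all names are our own illustrative choices.

```python
import numpy as np

def b_ref_pis(x_T, log_rho, x0, sigma2_int):
    """B^ref(X^w) = log p_ref(X_T) - log rho(X_T) for PIS, where r = 0 and
    p_prior = delta_{x0} make p_ref = p_{X^0_T} Gaussian with covariance
    sigma2_int * I, sigma2_int = int_0^T sigma(s)^2 ds (a sketch, not library code).
    """
    d = x_T.shape[-1]
    sq = ((x_T - x0) ** 2).sum(axis=-1)
    log_p_ref = -0.5 * (d * np.log(2 * np.pi * sigma2_int) + sq / sigma2_int)
    return log_p_ref - log_rho(x_T)
```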
DDS: Next, we consider the choices
r := σ^⊤∇log p⃖_Y^0,ref, v̅ := 0, and p_prior := p_Y^0,ref_T,
which yields a special case of the setting from <Ref>.
Using <Ref>, we obtain the objective
ℒ_KL(u) =
𝔼[(R_f_u,r,u^ref + B^ref)(X^u) ].
This corresponds to the Denoising Diffusion Sampler (DDS) objective stated by <cit.> when choosing μ and σ such that Y^0 is a VP SDE, see <Ref>. Choosing the invariant distribution
p_ref𝒩(0,ν^2I)
of the VP SDE, see (<ref>) in the appendix, we have that
p_X^r(·,t) = p_Y^0,ref(·,t) = p_ref = p_prior, t∈[0,T],
and, in particular,
r(x,t) = -σ^⊤ x/ν^2.
The corresponding log-variance loss can now readily be computed as
ℒ_LV^w(u) = 𝕍[ (R_f_u,r,w^ref+ S_u-r + B^ref)(X^w)].
We refer to <Ref> for a comparison of our objectives.
The expression in <Ref> can also be derived via
dℙ_X^u/dℙ_Y⃖^v̅(X^w) = dℙ_X^u/dℙ_X^r(X^w) dℙ_Y⃖^v̅, ref/dℙ_Y⃖^v̅(X^w)
= dℙ_X^u/dℙ_X^r(X^w) p_X_T^r/p_target(X^w_T),
where the first factor can be computed as in (<ref>). Yet another viewpoint is based on importance sampling in path space, see, e.g., <cit.>. Since our goal is to find an optimal control u^* such that we get samples X_T^u^*∼ p_target, we may define our target path space measure ℙ_X^u^* via
dℙ_X^u^*/dℙ_X^r(X^w) = p_target/p_X^r_T(X^w_T).
We can then compute
dℙ_X^u/dℙ_X^u^*(X^w) = dℙ_X^u/dℙ_X^r(X^w) dℙ_X^r/dℙ_X^u^*(X^w),
which, together with (<ref>), is equivalent to the expression in <Ref>. Note that in the importance sampling perspective we do not need the concept of time-reversals.
§ NUMERICAL EXPERIMENTS
In this section, we compare the KL-based loss with the log-variance loss on the three different approaches, SB, PIS, and DIS, introduced in Sections <ref>, <ref>, and <ref>. As DDS can be seen as a special case of DIS (both with v̅=0), we do not consider it separately. For our PIS experiments, we follow the implementation of <cit.> and only change the objective to ℒ_LV^w with w=u. For DIS, we also use the models from <cit.>, incorporating the density p_prior and a variance-preserving SDE (see <Ref>) similar to <cit.>. For SB, we analogously approach the general loss (<ref>), which corresponds to the setting in <cit.> adapted to unnormalized densities.
We can demonstrate that the appealing properties of the log-variance loss can indeed lead to significant performance improvements for all approaches. Note that we always compare the same settings, in particular, the same number of target evaluations, for both the log-variance and KL-based losses and use sufficiently many gradient steps to reach convergence, see <Ref> for details. Still, we observe that qualitative differences between the two losses are consistent across various hyperparameter settings.
§.§ Benchmark problems
We shall evaluate the different methods on the following three numerical benchmark examples.
Gaussian mixture model (GMM): We consider the density
ρ(x) = p_target(x) = 1/m∑_i=1^m 𝒩(x;μ_i, Σ_i).
Specifically, we choose m=9, Σ_i=0.3 I, and
(μ_i)_i=1^9= {-5,0,5}×{-5,0,5}⊂ℝ^2
to obtain well-separated modes, see also <Ref>.
Funnel: The 10-dimensional Funnel distribution <cit.> is a challenging example often used to test MCMC methods. It is given by the density
ρ(x) = p_target(x) = 𝒩(x_1; 0, η^2) ∏_i=2^d 𝒩(x_i ; 0, e^x_1)
for x=(x_i)_i=1^10∈ℝ^10 with η=3.
Double well (DW):
A typical problem in molecular dynamics considers sampling from the stationary distribution of a Langevin dynamics. In our example we shall consider a d-dimensional double well potential, corresponding to the (unnormalized) density
ρ(x) = exp(-∑_i=1^m (x_i^2 - δ)^2 - 1/2∑_i=m+1^d x_i^2)
with m ∈ ℕ combined double wells and a separation parameter δ∈ (0, ∞), see also
<cit.> and <Ref>. Note that, due to the double well structure of the potential, the density contains 2^m modes. For these multimodal examples we can compute a reference solution by numerical integration since ρ factorizes in the dimensions.
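For reference, all three benchmark (unnormalized) log-densities can be written down in a few lines. The sketch below follows the formulas above; the default values of var, eta, m, and delta are illustrative assumptions.

```python
import numpy as np

def log_rho_gmm(x, means, var=0.3):
    """Unnormalized log-density of the GMM (equal weights, Sigma_i = var * I)."""
    sq = ((x[None, :] - means) ** 2).sum(axis=-1)          # squared distances to means
    d = x.shape[-1]
    log_comps = -0.5 * sq / var - 0.5 * d * np.log(2 * np.pi * var)
    return np.logaddexp.reduce(log_comps) - np.log(len(means))

def log_rho_funnel(x, eta=3.0):
    """Log-density of the Funnel distribution N(x_1; 0, eta^2) prod_i N(x_i; 0, e^{x_1})."""
    lp = -0.5 * (x[0] ** 2 / eta**2 + np.log(2 * np.pi * eta**2))
    lp += -0.5 * np.sum(x[1:] ** 2 * np.exp(-x[0]) + x[0] + np.log(2 * np.pi))
    return lp

def log_rho_dw(x, m=5, delta=4.0):
    """Unnormalized log-density of the double well with m combined wells."""
    return -np.sum((x[:m] ** 2 - delta) ** 2) - 0.5 * np.sum(x[m:] ** 2)
```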
§.§ Results
Let us start with the SB approach and the general losses in (<ref>) and (<ref>). <Ref> shows that the log-variance loss can improve our considered metrics. However, we note that choosing an appropriate prior p_prior (with sufficient mass at the modes of the target) was necessary to make our algorithms converge, see <Ref>. While the general setting of SB enables us to incorporate such prior knowledge, the framework suffers from reduced efficiency and numerical instabilities. These issues are commonly observed in the context of SBs <cit.> and might be rooted in the non-uniqueness of the optimal control (cf. <Ref>) and the fact that two controls need to be optimized. For the more challenging problems, the SB approach did not achieve satisfying results. Since such pathologies do not appear in the special cases of DIS and PIS, we shall focus on them separately in the sequel.
We observe that the log-variance loss significantly improves both DIS and PIS across our considered benchmark problems and metrics, see <Ref>. The improvements are quite remarkable considering that we only replaced the KL-based loss ℒ_KL by the log-variance loss ℒ_LV without tuning the hyperparameter for the latter loss. In the few cases, where the KL divergence performs better, the difference seems rather insignificant. In particular, <Ref> show that the log-variance loss successfully counteracts mode-collapse, leading to quite substantial improvements.
§ CONCLUSION
In this work, we provide a novel unifying perspective on diffusion-based generative modeling that is based on path space measures of time-reversed diffusion processes. In principle, this perspective allows us to consider arbitrary divergences between such measures as objectives for the corresponding task of interest.
While the KL divergence yields already known objectives, we find that choosing the log-variance divergence leads to novel algorithms which are particularly useful for the task of sampling from (unnormalized) densities. Specifically, this divergence exhibits beneficial properties, such as lower variance, computational efficiency, and exploration-exploitation trade-offs. We can demonstrate in multiple numerical examples that the log-variance loss greatly improves sampling quality across a range of metrics. We believe that problem- and approach-specific finetuning might further enhance the performance of the log-variance loss, thereby paving the way for competitive diffusion-based sampling approaches.
Based on our work, one could also explore more divergences, e.g., the family of α-divergences, see <cit.>. Finally, we anticipate further performance improvements by combining diffusion-based samplers with neural solvers for the optimality PDEs or MCMC methods, as has been successfully done for normalizing flows <cit.>.
§ ACKNOWLEDGEMENTS
The research of L.R. was funded by Deutsche Forschungsgemeinschaft (DFG) through the grant CRC 1114 Scaling Cascades in Complex Systems (project A05, project number 235221301). The computational results have been achieved in part using the Vienna Scientific Cluster (VSC).
§ APPENDIX
§.§ Proofs
Let us define the path space measures ℙ_X^u,x and ℙ_Y⃖^v,x as the laws of X^u and Y⃖^v conditioned on X^u_0 = x and Y⃖^v_0 = x with x∈ℝ^d, respectively. We can then compute
log dℙ_X^u/dℙ_Y⃖^v(X^w) = log dℙ_X^u,x/dℙ_Y⃖^v,x(X^w) + log dℙ_X^u_0/dℙ_Y⃖^v_0(X_0^w)
= log dℙ_X^u,x/dℙ_Y⃖^v,x(X^w) + log p_prior(X^w_0)/p⃖_Y^v(X^w_0,0).
We follow <cit.> and first note that the time-reversal of the process Y^v defined in (<ref>) is given by
dY⃖_s^v = (μ + σσ^⊤∇ g - σ v)(Y⃖^v_s, s) ds + σ(s) dW_s,
where we abbreviate g := log p⃖_Y^v, see <Ref>.
Let us further define the short-hand notations h := u + v - σ^⊤∇ g and b := μ + σ(u - h). Then, we can write the SDEs in (<ref>) and (<ref>) as
dX^u_s = (b+σ h)(X_s^u, s) ds + σ(s) dW_s,
dY⃖_s^v = b(Y⃖_s^v, s) ds + σ(s) dW_s.
We can now apply Girsanov's theorem <cit.> to rewrite the logarithm of the Radon-Nikodym derivative in (<ref>) as
log dℙ_X^u,x/dℙ_Y⃖^v,x(X^w)
= ∫_0^T (σ^-⊤h )(X_s^w, s) ·d X^w_s - ∫_0^T ( σ^-1 b · h)(X_s^w, s) ds - 1/2∫_0^T ‖h(X_s^w, s)‖^2 ds
= ∫_0^T ( (w - u) · h + 1/2‖h‖^2 )(X_s^w, s) ds + S_h(X^w)
= ∫_0^T ( (w - u) ·( u + v - σ^⊤∇ g) + 1/2‖u + v - σ^⊤∇ g‖^2 )(X_s^w, s) ds + S_h(X^w)
= R_f_u,v,w^SB(X^w) - ∫_0^T ( ∇· (σ v - μ) + (v + w)·σ^⊤∇ g - 1/2‖σ^⊤∇ g‖^2 )(X_s^w, s) ds + S_h(X^w),
Further, we may apply Itô's lemma to the function g to get
g(X^w_T, T) - g(X^w_0, 0) = ∫_0^T ( ∂_s g + ∇ g · (μ + σ w) + 1/2 Tr(σσ^⊤∇^2 g) )(X_s^w, s) ds + ∫_0^T σ^⊤∇ g(X_s^w, s) ·d W_s.
Noting that g=log p⃖_Y^v fulfills the Hamilton-Jacobi-Bellman equation <cit.>
∂_s g = - 1/2 Tr( σσ^⊤∇^2 g) + ( σ v - μ) ·∇ g + ∇· (σ v - μ) - 1/2‖σ^⊤∇ g‖^2,
we get
g(X^w_T, T) - g(X^w_0, 0) = ∫_0^T ( ∇· (σ v - μ) + ( v + w) ·σ^⊤∇ g - 1/2‖σ^⊤∇ g‖^2)(X_s^w, s) ds + ∫_0^T σ^⊤∇ g(X_s^w, s) ·d W_s.
Finally, combining this with (<ref>) and (<ref>) and noting that
g(X^w_T, T) = log p⃖_Y^v(X^w_T, T) = log p_Y^v(X^w_T, 0) = log p_target(X^w_T),
yields the desired expression.
Let us first recall the notion of Gâteaux derivatives, see <cit.>.
We say that ℒ: 𝒰×𝒰→ℝ_≥ 0 is Gâteaux differentiable at u ∈𝒰 if for all v, ϕ∈𝒰 the mapping
ε↦ℒ(u+ εϕ, v)
is differentiable at ε=0.
The Gâteaux derivative of ℒ w.r.t. u in direction ϕ is then defined as
δ/δ uℒ(u, v; ϕ) := d/dε|_ε=0ℒ(u+ εϕ, v).
The derivative of ℒ w.r.t. v is defined analogously. Let now u = u_θ and v = v_γ be parametrized[We only assume that θ and γ are in the same space ℝ^p for notational simplicity.] by θ∈ℝ^p and γ∈ℝ^p. Relating the Gâteaux derivatives to partial derivatives w.r.t. θ and γ, respectively, let us note that we are particularly interested in the directions ϕ=∂_θ_iu_θ and ϕ=∂_γ_iv_γ for i ∈{1, …, p}. This choice is motivated by the chain rule of the Gâteaux derivative, which, under suitable assumptions, states that
∂_θ_iℒ(u_θ, v_γ) = δ/δ u|_u= u_θℒ(u, v_γ; ∂_θ_i u_θ) and ∂_γ_iℒ(u_θ, v_γ) = δ/δ v|_v= v_γℒ(u_θ, v; ∂_γ_i v_γ).
Analogous to the computations in <cit.>, the Gâteaux derivative of the Monte Carlo estimator ℒ̂^w_LV of the log-variance loss ℒ_LV^w in (<ref>) with K∈ℕ samples is given by
δ/δ u ℒ̂^w_LV(u, v; ϕ) = 2/K∑_k=1^K 𝒜^u,v,w,(k)((R_f^gen_u,w,ϕ + S_ϕ^(k))(X^w, (k)) - 1/K∑_i=1^K (R_f^gen_u,w,ϕ + S_ϕ^(i))(X^w, (i))),
where the superscript (k) denotes the index of the k-th i.i.d. sample in the Monte Carlo estimator ℒ̂^w_LV and we define the short-hand notations
𝒜^u,v,w,(k) := ( R_f_u,v,w^SB + S^(k)_u+v + B)(X^w, (k)) + log Z and f^gen_u,w,ϕ = (w - u)·ϕ.
Now, note that the definition of the log-variance loss and <Ref> imply that for the optimal choices u=u^*, v=v^* it holds that
𝒜^u^*,v^*,w, (k) = 0
almost surely for every k ∈{1, …, K } and w ∈𝒰. This readily implies the statement for the derivative w.r.t. the control u_θ. The analogous statement holds true for the derivative w.r.t. v_γ, as we can compute
δ/δ vℒ^w_LV(u, v; ϕ) = 2/K∑_k=1^K 𝒜^u,v,w,(k)((R_f^inf_v,w,ϕ + S_ϕ^(k))(X^w, (k)) - 1/K∑_i=1^K (R_f^inf_v,w,ϕ + S_ϕ^(i))(X^w, (i))),
where
f^inf_v,w,ϕ= (v+w)·ϕ + ∇· (σϕ).
For the derivative of the Monte Carlo version of the loss ℒ_KL as defined in (<ref>) w.r.t. v we may compute
δ/δ vℒ_KL(u, v; ϕ) = 1/K∑_k=1^K ∫_0^T ((u + v) ·ϕ + ∇·(σϕ) )(X_s^u,(k), s) ds .
We note that even for u = u^* and v = v^* we can usually not expect the variance of the corresponding Monte Carlo estimator to be zero. For the computation of the derivative w.r.t. u we refer to <cit.>.
For the gradient of the loss ℒ_KL w.r.t. u we may compute
δ/δ uℒ_KL(u, v; ϕ) = 𝔼[∫_0^T ((u + v) ·ϕ)(X_s^u, s) ds + (R_f_u,v,u^SB(X^u) + B(X^u)) S_ϕ(X^u) ] = 𝔼[ 𝒜^u, v, u S_ϕ(X^u) ],
where we used Girsanov's theorem and the Itô isometry. Comparing with (<ref>), we realize that the derivative of ℒ_LV w.r.t. u for the choice w = u can be interpreted as a control variate version of the derivative of ℒ_KL, thereby promising reduced variance of the corresponding Monte Carlo estimators, cf. <cit.>.
§.§ The Schrödinger bridge problem
In the following, we will formulate optimality conditions for the Schrödinger bridge problem defined in (<ref>) for the standard case r = 0. Moreover, we outline how the associated system of BSDE system leads to the same losses as given in (<ref>) and (<ref>), respectively. The ideas are based on <cit.>.
First, we can define the value function
ϕ(x,t) := min_u ∈𝒰 𝔼[ ∫_t^T 1/2‖u(X_s^u, s)‖^2 ds | X_t^u = x, X_T^u∼ p_target].
By the dynamic programming principle it holds that ϕ solves the Hamilton-Jacobi-Bellman (HJB) equation
∂_t ϕ = - μ·∇ϕ - 1/2 Tr(σσ^⊤∇^2 ϕ) + 1/2‖σ^⊤∇ϕ‖^2
(with unknown boundary conditions) and that the optimal control satisfies
u^* = - σ^⊤∇ϕ.
Together with the corresponding Fokker-Planck equation for X^u^*,
this yields necessary and sufficient conditions for the solution to (<ref>). Now, we can transform the Fokker-Planck equation and the HJB equation (<ref>) into a system of linear equations, using the exponential transform
ψ := exp(-ϕ) and ψ̂ := p_X^u^* exp(ϕ) = p_X^u^*/ψ,
often referred to as the Hopf-Cole transform.
This yields the following well-known optimality conditions of the Schrödinger Bridge problem defined in (<ref>).
The solution u^* to the Schrödinger Bridge problem (<ref>) is equivalently given by
* u^* := - σ^⊤∇ϕ, where p_X^u^* and ϕ are the unique solutions to the coupled PDEs
∂_t p_X^u^* = - ∇·( p_X^u^* ( μ - σσ^⊤∇ϕ) ) + 1/2 Tr(σσ^⊤∇^2 p_X^u^*)
∂_t ϕ = - μ·∇ϕ - 1/2 Tr(σσ^⊤∇^2 ϕ) + 1/2‖σ^⊤∇ϕ‖^2,
with boundary conditions
p_X^u^*(·,0) = p_prior,
p_X^u^*(·,T) = p_target.
* u^* := σ^⊤∇logψ, where ψ and ψ̂ are the unique solutions to the PDEs
∂_t ψ = - ∇ψ·μ - 1/2 Tr(σσ^⊤∇^2 ψ),
∂_t ψ̂ = - ∇·( ψ̂ μ) + 1/2 Tr(σσ^⊤∇^2 ψ̂),
with coupled boundary conditions
ψ(·, 0)ψ̂(·, 0) = p_prior,
ψ(·, T) ψ̂(·, T) = p_target.
The optimal control v^* is given by Nelson's identity (<ref>), i.e.,
v^* = σ^⊤∇log p_X^u^* - u^* = σ^⊤∇logψ̂.
Using Itô's lemma, we now derive a BSDE system corresponding to the PDE system in (<ref>).
Let us assume ψ and ψ̂ fulfill the PDEs (<ref>) with boundary conditions (<ref>) and let us define the processes
Y^w_s := logψ(X_s^w, s),
Ŷ^w_s := logψ̂(X_s^w, s),
Z^w_s := σ^⊤∇logψ(X_s^w, s)=u^*(X^w_s,s),
Ẑ^w_s := σ^⊤∇logψ̂(X_s^w, s)=v^*(X^w_s,s),
where the process X^w is given by
d X^w_s = ( μ + σ w)(X_s^w, s) ds + σ(s) dW_s
with w ∈𝒰 being an arbitrary control function. We then get the BSDE system
dY^w_s = ( Z^w_s · w(X_s^w,s) - 1/2‖Z^w_s‖^2 ) ds + Z^w_s ·d W_s,
dŶ^w_s =(1/2‖Ẑ^w_s‖^2 + ∇·(σẐ^w_s - μ(X_s^w,s)) + Ẑ^w_s · w(X_s^w,s) ) ds + Ẑ^w_s ·dW_s.
Furthermore, it holds
Y^w_s + Ŷ^w_s = log p_X^u^*(X_s^w, s) = log p⃖_Y^v^*(X_s^w, s).
The proof is similar to the one in <cit.>. For brevity, we define D := 1/2 σσ^⊤. We can apply Itô's lemma to the stochastic process Y^w_s = logψ(X^w_s, s) and get
dY^w_s = (∂_s logψ + ∇logψ·( μ + σ w ) + Tr(D ∇^2 logψ) )(X_s^w, s) ds + σ^⊤∇logψ(X^w_s, s) ·d W_s.
Further, via (<ref>) it holds
∂_s logψ = 1/ψ(-∇ψ·μ - Tr(D ∇^2 ψ) ) = -∇logψ·μ - Tr(D ∇^2 ψ/ψ),
and we note the identity
∇^2 logψ = ∇^2 ψ/ψ - ∇ψ(∇ψ)^⊤/ψ^2.
Combining (<ref>), (<ref>), and (<ref>), we get
dY^w_s = (σ^⊤∇logψ· w - Tr(D∇ψ(∇ψ)^⊤/ψ^2) ) (X_s^w, s) ds + σ^⊤∇logψ(X_s^w, s)·dW_s
= ( Z^w_s · w(X_s^w, s) - 1/2‖Z^w_s‖^2) ds + Z^w_s ·dW_s.
Similarly, we may apply Itô's lemma to Ŷ^w_s = logψ̂(X^w_s, s) and get
dŶ^w_s =(∂_s logψ̂ + ∇logψ̂·( μ + σ w ) + Tr(D ∇^2 logψ̂) )(X_s^w, s) ds + σ^⊤∇logψ̂(X^w_s, s) ·d W_s.
Now, via (<ref>) it holds that
∂_s logψ̂ = 1/ψ̂(-∇·(ψ̂μ) + Tr(D ∇^2 ψ̂) ) = -∇logψ̂·μ - ∇·μ + Tr(D∇^2 ψ̂/ψ̂).
Combining (<ref>) and (<ref>), we get
dlogψ̂(X_s^w, s) = ( Tr(D ∇^2ψ̂/ψ̂ + D ∇^2 logψ̂) -∇·μ + σ^⊤∇logψ̂· w )(X_s^w, s) ds+ σ^⊤∇logψ̂(X_s^w, s) ·dW_s.
Now, noting the identity
Tr(D ∇^2ψ̂/ψ̂ + D ∇^2 logψ̂) = 2 Tr( D ∇^2ψ̂/ψ̂) - 1/2‖σ^⊤∇logψ̂‖^2 = 1/2‖σ^⊤∇logψ̂‖^2 + ∇·(σσ^⊤∇logψ̂),
we can get the relation
dŶ^w_s = (1/2‖σ^⊤∇logψ̂‖^2 + ∇·(σσ^⊤∇logψ̂ - μ) + σ^⊤∇logψ̂· w )(X_s^w, s) ds + σ^⊤∇logψ̂(X_s^w, s) ·dW_s
= (1/2‖Ẑ^w_s‖^2 + ∇· (σẐ^w_s - μ) + Ẑ^w_s · w)(X_s^w, s) ds + Ẑ^w_s ·d W_s,
which concludes the proof.
Note that the BSDE system is slightly more general than the one introduced in <cit.>, which can be recovered with the choice w(X_s^w,s) = Z^w_s. Also, the roles of p_prior and p_target are interchanged in <cit.> since they consider generative modeling instead of sampling from densities.
A valid loss can now be derived by adding the two BSDEs and recalling relation (<ref>), which yields
log p_target(X_T^w)/p_prior(X_0^w) = ∫_0^T (
(Z_s^w + Ẑ^w_s) ·(w + (Ẑ^w_s-Z_s^w)/2)
+ ∇·(σẐ^w_s - μ) )(X_s^w,s) ds + ∫_0^T ( Z^w_s + Ẑ^w_s ) · dW_s
almost surely.
Analogous to <cit.> in generative modeling,
the above equality suggests a parameterized lower bound of the log-likelihood log p_prior when replacing the optimal controls Z^w_s = u^*(X^w_s, s) and Ẑ^w_s = v^*(X_s^w, s) with their approximations u and v, see <cit.>. This lower bound exactly recovers the loss given in (<ref>). Further, note that the variance of the left-hand minus the right-hand side is zero, which readily yields our log-variance loss as defined in (<ref>).
§.§.§ Schrödinger half-bridges (PIS)
For the Schrödinger half-bridge, also referred to as PIS, introduced in <Ref>, we can find an alternative derivation, motivated by the PDE perspective outlined in <Ref>. For this derivation it is crucial that we assume the prior density to be concentrated at a single point, i.e.,
p_prior := δ_x_0
for some x_0 ∈ℝ^d (typically x_0=0), see <cit.>.
We can recover the corresponding objectives by noting that, in the case p_prior = δ_x_0, the system of PDEs in (<ref>) can be decoupled. More precisely, we observe that the second equation in (<ref>) is the Fokker-Planck equation of X^0 and we have that
ψ̂ = p_X^u^* exp(ϕ) = p_X^0 and ψ̂(·,0)=p_X^0_0=δ_x_0.
In view of (<ref>), we note that this defines v^* = σ^⊤∇log p_X^0. By (<ref>), we observe that
ψ = p_X^u^*/p_X^0,
which yields the boundary condition
ϕ(·,T)=-logψ(·,T) = log p_X^0_T/p_target = log p_X^0_T/ρ + log Z
to the HJB equation in (<ref>).
By the verification theorem <cit.>, and dropping the additive constant log Z, which does not affect the minimizer, we thus obtain the PIS objective
ℒ_KL(u) =
𝔼[ ∫_0^T 1/2‖u(X_s^u,s)‖^2 ds + log p_X^0_T(X_T^u)/ρ(X_T^u)] =𝔼[(R_f_u,0,u^ref + B^ref)(X^u) ].
Moreover, the optimal control is given by u^* = -σ^⊤∇ϕ = σ^⊤∇logψ. We can also derive this objective from the BSDE system in <Ref>. Since ψ̂(·, 0) = δ_x_0, we may focus on the process Y^w_s = logψ(X^w_s, s) only and get
Y_T^w - Y_0^w = ∫_0^T ( Z^w_s · w(X_s^w,s) - 1/2‖Z^w_s‖^2 ) ds + ∫_0^T Z_s^w ·d W_s.
The PIS objective now follows by choosing w(X_s^w,s) = Z_s^w and noting that
Y_T^w = logψ(X_T^w, T) = log p_target/p_X^0_T(X_T^w).
Recalling our notation in (<ref>) and (<ref>), this also shows that the log-variance loss can be written as
ℒ_LV^w(u) = 𝕍[ (R_f_u,0,w^ref+ S_u + B^ref)(X^w)].
§.§ Tractable SDEs
Let us present some commonly used SDEs of the form
dX_s = μ(X_s, s) ds + σ(s) dW_s
with affine drifts that have tractable marginals conditioned on their initial value, see <cit.>.
For notational convenience, let us define
α(t) := ∫_0^t β(s) ds
with suitable β∈ C([0,T],(0,∞)).
Variance-Preserving (VP) SDE: This Ornstein-Uhlenbeck process is given by
σ(t) := ν√(2β(t)) I and μ(x,t) := - β(t)x
with ν∈(0,∞). Then, we have that
X_t | X_0 ∼𝒩( e^- α(t)X_0, ν^2(1-e^- 2α(t)) I).
This shows that for α(T) sufficiently large it holds that
X_T ≈𝒩( 0, ν^2 I).
For X_0 ∼𝒩(m, Σ), we further have that
X_t ∼𝒩( e^- α(t)m, e^- 2α(t)( Σ - ν^2I)+ν^2 I).
Variance-exploding (VE) SDE / scaled Brownian motion: This SDE is given by a scaled Brownian motion, i.e., μ := 0 and σ as defined above. It holds that
X_t | X_0 ∼𝒩(X_0, 2ν^2α(t)I).
For X_0 ∼𝒩(m, Σ), we thus have that
X_t ∼𝒩( m, 2ν^2α(t)I + Σ).
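As a small worked example, the VP marginal statistics above can be evaluated in closed form for the linear schedule β(t) = (1-t)β_min + tβ_max used later in the experiments. The following sketch (default parameter values are illustrative assumptions) returns the mean and covariance of X_t for Gaussian X_0 ∼ 𝒩(m, Σ).

```python
import numpy as np

def vp_marginal(x0_mean, x0_cov, t, beta_min=0.05, beta_max=5.0, nu=1.0):
    """Mean and covariance of the VP SDE marginal X_t for Gaussian X_0,
    following X_t ~ N(e^{-alpha} m, e^{-2 alpha}(Sigma - nu^2 I) + nu^2 I)."""
    alpha = beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2  # int_0^t beta(s) ds
    d = len(x0_mean)
    mean = np.exp(-alpha) * x0_mean
    cov = np.exp(-2 * alpha) * (x0_cov - nu**2 * np.eye(d)) + nu**2 * np.eye(d)
    return mean, cov
```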
§.§ Computational details
In our implementations, we generally follow the settings and hyperparameters of PIS in <cit.>. The main difference is that we observed better performance (for all considered methods and losses) by choosing more steps for the SDE solver, larger batch sizes, and more gradient steps during training. We thus always used 200 steps for the Euler-Maruyama scheme, a batch size of 2048, and 60000 gradient steps for the experiments with d ≤ 10 and 120000 gradient steps otherwise. However, we observed that the differences between the losses are already visible before convergence, see, e.g., <Ref>.
For DIS, we replace the pinned Brownian motion of <cit.> by the VP-SDE in <cit.>. Specifically, we use ν := 1 and
β(t) := (1-t)β_min + t β_max, t ∈ [0,1],
with β_min=0.05 and β_max=5, see <Ref>. Similar to PIS, we also use the score of the density ∇logρ (typically given in closed-form or evaluated via automatic differentiation) for the parametrization of the control u.
For the SB examples reported in <Ref>, we chose the same setting as in the PIS and DIS experiments. One difference is, however, that we are free to choose the prior density p_prior as well as the drift function μ in the SDEs (<ref>) and (<ref>). We choose μ := 0, noting that more sophisticated, potentially problem-specific choices might be investigated in future studies. We observed that for the double well example with d=5 it was sufficient to choose p_prior = 𝒩(0, I). For the GMM and the high-dimensional double well example, on the other hand, the experiments did not properly converge using the Gaussian prior. In case of the GMM example, choosing the uniform density p_prior = (1/256) 1_[-8,8]^2 helped the model to converge while detecting all nine modes.
For the log-variance loss, we used the default choice of w := u, i.e., X^w := X^u. We emphasize that we do not need to differentiate w.r.t. w, which results in reduced training times, see <Ref>. In practice, we detach X^w from the computational graph, which can be achieved by the detach and stop_gradient operations in PyTorch and TensorFlow, respectively. We leave other choices of w to future research and anticipate that choosing noisy versions of u in the initial phase of training might lead to even better exploration and performance. Furthermore, we use the same hyperparameters for the log-variance loss as for the KL-based loss. As these settings originate from <cit.> and have been tuned for the KL-based loss, we suspect that optimizing the hyperparameters for the log-variance loss can lead to further improvements.
To evaluate our metrics, we consider n=10^5 samples (x^(i))_i=1^n and use the ELBO as an approximation to the log-normalizing constant log Z, see <Ref>. We further compute the (normalized) effective sample size
ESS := (∑_i=1^n w^(i))^2 / (n ∑_i=1^n (w^(i))^2),
where (w^(i))_i=1^n are the importance weights of the samples (x^(i))_i=1^n in path space. Finally, we estimate the Sinkhorn distance[Our implementation is based on <https://github.com/fwilliams/scalable-pytorch-sinkhorn> with the default parameters.] 𝒲^2_γ <cit.> and report the error for estimating the average standard deviation across the marginals, i.e.,
std := 1/d∑_k=1^d √(𝕍[G_k]), where G∼ p_target.
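For completeness, a numerically stable evaluation of the normalized ESS from log-weights might look as follows (a sketch; shifting by the maximum exploits the scale-invariance of the ESS).

```python
import numpy as np

def normalized_ess(log_w):
    """Normalized effective sample size from log importance weights."""
    log_w = log_w - log_w.max()  # stabilize the exponentials; ESS is scale-invariant
    w = np.exp(log_w)
    return w.sum() ** 2 / (len(w) * (w ** 2).sum())
```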
§.§.§ Computation of log-normalizing constant
For the computation of the log-normalizing constant in the general SB setting, <Ref> ensures that for the optimal u^* and v^* it holds that
log Z = - (R_f_u^*,v^*,u^*^SB + S_u^*+v^* + B)(X^u^*).
Using approximations of u^* and v^*, the ELBO yields a lower bound to log Z.
For PIS and DIS, the log-normalizing constants can be computed analogously, see also <cit.>.
§.§ Further experiments
In <Ref> we present boxplots to show that our results from <Ref> are robust w.r.t. different seeds.
entry_id: http://arxiv.org/abs/2307.01349v1
published: 20230703204931
title: Towards exponentially-convergent simulations of extreme-mass-ratio inspirals: A time-domain solver for the scalar Teukolsky equation with singular source terms
authors: Manas Vishal, Scott E. Field, Katherine Rink, Sigal Gottlieb, Gaurav Khanna
primary_category: gr-qc
categories: gr-qc, astro-ph.HE, physics.comp-ph
Gravitational wave signals from extreme mass ratio inspirals are a key target for upcoming, space-based gravitational wave detectors. These systems are typically modeled as a distributionally-forced Teukolsky equation, where the smaller black hole is treated as a Dirac delta distribution (i.e., a point-particle). Time-domain solvers often use regularization approaches that approximate the Dirac distribution. Unfortunately, such approaches often introduce small length scales (e.g., when the approximation is by a narrow Gaussian) and are a source of systematic error, especially near the smaller black hole. We describe a multi-domain discontinuous Galerkin (DG) method for solving the distributionally-forced s=0 Teukolsky equation that describes scalar fields evolving on a Kerr spacetime. To handle the Dirac delta, we expand the solution in spherical harmonics and recast the sourced Teukolsky equation as a first-order, one-dimensional symmetric hyperbolic system. This allows us to derive the method's numerical flux to correctly account for the Dirac delta. As a result, our method achieves global spectral accuracy even at the source's location. To connect the near field to future null infinity, we use the hyperboloidal layer method, allowing us to supply trivial outer boundary conditions and providing direct access to the far-field waveform. We document several numerical experiments where we test our method, including convergence tests against exact solutions, energy luminosities for circular orbits, the scheme's superconvergence properties at future null infinity, and the late-time tail behavior of the scalar field. We also compare two systems that arise from different choices of the first-order reduction variables, finding that certain reasonable choices are numerically problematic in practice. The methods developed here may be beneficial when computing gravitational self-force effects, where the regularization procedure has been developed for the spherical harmonic modes and high accuracy is needed at the location of the Dirac delta.
§ INTRODUCTION
Black hole perturbation theory is a standard framework for studying a diverse range of gravitational phenomena, such as gravitational waves, quasi-normal modes, late-time Price tails, self-force effects, and linear stability of black hole solutions. The theory of such perturbations started with pioneering investigations by Regge and Wheeler <cit.> for Schwarzschild (nonspinning) black holes. The theory was later extended by Teukolsky to handle perturbations of the Kerr metric <cit.>. Small perturbations of a field with spin-weight s evolve on the Kerr geometry according to the Teukolsky master equation (<ref>).
One important astrophysical application of black hole perturbation theory is to numerically simulate extreme mass ratio inspiral (EMRI) systems. An EMRI is comprised of small mass–q compact object orbiting a large mass–M black hole, where q ≪ M. EMRI systems emit gravitational radiation at low frequencies and are a key target of the upcoming Laser Interferometer Space Antenna (LISA) observatory <cit.>. A standard method for studying EMRIs uses the perturbation theory of Kerr black holes in an approximation which treats the smaller compact object as point–like and structureless. When a rotating black hole is perturbed by a small, compact object the Teukolsky equation features a source-term proportional to a Dirac delta distribution, δ(x), and possibly its derivatives.
Many numerical methods have been developed for the Teukolsky equation. Due to the separability of the Teukolsky equation in the frequency domain, frequency-domain solvers are one popular class of methods that have been extensively developed <cit.> and work especially well for sources that have a discrete frequency spectrum or approximately so like with adiabatically-driven inspiral.
Time-domain solvers <cit.>, on the other hand, can generally handle a broader range of problems including high-eccentricity orbits and inspiral-merger-ringdown orbital regimes. Time-domain solvers are especially efficient when the source has a very broad or continuous Fourier spectrum. However, because the Teukolsky equation is not separable in the time-domain, most method development has focused on 2+1 solvers: after expanding the field in angular modes exp(-i m ϕ) (the m-modes do separate) we are left with a differential equation with two spatial dimensions and time. The first 2+1, time-domain solver was based on a Lax-Wendroff scheme <cit.>. Since then, various 2+1 time-domain Teukolsky solvers have been developed based around pseudo-spectral <cit.> and weighted essentially non-oscillatory (WENO) <cit.> methods. In particular, pseudo-spectral solvers are particularly well suited for smooth solutions while WENO methods are able to handle problems with sharp features such as those that arise in the computation of the Aretakis charge <cit.>.
Despite the progress on time-domain methods for the source-free Teukolsky equation, particle-driven perturbations of the Kerr geometry remain challenging. To solve the distributionally-sourced Eq. (<ref>), various regularized numerical approaches <cit.> have been proposed. To our knowledge, all current treatments approximate the Dirac delta as a narrow Gaussian or discrete representations over an extended range of grid points <cit.>. In both cases, these methods introduce small length scales (e.g., when the approximation is a narrow Gaussian), which can be a source of systematic error and will impose smaller timestep restrictions when evolved with an explicit time-stepping method. We note that an alternative effective-source approach can also be applied to this problem by allowing for analytic modifications to the source term <cit.>.
These problems have a clean solution for linear wave equations of one spatial variable. In previous work on the Regge-Wheeler-Zerilli and scalar wave equations, it was shown that a suitably modified multi-domain pseudo-spectral <cit.> or discontinuous Galerkin (DG) <cit.> method can exactly treat the Dirac delta distribution. Both methods (i) provide spectral accuracy even at its location, (ii) are especially well-suited for smooth solutions, which is the case away from the Dirac delta distribution. The main complication in applying the techniques of Ref. <cit.> is that the Teukolsky equation is typically written in either 3+1 or 2+1 form. However, over the past few years multiple works have considered expanding the solution in (spin-weighted) spherical harmonics, leading to a coupled system of 1+1 wave equations. To our knowledge, the benefits of this approach were first sketched out in <cit.>, and the relevant equations written out in explicit form (in Boyer-Lindquist coordinates) to study source-free scalar <cit.> and gravitational <cit.> perturbations. And very recently this (source-free) system was also written in hyperboloidal coordinates and numerically solved with a symmetric integration method <cit.>.
In this paper we describe a discontinuous Galerkin (DG) method <cit.> for solving the distributionally-sourced, s=0 Teukolsky equation describing scalar waves on Kerr. The main contribution of our work is to extend the numerical flux construction of Refs. <cit.> to the coupled 1+1 system, Eq. (<ref>). Our DG method discretizes this coupled system written in fully first-order form and achieves global spectral accuracy everywhere in the computational domain including at the particle's location. Furthermore, at future null infinity (where the far-field signal is recorded) our method is super-convergent, thereby allowing for extremely accurate waveforms for a comparatively lower resolution numerical grid. As a by-product of our work, we also consider two systems that arise from different choices in the first-order reduction variables finding that certain choices are numerically problematic.
This paper is organized as follows. Section <ref> derives the 1+1 system in fully first-order form and discusses its hyperbolicity. The outer boundary, where we need to supply radiation outgoing boundary conditions and extract the far-field waveform, is handled using the method of hyperboloidal layers. The nodal discontinuous Galerkin method, and its generalization for incorporating Dirac delta distributions, are summarized in Sec. <ref>. Section <ref> documents several experiments testing our method, including convergence tests against exact solutions in special cases where they are known, energy luminosities for circular orbits, and late-time tail behavior of the scalar field. Several appendices collect technical results and report on an alternative (and numerically problematic) formulation of the system.
§ EVOLUTION EQUATIONS
§.§ Teukolsky equation as a coupled 1+1D system
The Teukolsky master equation describes scalar, vector, and tensor field perturbations in the space-time of
Kerr black holes <cit.>. In Boyer-Lindquist coordinates, this equation takes the form
-[(r^2 + a^2)^2 /Δ-a^2sin^2θ]
∂_ttΨ
-4 M a r/Δ∂_tϕΨ
- 2s[r-M(r^2-a^2)/Δ+iacosθ]
∂_tΨ
+ Δ^-s∂_r(Δ^s+1∂_rΨ)
+1/sinθ∂_θ(sinθ∂_θΨ)+
[1/sin^2θ-a^2/Δ]
∂_ϕϕΨ + 2s [a (r-M)/Δ
+ i cosθ/sin^2θ] ∂_ϕΨ
- (s^2 cot^2θ - s ) Ψ = -4 π(r^2 + a^2 cos^2θ) T ,
where M is the mass of the Kerr black hole, a its angular momentum per unit mass, Δ = r^2 - 2 M r + a^2,
s is the spin weight of the field, and T is a source term. The s = 0 version of this equation describes scalar fields,
and it is within this simpler setting that we will develop our methods.
Most time-domain numerical solvers do not directly discretize Eq. (<ref>). Instead, due to the axisymmetry of the Kerr spacetime, a set of decoupled 2+1D equations can be derived by separating out the field's azimuthal dependence.
However, when the source term describes a point-particle, both the original 3+1D and the 2+1D systems are numerically challenging.
For example, for the 3+1D system, if T ∝δ(r - r_p) δ(θ - θ_p) δ(ϕ - ϕ_p) then
the solution is singular near (r_p, θ_p, ϕ_p).
Moreover, the methods of <cit.>,
which exactly model the delta distribution as a modification
to the numerical flux, are no longer directly applicable in 3+1D.
To overcome this issue, and to allow for exact treatment of the Dirac distribution, we follow Refs. <cit.> and derive a 1+1D coupled system of equations obeyed by the scalar field. We first expand the solution
Ψ(t,r,θ,ϕ) = ∑_ℓ=0^∞∑_m=-ℓ^ℓΨ_ℓ m(t,r) Y_ℓ m(θ, ϕ) ,
in terms of the ordinary scalar spherical harmonics Y_ℓ m(θ, ϕ). We follow the same definition and conventions as Ref. <cit.> and, in particular, the harmonics are orthonormal when integrated over the sphere.
Substituting Eq. (<ref>) into Eq. (<ref>) and using well-known properties of spherical harmonics we arrive at
∑_ℓ, m[-(r^2+a^2)^2Ψ̈_ℓ m
-4imMarΨ̇_ℓ m
+Δ∂_r(Δ∂_rΨ_ℓ m)
+(a^2m^2-ℓ(ℓ+1)Δ)Ψ_ℓ m +Δ a^2sin^2θΨ̈_ℓ m]Y_ℓ m
= -4 πΔ(r^2 + a^2 cos^2θ) T .
Here we use an over-dot to denote ∂ / ∂ t differentiation.
The term sin^2θ Y_ℓ m is responsible for mode coupling, and we use relations from Ref. <cit.>
to rewrite it as
sin^2θ Y_ℓ m = c^L_ℓ mY_L m
=c^ℓ-2_ℓ mY_ℓ-2, m + c^ℓ_ℓ m Y_ℓ m + c^ℓ+2_ℓ m Y_ℓ+2, m ,
where, for brevity, we will use Einstein summation notation over the repeated “L” index.
Appendix <ref> provides the calculations needed to find these coupling coefficients.
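As a quick sanity check of such coupling coefficients, one can evaluate c^L_ℓm = ∫ sin²θ Y_ℓm Y̅_Lm dΩ by brute-force quadrature; the sketch below is our own helper (not part of the authors' code) and should return values that vanish whenever |L - ℓ| ∉ {0, 2}.

```python
import numpy as np
from scipy.special import sph_harm
from scipy.integrate import dblquad

def coupling_coefficient(L, ell, m):
    """Numerically evaluate c^L_{ell m} = int sin^2(theta) Y_{ell m} conj(Y_{L m}) dOmega.

    Note: scipy's sph_harm takes arguments in the order (m, l, phi, theta).
    The phi integral makes the result real for equal m, so we keep the real part.
    """
    def integrand(theta, phi):
        y1 = sph_harm(m, ell, phi, theta)
        y2 = np.conj(sph_harm(m, L, phi, theta))
        # extra sin(theta) is the area element dOmega = sin(theta) dtheta dphi
        return np.real(np.sin(theta) ** 2 * y1 * y2 * np.sin(theta))
    val, _ = dblquad(integrand, 0.0, 2 * np.pi, 0.0, np.pi)  # phi outer, theta inner
    return val
```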
Multiplying by Y̅_ℓ' m' and integrating over the sphere, we arrive at
-(r^2+a^2)^2Ψ̈_ℓ m -4imMarΨ̇_ℓ m
+Δ∂_r(Δ∂_rΨ_ℓ m)
+(a^2m^2-ℓ(ℓ+1)Δ)Ψ_ℓ m
+Δ a^2 c^ℓ_L mΨ̈_L m
= ĝ_ℓ m(t,r) ,
where the overline denotes complex conjugation,
c^ℓ_L mΨ̈_L m = c^ℓ_(ℓ-2) mΨ̈_(ℓ-2) m
+ c^ℓ_ℓ mΨ̈_ℓ m + c^ℓ_(ℓ+2) mΨ̈_(ℓ+2) m,
and
ĝ_ℓ m(t,r) = -4 πΔ∫(r^2 + a^2 cos^2θ) Y̅_ℓ m T dΩ ,
is the source term.
We now factor out the large-r behavior of Ψ_ℓ m,
Ψ_ℓ m = ψ_ℓ m/√(r^2+a^2) ,
and to map out the background spacetime, we introduce the tortoise coordinate, r_*, defined by
d r_*=r^2+a^2/Δd r .
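For reference, integrating dr_*/dr = (r²+a²)/Δ by partial fractions gives the standard closed form of the Kerr tortoise coordinate, defined up to an additive constant. A minimal sketch (our own helper; the default M and a values are illustrative):

```python
import numpy as np

def tortoise(r, M=1.0, a=0.9):
    """Kerr tortoise coordinate r_*(r) from dr_*/dr = (r^2 + a^2)/Delta.

    Uses the standard partial-fraction result; conventions may shift r_* by a constant.
    Valid outside the outer horizon, r > r_+.
    """
    rp = M + np.sqrt(M**2 - a**2)  # outer horizon r_+
    rm = M - np.sqrt(M**2 - a**2)  # inner horizon r_-
    return (r
            + 2 * M * rp / (rp - rm) * np.log(np.abs(r - rp) / (2 * M))
            - 2 * M * rm / (rp - rm) * np.log(np.abs(r - rm) / (2 * M)))
```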
In terms of ψ_ℓ m and r_*, Eq. (<ref>) becomes
- ψ̈_ℓ m + ∂_r_*∂_r_*ψ_ℓ m + Δ a^2/(r^2+a^2)^2 c^ℓ_L mψ̈_L m
-4imMar/(r^2+a^2)^2ψ̇_ℓ m + V_ℓ m (r)ψ_ℓ m
= g_ℓ m(t,r) ,
where ψ = ψ(t,r_*), g_ℓ m = ĝ_ℓ m / (r^2+a^2)^3/2, and
V_ℓ m(r) = [3r^2Δ^2/(r^2+a^2)^4 - 2rΔ(r-M)/(r^2+a^2)^3
- Δ^2/(r^2+a^2)^3 + a^2m^2/(r^2+a^2)^2 - ℓ(ℓ+1)Δ/(r^2+a^2)^2] .
The resulting differential equation's principal part is particularly simple <cit.> and
reduces to the ordinary wave operator when a=0.
Eq. (<ref>) is the one we numerically solve after
carrying out a reduction to fully first-order form
and transforming it to hyperboloidal coordinates; refer to Sec. <ref> and Sec. <ref>.
§.§ Distributional source term due to a scalar-charged particle in circular orbit
In this subsection, we provide expressions for the sourcing functions, g_ℓ m(t), that arise from projecting the
scalar charge density, T, onto the spherical harmonics. Following the standard setup, we assume
our problem arises from a small “particle” of mass q orbiting a large-mass M Kerr blackhole, where the mass ratio
satisfies q/M ≪ 1. Ignoring radiation-reaction effects and specializing to circular and equatorial geodesic orbits, the
particle's scalar charge density is given by <cit.>
T = q/(r_p^2 u^t) δ(r-r_p) δ(θ-θ_p) δ(ϕ - ϕ_p(t) ) ,
where the constant r_p denotes the particle's radial position, θ_p=π/2 denotes the particle's polar angle,
u^t is the t-component of the particle's four-velocity, and
ϕ_p(t) is the particle's angular location. Expressions for
u^t = g^tϕ L - g^tt E , ϕ_p(t) = v^3 t/(M(1+ãv^3)) ,
are readily given in, for example, Refs. <cit.>. Here v=√(M/r_p), ã = a/M, and
E = (1 - 2v^2 + ãv^3)/√(1 - 3v^2 +2ãv^3) , L = r_p v (1 - 2ãv^3 + ã^2v^4)/√(1 - 3v^2 +2ãv^3) ,
are, respectively, the particle's energy and angular momentum. The background Kerr metric is denoted by g_μν.
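These expressions are straightforward to evaluate; the sketch below (our own helper, assuming a prograde orbit and illustrative default parameters) returns E, L, and the azimuthal frequency dφ_p/dt.

```python
import numpy as np

def circular_orbit_constants(r_p, M=1.0, a_tilde=0.9):
    """Energy E, angular momentum L, and azimuthal frequency for a circular,
    equatorial geodesic at Boyer-Lindquist radius r_p, per the formulas above."""
    v = np.sqrt(M / r_p)
    denom = np.sqrt(1.0 - 3.0 * v**2 + 2.0 * a_tilde * v**3)
    E = (1.0 - 2.0 * v**2 + a_tilde * v**3) / denom
    L = r_p * v * (1.0 - 2.0 * a_tilde * v**3 + a_tilde**2 * v**4) / denom
    omega_phi = v**3 / (M * (1.0 + a_tilde * v**3))  # d(phi_p)/dt
    return E, L, omega_phi
```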
With these preliminaries in place, for motion in the orbital plane and viewing g_ℓ m as a function of r_*,
we now straightforwardly compute
g_ℓ m(t,r_*) =
-4 π q/(u^t ( r_p^2 + a^2 )^1/2) Y̅_ℓ m(π/2,ϕ_p(t)) δ(r_* - r_*,p) ,
where we have used
δ(r - r_p) = [(r_p^2 + a^2) / Δ_p ] δ(r_* - r_*,p) to transform the Dirac distribution into r_* coordinates.
§.§ Reduction to first-order form
For wave-like problems,
the discontinuous Galerkin method we will introduce in Sec. <ref> is most readily applicable
to systems in fully first-order form. Towards this end, we introduce two new variables,
π_ℓ m=-∂ψ_ℓ m/∂ t , ϕ_ℓ m=∂ψ_ℓ m/∂ r_* .
The following first-order system corresponds to the original second-order wave equation (<ref>):
ψ̇_ℓ m =-π_ℓ m ,
π̇_ℓ m
- f c^ℓ_Lmπ̇_L m = - ∂_r_*ϕ_ℓ m - μπ_ℓ m
- V_ℓ m (r)ψ_ℓ m + g_ℓ m(t,r) ,
ϕ̇_ℓ m = - ∂_r_*π_ℓ m .
In these expressions and below we will use
f = Δ a^2/(r^2+a^2)^2 , μ = 4imMar/(r^2+a^2)^2 ,
for convenience.
If ψ_ℓ m solves the first-order system (<ref>) it also solves the
original second-order equation (<ref>) provided the constraint ϕ_ℓ m=∂_r_*ψ_ℓ m holds.
We note that alternative reduction variables could also be used.
One such seemingly-reasonable choice (π_ℓ m=-∂_τψ_ℓ m and ϕ_ℓ m=∂_ρψ_ℓ m) is considered in App. <ref>, and in Sec. <ref> we demonstrate that this choice appears to be numerically problematic. While we have not shown this system to be theoretically problematic, under certain conditions our preferred choice of reduction variables (<ref>) leads to a symmetric hyperbolic (and hence well-posed) initial-boundary-value problem.
§.§ Hyperboloidal layers
The numerical simulation of wave phenomena on an open domain requires the specification of radiation boundary conditions
and, in the case of gravitational-wave simulations, access to the waves that reach future null infinity.
We use the method of hyperboloidal layers <cit.> to solve both issues at the outermost physical boundary.
We first introduce hyperboloidal coordinates, (ρ, τ), defined by
r_*=ρ/Ω(ρ) , τ=t-h(r_*) ,
that are related to the original (r_*,t) coordinates through specification of the functions,
Ω=1-((ρ-R)/(s-R))^P Θ(ρ-R) , h=ρ/Ω-ρ ,
where R, s, and P are to-be-set parameters and Θ is the Heaviside step function.
To the left of the timelike surface defined by ρ=R, the coordinates are
the original (r_*,t) ones.
To the right of ρ=R, the coordinates smoothly connect the computational domain to future null infinity, which
is defined by ρ=s.
The width of the hyperboloidal layer is given by s-R.
We will choose the location of interface R such that
(i) the source term is always located to the left of the interface and (ii) for our multi-domain method, we collocate the start of the layer at a subdomain interface. A sufficiently smooth coordinate transformation can be achieved by setting the value of P, typically taken to be a positive integer <cit.>. The value of P, in turn, can impact the numerical scheme's order of convergence. In Fig. <ref> we systematically vary P to find values that maintain our DG method's superconvergence property. Besides this one numerical experiment we use P=4 throughout this paper.
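A direct transcription of Ω, and of H via the expression for H given below, might look as follows (a minimal sketch; the grid handling is our own, and P=4 follows the text).

```python
import numpy as np

def layer_functions(rho, R, s, P=4):
    """Omega(rho) and H = dh/dr_* on a grid rho for the hyperboloidal layer.

    Inside the layer (rho > R): Omega = 1 - ((rho - R)/(s - R))^P; otherwise
    Omega = 1 and H = 0. At rho = s (future null infinity) H -> 1 and stays finite.
    """
    rho = np.asarray(rho, dtype=float)
    xi = np.clip((rho - R) / (s - R), 0.0, None)
    omega = 1.0 - np.where(rho > R, xi**P, 0.0)                 # Omega(rho)
    domega = np.where(rho > R, -P * xi**(P - 1) / (s - R), 0.0)  # Omega'(rho)
    H = 1.0 - omega**2 / (omega - rho * domega)
    return omega, H
```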
In hyperboloidal coordinates, Eq. (<ref>) becomes [When enacting this coordinate transformation
we do not let π_ℓ m and ϕ_ℓ m transform. If we did, the resulting system would be
the problematic one considered in App. <ref>.]
ψ̇_ℓ m = -π_ℓ m ,
π̇_ℓ m
- f c^ℓ_Lmπ̇_L m
- H ϕ̇_ℓ m = - (1-H) ϕ_ℓ m^'
- μπ_ℓ m - V_ℓ m (r)ψ_ℓ m + g_ℓ m(t,r) ,
ϕ̇_ℓ m - H π̇_ℓ m = - (1-H) π_ℓ m^' ,
where we now use an over-dot to denote ∂ / ∂τ differentiation,
a prime for differentiation by ∂ / ∂ρ, and we note
that ∂ / ∂τ = ∂ / ∂ t.
Here H=∂ h / ∂ r_* = 1 - ∂ρ / ∂ r_* and, for later use, we note that
H = 1 - Ω^2/Ω - ρΩ^' ,
H^' = dH/dρ=-ω^' ,
where ω = Ω^2/(Ω - ρΩ^').
To the left of the layer, where ρ < R, we have H=0 and so both Eq. (<ref>)
and Eq. (<ref>) are identical. One important fact of this
coordinate transformation, which we will use later on, is that
the outgoing characteristics obey τ - ρ = t - r_*.
While the coordinate transformation is singular at ρ=s,
the coefficients of each term in Eq. (<ref>)
are finite. This can be seen by noting that as ρ→ s we have
(1 - H) ∼Ω^2 ∼ r_*^-2∼ r^-2 <cit.>.
§.§ Truncating to a finite number of modes
To achieve a finite number of equations, the expansion Eq. (<ref>) must be truncated to a
finite value ℓ_ max. Let Ψ_ full be the infinite series (<ref>) and Ψ_ℓ_ max the truncated
expansion. Then the angular truncation error for a=0 (no mode mixing) is
Ψ_ full - Ψ_ℓ_ max = ∑_ℓ=ℓ_ max+1^∞∑_m=-ℓ^ℓΨ_ℓ m(t,r) Y_ℓ m(θ, ϕ) .
Integrating the residual over the sphere, we arrive at
δΨ
= ∑_ℓ=ℓ_ max+1^∞∑_m=-ℓ^ℓ| Ψ_ℓ m(t,r) |^2 ,
where | ·| is the complex modulus.
Due to the expected decay of the coefficients Ψ_ℓ m(t,r), we can estimate the angular approximation error as
δΨ≈∑_m=-ℓ_ max^ℓ_ max| Ψ_ℓ_ max m(t,r)|^2, which can be monitored throughout the simulation.
For a ≠ 0, ℓ modes will mix, but we can nonetheless use this estimation as a useful guide. While the angular approximation might seem like a limitation to our method, we point out that this error shows up under a different guise in any numerical scheme. For example, traditional 2+1 solvers that discretize the ∂_θ operator effectively set a resolution for the maximum resolvable ℓ-mode. In all cases, one can estimate the error by increasing the angular grid resolution (for traditional 2+1 solvers) or ℓ_ max (our angular resolution error).
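A sketch of this monitor in code (the dictionary interface mapping (ℓ, m) to mode amplitudes is our assumption):

def angular_truncation_estimate(psi_modes, ell_max):
    # Sum |Psi_{ell_max, m}|^2 over m at fixed (t, r) as a proxy for the
    # angular truncation error delta-Psi.
    return sum(abs(psi_modes[(ell_max, m)])**2
               for m in range(-ell_max, ell_max + 1)
               if (ell_max, m) in psi_modes)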
§.§ Matrix form
We now turn to writing Eq. (<ref>) in matrix form, which will prove useful later on.
Let ℓ_ max be the largest value of ℓ we will consider in the computation, that is we
make an approximation ψ_ℓ m =0 for ℓ > ℓ_ max.
Noting that (i) the equation for
odd and even ℓ modes decouple and (ii) each m mode decouples,
let us define a solution vector for the even-ℓ mode, U_ℓ_ max m^ even = [ π⃗_ℓ_ max m^ even, ϕ⃗_ℓ_ max m^ even], where
π⃗_ℓ_ max m^ even = [π_ℓ_ min m^ even, π_(ℓ_ min+2)m^ even, …, π_ℓ_ maxm^ even]^T ,
ϕ⃗_ℓ_ max m^ even = [ϕ_ℓ_ minm^ even, ϕ_(ℓ_ min+2)m^ even, …, ϕ_ℓ_ maxm^ even]^T .
We have introduced ℓ_ min for the smallest value of ℓ given m and the even/odd mode type. For example, if m=0 we have ℓ_ min = 0 (1) for the even (odd) modes, while if m=2, we have ℓ_ min = 2 (3) for the even (odd) modes. A similar set of notation is equally applicable to the odd-ℓ modes, leading to the system vector U_ℓ_ max m^ odd.
The coupled system of equations (<ref>)(b,c) takes the form,
EU̇+ÂU^'+B̂(U)+V̂(ψ) = Ĝ(t) δ(r_* - r_*,p) ,
where the components of E, Â, B̂, and V̂ can be read off from Eq. (<ref>).
The vector Ĝ(t) contains the coefficients of the distributional source term shown in Eq. (<ref>).
In this expression, we have dropped the m, ℓ_ max, and { odd, even} labels from both the matrices
and solution vector for clarity. The remaining set of equations, ψ̇_ℓ m = -π_ℓ m, are trivially
evolved along with Eq. (<ref>). Clearly the size of these matrices changes with ℓ_ max and m. For example, choosing ( type,ℓ_ max,m)=( even,8,0) will result in a 10-by-10 system, while ( type,ℓ_ max,m)=( even,8,4) will yield a 6-by-6 system. Inverting E, Eq. (<ref>) becomes
U̇+AU^'+B(U)+V(ψ) = G(t) δ(r_* - r_*,p) ,
where we have defined A=E^-1Â, B=E^-1B̂, V=E^-1V̂, and G=E^-1Ĝ.
We summarize the discretization of system, Eq. (<ref>) in Sec. <ref>.
For concreteness, consider the even ℓ-mode sector with m=0 and ℓ_ max=2; App. <ref> considers the most general system.
Then we have U=[π_00, π_20, ϕ_00, ϕ_20,]^T, and
E=[ 1-fc_00^0 -fc_20^0 - H 0; -fc_00^2 1-fc_20^2 0 - H; -H 0 1 0; 0 -H 0 1 ] ,
and
Â=[ 0 0 (1-H) 0; 0 0 0 (1-H); (1-H) 0 0 0; 0 (1-H) 0 0 ] ,
with
V̂ =[ V_0ψ_00, V_2ψ_20, 0, 0 ]^T ,
B̂ =[ μπ_00, μπ_20, 0, 0 ]^T .
The components of the source vector are given by
Ĝ= -4 π q/u^t ( r_p^2 + a^2 )^1/2[ Y_0 0, Y_2 0, 0, 0, ]^T ,
where the spherical harmonics are evaluated at θ = π/2 and ϕ = ϕ_p(t).
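A minimal Python sketch of assembling these pointwise matrices, assuming scalar values f and H at a grid point and a dictionary c of coupling coefficients c^ℓ_{L m} keyed by (ℓ, L) for m=0 (the interface is ours):

import numpy as np

def even_m0_lmax2_matrices(f, H, c):
    # U = [pi_00, pi_20, phi_00, phi_20]; rows of E follow the display above.
    E = np.array([[1.0 - f * c[(0, 0)], -f * c[(0, 2)], -H, 0.0],
                  [-f * c[(2, 0)], 1.0 - f * c[(2, 2)], 0.0, -H],
                  [-H, 0.0, 1.0, 0.0],
                  [0.0, -H, 0.0, 1.0]])
    Ahat = (1.0 - H) * np.array([[0.0, 0.0, 1.0, 0.0],
                                 [0.0, 0.0, 0.0, 1.0],
                                 [1.0, 0.0, 0.0, 0.0],
                                 [0.0, 1.0, 0.0, 0.0]])
    A = np.linalg.solve(E, Ahat)   # flux Jacobian A = E^{-1} A-hat
    return E, Ahat, A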
§.§ System hyperbolicity
We now consider the system's hyperbolicity, which has important consequences for the numerics and is directly used in the
construction of the numerical flux. As shown in App. <ref>, the when a=0 the system (<ref>) is symmetric hyperbolic and we conjecture this to be true for a ≠ 0 too. If we had instead used π_ℓ m=-∂_τψ_ℓ m and ϕ_ℓ m=∂_ρψ_ℓ m as our first-order reduction variables, the resulting system (<ref>) is not symmetric hyperbolic even when a=0. In principle this isn't necessarily an issue, but we have found through extensive experimentation that our numerical scheme when applied to this alternative system shows a catastrophic loss of accuracy near ρ = s (cf. Fig. <ref> and Table <ref>). This alternative system and its hyperbolicity are discussed in App. <ref>.
To study the system's wave structure, we consider the principal part of Eq. (<ref>),
U̇+AU^'+ … = 0 ,
and diagonalize the matrix A
A(ρ) = T(ρ) Λ(ρ) T^-1(ρ) .
Let us assume U is of length L, then T is an L-by-L matrix whose i^ th column is the
right eigenvector of A corresponding to the eigenvalue λ_i(ρ). The eigenvalues
are real and correspond to the entries of the diagonal matrix Λ = diag(λ_1, λ_2, …, λ_L).
Furthermore, the eigenvalues satisfy -1 ≤λ_i ≤ 1. Noting that L is always even, half of the eigenvalues equal 1, while the other half are non-positive for ρ < s and become exactly 0 at ρ=s.
Assuming “frozen” matrix coefficients and letting C=T^-1U, the principal part of the system can be written as
Ċ+Λ C^' = 0 .
Then, like the simple advection equation,
λ_i > 0 corresponds to outgoing waves
moving at a speed λ_i while λ_i < 0 corresponds to incoming waves moving at a speed |λ_i|.
For later use in Sec. <ref>,
we split the diagonal matrix as
Λ = Λ^+ + Λ^-, where Λ^+ (Λ^-) contains only positive (negative) eigenvalues.
We then define the projection matrix P^+ as Λ^+ with non-zero entries replaced by 1. Similarly,
let P^- be Λ^- with non-zero entries replaced by 1. With this new notation, the non-zero components of P^+C and P^-C
are, respectively, the right-moving and left-moving waves. These projection operators are used in the construction of the numerical flux given in
Sec. <ref>.
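A numerical sketch of this characteristic decomposition (numpy's general eigendecomposition; the tolerance guards against round-off in eigenvalues that are analytically zero):

import numpy as np

def characteristic_projectors(A, tol=1e-12):
    # A = T Lambda T^{-1}; P+ / P- project onto right-/left-moving fields.
    lam, T = np.linalg.eig(A)
    Pp = np.diag((lam.real > tol).astype(float))
    Pm = np.diag((lam.real < -tol).astype(float))
    return T, lam, Pp, Pm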
§.§ Boundary conditions
Due to the hyperboloidal coordinate transformation, the wave speeds of the system, Eq. (<ref>), at the right physical boundary ρ=s are either 1 or 0.
This means ρ=s is an outflow boundary and hence no boundary conditions are needed. For the left boundary, we note that the system includes a mixture of incoming and outgoing characteristics and, furthermore, the potential is non-zero at the horizon when m≠0 and a≠0. We impose Sommerfeld boundary conditions setting the incoming characteristics to zero. This will lead to spurious reflection that can be mitigated by placing the left boundary at a sufficiently negative value of ρ such that the left boundary is causally disconnected in the region of the computational domain that we care about. We note that this boundary issue can be solved by enacting so-called azimuthal transformations <cit.>, which we do not consider here.
§.§ Energy flux
One important application of the distributionally-sourced scalar wave equation is
computing energy luminosities. Here we provide a formula for this quantity in terms of
our system variables.
The radiative energy flux across an arbitrarily large (r →∞) spherical surface
will form the basis of some of our numerical experiments.
The energy flux is given by <cit.>,
Ė = d E/d t = -Δ∮ T_tr d Ω ,
where the stress-energy tensor
T_αβ = 1/4π( ∂_αΨ∂_βΨ - 1/2 g_αβ∂^μΨ∂_μΨ) ,
is determined from derivatives of the scalar field.
Noting that the scalar field is real, we first write T_tr = - Ψ_,tΨ_,r / (4 π), then
substitute
Ψ(t,r,θ,ϕ) = 1/√(r^2+a^2)∑_ℓ=0^∞∑_m=-ℓ^ℓψ_ℓ m(t,r) Y_ℓ m(θ, ϕ) ,
to arrive at the individual multipole contributions to the total energy flux through the sphere at infinity,
Ė = - 1/4π∑_ℓ, mψ_,t^ℓ mψ_,r^ℓ m
= 1/4π∑_ℓ, m| ψ_,t^ℓ m|^2
= 1/4π∑_ℓ, m| π^ℓ m|^2 ,
where we made use of the fact that
in the asymptotic limit (r →∞) the outgoing radiation condition
implies ψ_,t = - ψ_,r. We will often directly compare the
multipole contributions Ė_ℓ m = | π^ℓ m|^2 / (4 π).
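As a sketch, the flux at ℐ^+ can be accumulated directly from the recorded mode values π_{ℓ m}:

import numpy as np

def energy_flux(pi_modes):
    # Total scalar energy flux, Edot = sum |pi_{ell m}|^2 / (4 pi),
    # with pi_modes an array of complex mode values recorded at rho = s.
    return np.sum(np.abs(np.asarray(pi_modes))**2) / (4.0 * np.pi)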
§ THE DISCONTINUOUS GALERKIN SCHEME
Discontinuous Galerkin (DG) methods are especially well suited for solving
Eq. (<ref>) and, more broadly, problems with Dirac delta distributions.
The DG method solves the weak form of the problem, a natural
setting for the delta distribution, and
the solution's non-smoothness can be
“hidden” at an interface between subdomains.
Our numerical scheme is based on the one described in Refs. <cit.>
for solving one-dimensional wave-like equations written in fully first-order form with source terms proportional to a Dirac delta distribution and its derivative(s). As such, we only summarize the key steps in the discretization and refer interested readers to those references for the details. After carrying out a spatial discretization using the nodal DG method, we integrate over time using the fourth order Runge Kutta (RK4) scheme.
§.§ The source-free method for G=0
We partition the spatial domain into K non-overlapping subdomains defined by the
partition points ρ_0 < ρ_1 < … < ρ_K = s
and denote 𝖣^j = [ρ_j-1,ρ_j] as the j^ th subdomain. In this one-dimensional setup,
the points {ρ_i}_i=1^K-1 locate the internal subdomain interfaces, and we require one of them to be
the location of the Dirac distribution. Since we are solving a one-dimensional problem, the subdomains are line segments
and neighboring subdomains will intersect at a point.
In each subdomain,
all components of the solution vector U and matrices A, B, and V are expanded in a polynomial basis, which are taken
to be degree-N Lagrange interpolating polynomials {ℓ_i(ρ) }_i=0^N defined from Legendre-Gauss-Lobatto (LGL) nodes.
The time-dependent coefficients of this
expansion (e.g., on subdomain j: π_20^j(τ,ρ) = ∑_i=0^N π_20^i(τ) ℓ_i(ρ))
are the unknowns we solve for. Products of terms arising in Eq. (<ref>)
are represented through pointwise products of the interpolating polynomials at the LGL nodal points.
On each subdomain, we follow the standard DG recipe by requiring the residual
R_j(τ,ρ) =
∂_τ U^j
+ A^j ∂_ρ U^j
+ B^j U^j
+ V^j ,
to satisfy
∫_𝖣^j R_j ℓ_i^j dρ =
[ (F^j - F^*) ℓ_i^j ]_ρ_j-1^ρ_j ,
for all basis functions – that is for all i=0, 1, …, N. Here we use a superscript “j” for vectors and matrices whose components have been approximated by Lagrange interpolating polynomials. Eq. (<ref>) features the physical flux vector, F(U)=AU and the numerical flux F^*.
The numerical flux is some yet-to-be-specified function F^*(U^L, U^R), where U^L and U^R
are, respectively, the left and right boundary values of the numerical solution
defined on the interface between neighboring subdomains. To build a stable and convergent DG scheme the numerical flux
must satisfy a few basic properties <cit.> such as consistency F^*(U, U) = F(U). While there
are many reasonable choices for the numerical flux, because of its simplicity
and low cost-of-evaluation for our large, coupled system, we use the local Lax-Friedrichs (LLF) numerical
flux. At each interface, the LLF flux is computed as
F^*(U^L, U^R) = 1/2[ F(U^L) + F(U^R)] -λ^ LLF/2( U^R- U^L ) ,
where λ^ LLF is the maximum eigenvalue of the flux Jacobian matrix A evaluated at an interface; in our case λ^ LLF=1. The local Lax-Friedrichs flux is seen to be an average of the physical flux at the interface plus a dissipative part proportional to λ^ LLF, which is necessary to stabilize the scheme.
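A minimal sketch of the LLF flux at a single interface, assuming the flux Jacobian A evaluated at that face (names are ours):

import numpy as np

def llf_flux(A_face, uL, uR, lam_llf=1.0):
    # F* = (F(uL) + F(uR))/2 - lam (uR - uL)/2, with F(u) = A u.
    return 0.5 * (A_face @ uL + A_face @ uR) - 0.5 * lam_llf * (uR - uL)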
We note that the integrals appearing in Eq. (<ref>) can be pre-computed on a reference interval in terms of mass and differentiation matrices, leading to a coupled system of ordinary differential equations (see Eq. (47) of Ref. <cit.>) that can be integrated in time using RK4. A Courant-Friedrichs-Lewy condition restricts the largest stable timestep
associated with explicit numerical integration of Eq. (<ref>). For a DG scheme, it is known that
Δ t_ max∝ C Δ x_ min/λ_ max, where λ_ max=1 is the largest
wavespeed and Δ x_ min is the smallest distance between neighboring Legendre-Gauss-Lobatto points on the physical grid.
The unknown scaling factor C is typically of order unity.
Finally, the global solution is taken to be a direct sum of the local solutions defined on each subdomain,
U_h(τ, ρ) = ⊕_j=1^K U_j(τ, ρ).
§.§ Inverting Ê and expressions at infinity
While not part of our DG scheme per se, an important numerical consideration is how to best invert Ê. While one could numerically invert Ê, we found that higher accuracies can be achieved through symbolic computations. That is, we compute A=Ê^-1Â symbolically, then export these expressions into code.
This symbolic approach works well for ρ < s. At ρ=s the coordinate transformation (<ref>) is singular yet, as remarked in Sec. <ref>, the coefficients of the first-order differential equation (<ref>) are well behaved. However, special care is needed at ρ=s. To better appreciate the issue at hand, consider setting M=a=0; then we end up with system (<ref>). The terms on the right-hand side – these are (1-H) and V – behave like r^-2 as r→∞. Meanwhile, the matrix Ê^-1 behaves like r^2 as r→∞. Consequently, products such as Ê^-1V need to be analytically supplied at ρ=s. This strategy continues to be applicable in the general case (with non-zero values of M and a) too. At grid points immediately to the left of ρ=s, the relevant coefficients in the partial differential operator might suffer from mild ill-conditioning; we experimented with variable precision computations but found no benefit.
§.§ Including the Dirac delta distribution
With a non-zero source term proportional to a Dirac delta distribution, the numerical flux evaluated at the interface ρ=r_*,p will be modified through additional terms. The form of these new terms were derived in Eq. (58) of Ref. <cit.> and later extended in Ref. <cit.>. Consider a Dirac delta distribution located at the interface between elements 𝖣^k_p and 𝖣^k_p+1.
The basic idea is to note that the DG method is a weak formulation of the differential equation, where the numerical solution is made to satisfy Eq. (<ref>). When we collocate the δ(r_* - r_*,p) with a subdomain interface, we are faced with evaluating two relevant integrals for the subdomain to the left and right of r_*,p. We enforce the usual selection property of the Dirac distribution
when integrated over the union 𝖣^k_p∪𝖣^k_p+1, yet we are free to choose how the individual integrals contribute to the total one. Following Ref. <cit.>, we carry out a preferred splitting according to the wave dynamics of the problem. Applying the procedure of Ref. <cit.> to our problem we arrive at
- (F^*)^k_p+1_left, modified =
-(F^*)^k_p+1_left + T P^+T^-1 G(t) ,
-(F^*)^k_p_right, modified =
-(F^*)^k_p_right - T P^-T^-1 G(t) .
The negative signs in front of each F^* instance can be understood by noting that
the flux and source vectors are defined on different sides of the differential equation.
Compare, for example, Eq. (<ref>) and Eq. (<ref>).
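A sketch of applying these modifications, assuming T, P^+, P^- from the characteristic decomposition and the source vector G(t) at the interface; the signs follow the quoted equations after moving F^* to the left-hand side:

import numpy as np

def source_modified_fluxes(Fstar, T, Pp, Pm, G):
    # Fstar is the unmodified numerical flux at the interface r_* = r_{*,p}.
    Tinv = np.linalg.inv(T)
    Fstar_left_of_right_elem = Fstar - T @ Pp @ Tinv @ G   # feeds D^{k_p+1}
    Fstar_right_of_left_elem = Fstar + T @ Pm @ Tinv @ G   # feeds D^{k_p}
    return Fstar_left_of_right_elem, Fstar_right_of_left_elem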
§ NUMERICAL EXPERIMENTS
The numerical experiments documented in
Secs. <ref> and <ref> set M=1.
§.§ Verification of hyperboloidal layers: convergence and superconvergence
Our first experiment demonstrates exponential convergence of our solver throughout the hyperboloidal layer.
Upon setting a=M=q=0 [Clearly this setup does not correspond to an EMRI model. However, the resulting system (<ref>) is an important special case for which exact solutions can be computed. This allows for unambiguous code tests that are impossible otherwise.], the potential (<ref>) becomes V_ℓ m = - ℓ (ℓ +1)/r^2 and r_* =r. And
so Eq. (<ref>) is just the ordinary flatspace wave equation:
[- ∂^2/∂ t^2 + ∂^2/∂ r^2 -ℓ(ℓ+1)/r^2]ψ_ℓ m = 0 .
The first-order system in hyperboloidal coordinates is given from Eq. (<ref>):
ψ̇_ℓ m = -π_ℓ m
π̇_ℓ m
- H ϕ̇_ℓ m
= - (1-H) ϕ_ℓ m^'
- ℓ(ℓ+1)/r^2ψ_ℓ m
ϕ̇_ℓ m - H π̇_ℓ m = - (1-H) π_ℓ m^' .
Due to the simple setting an exact,
closed-form solution can be found <cit.>. Setting ℓ=2, the outgoing solution
to Eq. (<ref>) is
ψ(t,r) = f”(t-r)
+ 3/r f'(t-r)
+ 3/r^2 f(t-r) ,
where f(u) is an arbitrary profile function of u=t-r,
the prime indicates
differentiation with respect to the argument, and we have
suppressed the harmonic indices.
Specification of the profile function,
f(u) = sin[f_0 (u-u_0) ]
e^-c(u-u_0)^2 ,
determines a purely outgoing multipole
solution <cit.>[Eq. (53) from Ref.<cit.> (which is Eq. (60) in arXiv version 1) has a typo.
This equation is corrected in arXiv version 2.].
Here c characterizes the solution's spatial extent,
f_0 its “central” frequency, and u_0 its offset.
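For reference, a direct Python transcription of this exact solution, with the derivatives of f expanded by hand (default parameters match the values quoted below):

import numpy as np

def exact_l2_solution(t, r, f0=2.0, c=1.0, u0=-10.0):
    # psi = f''(u) + 3 f'(u)/r + 3 f(u)/r^2 for
    # f(u) = sin(f0 (u - u0)) exp(-c (u - u0)^2), u = t - r.
    x = (t - r) - u0
    g = np.exp(-c * x**2)
    s, co = np.sin(f0 * x), np.cos(f0 * x)
    f = s * g
    fp = (f0 * co - 2.0 * c * x * s) * g
    fpp = (-f0**2 * s - 2.0 * c * s - 4.0 * c * f0 * x * co
           + 4.0 * c**2 * x**2 * s) * g
    return fpp + 3.0 * fp / r + 3.0 * f / r**2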
A consequence of the hyperboloidal layer coordinate transformation, Eq. (<ref>),
is that t-r = τ - ρ, which allows us to easily re-express the
exact solution in (τ, ρ) coordinates.
We solve Eq. (<ref>) (with ℓ=2) on
ρ∈ [1, 50], with the layer at R=30, set the final time T = 50,
and choose a timestep Δ t = 10^-5 sufficiently small such
that the Runge-Kutta's temporal error is well below the spatial discretization error.
Upon setting f_0 = 2, c = 1, and u_0 = -10, we
take the initial data from the exact solution evaluated at τ=0, which
is (numerically) zero at the left boundary and the start of the layer ρ=R.
At the left physical
boundary point we choose numerical fluxes that weakly
enforce simple Sommerfeld (outgoing) boundary conditions,
which are sufficient for our purposes as the solution never impinges upon this boundary.
Because of the property of the hyperboloidal layers, no boundary conditions are needed at ρ=50.
We now check the convergence of our numerical solution
against the exact solution when using hyperboloidal layers.
It is well known that for a fixed value of polynomial degree N, the approximation
error (when computed in the L_2 norm over the spatial grid at a fixed time T) typically
decreases as a power law with an anticipated rate of N+1 for smooth solutions.
We say this is “anticipated” because it follows from standard results of polynomial
approximation theory and has nothing to do with the DG scheme, which can modify the expected rate
depending on the system and various method choices.
Without using hyperboloidal layers our numerical scheme (for the ordinary wave equation)
is identical to the one of Ref. <cit.> where the expected
convergence rate of N+1 is
demonstrated in Fig. 3 of that reference; similar results hold for non-linear problems <cit.>.
We now show that our numerical approximation error continues to decay with a rate of N+1
when using the method of hyperboloidal layers. Furthermore,
at certain locations of the grid we achieve superconvergent rates of 2N+1.
We report the numerical error as a
relative L_2 norm
E_N,K(ρ) = ∫_0^T | ψ_ num(τ,ρ) - ψ_ exact(τ,ρ) |^2 dτ/∫_0^T | ψ_ exact(τ,ρ) |^2 dτ
taken over time at a fixed value of ρ.
The numerical solution ψ_ num is computed using a degree N approximating polynomial on a grid of K subdomains.
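A sketch of this error measure, approximating both time integrals with the trapezoidal rule (our choice of quadrature):

import numpy as np

def relative_l2_error(tau, psi_num, psi_exact):
    # E_{N,K}(rho): relative L2-in-time error at a fixed grid point rho.
    def trapz(y):
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(tau))
    return trapz(np.abs(psi_num - psi_exact)**2) / trapz(np.abs(psi_exact)**2)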
Fig. <ref> shows the scheme's rate of convergence as N and K are varied.
We measure the error before the layer at ρ=15 (left panel), inside the layer at ρ=40 (middle panel), and at future null infinity at ρ=50 (right panel). For the first two cases these values of ρ correspond to grid points that do not lie on the rightmost side of a
subdomain. The scheme's rate of convergence is consistent with our N+1 expectation. By comparison, the rate of at future null infinity is found to be 2N+1. This is a hallmark of a superconvergent DG scheme <cit.>, at which certain grid points (in this case, the outflow boundary is also a Radau nodal point) converge at a higher rate than that of other grid points. Finally, Fig. <ref> shows the scheme's (super)convergence depends on the coordinate transformation (<ref>) that defines the hyperboloidal layer. Noting that the transformation is parameterized by a positive integer P, Fig. <ref> shows a loss of convergence when the hyperboloidal layer's differentiability is too low (P=2) or P is odd.
Because the far-field waveform reaching gravitational-wave detectors is taken to be at future null infinity, superconvergence allows our scheme to obtain highly accurate waveforms for comparatively sparse computational grids. Furthermore, because the largest stable timestep scales as N^-2, our high-order waveform computation can be achieved with a larger step. For example, setting N=4 we achieve 9^ th order convergence. Without superconvergence this would only be possible with N=8, which translates into 4× larger timesteps. We note that superconvergence is not a property of hyperboloidal layers, but rather our DG scheme. Yet the two work well together as ρ=s is both a super-convergent point and exactly where we need to record the far-field waveform after applying the method of hyperboloidal layers.
Finally, Fig. <ref> demonstrates
the spectral convergence of our method as applied to Eq. (<ref>) (our preferred first-order reduced system; denoted “System 1” in the legend) and Eq. (<ref>) (denoted “System 2” in the legend). We observe that the problematic “System 2” loses convergence much sooner than “System 1”, in particular well before round-off error is expected to show up. While a complete understanding of this issue is still lacking, some observations are reported in App. <ref>. For these experiments, we use a fixed number of K=128 subdomains and plot the error (<ref>) as a function of N. We have chosen Δ t= 0.00001 to ensure the temporal error is well below the spatial discretization error.
§.§ Verification of Price tails
The late time decay behavior of scalar field perturbations around a Kerr black hole was first shown in Ref. <cit.>. At late times, after the exponential decay of ringdown, the field decays as a power law t^n. In this subsection, we perform a series of experiments to test our code by comparing to results from the literature. In the legends of Figs. <ref> and <ref>, the first value in the parentheses represents the expected behavior. We use two methods to extract a value of n from our numerical data: (i) a power law fit evaluated at 100,000 (the second number in the parentheses of each legend) and (ii) each figure also includes an inset where the decay rate is computed from a finite-difference approximation to the logarithmic derivative n=∂_lnτln( ψ_ℓ m(τ) ).
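A small sketch of the local power index extraction used in the insets (the particular finite-difference stencil is our choice):

import numpy as np

def local_power_index(tau, psi):
    # n(tau) = d ln|psi| / d ln(tau), reads off the late-time tail psi ~ tau^n.
    return np.gradient(np.log(np.abs(psi)), np.log(tau))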
We provide initial data to the scalar field at some point away from the black hole event horizon but well before the hyperboloidal layer starts. The solution is then extracted as a time series at a location inside the layer and at ℐ^+. This also acts as an important test for the hyperboloidal layer method.
Finally, we note that some results of this section have been compiled using the alternative first-order reduced system discussed in App. <ref>; this shows that although the alternative system is problematic for certain studies (cf. Sec. <ref> and Sec. <ref>) it can be used when high-accuracy is not required.
§.§.§ Price tails on Schwarzschild
For our first test we perturb the scalar field around the background Schwarzschild spacetime with non-zero momentum initial data,
ψ_ℓ m = 0 , π_ℓ m = 1/√(2π10^2)exp( -(ρ - 30)^2/2π10^2) ,
and where ϕ_ℓ m = ∂_r_*ψ_ℓ m. We solve Eq. (<ref>) (with ℓ=2 and M=1) on ρ∈ [-200, 1200] with the layer at R=150. We use K=400 subdomains and on each subdomain the solution is approximated by a degree N=10 polynomial. The time step is set to Δ t = 0.0866. In Fig. <ref>, we show the solution recorded as a time series at r_*=500 (left panel) and at ℐ^+ (right panel). At late times we empirically measure the decay rate of solution and quote this value in the parentheses of the figure's label. In the inset figure, we also report the local power index and show the expected rate (as a horizontal line) based on Price's Law <cit.>. Our measured tail decay rates are consistent with the theoretically known values at both a finite distance and ℐ^+ <cit.>.
§.§.§ Price tails on Kerr
Unlike the Schwarzschild case, a full understanding of Price tails in Kerr spacetime was not developed until relatively recently <cit.> (the reader is referred to the details therein). Consider the situation where ℓ' is the mode where we set the initial data (assuming non-zero momentum), and ℓ is the mode excited through coupling. As proposed in Ref. <cit.> and later shown in Ref. <cit.>, except for the case in which ℓ' is the slowest decaying mode (for which case the decay rates are given by -n=ℓ'+ℓ+3), all other azimuthal modes decay along r= const (approaching future timelike infinity i^+) according to -n=ℓ'+ℓ+1. Ref. <cit.> also found the decay rate along future null infinity ℐ^+, specifically, -n^ℐ^+=ℓ+2 if ℓ≥ℓ', and -n^ℐ^+=ℓ if ℓ≤ℓ'-2.
In Fig. <ref> we repeat our Schwarzschild tails experiment but now setting a=0.99995. We solve on the domain ρ∈ [-2100,400] with the start of the hyperboloidal layer at R=149.3500. We use K=400 subdomains and on each subdomain the solution is approximated by a degree N=15 polynomial. The time step is set to Δ t = 0.0761. To facilitate a direct comparison with previous work, we choose our initial data to match that of Ref. <cit.>. In particular, we provide a non-zero (2,0) mode perturbation that, through mode coupling, excites the (0,0) and (4,0) modes. Technically all m=0, even ℓ modes will be excited, but we only consider nearest neighbor coupling for this experiment; see Sec. <ref>. Our measured rates are consistent with known results.
As a final test, we extend this experimental setup (now using N=10, K=400, and Δ t = 0.0436) for a range of non-zero (ℓ',m) mode data and report the measured decay rate of the coupled (ℓ,m) modes. Table <ref> summarizes numerical experiments while using a spin value of a=0.8.
§.§ Fluxes from a scalar-charged particle in circular orbit
§.§.§ Fluxes on Schwarzschild
This subsection compares our circular orbit energy fluxes (<ref>)
to those obtained by other authors and codes.
For our simulations, we have typically chosen
M = 1, and we solve Eq. (<ref>) with a distributional
source term given by Eq. (<ref>). Our physical domain
ρ∈ [-100, 200] is partitioned into 200 subdomains
and on each subdomain we approximate the solution with a degree
N=10 polynomial. We set the final time T = 8000,
and the time step Δ t = 0.0018. As is common practice,
we supply trivial (zero) initial
data. We note that supplying trivial initial
data is commonly done because (i) the exact initial data is unknown and
(ii) the expectation is that over long enough times the impact of incorrect
initial data will propagate away as so-called junk radiation <cit.>.
Trivial initial data is inconsistent with the distributional forcing
terms (<ref>), so we smoothly turn on the source term with
a ramp-up function. We take our ramp-up function to be
Eq. C1 of Ref. <cit.>
with parameter values τ = 400 and δ = 0.00025.
To achieve high accuracies for some of the higher harmonic modes, we
sometimes deviate from these default settings.
Before presenting our numerical results, we remark on potential sources of error.
With the numerical setup described above, the relative
error associated with our numerical solution before the start of the hyperboloidal layer
is better than 10^-10, which is sufficient for our purposes. At ℐ^+,
where the energy flux is computed, we encounter two
additional sources of error: spurious junk due to trivial initial data and
setting the hyperboloidal layer's width. These two sources of (systematic) error
are quantified in Fig. <ref>. In this experiment,
we consider a high-resolution numerical computation of the
(2,2)-mode energy flux Ė^22_ DG(τ) from
a particle in circular orbit located at
r_*,p=12.7725887222397 and our domain is
ρ∈ [r_*,p-200,r_*,p+200].
For a fixed location
of the hyperboloidal layer's start (ρ = R), the figure
shows the agreement between our DG scheme and a frequency-domain solver <cit.>
becomes better as we wait longer. Indeed, for each R we plot the relative error computed as
|Ė^ℓ m_ DG(τ) - Ė^ℓ m_ FR| / |Ė^ℓ m_ FR| ,
where Ė^ℓ m_ FR is the energy flux computed with a frequency-domain solver and Ė^ℓ m_ DG(τ) is
the time-domain DG solver's value. We see that at early times the solution is highly contaminated by spurious junk. Achieving a high-accuracy solution requires that we wait sufficiently long for the spurious “junk tails” to decay away.
As noted in Ref. <cit.>, because tails at ℐ^+ decay more slowly, one must wait even longer (as compared to finite-radius measurements) for these transients to die off.
Fig. <ref> also shows that as
R → s (i.e., the layer's width goes to zero) our solution quality degrades. We
believe this is due to sharper gradients that stem from the compression function Ω,
which makes it harder for the numerical scheme to approximate the solution for a fixed grid resolution.
For example, setting R=100 (where the layer's width is ≈ 110) we find the energy flux computation is accurate to a relative error on the order of 10^-12. Keeping the grid resolution fixed, we see that the error in the flux computation increases as the layer's width shrinks. When R=190 (where the layer's width is ≈ 22) the error is now ≈ 5 × 10^-11, yet doubling the number of subdomains improves the solution quality allowing for energy flux computations to again obtain relative errors on the order of 10^-12. To achieve efficient numerical schemes it would be helpful to have criteria for setting the layer's width given properties of the problem, but this is currently unavailable in the literature.
Having properly accounted for these sources of error, Table <ref> compares the energy flux (computed from high resolution runs) to values computed using a frequency-domain solver <cit.> implemented within the Black Hole Perturbation Toolkit <cit.>. These frequency-domain results rely on solving the appropriate boundary value problems in the frequency domain and allow for a direct, non-trivial comparison between methods. Finally, Fig. <ref> shows the rate of convergence in our DG scheme's energy flux computation (at time 4000M, after junk radiation has left the system) as N and K are varied while the timestep Δ t = 0.010460592 is kept fixed. As the energy flux is computed at future null infinity, we anticipate superconvergent rates similar to those observed for the flatspace wave equation; see the rightmost panel of Fig. <ref>. For DG methods the numerical error is expected to decrease exponentially fast, E ∝ K^-p, where p=N+1 is the standard convergence rate. Fig. <ref> shows the rate of convergence to be much faster than N+1, and about 1 order of convergence less than what is shown in Fig. <ref>. This discrepancy can be understood by noting that the errors shown in Fig. <ref> are for Ψ, while for the energy flux computation the errors are due to π = -∂_t ψ.
§ SUMMARY AND FUTURE WORK
This paper presents a new numerical method for solving the distributionally-forced s=0 Teukolsky equation (scalar waves on Kerr) that models extreme mass ratio inspirals (EMRIs) in a Kerr spacetime. Our method uses a nodal discontinuous Galerkin (DG) approach that expands the solution in spherical harmonics and recasts the sourced Teukolsky equation as a set of coupled, fully-first order one-dimensional symmetric hyperbolic equations. This approach allows us to correctly account for the Dirac delta distribution using a modified numerical flux and achieve global spectral accuracy even at the source's location. Furthermore, we use the hyperboloidal layer method to connect the near field to future null infinity, providing direct access to the far-field waveform. Our numerical experiments demonstrate the accuracy and efficiency of the method, including convergence tests against exact solutions, energy luminosities for circular orbits, and the scheme's superconvergence at future null infinity.
Our method has several advantages over existing time-domain solvers for the distributionally-forced Teukolsky equation. First, it does not introduce small-length scales near the smaller black hole, which can be a significant source of systematic error in other time-domain solvers. Combined with spectral accuracy at the source's location, our DG scheme and code should be well-suited for self-force calculations that require highly accurate numerics at the location of the delta distribution. It is also worth noting that when computing gravitational self-force effects a suitable regularization procedure must be used. Regularization techniques have been well-developed for the spherical harmonic modes. Yet in frequency-domain Kerr calculations, spheroidal harmonics are exclusively used, which in turn requires projecting onto a spherical harmonic basis before the regularization step can be applied. In our method, we directly solve for the spherical harmonic modes, thereby avoiding this complicated intermediate step. Second, it achieves global spectral accuracy (in fact, superconvergence), which is critical for accurately capturing the complex dynamics of extreme mass ratio inspirals both at the smaller black hole's location and in the far field. Finally, our method provides direct access to the far-field waveform, a key output of the simulation that can be compared with data from upcoming space-based gravitational wave detectors.
As a byproduct of our work, we have compiled numerical evidence that when combined with the hyperboloidal layer method, one seemingly reasonable choice of the first-order reduction variable appears to be problematic. This issue shows up in different settings, such as calculating scalar fluxes at future null infinity or when comparing to exact solutions of the ordinary wave equation. In all settings, the numerical solution's accuracy saturates at about 5 to 6 digits irrespective of the timestep or grid resolution. On the other hand, with a better choice of auxiliary variable, we are able to achieve the expected decay of numerical error to better than 12 digits of accuracy. We also show that the resulting first-order system is symmetric hyperbolic in certain cases.
There are several avenues for future work that could extend and improve our methodology. One important direction is the extension of our method to handle the s=± 2 Teukolsky equations, which are needed for modeling gravitational waves from EMRI systems. Another area for future work is the development of methods to handle eccentric orbits using a time-dependent coordinate transformation, which has been worked out in simpler settings <cit.>. The methods presented in this paper assume circular orbits, but eccentric orbits are common in real astrophysical EMRI systems and can significantly impact the gravitational wave signal's morphology. Finally, there is also potential for further improving the accuracy and efficiency of our method by exploring higher-order time-stepping methods or adaptive mesh refinement.
§ ACKNOWLEDGMENTS
We thank Anil Zenginoglu for discussions on first-order reductions and hyperboloidal layers, and
Jennifer Ryan for discussions on superconvergence.
We thank Som Bishoyi, Zachariah Etienne, Alfa Heryudono, Scott A. Hughes, Harald Pfeiffer, Leo Stein, Saul Teukolsky, Alex Vano-Vinuales, Niels Warburton, and Barry Wardell for helpful discussions throughout the project. We also thank Karoly Csukas and Hannes Rüter for their valuable feedback on an earlier draft of this paper. The authors acknowledge support of NSF Grants PHY-2106755 and PHY-2307236 (G.K), DMS-1912716 and DMS-2309609 (S.F, S.G, and G.K), AFOSR Grant No. FA9550-18-1-0383 (S.G) and Office of Naval Research/Defense University Research Instrumentation Program (ONR/DURIP) Grant No. N00014181255. This material is based upon work supported by the National Science Foundation under Grant No. DMS-1439786 while a subset of the authors were in residence at the Institute for Computational and Experimental Research in Mathematics in Providence, RI, during the Advances in Computational Relativity program. Some of the simulations were performed on the UMass-URI UNITY supercomputer supported by the Massachusetts Green High Performance Computing Center (MGHPCC). We also thank ChatGPT for writing the first draft of the conclusion (and only the conclusion) of this paper.
§ MODE-COUPLING COEFFICIENTS
When expanding the s=0 Teukolsky equation (<ref>) in spherical harmonics, we encounter a
term sin^2θ Y_ℓ m that is responsible for mode coupling. To re-expand this term
with spherical harmonics
sin^2θ Y_ℓ m = c^ℓ-2_ℓ mY_ℓ-2, m + c^ℓ_ℓ m Y_ℓ m + c^ℓ+2_ℓ m Y_ℓ+2, m
= C_++^ℓ-2 Y_ℓ-2, m + C_0^ℓ Y_ℓ m + C_–^ℓ+2 Y_ℓ+2, m
we need expressions for the mode-coupling coefficients. These can be found from either Ref. <cit.> or Ref. <cit.>, where the latter considered coupling between associated Legendre polynomials, P^m_ℓ(cosθ). We follow Ref. <cit.>, and in line (<ref>) we switch from our paper's notation to the notation used by Ref. <cit.>. Quoting the relevant results from Ref. <cit.>, we have
[ C_++^ℓ = -c_+^ℓ+1 c_+^ℓ ,; C_0^ℓ = 1-(c_-^ℓ)^2-(c_+^ℓ)^2 ,; C_–^ℓ = -c_-^ℓ-1 c_-^ℓ , ]
where
c_-^ℓ = [(ℓ^2-m^2)/(2ℓ-1)(2ℓ+1)]^1/2 ,
c_+^ℓ = c_-^ℓ+1 .
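A direct transcription of these coefficients (the guard for ℓ ≤ |m|, where the coefficient vanishes or is never used, is our addition):

import numpy as np

def coupling_coefficients(ell, m):
    # Returns (C_++^ell, C_0^ell, C_--^ell) for sin^2(theta) Y_{ell m}.
    def c_minus(l):
        num = l * l - m * m
        if num <= 0:
            return 0.0
        return np.sqrt(num / ((2.0 * l - 1.0) * (2.0 * l + 1.0)))
    def c_plus(l):
        return c_minus(l + 1)
    C_pp = -c_plus(ell + 1) * c_plus(ell)
    C_0 = 1.0 - c_minus(ell)**2 - c_plus(ell)**2
    C_mm = -c_minus(ell - 1) * c_minus(ell)
    return C_pp, C_0, C_mm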
§ MATRIX FORM OF THE FULL SYSTEM (<REF>)
We now consider the matrix form for the general system (<ref>) for any combination of ℓ_ max, m and ℓ-mode type = {odd, even}. The even and odd ℓ modes decouple so their governing systems can be analyzed separately. For concreteness, consider (for a fixed value of m) the even ℓ-mode sector, where ℓ=ℓ_ min,ℓ_ min+2,…,ℓ_ max. First, we split our solution vector U_ℓ_ max m into subvectors
π⃗_ℓ_ max m = [π_ℓ_ min m, π_(ℓ_ min+2)m, …, π_ℓ_ maxm]^T ,
ϕ⃗_ℓ_ max m = [ϕ_ℓ_ minm, ϕ_(ℓ_ min+2)m, …, ϕ_ℓ_ maxm]^T ,
which allows us to write U_ℓ_ max m = [ π⃗_ℓ_ max m, ϕ⃗_ℓ_ max m].
We now use block notation to index vectors, for example U^ℓ_ max m_π = π⃗_ℓ_ max m. With this new notation,
the matrices defining the first-order system, Eq. (<ref>) can be partitioned as
E = (
[ E_ππ E_πϕ; E_ϕπ E_ϕϕ ]) , Â = (
[ Â_ππ Â_πϕ; Â_ϕπ Â_ϕϕ ]) .
The block-wise components of  are Â_ππ = Â_ϕϕ = 0 and
Â_πϕ = Â_ϕπ = (1-H) 𝐈,
where 𝐈 is the identity matrix.
The block-wise components of E are
E_πϕ = E_ϕπ = -H 𝐈,
E_ϕϕ = 𝐈,
and the tridiagonal matrix
E_ππ=
[ 1-f c^ℓ_ min_ℓ_ min, m -f c^ℓ_ min_ℓ_ min+2, m 0 0 0 0; -fc^ℓ_ min+2_ℓ_ min,m 1-fc^ℓ_ min+2_ℓ_ min+2,m -fc^ℓ_ min+2_ℓ_ min+4,m 0 ⋮ ⋮; 0 -fc^ℓ_ min+4_ℓ_ min+2,m 1-fc^ℓ_ min+4_ℓ_ min+4,m -fc^ℓ_ min+4_ℓ_ min+6,m 0 0; 0 0 ⋱ ⋱ ⋱ 0; ⋮ ⋮ ⋮ -fc^ℓ_ max-2_ℓ_ max-4,m 1-fc^ℓ_ max-2_ℓ_ max-2,m -fc^ℓ_ max-2_ℓ_ max,m; 0 0 0 0 -fc^ℓ_ max_ℓ_ max-2,m 1-fc^ℓ_ max_ℓ_ max,m ] .
The matrix structure for the odd ℓ-modes is identical. From properties documented in App. <ref>, we have c^ℓ+2_ℓ,m = c^ℓ_(ℓ+2),m which implies E_ππ is symmetric. Therefore, E is also symmetric. That  is symmetric follows from straightforward inspection.
Finally, we consider the flux Jacobian matrix A defined from Eq. (<ref>). In terms of block-wise components we have
A/1-H =
(
[ H (E_ππ - H^2)^-1 ( E_ππ - H^2)^-1; ( 𝐈 - H^2 E_ππ^-1)^-1 H( 𝐈 - H^2 E_ππ^-1)^-1 E_ππ^-1 ]) ,
where E_ππ^-1 E_ππ = 𝐈. We see that in general A is not a symmetric matrix, even though E_ππ^-1 is. By considering H=0 we see that each of the four blocks of A are symmetric when working in the original t and r_* coordinates.
§ EQ. (<REF>) AS A SYMMETRIC HYPERBOLIC SYSTEM
With an explicit expression for the matrices of our problem given in App. <ref>, we now show conditions under which the first-order system (<ref>) is symmetric hyperbolic. Recall that to study a first-order system's hyperbolicity classification it is sufficient to consider the principal part of the differential equations,
ÊU̇+ÂU^'+ … = 0 ,
U̇+AU^'+ … =0 ,
where the variable-coefficient matrices A(ρ), Ê(ρ), and Â(ρ) depend on the spatial independent variable ρ and “…” denotes lower-order terms (i.e., it does not include any term proportional to U̇ or U^'). System (<ref>) represents the form of our original system (<ref>)'s principal part after we put it into matrix form (<ref>), where the matrices are shown explicitly in App. <ref>. System (<ref>) is the result of inverting Ê, leading to the system (<ref>) that we discretize with the DG method.
One important question is whether or not this system constitutes a well-posed initial-boundary-value problem (IBVP). That is, given sufficiently smooth initial conditions U(0,ρ) and dissipative boundary conditions, the L_2 norm of U can be bounded by norms taken over the initial data and boundary conditions <cit.>. Well-posedness for linear, variable-coefficient systems like ours can be shown if the problem is a symmetric hyperbolic system <cit.>. This condition is sometimes stated as follows: the system (<ref>) is symmetric hyperbolic if A(ρ) is a symmetric matrix at all values of ρ in the physical domain <cit.>. Direct inspection of Eq. (<ref>) shows that this is not the case. However, a similar well-posedness result follows if we show our system (<ref>) to be positive symmetric: that is, both Ê(ρ) and Â(ρ) are symmetric matrices at all values of ρ, and Ê(ρ) is a positive-definite matrix at all values of ρ. To see why this would be helpful, note that if Ê is positive definite then we can take its square root, Ê = C C, for some invertible matrix C. Defining a new system vector V = C U, Eq. (<ref>) can be rewritten as
V̇+C^-1ÂC^-1V^'+ … = 0 ,
where the matrix C^-1ÂC^-1 is symmetric <cit.>. System (<ref>) is clearly symmetric hyperbolic by the usual definition, and so the standard well-posedness results hold for V. Due to the boundedness of C and C^-1, the original system (<ref>) is also well-posed.
Since App. <ref> shows that Ê and  are symmetric, we only need to show that Ê is positive definite. While we have been unable to prove Ê is positive definite in the most general setting, we have numerically checked a handful of cases and conjecture this to be true in general. We have been able to prove it for one important special case.
We consider the special case: when a=0 (scalar waves on Schwarzschild with hyperboloidal layers) the matrix Ê is positive definite. Recall that Ê is positive definite if z^T Ê z > 0 for all non-trivial vectors z. When a=0 we have f=0, so that E_ππ = 𝐈 and the matrix Ê vastly simplifies. In this case it is straightforward to directly check the positive-definite condition. Mimicking the block structure of Ê, let
z = [a_1, a_2, …, a_n, b_1, b_2, …, b_n ], such that a_i and b_i correspond, respectively, to the π and ϕ portions of U. Direct computation gives
z^T Ê z = ∑_i=1^n ( a_i^2 + b_i^2 - 2 H a_i b_i ) .
Noting that 0 ≤ H ≤ 1, we have two cases to consider for each i^th term. First suppose a_i b_i <0, then clearly a_i^2 + b_i^2 - 2 Ha_i b_i ≥ 0. Next consider a_i b_i >0, in which case we have
a_i^2 + b_i^2 - 2 Ha_i b_i ≥ a_i^2 + b_i^2 - 2 a_i b_i ≥ 0
where we've used Young's inequality, ab ≤ a^2/2 + b^2/2. Such bounds can be applied to all terms in the sum, and so we have z^T Ê z ≥ 0. As an immediate consequence, upon setting M=0 we deduce that the ordinary flatspace wave equation in hyperbolodial coordinates (<ref>) also constitutes a well-posed IBVP.
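A quick numerical check of this positive-definiteness conjecture for a sampled matrix Ê(ρ) might look like the following (the tolerance is ours):

import numpy as np

def is_positive_definite(E_hat, tol=1e-12):
    # Test positive definiteness via the eigenvalues of the symmetric part.
    sym = 0.5 * (E_hat + E_hat.T)
    return bool(np.all(np.linalg.eigvalsh(sym) > tol))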
§ AN ALTERNATIVE FIRST-ORDER REDUCTION
Throughout the main body of this paper, we have worked with a first-order
reduction of Eq. (<ref>) that arise from the reduction variables
π_ℓ m=-∂ψ_ℓ m/∂ t
and ϕ_ℓ m=∂ψ_ℓ m/∂ r_* followed by
a coordinate transformation (<ref>).
Appendix <ref> shows the resulting system
to be symmetric hyperbolic when a=0.
Different choices in our first-order
reduction can lead to different systems with different hyperbolicity classifications. In this
appendix we consider a reasonable set of choices that leads to a strictly hyperbolic system
but not symmetric hyperbolic.
Through numerical experiments, and as documented in Sec. <ref>,
we found this formulation to be problematic leading to a loss of convergence.
While we lack a full understanding of the issue, we note that symmetric hyperbolic systems
are automatically well-posed IBVPs and often have nice numerical properties after discretization.
On the other hand, strictly hyperbolic systems need not be well-posed when one of the
wave speeds is 0 on the boundary <cit.>.
We note that at the rightmost boundary, ρ=s, one of the system's wavespeeds is 0.
However, we caution the reader that because we do not understand the observed problematic
behavior, we cannot entirely rule out more pedestrian explanations like a code bug. However,
we did spend significant effort checking the correctness of our code and carefully considered
issues such as catastrophic cancellation that might lead to loss of accuracy near ρ=s.
Using the hyperboloidal layer approach described in Sec. <ref>,
the 1+1 scalar wave Eq. (<ref>) becomes
-(1-H^2 ) ψ̈_ℓ m + (1-H)^2 ψ^''_ℓ m
+ Δ a^2/(r^2+a^2)^2 c^ℓ_L mψ̈_L m
- 2H(1-H) ψ̇^'_ℓ m - H^'(1-H) ψ̇_ℓ m - H^'(1-H) ψ_ℓ m^'
- 4imMar/(r^2+a^2)^2ψ̇_ℓ m + V_ℓ m (r)ψ_ℓ m = g_ℓ m(t,r) ,
where we use an over-dot to denote ∂ / ∂τ differentiation,
a prime for differentiation by ∂ / ∂ρ, and
the various other quantities appearing in this equation have been defined in Sec. <ref>.
To the left of the layer, where ρ < R, we have H=0 and so both Eq. (<ref>)
and Eq. (<ref>) are identical.
The coefficients of each term in Eq. (<ref>)
are finite at ρ=s, which can be seen by noting that as ρ→ s we have
(1 - H) ∼Ω^2 ∼ r_*^-2∼ r^-2 <cit.>.
To carry out a first-order reduction, we introduce two new variables,
π_ℓ m=-∂ψ_ℓ m/∂τ , ϕ_ℓ m=∂ψ_ℓ m/∂ρ .
The following first-order system corresponds to the original second-order wave equation (<ref>):
ψ̇_ℓ m = -π_ℓ m
(1-H^2) π̇_ℓ m
- Δ a^2/(r^2+a^2)^2 c^ℓ_Lmπ̇_L m
- H (1-H) ϕ̇_ℓ m
= - (1-H)^2 ϕ^'_ℓ m
- H (1-H) π^'_ℓ m
- H^'(1-H) π_ℓ m
- 4imMar/(r^2+a^2)^2π_ℓ m
+ H^'(1-H) ϕ_ℓ m
- V_ℓ m (r)ψ_ℓ m + g_ℓ m(t,r)
ϕ̇_ℓ m = - ∂_ρπ_ℓ m .
where in carrying out the first-order reduction, we replaced the mixed-partial derivative term
as 2 ψ̇^'_ℓ m = -∂_ρπ_ℓ m + ∂_τϕ_ℓ m and did not consider
alternative choices. For example, the choice 2 ψ̇^'_ℓ m = -2∂_ρπ_ℓ m will lead to a simpler
A matrix (cf. <ref>) but was not considered in our first-order reduction. If
ψ_ℓ m solves the first-order system (<ref>) it also solves the
original second-order equation (<ref>) provided ϕ_ℓ m=∂_ρψ_ℓ m.
Let us now put Eq. (<ref>) into matrix form (<ref>).
For concreteness, consider the even ℓ-mode sector with m=0 and ℓ_ max=2.
Then we have U=[π_00, π_20, ϕ_00, ϕ_20]^T, and
E=[ 1-H^2-fc_00^0 -fc_20^0 -ω H 0; -fc_00^2 1-H^2-fc_20^2 0 -ω H; 0 0 1 0; 0 0 0 1 ]
and
Â=[ ω H 0 ω^2 0; 0 ω H 0 ω^2; 1 0 0 0; 0 1 0 0 ]
with
B̂=[ ω H^'+μ 0 ωω^' 0; 0 ω H^'+μ 0 ωω^'; 0 0 0 0; 0 0 0 0 ]
and,
V̂=[ V_0ψ_00, V_2ψ_20, 0, 0 ]^T ,
where μ and f have been defined in Eq. (<ref>).
We immediately see that this system is not symmetric hyperbolic because  is not symmetric, nor is E^-1 symmetric.
However, one can show strict hyperbolicity and, noting that one of the wavespeeds is zero at ρ=s (the right-most boundary),
we see that  is not invertible. Numerically, we have been unable to achieve good results with this system. For example, in the very simple
case of M=0 (i.e., flatspace) the numerical solution to this system stops converging after about 4 to 5 digits of accuracy. This is shown most clearly
in Fig. <ref>: compare “System 2” (the problematic one using reduction variables (<ref>)) and “System 1” (the first-order system that arise from the reduction variables (<ref>)).
entry_id: http://arxiv.org/abs/2307.03268v1
published: 20230705174605
title: More Exact Thermodynamics of Nonlinear Charged AdS Black Holes in 4D Critical Gravity
authors: ["Prosenjit Paul", "Sudhaker Upadhyay", "Yerlan Myrzakulov", "Dharm Veer Singh", "Kairat Myrzakulov"]
primary_category: gr-qc
categories: ["gr-qc", "hep-th"]
prosenjitpaul629@gmail.com
Indian Institute of Engineering Science and Technology (IIEST), Shibpur-711103, WB, India
sudhakerupadhyay@gmail.com
Department of Physics, K.L.S. College, Magadh University, Nawada, Bihar 805110, India
School of Physics, Damghan University, Damghan, 3671641167, Iran
ymyrzakulov@gmail.com
Department of General & Theoretical Physics,
LN Gumilyov Eurasian National University, Astana, 010008, Kazakhstan
veerdsingh@gmail.com
Department of Physics, Institute of Applied Sciences and Humanities,
GLA University, Mathura 281406, Uttar Pradesh, India.
krmyrzakulov@gmail.com
Department of General & Theoretical Physics,
LN Gumilyov Eurasian National University, Astana, 010008, Kazakhstan
In this paper, we investigate nonlinearly charged AdS black holes in four-dimensional critical gravity and study more exact black hole thermodynamics under the effect of small statistical fluctuations. We compute the correction to the thermodynamics of the nonlinearly charged AdS black hole up to the leading order. We discuss the stability of the black holes under such fluctuations and find that the fluctuations cause instability in the black holes. Moreover, both the isothermal and adiabatic
compressibilities are also derived.
Finally, we estimate the role of small fluctuations on the equation of states and study the P-v diagram of nonlinearly charged AdS black hole.
§ OVERVIEW AND MOTIVATION
The history of black hole thermodynamics is quite long. In 1972, Bekenstein conjectured that black
holes possess an entropy <cit.>. Later, in 1973, a relationship between black hole entropy and horizon area was established by Bekenstein <cit.>, who found that the black hole entropy is proportional to the area of the event
horizon. In 1973, Bardeen, Carter, and Hawking proposed the four laws of black hole mechanics <cit.> in analogy with the four laws of thermodynamics. In 1974, Hawking <cit.> proposed that black holes emit thermal radiation whose temperature is inversely proportional to the black hole mass. Hawking and Page found a black hole solution in asymptotically AdS space <cit.> which possesses thermodynamic properties like entropy, temperature, etc.
Black hole physics has been a fascinating subject of study for several decades and their thermodynamics has been an active area of research. Jacob Bekenstein proposed the so-called “entropy-area" law, which suggested that the entropy of a black hole is proportional to the area of its event horizon. Subsequent research in the 1990s and 2000s focused on understanding the quantum corrections to black hole entropy that arise due to thermal fluctuations and other quantum effects. The Cardy formula
<cit.>, introduced by John Cardy in 1986, stands out as one of the most significant breakthroughs in this domain. The Cardy formula provides a powerful tool for studying the connection between black hole thermodynamics and conformal field theory and has been used extensively to study black hole entropy. Another important development was the discovery of the AdS/CFT correspondence in 1997 by Maldacena. This correspondence provides a way to understand the behavior of quantum systems in curved space-time, such as that near a black hole, in terms of a dual quantum field theory living on the boundary of space-time. This has led to important insights into the nature of black hole entropy and the holographic principle, which suggests that the properties of a system can be understood in terms of its boundary degrees of freedom. Other interesting black hole solutions are also studied <cit.>.
In 2002, Das computed <cit.> logarithmic corrections to the entropy. In recent years, there has been growing interest in studying the logarithmic corrections to the entropy of black holes due to small statistical fluctuations around black hole equilibrium. A black hole is assumed to behave as a thermodynamic system, and this system should be in equilibrium with its thermal radiation. However, the logarithmic corrections to thermodynamic entropy arise for all thermodynamic systems when small statistical fluctuations around equilibrium are taken into account <cit.>. A nontrivial multiplicative factor to the expression for the density of states arises due to the small statistical fluctuations, and the logarithm of this multiplicative factor leads to the corrections to the entropy. Thus, the Bekenstein-Hawking entropy can be modified by logarithmic corrections that result from thermal fluctuations of the black hole around its state of equilibrium. These logarithmic corrections to the entropy of the black hole are universal and apply to all kinds of black hole spacetimes, irrespective of whether they arise in Einstein's gravity or any higher-order theory of gravity. The inclusion of logarithmic corrections to the Bekenstein-Hawking entropy can be understood as the thermal fluctuations experienced by the black hole as it deviates from its stable state.
Recent studies have demonstrated that thermal fluctuations play a crucial role in understanding the behaviour of charged anti-de Sitter black holes <cit.>, leading to corrections in their thermodynamic properties. In fact, a thorough analysis of black holes has revealed that the quantum approach to their thermodynamics at small scales is essential, resulting in a variety of corrections to thermodynamic quantities. Investigations into the effects of such corrections have been conducted for a range of black holes, including the Godel black hole <cit.>, quasitopological black holes <cit.> and the Schwarzschild–Beltrami–de Sitter black hole <cit.>, charged rotating black holes in AdS space <cit.>, Horava-Lifshitz black holes <cit.>, charged black holes in gravity rainbow <cit.>, black holes in f(R) gravity <cit.>, rotating and charged BTZ black hole <cit.> and
Schwarzschild black hole
immersed in holographic quintessence <cit.> etc. The pioneering research of Frolov <cit.> has provided significant insights into the quantum corrections to black hole thermodynamics. The logarithmic corrections to the entropy of black holes have important consequences. One of the most significant is that they violate the area law of black hole entropy. The area law states that the entropy of a black hole is proportional to the area of its event horizon. However, the logarithmic corrections introduce additional terms in the entropy that are not proportional to the area of the event horizon.
The theory of nonlinear electrodynamics was first proposed <cit.> by Born and Infeld to remove the singularity of the electromagnetic field due to a point particle. After Born-Infeld electrodynamics, a new model of nonlinear electrodynamics was proposed by Plebanski using the antisymmetric conjugate tensor P^μν (known as the Plebanski tensor) and a structure-function ℋ = ℋ(P, Q), where P and Q are the invariants formed with the antisymmetric conjugate Plebanski tensor. The structure-function ℋ(P,Q) is related to the Lagrangian ℒ(F,G) by the relation
ℋ(P,Q)= 2Fℒ_F(F,G) - ℒ,
and the Lagrangian is dependent on the invariant formed with
the Maxwell tensor F^μν. Plebanski nonlinear
electrodynamics has been used to study nonlinear optics and
condensed matter physics. Plebanski theory has been studied
extensively in the context of gravitational theories and
a regular nonrotating black hole solution has been obtained
<cit.>. Very
recently, a charged rotating black hole solution based on Plebanski
theory was also obtained in Refs.
<cit.>. Using the
Plebanski nonlinear electrodynamics formalism, an interesting
model of a nonlinearly charged AdS black hole was obtained in 4D
critical gravity <cit.>, but its
logarithmic corrections to thermodynamics have not been studied.
This provides us with an opportunity to fill this gap.
This work considers an AdS black hole in four-dimensional critical gravity coupled with nonlinear electrodynamics and discusses its thermal properties. Furthermore, we study the effects of thermal fluctuations on the thermodynamics of this black hole. In this regard, we compute the first-order correction to the entropy of nonlinearly
charged AdS black holes. Next, we plot the entropy as a function of horizon radius for both cases with
and without considering thermal fluctuations. Here, we find that the thermal fluctuations affect the entropy of
small black holes significantly and for large black holes their impacts are negligible. Moreover, we compute the
corrected mass (enthalpy) of the system using the Hawking temperature and corrected entropy. The pressure
can be expressed in terms of the cosmological constant. So, the conjugate (corrected) thermodynamic volume of
a black hole is calculated using the expression of corrected mass. This can be done based on the fact that the
system must satisfy the first-law of thermodynamics. Once we have expressions of the corrected mass, volume, and entropy, it is a matter of calculation to compute corrected Helmholtz and Gibbs free energy. Here, we find
that the Helmholtz free energy decreases with horizon radius and the thermal fluctuations do not change the
nature of Helmholtz free energy. Thermal fluctuation decreases the Helmholtz free energy a bit. In
the case of Gibbs free energy, we find that for large black holes, Gibbs free energy takes a negative value. Also, we
notice that the effects of thermal fluctuation are significant for small black holes.
The stability of this black hole system is also studied. For this, we calculate the specific heat: a positive specific heat indicates that the black hole is in a stable equilibrium state. We find that, due to small statistical fluctuations, small black holes become unstable, while the stability of large black holes is unaffected. We then compute the effects of thermal fluctuations on the specific heat, as well as on the isothermal compressibility, which undergoes a transition from a positive to a negative value at a critical horizon radius. We also compute the speed of sound for this black hole, whose squared value ranges from zero to one. Finally, we treat the system as a Van der Waals fluid and observe that the pressure is discontinuous with respect to the specific volume of the black hole.
The main aim of this paper is to study corrections to various thermodynamic parameters of nonlinearly charged AdS black holes in 4D critical gravity when small statistical fluctuations around equilibrium are taken into account. In section <ref>, we review the black hole solution in critical gravity coupled with nonlinear electrodynamics, together with the uncorrected electric charge, electric potential, Hawking temperature, Wald entropy, mass, thermodynamic volume, free energies, and specific heat. In section <ref>, we compute the effects of thermal fluctuations on these thermodynamic parameters. In section <ref>, we study the charged AdS black hole in nonlinear electrodynamics as a Van der Waals fluid. Finally, in section <ref>, we summarize our results.
§ THE METRIC AND THERMODYNAMICS
In this section, we recapitulate some of the known facts about nonlinear electrodynamics in critical gravity.
Let us begin by writing an action describing the theory of critical gravity coupled with nonlinear electrodynamics <cit.>
S[g_μν , A_μ, P^μν]= ∫ d^4x √(-g) [ℒ_CG + ℒ_NLE] ,
where ℒ_CG and ℒ_NLE are the Lagrangian of the critical gravity and nonlinear electrodynamics, respectively. Here, ℒ_CG has the following form:
ℒ_CG= 1/2 κ( R - 2 Λ + β_1 R^2 +β_2 R_μν R^μν) ,
where κ is the gravitational constant, R and R_μν are the Ricci scalar and Ricci tensor, Λ is the cosmological constant, and β_1 and β_2 are coupling constants. Critical gravity allows the massive spin-zero modes to vanish if the coupling constants β_1 and β_2 are restricted to obey the relations <cit.>
β_2 = -2β_1,
β_1 =-1/2Λ.
The expression for the Lagrangian describing the nonlinear electrodynamics is given by
ℒ_NLE = -1/2 P^μν F_μν + ℋ(P,Q),
where P^μν is the conjugate antisymmetric tensor known as the Plebanski tensor, and ℋ(P,Q) is a structure function depending on the invariants P and Q formed with it. The field strength tensor F_μν is defined in terms of the vector field A_μ as F_μν = ∂_μA_ν - ∂_νA_μ.
Variation of the action (<ref>) gives the following field equations:
G_μν + Λ g_μν + χ_μν^CG =κ T_μν^NLE,
∇_μ P^μν =0,
where
χ_μν^CG = 2 β_2( R_μρ R_ν^ρ - 1/4 R^ρσ R_ρσ g_μν) + 2 β_1 R ( R_μν - 1/4 R g_μν) + β_2( □ R_μν
+ 1/2 □ R g_μν -2 ∇_ρ∇_( μ R_ν )^ρ) +2 β_1( g_μν□ R - ∇_μ∇_ν R ) ,
T_μν^NLE = ℋ_p P_μλ P_ν^λ - g_μν( 2 P ℋ_p -ℋ),
where β_1 and β_2 are given in equation (<ref>) and ℋ_P =∂ℋ/∂P.
The asymptotically AdS black hole metric is given by <cit.>
ds^2= - r^2/l^2 f(r) dt^2 + l^2/r^2 dr^2/f(r) + r^2/l^2 dΩ_2^2,
where cosmological constant Λ= - 3/l^2, with the asymptotic condition
lim_r →∞ f(r) =1.
The nonlinear source is described by a real structure function ℋ. Since we are interested in static configurations, we choose a structure function that depends only on P, i.e., ℋ=ℋ(P) <cit.>
ℋ(P) = (α_2^2 - 3 α_1α_3 ) l^2 P/3 κ - 2 α_2 (-2P)^1/4/l κ + α_2√(-2P)/κ,
where α_1, α_2, and α_3 are coupling constants. From the second equation of (<ref>) one can obtain
P=-M^2/2r^4,
where M is an integration constant; therefore, the structure function ℋ in equation (<ref>) is real. Finally, using the field equation (<ref>), one obtains the metric function f(r) as
f(r) = 1 - α_1√(M)l/r + α_2Ml^2/r^2 - α_3M^3/2l^3/r^3.
It is shown in Ref. <cit.> that the structural coupling constants play a significant role in the characterization of the solutions.
§ THERMODYNAMICS
In this section, we study the thermodynamics of nonlinearly charged AdS black holes in critical gravity. The metric of such a black hole is given in equation (<ref>). The electric charge of the black hole is calculated by <cit.>
Q= Ω_2 r_h^2/ζ^2 l^4,
where r_h is the position of the horizon, which can be expressed as r_h = ζ√(M) l, with ζ a root of the polynomial
ζ^3 - α_1ζ^2 + α_2ζ - α_3 =0.
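As a quick consistency check (ours, not part of the original derivation), one can verify numerically that r_h = ζ√(M) l is indeed a zero of f(r) whenever ζ solves this cubic. The following Python sketch uses assumed illustrative values α_1=2, α_2=1, ζ=2 and fixes α_3 from the cubic:

import math

# Assumed illustrative couplings; alpha3 is chosen so that zeta solves the cubic.
alpha1, alpha2, zeta = 2.0, 1.0, 2.0
alpha3 = zeta**3 - alpha1*zeta**2 + alpha2*zeta
M, l = 1.5, 1.0    # assumed integration constant and AdS radius

def f(r):
    # metric function f(r) of the nonlinearly charged solution
    return (1 - alpha1*math.sqrt(M)*l/r + alpha2*M*l**2/r**2
            - alpha3*M**1.5*l**3/r**3)

r_h = zeta * math.sqrt(M) * l
print(f(r_h))      # ~ 0 up to floating-point error

The electric potential is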
Φ= r_h/κ( α_2 + α_1^2 - 3/2α_1ζ - α_1α_2/ζ +α_2^2/3 ζ^2).
The Hawking temperature due to surface gravity is calculated by
T_H = r_h/4 π l^2( 3 - 2 α_1/ζ + α_2/ζ^2).
The Wald entropy is given by <cit.>
S_0= 2 Ω_2π/κ( r_h/l)^2( α_1/ζ - 2 α_2/3 ζ^2),
where Ω_2 refers to the finite volume of the compact planar manifold.
The mass of the black hole in critical gravity with nonlinear electrodynamics has the following expression <cit.>:
ℳ = α_1α_2Ω_2 r_h^3/9 κζ^3 l^4=64 α_1α_2Ω_2π^2 P^2 r_h^3/81 κζ^3.
This is equivalent to the enthalpy of the system. It is well known that the cosmological constant is responsible for the pressure in AdS space, P=-Λ/8 π= 3/8 π l^2. The mass of the black hole in terms of the Wald entropy and charge then reads
ℳ(S_0,Q) =√(6 κ) S_0^3/2( 3ζ^2 -2 α_1ζ + α_2)/12 √(Ω_2)π^3/2√(3 α_1ζ -2 α_2) l ζ + Q^3/2 l^2 ζΨ/9 √(Ω_2)κ,
where Ψ is given by
Ψ= 6 α_2 +6 α_1^2 -9α_1 ζ - 6 α_1 α_2/ζ + 2 α_2^2/ζ^2.
The conjugate volume of the black hole is <cit.>
V=∂M/∂P= 128 α_1α_2Ω_2π^2 P r_h^3/81 κζ^3.
With the above-mentioned thermodynamical quantities, we can compute further properties of the black hole such as internal energy (U), Helmholtz free energy (F), and Gibbs free energy (G). The internal energy is
calculated as
U= ℳ - PV= -α_1α_2Ω_2 r_h^3/9 κζ^3 l^4.
Using the standard definition of Helmholtz free energy, we obtain
F=U-T_HS_0 = -Ω_2 r_h^3 (27 ζ^3α_1 -18 ζ^2α_1^2-18 ζ^2α_2 +23 α_1α_2ζ -6 α_2^2)/18 ζ^4κ l^4 .
Now, it is a matter of calculation to obtain the Gibbs free energy G=ℳ - T_HS_0-Φ Q, which reads
G= -3 Ω_2 r_h^2/2 κζ^4 l^4( α_1 (r_h - 1) ζ^3 - 2(α_1^2 +α_2 ) (r_h - 1) ζ^2/3 +19 (r_h - 18/19) α_1α_2ζ/27 - 2 α_2^2 (r_h + 1)/9).
The specific heat of a black hole plays an important role in the stability of the system. Now, we calculate the specific heat at constant electric potential as
C_Φ = T_H(∂S_0/∂T)_Φ ,
C_Φ = 4 Ω_2π r_h^2 (3 α_1ζ -2 α_2 )/3 κζ^2 l^2.
The temperature and specific heat will be positive if and only if
Ψ_1= (3ζ^2 -2 α_1ζ + α_2) ≥ 0
Ψ_2=(3 α_1ζ -2 α_2 ) ≥ 0.
If T_H,Φ≥ 0, then the above conditions and
Ψ= α_1 α_2/ζ - Ψ_1 Ψ_2/ζ^2≥ 0
hold, where Ψ is defined in equation (<ref>).
The possible solutions for α_1 and α_2 are shown in Fig. <ref>. The allowed region is the one bounded by the red and blue curves in the first octant, excluding the region bounded by the black curve.
§ THERMODYNAMICS WITH FIRST-ORDER CORRECTION
In this section, we study the effect of thermal fluctuations on various thermodynamic parameters of the black hole up to first order. The first-order correction to the entropy was first studied in Ref. <cit.> and is logarithmic in nature. Logarithmic corrections to the thermodynamic entropy arise when small statistical fluctuations around equilibrium are taken into account. The logarithmic corrections to the entropy of the BTZ black hole, the Schwarzschild AdS black hole, and the Reissner-Nordstrom black hole were studied in Ref. <cit.>. The partition function of a thermodynamic system is
Z(β) = ∫_0^∞ρ(E) e^-β E dE,
where β=1/T_H. The density of states at fixed energy can be obtained from the above equation by an inverse Laplace transformation
ρ= 1/2 π i∫_c - i∞^c + i∞ e^𝒮(β) dβ.
The above complex integral can be computed using the steepest-descent method around the saddle point β_0, which gives
𝒮(β)= S_0 + 1/2 (β - β_0)^2 ( ∂^2 𝒮_0/∂β^2)_β_0 + ⋯,
where 𝒮(β) is the exact entropy and S_0 = 𝒮(β_0) is the zeroth-order (equilibrium) entropy. Substituting equation (<ref>) into equation (<ref>), we have
ρ= e^S_0/2 π i∫_c - i∞^c + i∞ e^1/2 (β - β_0)^2 ( ∂^2 𝒮_0/∂β^2 )_β_0 dβ.
Finally, the above integral gives <cit.>
ρ(E)= e^S_0/√(2 π𝒮^''(β_0)).
Therefore, the exact entropy due to thermal fluctuation is
S= lnρ= S_0 - 1/2ln( ∂^2 𝒮_0/∂β^2).
Following <cit.>, and using the fact that ∂^2 𝒮_0/∂β^2 scales as S_0 T_H^2 for black holes whose equilibrium entropy is a power of the temperature, one can write the above entropy as
S = S_0 - 1/2ln(S_0 T_H^2).
Therefore, the entropy receives a correction due to thermal fluctuations. Now, to identify the effect of this correction term on other thermodynamic quantities, we label the 1/2 factor on the R.H.S. of equation (<ref>) by γ. Finally, the corrected entropy becomes
S = S_0 - γln(S_0 T_H^2),
where γ=0 refers to the uncorrected entropy S_0 and γ=1/2 to the corrected entropy of equation (<ref>). Therefore, the dimensionless correction coefficient γ takes only the two values γ=0 and γ=1/2. The correction coefficient γ arises from thermal fluctuations in the equilibrium thermodynamics of black holes: these fluctuations lead to a prefactor in the expression for the density of states of the system, which in turn modifies the entropy of the black hole.
§.§ Corrected entropy
Using the relations (<ref>), (<ref>) and (<ref>), we obtain
S = 2 Ω_2π r_h^2 (α_1ζ -2 α_2/3)/ζ^2κ l^2-γln[ Ω_2 r_h^4 (α_1ζ -2 α_2/3) (3 ζ^2-2 α_1ζ +α_2 )^2/8 πζ^6κ l^6].
The effects of thermal fluctuations on the entropy are depicted in Fig. <ref>: the entropy increases with the horizon radius. Thermal fluctuations increase the entropy of the black hole significantly for small black holes with horizon radius r_h < 0.2, so quantum effects dominate in this regime. For larger black holes with r_h > 0.2, the effects of thermal fluctuations on the entropy are negligible.
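To make the comparison concrete, the following short numerical sketch (ours, with assumed illustrative parameters α_1=2, α_2=1, ζ=2 and κ=l=Ω_2=1; these are not the values used for the figures) evaluates the corrected entropy against the uncorrected one:

import math

alpha1, alpha2, zeta = 2.0, 1.0, 2.0   # assumed couplings
kappa = l = Omega2 = 1.0               # assumed units

def T_H(rh):
    # Hawking temperature
    return rh / (4*math.pi*l**2) * (3 - 2*alpha1/zeta + alpha2/zeta**2)

def S0(rh):
    # uncorrected Wald entropy
    return 2*Omega2*math.pi/kappa * (rh/l)**2 * (alpha1/zeta - 2*alpha2/(3*zeta**2))

def S(rh, gamma=0.5):
    # first-order corrected entropy S = S0 - gamma*ln(S0*T_H^2)
    return S0(rh) - gamma*math.log(S0(rh) * T_H(rh)**2)

for rh in (0.1, 0.2, 1.0, 5.0):
    print(f"r_h={rh}: S0={S0(rh):.3f}, S={S(rh):.3f}")

The logarithm dominates for small r_h (where S_0 is small), while for large r_h the relative correction fades, in line with Fig. <ref>.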
§.§ Corrected mass
Now, we analyze the effect of thermal fluctuations on the total mass (enthalpy) of the black hole. The corrected mass can be evaluated with the help of the following definition:
ℳ_c = ∫ T_H dS.
Here, the corrected entropy replaces the equilibrium entropy. Substituting the Hawking temperature and the corrected entropy from equations (<ref>) and (<ref>), respectively, into equation (<ref>), we have
ℳ_c = (3 ζ^2-2 α_1ζ +α_2 ) r_h (3 πζ r_h^2Ω_2α_1 -9 γζ^2κ l^2-2 π r_h^2Ω_2α_2 )/9 l^4ζ^4πκ.
Now, to do a comparative analysis, we plot the corrected mass and the equilibrium mass with respect to the horizon radius in Fig. <ref>. We see that the mass is an increasing function of the horizon radius. Interestingly, we find that thermal fluctuations decrease the mass slightly but do not change its qualitative behavior.
§.§ Corrected thermodynamic volume of black hole
In this subsection, we study the corrected thermodynamic volume of the black hole as a function of pressure and horizon radius. The thermodynamic volume of an asymptotically AdS black hole is defined as <cit.>
V=( ∂ℳ_c/∂P)_S_0,Q.
To evaluate this expression, we first write the corrected mass in terms of the pressure. Since the pressure is determined by the cosmological constant, the corrected mass (<ref>) can be expressed as
ℳ_c = 64 (3 ζ^2-2 α_1ζ +α_2 ) r_h (3 πζ r_h^2Ω_2α_1 -27 γζ^2κ/8 π P-2 π r_h^2Ω_2α_2 ) π P^2/81 ζ^4κ.
Substituting the value of (<ref>) in Eq. (<ref>), we obtain the corrected thermodynamic volume of black hole as
V_c=8 (3 ζ^2-2 α_1ζ +α_2 ) r_h (48 P π^2ζ r_h^2Ω_2α_1 -32 P π^2 r_h^2Ω_2α_2 -27 γζ^2κ )/81 ζ^4κ,
which further simplifies to
V_c=8 (3 ζ^2-2 α_1ζ +α_2 ) r_h (6 πζ r_h^2Ω_2α_1 -9 γζ^2κ l^2-4 π r_h^2Ω_2α_2 )/27 l^2ζ^4κ.
To study the behavior of the thermodynamic volume and its dependence on thermal fluctuations, we plot Fig. <ref>. We find that the thermodynamic volume of the black hole increases with the horizon radius. Thermal fluctuations decrease the volume, an effect that becomes significant for larger black holes.
§.§ Corrected Helmholtz free energy
In this subsection, we study the correction to the Helmholtz free energy of the AdS black hole due to thermal fluctuations. The Helmholtz free energy is defined by
F_c= U - T_H S.
By plugging the values from Eqs. (<ref>), (<ref>) and (<ref>), the above expression leads to
F_c= r_h (3 ζ^2-2 α_1ζ +α_2 ) [9 γln(Ω_2 r_h^4 (3 α_1ζ -2 α_2 ) (3 ζ^2-2 α_1ζ +α_2 )^2/24 πζ^6κ l^6) ζ^2κ l^2-30 πζ r_h^2Ω_2α_1 +20 π r_h^2Ω_2α_2]/36 l^4ζ^4πκ.
To study the behavior of the Helmholtz free energy and its dependence on thermal fluctuations, we plot Fig. <ref>. From the figure, it is evident that the Helmholtz free energy decreases with the horizon radius. Thermal fluctuations slightly decrease the Helmholtz free energy but do not change its overall behavior.
§.§ Corrected Gibbs free energy
Another important thermal quantity that plays an important role in the discussion of black hole stability is the Gibbs free energy, defined by
G_c = ℳ_c -T_HS_c - Φ Q.
Substituting the expression of ℳ_c, T_H, S, Φ and Q from equations (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>) into above equation, we have
G_c = -(-9 r_hα_1ζ^3+6 r_h (α_1^2+α_2 ) ζ^2-6 r_hα_1α_2ζ -2 r_hα_2^2) Ω_2 r_h/6 κζ^4 l^4
- (3 ζ^2-2 α_1ζ +α_2) [2 Ω_2π r_h^2 (α_1ζ -2 α_2/3)/ζ^2κ l^2-γln(Ω_2 r_h^4 (α_1ζ -2 α_2/3) (3 ζ^2-2 α_1ζ +α_2 )^2/8 πζ^6κ l^6)] r_h/4 l^2ζ^2π
+ (3 ζ^2-2 α_1ζ +α_2) (3 πζ r_h^2Ω_2α_1 -9 ζ^2γκ l^2-2 π r_h^2Ω_2α_2) r_h/9 l^4ζ^4πκ.
To do a comparative analysis of the corrected Gibbs free energy with its equilibrium value, we plot Fig. <ref>. The effect of the correction terms is significant for small black holes, as depicted in Fig. <ref>(a), while the behavior for large horizon radii is shown in Fig. <ref>(b). From the diagram, it is evident that the corrected Gibbs free energy (γ=0.5) starts from zero, reaches a maximum positive value, and then decreases towards negative values. Hence, for larger black holes the Gibbs free energy takes negative values.
§.§ Stability and specific heat
In this subsection, we check the stability of the black hole by evaluating its specific heat, defined by
( C_Φ)_c = T_H∂S/∂T_H=T_H∂S/∂ r_h/∂T_H/ ∂ r_h.
Using equations (<ref>) and (<ref>), the specific heat takes the following form:
( C_Φ)_c = [4 Ω_2π r_h (α_1ζ -2 α_2/3)/ζ^2κ l^2-4 γ/r_h]r_h.
The stability of a black hole requires C_Φ≥ 0. We plot the specific heat against the horizon radius to examine its sign. From Fig. <ref>, we observe that the black hole is stable for γ=0. However, thermal fluctuations destabilize small black holes (r_h<0.13), i.e., black holes with small horizon radii are thermodynamically locally unstable; a transition from negative to positive specific heat occurs at r_h=0.13. Therefore, black holes with horizon radius r_h>0.13 are thermodynamically stable. As the horizon radius increases, thermal fluctuations become ineffective, and the corrected and uncorrected specific heats coincide for r_h ≫ 0.13.
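The sign change is easy to locate from equation (<ref>): setting (C_Φ)_c = 0 gives r_*^2 = γζ^2κ l^2/(Ω_2π(α_1ζ - 2α_2/3)). A small sketch (ours, with the same assumed parameters as above, so r_* differs from the r_h=0.13 obtained for the figure's parameter choice):

import math

alpha1, alpha2, zeta = 2.0, 1.0, 2.0   # assumed couplings
kappa = l = Omega2 = 1.0               # assumed units
gamma = 0.5

def C_corr(rh):
    # corrected specific heat at constant potential
    return 4*Omega2*math.pi*rh**2*(alpha1*zeta - 2*alpha2/3)/(zeta**2*kappa*l**2) - 4*gamma

r_star = math.sqrt(gamma*zeta**2*kappa*l**2 / (Omega2*math.pi*(alpha1*zeta - 2*alpha2/3)))
print(r_star, C_corr(0.9*r_star) < 0 < C_corr(1.1*r_star))   # unstable below, stable above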
§.§ Corrected isothermal compressibility
In this subsection, we study the effects of thermal fluctuations on the isothermal and adiabatic compressibilities. Let us first define the isothermal compressibility of a black hole <cit.>
β_T = -1/V_c( ∂ V_c/∂P)_T.
Using equations (<ref>) and (<ref>), the isothermal compressibility takes the following form:
β_T = -48 π^2ζ r_h^2Ω_2α_1 -32 π^2 r_h^2Ω_2α_2/48 P π^2ζ r_h^2Ω_2α_1 -32 P π^2 r_h^2Ω_2α_2 -27 γζ^2κ.
Now, we plot the isothermal compressibility of the nonlinearly charged AdS black hole in four-dimensional critical gravity in Fig. <ref>. Here, we find that for γ=0 (equilibrium) the isothermal compressibility takes a constant negative value. However, for γ=0.5 (with thermal fluctuations), a phase transition of the isothermal compressibility occurs: it takes a positive value for small black holes and a negative value for large black holes.
The adiabatic compressibility of a black hole is defined as <cit.>
β_S = -1/V_c( ∂ V_c/∂P)_S=0.
Here, we find that the adiabatic compressibility is zero.
The speed of sound of the black hole can be calculated from the formula
v_S^-2= ( ∂ρ/∂P)_S,
where ρ refers to the density of the black hole. This simplifies to
v_s^-2= 1152 P^2 r_h^4 (α_1ζ -2 α_2/3)^2π^4Ω_2^2-1296 ζ^2 P r_h^2 (α_1ζ -2 α_2/3) γπ^2κΩ_2 +729 ζ^4γ^2κ^2/2304 (P r_h^2 (α_1ζ -2 α_2/3) π^2Ω_2 -9 γζ^2κ/16)^2,
where the value of v_s^2 ranges from zero to one. From Fig. <ref>, we find that thermal fluctuations increase the speed of sound for smaller black holes. However, for large black holes (r_h ≫ 1) the speed is constant and the effects of thermal fluctuations are insignificant.
§ VAN DER WAALS BLACK HOLES
In this section, we study the behavior of the charged AdS black hole in nonlinear electrodynamics as a Van der Waals fluid. The Van der Waals equation of state describes real fluids and modifies the ideal-gas equation of state as
( P+a/v^2) (v - b) = T,
where v = V/N is the specific volume of the fluid. In the case of a black hole, N = A/l_p^2 denotes the number of degrees of freedom associated with the black hole horizon. The constant a represents the interaction between the molecules of a given fluid, and the constant b represents the nonzero size of the molecules. The specific volume of the black hole <cit.> is given by
v = 6 V_c/N.
This simplifies to
v = (3 ζ^2-2 α_1ζ +α_2 ) (48 π^2 P ζ r_h^2Ω_2α_1 -32 π^2 P r_h^2Ω_2α_2 -27 γζ^2κ )/36 r_hζ^2π^2 P Ω_2 (α_1ζ -2 α_2/3).
The above equations yield the pressure as
P = -3 γζ^2κ (3 ζ^2-2 α_1ζ +α_2 )/4 r_hΩ_2π^2{ (v -4 r_h ) ζ^2+8 ζ r_hα_1/3-4 r_hα_2/3} (α_1ζ -2 α_2/3) .
We plot the P-v diagram for r_h = 1 and r_h = 10 in Fig. <ref>. We find that when thermal fluctuations are not taken into account the pressure remains zero, but in their presence a phase transition of the black hole pressure occurs from negative to positive values. In Fig. <ref>(a), for r_h = 1, the pressure is negative for specific volume v ≤ 1, a phase transition occurs for v > 1, and the pressure finally becomes positive. A similar behaviour is shown in Fig. <ref>(b) for r_h=10.
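The sign structure can be read off equation (<ref>): the numerator is a negative constant (for γ>0), so P changes sign exactly where the bracket in the denominator crosses zero. A short tabulation (ours, with the assumed illustrative parameters used above):

import math

alpha1, alpha2, zeta = 2.0, 1.0, 2.0   # assumed couplings
kappa = Omega2 = 1.0                   # assumed units
gamma, rh = 0.5, 1.0

def P(v):
    # corrected pressure P(v) of the Van der Waals analysis
    num = -3*gamma*zeta**2*kappa*(3*zeta**2 - 2*alpha1*zeta + alpha2)
    den = (4*rh*Omega2*math.pi**2
           * ((v - 4*rh)*zeta**2 + 8*zeta*rh*alpha1/3 - 4*rh*alpha2/3)
           * (alpha1*zeta - 2*alpha2/3))
    return num / den

for v in (0.5, 1.0, 2.0, 5.0):
    print(f"v={v}: P={P(v):+.4f}")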
Equation (<ref>) then yields the pressure as
P = 3 T ζ^2/2 r_h (3 ζ^2-2 α_1ζ +α_2 ).
Comparing equations (<ref>) and (<ref>), we conclude that a=0, i.e., the black hole mimics ideal-gas behaviour.
§ CONCLUSIONS
In this paper, we have studied the effects of small statistical fluctuations on the equilibrium thermodynamics of nonlinearly charged AdS black holes in four-dimensional critical gravity. To do so, we computed the Hawking temperature of the black hole. The entropy of the black hole acquires an additional first-order term due to thermal fluctuations. With the help of the Hawking temperature and the corrected entropy, we computed a more accurate Helmholtz free energy of the black hole, which takes a negative value. The equilibrium Gibbs free energy is positive for small black holes and, in contrast, negative for larger ones, while the corrected Gibbs free energy is positive. We found that the black hole is stable in the absence of thermal fluctuations; in their presence, small black holes become unstable while large black holes remain stable. Incidentally, the internal energy receives no correction at leading order.
On the other hand, we also computed the corrected isothermal compressibility of the black hole. The equilibrium isothermal compressibility takes a constant negative value. However, when thermal fluctuations are taken into account, the isothermal compressibility is positive for small black holes and negative for large ones.
Finally, we studied the P-v diagram of the black hole and found that the thermodynamic pressure vanishes for the system in equilibrium. However, with a non-vanishing correction parameter, the thermodynamic pressure is negative/positive for small/large (with respect to the specific volume) black holes.
§ ACKNOWLEDGEMENTS
This research was funded by the Science Committee of the Ministry of Science and Higher Education of the Republic of Kazakhstan (Grant No. AP19674521). D.V.S. thanks University Grant Commission for the Start-Up Grant No. 30-600/2021(BSR)/1630.
bekenstein2020black
J. D. Bekenstein, “Black holes and the second law" In: JACOB BEKENSTEIN: The Conservative Revolutionary, World Scientific, 2020, pp. 303–306.
bekenstein1973blackJ. D. Bekenstein, “Black holes and entropy" In: Physical Review D 7.8 (1973), p. 2333.
bardeen1973fourJ. M. Bardeen, B. Carter, and S. W. Hawking. “The four laws of black hole mechanics”. In: Communications
in mathematical physics 31.2 (1973), pp. 161–170.
hawking1974black
S. W. Hawking. “Black hole explosions?” In: Nature 248.5443 (1974), pp. 30–31.
hawking1975particleS. W. Hawking. “Particle creation by black holes”. In: Euclidean quantum gravity. World Scientific, 1975,
pp. 167–188.
hawking1983thermodynamicsS. W. Hawking and D. N. Page. “Thermodynamics of black holes in anti-de Sitter space”. In: Communications
in Mathematical Physics 87.4 (1983), pp. 577–588.
Kaul:2000kfR. K. Kaul and P. Majumdar. “Logarithmic correction to the Bekenstein-Hawking entropy”. In: Phys. Rev. Lett. 84 (2000), pp. 5255–5257.
Carlip:2000nv S. Carlip. “Logarithmic corrections to black hole entropy from the Cardy formula”. In: Class. Quant.
Grav. 17 (2000), pp. 4175–4186.
an1T. Tangphati et al. “Anisotropic Stars in 4D Einstein-Gauss-Bonnet Gravity”. In: Physics of the Dark
Universe 33 (2021), p. 100877.
an2J.M. Z. Pretel, A. Banerjee, and A. Pradhan. “Electrically charged quark stars in 4D Einstein–Gauss–Bonnet
gravity”. In: European Physical Journal C 82.180 (2022), pp. 1–9.
an3T. Tangphati et al. “Anisotropic quark stars in Einstein-Gauss-Bonnet theory”. In: Phys. Lett. B 819
(2021), p. 136423.
an4G. Panotopoulos et al. “Charged polytropic compact stars in 4D Einstein–Gauss–Bonnet gravity”. In:
Chinese Journal of Physics 77 (2022), pp. 2106–2114.
das2002generalS. Das, P. Majumdar, and R. K. Bhaduri. “General logarithmic corrections to black-hole entropy”. In:
Classical and Quantum Gravity 19.9 (2002), p. 2355.
Pourhassan:2015cga
B. Pourhassan and M. Faizal. “Thermal Fluctuations in a Charged AdS Black Hole”. In: EPL 111.4
(2015), p. 40006.
Pourdarvish:2013gfa
A. Pourdarvish et al. “Thermodynamics and Statistics of Goedel Black Hole with Logarithmic Correction”.
In: Int. J. Theor. Phys. 52.10 (2013), pp. 3560–3563.
upadhyay2017quantumS. Upadhyay. “Quantum corrections to thermodynamics of quasitopological black holes”. In: Physics
Letters B 775 (2017), pp. 130–139.
Pourhassan:2017rieB. Pourhassan, H. Farahani, and S. Upadhyay. “Thermodynamics of higher-order entropy corrected
Schwarzschild–Beltrami–de Sitter black hole”. In: Int. J. Mod. Phys. A 34.28 (2019), p. 1950158.
upadhyay2018leadingS. Upadhyay. “Leading-order corrections to charged rotating AdS black holes thermodynamics”. In: General
Relativity and Gravitation 50.10 (2018), pp. 1–13.
pourhassan2018quantumB. Pourhassan et al. “Quantum gravity effects on Hořava–Lifshitz black hole”. In: Nuclear Physics B 928
(2018), pp. 415–434.
pourhassanB. Pourhassan and I. Sakalli. “Non-perturbative correction to the Horava-Lifshitz black hole thermodynamics”. In: Chinese Journal of Physics 79 (2022), pp. 322–338.
upadhyay2018thermalS. Upadhyay et al. “Thermal fluctuations of charged black holes in gravity’s rainbow”. In: Progress of
Theoretical and Experimental Physics 2018.9 (2018), 093E01.
upadhyay2021perturbedS. Upadhyay, S. Soroushfar, and R. Saffari. “Perturbed thermodynamics and thermodynamic geometry
of a static black hole in f (R) gravity”. In: Modern Physics Letters A 36.29 (2021), p. 2150212.
upadhyay2022modifiedS. Upadhyay, N. Ul Islam, and P. A. Ganai. “A modified thermodynamics of rotating and charged BTZ
black hole”. In: Journal of Holography Applications in Physics 2.1 (2022), pp. 25–48.
saraS. Saghafi and K. Nozari. “Shadow behavior of the quantum-corrected Schwarzschild black hole immersed
in holographic quintessence”. In: Journal of Holography Applications in Physics 2.2 (2022), pp. 31–38.
Frolov:1996hdV. P. Frolov, W. Israel, and S. N. Solodukhin. “On one loop quantum corrections to the thermodynamics
of charged black holes”. In: Phys. Rev. D 54 (1996), pp. 2732–2745.
Born:1934jiM. Born. “On the quantum theory of the electromagnetic field”. In: Proc. Roy. Soc. Lond. A 143.849
(1934), pp. 410–437.
Born:1934ghM. Born and L. Infeld. “Foundations of the new field theory”. In: Proc. Roy. Soc. Lond. A 144.852 (1934),
pp. 425–451.
Born:1935apM. Born and L. Infeld. “On the quantization of the new field equations. II”. In: Proc. Roy. Soc. Lond. A
150.869 (1935), pp. 141–166.
Infeld:1936wzoL. Infeld. “The new action function and the unitary field theory”. In: Proc. Cambridge Phil. Soc. 32.1
(1936), pp. 127–137.
Infeld:1937frvL. Infeld. “A new group of action functions in the unitary field theory. II”. In: Proc. Cambridge Phil. Soc.
33.1 (1937), pp. 70–78.
Ayon-Beato:1998hmiE. Ayon-Beato and A. Garcia. “Regular black hole in general relativity coupled to nonlinear electrodynamics”. In: Phys. Rev. Lett. 80 (1998), pp. 5056–5059.
Ayon-Beato:1999qinE. Ayon-Beato and A. Garcia. “Nonsingular charged black hole solution for nonlinear source”. In: Gen.
Rel. Grav. 31 (1999), pp. 629–633.
Ayon-Beato:1999kuhE. Ayon-Beato and A. Garcia. “New regular black hole solution from nonlinear electrodynamics”. In:
Phys. Lett. B 464 (1999), p. 25.
Ayon-Beato:2000mjtE. Ayon-Beato and A. Garcia. “The Bardeen model as a nonlinear magnetic monopole”. In: Phys. Lett.
B 493 (2000), pp. 149–152.
Ayon-Beato:2004ywdE. Ayon-Beato and A. Garcia. “Four parametric regular black hole solution”. In: Gen. Rel. Grav. 37
(2005), p. 635.
Garcia-Diaz:2021baoA. A. Garcia-Diaz. “Stationary Rotating Black Hole Exact Solution within Einstein–Nonlinear Electrodynamics”. ArXiv: 2112.06302 [gr-qc].
DiazGarcia:2022jpc
A. A. Garcia-Diaz. “AdS–dS stationary rotating black hole exact solution within Einstein-nonlinear electrodynamics”. In: Annals Phys. 441 (2022), p. 168880.
Ayon-Beato:2022dwgE. Ayón-Beato. “Unveiling the electrodynamics of the first nonlinearly charged rotating black hole” (Mar. 2022).
alvarez2022thermodynamics A. Álvarez et al. “Thermodynamics of nonlinearly charged anti–de Sitter black holes in four-dimensional
critical gravity”. In: Physical Review D 105.8 (2022), p. 084032.
wald1993blackR. M. Wald. “Black hole entropy is the Noether charge”. In: Physical Review D 48.8 (1993), R3427.
kubizvnak2017blackD. Kubizňák, R. B. Mann, and M. Teo. “Black hole chemistry: thermodynamics with Lambda”. In:
Classical and Quantum Gravity 34.6 (2017), p. 063001.
dolan2011compressibilityB. P. Dolan. “Compressibility of rotating black holes”. In: Physical Review D 84.12 (2011), p. 127503.
rajagopal2014van
A. Rajagopal, D. Kubizňák, and R. B. Mann. “Van der Waals black hole”. In: Physics Letters B 737
(2014), pp. 277–279.
|
http://arxiv.org/abs/2307.03258v1
|
20230706192722
|
Pretty Good Strategies for Benaloh Challenge
|
[
"Wojciech Jamroga"
] |
cs.CR
|
[
"cs.CR"
] |
Interdisc. Centre on Security, Reliability and Trust, SnT, University of Luxembourg
Institute of Computer Science, Polish Academy of Sciences, Warsaw, Poland
wojciech.jamroga@uni.lu
Pretty Good Strategies for Benaloh Challenge
Wojciech Jamroga
August 1, 2023
============================================
Benaloh challenge allows the voter to audit the encryption of her vote, and in particular to check whether the vote has been represented correctly. An interesting analysis of the mechanism has been presented by Culnane and Teague. The authors propose a natural game-theoretic model of the interaction between the voter and a corrupt, malicious encryption device. Then, they claim that there is no “natural” rational strategy for the voter to play the game. In consequence, the authorities cannot provide the voter with a sensible auditing strategy, which undermines the whole idea.
Here, we claim the contrary, i.e., that there exist simple rational strategies that justify the usefulness of Benaloh challenge.
§ INTRODUCTION
Benaloh challenge <cit.> aims to give the voter the possibility to audit the encryption of her vote, and in particular to check whether the vote has been represented correctly.
More precisely, the device that encrypts and sends the ballot must first commit to a representation of the vote given as input. After that, the voter decides whether to cast it or “spoil” it, i.e., open the encryption and check its correctness.
Intuitively, this should reduce the risk of altering the value of the vote by a malfunctioning or corrupt machine when it casts the ballot on the voter's behalf.
An interesting analysis of the mechanism has been presented in <cit.>. The authors propose a natural game-theoretic model of the interaction between the voter and a corrupt, malicious encryption device. Then, they claim that there is no “natural” rational strategy for the voter to play the game, where rational play is defined in terms of Nash equilibrium <cit.>. More precisely, they claim that: (1) only randomized voting strategies can form a Nash equilibrium, (2) for audit sequences with bounded length, the voter gets cheated in all Nash equilibria, and (3) the Nash equilibria in the infinite game do not form an easy pattern (e.g., Bernoulli trials).
In consequence, the voter cannot be provided with a sensible auditing strategy, which undermines the whole method.
In this paper, we claim that – on the contrary – there exist simple auditing strategies that justify the usefulness of Benaloh challenge. This follows from four important observations.
First, we show that there are Nash equilibria in bounded strategies where the voter casts her intended vote with high probability.
Based on this observation, we focus on a small subset of randomized strategies, namely the ones where the voter spoils the ballot with probability p in the first round, and in the second round always casts.
Secondly, we point out that the rationality of strategies in Benaloh challenge is better captured by Stackelberg equilibrium <cit.>, rather than Nash equilibrium.
Thirdly, a sensible Stackelberg strategy does not have to be optimal; it suffices that it is “good enough” for whatever purpose it serves.
Fourthly, we prove that the generalized Stackelberg equilibrium in the set of such strategies does not exist, but the voter can get arbitrarily close to the upper limit of the Stackelberg payoff. To show this, we formally define the concept of Stackelberg value, and show that it is always higher than the value of Nash equilibrium in the set of randomized strategies for the voter.
Related work.
Game-theoretic analysis of voting procedures that takes into account the economic or social incentives of the participants has been scarce.
In <cit.>, two voting systems were compared using zero-sum two-player games based on attack trees, with the payoffs representing the success of coercion.
In <cit.>, a simple game-theoretic model of preventing coercion was proposed and analyzed using Nash equilibrium, maxmin, and Stackelberg equilibrium.
The authors of <cit.> applied Stackelberg games to prevent manipulation of elections, focussing on the computational complexity of preventing Denial of Service attacks.
The research on security games <cit.><cit.>, which uses Stackelberg equilibrium to design anti-terrorist and anti-poaching policies, is of some relevance, too.
§ BENALOH CHALLENGE AND BENALOH GAMES
We start by a brief introduction of Benaloh challenge. Then, we summarize the game-theoretic analysis of the challenge, proposed in <cit.>.
§.§ Benaloh Challenge
Benaloh challenge <cit.> is a “cut-and-choose” technique for voter-initiated encryption audits, which proceeds as follows:
* An empty ballot is generated and provided to the voter.
* The voter fills in the ballot and transmits it to the encryption device;
* The device encrypts the ballot with the election public key, and makes the encrypted vote available to the voter;
* The voter decides to cast the encrypted vote, or to open and audit the encryption. If the encryption is opened, the ballot is discarded, and the voter proceeds back to step <ref>.
Benaloh challenge is meant to counter the threat of a malicious encryption device that falsely encrypts the ballot, e.g., in favor of another election candidate.
Importantly, this should be done without compromising receipt-freeness of the voting protocol.
In a broader perspective, the challenge can be applied in any communication scenario where the encryption mechanism is not trustworthy and plausible deniability is required on the side of the sender.
The idea behind the technique is that, if the voters audit the encryptions from time to time, corrupt devices will be exposed and investigated.
Thus, it does not pay off to tamper with the encryption in the long run, and the perpetrator would have little incentive to do that.
At its core, this is a game-theoretic argument.
§.§ Benaloh Challenge as Inspection Game
Intuitively, the interaction in Benaloh challenge can be seen as a game between the voter V and the encryption device D – or, more accurately, between the voter and the malicious party that might have tampered with the device. We will use the term Benaloh game to refer to this aspect of Benaloh challenge.
In each round, the voter can choose between casting her intended vote (action cast) and auditing the encryption (action audit). At the same time, the device chooses to either encrypt the vote truthfully (action true) or cheat and encrypt another value of the vote (action false).
Both players know exactly what happened in the previous rounds, but they decide what to do without knowing what the other player has selected in the current round.
A very interesting analysis has been presented by Chris Culnane and Vanessa Teague in <cit.>. The authors model the interaction as an inspection game, i.e., a non-cooperative game where one player verifies if the other party adheres to a given requirement – typically, a legal rule <cit.>.
The idea is very simple: V chooses the round t_V in which she wants to cast the vote, and D chooses the round t_D when it will fake the encryption for the first time.
Consequently, the voter's plan is to audit the encryption in all rounds n < t_V, and similarly the device encrypts truthfully for all n < t_D.
The players choose their strategies before the game, without knowing the opponent's choice.
Their payoffs (a.k.a. utilities) are presented in Figure <ref>, with the parameters interpreted as follows:
* rew_i: the reward of player i for succeeding with their task (i.e., casting the vote as intended for V, and manipulating the vote for D);
* pen_i: player i's penalty for failing (i.e., getting cheated for V, and getting caught cheating for D);
* c: the cost of a single audit; essentially, a measure of the effort and time that V needs to invest into encrypting and spoiling a spurious ballot.
It is assumed that rew_i, pen_i, c > 0. Also, c < pen_V, i.e., the voter cares about what happens with her vote enough to audit at least once.
There are two variants of the game: finite, where the number of rounds is bounded by a predefined number T ∈ ℕ, and infinite, where the game can proceed forever.
In the finite variant, the voter chooses t_V ∈ {1,…,T}, and the device selects t_D ∈ {1,…,T,∞}, with t_D = ∞ meaning that it always encrypts truthfully and never cheats. In the infinite variant, the voter and the device choose respectively t_V ∈ ℕ and t_D ∈ ℕ ∪ {∞}.
The structure of the game is common knowledge among the players.
Discussion.
One might consider a slightly richer game by allowing the voter to refuse participation (t_V=0) or to keep auditing forever (t_V=∞). Also, we could include a reward that the voter gets when detecting an attack and reporting it to the authorities. In this paper, we stick to the game model of <cit.>, and leave a proper analysis of the richer game for the future.
§.§ Are There Simple Rational Strategies to Cast and Audit?
Culnane and Teague make the following claims about their model (and, by implication, about the game-theoretic properties of Benaloh challenge):
*
There is no Nash equilibrium in deterministic strategies <cit.>. Thus, a rational voter must use randomized strategies in Benaloh challenge.[
A concise explanation of game-theoretic terms is presented in Sections <ref> and <ref>. ]
*
A Nash equilibrium in the finite Benaloh game can only consist of the voter casting right away and the device cheating right away; the argument proceeds by backward induction <cit.>. Thus, by <cit.>, there are no Nash equilibria in the finite Benaloh game, and a rational voter should use infinite audit strategies.
*
In the infinite Benaloh game, there is no Nash equilibrium in which the voter executes a Bernoulli process, i.e., randomizes in each round with the same probability r whether to audit or cast <cit.>.
Quoting the authors, “this prevents authorities from providing voters with a sensible auditing strategy.”
In other words, there are no “easy to use” rational strategies for the voter in Benaloh challenge.
The above claims have two controversial aspects: a technical one and a conceptual one.
First, while claims (<ref>) and (<ref>) are correct, claim (<ref>) is not.
By Nash's theorem <cit.>, every finite game has a Nash equilibrium in randomized strategies, and this one cannot be an exception.
We look closer at the issue in Section <ref>, show why backward induction does not work here, and demonstrate that a clever election authority can design the procedure so that the voters do have a simple Nash equilibrium strategy to cast and audit.
Secondly, the authors of <cit.> implicitly assume that “sensible strategies” equals “simple Nash equilibrium strategies.”
As we discuss in Section <ref>, Nash equilibrium is not the only concept of rationality that can be applied here.
In fact, Stackelberg equilibrium <cit.><cit.> is arguably a better fit for the analysis of Benaloh challenge.
Following the observation, we prove that generalized Stackelberg equilibrium <cit.> for the voter in the set of randomized strategies does not exist, but V can get arbitrarily close to the upper limit of the Stackelberg payoff function. Moreover, there is always a Bernoulli strategy for the voter whose Stackelberg value is higher than the payoff in Nash equilibrium.
In sum, Stackelberg games better capture rational interaction in Benaloh challenge, provide the voter with simple strategies, and obtain higher payoffs for V than Nash equilibria.
§ INTERMEZZO: GAME THEORY PRIMER, PART ONE
Here, we present a compressed summary of the relevant game-theoretic notions.
For a detailed introduction, see e.g. <cit.>.
Strategic games.
A strategic game consists of a finite set of players (or agents), each endowed with a finite set of actions.
A tuple of actions, one per player, is called an action profile. The utility function u_i(α_1,…,α_n) specifies the utility (often informally called the payoff) that agent i receives after action profile (α_1,…,α_n) has been played.
In the simplest case, we assume that each player plays by choosing a single action. This kind of choice represents a deterministic strategy (also called pure strategy) on the part of the agent.
The payoff table of an example strategic game is shown in Figure <ref>. Two players, Alice and Bob, decide in parallel whether to go to the local bar or to the theater. The strategies and utilities of Bob are set in grey for better readability.
Rationality assumptions.
The way rational players choose their behaviors is captured by solution concepts, formally represented by a subset of strategies or strategy profiles.
In particular, Nash equilibrium (NE) selects those strategy profiles σ which are stable under unilateral deviations, i.e., no player i can improve its utility by changing its part of σ while the other players stick to their choices.
Equivalently, σ is a Nash equilibrium if each σ_i is a best response to the choices of the other players in σ.
In our example, (theater,theater) is the only Nash equilibrium.
Another solution concept (Stackelberg equilibrium) will be introduced in Section <ref>.
Multi-step games.
To model multi-step interaction, we use concurrent extensive form games, i.e., game trees where the players proceed in rounds, and choose their actions simultaneously in each round.
The agents' payoffs are defined for each play, i.e., maximal path from the root to a leaf of the tree.
A multi-step variant of the Battle of the Sexes, where Alice and Bob first veto-vote on whether to go out and then decide on where to go, is shown in Figure <ref>.
In such games, a deterministic strategy of player i is a conditional plan that maps the nodes in the tree to i's actions.
Each strategy profile determines a unique play.
Nash equilibrium is defined analogously to strategic games.
Additionally, σ is a subgame-perfect Nash equilibrium (SPNE) if it is a Nash equilibrium in each subtree obtained by fixing another starting point for the game.
Backward induction eliminates choices that are weakly dominated, i.e., ones for which there is another choice obtaining a better vector of payoffs.
Backward induction preserves subgame-perfect Nash equilibria, and can be used to reduce the game tree if the agents are assumed to play SPNE.
For example, Alice's strategy bar obtains the payoff vector ⟨3,1⟩, while theater obtains ⟨4,2⟩.
Thus, the former strategy is dominated by the latter, and can be removed from the game tree.
Randomized play.
Randomization makes it harder for the opponents to predict the player's next action, and to exploit the prediction.
Moreover, Nash equilibrium is guaranteed to exist for randomized strategy profiles (Nash's theorem <cit.>), whereas no such guarantee applies to pure strategies.
In multi-step games, players can randomize in two ways.
A mixed strategy for player i is represented by a probability distribution over the pure strategies of i, with the idea that the player randomizes according to that distribution, and then duly executes the selected multi-step strategy.
A behavioral strategy assigns to each game node a probability distribution over the actions of i, with the idea that i randomizes freshly before each subsequent move.
By Kuhn's theorem, every mixed strategy has an outcome-equivalent behavioral strategy <cit.> and vice versa <cit.> in games with perfect recall (i.e., ones where players never forget what they have observed).
Note that deterministic strategies can be seen as a special kind of randomized strategies that use only Dirac distributions, i.e., s_i(α) = 1 for some action α. In that case we will write s_i = α as a shorthand.
§ BENALOH ACCORDING TO NASH
In this section, we look closer at the claims of <cit.>.
§.§ Deterministic Audit Strategies in Benaloh Games
The first claim of Culnane and Teague is that Benaloh games have no Nash equilibrium where the voter plays deterministically <cit.>. This is indeed true.
To see that, consider any strategy profile (t_V, s_D) where V deterministically chooses a round t_V to cast her vote, and D chooses according to a probability distribution s_D.
If s_D ≠ t_V, then the device increases its payoff by responding with s_D = t_V, i.e., cheating with probability 1 at round t_V; hence, (t_V, s_D) is not a Nash equilibrium.
Conversely, if s_D = t_V, then the voter increases her payoff by changing her mind and casting at round t_V - 1 (if t_V > 1) or at round t_V + 1 (otherwise); hence (t_V, s_D) is not a Nash equilibrium either.
Ultimately, V must use randomized strategies, so that D cannot precisely predict in which round the vote will be cast.
§.§ The Rise and Fall of Backward Induction
Now, we turn to randomized voting strategies in Benaloh games with finite horizon T.
It was claimed in <cit.> that all V's strategies where the voter does not cast immediately cannot be part of a Nash equilibrium.
The argument goes by backward induction: D knows that V must cast in round n=T, so it can safely cheat in that round.
Thus, the voter should cast in rounds 1,…,T-1 to avoid being cheated, in which case the device can actually safely cheat in round T-1, and so on. Unfortunately (or fortunately from the voters' point of view), the argument is incorrect.
To begin with, backward induction cannot be applied to games in strategic form nor to inspection games; it requires a proper representation of the sequential nature of the game.
We propose the concurrent EF game in Figure <ref> as a model of Benaloh challenge with horizon T.
Each level in the game tree corresponds to a subsequent round of the game. The players choose their actions simultaneously; if V casts, or V audits and D submits a false encryption, then the game ends and the payoffs are distributed. If V audits and D encrypts truthfully, the game proceeds to the next round. At n=T, the voter can only cast.
Let us start with the final round of the procedure (i.e., the lowest level in the tree). D has two available choices: true and false, promising the device payoffs of 0 and rew_D, respectively. Indeed, the choice to encrypt truthfully is dominated and can be removed from the tree, leaving only the right-hand branch. We can also propagate the payoffs from the remaining leaf to its parent (i.e., -(T-1)c - pen_V for V, and rew_D for D).
Consider now the second-to-last level of the tree. Again, the device has two choices: true promising the payoff vector ⟨0, rew_D⟩, and false promising ⟨rew_D, -pen_D⟩. It is easy to see that none of them dominates the other: false works strictly better if the opponent decides to cast, whereas true obtains a better payoff if the opponent does audit.
Also the voter has now two available choices: cast with the payoff vector ⟨-(T-2)c + rew_V, -(T-2)c - pen_V⟩ and audit with ⟨-(T-1)c - pen_V, -(T-1)c⟩. Clearly, the former vector obtains a better payoff in the first dimension, but a strictly worse one in the second. Thus, no choice of the voter is dominated.
Since we cannot eliminate any choices, the backward induction stops already at that level.
Why is the intuitive argument in <cit.> wrong? After all, if the voter assigns a positive probability p to auditing in round T-1, she knows she will be cheated (in the final round) with exactly that probability. The problem is, if she sets p = 0, she is sure to get cheated right away! Thus, the voter should use p to keep the opponent uncertain about her current action, which is the usual purpose of randomizing in strategies.
§.§ Mixed Nash Equilibria in Finite Benaloh Games
We know from Section <ref> that backward induction does not eliminate randomized audit strategies in finite Benaloh games.
The next question is: what Nash equilibria do we obtain?
We start with mixed strategies, i.e., ones represented by probability distributions
s_V = [p^V_1,⋯,p^V_T]
and s_D = [p^D_1,⋯,p^D_T,p^D_∞], where
p^V_n is the probability that the voter casts her vote in round n, and
p^D_n is the probability that the device cheats for the first time in round n.
Support sets of Nash strategies.
First, observe that there are no subgames outside of the main path in the game tree. Thus, all Nash equilibria are subgame perfect.
Moreover, backward induction eliminates the possibility that the device encrypts truthfully in the last round, hence p^D_∞=0 in any Nash equilibrium. Consequently, we can represent s_D by [p^D_1,⋯,p^D_T].
Secondly, all the other probabilities must be nonzero, see the following lemma.[
The proofs of the formal results can be found in the extended version of the paper <cit.>. ]
Lemma.
If s_V = [p^V_1,⋯,p^V_T] and s_D = [p^D_1,⋯,p^D_T] form a Nash equilibrium, then
for all i=V,D and n=1,…,T we have p^i_n > 0.
Calculating the audit probabilities.
We compute p_1^V,…,p_T^V using the standard necessary condition for Nash equilibrium in mixed strategies <cit.>.
If (s_V,s_D) is a Nash equilibrium with p^V_n > 0 and p^D_n > 0 for all n=1,…,T, then the following conditions must hold:
* Every deterministic strategy of V obtains the same payoff against s_D; in other words:
∀ t,t' ∈ {1,…,T} . u_V(t,s_D) = u_V(t',s_D)
* Every deterministic strategy of D obtains the same payoff against s_V; in other words:
∀ t,t' ∈ {1,…,T} . u_D(s_V,t) = u_D(s_V,t')
Consider condition (<ref>). Using the payoffs in Figure <ref>, we get:
Lemma.
If s_V = [p^V_1,⋯,p^V_T] is part of a Nash equilibrium, then
p_n+1^V = rew_D/(rew_D+pen_D) · p_n^V
for every n∈{1,…,T-1}.
Theorem.
The mixed voting strategy s_V = [p^V_1,⋯,p^V_T] is part of a Nash equilibrium iff, for every n∈{1,…,T}:
p_n^V = (1-R)R^{n-1}/(1-R^T), where R = rew_D/(rew_D+pen_D).
Indeed, the mixed equilibrium strategy s_V provides no simple recipe for the voter. This is evident when we consider concrete payoff values.
Take T=5 and assume rew_D = 1, pen_D = 4, i.e., the opponent fears failure four times more than he values success.
Then, R=0.2, and hence s_V = [0.8, 0.16, 0.032, 0.006, 0.001] is the unique equilibrium strategy for the voter.
In other words, the voter should cast immediately with probability 0.8, audit once and cast in round 2 with probability 0.16, and so on.
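The distribution is easy to reproduce; here is a quick Python check (ours) of the theorem for these values:

T, rew_D, pen_D = 5, 1.0, 4.0
R = rew_D / (rew_D + pen_D)                        # R = 0.2
p = [(1 - R) * R**(n - 1) / (1 - R**T) for n in range(1, T + 1)]
print([round(x, 3) for x in p])                    # -> [0.8, 0.16, 0.032, 0.006, 0.001]
print(round(sum(p), 6))                            # -> 1.0, a proper distribution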
§.§ Towards Natural Audit Strategies
So far, we have considered mixed strategies for the voter.
That is, the voter draws before the game according to the probability distribution s_V, and then duly follows the outcome of the draw.
An alternative is to use a behavioral strategy b_V = (b_1^V, …, b_T^V), where the voter performs a fresh Bernoulli-style lottery with success probability b_n^V in each subsequent round. If successful, she casts her vote; otherwise, she audits and proceeds to the next round.
Behavioral Nash equilibria.
First, we observe that the game in Figure <ref> is a game of perfect recall, i.e., the players remember all their past observations (in our case, the outcomes of all the previous rounds). Thus, by Kuhn's theorem, mixed and behavioral strategies are outcome-equivalent. In other words, the same outcomes can be obtained if the players randomize before the game or throughout the game.
Below, we characterize the behavioral strategy that corresponds to the mixed strategy of Theorem <ref>.
Theorem.
The behavioral voting strategy b_V = [b^V_1,⋯,b^V_{n_max}] is a part of a Nash equilibrium iff, for every n∈{1,…,n_max}:
b_n^V = (1-R)/(1-R^{n_max-n+1}), where R = S/(S+F).
The behavioral strategy implementing s_V = [0.8, 0.16, 0.032, 0.006, 0.001] of Example <ref> is b_V = [0.8, 0.801, 0.81, 0.83, 1].
That is, the voter casts immediately with probability 0.8, else audits, randomizes again, and casts with probability 0.801, and so on.
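The Kuhn correspondence between the two representations can also be checked mechanically: b_n^V is the probability of casting in round n conditioned on having audited in all earlier rounds. The snippet below is a minimal sketch of ours, not taken from the paper.

```python
# Behavioral NE strategy (Theorem above): b_n = (1-R) / (1 - R**(n_max - n + 1)),
# checked against Kuhn's relation p_n = (1 - b_1)...(1 - b_{n-1}) * b_n.
n_max, S, F = 5, 1, 4
R = S / (S + F)

b = [(1 - R) / (1 - R ** (n_max - n + 1)) for n in range(1, n_max + 1)]
print([round(x, 3) for x in b])  # [0.8, 0.801, 0.806, 0.833, 1.0], i.e., the example above up to rounding

stay = 1.0                       # probability of having audited in all rounds so far
for n in range(n_max):
    p_n = stay * b[n]            # probability of casting exactly in round n + 1
    assert abs(p_n - (1 - R) * R ** n / (1 - R ** n_max)) < 1e-12
    stay *= 1 - b[n]
```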
Behavioral audit strategies are reasonably simple.
At first glance, the above behavioral strategy seems difficult to execute, too. We cannot expect the voter to randomize with probability exactly 0.8, then exactly 0.801, etc.
On the other hand, b_V can be approximated reasonably well by the following recipe: “in each round before , cast with probability close to 0.8, otherwise audit, randomize freshly, and repeat; in the last round, cast with probability 1.”
This can be generalized due to the following observation.
In Benaloh games, we can usually assume that F ≫ S.
First of all, it is important to realize that the opponent of the voter is not the encrypting device, but a human or organizational perpetrator represented by the device. To be more precise, the strategies in the game are defined by the capabilities of the device, but the incentives are those of the perpetrator. Thus, the utility values defined by u_D should not be read as “the payoffs of the device,” but rather the utilities of the external party who rigged the device in order to achieve some political, social, or economic goals.
Secondly, the scope of the opponent's activity is not limited to the interaction with a single voter and to corrupting a single encryption device. Presumably, they must have tampered with multiple devices in order to influence the outcome of the vote.
Consequently, the opponent is in serious trouble if even few devices are caught cheating. This is likely to attract attention and trigger investigation, which may lead to an audit of all the encryption devices, revision or voiding of the votes collected from those that turned out corrupt, and even an arrest and prosecution of the perpetrator.
All in all, the penalty for fraud detection (F) is usually much higher than the reward for a successful swap of a single vote (S).
Theorem.
If S/F → 0, then the equilibrium strategy b_V of the voter converges to the following behavioral strategy:
b_n^V = F/(S+F) for n < n_max, and b_n^V = 1 for n = n_max.
The finite Bernoulli strategy to audit with probability R = S/(S+F) in each round except the last seems reasonably simple.
By Theorem <ref>, it is also reasonably close to the unique Nash equilibrium.
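To see how quickly this convergence kicks in, one can compare the exact equilibrium of the theorem above with the limiting Bernoulli strategy for shrinking S/F. The snippet below is our illustration, under the same payoff conventions as above.

```python
# Exact behavioral NE vs. the limiting Bernoulli strategy of the theorem above.
def exact_b(n_max, S, F):
    R = S / (S + F)
    return [(1 - R) / (1 - R ** (n_max - n + 1)) for n in range(1, n_max + 1)]

n_max = 5
for S, F in [(1, 4), (1, 40), (1, 400)]:
    print(round(F / (S + F), 4), [round(x, 4) for x in exact_b(n_max, S, F)])
# as S/F -> 0, every b_n with n < n_max approaches F / (S + F), and b_5 = 1
```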
Making things even simpler for the voter.
In order to make Benaloh challenge even easier to use, the voting authority can set n_max accordingly. In particular, it can fix n_max=2, i.e., allow the voter to audit at most once. That does not seem very restrictive, as empirical evidence suggests that voters seldom audit their votes <cit.>, and even fewer are able to complete it correctly <cit.>.[
In fairness, there is also some evidence that suggests the contrary <cit.>. ]
The Benaloh game in strategic form for n_max=2 is shown in Figure <ref>a.
Theorem.
For n_max = 2, the behavioral NE strategy of the voter is:
b_1^V = (S+F)/(2S+F),
b_2^V = 1.
To make the analysis intuitive, consider the concrete values in Example <ref>.
Take S = 1, F = 4.
By Theorem <ref>, the behavioral Nash equilibrium strategy of the voter is b_V = [5/6, 1].
That is, the voter casts immediately with probability 5/6, otherwise audits and casts in the next round – which is a rather simple strategy.
Also, recall our argument that, typically, F ≫ S.
In that case, b_1^V becomes close to 1. In other words, the voter should almost always cast immediately, which is a very simple recipe to follow.
Thus, contrary to what Culnane and Teague claim in <cit.>, Benaloh challenge can be designed in a way that admits simple Nash equilibrium strategies of the voter.
§.§ Behavioral Audit Strategies are Simple Enough, But Are They Good Enough?
We have just seen that finite Benaloh games do allow for simple and easy to use Nash equilibrium strategies.
This seems good news, but what kind of utility do they promise for the voter? That is, how much will the voter benefit from playing NE in Benaloh challenge?
For easier reading, we calculate the answer on our running example.
Following Example <ref>, we take n_max=2, S = 1, F = 4.
Moreover, we assume g=2, l=3, c=1, i.e., the voter gains g=2 by casting successfully, loses l=3 by getting cheated (slightly more than she gains), and pays the audit cost c=1, half of the gain from a successful vote. The resulting payoff table is presented in Figure <ref>b.
We can now compute the Nash equilibrium strategy of the device using Lemma <ref> and Condition <ref> of Section <ref>.
Consequently, we get
-3 p_1^D + 2 (1-p_1^D) = -p_1^D -4(1-p_1^D), and thus s_D = [3/4, 1/4].
Recall that the NE strategy of the voter is s_V = [5/6, 1/6].
This yields the following expected payoffs of the players:
u_V(s_V,s_D) = -3·(15/24) + 2·(5/24) - 1·(3/24) - 4·(1/24) = -42/24 = -7/4,
u_D(s_V,s_D) = 1·(15/24) + 0·(5/24) - 4·(3/24) + 1·(1/24) = 4/24 = 1/6.
So, the voter gets negative expected utility, and would be better off by not joining the game at all!
If that is the case, then a considerate election authority should forbid electronic voting not because there are no simple NE strategies to audit and vote, but because there is one and it is bad for the voter.
The big question is: does Nash equilibrium really provide the right solution concept for rational interaction in Benaloh challenge? We discuss this in Section <ref>.
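Before moving on, here is a mechanical check of the numbers above. Since Figure <ref>b itself is not reproduced in this text, the payoff entries in the sketch below are read off from the displayed computation; with that caveat, the fragment (ours) recomputes both equilibrium payoffs exactly.

```python
from fractions import Fraction as Fr

g, l, c = Fr(2), Fr(3), Fr(1)   # voter: gain, loss when cheated, audit cost
S, F = Fr(1), Fr(4)             # device: reward for a swap, penalty when caught

# payoff tables for n_max = 2: rows = voter casts in round 1/2,
# columns = device cheats in round 1/2 (entries inferred from the text)
u_V = [[-l, g], [-c, -(c + l)]]
u_D = [[S, 0], [-F, S]]

p_V = (S + F) / (2 * S + F)     # voter's NE probability of casting in round 1: 5/6
p_D = Fr(3, 4)                  # device's NE probability, from the indifference equation

dist = [[p_V * p_D, p_V * (1 - p_D)], [(1 - p_V) * p_D, (1 - p_V) * (1 - p_D)]]
EV = sum(dist[i][j] * u_V[i][j] for i in range(2) for j in range(2))
ED = sum(dist[i][j] * u_D[i][j] for i in range(2) for j in range(2))
print(EV, ED)                   # -7/4 1/6
```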
§ BENALOH ACCORDING TO STACKELBERG
Nash equilibrium encodes a particular view of rational decision making.
In this section, we discuss its applicability to Benaloh games, suggest that Stackelberg equilibrium is a much better match, and analyze Benaloh challenge through the lens of Stackelberg games.
§.§ Game-Theoretic Intermezzo, Part Two
Every solution concept encodes its own assumptions about the nature of interaction between players and their deliberation processes.
The assumptions behind Nash equilibrium in 2-player games can be characterized as follows <cit.>:
* Alice and Bob have common belief that each of them plays best response to one another, and
* Alice believes that Bob has an accurate view of her beliefs, and that Bob believes that Alice has an accurate view of his beliefs,
* ...and analogously for Bob.
Alternatively, NE can be characterized as a local optimum of strategy search with mutual adaptations.
Informally, it represents collective behaviors that can emerge when the agents play the game repeatedly, and adapt their choices to what they expect from the other agents.
Thus, it captures the “organic” emergence of behavior through a sequence of strategy adjustments that leads to a point where nobody is tempted to change their strategy anymore.
Is Nash equilibrium the right concept of rationality for Benaloh games? Note that the characterizations of NE are inherently symmetric. In particular, they assume that both players are able to form accurate beliefs about each other's intentions.
This is not the case in Benaloh challenge. In line with the arguments of <cit.>, the perpetrator has significant technological and motivational advantage over an average voter. For example, he can use opinion polls and statistical methods to get a good view of the voter's preferences.
Even more importantly, machine learning techniques can be used to profile the frequencies with which the voter chooses to audit or cast.
On the other hand, the voter has neither data nor resources to form accurate predictions w.r.t. the strategy of the encryption device.
This seems pretty close to the Stackelberg model of economic interaction.
Stackelberg equilibrium.
Stackelberg games <cit.><cit.> represent interaction where the strategy of one player (called the leader) is known in advance by the other player (the follower).
The follower is assumed to play best response to that strategy.
The generalized Stackelberg equilibrium (SE) <cit.> prescribes the leader's strategy that maximizes the guaranteed payoff against the follower's best responses.
We define and analyze SE for Benaloh games in Section <ref>.
§.§ Pretty Good Strategies against Best Response
For simplicity, we assume that n_max=2 throughout this section, i.e., the voter can audit the encryption at most once.
Thus, the strategy of the voter can be represented by the probability p^V of casting the vote in the first round.
Similarly, the strategy of the device can be represented by the probability p^D of cheating in the first round.
We first establish D's best response to any fixed p^V and the voter's guaranteed expected utility against best response.
These can be formally defined as follows.
The best response of D, given V's strategy represented by p^V, returns those strategies p^D for which the expected value of u_D(p^V,p^D) is maximal:
BR_D(p^V) = argmax_{p^D∈[0,1]} (E u_D(p^V,p^D)).
Note that a best response always exists, though it does not have to be unique.
The generalized Stackelberg equilibrium for V is defined as the strategy that maximizes V's expected payoff against best response.
In case of multiple best responses to some p^V, we look at the worst case scenario.
SE_V = argmax_{p^V∈[0,1]} inf_{p^D∈ BR_D(p^V)} (E u_V(p^V,p^D)).
For randomized strategies of the leader, the Stackelberg equilibrium does not have to exist (cf. Example <ref>).
To characterize the leader's abilities in such games, we propose the notion of Stackelberg value.
The Stackelberg value val for V is the expected guaranteed payoff that V can obtain against best response in the limit:
val = sup_{p^V∈[0,1]} inf_{p^D∈ BR_D(p^V)} (E u_V(p^V,p^D)).
Clearly, val is always well defined.
Moreover, the game has a Stackelberg equilibrium if V obtains the Stackelberg value for some strategy.
Finally, for each ϵ>0, the voter has a strategy that ϵ-approximates the Stackelberg value, i.e., obtains at least val - ϵ against best response.
Lemma.
The best response of the device to any fixed strategy of the voter is
BR_D(p^V) = 0 for p^V < p^V_NE; 1 for p^V > p^V_NE; and any p^D∈[0,1] for p^V = p^V_NE,
where p^V_NE = (S+F)/(2S+F) is the NE probability of casting in round 1.
Lemma.
The voter's expected utility against best response is:
Eu_V(p^V,BR_D(p^V)) = g·p^V - (c+l)·(1-p^V) for p^V < p^V_NE, and Eu_V(p^V,BR_D(p^V)) = -l·p^V - c·(1-p^V) for p^V ≥ p^V_NE.
The graph of Eu_V(p^V,BR_D(p^V)) for the parameters in Example <ref>
(i.e., n_max=2, S=1, F=4, g=2, l=3, c=1)
is depicted in Figure <ref>.
It is easy to see that the function does not reach its optimum, and hence the optimal p^V against best response does not exist.
Still, the strategies based on p^V being slightly smaller than the Nash equilibrium strategy p^V_NE = 5/6 are quite attractive to the voter, since they obtain payoff that is both positive and strictly higher than the Nash payoff.
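The discontinuity is easy to reproduce numerically: in the illustrative sketch below (ours, with the parameter values of the running example), the payoff against best response climbs towards 1 as p^V approaches 5/6 from below, and then drops once the device switches to cheating in round 1.

```python
g, l, c, S, F = 2, 3, 1, 1, 4
p_ne = (S + F) / (2 * S + F)            # = 5/6

def u_against_br(p):
    if p < p_ne:                        # device best-responds by cheating in round 2
        return g * p - (c + l) * (1 - p)
    return -l * p - c * (1 - p)         # device best-responds by cheating in round 1

for p in [0.80, 0.83, 0.8333, p_ne, 0.85]:
    print(round(p, 4), round(u_against_br(p), 4))
# 0.8 -> 0.8, 0.83 -> 0.98, 0.8333 -> 0.9998, 5/6 -> -2.6667, 0.85 -> -2.7
```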
The next and final theorem generalizes the example to arbitrary two-round Benaloh games.
It shows that the voter has no optimal Stackelberg strategy in the game (point <ref>),
but the value val = ((g-c-l)·S + g·F)/(2S+F) can be approximated arbitrarily closely (point <ref>).
That is, for each ϵ>0, the voter has a strategy that obtains at least val - ϵ against best response.
Moreover, ϵ-approximating Stackelberg equilibrium is strictly better than playing Nash equilibrium (point <ref>).
Lastly, approximate Stackelberg strategies obtain positive utility for the voter under reasonable assumptions (point <ref>).
Theorem.
The following properties hold for the Benaloh game with n_max=2:
*
There is no Stackelberg equilibrium for V in randomized strategies.
*
The Stackelberg value of the game is
val = ((g-c-l)·S + g·F)/(2S+F).
*
val > Eu_V(p^V_NE,p^D_NE), where (p^V_NE,p^D_NE) is the Nash equilibrium.
*
If F ≫ S and g ≥ a for a fixed a>0, then val > 0.
Thus, Stackelberg games capture the rational interaction in Benaloh games better than Nash equilibrium, and predict strictly higher payoffs for the voter.
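As a concrete check of the theorem on the running example, the hypothetical helper below evaluates the Stackelberg value and its limiting behavior; it is our sketch, not part of the paper.

```python
from fractions import Fraction as Fr

def stackelberg_value(g, l, c, S, F):
    # val = ((g - c - l)S + gF) / (2S + F), cf. the theorem above
    return Fr((g - c - l) * S + g * F, 2 * S + F)

print(stackelberg_value(2, 3, 1, 1, 4))   # 1 (positive, above the Nash payoff -7/4)
for F in [4, 40, 400, 4000]:              # val tends to g = 2 as S/F -> 0
    print(F, float(stackelberg_value(2, 3, 1, 1, F)))
```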
§ CONCLUSIONS, OR WHAT DO WE LEARN FROM THAT?
In this paper, we analyze a simple game-theoretic model of incentives in Benaloh challenge, inspired by <cit.>.
Contrary to <cit.>, we conclude that the voters have at their disposal simple strategies to audit and cast their votes.
This is especially the case if encryption audits are limited to at most one audit per voter.
In that event, a pretty good strategy for the voter is to almost always (but not exactly always!) cast immediately in the first round.
Interestingly, this is how voters usually behave in real-life elections, according to empirical evidence.
Moreover, we point out that rational interaction in Benaloh games is better captured by Stackelberg equilibrium, rather than Nash equilibrium.
While the optimal Stackelberg strategy is not attainable for the voter, it can be approximated arbitrarily close by casting the vote immediately with probability slightly lower than for the Nash equilibrium.
This is good news, because Stackelberg strategies (even approximate) promise strictly better payoffs for the voter than Nash strategies.
And, under reasonable assumptions, they produce positive utility for V. Thus, using Benaloh challenge is beneficial to the voter, after all.
The takeaway advice based on this study can be summarized as follows:
* Using Benaloh challenge is practical and beneficial to the rational voter.
* Putting a strict limit on the number of allowed audits makes things easier for the voter.
The election authority might design the voting system so that each voter can audit the vote encryption at most once.
* The voters should not try to adapt to the strategy of the attacker, the way Nash equilibrium prescribes.
Instead, they should stick to auditing the votes with a fixed (and rather low) frequency, thus approximating the Stackelberg optimum and putting the opponent on the defensive.
Discussion and future work.
An obvious limitation of the current study is the assumption of complete information about the structure of the game. In particular, it is dubious to assume that the voter knows how much the adversary values the outcomes of the game.
In the future, we plan to extend the analysis to an incomplete information game model of Benaloh challenge, e.g., in the form of a Bayesian game <cit.>.
Moreover, the analysis in this paper is performed as a 2-player game between a single voter and the voter's device. It would be interesting to see how this extends to scenarios where the adversary controls multiple devices and plays multiple rounds with different voters.
Last but not least, the players' payoffs for either failing or succeeding need further discussion. In particular, we assume that the costs of failure for the opponent are much higher than the benefits of success; this should be better justified or refuted.
Acknowledgments. The author thanks Stanisław Ambroszkiewicz, Peter B. Roenne, Peter Y.A. Ryan, and the anonymous reviewers of E-VOTE-ID for their valuable comments, suggestions, and discussions.
The work has been supported by NCBR Poland and FNR Luxembourg under the PolLux/FNR-CORE projects STV (POLLUX-VII/1/2019 and C18/IS/12685695/IS/STV/Ryan), SpaceVote (POLLUX-XI/14/SpaceVote/2023 and C22/IS/17232062/SpaceVote) and PABLO (C21/IS/16326754/PABLO).
Acemyan14usabilityE3EVV
C.Z. Acemyan, P. Kortum, M.D. Byrne, and D.S. Wallach.
Usability of voter verifiable, end-to-end voting systems: Baseline
data for Helios, Prêt à Voter, and Scantegrity II.
In Proceedings of EVT/VVOTE. USENIX Association, 2014.
Avenhaus00inspectiongames
R. Avenhaus, B. von Stengel, and S. Zamir.
Inspection games.
In Handbook of Game Theory, volume 3, pages 1947–1987.
North-Holland, 2000.
Benaloh06verifiable
Josh Benaloh.
Simple verifiable elections.
In USENIX Electronic Voting Technology Workshop, 2006.
Benaloh07challenge
Josh Benaloh.
Ballot casting assurance via voter-initiated poll station auditing.
In USENIX/ACCURATE Electronic Voting Technology Workshop,
2007.
Buldas07evoting
Ahto Buldas and Triinu Mägi.
Practical security analysis of e-voting systems.
In Proceedings of IWSEC, volume 4752 of Lecture Notes in
Computer Science, pages 320–335. Springer, 2007.
Culnane16benalohGT
Chris Culnane and Vanessa Teague.
Strategies for voter-initiated election audits.
In Decision and Game Theory for Security: Proceedings of
GameSec, volume 9996 of Lecture Notes in Computer Science, pages
235–247. Springer, 2016.
Ehin22votingEstonia
Piret Ehin, Mihkel Solvak, Jan Willemson, and Priit Vinkel.
Internet voting in Estonia 2005–2019: Evidence from eleven
elections.
Government Information Quarterly, 39(4):101718, 2022.
Fang15anti-poaching
Fei Fang, Peter Stone, and Milind Tambe.
When security games go green: Designing defender strategies to
prevent poaching and illegal fishing.
In Proceedings of IJCAI, pages 2589–2595. AAAI Press,
2015.
Gjosteen16evotingNorway
Kristian Gjøsteen.
E-voting in Norway.
In Feng Hao and Peter Y.A. Ryan, editors, Real-World Electronic
Voting. Design, Analysis and Deployment. CRC Press, 2016.
Gjosteen16experimentNorway
Kristian Gjøsteen and Anders Smedstuen Lund.
An experiment on the security of the Norwegian electronic voting
protocol.
Ann. des Télécommunications, 71(7-8):299–307,
2016.
Harsanyi72generalized
J.C. Harsanyi and R. Selten.
A generalized Nash solution for two-person bargaining games with
incomplete information.
Management Science, 18(5/2):80–106, 1972.
Hart92games
S. Hart.
Games in extensive and strategic forms.
In R.J. Aumann and S. Hart, editors, Handbook of Game Theory
with Economic Applications, Volume 1, pages 19–40. Elsevier/North-Holland,
1992.
Jamroga17preventing
W. Jamroga and M. Tabatabaei.
Preventing coercion in e-voting: Be open and commit.
In Electronic Voting: Proceedings of E-Vote-ID 2016, volume
10141 of Lecture Notes in Computer Science, pages 1–17. Springer,
2017.
Kuhn50extensiveGames
H.W. Kuhn.
Extensive games.
Proceedings of the National Academy of Sciences of the United
States of America, 36(10):570–576, 1950.
Leitmann78stackelberg
G. Leitmann.
On generalized stackelberg strategies.
Journal Of Optimization Theory And Application, 26(4):637–643,
1978.
Marky18reallyVote
Karola Marky, Oksana Kulyk, Karen Renaud, and Melanie Volkamer.
What did I really vote for?
In Proceedings of the Conference on Human Factors in Computing
Systems CHI, page 176. ACM, 2018.
Nash50equilibrium
J.F. Nash.
Equilibrium points in n-person games.
Proceedings of the National Academy of Sciences U.S.A.,
36:48–49, 1950.
Osborne94gamet
M. Osborne and A. Rubinstein.
A Course in Game Theory.
MIT Press, 1994.
Perea07doxasticNash
Andres Perea.
A one-person doxastic characterization of Nash strategies.
Synthese, 158(2):251–271, 2007.
Shoham09MAS
Y. Shoham and K. Leyton-Brown.
Multiagent Systems - Algorithmic, Game-Theoretic, and Logical
Foundations.
Cambridge University Press, 2009.
Tambe11securitygames
M. Tambe.
Security and Game Theory. Algorithms, Deployed Systems, Lessons
Learned.
Cambridge University Press, 2011.
Stackelberg52market
H. von Stackelberg.
The Theory of the Market Economy.
Oxford Uni. Press, 1952.
Stackelberg34equilibrium
Heinrich Freiherr von Stackelberg.
Marktform und Gleichgewicht.
Vienna, 1934.
Weber09usabilityHelios
Janna-Lynn Weber and Urs Hengartner.
Usability study of the open audit voting system Helios, 2009.
<http://www.jannaweber.com/wpcontent/uploads/2009/09/858Helios.pdf>.
Yin16protectingelections
Y. Yin, Y. Vorobeychik, B. An, and N. Hazon.
Optimally protecting elections.
In Proceedings of SECMAS. IFAAMAS, 2016.
Yin10stackelberg
Z. Yin, D. Korzhyk, C. Kiekintveld, V. Conitzer, and M. Tambe.
Stackelberg vs. Nash in security games: interchangeability,
equivalence, and uniqueness.
In Proceedings of AAMAS, pages 1139–1146. IFAAMAS, 2010.
§ FORMAL PROOFS
Here, we present the proofs of our formal results.
§.§ Proofs of Section <ref> (Benaloh According to Nash)
Proof of Lemma <ref>.
Suppose that (s_V,s_D) is a Nash equilibrium, and that p^V_n = 0 for some n (i.e., the voter always audits in round n). Take the smallest such n. Then, s_D = n is never a best response of D, i.e., the device must not cheat for the first time in that round.
We consider two cases now:
(i) n=1: in that case, the voter is better off playing s_V=1, i.e., casting deterministically at the first round.
(ii) n>1: in that case, the voter is better off by swapping p_n-1^V and p_n^V, i.e., postponing the action planned for round n-1 until round n.
In both cases, we get that (s_V,s_D) is not a Nash equilibrium, which is a contradiction.
Hence, we get that p^V_n > 0 for all n. [*]
Suppose now that p^D_n = 0 for some n (i.e., the device never cheats in round n). Take the smallest such n. If n=1, then V's best response is s_V = 1, which contradicts [*].
If n>1, then V's best response includes p_n-1^V = 0, i.e., V postpones casting at n-1 until the next round, which also contradicts [*].
Hence, also p^D_n > 0 for all n.
Proof of Lemma <ref>.
Recall Condition (<ref>), saying that:
∀ n,n'∈{1,…,n_max}: u_D(s_V,n) = u_D(s_V,n').
It is equivalent to:
∀ n∈{1,…,n_max-1}: u_D(s_V,n+1) - u_D(s_V,n) = 0. [*]
Notice that:
u_D(s_V,n) = ∑_{i=1}^{n_max} p_i^V · u_D(i,n)
= ∑_{i=1}^{n-1} p_i^V·0 + p_n^V·S + ∑_{i=n+1}^{n_max} p_i^V·(-F)
= S·p_n^V - F·∑_{i=n+1}^{n_max} p_i^V.
Similarly,
u_D(s_V,n+1) = ∑_{i=1}^{n_max} p_i^V · u_D(i,n+1)
= S·p_{n+1}^V - F·∑_{i=n+2}^{n_max} p_i^V.
By this and [*], we get that:
S·p_{n+1}^V - S·p_n^V + F·p_{n+1}^V = 0.
In consequence,
p_{n+1}^V = S/(S+F) · p_n^V,
which completes the proof.
Proof of Theorem <ref>.
If s_V is a part of a Nash equilibrium then p_n^V>0 for all n=1,…,n_max (by Lemma <ref>).
Moreover, by Lemma <ref>, the probabilities p_1^V,…,p_{n_max}^V form a geometric sequence with ratio R = S/(S+F).
Thus, ∑_{n=1}^{n_max} p_n^V = p_1^V·(1-R^{n_max})/(1-R) must be equal to 1.
In consequence, p_1^V = (1-R)/(1-R^{n_max}), and hence p_n^V = (1-R)·R^{n-1}/(1-R^{n_max}).
Notice that the above probability distribution is the only admissible solution, i.e., no other s_V can be a part of Nash equilibrium.
By Nash's theorem, the finite Benaloh game must have at least one equilibrium; hence, it is the unique one.
Proof of Theorem <ref>.
We claim that the above behavioral strategy implements the unique Nash equilibrium strategy s_V=[p_1^V,…,p_^V] of Theorem <ref>. To prove this, it suffices to verify that
p_n^V = (1-b_1^V)·…·(1-b_{n-1}^V) · b_n^V for all n=1,…,n_max.
That is, casting at round n indeed corresponds to unsuccessful Bernoulli trials in the first n-1 rounds, and a successful trial in round n.
The check is a straightforward telescoping computation: writing m = n_max, we have 1-b_k^V = R·(1-R^{m-k})/(1-R^{m-k+1}), so the product (1-b_1^V)⋯(1-b_{n-1}^V) collapses to R^{n-1}·(1-R^{m-n+1})/(1-R^{m}); multiplying by b_n^V = (1-R)/(1-R^{m-n+1}) yields (1-R)·R^{n-1}/(1-R^{m}) = p_n^V, as required.
Proof of Theorem <ref>.
Take the behavioral NE strategy b_V in Theorem <ref>.
For S/F → 0, we get R → 0. Hence, 1-R^{n_max-n+1} for n<n_max converges to 1 much faster than 1-R, and thus b_n^V = (1-R)/(1-R^{n_max-n+1}) gets arbitrarily close to 1-R = 1 - S/(S+F) = F/(S+F).
Proof of Theorem <ref>.
Fix n_max=2. By Theorem <ref>, we get b_1^V = (1-R)/(1-R^2) = 1/(1+R) = (S+F)/(2S+F).
Similarly, b_2^V = (1-R)/(1-R) = 1.
§.§ Proofs of Section <ref> (Benaloh According to Stackelberg)
Proof of Lemma <ref>.
Given a strategy profile represented by (p^V,p^D), the expected payoff of the device is:
Eu_D(p^V,p^D) = S·p^V·p^D - F·(1-p^V)·p^D + S·(1-p^V)(1-p^D)
= (2S·p^V + F·p^V - S - F)·p^D + S·(1-p^V).
Therefore, the derivative of Eu_D(p^V,p^D) with respect to p^D is
d Eu_D(p^V,p^D)/d p^D = 2S·p^V + F·p^V - S - F,
which is negative for p^V < (S+F)/(2S+F) and positive for p^V > (S+F)/(2S+F).
We recall from Theorem <ref> that p^V_NE=(S+F)/(2S+F) is the Nash equilibrium probability that the voter casts in the first round.[
Note that, for n_max=2, mixed and behavioral strategies coincide and can be used interchangeably. ]
Thus, Eu_D(p^V,p^D) is decreasing in p^D for p^V ∈ [0,p^V_NE), and hence reaches its maximum at p^D = 0.
Similarly, Eu_D(p^V,p^D) is increasing in p^D for p^V ∈ (p^V_NE,1], and has its maximum at p^D = 1.
Finally, by Lemma <ref> and the necessary Nash condition (<ref>), any response of D to the strategy represented by p^V_NE must obtain the same expected payoff for D, hence each is a best response.
Proof of Lemma <ref>.
For p^V < p^V_NE, we have E u_V(p^V,BR_D(p^V)) = E u_V(p^V,0) = g·p^V - (c+l)·(1-p^V).
Similarly, for p^V > p^V_NE, we have E u_V(p^V,BR_D(p^V)) = E u_V(p^V,1) = -l·p^V - c·(1-p^V).
For p^V = p^V_NE, any p^D∈[0,1] is a best response. Since Eu_V(p^V_NE,p^D) is a linear function w.r.t. p^D, it reaches its minimum for either p^D=0 or p^D=1.
Observe that Eu_V(p^V_NE,0) - Eu_V(p^V_NE,1) = (g+2l)·p^V_NE - l > 0
because
p^V_NE = (S+F)/(2S+F) > 1/2 > l/(g+2l).
Thus, Eu_V(p^V_NE,0) > Eu_V(p^V_NE,1), and V's lowest payoff against best response at p^V_NE is Eu_V(p^V_NE,1).
Proof of Theorem <ref>.
Ad. <ref> & <ref>:
Consider f(p^V) = Eu_V(p^V,BR_D(p^V)), established in Lemma <ref>. The function is increasing for p^V∈[0,p^V_NE) and decreasing for p^V∈[p^V_NE,1].
Moreover, lim_{p^V → (p^V_NE)^-} f(p^V) = Eu_V(p^V_NE,0) > Eu_V(p^V_NE,1) = f(p^V_NE).
Thus, val = sup_{p^V∈[0,1]} f(p^V) = Eu_V(p^V_NE,0) = g·p^V_NE - (1-p^V_NE)·(c+l) = ((g-c-l)·S + g·F)/(2S+F),
and the value is not reached by any p^V.
Ad. <ref>:
By Lemma <ref>, p^D_NE > 0.
Moreover, Eu_V(p^V_NE,p^D) is linear w.r.t. p^D, and we already know that Eu_V(p^V_NE,0) > Eu_V(p^V_NE,1), thus it must be strictly decreasing.
In consequence, val = Eu_V(p^V_NE,0) > Eu_V(p^V_NE,p^D_NE).
Ad. <ref>:
Let g ≥ a, and recall that c < l.
Then,
val ≥ ((a-c-l)·S + a·F)/(2S+F) = a - (a+c+l)·S/(2S+F) ≥ a - (2l+a)·S/(2S+F).
For S/F → 0, this converges to a, which is greater than 0.
|
http://arxiv.org/abs/2307.00939v1
|
20230703112749
|
Solitonic symmetry as non-invertible symmetry: cohomology theories with TQFT coefficients
|
[
"Shi Chen",
"Yuya Tanizaki"
] |
hep-th
|
[
"hep-th",
"cond-mat.str-el",
"math-ph",
"math.MP"
] |
§ INTRODUCTION
Symmetry provides one of the guiding principles when studying strongly-coupled physics in quantum field theories (QFTs), and astonishingly, the notion of symmetry itself has been vastly generalized in these recent years. The generalization is achieved under the motto that the topologicalness of operators should always represent conservation laws, and we identify the algebraic structure of topological operators in QFTs with the generalized symmetries.
Such generalizations mainly include two directions: One is the higher-form symmetry <cit.> and higher-group symmetry <cit.>, where the symmetry operators are defined on various codimensions, and the more recent one is the non-invertible symmetry, where the fusion rule obeys a suitable algebraic structure beyond the usual group multiplications.
The non-invertible symmetries in (1+1)-dim are now well understood, and the fusion category captures their algebraic structure (when finitely generated) <cit.>.
The non-invertible symmetries in higher dimensions start to be realized in various QFTs <cit.>, and there is also a massive endeavor to identify the precise mathematical structures behind generalized symmetries <cit.>.
In this paper, we shall tackle this problem from the viewpoint of non-invertible symmetries organizing conservation laws of topological solitons, which we call non-invertible solitonic symmetries <cit.>.
Solitons are nonperturbative objects in QFTs that appear due to the nontrivial topology of the path-integral target space, such as kinks, vortices, monopoles, etc., and they are typically created/annihilated by the defect operators.
An intriguing aspect of solitons is their topological stability, and it has been common wisdom that the homotopy group of the target space captures it <cit.>.
In the previous study <cit.>, the present authors revealed that solitonic symmetry also becomes a non-invertible symmetry in general, and the solitonic symmetry generators are given by the partition functions of auxiliary lower-dim topological QFTs (TQFTs) coupled with the original system.
This finding provides us an opportunity to reconsider the foundations of solitonic symmetries, and, surprisingly, deep mathematics turns out to be there behind the solitonic symmetries, which echoes that behind classifying gapped phases <cit.>.
The fact that solitonic symmetries become non-invertible implies that the usual homotopy group of the target space Y does not always correctly characterize the topological conservation laws of solitons, which raises the following questions:
* What is the new foundation for solitonic symmetry? (Sec. <ref>)
* What are the symmetry-generating operators for solitonic symmetry? (Secs. <ref>)
* What is the algebraic structure of solitonic symmetry? (Sec. <ref>)
* What makes solitonic symmetry go beyond homotopy groups? (Sec. <ref>)
We give proposals and/or solutions to these questions in the indicated sections and let us here summarize the results.
When discussing symmetries, we need to specify the symmetry generators and their action on charged objects.
We show in Sec. <ref> that, in the case of the topological conservation law of solitons, the charged objects are given by the defect operators that create/annihilate solitons, and the symmetry generators should then be given by topological functionals of the fundamental fields in the path-integral formulation.
Then in Sec. <ref>, we would like to find the most general form of topological functionals to define the non-invertible solitonic symmetries, and we clarify the physical requirements to be satisfied by the topological functionals.
In particular, it turns out that the essential requirement comes from locality.
As a natural ansatz satisfying such requirements, we propose that the solitonic symmetries are generated by partition functions of auxiliary fully-extended TQFT coupled to the fundamental fields (Ansatz <ref>).
The rigorous treatment of Ansatz <ref> requires us to employ the knowledge of fully-extended TQFTs.
We present a very concise review of the relevant mathematical treatments in Sec. <ref>.
Also in that section, we clarify the algebraic structure of solitonic symmetry, which is described by the symmetric fusion higher-categories 𝖱𝖾𝗉^∙(Y) in the bosonic case and 𝗌𝖱𝖾𝗉^∙(Y) in the fermionic case.
These observations are consistent with the latest progress on generalized symmetry in the literature, such as Refs. <cit.>.
We see that solitonic symmetry serves as a non-invertible generalization of cohomology theories with TQFT coefficients.
We also discuss the invertible solitonic subsymmetry, which is given by orthodox cohomology theories.
Armed with this mathematical guidance, in Sec. <ref>, we can systematically study the origin of non-invertibility in solitonic symmetry and how the conventional wisdom of homotopy groups is surpassed.
We shall see that (𝗌)𝖱𝖾𝗉^∙(Y) can be decomposed into two parts.
The first part comes from (𝗌)𝖱𝖾𝗉^{∙-1}(Y) via formulating condensations and contains topological functionals that are trivial on spheres.
The second part comes from the homotopy group π_∙(Y) and contains topological functionals that are nontrivial on spheres.
Depending on the topological data present in the theory, these spherically nontrivial topological functionals may have to obey a non-invertible fusion rule.
This decomposition unpacks the structure of solitonic symmetry inductively and provides insight into the connection between the generalized solitonic symmetry of the contemporary perspective and the conventional wisdom since Coleman and others.
This work was initiated during S. C.'s visit to YITP with the Atom-type visiting program, and the authors appreciate the hospitality of the Yukawa Institute.
This work was supported by JSPS KAKENHI Grants No. 21J20877 (S. C.), No. 22H01218 and No. 20K22350 (Y. T.), and also by the Center for Gravitational Physics and Quantum Information (CGPQI).
§ BASIC CONCEPTS
Solitonic symmetry is generated by topological functionals.
It (i) describes the conservation law in the solitonic sector, (ii) prescribes the selection rules in correlation functions between solitonic defects, and (iii) determines the possible topological couplings with a background gauge field.
We explain these basic notions in this section.
The spacetime dimension is denoted by d throughout.
§.§ Target space of the path integral
Let us consider a general situation, where the quantum field theory (QFT) is defined by the path integral, and its partition function is given by
Z = ∫𝒟σ exp(-S[σ]),
and σ is some field on the d-dim spacetime.
We focus on the non-Grassmann sector of this path integral and thus σ might be a scalar field, a gauge field, a higher-form gauge field, or even a combination of them coupled.
We note that, except for the simplest case of pure scalar fields, all other fields are not maps to a fixed space.
In this paper, we are interested in a series of nonperturbative phenomena that are insensitive to continuous deformations of field configurations.
Namely, we only care about the deformation classes of field configurations on some closed (smooth) manifold M.
The actual choice of M depends on specific problems; it might be the spacetime itself, a submanifold like a space slice or a world line, or a virtual submanifold like the normal sphere bundle of a defect operator (see Sec. <ref>).
Conveniently, we can always find a topological space Y such that there exists a one-to-one correspondence,
Deformation classes of field configurations on M
∥
Deformation classes of maps from M to Y.
We shall refer to this topological space Y as the (homotopy) target space of the path integral and abuse the symbol σ to denote also the auxiliary maps to the target space, i.e.,
σ|_M: M↦ Y.
Formally, the set of deformation classes of field configurations are now expressed by the set of homotopy classes of maps to the target space Y,
[M,Y]≡{f:M↦ Y}/homotopies.
Because the dimension of a closed manifold M cannot exceed the spacetime dimension d, the set [M,Y] depends only on the homotopy d-type of Y.
In a d-dim QFT defined by a path integral, the deformation classes of field configurations on any submanifold depend only on the homotopy d-type of the target space Y.
Thus, Y is understood up to (d+1)-connected maps in this paper.
Without loss of generality, we can always require Y to be a d-aspherical space.
Topological space X is n-aspherical if π_∙(X,x)≃ 0 for all ∙>n and x∈ X.
Namely, we can always take the d-th Postnikov truncation of Y.
Nevertheless, in actual practice, despite Proposition <ref>, we often endow Y with additional topological and even geometrical structure to keep in touch with other physics that is sensitive to continuous deformations of field configurations.
We now construct several common Y's to illustrate the above abstract concepts.
* Y≃ X for a X-valued scalar field, where X is a topological space.
* Y≃ BG, the delooping of G (or the classifying space of G), for a gauge field with gauge group G, where G is a topological group (can be discrete).
* If we couple the two fields above via a continuous G-action on X, then Y fits into a fibration X→ Y→ BG determined by the G-action[
If G acts on X freely (i.e., the action has no fixed point), we just have Y≃ X/G. If X is contractible, we just have Y≃ BG.
Otherwise, Y has a quite complicated structure, such as many orbifolding examples.
].
* Y≃ B^pG, the p-th delooping of G, for a p-form gauge field with Abelian G.
* Y≃ B𝖦 for a gauge field of a local higher-group symmetry 𝖦 (see the discussions around Philosophies <ref> and <ref>, and also the final remark in Sec. <ref>).
Concrete examples will appear later.
§.§ Solitonic symmetry
From a contemporary viewpoint of QFTs, the notion of symmetry is vastly generalized and can be summarized as
Generalized symmetry≡Algebra of topological operators.
In the path-integral formalism, there are basically two constructions of operators[
We note that these two constructions can be interchanged under the duality operation, and thus the notion of the solitonic symmetry depends on the explicit path-integral realization of a given QFT.
],
defects and functionals.
Both types of operators have the chance to be topological.
Topological defects are the most orthodox topological operators and all symmetries “on the electric side” are generated by them.
Topological functionals are more or less atypical and symmetries “on the magnetic side” are generated by them.
In this paper, we shall refer to the symmetry generated by topological functionals as solitonic symmetry, i.e.,
Solitonic symmetry≡Symmetry generated by topological functionals .
The core purpose of this paper is to reveal the structure of solitonic symmetry by studying the behavior of topological functionals.
§.§.§ Elementary properties
For a functional on an n-dim closed manifold M to be topological, it must factor through the deformation classes of field configurations.
Namely, it factors through [M,Y] for the target space Y.
Given that [M,Y] depends only on the homotopy n-type of Y, we see the following property of topological functionals.
The n-dim topological functionals depend on the homotopy n-type of the target space only.
This is consistent with Proposition <ref>.
As a prototypical example of topological functionals, let us pick up a cohomology class ω∈𝔼^∙(Y) for some multiplicative cohomology 𝔼[
Spectrum 𝔼 can be assumed connective, i.e. 𝔼_∙≃ 0 for ∙<0, given Proposition <ref>.
].
It can be an ordinary
cohomology ℍR for some ring R or an extraordinary cohomology such as 𝕂 and 𝕂𝕆.
Then, on any closed n-dim 𝔼-orientable manifold M that acquires a chosen 𝔼-orientation, we can define a topological functional as
U_g(M) ≡ g(∫_M σ^*ω), ∀ g∈Hom(𝔼_{n-∙},U(1)).
We can readily see the fusion rule U_g_1U_g_2=U_g_1g_2 and thus we obtain a p-form symmetry with p=d-n-1.
The operator dimension n can range from d to 0, corresponding to the symmetry form p ranging from -1 to d-1.
Note that d-dim topological functionals are exactly θ-angles, i.e. topological terms in the action that depend on Y only[
In contrast, topological terms that also depend on the geometry beyond the mere topology of Y are not d-dim topological functionals.
CS terms and WZW terms are such counter-examples.
].
We shall see concrete examples of the operator (<ref>) shortly, in Sec. <ref>.
The solitonic symmetry defined above has an invertible fusion rule, which is captured by an Abelian group, Hom(𝔼_{n-∙},U(1)) or one of its quotients.
Actually, operator (<ref>) gives the universal construction of invertible solitonic symmetry.
In this paper, we shall discuss the most generalized connotation of topological functional and solitonic symmetry with more complicated non-invertible fusion rules.
In particular, non-invertible θ-angles mean couplings to topological orders, as explained in Refs. <cit.>.
Nevertheless, in any case, the fusion rule of topological functionals must still be commutative, because on each supporting manifold, the fusion is just the multiplication of complex numbers:
Solitonic symmetry is commutative.
Namely, topological functionals never care about their order.
Solitonic symmetry is also insensitive to the theory details, including the action and ambient spacetime, because topological functionals and their fusions do not care about these theory details.
Therefore, when a d-dim and a (d+1)-dim theory share the same target space Y (i.e. the homotopy d-types of the target spaces are identical), we have for 0≤ n≤ d,
n-dim topological functional in d-dim QFT
∥
n-dim topological functional in (d+1)-dim QFT,
and accordingly, for d-1≥ p≥ -1, we have the following equivalence,
p-form solitonic symmetry in d-dim QFT
∥
(p+1)-form solitonic symmetry in (d+1)-dim QFT.
From this point of view, the algebraic structure of solitonic symmetry is supposed to be a sort of cohomology theory on the target space Y; we shall justify this in Sec. <ref>.
Thus we shall neglect mentioning the spacetime when discussing topological functionals.
The spacetime dimension implicitly enters as an upper limit for the possible dimension of topological functionals, according to Prop. <ref>.
Before continuing the journey to more details about topological functionals, let us pause here to get readers acquainted with physical consequences of solitonic symmetry.
§.§.§ Physical significance
First, a symmetry shows the presence of certain conserved charges.
To find charged objects of solitonic symmetry, let us consider a correlation function that involves a topological functional.
Then this topological functional can produce a nontrivial number as long as the field configuration on its supporting manifold cannot be continuously deformed to a trivial configuration.
This can be achieved only if the correlation function includes proper solitonic defects or the spacetime has a special topology.
Let us now introduce the notion of solitonic defects.
For some 0≤ p≤ d-1, we take a p-dim submanifold N of the spacetime and excise its infinitesimal neighborhood.
Then a p-dim Dirichlet defect operator on N, which we call a solitonic defect operator, is defined by putting the Dirichlet boundary condition on the boundary of the excised region.
More formally, it is a Dirichlet boundary condition on the normal sphere bundle ∂N of N, i.e., the boundary of its tubular neighborhood (locally, ∂N ≃ N× S^{d-p-1}).
It can be virtually viewed as a (d-1)-dim submanifold in the spacetime via a tubular neighborhood.
According to Sec. <ref>, the deformation classes of such Dirichlet boundary conditions can be expressed by deformation classes of maps
σ|_{∂N}: ∂N → Y.
Namely, the deformation classes of solitonic defects on N are classified by
[∂N, Y].
Solitonic defects can be either topological or non-topological operators, depending on details of the theory.
If topological, themselves also generate symmetry, but “on the electric side” and thus non-solitonic.
Solitonic defects couple to a nonperturbative sector called the solitonic sector which appears because of the nontrivial topology of the target space[
The solitonic sector discussed in this paper is able to exist on closed spacetimes.
There are also other types of solitonic sectors that inhabit non-closed spacetimes with boundaries only.
They also originate from the path-integral topology but are not captured by the target space.
Nevertheless, in many cases, they can still be understood from the target space of another related theory; See Sec. <ref> for more discussions.
].
When we put solitonic defects in the spacetime and evaluate the correlation function via the path integral, the configurations to be integrated are topological solitons bounded by those defects (see Sec. <ref> for a few examples).
In particular, the tree-level contribution to the correlation functions comes from solitonic solutions of the classical equation of motion.
Therefore, the solitonic sector in the quantum theory can be viewed as the quantization of classical solitons, and the solitonic defects are the creation/annihilation operators for quantum solitons.
Aside from putting solitonic defects, arranging a nontrivial spacetime topology is also a common method to visualize the solitonic sector.
It is now clear that the charged objects of solitonic symmetry are exactly those solitonic objects introduced above.
Solitonic symmetry puts conserved charges which we call topological charges on these solitonic objects.
The conservation law of topological charges constrains the correlation functions among solitonic defects and prescribes the selection rule in the physical processes in which topological solitons are involved.
Second, a symmetry prescribes the possible couplings to background gauge fields.
For solitonic symmetry, this coupling is a topological coupling, namely a topological term in the action.
Typically, the coupling is directly related to the solitonic sector.
However, there are also couplings that are not related to any authentic solitonic sector.
For example, a U(1) Chern-Simons path integral
∫𝒟a exp{(ik/4π)∫ a da}
has no physical solitonic sector since monopoles are not gauge invariant as point-like local operators.
However, we can still couple a U(1) background gauge field A topologically via
∫𝒟a exp{(ik/4π)∫ a da + (i/2π)∫ a dA}.
The information about the existence of such possible topological couplings with a background U(1) gauge field is also encoded in a 0-form U(1) solitonic symmetry from Y≃ BU(1), although the corresponding solitonic objects are unphysical.
Another general case is that, due to the dimensional reason, no solitonic defect really carries a (-1)-form topological charge.
Instantons, the charged objects under (-1)-form solitonic symmetry, are just classical objects.
Thus (-1)-form solitonic symmetry, generated by θ-angles, never really rules a quantum solitonic sector but instead prescribes the topological couplings to a background “0-form gauge field”, i.e., a background axion field.
§.§.§ Conventional wisdom and homotopy groups
In the conventional discussion on topological solitons, their stability and the selection rules are usually discussed using the homotopy group of the target space.
To be concrete, let us consider two simple examples:
(1) An S^1 sigma model has a (d-2)-dim solitonic defect, which bounds kinks.
Their (d-2)-form topological charge is related to π_1 S^1 ≃ ℤ.
This gives rise to a (d-2)-form U(1) solitonic symmetry.
(2) A pure U(1) gauge theory or a U(1) Higgs model has a (d-3)-dim 't Hooft defect, which bounds a magnetic flux or a gauge vortex. Their (d-3)-form topological charge is related to π_2 BU(1) ≃ ℤ.
This gives rise to a (d-3)-form U(1) solitonic symmetry.
In these examples, the solitons and their conservation laws are controlled by π_∙Y, the homotopy groups of the target space Y.
This conventional treatment suggested a conventional wisdom that the p-form solitonic symmetry is described by
Hom(π_{d-p-1}Y, U(1)).
Indeed, in the two examples above, the relevant topological functionals can be constructed via the universal invertible form (<ref>).
Thus the solitonic symmetries in these two examples are indeed invertible and described by Eq. (<ref>).
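For intuition on example (1), the ℤ-valued charge is simply the winding number of the S^1-valued field along a loop. A discretized version can be computed as in the following illustrative sketch (ours, not from the original text):

```python
import math

def winding_number(phases):
    """Winding number of an S^1-valued lattice field, given as a list of
    angles (in radians) sampled along a closed loop."""
    total = 0.0
    n = len(phases)
    for i in range(n):
        d = phases[(i + 1) % n] - phases[i]
        d = (d + math.pi) % (2 * math.pi) - math.pi  # phase jump in [-pi, pi)
        total += d
    return round(total / (2 * math.pi))

loop = [4 * math.pi * k / 100 for k in range(100)]   # wraps around S^1 twice
print(winding_number(loop))                          # 2
```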
We would like to note that the above examples are actually selected rare cases.
In general, most of the topological charges prescribed by homotopy groups cannot be detected by the universal invertible construction (<ref>) and the solitonic symmetry is not given by Eq. (<ref>).
The key problem is the following.
* The notion of the dimension of a solitonic configuration is in general ambiguous.
A generic soliton may be a mixture of solitons of clear dimensions.
This is especially often the case for the solitonic configuration bounded by solitonic defects, which determines the correlation function of these solitonic defects.
* A p-dim solitonic defect may be able to create/annihilate not only (p+1)-dim solitons but also solitons of all dimensions < p+1.
* A p-dim solitonic defect can carry q-form topological charges for all 0≤ q≤ p.
This is not surprising at all: Any conserved q-form charge can be carried by operators of dimension ≥ q, which has already been noticed since the very beginning of generalized symmetry <cit.>.
This phenomenon is the starting point that led us to topological charges beyond homotopy groups in the ℂP^1 model <cit.>, which provides an example of non-invertible solitonic symmetry.
The entire Sec. <ref> will be devoted to discussing how general solitonic symmetry goes beyond homotopy groups and becomes non-invertible.
§ TOPOLOGICAL FUNCTIONAL
From the contemporary perspective, to understand solitonic symmetry exactly means to understand topological functionals.
We shall carefully inspect the notion of topological functionals in this section.
At first glance, functional operators look more orthodox and simpler than defect operators.
However, we will find that they are actually much subtler than defects, and a well-behaved notion of topological functionals is vastly constrained by several physical requirements.
Pursuing a well-behaved notion of topological functionals, we will eventually bring ourselves to Ansatz <ref> which claims that topological functionals are best understood as the partition functions of auxiliary fully-extended TQFTs.
§.§ The identity problem
As we have mentioned at the beginning of Sec. <ref>, a topological functional on a manifold M has to factor through [M,Y].
However, not every [M,Y]↦ gives rise to topological functionals.
A priori, only the topology of M matters for a topological functional.
We cannot distinguish the original M and a new M transformed by a self-diffeomorphism Mf→M.
Accordingly, let us consider two different configurations Ma→Y and Mb→Y such that f transforms one to the other, say b=f∘ a.
Then, a topological functional we can construct with our bare hands must be blind to the difference between a and b.
More formally, let us consider the mapping class group π_0Diff(M), i.e. the group of the isotopy classes of self-diffeomorphisms on M.
This group acts on [M,Y] in the way described above.
Then, a topological functional should really factor through the equivalence classes,
[M,Y]/π_0Diff(M) .
Unfortunately, in most interesting cases, the π_0Diff(M)-action on [M,Y] results in vast degeneracies.
Many elements in [M,Y] have to be regarded as identical.
We thus call this the identity problem.
This problem terribly prevents us from having sufficiently many meaningful topological functionals.
There is one way out, to add some extra structure to M.
The self-diffeomorphism f may transform this structure to a different structure of the same type.
If this happens, we will be able to distinguish a and b by their relative relationship to the extra structure.
More formally, now we should consider the mapping class group π_0Diff(M,γ)⊆π_0Diff(M) of isotopy classes of self-diffeomorphisms that preserve the extra structure γ.
Then a topological functional that relies on γ should factor through the equivalence classes,
[M,Y]/π_0Diff(M,γ) .
Actually, in the consistent practice of physicists, we almost always unconsciously assume some extra structure; recall “𝔼-orientation” for Eq. (<ref>).
It is exactly the identity problem that motivated our subconscious for doing so.
Such 𝔼-orientations are primary examples of the structure γ.
Thus we shall just call γ a generalized orientation.
Consequently, a topological functional that can detect sufficiently many deformation classes of field configurations needs to rely on a generalized orientation γ.
§.§.§ Example: orientation
For example, let us consider Y≃ S^2 and M≃ S^2.
Then we have [M,Y]≃π_2(S^2)≃ℤ.
Let us consider the reflection self-diffeomorphism f defined by f(n⃗)=-n⃗, where we view S^2⊆ℝ^3.
This self-diffeomorphism generates
π_0Diff(S^2)≃ℤ_2.
Clearly, f acts on [M,Y]≃ℤ via n ↦ -n.
Thus with bare hands, we cannot distinguish two configurations labeled by opposite integers.
To break the ice, we note that M is orientable.
Recall that orientations form an H^0(-;ℤ_2)-torsor.
Thus there are two orientations on M, ξ and ξ', which are exchanged under the transformation of f.
That is,
π_0Diff(S^2,ξ)≃0 .
Combining configurations with orientations, we see that for an n∈ℤ≃[M,Y], the pairs (n,ξ) f⟷ (-n,ξ') and (-n,ξ) f⟷ (n,ξ') are distinct from each other and cannot be mixed by f.
Based on an orientation, we can construct the following topological functional,
U_θ(M) ≡ exp{iθ∫_M σ^*b}, ∀θ∈ℝ/2πℤ,
where b denotes the canonical 2-form on S^2 that integrates to 1.
This operator can distinguish any two elements in [M,Y].
We can also recast operator (<ref>) into the universal form (<ref>) by choosing 𝔼≃ℍ.
Concretely, in Eq. (<ref>), we take ω as a generator of H^2(Y;ℤ)≃ℤ, and g∈Hom(ℤ,U(1))≃ℝ/2πℤ.
§.§.§ Example: spin structure
Besides orientations, other structures may also be needed, such as spin structures.
An interesting example was presented in our earlier work <cit.> (also implicitly in Ref. <cit.>).
We consider Y≃ S^2 and M≃ S^2× S^1.
With some efforts we can compute
[S^2×S^1,S^2] ≃ {(m,ℓ) | m∈ℤ, ℓ∈ℤ_{2|m|}},
where ℤ_0 means ℤ.
For a configuration σ, m labels the class of σ|_{S^2×{p}} in π_2(S^2)≃ℤ for an arbitrary p∈ S^1.
Now consider the twist diffeomorphism f defined by f(n⃗,t)≡(e^{tẑ×}n⃗, t), which rotates n⃗ about the ẑ-axis by the angle t, where we view S^2⊆ℝ^3 and S^1≃ℝ/2πℤ, as well as another diffeomorphism g defined by g(n⃗,t)≡(-n⃗,-t).
Both f and g preserve an orientation.
Actually, for an orientation ξ, they generate
π_0Diff(S^2×S^1,ξ)≃ℤ_2×ℤ_2.
Both f and g induce an almost double degeneracy on [M,Y]: g exchanges (m,ℓ) and (-m,ℓ), while f exchanges (m,ℓ_1) and (m,ℓ_2) whenever 2ℓ_1=2ℓ_2 in ℤ_{2|m|}.
For example, f transforms (n⃗,t)↦n⃗ to (n⃗,t)↦e^{tẑ×}n⃗, and these two maps belong to the different classes (1,0) and (1,1) in [M,Y], respectively.
We would like to lift the f-degeneracy.
We note that M is spinnable.
Recall that, on top of a given orientation, spin structures form an H^1(-;ℤ_2)-torsor.
Since H^1(S^2×S^1;ℤ_2)≃ℤ_2, M has two different spin structures ρ and ρ' on top of an orientation ξ.
They are exchanged by f.
That is,
π_0Diff(S^2× S^1,ξ,ρ)≃ℤ_2.
Only the g-degeneracy remains.
Based on a spin structure, we can construct the following topological functional as a spin Chern-Simons integral,
U_k(M) ≡ exp{ik∫_M σ^*a dσ^*a/4π},
where k∼ k+2, and a denotes the U(1) gauge field on S^2 associated with the Hopf fibration S^1→ S^3→ S^2, which is related to b in the former example (<ref>) via b=da/2π.
We can also recast operator (<ref>) into the universal form (<ref>) by choosing 𝔼≃𝕂𝕆 since a 𝕂𝕆-orientation is exactly a spin structure.
Concretely, in Eq. (<ref>), we take ω as a generator of KO^2(Y)≃ℤ, and g∈Hom(KO_1,U(1))≃Hom(ℤ_2,U(1)).
The operator (<ref>) can lift the almost double degeneracy in [M,Y] caused by f.
Nevertheless, distinguishing the other elements in [M,Y] (except for the g-degeneracy) requires non-invertible topological functionals based on a spin structure, as the present authors showed in Ref. <cit.>, which will also be discussed in Sec. <ref> in this paper.
§.§ The coherence problem
The identity problem concerns a topological functional on a single supporting manifold only.
An even more severe problem appears if we start to move from one supporting manifold to another.
Namely, for manifolds M_1≄M_2, given a function on [M_1,Y] and another function on [M_2,Y], how can we tell whether they are just different topological functionals or different realizations of the same topological functional?
An incorrect assignment would lead to wrong solitonic physics.
We call this the coherence problem.
We can learn hints from the universal construction (<ref>) for invertible solitonic symmetry.
It is naturally defined on any closed 𝔼-orientable manifold equipped with an 𝔼-orientation.
Also, its concrete incarnations in operators (<ref>) and (<ref>) are automatically defined on any closed orientable manifold with an orientation and any closed spinnable manifold with a spin structure, respectively.
It is natural to regard the operators on different manifolds but defined by the same expression Eq. (<ref>) as the different realizations of the same topological functional.
The above observation suggests that the solution to the coherence problem is to require locality.
Recall that defect operators are defined by infinitesimal boundary conditions around the operator (see Sec. <ref>).
This definition concerns field configurations in the vicinity of each point on the supporting manifold and thus satisfies a good sense of locality.
However, a naively defined functional operator may behave quite non-local and might not yield a physically sensible operator.
We propose that a functional operator satisfies locality if it is the multiplication of piecewise data localized around each point.
The universal invertible construction (<ref>) provides the special cases where the multiplication is given by the "exponentiation" of summation, represented by morphisms to U(1).
Furthermore, to glue these local data may require a generalized orientation on supporting manifolds.
In summary,
Locality≡Multiplying local data with respect to generalized orientation.
A functional satisfying locality automatically renders coherence on a class of manifolds with the prescribed generalized orientation.
A natural subsequent question is whether there is a universal construction of multiplying local data, which goes beyond the exponentiation (<ref>) and can capture non-invertible cases.
We propose a positive answer:
The universal way of multiplying local data is exactly another path integral, and the most general functional that satisfies locality is exactly the partition function of another fully-extended QFT.
Namely, we can consider some auxiliary fields that inhabit the operator manifold only, couple them to the dynamical quantum fields, and perform a path integral of the auxiliary fields.
The output of this auxiliary path integral is the partition function of an auxiliary fully-extended QFT that inhabits the operator manifold only.
We optimistically assume that all functional operators satisfying locality can be produced by the partition functions of some auxiliary fully-extended QFT.
In particular, we require all topological functionals to satisfy locality, i.e., all topological functionals U[M,σ] are assumed to be produced by the partition functions of auxiliary topological fully-extended QFTs (TQFTs) coupled to the target space Y:
U[M,σ] = TQFT partition function that couples with σ|_M: M→ Y.
In particular, this construction includes Eq. (<ref>) as a special case for invertible topological functionals.
Also, this construction is consistent with Propositions <ref> and <ref>.
We now flesh out the precise connotation of “TQFT” in our proposal.
§.§ Our ansatz
In a specific theory, which generalized orientation the topological functionals rely on should be determined by the theory itself.
Therefore, instead of studying topological functionals for an arbitrary generalized orientation, we focus on the most common generalized orientations in this paper.
The most fundamental property of a theory that determines the generalized orientation is particle statistics.
As might be surprising at first glance, a non-Grassmann path integral can be used to define not only a bosonic QFT that inhabits any oriented spacetime but also a fermionic QFT that inhabits any spin spacetime.
These fermionic theories are not bosonic theories in guise, i.e., the _2-grading (-)^F on states is nontrivial, as long as the action includes proper spin topological terms.
In a bosonic (resp. fermionic) theory, topological functionals are supposed to inhabit oriented (resp. spin) manifolds.
This is natural since the theory itself inhabits oriented (resp. spin) spacetime.
A more persuasive rationale is that all the solitonic defects (see Sec. <ref>), the charged operators of topological functionals, are defined by field configurations on oriented (resp. spin) manifolds.
To see this, one notes that the normal sphere bundle of any closed submanifold N in the spacetime naturally inherits an orientation (resp. a spin structure) from the spacetime, even if N itself is not orientable or spinnable.
As a consequence, the example in Eq. (<ref>) cannot exist in bosonic theories.
We focus on the elementary cases of bosonic and fermionic theories in the paper while leaving the theories where other interesting generalized orientations[
Such theories appear typically when one wants to take into account (1) unorthodox statistics, (2) discrete spacetime symmetry, (3) mixing between spacetime and internal (non-solitonic) symmetry, and (4) conditions on higher objects than particles.
]
are involved to future work.
We now have all the ingredients to formulate an Ansatz for topological functionals, although, unfortunately, only when Y satisfies a finiteness condition.
Topological space Y is n-finite if π_0Y is finite and π_q(Y,y) is finite for all 0≤ q≤ n and y∈ Y.
If M is a closed manifold of dimension ≤ n, [M,Y] is finite when Y is n-finite.
[M,Y] has the chance to have infinitely many elements if Y is not n-finite.
Infinitely many topological charges are the sign of a continuous symmetry.
Both the existence of infinite many elements in [M,Y] and the existence of infinitesimal symmetry transformations cause tricky technical troubles.
We shall present a systematic treatment of discrete solitonic symmetry but only an approximating treatment of continuous solitonic symmetry.
Topological space Y is finite if it is simultaneously n-finite and n-aspherical for some n∈ℕ.
Given Propositions <ref> and <ref>, without loss of generality, we shall just state our systematic result for a finite target space Y without specifying a number n.
Let us describe our Ansatz for topological functionals responsible for discrete solitonic symmetry.
To produce bosonic (resp. fermionic) topological functionals, TQFTs themselves must be bosonic (resp. fermionic).
Their partition functions inhabit closed manifolds equipped with a map to the target space Y, σ|_M:M→ Y.
To justify “multiplication of local data”, these TQFTs must be maximally local, i.e., they should be fully-extended TQFTs, and we reach the following Ansatz:
For a finite space Y, an n-dim bosonic (resp. fermionic) topological functional targeting at Y is the partition function of an n-dim bosonic (resp. fermionic) Y-enriched fully-extended TQFT.
We regard this ansatz as a complete characterization of topological functionals (with a finite target space).
Based on the physical principle that a QFT is completely determined by its partition functions (in the presence of various kinds of background fields), we make the following conjecture.
For a finite space Y,
inequivalent n-dim bosonic (resp. fermionic) Y-enriched fully-extended TQFTs produce different n-dim bosonic (resp. fermionic) topological functionals targeting at Y.
Ansatz <ref> and Conjecture <ref> will be the foundation for our analysis of solitonic symmetry in this paper.
We note that similar TQFTs often appear in a thriving contemporary theme of physics, the classification of gapped phases.
In particular, they are tightly related to the notion of symmetry-protected topological phases and symmetry-enriched topological orders.
To conclude this section, we slightly discuss continuous solitonic symmetry.
There are basically two problems.
First, although infinitesimal symmetry generators can be unbounded (self-adjoint operators in the invertible case), finite symmetry operators must be bounded (unitary operators in the invertible case), which means we have to require a topological functional [M,Y]→ℂ to be bounded.
Second, Conjecture <ref> fails due to the existence of non-semisimple TQFTs, which superfluously produce the same partition functions as semisimple TQFTs; recall that for a non-semisimple group representation R≄A⊕ R/A, we still have tr_R = tr_A + tr_R/A = tr_A⊕ R/A.
In this paper, we do not attempt a systematic treatment of continuous solitonic symmetry.
Instead, we shall be satisfied with the discrete subsymmetries of continuous solitonic symmetry.
This is realized by considering finite homotopy quotients of Y.
Z is a homotopy quotient of Y if there is a map f:Y↦ Z such that f_*:π_0 Y→π_0 Z is surjective and f_*:π_q(Y,y)→π_q (Z,f(y)) is an epimorphism for all q>0 and y∈ Y.
Picking up a finite homotopy quotient Z of Y and considering topological functionals that factor through [-,Z], we obtain a discrete solitonic subsymmetry.
We believe that the colimit of all discrete subsymmetries (see Sec. <ref>) leads to an almost faithful approximation to a continuous symmetry, just like approximating U(1) by ℚ/ℤ ≃ ⋃_n∈ℕℤ_n.
Besides, we shall find it easy to describe the continuous solitonic symmetry directly in some concrete examples.
§ ALGEBRAIC STRUCTURE OF SOLITONIC SYMMETRY
We are going to reveal the universal algebraic structure of solitonic symmetry based on Ansatz <ref> and Conjecture <ref>.
First in Sec. <ref>, we present a short mathematical preliminary on higher-categories.
Then in Sec. <ref>, we shall formulate the mathematical notion of fully-extended TQFTs to clarify the accurate connotation of Ansatz <ref>.
Finally in Sec. <ref>, we shall discuss the mathematical structure that describes the algebraic structure of solitonic symmetry.
In this section, we aim at establishing a more or less rigorous mathematical ground for solitonic symmetry.
Thus our expositions might look inevitably more or less abstract.
However, readers acquainted with the issue of classifying gapped phases by fully-extended TQFTs will find the expositions familiar and will recognize tremendous echoes.
§.§ Preliminaries on higher-categories
The most efficient way to formulate fully-extended TQFTs needs the package of higher-categories.
The goal here is to give a brief overview of this nice mathematical package.
We are not attempting a self-contained exposition, but
instead, we present an oversimplified introduction following Sec. 1.3 of Ref. <cit.>.
Unfamiliar readers could find good entrances in the literature like Refs. <cit.> to get started into this evolving contemporary discipline.
§.§.§ n-category
To start with, let us recollect the definition of categories.
A category comprises (i) objects, (ii) a set Hom(x,y) between any two objects x and y, and (iii) an associative unital composition map Hom(x,y)×Hom(y,z)→Hom(x,z) for any three objects x, y, and z.
In particular, the composition map makes Hom(x,x) a monoid.
Elements of Hom(x,y) are called morphisms.
One of the tremendous reasons why categories are useful is that they are regarded as soft instead of rigid.
Namely, we regard two categories as “the same” as long as they are equivalent rather than isomorphic.
This is similar to algebraic topology which cares about (weak) homotopy equivalences instead of homeomorphisms.
Generalizing the above definition, we can sketch the notion of n-categories by an induction.
At the root of this induction, a 0-category is a set and its elements are called objects or 0-morphisms.
The induction then goes as follows.
An n-category comprises (i) objects, also called 0-morphisms, (ii) a small (n-1)-category Hom(x,y) between any two objects x and y, and (iii) an associative unital composition functor Hom(x,y)×Hom(y,z)→Hom(x,z) for any three objects x, y, and z.
In particular, the composition functor makes Hom(x,x) a monoidal (n-1)-category.
For all 0≤ p<n, p-morphisms of Hom(x,y) are called (p+1)-morphisms of this n-category.
Clearly, if we take n=1, we just recover the definition for an ordinary category.
We can conceive an ∞-category as a proper limit of n-categories as n approaches ∞.
Its Hom(x,y)'s are also ∞-categories.
The above sketch of the definition looks promising but hides the subtlety in the treatment of the associativity and the unitality for compositions.
We do not need the too rigid notion of strict n-categories, where these conditions are satisfied literally.
Instead, we need weak n-categories, where these conditions are satisfied up to specified equivalences.
For example, in the case of n=2, we want Hom(X,X) to be a (weak) monoidal category rather than a strict monoidal category.
However, as n increases, to characterize accurately all the axioms gets rapidly a formidable task.
Different models to organize this have been proposed and the equivalence between them, though widely believed, is a matter of ongoing research.
We shall drop the prefix “weak” henceforth.
We now introduce two convenient notations.
Let us consider an n-category 𝖢 with a distinguished object 1_𝖢.
For example, 𝖢 might be a monoidal n-category, which is an n-category with the “tensor product” ⊗ and the unit object 1_𝖢.
We conceive a monoidal (n-1)-category Ω𝖢 via
Ω𝖢 ≡ Hom(1_𝖢,1_𝖢) .
When 𝖢 is a monoidal n-category, we conceive a one-object (n+1)-category B𝖢 such that
Ω B𝖢 = 𝖢 .
Remarkably, if 𝖢 is further symmetric, B𝖢 has a canonical symmetric monoidal structure.
In this special case, we can further define iterated B^n𝖢.
These two prefix symbols Ω and B are apparently borrowed from looping and delooping in algebraic topology.
The reason why they are adopted will be clear shortly.
§.§.§ Space and n-groupoid
We can extract an n-category π_≤ nX from each topological space X.
Thereof, objects are points in X, 1-morphisms are homotopies of objects (paths), 2-morphisms are homotopies of 1-morphisms (based homotopies of paths), 3-morphisms are homotopies of 2-morphisms (based homotopies of based homotopies of paths), and so on inductively until that n-morphisms are the homotopy classes of homotopies of (n-1)-morphisms.
Note that we can allow n=∞ by not terminating the induction and deleting the eventual command of taking the homotopy class in the above construction.
The n-category π_≤ nX has a special property that all of its morphisms are invertible up to equivalence.
In general, an n-category with such a property is called an n-groupoid.
Thus π_≤ nX is called the fundamental n-groupoid of X.
This π_≤ nX knows a great deal about the topology of X.
First, the equivalence classes of objects in π_≤ nX exactly constitute π_0X.
Second, for any base point x∈ X, the fusion monoid of the equivalence classes of objects in Ω^qπ_≤ nX is exactly the homotopy group π_q(X,x) for q≤ n and the trivial group for q>n, accordingly.
Furthermore, the entire first n stages of the Postnikov tower of each path component of X are encoded in π_≤ nX.
In other words, the fundamental n-groupoid completely determines the homotopy n-type of the space.
Namely, π_≤ nX ≃ π_≤ nY as long as there is an (n+1)-connected map between X and Y.
It is a classical theorem that every groupoid is equivalent to the fundamental groupoid of some space.
Since Quillen <cit.> and Grothendieck <cit.>, it has also been generally accepted that every n-groupoid is equivalent to the fundamental n-groupoid of some space.
Therefore, based on the homotopical property discussed above, we have the following equivalence.
An n-groupoid is equivalent to a homotopy n-type.
That is, an n-groupoid is an n-aspherical space, and the equivalence between n-groupoids is the weak homotopy equivalence between n-aspherical spaces.
One can interpret this equivalence as saying that n-categories are non-invertible generalizations of homotopy n-types.
Let us look at the particular case of n=∞.
Then π_≤∞X encodes the entire Postnikov tower of X and completely determines the homotopy type of X.
The above equivalence then suggests that we can model ∞-groupoids by spaces.
An ∞-groupoid is equivalent to a homotopy type.
That is, an ∞-groupoid is a space, and the equivalence between ∞-groupoids is the weak homotopy equivalence between spaces.
An invertible monoidal (n-1)-groupoid is called an n-group.
An ∞-group is equivalent to a loop space.
Ω and B establish a one-to-one correspondence between n-groups and one-object (n+1)-groupoids.
We also unify the symbols for spaces and higher-groupoids.
We shall write π_≤∞X just as X and abandon the now tautological symbol π_≤∞X.
We shall also write the homotopy n-type of X, incarnated by an n-aspherical space that X can map to via an (n+1)-connected fibration, just as π_≤ nX.
It is reasonable to regard the higher-category theory as the non-invertible generalization of algebraic topology, in the sense that any correct model for higher-categories should produce philosophies <ref> and <ref> as theorems.
§.§.§ (n,r)-category
We now learned that the invertibility of morphisms is a distinguished property for higher-categories.
Thus people introduced the notion of (n+r,n)-categories to specify the information about invertibilities.
From one perspective, an (n+r,n)-category is just an (n+r)-category whose p-morphisms are invertible up to equivalence for all p>n.
For example, an (r,0)-category just means an r-groupoid and an (n,n)-category just means an n-category.
For n_1≤ n_2≤ n, an (n,n_1)-category is also an (n,n_2)-category.
From another perspective, an (n+r,n)-category is an n-category enriched over r-groupoids.
Namely, in the previous sketch of definition for n-categories, we now choose to start the induction from r-groupoids instead of the mere sets.
These r-groupoids are directly modeled by topological spaces according to philosophies <ref> (and <ref>).
This second perspective has bonus advantages in some aspects and has thus drawn vast attention.
In particular, researches on (∞,n)-categories are thriving.
§.§ Fully-extended TQFT
A general n-dim fully-extended TQFT is formulated as a symmetric monoidal functor between two symmetric monoidal (∞,n)-categories, which axiomatizes the “results” of the path integral.
The domain is a bordism (∞,n)-category and the codomain is a fully-dualizable (∞,n)-category.
The choice of domains and codomains is determined by the specific physics context.
§.§.§ Bordism domain (∞,n)-category
An n-tangential structure is a map XΓ→BO(n).
The simplest examples include
n-framing: {*} fr⟶ BO(n) ,
n-orientation: the canonical BSO(n) SO⟶ BO(n) ,
n-spin structure: the canonical BSpin(n) Spin⟶ BO(n) .
For an n-tangential structure Γ and for q≤ n, a q-dim Γ manifold means a q-dim manifold M equipped with a map M→ X, such that the composition M→ XΓ→BO(n) classifies the n-stabilized tangent bundle of M (i.e. TM⊕^n-q).
Given an n-tangential structure Γ, Lurie <cit.> conceives a symmetric monoidal (∞,n)-category 𝖡𝗈𝗋𝖽^Γ_n as follows.
In the non-invertible-morphism region of q≤ n, a q-morphism is a q-dim Γ bordism, i.e., a q-dim Γ manifold with corners.
In the invertible-morphism region, the Hom ∞-groupoids between n-morphisms are the spaces of boundary-fixed diffeomorphisms, along Philosophy <ref>.
The symmetric monoidal structure on 𝖡𝗈𝗋𝖽^Γ_n is prescribed by disjoint union.
Such 𝖡𝗈𝗋𝖽^Γ_n's give the domains of a fully-extended TQFT.
However, given that the codomains we will be considering are essentially n-categories, the spaces of diffeomorphisms will not really contribute to our results.
We want to equip all manifolds with a map to the target space Y.
The simplest way to implement this is to adopt a special n-tangential structure e_Y×Γ:Y×X→ BO(n), the product of the collapse map Ye_Y→{*} and another n-tangential structure XΓ→BO(n), such as those listed in Eq. (<ref>).
In this case, we shall particularly call e_Y×Γ manifolds Y-enriched Γ manifolds, and particularly write
𝖡𝗈𝗋𝖽^Γ_n(Y)≡𝖡𝗈𝗋𝖽^e_Y×Γ_n .
Such 𝖡𝗈𝗋𝖽^Γ_n(Y)'s give the domains for Y-enriched fully-extended TQFTs.
This notion of enriched TQFT is tightly related to the notion of symmetric TQFT (or say equivariant TQFT).
Due to the dimensional reason, a map from a manifold of dimension ≤ n to Y factors through Y's homotopy n-type, π_≤ nY.
That is, we have the following equivalence,
𝖡𝗈𝗋𝖽^Γ_n(Y) ≃𝖡𝗈𝗋𝖽^Γ_n(π_≤ nY) .
If Y is path connected, an n-dim Y-enriched TQFT is exactly an n-dim TQFT that acquires an action by the higher-group Ωπ_≤ nY, i.e. an Ωπ_≤ nY-symmetric TQFT.
If Y has multiple path components, we obtain a TQFT consisting of a π_0Y worth of universes, each of which is a higher-group-symmetric TQFT.
Physically, a priori, any TQFT suffers from a framing anomaly and requires a framing dependence.
Therefore, 𝖡𝗈𝗋𝖽^fr_n(Y) would become a suitable choice of the domain for all TQFTs.
A universal Y-enriched fully-extended TQFT with framing is a symmetric monoidal functor
Z: 𝖡𝗈𝗋𝖽_n^fr(Y) → 𝖣_n ,
where the target category is a fully-dualizable symmetric monoidal (∞,n)-category 𝖣_n as we shall see later.
The maximal ∞-groupoid inside 𝖣_n acquires a natural homotopy O(n)-action according to the cobordism hypothesis <cit.>.
This homotopy O(n)-action can be lifted to a homotopy Ω X-action by an n-tangential structure XΓ→BO(n).
If this Ω X-action is (canonically) trivializable, the a priori framing anomaly turns out to be merely a Γ anomaly,
and then the TQFT does not really depend on an n-framing but an n-tangential structure Γ instead.
Consequently, in such a situation, we can (canonically) extend the domain of Z to any Y-enriched Γ manifolds so that
Z^Γ: 𝖡𝗈𝗋𝖽_n^Γ(Y) → 𝖣_n .
Along this philosophy of treatment, what manifolds a TQFT can inhabit is determined by the property of its codomain[
This treatment may be phrased as codomain-dominated.
There is also a domain-dominated treatment, where we take a universal codomain 𝖣_n (for each n) and ask Γ to vary.
Then the logic will be reversed, i.e., the choice of Γ=SO (resp. Γ=Spin) implies that bosonic (resp. fermionic) state spaces will be picked up in the universal codomain.
Since a universal codomain is very difficult to construct, we take the codomain-dominated approach.
].
§.§.§ Physical codomain (∞,n)-category
Let 𝖣_n denote a fully-dualizable symmetric monoidal (∞,n)-category that we want to use as the codomain for TQFTs, where the requirement of full dualizability puts a natural finiteness condition (see Sec. 2.3 of Ref. <cit.> for the definition).
For a physical TQFT from domain 𝖡𝗈𝗋𝖽^Γ_n(Y), we want to assign a complex number to each closed n-dim Y-enriched Γ manifold as its partition function, and assign a vector space to each closed (n-1)-dim Y-enriched Γ manifold as its state space.
In other words, we want Ω^n-1𝖣_n to be an (∞,1)-completion of a symmetric monoidal category of proper vector spaces.
There are two inequivalent natural (∞,1)-completions of a category of vector spaces.
* Only identity higher morphisms are added.
Hom(-,-) has the discrete topology.
* Iterated isotopies are added.
Hom(-,-) has the subspace topology from some ℂ^m.
The second completion is appropriate for classifying gapped phases because it identifies TQFT deformation classes[
It captures deformation classes of not only orthodox TQFTs but also half-geometric-half-topological QFTs.
These unorthodox “TQFTs” depend on geometric data of background fields σ:M↦ Y but are invariant under spacetime diffeomorphisms.
This is exactly what is needed for classifying gapped phases.
].
But the first completion is appropriate for our purpose because we want to speak of the partition function of each individual TQFT.
Thus in this paper, we regard the first completion as the canonical (∞,1)-completion and abuse the same symbol of the category to denote also its canonical (∞,1)-completion.
The choice of Ω^n-1𝖣_n is determined by what state spaces we want.
In the bosonic case, we consider Ω^n-1𝖣_n ≃ 𝖵𝖾𝖼𝗍^fd, whose objects are finite-dim ℂ-linear spaces and morphisms are ℂ-linear maps.
The tensor product gives rise to a monoidal structure on 𝖵𝖾𝖼𝗍^fd.
A swap map comes from the index exchange,
x⊗ y ↦ y⊗ x ,
which makes 𝖵𝖾𝖼𝗍^fd a symmetric monoidal category.
All objects have duals due to the finite-dim condition.
It also has finite biproducts given by the direct sum, is semisimple with respect to it, and has a unique class of simple objects.
It is further ℂ-linear.
All these structure makes 𝖵𝖾𝖼𝗍^fd a symmetric fusion category.
As for the fermionic case, the state space V has a -linear involution ε called fermionic parity, which makes the state space _2-graded.
Such a pair (V,ε) is called a super vector space.
The (+1)-eigenspace of ε is the bosonic sector and the (-1)-eigenspace is the fermionic sector.
Let 𝗌𝖵𝖾𝖼𝗍^fd denote the category whose objects are finite-dim super ℂ-linear spaces and morphisms are ℂ-linear maps that commute with fermionic parities.
Its monoidal structure also comes from the tensor product,
(X,ε_X)⊗(Y,ε_Y)≃(X⊗ Y, ε_X⊗ε_Y) .
𝗌𝖵𝖾𝖼𝗍^fd is also ℂ-linear, has finite biproducts, and has duals for all objects, which turns out to make it a fusion category.
As fusion categories, 𝗌𝖵𝖾𝖼𝗍^fd is equivalent to 𝖱𝖾𝗉(_2), the representation category of _2.
It is the swap map that distinguishes them as inequivalent symmetric fusion categories.
The swap map in 𝗌𝖵𝖾𝖼𝗍^fd encodes the fermionic statistics following the Koszul sign rule.
Namely, for fermionic parity eigenstates x∈ X and y∈ Y, the swap map sends
x⊗ y ↦(-)^|x||y| y⊗ x ,
where |∙|=0 if ∙ is bosonic and |∙|=1 if ∙ is fermionic.
We take Ω^n-1𝖣_n≃𝗌𝖵𝖾𝖼𝗍^fd for the fermionic case.
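To make the Koszul sign rule tangible, here is a minimal programmatic sketch (our own illustration; the names Homogeneous and swap are ours and not notation from the text), which encodes homogeneous elements with a parity and checks that the swap is an involutive symmetry:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Homogeneous:
    """A homogeneous element of a super vector space: a label and a parity
    (0 = bosonic sector, 1 = fermionic sector)."""
    label: str
    parity: int

def swap(x: Homogeneous, y: Homogeneous):
    """Swap map of sVect^fd on homogeneous tensors, per the Koszul sign rule:
    x (x) y  ->  (-1)^{|x||y|} y (x) x."""
    sign = (-1) ** (x.parity * y.parity)
    return sign, (y, x)

# Two fermions pick up a sign; a boson past a fermion does not.
psi1, psi2 = Homogeneous("psi1", 1), Homogeneous("psi2", 1)
phi = Homogeneous("phi", 0)
assert swap(psi1, psi2)[0] == -1
assert swap(phi, psi1)[0] == +1

# The swap squares to the identity with total sign +1, as a symmetric
# monoidal structure requires: (-1)^{|x||y|} * (-1)^{|y||x|} = +1.
s1, (a, b) = swap(psi1, psi2)
s2, pair = swap(a, b)
assert s1 * s2 == 1 and pair == (psi1, psi2)
```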
The final step is to determine the entire 𝖣_n.
Since we do not have any requirement on higher objects than particles, our guiding principle is that no superfluous data on lower-dim manifolds should be introduced beyond what is necessary.
An elegant solution was found by Gaiotto and Johnson-Freyd in Ref. <cit.> (see also Ref. <cit.>).
They noticed the Karoubi-completeness among various other features and clarified that the minimal necessary complexity is exactly the n-categorical generalization of being Karoubi-complete.
Roughly speaking, being Karoubi-complete means that every idempotent comes from a splitting, among all p-morphisms.
Given a Karoubi-complete monoidal n-category 𝖢, they defined its “stable suspension” Σ𝖢 as the Karoubi completion of its delooping B𝖢.
Symbolically,
Σ𝖢≡Kar(B𝖢) .
When the above n-category 𝖢 is further symmetric, Σ𝖢 turns out to be a Karoubi-complete symmetric monoidal (n+1)-category <cit.>.
Given that both 𝖵𝖾𝖼𝗍^fd and 𝗌𝖵𝖾𝖼𝗍^fd are Karoubi-complete, the codomain 𝖣_n we are looking for can be given by
(bosonic) Σ^n-1𝖵𝖾𝖼𝗍^fd ,
(fermionic) Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd .
We shall flexibly regard them as either n-categories or (∞,n)-categories (with identity higher morphisms) according to the context.
Gaiotto and Johnson-Freyd also conjecture <cit.> that the canonical homotopy SO(n)-action [resp. Spin(n)-action] on the maximal ∞-groupoid inside Σ^n-1𝖵𝖾𝖼𝗍^fd (resp. Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd) is canonically trivializable.
If these conjectured properties are true, according to the discussion around Eq. (<ref>), bosonic (resp. fermionic) topological functionals indeed inhabit any closed oriented (resp. spin) manifold.
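Before summarizing the formulation, the Karoubi condition can be made concrete at the bottom level 𝖵𝖾𝖼𝗍^fd, where it is ordinary linear algebra: every idempotent splits through its image. A small numerical sketch (our own toy check, not anything from the references):

```python
import numpy as np

# Karoubi-completeness in Vect^fd, concretely: every idempotent e = e^2 on a
# finite-dim space splits as e = s r with r s = id on a smaller space.
rng = np.random.default_rng(0)

# Build a rank-2 idempotent on C^4 from a random injection s: C^2 -> C^4
# and its pseudoinverse r = s^+, so that r s = id_2 automatically.
s = rng.standard_normal((4, 2))
r = np.linalg.pinv(s)
e = s @ r

assert np.allclose(e @ e, e)            # e is idempotent
assert np.allclose(r @ s, np.eye(2))    # the splitting: r s = id
# hence e factors through C^2, its image, exhibiting the required splitting
```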
§.§.§ Summary of the formulation
We now summarize all the ingredients found above to present accurate formulations of the relevant TQFTs to clarify the connotation of Ansatz <ref>.
A priori, any TQFT suffers from a framing anomaly and requires a framing dependence.
An n-dim bosonic (resp. fermionic) Y-enriched fully-extended TQFT is a symmetric monoidal functor between symmetric monoidal (∞,n)-categories,
Z_b: 𝖡𝗈𝗋𝖽_n^fr(Y) → Σ^n-1𝖵𝖾𝖼𝗍^fd [resp. Z_f: 𝖡𝗈𝗋𝖽_n^fr(Y) → Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd] .
It is reasonable to anticipate that having bosonic (resp. fermionic) state spaces logically implies inhabiting any oriented (resp. spin) spacetime, i.e., the framing anomaly should merely be an orientation (resp. spin) anomaly.
This anticipation is realized by the following conjectured property of Σ^n-1𝖵𝖾𝖼𝗍^fd and Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd <cit.>:
The canonical homotopy SO(n)-action [resp. Spin(n)-action] on the maximal ∞-groupoid inside Σ^n-1𝖵𝖾𝖼𝗍^fd (resp. Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd) is canonically trivializable.
By the cobordism hypothesis combined with this conjecture, every Z_b (resp. Z_f) has no genuine framing dependence but just an orientation (resp. spin) dependence, so that
every Z_b or Z_f can be canonically extended to
Z_b^SO: 𝖡𝗈𝗋𝖽_n^SO(Y) → Σ^n-1𝖵𝖾𝖼𝗍^fd ,
Z_f^Spin: 𝖡𝗈𝗋𝖽_n^Spin(Y) → Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd .
Therefore, we will talk about the partition functions of Z_b (resp. Z_f) on any closed oriented (resp. spin) n-manifold M, which really mean those of Z_b^SO (resp. Z_f^Spin).
When Y is path-connected, the evaluation of Z_b (resp. Z_f) on a point gives a linear n-group action on objects in Σ^n-1𝖵𝖾𝖼𝗍^fd (resp. Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd) imposed by the n-group Ωπ_≤ nY.
Namely, an n-dim Y-enriched TQFT is the same as an n-dim Ωπ_≤ nY-symmetric TQFT.
When Y has multiple path-components, we have a π_0(Y) worth of universes of higher-group symmetric TQFTs.
The cobordism hypothesis <cit.> asserts that such a characterization by evaluation on a point is always faithful and complete.
In general, 𝖡𝗈𝗋𝖽^fr_n(Y) is the free fully-dualizable symmetric monoidal (∞,n)-category generated by ∞-groupoid Y.
An n-representation (resp. super n-representation) of a space Y is a functor between (∞,n)-categories,
ρ: Y → Σ^n-1𝖵𝖾𝖼𝗍^fd [resp. ρ: Y → Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd] .
Note that since Σ^n-1𝖵𝖾𝖼𝗍^fd (resp. Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd) is essentially an n-category, ρ actually factors through the homotopy n-type of Y.
Namely, ρ is in essence a functor between n-categories, from π_≤ nY to Σ^n-1𝖵𝖾𝖼𝗍^fd (resp. Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd).
The cobordism hypothesis specialized for our purpose asserts the following equivalence (see <cit.>):
Via point evaluation, an n-dim bosonic (resp. fermionic) Y-enriched fully-extended TQFT (Definition <ref>) is equivalent to an n-representation (resp. super n-representation) of Y (Definition <ref>).
Higher-representations at lower dimensions in the context of generalized symmetries are extensively discussed by, e.g., Refs. <cit.>.
As a preliminary example of topological functionals from these TQFTs, let us consider 1D topological functionals targeting at path-connected 1-finite Y.
Namely, π_1Y is finite.
We start from the bosonic case.
A 1-representation of Y is just a representation R of π_1Y.
An S^1 bosonic topological functional Z(g) for g∈[S^1,Y] is a ℂ-valued function on
[S^1,Y] ≃ [S^1,Bπ_1Y] ≃ {conjugacy classes of π_1Y} .
Note that [S^1,Y]≄π_1Y as long as π_1Y is not commutative[
This discrepancy between [S^1,Y] and π_1Y comes from the difference between free homotopy and based homotopy.
As we shall see in Sec. <ref>, this difference accounts for a large class of non-invertible fusion rules in solitonic symmetry.
].
Therefore, an S^1 topological functional is equivalent to a class function of π_1Y, such as a character.
Note that g∈[S^1,Y] prescribes a π_1Y gauge field on S^1 whose holonomy conjugacy class is g.
Therefore, the partition function is indeed given by a character, i.e.,
Z(g) = tr_R(g) .
One can generalize this relation to T^n bosonic topological functionals and define the notion of n-characters[
Ideas of n-characters originate from Ref. <cit.>.
2-characters are defined in Ref. <cit.> and are developed in, e.g., Refs. <cit.>.
Unlike ordinary characters, the input of an n-character is a set of n mutually commutative elements in G, and the n-character is invariant under simultaneous conjugations.
One can immediately recognize that this input describes the isomorphism classes of G-bundles on T^n, i.e., [T^n,BG].
It is then natural to connect these n-characters with T^n bosonic topological functionals targeting at BG.
].
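Before moving on, this character description can be checked concretely (a small self-contained Python sketch of ours, with S_3 standing in for π_1Y): free loops see only conjugacy classes, characters are class functions, and the 2-dim character fuses non-invertibly.

```python
from itertools import permutations

# S_3 as permutations of {0,1,2}; composition (p*q)(i) = p[q[i]].
elems = list(permutations(range(3)))
compose = lambda p, q: tuple(p[q[i]] for i in range(3))
inverse = lambda p: tuple(sorted(range(3), key=lambda i: p[i]))

# [S^1, BS_3]: free homotopy classes of loops = conjugacy classes of S_3.
def conj_class(g):
    return frozenset(compose(compose(h, g), inverse(h)) for h in elems)
classes = {conj_class(g) for g in elems}
assert len(classes) == 3                      # 3 classes, though |S_3| = 6

def chi_sgn(g):                               # sign character
    return (-1) ** sum(g[i] > g[j] for i in range(3) for j in range(i + 1, 3))
chi_triv = lambda g: 1
chi_std = lambda g: sum(g[i] == i for i in range(3)) - 1   # 2-dim irrep

# Characters are class functions: constant on each conjugacy class.
for c in classes:
    assert len({chi_std(g) for g in c}) == 1

# Non-invertible fusion: chi_std * chi_std = chi_triv + chi_sgn + chi_std.
for g in elems:
    assert chi_std(g) ** 2 == chi_triv(g) + chi_sgn(g) + chi_std(g)
```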
The traces of representations amount to all characters, and the space of characters is ℂ-linearly spanned by simple characters.
The characters from 1-dim representations factor through H_1(Y;ℤ), the Abelianization of π_1Y, and thus can be constructed via the universal invertible form (<ref>).
The characters from higher-dim representations, especially the irreducible ones, go beyond Eq. (<ref>).
For the fermionic case, a super 1-representation of Y is just a super representation of π_1(Y), which is further a pair of ordinary representations (B,F) on the bosonic and the fermionic sectors, respectively.
There are two spin structures on S^1, 0 and 1, corresponding to the spin bordism group Ω^_1=_2.
Then a similar analysis to the bosonic case shows that the S^1 fermionic topological functional for (g,ε)∈[S^1,Y]×Ω^_1 is given by [recall that g represents a conjugacy class in π_1Y]
(g,0) = _B(g) + _F(g) ,
(g,1) = _B(g) - _F(g) .
We thus learned that S^1 fermionic topological functionals are virtual characters of π_1Y.
They are -linearly spanned by simple characters rather than -linearly.
The virtual characters from 1-dim representations factor through the super integral homology 𝕊ℤ_1(Y) (see Sec. <ref>) and thus can also be constructed via the universal invertible form (<ref>).
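As a quick toy illustration (our own minimal example): take π_1Y ≃ ℤ_2 with B the trivial and F the sign representation, so that
Z(g,0) = 1 + sgn(g) , Z(g,1) = 1 - sgn(g) .
On either spin structure the functional is twice a projector onto one holonomy sector and annihilates the other; it therefore admits no inverse, even though B and F separately define invertible functionals.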
Tremendous examples of higher-dim topological functionals will be discussed in Sec. <ref>.
§.§ Cohomology with TQFT coefficients
As we mentioned in Sec. <ref>, a crucial feature of solitonic symmetry is its independence of details of the system such as the action and the ambient spacetime.
It is just the algebra of topological functionals and is determined by the target space Y only.
Formally, it gives homotopy-invariant contravariant functors on the topological space Y.
Thus the algebraic structure of solitonic symmetry can be interpreted as a cohomology theory on Y, in a vastly generalized sense.
§.§.§ Non-invertible: (Super) solitonic cohomology
Given two TQFTs Z_1 and Z_2, Z_1(-)⊗Z_2(-) also defines a TQFT, denoted by Z_1⊗Z_2.
We can transfer the fusion of two topological functionals to the fusion of the two TQFTs beneath.
To see ⊗ between TQFTs, instead of looking at each individual TQFT, we should consider the functor (∞,n)-category containing all n-representations[
One may first come up with
𝖥𝗎𝗇^⊗ ( 𝖡𝗈𝗋𝖽_n^fr(Y) , Σ^n-1𝖵𝖾𝖼𝗍^fd ) and 𝖥𝗎𝗇^⊗ ( 𝖡𝗈𝗋𝖽_n^fr(Y) , Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd ) .
According to the cobordism hypothesis, they are the maximal ∞-groupoids (in essence n-groupoids) inside 𝖱𝖾𝗉^n(Y) and 𝗌𝖱𝖾𝗉^n(Y), respectively.
They have too few morphisms to support the rich structures we shall discuss shortly.
].
It is actually an n-category because of the n-category nature of the codomain.
The n-th solitonic cohomology [resp. super solitonic cohomology] of a finite space Y is a symmetric multi-fusion n-category,
𝖱𝖾𝗉^n(Y) ≡ 𝖥𝗎𝗇 ( Y, Σ^n-1𝖵𝖾𝖼𝗍^fd ) , [resp. 𝗌𝖱𝖾𝗉^n(Y) ≡ 𝖥𝗎𝗇 ( Y, Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd )].
They are symmetric fusion n-categories when Y is further path-connected.
In this case, they can be understood as higher-representation higher-categories of the ∞-group Ω Y.
In particular, the n-category nature of the codomains suggests the following.
* When Y is path-connected, 𝖱𝖾𝗉^1(Y) [resp. 𝗌𝖱𝖾𝗉^1(Y)] is equivalent to the representation (resp. super-representation) category of the group π_1Y.
* When Y is path-connected, 𝖱𝖾𝗉^n(Y) [resp. 𝗌𝖱𝖾𝗉^n(Y)] is equivalent to the representation (resp. super-representation) n-category of the n-group Ωπ_≤ nY.
The 1-dim case reproduces the discussion on 1-dim topological functionals at the end of Sec. <ref>.
The higher-group representation higher-categories have caught vast attention in recent literature on generalized symmetry, see e.g. Refs. <cit.>.
When Y is not path-connected, the solitonic cohomologies are just the Cartesian product of solitonic cohomologies of each path component.
The fusion monoid of bosonic (resp. fermionic) topological functionals can be equated with the fusion monoid of the equivalence classes of objects in 𝖱𝖾𝗉^n(Y) [resp. 𝗌𝖱𝖾𝗉^n(Y)], which gives what we have been pursuing.
However, we should not quickly throw away the far richer structures contained in 𝖱𝖾𝗉^n(Y) and 𝗌𝖱𝖾𝗉^n(Y) than their mere fusion monoids.
To see their significance, let us analyze the physical meaning of morphisms in 𝖱𝖾𝗉^n(Y) and 𝗌𝖱𝖾𝗉^n(Y).
Here 1-morphisms are natural transformations between n-representations.
We can readily note that the natural endo-transformations of the trivial n-representation are simply functors from Y to Σ^n-2𝖵𝖾𝖼𝗍^fd and Σ^n-2𝗌𝖵𝖾𝖼𝗍^fd, respectively.
Namely, we arrive at the following relations between different n.
Ω𝖱𝖾𝗉^n(Y)≃𝖱𝖾𝗉^n-1(Y) and Ω𝗌𝖱𝖾𝗉^n(Y)≃𝗌𝖱𝖾𝗉^n-1(Y).
This result hints at the physical meaning of other morphisms in 𝖱𝖾𝗉^n(Y) and 𝗌𝖱𝖾𝗉^n(Y):
* 1-morphisms are (n-1)-dim topological interfaces between n-dim TQFTs.
* (p+1)-morphisms are (n-p-1)-dim topological interfaces between (n-p)-dim topological interfaces.
Topological functionals, which are auxiliary-TQFT partition functions defined on proper closed manifolds, cannot capture these richer data, which concern auxiliary TQFTs themselves and inhabit networks made by non-closed manifolds via connections and junctions.
In recent literature on generalized symmetry, people have recognized the significance of such networks of non-closed topological operators as a formulation of background gauge fields.
It starts to become customary to recognize the algebraic structure of such networks as the total generalized symmetry of a theory given that it contains literally all information about a symmetry, not just the fusion rule.
We have no reason not to follow this custom.
Consider a d-dim bosonic (resp. fermionic) theory defined by a path integral with finite target space Y.
* The total solitonic symmetry is described by 𝖱𝖾𝗉^d(Y) [resp. 𝗌𝖱𝖾𝗉^d(Y)], a symmetric fusion d-category.
* For -1≤ p≤ d-1, the (≥p)-form solitonic symmetry is described by 𝖱𝖾𝗉^d-p-1(Y) [resp. 𝗌𝖱𝖾𝗉^d-p-1(Y)], a symmetric fusion (d-p-1)-category.
* For -1≤ p≤ d-1, the p-form solitonic symmetry is described by the fusion monoid of 𝖱𝖾𝗉^d-p-1(Y) [resp. 𝗌𝖱𝖾𝗉^d-p-1(Y)], a commutative rig.
Note that we also include (-1)-form symmetry in the total solitonic symmetry.
The result here echoes recent progresses on categorical generalized symmetry.
We may understand 𝖱𝖾𝗉^∙(-) and 𝗌𝖱𝖾𝗉^∙(-) as non-invertible generalizations of cohomology theories.
We may view the collections,
{Σ^n-1𝖵𝖾𝖼𝗍^fd}_n∈ and {Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd}_n∈,
as non-invertible generalizations of spectra.
We shall shortly see in Sec. <ref> which orthodox spectra they generalize.
The coefficients of these non-invertible cohomologies are non-enriched fully-extended TQFTs:
𝖱𝖾𝗉^∙ ≡ 𝖱𝖾𝗉^∙({*}) ≃ Σ^∙-1𝖵𝖾𝖼𝗍^fd ,
𝗌𝖱𝖾𝗉^∙ ≡ 𝗌𝖱𝖾𝗉^∙({*}) ≃ Σ^∙-1𝗌𝖵𝖾𝖼𝗍^fd .
𝖱𝖾𝗉^∙(-) and 𝗌𝖱𝖾𝗉^∙(-) are indeed homotopy-invariant contravariant functors from topological spaces because a map Y→ Z induces pullbacks 𝖱𝖾𝗉^∙(Z)→𝖱𝖾𝗉^∙(Y) and 𝗌𝖱𝖾𝗉^∙(Z)→𝗌𝖱𝖾𝗉^∙(Y) simply by their definitions.
When Y is not finite, 𝖱𝖾𝗉^∙(Y) and 𝗌𝖱𝖾𝗉^∙(Y) contain too much superfluous data to capture the algebraic structure of solitonic symmetry.
For example, recall the discussion about 1-dim topological functionals at the end of Sec. <ref>.
Recall that 𝖱𝖾𝗉^1(Y) and 𝗌𝖱𝖾𝗉^1(Y) are simply representation categories of π_1Y.
If we allowed π_1Y to be infinite like ℤ or SL(2,ℤ), non-semisimple representations would appear since the Maschke theorem does not apply.
They are not unitarizable and produce superfluous duplicated topological functionals.
There are also semisimple non-unitarizable representations whose S^1 partition functions are not bounded.
It is natural to expect that only unitarizable representations capture the solitonic symmetry.
We do not attempt a generalization of the above unitarizability condition to higher-dim general cases in this paper.
Instead, as we discussed at the end of Sec. <ref>, we shall just focus on finite homotopy quotients Z of Y and discuss the according solitonic subsymmetries that can be faithfully described by 𝖱𝖾𝗉^∙(Z) or 𝗌𝖱𝖾𝗉^∙(Z).
The collection of them for all different choices of Z, together with natural functors between them and so on, form a diagram in the (n+1)-category of symmetric multi-fusion n-categories.
We expect that the colimit of this diagram exists and believe that this colimit gives an almost complete approximation of the continuous solitonic symmetry for Y.
Furthermore, we expect that the continuous solitonic symmetry may be constructed as a generalized Cauchy completion of this colimit, provided we can cook up a well-behaved notion of topologized/uniformized monoidal n-categories[
The prototype of our anticipation is just the Cauchy completion of ℚ/ℤ to U(1).
It is possible to prescribe a topology on ℚ/ℤ through the colimit construction ℚ/ℤ ≃ ⋃_n∈ℕℤ_n.
Topological groups are naturally uniformizable and thus we can take the Cauchy completion of ℚ/ℤ to obtain U(1).
].
§.§.§ Invertible: (Super) unitary cohomology
As the first application, let us find the invertible subsymmetry of the generically non-invertible solitonic symmetry.
Invertible topological functionals are the partition functions of invertible fully-extended TQFTs.
A fully-extended TQFT Z: 𝖡𝗈𝗋𝖽^Γ_n → 𝖣_n is said to be invertible if we can find another fully-extended TQFT Z^-1: 𝖡𝗈𝗋𝖽^Γ_n → 𝖣_n such that Z⊗Z^-1 ≃ 1, the trivial TQFT.
Therefore, Z must factor through the following commutative diagram (see the discussion around Sec. 6.2 of Ref. <cit.>):
𝖡𝗈𝗋𝖽^Γ_n Z⟶ 𝖣_n
↓ ↑
|𝖡𝗈𝗋𝖽^Γ_n| Z^×⟶ 𝖣_n^×
Here, the right vertical arrow is the inclusion 𝖣_n^× ↪ 𝖣_n, where 𝖣_n^× is the maximal Picard ∞-groupoid inside 𝖣_n, and a Picard ∞-groupoid means an invertible symmetric monoidal ∞-groupoid (see Appendix A.4 of Ref. <cit.>).
And |𝖡𝗈𝗋𝖽^Γ_n| is the ∞-groupoid completion of 𝖡𝗈𝗋𝖽^Γ_n, which turns out to have invertible objects only and becomes a Picard ∞-groupoid.
Thus Z^× is a symmetric monoidal functor between two Picard ∞-groupoids.
According to Philosophy <ref>, a Picard ∞-groupoid is an infinite loop space, i.e. the 0-space of a spectrum.
Also, Z^× is an infinite loop map,
and the fusion between Z_1^× and Z_2^× is induced by loop concatenation.
Therefore, classifying invertible TQFTs reduces to classifying infinite loop maps between two infinite loop spaces, which further reduces to classifying spectrum maps between two spectra.
Hence the treatment of invertible TQFTs belongs to the realm of stable homotopy theory.
The Galatius–Madsen–Tillmann–Weiss theorem <cit.>, a theorem derived out of the cobordism hypothesis <cit.>, asserts that |𝖡𝗈𝗋𝖽^Γ_n| is weakly homotopy equivalent to the 0-space of the Madsen–Tillmann spectrum Σ^n𝕄𝕋Γ (see also <cit.>).
In the case of our interest, |𝖡𝗈𝗋𝖽^fr_n(Y)| is independent of n and is the free infinite loop space generated by Y_+ ≡ Y⊔{*}, i.e. Y with an extra base point.
Namely,
|𝖡𝗈𝗋𝖽^fr_n(Y)| ≃ colim_q→∞ Ω^qΣ^q Y_+ .
It is the 0-space of the suspension spectrum of Y_+.
The codomain is more complicated due to the complexity of the higher-categorical Karoubi completion.
We thus leave the full analysis of the maximal invertible subsymmetry to future works.
Here we just focus on a sufficiently interesting part we can obtain immediately by noticing
Ω^n-1⟨Σ^n-1𝖣⟩ ≃ ⟨Ω^n-1Σ^n-1𝖣⟩ ≃ 𝖣^× .
In general, the spectrum {⟨Σ^n-1𝖣⟩}_n∈ℕ may have a bad connectivity, i.e., the Karoubi-completion procedure may add new invertible objects and make many ⟨Σ^n-1𝖣⟩ not path-connected.
However, here we shall neglect such contributions from the Karoubi-completion procedure.
In other words, we shall consider the spectrum
{B^n-1⟨𝖣⟩}_n∈ℕ ,
which is the (-1)-connective cover of the spectrum {⟨Σ^n-1𝖣⟩}_n∈ℕ.
We thus obtain at least part of the maximal invertible subsymmetry[
For the bosonic case, we speculate that our treatment might be precise, i.e., ⟨Σ^n-1𝖵𝖾𝖼𝗍^fd⟩ ≃ B^nℂ^×.
For the fermionic case, we speculate that there is one more non-connected space, π_0⟨Σ^2𝗌𝖵𝖾𝖼𝗍^fd⟩ ≃ ℤ_2, and the spectrum is the ℂ^×-dual of the second Postnikov truncation of the sphere spectrum 𝕊.
We shall leave the verification of our speculations to future works.
].
Let us start from the bosonic case.
Only the 1-dim vector space is invertible in 𝖵𝖾𝖼𝗍^fd, and
its invertible endomorphisms constitute the group ℂ^×.
Therefore, we have
B^n-1⟨𝖵𝖾𝖼𝗍^fd⟩ ≃ B^nℂ^×_δ .
When ambiguity may arise, we use the subscript δ to indicate the discrete topology.
These infinite loop spaces assemble to form the Eilenberg–MacLane spectrum ℍℂ^×.
We then arrive at the following theorem:
The fusion monoid of solitonic cohomology 𝖱𝖾𝗉^∙(Y) contains a group (ℍℂ^×)^∙(Y) ≃ H^∙(Y;ℂ^×).
We now turn to the more interesting fermionic case. 𝗌𝖵𝖾𝖼𝗍^fd has two classes of invertible objects, the bosonic and the fermionic 1-dim vector spaces.
Their fusion monoid is ℤ_2.
The invertible endomorphisms of each class constitute ℂ^×.
The symmetric monoidal structure on 𝗌𝖵𝖾𝖼𝗍^fd, especially the Koszul sign rule (<ref>), positions the infinite loop space ⟨𝗌𝖵𝖾𝖼𝗍^fd⟩ into a Puppe sequence,
⋯⟶ B^nℂ^×_δ ⟶ B^n-1⟨𝗌𝖵𝖾𝖼𝗍^fd⟩ ⟶ B^n-1ℤ_2 ρ∘Sq^2⟶ B^n+1ℂ^×_δ ⟶⋯ ,
classified by the stable cohomology operation ρ∘Sq^2.[The direct determination of ρ∘Sq^2 requires evaluating the E_∞-structure of ⟨𝗌𝖵𝖾𝖼𝗍^fd⟩.
The Koszul sign rule should prescribe an E_∞-structure that leads to a nontrivial stable cohomology operation.
We note however that ρ∘Sq^2 is the only existing nontrivial stable cohomology operation from ℍℤ_2 to Σ^2ℍℂ^×.
Here Sq^2 is the second Steenrod square and ρ is the change-of-coefficient for the canonical inclusion ℤ_2→ℂ^×.]
These infinite loop spaces assemble to form a spectrum, and let us denote it as 𝕊ℂ.
We have thus proved the following theorem:
The fusion monoid of super solitonic cohomology 𝗌𝖱𝖾𝗉^∙(Y) contains a group 𝕊ℂ^∙(Y), where 𝕊ℂ is defined via Eq. (<ref>).
For a finite target space Y, these theorems tell us that there are invertible (d-n-1)-form solitonic symmetries given by H^n(Y;ℂ^×) in a d-dim bosonic theory and 𝕊ℂ^n(Y) in a d-dim fermionic theory.
We can further ask what higher-group they make up, which requires us to consider the entire map spectrum Map(𝕐_+,ℍℂ^×) or Map(𝕐_+,𝕊ℂ), rather than merely its π_0.
We again leave such analysis to future works.
When we focus on finite spaces only, many other cohomology theories can also provide the same results as above.
For example, for a finite space Y we have
H^∙(Y;ℂ^×) ≃ H^∙(Y;U(1)) ≃ H^∙+1(Y;ℤ) ,
𝕊ℂ^∙(Y) ≃ 𝕊𝕌^∙(Y) ≃ I_𝕊ℤ^∙+1(Y) ,
but they are different for general Y.
Here 𝕊𝕌 is defined as the spectrum obtained by substituting ℂ^× with U(1) in Eq. (<ref>) and changing ρ into the change-of-coefficient for ℤ_2→U(1).
We will explain 𝕊ℤ and I_𝕊ℤ later.
We shall call ℍU(1) the unitary spectrum and call 𝕊𝕌 the super unitary spectrum.
We conjecture that these two spectra correctly capture invertible solitonic symmetry even when we remove the finiteness condition on Y.
Consider a d-dim bosonic (resp. fermionic) theory defined by a path integral with target space Y.
* For -1≤ p≤ d-1, the p-form solitonic symmetry contains the unitary cohomology [resp. super unitary cohomology] group H^d-p-1(Y;U(1)) [resp. 𝕊𝕌^d-p-1(Y)].
(Note: This is a theorem mentioned above when Y is finite.)
This conjecture is motivated by our discrete approximation discussed at the end of Sec. <ref> and Sec. <ref>.
They are also consistent with physicists' long experience with invertible θ-angles and the universal invertible construction (<ref>).
Before concluding this section, let us unpack these two cohomology theories. ℍU(1) is the U(1)-dual of ℍℤ and its universal coefficient theorem takes the naive form,
H^∙(-;U(1)) ≃ Hom(H_∙(-;ℤ),U(1)) .
As for the fermionic case, the modified version of the Puppe sequence (<ref>) leads to a long exact sequence[
This is also the Atiyah–Hirzebruch spectral sequence for 𝕊𝕌^∙(-).
Namely, d_2:E^∙,1_2→ E^∙+2,0_2=ρ∘Sq^2.
],
⋯⟶ H^∙(-;U(1)) ⟶ 𝕊𝕌^∙(-) ⟶ H^∙-1(-;ℤ_2) ρ∘Sq^2⟶ H^∙+1(-;U(1)) ⟶⋯ .
𝕊𝕌 is the U(1)-dual of 𝕊ℤ, which we call the super integral spectrum.
Namely,
𝕊𝕌^∙(-) ≃ Hom(𝕊ℤ_∙(-),U(1)) .
𝕊ℤ can be defined as the first Postnikov truncation of the sphere spectrum 𝕊 or the Thom spectrum 𝕄Spin.
Namely, when Y is (n-1)-connected, we have
𝕊ℤ_∙(Y) ≃ π^s_∙(Y) ≃ Ω^Spin_∙(Y) , for ∙ ≤ n+1 .
Nonzero homotopy groups of the spectra mentioned here are summarized as follows:
𝔼      | ℍℤ | ℍU(1) | 𝕊ℤ  | 𝕊𝕌
𝔼_1   |      |          | ℤ_2 |
𝔼_0   | ℤ   | U(1)    | ℤ    | U(1)
𝔼_-1  |      |          |       | ℤ_2
As a final remark, if we planned to classify gapped phases, we would take the other (∞,1)-completion at the beginning of Sec. <ref>.
Then we would reach spectra Σℍℤ and ΣI_𝕊ℤ instead, which give the last cohomologies in Eqs. (<ref>) and (<ref>).
Here I_𝕊ℤ denotes the Anderson dual of 𝕊ℤ and is the spectrum studied by Freed <cit.> and
Gu–Wen <cit.>.
§ NON-INVERTIBLE STRUCTURE BEYOND HOMOTOPY GROUPS
Following Ansatz <ref> and Conjecture <ref>, we have studied the algebraic structure of solitonic symmetry in Sec. <ref>.
These results are quite formal and we would like to make some down-to-earth illustrations.
For this purpose, we adopt the following angle of looking at solitonic symmetry: what role does the conventional wisdom Hom(π_∙-,U(1)) (see Sec. <ref>) play in the general solitonic symmetry?
In this section, we focus on a path-connected target space Y since different path components correspond just to different universes[
When we are interested in the relationship between different universes, it is also useful to render multiple path components.
In such a case, we can consider a point-like topological functional that specifies one of the path-connected components, i.e., a local operator 1_i(x) that takes 1 if the field at x is valued in i∈π_0Y and takes 0 otherwise.
Due to the continuity of the field, this operator is indeed topological.
It generates the (d-1)-form solitonic symmetry,
𝖱𝖾𝗉^0(Y) ≃ 𝗌𝖱𝖾𝗉^0(Y) ≃ {π_0Y → ℂ} ,
which is non-invertible.
The state space decomposes into different sectors, i.e. universes, even at finite volumes <cit.>.
Interfaces between different universes, i.e. (d-1)-dim solitonic defects, are the charged objects of the (d-1)-form solitonic symmetry.
].
The conventional wisdom classifies topological charges by homotopy groupsπ_∙-.
From the perspective of topological functionals, the conventional wisdom concerns topological functionals only on spheres, and solitonic symmetry has the structure Hom(π_∙-,U(1)).
Due to Alexander's trick, a sphere necessarily has
π_0Diff(S^∙,ξ)≃ 0
for an orientation ξ.
Thus spherical topological functionals do not suffer from the identity problem discussed in Sec. <ref>.
Therefore, Hom(π_∙-,U(1)) should not be regarded as just wrong.
Instead, it should be rectified into non-invertible symmetry somehow.
§.§ Rectification vs. condensation
Let us speculate the structure of 𝖱𝖾𝗉^n(Y) and 𝗌𝖱𝖾𝗉^n(Y) inductively [we use 𝖱𝖾𝗉^n(Y) to illustrate the idea].
Namely, we suppose that we have already understood 𝖱𝖾𝗉^n-1(Y) ≃ 𝖱𝖾𝗉^n-1(Y_n-1), and we now would like to understand 𝖱𝖾𝗉^n(Y) ≃ 𝖱𝖾𝗉^n(Y_n).
Here Y_∙ denotes the ∙-th Postnikov truncation of Y.
The n-th floor of the Postnikov tower is a fibration
B^nπ_nY → Y_n → Y_n-1 .
This fibration allows us to complete our mission in two steps:
𝖱𝖾𝗉^n-1(Y) 1⟹ 𝖱𝖾𝗉^n(Y_n-1) 2⟹ 𝖱𝖾𝗉^n(Y) .
In the first step, we need to construct n-dim topological functionals for Y_n-1 from lower-dim topological functionals for Y_n-1.
Because Y_n-1 is (n-1)-aspherical, i.e., it has no n-dim homotopical data at all, the n-dim topological functionals can be obtained via condensation.
Physically, following Ref. <cit.>, condensation can be formulated via higher-gauging the subsymmetries of solitonic symmetry on the n-dim operator manifold only.
Mathematically, following Ref. <cit.>, condensation corresponds to formulating the Karoubi completion.
Thus we expect
𝖱𝖾𝗉^n(Y_n-1) ≃ Σ𝖱𝖾𝗉^n-1(Y) .
From the higher-gauging procedure, we can see that condensation relies on nontrivial lower-dim cycles on the operator manifold.
Therefore, the n-dim topological functionals obtained by condensation must take trivial values on S^n.
In the second step, the Postnikov fibration (<ref>) induces an injective functor
Σ𝖱𝖾𝗉^n-1(Y) → 𝖱𝖾𝗉^n(Y) .
What we need to do is to clarify what new objects are added in this step.
The homotopy fiber of the fibration (<ref>) points out the answer: the topological functionals that are nontrivial on S^n.
When the fibration (<ref>) is a trivial product, we just need to add Hom(π_nY,U(1)) as suggested by the conventional wisdom.
In this case, Hom(π_nY,U(1)) correctly captures solitonic symmetry.
However, when the fibration (<ref>) is nontrivial, the conventional wisdom must be rectified and the fusion rule of spherical topological functionals becomes non-invertible.
This non-invertible rectification of Hom(π_nY,U(1)) is of our primary interest.
Using the invertible solitonic subsymmetry we discussed in Sec. <ref>, we can measure the rectification of Hom(π_∙-,U(1)) by
(generalized) Hurewicz maps,
(bosonic) π_∙- h_b⟶ H_∙(-;ℤ) ,
(fermionic) π_∙- h_f⟶ 𝕊ℤ_∙(-) .
Since these homology groups describe the invertible topological charges, the rectification of conventional wisdom is measured by the non-injectivity of Hurewicz maps.
* Two elements in π_∙- differing by an element in ker h_b or ker h_f must share the same invertible topological charge.
* The image of the dual Hurewicz map H^∙(-;U(1))→Hom(π_∙-,U(1)) or 𝕊𝕌^∙(-)→Hom(π_∙-,U(1)) characterizes the survivors in the conventional wisdom as invertible operators.
Recall that the invertible solitonic symmetry we found in Sec. <ref> is probably not maximal.
Thus we may overestimate the rectification and regard some invertible operators as non-invertible.
Nevertheless, when Y is (n-1)-connected, such overestimation does not happen for dimension ≤ n+1 because our spectra are the (-1)-connective covers of the authentic spectra.
Besides, the non-rectified operators we find should be truly invertible.
§.§ Examples of rectification
The failure of Hurewicz maps' being injective originates from the non-Abelianness in the homotopy structure of the target space Y.
Namely, we say Y is Abelian if
all homotopy groups are Abelian and Y ≃ ∏_n=1^∞ B^nπ_nY .
Hurewicz maps on Abelian spaces are always injective, and thus the conventional wisdom Hom(π_∙-,U(1)) is not rectified.
However, a general topological space is far from being Abelian and its non-Abelianness is described by its Postnikov tower.
§.§.§ Spherical rectification
First of all, the homotopy classes of field configurations on a sphere are in general not the homotopy group, i.e., π_∙Y ≄ [S^∙,Y].
This comes from a distinction between homotopies and based homotopies.
Let us write based homotopy classes of based maps between two based spaces as [-,-]_*.
Homotopy groups are precisely π_∙Y ≡ [S^∙,Y]_*.
Through the homotopy extension property, π_1Y always naturally acts on [-,Y]_*.
Then [-,Y] is precisely the orbit space of this π_1Y-action, i.e. (see Prop. 4A.2 of Ref. <cit.>)
[-,Y] ≃ [-,Y]_*/π_1Y-action .
On homotopy groups π_nY, the π_1Y-action comprises automorphisms of π_nY.
As long as [S^∙,Y] ≄ π_∙Y, the conventional wisdom Hom(π_∙-,U(1)) is rectified.
In the simplest case of ∙=1, π_1Y acts on itself as inner automorphisms and thus
[S^1,Y] ≃ {conjugacy classes of π_1Y} .
We have already seen this around the end of Sec. <ref>.
The Hurewicz map π_1Y→H_1(Y;ℤ)≃𝕊ℤ_1(Y) is just the Abelianization.
Therefore, we have
H^1(Y;U(1)) ≃ Hom(π_1Y,U(1)) , 𝕊𝕌^1(Y) ≃ Hom(π_1Y,U(1))×ℤ_2 ,
which comprise the 1-dim (super-)representations of π_1Y.
The higher-dim representations of π_1Y constitute the non-invertible objects in 𝖱𝖾𝗉^1(Y) and 𝗌𝖱𝖾𝗉^1(Y), the [≥(d-2)]-form solitonic symmetry for a finite Y.
The π_1Y-action on higher homotopy groups[
There is an illuminating way to understand these actions.
Y's universal cover Y̅ naturally carries a free π_1Y-action.
Each π_1Y element, as a homeomorphism of Y̅, induces a group automorphism on π_n(Y̅).
These automorphisms assemble to a π_1Y-action on π_n(Y̅), which is then turned into a π_1Y-action on π_nY by the natural isomorphism π_n(Y̅)≃π_nY for n>1.
]
has no chance to be inner since the higher π_∙Y are Abelian.
To describe topological functionals on
[S^∙,Y] ≃ π_∙Y/π_1Y-action ,
we can prescribe a semidirect product from the π_1Y-action,
π_∙Y⋊π_1Y .
Then an S^∙ topological functional is a character of π_∙Y⋊π_1Y which vanishes for all group elements outside π_∙Y.
A nontrivial π_1Y-action implies the existence of such representations of dimension >1, and the according S^∙ topological functionals have a non-invertible fusion rule.
The simplest example for this effect is perhaps the S^2 topological functionals for[
The latter example Y≃ BO(2) was discussed as semi-Abelian gauge theories in Refs. <cit.>.
Thereof, the non-invertible symmetries for the electric 1-form symmetries have been discussed, while our discussion here is devoted to the magnetic one.
In both cases, the mechanism for the construction of non-invertible symmetries is essentially the same, and the two can be exchanged under a duality transformation.
We note that there is a mixed 't Hooft anomaly between the electric and the magnetic symmetry, and thus we cannot find a path-integral description where both symmetries are solitonic.]
Y ≃ ℝP^2 ≃ S^2/ℤ_2 or Y ≃ BO(2) ≃ BU(1)/ℤ_2 ,
or any other Y whose second Postnikov truncation Y_2 is given by the unique nontrivial split fibration of the following form,
B^2ℤ → Y_2 → Bℤ_2 .
In these examples, π_1Y ≃ ℤ_2 acts on π_2Y ≃ ℤ through ∙→-∙, which means
[S^2,Y] ≃ ℤ/(∙∼ -∙) ≃ ℕ .
We can find its homology to be
H_2(Y;ℤ) ≃ 0 , 𝕊ℤ_2(Y) ≃ 0 ,
which implies that the conventional wisdom Hom(π_∙-,U(1)) is completely rectified into non-invertible symmetry. S^2 topological functionals are given by characters of D_∞ ≃ ℤ⋊ℤ_2 which vanish outside ℤ.
Since there are infinitely many topological charges, according to the discrete approximation, we should consider the representations that are arbitrarily close to representations of the quotient D_n ≃ ℤ_n⋊ℤ_2 for any finite n.
Therefore, S^2 topological functionals are ℂ-linearly spanned by the characters of the 2-dim irreducible representations of D_∞, namely 2cos(ϕ∙) with ∙∈π_2Y≃ℤ, for all ϕ∈ℝ/2πℤ.
The non-invertible fusion rule follows evidently,
2cos(α∙) × 2cos(β∙) = 2cos[(α+β)∙] + 2cos[(α-β)∙] .
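A quick numerical sanity check of this fusion rule (a sketch of our own; the function name O is ours):

```python
import math

# S^2 topological functionals for this Y: restrictions to Z of the 2-dim
# irreducible characters of D_inf = Z x| Z_2, i.e. n -> 2 cos(phi * n).
def O(phi):
    return lambda n: 2.0 * math.cos(phi * n)

# Pointwise fusion decomposes: O_alpha x O_beta = O_{alpha+beta} + O_{alpha-beta}.
alpha, beta = 0.7, 1.9
for n in range(-5, 6):
    lhs = O(alpha)(n) * O(beta)(n)
    rhs = O(alpha + beta)(n) + O(alpha - beta)(n)
    assert abs(lhs - rhs) < 1e-12

# No inverse can exist: e.g. O(pi/2) vanishes on every odd charge n,
# so no functional O' satisfies O(pi/2) x O' = 1 pointwise.
assert abs(O(math.pi / 2)(3)) < 1e-12
```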
In summary, the π_1-action π_1×π_∙→π_∙ rectifies the conventional wisdom by directly assigning a non-invertible fusion rule to spherical topological functionals.
§.§.§ Non-spherical rectification
Let us now discuss subtler effects that cause non-injectivity beyondπ_1-actions.
When Y is (k-1)-connected with k>1, the Hurewicz map h_b:π_∙Y→H_∙(Y;ℤ) is an isomorphism for ∙≤ k (and the same is true for h_f), which shows that the solitonic symmetry is invertible at least up to k-dim topological functionals.
Non-invertibility can appear for larger dimensions, ∙≥ k+1.
Since there is no π_1-action here, topological functionals must be invertible as long as they inhabit spheres.
Thus the rectification happens because a spherical topological functional can also inhabit non-spheres.
In order to see how the non-invertible symmetry appears in this situation, we need to take into account the inter-dimensional effects between solitons.
As we have discussed in Sec. <ref>, for some k≤ p<n, a codimension-(p+1) solitonic defect may carry not only the p-dim solitonic charge but also the n-dim solitonic charge, and such n-dim solitonic charge needs to be measured by non-spherical topological functionals.
As the Hurewicz map of degree n is not injective, this may violate the selection rule given by Hom(π_nY,U(1)); that is, the conventional selection rule Hom(π_nY,U(1)) is valid only if solitons of higher dimensions are absent, and the selection rules are modified otherwise.
Accordingly, the n-dim solitonic symmetry has to be non-invertible to capture this intriguing selection rule.
As the present authors pointed out in the previous paper Ref. <cit.>, this effect actually happens in the 4-dim ℂP^1 sigma model.
Let us discuss it here as the example.
Homotopy groups of the simply-connected ℂP^1 ≃ S^2 at low degrees are well-known to be
π_2(ℂP^1)≃ℤ , π_3(ℂP^1)≃ℤ .
The third Postnikov truncation of ℂP^1 then sits in a principal fibration
B^3ℤ ⟶ [ℂP^1]_3 ⟶ B^2ℤ .
It is well-known
that this fibration is classified by a generator of
H^4(B^2ℤ;ℤ) ≃ ℤ .
This structure of the homotopy 3-type of ℂP^1 prescribes the following homologies,
H_2(ℂP^1;ℤ) ≃ ℤ , 𝕊ℤ_2(ℂP^1) ≃ ℤ ,
H_3(ℂP^1;ℤ) ≃ 0 , 𝕊ℤ_3(ℂP^1) ≃ ℤ_2 ,
and requires all the relevant Hurewicz maps to be epimorphic.
That the last map π_3(ℂP^1)→𝕊ℤ_3(ℂP^1) is epimorphic can be understood via the Freudenthal suspension theorem, due to the natural isomorphism 𝕊ℤ_3(ℂP^1) ≃ π_3^s(ℂP^1).
On the one hand, the conventional wisdom is not rectified at dimension 2, which gives a U(1) 1-form symmetry acting on the line solitonic defects.
The concrete topological functionals were constructed by Eq. (<ref>) in Sec. <ref>.
On the other hand, the conventional wisdom is indeed rectified at dimension 3, which gives non-invertible0-form symmetry acting on point solitonic defects.
The extent of rectification depends on the bosonic/fermionic nature of the theory.
The conventional wisdom is completely rectified into non-invertible symmetry in the bosonic theory, while a ℤ_2 part survives in the fermionic theory.
The concrete topological functionals for this ℤ_2 part were constructed by Eq. (<ref>) in Sec. <ref>.
We have also constructed non-invertible 3-dim topological functionals in Ref. <cit.> according to the discrete approximation.
Concretely, we construct the rectified operators for ℤ_2N ⊆ Hom(π_3(ℂP^1), U(1)) for each N ∈ ℕ.
In particular, a generator of ℤ_2N is rectified into the following non-invertible form:
ℋ_π/N(M_3) ≡ ∫𝒟b exp(-ı∫_M_3 N/4π b∧db + ı∫_M_3 1/2π b∧dσ^*a) ,
where σ^*a is the same as that in Eq. (<ref>), i.e., σ|_M: M ↦ ℂP^1 denotes the field map and a denotes the U(1) gauge field on S^2 associated with the Hopf fibration S^1→S^3→S^2.
In the fermionic theory, these operators (<ref>) are the building blocks of 𝗌𝖱𝖾𝗉^3(ℂP^1) beyond the condensation 𝗌𝖱𝖾𝗉^3(B^2ℤ) ≃ Σ𝗌𝖱𝖾𝗉^2(ℂP^1).
In the bosonic theory, without a spin structure, N needs to be restricted to even integers.
These even-N operators are also building blocks of 𝖱𝖾𝗉^3(ℂP^1).
When M_3 = S^3, these topological functionals recover a naive integral expression for the Hopf invariant,
ℋ_π/N(S^3) = 1/√(N) exp(ıπ/N ∫_S^3 σ^*a∧dσ^*a/4π^2) = 1/√(N) exp(ıπ/N ∙) ,
for ∙ ∈ ℤ ≃ π_3(ℂP^1),
which indeed echoes the conventional wisdom.
The non-invertible feature of Eq. (<ref>) becomes transparent by evaluating it around the line solitonic defect, which is also the charged object of the U(1) 1-form solitonic symmetry.
The line solitonic defect is defined by setting the boundary condition for the ℂP^1 field on the normal sphere bundle around S^1, which is nothing but S^2×S^1.
Based on the structure of [S^2×S^1, S^2] described by Eq. (<ref>), for (m,ℓ) ∈ [S^2×S^1, S^2], we can evaluate Eq. (<ref>) to obtain
ℋ_π/N(S^2×S^1) =
exp(ıπℓ/N) ,  m ≡ 0 mod N ,
0 ,  m ≢ 0 mod N .
It does not lift the g-degeneracy discussed below Eq. (<ref>), as we expected in Sec. <ref>.
We see that the topological functional becomes nonzero only if the level N divides the 1-form topological charge m, which clarifies that the presence of line solitonic defects causes the non-invertibility of the 0-form solitonic symmetry.
Also, we see that different manifolds can support coherent topological charges, in that ℓ is correlated with π_3(ℂP^1).
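To make the non-invertibility concrete, here is a small illustrative sketch (ours; the function H is our own naming) that tabulates the values above and checks that fusing the operator with its conjugate yields a projector onto m ≡ 0 mod N rather than the identity:

```python
# Illustrative sketch (ours): values of H_{pi/N} on S^2 x S^1 for topological
# charges (m, l); fusing with the conjugate operator gives a projector onto
# m = 0 mod N instead of the identity -- the hallmark of non-invertibility.
import cmath

def H(N, m, l):
    """Value of the topological functional on S^2 x S^1 (per the cases above)."""
    return cmath.exp(1j * cmath.pi * l / N) if m % N == 0 else 0.0

N = 3
for m in range(2 * N):
    for l in range(4):
        fused = H(N, m, l) * H(N, m, l).conjugate()   # H x H-bar
        expected = 1.0 if m % N == 0 else 0.0          # projector, not 1
        assert abs(fused - expected) < 1e-12
print(f"H x H-bar projects onto m = 0 mod {N}: non-invertible")
```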
§.§ Examples of condensation
Finally, we describe some examples of topological functionals that are trivial on spheres, i.e., those that lie in the image of Eq. (<ref>).
In fact, Sec. <ref> already implies examples when we try to fuse some operators there.
However, here we shall present several cleaner examples where the conventional wisdom totally vanishes.
We focus on the simplest 2-dim case, so let us suppose M is a closed 2-dim manifold.
Prototypical bosonic examples are the 2-dim topological functionals for two ℤ_n gauge fields s_1,2 ∈ H^1(M;ℤ_n), or for two S^1-valued scalars ϕ_1,2: M ↦ ℝ/2πℤ.
The former case has Y ≃ (Bℤ_n)^2 while the latter case has Y ≃ T^2.
We always see that π_2Y ≃ 0 but
H^2((Bℤ_n)^2; U(1)) ≃ ℤ_n and H^2(T^2; U(1)) ≃ U(1) .
The corresponding 2-dim bosonic topological functionals inhabit Riemann surfaces M.
They are trivial on M ≃ S^2 but non-trivial on higher-genus surfaces such as M ≃ T^2.
These topological functionals can be explicitly constructed via the universal invertible form (<ref>).
For Y ≃ (Bℤ_n)^2, we have
U_k(M) ≡ exp{ı k (2π/n) ∫_M s_1 ∪ s_2} , k ∼ k+n .
For Y ≃ T^2, we have
U_θ(M) ≡ exp{ıθ ∫_M (dϕ_1/2π) ∪ (dϕ_2/2π)} , θ ∼ θ+2π .
They are apparently trivial on S^2.
They can be defined just via field configurations on several properly selected loops inside M.
Prototypical fermionic examples are the 2-dim topological functionals for a ℤ_2n gauge field s ∈ H^1(M;ℤ_2n), or for an S^1-valued scalar ϕ: M ↦ ℝ/2πℤ.
The former has Y ≃ Bℤ_2n while the latter has Y ≃ S^1.
We can readily see π_2Y ≃ H_2(Y;ℤ) ≃ 0 and find via the long exact sequence (<ref>) that
𝕊ℤ_2(Y) ≃ H_1(Y;ℤ_2) ≃ ℤ_2 .
We thus see that the corresponding topological functionals are fermionic, i.e., they inhabit spin Riemann surfaces M.
Also, they are trivial on S^2 but non-trivial on other spin Riemann surfaces.
Via the universal invertible form (<ref>), we can construct these fermionic topological functionals through the Arf invariant of spin structures.
Namely, we pick a spin structure ρ on M such that Arf(ρ) = 0, i.e., [ρ] ∈ 0 ∈ Ω^Spin_2.
Then for Y ≃ Bℤ_2n, we have
U_k(M) ≡ (-)^k Arf(ρ+s̅) , k∼ k+2 ,
where s̅ denotes the mod-2 reduction of s.
For Y ≃ S^1, we just substitute s̅ above with the mod-2 reduction of [ϕ] ∈ H^1(M;ℤ).
It is evident that this topological functional vanishes on S^2.
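For readers who want to verify the Arf bookkeeping, the following sketch (ours; it assumes the common convention that q = 1 on a cycle with the periodic, non-bounding spin structure and q = 0 on an antiperiodic one) enumerates the four quadratic refinements of the intersection form on H_1(T^2;ℤ_2) and confirms that only the PP structure is odd, so an AP-type structure indeed has Arf = 0:

```python
# Sketch (ours): the four quadratic refinements q of the mod-2 intersection
# form on H_1(T^2; Z_2), one per spin structure, and their Arf invariants.
from itertools import product

def pairing(x, y):                        # mod-2 intersection form on T^2
    return (x[0] * y[1] + x[1] * y[0]) % 2

V = list(product([0, 1], repeat=2))       # H_1(T^2; Z_2)

for qa, qb in product([0, 1], repeat=2):      # values on the two basis cycles
    q = {(0, 0): 0, (1, 0): qa, (0, 1): qb,
         (1, 1): (qa + qb + 1) % 2}           # forced by the refinement law
    for x, y in product(V, V):                # check q(x+y)=q(x)+q(y)+<x,y>
        s = ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)
        assert q[s] == (q[x] + q[y] + pairing(x, y)) % 2
    arf = qa * qb                             # genus-1 formula Arf = q(a) q(b)
    print((qa, qb), "Arf =", arf)             # Arf = 1 only for (1, 1) = PP
```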
We now carefully analyze the identity problem (see Sec. <ref>) of the above example.
Concretely, we consider torus topological functionals for an S^1-valued scalar or a ℤ_n gauge field.
Recall that the target space takes the form Y ≃ BR for R ≃ ℤ in the former case and for R ≃ ℤ_n in the latter case.
We readily recognize
[T^2,BR]≃ H^1(T^2;R)≃ R^2 .
The mapping class groups of T^2 are well-known, e.g.,
π_0Diff(T^2) ≃ GL(2,ℤ) ,
π_0Diff(T^2,ξ) ≃ SL(2,ℤ) ,
π_0Diff(T^2,ξ,ρ) ≃ p^-1(ℤ_2) , for ℤ_2 ⊆ SL(2,ℤ_2) and the reduction map p: SL(2,ℤ) → SL(2,ℤ_2) ,
where ξ is an orientation and ρ is still a spin structure such that Arf(ρ) = 0.
Without loss of generality, ρ can be taken as AP.
Since [T^2,BR] is just a 2-dim lattice, these mapping class groups act on [T^2,BR] in the defining way.
These actions are far from trivial; concretely, the criteria classifying (a,b) ∈ [T^2,BR] into its action orbit are as follows:
π_0Diff(T^2) : gcd(a,b) ,
π_0Diff(T^2,ξ) : gcd(a,b) ,
π_0Diff(T^2,ξ,ρ) : gcd(a,b) and a mod 2 ,
where we stipulate gcd(0,0) ≡ 0.
We see that a spin structure lifts the degeneracy a little bit when R has even characteristic.
This tiny degeneracy lifting is exactly realized by the invertible topological functional (<ref>).
Despite the huge degeneracy, Eq. (<ref>) still gives tremendously many inequivalent topological charges, and they have to be distinguished by non-invertible topological functionals.
They are examples of non-invertible condensations.
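The gcd criteria above are easy to test by brute force. The following sketch (our own check, not from the paper) reduces the SL(2,ℤ)-action mod n, where it factors through SL(2,ℤ_n) generated by S and T, and confirms that gcd(a,b) (taken together with n) is a complete orbit invariant on [T^2,Bℤ_n] ≃ ℤ_n^2 for small n:

```python
# Brute-force orbit check (ours) of the gcd classification stated above.
from math import gcd

def orbits(n):
    S, T = [[0, -1], [1, 0]], [[1, 1], [0, 1]]    # generators of SL(2, Z)
    def act(M, v):
        return ((M[0][0] * v[0] + M[0][1] * v[1]) % n,
                (M[1][0] * v[0] + M[1][1] * v[1]) % n)
    seen, orbs = set(), []
    for a in range(n):
        for b in range(n):
            if (a, b) in seen:
                continue
            orbit, frontier = {(a, b)}, [(a, b)]
            while frontier:                        # BFS closure under S, T
                v = frontier.pop()
                for M in (S, T):
                    w = act(M, v)
                    if w not in orbit:
                        orbit.add(w)
                        frontier.append(w)
            seen |= orbit
            orbs.append(orbit)
    return orbs

for n in (2, 3, 4, 6):
    labels = []
    for orb in orbits(n):
        labs = {gcd(gcd(a, b), n) for (a, b) in orb}
        assert len(labs) == 1                # gcd is constant on each orbit...
        labels.append(labs.pop())
    assert len(labels) == len(set(labels))   # ...and separates distinct orbits
print("gcd classifies the mapping-class-group orbits on Z_n^2")
```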
§ DISCUSSION
In this paper, we discussed the general structure of non-invertible solitonic symmetry by attempting a precise mathematical formulation (Sec. <ref>) after a pursuit of the proper physical constraints (Sec. <ref>).
This allows us to discuss, in terms of concrete examples, how the non-invertible structure of general solitonic symmetry goes beyond the conventional wisdom of homotopy groups (Sec. <ref>).
Nevertheless, our analysis in this paper is still far from complete and should be complemented by future studies.
In this section, we are going to make some remarks and discuss the outlooks for future studies.
§.§ Remarks
First, we point out an interesting connection between the solitonic symmetry and the higher-group symmetry, which was actually already mentioned around the end of Sec. <ref>.
Assume that we have a QFT 𝒯 with an anomaly-free discrete higher-group 𝖦; then we can consider another QFT 𝒯/𝖦 by dynamically gauging the higher-group 𝖦.
This gauging procedure can be realized by adding a path integral over 𝖦 gauge fields.
The homotopy target space of this path integral is precisely the classifying space B𝖦.
Thus the theory 𝒯/𝖦 acquires the solitonic symmetry (𝗌)𝖱𝖾𝗉^∙(B𝖦).
Therefore, gauging of a non-anomalous higher-group symmetry produces a QFT with non-invertible solitonic symmetry.
In particular, this picture suggests the last property of solitonic symmetry described in this paper.
Solitonic symmetry is free of 't Hooft anomaly.
It comes from the belief that the dynamical gauging procedure is reversible.
Namely, if we could develop the proper gauging procedures of the solitonic symmetry, then it would be natural to expect that gauging (𝗌)𝖱𝖾𝗉^∙(B𝖦) in 𝒯/𝖦 produces the original theory 𝒯.
This picture echoes the ideas developed in Refs. <cit.>.
One may consider a Tannaka duality between higher-group symmetry and solitonic symmetry.
This also suggests a wide applicability of solitonic symmetry: a wide class of non-solitonic symmetries can also be described by solitonic symmetry via a properly selected virtual target space Y.
Second, we point out that QFTs with discrete higher-group symmetry also possess solitonic sectors.
Their solitonic sectors are the higher domain walls of the spontaneously broken higher-group symmetry and inhabit only non-closed manifolds (recall Footnote <ref>).
However, via the dynamical gauging discussed above, these non-closed solitonic sectors are the counterparts of our closed solitonic sectors discussed in this paper.
For example, when a discrete symmetry is spontaneously broken, there are domain walls connecting different vacua, and they are well-defined on the infinite space.
However, when we consider a finite space with the periodic boundary condition, a single domain wall is inconsistent with the boundary condition, and we need a pair of a domain wall and an anti-wall.
Even in this case, the single domain wall becomes well-defined on closed manifolds by considering the gauging of the broken symmetry or the symmetry-twisted sector.
Therefore, solitons in different theories can behave locally in the same way but have different global behaviors.
§.§ Outlooks
We present a list of outlooks based on the subtle points in the analysis about fully-extended TQFTs in the present paper.
They should also be of interest to those concerned about classifying gapped phases.
* The validity of Conjecture <ref> on the correspondence between TQFTs and their partition functions should be confirmed.
* The validity of Conjecture <ref> on the homotopy actions should be confirmed.
* The maximal invertible solitonic symmetry, i.e., the structure of the infinite loop spaces ⟨Σ^n-1𝖵𝖾𝖼𝗍^fd⟩ and ⟨Σ^n-1𝗌𝖵𝖾𝖼𝗍^fd⟩ for n>1, should be clarified.
The higher-group structure of invertible solitonic symmetry should also be clarified.
* Does the Karoubi completeness as well as the operation Σ provide the ultimate ideal solution to the codomains for fully-extended TQFTs?
As for the solitonic symmetry itself, we also have many prospects, such as exploring the relation between solitonic symmetry and other generalized symmetries, constructing explicitly more examples of topological functionals, and presenting a comprehensive classification of low-dim topological functionals.
A list of particularly intriguing questions is given below.
* A systematic treatment of continuous solitonic symmetry and infinitely many topological charges should be developed.
A proposal is to consider things like Σ^n-1𝖧𝗂𝗅𝖻^fd (c.f. Ref. <cit.>).
Another is to make the discrete approximation rigorous.
* Can we tackle solitonic symmetry from the side of topological charges?
This requires a complicated analysis on the homotopy classes of maps on manifolds, including their relations via bordisms (see Ref. <cit.>), as well as a treatment of the identity problem (see Sec. <ref>).
* It would be nice to rigorously formulate the inductive decomposition into non-spherical condensation and spherical rectification discussed in Sec. <ref>.
If these two problems can be solved, we will further strengthen our confidence in Ansatz <ref> or know how to modify it.
http://arxiv.org/abs/2307.02248v1 | 20230705124431 | Cylindrical void growth vs. grain fragmentation in FCC single crystals: CPFEM study for two types of loading conditions | ["Saketh Virupakshi", "Katarzyna Kowalczyk-Gajewska"] | cond-mat.mtrl-sci | ["cond-mat.mtrl-sci"]
Saketh Virupakshi and Katarzyna Kowalczyk-Gajewska (corresponding author, kkowalcz@ippt.pan.pl)
Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5b, 02106 Warsaw, Poland
The crystal plasticity finite element method (CPFEM) is used to investigate the coupling between cylindrical void growth or collapse and grain refinement in face-centered cubic (FCC) single crystals. A 2D plane strain model with one void is used. The effect of the initial lattice orientation, as well as the similarities and differences between stress- and strain-driven loading scenarios, is explored. To this end, boundary conditions are enforced in two different ways. The first one is based on maintaining a constant in-plane stress biaxiality via a dedicated truss element, while the second one imposes a constant displacement biaxiality factor. Uniaxial and biaxial loading cases are studied. For the uniaxial loading case a special configuration, which enforces an equivalent pattern of plastic deformation in the pristine crystal, is selected in order to investigate the mutual interactions between the evolving void and the developed lattice rotation heterogeneity. Next, biaxial loading cases are considered for three crystal orientations, one of which is not symmetric with respect to the loading directions. It is analysed how the stress or strain biaxiality factors and the initial lattice orientation influence the void evolution in terms of its size and shape. Moreover, the consequences of variations in the resulting heterogeneity of lattice rotation are studied in the context of the grain refinement phenomenon accompanying the void evolution. Scenarios that may lead to more advanced grain fragmentation are identified.
Crystal Plasticity Finite Element Method; Void Evolution; Grain Refinement
§ INTRODUCTION
Nucleation, growth, and coalescence of intra- or intergranular micro-voids is a usual scenario by which ductile metallic materials fail <cit.>. Most often, micro-voids are nucleated as a result of decohesion or fracture process of second phase precipitates. Growth of those micro-defects takes place due to diffuse plastic deformation up to the onset of coalescence when strain localizes in the ligament connecting closely spaced voids. From the mechanics point of view, for 3D stress controlled axially symmetric loading (e.g. uniaxial tension process) the void coalescence is connected with the transition from axisymmetric to uniaxial straining mode, which leads to the plastic flow localization.
After that moment the voids continue to expand, mostly towards each other, up to final ligament failure or full impingement. On the other hand, the development of strain heterogeneity around a deforming and growing void leads to microstructure changes in the material around the void. It seems that this last aspect of the mechanics of porous metallic materials has not been fully explored yet, even though this effect accompanying void evolution can have important consequences for grain refinement, as observed by <cit.>, who formulated a phenomenological model of grain fragmentation.
Beginning in the late 1960s, many experimental, theoretical, and numerical studies have been carried out to understand the mechanics of void initiation, growth, and coalescence. Initially, numerical analyses were performed using the macroscopic isotropic, nearly rate-independent elastic-plastic model of metallic material. <cit.> and <cit.> seem to be the first who applied the 2D and 3D unit cell approach, respectively, in this respect. Based on the performed studies, the Gurson yield criterion for porous metals, originally developed based on an analytical solution and a micromechanical approach, has been modified and equipped with additional tuning parameters to become the widely used GTN (Gurson-Tvergaard-Needleman) criterion <cit.>. Those initial studies revealed an important role of stress triaxiality in the void growth phenomenon.
In subsequent studies, the influence of the third invariant (Lode parameter) and material anisotropy <cit.> was observed. However, as recently concluded by <cit.>, there is not yet a universal theory of ductile failure. Moreover, as expressed in a recent review by <cit.>, there is no full agreement yet on whether the process is more strain- or stress-driven, while these two mechanical fields are strictly related by the constitutive behaviour of the virgin medium and the problem geometry (i.e. void shape and spatial distribution).
The void growth and coalescence contributing to the ductile failure is a process that usually takes place at the micro-level of polycrystalline metal, therefore it was soon understood that replacing the macroscopic plasticity model for a matrix material in numerical calculations with a more relevant continuum crystal plasticity framework may lead to a better understanding of the phenomenon. Following this observation, firstly, 2D analyses of unit cells with cylindrical voids were initiated under plane strain and in-plane strain-driven boundary conditions <cit.>.
Similar 3D analyses of spherical void growth and coalescence under strain-controlled boundary conditions were performed by <cit.>. Those boundary conditions (i.e. strain-based) were motivated by the possibility of avoiding any instability in the calculations. The main conclusion of those studies was that the influence of crystal orientation is more significant for loading cases with a small strain biaxiality or triaxiality and of secondary importance for higher strain biaxiality or triaxiality factors. Among the mentioned studies, only <cit.> provided some results related to microstructure evolution in the presence of voids. They are concerned with the texture evolution within the unit cell and the heterogeneity of deformation assessed by the misorientation angle with respect to the average orientation. It was concluded that the heterogeneity of lattice rotation is concentrated in the regions around the void.
Cylindrical void growth under plane strain with a constant in-plane stress biaxiality factor in a hexagonal close-packed (HCP) crystal was studied by <cit.>. <cit.>
analysed a 3D cell with a spherical void maintaining constant stress triaxiality; however, the applied boundary conditions were not purely stress-driven since at the same time equal values of two lateral macroscopic strains were imposed. The macroscopic stress direction was fully controlled in the 3D calculations by <cit.> for a Ni FCC single crystal, by <cit.> for an FCC austenitic stainless steel, and by <cit.> for an HCP Mg crystal. Different values of stress triaxiality and Lode parameter as well as selected crystal orientations with respect to the loading axes were analysed.
It was found that the value of the Lode parameter is more decisive concerning the void coalescence or collapse at a lower value of stress triaxiality.
However, for certain anisotropic orientations, the Lode parameter can also have a significant effect on creep strain and porosity evolution at higher stress triaxiality values <cit.>, which led to the formation of a polygonal void shape with rounded corners. Moreover, the initial crystal orientation dictates the location of maximum stress concentration. Based on such numerical studies, analogues of the GTN criterion for single crystals were formulated by <cit.> and <cit.>, using the classical multi-surface Schmid condition and the regularized Schmid law, respectively, as valid models for the bulk medium.
Let us also remark that the void growth and coalescence in the heterogeneous bulk medium described by the crystal plasticity constitutive model were also studied numerically. For example, bi-crystal unit cells were assumed in <cit.> and polycrystal unit cells in <cit.>. Among mentioned contributions, selected results concerning heterogeneity of lattice rotation were provided in <cit.>.
Unless stated otherwise, a large strain rate-dependent CP formulation with the power law for slip was used in all papers recalled above.
The goal of the present research is two-fold. First, we would like to compare and analyse strain- and stress-driven loading scenarios in the context of cylindrical void evolution in an FCC single crystal under plane strain conditions. In particular, the effect of the strain vs. stress in-plane biaxiality factor is elucidated. To the best of our knowledge, such a direct comparison has not been performed in the literature yet. Second, the mutual interactions between the void evolution and the development of lattice rotation heterogeneity, leading to grain refinement, as two competitive mechanisms of microstructure changes, are explored. Such an interplay between the two effects seems not to have been sufficiently quantified in other research.
The paper is organized as follows. After this introductory section, we present the applied crystal plasticity model and its finite element implementation in Section 2. Section 3 is devoted to the description of the numerical model of a unit cell and the boundary conditions. The main body of the paper is included in Section 4 where the results of performed numerical studies are outlined and discussed. Their presentation is divided into two parts. The first concerns four uniaxial loading cases under a special crystal configuration, which enable the observation of a clear coupling of the void growth or collapse with the grain fragmentation into subgrains. The second part of this section continues the analysis of the impact of two biaxiality factors on the void growth and overall stress-strain response for biaxial loading processes, as well as their relation to microstructure evolution. The paper is closed with conclusions.
§ CRYSTAL PLASTICITY MODEL AND ITS FE IMPLEMENTATION
§.§ Crystal plasticity constitutive theory
In this section, the key details of crystal plasticity implementation in FEM applied in the analyses are described. Model formulation and implementation follow <cit.> and <cit.>.
First, let us present the rate-dependent elastic-plastic model of the single crystal. In terms of kinematics description, the model follows classical contributions by <cit.>. The deformation gradient 𝐅 is multiplicatively decomposed into two parts:
𝐅=𝐅_e𝐅_p
where 𝐅_e and 𝐅_p denote the elastic and plastic components, respectively. The evolution of the plastic part of the deformation gradient is governed by the equation:
𝐅̇_p=𝐋̂_p𝐅_p,
where the dot over the quantity denotes its material time derivative. The plastic velocity gradient 𝐋̂_p is the sum of shears on slip systems:
𝐋̂_p=∑_r=1^Mγ̇^r𝐦^r_0⊗𝐧^r_0
with unit vectors 𝐦^r_0 and 𝐧^r_0 denoting the r-th slip system direction and plane normal defined in the initial configuration. In FCC crystals, plastic deformation occurs along the {111}⟨110⟩ family of slip systems, which contains 12 potentially active slip systems that are taken into account in the computations.
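For concreteness, the following sketch (ours, not the authors' AceGen implementation) enumerates the 12 {111}⟨110⟩ systems and assembles the Schmid tensors 𝐦^r_0⊗𝐧^r_0 entering the sum for 𝐋̂_p above:

```python
# Sketch (ours): the 12 FCC {111}<110> slip systems and their Schmid tensors.
import numpy as np

def fcc_slip_systems():
    planes = [(1, 1, 1), (-1, 1, 1), (1, -1, 1), (1, 1, -1)]
    dirs = [(1, 1, 0), (1, -1, 0), (1, 0, 1), (1, 0, -1), (0, 1, 1), (0, 1, -1)]
    systems = []
    for n in planes:
        n = np.array(n, dtype=float)
        for m in dirs:
            m = np.array(m, dtype=float)
            if abs(m @ n) < 1e-12:            # slip direction lies in the plane
                systems.append((m / np.linalg.norm(m), n / np.linalg.norm(n)))
    return systems

systems = fcc_slip_systems()
assert len(systems) == 12
schmid = [np.outer(m, n) for m, n in systems]  # m0^r (x) n0^r

def plastic_velocity_gradient(gamma_dot):
    """L_p = sum_r gamma_dot^r m0^r (x) n0^r (Eq. above)."""
    return sum(g * P for g, P in zip(gamma_dot, schmid))
```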
In the rate-dependent formulation, in order to calculate shear rates the power law <cit.> is used:
γ̇^r=v_0sign(τ^r)|τ^r/τ_c^r|^n̅
where v_0 is the material parameter, n̅ is a rate-sensitivity parameter. The resolved shear stress τ^r is the projection of the Mandel stress tensor 𝐌_e on the direction and plane of slip:
τ^r = 𝐦^r_0·𝐌_e·𝐧^r_0 , 𝐌_e = 𝐅^T_e 𝐒 𝐅^T_p = 𝐅^T_e τ 𝐅^-T_e ,
where 𝐒 is the first Piola-Kirchhoff stress and τ is the Kirchhoff stress. The Mandel stress tensor is obtained using a hyper-elastic law:
𝐌_e=2𝐂_e∂Ψ/∂𝐂_e ,
where 𝐂_e=𝐅^T_e𝐅_e is the right elastic Cauchy-Green tensor and
Ψ=1/2𝐄_e·𝕃^e·𝐄_e
is the Kirchhoff-type function of free energy density per unit volume in the reference configuration, 𝕃^e is the anisotropic stiffness tensor of single crystal and 𝐄_e=1/2(𝐂_e-1) is the elastic Lagrangian strain tensor.
The evolution of the critical value of the resolved shear stress is governed by the exponential Voce law:
τ̇_̇ċ^r = H(Γ)∑_s=1^Mh_rs| γ̇^s |,
where h_rs is the latent hardening parameter, equal to 1 for self-hardening (r=s), q_0 for latent hardening (r ≠ s) on coplanar systems (𝐧^r_0·𝐧^s_0 = 1), and q for latent hardening on non-coplanar systems (𝐧^r_0·𝐧^s_0 ≠ 1), and
H(Γ) = dτ_c(Γ)/dΓ , τ_c(Γ) = τ_0 + (τ_1 + θ_1Γ)(1 - exp(-Γθ_0/τ_1)) ,
Γ = ∫Γ̇ dt , Γ̇ = ∑_r|γ̇^r| .
The parameters of the hardening model, the elastic constants of the material, and the value of n̄ used are given in Table <ref>. The latent hardening parameters on coplanar and non-coplanar systems are taken to be equal, although in general they could differ.
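The flow rule and the Voce law above can be sketched compactly as follows; the numerical values below are illustrative placeholders only, not the values of Table <ref>:

```python
# Sketch (ours): power-law slip rate and Voce hardening (Eqs. above).
import numpy as np

v0, n_bar = 1.0e-3, 20.0               # reference slip rate, rate sensitivity
tau0, tau1, theta0, theta1 = 60.0, 45.0, 250.0, 5.0  # Voce constants (MPa), illustrative

def slip_rate(tau, tau_c):
    """gamma_dot^r = v0 sign(tau) |tau / tau_c|^n_bar."""
    return v0 * np.sign(tau) * np.abs(tau / tau_c) ** n_bar

def tau_c_voce(Gamma):
    """tau_c(Gamma) = tau0 + (tau1 + theta1*Gamma)(1 - exp(-Gamma*theta0/tau1))."""
    return tau0 + (tau1 + theta1 * Gamma) * (1.0 - np.exp(-Gamma * theta0 / tau1))

def hardening_modulus(Gamma, dG=1.0e-8):
    """H(Gamma) = d tau_c / d Gamma, approximated here by a central difference."""
    return (tau_c_voce(Gamma + dG) - tau_c_voce(Gamma - dG)) / (2.0 * dG)
```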
§.§ FE implementation
The standard procedures developed for the FE implementation of finite strain elasto-plasticity in the fully Lagrangian displacement-based setting are followed <cit.>. In particular, incremental constitutive equations have been obtained by applying the implicit backward-Euler time integration scheme and the relation (<ref>) is integrated using the exponential map,
𝐅_p(t+Δ t)=exp(Δ t𝐋̂_p)𝐅_p(t).
The implementation has been performed using the AceGen code generator <cit.>. It combines the symbolic algebra capabilities of Wolfram Mathematica with automatic differentiation and advanced techniques of expression optimization. The package enables a straightforward derivation of an algorithmically consistent tangent, which leads to a quadratic convergence rate. Computations were performed using the AceFEM package. In the calculations, 4-noded linear quadrilateral elements with 4 integration points are used. Additionally, the F-bar method <cit.> is applied in order to have a robust implementation, enabling the enforcement of nearly incompressible material behaviour in the geometrically non-linear regime. Although a 2D plane strain problem is considered, the three-dimensional nature of the crystal plasticity model, and specifically the geometry of the slip systems, is fully taken into account. At each Gauss point the material displacement gradient 𝐇 = 𝐅 - 𝐈 is assumed to have components H_i3 = H_3i = 0 (i=1,2,3), where '3' denotes the direction perpendicular to the plane.
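The exponential-map update (<ref>) amounts to a one-line matrix exponential; a minimal sketch (ours, using SciPy rather than AceGen) is:

```python
# Sketch (ours): backward-Euler/exponential-map update of F_p (Eq. above),
# assuming the plastic velocity gradient over the step is known.
import numpy as np
from scipy.linalg import expm

def update_Fp(Fp_old, Lp_hat, dt):
    """F_p(t + dt) = expm(dt * Lp_hat) F_p(t).  Because tr(Lp_hat) = 0 for
    slip-driven flow (m0 . n0 = 0), det F_p = 1 is preserved exactly."""
    return expm(dt * Lp_hat) @ Fp_old
```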
§ UNIT CELL MODEL AND BOUNDARY CONDITIONS
§.§ Cell model
A 2D plane strain unit cell with one cylindrical void is employed. A Cartesian coordinate system (x-y) is used, and the origin of the coordinate frame is placed at node OP. The initial diameter of the void is D and the square plate has a side length of L. The ratio D/L is used to define the void volume fraction: f = π/4(D/L)^2. Nodes OP, XP, and YP are used to prescribe the boundary conditions. Instead of confining the sides of the unit cell to stay planar, which can over-constrain the model and lead to the development of high stresses for some orientations, periodic boundary conditions are applied. Accordingly, the displacements of corresponding nodes on opposite sides of the unit cell in the x-y plane are connected by the periodic boundary conditions, namely
𝐮_2 - 𝐮_1 = 𝐇̅ (𝐱_2 - 𝐱_1),
where 𝐇̅ is the overall (averaged) material displacement gradient tensor of the unit cell, 𝐮_1, 𝐮_2 represent the displacement of corresponding nodes on opposite sides and 𝐱_1, 𝐱_2 represent the corresponding nodal vectors at the reference configuration.
Several studies were carried out by <cit.> on unit cells containing voids subjected to constant stress triaxiality ratio (the ratio of the mean stress to the von Mises stress) boundary conditions. On the other hand, strain-controlled boundary conditions were considered by
<cit.>. In the present study, both kinds of boundary conditions are employed for the overall loading in order to compare and quantify the influence of the strain and stress biaxiality ratios on void growth and coalescence. The way in which they are imposed is described in the next subsections.
§.§ Displacement controlled boundary conditions
For the displacement controlled boundary conditions, a displacement biaxiality factor β is set, which is defined as the ratio of the displacement in the x direction to the displacement in the y direction, namely β=u_x(XP)/u_y(YP)=const. Therefore, the following displacement boundary conditions are imposed at the reference configuration as shown in Figure <ref>:
* at node OP, u_x = u_y = 0,
* at node XP, u_x = β u(t), u_y = 0,
* at node YP, u_x = 0, u_y = u(t),
which result in the following components of the displacement gradient 𝐇̅ in Eq. (<ref>)
H̅_kl=u(t)/L[[ β 0 0; 0 1 0; 0 0 0; ]]
Note that all components of 𝐇̅ are known for this loading scenario.
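A minimal sketch (ours) of how the overall displacement gradient and the periodic tying condition (<ref>) combine for the β-controlled case:

```python
# Sketch (ours): beta-controlled overall displacement gradient and the
# periodic tying of corresponding nodes on opposite cell faces.
import numpy as np

def H_bar_beta(u_t, L, beta):
    """Overall displacement gradient for the displacement-controlled case."""
    return (u_t / L) * np.array([[beta, 0.0, 0.0],
                                 [0.0,  1.0, 0.0],
                                 [0.0,  0.0, 0.0]])

def tied_displacement(u1, x1, x2, H):
    """u2 - u1 = H (x2 - x1)  =>  displacement of the node paired with (x1, u1)."""
    return u1 + H @ (x2 - x1)
```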
For the uniaxial tension/compression case in the y-direction of the sample, considered at the beginning of the next section, the following displacement boundary conditions are imposed:
* at node OP, u_x = u_y = 0
* at node XP, u_y = 0
* at node YP, u_y = u(t),
which result in the following components of the displacement gradient 𝐇̅ in Eq. (<ref>)
H̅_kl=u(t)/L[[ ⋆ ⋆ 0; 0 1 0; 0 0 0; ]] ,
where by ⋆ we denote the unknown components of 𝐇̅. By energy minimization, this leads to an averaged Cauchy stress for which Σ_xy = Σ_xx = 0, so the stress biaxiality factor η = Σ_xx/Σ_yy = 0. Note that this case is not equivalent to the 3D uniaxial tension case since, in general, the components Σ_kz, k=x,y,z, are not necessarily zero for an anisotropic material under plane strain conditions.
§.§ In-plane stress controlled boundary conditions
For controlling the in-plane stress biaxiality factor η, a formulation based on the proposal by <cit.> is employed in the present study. A special spring element oriented in the direction of principal loading is used to regulate the displacements at nodes XP and YP in order to maintain a constant stress ratio. Node OP is fixed, and the displacement of node XP in the y direction is disabled to remove rigid motion, as shown in Fig. <ref>. The in-plane stress biaxiality η, defined as the ratio of the Cauchy stress normal components along the x and y directions, namely η = Σ_xx/Σ_yy = const, is kept constant to study the void growth. Application of the element results in an averaged Cauchy stress for the unit cell of the form,
Σ_kl=Σ_yy(t)[[ η 0 ⋆; 1 ⋆; sym. ⋆; ]] ,
where by ⋆ we denote the unknown components of Σ. Note that Σ_yy(t) is also unknown, while via the truss element the displacement u_y(YP) is imposed.
Different displacement and in-plane stress biaxiality ratios, β and η, are considered in the present study; they are compared and discussed in the next section.
§.§ Finite element geometry and mesh
Two commercial software packages are used in this study. The 2D planar model and mesh are generated using the commercial CAE software ABAQUS (version 6.13), as shown in Fig. <ref>. The mesh data are then imported into the symbolic and algebraic system Wolfram Mathematica, as specified in Section <ref>, to perform the finite element calculations and post-processing using the AceFEM package. The ratio of the void diameter to the side length in the x-y plane is taken as D/L = 0.2, leading to an initial void volume fraction of f = 0.0314.
A 2D mesh with 1168 elements of type CPE4R is employed. Mesh convergence tests were carried out on a number of unit cells with different mesh sizes. Convergence was evaluated by determining the evolution of the relative void volume fraction with the overall effective strain.
Similarly to other studies (see e.g. <cit.>), the overall Cauchy stress Σ = 1/J̅ 𝐒̅𝐅̅^T (J̅ = det 𝐅̅) is calculated based on the volume-averaged first Piola-Kirchhoff stress 𝐒̅ found as:
𝐒̅ = 1/V∫_V 𝐒(𝐗)dV = (1-f)/V_m∫_V_m 𝐒(𝐗)dV_m ,
where V_m is the bulk crystal volume and the integration is performed numerically in the reference configuration. Unknown components of the deformation gradient 𝐅̅=𝐈+𝐇̅ are calculated based on the relation (<ref>) using the current displacement vectors at nodes XP and YP, namely
F̅_11=1+u_x(XP)/L , F̅_21= u_y(XP)/L , F̅_12= u_x(YP)/L , F̅_22=1+u_y(YP)/L .
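The post-processing chain above can be sketched as follows (our illustration; the convention Σ = J̅^-1 𝐒̅𝐅̅^T for the first Piola-Kirchhoff stress is assumed):

```python
# Sketch (ours) of the unit-cell post-processing described above.
import numpy as np

f0 = np.pi / 4.0 * 0.2 ** 2          # initial void volume fraction, D/L = 0.2

def average_PK1(S_gp, dV_gp, V_cell):
    """S_bar = (1/V) int_V S dV; the void carries zero stress, so only bulk
    Gauss points (total volume V_m = (1 - f) V) contribute to the sum."""
    return np.einsum('gij,g->ij', S_gp, dV_gp) / V_cell

def overall_F(ux_XP, uy_XP, ux_YP, uy_YP, L):
    """F_bar from the current corner-node displacements, Eq. above."""
    F = np.eye(3)
    F[0, 0] += ux_XP / L; F[1, 0] += uy_XP / L
    F[0, 1] += ux_YP / L; F[1, 1] += uy_YP / L
    return F

def overall_cauchy(S_bar, F_bar):
    """Sigma = (1/J_bar) S_bar F_bar^T with J_bar = det(F_bar)."""
    return (S_bar @ F_bar.T) / np.linalg.det(F_bar)
```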
§ RESULTS AND DISCUSSION
In this section, on the basis of the results of finite element simulations, the impacts of crystallographic orientation and various boundary conditions on void growth or coalescence, as well as on grain refinement due to heterogeneous lattice rotation, in a 2D plane strain unit cell are examined. First, in-plane uni-axial compression and tension, simulated using the displacement-controlled scenario (Eq. <ref>), are performed to demonstrate the void-induced heterogeneous slip activity, which then leads to spatial variation in lattice rotation. The example also serves to explore the effect of loading direction with respect to the crystal axes and of the loading 'sign' (tension vs. compression). Moreover, the analysis preliminarily verifies the model predictions against available experimental findings provided in <cit.>. Next, various in-plane biaxial processes are considered. The displacement-controlled boundary condition will be referred to as the β loading case and the stress-controlled boundary condition will be referred to as the η loading case throughout the discussion. To see how plastic anisotropy impacts void growth in an FCC single crystal, four initial orientations of the crystalline lattice with respect to the sample axes are taken into consideration. They are collected in Table <ref>. In the current study, seven loading scenarios with β equal to -0.5, 0, and 1 as well as η equal to -0.5, 0, 0.8, and 1 are analysed. The scenario η=0.8 is selected due to its approximate equivalence to the β=0 case. Note that the state of in-plane uni-axial tension or compression is represented by η = 0.
§.§ Microstructure evolution and void growth in in-plane uni-axial tension and compression
<cit.> examined in-plane uni-axial compression of a single crystal along the [001] direction with a cylindrical void axis along the [110] (orientation O in Table <ref>). As discussed in detail by <cit.>, by applying the anisotropic rigid-plastic slip line theory, this configuration ensures a plane strain condition in the [001] - [1̅10] crystal plane under the action of the compressive or tensile loading with three effective in-plane slip systems.
For the pristine crystal they result from the equal activity of four systems (see Table <ref>), which act in opposite directions (i.e. 𝐦 and -𝐦) for tensile and compressive loading in the plane. It can also be verified that when the direction of loading is changed to [001], under a plane strain condition, the same set of slip systems will be active, again in the opposite sense. Thus, as far as plastic deformation by dislocation motion is considered, in-plane compression (cor. tension) in [001] is equivalent to in-plane tension (cor. compression) in [1̅10].
For the sample with a cylindrical void, under the same loading conditions, the formation of regions of unequal slip activity of potentially active systems around the void is observed, which leads to the lattice rotation heterogeneity and crystal fragmentation into subgrains. These theoretical predictions were verified by <cit.> experimentally, for the [001] compression case, by EBSD measurements. Note that for the crystal without the void, no lattice rotation is predicted by the model, and slip activity is homogeneous, so the grain is not fragmented.
The purpose of this study is to investigate the differences in void evolution and grain fragmentation among four loading scenarios, namely tension/compression in the [1̅10] and [001] directions, even though the same active slip systems are expected in all cases in a pristine rigid-plastic crystal (as indicated in Table <ref>).
It is obvious that the overall stress biaxiality factor η is equal to 0 in each case.
First, we have used this example to verify the predictive capabilities of the present numerical model. As shown in <cit.> and confirmed in our study, one of three effective in-plane slip systems dominates in three different angular slip sectors which are centred at the middle of the void, as marked by dashed lines in Fig. <ref>b. This results in different lattice rotations in respective domains. In figure <ref>b, we present misorientation angle distribution[See definition (<ref>), which is in the present case equipped with the sign to indicate the in-plane rotation direction. Sign + denotes clockwise and - anticlockwise rotation of [001] axis] for the compression strain of 5%. Qualitatively similar subdivision is seen in experimental data quoted in figure <ref> after <cit.>. There are some differences concerning the direction of rotation in the lateral domains, however, a full quantitative comparison is not possible due to the lack of the detailed experiment geometry and boundary condition data in <cit.>.
Next, the same sample configuration is used to explore the effect of loading direction and its sign (i.e. in plane tension vs. compression) on the void evolution and grain refinement. To this end, four processes enlisted in Table <ref> are studied numerically.
In figure <ref>a, we compare the overall in-plane mean stress variation vs. magnitude of the true strain in the loading direction. It is seen that initially, the response in terms of the magnitude of the in-plane mean stress (σ_mean=1/2(Σ_xx+Σ_yy)) is the same for all processes and does not show visible tension-compression asymmetry. However, as the deformation proceeds, the difference starts to increase due to differences in the lattice rotation and void evolution. In each case, the stress level is smaller for the porous crystal than for the pristine one. The evolution of the normalized void volume fraction (f/f_0) is presented in figure <ref>b. It is observed that, as expected, the void volume fraction increases for tension and decreases for compression, however, there are important differences between the two loading directions. While for tension in [001] void grows monotonically, for tension in [1̅10] after initial increase void volume stabilizes at some, relatively small, constant value (f/f_0∼ 1.13), at least for the demonstrated strain regime[It has been verified that for both these loading cases, the void volume starts to increase with accelerating rate and softening is observed for in-plane mean stress at larger strain, which events eventually lead to void coalescence, see Fig. S.1 in the supplementary material.]. On the other hand, for compression in [1̅10] void volume decreases monotonically and void starts to close at a relatively small true strain level (∼0.25)[Calculations were stopped at the moment when the void opposite boundaries were first in contact since the material overlapping was not prevented in calculations.], and for compression in [001] after initial important decrease the void collapse is postponed to higher strain values. It should be stressed that the overall stress triaxiality value, calculated accounting for the 3D character of the stress field (note that Σ_zz is not zero for all analysed cases), is approximately equal to 0.5 for both tension loadings and -0.5 for compression. Only small variations in the Lode parameter calculated for the overall stress are detected for four loading cases (its value is around 0.32-0.33).
Displacement biaxiality ratio β, calculated here as the inverse ratio of in-plane displacements in loading direction with respect to the lateral one, is seen in Fig. <ref>c, while the in-plane true strain biaxiality β_log, calculated as the corresponding ratio of in-plane components of true strain measure (e.g. for tension/compression in [001] it is β_log=E_xx/E_yy=ln(F_xx)/ln(F_yy)) is shown in Fig. <ref>d. Their variation with strain is compared for all four processes and pristine and voided crystals. As expected, it is observed that for a crystal without a void for all processes the evolution of β_log is the same: it starts with the value of -0.5 in the elastic regime and reaches -1.0 for well-developed plastic flow, which marks incompressible deformation in that regime. On the other hand, for voided crystals, the value of -1 is approached only for tension in [1̅10], which is related to the stabilization of the void growth. For the remaining processes, the value does not drop below -0.95 indicating the compressibility of voided crystal.
Differences in the void growth or closing for the two loading directions also concern the developed void shape, as seen in Figs. <ref> and <ref>. While an ellipsoidal void shape is observed for tension in [1̅10] and compression in [001] (equivalent in terms of the slip activity pattern in the pristine crystal), polygonal shapes result from compression in [1̅10] and tension in [001]. Accumulated shear maps also show the failure mode for each case. In the compression cases the failure proceeds by accumulated shear localization in two intersecting bands. For tension, although at the initial stage two bands are also visible, void coalescence takes place, much later for the [1̅10] than for the [001] case (see footnote <ref> and Fig. S.1 in the Supplementary Material).
Figure <ref> shows an interesting interplay between the void evolution and the grain fragmentation phenomenon. It is seen that for the two cases for which the void growth/collapse is halted or retarded (tension in [1̅10] and compression in [001], respectively) the clear checker-board-type subdivision of initial grain into subgrains, misoriented with respect to each other by the angle as large as ∼ 20^o at the true strain level 0.25, is found. On the other hand, for two other processes, the significant lattice rotation is seen only in the domains of intensive strain. These latter results confirm microstructure evolution as an important effect accompanying the deformation of voided crystalline materials.
The analysis showcased in this section illustrates that both in-plane stress biaxiality and stress triaxiality, as well as displacement or strain biaxiality, alone are inadequate in determining the growth of voids. This is particularly true when anisotropic materials are analysed. It is important to note that microstructure evolution plays a substantial role in this process. Fragmentation of bulk crystal surrounding the void into subgrains may lead to significant impediment of the void volume changes.
§.§ Void growth and microstructure evolution in in-plane biaxial loading processes
In this subsection, to further explore factors differentiating the void growth and accompanying grain fragmentation in FCC crystals, in-plane biaxial processes are considered, for three orientations A, B, and C defined in Table <ref>. Orientations were selected following <cit.>. In order to investigate and differentiate the effect of stress and strain biaxiality seven loading scenarios with β equal to -0.5, 0, and 1 as well as η equal to -0.5, 0, 0.8, and 1 are analysed. Let us remark that orientation A, contrary to B and C, is non-symmetric with respect to the loading axes, thus shear strain component E_xy (cor. shear stress component Σ_xy) may be observed for η (cor. β) loading cases even for a pristine crystal sample.
§.§.§ Overall response of voided crystal
Stress biaxiality ratio
The stress biaxiality ratio for the seven loading scenarios is shown in Figure <ref>a. To start with, as is evident, the stress biaxiality ratio for the η loading case is maintained constant for all crystal orientations during the deformation process, which verifies the validity of the finite element procedure used for imposing a constant stress biaxiality ratio. On the other hand, in general, for displacement-controlled processes (with constant β) the stress biaxiality ratio η changes during the deformation process. For crystal orientations A and C, under the β=-0.5 loading case, the stress biaxiality is larger than zero; the slope initially rises, then progressively drops, and ultimately approaches the uniaxial loading case at the end of loading. Although the biaxiality ratio is marginally more than η=0 for orientation B at the end of loading, it would apparently reach the uniaxial loading state had the deformation proceeded further. For β=0, the slope steadily rises until it approaches η=1 at the end of loading. For this case, on average the value of stress biaxiality for the three orientations is close to η=0.8, which is why such a stress-controlled scenario is also selected for comparison purposes. Finally, for the β=1 case, the stress biaxiality ratio is kept constant just as it is for the η=1 case. These graphs, in conjunction with the displacement biaxiality plots in Fig. <ref>b, are important for analysing the growth of the void and the stress response. The softening stress response is evident in Fig. <ref>c when the stress biaxiality ratio increases and void growth is significant, resulting in coalescence.
Additionally, the yellow lines, denoted as 'uniaxial' in Figs. <ref>, show the results obtained for the in-plane uniaxial tension process without the employment of a special spring element but using the displacement-controlled conditions with 𝐇 described by Eq. <ref>. These calculations are performed for verification purposes and are in good agreement with the predictions obtained with the use of the spring element (marked as η=0 in the figures).
Displacement biaxiality ratio
The displacement biaxiality under various loading instances is depicted in Figure <ref>b. Similar to the situation of stress biaxiality, the displacement biaxiality ratio β is kept constant during the β-type process, which verifies the finite element procedure. On the contrary, in general, for stress ratio controlled processes (with constant η), the displacement biaxiality ratio varies in the course of deformation. The displacement biaxiality is kept below -0.5 for η=0 (in-plane uniaxial tension) and η = -0.5 loading cases. For asymmetric orientation A, under the η=1 loading scenario, the ratio initially follows the β =1 case, but as deformation proceeds the slope steadily falls and approaches the β=0 case. For orientations B and C, the ratio remains constant until halfway through the deformation, after which it steadily drops. For η=0.8 as expected, the strain biaxiality oscillates around β=0, although differently for each of the three orientations. For orientation A it is almost constant and close to zero, for orientation B it is negative, initially being close to -0.5 and increasing towards the uniaxial straining mode, while for orientation C it starts with a positive value and next decreases to zero. These plots are again valuable for studying in conjunction with the contour plots of accumulated shear in Section <ref>, void evolution (Fig. <ref>d) and stress response (Fig. <ref>c).
Overall mean stress response
Figures <ref>c illustrate the in-plane overall mean stress response for the various loading scenarios and the given crystallographic orientation. When all loading scenarios are compared, β=1 exhibits the stiffest response in the initial deformation phase, whereas η = -0.5 demonstrates the softest stress response for all crystal orientations. For the η = 0 loading scenario, the stress response increases monotonically in all orientations. Figure <ref>a shows that the stress biaxiality ratio is greater than 0 (positive) for β = -0.5, 0 and 1, and η =0.8 and 1 loading cases. As a result, in the initial deformation stage, a stiffer stress response is observed, followed by a softening response due to significant void expansion in the crystal, which cannot be further compensated by an increase of average stress in the bulk crystal.
When the magnitude of the peak stress for the different orientations is compared, orientation C has the largest peak stress and orientation A the lowest for the β=1 loading case. Furthermore, the evolution of the overall mean stress in Fig. 7c correlates well with the displacement biaxiality ratio β shown in Fig. 7b. In particular, the higher the β value, the stiffer the initial response is and the sooner (in terms of the value of F_22-1) the peak stress is achieved for the given process.
Figure <ref> depicts the overall mean stress response in both pristine and porous unit cells for various crystal orientations and the specified loading scenario. Five loading scenarios where β and η are both equal to 0 and 1 are analysed, together with scenario η=0.8 which approximately corresponds to β=0 case as discussed before. It is evident that all loading cases exhibit the anisotropic response. Also, it is apparent that the response of the porous crystal differs substantially from that of the pristine crystal. With the exception of η= 0 (uni-axial loading condition), the pristine crystal response is purely elastic. In the case of η = 0 loading, the pristine crystal displays a stiffer response than the porous crystal for orientations A and C; however, for orientation B, the response is almost the same for both the pristine and porous crystal. The response of orientation C is the stiffest in each of the loading conditions. For loading scenarios, β = 0, 1, and η = 0.8, 1, orientation A initially displays the softest response, whereas orientation B exhibits the softest response by the end of the deformation process. When the loading scenarios β = 0 and η = 0 are compared, the substantially higher stress biaxiality in the β = 0 loading case (refer to Fig. <ref>a) causes a more stiff response during an early deformation stage, followed by a softening due to significant void expansion in the crystal. On the contrary, a monotonic stress increase is observed for the η = 0 loading scenario. Instead, as expected, the stress response for β = 0 case is close to η=0.8 loading conditions. Due to the highest stress biaxiality, a similar response was observed for β = η = 1.
Normalized void volume fraction evolution
Figure <ref>d compares the evolution of the normalized void volume fraction for various loading conditions and the specified orientation. These evolution plots are in good agreement with displacement biaxiality ratio plots in Fig. <ref>a. For all orientations, the void growth rate increases as the displacement biaxiality ratio increases. The void is collapsing for the η =-0.5 loading case, and this behaviour is the most pronounced in orientation C. Confirmation of the phenomenon can be seen in contour plots of accumulated shear for orientation C which are shown in Fig. <ref>. Under η = 0 loading case, the void grows very slowly as compared to higher stress biaxiality cases. If the displacement biaxiality ratio is less than -0.5, softening behaviour is not observed, since not much void growth is seen. The void growth increases at first with β=-0.5 but subsequently stabilizes for all orientations. The evolution of the void volume fraction under the β and η = 1 loading scenarios correlates with the evolution of the displacement biaxiality ratio (Fig <ref>b). The curves of void evolution start to deviate from each other at the same moment when the value of β for η = 1 case drops below one.
Qualitative differences are observed in the curves shown in Fig. <ref>d for the η-cases, which can be explained by the accompanying variation of the displacement biaxiality ratio. As can be seen, for the high stress biaxiality ratio η=1, the displacement biaxiality β is initially equal to 1 for all three orientations, so the cell is under conditions beneficial for void expansion. Accordingly, at this stage the void grows in all directions (compare Figs. S.2f, S.3f, and S.4f in the supplementary material). However, as deformation proceeds the displacement biaxiality ratio decreases towards zero, which effectively slows down the void growth since its growth starts to be limited to one direction in the plane. Nevertheless, the void volume fraction is still growing at the cost of the bulk crystal, and the achieved values are high. This causes a decrease of the overall in-plane mean stress, as the increase of the average stress in the bulk crystal is not able to compensate for the void expansion, Fig. <ref>c. On the contrary, for the smaller stress biaxiality ratio η=0, the initial displacement biaxiality is negative, so even though the net change of void volume fraction is positive, in this scenario the void diameter grows only in one direction while decreasing in the perpendicular one (compare Figs. S.2e, S.3e and S.4e in the supplementary material). In this case, as the deformation proceeds the displacement biaxiality ratio increases towards zero, which leads to accelerated void growth because the reduction of void size in one of the directions is halted, while the void is still growing in the other one. For all three orientations, within the considered deformation range, the increase in the void volume fraction is not yet sufficient to overcome the overall mean stress increase due to strain hardening in the bulk crystal. However, with increasing deformation one may expect softening, which will be accompanied by an accelerated void growth rate. Interestingly, for the η=0.8 case the former and latter scenarios of void growth are observed for orientations C and B, respectively (see also Fig. <ref>c). Orientation A represents a limit case here, with an almost constant rate of void volume fraction growth.
Figure <ref> compares the evolution of the void volume fraction for different crystal orientations and the selected loading conditions. Five loading cases, the same as in mean stress response plots shown in Fig. <ref> are illustrated. For η= 0 (in-plane uniaxial loading case), the anisotropic response is observed. Void growth in orientation C is the highest, followed by orientations A and B. But for β=0 and 1 loading cases, due to relatively large strain biaxiality, the void growth is significant and the effect of crystal orientation diminishes. It has been verified that the latter observation is true also for other processes in which β value is kept constant.
The same observation is reported in <cit.> under displacement-controlled boundary conditions. The void growth for orientations C and B is nearly identical for η=1 loading. However, the void growth rate is slower in the asymmetric orientation A than in orientations B and C. For the same η and β values, the void expansion under β=0 is substantially faster than in the η=0 loading situation since, as already mentioned, this case corresponds approximately to the η=0.8 case, so to a much higher stress biaxiality ratio. The similarity between the η=0.8 and β=0 cases is also seen when comparing the contour plots in the (b) parts of the figures in Subsection <ref> with the respective maps in Fig. S.5 in the supplementary material.
Unlike in <cit.>, a void coalescence criterion is not formulated in the present study. Nevertheless, in order to observe this phenomenon closely, Fig. <ref> demonstrates how the void size changes in three different directions, AB, EF, E'F', marked in Fig. <ref>, for two selected loading cases: β=1 and η=1. The figure presents the evolution of the value of log(L/|𝐱_right-𝐱_left|), where 𝐱_right and 𝐱_left denote the current locations of the nodes at the right and left ends of the respective diameter and L is the current cell size in the relevant direction. When this value tends to zero, coalescence is approached. It is seen that for the case β=1 and the symmetric orientations B and C, the coalescence state is attained in two perpendicular bands along the X and Y directions. Additionally, the void is losing its circular shape, more markedly for orientation B than for C. For the other cases coalescence is approached mainly in the X direction, and this state is attained visibly sooner for orientations A and B than for orientation C. Figure <ref>a shows that, although the orientation effect is not seen in the normalized void evolution plots shown in Fig. <ref> under the same value of displacement biaxiality, it manifests itself in the developed void shape and thus may influence the coalescence strain and, in general, the failure mode.
§.§.§ Local sample response
Local distribution of accumulated shear
First, in order to show the possible failure mode, contour plots of accumulated shear are presented at the end of the deformation process at strain level 0.3 for six considered loading scenarios in Figs. <ref>, <ref> and <ref>, for three crystallographic orientations A, B, and C listed in Table <ref>, respectively. Since the strain level along the principal loading direction is the same for all the cases, one is able to observe relative variation in a shape change of the cell as a whole and the void for all six loading scenarios. Additionally, to illustrate the strain localization process, the contour plots are presented for F_22-1 = 0.15, so at the intermediate stage of the deformation, and placed in the supplementary material.
Part (a) of figures <ref>-<ref> shows the contour plots of accumulated shear under β = -0.5 loading. For all orientations, shear begins to accumulate on the transverse sides of the voids at the intermediate strain level of 0.15. Due to the symmetry of crystal orientations B and C with respect to the loading direction, the symmetrical distribution of accumulated slip is observed, whereas for asymmetrical orientation A alternate bands of severe deformation and no deformation are developing. Because of the relatively high stress biaxiality ratio in the β = -0.5 scenario, void growth is rapid as deformation progresses (refer to Figs. <ref>a and <ref>d).
The unit cell is deformed substantially at strain level 0.3, with the maximum shear accumulating on the transverse sides of the void. For orientations A and B, the transverse ligament is the origin of void coalescence. In orientation A, the void rotates, and the strain concentration is observed on the transverse sides along the inclined direction. Moreover, at the strain level 0.3 a slight trace of the shear band is seen. Because of the asymmetric orientation, the unit cell edges do not remain straight and are deformed. For orientation B, a polygonal void shape is noticed. For orientation C, inclined shear bands clearly form, and the shear accumulates along the transverse sides of the void. The mode of failure in this case is through these inclined shear bands. In addition, the void elongates in the loading direction, resulting in an ellipsoidal void shape.
Part (b) of Figs. <ref>-<ref> displays the contour plots of accumulated shear under β=0 loading. The void growth is substantially faster due to the high stress biaxiality (0.5<η<1, refer to Fig. <ref>a) and is evident even at the intermediate strain level of 0.15. At this strain level, shear begins to accumulate around the void and the void starts to grow significantly in the transverse direction for all three orientations. Additionally, the void rotates for orientation A. More shear is accumulated in the transverse ligament for orientation C than for orientations A and B. The symmetric distribution of contours is found again for the symmetric orientations B and C.
As the deformation process proceeds, rapid void growth is observed in all crystal orientations, and coalescence occurs along the transverse sides of the void. The void is substantially rotated for orientation A, and a zigzag pattern of strain localization bands is seen along the transverse direction of the void. Similarly, in orientation B, void expansion in the transverse direction is quick, and a polygonal form of the void is clearly developed. On the other hand, the void shape in orientation C is nearly circular, and accumulated shear is seen in the transverse ligament.
Part (c) of Figs. <ref>-<ref> presents the accumulated shear distribution under β = 1 loading. The contour plots resemble those from the preceding loading scenario, i.e. β = 0; however, the void growth is quick in both the longitudinal and transverse directions due to the strong stress biaxiality. When compared to the previous loading case β = 0, the void expansion and accumulation of shear are significantly more severe at the intermediate strain level of 0.15. The void is rotated for the asymmetric orientation A, as in prior loading scenarios. Due to the high stress biaxiality, the void shape is much more circular for orientations B and C at the strain level of 0.15. In contrast to the prior loading cases, coalescence is observed in both directions at the final strain level of 0.3 for orientations B and C. In addition, for orientations B and C, a polygonal void shape with rounded corners is observed. Similar behaviour was reported in <cit.> at high stress triaxialities. Furthermore, for both orientations, substantial shear accumulation is seen around the void. The void rotates even further in the asymmetric orientation A, but its shape is not perfectly circular or polygonal. The same rapid void growth is clearly observed in the normalized void volume fraction plots for this β loading case and the three orientations (refer to Fig. <ref>d).
Now, let us move to the η=const loading scenarios.
Part (d) of Figs. <ref>-<ref> shows the contour plots of accumulated shear under the η=-0.5 loading scenario. In comparison to the β= -0.5 loading condition, void growth is not significant under this loading configuration, because the stress biaxiality ratio is low. Also, as previously discussed on the basis of the void evolution plots, void expansion is not detected when the displacement biaxiality ratio β is lower than -0.5. At the intermediate strain level of 0.15, inclined shear bands begin to form for orientation A, whereas the shear bands for orientation C are at 45 degrees to the main loading direction. However, in orientation B, the deformation is almost homogeneous and there is no void growth.
For orientations A and C, the void collapses at a strain level of 0.3; the normalized void volume fraction plots confirm this observation. The void in orientation C is collapsing like a penny-shaped crack. Furthermore, shear accumulates the most at the tips of the severely elongated void. For orientation B, almost homogeneous deformation is still seen, with no void expansion.
Part (e) of Figs. <ref>-<ref> displays the contour plots of accumulated shear under the η=0 (uniaxial) loading scenario. The response is quite similar to the η=-0.5 loading case. At the strain level of 0.15, for the asymmetric orientation A, shear starts to accumulate on the transverse sides of the void and the formation of one family of inclined shear bands is observed, whereas for orientation B almost homogeneous deformation is observed with no shear localization. The formation of two families of inclined deformation bands and the accumulation of slip on the transverse sides of the void is seen for orientation C. For orientation A, a noticeable formation of another family of inclined deformation bands is observed at the strain level of 0.3 and the unit cell is distorted, whereas for orientation B some heterogeneity of deformation is observed, but not much void expansion. For orientation C, shear accumulates on both sides of the void, causing the void to elongate along the loading direction, resulting in an ellipsoidal shape. Compared to orientations A and B, void expansion is substantially more prominent for orientation C. Overall, when comparing the responses of the three orientations with the respective β = 0 loading case, the void growth is not significant due to the low stress and negative displacement biaxiality ratios.
On the other hand, as already discussed, the β=0 case is approximately equivalent to the η=0.8 case for which the respective accumulated shear maps are shown in Fig. S.5a of the supplementary material. Those contour plots are very similar to the maps shown in (b) subfigures included in this subsection.
Finally, part (f) of Figs. <ref>-<ref> depicts the accumulated shear contour plots under the η=1 loading scenario. Similarly to the β=1 loading scenario, void growth is accelerated due to the high stress and displacement biaxiality. The displacement biaxiality ratio plot (Fig. <ref>b) explains the slight deviations from the β=1 loading condition. For orientations B and C, the void growth is rapid even at the strain level of 0.15, as in the β=1 loading situation. However, there is some deviation in the strain accumulation as compared to the β=1 case for the asymmetric orientation A, which can be correlated with the differences seen in the displacement biaxiality ratio curves in Fig. <ref>b for these cases.
A polygonal void shape with rounded corners is seen for orientations B and C at the strain level of 0.3, which is identical to the β=1 loading situation. The void growth is slightly reduced since the displacement biaxiality ratio is less than 1 (refer to Fig. <ref>d). The primary difference is that in the β=1 loading situation, void coalescence occurs in both loading directions, but in the η = 1 loading instance, void coalescence happens only in the transverse ligament. Displacement biaxiality plots (Fig. <ref>b) clearly show the origin of this disparity. In addition, the maximum shear accumulates around the void for all orientations.
Local distribution of lattice rotation
The influence of void evolution and loading conditions on new grain formation is now studied on the basis of contour plots of the lattice rotation angle. We concentrate on the somewhat opposite cases of orientations A and B, and present only selected results for orientation C.
The lattice rotation angle Ψ∈ (0,π), presented in the plots, is defined as:
Ψ = arccos( ( tr(Δ𝐑(t)) - 1 ) / 2 )
where Δ𝐑(t) is calculated from the initial orientation tensor 𝐑(0) and the current orientation tensor 𝐑(t) as
Δ𝐑(t) = 𝐑(t)𝐑(0)^⊺ .
Orientation tensor 𝐑(t) is constructed based on the current orientation of lattice direction 𝐚 with the Miller indices [100] and lattice plane normal 𝐛 with {001}, respectively. The change of their orientation during the deformation process is governed by the elastic part of the deformation gradient 𝐅_e, so that 𝐚(t)=𝐅_e(t)𝐚(0) and 𝐛(t)=𝐅_e^-T(t)𝐛(0).
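For concreteness, the computation of Ψ from the lattice frame can be sketched in a few lines of Python; this is our own illustration, and the orthonormalization of the frame built from 𝐚 and 𝐛 is a simplifying assumption.

import numpy as np

def lattice_rotation_angle(R0, Rt):
    # Psi = arccos((tr(R(t) R(0)^T) - 1) / 2); clipping guards rounding.
    dR = Rt @ R0.T
    c = np.clip((np.trace(dR) - 1.0) / 2.0, -1.0, 1.0)
    return np.arccos(c)

def orientation_tensor(a, b):
    # Orthonormal frame from the lattice direction a ([100]) and plane
    # normal b ({001}); assumes a and b stay (nearly) perpendicular.
    e1 = a / np.linalg.norm(a)
    e3 = b / np.linalg.norm(b)
    e2 = np.cross(e3, e1)
    return np.column_stack([e1, e2 / np.linalg.norm(e2), e3])

# Lattice vectors convect with the elastic deformation gradient F_e:
# a(t) = F_e(t) a(0), b(t) = F_e^{-T}(t) b(0). A pure elastic rotation
# of 5 degrees about the z axis should therefore give Psi = 5 degrees.
th = np.radians(5.0)
Fe = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
a0, b0 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
R0 = orientation_tensor(a0, b0)
Rt = orientation_tensor(Fe @ a0, np.linalg.inv(Fe).T @ b0)
print(np.degrees(lattice_rotation_angle(R0, Rt)))  # ~5.0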
In each loading case, we observe rotation angle heterogeneity, which results in the development of a new microstructure. The presence of a void causes heterogeneity of strain, which results in heterogeneity of lattice rotation. However, we notice that the distribution of the rotation angle does not perfectly follow the distribution of accumulated shear Γ, as was already seen in Section 4.1 for the uniaxial loading cases.
The lattice rotation angle plots for the asymmetric orientation A are shown in Figure <ref>. Because orientation A is not stable under the prescribed loading conditions, we observe uniform lattice rotation even in pristine crystals. For example, the calculated lattice rotation angle for the loading case η=0 at F_22-1=0.3 for a crystal without a void is 11 degrees. For the voided crystal in the η=0 case, we observe bands with rotation angles of 30 degrees, whereas the remaining portion of the cell rotates by a smaller angle of about 10 degrees, which roughly corresponds to the lattice rotation which would be seen in a pristine crystal. In the unit cell, new grains are formed as a result of the different rotation angles. Under the η = -0.5 loading scenario, a similar response is observed; two inclined bands form in this case. The evolution of the void volume fraction has an effect on the evolution of the microstructure. For η = -0.5, we see that the new subgrains with a larger rotation angle correlate with the zones of increased shear accumulation. For the β =0, β =1, and η = 1 loading cases, the formation of new grains takes place around the void, and alternate domains of no rotation (blue domains) and moderate rotation are observed away from the void. All of these factors contribute to the formation of multigrain microstructures, particularly at high strain or stress biaxiality values. For the β = -0.5 loading case (and for the approximately equivalent η=0.8 case, as seen in the supplementary material), a combination of the effects found for the other loading cases is seen. We observe the formation of a band with high lattice rotation, which starts at the lateral sides of the void and then runs parallel to the main loading direction, as well as alternating bands of no and medium rotation angles in the middle vertical portion of the unit cell.
Figure <ref> depicts the lattice rotation angle plots for orientation B. Due to the symmetry of the orientation with respect to the loading conditions, the developed microstructure preserves this symmetry. Because orientation B is stable under the prescribed loading conditions, we do not observe lattice rotation for the pristine crystal. For the voided crystal, in the η=0 and η=-0.5 loading cases, the heterogeneity of the lattice rotation angle is very small, less than 5 degrees, following the homogeneity of deformation seen in Fig. <ref>(d, e). Around the void, a few small domains with 5-10 degrees of lattice rotation are present. The formation of new grains around the void is observed for the higher biaxiality loading cases β= 0, 1 and η = 1, while the crystal domain at a larger distance from the void does not rotate significantly. Under the β= -0.5 and η=0.8 loadings, the response is somewhere in between the two scenarios discussed before.
In order to further illustrate the effects related to grain refinement,
Figures <ref> and <ref> present histograms of the lattice rotation angle generated based on the data in Figures <ref> and <ref>, respectively. The histogram plot on the left displays the entire unit cell, while the histogram plot on the right shows the area surrounding the void. For the purpose of the latter plot, we employed two layers of finite elements which surround the void (refer to Fig. <ref>). Different colours of bars correspond to different loading conditions. When we compare the results for asymmetric orientation A and symmetric orientation B, we observe that orientation A has a substantially larger orientation spread than orientation B. This is because most of the elements rotate less than 10 degrees in orientation B. When we consider the area around the void for both orientations, the orientation spread widens significantly, especially for η=1 and β=1 loading cases.
In order to quantify the observed differences more directly, the mean value and standard deviation of the rotation angle were calculated for each case and collected in Table <ref>, which also includes the respective values found for orientation C. As expected, for all loading conditions the highest mean value is obtained for orientation A, which is connected with the lack of symmetry of this configuration.
Considering the results for the same orientation but different loading conditions, we see that the highest mean misorientation angle is found for the cases η = -0.5 and η = 0 for orientation A, and for β = 1 for orientations B and C (refer to Table <ref>). The standard deviation is used to illustrate lattice rotation heterogeneity. For orientation A, the highest lattice rotation heterogeneity is found for the high biaxiality factors η = 1 and β = 1; however, for the other stress and strain biaxiality factors, the variation in lattice rotation is not significantly lower. The smallest heterogeneity is observed for the η = 0 case (around 4.5 degrees). As a result, in the presence of a void, orientation A is prone to grain refinement. For orientations B and C, the disparities in lattice rotation heterogeneity are larger. Again, the β = 1 and η = 1 cases show the highest values. However, no heterogeneity is evident for η = -0.5 and η = 0 for orientation B, as shown by the contour plots (Figure <ref>). This observation does not hold for orientation C, which can be correlated with the significant strain heterogeneity for those two cases seen in Fig. <ref>. Overall, the magnitude of grain refinement appears to be more influenced by the loading conditions in the case of symmetric orientations.
§ SUMMARY AND CONCLUSIONS
In this paper, using the crystal plasticity theory combined with the finite element method, we have investigated the effects of initial crystallographic orientation, stress, and displacement controlled loading conditions on the void and microstructure evolution in a 2D plane strain unit cell. Uniaxial and biaxial loading cases have been studied.
For uniaxial loading cases a special configuration, which enforces an equivalent pattern of plastic deformation in the pristine crystal, has been selected in order to investigate the mutual interactions between the evolving void and the lattice rotation heterogeneity. It has been found that neither the macroscopic in-plane stress biaxiality nor the displacement/strain biaxiality is sufficient to fully determine the void growth, especially when anisotropic materials are considered, and that a significant role in this process is played by microstructure evolution. Fragmentation of the bulk crystal surrounding the void into subgrains may lead to a significant disturbance of the void volume changes. Note that a similar observation about the importance of microstructure changes was made by <cit.> for an HCP crystal, in which the appearance of domains with a new twin-related orientation strongly affected void growth and coalescence.
Next, biaxial loading cases have been considered for three crystal orientations, one of which is not symmetric with respect to the loading directions. It has been analysed how the stress or strain biaxiality factors and the initial lattice orientation influence the void evolution in terms of its size and shape. Overall, seven cases with three displacement controlled loading scenarios (β={-0.5, 0,1}) and four stress controlled loading scenarios (η={-0.5, 0, 0.8, 1 }) have been considered. The following are the key conclusions of the study:
* It seems that the primary driving factor for void growth and coalescence is the displacement biaxiality factor β. A clearer correlation is found between variations in displacement biaxiality ratio and normalized void volume fraction evolution plots, as well as the resulting void shape and coalescence pattern.
* Softening stress response is evident for large displacement biaxiality factors when the stress biaxiality ratio η increases. The void volume fraction increase in such cases is significant, resulting in void coalescence. The effect of crystal orientation is then diminished. Similar findings were reported in other studies <cit.>. The coalescence is observed in both directions for displacement biaxiality β =1, but only in the transverse ligament for stress biaxiality η =1. For advanced plastic deformation, particularly at high stress and displacement biaxiality η = β = 1, voids evolve into polygonal forms. Similar findings have been reported by <cit.>.
* For stress controlled processes the starting point can be described as a biaxial straining process which, as the void grows, approaches a uniaxial straining mode. The way in which the void growth proceeds is governed by the variation of the displacement biaxiality factor β. When β is initially positive, the obtained void volume fractions are larger (softening is observed earlier), while the void growth rate decreases as the uniaxial straining mode is approached. On the other hand, when β is initially negative, the obtained void volume fractions are smaller (softening is observed later), while the void growth rate increases as the uniaxial straining mode is approached.
* For lower stress η and displacement β biaxiality values, an anisotropic response is observed, and the strain-stress response is dependent on crystallographic orientation. For the lowest value of stress biaxiality η = -0.5, void closure has been observed, particularly in the non-symmetric orientation A and orientation C, as well as the formation of strain localization bands.
* In general, the heterogeneity of plastic deformation is largest for the non-symmetric orientation A. This results in lattice rotation heterogeneity and grain fragmentation in each loading case. For the other orientations, the heterogeneity of lattice rotation is concentrated around the void, especially for higher stress and displacement biaxiality ratios (β ={ 0, 1 } & η={ 0.8, 1 }). On the other hand, for small or negative values of both biaxiality factors, void evolution and lattice rotation heterogeneity are greatly influenced by the initial crystal orientation and differ substantially for the same value of the stress or strain biaxiality factor, while the grain refinement encompasses a larger crystal volume.
It should be remarked that FCC crystals usually exhibit smaller plastic anisotropy than HCP crystals, for which different types of slip systems can be activated with substantially different values of the critical shear stresses. Moreover, for many HCP metals, uniaxial twinning plays an important role. In such a situation, we may expect an even more significant influence of microstructure changes on void evolution and the accompanying ductile failure mode. This, together with extending the analysis to 3D spherical voids, is an interesting direction for further research.
§ ACKNOWLEDGEMENTS
The research was partially supported by project No. 2021/41/B/ST8/03345 of the National Science Centre, Poland. Authors acknowledge Prof. Stanislaw Stupkiewicz from (IPPT) for his help in AceGen/AceFEM implementation of the computational procedures and Dr. Karol Frydrych (IPPT) for fruitful discussions.
|
http://arxiv.org/abs/2307.01938v1
|
20230704215705
|
Physics-based Motion Retargeting from Sparse Inputs
|
[
"Daniele Reda",
"Jungdam Won",
"Yuting Ye",
"Michiel van de Panne",
"Alexander Winkler"
] |
cs.CV
|
[
"cs.CV"
] |
dreda@cs.ubc.ca
University of British Columbia
Canada
Seoul National University
South Korea
Reality Labs Research, Meta
United States of America
University of British Columbia
Canada
winklera@meta.com
Reality Labs Research, Meta
United States of America
Avatars are important to create interactive and immersive experiences in virtual worlds.
One challenge in animating these characters to mimic a user's motion is that commercial AR/VR products consist only of a headset and controllers, providing very limited sensor data of the user's pose. Another challenge is that an avatar might have a different skeleton structure than a human, and the mapping between them is unclear. In this work we address both of these challenges. We introduce a method to retarget motions in real-time from sparse human sensor data to characters of various morphologies. Our method uses reinforcement learning to train a policy to control characters in a physics simulator. We only require human motion capture data for training, without relying on artist-generated animations for each avatar. This allows us to use large motion capture datasets to train general policies that can track unseen users from real and sparse data in real-time. We demonstrate the feasibility of our approach on three characters with different skeleton structures: a dinosaur, a mouse-like creature and a human. We show that the avatar poses often match the user surprisingly well, despite having no sensor information of the lower body available. We discuss and ablate the important components of our framework, specifically the kinematic retargeting step, the imitation, contact and action rewards, as well as our asymmetric actor-critic observations. We further explore the robustness of our method in a variety of settings, including unbalancing, dancing and sports motions.
[Teaser figure: Our method uses only a headset and controller pose as input to generate a physically-valid pose for a variety of characters in real-time.]
Physics-based Motion Retargeting from Sparse Inputs
Alexander Winkler
July 2023
===================================================
§ INTRODUCTION
Augmented and Virtual Reality (AR/VR) has the potential to provide rich forms of self-expression. By using human characters it is easier to accurately reflect the motions of a user. However, many users might want to portray themselves via non-human characters.
Games with non-human player characters already demonstrate the great appeal of this type of embodiment,
albeit one that works within the limited immersion afforded by current gaming input devices and displays.
How can we best allow users to embody themselves in non-human characters using current AR/VR systems?
Our work seeks to make progress on this question. This entails multiple challenges, in particular:
(a) AR/VR systems provide only sparse information regarding the pose of the user,
obtained from a head-mounted device (HMD) and two controllers.
(b) The target character may have significantly different dimensions and body types, as shown in <ref>; and
(c) Kinematic animation, including that resulting from kinematic retargeting, often lacks physical plausibility, producing movements that lack a feeling of weight.
We propose a method to address these challenges. In particular, we develop an imitation-based
reinforcement learning (RL) method that uses the sparse sensor input of a user to drive
a physics-based simulation of the target character. This directly takes into account the physical properties of the given character,
such as the heavy tail of a dinosaur or the short-legs of a mouse character, as shown in <ref>.
We only require human motion capture data
for training, without relying on artist-generated animations for each avatar.
This allows us to use large motion capture datasets to train general policies
that can track unseen users from real and sparse data in real-time.
We identify ingredients as being important to successful retargeting in this setting,
including foot contact rewards, sparse mapping of key features for retargeting,
and suitable reward terms that offer further style control.
Many of the pieces that we rely on exist elsewhere in the literature.
Our primary contribution lies with bringing them together in a way that
enables a new retargeting capability well-suited to current AR/VR systems.
We are the first to show a framework that works with real data from sparse sensors in real time while producing high-quality motions for non-human characters.
We validate our design choices through a variety of ablations.
§ RELATED WORK
In this literature review we focus on the most relevant works in motion tracking, retargeting, and physics-based control.
§.§ Human Motion Tracking
Many solutions exist for full-body tracking of human motion, varying in their choice of sensors,
the number of sensors, and their placement.
Optical marker-based systems with external cameras remain the most common choice for applications requiring high accuracy, e.g., <cit.>.
Markerless and vision-based approaches rely on cameras alone to generate full body poses.
Common approaches leverage human body models such as SMPL as a pose prior <cit.>, use extracted keypoints or correspondences from the images <cit.>, or use physics-based priors, e.g., <cit.>.
Wearable sensors are another common choice, relying on sensors attached on the user's body, such as Inertial Measurement Unit (IMU) devices, e.g., <cit.>.
When using AR/VR devices, systems are further limited by the sparse sensors available. The most commonly available setups consist of three tracked devices: a head-mounted device (HMD) and two controllers, one for each hand.
As a human motion tracking device, these are handicapped by the lack of sensory information regarding
the lower body and legs, which are essential to synthesizing believable full-body motion.
Multiple methods have been proposed to address this, using transformers <cit.>, VAEs <cit.> and normalizing-flow generative models <cit.>.
Being kinematic-based approaches, however, these methods do not enforce physical properties and thus
suffer from motion artifacts such as foot-skating and jitter.
Physics-based approaches have also recently been proposed <cit.>.
These both make use of reinforcement learning and physics to learn general and robust policies that drive full-body
avatars, conditioned on input from a VR device. These are closest to the work we present in this paper and have great promise, although they come with their own limitations.
The Neural3Points method <cit.> is specific to a single user and
uses auxiliary losses and an intermediate full-body pose predictor.
Relatedly, <cit.> proposes a more direct approach that is able to control a simulated
human avatar and generalizes to users of different heights and multiple type of motions.
Our work generalizes the method of <cit.> in two important ways: (1) we learn physics-based retargeting to characters having different morphologies, and
(2) we enable real-time retargeting.
§.§ Retargeting Motions
The motion retargeting problem is that of remapping motion from a source character or skeleton,
often driven by motion capture data, to another character of possibly different dimensions.
This is a long-standing problem for which many solutions have been proposed.
Arguably the most challenging version of this problem arises when the source and target characters may
differ significantly in terms of their morphology and skeleton, as is also the case for our work.
Kinematic retargeting methods often approach the problem by allowing the user to specify directly, or alternatively to learn via examples,
a model for source-to-target pose correspondences, e.g., <cit.>.
This creates a puppetry system, where target motions can be further cleaned to respect contacts with the help
of inverse kinematics. Kinematic motion deformation approaches can be used to adapt multiple characters trajectories
for motions involving coordination such as moving boxes <cit.>.
Recent work proposes a kinematic method to learn how to retarget without requiring any explicit pairing between motions <cit.>,
and this is also demonstrated to work on skeletons with very different proportions.
Other recent work examines how to learn efficient kinematic motion retargeting for human-like skeletons
while preserving contact constraints, such as when hands and arms have self-contact with the body <cit.>.
Physics-based retargeting methods aim to produce a physics-based simulation of the output motion,
which results in crisp contacts and physically-plausible motion of the target character.
An offline approach to motion retargeting using spacetime trajectory optimization is presented in <cit.>.
The final output uses LQR trees, and thus the given motions can cope with some perturbations.
A method is recently proposed for using interactive human motion to drive the motion of a quadruped robot <cit.>.
A curated dataset of matching pairs of human-and-robot motions is used to develop relevant kinematic mappings
for particular motions or tasks. A deep-RL policy is then learned that can track the target kinematic motions in real time,
enabling a form of real-time human-to-real-robot puppetry. In our setting, we assume significantly sparser user input and motion specifications.
§.§ Physics-based Character Simulation
Controllers for physics-based characters have been extensively explored.
The ability to imitate reference motions was first demonstrated to varying extents
in a number of papers over the past 15 years, e.g., <cit.>.
These methods often incorporated some iterative optimization to adapt to a specific motion
and used a simple control law to provide robust balance feedback.
Some of these methods were also adapted to produce motions for non-human characters,
e.g., <cit.>.
Neural network policies, trained via deep reinforcement learning (RL), provide new capabilities
to learn new skills from scratch, or to imitate artist-provided motions or motion capture clips,
e.g., <cit.>, including demonstrations for non-human characters.
More recent methods provide more flexibility in sequencing motions for basketball <cit.>
or, more generally, to track online streams of motion capture data <cit.>.
Control policies have also been learned which are conditioned on not only the desired motion, but also
the specific morphology of a simulated character, which can then even be changed at run time <cit.>.
We further refer the reader to a recent survey of RL-related animation methods <cit.>.
We build on the foundations provided above for our specific problem,
namely how to retarget from sparse (and therefore potentially highly ambiguous) input data to a
non-human physics-based character with very different dimensions and proportions.
§ METHOD
An overview of our system is shown in <ref>. We use reinforcement learning to learn a policy that generates torques for a physics simulator.
During training, we use human motion capture data to both synthesize HMD and controllers data for the policy, and to build a reward training signal. In the following we give an overview of reinforcement learning and then describe each component in detail.
§.§ Reinforcement Learning
We use deep reinforcement learning (RL) to learn a retargeting policy for each character.
In RL, at each time step t, the control policy reacts to an environment state s_t by performing an action a_t. Based on the action performed, the policy receives a reward signal r_t = r(s_t, a_t). In deep RL, the control policy π_θ(a | s) is a neural network. The goal of deep RL is to find the network parameters θ which maximize the expected return defined as follows:
J_RL(θ) = 𝔼[∑_t=0^∞γ^tr(s_t, a_t)],
where γ∈ [0, 1) is the discount factor. Tuning γ affects the importance we give to future states.
We solve this optimization problem using the proximal policy optimization (PPO) algorithm <cit.>, a policy gradient actor-critic algorithm.
A review of the PPO algorithm is provided in <ref>.
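To make the objective concrete, the discounted return that J_RL(θ) averages over trajectories can be computed as below. This is our own illustrative Python sketch, not code from the paper.

import numpy as np

def discounted_returns(rewards, gamma=0.99):
    # Monte-Carlo returns G_t = sum_k gamma^k r_{t+k} for one episode,
    # accumulated backwards in time.
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G

print(discounted_returns([1.0, 1.0, 1.0], gamma=0.9))  # [2.71, 1.9, 1.0]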
§.§ Characters
We demonstrate our retargeting solution on three characters with unique features:
Oppy <cit.> is a mouse with a short lower body, a big head, big ears and a tail;
Dino is a tall dinosaur, with a long and heavy tail and head, and short arms;
Jesse is a human-like cartoon character with a skeleton structure similar to the mocap data.
<ref> shows a visual representation of the characters and <ref> details the structure of their skeletons.
§.§ Observations
The observation contains two parts: simulated character data o_t, sim and user's sparse sensor data o_t, user.
o_t = [o_t, sim, o_t-1, user, o_t, user]
o_t, sim = [o_sim, q, o_sim, q̇, o_sim, x, o_sim, R]
o_t, user = [h_t, l_t, r_t, R_h, t, R_l, t, R_r, t]
The simulated character's state is fully observable in the simulation. Therefore, even though the sensor signal is sparse, the policy can still rely on the full state of the simulated character.
This observation consists of joint angles o_sim, q∈ℝ^j and joint angle velocities o_sim, q̇∈ℝ^j of all degrees of freedom j of the character.
We also provide Cartesian positions o_sim, x∈ℝ^l×3 and orientations o_sim, R∈ℝ^l×6 of a subset l of links of the character.
The orientations consist of the first two columns of their rotation matrices.
All positions and orientations are expressed with respect to a coordinate frame located on the floor below the character which rotates according to the character heading direction. This is useful to make the controller agnostic to the heading direction.
The sensor data, either coming from the real device or synthetically generated from the training data (described in <ref>), consists of the position and orientation of the HMD h, the left controller l and the right controller r. Positions and orientations are expressed in the same coordinate system as the simulated character observations. To allow the policy to infer velocities, we provide it with two consecutive sensor observations [o_t-1, user, o_t, user].
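A sketch of how these observation vectors could be assembled is given below; the shapes and the heading-aligned frame handling are our assumptions, since the paper only specifies the contents of each block.

import numpy as np

def orientation_6d(R):
    # First two columns of a rotation matrix, as used for o_sim,R.
    return R[:, :2].reshape(-1)

def sim_observation(q, qdot, link_pos, link_rot):
    # o_sim = [q, qdot, x, R]; q, qdot: (j,) joint angles/velocities,
    # link_pos: (l, 3), link_rot: (l, 3, 3), all already expressed in
    # the heading-aligned floor frame.
    rots = np.concatenate([orientation_6d(R) for R in link_rot])
    return np.concatenate([q, qdot, link_pos.reshape(-1), rots])

def user_observation(h, l, r, Rh, Rl, Rr):
    # o_user: HMD and controller positions plus 6D orientations.
    return np.concatenate([h, l, r, orientation_6d(Rh),
                           orientation_6d(Rl), orientation_6d(Rr)])

def policy_observation(o_sim, o_user_prev, o_user_now):
    # o_t = [o_sim, o_user(t-1), o_user(t)]; two consecutive sensor
    # frames let the policy infer velocities.
    return np.concatenate([o_sim, o_user_prev, o_user_now])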
Inspired by <cit.>, we use asymmetric observations.
At training time we augment the value function observation by providing the full human mocap pose and future human mocap state information.
This complete view of the state allows the value function to better estimate the returns. The better the return estimate, the easier it is for the policy to learn. We are allowed to provide this mocap state information because the value function is required only for training. Real-time inference still relies only on the policy, which uses the sparse sensor input. We ablate this in <ref> and find that it is essential for sparse real-time retargeting.
§.§ Synthetic Training Data
During training, we require HMD and controller data for the observation paired with kinematic poses for each character s_t, kin from which the reward r_t is computed.
To synthetically generate the HMD and controller data we offset the mocap head and wrist joints to emulate the position and orientation of HMD, left and right controllers as if the subjects were equipped with an AR/VR device.
Importantly, our system does not require artist-generated animations for each specific character as training data, which would be infeasible to create with the diversity and quantity we require.
Instead we reuse existing human motion capture data s_gt and perform a rough kinematic retargeting s_kin to the morphology of the simulated character (<ref>).
In this step, we manually match selected joint angles of the human to conceptually similar joints of the creature.
For joints where no correspondence can be found, we just set them to their default pose (e.g. ears and tails).
This provides a rough estimate of the creature's motion. However, this motion has many artifacts, such as feet sliding due to different leg lengths, self-collisions, floor collisions, and no motion of the tail and ears.
Nonetheless, we can still use it as a reward signal to train our simulated character.
The physical constraints imposed by the simulation then remove remaining artifacts. Importantly, after the simulated character is trained, it is driven only by a headset and controllers, without requiring any full-body information of the user or any kinematic retargeting.
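The synthesis of sensor signals from mocap joints might look as follows; the offset values here are hypothetical placeholders, since the paper does not report the exact device calibration.

import numpy as np

def synth_sensor(joint_pos, joint_rot, offset):
    # Emulate a device pose by placing it at a fixed offset in the local
    # frame of a mocap joint (head for the HMD, wrists for controllers).
    return joint_pos + joint_rot @ offset, joint_rot

# Hypothetical example: HMD roughly 8 cm in front of the head joint.
HMD_OFFSET = np.array([0.08, 0.0, 0.0])
head_pos, head_rot = np.zeros(3), np.eye(3)
hmd_pos, hmd_rot = synth_sensor(head_pos, head_rot, HMD_OFFSET)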
§.§ Reward
The goal for the simulated character is to imitate the human motion as closely as possible, while respecting all the constraints imposed by physics. Our reward function includes a component for imitation, contact, and action regularization:
r_t = r_t(imitation) + r_t(contact) + r_t(action)
r_t(imitation) = r_t(q) + r_t(q̇) + r_t(x) + r_t(ẋ) + r_t(orientation)
r_t(action) = r_t(action diff) + r_t(action min).
Each of the reward terms is expressed using a weighted Gaussian kernel:
r_t(s) = w_s e^-k_s d(s_t, sim, s_t, kin)
where for each term only the specific component of the state s is considered, d(s_sim, s_kin) represents the distance metric between the simulated and kinematic components of the state, k is the sensitivity of the Gaussian kernel, and w is the weight of the reward component. Parameter values and details of the distance metrics for each term are provided in <ref>.
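As an illustration, a single reward term can be evaluated as below; the function names are ours, and the per-term weights w and sensitivities k follow the appendix tables.

import numpy as np

def kernel_reward(d, w, k):
    # One reward term r = w * exp(-k * d) for a distance metric d.
    return w * np.exp(-k * d)

def weighted_sq_distance(x_sim, x_kin, weights):
    # Weighted sum of squared per-joint (or per-link) Euclidean
    # distances, the metric form used by the imitation terms.
    diff = np.atleast_2d(x_sim - x_kin)
    return float(np.sum(weights * np.sum(diff**2, axis=-1)))

# e.g. a link-position term: the closer the simulated pose, the closer
# the reward gets to its weight w.
d = weighted_sq_distance(np.zeros((3, 3)), 0.1 * np.ones((3, 3)), np.ones(3))
print(kernel_reward(d, w=1.0, k=10.0))  # ~0.41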
§.§.§ Imitation Reward
This reward matches the available information between the simulated character s_sim and the kinematically retargeted ground truth pose s_kin.
The five terms represent a weighted sum of the difference between the matching joint angles (q), joint angle velocities (q̇), Cartesian coordinate positions (x) and velocities (ẋ), and orientation.
The imitation reward term captures the degrees of supervision we want to transfer between human motion data and the simulated character.
For clarity, <ref> is the general form which includes all possible terms, but the way they are used differs according to each character.
The less supervision the imitation term provides, the more we rely on physics and the other components to generate a sensible motion.
Depending on the quality of our kinematically retargeted pose, we can choose which of the aspects of the pose we want the simulated character to imitate more closely.
The least amount of supervision consists in tracking only the root position, which according to our experiments does not produce high-quality motions.
On the other extreme, we also do not want to track every aspect of the kinematically retargeted pose.
For example there is no tail motion in the human mocap data, so the kinematically retargeted pose has all tails set to a stiff default pose. However, a simulated character might want to move the tail to achieve balance and smoother motion. So we do not require these parts of the skeleton to imitate the kinematic pose.
Orientations are skeleton independent, so we rely on the actual human mocap data, not the kinematically retargeted pose to formulate the orientation rewards.
We always formulate a reward that matches the characters root with the human mocap root, as well as the characters head orientation with the human head orientation.
Ablations without these terms are provided in <ref>.
§.§.§ Contact Reward
The contact reward is a boolean value that checks whether the simulated character's foot contact and the human's foot contact coincide.
We estimate contact of the mocap data based on a velocity and height threshold. For the simulated character, we can directly access contact forces from the simulator and threshold those.
In most cases the kinematically retargeted leg motion has a variety of artifacts, such as the feet sliding on or penetrating the ground, so imitating it exactly is not physically valid. Since this reward does not depend on the skeleton structure, it can be used for all bipedal characters equally and computed directly from human mocap.
The contact reward is important for providing further training supervision and generating the high-quality motions shown. Ablations are provided.
§.§.§ Action Reward
The action reward is a regularization term that minimizes the total amount of energy consumed by the character.
It consists of two terms that minimize the difference in torque between two subsequent actions and the absolute action value, and is defined as:
r_t(action diff) = 1/N∑_i^N (a_t-1, i - a_t, i)^2
r_t(action min) = 1/N∑_i^N a_t, i^2
where N is the total number of action values which the policy outputs.
The purpose of these components is to incentivize lower-energy movements overall and to reduce twitching by encouraging smoother transitions between poses.
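A direct transcription of these two regularizers into Python reads as follows (our own sketch); as quadratic distances, they enter the reward with their own weights in the kernel form above.

import numpy as np

def action_regularizers(a_prev, a_now):
    # Mean squared change between consecutive actions and mean squared
    # action magnitude, over the N policy outputs.
    a_prev, a_now = np.asarray(a_prev), np.asarray(a_now)
    return np.mean((a_prev - a_now) ** 2), np.mean(a_now ** 2)

print(action_regularizers([0.1, -0.2], [0.2, -0.1]))  # ~(0.01, 0.025)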
§.§ Termination
As noted in multiple previous works <cit.>, early termination techniques are important for learning complex motions through reinforcement learning.
We reset the environment when one of the following two termination conditions is satisfied: the character enters an unrecoverable state, which we define as falling and touching the ground with the upper body, or the character root position is more than 30 cm away from the scaled root of the motion capture data.
Furthermore, to mitigate the imbalance of visiting and learning to retarget only the early parts of the motion trajectories, we reset the character every 500 steps. We randomly sample a pose from the human data and set the character using the kinematically retargeted pose.
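The termination logic can be sketched as below, with the thresholds as stated above; the function boundaries are our own.

import numpy as np

def should_terminate(upper_body_contact, root_sim, root_ref):
    # Unrecoverable fall (upper body touching the ground), or the root
    # drifting more than 30 cm from the scaled mocap root.
    return upper_body_contact or np.linalg.norm(root_sim - root_ref) > 0.30

def periodic_reset(step, period=500):
    # Scheduled reset to a randomly sampled, kinematically retargeted pose.
    return step > 0 and step % period == 0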
§.§ Learning Control Policies
The policy for each simulated character outputs torque values in the range [-1, 1] which are then rescaled according to minimum and maximum torque values for each joint (provided in <ref>).
We find this to perform better and to be simpler than outputting PD target angles, as also observed in previous works <cit.>.
We train the policy with PPO and PyTorch auto differentiation software <cit.> and simulate physics with NVIDIA PhysX Isaac Gym physics simulator <cit.>.
A complete set of hyperparameter details for reproducibility are summarized in <ref>.
§ RESULTS
All experiments are performed on a single 12-core machine with one NVIDIA RTX 2070 GPU.
All models are trained for 24 hours which translates to approximately 6 billion environment steps.
We demonstrate comparable results with two different motion capture datasets.
Our in-house mocap data consists of 4 hours of motion clips of 120 subjects. Specifically, the dataset contains 130 minutes of walking and 110 minutes of jogging.
We also demonstrate robust and general results with the Ubisoft La Forge Animation (LaFAN1) dataset <cit.>, an open source motion capture dataset containing 5 subjects and 77 sequences. For the purposes of this work, we only considered actions themed Walk and Run, which consist of a total of 15 sequences and 74 minutes of data.
We note that these motions are very different from the ones in our in-house dataset, containing diverse and hard behaviors and gait styles.
At inference, we provide input to the policy with a Meta Quest headset and controllers device.
§.§ Real-time Retargeting with Headset and Controller
We thank the QuestSim <cit.> authors for providing us with testing data and video references.
With our method, we are able to control different characters in real time with only headset and controller information.
Importantly, we are able to estimate the lower-body pose of the user from only three points in the upper body and correctly match the user action while transferring it to a character with a different morphology.
Our virtual characters behave in a physically plausible manner and do not suffer from jittering, foot sliding or ground penetration.
Moreover, we are able to generalize to users not present in the training set and users of different heights.
In <ref> we show a sequence in which all three characters are controlled by an unseen user.
§.§ Retargeting using only Headset
Some VR systems provide only a head-mounted device (HMD), without the two controllers. This provides an even more challenging domain, requiring the policy to predict a full-body pose and control a virtual character from a one-point input.
Nonetheless, our trained models are robust to the lack of controller signal and are able to retarget real-time data from unseen users even from this extremely sparse input, albeit with a lower quality than before. We invite the reader to watch the video available in the supplementary material.
§.§ Reward component ablations
Some reward components are essential to get good motions. Here we go through a few interesting examples.
§.§.§ Contact Reward
The contact reward shapes the gait style of the character. Both Oppy and Dino display different locomotion behaviors when using this reward component. Furthermore, as the character size changes, more signal can be transferred to the simulated character. In <ref> we show Oppy in two different sizes. When Oppy's size matches the user, it performs the same gait style and covers the same distance; when it is smaller, matching the correct gait style means it travels a shorter distance, while it can instead adopt a faster gait to keep up with the user, depending on the weighting of the reward components.
Similarly, in <ref>, the different frames show the matching gaits between the three characters and the user.
§.§.§ Orientation Reward
Providing a signal for mimicking head and root orientation is essential for higher fidelity in tracking the user's head and overall movements. We show in <ref> how Dino without the head orientation component is unable to correctly move its head in the same way as the user. As shown in the supplementary video, both Oppy and Dino without the head orientation reward component show the head wobbling left and right while walking. These characters have heavy heads that require learned, active control.
§ DISCUSSION
We discuss different capabilities and components of our system.
§.§ Physics-based control
Physics acts as a powerful helper in driving the motion of components with missing pose information, with the skeleton description as underlying prior.
For the tail of Dino, the simulator affords several stylization options: we can either allow more joint mobility and passively actuate the tail through a PD controller with a fixed setpoint, treating it as secondary motion, or let the policy make active control decisions.
In <ref> we show three examples, in which Dino's tail is fixed, passively actuated, or controlled by the learned policy.
Tail and ears of Oppy are all treated as secondary dynamics.
This stylization would not be possible in a kinematic retargeting setting.
§.§ Controlling the style
Our method is robust to different sets of parameters. Once changed, most parameter settings still yield a reasonable motion controller, albeit with a different style. As described in <ref>, the contact reward shapes the gait style of the character and, modified together with the size of the character, produces different gait styles.
The kinematic retargeting described in <ref> only needs to be rough to produce sensible motions, as the physics dynamics correct the artifacts. Moreover, tuning the key joints of the kinematically retargeted motion produces an overall modification of the style. For example, it is possible to give Dino a more horizontal posture, with the tail straight behind the back and not touching the ground, by tuning the spine parameter to be more bent over. An illustration of this posture is provided in <ref> and in the supplementary video; a noticeable difference can be observed compared to <ref>.
§.§ Importance of asymmetric observations
During training we provide a richer observation to the value function compared to the one we provide to the policy. Specifically, while at inference the controller receives only real-time sparse information (i.e. no future and no full-body pose), there is no need to constrain the value function, since at training time this signal is available.
In our experiments, we notice that training a policy with a value function that receives no future and no full-body pose yields an overall less robust policy. It is able to retarget easy walking examples coming from the training data, but it fails at harder motions like running and is incapable of generalizing to real data coming from an unseen user.
§.§ Quality of open-source datasets
We test our method with two different datasets, a 4 hour in-house dataset and a 74 minutes open-source dataset.
While we notice that a larger and more diverse dataset improves the quality of the final motions, models trained with either of these datasets are robust and capable. Both are able to generalize to unseen users and perform in real time, even with headset-only sensory input.
§ CONCLUSIONS
We have presented a method to retarget a user's motion to simulated characters,
in a challenging setting: the target characters can differ significantly in size and body morphology;
we require a real-time remapping; and the mapping needs to be driven
by the sparse motion data coming from an AR/VR device.
We show that physics-based simulations, driven by asymmetric actor-critic RL policies,
allow for effective retargeting in this difficult setting.
The motions generated by the policies track those of the user while also
being appropriate to the physics of the target character.
We introduce a general reward description which allows for tuning of the degree of supervision
and adapts to a range of character morphologies.
Numerous ablations allow us to understand the impact of various parameters and design choices,
including varying degrees of available tracking information, the impact of contact rewards,
choices related to the secondary motion of tails and ears, and more.
Our work still has a number of limitations.
Our controller fails to track challenging motion sequences, where the user performs fast and dynamic movements or uncorrelated upper/lower-body motions. In these scenarios, a kinematic-based controller acting directly in pose space will still be able to produce a motion, albeit not of high quality, and it can catch up as parts of the motion become easier by effectively "teleporting" between poses without physical correlation. Our controller, instead, has to produce a correct sequence of joint torques to control the character and may suffer from compounding tracking errors until it fails. An approach that divides the pipeline into two stages, similar to <cit.>, where a network first predicts the full-body pose and then a high-frequency controller outputs torques, could allow us to regain the advantages of kinematic-based systems when needed.
While our framework allows richer forms of self expressions for users, empowering them to control different kind of characters, we are only scratching the surface of the complexities that arise due to different target skeletons.
Our characters are still bipeds.
Support for more complex characters might be achieved by supplying skeleton information to the policy <cit.>, using graph neural networks to learn a flexible policy similarly to <cit.>, or training an auxiliary network to find a mapping between source and target skeletons.
§ REWARD DETAILS
Parameter values for each term of <ref> and <ref> are provided in <ref>.
Given the state of the simulated character and the ground truth pose coming from the motion capture dataset, the distance metric for the different imitation reward components is formulated as a weighted sum of squared Euclidean distances between the two values:
d(x_sim, x_gt) = ∑_i w_i ‖ q_x, sim - q_x, gt‖_2^2
where i ranges over the joint angles or the link positions and the weights vary according to the character.
As described in <ref>, the imitation reward defines the degree of supervision.
The more alike the two characters are, the more we can rely on this reward. For Jesse, in fact, all joint weights are equal to 1. For Oppy and Dino, which have a different lower-body size compared to a human, we rely more on the style reward for a good motion and decrease the weight of all lower-body joints to 0.3.
For the link weights, for Oppy and Dino we set all weights to zero other than for the root, which is set to 1; for Jesse we also track the end effectors.
The contact distance metric is also computed from the ground-truth human motion data and the simulated character data.
We define that a human foot is in contact if its height is less than 20 cm above the ground and the norm of its velocity is less than 0.4 m/s.
For the simulated character, a force threshold of 1 N is set on the feet link.
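With these thresholds, the boolean contact matching can be sketched as follows (illustrative Python, names our own):

import numpy as np

def mocap_foot_contact(height, velocity):
    # Mocap feet: below 20 cm and slower than 0.4 m/s count as contact.
    return height < 0.20 and np.linalg.norm(velocity) < 0.4

def sim_foot_contact(contact_force):
    # Simulated feet: contact force above the 1 N threshold.
    return np.linalg.norm(contact_force) > 1.0

def contact_reward(mocap_feet, sim_feet):
    # Boolean match of left/right contact states between user and avatar.
    return float(all(m == s for m, s in zip(mocap_feet, sim_feet)))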
The orientation distance metric, given the two orientations as quaternions, first computes the composition of the ground-truth quaternion with the inverse of the simulated quaternion, and then takes the norm of its axis-angle representation.
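This quaternion distance can be sketched as below; we assume unit quaternions in (w, x, y, z) convention, which the paper does not specify.

import numpy as np

def quat_mul(q1, q2):
    # Hamilton product of two quaternions (w, x, y, z).
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def orientation_distance(q_gt, q_sim):
    # Relative rotation q_gt * q_sim^{-1}; its rotation angle is the
    # norm of the axis-angle representation.
    q_inv = np.array([q_sim[0], -q_sim[1], -q_sim[2], -q_sim[3]])
    dq = quat_mul(q_gt, q_inv)
    dq = dq / np.linalg.norm(dq)
    return 2.0 * np.arccos(np.clip(abs(dq[0]), 0.0, 1.0))

# 90-degree relative rotation about the z axis:
qa = np.array([1.0, 0.0, 0.0, 0.0])
qb = np.array([np.cos(np.pi/4), 0.0, 0.0, np.sin(np.pi/4)])
print(np.degrees(orientation_distance(qa, qb)))  # ~90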
§ PROXIMAL POLICY OPTIMIZATION
Let an experience tuple be e_t = (o_t, a_t, o_t+1, r_t) and a trajectory be τ = {e_0, e_1, …, e_T}.
We episodically collect trajectories for a fixed number of environment transitions and we use this data to train the controller and the value function networks.
The value function network approximates the expected future returns of each state, and is defined for a policy π as
V^π(o) = E_o_0=o, a_t∼π(· | o_t)[∑_t=0^∞γ^ t r(o_t, a_t) ].
This function can be optimized using supervised learning due to its recursive nature:
V^π_θ(o_t) = γ V^π_θ(o_t+1) + r_t,
where
V^π_θ(o_T) = r_T + γ V^π_θ_old(o_T+1).
In PPO, the value function is used for computing the advantage estimate
A_t = r_t + γ V^π_θ(o_t+1) - V^π_θ(o_t),
which is then used for training the policy by maximizing:
L_π(θ) = 1/T∑_t=1^T min(ρ_tÂ_t, clip(ρ_t,1-ϵ,1+ϵ)Â_t),
where ρ_t = π_θ(a_t | o_t) /π_θ_old(a_t | o_t) is an importance sampling term used for calculating the expectation under the old policy π_θ_old.
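A compact PyTorch sketch of the two training objectives is given below; this is our own illustration consistent with the equations above, not the authors' implementation.

import torch

def ppo_policy_loss(logp_new, logp_old, advantages, eps=0.2):
    # Clipped surrogate objective, negated so it can be minimized.
    ratio = torch.exp(logp_new - logp_old)  # rho_t
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps) * advantages
    return -torch.mean(torch.min(unclipped, clipped))

def value_loss(values, rewards, gamma=0.99):
    # Value regression against bootstrapped targets
    # V(o_t) ~ r_t + gamma V(o_{t+1}); values: (T+1,), rewards: (T,).
    targets = rewards + gamma * values[1:].detach()
    return torch.mean((values[:-1] - targets) ** 2)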
§ TRAINING PARAMETERS
§ TORQUE LIMITS
|
http://arxiv.org/abs/2307.00848v2
|
20230703084247
|
Dynamics of Myers-Perry black holes with almost equal angular momenta in odd dimensions
|
[
"Ryotaku Suzuki",
"Shinya Tomizawa"
] |
hep-th
|
[
"hep-th",
"gr-qc"
] |
TTI-MATHPHYS-21
Dynamics of Myers-Perry black holes with almost equal angular momenta in odd dimensions

Ryotaku Suzuki and Shinya Tomizawa

Mathematical Physics Laboratory, Toyota Technological Institute
2-12-1 Hisakata, Tempaku-ku, Nagoya 468-8511, Japan

sryotaku@toyota-ti.ac.jp, tomizawa@toyota-ti.ac.jp

Abstract
We investigate the nonlinear dynamics of D=2N+3 Myers-Perry black holes with almost equal angular momenta, which have N equal spins out of the possible N+1 spins.
In particular, we study the ultraspinning instability and the fate of its nonlinear evolution using the large D effective theory approach.
We find that every stationary phase can be mapped to its counterpart in the singly rotating phase within the leading order effective theory. From the known results for singly rotating solutions, we obtain the phase diagram of almost equally rotating black holes.
We also obtain a certain implication for the possible topology changing transition.
§ INTRODUCTION
Higher-dimensional (more than four dimensions) black holes and other extended black objects have played
a central role in the research of gravitational theories.
It is now evident that even within the vacuum Einstein theory, there is a much
richer variety of black hole solutions in higher dimensions than in four dimensions, since higher-dimensional black holes can have
several rotations in multiple independent rotation planes
and the relative competition between the gravitational and centrifugal potentials is essentially different from four dimensions.
Such fascinating and profound nature is already found in the simplest rotating spherical solution derived by Myers and Perry <cit.>.
In particular, in five-dimensional vacuum Einstein gravity, the first discovery of black ring solutions by Emparan and Reall <cit.>
surprised many relativists because there can exist a rotating spherical black hole and two rotating black rings with the same mass and angular momentum, providing evidence against the uniqueness property of black holes in higher dimensions, in contrast to four dimensions.
Owing to the development of solution-generating methods applicable to the five-dimensional vacuum Einstein equations, various exact solutions describing other black objects were subsequently found.
Pomerasky and Sen'kov <cit.> first succeeded in the construction of the black ring solutions with two independent rotations in the two orthogonal rotational planes.
Elvang and Figueras <cit.> constructed the exact stationary asymptotically flat vacuum solution describing black saturn, i.e., a spherical black hole surrounded by a black ring.
Iguchi and Mishima <cit.> found the exact solutions of two concentric rotating black rings called black dirings, and Elvang, Rodriguez <cit.> and Izumi <cit.> independently obtained the exact solutions of the dirings with spins around two independent planes describing two concentric orthogonal rotating black rings.
Moreover, several authors
<cit.>
tried to find vacuum solutions of black lenses, which have horizons of lens-space topology, but unfortunately all of these attempts failed, though such solutions were found as supersymmetric regular solutions in the five-dimensional minimal ungauged supergravity
<cit.>.
In contrast to the stable Kerr family in D=4, Myers-Perry black holes have a multitude of instabilities at large enough angular momenta, which leads to richer dynamics.
These so-called ultraspinning instabilities are caused by the deformation of the horizon shape due to the centrifugal forces <cit.>.
Since Myers-Perry black holes are rotating around the multiple axes,
different spin configurations can lead to different dynamics. The existence of such instabilities are first confirmed by the linear perturbation in several setups <cit.>.
The outcome of the ultraspinning instability are well understood in the singly rotating setup.
The numerical simulation revealed that this instability can end up with the fragmentation of the horizon accompanied by singular pinch-offs <cit.> as observed in the black string simulation <cit.>. The zero modes of the instability also leads to the deformed stationary phases which eventually are connected to multi black rings or saturns through the the topology changing transition <cit.>.
On the other hand, such nonlinear phenomenon are not yet studied in more general configurations with multiple spins.
In higher dimensions, the dynamical analysis of black holes has been relied heavily on the numerical calculation in most cases due to the lack of analyticity. The large dimension limit, or large D limit <cit.> provides a (semi-)analytic and systematic procedure in the search of higher dimensional black holes at the cost of 1/D-expansion.
Remarkably, the black hole dynamics is reformulated into a certain effective theory on the horizon surface, which is often called large D effective theory <cit.>.
Moreover, the large D limit offers another useful simplification for rotating black holes, such that the near horizon geometry at the leading order coincides with that of the static solution whose line elements are replaced by the local boosted frame <cit.>,
so that the field equations become decoupled and integrable.
With this property, the large D limit has been used for the study of the linear spectrum of the ultraspinning instability of equally rotating Myers-Perry black holes <cit.>, the dynamics of the singly rotating Myers-Perry black holes <cit.>, as well as
the construction of equally rotating black holes with the Maxwell charge <cit.> and
Gauss-Bonnet correction <cit.>.
In particular, the dynamics of singly rotating black holes at large D has been investigated vigorously through the so-called blob approximation, in which the dynamics of a spherical black hole is identified as that of a Gaussian mass profile or black blob on the black brane magnifying the region around a polar axis of the black hole by √(D) <cit.>.
The blob approximation was helpful in finding various (non)axisymmetric stationary phases at large D <cit.> ( figure <ref> ).
Although nonaxisymmetric horizons cannot remain stationary at finite D, they would exist as long-lived intermediate states at large enough D due to the e^-D-suppression of the radiation effect <cit.>.
In fact, the numerical simulation in D=6,7 actually provides the evidence of such long-lived intermediate states <cit.>.
Beyond a single connected horizon, the blob approximation can also describe the collision of two separate black holes at large D <cit.>.
The similar approximation was also used to study the dynamics of AdS black holes <cit.>.
In this article, we study the nonlinear dynamics derived from ultraspinning instabilities of equally rotating Myers-Perry black holes in D=2N+3 using the large D limit. Focusing on the near polar dynamics, we obtain the dynamical effective theory of slightly broader case, i.e., N equal angular momenta plus one different momentum out of N+1 angular momenta, which we call almost equal rotation. Surprisingly, we find that the large D effective theory is equivalent to that of the singly rotating case.
Therefore, with the knowledge of singly rotating phases, the phase diagram is obtained for several sets of angular momenta.
The rest of the article is organized as follows.
In section <ref>, the metric ansatz and scaling assumptions are explained. The metric solutions expanded in 1/D are shown in section <ref>. In section <ref>, we present the large D effective theory of almost equally rotating black holes.
The phase diagram of stationary solutions are studied in section <ref>. We summarize and discuss the possible outcome in section <ref>.
We also attach an auxiliary Mathematica notebook that encloses the data for the metric solutions up to the next-to-leading order.
§ SETUP
In this article, we study rotating black holes with in D=2N+3 dimension, solving the vacuum Einstein equation
R_μν = 0.
It is known that, if all N+1-spins are equal, D=2N+3 Myers-Perry black holes have the enhanced symmetry of CP^N with the S^1-fibration <cit.>,
whose metric is given by
ds^2 = -F(r)/H(r)dt^2+dr^2/F(r)+r^2 H(r)^2(Φ+Ω(r)dt)^2+r^2 dΣ^2_N, Φ := dϕ + _N,
where _N and dΣ_N^2 are the Kähler potential and Fubini-Study metric on CP^N, respectively. The metric components are given by
F(r) = 1 - r_0^2N/r^2N+ a^2 r_0^2N/r^2N+2,
H(r) = 1 + a^2 r^2N_0/r^2N+2, Ω(r)=-a r_0^2N/r^2N+2 H(r).
The main advantage of the large D limit is that the metric (<ref>) locally approaches to the large D limit of the Schwarzschild metric <cit.>
ds^2 ≃ - (1-) (dt')^2 + d^2/D^2(-1)+
r_0^2 (Φ')^2+ r_0^2 dΣ_N^2, := (r/r_0)^2N/1-a^2, (D∼ N ≫ 1)
where the one forms (dt',Φ') is given by the local Lorentz boost of (dt,Φ),
dt' := coshα dt - sinhα Φ, Φ' := sinhα dt - coshα Φ, tanhα:=a/r_0.
The same reduction is possible for other general cases of Myers-Perry black holes <cit.>.
Hence, the large D limit of rotating black holes can be studied almost in parallel to the static case.
This metric admits the ultraspinning
instability both in axisymmetric and nonaxisymmetric modes with respect to the ϕ coordinate <cit.>.
Here, we are interested in the end point of the instability at large D.
To consider the nonlinear evolution of such instability, we must break the CP^N-symmetry.
For this, it is convenient to decompose CP^N in terms of CP^N-1 such that
dΣ_N^2 = dθ^2 + sin^2θcos^2θΨ^2 + sin^2θ dΣ^2, _N = sin^2θΨ, Ψ := dψ + ,
where and dΣ^2 are the Kähler potential and Fubini-Study metric on CP^N-1, respectively.
In the following analysis, we assume that the U(1)-symmetry for ϕ is broken in general, while the U(1)-symmetry for ψ is always kept.
We also allow the black holes to have the angular velocity along ψ that corresponds to N spins out of N+1 spins[We present the Myers-Perry solution with such setup in Appendix <ref>.],
which we call almost equal rotations.
This is because, at the large D limit, having N equal spins and N+1 equal spins are approximately the same.
Blob approximation
To treat the dynamical deformation of the spherical horizon correctly, we apply the blob approximation by introducing the near polar coordinate z which zoom in around the ψ-axis at θ =π/2, by √(2N), such that
θ := π/2 - z/√(2N).
In the singly rotating case, the similar approximation leads to the simpler formulation in which the dynamics of spherical black holes is obtained through the analysis of the Gaussian blob on the black 2-brane <cit.>.
Here we show that the almost equally rotating configuration admits the same approximation.
Let us demonstrate how the blob approximation improves a dynamical analysis at large D. For simplicity, we consider the scalar harmonics in a flat background
∇^2 f =0,
where the background metric is decomposed by using the CP^N-1 metric such that
ds^2 = -dt^2+dr^2 + r^2Φ^2+r^2sin^2θcos^2θΨ^2 +r^2 dθ^2 +
r^2sin^2θ dΣ^2.
For the zero modes f(r,ϕ,θ) = e^im ϕ R(r) S(θ), the angular part of eq. (<ref>) becomes
S”(θ)+(N-1+N cos (2 θ )) θθ S'(θ)+ (ℓ(ℓ+2N)-m^2 ^2θ)S(θ) =0,
where ℓ(ℓ+2N) is the separation constant between R(r) and S(θ).
Here we focus on the angular mode function S(θ).
The exact solution regular at θ = 0 is obtained as
S(θ) = Ccos^mθ _2 F_1(m-ℓ/2,m+ℓ/2+N, N,sin^2θ).
For regularity at θ=π/2, we must impose the quantization condition for ℓ,
ℓ = 2s+m, s=0,1,2,….
In turn, by simply taking the limit N→∞, eq. (<ref>) reduces to the first order equation
θ S'(θ)+ℓ S(θ)=0,
which gives the leading solution in the 1/N-expansion
S(θ) = C cos^ℓθ.
Clearly, this does not carry the information about the quantization condition (<ref>).
It is straightforward to show that adding the 1/N-correction does not improve the situation either.
The reason is because the limit N→∞ of S(θ) is not uniform in the entire range of θ∈ [0,π/2], i.e., the 1/N-expansion of the exact solution (<ref>)
is consistent with eq. (<ref>) only in the range |θ-π/2| ≫ 1/√(N)[
We used the formula
_2 F_1 (a,b+λ,λ,z) = (1-z)^-a(1-a b/λz/z-1+…) for λ≫ 1.
]
S(θ) ≃ C cos^ℓθ( 1+ tan^2θ /N).
Thus, the solution (<ref>) fails to capture the boundary behavior at θ = π/2. One can also check that the zeros of the exact mode function (<ref>) with (<ref>) gather around θ-π/2 = O (1/√(N)) for N ≫ 1.
In fact, the previous large D analysis with θ had to impose the condition (<ref>) as an extra condition to the effective theory <cit.>.
On the other hand, the near polar coordinate z in eq. (<ref>) properly resolves the regularity condition both at θ=0 (z=∞) and θ=π/2 (z=0) at N→∞. With eq. (<ref>), eq. (<ref>) at N→∞ becomes
z^2 S̃”(z) + z(1-2z^2)S̃'(z) + (2ℓ z^2-m^2)S̃(z) =0,
where S̃(z):=S(π/2-z/√(2N)).
The solution regular at z=0 is given by
S̃(z) = C z^m _1 F_1(m-ℓ/2,m+1,z^2).
It is easy to check that S̃(z) diverges as e^z^2 for z→∞ unless eq. (<ref>) is satisfied.
Metric Ansatz
Having these in mind, we consider the following metric ansatz
ds^2 = -A (e^(0))^2+2U e^(0) dr - 2C_i e^(0)e^(i) + H_ij e^(i)e^(j)+r^2 sin^2 θ dΣ^2,
where i,j=1,2,3 and the boosted frame is given by
e^(0) := γ(dt-Ω (dϕ + sin^2θΨ)),
e^(1) := γ(dϕ+sin^2θΨ-Ω dt), e^(2) :=dθ, e^(3) := sinθcosθΨ, γ :=√(1-Ω^2).
In the following analysis, for convenience, we use
n:=2N
as the large parameter and consider the expansion in 1/n instead of 1/D or 1/N.
The near horizon coordinate are introduced by
:= (r/r_0)^n,
where we set r_0=1.
In the z-coordinate (<ref>), the boosted frame (<ref>) is approximated by
e^(0) = γ ( dt - Ω (dϕ + Ψ))+n^-1 , e^(1) = γ ( dϕ+Ψ-Ω dt)+n^-1, e^(2) = -dz/√(n) , e^(3) = z/√(n)(Ψ+n^-1).
For later convenience, we also define the rescaled frame which remains 1 at n→∞,
ê^(0) = e^(0), ê^(1) := e^(1), ê^(2) := -√(n) e^(2) ê^(3) := √(n) e^(3).
To obtain the reasonable solution at n→∞, we should scale each components of C_i and H_ij as
C_1 = nĈ_1, C_2 = -√(n)Ĉ_2, C_3 = √(n)Ĉ_3, H_ii = Ĥ_ii, H_23=-Ĥ_23,
H_12 = -√(n)Ĥ_12, H_13 = √(n)Ĥ_13.
Then, the rescaled components are expanded in 1/n as functions of (,t,ϕ,z),
A = ∑_k=0^∞A^(k)/n^k, U = 1 + n∑_k=0^∞U^(k)/n^k, Ĉ_i = ∑_k=0^∞C_i^(k)/n^k, Ĥ_ij = δ_ij+n∑_k=0^∞H_ij^(k)/n^k.
To remain the asymptotic flatness, each variables should follow the asymtptotic boundary condition at →∞ given in Appendix. <ref>.
§ METRIC SOLUTIONS
Now, we solve the metric solution in the 1/n-expansion (<ref>) up to the next-to-leading order (NLO) by integrating the evolution equations with respect to .
The constraint equations are imposed in the later section to obtain the effective equation.
Leading order
The leading order solutions can be obtained as
A^(0) = 1 - m(t,ϕ,z)/, C_i^(0) = p_i(t,ϕ,z)/ -2Ω/1-Ω^2log δ_i1 ,
H^(0)_11= 2/1-Ω^2log, H^(0)_ij = 2logδ_ij+p_i(t,ϕ,z) p_j(t,ϕ,z)/m(t,ϕ,z) for (i,j)≠(1,1)
and
U^(0) = -Ω ^2 log/1-Ω ^2-p_2(t,ϕ ,z)^2+p_3(t,ϕ ,z)^2/2 m(t,ϕ ,z),
where m(t,ϕ,z) and p_i(t,ϕ,z) are integration functions of -integrals, which correspond to the mass density and momentum densities. Note that m and p_i are not arbitrary functions, but must be solutions of the effective equation that will be presented later.
The horizon is given by =m at the leading order.
The metric functions other than A and C_i are imposed the regularity at =m as well as the asymptotic boundary condition.
Next-to-leading order
As usual in the large D effective theory analysis, in the higher order, there are ambiguities in the solution of A^(k) and C^(k)_i that corresponds to the redefinition of m and p_i. For simplicity, we set so that
A^(k)(=m)=0, C^(k)(=m)=0 (k > 0).
One should note that the former condition does not necessarily sets the event horizon at =m up to NLO. Later, we will see that the event horizon differs from =m at 1/n, when m is not constant. As we will see later, the event horizon can differ from =m in general from n^-1. The NLO solutions are given by
A^(1) =- 2Ω^2/1-Ω^2log
+(-z ∂_z p_2 +(z^2-1)
p_2 +∂_ϕ p_2 ) log (/m)/z
-2 Ω ^2 m log m /(Ω ^2-1),
and
C_1^(1) = log (/m)/( z ∂_z p_1p_2-∂_ϕ p_1 p_3+p_1 (z ∂_z p_2-(z^2-1)
p_2-∂_ϕ p_3)/z m . . + p_1 (∂_ϕ m p_3-z ∂_z m p_2)/z m^2 )-2Ω/1-Ω^2(log^2 -mlog^2 m/),
C_2^(1) = log (/m)/(-p_2 (∂_ϕ p_3-2 z ∂_z p_2)+p_3 (∂_ϕ p_2+p_3)+(z^2-1)
p_2^2/z m. . + p_2 (∂_ϕ m p_3-z ∂_z m p_2)/z m^2+2 γΩ p_3),
C_3^(1)=log (/m)/(p_3
(z ∂_z p_2-2 ∂_ϕ p_3)+p_2 (z ∂_z p_3-(z^2-2) p_3)/z m. . + p_3 (∂_ϕ m p_3-z ∂_z m p_2)/z m^2-2 γΩ p_2),
and
H_11^(1) = -2Ω^2 log^2 /1-Ω^2 + p_1^2/ m.
Because of the lengthy expressions, we will not show the detail, but other H_ij^(1) and U^(1) are also solved to evaluate thermodynamic variables up to NLO. We present the solutions up to NLO in the auxiliary Mathematica notebook.
§.§ Local event horizon
To keep track of the dynamical horizon, it is convenient to introduce a so-called local event horizon <cit.>, which is later used to evaluate the entropy and temperature.
The position of the local event horizon r=r_h(t,ϕ,z) is defined as a null hypersurface ||dr-dr_h||^2=0. In the boosted frame, the derivative of r_h is written as
dr_h = ∂_0 r_h e^(0) + ∂_i r_h e^(i) ,
where the dual basis is given by
∂_0 := γ ( ∂_t + Ω∂_ϕ), ∂_1 := γ ( ∂_ϕ+ Ω∂_t), ∂_2 := ∂_θ = -√(n)∂_z, ∂_3 := θθ∂_ψ - tanθ∂_ϕ
= √(n)/z(∂_ψ -∂_ϕ+n^-1).
We also introduce the rescaled dual basis by
∂̂_0:=∂_0, ∂̂_̂1̂:=∂_1, ∂̂_2 := - √(n)∂_2, ∂̂_3 := √(n)∂_3.
With the metric up to NLO, the null condition ||dr-dr_h||^2=0 leads to
_h := r_h^n = m - n((p_2-∂_z m)^2/m+(p_3-z^-1∂_ϕ m)^2/m-2 γ z^2 ( ∂_t+Ω∂_ϕ) m)+n^-2.
The cross section with the t= const. surface is then given by
ds_H^2 = H_ij ( e^(i)-v^i e^(0))(e^(j)-v^j e^(0))+ r_h^2 cos^2 (z/√(n))
dΣ^2,
where the leading order solution (<ref>) leads to
v^1 = nv̂^1 = n(p_1-∂̂_1 m/m-2γ^2 Ωlog m), v^2 = -√(n)v̂^2= -√(n)p_2 - ∂̂_2 m/m , v^3 = √(n)v̂^3= √(n)p_3-∂̂_3 m/m.
In the coordinate basis, this is rewritten as
ds_H^2 = H̃_ab ( dy^a - v^a dt)(dy^b-v^b dt) +r_H^2 sin^2θ dΣ^2,
where H̃_ab (a,b=ϕ,z,ψ) is obtained from H_ij with the transformation (<ref>) and
v^ϕ = Ω -v^ψ +n (γ^-2v̂_1+z^2 v^ψ)+n^-2, v^z = γ^-1v̂^2+n^-1,
v^ψ = v̂^3/γ z+n^-1.
§ LARGE D EFFECTIVE THEORY
Substituting the leading order solution (<ref>)
to the constraint equations on the = const. surface, we obtain the following effective equation
(γ∂_t +γΩ∂_ϕ) m-z∂_ϕ(p_3+z∂_ϕ m)+ (∂_z-z+z^-1) (p_2-∂_z m)=0,
(γ∂_t + γΩ∂_ϕ) p_1 - (∂_z-z+z^-1) (∂_z p_1-p_1p_2/m) - z∂_ϕ( z∂_ϕ p_1 + p_1 p_3 / m ) + γ^2Ω( (∂_z-z+z^-1) (∂_z m+p_2)+z^-2∂_ϕ^2 m-z^-1∂_ϕ p_3)
+2 γ[z^-1∂_ϕ p_2+(∂_z-z+z^-1) p_3 ] +γ∂_ϕ m=0,
(γ∂_t + γΩ∂_ϕ) p_2- (∂_z -z+z^-1)(∂_z p_2-p_2^2/m) -z∂_ϕ( z∂_ϕ p_2 + p_2p_3/m)
+∂_z m (2 Ω ^2-1/Ω ^2-1) - 2∂_ϕ p_3 /z^2-(1-z^-2) p_2-p_3^2/zm
+2 γΩ(p_3+z^-1∂_ϕ m)=0,
(γ∂_t +γΩ∂_ϕ) p_3 - (∂_z-z+z^-1)(∂_z p_3-p_2p_3/m)-z∂_ϕ( z∂_ϕ p_3 + p_3^2/m)
-z∂_ϕ m (2 Ω ^2-1/Ω ^2-1) +2 ∂_ϕ p_2/z^2 +(1-z^-2) p_3+ p_2 p_3/z m +2 γΩ (∂_z m-p_2) =0.
As discussed in ref. <cit.>, these equations can be understood as an effective theory living in the overlap region B between the near-horizon and asymptotic regions, 1≪≪ e^n ( or 1≪log≪ n), where the metric on the R= const. surface becomes expanded both in 1/n and 1/ as
ds_ B^2 ≃ -dt^2+(dϕ+(1-z^2/n)(dψ+))^2+n(dz^2+z^2 (dψ+)^2 )+(1-z^2/n) dΣ^2.
Spectrum around equally rotating Myers-Perry black holes
The uniform solution corresponds to the equally rotating Myers-Perry black hole
m=1, p_i=0.
One can see that the known spectrum are reproduced by considering the stationary perturbation
m = 1 + e^i k (ϕ-Ω t) f_0(z), p_i = δ p_i e^i k (ϕ-Ω t) f_i(z).
Plugging this into eqs. (<ref>), we obtain the master equation for f_0(z)
f_0”(z) -(z-1/z)
f_0'(z)+ (1/1-Ω ^2-k^2/z^2) f_0(z)=0,
and f_i(z) are written in terms of f_0(z).
By introducing the new variables
x := z^2/2 , f̃_̃0̃(x) := z^-k/2 f_0(z),
this reduces to the associated Laguerre equation
x f̃_̃0̃”(x) +(k+1-x) f̃_̃0̃'(x) + 2(1-Ω^2-k)f̃_̃0̃(x)=0.
Hence, the uniform solution has the normalizable stationary mode only if
2(1/1-Ω^2-k) = I , I=0,1,2,…
or
Ω = √(1-1/k+2I).
This is consistent with the perturbative analysis at large D <cit.>.
The mode function is given by the associated Laguerre polynomial
f_0(z) = C z^k L^k_I(z^2/2).
Conservation form
The physical meaning of the effective equation is clearer if we switch the variables from (m,p_i) to (m,v̂^i).
Eq. (<ref>) can be rewritten in the form of the mass conservation
∂̂_0 ( z e^-z^2/2 m )+ ∂̂_2 ( z e^-z^2/2 m v̂_2)+ ∂̂_3 ( z e^-z^2/2 m v̂_3) =0,
where the extra factor comes from the volume of the spacial cross section at large D
sinθcosθ×sin^n-2θ≃ z e^-z^2/2.
We do not show the detail, but
one can also find that eqs. (<ref>) and (<ref>) can be written in terms of the conservation of the Brown-York tensor in the direction of ϕ and ψ as well. Note that eq. (<ref>) cannot be written in the conservation, reflecting the fact that the asymptotic background is not symmetric in z ( or θ).
§.§ Stationary solutions
Now we show a stationary assumption reduces eq. (<ref>) to a single master equation. First, we clarify the stationary condition for the effective theory.
From eq. (<ref>),
the horizon null generator ξ is given by
ξ^μ∂_μ = ∂_t + v^ϕ∂_ϕ + v^z ∂_z + v^ψ∂_ψ.
To obtain the stationary horizon, the horizon must stay at the same position along ξ
(∂_t+v^a ∂_a) m=0.
As in ref. <cit.>, we also require
that ξ becomes the Killing vector of the metric (<ref>),
which imposes
∂_t v^a = 0,
and the shear free condition
σ_ab := D_(a v_b) =0, v_a:=h̅_ab v^b,
where h̅_ab is the spacial part of eq. (<ref>) and
D_a are the covariant derivatives for h̅_ab.
In the orthogonal frame of h̅_ab, the shear tensor becomes
σ_11≃γ^2 n∂_ϕv̂_1, σ_12≃ -2√(n)γ^2 (∂_z v̂^1 +γ∂_ϕv̂^2+2γv̂^3), σ_13≃ - 2√(n) z γ^2(∂_ϕv̂^1+2 z γv̂^2-z γ∂_ϕv̂^3) , σ_22≃γ^-1∂_z v̂_2, σ_23≃2γ z(-z ∂_z v̂^3 + v̂^3 + ∂_ϕv̂^2), σ_33≃v̂^2-∂_ϕv̂^3/γ z,
where σ_ij := E_i^a E_j^b σ_ab with
E^1_a dy^a = dϕ+(1-z^2/n+…)Ψ, E^2_a dy^a = -n dz, E^3_a dy^a = z/√(n)Ψ+….
The only solution that satisfies eqs. (<ref>) and (<ref>) and regular at z=0 is
v̂^1 = -γ^2 z^2 Ω_ψ + Ω_1, v̂^2 = 0, v̂^3 = γ z Ω_ψ,
where Ω_1 and Ω_ψ are the constants.
This is equivalent to
v^ϕ = Ω-Ω_ψ+Ω_1/n , v^z = 0, v^ψ = Ω_ψ.
Using the parameter shift Ω→Ω + 1/n, we can always set Ω_1=0 and the null generator becomes
ξ^μ∂_μ = ∂_t + (Ω-Ω_ψ) ∂_ϕ + Ω_ψ∂_ψ.
We should note that, although we started from the equally rotating ansatz,
our setup also includes the states between N+1 equal spins for Ω_ψ=0 and N equal spins for Ω_ψ=Ω.
Therefore, from the conditions (<ref>) and (<ref>),
m must take the form of
m =exp (ϕ-(Ω-Ω_ψ) t, z),
and p_i are, then, expressed by through eqs. (<ref>) and (<ref>) as
p_1 = (∂̂_1 - γ^2 z^2 Ω_ψ+2 Ωγ^2 )e^ ,
p_2 = ∂̂_2 e^, p_3 = (∂̂_3 + Ω_ψγ z) e^.
Plugging these into the effective equation (<ref>),
we obtain a single master equation
∂_z^2 + (z^-1-z) ∂_z +z^2∂_ϕ^2 + 2((∂_z )^2 + z^2 (∂_ϕ)^2)+γ^2(-_0)+1/2Ω_ψ ( Ω_ψ-2Ω) γ^2 z^2 =0,
where the parameter _0 is the integration constant that fixes the horizon scale.
For simplicity, we set _0=0.
A remarkable fact is that, with change of the variable
= -2(1-Ω^2)+z^2/2 + ,
eq. (<ref>) coincides with the master equation for the singly rotating case <cit.>
∂_^2 + ∂_+^2∂_ϕ^2
+ 2((∂_)^2 + ^2 (∂_ϕ)^2)++2ω_s^2 ^2 =0,
where
:= γ z , ω_s := (Ω_ψ-Ω)√(1-Ω^2).
Therefore, the results for the single rotating case <cit.> can be carried to the almost equally rotating case straightforwardly.
In the later section, we will see how the phase diagram is mapped as well.
One must note that this coincidence is not guaranteed up to the higher order in 1/n.
§.§ Thermodynamic variables
ADM charges
Reflecting the background symmetry in t,ϕ,ψ, we have the conserved asymptotic charges of M,J_ϕ,J_ψ, which are given in terms of Brown-York's quasi local tensor calculated by
8 π G T^μ_ν := lim_→∞√(h) (K δ^μ_ν - K^μ_ν) - 8 π GT̃^μ_ν,
where ∇_μ and K_μν is the covariant derivative and extrinsic curvature of the intrinsic metric h_μν on the = const-surface.
The regulator T̃^μ_ν is determined so that ∇_μT̃^μ_ν=0.
Corresponding to the background Killing vector ∂_t, ∂_ϕ, ∂_ψ, these tensors satisfy the conservation law
∂_μ T^μ_t =0, ∂_μ T^μ_ϕ =0, ∂_μ T^μ_ψ =0.
The ADM mass is given by
M :=
nΩ_n+1/16 π G∫ρ_M dV_(z,ϕ), ∫ dV_(z,ϕ):= ∫_0^∞ dz ∫_0^2πdϕ/2π ,
where the mass density up to NLO is calculated by the metric solution up to NLO as
ρ_M := -16 π G n T^t_t = γ^2 z e^-z^2/2[ m + n(
(1+z^2/3-z^4/12)m - 2 γ^2 Ω ^2 m log m.. ..
-(2-Ω ^2+log m ) ( (∂_z-z+z^-1) p_2-z^-1∂_ϕ p_3).. + 2 Ω(p_1- γ(1-Ω ^2) ∂_ϕ m )
- γΩ ^2 (∂_t+Ω∂_ϕ) m )],
and we used the relation between the volume of CP^n/2-1 and that of n-sphere Ω_n,
Ω_n+1 = 2π/nΩ_n-1 = (2π)^2/n vol(CP^n/2-1).
Similarly, we define the angular momentum densities in ϕ and ψ directions by
ρ_ϕ = 16 π G n T^t_ϕ, ρ_ψ = 16 π G n T^t_ψ,
and the angular momenta are written by
J_ϕ = nΩ_n+1/16π G∫ρ_ϕ dV_(z,ϕ),
J_ψ = nΩ_n+1/16π G∫ρ_ψ dV_(z,ϕ).
It turns out that ρ_ϕ and ρ_ψ are proportional to ρ_M at the leading order,
and hence we only show the difference at the sub-leading order
ρ_ϕ = Ωρ_M + δρ_ϕ/n, ρ_ψ = Ωρ_M + δρ_ψ/n,
where
δρ_ϕ := z e^-z^2/2( γ^2Ω m+ p_1+ Ω
(∂_z -z+z^-1)p_2- γ(1-Ω ^2) ∂_ϕ m- Ω z^-1∂_ϕ p_3 ),
δρ_ψ :=
z e^-z^2/2(
γ^2 Ω(1-z^2) m + p_1+ z γ p_3+ Ω (∂_z-z+z^-1) p_2
+ γΩ ^2 ∂_ϕ m
- Ω z^-1∂_ϕ p_3).
With the velocity fields (<ref>), this can be written as
δρ_ϕ = z e^-z^2/2 m (Ωγ^2+ v̂^1+ 2Ωγ^2 log m)+ ∂_a (⋯ )^a,
δρ_ψ = z e^-z^2/2m (v̂^1+ z γv̂^3 + 2γ^2 Ωlog m+ γ^2 Ω (1- z^2))+ ∂_a (⋯ )^a,
where ∂_a (… )^a denotes the spacial divergence.
Entropy
The entropy of the dynamical horizon is calculated from the area of the local event horizon (<ref>) as (see Appendix <ref> for detail)
S = Ω_n+1/4G∫ρ_S dV_(z,ϕ),
where the entropy density ρ_S is proportional to the linear combination of the mass and angular momentum and hence conserves at the leading order
ρ_S = γ (ρ_M - Ωρ_ϕ) + nδρ_S.
In terms of the velocity fields (<ref>), the difference δρ_S is expressed as
δρ_S =γ z e^-z^2/2[ (γ^2 log m -1) m-m/2( (v̂^2)^2+ (v̂^3)^2)- (∂_z m) ^2/2 m - (∂_ϕ m) ^2/2 z^2 m ]+∂_a (…)^a.
Therefore, the nonconserving subleading correction is given by the following functional
S_1 = Ω_n+1/4G∫δρ_S dV_(z,ϕ) = Ω_n+1γ/4G∫ z e^-z^2/2[ (γ^2 log m -1) m-m/2( (v̂^2) ^2+ (v̂^3) ^2)- (∂_z m) ^2/2 m - (∂_ϕ m) ^2/2 z^2 m ]dV_(z,ϕ).
Temperature
The temperature is also defined in terms of the surface gravity T=κ/(2π) on the local event horizon if the horizon null generator (<ref>) is the Killing vector (see Appendix <ref> for detail). Plugging the metric solution into eq. (<ref>), we obtain
T = n/4πγ[1 + 1/n(- 2 γ^2Ω ^2-γ^2 log m-Ωv̂^1-(∂_z+z^-1-z) v̂^2-1/2 (v̂^2)^2-1/2 (v̂^3)^2+(∂_z m)^2/2 m^2.. ..-∂_z^2 m/m+(z^2-2 z v̂^2-1) ∂_z m/z m+2 v̂^3 ∂_ϕ m/z m+(∂_ϕ m)^2/2 z^2 m^2+∂_ϕv̂^3/z-∂_ϕ^2 m/z^2 m-2
γ (∂_t+Ω∂_ϕ) m/m)].
By substituting eqs. (<ref>) and (<ref>), the use of the master equation (<ref>) reduces this to the constant
T = n/4πγ(1 -2/nγ^2 Ω^2 ).
The extremal condition up to NLO is then given by
|Ω| = 1 -2n.
First law and Smarr formula
With the stationary conditions (<ref>) and (<ref>), using eq.(<ref>), one can show the first law up to 1/n by taking the variation in and Ω_ψ independently
δ M = T δ S + Ω_ϕδ J_ϕ + Ω_ψδ J_ψ,
where Ω_ϕ := Ω-Ω_ψ.
The Smarr formula can also be shown up to 1/n
n+1/n M = TS+Ω_ϕ J_ϕ + Ω_ψ J_ψ.
Second law
With the use of eqs. (<ref>), the time derivative of eq. (<ref>) leads to the second law
∂_t S = n∂_t S_1 =Ω_n+1/4G n∫ dV_(z,ϕ) z e^-z^2/2(
m (z^-1∂_ϕv̂^2 - ∂_z v̂^3 +z^-1v̂^3 )^2+2 m/z^2(∂_ϕv̂^3- v̂^2 )^2+2 m (∂_z v̂^2) ^2 ) ≥ 0.
Then, the entropy production ceases if
v̂^2=0, v̂^3 = cosnt.× z.
This is consistent with a part of the Killing condition (<ref>).
The contribution of v̂_1 is suppressed to the subleading order in Eq. (<ref>), because of the scaling in eq. (<ref>). Nevertheless, assuming the condition (<ref>) in eq. (<ref>), we obtain m=exp(ϕ-v^ϕ t,z) with v^ϕ = const. and the Laplace equation for w(ϕ-v^ϕ t ,z):=v̂^1(ϕ-v^ϕ t ,z)+γ z v̂^3(z)
∂_z (e^ z e^-z^2/2∂_z w)
+z^2∂_ϕ(e^ z e^-z^2/2∂_ϕ w) =0.
For the regular and normalizable solution both at z=0 and z=∞, w must be a constant as in (<ref>).
It is less clear, but as in refs. <cit.>, eq. (<ref>) can be written as the square of the viscosity tensor (<ref>)
∂_t S = Ω_n+1/4Gn∫ z e^z^2/2 2γ^2 m h^ab h^cdσ_acσ_bd dV_(z,ϕ),
where h_ab is the spacial part of eq. (<ref>).
Here we note that only σ_22,σ_23,σ_33 contribute to the leading order in eq. (<ref>).
§.§ Reduction to the singly rotating phase
All the above thermodynamic quantities can be mapped to those of singly rotating black holes through eqs. (<ref>) and (<ref>). First, we define normalized thermodynamical quantities by the mass scale to eliminate the scaling degree of freedom. The angular momenta normalized by the mass scale become
j_ϕ := 16π G J_ϕ/(n+2)Ω_n+1r_M^n+1 = Ω̃ + nδ j_ϕ, j_ψ := 16π G J_ψ/nΩ_n+1r_M^n+1= Ω̃ + nδ j_ψ,
where Ω̃:=Ω r_M and the mass scale is given by
r_M := (16π G M/(n+1)Ω_n+1)^n≃ 1+nlog(∫ρ_M dV_(z,ϕ)).
δ j_ϕ is given by
δ j_ϕ = ∫ z e^-z^2/2 (m γ^-2v̂_1+2Ω m log m) dV_(z,ϕ)/∫ z e^-z^2/2 m dV_(z,ϕ)- 2Ωlog(∫ z e^-z^2/2γ^2 m dV_(z,ϕ)).
Instead of δ j_ψ, it is more convenient to see the difference between j_ψ and j_ϕ since it vanishes for the equally rotating Myers-Perry (j_ϕ=j_ψ),
Δ j := δ j_ψ -δ j_ϕ =n ( j_ψ- j_ϕ) = ∫ z^2 e^-z^2/2( γ^-1 m v̂^3+ Ω(2-z^2) m) dV_(z,ϕ)/∫ z e^-z^2/2 m dV_(z,ϕ)
.
Note that we keep using Ω instead of Ω̃
in eqs. (<ref>) and (<ref>) as the difference only comes in the higher order (<ref>).
With the stationary conditions (<ref>) and (<ref>), or equivalently (<ref>), the normalized angular momenta (<ref>) and (<ref>) become
δ j_ϕ =- Ω_ψ∫ z^3 e^-z^2/2 e^ dV_(z,ϕ)/∫ z e^-z^2/2 e^ dV_(z,ϕ) + 2 Ω∫ z e^-z^2/2 e^ dV_(z,ϕ)/∫ z e^-z^2/2 e^ dV_(z,ϕ) - 2Ωlog(∫ z e^-z^2/2 e^ dV_(z,ϕ)),
Δ j = 2 Ω - (Ω-Ω_ψ)∫ z^3 e^-z^2/2 e^ dV_(z,ϕ)/∫ z e^-z^2/2 e^ dV_(z,ϕ).
We show that these have simpler expressions using the singly rotating variables calculated in refs. <cit.>.
In particular, with eqs. (<ref>) and (<ref>), eq. (<ref>) can be expressed as
Δ j = 2 Ω + √(1-Ω^2) j_s,
where j_s=j_s(ω_s) is the normalized angular momentum for the singly rotating black holes given by the solution to eq. (<ref>) as
j_s :=∫ω_s z̅^3 e^ dV_(z̅,ϕ)/∫z̅ e^ dV_(z̅,ϕ),
and the angular velocity of the single rotation ω_s is given by Ω and Ω_ψ through eq. (<ref>). Note that, by definition, j_s is an odd function of ω_s.
Eq. (<ref>) is also simplified as
δ j_ϕ = -√(1-Ω^2) j_s + 2 Ω (h_s - logμ_s) = - Δ j+ 2 Ω (h_s +1- logμ_s) ,
where we used eq. (<ref>) and h_s=h_s(ω_s) and μ_s=μ_s(ω_s) are calculated from the singly rotating solution
h_s :=∫z̅ e^dV_(z̅,ϕ)/∫z̅e^dV_(z̅,ϕ),
μ_s := ∫z̅ e^ dV_(z̅,ϕ).
Once we specify the corresponding singly rotating family of the phase j_s=j_s(ω_s), Ω_ψ is determined as the functions of (Ω,Δ j) through
eqs. (<ref>) and (<ref>).
The entropy (<ref>) is normalized in the same way and expressed in terms of the singly rotating variables
s := 4GS/Ω_n+1 r_M^n+1 =γ̃[1+γ^2/n(-Ωδ j_ϕ +(2Ω-Δ j)Ω-ω_s j_s - logμ_s + 1-3Ω^2)].
Myers-Perry phases
As shown in ref. <cit.>, eq. (<ref>) admits the singly rotating Myers-Perry solution
= 2/1+a̅^2(1-z̅^2/4),
which leads to j_s := 2a̅ and
ω_s = 2 j_s/4+j_s^2, μ_s = (1+j_s^2/4) e^8/4+j_s^2, h_s = 1-ω_s j_s.
One can easily check that the corresponding almost equally rotating phase is actually the Myers-Perry phase of a given (Ω,Δ j).
To confirm this, let us compare the corresponding phase with the large D limit of the exact Myers-Perry solution with almost equal angular momenta given in Appendix <ref>.
The first equation of eq. (<ref>) is inversely solved as
j_s = 1±√(1-4ω_s^2)/ω_s.
Then, eqs. (<ref>) and (<ref>) reproduce eq. (<ref>) at the leading order.
The normalized entropy (<ref>) is also consistent with eq.(<ref>) at the leading order.
Entropy difference
Since the entropy (<ref>) is insensitive to the horizon dynamics at the leading order, it is convenient to see only the difference from a reference phase.
For this, we compare with the Myers-Perry solution with the same (j_ϕ,j_ψ).
Since j_ϕ≃ j_ψ≃Ω, Ω is also the same at the leading order. Nevertheless, one should recall that we also have the degree of freedom to be adjusted that changes Ω in n^-1 as in eq. (<ref>). Thus, having the same j_ϕ does not necessarily means the same δ j_ϕ, instead requires
j_ϕ = Ω + n( Ω_1 + Ωlogμ_s + δ j_ϕ)
= Ω + n (Ω_1^ MP + Ωlogμ_s^ MP + δ j_ϕ^ MP),
where the quantities with MP denotes those for the Myers-Perry phase. This condition determines the difference in Ω_1.
Then, the entropy difference from the Myers-Perry phase becomes
s-s_ MP =γ/n( -ω_s j_s+ω_s^ MP j_s^ MP-log (μ_s/μ_s^ MP) ),
where eq. (<ref>) is used with the fact that
γ̃-γ̃^ MP≃γ/nΩ (-Ω_1+Ω_1^ MP-Ωlog (μ_s/μ_s^ MP) ).
To avoid the divergent behavior near the extremality, we define the entropy difference as
Δ s := -ω_s j_s+ω_s^ MP j_s^ MP-log (μ_s/μ_s^ MP ).
Since eq. (<ref>) leads to
j_s^ MP = Δ j-2Ω/√(1-Ω^2) = j_s,
eq. (<ref>) means
ω_s^ MP = 2 j_s/4+j_s^2 , μ_s^ MP = (1+j_s^2/4) e^8/4+j_s^2.
Thus, eq. (<ref>) is simplified to
Δ s = 2-ω_s j_s- logμ_s + log(1+j_s^2/4).
Interestingly, this is identical to the entropy difference in the singly rotating case, in which case the comparison should be made with the singly rotating Myers-Perry phase[It has not been shown explicitly, but one can easily obtain from the entropy formula in ref. <cit.>.].
This suggests the thermodynamics for given (Ω, Δ j) is identical with the corresponding
singly rotating phase with (ω_s, j_s).
Hence, in the following, we do not distinguish Δ s for the almost equally rotaing case and that for the singly rotating case.
§ STATIONARY PHASES
§.§ Stationary phases in the singly rotating case : a review
Since the stationary phase in the almost equally rotating setup is mapped to the singly rotating phase,
we first revisit the stationary phases in the singly rotating setup found in refs. <cit.>.
In figure <ref>, the major stationary phases in the singly rotating case are shown:
* Myers-Perry black holes
* Black ripples : Axisymmetric deformed branches of the Myers-Perry phase bifurcating from axisymmetric zero modes at j_s=2√(2k-1), (k=2,3,4,…). Here we call the corresponding branches as k-ripple. At the large deformation and large angular momentum, the horizon tends to be fragmented into several parts of ring shapes with/without a central Gaussian blob for even/odd branches, respectively. At finite D, they would eventually cause the topology-change to black multi-rings/saturns.
* Black bar : A nonaxisymmetric branch bifurcates from the zero mode of the bar mode instability at j_s=2 <cit.>.
At large deformation and large angular momentum, the profile tends to be infinitely elongated in one direction and becomes unstable to the Gregory-Laflamme type instability.
* Black dumbbells : Deformed black bar branches bifurcate from the zero mode instabilities of the black bar at j_s=2k/√(2k-1), (k=2,3,4,…), which we call k-dumbbell for each k. At large deformation, both the angular momentum and angular velocity tend to be zero and the shape is
fragmented into a multiple array of orbiting black blobs.
Typical profiles of these phases are shown in figure <ref>.
A remarkable feature of the large D effective theory is that it admits the nonaxisymmetric stationary solutions, which cannot be stationary at finite D due to the radiation.
This is because the effects of the gravitational wave emission is suppressed nonperturvative in D, i.e. ∼ e^-D, and hence negligible in the 1/D-expansion <cit.>. One may doubt that such solutions are artifacts only at D=∞. However, one may also interpret them as long-living intermediate states due to the low radiation rate at large enough D. In fact, the black hole-black hole collision simulation in D=6,7 captures relatively long-living black bar and dumbbell states <cit.>.
In figure <ref>, we show the entropy of the above solutions given by eq. (<ref>).
Beyond the bar mode at j_s=2, the most stable state is the black bar until the first zero mode takes place on the black bar at j_s = 4/√(3). Then, 2-dumbbell branch becomes the most stable state beyond j_s=4/√(3) until it reaches to a turning point j_s, crit≈ 2.662. The numerical simulation in the large D effective theory indicates that the black hole collision results in
the most entropically favored phase for the corresponding total angular momentum, where the collision always ends up in the fragmented profile for j_s > j_s, crit <cit.>.
§.§ Phase diagram
Finally, we present the phase diagram of almost equally rotating black holes in figure <ref>. Each phases are labelled by the name of corresponding singly rotating phases with double quotation, e.g., black bar → “black bar".
From singly rotating phases in figure <ref>, each solutions with (ω_s, j_s) are mapped to (Ω,Ω_ψ) through eqs. (<ref>) and (<ref>) for given Δ j. The corresponding solutions have the same entropy difference (<ref>).
Since the parameters has the symmetry of simultaneous flip of the sign
(ω_s,j_s,Ω,Ω_ψ,Δ j) → (-ω_s,-j_s,-Ω,-Ω_ψ,-Δ j),
we can always choose Δ j ≥ 0 and instead consider the range Ω∈ (-1,1).
An important point is that eq. (<ref>) sets the different parameter region for j_s for Ω∈ (-1,1) depending on the value of Δ j:
* 0≤Δ j < 2:
j_s monotonically decreases from j_s=∞ (Ω=-1) to j_s=-∞ (Ω=1). Eq. (<ref>) is solved as
Ω = 2 Δ j-j_s √(4-Δ j^2+j_s^2)/4+j_s^2.
* Δ j = 2:
j_s monotonically decreases from j_s=∞ (Ω=-1) to j_s=0 (Ω=1). Ω=Ω(j_s) is also given by eq. (<ref>).
* Δ j > 2:
j_s reaches to the minimum at Ω = 2/Δ j and goes to ∞ both at Ω→± 1, that is, the value of j_s is bounded below
j_s ≥√(Δ j^2-4).
In this case, Ω is written as the multi-valued function of j_s
Ω_± =2 Δ j± j_s √(4-Δ j^2+j_s^2)/4+j_s^2,
where -1<Ω_-<2/Δ j<Ω_+<1.
In the third case, particularly, the lower bound of j_s largely affects the appearance of the phase diagram.
For example, if Δ j > 2 √(2), with which j_s is always above the bar mode threshold, the Myers-Perry solution cannot be stable and the black bar branch no longer bifurcates from the Myers-Perry phase (left panel of figure <ref>). We will not show all cases but the same phenomenon occurs for higher other branches as well at larger Δ j.
We also present the most preferable phase for given (Ω,Δ j) in the right panel of figure <ref>.
Ω_ψ for each phases is plotted in figure <ref>.
It is interesting to note that even if we set Δ j=0, the deformed phases have nonzero Ω_ψ. This can be understood that the conservation of Δ j in eq. (<ref>)
makes the difference in Ω_ψ due to the difference in the moment of inertia.
§ DISCUSSION
In this article, we have investigated the dynamics of rotating black holes with almost equal angular momenta, which have N equal spins out of N+1 spins, using the large D effective theory approach. We have first studied the nonlinear dynamics involving the ultraspinning instability in the equally rotating case, which is found to be captured by the simple effective theory under the proper scaling ansatz at large D.
We have found that the effective theory of almost equally rotating black holes reduces to that of singly rotating black holes in the stationary setup, and thermodynamical variables are also mapped to singly rotating counterparts. In particular, we have found that the phase diagram for stationary solutions shares the common microcanonical structure in both setups.
This implies that all the instabilities and deformed branches found in the singly rotating setup have their counterparts in the almost equally rotating setup.
At finite D, the singly rotating setup admits axisymmetrically deformed phases, which are called bumpy black holes or ripples <cit.>.
It is expected that those branches are connected to branches of different topologies such as black rings or black saturns through the topology changing transition.
In the blob approximation at large D, this topology changing transition takes place in a smooth and more unclear way.
For example, the black ripples at the large deformation limit tend to be isolated ring-like Gaussian blobs with or without a central blob connected via thin black branes <cit.> (figure <ref>).
Since the 1/D-expansion breaks down if the mass density m becomes m∼ e^-D, such highly-deformed phases no longer describe the blob approximation of a single deformed black hole, but should rather describe black rings or saturns. This occurs for j_s ∼√(D) and ω_s ∼ 1/√(D) <cit.>. Then, one can see from eq. (<ref>) that this corresponds to Ω = 1 -1/D in the almost equally rotating setup.
However, the big difference from the singly rotating case is that Ω has the upper bound due to the extremality which is also given by Ω_ ext = 1-1/D (<ref>).
Therefore, in the almost equally rotating setup, the topology changing transition would not occur
if the phase reaches its extremality before the transition.
To confirm this, one has to examine higher order corrections in 1/D.
If the topology changing transition really ocurrs, another question is, what is the topology after the transition? Can they admit novel horizon topologies?
Since the current ansatz has a twisted structure on CP^N-1, the horizon topology after the pinch-off is less clear than in the singly rotating ansatz. For this, the soap bubble analysis would hint the shape of the horizon <cit.>. This will be postponed to the future work.
It is straightforward to extend the analysis to more general cases with the cosmological constant, Maxwell field <cit.>,
or with the Gauss-Bonnet correction <cit.>.
It would be worth to examine whether the similar large D correspondence between single rotating black holes and almost equal rotating black holes holds in such general cases.
In the AdS background, even in finite D, the equally rotating setup is known to admit nonaxisymmetric AdS black holes with only one Killing vector, called black resonators <cit.>. They are surrounded by the matter clouds or gravitational radiation which are maintained by the superradiant instability. One may pursue the possibility of such resonator-type solutions at large D, by
applying the similar large D analysis in the AdS background.
However, finding the equilibrium condition for the radiation would be difficult task at large D, since the effect of the radiation is suppressed by e^-D, and furthermore the radiating sector belongs to the higher frequency modes of ω∼ D <cit.>, which are not described by the effective theory approach.
By adding a Kaluza-Klein circle to equally rotating black holes, one can obtain equally rotating black strings.
The large D analysis in this setup will lead to black strings with nontrivial helical Killing vector as seen in D=6 <cit.>.
This would also be an intriguing direction.
§ ACKNOWLEDGEMENT
R.S. was supported by JSPS KAKENHI Grant Number JP18K13541.
S.T. was supported by JSPS KAKENHI Grant Number 21K03560.
§ ADM DECOMPOSITION WITH CP^N-1 REDUCTION
In the section <ref>, we solve the following reduced equation instead of the Einstein equation itself to save the computational resource[The computation of the 1/D-expansion on the Mathematica takes considerably longer times as the dimension of the non-symmetric subspace increases.].
We consider the decomposition into the radial direction and others,
ds^2 = α^2 dρ^2 + g̅_μ̅ν̅
(dx̅^μ̅+β̅^μ̅ dρ)( dx̅^ν̅+β̅^ν̅ dρ),
with which the Einstein equation is decomposed to the evolution equation
α∂_ρK̅^μ̅_ν̅ = R̅^μ̅_ν̅-K̅K̅ ^μ̅_ν̅ + α_β̅K̅ ^μ̅_ν̅- α∇̅^μ̅∇̅_ν̅α,
the scalar constraint
K̅^2 - K̅^μ̅_ν̅K̅^ν̅_μ̅ - R =0,
and the vector constraint
∇̅_ν̅K̅^ν̅_μ - ∇̅_μ̅K̅ =0,
where K̅^μ̅_ν̅ and R̅^μ̅_ν̅ are the extrinsic and intrinsic curvature for g̅_μ̅ν̅, respectively.
We assume the intrinsic geometry has the isometry of a S^1 fiber over CP^N-1
g̅_μ̅ν̅dx̅^μ̅ dx̅^ν̅ = g_μν(x)ξ^μξ^ν +h(x)^2 γ_ab dy^a dy^b, ξ^μ = dx^μ+δ^μ_ψ_a dy^a,
in which γ_ab and _a are the Fubini-Study metric and Kähler potential of CP^N-1 and ψ is the fiber coordinate.
The shift vector is given by
β̅_μ̅ dx̅^μ̅ = β_μ dx^μ + β_ψ_a dy^a, β̅^μ̅∂_μ̅ = β^μ∂_μ.
The components of the extrinsic curvature are given by
K̅^μ_ν = K^μ_ν = 2αg^μα( ∂_ρ g_αν - ∇_αβ_ν - ∇_νβ_α),
K̅^a_b = K_Σ δ^a_b :=α(∂_ρ - β^μ∂_μ) log h δ^a_b,
K̅^a_μ = 0, K̅^μ_a = _a K̅^μ_ψ - δ^μ_ψ A_b K̅^b_a,
where the quantities without the bar are for g_μν.
The mean curvature is written as
K̅ = K + (2N-2) K_Σ .
Eq. (<ref>) are decomposed to
α∂_ρ K^μ_ν = R̅^μ_ν-K̅ K ^μ_ν
+ α_β K^μ_ν- α∇^μ∇_να,
and
α (∂_ρ-β^μ∂_μ) K_Σ = R_Σ - K̅ K_Σ
- α∂^μα∂_μlog h,
where the curvature tensors of the intrinsic metric are given by
R̅^μ_ν = R^μ_ν - 2(N-1) (h^-1∇^μ∇_ν h - h^-4δ^μ_ψ g_νψ).
and
R̅^a_b = R_Σδ^a_b := [2N h^-2-2 h^-4 g_ψψ - (2N-3) h^-2 (∇^μ h) (∇_μ h)-h^-1∇^2 h]δ^a_b.
The nonzero components of the constraints are given by
∇̅_ν̅K̅^ν̅_μ - ∇̅_μK̅
= ∇_ν K^ν_μ - ∇_μK̅ + 2(N-1) ∂_νlog h K^ν_μ
- 2N ∂_μlog h K_Σ =0,
and
K̅^2 - K^μ_ν K^ν_μ - (2N-2) K_Σ^2- R =0.
To solve the metric (<ref>) in the 1/D-expansion, we set
ρ = := r^2N, β_μ dx^μ := U/2N^1-2N e^(0), α := √(-β_μβ^μ),
and
g_μνξ^μξ^ν = -A (e^(0))^2 - 2 C_i e^(0) e^(i) + H_ij e^(i)e^(j), h = r sinθ.
§ ASYMPTOTIC BEHAVIOR IN THE BOOSTED ANSATZ
In this section, we relate the asymptotic form of the ansatz (<ref>) with the flat background
ds^2 = - dt̅^2 +dr^2 + r^2 Φ̅^2+r^2 sin^2θcos^2θΨ^2+r^2dθ^2+r^2 sin^2θ dΣ^2.
where Φ̅:=dϕ̅+sin^2θΨ.
We assume the horizon is placed around r=r_0. To keep the calculation explicit, we do not set r_0=1 throughout this section. One can easily obtain the formula used in the main part by setting r_0=1.
First, we switch to the Eddington-Finkelstein coordinate,
ds^2 = - dt^2 + r^2 Φ^2 + 2 (dt-r_0^2ΩΦ) dr/√(1-r_0^4 Ω^2/r^2)
+r^2 sin^2θcos^2θΨ^2+r^2dθ^2+r^2 sin^2θ dΣ^2,
by the transformation
dt̅ = dt - dr/√(1-r_0^4 Ω^2/r^2), dϕ̅ = dϕ - r_0^2 Ω/r^2dr/√(1-r_0^4 Ω^2/r^2),
where Φ := dϕ+sin^2θΨ.
In terms of the local boosted frame around r=r_0,
e^(0) := γ (dt- r_0^2 ΩΦ), e^(1) :=r_0 γ(Φ - Ω dt), e^(2) := r_0 dθ, e^(3) := r_0 sinθcosθΨ,
where γ:=(1-r_0^2 Ω^2)^-1/2, the background geometry (<ref>) is written as
ds^2 = - γ^2(1-r^2Ω^2)(e^(0))^2 + 2e^(0) dr/γ√(1-r_0^4 Ω^2/r^2) + 2 r_0 Ωγ^2 (r^2/r_0^2-1)e^(0)e^(1) +r^2/r_0^2[γ^2(1-r_0^4Ω^2/r^2) (e^(1))^2 +(e^(2))^2+(e^(3))^2].
With :=(r/r_0)^2N, this determines the asymptotic behavior of the metric functions in
ds^2 = -A (e^(0))^2+2U e^(0) dr - 2C_i e^(0)e^(i) + H_ij e^(i)e^(j)+r^2 sin^2 θ dΣ^2,
as
A = γ^2 (1-r^2 Ω^2) + ^-1 = 1 - r_0^2Ω^2/1-r_0^2Ω^2log/N+N^-2,^-1,
U = √(1-r_0^2Ω^2/1-r_0^4Ω^2/r^2) + ^-1
= 1 + r_0^2Ω^2/1-r_0^2 Ω^2log/N + N^-2,^-1,
C_i = r_0 Ω(1-r^2/r_0^2)/1-r_0^2Ω^2δ_i1+^-1 = - r_0 Ω/1-r_0^2 Ω^2δ_i1log/N + N^-2,^-1,
and
H_ij =r^2/r_0^2 [δ_ij + r_0^2 Ω^2(1-r_0^2/r^2)/1-r_0^2Ω^2δ_i1δ_j1]+^-1=δ_ij + (δ_ij+ r_0^2Ω^2δ_i1δ_j1/1-r_0^2Ω^2)log/N+ N^-2,^-1.
§ LOCAL EVENT HORIZON, ENTROPY AND TEMPERATURE
In this section, we derive the formula involving the local event horizon in the metric (<ref>) without the large D limit.
The local event horizon r=r_h(t,ϕ,θ) is defined as a null hypersurface ||dr-dr_h||^2=0 <cit.>.
The derivative of r_h is written as
dr_h = ∂_0 r_h e^(0) + ∂_i r_h e^(i) ,
where the dual basis is
∂_0 := γ ( ∂_t + Ω∂_ϕ), ∂_1 := γ ( ∂_ϕ+ Ω∂_t), ∂_2 := ∂_θ, ∂_3 := θθ∂_ψ - tanθ∂_ϕ.
The null condition becomes
. A - 2 U ∂_0 r_h + H_ijv^i v^j
|_H=0,
where
v^i :=. H^ij (C_j - U ∂_i r_h)|_H.
The horizon cross section is then given by
ds_H^2 = H_ij ( e^(i)-v^i e^(0))(e^(j)-v^j e^(0))+ r_h^2 sin^2 θ
dΣ^2.
In the coordinate basis, this is written as
ds_H^2 =H̃_ab ( dz^a - v^a dt)(dz^b-v^b dt) +r_h^2 sin^2θ dΣ^2.
where a,b=ϕ,θ,ψ and
v^ϕ = Ω+v^1/1+Ω v^1-v^ψsin^2 θ, v^θ= v^2 γ(1-Ω^2)/1+Ω v^1,
v^ψ = v^3/cosθsinθγ(1-Ω^2)/1+Ω v^1.
It is also convenient to define
v^Φ := v^ϕ + v^ψsin^2θ = Ω+v^1/1+Ω v^1.
H_ij and H̃_ab are related by
( [ e^(1)-v^1 e^(0); e^(2)-v^2 e^(0); e^(3)-v^3 e^(0) ])
= ([ γ (1-Ω v^Φ) 0 0; γΩ v^2 1 0; γΩ v^3 0 sinθcosθ ])
( [ Φ-v^Φ dt,; dθ - v^θ dt; Ψ - v^ψ dt ]),
which leads to
√( detH̃) = sinθcosθγ (1-Ω v^Φ) √( det H).
Thus, the entropy formula is given by
S = nΩ_n+1/4Gγ∫_0^π/2 dθ∫_0^2πdϕ/2π r_h^n-1cosθsin^n-1θ/1+Ω v^1√( detH),
where we used the property
1- Ω v^Φ = 1-Ω^2/1+Ω v^1.
On the other hand, the null generator of the horizon becomes
ξ = ∂_t + v^ϕ∂_ϕ + v^θ∂_θ + v^ψ∂_ψ.
If ξ is the Killing vector on the horizon, one can find the surface gravity is given by
κ =. ∂_r Ã/2 U1/γ(1+Ω v^1)|_H,
where
à = A - 2 U ∂_0 r_H + 2 v^i ( C_i-U∂_i r_H),
and hence the Hawking-Bekenstein temperature
T = κ/2π =. ∂_r Ã/4π U1/γ(1+Ω v^1)|_H.
§ D=2N+3 MYERS-PERRY BLACK HOLES
In this section, we review the properties of Myers-Perry solution
in D=2N+3, whose metric, in general, is given by <cit.>
ds^2 = - dt^2 + μr̅^2/Π F(dt + ∑_i=0^N a_i μ_i^2 dϕ_i)^2+Π F/Π-μr̅^2 dr̅^2
+ ∑_i=0^N (r̅^2+a_i^2)(dμ_i^2+μ_i^2 dϕ_i^2),
where μ_i is the directional cosine which satisfies ∑_i=0^N μ_i^2=1 and
F=1-∑_i=0^N a_i^2 μ_i^2/r̅^2+a_i^2, Π = ∏_i=0^N(r̅^2+a_i^2).
We particularly interested in the case where the majority of spins are equal.
§.§ Equal angular momenta : N+1 equal spins
First, we revisit the known expression with equal spins a_i=a <cit.>. In the new coordinates
r=√(r̅^2+a^2), ϕ=ϕ_0, ζ^α = (μ_α/μ_0) e^i(ϕ_α-ϕ_0),
where {ζ^α}_α=1,…,N are the Fubini-Study coordinates for CP^N,
the metric (<ref>) reduces to
ds^2 = -F(r)/H(r)dt^2+dr^2/F(r)+r^2 H(r)^2(dϕ+_N+Ω(r)dt)^2+r^2 dΣ^2_N,
where _N and dΣ_N^2 is the Kähler potential and Fubini-Study metric on CP^N.
With μ:=r_0^2N, the metric functions are written by
F(r) = 1 - r_0^2N/r^2N+ a^2 r_0^2N/r^2N+2,
H(r) = 1 + a^2 r_0^2N/r^2N+2, Ω(r)=-a r_0^2N/r^2N+2 H(r).
The position of the event horizon is determined by
F(r_+) = 1 - r_0^2N/r_+^2N+ a^2 r_0^2N/r_+^2N+2=0.
The thermodynamic variables for this solution are obtained as follows
M = Ω_2N+1/8π Gμ(N+2), J_ϕ = Ω_2n+1/8π Gμ (N+1)a,
Ω_ϕ = a/r_+^2, S = Ω_2N+1/4Gμ^1/2 r_+^N+1,
T = κ/2π= N μ^1/2/2π r_+^N+1(1-N+1/Na^2/r_+^2).
With the decomposition (<ref>), one can also calculate
J_ψ = N/N+1 J_ϕ, Ω_ψ =0.
In particular, the mass normalized angular momentum and entropy are given by
j_ϕ := 8π G J_ϕ/(N+1)Ω_2N+1(16π G M/(2N+1)Ω_2N+1)^-2N+1/2N=a/r_0, j_ψ := 8π G J_ψ/NΩ_2N+1(16π G M/(2N+1)Ω_2N+1)^-2N+1/2N=a/r_0, s := 4G S/Ω_2N+1(16π G M/(2N+1)Ω_2N+1)^-2N+1/2N=r_+/r_0√(1-a^2/r_+^2).
§.§ Almost equal angular momenta : N equal spins
Now, we assume N of N+1 spins are the same
a_0 = a, a_i = b (i=1,…,N).
We introduce the following coordinates
r=√(r̅^2+b^2), θ = arccosμ_0 , ϕ = ϕ_0, ψ = ϕ_N-ϕ_0, u^i = (μ_i/μ_N) e^i (ϕ_i-ϕ_N),
where {u^i}_i=1,…,N-1 are the coordinates in the Fubini-Study metric for CP^N-1.
With this, the metric (<ref>) reduces to
ds^2 = - FΔ/Hdt^2+G/Fdr^2+r^2 G dθ^2
+ r^2 sin^2θ(1+b^2μsin^2θ/r^2N+2G)(dψ++V_ψ dt)^2 +2 (r^2+μ b(a+(b-a)sin^2θ)/r^2NG)sin^2θ (dϕ+V_ϕ dt)(dψ++V_ψ dt) +(r^2+(a^2-b^2)cos^2θ + μ(a+(b-a)sin^2θ)/r^2N G)(dϕ+V_ϕ dt)^2+r^2 sin^2θ dΣ^2,
where and dΣ^2 are the Kähler potential and the metric on CP^N-1, respectively.
The functions in the metric are given as follows
F = 1 +a^2-b^2/r^2 -μ (r^2-b^2)/r^2N+2, G = 1+r^2(a^2-b^2)sin^2θ, H = 1 + a^2-b^2/r^2 + μ a^2 /r^2N+2, B = H + (a^2-b^2)F/r^2sin^2θ, V_ϕ = aμ/r^2N+2B, V_ψ=(a-b)(b^2+ab-r^2)μ/r^2N+4B, Δ = 1 + (r^2+a^2-b^2)(a^2-b^2) μsin^2θ/r^2N+4B.
One can easily see that this metric reduces to the equally-rotating case by setting a=b and using the decomposition in eq. (<ref>).
The horizon is given by the largest root of
F(r_+) = 1 + a^2-b^2/r_+^2 - μ(r_+^2-b^2)/r_+^2N+2 =0.
Then, one can write a as function of r_+ and b.
The thermodynamics are given by
M = Ω_2N+1μ/8π G(N+2) , J_ϕ = Ω_2N+1μ/8π G (a+N b) , J_ψ = Ω_2N+1μ/8π G N b, Ω_ϕ = a/a^2-b^2+r_+^2, Ω_ψ = b/r_+^2 - a/a^2-b^2+r_+^2, S = Ω_2N+1μ/4 G√(r_+^2-b^2), T = κ/2π = 1/2π√(r_+^2-b^2)(N(1-b^2/r_+^2)-a^2/r_+^2+a^2-b^2).
It is easy to check the Smarr formula and first law hold by differentiating with μ, r_+ and b,
2N/2N+1 M = T S + Ω_ϕ J_ϕ + Ω_ψ J_ψ ,
δ M = T δ S + Ω_ϕδ J_ϕ + Ω_ψδ J_ψ.
With μ :=r_0^2N, the scale invariant expressions are given as
j_ϕ = 8π G J_ϕ/(N+1)Ω_2N+1(8π G M/(N+1/2)Ω_2N+1)^-1-2N= Nb+a/(N+1)r_0,
j_ψ = 8π G J_ψ/NΩ_2N+1(8π G M/(N+1/2)Ω_2N+1)^-1-2N= b/r_0,
s = 4 GS/Ω_2N+1(8π G M/(N+1/2)Ω_2N+1)^-1-2N = r_+/r_0√(1-b^2/r_+^2),
and
Ω̃_ϕ = r_0 Ω_ϕ = a r_0/a^2-b^2+r_+^2, Ω̃_ψ = r_0 Ω_ψ = br_0/r_+^2-a r_0/a^2-b^2+r_+^2.
Large D limit
To compare with the solution in the main part, we take the large D limit ( or large N limit ) of the thermodynamic variables.
Since r_+ = r_0 + N^-1, we obtain
Ω̃:= Ω̃_ϕ + Ω̃_ψ≃b/r_0, Ω̃_ψ≃b/r_0-a r_0/a^2-b^2+r_0^2
and
j_ϕ≃ j_ψ≃b/r_0≃Ω̃ , s ≃√(1-b^2/r_0^2)≃√(1-Ω̃^2).
The difference between two angular momenta becomes
Δ j := 2N(j_ψ-j_ϕ) ≃2(b-a)/r_0.
Since eq. (<ref>) leads to
a/r_0≃1±√(1-4 (1-Ω̃ ^2) (Ω̃ -Ω̃_ψ )^2)/2 (Ω̃ -Ω̃_ψ ),
eq. (<ref>) can be expressed as the relation between the normalized quantities
Δ j ≃ 2Ω̃ - 1±√(1-4 (1-Ω̃ ^2) (Ω̃ -Ω̃_ψ )^2)/Ω̃ -Ω̃_ψ.
99
Myers:1986un
R. C. Myers and M. J. Perry,
“Black Holes in Higher Dimensional Space-Times,”
Annals Phys. 172, 304 (1986)
Emparan:2001wn
R. Emparan and H. S. Reall,
Phys. Rev. Lett. 88, 101101 (2002)
[arXiv:hep-th/0110260 [hep-th]].
Pomeransky:2006bd
A. A. Pomeransky and R. A. Sen'kov,
“Black ring with two angular momenta,”
[arXiv:hep-th/0612005 [hep-th]].
Elvang:2007rd
H. Elvang and P. Figueras,
“Black Saturn,”
JHEP 05, 050 (2007)
[arXiv:hep-th/0701035 [hep-th]].
Iguchi:2007is
H. Iguchi and T. Mishima,
“Black di-ring and infinite nonuniqueness,”
Phys. Rev. D 75, 064018 (2007)
[erratum: Phys. Rev. D 78, 069903 (2008)]
[arXiv:hep-th/0701043 [hep-th]].
Elvang:2007hs
H. Elvang and M. J. Rodriguez,
“Bicycling Black Rings,”
JHEP 04, 045 (2008)
[arXiv:0712.2425 [hep-th]].
Izumi:2007qx
K. Izumi,
Prog. Theor. Phys. 119, 757-774 (2008)
[arXiv:0712.0902 [hep-th]].
Evslin:2008gx
J. Evslin,
“Geometric Engineering 5d Black Holes with Rod Diagrams,”
JHEP 09, 004 (2008)
[arXiv:0806.3389 [hep-th]].
Chen:2008fa
Y. Chen and E. Teo,
“A Rotating black lens solution in five dimensions,”
Phys. Rev. D 78, 064062 (2008)
[arXiv:0808.0587 [gr-qc]].
Tomizawa:2019acu
S. Tomizawa and T. Mishima,
“Stationary and biaxisymmetric four-soliton solution in five dimensions,”
Phys. Rev. D 99, no.10, 104053 (2019)
[arXiv:1902.10544 [hep-th]].
Lucietti:2020phh
J. Lucietti and F. Tomlinson,
“On the nonexistence of a vacuum black lens,”
JHEP 21, 005 (2020)
[arXiv:2012.00381 [gr-qc]].
Kunduri:2014kja
H. K. Kunduri and J. Lucietti,
“Supersymmetric Black Holes with Lens-Space Topology,”
Phys. Rev. Lett. 113, no.21, 211101 (2014)
[arXiv:1408.6083 [hep-th]].
Tomizawa:2016kjh
S. Tomizawa and M. Nozawa,
“Supersymmetric black lenses in five dimensions,”
Phys. Rev. D 94, no.4, 044037 (2016)
[arXiv:1606.06643 [hep-th]].
Emparan:2003sy
R. Emparan and R. C. Myers,
“Instability of ultra-spinning black holes,”
JHEP 09, 025 (2003)
[arXiv:hep-th/0308056 [hep-th]].
Dias:2010maa
O. J. C. Dias, P. Figueras, R. Monteiro and J. E. Santos,
“Ultraspinning instability of rotating black holes,”
Phys. Rev. D 82, 104025 (2010)
[arXiv:1006.1904 [hep-th]].
Dias:2010eu
O. J. C. Dias, P. Figueras, R. Monteiro, H. S. Reall and J. E. Santos,
“An instability of higher-dimensional rotating black holes,”
JHEP 05, 076 (2010)
[arXiv:1001.4527 [hep-th]].
Dias:2011jg
O. J. C. Dias, R. Monteiro and J. E. Santos,
“Ultraspinning instability: the missing link,”
JHEP 08, 139 (2011)
[arXiv:1106.4554 [hep-th]].
Figueras:2017zwa
P. Figueras, M. Kunesch, L. Lehner and S. Tunyasuvunakool,
“End Point of the Ultraspinning Instability and Violation of Cosmic Censorship,”
Phys. Rev. Lett. 118, no.15, 151103 (2017)
[arXiv:1702.01755 [hep-th]].
Lehner:2010pn
L. Lehner and F. Pretorius,
“Black Strings, Low Viscosity Fluids, and Violation of Cosmic Censorship,”
Phys. Rev. Lett. 105, 101102 (2010)
[arXiv:1006.5960 [hep-th]].
Emparan:2014pra
R. Emparan, P. Figueras and M. Martinez,
“Bumpy black holes,”
JHEP 12, 072 (2014)
[arXiv:1410.4764 [hep-th]].
Dias:2014cia
Ó. J. C. Dias, J. E. Santos and B. Way,
“Rings, Ripples, and Rotation: Connecting Black Holes to Black Rings,”
JHEP 07, 045 (2014)
[arXiv:1402.6345 [hep-th]].
Asnin:2007rw
V. Asnin, D. Gorbonos, S. Hadar, B. Kol, M. Levi and U. Miyamoto,
“High and Low Dimensions in The Black Hole Negative Mode,”
Class. Quant. Grav. 24, 5527-5540 (2007)
[arXiv:0706.1555 [hep-th]].
Emparan:2013moa
R. Emparan, R. Suzuki and K. Tanabe,
“The large D limit of General Relativity,”
JHEP 06, 009 (2013)
[arXiv:1302.6382 [hep-th]].
Emparan:2020inr
R. Emparan and C. P. Herzog,
“Large D limit of Einstein’s equations,”
Rev. Mod. Phys. 92, no.4, 045005 (2020)
[arXiv:2003.11394 [hep-th]].
Emparan:2015hwa
R. Emparan, T. Shiromizu, R. Suzuki, K. Tanabe and T. Tanaka,
“Effective theory of Black Holes in the 1/D expansion,”
JHEP 06, 159 (2015)
[arXiv:1504.06489 [hep-th]].
Bhattacharyya:2015dva
S. Bhattacharyya, A. De, S. Minwalla, R. Mohan and A. Saha,
“A membrane paradigm at large D,”
JHEP 04 (2016), 076
[arXiv:1504.06613 [hep-th]].
Bhattacharyya:2015fdk
|
http://arxiv.org/abs/2307.01759v1
|
20230704150006
|
Pretraining is All You Need: A Multi-Atlas Enhanced Transformer Framework for Autism Spectrum Disorder Classification
|
[
"Lucas Mahler",
"Qi Wang",
"Julius Steiglechner",
"Florian Birk",
"Samuel Heczko",
"Klaus Scheffler",
"Gabriele Lohmann"
] |
cs.CV
|
[
"cs.CV",
"cs.LG",
"eess.IV"
] |
Pretraining is All You Need: Transformers for ASD Classification
L. Mahler et al.
Max-Planck-Institute for Biological Cybernetics, 72076 Tübingen, Germany
University Hospital Tübingen, 72076 Tübingen, Germany
{name.surname}@kyb.tuebingen.mpg.de
Pretraining is All You Need: A Multi-Atlas Enhanced Transformer Framework for Autism Spectrum Disorder Classification
Lucas Mahler1, Qi Wang1,2, Julius Steiglechner1,2, Florian Birk1,2, Samuel Heczko1, Klaus Scheffler1,2, Gabriele Lohmann1,2
August 1, 2023
============================================================================================================================
Autism spectrum disorder (ASD) is a prevalent psychiatric condition characterized by atypical cognitive, emotional, and social patterns.
Timely and accurate diagnosis is crucial for effective interventions and improved outcomes in individuals with ASD.
In this study, we propose METAFormer, a novel Multi-Atlas Enhanced Transformer framework for ASD classification.
Our framework utilizes resting-state functional magnetic resonance imaging data from the ABIDE I dataset, comprising 406 ASD and 476 typical control (TC) subjects.
METAFormer employs a multi-atlas approach, where flattened connectivity matrices from the AAL, CC200, and DOS160 atlases serve as input to the transformer encoder.
Notably, we demonstrate that self-supervised pretraining, involving the reconstruction of masked values from the input, significantly enhances classification performance without the need for additional or separate training data.
Through stratified cross-validation, we evaluate the proposed framework and show that it surpasses state-of-the-art performance on the ABIDE I dataset, with an average accuracy of 83.7% and an AUC-score of 0.832.
The code for our framework is available at https://github.com/Lugges991/METAFormer
§ INTRODUCTION
Autism spectrum disorder (ASD) is a widespread psychiatric condition characterized by atypical cognitive, emotional, and social patterns.
With millions of individuals affected worldwide, the early diagnosis of ASD is a critical research priority, as it has a significant positive impact on patient outcomes.
The etiology of ASD remains elusive, with intricate interactions among genetic, biological, psychological, and environmental factors playing a role.
Currently, diagnosing ASD relies heavily on behavioral observations and anamnestic information, posing challenges and consuming a considerable amount of time.
Skilled clinicians with extensive experience are required for accurate diagnosis.
However, common assessments of ASD have been criticized for their lack of objectivity and transparency <cit.>.
Given these limitations, there is an urgent need for a fast, cost-effective, and objective diagnostic method that can accurately identify ASD leading to more timely interventions and improved outcomes for affected individuals.
In recent years, magnetic resonance imaging (MRI) has emerged as a powerful non-invasive tool for gaining insights into brain disorders' pathophysiology.
Functional MRI (fMRI), a notable advancement in MRI technology, allows for the investigation of brain function by measuring changes in blood oxygen levels over time.
Functional connectivity (FC) analysis<cit.> plays a crucial role in fMRI data analysis, as it examines the statistical dependencies and temporal correlations among different brain regions.
Rather than considering isolated abnormalities in specific regions, brain disorders often arise from disrupted communication and abnormal interactions between regions.
FC analysis enables researchers to explore network-level abnormalities associated with various disorders.
This analysis involves partitioning the brain into regions of interest (ROIs) and quantifying the correlations between their time series using various mathematical measures.
In recent years machine learning approaches have been widely applied to the problem of ASD classification using rs-fMRI data.
The majority of these studies use functional connectivities obtained from a predefined atlas as input to their classifiers.
A considerable amount of work used classical machine learning algorithms such as support vector machines and logistic regression to classify ASD<cit.>.
However, these methods have limitations as they are typically applied to small datasets with specific protocols and fixed scanner parameters, which may not adequately capture the heterogeneity present in clinical data.
3D convolutional neural networks <cit.> have also been applied to preprocessed fMRI data, and <cit.> have used 2D CNNs on such data.
However, these approaches are likewise limited by their reliance on small, homogeneous datasets. More recent works have tried to overcome these homogeneity limitations and have used deep learning approaches to classify ASD based on connectomes.
Multi-layer perceptrons are suited to the vector-based representations of connectomes and have thus seen some use in ASD classification <cit.>.
Graph convolutional models are also well suited and have yielded high accuracies<cit.>.
Other approaches used 1D CNNs<cit.>, or variants of recurrent neural networks<cit.>, and also probabilistic neural networks have been proposed<cit.>.
However, ASD classification is not limited to fMRI data; there has been work using, for example, EEG <cit.> as well as more novel imaging approaches such as functional near-infrared spectroscopy <cit.>.
The current study aims to improve classification performance of ASD based on resting-state fMRI (rs-fMRI) data over the entire ABIDE I dataset<cit.> by leveraging the representational capabilities of modern transformer architectures.
We thus summarize our main contributions as follows:
* We propose a novel multi-atlas enhanced transformer framework for ASD classification using resting-state fMRI data: METAFormer
* We demonstrate that self-supervised pretraining leads to significant improvements in performance without the requirement of additional data.
* We show that our model outperforms state of the art methods on the ABIDE I dataset.
§ METHODS
§.§ Dataset
Our experiments are conducted on the ABIDE I dataset <cit.> which is a publicly available dataset containing structural MRI as well as resting-state functional MRI data obtained from individuals with Autism Spectrum Disorder (ASD) and typical controls (TC) from 17 different research sites.
The raw dataset encompasses a total of 1112 subjects, 539 of which are diagnosed with ASD and 573 are TC.
Subjects ages range from 7 to 64 years with a median age of 14.7 years across groups.
The ABIDE I dataset is regarded as one of the most comprehensive and widely used datasets in the field, offering a combination of MRI, rs-fMRI, and demographic data. At the same time, it exhibits significant heterogeneity and variation that should be taken into account.
It comprises data from diverse research sites worldwide, leading to variations in scanning protocols, age groups, and other relevant factors.
Consequently, the analysis and interpretation of the ABIDE I dataset pose challenges due to this inherent heterogeneity.
§.§.§ Preprocessing Pipeline.
We utilize the ABIDE I dataset provided by the Preprocessed Connectomes Project (PCP) <cit.> for our analysis.
The PCP provides data for ABIDE I using different preprocessing strategies.
In this work we use the preprocessed data from the DPARSF pipeline <cit.>.
The DPARSF pipeline is based on SPM8 and includes the following steps:
The first 4 volumes of each fMRI time series are discarded to allow for magnetization stabilization.
Slice timing correction is performed to correct for differences in acquisition time between slices.
The fMRI time series are then realigned to the first volume to correct for head motion.
Intensity normalization is not performed.
To clean confounding variation due to physiological noise, 24-parameter head motion, mean white matter and CSF signals are regressed out.
Motion realignment parameters are also regressed out as well as linear and quadratic trends in low-frequency drifts.
Bandpass filtering was performed after regressing nuisance signals to remove high-frequency noise and low-frequency drifts.
Finally, functional to anatomical registration is performed using rigid body transformation and anatomical to standard space registration is performed using DARTEL<cit.>.
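The published pipeline is MATLAB/SPM8-based, so the following Python sketch is only a rough approximation of the nuisance-regression and band-pass steps using nilearn; the repetition time and the 0.01–0.1 Hz pass band are assumed typical values, not taken from the paper.

```python
# Illustrative sketch (not the actual DPARSF/SPM8 pipeline): nuisance
# regression and band-pass filtering of fMRI signals with nilearn.
import numpy as np
from nilearn import signal

ts = np.random.randn(196, 116)        # (timepoints, ROIs), placeholder data
confounds = np.random.randn(196, 26)  # 24 motion params + mean WM/CSF signals

cleaned = signal.clean(
    ts,
    confounds=confounds,  # regress out motion and WM/CSF nuisance signals
    detrend=True,         # remove low-frequency drifts
    low_pass=0.1,         # assumed pass band of 0.01-0.1 Hz,
    high_pass=0.01,       # applied after nuisance regression
    t_r=2.0,              # assumed repetition time in seconds (site-dependent)
)
```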
§.§.§ Functional Connectivity.
As the dimensionality of the preprocessed data is very high, we perform dimensionality reduction by dividing the brain into a set of parcels or regions with similar properties according to a brain atlas.
In this work we process our data using three different atlases.
The first atlas is the Automated Anatomical Labeling (AAL) atlas <cit.>.
This atlas, which is widely used in the literature, divides the brain into 116 regions of interest (ROIs) based on anatomical landmarks and was fractionated to functional resolution of 3mm^3 using nearest-neighbor interpolation.
The second atlas is the Craddock 200 (CC200) atlas <cit.>.
It divides the brain into 200 ROIs based on functional connectivity and was fractionated to functional resolution of 3mm^3 using nearest-neighbor interpolation.
The third atlas we considered is the Dosenbach 160 (DOS160) atlas <cit.>, which contains uniform spheres placed at coordinates obtained from meta-analyses of task-related fMRI studies. After obtaining the ROI time series from the atlases, we compute the functional connectivity using the Pearson correlation coefficient between each pair of ROIs.
The upper triangular part of the correlation matrix as well as the diagonal are then dropped and the lower triangular part is vectorized to obtain a feature vector of length k(k-1)/2, where k is the number of ROIs, which then serves as input to our models.
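As a minimal sketch of this feature construction (the function and variable names are ours, not from the released code):

```python
# Sketch: Pearson FC matrix from ROI time series, vectorizing the strict
# lower triangle into a feature vector of length k(k-1)/2.
import numpy as np

def connectome_features(roi_ts: np.ndarray) -> np.ndarray:
    """roi_ts: (timepoints, k) ROI time series of one subject."""
    fc = np.corrcoef(roi_ts.T)                   # (k, k) Pearson correlations
    rows, cols = np.tril_indices_from(fc, k=-1)  # drop diagonal and upper triangle
    return fc[rows, cols]                        # length k * (k - 1) / 2

# e.g. the CC200 atlas gives k = 200 -> 19900 features per subject
features = connectome_features(np.random.randn(196, 200))
assert features.shape == (200 * 199 // 2,)
```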
§.§ Model Architecture
§.§.§ METAFormer: Multi-Atlas Transformer.
Here, we propose METAFormer, at the core of which lies the transformer encoder architecture, originally proposed by Vaswani et al.<cit.> for natural language processing (NLP) tasks.
However, as our main goal is to perform classification and not generation, we do not use the decoder part of the transformer architecture.
In order to accommodate input from multiple different atlases, we employ an ensemble of three separate transformers, with each transformer corresponding to a specific atlas.
As depicted in Figure <ref>, the input to each transformer is a batch of flattened functional connectivity matrices.
First, the input to each transformer undergoes embedding into a latent space using a linear layer with a dimensionality of d_model=256.
The output of the embedding is then multiplied by √(d_model) to scale the input features.
This scaling operation aids in balancing the impact of the input features with the attention mechanism.
Since we are not dealing with sequential data, positional encodings are not utilized. The embedded input is subsequently passed through a BERT-style encoder <cit.>, which consists of N=2 identical layers with d_ff=128 feed-forward units and h=4 attention heads.
To maintain stability during training, each encoder layer is normalized using layer normalization <cit.>, and GELU <cit.> is used as the activation function.
Following the final encoder layer, the output passes through a dropout layer.
Then, a linear layer with d_model hidden units and two output units corresponding to the two classes is applied to obtain the final output.
The outputs of the three separate transformers are averaged, and this averaged representation is passed through a softmax layer to derive the final class probabilities.
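A minimal PyTorch sketch of this architecture is given below, under stated assumptions: each embedded connectome is treated as a length-1 token sequence (the tokenization is not spelled out above), the dropout rate is a placeholder, and the per-atlas input sizes follow from k(k-1)/2 for the AAL (6670), CC200 (19900), and DOS160 (12720) atlases. Softmax is left to the cross-entropy loss, which is equivalent to averaging the branch outputs and then applying softmax as described.

```python
# Hedged sketch of METAFormer, not the reference implementation.
import math
import torch
import torch.nn as nn

class SingleAtlasTransformer(nn.Module):
    def __init__(self, n_features, d_model=256, n_layers=2, n_heads=4,
                 d_ff=128, dropout=0.1, n_classes=2):
        super().__init__()
        self.d_model = d_model
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=d_ff,
            activation="gelu", batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                             # x: (batch, n_features)
        h = self.embed(x) * math.sqrt(self.d_model)   # scale embedded features
        h = self.encoder(h.unsqueeze(1))              # assumed length-1 sequence
        return self.head(self.dropout(h.squeeze(1)))  # (batch, n_classes)

class METAFormer(nn.Module):
    def __init__(self, feat_dims=(6670, 19900, 12720)):  # AAL, CC200, DOS160
        super().__init__()
        self.branches = nn.ModuleList(
            SingleAtlasTransformer(d) for d in feat_dims)

    def forward(self, xs):                            # xs: one tensor per atlas
        logits = torch.stack([b(x) for b, x in zip(self.branches, xs)])
        return logits.mean(dim=0)                     # average, softmax in the loss
```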
To train our Multi-Atlas Transformer model, we follow a series of steps. Firstly, we initialize the model weights using the initialization strategy proposed by He et al. <cit.>, while setting the biases to 0. To optimize the model, we employ the AdamW optimizer <cit.> and minimize the binary cross entropy between predictions and labels. Our training process consists of 750 epochs, utilizing a batch size of 256. To prevent overfitting, we implement early stopping with a patience of 40 epochs.
In order to ensure robustness of our model, we apply data augmentation. Specifically, we randomly introduce noise to each flattened connectome vector with an augmentation probability of 0.3. The noise standard deviation is set to 0.01.
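The noise augmentation described above amounts to a few lines; a minimal sketch:

```python
# Sketch of the connectome augmentation: with probability 0.3, add
# Gaussian noise with standard deviation 0.01 to the flattened vector.
import torch

def augment(x: torch.Tensor, p: float = 0.3, std: float = 0.01) -> torch.Tensor:
    if torch.rand(()).item() < p:
        x = x + std * torch.randn_like(x)
    return x
```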
We conduct hyperparameter tuning using grid search. We optimize hyperparameters related to the optimizer, such as learning rate and weight decay. We also consider the dropout rate during this process.
§.§ Self-Supervised Pretraining
As popularized by <cit.>, the utilization of self-supervised generative pretraining followed by task-specific fine-tuning has demonstrated improved performance in transformer architectures.
Building upon this approach, we propose a self-supervised pretraining task for our model.
Our approach involves imputing missing elements in the functional connectivity matrices, drawing inspiration from the work introduced by <cit.>.
To simulate missing data, we randomly set 10% of the features in each connectome to 0 and train the model to predict the missing values.
The corresponding configuration is illustrated in Figure <ref>.
To achieve this, we begin by randomly sampling a binary noise mask M ∈{0,1}^n_i for each training sample, where n_i denotes the number of features in the i-th connectome.
Subsequently, the original input X is masked using element-wise multiplication with the noise mask: X_masked = X ⊙ M. To reconstruct the corrupted input, we introduce a linear layer with n_i output neurons on top of the encoder stack, which predicts x̂_i.
We calculate a multi atlas masked mean squared error (MAMSE) loss ℒ_multi between the predicted and the original input:
ℒ_multi = (1/3) ∑_{i=1}^{3} (1/n_i) ∑_{j ∈ M_i} ||x_{i,j} - x̂_{i,j}||^2

where M_i is the set of masked positions for the i-th atlas, x_{i,j} is the original value of the j-th masked input from the i-th atlas, and x̂_{i,j} is the corresponding predicted value.
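A sketch of this masking and loss computation follows; the per-atlas reconstruction heads are assumed to be single linear layers, as described above, and the helper names are ours.

```python
# Hedged sketch of the pretraining objective: zero out 10% of features per
# connectome, reconstruct them, and average the masked MSE over the atlases.
import torch

def mask_input(x: torch.Tensor, mask_ratio: float = 0.1):
    """x: (batch, n_i) connectome features; returns masked input and mask M."""
    m = torch.rand_like(x) < mask_ratio   # binary noise mask, True = masked
    return x.masked_fill(m, 0.0), m

def mamse_loss(preds, originals, masks):
    """Lists with one (batch, n_i) tensor per atlas (i = 1..3)."""
    losses = []
    for x_hat, x, m in zip(preds, originals, masks):
        diff = (x_hat - x).masked_fill(~m, 0.0)  # keep only positions j in M_i
        losses.append((diff ** 2).sum(dim=1).div(x.shape[1]).mean())  # (1/n_i) sum
    return torch.stack(losses).mean()            # (1/3) sum over atlases
```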
§ EXPERIMENTS
§.§ Experimental Setup
To evaluate the classification performance of our models in a robust manner, we employed 10-fold stratified cross-validation.
For each fold, the model is trained on the remaining nine training folds and evaluated on the held-out test fold.
Further, we set aside 30% of each training fold as validation sets which are then used for hyperparameter tuning and early stopping.
In order to assess the impact of self-supervised pretraining, we compare the performance of our model with and without pretraining.
To achieve that, we first pretrain the model using the imputation task on the training folds and subsequently fine-tune the model on the training folds using the classification task after which we evaluate on the held-out test fold.
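Putting the protocol together, a hedged sketch of one configuration's evaluation loop is shown below; build_metaformer, pretrain_imputation, finetune_classifier, subset, and evaluate are hypothetical helpers, not names from the released code, and Xs stands for the per-atlas feature arrays.

```python
# Sketch of the protocol: stratified 10-fold CV; within each training fold,
# 30% is held out for validation/early stopping; pretrain, then fine-tune,
# then evaluate on the held-out test fold.
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(np.zeros(len(y)), y):
    tr_idx, val_idx = train_test_split(
        train_idx, test_size=0.3, stratify=y[train_idx], random_state=0)
    model = build_metaformer()                        # hypothetical constructor
    pretrain_imputation(model, subset(Xs, tr_idx))    # stage 1: imputation task
    finetune_classifier(model, subset(Xs, tr_idx), y[tr_idx],
                        subset(Xs, val_idx), y[val_idx])  # early stopping on val
    fold_scores = evaluate(model, subset(Xs, test_idx), y[test_idx])
```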
To verify the efficacy of using multiple atlases as input, we compared the performance of our METAFormer model with that of single-atlas transformer (SAT) models.
For that, we trained three separate transformer models using only one atlas as input.
The SAT models are trained using the same architecture and training procedure as the METAFormer model.
We also evaluated the performance of the SAT models with and without self-supervised pretraining in order to assess its impact on model performance.
To make results comparable, we use the same training and validation folds for all model configurations under investigation.
Evaluation Metrics. By using cross-validation, we obtained 10 different sets of performance scores per configuration.
These scores were then averaged and the standard deviation of each score was obtained, providing reliable estimates of the model's performance on unseen data.
The classification results were reported in terms of accuracy, precision, recall (sensitivity), F1-score and AUC-score, which are commonly used metrics for evaluating classification models.
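As a minimal sketch, these per-fold scores can be computed with scikit-learn:

```python
# Sketch: the reported metrics for one fold, computed with scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def fold_metrics(y_true: np.ndarray, y_prob: np.ndarray) -> dict:
    """y_prob: predicted probability of the ASD class."""
    y_pred = (y_prob >= 0.5).astype(int)
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall":    recall_score(y_true, y_pred),   # sensitivity
        "f1":        f1_score(y_true, y_pred),
        "auc":       roc_auc_score(y_true, y_prob),
    }
```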
§.§ ASD Classification Results
Table <ref> shows the superior performance of our pretrained METAFormer model compared to previously published ASD classifiers trained on atlas-based connectomes.
Importantly, our model achieves higher accuracy even when compared to approaches with similar test set sizes that did not employ cross-validation. To further validate the effectiveness of our proposed Multi-Atlas Transformer model for Autism Spectrum Disorder classification, we compare METAFormer against single-atlas transformers.
The results, as presented in Table <ref>, demonstrate the superiority of METAFormer over all single atlas models in terms of accuracy, precision, recall, F1-score, and AUC-score.
Moreover, the multi-atlas model exhibits comparable or lower standard deviations in performance metrics compared to the single atlas models.
This indicates higher robustness and stability of our multi-atlas METAFormer architecture, attributed to the joint training of the three transformer encoders.
Impact of Pretraining.
We also evaluated the effect of self-supervised pretraining on the classification performance of our models.
As Table <ref> shows, pretraining significantly improves the performance of all models in terms of accuracy, precision, recall, F1-score, and AUC-score.
Furthermore, for our proposed METAFormer architecture, pretraining improves the performance by a large margin.
§ CONCLUSION
In this paper, we propose METAFormer, a novel multi-atlas enhanced pretrained transformer architecture for ASD classification.
We utilize self-supervised pretraining on the imputation task on the same dataset to prime the model for the downstream task.
We conducted extensive experiments to demonstrate the effectiveness of our approach by comparing it to several baselines that use single-atlas and multi-atlas approaches with and without pretraining.
Our results show that our model performs better than state-of-the-art methods and that pretraining is highly beneficial for the downstream task.
10
DarkASD
Ahammed, M.S., Niu, S., Ahmed, M.R., Dong, J., Gao, X., Chen, Y.: DarkASDNet:
Classification of ASD on functional MRI using deep neural network.
Frontiers in Neuroinformatics 15 (Jun 2021)
dartel
Ashburner, J.: A fast diffeomorphic image registration algorithm. NeuroImage
38(1), 95–113 (2007)
layernorm
Ba, J.L., Kiros, J.R., Hinton, G.E.: Layer normalization (2016)
Biswal1995
Biswal, B., Yetkin, F.Z., Haughton, V.M., Hyde, J.S.: Functional connectivity
in the motor cortex of resting human brain using echo-planar mri. Magnetic
Resonance in Medicine 34(4), 537–541 (Oct 1995)
eeg
Brihadiswaran, G., Haputhanthri, D., Gunathilaka, S., Meedeniya, D.,
Jayarathna, S.: EEG-based processing and classification methodologies for
autism spectrum disorder: A review. Journal of Computer Science
15(8), 1161–1183 (Aug 2019)
pcp
Craddock, C., Benhajali, Y., Chu, C., Chouinard, F., Evans, A., Jakab, A.,
Khundrakpam, B.S., Lewis, J.D., Li, Q., Milham, M., et al.: The neuro bureau
preprocessing initiative: open sharing of preprocessed neuroimaging data and
derivatives. Frontiers in Neuroinformatics 7, 27 (2013)
craddock200
Craddock, R.C., James, G., Holtzheimer III, P.E., Hu, X.P., Mayberg, H.S.: A
whole brain fmri atlas generated via spatially constrained spectral
clustering. Human Brain Mapping 33(8), 1914–1928 (2012)
ensemble
Deng, J., Hasan, M.R., Mahmud, M., Hasan, M.M., Ahmed, K.A., Hossain, M.Z.:
Diagnosing autism spectrum disorder using ensemble 3d-CNN: A preliminary
study. In: 2022 IEEE International Conference on Image Processing (ICIP).
IEEE (Oct 2022)
bert
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: Bert: Pre-training of deep
bidirectional transformers for language understanding (2019)
dosenbach160
Dosenbach, N.U.F., Nardos, B., Cohen, A.L., Fair, D.A., Power, J.D., Church,
J.A., Nelson, S.M., Wig, G.S., Vogel, A.C., Lessov-Schlaggar, C.N., Barnes,
K.A., Dubis, J.W., Feczko, E., Coalson, R.S., Pruett, J.R., Barch, D.M.,
Petersen, S.E., Schlaggar, B.L.: Prediction of individual brain maturity
using fmri. Science 329(5997), 1358–1361 (2010)
Du2018
Du, Y., Fu, Z., Calhoun, V.D.: Classification and prediction of brain disorders
using functional connectivity: Promising but challenging. Frontiers in
Neuroscience 12 (Aug 2018)
misodnn
Epalle, T.M., Song, Y., Liu, Z., Lu, H.: Multi-atlas classification of autism
spectrum disorder with hinge loss trained deep architectures: Abide i
results. Applied Soft Computing 107, 107375 (2021)
fnirs
Gerloff, C., Konrad, K., Kruppa, J., Schulte-Rüther, M., Reindl, V.: Autism
spectrum disorder classification based on interpersonal neural synchrony: Can
classification be improved by dyadic neural biomarkers using unsupervised
graph representation learning? In: Lecture Notes in Computer Science, pp.
147–157. Springer Nature Switzerland (2022)
he2015delving
He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: Surpassing
human-level performance on imagenet classification (2015)
hendrycks2023gaussian
Hendrycks, D., Gimpel, K.: Gaussian error linear units (gelus) (2023)
pnn
Iidaka, T.: Resting state functional magnetic resonance imaging and neural
network classified autism and control. Cortex 63, 55–67 (Feb
2015)
cnng
Jiang, W., Liu, S., Zhang, H., Sun, X., Wang, S.H., Zhao, J., Yan, J.: CNNG:
A convolutional neural networks with gated recurrent units for autism
spectrum disorder classification. Frontiers in Aging Neuroscience
14 (Jul 2022)
ssrn
Kang, L., Gong, Z., Huang, J., Xu, J.: Autism spectrum disorder recognition
based on machine learning with roi time-series. NeuroImage: Clinical
(2023)
gcn
Lamani, M.R., Benadit, P.J., Vaithinathan, K.: Multi-atlas graph convolutional
networks and convolutional recurrent neural networks-based ensemble learning
for classification of autism spectrum disorders. SN Computer Science
4(3) (Feb 2023)
warum
Li, X., Dvornek, N.C., Zhuang, J., Ventola, P., Duncan, J.S.: Brain biomarker
interpretation in ASD using deep learning and fMRI. In: Medical Image
Computing and Computer Assisted Intervention – MICCAI 2018, pp.
206–214. Springer International Publishing (2018)
adamW
Loshchilov, I., Hutter, F.: Decoupled weight decay regularization (2019)
DiMartino2013
Martino, A.D., Yan, C.G., et al.: The autism brain imaging data exchange:
towards a large-scale evaluation of the intrinsic brain architecture in
autism. Molecular Psychiatry 19(6), 659–667 (Jun 2013)
1dcnn
Qayyum, A., Ahamed Khan, M.K.A., Benzinou, A., Mazher, M., Ramasamy, M.,
Aramugam, K., Deisy, C., Sridevi, S., Suresh, M.: An efficient 1dcnn–lstm
deep learning model for assessment and classification of fmri-based autism
spectrum disorder. In: Raj, J.S., Kamel, K., Lafata, P. (eds.) Innovative
Data Communication Technologies and Application. pp. 1039–1048. Springer
Nature Singapore, Singapore (2022)
gpt
Radford, A., Narasimhan, K., Salimans, T., Sutskever, I., et al.: Improving
language understanding by generative pre-training (2018)
aal
Rolls, E.T., Huang, C.C., Lin, C.P., Feng, J., Joliot, M.: Automated anatomical
labelling atlas 3. NeuroImage 206, 116189 (2020)
Thomas2020
Thomas, R.M., Gallo, S., Cerliani, L., Zhutovsky, P., El-Gazzar, A., van
Wingen, G.: Classifying autism spectrum disorder using the temporal
statistics of resting-state functional MRI data with 3d convolutional
neural networks. Frontiers in Psychiatry 11 (May 2020)
Timimi2019
Timimi, S., Milton, D., Bovell, V., Kapp, S., Russell, G.: Deconstructing
diagnosis: Four commentaries on a diagnostic tool to assess individuals for
autism spectrum disorders. Autonomy (Birmingham, England) 1(6)
(2019)
transformer
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N.,
Kaiser, Ł., Polosukhin, I.: Attention is all you need. Advances in neural
information processing systems 30 (2017)
mage
Wang, Y., Liu, J., Xiang, Y., Wang, J., Chen, Q., Chong, J.: MAGE: Automatic
diagnosis of autism spectrum disorders using multi-atlas graph convolutional
networks and ensemble learning. Neurocomputing 469, 346–353 (Jan
2022)
aimafe
Wang, Y., Wang, J., Wu, F.X., Hayrat, R., Liu, J.: AIMAFE: Autism spectrum
disorder identification with multi-atlas deep feature representation and
ensemble learning. Journal of Neuroscience Methods 343, 108840
(Sep 2020)
dparsf
Yan, C., Zang, Y.: DPARSF: a MATLAB toolbox for "pipeline" data analysis of
resting-state fMRI. Frontiers in Systems Neuroscience p. 13 (2010)
mvts
Zerveas, G., Jayaraman, S., Patel, D., Bhamidipaty, A., Eickhoff, C.: A
transformer-based framework for multivariate time series representation
learning. In: Proceedings of the 27th ACM SIGKDD Conference on Knowledge
Discovery and Data Mining. ACM (Aug 2021)
Zhao2018
Zhao, Y., Ge, F., Zhang, S., Liu, T.: 3d deep convolutional neural network
revealed the value of brain network overlap in differentiating autism
spectrum disorder from healthy controls. In: Medical Image Computing and
Computer Assisted Intervention – MICCAI 2018, pp. 172–180.
Springer International Publishing (2018)
|