Dataset columns (name, type, min–max length):
  entry_id           string, 33–33
  published          string, 14–14
  title              string, 15–199
  authors            list
  primary_category   string, 5–18
  categories         list
  text               string, 1–461k
http://arxiv.org/abs/2307.02110v1
20230705083723
A Database with Directivities of Musical Instruments
[ "David Ackermann", "Fabian Brinkmann", "Stefan Weinzierl" ]
eess.AS
[ "eess.AS", "cs.SD" ]
David Ackermann, Fabian Brinkmann, and Stefan Weinzierl. Audio Communication Group, Technische Universität Berlin, Germany. We present a database of recordings and radiation patterns of individual notes for 41 modern and historical musical instruments, measured with a 32-channel spherical microphone array in anechoic conditions. In addition, directivities averaged in one-third octave bands have been calculated for each instrument, which are suitable for use in acoustic simulation and auralisation. The data are provided in SOFA format. Spatial upsampling of the directivities was performed based on spherical spline interpolation, and the results were converted to OpenDAFF and GLL format for use in room acoustic and electro-acoustic simulation software. For this purpose, a method is presented for referencing these directivities to a specific microphone position in order to achieve a physically correct auralisation without colouration. The data are available under the CC BY-SA 4.0 licence. Correspondence should be addressed to David Ackermann. E-mail: david.ackermann@tu-berlin.de § INTRODUCTION Studies of the sound radiation characteristics of the human voice date back to the late 1930s <cit.>, and studies of the directivity of musical instruments began thirty years later (summarized in <cit.>). While these early measurements were often made with a single microphone moved around the source, the radiation patterns of acoustic sound sources such as speakers, singers, or musical instruments are today usually measured with the source at the center of an enclosing microphone array in anechoic conditions. Such a nearly full spherical array was used to analyze the directivity of 40 different human speakers, with measurements taken sequentially at 253 positions <cit.>. The radiation characteristics of eight opera singers <cit.> and fifteen trained singers <cit.> were determined in the horizontal plane, with measurements taken at nine and thirteen positions, respectively. High spatial resolution was employed for the measurements of a professional male singer, using an adjustable semi-circular microphone array with 24 receivers <cit.>. A recent review of research on the sound radiation of singing voices is given by Abe (2019) <cit.>. For the directivity of musical instruments, eight orchestral instruments were measured using 64 microphones <cit.>, while 22 instruments of a symphony orchestra were measured using 22 microphones <cit.>. A recently generated database for 14 instruments and a speaker contains radiation patterns measured at 2522 positions on a sphere <cit.>, but these data have only limited frequency resolution in (third-)octave bands. The most comprehensive and publicly available database was compiled for 41 modern and historic instruments, measured with 32 microphones, and includes recordings of single notes within the playable range of each instrument and directivities calculated from the stationary parts of these notes <cit.>.
Based on these measurements, the aim of this work was to provide the directivities of musical instruments in an open and standardised format, including both the acoustic measurements, the results of the subsequent processing, and important metadata, such as the exact position of the microphone capsules, the tuning frequency of the instrument and the pitch for which the directivity is valid. This should facilitate the exchange and use of such data by the scientific and general acoustics community. To facilitate this, we recently standardized the Spatially Oriented Format for Acoustics (SOFA) convention , as part of the AES69-2022 standard <cit.>, but also provide the data in OpenDAFF <cit.> and Generic Loudspeaker Library (GLL) <cit.> formats for use in room acoustic simulation software. For each instrument, the database contains the single-tone recordings (calibrated to an absolute sound pressure and equalized for the microphone array transfer function), the extracted single-tone directivities, and the one-third octave band-averaged directivities and corresponding finite impulse responses (FIRs)[The data will be published after the review of this paper with a DOI on the DepositOnce repository and can be accessed beforehand at https://tubcloud.tu-berlin.de/s/8joeeK3fFingLgp]. § METHODS The directivities of 41 modern and historical musical instruments were measured using a 32-channel full-spherical microphone array in an anechoic chamber in an earlier study <cit.>. The following sections detail the recording of the original database and highlightes the improved processing of the data suggested in this work. §.§ Data representation and format The directivity of electro-acoustic sound sources, such as loudspeakers and microphones, can be described relatively easily by using transfer functions in the frequency domain or by finite impulse response (FIR) filters in the time domain for each direction of radiation. In contrast, the directivity of natural sound sources, such as human speakers, singers and musical instruments, is more complex to describe, because the directivity of a musical instrument depends not only on the frequency, but also on the note being played, and sometimes on the fingering, so that the same note can have multiple radiation patterns. To ensure maximum flexibility, the convention represents directivity data as complex transfer functions (TFs) at arbitrary frequencies. This allows FIR filters of artificial sound sources, but also multi-channel acoustic recordings of a musical instrument (see section <ref>) to be stored as complex spectra with linear frequency resolution. In addition, the directivity patterns of natural sound sources can be stored for arbitrary, not necessarily equidistant frequencies such as the fundamental frequency and the corresponding overtones (see section <ref>), or as one-third octave frequency-band averaged data (see section <ref>). This representation is mainly used in geometric acoustic simulation. For a complete description of natural sound source directivities, information about the instrument/singer and the way of playing is needed in addition to the measurement setup. The SOFA standard allows these data to be stored as metadata and uniquely assigned for easy handling and data exchange. An overview of the available metadata can be found in section <ref>. 
§.§ Measurement setup The instruments were recorded with a fully spherical lightweight microphone array in the anechoic chamber of the Technische Universität Berlin with a room volume of approximately 1070 m^3 and a lower cut-off frequency of f_c = 63 Hz. 32 Sennheiser KE4-211-2 electret capsules of the microphone array were located at the vertices of a pentakis dodecahedron with a diameter of 2.1 m. A height-adjustable chair was used to position the musicians with their instruments so that the estimated acoustic centre of each instrument was as close as possible to the centre of the microphone array. The musicians faced the positive x-axis. The exact capsule positions of the array are given in spherical coordinates, i.e., in azimuth (ϕ = 0 ^∘ pointing in positive x-direction, ϕ = 90^∘ pointing in positive y-direction), colatitude (θ = 0^∘ pointing in positive x-direction, θ = 90^∘ pointing in positive z-direction) and distance (in metres) are included in the metadata of the SOFA files (cf. section <ref>). The recordings were made with 24 bit resolution and a sampling frequency of f_s = 44.1 kHz. The recordings were calibrated – i.e., a digital amplitude of 1 corresponds to a pressure of 1 Pascal resp. a sound pressure level of L_p = 94 dB – and compensated for the frequency response of the microphone array and the capsules. The exact measurement setup and a detailed description of the calibration procedure can be found in <cit.>. §.§ Processing The calibrated and equalised single-tone recordings in the dynamic range of pianissimo (pp) and fortissimo (ff) provided as 32-channel WAV files, as published in  <cit.>, form the basis for the processing described below. §.§.§ Recordings The real-valued single-note recordings x_q[n] of even length N are available at discrete times n∈{0,1,...,N-1} and for Q=32 channels of the spherical microphone array with q∈{1,2,...,Q}. Because the convention requires frequency data, the recordings were Fourier transformed X_q(k) = ∑_n=0^N-1 x_q[n] e^-i 2πk/Nn, with i^2=-1 being the imaginary unit, and saved in the SOFA files as complex-valued single-sided spectra X_S,q(k) of length N/2 + 1 X_S,q(k) = {[ X_q(k), if k = 0; 2 · X_q(k), if 0 < k < N/2; X_q(k), if k = N/2 .; ]. Note that reconstructing the time domain recordings from the published data thus requires the reconstruction of the both-sided spectrum of length N X_q(k) = {[ X_S,q(k), if k = 0; 1/2· X_S,q(k), if 0 < k < N/2; X_S,q(k), if k = N/2; 1/2· X_S,q^∗(N-k), if k > N/2. ]. with (·)^∗ denoting the complex conjugate, before applying the inverse Fourier transform (see  <cit.> for details). §.§.§ Single tone directivity data To determine the directivity of the musical instruments, we used the stationary part of the single note recordings. For all instruments producing stationary parts, this was manually windowed by visual inspection, resulting in durations between 200 and 2104 ms, with a median duration of 630 ms. For the acoustic guitar and the harp, a quasi-stationary part was defined as the part between the decay time and the release time as estimated with the Timbre Toolbox <cit.>. For the transient timpani signals, the entire recording was used, from the onset to the transition into the noise floor. The directivities were estimated in two steps. First, the fundamental frequency f_0 and the frequency of the overtones f_i in Hz were identified with i∈{1,2,...,I} and I being the highest identifiable overtone. Secondly, the energy at these frequencies was estimated. 
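Referring back to the single-sided spectra described in the Recordings subsection above, a minimal sketch (not part of the paper) of that spectrum handling, assuming NumPy's default DFT convention, which matches the transform given there, could look as follows:

```python
import numpy as np

def to_single_sided(x):
    """DFT of a real, even-length recording, converted to the stored
    single-sided spectrum of length N/2 + 1 with doubled interior bins."""
    N = len(x)
    X = np.fft.fft(x)                  # two-sided spectrum, length N
    Xs = X[:N // 2 + 1].copy()         # bins k = 0 ... N/2
    Xs[1:N // 2] *= 2                  # double all bins except DC and Nyquist
    return Xs

def to_two_sided(Xs):
    """Rebuild the two-sided spectrum from the stored single-sided one and
    return the time-domain signal (the reconstruction formula above)."""
    N = 2 * (len(Xs) - 1)
    X = np.empty(N, dtype=complex)
    X[0] = Xs[0]                                        # DC
    X[1:N // 2] = 0.5 * Xs[1:N // 2]                    # undo the doubling
    X[N // 2] = Xs[N // 2]                              # Nyquist
    X[N // 2 + 1:] = 0.5 * np.conj(Xs[1:N // 2][::-1])  # conjugate symmetry
    return np.fft.ifft(X).real

# round-trip check on a short test signal
x = np.random.randn(1024)
assert np.allclose(x, to_two_sided(to_single_sided(x)))
```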
Both steps were based on the magnitude response | X_q(k) | and ignored the phase information because natural sound sources, unlike artificial sound sources, do not have a stationary phase response, both for a given frequency and direction of radiation <cit.>. Figure <ref> shows the magnitude and phase response for 0.5 seconds excerpt of a trumpet recording. Although the amplitudes of the fundamental and harmonics remain constant throughout the time window, the phase varies considerably, making it impossible to determine it unambiguously. This factor also complicates the use of natural sound sources for room acoustic simulation without further modification, since interpolation of the phase spectrum is highly susceptible to noise and can lead to errors, especially at high frequencies <cit.>. We have therefore proposed the use of absolute-valued directivities for interpolation <cit.>. This is in contrast to the previous processing of the data, which used the complex valued spectrum <cit.>. An estimate of the fundamental frequency f_0 was made by identifying the frequency with the highest amplitude within a window of ± 100 cent bandwidth around the frequency corresponding to the tuning pitch indicated by the musicians. This was usually 442 or 443 Hz for modern instruments, 430 Hz for instruments of the Classical period, and 415 Hz for instruments of the Baroque period. This frequency was obtained for all 32 microphone recordings, and the most frequently occurring frequency over all 32 extracted values was chosen as f_0. In the next step, the frequencies of the partials were estimated by placing a search window of ± 10 cents around each harmonic frequency corresponding to f_0 and identifying the most frequently occurring, highest amplitude within this window, again considering all 32 microphone recordings. The search window takes into account the fact that, for physical reasons, the partials do not always lie exactly at the harmonic multiples of the fundamental frequency, and that the fundamental was not always held exactly constant over the duration of the stationary part. If the identified frequency deviated from the harmonic frequencies by more then five cents, the procedure was stopped, and the signal energy was considered to be below the noise floor. A visual inspection of the detected signal components confirmed that all relevant and clearly identifiable partial tones had been found by this procedure. At the frequencies f_i that could be estimated, the directivity was calculated using the power spectral density (PSD) S_xx,q(k) = 1/f_s N|X_q(k)|^2 which is a measure of the power of each frequency component in a signal with the implicit unit Pa2/Hz. We have used the Welch method to improve the robustness of the PSD against noise by * dividing the signal into eight segments of equal length with 50 % overlap, * applying a Hanning window function to each segment to reduce the effect of spectral leakage caused by the finite length of the segments, * computing the periodogram of each segment, which is an estimate of the PSD of that segment, and * averaging the periodograms across the segments to obtain an estimate of the overall PSD of the signal. To obtain an estimate of the power at each frequency, each estimate of the PSD was scaled by the equivalent noise bandwidth of the window (in Hz). Finally, the power was determined by simple peak picking in the scaled PSD around the frequencies f_i of the tone's harmonics and converted to the sound pressure value p_q(f_i) by taking the square root. 
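A rough sketch of the Welch/ENBW power estimation described above, assuming SciPy and that the partial frequencies f_i have already been identified; the segment-length heuristic, function name, and the reuse of the ±10 cent window for peak picking are illustrative choices, not the paper's exact implementation:

```python
import numpy as np
from scipy.signal import welch

def partial_pressures(x, fs, partial_freqs):
    """Estimate the RMS sound pressure (Pa) at the given partial frequencies
    from one calibrated channel x, following the Welch/ENBW steps above."""
    nperseg = 2 * len(x) // 9                      # ~eight segments at 50 % overlap
    win = np.hanning(nperseg)
    f, Sxx = welch(x, fs=fs, window=win, noverlap=nperseg // 2, scaling='density')
    enbw = fs * np.sum(win**2) / np.sum(win)**2    # equivalent noise bandwidth in Hz
    power = Sxx * enbw                             # PSD (Pa^2/Hz) -> peak power (Pa^2)
    pressures = []
    for fi in partial_freqs:
        # simple peak picking in a small neighbourhood (here +-10 cents) around f_i
        mask = (f >= fi * 2**(-10 / 1200)) & (f <= fi * 2**(10 / 1200))
        if not np.any(mask):
            mask = np.array([np.argmin(np.abs(f - fi))])   # fall back to nearest bin
        pressures.append(np.sqrt(power[mask].max()))
    return np.asarray(pressures)
```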
Figure <ref> illustrates the process for one note played by the modern oboe. §.§.§ One-third octave band averaging Geometric acoustics applications typically require a single directivity pattern for a sound source, which is usually provided in one-third octave band resolution. This was calculated by energetic averaging of the partials of all individual notes of an instrument falling into the one-third octave frequency bands according to IEC 61260-1 <cit.>, for the M=30 centre frequencies from 25 Hz to 20 kHz. All partials of the J individual notes p_q(f_i,j) from section <ref> were used for the one-third octave band representation. The data were averaged for each of the Q=32 receivers of the spherical microphone array individually, by calculating the averaged amplitude as p̅_q,m = √(1/L∑_i=0^L-1 p_q,i^2), where L indicates the number of partials identified in one one-third octave band. Figure <ref> illustrates the averaging procedure over all partials for the modern oboe in one-third octave bands from 400 Hz to 2500 Hz. At this point, the data still contains the direction-dependent frequency response of each instrument. If the directivities are used for auralizations in which a simulated (binaural) room impulse response is convolved with an anechoic recording of an instrument, it should be noted that the anechoic recording also contains the frequency response of the instrument in the direction of the recording microphone. To avoid an unnatural coloration during auralizations, the frequency response has to be removed from the third-octave averaged directivities. In theory, this normalization should also consider the position of the microphone from which the anechoic recording of the instrument was made, i.e., this direction should be normalized to 0 dB. If this position, however, is unknown or unstable due to movements of the instrument relative to the microphone, it may be most robust to equalize the directivity so that equal energy is radiated across all directions within each band, i.e., to p̅_diff,q,m = p̅_q,m/√(∑_q=1^Qp̅_q,m^2 · w'_q), where w'_q are the normalized area weights of the measurement grid with ∑ w'_q = 1. This representation of the radiation patterns is called diffuse equalization in the following and will be discussed in more detail in section <ref>. The final step was to calibrate the directivity to the sound power of the real instruments, as the previous diffuse equalisation (cf. equation <ref>) had lost the absolute sound power reference. This was done by averaging the sound pressure level over the effective one-third octave bands for each microphone. Third-octave bands with no sound radiation were set to zero. The average sound pressure level of the diffuse equalized directivity per microphone was then calculated as L_P,3rd,q = 10 ( 1/M∑_m ∈ Mp̅^2_diff,q,m/p^2_0), where M indicates the number of effective one-third octave bands and p_0 = 2 × 10^-5 Pa. Its average over the surface of the spherical envelope is given by L̅_P,3rd = 10 (1/Q∑_q =1^Q 10^0.1 L_P,3rd,q). The sound pressure of the reference, i.e. the corresponding instruments, was calculated from the calibrated recordings as follows L_P,ref,q = 10 ( 1/NJ∑_n= 1^N∑_j= 1^J x^2_q,j[n]/p^2_0), where N is the number of samples of the stationary part, J is the number of single notes and averaged over all microphones to L̅_P,ref = 10 (1/Q∑_q =1^Q 10^0.1 L_P,ref,q). Finally, the calibration of the diffuse equalized directivity from equation <ref> is given by p̅_cal,q,m = p̅_diff,q,m· 10^L̅_P,ref - L̅_P,3rd/20. 
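A compact sketch of the band averaging and diffuse equalisation defined by the equations above, assuming base-two one-third octave band edges f_c · 2^{±1/6} and hypothetical function names:

```python
import numpy as np

def third_octave_average(p_partials, f_partials, centre_freqs):
    """Energetic average of all partial pressures of one instrument and one
    microphone channel per one-third octave band (base-two band edges assumed)."""
    p_bands = np.zeros(len(centre_freqs))
    for m, fc in enumerate(centre_freqs):
        sel = (f_partials >= fc * 2**(-1 / 6)) & (f_partials < fc * 2**(1 / 6))
        if np.any(sel):
            p_bands[m] = np.sqrt(np.mean(p_partials[sel]**2))
    return p_bands

def diffuse_equalise(p_bands, area_weights):
    """Equalise a (Q, M) array of band pressures so that equal energy is
    radiated over all Q directions in each band; area_weights sum to one."""
    norm = np.sqrt(np.sum(p_bands**2 * area_weights[:, None], axis=0))
    norm[norm == 0] = 1.0                          # leave silent bands untouched
    return p_bands / norm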
The one-third octave band averaged directivity of each instrument is calculated for the dynamic fortissimo (ff) and provided in the SOFA convention. §.§.§ Interpolation Several applications that rely on the directivity of sound sources, such as room acoustic simulations, require the use of continuous or high resolution data. Consequently, the measurement data must be spatially resampled (interpolated) to match the required sampling grid. By sampling the actual sound pressure function f(θ,ϕ) with a Q channel spherical microphone array, the samples p_q=f(θ_q,ϕ_q) are given at the positions (θ_q,ϕ_q) of the respective microphones for q∈{1,2,...,Q}. The general mathematical formula for interpolation can therefore be expressed as f̂(θ_r,ϕ_r) = ∑_q =1^Q f(θ_q,ϕ_q) · L_q(θ_r,ϕ_r), where f̂(θ_r,ϕ_r) = p̂_r is the estimated sound pressure at the R points (θ_r,ϕ_r) of the interpolation grid for r∈{1,2,...,R} and L_q(θ_r,ϕ_r) being the interpolation function derived from the known sound pressure p_q at the position (θ_q,ϕ_q). The specific choice of the interpolation function depends on the interpolation method being used. There is a plethora of techniques for interpolating real-valued scattered data that make different assumptions about the distribution of the discrete set of known data points <cit.>. For musical instruments, the thin-plate pseudo-spline method <cit.> of order 1 has been found to be a good method, producing lower interpolation errors than spherical harmonics (SH) interpolation <cit.> or three-dimensional Vector-Based Amplitude Panning (VBAP <cit.>) when applied to sparsely sampled directivity measurements and evaluated against the directivity of different musical instruments measured at high resolution as a reference <cit.>. We chose an equiangular grid with an angular resolution of 5^∘ in azimuth and colatitude as the target for the interpolation, resulting in R = 2522 sound pressure values p̂_r,m for each of the m∈ M one-third octave bands. The closed form spherical spline interpolation for order 1 was realized with AKtools using the function  <cit.> based on the directivity patterns according to equation <ref>. §.§.§ 3rd octave smoothed FIRs To enable musical instruments to be used as sound sources for simulating room acoustic and electroacoustic environments with software that uses FIR filters to represent directivity, R = 2522 FIR filters were calculated using the above mentioned grid. For this purpose, a one-third octave band spectrum according to IEC 61260-1 <cit.> with a frequency resolution of 1 Hz was generated from the estimated sound pressure values p̂_r,m (gray line in figure <ref>). The one-sided spectrum with odd N was then smoothed with a one-third octave filter (black line in figure <ref>). After transformation to a two-sided spectrum according to (equation <ref>) the spectrum was transformed into the time domain using IFFT and the phase was made minimum phase using the AKTools function . Finally, the FIR filter was reduced to 8192 samples according to AES56-2008 <cit.> as shown in figure <ref>. For use in the software EASE [www.afmg.eu/en (accessed May 2, 2023)], this data was stored in the proprietary GLL format. § DATABASE The database contains the calibrated single-note recordings, the single-note directivities, and the frequency-averaged directivity patterns in SOFA format convention under a Creative Commons share alike licence (CC-BY-SA 4.0). 
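Returning to the FIR generation step described in the previous section: the published filters are produced with AKtools, but as a hedged stand-in, the standard real-cepstrum (homomorphic) construction turns a one-sided magnitude spectrum into a minimum-phase impulse response, which can then be truncated to the desired tap count:

```python
import numpy as np

def minimum_phase_fir(mag_onesided, n_taps=8192):
    """Minimum-phase FIR from a one-sided magnitude spectrum (length N/2 + 1,
    linearly spaced from 0 Hz to Nyquist), via the real cepstrum."""
    Nf = len(mag_onesided)
    N = 2 * (Nf - 1)                                             # even FFT length
    mag = np.concatenate([mag_onesided, mag_onesided[-2:0:-1]])  # two-sided magnitude
    log_mag = np.log(np.maximum(mag, 1e-12))                     # avoid log(0)
    cep = np.fft.ifft(log_mag).real                              # real cepstrum
    win = np.zeros(N)                         # fold anti-causal part onto causal part
    win[0] = 1.0
    win[1:N // 2] = 2.0
    win[N // 2] = 1.0
    min_phase_spec = np.exp(np.fft.fft(cep * win))
    h = np.fft.ifft(min_phase_spec).real
    return h[:n_taps]                         # truncate (a fade-out window could be applied)
```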
In addition, high spatial resolution interpolated radiation patterns averaged over one-third octave bands, are provided in openDaff format and as FIR filters in GLL format. All data are freely accessible [The data will be published after peer review with a DOI on the TU Berlin data repository and can be accessed beforehand at https://tubcloud.tu-berlin.de/s/8joeeK3fFingLgp]. A list of the available musical instruments can be found at table <ref>. The SOFA files can be read using a variety of APIs[cf. www.sofaconventions.org/mediawiki/index.php/ Software_and_APIs (accessed May 2, 2023)]. They contain the recorded signals and extracted directivities as complex transfer functions (TFs) together with metadata describing the data in detail. This includes the name of the instrument in the entry , the name of the musician in , the manufacturer of the instrument in and a verbal description of the position of the instrument during the measurement in . The arrangement of the capsules of the microphone array is described in . In the SOFA data of the recordings, the respective note and the dynamic level are indicated in , e.g. , the MIDI number in , e.g. 69 for A4, and the frequency for A4 corresponds to the tuning frequency in the entry for . In the SOFA data of the original recordings, indicates the range of the manually determined stationary portion in samples. A detailed description of the structure of the database can be found in the accompanying documentation. §.§ SOFA recordings The single note recordings are available as a one-sided complex TF for each instrument and note. The data can be converted into a two-sided spectrum according to equation <ref> and converted to the time domain by means of an IFFT. The naming of the data follows the scheme , and the recordings are stored in the and fields. The fields have the dimension , where M (measurement) is always 1, R (receiver) is the number of capsules of the microphone array, with R = 32, and N indicates the length of the TF. The field contains the frequencies of the bins of the TF in Hz. Note that the calibrated 32-channel WAV recordings on which this data set is based are still freely accessible <cit.>. §.§ SOFA single note There is also a separate SOFA file for each instrument and note for the single-note directivity data; the naming of the file corresponds to the scheme . The purely real sound pressure levels are stored as complex transfer functions in the field with the dimension , where M is always 1, R = 32, and N refers to the number of the I extracted partials. The field with the dimension is included in the dataset for consistency reasons, but contains only zeros.The field indicates the frequencies of the I partials in Hz. §.§ SOFA One-third octave band For the one-third octave band-averaged directivities, there is one SOFA file for each instrument with the naming scheme . The averaged and calibrated sound pressures from equation <ref> are stored in with the dimension , where M is always 1, R = 32, and N=30 refers to the nominal centre frequencies from 25 Hz to 20 kHz according to IEC 61260-1:2014 <cit.>. The data field has been filled with zeros. The field indicates the centre frequencies of the one-third-octave bands in Hz. In this case, the entries , and are not included in this data representation. §.§ OpenDAFF One-third octave band The open source format openDAFF can be read with several APIs[cf. www.github.com/svn2github/opendaff (accessed May 2, 2023)]. The naming of the data follows the scheme . 
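Referring back to the SOFA files described above: since SOFA is a netCDF-4/HDF5 container, the data can also be inspected without a dedicated SOFA API. The sketch below uses the netCDF4 Python package and assumes the AES69 TF variable names (Data.Real, Data.Imag, N); the file name is hypothetical and the exact field names should be checked against the database documentation.

```python
import numpy as np
from netCDF4 import Dataset   # SOFA files are netCDF-4/HDF5 containers

# hypothetical file name; see the database documentation for the naming scheme
with Dataset("ModernOboe_A4_ff.sofa", "r") as sofa:
    # TF conventions store complex spectra as separate real/imaginary parts with
    # dimensions (M, R, N) and the bin frequencies in the variable "N"
    real = np.asarray(sofa.variables["Data.Real"][:])
    imag = np.asarray(sofa.variables["Data.Imag"][:])
    freqs = np.asarray(sofa.variables["N"][:])
    spectra = (real + 1j * imag)[0]          # drop the singleton M dimension -> (R, N)

# reconstruct one channel's time signal, assuming the even-length convention of the
# Methods section: undo the doubling of the interior bins, then inverse transform
x0 = np.fft.irfft(spectra[0] * np.r_[1.0, 0.5 * np.ones(len(freqs) - 2), 1.0])
```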
The directional patterns are stored with a spatial resolution of 5^∘ (azimuth and colatitude), i.e. each file contains one-third octave magnitude spectra at 2522 points. These data can be used directly in the acoustic simulation environment RAVEN <cit.>. For evaluating the directivity, both of the individual notes and of the frequency-averaged directivity patterns, in arbitrary spatial resolution, we provide a Matlab script as part of the database (cf. section <ref>). §.§ GLL One-third octave band FIRs The proprietary GLL format allows the integration of complex sound sources into the acoustic simulation environment EASE. The directional patterns averaged over a one-third-octave for all 21 modern musical instruments and for a soprano singer were stored as FIR filters with 8192 taps, and with a spatial resolution of 5^∘. For the exact naming scheme, please refer to the documentation of the dataset[The data will be published after the review of this paper with a DOI on the DepositOnce repository and can be accessed beforehand at https://tubcloud.tu-berlin.de/s/8joeeK3fFingLgp]. The data can be converted to UNF (used by Ulysses[www.ifbsoft.de (accessed May 2, 2023)]) and the CLF/CIF format (used by ODEON [www.odeon.dk (accessed May 2, 2023)] and CATT-Acoustic[www.catt.se (accessed May 2, 2023)]) using the proprietary SpeakerLab[www.afmg.eu/en/ease-speakerlab (accessed May 2, 2023)] software from AFMG. §.§ Tools Part of the database is the Matlab script that makes it possible to read the recordings from the SOFA data, display their spectrum graphically and transform them into the time domain by IFFT. This data can then be saved as a WAV file and played back with common media players. The script also allows for the three-dimensional display of single-note and frequency-averaged directivity in the form of balloon plots based on the SOFA data provided. Finally, the data can be evaluated at any sampling quadrature using spherical spline interpolation. Prerequisite for all processing steps is the installation of AKTools[cf. www.tu.berlin/ak/forschung/publikationen/open-research-tools/aktools (accessed May 2, 2023)] and the SOFA API for Matlab (SOFAToolbox[cf. www.github.com/sofacoustics/SOFAtoolbox (accessed May 02, 2023)]) contained therein. § DISCUSSION The present dataset contains recordings and radiation patterns of the individual notes of 41 modern and historical musical instruments, measured with a 32-channel microphone array in anechoic conditions. The recordings and directivities are provided in standardised SOFA format in convention . From these data, averaged directivities have been calculated for each instrument, which are suitable for use in acoustical simulation and auralisation. In addition, spatially high-resolution directivities in OpenDAFF and GLL formats have been generated, allowing direct use in software such as RAVEN and EASE. The absolute quality of the interpolation methods used for this spatial upsampling obviously depends on the characteristics of the sound source, such as its acoustically effective size, the modal patterns of its sound radiating parts, and the resulting complexity of the radiation pattern. For acoustically small sources, such as a trumpet and trombone bell or a violin, a fairly accurate interpolation can be expected even based on a measurement at 32 points. For extended sources with more complex radiation patterns, however, a sparse sampling grid may lead to increasingly poor estimates of the far field directivity <cit.>. 
If other types of interpolation prove superior in the future, such as recently investigated, physically informed interpolation methods using the Euler equations as constraints <cit.>, the interpolations applied may be revised in the future. For the physically correct auralisation of musical instruments in virtual acoustics, frequency-averaged directivities are a compromise that has to be made for technical reasons. Current simulation applications do not allow a straightforward exchange of single tone representations of such directivities within a simulation run. Due to the frequency averaging of the input data, a tonal colouration of the simulation result may occur <cit.>. The magnitude of this effect and its influence on instrumental and room acoustic perception will have to be determined in a subsequent study using the data presented. When an anechoic recording of an instrument and its directivity is used to auralize a virtual acoustic environment, it is essential to normalize the directivity of the source to a suitable reference. In theory, this would be the position of the microphone used to make the anechoic recording. Since the recorded signal already contains the directional timbre characteristic of the instrument in that direction, it should not be altered by the applied directivity, as this would result in unwanted colouration of the simulation. This means that the directivity should be normalized, so that only a frequency response relative to this reference direction will be obtained. This will be achieved by calculating p̂_pt,r,m = p̂_r,m/p̂_mic,m, where p̂_mic,m is the interpolated and frequency-averaged sound pressure of the M one-third octave band in the reference direction. In a real recording situation, however, the instrument being played by a musician is a moving sound source. This can result in an angular displacement of the instrument which can easily reach up to 47^∘ when played in a standing position and up to 36^∘ when played in a sitting position <cit.>. To compensate for the movement of the instrument when referencing, we consider it useful to equalise the directivity not to a point, but to the average of a larger spherical surface area A. The distribution of orientations over this surface can be taken into account by using a weighting function, i.e., by calculating p̂_area,r,m = p̂_r,m/√(∑_r_A=1^R_Ap̂_r_A,m^2 · w'_r_A· g'_r_A), where w'_r_A and g'_r_A are the normalized area and the two-dimensional function weights, respectively, with ∑_r_A=1^R_A w'_r_A· g'_r_A = 1, and r_A∈{1,2,...,R_A} are the R_A grid points of the area A over which the mean is to be calculated. If no information about the position of the microphone during the recording is documented, a diffuse equalised directivity (cf. equation <ref>) can be used to minimise the sound colouration on average. For this reason, the directivity averaged over one-third octave bands has been provided in this representation for uniformed immediate use. The extent to which the different referencing methods of the recording position affect the perceived sound event and the room acoustic parameters will be clarified in a later investigation on the basis of these data. § ACKNOWLEDGMENT We would like to thank Stefan Feistel and Silke Bögelein (AFMG) for their support in producing the GLL datasets. aes2e.bst
http://arxiv.org/abs/2307.02852v1
20230706083608
TDLE: 2-D LiDAR Exploration With Hierarchical Planning Using Regional Division
[ "Xuyang Zhao", "Chengpu Yu", "Erpei Xu", "Yixuan Liu" ]
cs.RO
[ "cs.RO" ]
Exploration systems are critical for enhancing the autonomy of robots. Due to the unpredictability of the future planning space, existing methods either adopt an inefficient greedy strategy or require substantial resources to obtain a global solution. In this work, we address the challenge of obtaining global exploration routes with minimal computing resources. A hierarchical planning framework dynamically divides the planning space into subregions and arranges their order to provide global guidance for exploration. Indicators that are compatible with the subregion order are used to choose specific exploration targets, thereby considering estimates of spatial structure and extending the planning space to unknown regions. Extensive simulations and field tests demonstrate the efficacy of our method in comparison to existing 2D LiDAR-based approaches. Our code has been made public for further investigation[Available at <https://github.com/SeanZsya/tdle>]. § CONCLUSIONS In this paper, a hierarchical planning framework has been proposed for obtaining global exploration routes in an intuitive and efficient way. The planning space is dynamically divided into subregions, and their order is arranged to provide global guidance for exploration. Indicators that are compatible with the subregion order are selected to choose specific exploration targets. Mapping and motion planning modules have also been optimized to further enhance the autonomy and efficiency of the proposed system. Extensive simulation and field tests have been conducted, demonstrating the effectiveness of the proposed method.
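The body sections of this paper are not included in the excerpt, so the following is only a generic illustration of the regional-division idea, not the authors' TDLE planner: it splits an occupancy grid into rectangular subregions that still contain unknown cells and orders their centres with a greedy nearest-neighbour tour from the robot's position.

```python
import numpy as np

def divide_and_order(grid, robot_rc, block=50):
    """Illustrative sketch: split an occupancy grid (-1 unknown, 0 free,
    1 occupied; a common convention assumed here) into block x block
    subregions, keep those that still contain unknown cells, and order
    their centres by a greedy nearest-neighbour tour from the robot."""
    centres = []
    H, W = grid.shape
    for r0 in range(0, H, block):
        for c0 in range(0, W, block):
            sub = grid[r0:r0 + block, c0:c0 + block]
            if np.any(sub == -1):                       # subregion not yet explored
                centres.append(np.array([r0 + sub.shape[0] / 2.0,
                                         c0 + sub.shape[1] / 2.0]))
    order, pos = [], np.asarray(robot_rc, dtype=float)
    remaining = list(range(len(centres)))
    while remaining:                                    # greedy nearest-neighbour order
        nxt = min(remaining, key=lambda i: np.linalg.norm(centres[i] - pos))
        remaining.remove(nxt)
        order.append(nxt)
        pos = centres[nxt]
    return [centres[i] for i in order]

# toy map: mostly unknown, with an explored corridor around the start cell
grid = -np.ones((200, 200), dtype=int)
grid[90:110, 0:60] = 0
print(divide_and_order(grid, robot_rc=(100, 10)))
```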
http://arxiv.org/abs/2307.01818v1
20230704164927
Generalized eigenvalue problem for an interface elliptic equation
[ "Braulio B. V. Maia", "Mónica Molina-Becerra", "Cristian Morales-Rodrigo", "Antonio Suárez" ]
math.AP
[ "math.AP", "35A15, 35B33, 35B25, and 35J60" ]
10000 On the weight zero compactly supported cohomology of _g,n [ August 1, 2023 ========================================================= 1. Universidade Federal Rural da Amazonia, Campus de Capitao-Poco, PA, Brazil. 2. Dpto. Matemática Aplicada II, Escuela Politécnica Superior, C. Virgen de Africa, 7, 41011, Univ. de Sevilla, Sevilla, Spain. 3. Dpto Ecuaciones Diferenciales y Análisis Numérico and IMUS, Fac. de Matemáticas, Univ. de Sevilla, Sevilla, Spain. 0.5cm e-mails: braulio.maia@ufra.edu.br, monica@us.es, cristianm@us.es, suarez@us.es. In this paper we deal with an eigenvalue problem in an interface elliptic equation. We characterize the set of principal eigenvalues as a level set of a concave and regular function. As application, we study a problem arising in population dynamics. In these problems each species lives in a subdomain, and they interact in a common border, which acts as a geographical barrier. Keywords: interface, principal eigenvalue. MSC2010: 35A15, 35B33, 35B25, and 35J60. § INTRODUCTION Recently, the following semilinear interface problems have been analyzed {[ - u_i=ł f_i(x,u_i) in Ω_i, i=1,2,; ∂_νu_i=γ_i(u_2-u_1) on Σ,; ∂_νu_2=0 on Γ, ]. where Ø is a bounded domain of ^N with Ø=Ø_1∪Ø_2 ∪Σ, with Ø_i subdomains, with internal interface Σ=∂Ø_1, and Γ=∂Ø_2∖Σ, ν_i is the outward normal to Ø_i, and we call ν:=ν_1=-ν_2 (see Figure <ref> where we have illustrated an example of Ø). In (<ref>), u_i represents the density of a species inhabiting in Ø_i, and they interact on Σ under the so called Kedem-Katchalsky conditions (see <cit.>), and it means that the flux is proportional to the jump of the function through Σ (see <cit.>, <cit.>, <cit.>, <cit.> and references therein). Here, f_i:Ø_i×↦ are regular functions, _i>0 stands for the proportional coefficient of the jump and ł is a real parameter representing the growth rate of the species, the same in both subdomains. It seems natural to consider two different growth rates, one for each species, that is, a problem as {[ - u_i=ł_i f_i(x,u_i) in Ω_i,; ∂_νu_i=γ_i(u_2-u_1) on Σ,; ∂_νu_2=0 on Γ, ]. with ł_i∈. As a first step towards the study of (<ref>), it is necessary to analyze the eigenvalue problem {[ - u_i=ł m_i(x)u_i in Ω_i,; ∂_νu_i=γ_i(u_2-u_1) on Σ,; ∂_νu_2=0 on Γ, ]. where m_i∈ L^∞(Ø_i), m_i≢0 in Ø_i. (<ref>) has been analyzed in <cit.> in the self-adjoint case _1=_2. For that, the authors used variational arguments to prove the existence of principal eigenvalue as well as its main properties. The general case _1≠_2 was studied in <cit.> using a different argument. In <cit.>, to study (<ref>), the authors first analyze the problem {[ - u_i+c_i(x)u_i=ł u_i in Ω_i,; ∂_νu_i=γ_i(u_2-u_1) on Σ,; ∂_νu_2=0 on Γ, ]. where c_i∈ L^∞(Ø_i). They prove the existence of a unique principal eigenvalue of (<ref>), denoted Ł_1(c_1,c_2). Hence, the study of (<ref>) is equivalent to find the zeros of the map ł∈↦ f(ł):=Ł_1(-ł m_1,-ł m_2). The main goal of this paper is to study the following generalized eigenvalue problem: {[ - u_i=ł_i m_i(x)u_i in Ω_i,; ∂_νu_i=γ_i(u_2-u_1) on Σ,; ∂_νu_2=0 on Γ. ]. Motivated by <cit.>, to study (<ref>) we analyze the zeros of the map (ł_1,ł_2) ∈^2↦ F(ł_1,ł_2):= Ł_1(-ł_1 m_1,-ł_2 m_2), that is, we analyze the set C:={(ł_1,ł_2)∈^2: F(ł_1,ł_2)=0}. We show that F is a regular function, concave and F(0,0)=0. Hence, for instance, fixed ł_1, there exist at most two values of ł_2 such that F(ł_1,ł_2)=0. Moreover, due to the concavity of F, it is well known that the set {(ł_1,ł_2)∈^2: F(ł_1,ł_2)≤0} is convex. 
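As a loose numerical illustration (not part of the paper), the sketch below discretises a one-dimensional analogue of the interface problem: Omega_1 = (0,1), Omega_2 = (1,2), Kedem-Katchalsky coupling at x = 1 and Neumann conditions at the outer ends, which only approximates the paper's geometry where Sigma is the whole boundary of Omega_1. It evaluates F(lambda_1, lambda_2) = Lambda_1(-lambda_1 m_1, -lambda_2 m_2) as the smallest eigenvalue of the finite-difference operator; the constant function provides the sanity check F(0,0) = 0.

```python
import numpy as np

def F(lam1, lam2, m1, m2, gamma1=1.0, gamma2=1.0, n=200):
    """Smallest eigenvalue of the discretised interface operator, a 1-D
    stand-in for F(lam1, lam2) = Lambda_1(-lam1*m1, -lam2*m2):
    coupling u1' = gamma1*(u2 - u1) and u2' = gamma2*(u2 - u1) at x = 1,
    Neumann conditions at x = 0 and x = 2 (ghost-point discretisation)."""
    h = 1.0 / n
    x1 = np.linspace(0.0, 1.0, n + 1)              # grid on Omega_1
    x2 = np.linspace(1.0, 2.0, n + 1)              # grid on Omega_2
    size = 2 * (n + 1)
    A = np.zeros((size, size))
    np.fill_diagonal(A, 2.0 / h**2)
    for j in list(range(1, n)) + list(range(n + 2, 2 * n + 1)):   # interior nodes
        A[j, j - 1] = A[j, j + 1] = -1.0 / h**2
    A[0, 1] = -2.0 / h**2                          # Neumann at x = 0
    A[-1, -2] = -2.0 / h**2                        # Neumann at x = 2
    i1, i2 = n, n + 1                              # nodes u1(1) and u2(1)
    A[i1, i1] = 2.0 / h**2 + 2.0 * gamma1 / h      # interface row for u1
    A[i1, i1 - 1] = -2.0 / h**2
    A[i1, i2] = -2.0 * gamma1 / h
    A[i2, i2] = 2.0 / h**2 + 2.0 * gamma2 / h      # interface row for u2
    A[i2, i2 + 1] = -2.0 / h**2
    A[i2, i1] = -2.0 * gamma2 / h
    idx = np.arange(size)
    A[idx[:n + 1], idx[:n + 1]] += -lam1 * m1(x1)  # potential on Omega_1
    A[idx[n + 1:], idx[n + 1:]] += -lam2 * m2(x2)  # potential on Omega_2
    return np.min(np.linalg.eigvals(A).real)

# sanity check: constants are principal eigenfunctions for lam1 = lam2 = 0
print(F(0.0, 0.0, m1=lambda x: 1.0 + 0 * x, m2=lambda x: np.cos(np.pi * x)))  # ~ 0
```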
In any case, the study of the set C depends strongly on the signs of m_i. It is obvious that the case where the functions m_i have a definite sign, for example they are positive, is a simpler case than the case where one or both of them change sign. We summarize the main results. Our first result deals with the case both m_i non-negative and non-trivial functions (see Figure <ref>). Assume that m_i⪈ 0 in Ø_i, i=1,2. Then, there exist positive values Ł_i^+, i=1,2 such that: * Assume that ł_1≥Ł_1^+. Then, F(ł_1,ł_2)<0 for all ł_2∈. * Assume that ł_1<Ł_1^+. There exists a unique ł_2:= H(ł_1) such that F(ł_1,ł_2)=0 and F(ł_1,ł_2)<0 , F(ł_1,ł_2)>0 Moreover, the map ł_1↦ H(ł_1) is continuous, decreasing, H(0)=0 and lim_ł_1→ -∞ H(ł_1)=Ł_2^+, lim_ł_1→Ł_1^+ H(ł_1)=-∞. The values Ł_1^+ and Ł_2^+ will be defined in Section 2. In the next result we analyze the case m_1 non-negative and non-trivial and m_2 changing sign (see Figure <ref>). Assume that m_1⪈ 0 in Ø_1 and m_2 changes sign in Ø_2. There exists ł_1^max≥ 0 such that: * If ł_1>ł_1^max, then F(ł_1,ł_2)<0 for all ł_2∈. * If ł_1=ł_1^max, then there exists a unique ł_2 such that F(ł_1^max,ł_2)=0 and F(ł_1^max,ł_2)<0 for all ł_2∈∖{ł_2}. * For all ł_1<ł_1^max, there exist ł_2^-= H^-(ł_1)<ł_2^+= H^+(ł_1) such that F(ł_1,ł_2^-)= F(ł_1,ł_2^+)=0, and F(ł_1,ł_2) {[ < 0 ,; >0 ]. Moreover, the map ł_1↦ H^+(ł_1) (resp. H^-(ł_1)) is continuous, decreasing (resp. increasing) and lim_ł_1→ -∞ H^±(ł_1)=Ł_2^± lim_ł_1→ł_1^max H^±(ł_1)=ł_2. * Finally, * If ∫_Ø_2 m_2<0, then ł_1^max>0 and ł_2>0. * If ∫_Ø_2 m_2>0, then ł_1^max>0 and ł_2<0. * If ∫_Ø_2 m_2=0, then ł_1^max=ł_2=0. * In Figure <ref> we have represented the cases m_1⪈ 0 in Ø_1, m_2 changes sign in Ø_2, ∫_Ø_2m_2>0 and ∫_Ø_2m_2=0. * Of course, by symmetry, a similar result holds for m_1 changing sing in Ø_1 and m_2 non-negative and non-trivial. Finally, we deal with the case of both m_i changing sing. Assume that m_i changes sign in Ø_i. Then, there exists a closed curve C⊂^2, such that F(ł_1,ł_2)=0 if and only if (ł_1,ł_2)∈ C. Moreover, and The form and structure of C depends strongly on the sign of the integrals of m_i. In all the cases, (0,0)∈ C. In the following result, we complete the above Theorem, see Figure <ref>. Assume that m_i changes sign for i=1,2. There exist ł_1^min≤ 0≤ł_1^max such that * If ł_1<ł_1^min or ł_1>ł_1^max, then F(ł_1,ł_2)<0 for all ł_2∈. * If ł_1=ł_1^max (resp. ł_1=ł_1^min) then there exists a unique ł_2 (resp. ł_2) such that F(ł_1^max,ł_2)=0 (resp. F(ł_1^min,ł_2)=0) and F(ł_1^max,ł_2)<0 (resp. F(ł_1^min,ł_2)<0) for all ł_2∈. * If ł_1∈ (ł_1^min, ł_1^max) there exist unique ł_2^-= H^-(ł_1)<ł_2^+= H^+(ł_1) such that F(ł_1,ł_2^±)=0. Moreover, F(ł_1,ł_2) {[ < 0 ,; >0 ]. Finally, lim_ł_1→ł_1^min H^±(ł_1)=ł_2, lim_ł_1→ł_1^max H^±(ł_1)=ł_2. We apply these results to the nonlinear problem {[ - u_i=ł_i m_i(x)u_i-u_i^p_i in Ω_i,; ∂_νu_i=γ_i(u_2-u_1) on Σ,; ∂_νu_2=0 on Γ, ]. with ł_i∈, m_i∈ L^∞ (Ø_i) and p_i>1. We prove (Theorem <ref>) that (<ref>) possesses a positive solution if and only if F(ł_1,ł_2)<0. Moreover, in such a case, the solution is the unique positive solution. Hence, we can give the following consequences: * Assume that m_1 and m_2 are non-negative and non-trivial functions. * For ł_1 large (ł_1>Ł_1^+), there exists a positive solution for all ł_2∈. * For ł_1<Ł_1^+, there exists a value ł_2= H(ł_1) such that (<ref>) possesses a positive solution for ł_2> H(ł_1). In both cases, for ł_1>0 we have that there exists a positive solution for negative growth rate (ł_2) of u_2. 
In the case without interface, this is not possible, that is, even if the population has negative growth in one part of the domain, the interface effect makes it possible for the species to persist throughout the domain. * Assume that m_1 is non-negative and non-trivial and m_2 changes sign. Then, if ł_1 is large, then there exists positive solution for all ł_2∈. However, for ł_1<ł_1^max, then there exists positive solution for ł_2< H^-(ł_1) or ł_2> H^+(ł_1). * Assume that m_1 and m_2 change sign. There exist ł_1^min<ł_1^max such that for ł_1>ł_1^max or ł_1<ł_1^min, (<ref>) possesses a positive solution for all ł_2∈. However, for ł_1∈ (ł_1^min,ł_1^max), there exist H^-(ł_1)< H^+(ł_1) such that (<ref>) possesses a positive solution only for ł_2< H^-(ł_1) or ł_2> H^+(ł_1). An outline of the paper is: in Section 2 we include some preliminary results related to scalar eigenvalue problems. Section 3 is devoted to show some general properties of F(ł_1,ł_2). The main results concerning to the eigenvalue problem (<ref>) are proved in Section 4. Finally, in Section 5, we analyze (<ref>). § PRELIMINARY RESULTS §.§ Scalar eigenvalue problem In this section we recall some results concerning to scalar eigenvalue problems, see <cit.> for example. Here G is a C^2,α, α∈ (0,1), domain of ^N, ∂ G = Γ_1 ∪Γ_2, Γ_1 ∩Γ_2 = ∅ and ν is the outward unit normal vector field. For c∈ L^∞(G), h∈ C(Γ_1), g∈ C(Γ_2), we denote by σ_1^G(-+c;N+h,N+g) the principal eigenvalue of the problem {[ -Δϕ +c(x)ϕ = λϕ in G,; ∂ϕ/∂ν+hϕ = 0 on Γ_1,; ∂ϕ/∂ν+gϕ = 0 on Γ_2, ]. and by σ_1^G(-+c;N+h,D) that of the problem {[ -Δϕ +c(x)ϕ = λϕ in G,; ∂ϕ/∂ν+hϕ = 0 on Γ_1,; ϕ = 0 on Γ_2. ]. We will quote some important properties of σ_1^G(-+c;N+h,N+g) and σ_1^G(-+c;N+h,D). We denote the boundary operator B(ϕ)={[ ∂_νϕ +hϕ = 0 ; ∂_νϕ +gϕ = 0 ]. B(ϕ)={[ ∂_νϕ +hϕ = 0 ; ϕ = 0 ]. * The map c∈ L^∞(G)↦_1^G(-+c; B) is continuous and increasing. * It holds that _1^G(-+c;N+h, N+g)<_1^G(-+c; N+h, D). * Assume that there exists a positive supersolution, that is, a positive function u∈ W^2,p(G), p>N, such that -u+c(x)u≥ 0 B(u)≥ 0 and some strict inequalities, then _1^G(-+c; B)>0. Now, we define μ(ł):=σ_1^G(--ł c; B), ł∈. The main properties of (ł) are stated in the next result. * Assume that c≢0 in G. Then, ł∈↦(ł) is regular and concave. * Assume that c⪈ 0 in G, define C_0:=G∖{x∈ G:c(x)>0}, and assume that dist(∂ C_0∩ G,∂ G)>0. The map ł↦μ(ł) is decreasing and lim_ł→ +∞μ(ł)=-∞, lim_ł→ -∞μ(ł)=σ_1^C_0(-;D). * Assume that c changes sign, then lim_ł→±∞μ(ł)=-∞. Moreover, there exists ł_0∈ such that '(ł_0)=0, '(ł)>0 for ł<ł_0 and '(ł)<0 for ł>ł_0. We can describe the exactly the sign of the map μ(ł). * Assume that c⪈ 0 and the set C_0 satisfies (<ref>). Then, there exists a unique zero of the map (ł), we denote it by ł_1^+(G,c; B), and as consequence, μ(ł) {[ >0 ,; =0 ,; <0 . ]. * Asume that c changes sign. * If (ł_0)<0, then (ł)<0 for all ł∈. * If (ł_0)=0, then ł_0 is the unique zero of the map (ł). * If (ł_0)>0, then there exist two zeros of the map (ł), we call them ł_1^-(G,c; B)<ł_0<ł_1^+(G,c; B). As a consequence, μ(ł) {[ >0 ,; =0 ,; <0 . ]. §.§ Interface eigenvalue problem First, we fix some notations that will be used throughout the paper. For convenience, we write u=(u_1,u_2) with u_i defined in Ø_i and similarly c=(c_1,c_2). In order to simplify the notation we write the boundary conditions as {[ ∂_νu_i=γ_i(u_2-u_1) on Σ,; ∂_νu_2=0 on Γ, ]}⟺ I( u)=0 We write I()≽ 0 ⟺{[ ∂_νu_1≥γ_1(u_2-u_1) on Σ,; ∂_νu_2≤γ_2(u_2-u_1) on Σ,; ∂_νu_2≥ 0 on Γ. ]. 
We consider the Banach spaces [ L^p:={ u: u_i∈ L^p(Ø_i)}, p≥ 1,; H^1:={ u: },; W^2,p:={ u: u_i∈ W^2,p(Ø_i)}, p≥ 1.; ] The norm of a function is defined as the sum of the norms of u_i in the respective spaces. On the other hand, given =(u_1,u_2) we write ≥ 0 in Ø if u_i≥ 0 in Ø_i for i=1,2 and >0 in Ø if both u_i>0 in Ø_i for i=1,2, and finally ≠ 0 in Ø if u_i≠ 0 in a subset of positive measure of Ø_i for some i=1,2. Given c_i∈ L^∞(Ø_i), we denote by Ł_1( c)=Ł_1(c_1,c_2) the principal eigenvalue of (see <cit.>) {[ - u_i+c_i(x)u_i=ł u_i in Ω_i,; I( u)=0 ]. First, we recall some properties of Ł_1(c_1,c_2), see <cit.>. Given ≥ 0 in Ø, =(u_1,u_2), ∈ W^2,p, p>N, is a strict supersolution of (-+ c, I) if -Δ+ c(x)≥ 0 I()≽ 0 and some of these inequalities are strict. * Assume that c≤ d in Ø. Then, Ł_1( c)≤Ł_1( d). Moreover, if c≠ d in Ø, Ł_1( c)< Ł_1( d). * Assume that c_n→ c in L^∞, then Ł_1( c_n)→Ł_1( c). * It holds that Ł_1( c)<min{_1^Ø_1(-+c_1;N+_1),_1^Ø_2(-+c_2;N+_2,N)}. * The map c∈ L^∞↦Ł_1( c) is concave. * Ł_1( c)>0 if and only if there exists a strict positive supersolution u of (-+ c, I). § GENERALIZED INTERFACE PRINCIPAL EIGENVALUE: FIRST PROPERTIES The main goal in this paper is to analyze the eigenvalue problem {[ - u_i=ł_i m_i(x)u_i in Ω_i,; I( u)=0 ]. where m_i∈ L^∞(Ø_i), m_i≢0 in Ø_i, i=1,2. It is obvious that Hence, we define F:^2↦ by F(ł_1,ł_2):=Λ_1(-ł_1 m_1,-ł_2 m_2). The following result addresses the concavity of Ł_1(c_1,c_2) in each component. Fix c_2∈ L^∞(Ø_2). Then, the map c_1∈ L^∞(Ø_1)↦Ł_1(c_1,c_2)∈ is concave. Denote G(c_1):=Ł_1(c_1,c_2), take c^i_1∈ L^∞(Ø_1), i=1,2 and t∈ [0,1]. Then, G(tc_1^1+(1-t)c_1^2)=Ł_1(tc_1^1+(1-t)c_1^2,c_2)=Ł_1(tc_1^1+(1-t)c_1^2,tc_2+(1-t)c_2)=Ł_1(t c+(1-t) d), where c=(c_1^1,c_2) and d=(c_1^2,c_2). Using now Proposition <ref> 4., we get that [ G(tc_1^1+(1-t)c_1^2)= Ł_1(t c+(1-t) d); ≥ tŁ_1( c)+(1-t)Ł_1( d); = tŁ_1(c_1^1,c_2)+(1-t)Ł_1(c_1^2,c_2); = tG(c_1^1)+(1-t)G(c_1^2). ] This completes the proof. As a consequence, we deduce the concavity of the map F(ł_1,ł_2). Fixed ł_1∈, ł_2↦ F(ł_1,ł_2) is concave, and then, there exist at most two values of ł_2 such that F(ł_1,ł_2)=0. A similar result holds when we fix ł_2. In order to simplify the notation, we denote (recall Corollary <ref>) Ł_1^±:=ł_1^±(Ø_1,m_1;N+_1), Ł_2^±:=ł_1^±(Ø_2,m_2;N+_2,N). Observe that if we denote by (ł)=_1^Ø_1(--ł m_1;N+_1), then (0)>0. Hence, if m_1 changes sign the existence of Ł_1^-<0<Ł_1^+ is guaranteed by Corollary <ref>. If m_1⪈ 0 in Ø_1 then Ł_1^-=-∞. The first result provides upper bounds of F(ł_1,ł_2). It holds: F(ł_1,ł_2)< min{_1^Ø_1(--ł_1 m_1;N+γ_1),_1^Ø_2(--ł_2 m_2;N+γ_2,N)}, and F(ł_1,ł_2)≤ -ł_1∫_Ø_1m_1-ł_2∫_Ø_2m_2+(_1+_2)|Σ |/|Ø_1|+|Ø_2|. (<ref>) follows from Proposition <ref> 3. Let φ=(_̌1,_̌2) be a positive eigenfunction associated to F(ł_1,ł_2). Observe that [ -_̌i-ł_im_i(x)_̌i=F(ł_1,ł_2)_̌i; I(φ)=0 ] Multiplying by 1/_̌i, integrating and adding the two resulting equations, we obtain F(ł_1,ł_2)(|Ø_1|+|Ø_2|)=-ł_1∫_Ø_1m_1-ł_2∫_Ø_2m_2-(∫_Ø_1|_̌1|^2/_̌1^2+ ∫_Ø_1|_̌2|^2/_̌2^2) +∫_Σ (_̌2-_̌1)(_2/_̌2-_1/_̌1). Observe that ∫_Σ (_̌2-_̌1)(_2/_̌2-_1/_̌1)=(_1+_2)|Σ|-∫_Σ_1_̌2^2+_2_̌1^2/_̌1_̌2, whence we conclude (<ref>). * F(ł_1,ł_2)<0 for ł_1∈ (-∞,Ł_1^-]∪ [Ł_1^+,+∞) or ł_2∈ (-∞,Ł_2^-]∪ [Ł_2^+,+∞). * Assume that F(ł_1,ł_2)=0. Then, Ł_1^-<ł_1<Ł_1^+ Ł_2^-<ł_2<Ł_2^+. * Observe that if ł_1∈ (-∞,Ł_1^-]∪ [Ł_1^+,+∞), then, by Corollary <ref>, we get _1^Ø_1(--ł_1 m_1;N+γ_1)≤ 0. Hence, by (<ref>) we obtain that F(ł_1,ł_2)<0. 
* Since F(ł_1,ł_2)=0, by (<ref>), we have that 0<min{_1^Ø_1(--ł_1 m_1;N+γ_1),_1^Ø_2(--ł_2 m_2;N+γ_2,N)}, and then _1^Ø_1(--ł_1 m_1;N+γ_1) >0 and _1^Ø_2(--ł_2 m_2;N+γ_2,N)>0, and hence the result concludes by Corollary <ref>. The following result will be very useful. First, we introduce some notation. Assume that m≥ 0 in Ø, define M_i^0:=Ω_i∖{x∈Ø_i:m_i(x)>0} and assume that ∂ M_i^0 is a regular and dist(∂ M_i^0∩Ø_i,∂Ø_i)>0. Assume that m_i⪈ 0 in Ø_i and that the sets M_i^0 verify (<ref>). Take two sequences {a_n} and {b_n} such that a_n→ a_*∈ (-∞,∞), b_n→ -∞ Then, at least for a subsequence, lim_n→∞F(a_n,b_n) = min{_1^Ø_1(--a_*m_1;N+_1),_1^M_2^0(-;D)}. Observe that F(a_n,b_n)<min{_1^Ø_1(--a_n m_1;N+γ_1),_1^Ø_2(--b_n m_2;N+γ_2,N)} . By continuity, _1^Ø_1(--a_n m_1;N+γ_1)→_1^Ø_1(--a_* m_1;N+γ_1), and using Proposition <ref> 2., we get _1^Ø_2(--b_n m_2;N+γ_2,N)→_1^M_2^0(-;D). Hence, F(a_n,b_n) is bounded. Assume that _0 :=_1^M_2^0(-;D)<_1^Ø_1(--a_* m_1;N+γ_1). Consequently, we conclude that, for a subsequence, F(a_n,b_n)→ F_0≤_0=_1^M_2^0(-;D)<_1^Ø_1(--a_* m_1;N+γ_1)<∞ Without loss of generality, we consider _n=(_̌1n,_̌2n) a positive eigenfunction associated to F(a_n,b_n) such that _n_2=1. Then, ∫_Ø |∇_̌n|^2-a_n∫_Ø_1 m_1_̌1n^2-b_n∫_Ø_2 m_2_̌2n^2+∫_Σ (_1_̌1n^2+_2_̌2n^2)-(_1+_2)∫_Σ_̌1n_̌2n=F(a_n,b_n)≤ C, where we have denoted ∫_Ø |∇_̌n|^2=∑_i=1^2∫_Ø_i|∇_̌in|^2. Since b_n<0, m≥ 0 in Ø and a_n→ a^*∈ (-∞,∞), we get that ∫_Ø |∇_̌n|^2-(_1+_2)∫_Σ_̌1n_̌2n≤ C. Using now the inequalities ∫_Σ u_1u_2≤1/2(∫_Σ u_1^2+∫_Σ u_2^2), and that for any ε>0 there exists C(ε)>0 such that ∫_Σ v^2≤ε∫_Ø_i|∇ v|^2 +C(ε)∫_Ø_i v^2 ∀ v∈ H^1(Ø_i), (see for instance Lemma 1 in <cit.>) and _n_2=1, wet get that ∫_Σ_̌1n_̌2n≤1/2(∫_Σ_̌1n^2+∫_Σ_̌2n^2)≤1/2(ε(∫_Ø_1|∇_̌1n^2|+∫_Ø_2|∇_̌2n^2|)+C(ε)), and then from (<ref>) we get ∫_Ø |∇_̌n|^2≤ C. Hence, _̌n_H^1≤ C_0. Thus, _̌n⇀_̌∞=(_̌1∞,_̌2∞)≥ 0 , _̌n→_̌∞ By definition of F(a_n,b_n) we have that [ ∑_i=1^2(∫_Ø_i∇_̌in·∇ v_i -a_n∫_Ø_1 m_1_̌1nv_1-b_n∫_Ø_2 m_2_̌2nv_2)+; +∫_Σ(_̌2n-_̌1n)(_2v_2-_1 v_1)=F(a_n,b_n)(∫_Ø_1_̌1nv_1+∫_Ø_2_̌2nv_2 ), ∀ v_i∈ H^1(Ø_i). ] First, we prove that _̌2∞∈ H_0^1(M_2^0). Since H_0^1(M_2^0)={u∈ H^1(Ø_2):}, we claim that _̌2∞=0 in Ø_2∖ M_2^0. By contradiction, assume that _̌2∞>0 in D, for some D⊂Ø_2∖ M_2^0 and take v_1=0 in Ø_1 and v_2∈ C_0^∞(D), v_2>0 in D. Then, by (<ref>) -∫_D v_2_̌2n-b_n∫_D m_2(x)_̌2nv_2=F(a_n,b_n)∫_D_̌2nv_2. If _̌2∞>0 in D, then -b_n∫_D m_2(x)_̌2nv_2→∞ as b_n→-∞, a contradiction with (<ref>). Hence, we conclude that _̌2∞=0 in D. This implies (<ref>). Taking v_1∈ H^1(Ø_1) and v_2=0 in (<ref>), taking limit, we get ∫_Ø_1∇_̌1n·∇ v_1-a_n∫_Ø_1m_1 _̌1nv_1+∫_Σ(_̌2n-_̌1n)(-_1 v_1) =F(a_n,b_n)∫_Ø_1_̌1nv_1, then passing to the limit, taking into account (<ref>) in the boundary integral, ∫_Ø_1∇_̌1∞·∇ v_1-a_*∫_Ø_1m_1 _̌1∞v_1+_1∫_Σ_̌1∞ v_1=F_0∫_Ø_1_̌1∞v_1. Hence, if _̌1∞_2≠ 0, then F_0=_1^Ø_1(--a_*m_1;N+_1), an absurdum due to F_0≤_0 and (<ref>). Then, _̌1∞_2=0. Hence, _̌2∞_2= 1. Then, take v_1=0 and v_2∈ H_0^1(M_2^0) in (<ref>), we obtain ∫_M_0^2∇_̌2∞·∇ v_2=F_0∫_M_0^2_̌2∞v_2, which yields that F_0=_1^M_2^0(-;D)=_0. A similar reasoning can be carried out when _1^Ø_1(--a_* m_1;N+γ_1)<_1^M_2^0(-;D). This finishes the proof. § PROOFS OF THE MAIN RESULTS The main idea of the proof can be summarized as follows. Instead of looking for solutions of F(ł_1,ł_2)=0 in the general form (ł_1,ł_2), we look for solutions in the particular form ł_2=μł_1, for all ∈. Hence, the following map plays an essential role in our study. 
Given ∈, we define f_(ł_1):=Ł_1(-ł_1 m_1,-ł_1 m_2)=F(ł_1,ł_1). In the following result we state that, for ł_1≠ 0, it is equivalent to solve F(ł_1,ł_2)=0 to f_(ł_1)=0. Specifically, we have: Assume that F(ł_1^0,ł_2^0)=0 and ł_1^0≠ 0, then f__0(ł_1^0)=0 for _0=ł_2^0/ł_1^0. Conversely, if f__0(ł_1^0)=0 then F(ł_1^0,ł_2^0)=0 for ł_2^0=_0ł_1^0. In what follows, we explore the particular case ł_1=0. Assume that ł_1=0 and denote g(ł_2):=F(0,ł_2)=Ł_1(0,-ł_2m_2). The map ł_2↦ g(ł_2) is regular, concave, g(0)=0 and sign(g'(0))=sign(-∫_Ø_2m_2). Moreover, * If m_2⪈ 0 in Ø_2, then ł_2↦ g(ł_2) is decreasing and lim_ł_2→ +∞g(ł_2)=-∞ lim_ł_2→ -∞g(ł_2)=min{_1^Ø_1(-; N+_1),_1^M_2^0(-; D)}. In this case, g(ł_2)>0 for ł_2<0 and g(ł_2)<0 if ł_2>0. * If m_2 changes sign in Ø_2, then lim_ł_2→±∞g(ł_2)=-∞. Moreover, * If ∫_Ø_2m_2=0, then g'(0)=0 and ł_2=0 is the unique root of g(ł_2)=0. As a consequence, g(ł_2)<0 for ł_2≠ 0. * If ∫_Ø_2m_2<0, then g'(0)>0 and there exists ł_2^+>0 such that g(ł_2^+)=0. In this case, g(ł_2) {[ >0 ; <0 ]. * If ∫_Ø_2m_2>0, then g'(0)<0 and there exists ł_2^-<0 such that g(ł_2^-)=0. Hence, g(ł_2) {[ >0 ; <0 ]. To begin with, the regularity of g follows by the regularity of the function F. On the other hand, by Proposition <ref> follows that g(ł) is concave. It is obvious that g(0)=F(0,0)=Ł_1(0,0)=0. On the other hand, taking m=(0,m_2) in Proposition 3.17 in <cit.>, we conclude (<ref>). Finally, observe that by (<ref>) we have g(ł_2)<_1^Ø_2(--ł_2 m_2;N+_2,N), whence we deduce that lim_ł_2→ +∞g(ł_2)=-∞ from Proposition <ref> 2. and 3. * Assume that m_2⪈ 0 in Ø_2. In this case, g is decreasing. Moreover, by Proposition <ref>, taking a_n=0, we conclude that lim_ł_2→ -∞g(ł_2)=min{_1^Ø_1(-; N+_1),_1^M_2^0(-; D)}. * Assume that m_2 changes sign. Then, using (<ref>) and Proposition <ref> 3., we deduce that lim_ł_2→ -∞g(ł_2)=-∞. Now, from the sign of g'(0) in (<ref>), we conclude the result. In the next result, we study in detail the map ł_1↦ f_μ(ł_1). Fix ∈. Then, ł_1↦ f_(ł_1) is regular, concave, f_(0)=0 and sign(f_'(0))=-sign(_2∫_Ø_1m_1+_1∫_Ø_2m_2). * If m_1⪈ 0 in Ø_1, then lim_ł_1→ +∞f_(ł_1)=-∞ * If m_1 or m_2 changes sign, then lim_ł_1→±∞f_(ł_1)=-∞. It is clear that f_μ(0)=0. The regularity follows by the regularity of F, the concavity of f_μ(ł_1) follows by Proposition <ref> 4., and (<ref>) follows taking m=(m_1, m_2) in Proposition 3.17 in <cit.>. On the other hand, by (<ref>) we get f_(ł_1)<min{_1^Ø_1(--ł_1 m_1;N+_1),_1^Ø_2(--ł_1μ m_2;N+_2,N)}, and then lim_ł_1→ +∞f_(ł_1)=-∞, and if m_1 or m_2 changes sign, lim_ł_1→ -∞f_(ł_1)=-∞. For ∫_Ø_2m_2≠ 0, we define ^*:=-_2∫_Ø_1m_1/_1∫_Ø_2m_2, in such a way that f'_^*(0)=0. §.§ Case m_i⪈ 0 in Ø_i, i=1,2. Observe that in this case ^*<0. Assume that m_i⪈ 0 in Ø_i, i=1,2. * If ≥ 0, the unique zero of f_(ł_1) is ł_1=0. * If <0, ≠^*, there exists an unique ł_1=h_1()≠ 0 such that f_(ł_1)=0. Moreover, h_1() {[ <0 ,; =0 ; >0 ]. * The map ∈ (-∞, 0)↦ h_1() is continuous and decreasing. Moreover, lim_↑ 0h_1()=-∞, lim_→ -∞h_1()=Ł_1^+. * Assume that ≥ 0. Then, since ł_1↦ f_(ł_1) is decreasing and f_(0)=0, the result follows. * Assume that <0. Recall that ł_1↦ f_(ł_1) is concave and f_(0)=0. If >^* then f'_(0)<0, and hence there exists a unique h_1()<0 such that f_(h_1())=0. Similarly, when <^* there exists a unique h_1()>0 such that f_(h_1())=0. * We will show that ↦ h_1() is decreasing. Take now _1<_2<0. Observe that -_1ł_1>-_2ł_1 if ł_1>0 and -_1ł_1<-_2ł_1 if ł_1<0. Hence, we distinguish several cases: * Assume that ^*≤_1<_2<0. 
In this case, h_1(_2) and h_1(_1) are negative, and then we compare the functions f__2 and f__1 for negative values. Indeed, observe that f__2(ł_1)>f__1(ł_1) for ł_1<0, and then h_1(_2)<h_1(_1). * Assume that _1<^*<_2<0: in this case h_1(_2)<0<h_1(_1). * Assume that _1<_2≤^*<0, then f__2(ł_1)<f__1(ł_1) for ł_1>0, and then h_1(_2)<h_1(_1). This shows that ↦ h_1() is decreasing. We prove now the continuity. Take _n∈ (-∞, 0)→_0<0 and consider ł_n:=h_1(_n). Since 0=f__n(ł_n)=F(ł_n,_nł_n), by Corollary <ref> we conclude that ł_n<Ł_1^+ _nł_n<Ł_2^+. Hence, there exists ł_1∈ (-∞,+∞) such that ł_n→ł_1. We have to show that ł_1=h_1(_0). Indeed, observe that 0=f__n(ł_n)=Ł_1(-ł_n m_1,-ł_n _n m_2)→Ł_1(-ł_1 m_1,-ł_1 _0 m_2)=f__0(ł_1), that is, f__0(ł_1)=0. We separate now two cases: * _0≠^*: In this case, we assert. that ł_1≠ 0. Indeed, assume that ł_1= 0, that is, h_1(_n)→ 0. If, for instance, _0> ^*, then there exists ρ_1(_n)∈ (h_1(_n),0) such that f__n'(ρ_1(_n))=0. Since h_1(_n)→ 0, then ρ_1(_n)→ 0, and as consequence, f__0'(0)=0, a contradiction. This shows that ł_1≠ 0. Then, since h_1(_0) is the unique nonzero root of f__0(ł_1)=0, we have that ł_1=h_1(_0). * _0= ^*: in this case f_^*(ł_1)=0 implies that ł_1=0=h_1(^*). This concludes that ł_1=h_1(_0), and hence the continuity. We claim that h_1(_n)→ -∞ Assume that |h_1(_n)|≤ C. Then, we can assume that, at least for a subsequence, h_1(_n)→ h_1^*<0 and hence 0=Ł_1(-h_1(_n)m_1,-_nh_1(_n)m_2)→Ł_1(-h_1^*m_1,0)=0, a contradiction because Ł_1(-h_1^*m_1,0)>0. This proves (<ref>). By (<ref>), if → -∞ we can assume that h_1()→ h^*≤Ł_1^+ and h^*>0. Then, h_1()→ -∞. Since 0=f_(h_1())=F(h_1(),h_1()) and by Proposition <ref> 0=F(h_1(),h_1())→min{_1^Ø_1(--ł^*m_1;N+_1),_1^M_2^0(-;D)}, it follows that 0= min{_1^Ø_1(--ł^*m_1,N+γ_1),_1^M_2^0(-,D)}. Since _1^M_2^0(-,D)>0, we conclude that _1^Ø_1(--h^*m_1,N+γ_1)=0, that means that h^*=Ł_1^+, that is lim_μ→ -∞h_1()=Ł_1^+. This concludes the proof. Once we have studied the map ↦ h_1(), we need to analyze the map ∈ (-∞,0)↦ h_2():= h_1(). The map ∈ (-∞,0)↦ h_2():= h_1() is continuous, increasing, h_2() {[ >0 ,; =0 ; <0 ]. lim_→ -∞h_2()=-∞, and lim_→ 0h_2()=Ł_2^+. To start with, the continuity and the sign of the map h_2() follow directly from Proposition <ref>. Moreover, it is clear that lim_→ -∞h_2()=lim_→ -∞ h_1()=-∞. In order to prove (<ref>) we can argue exactly as the proof of Proposition <ref>. Finally, using that F is increasing, we prove that the map ↦ h_2() is increasing. Take _1<_2 and assume that h_2(_1)≥ h_2(_2). Since h_1(_1)>h_1(_2), then 0=F(h_1(_1),h_2(_1))<F(h_1(_2),h_2(_1))≤ F(h_1(_2),h_2(_2))=0, a contradiction. In Figure <ref> we have represented the functions ↦ h_1(), h_2(). Now, we proceed to the proof of Theorem <ref>. (see Figure <ref>) * Observe that by Corollary <ref>, we obtain F(ł_1,ł_2)<0 * Take ł_1<Ł_1^+. Then, by Proposition <ref> there exists a unique =(ł_1)<0 such that ł_1=h_1(). Take ł_2=h_2()= h_1(), then F(ł_1,ł_2)=F(h_1(),h_2())=F(h_1(), h_1())=f_μ(h_1())=0. We define the function H(ł_1):=h_2(h_1^-1(ł_1)), ł_1<Ł_1^+. It is clear that H is well-defined (observe that h_1^-1 exists due to that h_1 is an increasing function), is continuous and F(ł_1, H(ł_1))=0. Moreover, since fixed ł_1, the map ł_2↦ F(ł_1,ł_2) is concave, it follows that F(ł_1,ł_2)>0 for ł_2< H(ł_1) and F(ł_1,ł_2)<0 for ł_2> H(ł_1). Furthermore, by Propositions <ref> and <ref>, lim_ł_1↑Ł_1^+ H(ł_1)=lim_ł_1↑Ł_1^+h_2(h_1^-1(ł_1))= lim_→ -∞h_2()=-∞, and, lim_ł_1↑ -∞ H(ł_1)= lim_ł_1↑ -∞h_2(h_1^-1(ł_1))=lim_→ 0h_2()=Ł_2^+. 
Finally, we prove that ł_1↦ H(ł_1) is increasing. Take ł_1<ł_1<Ł_1^+ and consider H(ł_1) and H(ł_1). We are going to show that H(ł_1)> H(ł_1). Assume that H(ł_1)≤ H(ł_1), then 0=F(ł_1, H(ł_1))>F(ł_1, H(ł_1))≥ F(ł_1, H(ł_1))=0, a contradiction. This concludes the proof. §.§ Case m_1⪈ 0 in Ø_1 and m_2 changes sign in Ø_2. In this case, the results depend on the sign of ∫_Ø_2m_2. We detail the case ∫_Ø_2m_2<0, similarly the other cases can be studied (see Remark <ref>). Observe that in this case ^*=-_2∫_Ø_1m_1/_1∫_Ø_2m_2>0. Assume that m_1⪈ 0 in Ø_1, m_2 changes sign in Ø_2 and ∫_Ø_2m_2<0. Then, for each ≠ 0 there exists a unique h_1()∈ such that f_(h_1())=0. Moreover, h_1() {[ >0 ,; =0 ; <0 ]. Furthermore, the map ∈∖{0}↦ h_1()∈ is continuous, and lim_→±∞h_1()=0, lim_→ 0h_1()=-∞. As consequence, there exists _max>^* such that max_≠ 0h_1()=h_1(_max):=ł_1^max. Finally, the map ∈∖{0}↦ h_1()∈ is increasing in (0,_max) and decreasing in (-∞,0) and (_max,∞). The proof of this result is rather similar to the proof of Proposition <ref>, hence we sketch the proof. Since f__n(ł_1(_n))=F(h_1(_n),_n h_1(_n))=0, by Corollary <ref> we get Ł_2^-<h_1(_n)_n<Ł_2^+, whence we conclude that h_1(_n)→ 0 as _n→±∞. Before proving the monotony, we claim that for any c∈ there exist at most two values of such that h_1()=c. We argue by contradiction. Assume that for _1<_2<_3 we get h_1(_i)=c for i=1,2,3. Taking ł_2^i= c_i we obtain 0=F(c,ł_2^i), ł_2^1<ł_2^2<ł_2^3, a contradiction because, fixed c, the map ł_2↦ F(c,ł_2) is concave. Now, for instance, we show that h_1() is decreasing in (-∞,0). Take _1<_2<0 and assume that h_1(_1)≤ h_1(_2). Since h_1()→ 0 as → -∞ and h_1()→ -∞ as → 0. Then, there exists c<0 such that h_1()=c possesses at least three solutions. This is a contradiction and proves that h_1() is decreasing in (-∞,0). With a similar argument, it can be proved that h_1(μ) is decreasing in (_max,∞) and increasing in (0,_max) . Again, we can deduce properties of the map h_2()= h_1(). Assume that m_1⪈ 0 in Ø_1, m_2 changes sign in Ø_2 and ∫_Ø_2m_2<0. Then h_2()= h_1() is continuous in ≠ 0, increasing and verifies h_2() {[ >0 ,; =0 ; <0 ]. and lim_→ 0^±h_2()=Ł_2^∓, lim_→±∞h_2()=ł_2^*, for some ł_2^*∈(0,Ł_2^+). Assume that _n→ 0^+, then since h_2(_n) is bounded, at least for a subsequence, h_2(_n)→ł_2<0. Observe that since h_1(_n)→ -∞, by Proposition <ref> 0=F(h_1(_n),h_2(_n) )→_1^Ø_2(--ł_2m_2;N+_2,N)=0, and then ł_2=Ł_2^-. Analogously for _n→ 0^-. On the other hand, assume that _n→ +∞ and h_2(_n)→ł_2<0. In this case, h_1(_n)→ 0, and then 0=F(-h_1(_n)m_1,-h_2(_n)m_2)→ F(0,-ł_2m_2), whence ł_2=ł_2^*. Observe that 0=F(0,-ł_2^*m_2)<_1^Ø_2(--ł_2^*m_2;N+_2,N) and so ł_2^*<Ł_2^+. Finally, with an argument similar to the one used in Proposition <ref> we can conclude that the equation h_2()=c possesses at most a unique solution. Hence, the monotony of h_2() follows. This completes the proof. In Figure <ref>, one may see a representation of the maps ↦ h_1(),h_2(). * In the case ∫_Ø_2m_2>0 we can obtain a similar result switching by - (see Figure <ref>). * When ∫_Ø_2m_2=0 observe that f'_μ(0)<0 for all (see (<ref>)), and then h_1()<0 for all . The global behavior of h_1() at =0 and →±∞ is similar to Proposition <ref> (see Figure <ref>). (See Figure <ref>). Assume that ∫_Ø_2m_2<0 (see Figure <ref>). We introduce the following notation: h_1():= {[ h_1^1() ; h_1^2() ; h_1^3() ]. * If ł_1>ł_1^max, then there does not exist ∈ such that ł_1=h_1(). Hence, F(ł_1,ł_2)≠0 for all ł_2∈, in fact, F(ł_1,ł_2)<0 for all ł_2∈. 
Indeed, if for some ł_2 we have F(ł_1,ł_2)>0, then there exists at least ł_2^0 such that F(ł_1,ł_2^0)=0. Then, for some _0 we have ł_1=h_1(_0), a contradiction. * If ł_1=ł_1^max, there exists a unique _max>^* such that ł_1^max=h_1(_max), and then ł_2^max=h_2(_max)>0 and F(ł_1^max,ł_2^max)=0. * We fix ł_1∈ (0,ł_1^max). Then, (see Figure <ref>) there exist _2,_3 with ^*<_2<_max<_3 such that h_1^i(_i)=ł_1 i=2,3. To these values correspond two different values of h_2(_i). Moreover, as ł_1→ 0, then _2→^* and _3→ +∞, and this case h_2(_2)=h_2^2(_2)→ h^2_2(^*)=0 and h_2(_3)=h_2^2(_3)→ł_2^*. * On the other hand, when ł_1∈ (-∞,0). There exist _1<0<_2<^* such that ł_1=h_1^i(_i) i=1,2, with _1→ -∞ and _2→^* as ł_1→ 0. Then, h_2(_1)=h_2^1(_1)→ł_2^* and h_2(_2)=h_2^2(_2)→ 0. Observe that when ł_1→ -∞ then _1→ 0^- and _2→ 0^+, and hence h_2(_1)→Ł_2^+ and h_2(_2)→Ł_2^-. With this construction, we can define H^+(ł_1):= {[ h_2((h_1^3)^-1(ł_1)) ; h_2((h_1^1)^-1(ł_1)) ; ]. and H^-(ł_1):=h_2((h_1^2)^-1(ł_1)) Observe that thanks to the monotony of the maps h_1^i for i=1,2,3, H^+ and H^- are well defined. Moreover, lim_ł_1→ 0^+ H^+(ł_1)=lim_ł_1→ 0^+h_2((h_1^3)^-1(ł_1)) =lim_→ +∞h_2()=ł_2^*, and lim_ł_1→ 0^- H^+(ł_1)=lim_ł_1→ 0^-h_2((h_1^1)^-1(ł_1)) =lim_→ -∞h_2()=ł_2^*. As a consequence, H^+ is continuous. On the other hand, lim_ł_1→ł_1^max H^+(ł_1)=lim_→_maxh_2()=ł_2, lim_ł_1→ł_1^max H^-(ł_1)=lim_→_maxh_2()=ł_2. Finally, lim_ł_1→-∞ H^+(ł_1)=lim_→ 0^+h_2()=Ł_2^+, and lim_ł_1→-∞ H^-(ł_1)=lim_→ 0^-h_2()=Ł_2^-. We show that H^+(ł_1) is decreasing. Take ł_1^1<ł_1^2. * When ł_1^1<ł_1^2<0: then (h_1^1)^-1(ł_1^2)<(h_1^1)^-1(ł_1^1)<0 and so h_2((h_1^1)^-1(ł_1^2))<h_2((h_1^1)^-1(ł_1^1)). This concludes that H^+(ł_1^1)> H^+(ł_1^2). * Assume now that ł_1^1<0<ł_1^2: in this case (h_1^1)^-1(ł_1^1)<0< (h_1^3)^-1(ł_1^2) and then h_2((h_1^1)^-1(ł_1^1))>0>h_2((h_1^3)^-1(ł_1^2) ), that is H^+(ł_1^1)> H^+(ł_1^2). * Finally when 0<ł_1^1<ł_1^2: in this case 0< (h_1^3)^-1(ł_1^2)<(h_1^3)^-1(ł_1^1). Again, H^+(ł_1^1)> H^+(ł_1^2). We can argue in the same manner for H^-. This completes the proof. Case ∫_Ø_2m_2>0 can be handled in an analogous way, but the case ∫_Ø_2m_2=0 deserves a comment. In this case, H^+ and H^- should be defined as follows: H^+(ł_1):= {[ h_2((h_1^1)^-1(ł_1)) ; 0 ; ]. and H^-(ł_1):= {[ h_2((h_1^2)^-1(ł_1)) ; 0 ; ]. §.§ Case m_i changes sign, i=1,2. Consider in this case ∫_Ø_1m_1<0, ∫_Ø_2m_2<0, and then ^*<0. Assume that m_i changes sign for i=1,2 and ∫_Ø_1m_1<0, ∫_Ø_2m_2<0. Then, for each ∈ there exists a unique value ł_1=h_1() such that f_(ł_1)=0. Moreover, the map ∈↦ h_1() is continuous, h_1() {[ >0 ,; =0 ; <0 ]. and lim_→±∞h_1()=0. As consequence, there exist _min<^*<_max such that h_1(_min)=min_∈ h_1()=ł_1^min<0, h_1(_max)=max_∈ h_1()=ł_1^max>0. Finally, the map ↦ h_1() is decreasing in (-∞, _min) and (_max,∞) and increasing in (_min,_max). For h_2(), we can deduce the following Assume that m_i changes sign for i=1,2 and ∫_Ø_1m_1<0, ∫_Ø_2m_2<0. Then, the map ∈↦ h_2() is continuous, h_2() {[ >0 ,; =0 ; <0 ]. Moreover, lim_→±∞h_2()=ł_2^*. As a consequence, there exists _min∈ (^*,0) such that h_2(_min)=min_∈h_2()=ł_2^*<0. We have represented in Figures <ref> and <ref> some examples of the maps h_1() and h_2() in the case m_1 and m_2 changing sign and ∫_Ø_1m_1<0 and ∫_Ø_2m_2<0. * By Proposition <ref>, we deduce that F(ł_1,ł_2)<0 * Now, we introduce some notation: h_1():= {[ h_1^1() ; h_1^2() ; h_1^3() ]. * When ł_1=ł_1^max, there exists a unique value of , =_max such that h_1(_max)=ł_1. 
The value h_2(_max):=ł_2 verifies that F(ł_1^max,ł_2)=0. * Take now ł_1∈ (0,ł_1^max). Then, there exist ^*<_2<_3 such that ł_1=h_1^i(_i) i=2,3, specifically, ł_1=h_1^2(_2)=h_1^3(_3). Moreover, _2→^* and _3→ +∞ as ł_1→ 0. For these values, h_2(_2)→ 0 and h_2(_3)→ł_2^*. Observe that h_2(0)=0. * Consider the case ł_1∈ (ł_1^min,0). There exists a unique value of _1<_2<^* such that ł_1=h_1^i(_i) i=1,2, in fact, ł_1=h_1^1(_1)=h_1^2(_2). In this case, as ł_1→ 0, then with _1→ -∞ and _2→^*. Hence, h_2(_1)→ł_2^* and h_2(_2)→ 0. * The case ł_1=ł_1^min is analogous to the first case. Now, we define the maps H^+(ł_1):= {[ h_2((h_1^3)^-1(ł_1)) ; h_2((h_1^1)^-1(ł_1)) ; ]. and H^-(ł_1):=h_2((h_1^2)^-1(ł_1)) This completes the proof. § SEMILINEAR INTERFACE PROBLEMS In this section we study the semilinear problem (<ref>). Problem (<ref>) possesses a positive solution if and only if F(ł_1,ł_2)<0. In case the existence, the positive solution is unique. Assume that there exists at least a positive solution (u_1,u_2) of (<ref>). Then, using Proposition <ref> 1., 0=Ł_1(-ł_1m_1+u_1^p_1-1,-ł_2 m_2+u_2^p_2-1)>Ł_1(-ł_1m_1,-ł_2 m_2)=F(ł_1,ł_2), whence we deduce that F(ł_1,ł_2)<0. On the other hand, assume that F(ł_1,ł_2)<0. Then =(u_1,u_2)=ε(_̌1,_̌2), =(u_1,u_2)=K(1,1), it is a pair of sub-supersolution for ε small and K large. Indeed, K and ε must verify K^p_i-1≥ |ł_i|m_i_L^∞(Ø_i), ε^p_i-1_̌i_L^∞(Ø_i)≤ -F(ł_1,ł_2) i=1,2.. Clearly, we can take ε small and K large verifying both inequalities and such that ≤ in Ø. The uniqueness follows by Theorem 4.3 in <cit.>. § ACKNOWLEDGMENT MMB, CMR and AS were partially supported by PGC 2018-0983.08-B-I00 (MCI/AEI/FEDER, UE) and by the Consejería de Economía, Conocimiento, Empresas y Universidad de la Junta de Andalucía (US-1380740, P20-01160 and US-1381261). MMB was partially supported by the Consejería de Educación y Ciencia de la Junta de Andalucía (TIC-0130). 00 ab99 G. A. Afrouzi and K. J. Brown, On principal eigenvalues for boundary value problems with indefinite weight and Robin boundary conditions, Proc. Amer. Math. Soc., 127, (1999), 125-130. santijlg S. Cano-Casanova, and J. López-Gómez, Properties of the principal eigenvalues of a general class of non-classical mixed boundary value problems, J. Differential Equations, 178 (2002), 123-211. chenCPDE C.-K. Chen, A barrier boundary value problem for parabolic and elliptic equations, Comm. Partial Differential Equations, 26:7-8, (2001) 1117-1132. chen2 C.-K. Chen, A fixed interface boundary value problem for differential equations: a problem arising from population genetics, Dyn. Partial Differ. Equ., 3 (2006) 199-208. cp G. Ciavolella and B. Perthame, Existence of a global weak solution for a reaction-diffusion problem with membrane conditions, J. Evol. Equ. 21 (2021), 1513-1540. cia2 G. Ciavolella, Effect of a membrane on diffusion-driven Turing instability, Acta Appl. Math., 178 (2022), Paper No. 2, 21 pp KedemK O. Kedem and A. Katchalsky, A physical interpretation of the phenomenological coefficients of membrane permeability, The Journal of General Physiology, 45, 143–179, (1961). bcs B. B. V. Maia, C. Morales-Rodrigo and A. Suárez, Some asymmetric semilinear elliptic interface problems, Journal of Mathematical Analysis and Applications, 526, (2023), 127212. chinos Y. Wang, and L. Su, A semilinear interface problem arising from population genetics, J. Differential Equations, 310 (2022) 264-301.
http://arxiv.org/abs/2307.03080v1
20230706154829
A Map-Free LiDAR-Based System for Autonomous Navigation in Vineyards
[ "Riccardo Bertoglio", "Veronica Carini", "Stefano Arrigoni", "Matteo Matteucci" ]
cs.RO
[ "cs.RO" ]
Agricultural robots have the potential to increase production yields and reduce costs by performing repetitive and time-consuming tasks. However, for robots to be effective, they must be able to navigate autonomously in fields or orchards without human intervention. In this paper, we introduce a navigation system that utilizes LiDAR and wheel encoder sensors for in-row, turn, and end-row navigation in row-structured agricultural environments, such as vineyards. Our approach exploits the simple and precise geometrical structure of plants organized in parallel rows. We tested our system in both simulated and real environments, and the results demonstrate the effectiveness of our approach in achieving accurate and robust navigation. Our navigation system achieves mean displacement errors from the center line of 0.049 m and 0.372 m for in-row navigation in the simulated and real environments, respectively. In addition, we developed an end-row point detection method that allows end-row navigation in vineyards, a task often ignored by most works.
§ INTRODUCTION The increasing demand for food in the current climate-changing environment introduces new challenges, such as the need to increase production and improve the sustainability of crop management while reducing costs <cit.>. Agricultural robots can help achieve these goals by performing repetitive and time-consuming tasks, allowing farmers to improve production yields. At the same time, for robots to be effective, they must be able to navigate autonomously in fields or orchards without human intervention. Navigation approaches can be broadly divided into two categories: those with and those without a map of the environment. While map-based approaches can be helpful in unstructured environments, they require a more expensive sensor suite and incur increased computational effort. Additionally, localization on a pre-built map can fail due to the constantly changing nature of agricultural environments. Nevertheless, agricultural environments typically have a simple and precise geometrical structure, with crops organized in parallel rows. This structure can be exploited for navigation without the need for a map. Autonomous navigation in agriculture often utilizes GNSS information for pre-planned routes or as supplementary information. Additionally, Differential GNSS technology provides higher localization accuracy, down to the centimeter level.
However, the GNSS signal is not always available, especially for those cultivations with high plants and abundant vegetation. LiDAR and camera sensors are also utilized for navigation. LiDARs can be either 2D or 3D sensors, with the latter characterized by multiple scanning planes. LiDAR sensors provide a geometrical view of the environment, work at a reasonable frequency (over 10 Hz), and are precise. Cameras, such as RGB, stereo, or RGB-D, provide a more complex semantic interpretation of the environment, which is helpful for tasks like obstacle avoidance. Stereo and RGB-D cameras can also produce 3D renderings of the environment. Although LiDARs only provide geometrical data, they are less susceptible to lighting conditions than cameras, which is essential in agricultural environments where strong sunlight and shadows are typical. The VINBOT project <cit.> has developed a vineyard navigation system combining a line detection algorithm and GNSS navigation for in-row navigation. Two lines representing vineyard rows were identified using a 2D laser and RANSAC algorithm. The robot changed the corridor by rotating around one of two points representing the plant's end. Localization relied on IMU, GPS, and wheel odometry data, but tests have shown that plant holes should be manually managed to avoid misinterpretation. The VineSLAM algorithm, described in <cit.>, employed laser rangefinder data and known parameters to identify trunks and masts as landmarks for 2D SLAM. RFID tags were utilized to mark the corridor boundaries for topological mapping. However, the algorithm's accuracy relied on the detection of trunks and masts, and external factors such as grass and wind introduced substantial noise, compromising navigation reliability. Bernad et al. <cit.> proposed three straightforward in-row navigation approaches using only 2D LiDAR data. The most effective algorithm involved calculating the average distance from both sides of the crop row and estimating an orientation correction based on the offset. They achieved an accuracy of [separate-uncertainty = true, multi-part-units = repeat]0.041 ±0.034 from the center line when testing outdoors with potted maize plants. Rovira-Más et al. <cit.> presented a multi-sensor navigation approach for inside-row guidance. The authors used a so-called Augmented Perception Obstacle Map (APOM) to store and evaluate readings from a 3D stereo camera, LiDAR, and ultrasonic sensors. The map is then analyzed to find specific situations representing the status of row detection. The next navigation target point is only computed if one or both rows are found. Mengoli et al. <cit.> proposed Hough Transform-based methods for orchard navigation, including in-row and row-change maneuvers. The authors enhanced robustness by incorporating vineyard geometry conditions and using GPS to identify corridor ends. The detected pivot point in row-change maneuvers had an RMSE of 0.3429 m in the x direction and 0.5840 m in the y direction. Aghi et al. <cit.> introduced a vineyard in-row navigation algorithm with two components. The first component uses an RGB-D camera's depth map to detect the end of the row by fitting a rectangular area to the farthest pixels. In case of failure, a backup algorithm takes over, utilizing a neural network to identify and correct the robot's orientation if needed. The Field Robot Event (FRE)[<https://fieldrobot.nl/event>] is a robotics competition that focuses on autonomous navigation in agricultural environments. 
We drew inspiration from the in-row navigation approach used by the Kamaro team <cit.> in the 2021 FRE competition for maize fields and adapted it for vineyard navigation. Our navigation system utilizes a single LiDAR and wheel encoders to reduce sensor requirements and costs. Additionally, we developed an end-row navigation algorithm to facilitate autonomous row changes. We proposed a straightforward evaluation benchmark for in-row navigation and end-row point detection, eliminating the need for external devices like laser tracking or Differential GNSS systems. The system was tested in both real vineyard (see Figure <ref>) and simulated environments. The complete algorithm code is available at this GitHub repository: <https://github.com/AIRLab-POLIMI/MFLB-vineyard-navigation>. § MATERIALS AND METHODS We developed our navigation algorithm for a skid-steering mobile robot, although the general structure can also be adapted to other types of kinematics. The navigation software was implemented using the Robot Operating System (ROS) library, specifically the Melodic version on Ubuntu 18.04 LTS. The software architecture is presented in Figure <ref>. Initially, the robot is assumed to reach the beginning of a row; the In-row navigation module guides the robot to follow the row until the end is detected. Then, the robot performs an open-loop turn managed by the End-Row navigation module, which guides the robot along the border of the vineyard until it reaches a specified row to turn into, where the In-row navigation module is reactivated. The following gives a more detailed description of each algorithm component. §.§ Input Data Our algorithm needs very few input data, namely, an odometry source and 2D laser scans. Since we used a robot with a skid-steering kinematic, we computed its odometry with the model presented in <cit.>. The kinematic relation is expressed as follows: ( v_x v_y ω_z) = A ·( V_l V_r) where v = (v_x, v_y) is the vehicle’s translational velocity with respect to its local frame, ω_z is its angular velocity, V_l and V_r are the left and right linear tread velocities, and matrix A is defined by Equation (<ref>). Following the experiments presented in <cit.> we have calibrated the matrix A that, in the case of an ideal symmetrical kinematic, takes the following form: A = α/2 x_ICR·[ 0 0; x_ICR x_ICR; -1 1 ] where, x_ICR is the x-axis component of the Instantaneous Center of Rotation (ICR), and α is a correction factor to account for mechanical issues such as tire inflation conditions or the transmission belt tension. Both these parameters have been empirically estimated following the directions provided in <cit.>. Beyond odometry, our navigation system expects 2D laser scans to perceive the environment. We transformed LiDAR messages from an Ouster OS1 3D LiDAR sensor into 2D laser scans through the pointcloud_to_laserscan ROS package[<https://github.com/ros-perception/pointcloud_to_laserscan>]. We set the sensor at 10 Hz and 1024 points for each of its 64 planes. We then filtered the laser scan messages to reduce their size. We first applied radius filtering to remove points outside a circle centered on the sensor and then downsampling to reduce the density of points. We also applied outlier filtering to remove noise from data. §.§ In-row navigation In the in-row navigation stage, the navigation system makes the robot traverse a corridor created by two lines of plants by maintaining an equal distance from them as much as possible. 
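As a schematic illustration of this equal-distance objective — a simplified sketch in the spirit of the side-distance averaging strategies discussed in the related work, not the controller adopted here (which is detailed next); all function names, gains, and thresholds are hypothetical — an angular correction can be computed from the mean left and right clearances in a single 2D scan:

import numpy as np

def centering_correction(ranges, angles, gain=0.8, max_range=3.0):
    # Toy equal-distance correction from a single 2D scan expressed in the
    # robot frame (x forward, y to the left).  Returns a yaw-rate command
    # that is positive (turn left) when the right-hand row is the closer one.
    x = ranges * np.cos(angles)
    y = ranges * np.sin(angles)
    # keep only nearby points roughly abreast of the robot
    side = (np.abs(x) < 1.0) & (ranges < max_range)
    left = y[side & (y > 0.0)]
    right = -y[side & (y < 0.0)]
    if left.size == 0 or right.size == 0:
        return 0.0  # one row not visible: no correction in this toy version
    offset = (left.mean() - right.mean()) / 2.0   # > 0 when closer to the right row
    return gain * offset

# usage on a synthetic scan: corridor walls at y = +1.0 m and y = -1.5 m
angles = np.linspace(-np.pi, np.pi, 360, endpoint=False)
ranges = np.full_like(angles, 10.0)
for i, a in enumerate(angles):
    s = np.sin(a)
    if s > 0.1:
        ranges[i] = min(ranges[i], 1.0 / s)     # left wall
    elif s < -0.1:
        ranges[i] = min(ranges[i], -1.5 / s)    # right wall
print(centering_correction(ranges, angles))     # negative: steer right, away from the nearer left row

In the actual system the correction is instead built from an obstacle-free cone and side rectangles, as described next.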
The approach we used for the in-row navigation has been adapted from that of the Kamaro team[<https://github.com/Kamaro-Engineering/fre21_row_crawl>] which participated in the 2021 FRE competition. The functioning of the In-row navigation module is graphically illustrated in Figure <ref>. The find_cone method analyzes the laser scan messages to find an obstacle-free cone in front of the robot. To do so, a cone centered on the moving robot direction is gradually grown by enlarging the apex angle until a certain number of points fall inside the cone. The two cone sides are moved independently, and they have a configurable length. Once the cone is found, we compute an angular offset between the cone center line and the robot center line. This angular offset is increased by an additional offset proportional to the distance between the robot and corridor center. The latter distance is computed by growing two rectangles on the side of the robot until a certain number of points fall into them. A graphical representation of the cone and rectangles is shown in Figure <ref>. The final angular offset defines a new line pointing toward the steering direction. We use a PID controller to steer toward the point on this line that is 1 in front of the robot. The linear speed is set to a constant value, and it is reduced if an object in front of the robot is detected. The algorithm uses a rectangle in front of the robot to calculate the target speed based on the distance between the robot and any obstacles. At each linear and angular speed update, the In-row module checks if the end of the row has been reached. This procedure involves a rectangular area (colored light green in Figure <ref>) placed in front of the robot, spanning the entire corridor and part of both row sides. The corridor is over when the number of points in the rectangle approaches zero. The last step is to exit the row by a fixed distance measured through the robot odometry. Since the latter distance is usually of about 1, the odometry guarantees a reasonable accuracy. Once the robot has exited a row, it performs an in-place rotation by a fixed angle (usually 90). The user needs to set the direction of the first rotation, left or right. During the rotation, the odometry is monitored to halt the robot when the required angle has been performed. Note here that we expect the robot to skid, and because of this, the effective rotation might differ from 90. However, the algorithm overcomes this problem by selecting two end points—one positioned in front and the other at the back of the robot. Subsequently, it rotates the robot to align its moving direction parallel to the line segment connecting these two points. It's also important to note that the robot does not need to be perfectly aligned with the row direction when it begins navigating at the beginning of the row. In both scenarios, the algorithm compensates for an incomplete rotation up to a specific angle. The maximum angle that can be recovered depends on factors such as the width of the row, the robot's distance from the row's starting point, and algorithm parameters like the length of the cone sides. Once the turn is completed, the navigation system activates the End-row navigation module. §.§ End-row navigation After completing the turn, the navigation system initiates the End-row algorithm. A schematic representation of the End-row navigation algorithm is presented in Figure <ref>. 
The primary objective of this algorithm is to enable the robot to travel perpendicularly to the field rows until it reaches the next corridor. The algorithm is specifically designed to leverage row ends, which typically consist of wooden support poles in vineyards. We employed the Euclidean Cluster Extraction technique <cit.> to identify row ends from the 2D point cloud data. This simple algorithm is highly effective in vineyards because the rows are widely separated by open areas to allow for human operations. Each obtained cluster represents a row end. The subsequent task selects a point for each recognized cluster, representing the row end. We evaluated two policies to select such end point. The first policy, termed Nearest, involves selecting the nearest cluster point to the robot center, which is surrounded by a minimum number of points at a threshold distance. Therefore, the circular neighborhood's radius and the minimum number of points are parameters that need to be configured. The second policy, called Line fitting, involves a first step in which the end point is selected with the Nearest policy, then a line is fitted to the cluster of points, and finally, the end point is projected onto that line. We implemented line fitting using the random sample consensus (RANSAC) algorithm, finding that 100 iterations and a distance threshold of 0.1 offer a good balance between speed and accuracy. After detecting the points representing row ends, we use them to construct segments that indicate the navigation direction. Indeed, the navigation system keeps a fixed distance from row ends by maintaining a moving direction parallel to such fitted segments. Figure <ref> displays the clustered row ends in various colors and the identified end points through the Nearest policy with red squares. Additionally, the current direction segment is shown with a red line. Figure <ref> shows the clusters and end points obtained through the Line fitting policy. While the robot navigates parallel to end rows, it keeps track of the number of passed row ends and stops in the middle of the next corridor to enter. Then it will perform a 90 in-place rotation, and the system will activate the In-row navigation module again. § RESULTS We conducted experimental tests in both simulated and real environments. The simulation has been performed on the Gazebo simulator with vineyard models at different vegetative stages taken from the BACCHUS project repository[<https://github.com/LCAS/bacchus_lcas>] (see Figure <ref>). We also performed tests in a real vineyard located on the Piacenza (Italy) campus of the Università Cattolica del Sacro Cuore. The simulated environment consisted of three vineyard corridors approximately 36 long and approximately 2 large, characterized by three different vegetative stages: low, medium, and high. The results reported for the simulated environment are thus an average over the three vegetative stages. The real environment was a single vineyard corridor with a length of approximately 40 and a width of approximately 2.5, which is one of the typical settings in Italy. The vegetative stage of the real vineyard was comparable to the high vegetative stage of the simulated one. During the tests, we reached a maximum linear speed of 2 in the simulated environment and 1 in the real environment for both in-row and end-row navigation. We mounted the Ouster OS1 LiDAR sensor at an approximate height of 1 from the ground. 
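Returning to the end-point picking policies of the End-row module, the Line fitting policy can be sketched as follows — a minimal illustration only, with hypothetical function names, omitting the neighborhood-density check used in the Nearest step, and assuming the RANSAC parameters stated above (100 iterations, 0.1 m inlier threshold):

import numpy as np

def fit_line_ransac(points, n_iter=100, thresh=0.1, seed=0):
    # Fit a 2D line to an (N, 2) cluster with a basic RANSAC loop; returns
    # a point on the line and its unit direction vector.
    rng = np.random.default_rng(seed)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        p1, p2 = points[rng.choice(len(points), 2, replace=False)]
        d = p2 - p1
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        normal = np.array([-d[1], d[0]])
        dist = np.abs((points - p1) @ normal)   # point-to-line distances
        inliers = np.count_nonzero(dist < thresh)
        if inliers > best_inliers:
            best_inliers, best = inliers, (p1, d)
    return best

def line_fitting_end_point(cluster, robot_xy=np.zeros(2)):
    # "Line fitting" policy sketch: nearest cluster point, projected onto the fitted line.
    nearest = cluster[np.argmin(np.linalg.norm(cluster - robot_xy, axis=1))]
    p0, d = fit_line_ransac(cluster)
    return p0 + np.dot(nearest - p0, d) * d     # orthogonal projection

# usage on a synthetic, slightly noisy row-end cluster
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 0.6, 40)
cluster = np.stack([2.0 + t, 1.0 + 0.2 * t], axis=1) + rng.normal(0.0, 0.02, (40, 2))
print(line_fitting_end_point(cluster))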
The navigation system ran on an onboard Shuttle XPC (model DS81L15) equipped with an Intel(R) Core(TM) i7-4790S CPU and 8 GB of RAM. The LiDAR sensors produced messages at a frequency of 10, and the odometry was published at 50. All the ROS nodes were capable of keeping up with the 10 frequency of the LiDAR, except for the nodes responsible for clustering and end point detection, which proved to be the bottleneck of the system. Specifically, the node performing clustering with the Nearest end point picking policy operated at a minimum frequency of 9, while the one using the Line fitting policy ran at a minimum frequency of 5. Nevertheless, the bottleneck only affected the end-row navigation, which represents a small part of the total path traversed in a vineyard. §.§ In-Row Navigation Evaluation To evaluate the precision of the In-row navigation module, we measured the robot's displacement from the central row line. This displacement was determined by calculating the absolute distance between the robot's center and the central line of the row. In the simulated environment, we had access to the true robot position, whereas in the real-world test, we relied on the side distance measurements of the In-row algorithm performed via the LiDAR (which has a precision of ±0.01). Evaluating navigation accuracy in real agricultural environments is a challenging and ambiguous task currently addressed by agricultural robotics competitions such as that described in <cit.>. Alternatively, one could utilize an expensive yet highly accurate laser position tracking system, although determining the optimal target trajectory remains a nontrivial problem. In our case, we defined a perfectly row-centered trajectory as the optimal one. However, in both the simulation and the real vineyard, protruding vegetation and branches caused the robot to deviate from the central line, resulting in some average deviation from the center. Table <ref> presents the outcomes of in-row navigation tests performed in simulation across three rows at varying vegetation stages and in two real vineyard rows. The mean displacement from the central line was 0.049 in the simulated environment, whereas in the real vineyard, we observed a mean displacement of 0.372. In both scenarios, the robot successfully avoided protruding branches and never collided with the row sides. Table <ref> also presents the row width measurements computed from LiDAR scans. The measurements indicate that protruding vegetation causes row width variations, impacting robot centering. In the real scenario, the minimum measurable row width of 1.6 was reached, as our LiDAR has a minimum scanning distance of 0.8. §.§ Row Ends Detection Evaluation To estimate the accuracy of the row ends detection, we computed the Euclidean distance between the true center of row support poles and those detected by our row ends detection system. It is important to note that the assumption that the pole center is always the true row end point is not always valid, as vegetation can cover the pole and protrude outward. In the simulated environment, we computed the instantaneous Euclidean distance from the real pole center to the end point detected by our system during a full turn from one row to the next. We performed measurements for three different vegetative stages. In the real environment, obtaining multiple measurements of the real displacement of the pole center from the robot is laborious and time-consuming. 
Furthermore, without any absolute positioning system available, the only way to measure it was manually, which introduced measurement errors in the order of centimeters. Therefore, we statically positioned the robot in the middle of a row to detect the two side end points and compared them to manual measurements. In both the simulated and real scenarios, we compared the two policies explained in section <ref>: Nearest and Line fitting. Table <ref> shows the mean, max, and min distances between the true center poles coordinates and those detected by our system. In the simulated scenario, the Line fitting policy was more accurate with a mean of 0.155. The Nearest policy also showed an acceptable mean distance of 0.205 while being less computationally intensive. In the real scenario, the accuracy of both policies was comparable since the difference in the order of centimeters could be attributable to the error of manual measurements. Nonetheless, our row ends detection system performed accurately in both scenarios. § CONCLUSIONS In this paper, we have presented a simple and efficient map-free LiDAR-based navigation system designed for vineyard applications. Our approach relies on the geometrical structure of the environment and does not require a pre-built map or GNSS measurements. The navigation system is capable of in-row, turn, and end-row navigation and has been tested in both simulated and real vineyards. The results of our experiments indicate that the proposed navigation system achieves accurate and reliable navigation performance, even under challenging vineyard conditions with variations in row spacing and vegetative stages. The system can effectively detect protruding vegetation and adjust the trajectory accordingly, potentially reducing crop damage. The proposed navigation system is simple and cost-effective, relying only on odometry and LiDAR as sources of information, requiring low computational effort. Future work can explore testing with a 2D LiDAR to compare the navigation precision and extend the system's evaluation to other types of line-arranged crops. Additionally, the system could be integrated with a robust semantic obstacle detection algorithm to enhance the navigation system's safety. -12cm § ACKNOWLEDGMENT We are grateful to our colleagues at Università Cattolica del Sacro Cuore (Piacenza, Italy), especially Professor Matteo Gatti, for allowing us to conduct experiments in their vineyard on the university campus. This study was conducted within the Agritech National Research Center, and received partial funding from the European Union Next-GenerationEU (Piano Nazionale di Ripresa e Resilienza (PNRR), missione 4, componente 2, investimento 1.4, D.D. 1032 17/06/2022, CN00000022), the European Union's Digital Europe Programme under grant agreement N.101100622, and the European Union's H2020 grant N.101016577. IEEEtran
http://arxiv.org/abs/2307.02079v1
20230705074040
Relations between Stokes constants of unrefined and Nekrasov-Shatashvili topological strings
[ "Jie Gu" ]
hep-th
[ "hep-th", "math-ph", "math.MP" ]
§ INTRODUCTION String theory has rich non-perturbative structures. For instance, it was already pointed out in <cit.> for bosonic string and later in <cit.> for fermionic string that the free energy as a perturbative series in the string coupling g_s is divergent, as the coefficients have factorial growth, i.e. F(g_s) = ∑_g=0^∞ F_g g_s^2g-2, F_g ∼ (2g)!, which implies the power series has zero radius of convergence. Such a factorial growth of coefficients also signals that the full and exact free energy would require exponentially small non-perturbative corrections. These non-perturbative corrections were interpreted as effects from D-branes <cit.>, although detailed calculations of D-branes were only performed recently <cit.> to reproduce the exponentially small non-perturbative corrections in the cases of minimal string theory. In general the non-perturbative corrections to string free energies are difficult to study. One division of string theory where one might have a better chance of understanding the non-perturbative corrections is topological string theory. Topological string is constructed by topological twists on the worldsheet of perturbative string, and it captures the important BPS sectors of type II superstring compactified on a Calabi–Yau threefold. On the other hand, topological string is relatively simple. It permits a rather rigorous mathematical definition through Gromov-Witten theory, and the perturbative free energies in topological string can be computed via various methods, including holomorphic anomaly equations <cit.>, the topological vertex <cit.>, topological recursion <cit.>, blowup equations <cit.>, etc. Making use of these methods, hundreds of terms of perturbative free energies can be computed in the case of non-compact Calabi–Yau threefolds, and more than sixty terms have recently been computed for a certain family of compact Calabi–Yau threefolds in <cit.>, building on <cit.>, including the famous quintic model, which provides a treasure of perturbative data unimaginable in critical string theory, making tests of various proposals of non-perturbative corrections possible. It is due to these facts that it was mused in “A panorama of physical mathematics c. 2022” <cit.> that if we wish to make progress on understanding what string theory is, including all of its non-perturbative aspects, it is more tractable and more realistic to try to answer this question first in topological string. Among the different proposals to study non-perturbative corrections to topological string free energies, one of the most promising is to use the resurgence theory <cit.>[See lectures by both mathematicians and physicists <cit.>]. The key idea is that non-perturbative corrections to a divergent perturbative series appear in the form of trans-series, and the full and exact solution can be written as the Borel–Laplace resummation, a distinguished method to resum a divergent power series, of a certain linear combination of these trans-series in addition to the perturbative series. Furthermore, in order for the resummed linear combination to be a well-defined function, the trans-series representing non-perturbative corrections must be encoded in, and can therefore be extracted from, the perturbative series, through various Stokes automorphisms characterised by a collection of numbers called Stokes constants.
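As a toy illustration of these statements — the textbook Gevrey-1 example with coefficients a_n = n!, not topological string data — the Borel transform sums to 1/(1-ζ), with a single Borel singularity at ζ = 1, and the two lateral Borel–Laplace resummations on either side of the positive real axis differ by a purely imaginary term proportional to e^{-1/z}: exactly the kind of exponentially small, non-perturbative ambiguity that trans-series and Stokes constants keep track of. A minimal numerical check (assuming mpmath; the contour tilt and truncation are chosen purely for illustration):

import mpmath as mp

# Toy model: a_n = n!, whose Borel transform sums to 1/(1 - zeta), with a
# single Borel singularity at zeta = A = 1.
A, z = mp.mpf(1), mp.mpf("0.2")

def lateral_sum(eps):
    # lateral Borel-Laplace sum along the ray arg(zeta) = eps
    phase = mp.exp(1j * eps)
    integrand = lambda t: mp.exp(-t * phase / z) / (1 - t * phase) * phase
    return mp.quad(integrand, [0, 1, 2, 60]) / z

s_plus = lateral_sum(mp.mpf("0.05"))
s_minus = lateral_sum(mp.mpf("-0.05"))
print(s_plus - s_minus)                    # purely imaginary and exponentially small
print(2j * mp.pi * mp.exp(-A / z) / z)     # residue prediction ~ exp(-A/z)

Up to quadrature error the two printed numbers agree, independently of the small tilt angle, since only the pole at ζ = 1 lies between the two rays.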
The resurgence theory therefore offers a roadmap to systematically study non-perturbative contributions to topological string free energy, making use of the already available rich data of perturbative expansion. Resurgence techniques were first applied to study topological string in <cit.>, where for instance it was checked numerically that indeed topological string free energies grow like (<ref>), and the non-perturbative effects that control such a growth behavior were analysed in simple models. Later studies focused on the simplest model, the resolved conifold, where both the trans-series and the Stokes constants could be computed <cit.> (see also <cit.> building on <cit.>). For more complicated geometries, these computations are much more difficult. Nevertheless, based on a novel method proposed in <cit.> to systematically calculate the trans-series using holomorphic anomaly equations <cit.>, the trans-series in any non-perturbative sectors were calculated in closed form <cit.> for both compact and non-compact Calabi–Yau threefolds. In a different line of researches, Tom Bridgeland proposed that the BPS invariants or generalised DT invariants of a non-compact Calabi–Yau threefold can be used to construct a Riemann-Hilbert problem whose solution of τ function expands to the topological string free energy <cit.>. These BPS invariants are countings of stable D-brane bound states. Various subsets of BPS invariants are already known to be related to topological string. For instance, the Gopakumar-Vafa formula <cit.> expresses the perturbative free energy in terms of the D2-D0 BPS invariants, and the generating function of D6-D2-D0 BPS invariants, also known as the PT invariants, is also related to the topological string free energy <cit.>. Bridgeland's proposal provides another interesting link between basically Stokes constants of topological string free energies and BPS invariants, but concrete construction could only be made for the resolved conifold (see also <cit.>), where the only non-trivial BPS invariants are countings of D2-D0 bound states. Various hints exist for more complicated models, for instance checks of integrality of Stokes constants in <cit.> in certain models, but no conclusive statements can be made, and many questions still remain open, including: if the relation between Stokes constants and BPS invariants can be generalised, and if true, what are the concrete formulas, and whether the full set or only a subset of BPS invariants can be recovered from the Stokes constants. In this short note, we will make an important step towards answering these questions through a slightly different but related route, summarised by Fig. <ref>. When the target space is a suitable noncompact Calabi–Yau threefold X, topological string is characterised by a Riemann surface called the mirror curve equipped with a canonical 1-form associated to X by mirror symmetry. At the same time, it is equivalent to a 5d N=1 gauge theory on S^1×^4, which reduces to an N=2 gauge theory when S^1 shrinks to a point, corresponding to certain scaling limit of the mirror curve <cit.>. It was found in <cit.> that the canonical 1-form describes a metric on the mirror curve, and BPS states of either the topological string or the supersymmetric field theory can be described by geodesic 1-cycles on the mirror curve with respect to this metric. 
This observation was developed into a full-blown method to calculate BPS invariants in <cit.> known either as the spectral network (4d field theory) <cit.> or exponential network (5d field theory) <cit.>. On the other hand, it was pointed out that the mirror curve can be promoted via quantisation to either a differential operator (4d field theory) or a difference operator (5d field theory), called the quantum mirror curve, and the periods of the mirror curves are promoted to quantum periods which are divergent power series in ħ through the exact WKB method. These quantum periods have remarkable resurgent properties. When the quantum mirror curve is a second order differential operator (4d rank one gauge theory), it was proved that the trans-series that appear in Stokes automorphisms of quantum periods are associated also with geodesic 1-cycles <cit.> and they should be naturally mapped to the BPS states. Furthermore, the Stokes automorphism takes a form, known as the Delabaere–Dillinger–Pham formula <cit.>, which resembles the Kontsevich-Soibelman automorphism, a crucial ingredient of the wall-crossing formula <cit.>, also known as the spectrum generator of BPS invariants <cit.>. The Stokes constants, which are coefficients of the Stokes automorphisms, should then be identified with the BPS invariants, which are coefficients of the Kontsevich-Soibelman automorphisms. This identification was checked in various 4d rank one gauge theories <cit.> and is expected to hold in higher rank theories. This provides the upper horizontal arrows in Fig. <ref>. Quantum periods can be studied through another approach different from the exact WKB method. Inspired by Nekrasov's partition function for 4d and 5d gauge theories on the Omega background <cit.>, the topological string free energy was refined to depend on not a single perturbative expansion parameter g_s but two _1,_2, and it reduces to the unrefined case in the limit _1+_2 = 0. In another special limit _2→ 0, called the Nekrasov–Shatashvili limit <cit.>, the topological string free energy provides a set of relations between quantum A- and B-periods called the quantum special geometry <cit.>. The Wilson loop amplitudes in various gauge representations in the NS limit then provide another set of relations between these quantum periods. Together the NS free energies and the NS Wilson loop amplitudes completely determine the quantum periods <cit.>, and the Stokes automorphisms of quantum periods can also be derived from those of NS free energies and Wilson loop amplitudes. A pattern of the latter was recognised, from which one concludes that the Stokes automorphisms of quantum periods in 5d gauge theories follow the same DDP formulas as 4d gauge theories <cit.> (see also <cit.>), further confirming the horizontal arrows in the second arrow in Fig. <ref>. In addition, Stokes constants of NS free energies are themselves identified with Stokes constants of quantum periods, providing the vertical arrows on the right hand side in Fig. <ref>. In this note, we will argue that the Stokes constants of unrefined free energies and those of NS free energies of topological string can also be identified, possibly up to a sign, given by the key formula (<ref>), thus providing the lower horizontal arrows in Fig. <ref>. The key idea is to use the blowup equations for refined topological string free energy <cit.>, which in a special limit provides the sought for relationship between unrefined and NS free energies known as the compatibility formula <cit.>. 
Once the lower horizontal arrows are established, one can make direct identification betwene Stokes constants of unrefined topological string free energy and BPS invariants of the Calabi–Yau threefold, given by eq. (<ref>), represented by the dashed vertical arrows on the left hand side in Fig. <ref>. The identified BPS invariants include all the D4-D2-D0 stable bound states. We emphysize that our argument only works on topological string on non-compact Calabi–Yau threefolds where NS free energies and quantum periods as intermediary steps can be defined. A direct argument that make the left vertical arrow that works also for compact Calabi–Yau threefolds would be very desirable, possibly along the line of <cit.>. The remainder of the paper is structured as follows. In section <ref>, we sketch the ingredients of resurgence theory enough for understanding the derivation and the statements in this paper. We then summarise the known results on Stokes automorphisms for both unrefined free energies <cit.> and NS free energies <cit.>. In section <ref>, we introduce and slightly extend the blowup equations for refined topological string, which are important for later sections. We give the key argument in section <ref> that makes the connection between the Stokes constants of unrefined and NS free energies. Finally, we summarise and discuss open problems in section <ref>. In Appendix <ref>, we re-derive and generalise the relationship between Stokes constants of NS free energies and those of quantum periods, which is crucial for closing the circle in Fig. <ref>. § RESURGENCE AND TOPOLOGICAL STRING §.§ Resurgence theory in a nutshell We give a lightning overview of the resurgence theory <cit.>. We refer to the lectures <cit.> for details. Suppose we have a perturbative series φ(z) of the Gevrey-1 type, i.e. φ(z) = ∑_n≥ 0 a_n z^n, a_n ∼ n!, which is divergent with zero radius of convergent, the resurgence theory tells us that if we wish to find the full and exact description of the quantity in the form of a function f(z) that admits φ(z) as an asymptotic expansion, we must include non-perturbative corrections, which are actually encoded in and therefore can be extracted from the perturbative series itself. In order to uncover the hidden non-perturbative corrections, we introduce the Borel transform, φ(ζ) = ∑_n≥ 0a_n/n!ζ^n. This is a convergent series with a positive radius of convergence in the complex ζ-plane _ζ, also known as the Borel plane, and it can be analytically continued to the entire complex plane with possible singularities, known as the Borel singularities. If the singularities are discrete points and form a closed subset Ω⊂_ζ and in addition φ(ζ) allows analytic continuation along any path in _ζ\Ω, φ(ζ) is called a resurgent function, and φ(z) called a resurgent series. In this case, we can define the Laplace transform of φ(ζ) along any direction which does not pass through any Borel singularities[We also need to assume that φ(ζ) has at most exponential growth ∼^|ζ|/R with R>|z| when ζ→∞.] s_θ(φ)(z) = 1/z∫_0^^θ∞^-ζ/zφ(ζ) ζ. with θ = z. This is known as the Borel–Laplace resummation of φ(z). According to the resurgence theory, each of the discrete singular points in the Borel plane in fact represents a non-perturbative saddle point[They could also be renormalons which have no semi-classical saddle point interpretation. But this distinction is irrelevant for our discussions.] 
whose action is given by the position of the singular point, and the perturbative series φ^()(z) in the non-perturbative sector can be uncovered by the remarkable formula s_θ_+φ(z) - s_θ_-φ(z) = s_^-/z s_θ_-φ^()(z). Here we change (<ref>) slightly and define lateral Borel–Laplace resummations, as shown in Fig. <ref>, s_θ_±(φ)(z) = 1/z∫_0^^(θ± 0)∞^-ζ/zφ(ζ) ζ. And in (<ref>) we choose z so that θ = z = . The constant s_ is known as the Borel residue. If we have a string of singular points k (k=1,2,…) along the ray ρ_ = ^_+, known as the Stokes ray, the right hand side of (<ref>) should be modified to include contributions from all these non-peturbative saddles s_θ_+φ(z) - s_θ_-φ(z) = ∑_k=1^∞s_k^-k/z s_θ_-φ^(k)(z). All resurgent series form an algebra, and the analytic formula (<ref>) can be represented alternatively as an algebraic operator in the algebra of resurgent series. Introducing Stokes automorphism S_θ associated to the Stokes ray ρ_θ s_θ_+ = s_θ_-∘S_θ so that S_θφ(z) = φ(z) + ∑_k=1^∞s_kA^-k/zφ^(k)(z), then S_θ is, as its name suggests, an automorphism so that for two power series φ, ψ S_θ(φ(z)ψ(z)) = S_θφ(z) S_θψ(z). Another even more powerful way to encode the formula (<ref>) is to introduce the alien derivatives k associated to each Borel singularity related to the Stokes automorphism S_θ by S_θ = exp(∑_k=1^∞k ). Upon acting on the series φ, one has kφ(z) = S_kφ^(k)(z), where the constants S_k are Stokes constants, and they are combinatoric combinations of s_k. More importantly, it can be proved that S_θ are proper derivations, in the sense that they satisfy the following properties * Leibniz rule: if φ,ψ are two power series (φ(z)ψ(z)) = ψ(z)φ(z) + φ(z)ψ(z). * Chain rule: if ψ(x,z) is a parametric power series in z with an auxiliary parameter x, and ψ(z) another power series in z φ(ψ(z),z) = φ(x,z)|_x=ψ(z) + _xφ(x,z)ψ(z) * Commutation relation[Strictly speaking, the commutation relation with the expansion parameter z only holds with a slightly different convention. But we will only use the commutation relation with auxiliary parameter x in later sections.]: [,_x] = [,_z] = 0. Thus is a derivation independent of _x and _z. §.§ Resurgent structure of topological string Consider topological string with the target space a non-compact Calabi–Yau threefold X. Let the number of linearly independent compact 2-cycles and compact 4-cycles be b_2 and b_4 respectively. We collect the complexified Kahler moduli of 2-cycles t^i (i=1,…,b_2) and of 4-cycles t_D,ℓ (ℓ=1,…,b_4) in a vector Π = ( [ t_D; t; 1 ]). where the last entry can be regarded as the trivial Kahler modulus of a point. In general b_2 ≥ b_4, and if b_2 > b_4, we can make the distinction of b_4 linear combinations of t^i associated to 2-cycles that intersect with compact 4-cycles, and another b_2-b_4 linear combinations of t^i associated to 2-cycles that have zero intersection numbers with compact 4-cycles. These linear combinations are called true moduli and mass parameters, denoted by t_*^ℓ and t_*^k+b_4 = m^k (ℓ=1,…,b_4, k=1,… b_2-b_4) and we call the corresponding 2-cycles gauge 2-cycles and flavor 2-cycles, as they are related to Coulomb moduli and flavor masses in the associated field theory. From superstring theory point of view, the Kahler moduli in the vector Π are also interpreted as the central charges of D4-, D2-, and D0-branes supported on these cycles. 
The moduli space of the Calabi–Yau threefold enjoy special properties known as the special geometry relations, among which we can define the prepotential _0, a function of t^i, so that[Up to normalisation.] t_D,ℓ = ∑_i=1^b_2 C^i_ℓ_0/ t^i, ℓ=1,…,b_4, where C^i_ℓ are entries of the integer valued b_2× b_4 intersection matrix between compact 2-cycles and compact 4-cycles. _0 is the genus zero component of the free energy, and higher genus free energies _g can be constructed by coupling the worldsheet theory to gravity. Mathematically, the free energies are defined as the generating function of Gromov-Witten invariants, the counting of stable holomorphic maps from genus g Riemann surfaces to 2-cycles in the Calabi–Yau X, i.e. _g() = ∑_β∈ H_2(X) N_g,β^-t(β), where t(β) is the complexified volume of the 2-cycle β. Collectively, the perturbative free energy of topological string is ^(0)_top(,g_s) = ∑_g≥ 0_g() g_s^2g-2. Through mirror symmetry, the components of the vector Π are identified with complex structure moduli of the mirror threefold Y, which are periods of the holomorphic (3,0) form over integral 3-cycles in Y, or equivalently with periods of the canonical 1-form over integral 1-cycles in the mirror curve Σ that the threefold Y can reduce to. Hence Π is also called the period vector. The distinction between t^i and t_D,ℓ corresponds to a choice of symplectic basis of H_1(Σ) consisting of A-cycles and B-cycles so that the oriented intersection is[Note that in the cases of mirrors to non-compact Calabi–Yau threefolds, it is sometimes not possible to make choices of integral basis of cycles so that the intersetion matrix is of the form δ^i_j.] A^i ∩ B_ℓ = C^i_ℓ. And t^i, t_D,ℓ are correspondingly called the A- and B-periods. Such a choice is not unique, and a different choice of A- and B-cycles, known as a different frame, leads to different A- and B-periods. The frame given by (<ref>) is called the large radius frame. We denote A- and B-periods in a generic frame Γ by , , i=1,…,b_2, ℓ=1,…,b_4, and the special geometry relation (<ref>) becomes accordingly = ∑_i=1^b_2^i_ℓ_0/, ℓ=b_4, where ^i_ℓ is the intersection matrix of A- and B-cycles in frame Γ. Clearly, the prepotential _0, as well as the genus g free enegies _g, depend on the choice of frame Γ. Although this is not always the case, throughout our paper, we choose frames so that A- and B-cycles are integral cycles, so that changing a frame amounts to a symplectic transformation in Sp(b_3(Y),) of the interal periods. To find how free energies change across different frames, it was noted in <cit.> that free energies _g for g≥ 2 and exp(_1) are almost holomorphic modular forms, whose modular parameter is τ^Γ_ℓ m = ∑_i,j=1^b_2^i_ℓ^j_m ^2_0/^i ^j, ℓ,m=1,… b_4, and a change of frame is equivalent to a modular transformation of these almost holomorphic modular forms. In the remainder of this section, we will drop the superscript Γ for frame to reduce the notational clutter. Regardless of which frame one is at, the genus g free energy _g() is a well-defined function, while the perturbative series ^(0)_top(,g_s) is a divergent series of Gevrey-1 type <cit.> _g() ∼ (2g)! and therefore there should be non-perturbative corrections which can be analysed by resurgence techniques. It has been found that the locations of singularities of the Borel transform always coincide with classical integral periods up to normalisation <cit.>. 
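In practice such singularity locations are commonly found numerically by the Borel–Padé method: one truncates the genus expansion, divides the coefficients by the appropriate factorials to build the Borel transform, approximates the latter by a Padé rational function, and reads off the poles. A minimal sketch on a model Gevrey-1 series with a prescribed branch-point singularity (not actual topological string data; the same steps would be applied to the truncated series of the free energies):

import numpy as np
from scipy.interpolate import pade
from scipy.special import comb, factorial

# Model Gevrey-1 series whose Borel transform is (1 - zeta/A)**(-1/2), i.e. a
# single branch-point singularity at zeta = A (here A = 1.5).
A, N = 1.5, 30
n = np.arange(N)
a_n = factorial(n) * comb(2 * n, n) / (4.0 * A) ** n   # model "perturbative" coefficients

borel = a_n / factorial(n)        # Borel transform coefficients a_n / n!
p, q = pade(borel, N // 2)        # near-diagonal Pade approximant of the Borel transform
poles = np.roots(q.coeffs)
print(sorted(poles, key=abs)[:3]) # nearest poles accumulate at the singularity zeta ~ 1.5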
More precisely[See <cit.> for a detailed account of the issue of normalisation.], =ℵ(p·t_D + q·t + s), where ℵ = 4π^2, p^ℓ,q_i,s are certain integer numbers. We will not discuss the cases with p^ℓ = q_i =0 as they are trivial. The alien derivative of the free energy ^(0)(,g_s) at a singular point is proportional to the instanton amplitude associated to this singularity. In particular, if we have a sequence of singular points k, k=1,2,… along a Stokes ray ρ_arg, and let us denote the instanton amplitude at the singularity k by ^(k), the alien derivatives at these singular points read <cit.> k^(0)_top(,g_s) = /2π^(k)_top(,g_s). Here, both the perturbative free energy and the instanton amplitudes depend on the holomorphic frame of evaluation. In particular, the expression of the instanton amplitude depends greatly on the type of frame. If a frame, known as an A-frame, is chosen such that = is an A-period, i.e. p^ℓ=0, then the instanton amplitude simplifies greatly and we have ^(k)_top,(,g_s) = (1/k^2+/k g_s)^-k/g_s, where the subscript refers to the A-frame. If we are not in an A-frame, the instanton amplitude has more complicated form, but we will not need them here. The Stokes constants are very interesting, as it was found empirically in <cit.> that they satisfy certain intriguing properties. They are integers, and they seem to be frame independent as well as the same for all the singular points k. Among these properties, the frame independence may be due to the following reason. It is known that the holomorphic and frame dependent free energies _g() can be lifted to anholomorphic and frame independent free energies F_g(,) <cit.>, and choosing a frame is done by sending to some fixed value, which can be interpreted as choosing a different gravitational background <cit.>. We speculate that (<ref>) also holds when both sides are lifted to anholomorphic amplitudes k F^(0)_top(,,g_s) = /2π F^(k)_top(,,g_s). The frame independence of the Stokes constants is then equivalent to the conjecture that they are background independent. The purpose of this note is to uncover the nature of the Stokes constants by relating them to the Stokes constants of the refined free energy in the Nekrasov–Shatashvili limit. The perturbative free energy of topological string can be refined to ^(0)(,_1,_2) = ∑_n,g≥ 0 (_1+_2)^2n(_1_2)^g-1_n,g(), which reduces to the conventional topological string in the limit _1 = -_2 = g_s ^(0)_top(,g_s) = ^(0)(, g_s,- g_s). Another interesting limit we can take is the Nekrasov–Shatashvili limit ^(0)_NS(,ħ) = lim__2→ 0_2^(0)(,ħ,_2) =: ∑_n≥ 0^NS_n() ħ^2n-1. The components ^NS_n() are also almost holomorphic modular forms and they transform accordingly in a change of frame as well. The NS free energies are also Gevrey-1 series in ħ, and we can similarly perform resurgence analysis. We find the singularities of the Borel transform are also located in (<ref>) <cit.>[In <cit.>, locations of Borel singularities of NS free energies are found to be 4π^2(p·_D + ·+s). The difference of the factor of is because the authors of <cit.> used the convention ^(0)_NS = ∑_n _(n,0)ħ^2n-1 instead of the more natural ^(0)_NS = ∑_n (-1)^n_(n,0)ħ^2n-1 that follows from (<ref>).]. The alien derivatives of NS free energy at such a singularity is found to be k^(0)_NS(,ħ) = ħ/2π^(k)_NS. Both perturbative and instantonic free energies are frame dependent. In the case where is an A-period, i.e. in an A-frame, one finds <cit.> ^(ℓ)_NS, = (-1)^k-1/k^2^-k/ħ. In the cases where is not an A-period, i.e.  
is given by (<ref>) with p≠0, we can shift the definition of the prepotential _0 so that = -1/2π∑_i=1^b_2∑_ℓ=1^b_4_0/ t^i C^i_ℓ p^ℓ Then the instanton amplitudes of the NS free energy are <cit.> ^(ℓ)_NS = (-1)^k-1/k^2^-k (,ħ)/ħ, where the quantity (,ħ) is defined by (,ħ) = -ħ/2π∑_i=1^b_2∑_ℓ=1^b_4^(0)_NS(,ħ)/ t^iC^i_ℓ p^ℓ. Yet again, it was found empirically in <cit.> that the Stokes constants are integers, the same for all k, and seem to be frame independent. The forms of the alien derivatives (<ref>) and the properties of the Stokes constants have profound consequences. As pointed out in <cit.> (see also <cit.>) and as will be reviewed in Appendix <ref>, they imply, together with the Stokes transformation properties of Wilson loop amplitudes, that the quantum periods satisfy the DDP type of formulas for Stokes automorphisms <cit.>, so that the Stokes constants of quantum periods can be identified with BPS invariants. More importantly, the Stokes constants are identified with those of quantum periods <cit.>, so that themselves are given by BPS invariants. More precisely, the coefficients (p,q,s) in the composition of in (<ref>) are brane charges. For instance in the large radius frame, (p,q,s) are respectively the D4-, D2-, and D0-brane charges. is then the counting of BPS states of stable D-brane bound states with brane charges γ() = (p,q,s); in other words, = Ω(γ()). We will show in Section <ref> that the Stokes constants of unrefined free energies coincide up to a sign with of NS free energies as in (<ref>). § BLOWUP EQUATIONS OF REFINED TOPOLOGICAL STRING §.§ Blowup equations in large radius frame It was conjectured <cit.>, based on <cit.>, and checked in many examples that the blowup equations for supersymmetric gauge theories <cit.> can be generalised and are satisfied by free energies of topological string on a local Calabi–Yau threefold X. And it was pointed out in <cit.> that blowup equations can be used to solve D2-D0 type BPS invariants. This line of research was later expanded in <cit.>. See also related works in <cit.>. The blowup equations will play a crucial role in relating the Stokes constants of unrefined and NS free energies of topological string. Let us work in the large radius frame. The numbers of linearly independent compact 2-cycles and 4-cycles in the Calabi–Yau threefold X are respectively b_2 and b_4. Denote by the b_2× b_4 intersection matrix between compact 2-cycles and 4-cycles. The complex Kahler moduli of compact 2-cycles are . Then it was conjectured that there exist b_2-dimensional integer valued vectors r satisfying the checkerboard pattern conditions, also known as flux quantisation conditions, for non-vanishing D2-D0 brane BPS invariants N^d_j_L,j_R: 2j_L+2j_R +1 ≡r·d mod 2, N^d_j_L,j_R≠ 0, such that the refined free energy of topological string satisfies the so-called blowup equations, Λ(,_1,_2) = ∑_n∈^b_4 (-1)^|n|exp(_ref(+_1/2π,_1,_2-_1) +_ref(+_2/2π,_1-_2,_2) -_ref(,_1,_2) ) where |n| = n_1+…+n_b_4, and R = C·n +r/2. Here the vector r is in addition subject to the equivalence relation r∼r + 2 C·n', n'∈^b_4, as (<ref>) does not change under this transformation. Besides, the crucial factor Λ(,_1,_2) depends not on all the Kahler moduli but only on the mass parameters. We will be interested in the special cases where Λ vanishes identically. These are called vanishing blowup equations.
One subtlety concerning the blowup equations as claimed in <cit.> is that the refined free energies that appear in (<ref>) should be twisted in the sense that _ref→_ref(,_1,_2) = _ref^pert(,_1,_2) + _ref^inst(+/2,_1,_2), where , known as the B-field, is a _2 valued b_2-dimensional vector defined by ≡r mod (_2)^b_2. The twisted free energy was introduced so that, when a gauge theory description is available, it coincides with the logarithm of the Nekrasov partition function. Here F_ref^inst denotes the instanton contributions, while F_ref^pert is the sum of the classical and 1-loop contributions, given collectively by _ref^pert(,_1,_2) = 1/_1_2( 1/6∑_i,j,ka_ijkt_it_jt_k + 4π^2∑_i=1^sb_i^NSt_i ) + ∑_i=1^s b_i t_i - (_1+_2)^2/_1_2∑_i=1^s b_i^NSt_i. where a_ijk are triple intersection numbers of divisors in X and b_i^NS, b_i are some other intersection numbers. The three terms on the right hand side in (<ref>) come from _(0,0), _(0,1), _(1,0) respectively. As it stands, _ref^pert defined in (<ref>) does not have quadratic contributions and it is calculated from the special geometry relation (<ref>) using the Frobenius basis of periods. If, however, we integrate the special geometry relation (<ref>) using the integral basis of periods as we do in this paper, we would have that _ref^pert,Frob(-/2,_1,_2) = _ref^pert,int'l(,_1,_2) + 1/_1_2()+(1)+(_1+_2)^2/_1_2(1), with an appropriate representation of B. As we will later see in (<ref>), the blowup equations only depend on _(0,0), _(0,1), _(1,0) through _t^n≥ 2_(0,0), _t^n≥ 1_(0,1), _t^n≥ 1_(1,0), so that the difference in (<ref>) is irrelevant. In light of this relation, we can use the blowup equations (<ref>) with the understanding that we can use refined free energies of topological string based on an integral basis of periods for the moduli without twist, after making the shift →-/2. Let us illustrate (<ref>) with the simple example of the local ^2 model. This model has a one dimensional moduli space parametrised by a global modulus z. In the large radius frame, the integral periods are <cit.> Π = [ t_D; t; 1 ] = [ 1 -1/2 1/4; 0 1 0; 0 0 1 ]·Π_0 where Π_0 is the Frobenius basis given by Π_0 = [ X^(1,1); X^(1); 1 ] = [ 1/2(2π)^2(log(z)^2 + 2σ_1(z)log(z) + σ_2(z)); 1/2π(log(z) + σ_1(z)); 1 ] where σ_1(z) = ∑_j≥ 1 3(3j-1)!/(j!)^3(-z)^j, σ_2(z) = ∑_j≥ 1 18/j!(3j-1)!/(j!)^3(-z)^j(ψ(3j)-ψ(j+1)), with ψ being the digamma function. The special geometry relation is <cit.> t_D = -3 (2π)^-3_t _0(t). The prepotential obtained by integrating the special geometry relation using the integral periods t_D and t is _0^int'l(t) = -t^3/18 + t^2/12 - t/12 + (1) while the prepotential obtained by replacing t_D,t with the Frobenius periods X^(1,1),X^(1), as practised in <cit.>, is _0^Frob(t) = -t^3/18 + (1) and they satisfy (<ref>) after taking into account that we can take B=1 in local ^2 <cit.>. §.§ Blowup equations in a generic integral frame The blowup equations (<ref>) are formulated for free energies in the large radius frame. Nevertheless, it is possible to change the frame and write down the blowup equations in other integral frames as well. One way of doing this is using the anholomorphic blowup equations proposed in <cit.> and choosing the appropriate holomorphic limit. Another way is to expand the blowup equations in terms of _1+_2 and _1_2 ^_(0,1)-_(1,0) ∑_∈^b_4(-1)^||^-1/2R^2”_(0,0) (1+(_1+_2)(R'_(0,1)+R'_(1,0)-1/6R^3”'_(0,0))+…) = Λ_(0,0)() + (_1+_2)Λ_(1,0)()+….
Here we use the notation R^k ^(k)_(n,g) = ∑_i_1,…,i_k(2π)^-kR_i_1… R_i_k_t_i_1…_t_i_k_(n,g)() and Λ_(n,g) are the components of Λ(,_1,_2) defined through the expansion Λ(,_1,_2) = ∑_n,g≥ 0Λ_(n,g)(_1+_2)^n(_1_2)^g. At each order of _1+_2 and _1_2, the left hand side is a linear sum of ∑_∈^b_4 (-1)^|| R^k ^-1/2R^2 ”_(0,0) which are theta constants with modulus τ∝”_(0,0), and their higher dimensional generalisations. The coefficients of the linear sum are products of ^_(0,1)-_(1,0) and ^(k)_(n,g), which are almost holomorphic modular forms of τ. The identity (<ref>) at each order of the _1+_2 and _1_2 expansion is an equation of almost holomorphic modular forms, and these identities have been checked for various examples in <cit.>. A frame transformation is then akin to a modular transformation at each order of (<ref>), and the orders can be reassembled into the blowup equation in the corresponding new frame. The blowup equation in an arbitrary integral frame takes a form similar to (<ref>), (,_1,_2) = ∑_n∈^b_4 (-1)^|n|exp(_ref(+_1/2π,_1,_2-_1) +_ref(+_2/2π,_1-_2,_2) -_ref(,_1,_2) ), with = ·n + /2. The ingredient , including its coefficients and , as well as (,_1,_2), may change over different frames. We will only be interested in vanishing blowup equations, so the change of (,_1,_2) is trivial as it stays zero across all frames. On the other hand, and in should change appropriately so that each component in the expansion (<ref>) transforms consistently under modular transformations. The properties of and will be crucial in later sections. We will only consider frames defined by integral bases of periods, and in these cases we argue that is always an integer valued b_2× b_4 matrix. Indeed the sum over ∈^b_4 is a summation over discrete magnetic flux over the exceptional divisor ^2 in the spacetime ^2≅^4 blown up at the origin B^2 in the field theory description <cit.>, and each component of the flux vector is associated to an irreducible compact 4-cycle in the Calabi–Yau X <cit.>. Therefore, in the case of the large radius frame, where the moduli ^i = t^i are associated with integral 2-cycles, = is defined as the integer valued intersection matrix of 2-cycles and 4-cycles. In a generic integral frame, each modulus ^i is associated with either an integral 2-cycle or an integral 4-cycle. In the former case, the corresponding row of is given by the integer valued intersection numbers with 4-cycles; in the latter case, the corresponding row of should be given by the integer valued decomposition coefficients in terms of a basis of integral and irreducible 4-cycles. We emphasize that in a generic frame, is not identified with the intersection matrix given in (<ref>). Similar to the large radius case, the vector is defined up to the equivalence relation ∼ + 2·n', n'∈^b_4. We also comment that, even though we do not have a physics argument, the vector also seems to be integer valued in an arbitrary frame defined by integral periods. We demonstrate the integrality of both and through two examples below. §.§.§ Local ^2 We first consider the simple example of the local ^2 model. The first two orders of the expansion of the vanishing blowup equations (<ref>) with = 0 in terms of _1+_2 and _1_2, similar to (<ref>), are _0() = 0 and _1()('_(0,1)()+'_(1,0)()) - 1/6_3() ”'_(0,0)() = 0, where _k() are the theta constants _k() = ∑_n∈ (-1)^n (2π)^-k ()^k ^-1/2()^2”_(0,0). Consider first the large radius frame, where we will drop all the superscripts Γ. These two equations have been checked in <cit.>.
Indeed, we have R = 3(n+1/2) and it is easy to see that (<ref>) is satisfied as the summand of Θ_0 is an odd function of n. Furthermore, let us introduce the theta constants relevant for the local ^2 model a(τ) = θ^3[ [ 1/6; 1/6 ]](0,τ), b(τ) = θ^3[ [ 1/6; 1/2 ]](0,τ), c(τ) = θ^3[ [ 1/6; 5/6 ]](0,τ), d(τ) = θ^3[ [ 1/2; 1/6 ]](0,τ), with θ[,'](z,τ) = ∑_n∈exp(πτ(n+)^2+2π(n+)(z+')) They have modular weight 3/2 and enjoy the properties a(-1/τ) = κ_1τ^3/2c(τ), b(-1/τ) = κ_2τ^3/2d(τ), where κ_1,2 are roots of unity. Then the free energies _(0,1)(t), _(1,0)(t) are <cit.> _(0,1)(t) = -1/6log(d(τ)η^3(τ)), _(1,0)(t) = 1/6log(η^3(τ)/d(τ)), where the modular parameter is τ = t_D/ t = -3(2π)^-3^2_t _0. Note that here _(0,1)(t) is the holomorphic limit of the anholomorphic F_(0,1)(t,t̅) = -1/2logτ_2 η^2η̅^2 + 1/2logη/d^1/3, with τ_2 = τ. Using expressions of _(0,1), _(1,0) in (<ref>), (<ref>) can be integrated to ∑_n∈(-1)^n(n+1/2)^3π(n+1/2)^2τ = Const. d(τ) which can be checked to high degrees of q = exp(2πτ) expansion. As mentioned before, the local ^2 model has a one dimensional moduli space (K_^2) parametrised by a global parameter z. The moduli space of the local ^2 model has a conifold singularity at z = -1/27, at which the period t_D vanishes. It is appropriate then to adopt the conifold frame where t_D is chosen as the A-period when we are close to the conifold frame, and t as the B-period. In the conifold frame, the special geometry relation is t(t_D) = 3 (2π)^-3_t_D^c_0(t_D). The modular parameter is τ^c = - t(t_D)/ t_D = -3 (2π)^-3^2_t_D^c_0(t_D) = -1/τ, so that the first few free energies written as almost holomorphic modular forms are (up to a constant term) ^c_(1,0)(t_D)= 1/6log(η^3(τ^c)/b(τ^c)), and F^c_(0,1)(t_D,t̅_D) =-1/2logτ^c_2η(τ^c)^2η̅^2(τ^c) +1/2logη(τ^c)/b^1/3(τ^c), whose holomorphic limit is ^c_(0,1)(t_D) = -1/6log(b(τ^c)η^3(τ^c)). Now (<ref>) only holds if R^c= ^c(n+1/2), up to the equivalence relation (<ref>) for r^c. Using (<ref>),(<ref>), the identity (<ref>) can also be integrated to ∑_n∈(-1)^n(n+1/2) ^(^c)^2/3π (n+1/2)^2τ^c = Const. b(τ^c) which is only valid for ^c = 1. This is consistent with our prediction for ^c as the A-period t_D in the conifold frame is associated to the irreducible compact 4-cycle. Together with (<ref>), we can collect the following facts of integrality in the conifold frame for local ^2, ^c= 1, r^c = 1. §.§.§ Local ^1×^1 Here we consider another example of the local ^1×^1 model. This model has one gauge modulus and one mass parameter. We restrict ourselves to the case of trivial mass parameter, corresponding to constraining the two ^1's to have the same complexified Kahler modulus t. In this case, the model also has a one dimensional moduli space (K_^1×^1) parametrised by a global parameter z. Let us study the vanishing blowup equations. The first two equations from expanding the vanishing blowup equations in terms of _1+_2 and _1_2 are still (<ref>),(<ref>). We consider again the large radius frame first. In the massless local ^1×^1 model, we have <cit.> R = 2(n+1/2), and the free energies _(0,1)(t),_(1,0)(t) are respectively <cit.> _(0,1)(t) = -logη(τ), _(1,0)(t) = -1/6logθ_2^2(τ)/(θ_3(τ)θ_4(τ)), where the modular parameter is τ = t_D(t)/ t = -2(2π)^-3_t^2_0. Here we normalise the periods similar to (<ref>) in local ^2, namely t = X^(1) = 1/2πlog z + …, t_D = X^(1,1)+ … = 1/2(2π)^2log^2z + …. Furthermore, as in local ^2, _(0,1)(t) is the holomorphic limit of an anholomorphic free energy which reads F_(0,1)(t,t̅) = -1/2logτ_2η^2η̅^2. 
We can then check that (<ref>) naturally holds as the summand of Θ_0 is yet again an odd function of n, while (<ref>) can be integrated to ∑_n∈ (-1)^n (n+1/2)^2π(n+1/2)^2τ = Const.η^3(τ)θ_2(τ)/√(θ_3(τ)θ_4(τ)) which can be checked to high orders in the q = exp(2πτ) expansion. The moduli space (K_^1×^1) of the massless local ^1×^1 model has a conifold singularity at z = 1/16, at which the period t_D vanishes. It is then suitable to choose the conifold frame, where t_D is selected as the A-period when we are close to the conifold point, and t as the B-period. In the conifold frame, the special geometry relation is t = 2(2π)^-3_t_D_0^c(t_D) and the modular parameter is τ^c = - t/ t_D = -2(2π)^-3_t_D^2_0^c(t_D) = -1/τ. Using that θ_2(-1/τ) = κ_1' τ^1/2θ_4(τ), θ_3(-1/τ) = κ_2' τ^1/2θ_3(τ), where κ'_1,2 are roots of unity, it is easy to find that the first few free energies are (up to a constant term) _(1,0)^c(t_D) = -1/6logθ_4^2(τ^c)/θ_3(τ^c)θ_2(τ^c) and F^c_(0,1)(t_D,t̅_D) = -1/2logτ^c_2η^2(τ^c)η̅^2(τ̅^c) whose holomorphic limit is ^c_(0,1)(t_D) = -log(η(τ^c)). We then find again that (<ref>) holds if and only if R^c = ^c(n+1/2) up to the equivalence relation (<ref>) for r^c. Using (<ref>),(<ref>), we find that (<ref>) can also be integrated to ∑_n∈(-1)^n (n+1/2)^(^c)^2/2π(n+1/2)^2τ^c = Const.η^3(τ^c)θ_4(τ^c)/√(θ_2(τ^c)θ_3(τ^c)). This is only valid for ^c=1, which is consistent with our prediction for ^c, as the A-period t_D in the conifold frame is associated to the irreducible compact 4-cycle. Together with (<ref>) we collect the following facts of integrality in the conifold frame for massless local ^1×^1: ^c = 1, r^c = 1. § RELATION BETWEEN UNREFINED AND NS STOKES CONSTANTS In this section, we reveal the intimate connection between the Stokes constants of unrefined and NS free energies. The starting point is the observation in <cit.> that in the limit _2 = 0 the blowup equations (<ref>) of the vanishing type in the large radius frame become the so-called compatibility formulas which relate the unrefined and the NS free energies of topological string. The same is true for vanishing blowup equations in an arbitrary integral frame, and the corresponding compatibility formula reads 0 = ∑_∈^b_4 (-1)^||exp(_top(+ħ/2π,ħ)-∑_i=1^b_2∑_ℓ=1^b_4^i_ℓn_ℓ/2π t^i_NS(,ħ)), where is given in (<ref>). In this section, we will drop the superscript Γ indicative of frame to reduce notational clutter. Let be a point of the type (<ref>) in the Borel plane. We take the compatibility formula (<ref>) in an A-frame where is a classical A-period so that p = 0, and apply the alien derivative k. After using the Leibniz rule (<ref>), we find that 0 = ∑_∈^b_4 (-1)^||exp(_top(+ħ/2π,ħ)-∑_i=1^b_2∑_ℓ=1^b_4^i_ℓn_ℓ/2π t^i_NS(,ħ)) ×(k_top(+ħ/2π,ħ) -∑_i=1^b_2∑_ℓ=1^b_4^i_ℓn_ℓ/2π t^ik_NS(,ħ) ). Since both the unrefined and NS free energies are evaluated in an A-frame, their alien derivatives are simple, given by (<ref>),(<ref>),(<ref>),(<ref>). We then use the commutation relation (<ref>) to find /2π t^ik_NS(,ħ) = q_i (-1)^k/k^-k/ħ and use the chain rule (<ref>) to find k_top(+ħ/2π,ħ) = /2π(1/k^2+/kħ + π·/k + 2π/k··)(-1)^k·^-k /ħ where we have used that ··∈ due to the integrality condition (<ref>). We plug these equations into the second line of (<ref>) and drop any component which vanishes due to the compatibility formula, and we arrive at the crucial equation 0 = (-(-1)^k (·) + (-1)^k )1/k^-k/ħ ×∑_∈^b_4·· (-1)^||exp(_top(+ħ/2π,ħ)-∑_i=1^b_2∑_ℓ=1^b_4^i_ℓn_ℓ/2π t^i_NS(,ħ)). We make the distinction between two cases.
If the singularity is such that · = 0, the second line vanishes and (<ref>) gives no constraint between the two types of Stokes constants. Geometrically, condition (<ref>) corresponds to flavor 2-cycles in the Calabi–Yau threefold X, which are 2-cycles that have zero intersection numbers with compact 4-cycles. Flavor 2-cycles are not expected to support BPS states. If, on the other hand, the condition (<ref>) is not satisfied, we argue that one can find at least one vector for vanishing blowup equations so that the second line of (<ref>) does not vanish. Following (<ref>), the leading term in the ħ expansion of the second line of (<ref>) is ∑_∈^b_4 (··)(-1)^||^-1/2R^2”_(0,0). Recall from (<ref>) that the leading term in the vanishing blowup equations is ∑_∈^b_4 (-1)^||^-1/2R^2”_(0,0) = 0, and it has been conjectured in <cit.> that in the large radius frame, once is known, any integer valued vector with which (<ref>) holds is a suitable vector for vanishing blowup equations (<ref>) with Λ = 0. Among these suitable vectors, a special one is the one that makes the summand of (<ref>) an odd function of . We denote such a vector by _odd, and we conjecture that it exists in any integral frame, as the structure of vanishing blowup equations is similar across different frames. In the local ^2 model discussed in Section <ref>, _odd = 3 in the large radius frame, and _odd= 1 in the conifold frame. Similarly, in the local ^1×^1 model discussed in Section <ref>, _odd = 2 in the large radius frame, and _odd= 1 in the conifold frame. Such an _odd can also be found in all the examples discussed in <cit.>. Now if we take = _odd in (<ref>), it will no longer be zero, as the linear term ·· changes the summand from an odd function to an even function of . Since the second line of (<ref>) then does not vanish, we can naturally conclude that the Stokes constants of the unrefined free energy and those of the NS free energy must be the same up to a sign = (-1)^k(·_odd-1). This is our key formula. And thanks to (<ref>), it implies that ≒ Ω(γ()), i.e. the Stokes constant of the unrefined free energy of topological string for the Borel singularity k, which is not associated with flavor 2-cycles, coincides with the BPS invariant Ω(γ()), where γ() is the brane charge associated with , possibly up to a sign as indicated by the dotted equality sign ≒. § DISCUSSION In this note, we are interested in the relationship between Stokes constants of unrefined perturbative free energies and those of refined perturbative free energies in the Nekrasov–Shatashvili limit of topological string theory on a non-compact Calabi–Yau threefold. It was observed in <cit.> with the example of local ^2 that both perturbative series have Borel singularities located at classical integral periods of the mirror Calabi–Yau in the B-model, and that the Stokes constants of the two perturbative series at the same Borel singularity might be related. We confirm this observation and demonstrate, using the formulation of the blowup equations in a generic integral frame taken to a certain special limit, that this should be true on a generic non-compact Calabi–Yau threefold. More precisely, as long as the Borel singularity does not correspond to 2-cycles that intersect only non-compact 4-cycles in the A-model, the Stokes constants of the unrefined and the NS perturbative free energies must be the same up to a sign.
It was argued in <cit.> that the Stokes constants of the unrefined topological string free energy for a non-compact Calabi–Yau threefold should be related to BPS invariants, although as far as concrete constructions are concerned only the simplest Calabi–Yau threefold, the resolved conifold, was considered. Similar statements for more generic non-compact Calabi–Yau threefolds <cit.> and even for compact Calabi–Yau threefolds <cit.> have been proposed. In this paper we give strong support for these statements. In fact, since the Stokes constants of NS free energies can be shown <cit.> to coincide with the Stokes constants of quantum periods, and therefore can be interpreted as BPS invariants, the results of this paper imply immediately that the Stokes constants of unrefined free energies on a non-compact Calabi–Yau threefold can similarly be identified as countings of BPS states, i.e. of stable D4-D2-D0 brane bound states in type IIA string theory. Similar conclusions have been reached in <cit.>, where a closed-form formula for Stokes automorphisms of unrefined topological string free energies has been provided, which resembles the DDP formula of quantum periods. This paper opens many new directions to explore. First of all, our demonstration of the BPS interpretation of Stokes constants of unrefined free energies hinges on two crucial conjectures, the blowup equations for refined topological string free energies in a generic integral frame, and the identification of Stokes constants of NS free energies as BPS invariants. More evidence, or better still proofs, are needed for these two conjectures. Furthermore, the argument in this paper is indirect and is only valid for non-compact Calabi–Yau threefolds. A direct argument or even a proof, potentially also valid for compact Calabi–Yau threefolds, possibly along the lines of <cit.>, would be very desirable. Finally, the result of this paper suggests using resurgence techniques to systematically study BPS invariants of Calabi–Yau threefolds in different loci of moduli space and to study their stability structures. It would be interesting to compare with the BPS spectrum of local _0 in <cit.> and of local ^2 in <cit.> computed with other techniques. § ACKNOWLEDGEMENT We would like to thank Alba Grassi, Lotte Hollands, Min-xin Huang, Albrecht Klemm, Marcos Mariño, Boris Pioline, and Kaiwen Sun for useful discussions. We also acknowledge the hospitality of the Mainz Institute for Theoretical Physics (MITP) of Johannes Gutenberg University, the host of the workshop “Spectral Theory, Algebraic Geometry and Strings” (STAGS2023), during which part of this work was completed. J.G. is supported by the startup funding No. 4007022316 of Southeast University. Special thanks: During the write-up of this paper, Marcos Mariño sent us his draft, written together with Kohei Iwaki, on a related subject: the derivation of a closed-form formula for Stokes automorphisms of unrefined topological string free energies <cit.>. Although the methods and the concrete results of their paper are different from ours, we both give strong support to the idea that Stokes constants of unrefined free energies are given by BPS invariants. We thank Kohei and Marcos for agreeing to coordinate the submission of these two papers.
§ STOKES AUTOMORPHISMS OF QUANTUM PERIODS In this Appendix we recall and slightly generalise the derivation in <cit.> that the Stokes automorphisms of quantum periods in topological string should follow the DDP formulas <cit.>, and that the Stokes constants of NS free energies coincide with those of quantum periods, which combined together imply that the Stokes constants of NS free energies of topological string can be identified with BPS invariants. We will present the derivation in the large radius frame, but the derivation in a generic integral frame is similar. Let us first expand on the definition of gauge moduli t_*^ℓ and flavor masses t_*^k+b_4 = m^k for ℓ=1,…,b_4 and k=1,…,b_2-b_4 introduced in Section <ref>. One possible way of choosing them is to complete the b_2× b_4 intersection matrix C^i_ℓ to an b_2× b_2 matrix Ĉ^i_j of full rank by including intersections of 2-cycles with non-compact 4-cycles. Then the gauge moduli t_*^ℓ and the flavor masses t_*^k+b_4 = m^k can be chosen as t^ℓ_* = ∑_j=1^b_2(Ĉ^-1)^ℓ_j t^j, m^k = ∑_j=1^b_2(Ĉ^-1)^k+b_4_jt^j. Conversely, we have t^i = ∑_ℓ=1^b_4C^i_ℓ t_*^ℓ + ∑_k=1^b_2-b_4Ĉ^i_k+b_4m^k. Then the special geometry relation (<ref>) can be written as t_D,ℓ()= (2π)^-3_0(())/ t_*^ℓ|__*=_*(), ℓ=1,…,b_4. Here we have included the proper normalisation prefactor consistent with the normalisation of periods as in (<ref>). And we denote by the global moduli of the model, and () are the classical mirror map. The classical periods t_*^ℓ() and t_D,ℓ() (but not the mass parameters m^k) can be promoted to quantum periods t_*^ℓ(,ħ) and t_D,ℓ(,ħ), and it was observed in <cit.> that they satisfy the quantum special geometry relation t_D,ℓ(,ħ) = (2π)^-3ħ^(0)_NS((),ħ)/ t_*^ℓ|__* = _*(,ħ), ℓ=1,…,b_4, which states that quantum A- and B-periods satisfy the same relationship as classical A- and B-periods, as long as the global moduli in the NS free energy are related to the quantum A-periods t(,ħ) through the classical mirror map (). Another type of quantities we will need are the NS Wilson loop amplitudes in different representations ρ_ℓ of gauge group, which can be defined for 4d or 5d gauge theories that topological string engineers on a non-compact Calabi-Yau threefold[For the case of local ^2, which has no apparent gauge group, a notion of Wilson loop amplitudes can also be defined <cit.>.], and we refer to <cit.> for background and references. They are also frame dependent, and each of them is a Gevrey-1 power series in ħ ^(0)_NS,ρ_ℓ(,ħ) = ∑_n≥ 0^NS_ρ_ℓ,n() ħ^2n. The Stokes automorphisms of NS Wilson loop amplitudes have been studied in <cit.>. The crucial fact we will use is that the NS Wilson loop amplitudes have Borel singularities also at as defined in (<ref>), but only when is a B-period. In other words k^(0)_NS,ρ_ℓ(,ħ) = 0 if is an A-period. In addition, it was observed in <cit.> that, similar to (<ref>), the NS Wilson loop amplitudes offer another set of relations between quantum periods ^(0)_NS,ρ_ℓ((),ħ)|__*=_*(,ħ) = ^NS_ρ_ℓ,0(), in other words, the quantum Wilson loop amplitude as a power series of ħ reduces to the classical Wilson loop amplitude which does not depend on ħ, if the global moduli in the quantum Wilson loop amplitude are related to the quantum A-periods via the classical mirror maps (). We will then show that using the identities (<ref>) and (<ref>) we can deduce the properties of Stokes automorphisms of quantum periods from those of NS free energies and NS Wilson loop amplitudes. Let be a Borel singularity of the type as in (<ref>). 
By applying the alien derivative k on both sides of (<ref>) and using the chain rule (<ref>), we find the relation of alien derivatives 0 = k^(0)_NS,ρ_ℓ((),ħ) |__*=_*(,ħ) + ∑_n=1^b_4_t_*^n^(0)_NS,ρ_ℓ((),ħ) |__*=_*(,ħ)kt_*^n(,ħ). Similarly, by applying the alien derivative k on both sides of (<ref>), we find (2π)^3kt_D,ℓ(,ħ) = ħ_t_*^ℓk^(0)_NS((),ħ)|__*=_*(,ħ) + ħ∑_n=1^b_4_t_*^ℓ_t_*^n^(0)_NS((),ħ)|__*=_*(,ħ)k t_*^n(,ħ). Note that this equation demonstrates the clear difference between using the classical mirror map and the quantum mirror map when computing the Stokes automorphisms or alien derivatives of NS free energies. (<ref>) and (<ref>) are two crucial identities we will make heavy use of. Let us first assume that is an A-period, i.e. p = 0. From (<ref>) and (<ref>) we immediately find kt_*^ℓ(,ħ) = 0, ℓ=1,…,b_4. In other words, the quantum A-periods have vanishing Stokes constants. Next, by using (<ref>) and (<ref>) as well as (<ref>), (<ref>), we find k t_D,ℓ(,ħ) = -(·)_ℓħ/ℵ(-1)^k-1/k^-k(())/ħ|__*=_*(,ħ). (<ref>) and (<ref>) imply that the Stokes automorphisms of quantum A- and B-periods across ρ_ when is an A-period are S_() t_*^ℓ(,ħ) = t_*^ℓ(,ħ), S_()t_D,ℓ(,ħ) = t_D,ℓ(,ħ)-(·)_ℓħ/ℵlog(1+^-(())/ħ)|__*=_*(,ħ). Note that the exponent on the right hand side of (<ref>) is in fact a quantum A-period (())|_t_*=t_*(z,ħ) = ℵ(·(,ħ)+s) and thus the Stokes automorphism of the quantum B-period t_D(,ħ) is expressed in terms of the quantum A-period associated to (). Next we consider the case where is no longer an A-period, i.e. p≠0. We will again make the shift of the prepotential _0 as in (<ref>). If we were in a different frame where the t_D,ℓ were A-periods, would be an A-period as it is a linear combination of t_D,ℓ by virtue of (<ref>). Therefore, by a symmetry argument, we should have kt_D,ℓ(,ħ) = 0. If we now impose (<ref>) and use (<ref>) together with (<ref>), (<ref>), we find that kt_*^ℓ(,ħ) = p^ℓħ/ℵ(-1)^k-1/k^-k (,ħ)/ħ|__*=_*(,ħ). From (<ref>) and (<ref>) we find that the Stokes automorphisms of quantum A- and B-periods across ρ_ when is a B-period are S_() t_*^ℓ(,ħ) = t_*^ℓ(,ħ)+p^ℓħ/ℵlog(1+^-(,ħ)/ħ)|__*=_*(,ħ), S_() t_D,ℓ(,ħ) = t_D,ℓ(,ħ). Note that (,ħ) as defined in (<ref>) is the quantum deformation of the B-period , and after replacing _* by the quantum A-periods, (t,ħ)|__*=_*(,ħ) = ħ/2π∑_i=1^b_2∑_ℓ=1^b_4_t^i^(0)_NS((),ħ)|__*=_*(,ħ) C^i_ℓ p^ℓ = (())|__*=_*(,ħ), it becomes the corresponding quantum B-period. Therefore the Stokes automorphisms of the quantum A-periods t_*^ℓ(,ħ) are expressed in terms of the quantum B-periods associated to the B-period (). By introducing the symplectic pairing γ,γ' of two quantum periods Π_γ(,ħ), Π_γ'(,ħ), where γ= (p,q,s), γ' = (p',q',s') and γ,γ' = q'·C·p -q·C·p', as well as the Voros symbol _γ(,ħ) = exp(-ℵΠ_γ(,ħ)/ħ), the Stokes automorphisms (<ref>),(<ref>),(<ref>),(<ref>) can be summarised succinctly by the DDP type of formulas <cit.> S__γ(,ħ) = _γ(,ħ)(1+_γ()(,ħ))^γ,γ(), where γ() is the charge associated to . By comparing with the formula for the Kontsevich–Soibelman automorphism, one can conclude that can be identified with the BPS invariant Ω(γ()), in other words = Ω(γ()).
http://arxiv.org/abs/2307.00369v1
20230701154433
The Yule-Frisch-Waugh-Lovell Theorem
[ "Deepankar Basu" ]
econ.EM
[ "econ.EM" ]
The Yule-Frisch-Waugh-Lovell Theorem Deepankar BasuDepartment of Economics, University of Massachusetts Amherst. Email: . August 1, 2023 ======================================================================================== This paper traces the historical and analytical development of what is known in the econometrics literature as the Frisch-Waugh-Lovell theorem. This theorem demonstrates that the coefficients on any subset of covariates in a multiple regression is equal to the coefficients in a regression of the residualized outcome variable on the residualized subset of covariates, where residualization uses the complement of the subset of covariates of interest. In this paper, I suggest that the theorem should be renamed as the Yule-Frisch-Waugh-Lovell (YFWL) theorem to recognize the pioneering contribution of the statistician G. Udny Yule in its development. Second, I highlight recent work by the statistician, P. Ding, which has extended the YFWL theorem to a comparison of estimated covariance matrices of coefficients from multiple and partial, i.e. residualized regressions. Third, I show that, in cases where Ding's results do not apply, one can still resort to a computational method to conduct statistical inference about coefficients in multiple regressions using information from partial regressions. JEL Codes: C01. Keywords: multiple regression; partial regression; Frisch-Waugh-Lovell theorem. § INTRODUCTION The Frisch-Waugh-Lovell theorem is a remarkable result about linear regressions estimated with the method of least squares. The theorem shows that coefficients of variables in a multiple regression are exactly equal to corresponding coefficients in partial regressions that use residualized versions of the dependent and independent variables. While it has been primarily used in econometrics <cit.>, it has found wider applications in a variety of disciplines, including statistics <cit.>, electrical engineering <cit.>, computer science <cit.>, and genetics & molecular biology <cit.>, to name just a few. It is now included in many graduate-level textbooks in econometrics, including <cit.>, <cit.>, <cit.>, and <cit.>. In the econometrics literature, the theorem is understood to have originated in a 1933 paper by Ragnar Frisch and Frederick V. Waugh in the first volume of Econometrica <cit.>, which was later generalized by Michael C. Lovell <cit.>; hence the name Frisch-Waugh-Lovell theorem. The first contribution of this paper is to show that the result was proved more than two and a half decade ago by the statistician G. Udny Yule in a 1907 paper <cit.>. This seems to be well known in statistics <cit.> and should be recognized in econometrics as well. In fact, to recognize Yule's pioneering contribution to the development of this important result, we should refer to this important result as the Yule-Frisch-Waugh-Lovell (YFWL) theorem, rather than the currently used Frisch-Waugh-Lovell (FWL) theorem. The second contribution of this paper is to trace out the analytical development of the theorem through the decades. In <cit.>, who proved the result using basic algebra, the partial regressions are always bivariate regressions of residual vectors. In <cit.>, where the proof relied on some basic properties of determinant and Cramer's rule for finding solutions of linear systems of equations, the partial regressions are themselves multiple regressions but the residuals are computed with bivariate regressions. 
Thus, <cit.> allows multiple variables in the conditioning set, but conceives of the partial regressions as bivariate regressions. On the other hand, <cit.> allows only one variable (the linear time trend) in the conditioning set, but allows the partial regressions to be multiple regressions. <cit.> generalizes in both directions, allowing multiple variables in the conditioning set and allowing the partial regressions to themselves be multiple regressions.[<cit.> points out that <cit.> had extended the Frisch-Waugh result to the case of polynomial time trends.] In terms of methodology, <cit.> introduces the use of projection matrices from linear algebra, making the proof compact and elegant. Current presentations of the theorem follow Lovell's exposition <cit.>. One use of the YFWL theorem is that if researchers are interested in only a subset of coefficients in a multiple regression equation, they need not estimate the full multiple regression. They can estimate the relevant partial regressions to recover estimates of the desired coefficients. In contexts of empirical analyses with large numbers of variables and observations, this can significantly reduce computational time and resources. Will researchers also be able to conduct inference about the subset of coefficients in a multiple regression using the standard errors (or the covariance matrix) estimated from the relevant partial regression? <cit.> and <cit.> did not pose, and therefore did not address, this question about standard errors (or the covariance matrix). <cit.> addressed the issue of the covariance matrices very briefly by pointing out that estimated standard errors from the multiple and partial regressions are equal up to a degree of freedom adjustment when the true error is homoskedastic. In a recent contribution, <cit.> has analyzed the question of estimated standard error equivalence between multiple and partial regressions in greater detail. <cit.> has demonstrated that estimates of the covariance matrices from the multiple and partial regressions are equal up to a degree of freedom adjustment in the case of homoskedastic errors and, quite surprisingly, are exactly equal for some variants of heteroskedasticity consistent covariance matrices (HCCM), heteroskedasticity and autocorrelation consistent (HAC) covariance matrices, and for some variants of clustered robust covariance matrices. The analysis in <cit.> highlights a general principle: if the estimate of error covariance matrix does not depend on the matrix of covariates beyond its dimensions, then estimated covariance matrices from multiple and partial regressions are equal, either exactly or with a degree of freedom correction. This principle can be leveraged to offer a computational solution for cases where <cit.>'s results do not hold, i.e. estimates and other information from partial regressions are sufficient to compute correct covariance matrices of coefficient vectors in the multiple regression. If only a subset of coefficients is of interest then estimation and proper statistical inference can be conducted by working with the partial regression alone. In cases where the number of regressors is large, this might reduce computational burden. 
The rest of the paper is organized as follows: in section <ref>, I introduce the set-up and pose the main question; in section <ref>, I present Yule's results; in section <ref>, I discuss Frisch and Waugh's contribution; in section <ref>, I discuss how Lovell extended the previous analysis; in section <ref>, I present Ding's extension of the previous literature; I conclude in section <ref> with two observations about the analytical and historical development of the YFWL theorem. Proofs are collected together in the appendix. § THE SET-UP AND THE QUESTION Consider a linear regression of an outcome variable, Y, on a set of k covariates, Y = W β + ε, where Y is a N × 1 vector of the outcome variable, W is a N × k matrix of covariates (with the first column being a column of 1s to capture the constant term in the regression function), β is a k × 1 of parameters, and ε is a N × 1 vector of the stochastic error term. Suppose on the basis of a specific question under investigation, it is possible for the researcher to partition the set of regressors into two groups, W_1 and W_2, where the former is a N × k_1 matrix (with the first column being a column of 1s) and the latter is a N × k_2 matrix, with k=k_1+k_2. The model in (<ref>) can then be written as Y = W_1 β_1 + W_2 β_2 + ε, where β_1 and β_2 are k_1 × 1 and k_2 × 1 vectors of parameters. Suppose the researcher is only interested in the first group of regressors (W_1), i.e. she is only interested in estimating and conducting inference about β_1. But, of course, she does not want to completely throw away the information in W_2 because the variables in this subset do impact Y and are likely to also be correlated to the variables in W_1. Hence, she wants to condition her analysis on W_2, but is not interested in their partial effects on the outcome. Given her interest and the set-up, can the researcher avoid estimating the full model in (<ref>)? Can she work with a smaller, let us say partial, model that allows her to consistently estimate and conduct proper statistical inference on β_1? The YFWL theorem provides an answer in the affirmative. § PIONEERING CONTRIBUTION OF YULE §.§ Novel notation <cit.> considers the relationship among n random variables X_1, X_2, …, and X_n using a sample which has N observations on each of the variables. In particular, suppose this relationship is captured by a linear regression of the first variable, X_1, on the other variables, X_2, …, X_n, and a constant. This linear regression is what Yule called a multiple regression. With reference to (<ref>), therefore <cit.> uses: k=n, X_1=Y, X_2 the second column of W, X_3 the third column of W, and so on. Subtracting the sample mean from all variables, the multiple regression equation can be equivalently expressed in `deviation from mean' form in which the constant drops out. Let x_1, x_2, …, x_n denote the same random variables, but now expressed as deviations from their respective sample means. Introducing a novel notation to express the coefficients and the residual, <cit.> writes the sample analogue of the multiple regression function as follows: x_1 = b_12.34 … n x_2 + b_13.24 … n x_3 + ⋯ + b_1n.234 … n-1 x_n_predicted value + x_1.234 … n_residual. In (<ref>), the subscripts of the coefficient on each regressor, as also the residual, is divided into the primary and secondary subscripts. The primary subscripts come before the period; the secondary subscripts come after the period. 
For the coefficients of regressors, the primary subscripts identify the dependent variable and that particular regressor, in that order; and the secondary subscripts identify the other regressors in the model. For the residual in (<ref>), x_1.234 … n, the primary subscript identifies the dependent variable and the secondary subscripts identify all the regressors. It is useful to note three features of the subscripts. First, the order of the primary subscripts is important but the order among the secondary ones are not because the order in which the regressors are arranged in immaterial for estimation and inference. For instance, b_12.34 … n denotes the coefficient on x_2 for a regression of x_1 on x_2, x_3, …, x_n. Similarly, b_1j.234 … n denotes the coefficient on x_j for a regression of x_1 on x_2, x_3, …, x_n, and note that the secondary subscripts excludes j. Second, elements of the subscripts cannot be repeated because it is not meaningful to include the dependent variable as a regressor or repeat a regressor itself. Third, since the coefficients of the multiple regression function are related to partial correlations, the notation was also used for denoting relevant partial correlations too.[See, for instance, equation (2) in <cit.>.] §.§ The theorem Now consider, with reference to the multiple regression (<ref>), what Yule called partial regressions. These refer to regressions with partialled out variables. For instance, with reference to (<ref>), the partial regression of x_1 on x_2 would be constructed as follows: (a) first run a regression of x_1 on all the variables other than x_2, i.e. x_3, x_4, …, x_n, and collect the residuals; (b) run a regression of x_2 on all the other regressors, i.e. x_3, x_4, …, x_n, and collect the residuals; (c) run a regression of the residuals from the first step (partialled out outcome variable) on the residuals from the second step (partialled out first regressor). <cit.> showed that, if parameters of the multiple regression and the partial regression(s) were estimated by the method of least squares then the corresponding coefficients would be numerically the same. For instance, considering the first regressor, x_2, in (<ref>), he demonstrated that ∑( x_1.34 … n× x_2.34 … n) /∑( x_2.34 … n)^2 = b_12.34 … n where the summation runs over all observations in the sample.[For a proof see appendix <ref>.] To see why (<ref>) gives the claimed result, recall that x_1.34 … n is the residual from a regression of x_1 on x_3, …, x_n and x_2.34 … n is the residual from the regression of x_2 on x_3, …, x_n. By construction, both have zero mean in the sample. Recall, further, that in a regression of a zero-mean random y on another zero-mean random variable z, the coefficient on the latter is ∑ y z /∑ z^2. Hence, the left hand side of (<ref>) is the coefficient in a regression of x_1.34 … n on x_2.34 … n. The right hand side is, of course, the coefficient on x_2 in (<ref>). Hence, the result. The above argument, of course, can be applied to any of the regressors in (<ref>) and thus <cit.> proved the following general result: the coefficient on any regressor in (<ref>) is the same as the coefficient in a bivariate regression of the residualized dependent variable (the vector of residual from a regression of the dependent variable on the other regressors) and the residualized regressor (the vector of residuals from a regression of the regressor of interest on the other regressors). 
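The following minimal sketch illustrates this result numerically; the simulated data, variable names, and coefficient values are purely illustrative and are not taken from Yule's paper.

import numpy as np

rng = np.random.default_rng(0)
N = 500

# Simulated data: x1 is the dependent variable, x2 the regressor of interest,
# x3 and x4 the remaining regressors; all are taken as deviations from their means.
x3 = rng.normal(size=N)
x4 = rng.normal(size=N)
x2 = 0.5 * x3 - 0.3 * x4 + rng.normal(size=N)
x1 = 1.0 * x2 + 2.0 * x3 - 1.0 * x4 + rng.normal(size=N)
x1, x2, x3, x4 = (v - v.mean() for v in (x1, x2, x3, x4))

# Multiple regression of x1 on x2, x3, x4: the coefficient b_{12.34} on x2
X = np.column_stack([x2, x3, x4])
b_multiple = np.linalg.lstsq(X, x1, rcond=None)[0][0]

# Partial regression: residualize x1 and x2 on the remaining regressors x3, x4,
# then run the bivariate regression of the first residual on the second.
Z = np.column_stack([x3, x4])
def residualize(v):
    return v - Z @ np.linalg.lstsq(Z, v, rcond=None)[0]
x1_res, x2_res = residualize(x1), residualize(x2)
b_partial = (x1_res @ x2_res) / (x2_res @ x2_res)

print(b_multiple, b_partial)   # identical up to floating-point error

The two printed coefficients coincide up to numerical precision, which is precisely the content of the equality stated above.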
With reference to the question posed in section <ref>, <cit.>, therefore, showed that a researcher could work with partial regressions if she were only interested in a subset of the coefficients in the multiple regression. She would arrive at the same estimate of the parameters by estimating partial regressions as she would if she had estimated the full model. In particular, when W_1 contained only one variable (other than the column of 1s), <cit.> provided the following answer: (a) run a regression of y (demeaned Y) on the variables in w_2 (demeaned variables in W_2), and collect the residuals; (b) run a regression of w_1 (demeaned variable in W_1, i.e. excluding the constant) on the variables in w_2, and collect the residuals; (c) regress the first set of residuals on the second to get the desired coefficient. There are two things to note about Yule's answer. First, he did not provide an answer to the question posed in section <ref> when W_1 contained more than one random variable (excluding the constant). Of course, partial regressions could be estimated for each independent variable, but they had to be separately estimated as bivariate regressions with all the other variables used for the partial regressions. Second, <cit.> did not investigate the relationship between the variances of the parameters estimated from multiple and partial regressions. Is the estimated standard error of a coefficient also identical from the full and partial regressions? Yule did not pose, and hence did not provide any answers to, this question. §.§ Application to a regression with a linear time trend We can immediately apply the result from Yule's theorem given above to a regression with a linear time trend. With (<ref>), let x_n denote the demeaned linear time trend variable. Applying (<ref>), we will have the following result: the coefficient on any of the regressors x_2, x_3, …, x_n-1 in (<ref>) is the same as the coefficient in a bivariate regression of residualized x_1 on the corresponding residualized regressor. This is of course the exact same result that was presented 26 years later in <cit.>. § FRISCH AND WAUGH'S RESULT §.§ Notation and result <cit.> study the relationship among n+1 variables, one of which is a linear time trend. In reference to (<ref>), therefore, <cit.> use: k=n, X_0=Y, X_1 the second column of W, X_2 the third column of W, and so on. They use the same convention of considering variables in the form of deviations from their respective sample means.[“Consider now the n variables x_0, …, x_n-1 and let time be an (n+1)th variable x_n. Let all the variables be measured from their means so that ∑ x_i=0 (i = 0, …, n) where ∑ denotes a summation over all the observations.” <cit.>.] Using the exact same notation as used by <cit.> and postulating an a priori true linear relationship among these variables, they write, x_0 = β_01.23 … n x_1 + β_02.134 … n x_2 + ⋯ + β_0n.123 … n-1 x_n, and consider estimating the parameters of (<ref>) in two ways. First, they consider the multiple regression of x_0 on x_1, …, x_n: x_0 = b_01.23 … n x_1 + b_02.134 … n x_2 + ⋯ + b_0n.123 … n-1 x_n.
Second, they consider the multiple regression of x'_0 on x'_1, …, x'_n-1 (note that x_n has been excluded from the set of regressors), x'_0 = b'_01.23 … n-1 x'_1 + b'_02.134 … n-1 x'_2 + ⋯ + b'_0,n-1.123 … n-2 x'_n-1, where the primed variables are the corresponding time-demeaned original variables, i.e., for j = 0, 1, …, n-1, x'_j is the residual in the regression of x_j on x_n.[Note that (<ref>), (<ref>) and (<ref>) are just the predicted regression functions. These equations exclude the residuals.] Using the basic rules of determinants and Cramer's rule for solving linear equation systems, <cit.> demonstrated that the coefficients denoted by b in (<ref>) are numerically equal to the corresponding coefficients denoted by b' in (<ref>).[For a proof see appendix <ref>.] With regard to the question posed in section <ref>, Frisch and Waugh provide the same answer as Yule: coefficients are the same whether they are estimated from the multiple or from partial regressions. There are both similarities and differences between <cit.> and <cit.>. First, whereas in <cit.> only one variable could be included in W_1 (the subset of covariates that was of interest to the researcher), in <cit.> only one random variable could be included in W_2 (the subset of covariates that was not of interest to the researcher). Second, much like <cit.> before them, <cit.> did not investigate the relationship between the estimated variances of the parameters of multiple and partial regressions. The question that is relevant for statistical inference, i.e. standard errors, had still not been posed. § LOVELL EXTENDS THE ANALYSIS <cit.> extended the reach of the theorem significantly and addressed both questions that had been left unanswered by <cit.> and <cit.>. On the one hand, <cit.> partitioned W into two subsets without any restrictions on the number of variables in each; on the other, he laid the groundwork for thinking about the estimated covariance matrices of the coefficient vectors. §.§ The results To understand the argument in <cit.>, we can directly work with (<ref>). Consider the sample analogue of (<ref>), Y = W_1 b_1 + W_2 b_2 + u, where b_1 and b_2 are OLS estimators of β_1 and β_2, respectively, and u is the residual (sample analogue of the error term, ε). To facilitate the algebra, we will rewrite the above as Y = W_2 b_2 + W_1 b_1 + u, where I have switched the position of the regressors of interest, W_1, and of those that are used merely for conditioning, W_2. Let Y^* be the N × 1 vector of residuals from a regression of Y on W_2, and W_1^* be the N × k_1 matrix formed by column-wise stacking of the vectors of residuals obtained from regressing each column of W_1 on W_2, i.e. Y^* = [ I - W_2 ( W'_2 W_2 ) ^-1 W'_2] Y and W^*_1 = [ I - W_2 ( W'_2 W_2 ) ^-1 W'_2] W_1. The matrices P_W_2=W_2 ( W'_2 W_2 ) ^-1 W'_2 and M_W_2 = I - P_W_2 play important roles in the whole analysis. They are both symmetric and idempotent. They are referred to, respectively, as the `hat matrix' and the `residual maker matrix'. This terminology comes from the fact that P_W_2 projects any vector in ℝ^N onto the column space of W_2, and M_W_2 projects onto the orthogonal complement of the column space of W_2. Thus, P_W_2 generates the vector of predicted values and M_W_2 generates the vector of least squares residuals for a regression of any vector in ℝ^N on W_2. This result holds for any regression, so that, for instance, the vector of residuals in (<ref>) is given by u = (I - P_W) Y.
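A small numerical illustration of these projection properties, using a made-up design (the dimensions and variable names below are illustrative only), might look as follows.

import numpy as np

rng = np.random.default_rng(1)
N, k1, k2 = 200, 2, 3

# W2 holds the constant and the conditioning covariates, W1 the covariates of interest
W2 = np.column_stack([np.ones(N), rng.normal(size=(N, k2 - 1))])
W1 = rng.normal(size=(N, k1))
Y = W1 @ np.array([1.0, -2.0]) + W2 @ rng.normal(size=k2) + rng.normal(size=N)

P = W2 @ np.linalg.inv(W2.T @ W2) @ W2.T   # hat matrix P_{W_2}
M = np.eye(N) - P                          # residual maker M_{W_2}

print(np.allclose(P, P.T), np.allclose(P @ P, P))   # P is symmetric and idempotent
print(np.allclose(M @ W2, 0))                       # M annihilates the columns of W2
fitted = W2 @ np.linalg.lstsq(W2, Y, rcond=None)[0]
print(np.allclose(M @ Y, Y - fitted))               # M @ Y are the OLS residuals of Y on W2

All three checks print True.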
Now consider the regression of Y^* on W_1^*: Y^* = W_1^* b̃_1 + ũ, and note that the vector of residuals in (<ref>) is given by ũ = (I - P_W^*_1) Y^*. <cit.> demonstrated two important results: (a) the coefficient vectors are numerically the same whether they are estimated from the multiple or the partial regressions, i.e. b_1 in (<ref>) is exactly equal to b̃_1 in (<ref>), and (b) the vector of residuals from the multiple and partial regressions are numerically the same, i.e. u in (<ref>) is equal to ũ in (<ref>). The first result completed the YFWL so far as the estimate of the coefficient is concerned because the partitioning of the set of regressors was completely general; the second result laid the groundwork for comparing estimated variances of the coefficient vectors from multiple and partial regressions.[For a proof of Lovell's results, see appendix <ref>. It is straightforward to extend the YFWL to generalized least squares <cit.>. Other scholars have worked on variations of Lovell's results; see, for instance, <cit.>.] § DING CONSIDERS STANDARD ERRORS The analysis so far has focused on comparing the coefficient vectors from multiple and partial regressions, i.e. b_1 from (<ref>) and b̃_1 from (<ref>). The YFWL theorem has established that they are numerically the same. This is a very useful result but still does not answer the question that would be relevant if a researcher were interested in conducting statistical inference about b_1. To conduct inference about b_1 using results of estimating b̃_1, we also need to be able to compare their estimated covariance matrices. With reference to the multiple regression (<ref>), the estimated variance of the coefficient vector is [ b_2; b_1 ] = [ ( b_2) ( b_2, b'_1 ); ( b_1, b'_2 ) ( b_1) ] = (W' W)^-1 W' Ω_m W (W' W)^-1, where W = [ W_2 : W_1] and Ω_m (subscript `m' for identifying the multiple regression) is an estimate of the variance of the error term in the multiple regression model (<ref>). Hence, the estimated variance of b_1 is ( b_1) = (2,2) block of (W' W)^-1 W' Ω_m W (W' W)^-1. Similarly, with reference to the partial regression (<ref>), the estimated variance of b̃_1 is given by ( b̃_1) = ( W^*_1^' W^*_1)^-1W^*_1^'Ω_p W^*_1 ( W^*_1^' W^*_1)^-1 . where W^*_1 is defined (<ref>), and Ω_p (subscript `p' for identifying the partial regression) is an estimate of the variance of the error term in the partial regression model (<ref>). What is the relationship between (<ref>) and (<ref>)? <cit.> provides a systematic treatment of this issue that relies on two ingredients. First, that the vector of residuals from the multiple and partial regressions are exactly equal. Note that this was already proved by <cit.>, as we have seen above. This ingredient is important because the estimate of the covariance matrix of the regression error vector is a function of the regression residual vector. If the function depends only on the vector of regression residuals, then we immediately have Ω_m=Ω_p. This already takes us some way in addressing the relationship between (<ref>) and (<ref>). The second ingredient relates to the other parts that make up the estimated covariance matrices in (<ref>) and (<ref>). With reference to the multiple regression model in (<ref>) and the partial regression model in (<ref>), <cit.> shows that the (2,1) block of ( W'W) ^-1 W' = ( W^*_1^' W^*_1)^-1W^*_1^', where W = [ W_2 : W_1] and W^*_1 is defined (<ref>).[This follows from a straightforward application of the inverse of partitioned matrices. 
Hence, I omit the proof.] Bringing the two ingredients together, we get the following general result: (a) if the estimate of the error covariance matrix depends only on the vector of regression residuals, then (<ref>) and (<ref>) are exactly equal; (b) if some degree of freedom correction is applied to generate the estimate of the error covariance matrix, then (<ref>) and (<ref>) are equal up to the relevant degree of freedom correction; (c) if the estimate of the error covariance matrix depends, in addition to the residual vector, on the elements of the matrix of regressors, then (<ref>) and (<ref>) are not, in general, equal even after making degree of freedom adjustments. §.§ The results Using the above argument in different settings, <cit.> shows the following: * In models with homoskedastic errors, the covariance matrices from the multiple and partial regressions are equal up to a degree of freedom adjustment, i.e. (N-k_1)( b̃_1) = (N-k) ( b_1). This is because Ω_p = 1/(N-k_1)∑_i ũ_i^2 I_N and Ω_m = 1/(N-k)∑_i u_i^2 I_N. * In models with the HC0 version of HCCM <cit.>, the covariance matrices from the multiple and partial regressions are exactly equal because Ω_p = diag[ ũ_i^2] and Ω_m = diag[ u_i^2]. * In models with the HC1 version of HCCM <cit.>, the covariance matrices from the multiple and partial regressions are equal up to a degree of freedom adjustment because Ω_p = N/(N-k_1)diag[ ũ_i^2] and Ω_m = N/(N-k) diag[ u_i^2]. Hence, (N-k_1)( b̃_1) = (N-k) ( b_1). * In models using heteroskedasticity and autocorrelation consistent (HAC) covariance matrices <cit.>, the covariance matrices from the multiple and partial regressions are exactly equal because Ω_p = ( ω_|i-j|ũ_i ũ'_j)_1 ≤ i, j ≤ N and Ω_m = ( ω_|i-j| u_i u'_j)_1 ≤ i, j ≤ N, as long as the same weights are used in both regressions. * In models using the cluster robust estimate of the variance matrix (CRVE) <cit.>, the covariance matrices from the multiple and partial regressions are exactly equal if no degree of freedom correction is used or if G/(G-1) is used as the degree of freedom correction, where G is the number of clusters. If G(N-1)/[(G-1)(N-k)] is used as the degree of freedom correction, then (N-k_1)( b̃_1) = (N-k) ( b_1). In all these cases, researchers can use estimated covariance matrices from the partial regression (<ref>), with the relevant degree of freedom adjustment if necessary, to conduct proper statistical inference on the parameters from the multiple regression (<ref>). §.§ The cases that were left out There are some cases where the estimate of the error covariance matrix depends, in addition to the residual vector, on elements of the covariate matrix. In these cases, (<ref>) and (<ref>) will not be equal, even after making degrees of freedom corrections. Some common cases where this happens are: (a) models using HC2, HC3, HC4, or HC5 forms of HCCM; (b) some variants of the HAC and cluster-robust consistent covariance matrices. In these cases, it is not possible to use standard errors from the partial regression, with or without degrees of freedom correction, for inference about the coefficient vector in the multiple regression. But, it is still possible to compute the correct covariance matrix for the coefficient vector in the multiple regression without having to estimate that regression, as I now show. Let me discuss the HC2 case in detail; the other cases can be dealt with in a similar manner. Let h_ii=W'_i(W' W)^-1W_i, where W_i is the i-th row of W = [ W_2 : W_1] (written as a column vector), with reference to (<ref>).
The HC2 estimated covariance matrix of b_1 in (<ref>) is given by the (2,2) block of (W' W)^-1 W' diag[ u_i^2]/1-h_ii W (W' W)^-1, which, using (<ref>), is the (2,2) block of [ *; ( W^*_1^' W^*_1)^-1W^*_1^' ]diag[ u_i^2]/1-h_ii[ W^*_1( W^*_1^' W^*_1)^-1 * ], and, therefore, is given by ( W^*_1^' W^*_1)^-1W^*_1^'diag[ u_i^2]/1-h_iiW^*_1( W^*_1^' W^*_1)^-1, which, since u_i^2 can be replaced with ũ_i^2, is ( W^*_1^' W^*_1)^-1W^*_1^'diag[ ũ_i^2]/1-h_iiW^*_1( W^*_1^' W^*_1)^-1. If we can estimate h_ii using information available after estimating the partial regression (<ref>), we can use (<ref>) to compute the covariance matrix that is necessary to conduct proper statistical inference on the coefficient vector b_1 in the multiple regression (<ref>). In fact, using results on the inverse of partitioned matrices, we can do so. Hence, the following procedure for estimation and inference coefficient vector b_1 in the multiple regression (<ref>) based on the estimation of the partial regression (<ref>) can be suggested. * Compute Y^* = [ I - W_2 (W'_2 W_2)^-1 W'_2 ] Y, and W^*_1 = [ I - W_2 (W'_2 W_2)^-1 W'_2 ] W_1. * Estimate the partial regression (<ref>) by regressing Y^* on W^*_1 and get the coefficient vector b̃_1. * For i = 1, 2, , …, N, compute h_ii=W'_i(W' W)^-1W_i for the matrix of covariates, W = [ W_2 : W_1], in the multiple regression (<ref>). Let W^*_2 = [ I - W_1 (W'_1 W_1)^-1 W'_1 ] W_2, and note that, using results for the inverse of partitioned matrices, we have (W' W)^-1 = [ W^11 W^12; W^21 W^22 ], where W^11 = ( W^*_2^' W^*_2)^-1, W^12=W^11^'= - ( W'_2 W_2)^-1W'_2 W_1 ( W^*_1^' W^*_1)^-1, and W^22 = ( W^*_1^' W^*_1)^-1. Thus, we only compute W^11, W^12 and W^22, and then stack them to get ( W'W)^-1. This avoids computing inverse of the k × k matrix W'W, and instead only involves computing inverses of k_1 × k_1 or k_2 × k_2 matrices. * Use the matrix of covariates, W^*_1 and the vector of residuals, ũ, from the partial regression (<ref>), to compute the covariance matrix of b̃_1 using (<ref>). The main advantage of this procedure is computational. If a researcher were to estimate (<ref>) and compute the covariance matrix of the full coefficient vector b, then she would have to program (<ref>), which would involve computing the inverse of a k × k matrix, W'W. If, instead, the partial regression-based procedure is used instead, then the researcher would: (a) compute h_ii using (<ref>), which involves inverses of k_1 × k_1 and k_2 × k_2 only; and (b) implement (<ref>), which involves computing the inverse of the k_1 × k_1 matrix, W^*_1^' W^*_1. If k_1 and k_2 are much smaller than k=k_1+k_2, then there might be a significant reduction in computational burden. The four step procedure outlined above can be used for HC3, HC4 variants of HCCM, and might even be applied to different variants of cluster-robust covariance matrices for which Theorem 3 and 4 in <cit.> do not hold. The first step would remain unchanged; the second step would only change if any quantity other than, or in addition to, h_ii is needed; only the third step would change significantly, where the researcher would need to use the correct expression for the relevant estimated covariance matrix, in place of (<ref>). 
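A sketch of the four-step procedure for the HC2 case, continuing the simulated example above (it reuses W, W1, W2, W1_star, u and u_tilde from that sketch; the names remain illustrative). The HC3 and HC4 variants discussed next differ only in the per-observation weights applied in the last step:

# Steps 1 and 2 (Y_star, W1_star, b1_partial, u_tilde) were computed in the previous sketch.

# Step 3: leverages h_ii of the full matrix W = [W2 : W1], assembled from the
# partitioned-inverse blocks so that only k1 x k1 and k2 x k2 matrices are inverted.
W2_star = W2 - W1 @ np.linalg.solve(W1.T @ W1, W1.T @ W2)   # residualize W2 on W1
B11 = np.linalg.inv(W2_star.T @ W2_star)                    # k2 x k2 block
B22 = np.linalg.inv(W1_star.T @ W1_star)                    # k1 x k1 block
B12 = -np.linalg.solve(W2.T @ W2, W2.T @ W1) @ B22          # k2 x k1 block
WtW_inv = np.block([[B11, B12], [B12.T, B22]])              # equals (W'W)^{-1}
h = np.einsum('ij,jk,ik->i', W, WtW_inv, W)                 # h_ii, one per observation i

# Step 4: HC2 covariance of b1 from partial-regression quantities only
omega = u_tilde ** 2 / (1.0 - h)                            # HC2 weights
V_hc2_partial = B22 @ (W1_star.T @ (omega[:, None] * W1_star)) @ B22

# Sanity check against HC2 computed directly from the multiple regression
A = np.linalg.inv(W.T @ W)
h_full = np.einsum('ij,jk,ik->i', W, A, W)
V_hc2_multiple = (A @ (W.T @ ((u ** 2 / (1 - h_full))[:, None] * W)) @ A)[k2:, k2:]
print(np.allclose(V_hc2_partial, V_hc2_multiple))           # True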
For instance, for HC3 and HC4 variants of heteroskedasticity consistent covariance matrices, steps 1, 2 and 3 would remain unchanged; in step 4, she would use either ( W^*_1^' W^*_1)^-1W^*_1^'diag[ ũ_i^2]/( 1-h_ii)^2 W^*_1( W^*_1^' W^*_1)^-1; for HC3, or, ( W^*_1^' W^*_1)^-1W^*_1^'diag[ ũ_i^2]/( 1-h_ii)^δ_iW^*_1( W^*_1^' W^*_1)^-1, for HC4, where δ_i = max( 4, N h_ii/k) and k is the total number of regressors in (<ref>), including the constant. § DISCUSSION AND CONCLUSION By way of concluding this paper, I would like to offer two comments. My first comments is about the substance and usefulness of the YFWL theorem. In the context of a linear regression model, the YFWL theorem shows that if a researchers is interested in estimating only a subset of the parameters of the model, she need not estimate the full model. She can partition the set of regressors into two subsets, those that are of interest and those that are not of direct interest (but needs to be used for conditioning). She can regress the outcome variable on the conditioning set to create the residualized outcome variable. She can regress each of the variables of interest on the conditioning set to create corresponding residualized covariates. Finally, she can regress the residualized outcome variable on the residualized covariates of interest to get the estimated parameter of interest. For statistical inference also, she can work with a partial regression in many, but not all, cases. If the errors are homoskedastic, then the estimated covariance matrix from the partial regression can be used for inference once a degree of freedom adjustment is made. If the errors are heteroskedastic, as is often the case in cross sectional data sets, and the researcher wishes to use the HC0 or HC1 variant of HCCM, then she can use the estimated covariance matrix from the partial regression without any adjustment in the case of HC0 and with a degree of freedom adjustment in the case of HC1. If the researcher is using a time series data set and wishes to use a HAC covariance matrix, then she can use the estimated covariance matrix from the partial regression as is. If the researcher is using a panel data set and wishes to use the standard cluster robust covariance matrix, then she can use the estimated covariance matrix from the partial regression without any adjustment; if a degree of freedom correction is used to compute the cluster robust covariance matrix, then she will need to apply a corresponding degree of freedom adjustment. If, on the other hand, the researcher wishes to use the HC2, HC3, HC4 or HC5 variants of heteroskedasticity consistent covariance matrix, or if she wishes to use other variants of the cluster robust covariance matrix or HAC covariance matrices, then she cannot use above results, i.e. the covariance matrices from multiple and partial regressions are no longer equal, even after degrees of freedom adjustments. Instead, the researcher can use the four-step computational method proposed in section <ref> if she wishes to use information from the partial regression to conduct proper statistical inference about parameters in the multiple regression. My second comment relates to a puzzle in the intellectual history of the YFWL theorem. The puzzle arises from noting that <cit.>'s result is essentially the same as the result proved by <cit.>, as I have demonstrated. What is puzzling, therefore, is that <cit.> do not cite <cit.>. This omission seems puzzling to me given two fact. 
First, <cit.> refer to a previous paper by one of the authors: <cit.>. It is also to be noted that Frisch had published a related work in 1931 <cit.>. What is interesting is that in both these papers, there is explicit reference to the partial correlation formula introduced by Yule. Recall that Yule had introduced a novel notation for representing partial correlations and coefficients in a multiple regression equation in his 1907 paper that I have discussed. The notation was then introduced to a wider readership in statistics with his 1911 book <cit.>. This book was extremely popular and ran several editions, later ones with Kendall <cit.>. Second, <cit.> use the exact same notation to represent the regression function that Yule had introduced in his 1907 paper. as I have highlighted. These two facts suggest that Ragnar Frisch was familiar with Yule's work. In fact, Yule's work on partial correlations and multiple regression was the standard approach that was widely used and accepted by statisticians in the early part of the twentieth century <cit.>. Therefore, it is a puzzle of intellectual history as to why <cit.> did not cite Yule's 1907 result about the equality of multiple and partial regression coefficients which was, in substantive terms, exactly what they went on to prove in their paper. Of course, <cit.> used a different method of proof from <cit.>, as I have demonstrated in this paper. But in substantive terms, <cit.> proved the same result that Yule had established more than two decades ago. No matter what the reason for the omission in history, it seems that now is the right time to acknowledge Yule's pioneering contribution by attaching his name to a theorem that is so widely used in econometrics. apalike § PROOF OF YULE'S RESULT The first step of the proof is to derive the normal equations arising from the least squares method of estimation. Given the random variables x_1, x_2, …, x_n, which are demeaned versions of X_1, X_2, …, X_n, the method of least squares chooses constants b_1, b_2, …, b_n to minimize ∑( x_1 - b_2 x_2 - b_3 x_3 - ⋯ - b_n x_n)^2, where the sum runs over all observations in the sample. The first order conditions of this minimization problem are referred to as the `normal equations'. Differentiating the above with respect to b_j, we have ∑ x_j ( x_1 - b_2 x_2 - b_3 x_3 - ⋯ - b_n x_n) = 0 ( j=2, 3, …, n), so that, using Yule's notation, the normal equations are given by ∑ x_j x_1.234 … n = 0 ( j=2, 3, …, n), which shows that the residual in the regression equation (<ref>) is uncorrelated with all the regressors included in the model, a result that holds for any regression function whose coefficients are estimated with the method of least squares. Now consider, x_2.34 … n, the residuals from a regression of x_2 on x_3, x_4, …, x_n, and note that it is a linear function of x_2,x_3, x_4, …, x_n. Hence, using (<ref>), we have ∑ x_1.234 … n x_2.34 … n = ∑ x_1.234 … n( x_2 - c_1 x_3 - ⋯ - c_n-2 x_n ) = 0, for some constants c_1, …, c_n-2. We are now ready to prove the main result: coefficients from multiple and partial regressions are numerically the same. 
0 = ∑ x_2.34 … n x_1.234 … n = ∑ x_2.34 … n( x_1 - b_12.34 … n x_2 + b_13.24 … n x_3 + ⋯ + b_1n.234 … n-1 x_n ) = ∑ x_2.34 … n( x_1 - b_12.34 … n x_2) = ∑ x_2.34 … n x_1 - b_12.34 … n∑ x_2.34 … n x_2 = ∑ x_2.34 … n( b_13.4 … n x_3 + b_14.3 … n x_4 + ⋯ + b_1n.34 … n x_n + x_1.34… n) - b_12.34 … n∑ x_2.34 … n x_2 = ∑ x_2.34 … n x_1.34… n - b_12.34 … n∑ x_2.34 … n x_2 = ∑ x_2.34 … n x_1.34… n - b_12.34 … n∑ x_2.34 … n( b_23.4 … n x_3 + b_24.3 … n x_4 + ⋯ + b_2n.34 … n x_n + x_2.34… n) = ∑ x_2.34 … n x_1.34… n - b_12.34 … n∑ x_2.34 … n x_2.34… n Hence, we have (<ref>): ∑( x_1.34 … n× x_2.34 … n) /∑( x_2.34 … n)^2 = b_12.34 … n. § PROOF OF FRISCH AND WAUGH'S RESULT Consider (<ref>) and let m_ij = ∑ x_i x_j ( i,j = 0, 1, 2, …, n), where the sum runs over all observations, and x_0, x_1, …, x_n are demeaned versions of X_0, X_1, …, X_n.[<cit.> presents Frisch and Waugh's results using projection matrices. That is not correct. <cit.> did not use projection matrices in their proof, as I show in this section.] Then, <cit.> assert that the regression equation in (<ref>) can be expressed as the following equation that specifies the determinant of the relevant (n+1)-dimensional matrix to be zero: x_0 x_1 ⋯ x_n m_10 m_11 ⋯ m_1n ⋮ ⋮ ⋱ ⋮ m_n0 m_n1 ⋯ m_nn = 0. To see this, which <cit.> do not explain, perhaps because it was obvious to them, we need to recall two things. First, if we expand the determinant in (<ref>) using the first row of the matrix, we will get an equation of the following form, a_0 x_0 + a_1 x_1 + ⋯ + a_n x_n = 0, where a_0 is the determinant obtained by deleting the first row and first column (of the matrix whose determinant is being considered in (<ref>)), a_1 is -1 times the determinant obtained by deleting the first row and second column, a_2 is 1 times the determinant obtained by deleting the first row and third column, and so on.[The signs alternate because the determinant obtained by deleting the first row and the j-th column is multiplied by -1^(1+j), where j=1, 2, …, n+1. ] Assuming a_0 ≠ 0, which is guaranteed as long as the regressors are not perfectly collinear, this gives x_0 = -a_1/a_0 x_1 - ⋯ -a_n/a_0 x_n. This has the same form as (<ref>) and all we need to do is to show that the coefficients in (<ref>) are what appears in (<ref>). To do so, we can use the normal equation and Cramer's rule. Recall that the normal equations that had been written in (<ref>) can, with reference to the least squares estimation (<ref>), be written more compactly as X'X b = X'y, where X = [ x_1 : x_2 : ⋯ : x_n] is the matrix obtained by stacking the regressors column-wise, y = x_0 is the dependent variable, and b is the least squares coefficient vector: b = [ b_01.34 … n b_02.24 … n ⋯ b_0n.234 … n-1]. Note that the (i,j)-th element of X'X is m_ij as defined in (<ref>), and the i-th element of X'y is m_i0. Thus, the normal equations can be written as [ m_11 m_12 ⋯ m_1j ⋯ m_1n; ⋮ ⋮ ⋯ ⋮ ⋯ ⋮; m_n1 m_n2 ⋯ m_nj ⋯ m_nn ][ b_1; ⋮; b_n ] = [ m_10; ⋮; m_n0 ]. Applying Cramer's rule <cit.> to this equation system to solve the b vector, keeping track of how many columns are switched and recalling that switching rows (columns) of a matrix only changes the sign of the determinant <cit.>, we see that the coefficients in (<ref>) and (<ref>) are identical. 
Thus, for j=1,2, …, n b_j = | B_j | /| A| , where A = [ m_11 m_12 ⋯ m_1j ⋯ m_1n; ⋮ ⋮ ⋯ ⋮ ⋯ ⋮; m_n1 m_n2 ⋯ m_nj ⋯ m_nn ] and B_j = [ m_11 m_12 ⋯ m_10 ⋯ m_1n; ⋮ ⋮ ⋯ ⋮ ⋯ ⋮; m_n1 m_n2 ⋯ m_n0 ⋯ m_nn ] is obtained from A by replacing the j-th column with (m_10 … m_n0)'. Now consider (<ref>) and let m'_ij = ∑ x'_i x'_j ( i,j = 0, 1, 2, …, n), where the sum runs, once again, over all observations. Using the same logic as above, we will be able to see that the regression equation in (<ref>) can be expressed as x'_0 x'_1 ⋯ x'_n-1 m'_10 m'_11 ⋯ m'_1,n-1 ⋮ ⋮ ⋱ ⋮ m'_n-1,0 m'_n-1,1 ⋯ m'_n-1,n-1 = 0. The strategy will now be to use (<ref>) and (<ref>) to show that the first n-1 coefficients in (<ref>) are equal to the corresponding coefficients in (<ref>). The first thing is to relate m_ij and m'_ij. This is easy to do by noting that x'_j = x_j - m_jn/m_nnx_n ( j = 0, 1, 2, …, n-1), because x'_j is the residual from a regression of x_j on x_n. Multiplying x_i on both sides and summing over all observations for x_i and x_j, we get m'_ij = m_ij - m_in m_jn/m_nn. The second step is to return to (<ref>) and carry out a series of elementary row operations that converts elements in the second through the penultimate row from m_ij to m'_ij. From the row beginning with m_10 in (<ref>) subtract -m_1,n/m_n,n times the last row; from the row beginning with m_20 subtract -m_2,n/m_n,n times the last row; and so on. Note that these row operations do not change the determinant of the matrix <cit.>. Hence, these series of elementary row operations will convert (<ref>) to x_0 x_1 ⋯ x_n-1 x_n m'_10 m'_11 ⋯ m'_1,n-1 0 ⋮ ⋮ ⋱ ⋮ ⋮ m'_n-1,0 m'_n-1,1 ⋯ m'_n-1,n-1 0 m_n0 m_n1 ⋯ m_n,n-1 m_nn = 0. because of (<ref>). Now consider the following determinant equation x'_0 x'_1 ⋯ x'_n-1 0 m'_10 m'_11 ⋯ m'_1,n-1 0 ⋮ ⋮ ⋱ ⋮ ⋮ m'_n-1,0 m'_n-1,1 ⋯ m'_n-1,n-1 0 c_0 c_1 ⋯ c_n-1 1 = 0, for some arbitrary constants c_0, …, c_n-1, and by expanding the determinant by the last column, note that (<ref>) is equivalent to (<ref>). Expanding (<ref>) by the first row, we get a_0 x_0 + a_1 x_1 + ⋯ + a_n-1 x_n-1 + a_n x_n = 0 and expanding (<ref>) by the first row, we get a'_0 x'_0 + a'_1 x'_1 + ⋯ + a'_n-1 x'_n-1 = 0, where a_i = a'_i m_nn (i=0,1, 2, …, n-1). Rearranging (<ref>) as x_0 = -a_1/a_0 x_1 - ⋯ -a_n-1/a_0 x_n-1 -a_n/a_0 x_n, and rearranging (<ref>) as x'_0 = -a'_1/a'_0 x'_1 - ⋯ -a'_n-1/a'_0 x'_n-1, and noting that (<ref>) is a re-expression of (<ref>) while (<ref>) is a re-expression of (<ref>), we have b_0i.34 … n = b'_0i.34 … n (i=1, 2, …, n-1). § PROOF OF LOVELL'S RESULTS Instead of following the proof in <cit.>, I will instead use <cit.> and <cit.>. §.§ Parameter estimates are same Let us start by recalling that the normal equations for (<ref>), which we have seen before in in the context of Yule's proof and Frisch and Waugh's proof, can be written as W'W b = W'Y, where W = [ W_2 : W_1] and b' = [ b'_2 : b'_1]. The normal equations can be expanded to give [ W'_2 W_2 W'_2 W_1; W'_1 W_2 W'_1 W_1 ][ b_2; b_1 ] = [ W'_2 Y; W'_1 Y ], which, in turn, gives us the following two equation systems: W'_2 W_2 b_2 + W'_2 W_1 b_1 = W'_2 Y W'_1 W_2 b_2 + W'_1 W_1 b_1 = W'_1 Y. From (<ref>), we get b_2 = ( W'_2 W_2)^-1[ W'_2 Y - W'_2 W_1 b_1] and substituting this in (<ref>), and simplifying, we get b_1 = ( W'_1 M_W_2 W_1)^-1 W'_1 M_W_2 Y, where M_W_2 = I - P_W_2, and P_W_2 = W_2 ( W'_2 W_2)^-1 W'_2. 
Since M_W_2 is symmetric and idempotent, we can write (<ref>) as b_1 = ( W^*_1^' W^*_1)^-1W^*_1^' Y^*, where W^*_1 = M_W_2 W_1 is the matrix of column-wise stacked residuals from regressions of the columns of W_1 on W_2, and Y^* = M_W_2 Y is the vector of residuals from a regression of Y on the W_2. Turning to (<ref>), we see that b̃_1 = ( W^*_1^' W^*_1)^-1W^*_1^' Y^*, which establishes that b_1 = b̃_1. §.§ Residuals are same The residual vector from (<ref>) is given by u = (I - P_W) Y, and the residual vector from (<ref>) is given by ũ = (I - P_W^*_1) Y^* = (I - P_W^*_1) (I-P_W_2) Y. Thus, if we can show that P_W = P_W_2 + P_W^*_1 + P_W^*_1P_W_2, then the proof will be complete. We see that, with reference to (<ref>), P_W = W ( W' W)^-1 W' = [ W_2 W_1 ][ W'_2 W_2 W'_2 W_1; W'_1 W_2 W'_1 W_1 ]^-1[ W'_2; W'_1 ], which, using the inverse of partitioned matrices <cit.>, becomes P_W = [ W_2 W_1 ][ ( W'_2 W_2 - W'_2 W_1 ( W'_1 W_1)^-1 W'_1 W_2)^-1 -( W'_2 W_2)^-1 W'_2 W_1 F; -F W'_1 W_2 ( W'_2 W_2)^-1 F ]^-1[ W'_2; W'_1 ], where F = ( W'_1 W_1 - W'_1 W_2 ( W'_2 W_2)^-1 W'_2 W_1)^-1 = ( W'_1 M_W_2 W_1)^-1. Hence, P_W = W_2 ( W'_2 W_2)^-1 W'_2 + W_2 ( W'_2 W_2)^-1 W'_2 W_1 F W'_1 W_2 ( W'_2 W_2)^-1 W'_2 - W_2 ( W'_2 W_2)^-1 W'_2 W_1 F W'_1 - W_1 F W'_1 W_2 ( W'_2 W_2)^-1 W'_2 + W_1 F W'_1, Note that the last two terms in (<ref>) can be combined to give - W_1 F W'_1 W_2 ( W'_2 W_2)^-1 W'_2 + W_1 F W'_1 = W_1 F W'_1 [ I - P_W_2] = W_1 F W'_1 M_W_2, and the second and third terms in (<ref>) can be combined to give W_2 ( W'_2 W_2)^-1 W'_2 W_1 F W'_1 W_2 ( W'_2 W_2)^-1 W'_2 - W_2 ( W'_2 W_2)^-1 W'_2 W_1 F W'_1 = - W_2 ( W'_2 W_2)^-1 W'_2 W_1 F W'_1 M_W_2. Using these in (<ref>) we get P_W = W_2 ( W'_2 W_2)^-1 W'_2 + W_1 F W'_1 M_W_2 - W_2 ( W'_2 W_2)^-1 W'_2 W_1 F W'_1 M_W_2 = P_W_2 + [ I - W_2 ( W'_2 W_2)^-1 W'_2] W_1 F W'_1 M_W_2 = P_W_2 + M_W_2 W_1 F W'_1 M_W_2 = P_W_2 + W^*_1 ( W^*_1^' W^*_1)^-1W^*_1^', because F = ( W'_1 M_W_2 W_1)^-1 = ( W'_1 M'_W_2 M_W_2 W_1)^-1 = ( W^*_1 W^*_1)^-1, where W^*_1 = M_W_2 W_1, and M_W_2 is symmetric and idempotent. Hence, we have P = P_W_2 + P_W^*_1, and further that P_W_2 P_W^*_1 = W_2 ( W'_2 W_2)^-1W'_2 W^*_1_=0( W^*_1^' W^*_1)^-1W^*_1^' = 0, because W'_2 W^*_1 = W'_2 M_W_2 W_1 = W'_2 ( I - W_2 ( W'_2 W_2)^-1 W'_2 ) W_1 = (W'_2-W'_2 ) W_1 = 0 × W_1 = 0. Since P_W_2 and P_W^*_1 are both symmetric, this implies that P_W^*_1P_W_2=0. Hence, we have P_W = P_W_2 + P_W^*_1 + P_W^*_1P_W_2, and this establishes (<ref>).
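The projection identities at the heart of this proof can likewise be confirmed numerically, continuing the simulated example from the earlier sketches (a sanity check only):

def proj(X):
    # Orthogonal projection onto the column space of X
    return X @ np.linalg.solve(X.T @ X, X.T)

P_W = proj(W)                                 # W = [W2 : W1]
P_W2 = proj(W2)
P_W1_star = proj(W1_star)                     # W1_star = M_{W2} W1

print(np.allclose(P_W, P_W2 + P_W1_star))     # True: P_W = P_{W2} + P_{W1*}
print(np.allclose(P_W2 @ P_W1_star, np.zeros((N, N))))      # True: cross terms vanish
print(np.allclose((np.eye(N) - P_W) @ Y,
                  (np.eye(N) - P_W1_star) @ (np.eye(N) - P_W2) @ Y))   # equal residuals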
http://arxiv.org/abs/2307.02679v1
20230705224114
A Study on the Impact of Face Image Quality on Face Recognition in the Wild
[ "Na Zhang" ]
cs.CV
[ "cs.CV" ]
A Study on the Impact of Face Image Quality on Face Recognition in the Wild Na Zhang Na Zhang is with the Lane Department of Computer Science and Electrical Engineering at West Virginia University, Morgantown, WV 26506-6109. August 1, 2023 ====================================================================================================================================================== Deep learning has received increasing interest in face recognition in recent years, and a large number of deep learning methods have been proposed to handle the various problems that arise in face recognition. Many of these methods claim to have reached, or even surpassed, human-level face verification performance on certain databases. As is well known, face image quality poses a great challenge to traditional face recognition methods, e.g. model-driven methods with hand-crafted features. However, little research has focused on the impact of face image quality on deep learning methods, let alone on human performance. We therefore raise a question: is face image quality still one of the challenges for deep learning based face recognition, especially under unconstrained conditions? Building on this, we further investigate the problem at the human level. In this paper, we partition face images into three quality sets to evaluate the performance of deep learning methods on cross-quality face images in the wild, and then design a human face verification experiment on these cross-quality data. The results indicate that the quality issue still needs to be studied thoroughly in deep learning, that humans are better at relating face images separated by large quality gaps, and that claiming deep learning methods surpass human-level performance is too optimistic. Face recognition, Face image quality, Deep learning § INTRODUCTION It is well known that the accuracy of traditional face recognition (FR) methods, e.g. Eigenfaces <cit.> and Fisherfaces <cit.>, is strongly affected by face image quality problems, such as intraclass variations between the enrollment and identification stages. Using face images of poor quality can markedly degrade face recognition performance; non-standard lighting, non-frontal pose, and out-of-focus capture are among the main causes of this degradation. This is why many quality enhancement methods have been proposed to improve performance. For example, Hassner et al. <cit.> used an off-the-shelf detector to detect faces and facial landmarks, and then aligned the photo with a textured 3D model of a generic reference face. Wang et al. <cit.> performed photometric normalization on face images. Another solution, to which most researchers commit themselves, is to improve the algorithm itself by making it robust to possible degradation. With the introduction of deep learning (DL) techniques, substantial progress has been made in face recognition <cit.>, especially in unconstrained environments, in which face images exhibit a variety of quality challenges, e.g. pose variations, facial expression, varying illumination, large age gaps, facial makeup, and partial occlusion. Deep learning based face recognition methods extract much more robust features and outperform conventional face recognition methods built on hand-crafted features. Some of these methods claim to have achieved human-level performance, or better, in face verification on the Labelled Faces in the Wild (LFW) <cit.> database. The gap between humans and machines thus appears to be narrowing.
LFW database is a well-known, widely used, and challenging benchmark for face verification evaluation, which contains 13,233 face images of 5,749 subjects collected from the web. Many deep learning based face recognition methods use this database to evaluate their performance in unconstrained condition. Even though existing face verification accuracy is very close to 100%, it still remains an argument that claiming surpassing human-level face verification performance is too optimistic. Liao et al. in <cit.> figured out that the existing standard LFW protocol is very limited, only 3,000 positive and 3,000 negative face pairs for classification, and fails to fully exploit all the available data. Probably that is why some deep methods can easily reach such high accuracies, even surpass the human-level performance. N. Zhang and W. Deng <cit.> also proposed several limitations on LFW, like that intraclass variations and interclass similarity sometimes may be ignored by researchers, insufficient matching pairs can not capture the real difficulty of large-scale unconstrained face verification problem. Therefore, it is questionable to say that deep models have touched the limit of LFW benchmark. For traditional automatic face recognition systems, their performance largely depends on the quality of the face images. Generally speaking, face image quality can be used as a measure metric for their performance. In the early stage, most face images were obtained under controlled environment with proper lighting condition, frontal pose, neutral expression, no or less makeup and standard image resolution, e.g. photos on ID cards. These faces own pretty high quality, thus it is easy for FR systems to achieve extremely high recognition accuracy. However, as the emergency of face data captured under uncontrolled environment (e.g. face images crawled from Internet), these images with low quality significantly degrade recognition accuracy. Some researchers tend to seek for more robust methods, thus deep learning based method was brought in. Different with traditional methods which are model driven, deep learning methods are learning driven which can automatically learn all kinds of faces with different quality problems if enough data are fed into the network. It seems that face image quality become less important for the performance of deep learning based face recognition system. Besides little research specially study the impact of face image quality on deep learning methods. It is well known that face recognition in unconstrained condition is much more difficult due to various changes in face images, e.g. pose variations, illumination changes, varying facial expression, partial occlusion, low resolution, age variations, heavy make-up, etc. Besides, high interclass similarity and large intraclass variation are still two big challenges for face recognition task. Although existing deep models have been trained very well for various quality changes of face images, it is still much more challenging for deep models to recognize faces with quite low quality. Therefore, we raise a question: Does the performance of deep learning based face recognition system still depend on the face image quality? If not, what is the challenge? If so, how it affects? Based on this, we further investigate the impact of face image quality on human performance, and the gap between deep learning and human. In our previous research <cit.>, we proposed that the face image quality issue is still a grand challenge for deep learning methods. 
In order to prove this, we developed new face recognition protocols for cross-quality face identification and verification on two public databases, IJB-A <cit.> and FaceScrub <cit.>, and four popular deep models were evaluated under this settings. Based on this research, we asked human beings to perform face verification experiment on the faces in unconstrained environment by matching across different face image qualities and further investigate the impact of face image quality on human performance and the distance between human beings and deep learning methods. We also seek to expand previous comparisons <cit.> by performing face verification on cross-quality face data in the wild. In our experiment, we focus on face images of extremely difficult levels. These images are chosen from face pairs that the deep model fails to recognize successfully. The evaluation on human performance in face verification discloses that human beings show a different performance with deep learning methods, and saying surpassing human-level is still too optimistic. The contributions of our work includes: * as an extension of research <cit.>, we aim to examine the face recognition performance of deep learning and human beings on cross-quality face images; * four pre-trained deep models with high reported accuracy are adopted to perform cross-quality face recognition on two databases, IJB-A and FaceScrub; and the deep model with best recognition performance is chosen to be compared with human beings; * human beings perform better than deep learning on face recognition by matching face images with different qualities, especially when the quality gap is large, which also indicates that deep learning method still has a long way to surpass human. The paper is organized as follows. In section <ref>, we talk about related work on face image quality assessment, human performance in face recognition. In section <ref>, we describe how to choose the best model among four representative deep models. In section <ref>, the face verification experiment is performed by human. And section <ref> gives an analysis on the results. In section <ref>, some interesting discussion and conclusions are drawn. § RELATED WORK §.§ Face Image Quality Assessment Face image quality is an important factor that apparently affect the performance of traditional face recognition. In practical recognition system, it is usual to choose multiple face images for each subject, hence choosing face images with high quality is a good way to improve recognition accuracy. The approved ISO/IEC standard 19794-5 <cit.> specified recommendations for face photo taking for ID card, E-passport and related applications, including instructions for light condition, head pose, facial expression, occlusion, and so on. Figure <ref> shows a few correct and incorrect illustration face images of ISO/IEC 19794-5 standard <cit.>. Face images of bad quality which do not accord with the requirements of the standards is a reason leading to face recognition performance degradation. ISO/IEC 29794-5 <cit.> specifies a few methodologies and approaches for computation of quantitative quality scores for facial images by introducing facial symmetry, resolution and size, illumination intensity, brightness, contrast, color, exposure, sharpness, etc. Recently, a few face image quality assessment methods have been proposed. Most existing face image quality assessment methods are based on the analysis of specific facial properties. Yang et al. 
<cit.> introduced a face pose estimation method by a boosting regression algorithm to evaluate face image quality, and applied it in the best shot selection problem to choose the most frontal face from a video sequence. Gao et al. <cit.> developed a facial symmetry based method for face image quality assessment in which it applies the degree of facial asymmetry to quantify the face quality caused by non-frontal illumination and improper face pose. Nasrollahi and Moeslund <cit.> assesses face quality in video sequence by combining four features (e.g. out-of-plan rotation, sharpness, brightness and resolution) using a local scoring system and weights. Sang et al. <cit.> presented several methods for face image quality evaluation. It uses Gabor wavelets as basis features to estimate the facial symmetry and then evaluate the illumination condition and facial pose. Sellahewa et al. <cit.> try to measure the face image quality in terms of luminance distortion in comparison to a specified reference face image. Wong et al. <cit.> designed a patch-based face image quality assessment method to choose the 'best' subset of face images from multiple frames of video captured in uncontrolled conditions by quantifying the similarity of a face image to a probabilistic face model, the 'ideal' face. Image characteristics that affect recognition, such as head pose, illumination, shadowing, motion blur and focus change over the sequence, are taken into account. Long and Li <cit.> designed a quality assessment system to select the best frame from the input video sequence by considering five features including sharpness, brightness, resolution, head pose and expression. The score of each feature is calculated separately, and then the final quality score is obtained by weight fusion of five scores. The image quality assessment model in <cit.> assesses the image quality by considering occlusion, face-to-camera distance, pose, expression, uneven illumination measure. Most of the methods mentioned above apply the artificially defined facial properties and empirically selected reference face images in their assessment process. Some others apply different features, or strategies. Zhang and Wang <cit.> proposed three asymmetry based face quality measures, which are based on scale insensitive SIFT features. Bharadwaj et al. <cit.> applied Gist and HOG to classify face images into different quality categories that are derived from face matching performance. Raghavendra et al. <cit.> proposed a scheme for face quality estimation. It first separates frontal faces from non-frontal ones by pose estimation, and evaluate the image quality of frontal faces by analyzing its texture components using Grey Level Co-occurrence Matrix (GLCM), finally quantify the quality using likelihood values obtained using Gaussian Mixture Model (GMM). Chen et al. <cit.> proposed a simple and flexible framework in which multiple feature fusion and learning to rank are used. §.§ Human Performance in Face Recognition A lot researchers did pretty much work to evaluate human performance in face recognition. O'Toole et al. <cit.> did a series of face verification experiments on human and algorithms in which the face images of each pair were taken under different illumination conditions. They found that three algorithms surpassed humans being performance by matching face pairs pre-screened to be "difficult" and six algorithms surpassed humans on "easy" face pairs. Alice J. O'Toole et al. 
<cit.> compared the performance of humans and machines in face identification task on frontal face images taken under different uncontrolled illumination conditions in both indoor and outdoor settings and with natural variations in a person's day-to-day appearance. In particular, they studied how human beings perform relative to machines as the level of difficultly increases as the variations contributed, such as facial expression, partial occlusion, hair styles and so forth. They concluded that the superiority of machines over humans in the less challenging conditions may indicate that face recognition systems may be ready for applications with comparable difficulty. Kumar et al. <cit.> presented an evaluation of human performance on LFW dataset by following a procedure mentioned in paper <cit.>. They generated 6,000 image pairs and asked 10 users to label two faces of each pair whether they belong to same person or not. The users were also asked to rate their confidence when labelling. Human performance on LFW is 99.20%, 97.53% and 94.27% when users are shown the original images, tighter cropped images and inverse crops. Human performance is really perfect when the participants are shown the original images. Due to lacking context information, the performance drops when a tighter cropped version of face images are given. It indicates that human can easily use context cues to recognize faces. Besides, the human performance is still wonderful when they are just shown the inverse cropped version (only context information is shown). P. Jonathon Phillips et al. <cit.> also did a similar work by matching frontal faces in still and video face images in different difficulty levels (e.g. good, challenging, very challenging). The result showed that algorithms are consistently superior to humans for frontal still faces with good quality, and humans are superior for video and challenging still faces. The result also indicated that humans can use non-face identity cues (e.g. head, body. etc.) to recognize faces. Best-Rowden et al. <cit.> analyzed the face recognition accuracies achieved by both machines and humans on unconstrained face data, reported the human accuracy in still images via crowdsourcing on Amazon Mechanical Turk, and first reported human performance on video faces, the YouTube Faces database, which indicated that humans are superior to machines, especially when videos contain contextual cues in addition to the face image. Zhou et al.<cit.> did a human face verification test in real-world environment on Chinese ID (CHID) benchmark, in which the data were collected offline and specialized on Chinese people. The dataset contains a typical characteristic, age variation including intra-variation (i.e., same person with different ages) and inter-variation (i.e., persons with different ages). The experiment focused on cases their recognition system failed to recognize. The result showed that 90% cases can be solved by human. Phillips et al. <cit.> expanded the comparison between human and machine from still images and videos taken by digital single lens reflex cameras to digital point and shoot cameras, Point and Shoot Face Recognition Challenge (PaSC). They provided a human benchmark for verifying unfamiliar faces in unconstrained still images at two levels: challenging and extremely-difficult. 
100 different-identity image-pair with the highest similarity scores and 100 same-identity image-pair with the lowest similarity scores were selected and 30 users were asked to view two faces of each image-pair side by side and rate on a 1 to 5 scale. The results demonstrated that, in extremely-difficult level, human performance shines relative to algorithms. Austin Blanton et al. <cit.> also made a comparison of performance between human and algorithms in face verification on the challenging IJB-A dataset, which includes varying amounts of imagery, immutable attributes,e.g. gender, and circumstantial attributes, e.g. occlusion, illumination, and pose. In their experiment, the participants are asked to show how confident when they decide whether two given faces belong to same subjects or not with six options, which are Certain, Likely, Not Sure, Unlikely, Definitely Not, and Not Visible. The result shows that even for the challenging images in IJB-A, face verification is an easy task for humans. In the past 10 years, pretty a lot researchers studied the performance of humans and machines on face recognition and did all kinds of comparisons between them. In some scenarios, especially "easy" cases, the algorithms perform better, and in other scenarios, like still images in "difficult" levels with various variations and videos, the humans are better. As the fast development of deep learning technique in face recognition, the performance of deep models increase quickly. Quite a lot research reported the surpassing human-level performance on face recognition. Can deep learning technique really gain more excellent performance than human? § OUR APPROACH Fig. <ref> shows a whole pipeline of our approach. At the beginning, we partition two popular public databases in the wild, IJB-A <cit.> and FaceScrub <cit.>, into three quality sets (e.g. high quality, middle quality, low quality) separately according the face image quality score. Four famous pre-trained deep models, Light CNN <cit.>, FaceNet <cit.>, VGGFace <cit.>, and CenterLoss <cit.>, with high reported accuracy, are chosen to perform face recognition experiments, including face identification and face verification, on cropped faces of the two databases. After that, the deep model with best performance among them is selected by evaluating their performance. And the face images that the best model fails to recognize successfully are filtered as the data to be used in our well-designed human verification experiments. Human beings are asked to perform face verification experiment by matching across different face image qualities and then the result is evaluated to further examine whether face image quality changes can impact the performance of human beings, how, and what is the gap between deep model and human. In the experiment, we focus on extremely difficult level of face images, i.e., matching low to high quality sets. These images are chosen from face pairs that deep model fails to recognize successfully. §.§ Face Image Quality Although LFW is very popular for face recognition in the wild, there still exists some limitations, like the standard LFW protocol contains limited number of pairs, which causes insufficient exploration on various quality issues, e.g. pose variations, lighting condition, low resolution. Therefore, face image quality changes maybe the key issue in unconstrained face recognition. 
In order to have a better understanding of the face image quality, we are first to examine the distribution of different face qualities in the data and the impact of the distribution on face recognition performance. The face image quality is evaluated by considering specific facial properties, like resolution, pose angle, illumination parameters, or occlusion. We adopt a method proposed in <cit.> to measure and quantify the quality of every face image. This method tries to compare the relative qualities of each face pairs and then use the relative relationship to train a ranking based model to learn the quality score. The generated quality score, which is between 0 and 100, is used as the indicator of face image quality. The higher the quality score is, the better quality the face image has. According to the score of face image, the database is divided into three subsets, i.e., high quality, middle quality and low quality sets. In our study, high quality set is selected as the gallery set, and middle, low quality sets as probe set separately, and then to perform face recognition on four deep models. §.§ Database Preparation We evaluate the performance of face recognition with matching across different face image quality sets on two public face databases, IJB-A <cit.> and FaceScrub <cit.>. IJB-A, the IARPA Janus Benchmark A (IJB-A) database, is a publicly available media in the wild dataset containing a total of 21,230 face images of 500 subjects with manually localized face images. It is more challenging for face recognition. This dataset contains full pose variation, joint use for face recognition and face detection benchmark, wider geographic variation of subjects, protocols supporting both open-set identification (1:N search) and verification (1:1 comparison), an optional protocol that allows modelling of gallery subjects and ground truth eye and nose locations. FaceScrub was created by building face dataset that detects faces in images returned from searching for public figures on the Internet, followed by automatically discarding those not belonging to each queried person. It comprises a total of 106,863 face images of 530 celebrities with about 200 images per person. It contains 55,306 face images of 265 males and 51,557 face images of 265 females. All face images in both databases are estimated by the face image quality assessment method <cit.> and quality scores are calculated for each face image. According to these scores, we divide the two databases into three different quality sets. Table <ref> shows the distribution of three quality sets on the two databases. The quality score is between 0 and 100. Image quality scores in high quality set are greater than or equal to 60. Scores in middle quality set are greater than or equal to 30 and less than 60. And scores in low quality set are less than 30. Fig. <ref> gives some face examples of high, middle, and low quality sets from the two databases. Images with high quality are those frontal faces with high resolution, proper light condition, no occlusion. Images with low quality are those with big pose, dark light condition, or partial occlusion. And images with middle quality are those cases between the two situations. For IJB-A database, we find that quite a lot subjects in high quality set have less than three images. To ensure the gallery, i.e., high quality set, has enough target faces (at least three), we choose a few images from middle quality sets with higher scores to the high quality set. 
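A minimal sketch of this score-based partitioning, using the 60 and 30 thresholds stated above (the function and variable names are hypothetical, and the quality scores are assumed to have been produced beforehand by the quality assessment method):

def partition_by_quality(scored_images, hi=60, lo=30):
    # scored_images: iterable of (image_id, subject_id, quality_score in [0, 100])
    sets = {"high": [], "middle": [], "low": []}
    for image_id, subject_id, score in scored_images:
        if score >= hi:
            key = "high"
        elif score >= lo:
            key = "middle"
        else:
            key = "low"
        sets[key].append((image_id, subject_id, score))
    return sets

# Toy usage: the high quality set serves as the gallery, the low and middle sets as probes
toy = [("a.jpg", 1, 72.4), ("b.jpg", 1, 41.5), ("c.jpg", 2, 12.3)]
sets = partition_by_quality(toy)
gallery, probe_low, probe_mid = sets["high"], sets["low"], sets["middle"]

Subjects left with fewer than three gallery images can then be topped up with their highest-scoring middle-quality images, mirroring the adjustment described above.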
From Fig.<ref> (a), it is easy to notice that most subjects in high quality set have three images. The middle quality set contains the most images (63.55%), and low quality set also contains pretty much (29.19%). However, FaceScrub database owns many images with pretty good quality, about 70% images with high quality and 25% with middle quality. In order to match the size of IJB-A, a shortened version of FaceScrub is generated by randomly selecting images from each subject in high and middle quality sets. Finally, the subset of FaceScrub contains a total of 20,895 images of 530 subjects as shown in table <ref>. From Fig.<ref> (b), we can see the shortened FaceScrub still has quantities of face images with pretty good quality. §.§ Deep Models Light CNN <cit.>, FaceNet <cit.>, VGGFace <cit.>, and CenterLoss <cit.> are four popular deep models that have reported very high accuracies (LightCNN: 99.33%, FaceNet: 99.63%, VGGFace: 98.95%, and CenterLoss: 99.28%) on LFW for face verification. Light CNN <cit.> is a light framework to learn a 256-D face representation on the large-scale face data with massive noisy labels. It is efficient in computational costs and storage spaces. FaceNet <cit.> can directly learn a mapping from input face images to a compact 128-D Euclidean space in which the Euclidean distance indicates face similarity. VGGFace <cit.> is inspired by <cit.>. It is a 'very deep' network with a long sequence of convolutional layers. CenterLoss <cit.> uses two loss functions, softmax and center loss, to train the deep model. The center loss can learn a center of deep features for each class to reduce the intra-class variations and enlarge the inter-class differences. §.§ Choose Model with Best Performance To avoid any bias in training stage, we use the pre-trained deep models to perform cross-quality face identification and verification experiments on three types (high, middle, and low) of quality sets from IJB-A and FaceScrub databases. By evaluating the performance, the model with best performance is selected. §.§.§ Face Identification Face identification aims to recognize the person from a set of gallery face images and find the most similar one to the probe sample. For each database, we design three groups of experiments, and in each group the matching faces is across different quality sets. The first one is low to high matching in which low quality set is designed as query images and high quality set is gallery images. The second one is middle to high matching in which middle quality set is query images and high quality set is gallery images. And the third one is low to middle matching in which query images come from low quality set and gallery images are from middle quality set. Deep features of three quality sets from four deep models on IJB-A and FaceScrub are extracted and Cosine Similarity Score is adopted to calculate the similarity score of each face pair. The performance of four models is measured by Cumulative Match Curve (CMC) <cit.> on two databases as shown in Fig.<ref> and Fig.<ref>. It is easily to find that the performance of matching from middle to high quality set is much better than the other two matches for all deep models. The performance of matching from low to middle is slightly better than that of matching from low to high for most cases. The reason probably is that the difference between low and high quality faces is larger than the difference between low and middle quality faces. 
In general, VGGFace has the better result than the other three models, and FaceNet performs the worst. §.§.§ Face Verification Face verification aims to determine whether a given pair of face images or videos belongs to the same person or not. Considering that the performance of low to high and low to middle quality sets are nearly similar, only low to high and low to middle cases are performed in face verification experiment. Low and middle quality sets of each database are set as query images separately and high quality set as gallery images. Finally, about 18,978 positive pairs and 9,541,450 negative pairs in the case of matching low to high quality sets, and 41,642 positive pairs and 20,774,971 negative pairs in the case of matching middle to high quality sets on IJB-A database are generated, and also 6,676 positive pairs and 3,645,542 negative pairs in the case of matching low to high quality sets, and 193,745 positive pairs and 105,175,771 negative pairs in the case of matching middle to high quality sets on FaceScrub database are generated. In the face verification experiment, we construct a similarity matrix in which the row presents one query image, the column indicates one gallery image and the value in the matrix shows cosine similarity score between two face images of the corresponding row and column. Simultaneously, a similarity mask matrix is built in which the row still indicates one query image and the column indicates one gallery image. The difference between the two matrices is the values. In similarity mask matrix, the values have only two types. -1 means that two face images in the corresponding row and column is a positive pair and 127 means negative pair. We still adopt Cosine Similarity Score to show how similar two faces are and then calculate verification accuracies with respect to FAR=0.01, 0.001 and 0.0001 (FAR: false accept rate) as presented in table <ref>, and also give Receiver Operating Characteristic curves (ROC) in Fig. <ref>. The result of verification using Gabor feature is set as a baseline to be compared. We can see that the performance of Gabor feature is the worst. There is a big gap between Gabor features and deep features. For matching middle to high quality sets experiment, Light CNN and CenterLoss has the best performance on IJB-A and FaceScrub separately. And in low to high experiment, VGGFace performs best on FaceScrub, and better than others in FAR=0.01 case on IJB-A. By analyzing the results of face identification and verification experiments, we can see that, on IJB-A, VGGFace has the best performance in low to high experiment, Light CNN is the best in middle to high experiment, and on FaceScrub, VGGFace gains the highest accuracy in low to high experiment, CenterLoss performs best in middle to high experiment. § FACE VERIFICATION EXPERIMENT BY HUMAN In this face verification experiment, we use the best model chosen from previous face identification and verification experiments, and try to find the decision boundary for these positive and negative face pairs based on the best model. Then we randomly select a certain number of face pairs that the best model fails to recognize and perform human verification experiment on the selected face pairs. Since our goal is to examine how well the human performance on face verification comparing to algorithms, we mainly focus on face verification task in extremely difficult level, matching low quality set to high quality set. 
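The verification bookkeeping described above (a query-by-gallery cosine similarity matrix, a mask marking genuine and impostor pairs, and accuracies read off at a chosen false accept rate) can be sketched as follows; this is illustrative only, a boolean mask replaces the -1/127 encoding, the toy features are random, and all names are hypothetical:

import numpy as np

def cosine_similarity_matrix(query_feats, gallery_feats):
    # rows index query images, columns index gallery images
    q = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    return q @ g.T

def verification_at_far(sim, genuine_mask, far_target=0.001):
    # genuine_mask: boolean matrix, True for same-identity (positive) pairs.
    # Returns the score threshold at (approximately) the requested false accept
    # rate and the corresponding true accept rate over the genuine pairs.
    impostor = np.sort(sim[~genuine_mask])
    idx = min(impostor.size - 1, int(np.ceil((1 - far_target) * impostor.size)) - 1)
    thr = impostor[idx]
    tar = np.mean(sim[genuine_mask] >= thr)
    return thr, tar

rng = np.random.default_rng(1)
sim = cosine_similarity_matrix(rng.normal(size=(5, 512)), rng.normal(size=(8, 512)))
genuine = np.zeros_like(sim, dtype=bool)
genuine[0, 0] = genuine[1, 3] = True
print(verification_at_far(sim, genuine, far_target=0.1))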
From previous experiments, it is easy to find that VGGFace has the greatest performance on IJB-A and FaceScrub databases in matching low to high experiment. Hence we choose a number of face image pairs of low to high quality set on IJB-A and FaceScrub databases based on VGGFace model to do human face verification experiment. §.§ Get Decision Boundary We generate the statistical distributions of genuine and impostor matching scores of all positive and negative pairs on the two databases to find the decision boundaries. Fig. <ref> shows the statistical distributions of genuine and impostor scores on both databases. And then the distributions are fitted as Gaussian distribution illustrated in Fig. <ref>. Finally, the thresholds, 0.188 for IJB-A and 0.138 for FaceScrub, are easily obtained. §.§ Choose Genuine and Impostor Pairs Based on the thresholds, genuine and impostor pairs can be easily selected. Those face images that VGGFace fails to recognize successfully are chosen, so the genuine pairs whose matching scores are less than the threshold value and the impostor pairs whose matching scores are greater than or equal to the threshold are filtered from two databases. Since context information in face image can give people some useful cues to recognize the identities <cit.>, the original images are not directly used in the experiment. We adopt a cropped version of original face images from VGGFace. Besides, those pairs that the face images are wrongly or improperly aligned or cropped are manually removed to ensure that those pairs in the human experiment do not contain some technical errors caused by the factors that , not image quality. And then we randomly select 100 positive pairs and 100 negative pairs from the cleaned pairs, put them together and randomly permute them. Finally, a total of 400 pairs for two databases are obtained. In this case, the verification rate of deep model VGGFace is 0% correct. §.§ Participants and Tool We design a face verification experiment performed by humans. In the experiment, a total of 20 participants, 14 males and 6 females, are asked to view 400 face image pairs and give their choice on whether the two faces in each given pair belong to same person or not. A part of them (as indicated in table <ref>) have much experience on face image quality analysis, some ones just know about it and others have no background. For convenience, a tool is designed to assist participants during experiment. Fig. <ref> shows some samples of face pairs shown in the tool. Left is two positive pairs and right are two negative ones. §.§ Experiment Procedure 100 positive pairs and 100 negative pairs are randomly selected for each database. These 200 pairs are divided into four subsets randomly with same size, i.e., 50 pairs. A total of eight subsets are generated in the end. All participants are asked to check the pairs one by one for each subset on the designed tool and make the decision. After finishing one subset, participants are advised to check next subset after a pretty good rest which makes them work on this task with full of energy. All participants have unrestricted time to finish this experiment. § EXPERIMENT RESULTS AND ANALYSIS All participants are grouped into three sets as indicated in Table <ref> according to their background on image quality analysis. 3 persons (2 males and 1 female) have quite a lot experience on face quality understanding and analysis. 
4 individuals (2 males and 2 females) have ever worked on related topics, and the remaining (10 males and 3 females) have little background. We also analyzed all participants as one group. Most of them are students. Majority voting technique is adopted to deal with the final results of these four groups. If the number in the group is even, one subject in it will randomly removed and just odd number of subjects are considered. Table <ref> and <ref> gives the confusion matrix results including positive and negative accuracies in both actual and predicted cases on IJB-A and FaceScrub databases. ROC curves are also drawn in Fig. <ref>. By analyzing the results, we can easily find that the performance of human on IJB-A and FaceScrub is more excellent than VGGFace (best among the four deep models), although very high accuracy on LFW benchmark is achieved. There still exists a clear gap between human performance and machine recognition especially in the real-world setting. Real-world face recognition has much more diverse criteria, like big pose angle, poor illumination condition, and large facial occlusion, than we treated in previous recognition benchmarks. And data quality plays an important role in the performance of algorithms. Wider and more arbitrary range of changes like pose, illumination, expression, occlusion, resolution, age variation, heavy make-up of face images are most common factors which influence the system's performance. However, it still lacks a sufficient investigation on these cross factors, and also lacks an efficient method to handle them clearly and comprehensively. Large amount of face data with these factors are needed to assist us to build better models to improve recognition performance. We also find that people who have much experience in face recognition perform better than those who have not. What is interesting is that people have higher accuracy in recognition of negative pairs than that of positive pairs. The reason may be that it is hard for people to recognize that the two faces belong to same subject for positive pairs since the quality of face in query set is much low, but for negative pairs, it is much easier to view two faces as negative (different persons). Besides, we find that the accuracies on FaceScrub are lower than IJB-A. The reason may be that the quality of faces in query set (low quality set) on FaceScrub is much lower than that on IJB-A. The quality scores of face images can also prove this. § DISCUSSION AND CONCLUSION It is obvious that face image quality plays an important role in model-driven face recognition systems. Faces with bad quality can directly degrade the accuracy of face recognition. The main reason may be that most face recognition methods in the early stage try to build the models that are used to extract hand-craft features, and nearly all data are collected in controlled conditions with standard lighting, fixed head pose, proper facial expression, etc. These data fails to contain various or mixed qualities of face images. And the built models are sensitive to face quality changes. In order to improve the accuracy, some research focus on designing face image quality enhancement methods, like deblurring <cit.>, pose correction <cit.>, and photometric normalization <cit.>. Another solution is to develop more robust algorithm to possible degradation. The brought of deep learning technique into face recognition field gives an clear direction to further development. 
In our previous research <cit.>, we explored the impact of face image quality on deep learning based face recognition in unconstrained environments. In practice, the performance of deep neural networks can be improved considerably by feeding face data of varied quality during the training stage. Since the deep networks have been exposed to face images of many different qualities, they can capture certain connections between them to some extent. Hence, deep learning based face recognition systems obtain more robust features than traditional face recognition methods. Nevertheless, face image quality still influences recognition accuracy, even though the deep networks have seen large quantities of face images. For example, in the face identification evaluation of the four deep models, it is easy for the models to identify the correct subject when matching faces from middle to high quality, but difficult when matching from low to high quality. This shows that deep models can recognize faces whose quality varies to some degree, but not when the gap is too large. Therefore, deep learning methods that are more robust than existing ones are still needed to recognize faces across large quality gaps.
The influence of face image quality on human performance was explored further. We designed a face verification experiment for human participants on cross-quality face data from IJB-A and FaceScrub, matching from low to high quality, which is the hardest condition. Human performance on IJB-A and FaceScrub is better than that of the best model, VGGFace; humans outperform the deep learning methods by a large margin. This result indicates that there still exists a clear gap between human and machine performance for face recognition in unconstrained environments, and that human beings are capable of recognizing face images across large quality gaps. In addition, all participants were grouped into three categories according to their background in face image quality analysis, and the performance of each group was analyzed as well.
Hierarchical Planning and Policy Shaping Shared Autonomy for Articulated Robots
Ehsan Yousefi, Mo Chen, and Inna Sharf
In this work, we propose a novel shared autonomy framework to operate articulated robots. We provide strategies to design both the task-oriented hierarchical planning and the policy shaping algorithms for efficient human-robot interaction in context-aware operation of articulated robots. Our framework for the interplay between the human and the autonomy, as the participating agents in the system, is particularly influenced by ideas from multi-agent systems, game theory, and theory of mind, and supports a sliding level of autonomy. We formulate the sequential, hierarchical, human-in-the-loop decision making process by extending MDPs and the Options framework to shared autonomy, and make use of deep RL techniques to train an uncertainty-aware shared autonomy policy. To fine-tune the formulation to an individual human, we use the history of system states, human actions, and their error with respect to a surrogate optimal model to encode the human's internal state embeddings, beyond the designed values, using conditional VAEs. We showcase the effectiveness of our formulation for different human skill levels and degrees of cooperativeness through a case study of a feller-buncher machine performing the challenging tasks of timber harvesting. Our framework provides a sliding level of autonomy, from fully autonomous to fully manual, and is particularly effective in handling a noisy, non-cooperative human agent in the loop. The proposed framework advances the state of the art in shared autonomy for operating articulated robots, but can also be applied to other domains where autonomous operation is the ultimate goal.
§ INTRODUCTION
§.§ Background and Motivation
Shared autonomy is a framework that enables humans and robots to interact in a shared manner in order to accomplish certain goals. Shared autonomy has been utilized in a wide range of applications, from autonomous driving (<cit.>) to assistive robots (<cit.>), in order to extend and enhance human capabilities (<cit.>). Indeed, the wide range of its applications attests to the importance of efficient human-robot interaction, as well as to the current state of co-existence between humans and increasingly intelligent robots. Our interest in shared autonomy is motivated by its potential applications in the context of articulated machines, which are commonly operated by a human operator physically located in the machine. Operating such a machine, typically comprised of a mobile base and a large-scale multi-degree-of-freedom arm, involves multiple levels of hierarchy in the operator's decision making. These range from higher-level strategic decision making, such as path planning for the machine, all the way down to lower-level decision making related to individual joint control for arm manipulation. In essence, this hierarchy is comparable to human decision making when driving a car (<cit.>). A schematic of an articulated robot with a mobile base is shown in Figure <ref>.
In addition to the hierarchy of decision levels as described above, operation of articulated machines, especially those in industrial settings, such as excavators used in construction or feller-bunchers employed in timber harvesting, is also tied to the detailed know-how of their respective application domains. This is one of the reasons why reaching a high operator skill level to efficiently utilize these machines can take years in some applications (<cit.>). In this paper, we develop a general task-oriented hierarchical planning framework for the robot/machine, with human and AI interactions in mind, that extends beyond the standard robotic planning techniques. One of the main challenges in the multi-level, real-world robotic applications is that despite having a good insight into different operations, complete knowledge of the relevant application domain needed to achieve a fully autonomous system cannot be assumed. However, this should not prevent us from incorporating whatever knowledge we have into a versatile framework. Moreover, the type of applications considered here, involving large, extremely powerful machines, does not allow for hazardous trial-and-error experimentation in the field. This motivates the central research question addressed in this work, that of how to design a comprehensive shared autonomy architecture that allows different levels of autonomy in a human-in-the-loop framework for complex, hierarchical, robotic decision-making tasks. It is important to highlight that in this work, the agents, i.e., the human and the autonomous, co-operate on one physical entity, the robot/machine. Once again, this bears similarities to autonomous driving scenarios (<cit.>), and is unlike many other human-robot interaction scenarios where the two agents act on/through two separate physical entities (<cit.>). §.§ State of the Art In shared autonomy, arbitration of human and autonomous action commands, which jointly form the input to the robot/machine system, is of prominent importance. In this regard, the available schemes in literature can be categorized into two main groups. The first is referred to as policy blending, where the human action and autonomous action are treated as two separate signals and an arbitration function is used to decide how to blend these two signals (<cit.>). Despite wide application due to its simplicity and efficacy, the policy blending approach has some drawbacks that stem from the fact that it attempts to blend two signals that might be different in nature and their meaning (<cit.>). To address the latter issue, in (<cit.>), the authors suggested a latent-action representation from human's low-dimensional actions to high-dimensional inputs. They next combined the latter with the assistance signal in order to fine-tune the robot behavior. The other limitation is the inherent “predict-then-go" nature of the system architecture to implement the policy blending approach (<cit.>). In some respects, the resulting autonomous agent in an inherently “predict-then-go" setting can be viewed as a Sisyphus or an absurd hero (<cit.>), with a perpetual though successful struggle, that resets after every cue by the human agent. The second group of methods is what we will call policy shaping, where the autonomous action (and policy) is shaped by taking into account human action, as well as other available information, and it is the only input to the robot. 
In other words, unlike policy blending where the agents' inputs (i.e., human and autonomous) are combined in parallel, in policy shaping, they are in series. This approach does not suffer from the drawbacks of policy blending; however, it is computationally expensive and users report less comfort using it despite having better performance in certain scenarios (<cit.>). One of the strategies in the second category involves conditioning the robot action on the human signal. The authors of (<cit.>) defined an augmented (autonomous) state consisting of the (overall) state of the robot and human's goal. It was assumed that the human policy that acts based on the augmented state is modeled and known, for which they used the Maximum Entropy (MaxEnt) Inverse Optimal Control (IOC) framework. The autonomous action is based on the overall robotic system state as well as the human action, and is defined such that it minimizes a cost function dependent on the human action and goal. It was furthermore assumed that a goal g is partially observable and the human state is the same as the autonomous state. In (<cit.>), the authors developed a deep Reinforcement Learning (RL) algorithm to learn a model-free policy that maps the augmented state of the robot to the (autonomous) action. The augmented state comprised the state of the overall robotic system and the human signal. The latter was either the intended goal – inferred using Bayesian inference under an inverse RL scheme – if such information was available, or the raw low-level human inputs, otherwise. The purpose in (<cit.>) was to find an optimal autonomous action close to the human action to deliver high performance, while keeping the human as a high-quality input source in the loop. It was demonstrated that incorporating an inference algorithm resulted in a better overall performance despite the additional computational cost. However, the authors of (<cit.>) did not assume a model for human policy and the human signal was part of an augmented state definition for the autonomous policy. A model-free RL algorithm was used to find the optimal autonomous action while keeping it close to the human action. §.§ Contributions Our work is based on the premise that the ultimate goal of a fully autonomous system operating an articulated robot/machine is best achieved through a shared autonomy framework. Under such a framework, the autonomous agent can progressively increase the level of autonomy while keeping the human in the loop to handle edge cases and to, possibly, learn from or teach the autonomous agent. We suggest that such a framework is particularly useful to applications which rely heavily on a skilled human to operate the robot/machine, when the operations involve a hierarchy of decision making, and in operations where safety is important. With this perspective, the main contributions of this paper are as follows: * Development of a general, task-oriented hierarchical planning formulation for the operation of articulated robots/machines, with human interpretability and shared autonomy in mind; * Proposition of a novel shared autonomy architecture for human-in-the-loop tasks and policy shaping; this involves a design of hierarchical interactions and arbitration between the autonomy and the human. 
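To make the distinction between the two families of arbitration schemes concrete, the following minimal sketch contrasts them. It is purely illustrative: the function and variable names (blend, shaped_action, pi_A, alpha) are ours and are not taken from the cited works.

```python
import numpy as np

# Policy blending: human and autonomous actions are combined in parallel.
def blend(a_human: np.ndarray, a_auto: np.ndarray, alpha: float) -> np.ndarray:
    """Arbitration function: a convex combination of the two action signals.

    alpha in [0, 1] is the arbitration weight, e.g., the confidence of the
    autonomous agent in its prediction of the human's goal.
    """
    return (1.0 - alpha) * a_human + alpha * a_auto

# Policy shaping: the autonomous policy is conditioned on the human action.
def shaped_action(pi_A, state: np.ndarray, a_human: np.ndarray) -> np.ndarray:
    """The autonomous policy takes (state, human action) and returns the ONLY
    input sent to the robot; the agents act in series rather than in parallel."""
    return pi_A(np.concatenate([state, a_human]))
```

In blending, the human command always reaches the robot (scaled by 1 - alpha); in shaping, the human command is just another input that the autonomous policy may follow or override, which is the behaviour exploited later for noisy or non-cooperative operators.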
Our work towards this contribution is particularly influenced by the ideas from multi-agent systems, game theory, and theory of mind (<cit.>) for a sliding level of autonomy; * Formulation of the MDPs and Options framework to enable deep RL for shared autonomy; * Application of the proposed shared autonomy framework to an industrially important application—timber harvesting. Thus, we fine-tune our formulation for the specific tasks of a timber-harvesting machine: a feller-buncher, which is a large-scale hydraulically actuated articulated robot with a specialized end-effector (<cit.>). This paper is organized as follows: We first introduce the definitions and nomenclature in <ref>. In <ref>, we provide the elements of shared autonomy. Then, in <ref>, we provide our problem statement and points of view on the problem. In <ref>, we discuss the case study application: timber harvesting, followed by detailed analysis in <ref>. In <ref>, we present our results for different sections. Finally, <ref> concludes our work by reiterating the main ideas presented and suggesting directions for future work. § DEFINITIONS AND NOTATION To eliminate possible ambiguities and for most clarity, we begin by defining the relevant terminology in our work. As much as possible, we use terminology consistent with what is established in the relevant literature; however, we bear in mind that definitions often depend on the specific perspective and background of the authors. Figure <ref> depicts graphically the various components of the system and the corresponding terms to describe its components. Agent: This term refers to any entity capable of making decisions. In our problem, we have two agents: * Human Agent (HA): This term refers to a human operator, driver, or user, as a decision maker. * Autonomous Agent (AA): This term refers to the artificial high-level intelligence capable of decision making. Robot or machine: the physical entity being operated by an agent; it is operated in the field and interacts with the environment. In our application, it is a mobile base with an articulated arm, such as a feller-buncher machine operated in the forest. We might use the term robotic system equivalently, as a robot includes certain components internally, such as sensors and actuators. We use the term Overall Robotic System when we refer to the robotic system and the environment together. Autonomous System: This term refers to the autonomous agent (AA) and the robot together. The term Overall Autonomous System is used when we include the environment as an element in this system. Human-Robot System: This term refers to the human agent (HA) and the robot together. The term Overall Human-Robot System is used when we include the environment as an element in this system. System: This term refers to the human agent (HA), the autonomous agent (AA), and the robot together. The term Overall System is used when we include the environment as an element in this system. With the above system components clearly delineated, we next introduce the basic terminology for the learning aspect of the framework. State: relevant variables defined for each element of the Overall System, in particular, * State of the robot, s^R: This refers to variables relevant to the robot itself, such as its pose and the remaining capacity of its end effector, i.e., the end effector capacity for maneuverability (CfM_ee). 
* State of the environment, s^E: This state defines the different elements in the environment surrounding the robot, such as the objects, and obstacles, as well as certain task-related elements, depending on the type of task. We will discuss these in more detail in the subsequent sections. * State of the Autonomous Agent, s^A: This is the designed representation of the state of the Overall System by the architect of Autonomous Agent based on the foregoing state elements as well as task-related elements. This representation forms the basis upon which the autonomous agent acts. * State of the Human Agent, s^H. This refers to the human's representation of the state of the Overall System. We do not assume knowledge of s^H. Also, we do not assume equivalency between s^H and s^A as will be discussed later. Action: Each of the decision making agents, i.e., HA and AA, can also act in a shared autonomy setting depending on the collaboration scheme and level of autonomy. The action is relayed directly to the robot as input. We will use the following terms: * Action of Autonomous Agent or simply Autonomous Action, a^A: In a shared autonomy setting, this action will be of assistive nature, and we might refer to it as Assistive Signal. * Action of Human Agent or simply Human Action, a^H: This refers to the human action using any input device, such as, joysticks and pedals. Policy: Each of the decision-making/acting agents in a shared autonomy setting has a policy according to which they act. We will use Human (Agent) Policy, π_H, and Autonomous (Agent) Policy, π_A, to refer to these policies. § ELEMENTS OF SHARED AUTONOMY §.§ Hierarchical task-oriented robot planning & design variables As alluded earlier, we consider the robot planning problem in terms of tasks and functions: this is advantageous when interfacing multiple agents, including a human in the context of shared autonomy, as well as sliding levels of autonomy (<cit.>). Moreover, this task-oriented approach makes it possible to incorporate the inherent hierarchy of tasks and consequently, hierarchy in human-robot interactions (<cit.>). The conceptual representation of our task-oriented hierarchical perspective on the robot planning problem for an articulated robot/machine is shown in Figure <ref>. Here, π_RP denotes the overarching, master policy for robot planning and it is broken down (black arrows) into two general tasks or policies and their associated sub-policies as follows: π_M: policy to move the arm. This is further categorized into two hierarchical levels: * π_MHL: policy for high-level arm manipulations that includes n_m sub-policies, such as end-effector path planning and scheduling of arm motions. The specific definition of each sub-policy depends on the particular application domain beyond standard robotic planners; * π_MLL: policy for low-level arm manipulations that includes low-level control, e.g., joint control; π_B: policy to move the base of the robot/machine. This is further categorized into two hierarchical levels, defined similar to the arm motion policies: * π_BHL: policy for high-level motion of the base that includes n_b sub-policies, such as the classic example of hierarchical room-to-room robot planning (<cit.>), * π_BLL: policy for low-level motion of the base that includes base motion control. The green arrows in Figure <ref> show the inter-dependencies of the (sub-)policies. It should be noted that there might be policies that require coordinated or combined planning of robot arm and base. 
This would also fall under the umbrella of the overarching robot planning policy. A task-oriented planning (and scheduling) problem can be formulated as a sequential decision making problem that optimizes for certain task-specific metrics (<cit.>). We use the analogy between an option in Options framework (<cit.>) and a task in our robot planning: just like a task may involve multiple sub-tasks, an option generally involves multiple actions. The Options framework encodes the generalized actions as options. With this analogy, we invoke the Markov decision process (MDP) framework which provides a model for sequential decision making processes, considering the agents' actions while taking into account the stochasticity of the process. Since the Options framework itself is built on semi-MDPs which extend the definition of MDPs to include a sense of time, it enables our shared autonomy formulation to accommodate robotic tasks with different durations, as well as hierarchies. Moreover, it has been shown that the behavior of a human agent operating an articulated robot can be described by a well-structured sequence of repetitive (sub-)tasks (<cit.>). Our framework, therefore, is designed to take into account the spatiotemporal aspects of a shared policy in providing the abstraction of shared autonomy. An example of temporal progression of a sequence of hierarchical tasks is shown in Figure <ref>. From a graphical probabilistic model point of view, the policy of an autonomous agent can be depicted as in Figure <ref>, where z_0 encodes the task or function specific state variables, which augment the classic robot-related states s into ŝ. Research on how to define z_0 is quite extensive and is also application specific, as illustrated by the work in (<cit.>). It could be argued that there is neither a unique formulation nor a methodology to define z_0, as there is no unique way of performing the same task. In tasks involving pick and place operations, using information about the goal space to define z_0 has been shown a good choice (<cit.>). We will demonstrate the significance of this choice through an example in <ref>. The advantage of our framework is that z_0 is defined for shared autonomy by design, which makes the robot operation interpretable as well as efficient. The mutual interpretability attribute is particularly important for tasks involving a human agent in the loop and for those with limited domain knowledge. It should be noted that our framework is not necessarily a “human-knowledge-based" method[The Bitter Lesson, Rich Sutton, 2019: <http://www.incompleteideas.net/IncIdeas/BitterLesson.html> (accessed 12.06.2023).]. Although having insight into how humans perform a task helps with the understanding of the task, especially for applications of human-operated machines, this is not a requirement for our framework, but a matter of interpretability to the human who is in the loop. With the understanding presented above, we now discuss how we interface the agents in the system, i.e., the human and the autonomous, given the hierarchy of tasks. Our view of the overall system involving a human and an autonomous agent is that of a multi-agent system with gamified interactions, and we design our shared autonomy architecture accordingly, as discussed next. §.§ Shared autonomy architecture & design variables A high-level block diagram of the proposed shared autonomy scheme from the control systems point of view is shown in Figure <ref>. 
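As a rough, code-level illustration of how the blocks of such a closed loop can be wired together, the sketch below runs one human-in-the-loop episode. The structure and all names (env, human_policy, shared_policy) are our own simplification under the policy shaping paradigm, not the authors' implementation.

```python
def run_episode(env, human_policy, shared_policy, max_steps=200):
    """One human-in-the-loop episode: the human proposes an action, the shared
    (autonomous) policy shapes it, and only the shaped action drives the robot."""
    s = env.reset()
    total_reward = 0.0
    for _ in range(max_steps):
        a_h = human_policy(s)              # may be -1 / None if the operator does not act
        a_a = shared_policy(s, a_h)        # policy shaping: serial arbitration
        s, r, done, info = env.step(a_a)   # only the autonomous action reaches the robot
        total_reward += r
        if done:
            break
    return total_reward
```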
The architecture of each of these blocks and how they interface with each other are the most challenging aspects of shared autonomy design. Figure <ref> depicts our proposed viewpoint of a shared autonomy framework as a graphical probabilistic model. With the task-representative variables, z_0, as introduced in <ref>, we now introduce the human-representative design variables, z_1, which encode the human's internal states. There is substantial literature on conceptualizing the human aspect in the context of shared autonomy, whether through explicit model assumptions (for example, (<cit.>) or model-free approaches (for example, (<cit.>)). In the literature to date, inferred human's belief over the goals of the specific task is often used as one of the human internal states, even though inference of specific goals may not always be feasible. Arguably, optimality of the operator, or equivalently, the amount of noise in their actions, is another important characteristic that we are interested in quantifying and utilizing for smooth, user-tuned shared autonomy. Thus, human operator analysis with minimal assumptions about them is an important aspect of our work, to be discussed in <ref>. The flow of information to/from a human agent is shown with blue arrows in Figure <ref>. The dashed lines are related the human analysis process that will be discussed in <ref>. Continuing with Figure <ref>, we introduce z^RP,A_2 to encode pre-training related state variables to represent prior training and knowledge. In our framework, we refer to pre-training as the process of training a fully autonomous agent using a model similar to Figure <ref>. This is the first step in shared autonomy design, ensuring that the state definition and the model are capable of performing a task autonomously. As well, the pre-trained model will allow us to look into the human's internal state and/or their fine-tuning and noise through the lens of a structured task. The information gained with this process enters the model via the green arrows in Figure <ref>. To summarize, we employ three categories of variables in our shared autonomy framework: * z_0: task/operation specific state variables, representing the domain knowledge, * z_1: human's internal states, * z_2: pre-training state variables. These three categories can be considered as pillars of how humans learn to perform a task: we bring in our past knowledge and experiences (category 3), fine-tune those for a particular series of tasks towards optimality based on the task requirements (category 1), and personalize how we proceed with taking actions (category 2). In a shared autonomy context, this equivalence is helpful as it allows the system to be mutually understandable to both the human and the autonomous agents. Our proposed shared autonomy framework can also be considered as an instance of computational human-robot interaction (<cit.>). The most important problem that we address is how to design a hierarchical structure for shared autonomy so as to facilitate and further, to make seamless, this complex interaction of multiple hierarchical systems, as shown in Figure <ref>. In analyzing human behavior, it is important to note that, in general, to assume that the state definition s_t is the same between the human agent and the autonomous agent is not valid. The reason lies in the fact that we do not have access to internal perception and state definition of a human, i.e., s^H_t. 
Therefore, assuming that a human policy maps from s^A_t to a^H_t is conceptually inaccurate, in general. In this paper, we assume that a human has an internal state that is comprised of s^A_t and z_1,t; hence, s^H_t ≜ (s^A_t, z_1,t). The latter denotes the partially observable part of a human's state. This point becomes even more important when we have a hierarchy of tasks. The notion of including the human signal in the augmented state definition, as proposed in (<cit.>), makes sense in this light. If the human signal is defined as the low-level actions, then this implicitly enforces the Markovian assumption, i.e., the history is ignored. In contrast, it can be argued that conditioning the shared policy π_sh on a rich signal from the human, such as the goal space and if feasible the intended goal, is critical, as it effectively reflects the human history of actions. This argument is also supported by the results in (<cit.>) for an unstructured user input, where raw (i.e., unconditioned) low-level human inputs were used and poor performance was reported. Consequently, the performance of a collaboration scheme highly depends on a relatively successful encoding of human's internal state variable(s) z_1. It is worth noting that goal inference algorithms, in general, are based on a history of human inputs. The goal/intent inference requires a knowledge of goal space, which, in turn, requires domain knowledge. We encode the latter in z_0 without assuming direct knowledge of the intended goal, but only as a measure of goal space. From another point of view, using the human's internal signal as input to the autonomous policy provides a mechanism to synchronize human actions and the resultant autonomous actions over a finite horizon that ends with reaching a goal. Otherwise, the performance of collaboration will be poor, as was the case in the results reported in (<cit.>) when low-level human input was used in the autonomous policy shaping. § PROBLEM STATEMENT & MODELLING We now present the mathematical model of our shared autonomy architecture in compact mathematical form. A typical trajectory τ of sequential state-actions in the context of human-robot interaction takes the following form: τ = {s^A_t,a^H_t,a^A_t,...,s^A_T,a^H_T,a^A_T }, where s^A_t, a^A_t, and a^H_t denote the defined state, the autonomous agent action, and the human agent action, respectively, at any time-step t; T denotes the time horizon for the task at hand. The action can be extended to an option wherever needed. In this work, we do not impose a Markovian constraint on the human action, and thus, include a history of states in the human policy. Letting n_h represent the number of steps of human's state history, it can be shown that the probability distribution of the trajectory is given by: p(τ) = p(s^A_1) ∏_t=1^Tπ_H(a^H_t|s^A_t) π_A(a^A_t|a^H_t,s^A_t)p(s_t+1|s^A_t,a^A_t), where π_H and π_A are human and autonomous policies, respectively. The state variable s^A_t = {s^A_t, ..., s^A_t-n_h} comprises n_h steps of history of state trajectory. Note that t ≥ n_h. The derivation of (<ref>) is given in Appendix A. We take a look at each of the terms in (<ref>) in more details. §.§ Analysis of Human Agent & Policy π_H As already noted, we do not assume equivalence between state definitions of the human and autonomous agents. Moreover, we do not assume any direct knowledge of the human's policy or their internal variables, z_1. 
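As a concrete reading of the factorization above, the log-probability of a trajectory can be accumulated term by term, assuming callable densities for the two policies and the transition model are available. The sketch below only mirrors the structure of the equation; all names are ours.

```python
import math

def log_prob_trajectory(traj, p_s1, pi_H, pi_A, p_trans, n_h):
    """traj: list of (s, a_h, a_a) tuples; the first n_h entries seed the state history.

    Mirrors p(tau) = p(s_1) * prod_t pi_H(a_h | s_hist) pi_A(a_a | a_h, s_hist) p(s' | s, a_a).
    """
    logp = math.log(p_s1(traj[0][0]))
    for t in range(n_h, len(traj) - 1):
        s_hist = [traj[k][0] for k in range(t - n_h, t + 1)]   # n_h steps of state history
        s, a_h, a_a = traj[t]
        s_next = traj[t + 1][0]
        logp += math.log(pi_H(a_h, s_hist))        # human policy term
        logp += math.log(pi_A(a_a, a_h, s_hist))   # shaped autonomous policy term
        logp += math.log(p_trans(s_next, s, a_a))  # transition model term
    return logp
```

The pi_H factor is exactly the piece we do not assume to be known, which motivates encoding the human's internal variable z_1 from observable data.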
To address this knowledge gap and to analyze the π_H(a^H_t|s_t) term in (<ref>), we propose to explicitly encode the human's internal state variable z_1. This explicit encoding offers a deeper insight into the individual human agent; it helps to encode the differences between human agents and ultimately, enables a faster tuning of the shared autonomy framework to individual human operators. It also facilitates a more robust model against human noise levels. We provide specific examples of this point in <ref>-<ref>. Let n_s denote the dimension of autonomous state S⊂ℝ^n_h × n_s. We learn an encoder θ_H: {E, A, S}→Z_1, with human's latent state space Z_1 ⊂ d_z_1, d_z_1 < (n_h × n_s), from S conditioned on human's n_h steps of history of actions a^H ⊂A as well as history of errors of their actions e^H ⊂E with respect to those of a known surrogate optimal agent, which might be another human or a pre-trained model. Considering n_a discrete actions, E⊂ℕ^0 and A⊂ℕ^0. The error, in general, is defined as the angular difference between the denoted actions, as follows: ∠e^H = arccos(a^H ·a^*/a^Ha^*), where a^* and a^H are the vectors representing the action of the surrogate optimal agent and that of the human, respectively. Moreover, we learn a decoder ϕ_H: Z_1 ×S→{A, E}, with the following reward function of the optimization process in cVAE defined as: ℒ_i,H = E_z_1 ∼ q_ϕ_H(z_1|e^H_i,a^H_i,s_i)(log p_θ_H(e^H_i,a^H_i|s_i,z_1)) - D_KL(q_ϕ_H(z_1|e^H_i,a^H_i,s_i)∥ p(z_1)), where ϕ_H and θ_H are encoder and decoder networks, as shown in Figure <ref>. The two terms in (<ref>) are the reconstruction error and KL-divergence, respectively (<cit.>). §.§ Analysis of Autonomous Agent and Policy π_A Based on (<ref>) and the discussions regarding encoding of distribution of z_1, the policy for the autonomous agent tuned to the human agent can be written as: π_A = π_A(a^A_t|a^H_t,z_1,t,s_t). Following the logic of <ref>, we now have access to the distribution of z_1. Based on (<ref>), we set up our shared autonomy framework, as shown in Figure <ref> in a graphical probabilistic model. It is worth noting that the degree to which a human agent participates in sharing the operation of the system depends on: (a) the level of autonomy desired for the system, (b) domain and task-specific knowledge, and (c) the extent of human presence. This shared autonomy framework facilitates a sliding level of autonomy. If we have a semi-autonomous agent, a shared autonomy framework is needed to assist the human agent to reach a goal (<cit.>). The human agent's involvement, therefore, is in the training phase as well as testing/operational phases. Moreover, we utilize the gamified human-robot interaction as well as game theoretic approaches in designing the reward function and the interaction architecture. The objective of this shared autonomy setting is to provide a near optimal input to the robot with respect to a reward function comprised of two contributions: * R_1: Reward from robot planning, which includes task-related and obstacle avoidance rewards, * R_2: Closeness to human input depending on signal z_1. Hence, the general form of the reward function is as follows: R = c_1R_1 + c_2R_2 = c^TR, where c assembles the dynamic level of autonomy coefficients showing how much autonomy is required, how successful it has been, and in short, the level of autonomy. In other words, the choice of the two coefficients allows for a more efficient sliding level of autonomy. 
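Before turning to the multi-agent view, the encoder-decoder introduced above for the human's latent variable z_1 can be sketched as a conditional VAE. The PyTorch fragment below is a minimal, illustrative version: the layer sizes, the MSE reconstruction term, and all names are our assumptions rather than the authors' implementation (the paper works with one-hot encoded discrete actions and a 5-dimensional z_1).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class HumanCVAE(nn.Module):
    """Conditional VAE encoding z_1 from (state history, human actions, action errors)."""
    def __init__(self, dim_s_hist, dim_a, dim_e, dim_z1=5, hidden=64):
        super().__init__()
        in_dim = dim_s_hist + dim_a + dim_e
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, dim_z1)
        self.logvar = nn.Linear(hidden, dim_z1)
        # decoder reconstructs (a^H, e^H) conditioned on z_1 and the state history
        self.dec = nn.Sequential(nn.Linear(dim_z1 + dim_s_hist, hidden), nn.ReLU(),
                                 nn.Linear(hidden, dim_a + dim_e))

    def forward(self, s_hist, a_h, e_h):
        h = self.enc(torch.cat([s_hist, a_h, e_h], dim=-1))
        mu, logvar = self.mu(h), self.logvar(h)
        z1 = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        recon = self.dec(torch.cat([z1, s_hist], dim=-1))
        return recon, mu, logvar

def cvae_loss(recon, target, mu, logvar):
    """Reconstruction error plus KL divergence to the standard normal prior."""
    rec = F.mse_loss(recon, target, reduction="mean")
    kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```

At run time, the encoder mean would provide the z_1 estimate that enters the shared policy and the closeness-to-human reward R_2 just discussed.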
From a multi-agent perspective, we model the agents' interactions and resolve possible issues in two ways: (1) policy shaping, that considers a serial architecture of the agents, and (2) strategic assignment of the coefficients c. This is a novel perspective on the problem of multi-agent shared autonomy with human as an agent in the loop. In summary, Figure <ref> shows our point of view on how the closed-loop block diagram of Figure <ref> should be designed in a framework that is interface-able with the human as another agent for hierarchical robotic tasks. § CASE STUDY APPLICATION: TIMBER HARVESTING As mentioned in <ref>, the motivation for our research on shared autonomy stems from its potential applications to machines employed in the Canadian timber harvesting industry. These machines, such as the feller-buncher machine used in our case study (see Figure <ref>), are comprised of a mobile base and, a crane-like hydraulically-actuated manipulator arm with a specialized end-effector. In the case of feller-buncher, the latter is designed for cutting trees, picking them up and depositing them in a storage location. Currently, machines employed for timber harvesting rely heavily on direct operator intervention, sometimes at the level of controlling individual joints of the crane. In fact, the current state of autonomy in the industry is much lower than in other comparable industries, such as mining (<cit.>). There are several drivers for increasing autonomy of the machines employed in timber harvesting such as to improve the productivity of the operations which are significantly affected by human performance: for example, it has been reported that the productivity of a harvester is 25-40% dependent on the skill of the operator (<cit.>). In addition, the harsh machine and environmental conditions contribute to human fatigue, health issues and compromise operator safety, all of these factors exasperating the labour shortage for machine operators. Moreover, it can take years of working in the field to achieve a skill level necessary for operating the machine at a high level of productivity. This becomes apparent if one considers, for example, that the operator of a harvesting machine is required to perform, on average, 24 functions per tree and to make 12 decisions (<cit.>). We suggest that shared autonomy can provide the way forward to address both the issue of productivity and operator training. The human-in-the-loop approach also addresses other challenges of complex robotic tasks, in particular the limited knowledge of their details and ensuring a certain level of safety. We consider the operation of a feller-buncher at a particular fixed location in the forest (i.e., fixed base), as defined by the operation region in the ground plane, illustrated in Figure <ref>; the region is bounded by the minimum and maximum reach of the robot end-effector centered at the location of the mobile base. An actual photo of an operation region is shown in Figure <ref> for comparison. We use the term Capacity for Maneuverability (CfM_arm∈ [0, 1]), which is the remaining actuation capacity for a human intervention (<cit.>). A second capacity is defined for the end-effector since it can pick up several cut trees at a time: CfM_ee∈ [0, 1] quantifies the remaining capacity in the end-effector to carry objects. 
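To fix ideas, the end-effector capacity for maneuverability can be tracked with a few lines of bookkeeping; the relation CfM_ee = (p_max - payload)/p_max used below is the one given later for the MDP state definition, while the class and method names are ours.

```python
class EndEffector:
    """Tracks the remaining carrying capacity CfM_ee in [0, 1] of the felling head."""
    def __init__(self, p_max: int):
        self.p_max = p_max      # maximum number of stems the head can hold
        self.payload = 0

    @property
    def cfm_ee(self) -> float:
        return (self.p_max - self.payload) / self.p_max

    def cut_and_grab(self) -> None:
        if self.payload < self.p_max:
            self.payload += 1   # capacity shrinks as trees are accumulated

    def deposit_at_storage(self) -> None:
        self.payload = 0        # capacity is restored after dumping at the storage point
```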
With the view to describing the feller-buncher operation as an MDP, we divide each region into cells, which discretely encode the location of the machine end-effector p_EE, the objects in the region (i.e., trees), the goal location(s), obstacles, and the storage location. In (<cit.>), we proposed a human-inspired planning algorithm using a concept we called the Envelope of Manipulation E^M, which is a curve connecting key points E_i (see Figure <ref>), these assembled in a set ℰ. Taking the perspective of a human operator based on our observations in the field, we identified two high-level options in the options space 𝒪: 1) O_1∈𝒪 that encapsulated the motion along the envelope between two cells, and 2) O_2∈𝒪 which encodes the operations inside each cell. The operator may group several objects into a cluster in order to cut and grab several trees together, before moving on. The envelope E^M can take any shape; however, based on our field observations, it is well approximated by a circular arc. It is thus possible to encode a sequence of operations as a sequence of the two aforementioned options. The problem of robot planning, therefore, turns into optimizing this sequence of options. The overall task of robot planning includes a hierarchy of subtasks, namely, envelope options (i.e., O_1 and O_2), and moving the arm or crane along the specified trajectories (MA). We will discuss these further in the subsequent sections. § SHARED AUTONOMY DESIGN FOR FELLER-BUNCHER ROBOT §.§ Hierarchical Robot Planning Following <ref>, we break down the tasks into a hierarchical planning and execution scheme with the levels listed in Table <ref>, where π_ℰ is an instance of π_MHL, and π_MA is an instance of π_MLL. As noted in <ref>, in the current scheme of operations, a human operator uses the arm to manipulate (e.g., cut, grab, and deposit) the objects in the operational region of a particular base location. As listed in Table <ref>, we break down the relevant tasks into a hierarchical planning and execution scheme, with three levels defined as follows: §.§.§ π_RP: Overarching policy to plan a robot motion in a task-oriented manner This is the global or master policy which includes the policies of the lower levels and collects the corresponding rewards; this policy is executed once per region. §.§.§ π_ℰ: Policy for Envelope of Manipulation (E^M) The definition of this level in our hierarchy was motivated by our observations of expert operators: they first implicitly carry out a clustering of trees by grouping subsets of objects into clusters around the machine and subsequently, interact with the objects in clusters. Each cluster can include multiple objects, and span across one or more cells. We designate each cell with a key point E_i and we have a set of key points ℰ, defined as ℰ={E_0,E_1,...,E_n}, where E_0 and E_n correspond to the initial end-effector location at the start of operation and the storage point, respectively. The planning problem is, in fact, a sequential decision making problem to optimally sequence options O_1 and O_2 and how to group the objects next to a key point as cluster(s). §.§.§ π_MA: Policy to Move Arm (MA) This is the lowest-level policy in our hierarchy, and it directly interacts with the environment. This policy takes the destination key point as its goal, plans a smooth trajectory for the end-effector to reach it, and executes the motion of the arm. Standard robotic tools can be employed to execute these subtasks. 
Although we do not design this policy directly in the present implementation, we include the reward terms related to it in robot planning. To find the optimal policy for Robot Planning, π^*_RP, we build on our previous proof-of-concept formulation in (<cit.>), combined with the background provided in <ref>. For that purpose, we construct the more general, compared to (<cit.>), Markov Decision Process (MDP) framework, MDP^RP, as follows: The Environment or The World: We have tailored a commonly employed grid world to our specific robotic application, for a generalizable MDP backbone. As schematically shown in Figure <ref>, this environment is comprised of n_c × n_r cells, circumferentially arranged around the robot base; n_c and n_r are the circular and radial dimensions of the grid, respectively, these chosen to accommodate the desired resolution. An important element of environment definition which directly affects π_RP is how to handle the objects surrounding the robot. These objects, depending on the state of the system, can be either obstacles or subgoals, and this categorization changes dynamically. As illustrated in Figure <ref>, we first construct the relevant Spaces as follows: * Objects space 𝒮_oj: includes all objects. * Subgoals space 𝒮_sg: a subset of 𝒮_oj and it includes all objects accessible to the robot (and hence, not blocked by other objects from robot's reach). The augmented Subgoal space, 𝒮̂_sg, is constructed by adding the storage point, as conditioned on the capacity for maneuverability CfM_ee, with the following logic: * if CfM_ee=0, 𝒮̂_sg = {E_n} * if 0 < CfM_ee < 1, 𝒮̂_sg = {𝒮_sg, E_n} * if CfM_ee=1, 𝒮̂_sg = {𝒮_sg}. * Obstacles space 𝒮_ot: defined by negating the augmented Subgoal space from the Objects space. Therefore, the three object spaces are related through: 𝒮_oj = 𝒮_sg∪𝒮_ot Constraints In general, the workspace of the robot is limited by (a) boundaries of the grid, which are in turn defined by the minimum and maximum values of the reachability of the robot, CfM_arm, and (b) obstacles located next to the end-effector as well as those obstructing its arm movement, as depicted in Figure <ref>. This environment, therefore, includes the following constraints built-in and dynamically updated: * Robot workspace and related constraints, such as the capacity for maneuverability of the arm and the end-effector, i.e., CfM_arm and CfM_ee, respectively. This can be extended to stability related constraints, as well. * Path planning related constraints, such as obstacles. With the definition of the world, the categorization of object spaces, and the constraints, we are now ready to define the main MDP elements which in turn encode the path planning problem with obstacle avoidance, as discussed earlier. State (or Observation) Space. As shown in Figure <ref>, the observation space is a discrete space with three types of observations: * s^RP_1: discrete 2D position of the robot end-effector, [angular position, radial position], in the range 0,..,n_c-1 and 0,...,n_r-1, respectively, * s^RP_2: payload indicator 0, ..., p_max; it is related to CfM_ee through (CfM_ee=(p_max-s_2)/p_max), where p_max is the maximum number of trees/objects that the end-effector is able to carry. * s^RP_3: contains the circular distance from the current cluster/key point to all subgoals with respect to the robot end-effector in the CCW direction. If no subgoal is present at a location, -1 is returned. 
Therefore, s^RP_3 effectively augments the state of the robot (comprised of s^RP_1 and s^RP_2) with the information related to the subgoals space. It is, in fact, the variable z_0 introduced earlier since it encodes the goal space information of the task. Therefore, we denote the state in this level with s^RP defined as follows: s^RP≜ (s^RP_1, s^RP_2, s^RP_3). Together with the spaces defined above, a state s^RP encodes three features depending on the scenario: * obstacle, if s^RP∈𝒮_ot, * sub-goal, if s^RP∈𝒮̂_sg. If the agent is done with the operation overall, reaching the storage point is the final goal and the episode is done. * normal, otherwise. Action Space. The action space is discrete, consisting of four actions: left, right, front, and back, encoded by 0, 1, 2, and 3, respectively. Note that the directions of these actions are defined with respect to each cell, and not in an absolute sense. Rewards. Rewards are defined as follows: * R^RP_1 = -2: All transitions except the transition to the ”sub-goal” or ”goal” state, * R^RP_2 = 20n_cut or 20n_store: Transition to one of the ”sub-goal” states for the cases of cutting or storing, * R^RP_3 = 400: Transition to the ”goal” state. This ends an episode and resets the environment, * R^RP_4 = -20: Collision with an obstacle, * R^RP_5 = -20: Out of boundaries action, * R^RP_6 = -5s^RP_2: Cost of carrying an object, * R^RP_7 = -400: Getting trapped. This also ends an episode and resets the environment. Note that the combined value of the above reward elements forms R_1 in (<ref>). Policy. With the architecture shown in Figure <ref>, we denote the policy for this level with π_RP = π_RP(o^RP,A|s^RP). It is worth noting that to implement this world efficiently, we have created a custom OpenAI Gym (<cit.>) environment, and we call it “adaptive_grid_v0". §.§ Shared Autonomy Setting As shown in Figures <ref> and <ref>, we define the autonomous agent policy as the highest level policy called Shared Policy, π_sh, and model it as an instance of MDP with the following elements: The Environment or The World: We designed a higher level environment for the tasks of shared autonomy for a generalizable MDP backbone, the attributes of which are defined shortly. In particular, we have created a second custom OpenAI Gym environment, called “assist_AI_v0", which directly communicates with the lower level environment, adaptive_grid_v0, and acts as a master agent with a master policy incorporating the assistance and/or autonomy protocols. State Space. This is defined based on state space of the MDP^RP, but expands it to include z_1 and a^H. Note that a^H is recorded as -1 if no action is taken by the human agent to accommodate such instances. Action Space. This is defined similar to that of MDP^RP, and is comprised of four actions: left, right, up, and down, encoded by 0, 1, 2, and 3, respectively. Policy. The policy considering the model presented in Figure <ref> and discussions in <ref>. Rewards. Rewards, as discussed earlier, take the form of c^⊤R in (<ref>). From another perspective, since the autonomous agent is human-inspired by design, with a shared autonomy mindset, the autonomous actions are comprehensible to the human agent and vice versa. Hence, this shared mental model (<cit.>) is not only necessary for proper collaboration but more importantly provides a road map to the design of any modern framework involving humans and (semi-)autonomous agents. 
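The two custom environments are not reproduced here, but a skeleton of how such a shared-autonomy Gym environment could be organized is sketched below. The observation layout, sliding-autonomy coefficients, and discrete actions follow the description above, while the implementation details (placeholder dynamics and rewards, class and method names) are our own simplification rather than the authors' adaptive_grid_v0 / assist_AI_v0 code.

```python
import gym
import numpy as np
from gym import spaces

class AssistEnvSketch(gym.Env):
    """Illustrative skeleton of the shared-autonomy MDP described above."""

    def __init__(self, n_c=12, n_r=3, p_max=4, dim_z1=5, c=(10.0, 10.0), horizon=50):
        super().__init__()
        self.n_c, self.n_r, self.p_max = n_c, n_r, p_max
        self.c = np.asarray(c, dtype=np.float32)     # sliding-autonomy coefficients
        self.horizon = horizon
        self.action_space = spaces.Discrete(4)       # 0: left, 1: right, 2: front, 3: back
        # observation: end-effector cell (2), payload (1), circular distances to
        # subgoals (n_c, -1 where none), last human action (1, -1 if absent), z_1 (dim_z1)
        self.obs_dim = 2 + 1 + n_c + 1 + dim_z1
        self.observation_space = spaces.Box(-np.inf, np.inf, (self.obs_dim,), np.float32)

    def reset(self):
        self.t = 0
        self.a_human = -1
        self.obs = np.zeros(self.obs_dim, dtype=np.float32)
        return self.obs

    def set_human(self, a_human, z1):
        """Called by the outer interaction loop with the latest human data."""
        self.a_human = a_human
        self.obs[3 + self.n_c] = a_human
        self.obs[-len(z1):] = z1

    def step(self, a_auto):
        self.t += 1
        r_task = -2.0                                      # placeholder for the R_1 terms
        r_human = 1.0 if a_auto == self.a_human else 0.0   # R_2: closeness to human input
        reward = float(self.c @ np.array([r_task, r_human], dtype=np.float32))
        done = self.t >= self.horizon
        return self.obs, reward, done, {}
```

Such an environment could then be trained, for instance, with Stable-Baselines3 PPO using the hyperparameters reported in the results section (depending on the installed versions, the newer gymnasium reset/step signatures may be required):

```python
from stable_baselines3 import PPO

env = AssistEnvSketch()
model = PPO("MlpPolicy", env, batch_size=32, learning_rate=1e-3, gamma=0.99,
            policy_kwargs=dict(net_arch=[64, 64]), verbose=1)
model.learn(total_timesteps=500_000)
```

Note that when no human input is provided (a_human = -1), the closeness term never pays out, which mirrors the treatment of a non-cooperative human described later.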
Our approach has the capability to correct and help train a novice human operator in a safe and efficient manner. On the other hand, with the designed hierarchical learning and planning algorithms, full autonomy is also achievable. § NUMERICAL RESULTS & EXPERIMENTS We present numerical results in four parts, progressively adding layers of complexity, in a similar order to the material discussed in <ref>. The four sets of results also illustrate how we build up and test our shared autonomy framework in the following stages: * Stage I: Pre-Training—this stage produces an autonomous agent trained by using deep RL (RL) techniques. This model will be considered as the baseline to which the behavior of a human operator will be compared. The results for this stage are presented in <ref>. Since our shared autonomy framework is capable of full autonomy by design, i.e., highest level of autonomy, the overarching goal in this stage is to showcase such capability in training and testing of an autonomous agent given the inherent stochasticity of the environment as well as the challenges of the application. * Stage II: Manual—we let the human take full control authority over the robot. The advantage of a shared autonomy framework is in effectively incorporating human operator in the loop to learn from and, in general, to switch control if needs be. The results of this stage, presented in <ref> are to set this important capability in place and provide the algorithm with necessary data for a human-tuned framework using the formulation presented in <ref>. * Stage III: Shared-Training—we train the shared autonomy policy according to our proposed model, also by using deep RL techniques. Our approach in this stage is looking into certain challenging scenarios in training a shared autonomy agent with expert and noisy humans and to see the effects of different components of our architecture. The results for this stage are presented in <ref>. * Stage IV: Shared-Testing—we test the trained model with expert human for a variety of cases. Finally, in this stage, we interface the trained shared autonomy agent with humans with different levels of noise and analyze its performance. The results for this stage are presented in <ref>. §.§ Results for Stage I, Pre-Training Here, we showcase the training of a fully autonomous robot. In the context of our shared autonomy framework, this will represent a pre-trained agent, to be used as the baseline for computing the human agent's error. With the adaptive_radial_grid_v0 environment described earlier, we use Stable-Baselines3 library (<cit.>) to train a deep RL policy. During the training process, we sample the objects in the environment from a Gaussian distribution in order to account for different possible variations of the object spaces. More specifically, we draw samples from a truncated Gaussian distribution in interval [0 4] with the mean and standard deviation of 2 and 1, respectively. Accordingly, our formulation is uncertainty-aware. We employ the Proximal Policy Optimization (PPO) algorithm (<cit.>) notably with a batch size of 32, learning rate of 1× 10^-3, and γ of 0.99 for a multilayer-perceptron (MLP) policy with 2 layers of 64 nodes. Figure <ref> shows the performance of the training process. We also provide two examples of output sequences for two objects configurations: Figure <ref> for a relatively sparse scenario, and Figure <ref> for a relatively crowded scenario. 
The figures include information on the initial state and the output action sequences of the policy. We observe that these are logical and intuitive. §.§ Results for Stage II, Manual Here, we present the results supporting the encoding of the human agent's internal/latent variable z_1. We build up a shared autonomy platform that, at its core, is comprised of our hierarchical MDPs implemented using two Gym environments. We have used Cogment (<cit.>) to enable real-time human-in-the-loop interaction in our platform. Cogment platform is an open source framework built on a micro-service architecture for running different kinds of RL, multi-agent RL and human-in-the-loop learning applications. During a test, human user is presented with a random initialization of the environment, an example of which is shown Figure <ref>. The four basic discrete inputs, introduced in <ref>, are mapped to four direction buttons on a regular keyboard. Figure <ref> shows our setup for a test. We record the actual human data using similar object randomization and environment configurations as in Stage I. It is also important to note that an explicit goal inference is not feasible in our set-up except for a myopic one (<cit.>), which assumes that the intended goal is the closest of goal space points. This, in essence, is how we defined variable z_0 in this work that encodes the angular distance to the nearby goals. Following the formulation presented in <ref>, we train our auto-encoder for a 5D latent variable z_1 using 2 history steps (n_h=2) using 40 recorded episodes or trials of a human user interacting with our setup. It is worth noting that we randomly divide the dataset with a ratio of 0.7. Notably, the learning rate and batch size are 5×10^-4 and 5, respectively. To compute the input error of (<ref>), we use the pre-trained model from Stage I as the surrogate optimal model. From a practical point of view, we used one-hot transformation for our discrete variables, such as the state s^RP, and introduced white noise for better training. Figure <ref> shows the training and validation process of our cVAE model. §.§ Results for Stage III, Shared-Training Building on Stages I and II, Stage III is the next step in setting up our platform. In this work, we used deep RL to train the shared policies using PPO algorithm (<cit.>), similar to the pre-trained model. We consider three scenarios of training with different human skill levels and different shared autonomy settings. Each of these scenarios might include one or more hypotheses, i.e., a research question(s), followed by presentation of the results and assessing them. It is worth noting that in evaluating the results of the training process, we use the following two measures: * Reward per time-step: the rewards with respect to the training (simulator) time-step. In all training cases, we train the policy for a total number of N_tr=5×10^5 time-steps, which is the horizontal axis in all rewards-related plots. Note that we use subscript tr for training related variables. * Sample Processing Rate (SPR): the number of processed samples forward and backward per second. In shared autonomy, the speed of the platform is of crucial importance, since it needs to train an agent for a shared autonomy while the task progresses with human in the loop. SPR is defined as follows: SPR(k) = ∑_j=0^k n^j_tr / (t-t_0), where t and t_0 are the current and initial wall-time in seconds, n^k_tr is the number of training samples processed at time-step k. 
This also includes the time required by the stochastic gradient decent. We use Adam as our optimizer (<cit.>). Batch size is 64 with learning rate of 1×10^-4. Moreover, we have two expertise levels for the human agent in our trials: * Expert Human: A human agent who is familiar with our setup. * Noisy Human: We deliberately perturb the human's action by adding noise. In all cases, if no human action is available, the human agent is considered to be non-collaborative. No action is an action itself. We use the above-mentioned metrics to investigate: (a) whether or not a shared autonomy agent can be trained under different human expertise levels and the degree of collaborativeness given the proposed MDP structure, (b) to what extent the human-tuned variable z_1 affects the training process, (c) how rewards terms and their coefficients in (<ref>) affect the training process, (d) how a trained model performs if interfaced with humans of different expertise level. Scenario 1: Training with Expert Human We begin the presentation of shared autonomy results by demonstrating the training process for expert human, defined above, which includes the trials data collected from a human agent familiar with our setup. Arguably, no human-in-the-loop test can cover the complete state space, and therefore, we will treat the human agent at the unseen states also as a non-cooperative agent who takes no action. In this scenario, we choose equal weights for R_1 and R_2 and set c = [10, 10]. A shared autonomy policy can be trained using our formulation for an expert human, as defined before, who is alternating between cooperative and non-cooperative, with the inherent stochasticity of the environment. Figure <ref> shows the training process, confirming the success of our algorithm in training a shared autonomy agent detailed in Hypothesis <ref>. Further tests with the trained model will be provided in <ref>. Under the conditions outlined in Hypothesis <ref>, we hypothesize that the human-tuned variable z_1 results in a more efficient training process. Figure <ref> shows how sample processing rate (SPR), defined in (<ref>), changes with respect to training time-step for the expert human with and without z_1. In the case of without z_1, we still pass the signal through the z_1 model but zero it out before feeding to the algorithm in order to as much as possible isolate the mere effect of z_1. In the early stages of the training time-steps, i.e., ts≤ 1.6 × 10^5, which is the highly oscillatory stage, we do not see noticeable difference between the performances of the two cases. However, as the training progresses, the effect of z_1 is evident, which results in more efficient performance. This result partially confirms Hypothesis <ref>. Intuitively, z_1 is effective when the policy is getting closer to the convergence. Moreover, it can be concluded that z_1 contributes positively in our shared autonomy framework for an expert human. Scenario 2: Training with Noisy Human In this scenario, we deliberately perturb the human action by adding noise to their action, that results in a noisy human, defined above. The process is outlined in Appendix B. This is a challenging scenario for our setup for three reasons: (a) at a conceptual level, a noisy human in shared autonomy setting is, in general, a non-collaborative agent, which makes the task of the autonomous agent that is accommodating them challenging. 
As noted in <ref>, interfacing with a noisy human in the loop is precisely where policy shaping outshines policy blending; (b) as we are assessing the limits of our setting, we are still using equally weighted rewards with c = [10, 10] (we will drop this constraint later); and (c) we keep using the z_1 variable that is fine-tuned to the expert human. The latter results in a mismatched z_1, which makes the process even more challenging. For a noisy human with mismatched z_1, we expect this variable to negatively affect the training process. Figure <ref> depicts the sample processing rate (SPR) for the training process with a noisy human for the cases with and without z_1. Similar to the previous assessment, this figure partially verifies Hypothesis <ref>. Only towards the later stages of training do we observe the effect of including z_1, which negatively affects the performance. Intuitively, this is expected, since z_1 is fine-tuned for a different human. The results of the assessment of Hypotheses <ref> and <ref> are significant in the sense that they confirm the validity of our architecture design and the considered assumptions. From the behavior of the training processes, one realizes the inherently complex dynamics of a shared autonomy setting with a human in the loop, and how finding and incorporating the human's latent variable improves the training process. Another aspect of our framework is the ability to adjust the coefficients through c. To showcase this, we present results with a reduced c_2, i.e., a reduced contribution of the human action in the reward function, and use c = [10, 5] to counteract the high noise in the human actions. Given a noisy human, reducing the associated coefficient in the objective function, i.e., c_2, results in an improved training process. Figure <ref> compares the training performance for the noisy human with and without the reduced c_2. This figure confirms Hypothesis <ref> by showing a much less oscillatory training process and earlier convergence. This result also confirms the practical applicability of the human-related coefficient c_2 as a design variable to control the performance of shared autonomy. Scenario 3: Training with Override Option A challenging task for an autonomous agent in shared autonomy arises when the human agent is given an override option. We tested the scenario in which the human action overrides the autonomous action with a probability of 80%. This scenario is important from a practical point of view for safety-critical applications. The training process is shown in Figure <ref>, which indicates a much more challenging policy update and a sub-optimal policy. However, we maintain the equal weights on R_1 and R_2. This is another manifestation of the bottleneck in shared autonomy discussed in Scenario 2; that is, we are imposing our designed definition of a reward function next to what a human might consider (and hence, override). §.§ Results for Stage IV, Shared-Testing In this stage, we test the model trained with an expert human, discussed in Scenario 1. We, however, test the model in a challenging scenario, which occurs when shared autonomy interacts with a noisy human agent that might or might not cooperate. In such scenarios, we expect a shared autonomy framework implemented with a policy shaping paradigm to stand out in its performance. For such cases, the burden of successfully carrying out the tasks is on the autonomous agent while trying as much as possible to follow the human's input.
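To make this interaction logic concrete, the following minimal Python sketch shows one way the executed action could be resolved in a setting like ours, covering the non-cooperative case (human action -1) and the override option of Scenario 3. The function and parameter names are our own illustration, not the authors' implementation.

import random

NO_ACTION = -1  # an action of -1 denotes a non-cooperative human who takes no action

def resolve_action(a_autonomous, a_human, p_override=0.8):
    """Return the action actually executed by the machine.

    If the human is non-cooperative, the autonomous action is executed.
    Otherwise, in the override setting of Scenario 3, the human action
    replaces the autonomous one with probability p_override (0.8 in our test).
    """
    if a_human == NO_ACTION:
        return a_autonomous
    if random.random() < p_override:
        return a_human
    return a_autonomous

In the baseline (non-override) scenarios described above, the human action instead enters only through the shaped policy's input and through the reward term weighted by c_2, while the autonomous agent's action is the one executed.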
We use the model trained using raw human data, i.e., the expert human, and present results for the environment initialized as shown in Figure <ref>. We consider the following three cases. For each case, there is a range of possible outcomes due to the stochasticity of the human behavior; we present an illustrative example for each case. * Case 1: Random human, i.e., a very noisy or novice human. The results for this case are given in Table <ref>. In this table, the first row is the sequence of actions of the autonomous agent (AA sequence). Comparatively, the second row shows the sequence of actions of the human agent (HA sequence). The reader is reminded that an action of "-1" denotes a non-cooperative human agent. Moreover, as denoted in the third row, the human agent interacted 18 times out of a total of 26 actions, or 69.2% of the time throughout the episode. Despite having a very noisy human, the shared autonomy managed to follow the human 8 times. This result shows that the autonomous agent ignored the human most of the time and successfully carried out the operation. Table <ref> shows the extended results for 10 tests. * Case 2: Medium-level noisy human. The results for this case are given in Table <ref>, which shows how differently the autonomous agent engaged with the human compared to the previous case. This signifies the capability of the framework to discern humans with different skill levels. Table <ref> shows the extended results for 10 tests. * Case 3: Least noisy human, i.e., close to an expert human. The results for this case are given in Table <ref>, which once more shows how the autonomous agent engaged with the human. It is observed that the autonomous agent managed to follow this human 20 times out of 21. Table <ref> shows the extended results for 10 tests. § CONCLUSIONS AND FUTURE WORK Operating an articulated machine is similar to driving a car in terms of the complexity and hierarchy of the tasks involved, from strategic route planning to low-level controls, and it is highly intertwined with the specific requirements of the application domain. Therefore, as we argued in this paper, designing for autonomous operation of such machines requires a careful understanding of the nature of the tasks and their environment. In this work, we proposed a shared autonomy framework to operate articulated robots. We first introduced a hierarchical task-oriented planning formulation for context-aware robot operation. Building on this foundation as well as theory of mind and game theory, we proposed a novel shared autonomy framework to facilitate efficient interaction between the human and the autonomy, the two participating agents in this system. We modelled the decision-making process using hierarchical MDPs and Options in an algorithm we called policy shaping. In this algorithm, the autonomous system policy is shaped by incorporating design variables contextual to the task, the human's internal state, and pre-training, as well as the human's input. To encode the human's internal state beyond the designed state variables, we used the pre-trained model as the surrogate optimal model, serving as a frame of reference against which to compare the human's input. We employed the associated error as well as the history of states and actions in a conditional Variational Autoencoder (cVAE) architecture to find the human's latent embedding through the lens of the structured task at hand.
To showcase the success of our framework, we tailored it to the operation of a feller-buncher articulated machine in timber harvesting, a series of physically and mentally arduous operations in harsh environmental conditions. Building on our earlier work (<cit.>) and on intricate know-how of the tasks, we proposed a novel, human-inspired path planning algorithm using the Envelope of Manipulation ℰ^M and the Envelope actions to encode the sequence of decisions/actions in the operations. We have used this case study as our test-bed to train and test different policies. In training the policies, we used deep RL techniques. Moreover, by using a wide range of available tools, libraries, and packages, we set up a human-in-the-loop test that enabled us to gather actual human trials data. In presenting the results, we considered a number of scenarios and cases of importance to a shared autonomy framework. First, we trained a fully autonomous policy capable of carrying out the operations in our setup. We used this model as the surrogate optimal model. By gathering actual human trials data, we were able to train a cVAE network to access a human's internal embeddings. Then, we envisioned and implemented several training scenarios involving a range of human expertise. We assessed the success of our novel platform by forming certain hypotheses regarding the effect of our designed structure and variables. In testing the trained shared autonomy policy, we once more looked at the performance of the model in interacting with human agents of different skill levels and degrees of cooperativeness. The extensive test results demonstrate the success of our platform in a particularly challenging case of interacting with a noisy, non-cooperative human. The future directions are numerous, given the potential of this novel framework. We, however, propose that the future directions should be more in line with autonomous operation/driving scenarios, since this platform offers an alternative point of view into designing a hierarchical planning framework with full autonomy in mind. Moreover, refined training algorithms tuned for shared autonomy and human-in-the-loop scenarios, as well as more structured approaches to encode the human's embeddings, can be considered in future work. § APPENDIX A Here, we show the derivation of (<ref>) for 3 time-steps. For the trajectory τ, we have: τ = {s^A_1,a^A_1,a^H_1,...,s^A_3,a^A_3,a^H_3 }. The probability of the trajectory is hence given by: p_τ = p(s^A_1,a^A_1,a^H_1,...,s^A_3,a^A_3,a^H_3) = p(a^A_1,a^H_1,...,s^A_3,a^A_3,a^H_3|s^A_1)p(s^A_1). Next, we write: p_τ = p(s^A_3|a^A_2, s^A_2)p(a^A_2,a^H_2,s^A_2,a^A_1,a^H_1|s^A_1)p(s^A_1) = p(s^A_3|a^A_2, s^A_2)p(a^A_2,a^H_2|s^A_2,a^A_1,a^H_1,s^A_1) p(s^A_2,a^A_1,a^H_1|s^A_1)p(s^A_1). Next, we have: p_τ = p(s^A_3|a^A_2, s^A_2)p(s^A_2|a^A_1,s^A_1)p(a^A_2|a^H_2,s^A_2) p(a^H_2|s^A_2,s^A_1)p(a^A_1|a^H_1,s^A_1)p(a^H_1|s^A_1)p(s^A_1), which simplifies to: p(τ) = p(s^A_1) ∏_t=1^3π_H(a^H_t|s^A_t) π_A(a^A_t|a^H_t,s^A_t)p(s^A_t+1|s^A_t,a^A_t). § APPENDIX B Here we provide the pseudocode for how we perturb the human's action in the noisy-human cases: § ACKNOWLEDGEMENT We would like to thank Professor Dylan P. Losey for his early contributions to this work. This work was supported by the Natural Sciences and Engineering Research Council (NSERC) Canadian Robotics Network (NCRN). The authors also acknowledge the valuable contributions of AI-Redefined and William Duguay to the development of the shared autonomy setup.
http://arxiv.org/abs/2307.00912v1
20230703101552
Hamilton transversals in tournaments
[ "Debsoumya Chakraborti", "Jaehoon Kim", "Hyunwoo Lee", "Jaehyeon Seo" ]
math.CO
[ "math.CO" ]
Hamilton transversals in tournaments Debsoumya Chakraborti Discrete Mathematics Group (DIMAG), Institute for Basic Science (IBS), South Korea. E-mail: debsoumya@ibs.re.kr. Supported by the Institute for Basic Science (IBS-R029-C1). Jaehoon Kim Department of Mathematical Sciences, KAIST, South Korea. Email: jaehoon.kim@kaist.ac.kr, hyunwoo.lee@kaist.ac.kr. Supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) No. RS-2023-00210430. Hyunwoo Lee Extremal Combinatorics and Probability Group (ECOPRO), Institute for Basic Science (IBS), South Korea. Partially supported by the Institute for Basic Science (IBS-R029-C4). Jaehyeon Seo Department of Mathematics, Yonsei University, South Korea. E-mail: jaehyeonseo@yonsei.ac.kr. August 1, 2023 It is well-known that every tournament contains a Hamilton path, and every strongly connected tournament contains a Hamilton cycle.
This paper establishes transversal generalizations of these classical results. For a collection 𝐓={T_1,…,T_m} of not-necessarily distinct tournaments on a common vertex set V, an m-edge directed graph 𝒟 with vertices in V is called a 𝐓-transversal if there exists a bijection ϕ E(𝒟)→ [m] such that e∈ E(T_ϕ(e)) for all e∈ E(𝒟). We prove that for sufficiently large m with m=|V|-1, there exists a 𝐓-transversal Hamilton path. Moreover, if m=|V| and at least m-1 of the tournaments T_1,…,T_m are assumed to be strongly connected, then there is a 𝐓-transversal Hamilton cycle. In our proof, we utilize a novel way of partitioning tournaments which we dub 𝐇-partition. § INTRODUCTION Given a collection ℱ={F_1,…,F_m} of sets, a set X of size m is called an ℱ-transversal if there is a labelling x_1,…,x_m of the elements in X such that x_i∈ F_i for each i∈ [m]. Transversals over various mathematical objects have been studied in the literature throughout the last few decades. To name a few, such variants are extensively studied for Carathéodory's theorem <cit.>, Helly's theorem <cit.>, Erdős-Ko-Rado theorem <cit.>, Rota's basis conjecture <cit.>, etc. The same notion for graphs, i.e., transversals over a collection of graphs, is implicitly used in much of the literature and explicitly defined in <cit.>. The same definition can be extended for related objects such as hypergraphs and directed graphs as follows. For a given collection ℱ=(_1,…, _m) of graphs/hypergraphs/directed graphs with the same vertex set V, an m-edge graph/hypergraph/directed graph on the vertex set V is an ℱ-transversal if there exists a bijection ϕ:E()→ [m] such that e∈ E(_ϕ(e)) for all e∈ E(). By interpreting each _i as the set of edges colored with the color i, the function ϕ is often called a coloring. We say the coloring ϕ is rainbow if it is injective, and we say is a rainbow if there is a rainbow coloring on it. Many classical results in extremal graph theory have been extended to such transversal settings, exhibiting interesting phenomena. The classical Mantel's theorem states that any n-vertex graph with more than 1/4n^2 edges must contain a triangle. Aharoni, DeVos, de la Maza, Montejano, and Šámal <cit.> considered a transversal version of this showing that if a graph collection 𝒢=(G_1,G_2,G_3) on n vertices satisfies min_i∈ [3]{e(G_i)} > (26-2√(7)/81)n^2, then it has a 𝒢-transversal isomorphic to a triangle. Surprisingly, this condition with the irrational multiplicative constant is best possible. Furthermore, this captures an interesting phenomenon that 26-2√(7)/81 is larger than 1/4, which we obtain from Mantel's theorem. It is an interesting open problem to obtain a similar tight condition for the existence of a transversal of K_r with r>3. A similar extremal problem, where instead of putting a condition on min_i {e(G_i)}, to find the tight condition on ∑_i e(G_i) for the existence of a 𝒢-transversal isomorphic to a given graph is studied in <cit.>. Addressing a question of <cit.>, Cheng, Wang, and Zhao <cit.> obtained an asymptotic version of the transversal generalization of the classical Dirac's theorem, and a complete resolution was independently obtained by Joos and the second author <cit.>. They proved that if a graph collection 𝒢=(G_1,…,G_n) on n vertices satisfies min_i∈ [n]{e(G_i)}≥ n/2, then it has a 𝒢-transversal isomorphic to a Hamilton cycle. Soon a number of results followed, finding similar tight conditions to find 𝒢-transversals isomorphic to a few other graphs; see <cit.>. 
Finally, the first two authors, Im, and Liu <cit.> generalized these results by establishing the bandwidth theorem for graph transversals. Transversal generalizations were recently considered for hypergraphs and directed graphs; see <cit.>. The above lines of research have a central theme which can be pinned down by the following meta-question. For a given graph/hypergraph/directed graph with m edges, which properties 𝒫_n will ensure the following? Every collection ℱ of m graphs/hypergraphs/directed graphs on the same vertex set of size n satisfying the property 𝒫_n contains an ℱ-transversal copy of . As all objects in the collection could be identical, a natural necessary criterion for such a property 𝒫_n is that it has to ensure that every n-vertex graph/hypergraph/directed graph with property 𝒫_n contains a copy of . However, it is not necessarily sufficient, as shown by the result of Aharoni, DeVos, de la Maza, Montejano, and Šámal <cit.>. Thus, it is a natural problem to investigate when these properties directly carry over to the transversal generalizations from the original results. We answer this in the positive for the transversal generalizations of the following two folklore results for sufficiently large tournaments. We call a tournament T strongly connected if for every pair of vertices x,y in T, there is a directed path from x to y. Such a path or a cycle is Hamilton if it contains all vertices of the given digraph. In this paper, whenever we mention paths and cycles, we always refer to directed paths and directed cycles. * Every tournament contains a Hamilton path. * Every strongly connected tournament contains a Hamilton cycle. In what follows, we always assume that ={T_1,…,T_m} is a collection of tournaments on the common vertex set V(). Our first main result establishes a transversal version of <ref>. For every sufficiently large n, every collection of n-1 tournaments with |V()| = n contains a transversal Hamilton path. We remark that the above result is not true when n=3, by considering two directed triangles with opposite orientation, i.e., the red and blue tournaments in <Ref>. Our next result establishes a transversal version of <ref>. In fact, we prove a slightly stronger statement. For every sufficiently large n, every collection of n tournaments with |V()| = n satisfies the following. If all tournaments in , possibly except one, are strongly connected, then contains a transversal Hamilton cycle. The above result is not true for n=3, by considering three directed triangles where two of them have the same orientation and the third one has the opposite orientation; see <Ref>. The number `one' in <Ref> cannot be replaced by `two' in the above result, as shown in the following proposition. For every n≥ 3, there exists a collection of n tournaments with |V()| = n, and all but two tournaments in are strongly connected, such that does not contain a transversal Hamilton cycle. Consider the transitive tournament T on [n] containing the arcs ij for every 1≤ i< j≤ n, and the strongly connected tournament T' on [n] containing the arcs ij for every 2≤ i+1< j≤ n and the (backward) arcs ji for every 2≤ i+1=j≤ n. Consider the collection =(T_1,…,T_n), where T_1=T_2=T and T_3=…=T_n=T'. We claim that does not contain a rainbow Hamilton cycle. Suppose for contradiction that there is such a cycle. Then, such a cycle must contain a rainbow path P starting from the vertex n and ending at the vertex 1.
Since there are no arcs ji with i≤ j-2 in any color, the length of P must be n-1 and it consists of the arcs ji with 2≤ i+1=j≤ n. However, there are only n-2 colors where these backward arcs are there, which contradicts the fact that P is rainbow. This completes the proof of <Ref>. Organization. The rest of the paper is organized as follows. In the next section, we start with collecting a few notations and tools that will be useful throughout the paper, and then we mention brief proof sketches of our main results. In <Ref>, we prove <Ref> and also establish a general lemma that will be useful to us later. Using this lemma along with some additional arguments, we prove <Ref> in <Ref>. § PRELIMINARIES §.§ Notations For a positive integer n, we write [n]{1,2,…,n} and for two positive integers a<b, we write [a,b]{a,a+1,…, b}. If we say that a result holds when 0< δ≪γ, β≪α<1, we mean that there exist non-decreasing functions f : (0,1] → (0,1] and g : (0,1]^2 → (0,1] such that the result holds when γ,β≤ f(α) and δ≤ g(γ, β). We will often not explicitly calculate these functions. We omit floors and ceilings where they are not crucial. Digraph. We use standard terminologies from graph theory. Let be a digraph. Denote the set of vertices of by V() and the set of arcs in by E(). We write e() to denote the number of arcs in . For two digraphs _1 and _2, we denote the disjoint union of them by _1 ∪_2. For a vertex v of , denote by d_^+(v) and d_^-(v) the out-degree and in-degree of v, respectively. Denote by N_^+(v) and N_^-(v) the out-neighborhood and in-neighborhood of v, respectively. For U⊆ V() and σ∈{+,-}, we let N_^σ(v,U) N_^σ(v)∩ U and d_^σ(v,U) |N_^σ(v,U)|. We often omit the subscript if it is clear from the context. For U⊆ V(), we denote by ∖ U the digraph induced by V()∖ U. For vertices u and v in a digraph, the arc from u to v is denoted by uv, vu, and we often write u v to say that there is an arc directed from u to v. For disjoint vertex sets X and Y, we write E[X, Y] as the set of arcs directed from a vertex in X to a vertex in Y. For a given digraph and its disjoint vertex subsets X and Y, we write X Y or Y X when xy∈ E() for every (x,y)∈ X× Y. A (directed) path of length ℓ in a digraph is a sequence of distinct vertices (v_1, …, v_ℓ+1) (and often denoted by v_1 ⋯ v_ℓ+1 or v_1 →⋯→ v_ℓ+1) such that the arcs v_iv_i+1∈ E() for i∈ [ℓ]. A path v_1→⋯→ v_ℓ+1 together with an arc v_ℓ+1→ v_1 is a (directed) cycle. Paths or cycles are Hamilton in a digraph 𝒟 if they cover all the vertices in 𝒟. Consider paths P_1,…,P_m in a digraph where for every i∈ [m], the first and the last vertices of P_i are u_i and v_i respectively. If for every i∈ [m-1], the arcs u_iv_i+1∈ E(), then we denote the concatenation of the paths P_1,…,P_m by P_1 P_2 ⋯ P_m or simply P_1P_2⋯ P_m. Sometimes, when the end vertex of P_1=v_1… v_ℓ and the start vertex of P_2=v_ℓ… v_k coincide, we write P_1P_2 to denote the path v_1… v_k. Coloring. Let be a digraph with an edge-coloring φ:E() → A for some color set A. If φ is rainbow and uses only colors in some C⊆ A, then we say φ is C-rainbow. In this case, we also say is C-rainbow. If a coloring φ is being considered in the context, then we write uc v to denote φ(uv)=c. We also use this to denote the arc uv colored by c. If in addition φ uses all the colors in C'⊆ C, then we say φ is a (C,C')-rainbow coloring. We write (C,c)-rainbow coloring for a color c∈ C to denote a (C,{c})-rainbow coloring. Collection of Tournaments. Let be a collection of tournaments. 
Define || to be the number of tournaments in this collection. Unless stated otherwise, we assume is of the form {T_c:c∈ C} for some set C. We say Γ() C is the color set of . For an arc e between two vertices of V(), we define C_(e){i∈Γ():e∈ T_i}, which is the set of the colors which can be used to color e. If is clear from the context, we omit the subscript. For X⊆ V() and A⊆Γ(), we define the vertex-induced collection [X] {T[X]:T∈} and the color-induced collection _A {T_i: i∈ A}. This naturally yields _A[X] = [X]_A. Let = {T_1, … , T_m} be a collection of tournaments. For γ∈(0,1], we define ^γ as the digraph on the vertex set V() with the arc set {uv : |{i∈[m]:uv∈ E(T_i)}|≥γ m}. A simple pigeonhole principle implies that ^γ contains at least one tournament as a subdigraph when γ≤ 1/2. We write _A^γ = (_A)^γ to make the order of subscripts and superscripts clear. We frequently use the following straightforward observations. For a given collection of tournaments, 0<α≤β≤1/2 and C_1⊆ C_2⊆Γ(), we have the following. * ^β⊆^α. * _C_2^β⊆_C_1^α when (1-α)|C_1|≥ (1- β)|C_2|. * _C_1^β⊆_C_2^α when β|C_1|≥α|C_2|. §.§ H-partition We will partition the vertex set of a given tournament in a desirable way to execute Step 1 of the proof sketch described in <Ref>. For convenience, we give a name to such partitions as follows. Let 0≤γ≤ 1 and r,l be positive integers. Let T be a tournament. A tuple (W_1,…, W_r, w_1,…, w_r-1) of disjoint vertex subsets W_1,…, W_r⊆ V(T) and distinct vertices w_1,…, w_r-1 in V(T) ∖⋃_i∈ [r] W_i is an (ℓ,γ)-partition if the following hold. * (⋃_i∈ [r] W_i)∪{w_1,…,w_r-1}=V(T). * γℓ≤ |W_i| ≤ℓ for each i∈ [r]. * W_i{w_i} W_i+1 for each i∈ [r-1]. In the above definition, the edges between the vertex w_i and W_i∪ W_i+1 are often referred to as intermediate edges. Note that an (ℓ,γ_1)-partition is an (ℓ,γ_2)-partition whenever 0<γ_2≤γ_1≤ 1. The following lemma proves the existence of (ℓ,γ)-partition. Let 0 < γ≤1/6 and ℓ,n be positive integers with 3≤ℓ≤ n. Let T be a tournament of order n. Then, T has an (ℓ ,γ)-partition. Note that the statement is clear when ℓ=n as V(T) is a trivial (ℓ ,γ)-partition. Assume that ℓ≥ 3 is the largest possible number such that T does not have a (ℓ ,γ)-partition. As ℓ<n, by the maximality of ℓ, the tournament T has an (ℓ+1 ,γ)-partition. Among all (ℓ+1 ,γ)-partitions, choose one P=(W_1,…, W_t, w_1,…, w_t-1) with the smallest number of i∈ [t] such that |W_i|=ℓ+1. If every i∈ [t] satisfies |W_i| <ℓ+1, then it is also an (ℓ ,γ)-partition, a contradiction. Hence, we may assume that there exists i_0∈ [t] with |W_i_0|=ℓ+1. We use the following standard fact. For a tournament T and d≥0, there are at most 2d+1 vertices whose in-degree (resp. out-degree) is at most d. By <Ref>, we can partition W_i_0 into three sets W^-_i_0, {v}, W^+_i_0 such that W^-_i_0{v} W^+_i_0 and |W^-_i_0|, |W^+_i_0| ≥1/6 |W_i_0| ≥γ (ℓ +1). Now consider the partition P' = (W'_1,…, W'_t+1, w'_1,…, w'_t) = (W_1,…, W_i_0-1, W^-_i_0,W^+_i_0, W_i_0+1,…, W_t, w_1,…, w_i_0-1, v, w_i_0+1,…, w_t-1). We claim that this is an (ℓ+1 ,γ)-partition having one smaller number of i∈ [t+1] with |W'_i|=ℓ+1 compared to P. Indeed, since W_i_0 is partitioned, it is clear that P' contains less number of vertex sets of size ℓ+1. Also, it is clear that P' is an (ℓ+1,γ)-partition, which is a contradiction. §.§ Color absorption The following lemma allows us to find a color absorber in an appropriate collection of tournaments. Let α∈ (0, 1), let n, m and ℓ≥ 1 be integers satisfying ℓ≤α^7 m/ 10^5 and α n ≥ 8m. 
Let H be a bipartite graph on vertex classes A and B such that |A| = m, |B| = n and, for each v∈ A, d_H(v)≥α n. Then, there are disjoint subsets B_0, B_1⊂ B with |B_0| = m-ℓ and |B_1|≥α^7 n/ 10^5, and the following property. Given any set U⊂ B_1 of size ℓ, there is a perfect matching between A and B_0∪ U in H. Using this, we can easily deduce the following color absorption lemma, which fits our setting. Let 0<1/n≪γ≪β≪α≤ 1/2. Let H be a digraph with β n≤ e(H)≤ (β+ 1/2γ ) n, and be a collection of tournaments with |V()|=n and ||=m≥α n. Let S be a copy of H in ^α. Then, there exist disjoint sets A,C⊆[m], with |A|=e(H)-γ m and |C|≥ 10β m such that the following property holds. Given any subset C'⊆ C of size γ m, there is a rainbow coloring of S in using colors in A∪ C'. Let K be the bipartite graph with vertex classes E(S) and [m], where (e,i) is an edge if e∈ T_i. Then <Ref> applies with (l,m,n)=(γ m,β m,m). This yields disjoint A,C⊆[m] with |A|=e(H)-γ m and |C|≥ 10β m, such that, for any C'⊆ C of size γ m there is a perfect matching between E(S) and A∪ C'. Such a matching corresponds to an (A∪ C')-rainbow coloring of S in , as required. §.§ Proof sketches Here, we outline the major steps involved in the proofs of our results. We start with <Ref>. Firstly, when we have many spare colors, it is easy to find a rainbow Hamilton path. Indeed, if has at least 2n tournaments, then one can find a Hamilton path P in ^1/2 and then P can be greedily colored in a rainbow way as 1/2· 2n > n-1 colors are available for each edge. In fact, as we will see in Lemma <ref>, one can find a rainbow Hamilton path even when only one spare color is provided. However, it is not trivial to find a transversal Hamilton path when the number of colors is exactly n-1. To overcome the difficulties, we decompose the path of order n in smaller subpaths, and iteratively apply the above to embed paths into these smaller parts. This yields a rainbow linear forest of at least (1-o(1))n edges. To convert this into a Hamilton path, we connect them and use the color absorption lemma. This idea is elaborated below. We will choose constants μ,γ,β so that 0 < 1/n≪μ≪γ≪β≪ 1 holds. Step 1: Partition the vertex set. We first consider a tournament T⊆^1/2. Then, consider an (μ n, γ)-partition (W_1,…, W_r, w_1,…, w_r-1) of T. Let E_i,1:= {vw_i : v∈ W_i} and E_i,2:= {w_iv : v∈ W_i+1}. We plan to later find Hamilton paths P_i in each of the W_i's which yield a Hamilton path P_1 e_1,1 e_1,2 P_2 e_2,1 e_2,2 P_3 ⋯ P_r of T for some e_i,j∈ E_i,j. In the next steps, we illustrate how to obtain such paths so that one can find a rainbow coloring of the final Hamilton path. Step 2: Partition the color set and find a color absorber. We set aside a set D of at most μ^2 n colors such that for any collection {e_i,j: i∈ [r-1], j∈ [2]} of one arc from each E_i,j, we can find a D-rainbow coloring of these arcs. Let t∈ [r] be such that ∑_i∈ [t] (|W_i|-1) = β n + o(n). For each i∈ [t], let P_i be a Hamilton path of T[W_i]. Next, utilize <Ref> to find disjoint A,C⊆Γ()∖ D such that |A| = β n - γ n and |C| = 10β n and for any C'⊆ C of size γ n, we can find an (A∪ C')-rainbow coloring of the arcs in ⋃_i∈ [t] P_i. Denote BΓ()∖(A∪ C∪ D). Step 3: Use most of the colors in B and color most intermediate arcs. We first choose a number τ∈ [t+1,r-1]. For each i∈ [t+1,r]∖{τ} one by one, we consider the collection of tournaments _C'[W_i] where C'⊆ B∪ C is the set of current available colors. Then we find a Hamilton path in _C'[W_i] and color the arcs of this path. 
As |W_i| is much smaller than |C'|, we can greedily color the arcs. We can repeat this for all i∈ [t+1,r]∖{τ}. Furthermore, we can ensure that all colors of B are used, possibly except at most r colors. This procedure can be done using <Ref>. After this step, suppose C^*,B^* denote the remaining unused colors in C,B respectively. Let u_i,v_i denote the starting and ending vertices of the Hamilton paths considered inside W_i. We now D-rainbow color the intermediate arcs {w_i u_i+1 : i∈ [r-1]}∪{v_i w_i: i∈ [r-1]}. Let D^* be the remaining unused colors in D. Step 4: Absorb the colors in B^*∪ D^* using the colors in C^*. Since the number of colors in B^*∪ D^* is small enough, we can embed a rainbow Hamilton path in W_τ∪{w_τ-1, w_τ} that starts from the vertex w_τ-1 to w_τ using up all the colors in B^*∪ D^* and some colors in C^*. This is done by applying <Ref>. Finally, we use the remaining unused colors in C along with those in A to color the arcs in P_i with i∈ [t], which completes the desired transversal Hamilton path in . To prove <Ref>, we again need to use similar arguments to find a Hamilton path in along with some extra arguments to close the path to a Hamilton cycle. Thus, we extract a lemma (see <Ref>) capturing these similar arguments that can be directly applicable to both <Ref>. Similar to before, consider a tournament T⊆^1/2. By <Ref>, we first find an (μ n, 1/6)-partition (W_0, W_1, …, W_r+1, w_0, …, w_r) of T. We next find a Hamilton path P_0 of T[W_0∪{w_0}] from some vertex x to w_0 and a Hamilton path P_r+1 of T[W_r+1∪{w_r}] from w_r to some vertex y. If yx∈ T_i for some i∈ [n], then we can utilize the -partition to find a [n]∖{i}-rainbow Hamilton path from x to y, yielding a desired rainbow Hamilton cycle. Hence we have xy∈ T_i for all i∈ [n]. If there are many internally disjoint short rainbow paths from y to x, then we can choose one such short path P from y to x that does not use any vertex in W_0∪ W_r+1∪{w_0, …, w_r}. After removing the vertices used in P, the modified partition (W_0,W_1∖ V(P),W_2∖ V(P),…,W_r∖ V(P),W_r+1, w_0,…,w_r) still forms an (μ n, 1/10)-partition of T∖ V(P). This fact then helps us to carry through the same 4 steps to find an appropriate rainbow Hamilton path P' in V()∖ (W_0∪ W_r+1∪ V(P)) from w_0 to w_r, yielding a desired rainbow Hamilton cycle. Hence, this yields two vertices x and y such that xy∈ T_i for all i∈ [n] and there are not many internally disjoint short rainbow paths from y to x. These two properties will be useful. We consider a longest rainbow path P^*=x_1→…→ x_k from y=x_1 to x=x_k and let D be the set of colors not used by the path. If this is a Hamilton path, then we can close it to a rainbow cycle as xy∈ T_c for the remaining color c. If not, the maximality of this path provides information between the leftover vertices z and V(P). For example, if zx_i∈ T_j for some j∈ D, then x_i-1z∉ T_j' for j'∈ D∖{j} as otherwise we can find a longer path x_1→… x_i-1j' z j x_i →…→ x_k. This fact together with the lack of many internally disjoint short rainbow paths from y to x allows us to obtain patterns on the directions of the arcs between the vertices outside P^* and the vertices in V(P^*). Once we have the patterns, some clever choices of maximality together with the strong connectedness of the tournaments yield a rainbow Hamilton path that can be closed into a rainbow Hamilton cycle. Details are provided in <Ref>. § TRANSVERSAL HAMILTON PATHS In this section, we prove <Ref>. 
The following lemma shows that one can find a transversal Hamilton path with one additional color. For any collection of tournaments with |V()| = || = n, there is a rainbow Hamilton path. Take a longest rainbow path P=u_1 u_2 ⋯ u_r in . Suppose |V(P)| < n. Then, there is at least one vertex v ∈ V()∖ V(P) and at least two unused colors, say 1 and 2. We have u_1v∈ E(T_1) since otherwise there is a rainbow path v1 P longer than P, a contradiction to the maximality of P. Similarly, we have u_1v∈ E(T_2). We claim that u_tv∈ E(T_i) holds for all t∈ [r] and i∈{1,2}. If not, let 2≤ t'≤ r be the smallest index where u_t'v∈ E(T_j) for some j∈{1,2}. By the minimality of t', we have u_t'-1v∈ E(T_3-j), which gives a rainbow path u_1 ⋯ u_t'-13-j v j u_t'⋯ u_r longer than P, a contradiction. Thus the claim holds, whence in particular u_rv∈ E(T_i) for i∈{1,2}. However, this gives P1v which is again a rainbow path longer than P, a contradiction. Therefore, |V(P)| = n and P is a rainbow Hamilton path. Having more additional colors, instead of just one, allows us to find a rainbow Hamilton path in a robust way: we can designate some colors to appear on the path. The following lemma allows us to ensure that one fixed color must appear on the path. Using this, we can even make sure linearly many preselected colors appear on the path. Let be a collection of tournaments with |V()| = n≥ 2 and || ≥ 2n. For each i∈Γ(), there is a rainbow Hamilton path in using an arc of T_i, except for the case where the tournament T_i is a directed triangle with n=3, and the other tournaments in are directed triangles with the opposite orientation. Assume without loss of generality that Γ()=[2n] and i=1. We use induction on n. The base cases when n=2 or 3 are clear. Let n≥ 4 and assume the statement holds for smaller n and is the smallest counterexample to the statement. We call a collection of tournaments exceptional if one tournament is a directed triangle and all the others are directed triangles with the opposite orientation. Choose a tournament T⊆^1/2. For any vertex v∈ V(), if d^+_T(v)≥ 2, then the collection [N^+_T(v)] is exceptional. Similarly, if d^-_T(v) ≥ 2, then the collection [N^-_T(v)] is exceptional. Suppose that v has out-degree at least two and [N_T^+(v)] is not exceptional. By the induction hypothesis, there exists a rainbow directed path P = v_1⋯ v_t with V(P) = N_T^+(v) using the color 1. Since vv_1∈ T⊆^1/2, there are at least 1/2· 2n = n colors containing vv_1. As at most d_T^+(v)-1 ≤ n-2 colors are used in P, we can choose a color to extend P to a rainbow path P'= v v_1 ⋯ v_t. If d^-_T(v)=0, then P' is the desired rainbow Hamilton path of , a contradiction. Otherwise, letting I ⊆ [2n] be the collection of colors not used by P', we have |I| ≥ 2n - (n-1) ≥ n+1 colors which are not yet used. By applying <Ref> to _I[N^-_T(v)], we obtain a (possibly empty) rainbow path Q = u_1⋯ u_s on the vertex set N^-_T(v). As at most (d_T^+(v)-1)+(d_T^-(v)-1)+1=n-2 colors are used in P'∪ Q, we can still choose a color j∈ [2n] such that u_s v∈ T_j and j is not used in P'∪ Q. This yields a desired rainbow Hamilton path Q P' of using the color 1, a contradiction. The similar argument works when d_T^-(v)≥ 2 and [N_T^-(v)] is not exceptional. Now we use this claim to derive a contradiction. If n ≥ 8, then there must be a vertex v with d_T(v)≥ 4, whence [N^+_T(v)] is not exceptional, a contradiction to <Ref>. 
If 4≤ n ≤ 6, then it is easy to check that there must be a vertex v such that either d^+(v)∉{1,3} or d^-(v) ∉{1,3}, which yields a contradiction to <Ref>. The only remaining case is when n=7, and T is a 3-regular tournament. Choose v∈ V(). <Ref> implies that [N^+_T(v)] and [N^-_T(v)] are both exceptional. Then, T[N^+_T(v)] is a directed triangle a b c a and moreover T_1[N^+_T(v)] is the directed triangle c b a c. Also, T[N^-_T(v)] is a directed triangle x y z x and moreover T_1[N^-_T(v)] is the directed triangle z y x z. As T is 3-regular, we can assume that ax∈ T. Then, as every arc in T belongs to at least n colors, we can find a rainbow direct Hamilton path z1 y v b c a x by greedily choosing colors of all arcs other than the first one. Again, this contradiction completes the proof. We next show that for any linearly many specified colors, we can find a rainbow Hamilton path using those colors. Let be a collection of tournaments with |V()| = n≥ 25 and || =m ≥ 4n. Let B⊆Γ() with |B| ≤n/25. Let u,v be vertices in V() such that for each w∈ V(T)∖{u,v}, the arc uw is in T_i for at least two choices of i∈ B and the arc wv is in T_j for at least two choices of j∈ B. Then, has an (Γ(),B)-rainbow Hamilton path starting from u and ending at v. Assume without loss of generality that Γ()=[4n]. Let V V()∖{u,v}. Choose T⊆^1/2[V], and apply <Ref> to find an (24,1/6)-partition (W_1, …, W_r, w_1, …, w_r-1) of T. Note that we have n/25≤ r≤ n and |W_i|≥ 4 for each i. Assume that B⊆ [r]. Partition the color set [r+1,4n] into C_1,…, C_r where |C_i|≥ 2|W_i| for each i. For j∈{1,r}, use <Ref> to obtain a C_i-rainbow Hamilton path P_i in W_i starting from a vertex u_j and ending at a vertex v_j. We choose two colors, say 1 and r, so that uu_1∈ T_1 and v_rv∈ T_r. For each i=2,…, r-1, we apply <Ref> to find a (C_i∪{i},i)-rainbow Hamilton path P_i of [W_i] which starts from a vertex u_i and ends at a vertex v_i. This uses all the colors in B=[r]. We have used at most n colors, so for each arc in T there are at least 1/2· 4n-n=n unused colors. Thus we can greedily choose distinct colors for {v_iw_i, w_i u_i+1:1≤ i≤ r} to complete a rainbow Hamilton path u → P_1 → w_1 → P_2 →…→ w_r → P_r+1→ v of , which uses all the colors in B as desired. §.§ Main lemma In this subsection, we prove the following lemma. It is straightforward to derive <Ref> from this lemma. Let 0<1/n≪μ≪γ, α≤ 1. Let be a collection of tournaments with |V()| = n and || = n-1. Let T⊆^α be a tournament. Let x_0,x_r∈ W, and (W_1,…,W_r,w_1,…,w_r-1) be an (μ n,γ)-partition of T∖{x_0,x_r}. Suppose x_0 W_1 and W_r x_r in T. Then, there is a rainbow Hamilton path in from x_0 to x_r. We first show that <Ref> implies <Ref>. Let α=1/4 and 0<1/n≪μ≪γ, α≤ 1. Fix a tournament T⊆^1/2. By <Ref>, T has an (μ n, γ)-partition (W_0,…, W_r+1, w_0,…, w_r) for some r∈ℕ. Choose a Hamilton path P_0 of T[W_0] and a Hamilton path P_r+1 of T[W_r+1]. Denote the last vertex of P_0 by u and the first vertex of P_r+1 by v. Since the arcs of P_0 and P_r+1 and the arcs uw_0 and w_rv lie in at least (n-1)/2> |E(P_0)|+|E(P_r+1)|+2 different colors, we can greedily choose colors of the arcs in E(P_0)∪ E(P_r+1)∪{uw_0, w_rv} in a rainbow way. Let C⊆Γ() be the set of all the remaining colors, then T is still a subtournament of _C^1/4. By considering the collection _C[V()∖ (W_0∪ W_r+1∪{w_0,w_r})] together with an (μ n, γ)-partition (W_1,…, W_r,w_1,…, w_r-1) of T∖ (W_0∪ W_r+1∪{w_0,w_r}), Lemma <ref> applied with α=1/4, x_0=w_0, and x_r=w_r yields a C-rainbow Hamilton path from w_0 to w_r. 
This together with the precolored paths P_0 and P_r+1 and the arcs uw_0 and w_rv yields a rainbow Hamilton path in . Note that an (μ n, γ)-partition is also an (μ n, γ')-partition for all γ' < γ. Hence, it is enough to prove the lemma further assuming μ≪γ≪α. We assume this in the rest of the proof. We further choose a number β satisfying 0< 1/n≪μ≪γ≪β≪α < 1. We take a few steps to finish the proof, and the steps loosely follow the proof sketch given in <Ref>. Step 1. Reserve a color set D for the intermediate arcs in the -partition. Define w_i:=x_i for i∈{0,r}. We start with setting aside a set D of colors which we will later use for the arcs incident with the vertices w_0,w_1,…,w_r-1,w_r. For each i∈ [r], let ^-_i = {C_(vw_i) : v∈ W_i} be the multi-collection of the sets of available colors for the arcs between the vertices in W_i and w_i, and let ^+_i = {C_(w_i-1v): v∈ W_i} be the multi-collection of the sets of available colors for the arcs between w_i-1 and the vertices in W_i. Note that each set A in ^-_i∪^+_i has size at least α n. For each element in Γ(), we include it in the set D independently at random with probability 20μ^-2 r log n/n. Then, the standard Chernoff bound yields that all of the following simultaneously hold with a positive probability. * |D|≤ 100 μ^-2 rlog n < μ^2 n. * For each i∈ [r-1] and each set A ∈^-_i∪^+_i, we have |D∩ A| > 2r. Let us fix one such set D. Step 2. Setting up a color absorber. Let t≤ r-2 be an index such that β n≤∑_i∈ [t] (|W_i|-1) ≤ (β+ μ) n. Such a number t exists as |W_i|≤μ n and μ≪β. For each i∈ [t], take a spanning path P_i in T[W_i]. Let τ be an arbitrary element in [r] with t<τ < r. We partition [r] into three sets L_1, L_2, L_3 as follows: L_1 = [t], L_2 = [r]∖ (L_1∪{τ}), L_3 = {τ}. Let Q_1 be the union of the paths ⋃_i∈ L_1 P_i. Every arc e of T satisfies |C(e)∖ D| ≥α n - μ^2 n ≥α n/2. So we have T⊆_Γ(V())∖ D^α/2. Hence, <Ref> with α /2 playing the role of α ensures a partition A∪ B∪ C of Γ(V())∖ D satisfying the following. * β n - γ n ≤ |A| = e(Q_1) - γ n≤β n - 1/2γ n. * |C| ≥ 10 β n. * For any subset C'⊆ C of size e(Q_1) - |A|, there is a rainbow coloring of Q_1 in using colors in A∪ C'. By <ref>, <ref>, <ref>, and the fact that |A|+|B|+|C|+|D|=n-1, we have |B|+|C| = n-1 - e(Q_1) + γ n -|D| ≥ n - e(Q_1) + 1/2γ n, |B| ≤ n- e(Q_1) + γ n -|C| ≤ n-e(Q_1) - 9β n. Step 3. Use most of the colors in B and color most intermediate arcs. Next, we will choose a Hamilton path in W_i for each i∈ L_2 and choose colors for the arcs in those paths. While doing that, we aim to exhaust most of the colors in B while using some additional colors in C. By (<ref>), (<ref>), and the fact that e(Q_1)≥β n, we know that M: = ∑_i∈ L_2 |W_i| satisfies (|B| + |C|) - 1/2γ n ≥ n - e(Q_1) ≥ M ≥ n - e(Q_1) - 2r ≥ n -e(Q_1) - β n ≥ |B|. Using this, we choose an arbitrary subset C_1⊆ C with |B∪ C_1|=M. Furthermore, the set C∖ C_1 satisfies |C∖ C_1| = |B| + |C| - M ≥1/2γ n. Now, we partition the set B∪ C_1 into {B_i: i∈ L_2} with |B_i|=|W_i| for each i∈ L_i. For each i∈ L_2, we apply <Ref> to the collection _B_i[W_i]. This yields a B_i-rainbow Hamilton path in [W_i] using all colors in B_i except exactly one color b_i, say. Let Q_2 be the union of such paths. Let B' = {b_i :i∈ L_2} be the set of at most r unused colors. Let B^* B∩ B' and C^* (C∖ C_1)∪ (C∩ B'). Then, B^* and C^* are the set of the unused colors in B and C, respectively. Moreover, |B^*| ≤ r and |C^*| ≥1/2γ n. 
We currently have a union of paths Q_1∪ Q_2 where the arcs of Q_1 are not colored, but the arcs in Q_2 have received pairwise distinct colors. Now, for each i∈ [r]∖{τ}, we have a path-component of Q_1∪ Q_2 within W_i, which starts from u_i and ends at v_i, say. Now we will connect the paths using the vertices w_i. Consider the sets E of arcs {w_i-1 u_i : i∈ [r]∖{τ}}∪{v_i w_i: i∈ [r]∖{τ}}. For each arc e∈ E, we greedily choose a color in C(e)∩ D and color it with the color, ensuring the arcs in E receive pairwise distinct colors. By <ref>, we can do this for all arcs of E. Let D^* be the set of unused remaining colors in D. Step 4. Absorption of B^*∪ D^* using the color absorber. Let S^* = B^*∪ C^*∪ D^*. We now consider the collection _S^*[W_τ∪{w_τ,w_τ+1}]. By <ref> and (<ref>) , we have |B^*∪ D^*|≤ r + μ^2 n < 1/25|S^*|. Also by <ref>, at least 2r-2(r-1)≥ 2 choices of C(w_τw)∩ D and C(w w_τ+1) are available for each w∈ W_τ. Hence we can apply Lemma <ref> on _S^*[W_τ∪{w_τ,w_τ+1}] with B^*∪ D^* playing the role of B to obtain a (S^*, B^*∪ D^*)-rainbow path from w_τ to w_τ+1. This, together with Q_1∪ Q_2, provides a partially colored rainbow Hamilton path P^final in , which uses all the colors outside C. Let C'⊆ C be the set of colors in C which are not used in P^final. By using <ref>, we can color the remaining uncolored arcs in Q_1 exactly using the colors in A∪ C'. This provides a rainbow Hamilton path in starting from x_0 to x_r. § TRANSVERSAL HAMILTON CYCLES The definition of strong connectedness says that for any pair of vertices, there exists a path connecting those vertices. The following lemma guarantees a similar property in the rainbow setting. Let be a collection of strongly connected tournaments with ||≥ |V()|-1≥ 1. Then, for all distinct x, y∈ V(), there exists a rainbow path from x to y. Let |V()|=n and ⊇{T_1, …, T_n-1}. Let L_0 {x}, and inductively define for each i∈ [n-1] L_i L_i-1∪{v∈ V() : there exists u∈ L_i-1 such that uv∈ E(T_i)}. Then for each i∈ [n-1], we have either |L_i-1| < |L_i| or |L_i-1| = n. This is because if |L_i-1| < n and |L_i-1| = |L_i|, it means (V()\ L_i) L_i in the tournament T_i. This contradicts the assumption that T_i is strongly connected. Thus, there is r∈ [n-1] such that |L_r| = n, i.e., L_r=V(). Consequently, we can take a minimum r∈ [n-1] such that y∈ L_r. Then, by definition of the sets L_i, there exists a rainbow path from x to y with using some colors from [r]. Now we prove <Ref>. Let = {T_1, …, T_n} and |V()|=n. We may assume T_1, …, T_n-1 are strongly connected tournaments. Assume that there are no rainbow Hamilton cycles. Let μ be a small constant so that we have 0< 1/n ≪μ≪ 1. Fix a tournament T⊆^1/2. There is an (μ n, 1/6)-partition (W_0, W_1, …, W_r+1, w_0, …, w_r) of T by <Ref>. Consider a Hamilton path P_0 of T[W_0] from a vertex x to a vertex x' and another Hamilton path P_r+1 of T[W_r+1] from a vertex y' to a vertex y. We fix these vertices x,x',y,y'. For each i∈ [n], we have xy∈ T_i. Also, there are at most 10 μ n internally disjoint paths of length three from y to x such that each path is rainbow, while different paths may have arcs with the same color. We first define a color set I⊆ [n] and a vertex set X and a path P in each of the following two cases. * If yx∈ T_i for some i∈ [n], we let I={i} and X=∅ and P=yx. * If Case 1 does not happen and there are more than 10μ n internally disjoint rainbow paths of length three from y to x, then we choose one such path P=y → u → v → x where u,v∉{ w_1,…, w_r}∪ W_0∪ W_r+1. 
Indeed, such a path exists since |{ w_1,…, w_r}∪ W_0∪ W_r+1| ≤ 6/μ + 2μ n < 10 μ n. In this case, we let I to be the set of three colors in the rainbow path P, and X= {u,v}. In either case, we let W'_i = W_i∖ X for each i∈{0}∪ [r+1] and '= _[n]∖ I[V()∖ X] and T'=T∖ X. Then, (W'_0, W'_1, …,W'_r, W'_r+1, w_0, …, w_r) is an (μ n, 1/10)-partition of T'. Note that T'⊆'^1/3 because we assumed T⊆^1/2. Note that W'_0=W_0, W'_r+1=W_r+1, and P_0 and P_r+1 are still Hamilton paths on W'_0 and W'_r+1, respectively. Consider the path P'_0 obtained by appending the arc x'w_0 at the end of P_0, and the path P'_r+1 obtained by appending the arc w_r y' in front of P_r+1. We greedily color the arcs of P'_0 and P'_r+1 so that P'_0 ∪ P'_r+1 forms a ([n]∖ I)-rainbow digraph. This is possible as P'_0 ∪ P'_r+1 contains at most 2μ n arcs while each arc in T⊆^1/2 has at least n/2 - |I| > 2μ n available colors outside of I. Let C be the colors in [n]∖ I which are not used in the path P'_r+1 P P'_0. Let V' = V(')∖ V(P_r+1 P P_0). Then _C[V'] is a collection of |C|=|V'|-1 many tournaments. As each arc of T is in T_i for at least 1/3Γ(') - |E(P_r+1'PP_0')| ≥1/3(n-|I|) - (2μ n+3)≥1/10|V'| many choices of i∈ C, we can apply <Ref> to _C[V'] with w_0, w_r, 1/10, 1/10 playing the roles of x_0, x_r, γ, α, respectively, to obtain a C-rainbow Hamilton path in _C[V'] from w_0 to w_r. This together with the rainbow path P'_r+1 P P'_0 yields a rainbow Hamilton cycle, a contradiction. This proves the claim. Let y_1 and y_k be two vertices in V() and A⊆Γ() be a set of colors. Let Q = y_1 → y_2 →⋯→ y_k be a longest rainbow path in from the vertex y_1 to the vertex y_k that does not use any color in A. If y_ℓ z∈ T_i for some z∉ V(Q) and i∈ A and ℓ∈ [k-1], then y_ℓ+1 z∈ T_j for all j∈ A∖{i}. Symmetrically, if a vertex z and a color j∈ A satisfies zy_ℓ+1∈ T_j for some ℓ∈ [k-1], then zy_ℓ∈ T_i for all i∈ A∖{j}. Indeed, if not, then y_1⋯ y_ℓi z j y_ℓ+1⋯ y_k yields a longer rainbow path, where the arcs other than y_ℓz,zy_ℓ+1 have the same color as in Q. This is a contradiction to the maximality of Q. A similar argument shows the symmetric statement, and this proves the claim. By <Ref>, there is a rainbow path starting from y and ending at x using colors in [n-1]. This together with <Ref> provides the existence of the following path P with k≥ 3: [c]0.9P= x_1 x_2⋯ x_k is a longest rainbow path from y=x_1 to x=x_k. Let C be the set of colors used in P. We first show that the path P has less than n-1 vertices. We have k< n-1. If k=n, then let [n]∖ C={a}. By <Ref>, the arc xy in T_a together with the C-rainbow path P yields a rainbow Hamilton cycle, a contradiction. Thus, we can assume that k≤ n-1. Assume that k=n-1 and let z be the unique vertex in V()∖ V(P) and let [n]∖ C= {a,b} be the set of colors not used in P. Note that there could be multiple choices of rainbow path P from y to x of length n-2. In any of such path P, we have the following. [c]0.9 For each i∈ [k] and {c,c'}={a,b}, x_iz∈ T_c if and only if x_i+1z∈ T_c', where x_k+1=x_1. Indeed, if x_iz∈ T_c for some i∈ [k-1], then <Ref> implies that x_i+1z∈ T_c'. Moreover, if x_kz∈ T_c, then we have x_1z∈ T_c' as otherwise x_1… x_k c z c' x_1 yields a rainbow Hamilton cycle. If x_iz∈ T_c for some i∈ [k], then the preceding argument applied twice yields x_i+2z∈ T_c. By repeatedly applying <Ref> k-1 times when k is even and 2k-1 times when k is odd, we obtain that x_i-1z∈ T_c'. 
Since one of T_a and T_b is strongly connected, without loss of generality, we may assume that T_a is strongly connected; thus, there exists i∈ [k] such that x_iz∈ T_a and j∈ [k] such that zx_j∈ T_a. We claim that k is even. If k is odd, then by (<ref>) and the fact that k is odd, we conclude that x_i'z∈ T_a and x_i' z∈ T_b for all i'∈ [k], which is a contradiction as T_a is strongly connected. Now, when k is even, (<ref>) ensures the following. * x_i'z∈ T_a and x_i'+1z∈ T_b for all i' having the same parity with i, and * zx_i'∈ T_a and zx_i'+1∈ T_b with all i' having the same parity with j. Hence, i and j have different parity. Now, for any 1<ℓ<k, assume that the arcs x_ℓ-1x_ℓ and x_ℓx_ℓ+1 are colored with d and d', respectively in P. We can consider a rainbow path P'= x_1 ⋯ x_ℓ-1c z c' x_ℓ+1⋯ x_k where (c,c')∈{(a,b), (b,a)} is chosen according to the parity of ℓ. Note that this P' is also a longest rainbow path from y to x, hence <ref> and <ref> also hold with the path P' and the colors d and d'. So, by swapping d and d' if necessary, we know that yx_ℓ,x_ℓx∈ T_d. By choosing another 1<ℓ'<k with |ℓ-ℓ'|>1, the same argument yields four distinct colors d,d',d^* ,d^** such that yx_ℓ',x_ℓ'x∈ T_d^*. Depending on whether x_ℓx_ℓ'∈ T_d' or x_ℓ'x_ℓ∈ T_d', we have one rainbow path yx_ℓ x_ℓ' x or yx_ℓ' x_ℓ x of length three. We can pair up the number in {2,…, k} so that paired-up numbers differ by more than 1, then this provides at least (n-1)/2 internally disjoint paths of length three from y to x where each path is rainbow. This contradicts <Ref> and thus proves the claim that k<n-1. Thus, we have k<n-1. Let D=[n]∖ C. Then, we have |D| = n-(k-1) ≥ 3. If a vertex z∉ V(P) and a color i∈ D satisfy x_1z∈ T_i, then <Ref> implies that x_2z∈ T_j for all j∈ D∖{i}. As D∖{i} contains at least two colors, we apply <Ref> again for each j∈ D∖{i}, then we conclude that x_ℓ z∈ T_i for all i∈ D and ℓ≥ 3. Since k≥ 3, this implies that x_k z∈ T_j for all j∈ D. This shows that every vertex outside V(P) belongs to at least one of the following two sets. S^+ = { z∈ V()∖ V(P): zx_1∈ T_i for all i∈ D } and S^- = { z∈ V()∖ V(P): x_kz∈ T_i for all i∈ D }. We now extend the path P to a rainbow path P_1, which is a longest rainbow path among the rainbow paths satisfying the following. * P_1 = x_1→ x_2 →…→ x_k →…→ x_k_1. * x_k_1∈ S^+ ∪{x_k}. Let D_1 be the set of colors that do not appear in P_1. Let S_1^+ S^+∖ V(P_1) and S_1^- S^-∖ V(P_1). As x_k_1 either belongs to S^+ or equals x_k, either the definition of S^+ or <Ref> implies that x_k_1x_1∈ T_i for each i∈ D_1. For all z∈ S^+_1, i∈ [k_1], and j∈ D_1, we have zx_i∈ T_j. Moreover, for all z'∈ S^-_1 and j∈ D_1, we have x_k_1 z'∈ T_j. Note that if S^+_1 or S^-_1 is empty, the corresponding statement vacuously holds. If at least one of them is not empty, we have |D_1|≥ 2. We first prove the first statement. If S^+_1 is not empty, choose a vertex z∈ S^+_1. For this choice of z, consider the largest i∈ [k_1], if exists, such that there exists j∈ D_1 with x_iz∈ T_j. If i=k_1, then x_1→…→ x_k_1j z yields a longer rainbow path than P_1, contradicting the maximality of P_1. Otherwise, choose j'∈ D_1∖{j}. If i≥ k, then the path x_1→…→ x_ij zj' x_i+1→…→ x_k_1 yields a longer rainbow path than P_1, a contradiction to the maximality of P_1. If i<k, then the path x_1→…→ x_ij zj' x_i+1→…→ x_k contradicts the maximality of P. Hence i does not exist and this proves the first part of the claim. Similarly, choose z'∈ S_1^-. 
Consider the largest i≥ 0, if exists, such that z' x_k+i∈ T_j for some j∈ D_1. By the definition of S^-, the number i is positive if exists. Then for j'∈ D_1∖{j}, the rainbow path x_1→…→ x_k+i-1j' z'j x_k+i→…→ x_k_1 contradicts the maximality of P_1, a contradiction. Hence i does not exist and this proves the moreover part of the claim. We have S^+_1 = ∅. We first claim that S^+_1 S^-_1 in T_j for all j∈ D_1. If not, then there exist vertices z_1∈ S_1^+ and z_2∈ S_1^-, and j∈ D_1 such that z_2z_1∈ T_j. Existence of such vertices z_1, z_2 implies |D_1|≥ 2, so we can choose j'∈ D_1∖{j}. Then <Ref> implies that we can extend P_1 to obtain a longer rainbow path x_1⋯ x_k_1j' z_2 j z_1, contradicting the maximality of P_1. Hence, S^+_1 S^-_1 holds in T_j for all j∈ D_1. Then <Ref> implies that S^+_1 V()∖ S^+_1 holds for all j∈ D_1, ensuring that T_j is not strongly connected if S^+_1 is not empty. This yields at least |D_1|≥ 2 not strongly connected tournaments in , a contradiction. Thus S^+_1 must be empty. This proves the claim. We now extend the path P_1 to a rainbow path P_2, which is a longest rainbow path among the rainbow paths satisfying the following. * P_2 = y_1 → y_2 →…→ y_ℓ→ x_1→ x_2 →…→ x_k_1. * y_1 ∈ S^-_1 ∪{x_1}. Let D_2 be the set of colors that do not appear in the rainbow path P_2. Let S^-_2 S^-_1 ∖ V(P_2). For each i∈ [k_1], let y_ℓ+i=x_i, then we have P_2= y_1→…→ y_k_2 where k_2=ℓ+k_1. Then the following claim holds. For each j∈ D_2, we have V()∖ S^-_2 S^-_2 in T_j. This is vacuously true if S^-_2 =∅. Otherwise, we have |D_1|≥ 2 and we choose z∈ S^-_2. For this choice, consider smallest i∈ [k_2] such that there exists j∈ D_2 with zy_i∈ T_j. If i=1, then z→ y_1→…→ y_k_2 yields a longer rainbow path than P_2, contradicting the maximality of P_2. Otherwise, for j'∈ D_2∖{j}, the path y_1→…→ y_i-1j' zj y_i→…→ y_k_2 yields a longer rainbow path than P_2, a contradiction. This proves the claim. If S^-_2 is not empty, then the above claim states that T_j is not strongly connected for all j∈ D_2. Then we obtain at least |D_2|≥ 2 tournaments in which are not strongly connected, a contradiction. Hence we have S^-_2=∅ and V(P_2) = V() and |D_2| = 1. Let c be the only color in D_2. The moreover part of <Ref> implies that x_k_1y_1∈ T_c if y_1∈ S^-_1, which together with P_2 gives a rainbow Hamilton cycle. Assume y_1∉ S^-_1, then <ref> yields y_1 = x_1. In this case, <ref> implies that x_k_1∈ S^+∪{x_k}. If x_k_1∈ S^+, then the definition of S^+ yields x_k_1y_1∈ T_c and if x_k_1=x_k, then the first part of <Ref> yields x_k_1y_1∈ T_c. In either case, this arc x_k_1y_1∈ T_c together with P_2 yields a desired rainbow Hamilton cycle. This finishes the proof of <Ref>.
Next, utilize <Ref> to find disjoint A,C⊆Γ()∖ D such that |A| = β n - γ n and |C| = 10β n and for any C'⊆ C of size γ n, we can find an (A∪ C')-rainbow coloring of the arcs in ⋃_i∈ [t] P_i. Denote BΓ()∖(A∪ C∪ D). Use most of the colors in B and color most intermediate arcs We first choose a number τ∈ [t+1,r-1]. For each i∈ [t+1,r]∖{τ} one by one, we consider the tournament _C'[W_i] where C'⊆ B∪ C is the set of current available colors. Then we find Hamilton paths in _C'[W_i] and color the arcs of this path. As |W_i| is much smaller than |C'|, we can greedily color the arcs. We can repeat this for all i∈ [t+1,r]∖{τ}. Furthermore, we can ensure that all colors of B are used, possibly except at most r colors. This procedure can be done using <Ref>. After this step, suppose C^*,B^* denote the remaining unused colors in C,B respectively. Let u_i,v_i denote the starting and ending vertices of the Hamilton paths considered inside W_i. We now D-rainbow color the intermediate arcs {w_i u_i+1 : i∈ [r-1]}∪{v_i w_i: i∈ [r-1]}. Let D^* be the remaining unused colors in D. Absorb the colors in B^*∪ D^* using the colors in C^* Since the number of colors in B^*∪ D^* is small enough, we can embed a rainbow Hamilton path in W_τ∪{w_τ-1, w_τ} that starts from the vertex w_τ-1 to w_τ using up all the colors in B^*∪ D^* and some colors in C^*. This is done by applying <Ref>. Finally, we use the remaining unused colors in C along with those in A to color the arcs in P_i with i∈ [t], which completes the desired transversal Hamilton path in .
http://arxiv.org/abs/2307.02449v1
20230705172108
An integrative dynamical perspective for graph theory and the study of complex networks
[ "Gorka Zamora-López", "Matthieu Gilson" ]
physics.soc-ph
[ "physics.soc-ph", "cond-mat.dis-nn", "physics.data-an" ]
=1 myenumerate A dynamical perspective for network analysis] An integrative dynamical perspective for graph theory and the study of complex networks gorka@Zamora-Lopez.xyz Center for Brain and Cognition, Pompeu Fabra University, Barcelona, Spain. Department of Information and Communication Technologies, Pompeu Fabra University, Barcelona, Spain. Institut des Neurosciences des Systemes, INSERM-AMU, Marseille, France. Built upon the shoulders of graph theory, the field of complex networks has become a central tool for studying a wide variety of real systems across many fields of research. Represented as a graph, all those systems can be studied using the same analysis methods allowing for their comparison. In this perspective we challenge the extended idea of graph theory as being a data-driven analysis tool. Instead we show that classical graph metrics (e.g., degree, matching index, clustering coefficient and geodesic distance) arise from a common hidden generative propagation model: the discrete cascade. From this model-based, dynamical perspective, graph metrics are no longer regarded as combinatorial properties of the graph but as spatio-temporal characteristic of the network, unfolded at different temporal scales. Once we acknowledge graph analysis as a dynamical, model-based analysis tool, we are free to replace the original discrete cascade by other propagation models and to derive new network metrics. Explicitly and transparently, opening the oportunity to design personalized network analyses for different classes of real networks by choosing generative models that fulfil the minimal constraints of the empirical systems under study. Thus balancing between simplicity and interpretability of results. [ Matthieu Gilson August 1, 2023 =================== § INTRODUCTION Built upon the shoulders of graph theory, the field of complex networks has become a central tool for studying a wide variety of real systems across many fields of research, e.g. sociology <cit.>, epidemiology <cit.>, neuroscience <cit.>, biology <cit.>, chemistry <cit.> and telecommunications <cit.>. The success of graph theory to permeate over such a variety of domains lies on its simplified representation. In the eyes of graph theory, a system of interacting elements is reduced to nodes and edges. A graph is an abstract manner to describe empirical systems which provides them with a “form” that is mathematically tractable, thus allowing to uncover their hidden architecture and to investigate how this architecture is related to—or affected by—the functions of the real system. Despite its inmense success, the simplicity of graph analysis is at the same time its major limitation. The process of reducing a real system into a graph requires to discard much of the information needed to understand the system. As beneficial as it is to count with a simplified representation and having a common toolbox for all networks, the final step of the analysis is to translate back the outcomes of the graph metrics into interpretations that make sense in the context of the real system. This step—from metrics to interpretation—is prone to personal creativity due to the large simplifications made in first place. Graph theory is for binary networks by definition: the only relevant information about the interactions is whether a link exist between two nodes or not. However, the connections of empirical systems usually carry weighted links that graph analysis is not suited to deal with. 
On the one hand, the combinatorial nature of graph theory cannot treat continuous variables representing link weights. On the other hand, the link weights of empirical networks are not just numerical values; they represent physical or statistical quantities. Weighted graph metrics have often been defined by starting from the equations for a binary metric and directly replacing the binary entries by weighted ones. By doing so, we risk ignoring the physical magnitudes of those weights. These limitations underline the need to establish more flexible network analysis tools that are better suited for the variety of real complex systems studied, allowing for their characterization and individual interpretation. Plenty of work has been devoted to study the bidirectional relation between network structure and the dynamics manifesting on networks <cit.>. Many works have attempted to uncover how specific network features (e.g., the presence of hubs or degree-degree correlations) affect the collective dynamics on a network. Other efforts aimed at employing dynamical processes to reveal the organization of a network and its features by observing the behaviour of diffusion, propagation or routing processes in the network <cit.>. In this perspective we revisit those efforts and we go a step forward by showing that the relation between graphs and dynamics is not only a matter of practical interest but that a foundational correspondence exists between the two. We expose that graph analysis can be reformulated from the perspective of dynamical systems by showing that popular graph metrics arise from a simple but common generative dynamical model: a cascade of discrete agents, which is also discrete in time and rapidly diverges. From this dynamical perspective, graph metrics are no longer regarded as combinatorial properties of the graph but as spatio-temporal characteristic of the network, unfolded at different temporal scales after unit perturbations are applied at the nodes. We believe that exposing this dynamical viewpoint of graph metrics is relevant for various reasons and opens new opportunities for the study of complex networks in a more pragmatic manner. First, it allows to frame the ecosystem of dynamical approaches to characterise networks into perspective, providing a common umbrella to encompass them. Second, it reveals that graph analysis is model-based rather than a data-driven analysis toolbox. Hence, every time we employ graph metrics we are implicitly assuming that a discrete cascade—together with its assumptions and constraints—is the right model to describe a real network. Third, it shows that some of the limitations of graph theory are not necessarily of combinatorial nature, but are associated with the contrains of the discrete cascade behind graph metrics. And last, once we acknowledge graph analysis as a model-based analysis tool, there is no need to get married with a unique model, assumed to be meaningful for all cases. Instead, we are free to replace the original discrete cascade by other propagation models and to derive new network metrics in a similar fashion. Explicitly and transparently. This flexibility will allow to define network metrics in which link weights are built-in, and it provides the oportunity to calibrate network analyses by choosing generative models that respect the minimal constraints of the particular real system under study. Thus balancing between simplicity and interpretability of results. The paper is organised as follows. 
Section II describes the dynamical formulation of graph metrics as emerging from a discrete cascading process. Section III illustrates how networks can exhibit different faces depending on the dynamical process employed to observe them, indicating the need for a generalization of network analysis in which the underlying propagation model is replaceable and explicit. Section IV illustrates some benefits of a dynamical approach to network analysis such as a generalised distance metric and network comparison. Last, Section V provides an overview of past efforts to characterize graphs using dynamics and clarifies how those attempts fall together into a common umbrella of the perturbative formalization here proposed. § A DYNAMICAL REPRESENTATION OF GRAPH ANALYSIS Graphs are typically encoded either as adjacency matrices or adjacency lists. For a graph G made of n nodes its adjacency matrix A is a matrix of shape n × n with entries a_ij = 1 if there is a link connecting nodes i and j, or a_ij = 0 otherwise. Adjacency lists, on the other hand, are the sets of edges E(G) = { (i,j) } for those nodes i and j that are connected by a link. For graph theory, all revelant information about a network is encoded in A or E(G). However, neither the adjacency matrix nor the adjacency list explain the underlying architecture of the graph. For that, graph analysis consists of applying a variety of metrics to extract information that allows to clarify the “form” of a graph. Uncovering the architecture of a network is like building a puzzle because no single graph metric conveys all necessary information we need to fully understand the network. Each metric provides us with a useful but incomplete piece of information about the architecture of the network, and only by integrating several pieces together we can understand how it is organised. Although graph theory is a branch of combinatorial mathematics, algorithmically speaking, graph metrics are rarely evaluated employing combinatorial tools. Instead, most graph metrics are computed by exploring the graph via depth-first-search (DFS) or breath-first-search (BFS) algorithms and applying different rules along the process. From a dynamical point of view DFS and BFS represent two very different propagation processes. Depth-first search corresponds to a conservative dynamical process in which a single agent navigates through the entire graph. The agent moves from one node to another along a link connecting them. This is similar to processes of random walkers since at all time steps there is a unique agent on the network—the one initially seeded. The difference with random walkers is that in a DFS the agent navigates through the network in a prestablished order while a random walker randomly chooses which neighbour visits at the following time step. On the contrary, BFS represents a non-conservative cascading process because for every particle sitting on a node at time t, the process gives rise to one new particle per neighbour at the following iteration t+1. That is, in a BFS type of propagation, agents are not passively transported from one node to another through links. Instead, the outgoing links of a node actively create new particles every time step. Therefore, the number of particles in the network rapidly grows over time. Such a cascade is illustrated in Figure <ref> for a single particle (a tennis ball) initially seeded at node i=7. This node has a single neighbor and thus at time t=1 node i=6 receives one ball. 
At the next iteration, however, node i=6 has four neighbors and thus each neighbor receives one ball. At time t=3 nodes i=6 and i=4 receive more than one ball. Therefore, at time t=4 these two nodes will give to their neighbours one new ball, one per each ball they already had. Without a queue to remember the nodes visited, the dynamical system that describes the cascade behind BFS is the discrete mapping f: ℕ^n →ℕ^n of the form: 𝐱_t = A 𝐱_t-1, where 𝐱_t is the state (vertical) vector of the n nodes. The values x_i,t∈ℕ represent the number of particles found in node i at time t. In principle, the BFS starts with a single particle at a selected node, e.g., choosing i=2 as the root vertex for a network of n=5 nodes, the process would start from the initial conditions 𝐱_0^T = (0,1,0,0,0). Let us initially seed one particle at every node such that 𝐱_0 = 1 with x_i,0=1 for all i. The solutions of the discrete cascade at times t > 0 are obtained recursively from Eq. (<ref>), such that: 𝐱_1 = A 𝐱_0 = A 1, 𝐱_2 = A 𝐱_1 = A ( A 𝐱_0) ) = A^2 𝐱_0 = A^2 1, 𝐱_3 = A 𝐱_2 = A (A ( A 𝐱_0 )) = A^3 𝐱_0 = A^3 1 ⋮ 𝐱_t = A^t 𝐱_0 = A^t 1. The recursive nature of the process implies that the solution at any time t is trivially determined by two quantities: the initial conditions 𝐱_0 = 1 and the powers of the adjacency matrix A^t acting as the propagators (also known as the Green function) of the process over time. Strictly speaking, the values ( A^t )_ji are the number of particles found in node j at time t, due to the single particle initially seeded at node i. More generically, the values ( A^t )_ij can be interpreted in two complementary manners. On the one hand, as the influence that an initial unit perturbation at node i exerts on node j over time, or on the other hand, as the temporal response of node j to a unit perturbation applied on i at time t=0. It is important to note that this conditional pair-wise response encompasses all network effects from i to j acting at different time scales along all (non-Hamiltonian) paths of different lengths. At this point, a connection can be drawn between graph theory—as a combinatorial subject—and the dynamical nature behind the calculation of graph metrics. From graph theory it is well known that the powers of the adjacency matrix A^l encode the number of non-Hamiltonian paths of length l between two nodes, or the number of cycles of length l in which a node participates. For example, the entry (A^3)_ij represents the number of all (non-Hamiltonian) paths of length l=3 starting at node i that reach node j. If i=j, then (A^3)_ii is the number of triangles in which i participates. From a purely combinatorial point of view counting and identifying all possible paths of a given length in a network is a difficult problem to tackle since the number of branches and choices rapidly grow with the length. However, from the dynamical point of view it is a rather trivial exercise. As the derivations above reveal, the combinatorial problem is equivalent to study the propagation of a discrete cascade in the network—one of the simplest dynamical models that can be defined. Our aim here is to show that more than an exceptional coincidence, the dynamical equivalence and interpretation is common for graph metrics, in particular for the most popular and informative ones. 
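To make this correspondence concrete, the following minimal sketch (Python with NumPy; the small adjacency matrix is purely illustrative and not the sample graph of the figures) computes the response matrices ℛ = { A^0, A^1, …, A^t } of the discrete cascade and reads the walk counts directly off their entries:

```python
import numpy as np

# Adjacency matrix of a small, hypothetical undirected graph (illustrative only;
# any binary, symmetric matrix could be used instead).
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [0, 0, 0, 1, 0]])

def cascade_responses(A, t_max):
    """Response matrices R_t = A^t of the discrete cascade x_t = A x_{t-1}."""
    R = [np.eye(A.shape[0], dtype=int)]   # R_0 = A^0 = identity
    for _ in range(t_max):
        R.append(R[-1] @ A)               # R_t = R_{t-1} A = A^t
    return R

R = cascade_responses(A, t_max=4)

# (A^t)_ij is the number of walks of length t between i and j, i.e. the number
# of particles found at j at time t due to a single particle seeded at i.
print(R[2])           # pair-wise responses at t = 2; the diagonal holds the degrees
print(R[3][1, 1])     # closed walks of length 3 starting and ending at node 1

# Seeding one particle at every node, the node states over time are x_t = A^t 1.
x0 = np.ones(A.shape[0], dtype=int)
for t, Rt in enumerate(R):
    print(t, Rt @ x0)
```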
To do so, we realise that all the relevant information needed to characterise the network and to define graph metrics is unfolded—via the generative dynamics—from the adjaceny matrix A onto the response matrices ℛ = { A^0, A^1, A^2, A^3, …, A^t }. In Appendix <ref> we show how several graph metrics (node degree, matching index, clustering coefficient and geodesic distance) are in fact derived from the set of response matrices ℛ. From these derivations we learn two conclusions. First, from this dynamical perspective, graph metrics are no longer regarded as combinatorial attributes of the graph but they correspond to spatio-temporal properties of the network's response to external (unit) perturbations. Second, although the discrete cascade is a system that rapidly diverges, graph metrics are not affect by this because they only represent the properties of the network responses at very short time scales. As shown in Appendix 1, both the degree and the matching index are network attributes expressed at time t=2 the clustering coefficient is a network feature at time t=3. The geodesic distance is the only one that may result from the cascading process at longer time scales (up to t = n-1 if the graph is connected). But for the common small-world real networks, it spans only for times t ≪ n. § PROPAGATION MODEL SELECTION FOR PERSONALIZED NETWORK ANALYSIS The derivations in the previous section and in Appendix 1 allowed us to draw a foundational relation between graph theory and network dynamics by showing that typical graph metrics can be derived from a common generative model—the discrete cascade in Eq. (<ref>)—and, therefore, those metrics can be interpreted as spatio-temporal properties of the network responses to initial unit perturbations. Although defining and deriving the graph metrics from the set of response matrices ℛ_t = { A^0, A^1, A^2, A^3, …, A^t } may seem a complication, this dynamical representation implies two relevant consequences. On the one hand, it reveals that every time we perform graph analysis, we are assuming that the discrete cascade is the appropriate dynamical model to describe the real network under study. Given the wide variety of empirical systems studied with graph analysis, it is unrealistic to assume that one propagation model serves to characterise and interpret all real networks. On the other hand, it opens the door to alleviate this problem by developing a family of graph analysis flavours. Once we have acknowledged that graph analysis is a model-based data analysis tool, we are free to replace the underlying propagation model and design analyses that are better suited for the individual real networks of interest. We envision that in the future, before performing a network analysis, the user will first identify which are the fundamental contraints of the real system investigated (e.g., is it discrete, or is it a continuous system? Is it conservative, or non-conservative?). Once the fundamental ingredients are clear, the user could select the simplest propagation model that satisfies those conditions and develop a personalised network analysis that is tuned for that real network; or for a domain of real networks. In the following, we illustrate how such families for personalised graph analyses could be derived, both for discrete and for continuous generative dynamical models. The goal is, as mentioned above, to identify the network responses ℛ to an initial unit perturbation for different models, and then to extract the information about the network from ℛ. 
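Schematically, the personalised analysis advocated here amounts to a small, replaceable pipeline: choose a propagation model, unfold the connectivity into its responses ℛ, and only then extract summary metrics. The sketch below, with purely illustrative function and registry names of our own, is one way such a pipeline could be organised; the continuous models introduced in the following subsections would be registered in the same manner:

```python
import numpy as np

def discrete_cascade(A, t):
    """Response of the discrete cascade at integer time t: R_t = A^t."""
    return np.linalg.matrix_power(A, int(t))

# Hypothetical registry of generative models; continuous models (leaky cascade,
# Laplacian diffusion, ...) can be added under their own names.
PROPAGATION_MODELS = {"discrete cascade": discrete_cascade}

def network_responses(A, model, times):
    """Spatio-temporal responses R(t) of a connectivity A under a chosen model."""
    propagate = PROPAGATION_MODELS[model]
    return {t: propagate(A, t) for t in times}

# Example use: unfold a connectivity into its responses and extract a summary,
# e.g. the total network response r(t) at each time.
# responses = network_responses(A, "discrete cascade", times=range(5))
# r = {t: Rt.sum() for t, Rt in responses.items()}
```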
A popular propagation model often employed to explore networks is the random walker. The random walk is, like the cascade in Eq. (<ref>), a dynamical model that is discrete both in its variables and in time. The main difference from the cascade is that the random walker represents a conservative system in which one agent, the walker, perpetually navigates through the network. That is, for every walker initially seeded, there is one, and only one, walker in the network at all times. On the contrary, for the discrete cascade, at every time-step, every particle in node i results in k_i new particles. Given the adjacency matrix A, the transition probability matrix T is defined by normalising the columns by their total degree. Hence, the entry T_ij = A_ij / k_j is the probability that a walker sitting at node j at time t moves to node i at time t+1. Formally, the random walker is a mapping f: ℝ^n →ℝ^n of the form: 𝐱_t = T 𝐱_t-1, where the elements x_i(t) represent the expected number of walkers on node i at time t. As for the discrete cascade, the solution of Eq. (<ref>) is iterative. Given initial conditions 𝐱_0, the solution for any time t > 0 is 𝐱_t = T^t 𝐱_0. If we allow one walker to start from each node, x_i(0) = 1, the resulting response matrices are ℛ = { T^0, T^1, T^2, T^3, …, T^t }. Figure <ref> shows the response matrices of the same sample graph for five distinct generative models (two discrete and three continuous). Comparing the results for the discrete cascade and the random walkers, it is seen that the two models give rise to different patterns of pair-wise responses. At the first iteration t=1 the ℛ_1 matrices of both models display a similar pattern that reflects the direct connections. But in the subsequent iterations the response matrices begin to differ between the two models. This shows that model selection matters for the analysis of networks, as each propagation model highlights different aspects of the underlying graph. A major difference between the two models is that the cascade is a divergent process while the random walk is a conservative propagation. The panels on the right display the solutions x_i,t for the eight nodes over time. As seen, the number of agents on each node rapidly grows for the cascade while the expected number of walkers on a node stabilises after a short transient. The divergent or conservative nature of the two models is also reflected in the response matrices. We define the network response r(t) as the sum of all pair-wise responses at each time step, r(t) = ∑_i,j=1^n ℛ_ij(t). Figure <ref>C shows the evolution of the network responses for the five models. For the discrete cascade, the network response rapidly grows because at each iteration every particle gives rise to k_i new particles. However, for the random walkers, r(t) = 8 at all times because it is a conservative system and we initially seeded eight walkers, one per node. §.§ Continuous propagation models The extension of Eq. (<ref>) into the continuous realm is given by the following differential equation: 𝐱̇(t) = A 𝐱(t), where 𝐱^T(t) = [ x_1(t), x_2(t), …, x_n(t) ] is the real-valued state vector of the n variables and A is a real-valued, non-negative connectivity matrix, not restricted to binary values. Given initial conditions 𝐱_0, the solution of this system is: 𝐱(t) = e^At𝐱_0. In this case, the propagator—or Green function—of the network is the function ℛ(t) = e^At instead of the set of power matrices seen for the discrete models.
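As a brief illustration (Python with NumPy/SciPy; the function names are ours, not from the paper), the random-walk responses T^t and the continuous-cascade propagator e^{At} can be computed as follows:

```python
import numpy as np
from scipy.linalg import expm

def random_walk_responses(A, t_max):
    """Responses R_t = T^t of the conservative random walk x_t = T x_{t-1},
    with T the column-normalised adjacency matrix, T_ij = A_ij / k_j
    (every node is assumed to have at least one link)."""
    T = A / A.sum(axis=0, keepdims=True)
    R, Rt = [np.eye(A.shape[0])], np.eye(A.shape[0])
    for _ in range(t_max):
        Rt = T @ Rt                      # T^t
        R.append(Rt)
    return R

def continuous_cascade_response(A, t):
    """Green function R(t) = exp(A t) of the continuous cascade dx/dt = A x."""
    return expm(A * t)

# The network response r(t) is the sum of all pair-wise responses, e.g.:
# r = continuous_cascade_response(A, 0.5).sum()
```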
At every time t', ℛ(t') is a matrix of shape n × n whose elements ℛ_ij(t') = ( e^At')_ij represent the temporal evolution of the response of node j at times t' > 0 to a unit perturbation applied in node i at time t=0. Or, equivalently, ℛ_ij(t) is the influence that node i exerts on j over time. As for the discrete cascade, ℛ(t) encompasses this influence along all possible paths, of all lengths, converging into j at different times. The difference is that now the connectivity A may be weighted, in which case ℛ_ij(t) encloses the influence over all weighted routes. Hence, the response is typically larger between nodes directly connected by strong links, and smaller between nodes connected by weaker links, or not directly connected and relying on indirect paths. In Fig. <ref>B, the evolution of the response matrices ℛ(t) = e^At is shown at various temporal snapshots for the small sample graph. As seen in the first snapshot, shortly after the perturbation the responses are governed by the direct connections and ℛ(t) resembles the adjacency matrix A. But as time passes and the influence between nodes expands to longer paths, the patterns of ℛ(t) change and dissociate from A. The continuous cascade is also a divergent system and the solutions (node activity) x_i(t) grow exponentially, as depicted in the panel at the right. The same holds for the network response r(t), in Fig. <ref>C. Divergent dynamics are rarely representative of empirical systems. A strategy to avoid divergence in Eq. (<ref>) is to include a dissipative term as follows: 𝐱̇(t) = - 1/τ𝐱(t) + A 𝐱(t). The term -𝐱 / τ implies that part of the flow passing through a node will leak, compensating for the exponential growth of the cascading term A 𝐱. The relaxation time-constant τ controls the rate of the leakage: the shorter the τ, the faster the nodes leak. When τ = 0 all the flow is lost through the nodes and nothing will flow from one node to another. Given that λ_max is the largest eigenvalue of the (weighted) connectivity A, the leakage can only compensate the cascading term as long as 0 ≤τ < 1 / λ_max. When τ≥ 1 / λ_max the exponential growth dominates and the system becomes divergent. In this case, we define the network response to an initial unit perturbation 𝐱_0 = 1 as <cit.>: ℛ(t) = ( e^Jt - e^J^0t) where J, with entries J_ij = -δ_ij / τ + A_ij, is the Jacobian matrix of System (<ref>) and J^0 = -I / τ corresponds to the leakage term alone. In Eq. (<ref>), J^0 is regressed out because we are only interested in the responses of the nodes due to the pair-wise interactions; J^0 only represents the passive leakage of the initial perturbation on a node through itself. In this case, the direct connections are more relevant than previously found for the cascade. The patterns of the response matrices at the initial times are dominated by the shape of the connectivity matrix A, see Fig. <ref>B. Only at longer times do the patterns displayed by ℛ(t) change and dissociate from A. As expected, the solutions x_i(t) for the individual nodes decay and relax to zero, right panel of Fig. <ref>B. The network response r(t), however, undergoes a transient peak at the shorter times to later decay and relax to zero, Fig. <ref>C.
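A minimal sketch of the leaky-cascade response, assuming NumPy/SciPy and a connected (possibly weighted) connectivity matrix A, could read:

```python
import numpy as np
from scipy.linalg import expm

def leaky_cascade_response(A, tau, t):
    """Response R(t) = exp(J t) - exp(J0 t) of the leaky cascade
    dx/dt = -x/tau + A x, with J = -I/tau + A and J0 = -I/tau.
    The responses decay only if 0 < tau < 1 / lambda_max(A)."""
    n = A.shape[0]
    J = -np.eye(n) / tau + A
    J0 = -np.eye(n) / tau
    return expm(J * t) - expm(J0 * t)

# A safe choice of the relaxation time-constant, relative to the divergence point:
# lam_max = np.linalg.eigvals(A).real.max()
# R = leaky_cascade_response(A, tau=0.4 / lam_max, t=1.0)
```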
For adequate values of τ, all pair-wise responses vanish after an initial transient. Response matrices to an initial unit perturbation for this case on the sample network are shown in Fig. <ref>B together with the decaying individual solutions x_i(t) of Eq. (<ref>) for each node. The transient behaviour of the total network response r(t) is shown in Fig. <ref>C. §.§ Diffusive coupling and the Laplacian matrix The two continuous models described so far are non-conservative because the coupling i → j is mediated by passing the state x_i of node i to the target node j. Thus, the total response of j is the sum ∑_i=1^n A_ij x_i of the states of its neighbours. However, in many systems the interaction between nodes is mediated by the difference (x_j - x_i) between the nodes, as is for example the case in the Kuramoto model, where the strength of the interaction between two oscillators is proportional to their phase difference: θ̇_i ∝∑_j=1^n sin(θ_j - θ_i). This type of coupling through differences is usually termed diffusive coupling and is characteristic of conservative dynamical systems because it guides nodes towards the mean-field. The simplest linear propagation model based on diffusive coupling can be written as: ẋ_i = ∑_j=1^n A_ji( x_j - x_i). Since the sum only affects the j index, we have that ẋ_i = ∑_j=1^n A_ji x_j - ∑_j=1^n A_ji x_i. If A is the binary and symmetric adjacency matrix, then ∑_j=1^n A_ji = k_i, the degree of node i. Thus, the system can be re-written as: ẋ_i = - k_i x_i + ∑_j=1^n A_ji x_j. Defining D as the diagonal matrix with entries D_ii = k_i, we express the system in matrix form: 𝐱̇ = - D 𝐱 + A 𝐱 = L 𝐱, where L = -D+A is usually known as the Laplacian matrix. Comparing Eqs. (<ref>) and (<ref>) shows that this conservative system can be regarded as a special case of the leaky-cascade system in which the time-constants of the nodes are individually tuned such that τ_i = 1/k_i. This tuning balances the input and the leakage ratios at every node: all the input that arrives at a node is leaked, resulting in a zero net flow at all time points. The solution of Eq. (<ref>) with initial conditions 𝐱_0 = 1 is 𝐱(t) = e^Lt 𝐱_0. Following the rationale for the definition of the response function for the leaky cascade, we define: ℛ(t) = ( e^Lt - e^L^0t) where L^0 = -D corresponds to the leakage term. The temporal evolution of ℛ(t) is depicted in Fig. <ref>B (bottom panels). Again, in the initial moments after stimulation ℛ(t) seems governed by the direct connections, although at subsequent times the patterns change. In particular, it can be seen that the connections of the most connected node rapidly start to lose relevance, as compared to the responses of the leaky cascade, in which the hub is reinforced at early times. Given the zero net flow through the nodes, their solution is constant, 𝐱(t) = 1 for all t, right panel at the bottom of Fig. <ref>B. For the same reason, the network response r(t) = 8 at all times, as the sum of the unit contributions of the eight nodes. Summarizing, in this section we have illustrated how to define the network responses ℛ to a unit perturbation for five simple propagation models: two discrete and three continuous, of which two represent conservative dynamics and three are non-conservative. These five models could serve as the underlying generative dynamics to perform model-based network analysis, suited for different classes of real systems.
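The corresponding response for the diffusively coupled (Laplacian) system can be sketched in the same spirit, again assuming a binary, symmetric A (Python/NumPy/SciPy, illustrative names):

```python
import numpy as np
from scipy.linalg import expm

def laplacian_response(A, t):
    """Response R(t) = exp(L t) - exp(L0 t) of the diffusively coupled system
    dx/dt = L x, with L = -D + A (D the diagonal degree matrix) and L0 = -D."""
    D = np.diag(A.sum(axis=1))       # degree matrix
    return expm((-D + A) * t) - expm(-D * t)
```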
The goal, as stated above, is to extract information about the network by defining metrics from the spatio-temporal responses ℛ. Importantly, the results depicted in Fig. <ref>B expose that one network, the small sample graph depicted, can display different faces depending on the dynamical model employed to “observe” it. Each dynamical model expresses the various features of the network in a different manner. For example, here the leaky cascade seems to expose the connections of the hub robustly (node i=4 in the sample graph) while the approach based on the diffusive coupling—the Laplacian matrix—tones down the links of the hub and reinforces instead the links between peripheral nodes (e.g., links 1-2 and 1-3 of the sample graph). § EXAMPLES AND APPLICATION The core idea of this perspective is that all the information needed to characterise a network is encoded in the spatio-temporal responses to a unit perturbation, ℛ, which differs depending on the generative model of choice. Once ℛ is defined for a given model, the challenge is then to derive network metrics from ℛ to describe the network, in line with the finding that classical graph metrics originate from the power matrices ℛ_t = A^t associated to the discrete cascade. While detailed derivation of measures shall be regarded as a future endeavour, we now provide examples to illustrate that the dynamical approach to network analysis here proposed can serve to overcome some of the limitations of classical graph analysis. We first explain how to derive a more general definition of distance between nodes for the continuous case, and then we deal with the problem of network comparison. For these proofs of concept we will restrict to the continuous propagation with leakage in Eq. (<ref>) and its response function, Eq. (<ref>). §.§ Graph distance as response times In graphs, the (geodesic) distance between two nodes i and j is quantified as the smallest number of links that an agent needs to traverse, hopping through links, to go from node i to j, Fig. <ref>A. However, this concept is only valid for the case of unweighted, binary graphs and discrete agents or particles navigating in the network. If the edges of a graph are weighted, or the system cannot be represented by discrete particles, then the idea of an agent `hopping' through links is incompatible and an alternative definition of distance is required. In Appendix 1, we show that from the dynamical perspective here introduced, the graph distance between two nodes corresponds to the time step t at which a discrete cascade initiated at node i arrives for the first time in node j. This redefinition of distance in terms of time allows for a more flexible application. Consider the leaky-cascade in Eq. (<ref>). The pair-wise responses R_ij(t) due to an initial perturbation at node i, undergo a transient growth followed by a decay dominated by the leakage term, as depicted in Fig. <ref>B. In this scenario, we define the distance from node i to j as the time required for the response of node j to an initial perturbation on i to reach its peak. We demonstrate this on three undirected graphs (random, scale-free-like and a ring lattice) of n = 100 and density ρ = 0.1. The scale-free-like network was generated for γ = 2.5. The adjacency matrices, the graph distance D^g_ij and the time-to-peak distance D^ttp_ij matrices for the three graphs are shown in Fig. <ref>C. 
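One possible implementation of the time-to-peak distance, based on sampling the leaky-cascade responses on a regular time grid, is sketched below; the grid and the brute-force use of the matrix exponential are our simplifying assumptions, not the paper's prescription:

```python
import numpy as np
from scipy.linalg import expm

def time_to_peak_distance(A, tau, t_max=20.0, n_steps=2000):
    """Pair-wise distance D^ttp_ij: the time at which the leaky-cascade response
    R_ij(t) = (exp(J t) - exp(J0 t))_ij peaks, sampled on a regular time grid.
    Assumes a connected graph and tau < 1/lambda_max(A), so responses decay."""
    n = A.shape[0]
    J = -np.eye(n) / tau + A
    J0 = -np.eye(n) / tau
    times = np.linspace(0.0, t_max, n_steps)
    responses = np.stack([expm(J * t) - expm(J0 * t) for t in times])
    D_ttp = times[np.argmax(responses, axis=0)]   # peak time per pair (i, j)
    np.fill_diagonal(D_ttp, 0.0)                  # distance of a node to itself
    return D_ttp
```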
Visually, D^g_ij and D^ttp_ij look very much alike, indicating that in this unweighted case the time-to-peak and the classical graph distance are qualitatively equivalent. Quantitatively, the agreement is close although not exact, Figure <ref>D. While graph distance is a discrete quantity, time-to-peak is continuous; thus there is some level of degeneracy in the time-to-peak values taken by the pairs at the same graph distance. This variation is small, however, and a reasonable linear correlation is found between the geodesic distance and the time-to-peak distance. It shall be noted that in the particular case of the leaky cascade, the response dynamics depend on the intrinsic relaxation time-constant τ governing the rate of the leakage. For the examples in Fig. <ref> the values of τ were chosen independently for the three networks such that τ = 0.4 τ_max, where τ_max = 1/λ_max and λ_max is the largest eigenvalue of each network. The value of τ can alter the linear relation between D^g_ij and D^ttp_ij, widening the degeneracy and ultimately saturating it when τ is close to τ_max. The larger the τ, the slower the leakage, bringing the system towards the transition from convergent to divergent behaviour as the leakage can no longer balance the flow generated by the connectivity. §.§ Comparing networks with each other The outcome of graph metrics is influenced by the size n and the density ρ (or number of links m) of a network. This dependence hinders the comparison between networks. Imagine we study two graphs G_1 and G_2 of the same size n_1 = n_2 but one is denser than the other, say ρ_1 = 0.01 and ρ_2 = 0.06. If we obtained average pathlengths l_1 = 3.5 and l_2 = 2.9 respectively, the objective evidence is that G_2 is, as a graph, shorter than G_1. However, it is well-known that the pathlength typically decays with graph density. Hence, we may also want to ask whether l_2 < l_1 simply because G_2 is denser than G_1, or because their internal architectures differ. In order to answer this question we need to regress out the influences of both size and density on the pathlength. Since the specific dependence of a graph metric on size and density is not always known, the typical strategy to deal with this problem consists of comparing empirical networks to simple graph models (null-models), e.g. random graphs or degree-conserving random graphs, Fig. <ref>A. For example, we would construct two ensembles of random graphs matching the size and the number of links of the two graphs studied and we would obtain the corresponding ensemble average pathlengths l_r,1 and l_r,2. Then, one would typically compare the relative metrics l'_1 = l_1 / l_r,1 and l'_2 = l_2 / l_r,2 with each other to derive conclusions about which network is shorter. This typical procedure suffers from some conceptual and interpretative limitations <cit.>. From the dynamical perspective on network analysis proposed here there is no need to employ null-models for comparing networks. Instead, a simple normalization of the connectivity suffices to align networks of different size and/or density, making them comparable. The largest eigenvalue λ_max of a connectivity matrix captures the intrinsic time scale of the evolution of a network, regardless of the dynamical model. Hence, for any two networks, normalising the connectivity matrices by their corresponding λ_max such that A' = A / λ_max aligns the time scales of their responses, making them directly comparable <cit.> (see Fig. <ref>B for illustration).
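The spectral normalisation itself is short; the sketch below (Python/NumPy, illustrative names) rescales a connectivity matrix so that its largest eigenvalue equals one, after which the responses of different networks evolve on the same intrinsic time scale:

```python
import numpy as np

def normalise_connectivity(A):
    """Rescale A by its largest eigenvalue so that the normalised matrix A'
    has lambda_max = 1, aligning the intrinsic time scale of the responses
    across networks of different size and density."""
    lam_max = np.max(np.real(np.linalg.eigvals(A)))
    return A / lam_max

# Usage: two networks become directly comparable after the normalisation, e.g.
# by computing their leaky-cascade responses with the same tau < 1:
# A1n, A2n = normalise_connectivity(A1), normalise_connectivity(A2)
```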
The largest eigenvalues of the normalised connectivities A'_1 and A'_2 are the same: λ'_max,1 = λ'_max,2 = 1.0. It shall be noted that after the normalization the matrices A'_1 and A'_2 are weighted. Standard graph theory cannot deal with these normalised connectivities as it requires adjacency matrices to be binary, with entries 0 or 1. However, for the dynamical approach to network analysis, dealing with such weighted networks is natural. In Figs. <ref>C-E we show the results of this normalization on three network models: random graphs (uniform probability), scale-free-like graphs and Watts-Strogatz graphs. For each of the three models we generated four graphs of size n = 200 or 500 nodes, and densities ρ = 0.06 or 0.1. We studied the responses ℛ(t) of the networks using the leaky continuous cascade as the generative dynamics and following Eq. (<ref>). Within each model, the network responses r(t) of the four graphs (top panels) display different amplitudes and characteristic time-scales even though the internal architectures of the networks are equivalent—as they are instances of the same graph model. Next, we normalised the connectivity matrices by their corresponding λ_max and recomputed the responses ℛ'(t). As shown (bottom panels), the normalisation aligns the temporal scales of the four networks. The response curves r'(t) collapse in pairs of different amplitude, and the difference in response amplitudes depends only on the network size. A further normalization of the responses ℛ'(t) by network size n would fully align the four curves; however, this would not make the networks any more comparable. The effect of the normalisation by λ_max is that the average response per node is the same in all networks. This aligns the internal variances of the networks that arise from their internal architecture. We illustrate this by studying the relation between the node-wise responses and the node degrees in the original graph, before and after the normalization. The node response r_i(t) is defined as the temporal response of a node to all the initial perturbations. It is computed as the column or row sums of the response matrices: r_i(t) = ∑_j=1^n ℛ_ij(t). Then, the total node response r_i accounts for the accumulated response at the node from the initial time t=0 and is calculated as the integral (or area under the curve) over time, r_i = ∫_t=0^∞ r_i(t) dt. A linear relation between the original degrees k_i of the binary graph and the node responses r_i is observed in all the networks. Networks generated out of the same graph model are expected to follow the same degree distribution, although the actual values k_i tend to grow with network size and density. Comparing to the r_i before the normalisation (top panels), we find the same trend for the r_i values taken by the nodes: their absolute values grow with the n and ρ of the underlying original graphs, forming separate “clouds” of points in the plots. However, after the adjacency matrices have been normalised (bottom panels), the values for the responses r'_i of the four networks become aligned, showing that both the r_i values and their distributions p(r_i) are now directly comparable across networks. § RELATION WITH PAST WORKS The relation between network structure and function has attracted significant attention in recent years. More specifically, “function” usually refers in the literature to the behaviour of the dynamical phenomena happening on a network.
Typically, the study of this relation falls into one of the following three categories: (i) Investigations that aim at uncovering how network architecture or network features (e.g., the degrees of nodes or the presence of motifs) affect the dynamics on a network <cit.>. (ii) Studies making use of dynamical processes in order to reveal the architecture of the networks or specific network properties, e.g., community identification or defining centrality measures <cit.>. The present article falls in this category. And (iii) studies of network inference aiming at recovering the unknown or incomplete information about the architecture of a network from empirically observed dynamics <cit.>. Three types of dynamical approaches are usually investigated on networks. (i) Coupled dynamical systems constructed by placing a dynamical unit at each node which interacts with other nodes according to an underlying connectivity matrix. Examples of such coupled node dynamics could be neurons, oscillators or chaotic attractors. (ii) Propagation, diffusion, spreading or navigation dynamics. This class comprises a wide variety of approaches encompassing both discrete units (e.g., agents, particles, information packets, viruses, goods or money) and continuous variables (e.g., electrical current, flows or influence). And (iii) heuristic approaches whose underlying generative dynamics are hidden, implicitly assumed or unknown. There are many such cases in the literature of network analysis, especially regarding the definition of centrality measures, as we will illustrate next. The aim of this perspective article is to expose that a hidden dynamical model lies at the origin of the common graph metrics, to call for a new point of view on graph analysis from the perspective of dynamical systems, and to propose a generalised frame in which the generative model can be adapted to the needs of the real systems under study. Concerns regarding the lack of transparency, implicit assumptions and hidden generative dynamics are not new in the study of networks, especially with respect to the definitions of centrality. The concept of centrality is intuitively related to propagation phenomena in networks. The literature has been prolific in proposing centrality measures, e.g. degree centrality, closeness centrality, betweenness centrality, eigenvector centrality and Katz centrality. The majority of those classic measures were defined following an intuitive but ad-hoc rationale. The variety of centrality measures, the rather opaque definitions and the implicit assumptions behind them have led to debates calling for clarity <cit.>. As stated by S.P. Borgatti <cit.>: What is not often recognized is that the formulas for these different measures make implicit assumptions about the manner in which things flow in a network (…) the discussion of centrality has largely avoided any mention of the dynamic processes that unfold along the links of a network (not to mention the processes that shape the network structure). Yet, the importance of a node in a network cannot be determined without reference to how traffic flows through the network. And he concludes that: …the off-the-shelf formulas for centrality measures are fully applicable only for the specific flow processes they are designed for, and that when they are applied to other flow processes they get the `wrong' answer. It is noted that the most commonly used centrality measures are not appropriate for most of the flows we are routinely interested in.
This conclusion very much resonates with the aim of this perspective, although here our aim is to generalise these ideas to the essence of graph analysis, beyond centrality measures. To be fair, it shall be noted that in recent years several centrality measures have been proposed for which the underlying propagation model is made explicit. For example, approaches based on propagation kernels tunable for various spatial scales <cit.>, the propagation of random walkers <cit.>, or a measure based on the conservative diffusion mediated by the Laplacian matrix <cit.>, i.e., the same system defined in Eq. (<ref>). We now expose the hidden dynamical origin and the implicit assumptions of two popular network measures, a classic one (the Katz centrality) and a more recent one (communicability), and establish the connection between the two measures thanks to the dynamical viewpoint endorsed in this article. §.§ Katz centrality and communicability Given that the powers of the adjacency matrix (A^l)_ij determine the number of paths of length l between nodes, it has often been recognised in the literature that this construct should be key to explain the functional relation between two nodes, not only through the direct or shortest paths, but encompassing all possible routes of any length to travel from node i to j. In fact, it has often been proposed that the influence of one node over another should depend on the accumulated routes that an agent could possibly travel between the two, say: Q = A + A^2 + A^3 + A^4 + A^5 + … The problem with this expression is that for any binary adjacency matrix the sum diverges, as the values (A^l)_ij rapidly grow with the length l. In order to avoid this, Katz (1953) <cit.> proposed to include an attenuation factor α which “has the force of a probability of effectiveness of a single link.” In other words, α is a weight given to the links in order to tune their efficiency of transmission. Then, the expression above is rewritten as follows: Q^K = α A + α^2 A^2 + α^3 A^3 + α^4 A^4 + α^5 A^5 + … When α < 1 is small enough, this reduced efficiency of transmission is able to compensate for the growth of the (A^l)_ij and guarantee the convergence of the sum. This is achieved for α < 1 / λ_max. Notice that α A is now a weighted graph. Including the identity, the series can be summed in closed form: I + Q^K = I + α A + α^2 A^2 + … = ∑_l=0^∞( α A )^l = ( I - α A )^-1, where I is the identity matrix; hence Q^K = ( I - α A )^-1 - I. Following this, Katz defined the centrality of all nodes in the network, 𝐜^K, as: 𝐜^K = ( ( I - α A )^-1 - I ) 𝐮 = Q^K 𝐮, where 𝐮 is the unit vector 𝐮^T = (1,1, …, 1) and the matrix Q^K encodes the net influence that one node exerts over another through all possible paths at all distances, given that at every step along the path the influence decays by the ratio α. The Katz centrality c_i^K thus quantifies the summed influence that a unit perturbation applied to all nodes, the vector 𝐮, has over node i—excluding the self-influence triggered by the perturbation u_i at node i. It is also common to find in the literature the Katz centrality expressed as 𝐜^K = ( I - α A )^-1𝐮, which includes the self-influence. Another popular approach motivated by characterising the influence of nodes beyond shortest paths is the communicability metric <cit.>. In this case, the solution to guarantee the convergence of the sum of the powers A^l, Eq. (<ref>), was to choose the factorial coefficients 1 / l! such that I + A + 1/2! A^2 + 1/3! A^3 + … = ∑_l=0^∞A^l/l!.
This series converges for any adjacency matrix A since it is the series expansion of the matrix exponential e^A. Optionally, a factor α can be included to define communicability as the following pair-wise influence matrix: Q^C = I + α A + 1/2!α^2 A^2 + … = ∑_l=0^∞(α A)^l/l! = e^α A. In both approaches, the factor α tunes the “depth” at which influence can be exerted. When α = 0 no information or influence can pass across the links. Increasing α will first favour transmission along the direct connections and then the shorter paths. Increasing α further will eventually allow the longer paths to take effect and—for the case of the Katz centrality—when α ≥ 1 / λ_max it will make Q^K diverge. The factor α can be regarded either as an attenuation factor, a resistance or, more generally, as a coupling strength associated with the links. The rationale behind Katz centrality and communicability is identical, as both approaches define a metric of pair-wise influence exerted over all possible paths, of all lengths. The only difference between the two approaches is that Katz centrality assumes a constant tuning or attenuation of all paths, regardless of their length. That is, at every step, the propagation through a link suffers the same attenuation α, regardless of how many steps were taken before. In contrast, the nonlinear factorial coefficients 1/l! of communicability punish the longer paths more strongly, which enforces the convergence. In other words, the only difference between Katz centrality and communicability is that they are the result of different propagation models. It is rather trivial to show that Katz centrality is driven by the leaky cascade of Eq. (<ref>) while communicability is the product of the continuous cascade in Eq. (<ref>). Given the leaky cascade subjected to a constant input 1, 𝐱̇ = - 𝐱 / τ + A 𝐱 + 1, the steady-state solution (setting 𝐱̇ = 0) is given by ( I / τ - A ) 𝐱̃ = 1. Solving for 𝐱̃ we have that <cit.>: 𝐱̃ = ( I / τ - A )^-1 1, which is, up to an overall factor τ, the definition of Katz centrality (𝐱̃∝𝐜^K) in the version that accounts for the self-influences, with the attenuation factor being α = τ. Whether α divides the self-activity of the nodes (the term -𝐱 / α) or multiplies the connectivity matrix (α A), and whether it is interpreted as a dissipation factor, a leakage term, a coupling strength or an attenuation factor, is just a matter of convenience to be coherent with the real system under study. Mathematically, those forms are all identical. Now, recalling that the solution of the continuous cascade 𝐱̇(t) = A 𝐱(t) to initial conditions 𝐱_0 is given by 𝐱(t) = e^At 𝐱_0, it becomes clear that communicability is just the propagator—the Green function—of the continuous cascade: Q^C ≡ e^At = I + At + 1/2! (At)^2 + … = ∑_l=0^∞(At)^l/l!. This expression clarifies the intrinsic relation between the “attenuation factor” (played here by the time t) and the depth of the paths involved in the influence between nodes. After a short time t > 0, transmission along the shorter paths dominates but, as time passes, the influence of the longer paths starts to take effect. This is reflected in the evolution of the response matrices for the continuous cascade in Fig. <ref>B. The initial response matrix very much resembles the adjacency matrix, since the accumulated influence is dominated by the direct connections but, as time passes, the influence through the longer paths takes effect, leading to a pair-wise response architecture that is dissociated from A and in which only the degrees of the nodes matter.
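For completeness, a hedged sketch of both measures and of their dynamical reading (Python/NumPy/SciPy; function names are illustrative and not from the paper) could be:

```python
import numpy as np
from scipy.linalg import expm

def katz_influence(A, alpha):
    """Q^K = (I - alpha A)^{-1} - I: influence summed over walks of all lengths
    with constant attenuation alpha (requires alpha < 1/lambda_max)."""
    n = A.shape[0]
    return np.linalg.inv(np.eye(n) - alpha * A) - np.eye(n)

def katz_centrality(A, alpha):
    """c^K = Q^K u: accumulated influence received by each node."""
    return katz_influence(A, alpha) @ np.ones(A.shape[0])

def communicability(A, alpha=1.0, t=1.0):
    """Q^C = exp(alpha A t): propagator (Green function) of the continuous
    cascade dx/dt = alpha A x, evaluated at time t."""
    return expm(alpha * A * t)

# Dynamical check (sketch): the steady state of the leaky cascade driven by a
# constant unit input, dx/dt = -x/tau + A x + 1, solves (I/tau - A) x = 1, i.e.
# x = (I/tau - A)^{-1} 1, which is proportional (by the factor tau) to the Katz
# centrality with self-influence included and attenuation alpha = tau.
```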
Also, the origin of communicability in the continuous cascade exposes that communicability is divergent by definition and, hence, its analysis—as for the standard graph metrics—must be bounded to the shorter time-scales by a temporal cut-off. This could be somewhat alleviated by adding the attenuation constant or a coupling strength to the system, 𝐱̇(t) = α A 𝐱(t), in order to tune the rate of divergence of communicability as Q^C = e^α At. We cannot close this brief overview without mentioning the extensive efforts made in recent years to define network metrics for the case of random walkers. Given their Markovian and conservative nature, processes of random walkers on networks are mathematically tractable and well-behaved (the dynamics do not diverge). Hence, random walkers represent a convenient approach to explore networks and to define metrics to characterise them, much in line with the goals of the present article. Besides the aforementioned centrality measures, significant work has been done to identify communities based on the susceptibility of a walker to remain trapped in a module or to jump from one community to another <cit.>. § DISCUSSION Traditionally, graph theory has been considered a data-driven analysis tool and, therefore, its metrics regarded as applicable to any system that is represented as a graph. Methods to explore and characterise networks based on propagation processes have also been proposed, especially regarding the definition of centrality measures and community detection. Sometimes, those methods have been defined following an ad-hoc rationale rather than being derived from first principles, e.g., Katz centrality and communicability, which are based on the idea that the powers of the adjacency matrix inform of the number of non-Hamiltonian paths from one node to another. Other efforts have explicitly employed processes of random walkers to define diverse methods, e.g., to identify communities. In any case, whether the underlying dynamics were implicit or explicit, all those methods have been proposed as if they were universal, useful to study any network. Here, we have exposed that classical graph metrics (e.g., degree, matching index, clustering coefficient and geodesic distance) are founded on a hidden generative propagation model: the discrete cascade. We have also shown that other network metrics—the Katz centrality and the communicability—which are usually thought of as generic or model-agnostic metrics, are also the product of specific propagation models. Communicability originates from the continuous cascade in Eq. (<ref>), and Katz centrality is derived from the continuous leaky cascade in Eq. (<ref>). These observations reveal that, contrary to common belief, graph analysis is not data-driven but model-based. Now, the problem with model-based data analysis methods is that of model selection. In statistics, for example, one would never apply certain metrics to a dataset unless the data first passes a Gaussianity test, because if the dataset were not Gaussian the outcome of those measures would not be interpretable. So, graph analysis needs its particular Gaussianity test. We advocate for transparent network analysis tools in which the underlying dynamical model, the assumptions and the constraints are explicit, instead of hidden or implicitly defined. And we will have to recognise that graph metrics are not universal. All graph metrics are valid.
The traditional metrics (clustering, geodesic distance, etc.), Katz centrality, communicability, community detection based on random walkers, and so on: all of them are valid formulations to characterise graphs and complex networks. The question is not whether they are useful, but to understand when and where it is meaningful to use them. Once we have acknowledged that graph analysis is a model-based data analysis tool, we are free to replace the underlying propagation model and design analyses that are better suited for particular real networks. We imagine that in the future, before performing a network analysis, a user will first identify the fundamental constraints of the real system investigated. Is it discrete, or is it a continuous system? Is it conservative or non-conservative? Once the fundamental ingredients of the system are clear, the user will select the right model that satisfies those conditions and develop a personalised network analysis that is tuned for that real network, or for a family of real networks. For this scenario to be plausible, the remaining challenge is to define those model-based measures for different propagation models. Here, we have proposed a possibility inspired by the fact that the typical graph metrics can be derived as spatio-temporal properties of the network responses to a unit perturbation. Our proposal is to derive the network (pair-wise) response function ℛ(t) (i.e., the Green function of the network for a given propagation model), and to extract the information about the network from the corresponding ℛ(t). Although defining such network metrics might not always be trivial, we have shown that this dynamical point of view on network analysis brings several benefits. First, we could illustrate that, in fact, networks look different when seen through the lens of different propagation models. Even for the same network, each model highlights some aspects of the network and ignores others. Second, this approach can naturally deal with weighted networks—as long as the meaning of the weights is compatible with the dynamical process. And third, normalizing the connection weights of networks by their largest eigenvalues aligns their temporal scales, facilitating their comparison without the need for null-models. We conclude with the following reflection. Given the number of existing centrality measures, how come PageRank became so successful? Very likely, the answer is simply that the propagation model behind PageRank is a crude but reasonable approximation to the underlying human behaviour while navigating the internet. The implicit assumptions and propagation models behind other centrality measures were not compatible with the description of humans surfing the world-wide web. We believe that if network analysis as a field moves beyond the idea of graph theory and its satellite approaches as universal tools that serve all networks, and transitions into more personalised analysis strategies that transparently and naturally encompass the minimal constraints and assumptions of each real system, we will see other success stories similar to that of PageRank. We understand that such a transition can only happen at the cost of universality—a price many will find difficult to pay—but by doing so there is plenty of specificity and interpretability to gain.
With this in mind, we shall stress that the dynamical perspective we have discussed here, while it serves as an umbrella to encompass many of the existing analyses and approaches, should not be regarded as the ultimate solution either. Surely there are many real-world systems amenable to a network representation but whose analysis in terms of propagation and navigation is not suitable. Their study may require employing other forms of network analyses. In this new scenario that is opening, what matters is to choose the right tools for the right case. This work has been supported (GZL and MG) by the European Union's Horizon 2020 research and innovation programme under Specific Grant Agreement No. 785907 (HBP SGA2) and Specific Grant Agreement No. 945539 (Human Brain Project SGA3). MG also acknowledges funding from the Marie Skłodowska-Curie Action (Grant H2020-MSCA-656547) of the European Commission. § DYNAMICAL REPRESENTATION OF GRAPH METRICS From the point of view of graph theory, all the relevant information about the network is encoded in the adjacency matrix A. Combinatorial or algorithmic methods then allow one to answer different questions about the architecture of the graph in the form of graph metrics. The dynamical paradigm shown in the previous sections exposes that under the discrete cascade in Eq. (<ref>), the structural information in A is unfolded into the set of powers ℛ = { A^0, A^1, A^2, A^3, …, A^t } representing the temporal response of the network to an initial perturbation in all nodes. We now show how fundamental graph metrics are encoded by the response matrices ℛ. The degree of a node, k, is defined as the number of neighbours of the node. Usually, it is calculated as the row or column sum of the adjacency matrix such that k_i = ∑_j=1^n A_ij. In the dynamical perspective, the degree is expressed as the number of particles returning to the node at short time scales. A particle starting at node i causes each neighbour of i to receive one particle in the first iteration. In the second iteration, t=2, new particles will propagate to the neighbours of each node. The single particle starting from i at t=0 results in i receiving one particle per neighbour at time t=2. In other words, the degree is the influence that a node exerts on itself at time t=2 and it is thus represented by the diagonal elements of A^2: k_i = (A^2)_ii. The matching index is a measure of the structural similarity between two nodes. It is evaluated by counting the number of common neighbours since two nodes that share the same connections play an identical role in the graph. Given that 𝒩(v) is the set of nodes connected to vertex v—the neighbourhood of v—the number of common neighbours between two nodes is quantified as the size of the overlap of their neighbourhoods: m(i,j) = |𝒩(i) ∩𝒩(j)|. From the adjacency matrix, m(i,j) is calculated by comparing the rows corresponding to the two nodes such that m(i,j) = ∑_k=1^n A_ik A_jk. Usually, the matching index M(i,j) is normalised as the fraction between the number of common neighbours and the total number of nodes adjacent to either i or j: M(i,j) = |𝒩(i) ∩𝒩(j)|/|𝒩(i) ∪𝒩(j)| = m(i,j)/(k_i + k_j - m(i,j)). Thus, M(i,j) = 0 when i and j have no neighbours in common and M(i,j) = 1 when both nodes are connected to the same, and only the same, neighbours. Under the perspective of the discrete cascading, the overlap m(i,j) can be regarded as the “convergence zone” of two simultaneous propagations, one starting from i and the other from j.
Imagine the initial conditions 𝐱_0 with x_k,0 = 1 if k = i,j and x_k,0 = 0 otherwise. After the first iteration, nodes adjacent to either i or j will receive one particle and the only nodes with two particles are those adjacent to both i and j. At the second time-step, node j receives one particle, due to the initial one on i at t=0, from each of the nodes shared with i. Therefore, the number of common neighbours m(i,j) between i and j is reflected in the matrix element (A^2)_ij. In other words, the influence that i exerts on j at time t=2—or the influence of j on i—is mediated exclusively via their common neighbours. If they had no common neighbours, then there would be no influence between them at this time step. As shown before, the degrees k_i are encoded in the entries (A^2)_ii; thus, substituting in Eq. (<ref>) we can express the matching index in terms of the discrete propagation as: M(i,j) = (A^2)_ij/((A^2)_ii + (A^2)_jj - (A^2)_ij). This expression illustrates that in dynamical terms the normalised matching index is regarded as the fraction of the influence between two nodes that is routed via the common neighbours, and it thus invites a generalisation of the index to subsequent time steps t > 2 by allowing the subsequent powers into Eq. (<ref>). Such a generalisation should also open the door to defining an equivalent metric when the underlying discrete cascade is replaced by other, more general dynamical models. The clustering coefficient, C, is a popular graph metric. It quantifies the probability that the neighbours of one node are connected with each other. In social terms, it answers the question of how likely it is that “my friends are also friends with each other”. In practice, the clustering coefficient is calculated by counting the number of triangles in a graph since a link between two neighbours of a node leads to a triangle. It is well-known that the diagonal entries of A^3 represent the number of triangles—cycles of length l=3—in which nodes participate and that the total number of triangles in a graph is given by n(△) = 1/3 tr(A^3) = 1/3∑_i=1^n (A^3)_ii, where the factor 1/3 accounts for the fact that every triangle is counted once per node. For the clustering coefficient to be a probability, it is normalised by the total number of triads n(∨), or paths of length l = 2 in the graph. Thus, C is 1 only if all the triads form closed paths. In terms of the powers of A, the total number of paths of length l = 2 is calculated as n(∨) = |A^2| - tr(A^2), where | · | represents the sum of all the elements of the matrix, and tr(·) is the trace. So, the clustering coefficient is calculated as: C = 3 n(△)/n(∨) = tr(A^3)/(|A^2| - tr(A^2)). Under the dynamical perspective of the discrete cascading in Eq. (<ref>), the quantity |A^2| - tr(A^2) represents the number of particles that are generated in the iteration from t=1 to t=2, or in other words, the total influence exerted across nodes at time t=2. The quantity tr(A^3) is the number of particles returning to the nodes, or the self-influence at t=3. Thus, in dynamical terms the clustering coefficient can be interpreted as how much of the influence generated by the network at time t=2 falls back to the nodes at t=3. It should be noted that if the degree is a metric of the influence of a node over itself at t=2, the clustering coefficient is a metric of the influence that nodes exert on themselves at time t=3. The difference is that the clustering is normalised in order to take the form of a probability.
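These correspondences are easy to verify numerically. The following short sketch (an illustration added here, written in Python/NumPy with an arbitrary toy graph, not code accompanying the original analysis) reads the degree, the normalised matching index and the clustering coefficient directly off the powers of the adjacency matrix, as described above.

import numpy as np

# Toy undirected, unweighted graph (adjacency matrix chosen arbitrarily).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]], dtype=float)

A2 = A @ A            # pair-wise influence at t = 2
A3 = A2 @ A           # pair-wise influence at t = 3

# Degree: self-influence at t = 2, i.e. the diagonal of A^2.
k = np.diag(A2)       # identical to A.sum(axis=1)

# Normalised matching index between nodes i and j.
def matching_index(i, j):
    return A2[i, j] / (A2[i, i] + A2[j, j] - A2[i, j])

# Clustering coefficient: fraction of the influence generated at t = 2
# that falls back onto the nodes at t = 3.
C = np.trace(A3) / (A2.sum() - np.trace(A2))

print(k, matching_index(0, 1), C)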
Last, the dynamical definition of C allows for a natural generalisation of the probability of self-interaction at any time, such that for all t > 1, C_t = tr(A^t)/(|A^t-1| - tr(A^t-1)). The distance, d_ij, between two nodes in a graph is defined as the minimal number of links that need to be traversed in order to reach j from i. Graph distance cannot be derived from the adjacency matrix alone. Its calculation requires navigating through the graph, e.g., based upon DFS or BFS algorithms. As mentioned before, the cascading process described in Eq. (<ref>) is indeed the BFS navigation without memory. Under this cascading, instead of counting the number of jumps to travel between nodes, d_ij can be evaluated in terms of time: the time needed for a cascade initialised at node i to first reach node j. That is, in dynamical terms graph distance can be regarded as the time a perturbation on a node needs to reach the rest of the nodes. Given the set of matrix powers ℛ = { A^0, A^1, A^2, A^3, …, A^t }, we can formally redefine graph distance as: d_ij = t' : (A^t')_ij > 0 and (A^t)_ij = 0 for all t < t'. Our overall goal is to generalise graph analysis by replacing the original generative dynamical model behind graph metrics, Eq. (<ref>), with other models which account for other basic properties of real systems. The interpretation of distance in terms of the time required for perturbations to propagate will become a handy change of perspective under arbitrary dynamical rules, either discrete or continuous, as we will derive in the following sections. § REFERENCES Borgatti2009 Stephen P Borgatti, Ajay Mehra, Daniel J Brass, and Giuseppe Labianca. Network analysis in the social sciences. Science, 323:892–895, 2009. Kiss_EpidemicBook I. Kiss, J. C. Miller, and P. L. Simon. Mathematics of epidemics on networks, volume 46 of Interdisciplinary applied mathematics. Springer, 2017. Kaiser_Review_2007 M. Kaiser. Brain architecture: a design for natural computation. Phil. Trans. R. Soc. A, 365:3033–3045, 2007. Zamora_FrontReview_2011 G. Zamora-López, C. S. Zhou, and J. Kurths. Exploring brain function from anatomical connectivity. Front. Neurosci., 5:83, 2011. Baronchelli_Review_2013 A. Baronchelli, R. Ferrer i Cancho, R. Pastor-Satorras, N. Chater, and M. H. Christiansen. Networks in cognitive science. Trends Cogn. Sci., 17(7):348–360, 2013. Papo_GreatExpectations_2014 D. Papo, M. Zanin, J.A. Pineda-Pardo, S. Boccaletti, and J.M. Buldú. Functional brain networks: great expectations, hard times and the big leap forward. Phil. Trans. R. Soc. B, 369:20130525, 2014. Jeong2000 H Jeong, B Tombor, R Albert, Z N Oltvai, and A L Barabási. The large-scale organization of metabolic networks. Nature, 407:651–654, 2000. Junker_BookNets B. H. Junker and F. Schreiber, editors. Analysis of biological networks. Wiley-interscience, Hoboken, New Jersey, USA, 2008. Wickramasinghe_Chimera_2013 M. Wickramasinghe and I.Z. Kiss. Spatially organized dynamical states in chemical oscillator networks: Synchronization, dynamical differentiation, and chimera patterns. PLoS ONE, 8(11):1862–1867, 2013. Bick_Chimera_2017 C. Bick, M. Sebek, and I.Z. Kiss. Robust weak chimeras in oscillator networks with delayed linear and quadratic interactions. Phys. Rev. Lett., 119:168301, 2017. Broder2000 Andrei Broder, Ravi Kumar, Farzin Maghoul, Prabhakar Raghavan, Sridhar Rajagopalan, Raymie Stata, Andrew Tomkins, and Janet Wiener. Graph structure in the web. Comput Netw, 33:309–320, 2000. Arenas_Review_2008 A. Arenas, A. Díaz-Guilera, J. Kurths, Y. Moreno, and C. Zhou.
Synchronization in complex networks. Phys. Reps., 469:93–153, 2008. Barrat_Book A. Barrat, M. Barthélemy, and A. Vespignani. Dynamical processes on complex networks. Cambridge University Press, 2008. Masuda_ReviewWalks_2017 N. Masuda, M. A. Porter, and R. Lambiotte. Random walks and diffusion on networks. Phys. Reps., 716(717):1–58, 2017. Ji_PropagationReview_2023 P. Ji, J. Ye, Y. Mu, W. Lin, Y. Tian, C. Hens, M. Perc, Y. Tang, J. Sun, and J. Kurths. Signal propagation in complex networks. Phys. Reps., 1017:1–96, 2023. Yang_Walking_2005 S.-J. Yang. Exploring complex networks by walking on them. Phys Rev. E, 71:016107, 2005. Rosvall_Infomap_2008 M. Rosvall and C. T. Bergstrom. Maps of random walks on complex networks reveal community structure. Proc. Nat. Acad. Sci., 105(4):1118–1123, 2008. Boguna_Navigability_2009 M. Bogu ná, D. Krioukov, and K. C. Claffy. Navigability of complex networks. Nat. Physics, 5:74–80, 2009. Delvenne_StabilityComms_2010 J.-C. Delvenne, S. N. Yaliraki, and M. Barahona. Stability of graph communities across time scales. Proc. Nat. Acad. Sci., 107(29):12755–12760, 2010. Gilson_DynCom_2018 M. Gilson, N.-E. Kouvaris, G. Deco, and G. Zamora-López. Framework based on communicability and flow to analyze complex network dynamics. Phys. Rev. E, 97:052301, 2018. Gilson_DynComfMRI_2019 M. Gilson, N. E. Kouvaris, G. Deco, J.-F. Mangin, C. Poupon, S. Lefranc, D. Rivière, and G. Zamora-López. Network analysis of whole-brain fmri dynamics: A new framework based on dynamic communicability. NeuroImage, 201:116007, 2019. Zamora_Sizing_2019 G. Zamora-López and R. Brasselet. Sizing complex networks. Comms. Phys., 2:144, 2019. Zamora_Hubs_2010 G. Zamora-López, C. S. Zhou, and J. Kurths. Cortical hubs form a module for multisensory integration on top of the hierarchy of cortical networks. Front. Neuroinform., 4:1, 2010. Arnaudon_Centrality_2020 A. Arnaudon, R. L. Peach, and M. Barahona. Scale-dependent measure of network centrality from diffusion dynamics. Phys. Rev. Research, 2:033104, 2020. Arenas_SynchScales_2006 A. Arenas, A. Díaz-Guilera, and C. Pérez-Vicente. Synchronization reveals topological scales in complex networks. Phys. Rev. Lett., 96:114102, 2006. Bovet_FlowStab_2022 A. Bovet, J.-C. Delvenne, and R. Lambiotte. Flow stability for dynamic community detection. Sci. Adv., 8:eabj3063, 2022. Wu_topologies_2011 X. Wu, C.S. Zhou, G. Chen, and J.-A. Lu. Detecting the topologies of complex networks with stochastic perturbations. Chaos, 21:043129, 2011. Biancho_NetInference_2016 E. Bianco-Martinez, N. Rubido, C. G. Antonopoulos, and M. S. Baptista. Successful network inference from time-series data using mutual information rate. Chaos, 26:043102, 2016. Asilani_Inference_2020 M. Asllani, B.R. Da Cunha, E. Estrada, and J. P. Gleeson. Dynamics impose limits to detectability of network structure. New J. Phys., 22:063037, 2020. Friedkin_Centrality_1991 N. E. Friedkin. Theoretical foundations for centrality measures. Am. J. Sociol., 96:1478—1504, 1991. Bogartti_Centrality_2005 S. P. Bogartti. Centrality and network flow. Social Networks, 27:55–71, 2005. Zhang_NodeImportance_2011 J. Zhang, X.-K. Xu, P. Li, K. Zhang, and M. Small. Node importance for dynamical process on networks: A multiscale characterization. Chaos, 21:016107, 2011. Katz_Centrality_1952 L. Katz. A new status index derived from sociometric analysis. Psychometrika, 18(1):39–43, 1952. Estrada_Communicability_2008 E. Estrada and N. Hatano. Communicability in complex networks. Phys Rev. E, 77:036111, 2008. 
Tononi_Complexity_1994 G. Tononi, O. Sporns, and G. M. Edelman. A measure for brain complexity: relating functional segregation and integration in the nervous system. Proc. Nat. Acad. Sci., 91:5033–5037, 1994. Galan_DominatingPatterns_2008 R.F. Galán. On how network architecture determines the dominant patterns of spontaneous neural activity. PLoS ONE, 3(5):e2148, 2008. Zamora_FComplexity_2016 G. Zamora-López, Y. Chen, G. Deco, M. L. Kringelbach, and C. S. Zhou. Functional complexity emerging from anatomical constraints in the brain: the significance of network modularity and rich-clubs. Sci. Reps., 6:38424, 2016. Sharkey_Katz_2017 K. J. Sharkey. A control analysis on katz centrality. Sci. Reps., 7:17247, 2017. Schaub_Encoding_2012 M.T. Schaub, R. Lambiotte, and M. Barahona. Encoding dynamics for multiscale community detection: Markov time sweeping for the map equation. Phys Rev. E, 86:026112, 2012.
http://arxiv.org/abs/2307.01389v1
20230703230226
Identification of Causal Relationship between Amyloid-beta Accumulation and Alzheimer's Disease Progression via Counterfactual Inference
[ "Haixing Dai", "Mengxuan Hu", "Qing Li", "Lu Zhang", "Lin Zhao", "Dajiang Zhu", "Ibai Diez", "Jorge Sepulcre", "Fan Zhang", "Xingyu Gao", "Manhua Liu", "Quanzheng Li", "Sheng Li", "Tianming Liu", "Xiang Li" ]
cs.LG
[ "cs.LG", "stat.ME" ]
Alzheimer's disease (AD) is a neurodegenerative disorder that begins with amyloidosis, followed by neuronal loss and deterioration in structure, function, and cognition. The accumulation of amyloid-β in the brain, measured through 18F-florbetapir (AV45) positron emission tomography (PET) imaging, has been widely used for early diagnosis of AD. However, the relationship between amyloid-β accumulation and AD pathophysiology remains unclear, and causal inference approaches are needed to uncover how amyloid-β levels can impact AD development. In this paper, we propose a graph varying coefficient neural network (GVCNet) for estimating the individual treatment effect with continuous treatment levels using a graph convolutional neural network. We highlight the potential of causal inference approaches, including GVCNet, for measuring the regional causal connections between amyloid-β accumulation and AD pathophysiology, which may serve as a robust tool for early diagnosis and tailored care. Causal inference, Amyloid accumulation, Alzheimer's disease, Counterfactual inference. Identification of Causal Relationship between Amyloid-β Accumulation and Alzheimer’s Disease Progression via Counterfactual Inference Haixing Dai†1, Mengxuan Hu†1, Qing Li†2,3, Lu Zhang4, Lin Zhao1, Dajiang Zhu4 Ibai Diez5, Jorge Sepulcre5, Fan Zhang6, Xingyu Gao7, Manhua Liu8, Quanzheng Li5, Sheng Li9, Tianming Liu1, and Xiang Li5 1School of Computing, University of Georgia, Athens, GA, USA 2State Key Lab of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, China 3School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China 4Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington 76019, USA 5Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston 02114, USA 6Department of Radiology, Brigham and Women's Hospital and Harvard Medical School, Boston 02115, USA 7School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China 8The MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University, Shanghai, 200240, China 9School of Data Science, The University of Virginia, Charlottesville 22903, USA † These authors contributed equally to this paper. Corresponding author: Xiang Li (email: xli60@mgh.harvard.edu).
9 June 2023 § INTRODUCTION The differentiation of Alzheimer's disease (AD) from the prodromal stage of AD, namely mild cognitive impairment (MCI), and normal controls (NC) is an important problem that many researchers have devoted effort to <cit.>. It is commonly recognized through studies that the progression of AD involves a series of gradually intensifying neuropathological occurrences. The process begins with amyloidosis, followed by neuronal loss and subsequent deterioration in the areas of structure, function, and cognition <cit.>. As a non-invasive method that can measure the accumulation of amyloid in the brain, 18F-florbetapir (AV45) positron emission tomography (PET) imaging has been widely used for early diagnosis of AD <cit.>. The use of florbetapir-PET imaging to characterize the deposition of amyloid-β has been shown to be of significant diagnostic value in identifying the onset of clinical impairment. In recent years, there has been increasing research in counterfactual causal inference to estimate the treatment effect in various domains such as medicine <cit.>, public health <cit.>, and marketing <cit.>. In particular, estimating the causal effect of continuous treatments is crucial. For example, in precision medicine, a common question is “What is the ideal medicine dosage to attain the best result?”. Therefore, an average dose-response function (ADRF) that elucidates the causal relationship between the continuous treatment and the outcome becomes imperative. Estimating the counterfactual outcome presents a significant challenge in causal effect estimation, as it is inherently unobservable. To provide a clear definition, we use the binary treatment scenario (T=1 or T=0) for illustration. As depicted in Fig. <ref>, let us consider a patient with a headache (x_i) who has the option to either take the medicine (T=1) or not take it (T=0). The potential outcomes corresponding to these two treatment choices would be being cured (Y_i(T=1)) or not being cured (Y_i(T=0)), respectively. The causal effect is defined as the difference between these two potential outcomes. However, given that a patient can only choose one treatment option, we can observe only one outcome (the observed outcome), while the other outcome that was not observed is considered the counterfactual outcome.
Similarly, in the context of a continuous setting, estimating the counterfactual outcome remains a significant challenge. Therefore, a variety of existing works on causal effect estimation focus on counterfactual estimation <cit.> under the assumption of binary treatments or continuous treatments (ADRF estimation) <cit.>. In particular, in the context of continuous treatments, the generalized propensity score (GPS), proposed by Hirano and Imbens <cit.>, is a traditional approach to estimate the ADRF with counterfactual outcomes. Moreover, as machine learning has gained increasing attention due to its extraordinary ability to solve complex problems, many existing works use machine learning techniques to address the problem. Schwab et al. <cit.> proposed DRNet to split a continuous treatment into several intervals and built separate prediction heads for them on the latent representation of the input. Nie et al. <cit.> adopted a varying-coefficient structure to explicitly incorporate continuous treatments as a variable for the parameters of the model, preserving the continuity of the ADRF. Other methods, such as GAN-based <cit.> and transformer-based <cit.> approaches, have also been proposed. In this work, we propose a novel model, the Graph Varying Coefficient Neural Network (GVCNet), for measuring the regional causal associations between amyloid-β accumulation and AD pathophysiology. Specifically, by comparing our model with the most advanced model, VCNet, we demonstrate that our model achieves better performance in AD classification. Moreover, we adopt K-Means clustering to group the generated average dose-response function (ADRF) curves from each region of interest (ROI) and then map them onto the cortical surface to identify the amyloid-β positive regions. The main contributions of this work are summarized as follows: 1. To the best of our knowledge, this is an early attempt to utilize the brain structural topology as the graph for measuring the regional causal associations between amyloid-β accumulation and AD pathophysiology. Consistent experimental results on a public AD dataset not only demonstrate the effectiveness and robustness of the proposed framework, but also support the hypothesis that AD pathophysiology is deeply associated with amyloid-β accumulation, regardless of which topology graph is used. 2. Compared with the most advanced approach (i.e., VCNet), the proposed GVCNet experimentally obtains a higher diagnostic accuracy, suggesting that good performance can be achieved by exploiting the graph topology. As such, our framework extends the application of graph-based algorithms to brain imaging analysis and provides new insight into causal inference that combines phenotype, structural and functional data. 3. Our work clearly demonstrates that four brain regions (i.e., the pre- and post-central gyrus in the cortical area, and the left and right pallidum in the subcortical area) can serve as key ROIs for AD diagnosis. According to the quantitative experimental results, the diagnostic accuracy with these ROIs is better than with whole-brain information. § RELATED WORK §.§ Counterfactual Outcome Estimation The definition of the counterfactual outcome is typically framed using the potential outcome framework <cit.>. To provide a clear definition, we illustrate with the use of binary treatments, which can be extended to multiple treatments by comparing their potential outcomes. Each individual x_i has two potential outcomes: Y_i(T=1) and Y_i(T=0), corresponding to the two possible treatments (T=1 or T=0).
Since an individual can only receive one of the two treatments in observational data, only one potential outcome can be observed (the observed outcome), while the remaining unobserved outcome is referred to as the counterfactual outcome. Hence, the major challenge in estimating the Individual Treatment Effect (ITE) lies in inferring counterfactual outcomes. Once the counterfactual outcomes are obtained, the ITE can be calculated as the difference between the two potential outcomes: ITE_i= Y_i(T=1)- Y_i(T=0). Many existing approaches have been proposed to estimate the counterfactual outcomes, such as conditional outcome modeling, which trains two separate models to predict outcomes for the treatment group and the control group and uses the predicted values to fill in the unobserved counterfactual outcomes. In addition, tree-based and forest-based methods are widely used to estimate the ITE <cit.>. Additionally, matching methods <cit.>, stratification methods <cit.>, and deep representation methods <cit.> have been proposed to address the problem as well. §.§ Continuous Treatment Effect Estimation Continuous treatments are of great practical importance in many fields, such as precision medicine. Typically, the objective of continuous treatment effect estimation is to estimate the average dose-response function (ADRF), which demonstrates the relationship between the specific continuous treatment and the outcome. Although recent works have utilized representation learning methods for ITE estimation <cit.>, most of the existing works are under the assumption of binary treatments and cannot be easily extended to continuous treatments due to their unique model design. To address this issue, Schwab et al. <cit.> extended TARNet <cit.> and proposed the Dose Response Network (DRNet), which divides the continuous dosage into several equally-sized dosage strata and assigns one prediction head to each stratum. To further achieve the continuity of the ADRF, Nie et al. <cit.> proposed a varying-coefficient neural network (VCNet). Instead of the multi-head design, it uses a varying-coefficient prediction head whose weights are continuous functions of the treatment t, which improves on the previous methods by preserving a continuous ADRF and enhancing the expressiveness of the model. Hence, in this paper, we adopt it as part of the model to estimate the effect of each Region of Interest (ROI) of the brain on Alzheimer's disease. §.§ Traditional Correlation-based PET Image Analysis Methods Correlation-based methods for PET image analysis are used in many clinical applications, such as tumor detection and brain disorder diagnosis. An et al. used a canonical correlation analysis-based scheme to estimate a standard-dose PET image from a low-dose one in order to reduce the risk of radiation exposure and preserve image quality <cit.>. Landau et al. used the traditional correlation method to compare the retention of the 11-C radiotracer Pittsburgh Compound B and that of two 18-F amyloid radiotracers (florbetapir and flutemetamol) <cit.>. Zhu et al. used the canonical representation to consider the correlation relationships between features of PET and other brain neuroimaging modalities <cit.>. Li et al. used sparse inverse covariance estimation to reveal the relationship between PET and structural magnetic resonance imaging (sMRI) <cit.>. For AD diagnosis, it has been suggested, based on florbetapir-PET, that brain regions such as the posterior cingulate and lateral temporal cortices are affected more in AD than in NC <cit.>.
Some studies on florbetapir-PET imaging have revealed that neurodegeneration does not influence the level of amyloid-β accumulation. Instead, amyloid-β pathophysiology is considered a biologically independent process and may play a "catalyst" role in neurodegeneration <cit.>. There have also been many theories that highlight the amyloid-β pathologies as the main driving forces behind disease progression and cognitive decline. In order to characterize the relationship between amyloid-β accumulation and AD pathophysiology, counterfactual causal inference is a useful tool to uncover how patterns of causality, or significant changes in regional or temporal amyloid-β levels, can impact the development of AD over time. §.§ Graph Neural Network Deep learning has revolutionized many machine learning tasks, but challenges arise when data is represented as graphs. The basic idea behind GNNs is to iteratively update the feature vector of each node by aggregating the feature vectors of its neighboring nodes. The update rule for a GNN can be formalized as follows: h^l+1_i = σ(a_i^l W^l), a_i^l = g^l(h_i^l, {h_u^l: u ∈𝒩(i)}), where h_i^l+1 is the feature vector of node i at layer l+1, 𝒩(i) is the set of neighboring nodes of i, g^l is the aggregation function at layer l, and W^l is a learnable weight matrix at layer l. The function σ is a non-linear activation function, such as the ReLU function. Graph convolutional networks (GCNs) extend convolutional neural networks <cit.> to the graph domain, allowing for meaningful feature extraction. GCNs have been applied in various fields, including node classification <cit.>, link prediction <cit.>, and graph generation <cit.>. Initial work on GCNs was proposed by <cit.> in 2013, followed by the seminal paper by <cit.> in 2017. Since then, many extensions and improvements to GCNs have been proposed, including Graph Attention Networks (GATs) <cit.> and GraphSAGE <cit.>. Researchers have also studied different graph convolutional layers, such as Message Passing Neural Networks (MPNNs) <cit.> and Convolutional Graph Neural Networks (ConvGNNs) <cit.>. Overall, GCNs have shown great potential in graph representation learning and have the potential to revolutionize many applications where data is represented in the form of graphs. § METHODOLOGY §.§ Problem Setting VCNet is one of the most advanced methods for ADRF estimation; it can generate a continuous ADRF and provide promising counterfactual estimates. Hence, in this study, we adopt this model to estimate the effect of the amyloid-β level on the probability of developing AD. Specifically, we treat the amyloid-β level in a given brain region as the treatment T and whether the subject has AD as the outcome Y. In our study, we used the Harvard-Oxford Atlas (HOA) to divide the entire brain into 69 regions. Since some regions are not target binding regions for tau imaging, we excluded the following regions: left cerebral white matter, left cerebral cortex, left lateral ventricle, right cerebral white matter, right cerebral cortex, right lateral ventricle, and brain stem. For the remaining 62 regions, we treated one region as the treatment and used the other regions as covariates (X) to train a separate model for each setting. We iterated this process 62 times to obtain the causal effect and accuracy estimates for each region.
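To make this region-wise procedure concrete, the following self-contained Python sketch (illustrative only: the data are synthetic and a trivial stand-in estimator replaces the GVCNet model trained in practice) shows the leave-one-ROI-out loop over the 62 regions, in which the amyloid-β level of one ROI acts as the treatment and the remaining ROIs act as covariates.

import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_rois = 100, 62
amyloid = rng.random((n_subjects, n_rois))   # synthetic regional amyloid-beta levels
label = rng.integers(0, 2, n_subjects)       # AD (1) vs non-AD (0), synthetic

results = {}
for roi in range(n_rois):
    t = amyloid[:, roi]                      # treatment: amyloid level in this ROI
    X = np.delete(amyloid, roi, axis=1)      # covariates: the remaining ROIs
    # In the paper, (X, t) are fed to GVCNet; here a naive association score is
    # recorded instead, only to keep the sketch runnable and self-contained.
    results[roi] = np.corrcoef(t, label)[0, 1]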
To capture more information, we used graph structures of the whole brain, denoted as 𝒢 = (𝒱,ℰ,𝒳), where each graph contains 62 nodes representing the 62 ROIs, 𝒱 represents the node set and ℰ represents the edge set. Let X ∈ R^N × F be the input feature matrix, where each row corresponds to a node and each column corresponds to a feature. To estimate the causal effect of one ROI, we removed the corresponding node and all edges related to it and used the rest of the graph as input (61 nodes). Finally, we used the amyloid-β value as the treatment variable T for the VCNet analysis. In our work, we follow three fundamental assumptions for identifying the ADRF: Stable Unit Treatment Value Assumption (SUTVA): There is no interference between units, and there is only one version of each treatment, which means that different levels or doses of a specific treatment are considered as separate treatments. Positivity: Every unit should have a non-zero probability of being assigned to every treatment group. Formally, P(T=t|X=x)≠ 0, ∀ t∈𝒯, ∀ x∈ X. Ignorability: Given covariates x, all potential outcomes {Y(T=t)}_t∈𝒯 are independent of the treatment assignment, implying that there are no unobserved confounders. Mathematically, {Y(T=t)}_t∈𝒯 ⊥ T | X. §.§ GVCNet In our proposed GVCNet framework, as illustrated in Figure <ref>, there are three main components: ChebNet <cit.>, Deep&Cross Network <cit.>, and VCNet <cit.>. These components work together to estimate the Average Treatment Effect (ATE) using graph-structured data and demographic information. The ChebNet component takes advantage of the graph structure of the data and utilizes it to generate features or representations that capture the underlying relationships between entities. The Deep&Cross Network component incorporates demographic data into the framework. The Deep&Cross Network module utilizes these demographic features to learn complex interactions between them, capturing both low-order and high-order feature interactions. This helps to capture additional information beyond what can be learned solely from the graph-structured data. The resulting latent representation, denoted as Z^', which is a combination of features from ChebNet and the Deep&Cross Network, is then fed into the VCNet component. VCNet infers the treatment distribution from Z^' to ensure that it contains sufficient information for accurate ADRF estimation. Finally, the ADRF is estimated based on t and Z^'. §.§ ChebNet In this paper, to preserve the topological information of the PET data, we introduce the Chebyshev neural network (ChebNet) <cit.> to replace the first two fully connected layers in VCNet. ChebNet uses Chebyshev polynomials to approximate the graph Laplacian filter, which is a commonly used filter in GCNs. Chebyshev polynomials are a sequence of orthogonal polynomials that can be used to approximate any smooth function on a given interval, and can be efficiently computed using recursive formulas. The ChebNet layer is defined as follows: f_out(ℒ, 𝐗)=σ(∑_k=0^K-1Θ_k T_k(ℒ̃) 𝐗) where 𝐗∈ℝ^N × F is the input matrix of N nodes, each with F features, ℒ is the graph Laplacian, and ℒ̃ is the rescaled Laplacian defined as ℒ̃ = 2ℒ/λ_max - I_N, where λ_max is the largest eigenvalue of ℒ. T_k(·) is the Chebyshev polynomial of order k and Θ_k are the learnable filter coefficients for the k-th Chebyshev polynomial. Finally, σ(·) is a non-linear activation function, such as ReLU or sigmoid, that is applied element-wise to the output of the ChebNet.
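For concreteness, a minimal NumPy sketch of a Chebyshev graph convolution of this form is given below. It is an illustration added here, not the actual GVCNet implementation: the filter coefficients Θ_k are random placeholders, the combinatorial Laplacian is used as a stand-in for whatever Laplacian normalisation the trained model adopts, and the example graph is synthetic.

import numpy as np

def cheb_conv(A, X, K, rng=np.random.default_rng(0)):
    N, F = X.shape
    d = A.sum(axis=1)
    L = np.diag(d) - A                          # combinatorial graph Laplacian (stand-in)
    lam_max = np.linalg.eigvalsh(L).max()
    L_tilde = 2.0 * L / lam_max - np.eye(N)     # rescaled Laplacian
    Theta = rng.normal(size=(K, F, F))          # placeholder filter coefficients
    T_prev, T_curr = np.eye(N), L_tilde         # Chebyshev recursion: T_0, T_1
    out = T_prev @ X @ Theta[0]
    if K > 1:
        out += T_curr @ X @ Theta[1]
    for k in range(2, K):
        T_next = 2.0 * L_tilde @ T_curr - T_prev
        out += T_next @ X @ Theta[k]
        T_prev, T_curr = T_curr, T_next
    return np.maximum(out, 0.0)                 # ReLU non-linearity

# Example: a synthetic 61-node ROI graph with 4 features per node.
rng = np.random.default_rng(1)
A = (rng.random((61, 61)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T                  # symmetric, no self-loops
X = rng.normal(size=(61, 4))
H = cheb_conv(A, X, K=3)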
The binary cross-entropy loss function is utilized to quantify the dissimilarity between the predicted probability of the positive class and its true probability in binary classification tasks. §.§ Deep & Cross Network The Deep & Cross Network (DCN) <cit.> is utilized to combine demographic data with topological information from the PET data. Instead of conducting task-specific feature engineering, the DCN is capable of automatically learning the interactions between features that contribute to the task. Although deep neural networks (DNNs) are capable of extracting feature interactions, they generate these interactions in an implicit way, require more parameters, and may fail to learn some feature interactions efficiently. The DCN uses an embedding and a stack layer to embed sparse features in the input into dense embedding vectors x_embed,k^T to reduce the dimension. These vectors are then stacked with the normalized dense features x_dense^T in the input as a single vector x_0=[x_embed,1^T,...,x_embed,k^T,x_dense^T ]. A cross network and a deep network are adopted to further process this vector in parallel. The hallmark of the DCN is the cross network, which applies explicit and efficient feature crossing as shown below: x_l+1 =x_0x_l^Tw_l+b_l+x_l Here, x_l denotes the output of the l-th cross layer, and w_l and b_l represent the weight and bias of the l-th cross layer, respectively. The equation demonstrates that the degree of feature interactions grows with the depth of the layer. For example, the highest polynomial degree of x_0 in an l-layer cross network is l+1. Additionally, the interactions in the deep layers depend on the interactions in shallower layers. In addition to the cross network, a fully-connected feed-forward neural network is used to process x_0 simultaneously. The outputs of the cross network and the deep network are concatenated and fed into a standard logit layer to conduct the final prediction by the combination layer. §.§ VCNet Despite the prior endeavours on ITE estimation, most of the work focuses on binary treatment settings and fails to extend easily to continuous treatments. Although some papers propose to estimate the continuous treatment effect by splitting the range of treatment into several intervals and using one prediction network for each interval, the continuity of the ADRF is still an open issue. To address these issues, VCNet was proposed by <cit.>, which is capable of estimating continuous treatment effects and maintaining the continuity of the ADRF simultaneously. A fully connected feedforward neural network is trained to extract a latent representation z from the input x. To guarantee that z encodes useful features, z is used to estimate the conditional density of the corresponding treatment ℙ(t|z) through a conditional probability estimating head. Specifically, ℙ(t|z) is estimated based on the (B + 1) equally divided grid points of treatment and the conditional density for the remaining t-values is computed using linear interpolation. After obtaining a z containing valuable information, a varying coefficient neural network f_θ(t)(z) is adopted to predict the causal effect of t on the outcome y_i,t based on z and the corresponding t, where the network parameters are a function of the treatment, f_θ(t), instead of fixed parameters. Typically, a B-spline is used to model θ(t): θ(t)=[∑_l=1^La_1,lφ^NN_l(t), ⋯ ,∑_l=1^La_d_θ(t),lφ^NN_l(t)]^T ∈ℝ^d(θ), where φ^NN_l(t) denotes the spline basis of the treatment and the a_i,l are the coefficients to be learned; d(θ) is the dimension of θ(t).
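The varying-coefficient idea can be sketched in a few lines. The example below is illustrative only (a simple truncated-power spline basis and random coefficients stand in for the learned φ^NN_l(t) and a_i,l), but it shows how the weights of the outcome head become smooth functions of the treatment t, so that sweeping t for a fixed latent representation z traces out a continuous dose-response curve.

import numpy as np

def spline_basis(t, knots=(0.25, 0.5, 0.75), degree=2):
    # Truncated-power basis: 1, t, ..., t^degree, (t - knot)_+^degree.
    feats = [t**p for p in range(degree + 1)]
    feats += [np.clip(t - k, 0.0, None)**degree for k in knots]
    return np.array(feats)                       # shape (L,)

rng = np.random.default_rng(0)
d_z, L = 8, 6                                    # latent dimension, number of basis functions
a = rng.normal(size=(L, d_z))                    # placeholder coefficients (learned in practice)

def predict(z, t):
    theta_t = spline_basis(t) @ a                # theta(t), shape (d_z,)
    logit = z @ theta_t                          # linear head with t-dependent weights
    return 1.0 / (1.0 + np.exp(-logit))          # outcome probability (e.g. AD)

z = rng.normal(size=d_z)
probs = [predict(z, t) for t in np.linspace(0.0, 1.0, 5)]   # a slice of the dose-response curve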
By utilizing the varying coefficient neural network, the influence of the treatment t on the outcome is integrated via the parameters of the outcome prediction network, thereby preventing any loss of treatment information. Additionally, incorporating t in this manner allows for a continuous ADRF. § EXPERIMENT §.§ Dataset In this paper, we conducted an evaluation of the proposed algorithm using two subsets of data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database (adni.loni.usc.edu), specifically ADNI-1 and ADNI-2, as well as the entire dataset. The subjects were divided into three categories, consisting of AD, NC, and MCI, as shown in Table <ref>. In this paper, we take AD as the AD group (298 subjects) and NC+MCI as the non-AD group (607 subjects). All subjects have demographic features: age, sex, CDR score, and MMSE score. All sMRI and florbetapir-PET images in this study are pre-processed with the FMRIB Software Library (FSL) 6.0.3 (https://fsl.fmrib.ox.ac.uk/). The brain extraction step is first performed with the BET algorithm <cit.>, and the skull is stripped from the source image space. Secondly, the sMRI images are aligned to the Montreal Neurological Institute T1 standard template space (MNI152) with the FLIRT linear registration algorithm <cit.>, which can save computational time during the application stage. All florbetapir-PET images were co-registered with each individual's sMRI and subsequently warped to the cohort-specific DARTEL template. More specifically, after registration, the sMRI and florbetapir-PET images are cropped to the size of 152 × 188 × 152 by removing the voxels with zero values in the periphery of the brain. Then, all the images are downsampled to the size of 76 × 94 × 76 to reduce the computational complexity. In order to generate the structural connectivity matrix between different cortical regions, we also used the T1w and diffusion MRI (dMRI) data provided in the ADNI database. T1-weighted images were acquired using a 3D sagittal MPRAGE volumetric sequence with TE = 3.0 ms; TI = 900.0 ms; TR = 2300.0 ms; flip angle = 9°; matrix size = 176 × 240 × 256; voxel size = 1.2 × 1.1 × 1.1 mm^3. dMRI was acquired with a spin-echo planar imaging (EPI) sequence. 48 noncollinear gradient directions were acquired with a b-value of 1,000 s/mm^2, and 7 additional volumes were acquired without diffusion weighting (b-value = 0 s/mm^2). Other parameters of the dMRI were as follows: TE = 56.0 ms; TR = 7200.0 ms; flip angle = 90°; matrix size = 116 × 116 × 80; isotropic voxel size = 2 × 2 × 2 mm^3. A subset of 20 subjects was used for generating a group-wise connectivity matrix. For each subject, whole-brain tractography was computed using the dMRI data, with the Unscented Kalman Filter (UKF) tractography method <cit.> provided in the SlicerDMRI <cit.> software. Structural T1w imaging data was processed using FreeSurfer (version 6.0, https://surfer.nmr.mgh.harvard.edu/), and cortical regions were parcellated with the Desikan-Killiany Atlas <cit.>. Co-registration between the T1-weighted and dMRI data was performed using FSL <cit.>. Then, for each pair of cortical regions, the streamlines that end in the two regions were extracted and the number of streamlines was computed, followed by the creation of the subject-specific connectivity matrix.
For the group-wise connectivity matrix, the mean number of streamlines across the 20 subjects was recorded. In the training process, we randomly split the dataset into a training set (633 subjects) and a testing set (272 subjects). The proposed model was tested on the testing set to calculate the classification accuracy and generate average dose-response function (ADRF) curves for each ROI. §.§ Experiment Setting In GVCNet, we designate each one of the 62 ROIs as the treatment and use the other ROIs as patient features. The average amyloid-β level serves as the signal for each ROI. We construct the input graph by defining the ROIs as nodes V and the DTI structure among the ROIs as edges E. For the structural connectivity matrix, we have two alternative construction options: one is to use the Pearson correlation values among the ROIs' T1-weighted values to construct the structural correlation graph (which is called the Corr graph in this paper for simplicity); the other is to use the smoothed white matter fibers among the ROIs based on the 20 subjects (which is called the DTI graph). We then treat the graph embedding and the demographic data as the input of the deep and cross network. Finally, we feed the treatment and calculate the counterfactual outcomes with our GVCNet. For the hyper-parameters, we set the learning rate to 1e-4 and β to 0.5. During model training, all networks were trained for 600 epochs. Our model is trained using Adam <cit.> with momentum 0.9. §.§ Prediction Performance First, we compare our model, GVCNet, with the baseline model, VCNet. As shown in Table <ref>, the prediction accuracy of our model is around 88.72%, which is 4.7% higher than VCNet. In Table <ref>, we evaluate the model's performance by the accuracy percentage. The table presents the evaluation results of the GVCNet model on different datasets, using different types of graphs, and considering different demographic factors. The first three rows present the evaluation results on the combined ADNI1+ADNI2 dataset, using Corr graphs and different combinations of demographic factors. The model achieves an average accuracy of 0.8296 when no demographic features are selected, an average accuracy of 0.8675 when age and sex are used, and an average accuracy of 0.8868 when all the demographic features are selected. The last three rows present the evaluation results on the combined ADNI1+ADNI2 dataset, using DTI graphs and again different combinations of demographic factors. The model achieves an accuracy of 0.8698 when no features are selected, an accuracy of 0.8689 when age and sex features are considered, and an accuracy of 0.8872 when all the features are selected. By comparing the last six rows, we can see that using DTI as the graph structure is slightly better than using the correlation graph between the ROIs. §.§ ADRF Curve Analysis Based on the patterns of the estimated ADRF of each region and the premise that different parts of the brain may play different roles during the normal/abnormal aging process, we use the K-Means clustering method to cluster the ADRF curves from each region into three groups: upward (up, responding positively to the treatment), downward (down, responding negatively to the treatment) and unbiased, based on the trend of their relationship with AD probability. Brain regions within each cluster were visualized onto the cortex and subcortex mappings in Fig. <ref> and Fig. <ref>.
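The grouping step can be sketched as follows; the curves in the snippet are synthetic stand-ins for the 62 per-ROI ADRF curves estimated by the model, so it only illustrates the K-Means procedure, not our actual results.

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
t_grid = np.linspace(0.0, 1.0, 50)
# Synthetic stand-ins for the 62 per-ROI ADRF curves (rows = ROIs).
adrf = np.vstack([
    0.5 + 0.3 * t_grid + 0.02 * rng.normal(size=(20, t_grid.size)),   # upward trend
    0.5 - 0.3 * t_grid + 0.02 * rng.normal(size=(20, t_grid.size)),   # downward trend
    0.5 + 0.0 * t_grid + 0.02 * rng.normal(size=(22, t_grid.size)),   # unbiased
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(adrf)
for c in range(3):
    print(c, np.where(labels == c)[0])   # ROIs assigned to each trend group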
It can be found that there exist strong causal relationships between AD progression and the PET signal level in the precentral/postcentral gyrus (cortical) and the left/right pallidum (subcortical), indicating the potentially important role of these regions in modulating the Amyloid-β protein pathway in AD. It is interesting to observe that both the cortical (precentral gyrus) and subcortical (pallidum) regions responsible for voluntary motor movements <cit.> are highly responsive to AD, indicating a possible link between the behavioral and pathological aspects of AD. In addition, based on Table <ref>, which shows that brain regions in the up group have a slightly higher prediction power towards the AD probability, we investigated the patterns of the ADRF curves and the regions within the up group in Fig. <ref>; this is consistent with Figs. <ref> and <ref> in that the pre- and post-central gyrus and the left and right pallidum trend upward with increasing treatment. Moreover, we obtain the same conclusion from both VCNet and GVCNet, as shown in Fig. <ref>. Compared with VCNet, our proposed GVCNet achieves much better prediction accuracy regardless of which kind of brain regions is used. More specifically, with the upward brain regions, both VCNet and GVCNet achieve the best prediction accuracy compared with the other kinds of brain regions. § CONCLUSION AND DISCUSSION In this paper, we propose a novel model called GVCNet, which combines a graph neural network architecture with a targeted regularization approach to estimate varying coefficients of a treatment effect model and improve the model's performance. Experimental results show that GVCNet exhibits promising capabilities in making counterfactual causal inferences for Alzheimer's Disease (AD) progression based on the regional level of Amyloid-beta protein. The rationale for employing a graph neural network architecture in GVCNet stems from the inherent complexity and interconnectedness of brain regions, structurally, functionally, and pathologically. The graph structure allows for capturing the potentially long-distance spatial relationships and dependencies among these regions, providing a more comprehensive representation of the underlying proteinopathy dynamics. Furthermore, GVCNet incorporates a targeted regularization approach. Regularization techniques play a crucial role in mitigating model complexity and ensuring robustness. By imposing the proposed regularization constraints, GVCNet can effectively handle the inherent noise and variability in PET imaging data, leading to more reliable, generalizable, and accurate predictions. The potential of GVCNet in patient management, treatment, and drug discovery is substantial. If the model demonstrates sufficient robustness and consistency through rigorous validation studies, it can ultimately be utilized to project personalized AD progression trajectories. By leveraging counterfactual analysis, GVCNet can provide insights into "what if" scenarios by assessing how the current imaging results would evolve if they were to worsen (due to disease progression) or improve (because of medications or other types of interventions). This information is invaluable in guiding clinicians and patients in making informed decisions about treatment strategies and long-term care plans. Moreover, GVCNet's ability to predict the personalized treatment effect of a patient after administering a medication targeting Amyloid-beta deposition is of significant clinical importance.
It can provide insights into the expected outcomes and help determine the optimal dosage for individual patients. This personalized, regional treatment prediction can aid in tailoring interventions and optimizing therapeutic strategies, leading to improved patient outcomes and more efficient use of resources. Looking ahead, the future of imaging-guided diagnosis, prognosis, and treatment planning for AD is likely to focus on unraveling the underlying mechanisms that link imaging targets, such as Amyloid-beta protein, with the patient’s internal and external characteristics (e.g., genetic factors, health conditions, comorbidities, and social determinants of health) to the disease progression. The proposed counterfactual causal inference modeling approach with multi-modal data input, as demonstrated by GVCNet, will play a pivotal role in this pursuit. With more data modalities and holistic patient characterization, we can uncover critical insights into the disease's pathophysiology, identify novel therapeutic targets, and develop more effective interventions. In conclusion, counterfactual causal inference modeling such as GVCNet holds immense potential for advancing our understanding of personalized AD management. It will enable personalized projections of disease trajectories and treatment effects, empowering clinicians and patients to make informed decisions. The integration of imaging-guided diagnosis, prognosis, and mechanistic insights will shape the future of AD research and pave the way for improved patient care and therapeutic strategies. § DECLARATION OF COMPETING INTEREST The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. § CREDIT AUTHORSHIP CONTRIBUTION STATEMENT Haixing Dai Conceptualization, Formal analysis, Methodology, Software, Writing – original draft. Mengxuan Hu: Formal analysis, Methodology. Qing Li: Writing – darft & review & editing. Lu Zhang: Writing – review & editing. Lin Zhao: Writing – review & editing. Dajiang Zhu: Writing – review & editing. Ibai Diez: Writing – review & editing. Jorge Sepulcre: Writing – review & editing. Xingyu Gao: PET imaging and non-imaging data analysis, writing – review & editing. Manhua Liu: Writing – review & editing. Quanzheng Li: Writing – review & editing. Sheng Li: Writing – review & editing. Fan Zhang: Diffusion imaging data analysis and tractography, writing – review & editing. Tianming Liu: Conceptualization, Writing – review & editing. Xiang Li: Conceptualization, Writing – review & editing. § ACKNOWLEDGMENTS Data collection and sharing for this project was funded by the Alzheimer's Disease Neuroimaging Initiative (ADNI) (National Institutes of Health Grant U01 AG024904) and DOD ADNI (Department of Defense award number W81XWH-12-2-0012). ADNI is funded by the National Institute on Aging, the National Institute of Biomedical Imaging and Bioengineering, and through generous contributions from the following: Alzheimer's Association; Alzheimer's Drug Discovery Foundation; BioClinica, Inc.; Biogen Idec Inc.; Bristol-Myers Squibb Company; Eisai Inc.; Elan Pharmaceuticals, Inc.; Eli Lilly and Company; F. 
Hoffmann-La Roche Ltd and its affiliated company Genentech, Inc.; GE Healthcare; Innogenetics, N.V.; IXICO Ltd.; Janssen Alzheimer Immunotherapy Research & Development, LLC.; Johnson & Johnson Pharmaceutical Research & Development LLC.; Medpace, Inc.; Merck & Co., Inc.; Meso Scale Diagnostics, LLC.; NeuroRx Research; Novartis Pharmaceuticals Corporation; Pfizer Inc.; Piramal Imaging; Servier; Synarc Inc.; and Takeda Pharmaceutical Company. The Canadian Institutes of Health Research is providing funds to support ADNI clinical sites in Canada. Private sector contributions are facilitated by the Foundation for the National Institutes of Health (www.fnih.org). The grantee organization is the Northern California Institute for Research and Education, and the study is coordinated by the Alzheimer's Disease Cooperative Study at the University of California, San Diego. ADNI data are disseminated by the Laboratory for Neuro Imaging at the University of Southern California.
http://arxiv.org/abs/2307.01264v1
20230703180006
The impact of compact binary confusion noise on tests of fundamental physics with next-generation gravitational-wave detectors
[ "Luca Reali", "Andrea Maselli", "Emanuele Berti" ]
gr-qc
[ "gr-qc" ]
The impact of compact binary confusion noise on tests of fundamental physics with next-generation gravitational-wave detectors William H. Miller III Department of Physics and Astronomy, Johns Hopkins University, Baltimore, Maryland 21218, USA lreali1@jhu.edu andrea.maselli@gssi.it William H. Miller III Department of Physics and Astronomy, Johns Hopkins University, Baltimore, Maryland 21218, USA berti@jhu.edu August 1, 2023 Next-generation ground-based gravitational-wave observatories such as the Einstein Telescope and Cosmic Explorer will detect 𝒪(10^5-10^6) signals from compact binary coalescences every year, the exact number depending on uncertainties in the binary merger rate. Several overlapping signals will be present in band at any given time, generating a confusion noise background. We study how this confusion noise affects constraints on possible deviations from general relativity induced by modified gravity and environmental effects. Confusion noise impacts only the signals that last longer in band. Even for a “golden” GW170817-like signal, the constraints broaden by a factor in the range [10%, 40%] ([70%, 110%]) for the fiducial (highest) value of the local binary neutron star merger rate. Our ability to test general relativity or constrain environmental effects will be limited by systematic errors, and not by confusion noise. ET-0228A-23 § INTRODUCTION The LIGO <cit.>, Virgo <cit.> and KAGRA <cit.> (LVK) network of gravitational-wave (GW) interferometers has opened a new window in the study of the Universe, detecting so far ∼ 100 signals from the coalescence of compact binaries. In the 2030s they will be joined by next-generation (XG) ground-based interferometers, Cosmic Explorer (CE) <cit.> in the United States and the Einstein Telescope (ET) <cit.> in Europe. The increased sensitivity will provide an unprecedented redshift reach, achieving a detection rate of 𝒪(10^5-10^6) events per year <cit.>. Coupled with the routine observation of high signal-to-noise-ratio (SNR) signals <cit.>, this will allow for precision tests of cosmological models <cit.>, alternative theories of gravity <cit.> and astrophysical scenarios of compact binary formation and evolution <cit.>. The low-frequency sensitivity limit will improve to ∼ 3-7 Hz, down from the 10-20 Hz of current detectors <cit.>, meaning that some GW signals will last for several hours in band <cit.>. Long durations are crucial for prompt sky localization of sources to trigger electromagnetic follow-up campaigns <cit.>, but they also cause the presence of multiple overlapping signals in the data at any given time <cit.>. Performing parameter estimation on coincident signals can lead to biases if the coalescence times are within less than ∼ 0.5 s of each other <cit.>. In the same way, overlapping signals with mergers close to each other can bias parametric tests of general relativity (GR) and magnify systematic errors due to inaccurate waveform models <cit.>. Besides these potential biases on loud signals, the superposition of many weak, individually unresolvable signals produces a confusion noise background component in addition to the usual instrumental noise <cit.>. In the context of XG observatories, confusion noise can reduce the redshift reach of the detectors <cit.> and broaden the errors on the inferred parameters of the longest signals <cit.>.
In this work, we assess the impact of confusion noise on tests of fundamental physics in XG detectors, considering both parametric tests of GR <cit.> and constraints on environmental effects <cit.>. We generate the confusion noise from a catalog of unresolved binary neutron star (BNS) signals from state-of-the-art population models <cit.>. We employ the formalism of Ref. <cit.> to compute the confusion noise power spectral density (PSD). We model environmental effects and beyond-GR corrections by adding parametrized deviations to the gravitational phase at various post-Newtonian (PN) orders <cit.>. We then estimate the impact of confusion noise on the constraints on such deviations with an information-matrix formalism <cit.>. The paper is organized as follows. In Sec. <ref> we describe how we generate and characterize the confusion noise. In Sec. <ref> we summarize the parametrized post-Einsteinian (ppE) framework we use to constrain environmental effects and deviations from GR. In Sec. <ref> we present our results. Throughout this work, we adopt the ΛCDM cosmological model with parameters taken from Planck 2018 <cit.>. § CONFUSION NOISE §.§ Theoretical framework Let us consider a loud, detected GW signal h(ξ⃗) with true parameters ξ⃗. If N unresolved signals {h_ over^j}_j=1^N are present in band at the same time, the detector output reads s(t) = h(t;ξ⃗) + n(t) + ΔH(t) , where n is the instrumental noise, and ΔH is the confusion noise produced by the superposition of the N overlapping signals <cit.>: Δ H(t) = ∑_j=1^N h^j_ over(t) . The number of coincident background signals (and thus the confusion noise) is mainly determined by two quantities: the duration in band of the resolved signal h, and the merger rate R_0 of the astrophysical population generating the unresolved signals <cit.>. At leading (quadrupolar) order, the duration in band of a detected, inspiral-dominated GW signal is given by T_ det = 5/256(Gℳ_ z/c^3)^-5/3(π f_0)^-8/3 . Here, f_0 is the low-frequency sensitivity limit of the detector and ℳ_ z is the detector-frame chirp mass of the source ℳ_ z = (1+z) (m_1 m_2)^3/5/(m_1+m_2)^1/5 , where z is the redshift and m_1,2 denotes the binary component masses. For a given detected signal h, smaller values of the redshifted chirp mass ℳ_ z correspond to a larger number of coincident background signals N, and to a larger impact of the confusion noise on parameter estimation errors <cit.>. Assuming that the confusion noise is Gaussian and stationary, it can be modeled by a power spectral density (PSD) S_ conf via <cit.> ⟨ΔH̃(f)ΔH̃^*(f') ⟩ = 1/2δ(f-f')S_ conf(f) , where we denote the Fourier transform with a tilde, complex conjugates with a star, and ensemble averages with ⟨·⟩. For a background of inspiral-dominated GW signals from compact binaries, the confusion noise PSD can be approximated by a power law <cit.> S_ conf(f)=A_ ref (f/f_ ref)^-7/3 , with f_ ref an arbitrary reference frequency. The total PSD for each detector in the presence of confusion noise is then given by S_ tot(f) = S_ n(f) + S_ conf(f) , where S_ n is the instrumental-noise PSD. §.§ Generation of confusion noise To generate the confusion noise, we consider a background of BNS signals, which is expected to be the dominant GW background from compact binaries in XG detectors <cit.>. Furthermore, BNS signals last longer in band compared to binary black hole (BBH) or neutron star-black hole (NSBH) signals, meaning that they have a higher chance of overlap <cit.>. 
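For illustration, the quantities defined above can be evaluated with the minimal Python sketch below: the detector-frame chirp mass, the leading-order time in band, and the power-law confusion PSD added to the instrumental PSD. The amplitude A_ref and the reference frequency f_ref are placeholders here; in the analysis described in this work, the confusion-noise amplitude is set by the catalog of unresolved signals, not by the values assumed below.

import numpy as np

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
MSUN = 1.989e30    # kg

def chirp_mass_z(m1, m2, z):
    # Detector-frame chirp mass M_z = (1+z) (m1 m2)^(3/5) / (m1+m2)^(1/5); masses in solar masses.
    return (1.0 + z) * (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2

def duration_in_band(m1, m2, z, f0=3.0):
    # Leading-order (quadrupolar) duration T_det = (5/256) (G M_z / c^3)^(-5/3) (pi f0)^(-8/3), in seconds.
    mz = chirp_mass_z(m1, m2, z) * MSUN
    return (5.0 / 256.0) * (G * mz / C**3) ** (-5.0 / 3.0) * (np.pi * f0) ** (-8.0 / 3.0)

def confusion_psd(f, a_ref, f_ref=10.0):
    # Power-law confusion-noise PSD S_conf(f) = A_ref (f / f_ref)^(-7/3).
    return a_ref * (f / f_ref) ** (-7.0 / 3.0)

def total_psd(f, s_instr, a_ref, f_ref=10.0):
    # Total PSD entering the analysis: S_tot(f) = S_n(f) + S_conf(f).
    return s_instr + confusion_psd(f, a_ref, f_ref)

# Illustrative check: a low-redshift BNS with component masses of about 1.4 solar masses
# stays in band for several hours above f0 = 3 Hz.
print(duration_in_band(1.4, 1.4, 0.01) / 3600.0, "hours in band")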
We adopt a BNS population consistent with the latest LVK GWTC-3 catalog <cit.>. The component masses are sampled according to the preferred model of Ref. <cit.>: the primary mass m_1 follows a double Gaussian distribution with means 1.34 M_⊙ and 1.47 M_⊙, standard deviations 0.02 M_⊙ and 0.15 M_⊙, and mixing fraction 0.68; the secondary mass is instead distributed uniformly within the range m_2∈[1.14,1.46] M_⊙. We assume nonspinning BNSs and neglect tidal deformabilities. For the BNS redshift z, we adopt the same distribution as Ref. <cit.>. We assume that the binary formation rate follows the Madau-Dickinson <cit.> cosmic star formation rate, and we obtain the merger rate by convolving the SFR with a standard p(t_d)∝ 1/t_d time-delay distribution <cit.>. The normalization of the merger rate R_0 is set by the measured local merger rate from LVK observations <cit.>. We choose a fiducial value of R_0 = 320 Gpc^-3yr^-1, which is consistent with the estimates of both the GWTC-2 and GWTC-3 catalogs <cit.>. To characterize the impact of the large astrophysical uncertainty on the local merger rate, we vary R_0 within the 90% confidence interval from the GWTC-3 catalog <cit.>, going from a minimum value R_0^ low=10 Gpc^-3yr^-1 to a maximum of R_0^ high=1700 Gpc^-3yr^-1. For each BNS sampled from our population model, we generate a GW signal and assess detectability by computing its network SNR. We consider a detector network of three XG ground-based observatories. We choose one CE detector with 40-km arm length in the US <cit.>, one CE with 20-km arm length in Australia <cit.>, and one ET in Italy <cit.>. We set a detectability threshold of ρ_ thr=12 and assume that only signals with SNR lower than ρ_ tr contribute to the background. We assume f_0=3 Hz to be the low-frequency sensitivity limit of every detector in our network. We assign a fixed time of arrival t_0 to the resolved signal h, and uniformly sample the arrival times of the background signals around t_0. Then, we associate an interval in time domain to every signal in our catalog from its time of arrival and duration in band. If the interval associated to a background signal overlaps with the interval associated to h, the background signal contributes to the confusion noise (see Ref. <cit.> for more details). The duration of each signal in band is computed at 3.5 post-Newtonian (PN) order with the public package PyCBC <cit.>. GW signals are generated with the inspiral-only waveform model  <cit.> and SNRs are computed with the public package gwbench <cit.>. § NUMERICAL SETUP We investigate the impact of confusion noise on fundamental physics using a Fisher information matrix formalism <cit.>. We assume that the generic GW signal in the frequency domain h̃(f,θ⃗), which depends on the source parameters θ⃗, is observed by XG detectors with large SNRs. In this regime, given the interferometer's output s, we expect the parameters θ⃗ to be strongly peaked around their true values ξ⃗, so their likelihood function can be described by a gaussian distribution and the posterior reads p( θ⃗| s)∝ p^(0)(θ⃗)e^-1/2(θ^i-ξ^i)Γ_ij(θ^i-ξ^i)^T , where p^(0)(θ⃗) is the prior on θ⃗, and Γ_ij are the elements of the Fisher matrix: Γ_ij=(∂ h/∂θ^i|∂ h/∂θ^j)|_θ⃗=ξ⃗ . Here we have defined the inner product (h_1| h_2)=2∫_f_min^f_maxh̃_1(f) h̃_2^⋆(f)+ h̃_1^⋆(f) h̃_2(f)/S(f)df , where S(f) is the noise spectral density of the detector <cit.>. The covariance matrix of the parameters, Σ^ij, is simply given by Σ^ij=(Γ^-1)^ij. Using Eq. 
(<ref>) we can define the SNR of the signal h̃(f) as ρ=(h| h)^1/2 . In the following we fix f_min=3 Hz for both ET and CE, while f_max is conventionally set the to innermost stable circular orbit (ISCO) frequency of the Schwarzschild spacetime, π Mf_max=6^-3/2, with M total mass of the system (see e.g. <cit.>). We use as a baseline waveform model the Fourier-domain TaylorF2 approximant <cit.>, which describes the inspiral phase of the binary: h̃ (f) = Ω A_N(f) e^i ϕ (f) . The phase ϕ(f) is given by a post-Newtonian (PN) expansion: ϕ(f)=2π ft_c-ϕ_c-π/4+5/128(π Mf)^5/3[ ∑_i=0^7ψ_pp^(i/2)(π M f)^i/3 +ϕ_T^(5)(π M f)^10/3 +ϕ_T^(6)(π M f)^12/3] , where a term proportional to (π M f)^i/3 is said to be of ith PN order, and (t_c,ϕ_c) are the time and phase at coalescence. The phase (<ref>) contains: (i) point-particle (pp) terms up to the 3.5PN (i=7) order <cit.>, which depend on the binary chirp mass ℳ = (m_1 m_2)^3/5/(m_1+m_2)^1/5 and on the symmetric mass ratio η = m_1 m_2/(m_1+m_2)^2; (ii) linear spin corrections proportional to the (anti)symmetric combinations of the spin parameters χ_s=(χ_1+χ_2)/2 and χ_a=(χ_1-χ_2)/2 up to 3PN (i=6) order, as well as quadratic-in-spin corrections entering at 2PN (i=4) order[We neglect here the contribution induced by spin-induced quadrupole moments, which appears at 2PN order in ψ(f).] <cit.>; (iii) 5PN and 6PN tidal terms, ψ_T^(5) and ψ_T^(5), which depend on the effective tidal parameter Λ̃[The 6PN coefficient contains a second tidal parameter, δΛ̃, which is in general very small for realistic equations of state (δΛ̃∼0), and therefore will be neglected.] <cit.>. We truncate the waveform amplitude at the leading (Newtonian) order: 𝒜_N = √(5/24)ℳ^5/6f^-7/6/π^2/3d_L , where d_L is the luminosity distance. The geometric factor Ω in Eq. (<ref>) depends on the inclination angle ι between the line of sight of the source and its orbital angular momentum and on the detectors' antenna pattern functions, i.e., it is a function of the source position in the sky (θ, φ) and of the polarization angle ψ. Here we consider two different scenarios: in the first we average over all binary orientations, such that the waveform is specified by the parameters θ⃗=( A_N, M,η,χ_s,χ_a,Λ̃,t_c,ϕ_c), which yield a 8× 8 covariance matrix; in the second, we include in the Fisher matrix (<ref>) all of the angles that determine the geometric factor Ω , i.e., θ⃗=( A_N, M,η,χ_s,χ_a,Λ̃,t_c,ϕ_c,ι,θ,φ,ψ). We augment the waveform (<ref>) by including either: (i) a beyond-GR parametric deviation in the PN expansion of the phase ϕ(f), or (ii) a phase shift which encodes the presence of environmental effects. In the first case, following Ref. <cit.>, we modify the GR term by an additive term: ϕ(f) ⟶ ϕ(f)+β( Mπ f)^(2γ-5)/3 , where γ identifies the leading PN order of the non-GR contribution, and β is a coefficient that depends on the theory and (possibly) on the binary parameters. Environmental effects can be included in the TaylorF2 waveform with a similar methodology. We focus here on three different phenomenological terms that correspond to gravitational pull, gravitational drag, and collisionless accretion <cit.>, which modify Eq. (<ref>) as follows: ϕ(f) ⟶ ϕ(f)+ ρ_0κ M^2(M f)^-δ δ_pull=2 κ_pull= 1 δ_drag=11/3 κ_drag= -η^-3(1-3η)π^-11/3 δ_accretion=3 κ_accretion= -η^-1π^-3 , where ρ_0 is the average density of the medium in which the binary system evolves. 
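For concreteness, the following Python sketch shows how the two classes of phase deformations introduced above can be added on top of a GR phase. The function gr_phase is a stand-in for the TaylorF2 phase of Eq. (<ref>) and is assumed to be supplied elsewhere; the total mass m_tot is in geometric units (seconds), and all parameter values are purely illustrative.

import numpy as np

def ppe_phase(f, m_tot, gr_phase, beta, gamma):
    # ppE deformation: phi(f) -> phi(f) + beta * (M pi f)^((2 gamma - 5)/3),
    # where gamma fixes the leading PN order of the non-GR contribution.
    return gr_phase(f) + beta * (m_tot * np.pi * f) ** ((2.0 * gamma - 5.0) / 3.0)

def environmental_phase(f, m_tot, eta, gr_phase, rho0, effect="drag"):
    # Phenomenological environmental term: phi(f) -> phi(f) + rho0 * kappa * M^2 * (M f)^(-delta),
    # with (delta, kappa) corresponding to gravitational pull, gravitational drag, or collisionless accretion.
    if effect == "pull":
        delta, kappa = 2.0, 1.0
    elif effect == "drag":
        delta, kappa = 11.0 / 3.0, -eta ** (-3.0) * (1.0 - 3.0 * eta) * np.pi ** (-11.0 / 3.0)
    elif effect == "accretion":
        delta, kappa = 3.0, -eta ** (-1.0) * np.pi ** (-3.0)
    else:
        raise ValueError("unknown environmental effect")
    return gr_phase(f) + rho0 * kappa * m_tot ** 2 * (m_tot * f) ** (-delta)

In the Fisher analysis, the deformation coefficient (β in the first case, ρ_0 in the second) is then appended to the waveform parameter vector.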
In both cases (i) and (ii), the parameter vector θ⃗ includes an extra term, corresponding to the parameter β in case (i) and to the parameter ρ_0 in case (ii). We perform our Fisher analysis on four representative systems with parameters similar to the two BNS systems GW170817 <cit.>, GW190425 <cit.>, to the NSBH binary GW200115 <cit.>, and to the BBH GW150914 <cit.>, respectively. The masses, luminosity distance and tidal parameters of these sources are listed in Table <ref>. For all systems we assume negligible spins (χ_1=χ_2=0), and we also set t_c=ψ_c=0. Finally, for the BBH event GW150914 we remove the tidal deformability from the list of waveform parameters, because tidal deformabilities are equal to zero for black holes <cit.>. § RESULTS Figure <ref> shows the total PSD S_ tot(f) (top panels) and the ratio between confusion and instrumental noise PSDs S_ n^ conf(f)/S_ n(f) for the longest signal we consider, a GW170817-like BNS. Confusion noise dominates over instrumental noise at low frequencies, with the ratio between the PSDs peaking between 7 and 12 Hz for all detectors. Given the present uncertainty in the BNS merger rate, the confusion noise PSD ranges from being comparable to the instrumental noise PSD at the lowest end of the estimated merger rate, to being ∼ 20 or even ∼ 50 times larger than the instrumental noise (in the worst-case scenario) for the highest estimated merger rates. We can understand which parameter errors are most affected by confusion noise by comparing the PSD ratio S_ conf(f)/S_ n(f) with the (normalized) measurability integrand I_θθ <cit.>. This quantity is defined as the integrand of the corresponding diagonal element of the Fisher matrix, and it illustrates what range of frequencies is most important for the measurability of the given parameter. In Fig. <ref> we plot the measurability integrands for selected binary parameters and for two different beyond-GR modifications of the waveform, entering either at -1PN (top panel) or 2PN (bottom panel). In both cases, constraints on the deviation parameters β_ PN rely on measurements at frequencies close to the peak of PSD ratio S_ conf/S_ n. Therefore we can expect confusion noise to broaden the constraints on these parameters, at least for long enough signals. Figure <ref> quantifies the extent of this broadening for the two longest detected signals in our study: the GW170817-like and GW190425-like BNS systems. As a general trend, confusion noise-induced broadening is slightly less important for higher-order PN corrections, which are mostly measured at higher frequencies. However, this trend is barely noticeable. The confusion noise-induced broadening for the fiducial value of the local BNS merger rate (R_0=320 Gpc^-3yr^-1) is shown by bullets: for all three detectors, constraints on the deviation parameters broaden by a factor ranging between ∼ 10% and ∼ 50%. The worst-case scenario corresponds, of course, to the highest merger rate (solid black lines marking the top edge of the gray band in each panel). In this case, the broadening ranges between 50 and 110%. As expected, the broadening is negligible at the lowest end of the estimated merger rate. We do not show plots for the GW200115-like NSBH binary, because they have the same qualitative behavior shown in Fig. <ref>. The impact of confusion noise is negligible for shorter GW signals: our Fisher analysis for a GW150914-like system shows that the constraints on the ppE parameters are almost unaffected, which changes smaller than 1% at all pN orders. 
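As a rough numerical counterpart of the Fisher formalism used above, the sketch below builds Γ_ij from finite-difference derivatives of a frequency-domain waveform, using the noise-weighted inner product of Eq. (<ref>) (rewritten equivalently as 4 Re ∫ h̃_1 h̃_2^⋆ / S df), and also returns the normalized measurability integrand discussed in this section. Here waveform_model, its parameter ordering, and the step size are placeholders rather than the actual pipeline used in this work.

import numpy as np

def inner_product(a, b, psd, df):
    # (a|b) = 2 ∫ (a b* + a* b)/S df = 4 Re ∫ a b* / S df, discretized on a uniform frequency grid.
    return 4.0 * np.real(np.sum(a * np.conj(b) / psd)) * df

def fisher_matrix(waveform_model, params, freqs, psd, rel_step=1e-6):
    # Gamma_ij = (dh/dtheta_i | dh/dtheta_j), with derivatives from central finite differences.
    df = freqs[1] - freqs[0]
    derivs = []
    for i, p in enumerate(params):
        step = rel_step * (abs(p) if p != 0.0 else 1.0)
        up, lo = np.array(params, float), np.array(params, float)
        up[i] += step
        lo[i] -= step
        derivs.append((waveform_model(freqs, up) - waveform_model(freqs, lo)) / (2.0 * step))
    n = len(params)
    gamma = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            gamma[i, j] = gamma[j, i] = inner_product(derivs[i], derivs[j], psd, df)
    return gamma, derivs

def measurability_integrand(deriv, psd):
    # Integrand of a diagonal Fisher element, 4 |dh/dtheta|^2 / S(f), normalized to unit maximum.
    integrand = 4.0 * np.abs(deriv) ** 2 / psd
    return integrand / integrand.max()

# 1-sigma errors with and without confusion noise follow from inverting Gamma computed
# with S_n(f) and with S_tot(f) = S_n(f) + S_conf(f), respectively:
# errors = np.sqrt(np.diag(np.linalg.inv(gamma)))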
The constraints shown in Fig. <ref> are computed by averaging over inclination angle, polarization angle and sky location. In Fig. <ref> we show how the broadening in β_ PN for different detectors changes as we sample isotropically over these angular parameters. For illustration, we focus on a GW170817-like binary, the fiducial local BNS merger rate, and a dipolar (-1PN) deviation from GR. The qualitative shape of the histograms depends on the detector's location, orientation and antenna pattern. The variability is largest for CE40: in this case, the percentage broadening on β_ PN ranges between ∼ 30% and ∼ 60%. Finally, in Fig. <ref> we present results similar to Fig. <ref>, but focusing on environmental effects. More specifically, we show how confusion noise would affect constraints on the matter density ρ_0 for the three different phenomenological modifications introduced in Eq. (<ref>) in the “worst-case” scenario of a GW170817-like resolved signal. The broadening is mildly detector-dependent: it ranges from ∼ 20% to ∼40% at the fiducial local BNS merger rate, from ∼ 70% to ∼ 110% at the highest end of the merger rate, and (as usual) it becomes negligible at the lowest end of the local merger rate. § CONCLUSIONS We have studied how confusion noise affects the constraints that XG ground-based detectors, such as ET and CE, may place on beyond-GR parametric deviations and environmental effects. We have found that the contribution of confusion noise is negligible for most signals, becoming relevant only for the longest signals in band, such as low-redshift BNS signals. Even in the worst-case scenario (a “golden” GW170817-like BNS at ∼ 40 Mpc), the constraints on beyond-GR deviations (or on the average density of the medium interacting with the binary) broaden by a modest amount: ∼ 10-40% for our fiducial BNS merger rate, and ∼ 70-110% at the highest end of the local merger rates compatible with current LVK observations. It may be possible to further reduce the impact of confusion noise on parameter estimation by more sophisticated data analysis techniques, such as global fitting methods <cit.> or simultaneous fitting of the foreground and background parameters <cit.>. The exploration of these methods is an interesting topic for future work. The estimates provided in this work are somewhat conservative, because we only considered unresolved signals from BNSs (which, however, are expected to dominate the GW background from compact binaries). The inclusion of unresolved signals from NSBHs and BBHs would increase the confusion noise by a factor of order unity and introduce non-Gaussian components that complicate the analysis, but it would not change our main conclusions. An additional caveat to consider is the low-frequency sensitivity limit of the detectors. Throughout our study, we set the limit to 3 Hz for all of the interferometers. Decreasing this limit would increase the duration in band of all the GW signals, and thus the number of overlapping signals at any given time. An important implication of our work is that the limiting factor in our ability to test GR or constrain environmental effects will be systematic errors due to our imperfect modeling of GR waveforms, and not confusion noise. § ACKNOWLEDGMENTS E.B. and L.R. are supported by NSF Grants No. AST-2006538, PHY-2207502, PHY-090003 and PHY-20043, and NASA Grants No. 20-LPS20-0011 and 21-ATP21-0010. A.M. and E.B. 
acknowledge support from the ITA-USA Science and Technology Cooperation programme (CUP: D13C23000290001), supported by the Ministry of Foreign Affairs of Italy (MAECI). A.M. acknowledges financial support from the Italian Ministry of University and Research (MUR) for the PRIN grant METE under contract no. 2020KB33TP. This research project was conducted using computational resources at the Maryland Advanced Research Computing Center (MARCC). The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC, visualization, database, or grid resources that have contributed to the results reported in this paper <cit.>. URL: http://www.tacc.utexas.edu.
http://arxiv.org/abs/2307.02869v1
20230706091213
MomentDiff: Generative Video Moment Retrieval from Random to Real
[ "Pandeng Li", "Chen-Wei Xie", "Hongtao Xie", "Liming Zhao", "Lei Zhang", "Yun Zheng", "Deli Zhao", "Yongdong Zhang" ]
cs.CV
[ "cs.CV" ]
MomentDiff: Generative Video Moment Retrieval from Random to Real August 1, 2023 ============================================================================= Video moment retrieval pursues an efficient and generalized solution to identify the specific temporal segments within an untrimmed video that correspond to a given language description. To achieve this goal, we provide a generative diffusion-based framework called MomentDiff, which simulates a typical human retrieval process from random browsing to gradual localization. Specifically, we first diffuse the real span to random noise, and learn to denoise the random noise to the original span with the guidance of similarity between text and video. This allows the model to learn a mapping from arbitrary random locations to real moments, enabling the ability to locate segments from random initialization. Once trained, MomentDiff can sample random temporal segments as initial guesses and iteratively refine them to generate an accurate temporal boundary. Different from discriminative works (e.g., based on learnable proposals or queries), MomentDiff with randomly initialized spans can resist the temporal location biases from datasets. To evaluate the influence of the temporal location biases, we propose two “anti-bias” datasets with location distribution shifts, named Charades-STA-Len and Charades-STA-Mom. The experimental results demonstrate that our efficient framework consistently outperforms state-of-the-art methods on three public benchmarks, and exhibits better generalization and robustness on the proposed anti-bias datasets. The code, model, and anti-bias evaluation datasets are available at <https://github.com/IMCCretrieval/MomentDiff>. § INTRODUCTION Video understanding <cit.> is a crucial problem in machine learning, which covers various video analysis tasks, such as video classification and action detection. However, both tasks above are limited to predicting predefined action categories. A more natural and elaborate video understanding process is the ability for machines to match human language descriptions to specific activity segments in a complex video. Hence, a series of studies <cit.> has been conducted on Video Moment Retrieval (VMR), with the aim of identifying the moment boundaries (i.e., the start and end time) within a given video that best semantically correspond to the text query. As shown in Fig. <ref>(a), early works address the VMR task by designing predefined dense video proposals (i.e., sliding windows <cit.>, anchors <cit.>, and 2D maps <cit.>). Then, the prediction segment is determined based on the maximum similarity score between dense proposals and the query text. However, these methods have a large redundancy of proposals and the numbers of positive and negative proposals are unbalanced, which limits the learning efficiency <cit.>. To deal with this problem, a series of VMR works <cit.> have recently emerged, mainly discussing how to reduce the number of proposals and improve the quality of proposals. Among them, a promising scheme (Fig. <ref>(b)) is to use sparse and learnable proposals <cit.> or queries <cit.> (i.e., soft proposals) to model the statistics of the entire dataset and adaptively predict video segments. However, these proposal-learnable methods rely on a few specific proposals or queries to fit the location distribution of ground truth moments. 
For example, these proposals or queries may tend to focus on video segments where locations in the dataset occur more often (i.e., yellow highlights in Fig. <ref>(b)). Thus, these methods potentially disregard significant events that transpire in out-of-sample situations. Recent studies <cit.> indicate that VMR models <cit.> may exploit the location biases present in dataset annotations <cit.>, while downplaying multimodal interaction content. This leads to the limited generalization of the model, especially in real-world scenarios with location distribution shifts. To tackle the above issues, we propose a generative perspective for the VMR task. As shown in Fig. <ref> (c) and (d), given an untrimmed video and the corresponding text query, we first introduce several random spans as the initial prediction, then employ a diffusion-based denoiser to iteratively refine the random spans by conditioning on similarity relations between the text query and video frames. A heuristic explanation of our method is that, it can be viewed as a way for humans to quickly retrieve moments of interest in a video. Specifically, given an unseen video, instead of watching the entire video from beginning to end (which is too slow), humans may first glance through random contents to identify a rough location, and finally iteratively focus on key semantic moments and generate temporal coordinates. In this way, we do not rely on distribution-specific proposals or queries (as mentioned in the above discriminative approaches) and exhibit more generalization and robustness (Tab. <ref> and Fig. <ref>) when the ground truth location distributions of training and test sets are different. To implement our idea, we introduce a generative diffusion-based framework, named MomentDiff. Firstly, MomentDiff extracts feature embeddings for both the input text query and video frames. Subsequently, these text and video embeddings are fed into a similarity-aware condition generator. This generator modulates the video embeddings with text embeddings to produce text-video fusion embeddings. The fusion embeddings contain rich semantic information about the similarity relations between the text query and each video frame, so we can use them as a guide to help us generate predictions. Finally, we develop a Video Moment Denoiser (VMD) that enhances noise perception and enables efficient generation with only a small number of random spans and flexible embedding learning. Specifically, VMD directly maps randomly initialized spans into the multimodal space, taking them as input together with noise intensities. Then, VMD iteratively refines spans according to the similarity relations of fusion embeddings, thereby generating true spans from random to real. Our main contributions are summarized as follows. 1) To the best of our knowledge, we are the first to tackle video moment retrieval from a generative perspective, which does not rely on predefined or learnable proposals and mitigates temporal location biases from datasets. 2) We propose a new framework, MomentDiff, which utilizes diffusion models to iteratively denoise random spans to the correct results. 3) We propose two “anti-bias” datasets with location distribution shifts to evaluate the influence of location biases, named Charades-STA-Len and Charades-STA-Mom. Extensive experiments demonstrate that MomentDiff is more efficient and transferable than state-of-the-art methods on three public datasets and two anti-bias datasets. § RELATED WORK Video Moment Retrieval. 
Video moment retrieval <cit.> is a newly researched subject that emphasizes retrieving correlated moments in a video, given a natural language query. Pioneering works are proposal-based approaches, which employ a "proposal-rank" two-stage pipeline. Early methods <cit.> usually use handcrafted predefined proposals to retrieve moments. For example, CTRL <cit.> and MCN <cit.> aim to generate video proposals by using sliding windows of different scales. TGN <cit.> emphasizes temporal information and develops multi-scale candidate solutions through predefined anchors. 2DTAN <cit.> designs a 2D temporal map to enumerate proposals. However, these dense proposals introduce redundant computation with a large number of negative samples <cit.>. Therefore, two types of methods are proposed: 1) Proposal-free methods <cit.> do not use any proposals and are developed to directly regress start and end boundary values or probabilities based on ground-truth segments. These methods are usually much faster than proposal-based methods. 2) Proposal-learnable methods that use proposal prediction networks <cit.> or learnable queries <cit.> to model dataset statistics and adaptively predict video segments. QSPN <cit.> and APGN <cit.> adaptively obtain discriminative proposals without handcrafted design. LPNet <cit.> uses learnable proposals to alleviate the redundant calculations in dense proposals. MomentDETR <cit.> can predict multiple segments using learnable queries. Since proposal-learnable methods adopt a two-stage prediction <cit.> or implicit iterative <cit.> design, the performance is often better than that of proposal-free methods. However, proposal-learnable methods explicitly fit the location distribution of target moments. Thus, models are likely to be inclined to learn location bias in datasets <cit.>, resulting in limited generalization. We make no assumptions about the location and instead use random inputs to alleviate this problem. Diffusion models. Diffusion Models <cit.> are inspired by stochastic diffusion processes in non-equilibrium thermodynamics. The model first defines a Markov chain of diffusion steps to slowly add random noise to the data, and then learns the reverse diffusion process to construct the desired data samples from the noise. The diffusion-based generation has achieved disruptive achievements in tasks such as image generation <cit.> and text generation <cit.>. Motivated by their great success in generative tasks <cit.>, diffusion models have been used in image perception tasks such as object detection <cit.> and image segmentation <cit.>. However, diffusion models are less explored for video-text perception tasks. This paper models similarity-aware multimodal information as coarse-grained cues, which can guide the diffusion model to generate the correct moment boundary from random noise in a gradual manner. Unlike DiffusionDet <cit.>, we avoid a large number of Region of Interest (ROI) features and do not require additional post-processing techniques. To our knowledge, this is the first study to adapt the diffusion model for video moment retrieval. § METHOD In this section, we first define the problem in Sec. <ref>, introduce our framework in Sec. <ref>, and describe the inference process in Sec. <ref>. §.§ Problem Definition Suppose an untrimmed video 𝒱 = {v_i}_i=1^N_v is associated with a natural text description 𝒯 = {t_i}_i=1^N_t, where N_v and N_t represent the frame number and word number, respectively. 
Under this notation definition, Video Moment Retrieval (VMR) aims to learn a model Ω to effectively predict the moment x̂_0 = (ĉ_0, ŵ_0) that is most relevant to the given text description: x̂_0 = Ω (𝒯,𝒱), where ĉ_0 and ŵ_0 represent the center time and duration length of the temporal moments, i.e., predicted spans. §.§ The MomentDiff Framework Fig. <ref> sheds light on the generation modeling architecture of our proposed MomentDiff. Concretely, we first extract frame-level and word-level features by utilizing pre-trained video and text backbone networks. Afterward, we employ a similarity-aware condition generator to interact text and visual features into fusion embeddings. Finally, combined with the fusion embeddings, the video moment denoiser can progressively produce accurate temporal targets from random noise. §.§.§ Visual and Textual Representations. Before performing multimodal interaction, we should convert the raw data into a continuous feature space. To demonstrate the generality of our model, we use three distinct visual extractors <cit.> to obtain video features 𝒱: 1) 2D visual encoder, the VGG model <cit.>. 2) 3D visual encoder, the C3D model <cit.>. 3) Cross-modal pre-train encoder, the CLIP visual model <cit.>. However, due to the absence of temporal information in CLIP global features, we additionally employ the SlowFast model <cit.> to extract features, which concatenate CLIP features. Besides, to take full advantage of the video information <cit.>, we try to incorporate audio features, which are extracted using a pre-trained PANN model <cit.>. To obtain text features, we try two feature extractors: the Glove model <cit.> and the CLIP textual model to extract 300-d and 512-d text features 𝒯, respectively. §.§.§ Similarity-aware Condition Generator Unlike generation tasks <cit.> that focus on the veracity and diversity of results, the key to the VMR task is to fully understand the video and sentence information and to mine the similarities between text queries and video segments. To this end, we need to provide multimodal information to cue the denoising network to learn the implicit relationships in the multimodal space. A natural idea is to interact and aggregate information between video and text sequences with a multi-layer Transformer <cit.>. Specifically, we first use two multilayer perceptron (MLP) networks to map feature sequences into the common multimodal space: V∈ℝ^N_v × D and T∈ℝ^N_t × D, where D is the embedding dimension. Then, we employ two cross-attention layers to perform interactions between multiple modalities, where video embeddings V are projected as the query Q_v, text embeddings T are projected as key K_t and value V_t: V̂=softmax(Q_v K_t^T) V_t+ Q_v, where V̂∈ℝ^N_v × D. To help the model better understand the video sequence relations, we feed V̂ into a 2-layer self-attention network, and the final similarity-aware fusion embedding is F = softmax(Q_v̂K_v̂^T) V_v̂+ Q_v̂, where Q_v̂, K_v̂, V_v̂ is the matrix obtained from V̂ after three projections respectively. In the span generation process, even for the same video, the correct video segments corresponding to different text queries are very different. Since the fusion embedding F serves as the input condition of the denoiser, the quality of F directly affects the denoising process. 
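For readers who prefer code, here is a schematic PyTorch version of the similarity-aware condition generator described by the two equations above. It is a sketch rather than the released implementation: it keeps only a single cross-attention and a single self-attention layer (the full model stacks two of each), and it omits normalization, multi-head splitting, and the usual 1/sqrt(D) attention scaling so that it mirrors the equations as written; dimensions and layer widths are illustrative.

import torch
import torch.nn as nn

class ConditionGenerator(nn.Module):
    # Fuse video and text embeddings into similarity-aware fusion embeddings F.
    def __init__(self, d_video, d_text, d_model=256):
        super().__init__()
        self.video_mlp = nn.Sequential(nn.Linear(d_video, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.text_mlp = nn.Sequential(nn.Linear(d_text, d_model), nn.ReLU(), nn.Linear(d_model, d_model))
        self.q_v, self.k_t, self.v_t = (nn.Linear(d_model, d_model) for _ in range(3))
        self.q_s, self.k_s, self.v_s = (nn.Linear(d_model, d_model) for _ in range(3))

    def forward(self, video, text):
        # Map both modalities into the common multimodal space.
        V = self.video_mlp(video)   # (N_v, D)
        T = self.text_mlp(text)     # (N_t, D)
        # Cross-attention: V_hat = softmax(Q_v K_t^T) V_t + Q_v
        Qv = self.q_v(V)
        attn = torch.softmax(Qv @ self.k_t(T).transpose(-1, -2), dim=-1)
        V_hat = attn @ self.v_t(T) + Qv
        # Self-attention over the video sequence: F = softmax(Q K^T) V + Q
        Q = self.q_s(V_hat)
        attn = torch.softmax(Q @ self.k_s(V_hat).transpose(-1, -2), dim=-1)
        return attn @ self.v_s(V_hat) + Q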
To learn similarity relations for F in the multimodal space, we design the similarity loss ℒ_sim, which contains the pointwise cross entropy loss and the pairwise margin loss: ℒ_sim= -1/N_v∑_i=1^N_v[ y_i log (s_i) + (1- y_i) log (1- s_i)] + 1/N_s∑_j=1^N_smax(0, β +s_n_j-s_p_j) , where s∈ℝ^N_v is the similarity score, which is obtained by predicting the fusion embedding F through the MLP network. y∈ℝ^N_v is the similarity label, where y_i = 1 if the i-th frame is within the ground truth temporal moment and y_i = 0 otherwise. s_p_j and s_n_j are the randomly sampled positive and negative frames. N_s is the number of samples and the margin β = 0.2. Although ℒ_sim may only help the fusion embedding retain some coarse-grained similarity semantics, this still provides indispensable multimodal information for the denoiser. §.§.§ Video Moment Denoiser Recent works <cit.> have revealed that previous models <cit.> may rely on the presence of location bias in annotations to achieve seemingly good predictions. To alleviate this problem, instead of improving distribution-specific proposals or queries, we use random location spans to iteratively obtain real spans from a generative perspective. In this section, we first introduce the principle of the forward and reverse processes in diffusion models. Then, we build the diffusion generation process in the video moment denoiser with model distribution p_θ(x_0) to learn the data distribution q(x_0). [Figure: Video moment denoiser. For simplicity, we only draw the intensity-aware attention structure that is different from the general Transformer.] Forward process. During training, we first construct a forward process that corrupts real segment spans x_0∼ q(x_0) to noisy data x_m, where m is the noise intensity. Specifically, the Gaussian noise process of any two consecutive intensities <cit.> can be defined as: q(x_m |x_m-1) =𝒩(x_m ; √(1-β_m)x_m-1, β_m I) , where β is the variance schedule. In this way, x_m can be constructed from x_0: q(x_1: m|x_0) =∏_i=1^m q(x_i |x_i-1). Benefiting from the reparameterization technique, the final forward process is simplified to: x_m=√(α̅_m)x_0+√(1-α̅_m)ϵ_m, where the noise ϵ_m ∼𝒩(0,I) and α̅_m=∏_i=1^m (1-β_i). Reverse process. The denoising process learns to remove noise asymptotically from x_m to x_0, and its traditional single-step process can be defined as: p_θ(x_m-1|x_m)=𝒩(x_m-1 ; μ_θ(x_m, m), σ_m^2 I), where σ_m^2 is associated with β_m and μ_θ(x_m, m) is the predicted mean. In this paper, we train the Video Moment Denoiser (VMD) to reverse this process. The difference is that we predict spans from the VMD network f_θ(x_m,m,F) instead of μ_θ(x_m, m). Denoiser network. As shown in Fig. <ref>, the VMD network mainly consists of 2-layer cross-attention Transformer layers. Next, we walk through how VMD works step by step. For clarity, the input span and output prediction presented below are a single vector. (1) Span normalization. Unlike generation tasks, our ground-truth temporal span x_0 is defined by two parameters c_0 and w_0 that have been normalized to [0,1], where c_0 and w_0 are the center and length of the span x_0. Therefore, in the above forward process, we need to extend its scale to [-λ, λ] to stay close to the Gaussian distribution <cit.>. After the noise addition is completed, we need to clamp x_m to [-λ, λ] and then transform the range to [0,1]: x_m = (x_m / λ + 1) / 2, where λ = 2. (2) Span embedding. 
To model the data distribution in multimodal space, we directly project the discrete span to the embedding space through the Fully Connected (FC) layer: x^'_m = FC(x_m) ∈ℝ^D. Compared to constructing ROI features in DiffusionDet <cit.>, linear projection is very flexible and decoupled from conditional information (i.e., fusion embeddings), avoiding more redundancy. (3) Intensity-aware attention. The denoiser needs to understand the added noise intensity m during denoising, so we design the intensity-aware attention to perceive the intensity magnitude explicitly. In Fig. <ref>, we use sinusoidal mapping for the noise intensity m to obtain e_m∈ℝ^D in the multimodal space and add it to the span embedding. We project x^'_m + e_m as the query embedding, and the positional embedding pos_m ∈ℝ^D is obtained by sinusoidal mapping of x_m. We can obtain the input query: Q_m = Concat(Proj(x^'_m + e_m), pos_m). Similarly, the input key is K_f = Concat(Proj(F), pos_f) and the input value is V_f = Proj(F), where Proj(·) is the projection function and pos_f ∈ℝ^D is the standard position embedding in Transformer <cit.>. Thus, the intensity-aware attention is: Q_m=softmax(Q_m K_f^T) V_f+ Q_m. (4) Denoising training. Finally, the generated Transformer output is transformed into predicted spans x̂_m-1 = (ĉ_m-1, ŵ_m-1) and confidence scores ẑ_m-1, which are implemented through a simple FC layer, respectively. Following <cit.>, the network prediction should be as close to the ground truth x_0 as possible. In addition, inspired by <cit.>, we define the denoising loss as: ℒ_vmr(x_0, f_θ(x_m,m,F)) = λ_L1‖x_0 -x̂_m-1‖_1 + λ_iou ℒ_iou (x_0, x̂_m-1) + λ_ceℒ_ce(ẑ_m-1), where λ_L1, λ_iou and λ_ce are hyperparameters, ℒ_iou is a generalized IoU loss <cit.>, and ℒ_ce is a cross-entropy loss. Note that the above procedure is a simplification of training. Considering that there may be more than one ground truth span in the dataset <cit.>, we set the number of input and output spans to N_r. For the input, apart from the ground truth, the extra spans are padded with random noise. For the output, we calculate the matching cost of each predicted span and ground truth according to ℒ_vmr (i.e., the Hungarian match <cit.>), and find the span with the smallest cost to calculate the loss. In ℒ_ce, we set the confidence label to 1 for the best predicted span and 0 for the remaining spans. §.§ Inference After training, MomentDiff can be applied to generate temporal moments for video-text pairs, including pairs unseen during training. Specifically, we randomly sample noise x̂_m from a Gaussian distribution 𝒩(0,I), and the model removes noise according to the update rule of diffusion models <cit.>: x̂_m-1= √(α̅_m-1) f_θ(x̂_m,m,F) + √(1-α̅_m-1-σ_m^2)·(x̂_m-√(α̅_m) f_θ(x̂_m,m,F)) /√(1-α̅_m)+σ_m ϵ . As shown in Fig. <ref>(d), we iterate this process continuously to obtain x̂_0 from coarse to fine. Note that in the last step we directly use f_θ(x̂_1,1,F) as x̂_0. In x̂_0, we choose the span with the highest confidence score in ẑ_0 as the final prediction. To reduce inference overhead, we do not employ any post-processing techniques, such as box renewal in DiffusionDet <cit.> and self-condition <cit.>. § EXPERIMENTS §.§ Datasets, Metrics and Implementation Details We evaluate the efficacy of our model by conducting experiments on three representative datasets: Charades-STA <cit.>, QVHighlights <cit.> and TACoS <cit.>. The reason is that the above three datasets exhibit diversity. Charades-STA comprises intricate daily human activities. 
QVHighlights contains a broad spectrum of themes, ranging from everyday activities and travel in lifestyle vlogs to social and political events in news videos. TACoS mainly presents long-form videos featuring culinary activities. The training and testing divisions are consistent with existing methods <cit.>. Metrics. To make fair comparisons, we adopt the same evaluation metrics as those used in previous works <cit.>, namely R1@n, MAP@n, and MAP_avg. Specifically, R1@n is defined as the percentage of testing queries that have at least one correct retrieved moment (with an intersection over union (IoU) greater than n) within the top-1 results. Similarly, MAP@n is defined as the mean average precision with an IoU greater than n, while MAP_avg is determined as the average MAP@n across multiple IoU thresholds [0.5: 0.05: 0.95]. Implementation details. For a fair comparison <cit.>, we freeze the video encoder and text encoder and use only the extracted features. For VGG <cit.>, C3D <cit.> or SlowFast+CLIP (SF+C) <cit.>, we extract video features every 1/6s, 1s or 2s. So the frame number N_v is related to the length of the video, while the max text length N_t is set to 32. We set the hidden size D = 256 in all Transformer layers. The number of random spans N_r is set to 10 for QVHighlights, 5 for Charades-STA and TACoS. We use the cosine schedule for β. For all datasets, we optimize MomentDiff for 100 epochs on one NVIDIA Tesla A100 GPU, employ Adam optimizer <cit.> with 1e-4 weight decay and fix the batch size as 32. The learning rate is set to 1e-4. By default, the loss hyperparameters λ_L1 = 10, λ_iou = 1 and λ_ce = 4. To speed up the sampling process during inference, we follow DDIM <cit.> and iterate 50 times.
Table: Performance comparisons (%) on QVHighlights with SF+C video features and CLIP text features. "⋆" denotes that we re-implement the method with only segment moment labels. "†" stands for using audio data. MDE is the abbreviation of MomentDETR <cit.>.
Method         | R1@0.5 | R1@0.7 | MAP@0.5 | MAP@0.75 | MAP_avg
MCN <cit.>     | 11.41  | 2.72   | 24.94   | 8.22     | 10.67
CAL <cit.>     | 25.49  | 11.54  | 23.40   | 7.65     | 9.89
XML <cit.>     | 41.83  | 30.35  | 44.63   | 31.73    | 32.14
XML+ <cit.>    | 46.69  | 33.46  | 47.89   | 34.67    | 34.90
MDE^⋆ <cit.>   | 53.56  | 34.09  | 53.97   | 28.65    | 29.39
MomentDiff     | 57.42  | 39.66  | 54.02   | 35.73    | 35.95
UMT^⋆† <cit.>  | 56.26  | 40.31  | 52.77   | 36.82    | 35.79
MomentDiff^†   | 58.21  | 41.48  | 54.57   | 37.21    | 36.84
Table: Performance comparisons (%) on TACoS. We adopt C3D features to encode videos. MDE is the abbreviation of MomentDETR <cit.>.
Method         | R1@0.1 | R1@0.3 | R1@0.5
CTRL <cit.>    | 24.32  | 18.32  | 13.30
SCDM <cit.>    | -      | 26.11  | 21.17
DRN <cit.>     | -      | -      | 23.17
DCL <cit.>     | 49.36  | 38.84  | 29.07
CBLN <cit.>    | 49.16  | 38.98  | 27.65
FVMR <cit.>    | 53.12  | 41.48  | 29.12
RaNet <cit.>   | -      | 43.34  | 33.54
MDE^⋆ <cit.>   | 41.16  | 32.21  | 20.55
MMN^⋆ <cit.>   | 51.39  | 39.24  | 26.17
MomentDiff     | 56.81  | 44.78  | 33.68
§.§ Performance Comparisons Comparison with state-of-the-art methods. To prove the effectiveness of MomentDiff, we compare the retrieval performance with 17 discriminative VMR methods. Tab. <ref>, Tab. <ref>, and Tab. <ref> show the R1@n, MAP@n, and MAP_avg results on Charades-STA, QVHighlights and TACoS. Compared with SOTA methods <cit.>, MomentDiff achieves significant improvements on Charades-STA regardless of whether 2D features (VGG), multimodal features (VGG+A), 3D features (C3D), or multimodal pre-trained features (SF+C) are used. This proves that MomentDiff is a universal generative VMR method. 
In the other two datasets (QVHighlights and TACoS), we still have highly competitive results. Specifically, compared to MomentDETR <cit.>, MomentDiff obtains 2.35%, 3.86%, and 13.13% average gains in R1@0.5 on the three datasets. It is worth noting that TACoS contains long videos of cooking events where different events are only slightly different in terms of cookware, food and other items. The learnable queries in MomentDETR may not cope well with such fine-grained dynamic changes. We attribute the great advantage of MomentDiff over these methods to fully exploiting similarity-aware condition information and progressive refinement denoising. Transfer experiments. To explore the location bias problem, we conduct moment retrieval on two anti-bias datasets with location distribution shifts: (1) Charades-STA-Len. We collect all video-text pairs with w_0 ≤ 10s and randomly sample pairs with w_0 > 10s in the original training set of Charades-STA, which account for 80% and 20% of the new training set, respectively. On the contrary, we collect all pairs with w_0 > 10s and randomly sample pairs with w_0 ≤ 10s from the original test set, accounting for 80% and 20% of the new test set. (2) Charades-STA-Mom. Similarly, we collect all video-text pairs with the end time c_0 + w_0/2 ≤ 15s and sample pairs with the start time c_0 - w_0/2 > 15s as the training set, which account for 80% and 20%, respectively. Likewise, the construction rules for the test set are the opposite of those for the training set. Dataset statistics are given as Train/Test in Fig. <ref> and in the supplementary material. In Tab. <ref>, the proposed MomentDiff shows much more robustness than the previous state-of-the-art method MomentDETR <cit.>. Concretely, compared with the experiment in Tab. <ref>, the performance gap between MomentDiff and MomentDETR gets larger on Charades-STA-Len and Charades-STA-Mom. Fig. <ref> also demonstrates that the distribution of our predictions is closer to that of the test set. We conjecture that this is because MomentDiff discards the learnable proposals that fit the prior distribution of the training set. Moreover, 2DTAN <cit.> and MMN <cit.> perform worse than MomentDETR on the original Charades-STA dataset, but they achieve results better than or comparable to MomentDETR in Tab. <ref>. This shows that predefined proposals <cit.> may be better than learnable proposals in dealing with the location bias problem, but they take up more time and space overhead. Differently, our method performs well on both the public datasets and the anti-bias datasets. §.§ Ablation Study To provide further insight into MomentDiff, we conduct critical ablation studies on Charades-STA. Span embedding type. Regarding the way discrete spans are mapped to the embedding space, we compare the ROI strategy <cit.> with our linear projection (FC) in Tab. <ref>(a). For the ROI strategy, we slice the fusion embeddings F corresponding to random spans, followed by mean pooling on the sliced features. Tab. <ref>(a) shows that ROI does not work well. This may be due to two points: 1) ROI is a hard projection strategy, while the importance of each video frame is quite different. FC is similar to a soft ROI, and its process can be trained end-to-end. 2) FC is decoupled from F, which allows the model to focus on modeling the diffusion process and avoid over-dependence on F. Scale λ. λ is the signal-to-noise ratio <cit.> of the diffusion process, and its effect is shown in Tab. <ref>(b). 
We find that the effect of larger λ drops significantly, which may be due to the lack of more hard samples for denoising training when the proportion of noise is small, resulting in poor generalization. Video Moment Denoiser (VMD) and noise intensity m. In Tab. <ref>(c), we first remove the denoiser and the diffusion process (w/o VMD). After training with the same losses, we find that predicting with only fusion embeddings F leads to a drastic drop in results, which reveals the effectiveness of denoising training. Then we remove the noise intensity m (w/o m), and the result is reduced by 5.16% on R1@0.5. This shows that explicitly aggregating noise intensity with random spans improves noise modeling. Combined with VMD and m, the diffusion mechanism can fully understand the data distribution and generate the real span from coarse to fine. Loss designs. In Tab. <ref>(d), we show the impact of loss functions. In ℒ_sim, we use pointwise and pairwise constraints to guide token-wise interactions between multimodal features, while ensuring reliable conditions for subsequent denoising. In ℒ_vmr, the model can learn to accurately localize exact segments. Adequate multimodal interaction and denoising training procedures are complementary. Span number. In Tab. <ref>(e), we only need 5 random spans to achieve good results. Unlike object detection <cit.>, the number of correct video segments corresponding to text query is small. Therefore, a large number of random inputs may make the model difficult to train and deteriorate the performance. Model performance vs. speed. In Tab. <ref>(f), we explore the effects of different diffusion steps. When step=2, good results and fast inference speed have been achieved. Subsequent iterations can improve the results of high IoU (i.e., R1@0.7), which shows the natural advantages of diffusion models. §.§ Qualitative Results We show two examples of the diffusion process in Fig. <ref>. We can find that the retrieved moments by MomentDiff are closer to the ground truth than those by MomentDETR. The diffusion process can gradually reveal the similarity between text and video frames, thus achieving better results. Besides, the final predictions corresponding to spans with multiple random initial locations are close to the ground truth. This shows that our model achieves a mapping from arbitrary locations to real segments. § LIMITATION AND CONCLUSION Limitation. Compared to existing methods <cit.>, the diffusion process requires multiple rounds of iterations, which may affect the inference speed. As shown in Tab. <ref>(f), we reduce the number of iterations, with only a small sacrifice in performance. In practical usage, we suggest choosing a reasonable step number for a better trade-off between performance and speed. Conclusion. This paper proposes a novel generative video moment retrieval framework, MomentDiff, which simulates a typical human retrieval style via diffusion models. Benefiting from the denoising diffusion process from random noise to temporal span, we achieve the refinement of prediction results and alleviate the location bias problem existing in discriminative methods. MomentDiff demonstrates efficiency and generalization on multiple diverse and anti-bias datasets. We aim to stimulate further research on video moment retrieval by addressing the inadequacies in the framework design, and firmly believe that this work provides fundamental insights into the multimodal domain. 
§ ACKNOWLEDGEMENT This work is supported by the National Key Research and Development Program of China (2022YFB3104700), the National Nature Science Foundation of China (62121002, 62022076, U1936210).
http://arxiv.org/abs/2307.05333v1
20230703092136
Unbiased Pain Assessment through Wearables and EHR Data: Multi-attribute Fairness Loss-based CNN Approach
[ "Sharmin Sultana", "Md Mahmudur Rahman", "Atqiya Munawara Mahi", "Shao-Hsien Liu", "Mohammad Arif Ul Alam" ]
eess.SP
[ "eess.SP", "cs.AI", "cs.LG" ]
Unbiased Pain Assessment through Wearables and EHR Data: Multi-attribute Fairness Loss-based CNN Approach Sharmin Sultana Dept. of Computer Science University of Massachusetts Lowell sharmin_sultana@student.uml.edu Md Mahmudur Rahman Dept. of Computer Science University of Massachusetts Lowell mdmahmudur_rahman@student.uml.edu Atqiya Munawara Mahi Dept. of Computer Science University of Massachusetts Lowell atqiyamunawara_mahi@student.uml.edu Shao-Hsien Liu Research Faculty at PHARE University of Massachusetts Chan Medical School shaohsien.liu@umassmed.edu Mohammad Arif Ul Alam Dept. of Computer Science University of Massachusetts Lowell mohammadariful_alam@uml.edu August 1, 2023 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= The combination of diverse health data (IoT, EHR, and clinical surveys) and scalable-adaptable Artificial Intelligence (AI), has enabled the discovery of physical, behavioral, and psycho-social indicators of pain status. Despite the hype and promise to fundamentally alter the healthcare system with technological advancements, much AI adoption in clinical pain evaluation has been hampered by the heterogeneity of the problem itself and other challenges, such as personalization and fairness. Studies have revealed that many AI (i.e., machine learning or deep learning) models display biases and discriminate against specific population segments (such as those based on gender or ethnicity), which breeds skepticism among medical professionals about AI adaptability. In this paper, we propose a Multi-attribute Fairness Loss (MAFL) based CNN model that aims to account for any sensitive attributes included in the data and fairly predict patients' pain status while attempting to minimize the discrepancies between privileged and unprivileged groups. In order to determine whether the trade-off between accuracy and fairness can be satisfied, we compare the proposed model with well-known existing mitigation procedures, and studies reveal that the implemented model performs favorably in contrast to state-of-the-art methods. Utilizing NIH All-Of-US data, where a cohort of 868 distinct individuals with wearables and EHR data gathered over 1500 days has been taken into consideration to analyze our suggested fair pain assessment system. Wearable, Electronic Health Record, Fairness, Privileged and Unprivileged Group, Sensitive Attribute. § INTRODUCTION The Intelligent Internet of Things (IIoT) has the potential to revolutionize healthcare by allowing for continuous monitoring of health vitals and behaviors, which can lead to timely health interventions for communities in need <cit.>. However, using AI in healthcare brings major risks and potential unintended harm by introducing biases in decisions. Biased AI may discriminate based on sensitive attributes like race, gender, ethnicity, and disabilities, leading to mistrust among clinicians and preventing AI systems from being adopted in clinical settings <cit.>. 
Measuring bias is challenging because it may occur explicitly or implicitly. Explicit bias involves conscious attitudes that can be measured by self-report but poses the risk of falsely endorsing more socially desirable attitudes. Implicit biases occur outside of conscious awareness and can result in a negative evaluation of a person based on sensitive attributes. Identifying and measuring pain <cit.> begins with a person’s self-report, while simply worded questions and tools, that are easily understood, continue to be the most effective. The best choice for assessing pain intensity includes self-reported surveys such as Iowa Pain Thermometer (IPT) and Numeric Rating Scale (NRS). Despite the prevalence and consequences of pain, healthcare professionals struggle in assessing <cit.> and treating pain accurately <cit.>. On the other hand, worded questions-based assessments become ineffective among the geriatric population with disparities in terms of cognition, hearing, literacy, and language abilities, as surveys-based assessment tools highly rely on the above abilities. In this paper, we present a research contribution that focuses on mitigating disparity in sensitive attributes to ensure the fair classification of pain status (i.e., pain improves or degrades) from patient’s heart rate and step counts data collected from wearable and EHR. We introduce a novel Multi-attribute Fairness Loss (MAFL) based CNN approach that incorporates all the sensitive attributes present in the nationwide collected heterogeneous data (All-Of-Us data) <cit.>. Our model tries to reduce the discrepancies between the privileged and unprivileged groups, hence encouraging fairness in categorization, by explicitly taking into account these features. § RELATED WORKS Clinical pain assessment relies heavily on self-reported pain scale surveys, including disease-specific and generalized tools. While changes in pain measures are currently the standard for tracking patient progress, these questionnaires still depend on subjective patient assessments and are vulnerable to various biases <cit.>. However, research has shown that racial, age-related, and socioeconomic biases exist in pain treatment, with nonwhite and older patients and those of lower socioeconomic status being less likely to receive pain medication <cit.>. Despite fairness training, these biases may persist in clinical settings, leading to unfair treatment to the unprivileged group. Biases in clinical data can also lead to unfair machine learning (ML) algorithms. Given the limitations of subjective self-reported measures of pain, there has been increasing interest in pain assessment objectively. While many areas of medicine have developed meaningful biomarkers and endpoints to objectively treat patients, the development of objective biomarkers in pain has been lacking <cit.>. There is very little research has been done to investigate the potential use of wearable devices to complement the assessment and treatment of painful conditions. Recently, fMRI-based and ICU video-based facial expression monitoring <cit.> methods show promising objective pain assessment automation, but such hospital-based, controlled imaging studies are not feasible for assessing ongoing real-time objective markers of pain. Recent studies showed that wearable sensors are not free of biases and may provide inaccurate data, for example, people with darker skin tones <cit.>. 
Similarly, it has been anticipated that wearable-based monitoring of health vitals may show disparities for older adults, persons with skin disease, and other groups, and these disparities may also persist in wearable-based ML algorithms <cit.>. <cit.> provides benchmark pre-processing, in-processing, and post-processing algorithms to mitigate bias in ML models, and measures fairness using five fairness metrics. However, the algorithms introduced by <cit.> consider only a single sensitive attribute at a time and are designed for classical ML models. This paper introduces MAFL, a novel loss-function-based CNN model that incorporates either a single protected attribute or multiple protected attributes. By minimizing the disparities between privileged and unprivileged groups, MAFL-CNN effectively mitigates bias in classification tasks. § METHODOLOGY §.§ Data Collection To access NIH All of Us data, we registered and signed a Data Use and Registration Agreement (DURA). Each member completed the Collaborative Institutional Training Initiative (CITI Program) training and applied for Controlled Tier access. Using concept ID ‘3036453’, we collected data for 868 individuals who had been assessed for pain at least twice during hospital visits in the same year and had complete wearable sensor data. The pain assessment tool used was the Visual Analog Score, ranging from 0 to 10. We use the feature deviance method to extract personalized features for each individual. First, we extract statistical, temporal, and spectral domain features X^f_i,d, where f ∈ F = {statistical, temporal, spectral} denotes the type of feature, i ∈ {1, 2, …, N} the index of the feature (we have N = 60 features in total), and d the day of the collected wearable data. We then apply 4 different deviance functions (mathematical, logarithmic, cosine, and logcosh) to these 60 extracted features. §.§ Loss Function Design

Multi_Attribute_Fair_Loss(y_true, y_pred):
    unprivileged_indices ← indices where sensitive_attributes equals 0
    privileged_indices   ← indices where sensitive_attributes equals 1
    loss_unprivileged ← mean of y_pred values at unprivileged_indices
    loss_privileged   ← mean of y_pred values at privileged_indices
    disparity ← loss_privileged - loss_unprivileged
    λ ← 1.0
    loss ← Binary_Cross_Entropy(y_true, y_pred)
    loss ← loss + λ · disparity²
    regularization_loss ← 0.01 · |∑_{k=1}^{n} (y_pred - mean(y_pred))|
    total_loss ← loss + regularization_loss
    return total_loss

MAFL adds fairness to the regular loss function. It first takes the indices of the unprivileged and privileged groups within the sensitive-attributes list and then calculates the mean prediction values for the unprivileged and privileged groups separately. The difference between these means represents the disparity between the groups in terms of the predictions made by the model. A regularization term is added to the loss function to further promote fairness; it penalizes large differences between individual predictions and the mean prediction over all samples. Finally, the total loss is the sum of the regular loss function (in this case, Binary Cross-Entropy), the weighted squared disparity, and the regularization loss. The fairness penalty is applied to the input and target for each sensitive attribute (k = 1 to n). In our dataset, we have 5 protected attributes (gender, race, ethnicity, age, and cognitive ability (dementia)).
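The listing above is pseudocode; below is a minimal sketch of how such a loss might be written in TensorFlow/Keras. The function name, the closure over a per-batch group indicator, and the assumption of a single sigmoid output are ours, not the paper's released code. Note also that the summed deviation |∑(y_pred − mean(y_pred))| in the pseudocode is identically zero by definition of the mean, so the sketch follows the prose description and penalizes the average absolute deviation of predictions from the batch mean instead.

    import tensorflow as tf

    def make_mafl_loss(group, lam=1.0, reg_weight=0.01):
        """Return a MAFL-style loss for one sensitive attribute.

        group: 0/1 tensor marking each sample in the batch as unprivileged (0)
        or privileged (1). In a real training loop this would be supplied per
        batch (e.g. from a custom train_step); here it is closed over for
        simplicity.
        """
        group = tf.cast(tf.reshape(group, [-1]), tf.bool)

        def loss_fn(y_true, y_pred):
            scores = tf.reshape(y_pred, [-1])
            # mean predicted score per group (batches are assumed to contain both groups)
            mean_unpriv = tf.reduce_mean(tf.boolean_mask(scores, tf.logical_not(group)))
            mean_priv = tf.reduce_mean(tf.boolean_mask(scores, group))
            disparity = mean_priv - mean_unpriv
            # base classification loss plus the squared group disparity
            loss = tf.reduce_mean(tf.keras.losses.binary_crossentropy(y_true, y_pred))
            loss += lam * tf.square(disparity)
            # regularization: penalize predictions that drift far from the batch mean
            reg = reg_weight * tf.reduce_mean(tf.abs(scores - tf.reduce_mean(scores)))
            return loss + reg

        return loss_fn

Extending this sketch to the paper's multi-attribute setting would amount to summing one disparity term per protected attribute.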
§.§ Integrated CNN Framework to Predict Pain Levels The 1-D CNN model [Figure <ref>] has seven layers, including two 1D-convolutional layers, two max-pooling layers, one flattening layer, one fully-connected hidden layer, and an output layer. The input layer has 2883 input numbers representing demographic features, heart rate, and step counts for a day. A 1D max-pooling layer is added to convert variable-length hidden features in the previous convolution layer to a fixed number of features. The extracted features are fed into a 512-node fully-connected hidden layer, and the output layer contains two nodes. The softmax function is applied to predict the pain level probability. The dropout technique is used in the hidden layer to prevent over-fitting. To leverage the discriminative power of 1-D CNNs, we integrate MAFL into the network architecture. The CNN framework learns features that are both informative for classification and less dependent on sensitive attributes. By optimizing the proposed loss function, we train the network to effectively reduce disparity in the classification outcomes. § EXPERIMENTAL EVALUATION To enable the classification model to function effectively, the raw data was transformed into a clean dataset. The initial step involved comparing pain levels between consecutive dates. Pain improvement was represented as 1, otherwise 0. Categorical features were encoded as binary column vectors. Missing rows were filled with mean or interpolated values. Irrelevant columns were dropped after applying Min-Max normalization. The resulting clean dataset consisted of 868 records. It included 5 protected attributes (gender, race, ethnicity, age, and dementia), and privileged subgroups for each attribute were determined to be males, Asians, non-Hispanic individuals, those under 65, and people without dementia, respectively (Figure <ref>). We conduct tests on our dataset using benchmark pre-processing, in-processing, and post-processing bias mitigation techniques suggested by AIF360<cit.> to show that MAFL-CNN satisfies the tradeoff between fairness and accuracy. AIF360 toolkits consider single protected attributes present in the data and except the Adversarial Debiasing algorithm, all other algorithms only work for machine learning models. Since our dataset comprises five sensitive attributes, the loss function we created can take into account all of them simultaneously based on their indices as well as only one attribute. We assess the effectiveness of our MAFL-CNN model in terms of fair accuracy by comparing it to cutting-edge fairness-aware AIF360 bias mitigation techniques on Logistic Regression (LR), Random Forest (RF), Decision Tree (DT), Support Vector Machine (SVM), and Naive Bayes (NB) models. We selected the best model based on the highest number of fair classification matrices (Statistical Parity Difference, Disparate Impact, Equal Opportunity Difference, Theil Index, and Average Odd Difference) to satisfy accuracy and fairness trade-off. Using Python3.11, the experiment was carried out in a virtual setting and finished in about 170 minutes. § RESULTS To assess the presence of bias in our dataset, we employed two fairness metrics, namely statistical parity difference and disparate impact, as described in the work by AIF360 <cit.>. If the unprivileged group receives a positive outcome of less than 80% of their proportion relative to the privileged group, it suggests a potential bias or disparate impact. 
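For reference, the two bias-detection metrics used here can be computed directly from binary predictions and a group indicator. The following NumPy snippet is our own illustration of the definitions (the paper itself relies on the AIF360 implementations), with group = 0 marking the unprivileged and group = 1 the privileged subgroup.

    import numpy as np

    def statistical_parity_difference(y_pred, group):
        """P(favorable | unprivileged) - P(favorable | privileged); ideal value 0."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return y_pred[group == 0].mean() - y_pred[group == 1].mean()

    def disparate_impact(y_pred, group):
        """P(favorable | unprivileged) / P(favorable | privileged); values below 0.8
        are commonly read as evidence of disparate impact (the '80% rule')."""
        y_pred, group = np.asarray(y_pred), np.asarray(group)
        return y_pred[group == 0].mean() / y_pred[group == 1].mean()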
In our original dataset, prior to any mitigation efforts, the disparate impact value was 0.72, and the statistical parity difference was 1.5, which ideally should range between -1 and 1. To address these biases, we made modifications to the original dataset using fair pre-processing procedures. To evaluate the impact of these pre-processing bias mitigation techniques on bias detection metrics, we computed fairness metrics both before and after the transformation. The results, shown in Figure <ref>, indicate the extent of±1 standard deviation using the bars. It is evident that the Reweighing and Disparate Impact Remover pre-processing techniques enhance fairness for all the datasets in terms of both provided measures. The performance of the classification models in predicting pain status fairly is depicted in Figure <ref>. However, when considering protected attributes such as gender, age, race, ethnicity, and cognitive disability, all the classifiers exhibit unfairness according to our proposed majority voting bias detection metrics. Table <ref> provides a comprehensive summary of the results, highlighting the best-performing model in terms of accuracy and fairness for each mitigation technique, and comparing it with benchmarking techniques from <cit.>. In Table <ref>, we only report the best model in terms of accuracy and fairness for each mitigation technique. Among the pre-processing techniques, SVM demonstrates the highest balance between accuracy and fairness metrics, achieving an accuracy of 76.4%. In the case of in-processing techniques, the Prejudice Remover (PR) technique performs well in terms of accuracy for the sensitive attribute 'Race,' while the RF model shows fairness as a predictor for the Exponentiated Gradient Reduction (EGR) technique. The Reject Option Classification (ROC) technique exhibits an average accuracy of around 75% for each sensitive attribute. However, our MAFL-CNN model suppresses the performance of all the models, resulting in accuracy ranging from 75% to 85%. Furthermore, unlike the AIF360 mitigation techniques, our MAFL-CNN model can consider multiple sensitive attributes. Additionally, it is worth noting that AIF360 only includes one neural network (NN) model, which is the adversarial debiasing algorithm, and its performance is noticeably lower than our MAFL-CNN model. § DISCUSSION Our experimental results demonstrate the efficacy of our proposed approach. We observe significant reductions in disparity between privileged and unprivileged groups while maintaining competitive classification accuracy [Figure <ref>]. Furthermore, we analyze the impact of different sensitive attributes on the fairness metrics to gain insights into their influence on the classification process. By mitigating disparity in classification outcomes, our approach contributes towards promoting fairness and reducing bias in decision-making systems [Figure<ref>] with accuracy ranging from 75% to 85%. Future studies emphasize exploring the generalizability of our method across different datasets and extending the approach to other deep learning architectures. § CONCLUSION In this paper, we have presented a novel loss function, MAFL, integrated into a 1-D CNN framework to address disparity issues arising from sensitive attributes in classification tasks. Our approach demonstrates notable reductions in disparity while maintaining competitive classification accuracy ranging from 75% to 85% for single or multiple sensitive attributes. 00 9Keogh, A., et al. 
Assessing the usability of wearable devices to measure gait and physical activity in chronic conditions: a systematic review. J NeuroEngineering Rehabil 18, 138 (2021). 10Mosconi P, et al. Use of Health Apps and Wearable Devices: Survey Among Italian Associations for Patient Advocacy. JMIR Mhealth Uhealth. 2019 Jan 15;7(1):e10242 intro3Nelson, A. (2002). Unequal treatment: confronting racial and ethnic disparities in health care. Journal of the national medical association, 94(8), 666. intro4Cleeland, C. S., & Ryan, K. M. (1994). Pain assessment: Global use of the Brief Pain Inventory. Annals, Academy of Medicine, Singapore, 23(2), 129–138. 21Kaasalainen SJ, et al. The assessment of pain in the cognitively impaired elderly: A literature review. Perspectives. 1998;22:2. 26Morrison RS, Siu AL. A comparison of pain and its treatment in advanced dementia and cognitively intact patients with hip fracture. J Pain Symptom Manage. 2000. 27 The “All of Us” Research Program, August 15, 2019 N Engl J Med 2019 bias1Tsai PF. Assessing pain in older adults. J Gerontol Nurs. 2011. bias2Tsze DS, von Baeyer CL, Bulloch B, Dayan PS. Validation of self-report pain scales in children. Pediatrics. 2013. bias3Shah AA, Zogg CK, et al. Analgesic Access for Acute Abdominal Pain in the Emergency Department Among Racial/Ethnic Minority Patients: A Nationwide Examination. Med Care. 2015. bias4Singhal A, et al. Racial-Ethnic Disparities in Opioid Prescriptions at Emergency Department Visits for Conditions Commonly Associated with Prescription Drug Abuse. PLoS One. 2016. wearable1van der Miesen MM, et al. Neuroimaging-based biomarkers for pain: state of the field and current directions. Pain Rep. 2019. wearable2Wager TD, et al. An fMRI-based neurologic signature of physical pain. N Engl J Med. 2013. wearable3Perraudin CG, et al. Observational Study of a Wearable Sensor and Smartphone Application Supporting Unsupervised Exercises to Assess Pain and Stiffness. Digit Biomark. 2018 wearable11Shcherbina A, et al. Accuracy in wrist-worn, sensor-based measurements of heart rate and energy expenditure in a diverse cohort. J Pers Med. 2017;7(2):3. arif_fairnessAlam et. al., AI-Fairness Towards Activity Recognition of Older Adults, Mobiquitous 2020. aif360Bellamy, R. K. E. (2018, October 3). AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias.
http://arxiv.org/abs/2307.02276v1
20230705132021
First-Explore, then Exploit: Meta-Learning Intelligent Exploration
[ "Ben Norman", "Jeff Clune" ]
cs.LG
[ "cs.LG", "cs.AI" ]
[ Jean-Philip Piquemal^* August 1, 2023 ========================== Standard reinforcement learning (RL) agents never intelligently explore like a human (i.e. by taking into account complex domain priors and previous explorations). Even the most basic intelligent exploration strategies such as exhaustive search are only inefficiently or poorly approximated by approaches such as novelty search or intrinsic motivation, let alone more complicated strategies like learning new skills, climbing stairs, opening doors, or conducting experiments. This lack of intelligent exploration limits sample efficiency and prevents solving hard exploration domains. We argue a core barrier prohibiting many RL approaches from learning intelligent exploration is that the methods attempt to explore and exploit simultaneously, which harms both exploration and exploitation as the goals often conflict. We propose a novel meta-RL framework (First-Explore) with two policies: one policy learns to only explore and one policy learns to only exploit. Once trained, we can then explore with the explore policy, for as long as desired, and then exploit based on all the information gained during exploration. This approach avoids the conflict of trying to do both exploration and exploitation at once. We demonstrate that First-Explore can learn intelligent exploration strategies such as exhaustive search and more, and that it outperforms dominant standard RL and meta-RL approaches on domains where exploration requires sacrificing reward. First-Explore is a significant step towards creating meta-RL algorithms capable of learning human-level exploration which is essential to solve challenging unseen hard-exploration domains[Code is available at <https://github.com/btnorman/First-Explore>.] § INTRODUCTION Reinforcement learning (RL) is seeing successful application to a range of challenging tasks from plasma control <cit.>, molecule design <cit.>, game playing <cit.>, and to the control of robots <cit.>. Despite this promise, standard RL is very sample inefficient. It can take an agent hundreds of thousands of episodes of play to learn a task that humans could learn in a few tries <cit.>. This sample inefficiency has several causes. First, standard RL cannot condition on a complex prior such as a human's common sense or general experience <cit.>. For example, a human gamer has intuitions when first encountering a 2D game with a character, platforms, ladders, keys and doors (e.g., Montezuma's Revenge). They think they can probably control the character with the game actions, and that the character might be able to jump, climb ladders, collect keys, and use keys to open doors. It has been shown that much of the sample efficiency of humans comes from such priors <cit.>. Second, standard RL is limited in how it adapts as it relies on reinforcing existing behaviours over multiple episodes rather than being able to tailor each exploration to be maximally informative. For example, upon finding and reading a treasure map in one episode, a human might navigate to the treasure location in the next episode, or upon losing to a strategy in a symmetric game (e.g. in chess losing against a particular opening), they might mimic and attempt to master that strategy in subsequent play. This lack of human-like adaption further harms sample efficiency. Third, standard RL and standard meta reinforcement learning (meta-RL) both use the same policy to explore (gather data to improve the policy) and to exploit (achieve high episode reward) <cit.>. 
Standard meta-RL <cit.> does not enable intelligent exploration (exploration that incorporates a complex prior and adapts appropriately based on memory). When good exploration and good exploitation are different, e.g., when exploring requires sacrificing episode reward, using the same policy to both explore and to exploit can cause terrible sample inefficiency and potentially prevent learning (detailed in section <ref>). We present First-Explore, a simple framework for meta-RL that overcomes these limitations by learning a pair of policies: an explore policy that can intelligently explore, and an exploit policy that can intelligently exploit. First-Explore enables the potential of learning policies that exhibit meta-RL's human-level-in-context-sample-efficient learning on unseen hard-exploration domains including hostile ones that require sacrificing reward to properly explore. § PRELIMINARIES AND RELATED WORK §.§ Problems with Standard RL Exploration §.§.§ Exploring by Exploiting In standard RL, the same policy is generally used for two different purposes: i) Exploring: gathering data to improve the policy and ii) Exploiting: using the gathered data to specify a highly performant policy <cit.>. Standard RL algorithms (such as PPO <cit.>) rely on exploring by sampling the small area of policy space covered by a noisy policy centered on exploitation, e.g., by ensuring the exploit policy has high entropy <cit.> or by epsilon-greedy sampling of the policy <cit.>. r0.2 < g r a p h i c s > Locked Path Environment. The high reward blue path requires complex behavior. Exploring by relying on such noisy exploiting will never solve some tasks. For example, imagine an environment with two long paths, one orange and one blue. While the orange path is straightforward, and leads to a medium reward, the blue path is blocked by a locked door that requires great lock-picking skills to open. However, behind the door is a vast treasure (with significantly higher reward). During an episode an agent does not have time to reach the end of both paths. We shall call this the locked path environment (Figure <ref>). The standard-RL approach of exploring by (noisily) exploiting will not enable learning the best strategy (reaching the blue path reward). This dynamic occurs because while the agent is unskilled at lock picking the blue path gives zero reward, which is lower than the medium reward of the orange path. Hence, an agent attempting to exploit each episode will solely travel the orange path. Finally, because the blue path is never travelled, there is no chance to learn lock picking. The best strategy is not learnt because travelling the orange path is a local optima. Because of this local optima, getting the maximum reward in the locked path environment requires effective exploring (learning to travel the blue path and unlock the lock) which requires sacrificing episode reward during exploration (by not getting the orange path reward). This is a general principle we now define: * Sacrificial exploration: exploration that is not exploitative is sacrificial as one is `sacrificing' episode reward for information gain. Examples: paying for information or tutoring, doing practice drills, practicing ones weaknesses, attempting to solve a task in an unfamiliar way when one can already solve it in a familiar way. * Coincidental exploration: exploitation that happens by coincidence when noisily exploiting (exploiting with noise potentially added or encouraged). 
Relying on coincidental exploration is the standard RL approach, and is vulnerable to local optima. Examples: practicing one's strengths, playing normal matches, attempting to solve a task in a familiar way, attempting something new when nothing is working. Standard RL never intentionally sacrificially explores because each episode is spent trying to maximize reward. This inability prevents standard RL from optimally exploring, and so causes greater sample inefficiency, making solving hostile tasks (where exploration requires sacrificing reward) infeasible. §.§.§ Memory-less Exploration People often exhaustively explore. For example, an explorer, searching for new lands, has little interest in places they have already visited. In standard RL, an agent has no knowledge or memory of previous episodes, and so (while noisily exploiting) it will do approximately the same `exploration' repeatedly. This lack of memory can make the standard RL exploration hugely sample inefficient. While the agent's policy may change due to updates to the policy's weights, the policy change is slow, and unlikely to allow human-level adaption, wherein people change their policy substantially and appropriately based on a single episode of experience. §.§.§ No Prior on Exploration Effective and efficient exploration requires a prior on how to explore in the environment. When a human sees a level of Montezuma's Revenge, they have an intuition that keys open rooms, and hence collecting them is important for exploration. Having such intuition is core to efficient exploration. Furthermore, a good exploration prior is often different from a good exploitation prior because optimal exploration often requires sacrificing episode reward, e.g. to experiment with new strategies. Imagine playing an adventure game where one explores a strange and unfamiliar dungeon. When one is purely exploiting each episode, one acts as though one only has a single life and it is wise to only perform actions one knows are safe, e.g. not open any doors to previously unexplored rooms. However, when purely exploring, one would play as though one has infinite lives with the only concern regarding dying being the opportunity cost of wasting subsequent exploration that one could have done were one still alive in that episode, e.g. open all the unexplored dungeon rooms, but with the safest seeming rooms opened first. Both ways of playing correspond to a complex prior on how the player should act, however the prior for exploitation actively prohibits effective exploration (no new rooms are explored). §.§ Meta-RL Meta-RL attempts to address many of standard-RL's issues by learning a reinforcement learning algorithm. This reinforcement learning algorithm can be realized as a mapping from a context of rollouts c in an environment m to a peformant policy π_θ, c specialized to that environment, whether by a transformer <cit.>, recurrent neural network <cit.> or other method capable of processing long-sequences or memory. To train meta-RL, one specifies a distribution of environments ℳ. Giving the agent multiple interactions with a sampled environment then enables the policy to learn to adapt to the specifics of the environment m it is in, and also make use of and learn the prior that the environment comes from the training distribution, m ∼ℳ. Once trained, the learnt RL algorithm can be very sample efficient <cit.>. 
For example, when trained to find a reward location in mazes, a learnt RL algorithm can remember the areas of the maze it has already visited, and know not to visit those areas again <cit.> (unless worth it to reach other unseen areas). This capability allows an unseen maze to be solved in a few tries, which is fewer episode rollouts than are needed for a single batched gradient update of standard RL <cit.>. Despite these benefits, the standard approach to meta-RL is still limited in that it learns a single policy (the learnt RL algorithm) and uses it for two different purposes: to a) get information about the current environment, and b) get maximum reward in the current environment. Thus these meta-RL approaches would still fail the locked path environment described in section <ref> because in this case intelligent exploration and intelligent exploitation are very different, meaning that modelling both at once is not possible. First-Explore solves this problem by instead learning two policies: one to intelligently explore and one to intelligently exploit. We will first consider two existing works of meta-RL, and detail how they suffer from the problem of exploring by exploiting. The first is AdA <cit.>. The authors note that their trained model always attempts to maximize individual episode reward conditional on the context of previous episodes (Fig. <ref>, left). AdA then demonstrates very efficient exploitation that can learn and adapt from previous episodes. However, because the agent is always exploiting, all exploration comes from coincidental exploration. This restriction prohibits the method working in domains that require sacrificial exploration. With minor modifications, AdA could learn to perform sacrificial exploration, such as if the reward function is the reward gained in the final episode only; then it might learn to sacrificially explore in early episodes and exploit in the last episode. The second work we consider is RL^2 <cit.> (concurrently invented in <cit.>). RL^2 maximizes cumulative reward (Fig. <ref>, middle). This incentive means it learns a changing explore exploit trade-off, where initial episodes can be slightly sacrificially exploratory. However, maximizing cumulative reward prohibits pure sacrificial exploration with arbitrarily negative rewards, because the initial exploration could be so costly as to make the overall sum negative. RL^2 is also inflexible as it links the explore exploit trade-off to the size of the context, and hence one cannot exploit early, or continue exploring indefinitely. Another work, <cit.> presents E-RL^2, which is a modification of RL^2 that still maximizes cumulative reward but ignores the reward of the first k episodes. This modification makes the first k episodes exploratory on a new task, and allows initial sacrificial exploration on those first k episodes (Fig. <ref>, right). Despite this improvement, it is limited for several reasons. First, the method introduces an across-episode value assignment problem of assigning credit to which of the k exploratory episodes contributed to the explore context. It is also inflexible in that a) only the final episode is purely exploitative and b) the number of zero reward episodes is set as a hyperparameter and the same for all tasks (both at training and at inference). This constraint can be inefficient and counter productive if at inference one wants to continually explore until a satisfactory policy quality is reached. 
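As a rough summary of the objectives compared in this subsection, the toy function below spells out which reward signal each scheme optimizes over a trial of K episodes. It is our own schematic (the names and the k_free parameter are ours), not code from any of the cited papers; First-Explore, introduced below, instead scores each explore episode by the return of the exploit episode that follows it.

    def trial_objective(episode_returns, scheme, k_free=0):
        """Which reward signal each scheme optimizes over one trial of K episodes.

        episode_returns: per-episode returns [r_1, ..., r_K].
        'always_exploit' (AdA-style): each episode maximizes its own return given
            the context so far, so each episode has its own target.
        'cumulative' (RL^2): a single shared objective, the sum of all returns.
        'e_rl2': cumulative reward with the first k_free episodes' rewards ignored.
        """
        if scheme == "always_exploit":
            return list(episode_returns)
        if scheme == "cumulative":
            return sum(episode_returns)
        if scheme == "e_rl2":
            return sum(episode_returns[k_free:])
        raise ValueError(f"unknown scheme: {scheme}")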
§.§ Other Works Addressing Exploration There is a rich literature on non-meta-RL exploration approaches. One relevant approach is Intrinsic Motivation (IM), which replaces the environment reward with an intrinsic motivation reward such as novelty <cit.>. Despite the success of IM at enabling sacrificial exploration, these methods are limited by being slow to adapt due to lacking a memory not encoded via weights (section <ref>) and not having a complex learnt prior on exploration (section <ref>). Another deeper problem is that many of these methods enable sacrificial exploration by entirely ignoring the reward signal, leading to pathologies such as the noisy TV problem <cit.>, where an agent looking for new states will find a TV showing white noise to be endlessly captivating. There is also work on combining such (IM) methods with meta-RL such as <cit.> which has the insight of decoupling exploration from exploitation, however, they do so at the cost of making the exploration un-grounded in ultimate reward, introducing the pathologies just mentioned. Another method, Go-Explore <cit.>, decouples exploration and exploitation, but lacks complex priors. There are also approaches within the multi-armed bandit literature, and regret-based learning <cit.>. Despite the benefits of all these approaches, they all have pathologies, and only meta-RL is both a) computationally tractable and b) capable of human-level sample efficiency in high dimensional state spaces. As discussed above our method is an improvement on top of meta-RL learning by allowing sacrificial exploration. § FIRST-EXPLORE FRAMEWORK First-Explore is a framework that overcomes the outlined standard RL and meta-RL limitations by recognizing that RL is composed of two tasks. The task of a) gathering informative environment rollouts and b) mapping those to an effective policy. By separating exploration and exploitation we can learn sample-efficient and intelligent pure-exploration and pure-exploitation policies that are not hampered by attempting to do both at once. First-explore learns a pair of policies. An explore policy π_explore, θ, c that explores and provides information (context) for itself and for exploitation, and an exploit policy π_exploit, θ, c that exploits after every explore providing feedback to train the explore policy. The policies may share or have separate parameters, e.g., for policies with separate parameters, one could write θ = (θ_explore, θ_exploit) with each policy only dependent on its own subset of θ. This framework is visualized in Figure <ref>. * The explore policy π_explore, θ, c gathers informative environment rollouts based on the current context c (all previous explores) and parameters θ. * The exploit policy π_exploit, θ, c exploits (maximizes episode return) based on the current context c (all previous explores) and parameters θ. Here the notation π_explore, θ, c refers to the explore policy conditioned on the context π_explore, θ, c = π_explore, θ | c and similarly for π_exploit, θ, c. Notably, each policy is limited by the quality of the other as if there is no useful context then an excellent exploit policy will do no better than a mediocre one, and if the exploit policy is poor then mediocre and excellent context will be similarly indistinguishable. Thus, for complex tasks, the policies need to be learnt together. 
A central idea of First-Explore is that the exploratory value v_explore of an explore episode τ_explore given a context of past episodes {τ_1, …, τ_n} is the increase in expected reward of a subsequent exploit when the explore episode is added to the context to create new context {τ_1, …, τ_n, τ_explore}. v_explore (τ_explore) | {τ_1, …, τ_n} = 𝔼(τ_exploit | {τ_1, …, τ_n, τ_explore}) - 𝔼(τ_exploit | {τ_1, …, τ_n}) where τ_exploit | c denotes a rollout from π_exploit, θ, c. As the last term does not depend on the explore episode τ_explore, it is possible to discard the last term 𝔼(τ_exploit | {τ_1, …, τ_n}) when learning the optimal exploration policy. The reward function for the explore policy is thus the reward of the following exploit. To train, one then iterates building a context exclusively from exploration, each time determining the value of each exploration by a subsequent exploit. Evaluating and crediting each explore in this way allows First-Explore to avoid the value assignment problem of E-RL^2. First-explore can be combined with different meta-RL approaches and losses. Algorithm <ref> gives an example of a training update for a simple loss function l such as policy gradient <cit.>. Once the two policies have been trained, to adapt to an environment, one performs iterated rollouts in the environment using the explore policy to sample-efficiently explore, with each explore rollout added to the context potentially improving the context-conditioned exploit policy. This process is the analogue of sample-efficiently training a standard RL policy where accumulation of informative context replaces standard RL-training, and exploit rollouts replace standard RL evaluation rollouts. One approach would be to explore in the environment until a preset desired exploit quality is reached (similarly to training in standard RL until an environment is solved). Algorithm <ref> gives an example of this method. One could also simply do a set number of rollouts (similarly to training standard RL for a set number of epochs). [!ht] [escapeinside=||,mathescape=true]python # rollout conducts an episode when provided with an environment and policy # and returns all the episode infomation. def calc_loss(|θ|): """A basic First-Explore training step.""" # sample an environment, and initalize context c and loss value |m| = sample(|ℳ|); |c| = set(); loss = 0 for i in range(k): # do k iterated rollouts |τ_explore| = rollout(|m, π_explore,θ, c|) # explore given context c |τ_exploit| = rollout(|m, π_exploit,θ, c ∪ {τ_explore}|) # exploit given c ∪ {τ_explore,} |r = final_reward(τ_exploit)| # get the exploit reward # add π_explore's loss using the exploit reward and pre-explore context # and π_exploit's loss using the exploit reward and post-explore context loss = loss|+ l(r, π_explore,θ, c, τ_explore, c_1) + l(r, π_exploit,θ, c ∪ {τ_explore})| |c = c ∪{τ_explore}| # update the context for the next explore return loss |θ = θ - learning_rate×∇calc_loss(θ)| # training update Example meta-RL training update using First-Explore framework for a simple loss l such as policy gradient. First-Explore is a framework for meta-RL training, not a specific algorithm. [!ht] [escapeinside=||,mathescape=true]python def incontext_learn(|θ|, env, reward_target): """Incontext learn a policy for the environment env until the requisite reward_target is reached. 
Returns a context that specifies the learnt policy in combination with theta.""" |c| = set() # initalize context, c while True: |τ_explore| = rollout(env, |π_explore,θ, c|) # explore given c |c = c ∪{τ_explore}| # update the context |τ_exploit| = rollout(env, |π_exploit,θ, c|) # exploit given the context c |r = final_reward(τ_exploit)| # get the exploit reward if |r ≥| reward_target: # if exploting gave sufficient reward return |c| # then return the context Example of using the trained First-Explore policies to in-context learn on a new environment until a desired exploit reward is reached. § RESULTS For each domain, the architecture is a standard GPT-2 style transformer architecture <cit.>. For simplicity, the parameters are shared between the two policies, differing only by a final linear-layer head. As a control, we modified the same algorithm used to train the First-Explore policies to instead learn to always-exploit. We use a novel loss based on predicting the sequence of future actions conditional on the episode having high reward, which preliminary experiments showed improved training stability. While an innovation, it is not core to the framework, and other standard losses (e.g. PPO) should work as replacements. Full architecture, training details and hyperparameters are given in the supplementary materials (SI). Because the environments differ in how difficult and rewarding they are, a single evaluation of the policies involves sampling a batch of environments (10,000 for the bandit domain, and 1,000 for the treasure-room one), performing iterated rollouts in each environment (allowing the policy to in-context adapt to each environment) and calculating the average statistics across the batch (e.g., the average first episode exploit reward). To ensure non-spurious results, First-Explore and the always-exploit control were both trained ten times with ten different random seeds. Furthermore, the in-context learning of each training run was evaluated ten times each on an independently sampled batch of environments (for a total of 100 evaluations). Each treatment is then visualized by a line showing the mean over the evaluations and training runs. The darker-shaded area shows one standard deviation from the mean, and the lighter-shaded area shows the minimum and maximum value (across evaluations and seeds). If the light area shaded around one line (e.g. the First-Explore exploit reward) is above the light shaded area around another (e.g. the always-exploit reward) then, in all 100 evaluations, one treatment beats the other, which (as the runs are independent) is statistically significant (p ≤ 2^-10). All lines have these shaded areas, however the deviation between evaluations can be so small that the shaded areas can be hard to see. §.§ Gaussian Multi-armed Bandit The first problem setting is a multi-armed Gaussian Bandit with k arms specified by k arms means μ_{1, … k}∈ℝ. At each time step t the agent chooses an arm a_t and receives as reward r_t equal to the arm mean plus a normally distributed noise term, r_t = N(μ_a_t, 1/2). In our meta-RL approach, the agent observes its previous actions and their rewards, and can adapt based on that. Each environment's arm means are normally distributed, μ_{1, …, k}∼ N(0, 𝐈). In this domain, First-Explore learns intelligent exploration, learning a policy that exhaustively searches (Fig. <ref>, right blue line) in the first ten actions and then significantly and appropriately changes to sampling high reward arms (Fig. <ref>, left blue line). 
This series of average episode rewards show how the learnt policy is grounded in reward (by focusing on high reward arms at times), while also able to sacrificially explore (by getting low expected reward for the first ten pulls). The First-Explore exploit policy has the highest reward (Fig. <ref>, left orange line), and also matches optimal hand-coded exploitation when the hand-coded exploitation function is provided with the context produced by the First-Explore explore policy (Fig. <ref>, left). Further, First-Explore exploration exceeds random play exploration, iterated exploitation, and even hand-coded exhaust search at informing optimal exploitation (Fig. <ref>, right). Interestingly, after the First-Explore exploration policy changes to sampling the high reward arms, the explore episode reward steadily trends downward and eventually becomes negative. This behavior is consistent across all training runs, and all evaluations. We believe this phenomenon occurs because, once the agent has gained sufficient information about the high-payoff arms, the only useful behavior left is to check if the low expected value arms truly are low value, and were not just unlucky when previously sampled. §.§ Dark Treasure-Room The second problem setting is the Dark Treasure-Room (based on the Darkroom in <cit.>). Dark Treasure-Rooms are w × h grids full of treasure. The agent navigates (up, left, down, or right) to find treasure, and cannot see its surroundings. It receives only its current coordinates (x,y) as observation, with a meta-RL agent also observing past rewards and actions. When the agent encounters a treasure it consumes it, and receives an associated reward (positive or negative). The lack of sight means each individual room is a separate training challenge for a standard RL agent, with the agent having to memorise the locations of rewards and how to reach them in the agent’s parameters, e.g. in the neural network weights. A meta-RL agent has access to a context of all past environment interactions, and so can instead in-context adapt to newly sampled environments, rather than needing to be trained anew. The training distribution was 9× 9 Dark Treasure-Rooms, each having 8 treasure locations. The rewards associated with these locations are uniformly distributed in the range -4 to 2 (i.e., r_i ∼ U[-4, 2]). The locations of the treasures are randomly sampled uniformly, with overlapping treasure locations having their reward values stack. Importantly, the average value of any location is negative, meaning that visiting a location not in memory gives a negative expected reward. The negative expected reward for visiting new states makes the environment distribution hostile to coincidental exploration, thus requiring sacrificial exploration. On this distribution, always-exploiting (even with epsilon-greedy sampling to provide extra exploration, as is common in standard RL) only learns to avoid negative reward (Fig. <ref>, left green), while First-Explore in-context adapts over multiple explorations to exploit and achieve increasing positive reward with the number of explorations (Fig. <ref>, left orange). The increased reward with additional explorations demonstrates First-Explore succeeding at learning in an environment where sacrificial exploration is needed, and where always-exploiting fails. The failure of always-exploiting can be understood by considering how the policy explores the state space. 
As each unseen location has negative reward in expectation, the always-exploiting policy learns to avoid entering locations not already in the context, resulting in very poor state space coverage. In contrast, First-Explore exploration does not experience this issue (Fig. <ref>, middle blue). This difference is because always-exploiting actively avoids sacrificial exploration, while First-Explore embraces it. Figure <ref>, right visualizes the correspondence between coverage and how context informs exploitation. See SI for visualizations of iterated rollouts in these environments and how successive explorations inform exploitation. These highly consistent First-Explore results across training runs (e.g., the coverage of First-Explore exploration, Fig. <ref> middle blue) suggest that the same systematic behaviour is being learnt regardless of the network initialisation and random seed used for training (which determines both the sampled training environments and the action selection). This consistency suggests that First-Explore is learning something fundamental to the domain, and is promising as it potentially means First-Explore might learn a consistent algorithm or heuristic for general exploration if paired with a sufficiently-complex task-distribution. § LIMITATIONS AND FUTURE WORK One limitation with First-Explore is that reward during training can matter. Imagine a chef robot learning a new recipe in a physical home. In this scenario, it is vital the robot behaves safely and does not endanger humans or damage the kitchen while learning; however, it is fine if it cooks poorly, or makes a poor-tasting meal. First-Explore being willing to sacrifice reward could be dangerous, as it might ignore a safety reward penalty in order to master the recipe. One potential solution is to infinitely penalize endangerment or damage while training both the explore and exploit policy. This proposed version of First-Explore could actually result in far safer training (via in-context adaption) than attempting to use standard-RL, as the meta-RL policies would have learnt a strong prior of avoiding potentially endangering actions. However, it could be that such a strong penalty could prohibit effective training too. As such, it seems an open question, worth further investigation. r0.4 < g r a p h i c s > A visualization of well-planned sequential exploration (left) and myopic exploration (right) of a plane from the origin over multiple episodes. The initial explore (red) is equally good, but myopic exploration hinders the second explore (green) as it must revisit previously seen locations, and more so for the third (purple). This initial version of First-Explore is also limited in that the explore policy π_explore, θ is optimized to provide the single best episode of exploration that will increase the expected reward of the exploit policy π_exploit, θ by the greatest amount. Unfortunately, iterated optimal myopic exploration does not necessarily produce an optimal sequence of explorations (Fig. <ref>). One potential solution is to reward exploration episodes with weighted sums of the rewards of all subsequent exploitations. A final problem is the challenge of long sequence modelling, with certain environments requiring a very large context. However, it seems likely the rapid progress in context lengths, and the research on more effectively using context, will continue.<cit.>. We also expect improvements in stably training transformers for RL. 
§ DISCUSSION Given that First-Explore uses RL algorithms to train the meta-RL policy, how could it solve hard-exploration domains that standard-RL can not? For example, how might First-Explore learn to pick the lock in the locked path environment? The answer is that it is possible if there are always some tasks in the training distribution suitable for the current agent (e.g., a curriculum that leads to learning to pick locks). Initially, the agent does not know how to explore at all and must learn to exploit based on random noise. Once it has learnt rudimentary exploitation, the agent can learn rudimentary exploration. Then it can learn better exploitation, then better exploration, and so on, each time relying on there being tasks within a `goldilocks zone' of being not too hard and not too easy, see <cit.>. Learning via this process essentially corresponds to how standard RL can use domain randomization to aid exploration (see <cit.> for an example of how domain randomization can solve a seemingly hard exploration task). The advantage of First-Explore is that we spend our compute on domain randomization upfront to learn intelligent exploration. Once trained, however, the explore policy can be very sample efficient at learning a new task. Additionally, one might wonder how significant a limitation exploring by exploiting is, given that standard-RL seems to succeed despite it. We argue that it is when one attempts to explore and exploit intelligently with human-level adaption on complex tasks that the difference becomes especially significant. In both problem domains, the results show how optimal exploiting and exploring significantly differ, both in how they cover the state or action space, and in how and whether they help achieve high reward, and how this difference matters in order to achieve efficient in-context learning. § CONCLUSION We identify the problem of attempting to explore by exploiting, and demonstrate that the novel meta-RL framework, First-Explore, solves this problem via the simple modification of learning two policies (one to first explore, another to then exploit). This paradigm of learned, intelligent exploration informing learned exploitation significantly improves meta-RL performance. First-Explore performs better on even simple domains such as the multi-armed Gaussian-bandit, and massively improves performance on domains that require sacrificial exploration, such as the Dark Treasure Room environment (when it has negative average expected treasure value). The results in this paper show First-Explore allows learning basic intelligent exploration strategies, such as exhaustive search for the first ten actions, followed by prioritizing sampling actions with high reward. We believe combining First-Explore with a curriculum, such as the AdA <cit.> curriculum, could be a step towards creating algorithms able to exhibit human-level performance on unseen hard-exploration domains, which is one of the core challenges of creating artificial general intelligence (AGI). Provided we can adequately address the real and significant safety concerns associated with developing AGI, such developments would allow us to reap AGI's tremendous potential benefits. § ACKNOWLEDGMENTS AND DISCLOSURE OF FUNDING Resources used in preparing this research were provided, in part, by the Province of Ontario, the Government of Canada through CIFAR, and companies sponsoring the Vector institute <www.vectorinstitute.ai/#partners>. 
This work was further supported by an NSERC Discovery Grant, and a generous donation from Rafael Cosman. Thanks to Michiel van de Panne, Mark Schmidt and Ken Stanley for discussions, and to Yuni Fuchioka and Ryan Fayyazi for feedback on the writing. We also thank Aaron Dharna, Shengran Hu, and Jenny Zhang (sorted alphabetically) in our lab at the University of British Columbia for discussions and feedback. § SUPPLEMENTARY MATERIAL § REPLICABILITY For full transparency, replicability, and to make it easier for future scientists to build on our work, we are releasing the training code, visualization code, the code to generate the significance plots, and the environment code. We are also releasing the weights of a trained model for each domain for both First-Explore, and the always-exploit control. Each model contains both the explore and exploit policies as separate heads on the shared trunk. The code is available at <https://github.com/btnorman/First-Explore>. § COMPUTE Each training run commanded a single GPU, specifically a Nvidia T4. Table <ref> gives the approximate walltime of each run. Evaluation (sampling the multiple evaluation environments and performing iterated First-Explore and comparison rollouts) was with a single GPU, and took minutes. § TRAINING DETAILS Full training code to replicate the results is provided. The architecture for both domains is a GPT-2 transformer architecture <cit.> specifically the Jax framework <cit.> implementation provided by Hugging Face <cit.>, with the code being modified so that token embeddings could be passed rather than token IDs. The different Hyperparameters for the two domains are given in Table <ref>. For both domains each token embedding is the sum of a linear embedding of an action, a linear embedding of the observations that followed that action, a linear embedding of the reward that followed that action, a positional encoding of the current timestep, and a positional encoding of the episode number. See the provided code for details. For the dark treasure-room environments a reset token was added between episodes that contained the initial observations of the environment, and a unique action embedding corresponding to a non-action. The bandit domain had no such reset token. For training we use AdamW <cit.> with a piece-wise linear warm up schedule that interpolates linearly from an initial rate of 0 to the full learning rate in the first 10% training steps, and then interpolates linearly back to zero in the remaining 90% of training steps. Table <ref> gives the optimization hyperparameters. Hyperparameters were chosen based on a relatively modest amount of preliminary experimentation. Finally, for efficiency, all episode rollouts and training was done on GPU using the Jax framework <cit.>. §.§ Optimization Loss The First-Explore policies are trained by a novel optimization approach. To learn to exploit we learn the distribution of actions that lead to `maximal' exploit episodes. Here we define an exploit episode as maximal if it a) has higher or equal reward to the best reward found in all of the previous First-Explore explore and exploit episodes in the current environment, and b) also exceeds a set baseline reward (hyperparameter) for the domain, see Algorithm <ref>. To learn to explore we learn the distribution of actions that lead to `informative' explore episodes. 
Informative episodes are those that when added to the context lead to a subsequent exploit episode that a) exceeds the best reward of previous First-Explore explore and exploit episodes and b) has higher reward than the environment baseline. This explore criterion is slightly different from the exploit `maximal' criterion, as it requires an improvement in reward, see Algorithm <ref>. The baseline reward is there such that the first First-Explore exploit and explore episodes have an incentive to be respectively exploitative and exploratory. Because in the dark treasure-room each episode is composed of multiple actions, the probability of an initial action leading to any outcome is potentially dependent on the distribution of future actions (e.g., imagine requiring two up actions to reach a reward; the first up action is no better than the first down action if the policy always moves down in the second step). Hence, one must learn the distribution conditional on a rollout policy. This expression is shown in Equation <ref> for the case of the exploit distribution. Here “episode is maximal” refers to an exploit episode having higher reward than the baseline reward and the previous First-Explore explores and exploits (see previous paragraph). a_t refers to the current action, and [a]_i>t∼π expresses how subsequent actions are taken under the rollout policy. 𝐏(episode is maximal | a_t, [a]_i>t∼π) To learn this distribution, the predicted likeihoods of actions being `maximal' or `informative' are compared to the action distributions of the rollouts that are `maximal' or `informative.' The predictions are improved by minimizing a cross entropy loss between the actions observed in the maximal and informative episodes, and the calculated probability of those actions being selected. This loss is detailed in Algorithm <ref> as well as the provided code. Once learned, the explore and exploit distributions combined with a sampling temperature each then specify a policy that with high probability selects actions likely to lead to good exploitation or good exploration. To ensure that all actions are sampled and to provide more exploration during training (of both the explore and exploit policy), we add a small probability ϵ chance of selecting a random action instead of one sampled from the unmodified explore or exploit policy. This probability is then a hyperparameter that can be tuned. Learning the distributions then allows iteratively updating the rollout policies by each time taking the new rollout policies and learning the new distributions of maximal and informative actions under the rollout policy. The frequency of such updates is then a hyperparameter. The hyperparameters used are given in Table <ref>. While preliminary experiments found this meta-RL training method performed best, we believe the First Explore meta-RL framework will work for general approaches too, such as using policy gradient with actor critic, or Muesli <cit.> which was used in AdA <cit.>. For evaluation, we then sample by taking the argmax over actions, and do not add the ϵ-noise. 
# Algorithm: training to model conditionally increasing exploits with First-Explore rollouts.
# rollout conducts an episode when provided with an environment and policy
# and returns all the episode information.
def model_conditional_actions(θ, π, baseline_reward):
    # sample an environment, and initialize context c and loss values
    m = sample(ℳ); c = set(); loss = 0
    best_reward_seen = baseline_reward
    for i in range(k):  # do k iterated rollouts
        τ_explore = rollout(m, π_explore, c)                  # explore given context c
        τ_exploit = rollout(m, π_exploit, c ∪ {τ_explore})    # exploit given c ∪ {τ_explore}
        r = final_reward(τ_exploit)                           # get the exploit reward
        # calculate a weight on the episodes:
        # non-increasing episodes have zero weight, and increasing episodes
        # have weight proportional to the reward improvement
        explore_weight = indicator(r > best_reward_seen) * (1 + r - best_reward_seen)
        exploit_weight = indicator(r ≥ best_reward_seen) * (1 + r - best_reward_seen)
        explore_loss = cross_ent(action probabilities predicted by π and θ, actions of τ_explore)
        exploit_loss = cross_ent(action probabilities predicted by π and θ, actions of τ_exploit)
        # update the loss, conditional on the episodes being improvements
        loss = loss + explore_weight * explore_loss
        loss = loss + exploit_weight * exploit_loss
        c = c ∪ {τ_explore}  # update the context for the next explore
        # update the best reward seen
        best_reward_seen = max(best_reward_seen, final_reward(τ_exploit), final_reward(τ_explore))
    return loss

Training to model conditionally increasing exploits with First-Explore rollouts.

§ DARK TREASURE-ROOM VISUALIZATIONS Here are example iterated First-Explore rollouts of the two trained policies, π_explore, π_exploit, visualized for a single sampled darkroom.
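For readers who want to reproduce rollouts like these, here is a minimal NumPy sketch of the Dark Treasure-Room environment as described in Section <ref>: a w × h grid with hidden treasures whose rewards are drawn uniformly from [-4, 2] and stack on overlapping cells, with observations consisting only of the agent's coordinates. The class name and the centre starting position are our assumptions; the actual Jax implementation is in the released code.

    import numpy as np

    class DarkTreasureRoom:
        """Minimal sketch of the Dark Treasure-Room environment."""
        MOVES = {0: (0, 1), 1: (-1, 0), 2: (0, -1), 3: (1, 0)}  # up, left, down, right

        def __init__(self, w=9, h=9, n_treasures=8, rng=None):
            self.w, self.h = w, h
            self.rng = rng or np.random.default_rng()
            self.rewards = np.zeros((w, h))
            for _ in range(n_treasures):  # overlapping treasures stack
                x, y = self.rng.integers(w), self.rng.integers(h)
                self.rewards[x, y] += self.rng.uniform(-4, 2)
            self.reset()

        def reset(self):
            self.remaining = self.rewards.copy()      # treasures reappear each episode
            self.pos = (self.w // 2, self.h // 2)     # assumed start position
            return self.pos                           # observation is only the (x, y) coordinates

        def step(self, action):
            dx, dy = self.MOVES[action]
            x = min(max(self.pos[0] + dx, 0), self.w - 1)
            y = min(max(self.pos[1] + dy, 0), self.h - 1)
            self.pos = (x, y)
            reward = self.remaining[x, y]
            self.remaining[x, y] = 0.0                # treasure is consumed once collected
            return self.pos, reward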
http://arxiv.org/abs/2307.01344v1
20230703202857
Equidistribution of high traces of random matrices over finite fields and cancellation in character sums of high conductor
[ "Ofir Gorodetsky", "Valeriya Kovaleva" ]
math.NT
[ "math.NT", "math.PR" ]
Linear multistep methods with repeated global Richardson extrapolation I. Fekete, corresponding author, Department of Applied Analysis and Computational Mathematics, ELTE Eötvös Loránd University, Pázmány P. s. 1/c, H-1117 Budapest, Hungary, L. Lóczi, Department of Numerical Analysis, ELTE Eötvös Loránd University, Pázmány P. s. 1/c, H-1117 Budapest, Hungary, and Department of Differential Equations, BME Budapest University of Technology and Economics ==================================================================================================================================================================================================================================================================================================================================================================================================== Let g be a random matrix distributed according to uniform probability measure on the finite general linear group . We show that (g^k) equidistributes on _q as n →∞ as long as log k=o(n^2) and that this range is sharp. We also show that nontrivial linear combinations of (g^1),…, (g^k) equidistribute as long as log k =o(n) and this range is sharp as well. Previously equidistribution of either a single trace or a linear combination of traces was only known for k ≤ c_q n, where c_q depends on q, due to work of the first-named author and Rodgers. We reduce the problem to exhibiting cancellation in certain short character sums in function fields. For the equidistribution of (g^k) we end up showing that certain explicit character sums modulo T^k+1 exhibit cancellation when averaged over monic polynomials of degree n in _q[T] as long as log k = o(n^2). This goes far beyond the classical range log k =o(n) due to Montgomery and Vaughan. To study these sums we build on the argument of Montgomery and Vaughan but exploit additional symmetry present in the considered sums. § INTRODUCTION Fix _q, the finite field of q elements. We denote its characteristic by . Let g ∈ be an invertible n × n matrix over _q chosen according to the uniform probability measure. The first-named author and Rodgers <cit.> showed that (g^k) equidistributes in _q as n →∞, uniformly for k ≤ c_q n for some sufficiently small c_q depending on q, and the rate of convergence is superexponential. However nothing beyond k=O(n) was known. We describe our three main results. Let g ∈ be chosen uniformly at random. Let k = k(n) be a positive integer such that log k=o(n^2). The distribution of (g^k) tends to the uniform distribution on _q as n tends to ∞. The range log k = o(n^2) in Theorem <ref> is optimal in the sense that we cannot replace it with log k = O(n^2). Indeed, if we take k = ||= ∏_i=0^n-1(q^n-q^i) then log k ≍_q n^2 and, by Lagrange's theorem, g^k = I_n for g ∈ and so (g^k) ≡ n does not equidistribute. (This shows (g^k) is periodic in k.) We also prove a theorem for combination of traces. Let g ∈ be chosen uniformly at random. Let k = k(n) be a positive integer such that ∤ k and log k=o(n). Let a_j ∈_q for j = 1,…, k be arbitrary constants with a_k ≠ 0. Then the distribution of ∑_1 ≤ i ≤ ka_i ( g^i) tends to the uniform distribution on _q as n tends to ∞. Over _q we have (g^) = (g)^ and the condition ∤ k is necessary to avoid trivial linear combinations. The range log k=o(n) in Theorem <ref> is optimal in the sense that it cannot be replaced by log k =O(n), as we now demonstrate by giving an example where ∑_1 ≤ i ≤ k a_i (g^i) (∤ k, a_k ≠ 0) is not equidistributed and log k ≍ n. 
Let denote an algebraic closure of _q, and let f ∈_q[T] be a monic polynomial. Given its factorization f(T) = ∏_j=1^ f (T-λ_j) over we define its ith (i ≥ 0) power sum symmetric polynomial as p_i(f) := ∑_j=1^ fλ_j^i ∈. By Newton's identities p_i(f) is an integral multivariate polynomial in coefficients of f, hence p_i(f) is in fact in _q. Moreover, we have p_i(fg) = p_i(f)+p_i(g) for any monic f,g∈_q[T]. If f(T) = (I_n T-g) is the characteristic polynomial of a matrix g then p_i(f) = (g^i). Let F(T):=∑_i=0^k a_i T^i be the product of all monic irreducible polynomials in _q[T] of degree at most n (so a_k=1, a_0=0). Then for any λ∈∪_j=1^n _q^j F(λ) = ∑_i=1^k a_i λ^i= 0. Hence ∑_i=1^k a_i p_i(f) = 0 for any f with f ≤ n and ∑_i=1^k a_i (g^i) = ∑_i=1^k a_i p_i((I_n T-g))=0 for all g ∈. Finally, k= F ≍ q^n by the Prime Polynomial Theorem (see Lemma <ref>). If k happens to be divisible by we can replace F by TF. If i is a negative integer, (<ref>) is defined as long as T ∤ f. Moreover, the usual properties are preserved: p_i(fg)=p_i(f)+p_i(g) if T∤ fg; if f(T)=(I_n T-g) then p_i(f)=(g^i). When T ∤ f, p_i(f)=p_-i( f(1/T)T^ f/f(0)) where f(1/T)T^ f/f(0) is a monic polynomial, proving p_i(f) ∈_q as well. For certain values of k, e.g. primes, we can go beyond the range log k =o(n^2) of Theorem <ref> using the following arithmetic criterion. Let g ∈ be chosen uniformly at random. Let k = k(n) be a positive integer. Suppose ∑_12 log_q n < d≤ n (k,q^d-1) < q^d/3d^-1→∞ as n →∞. Then the distribution of (g^k) tends to the uniform distribution on _q as n tends to ∞. As we shall see later, Theorem <ref> is a consequence of Theorem <ref>. §.§ Comparison with random matrix theory Our investigation was motivated by results in random matrix theory, although we do not use any techniques from this area. Let U_n() be the group of n× n unitary matrices over complex numbers, endowed with Haar measure of total mass 1. A classical result of Diaconis and Shahshahani <cit.> states that for any k ≥ 1, the vector X_k=((U),(U^2)/√(2),…, (U^k)/√(k)) converges in distribution to Y_k=(Z_1, Z_2,…, Z_k), where {Z_j}_j=1^k are independent standard complex Gaussians. Johansson <cit.> showed that the rate of convergence of a linear combination of (U^i) to a suitable Gaussian is superexponential in total variation distance. In <cit.>, Johansson and Lambert extended <cit.> to the total variation distance of X_k from Y_k uniformly for k ≪ n^2/3-ε, and in <cit.>, Courteaut and Johansson established similar results for orthogonal and symplectic groups. In a recent work, Courteaut, Johansson and Lambert <cit.> studied the convergence of (U^k)/√(k) to Z_k as k varies, obtaining, among other results, that the distance goes to 0 for any k in the range 1 ≤ k < n. As for the complementary range, Rains <cit.> proved that there is a stabilising phenomenon once k ≥ n: the eigenvalues of U^k become distributed as n independent uniform random variables on the unit circle. §.§ Symmetric function perspective Let us also formulate a natural problem in symmetric functions that will turn out to share strong similarities with the equidistribution of (g^k). Let e_k(t_1,…,t_n) be the kth elementary symmetric polynomial e_k(t_1,…,t_n) := ∑_1 ≤ j_1 < j_2<⋯ <j_k ≤ n t_j_1t_j_2⋯ t_j_k and p_k(t_1,…,t_n) be the kth power sum symmetric polynomial p_k(t_1,…,t_k):=∑_i=1^n t_i^k. Let 𝐗 = {X_i}_i=1^n be -valued random variables such that e_1(𝐗),…,e_n(𝐗) are independent and uniformly distributed on _q. By Newton's identities, p_k(𝐗) must also be _q-valued. 
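To make the role of Newton's identities concrete, the following minimal Python sketch (our own illustration, assuming q is prime so that the field can be modelled as the integers modulo q; all helper names are ours) computes p_1,…,p_kmax from e_1,…,e_n and checks that they agree with the traces tr(C^k) of a companion matrix C whose characteristic polynomial has e_1,…,e_n as the elementary symmetric functions of its roots, in line with the identity p_i(f) = tr(g^i) for f the characteristic polynomial of g recalled above.

# Minimal sketch (assumes q is prime, so the field is modelled as Z/qZ; names are illustrative).
import random

def newton_power_sums(e, kmax, q):
    """p_1, ..., p_kmax (mod q) of a multiset with elementary symmetric functions e_1, ..., e_n."""
    n = len(e)
    p = [0] * (kmax + 1)
    for k in range(1, kmax + 1):
        s = (-1) ** (k - 1) * k * e[k - 1] if k <= n else 0
        for i in range(1, k):
            if i <= n:
                s += (-1) ** (i - 1) * e[i - 1] * p[k - i]
        p[k] = s % q
    return p[1:]

def companion_trace_powers(e, kmax, q):
    """tr(C^k) (mod q), C the companion matrix of f(T) = T^n - e_1 T^(n-1) + ... + (-1)^n e_n."""
    n = len(e)
    c = [0] * n                       # f(T) = T^n + c[n-1] T^(n-1) + ... + c[0]
    for j in range(1, n + 1):
        c[n - j] = ((-1) ** j * e[j - 1]) % q
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1               # sub-diagonal of ones
    for i in range(n):
        C[i][n - 1] = (-c[i]) % q     # last column: -c_0, ..., -c_(n-1)
    traces, M = [], [row[:] for row in C]
    for _ in range(kmax):
        traces.append(sum(M[i][i] for i in range(n)) % q)
        M = [[sum(M[i][t] * C[t][j] for t in range(n)) % q for j in range(n)] for i in range(n)]
    return traces

q, n, kmax = 7, 5, 30
e = [random.randrange(q) for _ in range(n)]
assert newton_power_sums(e, kmax, q) == companion_trace_powers(e, kmax, q)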
We ask, is p_k(𝐗) close to uniform? Here is one way to construct such a sequence 𝐗. Taking a_1,…,a_n to be independent uniform random variables on _q, the polynomial T^n - a_1T^n-1+a_2T^n-2∓… + (-1)^na_n is distributed uniformly among . Setting {X_i}_i=1^n to be its n roots in some order, we have that e_j(𝐗) = a_j satisfy the conditions above, and p_i(f) defined in (<ref>) coincides with p_i(𝐗). We prove the following. Let n ≥ 1. Let k = k(n) be a positive integer. Suppose {X_i}_i=1^n are random variables such that (e_1(𝐗),…,e_n(𝐗)) has the uniform distribution on (_q)^n. Then: * If log k =o(n^2) then the distribution of p_k(𝐗) tends to the uniform distribution on _q. * Let a_1,…,a_k ∈_q. If log k = o(n), a_k ≠ 0 and ∤ k then the distribution of ∑_i=1^k a_i p_i(𝐗) tends to the uniform distribution on _q. * If the sum in (<ref>) diverges then the distribution of p_k(𝐗) tends to the uniform distribution on _q. Theorem <ref> is about p_k(f) when we choose f uniformly at random from , while Theorems <ref>–<ref> are about p_k(f) for a polynomial f drawn from the space of possible characteristic polynomials of a matrix from endowed with the uniform measure. As proved in <cit.>, the total variation distance of the law of the first k next-to-leading coefficients of (I_n T-g) from the uniform distribution _q^k tends to 0 as n →∞ for k as large as n-o(log n), so the setup of Theorem <ref> is not that different from the setup of Theorems <ref>–<ref>. §.§ Cancellation in character sums of high conductor Given n ≥ 0 we denote by ⊆_q[T] the subset of monic polynomials of degree n. We denote by =∪_n ≥ 0 the set of monic polynomials in _q[T]. We denote by ⊆ the set of irreducible polynomials of degree n and let = ∪_n ≥ 0. Throughout, the letter P is reserved for elements of . Given k ≥ 1 and a nontrivial additive character ψ_q → we define a function χ_k,ψ→ by χ_k,ψ(f) := ψ(p_-k(f)), if (f,T)=1, 0, otherwise. We show in Lemma <ref> that function χ_k,ψ is a primitive Dirichlet character modulo T^k+1. The following theorem is the main component behind the proof of Theorem <ref>. We have, uniformly for n≥ 0 and k ≥ 1, q^-n∑_f ∈χ_k,ψ(f) ≪1+ log_q n +√(log_q k)/n+1. In particular, we have cancellation when log k = o(n^2) as n →∞. Note that the range log k = o(n^2) is optimal because for k=∏_i=1^n(q^i-1) we have γ^-k=1 for every γ∈∪_i=1^n_q^i^× and thus p_-k(f) ≡ n on ∩{f: T ∤ f}. Hence there is no cancellation for such k. We do not know if cancellation for log k=o(n^2) persists if we restrict to instead of ; the proof of Theorem <ref> relies on summing over all degree-n polynomials. It is instructive to compare Theorem <ref> to results about character sums for integers and polynomials. We switch temporarily to the integer setting. Let χ be a nonprincipal Dirichlet character modulo m and consider the sum S_x,χ:=∑_n ≤ xχ(n) as x →∞. Montgomery and Vaughan <cit.> proved under the generalized Riemann hypothesis for L(s,χ) that if m ≥ x and y ∈ [(log m)^4,x] is a parameter then S_x,χ =∑_n≤ x p | n p ≤ yχ(n)+ O(xy^-1/2(log m)^4). Recall ∑_n ≤ x: p | n p ≤ y1 is o(x) if log x/log y →∞ <cit.>. Taking y = (log m)^9 we see that S_x,χ exhibits cancellation if loglog m=o(log x). Granville and Soundararajan improved the error term in (<ref>) and also showed that this range is optimal <cit.>, in the sense that for any given A>0 and for any prime m there exists a nonprincipal character χ m with |S_x,χ| ≫_A x, where x = log^ A m. Much less is known unconditionally. 
Burgess proved S_x,χ exhibits cancellation when m≤ x^3-ε, and this can be extended to m ≤ x^4-ε if m is cubefree <cit.>. When the conductor is smooth, better results exist <cit.>. In particular, if m=p^r then Banks and Shparlinski <cit.> showed one has cancellation in the range log m = o( (log x)^3/2) (if r ≫ 1 and p≤ x^c); this improved earlier work of Postnikov <cit.>. It is worthwhile to recall ∑_n ≤ xn^it exhibits cancellation when log (|t|+2) =o( (log x)^3/2), a result due to Vinogradov <cit.>. The bound we prove for the sum of f↦χ_k,ψ(f) also holds for the function f↦ψ(p_k(f)) which is an analogue of n↦ n^it, see Remark <ref> and <cit.>. Now let us return to the polynomial setting. The generalized Riemann hypothesis in _q[T] is a seminal theorem due to A. Weil <cit.>. Let χ be a nonprincipal Dirichlet character modulo a polynomial Q ∈_q[T]. In <cit.>, Bhowmick and Lê adapted (<ref>) to the polynomial setting, proving unconditionally that ∑_f ∈χ(f) = ∑_f ∈ P | f P ≤ mχ(f) + O( q^n-m/2 Q ) holds for m ∈ [2log_q Q ,n]. Taking m=⌈ 4log_q Q ⌉ shows that, as n →∞, ∑_f ∈χ(f) exhibits cancellation when log_q Q = o(n). (For a self-contained bound on the sum in the right-hand side of (<ref>) see Lemma <ref>.) The range log k=o(n^2) in Theorem <ref> is far beyond the ranges that the generalized Riemann hypothesis implies, and heavily exploits a special symmetry satisfied by χ_k,ψ which is shown in Lemma <ref>. We are not aware of any other explicit family of characters, in either integers or polynomials, where the generalized Riemann hypothesis implies cancellation when the conductor exceeds the Montgomery–Vaughan range loglog m = o(log x) in or log Q = o(n) in _q[T]. In <ref> we prove a new bound on general character sums in function fields which is not used in the paper. §.§ Structure of the paper In <ref> we discuss the connection between the distribution of traces and character sums in more detail, and bound the total variation distance between our respective distributions and the uniform distribution by corresponding character sums. By doing so, we reduce Theorem <ref> and the first part of Theorem <ref> to Theorem <ref>, and Theorem <ref> and the second part of Theorem <ref> to obtaining bounds for character sums in (<ref>). The latter is quite straightforward as shown in Lemma <ref>, while the former requires more careful treatment. We prove Theorem <ref> in <ref>; we consider this to be the technical part of the paper. We start with observing that there is underlying symmetry in sums of χ_k,ψ against primes. To see this, in Lemma <ref> we prove that ∑_f ∈Λ(f)χ_k,ψ(f) =∑_f ∈Λ(f)χ_k^',ψ(f), where Λ(f) is the function field von Mangoldt function, and k^' = (k,q^n-1). This identity allows us to improve the Weil bound |∑_f ∈Λ(f)χ_k,ψ(f)| ≤ q^n/2 k to |∑_f ∈Λ(f)χ_k,ψ(f)| ≤ q^n/2k^' = q^n/2(k,q^n-1) as shown in Corollary <ref>. This is helpful when (k,q^n-1) is small. Motivated by Montgomery and Vaughan <cit.> given S ⊆{1,…,n} we write q^-n∑_f ∈χ_k,ψ(f)=q^-n∑_f ∈ P | f P ∉Sχ_k,ψ(f)+q^-n∑_f∈ ∃ P such that P ∈ Sχ_k,ψ(f). In our case choosing S={⌈ 12log_qn ⌉≤ d ≤ n: (k,q^d-1)<q^d/3} allows us to bound the second sum in the right-hand side of (<ref>) using (<ref>) (see Lemma <ref>) while a sieve bound (Lemma <ref>) bounds the first sum. This strategy ends up proving the following criterion. Let n ≥ 1 and k ≥ 1. Let ψ_q →^× be a nontrivial additive character. We have q^-n∑_f ∈χ_k,ψ(f) ≪ n^-1 + exp( - ∑_12log_q n < d ≤ n (k,q^d-1)<q^d/3 d^-1). 
In particular, a sufficient criterion for cancellation is that the sum in (<ref>) diverges as n →∞. In <ref> we reduce Theorem <ref> and the third part of Theorem <ref> to Proposition <ref>. To deduce Theorem <ref> from Proposition <ref> we prove a sharp upper bound on ∑_L<d ≤ 2Llog(k,q^d-1) in Lemma <ref>, that may be of independent interest. The criterion above is particularly interesting because it allows us to exhibit cancellation in the left-hand side of (<ref>) for arbitrarily large k, e.g. whenever k is a prime, or a product of a bounded number of primes, the sum in (<ref>) diverges simply because the following shorter sum does: ∑_12 log_q n < d≤ n (k,q^d-1)=1 d^-1. Proposition <ref>, and hence Theorem <ref>, apply as is to ∑_f ∈μ(f)χ_k,ψ(f) where μ is the Möbius function, see Remark <ref>. Recall μ is multiplicative with μ(P)=-1 and μ(P^e)=0 for e ≥ 2. However, we do not know that log k =o(n^2) is optimal in this case. For arbitrarily small ε>0, there are examples where log k ∼ε n^2 for which Proposition <ref> does not yield cancellation, namely k=∏_i=1^⌊ε' n⌋(q^i-1). § ACKNOWLEDGEMENTS We thank Brad Rodgers and Zeev Rudnick for comments on an earlier version of the paper. O.G. was supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 851318). V.K. was supported by ERC LogCorRM (grant no 740900), and by CRM-ISM postdoctoral fellowship. § REDUCTIONS §.§ An involution Given f ∈ we use the notation p_i(f) introduced in (<ref>). Given a_1,…,a_k ∈_q and an additive character ψ_q →^× define a function ξ_𝐚,ψ→ by ξ_𝐚,ψ(f) := ψ(∑_i=1^k p_i(f) a_i). The function ξ_𝐚,ψ, sometimes called a short interval character, is closely related to a certain Dirichlet character modulo T^k+1 as observed by Hayes <cit.> and Keating and Rudnick <cit.><cit.>. Let us explain this idea. We define an involution ι on :={ f ∈ : (f,T)=1} by ι(f) := f(1/T)T^ f/f(0). If we factorize f∈ as f(T)=∏_j=1^ f(T-λ_j) over we find that ι(f) = ∏_j=1^ f(T-λ_j^-1) so p_k(f) = p_-k(ι(f)). For any function α→ we define ι_α→ by ι_α(f) := α(ι(f)) if f ∈, 0 if f ∉. We say that a function α→ is completely multiplicative if α(fg)=α(f)α(g) holds for all f,g ∈ and α(1)=1. If α is completely multiplicative then so is ι_α because ι is (when extended to via ι(T)=0). Given an arithmetic function α→ we use the notation S(n,α) := ∑_f ∈α(f). Letting χ_0(f) := 1_T ∤ f we have S(n,α·χ_0) = S(n,ι_α) and, since S(n,α·χ_0) = S(n,α)-α(T)S(n-1,α)1_n ≥ 1, S(n,α) = ∑_i=0^n S(i,α·χ_0) α(T)^n-i= ∑_i=0^n S(i,ι_α)α(T)^n-i. A variant of the following lemma was established in <cit.>. Let a_1,…,a_k ∈_q with a_k ≠ 0 and ∤ k. Let ψ_q →^× be a nontrivial additive character, and let α = ξ_𝐚,ψ. Then ι_α is a primitive Dirichlet character modulo T^k+1. We know ι_α is completely multiplicative, vanishes at multiples of T and ι_α(1) = 1. It remains to show that it only depends on the residue of the input modulo T^k+1 and that it is primitive. Newton's identities yield that p_i(f) is a function of the i first next-to-leading coefficients of f, where the jth (j ≥ 1) next-to-leading coefficient of T^n+∑_i=0^n-1 a_iT^i is defined to be a_n-j if j ≤ n and to be 0 otherwise. Hence α = ξ_𝐚,ψ(f) only depends on the k first next-to-leading coefficients of f. Since by definition ι(f) reverses the coefficients of f and normalizes by f(0), it follows that ι_α(f) depends only on the last k+1 coefficients of f, i.e. on f T^k+1. 
Finally, ι_α(T^k+1-c)=1 for every c ∈_q^× while a short computation shows ι_α(T^k-c)=ψ(ka_k/c) is not equal to 1 for suitable c. For k ≥ 1 and an additive character ψ_q → with a slight abuse of notation let ξ_k,ψ(f) := ψ(p_k(f)), which coincides with ξ_𝐚,ψ for a_1=…=a_k-1=0, a_k=1. By definition, we have χ_k,ψ= ι_ξ_k,ψ = ι_ξ_k,ψ·χ_0. We have S(n,ξ_k,ψ)=∑_i=0^n S(i,χ_k,ψ) by (<ref>). In particular, q^-nS(n,χ_k,ψ) ≪ (1+log_q n + √(log_q k))/(n+1) implies q^-nS(n,ξ_k,ψ) ≪ (1+log_q n + √(log_q k))/(n+1) and vice versa. §.§ Reduction of Theorem <ref> to character sum estimates Recall that 𝐗 = {X_i}_i=1^n∈^n are random variables such that e_1(𝐗),…,e_n(𝐗) are independent and uniformly distributed on _q, and that one can take X_i to be the zeros of a polynomial f chosen uniformly at random from . Let a_1,…,a_k ∈_q and denote by _q the group of q additive characters from _q to ^×, with ψ_0 denoting the trivial character. By Fourier analysis on _q, given x ∈_q, (∑_i=1^ka_i p_i(𝐗)=x) = _f ∈(∑_i=1^ka_i p_i(f)=x) = q^-1∑_ψ∈_qψ(x)_f ∈ψ(∑_i=1^k a_i p_i(f))=q^-1∑_ψ∈_q q^-nψ(x) S(n,ξ_𝐚,ψ). Since S(n,ξ_𝐚,ψ_0)=q^n, the triangle inequality implies ∑_x ∈_q| (∑_i=1^k a_i p_i(𝐗)=x) - q^-1| ≤∑_ψ_0 ≠ψ∈_q |q^-nS(n,ξ_𝐚,ψ)|. By (<ref>) and the triangle inequality, ∑_x ∈_q| (∑_i=1^k a_i p_i(𝐗)=x) - q^-1| ≤∑_j=0^nq^j-n∑_ψ_0 ≠ψ∈_q |q^-jS(j,ι_ξ_𝐚,ψ)|. A special case of (<ref>) is ∑_x ∈_q| (p_k(𝐗)=x) - q^-1| ≤∑_j=0^nq^j-n∑_ψ_0 ≠ψ∈_q |q^-jS(j,χ_k,ψ)|. The first part of Theorem <ref> is immediate from (<ref>) and Theorem <ref>. The third part follows from (<ref>) and Proposition <ref>. For the second part of Theorem <ref>, we use (<ref>) and observe ι_ξ_𝐚,ψ is a nonprincipal Dirichlet character modulo T^k+1 by Lemma <ref>, and the result follows from applying the following lemma. Let χ→ be a nonprincipal Dirichlet character modulo Q. We have q^-nS(n,χ) ≪1+log_q (1+ Q)/n+1. Lemma <ref> follows from (<ref>) with m=⌈ 2log_q(n(1+ Q))⌉ (see <ref> for details). §.§ Reductions of Theorems <ref>–<ref> to character sum estimates We define an arithmetic function → as follows. If f ∈ and n = f, then (f) := _g ∈((I_n T - g) = f) where is endowed with the uniform measure. It follows from the work of Reiner <cit.> and Gerstenhaber <cit.> that is multiplicative (in the sense that (fg)=(f)(g) when (f,g)=1) and supported on . Their works also show that on prime powers it is given by (T^e)=0 and (P^e) = q^-e P∏_i=1^e (1-q^-i P)^-1 where P ∈∖{T} and e ≥ 1 (cf. <cit.>). The following identity is a quick consequence of (<ref>) (it can also be derived directly from <cit.>). Recall |f|=q^ f. We have = α_1 * α_2 where α_1(f) = |f|^-1·1_T ∤ f and α_2(f) = (f)/|f|. Let a_1,…,a_k ∈_q. By Fourier analysis on _q, given x ∈_q we have _g ∈(∑_i=1^k a_i(g^i)=x)= q^-1∑_ψ∈_q_g ∈ψ( ∑_i=1^ka_i(g^i) = x). The trivial character ψ_0 contributes 1/q to the right-hand side. Recall p_k(f) (f ∈) is defined in (<ref>). Since p_i((I_nT-g)) = (g^i), _g ∈(∑_i=1^k a_i(g^i)=x) -q^-1= q^-1∑_ψ_0 ≠ψ∈_q∑_f ∈(f) ξ_𝐚,ψ(x). Summing over x and using the triangle inequality gives ∑_x ∈_q|_g ∈(∑_i=1^k a_i(g^i)=x) - q^-1| ≤∑_ψ_0 ≠ψ∈_q |S(n, ·ξ_𝐚,ψ)|. In <cit.> the first-named author and Rodgers showed that |S(n, ·ξ_𝐚,ψ)| ≤ q^-n^2/2k+O_q(n). For fixed k this is essentially optimal up to constants, because |S(n, ·ξ_𝐚,ψ)| cannot decay faster than exponentially in n^2 due to || = q^Θ(n^2). In this note we focus on the range of cancellation rather than the rate of cancellation, and in this aspect we can do better. 
Observe that from Lemma <ref>, S(n,·ξ_𝐚,ψ) = q^-n∑_i+j=n∑_f ∈ℳ_i,q, T ∤ fξ_𝐚,ψ(f) ∑_g ∈ℳ_j,q(g)ξ_𝐚,ψ(g). Since is a probability measure on ℳ_j,q, the triangle inequality yields |S(n,·ξ_𝐚,ψ)| ≤ q^-n∑_i=0^n |S(i,ξ_𝐚,ψ·χ_0) |. From (<ref>), (<ref>) and (<ref>), ∑_x ∈_q|_g ∈(∑_i=1^k a_i(g^i)=x) - q^-1| ≤ q^-n∑_ψ_0 ≠ψ∈_q∑_i=0^n|S(i, ι_ξ_𝐚,ψ)|. Similarly to the symmetric function case, we see that Theorem <ref> follows from (<ref>) and Lemma <ref>. In the special case a_1=…=a_k-1 and a_k=1 we have ∑_x ∈_q|_g ∈((g^k)=x) - q^-1| ≤ q^-n∑_ψ_0 ≠ψ∈_q∑_i=0^n|S(i, χ_k,ψ))|. Theorem <ref> follows from (<ref>) and Theorem <ref>. Theorem <ref> follows from (<ref>) and Proposition <ref>. § PROOF OF THEOREM <REF> The von Mangoldt function Λ→ is defined as Λ(f) = P if f = P^k, P ∈, k ≥ 1, 0 otherwise. Gauss' identity <cit.> states that ∑_f ∈Λ(f) = ∑_d | n |𝒫_d,q| d =q^n, and is known to imply the following. <cit.> We have q^n/n - 2q^n/2/n ≤ || ≤ q^n/n for n ≥ 1. Given a Dirichlet character χ its L-function is defined as L(u,χ) = ∑_f ∈χ(f) u^ f = ∏_P ∈(1-χ(P)u^ P)^-1 which converges absolutely for |u|<1/q. If χ is a nonprincipal character modulo Q then L(u,χ) is a polynomial of degree at most Q - 1 as follows from the orthogonality relations for χ. We set d(χ):= L(u,χ) < Q. Let χ be a nonprincipal Dirichlet character. Factoring L(u,χ) as L(u,χ) = ∏_i=1^d(χ)(1-γ_i u) we have |γ_i| ∈{1, √(q)} for all 1 ≤ i ≤ d(χ). Let χ be a nonprincipal Dirichlet character, then |∑_f ∈χ(f) Λ(f)| ≤ q^n/2 d(χ). We take the logarithmic derivative of the Euler product (<ref>) and (<ref>) and compare coefficients to obtain ∑_f ∈χ(f) Λ(f) = - ∑_i=1^d(χ)γ_i^n for all n ≥ 1. Then Theorem <ref> implies the lemma via the triangle inequality. Let α(f) be a completely multiplicative function. Given any subset T of the positive integers , we define F_T(u,α) := ∑_f ∈ P | f P ∈ T α(f)u^ f= ∏_P ∈, P ∈ T (1-α(P)u^ P)^-1. Let us write [u^m]F for the mth coefficient of a power series F. Our starting point is the identity ∑_f ∈α(f) = [u^n]F_(u,α)= [u^n]F_T^c(u,α)F_T(u,α) = [u^n] F_T^c(u,α) + [u^n] F_T^c(u,α)(F_T(u,α) -1) =q^-n∑_f ∈ P | f P ∉Tα(f)+q^-n∑_f ∈ ∃ P | f such that P ∈ Tα(f), where T^c = ∖ T is the complement of T. §.§ Montgomery and Vaughan's estimate To bound the second term in (<ref>) we prove the following lemma generalizing an estimate of Montgomery and Vaughan <cit.> in the polynomial setting. Let α→ be a 1-bounded completely multiplicative function, meaning |α(f)|≤ 1 for all f ∈. Let n ≥ 1 and S ⊆{1,…,n} be a set of positive integers, and S^c = {1,…,n}∖ S be its complement. Let s_0 = min_s ∈ S s. Then X := q^-n∑_f ∈ ∃ P | f such that P ∈ Sα(f)≪exp(A_1)(exp(A_2)-1), where A_1 = ∑_d ∈ S^c1/d, A_2 = ∑_d ∈ Sq^-d/d| ∑_f ∈ℳ_d,qα(f)Λ(f)| + O(q^-s_0/2/s_0). In <cit.> the authors prove Lemma <ref> in the special case S ={m+1,m+2,m+3,…,n} and α being a Dirichlet character. For this choice one immediately recovers (<ref>) from (<ref>). Using identity (<ref>) and noticing that we can restrict the set of degrees to {1,…,n}, we have X = q^-n∑_f ∈ ∃ P | f such that P ∈ Sα(f) = [u^n] F_S^c(u,α)(F_S(u,α) -1). Because α is 1-bounded, the coefficients in the series of F_S^c(u,α) are bounded in absolute value by the respective coefficients of Z_S^c(u):=∑_f ∈ P | f P ∈ S^c u^ f =∏_P ∈, P ∈ S^c (1-u^ P)^-1. 
Further, we may write F_S(u,α) as F_S(u,α) = ∏_P ∈, P ∈ S (1-α(P)u^ P)^-1 = exp( ∑_d ∈ S∑_i ≥ 1 i^-1 u^di∑_P ∈𝒫_d,qα(P)^i ) =exp( ∑_d ∈ S u^d ∑_P ∈𝒫_d,qα(P) + ∑_d ∈ S∑_i ≥ 2i^-1u^di∑_P ∈𝒫_d,qα(P)^i) =exp( ∑_d ∈ Su^d/d∑_f ∈ℳ_d,qα(f)Λ(f) +∑_d ∈ S∑_i ≥ 2u^di/i∑_P ∈𝒫_d,qα(P)^i-∑_d ∈ Su^d/d∑_e | d e ≠ d e∑_P ∈𝒫_e,qα(P)^d/e). It follows that [u^j](F_S(u,α)-1) for 0≤ j ≤ n are bounded in absolute value by the coefficients of u^j in Z_S,α(u) := exp( ∑_d ∈ Su^d/d|∑_f ∈ℳ_d,qα(f)Λ(f)| + ∑_d ∈ S∑_i ≥ 2u^di/i | 𝒫_d,q| + ∑_d ∈ Su^d/d∑_e | d e ≠ d e |𝒫_e,q|)-1. Putting this together, we have |X|=|[u^n] F_S^c(u,α)(F_S(u,α) -1)| ≤ [u^n] Z_S^c(u) (Z_S,α(u)-1). We make a general observation. If the coefficients of a power series F(u)=∑_i ≥ 0 f_i u^i are bounded in absolute value by the respective coefficients of G(u)=∑_i ≥ 0 g_i u^i then |f_i| ≤ G(R)R^-i for every 0<R<C such that G(R) converges. Indeed, G(R)R^-i = ∑_j=0^∞ g_j R^j-i≥ g_i R^i-i = g_i ≥ |f_i|. Applying this observation with R=1/q we get |X|≤ q^n Z_S^c(1/q) (Z_S,α(1/q)-1). We estimate Z_S^c(1/q) using Lemma <ref> as Z_S^c(1/q) = exp∑_ P∈ S^c (q^- P+O(q^-2 P)) =exp∑_d ∈ S^cd^-1 + O(1). Similarly, ∑_d ∈ S∑_i ≥ 2q^-di/i | 𝒫_d,q| + ∑_d ∈ Sq^-d/d∑_e | d e ≠ d e |𝒫_e,q|≤∑_d ∈ S∑_i ≥ 2q^-di/iq^d/d + 2∑_d ∈ Sq^-d/2/d≪q^-s_0/ 2/s_0, where s_0 = min_s∈ S s. The required estimate then follows from (<ref>), (<ref>), (<ref>) and (<ref>). We now move on to estimating the first term in (<ref>). §.§ Sieve estimate Let S ⊆{1,…,n} be a set of positive integers, and S^c = {1,…,n}∖ S be its complement. Then ∑_f ∈ P | f P ∉S 1 ≪q^n/nexp∑_d ∈ S^c d^-1≍ q^nexp- ∑_d ∈ S d^-1. In the integer setting the estimate ∑_n≤ x: (n,k)=1 1 ≪ x ∏_p| k(1-1/p) for any positive integer k whose prime factors do not exceed x is a classical consequence of Selberg's sieve (see <cit.> for a discussion of this and an alternative proof). A permutation analogue of Lemma <ref> was established (in greater generality) by Ford <cit.>. The proof we give is self-contained and is in the spirit of <cit.>. Let A_i := ∑_f ∈ℳ_i,q P | f P ∉S1, and define F(u) := ∏_ P ∈ S^c (1-u^ P)^-1 = exp( ∑_ P ∈ S^c∑_k ≥ 1u^k P /k)=:∑_i=0^∞A_i u^i. The coefficients satisfy A_i = A_i for 0 ≤ i ≤ n. Differentiating (<ref>) formally we see that ∑_i ≥ 1 i A_i u^i = F(u)G(u), where G(u) = ∑_i ≥ 1 B_i u^i, B_i := ∑_k ≥ 1, P ∈ S^c k P = i P = ∑_d ∈ S^c d | i d | 𝒫_d,q| . Comparing coefficients in (<ref>) we obtain that A_n = n^-1∑_i=1^n A_n-i B_i. Lemma <ref> implies B_i ≤1_i ∉S (i | 𝒫_i,q|) + ∑_d ≤ i/2 d| 𝒫_d,q| ≤1_i ∉S q^i +2q^i/2, and so A_n ≤ n^-1∑_i ∈ S^c q^i A_n-i + 2n^-1∑_i=1^n A_n-iq^i/2. The trivial bound A_n-i≤ q^n-i implies A_n ≤ n^-1∑_i ∈ S^c q^i A_n-i + 5n^-1 q^n ≤ n^-1q^n ∑_0 ≤ i ≤ n q^-i A_i + 5. Since F(1/q) =∑_i ≥ 0 q^-iA_i ≥∑_0≤ i ≤ n q^-i A_i we find that A_n ≤ n^-1 q^n ( F(1/q) + 5). By Lemma <ref>, log F(1/q) = ∑_d ∈ S^c |𝒫_d,q| ∑_k ≥ 1q^-kd/k≤∑_d ∈ S^c1+q^-d/d = ∑_d ∈ S^c1/d + 1, and recalling log n = ∑_d=1^n d^-1 + O(1) we conclude that A_n ≪ n^-1 q^n exp∑_d ∈ S^c d^-1≍ q^n exp-∑_d ∈ S d^-1 as needed. §.§ Proof of Lemma <ref> Due to the trivial estimate |S(n,χ)|≤ q^n we may assume n ≥ 12(1+log_q (1+ Q)). Since S(n,χ)=0 for n≥ Q we may also assume Q >n. Applying Lemma <ref> with α=χ and S={m+1,m+2,…} for some m ∈ [2log_q Q, n], as indicated in Remark <ref>, we find that (<ref>) indeed holds. By Lemma <ref>, q^-n∑_f ∈ P | f P ≤ mχ(f) ≪ q^-n∑_f ∈ P | f P ≤ m 1 ≪m/n. Choosing m= ⌈ 2log_q (n(1+ Q))⌉ in (<ref>) yields the statement of the lemma. 
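Before turning to the gcd sums of the next subsection, it may help to see the sums S(n,χ_k,ψ) computed by brute force for very small parameters. The sketch below (a Python illustration of ours, assuming q prime, taking ψ(x) = e^{2π i x/q}, and computing p_{-k}(f) as p_k(ι(f)) through Newton's identities) enumerates all monic polynomials of degree n and prints the normalized sums q^{-n}|S(n,χ_k,ψ)|; the specific values q = 3, n = 7 and the list of k are arbitrary and only meant to illustrate the objects involved, not the asymptotic ranges.

# Brute force over all monic f of degree n in F_q[T] (q prime, identified with Z/qZ).
# chi_{k,psi}(f) = psi(p_{-k}(f)) for (f, T) = 1, computed as p_k(iota(f)) with
# iota(f)(T) = f(1/T) T^n / f(0); psi(x) = exp(2*pi*i*x/q). Illustrative values only.
import cmath, itertools

def power_sum(coeffs_monic, k, q):
    """p_k (mod q) of the roots of T^n + c[n-1] T^(n-1) + ... + c[0], via Newton's identities."""
    n = len(coeffs_monic)
    e = [((-1) ** j * coeffs_monic[n - j]) % q for j in range(1, n + 1)]  # e_j of the roots
    p = [0] * (k + 1)
    for m in range(1, k + 1):
        s = (-1) ** (m - 1) * m * e[m - 1] if m <= n else 0
        for i in range(1, m):
            if i <= n:
                s += (-1) ** (i - 1) * e[i - 1] * p[m - i]
        p[m] = s % q
    return p[k]

def char_sum(n, k, q):
    total = 0.0 + 0.0j
    for coeffs in itertools.product(range(q), repeat=n):   # f = T^n + c[n-1] T^(n-1) + ... + c[0]
        c = list(coeffs)
        if c[0] == 0:                                       # T | f  ->  chi(f) = 0
            continue
        inv = pow(c[0], q - 2, q)                           # 1 / f(0) in F_q
        full = c + [1]                                      # coefficients c_0, ..., c_{n-1}, 1 of f
        iota = [(full[n - m] * inv) % q for m in range(n)]  # low-order coefficients of iota(f)
        total += cmath.exp(2j * cmath.pi * power_sum(iota, k, q) / q)
    return abs(total) / q ** n

for k in (2, 5, 17, 50):
    print(k, round(char_sum(n=7, k=k, q=3), 4))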
§.§ Bound on a gcd sum For L ≥ 1 let B_L,k := ∑_L≤ d < 2Llog(k,q^d-1). We trivially have B_L,k≪ L min{log k, Llog q}, and this bound is optimal when log_q k ≫ L^2. For example, take k=∏_1≤ i < 2L(q^i-1) = q^Θ(L^2), then q^d-1 | k for all L≤ d< 2L so that B_L,k≍ L^2 log q. For smaller k, however, this bound is too generous. Suppose L ≥√(log_q k). Then B_L,k≪ L √(log k log q). To prove this result we will need a couple of facts concerning cyclotomic polynomials. Recall that the cyclotomic polynomials {ϕ_n(x)}_n ≥ 1 are defined recursively by ∏_d | nϕ_d(x) = x^n-1. They lie in [x] and can be written as ϕ_n(x) = ∏_j ∈ (/n)^×(x-e^2π i j/n). This last relation shows that if m and p are coprime where p is a prime then ϕ_mp^i(x)=ϕ_m(x^p^i)/ϕ_m(x^p^i-1) for i ≥ 1. The following lemma is classical but we could not find it explicitly stated in the literature and so we give a proof based on an argument implicit in Roitman <cit.>. Fix an integer a. For a prime p define A_p := { n ≥ 1: p |ϕ_n(a)}. If p| a then A_p=∅. Otherwise A_p = {p^i : i ≥ 0} where is the multiplicative order of a modulo p. If p |ϕ_n(a), then p | a^n-1 since ϕ_n(a) | a^n-1. If p| a then a^n -1 ≡ - 1 p, which is a contradiction. Thus A_p=∅ in this case. Now suppose p ∤ a. By definition, we have p | a^-1. Suppose that p|ϕ_n(a) | a^n -1 for some positive integer n, then p |(a^n-1,a^-1)= a^(n,)-1. Since = min{n ≥ 1: p | a^n -1}, we have (n,) ≥ and thus | n. This shows that A_p ⊆{ m ·: m ≥ 1}. Next we show ∈ A_p. Since p| a^-1 = ∏_e |ϕ_e(a) it follows that A_p contains some divisor of , which by (<ref>) then has to be itself. Next, we want to show that n/ is a power of p. If p^' is a prime dividing n/ then | n/p' and so p |ϕ_n(a) = a^n-1/∏_d | n, d ≠ nϕ_d(a)|a^n-1/∏_d | n/p'ϕ_d(a) =a^n-1/a^n/p'-1 = ∑_i=0^p'-1 a^in/p'≡∑_i=0^p'-1 1 = p' p, where we used that by definition of a^n/p' = (a^)^n/p' ≡ 1^n/p'≡ 1 p. Hence p^' must be p, and n is indeed times a power of p. This shows A_p ⊆{p^i : i ≥ 0}. Finally, the relation ϕ_p^i m(a) = ϕ_m(a^p^i)/ϕ_m(a^p^i-1) for (m,p)=1 and i ≥ 1 implies p^i ∈ A_p for every i ≥ 1 since ϕ_m(a^p^i)/ϕ_m(a^p^i-1) ≡ϕ_m(a)^p^i-p^i-1 p. Recall that q^d-1 = ∏_e | dϕ_e(q). Because (A,ab) ≤(A,a)(A,b) it then follows that log(k,q^d-1) ≤∑_e | dlog(k,ϕ_e(q)) = ∑_e | d∑_p | klog(p^ν_p(k),ϕ_e(q)), where p stands for prime and ν_p(k) is the multiplicity of p in k (i.e. the p-adic valuation of k). Hence B_L,k≤∑_L≤ d < 2L∑_p | k, ϕ_e(q) e | dlog(p^ν_p(k),ϕ_e(q)). We introduce a parameter T≥ 1. We shall show B_L,k≪ T log k+ L^2 log q/T, and then take T=L/√(log_q k). We consider separately the contribution of e> d/T and e ≤ d/T, obtaining B_L,k≤ B_L,k,1+B_L,k,2, where B_L,k,1 =∑_L≤ d < 2L∑_p | k, ϕ_e(q) e | d e >d/Tlog(p^ν_p(k),ϕ_e(q)) , B_L,k,2 =∑_L≤ d < 2L∑_p | k, ϕ_e(q) e | d e ≤ d/Tlog(p^ν_p(k),ϕ_e(q)). Let us bound B_L,k,1. By interchanging the order of summation, B_L,k,1≤∑_e, p: p | k, ϕ_e(q) L/T < e < 2Lν_p(k)log p∑_L≤ d < 2L e | d 1 ≪ L ∑_e, p: p | k, ϕ_e(q) L/T <e < 2Lν_p(k)log p /e . Let A_p:={ n≥ 1: p divides ϕ_n(q)}. By Lemma <ref>, A_p is either empty or is a geometric progression with step size p, so that ∑_ e: p|ϕ_e(q) L/T <e < 2L1/e≪T/L∑_i ≥ 0 p^-i≪T/L and it follows that B_L,k,1 ≪ T ∑_p | kν_p(k) log p = T log k. Next, omitting the condition p | k and using ϕ_e(q) | q^e-1 ≤ q^e we have B_L,k,2≤∑_L≤ d < 2L∑_e | d e ≤ d/Tlogϕ_e(q)≤log q ∑_L≤ d < 2L∑_e | d e ≤ d/T e ≪ L log q ∑_e ≤ 2L/T 1 ≪L^2 log q/T. This implies (<ref>), and thus the statement of the lemma. 
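The structure of the sets A_p used in the proof above is also easy to check experimentally. The following Python sketch (our illustration; the base a = 2, the primes and the range n ≤ 60 are arbitrary) computes ϕ_n(a) exactly from the product formula and verifies that {n : p | ϕ_n(a)} consists exactly of the numbers ℓ p^i, where ℓ denotes the multiplicative order of a modulo p (the symbol elided in the statement of the lemma above).

# Quick empirical check of the lemma above (illustrative choices: a = 2, a few small p, n <= 60).
def cyclotomic_values(a, N):
    """Phi_n(a) for n = 1..N as exact integers, via a^n - 1 = prod_{d | n} Phi_d(a)."""
    vals = {}
    for n in range(1, N + 1):
        v = a ** n - 1
        for d in range(1, n):
            if n % d == 0:
                v //= vals[d]          # exact: each proper divisor is divided out once
        vals[n] = v
    return vals

def mult_order(a, p):
    x, k = a % p, 1
    while x != 1:
        x, k = (x * a) % p, k + 1
    return k

a, N = 2, 60
phi = cyclotomic_values(a, N)
for p in (3, 5, 7, 11, 13):
    A_p = {n for n in range(1, N + 1) if phi[n] % p == 0}
    ell = mult_order(a, p)             # multiplicative order of a modulo p
    assert A_p == {ell * p ** i for i in range(7) if ell * p ** i <= N}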
§.§ Symmetry of character sums Let ψ_q →^× be an additive character. Let k,n ≥ 1, and let k^' = (k,q^n-1). Then ∑_f ∈Λ(f)χ_k,ψ(f) =∑_f ∈Λ(f)χ_k^',ψ(f). Consider a surjective map Φ_q^n^×→{ f ∈∖{ T}: Λ(f) ≠ 0} given by Φ(x) = ∏_σ∈Gal(_q^n/_q)(T-σ(x))=∏_i=0^n-1(T-x^q^i). Alternatively, one can also write Φ(x)=m_α(x)^n/ m_α, where m_α is the minimal polynomial of x over _q. Every element f in the image has Λ(f) preimages given by the Galois conjugates of a given preimage, namely, {x^q^i}_i=0^d-1 if m_x=d. Because χ_k,ψ(f) is defined in terms of the zeros of f, the map Φ allows us to conveniently write ∑_f ∈Λ(f)χ_k,ψ(f) = ∑_x ∈_q^n^×ψ(∑_i=0^n-1 (x^q^i)^-k). By Euclid's algorithm we can write k^' = (k,q^n-1)=ak+b(q^n-1) for some coprime integers a and b. In particular, x^k^'=(x^k)^a. Conversely, x^k = (x^k^')^k/k^'. It follows that the group endomorphisms x ↦ x^k and x ↦ x^(k,q^n-1) defined on _q^n^× have the same image and kernel. This implies that every element in the image is attained the same number of times, and so ∑_x ∈_q^n^×ψ(∑_i=0^n-1 (x^q^i)^-k)=∑_x ∈_q^n^×ψ(∑_i=0^n-1 (x^q^i)^-k^')= ∑_f ∈Λ(f)χ_k^',ψ(f) as required. From Lemmas <ref>, <ref> and <ref> we conclude the following. Let ψ_q →^× be a nontrivial additive character. Let k,n ≥ 1. We have |∑_f ∈Λ(f)χ_k,ψ(f)| ≤ q^n/2(k,q^n-1). §.§ Proof of Proposition <ref> Let m:=⌈ 12log_q n ⌉, and S={ m ≤ d ≤ n: (k,q^d-1) < q^d/3}. Let S^c = {1,…, n}∖ S. As before, let q^-n∑_f ∈χ_k,ψ(f) =q^-n∑_f ∈ ∃ P | f such that P ∈ S χ_k,ψ(f) + q^-n∑_f ∈ P | f P ∈ S^cχ_k,ψ(f)= X+Y. Applying Lemma <ref> for α=χ_k,ψ and our chosen S together with Corollary <ref>, we get X :=q^-n∑_f ∈ ∃ P | f such that P ∈ S χ_k,ψ(f) ≪exp(∑_d ∈ S^c d^-1) (exp(∑_d ∈ Sq^-d/6/d + O(q^-m/2/m))-1)≪n q^-m/6/m≪ n^-1. By Lemma <ref>, Y:=q^-n∑_f ∈ P | f P ∈ S^cχ_k,ψ(f) ≪ q^-n∑_f ∈ P | f P ∈ S^c 1 ≪exp(-∑_d ∈ Sd^-1). The same proof works as is if we replace χ_k,ψ by μ·χ_k,ψ, because ∑_f ∈ℳ_d,qχ_k,ψ(f)Λ(f) = -∑_f ∈ℳ_d,qμ(f) χ_k,ψ(f)Λ(f) + O(q^d/2). §.§ Conclusion of proof of Theorem <ref> Due to the trivial estimate |S(n,χ)|≤ q^n we may assume n ≥ 12(1+ √(log_q k)). In view of Proposition <ref> it suffices to show -∑_d ∈ Sd^-1≤log (1+√(log_q k) +log_q n ) -log n+O(1) where S is as in (<ref>). To show this, first observe that -∑_d ∈ S d^-1 =∑_d ∈ S^c d^-1 - log n + O(1). Let m' = max{⌈ 12log_q n⌉, √(log_q k)}. Since 1_A ≥ B≤log A/log B, ∑_d ∈ S^c d^-1≤∑_d ≤ m' d^-1 + ∑_m'≤ d ≤ n (k,q^d-1) ≥ q^d/3 d^-1≤log m' + O(1) +∑_m'≤ d ≤ nd^-1log(k,q^d-1)/log (q^d/3) . By Lemma <ref> the last d-sum is O(√(log_q k)/m^'), finishing the proof. § SQUAREROOT CANCELLATION IN CHARACTER SUMS Under the generalized Riemann hypothesis for L(s,χ) one has ∑_n ≤ xχ(n) ≪√(x)exp(Clog m/loglog m) for a nonprincipal Dirichlet character χ modulo m <cit.>. Bhowmick, Lê and Liu <cit.> proved the function field analogue ∑_f ∈χ(f) ≪ q^n/2exp( C_q( n loglog Q/log Q+ Q/log^2 Q)) unconditionally, for a constant C_q that depends only on q, where χ is a character modulo Q ∈_q[T]. The estimate (<ref>) implies ∑_f ∈χ(f) ≪ q^n/2(1+o(1)) for Q=o(nlog^2 n). Here we prove that squareroot cancellation holds in the wider range Q≤ n^1+o(1). Let χ be a nonprincipal Dirichlet character modulo Q ∈_q[T]. There is a constant C_q that depends only on q such that ∑_f ∈χ(f) ≪_q q^n/2exp( max{C_q nlog ( Qlog Q/n)/log (1+ Q),0}). Recall d(χ):= L(u,χ)< Q. If d(χ) <n we have S(n,χ)=0, so from now on we assume that d(χ) ≥ n. 
In view of the trivial bound |S(n,χ)| ≤ q^n we may also assume that n is larger than a fixed constant, as well as that d ≤ n^1+δ for a fixed δ>0. From (<ref>), L(u,χ) =exp( ∑_k ≥ 1u^k/k∑_f ∈ℳ_k,qχ(f) Λ(f) ). Set L:=⌊ 2log_q d ⌋. For k ≤ L the trivial bound | ∑_f ∈ℳ_k,qχ(f) Λ(f)| ≤∑_f ∈ℳ_k,qΛ(f) = q^k coming from (<ref>) is superior to (<ref>). We see from (<ref>) and (<ref>) that the coefficients of L(u,χ) are bounded (in absolute value) from above by those of exp( ∑_k ≥ 1u^k/kmin{q^k, d(χ) q^k/2}), hence, writing [u^n]F for the nth coefficient of a power series F, |S(n,χ)| = q^n/2|[u^n]L(u/√(q),χ)| ≤ q^n/2[u^n] exp( ∑_k ≥ 1u^k/kmin{q^k/2, d(χ)}) ≤ q^n/2exp( ∑_k≥ 1R^k/kmin{q^k/2, d(χ)}) R^-n for any R ∈ (0,1) (this is the same observation used in the proof of Lemma <ref>). If 6/(5√(q)) < R < 1 then ∑_1 ≤ k ≤ LR^k/kmin{q^k/2,d(χ)} = ∑_1 ≤ k ≤ L(R√(q))^k/k≤∑_1 ≤ k ≤ L (R√(q))^k ≤(R√(q))^L/1-(R√(q))^-1≤ 6d(χ) R^L and ∑_k > LR^k/kmin{q^k/2, d(χ)}≤ d(χ) ∑_k>LR^k/k≤d(χ)/L+1∑_k>L R^k = d(χ)/L+1R^L+1/1-R, so that |S(n,χ)/q^n/2| ≤exp( 6d(χ)R^L ( 1 + R/(L+1)(1-R)) - n log R). Set a := log (d(χ) log d(χ) /n) and take R=q^-a/(2log d(χ)). The assumption d(χ) ≥ n implies a > 0 (at least if n ≥ 3) and so R < 1. (This choice of R differs from the choice in <cit.>, where a is chosen to be of order loglog d(χ).) Since we may assume n ≤ d(χ) ≤ n^1+δ and n ≥ C for δ and C we choose, we may also assume a/log d(χ) is at most δ' for a δ' we want. In particular, R > 6/(5√(q)) may be assumed from now on by taking small enough δ'. Moreover, -n log R ≤ (log q) n log (d(χ) log d(χ) /n)/log d(χ), R^L ≤ q^-log (d(χ) log d(χ)/n)/2log d(χ) (2 log_q d(χ) - 1)≤2/(d(χ) log d(χ)/n) for large enough n. Additionally, alog q/log d(χ) ∈ (0,1) holds (by taking small enough δ') which implies R ≤ 1-alog q/(4log d(χ)), and so R/(L+1)(1-R)≤1/L+11/1-R≤log q/2 log d(χ) 4log d(χ)/log q log (d(χ) log d(χ) /n) = 2/log (d(χ) log d(χ)/n)≤ 1 for large enough n. All in all, |S(n,χ)/q^n/2| ≤exp( 24n/log d(χ) +(log q) n log (d(χ) log d(χ)/n)/log d(χ)) ≤exp( C_q n log (d(χ) log d(χ)/n)/log d(χ)). Since Q > d(χ), the claim follows. alpha
http://arxiv.org/abs/2307.01588v1
20230704092752
Computation of the deformation of a planar kirigami
[ "Frederic Marazzato" ]
math.NA
[ "math.NA", "cs.NA", "math.AP" ]
Computation of the deformation of a planar kirigami Frederic Marazzato ==================================================== Kirigami are part of the larger class of mechanical metamaterials, which exhibit exotic properties. This article focuses on rhombi-slits, which are a specific type of kirigami. A nonlinear kinematics model was previously proposed as a second-order divergence-form PDE with a possibly degenerate and sign-changing coefficient matrix. We first study the existence and uniqueness of the solutions of this equation by using the limiting absorption principle. Then, we propose a numerical method based on adding a complex dissipation to approximate the solutions. Finally, comparisons of simulations with experiments are performed. Keywords: Kirigami, Degenerate PDE, Sign-changing PDE, Limiting absorption principle. AMS Subject Classification: 35M12, 65N12, 65N30 and 35Q74. § INTRODUCTION Kirigami is a variation of origami, the Japanese art of paper folding. In kirigami, the paper is cut as well as being folded. Origami and kirigami have been studied as concrete examples of mechanical metamaterials <cit.>. Mechanical metamaterials are solids with unusual mechanical properties that are generated by inner mechanisms of the system. These materials have found applications for generating soft robots <cit.> or in aerospace engineering <cit.>. The mechanisms giving metamaterials their unusual properties are discrete by nature, and have been modeled as such <cit.>. Recently, important efforts have been devoted to proposing a continuous description of origami and kirigami-based metamaterials <cit.>. The modeling process results in homogenized PDEs where the characteristic size of the mechanism is considered negligible with respect to the size of the whole structure. Some first results regarding the existence of solutions to these nonlinear PDEs and their numerical approximation have been achieved in <cit.>. This paper focuses on rotating square patterns as presented in <cit.>. Figure <ref> shows one such kirigami pattern. In this entire article, deformations remain planar. ξ_in is defined as the slit opening in the undeformed configuration. ξ is the opening of the slit, and γ is the local rotation of a panel, as represented in Figure <ref>. y_eff is the effective deformation, which tracks the cell-averaged panel motions. Using a coarse-graining technique, <cit.> has proposed that ∇ y_eff = R(γ) A(ξ), where R(γ) is the canonical 2×2 rotation matrix parametrized by the angle γ, and A(ξ) is called the shape tensor. Taking the curl of (<ref>), it is shown in <cit.> that ∇γ = Γ(ξ) ∇ξ, where Γ is a 2×2 matrix, which depends on A. Taking the curl of (<ref>), one gets -div(R(π/2) Γ(ξ) ∇ξ) = 0. The main goal of this paper is to study the existence and uniqueness of solutions to (<ref>) supplemented with appropriate Dirichlet and Neumann boundary conditions. Let B(ξ) := R(π/2)Γ(ξ). We will assume in the following that B(ξ) is symmetric. Studying (<ref>) presents two main difficulties. The first is that B(ξ) can degenerate for certain values of ξ. The second is that the sign of det(B(ξ)) can change, thus locally changing the type of (<ref>) from an elliptic to a hyperbolic PDE.
Regarding the issue of degeneracy, Muckenhoupt weights <cit.> have been used to study the existence of solutions to degenerate elliptic equations, see <cit.>, for instance. However, as in <cit.>, the degeneracy we encounter is localized on a curve, and thus Muckenhoupt weights cannot be used. Following <cit.>, we will use weighted Sobolev spaces to study the existence of solutions to (<ref>). Regarding the issue of sign-changing, the concept of 𝚃-coercivity <cit.> has been used to study the existence of solutions for some linear equations <cit.>. Unfortunately, this concept does not seem to be applicable here. Instead, we follow <cit.> and use a limiting absorption principle to prove the existence in complex Sobolev spaces of approximate solutions of (<ref>), which, asymptotically, produce solutions of (<ref>). Several numerical methods have been proposed to approximate solutions of sign-changing problems. <cit.> proposed methods based on first-order Lagrange polynomials and 𝚃_h-coercive meshes. However, these cannot be used in the present case as the position of the curve where (<ref>) changes type is not known a priori. More recently, <cit.> proposed a method based on optimal transport. However, it cannot be applied as we deal with a degenerate equation. Therefore, we compute solutions of a problem regularized by adding a complex dissipation, as already performed in <cit.>. The present article is structured as follows. Section <ref> studies the existence and uniqueness of solutions of (<ref>). Then, Section <ref> proposes a numerical method to approximate solutions of (<ref>), supplemented by a convergence proof. Finally, Section <ref> compares a few numerical tests to experimental results. § CONTINUOUS PROBLEM Let Ω⊂ℝ^2 be an open, bounded polygonal domain that can be perfectly fitted by triangular meshes, with a Lipschitz boundary ∂Ω. The boundary is partitioned as ∂Ω = ∂Ω_D ∪∂Ω_N, where ∂Ω_D is relatively closed in ∂Ω. Let V:= H^1(Ω; ), equipped with the usual Sobolev norm. A Dirichlet boundary condition ξ_D ∈ H^1/2(∂Ω_D ; ℝ) is imposed strongly on ∂Ω_D. Let V_D := {ξ∈ V | ξ = ξ_D on ∂Ω_D } be the solution space and V_0 its associated homogeneous space. A Neumann boundary condition g ∈ H^-1/2(∂Ω_N; ) is imposed weakly. In the following, both real and complex Sobolev spaces are used. If the notation does not specify whether the space is real or complex, the reader is invited to understand the space as a real Sobolev space. §.§ Mathematical setting In the following, we focus on the specific case of rhombi slits, see <cit.>. Let α≤ 0, and β≥ 0 be geometric parameters, see <cit.>. Let ξ∈ L^2(Ω;). We define μ_1(ξ) = cos(ξ) - α sin(ξ), μ_2(ξ) = cos(ξ) + β sin(ξ), and A(ξ) = μ_1(ξ) e_1 ⊗ e_1 + μ_2(ξ) e_2 ⊗ e_2. Let Γ(ξ) = Γ_12(ξ) e_1 ⊗ e_2 + Γ_21(ξ) e_2 ⊗ e_1, where (e_1,e_2) is the canonical basis of ℝ^2, and Γ_12(ξ) = -μ_1'(ξ)/μ_2(ξ), Γ_21(ξ) = μ_2'(ξ)/μ_1(ξ). Finally, one has B(ξ) = [ -Γ_21(ξ) 0; 0 Γ_12(ξ); ]. We want to find ξ∈ V_D, { -div(B(ξ) ∇ξ) = 0 a.e. in Ω, B(ξ) ∇ξ· n = g on ∂Ω_N, . where n is the exterior normal to ∂Ω. Note that, defined as such, the entries of B(ξ) can take infinite values. Let ξ∈ V_D, ξ^- ∈ [-π/2,0], and ξ^+ ∈ [0,π/2]. We define the cut-off coefficients Γ̅_21(ξ) = { Γ_21(ξ^+) if ξ≥ξ^+, Γ_21(ξ^-) if ξ≤ξ^-, Γ_21(ξ) otherwise, . Γ̅_12(ξ) = { Γ_12(ξ^+) if ξ≥ξ^+, Γ_12(ξ^-) if ξ≤ξ^-, Γ_12(ξ) otherwise, . and B̅(ξ) = [ -Γ̅_21(ξ) 0; 0 Γ̅_12(ξ); ]. Note that B̅(ξ) is Lipschitz continuous. ξ^- and ξ^+ are chosen such that there exists M > 0, independent of ξ, ‖B̅(ξ)‖_L^∞(Ω) < M.
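For reference, the displayed formulas for μ_1, μ_2, Γ_12, Γ_21 and the cut-off matrix B̅(ξ) translate directly into the following Python/NumPy sketch. It is only a transcription of the definitions above, with the sign conventions taken verbatim from the displayed expressions; the values of α, β, ξ^- and ξ^+ are placeholders, chosen here to be compatible with the range ξ∈ [0, π/3] arising in the numerical tests below.

# Direct transcription of mu_1, mu_2, Gamma_12, Gamma_21 and B_bar(xi) = diag(-Gamma_21, Gamma_12).
# alpha, beta, xi_minus, xi_plus are placeholder values only.
import numpy as np

def B_bar(xi, alpha=-0.9, beta=0.9, xi_minus=0.0, xi_plus=np.pi / 3):
    x = np.clip(xi, xi_minus, xi_plus)         # cut-off: freeze the coefficients outside [xi_-, xi_+]
    mu1, mu2 = np.cos(x) - alpha * np.sin(x), np.cos(x) + beta * np.sin(x)
    dmu1, dmu2 = -np.sin(x) - alpha * np.cos(x), -np.sin(x) + beta * np.cos(x)
    gamma12, gamma21 = -dmu1 / mu2, dmu2 / mu1
    return np.array([[-gamma21, 0.0], [0.0, gamma12]])

print(B_bar(0.2))          # angle well inside [xi_-, xi_+]
print(B_bar(1.5))          # angle clamped to xi_+ = pi/3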
Of course, B̅(ξ) is consistent with B(ξ) only as long as ξ^- ≤ξ≤ξ^+ a.e. in Ω. The bilinear form a(ξ) is defined for ζ∈ V_D and ζ̃∈ V_0 by a(ξ; ζ, ζ̃) = ∫_ΩB̅(ξ) ∇ζ·∇ζ̃. The linear form l is defined for ζ̃∈ V_0 by l(ζ̃) = ∫_∂Ω_N g ·ζ̃. The weak form of the cut-off of (<ref>) is then: search for ξ∈ V_D, a(ξ; ξ, ζ̃) = l(ζ̃), ∀ζ̃∈ V_0. The main results of this paper are the following. There exists a solution ξ∈ L^2(Ω;) of (<ref>). In general, one has B̅(ξ) ∇ξ∈ L^2(Ω;), and ξ = ξ_D on ∂Ω_D. If B̅(ξ) is such that -Γ_21(ξ) and Γ_12(ξ) have a constant sign, and are bounded away from zero, then ξ∈ H^1(Ω;). Let ϵ > 0, ‖ξ_D ‖_H^1/2(∂Ω_D)≤ϵ, and ‖ g ‖_H^-1/2(∂Ω_N)≤ϵ. For ϵ small enough, the solution ξ of (<ref>) is unique. §.§ Linear case For simplicity, the analysis is performed in this subsection with real Sobolev spaces. It would give similar results using complex Sobolev spaces. Let ξ∈ L^2(Ω). We focus here on solving the problem, search for ζ∈ V_D, a(ξ; ζ, ζ̃) = l(ζ̃), ∀ζ̃∈ V_0. In the following, we study the various scenarios that can arise while studying (<ref>). §.§.§ Strictly elliptic Let us assume that a.e. in Ω, Γ_12(ξ) > 0 and Γ_21(ξ) < 0. Then (<ref>) is elliptic. Let δ > 0, and assume min(|Γ_21(ξ)|, Γ_12(ξ)) ≥δ > 0, a.e. in Ω. There exists a unique solution ζ∈ V_D to (<ref>), and one has ‖ζ‖_V ≤ C_tr(1 + M√(1+C)/δ) ‖ξ_D ‖_H^1/2(∂Ω_D) + √(1+C)/δ‖ g ‖_H^-1/2(∂Ω_N), where C,C_tr > 0 are constants independent of ξ. The proof is a simple application of the Lax–Milgram theorem. a(ξ) is trivially bilinear continuous. Let us show that a(ξ) is coercive over V_0^2. Let ζ∈ V_0, one has a(ξ; ζ, ζ) = ∫_Ω (-Γ_21(ξ) ζ_x^2 + Γ_12(ξ) ζ_y^2 ≥δ‖∇ζ‖_L^2(Ω;)^2, where indices indicative derivatives. Let us now take into account the boundary conditions. As ξ_D ∈ H^1/2(∂Ω_D), there exists ζ_D ∈ H^1(Ω), ζ_D = ξ_D on ∂Ω. Let f = - B̅(ξ) ∇ζ_D ∈ L^2(Ω). As the right-hand side is a continuous linear form, there exists a unique ζ̂∈ V_0 solution of a(ξ; ζ̂, ζ̃) = ∫_Ω f ·∇ζ̃ + l(ζ̃). Thus, ζ := ζ̂ + ζ_D ∈ V_D is our solution. Using the coercivity of a(ξ), one has δ/√(1+C)‖ζ‖_V^2 ≤ a(ξ, ζ, ζ) ≤‖B̅(ξ)∇ζ_D ‖_L^2(Ω)‖ζ‖_V + ‖ g ‖_H^-1/2(∂Ω_N)‖ζ‖_H^1/2(∂Ω_N), ≤ M C_tr‖ξ_D ‖_H^1/2(∂Ω_D)‖ζ‖_V + ‖ g ‖_H^-1/2(∂Ω_N)‖ζ‖_V, where C > 0 is the Poincaré constant and C_tr > 0 is the constant from the trace theorem <cit.>, from which one deduces (<ref>). Note that the case where a.e. in Ω, Γ_12(ξ) < 0 and Γ_21(ξ) > 0 would lead to a similar result. §.§.§ Strictly hyperbolic Let us assume that a.e. in Ω, Γ_12(ξ) > 0 and Γ_21(ξ) > 0. Then (<ref>) is hyperbolic. Let δ > 0, and assume min(Γ_21(ξ), Γ_12(ξ)) ≥δ > 0, a.e. in Ω. There exists a unique solution ζ∈ V_D to (<ref>), and it verifies (<ref>). This is proved using the limiting absorption principle. Following <cit.>, we add some dissipation to (<ref>) by adding a purely imaginary part to the problem. Let ξ∈ L^2(Ω;), W := H^1(Ω;ℂ), W_D := {ζ∈ W | ζ = ξ_D on ∂Ω_D } be the solution space, and W_0 the associated homogeneous space. For ζ∈ W_D and ζ̃∈ W_0, let a_ε(ξ; ζ, ζ̃) := ∫_Ω (B̅(ξ) + ιε I) ∇ζ·∇ζ̃, where ε > 0 is a regularization parameter, and ι^2 = -1. Let ε > 0, there exists a unique solution ζ_ε∈ W_D to a_ε(ξ;ζ_ε, ζ̃) = l(ζ̃), ∀ζ̃∈ W_0. One has ‖ζ_ε‖_W ≤ C_tr(1 + M√(1+C)/ε) ‖ξ_D ‖_H^1/2(∂Ω_D) + √(1+C)/ε‖ g ‖_H^-1/2(∂Ω_N), where C,C_tr > 0 are constants independent of ξ. Let ζ∈ W_0, one has (a_ε(ξ;ζ, ζ)) ≥ε‖∇ζ‖_L^2(Ω; )^2. Therefore, a_ε(ξ) is coercive over W_0 × W_0. a_ε(ξ) is also sesquilinear continuous over W_0 × W_0. 
l is continuous over W_0 and the boundary conditions are handled as in the proof of Lemma <ref>. We apply the Lax–Milgram lemma and get the desired result. Let us first show the existence of a solution. Let ξ∈ L^2(Ω;), ζ_ε∈ W_0 and f = - B̅(ξ) ∇ζ_D ∈ L^2(Ω;) such that a_ε(ξ;ζ_ε, ζ̃) = l(ζ̃) + ∫_Ω f ·∇ζ̃, ∀ζ̃∈ W_0. Testing with ζ̃∈ V_0, and taking the real part, one has ∫_ΩB̅(ξ)∇(ζ_ε) ·∇ζ̃ - ε∫_Ω∇(ζ_ε) ·∇ζ̃ = l(ζ̃) + ∫_Ω f ·∇ζ̃ . Therefore, one has | ∫_ΩB̅(ξ)∇(ζ_ε) ·∇ζ̃| ≤ε‖∇(ζ_ε) ‖_L^2(Ω;)‖∇ζ̃‖_L^2(Ω;) + ‖ g ‖_H^-1/2(Ω;)‖ζ̃‖_V + M C_tr‖ξ_D ‖_H^1/2(∂Ω_D)‖∇ζ̃‖_L^2(Ω;), ≤( C_1 ‖ξ_D ‖_H^1/2(∂Ω_D) + C_2 ‖ g ‖_H^-1/2(∂Ω_N))‖ζ̃‖_V, where C_1,C_2 > 0 are constants independent of ε because of (<ref>). Therefore, ‖B̅(ξ)∇(ζ_ε) ‖_V_0^* = sup_ζ̃∈ V_0| ∫_ΩB̅(ξ)∇(ζ_ε) ·∇ζ̃| /‖∇ζ̃‖_L^2(Ω;)≤ C', where C'>0 is a generic constant, independent of ε > 0, and ‖·‖_V_0^* is the usual dual norm over V_0. However, as B̅(ξ)∇(ζ_ε) ∈ L^2(Ω;), one has ‖B̅(ξ)∇(ζ_ε) ‖_V_0^* = ‖B̅(ξ)∇(ζ_ε) ‖_L^2(Ω;)≥δ‖∇(ζ_ε) ‖_L^2(Ω;) . Therefore, ((ζ_ε))_ε > 0 is bounded in H^1-norm independently of ξ and ε. Thus there exits ζ∈ V_0 such that, up to a subsequence, (ζ_ε) ⇀ζ weakly in V, when ε→ 0. Testing with ζ̃∈ V_0, and taking the imaginary part, one has ε∫_Ω∇(ζ_ε) ·∇ζ̃ + ∫_ΩB̅(ξ)∇(ζ_ε) ·∇ζ̃ = 0. But as ((ζ_ε))_ε > 0 is bounded in V, the first term in the right-hand side vanishes when ε→ 0. Therefore, δ‖∇(ζ_ε) ‖_L^2(Ω;)≤‖B̅(ξ)∇(ζ_ε) ‖_L^2(Ω;)⟶_ε→ 0 0. Let us now show that ζ + ζ_D ∈ V_D solves (<ref>). Let ζ̃∈ V_0. One has l(ζ̃) + ∫_Ω f ·∇ζ̃ = ∫_ΩB̅(ξ)∇(ζ_ε) ·∇ζ̃ - ε∫_Ω∇(ζ_ε) ·∇ζ̃⟶_ε→ 0∫_ΩB̅(ξ)∇ζ·∇ζ̃. Thus, ζ +ζ_D ∈ V_D solves (<ref>). Let us now show uniqueness by using the second condition of the BNB lemma, see <cit.>. Let ζ̃∈ V_0. We assume that for all ζ∈ V_0, a(ξ, ζ, ζ̃) = 0. Therefore, 0 = ‖B̅(ξ) ∇ζ̃‖_V_0^* = ‖B̅(ξ) ∇ζ̃‖_L^2(Ω;)≥δ‖∇ζ̃‖_L^2(Ω;). Thus, ζ̃ = 0 a.e. in Ω. §.§.§ Degenerate sign-changing case One cannot apply a strategy similar to Subsections <ref> and <ref> as the condition to be bounded away from zero by δ > 0 is of fundamental importance in deriving the estimate (<ref>). Instead, we follow <cit.> in using weighted Sobolev spaces. For simplicity, we make the following assumption. Let Σ = {x ∈Ω | Γ_21(ξ)(x) = 0 }. We assume that Σ is a locally Lipschitz simple curve splitting Ω in two disjoint open sets Ω_1 and Ω_2 such that Ω_1∩Ω_2 = Σ. For simplicity, in order to avoid problems associated with rigid-body motions, we assume that for i=1,2, ℋ^1(∂Ω_i ∩∂Ω_D) > 0, where ℋ^1 is the one-dimensional Hausdorff measure. Figure <ref> shows an example of such a geometry. We define the following weighted Sobolev spaces, for i=1,2 X_i : = {ζ∈ L^2(Ω_i), B̅(ξ)∇ζ∈ L^2(Ω_i) }, with the associated norm ‖ζ‖_X_i^2 := ‖ζ‖_L^2(Ω_i)^2 + ‖B̅(ξ) ∇ζ‖_L^2(Ω_i)^2. Note that this norm is equivalent to the standard weighted H^1_1/2(Ω_i) norm, see <cit.>. For i=1,2, let X_i,D := {ζ∈ X_i | ζ = ξ_D on ∂Ω_D ∩∂Ω_i } be a convex subset of X_i and let X_i,0 be the associated homogeneous space. To not have any issues with traces, we assume that ℋ^1(∂Ω_D ∩Σ) = 0. Let i=1,2. The weighted Sobolev space X_i is compactly embedded into L^2(Ω_i). Let ε > 0 and Ω_i^ε := {x ∈Ω_i | dist(x, Σ) < ε}. On Ω_i ∖Ω_i^ε, ‖·‖_X_i and the ‖·‖_H^1(Ω_i) are equivalent. Indeed, letting 0 < δ_i = inf_Ω_i ∖Ω_i^ε(|Γ_12(ξ)|, |Γ_21(ξ)|), one has ‖B̅(ξ) ∇ζ‖^2_L^2(Ω_i ∖Ω_i^ε) = ∫_Ω_i ∖Ω_i^εΓ_21(ξ)^2 ζ_x^2 + Γ_12(ξ)^2 ζ_y^2 ≥δ_i^2 ‖∇ζ‖^2_L^2(Ω_i ∖Ω_i^ε). 
The injection H^1(Ω_i ∖Ω_i^ε) ⊂ L^2(Ω_i ∖Ω_i^ε) being compact on open domains with locally Lipschitz boundaries, see <cit.>, the result is true on Ω_i ∖Ω_i^ε. Letting ε→ 0, the result is proved. For i=1,2, there exists a unique solution ζ_i ∈ X_i,D of a(ξ; ζ_i, ζ̃) = l(ζ̃), ∀ζ̃∈ X_i,0, and one has C(ξ) ‖ζ_i ‖_X_i≤ C_1‖ξ_D ‖_H^1/2(∂Ω_D) + C_2 ‖ g ‖_H^-1/2(∂Ω_N), where C_1,C_2 > 0 are constants independent of ξ, and C(ξ) > 0 is a constant which, a priori, depends on ξ. Let i=1,2 and ζ∈ X_i,0. Let us first show that B̅(ξ) ∇· is injective: we assume that for all ζ̃∈ X_i,0, a(ξ, ζ, ζ̃) = 0. Let ε > 0. There exists a unique solution on Ω_i ∖Ω_i^ε per Lemma <ref> and Proposition <ref>. Therefore, ζ = 0 in Ω_i ∖Ω_i^ε. Letting ε→ 0, ζ = 0 a.e. in Ω_i. Let us now show that the image of the operator B̅(ξ) ∇· is closed in X_i,0^*. By definition, one has ‖ζ‖_X_i^2 = ‖B̅(ξ) ∇ζ‖^2_L^2(Ω_i) + ‖ζ‖^2_L^2(Ω_i), and thus ‖ζ‖_X_i≤‖B̅(ξ) ∇ζ‖_L^2(Ω_i) + ‖ζ‖_L^2(Ω_i). Using the compact embedding of Lemma <ref> and the Petree–Tartar Lemma, see <cit.>, one deduces that there exits a unique ζ_i ∈ X_i,0 solution of a(ξ;ζ_i,ζ̃) = l(ζ̃) + ∫_Ω f ·∇ζ̃, ∀ζ̃∈ X_i,0. Therefore, ζ_i + ζ_D χ_Ω_i∈ X_i,D, where χ is the indicator function, is the unique solution in X_i,D of (<ref>). Also, as a consequence of the Petree–Tartar Lemma, there exists C(ξ) > 0, C(ξ) ‖ζ_i ‖_X_i≤‖B̅(ξ) ∇ζ_i ‖_L^2(Ω_i). One has ‖B̅(ξ) ∇ζ_i ‖_L^2(Ω_i;) = ‖B̅(ξ) ∇ζ_i ‖_X_i,0^*≤ C_1 ‖ g ‖_H^-1/2(∂Ω_N) + C_2 ‖ξ_D ‖_H^1/2(∂Ω_D), where C_1,C_2 > 0 are constants independent of ξ, thus providing (<ref>). The dependence in ξ of the constant in (<ref>) is problematic because it prevents us from applying the Schauder fix point theorem <cit.> directly, because we do not have an invariant domain. §.§ Nonlinear case This section is dedicated to proving Theorem <ref> and Proposition <ref>. For ε > 0, there exists a fixed point ξ_ε∈ W_D solution of a_ε((ξ_ε); ξ_ε, ζ̃) = l(ζ̃), ∀ζ̃∈ W_0. Let ε > 0 and T_ε : L^2(Ω; ) → W be the map such that for ξ∈ L^2(Ω; ), T_εξ = ζ_ε∈ W_D is a solution of a_ε((ξ);ζ_ε, ζ̃) = l(ζ̃), ∀ζ̃∈ W_0. We will use the Schauder fix point theorem. The first step is to find an invariant domain for T_ε. Let B = {ζ∈ W_D | ζ verifies (<ref>)}, which is a compact convex set in W. Using Lemma <ref>, one has T_ε B ⊂ B. Let us now show that T_ε is continuous over B for ‖·‖_W. Let (ξ_n)_n ∈ℕ be a sequence of B such that ξ_n ⟶_n → +∞ξ∈ B in W. Let ζ = T_εξ, and ζ_n = T_εξ_n, for n ∈ℕ. We want to prove that ζ_n ⟶_n → +∞ζ in W. ( ζ_n )_n is bounded in W_D, and thus there exists ζ̂∈ W_D, up to a subsequence, ζ_n ⇀_n → +∞ζ̂ weakly in W. Using the classical Sobolev injection W ⊂ L^2(Ω;), ζ_n →ζ̂ strongly in L^2(Ω;), when n → +∞. Let us show that ζ̂ is a solution of (<ref>). By definition, one has for all ζ̃∈ W_0, l(ζ̃) = a_ε((ξ_n); ζ_n, ζ̃) → a_ε((ξ); ζ̂, ζ̃), when n → +∞, since B̅ is K-Lipschitz, and thus continuous. Therefore, ζ̂∈ W_D solves (<ref>), which has for unique solution ζ∈ W_D, and thus ζ̂ = ζ. Using the uniform coercivity of a_ε(ξ_n), ε‖∇ζ_n - ∇ζ‖_L^2(Ω; )^2 ≤ a_ε((ξ_n); ζ_n - ζ, ζ_n - ζ) =a_ε((ξ_n); ζ_n, ζ_n - ζ) - a_ε((ξ_n); ζ, ζ_n - ζ) = l(ζ_n - ζ) - a_ε((ξ_n); ζ, ζ_n - ζ) ⟶_n →∞ 0 - a_ε((ξ); ζ, 0) = 0, as ξ_n →ξ strongly in L^2(Ω;), ζ_n ⇀ζ weakly in W, and ζ_n →ζ strongly in L^2(Ω;). Thus ζ_n →ζ strongly in W. Also, as the solution of (<ref>) is unique, the full sequence ( ζ_n )_n converges towards ζ in W. We conlude with the Schauder fix point theorem. We can now give a proof of Theorem <ref>. 
This will be done by proving that ((ξ_ε))_ε > 0 has a limit and by characterizing that limit. Equation (<ref>) allows us to apply the Banach–Alaoglu Theorem, see <cit.>. Therefore, there exists ℬ∈ L^∞(Ω;ℝ)^2 × 2 such that, up to a subsequence, B̅((ξ_ε)) ⇀^* ℬ weak-* in L^∞(Ω;ℝ)^2 × 2. * Strictly elliptic or hyperbolic case We assume that there exists δ > 0, inf_Ω (|ℬ_11|,ℬ_22) ≥δ. Testing (<ref>) with ζ̃∈ V_0, and taking the real part, one has ∫_ΩB̅((ξ_ε))∇(ξ_ε) ·∇ζ̃ - ε∫_Ω∇(ξ_ε) ·∇ζ̃ = l(ζ̃) - ∫_ΩB̅((ξ_ε)) ∇ζ_D ·∇ζ̃ . Therefore, one has | ∫_ΩB̅((ξ_ε))∇(ξ_ε) ·∇ζ̃| ≤ε‖∇(ξ_ε) ‖_L^2(Ω;)‖∇ζ̃‖_L^2(Ω;) + ‖ g ‖_H^-1/2(Ω;)‖ζ̃‖_V + M C_tr‖ξ_D ‖_H^1/2(∂Ω_D)‖∇ζ̃‖_L^2(Ω;). Thus, ‖B̅((ξ_ε))∇(ξ_ε) ‖_V_0^*≤ C', because of (<ref>), where C'>0 is a generic constant, independent of ε > 0. However, as B̅((ξ_ε))∇(ξ_ε) ∈ L^2(Ω;), one has ‖B̅((ξ_ε))∇(ξ_ε) ‖_V_0^* = ‖B̅((ξ_ε))∇(ξ_ε) ‖_L^2(Ω;)≥δ‖∇(ξ_ε) ‖_L^2(Ω;) . Therefore, ((ξ_ε))_ε > 0 is bounded in H^1-norm independently of ξ and ε. Thus there exits ξ_0 ∈ V_0 such that, up to a subsequence, (ξ_ε) ⇀ξ_0 weakly in V, when ε→ 0. Using a classical Sobolev injection, one has (ξ_ε) ⟶_ε→ 0ξ_0 strongly in L^2(Ω;). Testing (<ref>) with ζ̃∈ V_0, and taking the imaginary part, one has ε∫_Ω∇(ξ_ε) ·∇ζ̃ + ∫_ΩB̅((ξ_ε))∇(ξ_ε) ·∇ζ̃ = 0. But as ((ξ_ε))_ε > 0 is bounded in V, the first term in the right-hand side vanishes when ε→ 0. Therefore, δ‖∇(ξ_ε) ‖_L^2(Ω;)≤‖B̅((ξ_ε))∇(ξ_ε) ‖_L^2(Ω;)⟶_ε→ 0 0. Let us now show that ξ_0 + ζ_D ∈ V_D solves (<ref>). As B̅ is Lipschitz, and therefore continuous, one has B̅((ξ_ε)) →B̅(ξ_0), when ε→ 0. Let ζ̃∈ V_0. One has l(ζ̃) - ∫_ΩB̅((ξ_ε)) ∇ζ_D ·∇ζ̃ = ∫_ΩB̅((ξ_ε))∇(ξ_ε) ·∇ζ̃ - ε∫_Ω∇(ξ_ε) ·∇ζ̃⟶_ε→ 0∫_ΩB̅(ξ_0)∇ξ_0 ·∇ζ̃. Therefore, ∫_ΩB̅(ξ_0) ∇ξ_0 ·∇ζ̃ = l(ζ̃) - ∫_ΩB̅(ξ_0) ∇ζ_D ·∇ζ̃, ∀ζ̃∈ V_0, and thus ξ_0 + ζ_D ∈ V_D verifies (<ref>). * Degenerate sign-changing case We assume that ℬ is degenerate sign-changing. Then as in Hypothesis <ref>, we assume that Σ separates Ω into Ω_1 and Ω_2. We adopt the same notation for the spaces as in Section <ref> but based on ℬ. Thus, let i=1,2, by Proposition <ref>, there exists ξ_0∈ X_i,D, which verifies ∫_Ωℬ∇ξ_0 ·∇ζ̃ = l(ζ̃) - ∫_Ωℬ∇ζ_D ·∇ζ̃, ∀ζ̃∈ X_i,0. Our goal is to show that (ξ_ε) →ξ_0 in L^2(Ω_i;), when ε→ 0. We follow similear steps as in the non-degenerate case and get that there exists C'>0, independent of ε > 0, ‖B̅((ξ_ε))∇(ξ_ε) ‖_L^2(Ω;)≤ C'. Therefore, there exists 𝒜∈ L^2(Ω;), up to a subsequence, B̅((ξ_ε))∇(ξ_ε) ⇀𝒜 weakly in L^2(Ω;), when ε→ 0. But, as B̅((ξ_ε)) ⇀^* ℬ, up to a subsequence, ℬ∇(ξ_ε) ⇀𝒜 weakly in L^2(Ω;), when ε→ 0. Thus ((ξ_ε))_ε > 0 is bounded in X_i. Also, using Lemma <ref>, there exits ξ̂_0 ∈ L^2(Ω;) such that, up to a subsequence, (ξ_ε) →ξ̂_0 strongly in L^2(Ω;), when ε→ 0. Therefore, ∇(ξ_ε) ⇀∇ξ̂_0 weakly in L^2(Ω;), when ε→ 0. Testing (<ref>) with ζ̃∈ X_i,0, and taking the imaginary part, one has ε∫_Ω∇(ξ_ε) ·∇ζ̃ + ∫_ΩB̅((ξ_ε))∇(ξ_ε) ·∇ζ̃ = 0. But as ((ξ_ε))_ε > 0 is bounded in X_i, the first term in the right-hand side vanishes when ε→ 0. Therefore, there exists C”>0, independent of ε > 0, ‖B̅((ξ_ε))∇(ξ_ε) ‖_L^2(Ω;)≤ C”. Reasoning as above, one gets that ( (ξ_ε) )_ε > 0 is bounded in X_i,0. Passing to the limit ε→ 0, one has ∫_Ωℬ∇ξ̂_0 ·∇ζ̃ = l(ζ̃) - ∫_Ωℬ∇ζ_D ·∇ζ̃, ∀ζ̃∈ X_i,0. However, the unique solution to this equation is ξ_0. Thus, ξ̂_0 = ξ_0. As B̅ is Lipschitz, and therefore continuous, one has B̅((ξ_ε)) →B̅(ξ_0) strongly in L^∞(Ω;), when ε→ 0. But, as B̅((ξ_ε)) ⇀^* ℬ, when ε→ 0, then ℬ = B̅(ξ_0). 
Therefore, ∫_ΩB̅(ξ_0) ∇ξ_0 ·∇ζ̃ = l(ζ̃) - ∫_ΩB̅(ξ_0) ∇ζ_D ·∇ζ̃, ∀ζ̃∈ X_i,0, and thus ξ_0+ζ_D ∈ X_i,D verifies (<ref>). Let us now prove Proposition <ref>. * Non-degenerate case Let ξ, ξ̂∈ V_D, and ζ,ζ̂∈ V_D the associated solutions of (<ref>). Using the notation of Section <ref>, δ‖ζ - ζ̂‖_V ≤‖B̅(ξ) ∇ (ζ - ζ̂) ‖_L^2(Ω), = ‖B̅(ξ̂) ∇ζ̂ - B̅(ξ) ∇ζ̂‖_L^2(Ω), ≤ K ‖|ξ̂ - ξ||∇ζ̂|‖_L^2(Ω), ≤ K ‖ξ - ξ̂‖_L^p(Ω)‖∇ζ̂‖_L^q(Ω), ≤ K ‖ξ - ξ̂‖_V ‖ζ̂‖_V, because B̅ is K-Lipschitz, and using a generalized Hölder inequality with 1/p + 1/q = 1/2, p > 2, and q < 2. Using (<ref>) and for ϵ > 0 small enough, one gets that the fix point map is contracting and thus uniqueness. * Degenerate sign-changing case Let i=1,2, ε > 0 and Ω_i^ε := {x ∈Ω_i | dist(x, Σ) < ε}. On Ω_i ∖Ω_i^ε, ‖·‖_X_i and the ‖·‖_H^1(Ω_i) are equivalent. Using the argument of the previous paragraph, the fix point map is contracting on Ω_i^ε, for all ε > 0. Thus the solutions of (<ref>) are unique on Ω_i^ε. Letting ε→ 0, the solutions are unique in Ω_i. Therefore, the solutions can only differ on Σ. However, by definition, one has ξ = tan^-1(-α) on Σ. Thus ξ is single-valued on Σ and the solution of (<ref>) is unique. § DISCRETE PROBLEM In this section, we assume that there exists a unique solution ξ of (<ref>). This can be ensured by verifying the hypothesis of Proposition <ref>. We also assume that B̅(ξ) is degenerate sign-changing. Previous discretizations are suited to problems that are sign-changing but not degenerate, apart from the dissipative approach of <cit.> using complex numbers. This is the approach used in the following. §.§ Discrete setting Let (𝒯_h )_h be a family of quasi-uniform and shape regular triangulations <cit.>, perfectly fitting Ω. For a cell c ∈𝒯_h, let h_c := diam(c) be the diameter of c. Then, we define h := max_c ∈𝒯_h h_c as the mesh parameter for a given triangulation 𝒯_h. Let V_h := ℙ^1(𝒯_h;), the space of affine Lagrange polynomials with complex values. As the solutions we try to compute are only H^1, we use the Scott–Zhang interpolator written as ℐ_h, see <cit.> for details. As ξ_D ∈ H^1/2(∂Ω_D;), there exists ζ_D ∈ H^1(Ω;), ζ_D = ξ_D on ∂Ω_D. We define the following solution space, V_hD := {ζ_h ∈ V_h; ζ_h = ℐ_h ζ_D on ∂Ω_D }, and its associated homogeneous space V_h0 := {ζ_h ∈ V_h; ζ_h = 0 on ∂Ω_D }. §.§ Discrete problem Let ε > 0 and ξ_h ∈ V_hD. Let us first focus on the following linear problem. We search for ζ_h ∈ V_hD, such that a_ε((ξ_h); ζ_h, ζ̃_h) = l(ζ̃_h), ∀ζ̃_h ∈ V_h0. There exists a unique solution ζ_h ∈ V_hD to (<ref>), and one has ‖ζ_h ‖_W ≤ C' C_tr(1 + M√(1+C)/ε) ‖ξ_D ‖_H^1/2(∂Ω_D) + √(1+C)/ε‖ g ‖_H^-1/2(∂Ω_N), where C_tr > 0 is the constant from the trace theorem, C > 0 is the Poincaré constant, and C' > 0 is the interpolation constant, see <cit.>. The result is a direct application of the Lax–Milgram lemma. We now focus on the nonlinear problem consisting of searching for ξ_h ∈ V_hD, a_ε((ξ_h); ξ_h, ζ̃_h) = l(ζ̃_h), ∀ζ̃_h ∈ V_h0. There exists a solution ξ_h to (<ref>). Let ϵ > 0, such that ‖ξ_D ‖_H^1/2(∂Ω_D)≤ϵ, and ‖ g ‖_H^-1/2(∂Ω_N)≤ϵ. For ϵ small enough, ξ_h is unique. Let ξ_h ∈ V_hD. We define the fix point map T_h : V_hD→ V_hD, such that T_h ξ_h = ζ_h, where ζ_h is a solution of (<ref>). Let B = {ξ_h ∈ V_hD | ξ_h verifies (<ref>)}. Using Lemma <ref>, one has T_h B ⊂ B. Let us show that T_h is Lipschitz. Let ξ_h,ξ̂_h ∈ V_hD and let ζ_h:= T_h ξ_h, and ζ_h := T_h ξ_h. 
As a_ε(ξ_h) is coercive with respect to V_h0, ε‖ζ_h - ζ̂_h ‖_V^2 ≤‖B̅((ξ_h)) ∇ (ζ_h - ζ̂_h) ‖_L^2(Ω;) = ‖ (B̅((ξ_h)) - B̅((ξ̂_h))) ∇ζ_h ‖_L^2(Ω;), ≤ K ‖ |ξ_h - ξ̂_h| |∇ζ_h| ‖_L^2(Ω ; ), ≤ K ‖ξ_h - ξ̂_h ‖_L^p(Ω;)‖∇ζ_h ‖_L^q(Ω;), ≤ K ‖ξ_h - ξ̂_h ‖_W‖ζ_h ‖_W, ≤ K ‖ξ_h - ξ̂_h ‖_W C' C_tr(1 + M√(1+C)/ε) ‖ξ_D ‖_H^1/2(∂Ω_D) + √(1+C)/ε‖ g ‖_H^-1/2(∂Ω_N), as B̅ is K-Lipschitz, and using a generalized Hölder inequality with 1/p + 1/q = 1/2, and p > 2 and q < 2. Thus T_h is continuous and the Brouwer fix point gives the desired existence. Regarding uniqueness, T_h becomes contracting for ϵ > 0 small enough, which ensures uniqueness. §.§ Convergence In the following, we operate under the assumption that uniqueness, as proved in Proposition <ref>, applies. Even though it is not explicitly stated, all the results in Section <ref>, so far, depend on ε > 0. We make the notation explicit by writing ( ξ_ε,h)_ε > 0, h > 0 the sequence of solutions of (<ref>). When ε→ 0, ( ξ_ε,h)_h > 0 converges strongly in L^2(Ω ; ℝ) to ξ_0, solution of (<ref>), and lim_ε,h → 0‖B̅(ξ_0) ∇ (ξ_0 - (ξ_ε,h)) ‖_L^2(Ω; )→ 0. As for ε > 0 fixed, ( ξ_ε,h)_ε,h is bounded in W_D, according to Lemma <ref>, there exists ζ_ε∈ W_D, up to a subsequence, ξ_ε,h⇀ζ_ε, weakly in W, when h → 0. Using a compact Sobolev injection, one has ξ_ε,h→ζ_ε, strongly in L^2(Ω;), when h → 0. Let ζ̃∈ W_0. Testing (<ref>) with ℐ_h ζ̃∈ W_h0, one has ∫_Ω (B̅((ξ_ε,h)) + ιε ) ∇(ξ_ε,h) ·∇ℐ_h ζ̃ = l(ℐ_h ζ̃) - ∫_ΩB̅((ξ_ε,h)) ∇ℐ_h ζ_D ·∇ℐ_h ζ̃ . Thus, as B̅ is K-Lipschitz and therefore continuous, one has ∫_Ω (B̅((ζ_ε)) + ιε ) ∇(ζ_ε) ·∇ζ̃ = l(ζ̃) - ∫_ΩB̅((ζ_ε)) ∇ζ_D ·∇ζ̃. And therefore, ζ_ε is a solution of (<ref>) which has ξ_ε∈ W_D as a unique solution. Thus, ζ_ε = ξ_ε, and the full sequence (ξ_ε,h)_h > 0 converges towards ξ_ε. Let us now show the strong convergence of the gradients. As a consequence of the weak convergence, one has lim inf_h → 0∫_Ω (B̅((ξ_ε,h)) + ιε ) ∇ξ_ε,h·∇ξ_ε,h≥∫_Ω (B̅((ξ_ε)) + ιε ) ∇ξ_ε·∇ξ_ε. Testing (<ref>) with ξ_ε,h∈ W_h0, one has ∫_Ω (B̅((ξ_ε,h)) +iε) ∇ξ_ε,h·∇ξ_ε,h = l(ξ_ε,h) - ∫_ΩB̅((ξ_ε,h)) ∇ζ_D ·∇ξ_ε,h, ⟶_h → 0 l(ξ_ε) - ∫_ΩB̅((ξ_ε)) ∇ζ_D ·∇ξ_ε, using the weak and strong convergence towards ∇ξ_ε and ξ_ε. However, one has l(ξ_ε) - ∫_ΩB̅((ξ_ε)) ∇ζ_D ·∇ξ_ε = ∫_Ω (B̅((ξ_ε)) +i ε)∇ξ_ε·∇ξ_ε. Therefore, lim sup_h → 0∫_Ω (B̅((ξ_ε,h)) + ιε ) ∇ξ_ε,h·∇ξ_ε,h≤∫_Ω (B̅((ξ_ε)) +i ε)∇ξ_ε·∇ξ_ε. Consequently, (B̅((ξ_ε,h)) + ιε ) ∇ξ_ε,h·∇ξ_ε,h⟶_h → 0∫_Ω (B̅((ξ_ε)) +i ε)∇ξ_ε·∇ξ_ε. As the bilinear form a_ε((ξ_ε)) is coercive over W_0 × W_0, then ∇ξ_ε,h→∇ξ_ε strongly in L^2(Ω;), when ε→ 0. Using Theorem <ref>, one can prove that ((ξ_ε))_ε > 0 converges strongly in L^2(Ω;) to ξ_0, the unique solution of (<ref>). Let us now prove the last statement of the theorem. One has ∫_ΩB̅((ξ_ε)) ∇(ξ_ε) ·∇(ξ_ε) - ε∫_Ω∇(ξ_ε) ·∇(ξ_ε) = l((ξ_ε)) - ∫_ΩB̅((ξ_ε)) ∇ζ_D ·∇(ξ_ε), ⟶_h → 0 l(ξ_0) - ∫_ΩB̅(ξ_0) ∇ζ_D ·∇ξ_0, using the strong convergence towards ξ_0. However, l(ξ_0) - ∫_ΩB̅(ξ_0) ∇ζ_D ·∇ξ_0 = ∫_ΩB̅(ξ_0) ∇ξ_0 ·∇ξ_0. Consequently, B̅((ξ_ε)) ∇(ξ_ε) ·∇(ξ_ε) ⟶_h → 0∫_ΩB̅(ξ_0) ∇ξ_0 ·∇ξ_0. But, as B̅((ξ_ε)) →B̅(ξ_0), when ε→ 0, one has lim_h → 0‖B̅(ξ_0) ∇ (ξ_0 - (ξ_ε)) ‖_L^2(Ω; )→ 0. Using the fact that ξ_ε,h→ξ_ε strongly in W, when h → 0, one gets the desired result. § NUMERICAL TESTS is used in its configuration for complex numbers, see <cit.>. The numerical experiments consist in solving (<ref>) for various boundary conditions and different material parameters. (<ref>) is solved through Newton iterations. 
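As a concrete illustration of the solution strategy, the following minimal Python sketch shows the structure of a Newton iteration with a relative residual stopping criterion of the kind used in the numerical tests below. It is applied to a small stand-in nonlinear system rather than to the regularized finite element problem itself; the assembly of the complex-valued ℙ^1 discretization is delegated to the finite element library and not reproduced here, and all function names are illustrative.

```python
import numpy as np

def newton_solve(residual, jacobian, x0, r_tol=1e-6, max_iter=50):
    """Newton iteration with a relative residual stopping criterion,
    mirroring the r_tol convergence tests quoted in the experiments below."""
    x = x0.copy()
    r0 = np.linalg.norm(residual(x))
    for it in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) <= r_tol * max(r0, 1e-30):
            return x, it
        dx = np.linalg.solve(jacobian(x), -r)   # linearized correction
        x = x + dx
    raise RuntimeError("Newton iteration did not converge")

# Toy stand-in problem (NOT the kirigami PDE): x0^2 + x1 = 3,  x0 + x1^3 = 5.
F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**3 - 5.0])
J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 3.0 * x[1]**2]])
sol, iters = newton_solve(F, J, x0=np.array([1.0, 1.0]), r_tol=1e-8)
print(sol, iters)
```

In the actual computations, the residual and Jacobian are assembled from the discrete regularized form a_ε((ξ_h); ξ_h, ·) - l(·), and the tolerances r_tol = 10^-8 or 10^-6 quoted below play the role of r_tol in the sketch.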
Then, γ_h, and y_eff,h are computed from ξ_h through least-squares and Equations (<ref>) and (<ref>). These results are then compared to experimental results. The experimental data are recovered using the image recognition capabilities of , see <cit.>. The code can be found in <cit.>. §.§ Auxetic kirigami The pattern is sketched in Figure <ref>. For this pattern, one has α=-0.9 and β=0.9. For ξ∈ [0, π/3], -Γ_21(ξ) > 0 and thus (<ref>) is strictly elliptic. The domain is Ω = (0,L) × (0,L), where L=1.5, see Figure <ref>. Homogeneous Neumann boundary conditions are imposed on the top and bottom surfaces and the nonhomogeneous Dirichlet condition ξ_D is imposed on the left and right surfaces. We consider a mesh of size h=0.005, with 119,817 dofs. The initial guess is chosen as ξ^0_h = 0 in Ω. 7 Newton iterations are necessary to reach convergence with a relative tolerance on the residual r_tol = 10^-8. Figure <ref> shows the numerical results in the deformed configuration against the experimental results. The numerical results are very similar to the ones obtained in <cit.>. One can notice some slight differences with the experimental results. We interpret these as being due to elasticity effects, as described in <cit.>. §.§ Non-auxetic kirigami The pattern is sketched in Figure <ref>. For this pattern, one has α=-0.9 and β=0. For ξ∈ [0, π/3], -Γ_21(ξ) ≤ 0 and thus (<ref>) is degenerate hyperbolic. The domain is Ω = (0,L) × (0,L), where L=1.5. The Dirichlet boundary conditions are similar to Figure <ref>. We consider a mesh of size h = 5.3 · 10^-3, with 180,601 dofs. The regularization parameters is chosen as ε = 0.5. 4 Newton iterations are necessary to reach convergence with a relative tolerance on the residual r_tol = 10^-6. Figure <ref> shows the comparison between the experimental and numerical results. With respect to the results of <cit.>, this numerical result shows a better agreement with the experimental data as one can see the depression in the middle of the sample, where ξ is lower than in the surroundings. Also, one sees a similar gradient of ξ on the four edges of the sample. §.§ Mixed type kirigami The pattern is sketched in Figure <ref>. For this pattern, one has α=-1.6 and β=0.4. The domain is Ω = (0,L) × (0,L), where L=1.5. The Dirichlet boundary conditions are similar to Figure <ref>. A main contribution of this paper is to be able to approximate solutions for this pattern, which was not previously possible, see <cit.>. We consider a mesh of size h = 5.0 · 10^-3, with 180,601 dofs. The regularization parameter is chosen as ε = 0.071. 5 Newton iterations are necessary to reach convergence with a relative tolerance on the residual r_tol = 10^-6. Figure <ref> shows the comparison between the experimental and numerical results. This numerical test shows a good agreement with the experimental results. The sample seems to deform less than in the experiment which could be due to elasticity effects or indicate a need for more accurate boundary conditions. § CONCLUSION This paper has presented the analysis of a nonlinear degenerate sign-changing divergence form PDE that models the deformation of a specific type of kirigami called the rhombi-slit. Under appropriate boundary conditions, it proved existence and uniqueness of solutions to (<ref>). Then, a numerical method based on Lagrange ℙ^1 finite elements with complex values and a regularized problem is analyzed and shown to converge towards the solutions of (<ref>). 
Finally, numerical results demonstrate the robustness of the method in comparison with previous approaches and with experimental data. Future work could focus on solving for an imposed y_eff on the boundary ∂Ω_D rather than an imposed ξ, and on extending the approach to other kirigami patterns. § CODE AVAILABILITY The code is available at <https://github.com/marazzaf/rhombi_slit.git>. § ACKNOWLEDGMENT The author would like to thank Paul Plucinsky (University of Southern California) and Ian Tobasco (University of Illinois Chicago) for fruitful discussions. The author would also like to thank Paolo Celli (Stony Brook University) for providing experimental results, produced for <cit.>, together with code to extract the true deformations from the experiments. § FUNDING This work is supported by the US National Science Foundation under grant number OIA-1946231 and by the Louisiana Board of Regents through the Louisiana Materials Design Alliance (LAMDA).
http://arxiv.org/abs/2307.01742v1
20230704141814
Can We Mathematically Spot Possible Manipulation of Results in Research Manuscripts Using Benford's Law?
[ "Teddy Lazebnik", "Dan Gorlitsky" ]
cs.IR
[ "cs.IR", "cs.DL" ]
Can We Mathematically Spot Possible Manipulation of Results in Research Manuscripts Using Benford's Law? Teddy Lazebnik^1* and Dan Gorlitsky^1 ^1 Independent researcher, Israel * Corresponding author: lazebnik.teddy@gmail.com ================================================================================================================================ The reproducibility of academic research has long been a persistent issue, contradicting one of the fundamental principles of science. What is even more concerning is the increasing number of false claims found in academic manuscripts recently, casting doubt on the validity of reported results. In this paper, we utilize an adaptive version of Benford's law, a statistical phenomenon that describes the distribution of leading digits in naturally occurring datasets, to identify potential manipulation of results in research manuscripts, solely using the aggregated data presented in those manuscripts. Our methodology applies the principles of Benford's law to commonly employed analyses in academic manuscripts, thus, reducing the need for the raw data itself. To validate our approach, we employed 100 open-source datasets and successfully predicted 79% of them accurately using our rules. Additionally, we analyzed 100 manuscripts published in the last two years across ten prominent economic journals, with ten manuscripts randomly sampled from each journal. Our analysis predicted a 3% occurrence of result manipulation with a 96% confidence level. Our findings uncover disturbing inconsistencies in recent studies and offer a semi-automatic method for their detection. Keywords: Statistical analysis; anomaly detection; first digit law; results reproduction. Can We Mathematically Spot Possible Manipulation of Results in Research Manuscripts Using Benford's Law? Teddy Lazebnik^1* and Dan Gorlitsky^1 ^1 Independent researcher, Israel * Corresponding author: lazebnik.teddy@gmail.com ================================================================================================================================ empty myheadings Draft: August 1, 2023Draft: August 1, 2023 § INTRODUCTION The scientific community places great emphasis on maintaining the integrity and dependability of published manuscripts <cit.>. The accuracy and validity of research findings are crucial for advancing knowledge and establishing evidence-based policies <cit.>. Unfortunately, the existence of fraudulent or deceptive research across different disciplines presents a substantial obstacle for scientists <cit.>. There are various motivations behind the presentation of misleading results in academic papers. These motivations range from seeking professional recognition by publishing in high-impact journals to securing funding based on impressive previous work, and even attempting to salvage a study that did not yield the desired outcomes <cit.>. Furthermore, the traditional peer review process often fails to identify deliberate attempts at result fabrication, particularly when raw data is not provided, although the absence of raw data itself is an undesirable practice <cit.>. This issue is particularly relevant in the field of economics, where data analysis and statistical properties play a crucial role, but restrictions on sharing raw data, driven by privacy concerns and the protection of business secrets, make it difficult to scrutinize the findings <cit.>. 
Consequently, scholars in this field may find it tempting to manipulate results with minimal risk involved, creating an undesirable environment for research integrity. Ensuring the integrity and trustworthiness of research studies is essential, and this necessitates the identification and exposure of potential inconsistencies or intentional misrepresentations within research manuscripts <cit.>. Traditional methods of detecting anomalies or suspicious patterns often involve a manual examination, which is a time-consuming and resource-intensive process <cit.>. Furthermore, this approach demands a high level of expertise in each respective field, thereby limiting the number of individuals capable of performing such tasks. As a result, there is an increasing demand for objective and automated approaches to assist in the identification of possible falsehoods in academic research, particularly when the original data is unavailable for review. This paper presents an innovative method leveraging Benford's law <cit.>, a statistical phenomenon commonly utilized in forensic accounting and auditing. Our approach focuses on devising rules for examining standard statistical analyses like mean, standard deviation, and linear regression coefficients. Benford's law centers around the distribution of leading digits in real-world datasets, offering a mathematical framework to detect deviations from anticipated patterns. Building upon this framework, we introduce multiple tests associated with various types of statistical analyses typically reported in research manuscripts. These tests compare the expected Benford's distribution against the observed distribution for each respective analysis. In order to assess the efficacy of our methodology, a sample of 100 open-access datasets was obtained. For half of these datasets, we computed the actual statistical values, while for the remaining half, we intentionally introduced modifications to these values. The findings demonstrate that our proposed approach successfully predicted the outcomes with an accuracy rate of 79%. Subsequently, we collected data from 100 papers published in the top 10 economic journals within the last two years. Disturbingly, our method detected anomalies in 3% of the papers, attaining a confidence level of 96%. This paper is organized as follows. Section <ref> outlines our adoption of Benford's distribution and the construction of the manuscripts test. Section <ref> presents the methodology employed to collect and preprocess the data used for our experiments as well as the analysis itself. Section <ref>, provides the results of our experiments. Section <ref> discusses the implementations of our results followed by an analysis of the applications and limitations of the study with possible future work. Fig. <ref> provides a schematic view of this study. § STATISTICAL OPERATORS BENFORD'S LAW Benford's law describes the expected distribution of leading digits in naturally occurring datasets <cit.>. It states that in many sets of numerical data, the leading digits are not uniformly distributed. In fact, they follow a logarithmic distribution, as follows: P(d) = log_10 (1 + 1/d), where d ∈{1, 2, …, 9} indicates the leading digit and P(d) ∈ [0, 1] is the probability a number would have d as its leading digit. To apply Benford's law in practice, one needs to compare the observed distribution of leading digits in a dataset to these of Eq. (<ref>). 
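As a small illustration, the following Python sketch (using NumPy, an assumption on tooling) computes the first-digit probabilities of Eq. (<ref>) and extracts the observed leading digits of a numeric sample, the two ingredients needed for such a comparison.

```python
import numpy as np

def benford_pmf():
    """First-digit probabilities P(d) = log10(1 + 1/d), d = 1..9."""
    d = np.arange(1, 10)
    return np.log10(1.0 + 1.0 / d)

def leading_digits(values):
    """Leading (most significant) digit of each nonzero value."""
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    # Shift every value into [1, 10) and take the integer part.
    exponents = np.floor(np.log10(v))
    return (v / 10.0 ** exponents).astype(int)

p = benford_pmf()
print(dict(zip(range(1, 10), np.round(p, 3))))   # P(1) ~ 0.301, ..., P(9) ~ 0.046
sample = np.random.lognormal(mean=0.0, sigma=2.0, size=10_000)
digits = leading_digits(sample)
observed = np.bincount(digits, minlength=10)[1:] / len(digits)
print(np.round(observed, 3))   # broadly close to the Benford probabilities
```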
Deviations from the expected distribution can indicate potential anomalies, irregularities, or manipulation within the dataset. Now, let us consider a set of vectors V := {v_i}_i=1^k ∈ℝ^n × k. Formally, an irregularity test based on Benford's law would return p which is the probability value obtained from the Kolmogorov-Smirnov test <cit.> between the log distribution obtained by fitting V and the distribution in Eq. (<ref>). In order to perform this test on values obtained from V using operator o, one needs to first find Benford's distribution associated with such an operator. Hence, let us consider three common statistical operators: mean, standard deviation, and linear regression coefficients. One can numerically obtain these distributions using the convolution operator <cit.>. Formally, we define an anomaly test to be T_o(D) where T_o: ℝ^n → [0, 1] is a function that accepts a vector D ∈ℝ^n and an operator o and returns a score of the probability D is anomaly with respect to operator o. Formally, for our case, we associate each operator o with its Benford's distribution and T_o(D) is implemented to return 1 - p where p is the probability value obtained from the Kolmogorov-Smirnov test <cit.> between the distribution associated with the operator o and the same one after fitting to V. Notably, for each operator, we generated 1000 random samples and calculated the results for each one of them. We denoted the worst result obtained as a ∈ [0, 1]. In order to ensure that the proposed test numerically produces results in the range [0, 1], for each outcome, x, we compute and report (x-a)/(1-a). § EXPERIMENTAL SETUP In this section, we outline the two experiments conducted in this study. The first experiment is designed to numerically validate the performance of the proposed method. After validating the method, in a complementary manner, the second experiment evaluates the number of irregularities in recent academic economic studies. We implemented the experiments using the Python programming language <cit.> (Version 3.7.5). We set p < 0.05 to be statistically significant. First, for the method's performance validation, we manually collect 100 numerical datasets from the Data World[We refer the reader to <https://data.world/datasets/economics>] and Kaggle[<https://www.kaggle.com>], following <cit.>. The datasets are randomly chosen from a broad range of fields and represent a wide range of computational tasks. Each dataset is represented by a matrix D. We define a feature f_j of a dataset D as follow f_j := ∀ i ∈ [1, …, n]: d_i,j. A feature is used to calculate the unitary statistical properties. Based on this data, for each datasets (D) and statistical operator (o), we computed T_o(D), obtaining a vector of results denoted by u. The overall anomaly probability prediction is define to be 1/|u|∑_i = 1^|u|u_i. For half of the datasets, we introduce uniformly distributed noise which is between 1 and 10 percent of the mean value in a uniform manner. As such, these datasets should not agree with Benford's law and therefore if the proposed method predicts they do, it is an error. As such, we have 50 positive and 50 negative examples. Second, for the manuscript evaluation, we collected a sample of 100 papers published in 10 leading economic journals over the past two years. These papers served as the test subjects for applying our proposed method to detect anomalies or irregularities. 
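The following Python sketch illustrates one simplified way to realize such a test. It compares the leading digits of a set of reported statistics against digits drawn from the plain first-digit law of Eq. (<ref>) with a two-sample Kolmogorov–Smirnov test from SciPy and reports 1 - p, then averages the scores over operators. The operator-specific distributions obtained by convolution, the empirical rescaling by the constant a, and the exact test construction are not reproduced; all names and the choice of reference sampling are illustrative.

```python
import numpy as np
from scipy import stats

BENFORD_P = np.log10(1.0 + 1.0 / np.arange(1, 10))

def leading_digits(values):
    v = np.abs(np.asarray(values, dtype=float))
    v = v[v > 0]
    return (v / 10.0 ** np.floor(np.log10(v))).astype(int)

def anomaly_score(stat_values, n_ref=5000, seed=0):
    """Simplified stand-in for T_o: returns 1 - p, where p comes from a two-sample
    Kolmogorov-Smirnov test between the observed leading digits of the reported
    statistics and digits drawn from the Benford first-digit law."""
    rng = np.random.default_rng(seed)
    observed = leading_digits(stat_values)
    reference = rng.choice(np.arange(1, 10), size=n_ref, p=BENFORD_P)
    _, p_value = stats.ks_2samp(observed, reference)
    return 1.0 - p_value

# Example: scores for two operators (reported means and regression slopes),
# aggregated by averaging, as in the overall anomaly probability 1/|u| * sum(u).
means  = np.random.lognormal(0.0, 1.5, size=200)   # heavy-tailed: roughly Benford-like
slopes = np.random.uniform(1.0, 2.0, size=200)     # narrow range: poor Benford fit
scores = np.array([anomaly_score(means), anomaly_score(slopes)])
print(scores, scores.mean())
```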
We choose these amounts and distribution to balance the time and resource burden and the statistical power of the sample. In order to determine which journals are leading in the economics field, we used the Scimago Journal and Country rank website[We refer the reader to <https://www.scimagojr.com/>], searching for the Economics and Econometrics and taking the top 10 journals <cit.>: Quarterly Journal of Economics, American Economic Review, Journal of Political Economy, Journal of Finance, Review of Economic Studies, Econometrica, Journal of Economic Literature, Review of Financial Studies, Journal of Marketing, and Journal of Financial Economics. For each journal, we mainly count how many manuscripts the journal published in the last two years, asking the computer to randomly pick 10 indexes. Once the indexes were obtained, we downloaded these manuscripts from the journals' websites. Next, we manually extract the results from the manuscripts presented either in tables or figures. For each of them, if appropriate, we apply our adopted Benford's law. § RESULTS To assess the performance of our method, we evaluated the confusion matrix for the dataset, as presented in Table <ref>. The obtained results indicate an accuracy of 0.79 and an F_1 score of 0.77. Notably, the model exhibited a tendency to predict manipulation-free manuscripts incorrectly, identifying 7 manipulation-free manuscripts as containing manipulations. Conversely, it also misclassified 14 manuscripts with manipulations as manipulation-free. However, from the perspective of the journal, it is preferable for the model to err on the side of caution by falsely predicting manuscripts as manipulation-free, as falsely accusing innocent authors of result manipulation is deemed more undesirable than missing manuscripts with actual manipulations. Furthermore, Table <ref> provides an overview of the predicted number of economic manuscripts flagged for containing results manipulations based on varying confidence levels. It is evident that as the confidence level increases, the number of flagged manuscripts decreases. This observation aligns with expectations since the null hypothesis assumes that the manuscripts are manipulation-free. Hence, higher confidence levels necessitate stronger statistical evidence of manipulation for a manuscript to be flagged. § DISCUSSION AND CONCLUSION In this study, we introduced an innovative approach to identify potential falsehoods in research manuscripts by applying Benford's law to commonly reported statistical values, including mean, standard deviation, and linear regression coefficients. By adopting this law to the context of research manuscripts, we aimed to enhance the detection of deceptive information. To validate the efficacy of our approach, we conducted two experiments. In the initial experiment, we evaluated the performance of our method by applying it to a random sample of 100 datasets from diverse fields. The results demonstrated that our method achieved an accuracy of 0.79 and an F1 score of 0.77, indicating its capability to identify potential anomalies, albeit with some limitations. Consequently, it can serve as a supportive tool or an initial filter to alleviate the burden of manual investigation. Building upon this premise, the second experiment involved applying our method to 100 recent manuscripts from reputable high-impact academic journals in the field of economics. 
Alarming findings emerged: approximately 3% of the manuscripts exhibited anomalies, inaccuracies, or even explicit manipulations, at a 96% confidence level. These outcomes unfortunately align with existing trends in academic fraud, underscoring the value of our approach in uncovering inconsistencies and deliberate misrepresentations in academic research. By leveraging Benford's law, our method offers an objective and automated complement to traditional manual scrutiny. It is particularly relevant in fields such as economics, where researchers rely heavily on data analysis and statistical properties yet often lack access to raw data because of privacy or proprietary constraints. While our results demonstrate the promise of the approach, several limitations should be considered. First, our method relies on the assumption that the reported aggregated data follow Benford's distribution, which may not always hold <cit.>. Second, our approach requires a dedicated test for each statistical operator, which makes it difficult to cover the wide spectrum of fields and manuscripts that report many different kinds of statistical analyses. Third, our method does not provide definitive proof of fraud or misconduct; it only flags potential irregularities that warrant further investigation, and therefore reduces the required time and resources only modestly. Finally, the publication of this study reduces its effectiveness, since malicious scholars aware of the proposed method could develop counter-strategies to evade it, as is common in other fields such as cybersecurity <cit.>.
http://arxiv.org/abs/2307.01158v2
20230703170718
Theory of Mind as Intrinsic Motivation for Multi-Agent Reinforcement Learning
[ "Ini Oguntola", "Joseph Campbell", "Simon Stepputtis", "Katia Sycara" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.MA" ]
[ Theory of Mind as Intrinsic Motivation for Multi-Agent Reinforcement Learning equal* Ini Oguntolacmu Joseph Campbellcmu Simon Stepputtiscmu Katia Sycaracmu cmuSchool of Computer Science, Carnegie Mellon University, Pittsburgh, USA Ini Oguntolaioguntol@andrew.cmu.edu Theory of Mind, ToM, RL, multi-agent, reinforcement learning, intrinsic motivation 0.3in ] The ability to model the mental states of others is crucial to human social intelligence, and can offer similar benefits to artificial agents with respect to the social dynamics induced in multi-agent settings. We present a method of grounding semantically meaningful, human-interpretable beliefs within policies modeled by deep networks. We then consider the task of 2nd-order belief prediction. We propose that ability of each agent to predict the beliefs of the other agents can be used as an intrinsic reward signal for multi-agent reinforcement learning. Finally, we present preliminary empirical results in a mixed cooperative-competitive environment. § INTRODUCTION The ability to infer the mental states of oneself and others – beliefs, desires, intentions, preferences, etc – is known as theory of mind (ToM) <cit.>. Humans naturally build rich internal models of others, and are able to use these inferences to predict the behavior of others, to condition their own behavior, and to forecast social interactions <cit.>. Theory of mind has long been studied within cognitive science and psychology <cit.>, a fundamental aspect of human social intelligence that has been shown to develop in early childhood. <cit.>. Traditionally, agent-modeling approaches within reinforcement learning (RL) and imitation learning largely ignore the idea of internal mental states, typically only focused on modeling external actions <cit.>. However, there is a growing body of work in the machine learning literature aimed towards developing artificial agents that exhibit theory of mind <cit.>. Even beyond simply providing a helpful inductive bias for modeling behavior, ToM reasoning has the potential to enable the discovery and correction of false beliefs or incomplete knowledge, facilitate efficient communication and coordination, and improve human-agent teaming <cit.>. The work of <cit.> highlights key challenges regarding the difficulty of evaluating current deep learning ToM approaches. In particular, from a human perspective we may solve a task using an already-developed internal theory of mind, whereas an artificial agent may be able to learn simpler decision rules or take advantage of spurious correlations as shortcuts, and it is difficult to determine whether ToM has actually been learnt. Here we consider the reverse – rather than solving a task and hoping it induces a theory of mind, we instead explicitly learn a theory of mind over semantically grounded beliefs, and use this as a signal to solve the task. Our fundamental research question is the following: can modeling other agents' beliefs serve as an intrinsic reward signal to improve performance in multi-agent settings? In this paper we develop an approach to explicitly grounding semantically meaningful beliefs within RL policies. We then propose the use of ToM reasoning over the beliefs of other agents as intrinsic motivation in multi-agent scenarios. We run experiments in a mixed cooperative-competitive environment and show preliminary results that suggest this approach may improve multi-agent performance, with respect to both coordination and deception. 
The primary contributions of this paper are the following: * We develop an information-theoretic residual variant to the concept bottleneck learning paradigm <cit.> based on mutual information minimization. * We utilize this approach to model semantically-meaningful belief states within RL policies. * We propose the prediction task of second-order prediction of these beliefs (i.e. ToM reasoning) as intrinsic motivation. * We demonstrate preliminary results that demonstrate improved performance in a mixed cooperative-competitive environment. § RELATED WORK §.§ Intrinsic Motivation in Deep RL Intrinsic motivation in reinforcement learning refers to the use of an additional reward signal to encourage particular agent behaviors without direct feedback from the environment on the task. In the single-agent setting, common approaches to intrinsic motivation include “curiosity" to encourage visiting novel states <cit.> and “empowerment" to encourage diversity of reachable states <cit.>. Most of these approaches can also be extended to the multi-agent setting, but the introduction of multiple agents inherently creates an inter-agent dynamic that can be explored as well. <cit.> proposed an intrinsic reward for “social influence" by rewarding agents for having high mutual information between their actions. <cit.> develop similar approaches that reward an agent for influencing the state transition dynamics and rewards of other agents. In constrast, our intrinsic reward approach is predicated on influencing the internal beliefs of other agents, rather than directly influencing their external states or actions. §.§ Theory of Mind in Multi-Agent RL Although RL often implicitly involves theory of mind via agent modeling, recent approaches have also sought to model this directly <cit.>. Within multi-agent reinforcement learning there have been a variety of approaches inspired by ToM reasoning, modeling beliefs <cit.> and intents <cit.>. Other inverse reinforcement learning methods approach ToM-like reasoning by conditioning the reward function on inferred latent characteristics <cit.>. Most of these are aimed at improving coordination in cooperative multi-agent scenarios, particularly with regard to communication <cit.>. §.§ Concept Learning Concept learning, generally speaking, is an approach to interpretability for deep neural networks that involves enforcing structure on the latent space to represent grounded, semantically meaningful “concepts". One such approach is concept whitening <cit.>, in which an intermediate layer is inserted for orthogonal alignment of data in the latent space with predefined human-interpretable concept labels, with concepts provided via auxiliary datasets. The restriction with this method is the inherent assumption that all concepts are non-overlapping. Concept bottleneck models are a similar approach developed an approach that consists of a concept extractor directly supervised on concept labels, and a predictor network that generates an output from these concepts <cit.>. While more flexible than concept whitening in the sense that it can encode any set of concepts, it still makes the assumption that the provided set of concepts alone is expressive enough for the predictive task; performance suffers when this not the case. Some approaches mitigate this by combining the concept predictions with a residual extracted from the input, they either impose additional constraints (e.g. 
orthogonality) on the combined output that may not hold <cit.>, or they do not provide a way to directly ensure the information encoded by the residual does not overlap with the concepts <cit.>, allowing the model to effectively ignore concepts in its decision making process. While prior work has used these approaches in the context of imitation and reinforcement learning <cit.>, in this work we specifically examine concept learning as a way to approach the challenge of grounding semantically meaningful mental states within policies. We also develop a residual variant that directly encourages decorrelation between concepts and residual while avoiding the introduction of any restrictive assumptions. § METHOD §.§ Modeling Beliefs via Concept Learning In deep reinforcement learning, policies are typically black box models that directly map states to actions. Our approach follows the paradigm of concept learning <cit.>, which involves inserting an intermediate concept layer which is designed to align with human-interpretable “concepts", typically via a supervised auxiliary loss. In our setting, these concepts are designed to model beliefs about the environment. For instance, in an environment with a door, one could model the belief over whether the door is locked as a binary concept b_locked∈{0, 1}. L_belief = MSE(𝐛, 𝐛') if continuous CE(𝐛, 𝐛') if discrete where 𝐛 is the agent belief vector, 𝐛' is the ground truth, MSE is the mean-squared error, and CE is the cross entropy loss. These beliefs are then used to generate an action. However, depending on the selection of beliefs, they alone may not be a sufficient signal to learn a policy that successfully solves a given task. We mitigate this by additionally introducing a residual – a compressed representation of the input that is concatenated to the belief vector. Given vector input 𝐱, we have our residual network generate r(𝐱) = 𝐳. It is important that our residual and beliefs be disentangled – that is, the residual should not contain any information about the beliefs – as otherwise our model may simply learn to rely entirely on the residual and ignore the beliefs, which would compromise the interpretablity of the policy. We approach “disentanglement" from a probability theory perspective, aiming to ensure that the belief and residual vectors are statistically independent. Here our goal is to minimize the mutual information between the belief vector and residual, which is zero if and only if they are independent. This measure can also be characterized as KL-divergence between the joint distribution and the product of the marginal distributions: I(B;Z) = D_KL(ℙ_BZ∥ℙ_B ⊗ℙ_Z) To achieve this, we utilize the variational approach from <cit.> and minimize a contrastive log-ratio upper bound: L_q(θ) = -𝔼_p_σ(𝐛, 𝐳)[log q_θ(𝐳 | 𝐛)] L_residual(σ) = 𝔼_p_σ(𝐛, 𝐳) [log q_θ(𝐳 | 𝐛)] - 𝔼_p_σ(𝐛)𝔼_p_σ(𝐳) [log q_θ(𝐳 | 𝐛)] where 𝐛 is the belief vector, 𝐳 is the residual vector, p_σ(𝐛, 𝐳) is the joint distribution of intermediate outputs from our policy, and q_θ(𝐳 | 𝐛) is a variational approximation to the conditional distribution p_σ(𝐳 | 𝐛), modeled via a separate neural network trained to minimize negative log-likelihood L_q(θ) = -logℒ(θ). Unlike approaches based on concept whitening <cit.>, our method of disentanglement does not assume or impose any intra-dimensional orthogonality constraints within the concept (i.e. belief) or residual layers, but rather decorrelates the two vectors as a whole. 
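A minimal PyTorch sketch of this penalty is given below. The diagonal-Gaussian parameterization of q_θ(z | b), the network sizes, and the use of in-batch shuffling to approximate the marginal term are assumptions made for illustration; they are one common way to realize a contrastive log-ratio upper bound, not necessarily the exact configuration used in our experiments.

```python
import torch
import torch.nn as nn

class VariationalQ(nn.Module):
    """q_theta(z | b): a diagonal Gaussian over the residual, conditioned on beliefs."""
    def __init__(self, belief_dim, residual_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(belief_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, residual_dim)
        self.logvar = nn.Linear(hidden, residual_dim)

    def log_prob(self, b, z):
        # Gaussian log-density, up to an additive constant
        h = self.net(b)
        mu, logvar = self.mu(h), self.logvar(h)
        return (-0.5 * ((z - mu) ** 2 / logvar.exp() + logvar)).sum(dim=-1)

def club_penalty(q, b, z):
    """Contrastive log-ratio upper bound on I(B; Z) for one batch: E[log q(z|b)]
    under the joint minus the same quantity under shuffled (marginal-like) pairs."""
    joint = q.log_prob(b, z).mean()
    perm = torch.randperm(z.size(0))
    marginal = q.log_prob(b, z[perm]).mean()
    return joint - marginal            # added to the policy loss with weight gamma

def q_nll(q, b, z):
    """Negative log-likelihood used to fit q_theta in its own optimization step."""
    return -q.log_prob(b.detach(), z.detach()).mean()

# Minimal usage with random stand-in batches
q = VariationalQ(belief_dim=4, residual_dim=8)
b, z = torch.randn(32, 4), torch.randn(32, 8)
print(club_penalty(q, b, z).item(), q_nll(q, b, z).item())
```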
Specifically, we make no restrictive assumptions that concepts are mutually exclusive, and also retain full multi-dimensional expressiveness within our residual representation while simultaneously minimizing correlation with our concept vector. Finally, the concatenated output (𝐛, 𝐳) is fed into the rest of the actor network to generate an action. The concept layer and residual layer are trained by adding the additional loss terms to the objective function optimized by the reinforcement learning algorithm of choice. For our experiments we use the PPO objective from <cit.>, but generally speaking this approach is agnostic to the particular RL algorithm chosen. L_PPO(σ) = 𝔼_t [ min ( r_t(σ)A_t, clip(r_t(σ), 1 + ϵ, 1 + ϵ) A_t ) ] L_policy = α L_PPO + β L_belief + γ L_residual where r_t(σ) = π_σ(a_t | s_t)/π_σ_old(a_t | s_t) is the PPO probability ratio, π_σ is the policy to be optimized, A_t is the advantage function, and α, β, γ, ϵ > 0 are hyperparameters. During training, for each batch we optimize both the policy loss L_policy (with respect to the policy parameters σ) and the variational loss L_q (with respect to the variational parameters θ). §.§ Second-Order Belief Prediction In a multi-agent scenario where each agent is reasoning over the same set of beliefs over the environment, consider the second-order belief as one agent's prediction of another agent's beliefs. It is important to note that the first-order belief of an agent may be incorrect, in which case a correct second-order belief would successfully predict this false belief. For instance, consider a scenario where a door is locked but agent A believes the door is unlocked. Agent B should ideally have 1) the first-order belief that the door is unlocked, and 2) the second-order belief that agent A thinks the door is locked. Our approach proposes the use of second-order belief prediction as an intrinsic reward. Intuitively speaking, we want to incentivize each agent to 1) learn to predict the beliefs of other agents and 2) learn to behave in a way such that the beliefs of the other agents will be predictable (e.g. learning to observe other agents, learning to communicate, etc). We do this by augmenting the agent's belief network to produce not only its own belief vector, but also a belief vector prediction for each of the other agents. 𝐁 = [ 𝐛 + f(𝐱)_i ]_i=1^K where K is the total number of agents, 𝐁 is the K × dim(𝐛) second-order belief matrix, and f : ℝ^dim(𝐱)→ℝ^K ×dim(𝐛) is modeled by a neural network. Rather than treat this as a directly-supervised auxiliary task, we instead include the second-order prediction loss as an additional reward term, as we want the policy's value estimation to be biased towards states where both the current and the future beliefs (or belief distributions) of the other agents tend to be predictable (e.g. states where it can gain information about other agents). Then the intrinsic reward becomes the negative belief prediction loss: r_tom = - 1/K∑_i=1^K MSE(𝐁_i, 𝐛^(i)) if continuous - 1/K∑_i=1^K CE(𝐁_i, 𝐛^(i)) if discrete r = r_task + λ r_tom where λ≥ 0 is a hyperparameter. §.§ Training vs Execution The training setup requires that all agents are trained in the manner previously described, and we assume that the beliefs of other agents are available during centralized training to calculate intrinsic reward. 
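For illustration, the following sketch shows how the intrinsic term can be combined with the task reward during centralized training. Tensor shapes, the reduction used for the error, and the value of λ are placeholders; the snippet assumes PyTorch and covers the continuous (MSE) and discrete (cross-entropy) cases of the expressions above.

```python
import torch
import torch.nn.functional as F

def tom_intrinsic_reward(pred_beliefs, other_beliefs, discrete=False):
    """r_tom for one agent: negative mean prediction error over the other agents.

    pred_beliefs:  (K, belief_dim) second-order predictions B for the K other agents
    other_beliefs: (K, belief_dim) those agents' own (first-order) belief vectors,
                   available during centralized training
    """
    if discrete:
        # logits of shape (K, num_classes); other_beliefs holds class indices (K,)
        return -F.cross_entropy(pred_beliefs, other_beliefs.long())
    return -F.mse_loss(pred_beliefs, other_beliefs)

def shaped_reward(r_task, pred_beliefs, other_beliefs, lam=0.1):
    """Total reward r = r_task + lambda * r_tom used to train the policy."""
    return r_task + lam * tom_intrinsic_reward(pred_beliefs, other_beliefs)

# Example with two other agents and a 3-dimensional continuous belief
pred  = torch.tensor([[0.2, 0.7, 0.1], [0.5, 0.4, 0.9]])
truth = torch.tensor([[0.0, 1.0, 0.0], [0.5, 0.5, 1.0]])
print(shaped_reward(r_task=1.0, pred_beliefs=pred, other_beliefs=truth).item())
```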
During training we do not propagate gradients from the policy or reward through the 1st-order belief prediction network; that is, the 1st-order belief prediction network is only updated from the supervised belief loss on ground truth values from the environment, and is unaffected by the reward dynamics of the task. In combination with the mutual information regularization for the residual, this ensures that any belief information relevant to an agent's policy comes only from the agent's ability to infer the correct values of said beliefs from the environment. This approach eliminates any potential issues with a "malicious actor" purposefully generating incorrect belief predictions. Execution, on the other hand, does not require beliefs or any inner states of other agents, and thus can be done with other policies that were not trained with our training setup or architecture – or even with human agents. § EXPERIMENTS §.§ ParticleWorld: Physical Deception We use a variant of the physical deception task described in <cit.>. This environment consists of N landmarks, N green “good" agents and a single red adversary agent within a 2D world. In our variant, one of the landmarks is the “target", but neither the good agents nor the adversary are initially told which one. The N green agents receive a joint reward based on the minimum distance to the target landmark, with each agent's contribution weighted by a randomly generated reward coefficient η_i ∼Uniform[0, 1]. Similarly, the adversary is penalized based on its distance from the target. The episode ends either after a fixed time-limit, or when the adversary reaches any landmark. If this is the target landmark, the adversary receives a positive reward, otherwise a negative penalty (both time–scaled). r_good(t) = -min_i { d(𝐱_i,t, 𝐱_target) } + d(𝐱_adv, 𝐱_target) r_adv(t) = -d(𝐱_adv, 𝐱_target) + 𝕀[𝐱_adv = 𝐱_other](1 - t/T) - 𝕀[𝐱_adv = 𝐱_target](1 - t/T) where d is Euclidean distance, 𝐱_target is the position of the target, 𝐱_other is the position of the non-target landmark, 𝐱_i,t is the position of good agent i at time t, 𝐱_adv,t is the position of the adversary agent at time t, and T is the maximum episode length. The adversary is incentivized to find and navigate to the target as quickly as possible. On the other hand, the green agents are incentivized to keep the adversary uncertain as long as possible while accumulating reward. Observations Each agent policy takes in a vector observation indicating the relative positions of landmarks and other agents. The good agents also can observe the weighted sum of their distances to the target landmark (weighted via their reward coefficients), whereas the adversary must rely on observing other agents' behavior to try and determine which landmark is the target. Actions Each agent moves via a discrete action space. Beliefs In this scenario each agent is trained with two sets of first-order beliefs: * Which landmark is the target? * What are the reward coefficients for each agent? §.§ Training We use Multi-Agent Proximal Policy Optimization (MAPPO) to train all agents in our experiments, under the paradigm of centralized training with decentralized execution (CTDE) <cit.>. Our training procedure alternates between optimizing the policy for the good agents and the policy for the adversary, where one policy remains fixed and the weights other are trained; we swap every 100k timesteps. 
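A sketch of the per-step rewards for this task is given below. Euclidean distances are used as in the equations above, and the reward-coefficient weighting η_i of the green agents is omitted, as in the displayed expression for r_good. Note that the displayed equation assigns the time-scaled terminal terms of the adversary with signs opposite to the prose description; the sketch follows the prose (bonus for reaching the target, penalty for the other landmark), and all names are illustrative.

```python
import numpy as np

def deception_rewards(x_good, x_adv, x_target, x_other, t, T, reached=None):
    """Per-step rewards for the physical-deception variant described above.

    x_good: (N, 2) green-agent positions; x_adv, x_target, x_other: (2,) positions.
    reached: "target", "other", or None -- landmark reached by the adversary at step t.
    """
    d = lambda a, b: float(np.linalg.norm(np.asarray(a) - np.asarray(b)))
    r_good = -min(d(x, x_target) for x in x_good) + d(x_adv, x_target)
    r_adv = -d(x_adv, x_target)
    if reached == "target":          # time-scaled terminal bonus (prose convention)
        r_adv += 1.0 - t / T
    elif reached == "other":         # time-scaled terminal penalty
        r_adv -= 1.0 - t / T
    return r_good, r_adv

print(deception_rewards(x_good=[(0.1, 0.2), (0.9, 0.8)], x_adv=(0.5, 0.5),
                        x_target=(0.0, 0.0), x_other=(1.0, 1.0), t=12, T=25))
```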
§ PRELIMINARY RESULTS We trained agents with various belief-prediction configurations on the physical deception task with N=2 landmarks; training curves are shown in Figure <ref>, and the mean episodic reward achieved by the final policies is shown in Table <ref>. We report the mean episode reward obtained with the best hyperparameter setting over 20 episodes, for each of 5 random seeds. We find that agents with the 2nd-order intrinsic reward perform significantly better relative to their opposition. This holds for both the green good agents and the red adversary. §.§ Qualitative Analysis of Observed Strategies We qualitatively assess and summarize the strategies observed with the final trained policies from each of the configurations considered. Baseline (no beliefs): each green agent drifts towards a unique landmark, while the red adversary appears to drift randomly. 1st-order beliefs only (all agents): similar behavior to the baseline. 2nd-order beliefs (green agents): each green agent drifts towards a specific landmark; in some episodes, the green agents swap between landmarks. 2nd-order beliefs (red adversary): the adversary tends to be more decisive, moving quickly to a landmark. In both cases we observe that the 2nd-order intrinsic reward tends to produce more complex strategies that are not discovered by the baseline MARL approach, or even when learning with 1st-order beliefs alone. § ONGOING AND FUTURE WORK Although preliminary results indicate our approach may be effective, they concern a single, relatively simple environment. We are currently examining more complex multi-agent tasks with more varied social dynamics, and additionally scaling the approach to scenarios with more (or even an arbitrary number of) agents. Beyond continuing to experiment with other environments, we are particularly interested in studying the efficacy of our approach for communication, both in traditional cooperative scenarios and potentially in competitive tasks. We are also interested in a more thorough investigation of our concept-residual approach in comparison with the standard whitening or bottleneck approaches <cit.>. § ACKNOWLEDGEMENTS This work is supported by the Defense Advanced Research Projects Agency (DARPA) under Contract No. HR001120C0036, and by the AFRL/AFOSR award FA9550-18-1-0251.
http://arxiv.org/abs/2307.00181v1
20230701004043
Influence maximization on temporal networks: a review
[ "Eric Yanchenko", "Tsuyoshi Murata", "Petter Holme" ]
cs.SI
[ "cs.SI" ]
Journal of Class Files, Vol. 14, No. 8, August 2015 Yanchenko et al.: Influence maximization on temporal networks: a review Influence maximization (IM) is an important topic in network science where a small seed set is chosen to maximize the spread of influence on a network. Recently, this problem has attracted attention on temporal networks where the network structure changes with time. IM on such dynamically varying networks is the topic of this review. We first categorize methods into two main paradigms: single and multiple seeding. In single seeding, nodes activate at the beginning of the diffusion process, and most methods either efficiently estimate the influence spread and select nodes with a greedy algorithm, or use a node-ranking heuristic. Nodes activate at different time points in the multiple seeding problem, via either sequential seeding, maintenance seeding or node probing paradigms. Throughout this review, we give special attention to deploying these algorithms in practice while also discussing existing solutions for real-world applications. We conclude by sharing important future research directions and challenges. graphs; diffusion; dynamic networks Influence maximization on temporal networks: a review Eric Yanchenko, Tsuyoshi Murata and Petter Holme E. Yanchenko: Department of Statistics at North Carolina State University, USA, and Department of Computer Science at Tokyo Institute of Technology, Japan. E-mail: ekyanche@ncsu.edu T. Murata: Department of Computer Science at Tokyo Institute of Technology, Japan. P. Holme: Department of Computer Science at Aalto University, Finland and Center for Computational Social Science at Kobe University, Japan. August 1, 2023 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================ § INTRODUCTION Networks, or graphs, are a simple tool to abstractly represent a system involving interacting entities, where the objects are modeled as nodes and their relationship as edges. Because of their generality and flexibility, many real-world settings have leveraged networks over the past few decades including: online social networks <cit.>, infrastructure networks <cit.> and biological process networks <cit.>. Recently, there has been great interest in not only understanding the topological structure of networks, but also how information diffuses on them <cit.>. For example, in social networks, we may be interested in understanding viral outbreaks in a population, or breaking news spreads in an online setting. The most fundamental assumption of network science and machine learning applied to graphs is that the network structure begets the function of the networked system. First discussed in abstract terms by Georg Simmel in the 1890s <cit.> and in the language of graph theory by Jacob Moreno and Helen Jennings in the 1930s <cit.>, this assumption is close to the core structuralism—a pillar of 20th-century (primarily social) science. It suggests that we can infer the function of nodes from their position in the network. The foundational functional concept is a node's importance. 
But to operationalize a concept like importance, we must consider many specifics about the system. Some questions may include: What is the objective of the system? What dynamics operate on it? Or what are the possible interventions? Influence maximization (IM)[Throughout this work, we use the terms influenced, activated and infected interchangeably when referring to a node's state.] assumes a scenario where some diffusion (in the mathematical and physical literature, also known as spreading) process can happen on the network and we want this process to reach as many nodes as possible. This diffusion process is triggered by some seed nodes and the IM problem is to identify the seed nodes that maximize the number of nodes affected by the diffusion. Viral (or word-of-mouth) marketing is an obvious potential application area that fits the assumptions above <cit.>. Recommendation systems are another relevant area of application <cit.>, as is seeding public health campaigns <cit.>. However, by analogy to network centrality—another family of conceptualizations under the umbrella term of importance—influence maximization is interesting for a wider area of problems. Protecting critical infrastructure <cit.> or safeguarding against bioterrorism <cit.> could also benefit from influence maximization studies. It is also a possible approximation <cit.> for distinct but related scenarios like the vaccination problem <cit.> (finding nodes whose removal would hinder a diffusion event as effectively as possible), and sentinel surveillance <cit.> (identifying nodes that would be suitable probes for early and reliable detection of diffusion events). Kempe et al. <cit.> first formulated the IM problem in their seminal work and ever since, the problem has been explored extensively in the computer science, statistical physics and information science literature. The majority of work considers static networks where the nodes and edges are fixed <cit.>. Most methods either efficiently emulate the influence spread function, or use some heuristic to rank nodes by importance. We eschew extended discussion of the static IM problem and instead refer interested readers to reviews in <cit.>. In many real-world situations, the static assumption is violated as networks have temporal variation with links forming and disappearing <cit.>. The topic of this review is IM techniques and analyses for such dynamically varying networks. For example, on a social media platform like Meta's Facebook or Twitter, users are constantly joining the platform in addition to updating their connections. Thus, to achieve satisfactory influence spread, researchers must account for these dynamics. There are several key challenges associated with the IM problem. First, simply calculating the expected influence spread is #P-hard for many models <cit.>. One common approach to circumvent this issue is Monte Carlo (MC) simulations, where the diffusion process is simulated a large number of times and the average number of influence nodes is used to estimate the influence spread <cit.>. Still, selecting the optimal seed nodes is NP-hard <cit.>. Therefore, many researchers employ heuristics so finding the globally optimal solution is rarely guaranteed. In temporal networks, there is an additional challenge stemming from the interplay of dynamic variation in edge sets and diffusion processes. In this work, we provide a comprehensive review of the existing literature on IM on temporal networks and elucidate important future research areas. 
One main contribution of this review is our keen eye towards the challenges associated with deploying these methods in practice. While there has been significant research on the static and temporal IM problem, we found that there is minimal research on using these methods “in the field.” Thus, we highlight the utility of each method for practitioners. This differentiates the present work from reviews in <cit.> and <cit.>. Additionally, <cit.> focuses on influence analysis rather than strictly influence maximization and we also discuss several tasks not mentioned by the authors, e.g., sequential seeding and the ex ante setting. There are four main sections of this review. For the remainder of this section, we introduce the necessary prerequisites for studying the IM problem. In Section <ref>, we discuss “single seeding” methods which select a single seed set at the beginning of the diffusion process. This problem is the natural extension of the static IM problem. We classify the existing methods into three categories while also discussing methods which analyze the diffusion process. The topic of Section <ref> is methods which repeatedly choose seed nodes as the network evolves. Within this category, some methods activate nodes at different times throughout the diffusion process while others “maintain” an influential seed set. Additionally, we consider the node probing problem where the future evolution of the network can only be known by probing a small subset of nodes and this partially visible network is used for IM seeding. See Figure <ref> for an overview comparing these different paradigms. Real-world implementation of IM algorithms is the topic of Section <ref> where we primarily focus on the problem of increasing HIV awareness amongst homeless youth. This application highlights the many challenges associated with deploying IM algorithms. Finally, we conclude in Section <ref> with important areas for future research, including the ex ante setting, model misspecification, and the temporal relationship between the diffusion process and network evolution. Throughout the paper, we give special attention to the functionality of these methods on real-world problems. §.§ Notation We begin by defining common notations used throughout the paper. Let 𝒢=(G_0,…,G_T-1) be a temporally evolving network over T time stamps. Typically the graph snapshots G_t occur over evenly spaced time intervals, i.e., t_k-t_l is constant for all k,l. For each t, let G_t=(V_t,E_t), where V_t is the set of vertices and E_t is the set of edges. Typically, V_t≡ V and does not vary with time. Let n=|∪_t V_t| and m=∑_t |E_t| be the total number of nodes and edges in the network, respectively. Additionally, let A_ij(t) denote the corresponding adjacency matrix for graph G_t where A_ij(t)=1 if there is an edge from node j to node i at time t, and 0 otherwise. In an undirected network, A_ij=A_ji for all i,j, while A may be asymmetric for a directed network. Let N_i(t) be the set of incoming neighbors of node i at time t, i.e., N_i(t)={j:A_ij(t)=1}. §.§ Diffusion mechanisms In order to study IM, it is necessary to describe the diffusion of influence on a network. The most common diffusion models are the independent cascade (IC), linear threshold (LT) and susceptible-infected-recover (SIR) models. In the IC model <cit.>, influenced nodes have a single chance to activate their uninfluenced neighbors. Specifically, let p_ij be the probability that node i influences nodes j. 
If node i is infected at time t-1, then node j becomes infected at time t with probability p_ij, assuming A_ij(t-1)=1. From time t, node i can no longer influence its neighbors. Then the total influence spread is the number of nodes that were active at any point. In Figure <ref>, we show the influence diffusion process on a toy network using the IC model. Two nodes (green) are initially selected for the seed set and begin in the active state. These nodes attempt to activate all of their neighbors, but only some attempts are successful (green and red edges are successful and unsuccessful activations, respectively). In the next time step, the newly activated nodes now attempt to influence all of their current neighbors, while the previously activated nodes become inactive. This process continues one more time step, and the number of nodes in the active (green) or formerly active (red) state is the total influence spread for this seed set (eight nodes). Due to the stochastic nature of the process, even if the process is repeated with the same seed nodes, the total diffusion spread may differ. The SIR model <cit.> is similar to the IC model, but now each activated node has a fixed probability λ of infecting its unactivated neighbors. Moreover, a node can activate its neighbors as long as it is in the infected state. Each infected node, however, has probability μ of “recovering” and being unable to activate its neighbors. The total influence spread is the final number of nodes in the infected and recovered state. If p_ij=λ for all i,j and μ=1, then the SIR and IC models are equivalent. Additionally, the SIR model reduces to the susceptible-infected (SI) model <cit.> if μ=0. In the LT model <cit.>, each node is randomly assigned a threshold θ_i and each edge endowed with a weight b_ij. If the sum of weights for a node's infected neighbors exceeds its threshold, then this node becomes infected, i.e., node i is activated if ∑ b_ij>θ_i where the sum is over all infected neighbors of i. §.§ Problem statement With notation and diffusion mechanisms in hand, we formally define the IM problem. Let 𝒢 be some dynamic network and let 𝒟 be a diffusion mechanism, e.g., IC or SIR. We define σ(S) as the expected number of influenced nodes for seed set S and for diffusion process 𝒟 on graph 𝒢. We suppress the dependence of 𝒢 and 𝒟 on σ(·), but stress that its behavior is highly dependent on both. For a fixed k=|S|, we seek the seed set S which maximizes σ(S), i.e., S^* =max_S⊆ V,|S|=kσ(S). Perhaps the simplest approach to approximate (<ref>) is evaluating σ(S) via MC simulations and choosing the node which marginally leads to the largest gain in influence spread, as outlined in Algorithm <ref>. In the static setting, this greedy algorithm provably yields a result within a factor of (1-1/e) of the global optimum <cit.>. MC simulations are computationally intensive, however, so many methods focus on efficiently computing the influence spread before employing a greedy algorithm. Another common paradigm for selecting seed nodes that avoids direct calculation of the influence function is based on node ranking. Nodes are ranked based on some measure of importance, e.g., degree or centrality, and the k nodes with the largest value are chosen for the seed set. While much faster than greedy algorithms, these approaches yield no theoretical guarantees and may choose nodes that “overlap” their influence. 
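To make the baseline concrete, the following Python sketch implements a Monte Carlo estimate of σ(S) under the IC model on a sequence of snapshots and the greedy selection of Algorithm <ref>. A single uniform activation probability p and a plain dictionary-of-sets representation of each snapshot are assumptions made for brevity.

```python
import random

def simulate_ic(snapshots, seeds, p=0.1, rng=random):
    """One IC cascade over temporal snapshots: nodes activated at t-1 get a single
    chance to activate each of their current neighbors with probability p."""
    active, newly = set(seeds), set(seeds)
    for adj in snapshots:                       # adj: {node: set(neighbors)} at time t
        nxt = set()
        for u in newly:
            for v in adj.get(u, ()):
                if v not in active and rng.random() < p:
                    nxt.add(v)
        active |= nxt
        newly = nxt
    return len(active)

def sigma(snapshots, seeds, p=0.1, n_mc=200):
    """Monte Carlo estimate of the expected influence spread sigma(S)."""
    return sum(simulate_ic(snapshots, seeds, p) for _ in range(n_mc)) / n_mc

def greedy_im(snapshots, nodes, k, p=0.1, n_mc=200):
    """Greedy seed selection: repeatedly add the node with largest marginal gain."""
    S = set()
    for _ in range(k):
        best = max((v for v in nodes if v not in S),
                   key=lambda v: sigma(snapshots, S | {v}, p, n_mc))
        S.add(best)
    return S

# Tiny example: 3 random snapshots on 30 nodes
nodes = range(30)
snaps = [{u: {v for v in nodes if v != u and random.random() < 0.1} for u in nodes}
         for _ in range(3)]
print(greedy_im(snaps, nodes, k=2))
```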
For example, if two nodes have high degrees but share many common neighbors, then seeding both nodes may not be optimal as their influence will spread to the same nodes. Reverse Reachable (RR) sketches also choose seed nodes without direct computation of σ(S). For a given time t and for each edge (i,j), we randomly draw Z_ij∼𝖡𝖾𝗋𝗇𝗈𝗎𝗅𝗅𝗂(p_ij) where p_ij is the probability that node i influences node j, and keep the subgraph with Z_ij=1. These edges are sometimes referred to as the “live” or “active” edges. Once this subgraph is constructed, the source and destination of each (directed) edge are reversed before randomly selecting a node. Finally, a breadth-first search is conducted from this randomly selected node and all nodes reached by this search are kept for this particular RR-sketch. Essentially, the nodes in this set are those that can influence the selected node through the diffusion process. This process is repeated a large number of times to yield a set of RR-sketches. § SINGLE SEEDING The classical IM problem is where a practitioner selects a set of seed nodes at time t=0 in order to maximize the influence spread at time t=T. Most solutions either estimate the influence spread via probabilities, or use some heuristic to rank nodes by influence. For the majority of these methods, the complete temporal evolution of the network is assumed to be known, but some relax this assumption. We also discuss several works which do not present novel algorithms, but rather analyze existing methods and/or diffusion processes. §.§ Algorithms First, we discuss algorithms which solve the single seeding temporal IM problem. §.§.§ Greedy The greedy algorithm of Kempe et al. <cit.> naturally extends to the temporal setting: nodes with the largest marginal gain in influence spread are added to the seed set incrementally as in Algorithm <ref>. The only difference from <cit.> is that the expected influence spread is computed on a temporally evolving network. This method is considered the “gold standard” of IM algorithms and can easily be adapted to any diffusion model. On the other hand, this algorithm suffers from a high computational cost due to the repeated MC simulations required for computing the influence spread. §.§.§ Probability of influence spread Since the costly step of the greedy algorithm is computing the influence spread, several heuristics exist which use the probability of a node's activation in order to approximate σ(S). In Aggarwal et al. <cit.>, π_i(t) is the probability that node i is activated at time t. Assuming the network is a tree, if p_ji(t) is the probability that node j activates node i at time t, then the probability that node i is activated at time t+1 is π_i(t+1) =π_i(t) + (1-π_i(t))×(1-∏_j∈ N_i(t)(1-π_j(t)p_ji(t))) where N_i(t) is the set of incoming neighbors of node i at time t. The first term is the probability that the node was already activated during the previous time step while the second term is the probability that it was not previously activated but becomes so in the current step. The authors initialize π_i(1)=1 if i∈ S and 0 otherwise. For each i, π_i(t) iteratively updates via (<ref>) and ∑_i π_i(T) estimates σ(S). Aggarwal et al. use this procedure and a greedy algorithm to choose the seed set. The authors assume that p_ij(t) is an increasing function of the length of time that an edge between nodes i and j persists. 
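A minimal sketch of this probability-propagation estimate, assuming the activation probabilities p_ji(t) are supplied as one dictionary of directed edges per snapshot, might look as follows; the data layout and names are illustrative choices on our part, and the tree assumption of the original derivation means the value is only approximate on general graphs.

def estimate_spread_probabilistic(snapshots, seeds):
    """Approximate sigma(S) by iterating the pi_i(t) recursion above.

    snapshots: list (one entry per time step) of dicts mapping a directed
    edge (j, i) to the probability p_ji(t) that j activates i at that step;
    seed nodes are active with probability 1 at t = 0.
    """
    nodes = {v for edges_t in snapshots for edge in edges_t for v in edge}
    pi = {v: 1.0 if v in seeds else 0.0 for v in nodes}
    for edges_t in snapshots:
        # probability that no in-neighbour succeeds in activating i at time t
        fail = {v: 1.0 for v in nodes}
        for (j, i), p_ji in edges_t.items():
            fail[i] *= 1.0 - pi[j] * p_ji
        pi = {i: pi[i] + (1.0 - pi[i]) * (1.0 - fail[i]) for i in nodes}
    return sum(pi.values())  # estimated expected influence spread

Plugged into the same greedy loop as before, this estimate avoids repeated Monte Carlo simulation.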
Additionally, this method eschews the standard, equally-spaced graph snapshots in favor of times corresponding to structural changes based on the number of edge updates. The approach also can find the most likely seed nodes for a given diffusion pattern. Osawa and Murata <cit.> take an analogous approach to Aggarwal et al. but use the SI model for diffusion. In fact, this method is equivalent to <cit.> if p_ij(t)=λ for all i,j and times t. Osawa and Murata show this approach slightly overestimates the true influence spread and prove that the associated greedy algorithm's computational complexity is O(nmk). In simulations, this method outperforms broadcast <cit.> and closeness centrality <cit.> heuristics. It also yields comparable performance to a standard greedy algorithm but is two orders of magnitude faster. Additionally, Osawa and Murata show that centrality heuristics perform worse on networks with strong community structure due to nodes' overlapping influence. Erkol et al. <cit.> extend this paradigm to the SIR model. If π_i(t) is defined as above and ρ_i(t) is the probability that node i is in state R at time t (such that 1-π_i(t)-ρ_i(t) is the probability of being in state S), then the SIR dynamics are defined by π_i(t) = (1-μ)π_i(t-1) +(1-π_i(t-1)-ρ_i(t-1)) ×(1-∏_j∈ N_i(t)(1-λπ_j(t-1))) ρ_i(t) =ρ_i(t-1)+μπ_i(t-1). A greedy algorithm chooses the seed nodes based on the influence spread estimate ∑_i {π_i(T) + ρ_i(T)}. Erkol et al. study the performance of the method when the network is noisy or incomplete, i.e., the temporal snapshots are randomly re-ordered, only the first snapshot is available, and the network is aggregated into a single snapshot. The authors find that the order of snapshots is crucial and ignorance about G_1 causes the algorithm to suffer. Indeed, in many cases, knowing only G_1 is sufficient for large influence spread, while the aggregation approach consistently performs poorly. Erkol et al. also show that if μ is large, then central nodes in the first few layers are the best influence spreaders, whereas for small μ, nodes must be central in many layers to make for optimal seed nodes. §.§.§ Node ranking heuristics Rather than estimate the influence spread, the following methods rank nodes by influence and select the top k as seed nodes. The earliest approach comes from Michalski et al. <cit.>. The authors adopt the LT model and assume the first T/2 snapshots are available to select seed nodes, but the diffusion process occurs on G_T/2,…, G_T-1. Some static measure of node importance m_i(t) is computed for all nodes i and snapshots G_t. The values across t are combined by down-weighting older values to yield a single metric θ_i for each node, i.e., θ_i =∑_t=0^T/2-1 f(m_i(t), t) where f(x,t) is an increasing function in t. The k nodes with largest θ_i for a given f(·) are chosen as seed nodes. Michalski et al. find that the following combinations of metrics m_i(t) and forgetting mechanisms f(·) yield the largest influence spread: out-degree and in-degree with exponential forgetting (f_exp(x,t)=e^-tx)[This was the forgetting mechanism reported in the original paper but there appears to be some error since this function clearly decreases with t.], total degree and logarithmic forgetting (f_log(x,t)=log_T/2-1-t+1(x)), betweenness centrality and hyperbolic forgetting (f_hyp(x,t)=(T/2-1-t-1)^-1x); and closeness centrality with power forgetting (f_pow(x,t) = x^t). 
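The mechanics of such a forgetting-based ranking are straightforward; the sketch below uses networkx out-degree centrality with a simple linear up-weighting of recent snapshots purely as illustrative stand-ins for the metric and forgetting combinations listed above.

import networkx as nx

def forgetting_rank(snapshots, k,
                    metric=nx.out_degree_centrality,
                    forget=lambda value, t: (t + 1) * value):
    """Rank nodes by a time-weighted structural metric and return the
    top-k as seed candidates.

    snapshots: networkx DiGraphs for the observed half of the network's
    evolution; forget(value, t) up-weights more recent snapshots.
    """
    theta = {}
    for t, graph_t in enumerate(snapshots):
        for v, value in metric(graph_t).items():
            theta[v] = theta.get(v, 0.0) + forget(value, t)
    return sorted(theta, key=theta.get, reverse=True)[:k]

# e.g. seeds = forgetting_rank(snapshots[:len(snapshots) // 2], k=10)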
The authors also vary the number of aggregated snapshots used to compute the metrics and find that the finest granularity performs the best. Indeed, treating the network as static by aggregating G_0,…,G_T/2-1 into a single graph yields the lowest influence spread on G_T/2,…,G_T-1, thus demonstrating the importance of accounting for the temporal variation in the network. Another node ranking heuristic comes from Murata and Koga <cit.>. Using the SI model, the authors extend several static measures of importance to the temporal settings. In particular, the dynamic degree discount algorithm extends <cit.>. First, the node with largest dynamic degree D_T(v) is added to the seed set, where D_T(v) =∑_t=2^T |N_v(t-1)∖ N_v(t)|/|N_v(t-1)∪ N_v(t)||N_v(t)| and N_v(t) is the neighbors of node v at time t. Once a node is selected, the value of the dynamic degree for its neighboring nodes is decreased and the process repeats until k nodes are selected. The authors show that the complexity of this method is O(klog n + m + mT/n) but that it only is valid for the SI model. Murata and Koga also propose Dynamic CI as an extension of <cit.> based on optimal percolation and Dynamic RIS as an extension of <cit.> based on RR sketches. In simulations, all methods perform comparably to Osawa and Murata <cit.> but are significantly faster. The authors also show that when λ is large, choosing the optimal seed nodes is less important as many seed sets yield comparable influence spread. Recently, <cit.> propose another node-ranking heuristic. The authors postulate that, for the IC model, nodes with large variability in their neighbors should be chosen to maximize spread. They quantify neighborhood variability with an entropy measure that rewards nodes for changing their neighbors in subsequent graph snapshots and the k nodes with the largest value are chosen for the seed set. The measure is computed on the first T/2 snapshots while the influence is calculated on the second half of the graph's evolution, similar to Michalski et al. <cit.>. The authors note, however, that this metric may not make sense for the LT model which requires the number of activated neighbors to “build up” for a node to become infected. As the method depends on the neighborhood set of each node, its complexity is O(m). To summarize the methods in the previous two subsections, we include Table <ref> which compares them across several metrics. §.§ Analysis We turn our attention to methods that do not propose a novel IM algorithm, but rather analyze the existing algorithms and/or diffusion mechanisms. In order to better model information propagation, Hao et al. <cit.> propose two novel diffusion models where a node's propensity of activation depends on the number of past attempts to activate it. In the time-dependent comprehensive cascade model, an active node still only has a single chance to activate its neighbors, but the probability of being infected can either increase, decrease or be unaffected by the number of previous attempts on that node. The authors also propose a dynamic LT model where the node's activation threshold depends on the number of previous activation attempts. Hao et al. proposes a time series-based approach to empirically determine the effect of past activation attempts on infection probabilities. Gayraud et al. <cit.> study the behavior of the influence spread function under several novel diffusion models while also allowing seed nodes to be activated at different times. Let f:2^V→ℝ be a set function. 
If S⊆ V, then f is monotone if f(S∪{v})-f(S)≥ 0 for all v∈ V∖ S and S⊆ V. It is submodular if f(A∪{v})-f(A)≥ f(B∪{v})-f(B) for A⊆ B and v∈ V∖ B. In other words, the monotone property implies that adding a node never decreases the influence spread and submodularity means that there is diminishing returns for adding more nodes. Additionally, the authors define a seeding strategy to be timing insensitive if all nodes should be activated at time t=0 and timing sensitive otherwise. In the transient evolving IC model (tEIC), infected nodes at time t-1 have one chance to infect their neighbors at time t. The authors prove that this diffusion mechanism is neither monotone nor submodular. In contrast to the tEIC model, the persistent EIC model assumes that a node tries to activate its neighbors the first time that the two nodes have a link. If the activation probabilities are constant in time, Gayraud et al. prove that this model is monotone, submodular and timing insensitive. If the activation probabilities dynamically vary, then the influence function is neither monotone nor submodular and is timing-sensitive. The authors propose similar extensions to the LT model. The transient ELT model only considers weights from active neighbors at the current snapshot whereas the persistent ELT model sums all weights from neighbors activated during any previous time. These models are monotone, not submodular and timing insensitive, and monotone, submodular and timing insensitive, respectively. The key contribution of this paper is that if a model is timing-sensitive, the seed nodes should be activated throughout the diffusion process, as opposed to all at t=0. The authors also show that choosing seed nodes based on aggregating all graph snapshots does not perform well for any model. The submodularity of the influence function is also studied in <cit.>, this time under the SIR model. The authors show that if μ=0 (SI model), the influence function is submodular, but loses this property when μ>0. Effectively, the violations come from nodes in state R “blocking” paths to nodes in state S, as demonstrated by a toy example in Figure <ref> (reproduced with permission of the author). A relaxation of the submodularity property, γ-weakly submodular, is also not achieved in the SIR model. A function f is γ-weakly submodular if for A∩ B=∅ and 0<γ≤ 1, ∑_v∈ Bf(A∪{v}) ≥min{γ f(A∪ B), 1/γf(A∪ B) }. The authors then empirically check the number of violations of the submodular criteria in real networks. They find that if nodes are randomly selected, the criteria is frequently violated. If nodes are selected based on a greedy algorithm, however, the submodularity property is rarely violated. Thus, the influence function is effectively submodular. Now, since the influence function is not submodular, there is no theoretical guarantee that the greedy algorithm adequately approximates the optimal solution. In spite of this, compared with a brute-force algorithm on real-world networks, the greedy algorithm still yields results within 97% of the optimal solution. Lastly, although not pertaining explicitly to IM, we briefly discuss <cit.>. This work studies the relationship between graph topological evolution and diffusion processes by analyzing which part of the diffusion is owed to the diffusion mechanism, and which to graph dynamics. The authors consider two timing mechanisms: extrinsic time based on seconds between interactions, and intrinsic time based on changes or transitions in the network. 
While researchers typically use extrinsic time, the authors argue that intrinsic time may be more sensible in many cases. Using the SI model and intrinsic time, the observed diffusion is governed more by the diffusion mechanism than the evolution of the network. Using extrinsic time, conversely, the topological changes in the network greatly affect the diffusion. Thus, the diffusion process is highly dependent on the timing method. §.§ Discussion In the previous subsections, we presented the leading methods for choosing a single seed set in temporal IM. Each method either estimates the influence spread, or ranks nodes based on a heuristic. Aggarwal et al. <cit.>, Osawa and Murata <cit.> and Erkol et al. <cit.> proposed analogous approaches with the only difference being in the diffusion model. These methods maintain many of the desired properties of the greedy algorithm, but are computationally less intensive. The node ranking metrics of <cit.>, <cit.> and <cit.> are even faster since they avoid the costly influence spread calculation. In theory, these methods are not guaranteed to perform as well as a greedy-based algorithm, but <cit.>, for example, shows they still maintain good performance. A key challenge in implementing these methods on real-world problems is the requirement that the entire topology of the network be known. Save <cit.>, each method assumes that the evolution of the network G_0,…,G_T-1 is known at time t=0 when the seed nodes are selected. Of course, in practice, it is unreasonable for a practitioner to know the future topology of the network, so it is not obvious how to apply these methods in this case. To address this issue, <cit.> propose a link prediction approach for ex ante temporal IM. Using the SI model, the authors use the first p snapshots to train a link prediction algorithm and then predict the network topology for G_p,…, G_T-1. An existing temporal IM algorithm is applied to these predicted networks to choose the seed sets. In many cases, finding seed nodes on a simple aggregation of G_0,…,G_p-1 performs as well as the more complicated link prediction methods. This finding is at odds with Michalski et al. <cit.> and Erkol et al. <cit.> who showed poor performance of IM algorithms on aggregated networks. These papers, however, assumed different diffusion mechanisms, so it is possible that aggregating only works well for the SI model. Another practical consideration for applied researchers is the size of the network. If working with a relatively small social network, then a greedy algorithm is reasonable, whereas a node ranking heuristic is mandatory for large online social networks with millions of nodes. Finally, the diffusion mechanism must be carefully chosen based on the application's domain, as certain methods are only applicable to specific mechanisms. § MULTIPLE SEEDINGS In the previous section, we considered IM algorithms for temporal networks where all seed nodes are activated at time t=0. Now we discuss methods where nodes are seeded at different points throughout the evolution of the network, or where the seed set is updated at each time step. §.§ Sequential seedings Related to the single seeding problem, consider a single seed set S, but instead of activating all nodes at t=0, nodes activate sequentially as the network evolves. This problem involves not only choosing which nodes to include in the seed set, but also when to activate them. Michalski et al. <cit.> focus on the seed activation step of this problem. 
The authors consider a variant of the IC model where a single node is activated and the diffusion occurs until no more activations are possible. Then the next node is activated and the process continues. In this setting, Michalski et al. use a simple seed selection method based on degrees. First, the node with the largest degree is activated. Once the diffusion process finishes, the uninfected node with largest degree is activated and the process continues until k nodes have been seeded. This method is compared with activating the k nodes with largest degree at time t=0. When t is small, activating all nodes at once leads to a larger influence spread, but as t increases, the sequential seeding strategy outperforms the single seeding, as shown in Figure <ref> (reproduced with permission of the author). Tong et al. <cit.> consider another variation of the sequential seeding problem where seed set nodes are unsuccessfully activated with some probability. They propose a greedy algorithm which maximizes the marginal gain in influence spread given the current diffusion. Towards this end, the authors derive a closed-form expression for the expected number of influenced nodes by constructing an auxiliary graph with extra nodes and edges based on possible seed sets and propagation probabilities. The greedy algorithm is shown to yield results within (1-1/e) of the optimal influence spread while the computational burden is mitigated with the Lazy-forward technique <cit.>. Additionally, Tong et al. prove that the strategy outlined in Michalski et al. <cit.> of seeding nodes one at a time and waiting for the diffusion process to finish before activating the next node is the optimal seeding strategy for any temporal graph. §.§ Maintenance seeding In a highly dynamic network, the optimal seed set may change with time. For example, in a long-term marketing campaign on Twitter, the active users and followers change over the duration of the campaign. Thus, it is necessary to maintain or update the seed set S_t such that it provides maximum influence spread on G_t for all t. This problem is known as maintenance seeding. Maintenance seeding is markedly different from static seeding as now k nodes are activated at each time step t in order to maximize the diffusion on G_t. Thus, this process is analogous to a sequence of static IM problems. Chen et al. <cit.> first study this problem under the name “influential node tracking.” The authors assume that the topology of the network is known at the next time step G_t+1 and use the IC model for diffusion. Using the seed set from the current snapshot S_t, Chen et al. employ an interchange heuristic <cit.> to efficiently update the seed set and prove that the solution is guaranteed to be within 1/2 of the optimal spread. Effectively, this method swaps one node in S_t with one node in V∖ S_t to maximize the marginal gain in influence spread. Since evaluating the marginal gain for every node in V∖ S_t is expensive, the authors only consider nodes with the largest marginal gain upper bound. If the upper bound for node u∈ V∖ S is smaller than the marginal gain of another node v, then evaluating the influence of node u is unnecessary as its inclusion cannot improve the total influence spread. The proposed algorithm has O(kn) complexity. Ohsaka et al. <cit.> consider a similar problem for large online networks where nodes and edges are added or removed at each time step. 
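The interchange step at the heart of this approach can be sketched as follows; we omit the marginal-gain upper bounds that make the original algorithm efficient, and estimate stands for any spread estimator (for instance the Monte Carlo routine sketched earlier), so this is an illustration of the idea rather than the authors' implementation.

def interchange_update(snapshot, prev_seeds, all_nodes, estimate):
    """Maintain an influential seed set on a new snapshot by repeatedly
    applying the best single swap between a current seed and an outside
    node, stopping when no swap improves the estimated spread.

    estimate(snapshot, seeds) returns an estimate of sigma(seeds) on the
    given snapshot; all_nodes is the set of candidate nodes.
    """
    seeds = set(prev_seeds)
    improved = True
    while improved:
        improved = False
        best_swap, best_value = None, estimate(snapshot, seeds)
        for u in seeds:
            for v in all_nodes - seeds:
                value = estimate(snapshot, (seeds - {u}) | {v})
                if value > best_value:
                    best_swap, best_value = (u, v), value
        if best_swap is not None:
            u, v = best_swap
            seeds = (seeds - {u}) | {v}
            improved = True
    return seeds

Sketch-based approaches such as that of Ohsaka et al. avoid even these repeated spread evaluations.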
Using the IC model, the authors propose a sketching method akin to RR sets and an efficient data structure to build and store these sketches. A greedy algorithm is then implemented to choose the seed sets. Specifically, the node which is present in the most sketches is chosen as a seed node. Then all sketches which contain that node are removed from consideration, and the node which occurs in the most remaining sketches is chosen for the seed set. This process continues until k nodes are chosen. In addition to the novel data structure, this work proposes heuristics that lead to efficient updates of the sketches at each evolution of the graph, instead of recomputing them from scratch. These heuristics come with theoretical guarantees and lead to algorithmic speed-ups. There are several other methods that address this problem.  <cit.> recast it as a bandit problem but tackle it in a similar manner to Ohsaka et al. by using RR sketches.  <cit.> find the optimal seed nodes at t=0 and incrementally update them based on investigating parts of the graph which changed significantly between snapshots. <cit.> selects seed nodes based on a sliding window scheme and <cit.> uses a node's number of triangles to estimate its influence. <cit.> consider a special case by accounting for user attributes in an online social network, including preferred topics of engagement. The authors also account for certain time periods where users may be inactive and allow for a different diffusion model based on the topic. Up to this point, each method assumes knowledge of the future topology of the network. <cit.> relax this assumption by predicting the graph structure one time step in the future using a conditionally temporal restricted Boltzmann machine <cit.> and then finding the seed nodes on the predicted graph. The authors use an interchange <cit.> heuristic to update the seed set and ideas from <cit.> to improve efficiency. Rather than propose a new maintenance seeding algorithm, Peng <cit.> studies the amortized running time, i.e., the amount of time it takes to update the seed nodes at each time step. Even though the current algorithms efficiently update the seed set in O(n) time for each t, the author argues that this is still too slow for large networks. Peng then considers two different graph evolution paradigms, both under either the IC or LT model. First is an incremental model where a network may only add new nodes and edges. Under this model, Peng shows that an (1-1/e-ϵ) approximation of the optimal solution is possible with probability 1-δ for amortized running time O(kϵ^-3log^3(n/δ)), much faster than O(n). Under a fully dynamic model, however, where nodes and edges can be added and deleted, the author proves that a 2^(-log n)^1-o(1) approximation is impossible without n^1-o(1) amortized run time. Thus, there is no possibility of improving the O(n) run time. §.§ Node probing While previous methods assumed complete knowledge of the future network topology, the node probing problem assumes that the future graph snapshots are unknown but can be partially observed by probing the neighborhoods of certain nodes. Here, probing a node means observing its edges. Assuming G_0 to be known, the goal is to carefully select which nodes to probe in order to have the most information on the topology of the network in order to effectively implement an IM algorithm. This problem may arise in large online social networks where it is infeasible to observe the activities of all users at every time step. 
Another relevant application is modeling the social connections within a hard-to-reach population, e.g., homeless youth, as there is no straightforward way to observe all the people (nodes) in this network, yet alone the friendships (edges). This problem was originally formulated by Zhuang et al. <cit.>. For each t, the researcher probes b nodes and observes changes in their neighborhoods. Once the nodes are probed, an IM algorithm is implemented on the (incomplete) visible network. Thus, the goal is to find the ideal probing strategy. The authors propose probing nodes that yield the maximum possible change to the solution of the IM problem. Since the authors use the degree discount algorithm <cit.>, this reduces to finding the nodes with greatest change in their degree. Specifically, let β(v) be the maximum difference in the influence spread of optimal seeds chosen before and after probing node v. Moreover, let S be the optimal seed nodes at time t-1 and let S_0 be the k nodes with the largest in-degree on the most up-to-date graph snapshot. Let t-c_v be the last time stamp at which node v was probed. For ϵ>0, if z_v=√(-2c_vlogϵ), β(v) is derived as: β(v) =max{0, max_u∉ Sd̂_in(u)-d̂_in(v)+z_v}, v∈ S_0 max{0,d̂_in(v) - min_u∈ Sd̂_in(u)+z_v}, v∉S_0 where d̂_in(v) is the in-degree of node v based on the most recently probed network. Then node v^*=max_v∈ Vβ(v) is probed and the network topology is updated. Once b nodes have been probed, the degree discount algorithm is applied to determine the optimal seed nodes for influence spread. Han et al. <cit.> study the same problem but focus on communities with high variation as opposed to nodes. The authors postulate that the total in-degree for a community should be relatively stable with time, so if this changes greatly, there must have been a significant change in this community and it is worth probing. The authors use the community detection algorithm of <cit.>, and once the community with high variability has been identified, they employ a probing algorithm similar to that of Zhuang et al. <cit.>. §.§ Discussion We close this section by highlighting important considerations for practical implementation of these methods. In the sequential seeding setting, it is important for researchers to consider how long they can allow the diffusion to take place since static seeding is preferable for small T and sequential for large T. Michalski et al. <cit.> also emphasize that the sequential strategy is better suited for independently activated models, e.g., IC and SI, rather than threshold-based models, e.g., LT, so the diffusion model is another important consideration. It would also be interesting to compare static and sequential seeding for more complicated IM algorithms. It is well-known that seeding the top k degree nodes is a relatively poor IM algorithm, so it is unclear whether sequential seeding would perform so much better when combined with different IM algorithms. For maintenance seeding, we observe that the seeding budget is effectively kT rather than k, since k nodes are activated at T different time steps. If T is large, then it may be prohibitive to keep activating k nodes each round. Additionally, this setting implicitly assumes that nodes can be reinfected at successive snapshots, i.e., S_t∩ S_t+1≠∅. This may be reasonable in epidemiological settings, for example, where a person can be reinfected by a disease. 
For marketing campaigns, on the other hand, it is unlikely that a user targeted with an ad in multiple time steps can be expected to have significant diffusion in each case. Thus, the number of times that a user has been infected and this effect on the diffusion mechanism should be considered carefully. Moreover, save the IC model, if a node is infected at time t, then it could continue to attempt to infect its neighbors at t+1,t+2,…. The frameworks presented above, however, assume that unless nodes are in the new seed set S_t+1, they are unable to exert influence. Next, save <cit.>, each maintenance seeding method assumes that the future network topology is known, which is generally untrue in practice. In particular, sequential seeding strategies when the graph snapshots are unknown is an important and practically relevant open problem. Finally, Yang et al. <cit.> argue that identifying influential nodes is a separate task from influence maximization. For example, if a new user joins Twitter, they may want to follow the most influential users. Identifying these users is different from trying to maximize the spread of a product or idea on Twitter's network. The node probing problem is a promising step toward practically relevant IM algorithms. Indeed, assuming that the network structure is unknown, except through probing, is much more realistic than the methods which assume complete topological information. These methods, however, treat the problem as a sequence of static IM tasks since the seed nodes are computed fresh at each time step. An interesting advance would be to leverage the previous seed set in computing the new seeds. § REAL WORLD IMPLEMENTATIONS A key focus of this review is understanding if existing methods are prepared to handle IM tasks “in the field.” To date, the literature on IM in real-world settings is scant. In this section, we highlight the existing studies and discuss some of the associated challenges. While these works assume that the network is static, the majority employ a sequential seeding strategy which is why we include it in our discussion of temporal IM. To our knowledge, there are no existing papers explicitly implementing IM algorithms on dynamic networks. The most notable examples of applied IM comes from a series of papers by Yadav and Wilder <cit.>. In these works, the goal is to maximize HIV awareness among homeless youth in large urban areas. This is a classic IM setting as homeless shelters can only train a small number of youth on HIV prevention, but hope that participants pass this information along to their friends to maximize awareness. The general problem setup is as follows. First, the social network of homeless youth is partially constructed. Then the homeless shelter chooses k youth to participate in an intervention on HIV prevention. During the training, the youth reveal all of their one-hop friendships. The information is then given time to diffuse on the network (but this spread is unknown) before inviting k more youth for training. This process continues for T training rounds. There are several key challenges to deploying IM algorithms in this setting. First, the complete social network of the homeless youth population is unknown, both in terms of nodes (youth) and links (friendships). Moreover, new information on the network structure is collected during the experiment as youth are trained and their friendship circle is elucidated. 
Second, youth may refuse and/or be unable to attend the training, meaning that seed nodes have a certain probability of remaining inactive. Lastly, quantifying the information spread on the network is highly non-trivial. Thus, this problem combines node probing, as the network structure is partially unknown before selecting a node to learn their social circle, and sequential seeding, where the nodes are activated over time. It differs from the standard node probing problem, however, in that nodes are chosen to optimize influence spread, rather than maximize topological information about the network; it differs from sequential seeding in that the influence spread is unknown when selecting the next seed nodes. The first attempt to address these challenges comes from <cit.>. Assuming the SI model[In the paper, Yadav et al. state that they use the IC model, but in terms of the notation of this paper, it falls under the SI classification] for diffusion, the authors construct the social network using Facebook friendships while inferring missing links using link prediction techniques <cit.>. They prove that the task of choosing k seed nodes at each of the T time steps is NP-hard and that it is impossible to achieve a n^-1+ϵ approximation of the optimal solution with an uncertain network. The problem is then recast as a Partially Observable Markov Decision Process (POMDP). By simulating the diffusion process, the nodes with the largest expected reward (influence spread) are selected for the seed set. In order for the method to handle real-world network sizes, the authors propose a divide-and-conquer approach. Their proposed method is one hundred times faster than existing methods while also yielding greater influence spread. In <cit.>, the authors generalize the model by allowing for greater uncertainty in the influence and edge probabilities. In <cit.>, the authors focus on several practical considerations for this problem. First, the algorithm accounts for a non-zero probability that a seed set node remains inactive, i.e., the youth does not attend the training. The authors also address the network construction step by proposing a network sampling approach based on the friendship paradox <cit.>. This paradox says that, on average, a random node's neighbor has more friends than the original node. Thus, with a sampling budget of M nodes, they first randomly sample M/2 nodes and then randomly sample one neighbor per node. This approach increases the likelihood that central (e.g., influential) nodes are sampled. Figure <ref> shows the homeless youth social network constructed using different methods (reproduced with permission of the author). These four networks highlight the challenges of constructing the network for a hard-to-reach population as the topology varies greatly depending on the collection method (self-report, field observations and homeless shelter staff observations). Next, the authors assume that the influence propagation probability is unknown but modeled to maximize the worst-case ratio between the true spread and the estimated spread. Finally, the authors propose a greedy algorithm to select the optimal seed nodes and prove that it is guaranteed to output a solution within a factor of (e-1)/(2e-1) of the optimal. In a real-world pilot study, by sampling only 15% of the nodes, the proposed method achieved comparable spread compared to that if the entire network was known. These methods are applied to the real-world task in <cit.>. 
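The friendship-paradox sampling step described above can be sketched in a few lines; here the (unknown) social network is represented by a graph object whose neighbourhoods are revealed only for the nodes that are queried, and the function and variable names are our own illustrative choices.

import random

def friendship_paradox_sample(graph, budget):
    """Construct a partial view of the network with a budget of M node
    queries: sample M/2 nodes uniformly at random, then one random
    neighbour of each, so that well-connected nodes are more likely to
    be included in the visible subgraph.
    """
    primaries = random.sample(list(graph.nodes), budget // 2)
    visible = set(primaries)
    for u in primaries:
        neighbours = list(graph.neighbors(u))
        if neighbours:
            visible.add(random.choice(neighbours))
    return graph.subgraph(visible).copy()

The partially observed network returned by such a procedure is then what the seed-selection algorithm operates on in the field.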
Some of the key questions considered are: Do the activated nodes actually pass their information along to others? Do the activated nodes give meaningful information about the social network? Can these algorithms do a better job of selecting seed nodes than an expert (social worker) can? To answer these questions, the authors implement the methods from <cit.> and <cit.>. They also consider a baseline IM algorithm based on largest degrees. For each method, the authors recruit study participants, construct the network, activate nodes (via training) and conduct follow-ups to evaluate the final influence spread. The proposed methods in <cit.> and <cit.> yielded much larger influence spreads than the degree-based method while also leading to a change in participants' behavior, i.e., increase in participants testing for HIV. Lastly, we discuss an application of IM to an online setting. <cit.> consider a “closed” social network where posts are only shared with certain people rather than all of the users' connections. As a slight variation to the standard IM problem, the authors find the friends with which the user should share their information to maximize spread. In other words, the goal is to maximize the influence spread where users can only share the information with a limited subset of edges (neighbors). Indeed, this may be a more realistic diffusion mechanism for social networks as it is unlikely that someone would give equal effort to share information with each of their friends; rather, he/she would likely target a few specific people. The authors apply this to an online multiplayer game where each user is recommended friends to interact with, e.g., send gifts, game invitations, etc. The proposed method is compared with randomly selecting friends and yields a 5% increase in click-through rate. § CONSIDERATIONS AND FUTURE DIRECTIONS We concluded by sharing thoughts on the challenges associated with temporal IM as well as some of the important areas for future research. §.§.§ Real-world implementations: In Section <ref>, we saw the litany of challenges facing a researcher trying to implement IM algorithms on real-world problems. We list a handful of questions that he/she must consider in applying these methods: What is the information diffusion mechanism? Can nodes be sequentially updated, or are they all activated at the start? Will seed nodes be activated with certainty? How long does the diffusion process continue? On what time scale is the network evolving? How long does it take to influence a node? Are the network dynamics changing rapidly? Does the future topology of the network need to be predicted? Is the network updated in an online setting or with standard snapshots? Is the true influence spread known? We look forward to many more IM implementations in real-world applications. §.§.§ Single seeding methods: In Section <ref>, we discussed several methods for the single seeding temporal IM problem. There were only five papers, however, and Aggarawal et al. <cit.>, Osawa and Murata <cit.> and Erkol et al. <cit.> all proposed similar solutions. Thus, there is still much room for research on this problem. Recently, graph neural networks (GNN) were applied to the static IM problem <cit.> and may also find success in the temporal setting. §.§.§ Ex ante vs. ex post: Most methods proposed in Section <ref> assume that the entire topology of the dynamic network is known, even though this is unrealistic in many situations. 
<cit.> yielded promising results for ex ante IM, but more work is certainly needed. §.§.§ Impact of time: In dynamic networks with diffusion, there is a highly intricate relationship between the structural evolution and influence diffusion. This must be carefully accounted for in the IM problem, similar to Gayraud et al. <cit.> and <cit.>. The impact of time scales, aggregation, diffusion times, and diffusion mechanisms deserves further study. §.§.§ Online setting: Related to the previous point is IM in the online setting, where nodes and edges come and go continuously. In real-world applications, it may not be obvious how or when to aggregate the network, so it becomes more natural to consider online updates. Most methods, however, require that the network is aggregated into graph snapshots. This aggregation inherently loses information, such as when the link appeared/disappeared and the persistence of the edge. More methods like Ohsaka et al. <cit.> can be developed to address this challenge. §.§.§ Model mis-specification: A pertinent challenge for applied IM is selecting the diffusion model. For diseases, the SIR model is sensible since infected persons can infect other nodes for as long as they are infected. On the other hand, for HIV awareness among homeless youth, it is unlikely that someone would attempt to influence all of his/her friends indefinitely. Thus, choosing an appropriate diffusion model is crucial. But what are the effects on influence spread if the model is misspecified? In <cit.>, the authors study this for static IM and find that standard diffusion models grossly underestimate the influence spread of more realistic models. This is likely only compounded in temporal networks where the topology also varies. §.§.§ Uncertainty estimates of seed nodes: The majority of temporal IM algorithms output the optimal seed nodes to achieve maximal influence spread. But are there other seed sets that would yield a comparable spread? In other words, is the objective function “flat” in the sense that many seed sets yield comparable spread? An interesting avenue of research would be deriving a measure of uncertainty for optimal seed sets. §.§.§ Influence minimization: A related problem to IM is that of influence minimization, in which seed nodes are “vaccinated” to stop the spread of influence on the network. This problem arises in rumor diffusion and epidemiological settings <cit.> and may lead to interesting philosophical questions. For example, in the vaccine campaign against COVID-19, vaccines were first administered to the most vulnerable populations, e.g., the elderly. Thus, seed nodes were chosen based on vulnerability. In an influence minimization scheme, however, the most active and/or social people would likely receive the vaccine first to minimize the spread between groups. These opposing goals lead to challenging decisions both ethically and politically. § ACKNOWLEDGMENTS This work was conducted while EY was on a JSPS Predoctoral Fellowship for Research in Japan (Short-term Program). PH was supported by JSPS KAKENHI Grant Number JP 21H04595. Eric Yanchenko received a BS in Mathematics and Physics from The Ohio State University, Columbus, Ohio, USA. He is currently a PhD candidate in Statistics at North Carolina State University, Raleigh, North Carolina, USA and a Japan Society for Promotion of Science (JSPS) Short-term Fellow at Tokyo Institute of Technology, Tokyo, Japan.
His research interests include hypothesis testing for meso-scale structures on graphs, eliciting prior distributions for scale parameters in hierarchical Bayesian models, and influence maximization. Tsuyoshi Murata received his bachelor's degree from the Department of Information Science at The University of Tokyo, Tokyo, Japan and his PhD from Tokyo Institute of Technology, Tokyo, Japan, where he is currently a full professor in the School of Computing. His research interests are in artificial intelligence, network science, machine learning and social network analysis. Petter Holme received a PhD in Theoretical Physics from Umea University, Umea, Sweden. He has served as a professor at Sungkyunkwan University, Seoul, Korea and the Institute of Innovative Research, Tokyo Institute of Technology, Tokyo, Japan. He is currently a full professor of Computer Science at Aalto University, Espoo, Finland and maintains an affiliation with the Center for Computational Social Science, Kobe University, Kobe, Japan. His research focuses on large-scale structures in society, technology, and biology.
http://arxiv.org/abs/2307.01688v1
20230704124326
Further evidence of the link between activity and metallicity using the flaring properties of stars in the Kepler field
[ "Victor See", "Julia Roquette", "Louis Amard", "Sean Matt" ]
astro-ph.SR
[ "astro-ph.SR" ]
The magnetic activity level of low-mass stars is known to vary as a function of the physical properties of the star. Many studies have shown that the stellar mass and rotation are both important parameters that determine magnetic activity levels. In contrast, the impact of a star's chemical composition on magnetic activity has received comparatively little attention. Data sets for traditional activity proxies, e.g. X-ray emission or calcium emission, are not large enough to search for metallicity trends in a statistically meaningful way. Recently, studies have used the photometric variability amplitude as a proxy for magnetic activity to investigate the role of metallicity because it can be relatively easily measured for large samples of stars. These studies find that magnetic activity and metallicity are positively correlated. In this work, we investigate the link between activity and metallicity further by studying the flaring properties of stars in the Kepler field. Similar to the photometric variability, we find that flaring activity is stronger in more metal-rich stars for a fixed mass and rotation period. This result adds to a growing body of evidence that magnetic field generation is correlated with metallicity. stars: flare – stars: activity – stars: low-mass § INTRODUCTION Understanding the processes that govern magnetic field generation in low-mass stars (M_⋆≲ 1.3M_⊙) is an ongoing task. One way to probe the magnetic field generation process is to study how the magnetic properties of low-mass stars scale with their physical properties. The most relevant parameter appears to be the Rossby number, which is defined here as the rotation period of the star divided by its convective turnover time. This parameter encapsulates the interplay between rotation and convection that is thought to power the dynamo process in low-mass stars <cit.>. Numerous studies have shown that magnetism and activity are generally stronger in stars with smaller Rossby numbers up to a saturation value <cit.>. Although the Rossby number is the most relevant parameter when it comes to predicting the activity level of low-mass stars, it is also hard to estimate. The difficulty arises because the convective turnover time is not a directly observable property and is hard to constrain <cit.>. Therefore, it is also useful to study how magnetic activity scales with more directly measurable stellar properties such as rotation or mass. In general, more rapidly rotating stars are more magnetically active than slowly rotating stars <cit.>. This is consistent with the fact that stars with smaller Rossby numbers are generally more magnetically active since the Rossby number is proportional to the rotation period. Additionally, less massive stars are generally more magnetically active than more massive stars. This is also consistent with low Rossby number stars having high activity because less massive stars tend to have longer convective turnover times and, therefore, smaller Rossby numbers. While the impact of stellar mass and rotation on the activity levels of low-mass stars is well known, the impact of metallicity has received comparatively little attention until recently. Stellar structure models show that more metal-rich stars have longer convective turnover times and, therefore, smaller Rossby numbers <cit.>.
Therefore, one should expect that more metal-rich stars should have more efficient dynamos and be more magnetically active. However, testing this hypothesis is difficult since relatively large sample sizes are needed to properly disentangle the impact of metallicity from mass and rotation on activity levels. In recent years, a number of authors have investigated the link between activity and metallicity using the photometric variability amplitude as a proxy for magnetic activity <cit.>. The advantage of using the photometric variability amplitude over more traditional proxies such as X-ray emission or calcium emission is that it can be easily estimated for large samples of stars thanks to missions like Kepler <cit.>. These investigations find that more metal-rich stars generally have larger variability amplitudes and are, therefore, more magnetically active which is in line with the theoretical expectation. Most recently, in<cit.>, we studied a sample of over 3000 low-mass stars in the Kepler field covering a wide range of masses and rotation periods. Similar to previous works, we found that, at fixed mass and rotation, more metal-rich stars generally have larger photometric variability amplitudes. Although these studies have advanced our understanding of the role that metallicity plays in magnetic field generation, they suffer from the fact the photometric variability amplitude is a relatively indirect tracer of magnetic activity. For example, our analysis in <cit.> is slightly hampered by the presence of a dip seen in the photometric variability versus Rossby number diagram that is not seen in the activity-rotation relations for other activity proxies <cit.>. Additional factors, e.g. stellar inclination <cit.>, can also impact the variability amplitude and could add significant scatter to the trends being studied. Lastly, metallicity can affect the contrast of magnetic features <cit.> and may therefore influence the photometric variability of a star in a way that is unrelated to magnetic field generation. For these reasons, it would be beneficial to investigate how metallicity affects magnetic activity using other activity proxies. In this study, we build on our work from <cit.> to investigate how the flaring properties of stars in the Kepler field depend on stellar metallicity. Flare events involve a rapid conversion of magnetic energy to electromagnetic radiation in the atmospheres of low-mass stars <cit.>. These events show up in photometric light curves as a rapid rise phase followed by an exponential decay and have been detected on wide range of stars from G dwarfs <cit.> to K dwarfs <cit.> and M dwarfs <cit.>. Due to the magnetic origin of flares, studying flaring properties, such as flare rates, flare energies or flare frequency distributions, allows us to learn about the stellar magnetic field generation process. Previous works have already shown that flaring properties vary with stellar properties like rotation, effective temperature or Rossby number in a similar way to other activity proxies <cit.>. Additionally, using flaring properties as an activity proxy is complementary to using the photometric variability as an activity proxy since flares are not affected by some of the previously mentioned issues that the photometric variability suffers from. The rest of this paper is structured as follows. In section <ref>, we present the sample of Kepler field stars that we use in this study. 
In section <ref>, we show how the flaring properties of this sample vary as a function of stellar properties, focussing on the Rossby number and metallicity. Finally, we present our conclusions and discuss the implications of these results in section <ref>. § STELLAR SAMPLE The sample of stars we use for this study is an updated version of the samples used by <cit.> and <cit.>. Similar to those studies, the sample in this work is the result of cross-matching the samples from a number of different surveys and studies and focusses on stars in the Kepler field (Data Release retrieved through the NASA Exoplanet Archive[<https://exoplanetarchive.ipac.caltech.edu/>]). The rotation periods, P_ rot, are taken from either <cit.> or the series of papers by <cit.> and <cit.>. When periods exist for a star in multiple works, we preferentially use the one from <cit.> although we note that our results are not significantly different if we were to adopt the periods from <cit.> and <cit.> instead in these cases. Spectroscopically derived stellar parameters (metallicities, [Fe/H], effective temperatures, T_ eff, and surface gravities, logg) are taken from the APOGEE DR17 <cit.> and LAMOST DR7 <cit.> surveys. For LAMOST DR7 the information could be from either the low resolution spectra (LRS) or medium resolution spectra (MRS) surveys. Where objects exist in multiple of these surveys, we adopted the spectral information from the survey with the highest resolution, i.e. APOGEE (R∼22,500), followed by LAMOST MRS (R∼7,500), and finally LAMOST LRS (R∼1,800). For this work, we only include stars with a reported [Fe/H] uncertainty smaller than 0.1 dex. We also only include stars with effective temperatures, T_ eff<6500 K, since the convective regions of hotter stars become vanishingly thin. As such, the magnetic properties of these stars do not appear to follow the same trends as cooler stars <cit.>. Photometric and astrometric data from Gaia-DR3 were retrieved from the Gaia-Archive@ESA[<https://gea.esac.esa.int/archive/>] and cross-matched to the Kepler database using <cit.>. Following the recommendations in the Gaia DR3 release papers we performed the following corrections to the data. (i) We used the new C* metrics defined by <cit.> to correct for inconsistency between different passbands. (ii) We limited the effects of brightness excess towards the fainter end of the G_BP passband by limiting our dataset to stars brighter than G_BP=20.9 mag <cit.> (iii) We applied saturation corrections for the brightest stars <cit.>. To select the highest quality data, we only used data with >10 and with ≥10. Finally we also limited the dataset to sources with photometric uncertainty better than 1%. Similar to <cit.>, stellar masses, M_⋆, and convective turnover times, τ, for our sample are estimated using a grid of stellar structure models from <cit.> and an adapted maximum-likelihood interpolation tool <cit.>. For each star, prior spectroscopic information about its metallicity and effective temperature along with absolute magnitudes from Gaia DR3 photometry are incorporated into the mass and turnover time estimates. The turnover time is estimated at half a pressure-scale height above the base of the convective zone using a mixing length theory prescription (see <cit.> for a comparison of turnover timescales at other depths). As well as the physical properties of the stars in our sample, we also require information about their magnetic activity. 
In this work we focus primarily on the normalised flaring luminosity, defined as the flaring luminosity divided by the bolometric luminosity, R_ flare=L_ flare/L_ bol, as calculated by <cit.> for Kepler field stars. The normalised flaring luminosity is calculated by summing up the energies of all the flares present in a photometric light curve and normalising by the bolometric luminosity energy output over the duration of the light curve (see <cit.> and <cit.> for further details). <cit.> also investigated other flare properties, such as the flare frequency distribution. However, we choose to focus on the normalised flaring luminosity in this work as it is an indication of the fraction of a star's energy output that is released through flares and therefore a useful probe of the underlying dynamo. In order to calculate R_ flare, the bolometric luminosity is needed. In their work, <cit.> used the KIC effective temperature to determine the bolometric luminosity, L_ bol=4π r_⋆^2 σ T_ eff^4. In our work, we recalculate R_ flare using the spectroscopically determined effective temperatures from the APOGEE and LAMOST surveys as these are more accurate. Additionally, we also use the photometric variability amplitude, R_ per, as calculated by <cit.> in this work. As we wish to focus on single main sequence stars, we removed possible near-equal-mass binaries from our sample, which typically appear as a sequence of stars 2.5log(2) = 0.753 mag above the main sequence in a colour-magnitude diagram. We followed a metallicity-dependent approach, similar to the method used in <cit.>, but with an improvement that is described in Appendix <ref>, which allows us to account for the typical extinction as a function of distance in the Kepler Field. After removing the possible equal-mass binaries, we also removed sources in common with the Kepler Eclipsing Binary Catalogue <cit.>[<http://keplerebs.villanova.edu/>]. Finally, we kept only sources with Gaia DR3 renormalized unit weight error <1.4 <cit.>, which selects well-behaved astrometric solutions of single stars. After these cuts, our sample consists of 240 stars. The range of masses, periods and metallicities present in our sample can be seen in fig. <ref> and the numerical values of all the parameters of our stellar sample can be found in table <ref>. § RESULTS Figure <ref> shows the normalised flare luminosity, R_ flare, versus Rossby number for our sample of stars. This quantity is analogous to the X-ray luminosity to bolometric luminosity ratio, R_ X, that is commonly used in X-ray activity studies <cit.>. Although there is a reasonable amount of scatter in fig. <ref>, an inverse power law relationship between R_ flare and Rossby number is evident. This behaviour is similar to that of other activity indicators in the unsaturated regime. <cit.> also found a similar relationship between R_ flare and Rossby number for K and M type stars in the unsaturated regime. Interestingly, these authors find a much larger scatter in the R_ flare vs Rossby number diagram for F and G type stars and it is not clear if these stars follow the same trends. We note that these authors do not explicitly account for any metallicity dependence when calculating the convective turnover times used in their work. In <cit.>, we studied how the photometric variability depends on Rossby number (see fig. 3 from that work). One surprising trend we observed is that more metal-rich stars seem to be more active even at a fixed Rossby number. 
Such a trend is not expected if the influence of metallicity on magnetic activity is solely through its influence on the stellar structure and, hence, the stellar dynamo. In <cit.>, we suggested that this trend could be due to additional impacts that metallicity has on photometric variability that are unrelated to the dynamo, e.g. the impact of metallicity on the contrast of magnetic features at the stellar surface <cit.>. There does not seem to be a similar residual metallicity dependence in fig. <ref>. This suggests that the residual metallicity dependence seen in fig. 3 of <cit.> could be attributed to an effect that is unique to the photometric variability amplitude rather than something that is common to all activity proxies. However, we caution that our sample in this work is much smaller than our sample from <cit.> and that a future study involving a larger sample could reveal a similar residual metallicity dependence in the normalised flare luminosity diagram as the one seen for the photometric variability. In order to study the relationship between R_ flare and metallicity, we would, ideally, perform a similar analysis to the one we conducted in <cit.>. In that work, we divided our sample into bins of approximately constant mass and constant rotation period. This allowed us to study how magnetic activity depends on metallicity independently of the effects of mass and rotation. However, this method is not feasible for our current study due to the smaller sample size and we must take a slightly different approach. Instead, we perform an orthogonal distance multivariate regression to our full sample of the form log R_ flare = a log M_⋆ + b log P_ rot + c [Fe/H] + d, where M_⋆ is the stellar mass, P_ rot is the rotation period, [Fe/H] is the metallicity and a, b, c & d are the fit parameters. The values of these fit parameters from our regression are shown in table <ref>. This is similar to the analysis conducted by <cit.> on variability data in their supplementary materials. However, we have parameterised our multivariate fit in terms of stellar mass rather than effective temperature since mass and metallicity are independent variables whereas effective temperature and metallicity are not. Figure <ref> visually shows the results of our multivariate regression. Each panel shows how the normalised flare luminosity, R_ flare, of our sample varies as a function of either mass, rotation period or metallicity. The remaining two parameters that are not under consideration in each panel are subtracted from the normalised flare luminosity on the y-axis. The values of the fit parameters for the mass term, a, and rotation term, b, in equation (<ref>) are both negative, indicating that rapidly rotating and low-mass stars are more flare active than slowly rotating and high-mass stars. This can be seen in the left two panels of fig. <ref> and is also consistent with the behaviour of many other activity indicators as discussed in the introduction. The value of the fit parameter for the metallicity term, c, in equation (<ref>) is positive, indicating that metal-rich stars are more flare active than metal-poor stars. This can be seen in fig. <ref>c and is consistent with our results in <cit.> that more metal-rich stars are generally more magnetically active. Finally, as a direct comparison of our work from <cit.> to this work, we plot the normalised flare activity, R_ flare, versus the photometric variability amplitude, R_ per, as measured by <cit.> in fig. <ref>. 
We see that the two activity indicators are correlated, which is consistent with the result of <cit.>, although there is a large amount of scatter. This scatter is likely caused by the fact that both the flaring activity and photometric variability are relatively indirect proxies of magnetic activity. There are also the non-activity related factors mentioned in the introduction that can contribute towards the variability of a star that likely also add extra scatter to this plot (see also the discussion in section 3.3 of <cit.> regarding the scatter in this plot). § CONCLUSIONS In this work, we study the flaring properties of a sample of 240 main sequence stars in the Kepler field. In particular, we investigated the dependence of the normalised flaring luminosity on stellar metallicity. For each star, we compile literature values for the rotation period <cit.>, metallicity <cit.>, Gaia DR3 astrometry and photometry, and normalised flaring luminosity <cit.>. Additionally, we calculate stellar masses and convective turnover times using the structure models of <cit.>. Our sample predominantly lies in the unsaturated regime of the activity-rotation relation. Similar to previous works, e.g. <cit.>, the normalised flaring luminosity of our sample is inversely correlated with Rossby number. We also demonstrate that metal-rich stars generally have larger normalised flaring luminosities than metal-poor stars. The result that more metal-rich stars have stronger flaring activity is consistent with the theoretical expectation. More metal-rich stars are expected to have longer convective turnover times resulting in smaller Rossby numbers and, therefore, should have stronger magnetic activity. Indeed, our study adds to the growing body of evidence that most, if not all, forms of magnetic activity scale with metallicity. For instance, previous studies have shown that another activity proxy, the photometric variability, is also correlated with metallicity <cit.>. Additionally, <cit.> showed that more metal-rich stars in the Kepler field are, on average, spinning more slowly than metal-poor stars. They interpreted this as evidence that more metal-rich stars have stronger magnetised winds than metal-poor stars and, therefore, lose angular momentum more rapidly resulting in slower rotation at late ages <cit.>. § ACKNOWLEDGEMENTS We thank the anonymous referee for their time refereeing our manuscript. We also thank Oliver Hall for useful discussions. V.S. acknowledges support from the European Space Agency (ESA) as an ESA Research Fellow. J.R. acknowledges funding from the European Union’s Horizon 2020 research and innovation program (grant agreement No.101004141, NEMESIS). L.A. acknowledges support from the Centre National d'Études Spatiales (CNES) through the PLATO/AIM grant. S.M. acknowledges funding from the European Research Council (ERC), under the European Union's Horizon 2020 research and innovation program (grant agreement No. 682393 AWESoMeStars). Software: <cit.>, <cit.>, <cit.>, <cit.> § DATA AVAILABILITY The data used throughout this work, i.e. the data contained in table <ref>, will be made available via VizieR upon publication. § SELECTING SINGLE MAIN SEQUENCE STARS IN THE KEPLER FIELD To reduce the number of equal-mass binaries in our dataset, we use an improved version of the approach we used in <cit.>. First, we binned the data in terms of metallicity using the same [Fe/H] steps for which <cit.> isochrones are available. 
Next, we select single main sequence stars as those between a 5 Gyr isochrone for the bin's upper-metallicity value shifted by Δ M_G=-(0.376+σ_G) + A_G(d) and Δ (BP-RP)=+σ_BP-RP+A_BP-RP(d), and a 1 Gyr isochrone for the bin's lower-metallicity value shifted by Δ M_G=+σ_M_G and Δ (BP-RP)=-σ_BP-RP. σ_G and σ_BP-RP are the typical uncertainties in the Gaia DR3 photometry at G=20 mag. A_G(d) and A_BP-RP(d) are the average extinction in the Kepler Field at a given distance. To estimate the Kepler Field's average extinction, we applied the extinction map <cit.>, which is based on Pan-STARRS 1 and 2MASS data with the <cit.> extinction law. We use the online tool[<http://argonaut.skymaps.info/>] to query 100 uniformly distributed locations within the Kepler field and retrieve data for extinction, E(g-r), as a function of distance for each position. Figure <ref> shows extinction as a function of distance as faint black lines for each of the positions queried, where we transformed the original data to E(B-V) following the transformations from <cit.>. We then averaged these extinction curves as a function of distance, which is shown as a red-line in Figure <ref>. Next, we used the average extinction curve to estimate the step in the distance required for an increase of 0.02 mag in the average extinction, and we used these to bin our dataset in terms of distance. Finally, we used the average extinction at the upper distance for each distance and metallicity bin to derive the appropriate A_G(d) and A_BP-RP(d) for the equal-mass binaries cut.
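As a purely illustrative companion to the procedure above, the averaging of the queried sightlines and the conversion of the 0.02 mag extinction step into distance bin edges could be sketched as follows. The sketch assumes that the E(B-V)-versus-distance curves for the 100 positions have already been retrieved and transformed, and that the averaged curve increases monotonically with distance; all variable names are placeholders.

import numpy as np

def mean_extinction_curve(distance_curves, ebv_curves, d_grid):
    # Interpolate each sightline onto a common distance grid and average,
    # mimicking the averaged (red) extinction curve described above.
    curves = [np.interp(d_grid, d, e) for d, e in zip(distance_curves, ebv_curves)]
    return np.mean(curves, axis=0)

def distance_bin_edges(d_grid, mean_ebv, step_mag=0.02):
    # Distances at which the average extinction has grown by successive
    # multiples of step_mag (0.02 mag in the text); assumes mean_ebv is monotonic.
    targets = np.arange(step_mag, mean_ebv.max(), step_mag)
    return np.interp(targets, mean_ebv, d_grid)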
http://arxiv.org/abs/2307.00256v2
20230701073531
Murmurations of Dirichlet characters
[ "Kyu-Hwan Lee", "Thomas Oliver", "Alexey Pozdnyakov" ]
math.NT
[ "math.NT" ]
http://arxiv.org/abs/2307.03097v2
20230706161150
Close Encounters of Star - Black Hole Binaries with Single Stars
[ "Taeho Ryu", "Selma de Mink", "Rob Farmer", "Ruediger Pakmor", "Rosalba Perna", "Volker Springel" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.SR" ]
Multi-body dynamical interactions of binaries with other objects are one of the main driving mechanisms for the evolution of star clusters. It is thus important to bring our understanding of three-body interactions beyond the commonly employed point-particle approximation. To this end we here investigate the hydrodynamics of three-body encounters between star-black hole (BH) binaries and single stars, focusing on the identification of final outcomes and their long-term evolution and observational properties, using the moving-mesh hydrodynamics code AREPO. This type of encounters produces five types of outcomes: stellar disruption, stellar collision, weak perturbation of the original binary, binary member exchange, and triple formation. The two decisive parameters are the binary phase angle, which determines which two objects meet at the first closest approach, and the impact parameter, which sets the boundary between violent and non-violent interactions. When the impact parameter is smaller than the semimajor axis of the binary, tidal disruptions and star-BH collisions frequently occur when the BH and the incoming star first meet, while the two stars mostly merge when the two stars meet first instead. In both cases, the BHs accrete from an accretion disk at super-Eddington rates, possibly generating flares luminous enough to be observed. The stellar collision products either form a binary with the BH or remain unbound to the BH. Upon collision, the merged stars are hotter and larger than main sequence stars of the same mass at similar age. Even after recovering their thermal equilibrium state, stellar collision products, if isolated, would remain hotter and brighter than main sequence stars until becoming giants. black hole physics – gravitation – stellar dynamics § INTRODUCTION Dynamical interactions between stars and compact objects in dense environments play a fundamental role in a variety of astrophysical settings, from influencing the thermodynamic state of a star cluster <cit.>, to altering original planetary architectures (e.g. ), to forming binary black holes (BHs) (e.g. ) which may make up a significant contribution to the BH-BH mergers detected via gravitational waves (GWs) by the LIGO and Virgo observatories <cit.>. We are living in an exciting era of transient surveys where the number of transients will exponentially grow soon with detections by ongoing (e.g., the Zwicky Transient Facility [https://www.ztf.caltech.edu]) and future (e.g., Vera Rubin Observatory[https://www.lsst.org] and ULTRASAT[https://www.weizmann.ac.il/ultrasat]) surveys. However, the origin of many transients such as the newly discovered class of the Fast Blue Optical Transients <cit.> remains unknown. For their reliable identification, it is imperative to understand possible mechanisms for the formation of various types of transients <cit.>. In particular, when a dynamical interaction between a star and a BH brings them within a very close distance, the star can be destroyed via its strong interactions with the BH, leading to the production of a bright flare. Due to the expectation of these transient electromagnetic signatures, the study of close dynamical interactions between stars and compact objects is an especially timely one in light of both ongoing and upcoming transient surveys. 
When close interactions involve a three-body encounter between a binary and a tertiary object, there is a richer set of possible outcomes compared to the case of a close encounter between a star and a stellar-mass BH where, depending on the closeness of the encounter, the main outcome is a partial or full disruption of the star <cit.>. While so far few hydrodynamical simulations involving binaries have been carried out <cit.>, the impending increase in the number of detectable transients, and at the same time the importance of compact object binaries for GW observations, make these investigations especially timely. Rarer events, which may have not been detectable to date, may likely be in the near future. Since encounters among more than two objects can become chaotic and are not analytically tractable in general, most of the theoretical understanding of dynamical interactions is based on numerical experiments using N-body simulations in which the trajectories of stars or BHs, approximated as a point particle, are integrated under the gravitational forces over time <cit.>. Those experiments have provided profound insights into dynamical interactions, particularly the statistical properties of outcomes. In addition, assuming a finite size of the point masses, one can in principle investigate the occurrence rate of transients, such as tidal disruption events or stellar collisions, using N-body simulations with finite size of the point masses <cit.>. However, in these studies, non-linear hydrodynamic effects on stars or on surrounding media, such as tidal deformation or shocks, which are essential for the prediction of observables and an accurate identification of outcomes, are ignored or treated very approximately. We have therefore recently began a systematic hydrodynamical investigation of 3-body close encounters between a binary and a tertiary object, involving spatially resolved stars, which we have presented in a series of papers. In <cit.>, we investigated the outcomes of close encounters between main sequence stars and stellar-mass binary BHs, using 3D smoothed particle hydrodynamics simulations, and for a variety of initial orbital parameters and encounter geometries. We found a rich phenomenology for the predicted accretion rates (which can be considered to zeroth order a proxy for the luminosity): while some encounters lead to signatures similar to those typical of the 2-body encounters, other situations carry clear signatures of the binarity, with the accretion rate modulated over the binary period, as both BHs alternate in stripping mass from the star. We further found that the interaction of the BH binary with the star and the stellar disruption itself can produce a significant feedback on the binary orbital parameters, quantitatively differing from the cases of pure scattering <cit.>. A single close encounter can produce changes of up of unity in the GW-driven merger timescale. In <cit.>, we explored the outcomes of close 3-body encounters in which the binary is composed of two main sequence stars, while the incoming tertiary object is a stellar-mass BH, using moving-mesh hydrodynamics simulations. Again exploring a variety of initial conditions, the simulations uncovered a variety of astrophysical outcomes, from the most standard one of a single star disruption, to a double star disruption, to member exchange leading to the formation of an X-ray binary, to the formation of runaway stars and runaway BHs made active by the accreting debris from the disrupted stars. 
Most recently, in <cit.>, we performed moving-mesh hydrodynamical simulations of close encounters between single BHs and binaries composed of a main sequence star and a BH. Outcomes were found to range from orbital perturbations of the original binary, to member exchanges either of the BH (hence forming a new X-ray binary), or of the star (hence leading to a binary BH). Deep encounters on the other hand were found to more often lead to the disruption of the star, with accretion rates that can display modulation if the two BHs have bound to form a binary. In this paper, which is the fourth in our series, we continue our investigation in this area by performing 3D hydrodynamical simulations of close encounters between main sequence, single stars, and binaries composed of a BH and a star, using numerical methods developed in and refined in . In addition to outcomes already encountered before, we find new astrophysical phenomena, such as the formation of binary star systems via member exchange, and stellar mergers, whose evolution we then follow with the stellar evolution code MESA <cit.>. The paper is organized as follows. Section <ref> describes the numerical methods we use, and the initial conditions of our simulations. Simulation results are presented in Section <ref>, with their astrophysical implications discussed in Section <ref>. We summarize and conclude in Section <ref>. § METHODS Our numerical methods are essentially the same as described in , except that the incoming object is now a 10 main-sequence (MS) star. We concisely summarize the key elements here, but we refer to for the specific details. We perform a suite of 3D hydrodynamic simulations of the close encounters using the massively parallel gravity and magnetohydrodynamic moving mesh code AREPO <cit.>. We use the HELMHOLTZ equation of state <cit.> which includes radiation pressure, assuming local thermodynamic equilibrium. The initial state of the two stars, the one in the binary and the incoming single, is identical and was taken from evolved MS stars with the core H mass fraction of 0.3 (at age of 18 Myr) computed using the stellar evolution code MESA (version r22.05.1) <cit.>. This stellar model is the same as the one adopted in . We refer to the Section 3.2 `Stellar model' in for the choices of the parameters adopted to evolve the star and for its radial density profile. We map the 1D MESA model into a 3D AREPO grid with N≃ 5×10^5 cells, and fully relax the resulting 3D single star. We model the BH using an initially non-rotating sink particle, which interacts gravitationally with the gas and can grow in mass via gas accretion. We follow exactly the same procedure for accretion described in , including the refinement criteria introduced there. We parameterize the binary's semimajor axis a using the analytical approximation of the Roche lobe radius by <cit.>, r_ RL/a= 0.49 q^2/3/0.6q^2/3+ln(1+q^1/3), where r_ RL is the volume averaged Roche lobe radius of the star, q=M_⋆/M_∙ is the mass ratio, and a is the orbital separation. We define a_ RL≡ a(R_ RL=R_⋆) as the separation at which the star fills its Roche lobe. For q=0.5 and r_ RL = R_⋆, a_ RL≃ 3.12, and R_⋆≃ 16.9. §.§ Initial conditions In our simulations, a circular binary consisting of a 20 BH and a 10 star encounters a single 10 star on a parabolic trajectory. As remarked in , the choices of the encounter parameters are somewhat arbitrary, but BHs with such masses have been observed in X-ray binaries <cit.>. 
In addition, encounters between objects of similar masses are expected in the centers of young star clusters where massive objects accumulate due to mass segregation. Later, based on our simulation results, we discuss potential effects of different masses in  <ref>. We consider three semi-major axes: a/a_ RL=2, 4 and 6, corresponding to orbital periods of 4, 12, and 22 days, respectively. The distance between the binary's center of mass and the BH at the first closest approach is parameterized using the impact parameter b, i.e., r_ p=ab/2. Here, r_ p is the pericenter distance and a the binary semimajor axis. We investigate the dependence of encounter outcomes on key encounter parameters, i.e., inclination angle i=0, 30^∘, 60^∘, 120^∘ and 180^∘, b = 1/4, 1/2, 1 and 2, and the phase angle ϕ=0^∘- 315^∘ with an increment of Δϕ=45^∘. Here, ϕ is the initial angle between the line connecting the two members in the binary and the x-axis (see Figure 2 in ). To study the impact of ϕ, we initially rotate the binary while the initial separation between the binary and the star is fixed at 5a. We study the dependence of outcomes on i, ϕ and b using the encounters of the intermediate-size binaries (a/a_ RL=4). We summarize the initial parameters considered in our simulations in Table <ref>. Each of the models is integrated to a few up to 100 t_p, which is the typical time it takes to identify the final outcomes. Here, t_ p= (r_ p^3/GM)^1/2 is the dynamical time at r=r_ p and M is the total mass of three objects. The value of t_ p for each model is given in Table <ref>. § RESULTS §.§ Classification of outcomes We divide the outcomes produced in the three-body encounters between BH-star binaries and single stars into five classes: Stellar disruption, Merger, Orbital perturbation, Member exchange, and Triple formation. We provide a sketch with an overview of the five outcome classes with corresponding short descriptions in Figure <ref>, and we summarize the outcome types and properties for all our models in Table  <ref>. * Stellar disruption: this class refers to encounters in which one or both stars are fully destroyed via tidal disruptions and collisions with the BH. These disruptive encounters mostly take place when the BH and the single star meet first. Once the star is destroyed, the BH is quickly surrounded by an accretion flow, which would generate an electromagnetic transient (see Section <ref>). Disruptions can also happen when two stars encounter first on a retrograde orbit (i=150^∘): in two models (Models 15. a4b1/2ϕ180i150 and 20. a2b1/2ϕ180i150), the two stars merge first, followed by the disruption of the merged star by the BH. The three types of final outcomes in this class are: * Single BH: a single accreting BH when both stars are destroyed (Models. 15. a4b1/4ϕ180i150, 19. a2b1/2ϕ0i150, and 20. a2b1/2ϕ180i150). The ejection velocity of single BHs is 60-100 km s^-1, which is high enough to escape globular clusters. * Binary: a full disruption of the incoming star and the original binary where the BH is accreting, hence there is no ejected single star (e.g., Models 3. a4b1/2ϕ9i30 and 4. a4b1/4ϕ0i30). * Binary + single: a partial disruption event of the incoming star, creating an unbound partially disrupted star, and the original binary where the BH is accreting (e.g., Models 11. a4b1/2ϕ0i150 and 27. a4b1/2ϕ0i120). * Merger: this corresponds to the case where the two stars merge and survive. These events frequently occur when the two stars encounter first on a prograde orbit (i=30^∘). 
For this case, the merged stars form a binary with the BH, except in one case where the merged star and the BH are unbound (Model 36. a4b1/2ϕ315i30). The semimajor axes and the eccentricities of the binaries are a≃ 130 - 200 and e≃0.5-0.7, respectively. * Orbit perturbation: in this class, the incoming star weakly perturbs the orbit of the original star and becomes an unbound single. The final outcome is the perturbed binary consisting of its original members and an ejected star which was the incoming single. We do not identify a well-defined region of the parameter space which is specific to this class: the encounter parameters of these cases cover almost the entire range considered in this work. The perturbed binaries have a smaller a than the initial value by 10 - 40 percent, depending on the encounter parameters. The ejected stars have a velocity ranging between 120 - 230 km s^-1. * Member exchange: We find that in five models (out of 37) the initial binary is dissociated and a new binary forms while the third object is ejected. The newly formed binaries consist of either the BH and the initially incoming star (Models 5. a4b2ϕ180i30, 8. a4b1/4ϕ180i30, 34. a4b1/2ϕ135i30) or the two stars (Models 2. a4b1ϕ0i30 and 16. a4b1/4ϕ180i30). The semi-major axes of the newly formed binaries are larger than that of the original binary by 5-50 percent in all these models except Model 34. a4b1/2ϕ135i30 where the newly formed binary is smaller than the original binary by 30 percent. The eccentricities of the newly formed binaries are in the range 0.1-0.8. The ejection velocities of the singles vary between 80 - 230 km s^-1. Given our small sample size, we could not reliably identify the parameter region where the member exchange happens frequently. * Triple formation: In this class, a hierarchical triple forms after the original binary is dissociated (Models 1. a4b2ϕ0i30, 32. a4b1/2ϕ45i30, and 35. a4b1/2ϕ225i30). In two cases (Models 1. and 35.), the inner binary consists of the two stars and the tertiary is the BH. In the last model, the star originally in the binary is in an outer orbit around the inner binary made up of the BH and the initially single star. According to the stability criteria by <cit.>, which is an improved version of <cit.>, all these triples are unstable. §.§ Dependence of outcomes on parameters * Phase angle: This primarily determines which two objects meet first. As indicated by the varieties of the outcomes (e.g., tidal disruption, triple, binary, and merger) from the models with varying ϕ (Models 3 and 32-37), the outcomes sensitively depend on the exact configuration at the first encounter. A general trend is that the chances of having mergers are significantly higher in encounters where the two stars meet first, compared to the cases where the BH and the incoming star interact closely first. For the latter, a likely outcome is stellar disruption. For the parameters covered by Models 3 and 32-37, a full disruption occurs within a relatively small region of ϕ (Δϕ < 45^∘). * Impact parameter: Violent events (i.e., tidal disruption , collision, and merger) tend to occur when the impact parameter is less than a/2 (or b <1). However, a small impact parameter does not always lead to such star-destroying events, depending on other encounter parameters, primarily the phase angle. Hence b <1 is a necessary condition for disruptive interactions. For example, in Model 16. 
a4b1/4ϕ180i150, the incoming star dissociates the binary, but the interactions do not ultimately lead to either a stellar disruption or merger. * Inclination angle: Two primary effects of the inclination angle are as follows. Firstly, the inclination angle determines how small the relative velocity becomes between the two meeting objects, which is directly translated into the size of the gravitational-focusing encounter cross section: the higher the relative velocity (retrograde), the smaller the cross section is. For the same encounter parameters, prograde encounters likely create star-removing events. Second, although rare, a high relative speed in retrograde encounters inversely indicates that, if a strong encounter occurs, the resulting change in the momentum would be relatively high. In fact, because the head-on collision of the two stars so effectively removes the kinetic energy of the two colliding stars, the merger events followed by a disruption only occurs in retrograde cases. * Semimajor axis: Encounters involving an initially smaller binary (e.g., a/a_ RL=2) appear to more preferentially create TDEs and collisions, which may be attributed to the fact that the incoming star would be more directed towards the binary's center of the mass where violent interactions are more likely to occur. However, we do not see any other clear trend associated with the size of the binary. §.§ Binary formation In our simulations, the formation of a binary, as a final product, is very frequent. We present in Figure <ref> the orbital properties of the binaries. As shown in the left panel, this type of three-body interactions results in both wider and more compact binaries than the initial binaries with a≲ 100. The semi-major axes of most of the final binaries range from 35 to 200. Our simulations also show that wide binaries with a as large as a≃ 1000 can be produced by three-body interactions involving a relatively compact binary. However, we could not find any significant dependence of the ratio of the final a to the initial a on any of the parameters or the types of outcomes introduced in Section <ref>. Note that the well-defined power-law relation with small scatter is simply because of the same initial semi-major axis in most models (or a = 4 a_ RL). The final eccentricities, as shown in the right panel of Figure <ref>, are within e≃ 0.1 - 0.8 for binaries with a≲ 200. The wide binaries with a≳ 300 are more eccentric, e≃ 0.86 - 0.99. Note that the star in the very wide binary in Model 28. a4b1/2ϕ0i180 will be partially disrupted at the next pericenter passage because the pericenter distance (≃ 4) is between its full disruption radius ≃ 3 and the partial disruption radius ≃ 14[The full disruption radius is calculated using Equation 5 in <cit.> and the partial disruption radius using Equation 17 in <cit.> assuming the star has the same initial structure as the original star. ]. Similarly to the semi-major axis, we do not find any clear trends of the final eccentricity in terms of the encounter parameters and final outcomes. This may imply that the properties of the final binaries are sensitively dependent on multiple encounter parameters. We also find that interacting binaries form in Models 8. a4b1/4ϕ180i30 and 31. a4b1/2ϕ180i120, the models with (I) next to the class in Table <ref>. §.§ Accretion In the majority of our models, the BHs are surrounded by gas produced in TDEs, collisions, and stellar mergers. 
In those cases, the BH accretes gas, potentially creating electromagnetic transients (EMTs), although there may be a significant delay between the moment of the close encounter and the peak emission of the EMT because of a large optical depth of the debris (or a long cooling time). We present in Figure <ref> the accretion rate Ṁ in models where at least one of the stars is disrupted. As shown in the figure, the shape of the accretion rate as well as the peak rate, ranging from 10^-9-10^-4 s^-1, are diverse. We split the types of Ṁ curve into three categories, depending on their shape and the mechanism that creates the accretion disk. * Single-peak : Ṁ rises relatively rapidly and decays slowly, which can be generated in two cases. 1) Partial TDE (Models 11 and 27): in this event, only a fraction of mass is lost from a star (most often the incoming one), which quickly forms an accretion disk. The peak accretion rate is substantially lower than in other cases, Ṁ≲ 10^-7 s^-1. 2) Full TDE or head-on collision (Models 4, 12, and 17): when the incoming star undergoes a collision with the BH or is completely tidally disrupted, the accretion rate surges very rapidly and then decays. * Multiple-peaks: To produce an Ṁ with more than one peak, more than one violent event should occur. We find three such cases in our simulations. 1) Partial TDE ⟶ full TDE (Models 3, 14, 21, 25, 26, and 28): in this case, a partial TDE occurs, followed by a full disruption. 2) Merger⟶ full TDE (Models 15, 20, and 24): when the two stars merge, some fraction of mass is ejected (see Section<ref>). The BH nearby captures the gas and accretes it (e.g., the first Ṁ peak at t≃ 1 days in Model 20). If the collision significantly reduces the kinetic energy of the merged star, this places the merged star on a radial orbit around the BH and it is disrupted at the first pericenter passage. For this case, the time difference between peaks is determined by how far from the BH a merger happens and how quickly the merged star is disrupted. * Rise-flat: Ṁ in Model 19 rises on a time scale of 1 day and then stays nearly constant at Ṁ≃ 10^-7 s^-1. The flat Ṁ indicates that gas is continuously injected into the BH. In fact, in this model the two stars are partially destroyed at each pericenter passage, soon followed by two total disruptions. As a result of continuous mass inflow into the BH, the overall shape of the accretion rate is flat. In the remainder of this section, we investigate the properties of the accretion disk around the BH. The accretion disk is sub-Keplerian and optically and geometrically thick. As an example, we depict the density of the disk formed in Model 20. a2b1/2ϕ180i150 in Figure <ref>. In general, the disks have an aspect ratio of 0.4 - 0.6 at distance r≳ 1 from the BH, which increases inwards to ≳ 1 at r≲ 0.1. The azimuthal velocity of the disks is ≃ 0.6 -0.9, indicating that the disk is radiation pressure-supported. The density of the disks is mostly flat at r≲ 0.1-1, and it decreases outwards following a power-law of r^-3-r^-4. The temperature decreases approximately monotonically as r increases: T∝ r^-0.25 at r < 1 and T∝ r^-1 at r> 1. The r- scaling relations for ρ and T are very similar to those for the disks that form in three-body interactions between BH-star binaries and single BHs (see Figure 9 in ). In Figure <ref>, we present the density, temperature, rotational velocity, and the aspect ratio of the disks in the models considered in Figure <ref>. 
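To put the quoted peak rates in context, a quick back-of-the-envelope comparison with the Eddington rate can be made. This numerical aside is ours rather than part of the simulation analysis; it assumes that the quoted rates are in units of M_⊙ s^-1, a 20 M_⊙ accretor, electron-scattering opacity, and a radiative efficiency of 0.1.

import numpy as np
import astropy.units as u
from astropy import constants as const

M_bh = 20 * u.Msun   # assumed BH mass
eta = 0.1            # assumed radiative efficiency

# Eddington luminosity for electron-scattering opacity
L_edd = (4 * np.pi * const.G * M_bh * const.m_p * const.c / const.sigma_T).to(u.erg / u.s)

# Corresponding Eddington accretion rate
mdot_edd = (L_edd / (eta * const.c**2)).to(u.Msun / u.s)

# Compare with a peak rate of ~1e-4 Msun/s from the most violent disruptions
print(L_edd, mdot_edd, (1e-4 * u.Msun / u.s / mdot_edd).decompose())

With these assumptions the Eddington accretion rate is of order 10^-14 M_⊙ s^-1, so the peak rates of 10^-9 - 10^-4 quoted above exceed it by many orders of magnitude, consistent with the super-Eddington accretion emphasised in the abstract and conclusions.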
Finally, in Figure <ref>, we show the mutual inclination angle between the disk and the binary orbit in models where the final product is a binary consisting of the BH surrounded by a disk. In seven out of ten models considered, the mutual inclination angle between the disk orbit and the binary orbit is not very different from the initial encounter inclination angle, which is not surprising. However, it is quite striking that the final disk-binary orbit inclination angles in the remaining three models (Models 11., 12, and 27) are completely different from the initial encounter inclination angle. Coincidentally they are all retrograde encounters (three out of four). These findings imply that, since in actual astrophysical settings a third body will approach a binary with an arbitrary inclination angle, if a disk forms around a binary member during the three-body interactions, the binary orbit and the disk are not likely aligned at the moment of the disk formation. This may indicate that the orientation of the disk around the BH in BH-star binaries in dense environments can be indicative of the inclination angle of the incoming object in the previous encounter. §.§ Merger product Our simulations show that the two stars in this type of dynamical interactions can merge (12 out of 37 models). The post-merger star often forms a binary with the BH. The center of mass velocity of the binary is typically very low (3-10 km s^-1). The mutual orbits of the two stars before the collision are such that 1-e≃ -0.01 - 0.2 and the pericenter distance is (0.05 - 0.5)×, corresponding to the velocity v_ rel≃ 0.7-0.9 v_ esc at infinity. Here, v_ esc=(G/)^1/2≃ 600 km s^-1 is the escape velocity of the 10 star. In the parameter space considered, mergers almost exclusively occur when the two stars first meet. The fate of the merged stars is diverse. If the merger is able to significantly cancel the momenta of the colliding stars, the merged star is brought on a radial orbit towards the BH and disrupted at pericenter. On the other hand, if the momentum cancellation is not significant, the merged stars can form a binary with the BH. Last is a case (Model 37. a4b1/2ϕ315i30) in which the incoming star undergoes a close encounter with the BH and exerts a momentum kick to the BH strong enough to eject it from the two stars. Then the two stars merge, remaining unbound from the BH. We find that the dynamically merged stars have a mass of ∼ 18 - 19 after losing ∼ 1 - 2 during the merger. This mass loss corresponds to 5 - 10 percent of the total mass, which is similar to what has been found for equal or similar mass low-velocity (v_ rel<v_ esc) stellar collisions in previous work <cit.>. The thermodynamic state (e.g., density and temperature) of all collision products is not varying significantly among one another. However, collision remnants are significantly puffed up compared to a non-rotating ordinary main sequence (MS) star of the same mass at a similar evolutionary stage (“ordinary” star), evolved using MESA[ Note that we confirmed that the internal structure of the non-rotating star is very similar to that of rotating stars with a rotational speed less than 60% of their break-up speed, which is roughly the maximum speed of merged stars in our simulations.], except the one in Model 31 where the merger product has the smallest mass and is substantially more compact than the others. The inflated radii are also similarly found for the coalescence of two stars initially in binaries <cit.>. 
To demonstrate this, we depict the 1D radially averaged density and temperature profiles in the top panels of Figure <ref>. As shown in the top-left panel, the density profiles of the merger products are not significantly different from each other. However, they are much more extended in size than for the ordinary star (dashed grey). Because of the larger size, the central densities of the merger product (∼ 5 - 10 g cm^-3) are generally lower than those of the ordinary star; we find them to be lower by a factor of less than two. Similarly, as shown in the top-right panel, the overall temperature profiles of the merger products are extended outwards and their temperatures are lower than for the ordinary star. The H (X_ H) and He (X_ He) mass fractions reveal more significant differences from those of the ordinary star, which are shown in the bottom panels of Figure <ref>. In particular, the core X_ He of the most merged stars has decreased from 0.68 (initial state) to 0.6. Equivalently, the core X_ H has increased from 0.3 (initial state) to 0.4. The transition from the core (X_ He≃ 0.6) to the envelope (X_ He≃ 0.25) is smoother than in the MESA ordinary non-merger stellar model. As an example, we show how He is mixed in the core during a merger in Model 29. a4b1/2ϕ180i0 in Figure <ref>. We note that in two cases (Models 6. and 22.), the core X_ H≃ 0.35, is somewhat smaller than that of most of the merged stars. This difference could originate from a different configuration at collision (e.g., relative speed at collision and impact parameter): less significant mixing (X_ H closer to its initial value) would have resulted from a collision with a smaller impact parameter, i.e., closer to a head-on collision. The smaller X_ He than the initial state indicates that fresh H initially in the envelope of each star is mixed into the merged core during the merger, as similarly shown for unequal-mass stellar collisions in <cit.>. Notice that some merger products reveal unstable gradients in the profile, such as an inverted gradient in composition at R≃ 15 for Model 6 or in temperature at R≃ 12 for Model 22, which likely indicates that the merger products have not reached a fully stable state. However, as the star is settling into a stable state, the inverted gradients will be removed via, e.g., thermohaline mixing of the composition <cit.>. The merged stars tend to be differentially rotating, as shown in the top panel of Figure <ref>. The rotational frequency Ω near the core is 6 - 8× 10^-4^-1 and decreases outwards to ≲ 10^-4^-1 at the surface. This corresponds to Ω/Ω_ cri≳ 0.4 within ≃ 1, and Ω/Ω_ cri≃ 0.1 - 0.5 near the surface. Here Ω_ cri is the local critical frequency, defined as (GM(<R)/R^3)^1/2, and R is the distance from the center of mass of the merged star. We also find that the two merger products with relatively low X_ H (Models 6. and 22.) are rotating more slowly (Ω≲ 2× 10^-4^-1) near the core, as expected from a head-on collision, and they are closer to rigid rotators than the other merger products. Because of the rapid spin, the overall shape of the merged stars takes that of an oblate spheroid. Figure <ref> shows the density in the equatorial plane and a x-z slice of the merger product in Models 6. An interesting remark here is that we do not see any evidence of a disk around the merged stars, as illustrated in the figure. This is consistent with <cit.>. Instead, the star is surrounded by a low-density spherical envelope. 
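As a purely illustrative aid to the definition used above, the local critical frequency Ω_cri = (GM(<R)/R^3)^1/2 can be evaluated with a few lines of Python. The enclosed masses and radii below are placeholders chosen only to show the order of magnitude of the calculation, not values taken from the simulations.

import numpy as np
import astropy.units as u
from astropy import constants as const

def omega_crit(m_enclosed, radius):
    # Local critical (break-up) angular frequency, (G M(<R) / R^3)^(1/2)
    return np.sqrt(const.G * m_enclosed / radius**3).to(1 / u.s)

# Placeholder inputs for a deep interior point and a point near the surface
print(omega_crit(5.0 * u.Msun, 1.0 * u.Rsun))
print(omega_crit(18.5 * u.Msun, 10.0 * u.Rsun))

Dividing the Ω profile measured in the simulations by such a locally evaluated Ω_cri gives the Ω/Ω_cri ratios quoted above.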
Based on their hydrodynamics simulations of off-axis stellar collisions between 0.6 and 0.8 evolved MS stars, <cit.> posed an "angular momentum problem" where merger products are formed with too large angular momentum so that some of the angular momentum has to be lost in order for them to settle into a stable state, possibly blue stragglers. The existence of a disk can mitigate this problem because disk-star interactions, e.g., magnetic locking, can remove the angular momentum of the merger products. Although we do not find a disk surrounding the merger products, their angular momentum is already below critical in our simulations. We show in the bottom panel of Figure <ref> the cumulative angular momentum distribution inside the merger products, in comparison with the maximum angular momenta of the ordinary star (dashed horizontal) and the original 10 star (dot-dashed horizontal). The total angular momentum inside the merged stars is 1-3×10^53 g cm^2 s^-1, which is more than a factor of 2 smaller than the maximum angular momentum that the ordinary star with ≃ 18.5 would have. This means that in principle the merged stars could settle into a stable state without losing any mass due to exceedingly large centrifugal forces. § DISCUSSION §.§ Electromagnetic Transients Three-body interactions between BH-star binaries and single stars can create a variety of EMTs. For the parameters considered in this study, four classes can create immediate EMTs. In the class Stellar disruption, the stellar debris quickly forms an accretion disk and the BH accretes gas; this, together with shocks, can generate EM radiation. In the class Merger, some fraction of mass is ejected at the collision and spread out. Almost instantaneously the BH becomes embedded in a gaseous medium like in a common envelope phase, and can emit radiation via accretion and shocks. Additionally, if the merged star forms a sufficiently compact and highly eccentric binary with the BH (e.g., Model 31), eccentric mass transfer can lead to periodic EM emission. Interacting binaries can also form in the last two classes: while we find such a case only in the class Member exchange (Model 8), the formation of interacting binaries is in principle possible also in the class Orbit perturbation. The EM signatures from EMTs are diverse, as illustrated in Figure <ref>, depending on the encounter configurations and outcomes. To zeroth order, Ṁ can be a useful proxy for luminosity. In this sense, the various types of Ṁ and the identification of the encounter types that generate each type of Ṁ in this work can be used to understand the origin of transients produced in three-body interactions. However, for a more reliable identification of transients, a more systematic investigation covering a wider range of parameters will be required. §.§ Long-term evolution of merger products Our simulations show that two stars can collide and merge in three-body interactions, and the merger product can form a binary with the BH or be ejected from the BH. The rate of such stellar collisions in three-body interactions involving binaries can be significantly larger than that between two single stars in clusters, due to mass segregation and a large encounter cross section <cit.>. We also showed that the internal structure of the merger products, after being dynamically settled, is different from that of an ordinary star of the same mass and metallicity at a similar age that has not undergone any merger (Figure <ref>). 
First, the merger products have larger radii than those of ordinary stars (by almost a factor of 2-3), indicating that the merger products are not in thermal equilibrium. Second, the core hydrogen fraction can be enhanced by 30 percent compared to that of the original star before merger. Equivalently, the core helium fraction can be lower by a similar amount. Lastly, the merged stars tend to be differentially rotating at 0.1-0.5 of the critical rotational velocity. All of these properties are qualitatively very similar to those of partially disrupted stars <cit.>. We note that magnetic fields, if included, can be significantly enhanced in merger products <cit.>. Given the peculiarity of the merger products, we investigate their long-term evolution using . We create a non-rotating zero-age main-sequence star with the same metallicity as the original 10 star (i.e., Z=0.006). Then we relax the star until its entropy, mass and chemical composition distribution match those of the merger product. This is achieved by iteratively modifying the normal stellar model over 1000 steps under the condition that the internal structure satisfies the stellar structure equations. Then we evolve the relaxed star using the wind and overshoot prescriptions adopted to create the original 10 star. In this analysis, we ignore rotation. Figure <ref> shows the evolution of two models (Models 7. a4b1/2ϕ180i30 and 22. a6b1/2ϕ180i30) for the next 5-6 million years since merger, and that of the ordinary star with mass of 18.5, in a Hertzsprung-Russell diagram. The core helium fraction for the merger products is always higher than that of the ordinary star at similar locations in the diagram. The evolutionary tracks of the merger products are generally located above the track of the ordinary star, implying that the merger products are hotter and more luminous at any given stellar age. This is qualitatively consistent with previous work on MS stellar mergers <cit.>. However, their temperature and luminosity are not significantly larger, at most by a factor of 1.3. We should note that it would be important to include rotation in this analysis given rotation-induced mixing <cit.>. Although the total angular momentum of the merger products is smaller than the critical value, the merger products can still lose mass due to spin depending on the angular momentum distribution inside the star <cit.>. If the core with mass M_ core retains an angular momentum larger than that at the innermost stable circular orbit, j>GM_ core/c≃ 2× 10^6 (M_ core/2) cm^2 s^-1 <cit.>, by the time the core collapses, the merger products can become progenitors of hypernovae and long-duration gamma ray bursts. All of this implies that the evolutionary tracks, when the spin is taken into account, could be different from the tracks shown above and have unique astrophysical implications. Given such potential effects on the evolution, we will examine the impact of rotation on the long-term evolution of merger products with proper modeling of rotation and resulting mass loss in future work. §.§ Encounters with different masses In this study, we consider two stars of the same mass in the three-body interactions and the mass ratio of the stars to the BH is fixed at 0.5. However, in realistic cluster environments, the mass of the two stars are not necessarily the same. Also, the star-BH mass ratio would be variable. Nonetheless, many of our findings can still apply to this type of three-body encounters with varying masses. 
If the impact parameter b is less than a, interactions would still possibly become violent, independently of the mass ratio. In addition, whether outcomes are mergers between two stars or disruption of a star(s) by the BH would be primarily determined by which two objects meet. The final outcome types, their properties, and their formation frequency would depend on the mass of the incoming star and its mass ratio to the binary mass, like other encounter parameters. For example, if a smaller intruder can act as a catalyst for violent interactions <cit.>, stellar mergers, TDEs, and star-BH collisions would be more frequent. However, if the incoming mass is too small compared to the masses of both binary members, the immediate impact of close encounters (e.g., dissociation of the binary) would be relatively small. For this case, if a merger occurs between two unequal mass stars, the internal structure of the merger product and its long-term evolution could be significantly different from what we found for equal-mass collisions, which would probably result in the strongest mixing <cit.>. On the other hand, if the incoming star is much more massive than the mass of the star in the binary, then the encounters would be effectively a two-body problem between the incoming star and the BH. §.§ Runaway stars and black holes We showed that this type of three-body encounters can create single stars ejected at velocities of 120 - 240 km s^-1, much greater than the typical escape speed of globular clusters <cit.>, as well as single BHs also ejected at high velocities of 63 - 115 km s^-1. In particular, some of the rapidly moving BHs had undergone a TDE and became surrounded by an accretion disk, meaning they are emitting radiation while being ejected. If the lifetime of the accretion disk around them is sufficiently long, those could be observed as rapidly moving runaway BHs outside clusters. §.§ Encounter rate in globular clusters Following <cit.> and <cit.>, we first make an order-of-magnitude estimate for the differential rate of a BH-star binary encountering a single star per single star as dℛ/ d N_ s≃ nΣ v_ rel. Here, n is the binary number density near the cluster center, v_ rel the relative velocity between the binary and the single star, and Σ the encounter cross-section. We adopt the estimate for dℛ/ d N_ s made in <cit.>, dℛ/ d N_ s ≃ 4 × 10^-14 yr^-1 (f_ b/10^-5) (n_ s/10^5 pc^-3)((M_∙ + M_⋆)/20) ×(a/100) (σ/15)^-1, where we express n as n ≃ f_ b n_ s, f_ b is the non-interacting star - BH binary fraction ≃ 10^-4-10^-5 <cit.>, n_ s gives the number density of stellar-mass objects, and σ is the velocity dispersion. Because the number of single stars in the core of size r_ c≃ 1 is N_ s≃ 4π r_ c^3 n_ s/3 ≃ 4× 10^5, the rate of strong three-body encounters per globular cluster is ℛ ≃ 2× 10^-8 yr^-1 (r_ c/1)^3(f_ b/10^-5) (n_ s/10^5 pc^-3)^2((M_∙ + M_⋆)/20) ×(a/100) (σ/15)^-1. Assuming ≃150 globular clusters in the Milky Way <cit.>, ℛ≃3× 10^-6 per year per galaxy. As noted in <cit.>, a more precise estimate of ℛ requires a more careful consideration of cluster evolution history. § CONCLUSIONS Multi-body dynamical interactions, a fundamental mechanism responsible for the evolution of star clusters, have been studied mostly using N-body simulations even though hydrodynamical effects are essential for determining outcomes and their observables. 
Continuing our efforts of bringing our understanding of three-body interactions beyond the point-particle approximation, we have investigated the outcomes of three-body encounters between a 20 BH – 10 star circular binary and a 10 star, using a suite of hydrodynamical simulations with the moving-mesh code AREPO, for a wide range of encounter parameters. The results of our simulations are summarized in the following. * Three-body encounters between BH-star binaries and single stars can produce five different outcomes: stellar disruption, merger, orbit perturbation, member exchange, and triple formation. Although in principle the essence of these outcomes can be identified with the point particle approximation assuming finite sizes of the mass points, we obtained the properties of the outcomes and their observables in detail, which cannot be studied with N-body simulations alone; see Section <ref> for their detailed properties, such as their plausible formation configurations. * The phase angle and the impact parameter play the most important role in determining the outcomes, similarly to the three-body interactions between star-BH binaries and single BHs studied in <cit.>. The phase angle determines which two objects first meet: if two stars meet first, one likely outcome is a stellar merger, whereas if the incoming star and the BH interact closely first, the star is destroyed in a tidal disruption event or a collision with the BH. The impact parameter sets the zeroth-order boundary between violent, star-destroying events (b < 1) and non-violent events (b > 1). However, even for b<1, the outcomes can vary depending on the phase angle. The probability of having disruptive events is further enhanced when the encounter is in a prograde direction in which the encounter cross section is large because of smaller relative velocities. * The accretion rate produced in stellar disruptions is mostly super-Eddington and displays various shapes, depending on the configuration at disruption (e.g., single full disruption, partial disruption, collision, or multiple disruptions, see Figure <ref>). Accretion timescales are generally a few to ten days, comparable to the duration of fast blue optical transients. * The merger products are hotter and larger than an ordinary star of the same mass at a similar age, and are rotating at 30-50 percent of the critical value. Those stars stay hotter and brighter than the ordinary star for the next 5 - 6 million years until they become red supergiants. We considered similar encounter parameters in this study as those in <cit.>. The only difference is the type of the incoming object: a star in this study and a BH in <cit.>, while the masses of the incoming object and the parameters of the original binary are the same. Nonetheless, the two types of three-body encounters produce substantially different types of final outcomes and properties. Notably, the stellar mergers found in the present study can have important implications for the subsequent long-term evolution of binaries consisting of a merger product formed dynamically in clusters (see Figure <ref>). Although our simulations cover a wide range of encounter parameters, the entire parameter space of the three-body interactions remains vast. Nonetheless, our investigation has identified key outcomes such as tidal disruption events and stellar mergers, leaving larger or more focused parametric studies to future explorations. 
In addition, we will investigate the impact of magnetic fields and background gas on the outcomes and their observables in our future work. § ACKNOWLEDGEMENTS TR is grateful to Stephen Justham and Earl Bellinger for fruitful discussions of stellar mergers and the evolution of the merger products. This research project was conducted using computational resources (and/or scientific computing services) at the Max-Planck Computing & Data Facility. Some of the simulations were performed on the national supercomputer Hawk at the High Performance Computing Center Stuttgart (HLRS) under the grant number 44232. The authors gratefully acknowledge the scientific support and HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) under the NHR project b166ea10. NHR funding is provided by federal and Bavarian state authorities. NHR@FAU hardware is partially funded by the German Research Foundation (DFG) – 440719683. The authors would like to also thank Stony Brook Research Computing and Cyberinfrastructure, and the Institute for Advanced Computational Science at Stony Brook University for access to the high-performance SeaWulf computing system, which was made possible by a $1.4M National Science Foundation grant (#1531492). R. Perna acknowledges support by NSF award AST-2006839. § DATA AVAILABILITY Any data used in this analysis are available on reasonable request from the first author. [Antonini, Chatterjee, Rodriguez, Morscher, Pattabiraman, Kalogera & RasioAntonini et al.2016]Antonini2016 Antonini F., Chatterjee S., Rodriguez C. L., Morscher M., Pattabiraman B., Kalogera V., Rasio F. A., 2016, @doi [] 10.3847/0004-637X/816/2/65, https://ui.adsabs.harvard.edu/abs/2016ApJ...816...65A 816, 65 [Binder et al.,Binder et al.2021]binder_wolfrayet_2021 Binder B. A., et al., 2021, @doi [ApJ] 10.3847/1538-4357/abe6a9, 910, 74 [BlaauwBlaauw1961]Blaauw1961 Blaauw A., 1961, , https://ui.adsabs.harvard.edu/abs/1961BAN....15..265B 15, 265 [Dale & DaviesDale & Davies2006]DaleDavies2006 Dale J. E., Davies M. B., 2006, @doi [] 10.1111/j.1365-2966.2005.09937.x, https://ui.adsabs.harvard.edu/abs/2006MNRAS.366.1424D 366, 1424 [Drout et al.,Drout et al.2014]Drout+2014 Drout M. R., et al., 2014, @doi [] 10.1088/0004-637X/794/1/23, https://ui.adsabs.harvard.edu/abs/2014ApJ...794...23D 794, 23 [EggletonEggleton1983]Eggleton1983 Eggleton P. P., 1983, @doi [] 10.1086/160960, https://ui.adsabs.harvard.edu/abs/1983ApJ...268..368E 268, 368 [Fragione, Grishin, Leigh, Perets & PernaFragione et al.2019]Fragione2019 Fragione G., Grishin E., Leigh N. W. C., Perets H. B., Perna R., 2019, @doi [] 10.1093/mnras/stz1651, https://ui.adsabs.harvard.edu/abs/2019MNRAS.488...47F 488, 47 [Fregeau, Cheung, Portegies Zwart & RasioFregeau+2004]Fregeau+2004 Fregeau J. M., Cheung P., Portegies Zwart S. F., Rasio F.
§ DISK PROPERTIES

We provide the profiles of the aspect ratio (top-left), the ratio of the azimuthal velocity to the Keplerian velocity (top-right), density (bottom-left), and temperature (bottom-right) of disks produced during dynamical interactions in Figure <ref>.
http://arxiv.org/abs/2307.00805v1
20230703074111
On Symmetric Factorizations of Hankel Matrices
[ "Mehrdad Ghadiri" ]
math.NA
[ "math.NA", "cs.DS", "cs.NA", "65F99, 15B05", "F.2.1; G.1.3" ]
On Symmetric Factorizations of Hankel Matrices
===============================================

Mehrdad Ghadiri [Georgia Institute of Technology, <ghadiri@gatech.edu>]

We present two conjectures regarding the running time of computing symmetric factorizations for a Hankel matrix 𝐇 and its inverse 𝐇^-1 as 𝐁𝐁^* under fixed-point arithmetic. If solved, these would result in a faster-than-matrix-multiplication algorithm for solving sparse poly-conditioned linear programming problems, a fundamental problem in optimization and theoretical computer science. To justify our proposed conjectures and running times, we show weaker results of computing decompositions of the form 𝐁𝐁^* - 𝐂𝐂^* for Hankel matrices and their inverses with the same running time. In addition, to promote our conjectures further, we discuss the connections of Hankel matrices and their symmetric factorizations to sum-of-squares (SoS) decompositions of single-variable polynomials.

§ INTRODUCTION

Linear system solvers are a workhorse of the modern approach to optimization, in which a linear system is solved in each iteration. This approach has been adapted for many problems ranging from graph problems <cit.>, to p-norm regression <cit.>, and linear programming <cit.>. If the linear systems in the problem have a special structure, then the structure can usually be exploited to obtain faster algorithms. This has probably been best exemplified by near-linear time Laplacian solvers that have led to improved running times for many graph problems <cit.>. Solving a general linear system and computing various factorizations of matrices can be done in O(n^3) arithmetic (or field) operations. This can be improved using fast matrix multiplication techniques to O(n^ω), where ω<2.373 is the matrix multiplication exponent <cit.>. For solving linear systems with structured matrices such as Hankel and Toeplitz, fast algorithms with O(n^2) arithmetic operations have been presented <cit.>. This can be improved further to algorithms with Õ(n) arithmetic operations, which are called super fast solvers <cit.>. These are based on finding a representation of the inverse that has Õ(n) size (for example, the inverse is constructed by shifting and adding a rank-two matrix that can be represented by 4 vectors). The representation is then applied to the response vector of the linear system, for example, using fast Fourier transform (FFT) techniques <cit.>. Note that in such super fast algorithms, the inverse is never written explicitly since it costs Ω(n^2) to write an n-by-n matrix explicitly. Hankel matrices are a special class of structured matrices with many connections to other structured matrices such as Toeplitz, generalized Cauchy, and Vandermonde matrices <cit.>. They also have many applications in theoretical computer science, including solving sparse linear systems <cit.> (which itself has applications in improving runtime bounds for convex optimization algorithms <cit.>) and the sum-of-squares (SoS) decomposition of single-variable polynomials <cit.>.
A recent breakthrough of Peng and Vempala <cit.> has shown that a poly-conditioned sparse linear system can be solved faster than matrix multiplication time by using block-Krylov methods. The high-level idea is to form a random block-Hankel matrix from the input matrix and then solve a linear system for this Hankel matrix instead. Although the bit complexity of this Hankel matrix is considerably larger than the bit complexity of the input matrix (by a factor of m<n^0.25), Peng and Vempala showed, with a careful analysis, that the number of bit operations of their algorithm is o(n^ω) for any ω>2. Note that the algorithm of <cit.> does not generate an explicit inverse but instead generates a linear operator (an implicit inverse) that can be applied to a vector to solve the linear system. Since the seminal works of Karmarkar <cit.> and Vaidya <cit.> on solving linear programs (LPs) using interior point methods (IPMs), maintaining the inverse of a matrix that undergoes low-rank updates has been an important tool in improving the running time of algorithms for optimization problems. This inverse maintenance is done using the Sherman-Morrison-Woodbury identity (Fact <ref>), which is equivalent to solving a batch of linear systems, i.e., applying the inverse to a matrix of right-hand sides instead of to a single vector, the latter being a single linear system solve. Although the sparse solver of Peng and Vempala is faster than matrix multiplication for solving one linear system, for a batch of n linear systems (i.e., when the right-hand side is an n× n matrix), it is slower than direct methods that compute an explicit inverse, which can then be multiplied with the right-hand sides directly <cit.>. Despite this caveat, the sparse solver has been utilized to improve the running time of p-norm regression problems for sparse poly-conditioned matrices beyond matrix multiplication time <cit.>. This improvement crucially depends on the fact that p-norm regression, for fixed p, can be solved by an algorithm with Õ(n^1/3) iterations <cit.>. A main idea of <cit.> is to recompute the linear operator associated with the inverse whenever the rank of the update in the Sherman-Morrison-Woodbury identity is large and causes the running time to go above n^ω. Since the number of iterations is Õ(n^1/3), this recomputation only happens a few times and a total running time of o(n^ω) is achieved for p-norm regression. This approach, however, does not work for linear programming problems since the IPMs used for these problems require Ω(n^1/2) iterations. We provide more details on this issue in Section <ref>. Inspired by this, we propose two conjectures regarding the running time of computing symmetric factorizations of the form 𝐁𝐁^* for Hankel matrices and their inverses, where 𝐁^* denotes the conjugate transpose of the matrix 𝐁. Due to general displacement structures that we will discuss later, these conjectures have implications for the block-Hankel matrix arising in the block-Krylov approach of <cit.>. In particular, the following are implied by our conjectures.

* The first implication of our conjectures is an algorithm for solving a batch of poly-conditioned linear systems faster than <cit.>. We have computed the running times of solving a batch of linear systems for a matrix with polynomial condition number and O(n) nonzero entries using the online tool of Brand <cit.>, which uses the running times developed in <cit.>. This is illustrated in Table <ref>. For example, for a batch of size n^0.96 (i.e., computing 𝐀^-1𝐁, where 𝐀 is an n× n matrix and 𝐁 is an n× n^0.96 matrix), our approach would give an improvement of n^0.016 in the running time.
* Perhaps the most important implication of our conjectures is an algorithm that solves a linear program with a sufficiently sparse matrix with polynomial condition number faster than matrix multiplication time. The sufficient sparsity is o(n^ω-1) nonzero entries. We discuss this in detail in Section <ref>. * In addition, the algorithm developed based on our conjectures improves the running time of the sparse p-norm regression algorithm developed in <cit.>. Outline. Motivated by these applications, we present necessary definitions and preliminaries for understanding our conjectures and results in Section <ref>. We then present our conjectures and corresponding results that justify them in Section <ref>. We discuss the applications of Hankel matrices and the implications of our conjectures, including the implications for solving sparse linear programs, in Section <ref>. We then provide a result regarding symmetric factorizations of Toeplitz matrices (which is used as a subprocedure for symmetric factorization of Hankel matrices) in Section <ref>. We present a key identity for Hankel matrices in Section <ref> that allows us to design a recursive algorithm for symmetric factorization of them. We then present our results regarding the symmetric factorization of Hankel matrices and their inverses in Section <ref> and <ref>, respectively. We finally conclude in Section <ref>. §.§ Notation and Preliminaries We consider the entries of our matrices to be in a field . This can be considered the field of reals or complex numbers . For both of these, our factor matrices and are in . Our results also extend to finite fields . In this case, the entries of and are from an extension of field that contains the square root of all of the elements of . For and , we consider fixed-point arithmetic for computation and representing our numbers. In this case, we cannot necessarily represent the square root of our numbers with finitely many bits, but for a number a with ℓ bits, we can find a number b with O(ℓ) bits such that b-√(a)<2^-ℓ. Therefore for matrices over and , our symmetric factorizations have some small error. For matrices over and , we denote the Frobenius norm and the operator norm by ·_ and ·_2, respectively. Then we define the condition number of an invertible matrix over or as _2 ·^-1_2, and we denote it by κ(). We denote the entry (j,k) of a matrix either by _j,k or (j,k). For natural numbers j_2>j_1 and k_2>k_1, we show the block of with rows j_1,j_1+1,…,j_2 and columns k_1,k_1+1,…,k_2 with _j_1:j_2,k_1:k_2. The matrix consisting of rows j_1,…,j_2 and all columns is denoted by _j_1:j_2,:. We denote the n-by-n identity matrix with _n and if the dimension is clear from the context, we drop the subscript. We denote an m-by-n matrix of all zeros with 0_m× n and if the dimensions are clear from the context, we drop the subscript. We denote the positive definite (Loewner) ordering by ≼. We denote the running time of multiplying an n× m matrix with an m× k matrix with (n,m,k). Then n^ω=(n,n,n). We denote the transposition of a matrix by ^⊤ and its conjugate transposition by ^*. Note that for real matrices, transposition and conjugate transposition are the same. We also denote the complex conjugate of a number a∈ by a^*. Moreover we define i=√(-1). For a matrix , we denote its real part and imaginary part by () and (), respectively. Note that both () and () are real matrices and = () + i ·(). 
We use notation to omit polylogarithmic factors in n and ℓ from the complexity, i.e., for function f, (f):=O(f·log^c (nℓ)) where c is a constant. We denote the set {1,…,n} by [n]. We extensively use the shift matrix _n ∈^n× n that is zero everywhere except on the entries under the diagonal for which it is one. For example, _4 = [ 0 0 0 0; 1 0 0 0; 0 1 0 0; 0 0 1 0 ]. When the dimension of _n is clear from the context, we omit the subscript and show the shift matrix by . Multiplying a matrix from left by (^⊤) shifts the rows of the matrix down (up) by one row and multiplying a matrix from right by (^⊤) shifts the columns of the matrix left (right) by one column. A matrix is symmetric if =^⊤ and is Hermitian if = ^*. Also is skew-symmetric if ^⊤ = -. Let be a field and = (h_1,…,h_2n-1) ∈^2n-1 be a vector. Then the corresponding Hankel matrix is defined as _ij = h_i+j-1. For example for n=4, = [ h_1 h_2 h_3 h_4; h_2 h_3 h_4 h_5; h_3 h_4 h_5 h_6; h_4 h_5 h_6 h_7; ]. For a vector =(t_1,…,t_n)∈^n, where t_1∈, the corresponding Hermitian Toeplitz matrix is defined as _i,j = t_j-i+1 if j≥ i, and _i,j = t_i-j+1^*, otherwise. For example, the Hermitian Toeplitz matrix corresponding to (t_1,t_2,t_3,t_4) is = [ t_1 t_2 t_3 t_4; t_2^* t_1 t_2 t_3; t_3^* t_2^* t_1 t_2; t_4^* t_3^* t_2^* t_1 ]. Note that this can be considered for a general field by extending it using the polynomial root x^2+1=0. It is easy to check that for a Toeplitz matrix , - ^⊤ is of rank two, and for a Hankel matrix , - ^⊤ has rank two. These are called the displacement rank of Toeplitz and Hankel matrices. The general definitions are as the following. Let ,,∈^n× n. The Sylvester-type displacement rank of with respect to (, ) is equal to the rank of -. The Stein-type displacement rank of with respect to (, ) is equal to the rank of -. For example a Hankel matrix has a Sylvester-type displacement rank of two with respect to (,^⊤ ). This allows us to define the displacement rank for block-Hankel matrices of the following form as well. = [ _1 _2 _3 _4; _2 _3 _4 _5; _3 _4 _5 _6; _4 _5 _6 _7; ], where each _i is an s× s matrix. Then has a Sylvester-type displacement rank of 2s with respect to (,^⊤), where = [ 0_s× s 0_s× s 0_s× s 0_s× s; _s 0_s× s 0_s× s 0_s× s; 0_s× s _s 0_s× s 0_s× s; 0_s× s 0_s× s _s 0_s× s; ]. Moreover, the inverse of a Hankel matrix is not Hankel but it has a Sylvester-type displacement rank of two with respect to (,^⊤ ). Similarly the inverse of a Toeplitz matrix is not Toeplitz but it has a Stein-type displacement rank of two with respect to (,^⊤). Note that multiplying a vector by a Hankel or Toeplitz matrix can be done in (n) time using FFT techniques <cit.> due to their connections to single-variable polynomials. Finally, the following illustrates the connection between inverse maintenance and solving a batch of linear systems. [Sherman-Morrison-Woodbury identity <cit.>] For an invertible n × n matrix and matrices ∈^n × r,∈^r × r,∈^r × n, if and (+)^-1 are invertible, then (+)^-1 = ^-1 - ^-1 (^-1 + ^-1)^-1^-1. § RESULTS AND CONJECTURES Our first conjecture is about computing a symmetric factorization of positive definite Hankel matrix as ^* in linear time. Since is at least n× n (for a full-rank Hankel matrix), we do not require outputting explicitly. Instead, the output should be an implicit representation of size (n·ℓ) that describes . Note that this is similar to the way that Hankel matrices are described as well. 
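To make the generating-vector view of a Hankel matrix concrete, the following is a minimal sketch in Python (assuming numpy and scipy, which are not part of this paper): it builds a Hankel matrix from its 2n-1 generating entries, checks that its Sylvester-type displacement rank with respect to (Z, Z^⊤) is two, and applies it to a vector in O(n log n) time via an FFT-based correlation instead of forming the full matrix.

import numpy as np
from scipy.linalg import hankel
from scipy.signal import fftconvolve

n = 8
rng = np.random.default_rng(0)
h = rng.standard_normal(2 * n - 1)      # generating vector (h_1, ..., h_{2n-1})

# Explicit n-by-n Hankel matrix with entries H_{jk} = h_{j+k-1}.
H = hankel(h[:n], h[n - 1:])

# Sylvester-type displacement: Z H - H Z^T is nonzero only in its first row
# and first column, so its rank is (at most) two.
Z = np.diag(np.ones(n - 1), k=-1)       # lower shift matrix
print(np.linalg.matrix_rank(Z @ H - H @ Z.T))        # 2

# Fast matrix-vector product from the generating vector alone:
# (H x)_j = sum_k h_{j+k-1} x_k is a correlation of h with x.
x = rng.standard_normal(n)
y = fftconvolve(h, x[::-1], mode="valid")            # O(n log n)
print(np.allclose(H @ x, y))                         # True

Only the 2n-1 numbers in the generating vector are ever needed here, which is exactly the kind of implicit, linear-size description referred to above.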
For example, if we give (h_1,…,h_7) in (<ref>), then the corresponding Hankel matrix is completely described and this representation has a linear size in n. Let ∈^n× n be a positive definite Hankel matrix with bit complexity ℓ. There exists an algorithm that finds a representation of a matrix with n rows, (n) columns, and bit complexity ℓ in time (n ·ℓ) such that - ^*_<1/2^ℓ. Our conjecture over finite fields would require such that =^*. In this case, we assume the field operations are performed in O(1) time, and therefore we require a running time of (n). For matrices over , we also require - ^*_<1/2^ℓ. To justify our conjecture, we provide an algorithm that runs in the specified running time and computes a representation of a factorization of the form ^* - ^*. theoremsymmetricHankel Let ∈^n× n be a Hankel matrix with bit complexity ℓ. There exists an algorithm that finds a representation of matrices and , each with n rows, O(nlog n) columns, and bit complexity ℓ in time (n ·ℓ) such that - (^* - ^*)_<1/2^ℓ. Since Hankel matrices are symmetric, Theorem <ref> does not require the positive definite condition. Our algorithm gives similar bounds and running times for matrices over and over finite fields it finds representations of and such that = ^* - ^*. The factorization of the form ^* - ^* has been considered before for Toeplitz matrices and their inverses with the goal of solving linear systems with a Toeplitz matrix in linear time <cit.>. The positive semi-definiteness of ^* and ^* provides some stability properties for solving linear systems with a Toeplitz matrix <cit.>. These algorithms are related to the study of orthogonal polynomials and generally either use the Schur algorithm or Levinson algorithm to compute and . We provide similar results for Toeplitz matrices with a simpler and more straightforward algorithm. theoremsymmetricToeplitz Let ∈^n× n be a Hermitian Toeplitz matrix with bit complexity ℓ. There exists an algorithm that finds a representation of matrices and , each with n rows, O(nlog n) columns, and bit complexity ℓ in time (n ·ℓ) such that - (^* - ^*)_<1/2^ℓ. Theorem <ref> is used as a subprocedure for Theorem <ref>. The simplicity of our algorithm for Toeplitz matrices allows us to use it in combination with a recursive algorithm that recursively decomposes a Hankel matrix to the sum of log(n) Toeplitz-like matrices to achieve our main result for decomposition of Hankel matrices. Our second conjecture is about computing a symmetric factorization for the inverse of a positive definite Hankel matrix. Note that we do not require the inverse as input since the approach of <cit.> can obtain a representation of it that has a linear size in linear time. Let ∈^n× n be a positive definite Hankel matrix with bit complexity ℓ and condition number bounded by 2^ℓ. There exists an algorithm that finds a representation of a matrix with n rows, (n) columns, and bit complexity ℓ in time (n^ω/2·ℓ) such that ^-1 - ^*_<1/2^ℓ. Again for finite fields, we require ^-1=^* and a running time of (n^ω/2). Note the difference between the running time of Conjecture <ref> and Conjecture <ref>. This is because of the running time that we can achieve for the factorization of the form ^* - ^* in the following result. theoremsymmetricInverseHankel Let ∈^n× n be a Hankel matrix with bit complexity ℓ and condition number bounded by 2^ℓ. 
There exists an algorithm that finds a representation of matrices and , each with n rows, O(nlog n) columns, and bit complexity ℓ in time (n^ω/2·ℓ) such that ^-1 - (^* - ^*)_<1/2^ℓ. The result of Theorem <ref> is actually more general than the inverse of Hankel matrices. The algorithm we present can find such a factorization in the specified time for any matrix that has a Sylvester-type displacement rank of two with respect to (, ^⊤) and it can be generalized to block matrices as described in Section <ref>. The main reason for the running time difference between Theorem <ref> and Theorem <ref> is the recursion in our algorithm. For Theorem <ref>, our recursion starts with the n× n Hankel matrix and modifies it to a matrix with four blocks of size n/2×n/2 where each block itself is Hankel, i.e., the displacement rank of the blocks is the same as the larger matrix. It then continues this process for O(log n) iterations. However, for general matrices with small Sylvester-type displacement rank, when we apply the recursion, the Sylvester-type displacement rank of the blocks is doubled. This forces us to stop the recursion when the size of the blocks is √(n) and results in the running time proportional to n^ω/2. § MOTIVATION AND RELATED WORK Hankel matrices have many connections to Toeplitz matrices. One can see that reversing the order of rows or columns of a Hankel matrix results in a Toeplitz matrix and vice versa. Therefore solving a linear system for Toeplitz matrices implies a solver for Hankel matrices as well. Therefore many works have focused on Toeplitz matrices. However, there are some applications that are specifically directed to Hankel matrices. Examples are linear system solvers based on block Krylov matrices (that are used to solve linear systems with general poly-conditioned sparse matrices <cit.>) and sum-of-squares (SoS) decomposition of single-variable polynomials. Here we first discuss sparse linear system solvers based on block-Krylov methods in Section <ref> and explain how our conjecture leads to faster algorithms for solving a batch of linear systems. Then in Section <ref>, we explain how this leads to a faster algorithm for solving sparse poly-conditioned linear programs faster than matrix multiplication time. We finally discuss the connection of Hankel matrices to the sum-of-squares (SoS) decomposition of single variable polynomials in Section <ref>. §.§ Faster Sparse Linear System Solvers for Batch Problems We start by describing the block-Krylov approach that has resulted in faster sparse linear system solvers for matrices over rational numbers <cit.>, fixed-point arithmetic <cit.>, and finite fields <cit.>. Linear system solvers based on block-Krylov matrices. To solve a linear system =, this approach forms a block Krylov matrix =[ ^2 ⋯ ^m-1 ]∈^n× n, where is a sparse n-by-s random matrix, and m· s = n. If the matrix is sparse, for example, its number of nonzero entries is O(n), then can be formed quickly. Note that ^i+1 can be obtained from ^i by multiplying it with . More specifically, for with constant bit complexity, can be formed in time (· s · m^2)=(· n · m), where the · s factor comes from the time that takes to multiply by an n-by-s matrix. One of the factors of m comes from the number of such matrix multiplications we need to perform and the other one comes from the bit-complexity of the resulting matrices, e.g., the entries of ^m-1 need (m) bits. 
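As a small illustration of how the block-Krylov matrix is assembled, the following sketch (assuming Python with numpy/scipy; the sizes and density are illustrative and not taken from <cit.>) builds K column-block by column-block, reusing the previous block for each sparse multiplication.

import numpy as np
import scipy.sparse as sp

n, s = 64, 8
m = n // s                               # so that m * s = n
rng = np.random.default_rng(1)

# A sparse symmetric matrix with O(n) nonzero entries and a random n-by-s block B.
A = sp.random(n, n, density=4 / n, random_state=rng, format="csr")
A = A + A.T + 10 * sp.identity(n)
B = rng.standard_normal((n, s))

# K = [B, AB, A^2 B, ..., A^{m-1} B]; each block is obtained from the previous
# one with a single sparse multiplication, so only m - 1 products are needed.
blocks = [B]
for _ in range(m - 1):
    blocks.append(A @ blocks[-1])
K = np.hstack(blocks)                    # n-by-n block-Krylov matrix

# For symmetric A, K^T A K is the block-Hankel matrix discussed next.
G = K.T @ (A @ K)
print(K.shape, np.allclose(G, G.T))      # (64, 64) True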
For a small enough m (for example, m≈ n^0.01), (· n · m) is smaller than the matrix multiplication time if ≪ n^ω-1. Then the inverse of is presented by (^⊤)^-1^⊤. Note that for symmetric , ^⊤ is a block-Hankel matrix of the following form ^⊤ = [ ^⊤ ^⊤^2 ^⊤^3 ⋯ ^⊤^m; ^⊤^2 ^⊤^3 ^⊤^4 ⋯ ^⊤^m+1; ^⊤^3 ^⊤^4 ^⊤^5 ⋯ ^⊤^m+2; ⋮ ⋮ ⋮ ⋱ ⋮; ^⊤^m ^⊤^m+1 ^⊤^m+2 ⋯ ^⊤^2m-1; ]∈^n× n. Note that the symmetry assumption for is not a limitation since we can instead consider the linear system ^⊤ = ^⊤, which has a symmetric matrix. One can think of this matrix as an m-by-m Hankel matrix where each entry is a s-by-s matrix with bit-complexity of (m). Therefore multiplying any two entries of this matrix together costs (s^ω· m). Moreover ^⊤ can be multiplied with an n× s matrix in time (s^ω· m^2) by using fast Fourier transform (see <cit.> for details). Finally, note that ^⊤ can be formed in time (· n · m) similar to the approach we described above for computing . Fast and super fast solvers for block-Hankel matrices. To discuss the running time of inverting the block-Hankel matrix ^⊤ or applying the inverse to a block-matrix (or a vector), we need to consider the number of block operations. One can think of each block operation as multiplying two blocks of ^⊤ together. These blocks are s× s and have bit complexity m. So multiplying them by fast matrix multiplication <cit.> and using FFT to multiply the corresponding numbers in linear time results in a running time of (s^ω· m). Therefore an algorithm that takes k block operations runs in time (s^ω· m · k) with the assumption that the bit complexity stays the same during the algorithm. Therefore fast solvers that need m^2 operations are slow for inverting ^⊤, since they result in a total cost of s^ω· m^3 > n^ω. Thus one needs to use super fast solvers for the matrix ^⊤. Most of the classical super fast solvers are either based on orthogonal polynomials <cit.> or based on the conversion of Hankel matrix to generalized Cauchy and hierarchically semi-separable (HSS) matrices <cit.> that admit low-rank properties for off-diagonal blocks. The caveat of these methods is that they blow up the bit complexity of L to at least L^2. This means an extra factor of m in addition to s^ω· m^2 operations which again results in a total running time of more than n^ω. There has been another class of super fast solvers based on hierarchical Cholesky decomposition and Schur complements that classically were analyzed in the exact computation setting (for example, for matrices on finite fields) <cit.>. Very recently, <cit.> analyzed such algorithms for real matrices in the fixed-point arithmetic and showed that such super-fast solvers only need to increase the bit complexity by polylogarithmic factors in n. This resulted in an algorithm with a total running time of (s^ω m^2) for finding a representation of the inverse of ^⊤. This algorithm was one of the main building blocks that allowed <cit.> to go below matrix multiplication time. The representation of the inverse of ^⊤ obtained from this approach is the product of two matrices and ^⊤, i.e., (^⊤)^-1≈^⊤. and are block matrices with a small displacement rank of 2s. Therefore they can be applied to another matrix of size n× s with (m) block operations by utilizing FFT. The caveat of this approach is that the bit complexity of matrices and is Ω(m). 
Therefore although they can solve one linear system faster than matrix multiplication time, for any selection of parameters s and m, there is a 0<c<1 such that solving n^c linear system with a common matrix takes more than the matrix multiplication time. This is strange since inverting the matrix n using fast matrix multiplication takes (n^ω) time and then the inverse can be applied to n vectors in (n^ω) time <cit.> and this does not need any sparsity properties. We now bound the running time of solving a batch of linear systems of size r with <cit.> solver. To do so, we need the following lemma (proved in Section <ref>) for the running time of applying the matrix to a matrix of size n× r. lemmaapplyK Let ∈^n× n and ∈^n× r. Let ∈^n× s be a matrix with (n) nonzero entries, where 1≤ s≤ n is a divisor of n. Let m=n/s and = [ ^2 ⋯ ^m-1 ]∈^n× n. Let bit complexity of and be ℓ and the bit complexity of be m·ℓ. Then and ^⊤ can be computed in (· r · m^2 ·ℓ) time. Assuming the bit complexity of the input matrix is constant and its condition number is n, for a fixed m and r, the total running time of applying the inverse operator of <cit.> to an n× r matrix is the following. (· n · m + n^ωm^2-ω + m^2 ·(n/m,n/m,r) + · r · m^2). The first term of (<ref>) is for forming and ^⊤. The second term is finding the representation of the inverse. The third term is the running time of applying the inverse of ^⊤ to an n× r matrix. The last term is for applying ^⊤ or to an n× r matrix (see Lemma <ref>). Note that for solving one linear system, (<ref>) boils down to (· n · m + n^ωm^2-ω). Then one can see that by taking m = n · ()^-1/(ω-1), a running time of (n^2 ()^(ω-2)/(ω-1)) is achieved, which is faster than matrix multiplication for all values of ω>2 and < n^ω-1. The running time of (<ref>) is obtained by applying , (^⊤)^-1 and ^⊤ separately. Another approach is to take and from (<ref>) and compute = and = using Lemma <ref>. Then the inverse of is given by ^⊤, where the bit complexity of and is (m). Then solving a batch of linear systems of size r by multiplying and takes the following running time. (· n · m^2 + n^ωm^2-ω + m ·(n,n,r)). The first term of (<ref>) is from computing ,, which also dominates the running time of forming and ^⊤. The second term is for finding the representation of the inverse of ^⊤, and the last term comes from the running time of multiplying and with an n× r matrix. Given a fixed r, one can optimize over the best value of m for each of (<ref>) and (<ref>) and report the smaller running time. This is what we used for Table <ref>. Symmetric factorization of inverse operator for faster batch solves. The main caveat of the approach of <cit.> is that the representation of the inverse has a bit complexity of Ω(m), whether we use ^-1 representation or ^⊤^⊤. This is the main reason that when applied to large batches, the running time of <cit.> becomes slower than direct methods. Here we present an approach based on our conjectures to obtain a representation of the inverse with small bit complexity. Our approach is to write the inverse of ^⊤ as a symmetric factorization ^*. In this case, the inverse of is represented as ()()^*. Therefore we have _ = √(trace(()()^*)) = √(trace(^-1)) = √(∑_i=1^n λ_i), where λ_i's are the eigenvalues of ^-1. Therefore in the case where λ_i's are poly(n) (which is the assumption in <cit.>), the absolute value of entries in the matrix is bounded by poly(n). Moreover, one can compute using Lemma <ref>. 
Therefore, in this case, we can represent the inverse of as ^*, where =, and the bit complexity of entries of is (1). Then the running time of solving a batch of linear systems of size r becomes (· n · m^2 + n^ωm^1-ω/2 + (n,n,r)) since can be applied to an n× r matrix in time (n,n,r). Note that we require the error bound of less than 1/2^ℓ (which here be less than 1/2^m) in Conjectures <ref> and <ref> because the bit complexity of is (m) and this way we can guarantee that ()()^* is close to ^-1. Note that Conjecture <ref> gives an algorithm that runs with (m^ω/2) block operations and uses numbers with the bit complexity of the input problem. Therefore Conjecture <ref>, if true, computes a representation of the matrix in time (s^ω· m^1+ω/2) = (n^ωm^1-ω/2) such that ^* - ^⊤_≤1/2^m. Since the bit complexity of this representation is (m), we can write down in time (n^2 · m) and then use Lemma <ref> to compute = in time (· n · m^2). This gives us the running time stated in Equation (<ref>), which is also the formula we used for our running time in Table <ref>. We next discuss how our approach results in a faster-than-matrix-multiplication time for solving linear programs with sparse and poly-conditioned matrices. §.§ Solving Linear Programs Faster than Matrix Multiplication Here we first give a simple explanation of the linear systems that are solved in each iteration of interior point methods (IPMs) for solving LPs. IPMs are the state-of-the-art approach for solving LPs. The seminal works of Karmarkar <cit.> and Vaidya <cit.> started the study of IPMs, and recently, IPM-based approaches have resulted in algorithms that solve linear programs approximately in (n^ω) arithmetic operations <cit.>. We consider the linear programs of the form min_^⊤ = , ≥ 0^⊤     (primal)     and     max_≤  ^⊤     (dual), where ∈^n× d, ∈^d, ∈^n, and n≥ d. Starting from a feasible solution, each iteration k of IPM corresponds to computing a vector of the following form √(^(k)) (^⊤^(k))^-1^⊤√(^(k))^(k), where ^(k)∈^n× n is a diagonal matrix and ∈^n. Note that this is equivalent to solving a linear system with the matrix ^⊤^(k). Recent advances in IPMs <cit.> have shown that instead of (<ref>), we can use the following vector √(^(k)) (^⊤^(k))^-1^⊤√(^(k))^(k), where ^(k)∈^n× n is another diagonal matrix such that ^(k) - ^(k)_∞ < C for some constant C, and ^(k) and ^(k) are the vectors corresponding to diagonal matrices ^(k) and ^(k), respectively. Another insight from IPMs is that ^(k-1) and ^(k) are very close to each other in the sense that ^(k-1) - ^(k)_2 < β for some constant β. Then the following lemma allows us to bound the number of low-rank changes we need to apply to ^(k) to maintain ^(k) - ^(k)_∞ < C over the course of the algorithm. Therefore we can use the Sherman-Morrison-Woodbury identity (Fact <ref>) to maintain the inverse (^⊤^(k))^-1, and this results in an algorithm for solving LPs with (n^ω) arithmetic operations. Let β>0 be a constant. Let ^(0),^(1),^(2),… be vectors in ^n arriving in a stream with the guarantee that ^(k+1)-^(k)_2 ≤β for all k. Then for 0<C<0.5, we can pick ^(0),^(1),^(2),…, so that (see Algorithm 4 on <cit.>) * ^(k)-^(k)_∞≤ C for all k. * ^(k)-^(k-1)_0 ≤ O(2^2q_k(β/C)^2 log^2(n)) where q_k is the largest integer with k = 0 2^q_k. In the original papers of Cohen-Lee-Song <cit.> and Brand <cit.>, instead of the matrix (^⊤^(k))^-1, the matrix (^⊤^(k))^-1^⊤ is maintained. 
The reason is that for a dense matrix (e.g., =Ω(n^2)), the cost of multiplying by a vector in each iteration is Ω(n^2). Therefore since the number of iterations of IPM is √(n), this alone gives a running time of Ω(n^2.5), which is much higher than n^ω. However in our case, since is sparse with o(n^ω-1) nonzero entries, the cost of this multiplication over the course of the algorithm is at most O(n^ω-0.5). Therefore we focus on maintaining (^⊤^(k))^-1. To maintain (^⊤^(k))^-1, we either have to use Fact <ref> or compute (^⊤^(k))^-1 from scratch. Consider a fix m for the sparse solver of <cit.> and the number of updates of rank n/m in the IPM, i.e., the number of indices k such that n/m entries are different between ^k and ^k+1. By Lemma <ref>, the number of such changes is (√(m)). If we recompute the inverse from scratch when we encounter these updates, then by (<ref>), our cost is at lease Ω(· n · m + n^ωm^2.5-ω), which is larger than n^ω because 2.5>ω. If we use Sherman-Morrison Woodbury identity (Fact <ref>), since it is equivalent to applying the inverse to an n×n/m matrix, the cost is at least Ω(m^0.5· s^ω· m^2) = Ω(s^ω· m^2.5), because applying the inverse of ^⊤ to an n×n/m matrix costs at least Ω(s^ω· m^2). This is again more than n^ω because ms=n. Using the representation also leads to a cost of Ω(m^1.5·(n,n,n/m)), which is again more than n^ω. However, if based on our conjectures, we had a representation of the form ^*, then by (<ref>), the cost of this would be O(m^0.5·(n,n,n/m)), which is smaller than matrix multiplication time. Now suppose our conjectures are true and we can find a representation of the inverse as ^*. To go below matrix multiplication time for this inverse maintenance problem, one can adapt the following approach: If the rank of the update is larger than n/m^(ω-2)/2, recompute the inverse and ^* from scratch. If the rank of the update is smaller than n^α, use the Sherman-Morrison-Woodbury identity in an online way, i.e., compute the product of each term with the given vector separately (where α>0.31 is the dual of matrix multiplication exponent and is the largest number such that an n× n matrix can be multiplied with an n× n^α matrix in O(n^2+o(1)) time). Finally if the rank of the update was between n^α and n/m^(ω-2)/2, compute the update term of Sherman-Morrison-Woodbury identity (i.e., the second term) and store it as an explicit matrix . With the above approach, the inverse operator is then given as ^* + +, where is an implicit matrix given by Sherman-Morrison-Woodbury identity, i.e., = - (^* + ) (_S)^⊤ (^-1 + _S (^* + ) (_S)^⊤)^-1_S (^* + ), where S is the set of indices corresponding to updates to that are not incorporated to ^* or , and is the diagonal matrix corresponding to these updates. Note that the cost of applying to any matrix is the same as the cost of applying . Then one can see that by Lemma <ref> and Equation (<ref>), the cost of inverse maintenance is bounded by (· n · m^1.5+ω/4 + n^ωm^0.5-ω/4 + m^(ω-2)/4(n,n,n/m^(ω-2)/2)), where the first two terms come from the cost of recomputations of the inverse, and the other term comes from the updates performed using Sherman-Morrison-Woodbury identity. Now note that the exponent of m in the second term of (<ref>) is negative for any ω>2. For any value of ≪ n^ω-1, we can take m small enough to make the first term less than n^ω. Similarly, the third term is smaller than n^ω. This can be checked by the online tool of <cit.>. 
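To make the role of the Sherman-Morrison-Woodbury identity in this maintenance scheme concrete, here is a minimal numerical sketch (assuming numpy; the dimensions, the explicit dense inverse, and the function name are illustrative stand-ins for whatever implicit inverse representation is actually maintained). A low-rank change to the diagonal scaling is absorbed without refactorizing the matrix.

import numpy as np

rng = np.random.default_rng(2)
n, d, k = 200, 50, 5                      # constraints, variables, rank of the update

A = rng.standard_normal((n, d))
w = rng.uniform(1.0, 2.0, size=n)         # current diagonal scaling
M = A.T @ (w[:, None] * A)                # M = A^T W A
M_inv = np.linalg.inv(M)                  # stand-in for the maintained inverse operator

# The scaling changes on a small set S of coordinates.
S = rng.choice(n, size=k, replace=False)
dw = rng.uniform(0.05, 0.1, size=k)
U = A[S].T                                # d-by-k
D = np.diag(dw)

# Sherman-Morrison-Woodbury: apply (M + U D U^T)^{-1} to a vector using only
# M_inv and a small k-by-k solve, instead of recomputing the inverse.
def apply_updated_inverse(v):
    y = M_inv @ v
    cap = np.linalg.inv(D) + U.T @ (M_inv @ U)        # k-by-k capacitance matrix
    return y - M_inv @ (U @ np.linalg.solve(cap, U.T @ y))

v = rng.standard_normal(d)
w_new = w.copy()
w_new[S] += dw
M_new = A.T @ (w_new[:, None] * A)
print(np.allclose(apply_updated_inverse(v), np.linalg.solve(M_new, v)))   # True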
In addition to inverse maintenance, one needs to consider the cost of queries for computing (<ref>). For these, since we only need to have ^(k) that is close to ^(k), one does not need to compute all the entries of the formula. We only need to compute the entries that cause an entry of ^(k) to change in the next iteration. This can be done by using heavy-hitters data structures in a way similar to their use in the recent works for solving tall dense linear programs, see <cit.>. We omit the details of this here, but one can verify that with this approach, the total cost of queries can also be made less than matrix multiplication time. Therefore the overall approach gives an algorithm for solving linear programs with a sparse (i.e., =o(n^ω-1)) and poly-conditioned matrix faster than matrix multiplication time. §.§ SoS decomposition of polynomials If the coefficients of a degree n polynomial p is represented by a vector = [ a_n a_n-1 ⋯ a_1 a_0 ]^⊤∈^n+1, then with = [ x^n x^n-1 ⋯ x 1 ]^⊤, ^⊤ = p. Another way of representing a polynomial p(x)=a_0 + a_1 x + a_2 x^2 + ⋯ + a_2k x^2k of even degree using a Hankel matrix is to define ∈^(k+1)× (k+1) as _ij = a_i+j-2/i+j-1 if i+j ≤ k+1, and _ij = a_i+j-2/2k-i-j+1, otherwise. For example, for a degree 4 polynomial, we have = [ a_0 a_1/2 a_2/3; a_1/2 a_2/3 a_3/2; a_2/3 a_3/2 a_4 ]. Then one can see that with = [ 1 x ⋯ x^k-1 x^k ]^⊤, we have p=^⊤. Now suppose there exists polynomials ℓ_1,…,ℓ_m (of degree at most k) such that p = ∑_j=1^m ℓ_j^2. Then showing the coefficient of ℓ_j with b_0^(j),…,b_k^(j), for j∈[m], and defining the matrix ∈^(k+1)× m as _r,j=b_r-1^(j), we have p = ^⊤^⊤. Now note that a symmetric factorization of like =^⊤ gives us such coefficients for the polynomials. Moreover for j∈[m], we have ℓ_j^2 (x) =^⊤_:,j^⊤_:,j=^⊤[ b_0^(j) b_1^(j) ⋯ b_k^(j) ]^⊤[ b_0^(j) b_1^(j) ⋯ b_k^(j) ] = (b_0^(j) + b_1^(j) x + ⋯ + b_k^(j) x^k)^2 Therefore symmetric factorization of Hankel matrices give a sum-of-squares (SoS) decomposition of single-variable polynomials. § SYMMETRIC FACTORIZATION OF HERMITIAN TOEPLITZ MATRICES We start this section by showing how one can find a symmetric factorization of a certain rank-two Hermitian matrix. We will then use this to find a symmetric factorization for a Hermitian Toeplitz matrix. Let be a rank two Hermitian matrix of the following form. = [ 0 ⋯ 0 t_1^* 0 ⋯ 0; 0 ⋯ 0 t_2^* 0 ⋯ 0; ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮; 0 ⋯ 0 t_j-1^* 0 ⋯ 0; t_1 ⋯ t_j-1 0 t_j+1 ⋯ t_n; 0 ⋯ 0 t_j+1^* 0 ⋯ 0; ⋮ ⋱ ⋮ ⋮ ⋮ ⋱ ⋮; 0 ⋯ 0 t_n^* 0 ⋯ 0 ]. In other words, only the j'th row and column of this matrix is nonzero and its entry (j,j) is also zero. Then has exactly two nonzero eigenvalues λ_1 and λ_2 that are real and λ_1=-λ_2. Moreover let _1 and _2 be the eigenvectors corresponding to λ_1 and λ_2, respectively. Also let λ_1 be the positive eigenvalue and _1=√(λ_1)_1, _2=√(λ_1)_2. Then = _1 _1^* - _2 _2^*. We calculate the eigenvalue decomposition of . Since is Hermitian, its eigenvectors can be picked to be orthonormal, and since is a rank two matrix, it has at most two nonzero eigenvalues that can be computed by the formula =λ. This gives the following set of linear systems ∑_k∈[n],k≠ j t_k v_k = λ v_j, t_k^* v_j = λ v_k, ∀ k∈[n], k≠ j. Therefore for nonzero λ, we have v_k=t_k^* v_j/λ. Substituting this into the first equation, we have v_j/λ∑_k∈[n],k≠ j^n (t_k t_k^*) = λ v_j. Note that v_1 is nonzero because otherwise all of v_k's are zero by (<ref>) (and this is in contrast with the assumption that the norm of the eigenvectors is equal to one). 
Therefore λ^2 = ∑_k∈[n],k≠ j^n (t_k t_k^*) = ∑_k∈[n],k≠ j^n t_k^2. Hence the right hand side is positive and has two real eigenvalues λ_1=√(∑_k=1^n (t_k t_k^*)) and λ_2=-√(∑_k=1^n (t_k t_k^*)), where we define t_j=0. Let _1 and _2 be the eigenvectors corresponding to λ_1 and λ_2, respectively. Let _1=√(λ_1)_1 and _2=√(λ_1)_2. Note that since λ_1 is positive √(λ_1) is real and therefore _1^* = √(λ_1)_1^* and _2^* = √(λ_1)_2^*. Then we have = λ_1 _1 _1^* + λ_2 _2 _2^* = λ_1 _1 _1^* - λ_1 _2 _2^* = _1 _1^* - _2 _2^*. By Lemma <ref>, to find a symmetric factorization of the matrix in (<ref>), we only need to find its eigenvalues and eigenvector. Since is a rank two matrix with O(n) nonzero entries, this can be done in O(n) time. To prove Theorem <ref>, we essentially find a symmetric factorization of such a matrix and show that a symmetric factorization of a Hermitian Toeplitz matrix can be constructed by shifting and adding this symmetric factorization for a rank two matrix. * Let be a Toeplitz matrix that is equal to everywhere except on the diagonal and the diagonal of is equal to zero. We now show that can be written as _1 _1^* - _2 _2^* for _1,_2∈^n× n. Let = - ^⊤. Since is a Toeplitz matrix, is a matrix of rank two of the following form = [ 0 t_2 t_3 ⋯ t_n; t_2^* 0 0 ⋯ 0; t_3^* 0 0 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; t_n^* 0 0 ⋯ 0 ]. Therefore by Lemma <ref>, there exists _1 and _2 such that = _1 _1^* - _2 _2^*. Now note that = ∑_j=1^n^j-1 (^j-1)^⊤. Therefore defining _1 = [ _1 _1 ^2 _1 ⋯ ^n-1_1 ], _2 = [ _2 _2 ^2 _2 ⋯ ^n-1_2 ], we have = _1 _1^* - _2 _2^*. Let t_1 be the diagonal element of that is a real number because is a Hermitian matrix. Then if t_1≥ 0, setting = [ _1 √(t_1) ], and = _2, and setting = _1, and = [ _2 √(t_1) ], otherwise, we have = ^* - ^*. Finally note that in the first case is a Toeplitz matrix, and therefore has a displacement rank of two and has a displacement rank of two with respect to (, [ ]^⊤). Similarly, in the second case also the displacement rank of both matrices is two. This implies that a vector can be multiplied by ,,^*,^* in (n) time by FFT techniques. § KEY IDENTITY FOR HANKEL MATRICES In this section, we consider a symmetric factorization of a Hankel matrix and without loss of generality, we assume ∈^2^k× 2^k, for k∈. Note that if the dimensions of is not a power of two, we can extend it to a Hankel matrix in which the dimensions are a power of two as the following. Let =(h_1,…,h_2s-1), for s∈, be the generating vector of the Hankel matrix . In this case ∈^s× s, and we are assuming s is not a power of two. Let k be the smallest integer such that 2^k>s and let =(h_1,…,h_2s-1,0,…,0)∈^2^k+1-1. Now let be a Hankel matrix with generating vector . Then ∈^2^k × 2^k. Moreover _1:s,1:s =. Therefore if , are matrices such that = ^* - ^*, then defining = _1:s,: and = _1:s,:, we have = ^* - ^*. Therefore we only need to find a symmetric factorization of . We now define a matrix that converts a Hankel matrix to Toeplitz and vice versa. Let _n∈^n× n be a matrix with _n(i,j)=1 if i+j=n+1, and _n(i,j)=0, otherwise. For example _4 = [ 0 0 0 1; 0 0 1 0; 0 1 0 0; 1 0 0 0 ]. We call this matrix the exchange matrix (also called backward identity). When the dimension is clear from the context, we show the exchange matrix with just . Note that =, and ^⊤ =. Moreover we say, a matrix is centrosymmetric if =, is persymmetric if = ^⊤, and is bisymmetric if it is both symmetric and centrosymmetric. 
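Before moving on to the Hankel case, the shift-and-add construction from the proof of the Toeplitz factorization theorem above can be checked numerically. The following sketch (assuming numpy/scipy; shown for a real symmetric Toeplitz matrix with a nonnegative diagonal to keep it short) extracts the rank-two displacement, splits it into its ±λ eigenpairs, and stacks shifted copies to obtain T = BB^⊤ - CC^⊤.

import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(3)
n = 6
t = rng.standard_normal(n)
t[0] = abs(t[0])                          # nonnegative diagonal goes into B
T = toeplitz(t)                           # real symmetric Toeplitz matrix

Z = np.diag(np.ones(n - 1), k=-1)         # lower shift matrix
T0 = T - t[0] * np.eye(n)                 # zero-diagonal part

# The displacement U = T0 - Z T0 Z^T has rank two, is nonzero only in its
# first row and column, and has eigenvalues +lambda and -lambda.
U = T0 - Z @ T0 @ Z.T
vals, vecs = np.linalg.eigh(U)
b = np.sqrt(vals[-1]) * vecs[:, -1]       # +lambda eigenpair
c = np.sqrt(-vals[0]) * vecs[:, 0]        # -lambda eigenpair

# T0 = sum_j Z^j U (Z^j)^T, so shifted copies of b and c give the factors.
B0 = np.column_stack([np.linalg.matrix_power(Z, j) @ b for j in range(n)])
C = np.column_stack([np.linalg.matrix_power(Z, j) @ c for j in range(n)])
B = np.hstack([B0, np.sqrt(t[0]) * np.eye(n)])        # absorb the diagonal

print(np.allclose(T, B @ B.T - C @ C.T))  # True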
The next lemma describes our similarity transformation to decompose a Hankel matrix to the sum of a Hermitian Toeplitz matrix and a centrosymmetric Hankel matrix. Let = 1/2(1+i)+ 1/2(1-i), where is the identity matrix and is the exchange matrix. Let be a Hankel matrix. The imaginary part of ^* is a skew-symmetric Toeplitz matrix with zero diagonal and its real part is a centrosymmetric Hankel matrix. We have ^* = (1/2(1+i)+ 1/2(1-i) ) (1/2(1+i)+ 1/2(1-i) )^* = (1/2(1+i)+ 1/2(1-i) ) (1/2(1-i)+ 1/2(1+i) ) = 1/4((1+i)(1-i) + (1+i)(1-i) + (1+i)^2 + (1-i)^2 ) = 1/2 ( + ) + i/2 ( - ). Therefore the real part of ^* is 1/2 ( + ). Now we have ( + ) = + = ( + ). Therefore (^*) is centrosymmetric. Also note that both and are Hankel and the sum of Hankel matrices is a Hankel matrix. Therefore (^*) is also Hankel. In addition, note that since is Hankel, both and are Toeplitz matrices and the sum (and also the difference) of Toeplitz matrices, is a Toeplitz matrix. Therefore - and (^*) are Toeplitz matrices. Finally we have ( - )^⊤ = ^⊤^⊤ - ^⊤^⊤ = - = -( - ). Therefore (^*) is a skew-symmetric matrix, and hence its diagonal is equal to zero. Note that this implies i·(^*) is a Hermitian matrix. We now give a 4-by-4 example to understand Lemma <ref> better. Let = [ h_1 h_2 h_3 h_4; h_2 h_3 h_4 h_5; h_3 h_4 h_5 h_6; h_4 h_5 h_6 h_7; ]. We then have (^*) =1/2[ h_1+h_7 h_2+h_6 h_3+h_5 2h_4; h_2+h_6 h_3+h_5 2 h_4 h_3+ h_5; h_3 + h_5 2h_4 h_3+h_5 h_2+h_6; 2h_4 h_3+h_5 h_2+h_6 h_1+h_7; ], (^*) = 1/2[ 0 h_3 - h_5 h_2 - h_6 h_1 - h_7; h_5 - h_3 0 h_3 - h_5 h_2 - h_6; h_6 - h_2 h_5 - h_3 0 h_3 - h_5; h_7 - h_1 h_6 - h_2 h_5 - h_3 0 ]. Now note that matrix is a unitary matrix and therefore ^* = ^* =. Therefore, we have = ^* ^* = ^* ( (^*) + i·(^*) ) Therefore if we have matrices _1,_2,_1,_2 such that (^*) = _1 _1^* - _1 _1^* and i·(^*) = _2 _2^* - _2 _2^*, then we have = [ ^* _1 ^* _2 ][ ^* _1 ^* _2 ]^* - [ ^* _1 ^* _2 ][ ^* _1 ^* _2 ]^*. In the next section, we discuss how Lemma <ref> can be exploited to devise our recursive algorithm. algoruled 0.15cm § SYMMETRIC FACTORIZATION OF HANKEL MATRICES Since i·(^*) in (<ref>) is a Hermitian Toeplitz matrix, we can use Theorem <ref> to find a symmetric factorization for it. To deal with the real part of ^*, we use Lemma <ref> in a recursive fashion using the following matrix. For n=2^k, and t=1,…,k, we define _t := 1/2(1+i)_n/2^t-1+ 1/2(1-i) _n/2^t-1∈^(2^k-t+1)× (2^k-t+1), and _t := [ _t 0 ⋯ 0; 0 _t ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ _t ]∈^2^k× 2^k. Algorithm <ref> is our main procedure to find a symmetric factorization of a Hankel matrix. However, so far, we only know how to find a symmetric factorization of _1 using Theorem <ref>. Therefore we discuss how to find a symmetric factorization for the rest of _t's. We start by characterizing the structure of matrices _t and _t. For t=1,…,k, matrix _t consists of 2^t× 2^t blocks of Hankel matrices as the following. The first block-row consists of 2^t Hankel matrices [ _1 _2 ⋯ _2^t-1 _2^t ]. The second block-row is [ _2 _1 _4 _3 ⋯ _2^t _2^t - 1 ] For s=1,…,t-1, let ∓_s be the matrix consisting of block-rows 1 to 2^s, and _s be the matrix consisting of block-rows 2^s+1 to 2^s+1. Let ∓_s = [ ∓_s,1 ∓_s,2 ⋯ ∓_s,2^t-s ]. Then _s = [ ∓_s,2 ∓_s,1 ∓_s,4 ∓_s,3 ⋯ ∓_s,2^t-s ∓_s,2^t-s-1 ]. Moreover the structure of the block-columns of _t is similar to the structure of the block-rows we described above. 
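As a quick numerical check of the similarity transformation used in the recursion (a sketch only, assuming numpy/scipy): for a random real Hankel matrix, the real part of Q H Q^* is a centrosymmetric Hankel matrix, the imaginary part is a skew-symmetric Toeplitz matrix with zero diagonal, and Q is unitary.

import numpy as np
from scipy.linalg import hankel

rng = np.random.default_rng(4)
n = 4
h = rng.standard_normal(2 * n - 1)
H = hankel(h[:n], h[n - 1:])              # real Hankel matrix

J = np.fliplr(np.eye(n))                  # exchange matrix
Q = 0.5 * (1 + 1j) * np.eye(n) + 0.5 * (1 - 1j) * J

M = Q @ H @ Q.conj().T
R, S = M.real, M.imag

print(np.allclose(Q @ Q.conj().T, np.eye(n)))            # Q is unitary
print(np.allclose(R[1:, :-1], R[:-1, 1:]),               # R is Hankel ...
      np.allclose(J @ R @ J, R))                         # ... and centrosymmetric
print(np.allclose(S[1:, 1:], S[:-1, :-1]),               # S is Toeplitz ...
      np.allclose(S, -S.T),                              # ... skew-symmetric ...
      np.allclose(np.diag(S), 0))                        # ... with zero diagonal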
Before proving the lemma, note that the description completely describes _t (at least up to the block structure) since it describes the first two block-rows and then it uses the first 2^s block-rows to describe the next 2^s block rows. The structure of _1 follows from Lemma <ref>. We then use induction to prove the structure for the rest of _t's. First note that the number of block-rows and block-columns of _t is twice the number of block-rows and block-columns of _t-1. In other words, each block of _t-1 is split into four blocks in _t. Note that in iteration t, we multiply _t-1 by _t and _t^* from left and right, respectively, which is equivalent to multiplying each block of _t-1 by _t and _t^* from left and right, respectively. Therefore by induction hypothesis for _t-1 and Lemma <ref>, since the blocks of _t-1 are Hankel matrices, the blocks of _t=(_t_t-1_t^*) are also Hankel matrices. Moreover the relation between the first block-row and the second block-row directly follows from Lemma <ref> due to centrosymmetry of the resulting Hankel matrix. Finally the relation between ∓_s and _s for s=1,…,t-1 simply follows from the induction hypothesis for the structure of _t-1. A similar argument proves the structure of block-columns as well. We now use Lemma <ref> to characterize the structure of matrices _t. For t=1,…,k, matrix _t consists of 2^t-1× 2^t-1 blocks of Hermitian Toeplitz matrices as the following. The first block-row consists of 2^t-1 Toeplitz matrices [ _1 _2 ⋯ _2^t-1 _2^t-1 ]. The second block-row is [ _2 _1 _4 _3 ⋯ _2^t-1 _2^t-1 - 1 ] For s=1,…,t-2, let ∓_s be the matrix consisting of block-rows 1 to 2^s, and _s be the matrix consisting of block-rows 2^s+1 to 2^s+1. Let ∓_s = [ ∓_s,1 ∓_s,2 ⋯ ∓_s,2^t-s ]. Then _s = [ ∓_s,2 ∓_s,1 ∓_s,4 ∓_s,3 ⋯ ∓_s,2^t-s ∓_s,2^t-s-1 ]. Moreover for even j, _j=0. Also the structure of the block-columns of _t is similar to the structure of the block-rows we described above. Note that if a real matrix is skew-symmetric, i· is Hermitian. Therefore the structure of _1 follows from Lemma <ref>. For t=2,…,k, note that _t = i ·(_t_t-1_t^*). Therefore we use the structure of _t-1 described in Lemma <ref> to prove the structure for _t. Note that the number of blocks of _t is equal to the number of blocks of _t-1. Moreover for each block of _t-1, the corresponding block in _t is i·(_t _t^*), which is a Hermitian Toeplitz matrix by Lemma <ref>. Moreover the structure of block-rows of _t (i.e., the relation between the first block-row and the second block-row and the relation between ∓_s and _s, for s=1,…,t-2) follows from the structure of _t-1 due to Lemma <ref> and the fact that and _t commute, i.e., _t = _t. The structure of block columns also follows similarly. Finally note that if is a centrosymmetric Hankel matrix, then (_t _t^*) is zero since (_t _t^*) = 1/2( - ) = 0, where the second equality follows from the definition of centrosymmetry (see Definition <ref>). For the first equality see the proof of Lemma <ref>. Finally note that for a Hankel matrix , by Lemma <ref>, (_t _t^*) is a centrosymmetric Hankel matrix. Therefore the top-right and bottom-left blocks of (_t _t^*) are also centrosymmetric Hankel matrices. Therefore since the blocks of _t-2 are Hankel matrices, the blocks _j with even index j in _t-1 are centrosymmetric Hankel and therefore the corresponding blocks of them in _t are zero. Before going further, we need to define the following matrices that allow us to exploit the structures described in Lemmas <ref> and <ref>. 
For t∈[k], let _t = [ 0_2^t-1× 2^t-1 0_2^t-1× (2^k-2^t-1); 0_(2^k-2^t-1) × 2^t-1 _2^k-2^t-1 ]∈^2^k× 2^k, _t = [ 0 _2^t-1 0 0 ⋯ 0 0; _2^t-1 0 0 0 ⋯ 0 0; 0 0 0 _2^t-1 ⋯ 0 0; 0 0 _2^t-1 0 ⋯ 0 0; ⋮ ⋮ ⋮ ⋮ ⋱ ⋮ ⋮; 0 0 0 0 ⋯ 0 _2^t-1; 0 0 0 0 ⋯ _2^t-1 0 ]∈^2^k× 2^k. For example, the matrix _2 allows us to permute ∓_1 in _k to get _1 in Lemma <ref>. We then can remove the first entry of ∓_1 using _2 to prevent it with clashing with the first column. Note that _t^⊤ = _t and _t^⊤ = _t. Therefore we can use these matrices for permuting block-columns as well. For the other block-rows/columns, we can use appropriate _t's and _t's. We use these matrices in the proofs of the rest of the section. We are now equipped to describe how a representation of symmetric factorization of matrices _t, for t∈[k], can be found in linear time. For t=1,…,k, we can find _t and _t in (n) time such that _t = _t _t^* - _t _t^*. Let [ _1 ⋯ _2^t-1 ] be the blocks of the first block-row and the first block-column of _t, respectively. For s∈[t], let _s be a matrix with block structure as _t such that all of its blocks are zero except the block-rows/columns 1,…,2^s-1 and its block-rows/columns 1,…,2^s-1 are equal to the block-rows/columns 1,…,2^s-1 of _t. For example _t = _t and _1 = [ _1 _2 _3 ⋯ _2^t-1; _2 0 0 ⋯ 0; _3 0 0 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; _2^t-1 0 0 ⋯ 0 ]. For s∈[2^t-1], let _s,U be an upper triangular matrix that is equal to _s on the diagonal and above the diagonal, and let _s,L be a lower triangular matrix with zero diagonal that is equal to _s below the diagonal. Therefore _s = _s,U + _s,L. Moreover since _s is Hermitian by Lemma <ref>, _s = _s,U^* + _s,L^*. Let and be matrices with the same block structure as _t. Moreover suppose all of blocks of and are zero except the first block-row and the first block-column. Let the first block-row and the first block-column of be [ 1/2_1 _2,U _3,U⋯_2^t-1,U ], and [ 1/2_1 _2,U _3,U⋯_2^t-1,U ]^*, respectively. Also let the first block-row and the first block-column of be [ 1/2_1 _2,L _3,L⋯_2^t-1,L ], and [ 1/2_1 _2,L _3,L⋯_2^t-1,L ]^*, respectively. Therefore _1 = +. We now give symmetric factorizations for and . By construction and Lemma <ref>, the diagonal of _1 is zero. Therefore by Lemma <ref>, the matrix consisting of only the first row and the first column of can be written as _1 _1^* - _2 _2^*. Moreover, the matrix consisting of only the first row and the first column of is equal to - ^⊤, where = [ _2^k-t+1 0 ⋯ 0; 0 _2^k-t+1 ⋯ 0; ⋮ ⋮ ⋱ ⋮; 0 0 ⋯ _2^k-t+1 ]∈^2^k× 2^k. Therefore - ^⊤ = _1 _1^* - _2 _2^* for some vectors _1 and _2 that can be computed in O(n) time. Now since _1 is Toeplitz and _s,U's are upper triangular and Toeplitz, we have = ∑_j = 0^2^k-t+1-1^j ( - ^⊤) (^j)^⊤. Therefore setting _1 = [ _1 _1 ^2 _1 ⋯ ^2^k-t+1-1_1 ], _2 = [ _2 _2 ^2 _2 ⋯ ^2^k-t+1-1_2 ], we have = _1 _1^* - _2 _2^*. Now consider the matrix that is equal to zero everywhere except on row 2^k-t+1 and column 2^k-t+1 and on that row and column, it is equal to . This matrix is equal to - ^⊤. Because the diagonal of _1 is equal to zero, by Lemma <ref>, we can find _1 and _2 such that - ^⊤ = _1 _1^* - _2 _2^* in O(n) time. Now since _1 is Toeplitz and _s,L's are lower triangular and Toeplitz, we have = ∑_j = 0^2^k-t+1-1 (^j)^⊤ ( - ^⊤) ^j. Therefore setting _1 = [ _1 ^⊤_1 (^2)^⊤_1 ⋯ (^2^k-t+1-1)^⊤_1 ], and _2 = [ _2 ^⊤_2 (^2)^⊤_2 ⋯ (^2^k-t+1-1)^⊤_2 ], we have = _1 _1^* - _2 _2^*. Hence _1 = [ _1 _1 ][ _1 _1 ]^* - [ _2 _2 ][ _2 _2 ]^*. We now construct other _s's recursively. 
By using matrices in Definition <ref> and the structure of _t described in Lemma <ref>, for s=2,…,t, we have _s = _s-1 + _k-t+s+1_k-t+s+1_s-1_k-t+s+1^⊤_k-t+s+1^⊤ Therefore if ∓_s and _s are matrices such that _s = ∓_s ∓_s^* - _s _s^*, then for s=2,…,t ∓_s = [ ∓_s-1 _k-t+s+1_k-t+s+1∓_s-1 ], _s = [ _s-1 _k-t+s+1_k-t+s+1_s-1. ]. This completes the proof since _t = _t. We now prove the main theorem. In addition to Theorem <ref>, this only requires describing how to find a representation of symmetric factorization of _k+1=_k in linear time. By Theorem <ref>, we can find _t and _t such that _t = _t _t^* - _t _t^*, for t∈[k]. Therefore we only need to find _k+1 and _k+1 such that _k+1 = _k+1_k+1^* - _k+1_k+1^*. Note that _k+1 = _k. The matrix _k is 2^k-by-2^k and according to Lemma <ref>, it consists of 2^k-by-2^k blocks. Therefore each block of it is only one entry. In this case for a block , we have =. Therefore all of the entries on the diagonal of _k are the same. Moreover, each row is just a permutation of the first row and similarly each column is a permutation of the first column. Now let _k be the matrix that is equal to _k everywhere except on the diagonal and _k is zero on the diagonal, i.e., _k = _k + _k(1,1) ·. Now let _0 be a matrix with all of the entries equal to zero except the first row and the first column and its first row and column is equal to the first row and column of _k. By Lemma <ref>, we can find vectors _1 and _2 such that _0 = _1 _1^* - _2 _2^* in O(n) time. Now for t∈[k], let _t be the matrix that is zero everywhere except on rows/columns 1,…,2^t and its rows/columns 1,…,2^t are equal to the corresponding rows/columns of _k. Note that _k = _k. Now because of the structure of _k described by Lemma <ref>, for t∈[k], we have _t = _t-1 + _t_t _t-1_t^⊤_t^⊤ Let ∓_t and _t be such that _t = ∓_t ∓_t^* - _t _t^*. Then by (<ref>), we have _t = [ ∓_t-1 _t-1_t-1∓_t-1 ][ ∓_t-1 _t-1_t-1∓_t-1 ]^* - [ _t-1 _t-1_t-1_t-1 ][ _t-1 _t-1_t-1_t-1 ]^*. Therefore ∓_t = [ ∓_t-1 _t-1_t-1∓_t-1 ] and _t=[ _t-1 _t-1_t-1_t-1 ]. Therefore, we only need to find _1 and _2 with _0 = _1 _1^* - _2 _2^* to completely describe _k. Now if H_k(1,1)≥ 0, we set _k+1 = [ ∓_k √(_k(1,1))· ], and _k+1 = _k, and we set _k+1 = ∓_k, and _k+1 = [ _k √(_k(1,1))· ], otherwise. Therefore we can also find a symmetric factorization of _k+1 in (n) time and this combined with Theorem <ref> completes the proof. § SYMMETRIC FACTORIZATION OF INVERSES OF HANKEL MATRICES In this section, we prove our result for symmetric factorizations of the inverses of Hankel matrices (Theorem <ref>). We actually prove a more general result: for a given matrix ∈^n× n with Sylvester-type displacement rank of two and bit complexity ℓ, we show how to find a representation of the matrices and with n rows, (n) columns, and bit complexity ℓ in time (n^ω/2·ℓ) such that - (^*-^*)_ < 1/2^ℓ. Then since the inverse of a Hankel matrix has a Sylvester-type displacement rank of two, this gives an algorithm for the inverse of a Hankel matrix. A key technique in our algorithm is the following lemma which is similar to Lemma <ref> with the important difference that when we apply the recursion arising from this lemma, the displacement rank of the matrix doubles (instead of staying the same). This then forces us to stop the recursion when the size of the blocks is √(n). Let = 1/2(1+i)+ 1/2(1-i), where is the identity matrix and is the exchange matrix (see Definition <ref>). 
Let be an n× n real symmetric matrix with Sylvester-type displacement rank of less than or equal to r with respect to (,^⊤) and (^⊤,). Then (^*) is bisymmetric and has a Sylvester-type displacement rank of at most 2r with respect to (,^⊤) and (^⊤,). Moreover (^*) is persymmetric Hermitian and has a Stein-type displacement rank of at most 2r+2 with respect to (,^⊤). Also the diagonal entries of (^*) are zero. We first write each part of ^*. We have ^* = ( 1/2(1+i)+ 1/2(1-i) ) ( 1/2(1-i)+ 1/2(1+i) ) = 1/2( + ) + i/2( - ). Note that + is symmetric because both and are symmetric. Moreover (+ ) = + = ( + ). Therefore (^*) = 1/2( + ) is bysymmetric. Now we show it has a Sylvester-type displacement rank of at most 2r with respect to (,^⊤). We need to show that ( ( + ) - ( + ) ^⊤) ≤ 2r. One can easily verify that = ^⊤. Therefore - ^⊤ = (^⊤ - ) . Thus since is a full rank matrix, ( ( + ) - ( + ) ^⊤) ≤ ( - ^⊤) + ( - ^⊤) = ( - ^⊤) + ((^⊤ - ) ) = ( - ^⊤) + (^⊤ - ) ≤ 2r. We can bound the Sylvester-type displacement rank of (^*) with respect to (^⊤, ) in a similar way. Now we turn to the imaginary part. Since is real and symmetric ^*=. Hence we have (i(-))^* = -i (^* - ^* ) = -i ( - ) = i( - ). Therefore (^*) is Hermitian. Again since is real and symmetric ^⊤=. Therefore i( - ) = i( -)=i(-) = i(^⊤-^⊤)^⊤ = i ( - )^⊤. Now we need to show that ( ( - ) - ( - ) ^⊤) ≤ 2r. We have - ^⊤ = (- )J. Therefore ( - ^⊤) = (- ). Let be a matrix obtained from by changing the (1,n) entry from zero to one. For example = [ 0 0 0 1; 1 0 0 0; 0 1 0 0; 0 0 1 0; ]. Therefore is full-rank. Moreover let be a matrix obtained by by changing the (1,1) entry from one to zero. For example = [ 0 0 0 0; 0 1 0 0; 0 0 1 0; 0 0 0 1; ]. Then we have ^⊤ =. Since is full rank, (- ) = (^⊤- ) Now note that differ from only in the first row and similarly differ from only in the first row. Therefore (^⊤- ) ≤ (^⊤- ) + 1. Then since the Sylvester-type displacement rank of with respect to (^⊤,) is at most r, we have ( - ^⊤) ≤ r + 1. In a similar fashion, one can verify that ( - ^⊤) ≤ r + 1. Therefore ( ( - ) - ( - ) ^⊤) ≤ ( - ^⊤) + ( - ^⊤) ≤ 2r + 2. Finally note that since is symmetric _i, n + 1 - i = _n + 1 - i, i for all i∈ [n]. Therefore since ()_i,i = _i, n + 1 - i and ()_i,i = _n + 1 - i, i, we have ( - )_i,i = 0. Thus the diagonal entries of (^*) are zero. We are now equipped to prove our result for general matrices with small displacement ranks. We state the theorem for the inverse of Hankel matrices, but our algorithm and proof work for these general matrices due to Lemma <ref>. algoruled 0.15cm * We consider a general matrix that has Sylvester-type displacement rank of two with respect to (, ^⊤) and (^⊤, ). For example, one can consider = ^-1. Note that we can find a representation of ^-1 as ^⊤ in (n·ℓ) time using the approach of <cit.>. Without loss of generality, we assume that n=2^k and k is even because otherwise, we can extend the matrix to a size of power of four by appropriately copying the entries to make sure Sylvester-type displacement rank does not change. We show that Algorithm <ref> outputs the desired factorization in the specified running time and bit complexity. Note that this algorithm is similar to the one we used for the symmetric factorization of Hankel matrices (Algorithm <ref>). The main difference is that we recurse only for k/2 iterations. This is because by Lemma <ref>, the blocks of _t have a Sylvester-type displacement rank of 2^t+1 and a size of n/2^t×n/2^t. 
So for t=k/2, the size of each block is √(n)×√(n) while its displacement rank is 2 √(n). In other words, the displacement rank is more than the rank of the matrix and therefore it does not help with speeding up the computation of symmetric factorization. The rest of the proof basically is similar to the symmetric factorization of Hankel matrices. Similar to Lemma <ref> and <ref>, _t consists of 2^t-1× 2^t-1 Hermitian blocks. However instead of being Toeplitz, by Lemma <ref> each block has a Stein-type displacement rank of 2^t+1 +2 with respect to (,^⊤). Moreover each block is persymmetric. The structure of the blocks is also similar to Lemma <ref>. More specifically, the first block row of _t consists of 2^t-1 matrices [ _1 _2 ⋯ _2^t-1 _2^t-1 ]. The second block-row is [ _2 _1 _4 _3 ⋯ _2^t-1 _2^t-1 - 1 ] For s=1,…,t-2, let ∓_s be the matrix consisting of block-rows 1 to 2^s, and _s be the matrix consisting of block-rows 2^s+1 to 2^s+1. Let ∓_s = [ ∓_s,1 ∓_s,2 ⋯ ∓_s,2^t-s ]. Then _s = [ ∓_s,2 ∓_s,1 ∓_s,4 ∓_s,3 ⋯ ∓_s,2^t-s ∓_s,2^t-s-1 ]. The structure of block columns of _t is also similar to the above. For example, the matrix consisting only of the first row and the first column of _t is like the following. _t = [ _1 _2 _3 ⋯ _2^t-1; _2 0 0 ⋯ 0; _3 0 0 ⋯ 0; ⋮ ⋮ ⋮ ⋱ ⋮; _2^t-1 0 0 ⋯ 0 ]. All the other block-rows and block-columns are obtained by taking permutations of the first block-row and the first block-column, respectively. Moreover the block-row and the block-column with the same index are obtained by the same permutation. Since _1 is a Hermitian matrix, _1-_1 ^⊤ is also Hermitian. Therefore the eigenvalues of _1-_1 ^⊤ are real and since it is an n/2^t×n/2^t of rank 2^t+1+2, we can find its eigenvalue decomposition in (n/2^t· 2^t · (ω-1)·ℓ). Then we can separate the eigenvectors associated with positive and negative eigenvalues into matrices and , respectively, and write _1-_1 ^⊤ as ^* - ^*. Then by writing _1 = _1-_1 ^⊤ + (_1-_1 ^⊤) ^⊤ + (_1-_1 ^⊤) ^⊤^⊤ + ⋯, we can take the appropriate shifts and write = ^* - ^*, where = [ ⋯ ],   and   = [ ⋯ ]. Then we can write _t as _1 _1^* - _1 _1^*, where _1 = [ ; 0 _2; 0 _3; ⋮ ⋮; 0 _2^t-1; ],   and  _1 = [ 0; 0 0 _2; 0 0 _3; ⋮ ⋮; 0 0 _2^t-1; ] Then since the rest of block-rows and block-columns of _t are obtained by permuting its first block-row and block-column, we can obtain the factorization by taking appropriate shift and permutations of _1 and _1. This then produces _t and _t such that _t = _t _t^* - _t _t^*. Then by defining _t = _1^* _2^* ⋯_t-1^* _t^* _t and _t = _1^* _2^* ⋯_t-1^* _t^* _t, and = [ _k/2+1 _k/2 ⋯ _2 _1 ] and = [ _k/2+1 _k/2 ⋯ _2 _1 ], we have = ^* - ^*. Note that for each _t, we only need to compute an eigendecomposition for its corresponding _1 - _1 ^⊤, and the rest of the decomposition for _t follows from deterministic shifts and permutations. We do not even need to compute _2,⋯,_2^t-1 since the entries of them can be obtained by O(log n) addition/subtraction of the entries of the original matrix. Since _1 is obtained by taking summations over principal submatrices of the original matrix, its bit complexity and operator norm are the same as the original matrix (up to a log n factor). Therefore the cost of computing this eigendecomposition is (n/2^t· 2^t · (ω-1)·ℓ). because _1-_1 ^⊤ is an n/2^t×n/2^t of rank 2^t+1+2, and we can add a random matrix with small entries (of less than 1/n^2 ℓ). 
Adding this random matrix, does not cause us to go above the error threshold but it causes a gap between the eigenvalues of _1-_1 ^⊤ and causes it to have n· 2^ℓ condition number which results in the running time stated above — see <cit.>. Therefore the total cost of computing the representation is ∑_t=1^1+k/2(n/2^t· 2^t · (ω-1)·ℓ) = ∑_t=1^1+k/2(n · 2^t · (ω-2)·ℓ). Since t≤k/2+1 and n^1+k/2=2√(n), this is bounded by (n^ω/2·ℓ). § DISCUSSION AND CONCLUSION In this paper, we presented novel super-fast algorithms to find a representation of symmetric factorizations of the form ^* - ^* for Hankel matrices and their inverses. Our running times for Hankel matrices and their inverses are (n ·ℓ) and (n^ω/2·ℓ). We also conjectured that it is possible to find factorizations of the form ^* for these problems in the same running times. We explained how our conjectures lead to faster algorithms for solving a batch of linear systems faster than the approach of <cit.> and how they lead to a faster-than-matrix-multiplication algorithm for solving sparse poly-conditioned linear programs. Here we present a statement that has the same implications. This is weaker than our conjectures but stronger than the results we proved. Suppose we find and such that ^* - ^* = (^⊤)^-1 and ^* ≼poly(n) (^⊤)^-1, then since ^* - ^* = (^⊤)^-1≽ 0, ^* ≼poly(n) (^⊤)^-1. Then by the argument of Equations (<ref>), and , both will have low bit-complexity since ^* ^⊤≼poly(n) (^⊤)^-1^⊤ = poly(n) ^-1 and ^* ^⊤≼poly(n) (^⊤)^-1^⊤ = poly(n) ^-1. Then using the linear operator ^* - ^* for = and = leads to the same running times presented in Equations (<ref>) and (<ref>). § ACKNOWLEDGEMENT We express our gratitude to Richard Peng for providing valuable comments and engaging discussions that greatly influenced the development of this paper. Furthermore, we extend our appreciation to Santosh Vempala for insightful discussions regarding sparse linear systems. Their valuable input and expertise have played a pivotal role in inspiring the author to produce this work. alpha § OMITTED PROOFS * We first discuss he running time of computing . We partition the rows of to m blocks _1,⋯, _m∈^s× n as the following = [ _1; _2; ⋮; _m ]. Then we have = ∑_i=1^m ^i-1_i. For j∈[m], define _j = ∑_i=j^m ^i-j_i. Therefore = _1 and for j ≥ 2, _j-1 = ∑_i=j-1^m ^i-j+1_i =_j-1 + ∑_i=j^m ^i-j+1_i =_j-1 + ∑_i=j^m ^i-j_i = _j-1 + _j. Therefore given _j, we can compute _j-1 in time (mnr ·ℓ + · r · (ℓ+ℓ_j)), where ℓ_j is the bit complexity of _j. The first term in the running time is for computing _j-1 and essentially follows from the fact that each column of _j-1 can be multiplied by in (m n ·ℓ) time because the bit complexity of is at most m ℓ and has (n) nonzero entries. The second term in the running time is for computing _j and follows from a similar argument. Moreover ℓ_j-1 = (max{m·ℓ,ℓ+ℓ_j}). Since _m=_m, it can be computed in (mn r ·ℓ) time and it has a bit complexity of (m·ℓ). Therefore by the recurrence relation (<ref>), = _1 can be computed in time (· r · m^2 ·ℓ) because the recursion only goes for m steps and the bit complexity of the intermediate matrices stays (m·ℓ). Computing ^⊤ is less complicated. Note that ^⊤ = [ ^⊤; ^⊤^⊤; ^⊤ (^⊤)^2; ⋮; ^⊤ (^⊤)^m-1 ]. Given (^⊤)^j, ^⊤ (^⊤)^j and (^⊤)^j+1 can be computed in time (n· r · (ℓ+ℓ_j)) and (· r · (ℓ + ℓ_j)), where ℓ_j is the bit complexity of (^⊤)^j. Therefore this gives a total running time of (· r · m^2 ·ℓ).
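The computation in this proof is a Horner-style evaluation of the shifted block sum: the quantity sum_{i=1}^m Z^{i-1} N_i is obtained from K_m = N_m and K_{j-1} = N_{j-1} + Z K_j, so only m multiplications by the sparse matrix Z are needed and no power Z^{i-1} is ever formed. A small NumPy sketch of the recurrence follows; the function name is illustrative, square blocks are used only to keep the check short, and Z is taken to be the down-shift matrix merely as an example of a sparse matrix with a fast product.

import numpy as np

def shifted_block_sum(Z, N_blocks):
    # Horner-style evaluation of  sum_{i=1}^m Z^(i-1) N_i  via
    # K_{j-1} = N_{j-1} + Z K_j, starting from K_m = N_m.
    K = np.zeros_like(N_blocks[-1])
    for N_j in reversed(N_blocks):
        K = N_j + Z @ K
    return K

# tiny consistency check against the naive definition
rng = np.random.default_rng(1)
n, m = 6, 4
Z = np.diag(np.ones(n - 1), -1)
blocks = [rng.normal(size=(n, n)) for _ in range(m)]
naive = sum(np.linalg.matrix_power(Z, i) @ blocks[i] for i in range(m))
assert np.allclose(shifted_block_sum(Z, blocks), naive)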
http://arxiv.org/abs/2307.02517v1
20230705143435
Locating Robber with Cop Strategy Graph: Subdivision vs. Multiple Cop
[ "Shiqi Pan" ]
cs.CC
[ "cs.CC" ]
S. Pan panshiqi.psq@gmail.com Locating Robber with Cop Strategy Graph: Subdivision vs. Multiple Cop Shiqi Pan August 1, 2023 ===================================================================== We consider the Robber Locating Game, where an invisible moving robber tries to evade the pursuit of one or more "helicopter" cops, who send distance probes from anywhere on the graph. In this paper, we attempt to propose two useful constructions for general problems in this game: a state variable that describes the available game information for the cops, and a Cop Strategy Graph construction that presents all possibilities of the game given a deterministic cop strategy. Then we will use them, along with algorithms and pseudo-code, to explain the relationship between two graph parameters, the localization number ζ and the subdivision number η. Researchers have shown that η=O(ζ) and ζ≠ O(η). We will revisit their proofs, consolidate the essential correspondence between the two numbers via our proposed constructions, and show an explicit result for ζ in terms of η, the capture time, and the graph diameter. § INTRODUCTION Cops and Robbers is a widely studied graph pursuit game. Its basic setup includes a finite countable connected graph and two sides of players, robbers and cops. It has many variations. And in this paper, we focus on one game variation called the Robber Locating Game. In the Robber Locating game, there are one or more "helicopter" cops and a single invisible robber, i.e., the location of the robber is inaccessible to the cops. The game starts with the robber choosing its initial position. Then in each round, the two sides take alternative turns to play: the cops first simultaneously send distance probes from anywhere on the graph, each receiving the distance between the probed vertex and the robber's location; then the robber moves to the neighbor vertices or stays unmoved. The cops win if they manage to locate the robbers by identifying their position. We are particularly interested in the cops' strategy. A strategy is called cop-winning if it secures a win for the cops in finite time. And the graph is called localizable if there exists a cop-winning strategy, and not localizable if not. Many studies have been done on the localization number. Given a graph G, the localization number of G, denoted ζ, is the minimum number of cops for G to be localizable. Seager studied graphs with one cop <cit.> <cit.> when she first introduced the game, and her research focuses on special graphs such as cycles, complete graphs, and subdivisions of graphs. More studies followed on the topic of subdivision later on. A subdivision of graph G with an integer m, denoted G^1/m, is when each edge of the G is replaced by a path of length m. Games on subdivisions usually assume one cop, as the added paths already put a disadvantage on the robber by slowing it down. We call the minimum value of m such that G^1/m is localizable the subdivision number of G, denoted η. In Haslegrave, Johnson and Koch's papers <cit.> <cit.>, the bounds on η are proved as n/2 for n-vertex G. In their later paper <cit.>, they investigate the relationship between localization numbers ζ and η. Many other graph parameters on various game variations have also been studied. <cit.> <cit.> <cit.> explore a variation with perfect information to all players, and <cit.> <cit.> further incorporate complexity analysis in the research of such variation. 
Parameter-wise, for example, <cit.> introduced the capture time, denoted capt, in the Robber Locating Game. It is defined as the minimum number of rounds for the cop to guarantee to win.  <cit.> also studies a similar concept called escape length in the game variation Rabbit and Hunter where the rabbit can "jump" to any vertex during its round. Previous papers have used many tools to prove the localizability of graphs, such as verbal explanations, graphs, and tables. But so far there haven't been many commonly used variables and terms shared across the research. In this paper, we intend to formalize some common terms that many studies have already implicitly relied on, and provide a general and explicit basis for problems in general graph pursuit games. Specifically, we will purpose a state variable that describes the game process and a construction called the Cop strategy Graph, which is a graph based on the state variable that presents all possibilities of a game given a deterministic cop strategy. In Section 2, we will present formal definitions and the process of building the cop strategy graph. The cop strategy graph is helpful with illustrating the game process under a certain cop strategy, as well as relating strategies and games. In Section 3, we will use them to explore the relationship between the localization number ζ and the subdivision number η. The former will be studied on a Robber Locating Game with one robber and multiple cops; we call it the multiple-cop game, denoted Game_cop. And the latter will be defined on the game with one robber and one cop on a subdivision graph; it is called the subdivision game and denoted as Game_subs. Haslegrave, Johnson, and Koch's showed that η = O(ζ) <cit.>. However, for the important observation of the correlation between the two games, their paper didn't provide detailed and explicit explanations. Thus, we will revisit their claim in section <ref> and in particular, establish an explicit correspondence between the games with state. Their paper also showed that ζ≠ O(η). In section <ref>, we use the Cop Strategy Graph to again establish a correspondence with the two games. For an explicit relationship, with capt being the minimum number of rounds for cops to secure a win and δ being the graph diameter, we will show that ζ=O(2^capt/η16^ηδ^2η). § STATE AND COP STRATEGY GRAPH First, we define several variables. For any game, the robber set R_i is defined as the set of possible positions of the robber after the cops probe and before the robber moves in round i and the extended robber set X is that after the robber moves. X_0=V(G), and X_i=CloseNeighbor(R_i) for any i, where the close neighbor includes the vertices themselves and their adjacent vertices. Cop wins if |R_i| = 1. In this paper, we assume that the cop strategies are deterministic in terms of all accessible information in the game. And state, denoted ϕ, is defined as the set of information that may affect the cop's probes. It may include the (extended) robber set, round numbers, previous probing results, and so on. The cops take different probes if and only if the states are different. Then the cop strategy A is defined as a deterministic function that takes in ϕ and outputs the probes for cops to make. Probes can be written as A(ϕ). Now, we construct the Cop Strategy Graph. Given a strategy A, we denote its cop strategy graph as H(A). Vertex set V(H) contains states as their values. And the graph starts with a node with the initial state. 
An edge exits from state ϕ_1 on one level to ϕ_2 on the next level if and only if there exists a valid set of probing results from probes A(ϕ_1) that updates the state to ϕ_2, and the edge's value is the probing results. State ϕ is a leaf if and only if it is a terminating state, i.e., the robber set has of size 1 and the cops win. See Figure <ref> for an example. Given a graph G, consider the game with one single cop (Graph (2)) and a cop strategy. The cop first probes C by the strategy, locating robbers at C, A, or B if the probing result is 0, 1, or 2 respectively; if the result is 3, then the robber sets R={D,E}, X=CloseNeighbor(R){B,D,E}. Then the cop probes D, where it is able to locate the robber with any probing results. Similarly. a strategy for the game with two cops is presented in Graph (3). The cop strategy A is cop-winning if and only if its corresponding graph H_A is finite. A being cop-winning is equivalent to that for any robber's route, the cop can always reduce possible robber locations to one in finite time with A, which is equivalent to each branch of H_A terminating on a leaf, i.e., H_A is finite. § MULTIPLE COPS VS. SUBDIVISION Now, we use the state variable and the cop strategy graph to explore the relationship between the localization number and subdivision number. Set a graph G. We study the localization game Game_cop and the subdivision game Game_subs. Note that for ease of reading, the footnotes _cop(or _c) and _subs(or _s) will also be used in other parameters to flag the two games respectively. Previous papers have shown important conclusions regarding the relationship.  <cit.> proves that η = O(ζ) and that a factor of 2 is the best possible. It focuses mainly on the specific actions of the cop in Game_subs and puts less emphasis on the relationship between the two games. However, a closer look at the correspondence, including the transformation of probing vertices and results, is necessary. Thus, we will revisit their cop strategy and use the state variable to present a detailed analysis of the relationship and the deduction method in this section, which is the foundation of the validity of the strategy.  <cit.> also proves that ζ≠ O(η), and in particular, ∀ m ≥ 3, there exists G such that η=m, ζ=2^m-1. In Section <ref>, we use the Cop Strategy Graph to relate the two games and present an explicit result that ζ=O(2^capt/η16^ηδ^2η). §.§ η = O(ζ)) Assume there exists a cop-winning strategy A_cop for the multiple-cop game with ζ cops. We will show that for any subdivision number m=O(ζ), Game_subs is localizable. Our key is to “mock" the robber in Game_cop that makes the "same" moves as the “real" one in Game_subs and to get probing "hints" from A_cop. Before going into the algorithm, we introduce several concepts for subdivisions of graphs. For details, readers are directed to <cit.>. The paths that are added to G^1/m are called threads. Their endpoints, i.e., vertices in G^1/m that correspond to vertices in the original graph G, are called branch vertices. We denote the relationship of branch vertex b in G^1/m and its corresponding vertex v in G as b = v^1/m and v = b^m, and we say b and v are “subdivisionally equal" or “subdivisionally equivalent". The same notations are also applied to vertex sets in the following sections. When m is odd, the midpoint is the one vertex in the middle of every thread; and when m is even, near-midpoints are the two in the middle. 
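Before moving on to the subdivision strategy, the Cop Strategy Graph of Section 2 can also be made concrete in code. The Python/NetworkX sketch below handles a single cop and represents each state by its extended robber set only; unlike the level-by-level tree described above, it deduplicates repeated states, so a strategy that is not cop-winning shows up as a directed cycle instead of an infinite branch. The function, its signature, and the strategy callable are illustrative choices, not an implementation from this paper.

import networkx as nx
from collections import deque

def cop_strategy_graph(G, strategy):
    # Build H(A) for one cop: nodes are states (extended robber sets), an edge
    # labelled with a probing result leads from a state to its successor, and
    # singleton robber sets are terminating leaves.
    dist = dict(nx.all_pairs_shortest_path_length(G))
    root = frozenset(G.nodes)                      # X_0: the robber can be anywhere
    H = nx.DiGraph()
    H.add_node(root)
    queue = deque([root])
    while queue:
        X = queue.popleft()
        p = strategy(X)                            # deterministic probe for this state
        for d in {dist[p][v] for v in X}:          # every realizable probing result
            R = frozenset(v for v in X if dist[p][v] == d)
            if len(R) == 1:                        # |R| = 1: the cop wins
                H.add_edge(X, R, result=d)
                continue
            X_next = frozenset().union(*(set(G[v]) | {v} for v in R))   # CloseNeighbor(R)
            unseen = X_next not in H
            H.add_edge(X, X_next, result=d)
            if unseen:
                queue.append(X_next)
    return H

# on a path, a single probe at an endpoint already localizes the robber
P = nx.path_graph(5)
H = cop_strategy_graph(P, lambda X: 0)
assert all(len(leaf) == 1 for leaf in H.successors(frozenset(P.nodes)))

With states deduplicated, a strategy is cop-winning exactly when every directed walk in the returned graph ends in a singleton leaf, that is, when no directed cycle is reachable among non-terminating states.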
By probing any branch vertex on the graph and taking the module of the result distance, the cop can tell whether the robber is at a branch vertex, (near)-midpoint, or neither. The robber remains in one thread until visiting a branch vertex. And it is always closest to one or two vertices, two when the robber is at the midpoint of a thread with an odd m. Its closest vertex remains the same between two consecutive visits to midpoints. We say that the robber is within the vicinity of a branch vertex b if b is its nearest branch vertex. We thus observe a natural link between the robber being within the vicinity of a branch vertex b in G^1/m and it being at v=b^m in G. And we will base our strategy on this link. §.§.§ Cop Strategy We propose a cop strategy A_subs in Game_subs. It only probes branch vertices. And the game is considered as three separate stages in regard to the robber's movement. Stage 1 is before the robber visits any branch vertex; Stage 2 is when the robber moves between branch vertices along threads; and Stage 3 is after the robber last visits a branch vertices until it is caught. In Stage 1, we probe at random. The robber is guaranteed to remain in one thread, and thus it is located once both endpoints of its current thread have been probed. For Stage 2, similar to <cit.>, we make sure that the cop probes the corresponding vertices of probing vertices provided by A_cop while it is in the vicinity of a branch vertex, probing to "identify" that branch vertex. Along the robber's route, it may visit multiple branch vertices and midpoints. We define a stride as the rounds between the robber's two consecutive visits to branch vertices while passing midpoints. Namely, stride i includes the rounds exclusively after the robber moves to a branch vertex, denoted b_i-1, and inclusively before it visits the consecutive one, denoted b_i. We also call the rounds in Game_cop as strides too as there is a correspondence between the two in this stage. With ϕ_c as the initial state, the cops probe in Game_subs based on strategy A_probs in each stride i as in Algorithm <ref>: §.§.§ Deduction of Probing Results Consider only the strategic probes, the ones based on multiple-cop strategy A_cop. We now show that the Deduce function in the algorithm, with probing results in Game_cops as input and those for Game_subs as output, maintains a correspondence of the robber sets in the two games. Deduce maintains the subdivisional equality of the robber sets R_c in Game_cops and R_s in Game_subs in each stride. Let q be a probe in Game_cops by A_cops in an arbitrary stride i. Its counterpart q=p^1/m must be probed in stride i in Game_subs too. Let v be a robber location that complies with the probing result of p in Game_subs. Let v_1, v_2 be the endpoints of the thread v is on, where v_1 is strictly closer to v. Let the corresponding vertices in G be w_1=v_1^m, w_2=v_2^m. We prove below that given dist_s(p,v), Deduce() outputs dist_s(q,w_1), and thus, w complies with the probing result of q. Denote d_1=dist_s(p,v_1), d_2=dist_s(p,v_2) and x=dist_s(v_1,v), m-x=dist_s(v_2,v). Since v is close to v_1, 0<x<⌊ m ⌋ / 2. See Figure <ref> for an illustration. Paths between p and v must go through either one v_1 or v_2. Thus, dist_s(p,v)= d_1+x ≡ x m, if path(p,v) contains v_1 d_2+m-x ≡ m-x m, if path(p,v) contains v_2 Case 1: dist_s(p,v) < m/2 m. This means that path_s(p,v) contains v_1, and thus ⌊ dist_s(p,v)/m ⌋ = ⌊d_1+x/m⌋ = d_1/m = dist_p(q,w_1). Case 2: dist_s(p,v) > m/2 m. 
This means that path_s(p,v) contains v_2, so d_2+m-x < d_1+x. Since d_1,d_2 are multiples of m, and 0<x<⌊ m ⌋ / 2, d_1 ≥ d_2 + m. By Triangle Inequality, the triangle with points p, v_1, and v_2, d_1 ≤ d_2 + m. Therefore, d_1 = d_2 + m. ⌊ dist_s(p,v)/m⌋+1 = ⌊(d_2+m-x)+ m/m⌋ = d_2 + m/m = d_1/m = dist_p(q,w_1). Thus, for both cases, the deducted result round(dist_s(p,v)/m) is always equal to dist_p(q,w_1), and thus w complies with the probing result from q. Similarly, the other direction is also true: if w is a possible location in regard to the probing result from q, vertices in G^1/m in the vicinity of v are also possible locations in regard to the result from p. And by a simple induction on stride number, we show that the subdivisional equality of the robber sets R_c and R_s is maintained. §.§.§ Proof of Correctness We now show that the subdivision strategy A_subs that we propose based on cop-winning multiple-cop strategy A_cop is cop-winning. A_s is cop-winning. As proved above, if we only consider the strategic probes in Game_subs, the robber sets in the two games would be subdivisionally the same. The non-strategic random probes in Game_subs during Stage 1 or 2 only eliminate elements from the robber set R_s. Thus, R_s ⊆ R_c anytime in the games. Now, assume in some point in the games, |R_c|=1, then |R_s| ≤ 1. Note that such must happen before the game enters Stage 3, and we will be able to locate the robber in the first two Stages. We have thus shown that A_s is cop-winning. §.§ ζ=O(2^capt/η16^ηδ^2η) Now, we look at the relationship between the localization number and the subdivision number in the other direction. Set a graph G. Given a cop-winning strategy A_subs for Game_subs on graph G^1/η , we will construct a cop-winning strategy A_cop for Game_cop with k cops, for any k ≥ 2^capt/η16^ηδ^2η. We again establish a correspondence between the robber sets for the two games, and to do so, we first set some special limits on the robber's behavior in Game_subs. This is valid as we "mock" the robber in Game_subs to solve the target game Game_subs. First, we limit the robber to only choose branch vertices as its initial position. Next, the robber takes strides during the game and their definition is slightly different from the previous. In the current problem, each stride contains exactly η rounds, and the robber lands on a branch vertex at the end of each stride. If it moves to another branch vertex in a stride from the last consecutive stride, it can only do so directly without backtracking; if it ends up at the same vertex, it is limited to remaining on the vertex for all m rounds or moving to a midpoint and back. The round index for a stride is defined as the index of the round within the η rounds in a stride and takes the value of 1 to η. We denote the branch vertex the robber visits at the end of each stride i as b_i. Its initial position is denoted as b_0. Then we construct the cop strategy graph H of the cop strategy A_subs for this special setup of the subdivision game. It is rooted at the initial state with the extended robber set as all vertices. As an example, Figure <ref> presents a graph G (1) and a complete cop strategy graph H (2). With our special setup, H is reduced (3). In specific, we remove 1 the non-branch vertices in the robber sets in round 1, as well as ab, be from the extended robber set in round 2 when transforming ϕ to ϕ', as they don't comply with the robber's allowed movements. For H with the rules, we define stride level. 
Stride level i of H as all possible probes during stride i. And we denote the sub-tree rooted at a state ϕ as SubTree(H, ϕ). SubTree(H, ϕ, {l_1, l_2}) represents the set of vertices on stride level l_1, l_2 of H that are contained in SubTree(H, ϕ). SubTree(H, ϕ, l) ⊂ SubTree(H, ϕ', l) where ϕ' is an ancestor state of ϕ. See Graph (3) in Figure <ref> for illustrations. We also introduce the term corresponding endpoints of a vertex p in G^1/η as the vertices in G that correspond to the two endpoints of the thread that p is on, denoted CorrEnd(p). If p is itself a branch vertex, the corresponding endpoint is p^m itself. §.§.§ Cop Strategy We now present the cop-winning strategy A_cop in Game_cop on G. Its main difference from the one in Section <ref> is that it creates an one-to-multiple relationship of states in the two games. During Game_subs, we maintain a set of states Φ_s, which contains all possible states in the subdivision game. It is initialized with a single initial state whose extended robber set is all vertices in G^1/η. At the start of the game, we probe in G the corresponding endpoints of SubTree(H, ϕ_s, {1}). Starting from the next round (counted as round 1 for convenience), A_cop acts according to Algorithm <ref> until the cops win. The probes starting for round 1 are regarded as the strategic probes. And to emphasize the one-to-one correspondence of rounds and strides in the two games, we also call rounds in Game_cop as strides. If ⋃_ϕ_s ∈Φ_s SubTree(H,ϕ_s,{i,i+1}) for some stride i contains terminating states in Game_subs, our probes in Game_cop of the corresponding endpoints of terminating robber's locations will determine if they are terminating states in Game_cops as well. if so the cop wins, and if not, the state is eliminated from focus. Thus, for the following proofs, we assume that the subtrees don't have terminating states. §.§.§ Deduction of Probing Results We now take a closer look at the DeduceAndUpdate process. We will show that it maintains the correspondence between the robber set after updating with respective probing results. See Algorithm <ref> for DeduceAndUpdate(). For each stride, it takes in three variables: a graph A, the current state in Game_subs, and a set of probing results D, which is a 4-set of probing results in G from each corresponding endpoint of the probe in the previous and current rounds. Different from the previous Deduce function defined in Algorithm <ref>, it then outputs one or two sets of updated states, instead of one. DeduceAndUpdate() maintains the subdivisional equality between the robber sets in the two games during each round/stride. At each stride i in Game_subs, given a probe p in G^1/η in SubTree(H, ϕ, i), let p be on a thread with endpoints a',b', and let a, b be their correspondence in G. By our cop strategy, both a and b are probed in rounds i-1 and i, i.e., before and after the robber's turn in round i-1. Say that the robber moves from u to v in round i-1, then in D we can obtain dist(u,a), dist(u,b), dist(v,a), dist(v,b), for all p. Case 1: If at least of one the probing results are different in the two rounds, i.e., there exists p such that dist(u,a)≠ dist(v,b) or dist(u,B)≠ dist(v,B), then v, u must be different vertices. See the illustration as Figure <ref> (1). In G^1/η, given a round count j, we want to "mock" the robber at the j-th vertex from u'=u^1/m to v'=v^1/m. Denote the robber's location as r. 
Then the probing result of p is the minimum length of paths from r and p, and we simply return the minimum length of the 4 paths. This is described in the function DeduceRoundMoves(). Case 2: If the probing results in the two rounds are exactly the same, i.e, ∀ p, dist(u,A)=dist(v,A) and dist(u,A)=dist(v,A), then v, u are either the same or adjacent with the same distance to the probes. In this case, with one set of probing results in Game_cop, two sets in Game_subs are considered. The first results "mock" the robber remaining at the same vertex, ss Figure <ref> (2) shows. It is described in the function DeduceRoundStays(). The second results, on the other hand, "mock" the robber moving from u towards v with the same probing result. Figure <ref> (1) presents an illustration, and it is described in the function DeduceRoundMoves(). In both cases, the updated robber sets in the two games contain exactly the close neighbors of their previous sets that comply with the probing results. And thus, the correspondence is maintained. §.§.§ Proof of Correctness Now, we show that the proposed cop strategy is efficient to win the multiple-cop game. First, we assume that we have an infinite number of cops in Game_cop, i.e., all probing attempts are fulfilled each round, and based on this, we prove the correctness of the strategy. Next, we will calculate the actual cop number required to complete the probes, and provide the bound on localization number ζ in regard to the subdivision number η formally. Given A_subs is cop-winning, A_cop is cop-winning. The strategic probes maintain the subdivisional equality between the robber sets in the two games. And in Game_cop, the additional probes in round "0" at the start may further reduce its robber set, and probing the corresponding endpoints of terminating states in Game_subs may locate the robber. Thus, R_c ⊆ R_s anytime in the games. Assume in some stride i in the games, |R_subs|=1, then |R_c| ≤ 1. Thus, A_cop is cop-winning. Now, we calculate the localization number ζ required to send all probe attempts each round. Let G be a graph with subdivision number η, and let capt be the number of rounds needed for the cops to win the game on graph G^1/η. Let δ be the diameter of G. We show that ζ= O(2^capt/η16^ηδ^2η). Consider the maximum number of states in Game_subs maintained in each stride, i.e., the maximum size of Φ_s. Φ_s remains the same size when probing results in two consecutive rounds are different in Game_cop, but doubles when the probing results are the same. With capt rounds, there are O(capt/η) strides, and thus |Φ_s|=O(2^capt/η). Then consider the number of probes for each round in Game_cop. We probe in G the corresponding endpoints of ⋃_ϕ_s ∈Φ_s SubTree(H,ϕ_s,{i,i+1}) each round. A stride level in H contains exactly η tree levels, and each tree node has O(δ_G^1/η) degree as there are at most O(δ_G^1/η) possible probing results. Since at each tree node, the round index and probe positions are settled, i.e., the distance between the robber/probes and their nearest branch vertex, each path length in G corresponds to 4 distances, and thus O(δ_G^1/η)=O(4δ). Each SubTree(H,ϕ_s,{i,i+1}) has O(2η) tree levels, so it has in total O((4δ_G)^2η)=O(16^ηδ^2η) vertices. Therefore, the minimum number of probes required for each round is ζ= O(2^capt/η16^ηδ^2η). 20 ref_article1 S.Seager: Locating a robber on a graph. Discrete Mathematics 312(22), 3265–3269 (2012) ref_article2 S.Seager: Locating a backtracking robber on a tree. 
Theoretical Computer Science 539, 28–37 (2014) ref_article3 J.Carraher, I.Choi, M.Delcourt, L.H.Erickson, D.B. West: Locating a robber on a graph via distance queries. Theoretical Computer Science 436, 54–61 (2012) ref_article4 J.Haslegrave, R.A.B Johnson, S.Koch: The Robber Locating game. Discrete Mathematics 339(1), 109–117 (2016) ref_article5 J.Haslegrave, R.A.B Johnson, S.Koch: Subdivisions in the Robber Locating game. Discrete Mathematics 339(11), 2804–2811 (2016) ref_article6 J.Haslegrave, R.A.B Johnson, S.Koch: Locating a robber with multiple probes. Discrete Mathematics 341(1), 184–193 (2018) ref_article7 R.Nowakowski, P.Winkler: Vertex-to-vertex pursuit in a graph. Discrete Mathematics 43(2–3), 235–239 (1983) ref_article8 P. Frankl: Cops and robbers in graphs with large girth and Cayley graphs. Discrete Applied Mathematics 17(3), 301–305 (1987) ref_article9 J. Petr, J. Portier, L. Versteegen: A faster algorithm for Cops and Robbers. Discrete Applied Mathematics 320, 11–14 (2022) ref_article10 W. B. Kinnersley: Cops and robbers is EXPTIME-complete. Journal of Combinatorial Theory, Series B 111, 201–220 (2015) ref_article11 N.C. Behague, A. Bonato, M.A. Huggan, T.G. Marbach, B. Pittman: The localization capture time of a graph. Theoretical Computer Science 911, 80–91 (2022) ref_article12 M. Adler, H. Racke, N. Sivadasan, C. Sohler, B. Vocking: Randomized Pursuit-Evasion in Graphs. Combinatorics, Probability and Computing 12, 225–244 (2003)
http://arxiv.org/abs/2307.02933v1
20230706115143
In Time and Space: Towards Usable Adaptive Control for Assistive Robotic Arms
[ "Max Pascher", "Kirill Kronhardt", "Felix Ferdinand Goldau", "Udo Frese", "Jens Gerken" ]
cs.HC
[ "cs.HC", "cs.AI", "cs.RO" ]
Simple Anosov representations of closed surface groups Tianqi Wang August 1, 2023 ====================================================== empty empty Robotic solutions, in particular robotic arms, are becoming more frequently deployed for close collaboration with humans, for example in manufacturing or domestic care environments. These robotic arms require the user to control several DoF to perform tasks, primarily involving grasping and manipulating objects. Standard input devices predominantly have two DoFs, requiring time-consuming and cognitively demanding mode switches to select individual DoFs. Contemporary ADMC have shown to decrease the necessary number of mode switches but were up to now not able to significantly reduce the perceived workload. Users still bear the mental workload of incorporating abstract mode switching into their workflow. We address this by providing feed-forward multimodal feedback using updated recommendations of ADMC, allowing users to visually compare the current and the suggested mapping in real-time. We contrast the effectiveness of two new approaches that a) continuously recommend updated DoF combinations or b) use discrete thresholds between current robot movements and new recommendations. Both are compared in a VR in-person study against a classic control method. Significant results for lowered task completion time, fewer mode switches, and reduced perceived workload conclusively establish that in combination with feedforward, ADMC methods can indeed outperform classic mode switching. A lack of apparent quantitative differences between Continuous and Threshold reveals the importance of user-centered customization options. Including these implications in the development process will improve usability, which is essential for successfully implementing robotic technologies with high user acceptance. § INTRODUCTION While robotic devices have long been put behind fences for safety reasons, advances in the development of such (semi-) autonomous technologies have started to permeate almost all aspects of our personal and professional lives. These include increased close-quarter collaborations with robotic devices – from industry assembly lines <cit.> to mobility aides <cit.>. Assistive robotic arms are a particularly useful and versatile subset of collaborative technologies with varied applications in different fields, e.g., <cit.>. Yet, new challenges arise when robots are tasked with (semi-) autonomous actions, resulting in additional stress for end-users if not correctly addressed during the design process <cit.>. Pollak et al. highlight the decreased feeling of control users experienced when using a robot's autonomous mode. Switching to manual mode allowed their study participants to regain control and decrease stress significantly. These findings are corroborated by Kim et al. whose comparative study of control methods resulted in markedly higher user satisfaction for the manual mode cohort <cit.>. A proposed solution from previous work <cit.> to these challenge are adaptive controls – referred as Adaptive DoF Mapping Controls (ADMCs) – which merge the advantages of (semi-) autonomous actions with the flexibility of manual controls. They combine multiple DoF dynamically for a specific scenario to assist in controlling the robot. In our concept, a CNN interprets a camera's video feed of the environment and dynamically combines the most likely DoF for a suggested movement. 
Building on this, we already showed that such ADMC combinations of the robot's DoF can lead to a significantly lower number of mode switches compared to standard control methods <cit.>. However, our study could not show that this may also improve task completion time or reduce cognitive load. Also, challenges concerning the understanding of DoF mappings were raised during the study. Based on these previous findings, the present study evaluates two novel ADMC methods for an assistive robotic arm. We compare the variants Continuous and Threshold, differing in the time at which suggestions are communicated to the user, against a classic control method. In detail, we examine possible effects on task completion time, number of necessary mode switches, perceived workload, and subjective user experience. Our contribution is two-fold: * We demonstrate that both ADMC methods significantly reduce the task completion time, the average number of mode switches, and the perceived workload of the user. * Further, we establish that for Continuous and Threshold, each has specific advantages which some users may prefer over the other, raising the need for customizable configurations. § RELATED WORK Collaborative robotic solutions have received much attention in recent years. Previous work has generally focused on (a) different designs of robot motion intent and most recently (b) ADMC for robots. The latter requires a critical yet seldom addressed topic in how collaborative robots can effectively communicate recommended movement directions to their user. §.§ Robot Motion Intent Advance knowledge of the intended robot behavior and subsequent movements within the physical world are critical for effective collaboration when humans and robots occupy the same space and need to coordinate their actions <cit.>. In previous work, we analyzed existing techniques of communicating robot motion intent and identified different intent types as well as several intent properties, such as location and information or the placement of the technology <cit.>. Users generally prefer to have the robot's future movements represented visually <cit.>. To convey detailed robot motion intent, researchers often rely on AR <cit.>, as with the help of AR, interaction can become more intuitive and natural to humans <cit.>. Effective communication of robot motion intent is particularly relevant when using ADMC for assistive robotic arms, as in such a shared or traded control environment each interaction needs to be precisely coordinated. §.§ Adaptive DoF Mapping Controls Traditionally, robot control methods include individual commands for each DoF, requiring frequent mode switches for controlling translations, rotations, and gripper functionality. Herlant et al. called into question the suitability of these standard control methods as task completion time markedly increases by using user-initiated compared to time-optimal mode switches <cit.>. To tackle this issue, we proposed in previous work the concept of ADMC – a dynamic combination of multiple DoF, thus adjusted to specific scenarios or tasks <cit.>. This streamlining decreases the need for constant mode switching, resulting in faster and more efficient task fulfillment. In <cit.> we implemented a CNN as control unit to provide these dynamic DoF mappings and gave the user a triggering mechanism to request an update. In a 2D simulation study which had a 4-DoF robot control mapped to a 2-DoF input device, we found promising results. 
We then extended this approach into a 3D VR simulation, thereby mapping a 7-DoF robot control to a 2-DoF input device <cit.>. We evaluated two ADMC methods – differing in their respective movement suggestion concept – against the baseline control method Classic. Simulating the effect of a CNN, our work relied on a task-specific script to provide DoF mappings based on the relative position and orientation between gripper and target. This removed the potentially confounding effect of a suboptimal CNN implementation. Results showed that the number of mode switches was significantly reduced compared to Classic, but task completion time was unaffected. Users reported high cognitive demand and difficulties understanding the mapping to 2 different input DoF. In addition, the system felt difficult to predict and required trial and error <cit.>. § ADAPTIVE DOF MAPPING CONTROLS Building on our previous work <cit.>, we created a VR simulation of a HRI experimental setup to compare different ADMC methods to a non-adaptive baseline condition Classic. Like in previous work <cit.> we applied a task-specific script to explore our ADMC methods. We tackle previous issues by 1) visualizing not only the current but also the forthcoming DoF mapping suggestion (improving predictability) and 2) reducing the input to a single DoF (reducing cognitive demand). We propose two approaches as different trade-offs between control fidelity and cognitive demand. The VR simulation includes a virtual model of the Kinova Jaco 2[Kinova Robotic arm. <https://assistive.kinovarobotics.com/product/jaco-robotic-arm>, last retrieved August 1, 2023.] – a commercially available assistive robotic arm frequently used in HRI studies, e.g., <cit.>. Our proposed visual feedback mimics AR, with directional cues registered in 3D space. This allows the user to understand different movement directions for the actual control and the suggested DoF combinations. To simplify understanding, we use arrows, a straightforward and common visualization technique to communicate motion intent <cit.>. As a control method for the ADMC, we implemented a task-specific script. This removed any potential bias that a more generic but currently still technically limited approach such as a CNN-based control method may introduce. Of course, our approach only works in a controlled experimental setting. The task-specific script evaluates the gripper's current position, rotation, and finger position relative to a target. The DoF mapping system then suggests five different movement options (referred in the following to as modes) – in order of assumed usefulness – to the user. * Optimal Suggestion: Combining translation, rotation, and finger movement [opening and closing] into one suggestion, causing the gripper to move towards the target, pick it up, or release it on the intended surface. * An orthogonal suggestion based on (1) but excluding the finger movement. Allows the users to adjust the gripper's position while still being correctly orientated. * A pure translation towards the next target, disregarding any rotation. * A pure rotation towards the next target without moving the gripper. * Opening or closing of the gripper's fingers. During movement, the ADMC system re-calculates the best DoF combinations to fulfill the specific task, which are then presented as new suggestions. Users cycle through these modes – by pressing a button on the input device – to select a suitable one or continue moving with the previous active suggestion (see Figure <ref>). 
A suggestion indicator is visible above the gripper when users are not moving the robot to distinguish between the modes. Five slanted cubes represent the possible suggestions. The cubes appear gray if no suggestion is active and turn blue to indicate that a new suggestion is selected. The cube corresponding to the selected mode increases in size. In contrast to our previous work <cit.> and to the dual axis system of the baseline control method (see Figure <ref>), only one input axis is required to control the robotic arm. Consequently, the cognitive demand on the users is reduced as they can focus on evaluating one movement rather than two simultaneous suggestions. Continuous: This control method uses continuous feedback of robot motion intent to increase oversight of updated movement suggestions. Continuous feedback enables users to move in a direction and constantly evaluate the updated optimal suggestion by the ADMC system. If found fitting, users can switch to a new suggestion and move the robot in the updated path to fulfill the task. Here, two directional indicators are virtually attached to the robotic arm's gripper: a light blue and a dark blue arrow. The former represents the currently selected movement option (mode) mapped to the input axis. The forward movement of the input axis moves the gripper in the direction the arrow is pointing; engaging it backward moves the gripper in the arrow's reverse direction. The dark blue arrow represents the currently optimal suggestion at a given time. Users can only move the robot along the dark blue arrow if they switch to that suggestion first – which causes both arrows to overlap. While this approach increases transparency, users might be distracted by the constantly updating suggestions, potentially leading to more mode switches and perceived workload. Threshold: In contrast to Continuous, Threshold uses time-discrete and multimodal feedback to indicate optimized movement suggestions. Again, a light blue arrow maps the selected movement option (mode) to the input axis. New suggestions are only shown to the users if the optimal mode differs – by a set degree – from the current movement. We followed Singhal et al. and used a cosine between-vector similarity measure to calculate this threshold <cit.>, ranging from exact alignment [0%] to total opposite direction [100%]. In pretests, we determined a 20% difference between the current and optimal vector as the suggestion threshold. If exceeded, a short vibration pulse to the input device and a 1kHz sound inform the users of an updated suggestion. In addition, a dark blue arrow appears which visualizes the new suggested movement. Users can continue the active movement, switch to the new suggestion, or cycle through the other four modes before deciding on one. Unlike with Continuous, users can therefore entirely focus on the movement they are currently performing until explicitly notified and directed to a new suggestion. We expect Threshold to reduce perceived workload compared to Continuous as it does not require constant evaluation of the visual feedback. However, we expect task completion time to increase, as Threshold systematically interrupts the users' workflow. Additionally, Threshold might result in a perceived loss of control, potentially negatively influencing usability. § STUDY METHOD AND MATERIALS To explore the effectiveness of our ADMC methods, we conducted a supervised, controlled experiment as a VR simulation study with 24 participants. 
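For the Threshold variant described above, the cosine-based difference measure and the 20% trigger can be sketched as follows. The exact rescaling of the cosine similarity to the 0-100% range is an assumption based only on the stated endpoints (0% for exact alignment, 100% for opposite directions), and the function names are chosen freely.

import numpy as np

def direction_difference(current_dir, optimal_dir):
    """Cosine between-vector similarity mapped to a difference in percent:
    0% for exactly aligned vectors, 100% for exactly opposite ones."""
    a, b = np.asarray(current_dir, float), np.asarray(optimal_dir, float)
    cos_sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return (1.0 - cos_sim) / 2.0 * 100.0

def should_notify(current_dir, optimal_dir, threshold_pct=20.0):
    """True when the updated optimal suggestion should be announced to the user
    (short vibration pulse and 1 kHz tone in the study)."""
    return direction_difference(current_dir, optimal_dir) > threshold_pct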
We compared our ADMC methods to Classic, which relies on mode switching to access and control all DoF one after another. Approaches as Classic are well established (e.g., when driving a car) and are predictable and transparent for the user. Comparing ADMC methods to Classic allows HRI researchers to disentangle their respective advantages and disadvantages. §.§ Study Design We applied a within-participant design with control method as an independent variable with three conditions: (1) Classic, (2) Continuous, and (3) Threshold. Every participant performed eight training trials and 24 measured trials per condition, resulting in 72 measured and 24 training trials per participant and 1,728 measured trials in total. To counter learning and fatigue effects, the order of conditions was fully counter-balanced. We measured the following dependent variables: * Average Task Completion Time For each trial, we measured the time in seconds needed to pick an object and place it on the target surface. * Average Number of Mode Switches For each trial, we recorded every mode switch conducted by pressing a button on the input device. * Perceived Workload After completing each condition, we measured cognitive workload with the RTLX questionnaire <cit.>. * Subjective Assessment After completing each condition, we measured the five dimensions of the QUEAD <cit.>. After completing all trials, participants were further asked to rank the three conditions. After each condition, participants were prompted with several open questions regarding their experience, their understanding of the control methods and the directional cues, plus any issue of interest they considered noteworthy. Additionally, participants were asked how they proceeded in situations when they could not solve the task at first. Video and audio recordings of the interviews with the entire study cohort were assessed independently by two researchers. Open coding was applied to gather participants' opinions of the different control methods. We used Miro[Miro. <https://miro.com>, last retrieved August 1, 2023.] – an online whiteboard <cit.> – to complete an affinity diagram of the open codes. Codes were then organized into themes (see  <ref>). §.§ Hypotheses Overall, we expected ADMC methods to reduce not just mode switches (as in prior work <cit.>) but – due to the advances in our designs – also improve on task completion time and workload. H1: Continuous and Threshold lead to a lower task completion time compared to Classic. However, we expect Continuous to perform faster compared to Threshold, as the latter systematically interrupts the user during interaction. H2: Continuous and Threshold result in fewer mode switches compared to Classic. We expect Continuous to require more mode switches than Threshold, as users have no clear guidance about when to switch modes. This may cause them to oversteer or accept new suggestions inefficiently. H3: Continuous and Threshold cause lower perceived workload compared to Classic. However, we expect Continuous to cause a higher workload compared to Threshold, as it requires constant evaluation of the visual feedback while Threshold allows the user to relax until further notification. §.§ Apparatus Developing and testing new concepts for a robotic arm involves inherent challenges associated with a real robot's physical bulk and complexity. Quickly changing the experimental setup, adding feedback components, or providing information to the user further complicate testing regimes. 
We created a 3D testbed environment for HRI studies in VR to address these challenges. This testbed contains a simulated robotic arm (a virtual model of the Kinova Jaco 2) with multiple control mechanisms and a standardized pick-and-place task. Visual feedback mimics AR, with directional cues registered in 3D space. A Meta Quest motion controller is used as an input device to control the robotic arm. Photogrammetry scans of an actual room were used to design the VR environment, which was created using the Unreal Engine 4.27 and optimized for usage with a Meta Quest VR HMD (see Figure <ref>). During the study, user behavior was recorded with appropriate software on a Schenker XMG Key 17 laptop with Windows 10 64-bit and Oculus Link connected to the VR headset. For our implementation of the baseline control method Classic, users cycled through four distinct modes to access all seven robot DoF, as they are mapped on a two-DoF joystick, such as the control-stick on a Meta Quest motion controller: * X-Translation + Y-Translation * Z-Translation + Roll * Yaw + Pitch * Open/Close fingers We illustrate the current mapping between the robot's DoF and the input device through two arrows attached to the gripper. Light blue arrows indicate the robot's DoF assigned to the first, dark blue arrows to the second input axis. Looking at the joystick in VR, the same color-coded visualization is applied. Users press a button on the input device – the A-Button of the Meta Quest motion controller – to switch between modes, cycling back to the first one at the end. Four blue spheres – in contrast to the slanted cubes used in our ADMC methods – above the robotic arm's gripper indicate the total number of available and the currently active mode when users are not moving the robot. The sphere representing the active mode is bigger and brighter than the spheres of inactive modes. §.§ Participants A total of 24 participants took part in our study (7 female, 17 male). The participants were aged 19 to 37, with a mean age of 26 years (SD = 4.85 years). No one declared any motor impairments that might influence reaction times. Five participants had prior experience with controlling a robotic arm. Participants were recruited from a university campus and an online appointment form. §.§ Procedure Utilizing the benefits of a standardized and portable VR simulation environment, the study was conducted in multiple comparable physical localities. Before commencing, participants were fully informed about the project objective and the various tasks they had to complete. Every participant gave their full and informed consent to partake in the study, have video and audio recordings taken, and have all the relevant data documented. A study administrator observed the experiment on a laptop and briefed participants on using the hardware as well as the general functionalities of the study environment. Once set up, users followed command prompts embedded in the virtual simulation environment. For each of the three conditions, the following steps were performed: * Participants were given a written and standardized explanation of the control method used in the current condition. * Participants conducted eight training trials for familiarization with the respective control method. * Participants then conducted 24 measured trials. * Interview and questionnaires. After completing all conditions, participants ranked the three control methods from most to least preferred and explained the reasoning behind their decision. 
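For reference, the Classic baseline introduced in the Apparatus subsection can be summarized as a small mode table that maps the two joystick axes onto pairs of robot DoF. The sketch below is an illustrative Python rendering, not the Unreal Engine code used in the study; identifier names are assumptions.

# Each mode maps the two joystick axes to a pair of robot DoF; a button press
# (the A-button in the study) cycles to the next mode and wraps around.
CLASSIC_MODES = [
    ("x_translation", "y_translation"),
    ("z_translation", "roll"),
    ("yaw", "pitch"),
    ("open_close_fingers", None),        # second axis unused in the gripper mode
]

class ClassicControl:
    def __init__(self):
        self.mode = 0

    def switch_mode(self):
        self.mode = (self.mode + 1) % len(CLASSIC_MODES)

    def command(self, axis_1, axis_2):
        """Translate raw joystick input (two axes in [-1, 1]) into DoF commands."""
        dof_1, dof_2 = CLASSIC_MODES[self.mode]
        cmd = {dof_1: axis_1}
        if dof_2 is not None:
            cmd[dof_2] = axis_2
        return cmd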
The study concluded with a de-briefing. The average session lasted for 90 minutes and participants were compensated with 30 EUR. §.§ Experimental Task The experimental task is based on our previous work and resembles a common pick-and-place scenario <cit.>. A blue object appears on a table in front of the participant, which signals the start of a trial. The user has to control the robot from its starting position to pick the object and place it on a red target surface, also located on the table. To change the DoF mapping – for trial fulfillment – users could switch modes. Upon completion, the blue object disappears, and the robot automatically returns to the original starting position. A new blue object appears when this position is reached, and a new trial commences. For each trial, the position of the blue object is placed in one of eight possible locations spaced evenly around the red target surface. Each position occurred once during training and thrice during measured trials. However, the order of appearance was randomized. We used a neutral block shape rather than specific objects to avoid bias and ensure trial comparability. § RESULTS The study comprises 1,728 (24 participants × 3 control methods × 24 trials) measured trials. Training trials were excluded from the analysis. We explored the distribution of the data through QQ-plots and either applied parametric RMANOVA or non-parametric Friedman tests. For the latter, post-hoc pairwise comparisons using Wilcoxon signed-rank test with Bonferroni correction followed the omnibus test. Relevant effect sizes were calculated with r: >0.1 small, >0.3 medium, and >0.5 large effect. §.§ Task Completion Time Mean task completion time calculated per participant and control method (see Fig. <ref>) resulted in Threshold = 16.54s (SD = 4.09s); Continuous = 16.61s (SD = 4.77s); and Classic = 30.96s (SD = 4.89s). Outliers [N = 3] with average times ≥ 2.2 * IQR of the mean task completion time in at least one control method were excluded <cit.>. The QQ-plot of the remaining 21 participants followed a normal distribution. A RMANOVA found a significant main effect (F(2, 36) = 130.92, p ≤0.001). A post-hoc pairwise comparison (Bonferroni corrected) showed a significant difference between Continuous and Classic (p ≤0.001) as well as between Threshold and Classic (p ≤0.001). No significant difference was found between Continuous and Threshold (p ≥0.999). §.§ Mode Switches We used a non-parametric Friedman test, as our data was not normally distributed, to determine differences between the average number of necessary mode switches between control methods. Two outliers – based on ≥ 2.2 * IQR of the mean value – were excluded prior to further analysis. This resulted in mean numbers of mode switches for Threshold = 9.28 (SD = 1.26); Continuous = 9.93 (SD = 1.47); and Classic = 19.55 (SD = 2.93) for N = 22. We found a significant main effect (χ^2(2) = 33.82, p ≤0.001, N = 22). Post-hoc pairwise comparisons showed a significant difference between Continuous and Classic (Z = -4.11, p ≤0.001, r = 0.62) as well as Threshold and Classic (Z = -4.11, p ≤0.001, r = 0.62). Again, we found no significant difference between the two ADMC methods (Z = -1.51, p = 0.131, r = 0.28) (see Fig. <ref>). §.§ Perceived Workload RTLX <cit.> scores [scale from 1 to 100] for all participants resulted in mean task load values of Threshold = 22.67 (SD = 13.86); Continuous = 23.23 (SD = 13.26); and Classic = 34.24 (SD = 14.65). 
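The analysis procedure described at the beginning of this section (outlier exclusion via 2.2 * IQR, repeated-measures ANOVA or Friedman omnibus tests, Bonferroni-corrected Wilcoxon post-hoc tests, and effect sizes r = Z / sqrt(N)) could be reproduced along the following lines. This is a hedged sketch, not the scripts used in the study: the column names, the exact outlier rule, and the recovery of |Z| from the two-sided p-value are assumptions.

import numpy as np
from scipy import stats
from statsmodels.stats.anova import AnovaRM

def drop_outliers(trials, value="time"):
    """Exclude participants whose per-condition mean deviates by 2.2 * IQR or more
    in at least one control method (approximation of the criterion stated above).
    'trials' is assumed to be a pandas DataFrame with columns participant, method, time."""
    means = trials.groupby(["participant", "method"])[value].mean().unstack()
    iqr = means.stack().quantile(0.75) - means.stack().quantile(0.25)
    keep = means[(means - means.mean()).abs().lt(2.2 * iqr).all(axis=1)].index
    return trials[trials["participant"].isin(keep)]

def analyze(trials, value="time"):
    clean = drop_outliers(trials, value)
    per_subject = clean.groupby(["participant", "method"], as_index=False)[value].mean()

    # Parametric route: repeated-measures ANOVA over the three control methods.
    aov = AnovaRM(per_subject, depvar=value, subject="participant", within=["method"]).fit()
    print(aov.anova_table)

    # Non-parametric route: Friedman omnibus test plus Bonferroni-corrected Wilcoxon tests.
    wide = per_subject.pivot(index="participant", columns="method", values=value)
    chi2, p = stats.friedmanchisquare(*[wide[m] for m in wide.columns])
    print(f"Friedman: chi2 = {chi2:.2f}, p = {p:.4f}")
    pairs = [("Classic", "Continuous"), ("Classic", "Threshold"), ("Continuous", "Threshold")]
    for a, b in pairs:
        _, p = stats.wilcoxon(wide[a], wide[b])
        p_bonf = min(1.0, p * len(pairs))      # Bonferroni correction
        z = stats.norm.isf(p / 2)              # |Z| recovered from the two-sided p-value
        r = z / np.sqrt(2 * len(wide))         # r = Z / sqrt(N), N = paired observations
        print(f"{a} vs. {b}: p_bonf = {p_bonf:.4f}, r = {r:.2f}")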
We applied a Friedman test which revealed a significant main effect for perceived task load: (χ^2(2) = 9.87, p = 0.007, N = 24). Post-hoc pairwise comparisons show significant differences between Continuous and Classic (Z = -3.03, p = 0.002, r = 0.44), Threshold and Classic (Z = -2.76, p = 0.006, r = 0.40), but not between Continuous and Threshold (Z = -0.21, p = 0.830, r = 0.03). §.§ Evaluation of Physical Assistive Devices The QUEAD encompasses five individual scales (3 to 9 items each, 7-point Likert). Friedman tests for individual dimensions revealed significant main effects for Perceived Usefulness (PU), Perceived Ease of Use (PEU), Emotions (E), and Comfort (C), but not for Attitude (A). Post-hoc pairwise comparisons indicate significant differences between Continuous and Classic for PU, PEU, and C as well as between Threshold and Classic for PU and PEU (refer to Table <ref> for detailed scores). §.§ Individual Ranking Participants ranked the control methods in order of preference from 1 = favorite to 3 = least favorite. Mean values in ascending order are Continuous = 1.67; Threshold = 2.04; and Classic = 2.29. A Friedman test revealed no significant main effect (χ^2(2) = 4.75, p = 0.100, N = 24). §.§ Qualitative Insights Overall, the open coding process led to the identification of five main themes, as discussed below. §.§.§ Familiarization While all three control methods included a training phase, comments suggest that in particular the ADMC methods required familiarization. Here, participants felt the controls were sometimes inverted (P3) and wanted to move the stick in the direction the arrow was pointing at (P6). They also reported that it takes a while to get used to (P24), but routine set in fast (P18). §.§.§ Handling Adaptive DoF Mapping Suggestions The study cohort showed a relatively uniform response to the two ADMC methods with clear distinctions between Threshold and Continuous. In Threshold, many participants trusted the system (P23) and switched to the new suggestion as soon as they perceived the multimodal indicator. They did not have to think a lot (P4) and relied on what the suggestion says (P7). This dependence on the system caused some to draw a blank when something went wrong because [they] forgot they had other options (P8). One participant even tried using the Threshold control method with eyes closed, which worked surprisingly well (P7). In contrast, participants evaluated the suggestions in Continuous more thoroughly, as they had to decide when to switch without the help of threshold-based indicators. Some participants waited for suggestions with relatively simple direction cues, such as straight arrows (P6, P16) as an indication to switch modes, while others trusted their gut feeling (P23). Uncertainties of How do I approach this? (P23) were more frequent in this control method than Threshold. Participants dealt with problems in both ADMC conditions in one of two ways to find alternative suggestions that better align with their needs. They cycled through the further offered suggestions for an alternative option or reversed their current movement direction until a different suggestion was offered. §.§.§ Visualization Overall, participants understood the different visualizations. Yet, difficulties arose in all three conditions relating to depth perception and understanding if the gripper is positioned correctly to pick or place the object. 
Some participants suggested a laser pointer (P16) to indicate the gripper's position above the table for improved depth perception. This is a known problem for robot teleoperation. In the past, researchers have suggested and explored AR Visual Cues to counter that, which include similar approaches as the ones mentioned by our participants <cit.>. Interestingly, some participants manipulated the second mode of Classic (X- and Y-Translation) to mimic this effect, as that mode shows straight up- and downward pointing arrows as directional cues along the y-axis. §.§.§ Multimodal Feedback As described above, most participants used Threshold as intended, switching to the next suggestion when they received the multimodal feedback. However, some participants experienced the haptic and audio indicators as irritating (P20) or weird and horrible (P17). The poignant statement If I had to do this for five more minutes, it would be too annoying. (P7) reveals some participants' strong reactions to this control method. As a possible mitigation, one participant suggested implementing multiple thresholds of varying intensity instead of a singular one that instantly beeps loudly at me and says 'Do this now!' (P24). §.§.§ Control vs. Comfort Participants reported substantial differences in the level of control and comfort between Classic, Continuous, and Threshold. By nature, Classic offers the highest control level but requires participants to decide individually on every task step. In contrast, Threshold allowed participants to perform tasks entirely brainlessly (P16) and only press forward, then A, then forward, then A (P17). Many participants expressed that they felt too directed by [Threshold] (P8), attesting Continuous a higher level of comfort or freedom to experiment (P24). Overall, participants described Continuous as a reasonable compromise or the golden middle (P14) between the comfortable execution in Threshold and the high level of control in Classic. § DISCUSSION Adaptive DoF mapping controls have already been indicated to have benefits over classic methods <cit.>. Yet, research is still limited, and analysis of time-based dimensions of directional cues is lacking. In this paper, we examined to what extent the two ADMC methods, Continuous and Threshold, differ from the Classic baseline – and each other – in terms of task completion time, necessary mode switches, perceived workload, and subjective assessment. Significant results for all four metrics partially support our initial hypotheses. Most strikingly, ADMC methods reduced task completion time (H1) and mode switches (H2) by 50% respectively compared to Classic. As previously suggested by Kim et al., this establishes that ADMC methods lead to faster and less involved execution of pick-and-place tasks <cit.>. These findings are in line with previous work <cit.>, underlining the benefits of ADMC compared to Classic controls. In contrast to previous results <cit.>, our novel ADMC methods were able to significantly lower task completion time and perceived workload compared to the Classic method. The latter finding also partially supports H3. This highlights that ADMC which communicate the suggested recommendation to the user – irrespective of timing – were able to increase usability. Notably, the decreased workload of ADMC is particularly meaningful as the end goal should be the smooth integration of robotic devices into people's lives and workflows, not to add stress. 
Turning to the second part of our analysis – contrasting different time-based communication of feed-forward recommendations – we found no significant differences in the four metrics between Continuous and Threshold. The lack of measurable differences between Continuous and Threshold implies that both discrete and continuous communication of movement suggestions allows users to use ADMC methods efficiently. Insights gained by the results of the QUEAD and our qualitative interviews corroborate these findings, while the latter also helped to provide a more distinguished analysis. Overall, participants expressed a positive stance regarding the ADMC methods. However, individual preferences vary greatly between Continuous and Threshold. While some participants preferred the higher level of control Continuous allowed, others favored the comfortable execution possible with Threshold. Consequently, future development of ADMC methods should – in accordance with Burkolter et al. – include individualization options to increase comfort and end-user acceptance <cit.>. Customizations would be particularly beneficial for Threshold-based controls as participants repeatedly criticized the multimodal feedback. Allowing users to adjust the modalities, the signal intensity, and even the threshold itself may improve usability while still offering the advantages of ADMC. In contrast to expectations derived from our initial hypotheses, qualitative insights revealed that the Classic control method could still be a valuable addition in specific situations. Participants felt an apparent lack of control when the ADMC suggestions did not match their expectations. To improve usability, ADMC methods could incorporate static suggestions for certain situations. A potential way to address this could be combining ADMC and static suggestions using only the most common input-DoF. However, further experimental studies are needed to disentangle exactly which factors shape personal preferences and how customizations or crossover methods can deliver the best results. §.§ Limitations We explored the proposed ADMC methods in a VR simulation environment. While the usage of virtual simulations in industrial settings has been successfully established <cit.>, future work should confirm if our promising findings can be replicated in the real world with a physical robot. § CONCLUSIONS Our ADMC methods Continuous and Threshold are promising approaches to communicate proposed directional cues effectively. We extend our previous work <cit.> by demonstrating that ADMC significantly reduce task completion time (1), the average number of necessary mode switches (2), and the perceived workload of the user (3). Further, we establish that Continuous and Threshold perform equally well in quantitative measures while qualitative insights reveal individual preferences. The observations of this study provide valuable implications for any HRI researcher involved in designing novel ADMC methods for human-robot collaborative settings. Future work should focus on disentangling quantitative and qualitative feedback of focus groups to develop optimal robot motion control methods, thus increasing usability, safety and – ultimately – end-user acceptance. § ACKNOWLEDGMENT This research is supported by the German Federal Ministry of Education and Research (BMBF, FKZ: 16SV8563, 16SV8565). Our study is approved by the Ethics Committee of the Faculty of Business Administration and Economics of the University of Duisburg-Essen with the ID: 2202007. IEEEtran
http://arxiv.org/abs/2307.01300v1
20230703190926
Traffic Centralization and Digital Sovereignty: An Analysis Under the Lens of DNS Servers
[ "Demétrio F. Boeira", "Eder J. Scheid", "Muriel F. Franco", "Luciano Zembruzki", "Lisandro Z. Granville" ]
cs.NI
[ "cs.NI" ]
Traffic Centralization and Digital Sovereignty: An Analysis Under the Lens of DNS Servers Demétrio F. Boeira, Eder J. Scheid, Muriel F. Franco, Luciano Zembruzki, Lisandro Z. Granville Institute of Informatics (INF), Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, Brazil {demetrio.boeira, ejscheid, mffranco, lzembruzki, granville}@inf.ufrgs.br Received ; accepted ===================================================================================================================================================================================================================================================================================== The Domain Name System (DNS) service is one of the pillars of the Internet. This service allows users to access websites on the Internet through easy-to-remember domain names rather than complex numeric IP addresses. DNS acts as a directory that translates the domain names into a corresponding IP address, allowing communication between computers on different networks. However, the concentration of DNS service providers on the Internet affects user security, privacy, and network accessibility. The reliance on a small number of large DNS providers can lead to (a) risks of data breaches and disruption of service in the event of failures and (b) concerns about the digital sovereignty of countries regarding DNS hosting. In this sense, this work approaches this issue of DNS concentration on the Internet by presenting a solution to measure DNS hosting centralization and digital sovereignty in countries. With the data obtained through these measurements, relevant questions are answered, such as which are the top-10 DNS providers, if there is DNS centralization, and how dependent countries are on such providers. DNS, Internet Access, Communication Protocols, Digital Sovereignty, Measurement § INTRODUCTION The Internet's Domain Name System (DNS) is a globally hierarchical naming mechanism that enables the association of networks, servers, and services to Internet Protocol (IP) addresses <cit.>. DNS enables, for example, accessing Websites through easy-to-remember domain names rather than IP addresses, meaning that would be translated to . The records that map domain names and IP addresses are maintained by authoritative DNS servers that provide authoritative and up-to-date records. Because deploying a local DNS server requires technical expertise <cit.>, companies not rarely have been delegating the task of maintaining their authoritative NameServers (NS) records to third-party DNS providers (Cloudflare <cit.> and Akamai <cit.>). Such a delegation, which has been increasing over the years <cit.>, led to the current scenario where DNS resolution is concentrated on a small number of large providers. And, for the sake of the business model, each large DNS provider multiplexes its Information Technology (IT) or data center infrastructure among its client companies <cit.>. As a result, DNS centralization inevitably leads to security and availability risks, such as user privacy and the inability to resolve domain names in case of an outage or service failure at one of the large providers. The dependency on a few providers creates concerns regarding the  dependability and  digital sovereignty of countries, especially considering compliance regulations, such as Europe's General Data Protection Regulation (GDPR) and Brazil's Data Protection Law (LGPD). DNS centralization has been widely investigated in the literature. 
There exist a number of research efforts on assessing the degree of centralization in authoritative DNS servers <cit.> <cit.> <cit.>, showing that popular domains share the same authoritative DNS servers. Thus, disruptions (due to cyberattacks or sabotage) on DNS infrastructure providers could lead to collateral damages to multiple DNS domains. Although this centralization aspect has been previously addressed, further research on digital sovereignty implications is necessary considering such a DNS dependency. Analyzing digital sovereignty is crucial because it ensures a country's autonomy, control, and security over its digital infrastructure <cit.>. Efforts to quantify the dependency of different countries on DNS providers are, thus, required to uncover possible sovereignty risks for the nations and their critical infrastructures (healthcare, banking, and education sectors), too. In this paper, we investigate how country code top-level domain (ccTLD) from two conglomerate of countries,  Brazil, Russia, India, China, and South Africa (BRICS) and  the European Union (EU), are resolved and quantify their dependency on foreign public DNS providers. For that, we define an approach to periodically collect measurements about records, records, and records in order to find out and map the organizations responsible for managing such providers' infrastructure. These measurements use domains extracted from the Tranco list <cit.>. Thus, we also analyze how domains are managed and discuss the implications on regulations, compliance, and digital sovereignty. The results show that DNS centralization is a reality and a key concern for digital sovereignty, especially for countries that do not have relevant DNS providers and rely on infrastructure providers from countries or companies with different regulations and interests. The rest of this paper is organized as follows. In Section II, we review background knowledge and discuss related work on DNS centralization and digital sovereignty. In Section III, we introduce our and its components, including implementation details. In Section IV, we present the evaluation and results, followed by a discussion in Section V. Finally, in Section VI, we close this paper presenting conclusions and discussions about future work. § BACKGROUND AND RELATED WORK Due to the massive damage it may bring to the Internet infrastructure  <cit.>, academia started worrying about and discussing DNS centralization. Some observations found an alarming concentration of DNS traffic, with more than 50% of the observed traffic being handled by only 10 AS operators <cit.>. There is also efforts toward emerging topics to build a responsible Internet <cit.>, which proposes more transparency and trust within networks, independent of vendors and countries that run the underlying infrastructure. Thus, it is clear that companies from the technology and telecommunication sectors have a place to ensure secure communication and a key role in the digital sovereignty. There are significant concerns about DNS centralization and the impacts it may cause. One big concern is related to performance and how a centralized environment may negatively affect the time-response of DNS in some regions of the globe  <cit.>. Internet Service Providers (ISP) typically operate DNS resolvers for their customers, which means they have access to users' DNS queries and can potentially monitor or manipulate the data, which is definitely a fact to watch. 
This centralization of DNS resolution can raise privacy and political concerns, mainly if ISPs engage in activities like DNS filtering, censorship, or surveillance <cit.>. Furthermore, DNS centralization raises two additional concerns. The first is security, since cyberattacks are evolving and becoming more sophisticated <cit.>, including those that target or exploit DNS (tunneling, amplification, and flood attacks) to cause technical, economic, and societal impacts <cit.>; this includes attacks on authoritative DNS servers <cit.>. The second is the availability of services worldwide, since the phenomenon of centralization is, in addition to being logical, also physical and geographical <cit.>. Another concept that emerges from the discussions on DNS centralization is digital sovereignty. Digital sovereignty refers to a nation's ability to control its digital infrastructure, data, and digital technologies within its territorial borders <cit.>. It encompasses the idea that countries should be able to shape their digital policies, regulations, and frameworks to protect their national interests, security, and values in the digital realm. Digital sovereignty relies on certain aspects, such as data protection and privacy regulations, domestic digital infrastructure, digital trade and economic policies, and Internet governance <cit.>. Different works have focused on sovereignty from different perspectives, such as the use of the decentralization provided by blockchain technology <cit.> as a potential ally for digital sovereignty <cit.>. However, it is unlikely that fundamental changes will become a reality in the short term since, besides enormous technological efforts and associated costs, they depend on convergence between the technical and political spheres. It is important to note that digital sovereignty is a complex and evolving topic, and there are debates around its implementation and potential trade-offs. Striking a balance between digital sovereignty and the benefits of an interconnected global digital ecosystem remains challenging for policymakers worldwide. The question, thus, is how the digital sovereignty of countries is affected by the current Internet and its underlying infrastructure. This work, therefore, explores DNS and its centralization in a few companies and governments to shed light on discussions about digital sovereignty under the lens of DNS. § MEASUREMENT APPROACH The approach consists of mapping lists of popular Internet domains (based on publicly available rankings) to their authoritative NSes and to the organizations providing such a service. This makes it possible to identify who provides the correct IPs, which organization operates the NS infrastructure, and to which country and regulations the operator is subject. For that, the approach combines information from domains (A, AAAA, and NS records) and Autonomous System (AS) records provided by Internet registries (LACNIC, RIPE, and ARIN). An AS is a network of interconnected computing devices that operate under the same policy. It is often managed by a single entity (ISPs or technology organizations) and is identified by an AS Number (ASN). Each AS manages one or more unique IP ranges; for example, Wikimedia Foundation Inc. has the ASN 14907 and manages IP ranges in the United States of America and in the Netherlands. Thus, it is possible to associate the IP of any NS to an AS and, consequently, to its operator and region.
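As a minimal illustration of this NS-IP-to-AS association, the following Python sketch performs a longest-prefix match against a prefix-to-ASN table and then resolves the ASN to an organization and country. In the actual approach these tables come from the CAIDA datasets; the example prefixes below are assumptions used only for illustration.

import ipaddress

# Hypothetical excerpt of a prefix-to-ASN table; in practice it is built from the
# CAIDA prefix-to-AS dataset. The prefixes below are illustrative assumptions.
PREFIX_TO_ASN = {
    ipaddress.ip_network("198.35.26.0/23"): 14907,   # assumed Wikimedia Foundation prefix
    ipaddress.ip_network("1.1.1.0/24"): 13335,       # assumed Cloudflare prefix
}
ASN_TO_ORG = {
    14907: ("Wikimedia Foundation Inc.", "US"),
    13335: ("Cloudflare, Inc.", "US"),
}

def ip_to_organization(ip_str):
    """Longest-prefix match of a name server IP against the prefix table, then
    resolution of the matching ASN to its organization and country."""
    ip = ipaddress.ip_address(ip_str)
    matches = [(net, asn) for net, asn in PREFIX_TO_ASN.items() if ip in net]
    if not matches:
        return None
    net, asn = max(matches, key=lambda m: m[0].prefixlen)   # most specific prefix wins
    organization, country = ASN_TO_ORG.get(asn, ("unknown", "unknown"))
    return asn, organization, country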
Therefore, the approach determines the entire flow from the domain name to the organization operating the AS that manages the IP address of the associated NS. This makes it possible to understand the different points where centralization and digital sovereignty risks might occur. For example, the owner of an NS can tamper with the DNS records, while the AS operator can disrupt communication and make the DNS translation unavailable. In both scenarios, a clear DNS-related dependence on a few players that maintain the underlying infrastructure (those that operate ASes and NSes) can be identified. This makes the analysis of such players and of centralization a key pillar for discussing digital sovereignty. Figure <ref> depicts the components that are part of the approach and the flow of information between them. They are organized in three main groups, namely Datasets, Approach, and Outputs. Datasets containing information regarding Autonomous Systems (AS) and a list of domains are used as inputs for the approach. The ASes responsible for each NS are determined using the lists provided by the Center for Applied Internet Data Analysis (CAIDA); specifically, the network prefix-to-AS mapping <cit.> and the mapping of ASes to organizations <cit.>. This allows the approach to determine the AS, the organization managing the AS, and, thus, the country/region of DNS providers (based on the IP of the NS). For each measurement, an updated CAIDA list is obtained by the Data Gatherer and processed to ensure that correct and up-to-date information is used. For the domains, the Tranco list <cit.> is used as a dataset, since it provides an updated source of the top 1 million websites on the Internet based on popularity and access traffic. The list is compiled from a variety of sources, such as Alexa, SimilarWeb, and Moz. This offers a reliable and transparent list that can be used to conduct research that requires popular domains. The Data Gatherer also obtains the updated Tranco list for each measurement (using a diff approach to identify changes), and the Data Processor organizes the information of both ASes and domains to be used in further steps. Next, the Records Retriever analyzes each of the 1 million domains and retrieves information regarding the A, AAAA, and NS records. For example, for the domain wikipedia.org, the A, AAAA, and NS records are retrieved. This information is sent to the NS Mapper to reconstruct the entire DNS resolution path in order to build the NS Resolution Flow, and statistics (organization concentration, measurement errors, and identified IPs) are collected for further analysis. The NS Mapper receives the NS records of the domain and obtains the IP of each NS. This information is then used to map the IP to the corresponding AS managing it. For that, the A record is used in the case of an IPv4 prefix and the AAAA record for IPv6. Finally, the organization name is obtained by looking at the CAIDA AS organization rank mapping dataset <cit.>, and a complete analysis can be conducted to identify its region and relevant characteristics (regulations and number of ASes being operated). The NS Mapper stores the information obtained in the Database and builds, as output, the NS Resolution Flow. This flow shows how the domain is resolved until the organization/company managing the infrastructure is discovered, which is a point that may directly impact DNS resolution in case of network disruption. Furthermore, identifying NSes is crucial because they might tamper with DNS records, as they answer requests in an authoritative manner.
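A condensed sketch of the Records Retriever and the front end of the NS Mapper is given below, using dnspython as named in the implementation. Function and field names are assumptions, and error handling is reduced to treating unresolvable domains as measurement errors.

import dns.exception
import dns.resolver  # dnspython, the library named in the implementation

def retrieve_records(domain):
    """Fetch the NS records of a domain and resolve each name server to its
    IPv4/IPv6 addresses, which are later mapped to ASes and organizations."""
    result = {"domain": domain, "nameservers": []}
    try:
        ns_answer = dns.resolver.resolve(domain, "NS")
    except dns.exception.DNSException:
        return result                     # unresolvable domains are counted as errors
    for ns in ns_answer:
        entry = {"ns": str(ns.target).rstrip("."), "ips": []}
        for rdtype in ("A", "AAAA"):      # IPv4 and IPv6 addresses of the name server
            try:
                entry["ips"] += [r.address for r in dns.resolver.resolve(entry["ns"], rdtype)]
            except dns.exception.DNSException:
                pass
        result["nameservers"].append(entry)
    return result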
Figure <ref> illustrates a graph-like structure of the NS Resolution Flow for the domains wikipedia.org and dns.br. In the example, wikipedia.org has two records,   and  , while dns.br has one . This means these NSes are authoritative servers for these domains and are crucial to their operation. This also applies to the organization that manages the IP addresses and advertise routing information of such servers (their ASes). Such organizations, including the countries they are operating, are retrieved using the and records of the NSes by identifying the resolved IPs using their prefixes and mapping them with the AS dataset list. Thus, in the example, wikipedia.org is managed by the Wikimedia Foundation Inc., placed in the United States of America, and dns.br is managed by NIC.BR, placed in Brazil. The approach implementation and results are publicly available at <cit.>. Python was used to implement the approach's components, with the dnspython <cit.>, a Python library to request and manipulate DNS records, being used to implement the Records Retriever. The NS Mapper connects with a SQLite3 database to store and manipulate the data required to build the NS Resolution Flow. Further, statistics can be retrieved and processed from such a database. § EVALUATION AND ANALYSIS The measurements considered all the 1 million domains from the Tranco list <cit.>, using only the pay-level domains filter, with the latest list used in the experiments generated on June 16, 2023. To infer the AS names and countries, it was leveraged the CAIDA’s AS-to-organization dataset <cit.>. To conduct the measurements, it was used a six-core AMD Ryzen 5-5500U @ 2.1 GHz with 8 GB of RAM and connected to the Internet using an Ethernet cable to maintain a stable network connection. Its operation system was a Debian 11 “bullseye" stable distribution. It is essential to mention that during the experiments, not all domains from the Tranco list were resolved correctly (NS servers not found or incorrectly configured), and their NS or ASN was identified; thus, hindering the possibility of identifying the country where their DNS was managed. However, such limitation does not invalidate the results provided herein. §.§ Identifying Top-10 DNS Providers Table <ref> lists the ranking, using 10 positions, of the DNS providers identified during the analysis of centralization aspect of the DNS traffic. The position in the ranking is based on the amount of domains that rely on such DNS provider during the indicated period. Three periods were defined, Period 1 from 16/12/2022 to 23/01/2023, Period 2 from 23/01/2023 to 13/02/2023, and Period 3 from 13/02/2023 to 15/03/2023. As it can be seen in the table, the ranking remained stable during these periods and there was only one change, rows highlighted in gray in the table, in the ranking, where TIGGEE was the 6th during the first two periods but replaced MICROSOFT-CORP-MSN-AS-BLOCK as 7th in the third period. Within this context, it was also investigated if the domains of such DNS providers (cloudflare.com) were managed by them or if they relied on services from competitors. Table <ref> presents the results of such investigation. The results indicate that not all DNS providers rely on their DNS services for their domains. For example, Amazon, the second largest DNS provider according to Table <ref>, uses Oracle's DNS services, and Godaddy, which employs its own DNS service but also relies on Akamai's DNS service. However, all the major providers use their own DNS service. 
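Building on the SQLite3 database mentioned in the implementation, the resolution flow and the per-organization domain counts underlying a top-k provider ranking such as the one above could be stored and queried roughly as follows. The schema and table names are assumptions, not the authors' actual database layout.

import sqlite3

def store_flow(db_path, domain, ns, ns_ip, asn, organization, country):
    """Persist one edge of the NS Resolution Flow (domain -> NS -> IP -> AS -> organization)."""
    con = sqlite3.connect(db_path)
    con.execute("""CREATE TABLE IF NOT EXISTS resolution_flow (
                       domain TEXT, ns TEXT, ns_ip TEXT,
                       asn INTEGER, organization TEXT, country TEXT)""")
    con.execute("INSERT INTO resolution_flow VALUES (?, ?, ?, ?, ?, ?)",
                (domain, ns, ns_ip, asn, organization, country))
    con.commit()
    con.close()

def domains_per_organization(db_path, k=10):
    """Count the distinct domains relying on each organization -- the aggregation
    behind a top-k DNS provider ranking."""
    con = sqlite3.connect(db_path)
    rows = con.execute("""SELECT organization, COUNT(DISTINCT domain) AS n
                          FROM resolution_flow
                          GROUP BY organization ORDER BY n DESC LIMIT ?""", (k,)).fetchall()
    con.close()
    return rows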
This paper does not delve into why tiggee.com is among the largest DNS providers on the Internet but is not listed in the top one million domains according to the Tranco list. Thus, this aspect could be investigated in future research. §.§ Measuring DNS Centralization Having identified the top-10 DNS providers that are responsible for hosting the highest amount of domains in the list, one question that arises is if there is an apparent centralization on those providers or if the DNS providing service is highly distributed to avoid Single Point of Failures (SPoF) or monopoly. To answer such a question, it was measured from 16/12/2022 until 15/03/2023, the concentration of domains resolved by the domains listed in Table <ref>. Figure <ref> depicts the results from the performed concentration measurements. In the figure, the x-axis represents the date on which the concentration percentage was calculated, and the y-axis represents the concentration in the top-10 providers. Considering the period, the average concentration was 30% of the measured domains. This means that, on average, 30% of the one million domains of the Tranco list (300 000 domains) had their DNS records hosted by the top 10 DNS providers (Table <ref>). Further, considering that such a concentration peaked at 39% on 29/01/2023 and the fact that it was identified that around 3000 DNS providers were responsible for managing all of the one million domains, there is strong evidence that centralization in the DNS hosting industry is a reality. §.§ Analyzing Digital Sovereignty Narrowing down the discussion on DNS centralization to a country-based analysis, it is possible to analyze countries' dependency on these providers and quantify how sovereign its Internet infrastructure is in terms of DNS hosting. For that, domains from the Tranco list were selected based on their ccTLD ( and ) and grouped into their political conglomerates. In total, 91 286 domains from 95 792 domains using the BRICS and EU ccTLDs were resolved, and their DNS hosting organization was identified. This represents 9.1% and 9.5% of the Tranco list, respectively. Russia's ccTLD () represented 59% of the resolved domains, approximately 54 168 domains. Results from such analysis categorized by these groups are presented in the following sections. §.§.§ BRICS Domains BRICS represents a conglomerate of five major emerging economies, namely  Brazil,  China,  India,  Russia, and  South Africa, formed to promote inter-economic cooperation and inter-political discussions. As BRICS does not have an official ccTLD as Europe, the ccTLD for the BRICS are, respectively,  ,  ,  ,  , and  . Figure <ref> depicts the results of the BRICS analysis. Each section of the chart represents the percentage of domains from the defined ccTLD that have their authoritative DNS servers located in the country of the section. The countries are represented as Alpha-2 ISO country codes <cit.>, and countries with less than 4% of domains were aggregated in the “Others” section. For example, in Brazil (Figure <ref>), there was a tie between domains of the Tranco list that relied on DNS providers from the United States (US) (46.9%) and domains that are provided by Brazilian-based companies (46.8%), the rest of the share (6.2%) were located in other countries (France and Germany). 
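The two aggregate measures used in this section, the share of domains concentrated in the top-10 providers and the per-country hosting shares for a given ccTLD, can be computed along these lines. This is an illustrative sketch; the data structures and names are assumptions.

from collections import Counter

def concentration(domain_to_orgs, top10):
    """Percentage of resolved domains that rely on at least one top-10 provider
    (the concentration value plotted over the measurement period)."""
    top = set(top10)
    hits = sum(1 for orgs in domain_to_orgs.values() if top & set(orgs))
    return 100.0 * hits / len(domain_to_orgs)

def hosting_country_shares(domain_to_country, cctld=".br"):
    """Share of domains under a given ccTLD whose authoritative DNS is hosted in
    each country (the per-country breakdown shown for the BRICS and the EU)."""
    counts = Counter(country for domain, country in domain_to_country.items()
                     if domain.endswith(cctld))
    total = sum(counts.values())
    return {country: 100.0 * n / total for country, n in counts.most_common()}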
It is possible to observe that US-based DNS providers, such as Cloudflare, Inc., Amazon.com, Inc., and Google LLC, represent a significant portion of the DNS hosting industry in the BRICS, with India presenting the highest dependence ( 60.3%) of the five nations. The exceptions are Russia (60.8%) and South Africa (53.2%), with most domains provided by national DNS companies (Yandex.Cloud LLC for Russia and Xneelo (Pty) Ltd for South Africa). Thus, showing indications of concern regarding digital sovereignty. Further, to have an overview of the digital sovereignty of the BRICS as a conglomerate, the five countries' results were aggregated and illustrated in Figure <ref>. Russia and the United States appear to host the majority of the domains (a total of 73.3%), followed by Brazil, China, and Germany. This behaviour is logical considering the division of Figure <ref>. Therefore, showing a dystopian view of digital sovereignty, where the BRICS is subject to and dependent on the United States regarding DNS regulations and infrastructure. §.§.§ European Union The EU is a political and economic union composed of 27 member states (, Portugal, Spain, France, Italy, Germany, and Hungary) located in Europe. For such countries, the ccTLD was examined. Any person, company or organization within the EU may register domains with this ccTLD. Figure <ref> illustrates a different scenario than the one from the BRICS (Figure <ref>), where more countries share the DNS hosting infrastructure of the EU. Germany (DE) represents a significant portion given its size and number of DNS hosting providers. However, the US also concentrates a significant portion of the DNS hosting industry for domains. After Germany, France and the Netherlands appear as major countries hosting DNS domains for Europe, this supports the data presented in Table <ref>, where OVH, a French cloud computing company, appears as the 10th DNS provider in the ranking. This concentration in a cloud provider might indicate that other services, besides DNS, are being hosted in France and the Netherlands, given the fact that such companies offer more services than DNS, such as virtual machines, Function-on-a-Service (FaaS), and web hosting that require a DNS provider. §.§ Hosting Governmental Domains One analysis dimension that is highly relevant concerning digital sovereignty and centralization is to investigate where restricted TLDs, such as , are hosted. These domains are intended to be used only by federal government institutions (security agencies and institutes). Thus, their DNS should be hosted within federal organizations to maintain critical services for citizens and control over the infrastructure during critical periods (global conflicts, pandemics, or sanctions). Figure <ref> depicts the results from the analysis of the BRICS domains: , , , and . Russia did not present domains in the Tranco list; hence, it is not presented in the results. It can be seen that Brazil's governmental domains are mostly resolved within Brazil, specifically in the Federal Data Processing Service (Serviço Federal de Processamento de Dados - SEPRO, in Portuguese), which is the biggest government-owned corporation of IT services in Brazil. Further, Indian and South African government domains are mostly hosted in their countries, with the National Informatics Centre (NIC) hosting most domains for India and the State Information Technology Agency (SITA) for South Africa. 
These results show a concern within BRICS about hosting governmental DNS domains for federal services within government organizations to avoid censorship, data leakage and disruption of critical services. §.§ Discussion and Key Observations Different insights can be obtained from our experiments under a different lens. From the technical dimension, we have shown that the DNS has a centralization and few players. We also showed that DNS centralization is economic in nature since big techs from developed countries lead the market. Moreover, several economic impacts (business disruption and reputation harm) may happen in companies and governments in case of intentional or non-intentional disruption of DNS underlying infrastructure. Our findings can also be explored from a legal dimension since digital sovereignty involves regulations and actions that can be done by policy-makers based on the technical analysis of the different protocols and dependence (DNS and its centralization on a few companies and countries). The rest of this section provides a discussion on each one of these dimensions. On the technical dimension, based on the results, it is very straightforward to assume that there is a clear indication of a DNS centralization, which can lead to a scenario where the Internet's infrastructure and management are directly dependent on a few players (governments and companies with different technical and political characteristics). This is not the best scenario, since it can lead to the issues discussed in Section <ref>, such as security, availability, and performance. Moreover, by allowing such centralization in a given country, region or company, the risk of Internet censorship increases, as such a control can be achieved by injecting fake DNS replies to block access to certain content <cit.>. Thus, the DNS infrastructure and its distribution concentrated on a few authoritative servers may lead to Internet outages (due to misconfigurations) and Internet censorship, as the technical enablers for implementing this control are in place. When discussing the economic dimension of DNS centralization, one point that relates is the possibility of DNS providers profiting from DNS lookup data. <cit.> advocates that DNS providers do not commercialize such information because of the potential consumer and regulatory backlash of such a monetization. However, suppose the DNS provider's centralization occurs in a country with not-so-well-defined regulations concerning commercializing user-sensitive data. In that case, further monopoly is risky as DNS lookup can be valuable for advertisement. Thus, monitoring and addressing DNS centralization and digital sovereignty is critical to tackling such an economic perspective. Further, most DNS providers (Amazon, Google, and Microsoft) are also major cloud provider companies <cit.>, where their business is strongly tied to providing a reliable DNS infrastructure to access such cloud instances. However, such a combined service offering leads to a vendor lock-in issue <cit.> and even further dependence on their infrastructure, in which companies are subject to such companies' pricing policies. In addition to these possible economic impacts, DNS centralization also has an economic motivation since big techs (often based in US) offer DNS resolvers and associated services as part of their business core. In 2020, the DNS market was worth USD 372 million, and it is expected to be worth USD 862 million by 2025 <cit.>. 
This growth expectation is attributed to the growing number of domain name registrations and also Web traffic. Concerns about security, centralization, and digital sovereignty may be part of the marketing and product development strategies for DNS providers and big techs operating the underlying infrastructure. Lastly, in the legal and political dimension, there are different efforts from the EU to strengthen the EU's digital sovereignty, such as the GDPR for the idea of data sovereignty and the action plan for more digital sovereignty called by governments of Germany, Estonia, Denmark, and Finland <cit.>. Cybersecurity experts, entrepreneurs, and decision-makers also moved to the discussion to highlight the need to develop and promote digital infrastructures under European technological sovereignty <cit.>. However, even though digital sovereignty is receiving much political attention around the World, the discussions still need to evolve to find a common understanding to succeed. In Brazil, the discussions on the topic are also increasing since discussions on regulations are still needed to increase national cybersecurity and digital sovereignty <cit.>. Thus, as seen with these examples and discussions, digital sovereignty is a matter that many stakeholders (governments, companies, and society) have to address from technical, economic, and legal perspectives. Otherwise, digital colonialism may become more prominent and dangerous in the following years, providing mechanisms to increase censorship and digital warfare. Thus, as shown in this work and experiments, we advocate that analysis and discussions under different lenses are needed. Besides centralization of protocols such as DNS, cybersecurity, regulations and investments for technology, and mobile communications and its vendors are examples of aspects that must be investigated to lead the discussions of digital sovereignty. § CONCLUSION AND FUTURE WORK The Domain Name System (DNS) infrastructure plays an essential role in the Internet access infrastructure by allowing content and services to be reached using easy-to-remember names (domains). However, during its development, it was never imagined that such a system would become a market of global proportions. Thus, aspects such as its centralization and governmental regulations were disregarded. In this sense, given its central role in society and concerns regarding the level of control that DNS providers could enforce if the system becomes centralized, understanding and identifying DNS centralization is a key concern. Thus, in this paper, we presented an approach to measure DNS centralization and digital sovereignty based on DNS domain resolution. The approach relies on a list of 1M popular domains (the Tranco list) and, for each one, identifies the name server responsible for hosting the domain (its authoritative server) and, based on its IP address, maps it to the Autonomous System (AS) managing the IP address. Further, with the AS information, the approach identifies the country in which the AS is located to analyze which regulations the AS is subject to. Consequently, with that information, the approach infers the top-10 DNS providers, the percentage of centralization of the Tranco list in these providers, and also the portion of domains that are managed within their country based on its country-code Top-Level Domain (ccTLD). 
Results from the analysis show that most of the top-10 DNS providers identified in the Tranco list are in the US, with Cloudflare being the 1st DNS provider. Further, the analysis of how centralized the DNS hosting industry is revealed that the concentration of domains resolved in the identified top-10 providers peaked at almost 40%, which shows signals of centralization. Lastly, the results of measuring digital sovereignty in Brazil, Russia, India, China, and South Africa (BRICS) and the European Union (EU) unveiled a scenario where a significant percentage of domains within these countries are not hosted by national companies but hosted on US-based organizations; exceptions being Russia and South Africa. Based on the results, it can be said that not only is DNS centralization occurring on the Internet as previous literature showed (Section <ref>), but also that countries are becoming less sovereign in terms of control over the national DNS infrastructure. Considering future work, it is planned to  analyze such DNS providers distribution with additional countries that are discussing digital sovereignty,  address the limitations of the work discussed in Section <ref>, and  create a tool to analyze DNS providers distribution periodically. Furthermore, our measurement approach can be extended to analyze additional protocols and technologies to provide a more granular technical view of the digital sovereignty landscape. IEEEtran All links visited on July, 2023
http://arxiv.org/abs/2307.00975v2
20230703124538
Fast Convergence of Inertial Multiobjective Gradient-like Systems with Asymptotic Vanishing Damping
[ "Konstantin Sonntag", "Sebastian Peitz" ]
math.OC
[ "math.OC", "90C29, 90C30, 90C25, 91A12, 91B55, 34E10, 37L05, 90B50, 91B55" ]
Konstantin Sonntag (Department of Mathematics, Paderborn University, Germany) and Sebastian Peitz (Department of Computer Science, Paderborn University, Germany)

Fast Convergence of Inertial Multiobjective Gradient-like Systems with Asymptotic Vanishing Damping

We present a new gradient-like dynamical system related to unconstrained convex smooth multiobjective optimization which involves inertial effects and asymptotic vanishing damping. To the best of our knowledge, this system is the first inertial gradient-like system for multiobjective optimization problems including asymptotic vanishing damping, expanding the ideas laid out in [H. Attouch and G. Garrigos, Multiobjective optimization: an inertial dynamical approach to Pareto optima, preprint, arXiv:1506.02823, 2015]. We prove existence of solutions to this system in finite dimensions and further prove that its bounded solutions converge weakly to weakly Pareto optimal points. In addition, we obtain a convergence rate of order 𝒪(t^-2) for the function values measured with a merit function. This approach presents a good basis for the development of fast gradient methods for multiobjective optimization.

§ INTRODUCTION

In this paper ℋ is a real Hilbert space with scalar product ⟨·, ·⟩ and norm ‖·‖. We are interested in a gradient-dynamic approach to the Pareto optima of the multiobjective optimization problem

(MOP)  min_{x ∈ ℋ} F(x),

with F: ℋ → ℝ^m, F(x) = (f_1(x), …, f_m(x)), where the f_i: ℋ → ℝ are convex and continuously differentiable functions for i = 1, …, m. We define the following multiobjective inertial gradient-like dynamical system with asymptotic vanishing damping

(MAVD)  (α/t) ẋ(t) + proj_{C(x(t)) + ẍ(t)}(0) = 0,

where C(x) := conv({∇ f_i(x) : i = 1, …, m}) is defined as the convex hull of the gradients. For a closed convex set K ⊂ ℋ and a vector x ∈ ℋ, the projection of x onto the set K is denoted by proj_K(x) := argmin_{y ∈ K} ‖y − x‖^2. Our interest in the system (MAVD) is motivated by the active research on dynamical systems for fast minimization and their relationship with numerical optimization methods. In the singleobjective case m = 1, (MAVD) reduces to the inertial gradient system with asymptotic vanishing damping

(AVD)  ẍ(t) + (α/t) ẋ(t) + ∇ f(x(t)) = 0,

which was introduced in <cit.> in connection with Nesterov's accelerated gradient method (see <cit.>) and analyzed further in <cit.>. For α > 0 every solution x of (AVD) satisfies lim_{t → +∞} f(x(t)) = min_{x ∈ ℋ} f(x). For α ≥ 3 it holds that f(x(t)) − min_{x ∈ ℋ} f(x) = 𝒪(t^-2) <cit.>. For α > 3 the trajectories enjoy an improved convergence rate of order f(x(t)) − min_{x ∈ ℋ} f(x) = o(t^-2), and every solution x converges weakly to a minimizer of f provided that the set of minimizers is nonempty (see <cit.>). Here, for a real-valued function g: [t_0, +∞) → ℝ_≥0 with t_0 ≥ 0, we write g(t) = 𝒪(t^-2) if there exists C > 0 such that t^2 g(t) ≤ C for all t ≥ t_0, and we write g(t) = o(t^-2) if lim_{t → +∞} t^2 g(t) = 0. It is an open question whether similar results can be obtained for multiobjective optimization problems (see <cit.>). While there exists an extensive literature on gradient systems connected with singleobjective optimization problems, similar systems for multiobjective optimization problems are rarely addressed in the literature. There are only a few results in this area, which we present in the following. When we neglect the inertial effects introduced by ẍ(t) and drop the damping coefficient α/t in (MAVD), we return to the multiobjective gradient system

(MOG)  ẋ(t) + proj_{C(x(t))}(0) = 0.
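The projection proj_{C(x)}(0), the minimum-norm element of the convex hull of the gradients, is the basic building block of (MOG) and (MAVD). As a small illustration (our own sketch, not code from the paper), for m = 2 it can be computed in closed form, since minimizing ‖θ∇f_1 + (1−θ)∇f_2‖² over θ ∈ [0, 1] is a scalar quadratic problem:

```python
import numpy as np


def min_norm_convex_combination(g1: np.ndarray, g2: np.ndarray) -> np.ndarray:
    """Return proj_{conv{g1, g2}}(0), the minimum-norm point of the segment
    between the two gradients (the m = 2 case of proj_{C(x)}(0))."""
    d = g1 - g2
    denom = float(d @ d)
    if denom == 0.0:            # both gradients coincide
        return g1.copy()
    theta = float(np.clip(-(g2 @ d) / denom, 0.0, 1.0))
    return theta * g1 + (1.0 - theta) * g2
```

Its negative is the multiobjective steepest-descent direction; for m > 2 the same quantity is obtained from a small quadratic program over the unit simplex Δ^m.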
The relation of the system (<ref>) to multiobjective optimization problems is discussed in <cit.>. In <cit.> the system (<ref>) is extended to the setting of nonsmooth multiobjective optimization. A remarkable property of the system (<ref>) is that the function values of each objective decrease along any solutionxof (<ref>), i.e.,d/dtf_i(x(t)) ≤0for alli = 1,…, m. Further, bounded solutions of (<ref>) converge weakly to weakly Pareto optimal solutions (see <cit.>). A first study on inertial gradient-like dynamical systems for multiobjective optimization in Hilbert spaces is proposed in <cit.>. The authors of <cit.> combine the system (<ref>) with the so-called heavy ball with friction dynamic. For a scalar optimization problemmin_x∈ f(x)with a smooth objective functionf:→̋the heavy ball with friction dynamical system reads as HBFẍ(t) + αẋ(t) + ∇ f(x(t)) = 0, forα> 0. The system (<ref>) and its connection with optimization problems is well-studied (see <cit.>). It can be shown that for a convex smooth functionfwith a nonempty set of minimizers, the solutionsxof (<ref>) converge weakly to minimizers offand hencelim_t →+ ∞ f(x(t)) = min_x ∈ f(x)(see <cit.>). In <cit.> the systems (<ref>) and (<ref>) are combined to the inertial multiobjective gradient system IMOGẍ(t) + αẋ(t) + _C(x(t))(0) = 0, forα> 0(see also <cit.>). Any bounded solutionxof (<ref>) converges weakly to a weak Pareto optimum of (<ref>) under the conditionα^2 > L, whereL > 0is a common Lipschitz constant of the gradients∇f_i. This last condition is the reason why it is not straight-forward to introduce asymptotic vanishing damping in (<ref>). If we include a damping coefficientα/tin the system (<ref>) to adapt the proof, we would need(α/t)^2 > Lfor allt ≥t_0which cannot hold. So either one has to find a different proof for the convergence of the trajectories of (<ref>) or one has to define another generalization of the system (<ref>) to the multiobjective optimization setting. In a first step to define a gradient-like system with asymptotic vanishing damping, in <cit.> we defined the system IMOG'αẋ(t) + _C(x(t)) + ẍ(t)(0) = 0, withα> 0, which also simplifies to the system (<ref>) in the singleobjective case. In <cit.> it is shown that for smooth convex objective functionsf_ieach bounded solution of (<ref>) converges weakly to a weak Pareto optimum of (<ref>). When we introduce asymptotic vanishing damping in (<ref>), we recover the system (<ref>) which is analyzed in this paper. In singleobjective optimization, asymptotic vanishing damping is of special interest from an optimization point of view since it guarantees fast convergence rates for the function values, namely of order𝒪(t^-2). To proof these convergence rates, one uses Lyapunov type energy functions which usually involve terms of the formt^2(f(x(t)) - f(x^*)), wherex^*is a minimizer tof. The choice of the minimizer is not crucial to the derivation since all minimizers have the same function value for a convex problem. In multiobjective optimization this is not possible anymore. Since there is no total order on^m, there is not a single solution to the multiobjective optimization problem (<ref>) but a set of solutions, the Pareto set. The values of the different objective functions vary along the Pareto front. If we choose a Pareto optimal solutionx^*to (<ref>) we run into the problem that the termsf_i(x(t)) - f_i(x^*)do not necessarily remain positive along the solution trajectories of (<ref>). 
We need a different concept to define suitable energy functions to ensure monotonous decay of the energy along trajectories. A fruitful approach to analyze the convergence rate of multiobjective optimization methods is conducted by the use of so-called merit functions. In <cit.> the function u_0(x) = sup_z ∈min_i = 1,…, m f_i(x) - f_i(z), is introduced as a merit function for nonlinear multiobjective optimization problems. The investigation of merit functions is part of the active research on accelerated first order methods for multiobjective optimization (see <cit.>). The functionu_0is nonnegative, attains the value zero only for weakly Pareto optimal solutions and is lower semicontinuous. It is therefore suitable as a measure of convergence speed for optimization methods in multiobjective optimization. In addition, in the singleobjective settingu_0(x)simplifies tof(x) - inf_z ∈ f(z). Using this idea, we are able to define energy functions in the spirit of <cit.> to prove convergence rates of orderu_0(x(t)) = 𝒪(t^-2)for solutionsxto (<ref>). The systems (<ref>) and (<ref>) have a fundamental distinction from the other systems mentioned beforehand. Since (<ref>) and (<ref>) involve the term_C(x(t)) + ẍ(t)(0), they cannot be written as an explicit second order differential equation of the formẍ(t) = G(x(t), ẋ(t), t), hence we cannot use standard theorems like the Cauchy-Lipschitz or Peano Theorem to prove existence of solutions. Instead, we use existence results for differential inclusions and show solutions to the system (<ref>) exist if a related differential inclusion has solutions. These ideas were already laid out in <cit.>. This paper is organized as follows. In Section <ref>, we present the background on multiobjective optimization and merit functions which we use in our convergence analysis. We prove the existence of global solutions to the system (<ref>) in finite dimensions in Section <ref>. Section <ref> contains our main results on the properties of the trajectoriesxof (<ref>). In Theorem <ref>, we provelim_t →+ ∞ u_0(x(t)) + 1/2‖ẋ(t) ‖^2 = 0for allα> 0. Convergence of orderu_0(x(t)) = 𝒪(t^-2)forα≥3is proven in Theorem <ref>. Theorem <ref> states weak convergence of the bounded solutions of (<ref>) to weakly Pareto optimal solutions using Opial's Lemma, givenα> 3. In Section <ref>, we verify the bounds for the convergence speed ofu_0(x(t))on two numerical examples. We end with a discussion on the relation of the system (<ref>) with numerical methods for fast multiobjective optimization in Section <ref> and the conclusion and outlook on future research in Section <ref>. § MULTIOBJECTIVE OPTIMIZATION §.§ Pareto optimal and Pareto critical points The goal of multiobjective optimization is to optimize several functions simultaneously. In general, it is not possible to find a point minimizing all objective functions at once. Therefore, we have to adjust the definition of optimality in this setting. This can be done via the concept of Pareto optimality (see <cit.>), which is defined as follows. Consider the optimization problem (<ref>). * A point x^* ∈$̋ is Pareto optimal if there does not exist another pointx ∈$̋ such that f_i(x) ≤ f_i(x^*) for all i = 1,…,m, and f_j(x) < f_j(x^*) for at least one index j. The set of all Pareto optimal points is the Pareto set, which we denote by P. The set F(P) ⊂^m in the image space is called the Pareto front. * A point x^* ∈$̋ is weakly Pareto optimal if there does not exist another vectorx ∈$̋ such that f_i(x) < f_i(x^*) for all i = 1,…, m. 
The set of all weakly Pareto optimal points is the weak Pareto set, which we denote by P_w and the set F(P_w) is called the weak Pareto front. From Definition <ref> it immediately follows thatP ⊂P_w. Solving problem (<ref>) means in our setting computing one Pareto optimal point. We do not aim to compute the entire Pareto set. This can be done in a consecutive step using globalization strategies (see <cit.>). Since Pareto optimality is defined as a global property, the definition cannot directly be used in practice to check whether a given point is Pareto optimal. Fortunately, there are first order optimality conditions that we can use instead. As in the singleobjective case, they are known as the Karush-Kuhn-Tucker conditions. The set Δ^m {θ∈^m : θ≥ 0, and ∑_i=1^m θ_i = 1 } is the positive unit simplex. A point x^* ∈$̋ satisfies the Karush-Kuhn-Tucker conditions if there existsθ∈Δ^msuch that ∑_i=1^m θ_i ∇ f_i(x^*) = 0. Ifx^*satisfies the Karush-Kuhn-Tucker conditions, we call it Pareto critical. The set of all Pareto critical points is the Pareto critical set, which we denote byP_c. The Karush-Kuhn-Tucker conditions are equivalent to0 ∈({∇ f_i(x^*) : i = 1, …, m })(which is also called Fermat's rule). Analogously to the singleobjective setting, criticality of a point is a necessary condition for optimality. In the convex setting, the KKT conditions are also sufficient conditions for weak Pareto optimality and we have the relation P ⊂ P_w = P_c. §.§ Convergence analysis and merit functions How do we characterize the convergence of function values for multiobjective optimization problems? For singleobjective optimization problems of the formmin_x ∈ f(x)with a convex objective functionf:→̋, we are interested in the rate of convergence off(x(t)) - inf_x∈ f(x). For the problem (<ref>) there is no solution yielding a smallest function value for all objective functions. In the image setF()̋ = { F(x) : x ∈}̋⊂^mthere is a set of nondominated points, the so-called Pareto front, which isF(P) ⊂ F()̋. If we want to characterize the rate of convergence of the function values of a trajectoryxfor a multiobjective optimization problem, this should relate to the distance ofF(x(t))to the Pareto front. A line of research considers so-called merit functions for multiobjcetive optimization problems to characterize the rate of convergence of function values (see <cit.> and further references in <cit.>). A merit function associated with an optimization problem is a function that returns zero at an optimal solution and which is strictly positive otherwise. In this paper we restrict ourselves to the merit function u_0(x) sup_z ∈min_i = 1,…, m f_i(x) - f_i(z). This function is indeed a merit function for multiobjective optimization problems with respect to weak Pareto optimality as the following theorem states. Let u_0(x) be defined by (<ref>). For all x ∈$̋ it holds thatu_0(x)≥ 0. Moreover,x ∈$̋ is weakly Pareto optimal for (<ref>), if and only if u_0(x) = 0. A proof can be found in <cit.>. Additionally,u_0is lower semicontinuous. Therefore, accumulation points of a smooth curvet ↦ x(t)withlim_t → + ∞ u_0(x(t)) = 0are weakly Pareto optimal. This motivates the usage ofu_0as a measure of convergence speed for multiobjective optimization methods. Even if all objective functions are smooth, the functionu_0is in general not smooth. This has to be kept in mind when we define Lyapunov type functions for the system (<ref>) involvingu_0. 
The relationship of the merit functionu_0to the distance ofF(x(t))from the Pareto front is visualized in Figure <ref>. For the given Pareto front the problemsup_z ∈min_i=1,…,m f_i(x) - f_i(z)can be solved visually. The solutionz^* ∈_z ∈min_i=1,…,m f_i(x) - f_i(z)in this example satisfies the following two properties. On the one handu_0(x) = f_j(x) - f_j(z^*)for allj = 1,…, m. On the other handu_0(x) = _‖·‖_∞(F(x), F(P)), where_‖·‖_∞(K, x) inf_y ∈ K‖ y - x ‖_∞is the distance of a pointx ∈^mto a given setK ⊂^min the maximum norm. This makes the interpretation of the merit functionu_0intuitive in many cases. In the analysis laid out in Section <ref> we require the following standing assumption on (<ref>). This assumption describes a condition on the weak Pareto set. Let P_w be the set of weakly Pareto optimal points for (<ref>), and define for F̂∈^m the lower level set ℒ(F̂) { x ∈ : F(x) ≤F̂}. Further define P_w(F̂) = P_w ∩ℒ(F̂). We assume that for all x_0∈$̋ and allx ∈ℒ(F(x_0))there existsx^* ∈ P_w(F(x))and R sup_F^* ∈ F(P_w(F(x_0)))inf_x ∈ F^-1(F^*)1/2‖ x - x_0 ‖^2 < +∞. Assumption <ref> is satisfied in the following cases. * For singleobjective optimization problems, Assumption <ref> is satisfied if the optimization problem has at least one optimal solution. In this setting, for all x_0 ∈$̋ the weak Pareto setP_w = P_w(F(x_0))coincides with the optimal solution set_x ∈ f(x)andinf_x∈ P_w1/2‖ x - x_0 ‖^2 < + ∞holds. * Assumption <ref> is valid, if the level setℒ(F(x_0))is bounded. For example, this is the case when for at least onei ∈{ 1, …, m }the set{ x ∈:̋ f_i(x) ≤ f_i(x_0) }is bounded. For the convergence analysis in Section <ref> we need an additional lemma for the merit functionu_0. This lemma describes how to retrieveu_0frommin_i=1,…,m f_i(x) - f_i(z)without taking the supremum over the whole space$̋. We need this lemma in particular when we apply the supremum to inequalities in which we bound u_0. Let x_0 ∈$̋ andx ∈ℒ(F(x_0)), then sup_F^* ∈ F(P_w(F(x_0)))inf_z ∈ F^-1(F^*)min_i=1,…,m f_i(x) - f_i(z) = sup_z ∈min_i=1,…,m f_i(x) - f_i(z). A proof of this statement is contained in the proof of Theorem 5.2 in <cit.>. § GLOBAL EXISTENCE OF SOLUTIONS TO (MAVD) IN FINITE DIMENSIONS In this section, we show that solutions exist for the Cauchy problem related to (<ref>), i.e. CP|[ α/tẋ(t) + _C(x(t)) + ẍ(t)(0) = 0,; ; x(t_0) = x_0, ẋ(t_0) = v_0, ]. with initial datax_0, v_0 ∈$̋ and starting time t_0 > 0. The differential equation in (<ref>) is implicit. Therefore, we cannot use the Cauchy-Lipschitz or the Peano Theorem to prove existence of solutions. To overcome this problem, we show that solutions exist for (<ref>) if there exist solutions to a first order differential inclusion (u̇(t), v̇(t)) ∈ G(t, u(t), v(t)), with a set-valued map G:[t_0, +∞)××̋⇉̋×̋$̋. Then, we use an existence theorem for differential inclusions from <cit.>. Using this approach, we do not expect solutionsxto be twice continuously differentiable but allow solutions to (<ref>) to be less smooth. We specify what we mean by a solution to the Cauchy problem (<ref>) in Definition <ref>. Since the coefficientα/thas a singularity att = 0, we restrict the analysis in this paper to the caset_0 > 0. As our argument only works in finite-dimensional Hilbert spaces, we demand()̋ < +∞in this section. In our context, the set-valued map G: [t_0, + ∞) ××̋⇉̋×̋,̋ (t,u,v) ↦{ v }×( - α/t v - _g ∈ C(u)⟨ g, -v ⟩) is of interest. As stated above,C(u) ({∇ f_i(u) : i = 1,…,m }). 
We can show that (<ref>) has a solution if the differential inclusion DI|[ (u̇(t), v̇(t)) ∈ G(t, u(t), v(t)),; ; (u(t_0), v(t_0)) = (u_0, v_0), ]. with appropriate initial datau_0, v_0 ∈$̋ and t_0 > 0 has a solution. §.§ Existence of solutions to (DI) To show that there exist solutions to (<ref>), we investigate the set-valued map (t, u,v) ⇉ G(t, u,v) defined in (<ref>). For a more detailed introduction to basic definitions regarding set-valued maps, the reader is referred to <cit.>. The set-valued map G defined in (<ref>) has the following properties: * For all (t,u,v) ∈ [t_0, +∞) ××̋$̋, the setG(t,u,v) ⊂×̋$̋ is convex and compact. * G is upper semicontinuous. * For ()̋ < + ∞, the map ϕ: [t_0, +∞) ××̋→̋×̋,̋ (t,u,v) ↦_G(t,u,v)(0) is locally compact. * Assume the gradients ∇ f_i are globally L-Lipschitz continuous. Then, there exists c > 0 such that for all (t, u, v) ∈ [t_0, +∞) ××̋$̋, it holds that sup_ξ∈ G(t, u, v)‖ξ‖_×̋≤ c(1 + ‖ (u,v) ‖_×̋), where for(x,y) ∈×̋$̋ we have ‖ (x,y) ‖_×̋ = √(‖ x ‖^2 + ‖ y ‖^2). The proof is contained in Appendix <ref>. The following existence theorem from <cit.> is applicable in our setting. Let 𝒳 be a Hilbert space and let Ω⊂×𝒳 be an open subset containing (t_0,x_0). Let G be an upper semicontinuous map from Ω into the nonempty closed convex subsets of 𝒳. We assume that (t,x) ↦_G(t,x)(0) is locally compact. Then, there exists T > t_0 and an absolutely continuous function x defined on [t_0, T] which is a solution to the differential inclusion ẋ(t) ∈ G(t, x(t)), x(t_0) = x_0. In the following remark we want to give a more precise description of the solutions to a differential inclusion (<ref>) and give more insight into Theorem <ref>. This is particularly important since the main results of this paper are concerned with the asymptotic behaviour of the solutions of a differential inclusion. Consider the general differential inclusion (<ref>). A solutions x:[t_0, T] →𝒳 given by Theorem <ref> is not differentiable but merely absolutely continuous. Therefore, the notion ẋ(t) ∈ G(t, x(t)) cannot hold on the entire domain [t_0, T]. An absolutely continuous function x:[t_0, T] →𝒳 is differentiable almost everywhere in [t_0, T]. A solution x to (<ref>) satisfies the inclusion ẋ(t) ∈ G(t, x(t)) in every t, where the derivative ẋ(t) is defined. In general ẋ will not be continuous. But since x is absolutely continuous with values in a Hilbert space (which satisfies the Radon-Nikodym property) ẋ is Bochner integrable and x(t) = x(t_0) + ∫_t_0^t ẋ(s) ds (see <cit.>). Within the next theorem, we prove existence of solutions to the differential inclusion (<ref>). Assume $̋ is finite-dimensional and that the gradients of the objective function∇ f_iare globally Lipschitz continuous. Then, for all(u_0, v_0) ∈×̋$̋ there exists T > t_0 and an absolutely continuous function (u(·), v(·)) defined on [t_0, T] which is a solution to the differential inclusion (<ref>). The proof follows immediately from Proposition <ref> which shows that the set-valued map satisfies all conditions required for Theorem <ref>. In the following, we show that under additional conditions on the objective functions f_i, there exist solutions defined on [t_0, +∞). The extension of the solution follows by a standard argument. We show that solutions to (<ref>) remain bounded. Then, we use Zorn's Lemma to retrieve a contradiction if there is a maximal solution that is not defined on [t_0, +∞). Assume $̋ is finite-dimensional and that the gradients of the objective function∇ f_iare globally Lipschitz continuous. 
Then, for all(u_0, v_0) ∈×̋$̋ there exists a function (u(·), v(·)) defined on [t_0, +∞) which is absolutely continuous on [t_0, T] for all T > t_0 and which is a solution to the differential inclusion (<ref>). Theorem <ref> guarantees the existence of solutions defined on [t_0, T) for some T ≥ t_0. Using the domain of definition, we can define a partial order on the set of solutions to the problem (<ref>). Assuming there is no solution defined on [t_0, +∞), Zorn's Lemma guarantees the existence of a solution (u(·), v(·)):[t_0, T) →×̋$̋ withT < + ∞which can not be extended. In order to derive a contradiction to the claimed maximality, we show that(u(t), v(t))does not blow up in finite time and therefore can be extended. Define h(t) ‖ (u(t), v(t)) - (u(t_0), v(t_0)) ‖_×̋, where‖ (x,y) ‖_×̋ = √(‖ x ‖^2 + ‖ y ‖^2). We show thath(t)can be bounded by a real-valued function defined on[t_0, T]. Using the Cauchy-Schwarz inequality, we get d/dt1/2 h^2(t) = ⟨ (u̇(t), v̇(t)), (u(t), v(t)) - (u(t_0), v(t_0)) ⟩_×̋ ≤‖ (u̇(t), v̇(t)) ‖_×̋ h(t). Proposition <ref> (iii) guarantees the existence of a constantc > 0with ‖ (u̇(t), v̇(t)) ‖_×̋≤ c(1 + ‖ (u(t), v(t)) ‖_×̋). Definec̃ c ( 1 + ‖ (u(t_0), v(t_0)) ‖_×̋). Then, by applying the triangle inequality to (<ref>) we have ‖ (u̇(t), v̇(t)) ‖_×̋≤c̃(1 + ‖(u(t), v(t)) - (u(t_0), v(t_0)) ‖_×̋). Combining inequalities (<ref>) and (<ref>), we get d/dt1/2h^2(t) ≤c̃(1 + h(t) ) h(t), for all t ∈ [t_0, T). Using a Gronwall-type argument (see Lemma A.4 and Lemma A.5 in <cit.>) as in Theorem 3.5 in <cit.>, we conclude that for anyε > 0h(t) ≤c̃T exp(c̃T), for all t ∈ [t_0, T - ε]. Since this upper bound is independent oftandε, it follows thath ∈ L^∞([0,T], ). Therefore, solutions to (<ref>) do not blow up in finite time and can be extended. This is a contradiction to the maximality of the solution(u(t), v(t)). §.§ Existence of solutions to (CP) Using the findings of the previous subsection, we can proceed with the discussion of the Cauchy problem (<ref>). In this subsection, we show that solutions to the differential inclusion (<ref>) immediately give solutions to the Cauchy problem (<ref>). Before we retrieve this result we have to specify what we understand under a solution to (<ref>). Due to the implicit structure of the equation (<ref>), we cannot guarantee the existence of a twice continuously differentiable functionxwhich satisfies the equation (<ref>). We reduce the requirements in the following sense: We call a function x:[t_0, +∞) →$̋ a solution to the Cauchy problem (<ref>) if it satisfies the following conditions. * x ∈ C^1([t_0, +∞)), e.g. x is continuously differentiable on [t_0, +∞). * ẋ is absolutely continuous on [t_0, T] for all T ≥ t_0. * There exists a (Bochner) measurable function ẍ:[t_0, +∞) →$̋ withẋ(t) = ẋ(t_0) + ∫_t_0^t ẍ(s) dsfor allt ≥ t_0. *ẋis differentiable almost everywhere andd/dtẋ (t) = ẍ(t)holds for almost allt ∈ (t_0, +∞). *α/tẋ(t) + _C(x(t)) + ẍ(t)(0) = 0holds for almost allt ∈ [t_0, +∞). *x(t_0) = x_0andẋ(t_0) = v_0hold. Conditions (iii) and (iv) are merely consequences of (ii) (see <cit.>), since ẋ is absolutely continuous on every compact interval [t_0, T] with values in a Hilbert space (which satisfies the Radon-Nikodym property). The (Bochner) measurability of ẍ will be of importance in the analysis of the trajectories. To show that solutions of (<ref>) give solutionsxwhich satisfy part(v)of Definition <ref>, we need the following auxiliary Lemma. Let $̋ be a real Hilbert space,C ⊂$̋ a convex and compact set and η∈$̋ a fixed vector. 
Thenξ∈η - _μ∈ C⟨μ, η⟩if and only ifη = _C + ξ(0). Let ξ∈η - _μ∈ C⟨μ, η⟩. Then, η - ξ∈_μ∈ C⟨μ, η⟩ and hence ⟨η - ξ , η⟩≤⟨μ , η⟩ for all μ∈ C. This is equivalent to 0 ≤⟨μ + ξ - η , η⟩ for all μ∈ C. Since η∈ C + ξ this is equivalent to η = _C + ξ(0). The other implication follows analogously. Let x_0, v_0 ∈$̋ andt_0 > 0. Assume(u(·), v(·)): [t_0, +∞) →×̋$̋ is a solution to (<ref>) with (u(t_0), v(t_0)) = (x_0, v_0). Then, it follows that x(t) u(t) satisfies the differential equation α/tẋ(t) + _C(x(t)) + ẍ(t)(0)= 0, for almost all t∈ (t_0, +∞), and x(t_0) = x_0, ẋ(t_0) = v_0. Since (u(·), v(·)) is a solution to (<ref>), the relations u̇(t) = v(t) and v̇(t) ∈ - α/t v(t) - _g ∈ C(u(t))⟨ g, -v(t)⟩, hold for almost all t ∈ (t_0, +∞). Using α/t > 0, we can write the second line as v̇(t) ∈ - α/t v(t) - _g ∈ C(u(t))⟨ z, - α/t v(t) ⟩. Using Lemma <ref> with η = -α/tv(t), C = C(u(t)) and ξ = v̇(t), the second line in (<ref>) gives - α/t v(t) = _C(u(t)) + v̇(t)(0). Rewriting this system using x(t) = u(t), ẋ(t) = u̇(t) = v(t) and ẍ(t) = v̇(t) and verifying the initial conditions x(t_0) = u(t_0) = x_0 and ẋ(t_0) = v(t_0) = v_0 yields the desired result. Finally, we can state the full existence theorem for the Cauchy problem (<ref>). Assume $̋ is finite-dimensional and the gradients of the objective function∇ f_iare globally Lipschitz continuous. Then, for allx_0, v_0 ∈$̋, there exists a function x which is a solution to the Cauchy problem (<ref>) in the sense of Definition <ref>. The proof follows immediately combining Theorem <ref> and Theorem <ref>. In Theorem <ref>, we assume that the gradients ∇ f_i of the objective functions are globally Lipschitz continuous. One can relax this condition and only require the gradients to be Lipschitz continuous on bounded sets if we can guarantee that the solutions remain bounded. This holds for example if one of the objective functions f_i is coercive. § ASYMPTOTIC BEHAVIOUR OF TRAJECTORIES This section contains the main results regarding the asymptotic properties of the solutions to (<ref>). We show that forα > 0the trajectoriesxof (<ref>) minimize the function values. In Theorem <ref>, we show thatu_0(x(t)) + 1/2‖ẋ(t) ‖^2 → 0ast → + ∞holds in this setting. Hence, every weak accumulation point ofxis weakly Pareto optimal. Forα≥ 3we prove fast convergence for the function values with rateu_0(x(t)) = 𝒪(t^-2)ast → + ∞, as we show in Theorem <ref>. In Theorem <ref>, we prove that forα > 3the trajectoriesxof (<ref>) converge weakly to weakly Pareto optimal points using Opial's Lemma. §.§ Preliminary remarks and estimations Throughout this subsection, we fix a solutionx:[t_0, +∞) →$̋ to (<ref>) in the sense of Definition <ref> with initial velocity ẋ(t_0) = 0. Setting the initial velocity to zero has the advantage that the trajectories x remain in the level set ℒ(F(x_0)) as stated in Corollary <ref>. Hence, if the level set ℒ(F(x_0)) is bounded also the solution x remains bounded. Let x:[t_0, +∞) →$̋ be a solution to (<ref>). Fori = 1,…,m, define the global energy _i:[t_0, +∞) →, t↦ f_i(x(t)) + 1/2‖ẋ(t) ‖^2. Then, for alli = 1,…, mand almost allt∈(t_0,+∞)it holds thatd/dt_i(t) ≤ -α/t‖ẋ(t) ‖^2. Hence,_iis nonincreasing, and_i^∞ = lim_t → +∞_i(t)exists in∪{-∞}. Iff_iis bounded from below, then_i^∞∈.   The function _i is differentiable almost everywhere in [t_0, +∞) with derivative d/dt_i(t) = d/dt[f_i(x(t)) + 1/2‖ẋ(t) ‖^2 ] = ⟨∇ f_i(x(t)) , ẋ(t) ⟩ + ⟨ẋ(t) , ẍ(t) ⟩. 
Using the variational representation of -α/tẋ(t) = _C(x(t)) + ẍ(t) and the fact that ∇ f_i(x(t)) ∈ C(x(t)), we get for all i = 1,…, m ⟨ẍ(t) + α/tẋ(t) + ∇ f_i(x(t)), ẋ(t) ⟩≤ 0, and hence, ⟨∇ f_i(x(t)), ẋ(t) ⟩ + ⟨ẍ(t), ẋ(t) ⟩≤ - α/t‖ẋ(t) ‖^2. Combining (<ref>) and (<ref>) gives d/dt_i(t) ≤ -α/t‖ẋ(t) ‖^2. Due to the inertial effects in (<ref>), there is in general no monotone descent for the objective values along the trajectories. The following corollary guarantees that the function values along the trajectories are at least bounded from above by the initial function values givenẋ(t_0) = 0. Let x:[t_0, +∞) →$̋ be a solution to (<ref>) withẋ(t_0) = 0. For alli = 1,…,mand allt ∈ [t_0, +∞), it holds that f_i(x(t)) ≤ f_i(x_0), i.e.x(t) ∈ℒ(F(x_0))for allt ≥ t_0. From Proposition <ref>, we follow for all t ∈ [t_0, +∞) f_i(x_0) = _i(t_0) ≥_i(t) = f_i(x(t)) + 1/2‖ẋ(t) ‖^2 ≥ f_i(x(t)). In the following proofs, we need the weightsθ(t) ∈Δ^mwhich are implicitly given by -α/tẋ(t) = _C(x(t)) + ẍ(t)(0) = ∑_i=1^m θ_i(t) ∇ f_i(x(t)) + ẍ(t), fort ∈ [t_0, +∞). In the proofs of the following lemmata, we take the integral over the weightsθ(t). Therefore, we have to guarantee that we can find a measurable selectiont ↦θ(t) ∈Δ^msatisfying (<ref>). Let x(t) be a solution to (<ref>). Then, there exists a measurable function θ: [t_0, +∞) →Δ^m, t ↦θ(t), with _C(x(t)) + ẍ(t)(0) = ∑_i=1^m θ_i(t) ∇ f_i(x(t)) + ẍ(t), for all t ∈ [t_0, +∞). Our proof is based on the proof of Proposition 4.6 in <cit.>. Rewrite θ(t) as a solution to the problem θ(t) ∈_θ∈Δ^m j(t, θ), with j(t, θ) 1/2‖∑_i = 1^m θ_i ∇ f_i(x(t)) + ẍ(t) ‖^2. We show that j is a Carathéodory integrand. Then, the proof follows from Theorem 14.37 in <cit.>, which guarantees the existence of a measurable selection θ : [t_0, +∞) →Δ^m, t ↦θ(t) ∈_θ∈Δ^m j(t, θ). For all t ∈ [t_0, +∞), the function θ↦ j(t, θ) is continuous. By Theorem <ref>, x is a solution to (<ref>) in the sense of Definition <ref>. This means ẍ is (Bochner) integrable on every interval [t_0, T] and therefore (Bochner) measurable. Then, for all θ∈Δ^m the function t ↦ j(θ, t) is measurable as a composition of a measurable and a continuous function. This implies that j is indeed a Carathéodory integrand which completes the proof. In the following, whenever we writeθwe mean the measurable function given by Lemma <ref>. For z ∈$̋, defineh_z:[t_0, +∞) →by h_z(t) 1/2‖ x(t) - z ‖^2. For almost allt ∈ [t_0, +∞), it holds that ḧ_z(t) + α/tḣ_z(t) + ∑_i=1^m θ_i(t) (f_i(x(t)) - f_i(z)) ≤‖ẋ(t) ‖^2.   By the chain rule, we have for almost all t ∈ [t_0, +∞) ḣ_z(t) = ⟨ x(t) - z, ẋ(t) ⟩ and ḧ_z(t) = ⟨ x(t) - z, ẍ(t) ⟩ + ‖ẋ(t) ‖^2. We combine these expressions to get ḧ_z(t) + α/tḣ_z(t) = ‖ẋ(t) ‖^2 + ⟨ x(t) - z, ẍ(t) + α/tẋ(t) ⟩ = ‖ẋ(t) ‖^2 + ⟨ x(t) - z, - ∑_i = 1^m θ_i(t) ∇ f_i(x(t)) ⟩. The objective functions f_i are convex and hence ⟨ x(t) - z , ∇ f_i(x(t)) ⟩≥ f_i(x(t)) - f_i(z). Then, combining (<ref>) and (<ref>) gives ḧ_z(t) + α/tḣ_z(t) + ∑_i=1^m θ_i(t) (f_i(x(t)) - f_i(z)) ≤‖ẋ(t) ‖^2. Using this lemma, we derive the following relation betweenh_zand_i. Take z∈$̋ and let_iandh_zbe defined by (<ref>) and (<ref>), respectively. Then, for allt ∈ [t_0, +∞), it holds that ∫_t_0^t 1/s∑_i=1^m θ_i(s)(_i(s) - f_i(z) ) ds + 3/2α∑_i=1^m θ_i(t)(_i(t) - f_i(z) ) ≤ C_z - 1/tḣ_z(t), withC_z (α + 1) 1/t_0^2h_z(t_0) + 3/2αmax_i = 1,…, m(f_i(x_0) - f_i(z)).   Adding ‖ẋ(t) ‖^2 to inequality (<ref>) and dividing by t, we get for almost all t ∈ [t_0, +∞) 1/tḧ_z(t) + α/t^2ḣ_z(t) + 1/t∑_i=1^m θ_i(t) (_i(t) - f_i(z)) ≤3/2t‖ẋ(t) ‖^2. 
We reorder the terms in inequality (<ref>) and integrate from t_0 to t > t_0, to obtain ∫_t_0^t1/s∑_i=1^m θ_i(s) (_i(s) - f_i(z)) ds ≤ - ∫_t_0^t 1/sḧ_z(s) + α/s^2ḣ_z(s) ds + ∫_t_0^t 3/2s‖ẋ(s) ‖^2 ds. Integration by parts gives - ∫_t_0^t 1/sḧ_z(s) + α/s^2ḣ_z(s) ds = 1/t_0ḣ_z(t_0) - 1/tḣ_z(t) - (α + 1) ∫_t_0^t 1/s^2ḣ_z(s) ds. From ẋ(t_0) = 0 we immediately get ḣ_z(t_0) = 0. Combining (<ref>) and (<ref>) yields ∫_t_0^t1/s∑_i=1^m θ_i(s) (_i(s) - f_i(z)) ds ≤ - 1/tḣ_z(t) - (α + 1) ∫_t_0^t 1/s^2ḣ_z(s) ds + ∫_t_0^t 3/2s‖ẋ(s) ‖^2 ds. By Proposition <ref>, we have ∫_t_0^t 3/2s‖ẋ(s) ‖^2 ds ≤3/2α∑_i=1^m θ_i(t)(_i(t_0) - W_i(t)). Applying inequality (<ref>) to (<ref>) and using _i(t_0) = f_i(x_0) yields ∫_t_0^t1/s∑_i=1^m θ_i(s) (_i(s) - f_i(z)) ds ≤ - 1/tḣ_z(t) - (α + 1) ∫_t_0^t 1/s^2ḣ_z(s) ds + 3/2α∑_i=1^m θ_i(t)(f_i(x_0) - _i(t)). Using integration by parts one more time, gives ∫_t_0^t 1/s^2ḣ_z(s) ds = 1/t^2h_z(t) - 1/t_0^2h_z(t_0) + ∫_t_0^t 2/s^3h_z(s) ds ≥ -1/t_0^2 h_z(t_0). Combining (<ref>) and (<ref>), we derive ∫_t_0^t1/s∑_i=1^m θ_i(s) (_i(s) - f_i(z)) ds ≤ - 1/tḣ_z(t) + (α + 1) 1/t_0^2h_z(t_0) + 3/2α∑_i=1^m θ_i(t)(f_i(x_0) - _i(t)) ≤ - 1/tḣ_z(t) + (α + 1) 1/t_0^2h_z(t_0) + 3/2α∑_i=1^m θ_i(t)(f_i(x_0) - f_i(z)) + 3/2α∑_i=1^m θ_i(t)(f_i(z) - _i(t)) ≤ C_z - 1/tḣ_z(t) - 3/2α∑_i=1^m θ_i(t) (_i(t) - f_i(z)), with C_z (α + 1) 1/t_0^2h_z(t_0) + 3/2αmax_i = 1,…, m(f_i(x_0) - f_i(z)), which completes the proof. Take z∈$̋ and let_i, h_zandC_zbe defined by (<ref>), (<ref>) and (<ref>), respectively. Then, for allτ > t_0it holds that min_i = 1,…,m(_i(τ) - f_i(z) ) [ τlnτ + A τ + B ] ≤ C_z( τ - t_0) + h_z(t_0)/t_0, with constantsA, B ∈which are independent ofz. Set z ∈$̋ andτ≥ t > t_0. Proposition <ref> states that the functions_iare nonincreasing. Therefore, we have for alls ∈ [t_0, t], that_i(τ) - f_i(z) ≤_i(s) - f_i(z)and hence, min_i = 1,…,m(_i(τ) - f_i(z) ) ∫_t_0^t 1/s ds + 3/2αmin_i = 1,…,m(_i(τ) - f_i(z) ) ≤ ∫_t_0^t 1/smin_i = 1,…,m(_i(s) - f_i(z) ) ds + 3/2αmin_i = 1,…,m(_i(t) - f_i(z) ). Using Lemma <ref>, we get ∫_t_0^t 1/smin_i = 1,…,m(_i(s) - f_i(z) ) ds + 3/2αmin_i = 1,…,m(_i(t) - f_i(z) ) ≤ ∫_t_0^t 1/s∑_i=1^m θ_i(s) (_i(s) - f_i(z) ) ds + 3/2α∑_i=1^m θ_i(t)(_i(t) - f_i(z) ) ≤ C_z - 1/tḣ_z(t). Inequalities (<ref>) and (<ref>) together give min_i = 1,…,m(_i(τ) - f_i(z) ) [ ln t - ln t_0 + 3/2α] ≤ C_z - 1/tḣ_z(t). Integrating inequality (<ref>) fromt = t_0tot = τ, we have min_i = 1,…,m(_i(τ) - f_i(z) ) [ τlnτ + τ - t_0 ln t_0 - t_0 + (3/2α - ln t_0 )(τ - t_0) ] ≤ C_z( τ - t_0) - ∫_t_0^τ1/tḣ_z(t) dt. Integration by parts yields ∫_t_0^τ1/tḣ_z(t) dt = h_z(τ)/τ - h_z(t_0)/t_0 + ∫_t_0^τh_z(t)/t^2 dt ≥ - h_z(t_0)/t_0. Using inequality (<ref>) in (<ref>), we write min_i = 1,…,m(_i(τ) - f_i(z) ) [ τlnτ - τ - t_0 ln t_0 + t_0 + (3/2α - ln t_0 )(τ - t_0) ] ≤ C_z( τ - t_0) + h_z(t_0)/t_0. Introducing suitable constantsA, B∈, this gives the desired result. With the next theorem, we state the main result of this subsection. Theorem <ref> states that the function values of the trajectoriesF(x(t)) ∈^mconverge to elements of the Pareto front. In addition, Theorem <ref> states that every weak limit point of the trajectoryx(t)is weakly Pareto optimal. This will be important when we prove the weak convergence of the trajectories to weakly Pareto optimal points in Subsection <ref>. Let α > 0 and suppose x:[t_0, +∞) →$̋ is a solution of (<ref>). Define the global energy (t) = u_0(x(t)) + 1/2‖ẋ(t) ‖^2 = sup_z ∈min_i=1,…,m(f_i(x(t)) - f_i(z) ) + 1/2‖ẋ(t) ‖^2. 
If the functionsf_iare bounded from below and Assumption <ref> holds. Then,lim_t → + ∞(t) = 0. Hence,lim_t → + ∞ u_0(x(t)) = 0and by Theorem <ref> every weak limit point ofx(t)is Pareto critical. Lemma <ref> states min_i = 1,…,m(_i(τ) - f_i(z) ) [ τlnτ + A τ + B ] ≤ C_z( τ - t_0) + h_z(t_0)/t_0, for all τ > t_0. We cannot directly take the supremum on both sides since C_z might be unbounded w.r.t. z ∈$̋. Forz ∈ℒ(F(x_0))we havemax_i=1, …, m( f_i(x_0) - f_i(z) ) ≤max_i=1, …, m(f_i(x_0) - inf_z ∈ f_i(z) ) M. Since allf_iare bounded from below by assumption, we haveM < +∞. FixF^* ∈ F(P_w(F(x_0))). Using the definition ofC_zgiven in (<ref>), we get for allz ∈ F^-1(F^*)C_z(τ - t_0) + h_z(t_0)/t_0≤ ((α + 1)1/t_0^2h_z(t_0) + M)(τ - t_0) + h_z(t_0)/t_0 = ( α + 1/t_0^2(τ - t_0) + 1/t_0) h_z(t_0) + M(τ - t_0). By Assumption <ref>sup_F^* ∈ F(P_w(F(x_0)))inf_z ∈ F^-1(F^*) h_z(t_0) = R < + ∞. Applying this infimum and supremum to (<ref>) we have sup_F^* ∈ F(P_w(F(x_0)))inf_z ∈ F^-1(F^*)( C_z(τ - t_0) + h_z(t_0)/t_0) ≤((α + 1)R/t_0^2 + M )(τ - t_0) + R/t_0. Lemma <ref> states sup_F^* ∈ F(P_w(F(x_0)))inf_z ∈ F^-1(F^*)min_i=1,…,m(f_i(x) - f_i(z)) = sup_z ∈min_i=1,…,m(f_i(x) - f_i(z)). Combining (<ref>), (<ref>) and (<ref>), we get for allτ > t_0(τ) [ τlnτ + A τ + B ] ≤ Cτ + D, withA, B, D ∈andC > 0. Since(t)is nonnegative, we deducelim_t → + ∞(t) = 0. We can derive some additional facts on the function valuesf_i(x(t))along the trajectories from Theorem <ref>. Let x:[t_0, +∞) →$̋ be a solution to (<ref>) and assume all assumptions of Theorem <ref> hold. Then, for alli = 1,…, mlim_t →∞ f_i(x(t)) = f_i^∞∈ exists, and furtherf_i^∞ = _i^∞. Theorem <ref> states lim_t →∞(t) = 0. By definition we have (t) = u_0(x(t)) + 1/2‖ẋ(t) ‖^2. Theorem <ref> guarantees u_0(x(t)) ≥ 0 for all t ≥ t_0 and obviously 1/2‖ẋ(t) ‖^2 ≥ 0. Then from lim_t → + ∞(t) = 0 it follows that lim_t → +∞1/2‖ẋ(t) ‖^2 = 0. Since lim_t → +∞_i(t) = lim_t → + ∞ f_i(x(t)) + 1/2‖ẋ(t) ‖^2 exists, it follows that lim_t → + ∞ f_i(x(t)) exists, which completes the proof. §.§ Fast convergence of function values In this subsection, we show that solutions of (<ref>) have good properties with respect to multiobjective optimization. Along the trajectories of (<ref>), the function values converge with order𝒪(t^-2)to an optimal value, givenα≥ 3. This convergence has to be understood in terms of the merit functionu_0. We prove this result using Lyapunov type energy functions similar to the analysis for the singleobjective case laid out in <cit.> and <cit.>. To this end, we introduce two important auxiliary functions in Definition <ref> and discuss their basic properties in the following lemmata. The main result of this subsection on the convergence of the function values is stated in Theorem <ref>. Let λ≥ 0, ξ≥ 0, z ∈$̋ andx:[t_0, +∞) →$̋ be a solution to (<ref>). For t ≥ t_0 define _i, λ, ξ(t) = t^2(f_i(x(t)) - f_i(z)) + 1/2‖λ(x(t) - z) + tẋ(t) ‖^2 + ξ/2‖ x(t) - z ‖^2. and _λ, ξ(t) = min_i=1,…,m_i, λ, ξ(t) = t^2 min_i=1,…, m( f_i(x(t)) - f_i(z) ) + 1/2‖λ (x(t) - z ) + tẋ(t) ‖^2 + ξ/2‖ x(t) - z ‖^2. Let ξ^* = λ(α - 1 - λ) and α≥λ +1. Then, for almost all t ∈ [t_0, +∞) d/dt_i, λ, ξ^*(t) ≤ 2t(f_i(x(t)) - f_i(z)) - t λmin_i=1,…,m( f_i(x(t)) - f_i(z) ) + t (λ + 1 - α)‖ẋ(t) ‖^2.   The function _i, λ, ξ^*(t) is differentiable almost everywhere since f_i and x are differentiable and ẋ is absolutely continuous. 
We compute d/dt_i, λ, ξ^*(t) using the chain rule on (<ref>) d/dt_i, λ, ξ^*(t) = 2t(f_i(x(t)) - f_i(z)) + t ⟨ x(t) - z , λ(λ + 1) + ξ^*/tẋ(t) + λẍ(t) ⟩ + t^2 ⟨ẋ(t) , ∇ f_i(x(t)) + ẍ(t) ⟩ + t(λ + 1) ‖ẋ(t) ‖^2. Using Proposition <ref> on the third summand in (<ref>), we bound this by ≤ 2t(f_i(x(t)) - f_i(z)) + t ⟨ x(t) - z , λ(λ + 1) + ξ^*/tẋ(t) + λẍ(t) ⟩ + t(λ + 1 - α) ‖ẋ(t) ‖^2. We rewrite (<ref>) as = 2t(f_i(x(t)) - f_i(z)) + t λ⟨ x(t) - z , α/tẋ(t) + ẍ(t) ⟩ + t(λ + 1 - α) ‖ẋ(t) ‖^2, using λ(λ + 1) + ξ^* = λα. The definition of (<ref>) together with Lemma <ref> implies = 2t(f_i(x(t)) - f_i(z)) - t λ⟨ x(t) - z , ∑_i=1^m θ_i(t) ∇ f_i(x(t)) ⟩ + t(λ + 1 - α) ‖ẋ(t) ‖^2. The objective functions f_i are convex and hence, f_i(z) - f_i(x(t)) ≥⟨∇ f_i(x(t)), z - x(t) ⟩ and therefore, ≤ 2t(f_i(x(t)) - f_i(z)) - t λ∑_i=1^m θ_i(t)( f_i(x(t)) - f_i(z) ) + t(λ + 1 - α) ‖ẋ(t) ‖^2. We bound the convex combination by the minimum to get ≤ 2t(f_i(x(t)) - f_i(z)) - t λmin_i=1,…,m( f_i(x(t)) - f_i(z) ) + t(λ + 1 - α) ‖ẋ(t) ‖^2. To retrieve a result similar to Lemma (<ref>) for the function_λ, ξdefined in (<ref>), we need an auxiliary lemma which helps us to treat the derivative of_λ, ξ. Let (h_i)_i=1,…,m be a family of continuously differentiable functions h_i:(t_0, +∞) →. Define the function h:(t_0, + ∞) →, t ↦ h(t) min_i=1,…, mh_i(t). Then, it holds that * For almost all t ∈ (t_0, +∞), the function h is differentiable in t. * For almost all t ∈ (t_0, +∞), there exists i ∈{1,…, m} with h(t) = h_i(t) and d/dth(t) = d/dth_i(t).   (i) The functions h_i are continuously differentiable. Therefore, h is locally Lipschitz continuous. Then, by Rademachers Theorem h is differentiable almost everywhere. (ii) From (i), we know that h is differentiable almost everywhere. Let t ∈ (t_0, +∞) be a point where h is differentiable. This means the limit lim_s → 0h(t + s) - h(t)/s = d/dth(t) exists. Fix a sequence (s_k)_k≥ 0 with s_k → 0, then lim_k → +∞h(t + s_k) - h(t)/s_k = d/dth(t). From the definition of h it follows, that for every k≥ 0 there exists i_k ∈{1, …, m} with h(t + s_k) = h_i_k(t + s_k). Since the set {1, …, m} is finite, there exists i ∈{1, …, m} and an infinite strictly monotonic increasing subsequence (k_l)_l ≥ 0 with h(t + s_k_l) = h_i(t + s_k_l) for all l ≥ 0. Since h and h_i are continuous, it holds that h(t) = h_i(t). In total, we can follow d/dth_i(t) = lim_s → 0h_i(t + s) - h_i(t)/s = lim_l → +∞h_i(t + s_k_l) - h_i(t)/s_k_l = lim_l → +∞h(t + s_k_l) - h(t)/s_k_l = lim_s → 0h(t + s) - h(t)/s = d/dth(t). Let x:[t_0, +∞) →$̋ be a solution to (<ref>) and letz ∈$̋. The energy function _λ, ξ^* satisfies the following conditions. * The function _λ, ξ^* is differentiable in almost all t∈(t_0, +∞). * For almost all t ∈ (t_0, +∞), it holds that d/dt_λ, ξ^*(t) ≤ (2-λ)t min_i=1,…, m( f_i(x(t)) - f_i(z) ) - (α - λ - 1) t ‖ẋ(t) ‖^2. * For all t ∈ (t_0, +∞), it holds that _λ, ξ^*(t) - _λ, ξ^*(t_0) ≤ (2 - λ) ∫_t_0^t t min_i=1,…,m( f_i(x(t)) - f_i(z) ) dt + ∫_t_0^t t(λ + 1 - α) ‖ẋ(t) ‖^2 dt.   (i) The functions t ↦ f_i(x(t)) are continuously differentiable for all i = 1,…,m. Then, by Lemma <ref> the function t ↦min_i=1,…,m(f_i(x(t)) - f_i(z) ) is differentiable in t for almost all t ∈ (t_0, +∞). Since x is a solution to (<ref>) in the sense of Definition <ref>, we know that ‖λ (x(t) - z) + t ẋ(t) ‖^2 and ξ/2‖ x(t) - z ‖^2 are differentiable in t for almost all t ∈ (t_0, +∞). In total we get that _λ, ξ^*(t) is differentiable in t for almost all t ∈ (t_0, +∞). 
(ii) In order to compute the derivative of _λ, ξ^*(t), we need the derivative of min_i=1,…,m(f_i(x(t))-f_i(z)). By Lemma <ref>, we know that for almost all t ∈ (t_0, +∞) there exists j ∈{1, …, m} with d/dtmin_i=1,…, m( f_i(x(t)) - f_i(z) ) = d/dt( f_j(x(t)) - f_j(z) ), and f_j(x(t)) - f_j(z) = min_i=1,…,m(f_i(x(t)) - f_i(z) ). For the remainder of the proof fix t ∈ [t_0, +∞) and j satisfying equation (<ref>). From the first part of (<ref>), we immediately get d/dt_λ, ξ^*(t) = d/dt_j, λ, ξ(t). Applying Lemma <ref>, we bound (<ref>) by ≤ 2t(f_j(x(t)) - f_j(z)) - t λmin_i=1,…,m( f_i(x(t)) - f_i(z) ) + t(λ + 1 - α) ‖ẋ(t) ‖^2. Then, the second equation in (<ref>) gives = (2 - λ) t min_i=1,…,m( f_i(x(t)) - f_i(z) ) + t(λ + 1 - α) ‖ẋ(t) ‖^2. Statement (iii) follows immediately from (ii) by integrating inequality (<ref>) from t_0 to t. The function_λ, ξ^*(t)is not suitable for convergence analysis. The termmin_i=1,…, m( f_i(x(t)) - f_i(z) )will not remain nonnegative in general. Hence, we cannot guarantee that_λ, ξ^*(t)is nonnegative. Therefore, we cannot directly retrieve results on the convergence rates. We are still able to get convergence results using Lemma <ref>. Let α≥ 3 and x:[t_0, + ∞) →$̋ be a solution to (<ref>). Then t^2 u_0(x(t)) ≤ t_0^2 u_0(x_0) + 2(α + 1)R + (3 - α) ∫_t_0^t s ‖ẋ(s) ‖^2 ds, and henceu_0(x(t)) ≤t_0^2 u_0(x_0) + 2(α + 1)R/t^2for allt ∈ [t_0, +∞).   We consider the energy function _λ, ξ^*(t) with parameter λ = 2. From the definition of _2, ξ^*(t) and part (iii) of Lemma <ref>, we deduce t^2min_i=1,…,m(f_i(x(t)) - f_i(z) ) ≤_2, ξ^*(t) ≤_2, ξ^*(t_0) + (3-α) ∫_t_0^t s ‖ẋ(s) ‖^2 ds. Writing out the definition of _2, ξ^*(t_0) and using λ = 2 and ξ^* = λ(α - 1 - λ) = 2(α - 3), we have t^2min_i=1,…,m(f_i(x(t)) - f_i(z) ) ≤ t_0^2min_i=1,…,m(f_i(x_0) - f_i(z) ) + (α + 1)‖ x_0 - z ‖^2 + (3-α) ∫_t_0^t s ‖ẋ(s) ‖^2 ds. We want to apply the supremum and infimum in accordance with Lemma <ref>. Let F^* = (f_1^*, …, f_m^*) ∈ F(P_w(F(x_0))), then inf_z ∈ F^-1(F^*)[ t_0^2 min_i=1,…, m(f_i(x_0) - f_i(z)) + (α + 1) ‖ x_0 - z ‖^2 ] = t_0^2 min_i=1,…, m(f_i(x_0) - f^*_i) + (α + 1) inf_z ∈ F^-1(F^*)‖ x_0 - z ‖^2. Now, we can apply the supremum to inequality (<ref>) and get sup_F^* ∈ P_w(F(x_0))inf_z ∈ F^-1(F^*)[ t_0^2 min_i=1,…, m(f_i(x_0) - f_i(z)) + (α + 1) ‖ x_0 - z ‖^2 ] ≤ t_0^2 sup_F^* ∈ P_w(F(x_0))inf_z ∈ F^-1(F^*)min_i=1,…, m(f_i(x_0) - f^*_i) + (α + 1) sup_F^* ∈ P_w(F(x_0))inf_z ∈ F^-1(F^*)inf_z ∈ F^-1(F^*)‖ x_0 - z ‖^2. By Assumption <ref> and the definition of u_0(x), this is equal to t_0^2 u_0(x_0) + 2(α + 1) R. Now, by applying sup_F^* ∈ P_w(F(x_0))inf_z ∈ F^-1(F^*) to t^2 min_i = 1,…,m(f_i(x(t)) - f_i(z)) and using (<ref>) - (<ref>), we get t^2 u_0(x(t)) ≤ t_0^2 u_0(x_0) + 2(α + 1)R + (3 - α) ∫_t_0^t s ‖ẋ(s) ‖^2 ds. Let α > 3 and x:[t_0, + ∞) →$̋ be a solution to (<ref>). Then ∫_t_0^t s ‖ẋ(s) ‖^2 ds < + ∞, i.e.(t ↦ t ‖ẋ(t) ‖^2) ∈ L^1([t_0, + ∞)). §.§ Weak convergence of trajectories In this subsection, we show that the bounded trajectiories of (<ref>) converge weakly to weakly Pareto optimal solutions of (<ref>), givenα > 3. We prove this in Theorem <ref> using Opial's Lemma. Since we need to apply Theorem <ref> and Theorem <ref>, we assume in this subsection that the functionsf_iare bounded from below and that Assumption <ref> holds. We start this subsection by reciting Opial's Lemma (see <cit.>) that we need in order to prove weak convergence. Let S ⊂$̋ be a nonempty subset of$̋ and x:[t_0, +∞) →$̋. Assume thatxsatisfies the following conditions. 
* Every weak sequential cluster point of x belongs to S.
* For every z ∈ S, lim_{t → +∞} ‖x(t) − z‖ exists.

Then, x(t) converges weakly to an element x^∞ ∈ S as t → +∞. We need the following additional lemma in order to utilize Opial's Lemma. Let t_0 > 0 and let h: [t_0, +∞) → ℝ be a continuously differentiable function which is bounded from below. Assume t ḧ(t) + α ḣ(t) ≤ g(t) for some α > 1 and almost all t ∈ [t_0, +∞), where g ∈ L^1([t_0, +∞)) is a nonnegative function. Then, lim_{t → +∞} h(t) exists. A proof can be found in <cit.>. Let α > 3 and let x: [t_0, +∞) → ℋ be a bounded solution to (<ref>). Assume that the functions f_i are bounded from below and that Assumption <ref> holds. Then, x(t) converges weakly to a weakly Pareto optimal solution of (<ref>). Define the set S := {z ∈ ℋ : f_i(z) ≤ f_i^∞ for all i = 1, …, m}, where f_i^∞ = lim_{t → +∞} f_i(x(t)), which exists due to Theorem <ref>. Since x(t) is bounded, it possesses a weak sequential cluster point x^∞ ∈ ℋ. Hence, there exists a sequence (x(t_k))_{k ≥ 0} with t_k → +∞ and x(t_k) ⇀ x^∞ for k → +∞. Because the objective functions are lower semicontinuous in the weak topology, we get for all i = 1, …, m that f_i(x^∞) ≤ lim inf_{k → +∞} f_i(x(t_k)) = lim_{k → +∞} f_i(x(t_k)) = f_i^∞. Therefore, x^∞ ∈ S. Hence, S is nonempty and every weak sequential cluster point of x(t) belongs to S. Let z ∈ S and define h_z(t) = (1/2)‖x(t) − z‖^2. The first and second derivatives of h_z(t) are given by ḣ_z(t) = ⟨x(t) − z, ẋ(t)⟩ and ḧ_z(t) = ⟨x(t) − z, ẍ(t)⟩ + ‖ẋ(t)‖^2, for almost all t ∈ [t_0, +∞). Multiplying ḣ_z(t) by α/t and adding it to ḧ_z(t) gives ḧ_z(t) + (α/t) ḣ_z(t) = ⟨x(t) − z, ẍ(t) + (α/t) ẋ(t)⟩ + ‖ẋ(t)‖^2. Using the equation (<ref>) together with the weights θ(t) ∈ Δ^m from Lemma <ref>, we get from (<ref>) the equation ḧ_z(t) + (α/t) ḣ_z(t) = ∑_{i=1}^m θ_i(t) ⟨z − x(t), ∇f_i(x(t))⟩ + ‖ẋ(t)‖^2. We want to bound the inner products ⟨z − x(t), ∇f_i(x(t))⟩. Since W_i(t) is monotonically decreasing by Proposition <ref> and converges to f_i^∞ by Theorem <ref>, we get f_i(x(t)) + (1/2)‖ẋ(t)‖^2 = W_i(t) ≥ f_i^∞ for all i = 1, …, m. From z ∈ S and the convexity of the functions f_i, we conclude f_i^∞ ≥ f_i(z) ≥ f_i(x(t)) + ⟨∇f_i(x(t)), z − x(t)⟩. Together, (<ref>) and (<ref>) imply ⟨∇f_i(x(t)), z − x(t)⟩ ≤ (1/2)‖ẋ(t)‖^2 for all i = 1, …, m. Now, we combine (<ref>) and (<ref>) to conclude ḧ_z(t) + (α/t) ḣ_z(t) ≤ (3/2)‖ẋ(t)‖^2, and further t ḧ_z(t) + α ḣ_z(t) ≤ (3t/2)‖ẋ(t)‖^2. Corollary <ref> states that (t ↦ t‖ẋ(t)‖^2) ∈ L^1([t_0, +∞)) for α > 3. Then, Lemma <ref> applied to equation (<ref>) guarantees that h_z(t) converges, and by Opial's Lemma (Lemma <ref>) we conclude that x(t) converges weakly to an element in S. By Theorem <ref>, we know that every weak accumulation point of x(t) is weakly Pareto optimal.

§ NUMERICAL EXPERIMENTS

In this section, we conduct numerical experiments to verify the convergence rates proven in the previous section. In particular, we show that the convergence of u_0(x(t)) with rate 𝒪(t^-2), as stated in Theorem <ref>, holds. Since we cannot compute analytical solutions to (<ref>) for a general multiobjective optimization problem in closed form, we compute an approximation to a solution x using an explicit discretization. We do not discuss the quality of the discretization we use. For all experiments we use initial time t_0 = 1, a fixed initial state x(t_0) = x_0, and initial velocity ẋ(t_0) = 0. We use equidistant time steps t_k = t_0 + kh, with h = 1e-3.
We use the schemex(t_k) ≈ x^k,ẋ(t_k) ≈x^k+1 - x^k/handẍ(t_k) ≈x^k+1 - 2x^k + x^k-1/h^2to compute the discretization(x^k)_k ≥ 0of the trajectoryxfor100 000time steps. We look at two example with instances of the multiobjective optimization problem (<ref>). Both problem instances use two convex and smooth objective functionsf_i: ^2 →fori=1,2. In Subsection <ref> we look at a quadratic multiobjective optimization problem and in Subsection <ref> we consider a convex optimization problem with objective functions that are not strongly convex. For both examples, we plot approximations of the solutionxand plot the functionu_0(x(t))to show that the inequalityu_0(x(t)) ≤t_0^2 u_0(x_0) + 2(α + 1)R/t^2holds fort ≥ t_0. To computeu_0(x(t))we have to solve the optimization problemu_0(x^k) = sup_z ∈min_i = 1,…,m f_i(x^k) - f_i(z)for every of the100 000iterations with adequate accuracy. Therefore, we restrict ourselves to problems where the Pareto set of (<ref>) can be explicitly computed. For these problems the value ofu_0can be computed more efficiently using Lemma <ref>. §.§ A quadratic multiobjective optimization problem We begin with an instance of (<ref>) with two quadratic objective functions f_i: ^2 →, x ↦1/2(x-x^i)^⊤ Q_i (x-x^i), fori = 1,2, given matrices and vectors Q_1 = ( [ 2 0; 0 1 ]), Q_2 = ( [ 1 0; 0 2 ]), x^1 = ( [ 1; 0 ]), x^2 = ( [ 0; 1 ]). For this problem the Pareto set is P = { x ∈^2 : x = ( [ 2λ/(1 + λ); 2(1-λ)/(2 - λ) ]), for λ∈ [0,1] }. In our first experiment, we use the initial valuex_0 = (-.2, -.1)^⊤. We compute an approximation of a solution to (<ref>) for different values ofα∈{3, 10, 50, 100}as described in the introduction of Section <ref>. The results can be seen in Figure <ref>. Subfigures <ref> - <ref> contain plots of the trajectoriesxfor different values ofα. In the plots of the trajectories we added a circle every500iterations to visualize the velocities. In Subfigures <ref> - <ref> the values ofu_0(x(t))and the boundst_0^2 u_0(x_0) + 2(α + 1)R/t^2for different values ofαare shown. The inequalityu_0(x(t)) ≤t_0^2 u_0(x_0) + 2(α +1)R/t^2holds for all values ofα. For the smallest value ofα=3we see a large number of oscillations in the trajectory and in the values ofu_0(x(t)), respectively. This behavior is typical for systems with asymptotic vanishing damping. For larger values ofαwe observe fewer oscillations and see improved convergence rates, with slower movement in the beginning due to the high friction. These phenomena are consistent with the observations made in the singleobjective setting. §.§ A nonquadratic multiobjective optimization problem In our second example, we consider the problem (<ref>) with two objective functions f_i: ^2 →, x ↦log( ∑_j=1^p exp((a_j^(i))^⊤ x - b_j^(i))), fori = 1,2,p = 4and given matrices and vectors A^(1) = ( [ (a_1^(1))^⊤; ⋮; (a_4^(1))^⊤ ]) = ( [ 10 10; 10 -10; -10 -10; -10 10 ]), b^(1) = ( [ 0; -20; 0; 20 ]), A^(2) = ( [ (a_1^(2))^⊤; ⋮; (a_4^(2))^⊤ ]) = ( [ 10 10; 10 -10; -10 -10; -10 10 ]), b^(2) = ( [ 0; 20; 0; -20 ]). The objective functions given by (<ref>) are convex but not strongly convex. Taking advantage of the symmetry in the objective functionsf_i, the Pareto setPcan be explicitly computed and is P = { x ∈^2 : x = ( [ -1 + 2λ; 1 - 2λ ]), for λ∈ [0,1] }. We choose the initial valuex_0 = (0,3)^⊤and compute an approximate solution to (<ref>) as described in the beginning of Section <ref>. Analogous to the last example, we present the results of the computations in Figure <ref>. 
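To make the experiments concrete, the following Python sketch (our own reconstruction, not the authors' code; in particular, the way the implicit projection is resolved per step is our choice, using the differential-inclusion form of the system with an explicit update) integrates the dynamics for the quadratic example of the previous subsection and evaluates the merit function u_0 over the explicitly parametrized Pareto set, as permitted by Lemma <ref>.

```python
import numpy as np

# Quadratic bi-objective example: f_i(x) = 0.5 * (x - c_i)^T Q_i (x - c_i).
Q = [np.diag([2.0, 1.0]), np.diag([1.0, 2.0])]
c = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
f = [lambda x, i=i: 0.5 * (x - c[i]) @ Q[i] @ (x - c[i]) for i in range(2)]
grad = [lambda x, i=i: Q[i] @ (x - c[i]) for i in range(2)]


def selected_gradient(x, v):
    """One admissible selection g in argmin_{g in C(x)} <g, -v>: a linear
    functional attains its minimum over the hull at a vertex; if v = 0 every
    element of C(x) is optimal and we simply take grad f_1(x)."""
    g1, g2 = grad[0](x), grad[1](x)
    if v @ v == 0.0:
        return g1
    return g1 if g1 @ v >= g2 @ v else g2


def integrate_mavd(x0, alpha, t0=1.0, h=1e-3, steps=100_000):
    """Explicit stepping of x''(t) = -(alpha/t) x'(t) - g(t),
    g(t) in argmin_{g in C(x(t))} <g, -x'(t)>  (inclusion form of the system)."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)                      # x'(t_0) = 0
    traj = [x.copy()]
    for k in range(steps):
        t = t0 + k * h
        a = -(alpha / t) * v - selected_gradient(x, v)
        v = v + h * a
        x = x + h * v
        traj.append(x.copy())
    return np.array(traj)


def u0(x, n_grid=2001):
    """Merit function u_0(x) = sup_z min_i (f_i(x) - f_i(z)), maximized over a
    grid on the known Pareto set P = {(2l/(1+l), 2(1-l)/(2-l)) : l in [0, 1]}."""
    lam = np.linspace(0.0, 1.0, n_grid)
    P = np.stack([2 * lam / (1 + lam), 2 * (1 - lam) / (2 - lam)], axis=1)
    fP = [0.5 * np.einsum("ni,ij,nj->n", P - c[i], Q[i], P - c[i]) for i in range(2)]
    return float(np.max(np.minimum(f[0](x) - fP[0], f[1](x) - fP[1])))


traj = integrate_mavd(x0=np.array([-0.2, -0.1]), alpha=3.0)
print(u0(traj[0]), u0(traj[-1]))  # merit value at t_0 and at t_0 + 100
```

With these ingredients one can plot u_0(x(t_k)) against the bound (t_0² u_0(x_0) + 2(α+1)R)/t_k² for the values of α used above.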
Again, Subfigures <ref> - <ref> contain plots the trajectories and Subfigures <ref> - <ref> contain the values of the merit functionu_0(x(t)). We observe results similar to the example in Subsection <ref>. Since the objective functions given in (<ref>) are not strongly convex, we experience slower convergence especially in the beginning, where the gradients along the trajectories remain almost constant. Once more, we see for small values ofαoscillations in the trajectoryxand the merit function valuesu_0(x(t))introduced by the inertia in the system (<ref>). Larger values ofαcorrespond to higher friction in the beginning and we therefore experience slower convergence for the time interval we consider. Oscillations can only be seen forα=3and close to the end forα=10. The slower convergence in this example is expected due to the lack of strong convexity. § CONNECTION WITH FAST NUMERICAL OPTIMIZATION METHODS The interplay between algorithms for optimization and continuous time systems linked to these methods is an active field of research. Using a discretization of the system (<ref>) in the spirit of <cit.>, we can derive the accelerated multiobjective gradient method (<ref>). We do not discuss the scheme (<ref>) in detail in this paper but refer the interested reader to <cit.>. For the multiobjective optimization problem (<ref>) we define the following scheme. Letx^0 = x^1 ∈$̋ and α, s > 0. Define the sequences { (x^k, y^k, θ^k) }_k ≥ 1⊂×̋×̋Δ^m using the following update rule. MOAG. [ y^k = x^k + k-1/k+α-1(x^k - x^k-1),; ; θ^k ∈ _θ∈Δ^m‖ s ∑_i=1^m θ_i ∇ f_i(y^k) - (y^k - x^k) ‖^2,; ; x^k+1 = y^k - s∑_i = 1^m θ_i^k ∇ f_i(y^k), ]} for k ≥ 1. This system reduces in the case of scalar optimization to Nesterov's accelerated gradient method (see <cit.>) with acceleration coefficient k-1/k + α - 1 from <cit.> which reads as NAG. [ y^k = x^k + k-1/k+α-1(x^k - x^k-1),; ; x^k+1 = y^k - s ∇ f(y^k), ]} for k ≥ 1. The generalization (<ref>) of the scheme (<ref>) does not use the multiobjective steepest descent direction (see <cit.>) which can be written as . [ -_C(y^k)(0) = - ∑_i=1^m θ_i^k ∇ f_i(y^k), with θ^k ∈_θ∈Δ^m‖ s ∑_i=1^m θ_i ∇ f_i(y^k) ‖^2, ]. to update y^k, but involves a different projection problem. Most interestingly, the scheme (<ref>) can be rewritten as . [ y^k = x^k + k-1/k+α-1(x^k - x^k-1),; ; x^k+1 = _y^k - s C(y^k)(x^k), ]} for k ≥ 1. Instead of applying the steepest descent direction at y^k to compute x^k+1, we want to find an update to y^k using a convex combination of the gradients ∇ f_i(y^k) which is closest to x^k. This relation is only revealed to us using the continuous time perspective and by deriving (<ref>) as a discretization of (<ref>). We do not want to discuss the properties of (<ref>) in detail. A first discussion of this scheme can be found in <cit.>. § CONCLUSION We introduce the system (<ref>) and discuss its main properties. To the best of our knowledge, this system is the first inertial gradient-like system with asymptotic vanishing damping for multiobjective optimization problems, expanding the ideas laid out in <cit.>. We prove existence of global solutions in finite dimensions for arbitrary initial conditions. We discuss the asymptotic behaviour of solutions to (<ref>) and show that the function values decrease along the trajectory with rate u_0(x(t)) = 𝒪(t^-2) for α≥ 3 using a Lyapunov type analysis. Further, we show that bounded solutions converge weakly to weakly Pareto optimal points given α > 3. 
These statements are consistent with the result obtained for singleobjective optimization. We verify our results on two test problems and show that the given bounds on the decay of the function values are satisfied. We close the discussion by relating the system (<ref>) to numerical algorithms for multiobjective optimization. For future work it would be interesting to further adapt the system (<ref>). Possible research directions involve Tikhonov regularization <cit.>, Hessian-driven damping <cit.> and the treatment of linear constraints by the means of Augmented Lagrangian type systems <cit.>. From an algorithmic point of view it would be interesting to further analyze the scheme (<ref>) (especially for the case α > 3) and also to define improved algorithms based on Tikhonov regularization, Hessian-driven damping and for problems with linear constraints. unsrt § APPENDIX § PROOF OF PROPOSITION <REF> We recall here the most important definitions on set-valued maps we need to prove Proposition <ref>. The notation is aligned with <cit.>. Let 𝒳, 𝒴 be real Hilbert spaces and let G : 𝒳⇉𝒴 be a set-valued map. We say G is upper semicontinuous (u.s.c.) at x_0 ∈𝒳 if for any open set N ⊂𝒴 containing G(x_0) there exists a neighborhood M ⊂𝒳 of x_0 such that G(M) ⊂ N. We say that G is u.s.c. if it is so at every x_0 ∈𝒳. We say G is u.s.c. at x_0 in the ε sense if, given ε > 0, there exists δ > 0 such that G(B_δ(x^0)) ⊂ G(x^0) + B_ε(0). We say that G is u.s.c. in the ε sense if it is so at every x_0 ∈𝒳. Let G be a set valued map. The following statements hold. * If G is u.s.c. it is also u.s.c. in the ε sense. * If G is u.s.c. in the ε sense and takes compact values G(x) ⊂𝒴 for all x ∈𝒳, then it is u.s.c. as well. We say that a map ϕ: 𝒳→𝒴 is locally compact if for each point x_0 ∈𝒳 there exists a neighborhood which is mapped into a compact subset of 𝒴. In the proof of Proposition <ref> we use the following auxiliary lemma. Lemma <ref> states that the set-valued map (u,v) ↦_g ∈ C(u)⟨ g, -v ⟩ is u.s.c.. Let ( u, v) ∈×̋$̋ be fixed. Then, for allε > 0there existsδ > 0such that for all(u,v) ∈×̋$̋ with ‖ (u,v) - (u, v) ‖_×̋ < δ and for all g ∈_g ∈ C(u)⟨ g, -v ⟩ there exists g∈_g∈ C(u)⟨g, -v⟩ with ‖ g - g‖ < ε.   Let ( u, v) ∈×̋$̋. We can describe the set_g∈ C(u)⟨g, -v⟩using the vertices ofC(u). The setC(u)is a convex polyhedron and the objective function⟨g, -v⟩is linear. A minimum ofmin_g∈ C(u)⟨g, -v⟩is attained at a vertex ofC(u)and therefore it exists at least onei ∈{ 1, …, m }such that⟨∇ f_i(u), - v⟩ = min_g∈ C(u)⟨g, -v⟩. The same can be done for any(u,v) ∈×̋$̋. Define the sets of optimal and nonoptimal vertices 𝒜 { i ∈{ 1,…, m } : ⟨∇ f_i(u), -v⟩ = min_g∈ C(u)⟨g, -v⟩}, and ℐ { 1,…, m }∖𝒜. 𝒜(u,v) { i ∈{ 1,…, m } : ⟨∇ f_i(u), -v ⟩ = min_g ∈ C(u)⟨ g, -v ⟩}, and ℐ(u,v) { 1,…, m }∖𝒜(u,v). There exists M ∈ such that for all i ∈𝒜 and j ∈ℐ it holds that ⟨∇ f_i(u), -v⟩ < M < ⟨∇ f_j(u), -v⟩. Then by the continuity of (u,v) ↦⟨∇ f_i(u), v ⟩ we can choose δ > 0 such that for all i ∈𝒜 and j ∈ℐ ⟨∇ f_i(u), -v ⟩ < M < ⟨∇ f_j(u), -v ⟩, for all (u,v) ∈×̋$̋ with‖ (u,v) - (u, v) ‖_×̋ < δ. For these(u,v)it holds that𝒜(u,v) ⊂𝒜. Now, the rest follows from the continuity of the function∇ f_i(·). Letg ∈_g ∈ C(u)⟨ g, -v ⟩. We writeg = ∑_i ∈𝒜(u,v)λ_i ∇ f_i(u)as a convex combination of the optimal vertices inC(u). Since𝒜(u,v) ⊂𝒜, it follows thatg = ∑_i ∈𝒜(u,v)λ_i ∇ f_i(u)is a solution tomin_g∈ C(u)⟨g, -v⟩. Since all∇ f_i(·)are continuous, we can chooseδ > 0such that‖ g - g‖ < ε. We are now in the position to prove Proposition <ref>.   
(i) Fix (t,u,v) ∈ [t_0, +∞) ××̋$̋. The setC(u) ( {∇ f_i(u) : i = 1,…, m})is convex and compact as a convex hull of a finite set. Then_g ∈ C(u)⟨ g, -v ⟩is also convex and compact and the statement follows since sums and Cartesian products of convex and compact sets are convex and compact. (ii) We show thatGis u.s.c. in theεsense using Lemma <ref>. Then, we use Proposition <ref> together with (i) to concludeGis u.s.c. as well. This is technical but we include the proof for the sake of completeness. Using Lemma <ref> we can show that for allε > 0there existsδ > 0satisfying G(B_δ((t, u, v))) ⊂ G(t, u, v) + B_ε((0,0)), whereB_δ((t, u, v))) ⊂ [t_0, + ∞) ××̋$̋ and B_ε((0,0)) ⊂×̋$̋ are open balls with radiusδandε, respectively. To this end, we show that for all(t, u, v) ∈×̋$̋ with ‖ (t, u, v) - (t, u, v) ‖_××̋ < δ and for all (x,y) ∈ G(t,u,v) there exists an element (x, y) ∈ G(t, u, v) with ‖ (x, y) - (x, y) ‖_×̋ < ε. For (t,u,v) ∈××̋$̋,(x,y) ∈ G(t, u, v)is equivalent to x = v, y = - α/t v -g , with g ∈_g∈ C(u)⟨ g, -v ⟩. From Lemma <ref>, we know that there existsδ_1 > 0such that from‖ (t, u, v) - (t, u, v) ‖_×̋ < δ_1it follows that there existsg∈_g∈ C(u)⟨g, -v⟩such that ‖ g - g‖ < ε/3. Fixg∈_g∈ C(u)⟨g, -v⟩satisfying (<ref>). Further, there existsδ_2 > 0such that from| t - t| < δ_2it follows that |α/t - α/t|‖ v ‖ < ε/3. Letδ = min{δ_1, δ_2, ε/3(1+α/t_0)}. It holds that (x, y) (v, - α/tv - g) ∈ G(t, u, v). Then it follows that ‖ (x,y) - (x, y) ‖_×̋≤‖ v - v‖ + ‖ - α/t v - g + α/tv + g‖ ≤ (1 + α/t) ‖ v - v‖ + |α/t - α/t|‖ v ‖ + ‖ g - g‖ < ε, which completes the proof. (iii) If()̋ < + ∞the proof follows from (ii). On the other hand, fromϕbeing locally compact, we follow thatv ↦ vis locally compact which is equivalent to$̋ being finite-dimensional. (iv) Before we start with the proof, we recall that the norm ‖ (·, ·) ‖_×̋ and the norm ‖·‖ fulfill the following inequality. For all x,y ∈$̋, it holds that ‖ (x,y) ‖_×̋≤‖ x ‖ + ‖ y ‖≤√(2)‖ (x,y) ‖_×̋. Let(t,u,v) ∈ [t_0, + ∞) ××̋$̋ and ξ∈ G(t,u,v). Then ξ = ( v, - α/t v - g ), with g ∈_g ∈ C(u)⟨ g, -v ⟩ and we follow ‖ξ‖_×̋≤‖ v ‖ + ‖ - α/t v - g ‖ ≤ (1 + α/t) ‖ v ‖ + max_θ∈Δ^m‖∑_i=1^m θ_i ∇ f_i(u) ‖ ≤ (1+ α/t_0) ‖ v ‖ + max_θ∈Δ^m‖∑_i=1^m θ_i (∇ f_i(u) - ∇ f_i(0)) ‖ + max_θ∈Δ^m‖∑_i=1^m θ_i ∇ f_i(0) ‖ ≤ (1 + α/t_0) ‖ v ‖ + L ‖ u ‖ + max_i=1,…,m‖∇ f_i(0) ‖ ≤ c(1 + ‖ (u,v) ‖_×̋), where we choose c = √(2)max{(1 + α/t_0), L, max_i=1,…,m‖∇ f_i(0) ‖}.
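To connect the preceding analysis to computation, the following is a hypothetical numerical sketch of the system x' = v, v' = -(α/t) v - g with g chosen as a vertex minimizer of ⟨g, -v⟩ over C(x) = conv{∇ f_i(x)}, as in the lemma above. The two quadratic objectives, the explicit forward-Euler discretization, and the step-size and horizon parameters are illustrative assumptions only; they are not the test problems or the scheme (<ref>) analyzed in the paper.

```python
import numpy as np

# Two smooth toy objectives on R^2 (hypothetical; not the paper's test problems).
grads = [lambda x: x - np.array([1.0, 0.0]),   # gradient of f1(x) = 0.5*||x - (1,0)||^2
         lambda x: x + np.array([1.0, 0.0])]   # gradient of f2(x) = 0.5*||x + (1,0)||^2

def vertex_minimizer(x, v):
    """A g in argmin_{g in C(x)} <g, -v> with C(x) = conv{grad f_i(x)}.
    A linear objective over a polytope attains its minimum at a vertex, so
    comparing the m gradients suffices (the vertex argument of the lemma above)."""
    candidates = [g(x) for g in grads]
    return min(candidates, key=lambda g: float(np.dot(g, -v)))

def euler_trajectory(x0, alpha=3.0, t0=1.0, h=1e-2, steps=4000):
    """Explicit-Euler sketch of  x' = v,  v' = -(alpha/t) v - g(t)."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)
    t = t0
    for _ in range(steps):
        g = vertex_minimizer(x, v)
        x = x + h * v
        v = v + h * (-(alpha / t) * v - g)
        t = t + h
    return x

# In this toy run the iterate is expected to settle near the Pareto set,
# the segment [-1, 1] x {0} between the two individual minimizers.
print(euler_trajectory([2.0, 2.0]))
```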
http://arxiv.org/abs/2307.01500v1
20230704061337
An Optimal Multiple-Class Encoding Scheme for a Graph of Bounded Hadwiger Number
[ "Hsueh-I Lu" ]
cs.DS
[ "cs.DS", "math.CO", "05C38, 05C10, 05C85, 68P05" ]
Since Jacobson [FOCS 1989] initiated the investigation of succinct encodings for various classes of graphs 35 years ago, there has been a long list of results on balancing the generality of the class, the encoding and decoding speed, the succinctness of the encoded string, and the query support. Let _n denote the set consisting of the graphs in a class that have at most n vertices. A class is nontrivial if the information-theoretically minimum number ⌈log_2 |_n|⌉ of bits to distinguish the members of _n is Ω(n). An encoding scheme based upon a single class is -optimal if it takes a graph G of _n and produces in deterministic linear time an encoded string of at most log_2 |_n|+o(log_2 |_n|) bits from which G can be recovered in linear time. Despite the extensive efforts in the literature, trees and general graphs were the only nontrivial classes admitting -optimal encoding schemes that support the degree query in O(1) time. Basing an encoding scheme upon a single class ignores the possibility of a shorter encoded string using additional properties of the graph input. To leverage the inherent structures of individual graphs, we propose to base an encoding scheme upon a family of multiple classes: An encoding scheme based upon a family of classes, accepting all graphs in ⋃, is -optimal if it is -optimal for each member class of the family. Although an -optimal encoding scheme is by definition -optimal for each member class, having a -optimal encoding scheme for each member class does not guarantee an -optimal encoding scheme. Under this more stringent optimality criterion, we present an -optimal encoding scheme A^* for a family of an infinite number of classes such that ⋃ comprises all graphs of bounded Hadwiger numbers. Precisely, the family consists of the nontrivial quasi-monotone classes of k-clique-minor-free graphs for each positive integer k. Just to name a few, examples of monotone members of the family are graphs with genus at most 2, 3-colorable plane graphs, graphs of page numbers at most 4 and girths at least 5, 6-clique-minor-free graphs, forests with diameters at most 7, graphs having no minor that is an 8-cycle, and each nontrivial minor-closed class of graphs other than the class of all graphs. Examples of non-monotone members of the family are trees, floor-plans, triconnected planar graphs, and plane triangulations. Our -optimal encoding scheme A^* supports queries of degree, adjacency, neighbor-listing, and bounded-distance shortest path in O(1) time per output. Hence, we significantly broaden the graph classes admitting optimal encoding schemes that also efficiently support fundamental queries. 
Our A^* does not rely on any recognition algorithm or any explicit or implicit knowledge of the exact or approximate values of ⌈log |_n|⌉ for the infinite number of classes ∈. A^* needs no given embedding of the input graph. However, A^* accepts additional information like a genus-O(1) embedding, an O(1)-coloring, or an O(1)-orientation for the input graph to be decoded from the encoded string and answered by the query algorithms of A^*. § INTRODUCTION Single-class encoding schemes Jacobson's <cit.> initial investigation 35 years ago into compact representations of graphs has led to extensive research <cit.>. Let be a class of graphs. An encoding scheme for consists of an encoding algorithm, a decoding algorithm, and possibly some query algorithms. The encoding algorithm takes an n-vertex graph G from and produces an encoded string X from which G can be recovered by the decoding algorithm. Let set _n consist of the graphs in having at most n vertices. Let |S| denote the cardinality of set S. All logarithms throughout the paper are base 2. The succinctness of X is determined by comparing its bit count to the minimum number ⌈log |_n|⌉ of bits needed to distinguish the graphs in _n. The encoded string is -succinct if it has at most f(n)+o(f(n)) bits for each continuous super-additive function f with log |_n|≤ f(n)+o(f(n)) <cit.>, which essentially means that the encoding size is bounded by log |_n|+o(log |_n|). An encoding scheme is -optimal if its encoding and decoding algorithms run in deterministic linear time and the encoded string is -succinct. For example, if is the class of simple undirected graphs, then log |_n|=0.5n^2-o(n^2) <cit.> and hence a 0.5n(n+1)-bit string representing the symmetric adjacency matrix of the input graph is the encoded string of a -optimal encoding scheme that supports adjacency query in constant time. If is the class of rooted ordered trees, then log |_n|=2n-O(n) and hence a 2(n-1)-bit string representing the depth-first traversal of the input tree is the encoded string of a -optimal encoding scheme which can take linear time to find the parent of a vertex in the tree <cit.>. A class is nontrivial if log |_n|=Ω(n). Figure <ref> displays a long list of results to balance several factors: the generality of class , the speed of encoding and decoding, the ability of X to support queries, and the succinctness of X. Despite these extensive efforts in the literature, trees (see, e. g., <cit.>) and general graphs formed the previously only known nontrivial classes of graphs admitting -optimal encoding schemes that also support constant-time degree queries. Upper-bounding the bit count of the encoded string of a graph G in _n by log |_n|+o(log|_n|) for -optimality is reasonable in the worst-case scenario for all the graphs in _n. However, as the class containing G becomes broader, this -succinctness criterion of the encoded string of G becomes looser. Encoding schemes based on a single class do not consider the possibility of shortening the encoded string using the structure of each individual graph in . For example, the encoded string for a 2-colorable planar graph produced by a -optimal encoding scheme for the class of planar graphs need not be -succinct for the class of 2-colorable planar graphs. Similarly, the encoded string for a tree produced by a -optimal encoding scheme for the class of general graphs may not even be asymptotically -succinct for the class of trees. 
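For concreteness, here is a toy version of the 2(n-1)-bit depth-first encoding of rooted ordered trees mentioned above (an illustrative sketch, not the paper's encoder: the bits are kept as a character string rather than a packed bit array, and the O(1)-time query support discussed later would need additional auxiliary structures).

```python
def encode_tree(children, root=0):
    """2(n-1)-bit depth-first encoding of a rooted ordered tree:
    write 1 when an edge is traversed downward, 0 when traversed upward.
    `children[u]` lists the ordered children of node u."""
    bits = []
    def dfs(u):
        for c in children[u]:
            bits.append("1")
            dfs(c)
            bits.append("0")
    dfs(root)
    return "".join(bits)

def decode_tree(bits):
    """Recover the ordered children lists; nodes are renumbered in DFS order."""
    children, stack, nxt = {0: []}, [0], 1
    for b in bits:
        if b == "1":
            children[nxt] = []
            children[stack[-1]].append(nxt)
            stack.append(nxt)
            nxt += 1
        else:
            stack.pop()
    return children

t = {0: [1, 4], 1: [2, 3], 2: [], 3: [], 4: []}      # 5 nodes -> 8 bits
code = encode_tree(t)
assert code == "11010010" and decode_tree(code) == t
```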
To encode a graph G as compactly as possible, one needs to try all known (optimal or not) encoding schemes for all classes that contain G. This would also require additional efforts of running recognition algorithms for many classes on G. Multiple-class encoding schemes To leverage the inherent structures within individual graphs, we propose to base an encoding scheme upon a family of multiple graph classes. The input n-vertex graph G is taken from the ground class (i. e., the union ⋃ of all member classes) of the family . To assess the succinctness of the encoded string of an n-vertex graph G, we compare its bit count to the minimum of log |_n| over all classes that satisfy G∈∈. The encoded string X for G is -succinct if it is -succinct for all classes with G∈∈, meaning that the bit count of X is at most f(n)+o(f(n)) for every continuous super-additive function f that satisfies log |_n|≤ f(n)+o(f(n)) for at least one class with G∈∈. For instance, consider the family of the classes ^(k) of k-colorable graphs. An -succinct encoded string of a 3-colorable non-bipartite graph is ^(k)-succinct for each k≥ 3 but need not be ^(2)-succinct. Hence, the -succinctness criterion of the encoded string for G∈∈ is at least as stringent as the -succinctness criterion, even when log|(⋃)_n| is considerably larger than log|_n|. An encoding scheme is -optimal if it is -optimal for every member class of the family . In other words, for an encoding scheme to be -optimal, its encoding algorithm must run in deterministic linear time on every graph G∈⋃ to generate an -succinct encoded string X from which its decoding algorithm should recover G in deterministic linear time. As the diversity of member classes in increases, an -optimal encoding scheme exploits a broader range of graph structures. Having a -optimal encoding schemes for each member class in a family does not guarantee an -optimal encoding scheme, as finding a class in containing G with the minimum |_n| can be expensive. For instance, let the k-th class in consists of the k-colorable graphs. A collection of -optimal encoding schemes for all classes ∈ need not yield an -optimal encoding scheme, since computing the chromatic number of a graph is NP-complete <cit.>. Hence, it might appear impossible to design an -optimal encoding scheme unless determining a minimizer of |_n| over all classes with G∈∈ takes linear time. However, our result in this paper indicates that this is not necessarily the case. In what follows, we first provide some definitions for graphs and their classes and then explain our -optimal encoding scheme A^* for a family of an infinite number of graph classes such that ⋃ comprises all graphs of bounded Hadwiger numbers. Our -optimal encoding scheme A^* does not require explicit or implicit knowledge of the exact or approximate values of log |_n| for the member classes of . Our A^* needs no recognition algorithm of any member class of . A^* is -optimal for each class in even if recognizing a graph of some class ∈ is an undecidable problem. Concepts for graphs An induced subgraph of G is a graph that can be obtained by deleting zero or more vertices and their incident edges from G. A subgraph of G is a graph that can be obtained from an induced subgraph of G by deleting zero or more edges. A minor of G is a graph that can be obtained from a subgraph of G by contracting zero or more edges. An induced subgraph (respectively, a subgraph) of a graph G is a subgraph (respectively, a minor) of G, but not the other way around. 
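The three operations that generate minors are easy to state concretely. The sketch below (hypothetical helper functions on a simple undirected adjacency-set representation, not code from the paper) performs vertex deletion, edge deletion, and edge contraction; composing these operations yields exactly the minors of a graph.

```python
def delete_vertex(adj, u):
    """Induced subgraph on V(G) - {u}: remove u and its incident edges."""
    return {v: nbrs - {u} for v, nbrs in adj.items() if v != u}

def delete_edge(adj, u, v):
    """Subgraph obtained by removing the single undirected edge uv."""
    out = {w: set(nbrs) for w, nbrs in adj.items()}
    out[u].discard(v)
    out[v].discard(u)
    return out

def contract_edge(adj, u, v):
    """Minor obtained by contracting uv: merge v into u, dropping the loop and
    any parallel edges that the contraction would create."""
    out = {w: set(nbrs) for w, nbrs in adj.items() if w != v}
    out[u] = (adj[u] | adj[v]) - {u, v}
    for w in out:
        if v in out[w]:
            out[w].discard(v)
            out[w].add(u)
    return out

# Contracting one edge of the 4-cycle 0-1-2-3-0 yields a triangle.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
assert contract_edge(c4, 2, 3) == {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
```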
Let V(G) denote the vertex set of a graph G, and E(G) its edge set. Let u⃗v⃗ denote a directed edge from u to v. Let G^r for a graph G denote the graph on V(G) with u⃗v⃗∈ E(G) if and only if v⃗u⃗∈ E(G^r). Let G∩ H denote the maximal common subgraph of graphs G and H. Let G∪ H denote the minimal common supergraph of graphs G and H. The Hadwiger number <cit.> (also known as the contraction clique number <cit.>) of a graph G, denoted η(G), is defined as the largest integer k such that the k-clique is a minor of G∪ G^r. Thus, the Hadwiger number of a forest or a planar graph is at most 2 or 4, respectively. Hadwiger's conjecture <cit.> states that the Hadwiger number η(G) upper-bounds the chromatic number of G, and it remains one of the deepest <cit.> and most famous <cit.> open problems in graph theory. An edge u⃗v⃗ is called an outgoing edge of u (respectively, an incoming edge of v) or simply a u-out (respectively, v-in) edge. An i-orientation is a graph in which each vertex has at most i outgoing edges. A graph D is an i-orientation for a graph G if D is an i-orientation with D∪ D^r = G∪ G^r. It is known that |E(G)|=O(|V(G)|·η(G)√(logη(G))) (see, e.g., <cit.>). Since η(G)=O(1) implies |E(G)|=O(|V(G)|), the minimum degree of a graph G with η(G)=O(1) is O(1). By η(H)≤η(G) for each induced subgraph H of G, an n-vertex graph G with η(G)=O(1) admits an O(n)-time obtainable O(1)-orientation. Although the Hadwiger number η(G) of an n-vertex graph G is NP-hard to compute <cit.> and cannot be computed in n^o(n) time <cit.> unless ETH <cit.> fails, one can determine whether η(G)≤ h for any given h=O(1) in O(n^2) time <cit.> and possibly in O(nlog n) time <cit.>. For related work on graphs with bounded Hadwiger numbers, see <cit.>. Concepts for graph classes We defined a class of graphs as nontrivial if log ||=Ω(n). The class of unlabeled undirected paths is not nontrivial, while those of trees and plane triangulations are nontrivial since the logarithms of the numbers of n-vertex trees and plane triangulations are 2n-o(n) and (log256/27)· n-o(n) <cit.>, respectively. The intersection of nontrivial classes may not be nontrivial since it can be empty. A class of graphs is minor-closed, monotone, or hereditary if every minor, subgraph, or induced subgraph of each graph in remains in , respectively. A minor-closed (respectively, monotone) class of graphs is also monotone (respectively, hereditary), but not vice versa. For instance, the class of trees is not hereditary, the class of complete graphs is hereditary but not monotone, the class of 2-colorable graphs is monotone but not minor-closed, and the class of forests is minor-closed. The intersection of minor-closed, monotone, or hereditary classes remains minor-closed, monotone, or hereditary, respectively. Let G[U] (respectively, G-U) for a vertex subset U of G denote the subgraph of G induced by U (respectively, V(G)∖ U). Let the N_G(U) of a vertex subset U in G consist of the vertices of V(G-U) that is adjacent to one or more vertices of U in G. Disjoint vertex sets U and V are adjacent in a graph G if N_G(U)∩ V∅, i. e., {u⃗v⃗,v⃗u⃗}∩ E(G)∅ holds for some vertices u∈ U and v∈ V. Let N_G[U]=N_G(U)∪ U. The open (respectively, closed) neighborhood of U in G is G[N_G(U)] (respectively, G[N_G[U]]). The quasi-neighborhood G(U) of U in G is the closed neighborhood of U in G excluding the edges in the open neighborhood of U in G, i. e., G(U)=G[N_G[U]]∖ E(G-U). 
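To make these definitions concrete, the following Python sketch (illustrative helpers on an adjacency-set representation, not the paper's data structures) computes the quasi-neighborhood G(U) exactly as defined above, and produces a d-orientation by the standard peeling argument: since every subgraph of a graph with bounded Hadwiger number contains a vertex of bounded degree, repeatedly removing such a vertex and directing its remaining edges away from it yields an O(1)-orientation in linear time.

```python
def quasi_neighborhood(adj, U):
    """G(U) = G[N_G[U]] minus E(G - U): keep the vertices of the closed
    neighborhood of U, but only the edges with at least one endpoint in U."""
    U = set(U)
    closed = U | {v for u in U for v in adj[u]}           # N_G[U]
    return {v: {w for w in adj[v] if w in closed and (v in U or w in U)}
            for v in closed}

def small_outdegree_orientation(adj, d):
    """A d-orientation: repeatedly peel a vertex whose current degree is at most d
    and direct its remaining edges away from it.  This succeeds whenever every
    subgraph has such a vertex, e.g. for graphs of bounded Hadwiger number."""
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    out = {v: set() for v in adj}
    stack = [v for v in remaining if len(remaining[v]) <= d]
    while stack:
        v = stack.pop()
        if v not in remaining:
            continue
        out[v] = set(remaining[v])
        for w in remaining[v]:
            remaining[w].discard(v)
            if len(remaining[w]) <= d:
                stack.append(w)
        del remaining[v]
    assert not remaining, "the graph is not d-degenerate"
    return out

path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
assert quasi_neighborhood(path, {1}) == {0: {1}, 1: {0, 2}, 2: {1}}
assert all(len(s) <= 1 for s in small_outdegree_orientation(path, 1).values())
```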
A k-quasi-member of a class of graphs is a graph that can be obtained from a graph in by deleting at most k vertices and their incident edges. A class of graphs is quasi-monotone if the quasi-neighborhood G(U) of each vertex subset U of each graph G in is an O(|N_G(U)|)-quasi-member of . For instance, a forest can be made a tree by adding a new vertex with several incident edges to connect its connected components. Thus, the class of trees is quasi-monotone, since each non-tree subgraph of a tree is a 1-quasi-member of the class of trees. A monotone class of graphs is quasi-monotone but not vice versa. The intersection of quasi-monotone classes remains quasi-monotone. We call a class of graphs slim if it is nontrivial and quasi-monotone and admits a constant that upper-bounds the Hadwiger numbers of all graphs in . Examples of slim classes include trees, forests, series-parallel graphs, and all classes listed in Figure <ref>. The monumental graph minor theory <cit.> by Robertson and Seymour <cit.> confirms Wagner's <cit.> conjecture that each minor-closed class of undirected graphs can be defined by a finite list of excluded minors. Hence, all nontrivial minor-closed classes of graphs other than the one composed of all graphs (whose forbidden minor set is empty) are slim, implying that the class of graphs with genus no more than a constant g is slim. Moreover, since each graph having a bounded Hadwiger number is separable <cit.>, an n-vertex graph in a slim class of graphs can be represented in O(n) bits <cit.>. Therefore, log |_n|=Θ(n) holds for each slim class of graphs. Our optimal multiple-class encoding scheme For the rest of the paper, let denote the family of all slim classes of graphs. We present an -optimal encoding scheme A^*. This means that our encoding scheme A^* is -optimal for every nontrivial quasi-monotone class of graphs that admits a constant h satisfying the condition η(G)≤ h for all graphs G∈. Since for each integer h≥ 2 the graphs G with η(G)≤ h form a distinct slim class of graphs, the number of member classes within is infinite. Imposing other known or unknown nontrivial properties that are monotone or quasi-monotone on each of these infinite slim classes leads to more varieties. Examples of monotone slim classes include graphs with genus at most 2, 3-colorable plane graphs, graphs with page number at most 4, and graphs with girths at least 5. Examples of non-monotone slim classes include trees, triconnected planar graphs, and triangulations or floor-plans of genus-O(1) surfaces (see <ref>). We present our -optimal encoding scheme A^* using a simple unweighted directed n-vertex graph G given in an adjacency list. Our A^* accepts additional information of G such as vertex colors or edge directions which can be recovered in tandem with G by the decoding algorithm of A^* and answered by the query algorithms of A^*, as long as at least one slim class containing G equipped with the additional information satisfies log |_n|=Θ(n). Moreover, if the given adjacency list of G reflects a genus-O(1) embedding, then A^* can also accept the embedding as additional information. Our A^* supports the following fundamental queries in O(1) time: * Output |N_G(u)| and the numbers of u-out and u-in edges of G. * Output a neighbor v of u in G, if |N_G(u)|≥ 1. * Output the color assigned to the vertex u. * Output whether u⃗v⃗∈ E(G). * Output the direction assigned to the edge u⃗v⃗ by an equipped orientation D for G. 
* Output the incident edges of u (respectively v) preceding and succeeding u⃗v⃗ in clockwise orders around u (respectively, v) according to the embedding of G produced by the decoding algorithm. * Output a shortest uv-path of G if there is one with length bounded by a prespecified constant t. The neighbor-listing query for a vertex u can be supported in O(|N_G(u)|) time using the O(1)-time queries of reporting a neighbor and the next neighbor. The above list is not exhaustive. Additional queries can be created by following the design of the o(n)-bit encoded strings and O(1)-time query algorithms elaborated in <ref>. As mentioned above, -optimal encoding schemes that support O(1)-time degree query were only known for the classes of trees and general graphs. Hence, our -optimal encoding scheme A^* significantly broadens the graph classes that admit -optimal encoding schemes which also support fundamental queries in O(1) time per output. The ground class ⋃ contains all graphs with bounded Hadwiger numbers, since the class of graphs with η(G)≤ h for each integral constant h≥ 2 is slim. Thus, the encoding algorithm of A^* can compute an encoded string X for an input n-vertex graph G with η(G)=O(1) in O(n) time, and the decoding algorithm of A^* can decode X back to G in O(n) time. The bit count of X is essentially bounded by log |_n|+o(log |_n|) for each slim class containing G, which means that our encoding scheme A^* automatically exploits all slim structures of G to encode G as compactly as possible. We emphasize that A^* does not require explicit or implicit knowledge of the slim structures of G or the exact or approximate values of their ⌈log |_n|⌉. The encoded string X for an input n-vertex graph G∈⋃ produced by our encoding scheme A^* is the concatenation of an -succinct base string X_base and an o(n)-bit string X_q for each supported query q. The input graph G and the equipped additional information can be reconstructed using solely X_base. Thus, the bit count of X_base is affected by the complexity of the additional information. As an example, A^* can encode a 3-colored undirected plane graph G as a directed planar graph that is equipped with a 3-coloring for the vertices and a planar embedding for the edges according to which the edge u⃗v⃗ for every two adjacent vertices u and v of G always immediately precedes the edge v⃗u⃗ in clockwise order around u. * If the coloring and embedding are required to be recovered together with G by the decoding algorithm of A^*, then the base string X_base is -succinct for each slim class of 3-colored undirected plane graphs that contains the input graph G. This X_base need not be -succinct for each slim class of 3-colorable undirected planar graphs (i. e., without equipped planar embeddings) that contains G. Via two o(n)-bit strings X_q appended to X_base, the query algorithms of A^* support the two queries q of reporting the color of a vertex and obtaining the edges immediately succeeding and preceding an edge u⃗v⃗ around u (respectively, v) in clockwise order according to the equipped planar embedding of G. * If the given coloring and planar embedding of G need not be recovered, the decoding algorithm of A^* reports a graph H isomorphic to G represented in an adjacency list reflecting an embedding of H that might have nonzero genus. The base string X_base is -succinct for each slim class of 3-colorable undirected planar graphs that contains the input graph G. 
A^* supports the query of obtaining the edges immediately succeeding and preceding an edge u⃗v⃗ around u (respectively, v) in clockwise order according to the embedding of H. The concatenation structure of X allows new queries be supported by appending an o(n)-bit string X_q for each new query q to the original X, eliminating the need to recompute the entire encoded string. Comparing with previous work Reed and Wood <cit.> suggested that the most general setting for separator theorems involves graphs that exclude a fixed minor. It remains unclear whether a graph's separability implies a constant bound on its Hadwiger number. However, all hereditary classes of separable graphs known to date are minor-closed <cit.>. Consequently, our -optimal encoding scheme A^* outperforms all previous encoding schemes listed in Figure <ref>. * The encoded strings produced by He, Kao, and Lu's <cit.> encoding scheme for the class of planar graphs and Blandford, Blelloch, and Kash's <cit.> encoding scheme for hereditary classes of separable graphs are -succinct. However, their encoding and decoding algorithms run in O(nlog n) time. Lu <cit.> extended He et al.'s framework to accommodate all non-trivial monotone classes of bounded-genus graphs and some non-monotone classes of graphs, improving the encoding and decoding time to O(n), but without supporting efficient queries. Blelloch and Farzan <cit.> extended Blandford et al.'s <cit.> encoding framework to support adjacency, degree, and neighbor queries in O(1) time per output. Blelloch et al.'s extension is based on Raman, Raman, and Satti's <cit.> indexable dictionary <cit.> and fully indexable dictionary <cit.>. Raman et al. <cit.> only mentioned that their indexable dictionary can be constructed in expected linear time without commenting on its deterministic time bound. The expected and deterministic time bounds of their fully indexable dictionary are not explicitly stated. Let poly(f(n))=f(n)^O(1). According to the first author Raman <cit.> of the indexable dictionary <cit.>, the deterministic constructing time of an indexable dictionary for a poly(n)-bit string having n 1-bits is O(n^3log n), which is the same as that of Fredman, Komlós, and Szemerédi's <cit.> data structure. * Previous encoding schemes <cit.> for the classes of planar and O(1)-page graphs, which support adjacency, degree, neighbors, or bounded-distance shortest path queries in O(1) time per output, take linear time to encode and decode, but their encoded string are not -succinct. In particular, Kowalik and Kurowski's <cit.> encoding scheme for planar graphs, which is extendable to accommodate minor-closed classes, requires O(n) words and thus O(nlog n) bits. Kammer and Meintrup <cit.> optimized the bit count for minor-closed classes but did not support efficient query of bounded-distanced shortest path. Kammer et al.'s result is based on Blelloch and Farzan's <cit.> encoding framework for hereditary classes of separable graphs, so their encoding algorithm runs in deterministic O(n^3log n) time. Fuentes-Sepúlveda, Seco, and Viaña's <cit.> result for plane triangulations and Fuentes-Sepúlveda, Navarro, and Seco's <cit.> result for planar graphs also use Raman et al.'s <cit.> indexable dictionary. Castelli Aleardi, Devillers, and Schaeffer <cit.> mentioned that their encodings for triconnected planar graphs and plane triangulations supporting local queries can be constructed in linear time. However, they did not provide details about how to answer queries for concatenated code segments. 
Assuming that their encoding algorithm is also based on Raman et al.'s indexable dictionary, their deterministic encoding time is O(n^3log n). Our -optimal encoding scheme A^* has only one encoding algorithm that automatically handles all infinite member classes in , regardless of whether a class is monotone or merely quasi-monotone. That is, there is no need to adjust our A^* for any non-monotone member class of . Moreover, A^* does not require a user to supply any recognition algorithms for any slim class of graphs with prespecified time complexity bounds. Note that A^* is effective even if the recognition problem for a member class of is undecidable. In contrast, each previous -succinct encoding scheme <cit.> for a non-trivial class of graphs other than trees and general graphs requires its user to specify by supplying a recognition algorithm for the class that runs in O(1)^poly(n) time in order to construct a look-up table for all tiny graphs in . Moreover, different classes need different approaches for their decoding algorithm to glue tiny subgraphs of the input graph G to recover G. For instance, Castelli Aleardi et al. <cit.> used two separate sections of their paper to describe their encoding schemes for plane triangulations and triconnected planar graphs. Lu's <cit.> framework for non-monotone classes accommodated triangulations and floor-plans in different ways. Technical overview Similar to all former -succinct encoding schemes for nontrivial classes other than trees and general graphs, our -optimal encoding scheme A^* is based on decomposing the input graph into tiny subgraphs. Lu's <cit.> encoding algorithm on a graph whose bounded-genus embedding is either given or linear-time computable uses Dijdjev and Venkatesan's <cit.> planarizer for a graph of bounded genus and Goodrich's <cit.> separator-decomposition tree for a planar graph. Our encoding scheme A^* decomposes the input graph based on Reed and Wood's algorithms to obtain a separator of G <cit.> and an H-partition of G for an O(n/log^2 n)-vertex graph H with η(H)≤η(G) <cit.>. By choosing these tools, our encoding scheme A^* does not rely on any embedding of the input graph, although it is acceptable to encode a genus-O(1) embedding in tandem with the input graph. To avoid the requirement of supplying a recognition algorithm for any slim class , our encoding scheme A^* does not precompute a look-up table for all tiny subgraphs in the slim classes . Instead, A^* collects the tiny subgraphs of the input graph G at the bottom of the recursive decomposition and computes a code-book string only for them. We will prove that, as long as G belongs to a slim class (no matter whether is monotone or merely quasi-monotone), the encoded string is guaranteed to be -succinct. Our encoding scheme A^* can be directly applied to any graph having a bounded Hadwiger number without modification for any slim class. For a given class of graphs to have our encoding scheme A^* as a -optimal encoding scheme, one just have to prove that is slim. This is exactly what we do for triconnected planar graphs and triangulations and floor-plans of genus-O(1) surfaces in <ref>. In order to ensure that the encoding length we get is also -succinct for each non-monotone slim class that contains the input graph G, we classify a k-vertex subgraph H of G into two sets according to whether it admits a vertex subset U⊆ V(H) with H=G(U) and |N_G(U)|≤ |V(H)|/log^2 |V(H)|. 
If it does, then we can prove that the encoded string for H is -succinct by showing that its bit count is bounded by f(k)+o(f(k)) for any any continuous super-additive function f with log|_k|≤ f(k)+o(f(k)). Although the encoded string for an H that does not admit such a vertex subset U need not be -succinct, we manage to show that the number of such subgraphs H that occur in our recursive encoding algorithm is too few to ruin the -succinctness of the final encoded string for G. The “star partition” (see <ref>) is crucial in ensuring this. The key of supporting queries lies in designing an o(n)-bit string so that information involving multiple tiny graphs can be obtained in O(1) time. Our -optimal encoding scheme A^* does not rely on an indexable dictionary like that of Raman et al. <cit.> whose encoding size is very close to information-theoretical minimum, since we cannot afford its super-linear deterministic construction time. Instead, we just need an O(rlog m)+o(m)-bit representation for an m-bit string having r 1-bits to support its membership, rank, and select queries in O(1) time. We show in <ref> that such a representation can be constructed in deterministic linear time (see Lemma <ref>). A challenging task for the query support of A^* is to design an O(n)-time computable o(n)-bit string X_q from which the query near_t(u,v) of bounded-distance shortest path can be answered in O(1) time. Kowalik and Kurowski <cit.> showed an involved O(n)-time computable O(nlog n)-bit data structures for an n-vertex undirected planar graph that supports the query in O(1) time. They only briefly sketched how the data structure can be extended to work for minor-closed classes of graphs. Our improvement is two-fold: * We provide a simpler solution to achieve the same objective of supporting the query in O(1) time not just for an undirected planar graph but directly for an n-vertex directed graph G with η(G)=O(1). Specifically, we present an O(n)-time computable O(1)-orientation D for G from which the query can be answered in constant time. The description and proof for such a D, which is called a “director” for G in <ref>, are much shorter than those of Kowalik and Kurowski's data structure. Recall that our -optimal encoding scheme A^* accepts additional information like an orientation for the graph. Thus, A^* can directly encode G together with a director D for G such that the direction of each edge of G assigned by D can be answered in O(1) time. This leads to an O(n)-time computable O(n)-bit encoded string for (G,D) that supports the bounded-distance shortest-path query in O(1) time, already improving upon the O(nlog n)-bit result of Kowalik and Kurowski for the class of planar graphs and minor-closed classes of graphs. * The above O(n)-bit encoded string for a pair (G,D) of two graphs G and D need not be an -succinct encoded string for G. To obtain an -succinct encoded string for G that also supports the query in O(1) time, we show that G admits a director D such that the number of extra bits required to encode (G,D) in addition to that required to encode G is only o(n). The trick is to directly equip a director for each decomposed tiny subgraph at the bottom of the recursive decomposition and keep it in an o(n)-bit code-book string. As a result, we do not need too many extra bits to encode the equipped director for each tiny subgraph. The challenge then lies in showing that o(n) bits suffice to combine the equipped directors of those tiny subgraphs of G into a director for G. 
The star partition of G to be presented in <ref> is once again crucial in accomplishing this objective. Computation model and road-map We assume the conventional unit-cost RAM model of computation, in which operations such as read, write, and add on O(log n) consecutive bits take O(1) time. The model has been adopted by all previous work on graph encoding except that of Jacobson <cit.>. The rest of the paper is organized as follows. Section <ref> shows the linear-time encoding and decoding algorithms of A^*. Section <ref> shows the query algorithms of A^*, which run in O(1) time per output. Section <ref> concludes the paper. § THE ENCODING AND DECODING ALGORITHMS Section <ref> shows that each graph of bounded Hadwiger number admits a linear-time computable star partition. Sections <ref> and <ref> present the encoding and decoding algorithms of our -optimal encoding scheme A^* based on star partition. Section <ref> shows that the classes of triconnected planar graphs and triangulations and floor-plans of genus-O(1) surfaces are all slim, implying that A^* is -optimal for each of these three classes . §.§ Star partition Let [i,j] for integers i and j consist of the integers k with i≤ k≤ j. Let [j]=[1,j]. We call (V_0,…,V_p) a partition of a graph G if V_0,…,V_p are pairwise-disjoint subsets of V(G) whose union is V(G). As illustrated in Figure <ref>(a), a partition (V_0,…,V_p) of an n-vertex graph G is a star partition of G if * V_1,…,V_p are pairwise nonadjacent in G, * |N_G[V_i]|=poly(log n) holds for each i∈[p], and * |V_0|+p+∑_i∈ [p] |N_G(V_i)|=O(n/log^4/3 n). It takes O(n) time to obtain a star partition of an n-vertex graph G with η(G)=O(1). The rest of the subsection proves Lemma <ref> by Lemmas <ref> and <ref> below. As illustrated in Figure <ref>(b), a partition (U_1,…,U_m) of a graph G is an H-partition of G for an undirected graph H with V(H)={U_1,…,U_m} if the next statements hold for any distinct indices i and j in [m]: * |U_i|=O(n/m). * U_iU_j is an edge of H if and only if the subsets U_i and U_j of V(G) are adjacent in G. For an n-vertex graph G with η(G)=O(1) and an m=O(n), it takes O(n) time to obtain an H-partition of G for an O(m)-vertex graph H with η(H)=O(1). A partition (A,B,C) of an m-vertex graph H is balanced if max(|A|,|B|)≤ 2m/3, |C|=O(m^2/3), and A and B are nonadjacent in G. It takes O(m) time to obtain a balanced partition of an m-vertex graph H with η(H)=O(1). Apply Lemma <ref> to obtain an undirected graph H with m=|V(H)|=Θ(n/log n) and η(H)=O(1) and an H-partition (U_1,…,U_m) of G in O(n) time. Within the proof, we call a member of V(G) a vertex of G and call a member of V(H) a node of H. Hence, each node U_j of H with j∈[m] is a set of O(log n) vertices of G. For each vertex v of G, let j_v∈ [m] denote the index with v∈ U_j_v. Let b=⌈log^2 n⌉. Let T be the rooted ordered binary tree on node subsets of V(H) returned by the following recursive procedure on H: * If |V(H)|≤ b^2, then return V(H) as a leaf of T. * If |V(H)|>b^2, then obtain a balanced partition (A,B,C) of H in O(|V(H)|) time by Lemma <ref> and return the subtree T(C) of T rooted at C whose left (respectively, right) subtree is the one returned by the procedure call on H[A∪ C] (respectively, H[B∪ C]). Let W_1,…,W_p be the leaves of T. Note that each W_i with i∈[p] is a node subset of V(H). Let each V_i with i∈[p] consist of the vertices v∈ V(G) with |N_G(v)|≤ b such that W_i is the unique leaf of T with U_j_v∈ W_i. Let V_0=V(G)∖ (V_1∪⋯∪ V_p). 
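The tree-building recursion just described (small vertex sets become leaves, separators of balanced partitions become internal nodes) can be outlined as follows. This is an illustrative sketch only, not the paper's implementation: the quotient graph H of the H-partition is treated as an ordinary adjacency-set graph, and `balanced_partition` stands for the black-box separator routine of the cited lemma.

```python
def induced(H, S):
    """Induced subgraph H[S] on an adjacency-set representation."""
    S = set(S)
    return {v: H[v] & S for v in S}

def decomposition_tree(H, b, balanced_partition):
    """The recursion described above: a leaf stores a small vertex set, an internal
    node stores the separator C of a balanced partition (A, B, C) of the current
    graph and recurses on H[A | C] and H[B | C].  `balanced_partition` is assumed
    to return A, B nonadjacent with max(|A|, |B|) <= 2|V|/3 and a small C."""
    vertices = set(H)
    if len(vertices) <= b * b:
        return {"leaf": vertices}
    A, B, C = balanced_partition(H)
    left = decomposition_tree(induced(H, set(A) | set(C)), b, balanced_partition)
    right = decomposition_tree(induced(H, set(B) | set(C)), b, balanced_partition)
    return {"separator": set(C), "children": [left, right]}

def leaves(tree):
    """The leaf sets W_1, ..., W_p of the tree in left-to-right order."""
    if "leaf" in tree:
        return [tree["leaf"]]
    return [W for child in tree["children"] for W in leaves(child)]
```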
Thus, a vertex v of G belongs to V_0 if and only if |N_G(v)|>b or U_j_v∈ C holds for a nonleaf member C∈ V(T). The rest of the proof shows that the above procedure runs in O(n) time and (V_0,…,V_p) is a star partition of G. To show that (V_0,…,V_p) is obtained in O(n) time, let each Ψ(C)⊆ V(H) with C∈ V(T) denote the union of the leaves of the subtree T(C) of T rooted at C. Hence, Ψ(C)=V(H) for the root C of T and Ψ(C)=C for each leaf C of T. If X and Y are the children of a C∈ V(T) in T, then (Ψ(X)∖ C,Ψ(Y)∖ C,C) is a balanced partition of H[Ψ(C)] obtained in O(|Ψ(C)|) time. Let Λ_0 consist of the leaves of T. Let m_0=∑_C∈Λ_0|C|. For each of the O(log m_0) levels of T, the balanced partitions of H[Ψ(C)] for all C∈ V(T) in the same level of T take overall O(m_0) time. Thus, T is obtained in O(m_0log m_0) time. Observe that if C∈ V(T), then C∈Λ_0 if and only if |Ψ(C)|≤ b^2. Let each Λ_i with i≥ 1 consist of the C∈ V(T) with b^2· (1.5)^i-1< |Ψ(C)|≤ b^2·(1.5)^i. If X and Y are distinct members of Λ_i such that X is an ancestor of Y in T, then the distance of X and Y in T is O(1). Thus, ∑_C∈Λ_i|Ψ(C)|=O(m_0) holds for each i≥ 1, implying |Λ_i|=O(m_0/b^2· (2/3)^i) by b^2· (1.5)^i-1< |Ψ(C)|. Each C∈Λ_i with i≥ 1 admits a balanced partition (A,B,C) of H[Ψ(C)], implying |C|=O(b^4/3· (1.5)^2i/3) by |Ψ(C)|≤ b^2·(1.5)^i. Since each C∈ V(T)∖Λ_0 contributes |C| to the nonnegative difference m_0-m, we have m_0-m =∑_i≥ 1∑_C∈Λ_i|C| =∑_i≥ 1|Λ_i|· O(b^4/3· (1.5)^2i/3) =O(m_0/b^2/3)·∑_i≥ 1(2/3)^i/3 =O(m_0/log^4/3n). By m=m_0-o(m_0), we have m_0=Θ(m). Thus, the time of computing T is O(m_0log m_0)=O(mlog m)=O(n), implying that (V_0,…,V_p) can be obtained in O(n) time. We now show that (V_0,…,V_p) is a star partition of G. To see Condition <ref>, let v∈ V_i with i∈[p]. By definition of T, the leaf W_i of T containing U_j_v also contains all neighbors of U_j_v in H. The union of the nodes of H in W_i is a subset of V_0∪ V_i. Since (U_1,…,U_m) is an H-partition of G, we have N_G(v)⊆ V_0∪ V_i. Thus, V_1,…,V_p are pairwise nonadjacent in G. To see Condition <ref>, we have |V_i|=O(log^5 n) by |W_i|≤ b^2 for each i∈ [p] and |U_j|=O(log n) for each j∈[m]. By |N_G(v)|≤ b for each v∈ V_i, we have |N_G[V_i]|=O(log^7 n)=poly(log n). As for Condition <ref>, η(G)=O(1) implies that the number of vertices v with |N_G(v)|>b is O(n/b). By Equation (<ref>), we have ∑_C∈ V(T)∖Λ_0|C|=O(n/log^7/3n). Thus, |V_0|=O(n/log^2 n)+O(n/log^4/3 n)=O(n/log^4/3 n). By definition of T, |W_i|=Ω(b^2) holds for each i∈[p]. By ∑_i∈[p]|W_i|=m_0=O(m), we have p=O(m/b^2)=O(n/log^4/3 n). Each C∈ V(T)∖Λ_0 contributes O(|C|·log n) to the sum ∑_i∈[p]|N_G(V_i)|. By Equation (<ref>), we have ∑_i∈[p]|N_G(V_i)|=∑_C∈ V(T)∖Λ_0|C|· O(log n)=O(m_0/log^1/3n)=O(n/log^4/3 n). Therefore, |V_0|+p+∑_i∈ [p] |N_G(V_i)|=O(n/log^4/3 n). §.§ The encoding algorithm A concatenation prefix of p binary strings X_1,…,X_p having overall n bits is an O(plog n)-bit O(n)-time computable string χ such that it takes O(1) time to obtain from the concatenation X of χ,X_1,…,X_p the starting position of each X_i with i∈ [p] in X. For instance, such a χ can be the concatenation of strings χ_-1,χ_0,χ_1,…,χ_p, each having exactly b=1+⌈log n⌉ bits, where (1) χ_-1 is 0^b-11, (2) χ_0 is the binary representation of p, and (3) each χ_i with i∈[p] is the binary representation of the starting position of X_i in the concatenation of X_1,…,X_p. Thus, it takes O(1) time to obtain b from the first O(1) words of X. 
It then takes O(1) time to obtain p and the starting position of each X_i with i∈[p] in X. The prefixed concatenation of binary strings X_1,…,X_p is the concatenation X of χ,X_1,…,X_p for a concatenation prefix χ of X_1,…,X_p. When p=o(n/log n), the prefixed concatenation of X_1,…,X_p has n+o(n) bits. The encoding algorithm of our -optimal encoding scheme A^* on an input n-vertex graph G has the following three phases: (1) The first phase computes a decomposition tree T for G via Lemma <ref>. (2) The second phase computes a code-book string χ for the subgraphs of G at the leaves of T. (3) The third phase computes an encoded string code(G) for G in a bottom-up manner along T. The base encoded string X_base reported by the encoding algorithm of A^* on G is the prefixed concatenation of χ and code(G). Phase 1 The first phase constructs a height-O(1) decomposition tree T for G such that (i) each member of V(T), called a node of T, is a subgraph of G, (ii) the union of all nodes of T is G, and (iii) a node of T is a leaf of T if and only if it has at most ℓ= ⌈loglog n⌉ vertices. For each subgraph H of G, let ∂ H=N_G(V(G)∖ V(H)). Let the initial tree T consist of the single node G. We iteratively update T until each leaf node H of T has at most ℓ vertices. The round for a leaf node H of T with |V(H)|>ℓ performs the following: * If G is equipped with a genus-O(1) embedding that is required to be recovered together with G by the decoding algorithm, then obtain a graph Δ_H from H by adding edges to triangulate each face of H according to the induced embedding of H. The genus and hence the Hadwiger number of Δ_H remain O(1). Obtain a star partition (V_0,…,V_p) of Δ_H in O(|V(H)|) time by Lemma <ref>.[We can only recover a given genus-O(1) embedding of G: H is triangulated into Δ_H to ensure the -succinctness of the encoded string. The genus of Δ_H remains O(1), so Δ_H belongs to a minor-closed class of graphs. By η(Δ_H)=O(1), Δ_H admits an O(|V(H)|)-time obtainable star partition. If the given embedding of G has genus ω(1), then we do not know even know whether Δ_H admits a star partition.] Otherwise, obtain a star partition (V_0,…,V_p) of H in O(|V(H)|) time by Lemma <ref>. * Let U_0=V_0∪∂ H and U_i=V_i∖∂ H for each i∈[p]. We have N_H(U_i)⊆ V_0∪∂ H⊆ U_0 and H(U_i)=G(U_i) for each i∈[p]. See Figure <ref> for an illustration. We call (U_0,…,U_p) the T-partition of H. Let H_0=H[U_0] and H_i=H(U_i) for each i∈[p]. We call (H_0,…,H_p) the T-subgraphs of H. Replace the leaf node H of T by a height-1 subtree of T rooted at H_0 whose i-th child with i∈[p] is H_i. Each edge of G belongs to exactly one node of T, but a vertex of G may belong to more than one nodes of T. By Conditions <ref> and <ref> of (V_0,…,V_p), each of the above rounds increases the overall number of vertices in all nodes of T by ∑_i∈ [p]|N_H(U_i)|≤∑_i∈[p] |N_H(V_i)|+|V_i∩∂ H|≤ |∂ H|+O(|V(H)|/log^4/3|V(H)|). By Condition <ref> of (V_0,…,V_p), the height of T is O(1). By ∂ H_i⊆ N_H(U_i) for each i∈[p] and ∂ G=∅, the overall number of vertices in all nodes of T is n+o(n). Phase 1 runs in O(n) time. A subtree of T is denoted T_H for a subgraph H of G if H is the union of the nodes of T_H. Thus, T_H=H for each leaf node H of T and T_G=T. Note that distinct subtrees T_H and T_H' of T may have isomorphic subgraphs H and H' of G, but their |∂ H| and |∂ H'| can still be different. Phase 2 For each positive integer k≤ℓ, let Λ_k consist of the leaf nodes H of T with |V(H)|≤ k. 
Let each Λ^*_k with k∈[ℓ] consist of each graph H in Λ_k such that at least one of its occurrences in G identified by the tree structure of T admits a vertex subset U⊆ V(H) with H=G(U) and |N_G(U)| ≤ |V(H)|/log^2 |V(H)|. Thus, each graph in Λ^*_k is an O(k/log^2 k)-quasi-member of each slim class that contains G. By O(1)^poly(ℓ)=o(n), it takes O(n) time to * design for each distinct k-vertex graph H in Λ_ℓ a unique encoded string code(H) having at most log |Λ_k|+O(1) bits such that if H is in Λ^*_ℓ, then code(H) has at most log |Λ^*_k|+O(1) bits and * construct an o(n)-bit code-book string χ with which each graph H∈Λ_ℓ and code(H) can be obtained from each other in O(1) time. Note that whether two leaf nodes of T are considered distinct elements of Λ_ℓ depends on whether they are isomorphic and equipped with the same additional information to be recovered by the decoding algorithm. Two distinct subtree T_H and T_H' with isomorphic leaf nodes H and H' of T may have different |∂ H| and |∂ H'|, though. The higher complexity the additional information has, the longer the code-book string χ and the encoded strings code(H) for the tiny subgraphs H of G in Λ_ℓ become. The base encoded string X_base for the input graph G is the prefixed concatenation of the o(n)-bit code-book string χ and the -succinct encoded string code(G) for G to be computed in the third phase. Phase 3 The encoded string code(G) is defined recursively for each subtree T_H of G: If |V(H)|≤ℓ, then let code(H) be as defined in the code-book string χ such that code(H) and H can be obtained from each other in O(1) time via χ. Otherwise, let (U_0,…,U_p) and (H_0,…,H_p) be the T-partition and T-subgraphs of H. We already have each code(H_i) with i∈[p], since T_H_i is the i-th subtree of T_H. The encoded string code(H) for H is the prefixed concatenation of code(H_0),…,code(H_p), where code(H_0) is the following O(|V(H)|)-time computable O(|U_0|·log |V(H)|)+o(|V(H)|)-bit string with which the decoding algorithm can recover H and its equipped information from the subgraphs H_i with i∈[p] and their equipped information in O(|V(H)|) time. * Each edge of H_0 and each duplicated copy of each vertex of U_0 in H_i with i∈[p] can be represented using O(log|V(H)|) bits. By η(H)=O(1), we have |E(H_0)|=O(|U_0|)=O(|V_0|+|∂ H|). By Equation (<ref>), we have ∑_i∈[p]|N_H(U_i)|≤ |U_0|+O(|V(H)|/log^4/3|V(H)|). Thus, H together with its equipped information like vertex and edge coloring and orientation can be recovered from H_1,…,H_p and their equipped information using an O(|U_0|·log |V(H)|)+o(|V(H)|)-bit string code(H_0). * For the case that G is equipped with a genus-O(1) embedding required to be recovered together with G by the decoding algorithm, consider the clockwise order of the incident edges of each vertex u in H around u according to the induced embedding of H. * If u∈ U_0, then the induced embedding of each H_i with i∈[p] preserves the induced order of the incident edges of u in H_i around u. * If u∈ U_i with i∈ [p], then H_i contains all incident edges of u, implying that the induced embedding of H_i preserves their order around u. Hence, code(H_0) uses O(log |V(H)|) bits to encode H[{u,v,w}] for every triple of vertices u∈ U_0, v∈ U_j, and w∈ U_k with jk=0 or j k such that u⃗v⃗ or v⃗u⃗ immediately precedes u⃗w⃗ or w⃗u⃗ around u in clockwise order around u according to the induced embedding of H. * The number of such triples (u,v,w) with jk=0 is O(|U_0|), since each of the O(|U_0|) edges of H_0 belongs to Θ(1) such subgraphs H_0[{u,v,w}]. 
* The number of such triples (u,v,w) with jk 0 and j k is also O(|U_0|): Vertices sets U_j⊆ V_j and U_k⊆ V_k with jk 0 and j k are non-adjacent in the triangulated version Δ_H of H by Condition <ref> of (V_0,…,V_p), implying that one of the O(|U_0|) incident edges of u in Δ_H[U_0] succeeds u⃗v⃗ or v⃗u⃗ and precedes u⃗w⃗ or w⃗u⃗ in the clockwise order around u according to the embedding of Δ_H. Thus, code(H_0) has O(|U_0|·log |V(H)|)+o(|V(H)|) bits. Since the height of T is O(1) and the overall number of vertices in all nodes of T is n+o(n), the third phase also runs in O(n) time. Encoding size A function f is super-additive and continuous <cit.> if f(n_1)+f(n_2) ≤ f(n_1 +n_2) and f(n + o(n)) = f(n) + o(f(n)), respectively. For example, f(n) = n^a log ^b n for any constants a ≥ 1 and b ≥ 0 is continuous and super-additive. Super-additivity and continuity are both closed under additions. By X_base=code(G)+o(n) and log |_n|=Θ(n) for each class ∈, we ensure the -succinctness of X_base by proving code(G)≤ f(n)+o(f(n)) for each continuous super-additive function f and each slim class containing the input n-vertex graph G that satisfy log|_n|≤ f(n)+o(f(n)). We first ready Equations (<ref>), (<ref>), and (<ref>) below that are needed to prove Equation (<ref>). For each subtree T_H of T with k=|V(H)|>ℓ, the encoded string code(H) is the prefixed concatenation of code(H_0),…,code(H_p) for the T-subgraphs (H_0,…,H_p) of H. For the T-partition (U_0,…,U_p) of H and its corresponding star partition (V_0,…,V_p), we have U_0=V_0∪∂ H and for each i∈[p] V(H_i) =U_i∪ N_H(U_i) ∂ H_i∪ N_H(U_i) ⊆ N_H(V_i)∪ (V_i∩∂ H). By code(H_0)=O(|U_0|·log k)+o(k) and Conditions <ref> and <ref> of (V_0,…,V_p), we have code(H_0) =O(|∂ H|·log k)+o(k) |∂ H_1|+⋯+|∂ H_p| ≤ |∂ H|+o(k/log k) |V(H_1)|+⋯+|V(H_p)| ≤ |∂ H|+k+o(k/log k). We can then prove for each subtree T_H of T, no matter whether k=|V(H)| is more than ℓ or not, code(H)=O(k+|∂ H|·log k) by induction on the bounded height of T_H: If k≤ℓ, then code(H)≤log |Λ_k|+O(1)=O(k) by η(H)=O(1). If k>ℓ, then Equation (<ref>) and p=o(k/log k) imply code(H) =O(|∂ H|·log k)+o(k)+∑_i∈[p]O(|V(H_i)|+|∂ H_i|·log |V(H_i)|) =O(k+|∂ H|·log k) by the inductive hypothesis. Equation (<ref>) is proved. Each k-vertex graph H in Λ^*_k can be represented by an encoding of a k+o(k/log k)-vertex graph H' in and an o(k)-bit string specifying a set U'⊆ V(H') of o(k/log k) vertices with H=H'-U'. We have log |Λ^*_k| ≤log |_k+o(k)|+o(k) ≤ f(k+o(k))+o(f(k+o(k)))+o(k) ≤ f(k)+o(f(k)), since f is continuous and super-additive. We now prove Equation (<ref>) using Equations (<ref>), (<ref>), and (<ref>). Note that we have ∂ G=∅, T_G=T, G=G(V(G)), and |N_G(V(G))|=0. Thus, it suffices to show for each subtree T_H of T with k=|V(H)| that if H=G(U) holds for a vertex set U⊆ V(H) with |N_G(U)|≤ k/log^2 k, then code(H)≤ f(k)+o(f(k)). We prove Equation (<ref>) by induction on the bounded height of T_H. If k≤ℓ, then H=G(U) belongs to Λ^*_k by |N_G(U)|≤ k/log^2 k, implying code(H)≤log |Λ^*_k|+O(1) ≤ f(k)+o(f(k)). The basis holds. If k>ℓ, then H=G(U) implies ∂ H⊆ N_G(U). By |N_G(U)|≤ k/log^2 k, we have |∂ H|≤ k/log^2 k. Let (U_0,…,U_p) and (H_0,…,H_p) be the T-partition and the T-subgraphs of H. By Condition <ref> of the corresponding star partition (V_0,…,V_p) of H, we have |V(H_i)|=|N_H[U_i]|≤ |N_H[V_i]|=poly(log k) for each i∈[p]. Let I consist of the indices i∈[p] such that H_i admits subset U'⊆ V(H_i) with H_i=G(U') and |N_G(U')|≤ |V(H_i)|/log^2 |V(H_i)|. 
If i∉ I, then H_i=H(U_i)=G(U_i) implies |N_H(U_i)|=|N_G(U_i)|> |V(H_i)|/log^2 |V(H_i)| =Ω(|V(H_i)|/log^2 log k). By Equations (<ref>), (<ref>), and (<ref>), we have ∑_i∈ [p]∖ I |V(H_i)| = ∑_i∈ [p]∖ I O(|N_H(U_i)|·log^2 log k) =O((|∂ H|+k/log^4/3k)·log^2log k) =o(k/log k). By f(k)=Ω(k), the inductive hypothesis, and Equations (<ref>), (<ref>), (<ref>), and (<ref>), we have code(H) =O(|∂ H|·log k)+o(k)+ ∑_i∈ [p]∖ Icode(H_i)+ ∑_i∈ Icode(H_i) ≤ o(k)+∑_i∈ [p]∖ I O(|V(H_i)|+|∂ H_i|·log |V(H_i)|)+ ∑_i∈ If(|V(H_i)|)+o(f(|V(H_i)|)) ≤ o(k)+∑_i∈ [p] O(|∂ H_i|·log k)+ ∑_i∈ [p]f(|V(H_i)|)+o(f(|V(H_i)|)) ≤ f(k)+o(f(k)). Equation (<ref>) is proved, implying that Equation (<ref>) holds. Since X_base is -succinct for each slim class that contains G, X_base is -succinct. Thus, the encoded string X produced by the O(n)-time encoding algorithm of A^* for the input n-vertex graph G is -succinct. §.§ The decoding algorithm The decoding algorithm of A^* first obtains the o(n)-bit code-book string χ and the binary string code(G) for G from X_base. It takes O(n) time to recover the height-O(1) decomposition tree T of G obtained by the encoding algorithm and the string code(H) for each subtree T_H of T. It takes overall O(n) time to obtain all subgraphs H of G at the leaves of T from χ and code(H). For each non-singleton subtree T_H of T, it takes O(|V(H)|) time to recover H and its equipped additional information, if any, from code(H_0) and the subgraphs H(U_i) with i∈[p] with respect to the T-partition (U_0,…,U_p) of H used by the encoding algorithm of A^* which is preserved in the tree structure of T_H. Thus, it takes O(n) time to decode X back to the graph G and its equipped additional information. For the case that the embedding reflected by the input adjacency list of G need not be recovered together with G, the decoding algorithm of A^* reports an adjacency list of G that is recursively defined for each subtree T_H of T as follows. For each distinct subgraph H of G in Λ_ℓ, we report the adjacency list of H stored in the code-book string χ. Note that this need not be the same as the induced adjacency list of any occurrence of H in G. For each non-singleton subtree T_H of T, let (U_0,…,U_p) and (H_0,…,H_p) be the T-partition and the T-subgraphs of H. For each vertex u∈ U_i with i∈[p], the neighbor list of u in H is exactly the reported neighbor list of u in H_i. For each u∈ U_0, the neighbor list of u in H is the concatenation of the reported neighbor lists of u in H_0,…,H_p in order. §.§ Three non-monotone slim classes of graphs To ensure that our -optimal encoding scheme A^* indeed overshadows all the previous work for non-monotone classes of graphs listed in Figure <ref>, we show that the three non-monotone classes of triconnected planar graphs and triangulations and floor-plans of genus-O(1) surfaces are quasi-monotone. Specifically, we verify that the subgraph G(U) of G for each nonempty proper subset U of V(G) can be obtained from a graph of by deleting O(|N_G(U)|) vertices and their incident edges. Since all graphs in these three classes are connected, we have |N_G(U)|≥ 1. * Example 1: triconnected planar graphs. Let H=G(U). Repeat the following three steps on the current graph H for three iterations: (i) Let the current graph H be embedded such that each of its biconnected components is incident to the exterior face. (ii) Add a new vertex into the exterior face of H. (iii) Make the new vertex a common neighbor of all vertices on the exterior bound of the current H. 
The plane graph H at the end of the k-th round is k-connected. Thus, the initial H can be obtained from the final H∈ by deleting exactly three vertices and their incident edges. * Example 2: triangulations of a genus-O(1) surface. The boundary of each non-triangle face F of G(U) contains at least two vertices of N_G(U). Let e_F be an edge between arbitrary two vertices of N_G(U) on the boundary of F. The graph consisting of the edges e_F for all non-triangle faces F of G(U) has genus O(1), implying that the number of non-triangle faces of G(U) is O(|N_G(U)|). Thus, adding a new vertex u_F to triangulate each non-triangle face F of G(U) results in a triangulation of a genus-O(1) surface. G(U) can be obtained from the resulting graph by deleting the O(|N_G(U)|) new vertices u_F and their incident edges. * Example 3: floor-plans of a genus-O(1) surface. Since each vertex a floor-plan has O(1) degree, one can make G(U) a floor-plan by adding O(|N_G(U)|) vertices and edges such that G(U) can be obtained from the resulting floor-plan by deleting the new vertices and their incident edges. Note that there is no need to modify our A^* to accommodate these non-monotone classes as did by the previous encoding schemes in the literature. Since the above classes are clearly nontrivial and their members have bounded Hadwiger numbers, we just have to prove that they are quasi-monotone and then our -optimal encoding scheme A^* is guaranteed to be -optimal encoding schemes for each of the above classes . One can obtain more examples this way. § THE QUERY ALGORITHMS Section <ref> presents a framework for designing an O(n)-time obtainable o(n)-bit string X_q for a query q such that an answer to q can be obtained in O(1) time from X_base and X_q. Section <ref> applies the framework on the query of obtaining the degree of a vertex u in G. Other queries for a vertex of G can be supported in a same way. Section <ref> applies the framework on the query of determining whether u⃗v⃗ is an edge of G for a pair (u,v) of vertices of G. Other queries for a pair of adjacent vertices of G can be supported in a same way. Section <ref> applies the framework on the query q_t of reporting a shortest uv-path of G for a pair (u,v) of vertices u and v of G if there is one whose length is bounded by a prespecified t=O(1). §.§ A framework for supporting O(1)-time queries using additional o(n) bits This subsection presents a framework for supporting a query q in O(1) time using an O(n)-time obtainable o(n)-bit encoded string X_q which is the prefixed concatenation of (i) a string χ_label for labeling, (ii) a string χ_leaf supporting the query for the tiny graphs in Λ_ℓ, and (iii) a string χ_G to be recursively defined based on the decomposition tree T for G. Dictionary Our framework uses Lemma <ref> below to handle the labels of vertices. Let each Y[i] with i∈[m] denote the i-th bit of an m-bit binary string Y. For each i∈[m], let rank(Y,i)=Y[1]+⋯+Y[i]. Let rank(Y) denote the number rank(Y,Y) of 1-bits in Y. Let each select(Y,j) with j∈[rank(Y)] denote the index i∈[m] such that Y[i] is the j-th 1-bit in Y. A fully indexable dictionary <cit.> for Y is a binary string from which (1) each select(Y,j) with j∈[rank(Y)], (2) each rank(Y,i) with i∈[m], and (3) each Y[i] with i∈ [m] can be obtained in O(1) time. It takes O(m) time to compute an O(rlog m)+o(m)-bit fully indexable dictionary dict(Y) for an m-bit binary string Y with rank(Y)=r. Let h=⌈1/2log m⌉. Assume r≥ 1 and that m is an integral multiple of h^2 without loss of generality. 
Let dict(Y) be the prefixed concatenation of the following O(m)-time obtainable O(rlog m)+o(m)-bit strings dict_1(Y), dict_2(Y), and dict_3(Y). (1) Select: The select query can be supported in O(1) time by the string dict_1(Y) whose j-th 2h-bit word for each j∈ [r] stores select(Y,j). We have dict_1(Y)=O(rlog m). (2) Rank: The rank query can be supported in O(1) time by the prefixed concatenation dict_2(Y) of the following O(m)-time obtainable o(m)-bit strings χ_2a,χ_2b,χ_2c: * For each i∈ [m/h^2], let the i-th 2h-bit word χ_2a(i) of χ_2a store rank(Y,ih^2). Thus, χ_2a=o(m). * For each i∈[m/h^2], let Y_i be the i-th h^2-bit substring Y[(i-1)h^2+1,ih^2] of Y and let χ_2b store the ranks for the positions that are integral multiples of h in all Y_i with i∈[m/h^2]. Specifically, let χ_2b be the concatenation of all χ_2b(i) with i∈[m/h^2], where the j-th 2⌈log h⌉-bit word of χ_2b(i) stores rank(Y_i,jh) for each j∈[h]. Thus, χ_2b=O((m/h^2)· h·log h)=o(m). * By 2^h=O(√(m)), there is an O(m)-time computable o(m)-bit string χ_2c answering rank(Z) in O(1) time for any string Z having at most h bits. For each k∈[m], it takes O(1) time to obtain rank(Y, k)= r_a+r_b+r_c as follows. With i=⌈ k/h^2⌉, obtain r_a=rank(Y,(i-1)h^2) from χ_2a. With j=⌈(k-(i-1)h^2)/h⌉, obtain r_b=rank(Y_i,(j-1)h) from χ_2b. Obtain r_c=rank(Y[(i-1)h^2+(j-1)h+1,k]) from χ_2c. (3) Membership: The membership query can be supported in O(1) time by the prefixed concatenation dict_3(Y) of the following strings χ_3a, χ_3b, and χ_3c: * Let χ_3a be the m/h-bit string such that χ_3a[i]=0 if and only if the i-th h-bit word of Y is 0. * Let χ_3b=dict_2(χ_3a), which has o(m) bits. * Let χ_3c be the hr-bit string such that if χ_3a[i] is the j-th 1-bit of χ_3a, then the j-th h-bit word of χ_3c with j∈ [r] stores the i-th h-bit word of Y. For each k∈[m], it takes O(1) time to obtain Y[k] as follows. Let i=⌈ k/h⌉. If χ_3a[i]=0, then Y[k]=0. If χ_3a[i]=1, then obtain j=rank(χ_3a,i) from χ_3b and obtain Z=Y[(i-1)h+1,ih] from the j-th h-bit word of χ_3c. Thus, Y[k]=Z[k-(i-1)h]. Labeling A labeling for a graph H is a bijection L:V(H)→ [|V(H)|], assigning each vertex u of H a label L(u) representable in ⌈log|V(H)|⌉ bits. The query support of our encoding scheme A^* is based on the following O(n)-time obtainable labeling L=L_G. By O(1)^poly(ℓ)=o(n), it takes O(n) time to associate an arbitrary fixed labeling L_H to each distinct subgraph H in Λ_ℓ and construct an o(n)-bit string χ_leaf from which L_H can be obtained in O(1) time via code(H). For each subtree T_H of T with k=|V(H)|>ℓ, let let (U_0,…,U_p) and (H_0,…,H_p) be the T-partition and T-subgraphs of H. Let each L_H(u) with u∈ U_0 be an arbitrary distinct integer in [|U_0|]. For each u∈ U_i with i∈[p], if L_H_i(u) is the j-th smallest number in the set L_H_i(U_i) of labels, then let L_H(u)=|U_0|+⋯+|U_i-1|+j. Let χ_label be the prefixed concatenation of χ_leaf and χ_G, where χ_H for each subtree T_H of T is recursively defined as follows. Let χ_H be an O(1)-bit string fixed for all subgraphs H∈Λ_ℓ, signifying that H is a leaf node of T and code(H) can be obtained from X_base in O(1) time using the position of T_H in T. If k=|V(H)|>ℓ, then let χ_H be the prefixed concatenation of χ_H_0,…,χ_H_p for the T-partition (U_0,…,U_p) and the T-subgraphs (H_0,…,H_p) of H such that χ_H_0 supports the following queries in O(1) time: * Given L_H(u), obtain the index i∈[0,p] with u∈ U_i. * Given i∈[0,p] and L_H_i(u), obtain L_H(u). 
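Before turning to how the two labeling queries above are implemented, a small plain-Python sketch may help fix ideas for the dictionary of the lemma. It mimics the select table dict_1 and the three-level rank decomposition χ_2a/χ_2b/χ_2c (superblocks of h^2 bits, blocks of h bits, and a final within-block count), but it stores ordinary integers instead of packed words and scans the last block instead of using a lookup table, so it only illustrates the query logic, not the o(m)-bit space bound.

```python
class IllustrativeDict:
    """Plain-Python stand-in for dict(Y): select, rank, and membership on a bit string Y."""

    def __init__(self, y, h=4):
        self.y = list(y)                    # Y as a list of 0/1 bits, positions 1..m
        self.h = h                          # block length; superblocks have h*h bits
        m = len(self.y)
        prefix = [0]
        for b in self.y:
            prefix.append(prefix[-1] + b)
        # dict_1: position of the j-th 1-bit
        self.select_tab = [i + 1 for i, b in enumerate(self.y) if b]
        # chi_2a: absolute rank at superblock boundaries
        self.super_rank = [prefix[i * h * h] for i in range(m // (h * h) + 1)]
        # chi_2b: rank at block boundaries, relative to the enclosing superblock
        self.block_rank = [prefix[j * h] - self.super_rank[(j * h) // (h * h)]
                           for j in range(m // h + 1)]

    def rank(self, k):                      # number of 1-bits in Y[1..k]
        if k == 0:
            return 0
        i, j = (k - 1) // (self.h * self.h), (k - 1) // self.h
        r_c = sum(self.y[j * self.h:k])     # the real structure reads this from the table chi_2c
        return self.super_rank[i] + self.block_rank[j] + r_c

    def select(self, j):                    # position of the j-th 1-bit
        return self.select_tab[j - 1]

    def member(self, k):                    # Y[k]; the real dict_3 avoids storing Y explicitly
        return self.y[k - 1]


d = IllustrativeDict([0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0], h=2)
assert d.rank(8) == 4 and d.select(4) == 8 and d.member(5) == 1
```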
(1) For Query <ref>, let Y_0 be the k-bit string whose j-th 1-bit with j∈ [p] is at position |U_0|+⋯+|U_j-1|+1. The index i∈[0,p] with u∈ U_i is rank(Y_0,L_H(u)), which is obtainable in O(1) time from the O(plog k)+o(k)-bit string χ_0=dict(Y_0) by Lemma <ref>. By Condition <ref> of the star partition (V_0,…,V_p) corresponding to (U_0,…,U_p), we have χ_0=o(k). (2) For Query <ref>, we focus on the case with i∈[p], since L_H(u)=L_H_0(u). For each i∈ [p], let χ_i,a=dict(Y_i) for the |V(H_i)|-bit string Y_i such that Y_i[j]=1 if and only if U_0 contains the vertex u with L_H_i(u)=j. Lemma <ref> implies χ_i,a=O(|N_H(U_i)|·log k)+o(|V(H_i)|). Let χ_i,b be the O(|N_H(U_i)|·log k)-bit string whose j-th ⌈log k⌉-bit word χ_i,b(j) with j∈ [|N_H(U_i)|] stores L_H(u) for the vertex u of H_i such that Y_i[L_H_i(u)] is the j-th 1-bit of Y_i (i. e., select(Y_i,j)=L_H_i(u)). To obtain L_H(u) from i∈ [p] and L_H_i(u), obtain c=Y_i[L_H_i(u)] from χ_i,a. If c=0, then obtain L_H(u)=L_H_i(u)+select(Y_0,i)-1 from χ_0. If c=1, then obtain L_H(u)=χ_i,b(rank(Y_i,L_H_i(u))) from χ_i,a and χ_i,b. Let χ_i be the O(|N_H(U_i)|·log k)+o(|V(H_i)|)-bit prefixed concatenation of χ_i,a and χ_i,b. The bit count of the O(k)-time obtainable prefixed concatenation χ_H_0 of the strings χ_0,…,χ_p is o(k)+∑_i∈[p] O(|N_H(U_i)|·log k)+o(|V(H_i)|)=o(k)+O(|∂ H|·log k). We prove the next equation for each subtree T_H of T by induction on the bounded height of T_H: χ_H=o(k)+O(1+|∂ H|·log k). If k≤ℓ, then χ_H=O(1) implies Equation (<ref>) (even if k=O(1) and |∂ H|=0). The basis holds. If k>ℓ, then Equations (<ref>) and (<ref>) and p=o(k/log k) (by Condition <ref> of (V_0,…,V_p)) imply χ_H =o(k)+O(|∂ H|·log k)+∑_i∈[p]o(|V(H_i)|)+O(1+|∂ H_i|·log |V(H_i)|) =o(k)+O(1+|∂ H|·log k) by the inductive hypothesis. Equation (<ref>) is proved. By |∂ G|=0, we have χ_G=o(n). The framework and how to use it Since each graph H in Λ_ℓ has at most ℓ=O(loglog n) vertices, it takes O(n) time to compute an o(n)-bit string χ_q,leaf from which the query q can be supported in O(1) time via the labels L_H(u) and other information of the vertices u given to the query algorithm for q and the encoded string code(H) for H which is O(1)-time obtainable from X_base using the position of T_H in T. For each subtree T_H of T, recursively define χ_q,H as follows. Let χ_q,H be an O(1)-bit string fixed for all subgraphs H∈Λ_ℓ, signifying that H is a leaf node of T and code(H) can be obtained from X_base in O(1) time using the position of T_H in T. If k=|V(H)|>ℓ, then let (U_0,…,U_p) and (H_0,…,H_p) be the T-partition and the T-subgraphs of H. To use the framework, one just has to provide an O(k)-time obtainable string χ_q,H_0 with χ_q,H_0=o(k)+O(|U_0|·log k) such that the query q can be answered in O(1) time from the prefixed concatenation χ_q,H of χ_q,H_0,…, χ_q,H_p via the labels L_H(u) and other information of the vertices u given to the query algorithm. The query q can then be supported in O(1) time from X_base and the prefixed concatenation X_q of χ_label, χ_q,leaf, and χ_q,G. Since the overall number of vertices in all nodes of T is O(n), the strings χ_q,G and X_q can be computed in O(n) time. We prove the following equation for each subtree T_H of T by induction on the bounded height of T_H: χ_q,H=o(k)+O(1+|∂ H|·log k). If k≤ℓ, then χ_q,H=O(1) implies Equation (<ref>) (even if k=O(1) and |∂ H|=0). The basis holds. 
If k>ℓ, then Equation (<ref>) and Condition <ref> of (V_0,…,V_p) imply χ_q,H =o(k)+O(|U_0|·log k)+ ∑_i∈[p]o(|V(H_i)|)+O(1+|∂ H_i|·log |V(H_i)|) =o(k)+O(1+|∂ H|·log k) by the inductive hypothesis. Equation (<ref>) is proved. By |∂ G|=0, we have χ_q,G=o(n). §.§ Degree and other information of a vertex For the query q of obtaining the degree |N_G(u)| of a vertex u in G from L(u), let χ_q,H_0 be the O(k)-time obtainable O(|U_0|·log k)-bit string whose L_H(u)-th ⌈log k⌉-bit word χ_q,H_0(L_H(u)) stores the degree |N_H(u)| of the vertex u∈ U_0. As instructed by the framework, it suffices to show as follows that it takes O(1) time to obtain |N_G(u)| from the prefixed concatenation χ_q,H of χ_q,H_0,…,χ_q,H_p via L_H(u) for the case with k=|V(H)|>ℓ: Obtain the index i∈ [0,p] with u∈ U_i and L_H_i(u) from χ_label via Query <ref>. If i=0, then return |N_H(u)|=χ_q,H_0(L_H(u)) in O(1) time. If i∈[p], then return |N_H(u)|=|N_H_i(u)|, which can be obtained from χ_q,H_i and L_H_i(u) in O(1) time. Note that this query q does not need Query <ref>. Other queries q for a vertex u of G like (i) the equipped color of u in G, (ii) the number of incident outgoing edges of u in G, or (iii) the label L(v) of an arbitrary neighbor v of u in G can be supported in the same way as long as (1) the answer can be represented in O(log n) bits, (2) the answer to the query q for a vertex u∈ U_i with i∈[0,p] can be obtained from merely χ_q,H_i using L_H_i(u), and (3) the answers for the subgraphs H of G in Λ_ℓ can be stored in an o(n)-bit string χ_q,leaf with the help of X_base to provide code(H). §.§ Adjacency and other information between a pair of adjacent vertices We first claim an O(n)-time obtainable O(1)-orientation D for G that can be represented using only o(n) bits in addition to X_base. The query of determining the directions of the edges between two vertices in G via their labels can then be supported in O(1) time by the query q for obtaining the labels L(v) and the directions of the edges of G[{u,v}] for the O(1) outgoing incident edges u⃗v⃗ of u in D from L(u): To determine whether u⃗v⃗ is an edge of G, just run the query algorithm of q on u and v to report the O(1) outgoing edges of u and v in D and the directions of these O(1) edges in G. Since D is an O(1)-orientation for G, u⃗v⃗ is an edge of G if and only if it is detected by running the query algorithm for q on u and v in O(1) time. This query q is about O(log n)-bit representable information of a vertex, but the approach of the previous subsection does not directly work. Since the answer involves labels of the neighbors of the queried vertex, Query <ref> has to come into play: Let χ_q,H_0 be an O(|U_0|·log k)-bit string whose j-th O(log k)-bit word χ_q,H_0(j) with j∈ [|U_0|] stores the label L_H(v) and directions of the edges of H[{u,v}] for each of the O(1) u-out edges u⃗v⃗ in D for the vertex u∈ U_0 with L_H(u)=j. To obtain the answer for a vertex u∈ V(H) from χ_q,H via L_H(u) for the case with k=|V(H)|>ℓ, we also obtain the index i∈ [0,p] with u∈ U_i and L_H_i(u) from χ_label via Query <ref>. If i=0, then return the answer stored in χ_q,H_0(L_H(u)). If i∈[p], then we obtain the answer in H_i from χ_q,H_i via L_H_i(u). To return the answer of u in H, we need Query <ref> to obtain the label L_H(v) from χ_label via L_H_i(v) and i for each of the O(1) u-out edges u⃗v⃗ in D.
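The per-vertex queries above — the degree query of the previous subsection and the u-out-edge query just described — share the same recursive lookup pattern: resolve the label via the first labeling query, read a stored answer when the vertex lies in U_0, and otherwise recurse into the child subgraph H_i (translating labels back with the second labeling query when the answer itself contains labels). A hedged plain-Python sketch of this pattern for the degree query, with ordinary dictionaries standing in for the packed strings χ_q,H and for code(H) at the leaves, is:

```python
class Node:
    """Stand-in for a subtree T_H of the decomposition tree T."""

    def __init__(self, u0_answer, children, to_child):
        self.u0_answer = u0_answer   # chi_{q,H_0}: stored answer for each vertex of U_0, by label
        self.children = children     # the T-subgraphs H_1, ..., H_p (empty at a leaf of T)
        self.to_child = to_child     # labeling query: L_H(u) -> (i, L_{H_i}(u)); i = 0 means u in U_0

def degree(node, label):
    i, child_label = node.to_child[label]
    if i == 0:                       # u lies in U_0: the answer is stored locally
        return node.u0_answer[child_label]
    return degree(node.children[i - 1], child_label)   # otherwise recurse into H_i

# toy two-level example: the leaf plays the role of a graph in Lambda_ell,
# whose answers would really be read off from code(H) via X_base
leaf = Node({1: 2, 2: 1}, [], {1: (0, 1), 2: (0, 2)})
root = Node({1: 3}, [leaf], {1: (0, 1), 2: (1, 1), 3: (1, 2)})
assert degree(root, 1) == 3 and degree(root, 2) == 2 and degree(root, 3) == 1
```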
It remains to prove the claim by (i) presenting an O(k)-time obtainable O(1)-orientation D_H of the k-vertex graph H for each subtree T_H of T and (ii) showing that D=D_G can be represented in o(n) bits in addition to X_base to support the O(1)-time query d of reporting the O(1) labels L(v) for the u-out edges u⃗v⃗ in D from L(u). By ℓ=O(loglog n), it takes O(n) time to obtain an o(n)-bit string χ_d,leaf assigning an O(1)-orientation D_H for each distinct graph H∈Λ_ℓ such that the query q for each vertex u of H can be answered in O(1) time from χ_d,leaf via L_H(u) and code(H), which can be obtained in O(1) time from X_base using the position of T_H in T. If k>ℓ, then let (U_0,…,U_p) and (H_0,…,H_p) be the T-partition and the T-subgraphs of H. For each i∈[p], let D_i consist of the outgoing incident edges of U_i in D_H_i. Let W_0 consist of the vertices of U_0 and the vertices u that can be reached by exactly one outgoing incident edge of U_0 in D_H_1∪⋯∪ D_H_p. Let D_0 consist of the outgoing incident edges of W_0 in an arbitrary O(k)-time obtainable O(1)-orientation D'_H for H. The O(k)-time obtainable graph D_H=D_0∪ D_1∪⋯∪ D_p is an O(1)-orientation for H:[The following proof is a specialized version (i.e., for the case with t=1) of the proof of Lemma <ref> in <ref>.] The number of u-out edges in D_H of each vertex u∈ U_0 is that in D_0⊆ D'_H and hence is O(1). The number of u-out edges in D_H of each vertex u∈ U_i with i∈ [p] is exactly that in D_0∪ D_i⊆ D'_H∪ D_H_i and hence is O(1). Assume for contradiction E(D_H[{u,v}])=∅ for adjacent vertices u∈ U_i and v∈ U_j in H with {i,j}⊆ [0,p]. We have i≠ j or else u and v are adjacent in D_i=D_j. We have ij=0 or else u and v are non-adjacent in H. Let i=0 and j∈ [p] without loss of generality. We have u⃗v⃗∉ E(D'_H) or else u⃗v⃗∈ E(D_0)⊆ E(D_H) by u∈ U_0⊆ W_0. Since D'_H is an O(1)-orientation for H, we have v⃗u⃗∈ E(D'_H). We also have u⃗v⃗∉ E(D_H_j) or else v∈ W_0 implies v⃗u⃗∈ E(D_0)⊆ E(D_H), contradicting E(D_H[{u,v}])=∅. Since D_H_j is an O(1)-orientation for H_j, we have v⃗u⃗∈ E(D_H_j), implying v⃗u⃗∈ E(D_j)⊆ E(D_H), contradicting E(D_H[{u,v}])=∅. Thus, D_H is indeed an O(1)-orientation for H. To see that D=D_G can be represented in o(n) bits in addition to X_base to support the query d, let χ_q,H_0 be the o(k)+O(|U_0|·log k)-bit prefixed concatenation of the following strings χ_1 and χ_2. Observe first that each D_H_i with i∈[p] is an O(1)-orientation for H_i, implying |W_0|=|U_0|+∑_i∈[p]O(|N_H(U_i)|)=O(|U_0|)+o(k/log k) by Condition <ref> of the corresponding star partition (V_0,…,V_p) and Equation (<ref>). * Let χ_1=dict(Y) for the k-bit string Y such that Y[L(u)]=1 if and only if u∈ W_0. By Lemma <ref>, χ_1=o(k)+O(|W_0|·log k)=o(k)+O(|U_0|·log k). * Let χ_2 be a string whose j-th O(log k)-bit word χ_2(j) for each j∈ [|W_0|] stores L_H(v) and H[{u,v}] of each u-out edge u⃗v⃗ for the vertex u with rank(Y,L_H(u))=j in D_0. We have χ_2=O(|W_0|·log k)=o(k)+O(|U_0|·log k). The query d for a vertex u∈ V(H) can be supported in O(1) time from χ_d,H via L_H(u) as follows: Obtain the index i∈ [0,p] with u∈ U_i and L_H_i(u). If Y[L_H(u)]=1, then report the information stored in χ_2(rank(Y,L_H(u))) in O(1) time first. If i∈[p], then obtain the labels L_H_i(v) using the query d in H_i via L_H_i(u) in O(1) time and return the labels L_H(v) using Query <ref>. According to the framework of <ref>, the query d is supported in O(1) time using an o(n)-bit X_d and X_base.
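As a concrete illustration of why a bounded out-degree orientation yields O(1)-time adjacency tests, the sketch below constructs an orientation by repeatedly removing a minimum-degree vertex and orienting its remaining incident edges away from it — for graphs of bounded degeneracy, such as graphs with bounded Hadwiger number, this gives O(1) out-degree — and then answers adjacency by probing the O(1) out-neighbors of both endpoints. It is a plain-graph sketch that ignores the labeling and the succinct representation discussed above.

```python
from collections import defaultdict
import heapq

def orient(n, edges):
    """out[u] = out-neighbors of u; out-degree is bounded by the degeneracy of the graph."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    heap = [(len(adj[u]), u) for u in range(n)]
    heapq.heapify(heap)
    removed, out = set(), {u: [] for u in range(n)}
    while heap:
        d, u = heapq.heappop(heap)
        if u in removed or d != len(adj[u]):
            continue                          # stale heap entry
        removed.add(u)
        out[u] = list(adj[u])                 # orient the remaining incident edges away from u
        for v in adj[u]:
            adj[v].discard(u)
            heapq.heappush(heap, (len(adj[v]), v))
    return out

def adjacent(out, u, v):
    return v in out[u] or u in out[v]         # O(1) probes when out-degrees are O(1)

out = orient(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)])
assert adjacent(out, 0, 2) and not adjacent(out, 1, 3)
```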
Other queries q for a pair of adjacent vertices u and v of G can be supported in the same way as long as (1) the answer can be represented in O(log n) bits and (2) the answers for the subgraphs of G in Λ_ℓ can be stored in overall o(n) bits in addition to X_base. For example, our encoding scheme A^* supports in O(1) time the query q of reporting the neighbor of u succeeding (respectively, preceding) v in clockwise order around u according to the embedding of G reflected by the adjacency list of G decoded by A^*. Specifically, let χ_q,H_0 additionally store for each u-out edge u⃗v⃗ of D the O(log |V(H)|)-bit information of the O(1) triples (u,v,w) or (v,u,w) stored in code(H_0). Since the bit count of χ_q,H_0 remains o(k)+O(|U_0|·log k), the query q can be supported in O(1) time via X_base and an o(n)-bit string X_q. Combining with the O(1)-time query of reporting an arbitrary neighbor v of u, the query of listing all neighbors of a vertex u can be supported in O(1) time per output. §.§ Bounded-distance shortest path Let d_G(u,v) for vertices u and v of a graph G denote the length of a shortest uv-path of G. A uv-path P of G is D-directive for a graph D if P contains a vertex w with d_P∩ D(u,w)+d_P∩ D^r(w,v)=d_G(u,v). A D-directive uv-path of G is a shortest uv-path of G but not vice versa. A vertex pair (u,v) of G is D-directive if G contains a D-directive uv-path. For a positive integer t, a t-director for G is an O(1)-orientation D for G such that each vertex pair (u,v) of G with d_G(u,v)∈ [t] is D-directive. Thus, each pair (u,v) with d_G(u,v)∈ [t] admits an edge u⃗x⃗ of G∩ D with d_G(x,v)=d_G(u,v)-1 or an edge y⃗v⃗ of G∩ D^r with d_G(u,y)=d_G(u,v)-1. See Figure <ref>(a) for an illustration. An O(1)-orientation for G is a 1-director for G and vice versa. We prove the following lemma in <ref>. For any prespecified positive integer t=O(1), it takes O(n) time to obtain a t-director for an n-vertex graph G with η(G)=O(1). As mentioned in the technical overview of <ref>, Lemma <ref> and our -optimal encoding scheme A^* immediately lead to an O(n)-time obtainable O(n)-bit encoded string for any graph G with η(G)=O(1) equipped with a t-director D, supporting the query q of bounded-distance shortest path in O(1) time, already improving upon the O(nlog n)-bit data structure of Kowalik and Kurowski <cit.>: To obtain a shortest uv-path of G for vertices u and v with d_G(u,v)≤ t, just find a shortest uv-path in the O(1)-vertex subgraph of G induced by W={w∈ V(G): min{d_D(u,w),d_D(v,w)}≤ t}, which can be obtained by accessing the O(1)-time query d of obtaining the O(1) u-out edges u⃗v⃗ in D and their G[{u,v}] O(1) times, since t=O(1); a small illustrative sketch of this query routine is given below. An -succinct encoded string for G equipped with an arbitrary t-director for G need not be -succinct for G. To support the query q of bounded-distance shortest path in O(1) time without affecting the -succinctness of the encoded string for G, we present an O(n)-time obtainable t-director D=D_G for G which can be represented using an o(n)-bit string X_d in addition to X_base supporting the above query d in O(1) time. Specifically, we define for each subtree T_H of T a t-director D_H for H by the following recursive procedure: We apply Lemma <ref> to obtain a t-director D_H for each distinct subgraph H of G in Λ_ℓ.
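The following is a hedged plain-Python sketch of the query routine described above: gather the vertices within D-distance t of u or of v, and search for a shortest uv-path inside the subgraph of G induced by that set. That the restricted search returns a genuinely shortest path of G relies on the t-director property established in this section; the sketch only shows the mechanics.

```python
from collections import deque

def reach(out, s, t):
    """Vertices within distance at most t of s along the orientation D (out-neighbor map)."""
    seen, frontier = {s}, [s]
    for _ in range(t):
        nxt = []
        for x in frontier:
            for w in out[x]:
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        frontier = nxt
    return seen

def bounded_shortest_path(graph, out, u, v, t):
    """graph[x] = neighbors of x in G; returns a shortest u-v path of length <= t, or None."""
    window = reach(out, u, t) | reach(out, v, t)   # the set W from the text; O(1) size for t=O(1)
    parent, queue = {u: None}, deque([u])
    while queue:
        x = queue.popleft()
        if x == v:
            path = []
            while x is not None:
                path.append(x)
                x = parent[x]
            path.reverse()
            return path if len(path) - 1 <= t else None
        for w in graph[x]:
            if w in window and w not in parent:
                parent[w] = x
                queue.append(w)
    return None
```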
By ℓ=O(loglog n), it takes O(n) time to obtain an o(n)-bit string χ_d,leaf such that the answer to the query d for a vertex u∈ V(H) can be obtained in O(1) time from χ_d,leaf via L_H(u) and code(H), which is O(1)-time obtainable from X_base using the position of the subtree T_H in T. For a subtree T_H of T with |V(H)|>ℓ, let (U_0,…,U_p) and (H_0,…,H_p) be the T-partition and T-subgraphs of H. Let each D_i with i∈[p] consist of the u-out edges for all vertices u∈ U_i in the recursively defined t-director D_H_i for H_i. Let each W_i with i∈[p] consist of the vertices u∈ U_i with d_D_H_i(U_0,u)≤ t. That is, u∈ W_i if and only if there is an xu-path in D_H_i of length at most t for a vertex x∈ U_0. Let D_0 consist of the u-out edges for all vertices u in W_0=U_0∪ W_1∪⋯∪ W_p in an arbitrary O(k)-time obtainable t-director D'_H for H. We prove the following lemma in <ref>. D_H=D_0∪ D_1∪⋯∪ D_p is a t-director for H. Since the overall number of vertices in all nodes of T is O(n), the t-director D=D_G for G can be obtained in O(n) time. Since t=O(1) and each D_H_i with i∈[p] is an O(1)-orientation for H_i, we have |W_0| =|U_0|+∑_i∈[p]O(|N_H(U_i)|)=O(|U_0|)+o(k/log k) by Condition <ref> of the corresponding star partition (V_0,…,V_p) of H and Equation (<ref>). Thus, the query d can be supported by an o(n)-bit string X_d and X_base in O(1) time in precisely the same way as the query d for the O(1)-orientation D for G in <ref>. The rest of the subsection proves Lemmas <ref> and <ref> in <ref> and <ref>, respectively. §.§.§ Proving Lemma <ref> If η(G)=O(1), then min{|N_G(v)|:v∈ V(G)}=O(1). It takes O(n) time to expand a t-director for an n-vertex graph G with t=O(1) and η(G)=O(1) into a t+1-director for G. We first reduce Lemma <ref> to Lemma <ref> via Lemma <ref> and then prove Lemma <ref>. By t=O(1) and Lemma <ref>, it suffices to show that a 1-director D for the n-vertex input graph G with η(G)=O(1) can be obtained in O(n) time: For each i∈ [n], let u_i be a vertex of G_i=G-{u_j:j∈ [i-1]} with minimum |N_G(u_i)|, which is O(1) by Lemma <ref>. It takes O(n) time to obtain the O(1)-orientation D={u⃗_⃗i⃗v⃗_⃗i⃗: v_i∈ N_G_i(u_i), i∈[n]} for G. Let D be a t-director for G. A uv-path P of G with |E(P)|≥ 1 is D-pivoted if d_P∩ D^r(u,x)+d_P∩ D(x,v)=d_G(u,v)≥ 1 holds for the neighbor x of u in P. See Figure <ref>(b) for an illustration. A D-pivoted uv-path of G is a shortest uv-path of G but not vice versa. A vertex pair (u,v) of G is D-pivoted if G contains a D-pivoted uv-path. A graph E_j is a j-enhancer for D with j∈ [t+1] if the next Conditions E hold: * D_j=D∪ E_j remains an O(1)-orientation for G and hence a t-director for G. * Each D-pivoted vertex pair (u,v) of G with d_G(u,v)∈ [j] such that G∩ D_j does not contain any edge u⃗x⃗ with d_G(x,v)=j-1 admits a D-pivoted uv-path P of G with P-u⊆ D_j∩ D_j^r. See Figure <ref>(c) for an illustration. Claim 1: It takes O(n) time to expand a j-enhancer for D with j∈[t] into a j+1-enhancer for D. By Claim 1 and t=O(1), it suffices to ensure that D_t+1=D∪ E_t+1 for a t+1-enhancer for D is a t+1-director for G, since the empty graph is a 1-enhancer for D. Assume for contradiction that a vertex pair (u,v) of G with d_G(u,v)∈ [t+1] is not D_t+1-directive. By Condition <ref> for E_t+1, we have d_G(u,v)=t+1, implying an edge u⃗x⃗ of G with d_G(x,v)=t. D does not contain u⃗x⃗ or else the union of u⃗x⃗ and a D-directive xv-path of G is a D_t+1-directive uv-path of G. Since D is an orientation for G that does not contain u⃗x⃗, D^r contains u⃗x⃗. 
Let Q be a D-directive xv-path of G. D contains Q or else the union of a D-directive uy-path for the neighbor y of v in Q and the edge y⃗v⃗ is a D_t+1-directive uv-path of G. Hence, u⃗x⃗∪ Q is a D-pivoted uv-path of G, implying that (u,v) is a D-pivoted vertex pair of G with d_G(u,v)=t+1. G∩ D_t+1 does not contain any edge u⃗x⃗ with d_G(x,v)=t or else the union of u⃗x⃗ and a D-directive xv-path of G is a D_t+1-directive uv-path of G. By Condition <ref> for E_t+1, G contains a D-pivoted uv-path P with P-u⊆ D_t+1∩ D_t+1^r, implying that P is a D_t+1-directive uv-path of G, contradiction. The rest of the proof ensure Claim 1 by showing a graph E obtainable in O(n) time from a j-enhancer E_j for D such that E_j+1=E_j∪ E is a j+1-enhancer for D. We rely on the following graph H and an O(1)-orientation D_H for H: Let H be the graph on V(G) consisting of the edges u⃗v⃗ such that (u,v) are vertex pairs of G with d_G(u,v)=j+1 that admit D-pivoted uv-paths P of G with P-{u,v}⊆ D_j∩ D_j^r. We show that H can be obtained in O(n) time. Since D is a t-director for G with j≤ t=O(1), it takes O(1) time to determine for each vertex pair (u,v) of G whether d_G(u,v)≥ j+1. Since D is an O(1)-orientation, each vertex x of G can be the second vertex of O(1) length-j+1 paths of G that are D_j-pivoted. Thus, it takes O(n) time to obtain the set of all D-pivoted uv-paths P of G satisfying d_G(u,v)=j+1 and P-{u,v}⊆ D_j∩ D_j^r. We have ||=O(n). Since H consists of the edges u⃗v⃗ admitting uv-paths in , it takes O(n) time to construct H. Claim 2: It takes O(n) time to obtain an O(1)-orientation D_H for H. For each edge u⃗v⃗ of H, let P(u,v) be an arbitrary fixed uv-path in and let x(u,v) be the neighbor of u in P(u,v). Define E as the O(n)-time obtainable subgraph of G∪ G^r consisting of * the incident edge of u in P(u,v) for each edge u⃗v⃗ of H∩ D_H and * the incident edge of v in (P(u,v))^r for each edge u⃗v⃗ of H∩ D_H^r. Since D_H is an O(1)-orientation, so is E, implying that D_j+1=D_j∪ E remains an O(1)-orientation for G. Condition <ref> for E_j+1 holds. To see Condition <ref> for E_j+1, let (u,v) be a D-pivoted vertex pair of G with d_G(u,v)∈ [j+1] such that G∩ D_j+1 does not contain any edge u⃗x⃗ with d_G(x,v)=j. By Condition <ref> for E_j and E_j⊆ E_j+1, it suffices to focus on the case with d_G(u,v)=j+1. Let Q be a D-pivoted uv-path of G. See Figure <ref>. Let y⃗v⃗ be the incident edge of v in Q. Since Q-v is a D-pivoted uy-path of G, (u,y) is a D-pivoted vertex pair of G with d_G(u,y)=j. G∩ D_j does not contain any edge u⃗x⃗ with d_G(x,y)=j-1 or else u⃗x⃗ is an edge of G∩ D_j+1 with d_G(x,v)=j. By Condition <ref> for E_j, there is a D-pivoted uy-path R of G such that D_j∩ D_j^r contains R-u or R-y. Observe that D_j∩ D_j^r does not contain R-y or else the incident edge u⃗x⃗ of u in R is an edge of G∩ D_j with d_G(x,y)=j-1. Thus, D_j∩ D_j^r contains R-u, implying that the union P of the D-pivoted uy-path R and the edge y⃗v⃗ of G∩ D is a D-pivoted uv-path of G with P-{u,v}⊆ D_j∩ D_j^r. Hence, u⃗v⃗ is an edge of H. By d_G(x(u,v),v)=j, the incident edge of u in P(u,v) is not in G∩ D_j+1. Thus, E contains the incident edge of v in (P(u,v))^r, implying that P(u,v) is a D-pivoted uv-path of G with P(u,v)-u⊆ D_j+1∩ D_j+1^r. Condition <ref> for E_j+1 holds. It remains to prove Claim 2. We need an O(n)-time obtainable O(1)-coloring of the graph F=D_j^2j-1 to construct D_H. That is, F consists of the edges u⃗v⃗ with d_D_j(u,v)∈ [2j-1]. Since D_j is an O(1)-orientation, so is F by j≤ t=O(1). 
Let each v_i with i∈[n] be an arbitrary vertex of F_i=F-{v_1,…,v_i-1} that minimizes |N_F_i(v_i)|. Since each F_i with i∈[n] is an O(1)-orientation, we have |N_F_i(v_i)|=O(1). Thus, F admits an O(n)-time obtainable O(1)-coloring via assigning for each v_i with i∈[n] a color distinct from those of vertices in N_F_i(v_i). Based on the O(1)-coloring of F, it takes O(n) time to decompose H into O(1) edge-disjoint subgraphs H_k of H whose union is H such that the vertices x(u,v) for distinct edges u⃗v⃗ in the same subgraph H_k are distinct vertices having the same color in F. If u⃗v⃗ and y⃗z⃗ are distinct edges of the same subgraph H_k of H, then P(u,v) does not intersect V(P(y,z))∖{y,z} or else d_D_j(x(u,v),x(y,z))∈[2j-1] contradicts that x(u,v) and x(y,z) have the same color in F. Hence, each H_k can be obtained from the union of the paths P(u,v) of G over all edges u⃗v⃗∈ E(H_k) by contracting each P(u,v) into u⃗v⃗, implying that H_k is a minor of G. By η(H_k)≤η(G)=O(1), it takes O(n) time to obtain an O(1)-orientation D_H_k for H_k. The union D_H of the O(1) graphs D_H_k is an O(n)-time obtainable O(1)-orientation for H. §.§.§ Proving Lemma <ref> Observe first that D_H is an O(1)-orientation, since (U_0,…,U_p) is a partition of H and the u-out edges in D_H=D_0∪⋯∪ D_p for each vertex u∈ U_i with i∈[0,p] are u-out edges in the two O(1)-orientations D'_H and D_H_i. We show D_H∪ D_H^r=H∪ H^r to further ensure that D_H is an O(1)-orientation for H. Since each edge of D_H belongs to an O(1)-orientation for a subgraph of H, we have D_H⊆ H∪ H^r. Each edge of H[U_i] with i∈ [0,p] is in D_i∪ D_i^r⊆ D_H∪ D_H^r. An edge of H not in any H[U_i] with i∈[0,p] has to be an edge u⃗v⃗ or v⃗u⃗ of H_i with i∈ [p] between a vertex u∈ U_i and a vertex v∈ U_0⊆ W_0. If u∈ W_0, then D'_H[{u,v}]⊆ D_0⊆ D_H. If u∉ W_0, then D_H_i does not contain the edge v⃗u⃗. Since D_H_i is an orientation for H_i, we have u⃗v⃗∈ E(D_H_i), implying that u⃗v⃗ is an edge of D_i⊆ D_H. Hence, H⊆ D_H∪ D_H^r. Therefore, D_H is an O(1)-orientation for H. It remains shows that each vertex pair (u,v) of H with d_H(u,v)∈ [t] is D_H-directive. Assume for contradiction that (u,v) is a non-D_H-directive pair of vertices u∈ U_i and v∈ U_j with i∈ [0,p] that minimizes d_H(u,v)∈ [t]. The following Conditions D follow from the minimality of d_H(u,v): * H∩ D_H contains no edge u⃗w⃗ with d_H(w,v)=d_H(u,v)-1 or else the union of u⃗w⃗ and a D_H-directive wv-path of H is a D_H-directive uv-path of H. * H∩ D_H^r contains no edge w⃗v⃗ with d_H(u,w)=d_H(u,v)-1 or else the union of a D_H-directive uw-path of H and w⃗v⃗ is a D_H-directive uv-path of H. Let P be a D'_H-directive uv-path of H, implying that D'_H contains the incident edge u⃗x⃗ of u in P or the incident edge v⃗y⃗ of v in P^r. If i=j=0, then D_0⊆ D_H contains u⃗x⃗ or v⃗y⃗, violating Condition <ref> with w=x or Condition <ref> with w=y. Thus, i∈ [p] or j∈ [p]. However, we show as follows that (a) i∈[p] implies j∈[p] and u⃗x⃗∉ E(D'_H) and (b) j∈ [p] implies i∈[p] and v⃗y⃗∉ E(D'_H). This leads to E(D'_H)∩{u⃗x⃗,y⃗v⃗}=∅, contradiction. Statement (a): i∈ [p] implies j∈[p] and u⃗x⃗∉ E(D'_H). Let the uz-path Q be the maximal prefix of P with Q⊆ H_i. Let R be a D_H_i-directive uz-path of H_i. See Figure <ref>(a). We have R^r⊆ D_H_i or else D_i⊆ D_H contains the incident edge u⃗w⃗ of u in R, violating Condition <ref>. We have z∈ U_0: If z were not in U_0, then we have v=z∈ U_i by the maximality of Q, implying P=Q⊆ H_i and |E(R)|=d_H_i(u,v)=d_H(u,v)∈ [t]. 
By R^r⊆ D_H_i, the incident edge v⃗w⃗ of v in R^r is in D_i⊆ D_H, violating Condition <ref>. Thus, z∈ U_0. By R^r⊆ D_H_i, we have u∈ W_0, implying u⃗x⃗∉ E(D'_H) or else u⃗x⃗∈ E(D_0)⊆ E(D_H) violates Condition <ref> with w=x. By u⃗x⃗∉ E(D'_H), we have P^r⊆ D'_H. Thus, j≠ 0 or else v⃗y⃗∈ E(D'_H) implies v⃗y⃗∈ E(D_0)⊆ E(D_H), violating Condition <ref> with w=y. Statement (b): j∈ [p] implies i∈[p] and v⃗y⃗∉ E(D'_H). Let the zv-path Q be the maximal suffix of P with Q⊆ H_j. Let R be a D_H_j-directive zv-path of H_j. See Figure <ref>(b). We have R⊆ D_H_j or else D_j⊆ D_H contains the incident edge v⃗w⃗ of v in R^r, violating Condition <ref>. We have z∈ U_0: If z were not in U_0, then we have u=z∈ U_j by the maximality of Q, implying P=Q⊆ H_j and |E(R)|=d_H_j(u,v)=d_H(u,v)∈[t]. By R⊆ D_H_j, the incident edge u⃗w⃗ of u in R is in D_j⊆ D_H, violating Condition <ref>. Thus, z∈ U_0. By R⊆ D_H_j, we have v∈ W_0, implying v⃗y⃗∉ E(D'_H) or else v⃗y⃗∈ E(D_0)⊆ E(D_H) violates Condition <ref> with w=y. By v⃗y⃗∉ E(D'_H), we have P⊆ D'_H. Thus, i≠ 0 or else u⃗x⃗∈ E(D'_H) implies u⃗x⃗∈ E(D_0)⊆ E(D_H) violates Condition <ref> with w=x. § CONCLUDING REMARKS We propose to base an encoding scheme on multiple classes of graphs to exploit the inherent structures of individual graphs. As the first nontrivial such example, we present an -optimal encoding scheme A^* for a family of an infinite number of classes. Specifically, A^* takes an n-vertex k-clique-minor-free graph G with k=O(1) and produces a -succinct encoded string X in deterministic O(n) time for each nontrivial quasi-monotone class of graphs that contains G. A^* can decode X back to G in deterministic O(n) time. This means that A^* automatically exploits an infinite number of possible nontrivial quasi-monotone structures of G to encode G as compactly as possible. A^* does not require any embedding of G or any recognition algorithm or other explicit or implicit knowledge about the member classes of . A^* also supports fundamental queries in O(1) time per output. Moreover, A^* accepts additional information like an O(1)-coloring or a genus-O(1) embedding of G that can be decoded back together with G by A^* and queried via the query algorithms of A^*. It is of interest to see if our -optimal encoding scheme A^* can be extended so that (i) the ground ⋃ can be a proper superclass of the graphs having bounded Hadwiger numbers, (ii) the member classes of can be refined beyond quasi-monotonicity, or (iii) efficient updates can be supported for the input graph G and its equipped information.
http://arxiv.org/abs/2307.02394v1
20230705160921
Won't Get Fooled Again: Answering Questions with False Premises
[ "Shengding Hu", "Yifan Luo", "Huadong Wang", "Xingyi Cheng", "Zhiyuan Liu", "Maosong Sun" ]
cs.CL
[ "cs.CL" ]
Pre-trained language models (PLMs) have shown unprecedented potential in various fields, especially as the backbones for question-answering (QA) systems. However, they tend to be easily deceived by tricky questions such as “How many eyes does the sun have?”. Such frailties of PLMs are often attributed to a lack of knowledge within them. In this paper, we find that the PLMs already possess the knowledge required to rebut such questions, and the key is how to activate the knowledge. To systematize this observation, we investigate the PLMs' responses to one kind of tricky question, i.e., false premise questions (FPQs). We annotate a dataset containing 2365 human-written FPQs, with the corresponding explanations for the false premises and the revised true premise questions. Using FalseQA, we discover that PLMs are capable of discriminating FPQs after fine-tuning on a moderate number (e.g., 256) of examples. PLMs also generate reasonable explanations for the false premises, which serve as rebuttals. Further replaying a few general questions during training allows PLMs to excel on FPQs and general questions simultaneously. Our work suggests that once the rebuttal ability is stimulated, knowledge inside the PLMs can be effectively utilized to handle FPQs, which incentivizes the research on PLM-based QA systems. The dataset and code are available at <https://github.com/thunlp/FalseQA>. § INTRODUCTION Recent advances in pre-trained language models (PLMs) <cit.> have achieved significant performance gains for various types of tasks, even surpassing human levels on language ability benchmarks <cit.>. The unprecedented ability of PLMs lays the foundation for various practical applications. For example, PLMs that exhibit general world knowledge and commonsense knowledge have the potential to serve as backbones for general-purpose question-answering models <cit.>. However, these PLM-based question-answering models present an intriguing paradox. On the one hand, they achieve high performance on normal questions raised by humans. For example, UnifiedQA <cit.> achieves state-of-the-art performance on many question-answering tasks. Macaw <cit.> can perform multi-angle question answering and answers 75% of the questions in the Challenge300 dataset <cit.> correctly. On the other hand, they are vulnerable to tricky questions (see Table <ref>). For example, Macaw answers one out of nine tricky questions correctly, while other models including GPT-3 <cit.> fail all of them <cit.>. InstructGPT <cit.> also reports that it fails to identify instructions with false premises. These questions are easy for humans to rebut but pose an undeniable obstacle for PLMs[Although most PLMs fail, we found that ChatGPT <cit.> answers these questions satisfactorily. Its training data is manually written by annotators and continuously updated using user queries, which might contain such questions. However, this data is not public. Our work provides the same possibility for general PLMs, even much smaller ones.]. The inability to rebut also results in the misalignment <cit.> of language models with human expectations. Without careful investigation, this paradox could easily lead to the conclusion that PLMs lack the world or commonsense knowledge to rebut these questions.
Although it is crucial for PLMs to embed as much general knowledge as possible, our pilot experiment shows that the PLMs already possess the knowledge required for the tricky questions that they fail (see Section <ref>). As a consequence, we hypothesize that the knowledge in current PLMs is enough for handling a large portion of tricky questions. However, this knowledge is not activated. To support our hypothesis, we take a close look at these tricky questions. Most of these tricky questions contain false premises. For example, in the question “How many eyes does the sun have?”, the questioner must presume that “the sun can have eyes” in order to make the query about the quantity meaningful. These questions are called False Premise Questions (FPQs). Such false premises always violate human knowledge or logic and rarely appear in natural text, thus leading to an out-of-distribution generalization gap for the PLMs. To fill the gap between natural text and FPQs, we present the first specialized dataset of FPQs, dubbed the FalseQA dataset. Specifically, we first systematically categorize the false premises to ensure the coverage of the dataset. Then we ask human annotators to manually compose the FPQs, as well as explanations for the false premises. The annotators are also asked to edit the false premise questions into true premise questions (TPQs) using minimal modification, so that the PLMs are less prone to learning shortcuts from the format of FPQs. Based on the FalseQA dataset, we first conduct systematic experiments on the PLMs' discrimination and rebuttal ability of FPQs. We reach three essential conclusions: (1) PLMs of different types and scales can distinguish FPQs from TPQs, and the scaling effect <cit.> also holds for FalseQA. (2) PLMs can give reasonable explanations for the false premises, which can serve as rebuttals. (3) The number of FPQ examples needed to activate the PLM's rebuttal ability is moderate. For example, 256 FPQs can result in more than 70% accuracy for models larger than 1B. And for some larger PLMs, in-context learning with a few examples can also activate the ability. Then we consider the practical scenario where the models need to handle both FPQs and general questions. We demonstrate that a simple but effective data replay method can help mitigate the catastrophic forgetting of general questions, where the model discriminates 86.7% of the FPQs in FalseQA and rebuts only 1.4% of general questions. These results lead to optimism that PLMs can be used as the backbones of a practical question-answering system that is robust to tricky questions. § RELATED WORK Three groups of research are related to our work: direct question answering datasets, question unanswerability, and question premise verification. Direct Question Answering Dataset. In the most practical scenario for a question-answering system, candidate answers are absent. Therefore, direct question answering (DQA), as a counterpart to extractive QA <cit.> or multiple-choice QA <cit.>, has received increased attention. Natural Questions <cit.> collects the queries sent to the Google search engine. ARC-DA <cit.> proposes modifying a reasoning-based multiple-choice QA dataset into DQA format.  <cit.> manually compose the Challenge300 dataset, which is still challenging to powerful models such as GPT-3 and Macaw. Our dataset can be seen as a direct question-answering dataset with explanations.
However, the question distribution is radically different from the questions in natural corpora, serving as an adversarial scenario for DQA models. Question Unanswerability. Tricky questions are unanswerable questions. Previous works <cit.> confirm the existence of unanswerable questions in existing benchmarks, including SQuAD <cit.>, Natural Questions <cit.>, VQA <cit.>, etc. Most unanswerable questions in these benchmarks are due to missing information in the context provided to the questions. However, FalseQA contains questions that are out of the natural text distribution and are unanswerable due to misleading false premises. Question Premise Verification. Answering FPQs has been studied since before the deep learning era <cit.>. In recent PLM-based question-answering research, relevant efforts use external knowledge to verify the correctness of the question premise. For example,  <cit.> studies the FPQs in Natural Questions <cit.>. A concurrent work <cit.> further gathers 8400 Reddit questions and annotates the false premises found in 25% of them. The correctness of the premises in their datasets requires expert knowledge or context to determine. Therefore, they use retrieval-augmented language models <cit.> or external knowledge bases to provide information for the premise classification, and both reach the conclusion that discovering and explaining those presuppositions that require expert knowledge is challenging. However, it remains elusive whether PLMs without external assistance can discover and rebut the tricky questions that require only general knowledge and are straightforward for humans. We propose the first manually written dataset for FPQs and support, through experiments, our hypothesis that the inability of PLMs on FPQs can be mitigated when they are given examples. § PRELIMINARIES In this section, we introduce the definition of FPQs and the pilot experiment on PLMs' handling of FPQs. §.§ False Premise Questions When questioning, humans usually assume that some facts are shared and endorsed by the questioner and the answerer. Such facts are the premises of the question. For example, in the question “How many eyes does the sun have?”, the target of the question is the number of eyes, which assumes the correctness of the fact “The sun has eyes”. In general, a fact can be expressed by relational triples, where each relational triple takes the form of (subject, relation, object). A question asks for the missing part of one relational triple. For example, the above question asks for a missing quantity in a triple that is itself built on the triple stating that the sun has eyes. We define the complete relational triple as the support triple. Then a false premise question is one whose support triples are not correct. In the above example, the support triple is false under real-world knowledge, thus any question that builds on this triple contains false premises. By this definition, “Does the sun have eyes?” is not an FPQ, since it does not assume the support triple to be true. In fact, PLMs know the authenticity of such triples well. However, they can't answer FPQs built upon these triples. §.§ PLMs' original responses to FPQs We begin with a pilot experiment that confirms current PLMs' responses to FPQs are not satisfactory despite their knowledge. We query the PLMs with questions taken from the FalseQA test split (see Section <ref>). We use the large PLMs whose APIs are publicly available, including Bloom <cit.>, OPT <cit.>, Jurassic-1 <cit.>, and GPT-3 (text-davinci-003) <cit.> (also known as InstructGPT). We use the prompt “Question: ___ Answer:”, where the blank is filled by the question text.
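For concreteness, a minimal sketch of how such a pilot query might be issued to an open-source causal LM with Hugging Face transformers is given below. The checkpoint name and decoding settings are illustrative assumptions, not the exact configuration used for the models above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-1.3b"                 # illustrative; any available causal LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

def ask(question: str) -> str:
    prompt = f"Question: {question} Answer:"     # the prompt template from the text
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

print(ask("How many eyes does the sun have?"))   # the FPQ itself
print(ask("Does the sun have eyes?"))            # the direct check on the premise
```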
We provide the generated answers of these models in Table <ref>. We also provide our model's answer (See Section <ref>) as comparisons. As we can see, all models fail on these simple FPQs. However, in the column “Ablation”, we are surprised to find that all models give the correct responses to the questions that ask directly about the correctness of the premises. This motivates us to hypothesize that the inability of current PLMs to handle FPQs is due to distribution mismatch, instead of missing knowledge. Therefore, we need a dataset specializing in FPQs. § DATASET To build a dataset on FPQs, there are potentially two approaches. An approach is to collect them from natural corpora. However, false premise questions rarely appear in natural corpora, which makes the question collection process laborious. Second, even if we collect false premise questions, the false premises are made by humans and thus are hard to be detected by humans, which doesn't fit with the motivation of this paper. In fact,  <cit.> have done pioneering work using this approach. On the contrary, our approach is to manually write such false premise questions. To ensure the quality of our dataset, we expect dataset to have the following key features: broad coverage, high quality, few shortcuts, and detailed explanations for the false premises. Below we introduce the annotation steps that ensure these features. §.§ Categorization of FPQs. People ask questions in a wide variety of contexts and formats. Increasing the coverage of questions is proven to be beneficial <cit.>. However, asking annotators to write FPQs freely does not guarantee the coverage of the questions. Therefore, the authors manually think up 29 initial FPQs (see Appendix <ref>). Then we categorize these FPQs in terms of error types, and question format. We summarize the categories in Table <ref>. In total, there are eight error types covering commonsense errors, logical errors, etc., and six question formats covering factual questions, descriptive questions, etc. Although we try to collect as many examples as possible into the initial set, the categorizations are far from exhaustive. Therefore we include an “Others” option to encourage creativity. Writing FPQs. We recruit twenty human annotators to think up questions that contain false premises. To make the creative process easier, we provide source words to the annotators to compose sentences. We use the subject word of GenericsKB <cit.> as the source word since they have broad coverage and each word is paired with a short illustrative sentence that can also inspire the annotators. However, we don't require the annotated sentence to contain the source word. Moreover, the annotators have the freedom to skip the source words that are not easy to brainstorm. We then ask the annotators to categorize the questions into the above categories. The annotators are required to keep a balanced distribution (see Appendix <ref>) over categories when they finish their part. For the quality of the written FPQs, we require them to be correct in syntax and contain obvious false premises. Revising into TPQs. Previous studies <cit.> point out that PLMs are skilled at finding shortcuts in datasets and do not really understand the task. Since the FPQs are created manually, it's easy to fall into the fixed writing style of the annotators. To alleviate the problem, we annotate a comparison set for these FPQs. 
Specifically, we ask annotators to edit each FPQ with minimal modifications to make it a question with true premises (TPQ). The resulting pairs of questions differ only in the correctness of the premises, ensuring that the model learns the essentials of the task. Writing Detailed Explanations/Answers. Humans usually reply to FPQs with an explanation of why the premise is false <cit.>. Generating the explanation also helps check whether the model truly understands the FPQs. Therefore, we ask the annotators to write an explanation for each FPQ. For quality control of the explanations, we require the explanation to be more than the negation of the false premise. For the training set and validation set, we require one explanation per question; for the test set, we require two explanations per question. For symmetry, the annotators also write answers to the TPQs. The full annotation process is demonstrated in Figure <ref>. §.§ Dataset Statistics The final dataset, dubbed FalseQA, contains 2365 question pairs. A snapshot of the FPQ dataset is in Table <ref>. We randomly split the dataset into train, validation, and test splits with a ratio of 5:2:3. The summary of statistics is shown in Table <ref>. § EXPERIMENTS Our experiments are divided into two main parts. To begin with, we conduct extensive experiments to demonstrate that PLMs have the ability to discriminate and rebut FPQs with moderate training data. Next, we propose a practical method to handle both FPQs and general questions well. §.§ Models and Settings PLMs are usually divided into three main architectures, namely, encoder-only, decoder-only, and encoder-decoder language models. Since encoder-only language models cannot be used as QA models, we select typical PLMs from the latter two for our experiments. For decoder-only models, we choose OPT <cit.>, which is a series of open-source pre-trained models aligned to OpenAI GPT-3 <cit.>. For the encoder-decoder models, we use T5 <cit.> and Macaw <cit.>. T5 <cit.> models are trained on a massive unsupervised pre-training corpus and a mixture of supervised tasks, making them very capable of solving various downstream tasks. Macaw is fine-tuned from T5 models on QA tasks. The Macaw models achieve state-of-the-art performance on the direct QA dataset ARC-DA <cit.> and perform satisfactorily on most categories of the demanding Challenge300 dataset <cit.> except for the FPQs. Unless otherwise specified, all experiments are repeated three times with different random seeds. For each result, we report the mean and standard deviation. The detailed hyperparameters for each experiment are in Appendix <ref>. §.§ Discriminating FPQs We first train the PLMs to classify the questions in FalseQA into FPQs and TPQs. To mitigate the gap between pre-training and fine-tuning, we adopt the prompt learning paradigm <cit.> to do the classification. We report the accuracy of the classification. Besides, we report the recall and precision for FPQs since we emphasize the FPQs. From Table <ref>, we can see that all the models achieve non-trivial performance on the binary classification. (1) The most powerful model, Macaw-11B, achieves an accuracy of 86.6. (2) Across all the models of the same type, performance improves as the model size increases. We hypothesize that the scaling effect arises because larger models both contain more knowledge and are more easily activated to understand the task.
(3) There is a slight improvement from T5 to Macaw, showing that the ability to identify FPQs can be enhanced by fine-tuning on a corpus of normal questions. §.§ Impact of Training Data Size Then we study the PLMs' performance to discriminate FPQs with fewer training data. We randomly sample 32, 128, 256, and 512 pairs of FPQ and TPQs as the training data and plot the performance under each data scale in Figure <ref>. We can see that the accuracy of classifying FPQs and TPQs grows almost linearly as the number of pairs grows exponentially. With only 256 pairs of questions, models larger than 2.7B, i.e., OPT-2.7B, Macaw-3B, Macaw-11B, all achieve more than 70% accuracy, while the smaller models need more data to achieve non-trivial performance. The trade-off between model scale and data scale hints that larger models might be activated with even fewer training data. However, as we have noticed, the gap between human performance and model performance remains large, as an average person can almost completely classify such problems. The above results already allow us to design a primitive QA pipeline that can handle FPQs. For example, if the model predicts that a question is FPQ, then it refuses to answer such questions, while for other questions it generates the answer. §.§ Answering FPQs with Explanations Next, we train the PLMs to discriminate and generate explanations for the FPQs at the same time. Since we need to start from models that already have zero-shot QA ability, we choose only Macaw for the encoder-decoder models. For the decoder-only model, we follow similar approaches to <cit.> to train OPT models with a fraction of UnifiedQA dataset <cit.> in order to steer the model into QA mode [We will release the checkpoint.] without injecting much additional knowledge. We select the model size that can achieve non-trivial performance using 256 pairs of data for this experiment. To discriminate and generate explanations jointly, we let the models generate the discriminating tokens: “tricky question” or “true question” first. Then the model continues to generate the explanation to FPQs or the answer to TPQs. Since the numbers of tokens responsible for discrimination and generation differ dramatically, we add an additional binary loss on the discriminating tokens. The ratio between the binary loss and the generation loss is 1. We conduct experiments on three training data sizes, i.e, 32, 256, and 1187 question pairs. In evaluation, if a generated answer contains “tricky question”, we consider the question classified as an FPQ, otherwise, it is classified as a TPQ. Similar to the previous section, we report the recall, precision of predicting FPQs, and accuracy of the binary classification. In addition, we evaluate the quality of the generated explanation by computing the maximum Rouge-L <cit.> score between it and the two ground-truth explanations. Note since we focus on the explanation of FPQs, the evaluation does not include the TPQs. From Table <ref>, we have three observations. (1) The models jointly predict the question and generate answers successfully. (2) When training data is limited, e.g., 32 question pairs, the accuracy is significantly higher than conducting classification alone (See in Figure <ref>), which shows that the explanations of the FPQs help the model to quickly adapt to the task. (3) Adding binary loss boosts the model's performance on classification. For the generated explanations, the best Rouge-L achieves 42.0, showing that the explanations are close to humans'. 
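The following is a hedged sketch of this evaluation protocol — counting an output as an FPQ prediction when it contains “tricky question”, and scoring the explanation by the maximum Rouge-L against the two references — using the rouge_score package; the exact evaluation script may differ from this illustration.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

def classify(generated: str) -> str:
    return "FPQ" if "tricky question" in generated.lower() else "TPQ"

def explanation_rouge(generated: str, references) -> float:
    # strip the discriminating tokens before scoring, then take the max over the two references
    cleaned = generated.replace("tricky question", "").replace("true question", "").strip(" .:")
    return max(scorer.score(ref, cleaned)["rougeL"].fmeasure for ref in references)

pred = "tricky question. The sun is a star and does not have eyes."
refs = ["The sun is a star, and stars do not have eyes.",
        "Only animals have eyes; the sun is not an animal."]
print(classify(pred), round(explanation_rouge(pred, refs), 3))
```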
The quality of explanations also gets higher as the model size and data size increase. We provide the model-generated explanations for 10 randomly sampled FPQs in Appendix <ref>. We can see that the explanations are reasonable. §.§ In-context Learning We proceed to study the performance of larger models, e.g., GPT-3 (175B), on FalseQA. The large PLMs are adapted by in-context learning with frozen model parameters. We select OPT-66B <cit.>, Jurassic-1 <cit.>, and GPT-3(001) and GPT-3(002) [The text-davinci-001 and text-davinci-002 checkpoints.]. We present the results in Table <ref>. We can see that OPT-66B and Jurassic-1 perform poorly. Therefore, we conclude that due to the distribution mismatch of FPQs to normal questions, it is still hard to activate the rebuttal ability of these models using a few examples, which we leave to future work. GPT-3 can be activated with 2 or 4 pairs of examples; however, its performance is lower than that of the much smaller fine-tuned models in Section <ref>. Surprisingly, GPT-3(002) has far better performance than GPT-3(001). We hypothesize that it more easily understands the rebuttal task since it is trained with instruction tuning <cit.>. §.§ Performance w.r.t. Category To better understand which kinds of FPQs are harder to discriminate, we plot the accuracy of each category in Figure <ref>. In spite of some inconsistency between PLMs, index errors are generally hard to classify while logic and causality errors are easy. For question formats, selective questions are hard to classify while factual questions are easy. These observations can guide the future improvement of our dataset. §.§ Answering FPQs and General Questions QA models are originally used to answer general questions, e.g., questions in the ARC-DA <cit.> [Short for AI2 Reasoning Challenge-Direct Answer.] dataset, whose distribution is different from that of FalseQA. Therefore, training purely on FalseQA may lead to catastrophic forgetting. To produce a model that handles both FPQs and general questions, we use a simple data replay technique (DR) <cit.>. Specifically, during training on the FalseQA dataset, for each iteration over batches, we add a batch of data samples from ARC-DA. In order to use as little ARC-DA data as possible, we keep the ARC-DA samples the same within 30 batch iterations. The aforementioned binary loss is used both with and without DR. The concrete numbers of general questions used in each setting and the training details are in Appendix <ref>. In Table <ref>, we summarize the performance of the raw model before training on FalseQA, the model tuned on FalseQA, and the model tuned on FalseQA with DR. For the original models, since they do not generate “tricky question” or “true question”, we manually read the generated answers for 100 randomly sampled question pairs to determine whether they contain any rebuttals. As we can see, before fine-tuning on FPQs, the models perform well on the ARC-DA dataset. However, they fail substantially on FalseQA. After tuning on FalseQA, though the models' rebuttal ability is activated, the Rouge-L and F1 scores on ARC-DA drop considerably. The false prediction rate (FPR), i.e., the fraction of ARC-DA questions that are incorrectly labeled as tricky questions, is non-negligible. Fortunately, when we apply the DR technique, the models not only have small FPRs and improved answer quality on ARC-DA but also achieve the same or even better performance on FalseQA. We also find that the questions in ARC-DA that the PLM still rebuts (see Appendix <ref>) are reasonable for humans to rebut as well.
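Below is a hedged sketch of the data-replay schedule just described: each FalseQA batch is followed by one replayed ARC-DA batch, and the replayed batch is refreshed only every 30 iterations. The train_step function is a placeholder for the actual loss computation and optimizer step.

```python
import random

def train_step(batch):
    # placeholder: in the real setup this computes the generation (+ binary) loss
    # on the batch and performs an optimizer step
    pass

def train_with_replay(falseqa_batches, arcda_batches, replay_refresh=30):
    replay_batch = None
    for step, fpq_batch in enumerate(falseqa_batches):
        if step % replay_refresh == 0:          # draw a new ARC-DA batch every 30 iterations
            replay_batch = random.choice(arcda_batches)
        train_step(fpq_batch)                   # FalseQA batch (FPQ/TPQ pairs)
        train_step(replay_batch)                # replayed general-question batch

train_with_replay([["q1", "q2"], ["q3", "q4"]], [["arc-1"], ["arc-2"]])
```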
The result gives us a promising direction for building QA systems that perform well on both general questions and FPQs. § CONCLUSION In this paper, we investigate using PLMs to answer FPQs, which are simple for humans but deceive most PLMs. We present the first human-written dataset of FPQs. Using the dataset, we successfully activate the discrimination and explanation abilities of PLMs and produce PLMs that are both capable of answering general questions and robust to FPQs. For future directions, we think that more advanced techniques can be used together with FalseQA to fully activate the model's ability, e.g., reinforcement learning with human feedback <cit.>. Incorporating more knowledge into PLMs is also beneficial for PLMs to answer FPQs. § ACKNOWLEDGEMENT This work is supported by the National Key R&D Program of China (No. 2020AAA0106502), China Postdoctoral Science Foundation (No. 2022M721829), and Institute Guo Qiang at Tsinghua University. § LIMITATIONS There are several limitations in our work. (1) Although we think that PLMs' rebuttal ability is activated in our experiments, there is still large room for improvement. For a binary classification problem, the most powerful PLM in our experiment reaches 87.1% accuracy at most. (2) Since it is hard to probe what the PLMs truly know, we did not further investigate whether PLMs still fail on some FPQs due to a lack of relevant knowledge or for other reasons. (3) We notice that the newly announced model ChatGPT <cit.> handles such questions satisfactorily. However, since its training data and details are not open-sourced, we are unable to investigate how the ability of this particular model is activated. (4) In this paper, we standardize the expected responses to FPQs as rebuttals, which takes a conventional perspective. However, sometimes one can react with a more creative response, such as a rhetorical question. This can be future work. § ETHICAL STATEMENT In the construction of the dataset, we forbid the annotators from composing any sentence that is offensive, harmful, or contains personal information. The annotated data is manually checked to ensure safety. We pay our annotators a competitive salary relative to market rates. The annotated dataset is helpful for encouraging models to “think” before they provide a response, making them safer in practical deployment. § APPENDICES § ANNOTATION DETAILS §.§ Initial FPQs We provide the annotators with 29 FPQs in the annotation guide. These questions are original references provided for annotators to brainstorm questions. We list the questions and their error types in Table <ref>. We did not provide FPQs for each question format since the question format is much easier to determine without examples. §.§ Distribution Balance Criterion We expect our dataset to have a richer and more uniform distribution of FPQs. We achieve this goal with the help of constraints on the FPQ types. For the eight error types, each type of FPQ should account for at least 5% of the overall data, and the largest category should not exceed 30%. For the six question formats, each type of FPQ should account for at least 10% of the entire data, and the largest category should not exceed 30%. The balance criteria do not take the “Others” category into account. § EXPERIMENT DETAILS [We choose random seeds 4, 13, and 34 in all experiments.] §.§ API Calls for Pilot Experiments We summarize the APIs used in Section <ref> in Table <ref>.
We will also provide screenshots of using these APIs in our final reproducible code. §.§ Details of Discriminating FPQs For the experiments in Table <ref>, we use the prompt learning <cit.> paradigm. We use “true” and “false” as the label words for FPQs and TPQs, respectively [Since our target is to classify whether a question has a false premise, we set “true” for FPQs and “false” for TPQs.]. For T5 models, following the usage of T5 <cit.> in their original paper, we append “potential tricky question:” to identify the task. Macaw models are multi-angle QA models; to use their direct question angle, we follow their paper and use “$answer$ ; $question$ = ” as the prefix. For OPT models, we train them in a vanilla input-output format. We list the hyper-parameters for each experiment in Table <ref>. For Macaw-11B, we use half-precision acceleration and find no performance degradation compared to full-precision computation. For the experiment in Figure <ref>, we use the same input-output format mentioned before. The hyperparameters used in this section are listed in Table <ref>. §.§ Details of Answering FPQs Since fine-tuned models in few-shot settings (e.g., 32 question pairs) sometimes do not generate “tricky/true question” at the beginning of the sentence [Some seeds in OPT models sometimes produce “this is a tricky question”.], and a normal answer rarely contains “tricky/true question”, for the classification evaluation we count whether “tricky question” or “true question” appears in the output to obtain the recall, precision, and accuracy scores. When evaluating the generated explanation, we remove “tricky question” and “true question”. We list the hyperparameters used in this section in Table <ref> and keep them the same when adding the binary loss. §.§ Details of In-context Learning In-context learning, introduced in GPT-3 <cit.>, has been a successful way of adapting very large language models. In in-context learning, we provide a textual prefix p describing the task and one or a few training samples before the input question. We adopt the QA prefix in the GPT-3 demo for all the PLMs tested. Specifically, the prefix is: p = I am a highly intelligent question answering bot. If you ask me a question that is rooted in truth, I will give you the answer. If you ask me a question that is nonsense, trickery, or has no clear answer, I will say “tricky question.” first and give the reason, otherwise I will say “true question.” first and give the reason. A few pairs of samples {(q_F^i, a_F^i), (q_T^i, a_T^i)} can be concatenated to the textual instruction. Therefore the full prefix before the input question has the following form: p + Q: q_F^i + A: a_F^i + Q: q_T^i + A: a_T^i + ... + Q: ____ + A: where + indicates string concatenation, and the input example is filled into the blank. We list our hyperparameters for in-context learning in Table <ref>. §.§ Answering FPQs and General Questions We list the hyperparameters used in this section in Table <ref>. We report the number of general questions used with the data replay technique in Table <ref>. § ADDITIONAL RESULTS §.§ More Raw PLM's Responses to FPQs We present three more examples of PLMs' responses to FPQs, together with their responses to the corresponding questions that directly ask about the correctness of the premises, in Table <ref>. We can see that in most cases PLMs successfully identify whether the premises are true or false; however, they fail on the FPQs.
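Returning to the in-context learning setup above, the following sketch illustrates how the prefix p and the demonstration pairs can be concatenated before the input question. The instruction string is quoted from the paper; the function and variable names are our own illustrative choices.

INSTRUCTION = (
    "I am a highly intelligent question answering bot. If you ask me a question "
    "that is rooted in truth, I will give you the answer. If you ask me a question "
    "that is nonsense, trickery, or has no clear answer, I will say \"tricky question.\" "
    "first and give the reason, otherwise I will say \"true question.\" first and give the reason."
)

def build_prompt(demonstrations, question):
    # demonstrations: list of (question, answer) pairs, alternating FPQ and TPQ examples
    parts = [INSTRUCTION]
    for q, a in demonstrations:
        parts.append("Q: " + q + "\nA: " + a)
    parts.append("Q: " + question + "\nA:")   # the model completes the final answer
    return "\n\n".join(parts)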
§.§ Model-generated Answers and Explanations We present randomly sampled FPQs in the test split and the corresponding references, discrimination results, and explanations/answers in Table <ref>. We use Macaw-11B trained with full training data while binary loss is added in this demonstration. We can see that in most cases, the explanation generated by the model is close to the reference. However, there are cases that the generated explanation is counterfactual. For example, “A spider's shell is not helpful to its breath” is incorrect. §.§ The Questions in ARC-DA that Macaw-FPQ Rebuts We show the problem that the model still rebuts after data replay. Specifically, we show the model results for the Macaw-11B model after training on the full training data as well as the replayed data. Since our experiments have three seeds, we show the problem that the model refutes in all seeds. We also show the explanations generated by our model, we randomly pickle one explanation from the three seeds. As we can see in Table <ref>, the correctness of the premises of these questions is not very clear. As a human, these questions can also be seen as questions containing false premises. The question in Table <ref> “How is a skin cell from a mouse similar to an amoeba?” can be seen as a question that contains a false premise “A mouse's skin cells, like amoebas, are single-celled organisms.”, as a human, we may also rebut this presupposition. For the question “Volcanoes are considered constructive because they”, generally, the volcanoes are considered destructive unless we want a creative answer. If a user truly wants the creative answer, he might provide explicit instructions to the PLM to trade robustness for creativity, which can be future work.
http://arxiv.org/abs/2307.13649v1
20230702040032
Optoelectronic properties of silver doped copper oxide thin films
[ "Vishal Mohade", "Krishna Kumar", "Parasuraman Swaminathan" ]
physics.app-ph
[ "physics.app-ph", "cond-mat.mtrl-sci" ]
Optoelectronic properties of silver doped copper oxide thin films Vishal Mohade1, Krishna Kumar1, Parasuraman Swaminathan1,2* 1Electronic Materials and Thin Films Lab, Dept. of Metallurgical and Materials Engineering, Indian Institute of Technology, Madras, Chennai, India 2Centre of Excellence in Ceramics Technologies for Futuristic Mobility, Indian Institute of Technology Madras, Chennai, India *Email: swamnthn@iitm.ac.in § ABSTRACT Thin films have found a wide variety of applications because of the substantial improvement in their properties as compared to bulk metals. Metal oxide thin films are increasingly being used in various fields and are especially important in functional applications. They can be either p- or n-type in nature depending on the materials, dopants, and preparation route. Copper oxide is an example of a p-type metal oxide, which finds application in solar cells, photo-electrochemical cells, gas sensors, supercapacitors, and thermoelectric touch detectors. Both copper (I) and copper (II) oxides can be grown, with the lower valence state oxide stable at low temperature and the higher valence state obtained by annealing at higher temperatures. In this work, we modify the optical and electrical properties of copper oxide thin films by doping with silver through a thermal evaporation process route. Copper is thermally evaporated onto the substrate and silver is co-evaporated during this process. The films are then annealed in ambient atmosphere under various conditions to obtain copper oxide. A structural and functional comparison is made between undoped and silver doped copper oxide thin films prepared under the same conditions. Thermal evaporation is a simple route for obtaining doped metal oxides and the process can be extended to a variety of other systems as well. Keywords: Copper oxide; Thermal evaporation; Optoelectronic properties; Electron microscopy; Silver doping § INTRODUCTION Copper oxide is a well-studied material because of the abundance of copper in nature, its p-type conductivity, easy synthesis by oxidation of copper, good optical and electrical properties, and non-toxic nature. There are various methods to produce copper oxide thin films viz. thermal evaporation <cit.>, magnetron sputtering<cit.>, electrodeposition<cit.>, chemical vapor deposition <cit.>, chemical bath deposition <cit.>, plasma sputtering <cit.>, molecular beam epitaxy, DC reactive sputtering<cit.>, RF reactive sputtering<cit.>, ion beam sputtering <cit.>, sol-gel <cit.>, and pulsed laser deposition<cit.>. There have been several studies to determine the properties of copper oxide thin films. A pure copper film, on annealing, is found to form different oxides at different annealing temperatures. The first oxide is cuprous oxide (Cu_2O), which starts forming in the range of 200-250 C and exhibits an optical bandgap of 2.0-3.0 eV, with a cubic crystal structure of lattice parameter 0.427 nm. The higher valence cupric oxide (CuO) starts forming above 300 C and has an optical bandgap of 1.2-1.7 eV. CuO shows a monoclinic crystal structure, and both oxides show p-type conductivity <cit.>. These oxides show promising applications in gas sensors<cit.>, solar cells<cit.>, thermoelectric touch sensors<cit.>, thin film transistors<cit.>, and supercapacitors<cit.>, to name a few. Thermal evaporation produces very uniform films with no porosity and high adhesion. It is a non-toxic, line-of-sight process. The Cu-Ag phase diagram <cit.> shows a eutectic obtained at 71.9 wt. % Ag.
For temperatures below 400 C and very low concentration of Ag there is negligible mixing or solubility of Ag in Cu. The pressure vs. temperature diagram for the Cu-O system <cit.> shows that at a pressure greater than 100 Torr and temperatures higher than 600 C, copper oxide exists as CuO. Papadimitropolous et al. deposited copper oxide thin films using thermal vacuum evaporator on a silicon substrate <cit.>. On oxidation, the film showed cuprous oxide formation at 225 C, copper silicides were also seen at this temperature. At 280 C, Cu_2O amount starts to decrease and CuO peaks appear, which then converts to only CuO peaks at 350 C. CuO forms because the Gibbs free energy for the oxidation of Cu_2O to CuO at a temperature of 200 C is -3.73 kcal/mol <cit.>. Chaudhary et al. justified it as the supply of thermal energy to the cubic Cu_2O should cause higher ionicity and smaller grain size to transform into lower symmetry and larger grain size <cit.>. Figuieredo et al. observed similar peaks of Cu_2O (111) from 250 to 300 C and CuO (11-1) above 300 C by using e-beam evaporation of pure copper sample on glass slide followed by annealing <cit.>. Thus, the overall conversion can be written as Cu → (Cu + Cu_2O) → Cu_2O → (Cu_2O + CuO) → CuO In this work we attempted addition of silver nanoparticles on the surface of a copper thin film without breaking vacuum. We used a thermal evaporation process route to ascertain efficient film morphology for study of the functional properties. The as deposited samples were then annealed to obtain oxides. The study of morphology helped us to understand grain development with temperature. A variety of characterization tools were used to study the effects of silver addition. The optical properties showed the bandgap values of copper oxide similar to others. Similar work has been attempted by only one other researcher, who used microwave annealing process to study optical properties of silver doped copper oxide thin film <cit.>. Li-doped copper oxide thin films also shows a decrease in band gap with Li concentration <cit.>. The doping process for copper oxide thin film has not been adapted elsewhere. The effect on electrical properties because of doping copper oxide have not yet been reported. This work will give a brief information about changes in resistance and carrier concentration by silver doping. The effect of temperature on resistance of copper oxide thin film can also help to develop a temperature sensor. These properties were studied elsewhere with computational modelling using density functional theory <cit.>. § MATERIALS AND METHODS Copper thin films were deposited on microscopic glass slides using high vacuum coating unit (Model HPVT 303) by Hydro Pneo Vac Technology. Prior to deposition, the glass slides were cut to dimension of 2.5 × 2.5 cm^2. Then they were cleaned in steps of 1 % soap solution, deionized water, ethanol and deionized water again by ultra-sonication for 15 min in each solution. These were dried by wiping with lint free cloth. The deposition was performed at high vacuum level (7 × 10^-6 mbar). The rate of deposition was maintained for all depositions to be   0.14 nm s^-1. Thin film thickness was monitored with a quartz crystal microbalance. The substrate was kept at room temperature and no external heating was provided during deposition, though there was a rise in temperature of the substrate during thermal evaporation. Silver thin film of thickness 1 nm were deposited on top of the pure copper film, without breaking vacuum. 
The deposited thin film was then annealed at temperatures from 150 to 450 C at an interval of 100 C. The annealing was performed in a muffle furnace in ambient atmosphere. The heating rate was maintained at 5 C/min and the holding time was 2 h. Both pure copper and silver doped copper films were annealed under the same conditions. Grazing incidence x-ray diffraction (GIXRD) measurements were performed on a Rigaku Smartlab XRD machine with 9 kW rotating anode X-ray source. The configuration includes Cu as the source material and Ge monochromator. The incidence angle was maintained at 1 and 2θ was varied at the scanning rate of 0.01 per s. The analysis was done using XPert Highscore pro.cUV-Visible spectroscopy was performed on a Jasco V-730 spectrophotometer, with halogen and deuterium lamps as sources. The wavelength range for measurement was 200 – 1100 nm. Raman spectroscopy was performed using NdYAG laser of wavelength 532 nm on WiTech alpha 300 confocal Raman microscope. The accumulation time was kept at 5 s. The objective used was 20 ×. For photoluminescence spectroscopy (PL) Jasco FP 6300 was used. The excitation wavelengths used were 320 nm and 450 nm. A DC powered 150 W Xe lamp was used as a source. The accumulation and integration time were set at 10 s respectively. Electrical transport measurement was carried out in Van der Pauw setup on DynaCool PPMS by Quantum design at room temperature. All measurements were done at 300 K. The contacts were made using silver paint adhesive and copper wire. The sample dimensions were maintained at 4 mm × 4mm (L × W). I-V measurements were performed in a four-probe setup on a source meter, Keysight precision measurement system. The contacts were made using silver epoxy adhesive and copper wires. Voltage range was kept as -10 V to 10 V. I-V measurement with temperature (Seebeck coefficient) was done on Cascade Microtech summit 12000 AP. The temperature range was from room temperature to 120 C in steps of 10 C and ETC 200L (Espec corp.) thermos chuck was used to monitor temperatures. The voltage range was -10 to 10 V. Sheet resistance was measured on Jandel mode RM 3000 using a four probe configuration. The equipment was calibrated and zeroed before taking readings. The current was automatically set by equipment within range of 10 nA to 99.99 mA. Scanning electron microscopy was done on FEI Quanta 400 high resolution scanning electron microscope. The detector used was Everhart Thornley detector (ETD). Transmission electron microscopy was performed on FEI Tecnai 12 electron microscope. § RESULTS AND DISCUSSION §.§ Structural characterization On annealing of samples, visually, we can clearly deduce that the sample with 250 C annealing temperature was the most transparent. Annealing at lower temperatures, 150 C, produced samples which were opaque, while at the highest temperature of 450 C, the annealed sample was more transparent than 350 C annealed sample. The optical images are summarized in figure <ref> and the samples are kept above IITM logo for showcasing their transparency. To identify the phases, XRD studies were carried out on copper oxide and silver doped copper oxide thin films. The XRD patterns were measured for thin films annealed at temperatures from 150 to 450 C. GIXRD results, presented in figure <ref> (a) and (b), showed the presence of copper peaks in 150 C annealed sample (ICDD number: 01-085-1326), indicating incomplete oxidation at this temperature. 
Samples annealed at 250 C showed peaks at 36.5 corresponding to Cu_2O (111) and at 42.2 corresponding to Cu_2O (200) (ICDD number: 05-667). Samples annealed at 350 C showed peaks at 35.5 and 38.7 corresponding to CuO [ICDD number: 45-0937] and samples annealed at 450 C showed 35.5 and 38.7 peaks. At 150 C, a copper peak appears at 42.4 , from Cu (111) plane. The data obtained matches results are obtained by Choudhary et al.<cit.>. The crystallite size variation with temperature and doping can be seen in table <ref>. The crystallite size was obtained using Hall–Petch equation. For undoped copper oxide thin films the crystallite size decreases with annealing temperature, whereas for Ag doped copper oxide it increases. Ag doping has decreased the crystallite size in films annealed at 150 and 250 C, when compared to undoped Cu at these temperatures. This is due to higher affinity of copper towards oxygen and the preferential oxidation of copper that occurs in presence of silver <cit.>. Ag_2O (s) + Cu (s) → CuO (s) + 2Ag (s) SEM was performed at magnification of 40000 × in secondary electron mode using a Everhart–Thornley detector. Thin films were sputtered with gold before measurement. Typical microstructure of the thin films, without and with silver, analyzed by SEM are shown figures <ref> and <ref> respectively. The inset at the bottom left shows the calibration scale to observe the grain size and top right shows the temperature of annealing. Both thin films annealed at 150 C shows smoother film and very fine grain size and closed packed structure. The grain size increases with annealing temperature. Figure <ref> shows undoped copper oxide thin films annealed at different temperatures. The film morphology appears smooth and grain size distribution is uniform. Thin films annealed at 350 C shows very high increase in grain size as compared to other films. The film annealed at 450 C shows presence of pores smaller than the grains. Ag doped SEM images, on the other hand, shows smooth and uncracked films. The grains appear well-defined in Ag doped copper oxide thin film (shown in figure <ref>) and the grain size increases with temperature, which is similar to the undoped copper oxide thin film. The grains agglomerated to form bigger grains, therefore grain size variation in the 450 C annealed film looks non uniform. A crack is also observed around bigger grains. This crack is due to connecting of adjoining pores. The pore size observed is comparable to the grain size. The 350 C film shows more layered structure and decreased pore size. On comparing to undoped copper oxide thin film annealed at 450 C in figure <ref>, the Ag doped thin film shows aggregated grains with more defined shape of top layer. To get closer look at the grain structure on the addition of silver to copper thin films, representative TEM images of the as deposited copper and silver doped copper thin films are shown in figure <ref>. Figure <ref> (a) shows a uniform grain size for pure copper, whereas in figure <ref> (b) the grain size is non uniform and there are some grains which are distributed randomly with larger size as compared to rest of the grains and the smaller grains shows similar size as in fig <ref> (a). By comparison between (a) and (b) we can conclude that the bigger grains are of Ag doped copper. §.§ Optical properties Figure <ref> shows the absorbance plot for both doped and undoped copper oxide thin films. The absorbance is high in the UV region due to the opaque nature of the thin film. 
Silver doped thin film shows higher absorbance in blue region as compared to undoped copper oxide. This may be due to the nature of noble metals such as silver. They absorb light due to the transition of electrons between unoccupied hybridized sp states and occupied d states. Absorbance decreases with increase in wavelength till 500 nm (blue green region) for undoped thin film. A similar behavior is shown by silver doped copper oxide annealed at 250 C. The optical band gap of copper oxide thin films can be calculated using Tauc equation and figure <ref> shows the Tauc plot for determining the band gap of CuO and Ag doped CuO thin films. Band gap values can be seen from the extrapolation of the linear portion of to the energy axis of the Tauc plot (arrow pointing on X-axis). Cu_2O has a direct band gap of 2.91 ± 0.3 eV and CuO has indirect band gap of 1.5 ± 0.3 eV. The data obtained is similar to those obtained in literature <cit.>. The band gap of silver doped Cu_2O is 3.04 ± 0.3 eV and CuO is 1.70 ± 0.3 eV. Figure <ref> shows Raman spectra of annealed copper oxide thin films. Among nine optical modes of CuO, three are Raman active (A_g+ 2 B_g). The peaks 298, 342 and 627 cm^-1 corresponds to CuO and the peaks 143, 210, and 617 cm^-1 correspond to Cu_2O. 450 C Ag doped copper oxide thin film shows all three CuO peaks only. On the other hand, other temperatures show the presence of Cu_2O peaks more dominantly. The silver doped copper oxide thin film annealed at 250 C shows the greatest intensity of peaks at 143 and 210 cm^-1. This can be attributed to the presence of Cu_2O. Raman data confirms the results obtained by XRD. Undoped and doped copper oxide films annealed at 150 C shows only one peak corresponding to Cu_2O at 1100 cm^-1. This signifies the presence of very thin oxide layer, although no other peaks are available. This can suggest that the oxide formation has not started properly at 150 C. Photoluminescence spectra were also recorded for the undoped and doped copper oxide thin films. The film annealed at 250 C shows peaks at 398 nm (near UV emission) and 496 nm (corresponding to cyan emission). The peaks at 540 and 578 nm correspond to green and yellow emissions respectively. The presence of high yellow peaks indicates excess oxygen whereas green emissions indicates oxygen deficiency <cit.>. Both doped and undoped copper oxide thin films annealed at 450 C exhibit emission in UV range i.e. 364 nm, which can be correlated to band gap transition in CuO <cit.>. The green emissions are also associated with the presence of surface defects and transition of carriers from near conduction band (oxygen vacancies) to deep valence band (Cu vacancies). Figure <ref> shows PL spectra for undoped and doped copper oxide thin films at an excitation wavelength 450 nm. The graph shows a single red emission, this can be due to neutral and single ionized oxygen transitions or deep level emission related to presence of defect <cit.>. §.§ Electrical properties Electrical transport measurements were performed with Van der Pauw method to obtain carrier concentrations. It shows the resistance value obtained for undoped copper oxide thin films annealed at 250 and 450 C are 12-20 MΩ and 2.8 MΩ respectively. For silver doped thin film, annealed at 450 C, the corresponding value was 22-30 kΩ. The carrier concentration values for undoped Cu_2O thin film were in range of 10^15 cm^-3 and for doped CuO in range of 10^16 cm^-3. 
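For reference, the carrier concentration and mobility quoted above follow from the measured Hall coefficient and resistivity through the standard single-carrier relations; the sketch below illustrates these relations. The function names, the symmetric van der Pauw simplification, and the unit choices are our own assumptions and not the analysis code used for the measurements.

import math

E = 1.602176634e-19  # elementary charge in C

def sheet_resistance_symmetric(R_measured_ohm):
    # van der Pauw sheet resistance for a symmetric sample (R_AB,CD ~ R_BC,DA)
    return math.pi * R_measured_ohm / math.log(2)

def carrier_concentration(hall_coefficient_cm3_per_C):
    # p = 1 / (e * |R_H|); a positive R_H indicates p-type conduction
    return 1.0 / (E * abs(hall_coefficient_cm3_per_C))        # in cm^-3

def hall_mobility(resistivity_ohm_cm, p_cm3):
    # mu = 1 / (e * p * rho), in cm^2 V^-1 s^-1
    return 1.0 / (E * p_cm3 * resistivity_ohm_cm)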
The sign of the Hall coefficient confirms p-type conductivity in copper oxide thin films <cit.>. Table <ref> shows the sheet resistance values obtained by four-probe measurement. The films annealed at 150 C showed the same resistance as copper because of negligible surface oxidation. The resistance is much lower for undoped copper oxide thin films than for silver doped copper oxide thin films. The increase in resistance is due to additional scattering centers introduced by the silver. The I-V curves were also obtained for undoped and silver doped copper oxide thin films. The undoped copper oxide thin film annealed at 450 C showed non-linear behavior, while the other samples showed Ohmic behavior. The resistance values obtained were similar to those obtained from the four-probe data. The temperature dependence of resistance was also measured and the values are listed in Table <ref>. The table shows that the resistance decreases with increasing temperature for all the annealed thin films. The drop in resistance for the silver doped thin film annealed at 250 C is drastic, but its initial resistance is very high, as can also be seen from the table. The decrease is not regular for the undoped copper oxide thin film annealed at 250 C. The decrease for the silver doped thin film annealed at 450 C is gradual with temperature, which makes it a good candidate for temperature sensing applications. § CONCLUSION The copper oxide thin film annealed at 250 C shows the highest optical transparency. XRD studies show the presence of the Cu_2O phase in both undoped and silver doped copper oxide thin films annealed at 250 C. The CuO phase starts forming above 330 C and is hence visible in the XRD patterns of thin films annealed at 350 and 450 C. Raman spectroscopy also confirms the results obtained by XRD. On increasing the annealing temperature, the grain size increased. Silver doped copper oxide thin films showed a larger band gap than undoped copper oxide thin films. The blue and green emissions indicated a deficiency or excess of oxygen in the thin films, and the red emission is associated with transitions of neutral and singly ionized oxygen. Copper oxide thin films showed p-type conductivity. Silver doped copper oxide thin films showed higher resistance than undoped copper oxide. The resistance value decreases with increasing temperature for all copper oxide thin films. The resistance variation with temperature of the silver doped copper oxide thin film annealed at 450 C showed a possible application as a temperature sensor. § ACKNOWLEDGMENTS Support from the Centre of Excellence in Ceramics Technologies for Futuristic Mobility (project number SB22231272MMETWO008702) is acknowledged. Electron microscopy (SEM and TEM) was carried out at the facilities available in the Dept. of Metallurgical and Materials Engineering, IIT Madras. Optical characterisation was performed at the facilities available at the Dept. of Physics, IIT Madras, while electrical characterisation was carried out at the Centre for NEMS and Nanophotonics, IIT Madras.
http://arxiv.org/abs/2307.01868v1
20230704182717
Generalized Quasiorders and the Galois Connection End-gQuord
[ "Danica Jakubíková-Studenovská", "Reinhard Pöschel", "Sándor Radeleczki" ]
math.RA
[ "math.RA", "08A, 06A15" ]
Generalized Quasiorders and the Galois Connection End-gQuord Danica Jakubíková-Studenovská, Reinhard Pöschel, Sándor Radeleczki Equivalence relations or, more general, quasiorders (i.e., reflexive and transitive binary relations) ρ have the property that an n-ary operation f preserves ρ, i.e., f is a polymorphism of ρ, if and only if each translation (i.e., unary polynomial function obtained from f by substituting constants) preserves ρ, i.e., it is an endomorphism of ρ. We introduce a wider class of relations – called generalized quasiorders – of arbitrary arities with the same property. With these generalized quasiorders we can characterize all algebras whose clone of term operations is determined by its translations via the above property, which generalizes affine complete algebras. The results are based on the characterization of so-called u-closed monoids (i.e., the unary parts of clones with the above property) as Galois closures of the Galois connection End-gQuord, i.e., as endomorphism monoids of generalized quasiorders. The minimal u-closed monoids are described explicitly. § INTRODUCTION Equivalence relations ρ have the remarkable well-known property that an n-ary operation f preserves ρ (i.e., f is a polymorphism of ρ) if and only if each translation, i.e., unary polynomial function obtained from f by substituting constants, preserves ρ (i.e., is an endomorphism of ρ). Checking the proof one sees that symmetry is not necessary, thus the same property, called Ξ in this paper (see <ref>), also holds for quasiorders, i.e., reflexive and transitive relations. No further relations with property Ξ were known, and once we came up with the (for us) interesting question whether there are relations other than quasiorders which satisfy Ξ, we hoped to prove that Ξ(ρ) implies that ρ has to be a quasiorder (or at least to be “constructible” from quasiorders). This attempt failed, but a new notion was born: transitivity of a relation with higher arity. The next step was to investigate reflexive and transitive m-ary relations which naturally are called generalized quasiorders for m≥ 3 (for m=2 they coincide with usual (binary) quasiorders) and which all have the property Ξ (Theorem <ref>). Moreover, these generalized quasiorders are more powerful than quasiorders or equivalence relations (see Remark <ref>) and therefore allow finer investigations of the structure of algebras (A,F). The next challenging question was: are there further relations with property Ξ, other than generalized quasiorders? The answer is “yes, but not really”: there are relations ρ satisfying Ξ(ρ) which are not generalized quasiorders (see Example in <ref>), but each such relation ρ is “constructively equivalent” to generalized quasiorders in the sense that they generate the same relational clone and therefore can be expressed mutually by primitive positive formulas (Proposition <ref>). With the property Ξ, the clone Pol ρ of polymorphisms is completely determined by the endomorphism monoid M=End ρ. Changing the point of view and starting with an arbitrary monoid M≤ A^A of unary mappings, one can ask for the set M^* of all operations whose translations belong to M. Then Ξ(ρ) means Pol ρ=(End ρ)^* (for details see Section <ref>); in particular, M^* is a clone. But in general, M^* is only a so-called preclone (see the counterexample in <ref>). This leads to the question of when M^* is a clone, and to the notion of a u-closed monoid (namely, a monoid M for which M^* is a clone). These u-closed monoids play a crucial role in this paper.
Their characterization via generalized quasiorders, namely as Galois closed monoids (of the Galois connection - introduced in Section <ref>), is one of the main results (Theorem <ref>) from which the answer to all above questions more or less follows. The paper is organized as follows. All needed notions and notation are introduced in Section <ref>. Section <ref> deals with the property Ξ and the u-closure and clarifies the preclone structure of M^*. Section <ref> is the stage for the main player of this paper: the generalized quasiorders. In particular, Theorem <ref> proves the property Ξ for them. As already mentioned, in Section <ref> the Galois connection - and the crucial role of u-closed monoids is considered. Moreover, the behavior of the u-closure under taking products and substructures is clarified. In Section <ref> we consider the u-closure of concrete monoids M≤ A^A, in particular all minimal u-closed monoids are determined (Theorem <ref>). In Section <ref> we collect some facts and problems for further research. In particular we show how the notion of an affine complete algebra can be generalized via generalized quasiorders. § PRELIMINARIES In this section we introduce (or recall) all needed notions and notation together with some results. Throughout the paper, A is a finite, nonempty set. :={0,1,2,…} (N_+:=∖{0}) denotes the set of (positive) natural numbers. Let [n](A) and [n](A) denote the set of all n-ary operations f:A^n→ A and n-ary relations ρ⊆ A^n, n∈_+, respectively. Further, let (A)=⋃_n∈_+[n](A) and (A)=⋃_n∈_+[n](A). The so-called projections e^n_i∈[n](A) are defined by e^n_i(x_1,…,x_n):=x_i (i∈{1,…,n}, n∈_+). The identity mapping is denoted by 𝕀_A (=e^1_1). C:={_a| a∈ A} is the set of all constants, considered as unary operations given by _a(x):=a for a∈ A. Special sets of relations are (A)⊆(A) and (A)⊆[2](A) of all equivalence relations (reflexive, symmetric and transitive) and quasiorder relations (reflexive and transitive), respectively, on the set A. For f∈[n](A) and r_1,…,r_n∈ A^m, r_j=(r_j(1), …, r_j(m)), (n,m∈_+, j∈{1,…,n}), let f(r_1,…,r_n) denote the m-tuple obtained from componentwise application of f, i.e., the m-tuple (f(r_1(1),…,r_n(1)),…,f(r_1(m),…,r_n(m))). For f∈[n](A) and g_1,…,g_n∈[1](A), the composition f[g_1,…,g_n] is the unary operation given by f[g_1,…,g_n](x):=f(g_1(x),…,g_n(x)), x∈ A. An operation f∈[n](A) preserves a relation ρ∈[m](A) (n,m∈_+) if for all r_1,…,r_n∈ρ we have f(r_1,…,r_n)∈ρ, notation fρ. The Galois connection induced by gives rise to several operators as follows. For Q⊆(A) and F⊆(A) let Q :={f∈(A)|∀ρ∈ Q: fρ} (polymorphisms), F :={ρ∈(A)|∀ f∈ F: fρ} (invariant relations), Q :={f∈[1](A)|∀ρ∈ Q: fρ} (endomorphisms), F :=(A,F):= F∩(A) (congruence relations), F :=(A,F):= F∩(A) (compatible quasiorders). The Galois closures for - and - are known and can be characterized as follows: F=F (clone generated by F), Q=[Q]_∃,,= (relational clone, generated by Q, equivalently characterizable as closure with respect to primitive positive formulas, i.e., formulas containing variable and relational symbols and only ∃,,=), M=M ((sub)monoid generated by M⊆ A^A), Q=[Q]_∃,,,= (weak Krasner algebra generated by Q, equivalently characterizable as closure with respect to positive formulas, i.e., formulas containing variable and relational symbols and ∃,,, =). We refer to, e.g., <cit.>, <cit.>, <cit.>, <cit.>. A set F ⊆(A) is called a preclone if it contains 𝕀_A and is closed under the operations ζ, τ and ∘ that are defined as follows. 
Let f∈[n](A) and g ∈[m](A), n,m∈_+. Then * 𝕀_A(x):=x (identity operation); * (ζ f)(x_1,x_2,…,x_n) := f(x_2,…,x_n,x_1) (cyclic shift), if n = 1 then ζ f := f; * (τ f)(x_1,x_2,x_3,…,x_n) := f(x_2,x_1,x_3,…,x_n) (permuting the first two arguments), if n = 1 then τ f := f; * (f∘ g)(x_1,…,x_m,x_m+1,…,x_m+n-1):= f(g(x_1,…,x_m),x_m+1,…,x_m+n-1) (composition). For later use we introduce here also the operations ∇ (adding a fictitious argument at first place) and Δ (identification of the first two arguments): * (∇ f)(x_1,x_2,…,x_n+1) :=f(x_2,…,x_n+1), * (Δ f)(x_1,…,x_n-1):=f(x_1,x_1,…,x_n-1) if n≥ 2, and Δ f=f for n=1. Remarks: Clearly, because of (<ref>) and (<ref>), the unary part F∩[1](A) of a preclone F is a monoid. The (m+n-1)-ary function f ∘ g (defined in (<ref>)) sometimes is called linearized composition (or superposition), because this is a special case of the general linearized composition, linearization or superposition mentioned in <cit.>, <cit.> or <cit.>, respectively. Preclones, also known as operads, can be thought as “clones where identification of variables is not allowed” (cf. <ref>). The term preclone was introduced by Ésik and Weil <cit.> in a study of the syntactic properties of recognizable sets of trees. A general characterization of preclones as Galois closures via so-called matrix collections can be found in <cit.>. The notion of operad originates from the work in algebraic topology by May <cit.> and Boardman and Vogt <cit.>. For general background and basic properties of operads, we refer the reader to the survey article by Markl <cit.>. Clones are special preclones. There are many (equivalent) definitions of a clone. One of these definitions is that a clone is a set F⊆(A) closed under <ref>(<ref>)-(<ref>), <cit.>. Therefore we have: A preclone is a clone if and only if it is also closed under ∇ (adding ficticious variables) and Δ (identification of variables). For F⊆(A), the clone generated by F is denoted by F or [A]F. § THE PROPERTY Ξ AND U-CLOSED MONOIDS Equivalence relations or, more general, quasiorder relations ρ have the remarkable property Ξ (see <ref> below) that a polymorphism f∈ρ is determined by its translations f defined as follows: For an n-ary operation f:A^n→ A, i∈{1,…,n} and a tuple 𝐚=(a_1,…,a_i-1,a_i+1,…,a_n)∈ A^n-1, let f_𝐚,i be the unary polynomial function f_𝐚,i(x):=f(a_1,…,a_i-1,x,a_i+1,…,a_n), called translation (see, e.g., <cit.>, 1-translation in <cit.>, basic translation in <cit.>) and let f be the set of all such translations f_𝐚,i. For constants (as well as for arbitrary unary functions) f we put f:={f}. For F⊆(A) let F:=⋃_f∈ Ff⊆ A^A. Given a set M⊆ A^A we define M^*:={f∈(A)|f⊆ M}. Remark: For a unary function f we have f∈ (f)^*, in particular f∈ M^* implies f∈ M. Thus M^*=M for every M⊆ A^A. For a relation ρ∈(A) we consider the following property Ξ in three equivalent formulations: Ξ(ρ): ∀ f∈(A): fρfρ ∀ f∈(A): f∈ρf⊆ρ ρ=(ρ)^*. This can be extended to sets Q⊆(A) just by substituting Q for ρ in the above definition, e.g., Ξ(Q) Q=( Q)^*. As noticed above, it is well-known that Ξ(ρ) holds for ρ∈(A) or, more general, for ρ∈(A). Equivalently, expressed with the usual notions of congruence or quasiorder lattices, this means (A,F)=(A,F) and (A,F)=(A,F) for each algebra (A,F) (F⊆(A)). Clearly, there arises the question already mentioned in the introduction: Does there exist other relations ρ with the property Ξ(ρ)? Since Ξ(ρ) implies that (ρ)^* is a clone and therefore it is closed under ∇ (cf. <ref>). 
As we shall see in <ref> below this also implies C⊆ρ, what expresses the fact that ρ is reflexive (for definition see <ref>). However, the converse is not true: not each reflexive relation satisfies Ξ(ρ) as the following example shows. Let A={0,1,2} and M:=ρ for the binary relation ρ={(0,0),(1,1), (2,2), (0,1), (1,2)}. Note that ρ is reflexive but not transitive. Define f:A^2→ A by the following table: c]|r|rcc| f(x,y) y= 0 1 2 x= 0 0 0 1 1 0 0 1 2 1 1 2 One can immediately check that each unary polynomial f_𝐚,i preserves ρ, i.e., f⊆ M, but g:=Δ f (i.e., g(x)=f(x,x)) is the mapping 0↦ 0, 1↦ 0, 2↦ 2 which does not belong to M (since g does not preserve ρ because g maps (1,2)∈ρ to (0,2)∉ρ). Thus f∈ M^* but g∉ M^*. Hence M^* is not a clone. Since M^* is not always a clone, there also arises the question: what is the algebraic nature of the sets M^*? The answer gives the following proposition. Let M≤ A^A be a monoid. Then M^* is a preclone (i.e., it contains 𝕀_A and is closed under the operations ζ,τ,∘, cf. <ref>). Moreover, M^* is closed under ∇ (cf. <ref>(<ref>)) if and only if C⊆ M. Clearly 𝕀_A∈ M⊆ M^*. It is straightforward to check that for f,g∈ M^* also ζ f, τ f and f∘ g belong to M^* (notation see <ref>). We show it for <ref>(<ref>): if all variables x_1,…,x_m,…,x_m+n-1, with exception of x_i, are constant, say 𝐚=(a_1,…,a_m,…,a_m+n-1), then, for i≥ m+1, we have (f∘ g)_𝐚,i=f(b,a_m+1,…,x_i,…,a_m+n-1) with b:=g(a_1,…,a_m), what obviously belongs to f⊆ M. If i≤ m, then g_𝐚',i∈ M (because g∈ M^*), 𝐚':=(a_1,…,a_m) (without the i-th component) and f_𝐚”,1(x)=f(x,a_m+1,…,a_m+n-1) belongs to M (because f∈ M^*), where 𝐚”:=(a_m+1,…,a_m+n-1), consequently (f∘ g)_𝐚,i(x)=f_𝐚”,1(g_𝐚',i(x)) also belongs to the monoid M. Thus f∘ g⊆ M, i.e., f∘ g∈ M^*. Further we observe ∇ f=f∘ e^2_2 and e^2_2=∇𝕀_A where e^2_2 is the binary projection e^2_2(x_1,x_2)=x_2. Thus the preclone M^* is closed under ∇ if and only if e^2_2∈ M^*. But e^2_2={𝕀_A}∪ C (since e^2_2(a,x)=𝕀_A(x) and e^2_2(x,a)=_a for a∈ A), therefore e^2_2∈ M^*e^2_2⊆ M C⊆ M, and we are done. M^* is a preclone for a monoid M by <ref>. Conversely, for a preclone P the translations P form a monoid (because of <ref>(<ref>) and (<ref>)). Thus we can consider the following two mappings between monoids and preclones: ϕ : P↦P, where P is a preclone on A. ψ : M↦ M^*, xiwhere M≤ A^A is a monoid on A. Then (ϕ,ψ) is a residuated pair of mappings (covariant Galois connection) between the lattice of submonoids of A^A and the lattice of preclones on A. We have ϕ(P)⊆ M P⊆ψ(M). Moreover, the corresponding kernel operator ϕ(ψ(M))=M^*=M is trivial (cf. remark in Definition <ref>). However, the corresponding closure operator P↦ψ(ϕ(P)) is nontrivial and it is an open problem which preclones are closed, i.e., when do we have P=ψ(ϕ(P))=(P)^*? Let M_i≤ A^A, i∈ I. Then (⋂_i∈ I M_i)^*=⋂_i∈ IM_i^*. Since, for a residuated pair (ϕ,ψ), the residual ψ is meet-preserving, the Lemma immediately follows from <ref>. We add a direct proof just using the definitions: f∈(⋂_i∈ I M_i)^* f⊆⋂_i∈ IM_i∀ i∈ I: f⊆ M_i ∀ i∈ I: f∈ M_i^* f∈⋂_i∈ IM_i^*. Since M^* is not always a clone, the question arises: For which monoids M≤ A^A the preclone M^* is a clone? To attack this problem we introduce the u-closure M what shall lead to the equivalent problem (cf. <ref>(<ref>)) of characterizing u-closed monoids. For M⊆ A^A let M:=⋂{N| M⊆ N ≤ A^A, and N^* is a clone}. A monoid M≤ A^A is called u-closed if M=M. Let M⊆ A^A. * The operator M↦M is a closure operator (this follows from Lemma <ref>). 
* M is a monoid containing C and (M)^* is a clone (the latter follows from <ref> because, by definition, M is the intersection of monoids N with N^* being a clone; thus from <ref> follows C⊆M, too). In particular we have M=M=M. * M is u-closed (i.e. M=M) if and only if M^* is a clone (in fact, “⇒” follows from (<ref>), “⇐” follows from definition <ref>). A characterization of u-closed monoids M will be given in the next sections (Proposition <ref>, Theorem <ref> and Corollary <ref>). § GENERALIZED QUASIORDERS Let A={a_1,…,a_k} and M≤ A^A. We define the following |A|-ary relation: Γ_M:={(ga_1,…,ga_k)| g∈ M}. Thus Γ_M consists of all “function tables” _g:=(ga_1,…,ga_k) (considered as elements (columns) of a relation) of the unary functions g in M. In particular, we have M=Γ_M. In fact, h∈Γ_M, i.e., hΓ_M, implies h(_𝕀)∈Γ_M, i.e., ∃ g∈ M: h(_id)=_g what gives h=g∈ M. Conversely , if h∈ M, then h(_g)=_h∘ g∈Γ_M for all _g∈Γ_M, i.e., hΓ_M. Moreover, it is known that Γ_M coincides with the so-called stabilizer (M) of M and it is the largest element in the monoidal interval defined by M (all clones with unary part M form an interval in the clone lattice, called monoidal interval, cf., e.g., <cit.>). If F is a clone with F^(1)=M, then Γ_M is the so-called first graphic of F denoted by Γ_F(χ_1) in <cit.>. An m-ary relation ρ⊆ A^m is called reflexive if (a,…,a)∈ρ for all a∈ A, and it is called (generalized) transitive if for every m× m-matrix (a_ij)∈ A^m× m we have: if every row and every column belongs to ρ – for this property we write ρ (a_ij) – then also the diagonal (a_11,…,a_mm) belongs to ρ, cf. Figure <ref>. A reflexive and transitive m-ary relation is called generalized quasiorder. The set of all generalized quasiorders on the base set A is denoted by (A), and [m](A):=[m](A)∩(A) will denote the m-ary generalized quasiorders. From the definitions easily follows: (i) Each quasiorder (i.e., binary reflexive and transitive relation) is also a generalized quasiorder. The converse is also true: Each binary generalized quasiorder is a usual quasiorder relation, i.e., we have ^(2)(A)=(A). (ii) Each so-called diagonal relation is a generalized quasiorder. Here an m-ary relation δ∈(A) (m∈_+) is called diagonal relation if there exists an equi­valence relation ϵ on the set {1,…,m} of indices such that δ={(a_1,…,a_m)∈ A^m|∀ i,j∈{1,…,m}: (i,j)∈ϵ a_i=a_j}. We generalize the notation ρ(a_ij) to n-dimensional “m×…× m-matrices” (tensors) (a_i_1,…, i_n)∈ A^m×…× m where i_1,…,i_n∈{1,…,m}): ρ (a_i_1,…, i_n) denotes the fact that every “row” in each dimension belongs to ρ, i.e., for each index j∈{1,…,n} and any fixed i_1,…,i_j-1,i_j+1,…,i_n the m-tuple a_i_1,…,[j],…,i_n:= (a_i_1,…,1,…,i_n,…,a_i_1,…,m,…,i_n) (the indices 1,…,m are on the j-th place in the index sequence) belongs to ρ. Example: For n=3, ρ (a_i_1,i_2,i_3) means that for all i_1,i_2,i_3∈{1,…,m} we have (a_1,i_2,i_3,…,a_m,i_2,i_3)∈ρ, (a_i_1,1,i_3,…,a_i_1,m,i_3)∈ρ and (a_i_1,i_2,1,…,a_i_1,i_2,m)∈ρ. The (main) diagonal of (a_i_1,i_2,i_3) is the m-tuple (a_1,1,1,…,a_m,m,m). Remark: Let A={1,…,k}. We mention that for an n-ary function f:A^n→ A and a monoid M≤ A^A we have f∈ M^* if and only if Γ_M (a_i_1,…,i_n) where a_i_1,…,i_n:=f(i_1,…,i_n), i_1,…,i_n∈{1,…,k}. For ρ⊆ A^m let ρ^ denote the transitive closure of ρ, i.e., ρ^=⋂{σ⊆ A^m|σ is transitive and ρ⊆σ} is the least transitive relation containing ρ (it is easy to check that the intersection of transitive relations is again transitive). 
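On a small finite base set, reflexivity, generalized transitivity, and the relation Γ_M introduced above can be checked directly by enumeration. The following sketch is our own illustration (relations as sets of tuples, unary maps as dictionaries), not part of the paper.

from itertools import product

def is_reflexive(rho, A):
    m = len(next(iter(rho)))
    return all(tuple([a] * m) in rho for a in A)

def is_gen_transitive(rho, A):
    # for every m x m matrix over A whose rows and columns all lie in rho,
    # the main diagonal must lie in rho as well
    m = len(next(iter(rho)))
    for rows in product(rho, repeat=m):          # candidate matrix, given by its rows
        cols = tuple(zip(*rows))
        if all(c in rho for c in cols):
            diag = tuple(rows[i][i] for i in range(m))
            if diag not in rho:
                return False
    return True

def is_gen_quasiorder(rho, A):
    return is_reflexive(rho, A) and is_gen_transitive(rho, A)

def gamma(M, A):
    # Gamma_M: the |A|-ary relation of function tables of the unary maps in M;
    # each g in M is a dict a -> g(a), and A is taken as an ordered list
    return {tuple(g[a] for a in A) for g in M}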
Analogously, the generalized quasiorder closure ρ^ is the least generalized quasiorder containing ρ. The reflexive closure is naturally defined as ρ^:=ρ∪{(c,…,c)∈ A^m| c∈ A}. These closures can be constructed (inductively) as follows. For ρ∈[m](A) define ∂(ρ):={(a_11,…,a_mm)∈ A^m|∃ (a_ij)∈ A^m× m: ρ (a_ij)} and let ρ^(0):=ρ, ρ^(n+1):=ρ^(n)∪∂(ρ^(n)) for n∈. Then we have ρ^=⋃_n∈ρ^(n) and ρ^=ρ^ . Remark: If ρ is reflexive, then ρ⊆∂(ρ). For binary relations ρ the operator ∂ is just the relational product: ∂(ρ)=ρ∘ρ. Let ρ∈[m](A). Then for every n-dimensional m×…× m-matrix (a_i_1,…, i_n)_i_1,…,i_n∈{1,…,m} we have ρ (a_i_1,…, i_n) (a_1,…,1,…,a_m,…, m)∈ρ. For n=2 the condition follows from the definition of a generalized quasiorder. Thus we can assume n≥ 3. Let M_k=(b^k_i_1,…,i_n-k) denote the (n-k)-dimensional m×…× m-matrix with b^k_i_1,…,i_n-k:=a_i_1,…,i_1,i_2,…,i_n-k (the first k coordinates are equal i_1). . Thus M_0=(a_i_1…,i_n) and M_n-1=(b^1_i)=(a_i,…,i)_i∈{1,…,m}= (a_1,…,1,…,a_m,…, m). We have to show M_n-1∈ρ (formally ρ M_n-1). This can be done by induction on k. By assumption we have ρ M_k for k=0. Assume ρ M_k for some k∈{0,1,…,n-2}. We are going to show ρ M_k+1 what will finish the proof. Let i_1,…,i_n-k∈{1,…,m}. We fix i_3,…,i_n-k and consider the m× m-matrix M'_k:=(b^k_i,j,i_3,…,i_n-k)_i,j∈{1,…,m}. Clearly, ρ M_k implies ρ M'_k. Therefore (b^k_1,1,i_3,…,i_n-k,…,b^k_m,m,i_3,…,i_n-k)∈ρ because ρ is a generalized quasiorder. Since i_3,…,i_n-k were chosen arbitrarily, this implies (together with ρ M_k) that we have ρ M_k+1 (note b^k+1_i_1,i_3,…,i_n-k=b^k_i_1,i_1,i_3,…,i_n-k). One of the crucial properties of generalized quasiorders is that preservation of a relation only depends on the translations, i.e., it extends the property Ξ (see (<ref>)) from (usual) quasiorders to generalized quasiorders. For f∈(A) and ρ∈(A) we have: fρfρ. Thus Ξ(ρ) holds. “⇒": Since each g∈f is a composition of f and constants c∈ C and since constants preserve ρ because of reflexivity, we have f⊆{f}∪ Cρ. “⇐”: Let (f)=n, (ρ)=m, fρ and let r_1,…,r_n∈ρ. We are going to show f(r_1,…,r_n)∈ρ what implies fρ and will finish the proof. Define a_i_1,…,i_n:=f(r_1(i_1),…,r_n(i_n)). Then a_i_1,…,[j],…,i_n=f_,j(r_j)∈ρ for =(r_1(i_1),…,r_j-1(i_j-1),r_j+1(i_j+1),…,r_n(i_n)) (notation see (<ref>)) because f_,j∈fρ, j∈{1,…,n}. Thus ρ(a_i_1,…,i_n) and we have f(r_1,…,r_n) =(f(r_1(1),…,r_n(1)),…,f(r_1(m),…,r_n(m))) =(a_1,…,1,…,a_m,…,m)∈ρ by <ref>, and we are done. Let F⊆(A) and Q⊆(A). Then * (A,F)=(A,F) (cf. Remark <ref>) * Ξ(Q) holds, i.e., Q=( Q)^*, in particular, ( Q)^* is a clone and Q is u-closed. (<ref>) directly follows from <ref>. Concerning (<ref>), we have Q=⋂_ρ∈ Qρ =_<ref>,(<ref>)⋂_ρ∈ Q(ρ)^* =_<ref>(⋂_ρ∈ Qρ)^*=( Q)^*, i.e., Ξ(Q). Now we characterize the u-closed monoids M≤ A^A (i.e., M=M) by various properties. The condition <ref>(<ref>) will show that the situation as in Example <ref> is characteristic for being not u-closed. For a monoid M≤ A^A the following are equivalent: * M is u-closed (equivalently, M^* is a clone), * M^*=Γ_M, * C⊆ M and for every binary f∈ M^* we have Δ f∈ M, * Γ_M is a generalized quasiorder. Each of the conditions (<ref>), (<ref>) and (<ref>) implies C⊆ M (cf. <ref> for (<ref>), (<ref>) and note that Γ_M is reflexive if and only if C⊆ M). Thus we can assume C⊆ M in the following. (<ref>)(<ref>)(<ref>) is clear (each set of the form Q is a clone, and any clone is closed under Δ). (<ref>)(<ref>): M is just the unary part F^(1) of the clone F:=M^*. 
It is well-known (cf., e.g., <cit.>) that Γ_M is the largest clone F with unary part F^(1)=M, thus M^*=F⊆Γ_M. Conversely, let f∈Γ_M, i.e., fΓ_M. Remember that the elements of Γ_M are of the form _g for some g∈ M (notation see <ref>). Thus fΓ_M means f(_g_1,…,_g_n)∈Γ_M whenever g_1,…,g_n∈ M. Since f(_g_1,…,_g_n)=_f[g_1,…,g_n], this equivalently can be expressed by the condition that the composition f[g_1,…,g_n] belongs to M whenever g_1,…,g_n∈ M. Consequently, any translation g:=f_𝐚,i derived from f (w.l.o.g. we take i=1), say g(x):=f(x,a_2,…,a_n) for some a_2,…,a_n∈ A, must belong to M, since g=f[𝕀_A,_a_2,…,_a_n] and M contains the identity 𝕀_A and the constant functions. Thus f⊆ M, hence f∈ M^*, and we get Γ_M⊆ M^*. (<ref>)(<ref>): Assume (<ref>) and assume on the contrary that M^* is not a clone. We lead this to a contradiction. Since M^* is a preclone by <ref>, M^* cannot be closed under Δ and there must exist a function f∈ M^*, say n-ary, such that h:=Δ f∉ M^* (clearly n≥ 3, otherwise we have a contradiction to (<ref>)). Thus some translation g:=h_𝐚,i derived from h cannot belong to M. If i≠ 1, then g(x)=h(c_1,…,c_i-1,x,c_i+1…,c_n-1)=f(c_1,c_1,c_i-1,x,c_i+1…,c_n-1) would belong to M since f∈ M^*. Therefore i=1 and g(x)=h(x,c_2,…,c_n-1)=f(x,x,c_2,…,c_n-1) does not belong to M. Consider the binary function f'(x_1,x_2):=f(x_1,x_2,c_2,…,c_n-1). We have f'∈ M^* (since f∈ M^*) and Δ f'∉ M (since g=Δ f' by definition), in contradiction to (<ref>). (<ref>)(<ref>): Let A={1,…,k}. There is a bijection between binary operations f:A^2→ A and (k× k)-matrices (a_ij) via a_ij=f(i,j) for i,j∈{1,…,k}. Note that rows and colums of (a_ij) are just the function tables (f(i,1),…,f(i,k)) and (f(1,j),…,f(k,j)) of the translations f(i,x) and f(x,j). Therefore f∈ M^* (i.e., f⊆ M by definition) is equivalent to the property that all rows and colums of (a_ij) belong to Γ_M (since the colums of Γ_M are just the function tables of the unary functions in M), i.e., Γ_M (a_ij). Further, Δ f∈ M is equivalent to the property that the diagonal (a_11,…,a_kk) of (a_ij) belongs to Γ_M. Thus condition (<ref>) is equivalent to the reflexivity (because C⊆ M) and transitivity of Γ_M (according to <ref>), and therefore to Γ_M being a generalized quasiorder. The following corollary is a simple tool to construct functions in the u-closure of a monoid. Let A={1,…,k} and M≤ A^A. If, for a binary operation h:A^2→ A, we have h∈(M)^*, in particular if h∈ M^*, then Δ h∈M. The statement is just <ref>(<ref>) for the u-closed monoid M. We mention further that h∈(M)^* is equivalent to Γ_M V for the matrix V:=(h(i,j))_i,j∈ A. § THE GALOIS CONNECTION - The preservation property induces a Galois connection between unary mappings and generalized quasiorders given by the operators Q :={h∈ A^A|∀ρ∈ Q: hρ} (endomorphisms) and M :={ρ∈(A)|∀ h∈ M: hρ} (generalized quasiorders) for M⊆ A^A and Q⊆(A). The corresponding Galois closures are M and Q. Now we can show one of our main results, namely that the u-closed monoids are just the Galois closures with respect to the Galois connection -. As a consequence (as shown in <ref> and <ref>) we can answer the questions raised in the Introduction. Let M⊆ A^A. Then we have: M= M. At first we observe M⊆ M (this holds for every Galois connection), M⊆M=Γ_M, in particular MΓ_M, and by <ref>(<ref>) we know Γ_M∈(A). Thus Γ_M∈ M. Consequently we get M⊆ M=_<ref>(<ref>) M⊆Γ_M=M, and we are done. 
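Together with the corollary above, condition (<ref>) of the theorem suggests a direct way to compute the u-closure of a monoid on a very small base set: add the constants and repeatedly adjoin the diagonal Δ f of every binary operation f all of whose translations lie in the current monoid, re-closing under composition. The sketch below is our own illustration of this idea (with A identified with {0,…,k-1} and unary maps stored as tuples), not code from the paper.

from itertools import product

def compose(f, g):                          # (f o g)(x) = f(g(x))
    return tuple(f[g[x]] for x in range(len(g)))

def monoid_closure(funcs, k):
    M = set(funcs) | {tuple(range(k))}      # include the identity map
    while True:
        new = {compose(f, g) for f in M for g in M} - M
        if not new:
            return M
        M |= new

def u_closure(gens, k):
    constants = {tuple([a] * k) for a in range(k)}
    M = monoid_closure(set(gens) | constants, k)
    while True:
        grew = False
        for rows in product(sorted(M), repeat=k):   # a binary operation, given by its rows
            cols = tuple(zip(*rows))
            if all(c in M for c in cols):           # all translations of f lie in M
                diag = tuple(rows[i][i] for i in range(k))
                if diag not in M:                   # Delta f is missing: adjoin it
                    M = monoid_closure(M | {diag}, k)
                    grew = True
        if not grew:
            return M

Applied to the trivial monoid {𝕀_A}∪ C this procedure returns the monoid itself, consistent with the result below that T is u-closed.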
In addition to the characterization in <ref> we give some further consequenses of Theorem <ref>, characterizing M^* (<ref>(<ref>)) and u-closed monoids M (<ref>(<ref>)). Since every monoid can be given as endomorphism monoid of invariant relations, M= Q, we also look for the characterization of those Q with u-closed endomorphism monoid (<ref>(<ref>)): * (M)^*= M for M⊆ A^A. * The following are equivalent for M≤ A^A: (i) M is u-closed, (i)' M^* is a clone, (i)” Γ_M∈(A), (ii) M= Q for some Q⊆(A), (iii) M^*= Q for some Q⊆(A), where the same Q can be taken in (ii) and (iii). * The following are equivalent for Q⊆(A): (i) Q is u-closed, (i)' ( Q)^* is a clone, (i)” Γ_ Q∈(A), (ii) ∃ Q'⊆(A): Q= Q', (ii)' ∃ Q'⊆(A): [Q]_∃,,,==[Q']_∃,,,= (closure under positive formulas) (iii) ∃ Q'⊆(A): ( Q)^*= Q', where the same Q' can be taken in (ii) and (iii). Instead of “ ∃ Q'⊆(A)” one can take “ ∃ρ∈(A)” and Q'={ρ}. (<ref>): Let Q:= M. Then Ξ(Q) by <ref>, i.e., Q=( Q)^* (cf. (<ref>)). Thus Q=( M)^*=(M)^* by <ref>. (<ref>): For (i)(i)'(i)” see <ref>(<ref>) and <ref>(<ref>). (i)(ii): Take Q:= M. If M is u-closed, then M=M=_(<ref>) Q. (ii)(iii): ( Q)^*= Q directly follows from <ref>(<ref>). (iii)(i)' is obvious, because M^*= Q is a clone. (<ref>) is just (<ref>) for M= Q. (ii)(ii)' follows from the properties of the Galois connection - (in particular [Q]_∃,,,== Q, cf. <ref>). Further note, that Q'={Γ_ Q} also will do the job (instead of arbitrary Q') since Q=Γ_ Q. Now we are also able to answer the question which (sets of) relations satisfy the property Ξ (cf. <ref>): The following are equivalent for Q⊆(A): (i) Ξ(Q) holds, i.e., Q=( Q)^*, (ii) ∃ Q'⊆(A): Q= Q', (ii)' ∃ Q'⊆(A): [Q]_∃,,==[Q']_∃,,= (closure under primitive positive formulas), (ii)” [Q]_∃,,==[[Q]_∃,,=∩(A)]_∃,,=. (i)(ii): Assume Q=( Q)^* and let M:= Q and Q':= M. M is u-closed (since M^* is a clone), therefore Q=M^*=(M)^*=_<ref>(<ref>) Q'. (ii)(i): Assume Q= Q' (Q'⊆(A)). Then Q= Q' and we have Q= Q'=_<ref>(<ref>)( Q')^*=( Q)^*, consequently Ξ(Q) by Definition <ref>. (ii)(ii)' follows from the properties of the Galois connection - (in particular [Q]_∃,,== Q, cf. <ref>). (ii)'(ii)” is obvious. We know from <ref> that ρ∈(A) implies Ξ(ρ). The converse is not true: Ξ(ρ) does not imply ρ∈(A) in general! A counterexample is the binary relation ρ={(i,j)| 1≤ i,j≤ n, j≤ i+1, j≠ i-1} in <cit.> on an at least 5-element set A={1,…,n}. This relation is strongly C-rigid (what means ρ={𝕀_A}∪ C) and reflexive, but not transitive, i.e., ρ∉(A). Nevertheless Ξ(ρ) holds. To see this we have to show ρ=M^*, where M:=ρ, i.e., M={𝕀_A}∪ C=T. M^* is u-closed (what we shall prove in <ref>), thus M^*=Γ_M by <ref>(<ref>). By <cit.>, for a clone F, if its unary part F^(1) equals {𝕀_A}∪ C, then F={𝕀_A}∪ C. Consequently, for F=M^* we have F^(1)=M={𝕀_A}∪ C and therefore we get M^*={𝕀_A}∪ C=ρ. For n=5 we get the relation ρ shown in Figure <ref> (this is a so-called tournament). Nevertheless, by <ref>, ρ must be “constructively equivalent” to some Q'⊆(A), i.e., [ρ]_∃,,==[Q']_∃,,=. In this concrete case we can take Q'={Γ_M}, i.e., we have [ρ]_∃,,==[Γ_M]_∃,,=, since ρ=Γ_M. Before we investigate the u-closure for concrete monoids we show how this closure behaves under taking products and substructures. For this we need some notation. Let g_i∈ A_i^A_i (i∈{1,2}) and A=A_1× A_2. Then g:=g_1⊗ g_2 denotes the unary operation g∈ A^A defined componentwise by g(a_1,a_2):=(g_1a_1,g_2a_2). For M_i⊆ A_1^A_i we put M_1⊗ M_2:={g_1⊗ g_2| g_1∈ M_1 and g_2∈ M_2}. 
Further, for ρ_i∈[m](A_i) and Q_i⊆(A_i), i∈{1,2}, let ρ_1⊗ρ_2 :={((a_1,b_1),…,(a_m,b_m))| (a_1,…,a_m)∈ρ_1 and (b_1,…,b_m)∈ρ_2}, Q_1⊗ Q_2 :={ρ_1⊗ρ_2|ρ_1∈ Q^(m)_1 and ρ_2∈ Q^(m)_2, m∈_+}. Remark: For monoids M_1, M_2, the product M_1⊗ M_2 is isomorphic (as monoid) to the direct product M_1× M_2. Let 𝕀_A_i∈ M_i⊆ A_i^A_i, i∈{1,2} and A=A_1× A_2. Then we have * _A(M_1⊗ M_2)=(_A_1M_1)⊗ (_A_2M_2). * M_1⊗ M_2=M_1⊗M_2. (<ref>): According to <cit.> and because the identity map belongs to M_i, for the invariant relations we have _A(M_1⊗ M_2)=(_A_1M_1)⊗ (_A_2M_2). Thus, in order to prove (<ref>), it only remains to show that ρ_1⊗ρ_2∈(A) ρ_1∈(A_1) and ρ_2∈(A_2) for ρ_1∈[m](A_1) and ρ_2∈[m](A_2). But this follows from (notation see <ref>) ρ_1 (a_ij) and ρ_2 (b_ij) (ρ_1⊗ρ_2) ((a_ij,b_ij)), and (a_11,…,a_mm)∈ρ_1 and (b_11,…,b_mm)∈ρ_2 ((a_11,b_11),…,(a_mm,b_mm))∈ρ_1⊗ρ_2, what is clear from the definitions <ref>. (<ref>): Since the trivial equivalence relations Δ_A_i and ∇_A_i belong to _A_iM_i (i∈{1,2}), we can apply (_A_1Q_1)⊗ (_A_2Q_2)=_A(Q_1⊗ Q_2) from <cit.> (restricting to unary mappings, i.e., taking instead of and Q_i= M_i) in order to get the second equality in the following conclusions: M_1⊗M_2 <ref>=( M_1)⊗( M_2) =(( M_1)⊗ ( M_2)) (<ref>)=(M_1⊗ M_2) <ref>=M_1⊗ M_2. Let M⊆ A^A and B∈ M for some ∅≠ B⊂ A. Then _B(M_B)=(_AM)_B. According to <cit.> we have _B(M_B)=(_AM)_B. Thus it remains to show that the generalized quasiorders (which are special invariant relations) correspond to each other, more precisely, we have to prove that (B)=((A))_B. “⊆”: Let σ∈[m](B) and ρ:=σ∪{(a,…,a)∈ A^m| a∈ A∖ B}. Then σ=ρ_B. Moreover, ρ is reflexive by construction. To show transitivity, let ρ (a_ij)∈ A^m× m. If (a_ij)∈ B^m× m, then σ(a_ij) and we get (a_11,…,a_mm)∈σ⊆ρ (since σ is transitive). If some row or column of (a_ij) contains an element a∈ A∖ B, then by definition of ρ this row or column must be (a,…,a). Thus a_ij=a for all i,j, and the diagonal obviously belongs to ρ. Thus ρ is transitive, i.e., ρ∈(A). “⊇”: If ρ∈(A) then σ:=ρ_B∈(B) is obviously reflexive (on B) and also transitive (since each matrix (b_ij)∈ B^m× m can be considered as a matrix in A^m× m). Thus σ∈(B). We do not consider here the other side of the Galois connection, i.e., the Galois closures of the form Q for Q⊆(A). In general, they are not relational clones (contrary to the Galois connection -). In particular, (A) is not a relational clone. It contains all diagonal relations and is closed under several relational clone operations, but, e.g., not under (i.e., deleting of coordinates). For example, the relation ρ:={(0,0,0),(1,1,1),(2,2,2),(2,0,1),(1,1,2)} on A={0,1,2} is a generalized quasiorder (this is easy to check), but _2,3(ρ)={(x,y)|∃ a:(a,x,y)∈ρ}={(0,0),(1,1),(2,2),(0,1),(1,2)} is not (because it is not transitive). § MINIMAL U-CLOSED MONOIDS In this section we investigate some special monoids and their u-closure. For a unary function f∈ A^A let M_f:=f∪ C. This is the least monoid containing f and all constants. What can be said about the u-closure of such monoids M_f? In the following we have to deal much with the relation Γ_M for a monoid M=M_f and with the situation that Γ_M V for some k× k-matrix V=(v_ij), k:=|A|. Therefore it is convenient to identify a g∈ M with the vector _g=(ga_1,…,ga_k) (cf. <ref>, here we assume A={a_1,…,a_k} where A is implicitly ordered by the indices of a_i). Thus we can say that a row or column of V equals some “vector” (k-tuple) g∈ M and write =g meaning =(ga_1,…,ga_k). 
This will be used very often in the proofs (in great detail in the proof of <ref>). Furthermore, let _i,*:=(v_i1,…,v_ik) and _*,i:=(v_1i,…,v_ki) denote the i-th row and the i-th column of V=(v_ij), respectively (i∈{1,…,k}). Note that Γ_M_f is reflexive since M_f contains all constants. For the trivial monoid T:=M_𝕀_A={𝕀_A}∪ C we have: The monoid T={𝕀_A}∪ C is u-closed. Let A={a_1,…,a_k}. We show that Γ_T is a generalized quasiorder (then we are done due to <ref>(<ref>)). Γ_T is reflexive, thus it remains to show that Γ_T is transitive. Let V=(v_ij)_i,j∈{1,…,k} be a k× k-matrix such that Γ_T V, i.e., each row and each column is one of the “vectors” g∈ T, namely 𝕀_A=(a_1,…,a_k) or one of the constants _1=(a_1,…,a_1),…, _k=(a_k,…,a_k) (_i denotes the constant mapping _i(x)=a_i). If v_jj=a_i for some i≠ j, then Γ_T V can hold only if all rows and columns are equal to the constant _i (since _i is the only vector where a_i is on the j-th place), in particular, the main diagonal of V also equals _i and therefore belongs to Γ_T. It remains the case v_ii=a_i for all i∈{1,…,k}. Then the diagonal of V is 𝕀_A, also belonging to Γ_T. Consequently, Γ_T is transitive. For |A|=2 there exist only two monoids containing all constants, namely T and A^A, both are u-closed (the first by <ref>, the second trivially). Therefore, in the following, we always can assume |A|≥ 3. We are going to characterize the minimal u-closed monoids, i.e., u-closed monoids M≤ A^A which properly contain no other u-closed monoid except the trivial monoid T={𝕀_A}∪ C. Such minimal u-closed monoids must be generated by a single function, i.e., they must be of the form M_f for some unary f, moreover, M_f can be assumed to be C-minimal, i.e., minimal among all monoids properly containing T (otherwise M_f'< M_f would imply M_f'≤M_f and M_f could be canceled in the list of minimal u-closed monoids). It is well-known which unary functions f generate a C-minimal monoid M_f≤ A^A (it follows, e.g., from <cit.>), namely if and only if f∈ A^A is a nontrivial (i.e., f∉ T) function satisfying one of the following conditions: * f^2=f, * f^2 is constant, * f is a permutation, such that f^p=𝕀_A for some prime number p. As shown in <cit.>, among these functions are those for which the quasiorder lattice f is ma­ximal among all quasiorder lattices (on A), equivalently, for which f is minimal (among all endomorphism monoids of quasiorders). These functions are of so-called type , and , defined as follows: * f^2=f, * f^2 is constant, say v, and |{x∈ A| fx=v}|≥ 3, * f is a permutation with at least two cycles of length p, such that f^p=𝕀_A for some prime number p. Note that f={𝕀_A,f} for f of type and , and f={𝕀_A,f,f^2,…,f^p-1} is a cyclic group of prime order for f of type . Surprisingly it turns out (see Theorem <ref>) that for each candidate M_f with f satisfying (i)–(iii), the u-closure M_f is either not a minimal u-closed monoid or M_f itself is already u-closed. Thus the minimal u-closed monoids coincide with the u-closed C-minimal monoids. We start with the functions of type , and . Let f be a function of type , or . Then M_f is a minimal u-closed monoid, in particular M_f=M_f. Moreover we have M_f= M_f. Clearly, f∪ C=M_f⊆ M_f⊆ M_f. But we have M_f=f∪ C as it was explicitly stated in <cit.> (but it already follows from the results in <cit.>, <cit.> and also from <cit.>). Thus we have equality instead of the above inclusions and M_f is u-closed (by Theorem <ref>). 
Since M_f has no proper submonoids except T because f satisfies one of the above conditions (i)–(iii), it is a minimal u-closed monoid. Let 3≤|A|<∞. The minimal u-closed monoids M≤ A^A are exactly those of the form M_f=f∪ C where f∈ A^A is nontrivial and satisfies (I) f^2=f, or (II') f^2 is a constant and |A|≥ 4, or (III') f^p=𝕀_A for some prime p such that f has at least two fixed points or f is of type . In particular, each minimal u-closed monoid is C-minimal, too. Part 1: At first we show that M_f is u-closed for all functions of type , and of the new type ' or '. Because of Proposition <ref>, it remains to check only those functions which are of type ' or ', but not of type or , respectively. Case 1: f is of type ' but not of type , i.e., f^2 is constant, denoted by 1, |{x∈ A| fx=1}|=2 and |A|≥ 4. For simplicity we denote the elements of A by numbers, A={1,2,…,k}, where f1=1 and f2=1 (otherwise fx=2), k≥ 4. Thus f has the form as given in Figure <ref>(a). Observe that M_f={𝕀_A,f,_1,…,_k} (_i denotes the constant function i). As in the proof of <ref> it is enough to show that Γ_M_f is transitive. Assume Γ_M_f V for a matrix V=(v_ij)_i,j∈ A, i.e., the rows and colums of V all are of the form 𝕀_A=(1,2,3,…,k), f=(1,1,2,…,2) or _i=(i,i,i,…,i) (i∈{1,…,k}). We have to show that the diagonal d_V:=(v_11,…,v_mm) belongs to Γ_M_f. Step by step we reduce the cases to be checked. (a) We start with v_11=i≠ 1 for some i∈{2,…,k}. Then _1,*=_i (otherwise _1,*∉Γ_M_f), thus, for each j∈{2,…,k} we have v_1j=i what implies _*,j=_i. Consequently d_V=_i∈Γ_M_f and we are done. (b) Now we can assume v_11=1. Then _1,*, _*,1∈{𝕀_A,f,_1} what implies _2,*∈{𝕀_A,f,_1} and therefore we have v_12,v_21,v_22∈{1,2}. Let v_22=1. Then _*,2,_2,*∈{f,_1} (because f and _1 are the only elements of Γ_M_f with value 1 in the second component), in particular v_2i∈{1,2} for all i. If _*,i=_j is constant for some i≥ 3, then j∈{1,2} (because v_2i∈{1,2}) and all rows _ℓ,* must be equal to _j for all ℓ≥ 3, consequently d_V=(1,1,j,…,j)∈Γ_M_f for j∈{1,2}. If _*,i=f for some i≥ 3, then all rows _ℓ,* must be equal to f for all ℓ≥ 3 (in no other element of Γ_M_f appears 2 at the i-th place), consequently d_V=(1,1,2,…,2)∈Γ_M_f. The same arguments apply for the cases _i,*∈{f,_j} for some i≥ 3 (change the role of rows and columns). Thus it remains to consider the case that all _*,i and _i,* (i≥ 3) are neither f nor some _j. However then all these columns and rows were equal to 𝕀_A, but this cannot appear because, e.g., _3,*=𝕀_A and _*,4=𝕀_A would give v_34=4 and v_34=3, respectively, a contradiction. Note that here is used the fact k≥ 4. Now let v_22=2. Then _2,*∈{𝕀_A,_2}. If _2,*=𝕀_A, then we must have _*,j=_j for j≥ 3, thus d_V=(1,2,3,…,k)∈Γ_M_f. If _2,*=_2, then we must have _*,1=𝕀_A (recall v_11=1). Consequently, _j,*=_j for j≥ 3 and we also get d_V=(1,2,3…,k)∈Γ_M_f. Case 2: f is of type ' but not of type , i.e., f^p=𝕀_A for some prime p and the permutation f has only one cycle of length p but m fixed points z_1,…,z_m where m≥ 2. For simplicity let A={0,1,…,p-1,z_1,…,z_m} where 0,1,…,p-1 denote the elements of the cycle, i.e., f=(0 1 … p-1)(z_1)…(z_m), moreover let k:=p+m=|A|. Thus Γ_M_f consists of the n-tuples f^i=(i,i+1,…,i+p-1,z_1,…,z_m) (i∈_p={0,1,…,p-1}, all counting in _p is done modulo p) and all constants _a=(a,a,…,a), a∈ A. We have to show that Γ_M_f is transitive. Thus let Γ_M_f V where V is an (k× k)-matrix V=(v_ij)_i,j∈ A (here we enumerate the rows and columns by the elements of A). 
If v_00=z is a fixed point z∈{z_1,…,z_m} then all columns and rows of V (as elements of Γ_M_f) must be equal to _z, thus d_V=(z,…,z)∈Γ_M_f. Let v_00=i for some i∈_p. Then v_*,0∈{_i,f^i}. Assume v_*,0=_i. If there exists some row _j,*=f^i (for some j∈_p), then v_j,z=z and therefore _*,z=z for each z∈{z_1,…,z_m}. Thus the last m columns are all different, what implies _a,*=f^i for all a∈ A (here we need m≥ 2). Consequently, d_V=(i,i+1,…,i+p-1,z_1,…,z_m)∈Γ_M_f. Otherwise (if such a row _*,j=f^i does not exist), all rows _j,* must be equal to _i (j∈_p), what implies _*,z=_i for z∈{z_1,…,z_m}, consequently d_V=(i,…,i,…,i)∈Γ_M_f. The same arguments apply to the case v_0,*=_i resulting in d_V∈Γ_M_f. Thus it remains to consider the case _*,0=f^i and _0,*=f^i. However, this case cannot occur since then _z_1,*=_z_1 and _*,z_2=_z_2 leads to the contradiction v_z_1,z_2=z_1 and v_z_1,z_2=z_2 (note m≥ 2). Part 2: Now we show that there are no more minimal u-closed monoids than those of type , ' and '. There are only the following two cases (A) and (B) for functions f to be considered for which M_f is C-minimal (i.e., satisfies (i)–(iii)) but which are not of type , ' or '. We are going to show that for these f the u-closure M_f is not minimal what will finish the proof of the Theorem. Case (A): f^2 is constant and |A|=3. There is only one (up to isomorphism) such function f on a 3-element set and we use the notation from Figure <ref>(A). Then M_f={𝕀_A,f,_0,_1,_2}. Consider the binary mapping h defined by the following table: [ h 0 1 2 ∈Γ_M_f; 0 0 0 0 _0; 1 0 0 1 f; 2 0 1 2 𝕀_A; ] Clearly h∈ M_f^* (as indicated in the last column). Therefore (cf. <ref>) g:=Δ h∈M_f where g (see Figure <ref>(b)) is a function of type . Thus, by <ref>, we get M_g=M_g⊂M_f, i.e., M_f is not minimal u-closed. Case (B): f^p=𝕀_A, f consists of a single p-cycle and has at most one fixed point. For f we use the notation as in Figure <ref>(B), A={0,1,…,p-1,z}. All computation in _p={0,1,…,p-1} is done modulo p. If f has no fixed point, z can be ignored in all what follows. We have M_f={𝕀,f,f^2,…,f^p-1,_0,_1,…,_p-1,_z}. Consider the binary mapping h defined by the following table: [ h 0 1 p-1 z ∈Γ_M_f; 0 0 1 p-1 z 𝕀_A; 1 1 2 0 z f; 2 2 3 1 z f^2; ⋮ ⋮ ⋮ ⋮ ⋮ ⋮; p-1 p-1 0 p-2 z f^p-1; z z z z z _z; ] Clearly h∈ M^* (indicated in the last column). Therefore (cf. <ref>) g:=Δ h∈M_f and g is the permutation g:x↦ 2x for x∈ Z_p and gz=z. Note that 0 is an additional fixed point. First we consider the case that p≥ 5. In the group generated by g there must exist an element g' of prime order q with q<p. Since p≥ 5, g has either more than one q-cycle or at least two fixed points, i.e., g' is of type '. Since g'∈g⊆M_f we get (with <ref>) M_g'=M_g'⊂M_f, i.e., M_f is not minimal u-closed. It remains to consider the cases p=2 and p=3. For p=3, we get g=(0)(12)(z) (in cycle notation) if there exists a fixed point z what is a function of type ', and we can continue as above with g'. Otherwise we have g=(0)(12). For p=2 there must exist the fixed point z (since |A|≥ 3) and we have f=(01)(z), what is a function of the same form as g in case p=3 (up to isomorphism). Thus we can continue with g. Take the function h' given by the table [ h' 0 1 2 ∈Γ_M_g; 0 0 0 0 _0; 1 0 1 2 𝕀_A; 2 0 2 1 g; ] Then h'∈ M_g^* (as indicated in the last column) and therefore g”:=Δ h' belongs to M_g⊆M_f. But g” is a function of type (g'0=0, g'2=g'1=1). Thus, as above, M_g”=M_g”⊂M_f, i.e., M_f is not minimal u-closed. 
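The finite verifications that recur in these proofs (that a relation of the form Γ_M is reflexive and satisfies the generalized transitivity condition: every k×k matrix all of whose rows and columns lie in Γ_M has its diagonal in Γ_M) can also be carried out mechanically on small carrier sets. The following Python sketch is our own illustration and not part of the formal development; it confirms, for instance, that Γ_T is a generalized quasiorder for the trivial monoid T={𝕀_A}∪ C on a 3-element set, in line with the proposition above. Brute force is of course only feasible when |A| and |Γ_M| are small.

```python
# Our own illustration (not part of the paper): brute-force check that a
# relation is a generalized quasiorder, i.e. reflexive and "transitive" in the
# matrix sense used in the proofs above: whenever all rows and all columns of a
# k x k matrix lie in the relation, its diagonal lies in the relation as well.
from itertools import product

def gamma(A, M):
    """Gamma_M: the set of tuples (g(a_0), ..., g(a_{k-1})) for g in M."""
    return {tuple(g[a] for a in A) for g in M}

def is_generalized_quasiorder(A, rel):
    k = len(A)
    # reflexivity: every constant tuple belongs to rel
    if any(tuple([a] * k) not in rel for a in A):
        return False
    # generalized transitivity: enumerate matrices whose rows lie in rel,
    # keep those whose columns also lie in rel, and test the diagonal
    for rows in product(rel, repeat=k):
        cols = (tuple(rows[i][j] for i in range(k)) for j in range(k))
        if all(c in rel for c in cols):
            if tuple(rows[i][i] for i in range(k)) not in rel:
                return False
    return True

if __name__ == "__main__":
    A = [0, 1, 2]
    # trivial monoid T = {identity} together with all constants, maps given as dicts
    T = [{a: a for a in A}] + [{a: c for a in A} for c in A]
    print(is_generalized_quasiorder(A, gamma(A, T)))   # expected: True
```

Replacing T by M_f for a concrete f reproduces, on small sets, the kind of check carried out case by case in the proof above.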
Comparing Theorem <ref> with the above mentioned results from , we can conclude that there are monoids M≤ A^A which are characterizable by generalized quasiorders but not by quasiorders, i.e., we have M= M but M⫋ M (namely those M_f with f of type ' or ' but not of type or ). With other words, generalized quasiorders are really more powerful than quasiorders (or congruences). For |A|=3, M. Behrisch (personal communication) computed all monoids of the form Q for Q⊆(A) and of the form Q for Q⊆(A), their number is 89 and 71, respectively, among all 699 monoids M≤ A^A. Let A=_k={0,1,…,k-1}, k≥ 2, and let γ_k∈ A^A be the full cycle γ_k=(01… k-1), i.e., γ_k(x)=x+1 (all computation is done modulo k). Consider the monoid M_γ_k=[S_k]γ_k∪ C. It can be shown (unpublished result) that for the u-closure M_γ_k=[S_k]γ_k we need only congruence relations instead of all generalized quasiorders (cf.Theorem <ref>), i.e., we have M_γ_k= M_γ_k. This closure contains much more elements than M_γ_k (namely, if k=p_1^m_1·…· p_n^m_n is the decomposition of k into powers of different primes, then |M_γ_k|= ∑_i=1^np_i^ p_i+p_i^2+…+p_i^m_i ). In particular, M_γ_k is not u-closed (what was proved, at least for prime k=p, already with Part II, Case (B), in the proof of Theorem <ref>). For fixed A and fixed arity m∈_+, the set [m](A,F) of all m-ary generalized quasiorders of an algebra (A,F) forms a lattice with respect to inclusion (where one can restrict F to unary mappings because of <ref>). All these lattices together also form a lattice, namely ^(m)_A:={[m](A,F)| F⊆ A^A}. For m=2 this lattice was investigated in <cit.> (note (A,F)=[2](A,F)). Due to the Galois connection - the lattice ^(m)_A is dually isomorphic to the lattice of all those u-closed monoids M≤ A^A which are endomorphism monoids of m-ary generalized quasiorders. The “largest” lattice ^(k)_A with k:=|A| is isomorphic to the lattice of all u-closed monoids. With Theorem <ref> we also determined the maximal elements of this lattice ^(k)_A, which are of the form M_f with f satisfying one of the conditions , ' or '. This ^(k)_A contains all ^(m)_A for m<k via an order embedding. In fact, for m<n, there is an order embedding ϕ^m_n:^(m)_A↪^(n)_A given by ϕ^m_n([m](A,F)):=[n](A,F) with F:=[m](A,F). Conversely, there is a surjective order preserving map ψ^n_m:^(n)_A→^(m)_A given by ψ^n_m([n](A,F)):=[m](A,F). This mapping is well-defined because [m](A,F) is “contained” in [n](A,F)) since [m](A,F)={ρ∈[m](A)| A^n-m×ρ∈[n](A,F)} where A^n-m×ρ={(a_1,…,a_n-m,b_1,…,b_m)| a_1,…,a_m∈ A, (b_1,…,b_m)∈ρ} (it is easy to see that A^n-m×ρ is a generalized quasiorder if and only if ρ is). Thus ρ↦ A^n-m×ρ is an order embedding from [m](A,F) into [n](A,F). § CONCLUDING REMARKS An algebra (A,F) is called affine complete if every function compatible with all congruence relations of (A,F) is a polynomial function, equivalently (for finite A), if (A,F) is the clone [clone]F∪ C generated by F and the constants C. With the notation introduced in (<ref>) (and due to Remark <ref>) we have: (A,F) affine complete [clone]F∪ C=(A,F), ∃ Q⊆(A): .[clone]F∪ C=M^* for M:= Q. Instead of equivalence relations we may now consider other relations which also satisfy the property Ξ (cf. <ref>). This leads to the notion generalized quasiorder complete, or -complete for short, which can be defined and characterized as follows: (A,F)-complete: [clone]F∪ C=(A,F) ∃ Q⊆(A): .[clone]F∪ C=M^* for M:= Q ∃ u-closed M≤ A^A: [clone]F∪ C=M^*. 
As an intermediate step one might introduce -complete algebras (replacing by above). Clearly, affine completeness implies -completeness (but not conversely). Thus it is natural to ask which algebraic properties of affine complete algebras remain valid for -complete algebras. Moreover, what can be said about varieties generated by -complete algebras? We recall that a variety 𝒱 is called affine complete, if all algebras A∈𝒱 are affine complete. Similarly, we can define a -complete variety by the property that all its algebras A∈𝒱 are -complete. Hence, by our definition, -complete varieties can be considered a generalization of the affine complete varieties. It is known that any affine complete variety is congruence distributive (see e.g. <cit.>). There arises the question what are the properties of -complete varieties, could they be still congruence distributive? In the paper <cit.> also a characterization of affine complete arithmetical varieties is established (A variety is called arithmetical, if any algebra in it is congruence distributive and congruence permutable.) Therefore, it is meaningful to ask if there exists any characterization for -complete arithmetical algebras. We mention some further topics for research: - Characterize the u-closed monoids which are already given by their quasiorders or congruences (cf. Remarks <ref>, <ref>), i.e., monoids M with the property M= M= M or M= M= M. - Characterize the Galois closures Q, cf. <ref>. - Investigate the lattices ^(m)_A (Remark <ref>) and their interrelations. Acknowledgement. The research of the first author was supported by the Slovak VEGA grant 1/0152/22. The research of the third author was carried out as part of the 2020-1.1.2-PIACI-KFI-2020-00165 “ERPA” project – supported by the National Research Development and Innovation Fund of Hungary. § REMARKS BY TWO OF THE COAUTHORS In June 2022, a Honorary colloquium on the occasion of Reinhard Pöschel's 75th birthday was held in Dresden. There R. Pöschel presented a talk containing the basics of this article (<cit.>). The colloquium was organized by M. Bodirsky and M. Schneider, who at the same time informed about a forthcoming topical collection of Algebra Universalis, which will be dedicated to R. Pöschel. At that time, the full version of the presented results was not yet written. We, the co-authors of the results, somehow also would like to contribute to this honorary commemoration and therefore here – because we cannot submit it to the topical collection – we use the presentation of our common results as an opportunity to express our deep respect and gratitude to Reinhard, for his inventiveness, creativity, energy, and for his kindness. For more than 16 years we both have been working successfully together with Reinhard who was the initiator of many of our joint works. Our thanks also go to Martin Schneider for his activities. June 2023 Danica Jakubíková-Studenovská and Sándor Radeleczki ' ' 99999 url<#>1urlprefixURL [BoaV1973]BoaV1973 J.M. Boardman and R.M. Vogt, Homotopy invariant algebraic structures on topological spaces. Lecture Notes in Mathematics, Vol. 347, Springer-Verlag, Berlin-New York, 1973. [BodKKR1969]BodKKR69a V.G. Bodnarčuk, L.A. Kalužnin, N.N. Kotov, and B.A. Romov, Galois theory for Post algebras I. Kibernetika (Kiev) (3), (1969), 1–10, (Russian). [BruDPS1993]BruDPS93 J. Brunner, Th. Drescher, R. Pöschel, and H. Seidel, Power algebras: clones and relations. J. Inform. Process. Cybernet. EIK 29(5), (1993), 293–302. [ÉsiW2005]EsiW2005 Z. Ésik and P. 
Weil, Algebraic recognizability of regular tree languages. Theoret. Comput. Sci. 340(2), (2005), 291–321. [Grä2008]Gra2008 G. Grätzer, Universal algebra. Springer, New York, second edn., 2008, with appendices by Grätzer, Bjarni Jónsson, Walter Taylor, Robert W. Quackenbush, Günter H. Wenzel, and Grätzer and W. A. Lampe. [GräW1984]GraW1984 G. Grätzer and S. Whitney, Infinitary varieties of structures closed under the formation of complex structures. Colloq. Math. 48(1), (1984), 1–5. [Ihr2003]Ihr2003 Th. Ihringer, Allgemeine Algebra, vol. 10 of Berliner Studienreihe zur Mathematik [Berlin Study Series on Mathematics]. Heldermann Verlag, Berlin, 2003, mit einem Anhang über universelle Coalgebra von H. P. Gumm. [With an appendix on universal coalgebra by H. P. Gumm](New edition, first edition Teubner 1988). [Jak1982]Jak1982 D. Jakubíková-Studenovská, On congruence relations of monounary algebras I. Czechoslovak Math. J. 32(107)(3), (1982), 437–459. [Jak1983]Jak1983 D. Jakubíková-Studenovská, On congruence relations of monounary algebras II. Czechoslovak Math. J. 33(108)(3), (1983), 448–446. [JakPR2016]JakPR2016 D. Jakubíková-Studenovská, R. Pöschel, and S. Radeleczki, The lattice of quasiorder lattices of algebras on a finite set. Algebra Universalis 75(2), (2016), 197–220. [JakPR2018]JakPR2018 D. Jakubíková-Studenovská, R. Pöschel, and S. Radeleczki, The lattice of congruence lattices of algebras on a finite set. Algebra Universalis 79(1), (2018), Paper No. 4, 23 pp., arXiv:1612.07648. [JakPR2022]JakPR2022 D. Jakubíková-Studenovská, R. Pöschel, and S. Radeleczki, Generalized quasiorders. Talk at the Honorary colloquium on the occasion of Reinhard Pöschel’s 75th birthday, Technische Universität Dresden, June 9, 2022. [JakPR2023]JakPR2023 D. Jakubíková-Studenovská, R. Pöschel, and S. Radeleczki, The minimal closed monoids for the Galois connections End-Con. Math. Bohem. , online version: https://doi.org/10.21136/MB.2023.0133-22. [KaaM1997]KaaM1997 K. Kaarli and R. McKenzie, Affine complete varieties are congruence distributive. Algebra Universalis 38(3), (1997), 329–354. [KerPS2014]KerPS2014 S. Kerkhoff, R. Pöschel, and F.M. Schneider, A short introduction to clones. In: Proceedings of the Workshop on Algebra, Coalgebra and Topology (WACT 2013), vol. 303 of Electron. Notes Theor. Comput. Sci., Elsevier Sci. B. V., Amsterdam, 2014, pp. 107–120. [LänP1984]LaeP84 H. Länger and R. Pöschel, Relational systems with trivial endomorphisms and polymorphisms. J. Pure and Appl. Algebra 32, (1984), 129–142. [Leh2010]Leh2010 E. Lehtonen, Characterization of preclones by matrix collections. Asian-Eur. J. Math. 3(3), (2010), 457–473. [Mal1963]Mal1963 A. I. Malcev, On the general theory of algebraic systems. Amer. Math. Soc. Transl. (2) 27, (1963), 125–142. [Mar2008]Mar2008 M. Markl, Operads and PROPs. In: Handbook of algebra. Vol. 5, Elsevier/North-Holland, Amsterdam, 2008, pp. 87–140. [May1972]May1972 J.P. May, The geometry of iterated loop spaces. Lecture Notes in Mathematics, Vol. 271, Springer-Verlag, Berlin-New York, 1972. [Pös2004]Poe04a R. Pöschel, Galois connections for operations and relations. In: K. Denecke, M. Erné, and S.L. Wismath (Eds.), Galois connections and applications, vol. 565 of Mathematics and its Applications, Kluwer Academic Publishers, Dordrecht, 2004, pp. 231–258. [PösK1979]PoeK79 R. Pöschel and L.A. Kalužnin, Funktionen- und Rela­tionen­algebren. Deutscher Verlag der Wissenschaften, Berlin, 1979, Birkhäuser Verlag Basel, Math. Reihe Bd. 67, 1979. [Sze1986]Sze86 Á. 
Szendrei, Clones in universal algebra, vol. 99 of Séminaire de Mathématiques Supérieures. Les Presses de l'Université de Montréal, Montréal, 1986. Danica Jakubíková-Studenovská: Institute of Mathematics, P.J. Šafárik University, Košice, , Reinhard Pöschel: Institute of Algebra, Technische Universität Dresden, , Sándor Radeleczki: Institute of Mathematics, University of Miskolc, .
http://arxiv.org/abs/2307.00574v2
20230702135745
Bidirectional Temporal Diffusion Model for Temporally Consistent Human Animation
[ "Tserendorj Adiya", "Sanghun Kim", "Jung Eun Lee", "Jae Shin Yoon", "Hwasup Lim" ]
cs.CV
[ "cs.CV" ]
We introduce a method to generate temporally coherent human animation from a single image, a video, or random noise. This problem has been formulated as auto-regressive generation, i.e., regressing past frames to decode future frames. However, such unidirectional generation is highly prone to motion drifting over time, generating unrealistic human animation with significant artifacts such as appearance distortion. We claim that bidirectional temporal modeling enforces temporal coherence on a generative network by largely suppressing the motion ambiguity of human appearance. To prove our claim, we design a novel human animation framework using a denoising diffusion model: a neural network learns to generate the image of a person by denoising temporal Gaussian noises whose intermediate results are cross-conditioned bidirectionally between consecutive frames. In the experiments, our method demonstrates strong performance compared to existing unidirectional approaches with realistic temporal coherence.
§ INTRODUCTION Humans express their own space-time continuum in the form of appearance and motion. While existing generative models <cit.> have been successful in restoring the space, i.e., high-quality image generation with diverse human appearance, they often fail to decode the time, e.g., temporally incoherent human motion. In this paper, we introduce a method for temporal modeling of a generative network to synthesize temporally consistent human animations. Our method can generate a human animation from three different modalities: random noise, a single image, or a single video, as shown in Figure <ref>. Such generated human animations enable a number of applications including novel content creation for non-expert media artists and pre-visualization of human animation that can be further refined by professional video creators. Figure: Results from a unidirectional generative model with texture drifting over time. Temporal modeling for human animation has often been formulated as a video auto-regression problem: past frames are conditioned on a generative network to decode future frames. While such unidirectional generation (forward auto-regression) has shown smooth animation results, it often suffers from texture drifting, e.g., the texture on the clothing of a person, such as the skirt in Figure <ref>, is largely distorted along its dynamic movements. This is mainly due to the significant motion ambiguity: there exist infinitely many solutions for the future state of the human appearance. To suppress such motion ambiguity, we model the human appearance bidirectionally: a generative network decodes the human appearance in the context of both forward and backward image regression, whose intermediate features are cross-conditioned over time. Our key observation is that the bidirectional temporal consistency in feature space highly suppresses the motion ambiguity, which prevents texture drifting while maintaining temporal smoothness. We realize the idea of bidirectional temporal modeling by utilizing a generative denoising diffusion model <cit.>. A denoising network learns to iteratively remove temporal Gaussian noises to generate the human animation guided by conditioning poses and appearance style.
Inspired by message passing algorithms in dynamic programming <cit.>, we recursively cross-condition the intermediate results between consecutive frames in a bidirectional way as shown in Figure <ref>; where the temporal context of human appearance is locally consistent for consecutive frames at the first denoising step, and it is progressively refined at every denoising iteration to be globally coherent for entire frames. In the experiments, we demonstrate that our bidirectional denoising diffusion model generates human animations from a single image with a strong temporal coherence, outperforming the results from unidirectional generative models. We also show that learning from multiple frames, i.e., a person-specific video, can further improve the physical plausibility of the generated human animation. Finally, we showcase that our method can generate human animations with diverse clothing styles and identities without any conditioning images. Contribution (1) We propose a bidirectional temporal diffusion model that can generate temporally coherent human animation from random noise, a single image, or a video. (2) Inspired by dynamic message passing algorithms, we introduce the feature cross-conditioning between consecutive frames with recursive sampling, which allows embedding the motion context on the iterative denoising process in a locally and globally consistent way. (3) We quantitatively and qualitatively demonstrate that our method shows a strong temporal coherence compared to existing unidirectional methods. For an accurate evaluation, we newly create high-quality synthetic data of people in dynamic movements using graphics simulation, which provides ground-truth data, i.e., different people in the perfectly same motion. § RELATED WORKS Human Motion Transfer Given a sequence of guiding body poses and the style of human appearance, it aims to generate the human animation that satisfies the conditioning motion and style. Many existing pose transfer methods have utilized 2D keypoints as conditioning body pose maps <cit.>. However, these approaches often fail to extract the physical implications from the keypoints maps, resulting in temporally unnatural human animation. To address this motion consistency issue, methods such as EDN <cit.>, V2V <cit.>, and DIW <cit.> leveraged Markovian independence to generate auto-regressive frames. These approaches utilize Densepose <cit.> as a 2D pose conditioning and learn motion-dependent appearance for a specific person, producing realistic animation results for unseen motions. Recent advancements in this area involve embedding 3D velocities from the SMPL <cit.> model as pose conditioning <cit.>, leading to the better generation of complex transformations. However, these methods require extensive training on the videos of a single individual, limiting their generalizability to diverse people. To synthesize human animations of diverse people using a single model, several works have studied human motion transfer from a single image. Solutions include applying affine transformations <cit.>, flow-based warping <cit.>, or assuming a base 3D human model and texture mapping with DensePose <cit.> or the SMPL model <cit.>. However, these methods struggle to represent diverse surface transformations in clothing, i.e., the clothing texture looks static even under the pose changes, resulting in unnatural animations. 
Generative Diffusion Models Recently, diffusion models have demonstrated outstanding performance in high-quality image generation <cit.>, text-to-image translation <cit.>, image super-resolution <cit.>, image restoration <cit.> view synthesis <cit.> and video generation  <cit.> tasks. Compared to generative adversarial networks (GANs) <cit.>, diffusion models enable more stable training and reduced mode collapse, leading to diverse and high-quality generation results. The initial diffusion model was based on Song's  <cit.> score-matching approach, which estimates gradients using Langevin dynamics to infer data distributions. Subsequently, the DDPM <cit.> method was introduced, leveraging weighted variational bounds and becoming widely adopted. Later, NCSN <cit.> and its equivalent from ODE <cit.> emerged, presenting a more general form. One notable drawback of Markov Chain Monte Carlo (MCMC) based inference in diffusion models is the longer generation time compared to GANs. DDIM <cit.> addresses this issue by interpreting the diffusion process as an implicit function that guides the path towards real samples and introduces a non-Markovian sampling method, significantly reducing sampling time while preserving generation quality. Further improvements include distillation-based <cit.> methods for reducing denoising steps and higher-order Runge-Kutta <cit.> sampling techniques. While the previous diffusion models have shown promising generation quality, their application to the video often involves temporal incoherence such as shape drifting and appearance jitters. Unlike the previous works, this paper provides a novel and practical solution to generate temporally coherent animation, particularly focusing on humans. § METHOD Conventional Denoising Diffusion Probabilistic Models (DDPM) work by gradually diffusing isotropic Gaussian noise onto a data sample y ∈𝒟 across K steps along a Markovian chain. The process is reversed, such that y^k is approximated from the 𝒩(0,I) distribution. This technique simplifies the learning process by transforming a complex data distribution into a more manageable K step distribution. One can extend these conventional DDPM to generate a human animation driven by a sequence of human pose maps 𝒮 (e.g., densepose <cit.>) in an auto-regressive way. For example, a network is designed to generate future frames dependent on previous frames by gradually diffusing isotropic Gaussian noise onto the training sample y_t under the conditional Markovian independence, i.e., p(𝒟)= ∏_t=1^T p(y_t | y_t-1;s_t∈𝒮). However, such autoregressive models often suffer from texture drifting due to the motion ambiguity that is inherent in unidirectional prediction. To suppress the motion ambiguity, we design a bidirectional temporal diffusion model (BTDM) as shown in Figure <ref>. BTDM learns motion-dependent appearances in both forward and backward directions along the time axis. The denoising results from each step in either time direction serve as mutual conditions for generating human animation. Our model can generate realistic animations unconditionally, as well as conditionally from a single image or video. 
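For readers less familiar with DDPMs, the per-sample noising referred to above can be sketched in a few lines. This is the generic variance-preserving parameterization commonly used in DDPMs, not the exact σ(λ(k)) schedule of this work (which is given in the next subsection); function and variable names are ours.

```python
import torch

def forward_noise(y0, k, alpha_bar):
    """Sample a noisy frame y_k ~ q(y_k | y_0) in the usual DDPM way.

    y0        : clean frame, shape (B, C, H, W)
    k         : per-sample diffusion step indices, shape (B,)
    alpha_bar : cumulative product of (1 - beta) over the schedule, shape (K,)
    """
    a = alpha_bar[k].view(-1, 1, 1, 1)             # broadcast one scalar per sample
    eps = torch.randn_like(y0)                     # isotropic Gaussian noise
    yk = a.sqrt() * y0 + (1.0 - a).sqrt() * eps    # noisy sample at step k
    return yk, eps
```

The bidirectional model described next conditions each frame on the noisy version of its neighbour at the same step k, rather than on a fully denoised previous frame.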
§.§ Bidirectional Temporal Diffusion Model Given a pose sequence S={s_0, ..., s_T} and its corresponding image sequence Y={y_0,...,y_T}, modeling their mapping bidirectionally along the time axis that follows Markovian independence results in: p_f(Y|S):=∏_t=1^T p(y_t|y_t-1, s_t), p_b(Y|S):=∏_t=1^T p(y_t-1|y_t, s_t-1) In this setup, p_f represents the forward direction along the time axis, and p_b signifies the backward direction. We define a marginal distribution with isotropic Gaussian process that gradually adds increasing amounts of noise to the data sample as the signal-to-noise-ratio λ(·) decreases, following <cit.>: q(y_t^1:K|y_t^0) := ∏_k=1^K q(y_t^k|y_t^k-1), q(y_t^k|y_t^k-1) := 𝒩(y_t^k; √(σ(λ(k)))y_t^k-1, σ(-λ(k)) I) where σ(·) is the sigmoid function, K is the number of diffusion step, and I denotes the identity. Both the motion-dependent appearance distribution in Equation <ref> and the diffusion process in Equation <ref> follow a Markovian chain. Ideally, we should predict y_t and y_t-1 using perfectly denoised y_t-1^0 (in the forward direction) or y_t^0 (in the backward direction) as conditions. However, such perfectly denoised images are not available in inference time, which leads to the overfitting to the training data. For this reason, we integrate these two independent Markovian chains as follows: p_f(Y^k|S):=∏_t=1^T p(y_t^k|y_t-1^k, s_t), p_b(Y^k|S):=∏_t=1^T p(y_t-1^k|y_t^k, s_t-1), where by utilizing the noisy y_k as a condition, we concurrently diminish the reliance of motion-dependent appearance generation on the preceding frame and avert overfitting to the condition, thereby alleviating artifacts when generating unseen conditions and improving the model's generalization performance. This approach also yields more temporally consistent animations by highly limiting the motion diversity between consecutive frames. Although p_f and p_b are independent, p(y_t | y_t-1) and p(y_t-1 | p_t) are concurrently defined on the time axis t. This allows us to optimize both probabilities simultaneously. Therefore, the objective function for training is defined as follows: L = 𝔼_t ∼ [1,T], k ∼ [1,K], y^k ∼ q_k (|| (f_θ(y_t^k, y_t-1^k, λ(k), s_t, c, d_f) - y_t^0)||_2^2 + || (f_θ(y_t-1^k, y_t^k, λ(k), s_t-1, c, d_b) - y_t-1^0)||_2^2) where f_θ is a neural network whose task is to denoise the frame y_t-1^k, y_t^k given a different noisy frame y_t^k, y_t-1^k and given pose s_t, s_t-1. The λ is the log signal-to-noise-ratio function dependent on k, and c is a single image condition that determines the appearance of a target person. The notation d_f, d_b are learnable positional encoding vectors for distinguishing temporal direction. Following the method used in <cit.>, we adapt our model to predict clean images instead of noise. For the sake of brevity in expressing Equation <ref>, subsequent descriptions will be replaced with f_θ(y_t^k, y_t-1^k, s_t) and f_θ(y_t-1^k, y_t^k, s_t-1). Recursive Sampling While our approach effectively maintains spatiotemporal consistency between successive time steps, t-1 and t, for a single denoising iteration, it does not inherently guarantee this consistency throughout the entire sequence. To address this, we incorporate an autoregressive sampling technique <cit.> in a temporally bidirectional way at every denoising iteration, which ensures consistent and smooth transitions across the entirety of the generated sequences. Please refer to the supplementary document for the comprehensive algorithm of recursive sampling. 
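A minimal sketch of one training step implied by the objective above: two noisy neighbouring frames are denoised jointly, each cross-conditioned on the other, on its pose map, and on the shared appearance condition, and the two clean-image reconstruction errors are summed. All function and argument names are ours (`model` stands for f_θ, `add_noise` for a sampler of q_k, and `log_snr` for λ(·)); this illustrates the loss rather than reproducing the authors' implementation.

```python
import torch
import torch.nn.functional as F

def btdm_training_step(model, add_noise, log_snr, y_prev, y_curr, s_prev, s_curr, c, K):
    """One sketched BTDM training step for a pair of consecutive frames (t-1, t)."""
    B = y_curr.shape[0]
    k = torch.randint(1, K + 1, (B,), device=y_curr.device)   # diffusion step per sample
    lam = log_snr(k)                                           # log signal-to-noise ratio lambda(k)
    yk_prev = add_noise(y_prev, k)   # noisy y_{t-1}^k ~ q_k
    yk_curr = add_noise(y_curr, k)   # noisy y_t^k     ~ q_k

    # forward direction: denoise frame t, cross-conditioned on the noisy frame t-1
    pred_fwd = model(yk_curr, yk_prev, lam, s_curr, c, direction="forward")
    # backward direction: denoise frame t-1, cross-conditioned on the noisy frame t
    pred_bwd = model(yk_prev, yk_curr, lam, s_prev, c, direction="backward")

    # the network predicts clean images; both temporal directions are supervised jointly
    loss = F.mse_loss(pred_fwd, y_curr) + F.mse_loss(pred_bwd, y_prev)
    return loss
```

As in the objective above, this loss is taken in expectation over the frame index t and the diffusion step k.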
§.§ Bidirectional Temporal U-Net To enable BTDM, we construct a Bidirectional Temporal U-Net (BTU-Net) by modifying the U-Net architecture, as shown in Figure <ref>. This architecture consists of a network, E_a, that encodes a single image condition c; another network, E_p, that encodes poses s corresponding to t-1 and t; and a pair of U-Nets, f_θ, that accept y_t-1^k and y_t^k as input and predict the denoised human images temporally in both forward and backward directions. The multi-scale intermediate features are modulated by pose features, noise ratio λ, and temporal direction vectors (d_f and d_b) using existing Feature-wise Linear Modulation layer (FiLM) <cit.>. This pair of U-Nets shares weights and applies attention between the features encoded by E_a and intermediate features of f_θ, as shown in the bidirectional attention block in Figure <ref>-(right), which is composed of appearance and spatiotemporal block. Appearance Block This block applies cross attention between the appearance feature v_a^l, encoded by E_a from c, and the forward and backward motion features v_f^l, v_b^l from FiLM where l denotes the layer index of the intermediate features as follows: Q_f= ϕ_q^l(v_f^l), Q_b = ϕ_q^l(v_b^l) , K = ϕ_k^l(v_a^l), V = ϕ_v^l(v_a^l), v_f^l = W^l 𝐬𝐨𝐟𝐭𝐦𝐚𝐱(Q_f K^T/√(C))V + v_f^l, v_b^l = W^l 𝐬𝐨𝐟𝐭𝐦𝐚𝐱(Q_bK^T/√(C))V + v_b^l, where ϕ_q^l, ϕ_k^l, ϕ_v^l are layer-specific 1×1 convolution operators. W^l refers to learnable weights to generate cross-attended features v_f^l, v_b^l. This operation allows single image conditioning. Spatiotemporal Block The outputs v_f and v_b from the appearance block undergo cross attention as follows: Q_f= ϕ_q^l(v_f^l), Q_b = ϕ_q^l(v_b^l) , K_f = ϕ_k^l(v_b^l), K_b = ϕ_k^l(v_f^l), V_f = ϕ_v^l(v_b^l), V_b = ϕ_v^l(v_f^l), v_f^l = W^l 𝐬𝐨𝐟𝐭𝐦𝐚𝐱(Q_f K_f^T/√(C))V_f + v_f^l, v_b^l = W^l 𝐬𝐨𝐟𝐭𝐦𝐚𝐱(Q_bK_b^T/√(C))V_b + v_b^l where ϕ_q^l, ϕ_k^l, ϕ_v^l are different layer-specific 1×1 convolution operators from appearance block. This block ensures that t-1 and t exhibit spatiotemporal consistency. We adopt the bidirectional attention block for the feature at specific resolutions, i.e., 32 × 32, 16 × 16, and 8 × 8. §.§ Training and Inference for Various Tasks Single Image Animation Our BTDM, trained on multiple videos, can be directly applied to generate realistic human animation results for unseen people and poses. Similar to existing one-shot generation methods <cit.>, we further fine-tune our BTU-Net on the given single image to enhance the visual quality. For this, the conditioning image c is set as a single image sequence Y={c} and the pose sequence S={g(c)}, where g(·) is a pose estimation function (e.g. DensePose). This setup aligns with the training process outlined in Equation <ref>. Person-Specific Animation Our method can be applied to the task of generating novel animations by training a single person's video. To adapt our method to this task, we train our BTDM framework using the objective function from Equation <ref>, excluding the image condition c. Unconditional animation Moreover, our method facilitates the creation of temporally consistent animations without any appearance-related conditions. For such unconditional generation, we trained our model with the condition c set to ∅ of Equation <ref>. § EXPERIMENTS We validate our bidirectional temporal diffusion model on two tasks: generating human animation from a single image and generating human animation by learning from a person-specific video. 
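For concreteness, the two cross-attention stages of the bidirectional attention block described in the previous section can be sketched as follows. This is a single-head illustration with 1×1-convolution projections; module and variable names are ours, and it is an illustrative re-implementation rather than the released code.

```python
import torch
import torch.nn as nn

class BidirectionalAttentionBlock(nn.Module):
    """Illustrative single-head version of the appearance + spatiotemporal blocks."""

    def __init__(self, ch):
        super().__init__()
        # appearance-block projections (shared by the forward and backward streams)
        self.q_a, self.k_a, self.v_a = nn.Conv2d(ch, ch, 1), nn.Conv2d(ch, ch, 1), nn.Conv2d(ch, ch, 1)
        self.w_a = nn.Conv2d(ch, ch, 1)
        # spatiotemporal-block projections
        self.q_t, self.k_t, self.v_t = nn.Conv2d(ch, ch, 1), nn.Conv2d(ch, ch, 1), nn.Conv2d(ch, ch, 1)
        self.w_t = nn.Conv2d(ch, ch, 1)
        self.scale = ch ** -0.5

    def attend(self, q, k, v):
        B, C, H, W = q.shape
        q, k, v = (x.flatten(2).transpose(1, 2) for x in (q, k, v))          # (B, HW, C)
        att = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)      # (B, HW, HW)
        return (att @ v).transpose(1, 2).reshape(B, C, H, W)

    def forward(self, v_f, v_b, v_app):
        # appearance block: both motion streams query the appearance features
        v_f = self.w_a(self.attend(self.q_a(v_f), self.k_a(v_app), self.v_a(v_app))) + v_f
        v_b = self.w_a(self.attend(self.q_a(v_b), self.k_a(v_app), self.v_a(v_app))) + v_b
        # spatiotemporal block: each stream queries the other stream's features
        out_f = self.w_t(self.attend(self.q_t(v_f), self.k_t(v_b), self.v_t(v_b))) + v_f
        out_b = self.w_t(self.attend(self.q_t(v_b), self.k_t(v_f), self.v_t(v_f))) + v_b
        return out_f, out_b
```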
We also show that our model can generate diverse human animation with an unconditional setting (i.e., generating human animation from random noise). §.§ Single Image Animation Dataset We use two datasets that can effectively validate the quantity and quality of temporal coherence in the generated human animation. 1) Graphics simulation: for quantitative evaluation, we construct a high-quality synthetic dataset using a graphics simulation tool for soft 3D clothing animation <cit.> which provides perfect ground truth data for the motion transfer task (i.e., different people in the exact same motion) with physically plausible dynamic clothing movements. The dataset includes a total of 80 training videos and 19 testing videos, each of which lasts 32 seconds at 30 FPS. We customize the 3D human appearance using CharacterCreator <cit.>, and we use Mixamo motion data <cit.> for animation. The pose map is obtained by rendering the IUV surface coordinates of a 3D body model (i.e., SMPL<cit.>). Please see the supplementary materials for more details of our graphics simulation data. 2) UBC Fashion dataset <cit.>: it consists of 500 training and 100 testing videos of individuals wearing various outfits and rotating 360 degrees. Each video lasts approximately 12 seconds at 30 FPS. We apply DensePose <cit.> to obtain pose UV maps. We use this dataset for the qualitative demonstration on real images since it does not provide ground truth data with the exact same motion. During training, the conditioning image is randomly paired in both datasets. Baselines We compare our method to existing unidirectional temporal models: Thin-Plate Spline Motion Model for Image Animation (TPSMM) <cit.> and Motion Representations for Articulated Animation (MRAA) <cit.> are designed to predict forward optical flow to transport the pixel from a source to target pose, following a rendering network. Both methods were trained on each dataset from scratch using the provided scripts and recommended training setup. All methods are trained at a resolution of 256×256. Metric To evaluate the quality of the generated human animations, we employ four key metrics: 1) SSIM (Structural Similarity Index) <cit.>: This metric assesses the similarity between local patterns of pixel intensities in normalized luminance and contrast spaces. It essentially quantifies the structural similarity between the generated and ground truth images. 2) LPIPS (Learned Perceptual Image Patch Similarity) <cit.>: This metric provides an evaluation of cognitive similarity between synthesized images and ground truth images. It achieves this by comparing the perceptual features extracted from both, utilizing a pre-trained deep neural network. 3) tLPIPS (Temporal Learned Perceptual Image Patch Similarity) <cit.>: This metric extends the LPIPS measure to temporal domain, evaluating the plausibility of change across consecutive frames. It is defined as tLPIPS = ||LPIPS(y_t, y_t-1) - LPIPS(g_t, g_t-1)||, where y and g represent the synthesized and ground truth images, respectively. 4) FID (Fréchet Inception Distance) <cit.>: This metric measures the distance between the distributions of synthesized and real images in the feature space of a pre-trained Inception network. Result The quantitative results for the graphics simulation data are presented in Table <ref>. Our BTDM method outperforms other methods in all metrics. 
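As a concrete reading of the tLPIPS metric used in these comparisons, the quantity can be computed roughly as follows, assuming a pretrained LPIPS distance function `lpips_fn`; averaging the per-pair differences over time is our choice of reduction.

```python
def t_lpips(gen_frames, gt_frames, lpips_fn):
    """tLPIPS: |LPIPS(y_t, y_{t-1}) - LPIPS(g_t, g_{t-1})|, averaged over frame pairs."""
    vals = []
    for t in range(1, len(gen_frames)):
        d_gen = lpips_fn(gen_frames[t], gen_frames[t - 1])   # perceptual change in the generated video
        d_gt = lpips_fn(gt_frames[t], gt_frames[t - 1])      # perceptual change in the ground truth video
        vals.append(abs(d_gen - d_gt))
    return sum(vals) / len(vals)
```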
As can be seen in Figure <ref>, our method closely resembles the source image, and the appearance changes depending on the movement are more realistic than other baseline methods. TPSMM and MRAA undergo significant artifacts such as texture distortion and blur due to errors in the forward optical flow prediction. In particular, the models from baseline methods highly confuse on the motion with large dynamics. The same trend is observed in the UBC Fashion data. Specifically, when the appearance of a driven video significantly differs from the source image in TPSMM or MRAA methods, abnormal artifacts often occur such as the loss of identity. Moreover, our method is found to preserve fine details considerably better. §.§ Person-specific Animation Dataset To evaluate the performance of our method in the task of person-specific animation, we use five videos from <cit.>. Each video comprises between 6K and 15K frames, featuring a person performing a diverse range of dynamic actions. The pose UV map is obtained using DensePose<cit.>. Baseline We compare our method to V2V <cit.>, EDN <cit.>, HFMT <cit.>, DIW <cit.>, and MDMT <cit.>, which utilized a generative network in a temporally unidirectional way. All methods were trained on the training set of each video and evaluated on the test set. Metrics We use SSIM, LPIPS, and tLPIPS as used in Section <ref>. Result The evaluation results for our method and the baselines on the test sequences from the five videos are displayed in Table <ref>. Our approach exhibits a performance that is either comparable to or surpasses that of other state-of-the-art methods in the LPIPS and tLPIPS metrics. Specifically, our method outshines all others in the SSIM evaluation with a diffusion-based generative framework. The highest average score implies that our method performs consistently better than other methods in terms of temporal coherence and visual plausibility across assorted appearance and motion styles. Further qualitative results are demonstrated in Figure <ref> where the baseline methods often lose context or become blurred in complex poses, leading to physically implausible human animation. Our method demonstrates robustness to dynamic movements and strong temporal coherence, yielding clear and stable results. Please also refer to the demo video. §.§ Ablation study To evaluate the effect of the module in our method, we perform an ablation study. The quantitative results are in Table <ref>, and please refer to the supplementary materials for visual comparison. Unidirectional vs. Bidirectional We compare our bidirectional temporal diffusion model with the unidirectional one. For this, we trained the same BTU-Net in a unidirectional manner using the same loss. The bidirectional approach demonstrates far more spatiotemporal consistency based on tLPIPS in Table <ref>. Our main observation is that due to significant motion ambiguity, the generated texture sometimes diverges at the end of the frame under highly dynamic human movements. Based on the improvements in LPIPS, we can notice such strong temporal coherence helps with improving the visual quality as well. Static Sampling vs. Recursive Sampling We examine the effect of recursing sampling by comparing it to the static sampling approach. For static sampling, we only use the pose map in the current time to generate the image of a person without conditioning the samples from different time instances, i.e., generated images and pose maps in other time instances are not used conditioning inputs. 
Based on the comparison in Table <ref>, recursive sampling outputs more temporally coherent animations by effectively propagating the temporal motion context in a local-to-global manner. Number of Images for Fine-Tuning For single-image animation, we fine-tune the model on a single image. We found that the quality improves as the number of fine-tuning images increases. In the supplementary material, we introduce the experiments about the impact of the number of fine-tuning images on rendering quality. Unconditional Human Animation Generation Figure <ref>, we demonstrate that our method can generate human animation with diverse appearances without conditioning any images or videos. § CONCLUSION We introduce a new method to synthesize temporally coherent human animation from a single image, a video, or a random noise. We address the core challenge of temporal incoherence from existing generative networks that decode future frames in an auto-regressive way. We argue that such unidirectional temporal modeling of a generative network involves a significant amount of motion ambiguity, leading to the artifacts such as texture drifting. We suppress the motion ambiguity by newly designing a bidirectional temporal diffusion model (BTDM): a denoising network progressively removes temporal Gaussian noises whose intermediate results are cross-conditioned over consecutive frames, which allows conditioning locally and globally coherent motion context on our video generation framework. We perform the evaluation on two different tasks, i.e., human animation from a single image and person-specific human animation, and demonstrate that BTDM shows strong temporal coherence, which also helps to improve the visual quality, compared to existing methods. Limitation While BTDM produces temporally coherent human animations, there exist several limitations. Since our model generates the video as a function of the estimated body poses, the errors in the pose estimation affect the rendering quality, e.g., the misdetection of hands produces some appearance distortion around the hand. Due to the inherent ambiguity of 2D pose representation, our method sometimes shows weakness in the sequence with 3D human rotations. Our potential future work is to improve 3D awareness and completeness by utilizing a complete 3D body model, e.g., SMPL <cit.>, in our bidirectional temporal diffusion framework. plain § RECURSIVE SAMPLING The recursive sampling algorithm introduced in Section 3.1 of our paper is demonstrated in <ref>. The symbol K stands for the number of diffusion steps, while T symbolizes the number of temporal intervals. The BTDM neural network is denoted by f_θ. The notation σ(·) represents sigmoid function, and λ(·) is used to represent the noise ratio function. § IMPLEMENTATION DETAILS Single image animation Our method is trained at a resolution of 256x256 pixels, similar to all other methods. We generates 64x64 pixel animations via the BTU-Net, which are subsequently upscaled to 256x256 pixels using the SR3 <cit.>. We train both the BTU-Net and SR3 from scratch on all training data for 15 and 30 epochs respectively. We set the denoising step to K=1000 and the learning rate to 1e-5. During testing, we fine-tune model with test appearance condition for 300 iterations with a learning rate of 1e-5. It should be noted that we employ K=50 at test time for expedited generation. 
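In rough Python terms, the recursive sampling procedure referenced in the appendix above can be read as an outer loop over the K denoising steps with an inner, direction-alternating sweep over all T frames, so that neighbouring intermediate results condition each other. The sketch below is our reading only; the exact update rule (`denoise_step`), noise re-injection, and boundary handling are simplified, and all names are ours.

```python
import torch

@torch.no_grad()
def recursive_sampling(model, poses, c, K, T, shape, denoise_step):
    """Sketch of bidirectional recursive sampling (our reading; names are ours).

    model        : the BTDM denoiser f_theta
    poses        : list of T pose maps s_0 .. s_{T-1}
    c            : appearance condition (None for unconditional generation)
    denoise_step : callable applying one posterior update, given the clean-image
                   prediction, the current noisy frame and the step index k
    """
    frames = [torch.randn(shape) for _ in range(T)]           # y_t^K ~ N(0, I)
    for k in range(K, 0, -1):                                 # denoising iterations
        forward = (K - k) % 2 == 0                            # alternate the sweep direction
        order = range(T) if forward else range(T - 1, -1, -1)
        for t in order:
            nb = max(t - 1, 0) if forward else min(t + 1, T - 1)   # neighbouring frame (simplified at the ends)
            pred = model(frames[t], frames[nb], k, poses[t], c,
                         direction="forward" if forward else "backward")
            frames[t] = denoise_step(pred, frames[t], k)
    return frames
```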
Person specific animation The training settings for the BTU-Net and SR3 <cit.> are identical to those used in the Single image animation setup, with the exception that both the BTU-Net and SR3 <cit.> are trained for 100 epochs each without fine-tuning. § ABLATION STUDY Bidirectional vs Unidirectional Figure <ref> demonstrates the qualitative results of bidirectional and unidirectional temporal training via our BTU-Net. The unidirectional approach struggles to generate images fitting the pose condition, tending to replicate the texture of the front image input as the condition instead. Unlike the unidirectional approach, the bidirectional model successfully creates images that meet the pose condition. Number of Images for Fine-Tuning We also evaluate the performance depending on the number of images used for fine-tuning. The performance comparison results are shown in Figure <ref> and Figure <ref>. The more images used for fine-tuning, the better the network achieves. Unconditional Animation Generation We demonstrate that our method can generate human animations featuring diverse clothing styles and identities, even without any image conditions. Results from unconditional generation experiments on both datasets are illustrated in Figures <ref> and <ref>. § GRAPHIC SIMULATED DATASET Graphic simulated dataset is comprised of approximately 98,000 images, each rendered at a resolution of 512x512. These images illustrate various dynamic movements (such as dance, exercise, etc.) of 3D human models with a total of 99 different appearances. The 3D human models in this dataset are created using Character Creator 4 <cit.>, and we simulate the soft cloth motion in iClone8 <cit.>. Mixamo <cit.> human motions, are exported as an Alembic <cit.> file. For realistic rendering, we employ Ray Tracing Texel (RTX) rendering and the Nvidia Omniverse <cit.> as the rendering tool. Figure <ref> shows a few samples from our graphically simulated data. We are currently conducting an internal review on the disclosure of this dataset, and we will provide updates later.
http://arxiv.org/abs/2307.02997v1
20230706135712
Fourier-Net+: Leveraging Band-Limited Representation for Efficient 3D Medical Image Registration
[ "Xi Jia", "Alexander Thorley", "Alberto Gomez", "Wenqi Lu", "Dipak Kotecha", "Jinming Duan" ]
eess.IV
[ "eess.IV", "cs.CV" ]
U-Net style networks are commonly utilized in unsupervised image registration to predict dense displacement fields in the full-resolution spatial domain. For high-resolution volumetric image data, this process is, however, resource-intensive and time-consuming. To tackle this challenge, we first propose Fourier-Net, which replaces the U-Net style network's expansive path with a parameter-free model-driven decoder. This results in fewer network parameters, lower memory usage, and fewer computational operations. Specifically, instead of directly predicting a full-resolution displacement field in the spatial domain, our Fourier-Net learns a low-dimensional representation of the displacement field in the band-limited Fourier domain. This representation is then decoded by our model-driven decoder to obtain the dense, full-resolution displacement field in the spatial domain. Expanding upon Fourier-Net, we then introduce Fourier-Net+, which takes the band-limited spatial representation of the images as input, instead of their original full-resolution counterparts. This leads to a reduction in the number of convolutional layers in the U-Net style network's contracting path, resulting in a further decrease in network parameters, memory usage, and computational operations. Finally, to enhance the registration performance, we propose a cascaded version of Fourier-Net+. We evaluate our proposed methods on three datasets, including two brain MRI datasets and a cardiac MRI (CMR) dataset, comparing them against various state-of-the-art approaches. Our proposed Fourier-Net and its variants achieve results comparable to these approaches, while exhibiting faster inference speeds, a lower memory footprint, and fewer multiply-add operations. For example, on the 3D-CMR dataset, our Fourier-Net+ outperforms the current state-of-the-art models, TransMorph and LKU-Net, with improvements of 7.1% and 7.5% in terms of Dice, respectively. Additionally, our Fourier-Net+ exhibits exceptional inference speed, running 9.05 and 4.42 times faster, while utilizing only 0.35% and 0.84% of their multiply-add operations and 2.07% and 0.97% of their memory usage. With such a small computational cost, our Fourier-Net+ enables the efficient training of large-scale 3D registration on low-VRAM GPUs. Our code is publicly available at <https://github.com/xi-jia/Fourier-Net>. Efficient Image Registration, U-Net, Fourier-Net, Band-Limited Representation, Cascades. Fourier-Net+: Leveraging Band-Limited Representation for Efficient 3D Medical Image Registration Xi Jia, Alexander Thorley, Alberto Gomez, Wenqi Lu, Dipak Kotecha and Jinming Duan X. Jia, A. Thorley, and J. Duan are with the School of Computer Science, University of Birmingham, Birmingham, UK. A. Thorley and A. Gomez are with Ultromics Ltd, Oxford, UK. W. Lu is with the Department of Computer Science, University of Warwick, UK. D. Kotecha is with the Institute of Cardiovascular Sciences, University of Birmingham, Birmingham, UK. J. Duan is also with the Alan Turing Institute, London, UK. The corresponding author is J. Duan (email: j.duan@bham.ac.uk).
§ INTRODUCTION Medical image registration plays a critical role in many medical image analysis applications, such as population modeling, longitudinal studies, and statistical atlases <cit.>. The goal of medical image registration is to learn a spatial deformation that establishes the correspondences between a fixed image and a moving image. Medical image registration can be categorized based on various factors, including data dimensionality (e.g., 2D, 3D, and 3D+t), image modality (e.g., CT, MR, and Ultrasound), objects of interest (e.g., brain, lung, and heart), nature of transformation (e.g., rigid, affine, and deformable), registration basis (e.g., landmarks, image features, and voxel intensities), and optimization techniques (e.g., gradient descent or ADMM). For a more comprehensive taxonomy of medical image registration, we suggest that the reader refer to the surveys presented in <cit.> and the updated version in <cit.>. In this work, we concentrate on unsupervised, intensity-based, mono-modality, deformable image registration. Intensity-based deformable image registration has been an active field for the past two decades. Given a moving image and a fixed image, the goal is to compute a dense deformation field that minimizes the distance (or maximizes the similarity) between the warped moving image and the fixed image. While the concept of deformable image registration is simple, it remains a challenging problem due to its ill-posed nature. That is, the solution is often not unique <cit.>. In practice, prior constraints or assumptions over the deformation are imposed before its estimation, including but not limited to smoothness, symmetry, topology preservation, and diffeomorphisms <cit.>. These prior constraints serve as regularization terms that are imposed on the deformation, in combination with data distance (or similarity) terms between the warped and fixed images, to construct the optimization objective of registration methods. In general, deformable image registration for medical imaging can be categorized into iterative optimization-based registration approaches, their deep learning-based counterparts, and combinations of both. Before the emergence of deep learning, deformable image registration relied heavily on iterative optimization techniques. Popular optimization-based approaches from this paradigm include free-form deformation (FFD) <cit.>, large deformation diffeomorphic metric mapping (LDDMM) <cit.>, DARTEL <cit.>, Demons <cit.>, Elastix <cit.>, ANTs <cit.>, NiftyReg <cit.>, Flash <cit.>, ADMM <cit.>, etc. Though widely applied and mathematically sound, these approaches require extensive hyper-parameter tuning for each image pair and are computationally expensive, thus limiting their applications in real-time and large-scale volumetric image registration.
Recently, there has been a surge in the use of deep learning-based approaches for medical image registration <cit.>. Their success is largely due to their ability to perform fast inference, and the flexibility to leverage auxiliary information such as anatomical masks as part of the training process. The most effective methods, such as VoxelMorph <cit.>, typically employ a U-Net style architecture to estimate dense spatial deformation fields. These methods require only one forward pass during inference, making them orders of magnitude faster than traditional iterative methods. Following the success of VoxelMorph, numerous deep neural networks have been proposed for various registration tasks <cit.>. These networks generally enhance the registration performance through two strategies: cascading U-Net style networks <cit.> and replacing basic convolutional blocks with more sophisticated alternatives, such as attention-based transformers <cit.> and parallel convolutional blocks <cit.>. However, these changes increase the number of network parameters and multiply-add operations (mult-adds), negatively impacting training and inference efficiency. An alternative approach for image registration is to combine data-driven deep learning models with iterative methods, as proposed by <cit.>. These methods embed the mathematical structure of minimizing a generic objective model into a neural network. By doing so, the network mapping process inherits prior knowledge from the objective model while maintaining the data efficiency of the iterative methods. As a result, these model-driven networks have the advantages of both communities and have been shown to outperform purely data-driven registration methods. However, despite being faster than traditional iterative optimization-based methods, emulating the iterative optimization process often requires the use of multiple U-Nets and other sophisticated neural layers in these networks (e.g., intensity consistency layer <cit.>), which often result in a slower registration speed when compared to purely network-based methods. A common characteristic among learning-based registration approaches is the utilization of U-Net style networks. In this paper, we argue that for such styles of registration networks, it may not be necessary to include the entire expansive path of the decoder. Additionally, we suggest that training and inference efficiency can be further improved by learning a low-dimensional representation of the displacement field in the band-limited Fourier domain. Our observations are based on the results shown in Figure <ref>, which demonstrate that a small number of coefficients in the band-limited Fourier domain are sufficient to reconstruct a full-resolution deformation accurately. Inspired by this insight, we propose Fourier-Net, an end-to-end unsupervised registration model that is able to learn such a low-dimensional, band-limited representation of the displacement field. Specifically, by removing several layers in the expansive path of a U-Net style architecture, Fourier-Net outputs only a small patch containing low-frequency coefficients of the displacement field in the Fourier domain. A model-driven decoder then recovers the full-resolution spatial displacement field from these coefficients, using a zero-padding layer that broadcasts complex-valued low-frequency signals into a full-resolution complex-valued map, and an inverse discrete Fourier transform (iDFT) that recovers the spatial displacement field from the map. 
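This observation is straightforward to verify numerically. The following NumPy sketch (our own illustration on a synthetic smooth field, not the experiment behind Figure <ref>) crops a centred patch of low-frequency DFT coefficients, zero-pads it back to full resolution, and applies the iDFT, mimicking the decoding step described above:

```python
import numpy as np

# Synthetic smooth 2D "displacement" field of size H x W (illustrative only).
H, W = 160, 192
y, x = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
phi = np.sin(2 * np.pi * y / H) + 0.5 * np.cos(4 * np.pi * x / W)

# Keep only a centred (H/8 x W/8) patch of low-frequency coefficients.
spec = np.fft.fftshift(np.fft.fft2(phi))
h, w = H // 8, W // 8
patch = spec[(H - h) // 2:(H + h) // 2, (W - w) // 2:(W + w) // 2]

# "Decode": zero-pad the patch back to full resolution and invert the DFT.
padded = np.zeros_like(spec)
padded[(H - h) // 2:(H + h) // 2, (W - w) // 2:(W + w) // 2] = patch
phi_rec = np.real(np.fft.ifft2(np.fft.ifftshift(padded)))

print(np.linalg.norm(phi_rec - phi) / np.linalg.norm(phi))  # negligible for this smooth field
```

For genuinely smooth deformations the discarded high-frequency coefficients carry almost no energy, which is precisely the property Fourier-Net exploits.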
Both zero-padding and iDFT layers are parameter-free, making our Fourier-Net very fast. We also propose a diffeomorphic variant, termed Diff-Fourier-Net, which learns the band-limited representation of the velocity field and uses the squaring and scaling layers to encourage the output deformation to be diffeomorphic <cit.>. Building on the results of Fourier-Net, we hypothesize that it may be feasible to learn the band-limited displacement field or band-limited velocity field directly from a band-limited representation of the input image pairs, rather than from the original full-resolution image pairs. This has the potential to further reduce the number of convolutional layers in the contracting path in Fourier-Net and further accelerate its registration speed. To this end, we propose Fourier-Net+, which utilizes the same decoder as Fourier-Net but features an improved encoder that aims to reduce computational overhead. As per standard U-Net style architectures, Fourier-Net is designed to take the original full-resolution image pairs as input, with an encoder involving multiple 3D convolutional layers which are computationally expensive operations, particularly in the earlier layers of the model. In contrast, the encoder of Fourier-Net+ includes a model-driven encoding of the original full-resolution image pairs into a low-dimensional band-limited representation of such image pairs, followed by 3D convolutions. This encoder enables 3D convolutions to operate on smaller resolutions, making Fourier-Net+ a much lighter network compared to Fourier-Net or U-Net style backbones. Figure <ref> provides a visual comparison of the architectural differences between U-Net, Fourier-Net, and Fourier-Net+. To enhance the registration performance of Fourier-Net+, we then propose a cascaded version of this network: cascaded Fourier-Net+. Despite the potential increase in computational cost from using multiple cascades, the efficient design of Fourier-Net+ allows the cascaded version of this network to still have fewer mult-adds operations than Fourier-Net in our experiments. We note that preliminary results of Fourier-Net were presented in the conference proceedings <cit.>. In this paper, we introduce the two extended versions of Fourier-Net, namely Fourier-Net+ and cascaded Fourier-Net+. We also provide extensive experimental results to thoroughly investigate Fourier-Net and its variations. § RELATED WORKS §.§ Iterative Optimization-Based Approaches Iterative methods based on instance-level optimization are prohibitively slow, especially when the images to be registered are of a high-dimensional form. Over the past decades, many works have been proposed to accelerate such methods. For example, rather than directly estimating dense displacement fields, FFD models <cit.> were proposed in which a deformation grid over a few control points is interpolated to a dense field using B-splines. Computational challenges in diffeomorphic registration are even more pronounced. For this task, Ashburner <cit.> introduced a fast algorithm called DARTEL for diffeomorphic registration to accelerate the computation of LDDMM, which integrates deformations through non-stationary velocities over time using the Lagrange transport equation. DARTEL used a stationary velocity field (SVF) representation <cit.> and computed the resulting deformation through scaling and squaring of the SVF. 
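Because scaling and squaring reappears later as the optional layer that makes our variants diffeomorphic, we give a minimal PyTorch sketch of the idea here; the 2D setting, helper names, and displacement conventions are our own simplifications rather than the implementation of <cit.>:

```python
import torch
import torch.nn.functional as F

def warp(field, disp):
    """Sample `field` (B, C, H, W) at positions displaced by `disp` (B, 2, H, W),
    with disp[:, 0] the shift along x (width) and disp[:, 1] along y (height), in voxels."""
    B, _, H, W = disp.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    base = torch.stack((xs, ys), dim=-1).to(disp)          # (H, W, 2), (x, y) ordering
    new = base + disp.permute(0, 2, 3, 1)                  # absolute sampling positions
    gx = 2 * new[..., 0] / (W - 1) - 1                     # normalise to [-1, 1] for grid_sample
    gy = 2 * new[..., 1] / (H - 1) - 1
    grid = torch.stack((gx, gy), dim=-1)
    return F.grid_sample(field, grid, align_corners=True, padding_mode="border")

def exp_svf(v, steps=7):
    """Scaling and squaring: exponentiate a stationary velocity field v (B, 2, H, W)
    into a displacement field by repeated self-composition."""
    phi = v / (2 ** steps)                                 # scale down
    for _ in range(steps):                                 # square `steps` times
        phi = phi + warp(phi, phi)                         # phi <- phi o phi
    return phi
```

Seven such squarings (steps=7) correspond to the seven scaling and squaring layers used later in our diffeomorphic variants.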
Zhang and Fletcher <cit.> developed the Fourier-approximated Lie algebras for shooting (Flash) for fast diffeomorphic image registration, where they proposed to speed up the solution of the EPDiff equation used to compute deformations from velocity fields in the band-limited Fourier domain. Hernandez <cit.> reformulated the Stokes-LDDMM variational problem used in <cit.> in the domain of band-limited non-stationary vector fields and utilized GPUs to parallelize their methods. Another fast approach for deformable image registration is Demons <cit.>, which imposed smoothness on displacement fields by incorporating inexpensive Gaussian convolutions into its iterative process. The diffeomorphisms in Demons were achieved by the greedy composition of the speed vector fields <cit.>. Recently, Thorley et al. <cit.> proposed a convex optimization model that used an arbitrary order regularisation term. By combining Nesterov's accelerated gradient descent and the alternating direction method of multipliers (ADMM), this model was shown to register pairs of cardiac 3D MR images within 2s on GPUs. §.§ Deep Registration Approaches Unsupervised methods: Convolutional neural networks (CNNs) based on unsupervised learning have recently been used to accelerate the speed of registration <cit.> without the need for ground truth deformations. U-Net style networks, in particular, have been shown to be effective in learning deformations between pairwise images <cit.>. The authors of VoxelMorph <cit.> demonstrated that a simple U-Net can achieve comparable registration performance to iterative methods, with inference times orders of magnitude faster. Building on the success of VoxelMorph, various extensions have been proposed to improve registration accuracy and address specific challenges in medical image registration. One such extension used multiple recurrent or cascaded U-Net architectures, where the deformation was iteratively composed by incorporating the output from each cascade <cit.>. These methods were shown to be particularly effective at handling large deformations in input image pairs by allowing the network to capture both global and local features of the images. Another approach to improving U-Net-based registration models was to augment the U-Net backbone with more representative layers to better capture correspondences between input images. For example, transformer-based methods, such as ViT-V-Net <cit.> and TransMorph <cit.>, replaced some of the regular convolutional layers in the U-Net style network with transformer layers, allowing for the modeling of long-range dependencies between voxels, and LKU-Net <cit.> employed parallel large-kernel convolutional layers to handle different scales in input images. Although combining U-Nets with more representative layers has been shown to achieve promising registration performance, these methods come with a significantly higher computational cost, leading to slower inference speed. Another alternative to U-Net style architectures proposed the use of Siamese or dual-stream networks as the backbone <cit.>. These models utilized separate encoder branches to extract features from the moving and fixed images, which were then fused and passed through the decoder to generate the deformation. Similar to U-Net style networks, Siamese and dual networks also used a contracting path for image encoding and an expansive path for decoding the deformations from the encoded image features. 
There are several other extensions which have been shown to improve registration performance, including the use of different loss functions and regularization techniques, the incorporation of prior knowledge, and the estimation of deformation using B-spline control points. For instance, the segmentation Dice loss <cit.> measures the overlap between segmented regions of fixed and moving images and has been used to improve registration performance between different anatomical structures. Regularization techniques, such as inverse consistency <cit.> and squaring and scaling <cit.>, are utilized to encourage diffeomorphic deformations. These methods are particularly useful for task-specific registration. In addition, combining deep learning with conventional model-based approaches <cit.> and incorporating prior knowledge <cit.> have also demonstrated improvements to registration performance. For example, Jia et al.<cit.> used the variational model to linearize the bright consistency similarity metric and optimized the auxiliary variable with a cascaded U-Net. Albeit slightly slower, their approach was found to be more accurate than both single (VoxelMorph <cit.>) and cascaded U-Net (RC-Net <cit.>) for cardiac motion estimation. Another line of work is to estimate a grid of B-spline control points with regular spacing <cit.>, which is then interpolated based on cubic B-spline basis functions <cit.>. These networks predict deformations more efficiently by estimating only a few control points, although currently they are less accurate. Supervised methods: Instead of minimizing a similarity metric and regularization term as per unsupervised methods, supervised approaches learn from ground truth deformation fields. However, they have several pitfalls: 1) it is generally hard to provide human-annotated ground truth deformations for supervision; and 2) if trained using numerical solutions of other iterative methods, the performance of these supervised registration methods may be limited by iterative methods. Yang et al. proposed Quicksilver <cit.> which is a supervised encoder-decoder network and trained using the initial momentum of LDDMM as the supervision signal. Wang et al. extended Flash <cit.> to DeepFlash <cit.> in a learning framework in lieu of iterative optimization. Compared to Flash, DeepFlash accelerated the computation of the initial velocity fields but still needed to solve a PDE (i.e., EPDiff equation) in the Fourier domain so as to recover the full-resolution deformation in the spatial domain, which was shown to be slow. The fact that DeepFlash required the numerical solutions of Flash <cit.> as training data attributes to lower registration performance than Flash. Although DeepFlash also learns a low-dimensional band-limited representation, it differs from our Fourier-Net in four aspects, which represent our novel contributions to this area. First, DeepFlash is a supervised method that requires ground truth velocity fields calculated from Flash (30 minutes per 3D image pair in CPU) prior to training, whilst Fourier-Net is a simple and effective unsupervised method thanks to our proposed model-driven decoder. Second, DeepFlash is a multi-step method whose network's output requires an additional PDE algorithm <cit.> to compute final full-resolution spatial deformations, whilst Fourier-Net is a holistic model that can be trained and used in an end-to-end manner. 
Third, DeepFlash requires two individual convolutional networks to estimate real and imaginary signals in the band-limited Fourier domain, whilst Fourier-Net uses only one single network directly mapping image pairs to a reduced-resolution displacement field without the need of complex-valued operations. Lastly, DeepFlash is essentially an extension of Flash and it is difficult for the method to benefit from vast amounts of data, whilst Fourier-Net is flexible and can easily learn from large-scale datasets. § PROPOSED METHODS In this section, we first introduce Fourier-Net, and its extended versions: Fourier-Net+ and cascaded Fourier-Net+. We then detail their network architectures and finally present their loss functions used in our experiments. §.§ Fourier-Net We illustrate Fourier-Net in Figure <ref>, where its encoder takes a pair of spatial images as input and encodes them to a low-dimensional representation of the displacement field (or velocity field if diffeomorphisms are imposed) in the band-limited Fourier domain. The decoder then brings the displacement field (or velocity field) from the band-limited Fourier domain to the spatial domain via a padding and an iDFT. The decoder ensures that the input and output have the same spatial size. Next, squaring and scaling layers are optionally used to encourage a diffeomorphism in the final deformation. Finally, by minimizing the loss function, the warping layer deforms the moving image to be similar to the fixed image. Encoder of Fourier-Net: the encoder aims to learn a displacement or velocity field in the band-limited Fourier domain, which requires the encoder to handle complex-valued numbers. One may directly use complex convolutional networks <cit.>, which were designed for the case where both input and output are complex values. Note that complex convolutional operations sacrifice computational efficiency. Instead, DeepFlash <cit.> tackles this problem by first converting input image pairs to the Fourier domain and then using two individual real-valued convolutional networks to learn the real and imaginary signals separately. However, such an approach makes training and inference costly. To bridge the domain gap between real-valued spatial images and complex-valued band-limited displacement fields without increasing complexity, we propose to embed a DFT layer at the end of the encoder. This is a simple and effective way to produce complex-valued band-limited displacement fields without the network handling complex values itself. Let us denote the moving image as I_M, the fixed image as I_F, the convolutional network as CNN with the parameters Θ^1, the DFT layer as ℱ, the full-resolution spatial displacement field as ϕ, and the complex band-limited displacement field as 𝔹_ϕ. We therefore define our encoder as 𝔹_ϕ = ℱ(CNN(I_M, I_F; Θ^1)), resulting in a compact, efficient implementation in contrast to DeepFlash <cit.>. We also attempted to regress 𝔹_ϕ directly from I_M and I_F using only convolutional layers, i.e., 𝔹_ϕ = CNN(I_M, I_F; Θ^1). However, our experimental results suggested that this is very difficult for the network to learn, resulting in lower performance as detailed in Table <ref>. If the CNN is required to directly learn a band-limited displacement field, it must go through two domains in total: first mapping the spatial images to the spatial displacement field and then mapping this displacement field into its band-limited Fourier domain. 
In this case, we believe the domain gap is too big for a CNN to learn such a mapping. Our network, however, only needs to go through one domain, and the DFT layer (ℱ) then handles the second domain. Experimentally, we found this approach to be more effective. So far, we have given an intuitive explanation of how the encoder in our network learns. We now discuss the mathematical relationship between the low-dimensional spatial displacement field 𝕊_ϕ = CNN(I_M, I_F; Θ^1), its band-limited representation 𝔹_ϕ = ℱ(𝕊_ϕ), and the displacement field ϕ in the full-resolution spatial domain (see Figure <ref> for details). For simplicity, we use a 2D displacement field as an example; the formulations below can be easily extended to 3D cases. We define a general DFT on ϕ as follows: [ℱ(ϕ)]_k,l = ∑_i=0^H-1∑_j=0^W-1 ϕ_i,j e^-√(-1)(2π ki/H + 2π lj/W), where ϕ is of size H×W, i ∈ [0, H-1] and j ∈ [0, W-1] are the discrete indices in the spatial domain, and k ∈ [0, H-1] and l ∈ [0, W-1] are the discrete indices in the frequency domain. In our Fourier-Net, ϕ is actually a low-pass filtered displacement field. Let us define an H×W sampling mask 𝒟 whose entries are zero at the positions of high-frequency signals in ϕ and one at the low-frequency positions. With 𝒟, we recover the displacement field ϕ from Eq. (<ref>) as follows: ϕ_i,j = 1/(HW) ∑_k=0^H-1∑_l=0^W-1 𝒟_k,l [ℱ(ϕ)]_k,l e^√(-1)(2π ik/H + 2π jl/W). If we shift all low-frequency signals of ϕ to a center patch of size H/a × W/b (with a = 2Z_a, b = 2Z_b, and Z_a, Z_b ∈ ℤ^+), center-crop the patch (denoted by 𝔹_ϕ), and then perform the iDFT on this patch, we obtain 𝕊_ϕ in Eq. (<ref>): [𝕊_ϕ]_i,j = ab/(HW) ∑_k=0^H/a-1∑_l=0^W/b-1 [𝔹_ϕ]_k,l e^√(-1)(2π aik/H + 2π bjl/W), where i ∈ [0, H/a-1] and j ∈ [0, W/b-1] are the indices in the spatial domain, and k ∈ [0, H/a-1] and l ∈ [0, W/b-1] are the indices in the frequency domain. Note that 𝕊_ϕ is a low-dimensional spatial representation of ϕ, and we are interested in their mathematical connection. Note also that 𝕊_ϕ actually contains all the information of its band-limited Fourier coefficients in 𝔹_ϕ. As such, we do not need the network to learn the complex coefficients in 𝔹_ϕ; instead, it only needs to learn the real-valued entries of 𝕊_ϕ. Since most entries in ℱ(ϕ) are zeros (a fraction (a×b−1)/(a×b) of them), and the values of the remaining entries are exactly the same as in 𝔹_ϕ, we can conclude that 𝕊_ϕ contains all the information ϕ can provide, and their mathematical connection is [𝕊_ϕ]_i,j = ab × ϕ_ai,bj. With this derivation, we show that we can recover a low-dimensional spatial representation 𝕊_ϕ from its full-resolution spatial displacement field ϕ, as long as they share the same low-frequency coefficients 𝔹_ϕ. This shows that there exists a unique mapping between 𝕊_ϕ and ϕ and that it is reasonable to use a network to learn 𝕊_ϕ directly from image pairs.

Model-driven decoder: The decoder contains no learnable parameters. We instead replace the expansive path with a zero-padding layer, an iDFT layer, and an optional squaring and scaling layer. The output from the encoder is a band-limited representation 𝔹_ϕ. To recover the full-resolution displacement field ϕ in the spatial domain, we first pad the patch 𝔹_ϕ containing mostly low-frequency signals to the original image resolution with zeros (i.e., ℱ(ϕ)).
We then feed the zero-padded complex-valued coefficients ℱ(ϕ) to an iDFT layer consisting of two steps: shifting the Fourier coefficients from the center to the corners and then applying the standard iDFT to convert them into the spatial domain. The output from Fourier-Net is thus a full-resolution spatial displacement field. An illustration of this process is given in Figure <ref>. Both padding and iDFT layers are differentiable, and therefore Fourier-Net can be optimized via standard back-propagation. We also propose a diffeomorphic variant of Fourier-Net, which we term Diff-Fourier-Net. A diffeomorphic deformation is defined as a smooth and invertible deformation, and in Diff-Fourier-Net we need extra scaling and squaring layers for this purpose. The output of the iDFT layer can be regarded as a stationary velocity field, denoted by v, instead of the displacement field ϕ. In group theory, v is a member of a Lie algebra, and we can exponentiate this stationary velocity field (i.e., Exp(v)) to obtain a diffeomorphic deformation. In this paper, we use seven scaling and squaring layers <cit.> to impose such a diffeomorphism.

§.§ Fourier-Net+

In Fourier-Net, we proposed a model-driven decoder to replace some of the expansive convolutional layers in a U-Net style backbone. By doing so, Fourier-Net removed the need to progressively decode the displacement/velocity field from the latent features learned by the encoder. Such a decoder is thus capable of reducing the computational cost and improving inference speed. However, Fourier-Net still takes the original full-resolution image pairs as input. For 3D images, the encoder of Fourier-Net involves multiple 3D convolutional layers, which are computationally expensive operations, particularly in the earlier layers of the model. To further reduce the computational cost and memory footprint, we propose Fourier-Net+ by embedding a model-driven encoding layer before the contracting convolutional layers.

Model-driven encoder: It is more efficient for learning if we feed our encoder a band-limited representation of the input images, which we term band-limited images in line with band-limited displacements. With the consequently much lighter convolutional network, we are able to estimate band-limited displacements from these band-limited images and further reduce computational costs. Similarly to our approach with the decoder in Fourier-Net, in Fourier-Net+ the input images I_M and I_F are first mapped into the frequency domain using a DFT layer, forming ℱ(I_M) and ℱ(I_F). We then perform center-cropping on the low-frequency regions, forming 𝔹_I_M and 𝔹_I_F, which are transformed to the spatial domain using the iDFT. The real parts are then taken, and the resulting spatial patches 𝕊_I_M and 𝕊_I_F are the band-limited images, as illustrated in Figure <ref>. Once we have the band-limited images, we only need a small CNN parameterized by Θ^2 to estimate their band-limited displacements, i.e., 𝕊_ϕ = CNN(𝕊_I_M, 𝕊_I_F; Θ^2). This encoder removes several convolutional layers in the contracting path of the Fourier-Net encoder, which we expect to further accelerate the registration process and reduce the memory footprint of Fourier-Net significantly. The architecture of Fourier-Net+ is shown in Figure <ref>. Except for the incorporation of a model-driven encoding layer, the remaining parts of Fourier-Net+ are exactly the same as Fourier-Net.
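To make the model-driven encoding concrete, one possible PyTorch realisation is sketched below; the function name, tensor shapes, and cropping convention are illustrative assumptions on our part, not the official implementation:

```python
import torch

def band_limited_image(img, crop):
    """Sketch of the Fourier-Net+ encoding layer: DFT, centre-crop the low frequencies,
    inverse DFT, and keep the real part. `img` is (B, 1, D, H, W); `crop` is (d, h, w)."""
    d, h, w = crop
    D, H, W = img.shape[-3:]
    spec = torch.fft.fftshift(torch.fft.fftn(img, dim=(-3, -2, -1)), dim=(-3, -2, -1))
    spec = spec[..., (D - d) // 2:(D + d) // 2,
                     (H - h) // 2:(H + h) // 2,
                     (W - w) // 2:(W + w) // 2]
    small = torch.fft.ifftn(torch.fft.ifftshift(spec, dim=(-3, -2, -1)), dim=(-3, -2, -1))
    return small.real  # low-resolution, band-limited image fed to the light CNN

# Example: a 160x192x224 volume reduced to an 80x96x112 band-limited image.
x = torch.randn(1, 1, 160, 192, 224)
print(band_limited_image(x, (80, 96, 112)).shape)  # torch.Size([1, 1, 80, 96, 112])
```

The same crop-and-pad bookkeeping, run in the opposite direction, gives the zero-padding and iDFT layers of the model-driven decoder.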
By adding the squaring and scaling layers at the end of Fourier-Net+, we get its diffeomorphic version, which we term Diff-Fourier-Net+. An important note here is that the warping layer in Fourier-Net+ or Diff-Fourier-Net+ warps the originally sized images instead of the band-limited images in order to compute the final loss.

§.§ Cascaded Fourier-Net+

Due to the band-limited representation of both images and deformations, Fourier-Net+ is lighter than Fourier-Net in terms of the number of parameters and computations. However, such a light network may face limitations in accurately capturing complex deformations (e.g., in brain images). To tackle this problem, we propose a cascaded version of Fourier-Net+, which is illustrated in Figure <ref>. For the diffeomorphic version of this network, we impose the squaring and scaling layers after the last cascade. The loss function is also applied after the last cascade. For terminology, K×Fourier-Net+ means that we use a Fourier-Net+ with K cascades, and Diff-K×Fourier-Net+ is the diffeomorphic version of K×Fourier-Net+. This cascaded Fourier-Net+ is parameterized by Θ^3, and its network parameters are not shared across different cascades. Technically, in the first cascade of K×Fourier-Net+ the input is the moving image I_M and the fixed image I_F. From the second cascade onward, the input to Cascade k (k ∈ [2,K]) is the warped image I_M^w(k-1) (a warped version of I_M) together with I_F, and the output is δϕ^(k). Specifically, as in <cit.>, I_M^w(k) is defined as: I_M^w(k) = ((((I_M ∘δϕ^(1))∘δϕ^(2)) ∘…) ∘δϕ^(k-1)) ∘δϕ^(k), or equivalently I_M^w(k) = I_M∘ϕ^(k), where ϕ^(k) is the displacement field computed by composing the outputs from Cascade 1 to k: ϕ^(k) = δϕ^(1)∘δϕ^(2)∘…∘δϕ^(k-1)∘δϕ^(k). In the case of k=K, ϕ^(K) denotes the final output displacement field used to warp the original moving image I_M in the computation of the loss.

§.§ Network Architectures and Loss Functions

Network architectures: For all networks using a U-Net backbone in our experiments, we employed the architecture defined in Figure <ref>, which is similar to that used in <cit.>. In this network, there are five convolutional layers in the contracting path and five convolutional layers in the expansive path. Specifically, given a pair of moving and fixed images in 3D, each with a size of D×H×W, the layer size in the contracting path flows as (C,D,H,W) →(2C,D/2,H/2,W/2) →(4C,D/4,H/4,W/4)→(8C,D/8,H/8,W/8)→(16C,D/16,H/16,W/16). In the expansive path, these layers are progressively upsampled to (3,D,H,W), which is the size of the final displacement/velocity field. The architecture of Fourier-Net given in Figure <ref> was modified from the U-Net backbone by discarding several layers in the expansive path. In the two variants of Fourier-Net, we used fewer layers in the expansive path, which leads to a smaller spatial size of the output (last) layer and reduces the convolutional computations in higher-dimensional space. A smaller output layer rapidly decreases the model parameters and speeds up training and inference, but may lead to lower performance. A larger output layer retains registration accuracy but erodes the efficiency advantage of our methods. We noticed that different datasets favor different sizes and investigated in our experiments two different sizes for the output layer, i.e., (3,D/8,H/8,W/8) and (3,D/4,H/4,W/4).
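For concreteness, a heavily simplified PyTorch sketch of such a contracting path with a short expansive tail is given below; the channel counts, the number of convolutions per block, and the 1/4-scale output are our own simplifications of the architecture in Figure <ref>, not the exact model:

```python
import torch
import torch.nn as nn

def block(cin, cout, stride):
    return nn.Sequential(nn.Conv3d(cin, cout, 3, stride=stride, padding=1), nn.PReLU())

class FourierNetEncoderSketch(nn.Module):
    def __init__(self, c=8):
        super().__init__()
        self.down = nn.Sequential(
            block(2, c, 1),          # (c,   D,   H,   W)
            block(c, 2 * c, 2),      # (2c,  D/2, H/2, W/2)
            block(2 * c, 4 * c, 2),  # (4c,  D/4, H/4, W/4)
            block(4 * c, 8 * c, 2),  # (8c,  D/8, H/8, W/8)
        )
        self.up = nn.Sequential(     # short expansive tail back to 1/4 resolution
            nn.ConvTranspose3d(8 * c, 4 * c, 2, stride=2), nn.PReLU(),
            nn.Conv3d(4 * c, 3, 3, padding=1),   # low-resolution spatial field S_phi
        )

    def forward(self, moving, fixed):
        return self.up(self.down(torch.cat([moving, fixed], dim=1)))

net = FourierNetEncoderSketch()
out = net(torch.randn(1, 1, 32, 32, 32), torch.randn(1, 1, 32, 32, 32))
print(out.shape)  # torch.Size([1, 3, 8, 8, 8]), i.e. (3, D/4, H/4, W/4)
```

The DFT layer ℱ would then be applied to this 3-channel output to obtain the band-limited representation 𝔹_ϕ.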
As illustrated by Figure <ref>, in Fourier-Net+ we instead used small-sized band-limited images as input, allowing us to remove some convolutional layers in the contracting path and further save computational cost. In our experiments we studied two different sizes of such band-limited images, i.e., (D/2,H/2,W/2) and (D/4,H/4,W/4), which in combination with the two sizes of band-limited displacements results in a total of four Fourier-Net+ variants.

Loss functions: For Fourier-Net, Fourier-Net+, and K×Fourier-Net+, the final output is the full-resolution displacement field ϕ (in K×Fourier-Net+ we let ϕ=ϕ^(K)). Both warping layers in 2D and 3D are based on linear interpolation as in <cit.>. We define an unsupervised training loss ℒ(Θ), computed from the moving image I_M, the fixed image I_F, the predicted displacement field ϕ, and the network parameters Θ. In detail, ℒ(Θ) is of the following form: ℒ(Θ) = 1/N∑_i=1^N ℒ_S( I_M_i∘ (ϕ_i(Θ) + Id), I_F_i ) + λ/N∑_i=1^N ‖∇ϕ_i(Θ)‖_2^2, where N is the number of training pairs, Id is the identity grid, ∘ is the warping operator, and ∇ is the first-order gradient implemented using finite differences <cit.>. The first term ℒ_S defines the similarity between warped moving images and fixed images, and the second term defines the smoothness of displacement fields. Here λ is a hyper-parameter balancing the two terms. For all diffeomorphic variants of Fourier-Net and Fourier-Net+, the final output is the exponentiated full-resolution velocity field v after the scaling and squaring layers. The training loss ℒ(Θ) in this case is defined as ℒ(Θ) = 1/N∑_i=1^N ℒ_S( I_M_i∘ Exp(v_i(Θ)), I_F_i ) + λ/N∑_i=1^N ‖∇v_i(Θ)‖_2^2. ℒ_S can be either mean squared error (MSE) or normalized cross-correlation (NCC), which we clarify in our experiments. Depending on the network architecture, Θ is either Θ^1, Θ^2, or Θ^3 (see Sec. <ref>, <ref>, and <ref> for details). The aim is to minimize ℒ(Θ) with respect to Θ using gradient descent via backpropagation.

§ EXPERIMENTS

In this section, we detail the datasets used in our experiments, provide implementation details, and conduct ablation and parameter studies to demonstrate the utility of our contributions. We finally compare our method with a range of state-of-the-art methods across three different registration tasks.

§.§ Datasets

OASIS-1 dataset <cit.> consists of a cross-sectional collection of T1-weighted brain MRI scans from 416 subjects. In experiments, we used the pre-processed OASIS data provided by the Learn2Reg challenge <cit.> and performed subject-to-subject brain registration. This dataset has 414 2D 160×192 slices and masks extracted from their corresponding 3D 160×192×224 volumes. We randomly split this 2D dataset into 201, 12, and 201 images for training, validation, and testing. After pairing, we used 40200, 22, and 400 image pairs for training, validation, and testing, respectively. Each 2D segmentation mask contained 24 automated labels from FreeSurfer. Further details of this pre-processed data can be found at the MICCAI 2021 Learn2Reg challenge[<https://learn2reg.grand-challenge.org/Learn2Reg2021/>]. IXI dataset[<https://brain-development.org/ixi-dataset/>] contains nearly 600 MRI scans from healthy subjects. In experiments, we used the pre-processed IXI data provided by <cit.> to perform atlas-based brain registration. The atlas is generated by the authors of <cit.> using the method in <cit.>.
There are in total 576 160×192×224 volumetric images in this pre-processed dataset, which are split into 403 for training, 58 for validation, and 115 for testing. There is no pairing step for this dataset as it is an atlas-to-subject registration task. The performance of this task was evaluated with 30 labeled anatomical structures[<https://github.com/junyuchen245/TransMorph_Transformer_for_Medical_Image_Registration/blob/main/IXI/TransMorph_on_IXI.md>]. 3D-CMR dataset <cit.> consists of 220 pairs of 3D high-resolution cardiac MRI scans, in which each scan is captured during only one single breath-hold and includes the End-diastolic (ED) to End-systolic (ES) frames of the cardiac cycle. In our experiments, we re-sampled all scans from original resolution 1.2×1.2×2.0mm^3 to 1.2×1.2×1.2mm^3 and center-cropped 128×128×96 sized volumes. We randomly split the data into 100, 20, and 100 corresponding to training, validation, and testing sets. The Dice score and Hausdorff distance (HD) between the warped ES segmentation and the ED ground truth mask was measured on three anatomical structures: left ventricle cavity (LV), left ventricle myocardium (Myo), and right ventricle cavity (RV). We performed motion estimation from an ES state to an ED state from a cardiac cycle. §.§ Implementation Details The U-Net backbone used in experiments is given in Figure <ref>. There were 5 blocks in both the contracting and expansive paths. In the encoder, the first block directly encoded the input image pair to C feature maps, each with a size of D× H × W. For the remaining 4 blocks, each contained 2 sequential convolutional layers, where the first layer maintained the same spatial resolution as its input, and the second layer performed a down-sampling with a stride of 2 and doubled the number of feature channels. In the decoder, each of the first 4 blocks contained a fractionally-strided convolutional layer followed by 2 sequential convolutional layers, where the fractionally-strided convolutional layer performed an up-sampling with a stride of 2, and the convolutional layers halved the number of feature channels. The output from the last block was the final displacement field. The kernel size in all blocks was 3× 3× 3, and each convolution was followed by a PReLU activation except the last output layer, which did not have any activation function. We implemented all our proposed networks (see Figure <ref> middle and right) using PyTorch, where training is optimized using Adam. To adapt to 2D images, 3D kernels were changed to 2D, each with a size of 3 × 3. For training in both 2D and 3D, we tuned built-in hyper-parameters on a held-out validation set. In terms of loss functions, we used MSE to train our networks on OASIS-1 for 10 epochs, and the optimal performance was achieved with λ=0.01 for all our networks. On IXI, we trained Fourier-Net, Diff-Fourier-Net, Fourier-Net+ and K×Fourier-Net+ with the NCC loss for 1000 epochs with λ=5. In contrast, K×Fourier-Net+ and Diff-K×Fourier-Net+ are trained optimally with λ=2. On 3D-CMR, we used MSE to train our networks for 1000 epochs with λ=0.001. All our networks were trained using an NVIDIA A100 GPU. §.§ Ablation Studies In this section, we detail our ablation studies where we investigated whether the proposed modules in Fourier-Net and its variants were effective. All experiments undertaken in this section were conducted on OASIS-1. 
Impact of embedding a DFT layer: In Table <ref>, we show the necessity of embedding a DFT layer at the end of the encoder for Fourier-Net (see Figure <ref>). Without this layer, such an encoder would be purely a CNN that has to learn complex-valued Fourier coefficients from the spatial image pairs. This setup is similar to DeepFlash <cit.>, where two encoders were used to respectively learn the real and imaginary parts of these complex coefficients. As reported in Table <ref>, with this DFT layer the registration Dice score was shown to improve from 0.664 to 0.732 (6.8%↑) and from 0.675 to 0.756 (8.1%↑) for the output sizes of 20×24 and 40×48, respectively. On the other hand, due to its dual encoders, the network without the DFT layer required more mult-add operations and a larger memory footprint to learn the band-limited displacement field. The improvements to both computational efficiency and performance indicate the necessity of using such a DFT layer in our Fourier-Net.

Impact of using a band-limited representation: The necessity of learning a band-limited displacement field might be questioned when one could simply estimate a low-resolution displacement field and then directly up-sample it to a full-resolution one using linear interpolation. In Table <ref>, we performed such an experiment by replacing the DFT layer and the decoder in Fourier-Net with a simple bilinear interpolation (termed Bilinear-Net). We observed that in terms of Dice, Bilinear-Net was respectively 1.3% and 1% lower than our Fourier-Net for the output sizes of 20×24 and 40×48. This experiment showed that, compared to a directly down-sampled low-resolution displacement field, it was more effective to learn the band-limited representation of the displacement field. As an additional experiment to demonstrate the utility of band-limited images, we evaluated the performance of Fourier-Net+ (see Figure <ref>) against two variants of Bilinear-Net+: one used bilinearly down-sampled images as input and estimated a down-sampled displacement field, and the other used bilinearly down-sampled images as input and estimated a band-limited displacement field. We found that in terms of Dice the two Bilinear-Net+ variants achieved similar results, but both were inferior to Fourier-Net+ by a clear margin, i.e., 2.2%↓ on resolution 20×24 and 1.2%↓ on resolution 40×48.

Diffeomorphisms: In Table <ref>, we compared the performance of Fourier-Net and Fourier-Net+ and their diffeomorphic counterparts. The squaring and scaling (SS) layers encouraged diffeomorphisms for the estimated deformation, resulting in a lower percentage of negative values of the Jacobian determinant of the deformation (|J|_< 0%). Besides the influence on negative Jacobians, it was notable that the incorporation of such layers caused the Dice scores of different models to fluctuate slightly.

§.§ Ablation and Parameter Studies on UKBB

In order to validate the generalization ability of the proposed methods on different anatomical structures, we conducted ablation studies on two cardiac CMR datasets. In the following section, we detail our results on the 2D UKBB dataset.

Embedded DFT layer in the encoder: The importance of the embedded DFT layer was also seen in 2D cardiac motion estimation, bringing 1.7% and 1.6% improvements in Dice score for Fourier-Net with patch sizes 16×16 and 32×32, respectively. The embedded DFT layer also improves the Hausdorff distance by clear margins of 0.51mm and 0.57mm for these two patch sizes.
In addition, the incorporation of such a DFT layer also reduces the mult-adds cost and memory footprint.

Patch size of band-limited deformation: We again experiment with two patch sizes, i.e., 16×16 and 32×32. For Fourier-Net, a 32×32 patch achieves a slightly higher Dice score than the 16×16 patch. The 32×32 Fourier-Net is 0.005 lower than U-Net in terms of Dice; however, Fourier-Net outperforms U-Net by a 0.14mm margin in terms of HD, while the latter has around 246.63% more mult-add operations and a 268.44% larger memory footprint.

Patch size of band-limited image: In Table <ref>, we also experiment with two different patch sizes for the band-limited input, i.e., 64×64 and 32×32. For the proposed Fourier-Net+, a 64×64 patch is sufficient to estimate the deformation. Specifically, with only 76.22M mult-adds in 1×Fourier-Net+, we can achieve a 0.822 Dice score and a 9.224mm HD, which is only 0.002 lower than Diff-U-Net in terms of Dice and 0.41mm better in terms of HD, while Diff-U-Net has ∼1535% of its mult-adds cost and ∼1011% of its memory footprint.

Impact of cascades: Using 2 cascades, 2×Fourier-Net+ improves the 0.818 Dice score of 1×Fourier-Net+ to 0.822 and reduces its 9.90mm HD to 9.65mm. We found that on this dataset, stacking more cascades does not bring further performance gains beyond 2×Fourier-Net+ and Diff-1×Fourier-Net+. We believe that for the smoother, smaller deformations seen in cardiac motion, two cascades are sufficient given the complexity of the task. In comparison, the intricate deformations seen in brain atlas registration required four cascades in both our 2D and 3D datasets to achieve satisfactory results.

Diffeomorphisms: The optional squaring and scaling layers are also important for diffeomorphisms of the estimated deformation, as suggested by |J|_< 0%. Additionally, though the Dice scores of different experimental settings are not much affected by the squaring and scaling layers, it is notable that the SS layers indeed help to improve the Hausdorff distance in the majority of experiments.

§.§ Parameter Studies

In this section, we show our investigations into how the choice of different parameter combinations affected the performance of our proposed networks on both the 2D OASIS-1 and 3D IXI datasets.

Resolution of band-limited displacement fields: For the 2D OASIS-1 experiments in Table <ref>, we studied Fourier-Net with two resolutions (i.e., 20×24 and 40×48) of the predicted band-limited displacement field, which were respectively 1/8×1/8 and 1/4×1/4 of the original image resolution (160×192). It can be seen that resolution 40×48 improved the Dice score by 2.4% compared to resolution 20×24 (0.732 vs 0.756), with an increase in mult-adds operations (from 679.19M to 888.25M) and memory footprint (from 31.18MB to 35.89MB). Using resolution 40×48, the Dice scores of our Fourier-Net and Diff-Fourier-Net were respectively 1% and 0.6% lower than those of the full-resolution U-Net and Diff-U-Net. However, in terms of mult-adds and memory footprint, these two U-Nets were 2.5 and 2.7 times more expensive than our Fourier-Net and Diff-Fourier-Net, respectively. For the 3D IXI experiments (with resolution 160×192×224) in Table <ref>, we also observed that learning a displacement field with a smaller resolution (20×24×28) was less accurate than using a larger one (40×48×56) in terms of Dice (0.754 vs 0.763).
It also can be seen that learning resolution 40×48×56 performed marginally worse than the full-resolution U-Net and Diff-U-Net, with Dice scores only 0.5% and 0.4% lower, respectively. However, in terms of mult-adds and memory footprint, such two U-Nets were 5.1 and 3.5 times more expensive than our Fourier-Net and Diff-Fourier-Net, respectively. Resolution of band-limited images: In Fourier-Net, we selected the optimal resolutions of the band-limited displacements for OASIS (40× 48) and IXI (40× 48× 56). In Fourier-Net+, we also considered the resolution of the band-limited images. In Table <ref>, we experimented on Fourier-Net+ with two resolutions (i.e., 80× 96 and 40×48) on 2D OASIS-1. We observed that a larger resolution (80×96) was superior to a smaller one (40×48). Specifically, with 40× 48 band-limited images, Fourier-Net+ achieved a Dice score of 0.717, which was 2% lower than the Fourier-Net+ variant with 80×96. On 3D IXI, Fourier-Net+ achieved a Dice score of 0.748 using resolution 80×96×112, which was 1.2% higher than that of Fourier-Net+ using resolution 40×48×56. Impact of cascade number: As can be seen from Table <ref>, although Fourier-Net+ significantly reduced the computational cost and memory footprint, its performance was inferior to Fourier-Net. To overcome this accuracy issue, we proposed a cascaded Fourier-Net+ by stacking multiple Fourier-Net+. Given significant computational savings in Fourier-Net+, cascaded Fourier-Net+ still had an efficiency advantage compared to Fourier-Net and U-Net. From Table <ref>, on OASIS-1, we observe that using more cascades indeed improves performance. For both Fourier-Net+ and Diff-Fourier-Net+ (with resolution 80× 96 and 40× 48 for input and output respectively), increasing the number of cascades k from 1 to 4 continuously improved the registration performance, i.e., from 0.738 to 0.761 and from 0.740 to 0.755, respectively. Note that, even with 4 cascades, Fourier-Net+ (570.64M) had 35.76% less mult-adds than Fourier-Net (888.25M), whilst showing performance gains of 0.5% in terms of Dice (0.761 vs 0.756). Our 4×Fourier-Net+ and Diff-4×Fourier-Net+ were on par with the full-resolution U-Net and Diff-U-Net, with Dice scores only 0.5% and 0.7% lower, respectively. However, in terms of mult-adds and memory footprint, such two U-Nets were 3.8 and 2.5 times more expensive than our 4×Fourier-Net+ and Diff-4×Fourier-Net+, respectively. We notice a similar improvement of performance with the addition of cascades in 3D IXI: specifically, 3×Fourier-Net+ achieved a 0.766 Dice score which is only 0.2% lower than that of U-Net. Meanwhile, the Dice score of Diff-3×Fourier-Net+ was the same as Diff-U-Net (0.765), but U-Net and Diff-U-Net had respectively 15 and 8.3 times more mult-adds and memory footprint than 3×Fourier-Net+ and Diff-3×Fourier-Net+. §.§ Comparison with the state-of-the-art We have so far shown that Fourier-Net can learn a band-limited displacement field to represent the full-resolution deformation with minimal performance loss compared to full-resolution U-Net architectures. We then showed, with cascaded Fourier-Net+, that learning such a band-limited displacement field from band-limited images can achieve similar performance with Fourier-Net as well as full-resolution U-Net architectures. In this section, we compare our Fourier-Net and its variants with a few state-of-the-art methods on the three datasets. 
We note that all reported CPU and GPU runtimes were tested on a machine with 128G RAM, 16 3.80GHz Intel(R) Core(TM) i7-9800X CPUs, and 1 NVIDIA Geforce RTX 2080Ti GPU. The computational time includes the cost of loading models and images and was averaged on the whole testing set with batch size 1. §.§.§ Comparison on Inter-subject Brain Registration In Table <ref>, we compared the performance of our Fourier-Net and Fourier-Net+ with Flash <cit.>, DeepFlash <cit.>, and Diff-B-Spline <cit.> on the challenging task of 2D inter-subject brain registration (OASIS-1). We compiled and ran Flash[<https://bitbucket.org/FlashC/flashc/src/master/>] on CPU, but encountered segmentation fault errors with the official GPU version. As such we have not compared GPU inference times of Flash. We reported the performance of Flash on three band-limited resolutions (i.e., 16×16, 20× 24, and 40×48), and we grid-searched its built-in hyper-parameters over 252 different combinations on the whole validation set for each resolution. We also attempted to run the official DeepFlash[<https://github.com/jw4hv/deepflash>] with supervision from Flash's results. We trained DeepFlash on all 40200 training pairs for up to 1000 epochs with more than 40 different combinations of hyper-parameters. Diff-B-Spline <cit.> was trained by using its official implementation[<https://github.com/qiuhuaqi/midir>] on all image pairs in the training set. The hyper-parameters were tuned on the held-out validation set, and the highest performing model used MSE data similarity and λ=0.01 smoothness regularisation. All our variants of Fourier-Net outperformed competing methods in terms of Dice. Compared to Flash using a 40× 48 resolution, Diff-Fourier-Net improved Dice by 2.2% and was 5718 times faster on CPU. Although DeepFlash was much faster than Flash, we found it extremely difficult to successfully train a model and would expect its true potential to fall close to that of Flash, in line with their published work. We note that DeepFlash is not an end-to-end method, because its output (band-limited velocity field) requires an additional PDE algorithm to compute the final deformation. As such, the method is slower than other deep learning methods such as our methods or Diff-B-Spline (0.012s per image pair on CPU). Fourier-Net+ and Diff-Fourier-Net, with similar mult-adds and memory footprint as Diff-B-Spline, achieved comparable results to Diff-B-Spline in terms of Dice but were respectively 2 and 1.5 times faster than Diff-B-Spline. Our highest performing 4×Fourier-Net+ achieved a Dice score of 0.761, which was able to bridge the gap between Fourier-Net+ and Fourier-Net, whilst still achieving very fast runtimes with fewer mult-adds and less memory footprint. We also listed the percentage of negative values of the Jacobian determinant of deformation (denoted by |J|_ < 0%) for all compared methods in Table <ref>. Though both Flash and Diff-B-Spline are diffeomorphic approaches, neither of them produced perfect diffeomorphic deformations on this dataset. Diff-Fourier-Net and Diff-Fourier-Net+ however generated nearly zero negative Jacobian determinants. §.§.§ Comparison on Atlas-Based Brain Registration In Table <ref>, we compared our Fourier-Net and its variants with iterative methods such as ANTs SyN<cit.> and Flash<cit.>, as well as deep learning methods such as TransMorph <cit.> and LKU-Net <cit.>. 
To guarantee a fair comparison between different methods, we used the IXI dataset with the exact same pre-processing steps and testing protocol as <cit.>. Because of this, we directly took relevant results from the original papers <cit.> and <cit.>, and such results are labeled with ∗ in Table <ref>. Note that the runtimes of all compared methods were computed on our end using the same machine. For Flash <cit.>, we grid-searched 200 combinations of hyper-parameters using only 5 randomly selected pairs from the validation set, due to the fact that the registration process of Flash takes more than 30 minutes on the CPU for each input image pair. Our Fourier-Net achieved a 0.763 Dice score which is competitive with top-performing learning-based methods including Transmorph and LKU-Net, whilst reducing CPU inference time to close to a second (1.029s). Fourier-Net+ traded a small performance drop for sub-second CPU runtimes, and had the lowest memory footprint and number of mult-adds across all methods. Specifically, Fourier-Net+ achieved a 0.748 Dice score with only 19.30G mult-adds, 670.2MB memory footprint, and 0.455s per pair speed. Compared to VoxelMorph-1 and VoxelMorph-2 respectively, Fourier-Net+ improved Dice by 2% and 1.6% and was 4.6 and 4.9 times faster, whilst using only 6.35% and 4.84% their mult-adds, and 22.34% and 17.22% their memory footprint. Our cascaded 3×Fourier-Net+ and Diff-3×Fourier-Net+ equaled the performance of top state-of-the-art methods whilst retaining a similar inference time compared to those diffeomorphic and non-diffeomorphic learning-based methods. Amongst iterative diffeomorphic methods, Flash achieved the highest Dice score of 0.692, but it needed 1,760s to register an image pair on average. Our Diff-3×Fourier-Net+ achieved a Dice score of 0.765, with 57.90G mult-adds and 4.981s runtimes, outperforming Diff-B-Spline, B-Spline-TransMorph, and Diff-LKU-Net in terms of Dice, mult-adds, and CPU runtimes. We note that those diffeomorphic methods based on dense stationary velocity fields (SVFs) were around 3.6 seconds slower than their non-diffeomorphic versions (see Diff-LKU-Net vs LKU-Net and Diff-Fourier-Net vs Fourier-Net), with the extra 3.6 seconds accounting for the computation of 7 squaring and scaling layers. Such dense SVF-based diffeomorphic methods were however faster than B-Spline methods such as Diff-B-Spline and B-Spline-TransMorph because these methods require additional transposed convolutional layers in order to first recover a full-resolution SVF, which costs extra time. Figure <ref> shows that whilst Flash's deformation grids have no foldings, it over-smooths the displacement field resulting in a less accurate warping. Figure <ref> (last row) shows that only Flash and Fourier-Net produce strictly band-limited Fourier coefficients. The deformation of Diff-Fourier-Net, 3×Fourier-Net+, and Diff-3×Fourier-Net+ are no longer band-limited due to the use of the squaring and scaling layers and cascades. We additionally plot the performance of different methods on 7 representative brain structures with the respective boxplot shown in Figure <ref>, including the brain stem, left/right cerebellum white matter, left/right cerebellum-cortex, and left/right hippocampus. As can be seen, our Fourier-Net variants consistently perform well over all classes. 
§.§.§ Comparison on 2D Cardiac Motion Estimation

In Table <ref>, we compare the registration performance of different methods, including the traditional FFD and TV-L_1 and the more recent deep learning-based VoxelMorph <cit.>, RC-Net <cit.>, SYMNet <cit.>, and LKU-Net <cit.>. Since we used the exact same training/validation/testing splits as in <cit.>, we directly reported their FFD and TV-L_1 results in Table <ref>. On the other hand, we trained all compared deep learning methods from scratch and tuned all built-in parameters for each deep learning method on the validation dataset. We found that the MSE data term obtains better performance than NCC for all methods. We describe the detailed parameter settings used for each method as follows:
* VoxelMorph: As in Table <ref>, we trained two 2D variants of VoxelMorph, i.e., VoxelMorph-1 and VoxelMorph-2. The only difference between them is that VoxelMorph-1 has 8 channels in the first and last convolutional layers while VoxelMorph-2 has 16 channels in such layers.
* RC-Net: We trained two different versions of RC-Net by respectively using 3 and 4 cascades and adopting VoxelMorph-1 as the backbone. We therefore term the two RC-Nets 3×RC-Net and 4×RC-Net in Table <ref>.
* SYMNet: We adopted the official SYMNet code for 2D cardiac motion estimation by switching all convolutions from 3D to 2D. The optimal results were achieved with λ=0.01.
* Diff-B-Spline: We set the control point spacing to 4 and tuned the regularization parameters on the validation set. The optimal results were achieved with λ=0.01.
* LKU-Net: The official 2D LKU-Net and Diff-LKU-Net were used, where we set the number of channels in the initial layer to 8 and the large kernel size to 5×5. The optimal results were achieved with λ=0.01.

Non-Diffeomorphic: Firstly, we can clearly observe that the deep learning-based methods consistently outperform the traditional FFD and TV-L_1 in terms of both Dice score and HD. On the other hand, approaches based on a single U-Net, such as VoxelMorph-1, VoxelMorph-2, and LKU-Net, all achieve comparable results in terms of both Dice and HD. 3×RC-Net and 4×RC-Net, by cascading multiple U-Nets, achieve the top 2 Dice scores among the compared methods. However, our Fourier-Net and 2×Fourier-Net+ achieve comparable Dice results and a lower HD with significantly less computational cost and memory footprint. For example, 4×RC-Net is 0.004 higher than our 2×Fourier-Net+ in terms of Dice, while performing 0.3mm worse in HD and requiring 754% of its mult-adds, 321% of its memory usage, and 283% of its CPU inference time.

Diffeomorphic: When compared to diffeomorphic methods such as SYMNet and Diff-LKU-Net, our Diff-Fourier-Net_Small and Diff-1×Fourier-Net+ achieve a comparable Dice score and a better HD using significantly less computational cost. For example, our Diff-1×Fourier-Net+ takes only 76.22 million mult-adds and a 5.08MB memory footprint, which are about 23.98% and 20.02% of the mult-adds of SYMNet and LKU-Net, and about 38.25% and 18.43% of their memory, respectively. When compared to Diff-B-Spline, which also estimates a low-dimensional deformation, our Diff-1×Fourier-Net+ achieves better Dice results and HD with comparable computational cost (76.22 vs 75.95) and a faster speed on CPU (0.006s vs 0.009s).

§.§.§ Comparison on 3D Cardiac Motion Estimation

We conducted a final experiment on the 3D-CMR dataset to ensure we have validated the performance of our models beyond only the task of brain registration.
In Table <ref>, we compared the registration performance between different methods, including FFD, Demons, ANTs SyN, RC-Net <cit.>, VR-Net<cit.>, TransMorph<cit.>, and LKU-Net<cit.>. Since we used the same pre-processed data in <cit.>, the results of these iterative methods such as Demons, FFD, and ANTs SyN are directly adopted from <cit.>. We used MSE as the data term for its superior performance to NCC in this dataset. The detailed parameter settings used for completing deep learning based methods were as follows: * VoxelMorph <cit.>: We trained two variants of VoxelMorph, i.e., VoxelMorph-1 and VoxelMorph-2. The only difference between them is VoxelMorph-1 has 8 channels at the first and last convolutional layers while VoxelMorph-2 has 16 channels at such layers. Both VoxelMorph-1 and VoxelMorph-2 were trained with the MSE data term with the first-order smoothness regularization, where λ was set to 0.01 for optimal results. * RC-Net <cit.>: We trained two different variants of RC-Net by respectively using 3 and 4 cascades. Both variants used VoxelMorph-1 as the backbone. We therefore term the two variants of RC-Net as 3×RC-Net and 4×RC-Net. The optimal results were achieved with λ=0.01 for both variants. * SYM-Net <cit.>: We used its official code[<https://github.com/cwmok/Fast-Symmetric-Diffeomorphic-Image-Registration-with-Convolutional-Neural-Networks>]. The initial number of kernels was set to 8. The optimal results were achieved with λ=0.01. * Diff-B-Spline <cit.>: We trained three different variants of Diff-B-Spline with the control pointing spaces being 3, 4, and 8, respectively. The optimal results were achieved with the control pointing spaces being 8 and λ being 0.01. * VR-Net <cit.>: As suggested by the original code[<https://github.com/xi-jia/Learning-a-Model-Driven-Variational-Network-for-Deformable-Image-Registration>], the data term was set to `L_1', the number of warping cascades is set to 2, and the number of intensity consistency layers in each cascade is set to 1 in our experiments. For a fair comparison, we used VoxelMorph-1 as the backbone in each cascade. * TransMorph <cit.>: The official TransMorph code[<https://github.com/junyuchen245/TransMorph_Transformer_for_Medical_Image_Registration>] is adopted. The optimal λ was set to 0.01. * LKU-Net <cit.>: The official implementation[<https://github.com/xi-jia/LKU-Net>] was used, where we set the number of channels in the initial layer as 8 and the large kernel size as 5×5×5 for both LKU-Net and its diffeomorphic version Diff-LKU-Net. The optimal results for both methods were achieved with λ=0.01. In Table <ref>, we observe that all methods estimating full-resolution deformations including non-diffeomorphic displacement fields and diffeomorphic velocities were outperformed by the methods estimating a low-dimensional representation of the displacement or velocity field, such as Diff-B-Spline and our Fourier-Net variants. We note that the deformations produced by such methods are inherently very smooth. In contrast to the intricate and detailed deformations required in brain registration tasks, the deformation in the left and right ventricles between ED and ES is also smooth. As Dice and Hausdorff distances of the LV, RV and Myo are used as the surrogate for registration accuracy in this task, we believe the inherent smoothness of Diff-B-Spline and Fourier-Nets is an advantage in this task. 
Although Diff-B-Spline was very competitive on this dataset, the highest Dice and the lowest Hausdorff distance were all achieved by our methods, with our highest performing method (Diff-Fourier-Net+) outperforming Diff-B-Spline by 1.9% in Dice and 0.15mm in Hausdorff distance (HD). On the other hand, our Fourier-Net+ outperformed TransMorph and LKU-Net, with improvements of 7.1% and 7.5% in terms of Dice, and 0.45mm and 0.51mm in terms of HD, respectively. Additionally, our Fourier-Net+ was 9.05 and 4.42 times faster while utilizing only 0.35% and 0.84% of their multiply-add operations, and 2.07% and 0.97% of their memory usage. In Figure <ref>, we plot estimated displacement fields, deformation grids, and warped moving images from each method. We can clearly see that the estimated deformations from our Fourier-Nets are smoother than those of the competing methods, and that Fourier-Net and Fourier-Net+ produce strictly band-limited deformations. In Figure <ref>, we plot the distributions of Dice and HD for different methods over three structures (LV, Myo, and RV) and their average (Avg) on 3D-CMR, where we can clearly observe that our Diff-Fourier-Net and Diff-Fourier-Net+ exhibit an improvement over the competing methods in terms of Dice. The HD distributions of Diff-Fourier-Net are comparable with those of Diff-B-Spline, with a slight improvement seen in Diff-2×Fourier-Net+ on all three structures. In Figure <ref>, we compare the computational cost of different methods, where the x-axis and y-axis denote the runtime on CPU (s) and the Dice score, respectively. The number of mult-adds operations is expressed by the area of each circle. As can be seen, Fourier-Net, Fourier-Net+, and cascaded Fourier-Net+ achieved higher Dice scores with faster inference speed and lower computational cost, while their diffeomorphic versions take slightly longer and produce slightly better performance. § CONCLUSION To reduce the computational cost and memory footprint of U-Net style registration networks, we first proposed Fourier-Net to learn a low-dimensional representation of the displacement/velocity field in the band-limited Fourier domain. Building upon Fourier-Net, and to further boost registration efficiency, we then proposed Fourier-Net+ and cascaded Fourier-Net+, aiming to learn the band-limited displacement/velocity field directly from band-limited images instead of their original full-resolution counterparts. As band-limited images and displacement/velocity fields are low-resolution representations, our experiments on three datasets showed that Fourier-Net, Fourier-Net+, and cascaded Fourier-Net+ were significantly more efficient than U-Net style architectures and a number of state-of-the-art approaches, whilst retaining comparable performance in terms of registration accuracy. Specifically, on the 2D OASIS-1 and 3D IXI brain datasets, we showed that Fourier-Net can effectively learn a band-limited displacement field that represents the full-resolution deformation with minimal performance loss compared to full-resolution U-Net architectures. We then showed, with cascaded Fourier-Net+, that learning such a band-limited displacement field directly from band-limited images achieves similar performance to Fourier-Net and full-resolution U-Net architectures. 
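The band-limited decoding step summarised in this conclusion—recovering a full-resolution displacement from a small patch of Fourier coefficients—amounts to zero-padding in the frequency domain followed by an inverse DFT. The numpy sketch below illustrates the idea; the centring convention and the grid-size rescaling are assumptions, and the released Fourier-Net code should be taken as authoritative for the exact normalisation.

```python
import numpy as np

def band_limited_to_full(b_hat, full_shape):
    """Zero-pad a centred low-frequency coefficient patch and invert the DFT.

    b_hat: complex array (e.g. 40x48) of band-limited Fourier coefficients with the
           zero frequency at its centre; full_shape: target size, e.g. (160, 192).
    """
    F = np.zeros(full_shape, dtype=complex)
    h, w = b_hat.shape
    ch, cw = full_shape[0] // 2, full_shape[1] // 2
    F[ch - h // 2: ch - h // 2 + h, cw - w // 2: cw - w // 2 + w] = b_hat
    # real part suffices here; a Hermitian-symmetric patch would make this exact
    disp = np.fft.ifft2(np.fft.ifftshift(F)).real
    # assumed rescaling to compensate for the different grid sizes
    return disp * (full_shape[0] * full_shape[1]) / (h * w)
```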
On the 3D-CMR dataset, where cardiac motion is intrinsically smooth and relatively simple, our Fourier-Net+ alone already performed very well, with registration accuracy on par with Fourier-Net, cascaded Fourier-Net+, and other state-of-the-art methods, but with significantly lower computational cost and memory footprint. We also noticed that diffeomorphic Fourier-Net variants were often more accurate than their non-diffeomorphic counterparts in terms of Dice, but were slightly slower in terms of inference speed due to the use of squaring and scaling layers. However, our diffeomorphic methods were still the most efficient approaches when compared to other competing diffeomorphic methods. Though our proposed Fourier-Net, Fourier-Net+, and cascaded Fourier-Net+ achieved performance comparable with other state-of-the-art methods, it is notable that Fourier-Net assumes that the displacement or velocity field lacks high-frequency signals. This assumption is valid for most smooth and diffeomorphic deformations, and our experimental results on all three datasets also support it. However, we would expect a performance drop in tasks where this assumption does not hold. In Fourier-Net+, intuitively, one might assume the removal of high-frequency image information within the encoder to be prohibitive in tasks such as brain registration, whose images contain complex structures. We showed, however, that the efficient design of Fourier-Net+ allows the cascaded version of this network to have fewer multiply-add operations than even Fourier-Net, whilst retaining similar performance despite the removal of high-frequency image information in our encoder. In future work we will explore how to additionally incorporate high-frequency information into Fourier-Net and Fourier-Net+ through the training process to further improve performance. § ACKNOWLEDGMENTS The authors would like to thank Prof Declan P. O’Regan for providing the CMR image data for this research. This work is partially supported by the British Heart Foundation Accelerator Award (AA/18/2/34218), and X. Jia is partially funded by the China Scholarship Council. § APPENDIX §.§ 3D Cardiac Motion Estimation In Table <ref>, we list more experimental results for this dataset. We can observe that both the 16×16×12 and 32×32×32 Fourier-Net outperform the full-resolution U-Net backbone by large margins in terms of Dice, i.e., 7.1% and 8.3%. Additionally, the two Diff-Fourier-Nets also outperform the full-resolution Diff-U-Net by large margins, i.e., 7.9% and 6.0%. This phenomenon differs from our observations on the other two brain datasets, where Fourier-Net approached the performance of the full-resolution U-Net but did not exceed it. Here, the results show that through learning a band-limited deformation, Fourier-Net can significantly outperform U-Net. We note that the deformation of the left and right ventricles between ED and ES is very smooth, in contrast to the often intricate and complex deformations required for brain atlas registration. We believe this dataset is therefore particularly well suited to Fourier-Net, as the resultant displacements from the band-limited 𝔹_ϕ intrinsically preserve global smoothness. In Table <ref>, we investigate two patch sizes (i.e., 32×32×24 and 64×64×48) for the band-limited image. We observe that even with a 32×32×24 band-limited image as input, the performance of Fourier-Net+ (0.803 Dice, 6.06mm HD) is already close to Fourier-Net (0.814 and 6.00mm). 
The performance of Diff-Fourier-Net+ (0.828 and 5.89) is also comparable to that of Diff-Fourier-Net (0.827 and 5.89), while the latter has 59.6 times mult-adds and 53.5 times memory footprint. With a 64×64×48 band-limited image, the performance of both Fourier-Net+ and Diff-Fourier-Net+ is further improved. §.§ Hyper-parameters of Flash and DeepFlash In this section, we detail the hyper-parameters of Flash and DeepFlash used for our experiments in the main paper. §.§.§ Flash on 2D OASIS In Table 2 of the main paper, we reported the registration performance of Flash <cit.> on 2D OASIS data with respect to three patch sizes of band-limited velocities, i.e., 16×16, 20×24, and 40×48. The 16×16 patch is recommended in the official implementation. However, For a fair comparison with our Fourier-Net, we additionally experimented the patch sizes of 20×24 and 40×48 as well. For each patch size, by varying α (α∈{1,3,4,6}), γ (γ∈{0.05,0.1,0.2,0.5,1.0, 2.0,5.0}), and σ (σ∈{0.01,0.03,0.05,0.1,0.3,0.5,1,3,5}) appeared in the official implementation[<https://bitbucket.org/FlashC/flashc/src/master/Testing/runImageMatching/runImageMatching.sh>], we experimented 252 different hyper-parameter combinations. We then used the set of hyper-parameters that has the best performance on the validation set and reported its registration performance on the test set. In Table <ref>, we list the final hyper-parameters we used for the 2D OASIS test set. §.§.§ Flash on 3D IXI In Table 4 of the main paper, we reported the performance of Flash on the 3D IXI test set. For this experiment, we tried in total 200 different combinations of hyper-parameters with three different patch sizes (16×16×16,20×24×28, and 40×48×56). The experiments were performed on 5 randomly selected validation examples only, as Flash takes really long time to predict a band-limited velocity field for an image pair (30 minutes for 20×24×28 band-limited velocity fields and more than 1 hour for 40×48×56 band-limited velocity fields). In Table <ref>, we listed the final hyper-parameters used in the test set. As the best result came from using the 20×24×28 patch size, the Fourier coefficients of Flash in Figure 4 of the main paper have the smallest area. §.§.§ DeepFlash on 2D OASIS In Table 2 of the main paper, we reported the registration performance of DeepFlash <cit.> on 2D OASIS data with two different patch sizes, i.e., 16×16 and 20×24. We did not implement DeepFlash with 40×48 patch size because: 1) Flash takes 85.773 seconds to predict 40×48 band-limited velocity fields, producing the velocity fields for all 40200 training pairs therefore will cost about 40 days, which is not practical; and 2) the performance of DeepFlash is bounded by Flash, whose performance in terms of Dice is already lower than that of the proposed Fourier-Net. For the similar reasons, we did not include DeepFlash on 3D IXI either. We experimented 40 combinations of hyper-parameters by varying learning rate ({0.005, 0.001, 0.0005, 0.0001}), batch size ({1,8,16,32,64}) and dropout ratio ({0, 0.2}) in DeepFlash. The maximum training epoch for each set of parameters is 1000 epochs, as we notice that DeepFlash is slow to converge. Note that we only train our Fourier-Net with 10 epochs on this 2D data. The final hyper-parameters of training DeepFlash are listed in Table <ref>. IEEEtran
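The 252 Flash configurations referred to above are simply the full product of the α, γ and σ grids. A sketch of such a sweep is shown below; run_flash is a hypothetical placeholder for invoking the Flash binary and scoring the result on the validation pairs, not a call provided by any existing package.

```python
import itertools

alphas = [1, 3, 4, 6]
gammas = [0.05, 0.1, 0.2, 0.5, 1.0, 2.0, 5.0]
sigmas = [0.01, 0.03, 0.05, 0.1, 0.3, 0.5, 1, 3, 5]   # 4 * 7 * 9 = 252 combinations

def run_flash(alpha, gamma, sigma):
    """Hypothetical wrapper that runs Flash with one setting and returns validation Dice."""
    raise NotImplementedError

best = None
for a, g, s in itertools.product(alphas, gammas, sigmas):
    score = run_flash(a, g, s)
    if best is None or score > best[0]:
        best = (score, a, g, s)   # keep the best-scoring hyper-parameter triple
print(best)
```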
http://arxiv.org/abs/2307.00302v1
20230701111119
Constrained Prioritized 3T2R Task Control for Robotic Agricultural Spraying
[ "Ivo Vatavuk", "Zdenko Kovačić" ]
cs.RO
[ "cs.RO" ]
In this paper, we present a solution for robot arm-controlled agricultural spraying, handling the spraying task as a constrained prioritized 3T2R task. 3T2R tasks in robot manipulation consist of three translational and two rotational degrees of freedom, and are frequently used when the end-effector is axis-symmetric. The solution presented in this paper introduces a prioritization between the translational and rotational degrees of freedom of the 3T2R task, and we discuss the utility of this kind of approach for both velocity and positional inverse kinematics, which relate to continuous and selective agricultural spraying applications respectively. Agricultural Automation, Mobile Manipulation, Optimization and Optimal Control § INTRODUCTION Agricultural robotics is a rapidly advancing research field that focuses on developing and deploying robotic technology for various agricultural tasks. The goal is to enhance the efficiency and sustainability of different agricultural procedures and address labor shortages. Research presented in this paper is a part of the project HEKTOR <cit.>, which aims to introduce heterogeneous robotic systems to the agricultural areas of viticulture and mariculture. A mobile manipulator is envisioned to autonomously perform various viticultural tasks, including monitoring, spraying and suckering. Manual agricultural spraying is often performed with a spray wand, a nozzle mounted on the end of a lightweight pole. The nozzle is often mounted at an angle, making it easier for the operator to control both the position and the orientation of the nozzle, and reach high and low areas of the canopy. In the presented work, a spray wand is mounted as the robot arm end-effector (Fig. <ref>), aiming to maintain the advantages of manual spraying while benefiting from increased efficiency and precision of robotic technology. Our previous work focused on the problem of selecting coordinated control inputs for the vehicle and the robot arm in the same scenario <cit.>. The robot arm was controlled in the task space, controlling solely the translational velocity of the spraying frame, depicted in Fig. <ref>, and disregarding its rotation. The reasoning behind this was that, to achieve large enough linear velocities of the spraying frame, and reach high and low areas of the plant, it is not possible to fully control the orientation of the spraying frame. In this paper, a more complete solution to the robot arm control problem is offered, handling the control of the spraying frame as a prioritized 3T2R task. 3T2R tasks, also known as pointing tasks, are frame pose control tasks where all three components of the frame position, and only two components of the frame orientation are considered <cit.>. Since only five degrees of freedom are controlled, a functional redundancy is introduced for robot arms with six degrees of freedom and more. There is extensive research on different approaches to resolving functional redundancies in robot manipulation <cit.>. Tasks performed with axis-symmetric tools, such as robotic welding, paint spraying and drilling, are frequent examples of 3T2R tasks. 
In robotic drilling for example, both the position and the orientation of the drill bit are important for task execution, but the rotation around the drill bit is not. We handle the agricultural spraying task in a similar way, since the rotation around the approach axis of the spraying frame does not effect the application of the spraying agent. However, unlike the drilling task, spraying with a correct spraying frame position and a non ideal approach axis orientation can still be acceptable <cit.>. In <cit.>, From et. al. report that the linear velocity of the paint gun is far more important than its orientation for achieving uniform paint coating. We believe the same to be the case in agricultural spraying, even to a larger extent, since the spraying agent in agricultural spraying is generally less dense than the paint in spray painting applications, and human operators performing the agricultural spraying tasks generally handle the orientation of the nozzle with less care than in the paint spraying applications. This insight is handled by introducing a prioritization between translational and rotational components of the 3T2R task. Prioritized task space control <cit.> replaces commonly used task weighting approach, with hard priorities that are guaranteed to be satisfied. The solution to the lower priority task is found inside the nullspace of a higher priority task. This is performed iteratively, for any number of tasks with different priorities, until a certain task fully constrains the optimization problem. This kind of approach is often referred to as prioritized velocity space inverse kinematics (IK), prioritized instantaneous IK or just prioritized IK <cit.>. In this work, constrained prioritized task space control algorithm presented in <cit.> is used to solve the prioritized velocity space inverse kinematics problem, for the continuous agricultural spraying application. Continuous spraying refers to the task of treating the entire canopy of the plant. Previous robotic approaches to this task mostly used a set of nozzles fixed on the mobile vehicle <cit.>, while the robot arm was mostly utilized for selective spraying <cit.>. Selective spraying refers to the problem of spraying a specific part of the plant, for example a single disease-ridden leaf, or a fruit cluster. A solution to this problem is also presented, handling agricultural selective spraying as a prioritized positional level IK problem. A solver for this purpose based on iterative constrained prioritized task space control is presented. §.§ Contribution We present a constrained prioritized 3T2R task control scheme for agricultural spraying, solving the 3T2R control task on both velocity and position levels, prioritizing between its translational and rotational components. Two use cases are discussed in which the velocity and the position level algorithms are applied to continuous and selective agricultural spraying respectively. The implementation of the velocity level prioritized task space control scheme for continuous spraying, and the prioritized positional inverse kinematics solver for selective spraying are discussed in detail. §.§ Paper Organization The remainder of this paper is structured as follows: Section II presents the constrained prioritized task space control approach for continuous agricultural spraying. The details of the approach are presented as well as the discussion on the effects of different constrains on the performance. 
Section III presents the solution to the selective agricultural spraying problem, approached as a prioritized positional inverse kinematics problem. Details of the implementation are presented, and the results and their implications are discussed. Finally, Section IV. concludes the paper with some comments on future work. § CONSTRAINED PRIORITIZED TASK SPACE CONTROL FOR CONTINUOUS SPRAYING As already mentioned, continuous spraying refers to the problem of applying the spraying agent to the entire canopy of the plant. Constrained prioritized task space control is used to select joint velocity commands that follow the commanded spraying frame velocity. Velocity of the spraying frame is controlled as a prioritized 3T2R control task, prioritizing its translational over its rotational component. §.§ Velocity Level Prioritized Task Space Control Joint velocity commands are selected by solving a constrained prioritized task space control problem <cit.>. The general constrained prioritized task space control problem is defined as: h_i = xmin E_i(x) s.t. E_k(x) = h_k, ∀ k < i A_eqx + b_eq = 0 A_ieqx + A_ieq≥ 0 where E_i is the quadratic cost function of the i-th priority, h_i is the optimal value of that cost function, and A_eq, b_eq, A_ieq and b_ieq are the matrices and vectors describing linear equality and inequality constraints, respectively. Priorities used for continuous agricultural spraying are: * Translational part of the 3T2R task * Rotational part of the 3T2R task * Desired joint positions The cost function of the first priority has a following form: E_1(q̇) = v_c - J_Tq̇^2 where v_c is the commanded linear velocity of the spraying frame, J_T is the translational part of the spraying frame Jacobian, and q̇ is the joint velocity vector. v_c is the output of the MPC solver described in <cit.>. Generally, there are multiple joint velocity vectors q̇ that result in the commanded linear velocity, and the criterion function of the second priority is selected between those solutions, in the null space of the first priority. The cost function of the second priority, referring to the rotational part of the 3T2R task, has a following form: E_2(q̇) = ω^L_c,x - J^L_R,xq̇^2 + ω^L_c,y - J^L_R,yq̇^2 where ω^L_c,x and ω^L_c,y are commanded angular velocities of the spraying frame around its local x and y axes respectively, and J^L_R,x and J^L_R,y are the corresponding Jacobian matrices. Since the spraying nozzle is an axis-symmetric tool, the angular velocity around its local z axis is not directly controlled. The final priority, which resolves any redundancy remaining after minimizing the first two priorities, favors such joint velocities q̇ that move the arm towards a desired configuration: E_3(q̇) = q̇_c - q̇^2 The commanded joint velocities q̇_c that drive the robot arm towards a desired pose q_d are selected by a proportional controller: q̇_c = K_P,q(q_d - q) where K_P,q is the controller gain and q is a current joint position vector. Inequality constraints are used to enforce joint velocity and acceleration limits: q̇≤q̇≤q̇ q̈≤q̈≤q̈ Since the prioritized task space control problem deals with joint velocities, equation (<ref>) is replaced with the one in the velocity space: q̇_P + q̈Δ t ≤q̇≤q̇_P + q̈Δ t where Δ t is the control time step, and q̇_P are joint velocities in the previous time step. 
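To make the priority structure above concrete, the following numpy sketch resolves the three priorities by successive nullspace projection. It deliberately drops the joint velocity and acceleration constraints, which in the actual controller are handled by the constrained prioritized task space solver built on OSQP, so it should be read only as an illustration of the prioritisation mechanism.

```python
import numpy as np

def prioritized_qdot(J_T, v_c, J_R, w_c, qdot_c):
    """Three-priority velocity resolution by successive nullspace projection.

    J_T (3xn): translational spraying-frame Jacobian, v_c: commanded linear velocity.
    J_R (2xn): rows of the rotational Jacobian for the two controlled axes,
    w_c: their commanded angular velocities. qdot_c: preferred joint velocity.
    Joint velocity/acceleration limits are ignored in this sketch.
    """
    n = J_T.shape[1]
    pinv = np.linalg.pinv
    qdot = pinv(J_T) @ v_c                          # priority 1: linear velocity
    N = np.eye(n) - pinv(J_T) @ J_T                 # nullspace of priority 1
    JR_N = J_R @ N
    qdot = qdot + pinv(JR_N) @ (w_c - J_R @ qdot)   # priority 2, solved inside that nullspace
    N = N @ (np.eye(n) - pinv(JR_N) @ JR_N)         # remaining nullspace
    return qdot + N @ (qdot_c - qdot)               # priority 3: preferred joint motion
```

With J_T and J_R taken as the translational rows and the two controlled rotational rows of the spraying-frame Jacobian, the commanded linear velocity is reproduced exactly whenever it is feasible, and only the remaining freedom is spent on the angular command and the preferred joint motion.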
§.§ Commands for the rotational part of the 3T2R task Commands for the local angular velocities of the spraying frame ω^L_c,x and ω^L_c,y are calculated using the error between the desired and the current approach axis orientation: err_α = arccos( app_z·app_d,z ) err_axis = app_z×app_d,z where app_z and app_z,d are the current and the desired approach axis vectors, respectivelly, err_α is the angular distance between the two vectors, and err_axis is an axis around which err_α acts. Angular error vector represented in the local frame is: α^L_err = _LR^B(err_α·err_axis) If the z axis of the frame is considered its approach axis, the z component of α^L_err is always zero, and the local angular velocities are calculated as: ω^L_c = K_P,ωα^L_err = [ ω^L_c,x; ω^L_c,y; 0; ] where K_P,ω is the proportional controller gain. §.§ Continuous Spraying Examples The previously described approach was tested on three continuous spraying examples, with different commanded linear velocities and constraints (Fig. <ref>). In all the examples the spraying frame rotates freely around its approach axis, as a result of 3T2R control (Fig. <ref>). Footage of the examples can be seen in the accompanying video[<https://www.youtube.com/watch?v=FRdmGsSCAh4>]. Constrained prioritized task space control solver described in <cit.> by de Lasa et al. was implemented in C++ using OSQP (Operator Splitting Quadratic Program) quadratic programming solver <cit.>. This implementation was used for the experiments, and is available on GitHub[<https://github.com/ivatavuk/ptsc_eigen>]. First example has a low commanded linear velocity of 0.2 m/s, resulting in both the linear and rotational velocity being feasible during the entire trajectory. The 3T2R task is followed in its entirety as a result, and the third priority (Eq. (<ref>)) fully constrains the prioritized optimization problem. In the second example, the same commanded linear velocity was used as in the first one, but with an addition of a positional constraint on a nozzle height. The nozzle is not allowed to reach positions lower than 0.3 m from the robot arm base. During the lower segment of the trajectory, this constraint becomes active, which results in the prioritization between the translational and rotational component of the 3T2R task being noticeable (Fig. <ref>). Third example has a large commanded linear velocity of the spraying frame of 0.8 m/s, which results in joint velocity and acceleration constraints being reached during the execution of the trajectory. As a consequence, the 3T2R task is not achievable in its entirety, and the third priority is disregarded for the most part of the trajectory. To combat this issue, for the fast trajectory only two priorities are used. The first priority is the same as in the previous examples (Eq. <ref>), and the second priority is a weighted combination of the rotational component of the 3T2R task and desired joint movement: E_2(q̇) = ω^L_x,d - J^L_R,xq̇^2 + ω^L_y,d - J^L_R,yq̇^2 + wq̇_d - q̇^2 In this example, the position of the spraying frame follows the commanded linear velocity, while its desired orientation is not achievable due to joint velocity and acceleration constraints. The utility of the presented method resides in the prioritization between the translational and rotational components of the 3T2R tasks. 
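Before moving on, a compact numerical version of the angular-velocity command derived at the start of this section is sketched below; the normalisation of the error axis and the direction of the base-to-local rotation are assumptions about the intended frame conventions rather than a statement of the implementation.

```python
import numpy as np

def approach_axis_rate(app_z, app_z_des, R_LB, k_p=1.0):
    """Proportional angular-velocity command for the approach-axis error.

    app_z, app_z_des: current and desired approach axes (unit vectors, base frame).
    R_LB: rotation matrix mapping base-frame vectors into the local spraying frame.
    The returned local command has a negligible z component by construction,
    reflecting that rotation about the nozzle axis is left uncontrolled.
    """
    c = np.clip(np.dot(app_z, app_z_des), -1.0, 1.0)
    err_alpha = np.arccos(c)                   # angular distance between the axes
    axis = np.cross(app_z, app_z_des)
    n = np.linalg.norm(axis)
    if n < 1e-9:                               # axes aligned (or exactly opposite)
        return np.zeros(3)
    axis = axis / n                            # unit rotation axis
    return k_p * (R_LB @ (err_alpha * axis))   # error expressed in the local frame
```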
When the 3T2R task velocity is not feasible in its entirety, due to the constraints posed by the robot arm or due to the custom constraints posed by the user, task priorities are utilized to find an optimal spraying angle. § PRIORITIZED POSITIONAL INVERSE KINEMATICS FOR SELECTIVE SPRAYING Selective agricultural spraying refers to the task of spraying a specific part of the plant, for example a cluster of grapes. This task is handled as a prioritized positional inverse kinematics problem. Prioritization between the translational and rotational components of the 3T2R task used for continuous spraying remains for this use case. §.§ Prioritized Positional Inverse Kinematics Solver Prioritized positional inverse kinematics solver implementation is similar to standard numerical inverse kinematics, iteratively solving the velocity level problem. The velocity level problem is solved as a constrained prioritized task-space control problem, as described in section <ref>. While the standard positional inverse kinematics solvers aim to achieve a commanded end-effector pose, the presented solver has the ability of handling multiple, potentially conflicting tasks with different priorities. Solver pseudoalgorithm is given in Algorithm <ref>. The algorithm requires an initial guess for joint positions q_initial. Task errors and Jacobians are calculated based on the current joint positions q and the type of the task. Error gradients are updated for each task as a difference between the task error in current and previous iteration of the algorithm. Task Jacobians and clamped errors are used to construct a prioritized task space control problem. Finally, a solution to the prioritized task space problem is used to update the current joint positions. If the sum of all task error norms or error gradient norms reaches a threshold the problem is considered to be solved. Prioritized inverse kinematics library for ROS is available on GitHub[<https://github.com/ivatavuk/pik_ros>]. MoveIt is used to calculate the Jacobians of the specified frames, which must be present in the URDF file of the MoveIt planning group. The Jacobians obtained with MoveIt are modified to support any of the following tasks: * Frame pose task * Frame position task * Frame orientation task * Frame approach axis vector task These task types correspond to the tasktype_i variable in the pseudoalgorithm <ref>. A frame pose task Jacobian is the standard Jacobian matrix, and frame position and orientation task Jacobians correspond to the first and last three rows of the frame pose Jacobian. The Jacobian and the error for the frame approach axis vector task are calculated as described in section <ref>. The framework allows for user defined parameters used by solver, which are: * Change in joint angle constraint * Positional clamp magnitude * Orientational clamp magnitude * Use constrained optimization * Error norm threshold * Maximum execution time * Maximum number of iterations §.§ Selective Spraying Examples Tasks used in selective agricultural spraying examples are, with decreasing priorities: * Spraying frame position task * Spraying frame approach axis orientation task * Elbow frame position task Like for the continuous spraying, there is a prioritization of the spraying frame position over its approach axis orientation, which correspond to the translational and rotational components of the 3T2R task. The third priority, which fully constrains the positional inverse kinematics problem is the desired elbow frame position. 
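The solver loop itself can be summarised in a few lines. The skeleton below iterates the velocity-level prioritised solve on clamped task errors until the total error, or its change between iterations, falls below a threshold; step_solver is a stand-in for one constrained prioritised task-space solve, and the single clamp value replaces the separate positional and orientational clamp magnitudes of the actual implementation.

```python
import numpy as np

def prioritized_ik(q0, tasks, step_solver, max_iters=200, tol=1e-3, clamp=0.3):
    """Skeleton of the iterative prioritised positional IK described above.

    tasks:       list of (error_fn, jacobian_fn) pairs ordered by priority.
    step_solver: stand-in returning a joint increment from task Jacobians and errors.
    """
    q = np.array(q0, dtype=float)
    prev_total = None
    for _ in range(max_iters):
        errs = [np.clip(e(q), -clamp, clamp) for e, _ in tasks]   # clamped task errors
        jacs = [J(q) for _, J in tasks]
        total = sum(np.linalg.norm(e) for e in errs)
        if total < tol or (prev_total is not None and abs(prev_total - total) < tol):
            break                               # error norm or its gradient below threshold
        q = q + step_solver(jacs, errs)         # one velocity-level prioritised step
        prev_total = total
    return q
```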
The solver was tested on three different examples seen in Fig. <ref>, with desired values for all the tasks given in table <ref>. Tasks are set up in such a way that in the first two examples the desired values of the full 3T2R task are feasible, and the elbow position task fully constrains the problem, and in the last example only the position of the spraying frame is feasible (Fig. <ref>). The description of solver performance for the examples is given in Tab. <ref>. All experiments were conducted on a 2.2GHz Intel Core i7 processor. It can be noticed that the third example takes the largest amount of time to be solved, which is due to the solution being close to the robot arm singularity. Parameters for the solver used in the examples are: * Use constrained optimization = True * Error norm gradient threshold = 1× 10^-3 * Change in joint angle constraint = 10 [^∘] * Use solution polishing = True * Polish error norm gradient threshold = 1× 10^-2 * Polish change in joint angle constraint = 3 [^∘] * Positional clamp magnitude = 0.3 [m] * Orientational clamp magnitude = 30 [^∘] Solution polishing refers to the usage of smaller change in joint angle constraint when the solver is close to the solution, which is detected as polish error norm gradient threshold being reached. All three tasks are not feasible in any of the given examples, so the solver considers the positional prioritized IK problem solved once task error gradients reach a specified threshold. For most prioritized inverse kinematics applications the same would be the case, as the main strength of this approach is its ability to handle conflicting, infeasible tasks with clearly defined priorities. § CONCLUSION AND FUTURE WORK For the task of robotic agricultural spraying, and for robotic spraying in general, the position of the spraying frame is more important than its orientation to ensure satisfactory spray coverage. We propose a solution where constrained prioritized optimization is used for velocity and positional level 3T2R task control, which corresponds to continuous and selective agricultural spraying tasks, respectively. Prioritized task space control and prioritized positional inverse kinematics are described in detail. Positional inverse kinematics are solved using iterative constrained prioritized task space control. In the future work, the prioritized IK framework is planned to be expanded to allow for more task types, such as preferred joint positions, manipulability maximization task and others. Voxel based obstacle avoidance is also planned to be included. We plan to explore the applicability of the framework for different robot control tasks. Prioritized positional inverse kinematics could have a number of applications, most interesting ones including high dimensional floating base robotic systems, which could have a high number of prioritized conflicting tasks. The utility of the prioritized optimization described in this paper for trajectory planning also remains to be explored. §.§.§ Acknowledgments Research work presented in this article has been supported by the project Heterogeneous autonomous robotic system in viticulture and mariculture (HEKTOR) financed by the European Union through the European Regional Development Fund-The Competitiveness and Cohesion Operational Programme (KK.01.1.1.04.0036). * ieeetr
http://arxiv.org/abs/2307.00613v2
20230702164251
Nondefinability results for elliptic and modular functions
[ "Raymond McCulloch" ]
math.LO
[ "math.LO", "33E05, 03C64, 11F03" ]
Nondefinability results for elliptic and modular functions raymond.mcculloch@manchester.ac.uk 2020 MSC: 33E05, 03C64, 11F03 Affiliation: University of Manchester. ORCID ID: 0000-0002-0570-4977 This work formed part of the author's PhD thesis, which was supported by an Engineering and Physical Sciences Research Council Doctoral Training Award. The author is also grateful to the Heilbronn Institute for Mathematical Research for support. Let Ω be a complex lattice which does not have complex multiplication and ℘=℘_Ω the Weierstrass ℘-function associated to it. Let D⊆ be a disc and I⊆ be a bounded closed interval such that I∩Ω=∅. Let f:D→ be a function definable in (,℘|_I). We show that if f is holomorphic on D then f is definable in . The proof of this result is an adaptation of the proof of Bianconi for the case. We also give a characterisation of lattices with complex multiplication in terms of definability and a nondefinability result for the modular j-function using similar methods. § INTRODUCTION Model theorists have for some time been interested in definability questions concerning structures given by expanding the ordered real field by certain functions. For example the sine function is not definable in , an immediate consequence of the o-minimality of , which is proved by combining a result of Wilkie in <cit.> and work of Khovanski in <cit.>. Here and throughout this paper definable means definable with parameters in . In <cit.> Bianconi went further and showed that no non-trivial restriction of sine to a real interval is definable in . This result may be rephrased to say that no restriction of the exponential function to an open disc D in is definable in . Extending this further Bianconi showed in <cit.> that if f:D→ is holomorphic and definable in then f is algebraic. In <cit.> Peterzil and Starchenko use this result to characterise all definable locally analytic subsets of ^n in . This question of definability can in fact be generalised to other transcendental functions. Indeed such an example occurs with a transcendental function similar to the exponential function. Consider a complex lattice Ω⊆, a discrete subgroup of rank 2. Associated to each such lattice is the function ℘(z)=℘_Ω(z)=1/z^2+∑_ω∈Ω∖{0}(1/(z-ω)^2-1/ω^2). This function is similar to the exponential function as they are both periodic and have an addition formula as well as a differential equation. Also over the complex field an elliptic curve E()=E_Ω()⊆ℙ() is given by the equation Y^2Z=4X^3-g_2XZ^2-g_3Z^3, where the complex numbers g_2 and g_3 depend on the lattice Ω and are known as the invariants of ℘_Ω. The map exp_E:→ E(),z↦[℘(z):℘'(z):1] is called the exponential map of E. These similarities and the well known model theory of the exponential function make the model theory of the Weierstrass ℘-function a natural thing to consider. 
This has been done by various authors including Bianconi in <cit.>, Macintyre in <cit.> as well as Peterzil and Starchenko in <cit.> and Jones, Kirby and Servi in <cit.>. During his investigations into the model theory of these Weierstrass ℘-functions, Macintyre observed the following. If the lattice Ω=ℤ+iℤ then the restriction of ℘ to any complex disc D on which ℘ is analytic is definable in the structure (,℘|_[1/8,3/8]). The interval [1/8,3/8] is chosen for convenience as it avoids both the poles of ℘ and the zeroes of ℘'. Any such interval may be chosen. For the lattice +i it can immediately be seen that ℘(iz)=-℘(z) and this is all that is required to prove Macintyre's observation. In particular there is a non integer complex number α such that αΩ⊆Ω. A lattice with this property is said to have complex multiplication. A complex lattice Ω is called a real lattice if Ω=Ω. The lattice Ω=ℤ+iℤ is an example of a real lattice which has complex multiplication. In the preprint <cit.> Macintyre's result is extended to all real lattices with complex multiplication. It is also shown that if the restriction of ℘ to some open disc D⊆ is definable in the structure (,℘|_I), where I⊆ is a closed interval this does not contain any lattice points and the lattice Ω is real, then the lattice Ω has complex multiplication. A direct extension of this result to semiabelian varieties is presumably false. For example consider the semiabelian variety G=E×𝔾_m where E is an elliptic curve with complex multiplication and 𝔾_m is the multiplicative group. Then a restriction of exp_G to the real part of its fundamental domain will give the exponential map exp_E but will not give us, presumably, the full real exponential function. Now we turn to extending the final aforementioned result of Bianconi to the ℘-function. The following theorem can be seen as a ℘-function analogue of Theorem 4 in <cit.>. Let D⊆^2n be a definable open polydisc and u,v:D→ be two functions that are both definable in the structure (,℘|_I), where Ω is a complex lattice which does not have complex multiplication and I is some bounded closed interval in which does not contain a lattice point. Let f(x,y)=u(x,y)+iv(x,y) be holomorphic in D. Then u and v are definable in . The proof of this theorem is given in Section <ref> and adapts the method of Bianconi used to prove Theorem 4 in <cit.>. However the final part of the proof differs from Bianconi's argument as some of the conclusions are unclear. Bianconi's method involves using a theorem of Wilkie on smooth functions that are defined implicitly that was proved in general by Jones and Wilkie in <cit.>. However here we use an implicit definition obtained from a model completeness result due to Gabrielov in <cit.>. Although the theorem of Gabrielov is well known, as far as we are aware this is the first application of this result in order to obtain an implicit definition of this kind. These implicit definitions are given in Section <ref>. In Section <ref> we give some nondefinability results for various transcendental functions, beginning with an analogue of the aforementioned result of Peterzil and Starchenko in <cit.> for the Weierstrass ℘-function. Then we give a characterisation of the definability of restrictions of ℘ to a disc D⊆ in terms of the associated lattice Ω having complex multiplication, one direction of which follows from Theorem <ref>. This extends the result in <cit.> to all complex lattices. 
To complete this section we give a nondefinability result for the modular j-function the proof of which adapts a similar method to the proof of Theorem <ref>. Finally in Section <ref> we give some concluding remarks on what other transcendental functions can give rise to similar nondefinability statements and the obstacles that prevent one from proving a version of Theorem <ref> for such functions using the method of Section <ref>. § THE WEIERSTRASS ℘ AND MODULAR J FUNCTIONS In this section we give background on both the Weierstrass ℘-function and the modular j-function. Let Ω⊆. Then Ω is said to be a complex lattice if there exist complex numbers ω_1 and ω_2 such that Ω={ mω_1+nω_2:m,n∈,(ω_2/ω_1)>0 }. The set {ω_1,ω_2 } is referred to as an oriented basis for the lattice Ω. The quotient τ=ω_2/ω_1∈ is known as the period ratio of Ω. The lattice generated by 1 and τ is denoted Ω_τ=⟨1,τ⟩. The following theorem can be seen in Chapter 3 of <cit.>. For all z∈∖Ω we have that, (℘'(z))^2=4℘^3(z)-g_2℘(z)-g_3. Therefore the functions ℘ and ℘' are algebraically dependent. Differentiating both sides of this differential equation gives that ℘”(z)=6℘^2(z)-g_2/2. In particular for any n≥2 the derivative ℘^(n) may be written as a polynomial with complex coefficients in ℘ and ℘'. Another crucial property of ℘ is its addition formula. This can be seen in Theorem 6 in Chapter 3 of <cit.>. For complex numbers z and w such that z-w∉Ω we have that, ℘(z+w)=1/4(℘'(z)-℘'(w)/℘(z)-℘(w))^2-℘(z)-℘(w). The function ℘' also has an addition formula. However this is less well known and may be deduced from the identity ℘(z) ℘'(z) 1 ℘(w) ℘'(w) 1 ℘(z+w) -℘'(z+w) 1 =0, which can be seen in page 363 of <cit.>. From this identity we have for all complex numbers z and w such that z-w∉Ω, ℘'(z+w)=℘(w)℘'(z)-℘'(w)℘(z)-℘(z+w)(℘'(z)-℘'(w))/℘(z)-℘(w). This next definition can be seen in Section 4 of Chapter 1 of <cit.>. The modular j-function is the function j:→ defined by, j(τ)=1728g_2^3(τ)/g_2^3(τ)-27g_3^2(τ), where the complex numbers g_2 and g_3 are the invariants of the complex lattice Ω with period ratio τ. It turns out that the modular j-function may be written rather differently, namely it has a q-expansion with (positive) integer coefficients. This may be seen in Proposition 7.4 in Chapter 1 of <cit.> and the explicit coefficients are in Example 6.2.2 of Chapter 2 of <cit.>. Let q=e^2π i z. Then, j(z)=q^-1+744+196884q+21493760q^2+…. From the q-expansion it is clear that the restriction of j to ∩ i is a real valued function. By Theorem 4.1 in <cit.> the j-function is a modular function of weight zero. That is, for all z,w∈ we have that j(z)=j(w) if and only if there is some matrix γ∈ SL_2() such that w=az+b/cz+d, where γ=[ a b; c d ]. If γ is a matrix in GL^+_2(ℚ), the group of 2×2 matrices with rational entries and positive determinant, then there is a unique positive integer M such that Mγ∈ GL_2(ℤ) and the entries of Mγ are relatively prime. By Proposition 23 in <cit.> we have that for each positive integer M there is a polynomial Φ_M∈ℤ[X,Y] such that Φ_M(j(z),j(w))=0 if and only if there is a matrix γ∈ GL^+_2(ℚ) such that z=γ w and (Mγ)=M. Finally we note as in <cit.> that j satisfies a nonlinear third order differential equation, namely j”'/j'-3/2(j”/j')^2+(j^2-1968j+2654208/2j^2(j-1728)^2)(j')^2=0. To conclude this section we state the versions of the Ax-Schanuel theorem for the Weierstrass ℘-function and the modular j-function. For the ℘-function this is due to Brownawell and Kubota and can be seen in <cit.>. 
Suppose Ω_1,…,Ω_m are complex lattices each of which does not have complex multiplication. Let τ_1,…,τ_m be their corresponding period ratios and ℘_1,…,℘_m be their corresponding ℘-functions. Suppose that for all i,j=1,…,m and i j there do not exist integers a,b,c,d with ad-bc0 such that τ_j=aτ_i+b/cτ_i+d. Let z_1,…,z_n be analytic functions on a disc D centred at α∈ and suppose that z_1-z_1(α),…,z_n-z_n(α) are linearly independent over . Then we have that _[z_1,…,z_n,℘_1(z_1),…,℘_1(z_n),…,℘_m(z_1),…,℘_m(z_n)]≥ nm+1. The version of the Ax-Schanuel theorem for j is due to Pila and Tsimerman in <cit.>. Let z_1,…,z_n be analytic functions defined on a disc D⊆, which take values in the upper half plane, such that j(z_1),…,j(z_n) are non-constant. Suppose that Φ_M(j(z_i),j(z_j))0 for all positive integers M and for all i,j=1,…,n where i j. Then, _ℂ[z_1,…,z_n,j(z_1),…,j(z_n),j'(z_1),…,j'(z_n),j”(z_1),…,j”(z_n)]≥ 3n+1. § IMPLICIT DEFINITIONS The purpose of each of these implicit definitions is to give a low upper bound on the transcendence degree of a finitely generated extension of . Before giving the first of these implicit definitions we give a precise definition of a property used in the statement of these implicit definitions. Let be a countable collection of real analytic functions defined on a bounded interval I in . Let f∈. If the derivatives of f may be written as a polynomial with coefficients in in terms of a finite number of the functions in then we say that the set is closed under differentiation. Consider the structure (,) with as above. Then if all the derivatives of the functions defined by terms are also defined by terms we say that the structure (,) has a ring of terms that is closed under differentiation. §.§ Desingularisation The first implicit definition comes from ideas of Wilkie in <cit.> and is referred to by Bianconi in <cit.> as the Desingularisation Theorem. A more general form of this implicit definition was proved by Jones and Wilkie in <cit.>. Let =(,) be an expansion of by a set of total analytic functions in one variable, closed under differentiation. We also assume that has a model complete theory and as is closed under differentiation the ring of terms of is closed under differentiation. Before stating the first implicit definition we give a definition. Let f_1:I→, for some open interval I⊆, be a function definable in the structure =(,). Then we say that f_1 is implicitly -defined if there are some integers n,l≥ 1, polynomials P_1,…,P_n in [y_1,…,y_(l+1)(n+1)] and functions f_2,…,f_n:I→ such that for all z∈ I, [ F_1(z,f_1(z),…,f_n(z))=0; ⋮; F_n(z,f_1(z),…,f_n(z))=0 ] and (∂ F_i/∂ x_j)_i=1,…,n j=2,…,n+1(z,f_1(z),…,f_n(z)) 0, where F_i(z,f_1(z),…,f_n(z))=P_i( z,f_1(z),…,f_n(z), g_1(z),g_1(f_1(z)),…,g_1(f_n(z)), …, g_l(z),g_l(f_1(z)),…,g_l(f_n(z))) for g_1,…,g_l∈. Let f:I→, for some open interval I⊆, be a definable function in . Then there are subintervals I_1,…,I_m⊆ I such that I∖(∪_k=1^m I_k) is a finite set and f is implicitly -defined on each of these subintervals. §.§ An implicit definition following from a result of Gabrielov This implicit definition is obtained from a model completeness result of Gabrielov in <cit.>. As noted in the introduction, although the theorem of Gabrielov is well known, as far as I am aware this is the first application of this theorem in order to obtain an implicit definition of this kind. Firstly we state Gabrielov's theorem and give some background terminology from <cit.>. Then we state and prove the implicit definition. 
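As a small numerical aside (not part of the original argument), the identity ℘(iz)=-℘(z) for the square lattice ℤ+iℤ recalled in the introduction can be checked directly from the defining series: truncating the sum to the lattice points m+ni with |m|,|n|≤N keeps the truncation region invariant under multiplication by i, so the truncated sums satisfy the identity exactly up to rounding.

```python
def wp_square_lattice(z, N=30):
    """Truncated Weierstrass p-function for the square lattice Z + iZ.

    Sums the defining series over lattice points m + ni with |m|, |n| <= N;
    this truncation block is invariant under multiplication by i, which is
    why the check below holds to machine precision rather than approximately.
    """
    total = 1.0 / z**2
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if m == 0 and n == 0:
                continue
            w = complex(m, n)
            total += 1.0 / (z - w)**2 - 1.0 / w**2
    return total

z = 0.23 + 0.11j                   # any point away from the lattice
print(wp_square_lattice(1j * z))   # agrees with the negated value below
print(-wp_square_lattice(z))
```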
Let Y be a Φ-subanalytic subset of [0,1]^n. Then Ỹ=[0,1]^n∖ Y is Φ-subanalytic. Consider a set of restricted real analytic functions Φ and a subanalytic set Y defined from the functions in Φ. Then by the previous theorem the complement of Y is defined by functions in the algebra generated by the functions in Φ, their partial derivatives, the constants 0 and 1 and the coordinate functions. In particular we have the following corollary. Let be an infinite collection of real analytic functions that are defined on a bounded closed interval in that is closed under differentiation. Then the structure (,) is model complete. The following lemma is Lemma 3 in <cit.> and is required for the proof of the implicit definition. Let X be a Φ-semianalytic set in [0,1]^m+n, and let Y=π X⊆ [0,1]^n,d= Y. Then there exist finitely many Φ-semianalytic subsets X_v^' and a Φ-subanalytic subset V of X such that Y=(π V)∪⋃_v π X_v^' and * X_v^' is effectively non-singular, X_v^'=d and π:X_v^'→ Y has rank d at every point of X_v^' for each v. * π V<d * X_u^'∩ X_v^'=∅, for u v. Now we shall state and prove the implicit definition that arises from Gabrielov's theorem. Let be a set of real analytic functions defined on a neighbourhood in [0,1] that contains a closed interval I, suppose that is closed under differentiation and consider the structure (,|_I), where |_I{ g|_I:g∈}. Let f:U→ I^k where U⊆ I^m for some m,k≥1 be a function definable in (,) and let f_1,…,f_k:U→ I denote its coordinate functions. Then there exist integers n,l ≥ 1, polynomials P_1,…,P_n in [y_1,…,y_(l+1)(m+n)], functions f_k+1,…,f_n:B→ I for an open box B⊆ U and g_1,…,g_l∈ such that for all =(z_1,…,z_m)∈ B, [ F_1(,f_1(),…,f_n())=0; ⋮; F_n(,f_1(),…,f_n())=0 ] and (∂ F_i/∂ x_j)_i=1,…,n j=m+1,…,m+n(,f_1(),…,f_n())0, where F_i(,f_1(),…,f_n())=P_i( ,f_1(),…,f_n(), g_1(z_1),…,g_1(z_m),g_1(f_1()),…,g_1(f_n()), …, g_l(z_1),…,g_l(z_m),g_l(f_1()),…,g_l(f_n())). Here the functions in are defined on a neighbourhood in [0,1] rather than a neighbourhood containing [0,1]. This has a slight impact on the definitions and results of Gabrielov that we wish to apply, namely that the interval I⊆[0,1] takes the place of [0,1] in the above statements. Let Y=Γ(f)⊆^m+1 be the graph of f. Clearly Y=m. Then Y is a definable set in the structure (,) and by Corollary <ref> the set Y is a -subanalytic set of dimension m. By definition Y=π X where X is a -semianalytic subset of ^m+n for some n. By Lemma <ref> we have that Y=(π V)∪⋃π X_v' where X_v' are effectively non-singular -semianalytic sets of dimension m and π V is small. It is enough to prove the result for Y=π X_v' for a single effectively non-singular set X_v'. By the definition of an effectively non-singular set and the rank condition seen in Definition 3 in <cit.> the function f may be defined by a non-singular system of m+n-m equations as described in the statement. § PROOF OF THEOREM <REF> The proof of Theorem <ref> consists of three cases. Namely, when the lattice Ω is closed under complex conjugation (a real lattice), when it is isogenous to its conjugate and when it is not. The method for each of these cases is essentially the same and here we give the proof in the case when Ω is a real lattice. The differences between the proof of the real lattice case and the other two cases are explained at the end of this section. Assume that Ω is a real lattice. Then the restriction ℘|_I is a real valued function, this can be seen in Section 18 of <cit.>. 
From the differential equation it is clear that the structures (,℘|_I) and (,℘|_I,℘'|_I) are the same in the sense of having the same definable sets and it therefore suffices to prove the theorem using the structure (,℘|_I,℘'|_I). By Gabrielov's theorem, Theorem <ref>, this structure is model complete. Model completeness results involving the ℘-function are also due to Bianconi in <cit.>. However these results deal with complex functions rather than their restrictions to a real interval and therefore do not seem applicable here. If n>1 then we can fix all the variables except one and apply the n=1 case for each variable in turn. Therefore each coordinate function of f is semialgebraic and holomorphic and so f is an algebraic function in each variable and by Theorem 2 in <cit.> the function f is itself algebraic and therefore definable in . Hence we may assume that n=1. Assume for a contradiction that v is not definable in . The proof of the following claim is a straightforward application of the identities for the real and imaginary parts of a complex function and so we simply state this claim. This corresponds to Claim 1 in the proof of Theorem 4 in <cit.>. The function u(x,y) is not definable in . In fact the functions x,y,u(x,y), v(x,y) are algebraically independent over . By applying the addition formula for ℘ we may translate and shrink the interval I and assume that I⊆ [0,1]. Similarly we may replace D with a smaller disc and assume that D⊆ I^2⊆[0,1]^2. If f is algebraic on this smaller disc it will be algebraic on the original disc and it therefore suffices to prove the theorem on the smaller disc. The images of u and v restricted to this disc will be bounded and by a final translating and scaling we may suppose that these images are contained in the interval I. Let f_2(x,y)=u(x,y) and f_3(x,y)=v(x,y). By Proposition <ref>, for some integer n≥1 and an open box B⊆ D there are polynomials P_2,…,P_n∈[y_0,…,y_3n+2] and non-zero rationals a_0,…,a_n, certain functions f_4,…,f_n:B→ I, such that for all (x,y)∈ B, [ F_2(x,y,f_2(x,y),…,f_n(x,y))=0; ⋮; F_n(x,y,f_2(x,y),…,f_n(x,y))=0 ] and (∂ F_i/∂ x_j)_i=2,…,n j=2,…,n(x,y,f_2(x,y),…,f_n(x,y)) 0, where for i=2,…,n we have that F_i(x_0,…,x_n)=P_i(x_0,…,x_n,℘(a_0x_0),…,℘(a_nx_n),℘'(a_0x_0),…,℘'(a_nx_n)). Therefore for all i,j=2,…,n ∂ F_i/∂ x_j(x_0,…,x_n)=∂ P_i/∂ y_j()+a_j℘'(a_jy_j)∂ P_i/∂ y_j+n+1()+a_j℘”(a_jy_j)∂ P_i/∂ y_j+2n+2(), where =(x_0,…,x_n,℘(a_0x_0),…,℘(a_nx_n),℘'(a_0x_0),…,℘(a_nx_n)). Let f_0(x,y)=x and f_1(x,y)=y. Now n is taken to be minimal such that there exists an open box B, some non-zero rationals a_0,…,a_n and polynomials P_2,…,P_n in 3n+3 variables and F_i(x_0,…,x_n)=P_i(x_0,…,x_n,℘(a_0x_0),…,℘(a_nx_n),℘'(a_0x_0),…,℘'(a_nx_n)) and there are also some functions f_4,…,f_n whose domain is B such that F_i(f_0(x,y),…,f_n(x,y))=0 and (∂ F_i/∂ x_j)(f_0(x,y),…,f_n(x,y)) 0 for all (x,y)∈ B. The functions f_0,…,f_n are real analytic on a disc D'⊆ B centred at some α=(α_1,α_2)∈ B. It can easily be shown that f_0-f_0(α),…,f_n-f_n(α) are linearly independent over . Applying Theorem <ref> to a_0f_0,…,a_nf_n gives that _[f_0,…,f_n,℘(a_0f_0),…,℘(a_nf_n)]≥ n+2. The rest of the proof consists of finding a contradictory upper bound on this transcendence degree. Let =(x,y)=(f_0(x,y),…,f_n(x,y)) and =(x,y)=( f_0(x,y),…,f_n(x,y),℘(a_0f_0(x,y)),…,℘(a_nf_n(x,y)), ℘'(a_0f_0(x,y)),…,℘'(a_nf_n(x,y))) for all (x,y)∈ B. 
From (<ref>) it is clear that for all (x,y)∈ B [ ∂ F_2/∂ x_2 … ∂ F_2/∂ x_n; ⋮ ⋱ ⋮; ∂ F_n/∂ x_2 … ∂ f_n/∂ x_n ]((x,y))=[ ∂ P_2/∂ y_2 … ∂ P_2/∂ y_3n+2; ⋮ ⋱ ⋮; ∂ P_n/∂ y_2 … ∂ P_n/∂ y_3n+2 ]((x,y))· M, where M is the (3n+1)×(n-1) matrix M=[ 0 0 0 0; I_n-1 ⋮ ⋮ M_1 ⋮ ⋮ M_2; 0 0 0 0 ]^T where M_1=[ a_2℘'(a_2f_2(x,y)) … 0; ⋮ ⋱ ⋮; 0 … a_n℘'(a_nf_n(x,y)) ] and M_2=[ a_2℘”(a_2f_2(x,y)) … 0; ⋮ ⋱ ⋮; 0 … a_n℘”(a_nf_n(x,y)) ]. The rows of [ ∂ F_2/∂ x_2 … ∂ F_2/∂ x_n; ⋮ ⋱ ⋮; ∂ F_n/∂ x_2 … ∂ F_n/∂ x_n ]((x,y)) are linearly independent over and so the rows of [ ∂ P_2/∂ y_2 … ∂ P_2/∂ y_3n+2; ⋮ ⋱ ⋮; ∂ P_n/∂ y_2 … ∂ P_n/∂ y_3n+2 ]((x,y)) are also linearly independent over . Therefore for all (x,y)∈ B the matrix [ ∂ P_2/∂ y_2 … ∂ P_2/∂ y_3n+2; ⋮ ⋱ ⋮; ∂ P_n/∂ y_2 … ∂ P_n/∂ y_3n+2 ]((x,y)) has maximal rank n-1. Given Proposition 5.3 in Chapter 8 of <cit.> it follows by a standard argument that _[f_0,…,f_n,℘(a_0f_0),…,℘(a_nf_n)]≤ 2n+4. In order to obtain the desired contradictory upper bound n+3 polynomial equations shall be added to the system and it shall be shown how this lowers the upper bound on transcendence degree. The first n+1 of these equations correspond to the differential equation for the ℘-function in each of the n+1 variables and the final two of these equations arises from the Cauchy-Riemann equations for the functions u and v. For each i=0,…,n define P_i+n+1(y_i+n+1,y_i+2n+2)=y_i+2n+2^2-4y_i+n+1^3+g_2y_i+n+1+g_3. For all (x,y)∈ B and i=0,…,n P_i+n+1(℘(a_if_i(x,y)),℘'(a_if_i(x,y)))=0. By differentiating and using (<ref>) it can be shown that for all i=0,…,n and (x,y)∈ B, ∂ P_i+n+1/∂ y_j(y_j+n+1,y_j+2n+2) +a_j℘'(a_jf_j(x,y))∂ P_i+n+1/∂ y_j+n+1(y_j+n+1,y_j+2n+2) +a_j℘”(a_jf_j(x,y))∂ P_i+n+1/∂ y_j+2n+2(y_j+n+1,y_j+2n+2)=0. It can then easily be shown that the matrix [ ∂ P_2/∂ y_2 … ∂ P_2/∂ y_3n+2; ⋮ ⋱ ⋮; ∂ P_2n+1/∂ y_2 … ∂ P_2n+1/∂ y_3n+2 ]((x,y)) has maximal rank 2n and therefore by the same standard argument we have that _[f_0,…,f_n,℘(a_0f_0),…,℘(a_nf_n),℘'(a_0f_0),…,℘'(a_nf_n)]≤ n+3. By the implicit function theorem the derivatives of f_i(x_0,x_1) for i=2,…,n are given by [ ∂ f_2/∂ x_k; ⋮; ∂ f_n/∂ x_k ]=-Δ^-1[ ∂ F_2/∂ x_k; ⋮; ∂ F_n/∂ x_k ], where k=0,1 and Δ=(∂ F_i/∂ x_j) and the right hand side is evaluated at (x_0,…,x_n)=(f_0,…,f_n). Multiplying both sides by the determinant of Δ and using the Cauchy-Riemann equations for f_2 and f_3 gives two new equations F_0 and F_1 with corresponding polynomials P_0 and P_1, following the method of Bianconi in <cit.>. These are of the form, F_0=[ first line of -Δ·(Δ^-1(∂ F_i/∂ x_0)) minus the second line of -Δ·(Δ^-1(∂ F_i/∂ x_1))] and F_1=[ first line of -Δ·(Δ^-1(∂ F_i/∂ x_1)) plus the second line of -Δ·(Δ^-1(∂ F_i/∂ x_0))]. In order to lower the upper bound further we have the following lemma, the proof of which adapts those of Claims 5 and 6 in the proof of Theorem 4 in <cit.>. For each k=0,1 there is a point z∈^3n+3 such that P_k(z)0 and P_1-k(z)=0 and P_i(z)=0 for all i=2,…,2n+1. This adapts the proofs of Claims 5 and 6 in the proof of Theorem 4 in <cit.>. Let V be the subset of ^3n+3 defined by V={(x,y,z)∈^n+1×^n+1×^n+1:y=℘(ax),z=℘'(ax) } where ℘(ax)=(℘(a_0x_0),…,℘(a_nx_n)) and ℘'(ax)=(℘'(a_0x_0),…,℘'(a_nx_n)). Also let W be the subset of ^3n+3 defined by W={ z∈^3n+3:P_2(z)=0,…,P_2n+1(z)=0 and (∂ P_i/∂ y_j)(z) 0 for i=2,…,2n+1,j=2,…,3n+2 has maximal rank }. Let X be the subset of ^3n+3 defined by {(x,y)|(x,y)∈ B }. Then it is clear that X⊆ V∩ W. 
The subset V may also be written as V={ (x,y,z)∈^n+1×^n+1×^n+1: F̂_0(x,y,z)=…=F̂_2n+1(x,y,z)=0 }, where for i=0,…,n _i(x,y,z)=y_i-℘(a_ix_i) _i+n+1(x,y,z)=z_i-℘'(a_ix_i). We denote the Jacobian matrix for this system by Φ and this is a (2n+2)× (3n+3) matrix given by Φ=[ -a_0℘'(a_0x_0) … 0 1 … 0 0 … 0; ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮; 0 … -a_n℘'(a_nx_n) 0 … 1 0 … 0; -a_0℘”(a_0x_0) … 0 0 … 0 1 … 0; ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮; 0 … -a_n℘”(a_nx_n) 0 … 0 0 … 1 ]. The normal space to V at a point is generated by the rows of Φ evaluated at this point. Recall the matrix M, M=[ 0 0 0 0; I_n-1 ⋮ ⋮ M_1 ⋮ ⋮ M_2; 0 0 0 0 ]^T where M_1=[ a_2℘'(a_2f_2(x,y)) … 0; ⋮ ⋱ ⋮; 0 … a_n℘'(a_nf_n(x,y)) ] and M_2=[ a_2℘”(a_2f_2(x,y)) … 0; ⋮ ⋱ ⋮; 0 … a_n℘”(a_nf_n(x,y)) ]. Let M' be the matrix M'=[ 0 0; ⋮ ⋮ M^T; 0 0 ]. Then the matrix product M'·(Φ())^T gives the (n-1)× (2n+2) zero matrix. Therefore the kernel of the linear transformation from ^3n+3 to ^2n+2 given by the matrix M' is generated by the rows of the matrix Φ(). Let P be the matrix P=[ ∂ P_2/∂ y_0 … ∂ P_n/∂ y_0; ⋮ ⋱ ⋮; ∂ P_2/∂ y_3n+2 … ∂ P_n/∂ y_3n+2 ](ỹ). Then we have that M'· P=[ ∂ F_2/∂ x_2 … ∂ F_n/∂ x_2; ⋮ ⋱ ⋮; ∂ F_2/∂ x_n … ∂ F_n/∂ x_n ](x̃). The columns of the matrix on the right hand side of this equation are linearly independent over . Therefore the subspace of ^3n+3 generated by the columns of P has trivial intersection with the kernel of the linear transformation given by M'. As the normal space to W at a point is generated by the columns of P evaluated at this point we have that in particular the normal spaces to V and W at each point in X have trivial intersection and so the intersection of V and W is transversal. Therefore if the subspace V is shifted locally then the intersection of V and W is still transversal. We shall now give such a shift explicitly. For real numbers η and ξ we let V_η,ξ be the subset given by applying the following operations to V. In other words V_η,ξ=Ψ(V) for Ψ:^3n+3→^3n+3 where Ψ does the following, for (y_0,…,y_3n+2)∈^3n+3 y_2↦ y_2+η y_0+ξ y_1 y_2+n+1↦1/4(y_2+2n+2-℘^'(a_2(η y_0+ξ y_1))/y_2+n+1-℘(a_2(η y_0+ξ y_1)))^2-y_2+n+1-℘(a_2(η y_0+ξ y_1)) and y_2+2n+2↦( ℘(a_2(η y_0+ξ y_1))y_2+2n+2-℘'(a_2(η y_0+ξ y_1))y_2+n+1 -℘(a_2(y_2+η y_0+ξ y_1))(y_2+2n+2-℘'(a_2(η y_0+ξ y_1))) /y_2+n+1-℘(a_2(η y_0+ξ y_1)) and the rest of the variables are fixed. The projection of W onto the variables y_0,y_1,y_2,y_3 contains the set {(f_0,f_1,f_2(f_0,f_1),f_3(f_0,f_1))|f_0,f_1∈ B } in its interior. If it did not then as π W=4 we have ∂ W≤3 and so there is an algebraic relation between f_0,f_1,f_2 and f_3 contradicting Claim <ref>. So for each real η and ξ there is a positive real number δ such that for all real f_0 and f_1 with f_0^2+f_1^2<δ^2 the intersection of X with V_η,ξ is non-empty. The effect of Ψ on the subset X is the following. f_2→ f_2+η f_0+ξ f_1 ℘(a_2f_2)→℘(a_2(f_2+η f_0+ξ f_1)) ℘'(a_2f_2)→℘'(a_2(f_2+η f_0+ξ f_1)). The real numbers η and ξ may be chosen so that at least one of the Cauchy-Riemann equations for u and v are not satisfied. Therefore there is a point z∈^3n+3 such that P_k(z) 0 for some k=0,1 and P_1-k(z)=P_j(z)=0 for j=2,…,2n+1 and so the lemma is proved. By shrinking and shifting the disc D if necessary we may assume that all the points (x,y)=( x,y,f_2(x,y),…,f_n(x,y), ℘(a_0x),℘(a_1y),℘(a_2f_2(x,y)), …,℘(a_nf_n(x,y)), ℘'(a_0x),℘'(a_1y),℘'(a_2f_2(x,y)),…,℘'(a_nf_n(x,y))) such that the system P_2()=…=P_2n+1()=0 is satisfied are contained in a single irreducible component of the variety (⟨ P_2,…,P_2n+1⟩) denoted 𝒲. Suppose that (𝒲∩(⟨ P_0⟩))=𝒲. 
Then ∩(⟨ P_0⟩ )=𝒲 as 𝒲 is irreducible. By the proof of Lemma <ref> there is a point z∈ such that P_2(z)=…=P_2n+1(z)=0 and P_0(z) 0. Therefore there is a point z∈ such that z∉(P_0), a contradiction. By once again shifting and shrinking the disc D we may suppose that all of the points (x,y) satisfying the system P_0()=P_2()=…=P_2n+1()=0 are contained in an irreducible component of the variety (⟨ P_0,P_2,…,P_2n+1⟩), denoted '. Suppose that ( '∩(⟨ P_1⟩))=', then again as ' is irreducible we have that '∩(⟨ P_1⟩ )='. Again by the proof of Lemma <ref> there is a point z∈ such that only one of P_0(z) and P_1(z) equals zero and P_2(z)=…=P_2n+1(z)=0. Therefore there is a point z∈' and z∉(⟨ P_1⟩), a contradiction as required. We have shown that if we add each of the polynomials P_0 and P_1 to the system P_2,…,P_2n+1 and consider the variety corresponding to the ideal generated by each of these new systems in turn then the dimension of each of these varieties decreases. Hence the upper bound on the transcendence degree of our finitely generated extension of decreases by two. Therefore we have a lower bound _[f_0,…,f_n,℘(a_0f_0),…,℘(a_nf_n)]≥ n+2 and an upper bound _[f_0,…,f_n,℘(a_0f_0),…,℘(a_nf_n)]≤ n+1, a contradiction as required. If Ω is not a real lattice then one must consider the structure (,(℘)|_I,(℘)|_I, (℘')|_I,(℘')|_I), which is also model complete by Gabrielov's result, Corollary <ref>. The presence of the real and imaginary parts of ℘ gives an extra 2n+2 variables in the system of polynomial equations arising from Proposition <ref>. This raises the corresponding upper bound by 2n+2. Therefore the method in the real lattice case must be adapted in order to find the required contradictory upper and lower bounds on transcendence degree. By Proposition <ref> we have a system of polynomials involving the real and imaginary parts of both ℘ and ℘', which may be rearranged to give a polynomial system involving ℘,℘', and ' where (z)=℘()=℘_(z). If Ω is not isogenous to then there are no integers a,b,c,d with ad-bc0 such that τ=(aτ+b)/(cτ+d) and so we may apply Theorem <ref> with the Weierstrass functions ℘ and in order to obtain a higher lower bound on transcendence degree. In order to lower the corresponding upper bound on transcendence degree further we add polynomial equations corresponding to the differential equation for in each variable as well as corresponding versions of the polynomial equations added in the real lattice case. This gives the desired contradiction. If Ω is isogenous to its complex conjugate then there is a non-zero complex number α such that αΩ⊆. Therefore from the definition of ℘ we may rewrite (z) as a rational function in ℘(α^-1z). The system of polynomials obtained using Proposition <ref> may be rewritten as system of rational functions involving ℘(z),℘(α^-1z),℘'(z) and ℘'(α^-1z) from which a system of polynomials may be obtained. The lower bound on transcendence degree is raised by applying Theorem <ref> with ℘ to the functions a_0f_0,…,a_nf_n,α^-1a_0f_0,…,α^-1a_nf_n. The upper bound on transcendence degree is lowered further by adding polynomial equations corresponding to the differential equation for ℘(α^-1z) in each variable as well as once again adding corresponding versions of the polynomial equations added in the real lattice case. This completes the proof of Theorem <ref>. § FURTHER DEFINABILITY RESULTS The first result in this section is an immediate corollary of Theorem <ref> combined with Theorem 12.5 in <cit.>. 
Let Ω⊆ be a complex lattice which does not have complex multiplication and I be a bounded closed interval in which does not intersect Ω. Let X be an analytic subset of an open set U⊆^n. Assume that U and X are definable in (,℘|_I). Then there is a complex algebraic set A⊆^n such that X is an irreducible component of A∩ U. For real lattices the following theorem can be seen in <cit.>. Here the result in <cit.> is extended to all complex lattices and a different proof is given. Let Ω be a complex lattice and I⊆ a bounded closed interval such that I∩Ω is empty. Let D⊆ be a disc. Then ℘|_D is definable in (,℘|_I) if and only if the lattice Ω has complex multiplication. Suppose that D∩Ω is empty. Firstly we assume that Ω has complex multiplication and so there is a non-zero complex number α such that αΩ⊆Ω. Define f(z)=℘(α z). Then for all ω∈Ω we have that f(z+ω)=℘(α z+αω)=℘(α z) and so f is a meromorphic function that is periodic with respect to Ω. By Theorem 3.2 in Chapter 6 of <cit.> the function f is a rational function in terms of ℘ and ℘'. Therefore ℘|_α I is definable in (,℘|_I). Similarly we have that ℘'|_α I is definable in (,℘|_I). We may assume that D⊆ I ×α I. Therefore for any z∈ D we have that z=x+α y for x,y∈ I. By the addition formula for ℘ ℘(x+α y)=R(℘(x),℘'(x),℘(y),℘'(y)) for a rational function R. Therefore ℘|_D is definable in (,℘|_I). Conversely, suppose that Ω does not have complex multiplication and that there is a disc D⊆ such that ℘|_D is definable in (,℘|_I). As ℘ is holomorphic on D we have that by Theorem <ref> the function ℘|_D is definable in , a contradiction. Now let D be a disc containing a single lattice point ω∈Ω and consider the function f(z)=(z-ω)^2℘(z). If Ω has complex multiplication then as (z-ω)^2 is definable in the structure (,℘|_I) it is clear by a repetition of the above argument we have that f|_D is definable in (,℘|_I). Conversely suppose that Ω does not have complex multiplication and assume for a contradiction that f|_D is definable in the structure (,℘|_I). Then f|_D' is definable in (,℘|_I) for some disc D'⊆ D that does not contain ω. Therefore ℘|_D' is definable in (,℘|_I), a contradiction. In the proof of Theorem <ref> the existence of an Ax-Schanuel statement for the Weierstrass ℘-function is essential. This raises the question of whether we can recover corresponding nondefinability results for other transcendental functions that also satisfy an Ax-Schanuel theorem. In this context the modular j-function is a natural function to consider and the Ax-Schanuel result is due to Pila and Tsimerman in <cit.>. The following theorem can be thought of as a j-function analogue of Theorem <ref>. The proof of this theorem adapts a similar method to the one seen in Section <ref> and uses the first implicit definition in Section <ref>. Let I⊆^>0 be an open interval that is bounded away from zero and let D⊆ be a non-empty disc. Then the restriction of j to the disc D is not definable in the structure (,j|_iI). Assume for a contradiction that there is a disc D⊆ such that the restriction j|_D is definable in the structure (,j|_iI). For notational convenience we can suppose that the disc D contains the horizontal line segment i+I and so the real and imaginary parts of the function j|_i+I are definable in the structure (,j|_iI). 
Rearranging the differential equation satisfied by j given in (<ref>) gives that ij”'(it)=-3/2(j”(it))^2/ij'(it)+(j^2(it)-1968j(it)+2654208/2j^2(it)(j(it)-1728)^2)(ij'(it))^3 and so ij”'(it) may be written as a polynomial in j(it),ij'(it),j”(it),(ij'(it))^-1 and (2j^2(it)(j(it)-1728)^2)^-1. By shrinking the interval I if necessary we may assume that the denominators do not vanish for any t∈ I. Therefore by differentiating this equation with respect to t we can see that all the higher derivatives of j(it) may also be given as polynomials in these functions. Consider the auxiliary structure given by expanding by the functions j_B(t)=j(iB(t)),j_B'(t)=ij'(iB(t)),j_B”(t)=j”(iB(t)),j_1(t)=(ij'(B(t)))^-1 and j_2(t)=(2j(iB(t))^2(j(iB(t))-1728)^2)^-1 as well as B and B_1. Here B:→ I is an algebraic function and B_1 is a rational function arising from the derivative of B such that all higher derivatives of B are polynomials in B and B_1. The structures (,j|_iI) and (,j_B,j'_B,j”_B,j_1,j_2,B,B_1) are equivalent in the sense of having the same definable sets. They also have the same universally and existentially definable sets. Therefore the real and imaginary parts of the function j|_i+I are definable in the structure (,j_B,j'_B,j”_B,j_1,j_2,B,B_1). Therefore it suffices to prove Theorem <ref> in this auxiliary structure. It is clear from construction that the set { j_B,j'_B,j”_B,j_1,j_2,B,B_1 } is closed under differentiation and the ring of terms of this auxiliary structure is closed under differentiation in the sense of Section <ref>. By the Gabrielov result, Corollary <ref>, the auxiliary structure (,j_B,j'_B,j”_B,j_1,j_2,B,B_1) is model complete. Let f_1,f_2:I→ be defined by f_1(t)=(j(i+t)) and f_2(t)=(j(i+t)). By applying Theorem <ref> to both f_1 and f_2, we have that for some integer n≥ 1 and a subinterval I'⊆ I there are polynomials P^*_1,…,P^*_n:^8n+8→ in [y_1,…,y_8n+8], certain functions f_3,…,f_n:I'→ such that for all t∈ I', [ F_1(t,f_1(t),…,f_n(t))=0; ⋮; F_n(t,f_1(t),…,f_n(t))=0 ] and (∂ F_i/∂ x_j)_i=1,…,n j=2,…,n+1(t,f_1(t),…,f_n(t)) 0, where for i=1,…,n we have that F_i(t,f_1(t),…,f_n(t))=P^*_i( t,f_1(t),…,f_n(t), j(iB(t)),j(iB(f_1(t))),…,j(iB(f_n(t))), ij'(iB(t)),ij'(iB(f_1(t))),…,ij'(iB(f_n(t))), j”(iB(t)),j”(iB(f_1(t))),…, j”(iB(f_n(t))), j_1(t),j_1(f_1(t)),…,j_1(f_n(t)), j_2(t),j_2(f_1(t)),…,j_2(f_n(t)) B(t),B(f_1(t)),…,B(f_n(t)), B_1(t),B_1(f_1(t)),…,B_1(f_n(t))). By the definition of the functions j_1 and j_2 as well as B and B_1 we may write F_1,…,F_n as algebraic functions in t,f_1(t),…,f_n(t),j(iB(t)),j(iB(f_1(t))),…,j(iB(f_n(t))) and ij'(iB(t)),ij'(iB(f_1(t))),…,ij'(iB(f_n(t))) as well as j”(iB(t)),j”(iB(f_1(t))),…, j”(iB(f_n(t))). In defining these algebraic functions square roots are introduced from the definition of B, which may affect the analyticity of these algebraic functions. The domain of these algebraic functions is a small open subset of ^4n+4 containing the set Γ_j={[f(t),j(iB(f(t))),ij'(iB(f(t))),j”(iB(f(t)))]:t∈ I' } where f(t)=(t,f_1(t),…,f_n(t)) and the algebraic functions are taken to be analytic on this domain. Hence for i=1,…,n we have that F_i(x_1,…,x_n+1)=P_i( x_1,…,x_n+1,j(iB(x_1)),…,j(iB(x_n+1)), ij'(iB(x_1)),…,ij'(iB(x_n+1)),j”(iB(x_1)),…, j”(iB(x_n+1))) for algebraic functions P_1,…,P_n and in particular for all t∈ I', F_i(t,f_1(t),…,f_n(t))=P_i[ t,f_1(t),…,f_n(t), j(iB(t)),j(iB(f_1(t))),…,j(iB(f_n(t))), ij'(iB(t)), ij'(iB(f_1(t))),…,ij'(iB(f_n(t))), j”(iB(t)), j”(iB(f_1(t))),…, j”(iB(f_n(t)))]=0. 
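As an independent sanity check of the rearranged third-order differential equation for j displayed at the start of this argument, both sides can be evaluated numerically along the imaginary axis from a truncated q-expansion. The sketch below is illustrative only: the listed coefficients are the first few standard Fourier coefficients of j, the truncation point is arbitrary, and the sample value t = 1.3 is chosen merely to stay away from t = 1, where j'(i) = 0 and the right-hand side is singular.

```python
import numpy as np

# First Fourier coefficients of j (assumed from the standard expansion
# j = 1/q + 744 + 196884 q + ...), with q = e^{-2 pi t} on tau = i t.
coeff = {-1: 1, 0: 744, 1: 196884, 2: 21493760, 3: 864299970,
         4: 20245856256, 5: 333202640600}

def t_derivatives(t):
    """Return j(it) and its first three derivatives with respect to t."""
    q = np.exp(-2 * np.pi * t)
    # d^k/dt^k of q^n is (-2 pi n)^k q^n
    return [sum(c * (-2 * np.pi * n) ** k * q ** n for n, c in coeff.items())
            for k in range(4)]

t = 1.3
j0, d1, d2, d3 = t_derivatives(t)
# d1 = i j'(it), d2 = -j''(it), d3 = -i j'''(it)
lhs = -d3                                   # i j'''(it)
R = (j0**2 - 1968 * j0 + 2654208) / (2 * j0**2 * (j0 - 1728)**2)
rhs = -1.5 * (-d2)**2 / d1 + R * d1**3
print(abs(lhs - rhs) / abs(lhs))            # tiny; limited only by the truncation
```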
Now take n to be minimal such that the subinterval I', the functions f_3,…,f_n and the system of algebraic functions P_1,…,P_n exists as given above. Let =(t)=( t,f_1(t),…,f_n(t), j(iB(t)),j(iB(f_1(t))),…,j(iB(f_n(t))), ij'(iB(t)), ij'(iB(f_1(t))),…,ij'(iB(f_n(t))), j”(iB(t)), j”(iB(f_1(t))),…, j”(iB(f_n(t)))). For all t∈ I' it can easily be shown that the matrix ( ∂ P_i/∂ y_j)_i=1,…,n j=2,…,4n+4((t)) has maximal rank n. The standard argument noted in the proof of Theorem <ref> can be readily adapted for a system of algebraic functions and so _ℂℂ[ t,f_1,…,f_n, j(iB(t)),j(iB(f_1)),…,j(iB(f_n)), ij^'(iB(t)),ij^'(iB(f_1)),…,ij^'(iB(f_n)), j^''(iB(t)),j^''(iB(f_1)),…,j^''(iB(f_n))] ≤ 4n+4-n=3n+4. Suppose that there is some integer M≥ 1 such that Φ_M[j(iB(f_k(t))),j(iB(f_l(t)))]=0 for all t∈ I', where 0≤ k,l≤ n and k l and f_0(t)=t. For convenience we assume that k=n-1 and n=l. Then iB(f_n(t)) may be written as a rational function in iB(f_n-1(t)). Rearranging the modular polynomial Φ_M gives that j(iB(f_n(t))) may be written as an algebraic function in j(iB(f_n-1(t))). Differentiating both sides of this equation and rearranging and repeating this process gives algebraic functions for ij'(iB(f_n(t))) and j”(iB(f_n(t))) in terms of f_n-1(t),j(iB(f_n-1(t))),ij'(iB(f_n-1(t))) and f_n-1(t),j(iB(f_n-1(t))),ij'(iB(f_n-1(t))),j”(iB(f_n-1(t))) respectively. Therefore the non-singular system of algebraic functions P_1,…,P_n may be rearranged to give a system of algebraic functions in fewer variables. If this system is non-singular at the points (t) then there is a contradiction to the minimality of n. Therefore this system is assumed to be singular at these points. However this leads to a contradiction of the non-singularity of the original system and we may therefore conclude that no such integer M≥ 1 exists. From this it can be shown that there is no integer M≥ 1 such that Φ_M(j(iB(f_k(t))),j(iB(f_l(t))))=0 for all k,l=0,…,n with k l. Applying Theorem <ref> to i+f_0,iB(f_0),…,iB(f_n) gives that _[ i+t,iB(t),iB(f_1),…,iB(f_n), j(i+t),j(iB(t)),j(iB(f_1)),…, j(iB(f_n)), j'(i+t),j'(iB(t)),j'(iB(f_1)),…,j'(iB(f_n)), j”(i+t),j”(iB(t)), j”(iB(f_1)),…,j”(iB(f_n)) ]≥ 3n+7 and so _[ i+t,iB(t),iB(f_1),…,iB(f_n), j(i+t),j(iB(t)),j(iB(f_1)),…, j(iB(f_n)), j'(iB(t)),j'(iB(f_1)),…,j'(iB(f_n)), j”(iB(t)), j”(iB(f_1)),…,j”(iB(f_n)) ]≥ 3n+5. As f_1,f_2 are the real and imaginary parts of j(i+t) and the function B is algebraic and i+t and iB(t) are algebraically dependent we have that _[ i+t,iB(t),iB(f_1),…,iB(f_n), j(i+t),j(iB(t)),j(iB(f_1)),…, j(iB(f_n)), j'(iB(t)),j'(iB(f_1)),…,j'(iB(f_n)), j”(iB(t)), j”(iB(f_1)),…,j”(iB(f_n)) ]≤ 3n+4, a contradiction. § FINAL REMARKS It is reasonable to expect that further nondefinability results for transcendental functions such as the modular j-function can be obtained by adapting the methods given here. In particular an analogue of Theorem <ref> for the modular j-function is a natural statement. However there are some obstructions in directly applying the method of Section <ref> to this case. Firstly the necessity for a system of algebraic functions requires a reworking of the final part of the proof of Theorem <ref> for such a system. Also the lack of addition formula for the modular j-function makes a direct application of the proof of Lemma <ref> impossible. However for the Weierstrass ζ-function, a quasi-periodic meromorphic function related to ℘ by the formula ζ'=℘ some definability results can be readily obtained. 
By using classical formulae and an Ax-Schanuel statement involving ℘ and ζ, which is also due to Brownawell and Kubota in <cit.>, one can characterise the definability of restrictions of ζ to a disc D⊆ℂ in the structure (ℝ,℘|_I,ζ|_I), where I⊆ℝ is a bounded closed interval such that I∩Ω=∅, in terms of complex multiplication. This is an analogue of Theorem <ref> for the Weierstrass ζ-function and the proof is simply another adaptation of the method seen in the proof of Theorem <ref>.
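As a concrete numerical footnote to the modular polynomials Φ_M that entered the argument for the j-function theorem above: Φ_2 vanishes on pairs (j(τ), j(2τ)), and the CM values j(i) = 1728 and j(2i) = 66³ = 287496 therefore give an exact integer check. The coefficients below are assumed from the standard tables of Φ_2.

```python
# Classical modular polynomial Phi_2 (coefficients assumed from standard tables).
def phi2(x, y):
    return (x**3 + y**3 - x**2 * y**2
            + 1488 * (x**2 * y + x * y**2)
            - 162000 * (x**2 + y**2)
            + 40773375 * x * y
            + 8748000000 * (x + y)
            - 157464000000000)

print(phi2(1728, 66**3))   # 0, since Phi_2(j(i), j(2i)) = 0
```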
IRS-Aided Overloaded Multi-Antenna Systems: Joint User Grouping and Resource Allocation Ying Gao, Qingqing Wu, Wen Chen, Yang Liu, Ming Li, and Daniel Benevides da Costa Y. Gao is with the Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 201210, China, and also with the State Key Laboratory of Internet of Things for Smart City, University of Macau, Macao 999078, China (e-mail: yinggao@um.edu.mo). Q. Wu and W. Chen are with the Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 201210, China (e-mail: qingqingwu@sjtu.edu.cn; whenchen@sjtu.edu.cn). Y. Liu and M. Li are with the School of Information and Communication Engineering, Dalian University of Technology, Dalian 116024, China (e-mail: yangliu_613@dlut.edu.cn; mli@dlut.edu.cn). D. B. da Costa is with the Technology Innovation Institute, 9639 Masdar City, Abu Dhabi, United Arab Emirates (email: danielbcosta@ieee.org). August 1, 2023 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== This paper studies an intelligent reflecting surface (IRS)-aided multi-antenna simultaneous wireless information and power transfer (SWIPT) system where an M-antenna access point (AP) serves K single-antenna information users (IUs) and J single-antenna energy users (EUs) with the aid of an IRS with phase errors. We explicitly concentrate on overloaded scenarios where K + J > M and K ≥ M. Our goal is to maximize the minimum throughput among all the IUs by optimizing the allocation of resources (including time, transmit beamforming at the AP, and reflect beamforming at the IRS), while guaranteeing the minimum amount of harvested energy at each EU. Towards this goal, we propose two user grouping (UG) schemes, namely, the non-overlapping UG scheme and the overlapping UG scheme, where the difference lies in whether identical IUs can exist in multiple groups. Different IU groups are served in orthogonal time dimensions, while the IUs in the same group are served simultaneously with all the EUs via spatial multiplexing. The two problems corresponding to the two UG schemes are mixed-integer non-convex optimization problems and difficult to solve optimally. We first provide a method to check the feasibility of these two problems, and then propose efficient algorithms for them based on the big-M formulation, the penalty method, the block coordinate descent, and the successive convex approximation. 
Simulation results show that: 1) the non-robust counterparts of the proposed robust designs are unsuitable for practical IRS-aided SWIPT systems with phase errors since the energy harvesting constraints cannot be satisfied; 2) the proposed UG strategies can significantly improve the max-min throughput over the benchmark schemes without UG or adopting random UG; 3) the overlapping UG scheme performs much better than its non-overlapping counterpart when the absolute difference between K and M is small and the EH constraints are not stringent. Intelligent reflecting surface, overloaded multi-antenna systems, SWIPT, user grouping, phase errors. § INTRODUCTION Radio-frequency (RF) signals-enabled wireless power transfer (WPT) has been recognized as a viable and convenient solution for providing virtually perpetual energy supplies to wireless devices <cit.>. Moreover, since RF signals carry both energy and information, the integration of WPT and wireless information transmission (WIT) spurs a new paradigm, namely, simultaneous wireless information and power transfer (SWIPT), which has drawn an upsurge of interest <cit.>. However, as the path loss is proportional to the transmission distance, the performance of SWIPT systems is basically limited by the low efficiency and short range of WPT. Although using massive antenna arrays at the transmitter can overcome this issue, the required high energy consumption and hardware cost hinder its practical implementation, which calls for an energy-efficient and cost-effective alternative solution <cit.>. Recently, intelligent reflecting surface (IRS) has been proposed as a promising solution that can improve the spectral efficiency and/or energy efficiency of various wireless systems <cit.>. Specifically, an IRS is a planar array consisting of a substantial quantity of low-cost passive metamaterial elements, each of which can be adapted to tune the phase shifts of the incoming signals, enabling the reconfiguration of the wireless propagation environment for boosting the efficiencies of WPT and WIT <cit.>. Furthermore, IRSs possess several other attractive benefits, including a compact form factor, lightweight construction, and conformal geometry. Therefore, IRSs can be mounted on surfaces of arbitrary shapes, accommodating diverse application scenarios <cit.>. Inspired by these advantages, several works have investigated the integration of IRSs into SWIPT systems, e.g., <cit.>. Two distinct research lines can be identified depending on whether the information users (IUs) and energy users (EUs) are geographically separated or co-located. For the case of separated IUs and EUs, the authors of <cit.> studied the joint design of the transmit precoder at the access point (AP) and the phase shifts at the IRS for maximizing the weighted sum-power of the EUs in an IRS-aided multiple-input single-output (MISO) SWIPT system. Their simulation results demonstrated that the IRS can significantly improve the power harvested by the EUs in its vicinity and enlarge the signal-to-interference-plus-noise ratio (SINR)-energy region. Moreover, adopting the same system model as in <cit.>, the authors of <cit.> investigated the transmit power minimization problem. Also, the weighted sum-rate of the IUs was maximized in <cit.> for an IRS-assisted multiple-input multiple-output (MIMO) SWIPT system. 
On the other hand, for the case of co-located users with both information decoding and energy harvesting (EH) requirements, the authors of <cit.> considered the power splitting (PS) receiver structure and maximized the minimum energy efficiency among the users to guarantee user fairness in a MISO SWIPT system aided by an IRS. Additionally, the rate-energy (R-E) trade-off of a single user employing either PS or time switching (TS) receiver structures in an IRS-aided multicarrier MISO SWIPT system was studied in <cit.>. All the aforementioned works assumed that the phase shifts induced by the IRS reflecting elements can be estimated perfectly and/or set precisely to the desired values, which, however, may be ideal due to the intrinsic hardware imperfection of IRSs. The phase shift deviations from the desired values caused by imperfect phase estimation and/or low-precision phase configuration are referred to as phase errors <cit.>. Several studies on wireless communication systems aided by IRSs with phase errors have been carried out, e.g., <cit.>. These works indicate that if ignoring the phase errors at the design stage, then the system performance would degrade since the system resources are not utilized properly. Among them, there are two commonly used distributions for modeling the phase errors, i.e., the uniform distribution and the Von Mises distribution. For the former case, the authors of <cit.> derived a closed-form expression for the average rate of an IRS-aided SISO system. In <cit.>, the sum throughput was maximized for an IRS-aided multiuser SISO wireless powered communication network (WPCN). For the latter case, the outage probability of an IRS-aided SISO system was analyzed in <cit.>. Also, the authors of <cit.> explored the performance of a double-IRS-assisted multiuser MISO system over spatially correlated channels. However, to the best of our knowledge, the research on IRS-aided SWIPT systems in the presence of phase errors is still in its infancy. If the design parameters are determined without considering the phase errors, the systems employing them may fail to meet the quality-of-service (QoS) requirements at the IUs and the EUs, and also cannot utilize the resources properly to maximize the system performance. Hence, it is necessary and important to take the phase errors into account in practical IRS-aided SWIPT systems. In addition to the above restriction, prior works on IRS-aided SWIPT systems (e.g., <cit.>) have the following limitation. To be specific, in <cit.>, the transmitter sends information and energy simultaneously via spatial multiplexing to all the IUs and EUs over the whole transmission interval. While this transmission strategy can neutralize multiuser interference and guarantee user fairness when the number of transmit antennas is sufficient, it fails to achieve satisfactory results in overloaded scenarios where the number of IUs and EUs is large such that the number of signals multiplexed in the spatial domain exceeds the number of transmit antennas, even with the aid of IRSs. Since overloaded scenarios are gaining increasing importance with the ever-growing demands for ultra-high connectivity, it is necessary to pay attention to them <cit.>. Then, a question arises: how to improve the minimum throughput performance among the IUs in overloaded scenarios? Intuitively, the fewer the number of IUs served by the transmitter with the help of IRSs over the given frequency band, the higher the achievable SINR of each IU. 
Inspired by this, user grouping (UG) can be pursued, where different IU groups are served in orthogonal time dimensions to avoid inter-group interference, and all the EUs can still harvest energy over the whole transmission duration. Although a higher SINR can be achieved per IU in this case, the duration that the transmitter serves each IU is reduced. Thus, it is unknown whether the max-min throughput performance can be improved or not by doing so. If the answer is yes, then another question arises: does allowing overlap among the IU groups lead to more significant performance improvement? This question is motivated by the fact that as a super-scheme of non-overlapping UG, overlapping UG offers a better utilization of the system resources. For overlapping UG, the IUs that belong to multiple groups can benefit from an extended duration of service compared to when they exist in only one group. Nevertheless, as the number of IUs within a single group increases, the achievable SINR of each IU in the group decreases. Hence, it is unclear whether and when overlapping UG can noticeably outperform non-overlapping UG. The answer to this question can offer important engineering insights. For instance, considering that non-overlapping UG is easier to implement, if it exhibits comparable performance to overlapping UG, then it is undoubtedly a better choice for practical systems. Finally, since the spatial correlation among the IUs in the same group significantly impacts the system performance and can be changed by IRSs, both the non-overlapping and overlapping UG schemes should be carefully designed. =-1 Motivated by these considerations, this paper investigates an IRS-aided overloaded SWIPT system which is composed of an IRS with phase errors, an AP with M antennas, and two sets of single-antenna users, i.e., K IUs and J EUs. In addition, K + J > M and K ≥ M. We aim at maximizing the minimum throughput among all the IUs via optimizing the allocation of resources (including time, transmit beamforming at the AP, and IRS phase shifts), subject to the EH requirements of the EUs. Our main contributions are summarized as follows. =-1 * Unlike existing works (e.g., <cit.>) where all the IUs are served simultaneously, we propose two UG schemes, namely, the non-overlapping UG scheme and the overlapping UG scheme, to assign the IUs into several groups. The second scheme is a super-scheme of the first one, distinguishing itself by allowing each IU to be assigned into multiple groups. The transmission time is divided into several time slots, each for one group. In each time slot, the IUs in the corresponding group are served simultaneously with all the EUs via spatial multiplexing. We formulate two max-min throughput maximization problems corresponding to the two UG schemes, denoted by (P1) and (P2), respectively. These two problems are mixed-integer non-convex optimization problems, which are much more challenging to solve than those in <cit.> that do not involve UG-related binary optimization variables. =-1 * For (P1) and (P2), we first provide a method to check their feasibility. Then, we propose a computationally efficient algorithm to solve (P1) suboptimally by applying the proper change of variables, the big-M formulation, the penalty method, the block coordinate descent (BCD), and the successive convex approximation (SCA). 
To proceed, we prove that removing the UG-related binary variables in (P2) does not compromise optimality, which reveals that although (P2) is a general case of (P1), it is easier to solve. Due to the similarity between (P1) and the simplified version of (P2) (denoted by (P2')), the algorithm proposed for (P1) is modified to find a suboptimal solution of (P2') (and thus (P2)). * Numerical results verify the effectiveness of our proposed algorithms and indicate the importance of robust design for practical IRS-aided SWIPT systems with phase errors since a non-robust design ignoring the phase errors generally leads to an infeasible EH solution. Furthermore, our proposed UG strategies can achieve remarkable improvements in max-min throughput compared to the cases without UG or adopting random UG. In addition, the overlapping UG scheme is preferable for scenarios where the absolute difference between K and M is small and the EH constraints are loose, since it significantly surpasses the non-overlapping UG scheme in these scenarios. By contrast, the non-overlapping UG scheme is a more favorable choice for the opposite scenarios, because it performs comparably to the overlapping UG scheme in these scenarios and is easier to implement in practice. The remainder of this paper is organized as follows. Section <ref> elaborates on the system model and problem formulations for an IRS-aided overloaded SWIPT system under two different UG strategies. Section <ref> provides a feasibility checking method for the formulated problems. In Section <ref> and <ref>, we propose computationally efficient algorithms to solve the formulated problems suboptimally. In Section <ref>, we evaluate the performance of our proposed algorithms via simulations. Finally, Section <ref> concludes the paper. Notations: ℂ denotes the complex space. ℂ^M× N represents the space of M× N complex-valued matrices. Denote by ℍ^M the set of all M-dimensional complex Hermitian matrices. 0 and 𝐈 are an all-zero matrix and an identity matrix, respectively, whose dimensions are determined by the context. For a square matrix 𝐒, 𝐒≽0 means that 𝐒 is positive semidefinite while tr(𝐒) denotes its trace. For two square matrices 𝐒_1 and 𝐒_2, 𝐒_1 ≽𝐒_2 (𝐒_1 ≼𝐒_2) indicates that 𝐒_1 - 𝐒_2 is positive (negative) semidefinite. ·_2 stands for the maximum singular value of a matrix. Let rank(·) be the rank of a matrix. We denote the conjugate transpose and expectation operators by (·)^H and 𝔼(·), respectively. · and [·]_i represent the Euclidean norm and the i-th element of a vector, respectively. diag(·) denotes the diagonalization operation. 𝒞𝒩(𝐱,Σ) represents a complex Gaussian distribution with a mean vector 𝐱 and co-variance matrix Σ. For a scalar x, |x| denotes its modulus. For a set 𝒳, |𝒳| denotes its cardinality. ≜√(-1) refers to the imaginary unit. Denote Re{·} as the real part of a complex number. ⊙ denotes the Hadamard product. § SYSTEM MODEL AND PROBLEM FORMULATION §.§ System Model This paper considers an IRS-aided overloaded multiuser MISO downlink SWIPT system consisting of an N-element passive IRS, an M-antenna AP, K single-antenna IUs, and J single-antenna EUs, where K + J > M and K ≥ M. The sets of reflecting elements, IUs, and EUs are denoted by 𝒩, 𝒦, and 𝒥, respectively, with |𝒩| = N, |𝒦| = K, and |𝒥| = J. It is assumed that the K IUs can be assigned into at most L groups, indexed by G_1, ⋯, G_L. 
Define a binary variable a_k,ℓ, k∈𝒦, ℓ∈ℒ≜{1,⋯,L}, which indicates that the k-th IU is assigned into the ℓ-th group if a_k,ℓ = 1; otherwise, a_k,ℓ = 0. As illustrated in Fig. <ref>, we consider two UG schemes, i.e., the non-overlapping UG scheme and the overlapping UG scheme, according to whether there are identical IUs in different groups. For the non-overlapping scheme, we have ∑_ℓ∈ℒ a_k,ℓ≤ 1, whereas for the overlapping scheme, there is no constraint on the value of ∑_ℓ∈ℒ a_k,ℓ, ∀ k∈𝒦. Furthermore, the total transmission time T is divided into L time slots, each occupying a duration of τ_ℓ≥ 0 (ℓ∈ℒ), satisfying ∑_ℓ∈ℒτ_ℓ≤ T. In time slot ℓ, the AP transmits energy and information simultaneously to all the EUs and only the IUs in G_ℓ over the given frequency band, as shown in Fig. <ref>. By relying on linear precoding, the complex baseband transmitted signal from the AP at time slot ℓ, ℓ∈ℒ, can be expressed as x_ℓ = ∑_k∈𝒦a_k,ℓ𝐰_k,ℓs_k + x_E,ℓ, where s_k ∈ℂ denotes the transmitted data symbol for IU k, which is precoded by the precoding vector 𝐰_k,ℓ∈ℂ^M× 1 at time slot ℓ if a_k,ℓ = 1. Suppose that s_k ∼𝒞N(0,1), ∀ k∈𝒦 and {s_k} are independent over k. In addition, x_E,ℓ∈ℂ^M× 1 denotes the transmitted energy signal at time slot ℓ with covariance matrix 𝐖_E,ℓ = 𝔼( x_E,ℓ x^H_E,ℓ) ≽0, and the rank of 𝐖_E,ℓ determines the number of energy beams that are spatially transmitted <cit.>. The quasi-static flat-fading model is assumed for all the channels. Let 𝐅∈ℂ^N× M, 𝐡_d,k^H ∈ℂ^1× M, 𝐠_d,j^H ∈ℂ^1× M, 𝐡_r,k^H ∈ℂ^1× N, and 𝐠_r,j^H ∈ℂ^1× N denote the channel coefficients from the AP to the IRS, from the AP to IU k, from the AP to EU j, from the IRS to IU k, and from the IRS to EU j, respectively. The cascaded channels from the AP to IU k and EU j via the IRS can be denoted as Φ_k = diag(𝐡_r,k^H)𝐅 and Ψ_j = diag(𝐠_r,j^H)𝐅, respectively. We assume that the perfect channel state information of the direct and cascaded channels can be acquired using existing channel estimation methods such as <cit.>. Besides, denoted by Θ_ℓ = diag( e^θ_ℓ,1, ⋯, e^θ_ℓ,N) and Θ̃_ℓ = diag( e^θ̃_ℓ,1, ⋯, e^θ̃_ℓ,N) the phase-shift matrix and the phase-error matrix at the IRS at time slot ℓ, respectively, where θ_ℓ,n∈ [0,2π) stands for the phase shift induced by the n-th element and θ̃_ℓ,n represents the additive random phase error that reflects the imperfection in phase estimation and/or phase configuration. Moreover, θ̃_ℓ,n is assumed to be uniformly distributed on [-π/2, π/2], ∀ℓ∈ℒ, n∈𝒩 <cit.>. Then, the received signal at IU k at time slot ℓ is given by y_k,ℓ^ I = ( h_r,k^HΘ_ℓΘ̃_ℓ𝐅 + 𝐡_d,k^H ) x_ℓ + n_k = (𝐯_ℓ⊙𝐯̃_ℓ)^H𝐇_k x_ℓ + n_k, k∈𝒦, ℓ∈ℒ, where 𝐯_ℓ = [𝐮_ℓ; 1] with 𝐮_ℓ = [e^θ_ℓ,1,⋯, e^θ_ℓ,N]^H, 𝐯̃_ℓ = [𝐮̃_ℓ; 1] with 𝐮̃_ℓ = [e^θ̃_ℓ,1,⋯, e^θ̃_ℓ,N]^H, 𝐇_k = [Φ_k;𝐡_d,k^H], and n_k ∼𝒞𝒩(0, σ_k^2 ) represents the additive white Gaussian noise with variance σ_k^2 at IU k. Assuming that the IUs cannot cancel the interference caused by the energy signals, the SINR of IU k at time slot ℓ can be written as γ_k,ℓ = a_k,ℓ|( 𝐯_ℓ⊙𝐯̃_ℓ)^H𝐇_k𝐰_k,ℓ|^2/∑_i∈𝒦\{k}a_i,ℓ|( 𝐯_ℓ⊙𝐯̃_ℓ)^H𝐇_k𝐰_i,ℓ|^2 + tr(𝐇_k^H( 𝐯_ℓ⊙𝐯̃_ℓ)( 𝐯_ℓ⊙𝐯̃_ℓ)^H𝐇_k𝐖_E,ℓ) + σ_k^2, On the other hand, by adopting the widely used linear EH model <cit.>, the harvested RF-band energy at EU j, j∈𝒥, over the whole transmission duration can be expressed as Q_j = ∑_ℓ∈ℒτ_ℓ(∑_k∈𝒦a_k,ℓ|( 𝐯_ℓ⊙𝐯̃_ℓ)^H𝐆_j𝐰_k,ℓ|^2 + tr(𝐆_j^H( 𝐯_ℓ⊙𝐯̃_ℓ)( 𝐯_ℓ⊙𝐯̃_ℓ)^H𝐆_j𝐖_E, ℓ) ), where 𝐆_j = [Ψ_j;𝐠_d,j^H] and the negligible noise power is ignored. 
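The averaging over the phase errors carried out next rests on the elementary fact that, for θ̃ uniformly distributed on [-π/2, π/2], 𝔼{e^θ̃} = 2/π, so that 𝔼{𝐯̃_ℓ𝐯̃_ℓ^H} has unit diagonal, entries 4/π² between distinct reflecting elements, and entries 2/π against the appended constant entry; this is exactly the matrix 𝐙 introduced below. A short Monte Carlo sketch of this structure (the array size and sample count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 4, 200_000

theta = rng.uniform(-np.pi / 2, np.pi / 2, size=(trials, N))   # phase errors
v_tilde = np.concatenate([np.exp(1j * theta), np.ones((trials, 1))], axis=1)

# Empirical E[v_tilde v_tilde^H]
emp = np.einsum('ti,tj->ij', v_tilde, v_tilde.conj()) / trials

Z = np.full((N + 1, N + 1), 4 / np.pi**2)
Z[:, -1] = Z[-1, :] = 2 / np.pi
np.fill_diagonal(Z, 1.0)

print(np.abs(emp - Z).max())   # a few 1e-3 with this many samples
```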
Note that γ_k,ℓ and Q_j contain the random phase errors that are generally unknown. In view of this, we consider the expectations of them. The expectations of γ_k,ℓ and Q_j are respectively given by 5pt 𝔼_𝐯̃_ℓ{γ_k,ℓ} = a_k,ℓ𝐰_k,ℓ^H𝐗_k,ℓ𝐰_k,ℓ/∑_i∈𝒦\{k}a_i,ℓ𝐰_i,ℓ^H𝐗_k,ℓ𝐰_i,ℓ + tr(𝐗_k,ℓ𝐖_E,ℓ)+ σ_k^2≜γ̂_k,ℓ, k∈𝒦, ℓ∈ℒ, 𝔼_𝐯̃_ℓ{ Q_j} = ∑_ℓ∈ℒτ_ℓ(∑_k∈𝒦a_k,ℓ𝐰_k,ℓ^H𝐘_j,ℓ𝐰_k,ℓ + tr(𝐘_j,ℓ𝐖_E, ℓ) ), j∈𝒥, where 𝐗_k,ℓ = 𝐇_k^H diag(𝐯_ℓ)𝐙 diag(𝐯_ℓ^H)𝐇_k, 𝐘_j,ℓ = 𝐆_j^H diag(𝐯_ℓ)𝐙 diag(𝐯_ℓ^H)𝐆_j, and 𝐙 = [ 1 4/π^2 ⋯ 4/π^2 2/π; 4/π^2 1 ⋯ 4/π^2 2/π; ⋮ ⋮ ⋱ ⋮ ⋮; 4/π^2 4/π^2 ⋯ 1 2/π; 2/π 2/π ⋯ 2/π 1; ]∈ℝ^(N+1) ×(N+1). Please refer to Appendix <ref>. §.§ Problem Formulation In this paper, we aim to maximize the minimum throughput among all the IUs, denoted by η≜min_k∈𝒦∑_ℓ∈ℒτ_ℓlog_2(1 + γ̂_k,ℓ), by jointly optimizing the UG variables {a_k,ℓ}, the time allocation {τ_ℓ}, the information precoders {𝐰_k,ℓ} and the energy covariance matrices {𝐖_E,ℓ} at the AP, and the IRS phase-shift vectors {𝐯_ℓ} while satisfying the EH constraints at the EUs. For the non-overlapping UG scheme, we can formulate the problem of interest as follows (P1): η, {𝐰_k,ℓ}, {𝐖_E,ℓ≽0}, { a_k,ℓ}, {τ_ℓ},{𝐯_ℓ}max η s.t. ∑_ℓ∈ℒτ_ℓlog_2(1 + γ̂_k,ℓ) ≥η, ∀ k∈𝒦, ∑_ℓ∈ℒτ_ℓ(∑_k∈𝒦a_k,ℓ𝐰_k,ℓ^H𝐘_j,ℓ𝐰_k,ℓ + tr(𝐘_j,ℓ𝐖_E, ℓ) ) ≥ E, ∀ j∈𝒥, ∑_k∈𝒦a_k,ℓ𝐰_k,ℓ^2 + tr(𝐖_E,ℓ) ≤ P, ∀ℓ∈ℒ, ∑_ℓ∈ℒτ_ℓ≤ T, τ_ℓ≥ 0, ∀ℓ∈ℒ, a_k,ℓ∈{0,1}, ∀ k∈𝒦, ℓ∈ℒ, ∑_ℓ∈ℒ a_k,ℓ≤ 1, ∀ k∈𝒦, |[𝐯_ℓ]_n| = 1, [𝐯_ℓ]_N+1 = 1, ∀ℓ∈ℒ, n∈𝒩, where constraint (<ref>) indicates that each EU is required to harvest at least E Joule (J) energy and constraint (<ref>) implies that the AP's instantaneous transmit power cannot exceed P. Similarly, the minimum throughput maximization problem corresponding to the overlapping UG scheme can be formulated as (P2): η, {𝐰_k,ℓ}, {𝐖_E,ℓ≽0}, { a_k,ℓ}, {τ_ℓ},{𝐯_ℓ}max η s.t. (<ref>) - (<ref>), (<ref>). Note that the only difference between (P1) and (P2) is that (P1) includes an extra constraint (<ref>). Both (P1) and (P2) are challenging to solve for the following reasons: 1) the variables {a_k,ℓ} are binary, making (<ref>)-(<ref>) involve integer constraints; 2) even with fixed {a_k,ℓ}, (<ref>)-(<ref>) are non-convex constraints due to the coupling of all other variables; 3) the unit-modulus constraints on the IRS phase shifts in (<ref>) are non-convex. As a result, (P1) and (P2) are both mixed-integer non-convex optimization problems, which are typically NP-hard and non-trivial to solve optimally. =-1 § FEASIBILITY CHECKING FOR (P1) AND (P2) Prior to solving (P1) and (P2), we first check their feasibility, i.e., whether the EH requirement of each EU can be satisfied under the given AP's transmit power and transmission duration. To this end, we define δ≜min_j∈𝒥∑_ℓ∈ℒτ_ℓ tr(𝐘_j,ℓ𝐖_E,ℓ) and consider the following minimum harvested energy maximization problem: δ, {𝐖_E,ℓ≽0}, {τ_ℓ},{𝐯_ℓ}max δ s.t. ∑_ℓ∈ℒτ_ℓ tr(𝐘_j,ℓ𝐖_E,ℓ) ≥δ, ∀ j∈𝒥, tr(𝐖_E,ℓ) ≤ P, ∀ℓ∈ℒ, ∑_ℓ∈ℒτ_ℓ≤ T, τ_ℓ≥ 0, ∀ℓ∈ℒ, | [𝐯_ℓ]_n| ≤ 1, [𝐯_ℓ]_N+1 = 1, ∀ℓ∈ℒ, n∈𝒩, which is non-convex because the optimization variables are strongly coupled in constraint (<ref>). Given that it is difficult, if not impossible, to solve this problem directly, we alternately solve its subproblems concerning different sets of variables based on the principle of BCD <cit.>, as detailed in the following. §.§ Optimizing {{𝐖_E,ℓ}, {τ_ℓ}} for Given {𝐯_ℓ} With given {𝐯_ℓ}, by applying the change of variables 𝐒_E,ℓ = τ_ℓ𝐖_E,ℓ, ∀ℓ∈ℒ, the subproblem with respect to (w.r.t.) {{𝐖_E,ℓ}, {τ_ℓ}} can be equivalently expressed as δ, {𝐒_E,ℓ≽0}, {τ_ℓ}max δ s.t. 
∑_ℓ∈ℒ tr(𝐘_j,ℓ𝐒_E,ℓ) ≥δ, ∀ j∈𝒥, tr(𝐒_E,ℓ) ≤τ_ℓP, ∀ℓ∈ℒ, (<ref>). By direct inspection, problem (<ref>) is a convex semidefinite program (SDP), and its optimal solution, denoted by {{𝐒_E,ℓ^⋆},{τ_ℓ^⋆}}, can be found by ready-made solvers, e.g., CVX <cit.>. Moreover, the optimal original variables {𝐖_E,ℓ^⋆} can be recovered from {{𝐒_E,ℓ^⋆},{τ_ℓ^⋆}} by setting 𝐖_E,ℓ^⋆ = 𝐒_E,ℓ^⋆/τ_ℓ^⋆ if τ_ℓ^⋆ > 0 and 𝐖_E,ℓ^⋆ = 0 otherwise, ∀ℓ∈ℒ. §.§ Optimizing {𝐯_ℓ} for Given {{𝐖_E,ℓ},{τ_ℓ}} For any given {{𝐖_E,ℓ},{τ_ℓ}}, the subproblem of problem (<ref>) for optimizing {𝐯_ℓ} can be written as δ,{𝐯_ℓ}maxδs.t.(<ref>), (<ref>). It is hard to state whether constraint (<ref>) is convex since the optimization variables {𝐯_ℓ} are not exposed in the current form of (<ref>). To tackle this issue, we introduce the following lemma. Constraint (<ref>) can be equivalently converted to ∑_ℓ∈ℒ'τ_ℓ𝐯_ℓ^H𝐐_j,E, ℓ𝐯_ℓ≥δ, ∀ j∈𝒥, where ℒ' = {ℓ|τ_ℓ > 0}⊆ℒ and 𝐐_j,E,ℓ = ∑_m=1^r_ E,ℓq_ℓ,m diag(𝐆_j𝐰_ E,ℓ,m)𝐙( diag(𝐆_j𝐰_ E,ℓ,m))^H with r_ E,ℓ = rank(𝐖_E,ℓ) ≥ 1, q_ℓ,1, ⋯, q_ℓ,r_ E,ℓ denoting the eigenvalues of 𝐖_ E, ℓ, and 𝐰_ E,ℓ,m being the unit-norm eigenvector of 𝐖_ E, ℓ corresponding to q_ℓ,m, m∈{1,⋯,r_ E,ℓ}. Please refer to Appendix <ref>. Note that constraint (<ref>) is in the form of a super-level set of convex quadratic functions, which makes it non-convex but allows the application of the iterative SCA technique <cit.>. Specifically, given the local feasible point 𝐯_ℓ^t in the t-th iteration of SCA, we can replace the convex term 𝐯_ℓ^H𝐐_j,E, ℓ𝐯_ℓ with its first-order Taylor expansion-based lower bound, yielding a convex subset of constraint (<ref>) expressed as ∑_ℓ∈ℒ'τ_ℓ( 2 Re{𝐯_ℓ^H𝐐_j,E,ℓ𝐯_ℓ^t} - (𝐯_ℓ^t)^H𝐐_j,E,ℓ𝐯_ℓ^t) ≥δ, ∀ j∈𝒥. As a result, the optimization problem to be solved in the t-th iteration of SCA is given by δ,{𝐯_ℓ}maxδs.t.(<ref>), (<ref>), which is a convex quadratically constrained quadratic program (QCQP) and thus can be optimally solved by existing solvers such as CVX <cit.>. In addition, the optimal {𝐯_ℓ^⋆} must satisfy | [𝐯_ℓ^⋆]_n| = 1, ∀ℓ∈ℒ', n∈𝒩, for achieving maximum signal reflection. By iteratively solving problem (<ref>) until convergence is reached, we can obtain a locally optimal solution of problem (<ref>) <cit.>. §.§ Overall Algorithm In summary, the proposed algorithm updates {{𝐖_E,ℓ}, {τ_ℓ}} and {𝐯_ℓ} in an alternating manner. The computational complexity of updating {{𝐖_E,ℓ}, {τ_ℓ}} via solving problem (<ref>) is 𝒪(√(M)log_2(1/ε)(β M^3 + β^2 M^2 + β^3)) <cit.> with β≜ J + L and ε denoting the prescribed accuracy, and that of updating {𝐯_ℓ} via iteratively solving problem (<ref>) until SCA converges is 𝒪(I_0√(LN + 2J)log_2(1/ε)N^3L^3J ) <cit.> with I_0 representing the required number of SCA iterations. This algorithm is guaranteed to converge since the objective value of problem (<ref>) is non-decreasing with the update iteration index and has a finite upper bound. Moreover, any limit point of the BCD procedure is a stationary point of problem (<ref>) <cit.>. Once the objective value exceeds E in the BCD procedure, we can stop the iterations and verify that (P1) and (P2) are feasible. For another case where the proposed algorithm converges with an objective value less than E, we consider (P1) and (P2) to be infeasible. § PROPOSED ALGORITHM FOR (P1) In this section, we aim to solve (P1). First of all, we deal with the non-convex unit-modulus constraints in (<ref>) by relaxing them to | [𝐯_ℓ]_n| ≤ 1, ∀ℓ∈ℒ, n∈𝒩. 
As such, an upper bound of the optimal value of (P1) can be obtained by solving the following problem η, {𝐰_k,ℓ}, {𝐖_E,ℓ≽0}, { a_k,ℓ}, {τ_ℓ},{𝐯_ℓ}max η s.t. (<ref>)-(<ref>), |[𝐯_ℓ]_n| ≤ 1, [𝐯_ℓ]_N+1 = 1, ∀ℓ∈ℒ, n∈𝒩. To facilitate the solution of problem (<ref>), we define 𝐖_k,ℓ = 𝐰_k,ℓ𝐰_k,ℓ^H, satisfying 𝐖_k,ℓ≽0 and rank(𝐖_k,ℓ) ≤ 1, ∀ k∈𝒦, ℓ∈ℒ. Then, constraints (<ref>)-(<ref>) can be converted to ∑_ℓ∈ℒτ_ℓlog_2(1 + a_k,ℓ tr(𝐗_k,ℓ𝐖_k,ℓ)/∑_i∈𝒦\{k}a_i,ℓ tr(𝐗_k,ℓ𝐖_i,ℓ) + tr(𝐗_k,ℓ𝐖_E,ℓ)+ σ_k^2) ≥η, ∀ k∈𝒦, ∑_ℓ∈ℒτ_ℓ(∑_k∈𝒦a_k,ℓ tr(𝐘_j,ℓ𝐖_k,ℓ) + tr(𝐘_j,ℓ𝐖_E, ℓ) ) ≥ E, ∀ j∈𝒥, ∑_k∈𝒦a_k,ℓ tr( 𝐖_k,ℓ) + tr(𝐖_E,ℓ) ≤ P, ∀ℓ∈ℒ. By applying the change of variables 𝐒_k,ℓ = τ_ℓ𝐖_k,ℓ, ∀ k∈𝒦, ℓ∈ℒ and recalling the variables {𝐒_E, ℓ} defined in the previous section, we can further transform constraints (<ref>)-(<ref>) into ∑_ℓ∈ℒτ_ℓlog_2(1 + a_k,ℓ tr(𝐗_k,ℓ𝐒_k,ℓ)/τ_ℓ/∑_i∈𝒦\{k}a_i,ℓ tr(𝐗_k,ℓ𝐒_i,ℓ)/τ_ℓ + tr(𝐗_k,ℓ𝐒_E,ℓ)/τ_ℓ + σ_k^2) ≥η, ∀ k∈𝒦, ∑_ℓ∈ℒ(∑_k∈𝒦a_k,ℓ tr(𝐘_j,ℓ𝐒_k,ℓ) + tr(𝐘_j,ℓ𝐒_E, ℓ) ) ≥ E, ∀ j∈𝒥, ∑_k∈𝒦a_k,ℓ tr( 𝐒_k,ℓ) + tr(𝐒_E,ℓ) ≤τ_ℓ P, ∀ℓ∈ℒ, with 𝐒_k,ℓ≽0, rank(𝐒_k,ℓ) ≤ 1, and 𝐒_E, ℓ≽0, ∀ k∈𝒦, ℓ∈ℒ. Next, the big-M formulation <cit.> is adopted to tackle the coupling between the binary variables {a_k,ℓ} and the continuous variables {𝐒_k,ℓ} in (<ref>)-(<ref>). Specifically, we introduce auxiliary variables 𝐒̃_k,ℓ = a_k,ℓ𝐒_k,ℓ, ∀ k∈𝒦, ℓ∈ℒ, and impose the following additional constraints: 𝐒̃_k,ℓ≼ a_k,ℓPT𝐈, ∀ k∈𝒦, ℓ∈ℒ, 𝐒̃_k,ℓ≼𝐒_k,ℓ, 𝐒̃_k,ℓ≽0, ∀ k∈𝒦, ℓ∈ℒ, 𝐒̃_k,ℓ≽𝐒_k,ℓ - (1 - a_k,ℓ)PT𝐈, ∀ k∈𝒦, ℓ∈ℒ, rank(𝐒̃_k,ℓ) ≤ 1, ∀ k∈𝒦, ℓ∈ℒ. It can be verified that when the constraints in (<ref>) and (<ref>) are satisfied, constraints (<ref>)-(<ref>) are respectively equivalent to ∑_ℓ∈ℒτ_ℓlog_2(1 + tr(𝐗_k,ℓ𝐒̃_k,ℓ)/τ_ℓ/∑_i∈𝒦\{k} tr(𝐗_k,ℓ𝐒̃_i,ℓ)/τ_ℓ + tr(𝐗_k,ℓ𝐒_E,ℓ)/τ_ℓ + σ_k^2) ≥η, ∀ k∈𝒦, ∑_ℓ∈ℒ(∑_k∈𝒦 tr(𝐘_j,ℓ𝐒̃_k,ℓ) + tr(𝐘_j,ℓ𝐒_E, ℓ) ) ≥ E, ∀ j∈𝒥, ∑_k∈𝒦 tr(𝐒̃_k,ℓ) + tr(𝐒_E,ℓ) ≤τ_ℓ P, ∀ℓ∈ℒ. Based on the above results, by replacing constraints (<ref>)-(<ref>) in problem (<ref>) with (<ref>)-(<ref>) and taking (<ref>) into account, we can rewrite problem (<ref>) in its equivalent form, as follows η, 𝒵maxηs.t.(<ref>)-(<ref>), (<ref>), (<ref>)-(<ref>), where 𝒵≜{{𝐒̃_k,ℓ∈ℍ^M}, {𝐒_k,ℓ∈ℍ^M}, {𝐒_E,ℓ≽0}, { a_k,ℓ}, {τ_ℓ},{𝐯_ℓ}}. Since the binary constraint (<ref>) is an obstacle to solving problem (<ref>), we equivalently re-express it as =-1 0 ≤ a_k,ℓ≤ 1, ∀ k∈𝒦, ℓ∈ℒ, a_k,ℓ - a_k,ℓ^2 ≤ 0, ∀ k∈𝒦, ℓ∈ℒ. Note that (<ref>) is a linear constraint while (<ref>) is a reverse convex constraint that yields a disconnected feasible region. To handle (<ref>), we incorporate it into the objective function of problem (<ref>) via a multiplicative penalty function based on the penalty method <cit.>, yielding the following problem η,𝒵maxη - ρ h({a_k,ℓ}) s.t.(<ref>), (<ref>), (<ref>), (<ref>)-(<ref>), (<ref>), where h({a_k,ℓ}) ≜∑_ℓ∈ℒ∑_k ∈𝒦(a_k,ℓ - a_k,ℓ^2) and ρ > 0 serves as a penalty parameter to penalize the violation of constraint (<ref>). Notably, to maximize the objective function of problem (<ref>) when ρ→∞, the optimal {a_k,ℓ^⋆} should meet the condition h({a_k,ℓ^⋆})≤ 0. On the other hand, since {a_k,ℓ^⋆} satisfy constraint (<ref>), we have h({a_k,ℓ^⋆}) ≥ 0. Thus, h({a_k,ℓ^⋆}) = 0 and accordingly a_k,ℓ^⋆∈{0,1} follows, ∀ k∈𝒦, ℓ∈ℒ, which verifies the equivalence between problems (<ref>) and (<ref>). 
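Both reformulation devices used above are easy to verify on a scalar toy instance: the big-M constraints pin the auxiliary variable to the product a·S whenever a is binary and 0 ≼ S ≼ PT·I, and the penalty term a - a² is non-negative on [0,1] and vanishes only at the binary points. A minimal sketch with scalar stand-ins for the matrices (P and T are toy values):

```python
import numpy as np

P, T = 10.0, 1.0
big = P * T

def feasible_interval(a, S):
    """Interval allowed for S_tilde by the scalar analogue of the big-M constraints."""
    lo = max(0.0, S - (1 - a) * big)   # S_tilde >= S - (1-a) PT  and  S_tilde >= 0
    hi = min(a * big, S)               # S_tilde <= a PT          and  S_tilde <= S
    return lo, hi

for a in (0, 1):
    for S in np.linspace(0.0, big, 5):
        lo, hi = feasible_interval(a, S)
        assert np.isclose(lo, a * S) and np.isclose(hi, a * S)   # pinned to a*S
print("big-M constraints enforce S_tilde = a*S for binary a")

a = np.linspace(0, 1, 101)
print("a - a^2 on [0,1]: max =", (a - a**2).max(), "; zero only at a = 0 or a = 1")
```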
It is worth mentioning that since setting ρ significantly large at the very beginning may render this approach ineffective <cit.>, we initialize ρ to a small value to find a good starting point and then solve problem (<ref>) iteratively with ρ increasing with the iterations until h({a_k,ℓ^⋆}) → 0. For any given ρ, problem (<ref>) is still hard to solve directly due to the non-concave objective function and the non-convex constraints in (<ref>), (<ref>), and (<ref>). Nevertheless, it is observed that either given or only optimizing {𝐯_ℓ}, the resulting problem is more tractable. This motivates us to apply the BCD method as in the previous section to solve problem (<ref>) suboptimally by alternately optimizing 𝒵̃≜𝒵\{𝐯_ℓ} and {𝐯_ℓ}, elaborated as follows. §.§ Optimizing 𝒵̃ for Given {𝐯_ℓ} With given {𝐯_ℓ}, all the other variables in 𝒵 can be jointly optimized by solving the subproblem of (<ref>), which is expressed as η,𝒵̃max η - ρ h({a_k,ℓ}) s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), ∑_ℓ∈ℒ( f_k,ℓ - g_k,ℓ) ≥η, ∀ k∈𝒦, where constraint (<ref>) is the equivalent form of constraint (<ref>), with the expressions of the concave functions f_k,ℓ and g_k,ℓ given by f_k,ℓ = τ_ℓlog_2( ∑_i∈𝒦 tr(𝐗_k,ℓ𝐒̃_i,ℓ)/τ_ℓ + tr(𝐗_k,ℓ𝐒_E,ℓ)/τ_ℓ + σ_k^2 ), ∀ k∈𝒦, ℓ∈ℒ, g_k,ℓ = τ_ℓlog_2( ∑_i∈𝒦\{k} tr(𝐗_k,ℓ𝐒̃_i,ℓ)/τ_ℓ + tr(𝐗_k,ℓ𝐒_E,ℓ)/τ_ℓ + σ_k^2 ), ∀ k∈𝒦, ℓ∈ℒ, respectively. We observe that the convex term a_k,ℓ^2 in h({a_k,ℓ}) makes the objective function non-concave while the concave term g_k,ℓ in constraint (<ref>) makes this constraint non-convex. These, together with the rank constraints in (<ref>), lead to the non-convexity of problem (<ref>). To handle this problem, we leverage the SCA technique as in the previous section. Specifically, since the first-order Taylor expansion of any convex (concave) function at any point is its global lower (upper) bound, the following inequalities hold: a_k,ℓ^2 ≥ -( a_k,ℓ^r)^2 + 2a_k,ℓ^ra_k,ℓ≜χ^ lb, r(a_k,ℓ), ∀ k∈𝒦, ℓ∈ℒ, g_k,ℓ(𝐒̂_k,ℓ, 𝐒_E, ℓ, τ_ℓ) ≤τ_ℓ^rlog_2(Υ_k,ℓ^r) + ∑_i∈𝒦\{k} tr(𝐗_k,ℓ( 𝐒̃_i,ℓ-𝐒̃_i,ℓ^r)) + tr(𝐗_k,ℓ( 𝐒_E,ℓ-𝐒_E,ℓ^r) )/Υ_k,ℓ^rln2 + (log_2(Υ_k,ℓ^r) - Υ_k,ℓ^r -σ_k^2 /Υ_k,ℓ^rln2)(τ_ℓ-τ_ℓ^r) ≜ g_k,ℓ^ ub, r(𝐒̂_k,ℓ, 𝐒_E, ℓ, τ_ℓ), ∀ k∈𝒦, ℓ∈ℒ, where 𝐒̂_k,ℓ denotes the collection of the variables {𝐒̃_i,ℓ}_∀ i∈𝒦\{k} and Υ_k,ℓ^r = ∑_i∈𝒦\{k} tr(𝐗_k,ℓ𝐒̃_i,ℓ^r)/τ_ℓ^r + tr(𝐗_k,ℓ𝐒_E,ℓ^r)/τ_ℓ^r + σ_k^2. In addition, a_k,ℓ^r, 𝐒̃_i,ℓ^r, 𝐒_E,ℓ^r, and τ_ℓ^r represent the given local points in the r-th iteration of SCA. By replacing the term a_k,ℓ^2 in h({a_k,ℓ}) with χ^ lb, r(a_k,ℓ) and the term g_k,ℓ in (<ref>) with g_k,ℓ^ ub, r(𝐒̂_ℓ, 𝐒_E, ℓ, τ_ℓ), a performance lower bound of problem (<ref>) can be obtained by solving η,𝒵̃max η - ρ h^ ub, r({a_k,ℓ}) s.t. (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), ∑_ℓ∈ℒ( f_k,ℓ - g_k,ℓ^ ub, r(𝐒̂_ℓ, 𝐒_E, ℓ, τ_ℓ)) ≥η, ∀ k∈𝒦, where h^ ub, r({a_k,ℓ}) ≜∑_ℓ∈ℒ∑_k ∈𝒦(a_k,ℓ - χ^ lb, r(a_k,ℓ)). If we drop the non-convex rank constraints in (<ref>), problem (<ref>) is reduced to a convex SDP that can be solved exactly using off-the-shelf solvers, e.g., CVX <cit.>. However, the obtained {𝐒̃_k,ℓ} cannot be guaranteed to satisfy constraint (<ref>). Therefore, instead of dropping constraint (<ref>), we equivalently transform it into =-1 tr(𝐒̃_k,ℓ) - 𝐒̃_k,ℓ_2 ≤ 0, ∀ k∈𝒦, ℓ∈ℒ, which is a reverse convex constraint. Similar to problem (<ref>), we incorporate constraint (<ref>) into the objective function in (<ref>) by introducing a penalty parameter μ > 0 and then convert problem (<ref>) to η,𝒵̃max η - ρ h^ ub, r({a_k,ℓ}) - μ q({𝐒̃_k,ℓ}) s.t. 
(<ref>), (<ref>), (<ref>)-(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), where q({𝐒̃_k,ℓ}) ≜∑_ℓ∈ℒ∑_k ∈𝒦( tr(𝐒̃_k,ℓ) - 𝐒̃_k,ℓ_2). When μ→∞, solving problem (<ref>) yields an identical solution to problem (<ref>). Despite having a convex feasible set, problem (<ref>) is non-convex due to the convexity of the term 𝐒̃_k,ℓ_2 in q({𝐒̃_k,ℓ}). By replacing 𝐒̃_k,ℓ_2 with its first-order Taylor expansion-based lower bound, we can approximate problem (<ref>) as η,𝒵̃max η - ρ h^ ub, r({a_k,ℓ}) - μ q^ ub, r({𝐒̃_k,ℓ}) s.t. (<ref>), (<ref>), (<ref>)-(<ref>), (<ref>), (<ref>), (<ref>), (<ref>), where q^ ub, r({𝐒̃_k,ℓ}) ≜∑_ℓ∈ℒ∑_k ∈𝒦( tr(𝐒̃_k,ℓ) -𝐒̃_k,ℓ^r_2 -(𝐬̃_k,ℓ^max, r)^H(𝐒̃_k,ℓ-𝐒̃_k,ℓ^r)𝐬̃_k,ℓ^max, r) with 𝐬̃_k,ℓ^max, r being the eigenvector that corresponds to the largest eigenvalue of 𝐒̃_k,ℓ^r. Since problem (<ref>) is a convex SDP, standard solvers such as CVX <cit.> can be used to find its optimal solution. Based on the above, we provide in Algorithm <ref> the details of solving problem suboptimally (<ref>) via combining the SCA and the penalty method, where c_1 > 1 is a scaling factor. The inner loop of Algorithm <ref> is used to iteratively solve problem (<ref>) under fixed μ, whose convergence is guaranteed since the objective value is non-decreasing over the iterations and also bounded from above. In the outer loop, by iteratively increasing μ via μ← c_1μ, we enforce q({𝐒̃_k,ℓ}) → 0, such that the obtained solution satisfy the rank constraints on {𝐒̃_k,ℓ}. In this way, Algorithm <ref> is guaranteed to converge to a stationary point of problem (<ref>) <cit.>. It is worth mentioning that if there are no phase errors at the IRS elements, the matrix 𝐗_k,ℓ in constraint (<ref>) can be replaced by 𝐇_k^H𝐯_ℓ𝐯_ℓ^H𝐇_k ≜𝐗̂_k,ℓ and we have rank(𝐗̂_k,ℓ) = 1, ∀ k∈𝒦, ℓ∈ℒ. With this condition, it can be proved that, for arbitrary direct and cascaded channels, if the optimal solution obtained by solving problem (<ref>) with the rank constraint (<ref>) removed (or equivalently, problem (<ref>) with μ = 0) violates constraint (<ref>), we can always construct an alternative optimal solution that satisfies constraint (<ref>) by using 𝐒_E, ℓ to absorb the non-rank-one part of each 𝐒̃_k,ℓ. The corresponding proof is similar to that in <cit.>, and we omit it for brevity. However, in the presence of the phase errors, we cannot prove the above result by following the same derivation as in <cit.> since 𝐗_k,ℓ is generally of high rank. Despite this, almost all of our simulations show that solving problem (<ref>) with even a sufficiently small μ via CVX can yield a rank-one optimal solution. Thus, the characterization of the optimal solution structure of problem (<ref>) (or problem (<ref>)) deserves further study. §.§ Optimizing {𝐯_ℓ} for Given 𝒵̃ Given any feasible 𝒵̃, by introducing slack variables {λ_k,ℓ} and ignoring the constant term -ρ h({a_k,ℓ}) in the objective function of problem (<ref>), we can equivalently express the subproblem of (<ref>) w.r.t. {𝐯_ℓ} as η,{𝐯_ℓ},{λ_k,ℓ}max η s.t. (<ref>), (<ref>), ∑_ℓ∈ℒ'τ_ℓlog_2( 1 + λ_k,ℓ) ≥η, ∀ k∈𝒦, tr(𝐗_k,ℓ𝐒̃_k,ℓ)/τ_ℓ/λ_k,ℓ≥∑_i∈𝒦\{k} tr(𝐗_k,ℓ𝐒̃_i,ℓ)/τ_ℓ + tr(𝐗_k,ℓ𝐒_E,ℓ)/τ_ℓ + σ_k^2, ∀ k∈𝒦, ℓ∈ℒ', where ℒ' = {ℓ|τ_ℓ > 0}⊆ℒ, and the constraints in (<ref>) and (<ref>) are transformed from those in (<ref>), which incurs no loss of optimality since there always exists an optimal solution to problem (<ref>) that makes the constraints in (<ref>) satisfied with equality. 
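The rank-one handling above relies on two elementary facts that are easy to check numerically: for a PSD matrix, tr(𝐒) - ‖𝐒‖_2 ≥ 0 with equality exactly when the rank is at most one, and ‖𝐒‖_2 ≥ ‖𝐒^r‖_2 + (𝐬^max, r)^H(𝐒-𝐒^r)𝐬^max, r, which is the bound behind q^ub, r. A brief illustration with random matrices (dimensions arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_psd(n, rank):
    B = rng.standard_normal((n, rank)) + 1j * rng.standard_normal((n, rank))
    return B @ B.conj().T

n = 4
for r in (1, 3):
    S = rand_psd(n, r)
    gap = np.trace(S).real - np.linalg.eigvalsh(S).max()
    print(f"rank {r}: trace - spectral norm = {gap:.6f}")   # zero only for rank one

# First-order lower bound on the spectral norm around a reference point S_r
S_r = rand_psd(n, 2)
w, V = np.linalg.eigh(S_r)
s_max = V[:, -1]                       # dominant eigenvector of S_r
S = rand_psd(n, 3)
lhs = np.linalg.eigvalsh(S).max()
rhs = w[-1] + (s_max.conj() @ (S - S_r) @ s_max).real
print(f"||S||_2 = {lhs:.4f} >= linearized bound = {rhs:.4f}")
```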
Observe that the optimization variables {𝐯_ℓ} are not exposed in the current forms of constraints (<ref>) and (<ref>). To facilitate the solution development of problem (<ref>), we recast (<ref>) and (<ref>) as ∑_ℓ∈ℒ'(∑_k∈𝒦𝐯_ℓ^H𝐀_j,k,ℓ𝐯_ℓ + 𝐯_ℓ^H𝐁_j,E,ℓ𝐯_ℓ) ≥ E, ∀ j∈𝒥, 𝐯_ℓ^H𝐂_k,k,ℓ𝐯_ℓ/τ_ℓ/λ_k,ℓ≥∑_i∈𝒦\{k}𝐯_ℓ^H𝐂_k,i,ℓ𝐯_ℓ/τ_ℓ + 𝐯_ℓ^H𝐃_k,E,ℓ𝐯_ℓ/τ_ℓ + σ_k^2, ∀ k∈𝒦, ℓ∈ℒ', where 𝐀_j,k,ℓ = diag(𝐆_j𝐬̃_k,ℓ)𝐙( diag(𝐆_j𝐬̃_k,ℓ))^H if 𝐒̃_k,ℓ≠0 and 𝐀_j,k,ℓ = 0 otherwise, 𝐁_j,E,ℓ = ∑_m=1^π_ E,ℓb_ℓ,m diag(𝐆_j𝐬_ E,ℓ,m)𝐙( diag(𝐆_j𝐬_ E,ℓ,m))^H if 𝐒_E,ℓ≠0 and 𝐁_j,E,ℓ = 0 otherwise, 𝐂_k,i,ℓ = diag(𝐇_k𝐬̃_i,ℓ)𝐙( diag(𝐇_k𝐬̃_i,ℓ))^H if 𝐒̃_i,ℓ≠0 and 𝐂_k,i,ℓ = 0 otherwise, and 𝐃_k,E,ℓ = ∑_m=1^π_ E,ℓb_ℓ,m diag(𝐇_k𝐬_ E,ℓ,m)𝐙( diag(𝐇_k𝐬_ E,ℓ,m))^H if 𝐒_E,ℓ≠0 and 𝐃_k,E,ℓ = 0 otherwise, ∀ j∈𝒥, k,i∈𝒦, ℓ∈ℒ. In addition, 𝐬̃_k,ℓ is obtained from 𝐒̃_k,ℓ by performing the Cholesky decomposition, i.e., 𝐒̃_k,ℓ = 𝐬̃_k,ℓ𝐬̃_k,ℓ^H, and {b_ℓ,m} and {𝐬_E,ℓ,m} are obtained from the eigenvalue decomposition of 𝐒_E,ℓ with 𝐒_ E,ℓ = ∑_m=1^π_ E,ℓb_ℓ,m𝐬_ E,ℓ,m𝐬_ E,ℓ,m^H and π_E,ℓ = rank(𝐒_E,ℓ). The proofs of the equivalence between (<ref>) and (<ref>) and between (<ref>) and (<ref>) are similar to that in Appendix <ref> for Lemma <ref> and are omitted here for brevity. It is obvious that constraints (<ref>) and (<ref>) are non-convex since the quadratic terms in the left-hand-sides of them are convex w.r.t. 𝐯_ℓ, which motivates us to convexify these two constraints via the SCA technique. To be specific, by replacing the left-hand-sides of the non-convex constraints (<ref>) and (<ref>) with their respective first-order Taylor expansions at the given local points {𝐯_ℓ^q} in the q-th iteration of SCA, (<ref>) and (<ref>) can be approximated as the following convex constraints: ∑_ℓ∈ℒ'(∑_k∈𝒦ℱ^ lb, q_𝐀_j,k,ℓ(𝐯_ℓ) + ℱ^ lb, q_𝐁_j,E,ℓ(𝐯_ℓ)) ≥ E, ∀ j∈𝒥, 𝒢^ lb, q(𝐯_ℓ, λ_k,ℓ) ≥∑_i∈𝒦\{k}𝐯_ℓ^H𝐂_k,i,ℓ𝐯_ℓ/τ_ℓ + 𝐯_ℓ^H𝐃_k,E,ℓ𝐯_ℓ/τ_ℓ + σ_k^2, ∀ k∈𝒦, ℓ∈ℒ', where ℱ^ lb, q_𝐑(𝐯_ℓ) ≜ 2 Re{𝐯_ℓ^H𝐑𝐯_ℓ^q} - (𝐯_ℓ^q)^H𝐑𝐯_ℓ^q, 𝐑∈{𝐀_j,k,ℓ, 𝐁_j,E,ℓ}, and 𝒢^ lb, q(𝐯_ℓ, λ_k,ℓ) ≜2 Re{𝐯_ℓ^H𝐂_k,k,ℓ𝐯_ℓ^q}/τ_ℓλ_k,ℓ^q - (𝐯_ℓ^q)^H𝐂_k,k,ℓ𝐯_ℓ^q/τ_ℓ( λ_k,ℓ^q)^2λ_k,ℓ. Then, a locally optimal solution of problem (<ref>) can be obtained by iteratively solving the following convex QCQP via readily available solvers (e.g., CVX <cit.>) until convergence is declared <cit.>. η,{𝐯_ℓ},{λ_k,ℓ}maxηs.t.(<ref>), (<ref>), (<ref>), (<ref>). §.§ Overall Algorithm Based on the above results, we summarize the details of our proposed algorithm for (P1) in Algorithm <ref>. For any given ρ, the BCD inner loop of Algorithm <ref> solves problem (<ref>) by alternately solving problems (<ref>) and (<ref>) and is guaranteed to converge to a stationary point of problem (<ref>) <cit.>. In the outer loop, we gradually increase ρ to a sufficiently large value via ρ← c_2ρ to make h({a_k,ℓ}) → 0, thereby ensuring a_k,ℓ∈{0,1}, ∀ k∈𝒦, ℓ∈ℒ. As a consequence, after the convergence of the outer loop, we can obtain a stationary solution of problem (<ref>) satisfying the binary constraints on {a_k,ℓ}. Since the obtained {𝐯_ℓ^i} may not satisfy the unit-modulus constraints of (P1), we set [ 𝐯̂_ℓ]_n = [𝐯_ℓ^i]_n/|[ 𝐯_ℓ^i]_n|, ∀ℓ∈ℒ, n∈𝒩, without violating any other constraints of (P1). Then, by performing the remaining operations in steps <ref> and <ref> of Algorithm <ref>, we can obtain a suboptimal solution of (P1). The computational complexity of Algorithm <ref> is analyzed as follows. In each inner loop iteration, the main complexity lies in steps <ref> and <ref>. 
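Note that the convexification applied to the quadratic forms here, like the one used earlier in the feasibility check, exploits a single fact: a convex quadratic form 𝐯^H𝐑𝐯 with 𝐑 ⪰ 0 is globally lower-bounded by its first-order expansion 2Re{𝐯^H𝐑𝐯^q} - (𝐯^q)^H𝐑𝐯^q, with equality at the expansion point, so each SCA iterate optimizes over a restriction of the original feasible set. A compact numerical check (random Hermitian PSD 𝐑 and random unit-modulus vectors; sizes arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
N = 8
B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = B @ B.conj().T                      # Hermitian PSD stand-in for A/B/C/D above

def f(v):                               # convex quadratic form v^H R v
    return (v.conj() @ R @ v).real

def f_lb(v, v0):                        # first-order surrogate around v0
    return (2 * (v.conj() @ R @ v0)).real - f(v0)

v0 = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
for _ in range(5):
    v = np.exp(1j * rng.uniform(0, 2 * np.pi, N))
    assert f(v) >= f_lb(v, v0) - 1e-9   # global lower bound
print("tight at the expansion point:", np.isclose(f(v0), f_lb(v0, v0)))
```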
The computational cost of step <ref> for solving problem (<ref>) via Algorithm <ref> is 𝒪(I_ out^1I_ inn^1√(M)log_2(1/ε)(ω M^3 + ω^2 M^2 + ω^3)) <cit.>, where I_ inn^1 and I_ out^1 denote the numbers of inner and outer iterations required for the convergence of Algorithm <ref>, respectively, ε is the solution accuracy, and ω≜ 3KL + K + L + J. The complexity of step <ref> for solving problem (<ref>) via SCA is 𝒪(I_ s√(JN + JL + KL)log_2(1/ε)(JKL^4N^3 + JL^4N^4 + K^2L^4N^2 + K^3L^3)) <cit.>, where I_ s stands for the number of iterations required for the convergence of SCA. Therefore, the overall complexity of Algorithm <ref> is about 𝒪[I_ out^2I_ inn^2log_2(1/ε)(I_ out^1I_ inn^1√(M)(ω M^3 + ω^2 M^2 + ω^3) + I_ s√(JN + JL + KL)(JKL^4N^3 + JL^4N^4 + K^2L^4N^2 + K^3L^3))], with I_ inn^2 and I_ out^2 denoting the numbers of inner and outer iterations required for the convergence of Algorithm <ref>, respectively. § PROPOSED ALGORITHM FOR (P2) We note that (P2) differs from (P1) in the sense that it does not have constraints on ∑_ℓ∈ℒ a_k,ℓ, ∀ k∈𝒦, as in constraint (<ref>) of (P1), which enables us to simplify (P2) by removing the binary variables {a_k,ℓ}. In other words, we have the following theorem. Problem (P2) shares the same optimal value with its simplified version, denoted by (P2') which is obtained by removing {a_k,ℓ} in (P2). Denote by η̅ and ὴ the optimal values of (P2) and (P2'), respectively. First, we have η̅≥ὴ since (P2') is actually a special case of (P2) with a_k,ℓ = 1, ∀ k ∈𝒦, ℓ∈ℒ. Next, denote {η̅, {𝐰̅_k,ℓ}, {𝐖̅_E,ℓ},{a̅_k,ℓ}, {τ̅_ℓ},{𝐯̅_ℓ}} as an arbitrary optimal solution to (P2). Let 𝐰̆_k,ℓ = 𝐰̅_k,ℓ if a̅_k,ℓ = 1 and 𝐰̆_k,ℓ = 0 otherwise, ∀ k∈𝒦, ℓ∈ℒ. It is easy to verify that {η̅, {𝐰̆_k,ℓ}, {𝐖̅_E,ℓ}, {τ̅_ℓ},{𝐯̅_ℓ}} is a feasible solution to (P2'). Then, it follows that η̅≤ὴ. This, together with η̅≥ὴ, yields η̅= ὴ. Theorem <ref> is thus proved. Based on Theorem <ref>, we only need to focus on solving (P2'). Since (P2') is similar to but much simpler than (P1), Algorithm <ref> for (P1) can be modified to solve (P2'). Furthermore, the computational complexity of solving (P2') is much lower than that of solving (P1) since (P2') does not involve binary variables {a_k,ℓ}. The details are omitted due to the space limitation. Denote by 𝒵́≜{ή, {𝐰́_k,ℓ}, {𝐖́_E,ℓ}, {τ́_ℓ},{𝐯́_ℓ}} the obtained solution of (P2'). Let á_k,ℓ = 1 if 𝐰́_k,ℓ≠ 0 and á_k,ℓ = 0 otherwise, ∀ k∈𝒦, ℓ∈ℒ. By doing so, we obtain a suboptimal solution {𝒵́, {á_k,ℓ}} of (P2). § SIMULATION RESULTS In this section, simulations are presented to evaluate the performance of our proposed UG schemes. As illustrated in Fig. <ref>, we consider a three-dimensional (3D) coordinate setup with the locations of the AP and the IRS being (3, 0, 0) and (0, 8, 0 ) measured in meter (m), respectively. The EUs and the IUs are randomly and uniformly distributed in two different circular regions centered at (3, 8, 0) m and (3, 50, 0) m, respectively, with identical radii of 2 m. Each channel response is assumed to comprise two types of radio fading: large-scale and small-scale. The large-scale fading is modeled as PL(d) = C_0/d^α <cit.>, where C_0, d, and α denote the path loss at the reference distance of 1 m, the link distance, and the path loss exponent, respectively. We set C_0 = -30 dB for all the links, α = 3.5 for the direct links, and α = 2.2 for the IRS-related links, respectively. 
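To get a feel for the link budget implied by these parameters, the sketch below evaluates PL(d) = C_0/d^α for the direct AP-IU link and for the two hops of the AP-IRS-IU cascade, whose per-element attenuation is (to a first approximation) the product of the two hop losses; this multiplicative "double fading" is what the passive beamforming gain of the N reflecting elements has to compensate. Distances follow the stated geometry; the snippet itself is purely illustrative.

```python
import numpy as np

C0_dB = -30.0
alpha_direct, alpha_irs = 3.5, 2.2

ap, irs = np.array([3.0, 0.0, 0.0]), np.array([0.0, 8.0, 0.0])
iu = np.array([3.0, 50.0, 0.0])          # centre of the IU cluster

def pl_db(d, alpha):                      # PL(d) = C0 / d^alpha, in dB
    return C0_dB - 10.0 * alpha * np.log10(d)

d_direct = np.linalg.norm(ap - iu)
d_ap_irs = np.linalg.norm(ap - irs)
d_irs_iu = np.linalg.norm(irs - iu)

direct = pl_db(d_direct, alpha_direct)
cascaded = pl_db(d_ap_irs, alpha_irs) + pl_db(d_irs_iu, alpha_irs)
print(f"AP->IU  direct  : {direct:7.1f} dB over {d_direct:5.1f} m")
print(f"AP->IRS->IU hops: {cascaded:7.1f} dB "
      f"({d_ap_irs:.1f} m and {d_irs_iu:.1f} m), per reflecting element")
```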
Furthermore, the small-scale fading is characterized by Rayleigh fading for the direct links and by Rician fading with a Rician factor of 3 dB for the IRS-related links. Unless otherwise stated, other parameters are set as σ_k^2 = -80 dBm, ∀ k∈𝒦, M = 4, N = 40, P = 43 dBm, T = 1 s, μ = ρ = 10^-2, c_1 = c_2 = 10, ϵ_1 = ϵ_2 = 10^-4, and ς_1 = ς_2 = 10^-7. §.§ Achievable Max-Min Harvested Energy We first provide a numerical comparison of the max-min harvested energy achievable by the following schemes: 1) Robust w/ time division: the algorithm proposed in Section <ref> for problem (<ref>); 2) Non-robust w/ time division: we solve a problem similar to problem (<ref>) but without considering the phase errors, after which we apply the obtained solution to compute the actual achievable max-min harvested energy in the presence of the phase errors; 3) Robust w/o time division: the counterpart of the scheme in 1) without time division (i.e., with time-invariant transmit/reflect beamforming); 4) Non-robust w/o time division: the counterpart of the scheme in 2) without time division. In Fig. <ref>, we plot the achievable max-min harvested energy of the above schemes versus the number of EUs for L = 5. Firstly, it is observed that with increasing J, all the schemes experience a striking decrease in the max-min harvested energy. This is intuitive since the larger the number of EUs, the more difficult it is to balance the energy fairness among them. Secondly, we note that the time division-based schemes perform much better than their counterparts without time division. The reason is that in the considered overloaded system, the time division-based schemes allow the AP (IRS) to steer the energy (reflected) signals towards different EUs in different time slots, which improves the minimum harvested energy of more EUs (especially those with weak channel conditions). Lastly, it is expected that for both cases with and without time division, the non-robust design suffers a substantial performance loss compared to the robust one since the former does not consider the phase errors when designing the transmit/reflect beamforming and time allocation (if any). Nevertheless, ignoring the phase errors brings a more significant performance degradation to the time division-based scheme than to that without time division. This is because the phase errors have a greater negative impact on the former scheme, which adopts time-varying IRS beamforming, than on the latter one with time-invariant IRS beamforming. The above two observations demonstrate the importance of robust design for IRS-aided SWIPT systems with EH requirements and phase errors, since a non-robust design can ultimately lead to an infeasible EH solution. §.§ Achievable Max-Min Throughput This subsection compares the achievable max-min throughputs of our proposed non-overlapping and overlapping UG schemes with those of the following two benchmark schemes: 1) Random UG: a_k,ℓ is non-optimized and randomly selected from {0,1}, ∀ k∈𝒦, ℓ∈ℒ; 2) Without UG: the conventional IRS-aided SWIPT strategy as in <cit.>, with the number of available time slots being 1 (i.e., τ_1 = T) and a_k,1 = 1, ∀ k∈𝒦. If any scheme turns out to be infeasible under certain setups, we assign a value of zero to its achievable max-min throughput to account for the associated penalty. In addition, since the non-robust counterparts of the considered schemes almost always result in infeasible EH solutions, their simulation curves are omitted. §.§.§ Impact of Number of IUs Fig. 
<ref> depicts the average max-min throughput versus the number of IUs when E = 1 × 10^-5 and 2 × 10^-5 J, respectively. Here, we set J = 8 and L = 3. From Fig. <ref>, it is first observed that the schemes adopting overlapping, non-overlapping, or random UG substantially outperform the scheme without UG, with the relative performance improvement increasing as K increases. The reasons are twofold. For one thing, since the three UG-based schemes with time division make it easier to fulfill the EH constraints at the EUs (see Fig. <ref>), more degrees-of-freedom (DoF) are left for enhancing the performance of the IUs, as compared to the scheme without UG (and time division). For another, under the setting of K + J > M and K ≥ M, grouping the IUs can alleviate the inter-user interference more effectively than not grouping them, especially when K is large. Second, the scheme with random UG does not perform as well as those with optimized UG, which shows the importance of well-optimized UG for performance enhancement. Third, the overlapping UG scheme consistently outperforms its sub-scheme, i.e., the non-overlapping UG scheme, as the former enables more efficient utilization of all the available resources. However, it is noteworthy that as K increases, the performance improvement of the overlapping UG scheme over the non-overlapping UG scheme becomes less pronounced. The explanation is that since increasing K inevitably leads to more severe inter-user interference, allowing some IUs to participate in multiple groups may no longer be beneficial or may bring only a marginal throughput gain. From Fig. <ref>, besides the observations similar to those in Fig. <ref>, we observe that the performance gap between the overlapping and non-overlapping UG schemes is marginal, even when K is relatively small. This can be explained as follows. With E = 2 × 10^-5 J, few resources are available for the IUs since the EUs with stringent EH requirements occupy most of them, which leaves the overlapping UG scheme little room to exploit its advantage of more efficient resource utilization for throughput improvement. To gain more insights, we plot the corresponding average total number of active time slots and ∑_ℓ∈ℒ∑_k ∈𝒦a_k,ℓ versus the number of IUs in Figs. <ref> and <ref>, respectively. Here, the active time slots refer to those with positive time durations, and their number also corresponds to the number of IU groups. Fig. <ref> shows that the average number of active time slots (as well as the average number of IU groups) increases with K, since the number of IUs that the AP can serve well in one time slot is limited. Moreover, it is worth pointing out that a larger number of IU groups is not always better, since the transmission duration allocated to each group is inversely proportional to the number of IU groups. This may explain why not all the available time slots are active. We also note that more active time slots are required for both the UG schemes when E = 2 × 10^-5 J than when E = 1 × 10^-5 J. Besides, it can be seen from Fig. <ref> that for the overlapping UG scheme, the average ∑_ℓ∈ℒ∑_k ∈𝒦a_k,ℓ is always larger than its corresponding K. This confirms that in certain channel realizations, some IUs do participate in more than one IU group. §.§.§ Impact of Number of Available Time Slots In Fig. <ref>, we investigate the impact of the number of available time slots on the system performance for J = 8 and E = 1 × 10^-5 J. 
It is observed that with increasing L, the max-min throughputs achieved by the overlapping and non-overlapping UG schemes first grow monotonically and then become gradually saturated. The reasons for this result are as follows. When L is small, the increase in L allows the formation of more IU groups, each with fewer IUs, enabling more spatial multiplexing gains to be achieved in each corresponding time slot. On the other hand, when L is large enough, further increasing L would no longer lead to an increased number of IU groups. This is because dividing the IUs into many more groups but each with a shorter transmission duration can be unfavorable for max-min throughput performance, which is confirmed by the trends of the curves representing the random UG scheme. §.§.§ Impact of Number of EUs Fig. <ref> illustrates the average max-min throughput versus the number of EUs when K = 5, L = 3, and E = 1×10^-5 J. As can be seen, the max-min throughputs achieved by all the schemes decrease rapidly with the increase of J. This is expected since the number of EH constraints increases with J, which narrows the feasible regions of the considered problems corresponding to these schemes. Additionally, in the absence of EUs (i.e., J = 0), the three schemes with UG still significantly outperform that without UG, thus further verifying the usefulness of grouping the IUs for max-min throughput improvement. Finally, the performance gap between the overlapping and non-overlapping UG schemes decreases as J increases, which is consistent with the observation in Fig. <ref> that the increase in E diminishes the advantage of the overlapping UG scheme over its non-overlapping counterpart. §.§.§ Impact of Number of IRS Elements In Fig. <ref>, we plot the average max-min throughput versus the number of IRS elements when E = 1 × 10^-5 and 2 × 10^-5 J, respectively. It is observed that the max-min throughputs achieved by all the schemes show upward trends as N becomes larger, since more DoF are available for customizing more favorable channels. Nevertheless, the performance gains diminish with N, which is especially evident for the relatively smaller E. We explain this result based on the following two facts. First, increasing N makes the EH requirement gradually less of a limiting factor to the performance. Second, the achievable max-min throughput of each scheme is upper-bounded by a finite value due to the AP's limited transmit power and transmission duration. Besides, we note that with the increase of N, the performance gap between the overlapping UG scheme and the other three schemes becomes more pronounced, since the former can better utilize the increased DoF. § CONCLUSION This paper considered an overloaded multiuser MISO downlink SWIPT system assisted by an IRS with phase errors. We grouped the IUs by considering two UG schemes, i.e., the non-overlapping UG scheme and the overlapping UG scheme, which do not allow and allow each IU to participate in multiple groups, respectively. Aiming to maximize the minimum throughput among all the IUs while satisfying the EH requirement of each EU, we formulated two design problems, each corresponding to one UG scheme, where the UG variables, the time allocation, and the transmit/reflect beamforming were jointly optimized. Computationally efficient algorithms were proposed to solve these two mixed-integer non-convex optimization problems suboptimally. 
Simulation results demonstrated that robust design is vital to practical IRS-aided SWIPT systems with phase errors since the solution obtained when ignoring the phase errors generally fails to satisfy the EH constraints. Moreover, our proposed UG schemes can remarkably improve the max-min throughput performance compared to the case without UG, as they enable higher active and passive beamforming gains by serving fewer IUs concurrently. Finally, unless the absolute difference between the number of transmit antennas at the AP and the number of IUs is small and the EH constraints are loose, the max-min throughput achieved by the non-overlapping UG scheme is comparable to that by the overlapping UG scheme. Thus, in most scenarios, the non-overlapping UG scheme is more attractive to practice systems due to its comparable performance to the overlapping UG scheme and extra advantage of easier implementation. § PROOF OF THEOREM <REF> According to (<ref>) and (<ref>), 𝔼_𝐯̃_ℓ{γ_k,ℓ} and 𝔼_𝐯̃_ℓ{ Q_j} can be written in the following forms: 5pt 𝔼_𝐯̃_ℓ{γ_k,ℓ} = a_k,ℓ𝐰_k,ℓ^H𝐇_k^H𝒫_ℓ𝐇_k𝐰_k,ℓ/∑_i∈𝒦\{k}a_i,ℓ𝐰_i,ℓ^H𝐇_k^H𝒫_ℓ𝐇_k𝐰_i,ℓ + tr(𝐇_k^H𝒫_ℓ𝐇_k𝐖_E,ℓ)+ σ_k^2, k∈𝒦, ℓ∈ℒ, 𝔼_𝐯̃_ℓ{ Q_j} = ∑_ℓ∈ℒτ_ℓ(∑_k∈𝒦a_k,ℓ𝐰_k,ℓ^H𝐆_j^H𝒫_ℓ𝐆_j𝐰_k,ℓ + tr(𝐆_j^H𝒫_ℓ𝐆_j𝐖_E, ℓ) ), j∈𝒥, where 𝒫_ℓ = 𝔼_𝐯̃_ℓ{(𝐯_ℓ⊙𝐯̃_ℓ)(𝐯_ℓ⊙𝐯̃_ℓ)^H }. Then, the problem of deriving the closed-form expressions of 𝔼_𝐯̃_ℓ{γ_k,ℓ} and 𝔼_𝐯̃_ℓ{ Q_j} is converted into that for 𝒫_ℓ. Notice that 𝒫_ℓ can be recast as 𝒫_ℓ = diag(𝐯_ℓ)𝔼_𝐯̃_ℓ{𝐯̃_ℓ𝐯̃_ℓ^H} diag(𝐯_ℓ^H) with the expression of 𝔼_𝐯̃_ℓ{𝐯̃_ℓ𝐯̃_ℓ^H} given by [ 1 𝔼_Δθ̃_ℓ,2,1{ e^Δθ̃_ℓ,2,1} ⋯ 𝔼_Δθ̃_ℓ,N,1{ e^Δθ̃_ℓ,N,1} 𝔼_̃̃θ_ℓ,1{ e^-θ̃_ℓ,1}; 𝔼_Δθ̃_ℓ,1,2{ e^Δθ̃_ℓ,1,2} 1 ⋯ 𝔼_Δθ̃_ℓ,N,2{ e^Δθ̃_ℓ,N,2} 𝔼_θ̃_ℓ,2{ e^-θ̃_ℓ,2}; ⋮ ⋮ ⋱ ⋮ ⋮; 𝔼_Δθ̃_ℓ,1,N{ e^Δθ̃_ℓ,1,N} 𝔼_Δθ̃_ℓ,2,N{ e^Δθ̃_ℓ,2,N} ⋯ 1 𝔼_θ̃_ℓ,N{ e^-θ̃_ℓ,N}; 𝔼_θ̃_ℓ,1{ e^θ̃_ℓ,1} 𝔼_θ̃_ℓ,2{ e^θ̃_ℓ,2} ⋯ 𝔼_θ̃_ℓ,N{ e^θ̃_ℓ,N} 1; ]. In (<ref>), Δθ̃_ℓ,m,n≜θ̃_ℓ,m - θ̃_ℓ,n, m,n∈𝒩, m≠ n, ℓ∈ℒ. Since θ̃_ℓ,m and θ̃_ℓ,n are uniformly distributed on [-π/2, π/2], Δθ̃_ℓ,m,n follows a triangular distribution on [-π,π] and its probability density function can be expressed as f(Δθ̃_ℓ,m,n) = Δθ̃_ℓ,m,n/π^2 + 1/π, Δθ̃_ℓ,m,n∈ [-π,0], -Δθ̃_ℓ,m,n/π^2 + 1/π, Δθ̃_ℓ,m,n∈ (0,π], 0, otherwise. With (<ref>), we have E_Δθ̃_ℓ,m,n{ e^Δθ̃_ℓ,m,n} = ∫_-π^0(Δθ̃_ℓ,m,n/π^2 + 1/π)e^Δθ̃_ℓ,m,ndΔθ̃_ℓ,m,n + ∫_0^π(-Δθ̃_ℓ,m,n/π^2 + 1/π)e^Δθ̃_ℓ,m,ndΔθ̃_ℓ,m,n = 4/π^2. On the other hand, since θ̃_ℓ,n obeys a uniform distribution on [-π/2,π/2], one can easily derive that 𝔼_θ̃_ℓ,N{ e^θ̃_ℓ,N} = ∫_-π/2^π/21/πe^θ̃_ℓ,Ndθ̃_ℓ,N = 2/π, 𝔼_θ̃_ℓ,N{ e^-θ̃_ℓ,N} = ∫_-π/2^π/21/πe^-θ̃_ℓ,Ndθ̃_ℓ,N = 2/π. By substituting (<ref>)-(<ref>) into (<ref>), we have 𝔼_𝐯̃_ℓ{𝐯̃_ℓ𝐯̃_ℓ^H} = [ 1 4/π^2 ⋯ 4/π^2 2/π; 4/π^2 1 ⋯ 4/π^2 2/π; ⋮ ⋮ ⋱ ⋮ ⋮; 4/π^2 4/π^2 ⋯ 1 2/π; 2/π 2/π ⋯ 2/π 1; ] = 𝐙, with which, the closed-form expression of 𝒫_ℓ is given by 𝒫_ℓ = diag(𝐯_ℓ)𝐙 diag(𝐯_ℓ^H). Finally, by replacing 𝒫_ℓ in (<ref>) and (<ref>) with its closed-form expression, we arrive at (<ref>) and (<ref>), respectively. This completes the proof of Theorem <ref>. § PROOF OF LEMMA <REF> We prove Lemma <ref> by showing that tr(𝐘_j,ℓ𝐖_E,ℓ) = 𝐯_ℓ^H𝐐_j,E, ℓ𝐯_ℓ, ∀ℓ∈ℒ'. First, {q_ℓ,m} and {𝐰_ E,ℓ,m} can be obtained from the eigenvalue decomposition of 𝐖_ E, ℓ and we can express 𝐖_ E, ℓ as 𝐖_ E,ℓ = ∑_m=1^r_ E,ℓq_ℓ,m𝐰_ E,ℓ,m𝐰_ E,ℓ,m^H. 
Second, recall that 𝐘_j,ℓ = 𝐆_j^H diag(𝐯_ℓ)𝐙 diag(𝐯_ℓ^H)𝐆_j, then we can derive that tr(𝐘_j,ℓ𝐖_ E,ℓ) (a)=∑_m=1^r_ E,ℓq_ℓ,m𝐰_ E,ℓ,m^H𝐆_j^H diag(𝐯_ℓ)𝐙 diag(𝐯_ℓ^H)𝐆_j𝐰_ E,ℓ,m (b)=∑_m=1^r_ E,ℓq_ℓ,m conj(𝐯_ℓ^H diag(𝐆_j𝐰_ E,ℓ,m)𝐙( diag(𝐆_j𝐰_ E,ℓ,m))^H𝐯_ℓ) (c)=∑_m=1^r_ E,ℓq_ℓ,m𝐯_ℓ^H diag(𝐆_j𝐰_ E,ℓ,m)𝐙( diag(𝐆_j𝐰_ E,ℓ,m))^H𝐯_ℓ = 𝐯_ℓ^H(∑_m=1^r_ E,ℓq_ℓ,m diag(𝐆_j𝐰_ E,ℓ,m)𝐙( diag(𝐆_j𝐰_ E,ℓ,m))^H)𝐯_ℓ = 𝐯_ℓ^H𝐐_j,E, ℓ𝐯_ℓ, where the equality (a) utilizes the properties of the trace operator, the equality (b) holds due to the facts that 𝐰_ E,ℓ,m^H𝐆_j^H diag(𝐯_ℓ) = conj(𝐯_ℓ^H diag(𝐆_j𝐰_ E,ℓ,m)) and diag(𝐯_ℓ^H)𝐆_j𝐰_ E,ℓ,m = conj( ( diag(𝐆_j𝐰_ E,ℓ,m))^H𝐯_ℓ), and the equality (c) is true since each term in the left-hand-side of (c) is a real number. With (<ref>), we can readily verify that constraint (<ref>) is equivalent to constraint (<ref>), which completes the proof of Lemma <ref>. IEEEtran
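As a quick numerical sanity check of the closed-form expectations derived in the proof of Theorem <ref> above (reading the exponentials as complex exponentials of the phase errors, which is what yields the stated values), the following Monte Carlo sketch, our own illustration rather than part of the paper, reproduces the entries 2/π and 4/π^2 of the matrix 𝐙 under the stated uniform distribution of the phase errors on [-π/2, π/2].

```python
import numpy as np

rng = np.random.default_rng(1)
S = 1_000_000  # number of Monte Carlo samples

# Phase errors uniformly distributed on [-pi/2, pi/2]
theta_m = rng.uniform(-np.pi / 2, np.pi / 2, S)
theta_n = rng.uniform(-np.pi / 2, np.pi / 2, S)

# Entry involving a single phase error: E{ e^{j*theta} } = 2/pi
single = np.mean(np.exp(1j * theta_n))
# Entry involving the difference of two phase errors: E{ e^{j*(theta_m - theta_n)} } = 4/pi^2
diff = np.mean(np.exp(1j * (theta_m - theta_n)))

print(np.real(single), 2 / np.pi)       # both approximately 0.6366
print(np.real(diff), 4 / np.pi ** 2)    # both approximately 0.4053
```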
http://arxiv.org/abs/2307.02150v3
20230705094641
Harmonizing Feature Attributions Across Deep Learning Architectures: Enhancing Interpretability and Consistency
[ "Md Abdul Kadir", "Gowtham Krishna Addluri", "Daniel Sonntag" ]
cs.LG
[ "cs.LG", "cs.CV" ]
Harmonizing Feature Attributions Across Deep Learning Architectures M. A. Kadir et al. German Research Center for Artificial Intelligence (DFKI), Germany {abdul.kadir, Gowthamkrishna.Addluri, daniel.sonntag}@dfki.deUniversity of Oldenburg, Oldenburg, Germany Harmonizing Feature Attributions Across Deep Learning Architectures: Enhancing Interpretability and Consistency Md Abdul Kadir10000-0002-8420-2536 GowthamKrishna Addluri10009-0008-0513-9016 Daniel Sonntag1, 20000-0002-8857-8709 Received: date / Accepted: date ======================================================================================================================= Enhancing the interpretability and consistency of machine learning models is critical to their deployment in real-world applications. Feature attribution methods have gained significant attention, which provide local explanations of model predictions by attributing importance to individual input features. This study examines the generalization of feature attributions across various deep learning architectures, such as convolutional neural networks (CNNs) and vision transformers. We aim to assess the feasibility of utilizing a feature attribution method as a future detector and examine how these features can be harmonized across multiple models employing distinct architectures but trained on the same data distribution. By exploring this harmonization, we aim to develop a more coherent and optimistic understanding of feature attributions, enhancing the consistency of local explanations across diverse deep-learning models. Our findings highlight the potential for harmonized feature attribution methods to improve interpretability and foster trust in machine learning applications, regardless of the underlying architecture. § INTRODUCTION Deep learning models have revolutionized various domains, but their complex nature often hampers our ability to understand their decision-making processes <cit.>. Interpretability techniques have emerged, with local and global explanations being two significant categories <cit.>. Local explanations focus on understanding individual predictions, highlighting the most influential features for a specific instance. This method is valuable for understanding model behavior at a granular level and providing intuitive explanations for specific predictions. On the other hand, global explanations aim to capture overall model behavior and identify patterns and trends across the entire dataset. They offer a broader perspective and help uncover essential relationships between input features and model prediction. This paper delves into interpretability in deep learning models, particularly model-agnostic feature attribution, a subset of local explanation techniques. Feature attribution refers to assigning importance or relevance to input features in a machine learning model's decision-making process <cit.>. It aims to understand which features have the most significant influence on the model's predictions or outputs. Feature attribution techniques provide insights into the relationship between input features and the model's decision, shedding light on the factors that drive specific outcomes. These techniques are precious for interpreting complex models like deep learning, where the learned representations may be abstract and difficult to interpret directly <cit.>. 
By quantifying the contribution of individual features, feature attribution allows us to identify the most influential factors, validate the model's behavior, detect biases, and gain a deeper understanding of the decision-making process. Feature attribution methods can be evaluated through various approaches and metrics <cit.>. Qualitative evaluation involves visually inspecting the attributions and assessing their alignment with domain knowledge. Perturbation analysis tests the sensitivity of attributions to changes in input features <cit.>. Sanity checks ensure the reasonableness of attributions, especially in classification problems. From a human perspective, we identify objects in images by recognizing distinct features <cit.>. Similarly, deep learning models are trained to detect features from input data and make predictions based on these characteristics <cit.>. The primary objective of deep learning models, irrespective of the specific architecture, is to learn the underlying data distribution and capture unique identifying features for each class in the dataset. Various deep learning architectures have proven proficient in capturing essential data characteristics within the training distribution <cit.>. We assume that if a set of features demonstrates discriminative qualities for one architecture, it should likewise exhibit discriminative properties for a different architecture, provided both architectures are trained on the same data. This assumption forms the foundation for the consistency and transferability of feature attributions across various deep learning architectures. Our experiments aim to explore the generalizability of features selected by a feature attribution method for one deep learning architecture compared to other architectures trained on the same data distribution. We refer to this process as harmonizing feature attributions across different architectures. Our experimental results also support our assumption and indicate that different architectures trained on the same data have a joint feature identification capability. § RELATED WORK Various explanation algorithms have been developed better to understand the internal mechanisms of deep learning models. These algorithms, such as feature attribution maps, have gained significant popularity in deep learning research. They offer valuable insights into the rationale behind specific predictions made by deep learning models <cit.>. Notable examples of these explanation methods include layer-wise relevance propagation <cit.>, Grad-CAM <cit.>, integrated gradient <cit.>, guided back-propagation <cit.>, pixel-wise decomposition <cit.>, and contrastive explanations <cit.>. Various methods have been developed to evaluate feature attribution maps. Ground truth data, such as object-localization or masks, has been used for evaluation <cit.>. Another approach focuses on the faithfulness of explanations, measuring how well they reflect the model's attention <cit.>. The IROF technique divides images into segments and evaluates explanations based on segment relevance <cit.>. Pixel-wise evaluations involve flipping pixels or assessing attribution quality using pixel-based metrics <cit.>. <cit.> has demonstrated the utility of feature attribution methods for feature selection. Additionally, research conducted by <cit.> and <cit.> has explored the internal representation similarity between different architectures. 
However, to the best of our knowledge, the generalization of feature attributions across diverse neural architectures still needs to be explored. Motivated by the goal of evaluating feature attributions, we are investigating a novel approach that involves assessing feature attributions across multiple models belonging to different architectural designs. This method aims to provide a more comprehensive understanding of feature attribution in various contexts, thereby enhancing the overall explainability of deep neural networks. § METHODOLOGY This experiment investigates the generalizability and transferability of feature attributions across different deep learning architectures trained on the same data distribution. The experimental process involves generating feature attribution maps for a pretrained model, extracting features from input images, and passing them to two models with distinct architectures. The accuracy and output probability distribution are then calculated for each architecture. In this experiment, we employ a modified version of the Soundness Saliency (SS) method <cit.> for generating explanations. The primary objective with a network f, for a specific input x (Fig. <ref> (a)), and label a, is to acquire a map or mask M ∈{0, 1}^hw. This map aims to minimize the expectation E_x̃∼(x, M) [- ∑ f_i(x̃)log(f_i(x̃))], wherein the probability assigned by the network to a modified or composite input x is maximized. x̃∼Γ(x, a) ≡x∼𝒳, x̃ = M ⊙ x + (1- M) ⊙x Here, M (Fig. <ref> (b)) represents the feature attribution map generated by the Soundness Saliency algorithm. The saliency map M provides information about the importance of each pixel and the extent of its contribution to the classification. If the value of M (Fig. <ref> (b)) for a specific pixel is 0, it implies that the pixel has no significance in the classification process. Conversely, if the value of M for a particular pixel is high, it indicates that the pixel is highly important for the classification. We enhance the extraction of important features (Fig <ref> (c)) from input by applying the Hadamard product between each input channel and the corresponding attribution map M. In addition to the Soundness Saliency (SS) algorithm, we also employ Grad-CAM <cit.> (GC) for feature extraction. We utilize the selected features Fig. <ref> (c) extracted through feature attributions and feed them to two distinct models with different architectures, albeit trained on the same training data. Accuracy, F1 score, and output probability scores are calculated for these models. The focus is observing model prediction changes when only the selected features are inputted rather than the entire image. § EXPERIMENT AND RESULTS In this study, we selected four distinct pretrained architectures: the Vision Transformer architecture (ViT) <cit.>, EfficientNet-B7 (E-7) <cit.>, EfficientNet-B6 (E-6) <cit.>, and EfficientNet-B5 (E-5) <cit.>. To generate feature attribution maps, we first employed E-7 along with a challenging subset[<https://github.com/fastai/imagenette>] of the ImageNet validation data, which is known to be particularly difficult for classifiers. Subsequently, we generated feature maps for all test data and passed them to E-6 and E-5. In parallel, we also generated features for ViT and followed the same procedure. We chose to utilize both a transformer and a CNN architecture in our experiments because they are fundamentally different from one another, allowing for a comprehensive evaluation of the various architectures. 
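A minimal sketch of this evaluation pipeline is given below. It assumes a PyTorch setup, and the function names and preloaded source/target models are illustrative placeholders rather than the authors' implementation. The masked input is formed by the channel-wise Hadamard product M ⊙ x described above and is then passed to a model of a different architecture to measure how much class evidence the selected features carry.

```python
import torch
import torch.nn.functional as F

def masked_input(x, m):
    """Keep only the attributed features via the channel-wise Hadamard product M ⊙ x.

    x: image batch of shape (B, C, H, W); m: saliency maps of shape (B, 1, H, W)
    with values in [0, 1], broadcast over the channel dimension.
    """
    return x * m

@torch.no_grad()
def cross_architecture_accuracy(target_model, x, m, labels):
    """Feed features selected on a *different* source model to the target
    architecture and measure how well it still classifies them."""
    target_model.eval()
    logits = target_model(masked_input(x, m))
    probs = F.softmax(logits, dim=1)
    acc = (probs.argmax(dim=1) == labels).float().mean().item()
    return acc, probs

# Hypothetical usage: `x`, `labels` from an ImageNet-style loader, `m` produced by a
# saliency method (e.g., Soundness Saliency) on a source model such as EfficientNet-B7,
# and `vit` a pretrained Vision Transformer acting as the target model:
# acc, probs = cross_architecture_accuracy(vit, x, m, labels)
```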
Our experimental results (Tables <ref> and <ref>) indicate that features generated by a neural architecture can be detected by other architectures trained on the same data. This implies that feature attribution maps encapsulate sufficient data distribution information. Consequently, feature maps created using attribution maps on one architecture can be recognized by another architecture, provided that both are trained on the same data. As depicted in Fig. <ref>, when we feed only features to the model, the class probability increases (Fig. <ref> (b), (d), and (f)), particularly when using similar architectures for feature generation and evaluation. When employing different types of architectures (e.g., Transformer for generating feature maps and CNN for evaluating them), there is a slight drop in accuracy (Fig. <ref> (j) and (l)), but the performance remains consistent. Accuracy decreases when features are extracted with Grad-CAM saliency maps, suggesting these maps might not capture crucial information on the data distribution. However, when examining row GC in Tables <ref> and <ref>, it is observed that accuracy remains consistent across various architectural configurations when features are generated using Grad-CAM. This suggests that different architectures are consistent in detecting certain features from the data. § CONCLUSION The experiment validates our hypothesis that various architectures acquire shared features from a common data distribution. We noticed a notable rise in class prediction probabilities when utilizing selected features as inputs, particularly when employing similar neural architecture building blocks such as convolution. Additionally, the consistency of predictions on feature attribution maps across architectures demonstrates that different architectures are not randomly learning features from the data, thereby enhancing the reliability of the models. These findings underscore the potential to generalize features and emphasize the need for additional research to harmonize feature attribution maps, expanding their applicability in various domains. §.§.§ Acknowledgements This work was partially funded by the German Federal Ministry of Education and Research (BMBF) under grant numbers 16SV8639 (Ophthalmo-AI) and 2520DAT0P2 (XAINES) and by the German Federal Ministry of Health (BMG) under grant number 2520DAT0P2 (pAItient), and supported by the Lower Saxony Ministry of Science and Culture and the Endowed Chair of Applied Artificial Intelligence (AAI) of the University of Oldenburg.
http://arxiv.org/abs/2307.02301v1
20230705135935
Sumformer: Universal Approximation for Efficient Transformers
[ "Silas Alberti", "Niclas Dern", "Laura Thesing", "Gitta Kutyniok" ]
cs.LG
[ "cs.LG", "cs.CL", "stat.ML" ]
[ Sumformer: Universal Approximation for Efficient Transformers Silas AlbertiLMU,Stanford Niclas DernTUM Laura ThesingLMU Gitta KutyniokLMU TUMTUM School of Computation, Information and Technology, Technical University of Munich, Munich, Germany LMUDepartment of Mathematics, Ludwig-Maximilians-Universität München, Germany StanfordDepartment of Electrical Engineering, Stanford University, United States Silas Albertisalberti@stanford.edu 0.3in ] Natural language processing (NLP) made an impressive jump with the introduction of Transformers. ChatGPT is one of the most famous examples, changing the perception of the possibilities of AI even outside the research community. However, besides the impressive performance, the quadratic time and space complexity of Transformers with respect to sequence length pose significant limitations for handling long sequences. While efficient Transformer architectures like Linformer and Performer with linear complexity have emerged as promising solutions, their theoretical understanding remains limited. In this paper, we introduce Sumformer, a novel and simple architecture capable of universally approximating equivariant sequence-to-sequence functions. We use Sumformer to give the first universal approximation results for Linformer and Performer. Moreover, we derive a new proof for Transformers, showing that just one attention layer is sufficient for universal approximation. § INTRODUCTION The introduction of the Transformer architecture in 2017 <cit.> commenced a new revolution in the field of deep learning. It not only revolutionized Natural Language Processing with famous models like BERT <cit.> and GPT-3 <cit.> but also other areas like computer vision <cit.> and biology <cit.>. Unlike residual neural networks (RNNs), Transformers have a global structure, providing them with two significant advantages: First, Transformers can remember and relate information that is not locally close. It even improves this capability beyond Long Short-term memory (LSTM) networks <cit.> or gated RNNs <cit.>. Second, Transformers can train simultaneously with the entire input sequence. This makes it possible to process the tokens in parallel and scale the model much better. However, Transformers can become computationally expensive at scale. In many cases, the primary performance bottleneck is the attention mechanism that needs to compute a n × n-Matrix, where n ∈ is the length of the input sequence. Therefore, the computational complexity of a forward pass grows O(n^2) with the sequence length. This establishes the sequence length as one of the major bottlenecks when using of Transformers for long sequences, which are encountered in many fields, such as NLP for processing longer documents like books, time series <cit.>, genomics <cit.>, and reinforcement learning <cit.>. To address this problem, many new architectures have been proposed <cit.>. These can be roughly divided into sparse Transformers and efficient Transformers <cit.>. In some cases, the complexity can be reduced to as low as 𝒪(n). While, in practice, these new architectures do not match the performance of Transformers, the relative performance to the decrease in computational cost makes them promising. Besides their empirical performance, little is known about the theoretical properties of these new architectures. Particularly, they have not yet been studied from the perspective of expressivity. 
This paper shows that the efficient Transformers, Linformer and Performer, are universal approximators of equivariant continuous sequence-to-sequence functions on compact sets. §.§ Summary of contributions In this paper, we introduce the Sumformer architecture. This architecture serves as a simple tool that we can use to investigate the expressive power of Transformers and two selected efficient Transformer architectures: Linformer <cit.> and Performer <cit.>. We chose the latter two architectures since they performed best in the Long Range Arena benchmark <cit.>. First, we show that the Sumformer architecture is able to approximate all continuous equivariant sequence-to-sequence functions on compact sets (Sec. <ref>). We give two different proofs: A continuous proof based on the algebra of multisymmetric polynomials, and a discrete proof based on a piecewise constant approximation. Using this result, we give a new proof of the universal approximation theorem for Transformers (Sec. <ref>). This proof improves significantly upon the previous result from <cit.>, by reducing the number of necessary attention layers. Our proof only needs one attention layer, whereas the number of attention layers in <cit.> grows exponentially with the token dimension. Based on this proof, we give the first proof that Linformer and Performer are universal approximators (Sec. <ref>). This is the first universal approximation theorem for efficient Transformers, showing that despite using the efficient attention mechanisms we do not suffer from a loss in expressivity. Our numerical experiments (Sec. <ref>) using the Sumformer architecture show that the Sumformer architecture is not only theoretically useful but can indeed be used to learn functions using gradient descent. Furthermore, we find an exponential relation between the token dimension and the necessary latent dimension. §.§ Related work This paper analyses the expressive power of newly evolving efficient Transformer architectures. Expressivity is a natural first question when investigating the possibilities and limitations of network architectures. Therefore, the question of which functions can be approximated (uniformly) with neural networks and their variance is of great interest. The publications mentioned in the following are by no means exhaustive but rather a selection: The first universal approximation result for neural networks dates back to 1989 with the universal approximation theorem in <cit.>. Further investigations also for deeper networks were made in <cit.>. These results were extended to functions with the rectified linear unit (ReLU) activation function in <cit.> and convolutional neural networks in <cit.>. Feed forward neural networks with fewer non-zero coefficients and values that can be stored with fewer bits and therefore improve memory efficiency are investigated in <cit.>. The Transformer architecture has not been explored as much in the literature. We know from <cit.> that Transformers are universal approximators in L_p, for 1 ≤ p < ∞ for continuous sequence-to-sequence functions. Moreover, it has been shown in <cit.> that under certain assumptions on the sparsity pattern, sparse Transformers form universal approximators in the same setting. The expressivity of the self-attention mechanism has also been examined from a complexity theory perspective in <cit.>. For efficient Transformer architectures, no such universal approximation results exist to our knowledge. 
The main inspiration for this work is the Deep Sets architecture which shows a universal approximation theorem for invariant functions on sets <cit.>. We expand on their theorems in the continuous case (Theorem 7 & 9) and expand the theory from invariant functions on sets to equivariant functions on sequences. A similar model to Sumformer was proposed, and universality was proven in <cit.>. However, the connection to (efficient) Transformers was not made. We build upon their proof and propose an alternative discontinuous version. Concurrent work has given the continuous proof in higher dimension, but neither considers the expansion to equivariant sequence-to-sequence functions nor to Transformers <cit.>. § PRELIMINARIES This section describes the setting and states helpful theorems for our proofs and experiments. We first recall the definition of attention heads and the Transformer block from <cit.>. Afterwards, we describe how they can be changed to be more efficient with Linformer and Performer. Furthermore, we define equivariant, semi-invariant functions, multisymmetric polynomials, and multisymmetric power sums <cit.>. We also state important theorems about the relations between these concepts from <cit.> and <cit.>. Lastly, we recall an important theorem from <cit.>. §.§ Transformer The central part of the Transformer is the (self-)attention layer, which is able to connect every element in a sequence with every other element. Let W_Q,W_K,W_V∈^d× d be weight matrices and let ρ:^d→^d be the softmax function. A (self-)attention head is a function :^n× d→^n× d with (X):=ρ((XW_Q)(XW_K)^⊤ /√(d))_AXW_V where ρ is applied row-wise. We call A∈^n× n the attention matrix. Computing the attention matrix A has a computational complexity of 𝒪(n^2), thereby forming the highest cost in evaluating the Transformer. In the next step, we combine the attention heads to an attention layer by concatenating h attention heads and multiplying them with another weight matrix W_O. Let h∈, let _1,…,_h be attention heads and let W_O∈^hd× d. A (multi-head) (self-)attention layer :^n× d→^n× d is defined as (X) := [_1(X),…,_h(X)]W_O. For the Transformer architecture the attention layer is combined with fully-connected layers that are applied token-wise. Moreover, there are residual connections between all the layers <cit.>. Those three components together yield the Transformer block. A Transformer block :^n× d→^n× d is an attention layer :^n× d→^n× d followed by a fully-connected feed forward layer :^d→^d with residual connections (X):=X+(X+(X)) where the fully-connected feed-forward layer is applied row-wise. Similar to the concept of feed-forward neural networks, we stack several Transformer blocks after each other by concatenation. The Transformer architecture is then defined as follows. Let ℓ∈ and _1,…,_ℓ be Transformer blocks. A Transformer network :^n× d→^n× d is a composition of Transformer blocks: (X):=(_ℓ∘_ℓ-1∘…∘_1)(X). §.§ Efficient Transformer To address the O(n^2) bottleneck of computing the attention matrix A, various efficient Transformers were introduced. We chose to investigate Linformer and Performer since they stood out in the Long Range Arena benchmark <cit.>. Both architectures only replace the attention mechanism and do not change the rest of the architecture. §.§.§ Linformer The Linformer architecture is motivated by the observation that the attention matrix A is effectively low rank. This is supported by empirical evidence in actual language models and theoretical results in <cit.>. 
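For concreteness, the following minimal NumPy sketch of the attention head from Definition <ref> is our own illustration and not code from the paper. It forms the n × n attention matrix A explicitly, which is exactly the 𝒪(n^2) cost that the efficient attention mechanisms discussed next are designed to avoid.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(X, W_Q, W_K, W_V):
    """Self-attention head Att(X) = softmax((X W_Q)(X W_K)^T / sqrt(d)) X W_V,
    with the softmax applied row-wise; X has shape (n, d), the weights (d, d)."""
    d = X.shape[1]
    A = softmax((X @ W_Q) @ (X @ W_K).T / np.sqrt(d), axis=1)  # (n, n) attention matrix
    return A @ (X @ W_V)

rng = np.random.default_rng(0)
n, d = 6, 4
X = rng.standard_normal((n, d))
W_Q, W_K, W_V = (rng.standard_normal((d, d)) for _ in range(3))
print(attention_head(X, W_Q, W_K, W_V).shape)  # (6, 4)
```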
The Linformer architecture utilizes the Johnson-Lindenstrauss Lemma by using linear projections E,F∈^k× n to project the key and value matrix K=XW_K and V=XW_V from ^n× d to ^k× d. The entries of E and F are sampled from a normal distribution. The precise definition of a Linformer Attention Head is as follows: Let k∈ with k<n and let E,F∈^k× n be linear projection matrices. Furthermore, let W_Q,W_K,W_V∈^d× d, ρ:^d→^d be as in the Definition of a Transformer attention head <ref>. A Linformer attention head is a function :^n× d→^n× d with (X):= ρ((XW_Q)(EXW_K)^⊤ /√(d)) FXW_V where ρ is applied row-wise. Then, the new attention matrix A=ρ((XW_Q)(EXW_K)^⊤ /√(d)) will be in ^n× k, giving a computational complexity of 𝒪(nk) instead of 𝒪(n^2). Using the Johnson-Lindenstrauss Lemma it is shown that when k is chosen on the order of 𝒪(d/^2), the attention mechanism is approximated with error. Since 𝒪(d/^2) is independent of n, the complexity of Linformer Attention is 𝒪(n) as n increases. §.§.§ Performer The key insight that motivates the Performer architecture is the fact that the attention mechanism could be more efficient if the attention matrix had no non-linearity: (QK^T)V=Q(K^TV) This reduces the computational complexity from O(n^2d) to O(nd^2). By interpreting the attention matrix as a kernel matrix, this non-linearity can be replaced by a dot product in a kernel space, enabling the following efficient attention algorithm: Let k∈ with k<n, let _1,…,_k∼𝒩(0,I_d) and define a:^d→^k as a(x):=1/√(k)exp(-x^2/2)[exp(_1^⊤ x),…,exp(_k^⊤ x)]. Furthermore, let W_Q,W_K,W_V∈^d× d be weight matrices. A Performer attention head is a function :^n× d→^n× d with (X):=a(XW_Q)(a(XW_K)^⊤ (XW_V)) where a is applied row-wise. With this definition, we avoid the computation of the full attention matrix, which reduces the computational complexity from O(n^2d) to O(nkd). §.§ Equivariant and Semi-Equivariant Functions Let and be the domain and range of a function, e.g., ==^d or ,=[0,1]^d in the compact case. We call an element X∈^n a sequence of n elements and denote X=[x_1,…,x_n]. Often, we refer to the elements x_i of the sequence as points. In the canonical case ⊆^d, we can represent sequences X∈^n× d as matrices. We call functions of type f:^n→ sequence-to-point functions. A sequence-to-point function f:^n→, with ,⊂^d is equivariant to the order of elements in a sequence if for each permutation π:[n]→[n]: f([x_π(1),…,x_π(n)])=[f_π(1)(X),…,f_π(n)(X)]. We write that f∈. Transformers represent sequence-to-sequence functions, but sometimes it is more convenient to work with sequence-to-point functions. To facilitate that, we recall the concept of a semi-invariant function (see: <cit.>). A sequence-to-point function g:^n→ is semi-invariant if for each permutation π:[n]∖{1}→ [n]∖{1}: g([x_1,x_2,…,x_n])=g([x_1,x_π(2),…,x_π(n)]). In this context, the following insight from [<cit.>, Lemma 10] is important because it enables us to deal with equivariant sequence-to-sequence functions by looking at semi-invariant sequence-to-point functions instead: A sequence-to-sequence function f:^n→^n is equivariant if and only if there exists a semi-invariant sequence-to-point function g:^n→ such that f ([x_1,…,x_n]) =[g(x_1,{x_2,x_3…}),g(x_2,{x_1,x_3,…}),…]. §.§ Multisymmetric Polynomials We discuss two different proofs for the universality of Sumformer. For the continuous proof, we use multisymmetric polynomials, which we introduce now. Our definitions are based on <cit.>. Let ⊂^d. 
A (real) multisymmetric polynomial in a sequence of length n is a polynomial p: ^n →ℝ in the variables x^(1)_1, x^(1)_2, ..., x^(n)_d which is invariant in permutations of x^(1), ..., x^(n). A multisymmetric power sum of multidegree α = (α_1, ..., α_d) ∈^d \{0} is a multisymmetric polynomial of the form: p_α: ^n →ℝ, [x^(1), ..., x^(n)] ↦∑_i = 1^n (x^(i))^α where (x^(i))^α = (x^(i)_1)^α_1⋯ (x^(i)_d)^α_d. The multisymmetric power sums are of interest because they can generate any multisymmetric polynomial. The following theorem which follows directly from [<cit.>, Theorem 3 & Corollary 5] shows this relationship: The real multisymmetric power sums in a sequence of length n with multidegree |α| α_1 + … + α_d ≤ n generate all real multisymmetric polynomials (in a sequence of length n), i.e. every multisymmetric polynomial p can be represented by p = σ(p_α^(1), ..., p_α^(z)) with a (real) polynomial σ and the multisymmetric power sums p_α^(1), ..., p_α^(z). §.§ Deep sets As discussed in Section <ref>, the concept of a Sumformer, which we introduce in section <ref>, is related to the concept of deep sets introduced in <cit.>. We also utilize the following theorem for the discontinuous proof: Let Z = {z_1, ..., z_M}, z_m ∈ E, E countable and 𝒵 be the power set of Z. A function f: 𝒵→ℝ operating on Z can be permutation invariant to the elements in Z, if and only if it can be decomposed in the form ψ(∑_z ∈ Zϕ(x)), for suitable transformations ϕ and ψ. § SUMFORMER We now introduce the new architecture Sumformer. The name stems from the inherent dependence on the sum of a function evaluation of every token separately. Let d'∈ and let there be two functions ϕ:→^d',ψ:×^d'→. A Sumformer is a sequence-to-sequence function 𝒮:^n→^n which is evaluated by first computing Σ:=∑_k=1^nϕ(x_k), and then 𝒮([x_1,…,x_n]):=[ψ(x_1,Σ),…,ψ(x_n,Σ)]. The Sumformer architecture is simple and can be approximated with Transformers, Linformers, and Performers. The simplicity of the architecture and the ability to prove the universality of multiple architectures using it suggests that Sumformers can also be approximated by other architectures and thereby give universal approximation theorems for them. § UNIVERSAL APPROXIMATION In this section, we give the main theorems of this paper. We first show that Sumformers are universal approximators for continuous sequence-to-sequence functions. This result can be used to give a new proof for the universality of Transformers and the first universal approximation results for Linformer and Performer. Before continuing, we make an important assumption: For the rest of this paper, let ,⊆^d and let be a compact set. Note that and do not need to have the same dimensionality in the following theorems. This only simplifies our notation. §.§ Sumformer We show two different proof ideas for the universal approximation by Sumformer. The second relies on a local approximation with a piecewise constant function. This approximation allows us to choose the inherent dimension d'=1. Hence, we are able to choose a very small attention matrix. However, due to the discontinuous structure, we need exponentially many feed-forward layers in the sequence lengths n and the token size d. This problem can be circumvented with an approximation with continuous ψ and ϕ using multisymmetric power sums from Definition <ref>. In this case, four feed-forward layers and one attention or summing layer are sufficient. However, the inherent dimension d' scales with n^d - for a fixed d - in this case. 
Therefore, the related attention matrices also scale with n^d. We investigate this trade-off further in Section <ref> with numerical experiments. For each function f∈ and for each >0 there exists a Sumformer 𝒮 such that sup_X∈^nf(X)-(X)_∞<. We aim to use Theorem <ref>. Therefore, for every i ∈ [d], we approximate coordinate i of f with an equivariant vector of polynomials p_i: ^n ↦^n with an accuracy of ϵ / d (as done in <cit.>). This is possible using a version of the Stone-Weierstrass theorem from <cit.>. Because p_i is equivariant we can use Theorem <ref> to represent p_i by a semi-invariant polynomial q_i:^n ↦, such that p_i([x_1, …, x_n])=[q_i(x_1, {x_2, …, x_n }), …, q_i(x_n, {x_1, …, x_n-1}). Now, we use Theorem <ref> and a representation similar to <cit.> to represent q_i using multisymmetric monomials and polynomials of multisymmetric power sums. For this, we define a function mapping to the power sums: Let ϕ:^d ↦^d' be the map to all d' multisymmetric monomials with order 0 < |α|≤ n. The sum in the Sumformer is then represented as Σ = ∑_i=1^n ϕ (x^(i)). We represent q_i by ψ_i(x^(j), Σ) = ∑_α∈ P (x^(j))^α·σ_α(Σ - ϕ(x^(j))) with P ⊆ℕ_0^d, |P| < ∞ and σ_α are polynomials. Finally, by setting ψ = [ψ_1, ..., ψ_d], we obtain a Sumformer 𝒮 with 𝒮(x) = [p_1(x), ..., p_d(x)] which therefore also fulfills the required goodness of fit. Instead of approximating the equivariant function f, we approximate the semi-invariant and uniformly continuous (since is compact) function g, which represents every component as described in Theorem <ref>. To be able to use Theorem <ref> with a countable input, we approximate g with a locally constant function g. The used grid is of size (1/δ)^nd for some δ>0, which depends on . The new function g is also semi-invariant. Now, we can assign every grid point p ∈ G a coordinate χ(p) = (a, b) ∈ [Δ]^d × [Δ]^(n - 1) × d where Δ = 1/δ. Furthermore, we can find a function λ: [Δ]^(n - 1) × d→ℕ with a finite range which yields the same output if and only if the input sequences are permutations of each other. In the next step, we can use Theorem <ref> to find ϕ^* and ψ^* so that λ(b) = ψ^*( ∑_i = 1^n-1ϕ^*(b_i) ). Let q: → [Δ]^d be the function mapping tokens to the corresponding cube-coordinate. Then by defining Σ=∑_i=1^nϕ^*(q(x)) and ψ(x_1,Σ) :=g(χ^-1(q(x_1), λ^-1(ψ^*(Σ-ϕ(x_1))))) we yield a Sumformer with the required goodness of fit. Note that even though λ^-1, in general, might not be invertible, we can find an inverse of a restriction of λ to a subset of the domain such that properties necessary for our proof are given. §.§ Transformer With the approximation result for Sumformers, we can now present two new proofs for the universality of Transformers. Before we give these proofs, we want to highlight the first universality theorem for Transformers from <cit.> and discuss the similiarities and differences. Let >0 and let 1≤ p<∞. Then, for any continuous, permutation-equivariant function f:^n× d→^n× d with compact support, there exists a Transformer Network such that (∫(X)-f(X)_p^p dX )^1/p≤. The first noticeable difference is the fact that <cit.> uses the L_p norm to measure the accuracy. In our setting, we aim to understand the worst-case behavior and therefore use the supremum norm. Furthermore, <cit.> also gives proofs for functions that are not equivariant by using positional encoding. Because the positional encoding is added only to the input and does not change any further points about the architecture, this can probably be applied also in our case. 
Beyond the difference in the theorem setup, we also have a very different proof strategy. The proof in <cit.> relies on the concept of contextual mappings. To implement these mappings, the Transformer needs ^-d many attention layers, where d is the token size and is the desired approximation accuracy. With our proof, we improve upon this result by showing that we only need one attention layer, which is used to represent the sum in the Sumformer. With this information, we can now state our theorem for the universal approximation by Transformers. For each function f∈ and for each >0 there exists a Transformer such that sup_X∈^nf(X)-(X)_∞<. First, note that the weights in the attention matrix can be set to zero; this way, we can get feed-forward networks only. In the continuous case, ϕ is also continuous and can therefore be approximated with a 2-layer network by <cit.>. For the discontinuous proof, we know from <cit.> that we need 𝒪 (n(1/)^nd/n!) many layers for the approximation. In the following steps, we approximate the sum with an attention head. This step is equal for the continuous and discontinuous settings. However, in the discontinuous case, we can set d'=1. This step is also the only step we need to investigate for the Linformer and Performer proof. We first use a feed-forward neural network to have as input the matrix: [ 1 x_1 ϕ(x_1) 0_d'; … ; 1 x_n ϕ(x_n) 0_d' ]∈^n × 1 + d+ 2d' Then, we choose W_Q=W_K=[e_1,0_(1+d+2d')× (1+d+2d')] with e_1=[1,0_d+2d']^⊤∈^1+d+2d' such that A= 1/n 1_n × n and W_V such that we get together with the skip connection: [ 1 x_1 ϕ(x_1) Σ; … ; 1 x_n ϕ(x_n) Σ ]∈^n × 1 + d+ 2d' We can then, in the continuous case, apply another two layers for the approximation of the continuous ψ, or we need another 𝒪(n (1/)^nd/n!) many feed-forward layers to approximate the ψ build in the discontinuous case. §.§.§ Network size Using Sumformer, we were able to give two different constructions for the Transformer as universal approximators. We note that the construction of the attention head remains the same except for the possible choice of d'. When we approximate ϕ and ψ with smooth functions, we need a larger latent dimension d'. In the discontinuous construction, we need more layers to approximate ϕ and ψ but can approximate the function of interest f using only d'=1. The same situation can be observed for the efficient Transformers as we only replace the attention heads but keep the functions ϕ and ψ from the proof of the Transformer. There might be another way of representing functions with Sumformers. However, the current proofs suggest a trade-off between the size of the latent dimension d' and the number of necessary layers. In Section <ref>, we test the dependence of the validation loss on the relationship of d' to the sequence length n and the token size d. §.§ Efficient Transformers are Universal Approximators Using the concept of Sumformer, we can show that Linformer and Performer are universal approximators for continuous functions on a compact support. We are able to utilize the proof for Transformers as the architecture is only changed in the attention head, which forms the main computational cost of Transformer. As the rest of the architecture stays the same, this part of the proof does not need to be adapted. We start with Linformer as introduced in Definition <ref>. For each function f∈ and for each >0 there exist k ∈𝒪(d/^2) and there exist matrices E,F∈^k× n and a Linformer _Lin such that sup_X∈^nf(X)-_Lin(X)_∞<. 
By Definition <ref>, Linformer _Lin have the same architecture as Transformer except for the attention head. Therefore, we can use the same construction for ψ and ϕ as in the proof of Theorem <ref>. It remains to show that we can represent the sum in the Sumformer with the linear attention head as well. We now discuss how the weight and projection matrices are chosen for the approximation. Let E=1/n1_k× n and F=1/k1_k× n, W_Q, W_K, W_V as in Equation (<ref>) and we get that the Linformer attention layers maps to ρ((XW_Q)(EXW_K)^T)·(FXW_V) =[0_n × 1+d+d', Σ] After applying the skip connection, we get the same output as in Equation (<ref>) in Theorem <ref>. Therefore, we can apply the same representation for ψ and get the desired approximation. Now, even though the structure and idea of Performer differ a lot from Linformer, we can use a similar strategy to show the universal approximation. Let k∈ with k<n. For each function f∈ and for each >0 there exists a Performer _Per such that sup_X∈^nf(X)-_Per(X)_∞<. As in the proof for the Linformer attention layer we use the fact that the Performer _Per only differs from a Transformer by the choice of the attention head. Therefore, we now build a Performer attention head which is able to approximate the sum for the Sumformer. We choose the same W_Q and W_K as in Equation (<ref>). Next, we fix the vectors w_1,…,w_k in a in the Performer Definition <ref>. Then, because all rows are the same and a is applied row-wise, a(XW_Q)a(XW_K)^⊤ = λ·1_n× n for some λ∈. In contrast, to the previous proof, we need to add another feed-forward layer after the attention layer. We choose the weight matrix to be W=1/λ nI_(1+d+2d') and the bias b=0_1+d+2d'. Then, we get an output of Wa(XW_Q)a(XW_K)^⊤(XW_V)+ b =[0_n × 1+d+d',Σ]^T. With the skip connection we get the desired input for ψ and are able to use the same approximation for ψ as in Theorem <ref>. § NUMERICAL EXPERIMENTS We implemented two different Sumformer architectures and tested them on approximating analytically given (i.e., non-real-world) functions. Both architectures consist of three components: one representing ϕ, one representing ψ, and the last combining the two as described in Definition <ref>. The function ψ is represented by a Multi-layer perceptron (MLP) in both architectures. The representation of ϕ differs: The first model uses the ϕ we constructed in the proof of Theorem <ref> (Polynomial Sumformer), whereas the second one uses an MLP again (MLP Sumformer). Each MLP we used consisted of five hidden layers of 50 nodes. We use the ReLU activation function. We trained our two models (using the same latent dimension d') on approximating multiple equivariant functions (assuming = [0, 1]^d): two polynomial-type and two non-polynomial-type functions. The results (Fig.<ref>) show that the previous results are not just theoretical: Sumformer architectures can approximate a variety of functions. It is interesting to note that the two Sumformers perform approximately equally well on most functions we approximated (polynomial & non-polynomial type). Based on this, we observe that the construction used in the continuous proof of Theorem <ref> is indeed able to learn our benchmark functions using gradient descent. Furthermore, we observe that the validation loss of the Polynomial Sumformer is smoother and decreases in a more stable way than that of the MLP Sumformer. 
In contrast, the validation loss of the MLP Sumformer often jumps to drastically lower levels over just a few epochs and is relatively flat apart from that. This phenomenon could be explained by the interaction of the two disjoint trainable components (MLPs). We also tested how changing the dimension d' (see Definition <ref>) in the MLP Sumformer impacts the best validation loss over a fixed number of epochs while holding n, d and the function to approximate constant. The results (Fig. <ref>) show - as expected - that higher dimensions d' generally lead to better approximation. Furthermore, when changing d linearly, we have to make non-linear - presumably exponential - changes to the size of d' to achieve significantly diminishing returns on further increasing d'. This finding is particularly interesting as the continuous proof of Theorem <ref> needs d' = [ n + d; d ] - 1 = (n + d)!/d! n! - 1 = ∏_i = 1^d n + i/i - 1 in ϕ for a fixed n. This suggests that the empirical performance aligns with the theory. § CONCLUSION We have seen that the efficient Transformers, Linformer, and Performer, are able to represent all equivariant continuous sequence-to-sequence functions on compact sets arbitrarily well. Due to the simplicity of the Sumformer architecture on which the proofs are based, it seems likely that further research can use similar techniques to show that other Transformer architectures and state space models are also universal approximators. In addition, we offered a new proof for universal approximation by Transformer and were able to reduce the necessary number of non-zero attention layers to only one. In our experiments, we showed that the construction from our continuous proof of universal approximation by Sumformer is tractable and indeed able to approximate given functions using gradient descent. Furthermore, our numerical results about the impact of the latent dimension d' of a Sumformer in relation to the token size d nicely relate to the required size of the latent dimension in our continuous proof. Lastly, we note that a significant limitation of our continuous proof is that (for a fixed token size d) the size of the attention matrix scales with n^d. In other words: Although for a fixed model dimension d' the computational cost scales linearly in n, for achieving universal approximation the required dimension d' grows polynomially in n and correspondingly the overall computational cost. In the discontinuous setting, we were able to keep the latent dimension small but had to scale the number of feed-forward layers accordingly. It would be interesting to improve on this result and analyze the trade-off further in future research. § ACKNOWLEDGEMENTS LT and GK acknowledge support from the German Research Foundation in the frame of the priority programme SPP 2298. SA appreciates the support by the Stanford Graduate Fellowship. GK is also grateful for partial support by the Konrad Zuse School of Excellence in Reliable AI (DAAD), the Munich Center for Machine Learning (BMBF) as well as the German Research Foundation under Grants KU 1446/31-1 and KU 1446/32-1 and under Grant DFG-SFB/TR 109, Project C09 and the Federal Ministry of Education and Research under Grant MaGriDo. icml2023 § PROOFS OF THE UNIVERSAL APPROXIMATION RESULTS FOR SUMFORMER In this section we give the details of the continuous and discontinous proofs of Theorem <ref>. By Lemma <ref>, there exists a semi-invariant function g:^n→ such that f(X)=[g(x_1,{x_2,…,x_n}), …, g(x_n,{x_1, …, x_n-1})]. 
Since f is continuous, the component functions f_1,…,f_n are also continuous and thus also g. The compactness of 𝒳 implies that 𝒳^n is compact and therefore g is uniformly continuous. Without loss of generality, let the compact support of g be contained in [0,1]^n× d. Then, we define a piece-wise constant function g by g(X)=∑_p∈𝒢g(p)1{X∈ C_p}, where the grid 𝒢:={0,δ,…,1-δ}^n× d for some δ:=1/Δ with Δ∈ consists of cubes C_p=∏_i=1^n∏_k=1^d[p_i,k,p_i,k+δ) with corresponding values g(p)∈ for each p∈𝒢. Because g is uniformly continuous, there exists for each >0 a δ>0 such that sup_X∈^ng(X)-g(X)_∞<. We next show that g is semi-invariant. Since g is semi-invariant, we have g([x_1,x_π(2),…,x_π(n)])=g([x_1,x_2,…,x_n]) for any permutation π:[n]∖{1}→ [n]∖{1}. With p=[p_1,…,p_n], we can write π(p)=[p_1,p_π(2),…,p_π(n)] and get g(p)=g(π(p)). Moreover, we get X∈ C_p⇔π(X)∈ C_π(p). Hence, for any X∈ C_p, we get g(X)=g(p)=g(π(p))=g(π(X)). Now, we want to represent g using an appropriate Sumformer. While it is trivial to match each X to its corresponding p such that X∈ C_p, it is more difficult to find the corresponding cube of X when only being able to use x_1 and the aggregated Σ. To achieve this, we will use the following strategy: Recall that Δ∈ is the number of cubes in each dimension. We can assign each grid point p∈𝒢 a coordinate χ(p)=(a,b)∈[Δ]^d×[Δ]^(n-1)× d. The map χ:𝒢→ [Δ]^d×[Δ]^(n-1)× d is bijective and the first part of the coordinate a∈[Δ]^d can be constructed from x_1 by quantizing it in each dimension. Let q:→[Δ]^d be this quantization function such that q(x_1)=a. Let us now find a way to choose ϕ and ψ such that we can reconstruct b from Σ. We can treat b as a sequence of length n-1 and write b=[b_1,…,b_n-1] with b_i∈[Δ]^d. Since there are finitely many b∈[Δ]^(n-1)× d, we can enumerate all b using a function λ:[Δ]^(n-1)× d→. Moreover, let us choose λ to be invariant to permutations of [b_1,…,b_n-1], i.e. for all permutations π:[n-1]→ [n-1] we have λ([b_1,…,b_n-1])=λ([b_π(1),…,b_π(n-1)]), but we let λ always assign different values to b_1,b_2 if they are not a permutation of each other. Although this prevents λ from being injective, all cubes with the same value under λ have the same value under g, due to semi-invariance, i.e. for a fixed a∈[Δ]^d and for all n in the range of λ the inverse is well defined and we can evaluate g(χ^-1(a,λ^-1(n))). Now, λ is an invariant sequence-to-point function and since [Δ]^d is countable, we can utilize Theorem <ref> (note that we use multisets of a fixed size here, to which the proof in <cit.> can be easily extended) to find ϕ^*:[Δ]^d→ and ψ^*:→ such that λ(b)=ψ^*(∑_i=1^n-1ϕ^*(b_i)). With the quantization function q we set ϕ(x):=ϕ^*(q(x)) and define Σ=∑_i=1^nϕ^*(q(x_i)). We can then recover λ(b) by λ(b)=ψ^*(Σ-ϕ(x_1)). Now, we can define ψ such that the related 𝒮 is equal to g: ψ(x_1,Σ):=g(χ^-1(q(x_1), λ^-1(ψ^*(Σ-ϕ(x_1))))). Since we chose g to uniformly approximate g and thereby each component of f up to the given error, this implies that the resulting Sumformer uniformly approximates f up to the same error. As before we have that the compactness of 𝒳 implies that 𝒳^n is compact and without loss of generality, we can assume that the compact support of f is contained in [0, 1]^n × d. Now, for every i ∈ [d], we approximate coordinate i of f with an equivariant vector of polynomials p_i: ^n ↦^n with an accuracy of ϵ / d (as done in <cit.>). This is possible using a version of the Stone-Weierstrass theorem from <cit.>.
Because p_i is equivariant, we can use Theorem <ref> to represent p_i by a semi-invariant polynomial q_i:^n ↦, such that p_i([x_1, …, x_n])=[q_i(x_1, {x_2, …, x_n }), …, q_i(x_n, {x_1, …, x_n-1})]. Now, we use Theorem <ref> and a representation similar to <cit.> to represent q_i using multisymmetric monomials and polynomials of multisymmetric power sums. For this, we define a function mapping to the power sums: Let ϕ: [0, 1]^d →ℝ^d', x ↦[ x_1^1 x_2^0⋯ x_d^0; x_1^2 x_2^0⋯ x_d^0; ⋮; x_1^α_1 x_2^α_2⋯ x_d^α_d; ⋮; x_1^0 x_2^0⋯ x_d^n; ] where α = (α_1, ..., α_d) runs over all multidegrees with order 0 < |α| ≤ n. The sum in the Sumformer is then represented as Σ = ∑_i=1^n ϕ (x^(i)). By Theorem <ref> the function s_j(x^(i ≠ j)) = σ( ∑_i ≠ jϕ(x^(i)) ) with σ being a polynomial function can fit any multisymmetric polynomial in the variables x^(i ≠ j) := {x^(1), ..., x^(j - 1), x^(j + 1), ..., x^(n)} perfectly. We can therefore represent q_i by ψ_i(x^(j), Σ) = ∑_α∈ P (x^(j))^α·σ_α(Σ - ϕ(x^(j))) where P ⊆ℕ_0^d with |P| < ∞ and the σ_α are polynomials. By setting ψ = [ψ_1, ..., ψ_d], we obtain a Sumformer 𝒮 with 𝒮(x) = [p_1(x), ..., p_d(x)] which is able to approximate f sufficiently well. § PROOFS OF THE UNIVERSAL APPROXIMATION RESULTS FOR TRANSFORMER Now we give the detailed proof of the universality of Transformers from Theorem <ref>. We use the triangle inequality to divide the approximation into two steps. We first approximate f by a Sumformer and then show that the Sumformer can be approximated by a Transformer , i.e. sup_X ∈^nf(X)-(X)_∞≤sup_X ∈^nf(X)-(X)_∞ + sup_X ∈^n(X)-(X)_∞ For the first summand we have from Theorem <ref> that there is a Sumformer which approximates f to an accuracy of /2. The Sumformer has the inherent latent dimension d'. We now turn to the second summand and construct a Transformer that is able to approximate the Sumformer to /2 accuracy. Transformers are constructed as described in Definition <ref>. Because of the residual structure X + FF(X + Attn(X)) of each Transformer block, we can set the attention in the first layers to zero. Thereby, we obtain feed-forward layers without attention. The Transformer is then constructed as follows. We have the input X =[x_1, …, x_n]^⊤∈^n with x_i ∈^1 × d and map it with a feed-forward layer from the right to [ x_1, x_1; ⋯; x_n, x_n ]∈^n × 2d. We can then find a two-layer feed-forward network such that it acts as the identity on the first n components and approximates the function ϕ. The approximation of ϕ with two feed-forward layers is possible because of the universal approximation theorem <cit.>. In the discontinuous setting we need more layers to approximate ϕ. Therefore, after three feed-forward layers we get [ x_1, ϕ(x_1); ⋯; x_n, ϕ(x_n) ]∈^n × (d+d'). Before we get to the attention layer, we add one more layer from the right, mapping ^d+d'→^1+d+2d', with W=[ 0_d× 1 I_d 0_d× d' 0_d× d'; 0_d'× 1 0_d'× d I_d' 0_d'× d' ]∈^(d+d')×(1+d+2d') and b=[1_n,0_n×(d+2d')]. Using these transformations, we get as output after the first step: X_1 = [ 1 x_1 ϕ(x_1) 0_d'; ⋯ ; 1 x_n ϕ(x_n) 0_d' ]∈^n × 1+d+2d'. Note that these steps are the same for the efficient Transformers. Now, we turn to the attention head to represent the sum Σ = ∑_i=1^n ϕ(x_i) ∈^d'. First, we choose W_Q=W_K=[e_1,0_(1+d+2d')× (1+d+2d')]∈^(1+d+2d')×(1+d+2d') for e_1=[1,0_d+2d']^⊤∈^1+d+2d', such that A = ρ((X_1W_Q)(X_1W_K)^⊤) = 1/n1_n × n. The matrix A will then be multiplied with X_1W_V. We can choose W_V=[[ 0_(1+d)×(1+d+d') 0_(1+d)× d'; 0_d'×(1+d+d') n· I_d'; 0_d'×(1+d+d') 0_d'× d' ]]∈^(1+d+2d')× (1+d+2d').
The output of this attention layer is [0_1+d+d', Σ]^⊤. Then, we apply a residual connection and obtain [1,x_i,ϕ(x_i), Σ]^⊤. Last, we implement ψ. For the discontinuous case, we first compute q(x_i). Then, we map a finite set of values to another finite set of values, for which we can use Lemma 7 in <cit.>. Hence, we need to add another O(n(1/)^dn/n!) feed-forward layers for the approximation of ψ. In the continuous case this can be avoided: because of the continuity of ψ, we can approximate it via the universal approximation theorem <cit.> with two feed-forward layers. § DEEP SETS Sumformers are related to the concept of deep sets introduced in <cit.>. For the discrete proof we use Theorem <ref>. However, there is also a version for uncountable inputs which we highlight here: Assume the elements are from a compact set in ^d, i.e. possibly uncountable, and the set size is fixed to M. Then any continuous function operating on a set X, i.e. f : ^d × M→ which is permutation invariant to the elements in X can be approximated arbitrarily closely in the form of ψ(∑_x ∈ Xϕ(x)), for suitable transformations ϕ and ψ. The fundamental difference between the previous theorem and our work is that we consider equivariant, continuous sequence-to-sequence functions. This difference is the reason why we need a second argument in ψ.
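To make the contrast concrete in code, the following is a minimal sketch of the invariant deep-sets form ψ(∑_x ∈ Xϕ(x)). It is a sketch only, assuming PyTorch, with illustrative layer widths and names; unlike the Sumformer sketched in the experiments section, ψ here receives only the aggregated sum, so the model returns a single permutation-invariant output for the whole set rather than one output per element.

```python
import torch
import torch.nn as nn

class DeepSet(nn.Module):
    """Invariant model psi(sum_x phi(x)): one output per set, not per element."""
    def __init__(self, d, d_latent, d_out, hidden=50):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d, hidden), nn.ReLU(), nn.Linear(hidden, d_latent))
        self.psi = nn.Sequential(nn.Linear(d_latent, hidden), nn.ReLU(), nn.Linear(hidden, d_out))

    def forward(self, X):                            # X: (M, d), a set of M elements
        return self.psi(self.phi(X).sum(dim=0))      # permutation invariant, shape (d_out,)

# The equivariant Sumformer instead evaluates psi(x_i, Sigma) for every element x_i,
# so permuting the input permutes the output in the same way.
```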
http://arxiv.org/abs/2307.02785v1
20230706054232
Effects of Hoyle state de-excitation on $νp$-process nucleosynthesis and Galactic chemical evolution
[ "Hirokazu Sasaki", "Yuta Yamazaki", "Toshitaka Kajino", "Grant J. Mathews" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.GA", "nucl-ex", "nucl-th" ]
Division of Science, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Division of Science, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan Division of Science, National Astronomical Observatory of Japan, 2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-0033, Japan School of Physics, and International Research Center for Big-Bang Cosmology and Element Genesis, Beihang University, Beijing 100183, China Peng Huanwu Collaborative Center for Research and Education, Beihang University, Beijing 100183, China Department of Physics and Astronomy, Center for Astrophysics, University of Notre Dame, Notre Dame, IN 46556, USA The particle-induced hadronic de-excitation of the Hoyle state in ^12C through inelastic scattering in a hot and dense plasma can enhance the triple-alpha reaction rate. This prevents the production of heavy nuclei within the neutrino-driven winds of core-collapse supernovae and raises a question as to the contribution of proton-rich neutrino-driven winds as the origin of p–nuclei in the solar system abundances. Here we study ν p-process nucleosynthesis in proton-rich neutrino-driven winds relevant to the production of ^92,94Mo and ^96,98Ru by considering such particle-induced de-excitation. We show that the enhancement of the triple-alpha reaction rate induced by neutron inelastic scattering hardly affects the ν p-process, while the proton scattering contributes to the nucleosynthesis in proton-rich neutrino-driven winds at low temperature. The associated enhanced triple-alpha reaction rate decreases the production of ^92,94Mo and ^96,98Ru in a wind model of ordinary core-collapse supernovae. On the other hand, the abundances of these p–nuclei increase in an energetic hypernova wind model. Hence, we calculate the Galactic chemical evolution (GCE) of ^92,94Mo and ^96,98Ru by taking account of both contributions from core-collapse supernovae and hypernovae. We show that the hypernova ν p-process can enhance the calculated solar isotopic fractions of ^92,94Mo and ^96,98Ru and make a significant impact on the GCE of p–nuclei regardless of the particle-induced Hoyle state de-excitation. § INTRODUCTION The excited 0^+ state of ^12C at 7.65 MeV (the so-called “Hoyle state”) can resonantly enhance the reaction rate of the triple-alpha (3α) process essential for nucleosynthesis inside stars <cit.>. In a hot and dense plasma, inelastic scatterings of background particles can induce the hadronic de-excitation of the Hoyle state to the ground state or to the excited 2^+ state at 4.44 MeV. This enhances the 3α reaction rate <cit.>. In this context, the enhancement was calculated based on the statistical Hauser-Feshbach model <cit.>, and the contribution of neutrons was recently measured in a neutron inelastic scattering experiment <cit.>. The 3α reaction is crucial for the nucleosynthesis of heavy elements inside explosive astrophysical sites such as core-collapse supernovae (CCSNe) and neutron star mergers. In particular, the ν p–process nucleosynthesis <cit.>, which can occur within the proton-rich neutrino-driven winds of core-collapse supernovae, is sensitive to the uncertainty of the 3α reaction rate <cit.>. The ν p–process is induced by the absorption of electron antineutrinos on free protons, p(ν̅_e,e^+)n <cit.>.
The free neutrons produced via the neutrino absorption allow for the production of heavier elements beyond the waiting point nucleus ^64Ge and other bottleneck nuclei through (n,p) reactions instead of slower β^+ decays. The ν p–process is also affected by various nuclear reaction rates. Among them, the ^59Cu(p,α)^56Ni and ^7Be(α,γ)^11C reactions were recently measured in nuclear experiments <cit.>. The ν p–process can produce large numbers of p-nuclei that cannot be synthesized through either the slow (s-) or rapid (r-) neutron capture processes. Most p-nuclei can be produced in the γ-process <cit.> induced by successive photodisintegration reactions on heavier isotopes. However, calculations of the galactic chemical evolution (GCE) of abundant p-isotopes such as ^92,94Mo and ^96,98Ru in models that only include the γ-process in the outer layers of both thermonuclear supernovae (SNe Ia) and ordinary core-collapse supernovae (SNe II) drastically underestimate the solar isotopic fractions <cit.>. Moreover, the molybdenum isotopic anomalies in meteorites indicate that the p-isotopes ^92,94Mo and the r-isotope ^100Mo are synthesized in the same star but by different processes <cit.>. A ν p–process in core-collapse supernovae where the r-process also occurs could meet such a requirement and be a candidate site for the production of these abundant p-isotopes ^92,94Mo and ^96,98Ru. In particular, a strong ν p–process is possible in the proton-rich neutrino-driven winds of very energetic hypernovae (HNe) <cit.>. Such a HN ν p–process significantly increases the elemental abundances of Mo and Ru at low metallicity [Fe/H]<-2 <cit.>. Recently, <cit.> reported that the enhanced 3α reaction induced by the hadronic de-excitation of the Hoyle state should increase the seed nuclei for the production of heavy elements and suppress the ν p–process. This raises a question as to the impact of the ν p-process on the solar abundances of ^92,94Mo and ^96,98Ru. This seems to be true for a neutrino-driven wind model in SNe II with small entropy per baryon and a large expansion timescale. However, the contribution to the ν p–process of such particle-induced Hoyle state de-excitation in HNe with a massive proto-neutron star (PNS) and large neutrino luminosities is still uncertain <cit.>. Neutrino-driven winds in such HNe have larger entropy and a shorter expansion timescale than the SN II wind model. The ν p–process in SNe II hardly affects the GCE due to the relatively small production yield of p-nuclei, while the HN ν p–process can dominantly contribute <cit.>. Observational quantities such as the solar isotopic fractions and elemental abundances can be affected by the Hoyle state effect in the HN ν p–process. For the present work, we calculate the enhanced 3α reaction in the ν p–process by using both SN II and HN wind models. Then, we carry out the GCE calculation with the calculated nuclear yields to demonstrate how the particle-induced Hoyle state de-excitation in the HN ν p–process affects the GCE of Mo and Ru. § METHODS §.§ Models of neutrino-driven winds in supernovae and hypernovae We need hydrodynamic quantities and neutrino fluxes to calculate the ν p–process nucleosynthesis within neutrino-driven winds. We calculate the temperature and baryon density profiles of the neutrino-driven wind based upon a model for general-relativistic steady-state, spherically symmetric trajectories <cit.>. The radius of the PNS is taken to be R_PNS=15 km.
The baryon density near the PNS radius is taken to be ρ_0=10^11g/cm^3, and the temperature at the PNS radius is determined by the condition q̇=0, where q̇ is the net heating rate from neutrino interactions <cit.>. We assume that the neutrino luminosity L_ν is independent of neutrino species, and that these neutrinos obey Fermi-Dirac distributions on the surface of the PNS, with the neutrino mean energies fixed to E_ν_e= 13.1 MeV, E_ν̅_e= 15.7 MeV, and E_ν_X= 16.3 MeV. With these neutrino parameters, we have calculated the rates of neutrino-induced reactions and the electron fraction inside the neutrino-driven winds. The calculated electron fraction is used to determine mass fractions of initial neutrons and protons for the subsequent nuclear network calculation. To study the effect of the particle-induced Hoyle state de-excitation, we prepare wind trajectories of both ordinary SN II and energetic HN models with different values of the PNS mass M_PNS and L_ν as shown in Table <ref>. These parameters for the SN II model are typical values for the late stages of the explosion (e.g. see <cit.>). We assume that a collapse of a rapidly rotating massive star is associated with the energetic explosion mechanism of HNe. For the HN wind model, we consider a proton-rich neutrino-driven wind blown off from the massive PNS toward the polar region before the black hole formation as in <cit.> by employing a large PNS mass (M_PNS=3M_⊙) and a large neutrino luminosity (L_ν=10^53erg/s) as seen in neutrino radiation hydrodynamic simulations <cit.>. §.§ Enhanced 3α reaction rate due to Hoyle state de-excitation We execute the nuclear network calculation following the numerical setup of <cit.>. We use the LIBNUCNET reaction network engine <cit.> with nuclear reaction rates from the JINA Reaclib database <cit.>. We have included reaction rates for neutrino absorption and e^± capture. However, we ignore the contribution from neutrino oscillations <cit.>. The enhanced 3α reaction rate owing to the induced Hoyle state de-excitation is given by <cit.>, λ_3α = λ_3α^(0){ 1+Y_nρ_6f_n(T_9)+Y_pρ_6f_p(T_9) }, f_n(T_9) = 75.1e^-T_9+88.7, f_p(T_9) = 0.03680-1.667T_9+2.350T_9^2-0.2911T_9^3+0.01160T_9^4, where λ_3α^(0) is the 3α reaction rate without the enhancement <cit.>, ρ_6 is the baryon density in units of 10^6 g cm^-3, and T_9 is the temperature in units of 10^9 K. The quantities Y_n and Y_p are the number abundance fractions of free neutrons and protons, respectively. The second and third terms on the right-hand side of Eq. (<ref>) are the contributions from neutron and proton inelastic scattering, and the values of f_n and f_p are determined by fitting to the statistical Hauser-Feshbach calculation of <cit.>. This enhanced 3α reaction is utilized for the network calculations in the present SN and HN nucleosynthesis models. §.§ Model for Galactic chemical evolution We adopt the GCE model of <cit.> which reproduces reasonably well the chemical evolution of the light elements from hydrogen to zinc as well as the model of <cit.>. The adopted model <cit.> has already been successfully applied to the GCE of the intermediate-to-heavy mass nuclei including the r–process contributions from magneto-hydrodynamic jet SNe, collapsars and binary neutron star mergers as well as the neutrino-driven wind in core collapse SNe <cit.> and the ν p–process contributions from Type II SNe and Hypernovae <cit.>. The latter study <cit.> includes not only the ν p–process, but the γ– (p–), s–, and r–processes in addition to the ν p–process. 
In the present calculations, we follow the same numerical setup except for the input data of the HN ν p–process as discussed in the previous section. We demonstrate the impact of the ν p–process on the GCE calculation with the enhanced 3α reaction rate in Eq. (<ref>). We focus on the GCE of ^92,94Mo and ^96,98Ru whose total solar isotopic fractions are as high as 24.1% and 7.4%, respectively. A GCE calculation that only includes the γ–process underestimates such large solar abundances and the ν p–process potentially resolves this underestimation problem. The HN ν p–process can be a main contributor to the GCE of ^92,94Mo and ^96,98Ru <cit.> so that the effect of the particle-induced Hoyle state de-excitation on GCE could be well demonstrated by considering the HN ν p–process. We consider the γ–process in both SNe Ia and II and the ν p–process in both SNe II and HNe as the astrophysical sources of p-nuclei. We use the HN wind model in Table <ref> as the fiducial proton-rich neutrino-driven winds in HNe with Eq. (<ref>). In the HN ν p–process, the yield of the nucleus i is estimated by X_iṀτ_NS where X_i and Ṁ are a mass fraction of i and the mass ejection rate inside the HN wind, and τ_NS=1 s is a typical lifetime of the massive PNS <cit.>. The progenitor mass for the HN model is set to 100 M_⊙ as in <cit.>. § RESULTS AND DISCUSSIONS §.§ Hydrodynamic and neutrino properties Table <ref> shows the hydrodynamic quantities characterizing properties of the wind models such as the expansion timescale τ_dyn, the entropy per baryon S, the initial electron fraction for the network calculation Y_e^(0), and the mass ejection rate Ṁ. The values of τ_dyn and S are calculated at a high temperature before the production of heavy elements as in <cit.>. The entropy for the HN wind model is higher than that of the SN II wind model due to the massive PNS mass. The small expansion timescale in the HN model originates from the large neutrino luminosity. Such properties of neutrino-driven winds are consistent with the results of <cit.>. Y_e^(0) corresponds to the electron fraction at the beginning of the nuclear network calculation (T_9=10). The value of Y_e^(0) is larger than 0.5 for the proton-rich neutrino-driven winds, and almost the same in both wind models because we use the same neutrino energies for the neutrino distributions. Y_e^(0) becomes larger when the difference between the mean neutrino energies, E_ν̅_e-E_ν_e is small. Ṁ is determined from a supersonic wind solution, and the value increases with the neutrino luminosity. In an ordinary SN explosion, the entropy of the wind increases with the decrease of neutrino luminosity in the later explosion phase (t>1 s). Although a high entropy is favorable for the production of heavy elements, their total yields in the later wind trajectories are not so large due to the small Ṁ <cit.>. The massive PNS and the large neutrino luminosity in the HN model simultaneously enable both a high entropy and a large mass ejection rate. §.§ ν p–process nucleosynthesis Figure <ref>(a) shows the 3α reaction rates used for the ν p–process calculations in the SN II wind model. The solid and dashed lines are respectively the rates with and without the particle-induced Hoyle state de-excitation of neutron and proton scatterings. To see the contribution of the neutron-induced enhancement, we obtained the dotted line by setting f_p=0 in Eq. (<ref>). 
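For reference, the enhancement factor multiplying λ_3α^(0) follows directly from the fits f_n and f_p quoted above; a minimal sketch of its evaluation is given below (assuming NumPy; the temperature, density, and nucleon abundances are illustrative placeholders rather than values from the wind trajectories used in this work). Setting the proton term to zero reproduces the neutron-only case shown by the dotted line.

```python
import numpy as np

def f_n(T9):
    """Neutron-scattering fit: f_n(T9) = 75.1*exp(-T9) + 88.7."""
    return 75.1 * np.exp(-T9) + 88.7

def f_p(T9):
    """Proton-scattering fit (quartic polynomial in T9)."""
    return 0.03680 - 1.667*T9 + 2.350*T9**2 - 0.2911*T9**3 + 0.01160*T9**4

def enhancement(T9, rho6, Yn, Yp, include_protons=True):
    """Factor multiplying the standard 3-alpha rate, lambda_3a = lambda_3a0 * factor."""
    factor = 1.0 + Yn * rho6 * f_n(T9)
    if include_protons:
        factor += Yp * rho6 * f_p(T9)
    return factor

# Illustrative numbers only: a hot, dense point where free nucleons are still abundant.
print(enhancement(T9=9.0, rho6=1.0, Yn=0.3, Yp=0.5))                          # full enhancement
print(enhancement(T9=9.0, rho6=1.0, Yn=0.3, Yp=0.5, include_protons=False))   # f_p = 0 case
```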
The 3α reaction rate is enhanced by more than a factor of 100 at higher temperatures (T_9>9) due to the large baryon density ρ_6 and the large abundance of free nucleons (Fig. <ref>(b)). The dashed and dotted lines are almost identical at T_9<6 and the contribution from the neutron scattering becomes negligible. This is because the mass fraction of free neutrons significantly decreases with the production of seed nuclei around ^56Ni through the α-capture reactions as shown in Fig. <ref>(b). Then, the enhancement of the 3α reaction is mainly caused by proton scattering in the temperature range 2<T_9<6 due to the freeze out of the protons as in Fig. <ref>(b). Finally, as the baryon density decreases, the enhancement is negligible in the low-temperature region, T_9<2. Figure <ref> shows the enhanced 3α reaction rates and the evolution of mass fractions in the HN wind model. The results are similar to the case of the SN II wind model. The proton scattering only contributes to the enhancement in Eq. (<ref>) for T_9<4. This is due to the freeze out of free protons and decreasing free neutrons. The large entropy per baryon of the HN wind model results in a small amount of seed nuclei such as ^56Ni and a large production of heavy elements through the ν p–process. Figure <ref>(a) shows the effect of the particle-induced Hoyle state de-excitation on the nuclear mass fractions of final abundances for various nuclei in the SN II wind model. The solid and dashed lines show the results with and without the Hoyle state effect, respectively, and the difference between them is prominent for nuclei with A>60. This is because the enhanced 3α reaction suppresses the production of heavy elements through the ν p–process. Figure <ref>(b) shows the mass fractions of ^92,94Mo and ^96,98Ru in the SN II wind model with and without the enhancement of Eq. (<ref>). The results with the enhanced 3α reaction rate (solid line) are smaller than those without it (dashed line) by about a factor of 100. Such significant suppression of the ν p–process in the SN II wind model having S∼ 50 k_Bnuc^-1, τ_dyn∼ 10 ms is consistent with the results of <cit.>. The dotted line on Fig. <ref>(a) shows the calculated mass fraction obtained with the enhanced 3α reaction rate ignoring the third term on the right-hand side of Eq. (<ref>) which is the contribution from proton-induced de-excitation. The dotted and dashed lines almost completely overlap. This indicates the negligible impact of neutron scattering on the ν p–process. As shown by the dotted line on Fig. <ref>, neutron scattering induces an enhancement of the 3α reaction rate at T_9>6. However, the 3α reaction hardly affects the nuclear abundances in such a high-temperature region. The enhancement of the 3α reaction only contributes to the nuclear network calculation at T_9<6 where the free neutrons are consumed by the synthesis of seed nuclei (e.g. ^56Ni) as in Fig. <ref>(b). We note that the contribution from neutron scattering should be negligible even if we use the experimentally determined value of f_n from <cit.>, which turns out to be much smaller than that of Eq. (<ref>). The calculated mass fractions in the HN wind model are shown in Fig. <ref>. The contribution from neutron scattering is negligible as in the case of the SN II wind model. Hence, there is no difference between the dashed and dotted lines on Fig. <ref>(a). The ν p–process in the HN wind model proceeds up to heavier elements than that of the SN II wind model because of the shorter τ_dyn and higher S. 
The enhanced 3α reaction rate decreases the production of heavy elements in the higher mass region (A>140) by 10–60 %. Also, the suppression is less significant than that of the SN II wind model due to the high entropy of the HN wind model. The value of ρ_6 in Eq. (<ref>) becomes small at a nearly fixed temperature of the α-particle recombination T_9∼4 in the high entropy wind. Thus, the enhancement is small for the high entropy case <cit.>. The enhanced 3α reaction rate decreases the ratio of free neutrons to the seed nuclei Δ_n from the p(ν̅_e,e^+)n reaction <cit.>. Such a decrease of Δ_n shifts the endpoint of the ν p–process to lower masses and increases the mass fractions in the A=60-110 range. In particular, the mass fractions of ^92,94Mo and ^96,98Ru increase by 20–30% as shown in Fig. <ref>(b). The enhanced 3α reaction rate suppresses the production of heavy elements around the ν p–process endpoint in both HN and SN II wind models. However, the HN wind model produces sufficient heavy elements beyond A>100, while the SN II model does not. Therefore, the role of the particle-induced Hoyle state de-excitation in ^92,94Mo and ^96,98Ru depends upon the wind models. The mass fractions of ^92,94Mo and ^96,98Ru in Fig. <ref>(b) are too small to affect the GCE even without the suppression of the ν p–process. Hence, we will focus on neutrino-driven winds in HNe producing a large number of heavy elements, as shown in Fig. <ref>(b), and we will demonstrate the effects of Hoyle state de-excitation on the solar abundances and the GCE of ^92,94Mo and ^96,98Ru in the next sections. §.§ Comparison to Solar abundances Figure. <ref>(a) shows the calculated solar abundances of p-nuclei in the A=84–102 range. These are compared with observational data <cit.>. All calculated results include the contributions from the ν p–process in SNe II and the γ–process in both SNe Ia and II. The result without taking into account of the HN ν p–process (upward-pointing triangles) underestimates the solar abundances of p–nuclei (square points). However, as shown by the circles and downward-pointing triangles, the HN ν p–process can significantly contribute to the GCE of p–nuclei and increase the calculated solar abundances of ^92,94Mo and ^96,98Ru. The circles (Hoyle ON) and down-pointing triangles (Hoyle OFF) in Fig. <ref>(a) show the results including the HN ν p–process with and without the effects of particle-induced Hoyle state de-excitation, respectively. For Hoyle ON and Hoyle OFF, we use the results of the solid and dashed lines in Fig. <ref>, respectively. The HNe ν p–process can increase the solar abundances of ^92,94Mo and ^96,98Ru irrespective of the Hoyle state effect. This is consistent with the result of <cit.> although <cit.> suggests a small contribution from the ν p-process. The ratios of Hoyle OFF/Hoyle ON are shown in Fig. <ref>(b). Here, the abundances of p–nuclei are enhanced by up to about 30%. Although the suppression of ^92,94Mo and ^96,98Ru is due to the enhanced 3α reaction rate as reported by <cit.>, this occurs only in typical neutrino-driven winds of SNe II. On the other hand, such suppression is not necessarily found in the HN neutrino-driven wind based on the energetic supernova explosion model with a massive PNS and large neutrino luminosity as given in <cit.>. The Hoyle ON points (circles) on Fig. <ref>(a) show the overestimation of ^98Ru due to the large mass fraction of ^98Ru in Fig. <ref>. 
However, our calculation uses only one HN wind trajectory to estimate the abundances for the HN ν p-process, ignoring the time dependence of the neutrino-driven wind and simply multiplying by τ_NS=1 s to obtain total integrated yields. Hence, the overestimation might be resolved by integrating various neutrino-driven winds with smaller PNS masses (M_PNS<3M_⊙) over time until the black hole forms. §.§ Galactic chemical evolution of Mo and Ru isotopes Figure <ref> shows the calculated elemental abundances of Mo normalized to the solar system values at [Fe/H]=0 and the observational data taken from the SAGA database <cit.>. The thick solid line in Fig. <ref>(a) shows the result of Hoyle ON and the thick dashed line shows the result without the contribution from the HN ν p–process. These two lines indicate the significant enhancement of the elemental abundances. Moreover, the HN ν p–process improves the agreement of the GCE calculation with the observational data. Such an impact of the HN ν p–process was also found in <cit.>. Figure <ref>(b) shows the impact of the particle-induced Hoyle state de-excitation on the elemental abundance in the shaded region of the top panel. The Hoyle state effect slightly increases the total elemental abundance of Mo with the HN ν p–process. The thin solid, dash-dotted, and dotted lines on Fig. <ref>(a) are partial contributions of the (γ+ν p)–processes, the s–process, and the r–process, respectively. At low metallicity [Fe/H]<-2, the γ–process is negligible and the HN ν p-process dominantly contributes to the total elemental abundance irrespective of the Hoyle state effect. The results for the Ru elemental abundances are shown in Fig. <ref>. The particle-induced Hoyle state de-excitation leaves these results almost unchanged. For Ru, both the enhancement of the elemental abundance and the contribution from the HN ν p–process are less prominent than for Mo. This is because the solar isotopic fractions of ^96,98Ru are small (7.4%) compared with those of ^92,94Mo (24.1%). Also, the r–process is the main contributor <cit.> to the Ru elemental abundance. To increase the calculated Ru elemental abundance at low metallicity, an increased contribution from the r–process <cit.> may be needed rather than the ν p–process. This is because the overestimation of ^98Ru in Fig. <ref>(a) would not be improved if the yield of the ν p–process were increased. Finally, we note several theoretical uncertainties in our calculation. First, we estimated the yield of the HN ν p–process with only one set of neutrino-driven wind trajectories. However, the trajectories should also involve different explosion timescales and different progenitors leading to different massive PNSs. Also, nucleosynthesis calculations employing matter profiles obtained in neutrino radiation hydrodynamic simulations (e.g. <cit.>) may provide more reliable yields of the p–nuclei. Another point is that, for simplicity, we have ignored the contribution of neutrino oscillations <cit.> to the ν p–process. In particular, fast flavor conversions <cit.>, which have been actively studied in recent years, can enhance the yields of p–nuclei <cit.>. Additionally, we have ignored the enhancement of the 3α reaction induced by α-particle inelastic scattering in the reaction network. Such reactions may have a non-negligible impact after the α-rich freeze-out at low temperature as in Figs. <ref>(b) and <ref>(b).
Also, an R–matrix description <cit.> incorporating experimental results could allow for a more sophisticated evaluation of the particle-induced Hoyle state de-excitation near the energy threshold. This could provide a more accurate 3α reaction rate for nucleosynthesis calculations. We have employed theoretical estimates of nuclear masses for unstable nuclei. Nuclear masses obtained in recent mass measurements (e.g. <cit.>) could make the ν p–process calculations more realistic and revise the nuclear yields beyond the waiting-point nucleus ^64Ge. § CONCLUSION We have analyzed the ν p–process in core-collapse supernovae by taking into account the enhanced 3α reaction rates induced by inelastic scatterings of free neutrons and protons on the Hoyle state. We find a negligible impact from the neutron-induced inelastic scattering but a large effect from the proton-induced scattering. For the SN neutrino-driven wind, the particle-induced Hoyle state de-excitation suppresses the production of ^92,94Mo and ^96,98Ru abundances as reported in previous work. On the other hand, for the HN wind with a massive PNS ∼3M_⊙, ^92,94Mo and ^96,98Ru are enhanced by the Hoyle state effect although the nuclear yields for A>140 are reduced. The calculated abundance yields of the HN wind model were then applied to the GCE calculation of p–nuclei. We found that the HN ν p–process significantly contributes to the GCE of ^92,94Mo and ^96,98Ru, regardless of the particle-induced Hoyle state de-excitation. We have demonstrated a possible contribution from the HN ν p–process to the origin of p–nuclei in the solar system. Further studies, including an analysis of uncertainties neglected in our calculation, could help substantiate these conclusions. § ACKNOWLEDGEMENT This work was supported in part by Grants-in-Aid for Scientific Research of Japan Society for the Promotion of Science (19J13632, 20K03958, 21J11453). Work at the University of Notre Dame (GJM) was supported by DOE nuclear theory grant DE-FG02-95-ER40934. This work (TK) was also supported in part by the National Key R&D Program of China (2022YFA1602401).
http://arxiv.org/abs/2307.01521v1
20230704070233
BU Canis Minoris -- the Most Compact Known Flat Doubly Eclipsing Quadruple System
[ "Theodor Pribulla", "Tamás Borkovits", "Rahul Jayaraman", "Saul Rappaport", "Tibor Mitnyan", "Petr Zasche", "Richard Komžík", "András Pál", "Robert Uhlař", "Martin Mašek", "Zbyněk Henzl", "Imre Barna Bíró", "István Csányi", "Remko Stuik", "Martti H. Kristiansen", "Hans M. Schwengeler", "Robert Gagliano", "Thomas L. Jacobs", "Mark Omohundro", "Veselin Kostov", "Brian P. Powell", "Ivan A. Terentev", "Andrew Vanderburg", "Daryll LaCourse", "Joseph E. Rodriguez", "Gáspár Bakos", "Zoltán Csubry", "Joel Hartman" ]
astro-ph.SR
[ "astro-ph.SR" ]
We have found that the 2+2 quadruple star system BU CMi is currently the most compact quadruple system known, with an extremely short outer period of only 121 days. The previous record holder was TIC 219006972 <cit.>, with a period of 168 days. The quadruple nature of BU CMi was established by <cit.>, but they misidentified the outer period as 6.6 years. BU CMi contains two eclipsing binaries (EBs), each with a period near 3 days and a substantial eccentricity of ≃ 0.22. All four stars are within ∼0.1 M_⊙ of 2.4 M_⊙. Both binaries exhibit dynamically driven apsidal motion with fairly short apsidal periods of ≃ 30 years, thanks to the short outer orbital period. The outer period of 121 days is found both from the dynamical perturbations, with this period imprinted on the eclipse timing variations (ETV) curve of each EB by the other binary, and by modeling the complex line profiles in a collection of spectra. We find that the three orbital planes are all mutually aligned to within 1 degree, but the overall system has an inclination angle near 83.5^∘. We utilize a complex spectro-photodynamical analysis to compute and tabulate all the interesting stellar and orbital parameters of the system. Finally, we also find an unexpected dynamical perturbation on a timescale of several years whose origin we explore. This latter effect was misinterpreted by <cit.> and led them to conclude that the outer period was 6.6 years rather than the 121 days that we establish here. stars: individual: BU CMi – binaries: eclipsing – binaries: spectroscopic § INTRODUCTION There are currently more than 300 known 2+2 quadruples consisting of an orbiting pair of eclipsing binaries, most of which have been found with TESS. The criteria for accepting these as quadruples are: (i) there are two eclipsing binaries that are (ii) unresolved at the pixel level with TESS, and (iii) which show only one dominant star in Gaia within the TESS pixel. However, given that Gaia does not often distinguish stars that are ≲ 1/2” apart, and that these objects are typically a kpc away, this implies only that the physical separation of the EBs is ≲ 500 AU or so. The corresponding outer orbital periods are only constrained to an order of ≲ 5000 years. At the largest of these orbital separations, the quadruples would be too wide for easy-to-measure dynamical interactions that could lead to a determination of the outer orbital period. However, a small percentage of these quadruples have much closer separations of less than a few AU, and these have led to measurable outer orbits as well as other interesting dynamical interactions, such as TIC 454140642 (432 days; ), TIC 219006972 (168 days; ), and VW LMi (355 days; ). With quadruples having outer orbital periods as short as ≲ a couple of years, interesting and informative dynamical interactions to look for include: (i) dynamical delays resulting from a changing period of either EB due to the varying distance to the other EB; (ii) dynamically forced orbital precession in the EBs; and (iii) forced precession of the orbital planes leading to eclipse depth variations. Every once in a while, one of these quadruples turns out to have a dramatically short outer period (e.g.
TIC 219006972; ) and it becomes quite feasible to use ETV data as well as RV data to completely diagnose most of the important stellar and orbital parameters. In this work, we report on BU CMi, a 2+2 quadruple which we find to have the shortest known outer orbital period of 121 days, and a very interesting array of dynamical effects. This work is organized as follows. In the remainder of Sect. <ref>, we discuss how the quadruple nature of BU CMi came to be known. Section  <ref> briefly reviews the observations that we bring to bear on the analysis of BU CMi, including with TESS and follow-up ground-based photometry and spectroscopy. In Sect. <ref> we briefly describe our use of broadening functions to extract radial velocities (RVs) from the spectra. Section <ref> details our production of a long-term ETV curve from archival as well as TESS data, and what we can learn from a visual inspection of it. In Sect. <ref> we explain in some detail how we fit a model of stellar and orbital parameters directly to the complex and overlapping line profiles in each spectrum. Section <ref> is devoted to a discussion of disentangling the four spectra, after we determined the stellar and orbital parameters. We review our spectro-photodynamical model for ascertaining the system parameters in Sect. <ref>. A discussion of our work and its implications follows in Sect. <ref>. We give some concluding remarks in Section <ref>. §.§ Prior work on Multiple Stars The Transiting Exoplanet Survey Satellite (TESS; ) has been instrumental in the discovery of multiple star systems. There has been significant progress on the problem of identifying triple, quadruple, quintuple, and even sextuple star systems from TESS data; for instance, the Planet Hunters citizen science project uncovered tens of multiple star system candidates <cit.>. In parallel, a combination of machine learning techniques and human vetting led to the creation of an extensive catalog of quadruple stellar systems <cit.>, in addition to the identification of the first-ever sextuply-eclipsing six star system <cit.>. §.§ BU Canis Minoris BU CMi (HD 65241, HIP 38945, TIC 271204362) is listed as a suspected Algol-type eclipsing binary in the 74th special namelist of new variable stars <cit.>. In spite of its brightness (V = 6.42), the object was not subject to any detailed study until <cit.> found it to be a quadruple system composed of two eclipsing binaries with P_A = 2.94 days and P_B = 3.26 days. Although the authors had observed BU CMi with a photoelectric photometer in 2012, they first noticed two systems of eclipses in MASCARA photometric data in 2020. According to their analysis, all four stars in the quadruple are similar (A0 spectral type), having masses between 3.1 - 3.4 M_⊙. The orbital eccentricities of the inner binaries were found to be relatively high for close binaries, with e_A = 0.20 and e_B = 0.22. The authors interpreted the observed ETVs as being due to (i) relatively rapid apsidal motion in the inner (i.e., binary) orbits with the periods U_A = 25.4 years and U_B = 26.3 years, and (ii) light travel time effects (LTTE) from the outer orbit between the two binaries having a period of 6.62-years and a high-eccentricity (e_AB = 0.7). Gaia DR3 <cit.> lists BU CMi as a single object astrometrically and spectrosopically with a parallax of π = 4.0143±0.0335 mas and the following atmospheric parameters T_ eff = 10173^+43_-39 K, log g = 3.727^+7_-6, and [M/H] = 0.778^+17_-40. 
The main catalogued photometric and kinematic data for BU CMi are given in Table <ref>. §.§ Our two independent discoveries of the quadruple nature of BU CMi The Visual Survey Group (`VSG'), in its search for compact multistellar systems, continues to visually survey large numbers of light curves from the TESS mission <cit.>. Some of its findings, including in the area of multistellar systems, are given in <cit.>, and <cit.>. In 2021 March, the group spotted BU CMi in the Sector 34 light curves and immediately identified it as a potential quadruple star system. In parallel to the VSG's survey of the light curves from the TESS full-frame images (FFIs), RJ and SR have also been searching the 2-minute and 20-second cadence light curves for strongly periodic or time-varying stellar phenomena. Every 2-minute-cadence light curve is passed through an algorithm that flags it for further review if there is a detection of at least a 12-σ peak in its periodogram. When searching the Sector 34 short-cadence light curves through this algorithmic process, BU CMi triggered on this algorithm; upon human review, this light curve was found to be a bona fide quadruple system and flagged for further follow-up. Since that time, we have continued to collect information on, and model, the BU CMi system. After two years of study, we now report here on the discovery of a 121 day outer period for the system. § OBSERVATIONS §.§ TESS Observations BU CMi (TIC 271204362) was observed during TESS Sectors 7, 34, and 61 (from 7 January 2019 to 2 February 2019, 13 January 2021 to 9 February 2021, and 18 January 2023 to 12 February 2023, respectively). In Sector 7, it was observed at 30-minute cadence in the full-frame images (FFIs); while in Sectors 34 and 61, besides the 600 and 200-sec cadence FFIs, it was also observed at 2-minute cadence. For most of our study we used the FFI observations, from which the light curves were processed using the FITSH pipeline <cit.>. We used the 2-min cadence SAP-FLUX light curves, which were downloaded directly from the Barbara A. Mikulski Archive for Space Telescopes (MAST) website, only for the determination of mid-eclipse times over Sectors 34 and 61 data. Naturally, in the case of the Sector 7 observations, we could determine the mid-eclipse times only from the sparsely cadenced FFI data at 30 min. We show some illustrative segments of the three TESS sectors in Fig. <ref>. The superposed model curves will be discussed in Sect. <ref>. §.§ Ground-based Photometry BU CMi is bright enough (M_V = 6.42) that it can be reliably observed using small-aperture telescopes from the ground. We used archival data obtained from the Kilodegree Extremely Little Telescope (KELT–2600 points; ), the Multi-site All-Sky CAmeRA (MASCARA–10800 points; see and ), and the Hungarian Automated Telescope Network (HATnet–3200 points; ). BU CMi was heavily saturated in the HATNet observations, so we performed a custom analysis of these data that differs from the standard methods used by the survey. We extracted aperture photometry for BU CMi and 200 other comparably bright stars through an annulus excluding the saturated core of the target. We then performed an ensemble correction using a linear combination of the 200 bright neighbors to correct for instrumental and atmospheric variations in the resulting light curve. Additionally, we carried out further photometric follow-up observations between 2021 March and 2023 April. 
These were mainly obtained with a small 34-mm telescope in a private observatory in Jílové u Prahy in the Czech Republic, as well as remotely in Northern Italy (both by one of the co-authors R.U.). Another observing site was in Argentina as part of the Pierre Auger Observatory <cit.>, and obtained by M.M. On three additional nights, the target was also observed by Z.H. from two observing sites in the Czech Republic. Finally, two further eclipses of BU CMi were observed with the RC80 telescope of Baja Astronomical Observatory in 2022 and 2023. §.§ Spectroscopy from Skalnaté Pleso Observatory High-dispersion spectroscopy was obtained with a 1.3 m, f/8.36 Nasmyth-Cassegrain telescope equipped with a fiber-fed échelle spectrograph at the Skalnaté Pleso (SP) Observatory, Slovakia. Its layout follows the MUSICOS design <cit.>. The spectra were recorded by an Andor iKon-L DZ936N-BV CCD camera with a 2048 × 2048 array, 13.5 μm square pixels, 2.9 e^- readout noise, and a gain close to unity. The spectral range of the instrument is 4250–7375 Å  (56 échelle orders), with a maximum resolution of R = 38 000. Because of the relatively long orbital period of both inner binaries, three 600-sec exposures were combined to increase the SNR and to clean cosmic ray hits. The raw spectroscopic data were reduced as in <cit.> using IRAF package tasks, LINUX shell scripts, and FORTRAN programs. In the first step, master dark and flat-field frames were produced, based on the spectra from the tungsten lamp and blue LED. In the second step, the photometric calibration of the frames was performed using dark and flat-field frames. Bad pixels were cleaned using a bad-pixel mask, and cosmic ray hits were removed using the program of <cit.>. Order positions were defined by fitting sixth-order Chebyshev polynomials to tungsten-lamp and blue LED spectra. Subsequently, scattered light was modelled and subtracted, and then aperture spectra were extracted for both BU CMi and the ThAr lamp. The resulting two-dimensional spectra were then dispersion solved and combined to one-dimensional spectra. Finally, all spectra were continuum normalized. The typical radial velocity stability of the spectrograph is 200 m s^-1. In total, 60 spectra with per-pixel SNR ranging from 45 to 175 (at 5500 Å) were obtained from 2020 March 17 through 2023 March 3. §.§ CTIO spectroscopy Additional spectroscopy was obtained with the CHIRON fiber-fed échelle spectrograph at the 1.5m telescope of the Cerro-Tololo Interamerican Observatory, Chile. A detailed description of the spectrograph can be found in <cit.>. All spectra were taken in the slicer mode providing R = 80 000. The spectrum was extracted from 70 échelle orders covering 4080-7000 Å. Similar to the case of the SP spectroscopy, three consecutive 600-second exposures were combined. Even without an iodine cell, the spectrograph stability is a few m s^-1. Because of a reflection causing a blemish in the blue part of the spectrum, an independent pipeline based on the IRAF scripts has been developed to make use of the full spectral range of the instrument. In addition to the reduction steps taken for the SP spectra, the raw frames were trimmed and corrected for the overscan. The relative responses of four quadrants as read out by different amplifiers were taken into account. In total, BU CMi was observed on 16 nights from 2022 November 11, until 2023 March 29. The per-pixel SNR ranged from 70 to 290 (at 5500 Å). 
§.§ Konkoly and Rozhen spectroscopic observations Additional spectroscopic measurements were obtained with the R ∼ 20 000 échelle spectrograph mounted on the 1m RCC telescope at Konkoly Observatory, Hungary between 2021 December and 2022 March. The spectrograph is capable of covering the 3890-8670 Å wavelength range in a set of 33 échelle orders. The images were taken with a back-illuminated FLI ML1109 CCD camera having an array of 2048 × 506 pixels of size 12 μm, 10 e^- readout noise, and a gain close to unity. A total of 11 spectra were observed, with exposure times varying between 900 and 3600 seconds. Two additional spectra were taken with the 2 m RCC telescope at NAO Rozhen, Bulgaria equipped with the R ∼ 30 000 ESpeRo spectrograph in January 2022. The details of the instruments can be found in <cit.>. The spectra were reduced completely in the same manner as in <cit.> including the steps of bias, dark and flat corrections, wavelength calibration, continuum normalization, telluric line removal, and barycentric correction. The per-pixel SNR of these additional observed spectra is between 70 and 140. § RADIAL VELOCITIES We extracted the broadening functions from all spectra of BU CMi using the publicly available software BF-rvplotter[https://github.com/mrawls/BF-rvplotterhttps://github.com/mrawls/BF-rvplotter] in a ±300 km s^-1 velocity range using 3 km s^-1 RV bins, and a Gaussian smoothing using a rolling window of five datapoints. We tried templates of different (G-, F-, A- spectral type) stars, and the spectrum of Vega with A0V spectral type produced the best quality BFs over the 4400-4600 Å and 4900-5300 Å wavelength range that we used for the calculations. The sum of four Gaussian functions was then fitted to the resulting broadening functions to find the peak positions of the profiles of the component stars in order to determine their RVs. Although the use of Gaussian functions is not the most appropriate choice for fitting the BFs of rapidly rotating stars, we found it to work well in the case of the BFs of BU CMi, as we were able to constrain satisfactory model fits to them using the sum of four Gaussian functions. An example of a typical BF along with the corresponding model fit is displayed in Fig. <ref>. The resulting RVs and their 1-sigma uncertainties are listed in Table <ref>. table-1 § PERIOD STUDY §.§ Determination of times of minima In order to carry out a preliminary period study with the usual method of analyzing the eclipse timing variations (ETV), we determined accurate mid-eclipse times from the various photometric light curves. Due to the presence of frequently and strongly overlapping eclipses in the two EBs, this process required some extra care. In the case of the quasi-continuous, 25–26-day-long TESS observations, we disentangled the light curves of the two EBs, and mutually removed the signals of the other EB from the light curve in the same manner as was also done for our analyses of V994 Her, <cit.>. This needs to be done in order to avoid losing the overlapping eclipse events and, moreover, to enhance the accuracies of those times where the eclipses of the two EBs do not overlap each other, but other light-curve features (e.g. the periastron bumps) of one of the binaries distorts the eclipsing signal of the other EB. Due to the rapid and large amplitude ETVs, this method was done separately for the three sectors and, even then, in the case of a few fully overlapping eclipsing events, the disentanglement was not perfect. 
As a result, we had to drop out 2-3 outlier eclipse times from our analysis. In the absence of well-covered out-of-eclipse light curves for our 2021–2023 targeted ground-based follow-up eclipse observations, we were unable to carry out the same disentanglement process, and had no choice but to simply exclude a few blended eclipses from our ETV analysis. In the case of the MASCARA, KELT and HAT data, after their conversion from the original Heliocentric Julian Date (HJD) times to Barycentric Julian Date (BJD), we again disentangled the two binaries' signals from the light curves. We then took those eclipse observations where both the ingress and egress portions of the same eclipses were measured, and we determined mid-eclipse times for these individual eclipses. The resulting mid-eclipse times as determined from the TESS, KELT, and MASCARA archival data, as well as our new ground-based observations, are listed in Tables <ref>, <ref>. Finally, we utilized some additional published eclipse times from the papers of <cit.> and <cit.>. Regarding this latter work, <cit.> tabulate their own eclipse times determined from the MASCARA data and the TESS sector 7 and 34 observations. We do not use those particular times, as we have determined our own eclipse times from the same datasets using our own method. We do use, however, most of the eclipse times that <cit.> determined from their own observations, and to which we do not have access. Unfortunately, they do not provide uncertainties for these times, so we arbitrarily take an estimated error of 0.0001 days for each of these data points. The combination of all these data has provided 102 and 81 mid-eclipse times for binaries A and B, respectively. §.§ Preliminary ETV study The ETV curves formed from all the data listed in Tables <ref> and <ref> are shown in Fig. <ref>. The top two panels show the overall ETV curves spanning all the data. The left and right panels are for binary A and binary B, respectively. The bottom panels are zoom-ins around the three TESS sectors and one ground-based segment. A first visual inspection of the ETV curves of both binaries shows two very remarkable features. First, each EB exhibits a large sinusoid with an amplitude of ∼± 0.2 days, and periods of 25-30 years. Note that the corresponding curves for the primary and secondary eclipses are anticorrelated. This clearly indicates apsidal motion of the eccentric EBs. We will show in Sect. <ref> that this apsidal motion is driven by the presence of the `other' binary. Secondly, the high-quality modern ETV points determined from the TESS and ground-based follow-up observations exhibit clear ∼120-day periodic variations, with amplitudes of 30-40 minutes. The shapes of these short-period (∼120-day) ETVs clearly resemble the dynamically driven ETVs of several other tight and compact triple and quadruple star systems <cit.>. We thereby associate the ∼120-day ETV features with the outer orbital period of the quadruple in which binary A orbits binary B. Moreover, the apsidal motion periods are in perfect agreement with periods which are theoretically expected for tight triple systems with the observed inner and outer periods. It was therefore clear to us that BU CMi is an exceptionally compact doubly-eclipsing 2+2 hierarchical quadruple system. As a first attempt to understand these ETVs quantitatively, we made analytic ETV fits using the code and method described in <cit.>. 
As this code was developed for tight triple systems, unfortunately, it is unable to simultaneously fit the ETVs of both binaries. On the other hand, however, the analytical descriptions of the P_out-timescale, as well as the apse-node perturbation terms, are included in the software package and, hence, it is very useful for getting reliable third-body solutions for dynamically-dominated ETV curves. When used to analyze the BU CMi ETV curves, the solutions confirmed the P_out≈121.5 d outer orbital period and, moreover, the fact that the observed rapid apsidal motions are driven by the third-body perturbations of the other EB.[The period of the apsidal motion is not merely a free, adjustable parameter, but is calculated from the fitted inner and outer periods, mass ratios, and eccentricities as per theoretical formulae <cit.>.] While our analytic ETV fits confirmed our initial hypothesis about the system configuration, the obtained fit was surprisingly poor for both binaries. This inadequate fit is due to an `extra feature' in the ETV curve with a period of ∼1000-1200 days that can be seen by casual inspection of the curves. We were able to improve our analytical ETV fit substantially, however, when we “added” an additional, more distant fourth (i.e., in the present situation, fifth) stellar component to the system. A recent improvement of our analytic ETV software package makes it possible to fit simultaneously a second light-travel time effect (LTTE). We found that adding such a second LTTE component with a period of P_outermost≈1100 days results in statistically excellent fits for the ETV curves of both binaries. We emphasize, however, that this was a solution that was useful in the context of the fitting code, rather than having any physical significance. Such an extremely tight 2+2+1 quintuple system would be at best marginally stable; furthermore, the strong gravitational perturbations of the outermost component should also have to be taken into account. Later, we demonstrate, when discussing the fully consistent, photodynamical treatment (see Sect. <ref>), that the explanation for the origin of the extra cyclic terms in the ETVs with a ≈1000-1200-day period does not require the presence of an additional, more distant stellar component in the system. Rather, we show that the extra cyclic feature in the ETV curve arises naturally from the mutual gravitational perturbations between the two binaries. Finally, we note that the results of this preliminary, analytic ETV analysis were used only for finding reliable input parameters for the complex, photodynamical fitting procedure (Sect. <ref>). Thus, a more detailed, quantitative discussion of the ETV results will be given later, in Sect. <ref>, in the context of the photodynamical results. § DIRECT FITTING TO THE SPECTRA AND TESS LIGHT CURVES The ETV analysis presented in section <ref> strongly indicates that the outer orbit of the system is very tight, with P_out≈121.5 d. This means that the mutual orbit of the eclipsing binaries is relatively fast, resulting in EB center of mass radial velocity semi-amplitudes on the order of tens of km s^-1. Such a large RV variability superposed on EB orbits of a few days should be easy to detect. The analysis of the spectra of BU CMi is, however, significantly complicated by the fact that all its stellar components possess an early spectral type. 
The spectra are therefore dominated by strong, wide hydrogen Balmer lines; these lines are very difficult to work with, as we cannot reliably detect their splitting and robustly calculate radial velocities of the individual stars in the system. On the other hand, the metal lines of the system are rather shallow; as a result, a relatively high signal-to-noise ratio is necessary to use these in our analysis. The line splitting due to the orbital motion in both pairs is clearly visible in the strongest metal line (Mg ii 4481 Å). Because the rotational velocities of the components (v sin i∼ 50–80 km s^-1) are approximately half the radial velocity semi-amplitudes (K ∼ 120–130 km s^-1), the components' profiles are almost always somewhat blended. Thus, a direct measurement of the radial velocity is rather difficult and may yield inaccurate results (see Section  <ref>). Consequently, we attempted to directly model the observed spectra, under the assumption of two binary stars in a Keplerian orbit about each other. Two wavelength ranges were found to be appropriate: a blue region around the Mg ii 4481 Å  line, covering 4380 - 4605.4 Å, and a green region around the Mg i triplet, covering 4900 - 5379.6 Å. The spectra were weighted using the signal-to-noise ratios provided by the reduction pipeline at both 4400 Å  and 5500 Å. The standard deviation at either of the spectral ranges was assumed to be 1/SNR[Only SP and CTIO spectra were used in the modelling]. Before being used for modeling, the original spectra were rebinned to a logarithmic wavelength grid with a radial velocity step of 3.5 km s^-1 in both spectral ranges. Due to the small number of observations during epochs in which either of the systems was in an eclipse, no spin-axis orbital-plane misalignment for either of the components was assumed; thus, we set the parameter λ = 0 deg. The sum of synthetic spectra for all stellar components was fitted to the observed spectrum at each epoch. Prior to summing, the synthetic spectra of individual components were convolved with a theoretical limb-darkened rotational profile. Mutual eclipses of the components were taken into account. Keplerian motion in both the inner and outer orbits was assumed. Synthetic spectra were computed using iSpec <cit.>, a program based on the SPECTRUM code <cit.>. For the spectra, we assumed a solar metallicity (log [m/X] = 0.0) and that 9 000 ≤ T_ eff ≤ 12 000 K. We also allowed for apsidal motion for the inner orbits. No other gravitational perturbations were assumed in this relatively simple model. We found that the principal atmospheric parameters corresponding to the best model for either of the spectral ranges are log g = 4.5 for all four stars; T_ eff = 11000 K for components 2, 3, and 4; and T_ eff = 11 500 K for component 1.[When describing our spectroscopic analysis, we refer to the stellar components of the A subsystem (P = 2.94 d) as 1 and 2. The stellar components of the wider subsystem B (P = 3.26 d) will be denoted as 3 and 4.] We modeled the blue and red spectral ranges separately. In order to constrain the fit parameters—specifically the inclination angle i, orbital periods of inner binaries, P_A, P_B, and the apsidal motion rates of the binaries P_ aps—we included the TESS photometry from sectors 7 and 34 in the modelling. The ellipsoidal light variations were modeled using the analytical approximations of <cit.>; these were found to be sufficiently accurate for both inner binaries. 
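As a side note, the logarithmic-wavelength rebinning mentioned above, with a constant velocity step of 3.5 km s^-1, can be sketched in a few lines of Python. The sketch assumes simple linear interpolation, whereas a production pipeline would typically use a flux-conserving rebinning scheme; it is given only to make the preprocessing step concrete.

import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def rebin_log_wavelength(wave, flux, dv_kms=3.5):
    # Resample a spectrum onto a logarithmic wavelength grid with a constant
    # velocity step dv_kms, using plain linear interpolation.
    ln_step = np.log(1.0 + dv_kms / C_KMS)
    n_pix = int(np.floor(np.log(wave[-1] / wave[0]) / ln_step)) + 1
    new_wave = wave[0] * np.exp(ln_step * np.arange(n_pix))
    new_flux = np.interp(new_wave, wave, flux)
    return new_wave, new_flux

# e.g., the blue fitting range around the Mg ii 4481 Å line
wave = np.linspace(4380.0, 4605.4, 2000)
flux = np.ones_like(wave)
log_wave, log_flux = rebin_log_wavelength(wave, flux, dv_kms=3.5)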
While the semi-amplitudes of the RV changes K_i are free parameters, the semi-amplitudes of the radial-velocity variations of the centers of mass of the binaries depend on the masses of the individual components (M_i, i=1,2,3,4), the inclination angle i_out, eccentricity e_out, and the orbital period P_out of the outer orbit. The semi-amplitudes of these radial velocity changes, K_12 and K_34, were hence computed using these parameters. The simultaneous modeling of the TESS light curve and the spectra was performed via gradient-based optimization starting from multiple sets of trial parameters in order to arrive at the global minimum of χ^2. Initial optimization runs showed that the inclination of the outer orbit is close to 90°, but attempts to adjust it led to divergent results. Hence, the outer orbit was fixed at edge-on and the inclination was not adjusted. We were able to constrain the outer orbit and demonstrate that it is extremely tight, with P_out∼ 121 d. This differs significantly from the estimate provided by <cit.>, who cite a value over twenty times ours (P_out = 2420±40 d or 6.62 years). In Section <ref>, we discuss the differences between the analyses and lay out a case for why 121 d is definitely the period of the outer orbit. The parameters corresponding to the best models for the blue and green regions are listed in Table <ref>, and the best-fit models to the first 20 observed spectra of BU CMi in the blue wavelength range, close to the Mg ii 4481 Å line, are plotted in Figure <ref>. The best-fit stellar parameters for the component stars are listed in Table <ref>. For further analysis, we will equate the blue and green spectral ranges with the Johnson B and V passbands, and use B and V magnitudes where relevant. § SPECTRA DISENTANGLING AND ATMOSPHERIC PARAMETERS After obtaining a set of model radial velocities for all stellar components at all epochs of the spectroscopic observations, we then attempted to disentangle the composite spectrum in order to obtain individual spectra for each star (1, 2, 3, and 4). The iterative disentangling approach of <cit.> was used, and the spectra were disentangled in both the blue and green ranges. As previously, a radial-velocity step size of 3.5 km s^-1 was used for both spectral ranges. The process converged to a solution for both spectral ranges, and the resulting composite spectra are plotted in Figure <ref>. These spectra show small imperfections, perhaps from issues with the continuum rectification and artifacts from the disentangling. However, our technique does not yield the flux ratio of the components <cit.>. As a result, we corrected the component spectra for the contribution of the other components before proceeding. Properly normalizing the spectra will have an impact on the line depths and is possible using the component passband-specific luminosities derived from simultaneously modeling the spectra and the light curve. Computed brightnesses for the B and V Johnson bands were assumed for the blue and green spectral ranges, respectively. As part of our modeling, we used the radii derived in Table <ref> for the correction step and to determine the atmospheric parameters T_eff, log g, and metallicity log [m/X]. The projected rotational velocities were taken from the results of the spectral fitting. As before, we used the iSpec code. The temperatures and surface gravities for binary A are T_pri = 10760±360 K, log g_pri = 4.23±0.29, T_sec = 10180±430 K, log g_sec = 4.11±0.30.
The corresponding parameters for binary B are T_pri = 10820±380 K, log g_pri = 4.63±0.26, T_sec = 10040±350 K, log g_sec = 4.18±0.23. The metallicity was fixed at the solar value. The component temperatures are slightly lower than those of the best-fitting templates. We also highlight our finding that component 4, in binary B (P=3.26 d), is cooler than the remaining components. As part of our presentation of these results, we add the caveat that the atmospheric parameters strongly depend on the technique used to normalize the individual disentangled spectra. Using different flux ratios would change the line depths and result in different temperatures and/or metallicities for each of the components in the system. § SPECTRO-PHOTODYNAMICAL ANALYSIS Independent of the spectroscopic analyses described in Sect. <ref>, we carried out a simultaneous, joint analysis of the TESS light curve, the ETV curves calculated from the mid-times of all the observed eclipses (i.e., from TESS and ground-based measurements; see Sect. <ref>), radial velocities (determined by Gaussian-profile fitting to the BFs; see Sect. <ref>), as well as all the available multi-wavelength SED data for BU CMi, using the Lightcurvefactory software package <cit.>. During the analysis we followed the same steps as for TIC 454140642 and TIC 219006972 <cit.>, two other compact quadruple systems discovered with TESS. For further details of the photodynamical analysis, we refer the reader to Sect. 5.1 of <cit.>. Here, we describe only the steps in the data preparation that are specific to this system. In order to retain similar sampling and to reduce computational costs, we binned the 2-minute cadence datasets from Sectors 34 and 61 to 1800 sec, similar to the sector 7 FFI-cadence light curve. When comparing some preliminary fits to the short- and long-cadence light curves, we found no statistically significant discrepancies in the resultant parameters and, hence, we decided to use exclusively the 1800-sec cadence light curves for all three available TESS sectors. During our initial analysis runs (made in 2021 and 2022), we also made use of a second set of light curves compiled from the ground-based photometric follow-up observations. However, when the sector 61 observations of the TESS spacecraft became available and, hence, the high-quality and homogeneous satellite photometry then covered more than a 4-year-long interval, we decided to no longer use the more inhomogeneous and less accurate ground-based photometric data for direct light curve fitting. Naturally, the eclipse times derived from these ground-based observations did continue to be used for the ETV analysis part of the simultaneous, joint spectro-photodynamical fitting process. Because our software package Lightcurvefactory is unable to handle direct fitting of the spectral lines (see Sect. <ref>) and fits only direct RV data, we used a second, different approach: we extracted model-independent RV data from all available spectra in the manner described briefly in Sect. <ref>. Due to the highly blended spectral lines, the correct identification of the individual stellar components was quite problematic in the case of several spectra. Hence, after obtaining the first model fits, we had to change the labeling of the stars in the RV data in several cases. The RV data used for the spectro-photodynamical analysis are listed in Table <ref>.
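For completeness, the 1800-s rebinning of the 2-minute cadence light curves described above amounts to straightforward averaging in fixed-width time bins, as in the following sketch; quality flags and the propagation of the per-point uncertainties, which the actual preparation of course has to handle, are ignored here.

import numpy as np

def bin_light_curve(time_days, flux, bin_seconds=1800.0):
    # Average a short-cadence light curve into fixed-width time bins.
    width = bin_seconds / 86400.0
    edges = np.arange(time_days.min(), time_days.max() + width, width)
    idx = np.digitize(time_days, edges) - 1
    keep = np.unique(idx)
    t_new = np.array([time_days[idx == k].mean() for k in keep])
    f_new = np.array([flux[idx == k].mean() for k in keep])
    return t_new, f_new

# e.g., 2-minute cadence samples averaged down to an FFI-like 1800-s cadence
t = np.arange(0.0, 1.0, 120.0 / 86400.0)
f = np.random.normal(1.0, 1.0e-3, t.size)
t_binned, f_binned = bin_light_curve(t, f, bin_seconds=1800.0)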
In addition to the TESS light curves and the RV data, as was mentioned before, we used the four ETV curves (primary and secondary ETV data for both binaries; see Tables <ref> and <ref>). Finally, we also used the catalog passband magnitudes listed in Table <ref> for the SED analysis. For this analysis, similar to our previous works, we used a minimum uncertainty of 0.03 mag for most of the observed passband magnitudes. This was done in order to avoid an outsized contribution from the extremely precise Gaia magnitudes, as well as to counterbalance the uncertainties inherent in our interpolation method during the calculations of theoretical passband magnitudes that are part of the fitting process. The only exception is the WISE W4 magnitude, for which the uncertainty was set to 0.3 mag. Table <ref> lists the median values of the stellar and orbital parameters of the BU CMi quadruple system that have been either adjusted, internally constrained, or derived from the MCMC posteriors, together with the corresponding 1σ statistical uncertainties. Sections of the light curves, the ETV and RV curves of the lowest χ^2_global solution are plotted in Figs. <ref>, <ref>, and <ref>, respectively. One caveat should be noted regarding the proper interpretation of the orbital parameters listed in Table <ref>. The tabulated orbital elements, with a few exceptions, are so-called `instantaneous osculating elements,' which are valid at the moment of the cited epoch t_0. Thus, they cannot be simply compared with those orbital elements that are deduced directly from photometric and/or spectroscopic observations. These latter orbital elements can be considered to be some kind of long-term averaged orbital elements and connected to such Keplerian orbits that represent the approximations of the time-averaged envelopes of the true (non-Keplerian) motions. This question was discussed in more detail in Sect. 5.1 of <cit.>. Regarding the exceptions, in the (second) row P_obs we give the average or, `observable', periods for the three orbits (A, B, AB) which were obtained from a longer-term numerical integration initiated with the parameters of the best-fit spectro-photodynamical model. The values given for the inner binaries (A, B) stand for the long-term average of their eclipsing periods (which technically means that calculating the ETV curves with these periods, the averages of the ETVs over full apsidal motion cycles remain constant). In the case of the outer orbit, however, P_obs was determined as the time average of consecutive periastron passages, i.e., this is the average anomalistic period of the outer orbit. The other exception is the first row of the apsidal motion related parameters, P_apse^obs. Here the duration of an apsidal motion cycle, i.e., the time needed for the complete, 360 variation of the observable arguments of periastron of the three orbits, ω_A,B,AB, are given. The other apsidal motion parameters are calculated internally by the software package Lightcurvefactory with the use of the usual analytic formulae <cit.>. § DISCUSSION AND IMPLICATIONS §.§ System parameters Our two independent analyses (Sects. <ref>-<ref>) have resulted in very similar results, at least qualitatively; however, quantitatively, a number of the discrepancies exceed the estimated uncertainties. For example, both models agree that BU CMi consists of four very similar hot stars. 
Moreover, the mass ratio of the two primaries (i.e., m_Ba/m_Aa) is ∼0.993 and ∼0.992 according to the direct spectral fitting and the spectro-photodynamical solutions, respectively (i.e., the two agree to well within 0.1%). The direct spectral fitting approach, however, resulted in systematically more massive (by ∼4-5%) and hotter (by ∼3-8%) components than the spectro-photodynamical analysis. At the moment we are unable to completely resolve these discrepancies. One should keep in mind, however, that the individual masses are primarily controlled by the RV data which, in the current situation, are quite uncertain for the reasons discussed in Sects. <ref> and <ref>. Regarding the temperatures, our attempts to obtain reliable spectroscopic solutions with the use of slightly lower temperature templates have failed. On the other hand, if one compares the photometric distance calculated from the spectro-photodynamical solution d_phot=245±8 pc with the Gaia DR3-derived one d_GaiaDR3=247±2 pc <cit.>, the agreement is excellent. In contrast to this, in the case of the direct spectral-fitting solution, which gives hotter and more massive (and, hence, larger) stellar components, the total luminosity of the quadruple was found to be larger by ∼66% (compare the derived individual luminosities in Tables <ref> and <ref>), resulting in a ∼29% larger photometric distance. However, one should keep in mind that in the Gaia DR3 astrometric solutions, stellar multiplicity was not taken into account and hence the Gaia-derived distance might be subject to some systematic errors. Thus, in conclusion, we believe that we are not currently in a position to prefer one or the other solution; hence, we conclude that the true uncertainties of our solutions, at least in the masses and temperatures, must be around 5-7%. The quantitatively similar results of the two independent approaches clearly show that the parameters are robust and that BU CMi is indeed a very tight quadruple with a ∼120-day outer orbital period. The differences in the orbital and component parameters between the two analyses give us, however, an independent check of the parameter uncertainties. The similar brightness of the inner binaries makes the astrometric signal of the system's photocenter tiny; thus we cannot expect that Gaia would provide independent orbital parameters. Because the photodynamical analysis is more complex and, in addition to RVs, it takes into account all available photometric data (including observed colors and brightness), we prefer the corresponding parameters. Turning to the orbital geometry of the system, we find that BU CMi is not only the most compact but also an extremely flat, 2+2 type quadruple system. All three orbital planes (that of the two inner EBs and also the outer orbital plane) are well aligned to within 1°. Note that this is a common feature of the previous two TESS-discovered compact doubly eclipsing 2+2 type quadruple systems: TIC 454140642 <cit.> and TIC 219006972 <cit.>. In drawing any general conclusions from the common flatness of these new compact 2+2 quadruples, however, one should keep in mind that these findings might be biased, at least in part, by observational selection effects. This is so because, in the case of substantially inclined orbital planes, the three planes would precess with different amplitudes and periods on timescales of decades or centuries (depending on the mutual inclination angles), and this would make it very unlikely that both inner binaries would show eclipses at the same time.
And, in the absence of eclipses in either or both of the binaries, there would be no (or only a very minor) chance of detecting the given target as a compact 2+2 system. On the other hand, however, such a selection effect does not explain the extreme flatness (i.e., orbital alignments within 1-2°). For example, the current binaries A and B would already produce eclipses for inclinations i_A≳72.9° and i_B≳74.3°, from which it follows that, keeping the inclination of the system's invariable plane at the currently derived value of i_0=83.8°±0.2°, both inner binaries would continuously produce well detectable eclipses in the case of mutual inclinations of, say, i_A-AB;B-AB=5°. So, in conclusion, in our view the very strong flatness is likely a consequence of the formation processes of the most compact 2+2 systems but, due to the strong observational bias described above, this conclusion is not highly robust. Besides the similarities of BU CMi to the previously mentioned two quadruples, there are strong differences as well. For example, in the case of the other two quadruples, the inner binaries are in nearly circular orbits, while the inner binaries of BU CMi have remarkably substantial eccentricities (e_A=0.2191±0.0008 and e_B=0.2257±0.0009, respectively). As BU CMi has the smallest P_A,B/P_out ratio of these three quadruples and, moreover, contains the closest inner binaries amongst these three systems, these findings at first sight appear to be quite unexpected for a number of reasons. First, the less tight a hierarchical system is, the less effective are the gravitational perturbations of the outer component(s) acting upon the Keplerian motion(s) of the inner pair(s). Second, due to the compactness of the inner binaries, the fractional radii of the constituent stars in BU CMi exceed those of the stars in the other two quadruples by factors of 4-5. Hence, one can expect that tidal forces and tidal dissipation, which scale with the fifth (for equilibrium tides) and eighth (for tidal dissipation) powers of the fractional radii, are much more effective in BU CMi than in the other two systems. These discrepancies can be resolved by considering the facts that (i) in contrast to TICs 454140642 and 219006972, BU CMi is formed by four early-type, massive, radiative stars, in which tidal dissipation is much less effective; and, moreover, (ii) this system is much younger (i.e., ∼300 Myr old, in contrast to the other two quadruples, which are several Gyr old). Thus, one may conclude that there has been insufficient time for the circularization of the inner orbits since the formation of the quadruple system BU CMi. The significant inner eccentricities in BU CMi, as well as the non-edge-on view of the two inner orbital planes (i_A=83.4°±0.1°; i_B=83.9°±0.1°), together with the very similar surface brightnesses of the binary stars, have an interesting observational consequence. Namely, depending on the orientations of the orbital ellipses relative to the Earth, the relative depths of the two consecutive eclipses in each EB can change and even swap. Regarding binary A, currently the periastron passage, i.e., the smallest separation between the two stars, is much closer to the inferior conjunction of the more massive (primary) component. Hence, due to the larger fraction of the occulted stellar disk of the secondary, the deeper light minimum occurs during this event, i.e., when the more massive, hotter star eclipses the less massive and cooler component.
In contrast to this, during the HAT observations, nearly half an apsidal cycle earlier, the currently shallower eclipse was actually the deeper eclipse. In the case of binary B, the tendency is just the opposite, as can be seen in Fig. <ref>.[In this regard, we note that at the beginning of our comprehensive analysis we considered the deeper TESS eclipses as primary ones in both binaries and the stars were labeled accordingly. Then, after concluding that the eclipse depths in both binaries reverse throughout an apsidal cycle and, moreover, considering the fact that currently, in the case of binary A the cooler star is eclipsed during the deeper light minima, we decided to redefine primary eclipses as those events in which the hotter (and more massive) stars are eclipsed, irrespective of the instantaneous amplitude ratio of the two eclipses. Hence, finally we relabeled the components of binary A, and all the tabulated parameters (and the figures) are given accordingly.] Turning to the apsidal motion cycles mentioned above, our direct spectral fits give a ∼25 yr-period for both inner binaries. Allowing for the fact that the duration of the full spectroscopic dataset is ∼2.7 yr, i.e., about 10% of the full cycles, we feel that these results are in essentially perfect agreement with the findings of the spectro-photodynamical analysis (P_apse_A,B=28.7±0.1 and 25.1±0.1 yrs, respectively). The latter results are based on the ETV data, which extend to more than the half of the full apsidal cycles. In addition to the apsidal motion periods of the inner binaries, the photodynamical solution also gives the apsidal motion for the outer orbit, and this is also found to be remarkably rapid at P_apse_AB=145.3±0.2 yr. These values, which were `measured' from the numerical integration of the best-fit photodynamical solution, are in good accord with the theoretically calculated apsidal motion periods—of which the medians were found to be P_apse=31.8±0.1, 29.0±0.1 and, 151.6±0.8 yrs for orbits A, B, and AB, respectively.[The discrepancies of 3-6 yrs (i.e. 4-10%) might have come from the neglect of the octupole order perturbation terms.] Besides the `measured' and theoretically calculated apsidal motion periods, we also tabulate the contribution of the dynamical (third-body), relativistic, and classic tidal effects to the apsidal advance rates (Δω_3b, GR, tide, respectively). It is evident that the dynamical effects substantially dominate over the tidal and relativistic ones, and this clearly supports our previous statement in Sect. <ref> that the rapid apsidal motion must have a dynamical origin. This latter statement leads us back to the previously mentioned contradiction between our results and those of <cit.> which will be discussed below. §.§ Comparison with the results of <cit.> As mentioned in the Introduction, <cit.> also published a detailed analysis on this quadruple system. Their conclusions are remarkably different from our findings. Here, we discuss the origins of these discrepancies and make attempts to resolve them. First, the most substantial difference is that they report an outer period of about 6.6 yr (2420±20 days), instead of the much shorter outer period of 121 days that we report in this work. They arrived at this conclusion via an analysis of the ETV curves, where besides the evident large-amplitude sinusoidal variations characteristic of dynamically driven apsidal motion with a ∼25 yr period, they detected additional cyclic variations with a period of ∼2400 days. 
They interpret the latter variations as the light-travel-time effect (LTTE) originating from the orbit of the two EBs around their common center of mass. Moreover, they also detect another variation of small amplitude in the high-precision TESS Sector 7 eclipse times, which they claim might be due to a small ∼60-day-period libration (which they call `nutation') in the lines of the apsides of both binaries, caused by the other pair. They do not investigate, however, how such a relatively wide outer orbital separation would be able to produce such a short-period effect, nor how it could account for the rapid apsidal motion. In contrast to their results, we found that these small-amplitude, so-called `nutations' are the principal third-body effects. These variations are, in fact, the same thing as we have seen in the ETVs of several tight, hierarchical triple stellar systems, where the ETVs are dominated by the P_out-timescale third-body perturbations due to the tertiary (which, in the present case, is also a binary itself). These variations can be modeled both analytically <cit.> and numerically, as in the present work. Our photodynamical analysis shows clearly that these structures in the ETVs of both EB pairs can be explained by the mutual P_out-timescale third-body perturbations due to the other EB and, consequently, that the two binaries orbit around each other with a period of P_out=121.50±0.02 days (see the lower panels of Fig. <ref>). This conclusion makes this 2+2 quadruple currently the one with the shortest known outer period. Regarding the longest-period, largest-amplitude cycles in the ETVs (of ∼25 years) of both binaries, we agree with <cit.> that these arise from apsidal motion. We note, however, that this relatively short period of the apsidal motions in both binaries can be explained neither by combinations of the classic tidal effects and the general relativistic contribution in the EBs, nor by the gravitational perturbations of a relatively distant companion on an ≈6-yr-period orbit. The characteristic timescale of dynamically driven apsidal motion is proportional to P_apse∝ P_out^2/P_in, which, in the case of P_out=2420 days, yields ≈4923 and ≈5457 yrs for binaries A and B, respectively. In contrast to this, as one can see, e.g., in the upper panels of Fig. <ref>, and also in Table <ref>, where we tabulate the tidal, relativistic, and third-body contributions to the apsidal advance rate separately, our photodynamical solution indeed reproduces the observed apsidal motions. The most pressing issue regarding the ETV curves is the origin of the extra cyclic variations with a period of a few years. As was mentioned above, <cit.> find their period to be ∼2420±40 d, while our complex photodynamical analysis (and the preliminary analytic ETV studies as well) have resulted in a period close to half the value they found. Below, we attempt to explain the origin of these `extra' ETV variations that occur on a timescale of ∼1200 days. §.§ Origin of the ETV variations on a timescale between P_out and P_apse-node As mentioned above, the most interesting question about BU CMi is the origin of the extra, ≈3 or 6 yr-period cyclic variations of the ETVs. This effect, at first sight, looks quite mysterious because these cycles appeared naturally in the numerically-generated ETVs during our modelling processes (see, e.g., the upper panels of Fig. <ref>). However, they do not appear even in our most detailed analytical ETV model, described in <cit.>.
In other words, they are not present in our formulae based on the once-averaged and doubly-averaged octupole-order analytic perturbation theories of hierarchical triple stellar systems.[In a hierarchical triple system, the perturbations occur on three different, well separable time-scales. (i) The short-period periodic perturbations have a characteristic period proportional to the period P_in of the inner binary, and the relative amplitudes are related to (P_in/P_out)^2; (ii) the medium-period periodic perturbations act on the time-scale of the outer orbit P_out, while their amplitudes are proportional to P_in/P_out; and, finally, (iii) the long-period perturbations (sometimes called `apse-node' perturbations) have a characteristic time-scale related to P_out^2/P_in, while their relative amplitudes are of the order of unity, i.e., the given orbital element might take any of its physically realistic values during that interval. Considering the analytic description of these perturbations, e.g., with a perturbed Hamiltonian, the three groups of perturbations are connected to those trigonometric terms in which the arguments contain: (i) the mean anomaly of the inner orbit; (ii) the mean anomaly of the outer orbit, but not that of the inner orbit; and (iii) neither of the two mean anomalies. Hence, averaging out both mean anomalies (double averaging) from the Hamiltonian, one can model analytically the long-term (and secular) perturbations. By contrast, when averaging out only the inner mean anomaly, the medium-period perturbations can be studied.] In order to identify the origin of these effects, we checked the variations of the instantaneous osculating orbital elements during each numerical integration step. In Fig. <ref> we show the periodic variations found in the semi-major axes of the two binaries' orbits, which are similar to what we see in the observed ETV curves. We show in the upper right panel of Fig. <ref> that these variations have exactly the same periods and opposite phases in the two binaries and, hence, their effect on the ETVs may closely mimic the signature of a mutual LTTE. The connection between these small variations of the semi-major axes and the ETVs is also well illustrated in the lower left panel of Fig. <ref>. We can infer this connection not only from the similar periods, but also from the fact that the larger the amplitude of the variations in the semi-major axes, the larger the amplitude of the extra ETV bumps. Moreover, one can see that there is an exact 90° phase shift between the extrema of the semi-major-axis and ETV variations. This is in perfect accordance with the fact that the ETVs represent cumulative, or integrated, variations. Mathematically, a sine-like perturbation in the semi-major axis (or, equivalently, in the mean motion) will result in a cosine-like ETV upon integration. This finding is not surprising, insofar as these slight variations in the semi-major axes naturally reflect the instantaneous mean motion of the given body, to which the ETVs are extremely sensitive. On the other hand, such variations in the semi-major axes are surprising in the sense that both the once- and doubly-averaged perturbation theories of hierarchical triple systems arrive at the conclusion that there are no medium-period, long-period, or secular perturbations in the semi-major axes.
In other words, according to the perturbation theories that are usually used in the hierarchical stellar three-body problem, one should not expect any variations in the semi-major axes for which the periods exceed the period of the inner binary. Note, however, that in our last theoretical work about the analytical description of the ETVs of tight triples <cit.>, we introduced some further terms, which allow periodic perturbations in the inner semi-major axes on the timescale of the outer period; here, however, we detected periodic perturbations in these elements with a factor of ∼10 longer period. In order to test the physical origin of this behaviour, we made some additional numerical runs where all but one of the initial parameters were the same (i.e., they were taken from the best-fit model), and only the outer period of the quadruple was slightly modified. As one can see in the upper panels of Fig. <ref>, only a small change in the outer period can result in substantial variations in the amplitude and the period of the effect, and this can also be seen clearly in the corresponding simulated ETVs (bottom panels of Fig. <ref>). In generating and studying Fig. <ref>, we found it interesting to note that the closer the ratio of the outer to inner periods is to an integer, the larger the amplitude and longer the period are of these few-year-long cycles. For example, in the order of decreasing amplitudes, in binary A the period ratios are 40.99; 41.33; 41.73; while for binary B: 36.94; 37.24; 37.61.[For calculating these ratios, we did not use the instantaneous osculating anomalistic periods at epoch t_0 used as input parameters to the integrations; rather, from the results of the numerical integrations, we derived average outer periods, which are also given in the legends of the upper left panel of Fig. <ref>.] In our view, these facts suggest some similarities with the mean-motion resonances of the classical planetary perturbation theories. Namely, when the mean motions are nearly commensurable this may lead to large-amplitude and long-period perturbations in the given orbital elements. Normally, however, only low-order mean motion resonances will produce large amplitude perturbations, because the amplitudes of the resonant perturbations in general are multiplied by some power-law functions of the eccentricities, in which the powers are proportional to the order of the resonances. Here, we suspect some similar behavior: Namely, due to the commensurability of the inner and outer periods, some very high order `resonant' terms yield some contribution which cannot be averaged out perfectly. This actually results in a very low-amplitude cyclic variation in the semi-major axes (the relative variations are of the order of 10^-4 -10^-5). But, due to the extreme sensitivity of the ETVs to the mean-motions, these tiny variations may produce observable effects in the occurrence times of the eclipses. We note that this effect is worthy of a more detailed and quantitative investigation. But it should be carried out first for a simpler configuration, i.e., for an actual 2+1 triple system instead of a 2+2 quadruple. In this regard we note that the photodynamical analysis of the tight triple TIC 167692429 <cit.>, which was based on the first year of TESS data revealed similar extra cycles in the ETVs; these have now been verified by the third and fifth years of new TESS observations. We will further investigate these extra cycles in the context of this triple system in the near future. 
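Both the near-commensurability of the periods and the integral relation between a mean-motion perturbation and the resulting ETV are easy to illustrate with a short, purely pedagogical Python sketch. The perturbation amplitude and period used below are round numbers of the same order as those discussed in this section; they are not taken from the photodynamical fit.

import numpy as np

# Near-commensurability of the outer and inner periods (nominal values from the text)
p_out, p_A, p_B = 121.5, 2.94, 3.26
print(p_out / p_A, p_out / p_B)    # ~41.3 and ~37.3, i.e. not far from integers

# Toy model: a sine-like perturbation of the mean motion (equivalently of the
# semi-major axis, since n scales as a^(-3/2)) integrates into a cosine-like,
# 90-degree-shifted eclipse-timing variation.
p_pert = 1200.0                    # period of the perturbation [d]
rel_amp = 1.0e-4                   # relative mean-motion perturbation
t = np.arange(0.0, 3600.0, p_A)    # nominal eclipse epochs [d]
n0 = 2.0 * np.pi / p_A             # unperturbed mean motion [rad/d]
# accumulated orbital phase = integral of n0 * (1 + rel_amp * sin(2*pi*t/p_pert)) dt
phase = n0 * (t + rel_amp * p_pert / (2.0 * np.pi)
              * (1.0 - np.cos(2.0 * np.pi * t / p_pert)))
etv = t - phase / n0               # timing offset with respect to the linear ephemeris [d]
print(0.5 * (etv.max() - etv.min()) * 1440.0)   # semi-amplitude in minutes (~27 min here)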
§ CONCLUSIONS The bright variable star BU CMi is composed of two short-period eclipsing binaries. Our work has found it to be the tightest quadruple system known, with P_ out≃ 121.5 days. Although the quadruple nature of the system was first pointed out by <cit.> from the MASCARA photometry, they found a much longer outer orbital period of P = 6.62 years, attributing the 121-day ETV variations to a `nutation' effect. Simultaneous modelling of high-dispersion spectroscopy and TESS light curves conclusively demonstrates that the line profile variations cannot be explained by the 6.62-year periodicity. The spectroscopic data clearly show large, ∼50 km s^-1, variations in the systemic velocity of the inner subsystems (i.e., the two EBs). For P_ out = 6.62 years, the amplitude of the RV changes of the binary centers of mass would be only about ∼ 18 km s^-1. Even more conclusive proof of the extremely tight outer orbit comes from the detailed photodynamical modelling, which takes into account satellite photometry, all available eclipse timing data, and the RVs. The numerical integration of the orbits explains the ETV changes with the 121-day period (the outer orbit), establishes the longer-term ETV variability on the ∼1200-day period, and perfectly predicts the rapid apsidal motions in the EBs on ∼25-year timescales. This apsidal motion is very rapid, and it is dynamically driven by the mutual gravitational perturbation of the binaries. For the much longer outer orbit determined by <cit.>, the apsidal motion rate would be substantially slower. Although the orbital and stellar component parameters are well constrained by the complex photodynamical modeling, there remain a few open questions that require further observations. For example, we seek to constrain any spin-orbit misalignment; this possibility, however, appears unlikely due to the near-coplanarity of the inner and outer orbits. Determination of the spin-orbit misalignment would require dedicated spectroscopy of the system during the eclipses of the stellar components. The small observed range of the radial velocities of the components and their high rotation rates would complicate the modelling of the Rossiter-McLaughlin effect. Assuming that the stellar rotational rates are quasi-synchronous (rotation rate equals the Keplerian orbital rate at periastron) and that the spin axes are perpendicular to the inner orbits, we can further constrain the stellar radii from their measured projected rotational velocities v sin i. Although both of the inner binaries are eclipsing, the relatively low inclination of the outer orbit, i_A-B∼ 83.8, and a relatively large semimajor axis, a_A-B∼ 221 R_⊙, preclude outer-orbit eclipses. A shallow outer eclipse could occur if the outer inclination angle were larger than about 88.5. Finally, we note that the BU CMi system could have had an outer period as short as ≈ 32 days and still be dynamically stable (see, e.g., ; ; ). This assumes the same EB periods and masses, the same outer eccentricity, and the same orbital coplanarity. Thus, there is much room in phase space to find even tighter quadruples. Whether evolution scenarios will permit such short-period quadruples is another matter. It is therefore worth trying to observationally push the boundaries to ever shorter outer periods. 
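The radial-velocity argument above can be made quantitative with the standard expression for the center-of-mass semi-amplitude of each binary. The short sketch below uses rough, illustrative masses of about 2.3 M_sun per star and a modest outer eccentricity rather than our fitted values; it nevertheless recovers the order-of-magnitude contrast between the two candidate outer periods (a few tens of km s^-1 for 121.5 d versus well under 20 km s^-1 for 6.62 yr).

import numpy as np

def com_semi_amplitude(m_self, m_other, p_days, ecc, incl_deg):
    # K = 212.9 km/s * m_other * sin(i) / [(m_self + m_other)^(2/3) * P^(1/3) * sqrt(1 - e^2)],
    # with the masses in solar units and the period in days
    return (212.9 * m_other * np.sin(np.radians(incl_deg))
            / ((m_self + m_other) ** (2.0 / 3.0) * p_days ** (1.0 / 3.0)
               * np.sqrt(1.0 - ecc ** 2)))

m_A = m_B = 4.6   # rough total masses of the two binaries [M_sun]
print(com_semi_amplitude(m_A, m_B, 121.5, 0.2, 90.0))    # ~46 km/s for the tight outer orbit
print(com_semi_amplitude(m_A, m_B, 2420.0, 0.2, 90.0))   # ~17 km/s for a 6.62-yr outer orbit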
§ DATA AVAILABILITY All photometric and spectroscopic data used in this paper and the codes used for the direct fitting of the spectra and photodynamical analysis will be shared upon a reasonable request to the corresponding author. § ACKNOWLEDGEMENTS A. P. acknowledges the financial support of the Hungarian National Research, Development and Innovation Office – NKFIH Grant K-138962. TP and RK acknowledge support from the Slovak Research and Development Agency – contract No. APVV-20-0148 and the VEGA grant of the Slovak Academy of Sciences No. 2/0031/22. GB, ZC and JH acknowledge funding from NASA Grant 80NSSC22K0315. We would also like to thank the Pierre Auger Collaboration for the use of its facilities. The operation of the robotic telescope FRAM is supported by the grant of the Ministry of Education of the Czech Republic LM2023032. The data calibration and analysis related to the FRAM telescope is supported by the Ministry of Education of the Czech Republic MSMT-CR LTT18004, MSMT/EU funds CZ.02.1.01/0.0/0.0/16_013/0001402, CZ.02.1.01/0.0/0.0/18_046/0016010 and CZ.02.1.01/0.0/0.0/18_046/0016007. This paper includes data collected by the TESS mission. Funding for TESS is provided by NASA's Science Mission Directorate.
http://arxiv.org/abs/2307.03394v1
20230707053354
Towards Robust SDRTV-to-HDRTV via Dual Inverse Degradation Network
[ "Kepeng Xu", "Gang He", "Li Xu", "Xingchao Yang", "Ming Sun", "Yuzhi Wang", "Zijia Ma", "Haoqiang Fan", "Xing Wen" ]
eess.IV
[ "eess.IV", "cs.MM" ]
Towards Robust SDRTV-to-HDRTV via Dual Inverse Degradation Network Kepeng Xu1 2 Gang He1 3 * Li Xu1 Xingchao Yang2 Ming Sun3 Yuzhi Wang2 Zijia Ma1 Haoqiang Fan2 Xing Wen3 1 Xidian University 2 Megvii Technology 3Kuaishou Technology kepengxu11@gmail.comghe@xidian.edu.cn August 1, 2023 =============================================================================================================================================================================================================================== empty Recently, the transformation of standard dynamic range TV (SDRTV) to high dynamic range TV (HDRTV) is in high demand due to the scarcity of HDRTV content. However, the conversion of SDRTV to HDRTV often amplifies the existing coding artifacts in SDRTV which deteriorate the visual quality of the output. In this study, we propose a dual inverse degradation SDRTV-to-HDRTV network DIDNet to address the issue of coding artifact restoration in converted HDRTV, which has not been previously studied. Specifically, we propose a temporal-spatial feature alignment module and dual modulation convolution to remove coding artifacts and enhance color restoration ability. Furthermore, a wavelet attention module is proposed to improve SDRTV features in the frequency domain. An auxiliary loss is introduced to decouple the learning process for effectively restoring from dual degradation. The proposed method outperforms the current state-of-the-art method in terms of quantitative results, visual quality, and inference times, thus enhancing the performance of the SDRTV-to-HDRTV method in real-world scenarios. § INTRODUCTION High dynamic range television (HDRTV) has become increasingly popular because it can more realistically reproduce real-world luminance and color information, providing people with a better video viewing experience. The main differences between SDRTV and HDRTV are dynamic range, color gamut, and bit depth. However, despite the advances in HDRTV technology, there is a lack of available HDRTV content. Therefore, the conversion of SDRTV to HDRTV is an important work as it can help to alleviate the scarcity of HDRTV content and improve the video viewing experience for consumers. Convolutional neural networks are well suited for low-level image and video enhancement. At this stage, there have been a large number of specific applications, such as video restoration<cit.>, image restoration<cit.>, image denoising<cit.>, and image synthesis<cit.>, and so on. Therefore, the convolutional neural network based method has emerged to convert SDRTV to HDRTV. The existing methods <cit.> can effectively convert SDRTV to HDRTV frame by frame through feature modulation and dynamic convolution. We combined the actual needs of SDRTV-to-HDRTV with the existing technology for analysis and got three observations. The first observation is that the frame-by-frame SDRTV-to-HDRTV method extracts the current frame information for feature modulation, and ignores the multi-frame information for color restoration. The single-frame method is prone to the discontinuity between frames. The second observation is that solving the issue of the coding artifacts being amplified during the inverse tone mapping process is indispensable. Due to some historical technical and copyright reasons, a large number of current SDRTV videos do not have approximately lossless versions, and only relatively low-quality SDRTVs exist. 
Therefore, the SDRTV-to-HDRTV method needs to convert low-quality (LQ) SDRTV to high-quality (HQ) HDRTV in practical applications, as shown in Figure <ref> (a). Meanwhile, previous works <cit.> found that the conventional method of converting LQ SDRTV to HDRTV amplifies the coding artifacts. As shown in Figure <ref>, LQ SDRTV with inverse tone mapping exhibits significant coding artifacts. In particular, we found that compared to SDRTV, HDRTV has more information in the high-frequency domain (third observation), so enhancing features in the frequency domain can effectively improve the quality of HDRTV. According to the above three observations, we model practical SDRTV-to-HDRTV as a dual inverse degradation task (video restoration and inverse tone mapping). Previous methods convert high-quality SDRTV to high-quality HDRTV, as shown in Fig.<ref>(b). But usually SDRTV is not of high quality in real scenes, and this gap will lead to poor performance of such methods. In our DIDNet, temporal-spatial feature alignment and auxiliary loss are proposed to improve the spatial texture quality of HDRTV. Furthermore, to improve color restoration quality, a dual modulation convolution that cooperates with a 3D ConditionNet has been designed. Finally, we propose a wavelet attention module to enhance the frequency domain features to further improve the HDRTV quality. The proposed DIDNet (Fig. <ref> (c)) is able to perform dual degradation recovery simultaneously, making the SDRTV-to-HDRTV method really move towards real applications. Our contributions include four main points. * We investigate that the HDRTV obtained by SDRTV-to-HDRTV conversion in real application scenarios has the problem of excessive amplification of coding artifacts. For the first time, a multi-reference frame alignment method is proposed to solve the serious problem of HDRTV artifacts. * We reveal inverse tone mapping and artifact restoration are coupled in the process of SDRTV-to-HDRTV. Therefore, an auxiliary loss is designed to learn artifact removal, which allows efficient learning of dual restorations using a single end-to-end network. * We analyze the computational mode of feature modulation and design a lighter and more efficient double modulation convolution. * We discovered that HDRTV has more high-frequency information, so we proposed wavelet attention to improve the quality of HDRTV in the frequency domain. § RELATED WORK SDRTV-to-HDRTV conversion is the reconstruction of standard dynamic range video (SDRTV) images into high dynamic range video (HDRTV). <cit.> first studied the problem of super-resolution and SDRTV-to-HDRTV together. In these works, the input image is decomposed into a detail component for texture reconstruction and a base component for contrast enhancement. Specifically, <cit.> first performs the SDRTV-to-HDRTV conversion using a convolutional neural network (CNN). Then, <cit.> introduces modulation blocks to modulate the local intensity in a spatially varying manner to achieve adaptive local contrast enhancement. Recently, <cit.> has proposed a scheme for SDRTV-to-HDRTV. Inspired by the SDR/HDR formation process, <cit.> proposes a three-step solution pipeline that includes adaptive global color mapping, local enhancement, and highlight generation. <cit.> also provides a benchmark dataset called HDRTV1K for SDRTV to HDRTV conversion. <cit.> proposes a model for joint local and global feature modulation<cit.> capable of local adaptive tuning. 
<cit.> proposes a feature mapping model and uses dynamic convolution to model feature transformations, thus completing the inverse tone mapping process more accurately. <cit.> uses the discrete cosine transform to enhance the low frequency information, which is used to reduce artifacts in the low frequency part. Although these methods successfully perform inverse tone mapping, the existing SDRTV always has some degradation (coding artifacts). These degradations are amplified during the inverse tone mapping process, resulting in poor quality HDRTV from the conversion. Unlike the previous methods, this paper addresses the above encoding degradation recovery problem and designs a lighter and more accurate dual modulation convolution for more accurate inverse tone mapping. § METHODOLOGY This section details the motivations and solutions in the SDRTV-to-HDRTV process. §.§ Motivations Artifacts in generated HDRTV. Typically, video content undergoes a video coding process to reduce storage costs, which results in some artifacts during video coding. The extent of artifact distortion is determined by the bit rate utilized for encoding and the complexity of the scene. In most cases, SDRTV is encoded with an 8-bit depth, leading to the presence of encoding artifacts. As mentioned in <cit.>, these artifacts are amplified during the inverse tone mapping process. If the coding artifacts are not adequately addressed, the visual quality of the resulting HDRTV output will be poor. Limitations of single-frame global feature modulation (GFM). Previous methods only perform inverse tone mapping in the form of single-frame global feature modulation. Specifically, a state vector is predicted by the image of the current frame, and then global broadcast multiplication and addition is performed on the state vector and the image features extracted from the current frame. This single-frame adaptive processing is hindered by the lack of continuity between frames. Additionally, the computational complexity of feature modulation increases as the video frame resolution increases. Dual degradation learning. Previous methods only restore from single degradation (tone mapping), but in real-world applications, coding artifacts can lead to the unacceptable visual quality of HDRTV generated by these methods. SDRTV-to-HDRTV is a dual inverse degradation learning process, i.e., the restoration for coding artifacts and inverse tone mapping. This complexity makes it challenging to learn dual degeneration using a single model. A straightforward solution is to employ two separate models where the first model trains only for restoration and the second model learns only inverse tone mapping. However, successive independent training of such multiple submodels leads to cumulative errors and performance degradation due to poor coordination. To address this issue, we propose an auxiliary loss to facilitate coupled learning of dual degradation. Less high frequency information in SDRTV. During our research, we found that HDRTV contains richer high-frequency information compared to SDRTV, as shown in Figure <ref>. Consequently, enhancing the feature in frequency domain can improve the visual quality of HDRTV. §.§ Overall of dual inverse degradation network As elaborated by us previously, we propose the dual inverse degradation model to address the issue of coding artifact restoration in converted HDRTV, which allows for efficient learning of dual recovery using a single end-to-end model. The overall framework is presented in Fig.<ref>. 
First, the low-quality SDRTV frame X_LS is input to the temporal-spatial alignment feature fusion module TSAF to obtain F_fusion. X_fusion = TSAF(X_LS) The predicted high-quality SDRTV frames Ẍ_HS are obtained by convolving F_fusion with a 3×3 convolution. Ẍ_HS = Conv_3 × 3(X_fusion) Auxiliary loss L_Aux is computed using Ẍ_HS with high-quality SDRTV frames X_HS, thus allowing TSAF to learn the coding artifact inverse degradation process in a targeted manner. L_Aux = L_1(Ẍ_HS,X_HS) Meanwhile, F_fusion and X_LS are input to the dual-modulated convolution module DMC to obtain the modulated feature F_Modulated. F_Modulated = DMC(F_fusion,X_LS) The F_Modulated is input to the wavelet attention module WA to enhance the feature in the frequency domain, then a 3×3 convolution is performed to obtain the predicted high-quality HDRTV frame Ẍ_HH, and the main loss L_Main is calculated with the high-quality HDRTV frame X_HH. Both loss functions use L_1. Ẍ_HH = Conv_3 × 3(WA(F_Modulated)) L_Main = L_1(Ẍ_HH,X_HH) §.§ Restoration: temporal-spatial alignment quality enhancement To leverage the temporal information while overcoming artifacts, we propose a temporal-spatial alignment method based on deformable convolution <cit.>. The input SDRTV frame X_LS is first processed by predicting the offset F_offset with an Unet-like structure. Subsequently, deformable convolution is performed using F_offset to obtain spatially aligned features F_Aligned. These aligned features are then further enhanced by aggregating them using residual blocks to obtain the fused features F_fusion. §.§ Auxiliary Supervision: restoring high quality SDRTV In the motivation, it was mentioned that learning coding artifact recovery and inverse tone mapping simultaneously through a single model is challenging due to the coupling of the two degeneracies. To address this issue, we propose an auxiliary loss for SDRTV artifact recovery, which directly improves the quality of the SDRTV frames and forces the alignment fusion component to learn the quality enhancement function. This decoupled learning approach enables a single model to effectively restore both degradations. Specifically, the fused features F_fusion are fed into a 3×3 convolution to generate predicted high-quality SDRTV frames Ẍ_HS. The training process uses high-quality SDRTV frames X_HS as a supervisory signal, which encourages the temporal-spatial alignment feature to focus on learning artifact removal and enhancing the quality of the SDRTV frames. §.§ Dual Modulated Convolution §.§.§ Preliminary of Global Feature Modulation To utilize the global prior extracted from the input image, previous studies <cit.> have introduced a method called Global Feature Modulation (GFM), which has shown to be effective in tasks such as photo retouching and SDRTV-to-HDRTV conversion. GFM involves modulating the output feature of a convolutional layer using scaling and shifting operations, which can be represented by the formula (<ref>). y = α· Conv(x) + β §.§.§ Feature modulation = Convolutional kernel modulation GFM modulates the features through scaling and shifting operations using a modulation vector obtained from the condition network. However, we discovered that the same feature scaling and shifting can be achieved by modulating the convolution kernel. To demonstrate this equivalence, we use the example of a 1×1 convolution to deduce that modulating features is equivalent to modulating the convolution kernel. 
This mathematical equivalence holds true for more complex convolutions, such as the 3×3 convolution. To further clarify this equivalence, we consider a specific pixel convolution as an example. Without loss of generality, let us assume that the input feature channel is 3 and the output feature channel is 4. Denote the input feature as x∈ R^3, the convolution kernel as w ∈ R^4,3, the output feature as y∈ R^4, the modulation vector as α∈ R^4, and the bias as b ∈ R^4. The matrix dot product is represented as ⊙. The convolution operation can be expanded as shown in formula (<ref>). In formula (<ref>), the 1 × 1 convolution is performed first. It has been proven that 1 × 1 convolution is equivalent to a fully connected layer <cit.>. Therefore, we describe the feature modulation and convolution process as a fully connected layer. Next, the output feature is multiplied by the modulation vector α and then added to the bias vector β. Each element in the output feature obtained after convolution interacts with the modulation vector. To simplify the process, we merge the modulation vector α into the convolution kernel parameters and merge β into the bias parameters to obtain formula (<ref>). The original method of feature modulation multiplies each pixel of the image feature by a scaling factor α and adds a bias term β. In contrast, modulating the convolution kernel achieves the same effect by modulating the kernel and bias terms directly. This approach requires significantly less computation, as the computational cost of modulating the convolution kernel is C_O × C_I + C_O, while the cost of feature modulation is 2 × H × W × C_O. For the SDRTV-to-HDRTV task, which typically involves high resolutions such as 1080P and above, modulating the convolution kernel results in less than 1/1000 of the computational cost of feature modulation. Therefore, modulating the convolution kernel provides a significant computational advantage. §.§.§ Dual modulated convolution We will further examine the modulated convolution kernel and its properties. Feature modulation involves multiplying each convolution kernel by a modulation factor. Common forms of feature modulation, such as formula (<ref>), involve modulating features after convolution. To enhance the tone-mapping ability of our model, we designed a dual feature modulation module that modulates input features before and after convolution, as seen in formula (<ref>). Interestingly, we found that the dual feature modulation before and after convolution can be converted to convolution kernel modulation, which we refer to as the DMC module. As shown in formula (<ref>), dual feature modulation is mathematically equivalent to convolution kernel modulation, and DMC modulates convolution kernels of various dimensions. Ultimately, DMC achieves better feature transformation with significantly less computation than feature modulation. §.§.§ 3D ConditionNet In order to improve the accuracy of feature modulation vector extraction and alleviate inter-frame jitter, we propose a 3D ConditionNet to extract color priors from multiple SDRTV frames. This simple yet effective design pattern results in improved HDRTV quality. The network structure of the 3D ConditionNet is depicted in Figure <ref>. 
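The equivalence between feature modulation and kernel modulation, and its dual (pre- and post-convolution) extension, can be verified numerically in a few lines of PyTorch. The snippet below is an illustrative sketch with hypothetical tensor names and a 1×1 convolution; it is not our released implementation.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 3, 8, 8)                   # input feature map (B, C_in, H, W)
w = torch.randn(4, 3, 1, 1)                   # 1x1 convolution kernel (C_out, C_in, 1, 1)
b = torch.randn(4)                            # bias
alpha, beta = torch.randn(4), torch.randn(4)  # post-convolution scale and shift vectors
gamma = torch.randn(3)                        # pre-convolution scale vector (dual case)

# (1) global feature modulation: y = alpha * Conv(x) + beta
y_feat = alpha.view(1, -1, 1, 1) * F.conv2d(x, w, b) + beta.view(1, -1, 1, 1)
# (2) the same result obtained by modulating the kernel and the bias instead
y_kern = F.conv2d(x, w * alpha.view(-1, 1, 1, 1), alpha * b + beta)
print(torch.allclose(y_feat, y_kern, atol=1e-6))            # True

# (3) dual modulation: scale the input before and the output after the convolution;
#     equivalently, scale the kernel along both its input- and output-channel axes
y_dual_feat = (alpha.view(1, -1, 1, 1)
               * F.conv2d(gamma.view(1, -1, 1, 1) * x, w, b)
               + beta.view(1, -1, 1, 1))
w_dual = w * alpha.view(-1, 1, 1, 1) * gamma.view(1, -1, 1, 1)
y_dual_kern = F.conv2d(x, w_dual, alpha * b + beta)
print(torch.allclose(y_dual_feat, y_dual_kern, atol=1e-5))  # True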
§.§ Wavelet attention: better restoration of high-frequency detail During our investigation, we discovered that in comparison to SDRTV, HDRTV exhibits not only differences in low-frequency information such as dynamic range and color information, but it also possesses a greater amount of high-frequency information, as illustrated in Figure <ref>. To improve the quality of HDRTV, we design a wavelet attention module to enhance the high-frequency information of the features. The structure of the frequency-domain attention enhancement module we design is shown in Fig. <ref>. The specific process is as follows: first, the input feature x is decomposed into different subbands (ll,lh,hl,hh)∈ coeffs by the wavelet transform. Then, the subbands are concatenated and their dimensionality is reduced by a 1×1 convolution to obtain z. Channel attention is performed on the reduced-dimensional features to obtain z_o. After a further 1×1 convolution, the features are split back into subbands along the channel dimension, and a skip connection is added. The enhanced subbands are transformed back by the inverse wavelet transform to obtain the output feature x_o. coeffs = Wavelet(x) z = Conv(Concat(coeffs)) s = Conv(GlobalPooling(z)) z_o = Conv(z · s ) coeffs_o = UnConcat(z_o) x_o = iWavelet(coeffs_o+coeffs) § EXPERIMENT §.§ Experiment setup Dataset. We used the videos provided by <cit.> as a data set. All of these HDR videos were encoded with PQ-OETF <cit.> and the Rec.2020 <cit.> color space, and the video resolution was 1080P. Eighteen pairs of videos were used for training and four videos were used for testing. We used X265 <cit.> to encode the SDR videos with different fixed QP values (27, 32, 37, 42) to construct datasets with different degrees of coding degradation. We perform scene segmentation on the test set and extract 348 video sequences of 10 frames each. Implementation details. During the training process, we use the SDR video encoded with QP=37 as input data, and the output is high-quality HDR. The Adam optimizer <cit.> is used, and the initial learning rate is set to 0.0005. After 100,000 iterations, the learning rate is reduced by half every 60,000 iterations, and the total number of training iterations is set to 660,000. Mean absolute error (MAE) is used to calculate the loss. In the dual-loss model training, the weight of the primary loss is set to 0.8, and the weight of the auxiliary loss is set to 0.2. To evaluate the generalization of the different algorithms fairly, we take the last 6 stored checkpoints (560,000, 580,000, 600,000, 620,000, 640,000, 660,000 iterations) to test the metrics. The model trained on coding degradation with a fixed QP=37 was tested on 4 test sets encoded with different QPs. As in <cit.>, this multiple evaluation ensures that we can accurately and fairly evaluate the performance of different models. Metrics. We conducted comparative experiments on several QP-encoded test sets. The different methods are evaluated using seven metrics: PSNR, SSIM, MS-SSIM, Δ E_ITP, VIFp <cit.>, Harrpsi <cit.>, and VSI <cit.>. These seven metrics evaluate the performance of the various algorithms in terms of objective fidelity, structural similarity, multi-scale structural similarity, color fidelity, visual fidelity, local similarity, and visual saliency, respectively. §.§ Quantitative results We compare the proposed method with the state-of-the-art SDRTV-to-HDRTV methods (HDRTVNet<cit.>, FMNet<cit.>, HDCFM<cit.>, HyCondITM<cit.>, etc.).
For a fair comparison, all image quality enhancement methods were retrained in our training set, and the last 6 checkpoints of each model were selected to be tested on the test set and the average value was calculated. The Quantitative Results. The quantitative results for each metric are shown in Tables 1 and 2, respectively. It can be observed that our method consistently outperforms all comparison methods in terms of mean PSNR, SSIM, MSSSIM, Δ E_ITP, Harrpsi, and VIS for the test set. Specifically, in the PSNR and MSSSIM metrics, our method improves by 0.35 and 0.0013, respectively, compared to the previous SOTA. In terms of color difference, our method achieves a reduction of 0.614. Similar results can be found for VSI and other metrics. §.§ Qualitative results Fig.<ref> shows the qualitative results for the four test video frames. As can be seen, the HDRTV frames obtained from the conversion are heavily distorted by compression artifacts. The single-frame-based approach can effectively perform inverse tonal mapping of video frames, but the resulting frames are usually enhanced for noise such as coding artifacts. Our method can effectively recover the dual degradation and prevent the coding artifacts from being amplified during the inverse tone mapping process, thus improving the quality of the converted HDRTV frames. §.§ Ablation study To understand the contributions of the proposed components, we start with a baseline and gradually insert the components. The models are trained on QP=37 and the average test results over multiple QPs are reported. Ablation of temporal-spatial alignment module. To ablate multi-frame and temporal-spatial alignment methods, we introduce multi-frame input and alignment in multiple methods, respectively. We introduced Multi-Frame into the model design of RESNET and HDRTVNET, and the average PSNR was increased by 0.1 and 0.39, respectively. This demonstrates the effectiveness of multi-frame input. On this basis, we added a temporal-spatial alignment module on the basis of AGCM and HDRTVNET, and the PSNR was improved by 0.08 and 0.67. It should be noted that the temporal-spatial alignment has been further improved compared to the simple multi-frame input, and the PSNR has been improved by 0.28. The PSNR improvement is summarized in Table <ref> line 1-4. Ablation of auxiliary loss. Since high-quality HDRTV to low-quality SDRTV is a dual degradation process, the model needs to learn both quality enhancement (artifact removal) and inverse tone mapping. A single-model learning dual restoration suffers from coupled learning problems, which leads to degraded model performance. To solve this problem, we use the auxiliary loss to supervise the dual degradation learning process. We show the experimental results of introducing auxiliary loss on the PSNR metric in Table <ref> line 6 in our method and Aligned-HDRUNET, respectively. After introducing the auxiliary loss in our method, the PSNR is improved by 0.35. After Aligned-HDRUNET introduces the auxiliary loss, the PSNR is improved by 0.31. The PSNR improvement is summarized in Table <ref> line 6-7. Ablation of 3D ConditionNet. To ablate 3D conditional network for prior extraction, we discard multi-frame input conditional network. After discarding the 3D conditional, the PSNR drops by 0.36db. It can be concluded that 3D ConditionNet can estimate the color prior more accurately, thus improving the quality of HDRTV The PSNR improvement is summarized in Table <ref> line 5. Ablation of wavelet attention (WA). 
To ablate the Wavelet Attention module, we add the WA module on top of the previous configuration. The WA module enhances the features in the frequency domain, which allows more details and finer edges to be reconstructed; the results are shown in Table <ref>. The average PSNR improved from 33.31 to 33.41. Ablation of dual modulated convolution (DMC). Our proposed modulated convolution is significantly less computationally intensive than feature modulation. As seen in Table <ref>, its additional computation is constant and does not increase with the input image resolution. We conducted ablation experiments, and DMC can further improve HDRTV quality: Table <ref> reports that after adding DMC, the PSNR of the converted HDRTV increases by 0.03. § CONCLUSION We analyze the difficulties in the current SDRTV-to-HDRTV process, including inaccurate inverse tone mapping, amplified artifacts, difficulties in learning the coupled degradations, and insufficient high-frequency information. The corresponding modules Dual Modulated Convolution (DMC), Auxiliary Loss, and Wavelet Attention (WA) are proposed. DMC can perform inverse tone mapping more accurately. The proposed auxiliary loss decouples the learning of inverse tone mapping and artifact repair, yielding high-quality HDRTV content. WA enhances the features in the frequency domain, which can further improve the quality of HDRTV. Our proposed method makes SDRTV-to-HDRTV practical, solves the issue of low visual quality caused by the amplification of artifacts, and can convert real-world low-quality SDRTV into high-quality HDRTV.
http://arxiv.org/abs/2307.02375v1
20230705154206
Online Learning of Order Flow and Market Impact with Bayesian Change-Point Detection Methods
[ "Ioanna-Yvonni Tsaknaki", "Fabrizio Lillo", "Piero Mazzarisi" ]
q-fin.TR
[ "q-fin.TR", "econ.EM" ]
Online Learning of Order Flow and Market Impact with Bayesian Change-Point Detection Methods Ioanna-Yvonni Tsaknaki, Fabrizio Lillo, Piero Mazzarisi July 5, 2023 ======================================================================= Financial order flow exhibits a remarkable level of persistence, wherein buy (sell) trades are often followed by subsequent buy (sell) trades over extended periods. This persistence can be attributed to the division and gradual execution of large orders. Consequently, distinct order flow regimes might emerge, which can be identified through suitable time series models applied to market data. In this paper, we propose the use of Bayesian online change-point detection (BOCPD) methods to identify regime shifts in real-time and enable online predictions of order flow and market impact. To enhance the effectiveness of our approach, we have developed a novel BOCPD method using a score-driven approach. This method accommodates temporal correlations and time-varying parameters within each regime. Through empirical application to NASDAQ data, we have found that: (i) Our newly proposed model demonstrates superior out-of-sample predictive performance compared to existing models that assume i.i.d. behavior within each regime; (ii) When examining the residuals, our model demonstrates good specification in terms of both distributional assumptions and temporal correlations; (iii) Within a given regime, the price dynamics exhibit a concave relationship with respect to time and volume, mirroring the characteristics of actual large orders; (iv) By incorporating regime information, our model produces more accurate online predictions of order flow and market impact compared to models that do not consider regimes. § INTRODUCTION The study and modeling of order flow and market impact in financial markets hold paramount importance for understanding the incorporation of private information into prices and designing effective trading algorithms that consider transaction costs. A substantial body of literature (see for example <cit.> and references therein) has revealed that the joint modeling of impact and order flow is more intricate than initially presumed. The persistence and autocorrelation of signed trade order flow[I.e. the sequence of signed trade volume, positive (negative) when buyer (seller) initiated.] have been extensively documented since the works of <cit.> and <cit.>. This persistence aligns with a long memory process, suggesting that a realistic market impact model should combine statistically efficient prices with correlated order flow. The introduction of transient impact models, also known as propagator models (<cit.>), successfully accomplishes this goal. Additionally, empirical evidence has attributed the temporal persistence of order flow primarily to order splitting, as discussed in <cit.>. Order splitting refers to the common practice of large investors incrementally executing their orders, termed "metaorders," through several smaller trades known as "child orders." The model proposed by <cit.> quantitatively establishes a relationship between the autocorrelation of order flow and the distribution of metaorder sizes. In essence, this model postulates that the strong serial dependence arises from the optimal execution strategies employed by institutional investors, which leads to persistent order flow.
From an econometric perspective, this notion might be connected to the well-known fact that (approximately) long memory time series can be generated by regime-shift models, where each regime exhibits short memory and heterogeneous lengths. Regime shift models gained popularity around two decades ago when they were proposed to explain the long-range memory of volatility, as seen in <cit.>. This paper proposes to use regime shift models to describe order flow time series, with the objectives of: (i) econometrically explaining the long memory of order flow; (ii) enhancing the prediction of order flow and price dynamics through detected regimes; and (iii) suggesting a connection between regimes and the execution of metaorders. Unlike many conventional regime shift approaches that require a predetermined number of regimes (e.g., Hidden Markov Models), we focus on a model that allows online detection of change-points (CPs) to identify the occurrence of new regimes. Existing algorithms typically operate offline, recursively segmenting the time series ex-post into increasingly smaller regimes. In this paper, we specifically concentrate on online CP detection, employing Bayesian approaches. The Bayesian framework is well-suited for quantifying our uncertainty regarding CPs using the posterior distribution, as illustrated in <cit.>. We primarily consider the class of algorithms known as Bayesian Change-Point Detection Methods (BOCPD), pioneered by <cit.> as an improvement on the ideas developed by <cit.>. Since 2007, BOCPD and its extensions have found applications in various financial settings. Most applications have focused on stock returns, as demonstrated in <cit.>, <cit.>, <cit.>, and more recently, <cit.> utilized BOCPD as an exit-entry model for long-short prediction in the stock market. The work by <cit.> extended the BOCPD approach to the multi-sequence setting to analyze changes in 401 U.S. stocks within the S&P 500 index. BOCPD utilizes a message-passing algorithm to recursively compute the posterior distribution of the time since the last CP, termed the "run length". This elapsed time is continuously updated upon receiving new data points. To perform online inference, the underlying predictive model (UPM) is computed, representing the distribution of data given the current run length. For instance, the UPM may assume a Gaussian model with different means across regimes. In the BOCPD model, the data is assumed to be independently and identically distributed (i.i.d.) within each regime. However, this assumption is unrealistic for most financial time series. In this work, we extend BOCPD to accommodate Markovian data within each regime. While <cit.> consider the correlation structure of multivariate time series, their CP detection algorithm is offline. To the best of our knowledge, this is the first work that combines an online learning algorithm for CPs with a Markovian data structure. Furthermore, we propose a second extension of the BOCPD algorithm that relaxes the assumption of constant parameters within a regime, allowing for time-varying autocorrelation. To achieve this, we employ the class of Score Driven models introduced by <cit.> and <cit.>, which provide an observation-driven framework for real-time learning of time-varying parameters. Thus, our newly proposed method combines the online CP detection approach of BOCPD with the online learning of time-varying autocorrelation parameters within each regime. 
We present an empirical application of the proposed methods using order flow samples from stocks traded on NASDAQ. In an out-of-sample forecasting exercise, we find that the Score-Driven-based method outperforms other models, including autocorrelated time series models without regimes. Our analysis demonstrates that the model is correctly specified, and the residuals within each regime exhibit no correlation. By investigating the price dynamics during identified regimes, we discover that they follow concave functions of time, with the total price change in a regime also exhibiting a concave relationship with volume. These findings resemble those observed for real metaorders, consistent with the square root impact law. Finally, we demonstrate that knowledge of order flow regimes can be effectively utilized to improve predictions of order flow and price dynamics. We accomplish this by exploiting the well-known correlation between order flow and simultaneous/future price changes through market impact. The paper is organized as follows: In Section 2, we present the dataset, the variable of interest, and provide the motivation for applying a regime shift model to order flow time series. Section 3 covers the methodological aspects of the paper, outlining the main properties of BOCPD and introducing the two proposed extensions. In Section 4, we describe the estimation results of the models on NASDAQ data and analyze the obtained findings. Section 5 examines the average price dynamics during an order flow regime and quantifies the relationship between the total price change and the net order flow exchanged within a regime. Section 6 presents the forecasting analysis of order flow and the correlation with price dynamics through market impact. Finally, in Section 7, we draw conclusions and offer suggestions for further research. § DATA SET AND MOTIVATION In this paper, we consider the order flow of trades and in particular the aggregated signed volume. Let v_i (i=1,…,M) the signed volume (positive for buyer initiated and negative for seller initiated trades) of the i-th trade and let us indicate with N the number of trades we aggregate, so that our time series is composed by T=⌊ M/N ⌋ observations. The time series of interest is the aggregated order flow x_t on the interval ∩[N(t-1)+1,N(t-1)+N] given by x_t = ∑_j=1^N v_N(t-1)+j,          t=1,...,T. Our data set consists of executed orders during March 2020 for Microsoft Corp. (MSFT) and of December 2021 for Tesla Inc. (TSLA). In order to investigate the role of aggregation time scale, we choose N=240 and N=730 for TSLA and N=400 and N=1200 for MSFT, which, in both cases correspond to an average time interval of 1 and 3 minutes, respectively. The length of the two time series is 8,686 data points for 1 minute and 2,856 data points for 3 minutes of TSLA and 8,723 data points for 1 minute and 2,908 data points for 3 minutes of MSFT. To motivate our analysis, Figure <ref> shows the autocorrelation function of the order flow for TSLA and MSFT at the 3 minutes aggregation scale. Consistently with the literature (see, e.g., <cit.>), we observe that the autocorrelation function in all the cases has a slow decay. i.e. a buy (sell) trade is more likely followed by a buy (sell) trade. 
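For concreteness, the aggregation of Eq. (<ref>) and the sample autocorrelation shown in Figure <ref> can be computed along the following lines (a sketch with illustrative names; the synthetic signed volumes at the end merely stand in for real trade data):

```python
import numpy as np

def aggregate_order_flow(signed_volumes, N):
    """Sum signed trade volumes over non-overlapping blocks of N trades (Eq. 1)."""
    v = np.asarray(signed_volumes, dtype=float)
    T = len(v) // N                               # number of complete blocks
    return v[:T * N].reshape(T, N).sum(axis=1)

def sample_acf(x, max_lag=50):
    """Sample autocorrelation of a series for lags 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    return np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])

# Illustrative usage with synthetic signed volumes (stand-in for real executions).
rng = np.random.default_rng(0)
signs = np.sign(rng.standard_normal(100_000))
volumes = rng.lognormal(mean=4.0, sigma=1.0, size=100_000)
x = aggregate_order_flow(signs * volumes, N=730)  # 3-minute-scale aggregation for TSLA
acf = sample_acf(x)
```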
More quantitatively, it has been documented in many papers that the autocorrelation function of trade signs ρ(τ) decays asymptotically as a power law with an exponent smaller than one ρ(τ)∼1/τ^γ where γ<1 which implies that the time series is long memory with a Hurst exponent H=1-γ/2>1/2. The origin of this large persistence has been investigated both empirically and theoretically. Making use of labeled data allowing to identify the market member initiating each trade, <cit.> empirically showed that the long-range persistence observed at the London Stock Exchange is strongly driven by order splitting, i.e. the same trader sequentially placing trades with the same sign, very likely as part of an optimal execution program. On the contrary, herding, i.e. groups of investors trading in the same direction in the same period, plays a much minor role. From a theoretical point of view, the connection between order splitting and long memory of order flow has been elucidated by <cit.> (LMF). They proposed a simple model that postulates that market participants who intend to execute large orders split them into smaller orders and trade them incrementally. The large orders are called metaorders and the small trades in which they are split and sequentially traded are termed child orders. Under the assumption that metaorders are randomly sampled from a size distribution p_L (L∈ℕ), the LMF model predicts the form of the autocorrelation function of trade signs. In particular, when the distribution p_L is a power law p_L = α/L^1+α then the autocorrelation function of trade signs decays asymptotically as ρ(τ)∼1/τ^α-1 i.e. the model predicts that γ=1-α. Interestingly, very recently <cit.> theoretically showed that predictions of the LMF model remain valid also when there is heterogeneity in trading frequency and size distribution across market participants. The empirical validation of the LMF model poses some challenges because of the need for complete information on metaorders traded in the market. <cit.> used off-market trades as a proxy of metaorders. An alternative to such proxy is suggested by <cit.>, which propose segmentation algorithms and Hidden Markov Models to identify metaorders from brokerage data. Without relying on noisy proxies, <cit.> used private data of real metaorders by financial companies to test the power law hypothesis. In such a case, the results lacks generality since information on metaorders is company-specified. Recently, <cit.> used account-level data of the whole Tokyo Stock Exchange to directly test the LMF model. The predicted relationship between the exponent γ and α has been spectacularly verified, both at the market and at the single stock level. Summarizing, the LMF model proposes that most of the autocorrelation of order flow comes from the execution of metaorders but, due to the anonymous nature of financial markets, their presence cannot be easily and directly inferred from public market data. However, the start of the execution of a new metaorder should lead to a regime change in the order flow time series, which could be detected with suitable statistical methods. Thus the identification of a CP in the order flow might signal the arrival of a new metaorder execution. Moreover, from the market (or econometric) point of view, the identification of a CP modifies the forecasting of future order flow and price dynamics, since only past data after the last CP are useful for predictions. The practical use of such methods for CP detection requires that they work online, i.e. 
that the identification is done in real time and not ex-post (such as in the segmentation algorithms used, for example, in <cit.>). Finally, the method should allow to perform one-step ahead prediction in an online mode. § BAYESIAN ONLINE CHANGE-POINT DETECTION ALGORITHMS In the following, we briefly review the BOCPD algorithm, introduced by <cit.>, and present two novel extensions that we propose here, namely the Markovian BOCPD (MBO) and the Markovian BOCPD for Correlated data (MBOC) algorithm. The original BOCPD algorithm relies on the assumption that the data are independent and identically distributed within each regime. To relax such a strong assumption, we propose a new MBO algorithm that considers Markovian dynamics within each regime, thus allowing for serial correlation. At this stage, both BOCPD and MBO assume that parameters (i.e. mean, variance, autocorrelation) are constant within each regime. We further relax this assumption in a generalization of the MBO, named MBOC, that accounts for time-varying correlations. Such a generalization is based on the Score Driven approach introduced by <cit.>. In a companion paper <cit.>, we introduce in full generality the new class of regime-shift score-driven models, where other parameters can change over time within each regime. §.§ The BOCPD Algorithm The BOCPD algorithm for i.i.d. data has been introduced by <cit.>. Let x_1:T = {x_1,...,x_T} be a sample time series. The model assumes that data are non-stationary (because of regimes) and satisfies the product partition model (PPM), see <cit.>, meaning that data can be partitioned into regimes. Moreover, the parameters θ_R within each regime R are i.i.d. random variables drawn from some given distribution. Such a distribution needs to belong to the exponential family. Throughout the paper, we consider normal distributions. This assumption is checked by the Jarque-Bera (JB) test in the empirical application as shown in Section <ref>. <cit.> assumes a time series as the sequential realization of i.i.d. random variables from a normal distribution with unknown mean θ_R and known variance σ^2, x_i∼𝒩(θ_R,σ^2). Regimes and CPs separating them are not directly observable but must be inferred from data. To this end, the goal is to infer the elapsed time since the last CP, a quantity named run length and defined as follows. The run length r_t is a non-negative discrete variable defined as: r_t = 0, if a CP occurs at time t r_t-1+1, else. In the BOCPD algorithm, the arrival of a CP is modeled as a Bernoulli process[Other assumptions on the distribution of regimes can be implemented, for example, those leading to a non-exponential distribution of regime length. This more realistic extension is left for future research.] with hazard rate 1/h: p(r_t|r_t-1) = 1/h, if r_t=0 1-1/h, if r_t = r_t-1+1 0, otherwise. The primary quantity of interest is the computation of the run length posterior p(r_t|x_1:t) which characterizes probabilistically the number of time steps since the last CP given the data observed so far, p(r_t|x_1:t) = p(r_t,x_1:t)/p(x_1:t). The joint distribution over both the run length and the observed data can be written recursively, p(r_t,x_1:t) = ∑_r_t-1p(r_t,r_t-1,x_t,x_1:t-1) = ∑_r_t-1p(x_t|r_t-1,x_1:t-1)_UPMp(r_t|r_t-1)_Hazardp(r_t-1,x_1:t-1)_Message. An important assumption that simplifies the computation is about the changepoint prior, namely r_t is conditionally dependent on r_t-1 only. The quantity p(x_1:t) is named evidence and is computed as p(x_1:t) = ∑_r_tp(r_t,x_1:t). 
The Underlying Predictive Model (UPM) is defined as the predictive posterior distribution given the current run length. Because of the assumption on PPM, such a distribution depends only on the last r_t-1 observations and can be stated in a more compact form as p(x_t|r_t-1,x_1:t-1) = p(x_t|x_t-1^(r_t-1)) where x_t-1^(r_t-1) = x_t-r_t-1:t-1 and x_t:t-1 = ∅ . By using the conjugacy property of the exponential family when data are i.i.d., one obtains closed-form solutions for the UPM term, see <cit.>. For normal distributions, the UPM term is p(x_t|x_t-1^(r_t-1)) = 𝒩(μ_r_t-1,σ^2+σ_r_t-1^2), with posterior parameters given by μ_r_t-1 = ∑_i=t-r_t-1^t-1x_i/σ^2+μ_0/σ_0^2/r_t-1/σ^2+1/σ_0^2 and σ_r_t-1^2 = (r_t-1/σ^2+1/σ_0^2)^-1 for r_t-1∈{1,...,t-1}. Let us stress that the run length in Eq. (<ref>) is a latent variable we must infer. As such, the value of the posterior parameters varies depending on r_t-1, i.e. where we put the last change point, whose probability is in Eq. (<ref>). The BOCPD algorithm works as follows. At time t=0, we initialize the prior values μ_0, σ_0^2 and the known variance σ^2 (see below for details). At the generic time t>0, a new data point x_t becomes available, and the UPM in Eq. (<ref>) is computed for any possible μ_r_t-1 and σ^2_r_t-1, as a function of the run length r_t-1 that takes value from 0 to t-1. Then, the joint distribution over both the run length and the observed data point, see Eq. (<ref>), is computed for all the possible values of the run length. Thus we obtain: * the growth probabilities, p(r_t=l,x_1:t), for l = 1,...,t; * the CP probability p(r_t=0,x_1:t). After computing the evidence in Eq. (<ref>), the run length posterior is obtained by Eq. (<ref>). Finally, μ_r_t and σ^2_r_t are updated as in Eq. (<ref>) in order to be used at the next time t+1. §.§ The MBO Algorithm The assumption about the independence of data is clearly restrictive. Here we introduce the MBO algorithm as an extension of the BOCPD algorithm to the case of Markovian dependence. Similarly to before, we consider normally distributed data. As such, within a regime R, a time series is a realization of an AR(1) process with normal innovations, x_t ∼𝒩(θ_R,σ^2), x_t|x_t-1 ∼𝒩(θ_R+ρ(x_t-1-θ_R),σ^2(1-ρ^2)). As in the previous model, in each regime, the unconditional distribution is normal with unknown mean θ_R and known variance σ^2. Moreover the conditional distribution is normal with constant correlation ρ = Cov(x_t,x_t-1)/σ^2. In the next Section, we introduce a further generalization, allowing ρ to be time-varying within each regime[ We have also explored a further generalization where the variance inside each regime is time-varying, similarly to a GARCH model. The results for the order flow are qualitatively similar and we do not present them here. The reader interested in this model can consult the companion paper <cit.>.]. The key observation is that the conjugacy property still holds when the data are Markovian since the conditional distribution of any member in the exponential family is still in the family, see <cit.>. 
After some computations, one can obtain a closed form for the UPM term as p(x_t+1|x_t^(r_t)) = 𝒩(μ_r_t+ρ(x_t-μ_r_t),σ^2(1-ρ^2)+σ_r_t^2), where the posterior parameters are μ_r_t = b_r_t+μ_0/σ_0^2/a_r_t+1/σ_0^2 and σ_r_t^2 = (a_r_t+1/σ_0^2)^-1 for r_t∈{1,...,t} and a_r_t = 1/σ^2+(r_t-1)(1-ρ)^2/σ^2(1-ρ^2) b_r_t = x_t/σ^2, for r_t=1 x_t-1/σ^2+(1-ρ)(x_t-ρ x_t-1)/σ^2(1-ρ^2), for r_t=2 x_t+1-r_t/σ^2+(1-ρ)^2∑_i=t+2-r_t^t-1x_i+(1-ρ)(x_t-ρ x_t+1-r_t)/σ^2(1-ρ^2), for r_t∈{3,...,t}. Let us notice that we recover the previous case when ρ=0. §.§ The MBOC Algorithm Both the baseline model and its Markovian generalization assume that the parameters within each regime are constant. This might be unrealistic in many empirical cases. For example, heteroscedasticity, i.e. time-varying variance, is ubiquitous in financial time series. Since our interest here mostly focuses on the correlation of the order flow and its temporal dependencies, we consider a model where temporal correlation (i.e. ρ) is time-varying within the regime. This might capture the variability and temporal persistence of trading volume, which in turn depends on the available liquidity of the market. Time-varying parameters models display typically some difficulties for estimation. Following <cit.>, we consider the class of observation-driven models where the parameters are unconditionally random variables, but evolve in time based on some nonlinear deterministic function of past observations. In particular, we consider the class of Score-Driven models introduced by <cit.> and <cit.>, which assume that the dynamics of the time-varying parameter(s) is autoregressive with an innovation term depending on the so-called score[We remind that the score is the derivative of the log-likelihood with respect to the parameter(s).]. The score is then re-scaled by the inverse of the Fisher matrix[The Fisher matrix is defined as ℐ_t|t-1 = _t|t-1[∇_t^T∇_t] where ∇_t is defined in Eq. <ref>.], which is used to modulate the importance of the innovation according to the concavity of the log-likelihood. The intuition is simple: the scaled score adjusts the value of the parameter(s) in order to maximize the likelihood of the observed data. It is worth noticing that many standard models in financial econometrics, such as the GARCH, ACD, MEM, etc., are special cases of score-driven models (see <www.gasmodel.com> for more details). We extend the MBO model by promoting the correlation coefficient ρ to a time-varying parameter ρ_t described by the Score-Driven version of the AR(1) process (see <cit.>). We name such an extension as MBOC. We then introduce an online method to estimate both the time-varying parameter ρ_t and the regime characteristics, namely the mean θ_R characterizing the regime and the run length r_t. More specifically, within a regime R, the data generating process is assumed to be x_t = ρ_t(x_t-1-θ_R)+θ_R+u_t, u_t∼𝒩(0,σ^2), where θ_R and σ^2 are unknown. According to the Score-Driven AR(1) process, the time-varying correlation ρ_t is described by the recursive relation[This specification does not guarantee that |ρ_t|≤ 1, thus sometimes one uses a link function (e.g. an inverse logistic) which maps [-1,1] in ℝ, see <cit.>. In our empirical analysis, we observe that the filtered |ρ_t| is larger than 1 in less than one per thousand observations, thus we simply set a threshold |ρ_t|≤ 1.] 
ρ_t = ω+α s_t-1 + βρ_t-1 where s_t is the scaled score defined as s_t = ℐ_t|t-1^-d·∇_t ∇_t = ∂log p_u(u_t)/∂ρ_t ℐ_t|t-1 = _t|t-1[∇_t^T∇_t] and u_t = x_t-θ_R-ρ_t (x_t-1-θ_R) is the prediction error associated with the observation x_t. The vector of parameters λ⃗= [ω, α, β, σ^2]' is estimated through a Maximum Likelihood Estimation method. In the analysis below, we set d=0, i.e. we consider a not re-scaled score. It is s_t = ∇_t = u_t/σ^2(x_t-1-θ_R). Then the UPM term becomes p(x_t+1|x_t^(r_t)) = 𝒩(μ_r_t+ρ_t(x_t-μ_r_t),σ^2+σ_r_t^2), where the posterior parameters are μ_r_t = b_r_t+μ_0/σ_0^2/a_r_t+1/σ_0^2 and σ_r_t^2 = (a_r_t+1/σ_0^2)^-1 for r_t∈{1,...,t}, and a_r_t = 1/σ^2+(r_t-1)(1-ρ_t)^2/σ^2(1-ρ_t^2) b_r_t = x_t/σ^2, for r_t=1 x_t-1/σ^2+(1-ρ_t)(x_t-ρ_t x_t-1)/σ^2(1-ρ_t^2), for r_t=2 x_t+1-r_t/σ^2+(1-ρ_t)^2∑_i=t+2-r_t^t-1x_i+(1-ρ_t)(x_t-ρ_t x_t+1-r_t)/σ^2(1-ρ_t^2), for r_t∈{3,...,t}. The vector of parameters λ⃗ is estimated at each time-step within the time window associated with the most likely regime after we demean the data with the posterior mean see Eq. (<ref>). In particular at each time step t>1 we find i = _i∈{1,...,t}p(r_t=i|x_1:t) and we consider the demeaned data set x_t^(i)-μ_i = {x_t+1-i-μ_i,...,x_t-μ_i} in which we infer λ⃗ and we filter ρ_t with the use of the Score-Driven model. In order to robustify the algorithm and accomplish better Mean Squared Error (MSE) (see section <ref> for more details) we define a threshold value th, then we filter the time-varying correlation and we infer the variance (see Algorithm <ref> for more details on the inference procedure of the MBOC model) whenever i>th. In that way, the demeaned data set contains at least th data points. The threshold value is a hyperparameter that is tuned in a preliminary phase, see below for the implementation details. § MODEL ESTIMATION AND EMPIRICAL ANALYSIS We estimate the three models, BOCPD, MBO, and MBOC, on the time series of aggregated order flow x_t of TSLA and MSFT, both for the 1 and the 3 min aggregation time scale. The application of the models requires a careful choice of several hyperparameters. For both TSLA and MSFT, the prior value of the mean for all models is set to μ_0 = 0 shares due to the symmetry between buy and sell orders. The tuning of the other hyperparameters is obtained by minimizing the MSE in the first day of each month and each stock. For TSLA, for all the models, the prior value of the variance of the mean is set to σ_0^2 = 10^7 shares^2, while for MSFT σ_0^2 = 15· 10^7 shares^2. The hazard rate is set to h = 1/80. For both BOCPD and MBO the known variance is set to σ^2 = 10^8 shares^2 for TSLA while for MSFT σ^2 = 15· 10^8 shares^2. The same values for TSLA and MSFT are being used for the initial variance σ_i^2 of the MBOC model (see Algorithm <ref>). Moreover, the initial correlation ρ_1 is set to 0.2 for TSLA and 0.3 for MSFT while the initial parameters of the Score-Driven dynamics are set to λ⃗= [0.08,0.02,0.05,10^8]' and the th value is set equal to 20 shares for the 1 minute and 10 shares for the 3 minute data set. Finally, for the constant correlation coefficient ρ of MBO we have tested different specifications. In the table below we will report the results for three of them, showing that the predictive capacity of the model slightly depend on it. 
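Before turning to the model comparison, we report for reference a compact sketch of the run-length recursion of Section <ref> in its simplest form (i.i.d. Gaussian UPM with known variance). It is meant only to illustrate the message-passing structure, not the full MBO/MBOC inference; all names and default values are illustrative:

```python
import numpy as np
from scipy.stats import norm

def bocpd_gaussian(x, cp_prob=1 / 80, mu0=0.0, var0=1e7, var_obs=1e8):
    """Run-length posterior for i.i.d. Gaussian data with unknown mean (BOCPD).

    Returns, for each time step, the most likely run length, i.e. the elapsed
    time since the last change point."""
    T = len(x)
    log_joint = np.full((T + 1, T + 1), -np.inf)   # log p(r_t, x_{1:t})
    log_joint[0, 0] = 0.0
    mu = np.array([mu0])     # posterior mean of the regime mean, per run length
    var = np.array([var0])   # posterior variance of the regime mean, per run length
    map_run_length = np.zeros(T, dtype=int)

    for t in range(1, T + 1):
        # UPM term: predictive density of x_t for each candidate run length r_{t-1}.
        log_pred = norm.logpdf(x[t - 1], loc=mu, scale=np.sqrt(var_obs + var))
        log_msg = log_joint[t - 1, :t] + log_pred
        # Growth probabilities (no change point) and change-point probability.
        log_joint[t, 1:t + 1] = log_msg + np.log(1 - cp_prob)
        log_joint[t, 0] = np.logaddexp.reduce(log_msg) + np.log(cp_prob)
        # Normalised run-length posterior and its maximiser.
        log_post = log_joint[t, :t + 1] - np.logaddexp.reduce(log_joint[t, :t + 1])
        map_run_length[t - 1] = int(np.argmax(log_post))
        # Conjugate update of the posterior parameters for each run length.
        new_var = 1.0 / (1.0 / var + 1.0 / var_obs)
        new_mu = new_var * (mu / var + x[t - 1] / var_obs)
        mu = np.concatenate(([mu0], new_mu))
        var = np.concatenate(([var0], new_var))
    return map_run_length
```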
§.§ Model comparison We present here the results of an online prediction study for order flow data using the BOCPD, MBO, and MBOC models introduced in Section <ref>, and we compare their performances by computing the Mean Squared Error of the predictive mean of each model. The three models are then compared with the ARMA(1,1) model, estimated on the whole time period by assuming the absence of regimes. As such, the ARMA(1,1) model represents a natural benchmark to test whether including regime-switching dynamics does improve the forecasting of order flow. The predictive mean μ̂_t at time t is the one-step-ahead forecast obtained by using observations up to time t to predict out-of-sample the realization at time t+1. That is μ̂_t = ∑_r_t p(x_t+1|x_1:t,r_t)p(r_t|x_1:t) = ∑_r_tμ_r_t p(r_t|x_1:t) where μ_r_t is defined in Equations (<ref>), (<ref>), and (<ref>) for the BOCPD, MBO and MBOC model respectively. Then, the MSE can be computed as MSE = 1/T∑_t=1^T(μ̂_t-1-x_t)^2. Table <ref> shows the MSE of the three aforementioned models along with the ARMA(1,1) for both TSLA and MSFT stocks. As mentioned above, we consider three different values of the correlation coefficient ρ in the MBO model. We observe that the MBOC model outperforms all competitors. In particular, the proposed models (MBO and MBOC) systematically outperform the ARMA(1,1) benchmark, while the baseline BOCPD model displays comparable performances. Finally, notice that the role of the hyperparameter ρ for the MBO model is relatively marginal. In conclusion, the online prediction study with order flow data suggests that regime-switching models accounting for a Markovian correlation structure outperform both the baseline BOCPD model and the benchmark. The MBOC model displays the best forecasting performance and high flexibility in data description. In the following Sections, we exploit such flexibility in modeling regime-switching dynamics in the presence of time-varying correlations to empirically show a clear connection between regimes for aggregated order flows and the market impact of associated trades (likely including metaorders).

              TSLA                      MSFT
ρ         0.1     0.2     0.3      0.2     0.3     0.4
ARMA      0.908   0.908   0.908    0.860   0.860   0.860
BOCPD     0.907   0.907   0.907    0.878   0.878   0.878
MBO       0.895   0.896   0.911    0.835   0.834   0.844
MBOC      0.890   0.890   0.890    0.831   0.831   0.831

Table: Comparison of out-of-sample one-step-ahead MSE of ARMA(1,1), BOCPD, MBO and MBOC. The correlation ρ is the one used in MBO. Data refer to TSLA and MSFT at the 3 minute resolution.

§.§ Empirical Analysis of Identified Regimes Here we investigate the statistical properties of the identified regimes for the aggregated order flows. We consider the MBOC model because it achieves the best performance. Let us first introduce the adopted definition of an identified regime. Let x_1:T be a time series and t,s∈ℕ∩[1,T] times with t<s such that argmax_i∈{0,1,...,t} p(r_t=i|x_1:t) = argmax_i∈{0,1,...,s} p(r_s=i|x_1:s) = 0 and for any u∈ℕ∩(t,s), argmax_i∈{0,1,...,u} p(r_u=i|x_1:u) ≠ 0. Then the subset x_t:s-1 of the time series x_1:T is defined as a regime. The top panel of Figure <ref> (<ref>) shows x_t for TSLA (MSFT) aggregated every 730 (1200) executions, corresponding to an average time interval of 3 minutes. The vertical red dashed lines indicate the CPs identified by MBOC, according to the definition above. Interestingly, many CPs are observed at the end of a trading day for both stocks.
On one side this is expected since overnight is a natural separation between regimes, but on the other side, this is an indication that the proposed method is able to identify regime changes. The bottom panels of Figures <ref> and <ref> show the run length posterior of the MBOC model for the two assets. For each time (on the abscissa) the vertical axis displays in grayscale the probability that the run length has a given value (on the ordinate). Darker grey regions correspond to higher probabilities. The red line highlights the most likely path, i.e. the value of r_t with the largest run length posterior p(r_t|x_1:t) for each t. Finally, we also show the (one-step-ahead) predictive standard deviation defined as σ̂_t = √(∑_r_tσ_r_t^2 p(r_t|x_1:t)), where σ^2_r_t is as in Equations (<ref>), (<ref>), and (<ref>) for the BOCPD, MBO, and MBOC models, respectively. Regime length distribution. For the 1 (3) minute(s) data set, we find 911 (546) regimes for TSLA and 1394 (690) regimes for MSFT. Figure <ref> shows the histograms of the length of the detected regimes. Consistently with the constant hazard function, we find that the regime length is approximately exponentially distributed, with a mean regime length of 10 intervals (1 minute data set) and 5 intervals (3 minutes data set) for TSLA, and of 7 and 4 intervals for MSFT, corresponding to 10 and 15 trading minutes for TSLA and to 7 and 12 trading minutes for MSFT, respectively. The regime length ranges in the interval [1,92] for 1 minute and [1,30] for 3 minutes for TSLA, and in [1,65] for 1 minute and [1,30] for 3 minutes for MSFT. Gaussianity inside regimes. The main assumption of both BOCPD and MBO is that the variable x_t is Gaussian within each regime, with constant parameters. For MBOC, we expect that x_t is only conditionally Gaussian, because of the time-varying autocorrelation, but not unconditionally over the whole period and only approximately within a regime. The non-Gaussianity of the unconditional aggregated order flow over the whole period is confirmed by performing the Jarque-Bera (JB) test on the whole time series {x_t}_t=1,…,T, which signals a p-value smaller than 1%, for both stocks and time scales. We then perform the JB test within each regime detected by the MBOC model. For the 3 minute time scale, we cannot reject the null hypothesis at the 5% confidence level for 94% (95%) of the regimes for TSLA (MSFT). When we consider the 1 minute time scale, the frequency of non-rejection is 86% (87%). These findings support the choice of MBOC, which identifies regimes within which the order flow is approximately Gaussian. Autocorrelation of residuals inside regimes. As a final model check, we test for the lack of serial correlation in the residuals of our model within each regime. We have seen above that, coherently with the literature, v_t is strongly autocorrelated. Following <cit.>, our assumption is that this correlation is driven by the presence of regimes, which in turn are likely associated with metaorders. We thus apply the Ljung-Box test to the residuals in each regime. For the 3 minute time scale, we cannot reject the null hypothesis of uncorrelated residuals for 98% (99%) of the regimes of TSLA (MSFT), at the 5% confidence level. For the 1 minute time scale, the frequency of non-rejection is 97% (93%). It is possible to conclude that the MBOC model captures most of the serial correlation of aggregated order flow.
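The per-regime specification checks just described can be reproduced with standard routines, for instance along the following lines (a sketch; `regimes` and `residuals` are assumed to be lists of arrays holding, respectively, the order flow and the model residuals of each detected regime, and the minimum-length guards are our own choice):

```python
import numpy as np
from scipy.stats import jarque_bera
from statsmodels.stats.diagnostic import acorr_ljungbox

def share_gaussian_regimes(regimes, alpha=0.05, min_len=8):
    """Fraction of regimes where the Jarque-Bera test does not reject Gaussianity."""
    pvals = [jarque_bera(r).pvalue for r in regimes if len(r) >= min_len]
    return float(np.mean([p > alpha for p in pvals]))

def share_uncorrelated_residuals(residuals, alpha=0.05, lags=5, min_len=10):
    """Fraction of regimes where the Ljung-Box test does not reject uncorrelated residuals."""
    keep = []
    for r in residuals:
        if len(r) < min_len:
            continue
        lag = min(lags, len(r) - 1)
        pval = acorr_ljungbox(r, lags=[lag]).lb_pvalue.iloc[0]
        keep.append(pval > alpha)
    return float(np.mean(keep))
```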
Notice that, according to the model, the unconditional slow decay of the autocorrelation of order flow observed in the literature (see also Fig. <ref>) is due largely to regime-switching dynamics and, only partially, to Markovian temporal dependencies. § PRICE IMPACT DURING ORDER FLOW REGIMES In this Section we empirically study the average price dynamics inside a detected order flow regime and we measure the relation between the total price change and the net volume exchanged in the same regime. §.§ Price as a function of time inside an order flow regime As said, we first study the average price dynamics during an order flow regime. This type of analysis mirrors the one performed, for example, by <cit.> which studied the average price dynamics during the execution of a metaorder. Using labeled data allowing to identify when an institutional investor executed a metaorder, these papers find that (i) the average price dynamics is correlated with the conditioning metaorder sign, i.e. the price increases (decreases) when a buy (sell) metaorder is executed; (ii) the price dynamics is concave in time, i.e. the price increases faster at the beginning of a buy metaorder and slowly toward the end. Here we take a step forward by asking what is the average price dynamics during a regime of aggregated order flow detected with the MBOC model. To this end, for each detected regime R (see Definition <ref>), characterized by an initial time t_R and a final time s_R>t_R, we denote with ϵ_R =(∑_t_R≤ t <s_Rx_t) the sign of the order flow in the regime, being equal to +1 (-1) when the regime is dominated by the volume of buyer (seller) initiated trades. Since during a metaorder execution we expect a significant net imbalance of buy or sell volume, we will consider subsets of regimes for which Z_R:=| ∑_t_R≤ t< s_Rx_t/∑_t_R≤ t< s_R|x_t| |>Θ,          0≤Θ <1 i.e. when the difference between buy and sell volume divided by their sum is larger than Θ. Notice that when Θ=0 the subset coincides with the entire set of regimes identified by the MBOC. Appendix <ref> reports the number of regimes in the different subsets. Indicating with p_t the log-price of the last transaction in the interval labeled by time t, we compute the log-price change between the beginning of the regime and t_R+k, where 0≤ k< s_R-t_R and we take the average ℐ^Θ(k) = _R[ϵ_R(p_t_R+k-p_t_R-1)|t_R+k< s_R,Z_R>Θ]. With _R[·], we denote the sample average over the regimes, i.e. that t_R is the first interval of a regime, and the conditioning restricts it to those regimes for which the observation at t_R+k is in the same regime as the one at t_R as well as to those regimes that satisfy condition in Eq. (<ref>). Figure <ref> shows the impact function ℐ^Θ(k) as a function of k for the two stocks and the two time scales when Θ=0, 0.5, and 0.9. Error bars are standard errors in each bin. We notice that in all cases impact is positive and increasing. This is somewhat expected since we are implicitly conditioning on the sign of order flow in the whole regime, thus the observed behavior is coherent with the known correlation between aggregated order flow and contemporaneous price change, see <cit.>. Interestingly the price dynamics is a concave function of time, similarly to what is observed when conditioning on metaorders execution instead of on order flow regimes. Moreover the degree of concavity increases with Θ. 
Clearly the concavity is not expected by the mere fact that the regime is characterized by a net order flow sign, while it could instead be explained by the Transient Impact Model of <cit.> or by the LLOB model of <cit.>, which predicts a concave average price temporal profile when the order flow has a non-zero average, as during a metaorder execution. §.§ Price impact as a function of volume Finally, we study the relation between the total price change in a regime and the total net volume in the same time span. A large body of empirical literature <cit.> has shown that on average the total price impact during a metaorder execution scales sublinearly with the metaorder volume, a relation well fit by a power law with an exponent ranging in [0.4,0.7]. This is the celebrated square root impact law. It is therefore natural to investigate empirically the relation between the same two quantities within a regime identified with our method. To this end, defining Δ p_R = p_s_R - p_t_R we consider the non-linear regression: ϵ_RΔ p_R = A(ϵ_R∑_t_R≤ t<s_Rx_t)^γ+noise. Since the measurement of market impact is notoriously very noisy, we have performed the estimation both on the original dataset and on a dataset where potential outliers are removed. In the latter approach, we used the standard procedure of considering as outliers the datapoints corresponding to regimes whose price change is smaller (larger) than the first (third) quartile minus (plus) 1.5 times the interquartile range (see Appendix <ref> for details). Table <ref> reports the estimated parameters when outliers are removed, while Appendix <ref> reports the results for the entire dataset and presents the scatter plots of the data and the fitted curve. Both Tables indicate that the exponent γ is smaller than one and for the data without outliers it is remarkably close to 0.5, as postulated by the square root law.

TSLA
       Δt = 1 min                        Δt = 3 min
Θ      A      SE of A  γ      SE of γ    A      SE of A  γ      SE of γ
0      0.121  0.05     0.592  0.041      0.298  0.139    0.52   0.045
0.5    0.159  0.066    0.567  0.041      0.284  0.133    0.528  0.045
0.9    0.172  0.084    0.552  0.049      0.387  0.179    0.504  0.044

MSFT
       Δt = 1 min                        Δt = 3 min
Θ      A      SE of A  γ      SE of γ    A      SE of A  γ      SE of γ
0      0.498  0.209    0.4    0.038      0.365  0.203    0.458  0.048
0.5    0.469  0.209    0.408  0.04       0.314  0.185    0.47   0.05
0.9    0.334  0.144    0.444  0.038      0.332  0.199    0.47   0.051

Table: Estimated parameters and their standard errors (SE) of the regression of Eq. (<ref>). Results refer to data with outliers removed.

Clearly these results are preliminary and should be validated on larger panels of stocks, also pooling them together with the usual rescaling by daily volatility and volume. However we find these results very encouraging and suggestive of a relation between the identified regimes and the execution of metaorders. § ONLINE PREDICTION OF ORDER FLOW AND MARKET IMPACT The possibility of performing an online detection of regimes and regime changes in the order flow opens the question of how to use this information to predict subsequent order flow and price changes. In the PPM, regimes are independent, hence in forecasting future values only data from the current regime are useful, while older data add noise to the prediction. This idea will be used to build online predictions of order flow. Additionally, through market impact, price dynamics is correlated with order flow. Thus a proper modeling of order flow is useful to forecast future prices.
Since we have seen in the last Section that order flow sign of a regime correlates with contemporaneous price change, we can ask the question of whether the knowledge that a new regime in order flow has just started allows to predict the future order flow and, more importantly, the future price dynamics. Consistently with the results of Section <ref> showing that the MBOC model outperforms the competitors (BOCPD, MBO, ARMA) in one step ahead prediction of order flow, here we focus our analyses on the regimes identified by MBOC. It is important however to stress that qualitatively similar results are obtained with the other two simpler models, BOCPD and MBO. In other words the relation between price dynamics and aggregated order flow is importantly understood by using regimes, while the choice of the specific regime shift model improves the short-term prediction of aggregated order flow. However, as shown in in the appendix <ref>, the MBOC method achieves higher predictability wrt the other CP detection methods. To better understand the role of regimes in prediction, let consider the following argument. If the data generating process of order flow is truly consistent with a product partition model (i.e. independent regimes), the knowledge of the data of the previous regimes is not useful for prediction. Thus in this case, it is better to use only the data in the current regime and its learned statistical properties. However, even if we are relatively sure that a new regime has just started, its parameters could be quite uncertain at the beginning. Thus, in order to form a forecast, it is better to wait for few observations into a new regime. Moreover, if many regimes are very short (e.g. composed by one or two intervals), as observed empirically in Figure <ref>, it might be better to build predictions after the observation of a few intervals in a regime. Online order flow prediction. Following the above argument, we adopt the following procedure. Whenever we detect a CP in an online fashion for the time series of order flow, we measure the correlation ℐ^(1)_ϵ(k) = _R[(x_t_R) ·(x_t_R+k)]      k=1,2,... where _R indicates that we are conditioning on the fact that t_R is the first observation of a new regime. Notice that (i) the correlation is extended to values of k possibly beyond the end of the detected regime (at time s_R); (ii) we do not consider the case k=0 since in this case the correlation is trivially equal to 1. The superscript ^(1) in the above expression means that we take the sign of the aggregated order flow in the first interval (see below for an extension). The continuous yellow line in Figure <ref> shows that ℐ^(1)_ϵ(k) is a poor predictor of order flow. To better quantify this statement, the dashed yellow line in the figure is the unconditional correlation Ĩ^(1)_ϵ(k) = [(x_t) ·(x_t+k)]      k=1,2,... which makes no use of regime detection (and for this reason the expectation does not have the subscript _R; the tilde refers to the expectation without considering regimes). Clearly, the unconditional correlation is larger than the conditional one. As said above, one of the reasons of the comparable performance of ℐ^(1)_ϵ with respect to Ĩ^(1)_ϵ is the fact that there are many regimes of length one and also that the sign of the new regime, ϵ_R, might be poorly measured by the sign of the first interval (x_t_R). A better option is to wait few more intervals within the regime before building the predictor. Thus, defining m=1,2,.. 
the number of intervals in a regime we wait before forming the prediction, we introduce the correlation ℐ^(m)_ϵ(k) = _R[(∑_t=t_R^t_R+m-1x_t) ·(x_t_R+m-1+k)]      k=1,2,... which is the correlation between the sign of the order flow in the first m intervals of a regime and the sign of the order flow in an interval k steps after these m intervals. Notice that we are not conditioning on the fact that t_R+m-1+k is in the same regime as t_R, so the two observation could belong to different regimes. However, we condition on the fact that t_R+m-1 is in the same regime as t_R. Similarly we use as a benchmark case the predictor Ĩ_ϵ^(m)(k) = [(∑_s=t^t+m-1x_s)·(x_t+m-1+k)]      k=1,2,... . The orange, red, and dark red continuous lines in Figure <ref> show ℐ^(m)_ϵ(k) for m=2,3,4 respectively, while the corresponding dashed lines refer to Ĩ^(m)_ϵ(k) . We observe that the correlations based on regimes are larger than the corresponding ones without regimes, especially for large m. This empirical evidence indicates that the knowledge of the order flow regimes improves the short-term predictability of order flow. Online market impact prediction. We now consider the prediction of price change based on the knowledge of being in a regime of order flow. To this end we introduce the online impact ℐ^(m)_Δ p(k) = _R[(∑_t=t_R^t_R+m-1x_t) · (p_t_R+m-1+k-p_t_R+m-1)]      k=1,2,... which is the correlation between the sign of the total order flow in the m initial intervals of a regime and the subsequent price change over k intervals. Compared to Eq. (<ref>), two important differences are worth to be highlighted. First, the sign inside the expectation is taken only on the aggregated order flow of the m intervals used to build the prediction, while in Eq. (<ref>) ϵ_R considers the sign of the whole regime and therefore is non-causal. Second, in ℐ^(m)_Δ p(k) we do not condition on s_R-t_R>k as in Eq. (<ref>) since after having observed m intervals in a regime we do not know when the regime is going to end. In other words, for a given k, we take the average both on cases when t_R and t_R+k belong to the same regime and when they do not. Finally, as before we use as a benchmark an impact predictor that is based on the sign of the order flow, Ĩ^(m)_Δ p(k) = [(∑_s=t^t+m-1x_s)· (p_t+m-1+k-p_t+m-1)]      k=1,2,... . When m=1 the quantity Ĩ^(m)_Δ p(k) becomes the response function widely investigated in the market impact literature, mainly in transaction time, see <cit.>. We choose this more general definition in order to make a fairer comparison between impact predictors using the same number of past order flow observations. Figure <ref> shows these different quantities for online market impact prediction, considering both stocks and both time intervals. It is evident that as soon as m>1, ℐ^(m)_Δ p(k) (orange to dark red lines) is much larger than the corresponding response function Ĩ^(m)_Δ p(k) (dashed orange to dark red lines) which does not make use of regimes. Moreover, the larger m, the larger the correlation between the order flow sign and the future price change, in all four investigated cases. Thus the (online) knowledge that a regime has started provides a significant additional forecasting power to future price change with respect to the response function, which is an unconditional cross-correlation between current order flow and future price change. 
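As an illustration of how the regime-conditioned impact of Eq. (<ref>) can be estimated once the change points are available (a sketch with illustrative names; `cp_times` holds the online-detected regime start times, `x` the aggregated order flow and `p` the log-prices as arrays):

```python
import numpy as np

def regime_starts(cp_times, m, T):
    """Start times of regimes lasting at least m intervals (next change point >= m steps away)."""
    cps = sorted(cp_times) + [T]
    return [s for s, e in zip(cps[:-1], cps[1:]) if e - s >= m]

def regime_conditioned_impact(x, p, cp_times, m=3, max_k=20):
    """Estimate I^(m)_dp(k): average of the sign of the net order flow over the first m
    intervals of a regime times the log-price change over the following k intervals."""
    num = np.zeros(max_k)
    cnt = np.zeros(max_k)
    for t in regime_starts(cp_times, m, len(x)):
        s0 = np.sign(x[t:t + m].sum())
        for k in range(1, max_k + 1):
            j = t + m - 1 + k
            if j < len(p):
                num[k - 1] += s0 * (p[j] - p[t + m - 1])
                cnt[k - 1] += 1
    return num / np.maximum(cnt, 1)

# The order flow predictor I^(m)_eps(k) is obtained analogously, replacing the price
# change p[j] - p[t + m - 1] with the sign of x[j].
```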
§ CONCLUSION In this work, we proposed the use of Bayesian Online Change Point Detection Methods to identify (in a real-time setting) regimes in time series of aggregated order flow of financial assets. Since the existing methods make very strong assumptions on the data generating process, in particular for what concerns the serial correlation of data within each regime, we proposed here two extensions of the regime detection algorithm: the first one assumes a Markovian dynamics inside each regime, while the second one makes use of an observation driven dynamics based on the score-driven mechanism. As shown by the recent econometric literature, the score driven approach is extremely flexible also as a filter of a misspecified dynamics (tantamount to GARCH). The companion paper (<cit.>) provides more methodological details of this new class of models by discussing different specifications where other parameters (e.g. the variance) are time-varying within each regime. The analysis of two liquid stocks traded in the NASDAQ market shows that the new algorithms presented here, particularly the latter, outperform the baseline model in out-of-sample forecasting. In general, we find that the regime-switching methods outperform standard econometric time series models like ARMA(1,1). Moreover, a careful model checking shows that the algorithm outputs well specified regimes both in terms of Gaussianity of data and of lack of serial correlation of residuals, within each regime. From the financial point of view, the identification of weakly autocorrelated regimes in the order flow time series suggests that the observed unconditional long memory might be explained by regime switching. This is in line with the mechanism proposed by <cit.> who connected the long memory to order splitting by heterogeneous institutional investors. It is natural at this point to try to identify the detected regimes with time periods when one or a few institutional investors are trading a large order. Of course, we do not have any empirical evidence in support of this idea which, at this point, can be considered as a conjecture to be tested with suitable data (for example those used by <cit.> or <cit.>). The paper shows how the online identification of regimes can be used to significantly improve the forecasting of order flow and of price dynamics. Using the knowledge of the order flow during the current regime provides better predictions when compared with methods using unconditionally the past history of order flow. We foresee that such improvement could be fruitfully used in several financial applications, such as optimal trading, market making, and alpha signal detection. Similarly, if our interpretation above is correct, the online regime detection method could be used to statistically identify the execution of a large institutional execution from anonymous market data. § ACKNOWLEDGEMENTS This paper is funded by European Union Next Generation EU with the grant PNRR IR0000013 ‘‘SoBigData.it". [Adams and MacKay, 2007]c:07 Adams, R. P., and MacKay, D. J. (2007). Bayesian online changepoint detection. arXiv preprint arXiv:0710.3742. [Almgren et al., 2005]Almgren2005DirectEO Almgren, R., Thum, C., Hauptmann, E., and Li, H. (2005). Direct estimation of equity market impact. Risk, 18(7), 58-62. [Bacry et al., 2015]Bacry2015 Bacry, E., Iuga, A., Lasnier, M., and Lehalle, C.-A. (2015). Market impacts and the life cycle of investors orders. Market Microstructure and Liquidity, 1:1550009. [Barry and Hartigan, 1992]r:92 Barry, D. 
and Hartigan, J. A. (1992). Product partition models for change point models. The Annals of Statistics, 20:260 – 279. [Bershova and Rakhlin, 2013]ar:2013 Bershova, N. and Rakhlin, D. (2013). The non-linear market impact of large trades: evidence from buy-side order flow. Quantitative Finance, 13:1759–1778. [Blasques et al., 2014]Score-Driven_AR Blasques, F., Koopman, S., and Lucas, A. (2014). Optimal formulations for nonlinear autoregressive processes. WorkingPaper 14-103/III, Tinbergen Institute. [Bouchaud et al., 2018]book:18 Bouchaud, J.-P., Bonart, J., Donier, J., and Gould, M. (2018). Trades, Quotes and Prices: Financial Markets Under the Microscope. Cambridge University Press. [Bouchaud et al., 2009]em:09 Bouchaud, J.-P., Farmer, J. D., and Lillo, F. (2009). How markets slowly digest changes in supply and demand. Handbook of Financial Markets: Dynamics and Evolution, Handbook of Finance. [Bouchaud et al., 2004]Bouchaud04 Bouchaud, J.-P., Gefen, Y., Potters, M., and Wyart, M. (2004). Fluctuations and response in financial markets: the subtle nature of ‘random’ price changes. Quantitative Finance, 4:176–190. [Cox, 1981]cox Cox, D. (1981). Statistical analysis of time series: Some recent developments. Scandinavian Journal of Statistics, 8:93 – 115. [Creal et al., 2013]Score-Driven1 Creal, D., Koopman, S. J., and Lucas, A. (2013). Generalized autoregressive score models with applications. Journal of Applied Econometrics, 28:777–795. [Diaconis and Ylvisaker, 1979]r:79 Diaconis, P. and Ylvisaker, D. (1979). Conjugate priors for exponential families. The Annals of Statistics, 7:269–281. [Diebold and Inoue, 2001]r:01 Diebold, F. X. and Inoue, A. (2001). Long memory and regime switching. Journal of Econometrics, 105:131–159. [Donier et al., 2015]LLOB Donier, J., Bonart, J., Mastromatteo, I., and Bouchaud, J. P. (2015). A fully consistent, minimal model for non-linear market impact. Quantitative Finance, 15:1109 – 1121. [Fan and Mackey, 2017]r:2017 Fan, Z. and Mackey, L. (2017). Empirical Bayesian analysis of simultaneous changepoints in multiple data sequences. The Annals of Applied Statistics, 11:2200–2221. [Fearnhead and Liu, 2007]r:07 Fearnhead, P. and Liu, Z. (2007). On-line inference for multi- ple changepoint problems. Journal of the Royal Statistical Society, 69:589–605. [Ghahramani, 2015]r:17 Ghahramani, Z. (2015). Probabilistic machine learning and artificial intelligence. Nature, 521:452–459. [Harvey, 2013]Harvey Harvey, A. (2013). Dynamic models for volatility and heavy tails: with applications to financial and economic time series. Cambridge University Press. [Lillo, 2023]Lillo23 Lillo, F. (2023). Order flow and price formation, volume Machine Learning and Data Sciences for Financial Markets: A Guide to Contemporary Practices. Cambridge University Press. [Lillo and Farmer, 2004]LilloFarmer04 Lillo, F. and Farmer, J. D. (2004). The long memory of efficient market. Studies in Nonlinear Dynamics and Econometrics, 8:1. [Lillo et al., 2005]ar:2005 Lillo, F., Mike, S., and Farmer, J. (2005). Theory for long memory in supply and demand. Physical Review E, 71:066122. [Lleo et al., 2022]r:22 Lleo, S., Zhitlukhin, M., and Ziemba, W. T. (2022). Using a mean-changing stochastic processes exit–entry model for stock market long-short prediction. The Journal of Portfolio Management, 49:172–197. [Lleo et al., 2020]c:2020 Lleo, S., Ziemba, W. T., and Li, J. (2020). Exploring breaks in the distribution of stock returns: Empirical evidence from apple inc. In SSRN Working Paper 3700419. Elsevier. 
[Moro et al., 2009]r:09 Moro, E., Vicente, J., Moyano, L. G., Gerig, A., Farmer, J. D., Vaglica, G., Lillo, F., and Mantegna, R. N. (2009). Market impact and trading profile of hidden orders in stock markets. Physical Review E, 80:452–459. [Murphy, 2007]k:07 Murphy, K. P. (2007). Conjugate Bayesian analysis of the Gaussian distribution. Technical report, University of British Columbia. [Patzelt and Bouchaud, 2018]PhysRevE.97.012304 Patzelt, F. and Bouchaud, J.-P. (2018). Universal scaling and nonlinearity of aggregate price impact in financial markets. Physical Review E, 97:012304. [Sato and Kanazawa, 2023a]c:23 Sato, Y. and Kanazawa, K. (2023a). Direct quantitative evidence of the order-splitting hypothesis as the microscopic origin of long-range correlations in market order flow. arXiv:2301.13505. [Sato and Kanazawa, 2023b]kana23 Sato, Y. and Kanazawa, K. (2023b). Exact solution to a generalised lillo-mike-farmer model with heterogeneous order-splitting strategies. arXiv:2306.13378. [Torre, 1997]BARRA Torre, N. (1997). Barra market impact model handbook. (BARRA Inc., Berkeley, 1997). [Tot́h et al., 2011]PhysRevX.1.021006 Tóth, B. and Lempérière, Y. and Deremble, C. and de Lataillade, J. and Kockelkoren, J. and Bouchaud, J.-P. (2011). Anomalous price impact and the critical nature of liquidity in financial markets. Physical Review X, 1:021006. [Tóth et al., 2015]Toth Tóth, B., Palit, I., Lillo, F., and Farmer, J. D. (2015). Why is equity order flow so persistent? Journal of Economic Dynamics and Control, 51:218–239. [Tot́h et al., 2016]SRILforOptions Tot́h, B., Eisler, Z., and Bouchaud, J.-P. (2016). The square-root impact law also holds for option markets. Wilmott, 85. [Tsaknaki et al., 2023]tsaknaki Tsaknaki, I.-Y., Lillo, F., and Mazzarisi, P. (2023). A score-driven Bayesian online change-point detection model. (in preparation). [Vaglica et al., 2010]r:80 Vaglica, G., Lillo, F., and Mantegna, R. N. (2010). Statistical identification with hidden Markov models of large order splitting strategies in an equity market. New Journal of Physics, 12:075031. [Vaglica et al., 2008]r:20 Vaglica, G., Lillo, F., Moro, E., and Mantegna, R. N. (2008). Scaling laws of strategic behavior and size heterogeneity in agent dynamics. Physical Review E, 77:036110. [Wainwright and Jordan, 2008]r:08 Wainwright, M. J. and Jordan, M. I. (2008). Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1:1–305. [Xuan and Murphy, 2007]c:07b Xuan, X. and Murphy, K. (2007). Modeling changing dependency structure in multivariate time series. In Proceedings of the international conference on Machine learning (ICML-07), volume 24, pages 1055–1062. PMLR. [Zarinelli et al., 2015]r:2015 Zarinelli, E., Treccani, M., Farmer, J., and Lillo, F. (2015). Beyond the square root: Evidence for logarithmic dependence of market impact on size and participation rate. Market Microstructure and Liquidity, 1:1550004. [Zhao et al., 2022]c:2022 Zhao, Y., Landgrebe, E., Shekhtman, E., and Udell, M. (2022). Online missing value imputation and change point detection with the Gaussian copula. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI-22), volume 36, pages 9199–9207. § PRICE DYNAMICS INSIDE AN ORDER FLOW REGIME §.§ Concavity of order flow regimes In table <ref> we report the number of regimes that satisfy the condition: Z_R:=| ∑_t_R≤ t< s_Rx_t/∑_t_R≤ t< s_R|x_t| |>Θ for various values of Θ. 
        TSLA            MSFT
Θ       1 min   3 min   1 min   3 min
0       911     546     1394    690
0.5     674     466     1195    651
0.9     344     321     827     550

Table: Number of regimes satisfying the condition in Eq. (<ref>) for Θ=0, 0.5 and 0.9, for both TSLA and MSFT with Δt = 1 min and Δt = 3 min.

§.§ Testing the square root impact law Table <ref> presents the estimation of the parameters of the regression in Eq. (<ref>) when we consider the entire data sets without removing any outliers. We observe that the exponent γ is a bit larger than 1/2, being typically close to 0.7. The outlier removal is obtained by the standard interquartile approach. Namely, we compute the first and third quartiles Q1 and Q3, respectively, of the log-returns. Then the data points outside the range [Q1-1.5·IQR, Q3+1.5·IQR], where IQR=Q3-Q1, are considered as outliers. Figure <ref> illustrates the data and the fits for the two stocks and the two timescales. The red points are those which are identified as outliers.

TSLA
        1 min                           3 min
Θ       A       SE of A  γ      SE of γ  A      SE of A  γ      SE of γ
0       0.032   0.013    0.732  0.04     0.08   0.04     0.654  0.047
0.5     0.045   0.018    0.7    0.039    0.096  0.047    0.639  0.046
0.9     0.05    0.023    0.691  0.044    0.221  0.101    0.564  0.043

MSFT
        1 min                           3 min
Θ       A       SE of A  γ      SE of γ  A      SE of A  γ      SE of γ
0       0.015   0.006    0.737  0.037    0.005  0.003    0.835  0.053
0.5     0.015   0.006    0.736  0.038    0.004  0.003    0.845  0.055
0.9     0.022   0.01     0.706  0.042    0.006  0.004    0.827  0.061

Table: Estimated parameters and their standard errors (SE) of the regression of Eq. (<ref>). The whole dataset is considered.

§ COMPARISON OF ORDER FLOW AND MARKET IMPACT PREDICTIONS UNDER DIFFERENT REGIME SHIFT MODELS Figure <ref> compares the correlation function ℐ^(m)_ϵ(k) of order flow for the MBOC and the BOCPD model. Figure <ref> compares the predictor of market impact ℐ^(m)_Δ p(k) for the same models. From both figures, it is evident that the MBOC model outperforms the BOCPD when m>1. This justifies why in the main text we present the results obtained with the MBOC model.
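As an illustration of the procedure described above — not the authors' code — the following sketch applies the interquartile outlier filter to the log-returns and fits a power law of the form y = A·x^γ by ordinary least squares in log-log space. The exact regressand and regressor follow Eq. (<ref>) of the paper, which is not reproduced here, so the variables x and y below are placeholders; the delta-method standard error for A is also an assumption.

import numpy as np

def iqr_outlier_mask(r):
    # Flag log-returns outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] as outliers.
    q1, q3 = np.percentile(r, [25, 75])
    iqr = q3 - q1
    return (r < q1 - 1.5 * iqr) | (r > q3 + 1.5 * iqr)

def fit_power_law(x, y):
    # Fit y = A * x**gamma via OLS on log(y) = log(A) + gamma*log(x),
    # returning (A, gamma) together with their standard errors.
    X = np.column_stack([np.ones_like(x), np.log(x)])
    z = np.log(y)
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    sigma2 = resid @ resid / (len(z) - 2)
    cov = sigma2 * np.linalg.inv(X.T @ X)
    se = np.sqrt(np.diag(cov))
    A, gamma = np.exp(beta[0]), beta[1]
    return (A, gamma), (A * se[0], se[1])  # SE of A via the delta method

Fitting once on the full sample and once after discarding the points flagged by iqr_outlier_mask corresponds to the two variants of the regression discussed above.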
http://arxiv.org/abs/2307.02532v1
20230705180001
A phenomenological estimate of isospin breaking in hadronic vacuum polarization
[ "Martin Hoferichter", "Gilberto Colangelo", "Bai-Long Hoid", "Bastian Kubis", "Jacobo Ruiz de Elvira", "Dominic Schuh", "Dominik Stamen", "Peter Stoffer" ]
hep-ph
[ "hep-ph", "hep-ex", "hep-lat", "nucl-th" ]
http://arxiv.org/abs/2307.03137v1
20230706170649
Topology-Aware Loss for Aorta and Great Vessel Segmentation in Computed Tomography Images
[ "Seher Ozcelik", "Sinan Unver", "Ilke Ali Gurses", "Rustu Turkay", "Cigdem Gunduz-Demir" ]
eess.IV
[ "eess.IV", "cs.CV", "cs.LG" ]
Topology-Aware Loss for Aorta and Great Vessel Segmentation in Computed Tomography Images Seher Ozcelik, Sinan Unver, Ilke Ali Gurses, Rustu Turkay, and Cigdem Gunduz-Demir This work was partly supported by the Scientific and Technological Research Council of Turkey, project no: TÜBİTAK 120E497. S. Ozcelik is with the Computational Sciences and Engineering Program and KUIS AI Center, Koc University, 34450 Istanbul, Turkey (e-mail: sozcelik19@ku.edu.tr). S. Unver is with the Department of Mathematics, Koc University, 34450 Istanbul, Turkey (e-mail: sunver@ku.edu.tr). I. A. Gurses is with the Department of Anatomy, School of Medicine, Koc University, 34450 Istanbul, Turkey (e-mail: igurses@ku.edu.tr; iagurses@gmail.com). R. Turkay is with the Department of Radiology, School of Medicine, Haseki SUAM, Medical Sciences University, 34265 Istanbul, Turkey (e-mail: rustu.turkay@sbu.gov.tr; rustuturkay@hotmail.com). C. Gunduz-Demir is with the Department of Computer Engineering, School of Medicine, and KUIS AI Center, Koc University, 34450 Istanbul, Turkey (e-mail: cgunduz@ku.edu.tr). August 1, 2023 Segmentation networks are not explicitly required to learn global invariants of an image, such as the shape of an object and the geometry between multiple objects, when they are trained with a standard loss function. On the other hand, incorporating such invariants into network training may help improve performance for various segmentation tasks when they are the intrinsic characteristics of the objects to be segmented. One example is the segmentation of the aorta and great vessels in computed tomography (CT) images, where the vessels are found in a particular geometry in the body due to the human anatomy and mostly appear as round objects on a 2D CT image. This paper addresses this issue by introducing a new topology-aware loss function that penalizes topology dissimilarities between the ground truth and prediction through persistent homology. Different from the previously suggested segmentation network designs, which apply the threshold filtration on a likelihood function of the prediction map and the Betti numbers of the ground truth, this paper proposes to apply the Vietoris-Rips filtration to obtain persistence diagrams of both ground truth and prediction maps and calculate the dissimilarity with the Wasserstein distance between the corresponding persistence diagrams. The use of this filtration has the advantage of modeling shape and geometry at the same time, which may not happen when the threshold filtration is applied.
Our experiments on 4327 CT images of 24 subjects reveal that the proposed topology-aware loss function leads to better results than its counterparts, indicating the effectiveness of this use. Topology, persistent homology, Vietoris-Rips filtration, encoder-decoder networks, aorta and great vessel segmentation, computed tomography. § INTRODUCTION Encoder-decoder networks have achieved state-of-the-art results for various segmentation problems on medical images. The training of these networks relies on minimizing a loss function, e.g., mean squared error and cross entropy, which typically defines the loss of each pixel separately and aggregates these pixel-wise losses. This aggregation might be unweighted, assigning the unit weight to each pixel's loss, or weighted, giving higher loss weights to hard-to-learn pixels. In the latter case, the pixels' weights can be assigned beforehand and remains the same during training, e.g., giving higher weights for pixels close to object boundaries <cit.> or belonging to the minority foreground classes <cit.>. Alternatively, these weights can be adaptively changed during the training by modulating them based on the network performance, e.g., reducing the weights of easy-to-learn pixels for which the network gives high posteriors at a given epoch <cit.>, <cit.>. These typical loss functions define the loss of each pixel only on its true and predicted values, but not considering those of other pixels, and aggregate them by weighted averaging or summing without considering the spatial relations between the predictions. Since this type of definition is of local nature, these loss functions may not sufficiently impose a network to learn the shape of an object or the geometry between multiple objects. On the other hand, the ability of the network to learn the shape may be important for better segmenting the objects in medical images since these objects typically have an expected shape or a geometry due to their intrinsic characteristics. One example is the formation of the aortic arch and great vessels in a human body. The aorta and the large arteries and veins (also known as great vessels) are not randomly distributed over the human body. Instead, they are found in a particular geometry due to the human anatomy (Fig. <ref>). Besides, they mostly seem as round objects on a 2D axial image since blood vessels are tubular in 3D. This anatomic information is indeed utilized by human annotators to locate these vessels and delineate their boundaries. In response to this issue, this paper introduces a new topology-aware loss function to train an encoder-decoder network for segmenting the aortic arch and the great vessels in computed tomography (CT) images. This loss function is defined as a weighted cross entropy, in which the weight for a training sample (and thus, for its pixels) is calculated inversely proportional to topological similarity between the maps of its ground truth and predicted vessels. This paper proposes to quantify topological features of these maps through persistent homology. To this end, it proposes to calculate the persistence diagram of each map by applying the Vietoris-Rips filtration on the point cloud of its vessel contours and enforces the network to minimize the Wasserstein distance between the corresponding persistence diagrams by defining the loss weight as a function of this distance. The proposed approach differs from the existing studies in the construction of the topological loss function and its integration to the network. 
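To make the per-pixel weighted aggregation discussed above concrete, the following PyTorch-style sketch computes a binary cross-entropy loss in which each pixel carries its own weight. This is an illustrative baseline only, not the paper's loss; the weight map itself (e.g., larger weights near object boundaries or for minority foreground pixels) is left unspecified and is the caller's assumption.

import torch

def weighted_pixel_bce(p_hat, p, weights):
    # p_hat:   predicted foreground probabilities, shape (B, H, W), in (0, 1)
    # p:       binary ground truth, same shape
    # weights: per-pixel weights, same shape; all ones recovers the plain
    #          unweighted cross-entropy aggregation
    eps = 1e-7
    p_hat = p_hat.clamp(eps, 1 - eps)
    pixel_loss = -(p * torch.log(p_hat) + (1 - p) * torch.log(1 - p_hat))
    return (weights * pixel_loss).sum()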
Although there exist recent studies, such as <cit.>, <cit.>, <cit.> and <cit.>, that satisfactorily use persistent homology to train neural networks using the topology of the ground truth, our approach is different from these studies. We use persistent homology to learn not only the topology but also the geometry of the ground truth. Here, by geometry we mean the differential geometric nature of the objects in question, namely the shape of the aorta and the distribution of the great vessels with respect to each other. We will describe the details of this contribution in more mathematical terms below in Sec. <ref>. In summary, the difference of our contribution from the other studies stems from the type of filtration that we use in persistent homology, from the employment of the full strength of the persistent homology on the ground truth, as well as the use of the special metric to compare the persistent diagrams of the ground truth and prediction maps. The previous studies use the persistent homology of the prediction based on the threshold filtration associated to a likelihood function predicted by the network and the Betti numbers of the ground truth. Additionally, the loss functions they use only ensure that the topologies of the ground truth and the prediction approached each other. On the other hand, different from this previous approach, in this paper, we propose to use the Vietoris-Rips filtration on the persistent homology of both the ground truth and the prediction maps and the Wasserstein distance between the corresponding persistence diagrams. The use of the Vietoris-Rips filtration takes into account the geometry of the ground truth and the prediction maps, and the loss function based on the Wasserstein distance ensures that these two geometries approach each other. Besides, this is the first proposal of using a topology-aware loss function in a neural network design for the purpose of segmenting the aortic arch and great vessels, which are indeed found in a particular geometry in the human body, and thus, provides an exemplary showcase to demonstrate the usefulness of the geometry-preserving property of a neural network. Although there exist previous traditional models and network designs to segment the aorta and coronary arteries <cit.>, <cit.>, <cit.>, <cit.>, none of them use persistent homology in their models or in the definition of their loss functions. § RELATED WORK AND CONTRIBUTION §.§ Persistent Homology For visual data of medical nature, there are prior restrictions on their shape coming from the human anatomy. At first, it might be thought of as natural to use their topological invariants as part of their features since the topological invariants would comprise a summary of the global properties of the images. On the other hand, one immediately faces the problem that topological invariants are very rigid and real life data are very noisy so that using such rigid invariants to summarize noisy data will lead to complications. A solution to this problem in the context of homology of topological spaces is to use an invariant which is more stable under small perturbations. This invariant we will use, is persistent homology, which we will briefly review in our context. An important invariant of a topological space X is the n-th homology group H_n(X), for each n≥ 0 <cit.>. Here and elsewhere in this paper, we always consider homology groups with coefficients in the field of rational numbers ℚ. This makes the homology groups vector spaces over ℚ. 
The dimension b_n(X) of the vector space H_n(X) is the n-th Betti number of X and is a measure of the number of n dimensional spheres which do not bound an n+1 dimensional ball. Thus, b_0(X) is the number of connected components of X and b_1(X) is the number of circles which do not bound a disk. Even though the Betti invariants are extremely useful in the abstract study of topological spaces, they are too rigid to be of use in machine learning applications. More precisely, all real-world data come with noise and error. This is indeed a very typical case for medical images, in which noise and artifacts commonly exist in an image due to non-ideal conditions in image acquisition and/or technical limitations of the image scanner. A variant of homology that is more convenient for real-world applications is the persistent homology <cit.>, <cit.>. Here the input is a topological space 𝕏, which is endowed with a filtration {X_t}_t ∈ℝ, indexed by the real numbers ℝ, with 𝕏=∪ _t ∈ℝX_t. The condition for being a filtration is that for every s≤ t, X_s ⊆ X_t. Taking homology of the spaces in the filtration for a fixed integer n, we obtain a persistence module { H_n(X_t) }_t ∈ℝ, which associates a ℚ-vector space to each t ∈ℝ and a linear map between these vector spaces for each pair (t,s) with t≤ s, coming from the functoriality of homology. Associated to a persistence module, there is a barcode and a persistence diagram that summarizes at which filtration index the holes are born and at which filtration index they die. The flavor of the persistent homology and what it measures depends very much on which filtration one chooses to consider on 𝕏. §.§ Persistent Homology for Segmentation Networks There exist only a few studies that employ persistent homology in the design of a segmentation network. Similar to ours, this is achieved through the definition of a topological loss function. On the other hand, the main difference between these previous studies and ours is the type of filtration, the choice of which affects the phenomena persistent homology quantifies, and hence, the phenomena that a network is enforced to learn during its training. In the context of segmentation networks, there are essentially two different ways one obtains a topological space with a filtration. One of these filtrations, which is the one that we employ in this work, is through the use of a distance function. Here, one starts with a point cloud X in a metric space M with a distance function d. For each t∈ℝ, X_t is the set of m∈ M such that there exists an x ∈ X with d(m,x)≤ t. Then { X_t}_t ∈ℝ gives a filtration of M. The persistence homology of this filtration encodes information about the shape of X. This filtration is called the distance filtration or the Vietoris-Rips filtration below. The other type of filtration is constructed by using a real valued function f on space. If one lets X_t:=f^-1((-∞, t]) then { X_t}_t∈ℝ forms a filtration of the underlying space. In most of the studies below, f is chosen to be 1-p where p is a likelihood function on ℝ^n, which aims to predict a shape X in ℝ^n. More precisely, we wish p to have the property that x ∈ X if and only if p(x)=1. §.§.§ The method of <cit.> In this work, if Ω denotes the image, which is viewed as a rectangular domain, there are two filtrations obtained on Ω, which correspond to two different functions on Ω. The first one is a binary function f which assumes the value 0 on the foreground and 1 on the background. 
The other function is g:=1-p, where p is the likelihood function predicted by the neural network. The functions f and g give two different filtrations on Ω and these result in two different persistent homology data. The topological part of the loss function used in <cit.> is the square of the 2-Wasserstein distance between the persistence diagrams for these filtrations for both dimensions 0 and 1. The effect of using this topological loss function, in addition to the per-pixel cross-entropy loss, is that the network will emphasize learning the 0-th and 1-st Betti numbers of the ground truth, in addition to learning the pixels. §.§.§ The method of <cit.> In this work, the authors use a training set in which they know the ground-truth segmentation for only some of the items, but they know the topology of the ground-truth segmentation for all of the items. The topology is known a priori without the use of the network. The method is then to train the network so that the predicted images have the desired Betti numbers as well as the pixelwise Dice loss function is minimized on the labeled images. These desired Betti numbers are determined by the correct prior topology. The topological loss function is then constructed in terms of these Betti numbers. The loss function is based on increasing the barcode length of the k-th largest barcode lengths if k is the desired Betti number. More precisely, denoting the birth and death coordinates of a bar by (b,d), if this bar is to be a prominent feature of the image, its contribution to the loss function is 1-(d-b)^2, otherwise it is (d-b)^2. The same authors extended this method to multi-class image segmentation, including the Betti numbers corresponding to the triplets of the objects of different classes into the prior topology <cit.>. The method of <cit.> is somewhat similar. In this paper, the authors define the filtration by using the voxel intensity function on the data. The intensity function is normalized to have values between 0 and 1. The joint loss function is defined in terms of the Dice and cross-entropy losses together with the topological loss, which is the 1-Wasserstein distance between the persistence diagram of the prediction and the persistence diagram of the expected topological space. Denoting the birth and death coordinates of a bar with (b,d), the contribution to the topological loss function is 1-(d-b) if the feature is expected to be a prominent feature, and is (d-b), otherwise. Using such a loss function has the effect of killing non-prominent features and emphasizing prominent ones through the learning process. §.§.§ The method of <cit.> In this work, the authors use persistent homology in two different ways to improve the 3D segmentation of objects: First, they use a topological loss function in a similar vein as those in <cit.> and <cit.>. This topological loss function is defined as the distance between the persistence diagrams of the likelihood function predicted by the network and those of the ground truth labels. The distance function between the persistence diagrams is defined using the L_∞-norm on ℝ^2, and first finding a matching between the diagrams that realizes the 1-Wasserstein distance between these two diagrams, and then computing the sum of the squares of the distances between the matched points with respect to the ordinary metric on ℝ^2. Besides, the authors integrate persistent homology with a graph convolution network to capture multi-scale structural information. 
To do so, they form a point cloud in three dimensions and calculate persistence diagrams in each dimension using the distance filtration. From the persistence diagrams, the persistence image is constructed and put into a vector form, and added as a local feature map to augment the feature map obtained by the graph convolutional network. Even though persistence diagrams are defined on point clouds using the distance filtration, this second use, which involves adding the vectorization of the persistence image as a feature, differs considerably from our method, which is based on defining a loss function using the Wasserstein distance between the persistence diagrams of the Vietoris-Rips filtrations of the prediction and the ground truth. Both of the uses in <cit.> do not define such kind of loss function to enforce the network to learn the topology and shape of the objects as well as the geometry in between. §.§.§ Other uses In <cit.>, the authors use persistent homology for a generative adversarial network to synthesize more realistic images in its generator. They map synthetic and real images into a topological feature space and define their topological dissimilarity as an additional loss term. However, different from our proposal, the filtration function is defined on the distance transform, the distance from each background pixel to the closest foreground object, which would not reflect the shape of an object or the geometry between multiple objects. In <cit.>, the authors define a topological loss term for a segmentation network, but not using persistent homology. Instead, they measure the difference between the pretrained VGG19 responses of the predicted and ground truth maps and use it as an additional loss term to correct the topology of linear structures in the ground truth. §.§ Persistent Homology for Other Network Tasks Another common use of persistent homology is to design convolutional neural networks where classification is the upstream task. These networks use persistent homology as a tool to obtain better latent representations, which reflect the topological characteristics in the data. Such representations are obtained from persistence landscape <cit.> and using the filtration associated to the height function <cit.>, and integrated as a topological layer of the classification network. There are works which also use the persistent homology associated to the Vietoris-Rips complex. In <cit.>, a loss function is defined based on the death times of the barcodes for 0-dimensional persistent homology of the latent representation. The loss function measures the difference between these death times and a fixed distance η. Our work is different from <cit.> in several aspects. First, since we are interested in shape as well as connectivity, our loss function uses both the 0 and 1-dimensional persistent homology groups unlike <cit.>, which only uses the 0-dimensional homology. Additionally, our loss function is based on the Wasserstein distance between the persistence diagrams of both the ground truth and the prediction, and hence, the nature of the loss function changes as the ground truth varies, whereas in <cit.> the loss function is defined with respect to a fixed distance η as described above. Such flexibility is essential in our setting since the shape and arrangement of the great vessels and aorta change through the axial scans due to the inherent nature of the human anatomy. 
Likewise, in <cit.>, connectivity properties of the latent representation in an autoencoder are improved through the use of a loss function based on the 0-dimensional persistent homology. Moreover, all these studies apply this learning process, involving persistent homology, to classification tasks rather than a segmentation task as we do in our study. Other principal uses of persistent homology in machine learning, which will not have relevance for this work, includes regularizing the weights of a network <cit.>, interpreting the weights of layers in a convolutional neural network <cit.>, and extracting topological features for a classifier <cit.>. Nevertheless, none of these studies use persistent homology to define a loss function for a segmentation network. § METHODOLOGY Our method relies on 1) quantifying the topological features of the ground truth and the predicted segmentation maps by their persistence diagrams, 2) defining a loss function using the Wasserstein distance between the persistence diagrams of the ground truth and the prediction, and 3) training an encoder-decoder network by minimizing the proposed loss function. The following subsections give the details. §.§ Persistence Diagram Calculation for the Aortic Arch and Great Vessels Suppose that we start with a point cloud X in ℝ^n. In our case n=2, and X will be the contours of the aorta and the great vessels for either the ground truth or the prediction of the network at a given epoch. For t ∈ℝ, if we let X_t to be the set of points in ℝ^n, whose distance to X is less than or equal to max(t,0), then this gives us a filtration { X_t}_t ∈ℝ of ℝ^n=𝕏, with X_0=X. The associated persistence diagrams can be thought of as more dynamic and stable versions of the ordinary homology groups of X. The persistent homology with the Vietoris-Rips (distance) filtration above has an added, somewhat surprising, benefit. It tells us about the geometry of X, and not only about its topology. By geometry, we mean the shape of an object and also the distribution of multiple objects with respect to each other. Fig. <ref> depicts the boundaries of two homotopy equivalent objects of different shapes, exhibiting the same topology but different 1-dimensional persistent homologies. Likewise, Fig. <ref> sketches the boundaries of two pairs of homotopy equivalent objects with different distributions. These object pairs have the same topology, but this time, different 0-dimensional persistent homologies. Such geometry differences cannot be captured by a filtration associated to the likelihood function, as suggested by the previous segmentation networks <cit.>. On the other hand, as illustrated in these figures, the Vietoris-Rips filtration that we use in our design produces different barcodes, which allows us to model differences in the object geometries. This idea will be very important for our segmentation model since the shape of the aorta and the distribution and the distances of the great vessels with respect to each other give us essential global invariants, which will help us improve the network using this prior geometric information. In our model, we define a loss function based on the 0-dimensional persistent homology if the ground truth includes any great vessels, which are indeed smaller in size compared to the aorta. The reason is that when we are dealing with the great vessels, the geometry of the associated point cloud is essentially determined by the distribution and the distances of the connected components in the data. 
The connected components in the images we consider correspond to the individual great vessels themselves. Even though the number of connected components, hence the 0-th Betti number, in two point clouds might be the same, the corresponding barcodes associated with their 0-dimensional persistent homology might be quite different (see Fig. <ref>). If the ground truth includes only the aortic arches, we use the 1-dimensional persistent homology in the loss function since this time the shape becomes more distinctive for these relatively larger vessels. The fact that the holes are born and die at different indices of the Vietoris-Rips filtration gives essential information about the shape of the aortic arch (see Fig. <ref>), and this trait can be successfully used to train the network. §.§ Topology-Aware Loss Function Let I be a training image, i ∈ I be a pixel, p_i be the ground truth for the pixel i, and p̂_i be its posterior probability estimated by a network. Here p_i=1 if i is an aortic arch or a great vessel pixel, and p_i=0 otherwise. The cross-entropy loss CE_I for the image I is defined as: CE_I = ∑_i∈ I [ - p_i log p̂_i - (1-p_i) log(1-p̂_i) ] In this work, we define our topology-aware loss function ℒ_T as a weighted sum of cross-entropy losses CE_I ℒ_T = ∑_I ω_I CE_I where ω_I is the topological weight term for the training image I calculated based on the difference between the persistence diagrams of its ground truth map S_I and the prediction map Ŝ_I estimated by the network at the end of each forward pass, the persistence diagrams denoted by Π_S_I and Π_Ŝ_I, respectively. We define the topological weight term ω_I as a linear combination of the Wasserstein distances of the homology group 0 and the homology group 1, d_0(Π_S_I, Π_Ŝ_I) and d_1(Π_S_I, Π_Ŝ_I), respectively. ω_I = 1 + α_I · d_0(Π_S_I, Π_Ŝ_I) + β_I · d_1(Π_S_I, Π_Ŝ_I) where α_I and β_I are the constants that determine the importance of a homology group. Based on our discussions given at the end of Sec. <ref>, if the ground truth S_I of the image I includes any great vessel, we consider only the homology group 0 and empirically set (α_I, β_I) = (5.0e-6, 0.0). Otherwise, if S_I contains only the aortic arches (without any great vessel), we consider the homology group 1 and set (α_I, β_I) = (0.0, 1.0e-4). There are different choices to calculate the distance between two persistence diagrams. These calculations rely on finding matches between the points in these two persistence diagrams that minimize the cost over all matchings, where the points are allowed to be matched with any point on the diagonal (Fig. <ref>). The bottleneck distance uses the maximum of the distances between the matched points whereas the Wasserstein distance uses the sum of the powers of the distances between the matched points. The main difference between these two distance functions is that all matched pairs contribute to the Wasserstein distance, whereas only the pair with the maximum distance contributes to the bottleneck distance. In preliminary tests with our data, we noticed that using the Wasserstein distance led to better results. This is consistent with our intuition that the loss function defined in terms of the Wasserstein distance will continue improving the network when there are several predicted components which need minor corrections. In contrast, the loss function defined in terms of the bottleneck distance will not improve the network when all the components need only minor corrections.
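To illustrate how the weight ω_I can be computed, the following sketch uses the GUDHI library to build Vietoris-Rips persistence diagrams from the ground-truth and predicted vessel contour points and combines their 1-Wasserstein distances. This is not the authors' released implementation; the truncation of the filtration by max_edge_length, the removal of infinite bars, and the function names are assumptions made for the sketch.

import numpy as np
import gudhi
from gudhi.wasserstein import wasserstein_distance

def rips_diagram(points, dim, max_edge=50.0):
    # Persistence diagram (birth/death pairs) in homology dimension dim for the
    # Vietoris-Rips filtration on a 2D contour point cloud.
    rips = gudhi.RipsComplex(points=points, max_edge_length=max_edge)
    st = rips.create_simplex_tree(max_dimension=dim + 1)
    st.compute_persistence()
    diag = st.persistence_intervals_in_dimension(dim)
    if len(diag) == 0:
        return np.empty((0, 2))
    return diag[np.isfinite(diag[:, 1])]  # drop infinite (essential) bars

def topological_weight(gt_pts, pred_pts, alpha, beta):
    # w_I = 1 + alpha*d_0 + beta*d_1, with d_q the 1-Wasserstein distance between
    # the q-dimensional diagrams of the ground-truth and predicted contours.
    w = 1.0
    if alpha > 0:
        w += alpha * wasserstein_distance(rips_diagram(gt_pts, 0),
                                          rips_diagram(pred_pts, 0), order=1.0)
    if beta > 0:
        w += beta * wasserstein_distance(rips_diagram(gt_pts, 1),
                                         rips_diagram(pred_pts, 1), order=1.0)
    return w

With the empirical constants quoted above, alpha would be set to 5.0e-6 (and beta to 0) for images containing great vessels, and beta to 1.0e-4 (and alpha to 0) for images containing only aortic arches.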
§.§ Network Architecture and Training An encoder-decoder network is used to segment the aortic arch and great vessels in CT images. This network is trained to minimize the proposed topology-aware loss function by backpropagation. At each epoch, the forward pass estimates segmentation maps for every training image and updates the topology-aware loss ℒ_T by calculating cross-entropy losses CE_I as well as topological weight terms ω_I with respect to the difference between the ground truths and the predictions. Then, the backward pass updates the network weights by differentiating the updated loss ℒ_T. This training has a warm-up period for 25 epochs where only the cross-entropy loss is used (i.e., ω_I = 1). Afterwards, it continues with minimizing the proposed topology-aware loss function ℒ_T. It is worth noting that this strategy is also used by previous studies <cit.>. In this work, we use a UNet architecture <cit.>, which is illustrated in Fig. <ref>. The encoder path comprises three blocks of two convolutions, with 3×3 filters, and one max pooling, with a 2×2 filter. A dropout layer with a dropout factor of 0.3 is added to prevent overfitting. The encoder path starts with 32 feature maps in its first block and doubles the number of feature maps at the end of each block. The bottleneck block has the same two convolution layers without max pooling. The decoder path includes three blocks, each of which consecutively applies upsampling, concatenation, and two convolution operations. Likewise, the convolution and upsampling layers use 3×3 and 2×2 filters, respectively. The number of feature maps is halved at the end of each decoder block. All convolution layers except the last one use the ReLU activation function. The last layer uses the sigmoid function. This network was implemented in Python using the PyTorch framework. It was end-to-end trained from scratch with an early stopping approach; training was stopped if there was no improvement on the validation set loss in the last 40 epochs. AdaDelta was used as an optimizer to adaptively adjust the learning rate and the momentum. The batch size was selected as 1. The training was conducted on a Tesla T4 GPU. The implementation is available at https://github.com/seherozcelik/TopologyAware. § EXPERIMENTS §.§ Dataset The proposed topology-aware loss function was tested on a dataset that contains CT scans of 24 subjects with prediagnosis of pulmonary embolism. The CT scans were acquired using a 128 slice Philips Ingenuity CT scanner with 1.5 mm slice thickness. 60 ml of non-ionic contrast material (iohexol; generic name Opaxol) was introduced with a 100 ml saline chaser at 5 ml/s. The data collection was conducted in accordance with the tenets of the Declaration of Helsinki and was approved by Koc University Institutional Review Board (Protocol number: 2022.161.IRB1.064). We randomly split the 24 subjects into the training and test sets. The training set contains 2896 images of 16 subjects; 2234 images of 12 subjects were used to learn the network weights by backpropagation and 662 images of 4 subjects were used as validation images for early stopping. The test set comprises 1431 images of 8 subjects; note that the images of these subjects were used neither for training nor for early stopping. §.§ Evaluation Predictions were quantitatively evaluated by calculating the performance metrics both at the pixel- and vessel-level.
For the pixel-level evaluation, true positive pixels were found, and the precision, recall, and f-score were calculated for each image, separately. These metrics were then averaged over the test set images. The vessel-level evaluation was conducted as follows: Let s_i be a vessel (a great vessel or an aortic arch) in the ground truth S_I of a test set image I, and ŝ_j be a segmented object in its prediction map Ŝ_I. Each vessel s_i ∈ S_I was matched with its maximally overlapping object ŝ_j ∈ Ŝ_I, and considered as true positive if the intersection-over-union (IoU) for this match was greater than 50 percent. Afterwards, true positive vessels were accumulated over all test set images and the vessel-level metrics were calculated. With TP_I being the number of true positive vessels in the test set image I, precision=∑_I TP_I / ∑_I |Ŝ_I|, recall=∑_I TP_I / ∑_I |S_I|, and the f-score was calculated from these. Additionally, the Hausdorff distance was found between each ground truth vessel s_i ∈ S_I and its maximally overlapping object in the prediction map, and vice versa. If there was no overlap for a vessel, the Hausdorff distance was calculated between this vessel and the closest segmented object. Then, for the test image I, the overall Hausdorff distance was the weighted average of all Hausdorff distances where the weight of a vessel was selected as the ratio of the vessel's area to the area of all vessels in S_I. Note that better segmentations yield higher precision, recall, and f-score metrics, and lower Hausdorff distances.
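The vessel-level matching described above can be sketched as follows; the representation of each ground-truth vessel and each segmented object as a separate binary mask is an assumption of this illustration, not a detail taken from the paper.

import numpy as np

def vessel_level_counts(gt_masks, pred_masks, iou_thr=0.5):
    # gt_masks, pred_masks: lists of binary numpy arrays, one per ground-truth
    # vessel and per segmented object of a single image. A ground-truth vessel is a
    # true positive if its maximally overlapping predicted object has IoU > iou_thr.
    tp = 0
    for gt in gt_masks:
        best_iou = 0.0
        for pred in pred_masks:
            inter = np.logical_and(gt, pred).sum()
            union = np.logical_or(gt, pred).sum()
            if union > 0:
                best_iou = max(best_iou, inter / union)
        tp += int(best_iou > iou_thr)
    return tp, len(gt_masks), len(pred_masks)  # TP_I, |S_I|, |S_hat_I|

Accumulating the three counts over all test images gives precision = ∑TP_I / ∑|Ŝ_I| and recall = ∑TP_I / ∑|S_I| as defined above.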
We also run this algorithm using the codes provided by its authors. §.§ Results and Discussion The quantitative results obtained on all test set images are given in Table <ref>. We run our model as well as the comparison algorithms five times, and these are the quantitative results averaged across these five runs. This table reveals that our model, which uses the proposed topology-aware loss function, leads to better segmentations, giving higher f-scores and lower Hausdorff distances. The visual results obtained on exemplary test set images are also consistent with this observation (Fig. <ref>). Comparing with the Baseline algorithm, which uses the standard cross-entropy as its loss function, our model with the proposed topology-aware loss is effective to eliminate false positives (the first two rows of Fig. <ref>) as well as correct false negatives (the third and fourth rows of Fig. <ref>). It can also fix incorrect segmentations when false positives and false negatives are found in the same segmentation map (the fifth and sixth rows of Fig. <ref>). Although the LikelihoodFiltration and FourierNet algorithms can correct them to some extent, the proposed model is more effective than these algorithms, as also reflected in the quantitative results. The last row of Fig. <ref> shows an example where all models failed to fix an undersegmentation. However, even on this example, the proposed loss function improved incorrectly segmented pixels better than the other algorithms, partially predicting the boundary pixels between the two vessels. The main contribution of this work is to use the Vietoris-Rips filtration for calculating the persistent homology of the ground truth and the prediction. This has the benefit of modeling the shape of the objects and their geometry, which the persistent homology associated to a likelihood function fails to detect. This concurrent modeling is essential to capture the global invariants in our application. Since CT images contain both aorta and great vessels, the shape of the aorta and the distribution and the distances of the great vessels with respect to each other contain important prior geometric information that could be exploited. To investigate this further, we also calculated the performance metrics separately, for images containing any great vessels and for those containing only the aorta (or the aortic arches). These metrics are reported in Tables <ref>(a) and <ref>(b), respectively. These tables demonstrate that the proposed topology-aware loss function is able to improve the metrics both for the great vessels and the aortic arches. Here one can observe that the FourierNet algorithm, with the shape-preserving property, gives the best vessel-level f-score for the aortic arches, which is consistent with our observation that the shape is important for the aorta. On the other hand, it is not successful to model the distribution of the great vessels, and in turn, it yields the worst results for them. § CONCLUSION This paper presented a topology-aware loss function for automated segmentation of the aorta and great vessels in CT images. This loss function was defined as a weighted cross entropy, in which the weight for an image was the topological dissimilarity between the ground truth map and the segmented map predicted by the network at the end of each epoch. 
Different from the previously suggested segmentation network designs, this paper proposed to apply the Vietoris-Rips filtration to obtain the persistence diagrams of these maps and calculate their (dis)similarity using the Wasserstein distance between the corresponding persistence diagrams. Experiments on 4327 CT images of 24 subjects revealed that this proposal is more effective than its counterparts in simultaneously modeling the shape of the aorta and the geometry between the great vessels. In this work, we used the topological dissimilarity between the ground truth and the prediction to define the weight of an image in the loss function. In other words, we used the same dissimilarity metric to penalize every pixel in the same image, regardless of whether they were correctly or incorrectly predicted. One future research direction is to reflect the dissimilarity only to false negative and false positive pixels, and possibly with different extents. The aorta and great vessel segmentation provides an exemplary showcase for the necessity of modeling the shape and geometry at the same time, and hence, the effectiveness of applying the Vietoris-Rips filtration to obtain the persistence diagrams for defining the proposed topology-aware loss function. Using it for other segmentation problems can be considered as another future research direction. 99 ronneberger15 O. Ronneberger, P. Fischer, and T. Brox, “U-net: Convolutional networks for biomedical image segmentation,” in Proc. Int. Conf. Med. Image Comput. Comput. Assist. Intervent., 2015, pp. 234–241. sudre17 C. H. Sudre, W. Li, T. Vercauteren, S. Ourselin, and M. J. Cardoso, “Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations,” Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Springer, 2017, pp. 240–248. lin17 T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollar, “Focal loss for dense object detection,” in Proc. IEEE Int. Conf. Comp. Vis., 2017, pp. 2980–2988. gunesli20 G.N. Gunesli, C. Sokmensuer, and C. Gunduz-Demir, “AttentionBoost: Learning what to attend for gland segmentation in histopathological images by boosting fully convolutional networks,” IEEE Trans. Med. Imaging, vol. 39, no. 12, pp.4262–4273, 2020. neur X. Hu, F. Li, D. Samaras, and C. Chen, “Topology-preserving deep image segmentation," in Proc. Adv. Neural Inf. Process. Syst., 2019. pami J. Clough, N. Byrne, I. Oksuz, V.A. Zimmer, J.A. Schnabel, and A. King, “A topological loss function for deep-learning based image segmentation using persistent homology," IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, pp. 8766–8778, 2022. icml E. Khramtsova, G. Zuccon, X. Wang, and M. Baktashmotlagh, “Rethinking persistent homology for visual recognition," in Topological, Algebraic and Geometric Learning Workshops, 2022, pp. 206–215. itmi N. Byrne, J. Clough, I. Valverde, G. Montana, and A. King, “A persistent homology-based topological loss for CNN-based multiclass segmentation of CMR," IEEE Trans. Med. Imaging, vol. 42, no. 1, 2023. bonechi21 S. Bonechi et al., “Segmentation of aorta 3D CT images based on 2D convolutional neural networks," Electronics, vol. 10, no. 20, pp. 2259, 2021. cheung21 W.K. Cheung et al., “A computationally efficient approach to segmentation of the aorta and coronary arteries using deep learning," IEEE Access, vol. 9, 2021. zhong21 J. Zhong, Z. Bian, C.R. Hatt, and N.S. 
Burris, “Segmentation of the thoracic aorta using an attention-gated U-Net," Medical Imaging 2021: Computer-Aided Diagnosis, vol. 11597, 2021. gu21 L. Gu and X.-C. Cai, “Fusing 2D and 3D convolutional neural networks for the segmentation of aorta and coronary arteries from CT images," Artif. Intell. Med., vol. 121, pp. 102189, 2021. hatcher02 A. Hatcher, “Algebraic Topology," Cambridge University Press, Cambridge, 2002. carlsson09 G. Carlsson, “Topology and data," Bull. Am. Math. Soc., vol. 46(2), pp. 255–308, 2009. chazal21 F. Chazal, and B. Michel, “An introduction to topological data analysis: fundamental and practical aspects for data scientists," 2021, arXiv:1710.04019. haft20 M. Haft-Javaherian, M. Villiger, C.B. Schaffer, N. Nishimura, P. Golland, and B.E. Bouma, “A topological encoding convolutional neural network for segmentation of 3D multiphoton images of brain vasculature using persistent homology," in Proc. IEEE Int. Conf. Comp. Vis. Pattern Recognit. Workshops, 2020, pp. 990–991. wong21 C.-C. Wong and C.-M. Vong, “Persistent homology based graph convolution network for fine-grained 3D shape segmentation," in IEEE Int. Conf. Comp. Vis., 2021, pp. 7078–7087. wang20 F. Wang, H. Liu, D. Samaras, and C. Chen, “TopoGAN: A topology-aware generative adversarial network," in Computer Vision – ECCV 2020, 2020, pp. 118–136. mosinska18 A. Mosinska, P. Marquez-Neila, M. Kozinski, and P. Fua, “Beyond the pixel-wise loss for topology-aware delineation," in Proc. IEEE Conf. Comp. Vis. Pattern Recognit., 2018, pp. 3136–3145. hofer17 C. Hofer, R. Kwitt, M. Niethammer, and A. Uhl, “Deep learning with topological signatures," in Proc. Adv. Neural Inf. Process. Syst., 2017. hofer19 C. Hofer, R. Kwitt, M. Niethammer, and M. Dixit, “Connectivity-optimized representation learning via persistent homology," in Proc. Int. Conf. Mach. Learning, 2019, pp. 2751–2760. moor20 M. Moor, M. Horn, B. Rieck, and K. Borgwardt, “Topological autoencoders," in Proc. Int. Conf. Mach. Learning, 2020, pp. 7045–7054. gabrielsson20 R. Gabrielsson, B. Nelson, A. Dwaraknath, P. Skraba, L. Guibas, and G. Carlsson, “A Topology layer for machine learning," in Proc. Int. Conf. Artif. Intell. Statistics, 2020, pp. 1553–1563. gabrielsson18 R. Gabrielsson, and G. Carlsson, “A look at the topology of convolutional neural networks," 2018, arXiv:1810.03234. qaiser19 T. Qaiser et al., “Fast and accurate tumor segmentation of histology images using persistent homology and deep convolutional features," Med. Image Anal., vol.55, pp. 1–14, 2019. fourierNet S. Cansiz, C. Kesim, S.N. Bektas, Z. Kulali, M. Hasanreisoglu, and C. Gunduz-Demir, “FourierNet: Shape-preserving network for Henle's fiber layer segmentation in optical coherence tomography images," IEEE J. Biomed. Health Inform., vol. 27, no. 2, pp.1036–1047, 2023.
http://arxiv.org/abs/2307.02040v1
20230705055508
VertiBench: Advancing Feature Distribution Diversity in Vertical Federated Learning Benchmarks
[ "Zhaomin Wu", "Junyi Hou", "Bingsheng He" ]
cs.LG
[ "cs.LG", "cs.AI" ]
Vertical Federated Learning (VFL) is a crucial paradigm for training machine learning models on feature-partitioned, distributed data. However, due to privacy restrictions, few public real-world VFL datasets exist for algorithm evaluation, and these represent a limited array of feature distributions. Existing benchmarks often resort to synthetic datasets, derived from arbitrary feature splits from a global set, which only capture a subset of feature distributions, leading to inadequate algorithm performance assessment. This paper addresses these shortcomings by introducing two key factors affecting VFL performance - feature importance and feature correlation - and proposing associated evaluation metrics and dataset splitting methods. Additionally, we introduce a real VFL dataset to address the deficit in image-image VFL scenarios. Our comprehensive evaluation of cutting-edge VFL algorithms provides valuable insights for future research in the field. § INTRODUCTION The increasing demand for ample, high-quality data for training advanced machine learning models is evident, particularly in the context of large language models <cit.>. However, real data, often sensitive and distributed across multiple parties, presents a challenge, especially in the face of strict privacy regulations like the GDPR <cit.>. As a result, federated learning <cit.> has been highlighted as a promising approach to train machine learning models on distributed data while ensuring privacy. In this study, we consider a broad definition of federated learning <cit.>, encompassing all privacy-preserving collaborative learning paradigms, including assisted learning <cit.> and split learning <cit.>. Given the emerging variety of federated learning approaches <cit.>, the importance of comprehensive benchmarks for evaluating these new algorithms is underscored.
The landscape of federated learning benchmarks, featuring contributions such as FedScale <cit.>, MNIST <cit.>, FedEval <cit.>, and NIID-Bench <cit.>, predominantly caters to horizontal federated learning (HFL), wherein each party possesses a subset of instances. In comparison, vertical federated learning (VFL) - where each party holds a subset of features - is notably under-addressed. The development of real-world VFL benchmarks faces two main hurdles. First, privacy concerns inherent to federated learning inhibit the public sharing of distributed data. Second, the current limited pool of actual VFL datasets, such as those in the OARF benchmark <cit.>, NUS-WIDE <cit.>, and Vehicle <cit.>, may not sufficiently represent the broad range of possible VFL scenarios (Figure <ref>; Section <ref>). This scarcity and underrepresentation highlight the urgent need for synthetic VFL benchmarks that can facilitate a comprehensive evaluation of VFL algorithms across diverse scenarios. Existing efforts to construct synthetic VFL benchmarks have struggled to represent the diversity of real-world scenarios. Benchmarks such as OARF <cit.>, FedML <cit.>, and LEAF <cit.> fabricate vertically partitioned data by randomly assigning an equal number of features to synthetic parties. Some studies <cit.> resort to a simplistic approach, manually dividing features without offering a substantial rationale for their choice. Furthermore, these existing benchmarks <cit.> do not offer a meaningful comparison of cutting-edge VFL algorithms. Hence, it becomes crucial to critically examine the key factors influencing VFL algorithm performance, and thoughtfully design synthetic VFL benchmarks that reflect these considerations. The task of creating a systematic synthetic VFL benchmark hinges on identifying key factors influencing VFL algorithm performance. Current synthetic benchmarks for non-i.i.d. HFL like NIID-Bench <cit.> are inapplicable to VFL due to their assumptions about feature space and instance equality. In particular, HFL benchmarks assume that all parties operate within the same feature space, a presumption that does not conform to VFL's distributed feature scenario. Moreover, the instance equality presupposed by NIID-Bench during allocation does not apply when dealing with features of varied importance, underscoring the unique challenges in the analysis of synthetic VFL benchmarks. Given these limitations, our study conducts a systematic analysis to identify feature importance and correlation as two crucial determinants of VFL performance. Accordingly, we propose VertiBench, a comprehensive VFL benchmark that introduces novel feature-splitting methods for synthetic dataset generation and a new real-world VFL dataset. Our key contributions include: (1) We develop new feature-splitting methods that generate synthetic datasets based on feature importance and correlation, covering a diverse range of VFL scenarios. (2) We introduce a real-world VFL dataset, filling a noted gap in image-image VFL scenarios. (3) We devise methods to quantify the importance and correlation of real-world VFL datasets, allowing them to align with our synthetic datasets. (4) We conduct rigorous benchmarks of advanced VFL algorithms across diverse scenarios, thereby offering valuable insights for future research.
For example, we demonstrate the scalability of certain VFL algorithms, challenging prior assumptions about VFL scaling difficulties <cit.>, and emphasize the importance of communication efficiency in VFL, especially for imbalanced distributed datasets. § RELATED WORK Vertical federated learning datasets. The scarcity and limited range of real-world VFL datasets <cit.> in benchmarks and studies <cit.> underscore the need for synthetic VFL datasets capable of depicting a broader spectrum of scenarios. Given VFL's focus on data privacy, obtaining such real datasets is challenging. Synthetic benchmarks <cit.> and VFL study datasets <cit.> commonly rely on unexplained random or manual feature splitting, which often represents scenarios of balanced feature importance and high inter-party correlation (Figure <ref>, <ref>). This situation indicates a pressing demand for systematic methods to generate synthetic VFL datasets that accurately reflect a diverse set of scenarios, fostering comprehensive evaluation of VFL algorithms. Feature importance. The Shapley value <cit.>, used to assess party contributions in federated learning <cit.>, has significant computational costs, rendering it unsuitable for guiding feature splitting. Certain methodologies <cit.> utilize a Dirichlet distribution for global dataset random split, creating imbalanced federated learning datasets. However, they do not consider the partitioning of features of varying importance. Feature correlation. The task of efficiently gauging correlation among two groups of features is challenging despite well-studied individual feature correlation <cit.>. The Shapley-Taylor index, proposed for evaluating correlation between feature sets <cit.>, is computationally intensive (NP-hard), and unsuitable for high-dimensional datasets. The determinant of the correlation matrix <cit.> efficiently estimates inter-party correlation but is over-sensitive to linearly correlated features, impeding its use in feature partitioning. A more refined metric - the multi-way correlation coefficient (mcor) <cit.>, addresses this, but like the determinant, it struggles with unequal feature numbers across parties, a typical VFL scenario, due to the assumption of a square correlation matrix. § VFL ALGORITHMS This section critically reviews current VFL algorithms, with a focus on accuracy, efficiency, and communication size. VertiBench concentrates on standard supervised learning tasks such as classification and regression within synchronized parties, summarized in Table <ref>. Notably, this benchmark excludes studies exploring different VFL aspects such as privacy <cit.>, fairness <cit.>, data pricing <cit.>, asynchronization <cit.>, latency <cit.>, and other tasks like unsupervised learning <cit.>, matrix factorization <cit.>, multi-task learning <cit.>, and coreset construction <cit.>. While most VFL algorithms presume accurate inter-party data linking, we adopt this approach in VertiBench, despite recent contrary findings <cit.> that this assumption may not be true. We refer to parties with and without labels as primary and secondary parties respectively. The existing methods can be bifurcated into two categories: ensemble-based and split-based. The distinguishing factor lies in the independent prediction capability of each party. 
Ensemble-based methods involve parties each maintaining a full model for local feature prediction, with collaborative ensemble methods during training, while split-based methods require each party to hold a partial model forming different inference stages of the full model. Consequently, split-based partial models cannot perform independent inference. For split-based models, our focus is on advanced models such as neural networks (NNs) and gradient boosting decision trees (GBDTs) <cit.>, though VertiBench can accommodate various models <cit.>. Split-NN-based models are trained by transferring representations and gradients, while split-GBDT-based models are trained by transferring gradients and histograms. A more detailed comparison of ensemble-based and split-based algorithms is provided in Appendix <ref>. § SYNTHETIC VFL DATASETS §.§ Factors that affect VFL performance Suppose there are K parties. Denote the data on party P_k as a random vector 𝐗_k (1 ≤ k ≤ K). Denote the label as a random variable y. A supervised learning algorithm maximizes the likelihood function L(y|𝐗_K,...,𝐗_1;h), where the hypothesis h represents models and parameters. These supervised learning algorithms estimate the following probability mass function; the proof of Proposition <ref> is provided in Appendix <ref>. The probability mass function can be written as log𝒫(y|𝐗_K,...,𝐗_1) = ∑_k=1^K log[𝒫(y|𝐗_k,...,𝐗_1)/𝒫(y|𝐗_k-1,...,𝐗_1)] + log𝒫(y) In VFL, 𝒫(y) is the same for all the parties. The skewness among K parties is determined by K ratios of distributions. Interestingly, this ratio quantifies the divergence between two marginal probability distributions of y - one inclusive of 𝐗_k and the other exclusive of 𝐗_k. Essentially, the ratio estimates the impact on the global distribution when the features of a single party are excluded. This can be interpreted as the importance of a given party. It is important to note that Proposition <ref> is applicable regardless of the order of 𝐗_1,…,𝐗_K. For a more precise evaluation of each party's importance, especially considering the independence among features, the Shapley value has proven to be a useful measure. It has been employed to estimate the importance of each party in vertical federated learning scenarios <cit.>. In another aspect, the ratio 𝒫(y|𝐗_k,...,𝐗_1)/𝒫(y|𝐗_k-1,...,𝐗_1) is determined by the correlation between 𝐗_k and 𝐗_1,…,𝐗_k-1. In other words, the global distribution is affected by the feature correlation between different parties. In summary, we highlight feature importance and correlation as two crucial factors that could potentially influence the performance of VFL algorithms. We treat importance and correlation as independent variables affecting 𝒫(y|𝐗_k,...,𝐗_1)/𝒫(y|𝐗_k-1,...,𝐗_1) in our analysis, despite a potential innate correlation between the two. The subsequent sections introduce our approach to generating synthetic datasets based on these two factors. §.§ Feature Importance In light of the computational expense incurred by the Shapley value method, an alternative and more efficient strategy is necessary to perform feature splits based on importance. With all parties exhibiting symmetry in the context of 𝐗, varying the importance among parties essentially translates to varying the variance of the importance among them. Assuming each party P_i possesses an importance factor α_i>0, we propose the implementation of the Dirichlet distribution parameterized by {α_i}_i=1^K for feature splitting.
This approach ensures two beneficial properties post-split: (1) a larger α_i guarantees a higher expected importance for P_i, and (2) a smaller ‖{α_i}_i=1^K‖_2 assures a greater variance in the importance among parties. More specifically, we propose a feature splitting method based on feature importance. After initializing local datasets for each party, a series of probabilities p_1,…,p_K s.t. ∑_i=1^Kp_i=1 is sampled from a Dirichlet distribution Dir(α_1,…,α_K). Each feature is then randomly allocated to a party P_k selected according to the probabilities p_k. To accommodate algorithms that fail when faced with empty features, we can ensure each party is initially provided with a random feature before the algorithm is set in motion. A detailed formalization of this algorithm can be found in Appendix <ref>. Consider a feature index set 𝒜={1,2,...,m} and a characteristic function v:2^𝒜→ℝ such that v(∅)=0. Let ϕ_j(v) denote the importance of the j-th feature on v such that ∑_j=1^mϕ_j(v)=v(𝒜). Assume that the indices in 𝒜 are randomly distributed to K parties with probabilities r_1,...,r_K where ∑_i=1^Kr_i=1. Given budgets b_i=r_i v(𝒜), let Z_i be the sum of feature importance for party i. Then, we have ∀ i∈[1,K], 𝐄[Z_i]=b_i and 𝐄[Z_i]∝ r_i. The proof of Theorem <ref> is provided in Appendix <ref>. The importance metric ϕ_j(v) can be instantiated by the Shapley value or the recently proposed Shapley-CMI <cit.>. We assert in Theorem <ref> that the expected cumulative importance of each party is proportional to the ratio generated by the Dirichlet distribution. The inherent properties of the Dirichlet distribution ensure that: (1) a larger value of α_i leads to a higher expected value of r_i, and (2) a smaller value of ‖{α_i}_i=1^K‖_2 results in a larger variance in r_i. Hence, the proposed method naturally aligns with the requirements for feature importance. §.§ Feature Correlation In the initial stages of our investigation into feature-split methods based on correlation, we first look at the evaluation of feature correlation. Building upon established methods that utilize a metric grounded in correlation matrices <cit.>, we propose a novel metric to examine the correlation when the parties involved possess unequal numbers of features. Our approach hinges on the standard deviation of the singular values of the correlation matrix, which serves as an efficient measure of the overall correlation between two parties. Since feature-wise correlation is an orthogonal research area, we select Spearman rank correlation <cit.> due to its capability to handle non-linear correlation. To elaborate further, we denote the column-wise correlation matrix between two matrices, 𝐗_i and 𝐗_j, as cor(𝐗_i,𝐗_j). As a result, we formally define the correlation between two parties' data, 𝐗_i∈ℝ^n× m_i and 𝐗_j∈ℝ^n× m_j, as Equation <ref>: Pcor(𝐗_i,𝐗_j) := 1/d√(∑_t=1^d(σ_t(cor(𝐗_i,𝐗_j)) - σ̄)^2), d = min(m_i,m_j) In this equation, σ_t(·) denotes the t-th singular value of a matrix, while σ̄ stands for the mean of these singular values. The overall inter-party correlation among K parties is then defined as Icor(𝐗_1,…,𝐗_K) := 1/K(K-1)∑_i=1^K∑_j=1,j≠ i^K Pcor(𝐗_i,𝐗_j) Our correlation-based feature-split algorithm, depicted in Algorithm <ref>, is designed to allocate features across multiple parties while taking into account the correlations inherent among the features. The algorithm's operation is premised on the provision of a defined number of features for each party, represented as m_1,…,m_K.
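Before detailing the correlation-based algorithm, we give a minimal sketch of the importance-based split described earlier in this section. The sketch below assumes only NumPy; the function name split_by_importance and all variable names are illustrative and not taken from the VertiBench implementation.

import numpy as np

def split_by_importance(num_features, alphas, rng=None):
    # Sample party probabilities p_1, ..., p_K from Dir(alpha_1, ..., alpha_K)
    # and allocate every feature index to one party according to these probabilities.
    rng = np.random.default_rng(rng)
    num_parties = len(alphas)
    probs = rng.dirichlet(np.asarray(alphas, dtype=float))
    parties = [[] for _ in range(num_parties)]
    order = rng.permutation(num_features)
    # Optionally seed each party with one feature so that no party is empty,
    # since some VFL algorithms fail on empty feature sets.
    for k in range(min(num_parties, num_features)):
        parties[k].append(order[k])
    for j in order[min(num_parties, num_features):]:
        k = rng.choice(num_parties, p=probs)
        parties[k].append(j)
    return [np.sort(np.asarray(idx)) for idx in parties]

# Example: 100 features split among 4 parties with imbalanced importance factors.
feature_splits = split_by_importance(100, alphas=[0.1, 0.5, 1.0, 10.0], rng=0)

Local datasets are then obtained by column-indexing the global feature matrix with each party's index array.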
The correlation-based split (Algorithm <ref>) commences with the initialization of a column permutation matrix, denoted as 𝐏, to an identity matrix (line 1); the algorithm then defines a score function, f(𝐏;𝐗), which represents the overall correlation Icor after the features have undergone permutation by 𝐏 (line 2). Subsequently, the algorithm determines the lower and upper bounds of the score function (lines 3-4). This forms the basis for calculating the target correlation f^*(𝐗;β), which is a linear interpolation between the lower and upper bounds controlled by the correlation index β (line 5). Next, the algorithm locates the optimal permutation matrix 𝐏^* by solving a permutation-based optimization problem; we employ the Biased Random-Key Genetic Algorithm (BRKGA) <cit.> for this purpose. The final step of the algorithm splits the features according to the derived optimal permutation and the pre-set number of features for each party (lines 6-7). Because the optimization repeatedly invokes Icor, this computation must be efficient. For datasets of smaller dimensions, singular values can be computed directly via Singular Value Decomposition (SVD) <cit.>. For high-dimensional datasets, however, we resort to Truncated SVD <cit.> to estimate the largest top-d_t singular values, with the remaining singular values assumed to be zero prior to calculating the standard deviation. We also make use of GPU acceleration to expedite the computation of Icor, keeping the optimization procedure as swift as possible. Our experiments, as presented in Appendix <ref>, validate that both split methods can complete within a reasonable time. Empirical Validation. We conduct extensive experiments to rigorously evaluate the practical performance of our proposed correlation evaluation metric and the correlation-based feature-split algorithm; the details are in Appendix <ref>. Briefly, for the correlation evaluation metric Icor, we observe that Pcor mirrors the behavior of mcor <cit.> in assessing inner-party correlation and displays a similar trend to mcor for inter-party correlation evaluation. Moreover, we split the features of synthetic datasets with different β values using Algorithm <ref> and contrast the result with a random split. The absolute correlation matrix visualized in Figure <ref> suggests that as β increases, so does inter-party correlation. In contrast, random feature splitting does not effectively portray scenarios with low inter-party correlation. § REAL-WORLD VFL DATASETS Real-world VFL datasets, though highly desirable, are limited in scope and type, often encompassing tabular-tabular data, as in Vehicle <cit.>, Movielens <cit.>, Songs <cit.>, and tabular-image data, as in NUS-WIDE <cit.>. Notably missing are image-image datasets. Addressing this, we introduce a real-world VFL dataset, Satellite, adapted from <cit.>, containing 62,832 images across 16 parties, simulating a practical VFL scenario of collaborative location identification via multiple satellites. Further details on Satellite's construction are in Appendix <ref>. An in-depth analysis of the Satellite dataset, using our proposed metrics and the visualization of the absolute correlation matrix (Figure <ref>), reveals low inter-party correlation (Icor) and high inner-party correlation, similar to NUS-WIDE (Figure <ref>).
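For reference, the Pcor and Icor metrics used in this analysis can be sketched in a few lines of Python, assuming NumPy and SciPy. The sketch computes a dense Spearman correlation matrix and a full SVD, whereas, as noted above, a practical implementation would switch to truncated SVD and GPU acceleration for high-dimensional data; all names are illustrative.

import numpy as np
from scipy.stats import spearmanr

def pcor(Xi, Xj):
    # Party-wise correlation: scaled standard deviation of the singular values
    # of the cross-party Spearman correlation matrix cor(Xi, Xj).
    # Assumes each party holds at least two features.
    mi, mj = Xi.shape[1], Xj.shape[1]
    rho, _ = spearmanr(np.hstack([Xi, Xj]))   # (mi + mj) x (mi + mj) matrix
    cross = rho[:mi, mi:]                     # the mi x mj block cor(Xi, Xj)
    d = min(mi, mj)
    sv = np.linalg.svd(cross, compute_uv=False)[:d]
    return np.sqrt(np.sum((sv - sv.mean()) ** 2)) / d

def icor(party_data):
    # Average Pcor over all ordered pairs of parties.
    K = len(party_data)
    total = sum(pcor(party_data[i], party_data[j])
                for i in range(K) for j in range(K) if i != j)
    return total / (K * (K - 1))

The correlation-based split then searches, as in Algorithm <ref>, for a feature permutation whose Icor matches the target value interpolated between the attainable minimum and maximum.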
The low inter-party correlation observed for Satellite and NUS-WIDE underscores the fact that random feature splits, which usually result in larger β values (Figures <ref> and <ref>), may not truly represent real-world scenarios, reinforcing the need for systematic VFL dataset generation methods. Estimating α and β for real VFL datasets. In order to align real datasets with the synthetic ones generated by VertiBench, we put forward methods to estimate α and β for real VFL datasets. To calculate α, we determine the significance of each party by adding up the Shapley values of its features; we do this efficiently by estimating Shapley values on a select subset. These Shapley values are then normalized and treated as Dirichlet parameters α_i for each party P_i, in line with Theorem <ref>. To approximate the scale of the Dirichlet parameters and align them with the generation of synthetic datasets, we find a symmetric Dirichlet distribution Dir(α) that has the same variance as Dir(α_1,…,α_K), as given in Proposition <ref>. This value of α reflects the variance in feature importance across parties. The proof is provided in Appendix <ref>. Given a Dirichlet distribution Dir(α_1,…,α_K) with mean variance σ, a symmetric Dirichlet distribution Dir(α) has the same mean variance σ if α = (K - 1 - K^2σ)/(K^3σ). To estimate β, we start by computing the potential minimum and maximum values of Icor by shuffling the features among parties, denoted as Icor_min and Icor_max. Next, we estimate the Icor of the actual dataset, Icor_real, and derive the β value as β = min{max{(Icor_real - Icor_min)/(Icor_max - Icor_min), 0}, 1}. It is important to note that in real-world scenarios, Icor_real might fall slightly outside the range [Icor_min, Icor_max] due to the constraints of the optimization algorithms; to rectify this, we clip the estimated β to ensure β∈[0,1]. Using the estimated α and β, we display the importance and correlation of existing real datasets within the VertiBench-supported range in Figure <ref>. We note that real datasets represent a limited set of VFL scenarios with a large α and small β, indicating a high degree of feature imbalance and low inter-party correlation. Further, conducting random feature splits, as is common in existing VFL experiments, results in a distinct extreme characterized by high values of both α and β. This observation underscores the importance of VertiBench, which can generate a broad range of VFL scenarios for robust evaluation of VFL algorithms. § EXPERIMENT This section comprehensively benchmarks cutting-edge VFL algorithms. The experimental settings are delineated in Section <ref>, with results for VFL accuracy and communication efficiency presented in Sections <ref> and <ref>, respectively. Additional evaluations, including scalability, training time, and performance on real datasets, are discussed in Appendix <ref>. Each experiment elucidates results and provides relevant insights, highlighting (1) the performance-communication tradeoff of NN-based and boosting-based methods, (2) the necessity for advanced communication-efficient algorithms for imbalanced distributed datasets, and (3) the scalability potential of VFL algorithms. §.§ Experimental Settings This subsection includes the datasets, evaluated algorithms, and training methodology. Detailed dataset specifications, environments, and hyperparameter settings can be found in Appendix <ref>. Datasets.
Our experimental design incorporates seven public datasets, namely covtype <cit.>, msd <cit.>, gisette <cit.>, realsim <cit.>, epsilon <cit.>, letter <cit.>, and radar <cit.>, detailed in Appendix <ref>. The msd dataset is used for regression tasks, while the others cater to classification tasks. Each dataset is partitioned into 80% training and 20% testing instances. The datasets' features are distributed among multiple parties (typically four), split based on feature importance (α) or correlation (β). In the correlation-based split, each party is assigned an equal number of dataset features. Algorithms. We assess extensive code-available VFL algorithms in our experiments, including split-NN-based (SplitNN <cit.>, C-VFL <cit.>), split-GBDT-based (FedTree <cit.>, SecureBoost <cit.>, Pivot <cit.>), and ensemble-based (GAL <cit.>) algorithms. AL <cit.> is excluded due to its inferiority to GAL <cit.>. For fairness, experiments are conducted without encryption or noise addition. In light of the reported minor variations in accuracy and communication (w/o encryption) among split-GBDT-based methods like FedTree, SecureBoost, and Pivot due to precision issues <cit.>, we have elected to use FedTree as a representative in our evaluation of their performance and communication costs. Training. For classification tasks, we use accuracy as the evaluation metric, while regression tasks are evaluated using the Root Mean Square Error (RMSE). To ensure the reliability of our results, we conduct five runs for each algorithm, using seeds ranging from 0 to 4 to randomly split the datasets for each run, and then compute their mean metrics and standard deviation. Detailed hyper-parameter settings for each algorithms are provided in Appendix <ref>. §.§ VFL Accuracy In this subsection, we study the performance of VFL algorithms by varying the data split parameters, α and β, and assessing the resulting impact on the accuracy. Our analysis includes a range of algorithm types, namely split-NN-based, split-GBDT-based, and ensemble-based methods. The performance is detailed in Table <ref>. From our exploration, we can draw three key observations. The influence of split parameters α and β on VFL performance varies significantly with the choice of algorithm and dataset. The performance of certain algorithms, such as SplitNN and FedTree, remains relatively consistent across different α and β values. For others, notably C-VFL, these parameter changes can cause substantial variations in performance. For instance, on the epsilon dataset, C-VFL's accuracy fluctuates by up to 12% and 10% when α and β are adjusted from 0.1 to 100 and from 0 to 1.0, respectively. Despite the potential significant influence of α and β parameters, their effect on accuracy seems to be contingent upon specific dataset-algorithm combinations. This underlines the importance of extensive evaluations across a broader spectrum of α and β values, a critical step towards illustrating the robustness of VFL algorithms. SplitNN often leads in accuracy across most datasets; however, the performance of split-GBDT-based and ensemble-based methods can vary significantly depending on the dataset. As anticipated, given its iterative transmission of substantial representations and gradients, SplitNN often outperforms other methods across a majority of datasets. Comparatively, the performance of FedTree and GAL is dataset-dependent. 
FedTree is well-suited to high-dimensional, smaller datasets like gisette, but struggles with larger datasets like epsilon and covtype. GAL, on the other hand, performs admirably with binary classification and regression tasks, though its performance drops significantly as the number of classes increases, as observed on the covtype and letter dataset. The compression of SplitNN-based methods, particularly when employed on imbalanced partitioned datasets, can significantly impact accuracy. While C-VFL's model structure is akin to SplitNN, the incorporation of compression results in C-VFL having the lowest accuracy among all the tested baselines. This is particularly pronounced in cases of imbalanced importance distribution, i.e., smaller α. For example, when α=0.1, C-VFL's performance on the letter and epsilon datasets is barely superior to random guessing. This highlights a pressing need for further exploration and development of compression methods suited for biased partition scenarios. §.§ Communication Efficiency In this subsection, we evaluate VFL algorithms' communication efficiency by analyzing their total communication size within 50 fixed epochs, as shown in Figure <ref>. Additional communication details, such as the maximum incoming and outgoing communication, are provided in Appendix <ref>. Given that FedTree, Pivot, and SecureBoost incur comparable communication costs when excluding encryption overhead, we will utilize FedTree as a representative for the other two for simplicity. Upon examining the figure, two main observations can be drawn. Gradient-boosting algorithms, including GAL and FedTree, generally exhibit smaller communication sizes compared to neural-network-based algorithms like SplitNN and C-VFL, with the exception of the letter dataset with 26 classes. C-VFL's compression techniques, though limiting its communication size, cannot match the efficiency of GAL and FedTree, even with a significant accuracy trade-off. The higher communication cost in neural networks is due to frequent transmission of gradients and representations, a factor that boosts SplitNN's optimal accuracy. Additionally, the efficiency of FedTree and GAL is contingent on the global dataset's size. The primary distinction between FedTree and GAL lies in the type of information received on the primary party side. GAL collates prediction results from all secondary parties, with the size being proportional to the number of instances. Conversely, FedTree gathers the histogram from all secondary parties, with the size being proportional to the number of features. Consequently, GAL incurs lower communication costs than FedTree on high-dimensional datasets, such as gisette and realsim, while maintaining comparable communication costs on other datasets. § CONCLUSION In this study, we introduce VertiBench, a versatile benchmarking framework for Vertical Federated Learning (VFL). VertiBench facilitates the synthetic generation of diverse VFL datasets from a single global set, thereby enabling a comprehensive performance assessment of VFL algorithms across a wide spectrum of application domains. Our empirical results reveal potential significant variations in algorithm performance under different data partition scenarios, underscoring the importance of our benchmark. Additionally, we contribute a new real-world VFL dataset, addressing a deficit in image-image VFL datasets. 
This study highlights the necessity of examining VFL algorithms under diverse data distribution conditions, providing a crucial trajectory for future research. § PROOF The probability mass function can be written as log𝒫(y|X_K,...,X_1) = ∑_k=1^K log[𝒫(y|X_k,...,X_1)/𝒫(y|X_k-1,...,X_1)] + log𝒫(y) According to the definition of conditional probability, this marginal distribution can be written as 𝒫(y|X_K,...,X_1) = 𝒫(y,X_K,...,X_1)/𝒫(X_K,...,X_1) = [𝒫(y)𝒫(X_1|y) ∏_k=2^K𝒫(X_k|y,X_k-1,...,X_1)] / [𝒫(X_1) ∏_k=2^K𝒫(X_k|X_k-1,...,X_1)] = 𝒫(y) · [𝒫(X_1|y)/𝒫(X_1)] · ∏_k=2^K [𝒫(X_k|y,X_k-1,...,X_1)/𝒫(X_k|X_k-1,...,X_1)]. Denoting c_k = 𝒫(X_k|y,X_k-1,...,X_1)/𝒫(X_k|X_k-1,...,X_1) for k ≥ 2 and c_1 = 𝒫(X_1|y)/𝒫(X_1), and taking the logarithm on both sides, we have log𝒫(y|X_K,...,X_1) = ∑_k=1^K log c_k + log𝒫(y). Furthermore, we have c_k = 𝒫(X_k|y,X_k-1,...,X_1)/𝒫(X_k|X_k-1,...,X_1) = 𝒫(X_k,y|X_k-1,...,X_1)/[𝒫(X_k|X_k-1,...,X_1)𝒫(y|X_k-1,...,X_1)] = 𝒫(y|X_k,...,X_1)/𝒫(y|X_k-1,...,X_1). Combining (<ref>) and (<ref>), we have log𝒫(y|X_K,...,X_1) = ∑_k=1^K log[𝒫(y|X_k,...,X_1)/𝒫(y|X_k-1,...,X_1)] + log𝒫(y). Consider a feature index set 𝒜={1,2,...,m} and a characteristic function v:2^𝒜→ℝ such that v(∅)=0. Let ϕ_j(v) denote the importance of the j-th feature on v such that ∑_j=1^mϕ_j(v)=v(𝒜). Assume that the indices in 𝒜 are randomly distributed to K parties with probabilities r_1,...,r_K where ∑_i=1^Kr_i=1. Given budgets b_i=r_i v(𝒜), let Z_i be the sum of feature importance for party i. Then, we have ∀ i∈[1,K], 𝐄[Z_i]=b_i and 𝐄[Z_i]∝ r_i. For each feature j assigned to party i with probability r_i, we define the feature importance Y_ij as Y_ij = ϕ_j(v) with probability r_i and Y_ij = 0 with probability 1-r_i. By leveraging the linearity of expectation, we find that 𝐄[Z_i]=∑_j=1^m𝐄[Y_ij]=∑_j=1^mϕ_j(v)r_i=r_i∑_j=1^mϕ_j(v). Given that ∑_j=1^mϕ_j(v)=v(𝒜), we derive 𝐄[Z_i]=r_i v(𝒜)=b_i. Moreover, since v(𝒜) is a constant, it follows that 𝐄[Z_i]∝ r_i. Given a Dirichlet distribution Dir(α_1,…,α_K) with mean variance σ, a symmetric Dirichlet distribution Dir(α) has the same mean variance σ if α = (K - 1 - K^2σ)/(K^3σ). Suppose we have variables X_1,…,X_K following the symmetric Dirichlet distribution Dir(α,…,α). Leveraging the inherent properties of the Dirichlet distribution, we can write the variance Var(X_i) for all i∈[1,K] as Var(X_i) = (K-1)/(K^2(Kα+1)). The mean variance, denoted as σ, can subsequently be expressed in terms of the expected variance 𝔼[Var(X_i)] as 𝔼[Var(X_i)] = (K-1)/(K^2(Kα+1)) = σ. Recognizing that σ>0 holds for a Dirichlet distribution, we can rearrange the above equation to express α in terms of σ: α = (K - 1 - K^2σ)/(K^3σ). § DETAILS OF VFL ALGORITHMS In this section, we provide a detailed comparison of the existing VFL algorithms as an extension of Table <ref>. §.§ Ensemble-based VFL Algorithms In the ensemble-based VFL algorithms detailed in Algorithm <ref>, each iteration t commences with the primary party P_1 calculating the residual of the prior global model F^t-1(·) (line 3). This is followed by the communication of residuals 𝐫_1^t to the secondary parties (P_2,…,P_K). In the ensuing step (line 5), each secondary party trains a local model f(θ_i^t;·) on its local data 𝐗_i to predict the residuals, subsequently sending the model parameters θ_i^t back to P_1. P_1 then aggregates these local models and updates the global model F^t(·) (line 6). This process iterates until a convergence criterion is achieved. Specifics of residual sharing and model aggregation depend on the algorithm design. In AL, residuals are shared among parties, and models are aggregated through summation.
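To make this ensemble-based template concrete, the following is a minimal single-machine sketch for a regression task, assuming scikit-learn decision trees as local learners. The residual sharing and aggregation rules of AL and GAL differ as described here, so the simple averaging below is an illustrative assumption rather than either method's exact protocol; all names are hypothetical.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def ensemble_vfl_fit(party_features, y, num_rounds=10, lr=0.5):
    # Generic ensemble-based VFL loop: the primary party computes residuals of
    # the current global model, every party fits a local model on its own
    # features to predict those residuals, and the predictions are aggregated.
    n = len(y)
    global_pred = np.zeros(n)              # F^0(.) initialised to zero
    rounds = []
    for t in range(num_rounds):
        residual = y - global_pred         # computed by the primary party P_1
        models, round_pred = [], np.zeros(n)
        for Xk in party_features:          # each party's local data X_k
            model = DecisionTreeRegressor(max_depth=3).fit(Xk, residual)
            models.append(model)
            round_pred += model.predict(Xk)
        global_pred += lr * round_pred / len(party_features)   # simple averaging
        rounds.append(models)
    return rounds, global_pred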
In GAL, by contrast, pseudo residuals (i.e., gradients) are shared, and models are aggregated through a weighted summation; furthermore, the aggregation weight in GAL can be updated during the training process. §.§ Split-NN-based VFL Algorithms As described in Algorithm <ref>, each iteration t starts with the parties P_i conducting forward-propagation on their local data 𝐗_i to derive local representations 𝐙_i^t (line 4). These representations are subsequently forwarded to the primary party P_1. Depending on the iteration, P_1 then merges the local representations (line 6), derives a global prediction ŷ^t with an aggregated model (line 7), updates the aggregated model parameters θ_1^t (line 8), and broadcasts the encoded aggregation model θ_i^t to all parties (line 9). The parties P_i then employ the encoded aggregation model to update their local models (line 11). This process is repeated until a specified stopping criterion is met. The specific methods of encoding, determining aggregation frequency, and merging depend on the algorithm design. In the process of forward-pass encoding, SplitNN sends local representations directly to party P_1 for merging, while C-VFL compresses these representations before transmission. In contrast, BlindFL utilizes a source layer to encode local representations, ensuring privacy preservation. During backward-pass encoding, C-VFL transmits the top-k-compressed aggregation model. Both SplitNN and BlindFL initially compute the gradients with respect to 𝐙 and subsequently broadcast either the raw or source-layer-encoded gradients to all the parties. Regarding aggregation frequency, C-VFL aggregates every Q iterations to reduce communication cost, while both SplitNN and BlindFL aggregate at every iteration. For the merging process, SplitNN and C-VFL use concatenation of local representations, while BlindFL applies a secret-sharing summation with source-layer-encoded representations. §.§ Split-GBDT-based VFL Algorithms As outlined in Algorithm <ref>, each iteration t initiates with the primary party P_1 encoding the gradient of the residuals, yielding 𝐫^t (line 3). Following this, all parties P_i calculate local histograms 𝐇_i^t utilizing their individual local data 𝐗_i and the encoded residuals 𝐫^t (line 5). These local histograms are then transmitted to P_1 for merging (line 6). In the next step, P_1 trains a decision tree using the merged histogram 𝐇^t and the encoded residuals 𝐫^t (line 7). The selected split points from this tree are communicated to the secondary party that possesses the split feature, which stores these split points for potential future requests during inference (line 9). Finally, P_1 updates the ensemble model F^t with the newly trained tree (line 10). This sequence of operations continues until a set stopping condition is fulfilled. Depending on the algorithm, the specific techniques for encoding, computing histograms, merging, and updating differ. Pivot views the instance set in each node as confidential information, applying homomorphic encryption to it. Conversely, SecureBoost, FedTree, and VF2Boost treat the instance set as public information and apply homomorphic encryption only to a specific set of instances. In terms of merging, SecureBoost and FedTree perform homomorphic decryption directly to acquire the actual sum values after aggregating the encrypted histograms.
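The two split-based templates walked through above can also be summarized in code. First, a minimal PyTorch-style sketch of one split-NN iteration with merging by concatenation, as in SplitNN; encoding and compression are omitted, the "communication" is plain tensor passing on a single machine, and all names are illustrative.

import torch
import torch.nn as nn

def splitnn_step(local_models, agg_model, optimizer, party_batches, labels, loss_fn):
    # One split-NN iteration: each party forwards its local features through its
    # bottom model, the primary party concatenates the representations, computes
    # the loss with the aggregated (top) model, and gradients flow back through
    # the cut layer to every local model.
    optimizer.zero_grad()
    reps = [m(x) for m, x in zip(local_models, party_batches)]   # Z_i^t
    merged = torch.cat(reps, dim=1)                              # merge at P_1
    loss = loss_fn(agg_model(merged), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

Second, a sketch of the per-party histogram step and the primary party's split search in the split-GBDT template, assuming pre-binned integer features and a standard second-order gain; again, this is only an illustration of the generic procedure, not the protocol of any specific system.

import numpy as np

def local_histograms(X_bins, grad, hess, num_bins=16):
    # Each party accumulates gradient and Hessian sums per bin for its own features.
    m = X_bins.shape[1]
    Hg, Hh = np.zeros((m, num_bins)), np.zeros((m, num_bins))
    for f in range(m):
        np.add.at(Hg[f], X_bins[:, f], grad)
        np.add.at(Hh[f], X_bins[:, f], hess)
    return Hg, Hh

def best_split(Hg, Hh, reg_lambda=1.0):
    # The primary party scans the merged histograms for the split with the largest gain.
    G, H = Hg.sum(axis=1, keepdims=True), Hh.sum(axis=1, keepdims=True)
    GL, HL = np.cumsum(Hg, axis=1), np.cumsum(Hh, axis=1)
    GR, HR = G - GL, H - HL
    gain = GL**2 / (HL + reg_lambda) + GR**2 / (HR + reg_lambda) - G**2 / (H + reg_lambda)
    f, b = np.unravel_index(np.argmax(gain[:, :-1]), gain[:, :-1].shape)
    return f, b, gain[f, b]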
Among split-GBDT-based methods, VF2Boost further introduces efficiency measures such as polynomial-based histogram packing and reordered histogram accumulation, while Pivot utilizes multi-party computation supporting comparison to keep values secret at all times. In terms of the update mode, SecureBoost, Pivot, and FedTree adopt a sequential approach, whereas VF2Boost utilizes pipeline processing for speedup. Additionally, it is worth mentioning that SecureBoost employs a threshold for binary classification to enhance accuracy in the context of datasets with label imbalance. § SPLIT METHOD DETAILS In this section, we formally state our proposed importance-based feature-split algorithm in Algorithm <ref>. After initializing local datasets for each party (line 1), a series of probabilities p_1,…,p_K s.t. ∑_i=1^Kp_i=1 is sampled from a Dirichlet distribution parameterized by α_1,…,α_K (line 2). For each feature, the algorithm then randomly selects a party P_k according to the probabilities p_k and assigns the respective feature to P_k (lines 3-5). In order to address potential failures in algorithms confronted with empty features, we can optionally initialize each party with a random feature prior to the commencement of the algorithm. § EMPIRICAL VALIDATION OF SPLIT METHODS To rigorously evaluate the practical performance of our proposed correlation evaluation metric and the correlation-based feature-split algorithm, we conduct a series of systematic experiments. §.§ Correlation Evaluation Metric In order to validate the efficacy of the correlation evaluation metric, Pcor, we create two synthetic datasets, 𝐗_1 (m_1 features) and 𝐗_2 (m_2 features), using the library. Initially, party P_1 holds 𝐗_1, and P_2 holds 𝐗_2. Over the course of the experiment, we gradually transfer features from 𝐗_1 to P_2, each time in exchange for a feature of 𝐗_2. This process continues until all features of 𝐗_1 end up on P_2, while the total number of features remains constant throughout. Our observations, as presented in Figures <ref> and <ref>, reveal the following: (1) Pcor behaves similarly to mcor when evaluating inner-party correlation and shows a similar trend to mcor <cit.> when assessing inter-party correlation. (2) Both Pcor and mcor exhibit the lowest inter-party correlation and the highest inner-party correlation at the extremities of the x-axis, which corresponds to the datasets 𝐗_1 and 𝐗_2 being held by distinct parties and being mutually independent. This pattern is also reflected in Figure <ref> when Pcor is applied to parties of different dimensions. These observations validate the appropriateness of Pcor as a measure for evaluating inter-party correlation, even when the number of features differs between the two parties. §.§ Correlation-based Feature-split Algorithm We now turn to validating the efficacy of our proposed correlation-based feature-split algorithm. Three synthetic datasets, each encompassing 10 features, are independently generated using the library. These datasets are subsequently concatenated along the feature axis, yielding a global dataset with 30 features. This dataset is then split into three local datasets, each containing 10 features, deploying our proposed algorithm with β values set at 0, 0.5, and 1.0. The visualization of the absolute value of the correlation matrix between each pair of features is presented in Figure <ref>.
As evident in Figure <ref>, <ref>, and <ref>, an increment in the value of β corresponds to an increase in the correlation between features of different parties. This observation is in alignment with our expectations. However, randomly splitting features into three parties, serving as our baseline, fails to reflect scenarios with low inter-party correlation (as illustrated in Figure <ref>). These empirical findings underscore the efficacy of our proposed algorithm in partitioning features based on their correlation attributes. §.§ Time Efficiency of Split Methods. Table <ref> provides a summary of the estimated time requirements for our proposed split methods, with the I/O time for loading and saving datasets excluded. Notably, the importance-based split method demonstrates significant efficiency, typically completing within a minute. In contrast, the correlation-based split method requires a longer processing time, due to the need to resolve three optimization problems. This time cost is especially pronounced on high-dimensional datasets, such as realsim, because the singular value decomposition (SVD) used in the correlation-based split algorithm is dependent on the number of features. Despite these differences in time consumption, both split methods prove capable of handling large datasets, accommodating instances up to 581k and features up to 20k, within a reasonable time frame. § REAL DATASET CONSTRUCTION In this section, we outline the construction process of the Satellite dataset, which was adapted from the WorldStrat dataset <cit.>, originally intended for high-resolution imagery analysis. The Satellite dataset encompasses Point of Interest (POI) data, each associated with one or more Areas of Interest (AOI). Every AOI incorporates a unique location identifier, a land type, and 16 low-resolution, 13-channel images, each taken during a satellite visit to the location. During the data cleaning phase, we scrutinize the dataset thoroughly, identifying and removing 67 incomplete data records that have an insufficient number of low-resolution images. Furthermore, given the inconsistent widths and heights of images across different locations, we standardize the size of all images to a 158x158 square via bicubic interpolation. Additionally, the pixel values of each image are scaled to integer values within the range of [0,255]. The Satellite dataset forms a practical VFL scenario for location identification based on satellite imagery. Each AOI, with its unique location identifier, is captured by 16 satellite visits. Assuming each visit is carried out by a distinct satellite organization, these organizations aim to collectively train a model to classify the land type of the location without sharing original images. The Satellite dataset encompasses four land types as labels, namely (4.8%), (8.9%), (61.3%), and (25.0%), making the task a 4-class classification problem of 3927 locations. License. Our use of the WorldStrat dataset was restricted to the labels and Sentinel2 imagery, falling under the CC BY 4.0 <cit.> license, while excluding high-resolution imagery that falls under the CC BY-NC 4.0 <cit.> license. Therefore, we have released the Satellite dataset under the CC BY 4.0 license. File description and maintenance plan. We aim to create a dedicated website for federated learning datasets to host the Satellite dataset and future VFL datasets. 
Although the website is currently under construction, we have made the Satellite dataset available via a public Google Drive link <cit.> for review purposes. The provided ZIP file comprises 32 CSV files, corresponding to training and testing datasets split at a ratio of 8:2. Each training and testing file contains 3,142 and 785 flattened images from a party, respectively. § EXPERIMENTAL DETAILS Datasets. The datasets employed in our experiments exhibit a range of dimensions (from 16 to 20,958), instance numbers (from 15k to 581k), and tasks, which include binary classification, multi-class classification, and regression. Detailed information about these datasets and the corresponding licenses are presented in Table <ref>. Hyperparameters. For models based on split-GBDT, such as SecureBoost, FedTree, and Pivot, our experiments are conducted with the following hyperparameters: , , , and . Due to the constraints of dataset sizes in their codes, Pivot is evaluated exclusively on two datasets: the letter dataset under the default setting of and on the gisette dataset with . The latter alteration was necessitated by a segmentation fault encountered under the default setting. With regard to Split-NN-based models, specifically SplitNN and C-VFL, each local model is trained by a two-layer multi-layer perceptron (MLP) with each hidden layer containing 100 units. The corresponding aggregated model is a single-layer MLP with 200 hidden units. The learning rate, chosen from the set {10^-4,10^-3,3×10^-3}, is contingent on the specific algorithm and dataset. The number of iterations is fixed at 50 for SplitNN and 200 for C-VFL, with the latter setting aimed at ensuring model convergence. We also test C-VFL using four quantization buckets, a single vector quantization dimension, and a top-k compressor as recommended in the default setting. The number of local rounds Q in C-VFL is set to 10. Finally, for the ensemble-based model, GAL, we utilize a , , , and , with the assist mode set to . In the GAL framework, each party employs an MLP model consisting of two hidden layers, each containing 100 hidden units. Environments. The hardware configuration used for C-VFL, GAL, SplitNN, and FedTree consists of 2x AMD EPYC 7543 32-Core Processors, 4x A100 GPUs, and 503.4 GB of RAM, running on Python 3.10.11 with PyTorch 2.0.0, Linux 5.15.0-71-generic, Ubuntu 22.04.2 LTS. For FATE framework, we are using Docker image, running with Python 3.8.13 on Docker 23.0.2. Pivot is compiled from source using CMake 3.19.7, g++ 9.5.0, libboost 1.71.0, libscapi with git commit hash , and runs on a slurm cluster with AMD EPYC 7V13 64-Core Processor with the same number of cores as 2x AMD EPYC 7543 used for other algorithms. License. The licenses pertinent to the datasets and algorithms utilized in VertiBench are documented in Table <ref> and Table <ref>, respectively. We ensure adherence to these licenses as VertiBench neither redistributes the codes and data nor utilizes them for any commercial purpose. § ADDITIONAL EXPERIMENTS §.§ Communication Cost Details In this subsection, we analyze the maximum incoming and outgoing communication of each VFL algorithm (Figure <ref>), which aligns with our observations in Section <ref>. Generally, GAL is more efficient than FedTree, albeit FedTree has less incoming communication cost on low-dimensional datasets. The incoming communication cost includes residuals for GAL and histograms for FedTree. 
Residual size is dependent on the number of instances, while histogram size relates to the feature count. Hence, FedTree shows less incoming communication cost on low-dimensional datasets but more on high-dimensional datasets. However, FedTree's outgoing communication cost is approximately twice that of GAL, as it transmits both gradients and Hessians, neutralizing its incoming communication efficiency on low-dimensional datasets. C-VFL primarily reduces communication costs via representation compression sent to the server, yet it does not enhance the backward server-client communication. This substantial reduction in the incoming communication cost is indicative of a noteworthy decrease in the representation size. Conversely, the consistent outgoing communication cost suggests that the transmission of compressed aggregated models within C-VFL is not demonstrating greater efficiency compared to the process of transmitting uncompressed gradients of the cut layer. Such an observation provides insights into potential enhancements that could be pursued to reduce SplitNN's backward communication costs. §.§ Scalability In this section, we examine the scalability of various VFL algorithms on two high-dimensional datasets, depicted in Figure <ref>. The datasets, split by importance with α=1, consist of a varying number of parties, ranging from 2 to 2048. Our results demonstrate that SplitNN and FedTree are scalable to thousands of parties without any significant drop in accuracy. This is attributable to FedTree's lossless design and SplitNN's robust structure. However, both GAL and C-VFL show substantial performance declines with an increase in party numbers. An intriguing observation is that C-VFL's accuracy nearly matches SplitNN's when the number of parties reaches 2048 on the gisette dataset. This is likely because the average number of features per party reduces to 2 in this scenario, causing the compression mechanism to potentially fail, and thus, C-VFL reverts to SplitNN's performance. §.§ Training Time The training duration for VFL algorithms is consolidated in Table <ref>. It should be noted that FedTree and SecureBoost are executed without the use of encryption or noise. Conversely, we retain the default privacy setting for Pivot as it does not offer a non-encryption alternative. Three observations can be gleaned from the table. Firstly, we observe a considerable overhead associated with the encryption processes of Pivot. Pivot, which employs both homomorphic encryption and secure multi-party computation to ensure stringent privacy, endures a training time that is up to 10^5 times longer than FedTree. This limitation renders such strict privacy measures impractical for real-world applications that employ large datasets. This observation underscores the necessity for further exploration into the efficiency-privacy trade-off in VFL. Secondly, when comparing non-encryption methods, we find that split-based algorithms (SplitNN, FedTree) generally outperform ensemble-based algorithms (GAL) in terms of efficiency. This is primarily because split-based algorithms require each party to train a partial model, whereas ensemble-based algorithms mandate that each party train an entire model for ensemble purposes. This design characteristic also contributes to the lower communication costs associated with ensemble-based algorithms, as demonstrated in Figure <ref> and Figure <ref>. 
Lastly, we note that SplitNN demonstrates higher efficiency than FedTree on high-dimensional small datasets, yet demands more training time on low-dimensional large datasets. This discrepancy arises because FedTree computes a fixed-size histogram for each feature, which alleviates the impact of a large number of instances but is sensitive to the number of features. Conversely, SplitNN trains data in batches, rendering it sensitive to the number of instances. This observation emphasizes the importance of carefully selecting a VFL algorithm based on the properties of the dataset in the application. §.§ Performance on Satellite dataset In Table <ref>, we present the single-party and VFL performance results on the Satellite dataset. For an equitable comparison, each single party trains a concatenated MLP, formed by linking a SplitNN's local model with its aggregated model, under the same hyperparameters. Our results indicate that VFL can yield approximately a 10% accuracy improvement over local training, thus affirming the practical utility of the Satellite dataset for vertical federated learning applications. § DISCUSSION This section discusses the limitations of VertiBench, shedding light on several areas where improvement is needed. Additionally, we engage in a discussion surrounding potential negative social impacts and personal privacy issues related to VertiBench. §.§ Limitations In this subsection, we outline the limitations of VertiBench, focusing on three primary aspects. Scalability of correlation-based split. The correlation-based split method that we propose may face efficacy and efficiency challenges when applied to a large number of parties. As the number of parties increases, the potential feature splits proliferate exponentially. This complexity presents a significant obstacle for optimization methods such as BRKGA <cit.>, making it challenging to locate the minimum and maximum Icor, as well as the optimal split that corresponds to the given β. This situation underscores the necessity for more advanced permutation-based optimization algorithms that can enable the correlation-based split method to scale out to a greater number of parties. Relationship between importance and correlation. Within VertiBench, we regard importance and correlation as two orthogonal factors impacting the feature split. However, this viewpoint might overlook the potential correlation that could exist between these two factors. For instance, in cases of highly imbalanced feature split, parties might demonstrate low inter-party correlation. As a result, a comprehensive benchmarking framework that simultaneously considers both importance and correlation is desired to provide a more rigorous evaluation of VFL algorithms. Evaluation of privacy. Although VertiBench assesses performance, efficiency, and communication cost, it does not provide a quantitative evaluation of privacy. The high performance observed with SplitNN could potentially come at the cost of privacy, while the markedly high overhead of Pivot might be attributed to its robust privacy requirements. The task of quantitatively evaluating the privacy of different VFL algorithms and models remains an open problem, which we aim to tackle in future work. §.§ Social Impacts Negative social impact. 
While VertiBench primarily focuses on analyzing and comparing existing methodologies, and hence is less likely to cause additional negative social impact, the potential for biased interpretation of our experimental results could inadvertently mislead future research or applications. Specifically, we emphasize that the superior performance of non-encrypted methods such as SplitNN and GAL does not necessarily indicate that they are fit for immediate deployment in real-world VFL applications. The privacy concerns arising from the transfer of residuals or representations require further investigation. A quantitative benchmark on privacy is a critical prerequisite to deploying VFL approaches in real-world applications, which we plan to explore in future research. Personal privacy. For synthetic datasets, VertiBench does not create new datasets but instead employs novel methods to split existing publicly available datasets, thereby avoiding any additional personal privacy issues. As for real-world data, our Satellite dataset pertains to land type data derived from the WorldStrat dataset, which does not contain personal information. The privacy protections implemented in the Satellite dataset align with those of the WorldStrat dataset, which asserts that direct or indirect identification of individuals is not possible <cit.>. In our approach, the use of low-resolution imagery in the Satellite dataset serves to further diminish any potential, albeit extremely unlikely, risk of accidentally capturing identifiable information about individuals. Consequently, we confidently assert that VertiBench does not pose any concerns regarding personal privacy.
http://arxiv.org/abs/2307.00486v1
20230702061515
Quadrupole Insulator without Corner States in the Energy Spectrum
[ "Yu-Liang Tao", "Jiong-Hao Wang", "Yong Xu" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.quant-gas" ]
yongxuphy@tsinghua.edu.cn ^1Center for Quantum Information, IIIS, Tsinghua University, Beijing 100084, People's Republic of China ^2Hefei National Laboratory, Hefei 230088, PR China The quadrupole insulator is a well-known instance of higher-order topological insulators in two dimensions, which possesses midgap corner states in both the energy spectrum and entanglement spectrum. Here, by constructing and exploring a model Hamiltonian under a staggered ℤ_2 gauge field that respects momentum-glide reflection symmetries, we surprisingly find a quadrupole insulator that lacks zero-energy corner modes in its energy spectrum, despite possessing a nonzero quadrupole moment. Remarkably, the existence of midgap corner modes is found in the entanglement spectrum. Since these midgap states cannot be continuously eliminated, the quadrupole insulator cannot be continuously transformed into a trivial topological insulator, thereby confirming its topological nature. We show that the breakdown of the correspondence between the energy spectrum and entanglement spectrum occurs due to the closure of the edge energy gap when the Hamiltonian is flattened. Finally, we present a model that demonstrates an insulator with corner modes in the energy spectrum even in the absence of the quadrupole moment. In this phase, the entanglement spectrum does not display any midgap states. The results suggest that the bulk-edge correspondence of quadrupole insulators generally manifests in the entanglement spectrum rather than the energy spectrum. Quadrupole Insulator without Corner States in the Energy Spectrum Yong Xu^1,2 ================================================================= Higher-order topological insulators have experienced rapid development in recent years <cit.>, serving as a generalization of conventional first-order topological insulators. In contrast to first-order topological states which have m=1, these higher-order states support edge states of (n-m) dimensions (1< m ≤ n) in an n-dimensional system. A notable example of a higher-order topological phase in two dimensions (2Ds) is the quadrupole insulator <cit.>, which showcases topologically protected corner states. These topological insulators are identified by their quantized quadrupole moment <cit.>, which is enforced by symmetries, such as chiral symmetry <cit.>. The entanglement spectrum provides an alternative means of describing the topological properties of a system <cit.>. It refers to the eigenvalue spectrum of the reduced density matrix for a system comprised of two separate subsystems. While the connection between the entanglement spectrum and edge energy spectrum is demonstrated for conventional first-order topological insulators <cit.>, there are exceptions. For example, in systems with inversion symmetry, it has been observed that when the energy spectrum exhibits a gap without midgap states, the entanglement spectrum contains midgap modes <cit.>. This gives rise to a topological state as the midgap states in the entanglement spectrum cannot be continuously removed. In the context of higher-order topological phases, the entanglement spectrum between a quarter part and its complement is considered so that a corner boundary is provided. It has been found that quadrupole insulators consistently harbor midgap modes in both the energy spectrum and entanglement spectrum <cit.>. This suggests a strong correspondence between the two spectra in higher-order topological phases. 
In the paper, we introduce a tight-binding model under a staggered ℤ_2 gauge field and surprisingly find an exotic topological phase. This phase breaks the correspondence between the energy spectrum and entanglement spectrum. More specifically, we find that this phase possesses a nonzero quadrupole moment, indicating that it can be classified as a quadrupole insulator (see Fig. <ref>). However, it does not exhibit midgap corner modes in the energy spectrum. Remarkably, we observe the emergence of midgap states in the entanglement spectrum. This result suggests that the quadrupole moment generally identifies the presence of midgap states in the entanglement spectrum instead of the energy spectrum. The reason for this is that in the case of higher-order topology, the edge energy gap can close, causing corner modes to appear or disappear without any changes occurring in the bulk states when we flatten the Hamiltonian by altering the eigenenergies. Consequently, as we transition from a flattened Hamiltonian to the original one, the corner states vanish while the quadrupole moment remains unaffected. As for the entanglement spectrum, since it is derived from the bulk states, it exhibits a relationship with the energy spectrum of the flattened version of the original Hamiltonian. To further validate our conclusion, we introduce a model with zero quadrupole moment and subsequently observe the presence of corner modes in the energy spectrum, while they are absent in the entanglement spectrum. Model Hamiltonian.— We start by introducing a 2D tight-binding model shown in Fig. <ref>(a) where there are four sites in each unit cell. The hopping is endowed with the phase of 1 or -1 represented by solid or dashed lines, respectively, leading to a flux of 0 or π in each plaquette. Its Bloch Hamiltonian in momentum space is given by H(k)= -t_xτ_zσ_x+t_yτ_xσ_x+t_x^'cosk_xτ_0σ_x -t_x^'sink_xτ_zσ_y+t_y^'cosk_yτ_yσ_y+t_y^'sink_yτ_xσ_y, where τ_i and σ_i with i=x,y,z are Pauli matrices, τ_0 and σ_0 are 2×2 identity matrices, and their tensor products act on the internal degrees of freedom in the unit cell. t_ν and t_ν^' with ν=x,y denote the intracell and intercell hopping strengths along the ν direction. For simplicity, we choose t_x^'=t_y^'=1 as the units of energy. The system respects time-reversal symmetry T=κ (κ is the complex conjugate operator), i.e., TH(k)T^-1= H(-k), and chiral symmetry Γ=τ_0σ_z, i.e., Γ H(k)Γ^-1=-H(k). Chiral symmetry acts as a protective mechanism to ensure the quantization of the quadrupole moment  <cit.>, making it a robust and well-defined topological invariant. Due to the staggered ℤ_2 gauge field, two momentum-glide reflection symmetries <cit.>, M_x=τ_0σ_x and M_y=τ_yσ_y, are respected so that M_x H(k_x,k_y)M_x^-1=H(-k_x,π+k_y) and M_y H(k_x,k_y)M_y^-1=H(π+k_x,-k_y). Now, we utilize the quadrupole moment and edge polarizations to characterize the higher-order nontrivial topology. The quadrupole moment is defined by <cit.> q_xy=[1/2πImlog (U_o^†D̂ U_o)-Q_0] mod 1, where U_o=(|ψ_1⟩,…,|ψ_2L^2⟩) with |ψ_j⟩ being the jth occupied eigenstate of an L× L system under periodic boundary conditions (PBCs) with j=1,…,2L^2, and D̂=diag{e^2π i x_j y_j/L^2}_j=1^4L^2 with (x_j,y_j) being the spatial position of lattice site j. Here, Q_0 is contributed by background positive charges. In order for a system to possess a well-defined quadrupole moment, it is essential to ensure the absence of bulk dipole moments, which is achieved by momentum-glide reflection symmetries M_x and M_y. 
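As a numerical illustration of the Bloch Hamiltonian above, the 4×4 matrix H(k) can be assembled from Kronecker products of Pauli matrices; the short Python/NumPy sketch below uses illustrative parameter values and checks the chiral symmetry Γ H(k) Γ^-1 = -H(k) at a sample momentum.

import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0])

def bloch_h(kx, ky, tx, ty, txp=1.0, typ=1.0):
    # Tensor products tau (x) sigma realise the tau_i sigma_j terms of the model.
    return (-tx * np.kron(sz, sx) + ty * np.kron(sx, sx)
            + txp * np.cos(kx) * np.kron(s0, sx) - txp * np.sin(kx) * np.kron(sz, sy)
            + typ * np.cos(ky) * np.kron(sy, sy) + typ * np.sin(ky) * np.kron(sx, sy))

H = bloch_h(0.3, -1.1, tx=0.5, ty=0.5)     # illustrative momentum and hoppings
gamma = np.kron(s0, sz)                    # chiral symmetry operator Gamma
assert np.allclose(gamma @ H @ gamma, -H)  # Gamma H Gamma^{-1} = -H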
When q_xy=0.5, the nontrivial phase is classified as a quadrupole insulator. To characterize the nontrivial edge property on an L_x× L_y lattice, we calculate the edge polarization p_x^edge along x (similarly for p_y^edge along y) in a cylinder geometry with open boundaries along y. It is determined by the sum of the distribution of polarization over a half lattice along y <cit.>, i.e., p_x^edge=∑_R_y=1^L_y/2p_x(R_y). Here, p_x(R_y) is the distribution of polarization at cell R_y. We calculate p_x(R_y) based on p_x(R_y)=∑_j=1,2ρ^j(R_y)ν_x^j, where ρ^j(R_y)=(1/L_x)∑_k_x,α|∑_n=1^2L_y[u_k_x^n]^R_y,α[ν_k_x^j]^n|^2 is the probability distribution of the hybrid Wannier functions, [ν_k_x^j]^n is the nth component of the jth eigenstates of the Wannier Hamiltonian with eigenvalue ν_x^j, and [u_k_x^n]^R_y,α is the component of the nth occupied eigenstates of the Hamiltonian H(k_x,L_y). Phase diagram.— We map out the phase diagram with respect to t_x and t_y in Fig. <ref>(b) based on the quadrupole moment and edge polarizations. We find that when t^2=t_x^2+t_y^2<2, q_xy=0.5, indicating that the phase corresponds to a quadrupole insulator, which is in stark contrast to the the Benalcazar-Bernevig-Hughes (BBH) model where a quadrupole insulating phase appears in the square region (light red region) with |t_x|<1 and |t_y|<1. Such a phase arises from a topologically trivial phase with q_xy=0 through a bulk energy gap closure at t^2=2 as we decrease t^2 [see the black line in Fig. <ref>(a)]. However, we surprisingly find that zero-energy corner modes in the energy spectrum only exist in the red region where p_x^edge=p_y^edge=0.5 as shown in Fig. <ref>(c)–(d). In the green and blue regions, although q_xy=0.5, no midgap corner modes appear in the energy spectrum as illustrated in Fig. <ref>(e). In addition, in the green region, (p_x^edge,p_y^edge)=(0,0.5) and in the blue region, (p_x^edge,p_y^edge)=(0.5,0). While type-II quadrupole insulators exhibit the same edge polarization configurations, they also possess corner modes in contrast to this case. In fact, these phases also satisfy the relation that Q^corner =p_y^edge +p_x^edge -q_xy with Q^corner being the corner charge <cit.>, while the type-II one violates it <cit.>. These phases transition into the traditional quadrupole insulator through the closure of an edge energy gap [see Fig. <ref>(a)], leading to the change of one edge polarization while preserving the quadrupole moment. For instance, consider t_y=0 so that the model reduces to the one shown in Fig. <ref>(b). Clearly, the y-normal edge states are described by the Su-Schrieffer-Heeger (SSH) model which experiences an energy gap closing at t_x=±1. Such a gap closure results in a change in p_x^edge from 0 to 0.5. However, the bulk energy gap vanishes when t_x=±√(2) at k_x=0,π as seen from the bulk energies, E_b,±^2=t_x^2+2 ± 2|t_x|√(1+cos^2 k_x). This differs from the BBH model whose spectrum always remains gapped as seen from its energies, E_b^2=t_x^2+2t_xcos k_x+2. We now evaluate the entanglement spectrum by diagonalizing the correlation matrix in a quarter subsystem A as shown in Fig. <ref>(a) defined as <cit.> [C_A]_r_iα,r_jβ=⟨ĉ^†_r_iαĉ_r_jβ⟩, where ĉ^†_r_iα (ĉ_r_iα) is the fermionic creation (annihilation) operator at lattice site (r_i,α). Figure <ref>(b) illustrates that midgap modes exist in the entanglement spectrum in the region with q_xy=0.5. These modes are mainly localized at corners as shown in Fig. <ref>(c). 
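To make the correlation-matrix construction concrete, the following rough sketch follows the momentum-space expression derived in the Supplemental Material: diagonalize H(k) on an L×L grid, assemble the real-space correlation matrix from the two occupied Bloch bands at half filling, restrict it to a quarter of the lattice, and inspect the eigenvalues near 1/2. The lattice size L = 10 and the hoppings t_x = t_y = 0.6 (a point with t² < 2, hence q_xy = 0.5) are illustrative choices; unit-cell positions are used for the Fourier phases, consistent with the purely intercell k-dependence of Eq. (1).

```python
import numpy as np

s0, sx = np.eye(2, dtype=complex), np.array([[0, 1], [1, 0]], dtype=complex)
sy, sz = np.array([[0, -1j], [1j, 0]]), np.diag([1.0 + 0j, -1.0])

def bloch_h(kx, ky, tx, ty):
    return (-tx * np.kron(sz, sx) + ty * np.kron(sx, sx)
            + np.cos(kx) * np.kron(s0, sx) - np.sin(kx) * np.kron(sz, sy)
            + np.cos(ky) * np.kron(sy, sy) + np.sin(ky) * np.kron(sx, sy))

L, tx, ty = 10, 0.6, 0.6                      # illustrative point inside the q_xy = 0.5 region
ks = 2 * np.pi * np.arange(L) / L
cells = np.array([(x, y) for x in range(L) for y in range(L)])   # unit-cell positions

# C[(r_i,a),(r_j,b)] = (1/L^2) sum_{k, n in occ} [u_k^n*]^a [u_k^n]^b exp(-i k.(r_i - r_j))
C = np.zeros((4 * L * L, 4 * L * L), dtype=complex)
for kx in ks:
    for ky in ks:
        _, v = np.linalg.eigh(bloch_h(kx, ky, tx, ty))
        u = v[:, :2]                           # two occupied bands (half filling)
        proj = u.conj() @ u.T                  # 4x4 block of [u*]^a [u]^b
        phase = np.exp(-1j * (cells @ np.array([kx, ky])))
        C += np.kron(np.outer(phase, phase.conj()), proj) / L**2

# restrict to the quarter subsystem A: cells with x < L/2 and y < L/2, all four orbitals
in_A = np.repeat((cells[:, 0] < L // 2) & (cells[:, 1] < L // 2), 4)
xi = np.linalg.eigvalsh(C[np.ix_(in_A, in_A)])
# midgap entanglement eigenvalues pinned near 1/2 are expected throughout the q_xy = 0.5 phase
print("eigenvalues closest to 1/2:", np.sort(np.abs(xi - 0.5))[:4])
```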
In fact, the entanglement spectrum has a one-to-one correspondence with the energy spectrum of the flattened Hamiltonian with open boundaries enclosing the quarter part <cit.>, similar to the first-order case <cit.>. Since the quadrupole moment is evaluated using bulk states, it describes the topology of a flattened Hamiltonian. Our results thus indicate that the bulk-edge correspondence of quadrupole insulators generally manifests in the entanglement spectrum rather than the energy spectrum. This prompts the question of why there might be a breakdown of the bulk-edge relationship in the energy spectrum of the original Hamiltonian. We find that the breakdown occurs because the edge energy gap can close even though the bulk states remain unchanged as the Hamiltonian is flattened. Specifically, we define H_def(λ)=UD(λ)U^†, where U=(|ψ_1⟩,…,|ψ_4L^2⟩) with |ψ_j⟩ being the jth eigenstate of our model under PBCs. Eigenenergies of H_def(λ) are listed in D(λ)=λdiag(-0.5,0.5)⊗ I_2L^2+(1-λ) diag(E_1,…,E_4L^2) with E_1,…,E_4L^2 being eigenenergies of our model in Eq. (<ref>) sorted in an ascending order. As we vary λ from 0 to 1, the Hamiltonian is continuously deformed from the original Hamiltonian to the flattened one without involving bulk energy gap closing [see Fig. <ref>(d)]. Since the eigenstates remain unchanged during the process, the quadrupole moment does not change. However, we find that an energy gap closure at y-normal boundaries occurs for H_def(λ) under OBCs along y [see Fig. <ref>(d)], rendering the emergence of corner modes as reflected by a relative zero-energy DOS [see Fig. <ref>(e)]. Model without momentum-glide reflection symmetry.— Although the Hamiltonian in Eq. (<ref>) respects time-reversal symmetry and momentum-glide reflection symmetries, they are not essential to the quadrupole insulator. To clarify this, we add two extra terms Δ_1τ_yσ_x and Δ_2τ_yσ_y, breaking these symmetries while preserving chiral symmetry. We have also checked that the bulk dipole moments are zero so that the quadrupole moment is well defined. In this case, we still observe the presence of the phases represented by the same color as in Fig. <ref>(b) identified by the energy gaps, quadrupole moment and edge polarizations [see Fig. <ref>(a)]. In the green region, while q_xy=0.5, no midgap modes are observed in the energy spectrum [see Fig. <ref>(b)]. However, they arise in the entanglement spectrum shown in Fig. <ref>(c). Interestingly, besides these phases, we also observe a semimetal phase (yellow region) with four Dirac points in momentum-space energy spectra. Model without quadrupole moment.— Next, we construct another model as shown in Fig. <ref>(a). Similar to the model in Eq. (<ref>), this model is still subject to staggered ℤ_2 gauge fields; however, each unit cell does not carry a π flux. The Bloch Hamiltonian in momentum space reads H_2(k)= t_xτ_0σ_x+t_yτ_xσ_x-t_x^'cosk_xτ_zσ_x +t_x^'sink_xτ_0σ_y+t_y^'cosk_yτ_yσ_y+t_y^'sink_yτ_xσ_y. It still respects time-reversal symmetry T=κ, chiral sysmmetry Γ=τ_0σ_z, and two momentum-glide reflection symmetries, M_x=τ_0σ_x and M_y=τ_xσ_x. Similarly, the two reflection symmetries enforce the absence of bulk dipole moments so that the quadrupole moment is well defined. We also take t_x^'=t_y^'=1 as the units of energy. We find that the model always has zero quadrupole moment, implying that it is a trivial quadrupole insulator. Consequently, the absence of midgap modes in the entanglement spectrum is observed in Fig. 
<ref>(e), consistent with the bulk-edge correspondence of quadrupole insulators in the entanglement spectrum. However, midgap corner modes appear in the energy spectrum as revealed in Fig. <ref>(c)–(d) when |t_x|<1 and |t_y|<1 corresponding to the grey and gold regions in Fig. <ref>(b). These two regions also exhibit the edge polarization of p_x^edge=0.5 and p_y^edge=0.5, respectively. Thus, the relation that Q^corner =p_y^edge +p_x^edge -q_xy is still preserved. In summary, we have proposed a model Hamiltonian that demonstrates a novel type of quadruple insulator with a nonzero quadrupole moment. The insulator does not exhibit midgap corner modes in the energy spectrum but does have midgap modes in the entanglement spectrum. Our results indicate that the bulk-edge correspondence of quadrupole insulators generally manifests in the entanglement spectrum instead of the energy spectrum. Our analysis reveals that the breakdown of the relationship between the energy spectrum and entanglement spectrum arises because the edge energy gap can close during the process of flattening of the Hamiltonian while preserving its bulk states. Importantly, our findings are not restricted to 2Ds as the model can be extended to three dimensions (3Ds), identifying octupole insulators devoid of midgap modes in the energy spectrum. Furthermore, it is possible to investigate the semimetallic phase in 3Ds, where the presence of hinge arc states is exclusively observed in the entanglement spectrum, rather than the energy spectrum. The work is supported by the National Natural Science Foundation of China (Grant No. 11974201) and Tsinghua University Dushi Program. Note added: During the preparation of this manuscript, we became aware of a related work <cit.>. 99 Taylor2017ScienceW. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, Science 357, 61 (2017). Taylor2017PRBW. A. Benalcazar, B. A. Bernevig, and T. L. Hughes, Phys. Rev. B 96, 245115 (2017). Fritz2012PRLM. Sitte, A. Rosch, E. Altman, and L. Fritz, Phys. Rev. Lett. 108, 126807 (2012). ZhangFan2013PRLF. Zhang, C. L. Kane, and E. J. Mele, Phys. Rev. Lett. 110, 046404 (2013). Slager2015PRBR.-J. Slager, L. Rademaker, J. Zaanen, and L. Balents, Phys. Rev. B 92, 085126 (2015). Brouwer2017PRLJ. Langbehn, Y. Peng, L. Trifunovic, F. von Oppen, and P. W. Brouwer, Phys. Rev. Lett. 119, 246401 (2017). FangChen2017PRLZ. Song, Z. Fang, and C. Fang, Phys. Rev. Lett. 119, 246402 (2017). Schindler2018SAF. Schindler, A. M. Cook, M. G. Vergniory, Z. Wang, S. S. P. Parkin, B. A. Bernevig, and T. Neupert, Sci. Adv. 4, eaat0346 (2018). Wang2018ELQ. Wang, D. Wang, and Q.-H. Wang, Europhys. Lett. 124, 50005 (2018). Brouwer2019PRXL. Trifunovic and P. W. Brouwer, Phys. Rev. X 9, 011012 (2019). Seradjeh2019PRBM. Rodriguez-Vega, A. Kumar, and B. Seradjeh, Phys. Rev. B 100, 085138 (2019). Roy2019PRBD. Cǎlugǎru, V. Juričić, and B. Roy, Phys. Rev. B 99, 041301(R) (2019). Yang2019PRLX.-L. Sheng, C. Chen, H. Liu, Z. Chen, Z.-M. Yu, Y. X. Zhao, and S. A. Yang, Phys. Rev. Lett. 123, 256402 (2019). Hughes2020PRBP. Zhu, K. Loehr, and Taylor L. Hughes, Phys. Rev. B 101, 115140 (2020). Xu2020PPRY.-B. Yang, K. Li, L.-M. Duan, and Y. Xu, Phys. Rev. Research 2, 033029 (2020). Parameswaran2020PRLA. Tiwari, M.-H. Li, B. A. Bernevig, T. Neupert, and S. A. Parameswaran, Phys. Rev. Lett. 124, 046801 (2020). AYang2020PRLC. Chen, Z. Song, J.-Z. Zhao, Z. Chen, Z.-M. Yu, X.-L. Sheng, and S. A. Yang, Phys. Rev. Lett. 125, 056402 (2020). Xu2020NJPY.-L. Tao, N. Dai, Y.-B. Yang, Q.-B. Zeng, and Y. Xu, New J. Phys. 
22, 103058 (2020). Roy2020PPRA. Agarwala, V. Juričić, and B. Roy, Phys. Rev. Research 2, 012067(R) (2020). Wang2021PRLJ.-H. Wang, Y.-B. Yang, N. Dai, and Y. Xu, Phys. Rev. Lett. 126, 206404 (2021). Xu2023PRBY.-L. Tao and Y. Xu, Phys. Rev. B 107, 184201 (2023). Huber2018Nature M. Serra-Garcia, V. Peri, R. Süsstrunk, O. R. Bilal, T. Larsen, L. G. Villanueva, and S. D. Huber, Nature 555, 342-345 (2018). Bahl2018Nature C. W. Peterson, W. A. Benalcazar, T. L. Hughes, and G. Bahl, Nature 555, 346-350 (2018). Cho2019PRBB. Kang, K. Shiozaki, and G. Y. Cho, Phys. Rev. B 100, 245134 (2019). Hughes2019PRBW. A. Wheeler, L. K. Wagner, and T. L. Hughes, Phys. Rev. B 100, 245135 (2019). Xu2021PRBY.-B. Yang, K. Li, L.-M. Duan, and Y. Xu, Phys. Rev. B 103, 085408 (2021). Shen2020PRLC.-A. Li, B. Fu, Z.-A. Hu, J. Li, and S.-Q. Shen, Phys. Rev. Lett. 125, 166801 (2020). Haldane2008PRLH. Li and F. D. M. Haldane, Phys. Rev. Lett. 101, 010504 (2008). Ryu2006PRBS. Ryu and Y. Hatsugai, Phys. Rev. B 73, 245115 (2006). Dodriguez2009PRB I. D. Rodríguez and G. Sierra, Phys. Rev. B 80, 153303 (2009). BrayAli2009PRB N. Bray-Ali, L. Ding, and S. Haas, Phys. Rev. B 80, 180504(R) (2009). Pollmann2010PRBF. Pollmann, A. M. Turner, E. Berg, and M. Oshikawa, Phys. Rev. B 81, 064439 (2010). Fidkowski2010PRLL. Fidkowski, Phys. Rev. Lett. 104, 130502 (2010). Prodan2010PRLE. Prodan, T. L. Hughes, and B. A. Bernevig, Phys. Rev. Lett. 105, 115501 (2010). Turner2010PRBA. M. Turner, Y. Zhang, and A. Vishwanath, Phys. Rev. B 82, 241102(R) (2010). THughes2011PRBT. L. Hughes, E. Prodan, and B. A. Bernevig, Phys. Rev. B 83, 245132 (2011). Alexand2011PRBA. Alexandradinata, T. L. Hughes, and B. Andrei Bernevig, Phys. Rev. B 84, 195103 (2011). Chandran2011PRBA. Chandran, M. Hermanns, N. Regnault, and B. Andrei Bernevig, Phys. Rev. B 84, 205136 (2011). Bernevig2013PRBC. Fang, M. J. Gilbert, and B. Andrei Bernevig, Phys. Rev. B 87, 035119 (2013). Dubinkin2020arxivO. Dubinkin and Taylor L. Hughes, arXiv:2002.08385 (2020). Yuxin_KBZ. Y. Chen, S. A. Yang, and Y. X. Zhao, Nat. Commun. 13, 2215 (2022). Tao_KBY.-L. Tao, M. Yan, M. Peng, Q. Wei, Z. Cui, S. A. Yang, G. Chen, and Y. Xu, arXiv:2305.09174 (2023). ZhuArxiv Z. Zhu et. al., arXiv:2305.08450 (2023). ChengArxiv Y. Wang, C. Zhang, Z.Y. Chen, B. Liang, Y.X. Zhao, and J. Cheng, arXiv:2305.07174 (2023). Peschel2003I. Peschel, J. Phys. A 36, L205 (2003). supplementSee the Supplemental Material. Yang2023arxivJ. Hu, S. Zhuang, and Y. Yang, arXiv:2306.15477 (2023). In the Supplemental Material, we will follow Refs. <cit.> to show the relation between the entanglement spectrum in our case and the energy spectrum of the flattened Hamiltonian. We divide an L_x× L_y system into a quarter part A and its complement A [see Fig. 3(a) in the main text] <cit.>. The entanglement spectrum refers to the spectrum of the reduced density matrix ρ_A of the subsystem A by tracing out its complement A for the density matrix of the ground state |Ψ_G⟩ <cit.>, that is, ρ_A=Tr_B(|Ψ_G⟩⟨Ψ_G|)=e^-H_A/Z_A. Here we write ρ_A in terms of a Hamiltonian H_A, and Z_A=Tre^-H_A. For a non-interacting system, the entanglement spectrum is determined by eigenvalues of the single-particle correlation matrix in the region A <cit.>, [C_A]_r_iα,r_jβ=⟨ĉ^†_r_iαĉ_r_jβ⟩, where ĉ^†_r_iα (ĉ_r_iα) is the fermionic creation (annihilation) operator at lattice site (r_i,α). 
Writing in momentum space, we have [C_A]_r_iα,r_jβ=1/L_x L_y∑_k⟨ĉ^†_kαĉ_kβ⟩ e^-ik·(r_i-r_j), with ĉ_kα=1/√(L_x L_y)∑_r_iĉ_r_iαe^-ik·r_i denoting the fermionic annihilation operator of component α in momentum space. Diagonalizing the Hamiltonian, we have Ĥ=∑_k,nE_n(k)f̂^†_knf̂_kn, where f̂^†_kn=∑_αĉ^†_kα [u_k^n]^α, and [u_k^n]^α is the αth component of the eigenvector of H(k) with eigenenergy E_n(k). For ⟨ĉ^†_kαĉ_kβ⟩, we have ⟨ĉ^†_kαĉ_kβ⟩ =⟨Ψ_G|ĉ^†_kαĉ_kβ|Ψ_G⟩ =∑_nn^' [u_k^n*]^α [u_k^n^']^β⟨Ψ_G| f̂^†_knf̂_kn^' |Ψ_G⟩ =∑_n∈occ. [u_k^n*]^α [u_k^n]^β, which only involves occupied states. Substituting Eq. (<ref>) into Eq. (<ref>) yields [C_A]_r_iα,r_jβ =1/L_x L_y∑_k,n∈occ. [u_k^n*]^α [u_k^n]^β e^-ik·(r_i-r_j). We now consider the projector onto the occupied states P_occ=∑_ k∑_n∈occ. | k n ⟩⟨ k n|. Its representation in real space is given by [P_occ]_r_iα,r_jβ =1/L_x L_y∑_k,n∈occ. [u_k^n]^α [u_k^n*]^β e^ik·(r_i-r_j)=[C_A]_r_iα,r_jβ^*. Obviously, the correlation matrix C_A is the complex-conjugate matrix of the real-space projector restricted in the subsystem A. Since the flattened Hamiltonian is defined by the projector, i.e., H_flat=1/2-P_occ, then H_flat^t=1/2-C_A, where t represents the transpose operation. Thus, the entanglement spectrum has a one-to-one correspondence with the energy spectrum of the flattened Hamiltonian. Specifically, if there is a midgap mode at ξ=0.5 for C_A, then there exists a zero-energy mode for the flattened Hamiltonian, and vice versa. 4 SMTurner2010PRBA. M. Turner, Y. Zhang, and A. Vishwanath, Phys. Rev. B 82, 241102(R) (2010). SMTHughes2011PRBT. L. Hughes, E. Prodan, and B. A. Bernevig, Phys. Rev. B 83, 245132 (2011). SM_Schindler2018SAF. Schindler, A. M. Cook, M. G. Vergniory, Z. Wang, S. S. P. Parkin, B. A. Bernevig, and T. Neupert, Sci. Adv. 4, eaat0346 (2018). SM_Wang2018ELQ. Wang, D. Wang, and Q.-H. Wang, Europhys. Lett. 124, 50005 (2018). SM_Hughes2020PRBP. Zhu, K. Loehr, and Taylor L. Hughes, Phys. Rev. B 101, 115140 (2020). SMHaldane2008PRLH. Li and F. D. M. Haldane, Phys. Rev. Lett. 101, 010504 (2008). SMPollmann2010PRBF. Pollmann, A. M. Turner, E. Berg, and M. Oshikawa, Phys. Rev. B 81, 064439 (2010). SMFidkowski2010PRLL. Fidkowski, Phys. Rev. Lett. 104, 130502 (2010). SM_Peschel2003I. Peschel, J. Phys. A 36, L205 (2003).
http://arxiv.org/abs/2307.03084v1
20230705163014
OpenDelta: A Plug-and-play Library for Parameter-efficient Adaptation of Pre-trained Models
[ "Shengding Hu", "Ning Ding", "Weilin Zhao", "Xingtai Lv", "Zhen Zhang", "Zhiyuan Liu", "Maosong Sun" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL" ]
Utility-Aware Load Shedding for Real-time Video Analytics at the Edge [ Received ; accepted ===================================================================== The scale of large pre-trained models (PTMs) poses significant challenges in adapting to downstream tasks due to the high optimization overhead and storage costs associated with full-parameter fine-tuning. To address this, many studies explore parameter-efficient tuning methods, also framed as “delta tuning” in <cit.>, which updates only a small subset of parameters, known as “delta modules”, while keeping the backbone model's parameters fixed. However, the practicality and flexibility of delta tuning have been limited due to existing implementations that directly modify the code of the backbone PTMs and hard-code specific delta tuning methods for each PTM. In this paper, we present OpenDelta [GitHub Repo <https://github.com/thunlp/OpenDelta>, Demo Video <https://rb.gy/qjvpav>.], an open-source library that overcomes these limitations by providing a plug-and-play implementation of various delta tuning methods. Our novel techniques eliminate the need to modify the backbone PTMs' code, making OpenDelta compatible with different, even novel PTMs. OpenDelta is designed to be simple, modular, and extensible, providing a comprehensive platform for researchers and practitioners to adapt large PTMs efficiently. § INTRODUCTION =-1 With the rapid development of self-supervised learning methods in the realm of deep learning, especially pre-training techniques <cit.>, foundational pre-trained models <cit.> (PTMs) have become a common cornerstone for numerous downstream tasks. And as a result, research into large-scale PTMs has flourished. =-1 Nevertheless, the ever-expanding scale of PTMs also poses substantial obstacles in practical use. In traditional model adaptation, all the parameters of the PTMs are optimized for each downstream task, which becomes increasingly impractical as the model scales. Firstly, optimizing all the parameters incurs prohibitive computing and memory consumption; secondly, storing a fine-tuned model instance for each task or experiment significantly amplifies the storage cost. =-1 To address these challenges, researchers have developed parameter-efficient methods for model adaptation. Such methods keep the parameters of the main model fixed and update only a small subset of parameters during adaptation. This approach, known as “delta tuning”, is described and surveyed in <cit.>. Different delta tuning methods have been proposed, with varying types and positions of “delta modules”. For example, Adapter module <cit.> is composed of two low-dimensional linear projection layers with an activation function, while LoRA <cit.> module introduces a low-rank decomposition for the weight matrix. BitFit <cit.>, on the other hand, specifies the bias vector in PTMs as the delta modules. The delta module can be applied to different positions <cit.> to achieve either better performance or efficiency. =-1 Theoretically, incorporating most delta tuning methods would necessitate restructuring the backbone model, a requirement conventionally achieved through direct code manipulation. While this method may seem simple, it carries several disadvantages. Primarily, it lacks flexibility, as delta modules can theoretically be implemented in various positions, making modifications to each position in the backbone model code a cumbersome task. 
Additionally, this method is not scalable, as accommodating delta tuning for newly introduced PTMs requires fresh code modifications, posing a challenge for researchers and engineers. =-1 In this paper, we present a novel approach to implement delta tuning methods. Our approach modifies the backbone model's architecture after it is loaded into the memory. We propose four essential techniques, namely named-based addressing, dynamic tensor re-routing, runtime initialization, and a visualization system. Using these key techniques, we build OpenDelta, an open-source toolkit for delta tuning without modifying the backbone model code. OpenDelta has several key features. Firstly, it is simple to use. Migrating from existing full-parameter training to delta tuning requires as few as three lines of code. For beginners or engineers, we also support automatic delta model construction. Secondly, it is modular, with delta modules implemented as independent sub-modules that can be attached to or detached from the backbone models. This feature allows different delta modules to coexist and cooperate in the same backbone model and serves multiple tasks flexibly. Thirdly, OpenDelta is highly extensible, supporting pre-trained models in a wide range of frameworks, including both official implementations from the Huggingface Library <cit.> and customized PTMs. It can potentially be used with newly emerged PTMs and integrated with other PTMs' frameworks for efficient training, such as the parallel training framework. § RELATED WORK =-1 Our work is related to delta tuning, more specifically, the implementation of delta tuning methods. =-1 Delta Tuning. Delta tuning refers to the parameter-efficient method for tuning a large PTM. Different delta tuning methods <cit.> differ in both the architecture of the delta module and the positions that the delta modules are integrated into the backbone model. Various works have attempted to connect these disparate delta tuning approaches under a unified perspective <cit.>. In our work, we draw inspiration from this unified viewpoint and aim to devise a framework that can support different delta tuning methods within the same pipeline. Our library includes the most popular delta tuning methods and is amenable to new methods as they emerge. Implementation of Delta tuning. Previous implementation frameworks for delta tuning relied on the code modification approach. For example, AdapterHub <cit.> copies a specific version of Huggingface transformers Library <cit.> and implement several popular delta tuning methods for a set of pre-defined PTMs. LoRA <cit.> implements a limited library of LoRA linear layers. These methods are model-specific and involve hard-coded implementations, which restrict their usability across various PTMs. In contrast, OpenDelta represents a significant advancement as it requires no code changes to the backbone model, making it highly versatile and broadly applicable. § MOTIVATION =-1 In this section, we begin by presenting the unified formulation of delta tuning. Then we underscore a set of crucial characteristics of delta tuning, focusing on the implementation aspect, which emphasizes the pressing need for a novel toolkit to aid in the research and advancement of delta tuning approaches. §.§ Unified Formulation of Delta Tuning Although delta tuning is principally not limited to a specific type of neural networks, currently almost all the delta tuning methods are applied to PTMs <cit.> with the Transformers architecture <cit.>. 
A PTM ℳ parameterized by Θ is composed of multiple sub-modules m, where the hidden representations 𝐡 are passed through the sub-module to produce new hidden representation 𝐡', i.e., 𝐡' = m(𝐡). The adaptation of a PTM ℳ to downstream tasks is to update the original parameters Θ into Θ'. In full-parameter fine-tuning, all parameters can be updated, i.e., potentially, |ΔΘ| = |Θ|. In contrast, delta tuning only updates a small fraction of parameters, i.e., |ΔΘ| ≪ |Θ|. Despite the drastic difference in the specific form of the delta tuning methods,  <cit.> unify them into special forms of modifications Δ𝐡 to the hidden representation 𝐡. The Δ𝐡 is generated by passing a hidden state 𝐡_δ to a delta module m_δ. Formally, 𝐡←𝐡 + Δ𝐡 = 𝐡 + m_δ(𝐡_δ), where ← denotes a replacement of the original 𝐡, and 𝐡_δ can be the same as or different to 𝐡. §.§ Key Features for Delta Tuning Several key features of delta tuning methods can be observed from Eq.(<ref>). Tensor Re-routing. The first feature of delta tuning is the ability to redirect the flow of hidden states. In a pre-trained model, the flow of hidden states forms a static graph, with the hidden states serving as nodes and sub-modules acting as transformations on the edges As shown in Eq.(<ref>), the introduction of the edge transformation m_δ redirects node 𝐡_δ and injects it into another node 𝐡, creating a new flow of hidden states that is not present in the original model architecture. The implementation of OpenDelta should achieve such tensor re-routing without hard-coding them. Flexibility. Eq.(<ref>) allows for the input hidden states and output hidden states to be located at any position in the backbone model ℳ. For example, AdapterDrop <cit.> observes that only applying delta modules to the upper half of Transformer layers yields better results than the lower half. The flexibility of applied positions provides remarkable opportunities to explore the potential structure of delta modules <cit.>. However, it also presents a challenge for the implementation to be able to achieve flexibility in practice that matches the theoretical framework. Compositionality. Different delta tuning methods can co-exist or even be combined in the same backbone model <cit.>, potentially boosting performance or supporting multitask learning <cit.>. Thus, it is crucial to enable easy and independent implementation of each delta tuning method, while also allowing for the flexible composition of multiple modules. Dynamism. It is common for the backbone PTM to serve as a central model for multiple tasks in delta tuning. To serve a specific task, delta modules are attached to the backbone model, creating a task-specific expert. When the delta modules are detached, the backbone models revert back to their original function as general language models. This dynamic nature of delta tuning-based task adaptation should be incorporated into OpenDelta. § OPENDELTA =-1 In light of the aforementioned key features of delta tuning, we present OpenDelta. We will begin by presenting an overview of OpenDelta. Following that, we will delve into the key implementations of this framework. §.§ Framework =-1 To perform delta tuning, two prerequisites are required: a pre-trained language model ℳ and the “modified modules”, which are a user-specified list of sub-modules m_i to which the delta modules should be applied. Our target is to construct a delta object. 
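Before describing the construction, it may help to recall what a single delta module computes in the unified view of Eq. (2). The PyTorch-style sketch below illustrates the adapter and LoRA cases; it is illustrative pseudocode for the unified formulation, not OpenDelta's internal implementation, and the bottleneck size, rank, and scaling constant are arbitrary choices.

```python
import torch
import torch.nn as nn

class AdapterDelta(nn.Module):
    """m_delta(h) = up(act(down(h))); the output is added back to the hidden state (Eq. 2)."""
    def __init__(self, d_model: int, bottleneck: int = 24):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.act(self.down(h)))       # h <- h + m_delta(h)

class LoRADelta(nn.Module):
    """Low-rank update of a frozen linear map: W h + scale * (B A) h, with only A, B trained."""
    def __init__(self, frozen_linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = frozen_linear
        for p in self.base.parameters():
            p.requires_grad_(False)                       # the backbone weight stays fixed
        self.A = nn.Parameter(torch.randn(rank, frozen_linear.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(frozen_linear.out_features, rank))
        self.scale = alpha / rank

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.base(h) + self.scale * (h @ self.A.T @ self.B.T)
```

In both cases only the delta parameters receive gradients during adaptation, which is what keeps |ΔΘ| ≪ |Θ|.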
Our objective is to create a delta object, which is a collection of delta modules typically located at various positions within ℳ and serves as a whole to adapt the PTM to downstream tasks. We follow three steps to create a delta object. Firstly, we use name-based addressing to obtain the pointers to the modified modules. Secondly, we construct a delta object comprising uninitialized delta modules. Thirdly, we modify the route of tensors in the modified modules into the delta modules using a dynamic tensor re-routing technique. After the updated route of the hidden state is established, we perform runtime initialization to initialize the delta object. =-1 After the delta object is constructed, we attach it to the backbone model. Then, we provide a simple functional interface to turn off the gradient computation in the backbone models and only compute the gradient of parameters in the delta object. After the training is complete, we provide a simple interface for saving only the delta objects, which significantly reduces the storage requirements for the backbone model. =-1 The overall framework of OpenDelta is shown in Figure <ref>. Next, we introduce the key implementations that support the construction of delta objects. §.§ Key Implementations The above framework is achieved by four key implementations, i.e., name-based addressing, dynamic tensor re-routing, runtime initialization, and visualization system. Name-based Addressing. Firstly, we need to obtain a pointer to the desired sub-modules which are applied with the delta modules. In practice, we can effectively retrieve the pointer by using the name of the sub-module. Since the sub-modules are organized in a tree structure, we perform a depth-first search to find the sub-modules that match the provided name. This search results in a full path consisting of all the names from the root to the matched sub-module, accurately matching the sub-module. However, directly writing the full path to the sub-modules can be impractical, so we design several simplifications to make addressing easier and more human-readable [<https://opendelta.readthedocs.io/en/latest/notes/namebasedaddr.html>]. One such simplification involves taking advantage of the repetitiveness of transformer layers, which many delta tuning methods address by adding delta modules to the same type of sub-modules in each layer. For example, when users specify , they likely intend to apply delta modules to the attention sub-modules in all transformer layers. To address this need, we provide a tail-matching mechanism that automatically matches the sub-modules based on their names. For more complex configurations of positions, we allow matching based on regular expressions and web-based selection using our custom-designed web interface. Dynamic Tensor Re-routing. A fundamental distinction that sets OpenDelta apart from other implementations is its ability to add delta modules without requiring any modifications to the code of the backbone modules. This feature necessitates a dynamic rerouting of tensors through the delta modules and back into the backbone model. To achieve this rerouting, we wrap the original forward function of a sub-module with a wrapper function and replace the original forward function with the wrapper function. To ensure seamless replacement, we utilize a decorator to inherit the original function's attributes, including the I/O, doc string, etc. 
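A stripped-down sketch of the two techniques just described, name-based addressing over the named sub-modules and re-routing by wrapping a matched sub-module's forward function, is given below. It illustrates the idea only and is not OpenDelta's actual code: it uses the simple "output" route h_out ← h_out + m_δ(h_out) (one of the three routes discussed next), assumes the sub-module returns a single tensor, and the tail-matching rule is a simplification.

```python
import functools
import torch.nn as nn

def find_modules(model: nn.Module, suffix: str):
    """Name-based addressing: return (full_name, module) pairs whose name tail-matches suffix."""
    return [(name, mod) for name, mod in model.named_modules()
            if name == suffix or name.endswith("." + suffix)]

def attach_delta(module: nn.Module, delta: nn.Module):
    """Dynamic tensor re-routing: wrap forward so its output also passes through the delta module."""
    original_forward = module.forward

    @functools.wraps(original_forward)            # keep the signature/docstring of the original
    def wrapped_forward(*args, **kwargs):
        h_out = original_forward(*args, **kwargs)  # assumes a single-tensor output for simplicity
        return h_out + delta(h_out)                # "output" route: h_out <- h_out + m_delta(h_out)

    module.forward = wrapped_forward               # replace the forward of this instance only
    module.delta_module = delta                    # registering it tracks the delta parameters
    return module
```

Applying `attach_delta` with a fresh delta module to every match returned by `find_modules(model, "attention")` reproduces the attach step in spirit; the library additionally handles tuple outputs, the other two routes, and detachment.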
Within the wrapped function, we implement three distinct routes of the hidden states, taking into account the order of the original sub-module and the delta module. The first route utilizes the input hidden state 𝐡_in of m_i as both the modification target and the input to the delta module. We pass it through the delta module to get the output m_δ(𝐡_in), and merge it to 𝐡_in. Formally, 𝐡_in←𝐡_in + m_δ(𝐡_in). The second route employs the output hidden state 𝐡_out of m_i as the modification target: 𝐡_out←𝐡_out + m_δ(𝐡_out). The third route leverages the input hidden state 𝐡_in as the input to the delta module, and sets the output hidden state 𝐡_out as the modification target: 𝐡_out←𝐡_out + m_δ(𝐡_in). =-1 While these three routes do not necessarily encompass all possible relationships between the delta module and the backbone model, they are sufficient to support most popular delta tuning methods (as illustrated in Table <ref>). However, we remain open to the possibility of incorporating additional routes as needed. =-1 Runtime Initialization. To ensure that weight matrices in the delta module match the hidden states in terms of shape and dimension, we must account for hidden states whose shapes are not specified in the model configuration. In traditional implementations, this requires manually examining the code of the backbone model. However, OpenDelta automates this process by passing a pseudo input through the backbone model, allowing the shapes of the hidden states to be automatically determined as they propagate from the input to the output. =-1 Visualization System. As delta tuning provides flexibility and dynamism, it is essential to ensure the correct construction of delta objects by verifying that delta modules are added as specified. However, direct printing of large pre-trained models results in massive outputs. To address this, we provide a visualization system that leverages repetition in transformer architecture. Specifically, we collapse the repetitive layers and neatly print the parameters' information. With the addition of delta modules to the backbone model, users can easily observe the changes made in the model through visualization. An example of visualization can be seen in Figure <ref>. As the visualization system is useful beyond delta tuning, it has been separated into an independent package named “” [<https://pypi.org/project/bigmodelvis/>]. a § USAGE In this section, we provide the use cases of OpenDelta which demonstrate the three characteristics of OpenDelta, i.e., simplicity, modularity, and extensibility. §.§ Simplicity Migrating from Fine-tuning. To facilitate the migration from existing full-parameter fine-tuning to delta tuning, only a few lines of code modifications are required, as exemplified in Figure <ref>. Initially, in the traditional full-parameter fine-tuning, the PTM is loaded from external libraries, such as Huggingface Transformers (Line 1), and train the model (Line 10). To introduce delta tuning, line 3-8 are added and executed. To begin with, an optional step is to visualize the backbone model to identify the target “”. Then, a delta object, such as LoRA, is created and attached to the backbone model. Subsequently, the model parameters, excluding the delta modules and the randomly initialized classification head, are frozen. The “” parameter is employed to remove the non-trainable parameters from the model checkpoint. Lastly, the sub-modules of the backbone are visualized to verify the successful creation and attachment of the delta modules. 
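Since the referenced figure is not reproduced in this text, the listing below reconstructs the described migration; the class and argument names used here (`Visualization`, `LoraModel`, `modified_modules`, `freeze_module`, `exclude`, `set_state_dict`, `log`) are taken from the public OpenDelta repository to the best of our knowledge and should be checked against the installed version.

```python
from transformers import AutoModelForSequenceClassification
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased")  # existing line: load the PTM

# --- the few added lines for delta tuning ---
from opendelta import Visualization, LoraModel
Visualization(model).structure_graph()                      # optional: inspect sub-module names
delta_model = LoraModel(backbone_model=model,
                        modified_modules=["attention"])     # attach LoRA to the matched sub-modules
delta_model.freeze_module(exclude=["deltas", "classifier"], # train only deltas and the new head
                          set_state_dict=True)              # keep only trainable params in the checkpoint
delta_model.log()                                           # visualize again to verify the attachment

# ... the rest of the original fine-tuning script (optimizer, training loop) runs unchanged
```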
An example of the visualization results is depicted in Figure <ref>. AutoDelta Mechanism. The implementation of OpenDelta supports highly intricate designs of delta modules, catering to diverse experimental requirements. Nonetheless, it is desirable to provide a default configuration of delta modules for practitioners who may not be well-versed in the mechanism of delta tuning. However, the naming conventions of sub-modules differ significantly among various backbone models, despite their shared transformer architecture. To tackle this issue, we establish a common name convention and employ a mapping technique to map the model-specific name convention to the common one [<https://opendelta.readthedocs.io/en/latest/notes/unifyname.html>]. This enables the AutoDelta mechanism to be supported seamlessly. Figure <ref> exemplifies that, once the type of the delta tuning method is specified, the delta modules will be attached to the backbone model in default positions and with appropriate hyper-parameters. We have listed the default configurations of each delta tuning method in Table <ref>. Furthermore, the AutoDelta mechanism facilitates the loading of fine-tuned checkpoints of delta modules, without explicit knowledge of the type and hyper-parameters of the delta modules. §.§ Modularity The second notable attribute of OpenDelta is modularity. It affords the capacity to independently attach and detach each delta object from the backbone model, thereby providing the possibility of multi-task serving with a single backbone model. Specifically, suppose data pertaining to various tasks are presented sequentially, wherein each data triggers the attachment of a corresponding delta object to the backbone model for processing, and once completed, the delta object is detached. A case that illustrates this functionality is illustrated in Figure <ref>, where three tasks are process sequentially using a single backbone model. §.§ Extensibility Delta tuning is one of the important techniques that enables the use of large PTMs, and as such, we make efforts to ensure its compatibility with other techniques such as model acceleration and multi-GPU training. Specifically, we currently provide support for the BMTrain framework [<https://github.com/OpenBMB/BMTrain>] with ZeRO-3 optimization enabled <cit.>. It is also worth noting that we plan to expand our support for additional model-acceleration frameworks in the future. § CONCLUSION In summary, OpenDelta is a plug-and-play library for delta tuning, offering an intuitive and modular solution to adapt large PTMs using delta tuning without the need for code modifications. The library's user-friendliness, flexibility, and extensibility make it accessible and useful for both researchers and engineers. In the future, we plan to continuously update the library with new delta tuning methods and ensure its compatibility with the latest versions of other major PTMs libraries. § ACKNOWLEDGEMENTS This work is supported by the National Key R&D Program of China (No.2022ZD0116312), National Natural Science Foundation of China (No. 62236004), Major Project of the National Social Science Foundation of China (No. 22ZD298). § LIMITATIONS Although we believe that OpenDelta is simple, easy to use, flexible, and extensible since it does not require code modification, it is still limited by many implementation details. 
Firstly, some delta tuning methods, such as Prefix Tuning, are constrained by their formulation and can only be applied to attention layers, so their positions cannot be arbitrarily specified; this is also why we did not use Prefix Tuning as an example in this paper. Secondly, some backbone models differ significantly from mainstream implementations, making it difficult to apply the AutoDelta mechanism. We therefore maintain a list of tested models that support AutoDelta, while other models can still use OpenDelta in a customized manner. Thirdly, although OpenDelta is theoretically compatible with acceleration frameworks other than BMTrain, such as DeepSpeed, some implementation details currently limit the compatibility of certain functions. We will do our best to work with the maintainers of those packages to improve compatibility. § ETHICAL CONSIDERATION In the writing process of this paper, ChatGPT <cit.> was utilized for revision and refinement. However, the authors guarantee that each sentence in this paper has been thoroughly reviewed and checked to accurately convey the authors' intended meaning.
http://arxiv.org/abs/2307.01422v1
20230704012802
Generative Flow Networks: a Markov Chain Perspective
[ "Tristan Deleu", "Yoshua Bengio" ]
cs.LG
[ "cs.LG" ]
Garbage in, garbage out: Zero-shot detection of crime using Large Language Models Anj Simmons, Rajesh Vasa Applied Artificial Intelligence Institute, Deakin University, Geelong, Australia Email: {a.simmons, rajesh.vasa}@deakin.edu.au August 1, 2023 =============================================================================================================================================================== While Markov chain Monte Carlo methods (MCMC) provide a general framework to sample from a probability distribution defined up to normalization, they often suffer from slow convergence to the target distribution when the latter is highly multi-modal. Recently, Generative Flow Networks (GFlowNets) have been proposed as an alternative framework to mitigate this issue when samples have a clear compositional structure, by treating sampling as a sequential decision making problem. Although they were initially introduced from the perspective of flow networks, the recent advances of GFlowNets draw more and more inspiration from the Markov chain literature, bypassing completely the need for flows. In this paper, we formalize this connection and offer a new perspective for GFlowNets using Markov chains, showing a unifying view for GFlowNets regardless of the nature of the state space as recurrent Markov chains. Positioning GFlowNets under the same theoretical framework as MCMC methods also allows us to identify the similarities between both frameworks, and most importantly to highlight their differences. § INTRODUCTION Sampling from a probability distribution defined up to a normalization constant p(x) = R(x) / Z can be a challenging problem when the normalization constant Z is intractable (e.g., the partition function of an energy-based model over a large, or even continuous, sample space), and only the unnormalized probability R(x) can be computed easily. Besides energy-based models, this also includes posterior distributions of the form p(x|) in Bayesian inference, that is the joint distribution p(| x)p(x) (which can often be evaluated analytically), normalized by the evidence p(), which is typically intractable as well. In those cases, Markov chain Monte Carlo methods (MCMC; ) have proven to be a versatile tool to sample from such distributions defined up to normalization. Despite their generality though, MCMC methods may suffer from slow mixing when the target distribution p(x) is highly multi-modal, affecting the convergence of these methods and therefore yielding poor samples which are not representative of p(x). Recently, Generative Flow Networks (GFlowNets; ) have been proposed as an alternative framework to sample from distributions defined up to normalization, when the samples in question have a natural compositional structure. Applications of GFlowNets include generating small molecules <cit.>, Bayesian structure learning of Bayesian Networks <cit.>, modeling Bayesian posteriors over structured latent variable models <cit.>, generating biological sequences <cit.>, as well as scientific discovery at large <cit.>. Unlike MCMC, GFlowNets treat the generation of a sample not as a Markov chain over the sample space, but as a sequential decision making problem where each new sample is constructed piece by piece from scratch, mitigating the problem of sampling from diverse modes of a multi-modal target distribution. <cit.> recently extended this framework to more general state spaces, including continuous spaces. 
Even though GFlowNets were originally introduced from the perspective of flow networks <cit.>, most of the recent advances completely ignore the need to work with flows, and work directly as a Markovian process, with conditions heavily inspired by the literature on Markov chains (e.g., the detailed balance conditions; ). In this work, we formalize this connection with Markov chains by treating a GFlowNet as a recurrent Markov chain, whose certain marginal distribution matches the target distribution under some boundary conditions. This new perspective under the same theoretical framework as MCMC methods allows us to better understand the similarities between both methods, but also to identify clearly their differences. The objective of this paper is also to provide an introduction to GFlowNets for researchers with expertise in MCMC methods, using a familiar approach and a similar vocabulary. § DISCRETE GENERATIVE FLOW NETWORKS In this section, we recall the formalism of GFlowNets as a pointed Directed Acyclic Graph (DAG), as described in <cit.>, and introduce a new perspective using recurrent Markov chains. For some DAG , we will use the notations _(s) and _(s) to denote the set of parents and children of s respectively in . §.§ Flow networks over pointed Directed Acyclic Graphs A Generative Flow Network (GFlowNet) is a generative model that treats the generation of an object as a sequential decision making problem <cit.>. We assume in this section that these objects x ∈ are discrete and have some compositional structure, where the sample space is denoted by . A GFlowNet constructs a sample x ∈ by creating a process over a (finite) superset of states ⊇, starting at some fixed initial state s_0∈, leveraging the compositional structure of x. For example, <cit.> used a GFlowNet to define a distribution over molecules, where represents the space of all (complete) molecules, which can be constructed piece by piece by attaching a new fragment to a partially constructed molecule, i.e. a state in \, starting from the empty state s_0. Beyond containing the sample space , the states of therefore serve as intermediate steps along the generation process. <cit.> formalized this process by structuring the state space as a Directed Acyclic Graph (DAG) = (, ), whose vertices = ∪{s_f} correspond to the states of the GFlowNet , with the addition of an abstract state s_f∉ with no child, called the terminal state. The terminal state is added in such a way that its parents are exactly the elements of the sample space: _(s_f) =. The states x∈ are called terminating states. The edges of the DAG follow the compositional structure of the states in , while guaranteeing acyclicity; for example, there may be an edge s → s' ∈ if s' is the result of adding a new fragment to a partial molecule s. We also assume that all the states s∈ are accessible from the initial state s_0, and that is a pointed DAG, meaning that is rooted at s_0 and any trajectory starting at s_0 following the edges in eventually terminates at s_f. See <ref> (left) for an example of a pointed DAG. In addition to the pointed DAG structure over states, every terminating state x∈ is associated with a reward R(x) > 0, indicating a notion of “preference” for certain states. By convention, we set R(s) = 0 for any intermediate state s ∈\. The goal of a GFlowNet is to find a flow F along the edges of that satisfy, for all the states s' ∈\{s_0}, the following flow-matching condition: ∑_s ∈_(s') F(s → s') - ∑_s”∈_(s') F(s' → s”) = R(s'). 
The condition above has an intuitive interpretation: we want the total amount of flow going into s' to be equal to the total amount of flow going out of s', with some residual R(s'). If <ref> is satisfied for all s' ∈\{s_0}, then the GFlowNet induces a distribution over the terminating states x ∈ which is proportional to R(x). More precisely, if we sample a complete trajectory (s_0, s_1, …, s_T, x, s_f) using a transition probability distribution defined by normalizing the outgoing flows P_F(s_t+1| s_t) ∝ F(s_t→ s_t+1), with the conventions s_T+1 = x and s_T+2 = s_f, then x is sampled with probability proportional to R(x). The distribution induced by the GFlowNet P_F^⊤(x) ∝ R(x) is called the terminating state probability distribution <cit.>, and corresponds to the marginal distribution of the process described above at x∈: P_F^⊤(x) ≜ P_F(x→ s_f)∑_τ: s_0⇝ x∏_t=0^T_τP_F(s_t+1| s_t), where s_0⇝ x denotes all the possible trajectories τ from s_0 to x, following the edges in . Independent samples of P_F^⊤ can be obtained by running the above process multiple times, with completely independent trajectories. §.§ The cyclic structure of GFlowNets Instead of treating independent samples of the GFlowNet as being completely separate trajectories in a pointed DAG, we can view this process as being a single Markov chain that regenerates every time it reaches s_0. We consider a Markov chain over a discrete (but not necessarily finite) state space , with transition probability P_F. We assume that this Markov chain is irreducible, to ensure that the Markov chain reaches any state s∈ from the initial state, which is a necessary condition of GFlowNets. This may be enforced by requiring P_F to follow an appropriate directed graph structure over the state space. Furthermore, we assume that the Markov chain is positive recurrent, to guarantee the existence of an invariant measure F: ∀ s' ∈, F(s') = ∑_s∈F(s)P_F(s, s'). The invariant measure F plays the role of the state flow in GFlowNets <cit.>, so that the product F(s)P_F(s, s') would represent the edge flow in <ref> and <ref> can be viewed as an alternative way to write the flow-matching conditions <ref> in a GFlowNet, apart from the residual part in R(s'). Recurrence can be achieved by “wrapping around” the structure of the GFlowNet, and merging the terminal state s_f with the initial state s_0. An example of this construction from a pointed DAG is shown in <ref>. Unlike recurrence though, the graph structure alone combined with s_f≡ s_0 is not enough to conclude that the Markov chain is positive. However, positiveness here is only required when is infinite, since any recurrent Markov chain over a finite state space is necessarily positive (and therefore admits an invariant measure); see <ref> for a detailed counter-example. The invariant measure of such a Markov chain is essentially unique, up to a normalization constant (see <ref>). Terminating state probability. The fact that the Markov chain is recurrent guarantees that it will return back to its initial state s_0 an infinite amount of time. We can define the return time σ_s_0 as being the (random) time the chain first goes back to the initial state: σ_s_0 = inf{k ≥ 1| X_k = s_0}, where (X_k)_k≥ 0 is the (canonical) Markov chain following the transition probability P_F and such that X_0 = s_0. 
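To make both readings concrete, flow matching on a pointed DAG and the single recurrent chain obtained by identifying s_f with s_0, the sketch below builds a toy state space (subsets of {a, b, c} grown one element at a time, with the size-2 subsets as terminating states), computes exact edge flows from an arbitrary reward by a backward recursion with a uniform backward policy, and then runs one long Markov chain that regenerates at s_0, checking that the empirical distribution of X_{σ_{s_0}-1} is proportional to R. The toy rewards and the uniform backward policy are illustrative choices.

```python
import random
from itertools import combinations

items = ("a", "b", "c")
s0 = frozenset()
# toy reward on the terminating states (all size-2 subsets); the values are arbitrary
R = {frozenset(c): r for c, r in zip(combinations(items, 2), (1.0, 2.0, 3.0))}

def children(s):
    if len(s) == 2:                      # terminating states lead only to s_f (identified with s0)
        return []
    return [s | {e} for e in items if e not in s]

def parents(s):
    return [s - {e} for e in s]

# Backward recursion: F(s) = R(s) + sum_{s'' in children(s)} F(s'') * P_B(s | s''),
# with a uniform backward policy P_B(. | s'') over parents(s'').
states = sorted({frozenset(c) for n in range(3) for c in combinations(items, n)},
                key=len, reverse=True)
F = {}
for s in states:
    F[s] = R.get(s, 0.0) + sum(F[c] / len(parents(c)) for c in children(s))

def step(s):
    """Forward transition: P_F(s'|s) proportional to F(s')/|parents(s')|; terminating states return to s0."""
    ch = children(s)
    if not ch:
        return s0
    w = [F[c] / len(parents(c)) for c in ch]
    return random.choices(ch, weights=w)[0]

# run one long recurrent chain and record the state visited right before each return to s0
random.seed(0)
counts, s = {x: 0 for x in R}, s0
for _ in range(200_000):
    nxt = step(s)
    if nxt == s0:
        counts[s] += 1                   # this is X_{sigma_{s0} - 1}
    s = nxt

Z = sum(R.values())
for x in R:
    print(sorted(x), "empirical:", round(counts[x] / sum(counts.values()), 3),
          "target R(x)/Z:", round(R[x] / Z, 3))
```

By construction the state flows F satisfy the flow-matching condition at every state other than s_0, and the empirical frequencies approach 1/6, 2/6 and 3/6, i.e. R(x)/Z.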
Excursions of the Markov chain between two consecutive returns to s_0 are independent by the strong Markov property, since σ_s_0 is a stopping time; in other words, the independence of trajectories for two different samples in a GFlowNet is preserved even with a single recurrent Markov chain running indefinitely. We can define the terminating state probability from <ref> in terms of the return time of the Markov chain. Given an irreducible and positive recurrent Markov chain over , with transition probability P_F, the terminating state probability distribution is defined as the marginal distribution of the chain right before returning to its initial state s_0: P_F^⊤(x) ≜_s_0(X_σ_s_0-1 = x) = _s_0[1(X_σ_s_0-1=x)] We show in <ref> that P_F^⊤ is a properly defined probability distribution (i.e., non-negative and sums to 1) over ⊆, corresponding to the parents of the initial state s_0 in the directed graph structure described above: = {s∈| P_F(s, s_0) > 0}. Boundary conditions. The goal of a GFlowNet is to find a transition probability P_F such that the corresponding terminating state probability P_F^⊤ matches the reward function, up to normalization. Similar to <ref>, this requires satisfying boundary conditions of the form ∀ x∈, F(x)P_F(x, s_0) = R(x), in addition to F being an invariant measure for P_F (i.e., the “flow-matching conditions”). Using the interpretation of F(x)P_F(x, s_0) as being the flow through x → s_0 as mentioned above, these boundary conditions are equivalent to the reward matching conditions enforced in GFlowNets <cit.>. If P_F admits an invariant measure F that satisfies the boundary conditions above, then we get the fundamental theorem of GFlowNets that guarantees that the terminating state probability distribution induced by the GFlowNet is proportional to the reward. ftheoreminvariantterminatingstate Let P_F be an irreducible and positive recurrent Markov kernel over that admits an invariant measure F such that ∀ x∈, F(x)P_F(x, s_0) = R(x), where R is a finite measure on ⊆. Then the terminating state probability distribution defined in <ref> is proportional to the measure R: ∀ x ∈, P_F^⊤(x) = R(x)/∑_x'∈R(x') The proof of this theorem is based on the unicity of the invariant measure of P_F up to a normalization constant, and is available in <ref>. In practice, the objective is to find a transition probability P_F and a measure F, which may be parametrized by neural networks, that satisfy the conditions of <ref>: F must satisfy the boundary conditions of <ref>, and must also be invariant for P_F. The summation in <ref> is typically inexpensive if the state space is well structured (i.e., if the objects have a clear compositional nature), since we only have to sum over the parents of s'—all the states s∈ such that P_F(s, s') > 0. However, checking the invariance of F at the initial state s_0 proves to be as difficult as computing the partition function itself, since the parent set of s_0 is the whole sample space . Fortunately, the following proposition shows that we only have to check that F satisfies <ref> for any state s' ≠ s_0. propositioninvariantmeasurenoinitialstate Let P_F be an irreducible and positive recurrent Markov kernel over . A measure F is invariant for P_F if and only if for all s' ≠ s_0 we have F(s') = ∑_s∈F(s)P_F(s, s'). The proof is available in <ref>. 
This result mirrors the practical implementation of GFlowNets as a pointed DAG, where the flow-matching conditions are never checked at the terminal state s_f, since it corresponds to an abstract state that is not in . The invariance of F over the whole state space is only a convenient tool that allows us to use existing results from the Markov chain literature, which can be extended to general state spaces. § GENERATIVE FLOW NETWORKS OVER GENERAL STATE SPACES §.§ Existing extensions of GFlowNets beyond discrete state spaces While GFlowNets were initially introduced to construct distributions over discrete objects <cit.>, some existing works have proposed to generalize this framework beyond discrete state spaces. For example in CFlowNets <cit.>, the authors considered the case where is a continuous space, and introduced a flow-matching condition where the summations in <ref> were simply replaced by integrals. As highlighted by <cit.> though, implicit assumptions made on the transition function and the omission of critical aspects of GFlowNets, such as the accessibility of any state s∈ from the initial state s_0, severely limit the scope of applications of CFlowNets. Closely related to our work, <cit.> introduced a theoretical framework for studying GFlowNets in general state spaces, thus including continuous state spaces, and even hybrid spaces with both discrete and continuous components <cit.>. For a measurable space (, Σ), where Σ is a σ-algebra on , their approach relies on the notion of Markov kernels, which generalizes transition probabilities in discrete spaces. Let (, Σ) be a measurable state space. A function κ: ×Σ→ [0, +∞) is called a positive σ-finite transition kernel if * For any B ∈Σ, the mapping s↦κ(s, B) is measurable, where the space [0, +∞) is associated with the Borel σ-algebra ([0, +∞)); * For any s∈, the mapping B ↦κ(s, B) is a positive σ-finite measure on (, Σ). Furthermore, if the mappings κ(s, ·) are probability distributions (i.e., κ(s, ) = 1), the transition kernel is called a Markov kernel. Taking inspiration from the pointed DAG formulation <cit.>, <cit.> augmented the state space = ∪{} with a distinguished element ∉, and proposed a generalization of the pointed DAG structure to measurable spaces, called a “measurable pointed graph”, which is also “finitely absorbing” in the sense that any Markov chain starting at s_0 eventually reaches in bounded time. Unlike CFlowNets <cit.>, which were still operating on edge flows directly as in <ref>, they defined a flow on this measurable pointed graph as a tuple F = (μ, P̅_F) satisfying the following flow-matching conditions ∫_f(s')μ(ds') = ∬_×f(s')μ(ds)P̅_F(s, ds'), for any measurable function f: S̅→ such that f(s_0) = 0, where P̅_F is a Markov kernel[We use the notation P̅_F for the Markov kernel in <cit.>, to avoid confusion with the rest of the paper where we use another Markov kernel P_F, which in particular will be defined over and not .] on (, Σ̅) and μ is a measure over (Σ̅ being the augmented σ-algebra associated to , built from Σ). §.§ Harris recurrence and invariant measures We consider a Markov kernel P_F on a measurable space (, Σ). For a fixed measure ϕ on , we will assume that P_F is ϕ-irreducible, meaning that any set B ∈Σ such that ϕ(B) > 0 is accessible from any state in (see <ref> for details). Similar to the discrete case, we will also assume some form of recurrence for the Markov kernel in addition to irreducibility, in order to guarantee the existence of an invariant measure for P_F. 
For general state spaces, we will use a stronger notion of recurrence called Harris recurrence. A ϕ-irreducible Markov kernel P_F is said to be Harris recurrent if for all set B ∈Σ such that ϕ(B) > 0, any Markov chain starting in B eventually returns back to B in finite time with probability 1: ∀ s ∈ B, _s(σ_B < ∞) = 1, where σ_B = inf{k ≥ 1| X_k∈ B} is the return time of the Markov chain to B (similar to <ref>). The condition that the Markov chain returns to any accessible set with probability 1 is reminiscent of the “finitely absorbing” condition of <cit.> that requires all trajectories to be of bounded length (see also <ref>). The fact that P_F is Harris recurrent ensures that there exists an invariant measure F over <cit.>, i.e., for any bounded measurable function f: →, we have ∫_f(s')F(ds') = ∬_× f(s')F(ds)P_F(s, ds'). The equation above is similar to the flow-matching condition <ref> in <cit.>, with the exception that there is no restriction of the form f(s_0) = 0, and integration is carried out on the same (non-augmented) space . In fact, the flow-matching condition <ref> encodes a form of invariance of μ for P_F everywhere except at s_0, thus following closely the conditions in <ref> for discrete state spaces. §.§ Creation of an atom via the splitting technique The key property preserved by wrapping around the state space at s_0≡ s_f in <ref> was that the Markov chain was effectively “regenerating” every time it was returning to s_0, thanks to the (strong) Markov property. In this context, {s_0} is called an atom of the Markov chain (), which informally corresponds to the chain “forgetting” about the past every time it goes through any state in the atom. Let P_F be a Markov kernel over a measurable space (, Σ). A set A ∈Σ is called an atom if there exists a probability measure ν over such that ∀ s ∈ A, and ∀ B ∈Σ, P_F(s, B) = ν(B). The notion of atom is evident for singletons in discrete spaces, but in general Markov chains may not contain any accessible atom. Although it was not interpreted this way, we can view the augmentation of the state space in <cit.> with ∉ as a way to create an accessible artificial atom at , if we were to also (informally) wrap around the GFlowNet at s_0≡ as in <ref>. In this section, we will show how the split chain construction <cit.> can be used as an alternative way to create a pseudo-atom in a large class of Markov chains, without changing . Instead of introducing a new state to , we will use a set ∈Σ that satisfies a minorization condition. This set will eventually correspond to the sample space of the terminating state probability distribution induced by the GFlowNet (see <ref>). Let P_F be a ϕ-irreducible Markov kernel over a measurable space (, Σ). A set ∈Σ such that ϕ() > 0 is said to satisfy the minorization condition if there exists a non-negative measurable function ε such that ε^-1((0, +∞)) = (i.e., is the set on which ε is positive), and a probability measure ν, such that ∀ s∈ and ∀ B∈Σ P_F(s, B) ≥ε(s)ν(B). Taking B= in the inequality above, we can see that ε is necessarily bounded, with ε(s) ∈ [0, 1] for all s ∈. This minorization condition is not particularly interesting when s ∉, as it simply implies that P_F(s, B) ≥ 0. When s ∈ though, this allows us to interpret P_F as a mixture of two Markov kernels, whose mixture weights depend on ε(x): P_F(x, B) = (1 - ε(x))R_ν(x, B) + ε(x)ν(B), and where R_ν(x, B) is a “remainder” Markov kernel, defined precisely in <ref>; the minorization condition is necessary for this remainder kernel to exist. 
The important property to note is that the second kernel in this mixture, ν(B), is completely independent of x—only the mixture weight itself depends on ε(x). We can interpret <ref> as follows: we first select which kernel to apply, with probability ε(x), and upon selection of the second kernel we “reset” the Markov chain with ν(B). Using the terminology of GFlowNets, and connecting with the Markov kernel P̅_F of <cit.>, we can view ν(B) ≈P̅_F(s_0, B) as the probability of transitioning from the initial state s_0, and ε(x) ≈P̅_F(x, {}) the probability of terminating at x ∈. This suggests the construction of a split chain (Z_k)_k≥ 0, where each element can be broken down into Z_k = (X_k, Y_k), with X_k being a state in , and Y_k being a binary variable indicating which of the two kernels in the mixture <ref> to select at the next step. This is a Markov chain over the product space (', Σ') = (×{0, 1}, Σ⊗σ({0, 1})), with Markov kernel P_F^split = Q_ν⊗ b_ε, where Q_ν((x, y), B) = 1(y = 0)R_ν(x, B) + 1(y = 1)ν(B) B ∈Σ b_ε(x, C) = (1 - ε(x))δ_0(C) + ε(x)δ_1(C) C ∈σ({0, 1}), where δ_y is the Dirac measure at y. This construction is illustrated in <ref>. It is similar to splitting terminating states x ∈ in discrete GFlowNets into transitions x → x^⊤, where x^⊤ has no children except s_f <cit.>. It is easy to show that the set ×{1}∈Σ' is an atom of the split chain (Z_k)_k ≥ 0 <cit.>. We will call this atom _0≜×{1}, by analogy with the initial state s_0 in discrete GFlowNets in <ref>, which plays a similar role. §.§ GFlowNets as recurrent Markov chains Similar to <ref>, we will define a GFlowNet in terms of a (Harris) recurrent Markov chain, this time over a general state space . Instead of wrapping around the GFlowNet though to construct an atom {s_0}, we now use the atom _0 created in <ref> via the splitting technique. Just like in <ref> for discrete spaces, we can define the terminating state probability over as the marginal distribution of the split chain returning to the atom _0. Let P_F be a Harris recurrent kernel over (, Σ) such that satisfies the minorization condition in <ref>. The terminating state probability distribution is defined as the marginal distribution of the split chain returning to the atom _0 = ×{1}. For all B ∈Σ_ P_F^⊤(B) ≜__0[1_B(X_σ__0)] = __0[1_B×{1}(Z_σ__0)]. Since _0 is an atom, the kernel ν is guaranteed to be selected to transition from any state z ∈_0; this justifies our notation __0[·] to denote _z[·] for any z ∈_0. We show in <ref> that P_F^⊤ is again a properly defined probability distribution over (, Σ_), where Σ_ is a trace σ-algebra of subsets of . Note that in <ref>, we define a distribution P_F^⊤ over as an expectation over the split chain Z_k = (X_k, Y_k). It is interesting to see that while the terminating state probability in discrete spaces involved the state of the Markov chain at time σ_s_0 - 1 (<ref>), the general case above does not have this offset by one. The reason is that in the split chain, we can treat X_k→ Y_k as being a “sub-transition”, which is enough to make up for the extra step required in the discrete case. Since the Markov kernel P_F is Harris recurrent, it admits an invariant measure F satisfying <ref>. Just like for discrete spaces, we will require F to also satisfy some boundary conditions in order to obtain a terminating state probability P_F^⊤ that matches some reward measure, up to normalization. 
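Before turning to those boundary conditions, the split-chain construction itself can be simulated directly. The following sketch uses a toy discrete example with made-up ε, ν, and remainder kernel, so that the minorization holds by construction; it alternates the two sub-transitions X_k → Y_k → X_{k+1} and estimates the terminating state probability distribution by recording the value of X_k at every step where Y_k = 1, i.e., whenever the chain is in the atom.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6                                   # toy space with 6 states

# Made-up ingredients: termination probability eps(x), restart measure nu, and a
# remainder kernel R_nu; the full kernel is then (1 - eps) R_nu + eps nu.
eps = np.linspace(0.2, 0.7, n)
nu = np.full(n, 1.0 / n)
R_nu = rng.dirichlet(np.ones(n), size=n)

# Split chain Z_k = (X_k, Y_k): move X with nu if Y = 1, else with R_nu, then
# redraw Y with probability eps at the new state.
x = rng.choice(n, p=nu)
y = rng.random() < eps[x]
hits = np.zeros(n)
for _ in range(200_000):
    if y:                               # Z_k is in the atom: record the terminating state
        hits[x] += 1
    x = rng.choice(n, p=nu if y else R_nu[x])
    y = rng.random() < eps[x]

print("empirical terminating-state distribution:",
      np.round(hits / hits.sum(), 3))
```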
For a positive measure R over , the boundary conditions take the form ∀ B∈Σ_, R(B) = ∫_Bε(x)F(dx), where ε is the measurable function in the minorization condition. If F is an invariant measure of P_F that satisfies <ref>, then we obtain a generalization of <ref>. ftheoreminvariantterminatingstategeneral Let P_F be a Harris recurrent kernel over (, Σ) such that ∈Σ satisfies the minorization condition in <ref>. Moreover, assume that P_F admits an invariant measure F such that ∀ B ∈Σ_, R(B) = ∫_Bε(x)F(dx), where R is a finite measure on and ε is the measurable function in <ref>. Then the terminating state probability distribution is proportional to the measure R: ∀ B ∈Σ_, P_F^⊤(B) ∝ R(B). The proof of this theorem, available in <ref>, relies once again on the unicity of the invariant measure of a Harris recurrent Markov chain, up to normalization. §.§ Construction of Harris recurrent chains Since Harris recurrence is essential to guarantee that a GFlowNet does induce a terminating state probability distribution proportional to R, we must find ways to construct Markov kernels satisfying this property. One possibility, heavily inspired by the pointed DAG structure of discrete GFlowNets <cit.>, is to enforce Harris recurrence through the structure of the state space. For example, the following proposition shows that any Markov chain defined on a finitely absorbing measurable pointed graph <cit.> is necessarily Harris recurrent. propositionfinitelyabsorbingharris Let G be a finitely absorbing measurable pointed graph as defined by <cit.>, with κ its reference transition kernel. Suppose that we identify the source state s_0 of G with its sink state ≡ s_0, as in <ref>. Then any ϕ-irreducible Markov kernel P_F absolutely continuous wrt. κ (in the sense that ∀ s∈, P_F(s, ·) ≪κ(s, ·)) is Harris recurrent. The proof of this proposition is available in <ref>. It is important to note that in the case of measurable pointed graphs, the structure of the state space as defined by its reference transition kernel κ is not sufficient to conclude the Harris recurrence of the chain (hence, to guarantee the fundamental theorem of GFlowNets in <ref>), and that its “finitely absorbing” nature is essential in <cit.>. Examples of such chains include cases where the generation of an object is done in a fixed number of steps <cit.>. However, Harris recurrence is a more general notion that goes beyond the structure of the state space as in <cit.>. For example, if ≡, we can ensure that a ϕ-irreducible Markov chain satisfying the minorization condition of <ref> is Harris recurrent by enforcing ε(s) ≥ b, for some fixed b ∈ (0, 1] <cit.>. In other words, this means that the GFlowNet terminates with probability at least b > 0 at each step of the generation. § COMPARISON WITH MARKOV CHAIN MONTE CARLO METHODS GFlowNets and Markov chain Monte Carlo methods (MCMC) were both introduced to solve a similar problem: sampling from a probability distribution that is defined up to a normalization constant. Applications include energy based models, where a Boltzmann distribution is defined up to its (intractable) partition function, and Bayesian inference where the posterior distribution is defined proportionally to the joint distribution, with the intractable evidence being the normalization constant. One of the main advantages of viewing GFlowNets from the perspective of Markov chains is that it places them under the same theoretical framework as MCMC methods, highlighting the similarities and differences between both methods. 
These differences are summarized in <ref>. Recall that the goal of MCMC methods is to construct a Markov chain so that its invariant distribution matches the target distribution defined up to a normalization constant. Samples from the distribution are then obtained by running the Markov chain until convergence to the invariant distribution, which typically requires running the Markov chain for a long (burn-in) period. The convergence of iterates {P^n(s_0, ·)}_n ≥ 0 to the invariant distribution is guaranteed by the ergodicity of the Markov chain (i.e., positive recurrence and aperiodicity). By contrast, the Markov chain of a GFlowNet is only required to be positive recurrent (<ref>) to guarantee the existence of an invariant distribution F, but no guarantee on the convergence to F is necessary. Moreover, while the Markov kernel P_F of MCMC methods needs to be carefully built to ensure that the invariant distribution matches the target distribution, the invariant distribution of the Markov kernel in a GFlowNet may be arbitrary, as long as it matches the boundary condition <ref>; there may be multiple Markov kernels with different invariant distributions yielding the same terminating state probability. This could explain why the Markov kernels in MCMC methods are typically handcrafted (e.g., Metropolis-Hastings, based on a proposal kernel), as opposed to GFlowNets where P_F is learned (e.g., with a neural network). Probably the main difference between GFlowNets and MCMC methods is the relation between the invariant measure/distribution of the Markov chain and the target distribution: while MCMC requires the invariant distribution to match the target distribution, GFlowNets only require the marginal distribution of the Markov chain (the terminating state probability distribution; <ref>) to match the target distribution. The state space of the Markov chain in MCMC therefore corresponds to the sample space of the target distribution, and as a consequence these chains are known to mix poorly (leading to slow convergence to the invariant distribution) in the presence of multiple modes. The Markov chain of a GFlowNet, on the other hand, is constructed on an augmented state space , broader than the sample space ⊆, allowing the chain to use these intermediate steps to move between modes more easily. It is worth noting that there exist some MCMC methods, such as Hamiltonian Monte Carlo methods (HMC; <cit.>), where the target distribution is the marginal of the invariant distribution over an augmented space. Finally, since MCMC methods rely on the convergence to the invariant distribution, samples of the target distribution are only guaranteed asymptotically. Moreover, consecutive samples are correlated by the Markov kernel P_F, and additional post-processing techniques are required to reduce the effect of this cross-correlation between samples. On the other hand, samples from a GFlowNet are obtained in finite time due to the positive recurrence (or Harris recurrence in general state spaces) of the Markov chain, and are guaranteed to be independent from one another thanks to the strong Markov property. § ACKNOWLEDGMENTS We would like to thank Nikolay Malkin, Salem Lahlou, Pablo Lemos, and Dinghuai Zhang for the useful discussions and feedback about this paper.
Appendix § EXISTENCE OF AN INVARIANT MEASURE In this section, we recall some standard results about the existence of an invariant measure for Markov chains, first over a discrete state space, and then over general state spaces. For further fundamentals about Markov chains, we recommend the book <cit.>. We start by stating the existence of an invariant measure for irreducible and positive recurrent Markov chains over a discrete state space , used to show the fundamental theorem of GFlowNets in <ref>. Let P_F be an irreducible and positive recurrent Markov kernel over . Then there exists a non-trivial invariant measure λ for P_F (i.e., λ P_F = λ), unique up to a multiplicative positive constant, defined for all s∈ by: λ(s) = _s_0[∑_k=0^σ_s_0-11(X_k = s)] = ∑_k=0^∞_s_0[1(k < σ_s_0)1(X_k=s)] In particular, λ is the unique invariant measure of P_F such that λ(s_0) = 1. See <cit.>. When (, Σ) is a general measurable state space, we first need to introduce the notion of ϕ-irreducibility, characterizing the sets that are accessible by the Markov chain using a measure ϕ. Let ϕ be a measure over a measurable space (, Σ). A Markov kernel P_F is said to be ϕ-irreducible if any set B ∈Σ such that ϕ(B) > 0 is accessible, in the sense that for all s∈ _s(τ_B < ∞) > 0, where τ_B = inf{n ≥ 0| X_n∈ B} is the hitting time of B by the Markov chain. We can extend <ref> guaranteeing the existence of an invariant measure to Harris recurrent kernels satisfying the minorization condition. Although invariant measures may also exist for Markov kernels under weaker assumptions, here we use the assumptions that match the conditions necessary for GFlowNets over measurable state spaces to exist. Let P_F be a Harris recurrent kernel over such that satisfies the minorization condition in <ref>. Then there exists a non-trivial invariant measure λ for P_F (i.e., λ P_F = λ), unique up to a multiplicative positive constant, defined for all bounded measurable functions f: → by ∫_f(s)λ(ds) = __0[∑_k=1^σ__0f(X_k)] = ∑_k=1^∞__0[1(k ≤σ__0)f(X_k)]. Moreover, λ is the unique measure such that ∫_ε(s)λ(ds) = 1, where ε is the measurable function in the minorization condition. The existence and the form of the invariant measure are given in <cit.>. It is therefore sufficient to prove that ∫_ε(s)λ(ds) = 1. We first show that ∀ k ≥ 1: __0[1(k ≤σ__0)ε(X_k)] = __0[1(k ≤σ__0)_Z_k-1[1(Y_1 = 1)| X_1]] = __0[1(k ≤σ__0)__0[1(Y_k=1)| Z_0:k-1,X_k]] = __0[1(k ≤σ__0)1(Y_k=1)] = __0[1(σ__0 = k)] where we used the interpretation of ε(x) as the probability of terminating at x (see <ref>) in <ref>, the Markov property of the chain (Z_k)_k≥ 0 in <ref>, the law of total expectation in <ref>, and finally the definition of σ__0 as being the return time to _0, where we necessarily have Y = 1 in <ref>. Therefore, using the form of the invariant measure λ, and since ε is a measurable function, we have ∫_ε(s)λ(ds) = ∑_k=1^∞__0[1(k ≤σ__0)ε(X_k)] = ∑_k=1^∞__0[1(σ__0 = k)] = 1, where we were able to conclude since the chain is Harris recurrent (hence it returns to _0 with probability 1; <ref>).
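The excursion formula for the invariant measure in the discrete case is easy to check by simulation. The sketch below uses an arbitrary 5-state irreducible kernel (chosen only for illustration), estimates λ by averaging visit counts over excursions from s_0, and compares the result with the stationary eigenvector rescaled so that λ(s_0) = 1.

```python
import numpy as np

rng = np.random.default_rng(4)
n, s0 = 5, 0
P = rng.dirichlet(np.ones(n), size=n)      # arbitrary irreducible toy kernel

# Reference value: the stationary eigenvector rescaled so that lambda(s0) = 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
lam_eig = pi / pi[s0]

# Occupation-measure estimate: average number of visits to each state during an
# excursion from s0, counting times k = 0, ..., sigma_{s0} - 1.
counts, n_excursions = np.zeros(n), 20_000
for _ in range(n_excursions):
    x = s0
    while True:
        counts[x] += 1
        x = rng.choice(n, p=P[x])
        if x == s0:
            break
print("eigenvector :", np.round(lam_eig, 3))
print("excursions  :", np.round(counts / n_excursions, 3))
```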
If the Markov kernel P_F satisfies the minorization condition in <ref>, then we showed in <ref> that we can write P_F as a mixture of two Markov kernels, one of them being ν(B), which is independent of the current state, and the other being a remainder kernel R_ν(x, B), defined by R_ν(x, B) = 1(ε(x) < 1) [P_F(x, B) - ε(x)ν(B)] / [1 - ε(x)] + 1(ε(x) = 1)ν(B). One can easily show that this is a Markov kernel, and that it is well defined thanks to the minorization condition P_F(x, B) ≥ε(x)ν(B). § COUNTER-EXAMPLE FOR POSITIVE RECURRENCE Let the state space be the set of non-negative integers, and let P_F be the transition probability distribution defined, for all n ≥ 0, by P_F(n, n + 1) = exp[-1/(n+1)^2] and P_F(n, 0) = 1 - exp[-1/(n+1)^2], with all other transitions having probability 0. Then we have for all n ≥ 1 _0[1(σ_0 = n)] = P_F(n-1, 0)∏_{k=0}^{n-2}P_F(k, k+1) = (1 - exp[-1/n^2])exp[-∑_{k=0}^{n-2}1/(k+1)^2] = exp[-∑_{k=0}^{n-2}1/(k+1)^2] - exp[-∑_{k=0}^{n-1}1/(k+1)^2]. Therefore, the probability of returning to the state 0 in finite time is _0(σ_0 < ∞) = ∑_{n=1}^∞_0[1(σ_0 = n)] = 1 - exp[-∑_{k=0}^{∞}1/(k+1)^2] = 1 - exp(-π^2/6) < 1. § PROOFS §.§ Discrete spaces The terminating state probability distribution P_F^⊤ is related to the invariant measure λ defined in <ref> by P_F^⊤(x) = λ(x)P_F(x, s_0) for all x. For any k ≥ 0, we have _s_0[1(k < σ_s_0)1(X_k = x)] P_F(x, s_0) = _s_0[1(k < σ_s_0)1(X_k=x)P_F(X_k, s_0)] = _s_0[1(k<σ_s_0)1(X_k=x)_X_k[1(X_1=s_0)]] = _s_0[1(k<σ_s_0)1(X_k=x)_s_0[1(X_k+1=s_0)| X_0:k]] = _s_0[1(k<σ_s_0)1(X_k=x)1(X_k+1=s_0)] = _s_0[1(σ_s_0=k+1)1(X_k=x)]. In detail, we used an equivalent definition of P_F(X_k, s_0) in <ref>, as the expectation of the chain moving to s_0 after one step (1(X_1=s_0)), starting at X_k, the Markov property in <ref>, the law of total expectation in <ref>, and finally the definition of the return time σ_s_0 = inf{n≥ 1| X_n=s_0} in <ref>.
* One can show (e.g., by induction) that if <ref> is satisfied for all s'≠ s_0 then F(s) = ∑_k=0^∞F(s_0)_s_0[1(k < σ_s_0)1(X_k = s)] Since P_F is irreducible and positive recurrent, by <ref> it admits an invariant measure. The only non-trivial statement one must show is that if <ref> is satisfied for any s'≠ s_0, then it is also satisfied for s' = s_0. Since P_F is positive recurrent, any Markov chain starting at s_0 must eventually return to s_0 in finite time. In other words ∑_k=1^∞_s_0[1(σ_s_0=k)] = 1. Furthermore, it is clear that at any point in time k ≥ 0, X_k must be in one of the states of , meaning that ∑_s∈1(X_k=s) = 1. Therefore ∑_s∈F(s)P_F(s, s_0) = ∑_s∈∑_k=0^∞F(s_0)_s_0[1(k < σ_s_0)1(X_k=s)]P_F(s, s_0) = F(s_0)∑_s∈∑_k=1^∞_s_0[1(σ_s_0 = k)1(X_k-1=s)] = F(s_0)∑_k=1^∞_s_0[1(σ_s_0 = k)∑_s∈1(X_k-1=s)] = F(s_0)∑_k=1^∞_s_0[1(σ_s_0=k)] = F(s_0), where we used the proof of <ref> in <ref>. §.§ General spaces The terminating state probability distribution P_F^⊤ is related to the invariant measure λ defined in <ref> by, ∀ B ∈Σ_, P_F^⊤(B) = ∫_Bε(x)λ(dx), where ε is the positive measurable function defined in <ref>. The proof follows the same pattern as the proof of <ref>. We first show that for any k ≥ 1, we have __0[1(k ≤σ__0)1_B(X_k) ε(X_k)] = __0[1(k ≤σ__0)1_B(X_k)_Z_k-1[1(Y_1=1)| X_1]] = __0[1(k ≤σ__0)1_B(X_k)__0[1(Y_k=1)| Z_0:k-1,X_k]] = __0[1(k ≤σ__0)1_B(X_k)1(Y_k=1)] = __0[1(σ__0=k)1_B(X_k)] The derivation above follows similar steps as in the proofs of <ref> & <ref>. Using the definition of the invariant measure λ, and given that for any B∈Σ_ the measurable function 1_Bε is non-negative, we have ∫_Bε(x)λ(dx) = ∑_k=1^∞__0[1(k ≤σ__0)1_B(X_k)ε(X_k)] = ∑_k=1^∞__0[1(σ__0 = k)1_B(X_k)] = __0[1_B(X_σ__0)] = P_F^⊤(B). The terminating state probability distribution P_F^⊤ defined in <ref> is a properly defined probability distribution over . Using <ref>, we know that P_F^⊤ is related to the unique invariant measure λ such that ∫_ε(s)λ(ds) = 1. Therefore: P_F^⊤() = ∫_ε(x)λ(dx) = ∫_ε(s)λ(ds) = 1, where we used the fact that ε is positive only on (see <ref>). * The proof is similar to the one of <ref>. By unicity of the invariant measure of P_F up to a multiplicative constant (<ref>), there exists a constant α > 0 such that F = αλ. Using <ref>, and the boundary conditions in <ref>, we get for all B ∈Σ_: P_F^⊤(B) = ∫_Bε(x)λ(dx) = 1/α∫_Bε(x)F(dx) = R(B)/α∝ R(B). * Let (, ) be the topological space over which the measurable pointed graph G is defined. Recall that a measurable pointed graph is finitely absorbing in ∃ N > 0 such that supp(κ^N(s_0, ·)) = {}. We assume that N is the minimal integer satisfying this property (i.e., the maximal trajectory length). Let P_F be a Markov kernel absolutely continuous wrt. κ. Then we necessarily have that supp(P_F^N(s_0, ·)) = {} as well. Let B ∈ be a set such that ϕ(B) > 0. It is clear that there exists 0 ≤ m ≤ N such that ∀ s ∈ B, supp(P_F^m(s, ·)) = {} (otherwise, this would contradict the maximal trajectory length). By definition of a measurable pointed graph, ∃ n ≥ 0, κ^n(s_0, B) > 0 (and necessarily P_F^n(s_0, B) > 0 as well). If we identify the source and sink states s_0≡, then by Chapman-Kolmogorov equation we have ∀ s ∈ B: P_F^n+m(s, B) = ∫_P_F^m(s, ds')P_F^n(s', B) = P_F^m(s, {})P_F^n(s_0, B) > 0. Moreover, by minimality of N, we also have that ∀ n > N, P_F^n(s, B) = 0. Therefore, ∀ s ∈ B, _s(σ_B≤ N < ∞) = 1.
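To close the discrete-space discussion, the lemma P_F^⊤(x) = λ(x)P_F(x, s_0) can also be verified numerically on a minimal example. The sketch below builds a five-state chain in which the two terminating states are wrapped back onto s_0 (the transition probabilities are invented for illustration only), computes the invariant measure normalized so that λ(s_0) = 1, and recovers the terminating state probabilities obtained by direct enumeration of trajectories.

```python
import numpy as np

# Five-state chain: s0 = 0 is both source and sink, 1 and 2 are intermediate,
# 3 and 4 are terminating states that wrap back onto s0.  All numbers invented.
n, s0 = 5, 0
P = np.zeros((n, n))
P[0, 1], P[0, 2] = 0.6, 0.4
P[1, 3], P[1, 4] = 0.7, 0.3
P[2, 4] = 1.0
P[3, 0] = P[4, 0] = 1.0

# Invariant measure, normalized so that lambda(s0) = 1
w, v = np.linalg.eig(P.T)
lam = np.real(v[:, np.argmax(np.real(w))])
lam = lam / lam[s0]

# Lemma: P_F^T(x) = lambda(x) P_F(x, s0); only x = 3, 4 can transition to s0
print("P_F^T:", {x: round(lam[x] * P[x, s0], 3) for x in (3, 4)})
# Direct enumeration gives P_F^T(3) = 0.6 * 0.7 = 0.42 and P_F^T(4) = 0.6 * 0.3 + 0.4 = 0.58
```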
arXiv:2307.01154v1 [astro-ph.HE], 3 July 2023
Characteristic signatures of accreting binary black holes produced by eccentric minidisks
John Ryan Westernacher-Schneider, Jonathan Zrake, Andrew MacFadyen, Zoltán Haiman
Binary signatures from eccentric minidisks

John Ryan Westernacher-Schneider (ORCID 0000-0002-3047-7200; corresponding author, john.westernacher.schneider@gmail.com), Leiden Observatory, Leiden University, P.O. Box 9513, 2300 RA Leiden, The Netherlands, and Department of Physics and Astronomy, Clemson University, Clemson, SC 29634, USA
Jonathan Zrake (ORCID 0000-0002-1895-6516), Department of Physics and Astronomy, Clemson University, Clemson, SC 29634, USA
Andrew MacFadyen (ORCID 0000-0002-0106-9013), Center for Cosmology and Particle Physics, Physics Department, New York University, New York, NY 10003, USA
Zoltán Haiman (ORCID 0000-0003-3633-5403), Department of Astronomy, Columbia University, New York, NY 10027, USA

We show that gas disks around the components of an orbiting binary system (so-called minidisks) may be susceptible to a resonant instability which causes the minidisks to become significantly eccentric. Eccentricity is injected by, and also induces, regular impacts between the minidisks at roughly the orbital period of the binary. Eccentric minidisks are seen in vertically integrated, two-dimensional simulations of a circular, equal-mass binary accreting from a circumbinary gas disk with a Γ-law equation of state. Minidisk eccentricity is suppressed by the use of an isothermal equation of state. However, the instability still operates, and can be revealed in a minimal disk-binary simulation by removing the circumbinary disk, and feeding the minidisks from the component positions. Minidisk eccentricity is also suppressed when the gravitational softening length is large (≳ 4% of the binary semi-major axis), suggesting that its absence could be an artifact of widely adopted numerical approximations; a follow-up study in three dimensions with well-resolved, geometrically thin minidisks (aspect ratios ≲ 0.02) may be needed to assess whether eccentric minidisks can occur in real astrophysical environments. If they can, the electromagnetic signature may be important for discriminating between binary and single black hole scenarios for quasi-periodic oscillations in active galactic nuclei, which may in turn aid in targeted searches with pulsar timing arrays for individual supermassive black hole binary sources of low-frequency gravitational waves.

§ INTRODUCTION There is strong evidence that some galaxies host supermassive black hole binaries at their centers <cit.>. These objects are powerful sources of low-frequency gravitational wave (GW) radiation, and their population has long been theorized to generate a stochastic GW background <cit.>. The recent detection of such a background in the 15-year NANOGrav data <cit.>, and indications that the SMBHB population is indeed its likely source <cit.>, creates a new imperative to identify individual SMBHB systems. Galaxies that host SMBHBs are likely to be active, due to the presence of copious circumnuclear gas associated with the galaxy merger that gave rise to the black hole pair <cit.>. Such binary AGNs may exhibit quasi-periodic oscillations (QPOs) connected in some way to the binary's orbital motion, and more than 200 binary AGN candidates have been proposed based on QPO detections in electromagnetic (EM) surveys <cit.>. Those SMBHB candidates, and others yet to be discovered, could become joint GW-EM sources as the pulsar timing arrays continue to accumulate sensitivity.
The identification of an individual low-frequency GW source with a particular time-varying AGN will be challenging, firstly, because of theoretical uncertainties in the relationship between the binary's GW and EM temporal signatures, and secondly, because AGN periodicity can also signify processes that have nothing to do with a binary, such as jet precession or accretion disk instabilities <cit.>. It means that many of the binary AGN candidates so far identified could be single accreting SMBHBs, exhibiting real periodicities. To separate the single black hole AGNs from the binaries will thus require a theoretical understanding of the unique temporal characteristics of the binary accretion process. In a black hole binary accretion system, gas comes to the binary through a circumbinary disk (CBD), which feeds gas into minidisks around each of the black holes. Most of the radiated power escapes as quasi-thermal X-rays from the surfaces of the minidisks <cit.>, but radiation from the CBD at UV, optical, infrared <cit.>, and even radio frequencies <cit.> could also be significantly affected by the dynamical interaction between the CBD and the binary. Variability in γ-rays is also expected in binary blazars, due to modulation of the rates of mass delivery to the black holes <cit.>. Computational studies have revealed a variety of mechanisms that could lead to detectable, periodically varying EM output, associated specifically with the dynamics of binary accretion <cit.>. These range from periodic variations at or near the orbital period, to those over tens of binary orbits, and they originate from distinct spatial regions of the accretion system. In a previous study <cit.>, we reported multi-wavelength light curves of thermal emission from accreting black hole binaries, computed from vertically integrated 2-dimensional viscous hydrodynamics simulations with a detailed treatment of the radiative cooling. In that paper, we identified a strong QPO in the disk thermal luminosity at roughly the binary orbital period. However, we did not give a detailed discussion of the physical origin of the QPO, nor whether the same one had been identified previously by other authors. In this paper we report on the physical mechanism of the QPO we found in W22, and present very high resolution simulations revealing that it arises from an instability in which the minidisks become eccentric and exchange mass at a regular interval – see fig:fig1 for sample snapshots. The instability is sensitive to the thermodynamic treatment, being generally suppressed when locally isothermal or β-cooling approximations are used. Aside from the need to understand the temporal structure of EM output from accreting binaries to identify SMBHBs among candidates, there is also a new imperative to understand CBD morphology as radio imaging (at especially high resolution by ALMA) has revealed dramatic examples of well-resolved substructures in proto-planetary disks including those around young stellar binaries <cit.>. Our paper is organized as follows. In sec:background we review theoretical results on periodic light curves from accreting binaries. In sec:models we describe the simulation setup and diagnostics used to quantify the minidisk morphology. We describe the numerical approach in sec:numerics. 
Our simulation results are reported in sec:results, including subsections on the irrelevance of infall from the CBD for driving minidisk eccentricity (sec:CBDinfall), the central role of the minidisk-minidisk interaction (sec:interaction), the periodic mass trade between minidisks and precession effects (sec:masstrade), dependence on various parameters and prescriptions (sec:depsoft-sec:dephole), numerical convergence of the minidisk eccentricity growth rate (sec:convergence), and a summary of our numerical findings (sec:results-summary). We provide discussions in sec:discuss on the mechanism of the eccentric instability (sec:mechanism), a comparison to other eccentricity-driving mechanisms (sec:other), the role of gravitational softening (sec:softening & sec:precess), guidance for future 3-dimensional studies (sec:pars), and the observable consequences of the minidisk mass trade (sec:obs). We conclude in sec:conclude, and our suite of simulations is tabulated in the appendix. § BACKGROUND We briefly review some of the mechanisms known to cause periodic oscillations in the light curves of accreting binary systems. Several of these are related to the formation of an m=1 over-density in the CBD, referred to here as the “lump” <cit.>. The lump forms near the inner edge of the CBD at r ≳ 3a, and orbits the binary at roughly the Kepler frequency (for reference that is ∼ 5 binary orbits if the lump orbits at a radius of 3a; in general we denote the lump orbital frequency as f_ lump). Its presence leads to a modulation in the rate of mass delivered from the CBD to the minidisks on the time scale 1/f_ lump, because gas orbits in the CBD are generally eccentric, and feeding to the binary is enhanced when the lump orbits through its closest approach to the binary. Variations in the rate of feeding to the minidisks are not necessarily transmitted to the black holes. Indeed, the time scale for mass to accrete through the minidisks is typically in the range of 10's of binary orbits. However, simulations by a number of authors <cit.> indicate that in spite of possible buffering by the minidisks, the 5-10 orbit lump period can still be detectable at the ∼ 10% level in the time series of the accretion power. Also, gas injection to a minidisk involves the impacts of gas streams from the CBD, and the resulting disturbances may propagate to the black hole in a fraction of an orbit, much faster than the viscous rate. Furthermore, in radiatively efficient environments, the EM light curves could reflect the stream-minidisk impacts even if the black holes themselves accrete steadily. QPOs associated with lump-induced variation of gas delivery from the CBD may thus be a detectable feature of binary AGN light curves. The presence of the lump can lead to a second kind of periodic oscillation, at the frequency 2(f_ bin - f_ lump) <cit.>. This “binary-lump beat frequency” is the frequency at which one black hole or the other overtakes the orbiting lump. Simulations indicate that mass delivery to the binary is modulated at this frequency when the CBD extends inwards far enough for tidal stripping of the CBD to operate at any orbital phase, and thus be enhanced any time a binary component passes the lump. 
When the low-density cavity around the binary is very large, gas is only tidally stripped from the near side of the eccentric CBD, and the enhanced feeding rate at the frequency 2(f_ bin - f_ lump) is suppressed <cit.> (one could say the duty cycle of this mode is reduced to a fraction of the lump orbit around its closest approach to the binary). For reference, when the lump period is 5 binary orbits, the binary-lump beating operates at a frequency of 1.6 times the binary orbital frequency. In W22, we computed light-curves of thermal disk emission from an equal-mass black hole binary, and reported a QPO operating at between 1 and 2 times the binary orbital frequency. The similarity in frequency made it easy to confuse this feature with the binary-lump beating effect, however there was an important difference to suggest it had a distinct physical origin: the QPO from W22 showed sensitivity to the length scale parameter r_ soft used in the code to soften the gravitational potential. In particular, the W22 frequency seemed to approach the orbital frequency as r_ soft was decreased. The binary-lump beating involves a coupling between the outer edges of the minidisks and the inner edge of the CBD, so it should not be sensitive to how gravity is numerically modeled very near the black holes. We have carried out a detailed analysis since the publication of W22, and confirmed that indeed the QPO we saw there had nothing to do with the lump, nor the CBD in any direct way. Instead, we found the effect arises due to the minidisks developing a significant eccentricity, and experiencing regular collisions with one another as a result. The minidisks have opposing eccentricity vectors, and the disks collide to produce an EM flare when the long ends of the disks strike one another. The minidisk eccentricity vectors undergo retrograde precession, and the collisions occur at the beat frequency between the minidisk precession and the binary orbit. As we show below, the rate of the minidisk precession increases with r_ soft, and this accounts for the observation from W22 that the eccentric minidisk beat frequency approaches the orbital frequency as softening is decreased. In the subsequent sections we present a detailed characterization of the eccentric minidisk beating effect, and an investigation of the conditions that lead to the growth of minidisk eccentricity. § NUMERICAL SETUP Following W22, we study a binary with total mass M=M_1 + M_2 = 8×10^6 M_⊙ and semi-major axis a≃ 10^-3pc, yielding an orbital period T_ bin≃ 1yr, but in this paper we consider only an equal mass binary (mass ratio q=1) on a circular orbit (e=0). We use an adiabatic equation of state 𝒫=Σϵ (Γ -1) with Γ=5/3, neglecting radiation pressure. In order to obtain numerically tractable Mach numbers ≃𝒪(10) while neglecting radiation pressure, the accretion rate must often be extremely super-Eddington <cit.>. In this work, gas Mach numbers are chosen in the initial conditions, and are subsequently determined self-consistently, and are typically in the range ∼ 7 - 25. We compare constant-α and constant-ν viscosity models, where ν = α c_s h is the kinematic viscosity, c_s = √(Γ𝒫/Σ) is the sound speed, h=√(𝒫/Σ) / Ω̃ is the disk scale height, and Ω̃≡√(GM_1/r_1^3 + GM_2/r_2^3) is a frequency scale accounting for both masses M_1, M_2 and distances to them r_1, r_2. 
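For reference, the characteristic frequencies quoted above can be collected in a few lines. The 9-orbit precession period used below is a value quoted later in the text for one of the runs, and it is included only to illustrate the sign convention, with f_prec < 0 denoting retrograde precession.

```python
# Characteristic frequencies, in units of the binary orbital frequency f_bin = 1.
f_bin = 1.0

# Lump orbiting near r ~ 3a: Keplerian period ~ 3**1.5 ~ 5.2 binary orbits;
# the text quotes the round value of 5 orbits.
f_lump = f_bin / 5.0
print("binary-lump beat, 2 (f_bin - f_lump) =", 2 * (f_bin - f_lump))      # 1.6

# Eccentric-minidisk beat for a retrograde precession period of ~9 binary orbits
# (a value quoted later in the text); retrograde means f_prec < 0.
f_prec = -f_bin / 9.0
print("minidisk collision frequency, f_bin - f_prec =", round(f_bin - f_prec, 3))
```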
We also compare two cooling models, a physical optically thick radiative cooling Q̇ = - (8/3)σ T^4 / (κΣ) where T is the midplane temperature and κ = 0.4cm^2/g is the electron scattering opacity, as well as a popular phenomenological β-cooling prescription Q̇ = - Ω̃Σ (ϵ - ϵ_0) / β where β is a dimensionless parameter, ϵ is the specific internal energy, and ϵ_0=-Φ / [ℳ^2 Γ (Γ -1)] is a target specific internal energy profile, where ℳ=10 is the target orbital Mach number and Φ is the gravitational potential. Ω_K ≡√(GM/r_s^3) is a softened Keplerian frequency (r_s ≡√(r^2+r_ soft^2) is the softened radial coordinate with r_ soft the softening length scale). Although our β-cooling models both heat and cool (i.e. Q̇=0 ϵ = ϵ_0), we explored a variant which only cools (Q̇=0 when ϵ<ϵ_0) and our conclusions were unaffected. In addition to cooling models, we also compare our results with locally isothermal runs, where the prescribed sound speed profile corresponds to a uniform orbital Mach number of 10. This is consistent with the target ϵ_0 we use in our β-cooling runs. Disk initial conditions correspond to near-equilibrium configurations about a single gravitating object. The gas configuration is allowed to settle around the orbiting binary over several viscous times before analysis begins. This corresponds to ∼1000-3000 binary orbits, depending on the model – see Table <ref>. The constant-α models exclusively use self-consistent radiative cooling, and their initial conditions are Σ∝ r^-3/5 and 𝒫∝ r^-3/2. The constant-ν models initially have Σ =constant, corresponding to a spatially uniform accretion rate. The subset of constant-ν models with radiative cooling initially have 𝒫∝ r^-3/4, corresponding to local balance of viscous heating and radiative cooling, and yielding a Mach number profile ℳ∝ r^-1/8. The subset of constant-ν models with β-cooling instead initially have uniform ℳ=10, which is a popular Mach profile used in both isothermal and β-cooled models, and the β-cooling term drives towards this initial Mach number. As in W22, black holes are represented by torque-free sinks <cit.> with radius r_ sink and sink rate s. Our gravity model derives from a Plummer potential Φ_ P∝ (r^2 + r_ soft^2)^-1/2. As recognized in the literature <cit.>, although some type of softening is numerically necessary to regulate the divergence at a Newtonian point mass, in two-dimensional calculations it physically represents the vertical integration of the force of gravity when the disk has finite thickness (we discuss this in sec:precess). In our run with r_ soft=0, we use a purely Newtonian force F⃗_ N outside the sinks, and transition to a Plummer force F⃗_ P (softened using r_ sink) inside the sinks using a functional form F⃗ = θF⃗_ P + (1-θ)F⃗_ N, where θ = [1-(r/r_ sink)^2]^2 for distances r<r_ sink from the black hole, θ=0 otherwise. This regulates the singular behavior at the black hole location while achieving zero softening in the regions of interest (i.e. regions outside the sink). Lastly, we perform a set of “decretion” runs, where the binary is initialized in near-vacuum, and the sinks are replaced by sources, non-zero only within distances r_s from each point mass, given by U̇_ source = -10^3 Ω_K ( U⃗ - U⃗_0 ), where U⃗ are the conserved variables and U⃗_0 are their target values, given by a rigid circular rotation at speed √((1/2)GM/r_ soft), a uniform surface density 0.1 M/a^2, and a chosen uniform Mach number. This source term strongly drives U⃗ to U⃗_0 inside the source. 
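For readers implementing similar models, the viscosity and β-cooling prescriptions above reduce to a few short expressions. The sketch below collects them in code units (G = M_1 = M_2 = 1, semi-major axis a = 1); the input values at the bottom are arbitrary and serve only to exercise the functions, and this is not the implementation used in the simulation code.

```python
import numpy as np

G = M1 = M2 = 1.0                 # code units with the semi-major axis a = 1
Gamma, alpha, beta, Mach0 = 5.0 / 3.0, 0.1, 10.0, 10.0

def local_frequency(r1, r2):
    """Omega-tilde: two-body frequency scale entering h and the beta-cooling rate."""
    return np.sqrt(G * M1 / r1**3 + G * M2 / r2**3)

def kinematic_viscosity(sigma, pressure, r1, r2):
    """nu = alpha c_s h, with c_s = sqrt(Gamma P / Sigma) and h = sqrt(P / Sigma) / Omega."""
    cs = np.sqrt(Gamma * pressure / sigma)
    h = np.sqrt(pressure / sigma) / local_frequency(r1, r2)
    return alpha * cs * h

def beta_cooling_rate(sigma, eps, phi, r1, r2):
    """Qdot = -Omega Sigma (eps - eps0) / beta, with eps0 set by the target Mach number."""
    eps0 = -phi / (Mach0**2 * Gamma * (Gamma - 1.0))
    return -local_frequency(r1, r2) * sigma * (eps - eps0) / beta

# Arbitrary example values for a gas parcel at r1 = 0.3, r2 = 0.7 from the two components
sigma, pressure, eps = 1.0, 0.01, 0.015
phi = -(G * M1 / 0.3 + G * M2 / 0.7)
print("nu   =", kinematic_viscosity(sigma, pressure, 0.3, 0.7))
print("Qdot =", beta_cooling_rate(sigma, eps, phi, 0.3, 0.7))
```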
A circumbinary decretion disk is prevented from forming by allowing ejected material to flow off the grid. Our suite of simulations is summarized in Table <ref>. §.§ Minidisk diagnostics To quantify minidisks individually, we integrate hydrodynamic quantities over the spatial region within a distance of 0.5 a from their host black hole. These diagnostics are the minidisk mass, and the center-of-mass (COM) vector measured relative to the host black hole's location. Visual inspection confirms that the COM vector points in the direction of the farthest edge of the minidisk. Finite minidisk eccentricity is found to be indicated by persistent non-zero COM amplitude over tens of binary orbits. A comparison between integrating over distances of 0.5 a and 0.25 a from black holes is provided in fig:COMvsrsoft, and indicates that trends are robust to the size of the integration region. Crucially, the coherence of minidisk eccentricity shows up as an orderly precession, and this is indicated by a steady linear trend in the COM phase over tens of binary orbits. A tell-tale sign of a lack of persistent eccentricity is a jagged COM phase over time. This usually indicates that the COM vector is reflecting smaller scale or more transient features in the minidisks, rather than the coherent lopsidedness characteristic of the eccentric minidisk instability. A prototypical case of steady, coherent minidisk eccentricity is shown in fig:MDCOM (top and bottom panels). We use two diagnostics to characterize the relationship between minidisks: relative orientation and mass flux. Their relative orientation is quantified by their relative COM phases. A prototypical case of a steady relative orientation of π radians is shown in fig:MDCOM (middle panel). The mass-trading between minidisks Ṁ_trade(t) is quantified by the root-mean-square (RMS) flux of mass across a line of length a through the origin, orthogonally bisecting the black hole separation. §.§ Solution scheme We use the code, which is the same code we used in W22, and we refer the reader to that work for most numerical details ( is a GPU implementation of which was used in , and ). A summary of parameters for our suite of runs is provided in Table <ref>. The computational domain usually extends to 12 a; exceptions are the high-resolution runs labeled H1-H4 (which extend to 15 a), the very high-resolution zoom-in run VH3 (which is evolved for a short time and whose grid extends to 7.5 a), and the decretion runs D0-D1 (which extend to 1.75 a because there is no circumbinary disk). We use Courant-Friedrichs-Lewy numbers in the range of 0.01-0.1. Radiative cooling requires delicate numerical treatment <cit.>. Motivated by studies which have a coordinate singularity or inner boundary between the minidisks <cit.>, we assess the effect of such an obstruction by placing a third sink of radius 0.05 a at the origin of the binary system, in addition to the two orbiting sinks representing the black holes. Note that our Cartesian grid has no singular behavior at the origin. § RESULTS In fig:fig1 we show snapshots of the surface density Σ (raised to the power of 1/2 to improve visual contrast) from two of our high-resolution runs, focusing on the minidisks. Both runs use self-consistent thermal cooling with a nominal Mach number of ℳ∼ 11 and α-viscosity with α = 0.1. The runs shown in the left and right panels of fig:fig1 are models VH3 and H2 respectively (see Table <ref>). VH3 is a zoomed-in version of model H3, with double resolution (Δ x = 0.00125 a).
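As an aside on the analysis, the center-of-mass diagnostic described in the previous subsection amounts to a masked first moment of the surface density. A minimal sketch is given below; the Gaussian blob used to exercise it is fabricated, and a real measurement would act on simulation snapshots and include the cell area when computing the mass.

```python
import numpy as np

def minidisk_com(x, y, sigma, x_bh, y_bh, r_max=0.5):
    """Mass-weighted center-of-mass offset of the gas within r_max of a binary
    component, relative to that component, from a 2D surface-density snapshot."""
    dx, dy = x - x_bh, y - y_bh
    mask = dx**2 + dy**2 < r_max**2
    m = sigma[mask].sum()                       # proportional to the minidisk mass
    com = np.array([(sigma[mask] * dx[mask]).sum(),
                    (sigma[mask] * dy[mask]).sum()]) / m
    return m, com, np.arctan2(com[1], com[0])   # mass, COM vector, COM phase

# Fabricated snapshot: an off-center Gaussian blob standing in for an eccentric minidisk
ngrid = 512
x, y = np.meshgrid(np.linspace(-1, 1, ngrid), np.linspace(-1, 1, ngrid), indexing="ij")
x_bh, y_bh = 0.5, 0.0                           # component position in units of a
sigma = np.exp(-((x - x_bh - 0.08)**2 + (y - y_bh)**2) / 0.02)
mass, com, phase = minidisk_com(x, y, sigma, x_bh, y_bh)
print("COM amplitude:", round(float(np.hypot(*com)), 3), " COM phase:", round(float(phase), 3))
```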
The H2 and H3 models have the same parameters, except that the one on the right (model H2) has had the CBD removed, to demonstrate that the minidisks develop eccentricity even if they do not interact with gas infall from the environment. In both cases, the minidisk eccentricity is persistent, i.e. the images in fig:fig1 are a good representation of how the disks would look at a randomly selected time in a well-evolved simulation. fig:fig1 also reveals that the minidisks settle into a configuration with their apsides oriented 180^∘ away from each other. In the sections below we examine these effects in detail. §.§ Infall from the CBD is not required to drive minidisk eccentricity In order to determine whether the minidisk eccentricity is driven by gas infall from the CBD, we restarted a run from a well-developed state, with the CBD subtracted and replaced with a near-vacuum (model H2). After the restart, the minidisks continue to evolve, but can no longer acquire gas from the environment. The right panel of fig:fig1 shows that the eventual relaxed state of the minidisks is again eccentric, and still has the disk apsides anti-parallel to one another. fig:MDCOMnoCBD shows the time series of the minidisk diagnostics following the depletion. Without feeding from the CBD, the mass in the minidisks diminishes over time as they accrete into the sinks (4th panel). The minidisk eccentricity (indicated by COM amplitude, 3rd panel), opposing orientation (2nd panel), and precession (1st panel) undergo a short disruption at roughly 20 binary orbits after the CBD is depleted, but minidisk eccentricity then restores over the subsequent 40 binary orbits. The maintenance of eccentric minidisks in the absence of gas infall from the CBD indicates that eccentricity is being injected by interactions between the minidisks.[Video content from this circumbinary disk depletion experiment is available at: <https://youtu.be/9pltm6oOHhE>] We have also confirmed that the eccentric minidisks can be established if mass is supplied from the sinks in a “decretion” run (model D1). For this model, a circular equal-mass binary was initialized in near-vacuum, with mass being steadily added to the system from the sinks (rather than subtracted, see sec:models). Animations of the decretion run[Video content from this decretion experiment is available at: <https://youtu.be/om15kZRhC18>] illustrate how the eccentric minidisks are established. First, gas flows out from the particle positions and forms minidisks around the binary components. Then as the minidisks grow in size and overflow their Roche lobes, they “collide” and exchange mass across the inner Lagrange point. After the mass-trading event, the minidisks recede inside their Roche lobes, but develop a small amount of eccentricity. Subsequently, the minidisks collide preferentially at their “long end,” leading to further eccentricity injection and then stronger collisions. fig:MDCOMdecretion shows the time series of minidisk diagnostics in the decretion experiment, and indicates that the eccentric minidisks, retrograde precession, and anti-parallel orientations become fully established. fig:MDCOMnoMDs shows the results of one final experiment, in which the minidisks are subtracted but the CBD gas is retained (model H3). The results are similar to the decretion run: the minidisks refill, this time from gas infalling from the CBD, and over the course of about 30 orbits they settle into the characteristic eccentric, anti-aligned configuration. 
In the 3rd panel we also show the model VH3, which is a zoomed-in version of H3 with double resolution (Δ x = 0.00125 a), and which shows that the minidisk eccentricity settles to the same level.[Video content from this minidisk refilling experiment is available at: <https://youtu.be/GCh7yW-QuY8>] §.§ Role of the minidisk-minidisk interaction The visual impression given by animations of the decretion and minidisk refilling experiments (models D1 and H3) is that interaction between the minidisks, and the associated mass exchanges, mediate the eccentricity growth. To see how things would be changed without the minidisk-minidisk interaction, we performed a run (model H4) where one of the minidisks is replaced by a large absorber of radius 0.45 a. In this configuration, one minidisk refills from the circumbinary disk, and can lose mass to the companion absorber, but does not receive stream impacts from a companion minidisk. Time series of the minidisk diagnostics in fig:MDCOMnoMDs1absbig show that the COM amplitude of the lone minidisk (2nd panel) exhibits large oscillations around roughly 0.05a.[Video content from this single minidisk refilling experiment is available at: <https://youtu.be/Th13XvxKsxA>] In contrast, with no absorber present (model H3; fig:MDCOMnoMDs), the COM amplitude is about 0.1a with relatively little variation. Also, the minidisks undergo a steady rate of retrograde precession in the “normal” run H3, and that precession is not seen when the absorber is present (fig:MDCOMnoMDs top panel vs. fig:MDCOMnoMDs1absbig top panel). These observations indicate that some eccentricity must be injected by gas infall to the minidisks, but not persistently enough to account for the minidisks observed in the “normal” run H3. The persistent eccentricity, seen in runs that include both minidisks, indicates the minidisk-minidisk coupling is a likely cause of the directionally coherent eccentricity injection. §.§ Periodic mass trade and apsidal precession The eccentric minidisks collide periodically and exchange mass. This effect is shown quantitatively in the top panel of fig:mdot_vs_mtrade, where we plot the RMS mass flux across the midline between the binary components (the midline rotates at the binary orbital frequency). The spikes in the RMS mass flux correspond to a rate of mass exchange that exceeds the average mass flow to the binary by factors of 10 - 20. The mass transferred per collision exceeds 20% of the disk mass when the instability is most aggressive (fig:COMvsrsoft), so the mass-trading events are dynamically significant. The pulses are very regular, and the interval is on the order of the binary orbital period. The frequency of the mass trading events is accurately predicted by the beat frequency f_ bin - f_ prec associated with the binary orbital frequency f_ bin and the apsidal precession frequency f_ prec of the minidisks. This is confirmed in fig:rsoftlim, which shows periodograms of the RMS mass flux between minidisks for runs S0-S5 (same runs as the first row of fig:MD_tiles). Apsidal precession of eccentric disks in binary systems is generally governed by a combination of pressure gradients, viscous stresses, and the tidal field of the companion <cit.>. The precession rate seen in our simulations is additionally found to be sensitive to the gravitational softening length, r_ soft. 
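The sensitivity of the precession rate to r_soft can be reproduced with ballistic test particles alone. The sketch below integrates a single particle in a Plummer-softened point-mass potential (in units of the test orbit's semi-major axis, with softening values chosen only for illustration) and measures the apsidal shift per radial period; the shift is negative (retrograde) and grows in magnitude with the softening length, consistent with the trend described in the text.

```python
import numpy as np

def apsidal_shift(r_soft, e=0.3, n_orbits=12, dt=5e-4):
    """Average apsidal shift per radial period (degrees, negative = retrograde) for a
    test particle in a Plummer-softened potential with GM = 1 and semi-major axis ~ 1."""
    GM = 1.0
    x = np.array([1.0 + e, 0.0])                         # start at apoapsis
    v = np.array([0.0, np.sqrt(GM * (1.0 - e) / (1.0 + e))])
    acc = lambda x: -GM * x / (x @ x + r_soft**2)**1.5
    n_steps = int(n_orbits * 2.0 * np.pi / dt)
    traj = np.empty((n_steps, 2))
    a = acc(x)
    for i in range(n_steps):                             # kick-drift-kick leapfrog
        v += 0.5 * dt * a
        x += dt * v
        a = acc(x)
        v += 0.5 * dt * a
        traj[i] = x
    r = np.hypot(traj[:, 0], traj[:, 1])
    peri = (r[1:-1] < r[:-2]) & (r[1:-1] < r[2:])        # local minima = periapses
    ang = np.arctan2(traj[1:-1, 1][peri], traj[1:-1, 0][peri])
    return np.degrees(np.diff(np.unwrap(ang)).mean())

for rs in (0.0, 0.05, 0.1, 0.2):
    print(f"r_soft = {rs:4.2f} : {apsidal_shift(rs):+7.2f} deg per radial period")
```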
For example, the run shown in the left-most panel of the top row of fig:MD_tiles has a precession rate that is consistent with zero, and that run (model S0) also has a zero gravitational softening length. We also performed a decretion run with zero softening (model D0 in Table <ref>) where the initial condition is a low-density atmosphere and gas is injected from the component locations, and with a small viscosity α=0.001. This experiment reveals that kinematic shear viscosity tends to drive retrograde disk precession; in contrast to model S0 (which has α=0.1), in model D0, which has much lower viscosity, we found the minidisk precession becomes prograde with a period of ∼ 47 binary orbits.[Video content from this low-viscosity, zero-softening decretion experiment is available at: <https://youtu.be/rFEvJFRePDA>] When gravitational softening is zero and the viscosity is negligible, the precession rate can still be positive or negative, and is then determined by a competition between tidal interaction with the companion (which drives prograde precession) and pressure gradients, which generally drive retrograde precession provided that radial derivatives of pressure are not too positive; <cit.>. §.§ Dependence on gravitational softening To determine the necessary conditions for the instability to operate, we have systematically “switched off” different pieces of physics. A summary of the visual results is presented in the panels of fig:MD_tiles. The top row shows a sequence of representative images (again, color showing Σ^1/2) from runs where the gravitational softening length r_ soft is increased from 0.0 to 0.05a. It is visually evident that the instability gets weaker with larger r_ soft, and becomes too weak to see when r_ soft≳ 0.03a. This trend is corroborated in fig:COMvsrsoft, where we plot the distance of the minidisk COM from the respective binary component (as described in <ref>) as a function of r_ soft. Although the minidisk eccentricity is not visually obvious for r_ soft≳ 0.03a, precession is nonetheless quantifiable (see e.g. the high-resolution model H1 with r_ soft=0.04 a in fig:MDCOM), and fig:rsoftlim shows the cadence of minidisk mass exchange is still well-predicted by f_ bin - f_ prec. §.§ Dependence on the orbital Mach number The second row of images in fig:MD_tiles shows representative minidisk morphologies for a range of nominal Mach numbers in the range 7 - 25, and significant and persistent minidisk eccentricity is seen for all cases. The top panel of fig:COMvsnuMach shows the minidisk COM diagnostic as a function of the Mach number, and confirms that there is no clear dependence of the minidisk eccentricity on the disk temperature in the range we have simulated here. §.§ Dependence on gas viscosity and suppression by target temperature profiles The third row of fig:MD_tiles shows how the minidisk morphology depends on the gas viscosity, and reveals that high enough viscosity, ν≳ 10^-4 reliably suppresses the instability, resulting in roughly circular minidisks. This is also corroborated in the bottom panel of fig:COMvsnuMach. The fourth row of fig:MD_tiles (other than the right-most image) shows that the instability is significantly suppressed by the use of target temperature profiles. The degree of suppression is not markedly affected by the rate of driving towards the target temperature profile (compare 2nd and 3rd panels). 
Crucially, use of the isothermal equation of state (4th row, 4th panel), which is widely used in studies of binary accretion, strongly suppresses minidisk eccentricity in circumbinary accretion. However, in sec:convergence we show that suppression by the isothermal equation of state can be overcome in some cases. §.§ Effect of a “hole” near the origin Many studies of binary accretion use a grid code with cylindrical polar coordinates, and such geometries could induce anomolous flow patterns in the vicinity of the coordinate origin. Given that mass transferred between the minidisks generally passes through the origin, we found it germane to examine how a source of systematic numerical error, such as arising from a coordinate singularity or inner boundary, might affect how the instability behaves. We modeled the source of error using a “hole” placed at the origin (runs labeled HOLE and S2 in Table <ref>), which is included as a third sink term as described in sec:numerics. The minidisk morphology when the hole is present is shown at the bottom right panel of fig:MD_tiles. The time series of the minidisk COM, shown in fig:hole, shows that the hole diminishes the average minidisk eccentricity by about half, and also reveals a large amplitude, slow oscillation about the mean eccentricity. This suggests the instability could be mischaracterized in simulations that use a cylindrical polar coordinate grid, unless particular care is taken to avoid numerical errors near the origin. This experiment also yielded serendipitous insight into the dynamics of the eccentricity driving. The slow oscillation of the minidisk COM is seen in both disks, but these oscillations are 180^∘ out of phase with one another; one disk gets more eccentric while the other circularizes. The oscillation period for the run shown in fig:hole is roughly 9 orbits, which is also the minidisk apsidal precession period for that run. We now understand that the radializing minidisk is hogging the gas falling in from the CBD, and that the circularizing minidisk is relatively starved. The explanation for this may be as follows: (a) the CBD cavity is eccentric, (b) the minidisks are eccentric, and (c) one of the disks extends more in the direction of the near side of the cavity wall, thereby receiving more of the infalling gas. These conditions apply too when no hole is present, but then gas flows relatively unimpeded from the disk which catches more of the infall to the one which catches less, and both disks remain fed. By inhibiting the mass and momentum transfer between disks, the hole leads to the starvation of one disk at a time, and also allows that disk to circularize. This is a further indication that the minidisk-minidisk interaction (sec:interaction) is instrumental in driving the instability. §.§ Numerical convergence of eccentricity growth rates We performed a resolution study of the exponential growth rate of the minidisk COM amplitude using decretion experiments. We used the decretion scenario in order to remove the complicating effects of the mass infall from the CBD. Indeed, this scenario most cleanly isolates the role of the minidisk-minidisk interaction in driving the instability. When the CBD is removed, the instability does grow even with an isothermal EOS (note that we have not observed robust development of the instability in any runs that both use a target temperature profile and include the CBD, indicating that a high degree of symmetry of the minidisk-minidisk interaction plays an important role). 
Since isothermal runs are significantly less computationally expensive than radiatively cooled runs, we used the isothermal EOS with no CBD to illustrate the numerical convergence of the growth rate over a wider range of resolutions. fig:dec shows the exponential growth of the minidisk COM amplitude, in runs where the grid resolution is 200, 400, 800, 1600, and 3200 zones per semi-major axis. All of these runs develop eccentric minidisks, but the lowest resolution runs displayed in each panel, 200 (400) zones per a for isothermal (radiatively cooled Γ-law) equations of state, show a spuriously large growth rate and early saturation. The growth rate is consistently measured to be approximately 0.07 f_bin at each subsequent doubling of the grid resolution. Saturation occurs around a consistent value. §.§ Summary of the numerical findings The results of our numerical investigation strongly point to the minidisk-minidisk interaction as a necessary and sufficient condition for the growth of persistent minidisk eccentricity. This interaction manifests as the regular trading of mass between the minidisks across the inner Lagrange point, at roughly the orbital period of the binary. Departure of the observed mass trade interval from the orbital period is due to apsidal precession of the minidisks. Precession can in general be prograde or retrograde, but when r_soft≳ 0.01a it is always retrograde and gets faster with increased r_soft. This effect is consistent with known retrograde precession of ballistic particles in a softened gravitational potential, which we also checked with numerical integrations of eccentric particle orbits in softened potentials. The instability likely exists in a formal sense regardless of how thermodynamics is modeled; however, it seems to be suppressed in scenarios where a CBD is present and a target temperature profile is used, as with β-cooling or the locally isothermal EOS. § DISCUSSION §.§ Mechanism of the instability We propose that minidisk eccentricity injection is the result of regular impacts between the minidisks, and that it can be modeled schematically in terms of perturbations to a ring of orbiting particles. The outer edge of a minidisk is pictured as an eccentric ring of test particles in Keplerian orbit around a binary component, as illustrated in fig:diagram. The particles in the ring are subject to an external forcing term f⃗_e(ν, θ), which depends on the ring eccentricity e, and the orbital phases, ν and θ, of the ring particle and of the binary orbit respectively. The zero of the binary orbital phase is chosen so that θ=0 means the eccentricity vector e⃗_1 of the BH1 minidisk points horizontally to the right. The forcing term needs to capture the dynamical effects of head-on impacts between gas parcels in opposing minidisks. Since more mass is exchanged per impact as the eccentricity grows (fig:COMvsrsoft), the forcing amplitude must increase with e. Impacts occur when θ≃ 0, and for small eccentricities they mainly affect the particles at the long ends of the minidisks around ν≃π. The impact force is directed opposite the particle's velocity v⃗_orb(ν), and is proportional in magnitude to its speed. A possible forcing term is then f⃗_e(ν, θ) = - const× e(t) ×δ(θ) δ(ν - π) v⃗_orb(ν), where δ is a Dirac delta function and the constant in Eqn. <ref> is positive. The ring evolves "rigidly", in the sense that only the part of the forcing which determines the total torque and power applied to the ring is included in the equation of motion.
This also means the ring does not precess, although one could estimate the rotation rates of e⃗_1,2 by averaging the local rate of apsidal rotation over the particle phases ν; non-elliptical distortions obviously cannot be captured in the ring approximation. Integration of r⃗×f⃗_e and f⃗_e ·v⃗_ orb over dν yields respectively the ring specific torque ℓ̇ and specific power Ė, and in turn, an expression for ė(t) via e = √(1 + 2 Eℓ^2 / (G M)^2). A detailed solution of the e(t) equation is not needed to appreciate that circular rings subject to the forcing term in Eqn. <ref> are unstable to small-amplitude perturbations. When 0 < e ≪ 1, the ring particles near the far turnaround points (overlapping ellipses in fig:diagram) experience a weak retrograde impulse, corresponding to the minidisk-minidisk impact. Backwards forcing near apocenter drives an angular momentum deficit, i.e. it increases the particle eccentricity, and that effect is not compensated around the minidisk pericenter because of the factor δ(ν - π). The larger e leads to a stronger retrograde impulse via Eqn. <ref>, completing a feedback loop in which e(t) grows exponentially. In sec:convergence we determined the growth rate to be ≃ 0.07 f_ bin; this rate empirically fixes the constant in Eqn. <ref>. A stochastic forcing term could be added as a model of gas falling in from the CBD and impacting the minidsks. The result should be the appearance of a non-zero but random-walking minidisk eccentricity, like what we observed in the simulations from sec:interaction and sec:dephole, where the minidisk-minidisk interaction was suppressed by use of a large absorber, or a “hole”, respectively. The mechanism proposed here for the eccentric minidisk instability does not deal with the hydrodynamical energy budget, and therefore cannot account for the instability's apparent sensitivity to the thermodynamical treatment. Our numerical results are consistent with two possible interpretations. The first, is that the regularity of minidisk-minidisk impacts is compromised by the appearance of spiral arms, and that spiral arm formation is directly sensitive to the equation of state. The second, is that the equation of state directly affects the CBD morphology, which in turn sets the cadence and regularity of minidisk feeding, to which the eccentricity evolution is sensitive. In the second scenario, the absence of spiral arms (fig:MD_tiles) could be a red herring, or it could be a consequence of the disks already being eccentric. This issue needs to be investigated further. §.§ Comparison to other eccentricity mechanisms The physical picture just proposed is adapted from one that was described in <cit.> to explain the growth of eccentric disks around the white dwarf accretors of SU Ursae Majoris (SU UMa) binary systems. Those systems are seen to exhibit so-called “superhump” oscillations during periods of enhanced mass transfer from the donor star. The oscillations are widely interpreted as signifying an eccentric disk around the white dwarf, which precesses and causes the observed superhump mode to occur at the beat frequency with the binary orbital period. Eccentricity is known to be excited by the 3:1 Lindblad resonance <cit.> operating in the outer edge of the disk, however L94 was exploring an alternative in which the eccentricity was driven instead by the gas stream from the donor star impacting the disk around the white dwarf. 
The ballistic particle-ring approximation with external forcing was used in L94 to analyze the eccentricity injection by the impacting gas stream, however with a different forcing term from the one in Eqn. <ref>. In L94, the forcing strength was set proportional to the rate of mass flow from the donor star, which would not be in resonance with any waves excited in the disk. L94 showed that the ring eccentricity is excited during periods of increasing mass flow, but then is dissipated after the mass flow rate stabilizes to a new level. It was concluded in L94 that stream impacts were not a viable scenario for eccentricity injection in SU UMa systems, and that the 3:1 resonance was the more likely culprit. It is relevant to note that we considered the 3:1 Lindblad resonance as a possible mechanism for the eccentric minidisk instability. However, that has been shown to succeed only when the binary mass ratio is q ≲ 1/3 <cit.>, whereas we see the eccentric minidisk instability operating when q=1. Besides, the Lindblad resonance is a tidal interaction, and our results from sec:interaction and sec:dephole indicate the eccentric minidisk instability is being driven by resonant mass exchange. In sec:interaction we established that the resonant interaction could be destroyed by replacing one minidisk with a large absorber, but we also saw that some eccentricity was nonetheless developing in the extant minidisk, albeit without the coherent directionality. We interpreted this as arising from stochastic eccentricity injection by gas infall from the CBD, however we have considered the possibility that minidisks might also be susceptible to some kind of secular instability. For example, isolated α-disks were found by <cit.> to be unstable to eccentricity growth by a viscous overstability. Later work by <cit.> pointed out that viscous overstability could be an unphysical aspect of the α-disk model, because it is suppressed when accounting for the finite relaxation time of magnetohydrodynamic turbulence expected in accretion disks. Possible scenarios where viscous overstability may be physical were further elucidated in <cit.>. <cit.> also showed that bulk viscosity suppresses viscous overstability, and this fact was used by <cit.> to test whether viscous overstability was important in the development of eccentric disks. A simple way to assess the importance of viscous overstability is to perform runs with non-zero kinematic bulk viscosity λ. Indeed, it was argued in <cit.> that, if viscous overstability were important, then using λ = 2 ν would suppress disk eccentricity. We checked this case (see the panel labeled “λ= 2ν = 10^-4” in fig:MD_tiles), but we found no significant suppression of eccentricity (minidisk COM amplitude is still ≃ 0.09 a). <cit.> also derived a quenching condition for viscous overstability when α=0.1, namely that the bulk α-viscosity parameter is >0.35. Thus, we also checked a case with λ/ν > 0.35 (see the panel labeled “λ = 4ν = 10^-4” in fig:MD_tiles), and we again found no suppression of minidisk eccentricity (minidisk COM amplitude is still ≃ 0.09 a). In both bulk viscosity tests, we reduced ν to below 10^-4√(GMa) to ensure the tests were not affected by the viscous suppression demonstrated in the bottom panel of fig:COMvsnuMach. We conclude that viscous overstability is not a likely explanation for the appearance of eccentric minidisks. 
§.§ Why gravitational softening produces less eccentric minidisks We found (top row of fig:MD_tiles) that minidisk eccentricity is suppressed by gravitational softening. This can be understood in terms of ballistic particle trajectories in softened gravitational potentials. Consider the effective potential for a gas parcel of specific angular momentum ℓ orbiting in the softened potential of an object with mass M, u_ eff(r) = ℓ^2/2 r^2 - G M/√(r^2 + r_ soft^2) . The turning points in this potential are fixed by the specific orbital energy E of the gas parcel. Orbital eccentricity is not defined in the usual sense when r_ soft > 0, however, if ℓ and E are both fixed, then the radial distance between the turning points can be easily seen to decrease with increasing r_ soft. The result is that a given forcing amplitude (i.e. a fixed value of the constant in Eqn. <ref>) results in a smaller geometrical distortion of the disk when r_ soft is larger. This effect could be accounted for by replacing e(t) in Eqn. <ref> with a different function that reflects the degree to which a ring with parameters ℓ and E is non-circular. Doing so would predict a slower growth rate and could account for the observed reduction of minidisk eccentricity with larger r_ soft. §.§ Softening-driven apsidal precession In the vertically integrated thin disk setting, gravity is softened at second order in h/r, where h is the vertical disk height measured from the midplane. This can be seen by introducing an ansatz for the vertical density profile, say ρ = ρ_0 (1 - (z/h)^2) for |z|<h, ρ=0 otherwise, where ρ_0 has no dependence on z. The amplitude of the horizontal component of the gravitational force density is ρ (GM/R^3) r, where R^2 = z^2 + r^2 and r is the cylindrical radial coordinate. Taking advantage of the fact that z/r≪ 1, we can write ρGM/R^3 r = ρGM/r^2[ 1 - 3/2(z/r)^2 ] + 𝒪(z/r)^4. Integrating over z∈[-h,h] and defining Σ≡∫_-h^hρ dz yields ∫_-h^h dz ρGM/R^3r ≃ΣGM/r^2[1 - 3/10(h/r)^2 ]. The factor in square brackets is ≤ 1, and therefore weakens (“softens”) the gravitational force. Gravitational softening can thus be understood as modeling the finite thickness of the disk. If the factor in square brackets also decreases for decreasing r (a condition which depends upon h(r)), it will soften more at smaller r, similar to the Plummer potential we use to model gravity in our simulations. In practice, a commonly used model for softened gravity in thin disks derives from the Plummer potential <cit.>, Φ = -GM/√(r^2 + r_ soft^2), where r_ soft is the softening length. Based on comparisons to three-dimensional disks, the softening length in the Plummer potential ought to be on the order of the disk scale height <cit.>.[Note that any dependence of the softening length on the fluid variables or horizontal coordinates is usually ignored when taking the gradient of the Plummer potential.] In this section, based on the Plummer model of gravitational softening, we describe conditions under which one might expect softening-driven retrograde apsidal precession of planar eccentric disks. To understand the role of softening, we consider a single gravitating mass, and neglect the disk self-gravity and hydrodynamic effects such as pressure gradients and effective viscosity <cit.>. We therefore operate in the ballistic approximation around a single gravitating mass, whereby fluid elements are treated as test masses moving freely under gravity. 
We take the softening length to be linear in h with a constant of proportionality that is of order unity <cit.>. Consider first the case of a razor thin disk, such that r_ soft∝ h = 0. In this case, the gravitational force is Newtonian, thus we expect eccentric orbits to be closed ellipses (i.e. zero precession). The same expectation holds whenever the disk has a constant aspect ratio, h ∝ r, because then r_ soft∝ r and the Plummer potential becomes proportional to the Newtonian potential. In this case, eccentric orbits are still closed ellipses, but it is as though the central gravitating object has a suppressed mass. This condition is representative of gas-dominated α-disks, as they have relatively constant aspect ratios <cit.>. On the other hand, radiation-dominated disks have constant disk scale height <cit.>, i.e. h/r ∼ 1/r, so that the Plummer force is approximately Newtonian for r≫ h but weaker than Newtonian for r≃ h. In this case, the deficit of (vertically integrated) gravity near the central object causes eccentric orbits to precess in the retrograde direction. This can be understood intuitively as follows. At large distances, the eccentric orbit is approximately Newtonian, i.e. an ellipse. But close to the central object, gravity becomes increasingly weaker than Newtonian, and unable to close the particle trajectory to an ellipse. This causes its next apocenter to be rotated in the direction opposite of the orbital motion. We verified this picture numerically by evolving test particle trajectories in a Plummer potential. §.§ Eccentric precessing minidisks in 2D versus 3D As guidance for three-dimensional studies, in this section we point to regions of parameter space where the effects of minidisk eccentricity and retrograde precession may reveal themselves. Since minidisk eccentricity is triggered by mass-trading activity between minidisks, it is important that such activity not be disrupted by, e.g. artificial obstructions between them. Thus, the entire region between the binary should be resolved. Since minidisk eccentricity is suppressed by viscosity, three-dimensional studies seeking to reveal eccentric minidisks should have weaker effective viscosity. The value of α=0.01 achieved in <cit.>, for example, should be amply low, since we find eccentric minidisks with viscosity as high as α=0.1. Since minidisk eccentricity is suppressed by gravitational softening, and softening represents the finite thickness of disks, a three-dimensional investigation should strive to make the disk thinner. Simulating thinner disks in three dimensions increases computational cost due to the higher resolution required in the z-direction. Although this does not increase the number of cells required in the vertical direction, the cells are smaller, which tightens the time step constraint. If one strives to simulate the r_ soft=0.01 a case (which yields obvious minidisk eccentricity in 2 dimensions, e.g. e∼ 0.5), and assuming the softening length is ≃ 0.5 h where h is the disk half-thickness, then one requires that h does not exceed ≃ 0.02 a within the minidisks. Note that the insensitivity of minidisk eccentricity to Mach number in our study is not expected to be reproduced in three-dimensional studies, because the effective softening length is intimately tied to Mach number via ℳ∼ (h/r)^-1; whereas in our study, the softening length is instead an independent ad hoc parameter, artificially decoupled from Mach number. 
On the other hand, testing softening-driven retrograde precession in three-dimensional studies requires disks that are sufficiently thin (such that minidisk eccentricity is appreciable), but still sufficiently thick that the effect of the implied gravitational softening on precession dominates over other hydrodynamical and tidal effects (see sec:masstrade). Our results suggest that the r_ soft=0.01 a case should be sufficient (i.e. h≃ 0.02 a). However, the functional form of the Plummer potential also suggests that disks with nearly constant aspect ratios will not undergo retrograde precession (see sec:precess); instead, three-dimensional studies seeking to reveal retrograde minidisk precession should focus on flatter disk profiles (such as constant disk heights expected in radiation-dominated disks). Note that retrograde precession should not require a binary, so a targeted simulation of a disk with flat height profile and eccentricity initialized to e.g. e≃0.5 around a single gravitating object ought to be a sufficient test of the effect. Cylindrical polar coordinates, rather than spherical, would be efficient for simulations of flat disks. In their relativistic simulations, <cit.> reported time-varying tilts of the minidisks out of the equatorial plane, with tilt angles comparable to the aspect ratio of the disk. If severe enough, such tilts could cause eccentric minidisks to miss each other at the phase θ=0 shown in fig:diagram, thereby inhibiting eccentricity growth. Thus, it is conceivable that three-dimensional effects not captured in the vertically integrated approach could prevent minidisk eccentricity growth in some scenarios. There are subtleties about our two-dimensional models which may be important to understand when comparing with three-dimensional simulations. Firstly, the Plummer potential is an ad hoc model of the gravitational softening that occurs when integrating out the vertical degree of freedom in thin disks. In particular, it is not derived in a controlled perturbative procedure in powers of the local disk aspect ratio. To do so would require greater knowledge of the local vertical density profile. Thus, a lack of softening-driven retrograde precession in constant aspect ratio disks is only predicted to the extent that the Plummer potential is a reasonable model of softened gravity in that regime <cit.>. However, we expect that retrograde precession in flat disks (i.e. h =constant) should be a generic consequence of gravitational softening, independent of the applicability of the Plummer model. Secondly, when performing a vertical integration of the hydrodynamic equations of a thin disk, the gravitational force softens beginning at second order in the disk aspect ratio. Instead, if a polar integration is performed, the magnitude of the gravitational force per unit area can be calculated at fully nonlinear order as GMΣ_ polar/R^2 since the coordinates conform with the spherical symmetry of the point mass gravitational potential. Here, we defined Σ_ polar≡∫_π-θ_h^π+θ_hρ R dθ to be the exact surface density, where θ_h defines the disk's local polar extent. In other words, with polar integration, gravity does not appear to have a softened functional form at fully nonlinear order. Thus, it is more cautious and nuanced to say that retrograde precession is possibly a finite-thickness effect, which manifests via softened gravity under vertical integration, but may have alternative physical interpretations in other two-dimensional reductions. 
Ambiguity in the physical interpretation of thin disks was recognized in <cit.>, e.g. the use of spherical versus cylindrical coordinates trades between a vertical gravitational force and a vertical centrifugal force. On this note, it is worth pointing out that pressure and finite disk thickness (which are not mutually exclusive) are known to influence the apsidal precession rates of disks <cit.>, with pressure effects in particular giving rise to retrograde forcing (as long as radial derivatives of pressure are not too positive) which becomes stronger for thicker disks <cit.>. This increased retrograde driving with thicker disks <cit.> is at least directionally consistent with the softening-driven precession we describe in this work (i.e. thicker disks imply larger softening, which implies stronger retrograde driving). Lastly, two-dimensional models of disks with explicit viscosity are, strictly speaking, turbulence closure models. Hence, the evolved variables must be understood as suitable averages <cit.>. A strict comparison with three-dimensional simulations would therefore necessitate computing such averaged quantities. If eccentric minidisks and softening-driven retrograde precession are manifested in three dimensions in an average sense, but not in any instantaneous sense, or if these effects are complete artifacts of the two-dimensional models, this would be a peculiar aspect of turbulence models of thin disks that theorists ought to be aware of. §.§ Observable consequences In our runs varying the softening length (labeled S0 – S5 in Table <ref>, and see the first row of fig:MD_tiles), ∼ 2-13% of the minidisk mass is exchanged between minidisks per trading event (see the red curve in fig:COMvsrsoft). Averaged over time, this corresponds to a mass exchange rate of ∼ 0.6-2.1 times the total mass accretion rate (see fig:mdot_vs_mtrade). Mass trading events can therefore be significant hydrodynamic events that cause observable EM flares. In our previous work W22, we reported simulated light curves from accreting equal-mass circular binaries which exhibit periodicity at near-orbital frequency. In this work, we explain this periodicity as corresponding to mass trading events between minidisks; see the bottom panel of fig:beats. The frequency of mass trading is a beat frequency f_ bin - f_ prec between the binary orbit f_ bin and the signed minidisk precession f_ prec (negative values meaning retrograde precession), indicated with the dotted vertical line in the bottom panel of fig:beats. The system's optical periodogram (obtained in the same way as W22) exhibits a peak at this beat frequency. The well-established m=1 overdensity in the circumbinary disk (called the “lump”) has a pattern speed which imprints on the optical emission at a frequency of f_ lump = 0.1 per binary orbit (see the leftmost peak in fig:beats). We note that f_ bin - f_ prec is a distinct physical phenomenon from the beat frequency between the binary and the lump, 2(f_ bin - f_ lump) (shown as the dash-dotted vertical line in fig:beats), which forms when all sides of the cavity wall are sufficiently close to the binary that the binary can strip material from it at almost all lump phases <cit.>. The top panel of fig:beats shows a snapshot of Σ (raised to the 1/3rd power to improve contrast), showing that the cavity is far too large and offset for the binary-lump beat frequency to form. Also note that a similar minidisk mass-exchange phenomenon was observed in relativistic simulations <cit.>. 
That phenomenon was reported as being an effect of relativistic gravity, characterized by a sloshing flow behavior resulting mostly in alternating mass transfer between minidisks, rather than our finding of a Newtonian phenomenon characterized by eccentric, precessing minidisks, and sychronized mass trading events. There is potential to confuse the observational signatures of these two effects. The cadence of the sloshing effect between minidisks reported in the more recent work by <cit.> is roughly 1.4× per orbit, quite similar to our f_ bin - f_ prec when the softening length is ∼ 0.04 a (see fig:rsoftlim). The rate of mass exchanged in the sloshing mechanism <cit.>, is at the level of 0.1 ×Ṁ_ BHs, whereas the eccentric minidisk instability leads to a mass exchange rate of ∼ 0.6-2.1 ×Ṁ_ BHs (e.g. see fig:mdot_vs_mtrade). If eccentric minidisks do form in accreting SMBHBs, we showed in W22 that they could produce a detectable QPO at or near the binary orbital period, likely in the UV. This knowledge could aid in the identification of EM counterparts to future individual-source detections by the pulsar timing arrays, or assist in targeted searches by placing a prior on the binary orbital period given the QPO periodicity <cit.>. § CONCLUSIONS & OUTLOOK In this work, we showed that accreting, circular, equal mass binaries are prone to an instability which grows a significant eccentricity in the minidisks around the binary components. The mechanism originates in mass trading between the minidisks, which tends to synchronize and become periodic, driving up eccentricity, and causing the eccentric minidisks to maintain opposing orientations. This process is especially strong in the limit of small gravitational softening. Gas impacts from a circumbinary disk are neither necessary nor sufficient to explain this effect. We investigated the dependence of minidisk eccentricity on model details. We found that many model choices, such as the use of artificial target temperature profiles (e.g. the use of β-cooling or locally isothermal equations of state), large gravitational softening (e.g. r_ soft = 0.05 a), large viscosity (e.g. ν = 10^-3√(GMa)), and a grid obstruction between the minidisks, all suppress minidisk eccentricity. This may partly explain why significant minidisk eccentricity in circular equal-mass binaries has not been previously reported in the literature. We found that minidisk eccentricity is robust to large bulk-to-shear viscosity ratios, which suggests this phenomenon is robust to a finite relaxation time of magnetohydrodynamic turbulence <cit.>. We also showed that eccentric minidisks tend to precess steadily in the retrograde direction when gravity is softened. In the limit of zero softening, the precession can in general be prograde, zero, or retrograde, depending on the balance of driving from hydrodynamic and tidal forces. The minidisks trade mass at a beat frequency, f_ bin - f_ prec, between the binary orbital frequency f_ bin and the minidisk precession frequency f_ prec; note the minidisk precession frequency is negative when the precession is retrograde. This “eccentric minidisk beat frequency” imprints on light curves from thermal disk emission, as we reported in <cit.>; in this work we clarified that the physical origin of such periodicity is the minidisk mass trade. 
Although the frequency can be similar, this effect is distinct from the “binary-lump beat frequency” 2(f_ bin - f_ lump) formed between the lump and the binary, which occurs when the cavity wall is sufficiently close to the binary that the binary can draw material from the lump at most lump phases. In a careful interpretation of the two-dimensional thin disk setting, we argued that softening-driven retrograde precession is a finite-thickness effect, even though a precise physical interpretation is not obvious. We believe that future three-dimensional simulations could observe the eccentric minidisk instability, but acknowledge that high resolution may be required due to the possible need for a rather thin disk, with h/r ≲ 0.02. Three-dimensional simulations could also help clarify the physical meaning of gravitational softening commonly used in vertically integrated hydrodynamical settings. Seeing as we restricted this work to circular, equal-mass binaries, future work should also determine the range of binary parameters where this eccentric minidisk instability operates. All simulations were performed on Clemson University's Palmetto cluster, and we gratefully acknowledge the Palmetto HPC support team. We acknowledge support from National Science Foundation grants AST-2006176 (to ZH) and AST-1715661 (to ZH and AM), NASA ATP grant 80NSSC22K0822 (to ZH and AM), and use of the software NumPy <cit.>, Matplotlib <cit.>, SciPy <cit.>, CuPy <cit.>. We thank the KITP at UC Santa Barbara for their hospitality during the Binary22 workshop, where some of the early work for this project was performed. We also acknowledge Julian Krolik, Mark Avara, Matthew Bate, and Steve Lubow for valuable discussions at that workshop and since. KITP is supported in part by the National Science Foundation under Grant No. NSF PHY-1748958. aasjournal § SIMULATION SUITE
http://arxiv.org/abs/2307.02615v1
20230705193804
Human Inspired Progressive Alignment and Comparative Learning for Grounded Word Acquisition
[ "Yuwei Bao", "Barrett Martin Lattimer", "Joyce Chai" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Human language acquisition is an efficient, supervised, and continual process. In this work, we took inspiration from how human babies acquire their first language, and developed a computational process for word acquisition through comparative learning. Motivated by cognitive findings, we generated a small dataset that enables computational models to compare the similarities and differences of various attributes, and to learn to filter out noise and extract the common information for each shared linguistic label. We frame the acquisition of words not only as an information filtration process, but also as a representation-symbol mapping. This procedure does not involve a fixed vocabulary size, nor a discriminative objective, and allows the models to continually learn more concepts efficiently. Our results in controlled experiments have shown the potential of this approach for efficient continual learning of grounded words. § INTRODUCTION Two of the important word acquisition problems are: 1) what must be learned to acquire a word, and 2) how to learn the word? To the first question, cognitive studies have shown that several critical steps in learning language naturally come from joint attention establishment <cit.> and symbol grounding <cit.>. Children's attention is usually redirected through a mother's or teacher's guidance, and they learn to map these attended sensory inputs (e.g. color, sound, heat) to their corresponding words or sentences. Living in a rich and diverse world enabled by our multiple body sensors, we learn to filter out the noise and pay attention to the specific aspects of an input to which we assign linguistic labels. This attention establishment and information filtration process is the first step of word acquisition. After filtering out the noise, we are left with a mental representation of what a word entails. Just as the word “car” triggers certain impressions of a common means of transportation in one's head, we store these representations as they could come in handy later when we use them to reason, imagine, and express ourselves. To acquire a word, humans learn to filter out noise to focus on key information from sensory inputs that contributes to its meaning <cit.>, and store that meaning representation for future use <cit.>. As for the second question, one of the common but under-explored methods is implicit or explicit comparison. Caretakers may lay out stuffed animals around a baby and name them one by one to differentiate them. In school, teachers may compare different components of the learning material, e.g. “Today we learn `colors'. This is red/blue/yellow...”. Comparison is the process of finding commonalities and highlighting differences <cit.>. It allows children to attend to matching relational structures of inputs <cit.>, filter out background noise, and learn to generalize as well as abstract. With comparisons, especially clean, well-structured comparisons, children can learn a lot of condensed knowledge efficiently and cultivate their capabilities to tackle noisier challenges outside of the classroom <cit.>. From these findings, we propose a new method of word acquisition for artificially intelligent (AI) agents.
We mimic the classroom learning setting and constructed a small clean dataset named SOLA – Simulated Objects for Language Acquisition. This dataset allows the model to draw efficient similarity and difference comparisons, learn to filter out noise, pay attention only to key information that contributes to a word meaning, and store these word-representation mappings continually as new words are introduced. While a larger scale evaluation is needed in the future, through controlled experiments, our preliminary results have demonstrated the potential of this model in efficient continual learning of grounded words. The dataset and code are available at <https://github.com/sled-group/Comparative-Learning>. The contributions of this work include: * Constructed a small, clean dataset SOLA for studying efficient comparisons. * Framed the acquisition of words as both an information filtration process, and as a representation-symbol mapping. * Proposed a new method of grounded word acquisition through comparative learning * Demonstrated the performance, usability, and generalizability of the acquired representations through multiple tasks. § RELATED WORK §.§ Human Language Acquisition Language acquisition is the process of putting linguistic labels onto abstracted features, and structuring them into sentences following publicly recognized grammatical rules to express intention. The simple process of abstracting features, takes input attention filtering and relational generalization to pinpoint the learning concept, and associate them with linguistic labels <cit.>. Studies show that the amount of mother-child joint attention facilitation time is positively correlated to a child's early vocabulary size growth <cit.>, and that human infants are capable of comparison and abstraction through same/different relation comprehension <cit.>. Comparison is a central component of human cognition which results in our own uniquely structured knowledge representations <cit.>. The theory of Structure-Mapping predicts that similarity comparison allows subjects to attend to matching relational structures of inputs <cit.>, highlight the differences, and that human infants are able to learn such relations in very few examples <cit.>. The difficulty of establishing a structural mapping, however, is influenced by the ease of the alignment process <cit.>. Progressive alignment <cit.> suggests that constructing an alignment among highly similar comparisons can invite young children to reason about relational structures and serve as base knowledge for future hierarchical abstractions and complex characteristic learning. Our work took inspiration from the above two theories by constructing grouped multimodal samples for similarity and difference comparisons. We also start progressive alignment with highly aligned pairings during early word acquisition. §.§ Continual Learning There are two major limitations that current neural network models face. Models either take the large-pretrained approach, throwing as much data as possible during training, and hope to learn everything and achieve AGI <cit.> all at once without the need for continual learning. Or models take the architectural <cit.>/ rehearsal <cit.>/ replay <cit.>/ regularization <cit.> approaches hoping to retain previously learned knowledge amid newly introduced data distribution shift <cit.>. Humans, however, are lifelong learners. We are constantly adapting to new environments, learning new concepts & tasks, and evolving together as a society. 
Human execution of this process is simple, natural, and cost effective, without catastrophically forgetting previously learned knowledge <cit.>, nor having to retrain from scratch every time new knowledge is introduced <cit.>. Our method follows the human learning approach and gradually learns more concepts as they are introduced. We demonstrate the model's resistance against catastrophic forgetting and the data learning efficiency in our experiments. §.§ Contrastive Learning Contrastive learning is a paradigm that enables models to learn feature representations through contrasting examples without explicit labeling <cit.>. Contrastive learning uses a single contrastive loss function that pushes similar classes together and dissimilar classes apart <cit.>. In this paper, we introduce Comparative Learning which adapts the general definition of contrastive learning by explicitly separating the similarity training from the difference training. On top of encoding each input as in contrastive learning, we took additional steps to further extract information about similarities and differences separately given the same amount of inputs. Supervised by associated words, we use the similarity batches to learn the process of noise filtration and a shared feature representation. We use the difference batches to refine and differentiate these feature representations. §.§ Multimodal Grounding A large number of previous works try to draw connections between language and different modalities, such as VILBERT <cit.>, LXMERT <cit.>, UNITER <cit.>, OSCAR <cit.>, Vokenization <cit.>, and more <cit.>. These models demonstrated their state of the art multimodal representations on a range of downstream tasks, including image/video captioning, image/text retrieval, visual question answering, and text-to-image generation <cit.>. A large portion of these works focusd on visual recognition and language production tasks such as captioning, retrieval, and some visual question answering. These works embed visual and textual inputs into the same latent space for similarity comparison and retrieval. These models can learn a great language-vision matching filter, but often do not preserve a grounded concept representation given the linguistic labels. Another line of works focus on language comprehension and image/video generation. They take a pre-trained language embedding and use it to generate high resolution images, and have achieved extraordinary performance. Notably, <cit.> achieved compositional visual generation with energy based models. <cit.> worked on image editing given instructions with paired training images. Also others demonstrated language grounding through compositional text to image generations <cit.>. These models rely on great grounded language representations to generate meaningful images. Our work frames the language acquisition process as both input information filtration and representation learning. A few methods include both parts of this definition. CLIP <cit.> used contrastive learning on massive number of weakly linked image-text pairs to project each modality into the same embedding space, which allows the encoders to filter inputs, and store the representations through text embeddings. Several works including <cit.> used a set of energy based models on recognition tasks for input filtration, and iteratively refined the representations through the Langevin dynamics procedure <cit.>. Our work proposes a human inspired approach for word acquisition. 
We jointly train both the input filtration process and the representations, and map them to their corresponding words through comparative learning. § DATASET Inspired by the classroom teaching setting and the Progressive Alignment theory <cit.>, we created a new dataset SOLA (Simulated Objects for Language Acquisition). SOLA has little noise and clearly defined attributes to isolate different concepts for efficient sample comparisons and grounded language-feature mapping. We generated SOLA using the open-source simulation software Kubric <cit.> designed for semi-realistic image/video synthesis. SOLA (Figure <ref>) contains images of individual simulated objects with three associated learning attributes: color, material, and shape. Each object is a composition of one of 8 colors, 11 shapes, and 4 materials. We also diversify the images by capturing each object at 3 different light settings and 6 different camera angles. A total of 6336 Red Green Blue Alpha (RGBA) images were generated. To evaluate the generalizability and robustness of the models on nosier inputs, we also composed a Variation Test set (D_test_v) of 989 RGBA images by applying a stretch, shade change, or size transformation. An object in this test set is either stretched along one of the x, y, and z axis, colored with a darker or lighter shade, or shrunk to a medium or small size. Although not used in this work, we rendered the Depth, Surface Normal, Segmentation Map, and Object Coordinates images for each corresponding RGBA image for future research. To evaluate the novel composition capability of the methods, we reserved 9 learning attribute pairs exclusively in the Novel Composition Test set (D_test_nc). The rest were assembled into the Train set (D_train) for word acquisition training. To evaluate models' abilities to continual learning, we split the vocabulary into two sets: a vocabulary and an vocabulary set, which leads to two datasets D_known and D_unknown. The D_unknown dataset includes images describable by at least one of the three attributes: [yellow, glass, torus_knot], and the rest of the images are in D_known. Each training and testing dataset is broken down into and versions accordingly. More details about SOLA can be found in the Appendix. Several existing datasets offer dense compositional attribute annotations that can be helpful for language grounding, such as MIT-States <cit.>, UT-Zappos <cit.>, CUB <cit.>, ShapeNet <cit.>, Visual Genome <cit.>, and PACO <cit.>. These datasets are great resources for scaling attribute concept learning, especially from noisy real world images, but are not designed to form clean structural alignment for comparative language acquisition. Our work took the baby step of progressive alignment <cit.> by offering the model structured and denoised sets of inputs for easier structural comparison and efficient feature extraction. Following these works, we believe that equipping the model with a set of clean base knowledge can help it extend to messier inputs in the future. Other abstract datasets such as CLEVR <cit.> focus on diagnosing and probing the reasoning or interpretability of models through visual question and answering, and are not designed for language acquisition. Additionally, our dataset includes 2 more materials and 8 more shapes than CLEVR, providing a lot more variance and opportunities for vocabulary learning. We also diversify lighting, camera angles, and further object transformations in the variation test set for generalization and composition analysis. 
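As a quick sanity check on the counts above, the learning attributes and rendering conditions multiply out to the stated number of original RGBA images:

```python
colors, shapes, materials = 8, 11, 4        # learning attributes
lightings, camera_angles = 3, 6             # rendering conditions

objects = colors * shapes * materials       # 352 distinct attribute combinations
rgba_images = objects * lightings * camera_angles
print(objects, rgba_images)                 # 352, 6336 -- matches the stated total
```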
We introduce SOLA as it offers clean, grouped images for structured comparative learning. More detailed dataset comparisons can be found in Table <ref>. § METHOD §.§ Comparative Learning Comparative Learning is the process of finding the similarities and differences from a set of inputs. It is a general learning strategy that can be applied to different input modalities, sizes, and duration. The general formulation can be found below. For each label/word/symbol l_i in an unconstrained set L = {l_1, l_2, ⋯}, we first assemble a batch of samples ℬ_s = {a_1^l_1, a_2^l_1, ⋯, a_n^l_1}, that share the label l_i for similarity learning, and a batch of samples ℬ_d = {b^l_1, ⋯, b^l_j, ⋯}_j≠ i that cannot be described by l_i for difference learning. The process of SIM_l_i (Eq.<ref>) finds similarities across examples in ℬ_s, and extracts out its representation Rep_l_i. The process of DIFF_l_i (Eq.<ref>) highlights the differences between l_i and other non-compatible labels, and refines the representation Rep_l_i. Non-compatible labels are the ones that cannot be assigned to the same entity at the same time, e.g.(up, down). Comparable to the positive and negative batches in contrastive learning, these labels naturally occur through difference comparisons, and are organized by the supervisor. Both the computations and the representation are stored to map the label: {l_i: [SIM_l_i, DIFF_l_i, Rep_l_i]}. Rep_l_i = SIM_l_i({a^l_i ∈ℬ_s }) Rep_l_i = DIFF_l_i(a^l_i, {b^l ∈ℬ_d }) In this work, we contextualize the method of comparative learning in word acquisition through a set of still visual inputs (Figure <ref>). For each concept, e.g. “red”, we assemble a batch of images that share the word “red” for similarity training. We also assemble a batch of images that are of any other color (non-compatible) but “red” for difference refinement. We keep the rest of the attributes the same for better structural alignment. As illustrated in Algorithm <ref> and Figure <ref>, given a batch of training samples (sim and diff) for word l_i: ℬ={ℬ_s, ℬ_d}, we first take a shortcut by having each image a_u go through a frozen pre-trained CLIP <cit.> image embedding as the starting point. This shortcut bypasses a few structural alignment steps, and encodes the raw images into the same 512 dimensions e_u available for direct comparisons. The information denoising and attention establishment process is composed of two parts for each word l_i: the filter 𝙵_l_i and the encoder 𝙴𝚗𝚌_l_i. The filter maintains a vector the same size as the embedding e_u, and computes the element-wise product of the two. It is a learning vector that masks the input embedding by selecting only the relevant dimensions that contributes to the word l_i. This masked embedding goes through two fully connected layers of 𝙴𝚗𝚌_l_i to output a condensed representation r_u. On top of learning the attention filtration process (𝙵_l_i, 𝙴𝚗𝚌_l_i), we then calculate the centroid of all the sample representations r_u from the similarity batch ℬ_s as the condensed representation Rep_l_i for l_i. For difference learning, we have all the ℬ_d samples to go through the same filtration and encoding process for word l_i. Since none of them can be described by the word l_i, the output should be nothing like Rep_l_i. Therefore, the loss function pushes the distance between each sim batch sample and the centroid close, and pushes the diff batch sample representations apart from the centroid. 
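A minimal PyTorch-style sketch of one comparative training step for a single word is given below. The module names, representation size, and margin are our own illustrative choices rather than the released implementation; the loss is written in the simplest form implied by the description (pull similarity-batch encodings toward their centroid, push difference-batch encodings away), and the actual objective may differ in detail.

```python
import torch
import torch.nn as nn

EMB = 512  # dimensionality of the frozen CLIP image embedding

class WordModel(nn.Module):
    """Per-word filter + encoder, i.e. the [F_l, Enc_l] part of the stored mapping."""
    def __init__(self, rep_dim=16):
        super().__init__()
        self.filter = nn.Parameter(torch.ones(EMB))   # learned attention mask F_l
        self.encoder = nn.Sequential(nn.Linear(EMB, 64), nn.ReLU(), nn.Linear(64, rep_dim))

    def forward(self, clip_emb):                      # clip_emb: (batch, 512)
        return self.encoder(clip_emb * self.filter)

def comparative_step(model, sim_emb, diff_emb, margin=1.0):
    """One similarity/difference update for a word; returns the loss and current centroid."""
    r_sim = model(sim_emb)                            # images sharing the word
    r_diff = model(diff_emb)                          # non-compatible images
    centroid = r_sim.mean(dim=0, keepdim=True)        # Rep_l for this batch
    pull = ((r_sim - centroid) ** 2).sum(dim=1).mean()                        # pull close
    push = torch.relu(margin - ((r_diff - centroid) ** 2).sum(dim=1)).mean()  # push apart
    return pull + push, centroid.detach()

# usage sketch with random tensors standing in for frozen CLIP embeddings
model = WordModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss, rep = comparative_step(model, torch.randn(16, EMB), torch.randn(16, EMB))
opt.zero_grad(); loss.backward(); opt.step()
```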
This process filters out and abstracts the shared representations of l_i, and differentiates it from other non-compatible words. It jointly trains the filter 𝙵_l_i, the encoder 𝙴𝚗𝚌_l_i, and the representation Rep_l_i. We store the mapping {l_i: [𝙵_l_i, 𝙴𝚗𝚌_l_i, Rep_l_i]} for each word in memory for later use. §.§ Generative Decoder Learning Due to input filtration, the dimensions of the condensed word representations come from selective, word-specific subsets of the original 512 dimensions of e. They are, therefore, not aligned in the same space across different words and cannot be used for direct interactions. To allow compositional reasoning with all the words and their grounded representations, we trained a decoder (Figure <ref>) to revert the condensed representations back to the same space as the CLIP embedding e. To train the decoder 𝙳𝚎𝚌_l_p for word p, we adopted two strategies in parallel: Editing and Reconstruction (Figure <ref>). About editing, given an image of a (blue, flower), for example, if we filter out blue add red, we should get an image of a (red, flower). Following this logic as in Eq. <ref>, we mask out feature q from input embedding e_q by multiplying the opposite of filter q: (1 - 𝙵_l_q). We then add back the decoded (𝙳𝚎𝚌_l_p) representation of Rep_l_p for word p. Both the filter 𝙵_l_q and the representation Rep_l_p were trained in the previous step and frozen. The output (out_q2p) aims to resemble the embedding of e_p. Similarly, for reconstruction as in Eq. <ref>, if we filter out feature p from input embedding e_p and add back the decoded representation of Rep_l_p, we should get the original embedding of e_p. Both passes are trained jointly to learn the decoder of p (Eq. <ref>). Each decoder is stored together in the mapping {l: [𝙵_l, 𝙴𝚗𝚌_l, 𝙳𝚎𝚌_l, Rep_l]}. The decoded representations open the door for zero-shot compositional comprehension, generation, and reasoning. For illustration purpose, we also trained a small image generator that upsamples the CLIP embedding back to an RGB image. Details about the models and training can be found in the Appendix. out_q2p = e_q (1 - 𝙵_l_q) + 𝙳𝚎𝚌_l_p[Rep_l_p] out_p2p = e_p (1 - 𝙵_l_p) + 𝙳𝚎𝚌_l_p[Rep_l_p] loss = 𝙳𝚒𝚜𝚝 [e_p, out_q2p] + 𝙳𝚒𝚜𝚝 [e_p, out_p2p] § EXPERIMENTS With the training described above, each word will have a mapping {l: [𝙵_l, 𝙴𝚗𝚌_l, 𝙳𝚎𝚌_l, Rep_l]} stored in the memory. These acquired word representations can be used during inference time for downstream tasks. In this section, we evaluate these representations on several tasks that test models' robustness, generalizability, flexibility, and ability to continual learning. §.§ Multi-Attribute Recognition In this task, the models are challenged with zero-shot recognition of all the attributes (color, shape, material) of a given test image a under two evaluation settings: (1) Novel composition setting where the image with a combination of attributes which is not seen during training (i.e., a ∈ D_test_nc); and (2) Noisy setting where the images were injected with noise in the variation test set (a ∈ D_test_v). The models were trained on the training data (D_train). For each test image (Figure <ref>), we go through the memory, apply the corresponding filter and encoder of each word to the input embedding, and picked the top 3 words with the shortest mean squared error (MSE) between the learned word representation and image embedding. 
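That inference loop is only a few lines in code; the sketch below assumes a memory dict holding each word's trained filter, encoder, decoder, and stored representation (names and shapes are illustrative).

```python
import torch

def recognize_top3(clip_emb, memory):
    """clip_emb: (512,) frozen CLIP embedding of the test image.
    memory: {word: (filter_vec, encoder, decoder, rep)} accumulated during acquisition."""
    scores = {}
    with torch.no_grad():
        for word, (filt, enc, _dec, rep) in memory.items():
            r = enc(clip_emb * filt)                          # word-specific filtered encoding
            scores[word] = torch.mean((r - rep) ** 2).item()  # MSE to the stored Rep_l
    return sorted(scores, key=scores.get)[:3]                 # the three smallest-MSE words
```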
We took the essence of several zero-shot compositional learning methods such as <cit.>, and implemented them as variations of the CLIP model for a better experiment control and fairer comparison. More specifically, we compare our method with the following baselines: CLIP Zero Shot computes the highest matching words for each test image. We experimented with different prompts, and reported the highest performances using the prompt “a photo of a x”. CLIP Contrastive adds two fully connected layers to the image encoder, and fine tune the model on the same training dataset with a contrastive loss. CLIP Linear also adds two fully connected layers to the image encoder, but with an output dimension of the vocabulary size. It predicts 1s for all corresponding word dimensions, and 0s otherwise. This method is subject to a fixed vocabulary size, and can be hard to expand to new concepts. CLIP Multi-Attr finetunes two fully connected layers out of the image encoder for each word, and predicts 1s and 0s based on its confidence measured by the word-image matchability (i.e., similarity). The performance of all the methods over two test datasets can be found in Figure <ref>. For each image, we evaluate whether its corresponding color, material, shape, or all three of them are among the top 3 selected words. It is observed that our method consistantly outperforms all baselines across two test datasets and four categories. CLIP Zero Shot showed decent baseline performance on the multi-attribute recognition task, as this model was pre-trained on massive weakly linked language-image pairs. However, our model and the finetuned models are able to surpass this baseline with a significant margin. CLIP Contrastive overfits to the color features, mainly guessing colors in its top three resulting in high color performance but lagging behind in all other attributes. CLIP Linear and CLIP Multi-Attr showed an improved performance compared to CLIP Zero Shot, but couldn't catch up with our method. Among the 3 attributes, the material attribute was the hardest to learn for all the methods. Humans generally learn textures through touching, tapping an object for sound, and other sensors so a visual input alone may not be sufficient to grasp the meaning of materials, especially under a dim light. However, our method was still able to lead in performance on materials, which consequently also increased the accuracy for all top 3. This is likely because our model is able to pay attention to specific aspects (e.g. light reflection, transparency) of the images better through explicit comparisons. §.§ Continual Word Acquisition We investigated models' capability to continually acquire new words on the same multi-attribute recognition task in comparison with the models mentioned in Section <ref>. As mentioned in Section <ref>, we split all the training and testing datasets into two parts based on the vocabulary (D_known, D_unknown). The D_known datasets include 20 words, and the D_unknown datasets include an additional 3 new words for continual word acquisition and evaluation. Any image that shares at least one of the 3 new words is part of D_unknown. Our model conducts continual learning in two ways (Figure <ref>) it can learn new concepts using the exact same way as described in Figure <ref>, and add the word-representation mapping to the memory; 2) It can also update and refine the learned concepts, whenever new samples are available. 
More specifically, we extract the relevant {l: [𝙵_l, 𝙴𝚗𝚌_l, 𝙳𝚎𝚌_l, Rep_l]} from the memory for word l. The new samples go through similarity and difference learning with the old 𝙵_l and 𝙴𝚗𝚌_l to get a batch of condensed representation {r}'s. Together with the old Rep_l, we can calculate a new centroid with these embeddings, and a loss. Through backpropogation and training, the new centroid will be the refined Rep_l, and both the encoder and filter are updated in the memory for word l. We first evaluate the severity of catastrophic forgetting of the methods (Figure <ref>). In Round 1, the models were trained on D_known datasets, and the D_unknown sets in Round 2. We evaluate the accuracy of the models on two D_known test sets by computing the percentage of models' top 3 predictions all being the ground truth attributes. For CLIP Contrastive, we increased the vocab size for Round 2 training. For CLIP Multi-Attr and our method, we introduced additional models for each new concept. The CLIP Linear model was the hardest to grow as the output dimension was fixed to the previous vocab size. We initialized the first linear weights with the learned weights in Round 1, and had to retrain the model in Round 2. In Figure <ref>, except for the CLIP Contrastive model, most models suffered from catastrophic forgetting between Round 1 and Round 2. Our method had a mild performance decrease as more vocab was introduced. This is likely due to the top 3 label selection competitions among increasing vocab size. CLIP Linear and CLIP Multi-Attr suffered severe catastrophic forgetting on the Variation Test D_known set, likely due to lack of generalizability. We also evaluated the continual training data efficiency for different models. During Round 2, we compare how much replay data would the models need in order to achieve a decent performance by training them on either the D_unknown datasets only (new lessons) or both the D_known+D_unknown datasets (new and old lessons). Round 2 trained on only D_unknown receives significantly less data, and does not require reviewing old lessons. In Figure <ref>, when trained only with the D_unknown set, our method had already outperformed all other methods even compared to their performances when trained with both D_known+D_unknown datasets. When more data was available, our method was able to improve performance even further on identifying all attributes. These results showed early signs of efficient continual learning capabilities and resistance against catastrophic forgetting. Unlike discriminative methods such as CLIP Linear, which has a fixed output dimension based on the vocab size, our method is a lot more flexible to grow for new concepts, and achieved higher performance without the need to review old concepts. Further investigations are needed for larger scale evaluation. §.§ Compositional Imagination and Reasoning Another way of evaluating acquired words is through compositional imagination and reasoning given words. With the stored meaning representations, we will be able to flexibly compose different meanings together for reasoning, imagination, simulation, and language understanding. We evaluate this capability in two use cases: composition reasoning and generation. Most traditional multimodal methods, such as the ones in Section <ref> only focus on learning a feature extractor given an image. They do not store a grounded representation for each word for reasoning or generation. 
We therefore, have to turn to the text embedding part of CLIP for comparison as they were trained to be in the same space as the image embeddings. Text embeddings have been shown to carry grounded semantic meanings through high resolution image generations, but also have been found to struggle at grounding certain attributes <cit.>. In this section, we compare our method to CLIP Zero Shot and CLIP Finetune on the following tasks. We use the text embedding of both methods to do image editing and compositional generation. For CLIP Finetune, we added two fully connected layers on the text embedding and fintuned with our D_train dataset. §.§.§ Composition Generation Without any given images, humans are able to build mental representations of objects given linguistic descriptions. These representations are built upon abstract word associated features, and can be flexibly composed and manipulated as more features are added. Unlike previous works that emphasize on high resolution image generation, we focus on building compositional mental representations that can be used for downstream reasoning. Quantitatively, we evaluate the fidelity of a pair of concept composition through a multiple choice task. Given any two random compatible words, e.g. (red, cone), and the CLIP embedding of two hard distractors (each sharing at least one attribute as the original pair, e.g. A. (Red, Cone), B. (Red, Cylinder), C. (Blue, Cone)), we challenge the models to generate a mental representation such that it is closest to the correct image embedding. Each choice is a randomly selected image with the two specified features. As shown in Figure <ref>, for example, we decode representations of both “red” and “cone” and then add the two resulting vectors to create our “mental image” embedding of a red cone. The multiple choice embedding with the smallest MSE is chosen to be the desired attribute composition imagination. The performance can be found in Table <ref>. Over 15 runs of 100 randomly selected questions each, our model is able to outperform the text embedding composition of both CLIP Zero Shot and CLIP Text Finetune. Among those, the Color+Shape combo is the easiest one to assemble, likely due to the challenges of learning the material features for the other two combinations. Our method is better at extracting different aspects of the inputs, and learn to pin down the exact word meanings through efficient similarity and difference comparisons. Qualitatively, we also generated the representations of the novel combinations in our testing data (Figure <ref>), and see how visually close they are to the GT pictures. The visualization shows that CLIP Zero Shot and CLIP Finetune both struggle at representing the precise definition of some shape concepts, such as “Cone”, “Sphere”, and “Teapot”, but are good at embedding color concepts. The last row serves as an inspiration of possible ground truth images given a pair of concepts. §.§.§ Composition Reasoning Another simple compositional reasoning task on different object features is to do `arithmetic' with them. For example, a (red, cone) - red + blue= (blue, cone). With the Decoder training in Figure <ref>, we can flexibly edit a given image to a desired feature. In this section, given an input image, and a random pair of attribute switching, we qualitatively evaluate if the edited image resembles the desired attribute while keeping all other features the same. 
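A sketch of this attribute-switching arithmetic in the learned embedding space is shown below, reusing the stored filters and decoders: the old attribute is masked out with its filter and the decoded representation of the new attribute is added back, following the editing relation used to train the decoder. The same decoded vectors, summed, give the “mental image” embedding used in the multiple-choice composition task; function and variable names are illustrative.

```python
import torch

def switch_attribute(clip_emb, old_word, new_word, memory):
    """e.g. switch_attribute(emb_red_cone, 'red', 'blue', memory) ~ embedding of a blue cone.
    memory: {word: (filter_vec, encoder, decoder, rep)} as stored after decoder training."""
    filt_old = memory[old_word][0]
    dec_new, rep_new = memory[new_word][2], memory[new_word][3]
    with torch.no_grad():
        return clip_emb * (1 - filt_old) + dec_new(rep_new)   # e * (1 - F_old) + Dec_new[Rep_new]

def imagine(words, memory):
    """Compose a 'mental image' embedding from words alone, e.g. imagine(['red', 'cone'], memory)."""
    with torch.no_grad():
        return sum(memory[w][2](memory[w][3]) for w in words)

def pick_choice(mental_emb, choice_embs):
    """Multiple choice: index of the candidate image embedding closest in MSE."""
    mses = [torch.mean((mental_emb - c) ** 2).item() for c in choice_embs]
    return min(range(len(mses)), key=mses.__getitem__)
```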
Figure <ref> shows three qualitative examples on feature switching over color, material, and shape, compared to CLIP Zero Shot and CLIP Finetune. It is observed that CLIP trained text embeddings excel at extracting color related concepts, but struggle at material and shape. This could be due to its unfamiliarity with the specific material and shape words that we use in our dataset, whereas color concepts are more universal and are easier to learn from pixels. Finetuning helps improve the performance, but still lagged behind our method. More qualitative examples can be found in the Appendix. § CONCLUSION In this work, we took a human inspired approach to acquire multi-attribute concepts. We define the acquisition of word as learning both an information filtration process, and a representation-symbol mapping. We mimic the classroom setting, constructed a small clean dataset SOLA for efficient comparative and continual learning. We evaluated the learned representations in multi-attribute recognition, compositional simulation and reasoning tasks. Our experiment results outperformed CLIP variations in controlled settings, and showed early signs of a promising new method for continual grounded word acquisition through comparative learning. § LIMITATIONS As exciting as this work is, it does have several limitations and a lot of opportunities for future improvement. How to scale? We demonstrated our method in a highly constrained environment with very limited concepts, whereas humans are able to pick up new concepts in the noisy world with few shots. How could these representations learned in a clean environment be useful in real world? Would comparative learning still be useful outside of the classroom? We followed the baby step of progressive alignment and hoping that establishing a set of clean base knowledge, can ease the acquisition of future more complex concepts through comparisons with existing knowledge, analogy and hierarchical abstraction. This hypothesis remains to be investigated in the future. What about other words? Some concepts can be learned through just visual inputs, like color, whereas other concepts require grounding through different sensory types or modalities, like “hot”, “loud” and “fast”. Even more concepts are built upon existing words through abstraction and generalization, e.g. “philosophy”, “momentum”. Comparisons can still be used to ground these words, but input to these comparisons could vary from data modalities to computation methods, to abstract representations. We leave these for future work. How to put words into sentences? This work only focused on the grounding of individual words into visual representations, whereas sentence syntax, grammar, and article structure are yet to be learned. For future work, we could treat language as its own modality, and learn the structure through comparisons as well. Just like in an elementary linguistic class, a teacher would list out several examples “I shower”/“You shower”/“He shower”. Humans can learn grammar through what's changing and what's constant. This could be an interesting next step to look into. Who can offer the supervision? As mentioned at the beginning, human language acquisition is a highly supervised learning process. Babies are rarely inventing new words but learning how adults label objects through generations of conventions. A classroom setting with highly structured curriculum and clean dataset takes a lot of curriculum design and heavy annotation. 
This is the cost that humans are willing to spend in order to educate human children from kindergarten to college. Maybe it is a fair price that we have to pay in order for artificial intelligence to learn what we want it to learn. About the current work itself, there are several constraints that we are limited to. First of all, due to limited computation resources and data size, we had to take a shortcut by using a pre-trained CLIP embedding as a starting point for our models. In theory, we could and would love to train our models from scratch, just like how a newborn would learn their first language. A dataset like Toys-200 <cit.> could mimic the process of babies interacting with objects, getting a 360° view and helping build 3D mental representations. Second of all, like many other continual learning methods, an unbounded memory space is an unrealistic assumption. As more concepts are learned, the memory space would grow fast, and so would the search time. An interesting next step could be to re-organize the memory according to the association distances and hierarchical structures. Lastly, our work aims at proposing a novel language acquisition definition and the comparative continual learning method. We used a somewhat simple model architecture and image generation models for a proof-of-concept demonstration of the method. More sophisticated model architectures and training can be substituted for different input modalities and applications. Listed above are several major limitations and future directions based on the current work. We are more than happy to take constructive suggestions and criticism to help improve this and future works. § ETHICS STATEMENT This work took a human-inspired approach to learning word acquisition for artificially intelligent agents. We generated a small clean dataset using the open-source simulation software Kubric, which was designed for semi-realistic image/video synthesis. All of our training was done on a single machine with an 8GB GPU and an Intel i9 processor, with very limited environmental cost. This work does not involve human subjects, nor can it be used to directly interact with humans. § ACKNOWLEDGEMENTS This work was supported in part by NSF IIS-1949634 and DARPA PTG program HR00112220003. The authors would like to thank the anonymous reviewers for their valuable comments and suggestions. § DATASET SOLA Here is a detailed description of the Simulated Objects for Language Acquisition (SOLA) dataset: Learning Attributes (Figure <ref>): * Color: 8 * Material: 4 * Shape: 11 Changing Attributes (Figure <ref>): * Lighting: 3 * Camera Angle: 6 Variation Attributes (Figure <ref>): * Shade: 3 * Size: 3 * Stretch: 4 Image Types: * RGBA * Depth * Surface Normal * Segmentation * Object Coordinates This amounts to 7325 RGBA images in total, with 6336 originals and 989 with variations. A training and testing split can be found in Table <ref>. The original image set was first broken down into Novel Composition Training and Novel Composition Testing. The 9 held-out attribute pairs are: * (yellow, cone) * (green, metal) * (plastic, cube) * (purple, teapot) * (red, metal) * (glass, torus_knot) * (white, cylinder) * (aqua, rubber) * (glass, sphere) For continual learning evaluation, we split the vocabulary into the following two sets. Any images associated with at least one of the concepts in D_unknown are assembled into the D_unknown train/test datasets, and the rest into D_known. The number of samples in each split can be found in Table <ref>. 
D_known = [brown, green, blue, aqua, purple, red, white, rubber, material, plastic, cube, cylinder, sphere, cone, torus, gear, sponge, spot, teapot, suzzane] D_unknown = [yellow, glass, torus_knot] § MODEL ARCHITECTURE AND TRAINING For the encoder training, we used the pretrained CLIP image encoder (frozen) to embed the input images, which pass through a filter of 512 dimensions and two fully connected layers with a hidden dimension of 128 and a latent dimension of 16. Each round is trained on a similarity batch and a difference batch of size 128 each. The training moves on to the next concept when the loss drops below 0.008 or 200 rounds are reached. The whole vocabulary was trained for 5 epochs with a learning rate of 1e-3. For the decoder training, we froze the weights of the filter and the pre-trained representations from the previous step, and trained four fully connected layers with a dimension upsampling 16 → 64 → 64 → 96 → 512 and a dropout rate of 0.2. Each concept was trained for 100 rounds with a batch size of 128. The whole vocabulary was trained for 5 epochs with a learning rate of 1e-3. For comparisons, CLIP Contrastive embedded both the image inputs and the text inputs. The image embeddings went through two fully connected layers with a hidden dimension of 128 and an output dimension equal to the vocabulary size. CLIP Linear trained two fully connected layers on top of the image embeddings with a hidden dimension of 128 and an output dimension equal to the vocabulary size. CLIP Multi-Attr did the same for each word, with an output dimension of 1 followed by softmax predictions. CLIP Text Finetune trained two fully connected layers on top of the text embeddings, with input and output dimensions of 512 and a hidden dimension of 66. We tried to keep all the model architectures roughly the same or with a similar number of parameters for a fair comparison. The models were each trained for 50 epochs with a learning rate of 1e-3. The small image generator contains 5 up-sampling convolution layers with dimensions going from (512,1) to (3,224,224). The numbers of channels are [128, 64, 32, 16, 3]. We trained for 100 epochs on our original dataset with a learning rate of 1e-3. All experiments were done on a single NVIDIA(R) GeForce(R) RTX 2070 SUPER(TM) 8GB GDDR6 and a 10th Gen Intel(R) Core(TM) i9-10900K processor. § SOLA AND OTHER DATASET COMPARISONS
http://arxiv.org/abs/2307.01873v1
20230704183316
K2 & TESS observations of symbiotic X-ray binaries: GX 1+4 and IGR J16194-2810
[ "G. J. M. Luna" ]
astro-ph.HE
[ "astro-ph.HE", "astro-ph.SR" ]
CONICET-Universidad Nacional de Hurlingham, Av. Gdor. Vergara 2222, Villa Tesei, Buenos Aires, Argentina juan.luna@unahur.edu.ar I analyze the K2 and TESS data taken in 2016, 2019 and 2021 of the symbiotic X-ray binaries GX 1+4 and IGR J16194-2810. GX 1+4 consists of a pulsar accreting from a red giant companion in a 1160-day orbit. Since 1984, the pulsar has shown a continuous spin-down rate of Ṗ=-0.1177(3) mHz/yr. I report the detection of the spin period at an average value of 180.426(1) seconds as observed with the K2 mission and confirm that the spin period continues to increase at a rate of ∼1.61×10^-7 s/s. The K2 optical flux and the hard X-rays observed with Swift/BAT varied in tandem, in agreement with other authors who proposed that the optical light arises from reprocessed X-ray emission. In the case of IGR J16194-2810, the X-ray and optical spectroscopy have been interpreted as arising from a neutron star accreting from an M2 III red giant companion. Its orbital period is unknown, while I report here the detection of a modulation with a period of 242.837 min, interpreted as the neutron star spin period. IGR J16194-2810 is thus the second symbiotic X-ray binary where the spin period is detected at optical wavelengths. This period, however, was only detected during the TESS observations of Sector 12 in 2019. The non-detection of this modulation during the TESS observations of Sector 39 in 2021 is perhaps related to the orbital modulation, i.e. a low inclination of the orbit. K2 & TESS observations of symbiotic X-ray binaries: GX 1+4 and IGR J16194-2810 G. J. M. Luna Received June 2023; accepted § INTRODUCTION Symbiotic binaries consist of a compact object accreting from a red giant companion. Those symbiotics with neutron stars are known as symbiotic X-ray binaries. The current census accounts for about a dozen of these systems <cit.>. This class of accreting neutron stars is extremely heterogeneous. At first sight, the only feature that these objects share is the presence of an evolved, wind mass-losing companion, from which the neutron star accretes. Other system parameters such as the neutron star spin period, the orbital period, or the accretion luminosity are very different from one system to another. For example, spin periods range from about 100 s (Sct X-1) to more than 18,000 s (4U 1954+319) <cit.>. GX 1+4 (V2116 Oph) was the first member of this class, discovered in X-rays by <cit.> with a balloon experiment, obtaining a glimpse of what later would be confirmed as the spin period of about 2 min. The optical counterpart was discovered by <cit.> as an M5 III spectral type red giant <cit.>. The spin period at optical wavelengths was first reported by <cit.>, and until this study it was the only symbiotic X-ray binary with the spin period detected in the optical. The neutron star in GX 1+4 has since then been identified as an accreting pulsar in a symbiotic binary. A long history of the spin behavior of the neutron star in GX 1+4 exists, with a thorough compilation by <cit.>. The changes in the spin period of the accreting pulsar are thought to be related to the accretion rate and the torque changes due to the interaction of the neutron star's magnetic field with the inner and outer regions of the truncated accretion disk <cit.>. IGR J16194-2810 was classified as a symbiotic X-ray binary by <cit.> after the identification of the optical counterpart of the hard X-ray source. 
In their analysis of the Swift/XRT light curve, the authors did not find evidence of pulsations of the neutron star, perhaps because of geometric effects such as a low inclination of the binary or a close alignment of the rotation and magnetic axes of the neutron star. In this letter I analyze the exquisite, long term, almost uninterrupted photometric time series of GX 1+4 and IGR J16194-2810 obtained by the K2 and TESS missions and search for the neutron star spin periods and their possible changes. In Section <ref>, I present the data and detail the procedures to extract and remove spurious effects from the light curves and to search for the spin period. Sections <ref> and <ref> show and discuss the results. § OBSERVATIONS GX 1+4 was observed during quarter 11 of the K2 mission on 2016 September 24 19:12:30 UT (lc1) and on 2016 October 21 06:17:05 UT (lc2) with a cadence of 1 min during 23.2902 (lc1) and 47.7263 (lc2) days, respectively. IGR J16194-2810 was observed with TESS during Sectors 12 and 39, starting at 2019-05-21 11:07:37 UT and 2021-05-27 06:37:12 UT, respectively. During Sector 12 the cadence was 30 min, while during Sector 39 it was 10 minutes. I used the lightkurve package <cit.> to download the light curves and remove outliers[as described in <https://docs.lightkurve.org/tutorials/index.html>]. The fluxes (e^- s^-1) were transformed to magnitudes using the zero points from <cit.>. I then applied a Savitzky-Golay smoothing filter to remove the low frequency variability (Figures <ref> and <ref>). In order to search for the presence of the spin period, I used the Generalized Lomb-Scargle (GLS) algorithm as implemented in the astropy library with a "standard"[The standard normalized periodogram is normalized by the residuals of the data around the constant reference model (see <https://docs.astropy.org/en/stable/timeseries/lombscargle.html>)] normalization. Significance levels were calculated with the bootstrapping method implemented in the same library. In the case of the light curve from GX 1+4, I searched for periods around the already known spin period, in the frequency range of 470 to 490 d^-1. The light curve of IGR J16194-2810 presents larger gaps than those from K2 due to satellite downlink and/or bad-quality cadences (conservatively, I have only downloaded those measurements that pass the quality mask). In this case I divided the light curve into three portions: (1) MJD < 58640.43; (2) 58643.97 < MJD < 58652.37 and (3) MJD > 59363.64 (see Figure <ref>) and searched for periods. In order to determine the X-ray flux state of GX 1+4 and IGR J16194-2810, which is related to the accretion state and possibly to changes in the spin period, I downloaded the Swift/BAT light curve of GX 1+4 from the Swift/BAT Transient Monitor web page[<https://swift.gsfc.nasa.gov/results/transients/>] in the 15–50 keV energy range <cit.>, and selected those 1-day bins between the dates observed with K2 (see Figure <ref>). In the case of IGR J16194-2810 I downloaded the MAXI[<http://maxi.riken.jp/star_data/J1619-281/J1619-281.html>] light curve in the 2–20 keV energy range and rebinned it to a 10-day bin size to increase the signal-to-noise ratio. § RESULTS §.§ GX 1+4 The GLS periodogram from the lc1+lc2 light curve shows a highly significant peak at the period P=180.426(1) s (Figure <ref>) and a few other peaks close by. <cit.> reported a period of 124.17±0.04 s on 1996 April 26 in optical wavelengths. Several studies using high energy observations, before and after the detection of the spin period in the optical, already reported periods in the range of ∼120 to ∼170 seconds <cit.>. Moreover, the spin period is known to evolve, with a spin-up phase from 1970 until 1984 and a spin-down phase since then. 
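The reduction just described can be sketched with a few lines of Python. This is an illustrative outline rather than the author's pipeline; the target string, sigma-clipping value, window length, and frequency grid are assumptions, and only the steps (download with lightkurve, remove outliers, Savitzky-Golay flattening, GLS periodogram with bootstrap false-alarm levels) follow the text.

```python
import numpy as np
import lightkurve as lk
from astropy.timeseries import LombScargle

lc = lk.search_lightcurve("GX 1+4", mission="K2").download()
lc = lc.remove_nans().remove_outliers(sigma=5)
flat = lc.flatten(window_length=721)          # Savitzky-Golay detrending of slow variability

t, y = flat.time.value, flat.flux.value       # days, normalised flux
freq = np.linspace(470, 490, 20000)           # d^-1, around the known ~180 s spin period
ls = LombScargle(t, y, normalization="standard")
power = ls.power(freq)
best_period_s = 86400.0 / freq[np.argmax(power)]
fap = ls.false_alarm_probability(power.max(), method="bootstrap")
print(f"best period: {best_period_s:.3f} s, FAP = {fap:.2g}")
```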
The most recent measurement reported from a observation taken on October 2015 yielded a 178.778±0.006 s spin period <cit.>. On their Figure 6, <cit.> nicely shows the evolution of the spin period until 2010. I used the data from their Table B.1 and update their figure by including the optical period from <cit.>, <cit.>, , Fermi <cit.> and the K2 period detection reported here (see right panel in Figure <ref>). Overall, the spin period found in the K2 data confirms the spin-down trend determined from the other data. The long term coverage of K2 allows to search for changes in the spin period during more than 70 days. I extracted the periodograms from 1-day, consecutive and overlapping (50% overlapping) slices of the light curve, which show strong power at the frequency of the spin of the neutron star. During consecutive slices the period increases, following the spin-down already observed in high energies (Figure <ref>). The spin-down rate during the K2 observations, determined by a simple linear fit, is ∼1.61×10^-7 s/s. According to the orbital ephemeris from <cit.>, the K2 observations covered the 0.92 to 0.98 orbital phases, and in agreement with I-band measurements reported by <cit.>, the optical emission increased toward the periastron passage. <cit.> also present ephemeris for a possible eclipse of the neutron star, and the K2 observations covered the phase range from 0.67 to 0.73 from the inferior conjunction. /BAT Transient Monitor observations during the K2 observations show that the source was in a low hard X-ray flux state during lc1 and part of lc2, while afterwards, the hard X-ray flux increased by a factor of about four (see Figure <ref>). The /BAT light curve shows a slow rise after about MJD 57690, reaching a maximum count rate of 0.043 c s^-1 or a flux[Following <http://www.dsf.unica.it/ riggio/Scripts/crab_to_erg.js> and <cit.>] of 2.6×10^-9  which translates into a luminosity of 5.7×10^36 at a distance of 4.3 kpc <cit.>. The tandem variability observed between BAT and K2 light curves supports the scenario proposed by <cit.> where the optical light arise from reprocessed X-rays. The current spin down rate could be caused by a retrograde rotating disk, which extracts angular momentum from the pulsar, with an increased spin down rate at higher X-ray luminosities. The analysis of the K2 and /BAT light curve does not support this scenario because an steady spin down rate is observed even after the increase of the X-ray luminosity. Moreover, as pointed out by <cit.>, a retrograde disk lasting for about 40 years needs further investigation. As an alternative, <cit.> explore the scenario of quasi-spherical accretion onto the neutron star as a possible explanation to the observed long term behavior of the spin rate. In the case that an accretion disk cannot be formed through wind accretion, depending on the source luminosity, accretion can proceed via free-fall of matter towards the magnetosphere when L_X is above a few 10^36 while for lower luminosities, the accreting material forms a hot shell around the magnetosphere, being later accreted through instabilities in the magnetosphere. Free-fall accretion seems unlikely to have proceeded during the K2 observations because of the low /BAT luminosity (free fall accretion would require L_X above 10^37 ), which suggest that the settling accretion regime could have been at work. 
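A compact sketch of the sliding-window measurement used above for the spin-down rate is given below; it is not the author's code, and the window, step, minimum number of cadences, and frequency grid are illustrative assumptions. It assumes detrended arrays t (days) and y (flux) such as those produced in Sect. 2.

```python
import numpy as np
from astropy.timeseries import LombScargle

def spin_down_rate(t, y, window=1.0, step=0.5,
                   fgrid=np.linspace(470, 490, 5000)):
    """Best GLS period in consecutive, 50%-overlapping 1-day slices,
    followed by a linear fit of period versus time (returns s/s)."""
    mids, periods = [], []
    start = t.min()
    while start + window <= t.max():
        m = (t >= start) & (t < start + window)
        if m.sum() > 100:                                  # require enough cadences
            p = LombScargle(t[m], y[m]).power(fgrid)
            periods.append(86400.0 / fgrid[np.argmax(p)])  # best period in seconds
            mids.append(start + window / 2)
        start += step
    slope_s_per_day = np.polyfit(np.array(mids), np.array(periods), 1)[0]
    return slope_s_per_day / 86400.0                       # spin-down rate in s/s
```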
§.§ IGR J16194-2810 The GLS periodogram from the first two portions of the light curve of IGR J16194-2810 revealed a strong peak at a frequency of 5.9299 d^-1, corresponding to a period of 242.839(2) minutes (14570.34 seconds). The first harmonic of this period is also significantly detected in the power spectrum. I interpret this period as the neutron star spin period, this being the first time that it is detected at any wavelength. Figure <ref> shows the power spectrum of each portion of the light curve. It is noticeable that the 242.839 min period is not detected during the observations performed during Sector 39, in May 2021. The 2-20 keV MAXI light curve (panel c in Fig. <ref>) does not point to an increase or decrease of the X-ray flux during May 2021 with respect to May 2019, which, being related to the accretion rate, could have pointed to the origin of the non-detection of the neutron star spin; that origin thus remains unknown. <cit.> constructed models of symbiotic X-ray binaries with various improvements over past models, such as the accretion settling regime. In their figure 2, <cit.> present a P_spin-L_X diagram for different accretion scenarios (disc or wind-accretion) and different evolutionary stages of the companion (CHeB, core helium burning; EAGB, early AGB). At a distance of ≲ 3.7 kpc, the X-ray luminosity of IGR J16194-2810 is ≲ 7×10^34 erg s^-1 <cit.>, and with the detected spin period of 14570.34 seconds, IGR J16194-2810 is located in the region of this diagram where other symbiotic X-ray binaries are found. The yet-unknown orbital period, however, precludes distinguishing between the various models. § CONCLUSIONS By searching for the neutron star spin periods in the K2 and TESS light curves of the symbiotic X-ray binaries GX 1+4 and IGR J16194-2810, I have found that: * During the K2 observations in 2016, the neutron star in GX 1+4 continued to spin down at a rate of 1.61×10^-7 s/s. It is clear, however, that from the beginning of the spin-down phase back in 1984 until now, the spin-down rate has not been constant (Fig. <ref>). The Swift/BAT data during the same epoch show an increase in the X-ray luminosity, which, as expected if the optical light results from reprocessed X-rays, was accompanied by an optical brightening. * The increase in the X-ray luminosity during the K2 observations was not high enough to change the trend of the spin period. Such changes have been previously observed, in X-rays, and at higher X-ray luminosities <cit.>. * I report here, for the first time, the detection of a modulation in the light curve of the symbiotic X-ray binary IGR J16194-2810. The 242.839 min period is interpreted as the period of the neutron star spin. This period is transient, detected only during the observations performed in 2019, while absent in 2021. The non-detection of the spin period in 2021 does not seem to be related to the luminosity state of the source, given that neither the X-ray nor the optical luminosity changed significantly between the years 2019 and 2021. Further observations could elucidate the reasons behind the non-detection of the spin period. * The spin period of 242.839 min and the X-ray luminosity of about 10^34-35 erg s^-1 <cit.> of IGR J16194-2810 agree with model predictions and match the location of other symbiotic X-ray binaries in the P_spin-L_X diagram <cit.>. I thank the anonymous referee for useful remarks. GJML is a member of CIC-CONICET (Argentina) and acknowledges support from grant ANPCYT-PICT 0901/2017. This paper includes data collected by the Kepler mission and obtained from the MAST data archive at the Space Telescope Science Institute (STScI). 
Funding for the Kepler mission is provided by the NASA Science Mission Directorate. STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5–26555. This research made use of Lightkurve, a Python package for Kepler and TESS data analysis (Lightkurve Collaboration, 2018). aa
http://arxiv.org/abs/2307.00744v1
20230703041747
Strong uniqueness principle for fractional polyharmonic operators and applications to inverse problems
[ "Ching-Lung Lin", "Hongyu Liu", "Catharine W. K. Lo" ]
math.AP
[ "math.AP", "Primary 35R30, secondary 35R11, 26A33" ]
In this work, we investigate inverse problems for poly-fractional equations, where the poly-fractional operator is of the form P( (-Δ)^s)u := ∑_i=1^M α_i(-Δ)^s_iu for s=(s_1,…,s_M), 0<s_1<⋯<s_M<∞, s_M∈ℝ_+\ℤ. We give novel results for the unique continuation properties of P((-Δ)^s). With these results in hand, we consider the associated inverse problems, and prove the uniqueness in recovering the potential, the source function in the semilinear case, and the coefficients associated to the non-isotropy of the fractional operator. Keywords. Fractional Laplacian, unique continuation property, Calderón problem. Mathematics Subject Classification (2020): Primary 35R30; secondary 35R11, 26A33 § INTRODUCTION §.§ Mathematical Setup and Statement of the Main Results Let Ω⊂ℝ^n be a bounded Lipschitz domain, for n∈ℕ, and q∈ L^∞(Ω). Given s=(s_1,…,s_M), 0<s_1<⋯<s_M<∞, s_M∈ℝ_+\ℤ, we consider the exterior value problem P( (-Δ)^s)u + qu := ∑_i=1^M α_i(-Δ)^s_iu + qu =0 in Ω, u=f in Ω^c:=ℝ^n\Ω. Here, the fractional Laplacian (-Δ)^γ is defined for smooth, compactly supported functions via the Fourier transform for all 0<γ<∞: ℱ[(-Δ)^γ u](ξ) = |ξ|^2γû(ξ), û(ξ)=∫_ℝ^ne^-iξ· xu(x) dx. Classically, the fractional Laplacian is defined for 0<γ<1, and the higher order fractional Laplacian was first investigated in <cit.> and <cit.> using conformal geometry techniques, and later developed in <cit.> and <cit.>. We also assume that 0 is not an eigenvalue of the operator (P( (-Δ)^s) + q), i.e. if w∈ H^s_M(ℝ^n) solves (P( (-Δ)^s) + q)w=0 in Ω and w|_Ω^c=0, then w≡0. We are interested in the unique determination inverse problem for fractional polyharmonic equations. Let α_i^j,q_j∈ L^∞(ℝ^n), and consider P( (-Δ)^s)_j u_j + q_ju_j := ∑_i=1^M α_i^j(-Δ)^s_iu_j + q_ju_j =0 in Ω, u_j=f_j in Ω^c. We also define P̃( (-Δ)^s̃)u := ∑_i=1^M'α̃_i(-Δ)^s̃_iu for s̃=(s̃_1,…,s̃_M'), 0<s̃_1<⋯<s̃_M'≤ s_M, α̃_i∈ L^∞(ℝ^n), with P̃∈𝒜 for some admissible class 𝒜 which will be detailed later. Assuming the well-posedness of (<ref>), we can define the Dirichlet-to-Neumann (DtN) map formally via ℳ_q_j,α^j : H^s_M(Ω^c)→ H^-s_M(Ω), f↦. P̃( (-Δ)^s̃) u_f |_Ω , j=1,2, where u_f ∈ H^s(ℝ^n) is the unique solution to (<ref>). Then, we can prove the following result: Let Ω⊂ℝ^n be a bounded Lipschitz domain, for n∈ℕ, and q_j∈ L^∞(Ω) for j=1,2. Then ℳ_q_1,αf = ℳ_q_2,αf, for a nonzero f∈ H^s_M(Ω^c), implies q_1=q_2 in some subset E⊂Ω. Furthermore, we have the following result under additional assumptions. Let Ω⊂ℝ^n be a bounded Lipschitz domain, for n∈ℕ, and q_j∈ L^∞(Ω) for j=1,2. Let W⊂Ω^c be a nonempty open set. For any nonzero f∈ C^∞_c (W) such that u_f satisfies ℒu(x)=g(x) in Ω^c for some (local) second order elliptic operator ℒ and g(x)∈ C^∞_c(Ω^c), the relation ℳ_q_1,αf = ℳ_q_2,αf implies q_1=q_2 in some subset E⊂Ω. Furthermore, we can extend our results to consider semilinear equations of the form P( (-Δ)^s)u + F(x,u) =0 in Ω, u=f in Ω^c. Suppose F is analytic up to order L such that F(x,0)=0 for all x∈Ω, i.e. F can be written as a power series F(x,u)=∑_ℓ=1^L F^(ℓ)(x)u^ℓ/ℓ!, where F^(ℓ)(x)=∂^ℓ F/∂ u^ℓ(x,0)∈ L^∞(Ω). Then we have the following corresponding results: Let Ω⊂ℝ^n be a bounded Lipschitz domain, for n∈ℕ, and F_j with the analyticity defined above for j=1,2. Then ℳ_F_1,αf = ℳ_F_2,αf, for a nonzero f∈ H^s_M(Ω^c), implies F_1=F_2 in some subset E⊂Ω. 
Let Ω⊂ℝ^n be a bounded Lipschitz domain, for n∈ℕ, and F_j with the analyticity defined above for j=1,2. Let W⊂Ω^c be a nonempty open set. For any nonzero f∈ C^∞_c (W) such that u_f satisfies ℒu=g in Ω^c as in Theorem <ref>, the relation ℳ_F_1,αf = ℳ_F_2,αf implies F_1=F_2 in some subset E⊂Ω. Similar results can also be said for the coefficients α_i. Let Ω⊂ℝ^n be a bounded Lipschitz domain, for n∈ℕ, and q∈ L^∞(Ω) for j=1,2. Assume that α_i^1=α_i^2 for every i=1,…,M except i=m. Then ℳ_q,α^1f = ℳ_q,α^2f, for a nonzero f∈ H^s_M(Ω^c) implies α_m^1=α_m^2 in some subset E'⊂Ω. Let Ω⊂ℝ^n be a bounded Lipschitz domain, for n∈ℕ, and q∈ L^∞(Ω) for j=1,2. Assume that α_i^1=α_i^2 for every i=1,…,M except i=m. Let W⊂Ω^c be a nonempty open set. For any nonzero f∈ C^∞_c (W) such that u_f satisfies ℒu=g in Ω^c as in Theorem <ref>, the relation ℳ_q,α^1f = ℳ_q,α^2f implies α_m^1=α_m^2 in some subset E'⊂Ω. We will show these results using a minimal number of measurements (single in the case of the potential and coefficient, and L for the semilinear case). In particular, our proof relies on the following unique continuation principles (UCPs). If u∈ H^s_M(ℝ^n) satisfies P̃( (-Δ)^s̃)u = 0 in Ω, u=0 in Ω^c, then u≡ 0 in ℝ^n. Here, we can assume that the value function is 0 away from the domain Ω, which is usually the medium in which the particle/species lives. Alternatively, it is also possible to input a function f which is defined everywhere in the exterior Ω^c (for instance, f may be nonlocal but concentrated near the boundary ∂Ω). On the other hand, if we assume that the value function is the solution of a second order linear elliptic problem, such as in most diffusion in non-viscous fluids, we have the following corollary: Let W be a nonempty open subset in Ω^c. Let ℒ be any second order elliptic operator. If u∈ H^s_M(ℝ^n) satisfies P̃( (-Δ)^s̃)u = 0 in Ω, ℒu=0 in Ω^c, u=0 in W, then u≡ 0 in ℝ^n. §.§ Discussions and Historical Remarks The study of inverse problems in the context of partial differential equations (PDEs) have long fascinated researchers. The recovery of internal properties of a medium (corresponding to certain terms in an equation/system or operator) from indirect measurements (corresponding to information on solutions of equations in certain domains) remains a pertinent problem in many scientific disciplines such as electromagnetism, geophysics, medical imaging and economics. Consequently, the study of inverse problems associated with partial differential equations remains an active and influential research area. One of the most famous problem in this area is the Calderón problem arising in electrostatics. The classical Calderón problem investigates whether one can determine the electrical conductivity σ(x) of a medium by making voltage and current measurements at its boundary. It is modeled by the following Dirichlet problem: ∇· (σ∇ u) = 0 in Ω, u = f on ∂Ω, where the conductor filling Ω is a bounded domain with smooth boundary. In mathematical terms, the Calderón problem asks whether one can determine σ from the knowledge of the Dirichlet-to-Neumann map defined by Λ_σ: f↦. σ∂ u/∂ν|_∂Ω. Physically, this means that we apply a voltage f at the boundary ∂Ω, which will induce a voltage u(x) in Ω, and we measure the current σ∂ u/∂ν at the boundary ∂Ω. Beginning with the seminal work of Calderón in <cit.>, the inverse conductivity problem has been studied intensively. Numerous positive result have been obtained, including in <cit.>, <cit.>, <cit.> and <cit.>. 
In particular, in <cit.>, Alessandrini reduced the conductivity-type problem to a Schrödinger-type one, where one attempts to determine the potential q(x) in - Δ v + qv = 0 in Ω, v = f on ∂Ω from the measurement map Λ_q: f↦. ∂ v/∂ν|_∂Ω. Recently, the study of equations involving non-local operators have gained substantial attention. A typical non-local operator is the fractional Laplacian (-Δ)^s. These kinds of equations are interesting due to their capability to model complex systems. Such effects arise in a diverse range of disciplines, in which the presence of anomalous diffusion effects, long-range correlations, and memory effects necessitates the consideration of fractional problems. By applying the concept of fractional calculus to problems in control theory, optimization, image processing, structural dynamics, signal processing, epidemiology, and population dynamics, one is able to model and analyse better complex physical phenomena. Correspondingly, inverse problems associated with fractional operators have been studied. A major point of interest is the fractional Schrödinger equation in the field of fractional quantum mechanics, which arises naturally as a generalisation of the classical Schrödinger equation. The Calderón problem for the fractional Schrödinger equation was first solved by Ghosh, Salo and Uhlmann in <cit.>. In thiis work, instead of the (boundary value) Dirichlet problem associated with the classical Calderón problem, the authors considered the exterior value Dirichlet problem (- Δ)^γ u + qu = 0 in Ω, u = f in Ω^c, for γ∈(0,2). The inverse problem asks whether one can determine the potential q in Ω from the exterior partial measurements of the Dirichlet-to-Neumann map Λ_q: f↦. (- Δ)^γ u|_Ω^c. This problem has a positive answer in <cit.>, where the Dirichlet-to-Neumann map Λ_q uniquely determines q in Ω. This result was then generalised in numerous works in many different directions, including in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> and <cit.>, to name a few. The proof of the fractional Calderón problem strongly relies on the strong uniqueness property: for u∈ H^γ/2(ℝ^n), u = ℒ^γ u = 0 in an arbitrary nonempty open set in ℝ^n implies u≡0 in ℝ^n for appropriately defined fractional Laplacian-type operators ℒ^γ. The proof of the unique continuation property above is based on the Caffarelli-Silvestre definition <cit.> of the fractional Laplacian (-Δ)^γ u(x) := C_n,γlim_y→ 0^+ y^1-2γ∂/∂ y U(x,y) for x∈ℝ^n for some constant C_n,γ, where U is the solution of the extension problem ∇· (y^1-2γ∇ U )=0 in ℝ^n+1_+, U (x,0)=u(x) in ℝ^n, This definition enables us to derive properties of the fractional Laplacian (-Δ)^γ from local arguments in the extension problem, such as in <cit.>. We remark that there are distinct differences between the classical and fractional Calderón problems. In particular, no construction of CGO solutions is required in dealing with the fractional problem. On the other hand, the unique continuation property is a distinctive feature of fractional operators which is not present in local operators, and it makes fractional inverse problems more manageable and help us obtain strong results. Indeed, it has been observed in <cit.> that a uniqueness result in the local case guarantees a uniqueness result in the fractional setting, but the result does not hold vice versa. 
Consequently, in the consideration of higher order fractional Laplacians (-Δ)^γ for γ∈(0,∞), many works have focused on the derivation of a unique continuation principle, such as in <cit.>, <cit.> , <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. Once again, many of these works relied on a Caffarelli-Silvestre-type extension, which have been extended to higher fractional exponents γ. The higher order fractional Laplacian extends the fractional Laplacian by considering fractional orders γ greater than 2. Higher order fractional Laplacians provide a way to capture even more intricate details of the function's curvature and variations, and was first considered in geometrical settings in <cit.>. It is useful for analysing and modeling complex data with non-local dependencies, enabling a more accurate representation and understanding of intricate structures and patterns. Most previous works have focused on fractional-type operators with a single order of singularity, i.e. the kernel of ℒ^γ is of the form σ(x,y)|y|^-d-γ where σ(x,y) is homogeneous of order zero and sufficiently smooth in y. Fractional partial differential equations with mixed singularities are much less understood. Recently, various existence and regularity results for the forward problems involving mixed fractional operators have been obtained. This includes cases of sums of fractional Laplacians as in <cit.>, mixing a local operator with a nonlocal operator such as in <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, <cit.> or <cit.>, as well as cases where different orders of fractional operators are applied on the space and time variables separately such as in <cit.>, <cit.> or <cit.>. Operators with mixed fractional orders (including the classical local Laplacian) arise naturally from the superposition of multiple stochastic processes with different scales, including classical random walks and Lévy flights. When a particle follow either of these processes according to a certain probability, the associated limit diffusion equation is described by a mixed fractional operator as in (<ref>). See Appendix B of <cit.> for a thorough probabilistic discussion of this phenomenon. Such a phenomenon can also be seen in biological models, as explained in <cit.>, <cit.>, or <cit.>, and these mixed operators describe biological species whose individuals diffuse by a mixture of random walks and jump processes, according to prescribed probabilities. Indeed, mixed operators allow us to study the inter-correlating impact of local and nonlocal effects, such as in <cit.>, and is a powerful tool in applied sciences. In fact, a classical model involving mixed fractional orders is that of the surface quasi-geostrophic equation, which is a semilinear anisotropic advection-diffusion equation involving a fractional operator and the classical gradient (see for instance <cit.> or <cit.>). We remark that a conceptually different yet closely related operator is those of the ones of variable exponents, and previous works include <cit.>, <cit.>, <cit.>, <cit.> and <cit.>. Such operators and their associated models arise in some physical phenomena (see for instance <cit.> and <cit.>). Correspondingly, their inverse problems have been considered in <cit.> and <cit.> for the time-fractional case, but there has not yet been any result for the space-fractional Laplacian. 
Despite the increased versatility for models based on fractional operators with mixed orders, which we will call poly-fractional operators, the associated problems in consideration are very challenging due to the presence of both nonlocal and local effects. Indeed, it was shown in <cit.> that the associated Caffarelli-Silvestre extension is way more complicated. As such, previous results have only been for the poly-fractional time operator, such as in <cit.>. To the best of our knowledge, there have not yet been any results on unique continuation properties of poly-fractional operators in the space domain, let alone any results on inverse problems involving such operators. In this paper, we consider these poly-fractional operators P( (-Δ)^s). We first give some novel unique continuation properties associated to them. Such properties are essential for the study of their associated exterior value problems (<ref>). Then as in <cit.> and later works, we derive the uniqueness of various environmental effects in the model from the knowledge of the Dirichlet-to-Neumann map f↦. P( (-Δ)^s) u_f |_Ω. We recover the potential, the source function in the semilinear case, and the coefficients associated to the non-isotropy of the fractional operator. Therefore, our problems can be viewed as variants of the fractional Calderón problem studied in <cit.>. We believe that our work opens up the doors for poly-fractional inverse problems, and holds promise for addressing a multitude of real-world problems. It should be noted that another form of poly-fractional inverse problems have previously been considered, where the fractional problem takes the form of a Caputo fractional-time derivative and a space-fractional Laplacian of Mittag-Leffler type. For more details of the problem setup, refer to <cit.>, <cit.>, <cit.>, <cit.> or the references therein. In these works, the authors made use of the spectral eigenfunction expansion of the weak solution to the initial/boundary value problem, to recover the fractional exponents from the initial value function. Such a method heavily relies on the structural definition of the fractional Laplacian, which is difficult to be generalised. §.§ Organisation of This Paper The rest of this paper is structured as follows. In Section <ref>, we provide rigorous mathematical formulations of the fractional operator P((-Δ)^s) for s=(s_1,…,s_M), 0<s_1<⋯,s_M<∞ and fractional Sobolev spaces. In Section <ref>, we prove the unique continuation properties of P((-Δ)^s), which will form the basis of our study of the associated inverse problems. In Section <ref>, we prove the uniqueness in recovering the potential, the source function in the semilinear case, and the coefficients associated to the non-isotropy of the fractional operator. We end with some final remarks and open problems in Section <ref>. § PRELIMINARIES §.§ Fractional Sobolev spaces We first begin with a review of fractional Sobolev spaces. The fractional Sobolev spaces H^s(ℝ^n) for all real positive s are defined by H^s(ℝ^n)={u∈𝒮'(ℝ^n):{ξ↦(1+|ξ|^2)^s/2û(ξ)}∈ L^2(ℝ^n)}, with norm u_H^s(ℝ^n)=(1+|ξ|^2)^s/2û_L^2(ℝ^n), and its dual space H^-s(ℝ^d):={ξ∈𝒮'(ℝ^d):{1+|ξ|^-sξ̂}∈ L^2(ℝ^d)}, where 𝒮 is the Schwartz space and 𝒮' the dual, and û(ξ)=∫_ℝ^de^-2π ix·ξu(x) dx is the Fourier transform of u. Let ω⊂ℝ^n be an open set. We define the following fractional Sobolev spaces H̃^s(ω) := closure of C_c^∞(ω) in H^s(ℝ^n), H^s(ω) := {u|_ω : u∈ H^s(ℝ^n)}, which is complete with norm u_H^s(ω):=inf{v_H^s(ℝ^n): v∈ H^s(ℝ^n) and v|_ω=u}. 
We also define H^s_0(ω) := closure of C_c^∞(ω) in H^s(ω) and H_ω^s(ℝ^n):={u∈ H^s(ℝ^n):supp(u)⊂ω}. Observe that H̃^s(ω)⊂ H^s_0(ω) and the duals (H̃^s(ω))^*= H^-s(Ω) and (H^s(ω))^*=H̃^-s(ω). If ω is, in addition, a Lipschitz domain, H̃^s(ω)=H_ω^s(ℝ^n) for all s∈ℝ and H^s_0(ω)=H_ω^s(ℝ^n) for r≥0, r≠2k+1/2, k∈ℕ. For more detailed discussion regarding these Sobolev spaces, we refer readers to the reference <cit.>. §.§ Nonlocal Operators Defined via the Fourier Transform With the definitions of the previous section, we are able to obtain a solution to the forward problem. We first begin by recalling the Poincaré inequality for higher order fractional Laplacians. Let s∈ℝ_+\ℤ, K⊂ℝ^n be a compact set, and u∈ H^s_K(ℝ^n). Then there exists a constant C>0 depending on n, K and s such that u_L^2(ℝ^n)≤ C(-Δ)^s/2u_L^2(ℝ^n). For any operator A((-Δ)^α) := α_1 (-Δ)^a_1 + α_2 (-Δ)^a_2 + ⋯ + α_p (-Δ)^a_p, with α=(α_1,…,α_p), 0≤ a_1<⋯<a_p<∞ for some p∈ℕ, p<∞, a_i∈ℝ and α_i>0, α_i∈ L^∞(ℝ^n) for i=1,…,p, we define the bilinear form ⟨ A((-Δ)^α)u,v⟩ = ⟨α_1, ((-Δ)^a_1u)v ⟩ + ⟨α_2, ((-Δ)^a_2u)v ⟩ + ⋯ + ⟨α_p, ((-Δ)^a_pu)v⟩. We also define the operator A((-Δ)^α) + qu in Ω, with bilinear form A_q(u,v):=⟨α_1, ((-Δ)^a_1u)v ⟩ + ⟨α_2, ((-Δ)^a_2u)v ⟩ + ⋯ + ⟨α_p, ((-Δ)^a_pu)v⟩ + (qu,v)_Ω. Here, ⟨φ,ψ⟩ := ∫_ℝ^nφψ dx, for any φ,ψ∈ L^2(ℝ^n), (φ,ψ)_Ω:= ∫_Ωφψ dx, for any φ ,ψ∈ L^2(Ω). Let α=(α_1,…,α_p), 0≤ a_1<⋯<a_p<∞ for some p∈ℕ, p<∞, a_i∈ℝ and α_i>0, α_i∈ L^∞(ℝ^n) for i=1,…,p. Let f∈ H^α_p(ℝ^n) and F∈ H^-α_p(Ω). Then the problem A((-Δ)^α)u=F in Ω, u=f in Ω^c has a unique weak solution u∈ H^α_p(ℝ^n), i.e. u satisfies ⟨ A((-Δ)^α)u,v⟩ = ⟨ F,v⟩ for all v∈H̃^α_p(Ω). Moreover, u satisfies ∑_i=1^pu_H^α_i(ℝ^n)≤ C ( F_ (H^α(Ω))^* + f_ (H^α(Ω^c))^* ) , for some constant C>0 independent of u, f and F. Here ·_ (H^α(Ω))^* := ∑_i=1^pF_ (H^α_i(Ω))^* . The proof follows by observing that the terms (-Δ)^a_1,…,(-Δ)^a_p-1 are all lower order terms, so we can simply apply standard arguments. Indeed, since α_i>0, α_i∈ L^∞(ℝ^n), there exists a bound a_* and a^*. Therefore, that the bilinear form is bounded since for u,v∈ C_c^∞(ℝ^n), ⟨ A((-Δ)^α)u,v⟩ ≤α_i_L^∞(ℝ^n)u_H^α_p(ℝ^n)v_H^α_p(ℝ^n) + ∑_i=1^p-1α_i_H^α_p-α_i(ℝ^n)(-Δ)^a_iu_H^α_p-α_i(ℝ^n)v_H^α_p(ℝ^n) ≤ a^*(u_H^α_p(ℝ^n)+⋯+u_H^α_p(ℝ^n))v_H^α_p(ℝ^n) = a^*pu_H^α_p(ℝ^n)v_H^α_p(ℝ^n) by Hölder's inequality, and so this also holds for u,v∈ H^α_p(ℝ^n) by density. On the other hand, ⟨ A((-Δ)^α)u,u⟩≥ a_*(u_H^α_1(ℝ^n)^2+⋯+u_H^α_p(ℝ^n)^2)≥ a_*u_H^α_p(ℝ^n)^2, so the bilinear form is coercive. Finally, ⟨ F,v⟩ is clearly a bounded linear functional on H^α_p(ℝ^n), so we can apply the Lax-Milgram theorem to obtain a solution u∈ H^α_p(ℝ^n). Uniqueness follows from linearity, so if u_1, u_2 are two solutions, then setting w:=u_1-u_2, ⟨ A((-Δ)^α)w,v⟩ = 0 for all v∈H̃^α_p(Ω). In particular, taking v to be w, we have 0=⟨ A((-Δ)^α)w,w⟩≥ a_*w_H^α_p(ℝ^n)^2. Thus, w≡0 and u_1=u_2. Finally, the estimate follows as in the proof of coercivity, with the use of the Cauchy-Schwarz inequality. Special cases of this result can be found in Lemma 3.4 of <cit.> with a single higher order fractional Laplacian and local lower order terms, in Theorem 1.1 of <cit.> with a local fractional Laplacian and lower order fractional terms, in Lemma 5.1 of <cit.> with a single higher order fractional Laplacian and a single rough potential, and in <cit.> for infinitely many fractional Laplacians of order 0<s<1. 
With the well-posedness at hand, we are able to define the corresponding DtN map ℳ_q,α rigorously, via the bilinear form (<ref>): ⟨ℳ_q,α f, g ⟩:= A_q (u_f, g), where u_f is the unique solution to (<ref>), and g∈ C^∞_0(ℝ^n) could be arbitrary. §.§ Admissibility Conditions Suppose T:𝒮(ℝ^n)→ L^2(ℝ^n) is an abelian linear operator acting on the Schwartz space 𝒮(ℝ^n). We define 𝒜 to be the admissible algebra of all real posynomials A(T) with positive L^∞(ℝ^n) coefficients acting on the operator T by A(T) := α_1 T^a_1 + α_2 T^a_2 + ⋯ + α_p T^a_p, 0≤ a_1<⋯<a_p<∞, p<∞, a_i∈ℝ, p∈ℕ, such that there exists another real posynomial B with (possibly negative) L^∞(ℝ^n) coefficients such that their (usual, pointwise-defined) product BA is of the form B(T)A(T) = ∑_j=1^J c_j T^n_j + c_r T^r, 0≤ n_1<⋯<n_J<∞, r>0, J<∞, with n_j∈ℕ, r∉ℕ, c_j,c_r≠0 for all j=1,…,J. By density, this definition can be extended to any operator T:L^2(ℝ^n)→ L^2(ℝ^n), with T(u)=∞ if u∉H^n_J(ℝ^n)∩ H^r(ℝ^n). It is obvious that for all real posynomials p∈ℝ[z], p=λ_1z^l_1+λ_2z^l_2+⋯+λ_Lz^l_L, given an operator T in the space of all abelian linear operators from 𝒮(ℝ^n) to L^2(ℝ^n), we can write p(T)=λ_1T^l_1+λ_2T^l_2+⋯+λ_LT^l_L, and the map p↦ p(T) is a unital homomorphism. Therefore, our definition above makes sense. Furthermore, the operator (-Δ)^s is commutative for any 0<s<∞ (see, for instance, Lemmas 4.2 and 4.3 of <cit.>, or <cit.> where the authors made use of conformal geometry techiques). Therefore, for every A∈𝒜, we can write A((-Δ)^s). It should be remarked that the admissible set 𝒜 is non-empty. In particular, for p≥2, if the coefficients α_i of P(T) := ∑_i=1^M α_iT^s_i satisfy the relation 2α_i=α_i-1+α_i+1, then it can be easily seen that P∈𝒜. On the other hand, for p=2, it can also be easily verified that if 2(α_2-α_1)∉ℕ, then P∈𝒜. § UNIQUE CONTINUATION PROPERTIES In this section, we prove more general unique continuation property (UCP) for fractional polyharmonic equations. We first begin by stating the UCP for a single higher order fractional Laplacian. We give the result of <cit.> for fractional Laplacian of positive order: Suppose γ∈ℝ_+\ℕ and u∈ H^r(ℝ^n) for some r∈ℝ. If (-Δ)^γ u|_ω=u|_ω=0 for some nonempty open set ω⊂ℝ^n, then u≡0 in ℝ^n. A strong UCP result was first stated as Corollary 5.5 in <cit.> and proved in <cit.> for 1<s<2, where the vanishing of u of infinite order at a point x_0∈ω⊆ℝ^n implies u≡0 in ω for a connected open domain ω. This was later extended in <cit.> for fractional Schrödinger equations with Hardy type gradient potentials for s∈ℝ_+\ℕ. For other similar works, see <cit.> in the case where s depends on the dimension n, <cit.> for the UCP result with lower order (<s) perturbation, and <cit.> for the UCP result for fractional p-Laplacians. Another UCP result is also given in <cit.> for the higher order fractional Laplacian defined spectrally, which may coincide with the fractional Laplacian we chose here, since they appear to satisfy the same extension problem of Caffarelli-Silvestre type. With Theorem <ref> in hand, we are able to prove our main UCP results, Theorems <ref> and <ref>. We first begin with a lemma. Let W be a nonempty open subset in Ω^c. If u∈ H^s_M(ℝ^n) satisfies P̃( (-Δ)^s̃)u = 0 in ℝ^n, u=0 in W, then u≡ 0 in ℝ^n. We first show this for u,v∈ C_c^∞(ℝ^n), and obtain the result for u∈ H^s_M(ℝ^n) by density since (<ref>) is then well-defined. 
Since P̃∈𝒜, there exists B((-Δ)^b) such that B((-Δ)^b)P̃( (-Δ)^s̃) u(x)= ∑_j=1^J c_j (-Δ)^n_j u(x) + c_r (-Δ)^r u(x), J<pq,x∈ℝ^n, where n_j∈ℕ, r∉ℕ, c_j,c_r≠0 for all j=1,…,J. By assumption, P̃( (-Δ)^s̃)=0 in ℝ^n, so B((-Δ)^b)P̃( (-Δ)^s̃) u(x)= ∑_j=1^J c_j (-Δ)^n_j u(x) + c_r (-Δ)^r u(x)=0 for all x∈ℝ^n. In particular, this holds for x∈ W, where we have assumed u(x) = 0, which gives (-Δ)^n_j u(x) = 0 in W for every n_j∈ℕ. Consequently, from (<ref>), we obtain (-Δ)^r u(x) = 0 in W. Applying the unique continuation principle for (-Δ)^r in W given in Theorem <ref>, we can obtain u≡0 in ℝ^n. From this, we are able to prove Theorem <ref>. Since u≡0 in the exterior of Ω, we can view the condition (<ref>) as a Dirichlet problem for u in Ω with Dirichlet condition 0. Applying the existence Theorem <ref> with F=f≡0, we have that u≡0 by (<ref>). This means that u satisfies P̃( (-Δ)^s̃)u = 0 in ℝ^n, u=0 in Ω^c, which in particular, means that u satisfies P̃( (-Δ)^s̃)u = 0 in ℝ^n, u=0 in W for any nonempty open subset W⊂Ω^c, i.e. u satisfies the condition (<ref>) of Lemma <ref>. Therefore, we can apply the result of Lemma <ref> to obtain that u≡0 in ℝ^n. This result implies our final unique continuation principle. Since u satisfies ℒu=0 in Ω^c, u=0 in W⊂Ω^c, by maximum principles in the classical theory of second order elliptic operators, we have that u=0 in Ω^c. By the previous Theorem <ref>, we have the result u≡0 in ℝ^n. § INVERSE PROBLEMS §.§ Recovery of Potential We first assume that α_i^1=α_i^2, i.e. P( (-Δ)^s)_1=P( (-Δ)^s)_2=:P( (-Δ)^s) in (<ref>). Suppose u_j, j=1,2, satisfies (<ref>), i.e. P( (-Δ)^s) u_j + q_ju_j =0 in Ω, u_j=f_j in Ω^c. Since ℳ_q_1,αf =ℳ_q_2,αf, for a given P̃( (-Δ)^s̃), P̃( (-Δ)^s̃)u_1 = ℳ_q_1,αf_1 = ℳ_q_1,αf_2 = P̃( (-Δ)^s̃)u_2 in Ω. At the same time, . u_1|_Ω^c = f_1 = f_2 = . u_2|_Ω^c in Ω^c. Since P̃ is linear (any fractional power of the Laplacian is linear by the linearity of the Fourier transform), writing ũ=u_1-u_2∈ H^s_M(ℝ^n), we have that P̃( (-Δ)^s̃)ũ = 0 in Ω, ũ=0 in Ω^c, i.e. ũ satisfies (<ref>). By the UCP Theorem <ref>, we obtain that ũ≡0 in ℝ^n, i.e. u_1=u_2 in ℝ^n. In particular, ũ=u_1-u_2=0 in Ω. Therefore, taking the difference of the two equations of (<ref>), we have 0 = P( (-Δ)^s) ũ + (q_1-q_2)u_1 + q_2 ũ = (q_1-q_2)u_1 in Ω. But for a nonzero input f_1 in Ω^c, the Lax-Milgram theorem guarantees a non-trivial solution in Ω. Therefore, in some small enough open set E⊂Ω, u_1(x)≠0 for all x∈ E. Restricting to E, we have that (q_1(x)-q_2(x))u_1(x)=0 in E, so q_1(x)=q_2(x) for x∈ E. Observe that (<ref>)–(<ref>) are pointwise in x∈ E. In fact, we are only using a single measurement in here, unlike the infinite measurements usually required for previous results as in <cit.>, <cit.> or <cit.>. The proof follows along the same lines as that of Theorem <ref>. However, instead of (<ref>), we have that . u_1|_W = f_1 = f_2 = . u_2|_W in W. Since u_j is assumed to satisfy ℒu_j=g in Ω^c for some second order elliptic operator ℒ, consequently, ũ=u_1-u_2∈ H^s_M(ℝ^n) satisfies P̃( (-Δ)^s̃)ũ = 0 in Ω, ℒũ=0 in Ω^c, ũ=0 in Q, i.e. ũ satisfies (<ref>). By the UCP Theorem <ref>, we obtain that ũ≡0 in ℝ^n, i.e. u_1=u_2 in ℝ^n. Therefore, the remaining continues as in the previous proof, and we obtain q_1(x)=q_2(x) for x∈ E. §.§ Semilinear Case As in <cit.>, we make use of a higher order linearisation scheme, which we briefly sketch here. Consider the system (<ref>), and observe that u=0 is a solution of the poly-fractional equation. Let f(ε)=∑_ℓ=1^Lε^ℓ f^ℓ. 
By Theorem <ref>, there exists a unique solution u(x;ε) of (<ref>). Let u(x;0) be the solution of (<ref>) when ε=0. Define u^(1):=∂_ε u|_ε=0=lim_ε→ 0u(x,t;ε)-u(x,t;0) /ε, and consider the corresponding problem for u^(1). Since F is analytic, we have P( (-Δ)^s)u^(1)(x) + F^(1)(x)u^(1)(x)=0 in Ω, u^(1)=f^1 in Ω^c. Next, we consider u^(2):=∂_ε^2 u|_ε=0, which gives the second order linearisation: P( (-Δ)^s)u^(2)(x) + F^(1)(x)u^(2)(x) + F^(2)(x)[u^(1)(x)]^2=0 in Ω, u^(2)=f^2 in Ω^c. Inductively, for ℓ∈ℕ, we consider u^(ℓ)=∂_ε^ℓ u|_ε=0, we can obtain a sequence of equations, which shall be employed again in determining the higher order Taylor coefficients of the unknown F. Note that in order to apply this high order linearisation technique, we need the infinite differentiability of the equation (<ref>) with respect to the given boundary data f, which can be easily shown by applying the implicit function theorem of Banach spaces, as in <cit.>. We omit the proof here. With this, we can proceed to prove Theorem <ref>. Comparing (<ref>) with (<ref>), we apply the results of Theorem <ref> to (<ref>) to obtain that F^(1)_1(x)=F^(1)_2(x) for all x∈ E for some small enough E⊂Ω, so we have the uniqueness result for the first order Taylor coefficient of F. Furthermore, u^(1)_1(x)=u^(1)_2(x) in ℝ^n. Thus, (<ref>) reduces to P( (-Δ)^s)u^(2)(x) + F^(2)(x)u^(2)(x)=0 in Ω, u^(2)=f^2 in Ω^c:=ℝ^n∖Ω. Once again, we can compare this with (<ref>) and apply Theorem <ref> to obtain that F^(2)_1(x)=F^(2)_2(x) for all x∈ E, since u^(2)(x)≠0 for all x∈ E⊂Ω. Reiterating this argument inductively for ℓ=1,…,L, we have that the uniqueness result for all the ℓ-th order Taylor coefficients of F. By the definition of F, this means that F_1=F_2 for x∈ E. In this case, we repeated the argument L times and made L measurements, to recover F. Since L is the order of analyticity of F, we have made use of a minimal number of measurements. Finally, The proof follows by a similar modification as in that of Theorem <ref>. Indeed, for each order of linearisation (inductively), we have P( (-Δ)^s)(u^(ℓ)_1-u^(ℓ)_2) + (F^(ℓ)_1-F^(ℓ)_2)u^(ℓ)_1+F^(1)_2(u^(ℓ)_1-u^(ℓ)_2)=0 in Ω, ℒ(u^(ℓ)_1-u^(ℓ)_2)=0 in Ω^c, u^(ℓ)_1-u^(ℓ)_2=0 in Ω^c. Therefore, as in Theorem <ref>, we can apply the UCP Theorem <ref> to obtain the F^(ℓ)_1=F^(ℓ)_2 in E for every ℓ=1,…,L. §.§ Recovery of Non-Isotropy Next, we proceed to recover the non-isotropy of the poly-fractional equation, given by the coefficients α_i of (<ref>). We first observe as in the proof of Theorem <ref> that for a nonzero input f_j in Ω^c, the unique solution u_j is non-trivial, so in some small enough open set E'⊂Ω, (-Δ)^s_m u_1(x)≠0 for all x∈ E'. Let u_j be the solution of (<ref>) for j=1,2, that is, u_j satisfies P( (-Δ)^s)_j u_j + qu_j := ∑_i=1^M α_i^j(-Δ)^s_iu_j + qu_j =0 in Ω, u_j=f_j in Ω^c Writing ũ=u_1-u_2, ũ solves P( (-Δ)^s)_1 ũ + qũ = (α_m^2-α_m^1)(-Δ)^s_mu_2 in Ω, ũ=0 in Ω^c Next, as in the proof of Theorem <ref>, the condition (<ref>) and the UCP Theorem <ref> implies ũ=0 in ℝ^n, so P( (-Δ)^s)_jũ=qũ=0 in Ω. Therefore, (α_m^2-α_m^1)(-Δ)^s_mu_2 = 0 in Ω. In particular, (α_m^2(x)-α_m^1(x))(-Δ)^s_mu_2(x) = 0 ∀ x∈ E'. But by assumption, E' is taken to be the set in Ω such that (-Δ)^s_m u_1(x)≠0. Therefore, α_m^2(x)=α_m^1(x) ∀ x∈ E'. Finally, The proof follows by a similar modification as in that of Theorem <ref>. Here, instead of (<ref>), we have P( (-Δ)^s)_1 ũ + qũ = (α_m^2-α_m^1)(-Δ)^s_mu_2 in Ω, ℒũ=0 in Ω^c, ũ=0 in Ω^c. Apply the UCP Theorem <ref> to obtain the α_m^2(x)=α_m^1(x) in E'. 
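To make the admissibility condition that drives the arguments of Sections 3 and 4 concrete, here is a minimal worked example of our own (not taken from the original paper), stated for constant coefficients α_1, α_2 > 0:

```latex
\[
  A(T) = \alpha_1 T^{1/3} + \alpha_2 T, \qquad B(T) = T^{2/3}
  \quad\Longrightarrow\quad
  B(T)A(T) = \alpha_1 T + \alpha_2 T^{5/3},
\]
% which has the required form with $J=1$, $n_1=1\in\mathbb{N}$, $c_1=\alpha_1$,
% $r=5/3\notin\mathbb{N}$ and $c_r=\alpha_2$, so $A\in\mathcal{A}$.
```

With T=(-Δ), if A((-Δ))u=0 in ℝ^n and u vanishes on an open set W, then B((-Δ))A((-Δ))u=0 as well; on W the local term α_1(-Δ)u vanishes (since u≡0 there), leaving (-Δ)^5/3u=0 on W, and the single-order unique continuation result then yields u≡0, exactly as in the proof of Lemma <ref>.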
§ FINAL REMARKS AND OPEN PROBLEMS We remark that it may also be possible to apply our UCP result to recover the variable coefficient a(x) of the anisotropic fractional Laplacian (-Δ)^s_σ, which may be of the form (-∇·σ∇)^s or -∇^s·σ∇^s. A possibility is using the fractional Liouville reduction. Similar results have been obtained in <cit.> in the case of 0<s<1, and only a UCP result in <cit.>. However, in a similar line of thought to Remark 1.4 of <cit.>, it should be noted that it remains open which definition one should take for higher order anisotropic fractional Laplacians. In <cit.>, the authors interpreted (-Δ)^s_σ through its spectral decomposition, but that possesses a different generalised Caffarelli-Silvestre extension compared to the one simply extended from the extension of the isotropic case. Furthermore, this spectral representation of the higher order fractional Laplacian may possibly be useful in the recovery of the exponent of the poly-fractional Laplacian. Indeed, previous works on recovering the exponent have made use of the eigenfunction expansion of the solution of the forward problem (see, for instance, <cit.>). Another interesting open problem is to consider the variable exponent fractional operator instead. This may include the form considered in <cit.> and <cit.>. A result is known in <cit.> and <cit.> for the time-fractional case, but there has not yet been any result for the space-fractional Laplacian. Indeed, suppose the variable exponent γ(x) of the fractional Laplacian is such that γ(x)=1 in Ω^c. Then, the UCP condition (<ref>) reduces to that of (<ref>). Therefore, we leave these two problems as interesting open problems for readers for future research. Acknowledgment. C.-L. Lin is partially supported by the Ministry of Science and Technology of Taiwan. H. Liu is supported by the Hong Kong RGC General Research Fund (projects 12302919, 12301218 and 11300821) and the NSFC/RGC Joint Research Grant (project N_CityU101/21). plain
http://arxiv.org/abs/2307.02009v1
20230705033940
Using Data Augmentations and VTLN to Reduce Bias in Dutch End-to-End Speech Recognition Systems
[ "Tanvina Patel", "Odette Scharenborg" ]
cs.CL
[ "cs.CL" ]
Speech technology has improved greatly for norm speakers, i.e., adult native speakers of a language without speech impediments or strong accents. However, non-norm or diverse speaker groups show a distinct performance gap with norm speakers, which we refer to as bias. In this work, we aim to reduce bias against different age groups and non-native speakers of Dutch. For an end-to-end (E2E) ASR system, we use state-of-the-art speed perturbation and spectral augmentation as data augmentation techniques and explore Vocal Tract Length Normalization (VTLN) to normalise for spectral differences due to differences in anatomy. The combination of data augmentation and VTLN reduced the average WER and bias across various diverse speaker groups by 6.9% and 3.9%, respectively. The VTLN model trained on Dutch was also effective in improving performance on Mandarin Chinese child speech, thus showing generalisability across languages. Index Terms: E2E ASR, bias, Vocal Tract Length Normalization (VTLN), speed perturbation, spectral augmentation § INTRODUCTION Several studies have shown that State-of-the-Art (SotA) Automatic Speech Recognition (ASR) systems struggle with large acoustic variation in speech <cit.>. These variations can be due to many (demographic) factors, including age <cit.>, gender <cit.>, race <cit.>, accents <cit.>, whispered speech <cit.>, speech impairment <cit.>, etc. In short, ASR systems perform well for norm speakers, i.e., adult native speakers of a language without speech impediments or strong accents, but show a bias against speech from diverse speakers, i.e., those speakers that deviate from the norm. In this work, we analyse and aim to reduce the bias against speakers of different age groups (children, teenagers, adults, older adults) and non-native speakers of Dutch. An often-mentioned potential source of bias is scarcity of training data from diverse speaker groups. A potential bias mitigation approach is therefore to generate synthetic training data to reduce the bias against certain speaker groups <cit.>. A second potential source of bias is the feature representation <cit.>. Acoustic differences between different age groups are mostly due to differences in vocal tract anatomy <cit.>, while non-native speech is mostly characterised by a noticeable first language (L1) accent in the pronunciation of second language (L2) sounds <cit.>. These acoustic differences between norm and diverse speech may lead to mismatches between the feature representations of norm speech vs. diverse speech, potentially causing performance degradation and bias against diverse speech. Here, we aim to improve recognition performance and reduce bias against diverse speech by 1) using SotA data augmentation techniques, specifically speed perturbation <cit.> and spectral augmentation <cit.>, and 2) reducing the feature variability between speaker groups by using Vocal Tract Length Normalization (VTLN) to scale or normalize the acoustic features <cit.>.
The VTLN approach has been extensively used to reduce inter-speaker variability for various tasks, e.g., speaker recognition <cit.> and child speech recognition <cit.>, but mostly in hybrid ASR systems. Since End-to-End (E2E) ASR systems generally outperform hybrid models for different types of speech, e.g., spontaneous, telephonic, and noisy speech <cit.>, here we investigate the usability of VTLN within the E2E framework. We train the VTLN model using both norm speech and diverse speech. VTLN can easily be trained for new languages, as it requires only audio and no extra annotation. However, collecting diverse speech (from several speaker groups) can be difficult, especially in low-resource scenarios. Hence, we explore the effectiveness of the VTLN model across languages. To that end, the VTLN model trained on Dutch is applied to Mandarin Chinese speaker groups. In this work, the ASR performance is evaluated in terms of Word Error Rate (WER) and bias. Bias is related to WERs, but an improvement in WER may not always imply a reduction in bias (as bias is evaluated with respect to a certain speaker group). An important open question is how to actually measure bias. Recently, studies have proposed measures to quantify bias against various speaker groups. In the ASR and speaker recognition literature, bias measures are generally defined as differences or ratios between the base metrics (e.g., WER, EER) of a speaker group and a reference group. For example, in <cit.>, bias against a specific diverse speaker group is computed by taking the absolute WER difference with the best performing diverse speaker group. The authors in <cit.> propose a similar measure but use the relative WER gap as the bias measure. Generally, the reference group is the minimum WER group in the category; however, there are some drawbacks to these measures (see Section 3.2) and hence we propose a new bias measure. Summarizing, in this work, we investigate the effectiveness of data augmentation and feature normalization (VTLN) as bias mitigation approaches in a Dutch E2E ASR system, focusing on both read and conversational speech, and propose a new bias measure. § METHODOLOGY Here, we describe the process of data augmentation and feature normalization by VTLN used for E2E training. §.§ Data Augmentation We consider two types of data augmentations: one applied to the raw audio wave file and one to the feature vector, i.e., speed perturbations <cit.> to increase the training data and spectral augmentation to improve system robustness <cit.>, respectively. Speed Perturbation (SP): Speed perturbation is performed by resampling the original raw speech signal, which results in a time-warped signal. Given an audio speech signal s(t), time warping by a factor β gives the signal s(β t). The Fourier transform of s(β t) is S(ω/β)/β. This implies that, in addition to the change in the duration of the signal, which affects the number of frames in the utterance, the warping factor produces shifts in the frequency components (a shift of the speech spectrum). Adding speed-perturbed data to the training data has been shown to improve ASR recognition performance <cit.>. Spectral Augmentation (SpecAug): Spectral Augmentation is applied on the log mel spectrogram of the input speech rather than the raw waveform itself. It consists of three augmentation policies: 1) time masking and 2) frequency masking (masking a block of consecutive time steps or mel frequency channels, respectively), and 3) time warping, which randomly warps the spectrogram along the time axis.
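To make the two augmentation strategies concrete, the following minimal sketch illustrates speed perturbation by resampling and SpecAug-style time/frequency masking on a log-mel spectrogram. It is an illustration only, using NumPy and a toy signal; the warping factors, mask widths, and masking value are placeholders and the sketch is not the ESPnet implementation used in this work.

import numpy as np

def speed_perturb(signal: np.ndarray, factor: float) -> np.ndarray:
    """Time-warp s(t) -> s(factor * t) by resampling with linear interpolation.
    factor > 1 shortens the signal (faster speech), factor < 1 lengthens it."""
    n_out = int(len(signal) / factor)
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(new_idx, old_idx, signal)

def spec_augment(log_mel: np.ndarray, max_t: int = 40, max_f: int = 30,
                 n_time_masks: int = 2, n_freq_masks: int = 2,
                 rng=np.random.default_rng(0)) -> np.ndarray:
    """Apply SpecAug-style time and frequency masking (time warping omitted here)."""
    spec = log_mel.copy()
    n_frames, n_mels = spec.shape
    for _ in range(n_time_masks):            # mask a block of consecutive frames
        t = rng.integers(0, max_t + 1)
        t0 = rng.integers(0, max(1, n_frames - t))
        spec[t0:t0 + t, :] = spec.mean()
    for _ in range(n_freq_masks):            # mask a block of consecutive mel channels
        f = rng.integers(0, max_f + 1)
        f0 = rng.integers(0, max(1, n_mels - f))
        spec[:, f0:f0 + f] = spec.mean()
    return spec

if __name__ == "__main__":
    wave = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)      # 1 s toy signal at 16 kHz
    slow, fast = speed_perturb(wave, 0.9), speed_perturb(wave, 1.1)
    print(len(wave), len(slow), len(fast))                         # 16000, ~17777, ~14545
    toy_log_mel = np.random.default_rng(1).normal(size=(300, 80))  # (frames, mel bins)
    print(spec_augment(toy_log_mel).shape)

In practice, speed perturbation is applied when preparing the training data (yielding the 90%/100%/110% 3-fold set described later), whereas the masking is applied on the fly to the log-mel features during training.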
SpecAug does not increase or reduce the duration of the speech signal but squeezes and stretches the spectrogram locally. Using SpecAug is computationally efficient and has also shown to improve ASR recognition performance <cit.>. §.§ Vocal Tract Length Normalization (VTLN) The vocal tract length varies from person to person and across age groups leading to variations in the speech spectrum due to the formants shifting in frequency in an approximately linear fashion. The process of compensating spectral variation due to vocal tract length variation is known as Vocal Tract Length Normalization (VTLN). The process of VTLN includes: * Train a VTLN model on a given speech database. * Estimate the warping factor α for a given test utterance and normalize the features of the test utterance with the factor. The process of VTLN warps the features to that of an ideal or reference speaker (α_r=1). For adult, male speakers, the energy in the speech spectrum is towards the lower frequencies, while it is higher for females, hence, their estimated warping factors are around α_m≥α_r and α_f≤α_r, respectively. For children, since their spectrum energies are typically even higher than female speakers, it is expected that α_c<α_r to compress the frequency axis closer to the reference. The VTLN model training is done as in <cit.>, which uses a linear feature transform corresponding to each warp factor <cit.> with a grid search that finds out the best α in the range [0.80,1.20]. § EXPERIMENTAL SETUP §.§ Databases We consider two Dutch databases: the Corpus Gesproken Nederlands (CGN) <cit.> for training the ASR system and the Jasmin-CGN corpus <cit.> for testing the different speaker groups. Additionally, we use the Mandarin Chinese Spoken Language Technology (SLT) 2021 database <cit.> for investigating the language-independence of the VTLN model trained on Dutch language. §.§.§ The Dutch Corpora Corpus Gesproken Nederlands (CGN) <cit.>: The corpus consists of native speech data spoken by norm speakers within the 18-65 years age range from the Netherlands and Flanders. We use the Netherlands data consisting of monologue and multilogue speech. The data includes lecture recordings, broadcast data, spontaneous conversations, telephonic speech, etc. The unprocessed training data consists of around 480 hours of speech and the CGN test data consists of read broadcast news (Rd) and conversational telephone speech (CTS). Table <ref> shows the train, development, and test partitions, as in <cit.> Jasmin corpus <cit.>: This corpus is an extension of the CGN corpus[CGN and Jasmin are recorded under a variety of conditions (potentially non-overlapping) leading to potentially mismatched scenarios.] consisting of read speech and Human Machine Interaction (HMI) speech spoken by various diverse speaker groups, i.e., native and non-native speaking children, teenagers and older adults, see Table <ref> for an overview. §.§.§ The Mandarin Database This dataset is a part of the Children Speech Recognition Challenge at the IEEE SLT 2021 workshop <cit.>. It has different aged speaker groups, and thus, will allow us to study the language-independence of the VTLN model trained on Dutch. The Sets A, C1, and C2 consist of adult read speech, child read speech and child conversational speech, respectively. Table <ref> shows training, development and test sets as in <cit.>. §.§ ASR System Architecture For our ASR experiments, we use the conformer architecture <cit.> trained using the ESPNet toolkit <cit.>. 
The other features and training parameters are as follows: Features: The front-end features are 80-dimensional log-mel filterbank features with 3-dimensional pitch features used for network training. The audio files are sampled at 16 kHz. Dictionary: For the Dutch ASR system, a unigram model with 5000 byte pair tokens is used. For the Mandarin ASR, a character level model is built with 5767 characters. Augmentation parameters: The training data is perturbed by modifying the speed to 90% and 110% of the original rate, creating a 3-fold training set. Post speed perturbation, SpecAug is used with the default settings, i.e., a maximum width of each time and frequency mask of T=40 and F=30, respectively. Normalization: The MFCC features are used to train a VTLN model using the Kaldi recipe <cit.>. For each wave file, the VTLN model estimates a single warping factor, typically in the range 0.8 to 1.2. The warping factors are used to scale the frequency axis during front-end feature extraction. The VTLN model is trained on two different datasets: VTLN_CGN, trained on norm speech (CGN), and VTLN_Jasmin, trained on diverse speech (Jasmin). This allows us to investigate the effect of training on norm vs. diverse speech on the estimated warping factors and ASR performance (Section 4.2). Evaluation (Error Rate): We use the Word Error Rate (WER) and Character Error Rate (CER) to evaluate the Dutch and Mandarin ASR systems' performance, respectively. Evaluation (Bias): Generally, the bias of a diverse speaker group is estimated w.r.t. a reference speaker group. The reference group is, for instance, the minimum WER group in the category <cit.>; however, this means that the bias of the reference group itself cannot be estimated. Also, a minimum WER group may not always exist. Hence, we consider the norm group as the reference speaker group. If WER_norm is the WER of the norm group of speakers and WER_spk_g is the WER of the diverse speaker group spk_g (assuming WER_spk_g > WER_norm), then the Individual Bias for speaker group spk_g is Individual Bias = WER_spk_g - WER_norm. Thus, for a total of G speaker groups, the Overall Bias of the system can be defined as Overall Bias = 1/G ∑_g (WER_spk_g - WER_norm). Here, G=10 when estimating the overall ASR system bias, i.e., five diverse speaker groups for read and HMI speech each. § RESULTS AND DISCUSSIONS We investigate the effect of data augmentation techniques and VTLN separately and combined. Table <ref> presents the WERs for different speaker groups and different speaking styles. §.§ Baseline ASR The baseline ASR system (no augmentation or normalization; row a) achieves 9.6% and 23.9% WER on read and conversational speech for norm speakers of CGN (matched condition), respectively. The baseline performed (much) worse on the Jasmin speaker groups, with the worst performances for non-native adults (NnA) and teens (NnT), and native children (DC). Even the better recognised diverse speaker groups have WERs that are more than twice that of the norm speaker group. §.§ Experiments related to data augmentation and VTLN Effect of Data Augmentation: Adding data using speed perturbations improves performance for the norm and diverse native speaker groups (row b). The improvement is largest for DC; thus, time compression and frequency scaling using SP seem to benefit child speech recognition the most. A slight performance degradation is observed for the non-native speakers, which is expected as, with SP, the amount of native (norm) data is increased, thus (further) skewing the norm vs.
diverse speech distribution in the training data. SpecAug improves recognition performance for the non-native speakers, mostly for HMI speech (row c). Averaged over all speaker groups, adding both SP and SpecAug decreases the WER by ∼3% and ∼7% for read and HMI speech, respectively, compared to the baseline. Effect of VTLN: We investigate the warping factors estimated for each of the test speaker groups by the two different VTLN models by visualising them in the box plots in Fig. <ref>. With VTLN_CGN, almost all speaker groups have α<0.9. This may be due to the fact that the model is trained with only adult speech from CGN. However, when the VTLN model is trained on diverse speech, VTLN_Jasmin, which includes almost equal amounts of data from different age groups, the warping factors are estimated well (child speech α<1 and adult speech α≈1) <cit.>. Why these better warping factors did not lead to better performance than VTLN_CGN is a topic for further investigation. The effect of VTLN on ASR system performance is shown in Table <ref> (rows d, e). With VTLN_CGN, the WER is lower than the baseline for almost all speaker groups. For VTLN_Jasmin, the results are mixed, and a significant improvement is only seen for child speech (row e). Using VTLN performs slightly better than the baseline and similar to (or a bit worse than) using data augmentation, even though data augmentation leads to thrice the amount of training data compared to using VTLN. Effect of augmentation and VTLN: To investigate the effect of data augmentation and VTLN together, we apply both SP and SpecAug and also normalize the features while training the models. With the VTLN models trained on CGN and Jasmin (rows f and g, respectively), the performance improved across all speaker groups compared to when only using augmentations (row c), indicating that the bias reduction methods are complementary in their effect on the WER. In addition, the better warping factors estimated with VTLN_Jasmin (see Fig. 1) indeed lead to the lowest average WER of all systems. The performance improvement for the diverse speaker groups is observed without much affecting the performance of norm speakers. §.§ Bias in the Dutch ASR System Table <ref> shows the bias as calculated using the WERs in Table <ref>. The overall bias is larger for read speech than for HMI speech for all models. This is most likely due to the very low WER for norm CGN read speech (Rd) compared to norm CGN conversational speech (CTS), thus resulting in a larger WER gap and a larger bias against diverse speakers for read speech than for HMI speech. The average overall bias was reduced by 2.2% with SP+SpecAug compared to the baseline. On further applying VTLN, the bias was reduced by an additional 1.72%. Figure 2 shows the average bias for the individual diverse speaker groups for the baseline system (blue), when applying data augmentations (red), VTLN trained on Jasmin (yellow), and when applying both (green). The bias was largest for NnA, NnT, DC, DOA, DT, in order of decreasing bias. Importantly, the best performing system, i.e., with data augmentation and VTLN trained on Jasmin, also resulted in the lowest bias for all diverse speaker groups. The smallest bias for native teenagers can potentially be due to their vocal tract characteristics and speaking styles being similar to those of norm speakers, while the vocal tract characteristics of children and the speaking styles of non-native speakers and older adults differ more from norm speech, negatively impacting recognition performance.
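To make the bias computation concrete, the small sketch below shows how the individual and overall bias measures defined above can be derived from per-group WERs. The WER values and the number of groups are invented placeholders, not the values reported in the tables (the paper uses G=10, i.e., five diverse groups for read and HMI speech each).

# Hypothetical per-group WERs (%); placeholders only, not the reported results.
wer = {
    "norm": 10.0,                           # reference (norm) speaker group
    "DC": 30.0, "DT": 18.0, "DOA": 24.0,    # native children, teens, older adults
    "NnT": 33.0, "NnA": 36.0,               # non-native teens and adults
}

def individual_bias(wer_group: float, wer_norm: float) -> float:
    """Individual bias = WER_spk_g - WER_norm (assumes WER_spk_g > WER_norm)."""
    return wer_group - wer_norm

def overall_bias(wers: dict, norm_key: str = "norm") -> float:
    """Overall bias = (1/G) * sum_g (WER_spk_g - WER_norm) over the G diverse groups."""
    diverse = [v for k, v in wers.items() if k != norm_key]
    return sum(v - wers[norm_key] for v in diverse) / len(diverse)

for group, value in wer.items():
    if group != "norm":
        print(group, individual_bias(value, wer["norm"]))
print("overall bias:", overall_bias(wer))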
§.§ Language Independence of the VTLN Model To investigate whether the VTLN model can be used across languages we used the two VTLN models trained on Dutch to estimate the warping factors for the Mandarin Chinese speaker group during testing. The baseline Mandarin ASR system is trained using the Mandarin adult read speech data from SetA (norm speech), with speed perturbations and SpecAugment (similar to the Dutch model). Next, using the VTLN models (VTLN_CGN and VTLN_Jasmin), we estimate the warping factors for the test sets SetA (norm), SetC1 (child read speech), SetC2 (child spontaneous speech) of the Mandarin dataset. Table <ref> (row-a) shows the CERs for the baseline system without normalization, and when VTLN_CGN (row b) and VTLN_Jasmin (row c) VTLN models are applied to the test sets. For the baseline, the CER for child read speech (SetC1) is highly similar to that of the adult speakers (SetA). The performance for the conversational speech of 4-11 year old children (SetC2) is almost 4 times higher than norm speech, likely due to the younger age of some of the speakers and of course due to the conversational nature of the speech. Considering that SetA consists only of adult (norm) speech, we did not expect to find an improvement for SetA, which was indeed the case. Despite expecting improvements for the two child speech sets, none was observed for the SetC1. For the conversational child speech (SetC2), a small reduction in CER was observed for both the VTLN_CGN and the VTLN_Jasmin models. In short, we observe that feature normalization by VTLN can help to reduce the pronunciation variations due to vocal tract differences across languages. § SUMMARY AND CONCLUSIONS In this work, we investigated the effectiveness of using data augmentation and feature normalization by VTLN with E2E models. We observe that with augmentation and VTLN, there is a reduction in WER and in bias against age and non-native accented speech. Generally, VTLN has been applied for child speech recognition and in an hybrid ASR framework while in this work, we investigate the usefulness of VTLN for improving recognition performance and reducing bias against other diverse speaker groups as well in an E2E-ASR framework. We observed improved recognition performance when using only SP for the native speaker groups. Adding SpecAug improved the recognition performance of the non-native speakers particularly. Thus, data augmentations helped to use norm speaker data to improve performance of diverse speakers. VTLN gave comparable recognition results across the board but with far less training data. The combination of speed perturbation, SpecAug, and VTLN gave the best recognition performances and reduced bias the most. Bias was and remained highest against non-native speakers, which implies that the acoustic properties of native and non-native accented speakers are rather different and cannot be straightforwardly compensated with data augmentation or feature normalization. Ideally the warping factors are speaker specific and should be language independent. Our final experiment showed that a VTLN model trained on one language is able to some extent extract warp factors for another language and hence, VTLN can be used as a pre-processing module to the ASR for another language. With just normalizing the test features, improvement is observed. Possibly, the VTLN model can be further improved when trained with diverse speech from several languages as well. 
In the future, we will investigate the efficacy of VTLN and other combinations of data augmentation techniques to further reduce the bias against non-native speakers, and to improve recognition performance and lower bias across more diverse groups, in our aim to build inclusive automatic speech recognition.
http://arxiv.org/abs/2307.03153v1
20230706172934
MultiVENT: Multilingual Videos of Events with Aligned Natural Text
[ "Kate Sanders", "David Etter", "Reno Kriz", "Benjamin Van Durme" ]
cs.IR
[ "cs.IR", "cs.CV", "cs.MM" ]
*Equal contribution. Everyday news coverage has shifted from traditional broadcasts towards a wide range of presentation formats such as first-hand, unedited video footage. Datasets that reflect the diverse array of multimodal, multilingual news sources available online could be used to teach models to benefit from this shift, but existing news video datasets focus on traditional news broadcasts produced for English-speaking audiences. We address this limitation by constructing MultiVENT, a dataset of multilingual, event-centric videos grounded in text documents across five target languages. MultiVENT includes both news broadcast videos and non-professional event footage, which we use to analyze the state of online news videos and how they can be leveraged to build robust, factually accurate models. Finally, we provide a model for complex, multilingual video retrieval to serve as a baseline for information retrieval using MultiVENT. § INTRODUCTION Information dissemination for current events has traditionally consisted of professionally collected and produced materials, leading to large collections of well-written news articles and high-quality videos. As a result, such materials form the basis for significant prior work in content analysis and retrieval <cit.>. Meanwhile, a high volume of event-centric content today is generated by non-professionals, such as on-the-scene witnesses to events who hastily capture videos and upload them to the internet without further editing. We propose that this contemporary landscape of news content can be leveraged by models to produce a more comprehensive understanding of events. News agencies have adapted to this shift, often collecting and incorporating this online content into official broadcasts, but news video datasets still do not typically address this new domain of event coverage. In addition to focusing on traditional news sources, existing news video datasets predominantly consider content produced in English. This is consistent with common practices in video dataset collection: Collected videos and captions are recorded in English, and when multilinguality is considered, it is achieved by directly translating captions and transcripts <cit.>. Because this data is originally produced for English-speaking audiences, these multilingual datasets can contain unwanted content biases like "translationese" <cit.>. As event-centric video content produced in other languages makes up a large portion of news videos online, we argue that including organic, multilingual content is necessary for a diverse and perspective-agnostic sampling of event coverage. With these ideas in mind, we present MultiVENT, a dataset of Multilingual Videos of Events with aligned Natural Text that contains 2,396 diverse, event-centric videos and text descriptions that reflect the distribution of news content online. The videos are grounded in natural language video descriptions and long-form text documents, and the data spans 260 current events across over forty countries. The content in MultiVENT is collected in five target languages: Arabic, Chinese, English, Korean, and Russian, and as the multilinguality is organic, the data is less likely to suffer from translation bias.
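As a rough illustration of how a single entry of such a dataset could be organized, the sketch below shows a hypothetical record layout; the field names and example values are invented for illustration and do not reflect the released schema or files.

from dataclasses import dataclass, field
from typing import List

@dataclass
class EventVideoRecord:
    """Hypothetical layout for one video entry (illustrative field names only)."""
    video_url: str                 # link to the YouTube/Twitter video
    description: str               # natural language video description (or title fallback)
    language: str                  # e.g., "ar", "zh", "en", "ko", "ru"
    event_name: str                # the current event the video depicts
    event_category: str            # disaster / political / social / technology
    video_type: str                # news broadcast / edited footage / raw footage
    grounding_docs: List[str] = field(default_factory=list)  # Wikipedia or news articles

record = EventVideoRecord(
    video_url="https://example.com/video123",          # placeholder URL
    description="Firefighters respond to the cathedral fire",
    language="en",
    event_name="2019 Notre Dame fire",
    event_category="disaster",
    video_type="raw footage",
    grounding_docs=["https://en.wikipedia.org/wiki/Notre-Dame_fire"],
)
print(record.language, record.video_type)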
We provide an illustration of the dataset's contents in Figure <ref>: Each natural language query (describing a video of a current event) is paired with grounding text documents and a unique corresponding video. We use to explore and characterize the variety of event-centric videos available online and illustrate the importance of leveraging these different video types when building multimodal information systems. Citizen journalism, the most notable example being Wikipedia <cit.>, has emerged alongside other online news sources as a method for curating comprehensive summaries of events. Work in natural language processing has considered the problem of automating this process by training models to generate informative reports using online source materials <cit.>. We use to explore how this process can be extended to incorporate multimodal sources of evidence. As a first step in this direction, we consider the task of video retrieval on , through which a model learns to retrieve multimodal source material given a natural language event description. This task differs from prior video retrieval benchmarks <cit.> as the videos in vary widely in length and content presentation, are multilingual, and can involve significant amounts of on-screen text. In addition to multilingual natural language captions for each video, we provide full text documents that ground the events and serve as more complex retrieval queries. In summary, our contributions are: * We present , a multimodal, multilingual information retrieval dataset of grounded videos depicting current events. The dataset targets five languages and covers a range of online video formats beyond traditional news broadcasts. * Using , we quantitatively illustrate the information presented by news videos and the differences in content between video formats, and qualitatively evaluate how multimodal coverage of an event can evolve over time. * We present MultiCLIP, a model for multilingual, event-centric video retrieval that serves as a baseline for video retrieval approaches on the task. § RELATED WORK §.§ Video retrieval datasets Early video datasets generally contained short clips spanning narrow ranges of topics, such as the Microsoft Research Video Description Corpus <cit.>. Video datasets spanning larger domains include MSR-VTT <cit.> and DiDeMo <cit.>, although the lengths of these videos were still relatively short. The V3C dataset <cit.> offered longer video lengths while still spanning a wide range of topics such as news reports. A shift towards massive video datasets was instigated by HowTo100M <cit.>, which included over 130 million video clips belonging to one million narrated instructional videos. VaTeX <cit.>, released in the same year, considered video retrieval from a multilingual context using caption translation. Additional multilingual video retrieval datasets include Rudder <cit.>, consisting of instructional videos for making toys with multilingual captions, MTVR <cit.>, which extended the TVR dataset <cit.> by adding Chinese subtitles and queries, and Multi-HowTo100M <cit.>, which extended HowTo100M by scraping YouTube for subtitles in up to 9 other languages. Recently, Chen et al. <cit.> released the ChinaOpen dataset which contains a wide range of video-caption pairs originally produced in Chinese. Recent work has also considered the problem of interpreting text-heavy video content: Wu et al. <cit.> and Jahagirdar et al. 
<cit.> introduced datasets that focus on within-video text and OCR annotations, including news broadcasts. §.§ Video retrieval methods The size of early video datasets allowed retrieval systems to rely on pre-extracted features from expert systems like action recognition models. As massive video datasets gained prominence, the video retrieval paradigm moved towards ad-hoc video-text feature extraction using large pretrained models. Dosovitskiy et al. <cit.> proposed using stand-alone transformer architectures for video understanding, and Bertasius et al. <cit.> showed that applying space- and time-based self-attention independently improved performance. Bain et al. applied findings directly to video retrieval, training and evaluating transformer architectures on WebVid-2M <cit.>. Radford et al. <cit.> introduced CLIP and showed that pretraining models to match captions to images can result in scalable models, and CLIP's applicability to video retrieval was demonstrated by Fang et al. <cit.> through their CLIP2Video model. More fine-grained modifications to CLIP were proposed. Wang et al. <cit.> introduced "Object-aware Transformers", which extended video-text transformers to incorporate object-level annotations within video footage, and Ge et al. <cit.> modified the pretraining task to involve teaching a vision-text model to answer multiple choice questions about a video. Bain et al. <cit.> adapted large image-text models to the task of long video retrieval by incorporating the weighted-mean of frame embeddings, and Wu et al. <cit.> incorporated independent optical character recognition and embeddings into the encoder pipeline to explicitly model in-video text. §.§ Report generation using online sources A wide range of research has used online corpora for report generation tasks, including QA-pair and knowledge graph generation <cit.>. Notably, Lewis et al. <cit.> introduced a method for automatically extracting question-answer pairs from large corpora of text documents, and applied this method to Wikipedia to produce the PAQ dataset. Some PAQ extensions have been multilingual — Pisare et al. <cit.> built the WikiOmnia QA dataset on Russian Wikipedia documents, and Rybak et al. <cit.> produced a question-Wikipedia passage dataset in Polish. Recently, Qian et al. <cit.> extended the ideas in PAQ to construct WebBrain, a task in which a model must generate factual articles with references given a natural language query. In the multimodal domain, Reddy et al. and Chen et al. have considered the problem of open-domain QA for image-text data <cit.>, with Chen et al. using Wikipedia to generate a multimodal dataset. In a similar vein, Li et al. propose a dataset for information extraction from multimedia articles <cit.> and an extraction approach that can be used with text, image, and video content <cit.>. § DATASET In this section we outline the collection process. The dataset includes 2,396 videos and corresponding text descriptions covering 260 current events grounded in 468 text documents, and includes content in Arabic, Chinese, English, Korean, and Russian. We first identify 260 visually salient current events spanning from 2013 to 2023, and assign a target language to each event. Then, for each event, we collect grounding text documents and a set of videos in the event's target language. §.§ Current event curation We consider four primary event categories for : Disasters, political events, social events, and technology events. 
We include thirteen current events per category for each target language. We use Google Trends statistics to select these events, based on its tracking of term popularity based on internet activity by country. We construct lists of the top five countries with the most speakers of each target language and review the top trending topics on Google in each of these countries over the last ten years. We record topics and search phrases that corresponded to current events that (1) align with one of the predefined event categories and (2) have sufficient online video coverage. For categories that did not amass a sufficient list of current events per language through this process, we consult Wikipedia's yearly summaries of events to fill the remaining slots. Detailed statistics characterizing this set of current events are shown in Figure <ref>. As shown, the majority of selected events take place in the last few years, with only three taking place before 2016. Also shown in Figure <ref>, there is not a bijective mapping between the language used in event coverage and the country the event took place in. The language and country are often related, e.g., Russian news content in MultiVENT predominantly takes place in Russia, but this is not true of all events in the dataset. For example, we include data in Chinese pertaining to the 2023 ATP tennis circuit in Dallas, Texas: At this event, tennis player Wu Yibing became the highest-ranked Chinese player in the history of the ATP rankings, and so the event received substantial Chinese news coverage. In cases such as this, news in multiple languages will heavily focus on the same current event, such as sports events and international political relations. We do not include the same event in multiple languages in MultiVENT by design, in contrast with data collection procedures used for efforts such as AIDA <cit.> which aim to cover a small collection of current events in many languages. Every current event in the dataset is grounded in an English natural language document and, if the event is tagged with a non-English language, an additional natural language document in that target language. First, we check if a full English Wikipedia article exists for the current event. If not, we manually find a Wikipedia article that includes a passage describing the event. If Wikipedia does not have a passage that appropriately grounds the event, then a news article in English is selected as a grounding document instead. This process is then repeated for the target language. The dataset includes 468 grounding articles in total: 313 are full Wikipedia articles, 104 are Wikipedia passages, and 51 are external articles. §.§ Video collection We aim to collect visually and semantically distinct videos for each current event with an even split between firsthand witness accounts (e.g., first-person smartphone videos), amateur edited videos (e.g., vlogs), and professional news reports and compilations. Information regarding the resultant distribution of these categories and their semantic differences is included in Section <ref>. For each current event, we collect ten videos in the current event's target language. We search YouTube and Twitter for these videos using target keywords collected from the Google Trends search and Wikipedia. After collecting the videos, we manually identify and remove duplicates, resulting in 2,396 videos in total. 
We do not include repeat videos, but sometimes professional news reports include firsthand footage that is already included as unedited footage in the dataset. In these cases, we keep both the news report and the original footage as the context and text metadata between the two are distinct. If the video has a natural language description, we tag the video with this description. If it does not, we use the video title as the tagged natural language description. We report the distribution of videos by source in Figure <ref>. § DATA ANALYSIS We present an analysis of to help characterize how online multimodal content contributes to our understanding of current events. We explore multimodal event coverage from three angles: (1) what kinds of information news videos contribute, (2) the differences in informative content provided by different types of news videos, and (3) how multimodal coverage of an event can evolve over time. §.§ Semantic information in video Visual data can provide rich, semantically nuanced details of an event that are not captured in text documents due to reporting bias, limitations of text, and document length limits. To characterize the complexity of these videos and the information they provide, we annotate a set of two hundred videos of disasters in to identify visual entities in the videos that help answer common "who, what, where"-type questions about the events they depict. We present videos of disaster footage to local annotators and provide them with a set of event-centric questions derived from FrameNet's "disaster scenario" template <cit.>. We modify this template, designed to annotate the event semantics of text documents, to better cover the range of information provided by visual content. We instruct annotators to identify every on-screen entity (such as people, scrolling news headline banners, etc.) that might help answer one of these event-centric questions. The template divides salient entities into six categories: The disaster itself ("what"), the location of the disaster ("where"), the time the disaster takes place ("when"), people affected by the disaster ("who") and first responders for the disaster, e.g., firefighters (also "who"), and any visible outcomes of the disaster. Not every category applies to both visual content and text: We exclude "where" and "when" from the set of categories that visual content should be annotated for (because identifiable depictions of "where" are present in almost every frame, and "when" in virtually none) and disaster outcomes from the set of text annotation categories, as textual examples of this category tend to involve full clauses, which complicate the annotation process. We present the number of event-relevant entities that appear on-screen in these annotated videos in Table <ref>. For each annotated entity, we additionally ask annotators to rate their certainty that the entity is directly related to the event described by the video's natural language description from 0% to 100%. We record these certainty scores in 20% intervals, i.e. as 20%, 40%, 60%, 80%, or 100%. The averages of the linguists' confidence rankings by entity type are listed in Table <ref>. As shown in Table <ref>, each video contains an average of 9.32 informative visual entities that pertain to the event in question. About half of these entities are purely visual, and half are within-video text that can be identified with an optical character recognition model. 
As indicated by Table <ref>, purely visual entities are more ambiguous than the text content shown onscreen alongside them, which aligns with past research that explores the difficulty humans have in interpreting visual content depicting complex events <cit.>. §.§ Video content by domain As described in Section <ref>, we collect three main types of videos: Official news broadcasts, edited video footage, and raw, unedited footage. Of the 210 videos in the annotation set reported in Table <ref>, 53% are news broadcasts, 11% are edited footage, and 36% are raw footage. To quantify the difference in information presented by these different video types, we take the video annotations shown in Table <ref> and partition these annotations by video type. We present the results in Table <ref>. As shown by the results, news broadcasts depict the most relevant semantic information, followed by edited footage. This is particularly apparent when considering text content alone. On average, news coverage contains almost 9 times as much relevant on-screen text content as raw footage, and over three times more than edited footage. Visual content differences were less drastic, but news content still had two times more visual content than raw footage and 1.3 times more than edited footage. The difference in visual content between news coverage and edited footage is possibly due to average video length and the quality of the video curation: oftentimes, unprofessionally edited footage only draws from one source, whereas news coverage draws from many. §.§ Information evolution As shown in Table <ref>, first-person footage is often opaque compared to professional coverage. However, comprehensive coverage often builds on earlier, less informative coverage. This can be seen in news cycles for slowly unfolding events and for sudden, unexpected events that take time to assess. This is illustrated in Figure <ref>, which shows a snapshot of the 2019 Notre Dame fire news cycle and demonstrates how unedited and poorly curated footage, often first-person witness accounts on social media, can be instrumental in the construction of our collective understanding of events. So, we propose that teaching models to understand different video formats, despite clear discrepancies in the amount of information they present, is important for developing robust systems. § EXPERIMENTS §.§ Approach We consider the problem of teaching a model to map multilingual, natural language queries to multilingual video clips. Specifically, we consider a video set V and query set T with an indicator mapping function f that returns whether a query t∈ T describes a video v∈ V. The model h is provided with the full set of videos V and a text query t∈ T, and for each video v∈ V returns the probability that t describes v, or h(v,t)=ℙ[f(v,t)=1]. When there is a bijective mapping between queries and videos (e.g., when using video descriptions as queries), the model is evaluated on its recall when considering the top 1, 5, and 10 ranked videos (R@1, R@5, and R@10), as well as the median rank (MedR). When a given query may describe multiple videos (e.g., when using event descriptions as queries), we instead evaluate the model on its precision given the top 1, 5, and 10 ranked videos (P@1, P@5, and P@10). We define these metrics as: Given S:= argmax_V'⊆ V : |V'| = k ∑_v∈ V'h(v,t), R@k=|{s∈ S :f(s,t)=1}|/|{v∈ V :f(v,t)=1}| and P@k=|{s∈ S :f(s,t)=1}|/k. §.§ Model architecture and training We introduce MultiCLIP, a multilingual baseline for video retrieval on MultiVENT.
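As a concrete reference for the retrieval metrics defined above, the following minimal sketch computes R@k and P@k for a single query from a vector of model scores; the scores and relevance labels are toy placeholders rather than outputs of any of the evaluated models.

import numpy as np

def recall_precision_at_k(scores: np.ndarray, relevant: np.ndarray, k: int):
    """scores[i]   = h(v_i, t), the model's probability that query t describes video v_i.
    relevant[i] = f(v_i, t) in {0, 1}, whether t actually describes v_i."""
    top_k = np.argsort(-scores)[:k]              # S: the k highest-scoring videos
    hits = relevant[top_k].sum()                 # |{s in S : f(s, t) = 1}|
    recall_at_k = hits / max(1, relevant.sum())  # divide by the number of relevant videos
    precision_at_k = hits / k
    return recall_at_k, precision_at_k

rng = np.random.default_rng(0)
scores = rng.random(100)                                # toy scores for 100 candidate videos
relevant = np.zeros(100, dtype=int)
relevant[rng.choice(100, size=5, replace=False)] = 1    # pretend 5 videos match the query
for k in (1, 5, 10):
    r, p = recall_precision_at_k(scores, relevant, k)
    print(f"R@{k}={r:.2f}  P@{k}={p:.2f}")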
We base our architecture on the pretrained LAION CLIP ViT-H/14 frozen XLM-Roberta-Large model <cit.>, which jointly trains an image and text encoder on text-image data to learn to pair images with their captions. At test time, it produces a zero-shot linear layer based on the test input's visual features through which natural language captions can be passed. The model architecture contains a vision encoder based on a ViT architecture <cit.> and a text encoder based on the multilingual XLM-Roberta large model <cit.>. A full overview of the CLIP architecture and pretraining can be found in the original paper <cit.>. In experiments using MultiCLIP, we first tokenize text descriptions using the XLM-Roberta-Large tokenizer, containing a vocabulary of over 250,000 words, and pass the tokens into MultiCLIP, which produces a text embedding of size 1024. Next, we uniformly sample videos at a rate of 12 frames per video with an input size of 224x224, which the model uses to create a frame embedding of size 1024. To incorporate multilinguality into the model's frame-level features, we use a ViT architecture trained with a contrastive objective over multilingual image-caption pairs from the LAION-5B dataset <cit.>, which is constructed from the Common Crawl archive using images and their alt-text to produce a multilingual image-text dataset with over 100 languages. We mean-pool the frame embeddings to produce a final video embedding, and use the text and video features to compute a similarity matrix of videos and descriptions. §.§ Retrieval baselines We first evaluate MultiCLIP on the existing video retrieval task MSR-VTT <cit.> using the recall metrics described in Sec. <ref> alongside contemporary video retrieval approaches (FrozenInTime <cit.>, Clip2Video <cit.>, InternVideo <cit.>, and MPLUG-2 <cit.>). Results on MSR-VTT's validation set are reported in Table <ref>. The results indicate MultiCLIP performs well on standard video retrieval tasks, matching the performance of separate text/vision pipeline models released within the last two years. It performs better than existing models that use separate text and vision pipelines (FrozenInTime <cit.> and Clip2Video <cit.>), but not as well as models that use larger architectures involving multimodal encodings (InternVideo <cit.> and MPLUG-2 <cit.>). §.§ MultiVENT retrieval We now evaluate MultiCLIP and related retrieval approaches on MultiVENT. We first use multilingual video descriptions as queries, and then we use English event summaries taken from the grounding text documents, meaning that one text query maps to up to ten videos. The event queries are selected by taking one to two sentences from each English event text document that describe the event most holistically. We exclusively use English queries for this section, as our annotators fluent in the other languages were not available for this task. In addition to MultiCLIP, we consider a set of contemporary video retrieval models with lightweight architectures (FrozenInTime <cit.>, CLIP2Video <cit.>, InternVideo <cit.>, and a pooled CLIP model using the same setup as MultiCLIP without the additional multilingual pretraining). We argue that lightweight architectures are most appropriate for evaluating a full, pairwise set of similarity scores between text and video data of large multimodal corpora. Results are reported, partitioned on language, in Table <ref>.
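To illustrate the scoring step described above (mean-pooling per-frame embeddings into a video embedding and computing a text-video similarity matrix), here is a small sketch with stand-in random vectors in place of the actual CLIP/XLM-R encoders; the embedding dimension and frame count follow the description above, everything else is illustrative.

import numpy as np

def l2_normalize(x: np.ndarray, axis: int = -1) -> np.ndarray:
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def video_embedding(frame_embeddings: np.ndarray) -> np.ndarray:
    """Mean-pool the per-frame embeddings (n_frames x d) into one video vector."""
    return l2_normalize(frame_embeddings.mean(axis=0))

def similarity_matrix(text_emb: np.ndarray, video_emb: np.ndarray) -> np.ndarray:
    """Cosine similarities between every description and every video."""
    return l2_normalize(text_emb) @ l2_normalize(video_emb).T

rng = np.random.default_rng(0)
d = 1024                                                  # embedding size of the encoders
videos = [rng.normal(size=(12, d)) for _ in range(4)]     # 4 videos x 12 sampled frames
video_embs = np.stack([video_embedding(v) for v in videos])
text_embs = rng.normal(size=(4, d))                       # stand-in description embeddings
sim = similarity_matrix(text_embs, video_embs)            # (4 descriptions, 4 videos)
print(sim.shape, sim.argmax(axis=1))                      # best-matching video per description

The similarity matrix produced this way is what the retrieval metrics in the previous subsection are computed from, one row (query) at a time.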
We report the standard recall @ rank k metric for retrieval on individual video queries, and precision @ rank k for retrieval on event description queries. The results suggest that some existing video retrieval models may particularly struggle on this task, regardless of language. We hypothesize that this is due to a combination of the videos' length, complex semantic content, ambiguity, and frequent OCR content, as well as the long and often noisy video description queries. While MultiVENT as a whole poses challenges to existing models, it is also clear that multilingual data may significantly impact performance on models trained primarily on English content: all models suffer a performance loss when evaluated on multilingual content (even when using English queries, as shown by the event description query results). While MultiCLIP suffers a performance loss on this data as well, comparing the standard pooled CLIP model against MultiCLIP shows that training on multilingual data does mitigate this multilingual performance loss: The two models perform comparably on English data, but MultiCLIP performs better on the multilingual content, especially when multilingual queries are used. §.§ OCR ablation study As shown in past video understanding work <cit.>, explicitly incorporating optical character recognition into a video-text model can improve its performance on downstream tasks that involve videos with high text content such as news broadcasts. Therefore, we explore whether modeling multilingual OCR in our retrieval pipeline can improve MultiCLIP's performance on the retrieval task. In addition to text and video feature extraction, we extract optical character content with the off-the-shelf EasyOCR model. Results are reported in Table <ref>. This study indicates that explicitly incorporating multilingual optical character recognition into a video retrieval model pipeline can improve retrieval performance. § CONCLUSION We introduce MultiVENT, a multimodal, multilingual dataset grounded in natural language documents for event-centric video retrieval and information acquisition. This dataset consists of 2,396 videos covering 260 current events reported in five target languages (Arabic, Chinese, English, Korean, and Russian) paired with multilingual natural language video descriptions and long-form event-centric text documents. We use this dataset to characterize online news coverage and how models can use this online content for information acquisition. We propose a multilingual video retrieval benchmark using MultiVENT and present MultiCLIP, a multilingual video retrieval model to serve as a baseline for the task. We evaluate this model and related retrieval approaches on MSR-VTT and MultiVENT to illustrate the importance of pretraining on multilingual data for evaluation on MultiVENT. In future work, we aim to explore the effect that joint vision-OCR embeddings can have on video retrieval in text-heavy contexts. Also in future work, a RePAQ-adjacent system <cit.> for automatically extracting question-answer pairs from video content and video-document pairs could be developed and applied to MultiVENT. Through this, a framework for teaching models to perform open-domain question-answering tasks with multimodal background corpora could be established, expanding the domain of questions a model can answer.
http://arxiv.org/abs/2307.00455v1
20230702023057
Connecting the Dots: A Comprehensive Literature Review on Low and Medium-Voltage Cables, Fault Types, and Digital Signal Processing Techniques for Fault Location
[ "Shankar Ramharack", "Sanjay Bahadoorsingh" ]
eess.SP
[ "eess.SP" ]
Connecting the Dots Ramharack and Bahadoorsingh Shankar Ramharack and Sanjay Bahadoorsingh The University of the West Indies, St. Augustine, Trinidad and Tobago, shankar.ramharack@gmail.com sanjay.bahadoorsingh@sta.uwi.edu Connecting the Dots: A Comprehensive Literature Review on Low and Medium-Voltage Cables, Fault Types, and Digital Signal Processing Techniques for Fault Location Shankar Ramharack0000-0003-3759-7333* Sanjay Bahadoorsingh August 1, 2023 ================================================================================================================================================================= The review begins with an exploration of acceptable cable types guided by local standards. It then investigates typical cable faults, including insulation degradation, conductor faults, and ground faults, providing insights into their characteristics, causes, and detection methods. Furthermore, the manuscript surveys the latest publications and standards on DSP techniques in fault location spanning various algorithms used. This review provides a comprehensive understanding of low and medium-voltage cables, fault types, and DSP techniques. The findings contribute to improved fault diagnosis and localization methods, facilitating more accurate and efficient cable fault management strategies § BACKGROUND Cable fault diagnostics play a crucial role in ensuring the safe and reliable operation of electrical power systems<cit.>. Low and medium-voltage cables are vital components of power distribution networks, supplying electricity to residential, commercial, and industrial consumers. However, over time, these cables can experience various types of faults, such as insulation degradation, conductor faults, and ground faults. These faults can disrupt power supply, lead to equipment failures, and pose safety risks. Identifying and understanding acceptable cable types enables engineers, contractors, and installers to make informed decisions during cable selection and installation processes. Similarly, conducting a literature review on technical publications and standards concerning typical cable faults on low and medium-voltage cables is of great significance as it is often neglected in fault location work<cit.>. By reviewing technical publications and standards, researchers and practitioners gain access to collective knowledge and experiences in cable fault diagnostics. This knowledge aids in developing effective maintenance strategies, reducing downtime, and improving the overall reliability of power networks. Furthermore, the literature review on modern digital signal processing (DSP) techniques used in reflectometry and cable fault location addresses the need for advanced and accurate fault detection methodologies. DSP techniques, such as time-domain reflectometry (TDR), offer powerful tools for analyzing cable faults and determining their locations<cit.>. By reviewing the literature on these techniques, researchers can identify the latest advancements, algorithms, and methodologies employed in fault detection and localization. This knowledge contributes to the development of more precise and efficient cable fault location systems, minimizing repair time, reducing costs, and enhancing the overall reliability of power distribution networks. §.§ Objectives The objectives of this work are as follows * To perform a literature review on types of low and medium-voltage cables that are acceptable for installations as guided by the local and international standards. 
* To perform a literature review of technical publications and standards on typical cable faults on low and medium-voltage cables. * To perform a literature review of modern digital signal processing techniques used in reflectometry and cable fault location. § LOW AND MEDIUM-VOLTAGE CABLES: ACCEPTABLE TYPES AS GUIDED BY INTERNATIONAL AND LOCAL STANDARDS The most widely used standards guiding LV and MV Cable Installations in North America and the Caribbean are those issued by: * The Aluminum Association (AA) * American National Standards Institute (ANSI) * American Society for Testing and Materials (ASTM) * Canadian Standards Association (CSA) * Insulated Cable Engineers Association (ICEA) * National Electrical Manufacturers Association (NEMA) * Association of Edison Illuminating Companies (AEIC) * Rural Utilities Service (RUS) * Underwriter’s Laboratories (UL) * National Electrical Code (NEC) Aerial cable is used occasionally for primary conductors in special situations where clearances are too close for open-wire construction or where adequate tree trimming is not practical. The type of construction more frequently used consists of covered conductors (nonshielded) supported from the messenger by insulating spacers of plastic or ceramic material<cit.>. The conductor insulation, usually a solid dielectric such as polyethylene, has a thickness of about 150 mils for a 15-kV class circuit and is capable of supporting momentary contacts with tree branches, birds, and animals without puncturing<cit.>. The conductor sizes most commonly used in underground primary distribution vary from No. 4 AWG to 1000 kcmil<cit.>. Four-wire main feeders may employ 3- or 4-conductor cables, but single conductor concentric-neutral cables are more popular for this purpose. The latter usually employ crosslinked polyethylene insulation, and often have a concentric neutral of one-half or one-third of the main conductor cross-sectional area. The smaller-sized cables used in lateral circuits of Underground Residential Distributions(URD) systems are nearly always single-conductor, concentric-neutral, crosslinked polyethylene-insulated, and usually directly buried in the earth. Insulation thickness is on the order of 175 mils for 15-kV-class cables and 345 mils for 35-kV class with 100% insulation level<cit.>. Stranded or solid aluminum conductors have virtually supplanted copper for new construction, except where existing duct sizes are restrictive. With the solid-dielectric construction, to limit voltage gradient at the surface of the conductor within acceptable limits, a minimum conductor size of No. 2 AWG is common for 15-kV-class cables, and No. 1/0 AWG for 35-kV class. Primary voltage circuits(5-35kV) use paper-insulated, lead-covered (PILC) three-conductor cables extensively. Single-conductor secondary cables with rubber insulation and neoprene jacket are common. More recently, single-conductor polyethylene-insulated(PE) cables are being used for both primary and secondary<cit.>. Copper conductors predominated in the past, but aluminum has nearly displaced copper in new installations, except where existing duct space is limiting. In residential and suburban areas, new underground distribution systems to serve commercial loads often employ direct-buried cables<cit.>; conduits may be provided in locations where subsequent excavation would be excessively expensive or inconvenient. Aluminum conductors are almost universal. 
For primary cables, solid dielectric insulation is used almost exclusively, with cross-linked polyethylene(XLPE) and ethylene–propylene rubber(EPR) insulations <cit.>. Concentric-neutral wires are common. Secondary cables in these systems generally have aluminum conductors and solid-dielectric insulation, with cross-linked polyethylene being the most common. The secondary neutral is usually an insulated conductor, although there is some use of bare copper neutrals. Electric supply cables are insulated with a range of materials depending on voltage ratings, type of service, installation conditions etc. The following are commonly used: * Rubber and rubber-like for 0 to 35kV * Varnished cambric for 0 to 28kV * Impregnated paper of the solid type for voltages up to 69kV and with pressurized gas or oil up to 345kV or higher For most distribution circuits in the 5-kV class or higher, the cables employ a shielded construction<cit.>. Shielding is used on the outer surface of the cable insulation or directly over the main conductor, or both. Outside shielding, often in the form of metallic tapes, metallic sheaths, or concentric wires, must be effectively grounded. The aforementioned insulation systems usually require a sheath or suitable jacket to prevent infiltration of moisture, loss of oil, gas, or impregnate and to provide protection against corrosion and electrolysis. In some cases, an armor overlay is used to provide mechanical protection. Single conductor cables are used in single-phase primary systems and frequently used in 3-phase direct buried primary systems. The 3 conductor primary cables are often used in duct systems. At the present time,solid-dielectric insulating materials such as tree retardant, cross-linked polyethylene and EPR are receiving the widest application in Underground Distribution Systems(UDS)<cit.>, both direct buried and duct systems. From the Electric distribution handbook<cit.>, cables used in underground systems may either be concentric neutral cables or power cables(for utilities). The jacket is usually made of Linear low-density polyethylene (LLDPE), PE or Semiconductors. The insulation most used in the industry is PE, XLPE, PILC, TR-XLPE & EPR. For URD applications, aluminum is the choice of conductor. Caribbean countries and areas outside US and Europe follow similar installation practices and cable selections such as in Trinidad. During a consultation with the Trinidad and Tobago Electrical Commission (T&TEC), Shielded Polyvinyl Chloride (PVC) and shielded Cross Linked Polyethene (XLPE) cables are most used in public transmission systems, however, in private distributions, shielded EPR has been recently adopted<cit.>. This is supported by <cit.> who performed a survey of the cables used in LV and MV installations. Furthermore, the cables used by LV and MV installations usually follow the guidelines of the British Standards Institution. The standards recommend PVC Jacket, Aluminium-Armoured XLPE insulated cables with stranded copper cores. Other standards such as the IEC utilize similar cable configurations with slight differences in the installation environment guidelines, thermal requirements, and conductor sizing (British Standards Institute 2007). The same shielding and sheathing practices that are done internationally are done locally in Trinidad and Tobago per the TTS standards<cit.> which build upon the NEC Standards. For URD applications, copper is the choice of conductor while aluminum is used for the shield. 
§ TYPICAL CABLE FAULTS IN LOW AND MEDIUM-VOLTAGE CABLES Numerous studies have shed light on the types of faults typically found in low and medium-voltage cables, as well as their underlying causes. The authors of <cit.> identified insulation breakdown as a prevalent fault type, often caused by aging, thermal stress, or manufacturing defects. Mechanical stresses, such as bending or crushing, were found to be a significant cause of faults in medium-voltage cables <cit.>. Another common fault type is moisture ingress, resulting from damaged cable sheaths or inadequate sealing, as highlighted in the investigation by <cit.>. Furthermore, <cit.> discussed short circuits and open circuits as faults in low-voltage cables, which can arise due to insulation damage, conductor breakage, or loose connections. Other studies have also explored specific fault types, such as insulation material aging <cit.>, weather-related faults <cit.>, manufacturing defects <cit.>, and partial discharge <cit.>. These findings emphasize the importance of understanding and addressing the diverse causes of cable faults to enhance the reliability and performance of low and medium-voltage cable systems. Interviews with the fault location personnel at T&TEC revealed that the most common faults found locally are high-resistance faults and single line-to-ground (SLG) faults. It was reported in <cit.> that SLG faults and series faults are the most common in underground power systems. Series faults are often caused by a cable layer losing continuity due to a force above ground, such as a collision. An SLG fault occurs when the insulation of one or more conductors fails. These faults are often permanent. Intermittent faults are not considered in most fault locator designs, since most transmission systems utilize monitoring equipment to briefly interrupt transmission and allow such faults to clear themselves. Furthermore, <cit.> state that up to 90% of cable faults are SLG faults, and hence intermittent faults are in the minority. Resistive faults are classified as high-resistance if the fault resistance exceeds 200 Ω. TDR cannot locate faults above this value but, as mentioned, they are rare. Hence it is useful to limit the fault resistance to 200 Ω in the test cases. § DIGITAL SIGNAL PROCESSING TECHNIQUES IN REFLECTOMETRY AND FAULT LOCATION The authors of <cit.> performed a state-of-the-art review of cable fault location methods showing that the most common methods of fault distance location are impedance-based methods, differential equation-based methods and travelling wave methods. Impedance-based methods, such as the Murray and Varley loop methods, are shown to be cheap and simple to perform, but they are sensitive to the fault resistance. It is shown in <cit.> that the loop methods perform accurately for open and short circuit faults; however, for extremely low and high resistance faults, the bridges are difficult to stabilize. Box's optimization algorithm is proposed in <cit.> to automate bridge stabilization and balance the bridges of the loop methods. The optimization methods perform very favourably in real-life testing with a 0.8% error rate. It is suggested in <cit.> that capacitance bridges should be used for open faults and resistance bridges for short circuit faults, while HV bridges should be used to locate insulation faults and high impedance faults.
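For reference, the Murray loop computation mentioned above reduces, once the bridge is balanced, to a single ratio of the two bridge arms. The following Python helper is only an illustrative sketch under the assumptions stated in the docstring (identical faulted and return conductors, and an arm labelling in which the second arm is adjacent to the faulted core); the example resistances are arbitrary and do not come from any of the cited works.

```python
def murray_loop_distance(route_length_m, R_a, R_b):
    """Distance to fault from a balanced Murray loop bridge.

    route_length_m : route length of the faulted cable (metres)
    R_a            : ratio arm on the healthy (return) conductor side (ohms)
    R_b            : ratio arm adjacent to the faulted conductor (ohms)

    Assumes the faulted and return conductors have equal length and equal
    resistance per unit length, and that the bridge is balanced (zero
    detector current), so that x = 2 * L * R_b / (R_a + R_b).
    """
    return 2.0 * route_length_m * R_b / (R_a + R_b)


# Illustrative values only: a 1.2 km cable balancing at R_a = 680 ohm, R_b = 230 ohm
print(f"{murray_loop_distance(1200.0, 680.0, 230.0):.1f} m from the test end")
```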
According to <cit.>, differential equation models are costly and do not perform well for long lines. In addition, both ends of the line may need to be accessed, requiring more manpower and resources than a single-ended approach. For the distributed line model, it is reported in <cit.> that the characteristic method has the merit of being suitable for short, long, transposed and untransposed lines and can be modified for three-terminal systems. The accuracy is sensitive to the choice of the time window and limited by the sampling rate. This may not be practical for low-cost fault location on distribution lines. Travelling wave (TW) methods have also been widely covered in the literature; <cit.> reports that the TW method performs poorly for 6-35 kV networks, which indicates it may not be feasible to use in this project. Measurement of TW is made more difficult when there are taps on the distribution line, adding to the complexity. A further problem of TW fault location (TWFL) is the attenuation of the waves as they travel through underground cables. TWFL can be performed either single-ended or from both ends. It is reported in <cit.> that it is possible to achieve greater accuracy with the multi-end methods compared to the traditional fault location methods. It is recommended by Megger, KEP, <cit.> that reflectometry methods be used for low resistance faults. For higher resistance faults (where the fault resistance is > 200 Ω), MIM or ICE techniques on a surge wave generator (SWG) should be used. Decay or HV-bridge methods should be used for locating high impedance or sheath faults. For tapped cables, differential multiple impulse response methods can be used to determine the fault locations. The use of decay methods for intermittent fault location is suggested in <cit.>. In a meeting with T&TEC, the local fault location professionals reported that TDR is mostly used for fault location. There are many reflectometry methods used in fault location. Reflectometry methods are distinguished by their EM test signal and the method of reflectogram analysis <cit.>. The most common reflectometry methods are Time Domain Reflectometry (TDR), Frequency Domain Reflectometry (FDR), Time-Frequency Domain Reflectometry (TFDR) and Spectrum Time Domain Reflectometry (STDR) (Shi and Kanoun 2014). Each method is suited to a particular application. Reflectometry works well for most low and medium voltage applications <cit.>. Reflectometry methods work on the principle of applying an excitation signal to a cable or transmission line and analysing the reflected trace. It is a form of RADAR and relies on a change in the characteristic impedance of the line to generate a reflection. The crux of reflectometry is the reflectogram analysis, and several analyses of the generated reflectograms have been reported. Time domain reflectometry methods usually involve sending a step signal or pulsed waveform down a cable and sampling, at the point where the signal was applied, for a reflection <cit.>. The sampled reflectogram is denoised via a matched filter or wavelet denoising algorithm, and the travel times between signal inflections are then read off from a screen or graph and used to calculate the fault distance, as sketched below. Common methods to automate this are bubble sort to determine the wavefront points and peak detection <cit.>. Work has also been done on automated TDR systems which utilize reflection detection via MCU capture interrupts to time the echo for the first reflection of a travelling wave <cit.>.
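The basic TDR computation just described (denoise the trace, time the interval between the incident wavefront and its reflection, and convert it to a distance using the cable's velocity of propagation) can be sketched in a few lines of Python. The snippet below is a minimal illustration rather than a reproduction of any cited system: the pulse shape, noise level, velocity of propagation and fault position are assumed values, and a simple peak search on a matched-filter output stands in for the wavefront detectors discussed above.

```python
import numpy as np

# Assumed parameters, for illustration only
fs = 100e6             # sampling rate of the reflectometer front end (Hz)
vop = 0.55 * 3e8       # assumed velocity of propagation in the cable (m/s)
fault_dist = 750.0     # fault distance used to synthesise the trace (m)

t = np.arange(0, 20e-6, 1 / fs)
pulse = np.exp(-((t - 0.2e-6) ** 2) / (2 * (0.05e-6) ** 2))      # incident pulse

# Synthesise a reflectogram: incident pulse + attenuated, delayed echo + noise
delay = 2 * fault_dist / vop                                      # round-trip time
echo = -0.4 * np.exp(-((t - 0.2e-6 - delay) ** 2) / (2 * (0.05e-6) ** 2))
trace = pulse + echo + 0.02 * np.random.default_rng(1).normal(size=t.size)

# Matched filter against the known pulse shape to suppress noise
kernel = pulse[:400][::-1]
filtered = np.convolve(trace, kernel, mode="same")

# Locate the two dominant wavefronts (incident and reflected)
mag = np.abs(filtered)
first = int(np.argmax(mag))
mag[max(0, first - 200):first + 200] = 0.0     # blank out the incident wavefront
second = int(np.argmax(mag))

travel_time = abs(second - first) / fs
print(f"estimated fault distance: {0.5 * vop * travel_time:.1f} m "
      f"(synthesised at {fault_dist:.0f} m)")
```

Because both wavefronts are located with the same matched filter, any constant filter delay cancels in the difference, and only half the round-trip time is converted to distance.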
No work was found attempting to use time series segmentation to treat the TDR problem as a CPD problem, possibly due to the significant waveform distortion present on the waves. Frequency domain reflectometry is rarely seen in the literature for medium voltage applications and hence is not explored here. Furthermore, it does not perform well on cables longer than 2 km, and expensive directional couplers are required to sample the system and obtain the reflectogram <cit.>. Time-frequency domain reflectometry is much more common and attempts to address the shortcomings of both time and frequency domain reflectometry. In this method, a waveform with good time-frequency localization is incident on the cable. The reflected wave is sampled and cross-correlated with the incident signal to determine the fault location <cit.>. In <cit.>, the authors model a chafe in an aircraft cable using scattering parameters <cit.>. The transfer function of the cable is then derived in terms of a parameter set, θ. The parameter set is estimated using a statistical method of probability inversion, using the reflected trace as the target and an initial parameter combination. The initial parameters are updated in a Bayesian approach until they reach a distribution similar to the reflected trace. The final parameters carry information about the fault distance, type and dimension. Similar to <cit.>, most TFDR works utilize the Gaussian Envelope Linear Chirp (GELC) signal as the incident signal due to its good time-frequency localization <cit.>. However, they differ in their means of signal processing. In <cit.> a dictionary is created of all possible transformations of the parameterized incident signal in terms of phase shift, amplitude and frequency. The reflected signal is projected onto the dictionary to find the closest match. The closest match's time shift parameter is used to determine the fault location. Other works utilize matched filters to eliminate noise and time correlators to determine the fault distance <cit.>. In <cit.> a statistical model-based detection and frequency identification method is employed to calculate the fault distance. The GELC is used as the incident signal. A likelihood ratio test (LRT) is used to detect the reflection, and a Hidden Markov Model hang-over scheme is used to prevent the LRT from cutting off the tail of the GELC, which can happen in cases of severe attenuation. The Hilbert transform is used to determine the instantaneous phase and consequently the signal's instantaneous frequency. The instantaneous frequency is obtained as a linear combination of the GELC carrier sinusoid and the angular frequency sweep rate. Ambient noise is removed via a constrained Kalman filter (CKF). The filtered signal contains the incident and reflected wavefronts, which are used to calculate the distance by multiplying half the delay by the velocity of propagation (VOP). Variations on the frequency estimation and noise handling were explored in <cit.>.
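The cross-correlation step that the TFDR schemes above share can likewise be illustrated with a short sketch. The snippet below is schematic only: the GELC parameters, velocity of propagation, reflection coefficient and noise level are assumed values, and a plain peak search on the cross-correlation replaces the dictionary search, likelihood ratio test and Kalman filtering of the cited methods.

```python
import numpy as np

fs = 200e6              # assumed sampling rate (Hz)
vop = 0.55 * 3e8        # assumed velocity of propagation (m/s)
fault_dist = 420.0      # fault position used to synthesise the echo (m)

# Gaussian Envelope Linear Chirp (GELC) used as the incident signal
t = np.arange(0, 2e-6, 1 / fs)
t0, sigma = 0.5e-6, 0.1e-6      # envelope centre and width (s)
f0, k = 20e6, 40e12             # carrier frequency at the centre (Hz), sweep rate (Hz/s)
gelc = (np.exp(-((t - t0) ** 2) / (2 * sigma ** 2))
        * np.cos(2 * np.pi * (f0 * (t - t0) + 0.5 * k * (t - t0) ** 2)))

# Received trace: incident signal plus a delayed, attenuated echo plus noise
rx = np.zeros(2000)
rx[:gelc.size] += gelc
shift = int(round(2 * fault_dist / vop * fs))
rx[shift:shift + gelc.size] += -0.3 * gelc
rx += 0.02 * np.random.default_rng(7).normal(size=rx.size)

# Cross-correlate the received trace with the incident GELC
xcorr = np.abs(np.correlate(rx, gelc, mode="full"))

# First peak: incident signal; second peak: reflection from the fault
incident = int(np.argmax(xcorr))
xcorr[max(0, incident - shift // 4):incident + shift // 4] = 0.0
echo = int(np.argmax(xcorr))

tau = abs(echo - incident) / fs
print(f"estimated fault distance: {0.5 * vop * tau:.1f} m "
      f"(synthesised at {fault_dist:.0f} m)")
```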
The test signal in STDR is a PN binary code <cit.> that is launched from the test device and encounters partial reflection and transmission at each impedance discontinuity in the system being tested. The STDR response is created by cross-correlating the reflections that return to the test point with a delayed copy of the transmitted PN code. SSTDR generates a sine-like correlated reflection signature using a square- or sine-wave modulated PN code as the test signal. The reflected signal will be dispersed and attenuated if the system is frequency-dependent or lossy. Furthermore, the method is robust to noise and can be applied to electrical apparatus <cit.>, underwater applications <cit.> and power cables <cit.>. Changepoint detection methods in <cit.> have been used in RADAR and can be used to analyse the reflectogram automatically and determine the fault distance without the need for reflectogram interpreters <cit.> or commercial TDR equipment. At the time of writing, the authors have not found any work on the application of changepoint detection in time domain reflectometry for power cable applications. The crux of reflectometry-based fault location is the algorithm used. Hence, this work explores the DSP methods used to perform fault location. Furthermore, a guide to the development of a prototype that uses the method explored is given in the Discussion. This guide accounts for instrumentation noise (which can be addressed with an LPF) and gives recommendations for improving the accuracy of the algorithm in the application domain. § DISCUSSION The literature review has provided valuable insights into acceptable cable types, typical faults, and digital signal processing (DSP) techniques used in fault location for low and medium-voltage cables. The review identified that common cable types include polyethylene-insulated solid dielectric cables, concentric-neutral cables, and shielded constructions. Additionally, aluminum conductors have become the preferred choice over copper due to cost and practical considerations. Regarding typical cable faults, the review highlighted that insulation breakdown, mechanical stresses, moisture ingress, short circuits, and open circuits are among the common faults encountered in low and medium-voltage cables. Understanding the root causes of these faults is crucial for effective fault diagnostics and maintenance strategies. The literature revealed that high-resistance, hard faults are the most common and that single line-to-ground (SLG) faults are prevalent in the local power distribution systems. The literature review on DSP techniques for cable fault location revealed the prominence of impedance-based methods, differential equation-based methods, and traveling wave (TW) methods. Impedance-based methods, such as the Murray and Varley loop methods, are simple and cost-effective but sensitive to fault resistance. Differential equation models are more accurate but costly and may require access to both ends of the cable. TW methods, while widely mentioned in the literature, have limitations in accuracy and may not be practical for low-cost fault location in distribution networks. The use of modern digital signal processing techniques, such as Box's optimization algorithm, has shown promise in automating bridge stabilization and improving the accuracy of impedance-based methods.
The review of digital signal processing techniques used in cable fault location highlighted the prominence of impedance-based methods, differential equation-based methods, and traveling wave (TW) methods. While impedance-based methods offer simplicity and cost-effectiveness, modern DSP techniques like Box's optimization algorithm show promise in improving accuracy. However, TW methods may not be practical for low-cost fault location in distribution networks. The findings of this review contribute to improved fault diagnosis and localization methods. Further research and development in DSP techniques hold the potential for enhancing cable fault location systems, reducing repair time, lowering costs, and ultimately improving the overall reliability of power distribution networks. § AVAILABILITY OF DATA AND MATERIAL The materials supporting the manuscript can be obtained by contacting the authors. § COMPETING INTERESTS The authors declare no competing interests. § FUNDING This work was funded by the University of the West Indies St. Augustine Campus. § AUTHORS' CONTRIBUTIONS The authors confirm contribution to the paper as follows: study conception and design: Shankar Ramharack, Sanjay Bahadoorsingh; analysis and interpretation of results: Shankar Ramharack; draft manuscript preparation: Shankar Ramharack. All authors reviewed the results and approved the final version of the manuscript. § ACKNOWLEDGEMENTS The authors would like to thank Mr. Veeresh Ramnarine for their guidance during the project. Furthermore, the authors would like to thank Mr. Anil Rambharat and Mr. Varma Ratan for their insights into fault location within Trinidad and Tobago. Lastly, the authors would like to thank Dr. Letitia Addison for their assistance in exploring changepoint detection.
http://arxiv.org/abs/2307.01454v1
20230704031636
Retrieving information from Hawking radiation in the non-isometric holographic model of black hole interior: theory and quantum simulations
[ "Ran Li", "Xuanhua Wang", "Kun Zhang", "Jin Wang" ]
hep-th
[ "hep-th", "gr-qc", "quant-ph" ]
Retrieving information from Hawking radiation in the non-isometric holographic model of black hole interior: theory and quantum simulations Ran Li, Xuanhua Wang, Kun Zhang, Jin Wang ========================================================================= § INTRODUCTION The most important quantum behavior of black holes is that, according to the effective field theory calculation, they radiate particles with a thermal spectrum at a temperature proportional to the surface gravity of the event horizon <cit.>. If the collapsing matter that forms the black hole is initially in a pure state, the whole system will evolve into a mixed state after the black hole is completely evaporated. This is sharply inconsistent with the unitary evolution principle of quantum mechanics and gives rise to the long-standing puzzle of black hole information <cit.>. After the discovery of the AdS/CFT correspondence <cit.>, it is generally believed that the dynamical process of black hole collapse and evaporation is a unitary one satisfying the principles of quantum mechanics. During this process, the information inside the black hole is released in the form of Hawking radiation and information conservation is guaranteed <cit.>. However, this poses the question of how the information contained in the infalling objects is released during the evaporating process and also how the information can be recovered by the observer outside of the black hole by collecting and decoding the Hawking radiation <cit.>. The biggest obstacle to answering these questions lies in the fact that the dynamics of the black hole interior is not known to the outside observer due to the existence of the causal boundary, the event horizon. Recently, motivated by the theory of quantum error correction <cit.> and quantum computational complexity <cit.>, a holographic model of the black hole interior was proposed to resolve the black hole information puzzle <cit.>. In this model, there are two descriptions of the black hole degrees of freedom: one is the effective field description and the other is the fundamental description from quantum gravity. Although a full quantum gravity theory has not been completely constructed so far and the exact nature of the fundamental description of the black hole is unknown to us, from the central dogma of black hole physics <cit.> we can treat the area of the event horizon as counting the fundamental quantum gravity degrees of freedom. From the effective description of semiclassical gravity, along with the evaporating process, the entangled pairs of the outside radiated modes and their inside partners are generated continuously and the number of the effective field theory modes inside the black hole eventually exceeds the number of the black hole degrees of freedom accounted for by the horizon area in the fundamental description <cit.>. In order to resolve this apparent contradiction, Akers, Engelhardt, Harlow, Penington, and Vardhan (AEHPV) proposed that there is a non-isometric holographic map from the effective description to the fundamental description <cit.>. This means that a large number of "null" states are annihilated by the non-isometric holographic map, which apparently violates the unitarity of the effective description. However, it is shown that on average the deviation from unitarity in the effective description is negligibly small in the entropy. Furthermore, the entanglement entropy of Hawking radiation in the fundamental description is given by the quantum extremal surface formula in the effective description <cit.>.
Therefore, it is argued that the AEHPV model of the black hole interior can give a Hilbert space interpretation of the Page curve computation from the island rule <cit.>. The non-isometric holographic model of encoding black hole interiors has inspired many interesting works <cit.>. In the present work, based on the AEHPV model of the black hole interior, we explore the problem of decoding Hawking radiation and information recovery from black hole. By studying a modified version of the Hayden-Preskill thought experiment <cit.>, We first try to address the problem of the decoupling condition under which the information swallowed by the black hole can be recovered by decoding the Hawking radiation. This amounts to estimating the operator distance between the reduced density matrix of black hole and reference system and the density matrix of their product state <cit.>. In principle, when the decoupling condition is satisfied, the entanglement between the reference system and the black hole is transferred to the entanglement between the reference system and the Hawking radiation and that the information swallowed by the black hole can be recovered. Furthermore, under the assumption that the black hole interior dynamics is known to the outside observer, we discuss how the Yoshida-Kitaev decoding strategies <cit.> can be used to decode the Hawking radiation in the modified version of the Hayden-Preskill protocol. For the probabilistic decoding strategy, the corresponding decoding probability and the fidelity on the average of the random unitary group are computed. It shows that the decoding probability imposes an additional constraint on the dimension of the projecting space and the fidelity can achieve the maximal quality when the decoupling condition is satisfied. For the deterministic decoding strategy, we show that a procedure similar to the Grover's search algorithm <cit.> can be applied to recover the initial quantum state of the system swallowed by the black hole. We also perform the quantum simulation experiments of the decoding strategies on the 7-qubit IBM quantum processors. The experimental results validate our analytical findings and show the feasibility of the information recovery. At last, inspired by the work of Kim and Preskill <cit.>, where an infalling agent interacts with the radiation both outside and inside the black hole, we further study the effects caused by such interactions. We argue that the interaction of the infalling message system with the outside right-going Hawking radiation causes no additional effect in our modified version of Hayden-Preskill protocol. This paper is arranged as follows. In Sec. <ref>, we briefly introduce the non-isometric holographic model of the black hole interior. In Sec. <ref>, based on the non-isometric model of black hole interior, we propose a modified version of Hayden-Preskill thought experiment. In Sec. <ref>, we discuss the decoupling condition to recovery the information swallowed by the black hole. In Sec. <ref>, we apply the Yoshida-Kitaev decoding scheme to our model to show that the information can be recovered from the Hawking radiation. Two types of decoding strategies are discussed. In Sec. <ref>, the simulation experiments of the decoding Hawking radiation are implemented on the IBM quantum processors. In Sec. <ref>, we comment on the interaction of the infalling message system with the outside radiation. The conclusion and discussion are presented in the last section. 
§ REVIEW OF AEHPV MODEL OF BLACK HOLE INTERIOR In this section, we give a brief review of the black hole interior model proposed by AEHPV <cit.>. There are two complementary descriptions for the dynamics of black hole interior, one is the effective field description and the other one is the fundamental description from the quantum gravity. The fundamental description gives the viewpoint of the outside observer. In this description, the outside observer finds that there are black hole B and Hawking radiation R. The Hilbert space is given by ℋ_B⊗ℋ_R . According to the central dogma of the black hole physics <cit.>, the dimension of ℋ_B is proportional to e^A_EH/4G with A_EH being the horizon area. In contrast to the outside observer, the infalling observer will experience another picture. In a “nice slice" <cit.>, the infalling observer will find that there are the right-moving modes R in the black hole exterior, and the left-moving modes l and the right-moving modes r in the black hole interior. Here, R is again the degrees of freedom of the Hawking radiation, l can be treated as the degrees of freedom that forms the black hole and r denotes the interior partners of the Hawking radiation R. In this effective description of the infalling observer, the Hilbert space is given by ℋ_l⊗ℋ_f⊗ℋ_r⊗ℋ_R , where f is an additional system that accounts the fixed degrees of freedom and does not play an essential role in the analysis. In addition, it is obvious that r and R are in the maximally entangled state from the effective field theory calculation. As claimed in the introduction, there is an apparent contradiction for the outside observer and the infalling observer. As the black hole evaporates, the degrees of freedom of Hawking radiation R increase monotonically while the degrees of freedom of black hole B decrease. For the black hole at late time, the number of the degrees of freedom of Hawking radiation is larger than that of black hole, i.e. |R|>|B|, or |r|>|B|, where |·| denotes the Hilbert space dimension of the corresponding system. This will result in the contradiction that the entanglement entropy of Hawking radiation in the effective description exceeds the black hole entropy in the fundamental description. In order to resolve this conflict, AEHPV proposed that there is a non-isometric holographic map from the effective description to the fundamental description V: ℋ_l⊗ℋ_r→ℋ_B . Consider the Hilbert space ℋ_l⊗ℋ_f⊗ℋ_r, and introduce an auxiliary system P such that ℋ_l⊗ℋ_f⊗ℋ_r=ℋ_B⊗ℋ_P . The non-isometric map can be explicitly realized as V=√(|P|)⟨ 0|_P U|ψ_0⟩_f=√(|P|)   0.2holographic_nonisometric_map.png . Here, |ψ_0⟩_f and |0⟩_P are the fixed states in ℋ_f and ℋ_P, respectively. The prefactor √(|P|) is introduced to preserve the normalization of the resulting state, which will be clarified in the next section. The graph gives the intuitive representation of the non-isometric map. The dynamics of the effective field theory degrees of freedom l, r and f in the black hole interior is modeled by a typical scrambling unitary operator U. The contradiction between the effective description and the fundamental description is resolved by post-selecting or projecting certain degrees of freedom on the auxilliary system P, resulting in a non-isometric mapping from the black hole interior of much larger degrees of freedom in the effective description to the black hole B of much lower degrees of freedom in the fundamental description. 
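The non-isometric character of the map V can be made concrete with a small numerical experiment. The following Python sketch uses toy dimensions chosen purely for illustration: it draws Haar-random unitaries U, builds V=√(|P|)⟨ 0|_P U|ψ_0⟩_f as above, and checks that a single realization of V^† V is far from the identity (V annihilates a large subspace, since V^† V has rank at most |B|), while the average of V^† V over the random unitaries approaches the identity, in line with the statement that the deviation from unitarity is negligible on average.

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    # Haar-random unitary from the QR decomposition of a complex Ginibre matrix
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

# Toy dimensions, for illustration only: |l| = 4, |f| = 2, |r| = 4, |B| = 4, |P| = 8
dl, df, dr, dB, dP = 4, 2, 4, 4, 8
assert dl * df * dr == dB * dP

psi0_f = np.zeros(df); psi0_f[0] = 1.0        # fixed state |psi_0>_f
bra0_P = np.zeros(dP); bra0_P[0] = 1.0        # fixed state <0|_P

def nonisometric_map(U):
    # V = sqrt(|P|) <0|_P U |psi_0>_f :  H_l (x) H_r -> H_B
    Ut = U.reshape(dB, dP, dl, df, dr)        # outputs (B, P), inputs (l, f, r)
    V = np.sqrt(dP) * np.einsum('p,bplfr,f->blr', bra0_P, Ut, psi0_f)
    return V.reshape(dB, dl * dr)

samples = [nonisometric_map(haar_unitary(dl * df * dr)) for _ in range(3000)]

# A single realization is badly non-isometric: V^dag V has rank at most |B| < |l||r|
VdV = samples[0].conj().T @ samples[0]
print("one sample:   ||V^dag V - 1|| =", np.linalg.norm(VdV - np.eye(dl * dr)))

# ... but the Haar average of V^dag V approaches the identity as samples accumulate
avg = sum(V.conj().T @ V for V in samples) / len(samples)
print("Haar average: ||E[V^dag V] - 1|| =", np.linalg.norm(avg - np.eye(dl * dr)))
```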
The post-selection or the projection in the holographic map reminds us of the final state proposal by Horowitz and Maldacena <cit.> (see also <cit.>). However, in the final state proposal the post-selection comes from a modification of quantum mechanics and in AEHPV model the post-selection is a property of the non-isometric holographic map itself. In addition, in the final state model the post-selection is supposed to happen at the singularity while the post-selection in AEHPV model happens in the black hole interior. We should also mention another interesting proposal by Wang et.al in <cit.> that is inspired by the Island rule. In <cit.>, such projections or post-selected measurements occur on the horizon to avoid causal issues, and that the information is transferred to the outside once it enters the entanglement island. Due to the post-selection or the projection in the black hole interior, the unitarity is apparently violated in the effective description. However, two important observations from the AEHPV model are: (1) averaged over the random unitary group, the deviation from the unitarity in the effective description is negligibly small in the entropy; (2) the entanglement entropy of the Hawking radiation in the fundamental description can be computed by the quantum extremal surface formula in the effective description. It is therefore argued that the non-isometric holographic model of the black hole interior gives a Hilbert space interpretation of the Page curve computation from the Island rule. This model has also been generalized to include the effects induced by the infalling agent interacting with the radiation both outside and inside the black hole horizon <cit.>. It is tested that the unitarity of the S-matrix is guaranteed to a very high precision. Inspired by these works, a “backwards-forwards" evolving model was also introduced to probe the non-trivial interactions between the infalling modes with the radiation modes outside and inside the horizon <cit.>. § MODIFIED HAYDEN-PRESKILL PROTOCOL BASED ON THE NON-ISOMETRIC MODEL In this section, based on the non-isometric holographic model of black hole interior, we will propose a modified version of Hayden-Preskill protocol. In fact, the model reviewed in the last section is a static one. We should properly modify the static model to incorporate the dynamical process of the information scrambling and the black hole radiating <cit.>. Now let us introduce our modified model inspired by the dynamical model in <cit.>. As shown in the right panel of Figure <ref>, on the “nice slice" Cauchy surface at the initial time, there are the matter system f that forms the black hole, the infalling message system A, the reference system A' that is maximally entangled with A, the outside Hawking radiation R and its interior partner r. In our model, f denotes the matter system that collapses to the black hole. (This point is different from the AEHPV model, where f denotes an auxilliary system in the fixed state while l denotes the infalling matter system that forms the black hole.) For simplicity, we also set f to be in the fixed state |ψ_0⟩_f. The reference system A' is introduced to purify the message system A. In addition, from the effective field viewpoint, R and r are maximally entangled. In this setup, the state of the total system at the initial time is given by |Ψ_i⟩ = |EPR⟩_A'A⊗ |ψ_0⟩_f ⊗ |EPR⟩_rR , where EPR represents the maximally entangled state, for example, |EPR⟩_A'A=1/√(|A|)∑_j |j,j⟩. 
Motivated by the non-isometric holographic map, after some time the state of the whole system evolves into the the following modified Hayden-Preskill state, which is given by |Ψ_HP⟩ = √(|P|)⟨ 0|_P (I_A'⊗ U_(Afr)(BPR')⊗ I_R) |Ψ_i⟩ = √(|P|)⟨ 0|_P (I_A'⊗ U_(Afr)(BPR')⊗ I_R) |EPR⟩_A'A⊗ |ψ_0⟩_f ⊗ |EPR⟩_rR . In this expression, the dynamical process of the information scrambling and the black hole radiating is typically represented by the random unitary operator U. In order to describe the black hole radiating, we introduce the newly generated Hawking radiation R'. The subscripts (Afr) and (BPR') denote the input systems and the output systems of the random unitary operator U. It is clear that the Hilbert space dimension of the input systems Afr is equal to that of the output system BPR', i.e., |A||f||r|=|B||P||R'|=d. Similar to the non-isometric model, after scrambling, certain degrees of freedom P are projected onto the fixed state ⟨ 0|_P in the black hole interior. The factor √(|P|) is introduced to preserve the normalization. In this setup, |Ψ_HP⟩ describes the state of the total system in the fundamental description including the remnant black hole B, the newly-generated Hawking radiation R', the early Hawking radiation R and the reference system A', which are systematically depicted in the right panel of Figure <ref>. In this study, calculations are done using the graphical representation. The modified Hayden-Preskill state (<ref>) can be graphically represented as |Ψ_HP⟩ =√(|P|)   0.2modified_HP_protocol.png . In this graph, 0.2EPR.png represents the EPR state of A and A' and the black dot stands for the normalization factor 1/√(|A|). Similar rules applies to the system r and R. As we have claimed, in the black hole interior, the infalling message system A, the matter system f and the Hawking partner mode r are scrambled by the random unitary operator U, resulting in the output system composed by the newly-generated Hawking radiation R', the remnant black hole B and an auxiliary system P. According to the non-isometric map, P is postselected or projected onto the fixed state |0⟩_P. With the same rules, we can represent the conjugate state ⟨Ψ_HP| as ⟨Ψ_HP|=√(|P|)   0.2HPState_dagger.png . This graph is obtained by flipping the graphical representation (<ref>) of |Ψ_HP⟩ and replacing the random unitary U with U^†. Because the holographic map from the effective description to the fundamental description is non-isometric, the modified Hayden-Preskill state is not normalized in general. One can see that the graphical representation of the inner product ⟨Ψ_HP|Ψ_HP⟩ can be obtained by connecting the open ends of the graphs in Eq.(<ref>) and (<ref>) ⟨Ψ_HP|Ψ_HP⟩=|P|   0.2HP_inner_product.png . Note that due to the existence of the postselection on the fixed state |0⟩_P, the successive action of U and U^† can not be treated as the identity operator I. In this sense, in general ⟨Ψ_HP|Ψ_HP⟩≠ 1. However, the normalization is preserved on the Haar average over the random unitary operator U. To realize this aim, we invoke the following integral formula ∫ dU U_ij U^†_j'i'=∫ dU U_ij U^*_i'j' =δ_ii'δ_jj'/d , which can be graphically represented as <cit.> ∫ dU (0.2U_Udagger.png) =1/d(0.2integral_U_Udagger.png) . Then the average of the inner product ⟨Ψ_HP|Ψ_HP⟩ of Eq. (<ref>) over the random unitary operator U can be calculated as ∫ dU ⟨Ψ_HP|Ψ_HP⟩ = |P| ∫ dU (0.2HP_inner_product.png) = |P|/d∫ dU (0.2HP_inner_product_sim.png) = |P||B||R'|/d=1 . 
In deriving this result, we used the fact that the loop denotes the trace of identity operator over the Hilbert space of the corresponding system, which gives rise to the factor of its Hilbert space dimension. The loop with two dots is equal to unity. The line that connects the two fixed states (for example |0⟩_P and ⟨ 0|_P) represents the normalization condition of the fixed state and gives rise to the factor of unity. The normalization condition of the modified Hayden-Preskill state is preserved on the average over the random unitary operator U. This also implies that the dynamical process is unitary on average for the observer in the effective description. § DECOUPLING CONDITION OF THE MODIFIED HAYDEN-PRESKILL PROTOCOL In this section, with the modified Hayden-Preskill state given in Eq.(<ref>), we now discuss whether the information contained in the message system A can be recovered by the outside observer from collecting and decoding the early and the newly-generated Hawking radiation R and R'. The condition that the aim can be achieved relies on the decoupling or the disentangling between the reference system A' and the remnant black hole B. We refer to this condition as the decoupling condition. This is to say that the decoupling condition can be obtained by estimating the operator distance between the “reduced density matrix" ρ_A'B and the product state of A' and B averaged over the random unitary operator U. The “reduced density matrix" for the combined system of the reference A' and the remnant black hole B can be obtained from the density matrix of the modified Hayden-Preskill state by tracing out the early Hawking radiation R and the newly-generated Hawking radiation R', which can be graphically represented by ρ_A'B=Tr_RR'|Ψ_HP⟩⟨Ψ_HP|=|P|/|r|   0.2reduced_density_matrix_ApB.png , where the factor 1/|r| comes from the normalization factor of the EPR state for the system r and R. The above graph is obtained by juxtaposing the representation of |Ψ_HP⟩ in Eq. (<ref>) with the representation of ⟨Ψ_HP| in Eq. (<ref>) and then connecting the same legs of the newly-generated radiation R' and the early radiations R. Here, taking the trace over a specific system is simply realized by connecting the corresponding open ends in the graphical representation of the density matrix |Ψ_HP⟩⟨Ψ_HP|. Note that ρ_A'B is not a real reduced density matrix in the usual sense, which can be observed by calculating its trace. Tracing out the remnant black hole B and the reference A' of Eq. (<ref>) gives us Trρ_A'B=|P|/|A||r|   0.2trace_rdm_ApB.png . One can see Trρ_A'B≠ 1 . The reason is the same with ⟨Ψ_HP|Ψ_HP⟩≠ 1. The observation that the trace of ρ_A'B is not equal to unity explains why we have put the double quotation marks to denote ρ_A'B. However, for an observer in the effective description, the Haar average of Trρ_A'B over the random unitary operator U can be calculated by invoking the graphical representation in Eq.(<ref>) ∫ dU Trρ_A'B=|P|/d|A||r|0.2integral_trace_rdm_ApB.png =|P||B||A||r||R'|/d|A||r|=1 . The technique in Eq.(<ref>) is also used in the above derivation. This result shows that for an observer in the effective description, ρ_A'B is a reduced density matrix on average over the random unitary operator. Now we estimate the operator distance between the “reduced density matrix" ρ_A'B and the product state of the reference system A' and the remnant black hole B averaged over the random unitary operator U. 
We should consider the following quantity <cit.> (∫ dU ρ_A'B-1/|A'||B| I_A'⊗ I_B _1)^2 , where 1/|A'| I_A' and 1/|B| I_B are the maximally mixed density matrices of the system A' and B. The operator trace norm ·_1 is the L_1 norm, defined for any operator M as M_1=Tr√(M^† M). If the quantity in Eq.(<ref>) is small enough, the correlations between the reference system A' and the remnant black hole B can be ignored. Therefore, we try to estimate the upper bound of the quantity in Eq.(<ref>). By defining the L_2 norm as M_2=√(Tr M^† M), and using the inequality M_2≤M_1≤√(N)M_2 with N being the dimensionality of the Hilbert space, one can estimate (∫ dU ρ_A'B-1/|A'||B| I_A'⊗ I_B _1)^2 ≤ ∫ dU ρ_A'B-1/|A'||B| I_A'⊗ I_B _1^2 ≤ |A'||B| ∫ dU ρ_A'B-1/|A'||B| I_A'⊗ I_B _2^2 = |A'||B| ∫ dU Trρ_A'B^2-1 , where we have used Jensen's inequality and the fact that ∫ dUTrρ_A'B=1. To proceed, we calculate the average value of Trρ_A'B^2. Using the graphical representations of ρ_A'B in Eq.(<ref>), ρ_A'B^2 can be obtained by taking two copies of graphical representation of ρ_A'B and connecting the legs of both the reference system A' and the remnant black hole system B in the middle. The trace of ρ_A'B^2 is obtained by connecting the remaining legs, which can be graphically expressed as Trρ_A'B^2=|P|^2/|A|^2|r|^2   0.2trace_sqrho_ApB.png . Computing the Haar average of Trρ_A'B^2 involves the following formula ∫ dU U_i_1j_1U_i_2j_2U^∗_i_3j_3U^∗_i_4j_4 = δ_i_1i_3δ_i_2i_4δ_j_1j_3δ_j_2j_4 + δ_i_1i_4δ_i_2i_3δ_j_1j_4δ_j_2j_3/d^2-1 -δ_i_1i_3δ_i_2i_4δ_j_1j_4δ_j_2j_3 + δ_i_1i_4δ_i_2i_3δ_j_1j_3δ_j_2j_4/d(d^2-1) . The details on evaluating such integrals are given in the Appendix A. With this in hand, the average of Tr(ρ_A'B)^2 over the random unitary operator is given by ∫ dU Trρ_A'B^2 = |P|^2/|A|^2|r|^2∫ dU U_(a_1 f r_1)(b_1 0 r_1')U_(a_2 f r_2)(b_2 0 r_2')U^*_(a_2 f r_1)(b_2 0 r_1')U^*_(a_1 f r_2)(b_1 0 r_2') = |P|^2/|A|^2|r|^2(d^2-1)[ δ_a_1a_2δ_r_1r_2δ_a_2a_1δ_r_1r_2δ_b_1b_2δ_r_1'r_1'δ_b_2b_1δ_r_2'r_2'.                     +δ_a_1a_1δ_r_1r_2δ_a_2a_1δ_r_2r_1δ_b_1b_1δ_r_1'r_2'δ_b_2b_2δ_r_2'r_1'                     -1/dδ_a_1a_2δ_r_1r_1δ_a_2a_1δ_r_2r_2δ_b_1b_1δ_r_1'r_2'δ_b_2b_2δ_r_2'r_1'                    . -1/dδ_a_1a_1δ_r_1r_2δ_a_2a_2δ_r_2r_1δ_b_1b_2δ_r_1'r_1'δ_b_2b_1δ_r_2'r_2'] = |P|^2/(d^2-1)[|B||R'|^2/|A|+|B|^2|R'|/|r|-1/d|B|^2|R'|/|A|-1/d|B||R'|^2/|r|] . Therefore, we have |A'||B| ∫ dU Trρ_A'B^2-1 = (|A|^2|f|-1)(d^2-|R'|^2|P|)/(d^2-1)|R'|^2|P| ≅ |A|^2|f|/|R'|^2|P|(1-1/|B|^2) ≅ |A|^2|f|/|R'|^2|P| , which gives the inequality of the operator distance between the “reduced density matrix" ρ_A'B and the decoupled density matrix 1/|A'||B| I_A'⊗ I_B ∫ dU ρ_A'B-1/|A'||B| I_A'⊗ I_B _1≤√(|f|/|P|)|A|/|R'| . If the following condition is satisfied, i.e., |R'| ≫√(|f|/|P|) |A| , then we have ∫ dU ρ_A'B-1/|A'||B| I_A'⊗ I_B _1 ≪ 1 . This equation implies that the operator distance between ρ_A'B and the product state of the reference system A' and the remnant black hole B averaged over the random unitary operator is small enough. Therefore, the reference system A' is decoupled from the remnant black hole B and the entanglement between the reference system A and the message system A' is transferred to the entanglement between the reference system A' and the newly-generated Hawking radiation R'. In this case, the information contained in the message system A can be recovered by the outside observer who has the full access of the early Hawking radiation R and the newly-generated Hawing radiation R'. 
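The decoupling estimate can also be examined numerically for small systems. The following sketch (with toy dimensions chosen only for illustration) constructs the modified Hayden-Preskill state for Haar-random unitaries, forms ρ_A'B, and averages its trace distance to the maximally mixed product state for two sizes of the newly generated radiation R'; in the run where |R'| exceeds √(|f|/|P|)|A| the averaged distance should come out markedly smaller, reflecting the suppression by |A|^2|f|/(|R'|^2|P|) derived above.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_unitary(n):
    z = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def avg_trace_distance(dA, dF, dr, dB, dP, dRp, n_samples=200):
    """Average of || rho_A'B - I/(|A'||B|) ||_1 over Haar-random scrambling unitaries."""
    d = dA * dF * dr
    assert d == dB * dP * dRp
    EPR_A = np.eye(dA) / np.sqrt(dA)          # |EPR>_{A'A}
    EPR_r = np.eye(dr) / np.sqrt(dr)          # |EPR>_{rR}
    psi0 = np.zeros(dF); psi0[0] = 1.0        # |psi_0>_f
    psi_i = np.einsum('ij,k,lm->ijklm', EPR_A, psi0, EPR_r)   # indices (A', A, f, r, R)
    target = np.eye(dA * dB) / (dA * dB)      # maximally mixed product state on A'B
    dists = []
    for _ in range(n_samples):
        U = haar_unitary(d).reshape(dB, dP, dRp, dA, dF, dr)
        # apply U to (A, f, r), then project the interior factor P onto <0|
        psi = np.einsum('bpqafr,iafrm->ibpqm', U, psi_i)[:, :, 0, :, :]
        psi = np.sqrt(dP) * psi               # remaining indices (A', B, R', R)
        rho = np.einsum('ibqm,jcqm->ibjc', psi, psi.conj()).reshape(dA * dB, dA * dB)
        dists.append(np.abs(np.linalg.eigvalsh(rho - target)).sum())
    return float(np.mean(dists))

# Same interior dimension |A||f||r| = 32; only the split into B, P, R' differs
print("|R'| = 2 :", avg_trace_distance(2, 2, 8, dB=8, dP=2, dRp=2))
print("|R'| = 8 :", avg_trace_distance(2, 2, 8, dB=2, dP=2, dRp=8))
```

Once |R'| is large enough, the reference system A' is thus effectively decoupled from the remnant black hole B.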
In this sense, Eq.(<ref>) is the decoupling condition to guarantee the information can be retrieved from the black hole. § DECODING HAWKING RADIATION BASED ON THE NON-ISOMETRIC MODEL In this section, we consider how the observer who stays outside of the black hole can use the Hawking radiation that one collected and apply the Yoshida-Kitaev decoding strategy to recover the information thrown into the black hole. The strategy was firstly proposed by Yoshida and Kitaev in <cit.> for the original model of Hayden-Preskill thought experiment. One can refer to <cit.> for the decoding strategies with the quantum decoherence or noise. In these strategies, it is assumed that the outside observer has the full information about the information scrambling and black hole evaporating, which is usually represented by an random unitary operator. §.§ Probabilistic decoding We have claimed that the dynamics of the information scrambling and the black hole evaporating are represented by the random unitary matrix. If the message system A, which is entangled with the reference system A', is thrown into the black hole, after some time the quantum state of the whole system is described by the modified Hayden-Preskill state. For the outside observer, he has the full access to the early Hawking radiation R and the newly-generated Hawking radiation R'. The observer wants to apply some operations to recover the information that is contained in the message system A. The probabilistic decoding strategy can be implemented as follows. Firstly, we prepare one copy of |Ψ_0⟩_f and one copy of |EPR⟩_A'A. The copy of |EPR⟩_A'A is denoted as |EPR⟩_FF'. Then, with the modified Hayden-Preskill state in hand, apply the complex conjugate U^* of the random unitary operator on the composed system of R, f and F. The resultant state is denoted as |Ψ⟩_in, which can be graphically expressed as |Ψ⟩_in=|P|   0.2Psi_in.png . The operator U^* can be treated as the time reversal operator of the black hole dynamics. The output system of U^* consists of a copy of newly-generated radiation R”, a copy of the remnant black hole B' and another auxiliary system P. The output auxiliary system P is post-selected or projected onto the fixed state |0⟩_P. Next, we project the system R' and R” onto the state |EPR⟩_R'R”. This is to act the projecting operator Π_R'R”=|EPR⟩_R'R”⟨EPR|_R'R” on the system R' and R”. The resulting state is denoted as |Ψ⟩_out, which can be graphically expressed as |Ψ⟩_out = (I_A'B⊗Π_R'R”⊗ I_B'F')|Ψ⟩_in = |P|/√(P_EPR)   0.2Psi_out.png , where P_EPR is the averaged projecting probability. The projecting operation Π_R'R” serves to decouple the prepared system F' from the remnant black holes B and B' and teleports the quantum state of the message system A to the prepared system F' owned by the outside decoder. The factor 1/√(P_EPR) is introduced to preserve the normalization of the state |Ψ⟩_out on the Haar average of the random unitary operator. Therefore, the condition ∫ dU  _out⟨Ψ|Ψ⟩_out=1 gives the graphical representation of the projecting probability, P_EPR=|P|^2/|A|^2|r||R'|∫ dU ( 0.2Probability.png) , where the inner product _out⟨Ψ|Ψ⟩_out is represented by connecting the legs of |Ψ⟩_out in the upper half of the graph to the corresponding legs of _out⟨Ψ| in the lower half. After a rearrangement of the unitary operators, this graph is equivalent to that of Trρ_A'B^2 in Eq.(<ref>), which results in the following relation P_EPR=|r|/|R'|∫ dU Trρ_A'B^2 . 
By using the previous result of ∫ dU Trρ_A'B^2 given in Eq.(<ref>), it can be shown that the projecting probability is given by P_EPR = |P|^2/(d^2-1)[|r||B||R'|/|A|+|B|^2-1/d|r||B^2|/|A|-1/d|B||R'|] ≅ |P|/|f|1/|A|^2+1/|R'|^2 -1/|f|1/|A|^2|R'|^2-|P|/d^2 . Under the decoupling condition (<ref>), the projecting probability can be further approximated as P_EPR≅|P|/|f|1/|A|^2 , where only the leading order term is retained. This result shows that the projecting probability depends not only on the dimensionality of the message system A but also depends on the ratio of the dimensionalities of the black hole interior projecting system P and the initial collapsing matter's system f. By requiring the projecting probability less than one, we can impose the following condition on the dimension of projecting space P as |P|≤|f||A|^2 . This condition imposes the upper bound on the dimension of projecting space P. This is reasonable because if |P| is too large, the information swallowed by the black hole is lost due to the projection. It seems that by adjusting the ratio |P|/|f|, one can have a relatively large projecting probability. This means that the decoding strategy for this modified model is more efficient than the original model considered by Yoshida and Kitaev <cit.>. The decoding fidelity can be quantified by the derivation of the out state |Ψ⟩_out from |EPR⟩_A'F'. The decoding fidelity is then defined and graphically expressed as F_EPR = Tr(Π_A'F'|Ψ⟩_out _out⟨Ψ| ) = |P|^2/P_EPR|A|^3|r||R'|    0.2Fidelity.png , where the upper half of the graph represents the state |Ψ⟩_out and the lower half represents _out⟨Ψ|. The techniques of the operators acting on the system and tracing out the system used in the previous calculations are also applied here. The average of the decoding fidelity over the random unitary group can be calculated as ∫ dU F_EPR = |P|^2/P_EPR|A|^3|r||R'| ∫ dU U_(a_1 f r_1)(b_1 0 r_1')U_(a_2 f r_2)(b_2 0 r_2')U^*_(a_1 f r_1)(b_2 0 r_1')U^*_(a_2 f r_2)(b_1 0 r_2') = |P|^2/P_EPR|A|^3|r||R'| [ δ_a_1a_1δ_r_1r_1δ_a_2a_2δ_r_2r_2δ_b_1b_2δ_r_1'r_1'δ_b_2b_1δ_r_2'r_2'.                     +δ_a_1a_2δ_r_1r_2δ_a_2a_1δ_r_2r_1δ_b_1b_1δ_r_1'r_2'δ_b_2b_2δ_r_2'r_1'                     -1/dδ_a_1a_1δ_r_1r_1δ_a_2a_2δ_r_2r_2δ_b_1b_1δ_r_1'r_2'δ_b_2b_2δ_r_2'r_1'                    . -1/dδ_a_1a_2δ_r_1r_2δ_a_2a_1δ_r_2r_1δ_b_1b_2δ_r_1'r_1'δ_b_2b_1δ_r_2'r_2'] = d^2/P_EPR(d^2-1)|A|^2[|P|/|f|+1/|R'|^2-1/|f||R'|^2-|P|/d^2] ≅ |P|/P_EPR|A|^2|f| . If the decoupling condition (<ref>) is satisfied, the projecting probability is approximated as |P|/|f|1/|A|^2, which implies that the decoding fidelity achieves the maximal decoding quality F_EPR≅|P|/P_EPR|A|^2|f|≅ 1 . In summary, we have shown that the Yoshida-Kitaev probabilistic decoding strategy can be successfully employed in our modified Hayden-Preskill protocol to decode the Hawking radiation and recover the information falling into the black hole. §.§ Deterministic decoding We have shown that the probabilistic decoding strategy can be applied to recover the initial information. In this subsection, we discuss a deterministic decoding strategy for the modified Hayden-Preskill protocol. The decoding process is similar to the Grover's search algorithm <cit.>. For the deterministic decoding strategy, we define the following operator Π̃_R'BA= |P|   0.2tilde_Pi.png , which operates on the newly-generated radiation R', the remnant black hole B and the message system A. 
In the ideal case, one needs to prove the following relations in order to apply the Grover's search algorithm (I_A'B⊗Π_R'R”⊗ I_B'F') |Ψ⟩_in = √(|P|/|f|)1/|A||Ψ⟩_out , (I_A'B⊗Π_R'R”⊗ I_B'F') |Ψ⟩_out = |Ψ⟩_out , (I_A'BR'⊗Π̃_R”B'F') |Ψ⟩_in = |Ψ⟩_in , (I_A'BR'⊗Π̃_R”B'F')|Ψ⟩_out = √(|P|/|f|)1/|A||Ψ⟩_in . The first two relations are apparent. The last two relations are satisfied only for the typical random unitary operator U in the ideal case. The last two relations can be verified by showing that the distance of the density matrices for the states on the l.h.s and the r.h.s of the equations averaged over the random unitary group is small. A less rigorous verification of the last two relations is presented in the Appendix B and the proof is presented in the Appendix C. Consider the two dimensional plane spanned by |Ψ⟩_in and |Ψ⟩_out. On this plane, we also introduce a state vector |Ψ⟩_out^⊥ that is orthogonal to |Ψ⟩_out. It is easy to check that |Ψ⟩_out^⊥∝(1-I_A'B⊗Π_R'R”⊗ I_B'F') |Ψ⟩_out . By defining the unitary operator G as G=1-2Π , one can show that the inner product of |Ψ⟩_in and |Ψ⟩_out^⊥ is equal to the inner product of (I_A'B⊗ G_R'R”⊗ I_B'F')|Ψ⟩_in and |Ψ⟩_out^⊥, i.e. the following relation holds _in⟨Ψ|Ψ⟩_out^⊥ = _in⟨Ψ|(I_A'B⊗ G_R'R”⊗ I_B'F')|Ψ⟩_out^⊥ . Therefore, the application of the operator G on the state |Ψ⟩_in results in a reflection across the state |Ψ⟩_out^⊥. The reflection angle θ is determined by the equation sinθ/2=√(|P|/|f|)1/|A| . Similarly, one can define the G̃ operator G̃=1-2Π̃ . The application of the operator G̃ on the state (I_A'B⊗ G_R'R”⊗ I_B'F')|Ψ⟩_in means a reflection across the state |Ψ⟩_in. The reflection angle is also given by θ. Such a procedure is presented in Figure <ref>, where the operation of G̃ is accomplished by U^∗ G U^T. Therefore, the application of the combined operator G̃ G on the state |Ψ⟩_in results in the rotation of this state on the two dimensional plane by the angle θ. Such a procedure is similar to Grover's search algorithm. After n times, we have |Ψ(n)⟩=sin((n+1/2)θ)|Ψ⟩_out+cos((n+1/2)θ)|Ψ⟩_out^⊥ . For our quantum simulation that will be discussed in the following section, the message system A, the infalling matter system f and the projecting system P are represented by one qubit respectively. So we have |P|=|f|=|A|=2 and θ=π/3. In this case, the initial quantum state of the message system A can be successfully recovered by applying the combined operator G̃G on the state |Ψ⟩_in only one time. Such a strategy is presented in Figure <ref>. Note that the operation of G̃ is accomplished by U^∗ G U^T. With the initial state |Ψ⟩_in in hand, the decoder should apply sequentially the reflection operator G on the newly generated radiation R' and its copy R”, the scrambling operator U^T on the radiation copy R” and the black hole copy B', and again the reflection operator G on the black hole copy B' and the prepared system F', and then U^∗ on the radiation copy R” and the black hole copy B'. In this way, the decoder can retrieve the information of the message system A, namely its quantum state |ψ⟩_A, on the prepared system F' outside the black hole. § QUANTUM SIMULATION OF DECODING HAWKING RADIATION Recently, the works on the quantum processor's realization of traversable wormhole dynamics and Hawking radiation have attracted significant attention <cit.>. These works stimulate the studying of quantum gravity in the laboratory. 
Benefited by the development of quantum computers, it is believed that some essential quantum features of black holes can be simulated on the quantum computers, which will provide us a deeper understanding of the nature of quantum gravity. In this section, we try to implement the probabilistic and the deterministic decoding strategies for the Hawking radiation on the IBM quantum processors to verify the feasibility of the information recovery from the black hole. To this end, we experimentally realize the decoding strategies discussed in the last section on the 7-qubit IBM quantum processors using a 3-qubit scrambling unitary. The key is to realize the typical Haar scrambling unitary operator on the quantum processors. This is a difficult task especially in IBM quantum processors because the seven qubits on the IBM quantum processors are not fully connected. Some optimization schemes for the quantum circuit should be taken carefully. §.§ A typical scrambling unitary operator Firstly, we discuss how to realize the scrambling unitary operator on the IBM quantum processors. We consider the 3-qubit Clifford scrambler <cit.>. An ideal three-qubit Clifford scrambling unitary operator should transform single-qubit operations into three-qubit operations. An example of such scrambling unitaries satisfying Eq. (<ref>) can be realized using the quantum circuit shown in Figure <ref>. Algebraically, the quantum circuit of the scrambling unitary in Figure <ref> can be expressed as U = (I ⊗ I ⊗ |0⟩⟨0| + I ⊗σ_z ⊗ |1⟩⟨1|) (I ⊗ |0⟩⟨0| ⊗ I + σ_z ⊗ |1⟩⟨1| ⊗ I) ×(I ⊗ I ⊗ |0⟩⟨0|+σ_z ⊗ I ⊗ |1⟩⟨1|) (H ⊗ H ⊗ H) (I ⊗ |0⟩⟨0| ⊗ I + σ_z ⊗ |1⟩⟨1| ⊗ I) ×(I ⊗ I ⊗ |0⟩⟨0| + I ⊗σ_z ⊗ |1⟩⟨1|) (I ⊗ I ⊗ |0⟩⟨0|+σ_z ⊗ I ⊗ |1⟩⟨1|) , where σ_x, σ_y, σ_z are Pauli matrices and I is the two dimensional identity matrix. Note that in Figure <ref>, the ordering of the left three controlled-Z gates or the right three controlled-Z gates does not affect the scrambling unitary. This unitary operator was used in <cit.> to realize the scrambling dynamics of the quantum information. It can be shown that the scrambling unitary in the computing basis can be expressed in the matrix form as U=1/2√(2)[ 1 1 1 -1 1 -1 -1 -1; 1 -1 1 1 1 1 -1 1; 1 1 -1 1 1 -1 1 1; -1 1 1 1 -1 -1 -1 1; 1 1 1 -1 -1 1 1 1; -1 1 -1 -1 1 1 -1 1; -1 -1 1 -1 1 -1 1 1; -1 1 1 1 1 1 1 -1; ] . It is easy to check that the the scrambling unitary satisfies the following gate transformation identities U^†( σ_x ⊗ I ⊗ I ) U= σ_z ⊗σ_y ⊗σ_y , U^†( I ⊗σ_x ⊗ I ) U= σ_y ⊗σ_z ⊗σ_y , U^†( I ⊗ I ⊗σ_x ) U= σ_y ⊗σ_y ⊗σ_z , U^†( σ_y ⊗ I ⊗ I ) U= σ_y ⊗σ_x ⊗σ_x , U^†( I ⊗σ_y ⊗ I ) U= σ_x ⊗σ_y ⊗σ_x , U^†( I ⊗ I ⊗σ_y ) U= σ_x ⊗σ_x ⊗σ_y , U^†( σ_z ⊗ I ⊗ I ) U= σ_x ⊗σ_z ⊗σ_z , U^†( I ⊗σ_z ⊗ I ) U= σ_z ⊗σ_x ⊗σ_z , U^†( I ⊗ I ⊗σ_z ) U= σ_z ⊗σ_z ⊗σ_x , which suggests that all single-qubit operators are dispersed into three-qubit operators after the operation of the scrambling unitary. This is the indication of its scrambling property. In the following, we will use the 3-qubit Clifford scrambler given in Figure <ref> to simulate the two decoding strategies. §.§ Simulation of the probabilistic decoding strategy The probabilistic decoding strategy is realized in the quantum circuit presented in Figure <ref>. We use the first three qubits to represent A, f, and r, respectively. In addition, we use the next three qubits to represent R, f, and F, respectively. The last qubit represents the prepared system F'. The seven classical bits are used to record the measurement results. 
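The scrambling identities quoted above can be verified numerically before the circuit is committed to hardware. The following Python sketch builds the three-qubit Clifford unitary directly from its controlled-Z and Hadamard layers, in the same basis and factor ordering as the equation above, and checks two of the Pauli transformation rules; the remaining identities can be checked in the same way.

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0., 1.], [1., 0.]])
Y = np.array([[0., -1j], [1j, 0.]])
Z = np.array([[1., 0.], [0., -1.]])
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# The mutually commuting controlled-Z factors, written exactly as in the equation
CZ23 = kron3(I2, I2, P0) + kron3(I2, Z, P1)
CZ12 = kron3(I2, P0, I2) + kron3(Z, P1, I2)
CZ13 = kron3(I2, I2, P0) + kron3(Z, I2, P1)

# U = (CZ layer)(Hadamard on all three qubits)(CZ layer)
U = CZ23 @ CZ12 @ CZ13 @ kron3(H, H, H) @ CZ12 @ CZ23 @ CZ13

# Check two of the scrambling identities; the others can be verified the same way
print(np.allclose(U.conj().T @ kron3(X, I2, I2) @ U, kron3(Z, Y, Y)))   # True
print(np.allclose(U.conj().T @ kron3(I2, Z, I2) @ U, kron3(Z, X, Z)))   # True
```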
To simplify the model, we set the message system A to be in a pure state without the reference system A'. The quantum circuit realizes the probabilistic decoding strategy in Eq. (<ref>). In Figure <ref>, the vertical dashed lines represent the barriers, which are added just for the convenience of visualization. It is clear that the whole circuit is divided into five parts. In the first part, we prepare the entanglement state of the qubits of q[2] and q[3] and the entanglement state of the qubits of q[5] and q[6] and set the input states of q1] and q[4] to be |1⟩. The qubits q[1] and q[4] represent the infalling matter system that collapses to the black hole. Without the loss of generality, the entanglement state is selected to be the EPR state |EPR⟩=1/√(2)(|00⟩+|11⟩). The quantum state of q[0], which is the state that we want to recover, can be set to be |0⟩ or |1⟩. Here, |0⟩ and |1⟩ represent the eigenstates of the spin operator σ_z. In figure <ref>, the initial state of q[0] is set to be |0⟩. A X-gate that added on the q[0] qubit can change this state to be |1⟩. The first part prepares the initial setup of the quantum circuit. In the second part, the first three qubits and the next three qubits are processed by the scrambling unitary operators U and U^*, respectively. The scrambling unitary operator U is given in Figure <ref>. Note that U^*=U, because the scrambling matrix is real. The second part realizes the scrambling dynamics in the black hole interior. In the third part, the qubits q[1] and q[4] are measured. The projection of a part of degrees of freedom in the black hole interior onto the system P is realized by postselecting the measured value of q[1]q[4] to be 00 or 11. In the forth part, we perform the EPR projecting measurement on the qubits q[2] and q[3]. The measured value of q[2]q[3] being 00 means that the success of the EPR projection. In the last part, the qubit q[6] is measured. If the input state of the first qubit is |0⟩, the measured value of the last qubit is 0 means the success of the decoding. In this quantum circuit, the information contained in the qubit q[0] is dispersed to the whole system by the scrambling unitary and is finally recovered in the qubit q[6] by the projection operators. Without errors, the quantum circuit can be regarded as the realization of traversable wormhole on the quantum computer that teleports the information from the qubit q[0] to q[6]. We implement the quantum circuit presented in figure <ref> on the IBM quantum processor to verify the probabilistic decoding strategy. The circuit was run on IBM-nairobi processor, which is a 7-qubit quantum computer with quantum volume 32. In Figure <ref>, we present the experimental results for the case that the initial input of q[0] is |0⟩. In this figure, we have presented all the measurement outcomes. The meaningful experimental results from figure <ref> are presented in Figure <ref>. In the left panel, the measurement outcome of the qubits q[4]q[1] is selected to be 00, which means that the qubits q[4]q[1] are projected to the state |00⟩. The red bars represent that the measurement outcome of the qubits q[2]q[3] is 00, which means that the qubits q[2]q[3] are projected to the specific EPR state 1/√(2)(|00⟩+|11⟩). The green bars represent that the qubits q[2]q[3] are projected to other EPR states, which means the failure of the EPR projection. 
From the data presented in the left panel of Figure <ref>, it can be calculated that the probability of projecting to the specific EPR state 1/√(2)(|00⟩+|11⟩) is about 25%. In the ideal case, the projection onto the EPR state means the success of decoding the radiation and recovering the information. However, due to the noise in the quantum processor, there are always errors in the circuit outcomes. In the left panel, the error is represented by the relatively low red bar where the output of the qubit q[6] is 1. The decoding efficiency in this case is about 84%. The decoding efficiency is defined as the ratio of the frequency of a successful decoding to the frequency of a successful EPR projection. Therefore, there is a strong signal of recovering the information by executing the quantum circuit on the IBM quantum processor. In the right panel of Figure <ref>, the measurement outcome of the qubits q[4]q[1] is selected to be 11, which means that the qubits q[4]q[1] are projected to the state |11⟩. In this case, the EPR projection probability is estimated as 27% and the decoding efficiency is about 87%. This result also implies the success of decoding the information. The original experimental results for the case that the initial input of q[0] is |1⟩ are presented in Figure <ref>. Similarly, we have plotted the meaningful experimental results in Figure <ref>. The red bars represent the cases where the qubits q[2]q[3] are projected to the correct EPR state 1/√(2)(|00⟩+|11⟩) and the green bars represent the cases where the qubits q[2]q[3] are projected to other EPR states. In the left panel, the measurement outcome of the qubits q[4]q[1] is selected to be 00, which means that the qubits q[4]q[1] are projected to the state |00⟩. In this case, the EPR projection probability is estimated as 27% and the decoding efficiency is about 79%. In the right panel of Figure <ref>, the measurement outcome of the qubits q[4]q[1] is selected to be 11, which means that the qubits q[4]q[1] are projected to the state |11⟩. From these data, the EPR projection probability is estimated as 24% and the decoding efficiency as about 75%. The decoding efficiencies are smaller than those in the case where the initial input of q[0] is |0⟩. This is caused by the fact that the qubit is more likely to decay to the state |0⟩. These results indicate that the decoding strategy can also recover the information when the initial state of q[0] is |1⟩.

§.§ Simulation of the deterministic decoding strategy

In this subsection, we discuss the experimental realization of the deterministic decoding strategy of the Hawking radiation on the IBM-perth quantum processor. This processor is more suitable for the task of deterministic decoding since the deterministic decoding algorithm involves more gate operations and the decoherence time of the IBM-perth processor is longer than that of the other machines available to us. In general, the efficiency of the deterministic decoding strategy depends heavily on the quality of the quantum processors, and the IBM-perth processor performs better than the other IBM quantum processors available to us. The unitary operator G in Figure <ref> can be realized diagrammatically as shown in Figure <ref>. It can be easily checked that the matrix representation of the operator G in the computational basis coincides with the definition of the G operator in Eq.(<ref>).
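Before turning to the deterministic circuit in detail, we note that the EPR projection probabilities and decoding efficiencies quoted above are simple functions of the raw counts. The helper below is a minimal Python sketch of this post-processing, written by us purely for illustration: it assumes Qiskit-style count keys ordered c[6]c[5]…c[0] and that each qubit q[k] was measured into classical bit c[k]; both assumptions may need adjusting for the actual register layout used in the experiment, and the example counts are fabricated.

def analyze_counts(counts, input_state="0", p_projection="00"):
    """EPR projection probability and decoding efficiency from raw counts."""
    postselected = epr_success = decode_success = 0
    for bits, n in counts.items():
        c = bits[::-1]                          # c[k] is now the k-th character
        if c[1] + c[4] != p_projection:         # postselect q[1]q[4] on 00 or 11
            continue
        postselected += n
        if c[2] + c[3] == "00":                 # EPR projection of q[2]q[3] succeeded
            epr_success += n
            if c[6] == input_state:             # decoded qubit q[6] matches the input
                decode_success += n
    p_epr = epr_success / postselected if postselected else 0.0
    efficiency = decode_success / epr_success if epr_success else 0.0
    return p_epr, efficiency

# usage with a fabricated counts dictionary, just to show the interface
p_epr, eff = analyze_counts({"0000000": 420, "1000010": 37, "0001100": 25})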
To realize the deterministic decoding strategy with fewer gates, we simplify the quantum circuit in Figure <ref> with the operator G to the circuit shown in Figure <ref>. Similar to the probabilistic decoding circuit, in this circuit the first three qubits represent A, f and r, with the information to be recovered carried by q[0]. The last three qubits represent R, f and F, respectively. Ideally, the pre-measurement state coming out of the qubit q[5] should recover the initial state of q[0]. In Figure <ref>, the initial state is set to |0⟩ as an example, and it can be set to other states as well. The circuit for the operator G is simplified to a single Y-gate for q[2] after leaving out the swap-gate and rewiring the scrambling unitary U^T. Similarly, it is also simplified for q[5] after we rewire the measurement gate. For the deterministic decoding protocol to work for the non-isometric holographic model, we need to postselect the measurement result of q[4] (recorded in c[2]) to be the same as the initial state of q[4], which is chosen to be |1⟩ in this demonstration. The two measurements whose results are sent to bits c[0] and c[1] represent the projection onto the system P, and this projection can be realized by postselecting the measurement results to be either |00⟩ or |11⟩. We test the decoding protocol of Figure <ref> on the IBM-perth quantum processor with 20,000 shots. In Figure <ref>, we present the original data of the test results. We note that only the outcomes whose last three digits are 100 or 000 in Figure <ref> are postselected; the other rows can be disregarded. The postselected results are shown in Figure <ref>, where we include the decoding outcomes for projections onto both the 00 states (the red bars) and the 11 states (the green bars). For the initial input q[0]=|0⟩, when the projection is onto the 00 state the count of successful decodings (labelled by 0100 in Figure <ref> (a)) is 1847. This corresponds to a successful decoding rate of approximately 73%. When the projection is onto the 11 state, the decoding efficiency is about 72%. For the initial input q[0]=|1⟩, the data are presented in Figure <ref> (b) and <ref> (b). In this case, the decoding efficiency is about 73% when the projection of system P is onto the 00 state and about 72% when the projection is onto the 11 state. The decoding efficiencies for this strategy are slightly lower than those of the probabilistic decoding strategy due to the higher circuit complexity. However, in this strategy there is no additional probability of a successful EPR projection on which the probabilistic decoding efficiencies are conditioned. Therefore, the overall decoding efficiency of the deterministic decoding strategy is much higher than that of the probabilistic decoding. In summary, we have experimentally verified the feasibility of the probabilistic and the deterministic decoding strategies on the IBM quantum processors by using a typical scrambling unitary operator. It is shown that the initial quantum states can be recovered on the quantum circuits for the non-isometric model. In particular, for the probabilistic decoding, the quantum circuit can be viewed as a realization of quantum teleportation. This quantum circuit can also be regarded as a modified version of a traversable wormhole.
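For readers who want to reproduce this kind of test without hardware access, the execution workflow itself is short to script. The snippet below is a minimal sketch in which the Aer simulator stands in for the ibm-nairobi/ibm-perth backends used above; the construction of the decoding circuit is left as a placeholder, and the resulting counts can be fed into the post-processing helper sketched earlier.

from qiskit import QuantumCircuit, transpile
from qiskit_aer import AerSimulator

qc = QuantumCircuit(7, 7)
# ... append the state preparation, the scrambling unitaries U and U*, and the
#     measurement-basis changes of the decoding circuit described above ...
qc.measure(range(7), range(7))                     # record q[k] into c[k]

backend = AerSimulator()                           # stand-in for the IBM backend
job = backend.run(transpile(qc, backend), shots=20000)
counts = job.result().get_counts()                 # keys are bit strings c[6]...c[0]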
On the other hand, the information recovery from decoding the Hawking radiation depends heavily on the scrambling dynamics in the black hole interior. Previous studies have relied on partial scrambling unitaries adapted to the qubit connectivity of the IBM quantum processors to achieve the desired result <cit.>. The issue with such unitaries is that they do not satisfy the scrambling properties in Eq. (<ref>), so decoding information from such partial-scrambling unitaries is often impossible due to information loss. In this study, we used a full scrambling unitary that satisfies Eq. (<ref>). Therefore, the successful simulation of the quantum circuits on IBM quantum processors indicates that high-quality three-qubit scrambling dynamics can be realized on IBM quantum processors even though the qubits are not fully connected. This requires extra effort in the simplification of the quantum circuits. This study may stimulate further investigations of black hole information problems on the IBM quantum processors and deepen our understanding of the nature of quantum gravity.

§ ON THE INTERACTION OF INFALLING SYSTEM WITH OUTSIDE HAWKING RADIATION

In our previous model, the interaction between the infalling message system A and the right-moving mode r of the radiation partner inside the black hole was considered. However, the interaction of the infalling message system A with the outside right-going Hawking radiation R is, at least apparently, not taken into account. In this section, we will make a brief comment on the effect caused by this type of interaction <cit.>. In this case, the modified Hayden-Preskill state |Ψ'⟩_HP is given graphically by a tensor-network diagram (figure omitted), in which u represents the interaction between the message system A and the Hawking radiation R. It is clear that the modified Hayden-Preskill state can be equivalently represented by a second diagram (figure omitted). In this graphical representation, the interaction between the message system A and the Hawking radiation R is properly transferred into the interaction between the message system A and the interior Hawking partner mode r. Therefore, we can further modify the scrambling unitary operator U to be U'=U_(Afr)(BPR')· (v_Ar⊗ I_f) to take this type of interaction into account. Finally, the modified Hayden-Preskill state can be represented by the original state without this type of interaction (figure omitted). The discussion on the decoupling condition as well as the decoding strategy considered in Sec.<ref> and Sec.<ref> can be properly applied to study this case, and the final conclusions do not change.

§ CONCLUSION AND DISCUSSION

In previous studies on the Hayden-Preskill thought experiment of decoding the Hawking radiation <cit.>, the full dynamics of the black hole evolution is assumed to be unitary, and there is no question that under such an assumption the information will come out of the black hole and can be decoded at late times. However, whether such a decoding strategy can still be realized in the non-isometric model, where the map from the fundamental to the effective description involves nonunitary projections, is still unclear. In this study, based on the non-isometric holographic model of the black hole interior proposed by AEHPV, we explored the problem of decoding Hawking radiation and recovering information from a black hole.
We first investigated the probabilistic decoding problem in the non-isometric model and presented the new decoupling condition under which the information can be retrieved by the outside observer. Under the assumption that the observer has full access to the early-time and late-time Hawking radiation as well as full knowledge of the dynamics in the black hole interior, the Yoshida-Kitaev decoding strategy can be employed to decode the Hawking radiation and recover the information swallowed by the black hole. We showed that the new decoupling condition in this model depends on the size of the projected Hilbert space and is less stringent if a large number of effective degrees of freedom are projected out in the fundamental description. The projection operator in the map from the fundamental to the effective description can be realized by postselecting the measurement results in the quantum computer simulations. In the modified Hayden-Preskill protocol, the success of projection onto the EPR state indicates the feasibility of recovering the information from the radiation. A further improved deterministic decoding algorithm can circumvent the issue of EPR projections and recover the information with certainty. Furthermore, we implemented the decoding strategies through quantum circuits of qubits and conducted tests of both decoding strategies on the IBM quantum computer using a full scrambling unitary circuit. The results from the quantum computers confirmed our analytical findings and demonstrated the feasibility of both probabilistic and deterministic decoding strategies on the IBM quantum computer. Finally, we also commented on the case where the infalling message system interacts with the outside Hawking radiation. We argued that this type of interaction causes no additional effect on the decoding or the recovery of the quantum information.

§ ACKNOWLEDGMENTS

We acknowledge the use of IBM Quantum services for this work. X.W. appreciates the start-up grant WIUCASQD2022026 from the Wenzhou Institute of UCAS.

§ INTEGRAL FORMULAS OVER THE HAAR MEASURE ON RANDOM UNITARY GROUP

In this section, we present the general formula for evaluating the integral of a product of 2n matrix elements of unitaries in the group U(d) with respect to its normalized Haar measure dU. We consider the general 2n-operator integral over the Haar measure, which is given by ∫ U_i_1j_1… U_i_nj_n U^∗_i_1'j_1'… U^∗_i_n'j_n' dU = ∑_σ,τ∈ S_nδ_i_1 i'_σ(1)…δ_i_n i'_σ(n)δ_j_1 j'_τ(1)…δ_j_n j'_τ(n)Wg(τσ^-1,n,d) , where σ, τ are permutations of n letters in the symmetric group S_n and Wg(ρ,n,d) is the Weingarten function <cit.>. In general, for a 2n-operator integral, there are (n!)^2 terms. For d≥ n, the Weingarten function takes the following form Wg(ρ,n,d)=1/(n!)^2∑_λ⊢ nχ^λ (1)^2 χ^λ(ρ)/s_λ,d(1) , where the sum is over all partitions λ of n, χ^λ is the character of the symmetric group S_n, and s_λ,d is the Schur polynomial of λ. Below are some explicit examples of the integrals used in this study. For the two-operator integral, the only relevant Weingarten function is Wg([1],1,d)=1/d , where [1] is the identity map. Therefore, we have ∫ U_i_1j_1U_i_1'j_1'^∗ dU=δ_i_1i_1'δ_j_1j_1'Wg([1],1,d)=δ_i_1i_1'δ_j_1j_1'/d . This is just the integral formula of Eq.(<ref>).
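The two-operator formula above is easy to verify numerically by averaging over Haar-random unitaries. The following sketch is our own illustration (it is not part of the paper's derivation) and uses scipy's unitary_group sampler; the agreement improves as the number of samples grows, at the usual Monte Carlo rate of 1/√(samples).

import numpy as np
from scipy.stats import unitary_group

d, samples = 4, 20000
acc = np.zeros((d, d, d, d), dtype=complex)
for _ in range(samples):
    U = unitary_group.rvs(d)                       # Haar-random unitary
    acc += np.einsum("ij,kl->ijkl", U, U.conj())   # accumulates U_{ij} U*_{kl}
acc /= samples

# exact value of the integral: delta_{ik} delta_{jl} / d
exact = np.einsum("ik,jl->ijkl", np.eye(d), np.eye(d)) / d
print(np.max(np.abs(acc - exact)))                 # small, shrinking like 1/sqrt(samples)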
For the four-operator integral, ∫ U_i_1j_1U_i_2j_2U_i_1'j_1'^∗ U_i_2'j_2'^∗ dU= (δ_i_1i_1'δ_i_2i_2'δ_j_1j_1'δ_j_2j_2' + δ_i_1i_2'δ_i_2i_1'δ_j_1j_2'δ_j_2j_1')Wg([1,1],2,d) +(δ_i_1i_1'δ_i_2i_2'δ_j_1j_2'δ_j_2j_1' + δ_i_1i_2'δ_i_2i_1'δ_j_1j_1'δ_j_2j_2')Wg([2],2,d) , where [2] denotes the permutation (12) and Wg([1,1],2,d)=1/d^2-1 , Wg([2],2,d)=-1/d(d^2-1) . This result is just the integral formula of Eq.(<ref>). For the six-operator integral of our interest, the relevant Weingarten functions are Wg([1,1,1],3,d) = d^2-2/d(d^2-1)(d^2-4) , Wg([2,1],3,d) = -1/(d^2-1)(d^2-4) , Wg([3],3,d) = 2/d(d^2-1)(d^2-4) . Eq. (<ref>) can be written out explicitly using the above functions as follows ∫ U_i_1j_1 U_i_2j_2 U_i_3j_3 U^∗_i_1'j_1' U^∗_i_2'j_2' U^∗_i_3'j_3' dU = ∑_σδ_i_1 i'_σ(1)δ_i_2 i'_σ(2)δ_i_3 i'_σ(3)δ_j_1 j'_σ(1)δ_j_2 j'_σ(2)δ_j_3 j'_σ(3)·(d^2-2)/d(d^2-1)(d^2-4) + ∑_σ{δ_i_1 i'_σ(1)δ_i_2 i'_σ(2)δ_i_3 i'_σ(3)δ_j_1 j'_σ(2)δ_j_2 j'_σ(1)δ_j_3 j'_σ(3). +δ_i_1 i'_σ(1)δ_i_2 i'_σ(2)δ_i_3 i'_σ(3)δ_j_1 j'_σ(3)δ_j_2 j'_σ(2)δ_j_3 j'_σ(1) .+δ_i_1 i'_σ(1)δ_i_2 i'_σ(2)δ_i_3 i'_σ(3)δ_j_1 j'_σ(1)δ_j_2 j'_σ(3)δ_j_3 j'_σ(2)}·(-1)/(d^2-1)(d^2-4) + ∑_σ{δ_i_1 i'_σ(1)δ_i_2 i'_σ(2)δ_i_3 i'_σ(3)δ_j_1 j'_σ(2)δ_j_2 j'_σ(3)δ_j_3 j'_σ(1). +. δ_i_1 i'_σ(1)δ_i_2 i'_σ(2)δ_i_3 i'_σ(3)δ_j_1 j'_σ(3)δ_j_2 j'_σ(1)δ_j_3 j'_σ(2)}·2/d(d^2-1)(d^2-4) . For d≫ 1, the dominant contribution out of the 36 terms comes from the ones associated with Wg([1,1,1],3,d) which corresponds to identical permutations σ=τ. Therefore, the leading order of the integral is given by ∫ U_i_1j_1 U_i_2j_2 U_i_3j_3 U^∗_i_1'j_1' U^∗_i_2'j_2' U^∗_i_3'j_3' dU ≃∑_σδ_i_1 i'_σ(1)δ_i_2 i'_σ(2)δ_i_3 i'_σ(3)δ_j_1 j'_σ(1)δ_j_2 j'_σ(2)δ_j_3 j'_σ(3)·1/d^3 , where σ is the permutation on three letters and there are six choices of σ's in this summation. For a general 2n-operator integral with n≥ 4, direct computations of the Weingarten functions can be extremely involved. In this case, we can refer to the asymptotic behaviors of Weingarten functions in the limit d≫ 1, Wg(ρ,n,d)≃ d^-n-|ρ|Π_i (-1)^|C_i|-1 c_|C_i|-1 , where ρ is a product of cycles of lengths C_i, c_j=(2j)!/(j!(j+1)!) is the Catalan number, and |ρ| is the smallest number of transpositions of the products. The leading order in 1/d of the Weingarten functions is obtained when ρ=[1^n], which indicates that |ρ|=0 and the Catalan number c_1=1. Therefore, we have the asymptotic approximation Wg([1^n],n,d)≃ d^-n , and the 2n-operator integral over the Haar measure can be approximated by ∫ U_i_1j_1… U_i_nj_n U^∗_i_1'j_1'… U^∗_i_n'j_n' dU ≃∑_σδ_i_1 i'_σ(1)…δ_i_n i'_σ(n)δ_j_1 j'_σ(1)…δ_j_n j'_σ(n)·1/d^n , where σ is the permutation on n letters. Given a particular diagram, usually only one term contributes dominantly in this study. The above formulae are exploited to evaluate the integrals in Appendix B and C below. § A QUICK CHECK OF THE LAST TWO RELATIONS IN EQ.(<REF>) In this appendix, we show a not very rigorous demonstration of the last two relations in Eq.(<ref>) by using the integral formulas discussed in Appendix A. We have claimed that the two relations are satisfied only in the ideal case. This is to say that the unitary operator U should be a typical one. A not-so rigorous check can be made by showing the following relations hold ∫ dU  _in⟨Ψ| (I_A'BR'⊗Π̃_R”B'F') |Ψ⟩_in=1 , ∫ dU  _in⟨Ψ| (I_A'BR'⊗Π̃_R”B'F') |Ψ⟩_out=√(|P|/|f|)1/|A| . The two integrals involve the six-order unitary integral formula that is given in Eq.(<ref>). In the ideal case, only one term contributes the final result. 
In the following, we will calculate the integrals by using the graphical representation. Firstly, the first integral of Eq. (<ref>) can be graphically represented and approximately evaluated as ∫ dU  _in⟨Ψ| (I_A'BR'⊗Π̃_R”B'F') |Ψ⟩_in = |P|^3∫ dU ([diagram]) ≃ |P|^3/d^3 [simplified diagram] ≃ |P|^3|B|^3|R'|^3/d^3 = 1 , where we have considered the ideal case when d≫ 1 and the decoupling condition is satisfied. In this calculation, only one particular choice of σ in Eq. (<ref>) returns the dominant contribution to ∫ dU  _in⟨Ψ| (I_A'BR'⊗Π̃_R”B'F') |Ψ⟩_in of Eq. (<ref>). In addition, we have omitted the lines that represent the normalization conditions ⟨ψ_0|ψ_0⟩_f=1 and ⟨ 0|0⟩_P=1 in the graphical representation. Note that in deriving the above result, we also used the fact that |A|=|F| and |B|=|B'|. The second integral can be graphically represented and evaluated in the ideal case as ∫ dU  _in⟨Ψ| (I_A'BR'⊗Π̃_R”B'F') |Ψ⟩_out = |P|^3/√(P_EPR)∫ dU ([diagram]) ≃ |P|^3/√(P_EPR)1/d^3 [simplified diagram] ≃ |P|^3/√(P_EPR)|B|^2|r||R'|^2/d^3 |A| ≃ √(|P|/|f|)1/|A| . As we remarked, the above calculations serve as a simple demonstration that the last two relations in Eq.(<ref>) hold. However, a rigorous proof of them is much more tedious and should be carried out by showing that the operator distance of the density matrices for the states on the l.h.s. and the r.h.s. of the equations, averaged over the random unitary group, is small. This procedure involves operator integrals of higher n over the random unitary group, which will be presented in Appendix C.

§ PROOF OF THE LAST TWO RELATIONS IN EQ.(<REF>)

The last two relations in Eq.(<ref>) can be rewritten as (I_A'BR'⊗Π̃_R”B'F') |Ψ⟩_in = |Ψ⟩_in , 1/√(P_EPR)(I_A'BR'⊗Π̃_R”B'F')|Ψ⟩_out = |Ψ⟩_in . We now prove that the above relations are satisfied in the ideal case. Define the following three density matrices as ρ_in = |Ψ⟩_in _in⟨Ψ| , ρ_1 = Π̃|Ψ⟩_in _in⟨Ψ|Π̃ , ρ_2 = 1/P_EPRΠ̃|Ψ⟩_out _out⟨Ψ|Π̃ , where we have omitted the identity operators and the subscripts for simplicity. The last two relations in Eq.(<ref>) can be proved by showing that the operator distances between ρ_1 and ρ_in and between ρ_2 and ρ_in are small enough in the ideal case. For our purpose, only the dominant contribution from the unitary integral is considered. Firstly, we evaluate the operator distance between ρ_1 and ρ_in. Let us consider (∫ dU ||ρ_1-ρ_in||_1)^2 ≤ ∫ dU ||ρ_1-ρ_in||_1^2 ≤ 2C ∫ dU ||ρ_1-ρ_in||_2^2 = 2C ∫ dU Tr(ρ_1^2+ρ_in^2-2ρ_1ρ_in) = 2C ∫ dU (  _in⟨Ψ|Π̃^2|Ψ⟩_in^2 +  _in⟨Ψ|Ψ⟩_in^2 -2  _in⟨Ψ|Π̃|Ψ⟩_in^2) , where C is the normalization factor. For the typical scrambling unitary that we consider and the normalized pure-state density matrices ρ_1 and ρ_in, the normalization factor is one. The factor of two in the second line comes from the operator inequality || X||_p≤(Rank(X))^1/p-1/q ||X||_q . In the ideal case, we can just consider the leading-order contribution to the integral on the right hand side of the inequality. Using the graphical representation, the first term can be calculated as ∫ dU _in⟨Ψ|Π̃^2|Ψ⟩_in^2 = |P|^8 ∫ dU ([diagram])^2 ≃ |P|^8/d^8 ([simplified diagram])^2 = |P|^8 |B|^8 |R'|^8/d^8 = 1 . Note that we have omitted the lines that represent the normalization conditions ⟨ψ_0|ψ_0⟩_f=1 and ⟨ 0|0⟩_P=1 in this graphical representation.
The second term can be calculated as ∫ dU _in⟨Ψ|Ψ⟩_in^2 = |P|^4 ∫ dU ([diagram])^2 ≃ |P|^4/d^4 ([simplified diagram])^2 = |P|^4 |B|^4 |R'|^4/d^4 = 1 . The third term can be calculated as ∫ dU _in⟨Ψ|Π̃|Ψ⟩_in^2 = |P|^6 ∫ dU ([diagram])^2 ≃ |P|^6/d^6 ([simplified diagram])^2 = |P|^6 |B|^6 |R'|^6/d^6 = 1 . Putting these together, the leading-order contribution to the integral on the right hand side of the inequality is zero. Therefore, we have ∫ dU ||ρ_1-ρ_in||_1≤𝒪(1) , which implies that in the ideal case the operator distance between ρ_1 and ρ_in is small enough when averaged over the random unitary group. This establishes the first relation in Eq.(<ref>). For the operator distance between ρ_2 and ρ_in, we have (∫ dU ||ρ_2-ρ_in||_1)^2 ≤ ∫ dU ||ρ_2-ρ_in||_1^2 ≤ 2C ∫ dU ||ρ_2-ρ_in||_2^2 = 2C∫ dU Tr(ρ_2^2+ρ_in^2-2ρ_2ρ_in) = 2C ∫ dU ( 1/P_EPR^2 _out⟨Ψ|Π̃^2|Ψ⟩_out^2 +  _in⟨Ψ|Ψ⟩_in^2 -2/P_EPR _out⟨Ψ|Π̃|Ψ⟩_in^2) . We also evaluate the leading-order contribution to the integral on the right hand side of the inequality. The first term can be calculated as ∫ dU _out⟨Ψ|Π̃^2|Ψ⟩_out^2 = |P|^8/P_EPR^2∫ dU ([diagram])^2 ≃ |P|^8/P_EPR^21/d^8 ([simplified diagram])^2 = |P|^8/P_EPR^2|B|^4|r|^4|R'|^4/d^8 |A|^4 ≃ P_EPR^2 . The third term can be calculated as ∫ dU _out⟨Ψ|Π̃|Ψ⟩_in^2 = |P|^6/P_EPR∫ dU ([diagram])^2 ≃ |P|^6/P_EPR1/d^6 ([simplified diagram])^2 = |P|^6/P_EPR|B|^4 |R'|^4 |r|^2/d^6 |A|^2 ≃ P_EPR . Finally, we find that the leading-order contribution is also zero. Therefore, we have ∫ dU ||ρ_2-ρ_in||_1≤𝒪(1) . We can conclude that the operator distance between ρ_2 and ρ_in is also small enough in the ideal case. This establishes the second relation in Eq.(<ref>). In summary, we have proved the relations used in the deterministic decoding strategy.

Akers:2022qdl C. Akers, N. Engelhardt, D. Harlow, G. Penington and S. Vardhan, “The black hole interior from non-isometric codes and complexity,” [arXiv:2207.06536 [hep-th]]. Hawking:1975vcx S. W. Hawking, “Particle Creation by Black Holes,” Commun. Math. Phys. 43 (1975), 199-220 [erratum: Commun. Math. Phys. 46 (1976), 206]. Hawking:1976ra S. W. Hawking, “Breakdown of Predictability in Gravitational Collapse,” Phys. Rev. D 14 (1976), 2460-2473. Maldacena:1997re J. M. Maldacena, “The Large N limit of superconformal field theories and supergravity,” Adv. Theor. Math. Phys. 2 (1998), 231-252 [arXiv:hep-th/9711200 [hep-th]]. Gubser:1998bc S. S. Gubser, I. R. Klebanov and A. M. Polyakov, “Gauge theory correlators from noncritical string theory,” Phys. Lett. B 428 (1998), 105-114 [arXiv:hep-th/9802109 [hep-th]]. Witten:1998qj E. Witten, “Anti-de Sitter space and holography,” Adv. Theor. Math. Phys. 2 (1998), 253-291 [arXiv:hep-th/9802150 [hep-th]]. Page:1993df D. N. Page, “Average entropy of a subsystem,” Phys. Rev. Lett. 71 (1993), 1291-1294 [arXiv:gr-qc/9305007 [gr-qc]]. Page:1993wv D. N. Page, “Information in black hole radiation,” Phys. Rev. Lett. 71 (1993), 3743-3746 [arXiv:hep-th/9306083 [hep-th]]. Hayden:2007cs P. Hayden and J. Preskill, “Black holes as mirrors: Quantum information in random subsystems,” JHEP 09 (2007), 120 [arXiv:0708.4025 [hep-th]]. Yoshida:2017non B. Yoshida and A. Kitaev, “Efficient decoding for the Hayden-Preskill protocol,” [arXiv:1710.03363 [hep-th]]. Almheiri:2014lwa A. Almheiri, X. Dong and D. Harlow, “Bulk Locality and Quantum Error Correction in AdS/CFT,” JHEP 04 (2015), 163 [arXiv:1411.7041 [hep-th]]. Pastawski:2015qua F. Pastawski, B. Yoshida, D.
Harlow and J. Preskill, “Holographic quantum error-correcting codes: Toy models for the bulk/boundary correspondence,” JHEP 06 (2015), 149 [arXiv:1503.06237 [hep-th]]. Harlow:2013tf D. Harlow and P. Hayden, “Quantum Computation vs. Firewalls,” JHEP 06 (2013), 085 [arXiv:1301.4504 [hep-th]]. Brown:2019rox A. R. Brown, H. Gharibyan, G. Penington and L. Susskind, “The Python’s Lunch: geometric obstructions to decoding Hawking radiation,” JHEP 08 (2020), 121 [arXiv:1912.00228 [hep-th]]. Almheiri:2020cfm A. Almheiri, T. Hartman, J. Maldacena, E. Shaghoulian and A. Tajdini, “The entropy of Hawking radiation,” Rev. Mod. Phys. 93 (2021) no.3, 035002 [arXiv:2006.06872 [hep-th]]. Mathur:2008wi S. D. Mathur, “What Exactly is the Information Paradox?,” Lect. Notes Phys. 769 (2009), 3-48 [arXiv:0803.2030 [hep-th]]. Ryu:2006bv S. Ryu and T. Takayanagi, “Holographic derivation of entanglement entropy from AdS/CFT,” Phys. Rev. Lett. 96 (2006), 181602 [arXiv:hep-th/0603001 [hep-th]]. Faulkner:2013ana T. Faulkner, A. Lewkowycz and J. Maldacena, “Quantum corrections to holographic entanglement entropy,” JHEP 11 (2013), 074 [arXiv:1307.2892 [hep-th]]. Engelhardt:2014gca N. Engelhardt and A. C. Wall, “Quantum Extremal Surfaces: Holographic Entanglement Entropy beyond the Classical Regime,” JHEP 01 (2015), 073 [arXiv:1408.3203 [hep-th]]. Penington:2019npb G. Penington, “Entanglement Wedge Reconstruction and the Information Paradox,” JHEP 09 (2020), 002 [arXiv:1905.08255 [hep-th]]. Almheiri:2019psf A. Almheiri, N. Engelhardt, D. Marolf and H. Maxfield, “The entropy of bulk quantum fields and the entanglement wedge of an evaporating black hole,” JHEP 12 (2019), 063 [arXiv:1905.08762 [hep-th]]. Kar:2022qkf A. Kar, “Non-isometric quantum error correction in gravity,” JHEP 02 (2023), 195 [arXiv:2210.13476 [hep-th]]. Faulkner:2022ada T. Faulkner and M. Li, “Asymptotically isometric codes for holography,” [arXiv:2211.12439 [hep-th]]. deBoer:2022zps J. de Boer, D. L. Jafferis and L. Lamprou, “On black hole interior reconstruction, singularities and the emergence of time,” [arXiv:2211.16512 [hep-th]]. Kim:2022pfp I. H. Kim and J. Preskill, “Complementarity and the unitarity of the black hole S-matrix,” JHEP 02 (2023), 233 [arXiv:2212.00194 [hep-th]]. Basu:2022crn D. Basu, Q. Wen and S. Zhou, “Entanglement Islands from Hilbert Space Reduction,” [arXiv:2211.17004 [hep-th]]. Giddings:2022ipt S. B. Giddings, “Comparing models for a unitary black hole S-matrix,” [arXiv:2212.14551 [hep-th]]. Gyongyosi:2023sue Z. Gyongyosi, T. J. Hollowood, S. P. Kumar, A. Legramandi and N. Talwar, “The Holographic Map of an Evaporating Black Hole,” [arXiv:2301.08362 [hep-th]]. Cao:2023gkw C. Cao, W. Chemissany, A. Jahn and Z. Zimborás, “Approximate observables from non-isometric maps: de Sitter tensor networks with overlapping qubits,” [arXiv:2304.02673 [hep-th]]. DeWolfe:2023iuq O. DeWolfe and K. Higginbotham, “Non-isometric codes for the black hole interior from fundamental and effective dynamics,” [arXiv:2304.12345 [hep-th]]. Nielsen M. A. Nielsen and I. L. Chuang, “Quantum computation and quantum information,” Cambridge University Press, Cambridge, UK (2010). Hayden:2006 P. Hayden, M. Horodecki, A. Winter and J. Yard, “The mother of all protocols: Restructuring quantum information's family tree,” Proc. R. Soc. A 465(2009):2537-2563, [arXiv:quant-ph/0606225]. Hayden:2007 P. Hayden, M. Horodecki, A. Winter and J. Yard, “A decoupling approach to the quantum capacity,” Open Syst. Inf. Dyn. 15 (2008) 7-19, [arXiv:quant-ph/0702005]. 
Grover:1996rk L. K. Grover, “A Fast quantum mechanical algorithm for database search,” Proceedings of the twenty-eighth annual ACM symposium on Theory of computing (1996), [arXiv:quant-ph/9605043 [quant-ph]]. Horowitz:2003he G. T. Horowitz and J. M. Maldacena, “The Black hole final state,” JHEP 02 (2004), 008 [arXiv:hep-th/0310281 [hep-th]]. Lloyd:2013bza S. Lloyd and J. Preskill, “Unitarity of black hole evaporation in final-state projection models,” JHEP 08 (2014), 126 [arXiv:1308.4209 [hep-th]]. Wang:2023eyb X. Wang, K. Zhang and J. Wang, “Entanglement islands, fire walls and state paradox from quantum teleportation and entanglement swapping,” Class. Quant. Grav. 40 (2023) no.9, 095012 [arXiv:2107.09228 [hep-th]]. Harlow:2014yka D. Harlow, “Jerusalem Lectures on Black Holes and Quantum Information,” Rev. Mod. Phys. 88 (2016), 015002 [arXiv:1409.1231 [hep-th]]. Yoshida:2018vly B. Yoshida and N. Y. Yao, “Disentangling Scrambling and Decoherence via Quantum Teleportation,” Phys. Rev. X 9 (2019) no.1, 011006 [arXiv:1803.10772 [quant-ph]]. Bao:2020zdo N. Bao and Y. Kikuchi, “Hayden-Preskill decoding from noisy Hawking radiation,” JHEP 02 (2021), 017 [arXiv:2009.13493 [quant-ph]]. Cheng:2019yib Y. Cheng, C. Liu, J. Guo, Y. Chen, P. Zhang and H. Zhai, “Realizing the Hayden-Preskill protocol with coupled Dicke models,” Phys. Rev. Res. 2 (2020) no.4, 043024 [arXiv:1909.12568 [cond-mat.quant-gas]]. Li:2021mnl R. Li and J. Wang, “Hayden-Preskill protocol and decoding Hawking radiation at finite temperature,” Phys. Rev. D 106 (2022) no.4, 046011 [arXiv:2108.09144 [hep-th]]. Landsman:2018jpm K. A. Landsman, C. Figgatt, T. Schuster, N. M. Linke, B. Yoshida, N. Y. Yao and C. Monroe, “Verified Quantum Information Scrambling,” Nature 567 (2019) no.7746, 61-65 [arXiv:1806.02807 [quant-ph]]. Brown:2019hmk A. R. Brown, H. Gharibyan, S. Leichenauer, H. W. Lin, S. Nezami, G. Salton, L. Susskind, B. Swingle and M. Walter, “Quantum Gravity in the Lab. I. Teleportation by Size and Traversable Wormholes,” PRX Quantum 4 (2023) no.1, 010320 [arXiv:1911.06314 [quant-ph]]. Nezami:2021yaq S. Nezami, H. W. Lin, A. R. Brown, H. Gharibyan, S. Leichenauer, G. Salton, L. Susskind, B. Swingle and M. Walter, “Quantum Gravity in the Lab. II. Teleportation by Size and Traversable Wormholes,” PRX Quantum 4 (2023) no.1, 010321 [arXiv:2102.01064 [quant-ph]]. Shapoval:2022xeo I. Shapoval, V. P. Su, W. de Jong, M. Urbanek and B. Swingle, “Towards Quantum Gravity in the Lab on Quantum Processors,” [arXiv:2205.14081 [quant-ph]]. Jafferis:2022crx D. Jafferis, A. Zlokapa, J. D. Lykken, D. K. Kolchmeyer, S. I. Davis, N. Lauk, H. Neven and M. Spiropulu, “Traversable wormhole dynamics on a quantum processor,” Nature 612 (2022) no.7938, 51-55. Shi:2021nkx Y. H. Shi, R. Q. Yang, Z. Xiang, Z. Y. Ge, H. Li, Y. Y. Wang, K. Huang, Y. Tian, X. Song and D. Zheng, et al. “Quantum simulation of Hawking radiation and curved spacetime with a superconducting on-chip black hole,” [arXiv:2111.11092 [quant-ph]]. Yan:2020fxu B. Yan and N. A. Sinitsyn, “Recovery of damaged information and the out-of-time-ordered correlators,” Phys. Rev. Lett. 125 (2020) no.4, 040605 [arXiv:2003.07267 [quant-ph]]. Harris:2021mma J. Harris, B. Yan and N. A. Sinitsyn, “Benchmarking Information Scrambling,” Phys. Rev. Lett. 129, no.5, 050602 (2022) [arXiv:2110.12355 [quant-ph]]. Collins2003 B. 
Collins, “Moments and cumulants of polynomial random variables on unitarygroups, the itzykson-zuber integral, and free probability," International Mathematics Research Notices 2003 (2003), no. 17:953-82 [arXiv:0205010 [math-ph]]. Collins2006 B. Collins and P. Śniady, “Integration with Respect to the Haar Measure on Unitary, Orthogonal and Symplectic Group," Commun. Math. Phys. 264, 773–795 (2006) [arXiv:0402073 [math-ph]].
http://arxiv.org/abs/2307.01472v1
20230704044054
Beyond Conservatism: Diffusion Policies in Offline Multi-agent Reinforcement Learning
[ "Zhuoran Li", "Ling Pan", "Longbo Huang" ]
cs.AI
[ "cs.AI", "cs.LG", "cs.MA" ]
We present a novel Diffusion Offline Multi-agent Model (DOM2) for offline Multi-Agent Reinforcement Learning (MARL). Different from existing algorithms that rely mainly on conservatism in policy design, DOM2 enhances policy expressiveness and diversity based on diffusion. Specifically, we incorporate a diffusion model into the policy network and propose a trajectory-based data-augmentation scheme in training. These key ingredients make our algorithm more robust to environment changes and achieve significant improvements in performance, generalization and data-efficiency. Our extensive experimental results demonstrate that DOM2 outperforms existing state-of-the-art methods in multi-agent particle and multi-agent MuJoCo environments, and generalizes significantly better in shifted environments thanks to its high expressiveness and diversity. Furthermore, DOM2 shows superior data efficiency and can achieve state-of-the-art performance with 20+ times less data compared to existing algorithms.

§ INTRODUCTION

Offline reinforcement learning (RL), commonly referred to as batch RL, aims to learn efficient policies exclusively from previously gathered data without interacting with the environment <cit.>. Since the agent has to sample the data from a fixed dataset, naive offline RL approaches fail to learn policies for out-of-distribution actions or states <cit.>, and the obtained Q-value estimates for these actions will be inaccurate, with unpredictable consequences. Recent progress in tackling the problem focuses on conservatism by introducing regularization terms in the training of actors and critics <cit.>. Conservatism-based offline RL algorithms have also achieved significant progress in difficult offline multi-agent reinforcement learning (MARL) settings <cit.>. Despite the potential benefits, existing methods have limitations in several aspects. Firstly, the design of the policy network and the corresponding regularizer limits the expressiveness and diversity due to conservatism. Consequently, the resulting policy may be suboptimal and unable to represent complex strategies <cit.>. Secondly, in multi-agent scenarios, conservatism-based methods are prone to getting trapped in poor local optima. This occurs in existing algorithms when each agent is incentivized to maximize its own reward without efficient cooperation with other agents <cit.>. To demonstrate this phenomenon, we conduct an experiment on a simple MARL scenario consisting of 3 agents and 6 landmarks (Fig. <ref>), to highlight the importance of policy expressiveness and diversity in MARL. In this scenario, the agents are asked to cover 3 landmarks and are rewarded based on their proximity to the nearest landmark while being penalized for collisions. We first train the agents with 6 target landmarks and then randomly remove 3 of them in evaluation. Our experiments demonstrate that existing methods (MA-CQL and OMAR, detailed in Section 3.2 <cit.>), which constrain policies through regularization, limit the expressiveness of each agent and hinder the ability of the agents to cooperate with diversity. As a result, only limited solutions are found.
Therefore, in order to design robust algorithms with good generalization capabilities, it is crucial to develop methods beyond conservatism for better performance and more efficient cooperation among agents. To boost policy expressiveness and diversity, we propose a novel algorithm based on diffusion for the offline multi-agent setting, called Diffusion Offline Multi-Agent Model (DOM2). Diffusion models have shown significant success in generating data with high quality and diversity <cit.>, and our goal is to leverage this advantage to promote expressiveness and diversity in RL policy learning. Specifically, the policy for each agent is built using the accelerated DPM-solver to sample actions <cit.>. In order to train an appropriate policy that performs well, we propose a trajectory-based data-augmentation method to facilitate policy training by efficient data sampling. These techniques enable the policy to generate solutions with high quality and diversity and overcome the limitations of conservatism-based approaches. In the 3-agent example, we show that DOM2 can find a more diverse set of solutions with high performance and generalization (Fig. <ref>), compared to conservatism-based methods such as MA-CQL and OMAR <cit.>. Our contributions are summarized as follows. * We propose a novel Diffusion Offline Multi-Agent Model (DOM2) algorithm to address the limitations of conservatism-based methods. DOM2 consists of three critical components: a diffusion-based policy with an accelerated solver, an appropriate policy regularizer, and a trajectory-based data-augmentation method for enhancing learning. * We conduct extensive numerical experiments on Multi-agent Particle Environments (MPE) and Multi-agent MuJoCo HalfCheetah environments. Our results show that DOM2 achieves significant performance improvements over state-of-the-art methods. * We show that our diffusion-based method DOM2 possesses much better generalization abilities and outperforms existing methods in shifted environments (trained in standard environments). Moreover, DOM2 is ultra-data-efficient, and achieves SOTA performance with 20+ times less data.

§ RELATED WORK

Offline RL and MARL: Distribution shift is a key obstacle in offline RL, and multiple methods have been proposed to tackle the problem based on conservatism to constrain the policy or Q-value by regularizers <cit.>. Policy regularization keeps the policy close to the behavior policy via a policy regularizer, e.g., BRAC <cit.>, BEAR <cit.>, BCQ <cit.>, TD3+BC <cit.>, implicit update <cit.> and importance sampling <cit.>. Critic regularization instead constrains the Q-values for stability, e.g., CQL <cit.>, IQL <cit.>, and TD3-CVAE <cit.>. On the other hand, Multi-Agent Reinforcement Learning (MARL) has made significant progress under the centralized training with decentralized execution (CTDE) paradigm <cit.>, with algorithms such as MADDPG <cit.>, MATD3 <cit.>, IPPO <cit.>, MAPPO <cit.>, VDN <cit.> and QMIX <cit.> in both decentralized-critic and centralized-critic settings. The offline MARL problem has also attracted attention, and conservatism-based methods have been developed, e.g., MA-BCQ <cit.>, MA-ICQ <cit.>, MA-CQL and OMAR <cit.>. Diffusion Models: The diffusion model <cit.>, a specific type of generative model, has shown significant success in various applications, especially in generating images from text descriptions <cit.>.
Recent works have focused on the foundations of diffusion models, e.g., the statistical theory <cit.> and accelerated sampling methods <cit.>. Generative models have been applied to policy modeling, including conditional VAEs <cit.>, diffusers <cit.> and diffusion-based policies <cit.> in the single-agent setting. Our method successfully introduces the diffusion model with the accelerated solver to offline multi-agent settings.

§ BACKGROUND

In this section, we introduce the offline multi-agent reinforcement learning problem and provide preliminaries for the diffusion probabilistic model as the background for our proposed algorithm. Offline Multi-Agent Reinforcement Learning. A fully cooperative multi-agent task can be modeled as a decentralized partially observable Markov decision process (Dec-POMDP) <cit.> with n agents consisting of a tuple G=⟨ℐ,𝒮,𝒪,𝒜,Π,𝒫,ℛ,n,γ⟩. Here ℐ is the set of agents, 𝒮 is the global state space, 𝒪=(𝒪_1,...,𝒪_n) is the set of observations with 𝒪_n being the set of observations for agent n. 𝒜=(𝒜_1,...,𝒜_n) is the set of actions for the agents (𝒜_n is the set of actions for agent n), Π=(Π_1,...,Π_n) is the set of policies, and 𝒫 is the function class of the transition probability 𝒮×𝒜×𝒮'→[0,1]. At each time step t, each agent chooses an action a_j^t ∈𝒜_j based on the policy π_j∈Π_j and the historical observation o_j^t-1∈𝒪_j. The next state is determined by the transition probability P∈𝒫. Each agent then receives a reward r_j^t∈ℛ: 𝒮×𝒜→ℝ and a private observation o_j^t∈𝒪_j. The goal of the agents is to find the optimal policies π=(π_1,...,π_n) such that each agent can maximize the discounted return: 𝔼[∑_t=0^∞γ^t r_j^t] (the joint discounted return is 𝔼[∑_j=1^n∑_t=0^∞γ^t r_j^t]), where γ is the discount factor. Offline reinforcement learning requires that the data used to train the agents is sampled from a given dataset 𝒟 generated from some potentially unknown behavior policy π_β (which can be arbitrary). This means that the procedure for training agents is separated from the interaction with environments. Conservative Q-Learning. For training the critic in offline RL, the conservative Q-learning (CQL) method <cit.> trains the Q-value function Q_ϕ(o,a), parameterized by ϕ, by minimizing the temporal difference (TD) loss plus a conservative regularizer. Specifically, the objective to optimize the Q-value for each agent j is given by: ℒ(ϕ_j) =𝔼_(o_j,a_j)∼𝒟_j[(Q_ϕ_j(o_j,a_j)-y_j)^2] +ζ𝔼_(o_j,a_j)∼𝒟_j[log∑_ã_jexp(Q_ϕ_j(o_j,ã_j))-Q_ϕ_j(o_j,a_j)]. The first term is the TD error that minimizes the Bellman residual with the double Q-learning trick <cit.>, where y_j=r_j+γmin_k=1,2Q_ϕ_j^k(o'_j,π_j(o'_j)), with Q_ϕ_j and π_j here denoting the target networks, and o'_j is the next observation for agent j after taking action a_j. The second term is a conservative regularizer, where ã_j is a random action uniformly sampled in the action space and ζ is a hyperparameter to balance the two terms. The regularizer addresses the extrapolation error by encouraging large Q-values for state-action pairs in the dataset and penalizing large Q-values for out-of-distribution actions.

[Figure: the diffusion probabilistic model as a continuous-time stochastic differential equation (SDE) <cit.> and its relationship with offline MARL.]

Diffusion Probabilistic Model. We present a high-level introduction to the Diffusion Probabilistic Model (DPM) <cit.> (a detailed introduction is given in Appendix <ref>). DPM is a deep generative model that learns the unknown data distribution x_0∼ q_0(x_0) from the dataset.
The process of data generation is modeled by a predefined forward noising process characterized by a stochastic differential equation (SDE) dx_t = f(t)x_tdt + g(t)dw_t (Eq. (5) in <cit.>) and a trainable reverse denoising process characterized by the SDE dx_t = [f(t)x_t-g^2(t)∇_x_tlog q_t(x_t)]dt + g(t)dw̄_t (Eq. (6) in <cit.>), shown in Fig. <ref>. Here w_t and w̄_t are standard Brownian motions (for the forward and reverse processes, respectively), and f(t),g(t) are pre-defined functions such that q_0t(x_t|x_0)=𝒩(x_t;α_tx_0,σ^2_tI) for some constants α_t,σ_t>0 and q_T(x_T)≈𝒩(x_T;0,σ̃^2I) is almost a Gaussian distribution for a constant σ̃>0. However, there exists an unknown term -σ_t∇_x_tlog q_t(x_t), which is called the score function. In order to generate data close to the distribution q_0(x_0) by the reverse SDE, DPM defines a score-based model ϵ_θ(x_t, t) to learn the score function and optimizes the parameter θ such that θ^*=argmin_θ𝔼_x_0∼ q_0(x_0),ϵ∼𝒩(0,I),t∼𝒰(0,T)[‖ϵ-ϵ_θ(α_tx_0+σ_tϵ, t)‖^2_2] (𝒰(0,T) is the uniform distribution on [0,T]; the same notation is used later). With the learned score function, we can sample data by discretizing the reverse SDE. To enable faster sampling, the DPM-solver <cit.> provides an efficient sampling method whose first-order iterative denoising equation (Eq. (3.7) in <cit.>) is given by x_t_i = (α_t_i/α_t_i-1)x_t_i-1-σ_t_i(α_t_iσ_t_i-1/(α_t_i-1σ_t_i)-1)ϵ_θ (x_t_i-1,t_i-1). In Fig. <ref>, we highlight a crucial message: the data-generation procedure can be efficiently incorporated into offline MARL as the action generator. Intuitively, we can utilize the fixed dataset to learn an action generator by noising the sampled actions in the dataset and then denoising them in reverse. This procedure resembles data generation in the diffusion model. However, it is important to note that there is a critical difference between the objectives of diffusion and RL. Specifically, in diffusion, the goal is to generate data with a distribution close to the distribution of the training dataset, whereas in offline MARL, one hopes to find actions (policies) that maximize the joint discounted return. This difference influences the design of the action generator. Properly handling it is the key to our design, which will be detailed in Section <ref>.

§ PROPOSED METHOD

In this section, we present the DOM2 algorithm, which is shown in Fig. <ref>. In the following, we first discuss how we generate the actions with diffusion in Section <ref>. Next, we show how to design appropriate objective functions in policy learning in Section <ref>. We then present the data augmentation method in Section <ref>. Finally, we present the whole procedure of DOM2 in Section <ref>.

§.§ Diffusion in Offline MARL

We first present the diffusion component in DOM2, which generates actions by iteratively denoising Gaussian noise (shown on the right side of Fig. <ref>). Denote the timestep indices in an episode by {t}_t=1^T, the diffusion step indices by τ∈[τ_0,τ_N], and the agents by {j}_j=1^n. Below, to facilitate understanding, we introduce the diffusion idea in continuous time, based on <cit.>. We then present our algorithm design by specifying the discrete DPM-solver-based steps <cit.> and the discretized diffusion timesteps, i.e., from [τ_0,τ_N] to {τ_i}_i=0^N. (Noising) Noising the action in diffusion is modeled as a forward process from τ_0 to τ_N. Specifically, we start with the action data at τ_0, denoted by b_t,j^τ_0∼π_β_j(·|o_t,j), which is collected from the behavior policy π_β_j(·|o_t,j).
We then perform a set of noising operations on intermediate data {b_t,j^τ}_τ∈[τ_0,τ_N], and eventually generate b_t,j^τ_N, which (ideally) is close to Gaussian noise at τ_N. This forward process satisfies that for all τ∈[τ_0,τ_N], the transition probability q_τ_0τ(b_t,j^τ|b_t,j^τ_0)=𝒩(b_t,j^τ;α_τb_t,j^τ_0,σ_τ^2I) <cit.>. The noise schedule α_τ,σ_τ is selected such that q_τ_N(b_t,j^τ_N|o_t,j)≈𝒩(b_t,j^τ_N;0,σ̃^2I) for some σ̃>0, i.e., almost pure Gaussian noise. According to <cit.>, there exists a corresponding reverse process of the SDE from τ_N to τ_0, which is based on Eq. (2.4) in <cit.> and takes into consideration the conditioning on o_t,j: da^τ_t,j = [f(τ)a^τ_t,j-g^2(τ)∇_b^τ_t,jlog q_τ(b^τ_t,j|o_t,j)]dτ + g(τ)dw̄_τ, a^τ_N_t,j∼ q_τ_N(b_t,j^τ_N|o_t,j), where the score term ∇_b^τ_t,jlog q_τ(b^τ_t,j|o_t,j) is the part represented by the neural network ϵ_θ_j, f(τ)=dlogα_τ/dτ, g^2(τ)=dσ_τ^2/dτ-2dlogα_τ/dτσ_τ^2, w̄_τ is a standard Brownian motion, and a^τ_0_t,j is the generated action for agent j at time t. To fully determine the reverse process of the SDE described by Eq. (<ref>), we need access to the conditional score function -σ_τ∇_b^τ_t,jlog q_τ(b^τ_t,j|o_t,j) at each τ. We use a neural network ϵ_θ_j(b_t,j^τ,o_t,j,τ) to represent it; the architecture is a multi-layered residual network, the U-Net <cit.>, shown in Fig. <ref>. The objective for optimizing the parameter θ_j is <cit.>: ℒ_bc(θ_j) =𝔼_(o_t,j,a_t,j^τ_0)∼𝒟_j,ϵ∼𝒩(0,I),τ∼𝒰({τ_i}_i=0^N)[‖ϵ-ϵ_θ_j (α_τa_t,j^τ_0+σ_τϵ,o_t,j,τ )‖_2^2]. (Denoising) After training the neural network ϵ_θ_j, we can then generate the actions by solving the diffusion SDE Eq. (<ref>) (plugging in -ϵ_θ_j(a_t,j^τ,o_t,j,τ)/σ_τ to replace the true score function ∇_b_t,j^τlog q_τ(b_t,j^τ|o_t,j)). Here we evolve the reverse process of the SDE from a^τ_N_t,j∼𝒩(a_t,j^τ_N;0,I), a Gaussian noise, and we take a^τ_0_t,j as the final action. In our algorithm, to facilitate faster sampling, we discretize the reverse process of the SDE in [τ_0,τ_N] into N+1 diffusion timesteps {τ_i}_i=0^N (the partition details are shown in Appendix <ref>) and adopt the first-order DPM-solver-based method (Eq. (3.7) in <cit.>) to iteratively denoise from a^τ_N_t,j∼𝒩(a_t,j^τ_N;0,I) to a^τ_0_t,j for i=N,...,1, written as: a_t,j^τ_i-1 = (α_τ_i-1/α_τ_i)a_t,j^τ_i-σ_τ_i-1(α_τ_i-1σ_τ_i/(α_τ_iσ_τ_i-1)-1)ϵ_θ_j (a_t,j^τ_i,o_t,j,τ_i) for i=N,...,1, and these iterative denoising steps correspond to the diagram on the right side of Fig. <ref>.

§.§ Policy Improvement

Notice that optimizing θ_j by Eq. (<ref>) alone is not sufficient in offline MARL, because the generated actions will then only stay close to the behavior policy under diffusion. To achieve policy improvement, we follow <cit.> to take the Q-value into consideration and use the following loss function: ℒ(θ_j)=ℒ_bc(θ_j)+ℒ_q(θ_j)=ℒ_bc(θ_j)-η̃𝔼_(o_j,a_j)∼𝒟_j,a_j^τ_0∼π_θ_j[Q_ϕ_j(o_j,a_j^τ_0)]. The second term ℒ_q(θ_j) is called the Q-loss <cit.> for policy improvement, where a_j^τ_0 is generated by Eq. (<ref>), ϕ_j is the network parameter of the Q-value function for agent j, η̃=η/𝔼_(o_j,a_j)∼𝒟[Q_ϕ_j(o_j,a_j)] and η is a hyperparameter. The Q-value is normalized to control the scale of the Q-value functions <cit.>, and η is used to balance the two terms. The combination of the two terms ensures that the policy preferentially samples actions with high values. The reason is that the policy trained by optimizing Eq. (<ref>) can generate actions with distributions different from the behavior policy, and the policy prefers to sample actions with higher Q-values (corresponding to better performance).
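To make these two ingredients concrete, the sketch below shows, in PyTorch, (i) a first-order DPM-solver denoising loop that turns Gaussian noise into an action, consistent with the update written above, and (ii) the combined loss ℒ_bc + ℒ_q on one mini-batch. All names (eps_model, critic, alpha, sigma) are our own placeholders rather than the authors' released implementation; alpha and sigma are assumed to be length-(N+1) tensors indexed by diffusion step, and actions are assumed to lie in [-1, 1].

import torch

def sample_action(eps_model, obs, alpha, sigma, act_dim, N):
    """First-order DPM-solver loop: denoise a^{tau_N} ~ N(0, I) down to a^{tau_0}."""
    a = torch.randn(obs.shape[0], act_dim)
    for i in range(N, 0, -1):
        tau = torch.full((obs.shape[0],), float(i))
        eps = eps_model(a, obs, tau)
        ratio = (alpha[i - 1] * sigma[i]) / (alpha[i] * sigma[i - 1])
        a = (alpha[i - 1] / alpha[i]) * a - sigma[i - 1] * (ratio - 1.0) * eps
    return a.clamp(-1.0, 1.0)

def actor_loss(eps_model, critic, obs, act, alpha, sigma, N, eta):
    """L_bc + L_q on one mini-batch of (o, a) pairs drawn from the dataset."""
    i = torch.randint(0, N + 1, (obs.shape[0],))            # random diffusion step
    noise = torch.randn_like(act)
    noisy = alpha[i].unsqueeze(-1) * act + sigma[i].unsqueeze(-1) * noise
    l_bc = ((noise - eps_model(noisy, obs, i.float())) ** 2).sum(-1).mean()
    a0 = sample_action(eps_model, obs, alpha, sigma, act.shape[-1], N)
    q_gen = critic(obs, a0)                                  # Q of generated actions
    q_norm = critic(obs, act).abs().mean().detach()          # normalizer over dataset actions
    return l_bc - eta * q_gen.mean() / q_norm

At execution time the same sample_action routine can be reused (typically wrapped in torch.no_grad()), while during training a sketch like this one lets the gradient of the Q-term flow back through the denoising chain into θ_j, which is one common way to implement a Q-guided diffusion policy.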
To train efficient Q-values for policy improvement, we optimize Eq. (<ref>) as the objective <cit.>. §.§ Data Augmentation r0.49 [t]0.49 In DOM2, in addition to the design of the novel policy with their training objectives, we also introduce a data-augmentation method to scale up the size of the dataset (shown in Algorithm <ref>). Specifically, we replicate trajectories 𝒯_i∈𝒟 with high return values (i.e., with the return value, denoted by Return(𝒯_i), higher than threshold values) in the dataset. Specifically, we define a set of threshold values ℛ={r_th,1, ..., r_th,K}. Then, we compare the reward of each trajectory with every threshold value and replicate the trajectory once whenever its return is higher than the compared threshold (Line <ref>), such that trajectories with higher returns can replicate more times. Doing so allows us to create more data efficiently and improve the performance of the policy by increasing the probability of sampling trajectories with better performance in the dataset. §.§ The DOM2 Algorithm The resulting DOM2 algorithm is presented in Algorithm <ref>. Line <ref> is the initialization step. Line <ref> is the data-augmentation step. Line <ref> is the sampling procedure for the preparation of the mini-batch data from the augmented dataset to train the agents. Lines <ref> and <ref> are the update of actor and critic parameters, i.e., the policy and the Q-value function. Line <ref> is the soft update procedure for the target networks. Our algorithm provides a systematic way to integrate diffusion into RL algorithm with appropriate regularizers and how to train the diffusion policy in a decentralized multi-agent setting. Some comparisons with the recent diffusion-based methods for action generation are in place. First of all, we use the diffusion policy in the multi-agent setting. Then, different from Diffuser <cit.>, our method generates actions independently among different timesteps, while Diffuser generates a sequence of actions as a trajectory in the episode using a combination of a diffusion model and a transformer architecture, so the actions are dependent among different timesteps. Compared to the DDPM-based diffusion policy <cit.>, we use the first-order DPM-Solver <cit.> for action generation and the U-Net architecture <cit.> of the score function for better and faster action sampling, while the DDPM-based diffusion policy <cit.> uses the multi-layer perceptron (MLP) to learn score functions. In contrast to SfBC <cit.>, we use the conservative Q-value for policy improvement to learn the score functions, while SfBC only uses the BC loss in the procedure. Below, we will demonstrate, with extensive experiments, that our DOM2 method achieves superior performance, significant generalization, and data efficiency compared to the state-of-the-art offline MARL algorithms. § EXPERIMENTS We evaluate our method in different multi-agent environments and datasets. We focus on three primary metrics, performance (how is DOM2 compared to other SOTA baselines), generalization (can DOM2 generalize well if the environment configurations change), and data efficiency (is our algorithm applicable with small datasets). §.§ Experiment Setup Environments: We conduct experiments in two widely-used multi-agent tasks including the multi-agent particle environments (MPE) <cit.> and high-dimensional and challenging multi-agent MuJoCo (MAMuJoCo) tasks <cit.>. In MPE, agents known as physical particles need to cooperate with each other to solve the tasks. 
MAMuJoCo extends the single-agent MuJoCo locomotion tasks so that the robot is controlled by multiple cooperating agents. We use Predator-prey, World and Cooperative navigation in MPE and the 2-agent HalfCheetah in MAMuJoCo as the experimental environments. The details are shown in Appendix <ref>. To demonstrate the generalization capability of our DOM2 algorithm, we conduct experiments in both standard environments and shifted environments. Compared to the standard environments, features of the shifted environments are changed randomly to increase the difficulty for the agents to finish the task, as will be described later. Datasets: We construct four different datasets following <cit.> to represent different qualities of behavior policies: i) Medium-replay dataset: record all of the samples in the replay buffer during training until the performance of the policy is at the medium level, ii) Medium dataset: take 1 million samples by unrolling a policy whose performance reaches the medium level, iii) Expert dataset: take 1 million samples by unrolling a well-trained policy, and iv) Medium-expert dataset: take 1 million samples by sampling from the medium dataset and the expert dataset in proportion. Baseline: We compare the DOM2 algorithm with the following state-of-the-art baseline offline MARL algorithms: MA-CQL <cit.>, OMAR <cit.>, and MA-SfBC, an extension of the single-agent diffusion-based policy SfBC <cit.>. Our methods are all built on independent TD3 with decentralized actors and critics. Each algorithm is executed for 5 random seeds, and the mean performance and the standard deviation are presented. A detailed description of hyperparameters, neural network structures, and setup can be found in Appendix <ref>.

§.§ Multi-Agent Particle Environment

Performance Table <ref> shows the scores of the algorithms under different datasets. We see that in all settings, DOM2 significantly outperforms MA-CQL, OMAR, and MA-SfBC. We also observe that DOM2 has smaller deviations in most settings compared to other algorithms, demonstrating that DOM2 is more stable in different environments. Generalization In MPE, we design the shifted environment by changing the speed of the agents. Specifically, we change the speeds of the agents by randomly choosing v_1,v_2∈[v_min,1.0] in each episode for evaluation (the default speed of every agent j is v_j=1.0 in the standard environment). Here v_min=0.4,0.5,0.3 in predator-prey, world, and cooperative navigation, respectively. These values are the minimum speeds that guarantee that the agents can still catch the adversary at the slowest speed with an appropriate policy. We train the policy using the dataset generated in the standard environment and evaluate it in both the standard environment and the shifted environments to examine the performance and generalization of the policy; the same protocol is used later. The results for these shifted environments are shown in Table <ref>. We can see that DOM2 significantly outperforms the compared algorithms in nearly all settings, and achieves the best performance in 11 out of 12 settings. In only one setting is the performance slightly below OMAR. Data Efficiency In addition to the above performance and generalization, DOM2 also possesses superior data efficiency. To demonstrate this, we train the algorithms using only a small percentage of the given dataset. The results are shown in Fig. <ref>.
The averaged normalized score is calculated by averaging the normalized scores over the medium, medium-expert and expert datasets (the normalization benchmark is given in Appendix <ref>). DOM2 exhibits remarkably better performance in all MPE tasks, i.e., using a data volume that is 20+ times smaller, it still achieves state-of-the-art performance. This unique feature is extremely useful for making good use of offline data, especially in applications where data collection can be costly, e.g., robotics and autonomous driving. Ablation study In this part, we present an ablation study for DOM2 to evaluate its sensitivity to key hyperparameters, including the regularization coefficient η and the diffusion step N. The effect of the regularization coefficient η Fig. <ref> shows the average score of DOM2 on the MPE world task with different values of the regularization coefficient η∈[0.1,25.0] in 4 datasets. To realize the advantage of the diffusion-based policy, the coefficient η needs to balance the two regularization terms appropriately, and the appropriate value is influenced by the quality of the dataset. For the expert dataset, η tends to be small, and in the other datasets, η tends to be relatively larger. The reason that a small η performs well on the expert dataset is that, with data from well-trained strategies, getting close to the behavior policy is sufficient for training a good policy, without the need for policy improvement. The effect of the diffusion step N Fig. <ref> shows the average score of DOM2 on the MPE world task with different values of the diffusion step N∈[1,10] under each dataset. The optimal number of diffusion steps varies with the dataset. We also observe that N=5 is a good choice for both the efficiency of diffusion action generation and the performance of the obtained policy in MPE.

§.§ Scalability in Multi-Agent MuJoCo Environment

We now turn to the more complex continuous-control HalfCheetah-v2 environment in a multi-agent setting (an extension of the single-agent task <cit.>); the details are in Appendix <ref>. Performance. Table <ref> shows the performance of DOM2 in the multi-agent HalfCheetah-v2 environments. We see that DOM2 outperforms the other compared algorithms and achieves state-of-the-art performance across all datasets. Generalization. As in the MPE case, we also evaluate the generalization capability of DOM2 in this setting. Specifically, we design shifted environments following the scheme in <cit.>, i.e., we set up Random (R) and Extreme (E) environments by changing the environment parameters (details are shown in Appendix <ref>). The performance of the algorithms is shown in Table <ref>. The results show that DOM2 significantly outperforms other algorithms in nearly all settings, and achieves the best performance in 7 out of 8 settings.

§ CONCLUSION

We propose DOM2, a novel offline MARL algorithm, which contains three key components, i.e., a diffusion mechanism for enhancing policy expressiveness and diversity, an appropriate regularizer, and a data-augmentation method. Through extensive experiments on multi-agent particle and multi-agent MuJoCo environments, we show that DOM2 significantly outperforms state-of-the-art benchmarks. Moreover, DOM2 possesses superior generalization capability and ultra-high data efficiency, i.e., achieving the same performance as benchmarks with 20+ times less data.
unsrtnat [section] [section]l1 § ADDITIONAL DETAILS ABOUT DIFFUSION PROBABILISTIC MODEL In this section, we elaborate on more details about the diffusion probabilistic model that we do not cover in Section <ref> due to space limitation, and compare the similar parts between the diffusion model and DOM2 in offline MARL. In the noising action part, we emphasize a forward process {b_t,j^τ}_τ∈[τ_0,τ_N] starting at b_t,j^τ_0∼π_θ_j(·|o_t,j) in the dataset 𝒟 and b_t,j^τ_N is the final noise. This forward process satisfies that for any diffusing time index τ∈[τ_0,τ_N], the transition probability q_τ_0τ(b_t,j^τ|b_t,j^τ_0)=𝒩(b_t,j^τ;α_τb_t,j^τ_0,σ_τ^2I) <cit.> (α_τ,σ_τ is called the noise schedule). We build the reverse process of SDE as Eq. (<ref>) and we will describe the connection between the forward process and the reverse process of SDE. Kingma <cit.> proves that the following forward SDE (Eq. (<ref>)) solves to a process whose transition probability q_τ_0τ(b_t,j^τ|b_t,j^τ_0) is the same as the forward process, which is written as: db^τ_t,j = f(τ)b_t,j^τdτ + g(τ)dw_τ, b^τ_0_t,j∼π_β_j(·|o_t,j). Here π_β_j(·|o_t,j) is the behavior policy to generate b_t,j^τ_0 for agent j given the observation o_t,j, f(τ)=dlogα_τ/dτ,g^2(τ)=dσ_τ^2/dτ-2dlogα_τ/dτσ_τ^2 and w_t is a standard Brownion motion. It was proven in <cit.> that the forward process of SDE from τ_0 to τ_N has an equivalent reverse process of the SDE from τ_N to τ_0, which is the Eq. (<ref>). In this way, the forward process of conditional probability and the reverse process of SDE are connected. In our DOM2 for offline MARL, we propose the objective function in Eq. (<ref>) and its simplification. In detail, following <cit.>, the loss function for score matching is defined as: ℒ_bc(θ_j) :=∫_τ_0^τ_Nω(τ)𝔼_a_t,j^τ∼ q_τ(b_t,j^τ)[‖ϵ_θ_j(a_t,j^τ, o_t,j, τ) + σ_τ∇_b_t,j^τlog q_τ(b_t,j^τ|o_t,j) ‖_2^2] dτ =∫_τ_0^τ_Nω(τ)𝔼_a_t,j^τ_0∼π_β_j(a_t,j^τ_0|o_t,j),ϵ∼𝒩(0,I)[‖ϵ-ϵ_θ_j (α_τa_t,j^τ_0+σ_τϵ,o_t,j,τ ) ‖_2^2] dτ+C, where ω(τ) is the weighted parameter and C is a constant independent of θ_j. In practice for simplification, we set that w(τ)=1/(τ_N-τ_0), replace the integration by random sampling a diffusion timestep and ignore the equally weighted parameter ω(τ) and the constant C. After these simplifications, the final objective becomes Eq. (<ref>). Next, we introduce the accelerated sampling method to build the connection between the reverse process of SDE for sampling and the accelerated DPM-solver. In the denoising part, we utilize the following SDE of the reverse process (Eq. (2.5) in <cit.>) as: da^τ_t,j = [f(τ)a^τ_t,j+ g^2(τ)/σ_τϵ_θ_j (a_t,j^τ,o_t,j,τ)]dτ + g(τ)dw_τ, a^τ_N_t,j∼𝒩(0,I). To achieve faster sampling, Song <cit.> proves that the following ODE equivalently describes the process given by the reverse diffusion SDE. It is thus called the diffusion ODE. da_t,j^τ/dτ=f(τ)a_t,j^τ+g^2(τ)/2σ_τϵ_θ_j (a_t,j^τ,o_t,j,τ), a^τ_N_t,j∼𝒩(0,I). At the end of the denoising part, we use the efficient DPM-solver (Eq. (<ref>)) to solve the diffusion ODE and thus implement the denoising process. The formal derivation can be found on <cit.> and we restate their argument here for the sake of completeness, for a more detailed explanation, please refer to <cit.>. For such a semi-linear structured ODE in Eq. (<ref>), the solution at time τ can be formulated as: a_t,j^τ=exp(∫_τ'^τf(u)du)a_t,j^τ'+∫_τ'^τ(exp(∫_u^τf(z)dz)g^2(u)/2σ_uϵ_θ_j (a_t,j^u,o_t,j,u))du. 
Defining λ_τ=log(α_τ/σ_τ), we can rewrite the solution as: a_t,j^τ=α_τ/α_τ'a_t,j^τ'-α_τ∫_τ'^τ(dλ_u/du)σ_u/α_uϵ_θ_j (a_t,j^u,o_t,j,u)du. Notice that the definition of λ_τ is dependent on the noise schedule α_τ,σ_τ. If λ_τ is a continuous and strictly decreasing function of τ (the selection of our final noise schedule in Eq. (<ref>) actually satisfies this requirement, which we will discuss afterwards), we can rewrite the term by change-of-variable. Based on the inverse function τ_λ(·) from λ to τ such that τ=τ_λ(λ_τ) (for simplicity we can also write this term as τ_λ) and define ϵ̂_θ_j(â_t,j^λ_τ,o_t,j,λ_τ)=ϵ_θ_j (a_t,j^τ,o_t,j,τ), we can rewrite Eq. (<ref>) as: a_t,j^τ=α_τ/α_τ'a_t,j^τ'-α_τ∫_λ_τ'^λ_τexp(-λ)ϵ̂_θ_j(â_t,j^λ,o_t,j,λ)dλ. Eq. (<ref>) is satisfied for any τ,τ'∈[τ_0,τ_N]. We uniformly partition the diffusion horizon [τ_0, τ_N] into N subintervals {[τ_i, τ_i+1]}_i=0^N-1, where τ_i=i/N (also τ_0=0,τ_N=1). We follow <cit.> to use the variance-preserving (VP) type function <cit.> to train the policy efficiently. First, define {β_τ}_τ∈[0,1] by β_τ=1-exp(-β_min1/(N+1)-(β_max-β_min)2Nτ+1/2(N+1)^2), and we pick β_min=0.1,β_max=20.0. Then we choose the noise schedule α_τ_i,σ_τ_i by α_τ_i=1-β_τ_i,σ_τ_i^2=1-α_τ_i^2 for i=1… N. It can be then verified that by plugging this particular choice of α_τ and σ_τ into λ_τ = log(α_τ/σ_τ), the obtained λ_τ is a strictly decreasing function of τ (Appendix E in <cit.>). In each interval [τ_i-1,τ_i], given a_t,j^τ_i, the action obtained in the previous diffusion step at τ_i, according to Eq. (<ref>), the exact action next step a_t,j^τ_i-1 is given by: a_t,j^τ_i-1 = α_τ_i-1/α_τ_ia_t,j^τ_i-α_τ_i∫_λ_τ_i^λ_τ_i-1exp(-λ)ϵ̂_θ_j(â_t,j^λ,o_t,j,λ)dλ. We denote the k-th order derivative of ϵ̂_θ_j(â_t,j^λ,o_t,j,λ) at λ_τ_i, which is written as ϵ̂_θ_j^(k)(â_t,j^λ,o_t,j,λ_τ_i). By ignoring the higher-order remainder 𝒪((λ_τ_i-1-λ_τ_i)^k+1), the k-th order DPM-solver for sampling can be written as: a_t,j^τ_i-1 = α_τ_i-1/α_τ_ia_t,j^τ_i-α_τ_i∑_n=0^k-1ϵ̂_θ_j^(n)(â_t,j^λ_τ_i,o_t,j,λ_τ_i)∫_λ_τ_i^λ_τ_i-1exp(-λ)(λ-λ_τ_i)^n/n!dλ. For k=1, the results are actually the first-order iteration function in Section <ref>. Similarly, we can use a higher-order DPM-solver. § EXPERIMENTAL DETAILS §.§ Experimental Setup: Environments We implement our algorithm and baselines based on the open-source environmental engines of multi-agent particle environments (MPE) <cit.>,[https://github.com/openai/multiagent-particle-envs] and multi-agent MuJoCo environments (MAMuJoCo)<cit.>[https://github.com/schroederdewitt/multiagent_mujoco]. Figure <ref> shows the tasks from MPE and MAMuJoCo. In cooperative navigation shown in Fig. <ref>, agents (red dots) cooperate to reach the landmark (blue crosses) without collision. In predator-prey in Fig. <ref>, predators (red dots) are intended to catch the prey (blue dots) and avoid collision with the landmark (grey dots). The predators need to cooperate with each other to surround and catch the prey because the predators run slower than the prey. The world task in Fig. <ref> consists of 3 agents (red dots) and 1 adversary (blue dots). The slower agents are intended to catch the faster adversary that desires to eat food (yellow dots). The agents need to avoid collision with the landmark (grey dots). Moreover, if the adversary hides in the forest (green dots), it is harder for the agents to catch the adversary because they do not know the position of the adversary. The two-agent HalfCheetah is shown in Fig. 
<ref>, and different agents control different joints (grey or white joints) and they need to cooperate for better control the half-shaped cheetah to run stably and fast. The expert and random scores for cooperative navigation, predator-prey, and world are {516.8,159.8},{185.6,-4.1},{79.5,-6.8}, and we use these scores to calculate the normalized scores in Fig. <ref>. For the MAMuJoCo environment, we design two different shifted environments: Random (R) environment and Extreme (E) environments following <cit.>. These environments have different parameters and we focus on randomly sampling the three parameters: (1) power, the parameter to influence the force that is multiplied before application, (2) torso density, the parameter to influence the weight, (3) sliding friction of the joints. The detailed sample regions of these parameters in different environments are shown in <ref>. §.§ Experimental Setup: Network Structures and Hyperparameters In DOM2, we utilize the multi-layer perceptron (MLP) to model the Q-value functions of the critics by concatenating the state-action pairs and sending them into the MLP to generate the Q-function, which is the same as in MA-CQL and OMAR <cit.>. Different from MA-CQL and OMAR, we use the diffusion policy to generate actions, and we use the U-Net architecture <cit.> to model the score function ϵ_θ_j (a_t,j^τ_i,o_t,j,τ_i) for agent j at timestep τ_i. Different from MA-SfBC <cit.>, we use the U-Net architecture with a dropout layer for better training stability. All the MLPs consist of 1 batch normalization layer, 2 hidden layers, and 1 output layer with the size (input_dim,hidden_dim),(hidden_dim,hidden_dim),(hidden_dim,output_dim) and hidden_dim=256. In the hidden layers, the output is activated with the Mish function, and the output of the output layer is activated with the Tanh function. The U-Net architecture resembles <cit.> with multiple residual networks as shown in Figure <ref>. We also introduce a dropout layer after the output of each residual layer with a 0.1 dropout rate to prevent overfitting. We use 5× 10^-3 in all MPE environments as the learning rate to train the network of the score function (Fig. <ref>) in the diffusion policy. For training the Q-value network, we use the learning rate of 3× 10^-4 in all environments. The trade-off parameter η is used to balance the regularizers of actor losses, and the total diffusion step number N is for sampling denoised actions. In the MAMuJoCo HalfCheetah-v2 environment, the learning rates for training the network of the score function in medium-replay, medium, medium-expert, and expert datasets are set to 1× 10^-4,2.5× 10^-4,2.5× 10^-4,5× 10^-4, respectively. We use N=5 as the diffusion timestep in MPE and N=10 in the MAMuJoCo HalfCheetah-v2 environment. The hyperparameter η and the set of threshold values ℛ={r_th,1,...,r_th,K} in different settings are shown in <ref>. For all other hyperparameters, we use the same values in our experiments. §.§ Details about 3-Agent 6-Landmark Task We now discuss detailed results in the 3-Agent 6-Landmark task. We construct the environment based on the cooperative navigation task in multi-agent particles environment <cit.>. This task contains 3 agents and 6 landmarks. The size of agents and landmarks are all 0.1. For any landmark j=0,1,...,5, its position is given by (cos(2π j/6),sin(2π j/6)). In each episode, the environment initializes the positions of 3 agents inside the circle of the center (0,0) with a 0.1 radius uniformly at random. 
If the agent can successfully find any one of the landmarks, the agent gains a positive reward. If two agents collide, the agents are both penalized with a negative reward. We construct two different environments: the standard environment and shifted environment. In the standard environment, all 6 landmarks exist in the environment, while in the shifted environment, in each episode, we randomly hide 3 out of 6 landmarks. We collect data generated from the standard environment and train the agents using different algorithms for both environments. We evaluate how our algorithm performs compared to the baseline algorithms in this task (with different configurations of the targets) and investigate their performance by rolling out K times at each evaluation (K ∈{1, 10}) following <cit.>. For evaluating the policy in the standard environment, we test the policy for 10 episodes with different initialized positions and calculate the mean value and the standard deviation as the results of evaluating the policy. This corresponds to rolling out K=1 time at each evaluation. For the shifted environment, in spite of the former evaluation method (K=1), we also evaluate the algorithm in another way following <cit.>. We first test the policy for 10 episodes at the same initialized positions and take the maximum return in these 10 episodes. We repeat this procedure 10 times with different initialized positions and calculate the mean value and the standard deviation as the results of evaluating the policy, which corresponds to rolling out K=10 times at each evaluation. It has been reported (see e.g. <cit.>) that for a diversity-driven method, increasing K can help the diverse policy gain higher returns. In <ref>, we show the results of different algorithms in standard environments and shifted environments. It can be seen that DOM2 outperforms other algorithms in both the standard environment and shifted environments. Specifically, in the standard environment, DOM2 outperforms other algorithms. This shows that DOM2 has better expressiveness compared to other algorithms. In the shifted environment, when K=1, it turns out that DOM2 already achieves better performance with expressiveness. Moreover, when K=10, DOM2 significantly improves the performance compared to the results in the K=1 setting. This implies that DOM2 finds much more diverse policies, thus achieving better performance compared to the existing conservatism-based method, i.e., MA-CQL and OMAR. In Fig. <ref> (same as Fig. <ref>), we show the average mean value and the standard deviation value of different datasets in the standard environment as the left diagram and in the shifted environment with 10-times evaluation in each episode as the right diagram. The performance of DOM2 is shown as the light blue bar. Compared to the MA-CQL as the orange bar and OMAR as the red bar, DOM2 shows a better average performance in both settings, which means that DOM2 efficiently trains the policy with much better expressiveness and diversity.
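The evaluation protocol above (rolling out K times per evaluation and keeping the best return for each initialization) can be summarized in a short sketch. The environment and policy interfaces (a seeded reset, a policy act method returning one action per agent) are assumptions for illustration; only the structure of the K=1 / K=10 protocol follows the text.

import numpy as np

def run_episode(env, policy, seed):
    # Roll out one episode from a seeded initialization and return the total team return.
    obs = env.reset(seed=seed)
    done, total = False, 0.0
    while not done:
        actions = policy.act(obs)                # decentralized actions, one per agent
        obs, rewards, done, _ = env.step(actions)
        total += float(np.sum(rewards))
    return total

def evaluate(env, policy, K=10, n_inits=10, base_seed=0):
    # For each initialization, roll out K episodes and keep the best return;
    # report the mean and standard deviation over the n_inits initializations.
    best = []
    for i in range(n_inits):
        returns = [run_episode(env, policy, seed=base_seed + i) for _ in range(K)]
        best.append(max(returns))
    return float(np.mean(best)), float(np.std(best))

# evaluate(env, policy, K=1) reproduces the standard evaluation;
# evaluate(env, policy, K=10) is the diversity-friendly variant described above.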
http://arxiv.org/abs/2307.01629v1
20230704102604
The Gaia alerted fading of the FUor-type star Gaia21elv
[ "Zsófia Nagy", "Sunkyung Park", "Péter Ábrahám", "Ágnes Kóspál", "Fernando Cruz-Sáenz de Miera", "Mária Kun", "Michał Siwak", "Zsófia Marianna Szabó", "Máté Szilágyi", "Eleonora Fiorellino", "Teresa Giannini", "Jae-Joon Lee", "Jeong-Eun Lee", "Gábor Marton", "László Szabados", "Fabrizio Vitali", "Jan Andrzejewski", "Mariusz Gromadzki", "Simon Hodgkin", "Maja Jabłońska", "Rene A. Mendez", "Jaroslav Merc", "Olga Michniewicz", "Przemysław J. Mikołajczyk", "Uliana Pylypenko", "Milena Ratajczak", "Łukasz Wyrzykowski", "Michal Zejmo", "Paweł Zieliński" ]
astro-ph.SR
[ "astro-ph.SR", "astro-ph.GA" ]
FU Orionis objects (FUors) are eruptive young stars whose outbursts last from decades to a century. Due to the duration of these outbursts, and to the fact that only about two dozen such sources are known, information on how their outbursts end is limited. Here we analyse follow-up photometry and spectroscopy of Gaia21elv, a young stellar object that has been in outburst for several decades. It was reported as a Gaia science alert due to its recent fading by more than a magnitude. To study the fading of the source and look for signatures characteristic of FUors, we obtained follow-up near-infrared (NIR) spectra using Gemini South/IGRINS, and both optical and NIR spectra using VLT/X-SHOOTER. The spectra at both epochs show typical FUor signatures, such as a triangular-shaped H-band continuum, an absorption-line dominated spectrum, and P Cygni profiles. In addition to the typical FUor signatures, emission lines of [Oi], [Feii], and [Sii] were detected, suggesting the presence of a jet or disk wind. Fitting the spectral energy distributions with an accretion disc model suggests a decrease of the accretion rate between the brightest and faintest states. The rapid fading of the source in 2021 was most likely dominated by an increase of circumstellar extinction. The spectroscopy presented here confirms that Gaia21elv is a classical FUor, the third such object discovered among the Gaia science alerts. Stars: variables: T Tauri – stars: pre-main sequence § INTRODUCTION Studying accretion in young stellar objects (YSOs) is important for understanding their formation. Most of what we know about accretion in YSOs is based on the magnetospheric accretion scenario, according to which the material accretes onto the forming star from the infalling envelope through the disk, by following the magnetic field lines <cit.>. The accretion rates of YSOs are known to be highly variable, the most extreme cases being eruptive YSOs, which experience outburst events during which their luminosity increases by up to two orders of magnitude. These events are detected as 2-5 mag brightenings in optical and near-infrared (NIR) bands. During the outbursts the mass accretion rate can increase from ∼10^-8 M_⊙ yr^-1 in quiescence to ∼10^-4 M_⊙ yr^-1 (, ). Studies with large samples of objects indicate that young stars experience these events once every 10^3-10^4 years (e.g. ). Episodic accretion is one of the possible explanations for the observed large luminosity spread of young stellar objects <cit.>. FU Orionis objects (FUors) are well-studied examples of episodic accretion <cit.>. FUors are low-mass (<2 M_⊙) eruptive YSOs that exhibit large-amplitude (>4 mag) outbursts at optical and infrared wavelengths. These outbursts are expected to last up to a century, suggesting that they will not only increase the final stellar mass by a significant amount, but also affect the evolution of the circumstellar disc. The representative characteristics of FUors are a brightness increase on a time scale of 1-10 yr, a P Cygni profile of Hα, Lii 6707 Å absorption, strong CO absorption features, and a triangular shape of the H-band continuum due to the strong water absorption bands on both sides of the H-band window, typical of late M-type stars (; ). So far the number of confirmed FUors is limited to about two dozen <cit.>. One of the important, but so far unclear, questions is the end of the FUor outbursts, i.e. 
their return to quiescence. FUor outbursts are expected to end when the inner disc depletes. However, due to the typically decades-long duration of the outbursts, no bona fide FUor has returned to quiescence yet, apart from cases of short, temporary halt in the accretion, e.g. V899 Mon <cit.> and V346 Nor (, ). Another example is V1647 Ori, an eruptive YSO that has shown some FUor characteristics <cit.>, and returned to quiescence after a ten-years long outburst <cit.>. The spectroscopic deviation of V1647 Ori from well-known FUors, however, ruled out its FUor classification <cit.>. Therefore, it is not known whether the end of FUor outbursts is an abrupt event when accretion suddenly stops and the brightness drops back to the quiescent level in 1-2 years, or it is a slow gradual decrease of the accretion rate resulting in a slowly decreasing light curve over perhaps decades. The first scenario would indicate some instability, like the thermal instability model proposed by <cit.>. To understand how FUors end their outbursts, it is important to increase their sample. One of the best tools to discover the brightening or fading of eruptive young star candidates is the Gaia Photometric Science Alerts system, due to its large sky coverage and typically monthly cadence <cit.>. Several eruptive YSOs have already been discovered based on the Gaia Science Alerts, including the FUors Gaia17bpi <cit.> and Gaia18dvy <cit.>, and the EX Lupi-type eruptive YSOs (EXors) Gaia18dvz <cit.>, Gaia20eae <cit.> and Gaia19fct <cit.>. Some additional eruptive YSOs were found, which cannot be classified as either a FUor or an EXor, such as Gaia19ajj <cit.>, Gaia19bey <cit.>, and Gaia21bty (Siwak et al., submitted). Two Gaia alerted sources with light curves similar to eruptive YSOs, Gaia20bwa and Gaia20fgx <cit.>, turned out to be classical T Tauri stars (CTTS), while the brightening of another Gaia alerted YSO, V555 Ori (Gaia17afn), was confirmed to be caused by variable circumstellar extinction, rather than a change in its accretion rate <cit.>. Here we present a study of a previously known YSO, which triggered the Gaia Science Alerts system due to its fading. Gaia21elv (ESO Hα-148 or 2MASS J08410676-4052174, α_ J2000 = 08^ h 41^ m 0675, δ_ J2000 = -40^∘ 52' 1744) had a Gaia alert on 2021 October 6 due to its quick fading by 1.2 mag over 18 months. Its archival photometry based on photographic plates of the SuperCOSMOS Sky Survey (SSS) showed a long-term brightening <cit.>. It is a known young, Class II type star (, ), associated with the Vela Molecular Ridge <cit.>, and in particular, with the RCW 27 HII region located at a distance of ∼1 kpc <cit.>. Its Gaia DR3 <cit.> parallax is 1.0727±0.0397 mas. The Renormalised Unit Weight Error (RUWE) of 1.291 and the astrometric excess noise of 0.437 mas suggest that the astrometry is accurate. We derived a zero-point correction of -0.02513 based on <cit.> for this parallax. After the zero-point correction, the Gaia DR3 parallax can be converted to a distance of 910.9±33.7 pc, which we use in this paper. This distance is close to the estimate of 905^+36_-26 pc by <cit.>. In this paper, we provide spectroscopic evidence that Gaia21elv is a FUor, and discuss the cause of its fading that triggered the Gaia Alerts system. We describe the photometric and spectroscopic observations in Sect. <ref> and present their results in Sect. <ref>. We analyse the FUor signatures in the NIR spectra in Sect. 
<ref>, discuss the nature of the fading of the source, and provide a comparison to other similar sources. We summarize our main findings in Sect. <ref>. § OBSERVATIONS §.§ Optical photometry In 2022 June, we obtained optical photometric observations of Gaia21elv with the 60-cm Ritchey-Chrétien Rapid Eye Mount (REM) telescope operated by the Italian National Institute for Astrophysics (INAF) at La Silla (Chile) using its ROS2 instrument, an optical imager operating at four simultaneous passbands (Sloan g'r'i'z') with a field of view (FoV) of 91×91 and pixel scale of 058. Three images were taken per filter on four nights, 2022 June 5, 6, 8, and 9. After the usual bias and flat field correction, and removal of hot pixels, we obtained aperture photometry for Gaia21elv and about 15 comparison stars in the FoV. We selected the comparison stars from the APASS9 catalog <cit.> making sure that they are sufficiently constant in brightness (σ_V<0.08mag). We calculated the z-band brightness of the comparison stars by plotting their spectral energy distribution (SED) using APASS9 Bg'Vr'i' and 2MASS JHK_s magnitudes <cit.> and interpolating between these points for the effective wavelength of the z' filter, 1.05μm. We used an aperture radius of 6 pixels (35) and sky annulus between 20 and 40 pixels (1168 and 2336). Because all comparison stars were much bluer than Gaia21elv, in order to avoid introducing large uncertainties by extrapolation, we converted the instrumental magnitudes by averaging the calibration factors of all comparison stars without fitting a colour term. The results can be seen in Table <ref>. Further observations of the target have been performed with REM between 2022 Oct 26 and 2023 Jan 4, during 12 nights. These observations, taken in Sloan g'r'i' passbands, were uploaded to the BHTOM service.[BHTOM - Black Hole TOM: ] 40, 38 and 44 images were reduced in Sloan g'r'i', respectively. Photometric observations were obtained with the PROMPT6 telescope located at Cerro Tololo Inter-American Observatory in Chile. This telescope is a part of SkyNET robotic network and is supplied with FLI CCD camera with 15.1 × 15.1 arcmin field-of-view (2048 × 2048 pixels, 0.44 arcsec/pix). All 42 observations (14 frames per band) were taken in Johnson-Cousins V, R and I bands and uploaded to the BHTOM service, where they were reduced and converted to standard magnitudes (in APASS/V, APASS/r and APASS/i respectively). We obtained photometric observations with the 1.54m Danish telescope, located at La Silla, Chile. The telescope is equipped with the CCD camera (E2V231-42) in the Cassegrain focus, cooled by liquid nitrogen. The FoV is 13.7 × 13.7 arcmin (2048 × 2048 pixels; pixel scale of 0.4 arcsec/pixel). The filters used were Johnson-Cousins B V R_c I_c. In all cases, the exposure time was 90 seconds. We collected data using the 50cm CDK telescope equipped with a QHY268M pro camera. This telescope (ROTUZ) is part of the DeepSkyChile[], and belongs to the Janusz Gil Institute of Astronomy, University of Zielona Gora, Poland. We reduced the data by applying bias, dark, and flat correction using AstroImageJ software <cit.>. The photometry was done using the BHTOM server. The photometry done using the BHTOM server is based on the method described in <cit.> and <cit.>. The results are shown in Fig. <ref> and are summarized in Tables <ref> and <ref>. §.§ Infrared photometry In 2022 June, we obtained infrared photometric observations with the REM, using the infrared imaging camera, REMIR. 
The reduction of the JHK images, performed with our own IDL routines, included the construction and subtraction of a sky image, and flat-fielding. We extracted the instrumental magnitudes for the target as well as for all good-quality 2MASS stars (i.e. with a 2MASS photometric quality flag of AAA) in the field in an aperture with a radius of ∼37. No extended nebulosity is visible around the source on the 2MASS images. The final step was the determination of an average constant calibration factor between the instrumental and the 2MASS magnitudes of typically 30–50 stars, and this offset was applied to the target observations. The results can be found in Table <ref>. REMIR was used again between October 2022 and January 2023 for J-band imaging. Each image came from the five single images jittered along a circle thanks to a dithering wedge from which a median sky was derived. Every image was then sky-subtracted with the median sky. Subsequently, the five images were re-aligned and averaged into a single J band exposure. Calibrated images were then uploaded to the BHTOM service, reduced and matched to 2MASS J band as described above for the optical data. We used mid-infrared photometry from the Wide-field Infrared Survey Explorer (WISE) and NEOWISE surveys from the NASA/IPAC Infrared Science Archive. NEOWISE observes the full sky on average twice per year with multiple exposures per epoch. For a comparison with the photometry from other instruments, we computed the average of multiple exposures of a single epoch. NEOWISE W1 and W2 photometry is known to display a photometric bias for saturated sources. We corrected for this bias using the correction curves given in the Explanatory Supplement to the NEOWISE Data Release Products. We derived the average of the uncertainties of the single exposures (err1). We also calculated the standard deviation of the points we averaged per season (err2). For the error of the data points averaged per epoch we used the maximum of err1 and err2. §.§ Spectroscopy We obtained high-resolution (R∼45,000) NIR spectra of Gaia21elv on 2020 November 14 (Program ID: GS-2020B-Q-218, PI: S. Park) using the Immersion GRating INfrared Spectrograph <cit.> of Gemini South, in the H and K bands. The spectrum was obtained with a slit size of 0.34 × 5. Gaia21elv was observed with two sets of ABBA nodding observations to subtract the sky background better. The total exposure time of Gaia21elv was 192 sec with 24 sec exposure of each frame. The data were reduced using the IGRINS pipeline <cit.> for flat-fielding, sky subtraction, correcting the distortion of the dispersion direction, wavelength calibration, and combining the spectra. In order to correct for telluric absorption features, a nearby A0 telluric standard star (HIP 21514) was observed right before the target. Then, the telluric correction and flux calibration were applied as done in <cit.>. Finally, barycentric velocity correction using barycorrpy <cit.> was applied (V_bary = 16.715 km s^-1). A spectrum using the X-SHOOTER instrument of the Very Large Telescope (VLT) at ESO's Paranal Observatory in Chile <cit.> was taken on 2021 December 12 (Program ID: 108.23M6, PI: Z. Nagy). X-SHOOTER simultaneously covers a wavelength range from 300 nm to 2480 nm, and the spectra are divided into three arms, the ultraviolet (UVB, 300 – 550 nm), the visible (VIS, 500 – 1020 nm), and the near-infrared (NIR, 1000 – 2480 nm). 
The observations were performed with the narrow slits of 1”, 0.9”, and 0.4” in the UVB, VIS, and NIR respectively, leading to spectral resolution of R ∼ 5400, 8900, and 11600, respectively. The exposure time was 1800 s in each of the three arms. We obtained additional exposures with the 5” slits, which resulted in data without slit losses, which we used for the correct flux calibration of the spectra obtained with the narrower slits. The ABBAAB nodding pattern was used. The observations were processed with the official ESO pipeline. Telluric correction was performed using ESO's Molecfit program <cit.> running in the same EsoReflex environment <cit.>. § RESULTS §.§ Light and colour variations Figure <ref> shows the optical and infrared light curves of Gaia21elv, including archival data from 1977 ( and references therein), the All-Sky Automated Survey for Supernovae (ASAS-SN, , ), and the Asteroid Terrestrial-impact Last Alert System (ATLAS, , , ) survey downloaded from the ATLAS Forced Photometry web service <cit.>. Based on these data, the eruption occurred around between 1991 and 1996. The amplitude of the brightening was 4-4.5 mag from a quiescent 16.5-17 mag to around 12 mag in the R-band. A slow fading of the source is already seen after 2010 based on data points from <cit.> (collected from the AAVSO Photometric All Sky Survey (APASS) DR9 <cit.>, the VST Photometric Halpha Survey (VPHAS+) DR2 <cit.>, the Bochum Galactic disc survey <cit.>), and the Gaia G-band light curve. In 2021, the source started a more rapid fading, and had a Gaia alert in 2021 October due to its 1.2 mag fading in 18 months. After the Gaia alert, a temporary brightening by about 0.2 mag was seen in early 2022, and after that, the source stayed at the same brightness for several months, around 14.25 mag in Gaia G-band. Between 2022 July and November, the source brightened again, by about 0.3 mag as is seen in the lower panel of Fig. <ref>. A slow long-term fading is also seen in the WISE data points. Figure <ref> shows a colour-magnitude diagram based on the WISE W1 and W2 bands. As the changes are mostly grey, extinction can be excluded as the physical mechanism between the flux changes observed at the WISE wavelengths. Figure <ref> shows the J-H vs H-K_s diagram for the bright state (2MASS data point from 1999 February) and for the faint state (REM data point from 2022 June). The difference between the two data points in this diagram (Δ J ∼ 0.61 mag, Δ (J-H) ∼ 0.16 mag, Δ (H-K_s) ∼ 0.13 mag) may be consistent with the reddening of the source between 1999 and 2022. In this case, the colour change implies a visual extinction increase by A_V ∼ 2 mag. However, the colour change in the J-H vs H-K_s diagram can also be caused by accretion. Eruptive young stars in the J-H vs H-K_s plot usually move toward or away from the main sequence (e.g. ). Figure <ref> shows a colour-magnitude diagram during the fading, as shown in Fig. <ref> based on the o and c band magnitudes from the ATLAS survey. There is an indication of a long-term increasing trend of the extinction. Since the period of the quick fading in 2021 is not sampled well by these data points (as seen in Fig. <ref>), it is not clear based on them, whether the increasing extinction also applies for this period. 
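As a rough check of the statement earlier in this section that the NIR colour change corresponds to A_V ∼ 2 mag, the arithmetic below converts the quoted colour differences into visual extinction using approximate broadband extinction ratios. The ratios are generic literature values assumed here for illustration and depend on the adopted reddening law; they are not necessarily those used by the authors.

# Approximate broadband extinction ratios (assumed; reddening-law dependent).
AJ_AV, AH_AV, AK_AV = 0.28, 0.18, 0.11

# Quoted colour changes between the 2MASS (1999) and REM (2022) epochs.
d_JH, d_HK = 0.16, 0.13

# If the colour changes were caused purely by extinction:
AV_from_JH = d_JH / (AJ_AV - AH_AV)
AV_from_HK = d_HK / (AH_AV - AK_AV)
print(f"A_V from Delta(J-H):  {AV_from_JH:.1f} mag")   # ~1.6 mag
print(f"A_V from Delta(H-Ks): {AV_from_HK:.1f} mag")   # ~1.9 mag

Both values are of the order of the A_V ∼ 2 mag quoted above; as noted, however, accretion changes can shift the source in the same diagram, so the colours alone cannot distinguish the two mechanisms.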
Figure <ref> shows colour-magnitude diagrams after the fading of the source, based on the o and c band magnitudes from the ATLAS survey, g-r versus g and r-i versus r colour-magnitude diagrams based on our follow-up observations between 2022 June and 2023 January. The periods covered by these figures are also indicated in Fig. <ref>. These colour-magnitude diagrams show extinction-related variations between 2022 June and 2023 January. The colour-magnitude diagram based on the ATLAS o and c band also includes data points from a period between 2021 October and 2022 May. These data points do not show an extinction-related trend, indicating, that mechanisms other than the extinction may also play a role in this post-fading phase. Based on the colour variations alone, it is not possible to make a conclusion on the origin of the brightness variations of Gaia21elv. The o and c band data from the ATLAS survey as well as the g-r versus g and r-i versus r colour-magnitude diagrams suggest extinction-related brightness variations both during the fading and the brightening. Such extinction-related variations are not seen in the WISE colour-magnitude diagrams, whereas the J-H vs H-K_s diagram can be interpreted both as a result of extinction and accretion. Therefore, we do not make a conclusion on the origin of the brightness variations based on the colour variations, and will further investigate it in Sect. <ref>. §.§ Reddening and spectral features Figure <ref> shows the spectra taken at the two epochs in optical and NIR using Gemini South/IGRINS and VLT/X-SHOOTER and their comparison to the VLT/X-SHOOTER spectrum of FU Ori. Following the method of <cit.>, we used the X-SHOOTER spectrum to estimate the visual extinction toward the source by comparing it to the spectrum of FU Ori, which has a low and well known extinction (A_V =1.7±0.1 mag; e.g. , ). We dereddened the spectrum of Gaia21elv with increasing A_V until it matched the scaled, flux calibrated spectrum of FU Ori. The resulting Δ A_V is ∼4 mag, which suggests A_V ∼ 5.7 mag for Gaia21elv in its faint state. Table <ref> lists the lines we identified in the VLT/X-SHOOTER spectrum of Gaia21elv. Most detected lines are seen in absorption, such as Baii, Lii, Na D, Ki, Ali, Hei, Paβ, and Mgi (Fig. <ref>). Some of these absorption lines show two (or more) components, such as the Baii, Hei, and Paβ lines. Some lines show a P Cygni profile, such as Hα and Hβ (Fig. <ref>) and the Caii triplet (Fig. <ref>). Forbidden lines of [Oi], [Feii], and [Sii] were detected in emission (Fig. <ref>). These lines may indicate the presence of a jet associated with Gaia21elv, similarly to what was seen for the classical FUor V1057 Cyg (e.g. ). Forbidden emission lines in young stars were also suggested to trace disk winds (, , ). The H and K-band spectra were observed at two different epochs: in 2020 November, just before the rapid fading of the source (Gemini South/IGRINS) and in 2021 December, soon after the Gaia alert reporting the fading (VLT/X-SHOOTER). These spectra display very similar features (Fig. <ref>), including a triangular shaped H-band continuum and the CO-bandhead features in absorption, both typical FUor signatures. Fig. <ref> shows lines detected at both epochs, such as Mgi, Brγ, Nai, and Cai. The line profiles did not change significantly between the two epochs. 
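The dereddening comparison described at the start of this section (increasing A_V until the flux-calibrated spectrum matches a scaled FU Ori template) can be sketched as a one-dimensional grid search. The use of the extinction package's ccm89 curve, the R_V = 3.1 value, and the least-squares scaling are assumptions of this illustration; any parameterized A(λ) curve could be substituted.

import numpy as np
import extinction  # assumed dependency providing the Cardelli et al. (1989) curve

def best_delta_av(wave_aa, target_flux, template_flux,
                  av_grid=np.arange(0.0, 10.05, 0.1)):
    # Find the Delta A_V that, once removed from the target, best matches the template.
    best_resid, best_av = np.inf, 0.0
    for av in av_grid:
        a_lambda = extinction.ccm89(wave_aa, av, 3.1)        # A(lambda) in magnitudes
        dered = target_flux * 10 ** (0.4 * a_lambda)         # remove the extinction
        scale = np.sum(dered * template_flux) / np.sum(template_flux ** 2)
        resid = np.sum((dered - scale * template_flux) ** 2)
        if resid < best_resid:
            best_resid, best_av = resid, av
    return best_av

# Synthetic check: redden a flat template by 4 mag and recover Delta A_V ~ 4.
wave = np.linspace(4000.0, 24000.0, 500)     # wavelengths in Angstrom
template = np.ones_like(wave)
target = template * 10 ** (-0.4 * extinction.ccm89(wave, 4.0, 3.1))
print(best_delta_av(wave, target, template))
# ~4.0, so A_V(Gaia21elv) ~ Delta A_V + A_V(FU Ori) = 4.0 + 1.7 = 5.7 mag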
To interpret the CO bandhead features observed at the two epochs, we used an isothermal slab model to find the best-fitting CO column density and excitation temperature of the absorbing material, similarly to <cit.> and <cit.>. We found the best-fitting CO column density to be ∼10^22 cm^-2, and a best-fitting excitation temperature of 2800±100 K at the first epoch (Gemini South/IGRINS) and 2300±100 K at the later epoch (VLT/X-SHOOTER). The results are shown in Figure <ref>. In Sect. <ref> we analyse the spectra in more detail and compare the observed features to those seen in FUors. §.§ Spectral Energy Distribution modeling In the following, we analyse the Spectral Energy Distribution (SED) of Gaia21elv at three different epochs. To create an SED for the state of maximum brightness, we used archival data from the APASS9 <cit.>, DENIS <cit.>, 2MASS <cit.>, and ALLWISE <cit.> catalogues. A comparison of the DENIS I-band flux from 1996 December with the APASS9 i'-band flux from 2010 December shows that the brightness of the star did not change significantly between these dates, thus the fact that the archival data used here correspond to different epochs is not expected to affect the modeling of the SED in the bright state. In addition to the epoch of the bright state, we compiled an SED for 2020 Oct–Nov, which is very close to the epoch of the Gemini/IGRINS spectrum, and as such just precedes the fast fading phase of the source. We used the available ASAS-SN g, Gaia G and WISE W1 data, as well as photometry in the cyan and orange bands of the ATLAS survey for this epoch. The third epoch we considered is that of the VLT/X-SHOOTER spectrum in 2021 December, as it represents the faint state at the end of the fast fading of the source. We obtained synthetic photometry in the APASS9 and 2MASS bands from the X-SHOOTER spectrum, and also used the NEOWISE W1 data point closest to this epoch. The three SEDs are shown in Fig. <ref>. As we will discuss in Sec. <ref>, the properties of Gaia21elv resemble those of FU Orionis-type stars. In these objects the circumstellar matter is expected to form an accretion disc <cit.>. To estimate the properties of the accretion disc in Gaia21elv at the three epochs, we modelled the SEDs using a steady, optically thick and geometrically thin viscous accretion disc, whose mass accretion rate is constant in the radial direction. This method was successfully applied to estimate the accretion rate in several eruptive YSOs including HBC 722 <cit.>, V582 Aur <cit.>, 2MASS 22352345 + 7517076 <cit.>, Gaia18dvy <cit.>, V1057 Cyg <cit.>, and V1515 Cyg <cit.>. In this model, the temperature profile of the disc is defined based on <cit.> as: T (r) = [ 3GM_⋆Ṁ/(8πσ r^3)( 1 - √(R_⋆/r)) ]^1/4, where r is the distance from the star, R_⋆ is the stellar radius, M_⋆ is the stellar mass, Ṁ is the accretion rate, and G and σ are the gravitational and Stefan-Boltzmann constants, respectively. The model SED was calculated by integrating black-body emission in concentric annuli between the inner disc radius and the outer disc radius. The resulting SED was then reddened by different A_V values. One of the input parameters of the model is the inclination, and as it is unknown for Gaia21elv, we used an intermediate value of 45^∘. We assumed a distance of 910.9 pc, as derived above from the Gaia DR3 parallax and its zero-point correction. There is a known degeneracy in the model between the inner disc radius and A_V. 
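To illustrate the accretion disc model just described, the sketch below evaluates the temperature profile, integrates black-body emission over concentric annuli, and reddens the resulting SED. The stellar parameters, the radial and wavelength grids, and the crude power-law extinction curve are placeholder assumptions for illustration; they are not the values or the reddening law used in the actual fits.

import numpy as np

# Physical constants (SI) and placeholder parameters (assumed for illustration).
G, sigma_sb, h, c, k_B = 6.674e-11, 5.670e-8, 6.626e-34, 2.998e8, 1.381e-23
R_sun, M_sun, pc = 6.957e8, 1.989e30, 3.086e16
M_star = 0.5 * M_sun                       # stellar mass (placeholder)
Mdot = 2e-6 * M_sun / 3.156e7              # accretion rate, 2e-6 M_sun/yr in kg/s
R_in, R_out = 2.0 * R_sun, 215.0 * R_sun   # inner radius 2 R_sun, outer radius ~1 au
dist, incl = 910.9 * pc, np.deg2rad(45.0)

def T_disc(r):
    # Steady viscous-disc temperature profile.
    return (3 * G * M_star * Mdot / (8 * np.pi * sigma_sb * r**3)
            * (1 - np.sqrt(R_in / r))) ** 0.25

def planck(lam, T):
    # Black-body spectral radiance B_lambda(T); broadcasting gives shape (n_r, n_lam).
    return 2 * h * c**2 / lam**5 / np.expm1(h * c / (lam * k_B * T[:, None]))

lam = np.logspace(np.log10(0.4e-6), np.log10(4.0e-6), 80)       # 0.4-4 micron
r = np.logspace(np.log10(1.001 * R_in), np.log10(R_out), 400)
T = T_disc(r)
dr = np.gradient(r)
# Sum the annuli (area 2*pi*r*dr), project by cos(i), and dilute by the distance.
flux = np.cos(incl) * np.sum(planck(lam, T) * (2 * np.pi * r * dr)[:, None], axis=0) / dist**2

# Redden with a crude power-law extinction curve normalised to A_V at 0.55 micron (assumption).
A_V = 4.4
A_lam = A_V * (0.55e-6 / lam) ** 1.6
flux_red = flux * 10 ** (-0.4 * A_lam)
print(f"model SED peaks (in lambda*F_lambda) near {lam[np.argmax(lam * flux_red)] * 1e6:.2f} micron")

In such a model a change in A_V can be partly compensated by a change in the inner disc radius, which is the degeneracy mentioned above.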
To break this degeneracy we adopted the A_V value of ∼5.7 mag obtained from the X-SHOOTER spectrum in Sect. <ref>. This choice fixed the inner disc radius to R_in = 2 R_⊙, a reasonable value, as it is the same as determined for FU Ori by <cit.>. The remaining free parameters of the disc model are M_⋆Ṁ, A_V, and R_out. Finding the best M_⋆Ṁ and A_V combinations was performed with χ^2 minimization over a large grid in both the accretion rate and the extinction, by taking into account all flux values between 0.4 and 4.0 μm. The formal uncertainties of the data points were set to a homogeneous 5% of the measured flux values. We ran several models assuming different outer disc radii in the range between 0.2 and 2 au, and found that the WISE data points are reasonably well fitted with R_out = 1 au, though this value is less constrained than the other two parameters. The best-fitting visual extinctions and products of the stellar mass and the accretion rate are plotted in Fig. <ref>. Since the outcome of the model is the product M_⋆Ṁ, the true accretion rate depends on the stellar mass. However, FUors are typically low-mass objects <cit.>, thus our obtained values provide a good approximation to the accretion rate. Considering the results for all three epochs, the three data points suggest that the accretion rate followed a monotonic decay in the last 15 years. Our models suggest a slight increase of the extinction toward the source from 3.6 mag to 4.4 mag between the maximum brightness and the Gemini epoch in 2020 November. Remarkably, the quick fading in 2021, corresponding to the Gaia alert, was mostly caused by an increase in extinction. The accretion luminosity of the source also dropped in parallel to the accretion rate between the first and last epoch, from 106 L_⊙ to 68 L_⊙, although the absolute values depend on the unknown inclination angle, too. § DISCUSSION §.§ Classification of Gaia21elv as a FUor To investigate, whether Gaia21elv is indeed a FUor, we used the criteria from <cit.>, which they list in their Table 3. In the following, we list these defining characteristics and check if Gaia21elv fulfills them. - The eruption is observed for each bona fide FUor, unlike for FUor-like and peculiar objects. This criterion is fulfilled for Gaia21elv. The date of the eruption can be constrained based on the light curve shown in <cit.> in their figure B2, which includes data points from the literature starting from 1977 (Fig. <ref>). The outburst of Gaia21elv based on the long term light curve occurred between 1991 and 1996. - Bona fide FUors have well defined CO absorption features. Strong CO absorption was also observed for Gaia21elv (Fig. <ref>) at both of our observing epochs. - Water vapor bands can be identified in the NIR spectra of bona fide FUors, including the feature at 1.33 μm and the triangular shaped H-band continuum, which is due to water vapor bands on each end of the H-band (Fig. <ref>). Gaia21elv shows these features at both epochs. - Bona fide FUors show other molecular bands in their J-band spectra, such as those from vanadium oxide (at 1.05 μm and 1.19 μm) and titanium oxide (0.88, 0.92, and 1.11 μm). The X-SHOOTER spectrum of Gaia21elv shows all these molecular bands as wide absorption features (Fig. <ref>). - Another characteristic of FUors is their hydrogen lines, especially the Paα, β, γ, and δ lines, are in absorption, Brγ line is very weak, with the rest of the Brackett series not observed. 
For Gaia21elv the Paβ and Paδ lines are indeed seen in absorption, however, the other two Paschen lines are not detected. It was not possible to detect the Paα line due to the poor atmospheric transmission at its wavelength (1.87 μm). The Brγ line shows a weak absorption, while the rest of the Brackett series is not detected, similarly to what was expected for FUors. - FUors show very few, if any, emission lines, and even those are typically the emission components of P Cygni profiles. Gaia21elv shows a few P Cygni profiles in Hα, Hβ, and the Ca II triplet, and in addition to those, there are forbidden lines of [Oi], [Feii], and [Sii] in emission. The absorption lines and P Cygni profiles typically detected in the spectra of FUors are related to the disc, while the forbidden emission lines trace a jet or disk wind. Forbidden emission lines are not always detected in the spectra of known FUors, but were identified for a few examples, including the classical FUors V2494 Cyg ( and references therein) and V1057 Cyg <cit.>, therefore, their detection does not rule out a classification as a bona fide FUor. - FUors show weak absorption lines of Nai (2.208 μm) and Cai (2.256 μm) <cit.>. As shown in Fig. <ref>, these lines are detected in the spectra of Gaia21elv at both epochs. - Another spectroscopic signature of FUors is the Hei line at 1.083 μm, which is also present in the spectrum of Gaia21elv (Fig. <ref>). The Hei line detected toward Gaia21elv is double-peaked, where the higher intensity component is largely blueshifted, detected at a velocity of around -400 km s^-1, and the lower intensity component is seen at a velocity of around +25 km s^-1 (Fig. <ref>). Most bona fide FUors show blueshifted absorption lines, with a mean velocity of -350 km s^-1 (see Fig. 4. in ). Another characteristics of FUors is that their spectral type is wavelength-dependent <cit.>. To check whether this applies to Gaia21elv, we used the VLT/X-SHOOTER spectrum, and compared it to the synthetic stellar spectra calculated by <cit.> in the 300 nm to 1.8 μm wavelength range. These stellar templates are given for effective temperatures in the range between 3500 K and 6000 K in steps of 250 K. We compared the VLT/X-SHOOTER spectrum to these stellar templates at optical and at NIR wavelengths, separately. At optical wavelengths, the best match was found with the stellar template corresponding to an effective temperature of 5500±250 K, while at NIR wavelengths, the best fit corresponds to an effective temperature of 3750±250 K. This is consistent with the expectation for FUors, that the stellar type is wavelength-dependent. Based on the above criteria from <cit.> as well as its wavelength-dependent spectral type, we conclude that Gaia21elv can be classified as a bona fide FUor. This classification is consistent with the high accretion luminosity of the source implied by our accretion disc modelling. §.§ On the recent fading of Gaia21elv Until now, no bona fide FUor is known to have completely ended its outburst. This is why it is important to monitor their brightness variations, and study their fading episodes. A temporary fading of V346 Nor was reported by <cit.> and <cit.>, which was due to a decrease in the accretion rate, however, after the fading, the star brightened again to nearly reach its outburst brightness. Another eruptive young star, V899 Mon, which shows properties of both FUors and EXors, faded to quiescence for a little less than a year <cit.>. However, this quiescent phase was followed by another outburst. 
In addition to their fading being temporary, neither V346 Nor, nor V899 Mon is a bona fide FUor. The long-term fading of a classical FUor, V1515 Cyg was recently reported by <cit.>: its fading started around 2006 and is approximately consistent with an exponential decay with an e-folding time of 12 years. Another classical FUor, V733 Cep also shows long-term fading (Park et al., in prep.), which was found to be the result of a decrease in the accretion rate. Brightness variations of young stars are only partly related to changes in the accretion rate <cit.>. The other main process is variable circumstellar extinction. To probe whether the fading of Gaia21elv was the result of a decrease of the accretion rate, we estimated the accretion rate by fitting the SEDs with an accretion disc model in Sec. <ref>. The accretion rates derived for Gaia21elv are typical of FUors ( and references therein). The accretion rate between the brightest and faintest states decreased by ∼36%. However, according to the accretion disc models fitted to the SEDs, the decreasing accretion rate was combined with increasing circumstellar extinction, especially between 2020 and 2022. It is most likely, that the increased circumstellar extinction dominated the rapid fading of the source that triggered the Gaia Alerts system in 2021. After the Gaia alert, the brightness of the source also started a slow increase, though it is still almost a magnitude fainter than in early 2020, before the start of this fading episode. The decrease found between the accretion rates at the brightest and faintest states indicates an e-folding time of about 25 years. Based on our results, the fading of Gaia21elv found by the Gaia alert is likely a temporary event. Future photometric and spectroscopic monitoring of the source is important to provide more information on the evolution of its outburst. § SUMMARY We analysed the photometry and spectroscopy of a young star exhibiting a long-term outburst and a recent fading alerted by the Gaia Science Alerts system. Optical and NIR spectra confirm that Gaia21elv is a bona fide FUor. This is the third FUor which was found based on the Gaia alerts. In addition to the classical FUor signatures, forbidden emission lines were detected, which are typically tracing a jet or disk winds. Fitting the SEDs at the maximum brightness and and its faint state using an accretion disc model suggests a decrease in the accretion rate. However, fitting the SED at an epoch close to the onset of the quick fading in late 2020-2021 indicates that this episode was mostly caused by an increase of circumstellar extinction. In the future, a photometric and spectroscopic monitoring of Gaia21elv is important to characterize its behavior after its fading episode. § ACKNOWLEDGEMENTS We thank the referee for comments which helped to improve our paper. This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 716155 (SACCRED). We acknowledge support from the ESA PRODEX contract nr. 4000132054. G.M. and Z.N. were supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. G.M. has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 101004141. Zs.M.Sz. acknowledges funding from a St Leonards scholarship from the University of St Andrews. 
For the purpose of open access, the author has applied a Creative Commons Attribution (CC BY) licence to any Author Accepted Manuscript version arising. E.F. and T.G. acknowledge financial support from the project PRIN-INAF 2019 "Spectroscopically Tracing the Disk Dispersal Evolution (STRADE)". We acknowledge ESA Gaia, DPAC and the Photometric Science Alerts Team (http://gsaweb.ast.cam.ac.uk/alerts). This work used the Immersion Grating Infrared Spectrometer (IGRINS) that was developed under a collaboration between the University of Texas at Austin and the Korea Astronomy and Space Science Institute (KASI) with the financial support of the Mt. Cuba Astronomical Foundation, of the US National Science Foundation under grants AST-1229522 and AST-1702267, of the McDonald Observatory of the University of Texas at Austin, of the Korean GMT Project of KASI, and Gemini Observatory. This work was supported by K-GMT Science Program (PID: GS-2020B-Q-218) of Korea Astronomy and Space Science Institute (KASI). Based on observations collected at the European Southern Observatory under ESO programme 108.23M6. This work has made use of data from the Asteroid Terrestrial-impact Last Alert System (ATLAS) project. The Asteroid Terrestrial-impact Last Alert System (ATLAS) project is primarily funded to search for near earth asteroids through NASA grants NN12AR55G, 80NSSC18K0284, and 80NSSC18K1575; byproducts of the NEO search include images and catalogs from the survey area. This work was partially funded by Kepler/K2 grant J1944/80NSSC19K0112 and HST GO-15889, and STFC grants ST/T000198/1 and ST/S006109/1. The ATLAS science products have been made possible through the contributions of the University of Hawaii Institute for Astronomy, the Queen’s University Belfast, the Space Telescope Science Institute, the South African Astronomical Observatory, and The Millennium Institute of Astrophysics (MAS), Chile. This project used data obtained via BHTOM (https://bhtom.space), which has received funding from the European Union's Horizon 2020 research and innovation program under grant agreements No. 101004719. § DATA AVAILABILITY The data underlying this article will be shared on reasonable request to the corresponding author. mnras § PHOTOMETRY
http://arxiv.org/abs/2307.01119v1
20230703154838
Spectral Theory of Non-Markovian Dissipative Phase Transitions
[ "Baptiste Debecker", "John Martin", "François Damanet" ]
quant-ph
[ "quant-ph", "cond-mat.other" ]
Institut de Physique Nucléaire, Atomique et de Spectroscopie, CESAM, University of Liège, B-4000 Liège, Belgium To date, dissipative phase transitions (DPTs) have mostly been studied for quantum systems coupled to idealized Markovian (memoryless) environments, where the closing of the Liouvillian gap constitutes a hallmark. Here, we extend the spectral theory of DPTs to arbitrary non-Markovian systems and present a general and systematic method to extract their signatures, which is fundamental for the understanding of realistic materials and experiments such as in the solid-state, cold atoms, cavity or circuit QED. We first illustrate our theory to show how memory effects can be used as a resource to control phase boundaries in a model exhibiting a first-order DPT, and then demonstrate the power of the method by capturing all features of a challenging second-order DPT in a two-mode Dicke model for which previous attempts had fail up to now. Spectral Theory of Non-Markovian Dissipative Phase Transitions Baptiste Debecker, John Martin, François Damanet July 3, 2023 ============================================================== Introduction. Finding new ways to control phase transitions in quantum systems to access different properties is at the forefront of research for developing new materials and technologies. In this context, driven-dissipative mechanisms obtained via the coupling of systems to engineered environments and fields offer opportunities to generate matter phases otherwise inaccessible <cit.>. However, so far, dissipative phase transitions (DPTs) have mostly been studied for systems coupled to memoryless reservoirs <cit.>. Yet, most realistic systems are coupled to reservoirs with a spectral structure <cit.>, giving the latter a memory of past system-bath exchanges, which considerably complicates their dynamics. Such non-Markovian effects are crucial to be understood, not least because they can be used as a resource to generate useful phenomena, such as non-Markovian-assisted steady state entanglement <cit.>, quantum transport <cit.>, spin squeezing <cit.>, chaotic behaviors <cit.> or new dynamical phases <cit.>. Moreover, from a computational perspective, it is sometimes desirable to derive reduced descriptions of a large Markovian open quantum system in order to deal with a smaller Hilbert space, which usually implies dealing with non-Markovian effects <cit.>. Here, we extend the spectral theory of DPTs to arbitrary non-Markovian systems and present a general method to characterize their signatures, opening possibilities for exploring DPTs in a wider range of systems. Our approach is based on the Hierarchical Equations of Motion (HEOM) <cit.>, a numerical method for non-Markovian dynamics extensively used in quantum physics and chemistry, from which one can define a generalization of the Liouvillian usually associated with the Lindblad master equation for Markovian systems whose spectral properties are connected to DPTs. Indeed, one of the necessary conditions for DPTs is the closing of the Liouvillian gap <cit.>. To the best of our knowledge, we are the first to show that HEOM can be used to define a similar quantity for non-Markovian systems and derive a spectral theory of non-Markovian DPTs. 
Non-Markovian effects in DPTs have been studied via other techniques, such as Green functions to study the impact of the environment spectral density on the critical exponent <cit.>, Lindblad master equations with time-dependent rates to characterize the dynamics of a probe coupled to a non-Markovian environment <cit.>, or time-evolving matrix product operators (TEMPO) to localize DPT in the spin-boson model <cit.>. However, such studies are sparse and mostly focused on the paradigmatic spin-boson model <cit.>. As our approach is the natural extension of the powerful spectral theory machinery widely used for Markovian systems, it provides an ideal framework to complement previous studies and explore non-Markovian effects in new regimes and systems relevant for real materials and experiments. Below, we first present the central element of this work: the generalization of the Liouvillian for non-Markovian systems. We then derive its properties and their connections with DPTs and symmetries. As a first example, we study a generalized Lipkin-Meshkov-Glick model <cit.> and show that deviations from a Markovian reservoir lead to a shift of the phase transition boundary. Finally, we show our framework can capture all the features of a DPT in a challenging two-mode Dicke model <cit.> for which all previous non-Markovian descriptions had fail so far. Theoretical framework. We consider an arbitrary quantum system S linearly coupled to a bosonic environment E of harmonic oscillators at zero temperature, keeping in mind that the theory below is easily generalizable to multiple bosonic or fermionic baths at finite temperatures. The total Hamiltonian reads (we set ħ = 1) H = H_S + ∑_k ω_k a_k^† a_k_≡ H_E + ∑_k (g_ka_k L_k^† + g_k^* a_k^† L_k)_≡ H_int, where H_S is the system Hamiltonian, H_E is the environment Hamiltonian with a_k (a_k^†) the annihilation (creation) operator for the k-th mode of frequency ω_k, and H_int is the interaction Hamiltonian with L_k being arbitrary system operators and g_k being the system-bath coupling strengths. The effect of the environment on the system is encoded in the spectral density (SD) J(ω) = π∑_k |g_k|^2 δ(ω-ω_k) or equivalently in the bath correlation function (CF) α(τ) = ∑_k |g_k|^2 e^-i ω_k τ which are related to each other in the continuum limit via the relation α(τ) = (1/π)∫_0^∞ J(ω) e^iωτdω. The SD is a positive function whose specific structure depends on the details of the model, so as the CF. In what follows, we assume it is a sum of M decaying exponentials α(τ) = ∑_j=1^M G_j e^-iω_j τ - κ_j |τ| , with κ_j, ω_j, G_j ∈ℝ. This decomposition can be performed in a wide range of applications, either exactly or with great precision <cit.>. This amounts to decomposing the non-Markovian structured environment E into a set of M modes of frequencies {ω_j} which are damped with rates {κ_j} due to their coupling to independent Markovian baths, as illustrated in Fig. <ref>. This so-called pseudo-mode picture <cit.> can be applied to a wide range of systems, from atoms in a lossy cavity, to superconducting qubits coupled to leaky resonators <cit.>, electrons coupled to damped phonons <cit.>, or emitters in plasmonic cavities <cit.>. 
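As a quick numerical illustration of the decomposition above, the sketch below evaluates the correlation function generated by a single Lorentzian spectral density and checks that its envelope decays as G e^{-κ|τ|}, i.e., that one Lorentzian corresponds to one pseudo-mode. The Lorentzian form and the parameter values are assumptions chosen for illustration, with ω_0 much larger than κ so that cutting the integral at ω = 0 is a small effect.

import numpy as np

# Illustrative pseudo-mode parameters (assumed): weight G, frequency w0, width kappa.
G_w, w0, kappa = 1.0, 5.0, 0.2

def J(w):
    # Lorentzian spectral density peaked at w0.
    return G_w * kappa / ((w - w0) ** 2 + kappa ** 2)

def alpha(tau, wmax=200.0, n=200_000):
    # Bath correlation function alpha(tau) = (1/pi) * int_0^inf J(w) exp(i w tau) dw.
    w = np.linspace(0.0, wmax, n)
    dw = w[1] - w[0]
    return np.sum(J(w) * np.exp(1j * w * tau)) * dw / np.pi

for tau in [0.0, 2.0, 4.0, 6.0, 8.0, 10.0]:
    a = alpha(tau)
    print(f"tau={tau:4.1f}  |alpha|={abs(a):.4f}   G*exp(-kappa*tau)={G_w * np.exp(-kappa * tau):.4f}")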
The form (<ref>) of the CF with the assumption that the global system is initially in the product state ρ(0) = ρ_S(0) ⊗ρ_B(0) allows us to describe the complete dynamics of our model via a numerically exact method called the hierarchical equations of motion (HEOM) which takes the form <cit.> dρ^(n⃗, m⃗)/dt = -i[H_S, ρ^(n⃗, m⃗)] - (w⃗^*n⃗ + w⃗m⃗) ρ^(n⃗, m⃗) + ∑_j = 1^M { G_j ( n_j L_j ρ^(n⃗-e⃗_⃗j⃗, m⃗) + m_j ρ^(n⃗, m⃗-e⃗_⃗j⃗)L_j^†). + .[ρ^(n⃗+e⃗_⃗j⃗, m⃗), L_j^†] + [L_j, ρ^(n⃗, m⃗+e⃗_⃗j⃗)]}, where n⃗ = (n_j) and m⃗= (m_j) are multi-indices in ℕ^M, w⃗ = (κ_j + i ω_j) ∈ℂ^M, e⃗_⃗j⃗ = (δ_jj') unit vectors, and a⃗b⃗ = ∑_j a_j^* b_j the standard scalar product in ℂ^M. In Eq. (<ref>), ρ^(0⃗, 0⃗)≡ρ_S corresponds to the physical density operator of the system S with which all the system correlations are computed, while ρ^(n⃗, m⃗) for (n⃗, m⃗) ≠ (0⃗,0⃗), which are also operators acting on the system Hilbert space ℋ_S, correspond to auxiliary states from which bath correlations can be obtained <cit.>. Although the hierarchy is formally infinite, it can be truncated in practice at large hierarchy depth indices n⃗ and m⃗. In general, the stronger the non-Markovianity, the larger the number of auxiliary states we need to retain to obtain convergence of the results. Here, we choose a triangular truncation condition such that ρ^(n⃗, m⃗)(t) = 0 ∀ n⃗,m⃗: ∑_j (n_j + m_j) > k_max, where k_max is the truncation order, yielding a total of K = (2M+k_max)!/((2M)! k_max!) auxiliary states <cit.>. An effective non-Markovian Liouvillian matrix can be derived by exploiting the Choi-Jamiołkowski isomorphism between linear maps and states <cit.>. Vectorizing (<ref>) with |i⟩⟨j|≅|i⟩⊗|j⟩, we get dρ^(n⃗, m⃗)/dt = -i[H_S⊗1 - 1⊗ H_S^T - (w⃗n⃗ + w⃗^*m⃗)] ρ^(n⃗, m⃗) + ∑_j = 1^M G_j ( n_j L_j⊗1ρ^(n⃗ -e⃗_⃗j⃗,m⃗) + m_j 1⊗ L_j^* ρ^(n⃗, m⃗-e⃗_⃗j⃗)) + ∑_j = 1^M[(1⊗ L_j^* - L_j^†⊗1) ρ^(n⃗+e⃗_⃗j⃗, m⃗). - .(1⊗ L_j^* - L_j^†⊗1)^†ρ^(n⃗, m⃗+e⃗_j)], where ρ^(n⃗, m⃗) denotes the vectorization of the matrices ρ^(n⃗, m⃗) and 1 the identity matrix acting on ℋ_S. We also used the notation L_j^* (L_j^T) for the conjugate (transpose) matrix of L_j. By stacking in a vector ρ all the vectorized matrices ρ^(n⃗, m⃗), we can construct a matrix ℒ_HEOM(k_max), called HEOM's Liouvillian, such that the system of linear equations (<ref>) takes the form (see Supplemental Material (SM) for an example of explicit constructions of ℒ_HEOM) dρ/dt = ℒ_HEOM(k_max) ρ. ℒ_HEOM(k_max) is the generator of the non-Markovian dynamics of the system which generalises Lindblad's Markovian Liouvillian. Equation (<ref>) becomes exact for a CF of the form (<ref>) in the limit k_max→ +∞. Alternatively, to obtain the dynamics of the system, we could enlarge it by including explicit bosonic degrees of freedom for the pseudo-modes and considering standard Lindblad damping channels for them, as illustrated in Fig. 1(b). This would define a standard Markovian Liouvillian ℒ_M for the global system S_M. However, as explained in the SM, using ℒ_HEOM is computationally more favorable than ℒ_M, especially for large M. Properties of the HEOM's Liouvillian. The superoperator ℒ_HEOM is linear and in general non-Hermitian. We assume it is diagonalizable and denote its eigenvectors and eigenvalues by ρ_i and λ_i. For a truncation order k_max, its dimension is D = (2M+k_max)!dim(ℋ_S)^2/[(2M)! k_max!]. 
It admits the following properties (see proofs in the SM): (i) its spectrum is symmetric with respect to the real axis; (ii) it preserves the trace of the physical state ρ^(0⃗,0⃗); (iii) the eigenvalue 0 is always in its spectrum, guaranteeing the existence of a stationary state; (iv) all the eigenvalues must have a negative real part in the limit k_max→ +∞; (v) Tr[1_(0⃗, 0⃗)ρ_i] = 0 if ρ_i is a right eigenoperator of ℒ_HEOM associated with the eigenvalue λ_i with Re[λ_i] ≠ 0. As in <cit.>, we order the eigenvalues of ℒ_HEOM so that |Re[λ_0]| < |Re[λ_1]| < … < |Re[λ_D]|, where λ_0=0. DPT and HEOM's Liouvillian spectrum. Consider an open system dynamics described by Eq. (<ref>) that admits a valid thermodynamic limit N →∞ and a unique steady state ρ_ss for all finite N. We say that the system undergoes a phase transition of order M when a non-analytical change in a g-independent system observable O occurs when the parameter g tends to a critical value g_c in the limit N →∞, i.e., <cit.> lim_g→ g_c|∂^M/∂ g^Mlim_N→ +∞⟨ O ⟩_ss| = +∞, where ⟨ O ⟩_ss = Tr[Oρ_ss^(0⃗,0⃗)]. This definition of DPTs is the same as for Markovian systems. The only difference is that the steady state is obtained from the HEOM (<ref>) instead of a Lindblad master equation. As for the Markovian case, a non-analytical change as described by (<ref>) must occur due to a level crossing in the spectrum of ℒ_HEOM, and thus to the closing of the HEOM's Liouvillian gap λ≡ |Re[λ_1]|. Symmetries and DPTs. We call a weak symmetry of ℒ_HEOM any unitary operator 𝒰 such that [ℒ_HEOM, 𝒰] =0. If 𝒰 is a symmetry, then the matrix representing ℒ_HEOM in the eigenvector basis of 𝒰 is block-diagonal, ℒ_HEOM = diag(ℒ_u_0, …, ℒ_u_n), where each block ℒ_u_k is associated with one of the n+1 distinct eigenvalues u_k of 𝒰. We define the symmetry sector L_u as the subspace spanned by the eigenvectors of 𝒰 associated with the eigenvalue u. Without any symmetry, all the ρ^(n⃗, m⃗) states are coupled together by (<ref>). In the presence of a symmetry 𝒰, the Liouvillian is partitioned into uncoupled blocks and independent hierarchies for sets of components of the physical state ρ^(0⃗, 0⃗) can be written. We can prove that if the steady-state ρ_ss of (<ref>) is unique, then ρ_ss∈ L_u=1 <cit.>. Again, in close analogy with the Markovian case <cit.>, a DPT associated with a spontaneous symmetry breaking (SSB) is characterized by the occurrence of several eigenvectors, belonging to different symmetry sectors, associated with the same eigenvalue λ = 0 for g ≥ g_c, where g is the control parameter driving the DPT and g_c its critical value. To be specific, if we assume that ℒ_HEOM can be written as a direct sum of n+1 blocks as in (<ref>) and if the eigenvalues are sorted in each symmetry block k as |Re[λ_0^(k)]| < |Re[λ_1^(k)]| < … < |Re[λ_l(k)^(k)]| (with ∑_k l(k) = D), an SSB in the thermodynamic limit is signaled by λ_0^(1), λ_0^(2), …,λ_0^(n)→λ_0^(0) = 0 for g≥ g_c when N → +∞ <cit.>. Physically, this means that the independent hierarchies associated with each block mix in the limit N→ +∞ so that ρ_ss is no longer an eigenvector of 𝒰. Instead, ρ_ss becomes a statistical mixture of eigenvectors associated with different symmetry sectors for g≥ g_c. The existence of such a symmetry greatly simplifies the numerical computation, thanks to the block structure (<ref>). 
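Properties (i), (iii) and (iv) are shared by ordinary Lindblad Liouvillians and can be checked numerically on small examples; the minimal QuTiP sketch below does so for a damped, driven qubit (an illustration only, not the HEOM construction itself), and the same checks can be applied to ℒ_HEOM(k_max) once it has been assembled as a matrix:

import numpy as np
import qutip as qt

# Minimal example: driven-dissipative qubit, L = -i[H, .] + kappa D[sigma_-]
H = 0.7 * qt.sigmaz() + 0.3 * qt.sigmax()
L = qt.liouvillian(H, [np.sqrt(0.5) * qt.sigmam()])
ev = np.linalg.eigvals(L.full())

print("eigenvalue 0 present:   ", np.min(np.abs(ev)) < 1e-10)            # property (iii)
print("no positive real parts: ", np.all(ev.real < 1e-10))               # property (iv)
conj_ok = all(np.min(np.abs(ev - np.conj(lam))) < 1e-8 for lam in ev)    # property (i)
print("spectrum symmetric about the real axis:", conj_ok)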
The remainder of this work is devoted to the analysis of DPTs in experimentally relevant models where only Markovian dissipation regimes have been studied or where standard Lindblad descriptions fail. First-order DPT. We first consider a Lipkin-Meshkov-Glick (LMG) model of the form H_LMG = V/N(S_x^2 - S_y^2) = V/(2N)(S_+^2 + S_-^2), where S_α = ∑_k=1^N σ_α^(k)/2 (α = x, y, z) are the collective spin operators defined in terms of N single-spin Pauli operators σ_α^(k) and S_± = S_x ± i S_y. When the spin system undergoes collective decay as described by Lindblad's master equation ρ̇ = -i[H_LMG, ρ] + γ/(2N)𝒟[S_-]ρ, where 𝒟[o]ρ = 2oρ o^† - {o^† o, ρ}, as would occur if coupled to an unstructured bath, the model is known to exhibit a first-order DPT at the critical point V_c^M = γ/2 <cit.>, separating a steady-state phase where ⟨ S_z⟩/(N/2) → -1 for N →∞ (V < V_c^M) from a phase where ⟨ S_z⟩/(N/2) → 0 (V > V_c^M), as can be seen in Fig.<ref>(a). For V > V_c^M, a mean-field analysis predicts an infinite number of pure steady states corresponding to stable orbits on the Bloch sphere around fixed points located at the equator, yielding persistent oscillations of ⟨ S_z⟩ that nonetheless average to zero over time [Fig.<ref>(g)] <cit.>. Here, we generalize the study of this DPT to the non-Markovian regime by considering that the damping of the collective spin originates from the coupling of the system to a structured bath with a correlation function α(τ) = G e^{-κ |τ| - iωτ}, via an interaction Hamiltonian H_int = √(G) (S_-a^† + S_+ a) with G= γκ /(2N) and a the annihilation operator of a damped pseudo-mode of Hamiltonian H_E = ω a^† a. This model allows us to study non-Markovian effects on the DPT and compare them to the Markovian case by tuning the “loss” rate κ of the pseudo-mode. Indeed, the collective spin and the pseudo-mode form an extended Markovian system governed by the master equation ρ̇_tot = -i[H, ρ_tot] + κ 𝒟[a]ρ_tot with H = H_LMG + H_E + H_int. Adiabatic elimination of the pseudo-mode's degrees of freedom recovers Eq. (<ref>) in the limit κ→∞ (see SM). When κ is finite, memory effects arise and affect the DPT as described below. The model under consideration has a ℤ_2 symmetry represented by the superoperator 𝒰_2 = U_2 ⊗ U_2^† with U_2 = e^iπ (S_z + a^† a). 𝒰_2 has two distinct eigenvalues u_k = e^ikπ=± 1 with k=0, 1, and so there are two symmetry sectors associated with the parity of the total number of excitations, with L_u_0 = 1 containing ρ_ss. The impact of memory effects on the DPT, based on the study of the non-Markovian Liouvillian ℒ_HEOM for the spin system, can be seen in Fig. <ref>. First, we see in panel (b) that the steady state spin magnetization ⟨ S_z⟩ exhibits a transition at a critical point V_c smaller than in the Markovian case shown in Fig. <ref>(a). A mean-field analysis (<ref>) provides an explanation of this observation (see SM for all details). In a nutshell, the fully polarized steady state yielding ⟨ S_z ⟩/(N/2) → -1 becomes unstable for V > V_c^+ = γ/(2√(1+ω^2/κ^2)), while the fixed points at the Bloch sphere equator become unstable only for V < V_c^- = γ/(2(1+ω^2/κ^2)). For V_c^- < V < V_c^+, a new phase emerges where both the fully polarized state at the south pole and orbits around the fixed points can be valid steady states, as can be seen in panel (i). For V ≳ V_c^-, only orbits close to the fixed points are stable. As V increases, more orbits become stable, meaning that the mean-field critical point depends on the initial conditions on the Bloch sphere. 
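The predicted shift of the critical point can be explored without implementing the HEOM by diagonalizing the Liouvillian of the equivalent Markovian embedding (collective spin plus one damped pseudo-mode) for small N. The following QuTiP sketch scans the gap for ω/κ = 1; the sizes and the bosonic cutoff are illustrative assumptions, so at such small N the gap closing is only incipient, and the cutoff must be enlarged near and above the transition:

import numpy as np
import qutip as qt

# Dissipative LMG model coupled to one damped pseudo-mode (Markovian embedding)
N, ncav = 4, 8                                  # assumed small sizes, illustration only
gamma, kappa, omega = 1.0, 1.0, 1.0             # omega/kappa = 1
j = N / 2
Sp = qt.tensor(qt.jmat(j, '+'), qt.qeye(ncav))
Sm = qt.tensor(qt.jmat(j, '-'), qt.qeye(ncav))
a = qt.tensor(qt.qeye(int(2 * j + 1)), qt.destroy(ncav))
g = np.sqrt(gamma * kappa / (2 * N))            # from alpha(tau) = G e^{-kappa|tau| - i omega tau}

for V in np.linspace(0.1, 0.6, 6):
    H = (V / (2 * N)) * (Sp ** 2 + Sm ** 2) + omega * a.dag() * a + g * (Sm * a.dag() + Sp * a)
    L = qt.liouvillian(H, [np.sqrt(kappa) * a])
    re = np.sort(np.abs(np.linalg.eigvals(L.full()).real))
    print(f"V/gamma = {V:.2f}   Liouvillian gap = {re[1]:.4f}")   # re[0] ~ 0 (steady state)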
As the HEOM's Liouvillian describes the statistical behavior of the system, it predicts the transition at the averaged mean-field critical points, which for ω/κ = 1 gives V_c/γ≈ 0.332 (see SM). From a physical point of view, the shift in the critical point can be understood as follows: the smaller κ, the greater the probability that excitations escaping from the system will be reabsorbed by the system at later times. The degree of openness of the system therefore decreases as κ decreases, which leads to a stabilisation of the phase dominated by the Hamiltonian (<ref>) for small values of V. In the limit κ→ 0 (i.e. for a closed system), the phase transition disappears because the Hamiltonian dynamics no longer competes with dissipative dynamics. In the opposite limit κ→∞, we recover the Markovian case as V_c^±,V_c → V_c^M. The HEOM's Liouvillian spectrum correctly captures all DPT signatures. Indeed, it captures the emergence of both the level-touching at the critical point in the symmetry sector k = 0, i.e., -Re[λ_0^(0)] → 0 at V = V_c as N →∞ (a necessary and sufficient condition for a first-order DPT), and the SSB associated with the DPT, i.e., -Re[λ_0^(1)] → 0 for V ≥V_c as N →∞. These features can be seen in panels (c) and (d). Note that this DPT can be studied via an approximate reduced description of the collective spin dynamics, by performing an adiabatic elimination of the pseudo-mode (see <cit.> for a similar approach for the Dicke model). However, this approach gives incorrect quantitative results for finite N, which prevents finite-size effects from being estimated correctly. Also, it cannot always account for all the features of a DPT, as discussed in the next section, which motivates the use of our framework. Second-order DPT. We now examine the case of a second-order DPT with SSB in a model for which the reduced Redfield descriptions (to order 2 and 4) of the system fail to capture the relaxation dynamics correctly: a two-mode Dicke model described by <cit.> H = ω_0 S_z + ω_A a^† a + ω_B b^† b + g/√(N)(a S_+ + b S_- + h.c.), where S_z,S_± are collective spin operators and a, b (a^†, b^†) bosonic annihilation (creation) operators, to which we add damping of the modes a and b at the same rate κ, yielding the Lindblad master equation (<ref>) ρ̇_tot = -i[H, ρ_tot] + κ (𝒟[a] + 𝒟[b])ρ_tot. This model is known to undergo a second-order DPT between a normal phase with ⟨ a⟩ = ⟨ b⟩ = 0, |⟨ S_z⟩| = N/2 and a superradiant phase with ⟨ a⟩, ⟨ b⟩ ≠ 0, |⟨ S_z⟩| < N/2 as N →∞ <cit.>. For ω_A = ω_B = ω, the critical value g_c of the coupling g that drives the transition can be calculated from a mean-field approach and satisfies 2g_c^2N = ω_0(ω^2 + κ^2)/ω. The model (<ref>) exhibits a continuous 𝕌(1) symmetry described by the superoperator 𝒰_1 = U_1 ⊗ U_1^† with U_1 = e^iα(S_z+a^† a- b^† b) (α∈ℝ), spontaneously broken in the superradiant phase as N →∞. Reduced descriptions of the collective spin dynamics have been studied and compared in <cit.> with the mean-field results summarized above. It has been shown that, unlike the previous dissipative LMG model (and the Dicke model <cit.>), a standard Redfield approach completely misses the DPT, while a fourth-order Redfield master equation (i.e., a fourth-order perturbative treatment in the interaction Hamiltonian) appears to capture the correct steady state and critical point but fails to predict the closing of the gap, a necessary condition for a DPT. 
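For orientation, the mean-field critical coupling follows directly from the relation above, and the same small-scale Markovian-embedding strategy used for the LMG example gives a first qualitative impression of the gap; the sketch below is purely illustrative (the assumed N and bosonic cutoffs are far too small for quantitative statements, and the analysis in the text relies on ℒ_HEOM and its U(1) block structure instead):

import numpy as np
import qutip as qt

# Two-mode Dicke model with both modes damped at rate kappa (assumed toy sizes)
N, ncav = 2, 4
w0, w, kappa = 1.0, 1.0, 0.5
gc = np.sqrt(w0 * (w ** 2 + kappa ** 2) / (2 * N * w))   # from 2 g_c^2 N = w0 (w^2 + kappa^2)/w
print("mean-field g_c =", gc)

j = N / 2
idc = qt.qeye(ncav)
Sz = qt.tensor(qt.jmat(j, 'z'), idc, idc)
Sp = qt.tensor(qt.jmat(j, '+'), idc, idc)
Sm = qt.tensor(qt.jmat(j, '-'), idc, idc)
a = qt.tensor(qt.qeye(int(2 * j + 1)), qt.destroy(ncav), idc)
b = qt.tensor(qt.qeye(int(2 * j + 1)), idc, qt.destroy(ncav))

for g in [0.5 * gc, 1.0 * gc, 1.5 * gc]:
    H = (w0 * Sz + w * (a.dag() * a + b.dag() * b)
         + g / np.sqrt(N) * (a * Sp + b * Sm + a.dag() * Sm + b.dag() * Sp))
    L = qt.liouvillian(H, [np.sqrt(kappa) * a, np.sqrt(kappa) * b])
    re = np.sort(np.abs(np.linalg.eigvals(L.full()).real))
    print(f"g/g_c = {g / gc:.1f}   gap = {re[1]:.4f}")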
Our numerically exact and systematic method, on the other hand, captures all features of the DPT and the SSB, as shown in Fig. <ref>, which displays the magnetization ⟨ S_z ⟩/(N/2) (a), the closing of the gap |Re[λ_0^(k >0)]| (c,d) and the imaginary part of λ_0^(k > 0) (b). Conclusion. We developed a comprehensive framework for studying DPTs in arbitrary non-Markovian systems, relevant for realistic experimental conditions. Our method is numerically exact, systematic, easily implementable (as it is built on the well-established HEOM technique available in open access libraries <cit.>), and provides a considerable computational advantage over a standard embedding technique. We first applied our method to characterize the impact of memory effects on a first-order DPT with a discrete SSB arising in a dissipative LMG model, and demonstrated that deviations from a flat environmental spectral density lead to a shift of the transition point, which could be observed, e.g., in cavity QED or trapped-ion experiments. Secondly, we have shown that our method correctly captures all the defining features of a second-order DPT arising in a challenging U(1)-symmetric Dicke model for which other previously studied reduced descriptions had so far failed <cit.>. Our work makes it possible to explore out-of-equilibrium matter phases beyond the idealized Markovian limit, featuring non-Markovianity as a resource for controlling them. This is so far an unexplored territory, as most works dealing with dissipative many-body dynamics are generally constrained to standard Lindblad dissipation, which potentially hinders the evidence of DPTs <cit.>. Our method could be further improved via hybridization with advanced numerical techniques, such as corner-space renormalization <cit.> or matrix product operators (as in <cit.>), to tackle DPTs in strongly interacting systems. Other perspectives include investigations in the non-Markovian regime of connections between DPTs and symmetry breaking <cit.>, geometric phase curvature <cit.>, dynamical phase transitions <cit.>, measurement-induced phase transitions <cit.>, or dissipation engineering of long-range order <cit.>. Computational resources were provided by the Consortium des Equipements de Calcul Intensif (CECI), funded by the Fonds de la Recherche Scientifique de Belgique (F.R.S.-FNRS) under Grant No. 2.5020.11.
http://arxiv.org/abs/2307.02574v1
20230705181630
Semi-supervised Learning from Street-View Images and OpenStreetMap for Automatic Building Height Estimation
[ "Hao Li", "Zhendong Yuan", "Gabriel Dax", "Gefei Kong", "Hongchao Fan", "Alexander Zipf", "Martin Werner" ]
cs.CV
[ "cs.CV", "cs.LG" ]
Accurate building height estimation is key to the automatic derivation of 3D city models from emerging big geospatial data, including Volunteered Geographical Information (VGI). However, an automatic solution for large-scale building height estimation based on low-cost VGI data is currently missing. The fast development of VGI data platforms, especially OpenStreetMap (OSM) and crowdsourced street-view images (SVI), offers a stimulating opportunity to fill this research gap. In this work, we propose a semi-supervised learning (SSL) method of automatically estimating building height from Mapillary SVI and OSM data to generate low-cost and open-source 3D city models in LoD1. The proposed method consists of three parts: first, we propose an SSL schema with the option of setting a different ratio of "pseudo labels" during the supervised regression; second, we extract multi-level morphometric features from OSM data (i.e., buildings and streets) for the purpose of inferring building height; last, we design a building floor estimation workflow with a pre-trained facade object detection network to generate "pseudo labels" from SVI and assign them to the corresponding OSM building footprints. In a case study, we validate the proposed SSL method in the city of Heidelberg, Germany and evaluate the model performance against reference data of building heights. Based on three different regression models, namely Random Forest (RF), Support Vector Machine (SVM), and Convolutional Neural Network (CNN), the SSL method leads to a clear performance boost in estimating building heights, with a Mean Absolute Error (MAE) of around 2.1 meters, which is competitive with state-of-the-art approaches. The preliminary result is promising and motivates our future work in scaling up the proposed method based on low-cost VGI data, with possibilities even in regions and areas with diverse data quality and availability. Data and code supporting this paper are publicly available at (<https://github.com/bobleegogogo/building_height>). § INTRODUCTION For decades, the world has been comprehensively mapped in 2D, however the vertical dimension remains underexplored despite its huge potential; this is even more critical in Global South areas due to inherent mapping inequality and diverse data availability. Mapping human settlements as a 3D representation of reality requires an accurate description of the vertical dimension besides the 2D footprints and shapes <cit.>. Such a 3D representation of human settlements is of significant importance in many aspects, for instance quiet and shadow routing <cit.>, environmental exposure modeling <cit.>, architecture and city planning <cit.>, and population capacity estimation <cit.>. However, it remains challenging to derive a low-cost and open-source 3D representation of buildings at scale. In this paper, with "low-cost", we mainly refer to the cost of data acquisition in 3D building modeling. Given existing methods of photogrammetry and remote sensing, 3D city reconstruction is still a high-cost and time-consuming task, which mostly requires extensive expert knowledge and a large amount of geospatial data (e.g., cadastral data, airborne photogrammetry data). 
This makes it difficult for ordinary stakeholders and city governments with limited funding to establish 3D city modeling systems for their needs. Fortunately, the increasing availability of Volunteered Geographic Information (VGI) together with crowdsourcing technology <cit.> has provided a low-cost and scalable solution for mapping our world, even in a 3D representation. OpenStreetMap (OSM), as the most successful VGI project, has been considered a valuable global data source for creating large-scale 3D city models <cit.>. For instance, in <cit.>, a joint processing method of OSM and multi-sensor remote sensing data (e.g., TanDEM-X and Sentinel-2) was developed to generate large-scale 3D urban reconstructions; Milojevic-Dupont et al. <cit.> demonstrated the capability of accurate building height prediction purely based on morphometric features (or urban forms) extracted from OSM data (e.g., building and street geometry). Moreover, several recent works in <cit.> and <cit.> highlight the huge potential of low-cost street-view images (SVI) in increasing the efficiency of large-scale 3D city modeling. The idea is intuitive, as SVI provides a low-cost and close-range observation of urban buildings and therefore contains key information needed for 3D reconstruction, such as facade elements, shapes, and building heights. Given the fast development of geospatial machine learning and artificial intelligence (GeoAI) <cit.>, automatic interpretations of SVI have become more efficient than ever before. Hence, a geospatial ML method that can integrate building height information derived from SVI with existing 2D building footprints from OSM presents a promising solution for creating large-scale and open-source 3D city models. In this paper, we propose a semi-supervised learning (SSL) method (as shown in Figure <ref>) to accurately estimate building height based on open-source SVI and OSM data. As a case study, we implement the proposed method by training three different machine learning (ML) models, namely Random Forests (RF), Support Vector Machine (SVM), and Convolutional Neural Network (CNN), in the city of Heidelberg, Germany. Specifically, we first extract multi-level urban morphometric features from existing OSM data (i.e., buildings, streets, street blocks) as a feature space for the regression of building height; then we collect SVI with metadata via the Mapillary platform (<https://www.mapillary.com>) and design a building floor estimation workflow with a pre-trained facade object detection network to generate "pseudo labels" for the SSL of building height estimation models. As a result, we create open-source LoD1 3D city models for selected areas in Heidelberg using the low-cost SVI data and OSM 2D building footprints. § RELATED WORK §.§ Building Height Estimation Existing methods of building height estimation generally rely on Light Detection and Ranging (LiDAR) <cit.>, Synthetic Aperture Radar (SAR) <cit.>, and high-resolution remote sensing image data <cit.>. Among these data sources, LiDAR data provides highly accurate building height information but is difficult to use for large-scale estimation, considering its collection cost. For SAR, the estimation result is often affected by the mixture of different microwave scattering mechanisms and thus has high uncertainty <cit.>. To avoid these problems, many researchers also investigate remote sensing image data. 
For these methods, considering that remote sensing image data does not directly contain 3D information, existing works select stereo/multi-view images as the data source to achieve the estimation of building height <cit.>. However, although SAR and remote sensing image data have a relatively lower collection cost than LiDAR data, the complex processing of these data sources leads to high time and labor costs. Compared with these three data sources, SVI data and 2D building footprint data are easier and cheaper to collect and process, especially with the support of VGI (e.g., Mapillary and OpenStreetMap). There have been some early efforts to estimate building height based on these new data sources. Biljecki et al. <cit.>, Milojevic-Dupont et al. <cit.>, and Bernard et al. <cit.> proposed several methods based on RF or other ML approaches to analyze the relationship between building heights and their features (such as building area and type), and thus achieve the estimation of building height from 2D footprint data. Yan and Huang <cit.> proposed a deep learning-based method to estimate building height from SVI. Zhao et al. <cit.> combined 2D building footprints and SVI to estimate building heights, which also used deep learning technology. These methods achieved good performance but require a large amount of training data, which limits their generalization and practicality. Currently, there is little work on how to accurately estimate building height from 2D building footprints and SVI with only limited training data. §.§ VGI and 3D Building Models CityGML is a well-known international standard for 3D building modeling. In CityGML 2.0, 3D building models are divided into five levels of detail (LoDs). In LoD0, only the 2D footprint information is involved in the model. In LoD1, the LoD0 footprints are extruded by their building heights, and the resulting cuboids form the LoD1 model. In LoD2, 3D roof structure information is added to the model. The LoD3 model further contains facade element information, such as windows and doors. The LoD4 model is more complicated and contains both external and internal building elements. To meet the requirements of the abovementioned CityGML standard, many cities like New York, Singapore, and Berlin have created and freely released 3D city models with different LoDs in the past years. However, most of these 3D city building models are constructed in LoD1 or LoD2 for urban areas, while large-scale and fine-grained (LoD3 and LoD4) models with semantic information are hardly available, especially for cities with limited funding for establishing their own 3D city modeling systems. Hence, the main motivation of this work is to provide a low-cost and open-source solution for creating large-scale 3D city models (starting with LoD1). Early work in <cit.> highlighted that OSM, as a crowdsourced VGI data source, can be combined with international standards of the Open Geospatial Consortium (OGC) to effectively create CityGML models in LoD1 and LoD2. Recently, Zhang et al. <cit.> proposed a web-based interactive system, namely VGI3D, as a collaborative platform to collect 3D building models with fine-grained semantic information in a crowdsourcing approach. In this work, we aim to further investigate the potential of low-cost VGI data sources, especially OSM data and crowdsourced SVI, in generating LoD1 3D city models via automatic building height estimation with only limited training data. 
§ METHODOLOGY The proposed method of automatic building height estimation mainly consists of three parts: (1) an SSL schema for height regression, (2) OSM morphometric feature extraction, and (3) building floor estimation based on the SVI. Figure <ref> shows the methodological workflow of automatically generating an open-source 3D city model (i.e., an LoD1 city model) via the proposed SSL method. In the rest of this section, we elaborate on the details of this design. §.§ Semi-supervised Learning Schema In traditional supervised learning, one relies on labelled data to build the prediction model. However, such a labelling process is mostly time-consuming, labour-intensive, and difficult to scale up. Therefore, the capability of learning from unlabeled data is a desirable feature to overcome this challenge. In this context, semi-supervised learning (SSL) is a promising technique to accommodate the lack of labeled data by allowing the model to integrate unlabeled data during supervised model training <cit.>. Note that the SSL used here is different from self-supervised learning, which does not rely on any ground truth labels during the training process. A common way of implementing SSL is to generate "pseudo labels" from the data itself or even from auxiliary data <cit.>, which can then be merged with existing labelled data to boost model performance. Following this concept, we design an SSL schema with the option of defining different ratios of "pseudo labels" during the supervised regression of building height. The proposed SSL schema is tasked with estimating building heights (h) based on a list of morphometric features x = ⟨ x_1, …, x_m ⟩ extracted from diverse scales of OSM data (e.g., individual building footprint, street network, street block, etc.), where m refers to the total number of features. In this context, the task of building height estimation can be formulated as a multifactor regression task in the following mathematical form: h_Θ(x) = ∑_i=0^mΘ_i x_i, where Θ = ⟨Θ_0, Θ_1, …, Θ_m ⟩ are the corresponding regression coefficients. More importantly, the regression target value of building heights h comes from the following two parts: h = (1-a)*h_Raw + a*h_SSL, where a is the ratio of "pseudo labels" (h_SSL) obtained from automatic facade parsing of Mapillary SVI. We will elaborate on this later in Section <ref>; for now it is sufficient to understand that besides available training labels (i.e., known building heights), the model can also benefit from SSL labels which are extracted from large-scale and open-source SVI in an automatic and unsupervised manner. To build the model for accurate building height estimation, we train a classic supervised regression model by searching for the optimal regression coefficients (e.g., via gradient descent) that minimize a Mean Absolute Error (MAE) loss of the following form: ℒ_MAE = 1/N∑_i=1^N∥ĥ_i - h_i ∥, Θ^* = argmin_Θℒ_MAE, where ℒ_MAE and Θ^* refer to the loss function and the optimal coefficient set, respectively, and N is the number of training samples (h_SSL and h_Raw). The design of SSL is concise and model-independent, which means that, as long as we can keep feeding the ML models with "pseudo labels" (h_SSL) of building height extracted from SVI, the regression task can be tackled with diverse ML models. In this paper, we demonstrate the capability of three ML models (i.e., RF, SVM, and CNN) in estimating building height in a typical western European city, namely the city of Heidelberg, Germany. 
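As a minimal illustration of this schema (with synthetic placeholder features and labels, since the Heidelberg data are not reproduced here), the following Python sketch merges a small set of reference heights h_Raw with noisier SVI-derived "pseudo labels" h_SSL and fits one of the three regressors used in the paper:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 129))                    # placeholder OSM morphometric features
h_true = 9.0 + 3.0 * np.abs(X[:, 0])                # placeholder "true" building heights

idx_raw = rng.choice(1000, 308, replace=False)      # buildings with reference heights (h_Raw)
idx_ssl = rng.choice(1000, 308, replace=False)      # buildings with SVI "pseudo labels" (h_SSL)
h_ssl = h_true[idx_ssl] + rng.normal(0.0, 1.5, 308) # pseudo labels are noisier than h_Raw

# SSL scenario: merge reference labels and pseudo labels into one training set
X_train = np.vstack([X[idx_raw], X[idx_ssl]])
y_train = np.concatenate([h_true[idx_raw], h_ssl])
model = RandomForestRegressor(n_estimators=1000, random_state=0).fit(X_train, y_train)

idx_val = rng.choice(1000, 200, replace=False)      # held-out validation buildings
print("MAE [m]:", mean_absolute_error(h_true[idx_val], model.predict(X[idx_val])))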
§.§ OSM Morphometric Feature Extraction Intensive existing works have confirmed the excellent capability of multi-level morphological features (or urban-form features) in predicting key attributes (e.g., height, function, energy consumption, etc.) of buildings and streets from an urban analytic perspective <cit.>. To infer building height, we implement a range of morphometric features extracted from OSM at three different levels, namely building-level, street-level, and street block-level, as shown in Table <ref>. In total, we calculate 129 morphometric features based on OSM data (i.e., individual building footprints and street networks) to construct their spatial and geometric relationships (e.g., spatial vicinity and compactness of street-blocks). More specifically, we elaborate on the details of OSM morphometric features (in three distinct levels) as follows: Building-level: Considering the hidden information from the building footprint itself, we calculate 9 features such as footprint area, perimeter, circular compactness, convexity, orientation and length of wall shared with other buildings. The intuition herein is that such building-level features can provide explicit and implicit information about the footprint shape (e.g., compactness and complexity), which contributes to estimating building heights. For instance, it was reported that a higher building generally consists of a large net internal area, and vice versa <cit.>. In addition, since buildings are mapped differently in OSM (e.g., one building in several polygons or several buildings in one polygon), we simplify this data quality issue by considering each polygon as a single building, while future work is definitely needed in investigating the impact of how individual buildings are presented in OSM. Street-level: Besides morphometric features of the building footprint itself, the street network surrounding a building can be informative in estimating building height. For instance, a high density (or compactness) of streets can imply more high-story buildings in order to accommodate a potentially higher number of residents. Therefore, we calculate 9 features based on the spatial relationship of buildings and their closest streets and road intersections, such as length, average width, distance to the building, local closeness, betweenness and centrality, etc. Street block-level: Furthermore, we generate morphological tessellations based on the OSM street network. This tessellation representation and its interaction with roads and buildings were included in the design of the feature space (8 features). The motivation is straightforward, as a preliminary assumption is that buildings in the same block are more likely to be of a similar height. Moreover, to capture the spatial auto-correlation in the OSM data, we extend these three levels of OSM morphometric features by considering their second-order features (e.g., total, average, and standard deviation) in the neighbourhood (i.e., within 20, 50, and 500 meters buffers). As for the implementation, we rely on the open-source Python software toolkit called momepy v.0.5.1 to calculate these features. For a complete list of OSM morphometric features, please refer to the GitHub repository (<https://github.com/bobleegogogo/building_height>). §.§ Building Floor Estimation from Street-Level Images Inspired by the work of automatic facade parsing in <cit.>, we develop a building floor estimation workflow based on automatic facade parsing and urban architecture rules. 
In short, we aim to estimate the building floor number or height (by multiplying the floor number by an average floor height) as the "pseudo label" to guide ML regression models with the aforementioned OSM morphometric features as covariates. Figure <ref> illustrates the developed method of building floor estimation based on SVI. To explain the developed method in more detail, we elaborate on three main steps as follows: SVI and OSM building alignment: As the first step, we download existing SVI from Mapillary via their open-source image API, where each SVI record consists of geotagged coordinates of the camera during a trip sequence and additional metadata information (Table <ref>), especially the compass angle of the camera direction (i.e., 0 to 360 degrees). This compass angle, together with the geotagged coordinates of the camera, is key for aligning SVI with an individual OSM building. To this end, we apply a simple ray-tracing method to determine their relationship and assign the selected Mapillary SVI to its corresponding OSM building footprint (see Figure <ref>). Currently, we manually select Mapillary images which cover the complete facade of a building without being blocked by vegetation or cars, while future work is needed to automate this selection process. A possible solution is to apply semantic segmentation approaches and ensure the skyline and ground are both visible within a single SVI. Facade object detection: There are two common approaches to measuring building heights: either estimating absolute metrics (e.g., meters) or counting the floor number. For accurately inferring the floor number, key facade features (e.g., windows, balconies, and doors) and their layout play a key role <cit.>. Herein, we aim to detect these key features from street-level Mapillary imagery via the facade parsing technique. To this end, we follow the deep learning method developed in <cit.> for automatic facade parsing from the SVI data. Specifically, we use a pre-trained one-stage object detection network, namely YOLO v3 <cit.> (with the Darknet53 backbone), for the purpose of fast and accurate facade object detection. Herein, the facade object detector has been pre-trained on a facade semantic dataset called FaçadeWHU <cit.>, and can thus be directly applied to detect key facade features (e.g., window, balcony, and door) from the Mapillary SVI collected in Heidelberg without further training. As a result, the detected facade features are saved as a list of objects and their image coordinates. Building floor estimation: Based on the facade object detection results, we then apply a rule-based approach to determine the floor number in order to estimate the height of the corresponding OSM buildings. Specifically, we first sort facade objects (i.e., windows and doors) by their vertical coordinates and calculate the differences between neighboring elements; next, k-means clustering (with k=2) is used to identify groups of objects that are vertically aligned with each other, which yields a floor number estimate by counting the rows of windows. By considering an average floor-to-floor height (i.e., 2.5 meters for residential or 3.5 meters for commercial buildings), we can then derive the building height information from the SVI data and use it as an SSL training label (h_SSL) to train the ML regression model on OSM morphometric features. 
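A compact sketch of the rule-based floor-counting step is given below. The function and its example input are hypothetical (in practice the vertical pixel coordinates would come from the YOLO v3 facade detector), and it follows the heuristic described above: sort the window centres, split the gaps between neighbours into "same row" and "new floor" clusters with k-means (k = 2), and convert the resulting floor count into a height using the assumed 2.5 m (residential) or 3.5 m (commercial) floor height:

import numpy as np
from sklearn.cluster import KMeans

def estimate_floors(window_centers_y, avg_floor_height=2.5):
    # window_centers_y: vertical (pixel) centres of detected windows on one facade
    y = np.sort(np.asarray(window_centers_y, dtype=float))
    if len(y) < 3:
        return 1, avg_floor_height                 # degenerate case, handled crudely here
    gaps = np.diff(y).reshape(-1, 1)
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(gaps)
    big = 0 if gaps[labels == 0].mean() > gaps[labels == 1].mean() else 1
    n_floors = int(np.sum(labels == big)) + 1      # large gaps mark transitions between rows
    return n_floors, n_floors * avg_floor_height

# e.g. hypothetical window centres from a facade with three rows of windows
print(estimate_floors([120, 125, 118, 320, 318, 522, 519, 525]))   # -> (3, 7.5)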
§ PRELIMINARY RESULT §.§ Case Study As a case study, we implemented and tested the proposed method (Figure <ref>) in a classic western European city, namely the city of Heidelberg, Germany, considering that Heidelberg is relatively well mapped in OSM. Moreover, reference data (h_Raw) of building heights obtained from the City of Heidelberg is also available, where building eaves heights (as we aim at an LoD1 model for now) were recorded and spatially joined with OSM building footprints. We extracted the latest OSM data (buildings and streets) via the ohsome API, which is built on the OpenStreetMap History Database (OSHDB) <cit.>. Herein, the ohsome API enables us to trace back to even historical OSM data, which can potentially contribute to more intrinsic features (e.g., the curve of nodes or contribution density). However, this goes beyond the scope of this paper. In this work, we calculated 129 morphometric features for 16,089 building footprints within the city of Heidelberg, which were used to train three types of ML regression models, specifically an RF with 1,000 trees, an SVM with an RBF kernel, and a three-layer dense CNN, to estimate building heights. Regarding the SVI data, we followed the method described in Figure <ref> by manually choosing 308 street-level Mapillary images and aligning them with 308 corresponding OSM building footprints by considering the SVI metadata. Then, we estimated their floor numbers and further converted them into building heights by multiplying by an average floor height of 2.5 meters for residential buildings and 3.5 meters for commercial and public buildings <cit.>. Herein, we manually verified the building function for these 308 SVI and their corresponding OSM building footprints. Although it is possible to automate this process with OSM data <cit.>, the prediction of building functions is beyond the scope of this paper. Despite its limitations, the proposed method provides a promising and low-cost solution to create open-source 3D city models (LoD1) by consuming only VGI data sources (i.e., OSM and SVI) with a flexible SSL schema. §.§ Experimental Result In our case study, we conduct two comparative analyses to evaluate the capability of our SSL method with respect to two main variables: first, the different OSM morphometric features and, second, the different ratios of "pseudo labels" during SSL training, by comparing the regression performance among three ML regression models (i.e., RF, SVM, and CNN). Height estimation with different OSM features: To validate the multi-level morphometric features extracted from OSM, Table <ref> compares the regression performance of the three ML models (RF, SVM, and CNN) using two different levels of morphometric features (i.e., 64 building-level features and all 129 features). Herein, we set a split ratio (between training and testing samples) of 0.7 on the reference data and calculate three common regression metrics (MAE and RMSE in meters, and R^2) for evaluation purposes. An important finding is that the integration of street and street-block features leads to an incremental boost in model performance, though this is less significant in the case of the CNN. In the case of the SVM, however, more features do not seem to help; a potential reason is the curse of dimensionality. In short, an average MAE of around 2.3 meters (RF with 129 features), which is less than the average height of a single floor, confirms the feasibility of accurately estimating building height only from OSM morphometric features. 
This result encourages us to incorporate these OSM morphometric features with the proposed SSL method to better create large-scale and open-source 3D city models. SSL with different ratios of "pseudo labels": Based on the workflow described in Figure <ref>, we were able to collect 308 SVI from Mapillary and extract "pseudo labels" via facade object detection, then associate these height values with their corresponding OSM building footprints. To test the impact of different SSL ratios, we set up three training sets: 1) using only estimated heights from SVI (SVI) as an aggressive scenario of SSL; 2) randomly selecting 308 OSM buildings and retrieving their heights from the reference data to simulate the fully supervised scenario (RAW); 3) merging the "pseudo labels" with reference heights, thus obtaining a balanced SSL training set (i.e., 308 each for SVI and RAW). In addition, a validation set with 2,000 buildings randomly extracted from the reference data is considered given the limited number of training labels. Table <ref> shows the numerical results using different ratios of "pseudo labels" (i.e., SVI, RAW, and SSL) and three ML regression models (with 129 features). Although using only "pseudo labels" (SVI) still leads to the largest error (w.r.t. MAE and RMSE) in all three regression models, the "pseudo" height extracted from SVI is indeed informative for building height regression; more importantly, it is beneficial when merged with existing labels. Therefore, the quantitative result listed in Table <ref> confirms that the proposed SSL method is effective and efficient in extracting "pseudo" training information from crowdsourced SVI data, which largely boosts the estimation accuracy of all three ML regression models. In future work, it would be interesting to further investigate how different building types (e.g., residential or commercial, one-floor or multi-floor) can affect the accuracy of building height estimation. Regarding the generation of "pseudo labels", Figure <ref> shows selected examples of building floor estimations from Mapillary SVI in Heidelberg. One can observe that for lower floor numbers, and provided the captured facade is complete, the model works in a sensible way. However, we encountered several challenging cases where the building facade is not complete or the layout of windows (e.g., dormer windows) is difficult to group with our floor estimation rules. In this context, future work is needed to develop a more robust method of extracting and distinguishing related features from SVI, such as roof types, dormer windows, and building functions, which can be helpful to generate more reliable "pseudo labels" for the SSL method. § DISCUSSION In Figure <ref>, we demonstrate a 3D city model in LoD1 for selected buildings in the old town of Heidelberg, which is created using the proposed SSL method based on SVI (Figure <ref> (b)) and OSM building footprints (Figure <ref> (a)). In future work, we aim to refine this method by addressing the aforementioned limitations and comparing the estimated models with official LoD1 city models in selected cities. Our preliminary result echoes the findings in <cit.> and <cit.> to a certain extent. More importantly, the SSL schema makes our method in principle even more flexible and easier to apply in areas where the availability of training data (e.g., existing building heights) is limited or difficult to access. 
For instance, in most developed countries, 3D city models can be established using, e.g., Digital Terrain Models (DTM); however, the acquisition of large-scale and accurate DTM data remains costly and time-consuming. In this context, the proposed method provides a solution to directly harness existing crowdsourced VGI data (OSM and SVI) for 3D city modeling without additional data acquisition (e.g., DTM). Therefore, "low-cost" herein mainly refers to the cost of traditional data acquisition methods with respect to building height information. Despite the high potential, we identify several limitations to be addressed in future work: * It is key to improve the building floor estimation workflow in terms of accuracy and speed, with which more "pseudo labels" can be extracted and used for SSL. For instance, the current SVI selection is done manually to ensure complete coverage of a building facade without being blocked by vegetation or cars; this process could be automated using a semantic segmentation approach to improve the efficiency of generating high-quality "pseudo labels" at scale. Moreover, OSM data itself may contain information about building height ("building:levels=*" or "height=*") as well, which could be a helpful source to get more training data into the SSL method. * Despite its low-cost and open-source nature, the quality aspect of VGI data (i.e., OSM and SVI data) remains under-quantified in this work, but certainly deserves careful treatment when applied to different countries or cities in the world <cit.>. For instance, positional errors and obstruction in SVI can significantly hinder the existing floor estimation approach. In addition, one needs to investigate how many SVI images are needed to have a reasonable spatial coverage of a study area to ensure the effectiveness of the SSL method. * The ML regression models used in this work are based on a 1D vector feature space (up to 129 different features). However, a more sophisticated method is needed to encode the spatial relationships among buildings. For example, one option is to apply a graph CNN <cit.> as a spatially explicit building height regressor. * It is still unclear how different architecture types (e.g., roof type, construction age, building function) and city styles (e.g., low-rise, medium-rise, or high-rise) will affect the effectiveness and accuracy of our SSL method. § CONCLUSION In this paper, we present a semi-supervised learning (SSL) method of automatic building height estimation by integrating crowdsourced street-level images (SVI) with multi-level morphometric features extracted from OpenStreetMap (OSM) data. In this context, we design a workflow to convert facade object detection results from Mapillary SVI into "pseudo labels" of building heights for three different ML regression models. As a case study, we validate the proposed SSL method in the city of Heidelberg, Germany, and the preliminary results look very promising. However, the varying quality of volunteered geographical information (VGI) data, cultural and city-wise differences in the morphological features used, and the varying availability of SVI all lead to certain limitations of such an SSL method. Our future work will focus on tackling these limitations and providing a robust and scalable solution for large-scale and open-source 3D city modeling purely based on low-cost VGI data.
http://arxiv.org/abs/2307.01303v1
20230703191320
A $p$-adic Simpson correspondence for smooth proper rigid varieties
[ "Ben Heuer" ]
math.AG
[ "math.AG", "14G22, 14G45, 14D22" ]
A p-adic Simpson correspondence for smooth proper rigid varieties]A p-adic Simpson correspondence for smooth proper rigid varieties [ Ben Heuer Received ; accepted ======================== For any smooth proper rigid analytic space X over a complete algebraically closed extension of _p, we construct a Simpson correspondence: an equivalence of categories between vector bundles on Scholze's pro-étale site of X and Higgs bundles on X. This generalises a result of Faltings from smooth projective curves to any higher dimension, and further to the rigid analytic setup. The strategy is new, and is based on the study of rigid analytic moduli spaces of pro-étale invertible sheaves on spectral varieties. § INTRODUCTION §.§ Main result Let K be a complete algebraically closed extension of _p. The goal of this article is to prove the following global p-adic Simpson correspondence. [<Ref>] Let X be a smooth proper rigid space over K. Then choices of a ^+/ξ^2-lift 𝕏 of X and of an exponential for K induce an exact tensor equivalence S_𝕏,:{ pro-étale vector bundles on X}{ Higgs bundles on X} which is natural in the datum of the pair (X,𝕏). Here an exponential for K is a continuous splitting of the p-adic logarithm log 1+𝔪_K→ K, and the two sides of the p-adic Simpson correspondence S_𝕏, are defined as follows: * A pro-étale vector bundle on X is a finite locally free sheaf on the pro-étale site X_ of <cit.> endowed with the completed structure sheaf Ø. * A Higgs bundle on X is a pair (E,θ) of an analytic vector bundle E on X and a morphism of Ø_X-modules θ E→ E⊗Ω_X^1(-1) satisfying θ∧θ=0. Faltings constructed the p-adic Simpson correspondence in the case when X is a smooth projective curve, under some further assumptions on X and K <cit.>, and formulated in terms of “generalised representations”, which are equivalent to pro-étale vector bundles (see <cit.>). Since then, it has been one of the main open questions in the area whether such a correspondence exists in higher dimension (see e.g. <cit.>). <Ref> confirms that this is the case: It generalises Faltings' result not only from smooth projective curves to smooth proper varieties, but further to the rigid analytic setting. Our method is quite different from that of <cit.> even for curves, and we can avoid any use of semi-stable models and log-structures. Instead, we work with Scholze's perfectoid foundations of p-adic Hodge theory. As a consequence, our result is stronger than that of Faltings even in the case of curves, namely we need weaker choices, as we will explain below. As the name “generalised representations” suggests, any choice of base-point x∈ X(K) induces, via descent from the universal pro-finite-étale cover, a natural fully faithful functor Rep_K(π_1(X,x))↪{pro-étale vector bundles on X} from continuous representations of the étale fundamental group π_1(X,x) on finite dimensional K-vector spaces (see <cit.>). We thus obtain a fully faithful exact tensor functor 𝒮_𝕏,:Rep_K(π_1(X,x))↪{Higgs bundles on X} from <Ref>. This allows us to generalise the question posed in <cit.>: How can we characterise the essential image of 𝒮_𝕏,? The name “p-adic Simpson correspondence” for <Ref> alludes to the famous non-abelian Hodge correspondence in complex geometry due to Corlette and Simpson <cit.>. 
For a smooth projective variety Y over ℂ with a base-point y∈ Y(ℂ), this is an equivalence of categories from representations of the topological fundamental group π_1(Y,y) on finite dimensional complex vector spaces to the category of semistable Higgs bundles on Y with vanishing rational Chern classes. The functor 𝒮_𝕏, is a very close analogue of this functor. <Ref> is therefore another main question about the p-adic Simpson correspondence. One can hope that for smooth projective varieties over K=ℂ_p, the completion of ℚ̄_p, the essential image of 𝒮_𝕏, is given by semistable Higgs bundles with vanishing rational Chern classes, as over ℂ. For line bundles, this is proved in <cit.>, where it is however also shown that for proper X or for larger K, one in general needs stronger assumptions. §.§ p-adic non-abelian Hodge theory Let X be a smooth proper rigid space over K. Then we have a p-adic analogue of the Hodge decomposition from complex geometry: By a result of Scholze <cit.>, building on the work of Tate <cit.>, Faltings <cit.> and others in the algebraic setting, the datum of a ^+/ξ^2-lift 𝕏 of X induces an isomorphism H^n_(X,_p)⊗__p K=⊕_i+j=nH^i(X,Ω^j_X(-j)) where (-j) denotes a Tate twist. We note that there is a canonical such lift 𝕏 if we are given a model of X over a complete discretely valued subfield of K with perfect residue field. In complex geometry, the non-abelian Hodge correspondence of Corlette and Simpson is a non-abelian categorical generalisation of the Hodge decomposition. Starting with the pioneering work of Deninger–Werner <cit.> and Faltings <cit.>, it has been a much-studied question what a p-adic version of the non-abelian Hodge correspondence could look like. Our <Ref> can now be regarded as providing such a "p-adic non-abelian Hodge correspondence". There are several ways to explain this interpretation: The first perspective, emphasized by Abbes–Gros <cit.>, is that the p-adic Simpson correspondence should generalise the p-adic Hodge decomposition <Ref> to more general coefficients. Indeed, we prove: In the setting of <Ref>, let V be a pro-étale vector bundle on X and let (E,θ)= S_𝕏,(V) be the associated Higgs bundle. Then there is a natural isomorphism RΓ(X_,V)=RΓ_Higgs(X,(E,θ)). Here RΓ(X_,-) is pro-étale cohomology, while RΓ_Higgs(X,-) is Dolbeault cohomology (<Ref>). In the simplest case of V=Ø_X, the left hand side equals RΓ_(X,_p)⊗__p K by Scholze's Primitive Comparison Theorem <cit.>, while the right hand side is equal to Hodge cohomology. Hence, this case recovers the Hodge–Tate decomposition <Ref>. There is a second, very different way in which we can regard <Ref> as a generalisation of the Hodge decomposition <Ref> to more general coefficients: In the spirit of Simpson's perspective on non-abelian Hodge theory <cit.>, the functor S_𝕏, can be interpreted as a "categorical Hodge decomposition for non-abelian coefficient groups": If G is any rigid group, then the set of isomorphism classes of G-torsors on X_ is given by H^1_(X,G). For G=_r, this classifies the pro-étale vector bundles of rank r on X up to isomorphism. For G=_a, we instead have H^1_(X,_a)=H^1_(X,_p)⊗__p K by the aforementioned Primitive Comparison Theorem. Second, there is a general notion of G-Higgs bundles for rigid groups G, and the set of isomorphism classes of _a-Higgs bundles is H^1(X,Ø)⊕ H^0(X,Ω^1_X(-1)). 
From this perspective, the isomorphism <Ref> for n=1 can be regarded as matching up pro-étale _a-torsors and _a-Higgs bundles, and indeed one can upgrade this bijection to an equivalence between these two categories. In this sense, <Ref> provides a categorified generalised Hodge decomposition in degree n=1 for _r-coefficients. §.§ Comparison to previous results Faltings' article has been very influential and has sparked a great deal of activity in recent years. Its line of argument can roughly be divided in three parts: The local correspondence between “small” objects in terms of a toric chart, the global correspondence between “small” objects in terms of a lift, and finally the global p-adic Simpson correspondence in the case of projective curves. The first two steps, which we shall summarise as the “small correspondence”, have since been the subject of extensive studies, and can now be regarded as being well-understood: This started with the work of Abbes–Gros and Tsuji <cit.><cit.>, who have reinterpreted and studied in great detail the small correspondence for certain semi-stable schemes with log structures. More recently, the small correspondence has been studied for rigid analytic X under various additional technical assumptions, including the case when X is arithmetic and the pro-étale bundle comes from a _p-local system due to Liu–Zhu <cit.>, and the case when X has good reduction due to Wang <cit.>. Moreover, there are new approaches based on prismatic crystals <cit.><cit.>, leading to a small correspondence in families <cit.>. In contrast, the final step in <cit.>, the p-adic Simpson correspondence for proper curves, is much less well-understood: It is deduced from the small correspondence by descent from finite étale covers, using a subtle construction of “twisted pullback” which has recently been studied further by Xu <cit.>. There are two main reasons why this strategy is limited to curves: Firstly, the descent step relies on the fact that one can make global differentials on curves p-adically small by passing to finite étale covers. Second, it uses a semistable reduction assumption, but in higher dimension one does not know if semistable models exist for any finite étale covers. For these reasons, this strategy does not generalise to higher dimension. Our approach to the p-adic Simpson correspondence is quite different from that of <cit.>, and in particular it is logically independent of the global correspondence for small objects. As a consequence, even in the case of curves, <Ref> is in fact more general than Faltings' result: Firstly, due to the different technical foundations, the base field K in <Ref> is more general, as Faltings assumes that X admits a model X_0 over a discretely valued non-archimedean field L⊆ K with perfect residue field. More importantly, however, a new aspect is that our p-adic Simpson functor S_𝕏, depends on a lift of X to ^+/ξ^2, whereas in <cit.> it is important to instead choose a lift of a semi-stable model over Ø_K to the integral subring A_inf/ξ^2⊆^+/ξ^2, because such a datum is necessary for the small correspondence. The relevance of this improvement is that for a smooth proper variety X_0 over L, the base-change to K always admits a canonical lift to ^+/ξ^2. Indeed, one always has a canonical map L→^+/ξ^2 along which one can base-change, but this does not restrict to a map Ø_L→A_inf/ξ^2 unless L is absolutely unramified. 
Consequently, in contrast to Faltings' result, we can eliminate the choice of lift in the important special case that X admits a model X_0 over L, making it more canonical, in close analogy to the Hodge–Tate decomposition <Ref>. Apart from Faltings' result for curves, the only other previously known cases of a p-adic Simpson correspondence for proper X beyond the small case are the case of line bundles, i.e. rank one <cit.>, and the case of projective space X=ℙ^n <cit.>. Moreover, there are partial results, e.g. in the case of vanishing Higgs field <cit.><cit.> and for pro-finite-étale bundles <cit.>. But our result is new even when X is an abelian variety. The cohomological comparison <Ref> was previously known for the small correspondence under various additional hypotheses: These include the algebraic settings of Abbes–Gros and Tsuji <cit.><cit.>, the case of good reduction <cit.><cit.>, and arithmetic settings of Galois-equivariant pro-étale vector bundles, namely for _p-local systems due to Liu–Zhu <cit.>, and more generally by Min–Wang <cit.>. For curves, Faltings deduced it from the small case. Beyond these cases, this result is new, already for line bundles. §.§ Strategy The approach of this article is rooted in the idea of using p-adic analytic moduli spaces to study the p-adic Simpson correspondence, initiated in <cit.><cit.>. In this article, we apply this perspective to the spectral variety, an object going back to the work of Hitchin <cit.>: Let (E,θ) be a Higgs bundle on X. To simplify notation, we set :=Ω_X^1(-1). The datum of θ is then equivalent to an Ø_X-algebra homomorphism ^∙_Ø_X^∨→(E) on X_. Let B⊆(E) be its image. This is a coherent commutative Ø_X-algebra and the Higgs field θ is encoded by the B-action on E via the natural map ^∙^∨→ B. We call the finite cover X':=_Ø_XB→ X the spectral variety[This is slightly different from the spectral variety studied in the context of the Hitchin fibration. Roughly, we replace the characteristic polynomial by the minimal polynomial. But this difference is not essential.] of (E,θ). Consider now Scholze's pro-étale site X_ of <cit.> endowed with the completed structure sheaf. The very basic idea for defining S^-1_𝕏, is to use the morphism of ringed sites ν:X_→ X_ to send S^-1_𝕏,:{Higgs bundles on X} → {pro-étale vector bundles on X} (E,θ) ↦ ν^∗E⊗_ℬℒ_θ where ℬ:=ν^∗B and where ℒ_θ will be a certain invertible ℬ-module (i.e. locally free of rank 1) on X_ that depends on θ. In order to define ℒ_θ, the key idea is to show that the moduli functor of invertible ℬ-modules is represented by a rigid group variety. To make this precise, let _K, be the site of smooth rigid spaces over K with the étale topology and let _X': _K,→Ab, S↦{line bundles L on X'× S with L_x≅Ø_S ∀ x∈ X'(K)}/∼ be the rigid analytic Picard functor of X'. We define the pro-étale Picard functor of ℬ as _,: _K,→Ab, S↦{[ loc. free -modules ℒ of rank 1 on (X× S)_; such that ℒ_|x× S≅ B_x⊗_K Ø_S ∀ x∈ X(K) ]}/∼. Our main technical result is now the following “multiplicative Hodge–Tate sequence for ”: There is a short exact sequence of abelian sheaves on _K, 0→_X'→_,→ H^0(X,B⊗_Ø_X)⊗_K _a→ 0. If _X' is representable, then (<ref>) is represented by a short exact sequence of rigid group varieties over K. The associated exact sequence of Lie algebras is the Hodge–Tate sequence 0→ H^1_(X,B)→ H^1_(X,)→ H^0(X,B⊗)→ 0. We can now explain how we use the choices in <Ref>: The ^+/ξ^2-lift 𝕏 induces a splitting s_𝕏 of (<ref>). The natural map ^∨→ B induces a section τ_B∈ H^0(X,B⊗).
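For orientation, consider the simplest case of rank one (this is only an illustration and is not needed in the sequel): if E=L is a line bundle, then its endomorphism algebra is Ø_X and the Ø_X-algebra homomorphism above sends ∂↦θ(∂), so its image is B=Ø_X, the spectral variety is X'=X, and τ_B corresponds to θ∈ H^0(X,Ω_X^1(-1)). The construction then amounts to twisting ν^∗L by a pro-étale line bundle ℒ_θ whose image under the Hodge–Tate map is θ, which is the shape of the rank one correspondence mentioned above.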
We now use an observation from <cit.> based on Fargues' work on p-divisible rigid groups <cit.>: For any rigid group H over K such that [p] H→ H is surjective, the datum of an exponential for K induces a natural Lie exponential _H(H)→ H(K). For algebraic groups H this is shown in <cit.> for the definition of twisted pullback. We instead apply it to H=^0_,, which is not algebraic even for curves, to get an invertible ℬ-module ℒ_θ:=_H(s_𝕏(τ_B))∈ H^1_(X,ℬ^×). In summary, we used the diagram [row sep = 0.5cm] _X'(K) [r] _,(K)[r] H^0(X,⊗ B)∋τ_B. H^1_(X,B)[r] [u] H^1_(X,)[u,"_H"] [r] H^0(X,⊗ B)∋τ_B. [u,equal] [l, "s_𝕏"' xshift=7.5, bend right=20,start anchor=north west, end anchor = north east, yshift = -3] In order to make the construction canonical and functorial, we define a notion of rigidifications of invertible pro-étale ℬ-modules that makes ℒ_θ unique up to unique isomorphism. The perspective of p-adic moduli spaces enters twice in the construction of ℒ_θ: Once in order to prove right-exactness in <Ref>, and independently when we invoke <Ref>. We note that <Ref> generalises results of <cit.><cit.> which deal with the case B=Ø_X. Our proof of this Theorem relies on the technical preparations from these works. The assumption in <Ref> that _X' is representable is always satisfied when X is algebraic. As it currently seems a bit unclear whether it is true in general (see 2.2), we also give a more general construction of ℒ_θ motivated by the result of <cit.>: we show that the torsor _,→𝒜_B has a reduction of structure group to _X'[p^∞] that is representable. The construction of the functor S_𝕏, from pro-étale vector bundles to Higgs bundles is similar: Let V be any pro-étale vector bundle on X. Then we use that by a construction of Rodríguez Camargo <cit.>, one can endow V with a canonical Higgs field θ_V:V→ V⊗. The image B of the induced morphism ^∙^∨→(V) is a coherent Ø_X-algebra. Set :=ν^∗ B, then as before, we can use <Ref> and the choices to define an invertible -module ℒ_V. We then show that (V,θ_V)⊗_ℬℒ_V^-1 is an analytic Higgs bundle. During the final stages of this work, we learnt that Daxin Xu was independently studying moduli of invertible B-modules in the case of curves. In joint work in progress, we plan to build on this circle of ideas to compare moduli spaces in p-adic non-abelian Hodge theory. §.§ Acknowledgements We thank Johannes Anschütz, Arthur-César Le Bras, Tongmu He, Ruochuan Liu, Juan Esteban Rodríguez Camargo, Peter Scholze, Annette Werner, Matti Würthen, Daxin Xu and Mingjia Zhang for very helpful discussions. This project was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – Project-ID 444845124 – TRR 326. § THE RELATIVE PRO-ÉTALE PICARD VARIETY OF THE SPECTRAL VARIETY §.§ Setup Let K be a complete algebraically closed non-archimedean field over _p as in the introduction. Throughout, we work in the setting of <cit.> and thus work with analytic adic spaces in the sense of Huber: By a rigid space, we mean an adic space locally of topologically finite type over (K,Ø_K). We use the following notation from <cit.>: For any smooth rigid space X over K, we write _X:=Ω_X|K^1(-1) for the Tate twist of the sheaf of Kähler differentials on X. We also just write =_X if X is clear from the context. For any k∈, we set _X=∧^k_Ø_X_X. We can always make a choice of p-power unit roots in K to trivialise the Tate twist, but it is better to remember it to get the correct notion of functoriality in K, especially in situations where there is a Galois action. 
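To spell out this remark (a standard observation, recorded for convenience): a compatible system ζ=(ζ_p^n)_n of primitive p^n-th roots of unity in K determines a generator of _p(1)=_n μ_p^n(K) and hence isomorphisms Ø_X(j)≅Ø_X for all j; in particular _X becomes isomorphic to Ω^1_X|K. If X is defined over a discretely valued subfield L⊆ K, these identifications are not Galois-equivariant, since the Galois group acts on _p(1) through the cyclotomic character, and this is the reason for keeping the twist in the notation.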
For any rigid space X, we denote by _X the category of smooth rigid spaces over X, and by _X the category of perfectoid spaces over X in the sense of <cit.>. If X=(K), we also write this category as _K. We endow it with the étale topology, or with the finer v-topology, to make them into sites. We indicate the topology by an index, e.g. _K,. For any rigid space X, there is an associated diamond X^ in the sense of <cit.>, which we can equivalently regard as a sheaf X^:_K,v→Sets. Since the resulting functor -^:_K→_K,v is fully faithful, we will freely switch back and forth between rigid spaces and their associated diamonds, and therefore usually drop the -^ from notation. §.§ The relative pro-étale Picard variety of a coherent algebra Let X be a proper rigid space over K. Then we have the rigid Picard functor _X:_K→Ab, Y↦ H^1_(X× Y,Ø^×)/H^1_(Y,Ø^×) from the category of smooth rigid spaces over K to the category of abelian groups. Here on the right hand side, we can equivalently use the étale topology by <cit.>. It is conjectured that _X is always represented by a smooth rigid group variety. This is known e.g. when X is algebraisable, i.e. when there is a proper scheme X_0 over K such that X=X_0^: In this case, _X_0 is represented by a locally finite type group scheme over K by Theorems of Grothendieck, Murre and Oort, see <cit.>. We then have _X=_X_0^ by Köpf's relative GAGA Theorem <cit.>. There are other cases in which _X is known to be representable. We mention <cit.><cit.> and also refer to <cit.> for an overview. Let B be a (commutative) coherent Ø_X-algebra on X_ and let ℬ:=ν^∗ B be the associated sheaf on X_. Then we define the pro-étale Picard functor of B to be _,:_K,→Ab, S↦{[ invertible -module on (X× S)_; such that L_|x× S≅ B_|x× S for each x∈ X(K) ]}/∼. Here and in the following, we also regard B and ℬ as sheaves on X× S via pullback from X. The notation -_|x× S refers to the sheaf obtained by pullback along {x}× S→ X× S. The meaning of the condition on L_|x× S is that we want to make sure that there is no non-trivial contribution from invertible H^0(X,B)-modules on S_. In fact, it would suffice to fix a point x∈ X(K) such that the map H^0(X,B)⊗Ø_S→ B_|x× S is injective and simply ask that L_|x× S≅ B_|x× S. Indeed, we will explain in the next section how to define a better notion of rigidifications for which we do not need to choose x depending on B. The main goal of this section is to prove the following result: Let π:X→(K) be a smooth proper rigid space. Let B be a coherent Ø_X-algebra on X_ and let f:X':=_Ø_X(B)→ X be the associated finite morphism. We denote by ℬ:=ν^∗ B the associated sheaf on X_. Then we have: * There is a canonical short exact sequence of abelian sheaves on _K, 0→_X'→_, H^0(X,B⊗_Ø_X_X)⊗_K _a→ 0. which is functorial in B and X. * The sequence in (1) becomes split over an open subgroup of H^0(X,B⊗_X)⊗_a. Assume now furthermore that the rigid analytic Picard functor _X' is representable by a rigid group. For example, this is the case when X is algebraisable. Then we moreover have: * The sequence in (1) is representable by a sequence of rigid group varieties. * The induced sequence of Lie algebras over K obtained by passing to tangent spaces of the identity is canonically isomorphic to the Hodge–Tate sequence of B 0→ H^1_(X,B)→ H^1_(X,)→ H^0(X,B⊗_X)→ 0. * The multiplication map [p] on the identity component of _, is finite étale. 
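To orient the reader, we record what this says in the simplest case B=Ø_X (purely as an illustration): then X'=X, so the sequence in (1) compares the rigid analytic Picard functor of X with the pro-étale Picard functor of the completed structure sheaf, and the sequence of Lie algebras in (4) becomes the usual Hodge–Tate sequence in degree one, 0→ H^1_(X,Ø_X)→ H^1_(X,Ø)→ H^0(X,_X)→ 0. The general statement should be viewed as a version of this with coefficients in the coherent algebra B.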
In comparison to the rigid analytic Picard functor _X, we have made the following changes in the definition of _,: * We have replaced the analytic (or étale) topology with the pro-étale topology. * We have replaced Ø^× with units in a coherent Ø_X-algebra B. There is a technical variant of _, where we additionally make the following changes: * We replace the test category _X, by _K,. * We replaced the pro-étale topology with the v-topology. It is possible to relate these two versions to each other via “diamondification”. But for the purpose of this article it is much easier for technical reasons (and arguably more natural) to instead work with rigid test objects. Nevertheless, both approaches would work. We note that up to this non-essential technical difference, we can then recover the “diamantine v-Picard variety” from <cit.> as the simplest special case of B=Ø_X. In contrast to the natural map H^1_(X,B^×)→ H^1_(X',Ø_X'^×), which is an isomorphism, the natural map H^1_(X,^×)→ H^1_(X',Ø_X'^×) is usually not an isomorphism, i.e. it makes a difference whether we consider invertible B-modules on X_ or invertible Ø_X'-modules on X'_. For example, X' could be non-reduced, but as perfectoid algebras are reduced, Ø_X' on X'_ does not see the non-reduced structure. In contrast, the algebra ℬ on X_ keeps this structure. This motivates the relative setup of <Ref>. §.§ The Leray exact sequence of the sheaf ℬ^× Let X be any smooth rigid space and let B be a coherent Ø_X-algebra on X_. We denote by ν:X_→ X_, λ:X_→ X_ the natural morphism of sites. As before, we set :=λ^∗ B. There is a finitely presented p-torsionfree Ø_X^+-module B^+⊆ B with B^+=B. We use the results of <cit.> on existence of formal models: These guarantee that we can find a formal model 𝔣:𝔛'→𝔛 of f which is finite (<cit.>, or combine <cit.> using that quasi-finite and proper implies finite). Then 𝔅:=𝔣_∗Ø_𝔛' is a coherent Ø_𝔛-module whose rigid generic fibre is B. Let η:X_→𝔛 be the natural morphism of ringed spaces, then η^-1𝔅⊗_η^-1Ø_𝔛Ø_X_^+ has the desired properties. For our coherent Ø_X-module B, let us fix a choice of such a submodule B^+⊆ B. We also write B^+ for the associated module on X_. Let ℬ^+:=ν^-1 B^+⊗_ν^-1Ø^+_X_Ø^+_X_=λ^-1 B^+⊗_λ^-1Ø^+_X_Ø^+_X_⊆ℬ. By right-exactness of λ^∗, this is still finitely presented in the sense that locally in X_, there is a left-exact sequence Ø^+m→Ø^+n→^+→ 0. * We have ^+/p^k=ν^-1(B^+/p^k) on X_. As a consequence, we have Rν_∗(^+/p^k)=B^+/p^k. The analogous statements also hold for the sheaf B/B^+. * We have ^×/(1+p^+)=ν^-1(B^×/(1+pB^+)) on X_. In particular, Rν_∗(^×/(1+p ^+))=B^×/(1+p B^+). * As the statement is local on X_, we can assume that B^+ admits a presentation on X_ of the form Ø^+m_X→Ø^+n_X→ B^+→ 0. This stays exact after passing to quotients mod p^k. Applying the natural transformation ν^-1→ν^∗, we see that this is an isomorphism on Ø^+n_X/p^k, hence also ν^-1B/p^k^+/p^k. The statement now follows from <cit.> * This follows from part 1 by the same argument as in <cit.>: Let Z≈ Z_i be any pro-étale affinoid perfectoid tilde-limit in X_ with Z_i∈ X_. We claim that ^×(Z)/(1+p^+)(Z)= B^×(Z_i)/(1+pB^+)(Z_i). This shows the desired statement by <cit.>. Let f∈^×(Z), then by part 1 we can pass to an étale cover of Z to find sequences (f_n)_n∈ and (f'_n)_n∈ in B(Z_i) whose images under the map ϕ: B(Z_i)→(Z) satisfy ϕ(f_n)→ f and ϕ(f_n')→ f^-1. Then we have to have ϕ(f_nf_n'-1)∈ p^+(Z) for n≫ 0 and hence f_nf_n'-1∈ pB^+(Z_i) for some i≫ 0 by part 1. It follows that f_n ∈ B^×(Z_i). 
This shows surjectivity. Injectivity then also follows from part 1. Let X be a smooth rigid space and let ν:X_→ X_ be the natural map. Let B be a coherent Ø_X-module on X. We consider the functor ν^∗:=Ø_X_⊗_ν^-1Ø_X_ν^-1-:Mod(X_,Ø_X_)→Mod(X_,Ø_X_) where we recall that we denote by Ø=Ø_X_ the completed structure sheaf. Let ℬ=ν^∗B. * We have ℬ=ν^∗B= Lν^∗ B via the natural map. * For any n∈ℤ_≥ 0, we have R^nν_∗= B⊗_Ø_X^n_X * We have ν_∗^×= B^× and R^nν_∗^×=B⊗^n_X for any n≥ 1. Relations like Lν^∗B=ν^∗ B are the reason why X_ is also called the “flattened pro-étale site”. Note that (1) becomes false if we replace X_ by X_v, which for example includes points (K)→ X. That said, (2) still has a chance to be true for X_v. * All statements are local on X, so we may assume that X=(R) is affinoid. Since R is then regular, there exists a finite free resolution 0→Ø_X^n_1→…→Ø_X^n_l→ B→ 0. We need to show that this stays exact after applying ν^∗ as this computes Lν^∗. To do so, we use that a basis of X_ is given by objects that are of the form U_3 U_2 U_1 X, where * f_1 is an étale morphism, * f_2 is obtained from a toric chart U_1→𝕋^d as the pullback of the affinoid perfectoid toric cover 𝕋^d_∞→𝕋^d, and * f_3 is a pro-finite-étale map of affinoid perfectoid objects. It therefore suffices to prove that f_3^∗f_2^∗ f_1^∗- preserves the exactness. The functor f_1^∗ is exact since f_1 is a flat morphism of rigid spaces. Hence we may assume without loss of generality that U_1=X. To show that f_2 preserves the exactness, write 𝕋^d=(A,A^+) and 𝕋^d_∞=(A_∞,A^+_∞). We need to see that _AA_∞ preserves the exactness. For this we use that after rescaling the transition morphisms, we can find an integral model 0→Ø_X^+,n_1→…→Ø_X^+,n_l→ B^+→ 0 of our complex which has bounded p-torsion cohomology. Since A^+→ A^+_∞ is faithfully flat mod p^n, the complex still has bounded p-torsion cohomology after applying _A^+A_∞^+, and thus is exact after inverting p. Finally, the map f_3 is of the form (C,C^+)→(B,B^+) for perfectoid algebras C and B such that B^+/p^n→ C^+/p^n is a filtered colimit of almost finite étale maps by almost purity <cit.>. In particular, it is almost faithfully flat mod p^n. Thus the same argument as for f_2 shows that f_3^∗- preserves exactness. * We have R^nν_∗Ø=_X^n=Ω_X^n(-n) by <cit.>. Since B is a perfect complex on the smooth rigid space X, it follows from part 1 and the projection formula that Rν_∗=Rν_∗Lν^∗ B=B⊗^L_Ø_X Rν_∗𝒪=B⊗_Ø_X Rν_∗𝒪 where as before we denote by Ø the completed structure sheaf on X_. * We first note that from parts 1 and 2 we can also deduce ν_∗^+=B^+ using that ν_∗(/^+)=B/B^+. For the second part, we note that the sheaf ^+ is p-adically complete: For example, this can be seen by considering the derived p-adic completion of the complex (<ref>), using that Ø^+_X is p-adically complete. It follows that the _p-Banach algebra exponential defines an isomorphism exp:p^+→ 1+p^+, see e.g. <cit.>. We thus have a short exact sequence 0→^+^×→^×/(1+p^+)→ 0. Applying Rν_∗ and using <Ref>, this shows that the exponential defines an isomorphism R^nν_∗^+=R^nν_∗^× for n> 0. Second, using the short exact sequence 0→^+^+→^+/p→ 0, we see from <Ref> that R^nν_∗^+ is uniquely p-divisible for n>0. This shows that R^nν_∗^+= (R^nν_∗^+)=R^nν_∗=R^nν_∗Ø⊗_Ø_X by the first part, where we use quasi-compactness to commute R^nν_∗ and . We have a left-exact sequence of abelian groups, functorial in X, 0→ H^1_(X,B^×)→ H^1_(X,^×) H^0(X,_X⊗ B) We form the Leray sequence and use <Ref>.(iii). 
Applying the Corollary to X× Y for any Y∈_K,, it follows that for varying Y, we obtain short exact sequences, functorial in Y 0→ H^1_(X× Y,B^×)→ H^1_(X× Y,^×)→ H^0(X× Y,_X× Y⊗_Ø_X B) Recall now that we have the product formula for differentials Ω^1_X× Y=π_1^∗Ω^1_X⊕π_2^∗Ω_Y^1 where π_i is the projection. We now use that X is proper. By Kiehl's Proper Mapping Theorem and the resulting rigid version of “cohomology and base-change”, this shows that H^0(X× Y,_X× Y⊗ B)=(H^0(X,_X⊗ B)⊗_KØ(Y) ) ⊕(H^0(Y,_Y)⊗_K B(X)). Observe now that for any section of _,, the image in the second factor is trivial as it is trivial at every x∈ X(K). Upon sheafification in Y, we thus arrive at the left-exact sequence 0→_X'→_,→ H^0(X,⊗ B)⊗_K _a described in <Ref>. To get the desired short exact sequence, we are thus left to prove the right-exactness, which is more difficult. As a preparation, we first discuss the partial splitting. For this we use: §.§ The Higgs–Tate torsor of Abbes–Gros We now define the analogue of the Higgs–Tate torsor of Abbes–Gros <cit.> in the analytic setting of the pro-étale site. Let X be a smooth rigid space over K and let 𝕏 be a lift of X to ^+/ξ^2. Via the homeomorphism |𝕏|=|X|, we may regard Ø_𝕏 as a sheaf on X_. We define a pro-étale sheaf L_𝕏 on X_ as the subsheaf of (λ^-1Ø_𝕏,^+/ξ^2) defined as follows: L_𝕏:={homomorphisms φ of sheaves of ^+/ξ^2-algebras on X_ making the following diagram of sheaves commutative:[row sep =0.55cm] λ^-1Ø_𝕏[d] [r,"φ"] 𝔹_^+/ξ^2 [d] λ^-1𝒪_X [r] Ø_X_}. L_𝕏 is a pro-étale torsor under ν^∗_X^∨ on X_. The associated class in H^1_(X,ν^∗_X^∨), considered as an extension of Ø by ν^∗_X^∨, is dual to the Faltings extension 0→Ø→ E→_X^1→ 0. This is closely related to the discussion in <cit.>. As the statement is local on X, we may assume that X=(R) is affinoid with a lift 𝕏=(R). Let Y→ X be any affinoid perfectoid object of X_, and write S=Ø(Y). Then L_𝕏(Y) describes the dotted morphisms making the following diagram commutative: ^+/ξ^2[r][d] R[d] [r, dashed,"φ"] A_inf(S)/ξ^2[d] K[r] R [r] S. As R is formally smooth over ^+/ξ^2 and the rightmost vertical map is a square-zero thickening, such a lift φ always exists. The kernel of 𝔹_^+/ξ^2→𝒪_X is given by the Tate twist 𝒪_X(1). Hence, by deformation theory, for any two such lifts, the difference is a derivation R→ S(1), or equivalently, an R-linear morphism ^1_R→ S. Thus L_𝕏 is a torsor under ν^∗^∨_X. We postpone the discussion of the relation to the Faltings extension to later. Let X be a smooth rigid space and let 𝕏 be a ^+/ξ^2-lift of X. Then for any ^∙_Ø_X^∨-algebra B on X_ that is coherent over Ø_X, we obtain an associated pro-étale torsor under ℬ:=ν^∗B on X_ as the pushout along the induced map ν^∗^∨→, L_𝕏,B:= L_𝕏×^ν^∗^∨. Let X be a smooth rigid space and let 𝕏 be a ^+/ξ^2-lift of X. Sending any ^∙_Ø_X_X^1-algebra B as above to the class of L_𝕏,B defines a natural section s_𝕏,B:H^0(X,⊗ B)→ H^1_(X,) of the Hodge–Tate map _B:H^1_(X,)→ H^0(X,⊗ B) that is functorial in B. It is clear from the construction that B↦ L_𝕏,B is functorial in B. In particular, we may reduce to the universal case of B=^∨. Second, the construction is functorial in X. In particular the statement is local, so we may reduce to the case that X=(R) is toric. In the toric case, we can now describe L_𝕏 explicitly, as follows. Fix a toric chart X→𝕋^d and denote by T_1,…,T_d the induced coordinates on X. We then have the standard basis dT_1/T_1,…,dT_d/T_d of Ω_R^1. Let us denote its dual basis by ∂_1,…,∂_d.
For any lift 𝕏=( R) of X, there exists by formal smoothness a lift of the map K⟨ T_1^±,…,T_d^±⟩→ R induced by the chart to an étale morphism of ^+/ξ^2-algebras ^+/ξ^2⟨ T_1^±,…,T_d^±⟩→ R. Any choice of such a lift induces a section of L_𝕏(X_∞), as follows: Let X_∞=(R_∞)→ X be the toric cover. The morphism ^+/ξ^2⟨ T_1^±,…,T_d^±⟩→𝔹^+_/ξ^2(X_∞), T_i↦ [T_i^1/p^∞] extends by formal étaleness to a unique morphism φ:R→𝔹^+_/ξ^2(X_∞) lifting the map R→ R_∞. Let Δ be the Galois group of X_∞→ X. We write c_i:Δ→_p(1)=_n∈μ_p^n(K) for the map determined by saying that for any γ∈Δ, we have γ· [T_i^1/p^∞]= [c_i(γ)]· [T_i^1/p^∞]. Under the identification L_𝕏(X_∞)=Ω^∨_R⊗_RR_∞(1) induced by the section φ, the Δ-action on the right is given by the continuous 1-cocycle Δ→_R=Ω_R^1∨(1), γ↦∑_i=1^d c_i(γ)·∂_i. Let R_∞{1}:=(B^+_/ξ^2(R_∞)→ R_∞). For any ∑ a_i∂_i∈(Ω_R,R_∞{1}), the corresponding element of L_𝕏(R_∞) is uniquely characterised (by formal étaleness) by saying that it sends T_i↦ [T_i^1/p^∞]+a_i. In order to describe the γ-action, we thus need to compute γ([T_i^1/p^∞]+a_i)-[T_i^1/p^∞] =(c_i(γ)-1)[T_i^1/p^∞]+γ a_i where c_i(γ)-1∈ R_∞{1}. Indeed, recall that for any ε∈_p(1) and a∈_p, we have ε^a-1≡ a(ε-1)ξ^2, which induces a canonical isomorphism R(1)→ R{1}, r ε→ r(ε-1). Under this identification, we see that the cocycle has the desired description. It now suffices to consider for any dT_i/T_i∈ H^0(X,Ω^1) with associated map ^∨→Ø(1) the Ø(1)-torsor L_𝕏×^ν^∗^∨Ø(1). We need to see that its image under (1) is dT_i/T_i. By <Ref>, the associated cocycle Δ→^∨_R→ R(1) is of the form γ↦ c_i(γ). The statement now follows from <Ref> by the characterisation of the map (1):H^1_(X,Ø(1))→ H^0(X,Ω) given in <cit.>: Indeed, sending T_i∈Ø_X_^× around the bottom left corner of the diagram described in the cited lemma defines (1)^-1(dT_i/T_i)∈ H^1_(X,Ø(1)). Going around the top right yields the class defined by the 1-cocycle Δ→ R(1), γ↦ c_i(γ). The proof of <cit.> also shows the relation to the Faltings extension. §.§ The partial splitting We can now also construct the partial splitting. Consider the composition H^1_(X,p^+)→ H^1_(X,) H^0(X,_X⊗ B). The image of the first map is an Ø_K-submodule H^+ of the finite dimensional K-vector space H^1_(X,) such that H^+ = H^1_(X,). Hence it contains an open Ø_K-sublattice H_0. Let now s:H^0(X,⊗ B)→ H^1_(X,) be any splitting, e.g. by <Ref> this could be induced by the Higgs–Tate torsor L_𝕏 for any choice of an ^+/ξ^2-lift 𝕏 of X. Let Λ⊆ H^0(X,⊗ B) be the preimage of H_0 under this splitting and set Γ:=s(Λ). Summarising the construction, Λ and Γ are finite free Ø_K-modules and we have a diagram Γ⊗_Ø_K_a^+ [d] [r] [r] H^1_(X,p^+)⊗_a^+[d,""] [r] R^1π_*(p^+) [d,""] [r,"log"] R^1π_*^×[d,""] Λ⊗_Ø_K_a^+ [r, hook] [u, bend left,"s"] H^0(X,⊗ B)[r,equal] H^0(X,⊗ B)[r,equal] H^0(X,⊗ B). The composition of s with the top row defines a splitting of over Λ⊗_a^+. §.§ Right-exactness via the exponential map In order to prove <Ref>.1, it remains to see that the natural morphism _,→ H^0(X,B⊗_X)⊗_a is surjective. For this, we will give a geometric argument, using the following version of proper base change, an extension of a special case of <cit.> to p-torsion coefficients: Let g:Z→(K) be any proper rigid space over K. Then for any n,m∈, we have an isomorphism of sheaves on _K, R^mg_∗(/n)=H^m_(X,/n). As a first step, we prove this for a different test category, namely we replace _K, by _K,. 
We claim that we then have R^mg^_∗/p=H^m_(X,/p) where the left hand side is the étale sheafification of the presheaf that sends a perfectoid space Y to H^m_(Z^× Y,/p). For this claim, the proof of <cit.> goes through with one minor change: In the proof of <cit.>, the toric cover of opens U⊆ Z can be replaced by the pro-finite-étale affinoid perfectoid cover of Colmez from <cit.>. This still gives a perfectoid Čech nerve of Galois covers of Z. It thus remains to deduce the rigid case from the perfectoid one. For this, we may without loss of generality restrict to test objects Y∈_K, that admit a toric chart. The induced toric tower is then a pro-finite-étale perfectoid cover Y_∞=_i∈ Y_i. For this we have H^m_(Z^× Y_∞,/n)=_i∈ H^m_(Z× Y_i,/n) by <cit.>. Using Y_∞,=2- Y_i,, it now follows from the perfectoid case that the following natural map becomes an isomorphism in the colimit over i: H^m_(Z,/n)(Y_i)→ R^mg_∗(/n)(Y_i) Since each Y_i→ Y is finite étale, this means that the morphism H^m_(Z,/n)(Y)→ R^mg_∗(/n)(Y) is an isomorphism, as we wanted to see. In the following, let us write π':X'→(K) for the structure map of X'. If _X'=R^1π'_∗Ø^× is representable, then [p]:^0_X'→^0_X' is finite étale. More precisely, it is an étale torsor under H^1_(X,μ_p). The Kummer sequence induces an exact sequence 1→ R^1π'_∗μ_p→_X'_X'→ R^2π'_∗μ_p. By <Ref>, the last map goes from a rigid group to a constant group, so it sends ^0_X' to 0. Hence [p] is surjective on ^0_X'. It is also étale by <cit.> as it induces multiplication by p on tangent spaces. By <cit.>, it remains to see that [p] is finite. This follows from the fact that H^1_(X',μ_p) is finite by <cit.>. Recall that the left-exact sequence of <Ref> arises from the Leray sequence for the projection X_→ X_. It can therefore be continued to a 4-term exact sequence 0→ R^1π_∗B^×→_, H^0(X,B⊗_X)⊗_a R^2π_∗B^×. We will show that the boundary map ∂ vanishes. For this we first make a few reduction steps: Explicitly, we wish to show that for any Y∈_K,, the boundary map ∂_Y:H^0(X,B⊗)⊗Ø(Y)→ H^2_(X× Y,B^×) vanishes after étale sheafification on Y. For this we use the following about the codomain: Let Y be any rigid space over K. Then * There is a natural isomorphism H^1_(X× Y,B^×)=H^1_(X'× Y,Ø^×) * The natural map H^2_(X× Y,B^×)→ H^2_(X'× Y,Ø^×) is injective. Redefining X as X× Y, we may without loss of generality assume that Y=(K). Recall that by definition, B^×=f_∗Ø^×. The 5-term exact sequence of the Leray sequence of the morphism of étale sites associated to the finite map f:X'→ X is therefore of the form 1→ H^1_(X,B^×)→ H^1_(X',Ø^×)→ H^0(X,R^1f_∗Ø^×)→ H^2_(X,B^×)→ H^2_(X',Ø^×). We claim that R^1f_∗Ø^×=1. To see this, we first recall that by étale descent, any line bundle on X'_ is trivial locally in the Zariski-topology, and hence associated to a coherent module on X' that is locally free of rank one by <cit.>. By <cit.>, it follows that it is already trivial locally on the pullback of a Zariski cover of X. After sheafifying in the rigid space Y, we have thus constructed a natural map ∂':A:=H^0(X,B⊗)⊗Ø(Y)→ R^2π'_∗Ø^× By injectivity in <Ref>.(2), we have reduced to showing that this map vanishes. By the splitting of over an open subgroup of A from <ref>, we know that any x∈ A(Y) is in the kernel of ∂' after multiplying by p^n for some n. Hence ∂' factors through a map ∂':A=H^0(X,B⊗)⊗_a→ R^2π'_Ø^×[p^∞]. We claim that any such homomorphism vanishes. To see this, we consider the Kummer sequence on X'_. 
Using <Ref>, we see that this induces a long exact sequence _X'→H^2_(X',μ_p^n)→ R^2π'_∗Ø^× R^2π'_∗Ø^×. Taking the colimit over n, it follows that we have an étale surjection H^2_(X',μ_p^∞)→ R^2π'_Ø^×[p^∞]. Let Q be a sheaf on _K, that admits a surjection H→ Q from a locally constant sheaf. Then any map h:Z→ Q from a connected rigid space Z is constant. That h is an étale surjection means that there is an étale cover Z'→ Z by a rigid space and a map h' fitting into a commutative diagram [row sep = 0.55cm] Z' [d,"h'"] [r] Z [d,"h"] H[r] Q. Then h' is locally constant by the assumption on H. Since the étale map Z'→ Z is open <cit.>, it follows that h is locally constant, hence constant as Z is connected. Applying this to the homomorphism ∂' from the rigid group variety A, which is in particular connected, we deduce that ∂'=0. This finishes the proof of <Ref>.1 Evaluating at K-points, we deduce that there is a short exact sequence 0→(X')→ H^1_(X,^×)→ H^0(X,B×)→ 0. Note, however, that our proof crucially uses that this sequence can be upgraded to sheaves as it relies on a geometric argument to see the vanishing of the boundary map on the right. §.§ Representability of _, We can now complete the proof of <Ref>: Assume that _X'=R^1π_∗B^× is representable by a rigid group. Then the same argument as for <cit.> shows that also R^1π_∗^× is representable: For any n∈, let U_n:=^-1(p^1-nΛ⊗_a^+)⊆_,, then _,=⋃_n∈U_n. Since the short exact sequence is split over Λ, the sheaf U_1≅Λ⊗_a^+×_X' is representable. The general case follows inductively: We have a morphism of short exact sequences [row sep = 0.50cm] 0 [r] _X',[d, "[p]"] [r] U_n[d, "[p]"] [r] p^1-nΛ[d, "· p","∼"labelrotate] [r] 0 0 [r] _X',[r] U_n-1[r] p^1-(n-1)Λ[r] 0. The morphism on the left is locally on the target a finite étale H^1(X',μ_p^n)-torsor by <Ref>, hence the same is true for U_n→ U_n-1. Any such torsor is representable: For example, we can argue in v-sheaves and use that U_n-1,=U^_n-1, by <cit.> to see that any finite étale torsor is representable in U_n-1,, hence also on _K,. If X is algebraisable, then so is the coherent Ø_X-module B by rigid GAGA, and it follows that X' is algebraisable, i.e. there is a proper scheme X'_0→(K) such that X'=X'^_0. As explained in the introduction, it follows that _X' is representable in this case. Part (5) follows from the above diagram (<ref>) and <Ref>. Finally, for part (4), we first note the following well-known fact: If _X' is representable, then _X'=H^1_(X',Ø)=H^1_(X,B). This is standard if the Picard functor is defined on all rigid spaces by testing on K[X]/X^2. But the statement is still true if we only know representability on _K,: We first note that for any commutative rigid group G, <cit.> implies that we have G=(_a^+,G). Second, there is a natural map H^1(X',Ø)→(_a^+,_X') induced by inverting p on the map exp:R^1π'_∗pØ^+→ R^1π'_∗Ø^×. Conversely, any homomorphism _a^+→_X' defines an element in H^1(X',1+Ø^+) by <cit.>, whose image under exp:(1+Ø^+)→Ø defines an element in H^1(X',Ø). Thus (_a^+,_X')=H^1(X',Ø). Part (4) about tangent spaces now follows from the local splitting constructed in the diagram (<ref>), and the relation to the Hodge–Tate sequence explained by <Ref>. § INVERTIBLE -MODULES VIA THE EXPONENTIAL We continue with the setup of <Ref>, that is, X is a smooth proper rigid space, B is a coherent Ø_X-module on X_ and ℬ=ν^∗B on X_. Later, we will assume Ø_X→ B is injective. 
The theme of this section is to relate invertible -modules to torsors under the additive group via the exponential. The relation is furnished by the exponential map exp:p^+→^× from <Ref> which by applying -⊗_ on the level of sheaves of groups yields a natural map exp:→^× :=_x↦ x^p^×. In this section, we first explain how this map can be used to construct reductions of structure groups of _, which are always representable. For this, we explain how one can functorially exponentiate the Higgs–Tate torsor to obtain invertible -modules, splitting on K-points. Second, we use this to define a notion of rigidifications of invertible -modules. §.§ Reduction of structure groups Motivated by the map exp:→^×, we now pass from invertible -modules to ^×-torsors. Let us clarify what we mean by this: Let ℒ be an invertible -module on X_. We denote by ℒ^× the associated ^×-torsor given by the invertible sections of . We then write ℒ^×:=_n∈, x↦ x^p (ℒ^⊗ p^n)^× for the torsor obtained by pushout along the homomorphism ^×→^×. Our starting point is the functor _,. Even if _, is representable, this is typically no longer represented by a rigid group. But using that X is quasi-compact, we can still regard it as a moduli functor of isomorphism classes of pro-étale ^×-torsors. Since any rigid vector group is uniquely divisible, we still have a morphism :=:_,→𝒜_B:=H^0(X,_X⊗ B)⊗_a. We now explain the relation to torsors under the additive group : For any pro-étale -torsor M, let us write M^exp:=M×^^× for the pushout along the exponential. We apply this to the following -torsor: Recall from <Ref> that the datum of a ^+/ξ^2-lift 𝕏 induces a ν^∗_X^∨-torsor L_𝕏 on X. By pushout along the tautological map ν^∗_X^∨→ over 𝒜_B=H^0(X,B⊗_X)⊗_a, we obtain from <Ref> a pro-étale -torsor L_𝕏,B over X×𝒜_B. Sending the pro-étale -torsor L_𝕏,B to L^exp_𝕏,B on X×𝒜_B now defines a map e_𝕏:𝒜_B→_, which is a natural splitting of by <Ref>. All in all, we obtain a commutative diagram [column sep = 2.5cm] _,[rd, "[1/p]" description] _,[r, "" description] [u,""] 𝒜_B. [lu, "e_𝕏"', bend right] where : _,→_, is the canonical map. In particular, this shows that the _X'-torsor _,→𝒜_B of <Ref> is split after inverting p, or in other words that the associated element in H^1_(𝒜_B, _X') is trivial. It follows formally, using long exact sequences, that _, admits a reduction of structure group to _X'[p^∞]. In fact, a more canonical construction is possible using the lift 𝕏: Let X be a smooth proper rigid space over K. Let B be a coherent Ø_X-algebra and set :=ν^∗ B. Let 𝕏 be a ^+/ξ^2-lift of X. Consider the abelian sheaf on _K, [column sep = 1.5cm] 𝒫_𝕏:=eq(_,[r, "", shift left] [r, "e_𝕏∘"', shift right] _,). Then the natural maps define a morphism of short exact sequences 0[r] _X'[p^∞][r][d] 𝒫_𝕏[r,""][d] 𝒜_B[r][d,equal] 0 0[r] _X'[r] _,[r,""] 𝒜_B[r] 0. The first sequence is always representable by a short exact sequence of rigid groups. In particular, we have 𝒫_𝕏 =𝒜_B. The connected component of the identity 𝒫^0_𝕏 is p-divisible and the morphism :𝒫^0_𝕏→𝒜_B is still surjective. We first prove left-exactness: Let x,y ∈𝒫_𝕏(Y) be any sections over Y∈_K,. Since e_𝕏 is a section of , it is in particular injective. We thus have equivalences (x)=(y)⇔ e_𝕏∘(x)=e_𝕏∘(y)⇔(x)=(y) ⇔ x · y^-1∈_,[p^∞]=(). Considering multiplication by p on the sequence of <Ref>, we see that _,[p^∞]=_X'[p^∞]. This gives the desired left-exact sequence. 
To see the right-exactness, we use that by considering the cokernel of [p] on the sequence of <Ref>, and using <Ref>, there is a short exact sequence _,_,→H^2_(X',μ_p^∞). Regarding e_𝕏 as an element of _,(𝒜_B), we see that its image in the third term is a homomorphism 𝒜_B→H^2_(X',μ_p^∞) which has to be trivial since 𝒜_B is connected. It follows that there is an étale cover 𝒜'→𝒜_B over which e_𝕏 lifts to a map s:𝒜'→_,. Then by construction, and using that = ∘, we see that e_𝕏∘(s)=e_𝕏∘∘(s)=e_𝕏∘∘ e_𝕏=e_𝕏 =(s). Hence s∈𝒫_𝕏(𝒜'), and we see that :𝒫_𝕏→𝒜_B is surjective. To see the representability, we first note that by the Kummer sequence and <Ref>, we see that _X'[p^∞]=R^1π'_∗μ_p^∞=H^1_(X',μ_p^∞) is representable by a locally constant rigid group. It follows that 𝒫_𝕏→𝒜_B is étale in 𝒜_B^, and we deduce that it is representable by a rigid space. The statement about Lie algebras is immediate from (_X'[p^∞])=0. Finally, to see the claim regarding 𝒫^0_𝕏, we consider multiplication by p^n on the short exact sequence of the first part. Using <Ref>, we obtain an exact sequence 𝒫_𝕏𝒫_𝕏→H^2_(X',μ_p). Let N:=𝒫_𝕏/𝒫_𝕏^∘ be the group of connected components, then Q:=(𝒫^0_𝕏𝒫^0_𝕏) sits in an exact sequence N[p]→ Q→H^2_(X',μ_p). Using <Ref>, we deduce that the map 𝒫^0_𝕏→ Q is trivial, hence 𝒫^0_𝕏 is p-divisible. Finally, since :𝒫^0_𝕏→𝒜_B is still locally split over an open subgroup of 𝒜_B, it follows from the p-divisibility of 𝒫^0 that the map 𝒫^0_𝕏→𝒜_B is still surjective. We thus get a smaller moduli space of invertible -modules which is always representable, at the expense of using the choice of 𝕏. This will allow us to circumvent the question whether _X' is representable while still obtaining a rigid moduli space of invertible -modules. §.§ Rigidifying pro-étale invertible -modules While the rigid group G representing the functor _, is unique up to unique isomorphism if it exists, we have stopped short of giving a canonical universal -module ℒ_ on (X× G)_. Rather, by definition of _,, the identity G→ G only corresponds to an isomorphism class of such objects. For Picard functors, the usual way to rectify this issue is to add a rigidification at a base point x∈ X to the moduli problem. In our case, this would be an isomorphism (ℒ_)|_x× G B_x ⊗_KØ_G of B-linear Ø_G-modules on G_. This exists since (ℒ_)|_x× G is trivial on x× G by definition of _,. This gives a good notion of rigidifications in the case of B=Ø_X, for the same reason that this works for the classical Picard functor: We then have Aut(ℒ_)=(Ø(X)⊗_K Ø(G))^×=(Ø_X,x⊗_K Ø(G))^×=((ℒ_)|_x× G), hence the automorphisms of ℒ_ correspond one-to-one to the automorphisms of the rigidification at x. However, this is no longer true for general B, as the map Ø(B)→ B_x is in general neither injective nor surjective. The failure to be injective means that a rigidification does not detect all automorphisms of ℒ_. The failure to be surjective means that between any two choices of universal invertible -modules with rigidifications there may not necessarily exist an isomorphism comparing rigidifications. Either issue is problematic. The goal of this section is to solve this issue by defining a more elaborate notion of rigidifications for invertible -modules in 𝒫_𝕏. Rather than defining a rigidified moduli problem, which is more complicated, it suffices for our purposes to define rigidifications for K-points of 𝒫_𝕏(K), i.e. for certain invertible -modules on X. This will be enough to prove <Ref>. 
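To illustrate the failure described above with a toy example (with X connected, so that H^0(X,Ø_X)=K; this example is not needed in what follows): let B=Ø_X⊕ M be the square-zero extension of Ø_X by a coherent module M. Then H^0(X,B)^×≅ K^×× H^0(X,M) and B_x^×≅ K^×× M_x, and the comparison map is the identity times evaluation at x. If M has a nonzero global section vanishing at x, this map is not injective, so a rigidification at x does not detect all automorphisms; if H^0(X,M)→ M_x is not surjective, for instance if M is a line bundle with no nonzero global sections, then one cannot always adjust an isomorphism of the underlying modules by a global unit so as to match two given rigidifications.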
The basic idea for this notion of rigidified invertible -modules on X_ is as follows: We wish to compare the natural -torsor L_𝕏 from <Ref> to invertible -modules. To make this precise, we pass from invertible -modules L to ^×-torsors L^×. While the step from L to L^× is harmless, we loose information when passing from L^× to L^×. The idea is therefore to combine this with the datum of a trivialisation of the fibre L_x. In order to make this precise, we first take care of connected components: If X_B is disconnected, we can decompose B=∏ B_i into coherent factors such that each X_B_i is connected, and then _,=∏__i,. Hence we can assume for the purpose of rigidifications that B is connected. Then B(X):=H^0(X,B) is a K-algebra whose reduced quotient is =K. Fix a base-point x∈ X(K). We now assume that Ø_X→ B is injective. Then X_B→ X surjects onto X, so the fibre B_x is non-zero. It thus makes sense to trivialise L_x in terms of B_x. However, such a trivialisation will in general be too much data as (B_x) is usually disconnected. We therefore make the following auxiliary choice (well-defined due to B_x≠ 0). We choose a component z∈π_0((B_x)) and denote by B_x→ B_z the corresponding quotient. For Y∈_K, and any invertible -module M on X× Y, set M_z:=M_x⊗_B_xB_z on x× Y=Y. We make the analogous definition for ^×-torsors. To rigidify ^×-torsors, we use the sheaf L_𝕏,B from <Ref>. While it is clear that its fibre L_𝕏,B,x over x×𝒜_B is a trivial B_x-torsor on 𝒜_B, there is a priori no canonical trivialisation of this torsor. We could fix such a trivialisation by choosing a lift of x to a B_^+/ξ^2-point of 𝕏. However, we may avoid this choice by instead working with the -torsor 𝔏:=𝔏_𝕏,B:=L_𝕏,B⊗π^∗_𝒜_B(L_𝕏,B,x)^-1 on X×𝒜_B, where we have written π_𝒜_B:X×𝒜_B→𝒜_B for the structure map of which x is a splitting. With this definition, there is over x×𝒜_B a tautological rigidification r:(B_z⊗_K𝒪_𝒜_B)𝔏_z. A rigidified invertible -module on X_ is a triple (L,α,β) consisting of * an invertible -module L on X_, for which we set τ:=(L)∈𝒜_B(K), * an isomorphism of B_z-modules α:B_z L_z, * an isomorphism β:𝔏^exp_τ L^× of ^×-torsors over X_, such that the following diagram of abelian groups (=fibres of sheaves over z) commutes: [row sep=0.6cm] B_z^×[r,"α^×[1/p]"][d,equal] L_z^× B^exp_z[r,"(r^exp)_τ"] 𝔏^exp_z,τ[u,"β_z"'] We also call this a z-rigidified invertible -module to indicate the dependence on z. A morphism of rigidified invertible -modules (L,α,β)→ (L',α',β') is a morphism of invertible -modules L→ L' that makes the obvious diagrams comparing α to α' and β to β' commute. When (L,α,β) is a rigidified invertible -module, then the existence of β witnesses that e_𝕏∘(L)=(L), so the isomorphism class of L is necessarily in 𝒫_𝕏(K). Conversely: Let L be an invertible -module on X_ whose isomorphism class lies in 𝒫_𝕏(K)⊆ H^1_(X,^×). Then: * There exists a rigidification (L,α,β) on L. * If (L',α',β') is another rigidified invertible -module on X_ and L'≅ L, then there is a unique isomorphism (L,α,β) (L',α',β') of rigidified invertible -modules. The proof hinges on the following relation: The following diagram of abelian groups is both a pullback and a pushout square: B(X)^×[r] [d] B(X)^×[d] B_z^×[r] B_z^×. Let R be any finitely generated K algebra such that (R) is a single point. Then we have R=R^⊕𝔫 where 𝔫 is the nilradical of R and R^:=R/𝔫 is the reduced quotient. Indeed, the map R→ R^ admits a K-algebra splitting since the assumption on R implies R^=K (as K is algebraically closed). 
Passing to units, this induces a split exact sequence 0→𝔫→ R^×→ K^×→ 0. Indeed, the kernel is 1+𝔫, which identifies with 𝔫 via exp and log. We can thus decompose R^× = 𝔫× K^×, where the first factor is already uniquely p-divisible. Hence we have a short exact sequence 1→μ_p^n→ R^×→ R^×→ 1. Applying this first to R=B(X) and then to R=B_z, we obtain a commutative diagram 1 [r] μ_p^n[r] [d,equal] B(X)^×[d] [r, "· p^n"] B(X)^×[d] [r] 1 1 [r] μ_p^n[r] B_z^×[r, "· p^n"] B_z^×[r] 1 in which the second square is consequently Cartesian. Since _n commutes with finite limits, in particular with forming pullback squares, this shows the pullback property. For the pushout property, we need to show that the following map is surjective: B_z^×⊕ B(X)^×→ B_z^× This follows from the decomposition in (<ref>): We have B_z^×=𝔫× K^×, so the first factor comes from the first summand B_z^× = 𝔫× K^× on the left, while the second factor K^× comes from K^×⊆ B(X)^×. Hence the two summands are jointly surjective onto B_z^×. * The assumptions guarantee that we can find some isomorphisms α:B_z L_z, β:𝔏^exp_τ L^×, which do not need to satisfy any compatibility yet. More precisely, the failure of the square in <Ref> to commute is measured by an element δ:=β∘ r∘α^-1 of (L_z^×)=B_z^×. By the pushout property in <Ref>, we can find an element α_0 of (L_z)=B_z^× and an element β_0 of ( L^×)=B(X)^× such that δ=β_0^-1α_0. Then replacing α by α_0∘α and β by β_0∘β, we obtain α and β making the diagram commute. * We first treat the case that (L',α',β')=(L,α,β), i.e. we show (L,α,β)=1. We have (L)=B(X)^×. Any automorphism b of L changes α by the image of b in B_z^× and β by the image of b in B(X)^×. That b is an automorphism of (L,α,β) means that these images are trivial. Then b is trivial by <Ref>. We now return to the general case: Choose any isomorphism ϕ:L L', then we may pull back α',β' to L and assume without loss of generality that L=L'. By the first part, it suffices to prove the existence of an isomorphism. Let thus (L,α',β') be any other rigidification on L, then δ_α:=α'∘α^-1∈ B_z^× and δ_β:=β'∘β^-1∈ B(X)^× are such that in B_z^×, we have β∘ r∘α^-1=𝕀=β'∘ r∘α'^-1 =δ_β∘β∘ r∘α^-1∘δ_α^-1⇒δ_α=δ_β in B_z^×. It follows from <Ref> that δ_α=δ_β is already in B(X)^×=(L). Hence we may replace ϕ by δ_α∘ϕ to arrange that α'=α and β'=β. Finally, we note that our notion of rigidifications is functorial in B: Let f:B→ B' be a morphism of coherent torsionfree Ø_X-modules such that X_B and X_B' are connected. Let z'∈π_0(X_B',x) and z∈π_0(X_B,x) be such that f(z')=z. Then for any z-rigidified invertible :=ν^∗ B-module ℒ=(L,α,β) on X_, ℒ⊗_':=(L⊗_',α⊗_B_zB'_z',β×^^×'^×) is a z'-rigidified invertible ':=ν^∗ B'-module on X_. Using that f(z')=z, we obtain a natural map B_z→ B'_z'. It therefore makes sense to define α⊗_B_zB'_z':B'_z→ L_z⊗_B_zB'_z'=(L⊗_ℬ')_z'. Let τ:=(L) and τ':=(L⊗_'), then f(τ)=τ' by functoriality of in B in <Ref>. It follows from the construction of 𝔏_𝕏,B that we have a natural isomorphism 𝔏_𝕏,B,τ^exp×^^×[1/p]'^×𝔏_𝕏,B',τ'^exp. Hence we obtain a natural map β×^^×[1/p]'^×:𝔏^exp_𝕏,B',τ' L^××^B^×[1/p]B'^×=(L⊗_')^× The compatibility between α and β is then clearly preserved by forming pushouts. §.§ The p-adic exponential for rigid groups As we have already used earlier, recall that the p-adic exponential is defined as usual as the continuous group homomorphism exp:p^α𝔪_K→ 1+p^α𝔪_K, x↦∑_n=0^∞x^n/n! where α=1/(p-1) for p≥ 3, and α=2 for p=2.
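For the reader's convenience, here is the standard estimate behind this domain of convergence (a routine verification): writing s_p(n) for the sum of the p-adic digits of n, Legendre's formula gives v_p(n!)=(n-s_p(n))/(p-1)≤ n/(p-1), so for x with v_p(x)>1/(p-1) we get v_p(x^n/n!)≥ n·(v_p(x)-1/(p-1))→∞, and the series converges on p^α𝔪_K; for p=2 the choice α=2 in particular ensures v_2(x)>1=1/(2-1).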
Second, we have the p-adic logarithm log:1+𝔪_K→ K, x↦∑_n=1^∞(-1)^n+1(x-1)^n/n whose radius of convergence is larger and that is surjective because K is algebraically closed. An exponential for K is a group homomorphism K→ 1+𝔪_K such that it * splits the logarithm, meaning that log∘=𝕀, and * restricts to exp on p^α𝔪_K. It is clear from (2) that any such exponential is automatically continuous. An exponential always exists. For any commutative rigid group G over K, let G=(_p,G)⊆ G be the topological p-torsion sub-v-sheaf from <cit.>. By <cit.>, G is an open rigid subgroup. If [p]:G→ G is surjective, it is a p-divisible rigid group in the sense of Fargues. Following <cit.>, we then have a logarithm map that fits into a left-exact sequence of rigid groups 0→ G[p^∞]→G→(G)⊗_K _a where (G) is the Lie algebra of G, i.e. the tangent space at the identity. We now use: [<cit.>] Let G be a commutative rigid group such that [p] G→ G is surjective on K-points. Then any exponential K→ 1+𝔪_K induces a continuous splitting _G:(G)→G(K)⊆ G(K) of the logarithm, i.e. log_G∘_G=𝕀. For fixed , the map _G is functorial in G. §.§ Invertible -modules via the exponential Finally, we now consider the case that B is a _X^∨-algebra on X_ that is coherent over Ø_X. Then 𝒜_B(K) has a canonical section: Let X be any smooth rigid space. Set T_X:=^∙_X^∨ and let B be any T_X-algebra. As _X^∨ is projective, the induced Ø_X-linear map _X^∨→ B dualises to a morphism τ_B:Ø→_X⊗ B. This defines a tautological section τ_B∈ H^0(X,_X⊗ B). Let (B_i)_i∈ I be a cofiltered inverse system of coherent Ø_X-modules. Then an invertible (B_i)_i∈ I-module on X_ is a family of invertible _i:=ν^∗B_i-modules L=(L_B_i)_i∈ I and for any j→ i in I an isomorphism ψ_ij:L_B_i L_B_j⊗__j_i satisfying the cocycle condition. Given x∈ X(K) and compatible splittings z_i:π_0(B_i,x)→π_0(B_i), we say that L is rigidified if each L_B_i is a z_i-rigidified _i-module and the ψ_ij respect the rigidifications. We can now finally combine all results that we have discussed up until this point to obtain the following, which summarises the key technical construction of this article: Let X be a smooth proper rigid space over K. Let (B_i)_i∈ I be the cofiltered inverse system of T_X-algebras on X_ for which each B_i is Ø_X-coherent and Ø_X-torsionfree. Set ℬ_i:=ν^∗B_i on X_ where ν:X_→ X_ is the natural map. Assume we are given: * a ^+/ξ^2-lift 𝕏 of X, * an exponential Exp:K→ 1+𝔪_K. Then one can associate to this data an invertible (B_i)_i∈ I-module (L_B_i)_i∈ I (see <Ref>) with ( L_B_i)=τ_B_i, in a way that is unique up to isomorphism and natural in (X,𝕏,x). More precisely, choose x∈ X(K) and a section z of _i ∈ Iπ_0(X_B_i,x)→_i ∈ Iπ_0(X_B_i). Then there is a cofinal inverse system J⊆ I for which there is a z-rigidified invertible (B_j)_j∈ J-module (ℒ_B_j)_j∈ J on X_ with [ℒ_B_j]∈𝒫^0_𝕏(K), unique up to unique isomorphism. We obtain the system for I by choosing for i∈ I\ J some j∈ J with j→ i and defining L_B_i:=ℒ _B_j⊗__j_i. Let B be one of the B_i. To explain the argument, let us first assume that _X_B is representable. Then by <Ref>.2, there exists a rigid group G representing _,. Consider the topological p-torsion subgroup G of G, on which we have the logarithm map log:G→(G), see <ref>. By functoriality of log, this fits into a commutative diagram G(K) [d, "log"] [rr, ""] H^0(X,⊗ B) (G) [r,equal] H^1_(X,) [r, ""] H^0(X,⊗ B)[u,equal,"log"'] where the identification of the bottom row is given by <Ref>.4. By <Ref>.5, the identity component G^0 is such that [p]:G^0→ G^0 is surjective.
It follows from <Ref> that the choice of induces a splitting _G of the logarithm map, natural in G. Second, by <Ref>, the lift 𝕏 induces a splitting of the map , s_𝕏:H^0(X,_X⊗ B) → H^1_(X,) From (<ref>), we see that these two splittings define a splitting of . We thus get an element x_B:=_G(s_𝕏(τ_B))∈G(K)⊆ G(K)=_,(K) where τ_B is the canonical element of H^0(X,B⊗_X) defined in <Ref>. More generally, without any assumptions on representability, we can use the rigid group G:=𝒫^0_𝕏→𝒜_B from <Ref>. This is already topologically p-torsion and p-divisible. Since G=H^0(X,_X⊗ B), we can define x_B to be the image of _G(τ_B)∈ G(K) under the natural map 𝒫^0_𝕏→_,. We note that this still depends on 𝕏 since 𝒫_𝕏 does. Consider now the inverse system (B_i)_i∈ I. This system is cofiltered: For any i,j∈ I, the algebras B_i and B_j are dominated by the product B_i× B_j. Let J⊆ I be the cofinal system of those j∈ I for which zπ_0(X_B_i)→π_0(X_B_i,x) sends π_0(X_B_j) into π_0(X_B_j,x). Treating each connected component of X_B_i separately, we can therefore choose an invertible _i-module ℒ_B_i representing x_B_i and apply <Ref>.1 to endow it with the structure of a rigidified invertible _i-module (L_B_i,α_B_i,β_B_i) that is unique up to unique isomorphism. The desired compatibility in J now follows from <Ref>: This says that for any j→ i in J with associated morphism B_j→ B_i, we can regard (ℒ_B_j,α_B_j,β_B_j)⊗__j_i as a rigidified invertible _i-module on X_. It follows from <Ref>.2 that there is a unique map L_B_j→ L_B_j⊗__j_i L_B_i that is compatible with rigidifications. This satisfies the cocycle condition by uniqueness. It remains to see that the construction is natural in (X,𝕏,x): Let f:Y→ X be any morphism that admits a lift 𝕐→𝕏. For each i∈ I, let A_i be the Ø_Y-torsionfree quotient of f^∗B_i and let M_i:=f^∗L_i⊗_f^∗B_iA_i. Then (M_i)_i∈ I is an invertible (A_i)_i∈ I-module on Y_. Via the natural map T_Y→ f^∗T_X, we can regard (A_i)_i∈ I as a cofiltered inverse system of T_Y-modules on Y_ which are Ø_Y-coherent and Ø_Y-torsionfree. Let (N_i)_i∈ I be the natural invertible (A_i)_i∈ I-module (N_i)_i∈ I defined in the Theorem applied to Y. By functoriality of s_𝕏 and 𝒫^0_𝕏, we always have N_i≅ M_i as ν^∗A_i-modules on Y_ for all i∈ I. We claim that there is an isomorphism (N_i)_i∈ I (M_i)_i∈ I of invertible (A_i)_i∈ I-modules: While it is in general difficult to compare the α-parts of the rigidifications, the β-part gives canonical compatible isomorphisms ϕ_i N_i^× M_i^×. Let ℐ_i⊆Isom_A_i(N_i,M_i) be the subset of isomorphisms such that ϕ_i^× is the canonical map, then ℐ_i is a principal homogeneous space under H^0(X,A_i)^×[p^∞]. For j_1→ j_2, applying -⊗_A_j_1A_j_2 induces a map ℐ_j_1→ℐ_j_2, equivariant with respect to H^0(X,A_j_1)^×[p^∞]→ H^0(X,A_j_2)^×[p^∞]. In particular, if _Ø_Y(A_k) is connected for k=1,2, then ℐ_j_1→ℐ_j_2 is an isomorphisms. By treating connected components separately, we deduce from this that _j∈ Jℐ_j≠∅. § LOCAL CONSIDERATIONS ON THE P-ADIC SIMPSON CORRESPONDENCE VIA TWISTING The final ingredient for our construction of the non-abelian Hodge correspondence are local considerations of how to pass between Higgs bundles and pro-étale vector bundles via twisting with invertible -modules. This is based on the following general construction: Let (E,θ_E) be a Higgs bundle on X. The Higgs field θ_E:E→ E⊗_X dualises to a morphism ^∨_X→(E) sending ∂↦ (EE⊗_XE). Due to the Higgs field condition θ∧θ=0, this extends to an Ø_X-algebra morphism on X_ T_X:=^∙_Ø_X_X^∨→(E). 
Let B_θ be the image of this morphism, this is a commutative subalgebra of (E). Since (E) is coherent as an Ø_X-module, so is its submodule B_θ. There is a canonical section τ_θ∈ H^0(X,B_θ⊗_X) defined as the image of 𝕀∈^∨⊗→ B_θ⊗_X, uniquely determined by the property that H^0(X,B_θ⊗_X)↪ H^0(X,(E)⊗_X) sends τ_θ↦θ_E. We shall also denote B_θ just by B and τ_θ by τ_B when θ is clear from context. Invoking the construction of <Ref> after making the necessary auxiliary choices of base points, the idea of the construction of the p-adic Simpson functor for proper X {Higgs bundles on X}{pro-étale vector bundles on X} will now be to send any Higgs bundle (E,θ) on X to the pro-étale vector bundle ν^∗ E⊗__θL_B_θ where L_B_θ is the invertible _θ:=ν^∗B_θ-module from <Ref> with (L_B_θ)=τ_θ. Note that for this definition, we are free to enlarge B=B_τ, as follows: If T_X→ B'→ B is any Ø_X-coherent sub-quotient of T_X, then B' acts on E via B'→ B, and E⊗_L_B=E⊗_⊗_'L_B'= E⊗_'L_B'. In order to be able to go into the other direction, we need some preparations on pro-étale vector bundles: For this we begin by recalling the local correspondence. As we will explain, one can reinterpret this in terms of twisting with pro-étale invertible -modules. In contrast to Faltings' construction, we do not actually rely on the local correspondence for the construction of 𝒮, but we will use it to see that 𝒮 is an equivalence. §.§ The Local correspondence We call an affinoid rigid space U toric if there is an étale map f:U→𝕋^d to the torus over K which is a composition of rational localisations and finite étale maps. We call f a toric chart. Given a chart f, consider the affinoid perfectoid torus 𝕋^d_∞→𝕋, a pro-étale Galois torsor under the group Δ:=_p(1)^d. We denote by U→ U the pullback along f. The chart f induces parameters T_1,…,T_d∈Ø(U)^× on U which define a basis dT_1/T_1,…,dT_i/T_i of Ω_U. We denote by ∂_1,…,∂_d the dual basis of Ω_U^∨. Then f induces an isomorphism ρ_f:H^0(U,_U)__p(Δ,Ø(U)). which can be characterised as follows: Its dual _p^d(1)→Ω_U^∨(1)(U) is the (1)-twist of the map that sends the standard basis vector γ_i of _p^d to ∂_i. For any coherent Ø_U-module B, we will denote by ρ_f,B or just by ρ_f the induced isomorphism obtained by tensoring with B(U). There is for any smooth rigid space U an intrinsic notion of “smallness” for both pro-étale vector bundles and Higgs bundles. As we will not need the technical details, we just refer to <cit.> for the definition. What will be important for us is only the following: * Any pro-étale vector bundle or Higgs bundle on U becomes small on an étale cover. * For toric U, we have the following equivalence, the “Local correspondence”: [<cit.><cit.>] Let U be a toric smooth rigid space and let f:U→𝕋^d be a toric chart. Then f induces an exact equivalence of categories _f:{small pro-étale vector bundles on U}{small Higgs bundles on U}. It sends (E,θ) to the unique pro-étale vector bundle V for which V( U) is the Δ-module V( U):= Ø( U)⊗_Ø(U)E(U), γ· (a⊗ x)=γ(a)⊗exp(ρ_f(θ)(γ))x where ρ_f:=ρ_f,(E) H^0(U,_U⊗(E))(Δ,(E)) is the map from <Ref>. In particular, we have a natural isomorphism of Ø( U)-modules E( U)=V( U). Let X be any smooth rigid space and let V be a pro-étale vector bundle on X. Then the sheaf of Ø_X-linear endomorphisms (V) is a coherent module on X_. The statement is local, so we may assume that X is toric and V is small. Then by <Ref>, (V)≅(E,θ) for some Higgs bundle (E,θ), and this is coherent. Let now B be any coherent Ø_X-algebra and ℬ:=ν^∗B. 
We choose B^+⊆ B as in <Ref>. Then we also have the following variant which is essentially a weak version of a local correspondence for invertible -modules that will be enough for our purposes: Let ℒ be an invertible -module on X_. Set τ:=_τ(ℒ). Then there is an étale cover of X by toric rigid spaces U→ X with charts f:U→𝕋^d satisfying the following: Let U→ U be the toric Δ-torsor induced by f. The restriction ρ_f(τ|_U):Δ→ B(U) has image in pB^+(U) and ℒ is isomorphic to the invertible -module ℒ_τ|_U,f on U_ defined via descent along U→ U of _| U endowed with the Δ-action defined as follows: ( U)=Ø( U)⊗_Ø(U) B(U), γ· (a⊗ x)=γ(a)⊗exp(ρ_f,B(τ|_U)(γ)). Let U→ X be any étale map from a toric rigid space. By <Ref>, we have a left-exact sequence 0→ H^1_(U,B^×)→ H^1_(U,^×) H^0(U,_U⊗ B) It follows that any invertible -module ℒ' on U_ with _U( ℒ')=_U(ℒ)=τ becomes isomorphic to ℒ after passing to an étale cover. After passing to an open subgroup of Δ by replacing U by a finite étale cover, we can ensure that ρ_f,B(τ) has image in pB^+(U) on which exp is defined. Then the Δ-module ℒ_τ|_U,f defined in the lemma satisfies (ℒ_τ|_U,f)=τ by construction, hence ℒ_τ|_U,f≅ℒ after passing to a further étale cover. We can use this to reinterpret the local correspondence in terms of twisting: In the setting of <Ref>, let (E,θ) be any Higgs bundle. Denote by B⊆(E) the coherent Ø_X-module of <Ref> with section τ_θ∈ H^0(X,⊗ B). Then over an étale cover of U, we have a natural isomorphism _f^-1(E,θ)ν^∗E⊗_ℬℒ_τ_θ,f. Comparing the descriptions in <Ref> and <Ref>, it suffices to see that for any γ∈Δ, the element ρ_f,B(τ_θ)(γ)∈ B(U) acts on E(U) as ρ_f,(E)(θ_E)(γ)∈(E(U)). But by naturality of ρ_f,- applied to the map B→(E), we have a commutative diagram H^0(U,⊗ B)[d] [r, "ρ_f"] (Δ,B(U))[d] τ_θ[d, maps to] [r, maps to] ρ_f(τ_θ) [d, maps to] H^0(U,⊗(E))[r, "ρ_f"] (Δ,(E|_U)) θ_E [r, maps to] ρ_f(θ_E) where the vertical maps send τ_θ to θ_E by the defining property of τ_θ in <Ref>. §.§ Rodríguez Camargo's Higgs field Motivated by earlier results of Pan <cit.>, it was first observed by Rodríguez Camargo in <cit.>, that one can use the Local correspondence to endow any pro-étale vector bundle with a canonical Higgs field in a natural way. Since his construction is written for a different technical setup, we now give a slight reinterpretation, which simplifies the proof somewhat in our special case of interest. Let X be a smooth rigid space over K. To simplify notation, let us still denote by _X the pullback ν^∗_X of the sheaf of <Ref> along ν:X_→ X_. Then there is a unique way to endow any pro-étale vector bundle V on X with a Higgs field θ_V:V→ V⊗_Ø_X_X on X_ in such a way that the following conditions hold: * The association V↦θ_V is functorial in V and X. * If X is toric and f:X→𝕋^d is a toric chart, then θ_V corresponds in terms of the associated Higgs bundle (E,θ_E):=LS_f(V) to the tautological morphism of Higgs bundles θ_E: (E,θ_E)→ (E,θ_E)⊗ (,0), where (,0) is with the trivial Higgs field. * We have θ_V=0 if and only if V is étale-locally trivial on X. For any local basis ω_1,…,ω_d of _X, the tensor product of Higgs bundles (E,θ_E)⊗ (,0) in (2) is E⊗ with the Higgs field given in terms of θ=∑_iθ_iω_i by θ_E: E⊗_X→ E⊗_X⊗_X, ∑_i e_i⊗ω_i↦∑_i∑_j θ_j(e)⊗ω_i⊗ω_j. More explicitly, (2) means that θ_V corresponds on the toric cover X→ X to the natural map Ø( X)⊗_Ø(X) E(X)→Ø( X)⊗_Ø(X) E(X)⊗_X, a⊗ e↦ a⊗θ(e) which commutes with the Δ-action as exp(θ_i) commutes with each θ_j. 
It is clear that (2) defines a Higgs field θ_V:V→ V⊗ for any small pro-étale vector bundle V if X is toric. This is functorial in V since any morphism of Higgs bundles ϕ:(E,θ)→ (E',θ') induces a morphism ϕ⊗𝕀:(E,θ)⊗ (,0)→ (E',θ')⊗ (,0). It therefore suffices to prove that θ_V is independent of the choice of toric chart: Let f' be a second toric chart and let θ_V':V→ V⊗ be the Higgs field induced via _f'. We need to show that θ_V=θ_V'. To see this, we may replace U by any étale cover. Let (E',θ')=_f'(V), then there exists a non-canonical isomorphism between (E,θ) and (E',θ') after étale localisation on U, e.g.  by <cit.>. Indeed, we can see this via twisting: Let B be the coherent quotient of (θ,θ',0):T_X→((E,θ)⊕(E',θ')⊕ (,0)) from <Ref> and let =ν^∗ B. Set L:=ℒ_τ_B,f and L':=ℒ_τ_B,f'. Then by <Ref>, and <Ref>, we see that we can use L and L' to compute _f and _f', namely we can find isomorphisms λ:V=_f^-1(E,θ)ν^∗ E⊗_ L and λ':V=_f^-1(E',θ')ν^∗ E'⊗_ L'. Since (L)=τ_B=(L'), there is by <Ref> after a further localisation a -linear isomorphism ψ:L L'. We can combine this to an isomorphism _f^-1(E,θ)=Vν^∗ E'⊗_ L'ν^∗ E'⊗_ L_f^-1(E',θ'). Since _f^-1 is fully faithful, this comes from an isomorphism ϕ:(E,θ) (E',θ') of Higgs bundles. Summarising the discussion, this shows that the following diagram commutes: θ_V: V [d, "𝕀",xshift=0.32cm] [r, "λ"] ν^∗ E⊗_ L [d, "ϕ⊗ψ"] [r, "θ_E⊗𝕀"] ν^∗(E⊗)⊗_ L [d, "(ϕ⊗𝕀)⊗ψ"] [r, "λ^-1"] V⊗[d, "𝕀⊗𝕀"] θ_V': V [r, "λ'"] ν^∗ E'⊗_ L' [r, "θ_E'⊗𝕀"] ν^∗(E'⊗)⊗_ L' [r, "λ'^-1"] V⊗ This shows that θ_V=θ_V'. Functoriality in X can be seen by the same argument. Alternatively, even without checking that the left square at the end of the proof commutes, we could simply define Ψ:V→ V as the unique isomorphism making the square commute. Then the diagram says that we have an isomorphism Ψ:(V,θ_V)→ (V,θ'_V). But since θ_V commutes with any endomorphism of V by functoriality, this implies that θ_V=θ_V'. If U is a smooth rigid space with toric chart f:U→𝕋^d, then the isomorphism E( U)=V( U) of <Ref> identifies the pullback of θ_E and θ_V to U. Immediate from <Ref>.2 and functoriality in the last part of <Ref>. Let X be a smooth rigid space and let (E,θ_E) be a Higgs bundle on X. Let B⊆(E) and τ_B be associated to θ_E as in <Ref> and set :=ν^∗ B. Let ℒ be any invertible -module on X such that (ℒ)=τ_B. Then V:=ν^∗ E⊗_ℒ is a pro-étale vector bundle on X whose canonical Higgs field θ_V:V→ V⊗_X of <Ref> is given by ν^∗θ_E⊗_ℒ:V→ V⊗_X defined more explicitly as the composition ^∙_X^∨_B(E)(V). The conclusion is étale-local on X, so we may assume that X is toric with a toric chart f:X→𝕋^d. By <Ref>, we can after a further étale localisation assume that there is an isomorphism ℒℒ_τ_B,f, so it suffices to prove the statement for V= ν^∗ E⊗_ℒ_τ_B,f. But then we have _f(V)=(E,θ) by <Ref>. The claim then follows from <Ref>.2 and <Ref>, which say that θ_V=_f(θ_E)=ν^∗θ_E⊗_ℒ_τ_B,f. We now explain how to use the canonical Higgs field θ_V to pass from pro-étale vector bundles to Higgs bundles by twisting with invertible -modules. This is based on the following: As in <Ref>, for any pro-étale vector bundle V on X, we can equivalently regard the canonical Higgs field θ_V as a homomorphism _X^∨→(V), ∂↦ (V V⊗_XV) on X_ that extends to an Ø_X-algebra homomorphism θ_V:T_X→(V). Let B=B_V be the image of this map. By <Ref>, this is a coherent Ø_X-algebra. Set :=ν^∗ B_V, then V is a -module in a canonical way. 
Like in <Ref>, the image of 𝕀∈(, )=^∨⊗ under the map ^∨→ B then defines a canonical section τ_B:=τ_θ_V∈ H^0(X,B⊗). Let X be a smooth rigid space. Let V be a pro-étale vector bundle on X. Let B=B_V and :=ν^∗B_V be as in <Ref>. Let L be any pro-étale invertible on X_ with (L)=τ_B∈ H^0(X,B⊗). Then E:=V⊗_ L^-1 is an analytic-locally trivial vector bundle on X, which inherits a natural Higgs field θ_E:=θ_V⊗_ L^-1. The statement is étale-local, so we may assume that X is toric and fix a toric chart f:X→𝕋^d so that we are in the setup of <Ref>: More precisely, by <Ref>, we may assume that there is a Higgs bundle (E',θ') on X such that V=ν^∗ E'⊗_ℒ_τ_B,f. By <Ref>.2, the canonical Higgs field θ_V is then ν^∗ E'⊗_ℒ_τ_B,f→ν^∗(E'⊗)⊗_ℒ_τ_B,f. More precisely, we a priori need to be more careful and use B=B_θ', but the natural map θ_V:_X^∨→(V) factors through -⊗__θ'ℒ_τ_θ',f:(E',θ')→(V), which shows that B_θ'=B_V. Hence E=V⊗_ L^-1=E'⊗_ℒ_τ_B,f⊗_ L^-1. To prove that E is étale-locally trivial, it thus suffices to prove that ℒ_τ_B,f⊗_L^-1 is an étale-locally trivial invertible -module. By <Ref>, this follows from the fact that (ℒ_τ_B,f⊗_ℒ^-1)=_B(ℒ_τ_B,f)-_B(ℒ^-1)=τ_B-τ_B=0. The Higgs field comes from the fact that -⊗_ L^-1 defines a natural map (V)→(E). We can compose it with θ_V:T_X→(V) to get the desired Higgs field T_X→(E). § THE P-ADIC SIMPSON CORRESPONDENCE §.§ Proof of Main Theorem We can now prove our main result. Let K be a complete algebraically closed extension of _p. Let X be a smooth proper rigid space over K. Let ν:X_→ X_ be the natural map. Let 𝕏 be a ^+/ξ^2-lift of X and let be an exponential for K. By <Ref>, these choices induce a compatible family of invertible ν^∗B-modules ℒ_B for any Ø_X-coherent Ø_X-torsionfree quotient ^∙_Ø_X^∨→ B. Then the following functors define an exact tensor equivalence of categories S_𝕏,:{pro-étale vector bundles on X} {Higgs bundles on X} V ↦ ν_∗((V,θ_V)⊗_ν^∗ B_Vℒ_B_V^-1) ν^∗ E⊗_ν^∗ B_θℒ_B_θ (E,θ) where B_θ and B_V are as defined in Definitions <ref> and <ref>. They are natural in (X,𝕏). Let V be a pro-étale vector bundle on X. Then by <Ref>, the Ø_X-module (V,θ_V)⊗_ν^∗ B_Vℒ_V^-1 on X_ is an analytic-locally trivial vector bundle endowed with a Higgs field. Hence its restriction to X_ is a Higgs bundle. In the other direction, since E is a vector bundle and ℒ_B_θ is pro-étale locally on X isomorphic to _θ:=ν^∗ B_θ, it is clear that ν^∗ E⊗__θℒ_B_θ is a pro-étale vector bundle. So the two mappings are well-defined on objects. To see that they are functorial, let φ V→ W be any morphism of pro-étale vector bundles. If φ is an isomorphism, then we have a canonical isomorphism B_V=B_W which induces an isomorphism S_𝕏,(V)→S_𝕏,(W). In general, we can write φ as a composition V V⊕ W V⊕ W W to reduce to showing that the construction is compatible with direct sums. It is clear from <Ref> and compatibility of the local correspondence with ⊕ that θ_V⊕ W=θ_V⊕θ_W on V⊕ W. Consequently, the restriction of the T_X-action on V⊕ W to either subspace defines natural maps B_V⊕ W→ B_V and B_V⊕ W→ B_W. It thus suffices to observe that for any morphism of Ø_X-coherent T_X-algebras ψ:B'→ B with canonical sections τ_B and τ_B' as in <Ref>, we have by <Ref> a canonical identification (-)⊗_'ℒ_τ_B'^-1= (-)⊗_⊗_'ℒ_τ_B^-1= (-)⊗_ℒ_τ_B^-1, where as usual we write B=ν^∗ B and B'=ν^∗ B'. Applying this transformation to the inclusion ψ=V→ V⊕ W and the projection ψ=V⊕ W→ W, we obtain the desired compatibility with ⊕. The exactness can be seen by the same argument. 
The other direction works in exactly the same way. Thus both functors are well-defined. Let us write T_𝕏, for the functor from right to left. We claim that S_𝕏, and T_𝕏, are mutual quasi-inverses to each other: Let (E,θ_E) be a Higgs bundle on X and V=T_𝕏,(E,θ_E). Then by <Ref>, we have a commutative diagram [row sep = 0cm] (E,θ_E)[dd,hook, "-⊗__θℒ_θ"] T_X [rd, "θ_V"'] [ru, "θ_E"] (V) where θ=θ_E in the subscript. Since the vertical map is clearly injective, we thus have a canonical identification of the respective images B_θ=B_V of θ_E and θ_V, which identifies τ_θ with τ_V and hence also ℒ_V with ℒ_θ. It follows that we have S_𝕏,∘T_𝕏,(E,θ)=ν_∗((E,θ_E)⊗__θℒ_θ⊗__Vℒ_V^-1))=(E,θ). The other direction can be seen in exactly the same way, using instead that the diagram [row sep = 0cm] (E) T_X [rd, "θ_V"'] [ru, "θ_E"] (V) [uu,hook, "-⊗__Vℒ_V^-1"'] commutes by definition of θ_E as in <Ref>. It remains to show that the equivalence is compatible with ⊗. Let (E_1,θ_1) and (E_2,θ_2) be two Higgs bundles on X and recall that their tensor product is defined as (E_1,θ_1)⊗ (E_1,θ_1)=(E_1⊗_Ø_XE_2,θ_1⊗𝕀+ 𝕀⊗θ_2). The associated T_X-module structure on (E_1,θ_1)⊗ (E_2,θ_2) thus factors as θ:T_X→(E_1)⊕(E_2)(E_1⊗ E_2). It follows that the associated coherent torsionfree quotient B=T_X/θ admits a natural map B↪ B_1⊕ B_2 where B_1 and B_2 are the respective Ø_X-algebras associated to (E_1,θ_1) and (E_2,θ_2). It follows from functoriality in <Ref> that we can identify ν^∗(E_1⊗ E_2)⊗_ℒ=ν^∗(E_1⊗ E_2)⊗_(_1⊕_2)(ℒ_1⊕ℒ_2)=(ν^∗ E_1⊗__1ℒ_1)⊗_Ø_X (ν^∗ E_1⊗__2ℒ_2) as we wanted to see. §.§ The cohomological comparison Let X be a smooth rigid space and let (E,θ) be a Higgs bundle on X. Recall that we write ^k_X=∧^k_Ø_X_X=Ω^k_X(-k). The Higgs complex of (E,θ) is then defined as 𝒞^∗_Higgs(E,θ):= [EE⊗^1_XE⊗^2_X… E⊗^n_X] where θ_k(e⊗ w):=θ(e)∧ w. We then define the Dolbeault cohomology of (E,θ) as RΓ_Higgs(X,(E,θ)):=RΓ(X_,𝒞^∗_Higgs(E,θ)). A more conceptual definition of Dolbeault cohomology is given by the observation that RΓ_Higgs(X,(E,θ))=_T_X(Ø_X,E) where E is equipped with the T_X:=^∙^∨-module structure defined by θ. Indeed, to compute the right hand side, we can use the resolution of Ø_X as a T_X-module K_∙:=[T_X⊗_X^d∨→…→ T_X⊗_X^2∨→ T_X⊗_X^∨→ T_X]→Ø_X that locally on X in terms of any choice of basis ∂_1,…,∂_d∈^∨ is the Koszul complex Kos_∙(T_X;∂_1,…,∂_d). One easily sees that _T_X(Ø_X,E)=R_T_X(K_∙,E)=𝒞^∗_Higgs(E,θ). We can now prove the last remaining result mentioned in the introduction: In the setting of <Ref>, let V be a pro-étale vector bundle on X and let (E,θ)= S_𝕏,(V) be the associated Higgs bundle. Then there is a natural isomorphism RΓ_(V)=𝒞^∗_Higgs(E,θ) in D(X_). Its cohomology over X defines a natural isomorphism RΓ(X_,V)=RΓ_Higgs(X,(E,θ)). Following <cit.>, a natural way to prove this would be to show that <Ref> extends to perfect complexes. We believe that, in principle, this should be possible by our approach if one has a canonical Higgs structure on v-perfect complexes on X. We begin by defining a natural morphism in D(X_) 𝒞^∗_Higgs(E,θ)=_T_X(Ø_X,E)→_X_(Ø_X,V)=RΓ_(V). For this, we combine ideas of <cit.> and <cit.>: Let J:=(T_X(E)) and consider T_J:=_n T_X/J^n. Since the natural map T_X→T_J is flat, we have _T_X(Ø_X,E)=_T_J(Ø_X,E). Since each T_X/J^n is Ø_X-coherent, we obtain an invertible ν^∗T_J-module L lifting the invertible -module ℒ=ℒ_B_θ of <Ref>, unique up to isomorphism. 
Note that as we are working in D(X_) and only consider complexes up to isomorphism, we do not need to worry about rigidifications. So the datum of L up to isomorphism suffices. To simplify the notation, let us drop the ν^∗ from notation in the following. We can now define a natural morphism in D(X_) ψ:_T_J(Ø_X,E)=R_T_J(K_∙⊗_T_XT_J,E) R_X_(K_∙⊗_T_XL,E⊗_T_JL). This is functorial in E and B. In particular, we may now without loss of generality make B bigger, i.e. J smaller, to assume that J is contained in the kernel of the projection T_X→Ø_X. Since the T_J-action on E factors through B, we have E⊗_T_JL=E⊗_BL=V. Moreover, since -⊗_T_JL is exact, we have K_∙⊗_T_JL=Ø_X⊗_T_JL=Ø_X⊗_ L=Ø_X. Hence the right hand side of ψ is naturally identified with the desired target _X_(Ø_X,V). It remains to see that ψ is an isomorphism. In other words, we need to check that H^nψ:_T_J^n(Ø_X,E)→^n_X_(Ø_X,V) is an isomorphism on X_. As this can be checked locally on X_, we can now replace X by a toric object f:U→𝕋^d of X_ with a fixed toric chart and assume that E and V are small. In this case, we first note that the chart f defines an integral subsheaf _U^+⊆_U as the preimage of (Δ,Ø^+_U)⊆(Δ,Ø_U) under the isomorphism ρ_f of <Ref>. This is a finite locally free Ø_X^+-submodule of _U. Its dual _U^+,∨:=_Ø_U^+(_U^+,Ø^+_U)=Δ⊗__pØ^+_U inherits from Δ a basis ∂_1,…,∂_d. We can use this to compare T_J to a much smaller submodule: Over U, we first obtain an integral submodule T_U^+:=_Ø_U^+^∙ p^-α_U^+,∨⊆ T_U, where α is as in <ref>. Write J^+:=J_|U∩ T_U^+. We can use this to define the following diagram T_U=Ø_U[∂_1,…,∂_d] [d] [r] _n T_U^+/J^+n[d] [r] T_J 𝒯_U:=𝒪_U⟨ p^-α∂_1,…,p^-α∂_d⟩[r] 𝒪^+_U[[p^-α∂_1,…,p^-α∂_d]] of flat T_U-modules, where the vertical maps send ∂_i→∂_i, and the second vertical map is well-defined as our assumptions on J ensure J^+⊆ (p^-α∂_1,…,p^-α∂_d) T^+_U. By flatness, we can use either of these algebras A to compute _T_X^n(Ø_X,E)_|U as the sheaf _A(K_∙⊗_T_UA,E|_U[n]). We shall use the algebra 𝒯_U. To simplify notation, we will in the following also just write this as 𝒪_U⟨ p^-α∂⟩:=𝒪_U⟨ p^-α∂_1,…,p^-α∂_d⟩, and similarly for the other algebras. Note that due to the assumption that E is small, the T_U-action on E_U extends to a 𝒯_U-action on E_U. In particular, we still have a natural map 𝒯_U→ B. The relevance of the convergence condition is now that after a further étale localisation, we can also lift L to an invertible 𝒯_U-module ℒ: Namely, according to <Ref>, we can simply take ℒ to be the 𝒯_U-module whose Δ-action on 𝒯_U( U) is defined by the continuous 1-cocycle c:Δ→𝒯_U^×(U), γ_i↦exp(∂_i). This allows us to compute ψ_|U more explicitly: It is given by sending a homomorphism K_∙⊗_T_U𝒯_U→ E_|U[n] to the map in D(U_) associated to the Δ-linear morphism of complexes of Ø( U)-modules (K_∙⊗_T_U𝒯_U)( U)→ E⊗_T_U𝒯_U[n]( U) where the action on K_∙ is trivial and the action on the second factors is via c. On the left hand side, we note that we can describe (K_∙⊗_T_U𝒯_U)(U) as the Koszul complex over R:=Ø(U), (K_∙⊗_T_X𝒯_U)(U)=Kos_∙(R⟨ p^-α∂⟩;∂_1,…,∂_d). On the right hand side of <Ref>, we recover V( U)[n] with its natural action via <Ref>. At this point, it suffices to show that every class in ^n_X_(Ø_X,V) is of the form <Ref>. To see this, we use that by <cit.>, the Cartan–Leray sequence of U→ U induces an isomorphism H^n_(Δ,M)=^n_U_(Ø_U,V) for M:=E(U) with Δ-action via c. 
Unravelling the definitions of continuous cohomology H^∗_ and the Cartan–Leray sequence, this means the following: Any element of ^n_U_(Ø_U,V) can be represented by the morphism of complexes of pro-étale sheaves described over U by the Δ-equivariant Ø( U)-linear map of complexes obtained by tensoring the R-linear Δ-equivariant morphism Kos_∙(R[[T_1,…,T_d]];T_1,…,T_d)→ M[n] with Ø( U). Here the action of γ_i∈Δ on the left is given by multiplication with T_i+1 (see for example the proof of <cit.>). Exactly as on the Higgs side, it follows from the fact that V is small that we can instead compute this using the R-linear Δ-equivariant maps Kos_∙(R⟨ p^-αT⟩;T_1,…,T_d)→ M[n]. It thus suffices to see that there is a natural R-linear Δ-equivariant isomorphism Kos_∙(R⟨ p^-α∂⟩;∂_1,…,∂_d)→Kos_∙(R⟨ p^-αT⟩;T_1,…,T_d). To construct this, we first note that the natural map ϕ:R⟨ p^-α∂⟩→ R⟨ p^-αT⟩, ∂↦log(T+1) is well-defined due to the convergence condition, and is Δ-equivariant because ϕ(f(∂)·γ_i)=ϕ(f(∂)exp(∂_i))=f(log(T+1))(T_i+1)=ϕ(f(∂))·γ_i. It is an isomorphism because T↦exp(∂)-1 is an inverse. We thus obtain an isomorphism ϕ:Kos_∙(R⟨ p^-α∂⟩;∂_1,…,∂_d)Kos_∙(R⟨ p^-αT⟩;log(T_1+1),…,log(T_d+1)). Finally, the fact that log(T+1)/T is a unit in R⟨ p^-αT⟩^× means that the right hand side is isomorphic to Kos_∙(R⟨ p^-αT⟩;T_1,…,T_d) by <cit.>. All in all, this shows that H^nψ_|U is indeed an isomorphism, as we wanted to see.
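As a closing illustration, and only as a sketch that is not needed above: suppose that X has dimension 1 and let (E,θ) be the Higgs bundle associated to a pro-étale vector bundle V by the equivalence of the previous subsection. The Higgs complex then has only two terms,

𝒞^∗_Higgs(E,θ) = [ E → E⊗Ω^1_X(-1) ], with differential θ, placed in degrees 0 and 1,

and the theorem identifies RΓ(X_proét, V) with its hypercohomology. Since the coherent cohomology of X is concentrated in degrees 0 and 1, this unravels to

H^0(X_proét, V) = ker( θ : H^0(X,E) → H^0(X, E⊗Ω^1_X(-1)) ),
H^2(X_proét, V) = coker( θ : H^1(X,E) → H^1(X, E⊗Ω^1_X(-1)) ),

while H^1(X_proét, V) is an extension of the kernel of θ on H^1 by the cokernel of θ on H^0. For the trivial pro-étale bundle, whose associated Higgs bundle is (Ø_X, 0), this recovers the Hodge–Tate decomposition H^n(X_proét, Ø_X) ≅ H^n(X, Ø_X) ⊕ H^{n-1}(X, Ω^1_X(-1)).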
http://arxiv.org/abs/2307.02945v1
20230706122434
Cohomologically tropical varieties
[ "Edvard Aksnes", "Omid Amini", "Matthieu Piquerez", "Kris Shaw" ]
math.AG
[ "math.AG", "14T05 (Primary), 14C30, 14F99 (Secondary)" ]
Given the tropicalization of a complex subvariety of the torus, we define a morphism between the tropical cohomology and the rational cohomology of their respective tropical compactifications. We say that the subvariety of the torus is cohomologically tropical if this map is an isomorphism for all closed strata of the tropical compactification. We prove that a schön subvariety of the torus is cohomologically tropical if and only if it is wunderschön and its tropicalization is a tropical homology manifold. The former property means that the open strata in the boundary of a tropical compactification are all connected and the mixed Hodge structures on their cohomology are pure of maximum possible weight; the latter property requires that, locally, the tropicalization verifies tropical Poincaré duality. We study other properties of cohomologically tropical and wunderschön varieties, and show that in a semistable degeneration to an arrangement of cohomologically tropical varieties, the Hodge numbers of the smooth fibers are captured in the tropical cohomology of the tropicalization. This extends the results of Itenberg, Katzarkov, Mikhalkin, and Zharkov. [ Aristides Gionis August 1, 2023 ==================== § OVERVIEW The tropicalization process transforms algebraic varieties into piecewise polyhedral objects. While losing part of the geometry, some of the invariants, such as dimension and degree, of the original variety can still be computed from its tropicalization. For the complement of a hyperplane arrangement, Zharkov shows that the tropical cohomology of the tropicalization computes the usual cohomology of the variety <cit.>. Moreover, Hacking relates the top-weight mixed Hodge structure of a variety to the homology of its tropicalization <cit.>. We are interested in determining for which varieties similar types of results hold. We introduce the relevant concepts and notation before stating our results. Let N be a lattice of rank n, M the dual lattice, and = ([M]) ≅ (^*)^n the corresponding torus. We let N_ and N_ denote N ⊗_ and N ⊗_, respectively. Let ⊆ be a non-singular subvariety of and denote by = () its tropicalization <cit.>. A unimodular fan Σ in N_ with support gives rise to a complex toric variety _Σ and a tropical toric variety _Σ. Taking the closures of and X in _Σ and _Σ, respectively, gives compactifications and . We note that the compactifications depend on the choice of the fan Σ whose support is (X), however, we have chosen not to indicate it in the notation for or . Here and elsewhere in the paper, we use bold letters for algebraic varieties and regular letters for tropical varieties. For a complex variety , we denote by H^∙() the cohomology ring of with coefficients in . For a tropical variety Z, the k-th tropical cohomology group of Z can be defined as H^k(Z) ⊕_p + q = k H^p,q(Z), where H^p,q(Z) is the (p,q)-th tropical cohomology group with -coefficients introduced in <cit.>, see <ref>. The tropical cohomology groups together form a ring H^∙(Z) = ⊕_k H^k(Z), the product structure being induced by the cup product in cohomology MZ14, GS-sheaf. We note that the tropical cohomology of Z depends only on Z. In particular, if Z = (), no information about beyond () goes into the recipe for computing H^∙(()). The question addressed in this paper can be informally stated as follows: Under which conditions can the cohomology of be related to the tropical cohomology of ()? Let , Σ and be as above, with and the corresponding compactifications. 
We define X,Σ to be the ring homomorphism X,Σ H^∙ () → H^∙ () between the cohomologies of and , defined by composing the isomorphism H^k,k() ≅ A^k(_Σ), proved in <cit.>, with the cycle class map A^k(_Σ) → H^2k(_Σ) and the pullback morphism on cohomology associated to the embedding ↪_Σ. The groups H^p,q() are sent to zero by X,Σ for p ≠ q. We refer to <ref> for more details. In the following, we will use the map not only on and , but also on some of their subvarieties: the toric varieties _Σ and _Σ are endowed with natural stratifications induced by the cone structure of Σ. Each cone σ∈Σ gives rise to the torus orbits ^σ and N^σ_ in _Σ and _Σ, respectively, with corresponding lattice N^σ. The closures in _Σ and _Σ of these orbits are denoted by _Σ^σ and _Σ^σ, respectively, and are isomorphic to the complex and tropical toric varieties associated to the star fan Σ^σ of σ in Σ. Intersection with these strata induce a stratification of and . We denote by ^σ = ∩^σ and ^σ=∩ N_^σ the stratum associated to σ∈Σ, and by ^σ and ^σ their closures in and , respectively. The stratum ^σ is a closed subvariety of the torus ^σ and its tropicalization coincides with ^σ. Moreover, the star fan Σ^σ is a unimodular fan with support ^σ. We thus obtain a morphism H^∙ (^σ) → H^∙ (^σ) that we also denote by τ^*. Let ⊆ be a subvariety, Σ a unimodular fan with support = (), and and the corresponding compactifications. We say that is cohomologically tropical with respect to Σ if the induced maps ^σ,Σ^σ H^∙ (^σ) → H^∙ (^σ) are isomorphisms for all σ∈Σ. We show that the property of being cohomologically tropical for schön subvarieties of tori does not depend on the chosen unimodular fan. Recall from <cit.> that a subvariety ⊆ is schön if for some, equivalently for any, unimodular fan Σ of support (), the open strata ^σ, σ∈Σ, of the corresponding compactification are all non-singular. It also implies that the compactification is non-singular, and that ∖ is a simple normal crossing divisor. *theorem:ct_fan_indep<ref> \begintheorem:ct_fan_indep Suppose that the subvariety ⊆ is schön and let = () be its tropicalization. The following are equivalent. * There exists a unimodular fan Σ with support such that is cohomologically tropical with respect to Σ. * For any unimodular fan Σ with support , is cohomologically tropical with respect to Σ. \endtheorem:ct_fan_indep Such a schön subvariety ⊆ will be called cohomologically tropical. For example, the linear subspaces in ^n, restricted to the torus, form a family of cohomologically tropical subvarieties. These very affine varieties are complements of hyperplane arrangements, see <ref>. A generalization is given in <cit.> in which Schock defines quasilinear subvarieties of tori as those having a tropicalization which is tropically shellable in the language of <cit.>. He shows that these subvarieties are necessarily schön. It follows from his results that quasilinear subvarieties of tori are cohomologically tropical. We now introduce a class of subvarieties ⊆ with cohomology amenable to a tropical description using the notion of mixed Hodge structures, see <ref>. A non-singular subvariety ⊆ of the torus is called wunderschön with respect to a unimodular fan Σ with support () if all the open strata ^σ of the corresponding compactification are non-singular and connected, and the mixed Hodge structure on H^k(^σ) is pure of weight 2k for each k. In particular, a point in the torus is wunderschön. It follows from the preceding discussion that if is wunderschön, it is schön. 
Therefore, if is the compactification with respect to a unimodular fan Σ, the boundary ∖ is a strict normal crossing divisor. We prove that the property of being wunderschön is independent of the fan, and that the cohomology of a wunderschön variety is divisorial in the sense of <ref>. *theorem:wunderschon_fan_independent<ref> \begintheorem:wunderschon_fan_independent Suppose that the subvariety ⊆ is wunderschön with respect to some unimodular fan. Then is wunderschön with respect to any unimodular fan with support =(). \endtheorem:wunderschon_fan_independent *theorem:wunderschon_Divisorial<ref> \begintheorem:wunderschon_Divisorial Let ⊆ be a wunderschön subvariety. Let be the compactification of with respect to a unimodular fan Σ with support = (). Then the cohomology of is divisorial and generated by irreducible components of ∖. \endtheorem:wunderschon_Divisorial A tropical variety is called a tropical homology manifold if any open subset in verifies tropical Poincaré duality. For a tropical variety which is the support of a tropical fan, this amounts to the property that for some, equivalently for any, rational unimodular fan Σ of support , the corresponding open strata ^σ verify tropical Poincaré duality for all σ∈Σ. In particular this implies that, for any unimodular fan Σ of support X, any open subset of the corresponding tropical compactification verifies tropical Poincaré duality. A tropical fanfold is called Kähler if for some, equivalently for any, quasi-projective unimodular fan Σ with support , and for any σ∈Σ, the Chow ring A^∙(Σ^σ) verifies the Kähler package, that is, Poincaré duality, hard Lefschetz theorem and Hodge-Riemann bilinear relations. Here, for a unimodular fan Σ, the Chow ring A^∙(Σ) coincides with the Chow ring of the corresponding toric variety _Σ. We have the following main theorem on characterization of cohomologically tropical subvarieties of tori. *theorem:main<ref> \begintheorem:main Let ⊆ be a schön subvariety with support = (). Then the following statements are equivalent. * is wunderschön and is a tropical homology manifold, * is cohomologically tropical. Moreover, if any of these statements holds, then is Kähler. \endtheorem:main We deduce the following result from the above theorem. *theorem:openCT<ref> \begintheorem:openCT[Isomorphism of cohomology on open strata] Suppose that ⊆ is schön and cohomologically tropical. Let Σ be any unimodular fan with support =(). Then we obtain isomorphisms H^k (^σ) H^k(^σ) for all σ∈Σ and all k. \endtheorem:openCT Going beyond cohomologically tropical subvarieties of tori, and following the work of <cit.>, one can ask the following question. Which families _t of complex projective varieties over the complex disk degenerating at t=0 have the property that the tropical cohomology of their tropical limit captures the Hodge numbers of a generic fiber in the family? In <ref>, we weaken the condition given in <cit.> by showing that it suffices to ask the open components of the central fiber to be cohomologically tropical and schön. By <ref>, this is equivalent to asking the maximal dimensional strata to be wunderschön and their tropicalizations to be tropical homology manifolds. More precisely, let π→ D^* be an algebraic family of non-singular algebraic subvarieties in ^n parameterized by a punctured disk D^* and with fiber _t over t ∈ D^*. Let Z⊆^n be the tropicalization of the family. 
By Mumford's proof of the semistable reduction theorem <cit.>, we find a triangulation of Z (possibly after a base change) such that the extended family π→ D is regular and the fiber over zero _0 is reduced and a simple normal crossing divisor. Note that since the extended family is obtained by taking the closure in a toric degeneration of ^n, each open stratum in _0 will be naturally embedded in an algebraic torus. We say that a tropical variety is a tropical homology manifold if all of its local tropical fanfolds verify tropical Poincaré duality. *theorem:globalization<ref> \begintheorem:globalization Let π→ D^* be an algebraic family of subvarieties in ^n parameterized by the punctured disk and let π→ D be a semistable extension. If the tropicalization Z⊆^n is a tropical homology manifold and all the open strata in _0 are wunderschön, then H^p,q(Z) is isomorphic to the associated graded piece W_2p/W_2p-1 of the weight filtration in the limiting mixed Hodge structure H_lim^p+q. The odd weight graded pieces in H_lim^p+q are all vanishing. Moreover, for t ∈ D^*, we have H^p,q(_t) = H^p,q(Z), for all non-negative integers p and q. \endtheorem:globalization We refer to <cit.> for other interesting results connecting the topology of tropicalizations to the Hodge theory of nearby fibers. §.§ Acknowledgement The research of E. A. and K. S. is supported by the Trond Mohn Foundation project “Algebraic and Topological Cycles in Complex and Tropical Geometries”. O. A. thanks Math+, the Berlin Mathematics Research Center, for support. M. P. has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101001995). This project was started during a visit to the Norwegian Academy of Science and Letters under the Center for Advanced Study Young Fellows Project “Real Structures in Discrete, Algebraic, Symplectic, and Tropical Geometries”. We thank the Centre of Advanced Study and Academy for their hospitality and wonderful working conditions. We thank as well the hospitality of the mathematics institute at TU Berlin where part of this research was carried out. § PRELIMINARIES §.§ Subvarieties of the torus and tropicalization We briefly recall the tropicalization of subvarieties of tori. Let N be a lattice of rank n, M its dual, N_ = N ⊗, and =_N =([M]) ≅n. Let be a d-dimensional subvariety of the torus , so that = (I) for an ideal I ⊆[M]. The tropicalization of can be described using initial ideals, see e.g. <cit.>, () = { w ∈ N_ | _w(I) ≠⟨ 1 ⟩}. A d-dimensional fan Σ is weighted if it comes equipped with a weight function Σ_d → where Σ_d denotes the d-dimensional cones. A tropical fan is a weighted fan which is pure dimensional and which satisfies the balancing condition in tropical geometry <cit.>. A fanfold is a subset of N_ which is the support of a rational fan, and it is a tropical fanfold if it is the support of a tropical fan. The tropicalization X () is a tropical fanfold, and any fan structure on () is equipped with a weight function _ induced by . If Σ is a rational fan in N_ of support X and η is some facet of Σ, then for a generic point w in the relative interior of η, the variety (_w(I)) is a union of translates of torus orbits. Then _(η) is equal to the number of such torus orbit translates counted with multiplicity. This number is invariant for generic choices of points in the relative interior of η. 
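To fix a concrete example (a standard computation recorded here only for illustration, stated with the min-convention for initial forms; the opposite convention replaces each ray by its negative): let X = V(x+y+1) ⊆ (ℂ^*)^2 be a line in a two-dimensional torus. For w = e_1 = (1,0) the initial form of x+y+1 is y+1, for w = e_2 = (0,1) it is x+1, and for w = -e_1-e_2 it is x+y; in each of these cases the initial ideal cuts out a single translate of a one-dimensional subtorus, whereas for w outside the three rays spanned by these vectors the initial form is a monomial and the initial ideal is the unit ideal. Hence the tropicalization of X is the union of the rays spanned by e_1, e_2 and -e_1-e_2, each of weight 1, and the weighted sum of the primitive ray generators is e_1 + e_2 + (-e_1-e_2) = 0, as required by the balancing condition recalled next.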
The tropicalization endowed with the weight function _ satisfies the balancing condition and thus is a tropical fanfold in N_ <cit.>. §.§ Tropical compactifications of complex varieties We now briefly review the notion of tropical compactifications introduced in <cit.>. Let Σ be a fan in N_, and _Σ the associated toric variety. There is a bijection between cones in Σ and torus orbits in _Σ. For each σ∈Σ, we denote by ^σ the corresponding torus orbit. The closure ^σ is the disjoint union _γ⊇σ^γ, for cones γ∈Σ containing σ. For ⊆ a subvariety and its closure in _Σ, we have = _σ∈Σ^σ∩. We denote the stratum ^σ∩ by ^σ, and its closure by ^σ. Note that, ^σ = _γ⊇σ^γ. For Σ a unimodular fan with support equal to =(), the closure of in _Σ is compact, giving a tropical compactification <cit.>. Moreover for such a Σ, the compactification of in _Σ is said to be schön if the torus action ×→_Σ is non-singular and surjective, in which case is non-singular, and the boundary ∖ is a simple normal crossing divisor <cit.>. The compactification is schön if and only if ^σ is non-singular for each σ∈Σ <cit.>. If admits a schön compactification, then any unimodular fan with support equal to will provide a schön compactification <cit.>, and in this case we will say that is schön. For f= ∑_I∈Δ(f) a_I x^I ∈[x_1^±, …, x_n^±] a Laurent polynomial, it is pointed out in <cit.> that the very affine hypersurface =V(f) being schön is equivalent to the condition that f is non-degenerate (with respect to its Newton Polytope), a concept studied in <cit.> and <cit.>. For each face γ∈Δ(f) of the Newton Polytope of f, one defines f_γ= ∑_I∈γ a_I 𝐱^I. Then f is non-degenerate if, for all γ∈Δ(f), the polynomials x_1 ∂ f_γ/∂ x_1, …, x_n ∂ f_γ/∂ x_n share no common zero in (^*)^n. This implies that =V(f) is schön by for instance <cit.>. §.§ Canonical compactifications of tropical varieties Let Σ be a rational fan in N_. The dimension of a cone σ will be denoted by σ, and we denote by Σ_k the set of cones of Σ of dimension k. The unique cone of dimension 0 is denoted . Let γ, δ be two faces of Σ. We write γδ if γ is a face of δ. For δ∈Σ a cone, the saturated sublattice parallel to δ is denoted N_δ, and the quotient lattice N/N_δ is denoted N^δ, with quotient maps π^δ N → N^δ and π^δ N_→ N_^δ. Furthermore, the star at δ is the fan Σ^δ in N^δ_ whose cones are given by π^δ (σ) |δσ. We briefly review the construction of tropical toric varieties, referring to <cit.> for a detailed construction. Let = ∪{+∞} denote the tropical semi-field. Denote by σ^∨ the semigroup of element of M_ which are nonnegative on σ. For each σ∈Σ, one defines U_σ^_semigroup(σ^∨∩ M, ), which can be identified with the set _δσ N_^δ. We equip U_σ^ with the subset topology of the product topology on the infinite product ^σ^∨∩ M. For σ unimodular, U_σ^ is isomorphic to ^n-σ×^σ. For δσ, the inclusion identifies U_δ^ as an open subset of U_σ^. The tropical toric variety _Σ associated to Σ is the space given by gluing the U_σ^ along common faces, with underlying set _σ∈Σ N_^σ. Let Σ be a fan with support . The canonical compactification of relative to the fan Σ is the closure of as a subset of its tropical toric variety _Σ. Furthermore, has a cellular structure, which we denote Σ. See <cit.> for details. For any cone σ∈Σ, we denote by ^σ the fanfold associated to Σ^σ. 
The canonical compactification ^σ of ^σ is canonically isomorphic to the closure of ^σ when considered as a subset in N_^σ⊆_Σ, and we will denote this compactified fanfold by ^σ, when Σ is understood from the context. Moreover, there is an inclusion of canonical compactifications i ^σ↪^δ for δσ. When = () the tropical canonical compactification relative to any fan Σ with support is the same as the extended tropicalization of the closure ⊆_Σ in the sense of <cit.>. §.§ Mixed Hodge structures Keeping the notation from <ref>, let ⊆ be a non-singular subvariety, and Σ a unimodular fan supported on the tropicalization = (), so that we obtain a tropical compactification of . Moreover suppose that the boundary ∖ is a simple normal crossing divisor. We have that = ⋃_ζ∈Σ_1^ζ. By <cit.>, the logarithmic de Rham complex Ω^∙_(log) induces an isomorphism H^k(;)≅^k (;Ω^∙_(log)), for each k. Moreover, there is a weight filtration W_∙ on the logarithmic de Rham complex, which gives a mixed Hodge structure on H^k(). This is given by the Deligne weight spectral sequence E_1^-p,q=H^q-2p(_σ∈Σ_p^σ) H^q-p(), which degenerates on the E_2-page. Below, we display the rows E_1^∙,2k+1 and E_1^∙, 2k, where the rightmost elements are in position (0,2k+1) and (0,2k), respectively. [column sep = tiny, row sep= tiny] ⊕_σ∈Σ_k H^1(^σ) ⊕_δ∈Σ_k-1 H^3(^δ) ⋯ ⊕_ζ∈Σ_1 H^2k-1(^ζ) H^2k+1() ⊕_σ∈Σ_k H^0(^σ) ⊕_δ∈Σ_k-1 H^2(^δ) ⋯ ⊕_ζ∈Σ_1 H^2k-2(^ζ) H^2k()[from=2-3, to=2-4] [from=2-4, to=2-5] [from=2-1, to=2-2] [from=2-2, to=2-3] [from=1-1, to=1-2] [from=1-2, to=1-3] [from=1-3, to=1-4] [from=1-4, to=1-5]. All the differentials are sums of Gysin homomorphisms with appropriate signs. Recall that, given a unimodular fan Σ, and a pair of faces σ,δ∈Σ such that δ is a codimension one face of σ, the inclusion map i^σ→^δ induces a restriction map in cohomology i^* H^∙ (^δ)→ H^∙ (^σ), with dual map i_* H^∙ (^σ)^*→ H^∙ (^δ)^*. Applying the Poincaré duality for both ^σ and ^δ gives a map PD_^δ^-1∘ i_* ∘PD_^σ H^∙(^σ) → H^∙+2(^δ), called the Gysin homomorphism and denoted _σδ. Since the Deligne spectral sequence degenerates at the E_2 page, the cohomology of the rows E_1^∙,2k+1 and E_1^∙, 2k yields the following associated graded elements [column sep = tiny, row sep=tiny] _2k+1^W (H^k+1) _2k+1^W(H^k+2) ⋯ _2k+1^W(H^2k) _2k+1^W(H^2k+1) _2k^W (H^k) _2k^W(H^k+1) ⋯ _2k^W(H^2k-1) _2k^W(H^2k), where H^k H^k() and _l^W(H^k) denotes the weight l part of the mixed Hodge structure on H^k. Recall that a mixed Hodge structure H is pure of weight n if _i^W (H)=0 for i≠ n. A mixed Hodge structure H is Hodge-Tate if _k^W(H) is of type (l,l) if k=2l and 0 for k odd, see, e.g., <cit.>. §.§ Wunderschön varieties We now consider wunderschön varieties ⊆ as introduced in <ref>. As we noted previously, wunderschön varieties are schön. In addition, we have the following. If a non-singular subvariety ⊆ is wunderschön with respect to Σ, then the weight function of the tropicalization _ is equal to one on all top dimensional faces η of Σ. The weight _(η) is equal to the intersection multiplicity of with the toric stratum _Σ^η. In other words, it is the number of points in the variety ^η counted with multiplicities. Since is wunderschön, the variety ^η must consist of a single point. Hence, for all facets η we have _(η) = 1. A consequence of the wunderschön property is that, for each σ∈Σ, the even rows of the E_2 = E_∞-page for ^σ, taking a priori the form shown in (<ref>), are in fact zero except in the leftmost position, which implies that H^k(^σ) = _2k^W(H^k(^σ)). 
Moreover, the odd rows of the E_1-page are all identically zero by the following lemma. Let ⊆ be a wunderschön variety with respect to Σ. Then H^2k-1(^σ)=0 for k=1,…, (^σ) and all σ∈Σ. The property is true for a wunderschön point. By induction on dimension, we have H^2k-1(^σ)=0 for k=1,…, (^σ) and all cones σ except the central vertex , so that it remains to prove that H^2k-1()=0 for k=1,…,(). For each such k, the (2k-1)-th row of the E_2-page of the Deligne spectral sequence is given by E_2^0,2k-1 = H^2k-1()= _2k-1^W(H^2k-1()) and all other terms are 0. Since is wunderschön, _2k-1^W(H^2k-1())=0, and so H^2k-1()=0. Since the E_2-page is the cohomology of the E_1-page, this proves the following lemma. For ⊆ a wunderschön variety with respect to Σ, and for each cone σ∈Σ and each k, we have the following exact sequences 0 H^k(^σ) ⊕_μσ μ=σ+k H^0(^μ) ⊕_νσ ν=σ+k-1 H^2(^ν) ⋯ ⋯⊕_ξσ ξ=σ+1 H^2k-2(^ξ) H^2k(^σ) 0, where denotes the logarithmic residue map and denotes a signed sum of suitable Gysin maps. [Wunderschön curves are rational] We classify wunderschön curves ⊆. A tropical compactification consists of adding points to . Points have pure mixed Hodge structure on their cohomology. Thus, for to be wunderschön with respect to a fan Σ, it is necessary that each stratum X^ζ for ζ∈Σ_1 be connected, i.e., consists of a single point. The Deligne weight spectral sequence degenerates on the E_2 page, and is shown in <ref> and <ref>. Note that H^2() is trivial. Moreover, if is wunderschön, then H^1() = _1^W(H^1()) must be trivial. Therefore, a non-singular curve ⊆ is wunderschön if and only if the curve is isomorphic to ^1 and it meets each toric boundary divisor of _Σ in only one point. We conclude that the only wunderschön (open) curves are complements of a finite set of points in a non-singular rational curve. §.§ Tropical homology and cohomology We now briefly sketch the theory of tropical homology and cohomology, and refer to <cit.> for details. We work with -coefficients. Let Σ be a rational fan in N_ with support . Let be the closure of inside the tropical toric variety _Σ. The closure has a cellular structure Σ where the cells of Σ consist of the closures of the cones in Σ^σ for all σ∈Σ. In particular, each face of Σ is indexed by a pair of cones σ, γ∈Σ satisfying γσ, and denoted C_γ^σ∈Σ. For each face C_γ^σ∈Σ, the p-th multi-tangent space _p(C_γ^σ) (with -coefficients) is defined as _p (C_γ^σ) ∑_ηγ⋀^p (N_η/ N_σ) ⊗⊆⋀^p N_^σ. Moreover, for αβ two faces of Σ, there is a map ι_βα_p(β) →_p(α), which is an inclusion if both faces lie in the same subfan Σ^σ for some σ, or if α = C_η^σ and β = C_η^σ' with ησσ', then ι_βα is induced by the projection N^σ'→ N^σ. Generally, the map ι_βα is defined as compositions of such inclusions and projections. Furthermore, by dualizing, we obtain the p-th multi-cotangent spaces ^p(α) and reversed morphisms. By selecting orientations for each of the cones α∈Σ, we obtain relative compatibility signs sign(α,β) ∈± 1 for αβ with β = α+1. We may thus use the multi-tangent spaces to define a chain complex C_p,q(Σ) ⊕_α∈Σ_q_p(α), that is, summing over faces α of dimension q in Σ, with differentials ∂_q C_p,q(Σ) → C_p,q-1(Σ) defined component-wise as the maps sign(α,β)ι_βα when αβ and β = α+1, and defined to be 0, otherwise. Similarly, by dualizing everything, we obtain a cochain complex C^p,q(Σ) for the multi-cotangent spaces. The homology groups H_p,q(Σ) H_q(C_p,∙(Σ)) of the complex C_p,∙(Σ) are invariants of the canonically compactified support of the support of the fan Σ. 
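Before turning to the definition, it may help to record the smallest computation with this complex; it is a routine check, independent of the chosen orientations. Let Σ be the one-dimensional fan in ℝ^2 whose three rays are spanned by v_1 = e_1, v_2 = e_2 and v_3 = -e_1-e_2 (the tropicalization of a line V(x+y+1) ⊆ (ℂ^*)^2), and consider the cell structure on the canonical compactification of its support: one vertex at the origin, three closed edges, and three vertices at infinity. For p = 1 the multi-tangent spaces are span(v_1,v_2,v_3) ≅ ℚ^2 at the origin, ℚ·v_i along the i-th closed edge, and 0 at the vertices at infinity, so the complex C_{1,∙} reduces to a single map ℚ^3 → ℚ^2 sending the generator of the i-th edge to ±v_i. The only relation among the v_i being v_1+v_2+v_3 = 0, this map is surjective with one-dimensional kernel, so H_{1,1} ≅ ℚ, generated by the cycle assigning to each closed edge its primitive vector, and H_{1,0} = 0. For p = 0 the complex is the ordinary cellular chain complex of a contractible space with ℚ-coefficients, giving H_{0,0} ≅ ℚ and H_{0,1} = 0; the groups with p = 2 vanish since the quotients N_η/N_σ have rank at most one. The outcome is consistent with tropical Poincaré duality and with the cohomology of the compactification of V(x+y+1) with respect to this fan, which is a projective line.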
Therefore, we define the tropical homology of as the homology H_p,q() H_q(C_p,∙(Σ)) of the complex C_p,∙(Σ). The tropical cohomology of is H^p,q() H^q(C^p,∙(Σ)). In fact, tropical homology and cohomology can be defined for any rational polyhedral space. Moreover, there are various equivalent descriptions of tropical (co)homology in terms of cellular, singular, and sheaf theoretic terms <cit.>. For any rational polyhedral space Z, we set H^k(Z) ⊕_p+q=k H^p,q(Z). For example, for a fanfold X, the tropical homology is H_p,q(X)= _p() if q=0 and 0 otherwise, and the tropical cohomology of is H^p,q(X)= ^p() if q=0 and 0 otherwise <cit.>. If is a tropical fanfold, the balancing condition implies the existence of a fundamental class []∈ H_d,d(), which induces a cap product ⌢ [] H^p,q()→ H_d-p,d-q() for each p,q∈0,…,d. When these maps are isomorphisms for all p and q, the variety is said to satisfy tropical Poincaré duality. A tropical fanfold is called a tropical homology manifold if one of the three following equivalent conditions hold: * There exists a unimodular fan Σ with support equal to such that each of the canonical compactifications ^σ satisfies tropical Poincaré duality, for all cones σ∈Σ. * For any unimodular fan Σ with support equal to , each of the canonical compactifications ^σ satisfies tropical Poincaré duality, for all cones σ∈Σ. * Any open subset U of X satisfies tropical Poincaré duality, , the tropical Poincaré duality induces an isomorphism between the tropical cohomology and the tropical Borel-Moore homology of U (see <cit.> for details). This definition corresponds to the notion of tropical smoothness in <cit.> and to local tropical Poincaré duality spaces in <cit.>. The equivalence of the three statements is non-trivial and follows from Theorems 3.20, 3.23 and 7.9 of the article <cit.>. §.§ Chow rings of fans We now recall some facts about the Chow ring of a fan, see for instance AP-hi, AP-tht for more details. Let Σ be a unimodular fan in a vector space N_. The Chow ring A^∙(Σ) is the quotient ring A^∙(Σ) [_ζ|ζ∈Σ_1](I+J) with a variable _ζ for each ray ζ∈Σ_1. Here I is the ideal generated by all monomials _ζ_1⋯_ζ_l such that the rays ζ_1,…, ζ_l do not form a cone of Σ; and J is the ideal generated by the expressions ∑_ζ∈Σ_1⟨ m, _ζ⟩_ζ, where _ζ∈ N is the primitive vector of the ray ζ and m ranges over elements of the dual lattice M. For σ∈Σ, we define _σ_ζ_1⋯_ζ_k, where ζ_1, …, ζ_k are the rays of σ. As a vector space, A^∙(Σ) is generated by _σ, σ∈Σ. For a pair of cones δσ, there is a Gysin map _σδ A^∙(Σ^σ)→ A^∙+σ-δ (Σ^δ). This map is defined by mapping _η'∈Σ^σ to _η'_ζ_1⋯_ζ_r, where η' is a face of Σ^σ, η is the corresponding face in Σ^δ, and ζ_1,…, ζ_r are the rays of σ not in δ. Since Σ is unimodular, there is an isomorphism of rings Φ_Σ A^∙(Σ) A^∙(_Σ) from the Chow ring of Σ to the Chow ring of the toric variety _Σ, see e.g., <cit.>. Furthermore, the cycle class map cyc_Σ A^∙(_Σ) → H^2∙(_Σ) gives a graded ring homomorphism to cohomology, see <cit.>. Consider a subvariety of the torus, and assume that the support of Σ is (). Let be the corresponding compactification. There is the restriction map of rings r^* H^∙ (_Σ) → H^∙ (). Composing all these homomorphisms gives a morphism of rings Φ A^∙(Σ) → H^2∙() which maps _σ to the class of ^σ. In the tropical world, there is a similar map. Let be the support of Σ and let be the corresponding compactification. One can consider the composition A^∙(Σ) → H^2∙(_Σ) → H^2∙() mapping _σ to the class of ^σ. 
By the Hodge isomorphism theorem <cit.>, this composition induces an isomorphism of rings ⊕_k A^k(Σ) ⊕_k H^k,k(). We define the inverse map Ψ H^∙() → A^∙/2(Σ) by mapping (p,q)-classes to zero if p ≠ q. Here, by convention, A^k/2(Σ) is trivial for odd k. If Σ is a tropical homology manifold, Ψ is an isomorphism by <cit.>, that is, H^p,q() is trivial for p≠ q. §.§.§ Kähler package We recall the Kähler package for Chow rings of fans, see <cit.>. Assume Σ is tropical and quasi-projective, , there exists a conewise linear function f on Σ which is strictly convex in the following sense. For any σ∈Σ, there exists a linear map m ∈ M such that f-m is zero on σ and strictly positive on U∖σ for some open neighborhood U of the relative interior of σ. To such an f, one can associate the element L ∑_ζ∈Σ_1 f(_ζ) _ζ∈ A^1(Σ). These elements coming from strictly convex functions are called ample classes. Since Σ is tropical, the degree map A^d(Σ) → mapping _η to (η) for any facet η of Σ is a well-defined morphism. The Chow ring A^∙(Σ) is said to verify the Kähler package if the following holds: * (Poincaré duality) the pairing [ A^k(Σ) × A^d-k(Σ) → ,; a,b-k ↦ (ab), ] is perfect for any k; * (Hard Lefschetz theorem) for any ample class L, the multiplication by L^d-2k induces an isomorphism between A^k(Σ) and A^d-k(Σ) for all k ≤ d/2; * (Hodge-Riemann bilinear relations) for any k≤ d/2 and any ample class L, the bilinear map [ A^k(Σ) × A^k(Σ) → ,; a, b ↦ (-1)^k(L^d-2kab), ] is positive definite on ( · L^d-2k+1 A^k(Σ) → A^d-k+1(Σ)). A tropical fanfold is called Kähler if it is a tropical homology manifold and there exists a quasi-projective unimodular fan of support such that A^∙(Σ^σ) verifies the Kähler package for any σ∈Σ. In such a case, any quasi-projective unimodular fan Σ on verifies the previous property (cf. <cit.>). §.§ Tropical Deligne resolution Let Σ be a unimodular fan on some tropical homology manifold X. Let δσ be two faces of Σ. The inclusion i^^σ→^δ of canonically compactified fanfolds, both satisfying tropical Poincaré duality, gives a homomorphism i_*^ H_k(^σ) → H_k(^δ). Applying the tropical Poincaré duality for both ^σ and ^δ, this gives a map PD_^δ^-1∘ i_*^∘PD_^σ H^k(^σ) → H^k+2(σ - δ)(^δ), called the tropical Gysin homomorphism and denoted _σδ^. In <cit.>, it is shown that for a fanfold which is a tropical homology manifold and a unimodular fan Σ with support , there are tropical Deligne resolutions, i.e., exact sequences for any k, 0 ⟶H^k()⟶⊕_σ∈Σ_k H^0(^σ)⟶⊕_δ∈Σ_k-1 H^2(^δ)⟶⋯ ⋯⟶⊕_ζ∈Σ_1 H^2k-2(^ζ)⟶H^2k()⟶0, where the first non-zero morphism is given by integration (that is, by the evaluation of the element α∈ H^k() at the canonical multivector of each face σ∈Σ_k), and all subsequent maps are given by the tropical Gysin homomorphisms (with appropriate signs <cit.>). § THE INDUCED MORPHISM ON COHOMOLOGY BY TROPICALIZATION The aim of this section is to define a map relating tropical cohomology to classical cohomology, as well as to prove <ref>, which relates Gysin maps in tropical and classical cohomology. Let ⊆ be a subvariety and Σ a unimodular fan with support =(), and and be the compactifications of and with respect to Σ. We define X,Σ H^∙ () → H^∙ () to be the ring homomorphism defined as the composition of the maps Ψ H^∙ () → A^∙/2(Σ) with Φ A^∙/2 () → H^∙ () from <ref>. The map is the morphism comparing the tropical and classical cohomology in order to define cohomologically tropical varieties in <ref>. We will now relate the classical and tropical Gysin maps through the map . 
This will be useful later for comparing Deligne sequences. Let = () the be tropicalization of a subvariety ⊆, Σ a unimodular fan with support , with σ,δ∈Σ such that δ is a face of σ of codimension one, giving inclusion maps ^σ→^δ and ^σ→^δ. Then the following diagram commutes: H^k(^σ) H^k(^σ) H^k+2(^δ) H^k+2(^δ). ["_σδ^", from=1-1, to=2-1] [",Σ"', from=1-1, to=1-2] ["_σδ", from=1-2, to=2-2] ["',Σ'", from=2-1, to=2-2] Expanding the definition of , we obtain the following diagram H^∙ (^σ) A^∙/2 (Σ^σ) H^∙ (^σ) H^∙+2 (^δ) A^∙/2+1(Σ^δ) H^∙+2 (^δ).["_σδ^"', from=1-1, to=2-1] ["_σδ", from=1-2, to=2-2] ["Φ", from=1-2, to=1-3] ["Φ", from=2-2, to=2-3] ["Ψ", from=1-1, to=1-2] ["Ψ", from=2-1, to=2-2] ["_σδ", from=1-3, to=2-3] The first square is commutative by <cit.>, in light of <cit.>. The commutativity of the second square follows from the functoriality of the cycle class map in light of <cit.> and <cit.>. Let ⊆_N and '⊆_N' be two non-singular subvarieties of tori associated to two lattices N and N', with and ' the corresponding tropicalizations, and two unimodular fans Σ and Σ' with supports and ', respectively. Assume there exists a morphism of lattices ϕ N → N' which takes cones of Σ to cones of Σ' such that the induced map ϕ→' is surjective. This makes the induced morphism of toric varieties f _Σ→_Σ' proper <cit.>. We denote by f^_Σ→_Σ' the induced morphism on tropical toric varieties. Furthermore, suppose that f()='. Since is compact we have that f()=[3]f()='. This also gives f^ ()=' for the canonical compactifications of and ' with respect to Σ and Σ'. One can then prove the commutativity of the following diagram [ampersand replacement=&] H^∙ ('),Σ["f^,*"']&H^∙ (')f^* H^∙ ()["',Σ'"']&H^∙ (). Let ⊆ be a subvariety of complex dimension d and Σ a unimodular fan with support =(), and and be the compactifications of and with respect to Σ. Suppose satisfies tropical Poincaré duality and is non-singular. Then X,Σ H^∙ () → H^∙ () is injective. Both maps Ψ H^2d() → A^d(Σ) and Φ A^d(Σ) → H^2d() commute with the corresponding degree maps. Now for both tropical and classical cohomology, the fact that the products induce perfect pairings implies that X,Σ is injective. § IRRELEVANCE OF FAN To be schön, wunderschön, cohomogically tropical, Kähler, or a tropical homology manifold are all properties of the form there exists a fan Σ such that a specific property holds with some restriction on the fan, as unimodularity for instance. Informally, we say that such a property is fan irrelevant if we can replace there exists a unimodular fan by for any unimodular fan (this is strongly linked with the notion of shellable properties in <cit.>). It is already known that to be schön, Kähler or a tropical homology manifold is fan irrelevant. In this section we prove <ref> about the fan irrelevance of being cohomologically tropical and wunderschön. We begin with a lemma. Suppose a schön subvariety ⊆ is cohomologically tropical. Then the tropicalization =() is a tropical homology manifold. Let Σ be a unimodular fan whose support is (). It follows that the cohomology groups H^∙(^σ) are all isomorphic to the cohomology groups H^∙(^σ), and so they verify Poincaré duality. We infer that is a tropical homology manifold. Let be a schön subvariety of the torus which is cohomologically tropical. 
It follows from the previous lemma and the fan irrelevance of being a tropical homology manifold that all the cohomology groups H^p,q() are vanishing provided that p≠ q, for the canonical compactification of with respect to any unimodular fan with support . Let Σ be a unimodular fan with support the fanfold , and let σ be a cone in Σ of dimension at least two. Let Σ' be the barycentric star subdivision of Σ obtained by star subdividing σ, see e.g. <cit.>. Denote by ρ the new ray in Σ'. Let and ' be the compactifications of with respect to Σ and Σ', respectively. The following theorem provides a description of the Chow ring of Σ' in terms of the Chow rings of Σ and Σ^σ. Let be the kernel of the surjective map ^*_σ A^∙(Σ)→ A^∙(Σ^σ) and let P() ∏_ζσ ζ=1(_ζ+). There is an isomorphism of Chow groups given by the map χA^∙(Σ)[]( +P()) A^∙(Σ') which sends to -_ρ and which verifies ∀ζ∈Σ_1, χ(_ζ) = _ζ+_ρ if ζσ, _ζ otherwise. In particular this gives a vector space decomposition of A^∙(Σ') as A^∙(Σ')≅ A^∙(Σ)⊕ A^∙-1(Σ^σ)⊕…⊕ A^∙-σ+1(Σ^σ)^σ-1. In addition, if is the tropicalization of a schön subvariety ⊆, and and ' are compactifications of with respect to Σ and Σ', respectively, then we have an isomorphism H^∙(')≅H^∙()[]( +P()), and the decomposition H^∙(')≅ H^∙()⊕ H^∙-1(^σ)⊕…⊕ H^∙-σ+1(^σ)^σ-1. Here, by an abuse of notation, the variable denotes the image of -_ρ in H^2(') for the induced map A^∙(Σ') → H^∙('), is the kernel of H^∙()→ H^∙(^σ), and P() is the image of ∏_ζσ ζ=1(_ζ+) in H^∙()[] under the map A^∙(Σ) → H^∙(). Decomposition (<ref>), for instance, means that for any 1 ≤ k ≤σ, we have a natural injective map A^∙(Σ^σ) ↪ A^∙(Σ'^ρ) -_ρ A^∙+1(Σ') ^k-1 A^∙+k(Σ'). The piece A^∙(Σ^σ)^k in the above decomposition then denotes the image of the above map. We refer to <cit.> and <cit.> for more details and the proof. Two unimodular fans with the same support are called elementary equivalent if one can be obtained from the other by a barycentric star subdivision. The weak equivalence between unimodular fans with the same support is then defined as the transitive closure of the elementary equivalence relation. We will need the weak factorization theorem, stated as follows. Two unimodular fans with the same support are always weakly equivalent. We are now in a position to prove the independence of being cohomologically tropical from the chosen fan for schön varieties. Suppose that the subvariety ⊆ is schön and let = () be its tropicalization. The following are equivalent. * There exists a unimodular fan Σ with support such that is cohomologically tropical with respect to Σ. * For any unimodular fan Σ with support , is cohomologically tropical with respect to Σ. Suppose that the subvariety of the torus is schön. Let = (). Let Σ be a unimodular fan with support such that is cohomologically tropical with respect to Σ. Let Σ' be a second unimodular fan with support . We need to prove that is cohomologically tropical with respect to Σ'. By the weak factorization theorem, it will be enough to assume that Σ and Σ' are elementary equivalent. We consider the compactifications ' and ' of and with respect to Σ', and those with respect to Σ by and . Consider first the case where Σ' is obtained as a barycentric star subdivision of Σ. Denote by σ the cone of Σ which has been subdivided and by ρ the new ray of Σ'. We start by explaining the proof of the isomorphism H^∙ (')H^∙ ('). We use the notation preceding <ref>. By Keel's lemma, we get A^∙(Σ')≅A^∙(Σ)[](+P()) and H^∙(')≅H^∙()[](+P()) with and P() as in <ref>. 
By the Hodge isomorphism theorem <cit.>, see <ref>, we have isomorphisms A^p(Σ') H^p,p(') and A^p(Σ) H^p,p() for each p. Moreover, since is cohomologically tropical by <ref>, all the cohomology groups H^p,q(') and H^p,q() are vanishing for p≠ q. The isomorphism H^∙ (')H^∙ (') now follows from the commutativity of the diagram in <ref>, the isomorphisms H^∙ ()H^∙ () and H^∙ (^σ)H^∙ (^σ), and the compatibility of the decompositions in Keel's lemma in the tropical and algebraic settings with respect to these isomorphisms. Consider now an arbitrary cone δ of Σ' and denote by η the smallest cone of Σ which contains δ. The star fan Σ'^δ of δ in Σ' is isomorphic to a product of two fans Δ×Θ with Δ a unimodular fan living in N^η_ and Θ a unimodular fan living in N_σ, N_δ∩σ,. In the case ησ, the first fan Δ coincides with the star fan Σ^η of η in Σ. Otherwise, when ησ, Δ is the fan obtained from Σ^η by subdividing the cone σ/η. The other fan Θ is unless δ contains the ray ρ in which case, Θ is the fan of the projective space of dimension σ - σ∩δ. Similarly, '^δ admits a decomposition into a product ×, where = ^η in the case ησ, and is the blow-up of ^σ in ^η in the other case ησ. And is ^0, that is a point, unless δ contains ρ in which case ≅^σ - σ∩δ. The isomorphism H^∙ ('^δ)H^∙ ('^δ) for δ can be then obtained from the above description, and by observing that when σ is face of η and Δ is the subdivision of η/σ in Σ^η, we can apply the argument used in the first treated case above to ^η and ^η to conclude. Consider now the case where Σ is obtained as a barycentric star subdivision of Σ'. We only discuss the isomorphism H^∙ (')H^∙ ('), the other isomorphisms H^∙ ('^δ)H^∙ ('^δ) for δ∈Σ' can be obtained by using the preceding discussion. The cohomology of ' appears as a summand of the cohomology of according to the decomposition in Keel's lemma. Similarly, the cohomology of ' is a summand of the cohomology of . Using the compatibility of the decompositions in the Keel's lemma, the isomorphism H^∙ ()H^∙ () induces an isomorphism H^∙ (')H^∙ (') between the two summands. Suppose that the subvariety ⊆ is wunderschön with respect to some unimodular fan. Then is wunderschön with respect to any unimodular fan with support =(). The proof of this theorem is similar to the proof given above for <ref>. We omit the details. § DIVISORIAL COHOMOLOGY In this section, we prove <ref> which states that the cohomology of a wunderschön variety is divisorial. The cohomology of a non-singular algebraic variety is divisorial if there is a surjective ring homomorphism [ _1, …, _s ] → H^∙() such that the image of each _i is [_i] ∈ H^2(), the Poincaré dual of some divisor _i of . Similarly, the Chow ring A^∙() is divisorial if there is a surjective ring homomorphism [ _1, …, _s ] → A^∙() such that the image of each _i is the class of a divisor _i of . In this case, we also say that the (Chow) cohomology of is generated by the divisors _1, …, _s. Notice that if is projective and its cohomology is divisorial, then all its cohomology is generated by algebraic cycles and the Hodge structure on the cohomology is Hodge-Tate. The Chow ring of any non-singular complex toric variety is divisorial and generated by the toric boundary divisors, see <cit.> and <ref>. It follows, using our previous notations, that if the the map ,Σ H^∙ () → H^∙ () is a surjection, then the cohomology of is divisorial and generated by the irreducible components of ∖. Let ⊆ be a wunderschön subvariety. 
Let be the compactification of with respect to a unimodular fan Σ with support = (). Then the cohomology of is divisorial and generated by irreducible components of ∖. We proceed by induction on the dimension of . If is a point, then this is trivial. Notice also that if is a wunderschön curve then must be ^1 and hence the cohomology is divisorial as H^∙(^1)≅[]/⟨^2 ⟩. We have the following commutative diagram ⊕_ρ∈Σ_1[_ζ | ζ∈Σ_1 and (ρ+ζ)∈Σ_2] ⊕_ρ f_ρ⊕_ρ -·_ρ ⊕_ρ∈Σ_1 H^∙(^ρ) [_ζ | ζ∈Σ_1] f H^∙+2(), where ρ+ζ is the cone generated by the rays ρ and ζ, the f_ρ are surjective ring homomorphisms which send _ζ to [^ρ+ζ], and f maps _ζ to [^ζ]. Since is wunderschön the maps ⊕_ρ∈Σ_1 H^k(^ρ) → H^k +2() from the Deligne weight spectral sequence are all surjections for k ≥ 0 and we deduce that f is surjective. Therefore, the cohomology of is divisorial and is generated by the components of ∖. § PROOF OF THE MAIN THEOREM We now turn to proving <ref>. Let ⊆ be a schön subvariety with support =(). Then the following statements are equivalent. * is wunderschön and is a tropical homology manifold, * is cohomologically tropical. Moreover, if any of these statements holds, then is Kähler. We begin by assuming that is wunderschön and that is a tropical homology manifold, and prove that is cohomologically tropical. We must show that the maps ^σ,Σ^σ H^∙ (^σ) → H^∙ (^σ) are isomorphisms for all σ∈Σ. Notice that is non-singular since it is wunderschön. If is of dimension 0 and wunderschön it consists of a single point. Therefore, its tropicalization is a point of weight 1 thus is cohomologically tropical. We proceed by induction on the dimension of . Therefore, we can assume that each of the ^σ is cohomologically tropical for all cones σ∈Σ not equal to the origin. Since is schön, let = ∖ be the simple normal crossing divisor of the compactification. The Deligne weight spectral sequence for the tropical compactification (, ) of abuts in the associated graded objects of the weight filtration of the cohomology of H^k(). Since is wunderschön, the E_1-page of Deligne spectral sequence extends to exact rows by <ref>, with the morphisms being sums of Gysin maps. In the tropical setting, since X is a tropical homology manifold, there are tropical Deligne resolutions <ref>, where the maps are sums of tropical Gysin maps. Now by induction, ^σ, Σ^σ H^∙ (^σ) → H^∙(^σ) is an isomorphism, and moreover the appropriate commutative diagrams using the classical and tropical Gysin maps commute by <ref>. We may therefore identify the two exact sequences. Applying the five lemma in the cases k≥ 2, exactness gives us isomorphisms H^k()→ H^k() and X,Σ H^2k() → H^2k(). For k=0, since is assumed to be connected, there is an isomorphism H^0()≅≅ H^0(), and it merely remains to show the claim for k=1. We consider the following commutative diagram 0 H^1() ⊕_ζ∈Σ_1 H^0(^ζ) H^2() 0 0 H^1() ⊕_ζ∈Σ_1 H^0(^ζ) H^2() 0 [from=2-2, to=2-3] ["g", from=2-3, to=2-4] [from=2-4, to=2-5] [from=2-1, to=2-2] [from=1-4, to=1-5] [from=1-1, to=1-2] [from=1-2, to=1-3] [from=1-3, to=1-4] [",Σ", from=1-4, to=2-4] ["⊕^σ, Σ^σ", from=1-3, to=2-3] [from=1-2, to=2-2]. By induction, the middle vertical arrow is an isomorphism, and we wish to show that the rightmost vertical arrow is an isomorphism. By a diagram chase, exactness of the lower row implies that this arrow is surjective. The injectivity follows from <ref>. Therefore, the map ^σ, Σ^σH^2()→H^2() is an isomorphism. 
Together with our induction assumption on the maps ^σ, Σ^σ this proves that is cohomologically tropical. Now assume that is cohomologically tropical. By <ref>, we know that is a tropical homology manifold. It remains to show that is wunderschön. We again proceed by induction on dimension as the case for points is trivial. We equip with the tropical compactification given by Σ, such that all open ^σ are wunderschön by induction, for σ different from the central vertex of Σ. We have H^0() ≅ H^0() by hypothesis, and H^0()≅, thus is connected and so is  . It remains to show that the mixed Hodge structure on H^k() is pure of weight 2k for each k. This follows from comparing the Deligne weight spectral sequence and tropical Deligne resolution by <ref>, using that all the maps ^σ, Σ^σ are isomorphisms. Hence is wunderschön. Finally we prove that if is cohomologically tropical, then is Kähler. By <ref>, we know that is a tropical homology manifold. There exists a unimodular fan Σ with support such that Σ is quasi-projective. It follows that the Chow rings A^∙/2(Σ^σ), σ∈Σ, are isomorphic to H^∙(^σ). Moreover, since Σ is quasi-projective, and is schön, ^σ is a non-singular projective variety, and so its cohomology verifies the Kähler package. We conclude that is Kähler. Suppose that ⊆ is schön and cohomologically tropical. Let Σ be any unimodular fan with support =(). Then we obtain isomorphisms H^k (^σ) H^k(^σ) for all σ∈Σ and all k. It suffices to prove the statement for , since if is cohomologically tropical so are all strata ^σ. It follows from the proof of <ref>, that if is cohomologically tropical, then is wunderschön and hence the 2k-th row of the 1st page of the Deligne weight spectral sequence provides a resolution of H^k() for all k. Moreover, the maps H^∙ (^σ) → H^∙ (^σ) are isomorphisms for all strata and they commute with the tropical and complex Gysin maps. Therefore, we obtain an isomorphisms of the resolutions which induces isomorphisms H^k(^σ) → H^k (^σ) for all σ∈Σ and all k. § GLOBALIZATION We discuss a natural extension of the main theorem of <cit.>. We follow the setting of that work. Let π→ D^* be an algebraic family of non-singular complex algebraic varieties in ^n over the punctured disk D^*. Let Z⊆^n be the tropicalization of the family. We suppose Z admits a unimodular triangulation. This is always possible after a base change of the form D^* → D^*, z↦ z^k, for k∈_+. Using the triangulation, we construct a degeneration of ^n to an arrangement of toric varieties, and taking the closure of the family inside this toric degeneration leads to a family extended over the full punctured disk D. By Mumford's proof of the semistable reduction theorem, we can always find a triangulation, after a suitable base change, such that the extended family is regular and the fiber over zero is reduced and simple normal crossing. This is known as a semistable extension of the family π→ D^*. Denote by _0 the fiber at zero of the extended family. Note that since the extended family is obtained by taking the closure of the family in a toric degeneration of ^n, each open stratum in _0 will be naturally embedded in an algebraic torus. For t ∈ D^* denote by _t the fiber of π over t. Let π→ D^* be an algebraic family of subvarieties in ^n parameterized by the punctured disk and let π→ D be a semistable extension. 
If the tropicalization Z⊆^n is a tropical homology manifold and all the open strata in _0 are wunderschön, then H^p,q(Z) is isomorphic to the associated graded piece W_2p/W_2p-1 of the weight filtration in the limiting mixed Hodge structure H_lim^p+q. The odd weight graded pieces in H_lim^p+q are all vanishing. Moreover, for t ∈ D^*, we have H^p,q(_t) = H^p,q(Z), for all non-negative integers p and q. Since Z is a tropical homology manifold, the local fanfolds appearing in the tropical variety Z are all tropical homology manifolds. Moreover, the Chow ring of any unimodular fan supported in a local fanfold of Z is the cohomology ring of a non-singular proper complex algebraic variety. It follows that this Chow ring verifies the Kähler package provided that the fan is quasi-projective. We apply now the Steenbrink-Tropical comparison theorem proved in <cit.> to obtain the isomorphism between the cohomology groups H^p,q(Z) with the cohomology of the Steenbrink sequence in weight 2p associated to the triangulation, on one side, and the vanishing in the odd-weight of the cohomology of the Steenbrink sequence on the other side. The Steenbrink spectral sequence gives the weight 2p part of the limit mixed Hodge structure in degree p+q. The wunderschön assumption implies that the limit mixed Hodge structure is of Hodge-Tate type. We conclude similarly to the proof of Corollary 2 in <cit.>. § DISCUSSIONS §.§ Examples In this section, we give various examples of varieties verifying some but not all conditions of the main <ref>. These examples tend to demonstrate that the main theorem cannot be weakened. §.§.§ A wunderschön variety which is not cohomologically tropical Take N = ^2. Let ⊆_N be the conic given by the equation a + bz_1 + cz_2 + dz_1z_2 = 0 for generic complex coefficients a,b,c and d. The variety is ^1 with four points removed. This is a wunderschön variety: looking at the compactification ⊆ (^1)^2, is non-singular and the intersections with torus orbits are the points hence non-singular, so that is schön. Moreover, each of the points removed is trivially wunderschön. Finally, the Deligne weight spectral sequence shows that has pure Hodge structure. However, the tropicalization of is the union of the axes in ^2, which is not uniquely balanced, , H^2()=2>1. This means that is not a tropical homology manifold (see <cit.>). Moreover, computing the cohomology groups of , we obtain H^0()=1, H^1() = 0 and H^2()=2, which differs from the cohomology groups of the sphere . §.§.§ A schön variety with pure strata, whose tropicalization is a tropical homology manifold but which is not cohomologically tropical Let be a generic conic in (^*)^2. The variety is ^1 with six points removed. Its tropicalization is the usual tropical line equipped with weights equal to 2 on all edges, hence again a tropical homology manifold by <cit.>. The variety is schön since it is non-singular, and each one of the three strata consists of two distinct points, hence it is non-singular. The mixed Hodge structure on is pure, as the Deligne weight spectral sequence shows that _1^W H^1 ()=H^1() = 0. However, it is not wunderschön since its strata are not connected. The map H^∙() → H^∙() is an isomorphism: it maps the class of a point in to twice the class of a point in . Nevertheless, is not cohomologically tropical since, for any ray ζ of , H^0(^ζ)≅^2 but H^0(^ζ)≅. 
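The count of six punctures in the last example can be checked directly: a generic conic avoids the three vertices of the coordinate triangle of the projective plane and meets each of the three coordinate lines in two distinct points. The short script below is our own illustration of this count (it is not part of the original text); the coefficients are one arbitrary generic choice.

```python
# Our own illustration (not from the text): counting the six boundary points of a
# generic conic, i.e. the punctures of P^1 removed in the example above.
import sympy as sp

z0, z1, z2, t = sp.symbols('z0 z1 z2 t')

a, b, c, d, e, f = 3, 5, 7, 2, 11, 13   # one arbitrary "generic" choice
# Homogenization of the affine conic a + b z1 + c z2 + d z1^2 + e z2^2 + f z1 z2.
Q = a*z0**2 + b*z0*z1 + c*z0*z2 + d*z1**2 + e*z2**2 + f*z1*z2

# Restrict Q to each coordinate line of P^2 and count the distinct roots of the
# resulting binary quadratic form: each root is one puncture.
total = 0
for dead, u, v in [(z0, z1, z2), (z1, z0, z2), (z2, z0, z1)]:
    binary = Q.subs(dead, 0).subs({u: 1, v: t})    # dehomogenize on the line {dead = 0}
    n = len(sp.roots(sp.Poly(binary, t)))           # distinct roots over C
    print(f"{dead} = 0: {n} points")
    total += n
print("total punctures:", total)                     # -> 6 for generic coefficients
```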
§.§.§ A schön variety which is not pure nor cohomologically tropical and whose tropicalization is a tropical homology manifold Consider the punctured elliptic curve in (^*)^2 of equation az_1^2+bz_2+cz_1z_2^2=0 for generic complex coefficents a,b and c. Topologically it is a torus punctured in three points. The tropicalization is the unimodular tropical line of weight one with rays generated by (2,1),(-1,1) and (-1,-2), which is a tropical homology manifold. The variety is non-singular and connected, and each of the three strata at infinity of its compactification is a point hence non-singular and connected. Hence is schön. The cohomology group H^1() is nontrivial of dimension 2. However, H^1() is trivial. Hence is not cohomologically tropical. This is because is not wunderschön. More precisely, H^1() is not pure of weight 2. Indeed, by the Deligne weight spectral sequence _1^W(H^1()) ≅ H^1() ≠ 0. §.§.§ A non-schön variety which is cohomologically tropical Once again, N is of dimension 2. Let ⊆_N be given by the equation (z_1-a)(z_2-b) = 0 for a,b ≠ 0. The variety is a reducible nodal curve with two components both being ^1 with two punctures. The tropicalization is again the union of the two coordinate axes in ^2, which is not a tropical homology manifold, and the variety is not schön as it is singular. However, for each line of the cross, the cocycle associated to this line is mapped to the cocycle associated to the corresponding sphere. This is an isomorphism between H^2() and H^2(). Since H^0() is trivially isomorphic to H^0() and other cohomology groups are trivial, we deduce that is cohomologically tropical. §.§ Hyperplane arrangement complements We will now see that all three properties of <ref> are satisfied for complements of projective hyperplane arrangements. We will use the de Concini-Procesi model of the complement of a projective hyperplane arrangement <cit.>, as discussed in <cit.>. Let =H_i_i=0^n be an arrangement of n+1 hyperplanes in _^d, not all having a common intersection point, and let _ =_^d ∖⋃_H_i ∈ H_i be the complement of the arrangement. For each i, let ℓ_i be the homogeneous linear form such that H_i=z∈_^d|ℓ_i(z) = 0. These define a map _→n given by z ↦ (ℓ_i(z)) in homogeneous coordinates on n. This map is injective, since no z∈_ lies on all hyperplanes by assumption, and induces an isomorphism of _≅_, where _ is a subvariety of n, see <cit.> for details. By a theorem of Ardila and Klivans <cit.>, the tropicalization Y_ = (_) is the support of the Bergman fan Σ_M_ of the matroid M_ associated to the arrangement , see <cit.>. First, Tevelev shows <cit.> that the variety _ is schön, it is clearly connected, and by <cit.>, its cohomology has a pure Hodge structure of Hodge-Tate type. Moreover, given a face σ∈Σ_M_, the star fan of σ corresponds to the complement of a hyperplane arrangement. By induction, this shows that complements of hyperplane arrangements are wunderschön. Furthermore, it is shown in <cit.> by an inductive argument that the Bergman fan Σ_M_ of a matroid is a tropical homology manifold. Therefore, one can apply <ref>, which gives us that _ is cohomologically tropical, i.e. the map _,Σ H^∙ (Y_) → H^∙(_) is an isomorphism. In light of <ref>, this can be compared with the main result of <cit.>, also independently proved in <cit.>, showing that H^∙(_)≅ H^∙(_), however lacking explicit maps. 
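As a minimal sanity check of the preceding discussion (our own addition, not taken from the text), consider the smallest case: an arrangement of three distinct points in the complex projective line. Its matroid is the uniform matroid U_{2,3}, its Bergman fan is the tropical line with three rays, and the wonderful compactification of the complement is the projective line itself.

```latex
% A minimal worked example (our own addition): three points in P^1.
% Matroid: U_{2,3}; Bergman fan \Sigma: the tropical line with rays e_0, e_1, e_2,
% e_0 + e_1 + e_2 = 0; wonderful compactification of the complement: P^1.
% Chow ring: with ray classes x_0, x_1, x_2, the linear relations
%   \sum_i \langle m, e_i \rangle x_i = 0  for  m = (1,0), (0,1)
% give x_0 = x_1 = x_2 =: x, while x_i x_j = 0 for i \neq j since no two rays span
% a cone; hence x^2 = 0 and
\[
  A^\bullet(\Sigma_{U_{2,3}}) \;\cong\; \mathbb{Z}[x]/(x^2)
  \;\cong\; H^\bullet(\mathbb{P}^1;\mathbb{Z}),
\]
% in agreement with the Hodge isomorphism between A^p(\Sigma) and the (p,p) tropical
% cohomology of the canonical compactification, and with cohomological tropicality:
% each boundary stratum is a single point on both the tropical and the algebraic side.
```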
§.§ A non-matroidal example We present an example of ⊆_N which is not a complement of a hyperplane arrangement yet is wunderschön, cohomologically tropical, and the tropicalization () is a tropical homology manifold. The variety will be the complement of an arrangement of lines and a single conic in ^2. Let [z_0:z_1: z_2] be homogeneous coordinates on ^2. Let _0, _1, and _2 be the coordinate lines of ^2 so that _i is defined by z_i = 0. Let _3 be defined by the linear form z_0 - z_1 + z_2 = 0 and let the conic be defined by z_1^2 +z_2^2 - z_0z_1 - 2z_1z_2 = 0. Let denote the union of _0, …, _3,. As depicted in <ref>, note that is tangent to _1 at the point [1:0:0] where _1 intersects _2. Also the conic is tangent to _0 at the intersection point [0:1:1] with _3. The conic also passes through the intersection point [1:1:0] of _2 and _3. Consider the map ϕ^2 ∖→ (^*)^4 defined by [z_0: z_1:z_2] ↦ (z̃_1, z̃_2, 1 - z̃_1 + z̃_2, z̃_1^2 +z̃_2^2 - z̃_1 - 2z̃_1z̃_2 ), with z̃_1 = z_i/z_0 and z̃_2 = z_2/z_0. Let ⊆ (^*)^4 denote the image of the map ϕ. The space () is 2-dimensional and is the support of the fan described below. The fan has 8 rays in directions given in <ref>. Each ray is adjacent to exactly 3 faces of dimension 2 for a total of 12 faces of dimension 2. The structure is given in <ref>: we draw an edge between two vertices if the there is a face between the two corresponding rays. Note that to get a unimodular subdivision, one has to add some rays, for instance the rays α and β of <ref>. We denote by Σ this unimodular fan. It can be verified in polymake that this fan is a tropical homology manifold and its tropical Betti numbers are 1,0,6,0,1. For an alternative proof, note that the the fan Σ is obtained by the process of tropical modification <cit.> as follows. Let Σ_U_3,4⊆^3 be the Bergman fan of the uniform matroid U_3,4. Its rays are the rays 0,1,2 and 3 in <ref>, where we forget the fourth coordinate. Let C ⊆Σ_U_3,4 be a tropical trivalent curve with rays a,b,c (once again we forget the last coordinate). Then Σ in ^4 is obtained by a tropical modification of Σ_U_3,4 along C. By <cit.>, the Bergman fan Σ_U_3,4 is a tropical homology manifold, see also <ref>. By <cit.> the trivalent tropical curve is also a tropical homology manifold. By <cit.> the modification of Σ_U_3, 4⊆^3 along C is a tropical homology manifold. The tools developed in this last article also allow to compute the cohomology of quite easily, and to check that the fan is Kähler. The compactification of in _Σ is given as follows. Consider ^2 blown up in the three points whose homogeneous coordinates are [1:0:0], [0:1:1], and [1:1:0]. Then, in the blow up, the exceptional divisor above [1:0:0], the proper transform of , and the proper transform of _1 all intersect in a single point. Similarly, there is a triple intersection of the exceptional divisor above [0:1:1] and the proper transforms of and _0. We further blow-up these two intersection points to obtain a surface . The divisor ∖ consists of the five exceptional divisors and the proper transforms of all curves in . Therefore, H^2() = 6 and H^0() = H^4() = 1 and H^k() = 0 otherwise. We claim that is wunderschön. Indeed, for each ray ζ of the fan Σ the variety ^ζ is ^1 with two or three marked points corresponding to the intersections with the other divisors in ∖, so it is wunderschön. Moreover, is non-singular and connected, and its cohomology is pure. Hence, is wunderschön. 
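The incidences used in this example are easy to verify symbolically. The following snippet is our own check, added for illustration; it confirms the two tangencies and the passage through [1:1:0].

```python
# Added illustration: verifying the incidences of the conic with the lines of the
# arrangement from the example above.
import sympy as sp

z0, z1, z2 = sp.symbols('z0 z1 z2')
conic = z1**2 + z2**2 - z0*z1 - 2*z1*z2          # defining equation of the conic
ell3  = z0 - z1 + z2                             # the extra line ell_3

# The conic meets ell_1 = {z1 = 0} only at [1:0:0], with multiplicity two (tangency):
print(sp.factor(conic.subs(z1, 0)))              # -> z2**2

# The conic meets ell_0 = {z0 = 0} only at [0:1:1], with multiplicity two (tangency):
print(sp.factor(conic.subs(z0, 0)))              # -> (z1 - z2)**2

# [0:1:1] also lies on ell_3, and the conic passes through [1:1:0] = ell_2 ∩ ell_3:
print(ell3.subs({z0: 0, z1: 1, z2: 1}))          # -> 0
print(conic.subs({z0: 1, z1: 1, z2: 0}), ell3.subs({z0: 1, z1: 1, z2: 0}))   # -> 0 0
```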
§.§ Shellability It seems plausible that a framework parallel to the one in <cit.> can be developed for properties of tropicalizations of algebraic varieties. The properties discussed in this paper concern pairs consisting of a subvariety of an algebraic torus and a fan structure on its tropicalization. Three basic operations can be conducted on these pairs: products, blow-ups and blow-downs, and taking the graph of a holomorphic function on the subvariety, restricted to the complement of its divisor. For example, the cases described in <ref> can both be obtained by these operations. The properties of being schön, wunderschön, and cohomologically tropical should be shellable in this framework. We refer to <cit.> for some results in this direction.
http://arxiv.org/abs/2307.00633v1
20230702184005
Effects of Explanation Specificity on Passengers in Autonomous Driving
[ "Daniel Omeiza", "Raunak Bhattacharyya", "Nick Hawes", "Marina Jirotka", "Lars Kunze" ]
cs.RO
[ "cs.RO", "cs.AI", "cs.CY", "cs.HC", "cs.LG" ]
The nature of explanations provided by an explainable AI algorithm has been a topic of interest in the explainable AI and human-computer interaction community. In this paper, we investigate the effects of natural language explanations' specificity on passengers in autonomous driving. We extended an existing data-driven tree-based explainer algorithm by adding a rule-based option for explanation generation. We generated auditory natural language explanations with different levels of specificity (abstract and specific) and tested these explanations in a within-subject user study (N=39) using an immersive physical driving simulation setup. Our results showed that both abstract and specific explanations had similar positive effects on passengers' perceived safety and the feeling of anxiety. However, the specific explanations influenced the desire of passengers to take over driving control from the autonomous vehicle (AV), while the abstract explanations did not. We conclude that natural language auditory explanations are useful for passengers in autonomous driving, and their specificity levels could influence how much in-vehicle participants would wish to be in control of the driving activity. § INTRODUCTION The automotive industry has witnessed an increasing level of development over the past decades, from manufacturing manually operated vehicles to manufacturing vehicles with a high level of automation. As highly automated vehicles make high-stakes decisions that can significantly affect end-users, the vehicles should explain or justify their decisions to meet set transparency guidelines or regulations. Associating natural language explanations with an AV's driving decisions is one promising approach for better vehicle transparency <cit.>. This transparency, obtained through intelligible explanations, can help to reassure passengers of safety and also assist them in effectively calibrating their trust in an AV <cit.>. The specificity level of explanations is, however, important in achieving the aforementioned benefits. For example, while vehicle operators, developers, and incident investigators might desire very specific and detailed explanations from an AV for auditing and debugging purposes, it is not clear what impact such a level of specificity would have on passengers. Would very specific explanations that are capable of exposing AV errors be beneficial to passengers? Further, as passengers are expected to be able to engage in other activities during an autonomous ride, the visual mode of communicating awareness to passengers might be futile in conditions where a human is required to intervene. Hence, other feedback mechanisms such as auditory communication <cit.> are needed.
In this study, we use an immersive driving simulator, an automated auditory explainer, and a virtual reality headset to investigate the effects of explanation specificity on passengers in highly automated vehicles. The effects of interest are perceived safety, the feeling of anxiety, and the feeling to takeover control from an AV. While there are related works on external human-machine interfaces <cit.>, we focus on auditory explanations provided to in-vehicle participants. We use the term abstract to mean the provision of vague auditory explanations that conceal some details about a driving situation. The term specific is used to mean the provision of very specific explanations with more details about a situation. Our contributions are: * a use case of explanation specificity in the autonomous driving context; * an enhanced interpretable technique for generating auditory natural language explanations for AV navigation actions; * findings on whether high AV transparency, though critical to other stakeholders, is helpful to AV passengers. § RELATED WORK Explanations have been found useful in enhancing user experience <cit.>, trust <cit.>, and improved situational awareness <cit.> in automated driving. Recent works have explored human factors in the application of explainable AI in autonomous driving. For instance, in <cit.>, a socio-technical approach to explainability was proposed. An interpretable representation and algorithms for explanations based on a combination of actions, observations, and road rules were designed. In relation to explanation depths, the ideology that explanations with higher abstractions and/or correctness are better has been discussed in <cit.>. Ramon et al. <cit.> also argued that explanation specificity depends on the application context, and in particular, low-level specificity is preferred for people with a more deliberative cognitive style. In this paper, the term explanation specificity is used to refer to two specificity levels of explanations, abstract (low transparency) and specific (high transparency). Explanations can be used to convey different information in autonomous driving, e.g., vehicle uncertainties and intentions, and communicated through different modalities. For example, Kunze et al. <cit.> conveyed visual uncertainties with multiple levels to operators using heartbeat animation. This information helped operators calibrate their trust in automation and increased their situation awareness. Similarly, Kunze et al. <cit.> used peripheral awareness display to communicate uncertainties with the aim of alleviating the workload on operators simultaneously observing the instrument cluster and focusing on the road. This uncertainty communication style decreased workload and improved takeover performance. In addition, the effects of augmented reality visualisation methods on trust, situation awareness, and cognitive load have been investigated in previous studies using semantic segmentation <cit.>, scene detection and prediction <cit.>, and pedestrian detection and prediction <cit.>. These deep vision-based techniques applied to automated driving videos and rendered in augmented reality mode were a way of calling the attention of operators to risky traffic agents in order to enhance safety. While under-explored, auditory means of communicating explanations are important to calling in-vehicle participants' attention to critical situations in autonomous driving. 
We thus used an auditory communication style in this study to convey explanations to passengers. Some existing works around human-machine interaction <cit.> have leveraged theoretical models (e.g., mental and situational models <cit.>) to study explanations. We based our work on behavioural cues and subjective feedback from subjects while drawing connections to such existing works. § PASSENGER STUDY In this section, we describe the participants' demographics, the experiment apparatus setup, the experiment design, and the procedure of the experiment. The necessary approval to conduct the study was obtained from the University of Oxford Research Ethics Committee. §.§ Participants We conducted a power analysis to estimate the number of subjects required for the study. Afterwards, calls for participants were placed on various online platforms, such as the callforparticipants platform, university mailing groups, university Slack channels, the research group website, and social media to recruit subjects. Upon screening, the final sample consisted of N = 39 participants (28 male, 11 female) ranging in age from 18 to 59 years. The participants comprised students, university employees, and members of the callforparticipants platform. Although prior driving experience was not required, 28 (71.79%) of the participants were licensed drivers. Only 2 of the 39 participants (5.13%) had experience with autonomous vehicles, and only in a research context. 6 (15.38%) of the participants had used a virtual reality headset for a driving game or driving experiment in the past. §.§ Apparatus §.§.§ Hardware The hardware setup is shown in <Ref>. We conducted the experiment in a driving simulator that comprised a GTR arcade seat, a Logitech G29 steering wheel with force feedback, turn signal paddles, brake and accelerator pedals, and an ultra-wide LG curved screen to display the experiment. A state-of-the-art virtual reality (VR) headset (with an immersive 360^∘ FoV and an eye tracker) was also used to provide an immersive experience and high visual fidelity. §.§.§ Driving Software The software architecture is illustrated in <Ref>. We adapted DReyeVR <cit.>, an open-source VR-based driving simulation platform for behavioural and interaction research involving human drivers. DReyeVR was built atop Carla <cit.>, an open-source driving simulator for autonomous driving, and Unreal Engine 4. DReyeVR provides a very realistic experience with naturalistic visual (e.g., in-vehicle mirrors) and auditory (e.g., vehicular and ambient sounds) interfaces, allowing for an ecologically valid setup. It also provides an experimental monitoring and logging system to record and replay scenarios, as well as a sign-based navigation system. §.§.§ Explainer Software As shown in <Ref>, we developed an explainer system (based on previous work in <cit.>) that uses a tree-based model fitted on an AV driving dataset that we have collected and annotated (with a multilevel annotation scheme) in a previous project. While the original algorithm in <cit.> is mainly data-driven, we incorporated a rule-based technique that acts as a fallback when the data-driven method fails or makes an incorrect ego action prediction. While the data-driven method uses a trained tree-based model to predict and generate explanations from the detections from Carla, the rule-based approach uses Carla's ground truth data and follows pre-defined rules to determine which agent(s) to include in the explanation.
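To make the interplay between the two explanation pathways concrete, the sketch below shows one way such a fallback could be wired. It is our own illustration with hypothetical names (Scene, tree_model, detection keys); it is not the authors' implementation.

```python
# Illustrative sketch only: a data-driven ego-action prediction with a rule-based
# fallback, loosely mirroring the explainer described above. All names (Scene,
# tree_model, field keys) are hypothetical placeholders, not the authors' code.
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class Scene:
    detections: List[dict]        # per-agent detections from the simulator
    ground_truth_action: str      # ego action taken according to the simulator


def rule_based_action(scene: Scene) -> str:
    """Fallback: derive the ego action from ground-truth data with fixed rules."""
    for d in scene.detections:
        if d.get("type") == "traffic light" and d.get("state") == "red" and d.get("on_ego_lane"):
            return "stopping"
        if d.get("type") == "pedestrian" and d.get("distance", 1e9) < 10.0:
            return "slowing down"
    return "continuing"


def explain(scene: Scene, tree_model: Optional[Callable[[List[dict]], str]] = None) -> str:
    """Predict the ego action with the tree model; fall back to rules if it is wrong."""
    action = tree_model(scene.detections) if tree_model else None
    if action != scene.ground_truth_action:          # mis-prediction detected
        action = rule_based_action(scene)            # rule-based fallback
    salient = [d for d in scene.detections if d.get("important")]
    agents = ", ".join(d.get("type", "agent") for d in salient) or "the road ahead"
    return f"{action} because of {agents} on my lane"


# Example: a red light on the ego lane, with no (or a failing) tree model available.
scene = Scene([{"type": "traffic light", "state": "red", "on_ego_lane": True,
                "important": True}], ground_truth_action="stopping")
print(explain(scene))   # -> "stopping because of traffic light on my lane"
```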
With the data-driven approach, we are able to tell when a prediction is incorrect by comparing it with ground truth observations from Carla's simulation data. We used this improved explainer system (data-driven and rule-based) to generate preliminary explanations for our created scenarios. While Wintersberger et al. <cit.> suggested the types of traffic elements to be included in visual explanations based on a study on user preferences, our proposed explainer picks up traffic elements that the driving model deemed important (cf. <cit.>) for its driving decisions (see Algorithm <ref>). We performed post-processing on the generated explanations, which included fine-tuning some of them and adjusting their timestamps so that they were delivered at the right moment. §.§ Experiment Design Before the start of the trials, participants manually drove a vehicle in VR mode for about two minutes in Carla Town03. Thirty vehicles and ten pedestrians were spawned in this town. The aim of the drive was to familiarise participants with the simulation environment. A within-subject design was then implemented with one independent variable: specificity, and three dependent variables: perceived safety, feeling of anxiety, and takeover feeling. The first specificity level (abstract) comprised vague explanations that can conceal all the AV's perception errors. The second specificity level (specific) comprised more specific and detailed explanations indicating high transparency. A within-subject design was chosen to avoid any potential confounding factor of between-individual differences in a between-subject design. We did not include a control scenario in which no explanations were provided because the goal of the study was to investigate the impact of explanation specificity and not the presence of explanations. Previous studies have already shown that explanations, including placebo explanations that convey no helpful information, provide positive effects on people <cit.>. Hence, we focused on how the specificity of these explanations influences passengers. §.§.§ Independent Variables We created two driving scenarios, one in which abstract explanations were provided and the other with specific explanations. The driving scenarios were carefully designed to include different driving conditions that are obtainable in the real world (see <Ref>). Abstract Scenario A route from Carla Town10HD, which was about 4 minutes in length (330 seconds), was created. Driving conditions were a combination of the events in <Ref>. The rules governing explanations for this scenario were: (i) all traffic lights are referred to as `traffic signs' without specifying the state (e.g., red, green, amber, off) of the traffic light; (ii) pedestrians are referred to as `road users'; (iii) all non-human moving actors are referred to as `vehicle'. This includes cycles, motorbikes, cars, etc. An example explanation is `stopping because of the traffic sign on my lane'. This obfuscates the type or colour of the traffic sign. Specific Scenario A scenario in Carla Town10HD, which was about 4 minutes in length (256 seconds), was created. Driving conditions in this scenario were also a combination of the events in <Ref>. The explanations generated in this scenario were fine-grained and detailed, and could expose any perception system errors in the AV. We introduced a 5% error into the perception system of the AV as an attempt to model a realistic AV perception system.
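Rules (i)–(iii) can be read as a vocabulary substitution applied to the specific wording. The sketch below is our own illustration of how the two specificity levels could be rendered from the same detection, with an occasional injected misclassification; the names and phrasing templates are hypothetical and not the study's actual generator.

```python
# Illustrative sketch only: rendering abstract vs. specific explanation text from
# the same detection, with an occasional injected misclassification (hypothetical names).
import random

ABSTRACTION = {                      # rules (i)-(iii) as a vocabulary map
    "traffic light": "traffic sign",
    "pedestrian": "road user",
    "cyclist": "vehicle",
    "motorbike": "vehicle",
    "car": "vehicle",
    "van": "vehicle",
}

CONFUSABLE = {"car": ["van", "truck"], "cyclist": ["motorbike"]}   # assumed confusions


def describe(agent: str, state: str, level: str, error_rate: float = 0.05) -> str:
    if level == "specific":
        if random.random() < error_rate and agent in CONFUSABLE:    # perception error
            agent = random.choice(CONFUSABLE[agent])
        detail = f"{state} {agent}".strip()
    else:                                                            # abstract level
        detail = ABSTRACTION.get(agent, "vehicle")
    return f"stopping because of the {detail} on my lane"


print(describe("traffic light", "red", "abstract"))   # -> ... the traffic sign ...
print(describe("traffic light", "red", "specific"))   # -> ... the red traffic light ...
```

Here the assumed error_rate of 0.05 mirrors the 5% perception error mentioned above.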
This error value was estimated following the dynamic traffic agent classification model and confusion matrix provided in <cit.>. We were only interested in the confusion matrices (and not the models). The confusion matrices helped us to systematically introduce the 5% perception system errors to be reflected in the specific explanations. This amounted to one erroneous explanation out of the 22 explanations provided in this scenario. An example of an erroneous explanation is: `van ahead on my lane'. Here, a car was misclassified as a van. Note that this error was insignificant to the AV's navigation actions. We counterbalanced the routes across scenarios. That is, the AV's route was different in each scenario. This design decision was made to reduce carry-over effects on the participants. With this setup, the scenarios were still comparable as they were all within the same town, and the routes shared similar features. Each scenario also had a balanced combination of the events listed in Table <ref>. In both scenarios, the AV maintained a speed below 30mph, the recommended speed limit in urban areas in the UK. The AV also respected all road rules and avoided collisions in both scenarios. §.§.§ Dependent Variables The Autonomous Vehicle Acceptance Model Questionnaire (AVAM) <cit.> was adopted to assess perceived safety and the feeling of anxiety dependent variables. AVAM is a user acceptance model for autonomous vehicles, adapted from existing user acceptance models for generic technologies. It comprises a 26-item questionnaire on a 7-point Likert scale, developed after a survey conducted to evaluate six different autonomy scenarios. We selected Items 19—21 to assess the feeling of anxiety factor and Items 24—26 to assess the perceived safety factor. Similar to <cit.>, we introduced a new item to assess participants' feeling to takeover navigation control from the AV during the ride (takeover feeling). Specifically, participants were asked to rate the statement `During the ride, I had the feeling to take over control from the vehicle' on a 7-point Likert scale. Actual navigation takeover by participants was not permitted because we wanted to be able to control the entire experiment and have all participants experience the same scenarios. Moreover, we were dealing with L4 automation. Though participants were not expected to drive or take over control, they might have nursed the thought to do so. This is what the takeover feeling dependent variable measured. We also added a free response question directly related to explanations. Participants were asked the following question: `What is your thought on the explanations provided by the vehicle, e.g., made you less/more anxious, safe, feeling to take over control?'. We refer to the resulting questionnaire as the APT Questionnaire (i.e., A-Feeling of Anxiety, P-Perceived Safety, T-Takeover Feeling). §.§ Procedure The procedure of the experiment is illustrated in <Ref>. After all preliminary form completions and briefings, we introduced the physical driving simulator and explained the next steps, which involved a pre-experiment manual driving session (in VR mode) which lasted for 2 minutes. The participants were informed that the purpose of the pre-experiment exercise was to help them get familiar with the simulation environment. This exercise also helped us to exclude those with motion sickness from the actual experiment. 
When the manual driving exercise was completed, we took the VR headset off the participant and explained the aim and the procedure of the main experiment: “you would experience two autonomous rides in different vehicles, [...] and after each ride, you would complete a short survey. The vehicle drives along a predefined path for about 4 minutes and provides explanations for its planned driving decisions, and announces relevant objects in its environment [...]. The vehicle tells you its next direction at a junction or an intersection using its right or left red light indicators on its dashboard accordingly. [...] Simply click any of these buttons if the decision or the explanation of the vehicle makes you feel confused, anxious or unsafe. Note that you cannot take over driving control from the vehicle during the drives”. The researcher then put the VR headset back on the participant and launched the two driving scenarios (one after the other) in a fully counterbalanced order. § QUANTITATIVE RESULTS We aim to investigate the effect of explanation specificity on passengers' perceived safety, the feeling of anxiety, and takeover feeling. We analysed the data from the two APT questionnaires administered. A latent variable (perceived safety) was formed from the means of the responses from AVAM Items 24—26 to assess participants' perceived safety during the study. Another latent variable (anxiety feeling) was formed from the means of AVAM Items 19—21. We calculated Cronbach's alpha (α) for these latent variables to see if they had adequate internal consistency. Takeover feeling was also assessed using the 7-point Likert scale question introduced into the APT questionnaire. Results with a p-value less than 0.05 (p < .05) are reported as significant. Bonferroni corrections were applied in all statistical tests to reduce the chance of Type I errors. Kolmogorov-Smirnov, Shapiro-Wilk, and Anderson-Darling tests indicated a violation of normality for the perceived safety, feeling of anxiety, and takeover feeling variables. Therefore, the Friedman test was performed for these dependent variables (see <Ref>). §.§.§ Perceived Safety The specific scenario had a higher mean rank of 2.22 compared to the abstract scenario with a mean rank of 2.15. However, no significant statistical difference was observed in perceived safety across the abstract and specific scenario cases when the Friedman test was performed (see <Ref>). While specific explanations yielded a higher perception of safety in our experiment, this relative difference is statistically insignificant. §.§.§ Feeling of Anxiety The specific scenario had a higher mean rank of 1.81 compared to the abstract scenario with a mean rank of 1.72. Similar to the perceived safety result, no significant statistical difference was observed in the feeling of anxiety across the abstract and specific scenarios when the Friedman test was performed (see <Ref>). Hence, explanation specificity was likewise inconsequential to the feeling of anxiety in the context of the study. §.§.§ Takeover Feeling For the takeover feeling dependent variable, the specific scenario had a higher mean rank of 2.10 compared to the abstract scenario with a mean rank of 1.68. The Friedman test indicated a significant statistical difference between the abstract scenario and the specific scenario, H(2) = 4.23, p = .037 (see <Ref>).
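For readers who wish to reproduce this style of analysis, the sketch below outlines the pipeline described above: forming the latent scores, checking internal consistency with Cronbach's alpha, testing normality, and computing a Friedman-type rank statistic for the two repeated conditions. It is our own illustration on made-up data; the hand-rolled Friedman statistic ignores tie corrections and is not the authors' code.

```python
# Illustrative sketch only (made-up data): latent-score formation, Cronbach's alpha,
# normality check, and a Friedman-type rank test for two repeated conditions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 39

# 7-point Likert responses to three questionnaire items (e.g. AVAM items 24-26).
items = rng.integers(1, 8, size=(n_subjects, 3)).astype(float)
latent = items.mean(axis=1)                      # latent score = mean of the items


def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)


# Paired scores for the two scenarios (abstract vs. specific), one row per subject.
abstract = latent
specific = np.clip(latent + rng.normal(0.3, 0.5, n_subjects), 1, 7)

print("alpha:", round(cronbach_alpha(items), 3))
print("Shapiro-Wilk:", stats.shapiro(abstract))          # normality check

# Friedman statistic from within-subject ranks (no tie correction):
scores = np.column_stack([abstract, specific])
ranks = np.apply_along_axis(stats.rankdata, 1, scores)
n, k = ranks.shape
chi2 = 12.0 / (n * k * (k + 1)) * (ranks.sum(axis=0) ** 2).sum() - 3 * n * (k + 1)
print("mean ranks:", ranks.mean(axis=0), "chi2:", round(chi2, 3),
      "p:", round(stats.chi2.sf(chi2, df=k - 1), 4))
```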
Furthermore, our statistical analysis showed no statistically significant difference in the takeover feeling variable between those who possessed and those who did not possess a driving licence (p > 0.05). Hence, specific explanations could evoke more takeover thoughts in passengers than abstract explanations in an AV. § QUALITATIVE RESULTS: THEMES AND REFLECTIONS We obtained qualitative data from the APT questionnaire administered after each scenario run. Participants were asked to describe their feelings with respect to the explanations that they received during the ride. <Ref> describes the themes obtained from the inductive thematic analysis of the comments. Themes were broadly categorised based on the participants' feelings, their assessment of the explanations, and the vehicle dynamics. §.§ Feelings Both driving experiences (abstract and specific) produced positive effects on passengers' safety. Passengers felt safer mostly through the reassurance that the explanations provided. While the abstract explanations were a bit confusing to the passengers, they didn't create a significant negative impact on the passengers' perceived safety. See sample quotes: `It was initially confusing due to the strange terminology used by the explanations. However, because the use of the explanations was consistent, it did inspire some confidence that the car was safe and knew what it was doing.'—CAND25 (Abstract). `[...] Safer and with more correct directions and decisions. Cyclist and motorcyclist wear no helmets.'—CAND38 (Specific). Comments regarding the feeling of anxiety seem to have an equal number of appearances in both the abstract and specific scenarios. Explanations in both cases made people less anxious. `they probably contributed to make me feel less anxious. [...]'—CAND31 (Abstract). `Explanations were reassuring and made me feel less anxious.'–CAND22 (Specific). While many participants felt safe in both driving cases, a few nursed the thought to be in charge of the driving activity at some points, e.g., `I am okay with the vehicle driving because they don't make mistake. I don't feel unsafe but sometimes I feel like being in control. The explanations were simple.'—CAND17 (Abstract). Some participants preferred their own driving style to that of the AV, and for this reason, they felt like being in control at certain points, e.g., `Some of the car's decisions and corresponding explanations did not align with what I would have done in the situation and therefore made me feel like I would like to take over control.'—CAND35 (Specific). §.§ Explanations Participants did notice the vagueness of the explanations in the abstract scenario. Some thought it was good, while others thought it was confusing and made them uncomfortable. `Its explanations were not specific enough since they only referred to traffic signs instead of the colour of the lights. This made me doubt the vehicle's actions a bit, even if they were correct.'—CAND15 (Abstract). `[...] Also I would be more comfortable if the explanation 'traffic sign' was 'traffic light is red/green' when referring to a traffic light.'—CAND23 (Abstract). There were more comments on the plausibility of the explanations in the specific scenario compared to those in the abstract scenario. `It explained the situation and its actions well, although sometimes it would perform an action and not provide an explanation (e.g. stopping briefly in front of a stop sign without voicing the action or situation). 
I still trust the vehicle's explanations since they were accurate descriptions of the situation.'—CAND15 (Specific). A couple of participants thought that the explanations in the abstract scenario were either too early or late. For example, `The explanations should have arrived a bit earlier, like a few meters before the vehicle actually stops so that I will know that it is planning to stop. [...]'—CAND23 (Abstract). §.§.§ Vehicle Dynamics Some comments were made about the vehicle's driving style and its interior. There was a comment relating to aggressive manoeuvre in the abstract scenario: `Seemed like oncoming vehicles were going to collide with me. It seems to sometime drive on pavements when negotiating corners.'—CAND29 (Abstract). The rotating steering wheel of the vehicle made some of the participants uncomfortable: `The steering wheel moving abruptly startled me sometimes.'—CAND1 (Specific). § DISCUSSION Our results corroborate prior studies by showing that intelligible explanations create positive experiences for users in autonomous driving <cit.>. While specific explanations might provide details that are likely to expose perception errors, evidence from this study shows that these errors, when they are not consequential, have no significant effect on passengers. Passengers would feel safe as far as the AV makes the right decisions. In fact, specific explanations tend to create a higher perception of safety (using the mean rank metric). This is against the thought that abstract explanations, with their ability to abstract details, would hide possible errors from the passengers, providing a `higher' sense of safety. Moreover, placebo explanations have been shown to have positive effects as real explanations on people <cit.>. A link between perceived safety and anxious feelings has been assumed in the literature <cit.>. Hence, since participants' perceived safety was highest in the specific scenario, we expected a lower feeling of anxiety in the specific scenario as well. Our results matched this expectation. Dillen et al. <cit.> observed that in-vehicle features, such as the rotating steering wheel, could influence the feeling of anxiety in in-vehicle participants. This was reinforced in our study by the comment from CAND1. We note that anxiety is hard to objectively capture, so the results from this experiment are only based on the participants' perceptions, and thus the term `feeling of anxiety'. Although passengers were not meant to takeover control from the vehicle in this study, we expected that they would conceive the idea to do so when they repeatedly received vague explanations that were not clear or too specific explanations that exposed subtle errors inherent in the AV. We found that takeover feeling was higher in the specific scenario. This might be because the participants were able to better understand the reasoning process of the AV through the details that the explanations provided. This understanding allowed the participants to predict and judge the AV's actions, leading to the thought to takeover control where the actions of the AV were irreconcilable with the participants' driving preference. We observe from the qualitative results that the thought to takeover might not necessarily be triggered by errors but could be the participants' desire to be in the driving loop or their strong preference for their own driving style. This aligns with the argument in <cit.> that shared control rather than human-out-of-the-loop automated driving is required. 
In general, explanations are helpful for passengers in autonomous driving. The level of explanation specificity might not have a significant effect on passengers' perceived safety and feeling of anxiety, but it might influence their thought to take over control from an AV. § CONCLUSION We conducted a within-subject lab study (N=39) using an immersive driving simulator to investigate the effects of explanation specificity on passengers' perceived safety, the feeling of anxiety, and takeover feeling. Our results showed that both abstract and specific auditory natural language explanations are helpful for improving passengers' perceived safety and reducing the feeling of anxiety, with neither specificity level significantly better than the other. However, the specificity of the explanations influenced the passengers' thought to take over control from the AV. In particular, more participants nursed the thought to take over control from the AV at certain points when they received specific explanations. In future work, we will investigate the effect of varying degrees of AV perception system errors as an additional dimension to the independent variables explored in this study. § ACKNOWLEDGMENTS This work was supported by the EPSRC RAILS project (grant reference: EP/W011344/1) and the EPSRC project RoboTIPS (grant reference: EP/S005099/1).
http://arxiv.org/abs/2307.02181v1
20230705101948
X17 discovery potential from $γD \to e^+ e^- p n$ with neutron tagging
[ "Cornelis J. G. Mommers", "Marc Vanderhaeghen" ]
hep-ph
[ "hep-ph", "nucl-ex", "nucl-th" ]
cmommers@uni-mainz.de Institut für Kernphysik and PRISMA^+ Cluster of Excellence, Johannes Gutenberg-Universität, D-55099 Mainz, Germany We propose a novel direct search experiment for X17 using the reaction γ D → e^+ e^- pn. X17 is a hypothetical particle conjectured by the ATOMKI collaboration to explain anomalous signals around 17 MeV in excited ^8Be, ^4He and ^12C nuclear decays via internal pair creation. It has been subject to a global experimental and theoretical research program. The proposed direct search in γ D → e^+ e^- pn can verify the existence of X17 through the production on a quasi-free neutron, and determine its quantum numbers separate from ongoing and planned nuclear-decay experiments. This is especially timely in view of the theoretical tension between results from the ^12C and ^8Be measurements. Using the plane-wave impulse approximation, we quantify the expected signal and background for pseudoscalar, vector and axial-vector X17 scenarios. We optimize the kinematics for the quasi-free neutron region with the upcoming MAGIX experiment at MESA in mind and show that for all three scenarios the X17 signal is clearly visible above the QED background. X17 discovery potential from γ D → e^+ e^- pn with neutron tagging Marc Vanderhaeghen August 1, 2023 ================================================================== Despite its amazing success, the Standard Model of particle physics is incomplete. For example, it fails to account for dark matter, neutrino oscillations and the strong CP problem. This has led to ongoing research into improvements and extensions `beyond the Standard Model'. Chief among these searches is the inclusion of undiscovered particles. With heavier particles either excluded or experimentally out of reach, a vigorous effort is presently underway to search for light dark sector particles in the MeV - GeV mass range <cit.>. Recent results of the ATOMKI collaboration have garnered significant theoretical and experimental interest; in a series of experiments <cit.> the collaboration claims to have found evidence of a new, light boson dubbed X17. The ATOMKI collaboration looked at internal pair creation in decays of excited ^8Be, ^4He and, recently, ^12C nuclei. In all three cases, an anomalous bump was found in the distribution of the emitted electron-positron pair's relative angle, with a statistical significance consistently exceeding 6 σ (see Ref. <cit.> for a review). In the Standard Model, nuclear transitions where an e^+ e^- pair is emitted are mediated by electromagnetic interactions and are well understood. They are sensitive to new physics appearing at the MeV scale, and thus the ATOMKI collaboration attributes their anomaly to the as-of-yet unseen X17, with a reported averaged mass of 17.02(10) MeV <cit.>. Assuming definite parity, the beryllium results indicate X17 can be a pseudoscalar, vector or axial-vector particle <cit.>, while the carbon results point to a scalar, vector or axial-vector particle <cit.>. Theoretically, models for X17 have been developed for the pseudoscalar, vector and axial-vector cases <cit.> that can explain the ATOMKI anomalies while conforming to existing exclusion bounds. In particular, according to the vector model put forward by Feng et al. <cit.> X17 must be protophobic (couple weakly to protons) to meet existing bounds from the NA48/2 experiment <cit.>. 
Experimentally, a global effort is underway to scrutinize the results of the ATOMKI anomaly, with new experiments such as CCPAC <cit.>, MEG II at the PSI <cit.> among others <cit.>. Many of these ongoing experiments focus on nuclear decays. After all, this is where X17 was first observed. However, if X17 is a bona fide new particle, then it must also play a role in other processes. Assuming the size of the X17 couplings needed to explain the ATOMKI anomaly, it was estimated that contributions from X17 to the reaction γ n → e^+ e^- n would be clearly visible over the QED background. The latter is suppressed for a neutron target <cit.>, enabling a direct search for X17 at electron accelerators. The upcoming MAGIX experiment at MESA <cit.> is ideal for this, due to MESA's low-energy yet high-intensity electron beam (105 MeV in its energy-recovering mode) and MAGIX’s high-resolution spectrometers, capable of resolving the invariant mass of the outgoing e^+ e^- pair to at least 0.1 MeV. Of course, in the lab one does not have access to a free, high-density neutron target, so processes like γ n → e^+ e^- n are not directly measurable. Instead, in this work we propose a novel direct search experiment using neutron tagging <cit.> with dilepton photoproduction on a deuteron, γ D → e^+ e^- pn. By tagging the neutron we can treat the bound neutron as quasi-free and the bound proton as a spectator. In this way, scattering events take place primarily on the `nearly on-shell' quasi-free neutron. We investigate the X17 signal relative to the QED background for the pseudoscalar, vector and axial-vector X17 scenarios in such an experiment, for a kinematic regime accessible by the MAGIX experiment at MESA. Let us begin with specifying the kinematics. All quantities are given in the lab frame in which the deuteron is at rest. We consider the reaction γ(E_γ, 𝐪, λ) D(m_D, 0, M) → e^+(E_+, 𝐩_+, s_+) e^-(E_-, 𝐩_-, s_-) p(E_p, 𝐩_p, s_p) n(E_n, 𝐩_n, s_n), where λ is the polarization of the incoming photon, and s_±, s_p and s_n are e^±, p and n helicities, respectively, and M is the deuteron spin projection on the z-axis. The z-axis is chosen along the direction of the photon momentum. The masses of the nucleons and deuteron are denoted by m_N and m_D, respectively. The momenta of the virtual photon or X17 is given by q' = p_+ + p_- and we denote the invariant mass of the dilepton system by q'^2 = m_ee^2. Our kinematic variables are E_γ, |𝐩_±|, the polar angles θ_± and θ_n, and the azimuthal angles ϕ_± and ϕ_n. All polar angles are defined with respect to the z-axis. Lastly, the cross section is given by dσ/dΠ = 1/ 64 ( 2π )^8 m_D E_γ | 𝐩_+ |^2 | 𝐩_- |^2 /E_+ E_- × | 𝐩_n |^2 /| 𝐩_n | ( m_D + E_γ - q'^0 ) - E_n | 𝐪 - 𝐪' | cosθ_nγγ ×⟨ | ℳ |^2 ⟩, where dΠ is shorthand for d |𝐩_+| d | 𝐩_- | dΩ_n dΩ_- dΩ_+. Here, ⟨|ℳ|^2 ⟩ is the spin-averaged Feynman matrix element and θ_nγγ is the angle between 𝐩_n and 𝐪 - 𝐪'. To calculate ℳ we use the plane-wave impulse approximation (PWIA). This allows us to separate the process γ D → e^+ e^- pn into two parts: a part where the proton is a spectator and another where the neutron is a spectator. By doing so, we disregard meson exchange currents and final state interactions. However, in our kinematic regime of interest the meson exchange currents are estimated to give corrections of approximately 5%, meaning they can be safely neglected. Likewise, for a first approximation, the final state interactions can be omitted (see Fig. 5 and its discussion in Ref. <cit.>. 
We have checked our model against the solid line in Fig. 5 and found both results to agree reasonably well). In the PWIA, the quasi-free neutron amplitude is given by ℳ_n^quasi-free( γ D → e^+ e^- p n ) = (2 m_D)^1/2(E_p/E_ñ)^1/2 ×∑_s_ñΨ̃^M_s_p s_ñ( 𝐩_p ) ℳ( γ ñ→ e^+ e^- n ) , where the quasi-free neutron, ñ, has momentum and spin projection -𝐩_p and s_ñ, respectively. The relative deuteron wave function in momentum space is Ψ̃^M_s_p s_ñ(𝐩_p) = (2π)^3/2{1/√(4π)ψ̃_0(|𝐩_p|) ⟨12 s_p; 12 s_ñ| 1 M ⟩ - ψ̃_2(|𝐩_p|)∑_M_s Y_2(M - M_s)(𝐩̂_p) ⟨ 1 M_s; 2 M - M_s | 1 M ⟩ ×⟨12 s_p; 12 s_ñ| 1 M_s ⟩}, where ⟨ j_1 m_1; j_2 m_2 | j m ⟩ are the Clebsch-Gordan coefficients and Y_LM are the spherical harmonics. We use the CD-Bonn parametrization <cit.> for the s- and d-wave functions, ψ̃_0 and ψ̃_2 respectively. The relevant diagrams are given in Fig. <ref>. The QED background processes include the Bethe-Heitler process (a) and Compton scattering (b). For the energy range of the MAGIX experiment at MESA, with E_γ around 100 MeV, we can describe the Compton amplitude as the sum of Born, π^0 t-channel exchange, and electric (α_E) and magnetic (β_M) nucleon polarizability contributions. The latter are parameterized by a low-energy expansion as detailed in Ref. <cit.>. The main contribution to the X17 signal process comes from the Born amplitude in Fig. <ref>(b), in which X17 is produced on a nucleon. In the Bethe-Heitler process, a possible X17 contribution is far off-resonance and is therefore negligible. To estimate the coupling of X17 to the nucleon we employ models by Alves and Weiner <cit.> for the pseudoscalar case, by Feng et al. <cit.> for the vector case and by Kozaczuk et al. <cit.> for the axial-vector case. The relevant Lagrangians are ℒ_P = i N̅γ_5 ( g^(0)_XNN + g^(1)_XNNτ_3 ) N X, ℒ_V = -e X_μ∑_N = p,nε_N N̅γ^μ N, ℒ_A = - X_μ∑_N = p,n a_N N̅γ^μγ_5 N, where τ_3 is the isospin Pauli matrix, g^(0)_XNN and g^(1)_XNN the isoscalar and isovector pseudoscalar couplings, respectively, ε_p,n the vector couplings, e>0 the proton charge and a_p,n the axial-vector couplings. We can constrain X17's couplings by using the branching fractions of the ^8Be and ^12C decays reported by the ATOMKI collaboration <cit.>, Γ_X/Γ_γ|_^8Be(18.15) = 6(1) × 10^-6, Γ_X/Γ_γ|_^12C(17.23) = 3.6(3) × 10^-6, which can be translated to limits on the X17 couplings <cit.>. We assume that the electronic branching fraction, ℬ(X → e^+ e^-), which always appears as an overall factor, is equal to unity. Note that the ^8Be(18.15) state, which is predominately isoscalar, is isospin mixed with the ^8Be(17.64) state, which is predominately isovector. In our multipole analysis we parameterize this isospin mixing with an isospin-mixing angle, θ_1^+, and an isospin-breaking parameter, κ <cit.>. Following Ref. <cit.>, we take θ_1^+ = 0.35(8) ^∘, whence κ = 0.681 <cit.>. For a pseudoscalar X17 scenario, results from the SINDRUM collaboration <cit.> put a strong bound on the isovector coupling, |g^(1)_XNN| ≤ 0.6 × 10^-3. By following the procedure described in Ref. <cit.> we derive a limit on the isoscalar coupling. For a vector X17 scenario the constraint provided by the NA48/2 experiment <cit.> leads to the protophobia condition, |ε_p| ≤ 1.2 × 10^-3. We derive the remaining neutron coupling from the ^8Be data as outlined in Ref. <cit.>, and from the ^12C data using <cit.> Γ_X/Γ_γ|_^12C(17.23) = k/Δ E( 1 + m_X^2/2 Δ E^2) | ε_p - ε_n |^2, where k = √(Δ E^2 - m_X^2). 
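Given the last relation, the size of the coupling combination implied by the carbon anomaly can be checked in a few lines. The back-of-the-envelope script below is our own addition: it uses only the numbers quoted above (m_X ≈ 17.02 MeV, the 17.23 MeV transition energy taken as ΔE, and the ^12C branching fraction) together with the protophobia bound, and it makes no claim about the precise values listed in Table <ref>.

```python
# Back-of-the-envelope check (our own addition): |eps_p - eps_n| implied by the
# 12C(17.23) anomaly via Gamma_X/Gamma_gamma = k/dE * (1 + m_X^2/(2 dE^2)) * |eps_p - eps_n|^2.
import math

m_X = 17.02          # MeV, averaged ATOMKI mass
dE = 17.23           # MeV, 12C(17.23) transition energy (assumed to play the role of dE)
ratio = 3.6e-6       # Gamma_X / Gamma_gamma for 12C(17.23)

k = math.sqrt(dE**2 - m_X**2)                    # X17 momentum in the decay
phase_space = (k / dE) * (1.0 + m_X**2 / (2.0 * dE**2))
eps_diff = math.sqrt(ratio / phase_space)        # |eps_p - eps_n|

print(f"k = {k:.2f} MeV, |eps_p - eps_n| = {eps_diff:.1e}")   # ~ 4e-3
# With the protophobia bound |eps_p| <= 1.2e-3, this pins |eps_n| at the few-per-mille level.
```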
To derive couplings for the scenario of an axial-vector X17 we need its nuclear matrix elements. For the carbon transition these matrix elements are unknown, and computing them is outside the scope of this work. For the beryllium transition we take the matrix elements as calculated in Ref. <cit.>, ⟨^8Be(g.s.) ||σ̂^(p)||^8Be(18.15) ⟩ = (-0.38 ± 2.19) × 10^-2, ⟨^8Be(g.s.) ||σ̂^(n)||^8Be(18.15) ⟩ = (-10 ± 2.6) × 10^-2. Our coupling values are presented in Table <ref>. As noted in Ref. <cit.>, there is some tension between the carbon and beryllium results for a vector-like X17. The neutron couplings only overlap when the uncertainty of the ^8Be results is increased to around 3σ. Indeed, this discrepancy highlights the need for independent verification of X17, as proposed here. To maximize the signal-to-background ratio for the quasi-free neutron contribution to the γ D → e^+ e^- pn process, we must optimize the kinematics. In doing so, to ensure the validity of the quasi-free neutron process in the PWIA, we have to remain within the neutron quasi-free peak (NQFP) region, defined by <cit.> | 𝐩_p | ≲√(m_N Δ)≈ 45.7 MeV/c, where Δ≈ 2.2 MeV is the deuteron binding energy. Our kinematics are optimized for the MAGIX experiment at MESA with a beam energy of E_γ = 105 MeV and we consider in-plane kinematics where ϕ_± = 0.0 ^∘ and ϕ_n = 0.0 ^∘. We found the optimal kinematics, consistent with the above, to be an asymmetric backward configuration for the lepton pair, |𝐩_+| = 65.7 MeV/c, θ_+ = -165.0 ^∘, |𝐩_-| = 20.1 MeV/c, θ_- = 168.0 ^∘, θ_n = 5.0 ^∘, where negative angles indicate that the positron is emitted in the opposite half plane in comparison with the electron and neutron. The NQFP corresponds to an angular range θ_n ∈[ -10, 18 ] ^∘. Figure <ref> shows the quasi-free neutron cross section for a vector X17 as a function of θ_n or m_ee. The blue and green signals are derived from beryllium and carbon, respectively. The red QED background includes uncertainties from neutron polarizabilities as taken from the PDG <cit.>. Dark and light bands represent 2σ and 3σ variations in X17’s couplings. Figures <ref>(a) and <ref>(b) depict the pseudoscalar and axial-vector scenarios in yellow and magenta with a 1σ variation in the coupling. The signal cross section is averaged over a bin of δ m_ee = 0.1 MeV, corresponding to the expected resolution of MAGIX. In Fig. <ref>(a) we see that the signal is visible above the QED background for both the beryllium- or carbon-derived couplings. In Fig. <ref>(b) we see that in a bump-hunt-style search the presence of X17 would cause a spike in a single bin, which would be particularly noticeable if we use the neutron-coupling values derived from the ^8Be experiment. A signal is also expected for the other two scenarios in Fig. <ref>, where the X17 signal juts out above the QED background. In summary, we have studied the reaction γ D → e^+ e^- pn with quasi-free neutron kinematics in the context of a novel, direct search for the X17 particle conjectured by the ATOMKI collaboration. Using the plane-wave impulse approximation we have shown that the X17 signal is visible above the QED background for a pseudoscalar, vector and axial-vector X17 scenario. Furthermore, we have shown that such a process can be experimentally accessed at the upcoming MAGIX experiment at MESA. 
Due to the uncertainty surrounding X17’s nature and the potential tension between the carbon and beryllium results, an experiment at an electron scattering facility like MESA, with its unparalleled e^+ e^- invariant-mass resolution, will provide an important and timely check of ongoing nuclear-decay experiments. The authors thank S. Schlimme for helpful communications. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), in part through the Research Unit [Photon-photon interactions in the Standard Model and beyond, Projektnummer 458854507 - FOR 5327], and in part through the Cluster of Excellence [Precision Physics, Fundamental Interactions, and Structure of Matter] (PRISMA^+ EXC 2118/1) within the German Excellence Strategy (Project ID 39083149).
http://arxiv.org/abs/2307.02807v1
20230706065320
Dynamics of a droplet in shear flow by smoothed particle hydrodynamics
[ "Kuiliang Wang", "Hong Liang", "Chong Zhao", "Xin Bian" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
a]Kuiliang Wang b]Hong Liang c]Chong Zhao a]Xin Bianauthor [author] Corresponding author. E-mail address: bianx@zju.edu.cn [a] State Key Laboratory of Fluid Power and Mechatronic Systems, Department of Engineering Mechanics, Zhejiang University, Hanghzhou 310027, China. [b] Department of Physics, Hangzhou Dianzi University, Hangzhou 310018, China. [c] Hangzhou Shiguangji Intelligient Electronics Technology Co., Ltd, Hangzhou, 310018, China. We employ a multi-phase smoothed particle hydrodynamics (SPH) method to study droplet dynamics in shear flow. With an extensive range of Reynolds number, capillary number, wall confinement, and density/viscosity ratio between the droplet and the matrix fluid, we are able to investigate systematically the droplet dynamics such as deformation and breakup. We conduct the majority of the simulations in two dimensions due to economical computations, while perform a few representative simulations in three dimensions to corroborate the former. Comparison between current results and those in literature indicates that the SPH method adopted has an excellent accuracy and is capable of simulating scenarios with large density or/and viscosity ratios. We generate slices of phase diagram in five dimensions, scopes of which are unprecedented. Based on the phase diagram, critical capillary numbers can be identified on the boundary of different states. As a realistic application, we perform simulations with actual parameters of water droplet in air flow to predict the critical conditions of breakup, which is crucial in the context of atomization. droplet; multiphase flow; SPH; § INTRODUCTION The deformation and breakup of droplets in shear flow are ubiquitous in engineering applications. On microfluidic chips, droplets are utilized for microbial cultivation and material transport <cit.>, and a thorough understanding of their dynamics in confined flows may improve the efficiency of production and transportation. In other environmental and industrial applications such as protection against harmful aerosols, ink-jet printing and atomization in nozzles <cit.>, liquid droplets are typically in gas flows. Accordingly, a decent knowledge on their dynamics with a high density/viscosity ratio against the matrix fluid is significant. To this end, a comprehensive investigation on the dynamics of a droplet in shear flow, which involves a wide range of Reynolds number, capillary number, confinements of the wall, viscosity/density ratio between the two phases, is called for. Since pioneering works by Taylor on droplet deformation in shear and extensional flows <cit.>, enormous theoretical and experimental studies have been conducted. A series of works by the group of Mason <cit.> further studied the deformation and burst of droplets, and even depicted the streamlines inside and around the droplets. Chaffey and Brenner <cit.> extended a previous analytical approximation to a second order form, which is crucial for the non-elliptic deformation of a highly viscous droplet under large shear rate. Barthes-Biesel and Acrivos <cit.> expressed the solution of creeping-flow equations in powers of deformation parameters and applied a linear stability theory to determine the critical values for the droplet breakup. Hinch and Acrivos <cit.> investigated theoretically the stability of a long slender droplet, which is largely deformed in shear flow. However, early analytical works rarely considered effects of finite Reynolds number or wall confinements. 
In addition, numerous experimental studies have been conducted on the droplet deformation and breakup <cit.>, where not only the effects of viscosity ratio between the droplet and the matrix fluid <cit.>, but also wall confinements <cit.> have been taken into account. With advance in computational science, numerical simulation has become a popular approach to study droplet dynamics in the past decades. Boundary integral method was among the first to be applied to study deformation of droplets in stationary and transient states <cit.>, non-Newtonian droplets <cit.>, and migration of a droplet in shear flow <cit.>. Moreover, Li et al. <cit.> employed a volume-of-fluid (VOF) method and Galerkin projection technique to simulate the process of droplet breakup. In the work of Amani et al. <cit.>, a conservative level-set (CLS) method built on a conservative finite-volume approximation is applied to study the effect of viscosity ratio and wall confinement on the critical capillary number. In addition, lattice Boltzmann method (LBM) has been widely employed to study deformation, breakup and coalescence of droplets <cit.>; to model viscoelastic droplet <cit.> and surfactant-laden droplet <cit.>. We note that an interface tracing technique such as VOF, CLS, a phase-field formulation, or immersed boundary method is often necessary by a flow solver based on Eulerian meshes. As a Lagrangian method, smoothed particle hydrodynamics (SPH) method has some advantages in simulating multiphase flows. Since different phases are identified by different types of particles, the interface automatically emerges without an auxillary tracing technique, even for a very large deformation. Moreover, inertia and wall effects can be taken into account straightforward, in contrast to theoretical analysis or the boundary integral method. Since its inception in astrophysics, SPH method has been largely developed and widely applied in various flow problems <cit.>. Morris <cit.> considered the surface tension based on a continuous surface force model and simulated an oscillating two-dimensional rod in SPH. Hu et al. <cit.> proposed a multi-phase model that handles both macroscopic and mesoscopic flows in SPH, where a droplet in shear flow was selected as a benchmark to validate the method. Other improvements and modifications have also been proposed for SPH in the context of multiphase problems <cit.>. Furthermore, a droplet or matrix flow with special properties can also be considered. For example, Moinfar et al. <cit.> studied the drop deformation under simple shear flow of Giesekus fluids and Vahabi <cit.> investigated the effect of thixotropy on deformation of a droplet under shear flow. Saghatchi et al. <cit.> studied the dynamics of a 2D double emulsion in shear flow with electric field based on an incompressible SPH method. There are also studies on colliding and coalescence process of droplets by SPH <cit.>. Simulation of bubbles in liquid is similar, but can encounter special challenges <cit.>, due to the reverse density/viscosity ratio as that of droplet in gas. Previously, simulations of multiphase flows by SPH method often investigated specific circumstances. Therefore, the objective of this paper is two fold: firstly, to simulate an extensive range of parameters to examine the SPH method for multiphase flows; secondly, to fill gaps of unexplored range of parameters and systematically investigate their influence on the droplet dynamics. The rest of the paper is arranged as follows: in Sec. 
<ref>, we introduce the multiphase SPH method and a specific surface tension model. We present validations and extensive numerical results in Sec. <ref>. We summarize this work after discussions in Sec. <ref>. § METHOD §.§ Governing equations and surface tension model We consider isothermal Navier-Stokes equations with a surface tension for multiphase flow in Lagrangian frame dρ/dt =-ρ∇·𝐯, d𝐯/dt = 1/ρ ( -∇ p + 𝐅_b + 𝐅_v + 𝐅_s ), where ρ, 𝐯 and p are density, velocity and pressure respectively. 𝐅_b is the body force, which is not considered in this study. 𝐅_v, 𝐅_s denote viscous force and surface tension at the interface between two phases, respectively. Following previous studies of quasi-incompressible flow modeling <cit.>, an artificial equation of state relating pressure to density can be written as p=c_s^2 ( ρ - ρ_ref ), where c_s is an artificial sound speed and ρ_ref is a reference density. Theoretically, subtracting the reference density has no influence on the gradient of pressure, but it can reduce the numerical error of SPH discretizations for the gradient operator. For a Newtonian flow, the viscous force 𝐅_v simplifies to 𝐅_v=μ∇^2𝐯, where μ is the dynamic viscosity. We assume surface tension to be uniform along the interface and do not consider Marangoni force. Therefore, the surface tension acts on the normal direction of the interface. Moreover, its magnitude depends on the local curvature as 𝐅 _s=σκ𝐧̂δ_s, where σ, κ, 𝐧̂ are surface tension coefficient, curvature and unit normal vector to the concave side, respectively; δ_s is a surface delta function and its discrete form shall be described later. To describe the surface tension at the interface between two fluids, a continuous surface tension model is adopted. As a matter of fact, surface tension my be written as the divergence of a tensor 𝐓 <cit.> σκ𝐧̂δ_s= ∇·𝐓, where 𝐓 = σ ( 𝐈 - 𝐧̂⊗𝐧̂ )δ_s. To represent a multiphase flow, we define a color function c and set a unique value for each phase, that is, c^I=0 and c^II=1 for the two phases, respectively. Apparently, the color function has a jump from 0 to 1 at the interface between phase I and II. Therefore, the unit normal vector can be represented by the normalized gradient of the color function as 𝐧̂=∇ c/ | ∇ c |, and the surface delta function is replaced by the scaled gradient as δ_s = | 𝐧 |= |∇ c |/ | c^I-c^II |. §.§ SPH method In SPH, fluid is represented by moving particles carrying flow properties such as density, velocity and pressure. We largely follow the work of Hu and Adams <cit.> and provide a brief derivation here. Density of a particle is calculated by interpolating the mass of neighboring particles as ρ_i=m_i∑_jW_ij, where mass m_i is constant for every particle. W_ij denotes a weight function for interpolation W_ij=W ( 𝐫 _ij,h ), where 𝐫 _ij=𝐫 _i-𝐫 _j is a relative position vector from particle j to i and h is the smoothing length. We further define V_i = 1/∑_jW_ij, to be an equivalent volume of particle i so that V_i = m_i / ρ_i. The pressure gradient can be computed as - ( 1/ρ∇ p ) _i = - ∑_j ( V_i^2p_i + V_j^2p_j ) ∂ W/∂ r_ij𝐞_ij, where p_i and p_j are obtained by Eq. (<ref>). The viscous force can be calculated as (μ∇^2𝐯 ) _i=∑_j2μ_i μ_j/μ_i + μ_j ( V_i^2 + V_j^2 ) 𝐯_ij/r_ij∂ W/∂ r_ij, where 𝐯_ij=𝐯_i - 𝐯_j is the relative velocity of particle i and j and r_ij= | 𝐫 _ij | is the distance between them. As suggested by Morris <cit.> and Hu et al. 
<cit.>, a part of pressure contribution σd-1/dδ_s is removed to avoid attractive force and improve the stability of the interactions between SPH particles. Therefore, we employ 𝐓' = σ (1/d𝐈 - 𝐧̂⊗𝐧̂ )δ_s to replace Eq. (<ref>), where d is the spatial dimension. Combining Eq. (<ref>), Eq. (<ref>) and Eq. (<ref>), we obtain 𝐓'=σ/ | c^I-c^II | | ∇ c | ( | ∇ c |^2 /d𝐈 - ∇ c ⊗∇ c ). The gradient of color function between phase I and phase II can be calculated in SPH as ∇ c_i = 1/V_i ∑_jV_j^2 ( c_j - c_i )∂ W/∂ r_ij𝐞_ij, where c_i (or c_j) is initially assigned to be c^I or c^II according to which phase particle i (or j) consititutes. Substitute Eq. (<ref>) into Eq. (<ref>) to obtain stress tensor 𝐓'_i=σ/ | ∇ c_i | | c^I-c^II | ( | ∇ c_i |^2 /d𝐈 - ∇ c_i ⊗∇ c_i ). Finally, the surface force term is calculated by the stress tensor using the SPH expression for divergence ( σκ𝐧̂δ_s)_i = ∑_j∂ W/∂ r_ij𝐞_ij· ( V_i^2𝐓'_i + V_j^2𝐓'_j ). It is simple to see that the discrete version of δ_s in SPH is ( δ_s )_i = 1/V_i | c^I-c^II | | ∑_jV_j^2 ( c_j - c_i )∂ W/∂ r_ij𝐞_ij |, which has a finite support to remove the singularity and distributes the surface tension onto a thin layer of two fluids across the interface. §.§ Computational settings The quintic kernel is adopted as weight function W =ϕ (3-R)^5-6(2-R)^5+15(1-R)^5 0 ≤ R < 1; (3-R)^5-6(2-R)^5 1 ≤ R < 2; (3-R)^5 2 ≤ R < 3; 0 R ≥ 3, where R=r/h and h is the smoothing length. ϕ is a normalization coefficient which equals 1/120, 7/(478π) and 1/(120π) in one, two and three dimensions, respectively. We set h=1.2Δ x with Δ x as the initial spacing distance between particles. This means that the support domain of the kernel function is truncated at 3.6Δ x, namely the cutoff r_c = 3.6Δ x. According to our tests, a smoothing length of 1.2Δ x is almost optimal for an excellent accuracy while avoiding the pairing instability. A detailed discussion on this issue is referred to Price <cit.>. Since we adopt a weakly compressible formulation, the sound speed c_s should be large enough to restrict the density fluctuations. Based on a scale analysis, Morris et al. <cit.> suggested that c^2_s should be comparable to the largest of U^2/Δ , μ U/ρ_0 L Δ , FL/Δ , σκ/ρ_0Δ, where Δ is the density variation and U, L, F, κ and σ are typical velocity, length, body force, curvature and surface tension coefficient, respectively. Accordingly, for multiphase flows the sound speed may be different for each phase. In all simulations, we set identical Δ≤ 0.5% for each phase and calculate c_s accordingly. At every time step, the minimal relative density is recorded among all particles, that is, ρ_min=min{ min{ρ _i/ρ _0^I}, min{ρ _j/ρ _0^II}}, where particle i belongs to phase I and particle j belongs to phase II; ρ_0^I, ρ_0^II are initial densities for the two phases, respectively. Thereafter, ρ_ref^I=0.99ρ_minρ _0^I, ρ_ref^II=0.99ρ_minρ _0^II are subtracted as reference density for each phase in Eq. (<ref>) to compute the particle pressure. This operation is performed to reduce numerical errors in calculating the pressure gradient while still keeping repulsive forces between particles. The explicit velocity-Verlet method is adopted for time integration and a time step is chosen appropriately for stability <cit.>. § NUMERICAL RESULTS We consider a shear flow generated by two parallel walls with opposite velocity of magnitude U. Periodic boundaries apply in the x direction. The computational domain is with length L and height H. 
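As a brief aside before specifying the flow configuration further, the quintic kernel introduced in the computational settings above is straightforward to implement; the sketch below is a possible 2D version (the explicit 1/h^2 factor in the normalization is the usual dimensional convention and is assumed here, as are the purely illustrative numbers in the example call).

import numpy as np

def quintic_kernel_2d(r, h):
    # Quintic spline kernel with support radius 3h; the 7/(478*pi) normalization
    # follows the text, and the 1/h**2 factor is the standard 2D convention
    # assumed here so that the kernel integrates to unity.
    R = np.asarray(r, dtype=float) / h

    def term(a):
        return np.clip(a - R, 0.0, None)**5   # (a - R)^5 for R < a, zero otherwise

    phi = 7.0 / (478.0 * np.pi * h**2)
    return phi * (term(3.0) - 6.0 * term(2.0) + 15.0 * term(1.0))

# Illustrative evaluation with h = 1.2*dx and an arbitrary spacing dx
dx = 0.02
h = 1.2 * dx
print(quintic_kernel_2d([0.0, h, 2.0 * h, 3.0 * h], h))   # last value vanishes at the cutoff

The clipped-difference form evaluates the three branches of the piecewise definition in a single expression.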
A circular droplet with radius R_0 is initially located at the center of the computational domain, as shown in Fig. <ref>. A no-slip boundary condition is applied at the wall-fluid interfaces using the method proposed by Morris <cit.>. Five dimensionless parameters that determine the deformation of the droplet are the Reynolds number Re = ρ_c γ̇ R_0^2 / μ _c, the capillary number Ca = γ̇ R_0μ _c / σ, the confinement ratio R_0/H, the viscosity ratio λ = μ_d / μ _c and the density ratio α = ρ_d / ρ _c, where γ̇ = 2U/H is the shear rate, σ is the surface tension coefficient, ρ_d and μ_d are the density and viscosity of the dispersed fluid phase inside the droplet, while ρ_c and μ _c are those of the continuous fluid phase, respectively. In Sec. <ref>, we study the deformation of an intact droplet while considering the effects of the five dimensionless numbers. In Sec. <ref>, we examine the breakup of the droplet. In Sec. <ref>, we summarize the droplet dynamics for both intact shape and breakup in phase diagrams. In Sec. <ref>, we demonstrate the deformation and breakup with physical parameters of a water droplet in air flow as an industrial application. §.§ Droplet deformation When the shear is mild, the droplet remains intact and deforms, eventually arriving at a stable shape. The degree of droplet deformation can be quantified by the Taylor deformation parameter D = (A-B)/(A+B), where A is the greatest length and B is the breadth of the droplet as shown in Fig. <ref>. To validate our method, we first compare our results for transient deformations with those of Sheth and Pozrikidis, obtained with an immersed boundary method within the finite difference method <cit.>. We follow their work to set L=H=4R_0=1, ρ_d = ρ_c = 1, μ_d = μ _c = 0.5 and adjust shear rate and surface tension. The two walls slide with velocities ±1/2γ̇H to generate a clockwise rotation of the droplet. Two resolutions are considered for particles initially placed on a square lattice: Δ x = 2R_0/25 and R_0/25, corresponding to the droplet containing N=484 and 1976 particles, respectively. We present particle distributions and D as functions of time in Fig. <ref> for a typical simulation with Re=0.125 and Ca=0.45. We note that the deformation of the droplet may oscillate in time and that its maximum elongation does not necessarily take place in the long-time steady state. We further focus on the transient deformations at short times in Fig. <ref> so that we can compare our results with those of Sheth and Pozrikidis <cit.>. It can be readily seen that our results with the low resolution Δ x = 0.02 or N=484 already reproduce the reference very well for different Reynolds numbers and/or capillary numbers. As the reference covers only a rather short time period, some interesting phenomena such as the oscillation of the Taylor deformation parameter D are not captured there, as indicated for Re=12.5 and Ca=0.025 in Fig. <ref>. To validate our method for vanishing Reynolds numbers, we calculate the stationary deformation and orientation of the droplet with respect to Ca. We follow Zhou and Pozrikidis <cit.> to set L=H=8R_0=2, ρ_d = ρ_c =1, μ_d = μ _c = 0.5 and adjust shear rate and surface tension accordingly. The deformation parameter D and orientation θ (defined in Fig. <ref>) as functions of Ca (up to Ca=1) for Re=0.01 are shown in Fig. <ref>. Results for Re=0.1 and 1 are also given for comparison, where droplet breakup already takes place at Ca ≳ 0.4 for Re=1.
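For completeness, one possible way to extract A and B, and hence D and θ, from the SPH particle positions is an equivalent-ellipse fit via the position covariance; this estimator is assumed here purely for illustration and is not necessarily the post-processing behind the figures.

import numpy as np

def taylor_deformation(xy):
    # Estimate D = (A - B)/(A + B) and the orientation theta (degrees) from the
    # 2D positions of the droplet particles by fitting an equivalent ellipse:
    # for a uniformly filled ellipse with semi-axes a >= b, the eigenvalues of
    # the position covariance are a^2/4 and b^2/4.
    rel = xy - xy.mean(axis=0)
    cov = rel.T @ rel / len(xy)
    evals, evecs = np.linalg.eigh(cov)                  # ascending eigenvalues
    B, A = 2.0 * np.sqrt(evals)
    theta = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1])) % 180.0
    return (A - B) / (A + B), theta

# Toy check: particles filling an ellipse with semi-axes 1.3 and 1.0
rng = np.random.default_rng(0)
pts = rng.uniform(-1.0, 1.0, size=(20000, 2))
pts = pts[np.sum(pts**2, axis=1) <= 1.0] * np.array([1.3, 1.0])
print(taylor_deformation(pts))    # D close to 0.13, long axis along x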
The difference between the results of Re=0.1 and Re=0.01 is insignificant and they both resemble the results of boundary integral method for Stokes flow <cit.>. We can readily conclude that Re=0.1 is small enough to approximate the Stokes flow and present the steady shapes accordingly on Fig. <ref>. We further present the contours and streamlines for a typical evolution of droplet deformation at vanishing Reynolds number in Fig. <ref>. We commence to investigate the effects of confinement and set L=16R_0 to minimize the periodic artifacts. We first restrict out attention to Re=0.1, α=1 and λ=1. Four ratios of confinement are considered: H=2.4R_0, 4R_0, 8R_0 and 16R_0. The deformation parameter as a function of Ca is shown in Fig. <ref>. As we can see, a smaller distance of the two walls enhances the elongation of droplet and makes its long axis align more horizontally. As we relax the confinement, the relation between D and Ca becomes linear and the difference between H=16R_0 and H=8R_0 is already negligible. Furthermore, we simulate cases where the droplet and the matrix flow are two fluids with different physical properties. We first consider two fluids of the same density but with different viscosities. We choose a computational domain of 16R_0 × 16R_0 and set Re=0.1, α=1 and λ ranges from 0.1 to 10. Initial space Δ x among nearest particles is 2R_0/25 so a droplet contains 484 particles. The deformation parameter as a function of Ca is shown in Fig. <ref>. As we can see, the deformation increases as λ increases from 0.1 to 10. In this range of λ, a droplet with lower viscosity has a smooth inside circulation and fast reaction which can reduce the elongation <cit.>. The other case is that fluids inside and outside the droplet have the same viscosity but different densities. The sound speed is chosen according to the ratio of initial density to balance the pressure c_s^c/c_s^d = ρ_ref^d/ρ_ref^c = ρ_d/ρ_c=α, where c_s^c, c_s^d and ρ_ref^c, ρ_ref^d are sound speeds and reference densities used for fluids outside and inside the droplet. As shown in Fig. <ref>, the difference between deformations of droplet under density ratio 0.1 - 10 is very small except obvious lower inclination at small Ca when α=0.1. In this small Reynolds number regime (Re=0.1), the density ratio has negligible influence and only the capillary number determines the droplet deformation. In 3D simulations, the width of simulation box W is an additional computational parameter compared to the 2D simulations. To compare with analytical predictions or experiment data, the length and width of simulation box are numerical and should be large enough. One set of parameters of Re=0.1, Ca=0.2, α = λ =1, H = 4R_0 are selected and different length L and width W of simulation box are examined. According to our simulations, the deformation basically decreases with the increase of L and/or width W. We compare the Taylor deformation parameter D in steady states of our simulations with the analytical prediction of Shapira and Haber <cit.>. The differences between our results and analytical prediction under different L and W are plotted in Fig. <ref>. It can be seen that when L is larger than 24R_0 and W is larger than 8R_0, the results has little change with the increase of L and/or W. Fig. 
<ref> shows the steady deformation of 3D droplets in shear flow when L=24R_0, W=8R_0, Re=0.1 and α=λ =1 with different Ca and confinement in H direction, compared with theoretical predictions of Shapira and Haber <cit.> and experiment data of Sibillo et al. <cit.>. Our results agree well with both anlaytical and experiment references at Ca=0.1 and 0.2, whereas are closer to the experimental data at Ca=0.3. The deformation increases with the confinement ratio R_0/H, which has the same trend as for 2D cases. §.§ Droplet breakup When the shear is strong, the droplet is over-stretched to break up. We find two patterns of breakup process under different viscosity ratios in simulations. As shown in Fig. <ref>, when α=1, λ=0.2, Re=0.1, Ca=10, and L=H=16R_0, a droplet is rotated and then stripped of its main body near the surface and gradually breaks apart. We call this breakup type A. This type is also found in the experiment study of Grace and they call it "tip streaming breakup" <cit.>. The conditions for type A breakup happening is exhibited in the next section. Fig. <ref> shows another set of typical snapshots of the droplet shape and flow fields in shear flow breaks when α=λ=1, Re=0.1, Ca=0.9 and L=H=16R_0. In this simulation, a droplet is stretched and its waist becomes slender and slender and finally breaks up. We call this breakup type B. To encompass the breakup of a 3D droplet with a large elongation, we employ a rather long computational domain with L=32R_0. Fig. <ref> shows the dynamics of the breakup with Re=0.1, H=2.857R_0, Ca=0.46 and α=λ=1. Left side are SPH particle distributions and right side are corresponding contour interfaces processed by SPH kernel interpolation into mesh cells. The color represents the magnitude of velocity. We adopt the same Ca and R_0/H as the experiment in creeping flow by Sibillo et al. <cit.>. The shape of the droplet in the breakup process of our simulation is very close to their experimental observation. Only a slight difference appears in the final stage: in the experiment, the droplet is divided into three main parts, while in our simulation the middle part continues to split into two smaller droplets. In contrast to the 2D case, a 3D droplet has a more slender shape before breaking up. §.§ Phase diagram To clearly visualize the states of a droplet in different conditions, we consider a range of Reynolds numbers, capillary numbers, and confinements/density/viscosity ratios and summarize our simulation results into phase diagrams. Thereafter, we may estimate the critical capillary number Ca_c that segments the intact and breakup states and further investigate how it is influenced by other dimensionless parameters. For λ=α=1, we perform a group of 2D simulations with different Reynolds number Re=0.01, 0.1, 1, 10 and confinement H=2.4R_0, 4R_0, 8R_0, 16R_0 with Ca ∈ [0.1, 1.1] and L=16R_0. For a general overview, the states of the droplet are summarized in Fig. <ref>. To get a clear view, we slice the phase diagram by two perspectives. Firstly, we divide results into groups of the same confinement to reveal the influence of Re on Ca_c as shown in Fig. <ref>. Overall it is apparent that a higher Re reduces Ca_c. Three scenarios are special: under confinement H=2.4R_0, 4R_0 and 8R_0, we can not differentiate Ca_c between Re=0.01 and 0.1. From another perspective of Ca versus confinement ratio for each Re on Fig. <ref>, we are not able to find a universal pattern. Under Re=0.01, Ca_c decreases with R_0/H while under Re=10, Ca_c increases with R_0/H. 
In contrast, under Re=0.1 and 1, Ca_c shows no monotonic relation with R_0/H. Furthermore, we investigate the effects of the viscosity ratio λ=μ_d/μ_c ∈ [0.1, 10] on the droplet dynamics for Re=0.1 and three confinement ratios H=4R_0, 8R_0 and 16R_0. The results are shown in Fig. <ref>. For breakup type A, the droplet rotates and is stripped off as described in Sec. <ref>; breakup type B means that a droplet is stretched and breaks up in the middle. Under Re=0.1, type A is observed only if the droplet has a much smaller viscosity compared to the matrix fluid. Overall, Ca_c decreases with the increase of λ. However, we notice a flattened trend or even a slightly reversed trend for Ca_c from λ=5 to λ=10, as shown on the insets of Fig. <ref>. According to the study of Karam et al. and Grace <cit.>, a maximum transfer of energy takes place across an interface, which is consistent with this trend. Due to the high computational cost in 3D, we only consider a moderate confinement H/R_0=4 and perform a group of simulations to draw a phase diagram in the plane of Ca and Re, as shown in Fig. <ref>. The size of the simulation box is L=32R_0, W=8R_0, H=4R_0. As in the 2D case, the critical Ca_c decreases with increasing Re in 3D, as shown in Fig. <ref>. However, the critical capillary number Ca_c in the 3D case is significantly smaller than that of the 2D case. §.§ Water droplet in air flow As one specific application, we employ our method to predict the breakup of a water droplet in a shear flow of air. The critical capillary number or shear rate determined in this way is helpful for designing an effective atomization device. Actual physical properties of water and air around 20^∘C are adopted: ρ_d=998.2 kg· m^-3, μ_d=1.0087× 10^-3 Pa· s and ρ_c=1.205 kg· m^-3, μ_c=1.81× 10^-5 Pa· s are set for the water (dispersed) phase and the air (continuous) phase, respectively; the surface tension coefficient σ =72.75× 10^-3 N· m^-1 is set for the water-air interface. We cover a relatively large range of Reynolds numbers and depict a phase diagram in the plane of Re and Ca on logarithmic-logarithmic scales in Fig. <ref>. This allows us to connect the results for the same droplet size and observe its behavior while changing Re and Ca. Points on each dotted line represent droplets of the same radius, as marked in the figure. For example, we have a line of dynamics for the droplet with R_0=10μ m under shear rates of 1× 10^6s^-1, 2× 10^6s^-1, 5× 10^6s^-1, 1× 10^7s^-1, 2× 10^7s^-1; another line of dynamics for the droplet with R_0=100μ m under shear rates of 5× 10^4s^-1, 1× 10^5s^-1, 2× 10^5s^-1, 5× 10^5s^-1, 1× 10^6s^-1. Furthermore, we observe that if Re is on the order of 100, the critical Ca for breakup is very sensitive to Re. We also perform a group of 3D simulations for a droplet with R_0=50 μ m under shear rates of 1× 10^5s^-1, 2× 10^5s^-1, 5× 10^5s^-1, 1× 10^6s^-1, 2× 10^6s^-1. The 3D results for the critical point of breakup are close to the 2D results. § CONCLUSIONS AND DISCUSSIONS In this study, we employed a multi-phase SPH method to simulate droplet deformation and breakup subjected to a simple shear flow in an extensive range of physical parameters. We performed both 2D and 3D simulations and validated them against benchmarks: transient deformations and steady shapes of droplets are compared with previous simulations, analytical derivations and experimental data. These results indicate that the method is reliable for simulating droplet dynamics in general.
We wish to emphasize the convenience of SPH method in simulating multi-phase problems, as we can leverage on its Lagrangian nature and differentiate different phases by particle species. In addition, the algorithm and data structure for 2D and 3D simulations have tiny difference and therefore, it is a simple task to extend the code from 2D to 3D. Economical 2D simulations allow us to investigate a wide range of physical parameters in five dimensions, which serve as a guide to 3D realistic situations. From the results, we come to the following conclusions. (1) A larger Reynolds number Re or capillary number Ca leads to a more considerable deformation of the droplet. The transient and steady-state deformations of the droplet in our study are in good agreement with the previous studies but beyond their time limits <cit.>. (2) Under low Reynolds number (Re=0.1), a stronger confinement due to the walls enhances the steady-state deformation in both 2D and 3D simulations. When the walls are separated further apart, the Taylor deformation parameter is almost linear with respect to Ca. The influence of confinement on the deformation of a droplet has been studied by Shapira and Haber by a first-order analytical solution based on Lorentz's reflection method. They proved that the walls do not influence the shape of deformed droplet but increases the deformation magnitude with a term of order ( R_0/H ) ^3 <cit.>. The experiment data of Sibillo et al. illustrate satisfactory agreement with the predictions of Shapira and Haber except for the droplet being within a small gap, where the reflection analysis is expected to fail <cit.>. Our 3D simulation results resemble the whole set of experiment data even when the droplet is within the small gap, which suggests the method as an applicative tool for more realistic situations in microfluidics. (3) The effects of wall confinement on the critical capillary number Ca_c are not universal under different Re. When Re=0.1, a closer gap of walls reduces Ca_c. This is because a closer gap of walls increases the deformation as described above. But when Re is larger, the relation between Ca_c and the confinement ratio is unclear. From our observation, this non-monotonic relation results from an interplay of influences by the shear strength and the stability of the whole flow field. On the one hand, the shear stress transferred to the droplet from the wall is more pronounced in stronger confinement <cit.>, thus closer walls reduce the Ca_c. On the other hand, the narrower channel reduces the instability of the flow and restricts droplet movements, thus increases the Ca_c. (4) Under Re=0.1 and the range of viscosity ratio λ∈ [ 0.1,1 ], a higher λ causes a larger deformation. The effect of λ on Ca_c is not monotonic when λ > 1 and there is a minimum value of Ca_c between λ=1 and λ=10. The existence of a minimal Ca_c among different λ has also been found by previous experiment studies <cit.>, when λ is about 1. The discrepancy between our results and the previous ones are attributed to the difference between 2D and 3D cases. At the same Re, the influence of density ratio on droplet deformation is much smaller compared with that of the viscosity ratio. (5) As an application, a phase diagram obtained by actual physical parameters of water and air is depicted to predict the magnitude of shear rate for breaking a droplet of certain size, which is helpful in the designing atomization nozzles. § ACKNOWLEDGEMENTS K. Wang and X. 
Bian acknowledge the national natural science foundation of China under grant number: 12172330. This work is partially supported by Hangzhou Shiguangji Intelligent Electronics Technology Co., Ltd, Hangzhou, China.
http://arxiv.org/abs/2307.03356v1
20230707022958
Co-variance Operator of Banach Valued Random Elements: U-Statistic Approach
[ "Suprio Bhar", "Subhra Sankar Dhar" ]
math.ST
[ "math.ST", "math.PR", "stat.TH" ]
http://arxiv.org/abs/2307.01553v1
20230704080945
Disentangling the Role of Electrons and Phonons in the Photoinduced CO Desorption and CO Oxidation on (O,CO)-Ru(0001)
[ "Auguste Tetenoire", "J. I. Juaristi", "M. Alducin" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci", "physics.chem-ph" ]
The role played by electronic and phononic excitations in the femtosecond laser induced desorption and oxidation of CO coadsorbed with O on Ru(0001) is investigated using ab initio molecular dynamics with electronic friction. To this aim, simulations that account for both kind of excitations and that only consider electronic excitations are performed. Results for three different surface coverages are obtained. We unequivocally demonstrate that CO desorption is governed by phononic excitations. In the case of oxidation the low statistics does not allow to give a categorical answer. However, the analysis of the adsorbates kinetic energy gain and displacements strongly suggest that phononic excitations and surface distortion also play an important role in the oxidation process. § INTRODUCTION The use of intense femtosecond laser pulses in the near infrared, visible, and ultraviolet regime constitutes an efficient tool to promote adsorbate reactions at metal surfaces that are forbidden or less likely under thermal conditions <cit.>. The laser excites the electrons of the metal and energy is subsequently transferred to the surface atoms by means of electron-phonon coupling. As a consequence, the adsorbates can gain energy from both the excited electronic and phononic systems. Experimentally, two-pulse correlation measurements have been used to disentangle which the timescale for the energy transfer to the adsorbates is <cit.>. In this way, the reaction is ascribed to be a dominant electron-assisted process when its timescale is of few picoseconds or less and to be a dominant phonon-assisted process when its timescale is longer. From the theoretical side, a proper understanding of this kind of experiments and of their outcome requires a proper characterization of the reaction dynamics. The excitation generated by the laser on the surface is accounted for using a two temperature model (2TM) in which the electronic and phononic excitations are described in terms of time-dependent electronic (T_e) and phononic (T_l) temperatures <cit.>. Subsequently, the dynamics of the adsorbates in the highly excited environment are simulated <cit.>. In this respect, the extension of the ab initio molecular dynamics with electronic friction method <cit.> to incorporate the effect of time-dependent electronic and phononic temperatures in the adsorbate dynamics [hereafter denoted as (T_e, T_l)-AIMDEF] <cit.> constitutes a way of treating the multidimensional dynamics of the adsorbates and surface atoms at the density functional theory (DFT) level, incorporating the coupling of the adsorbates to both the excited electronic and phononic systems. An important reaction that cannot be thermally activated under ultrahigh vacuum conditions <cit.> but can be propelled by femtosecond laser pulses <cit.> is CO oxidation when coadsorbed with atomic O on the Ru(0001) surface. Still, even in these conditions, CO desorption is around 30 times more probable than CO oxidation <cit.>. In two previous works <cit.>, we have applied the (T_e, T_l)-AIMDEF method to this system. Different surface coverages, for which the reaction paths under equilibrium conditions for CO desorption and oxidation had been previously studied <cit.>, were taken into account. Our results reproduced the experimental fact regarding the CO desorption to oxidation branching ratio being larger than one order of magnitude. Additionally, our dynamics simulations showed the reason for this behavior. 
We observed that CO desorption is a direct process only limited by the energy the CO molecules need to gain to overcome the desorption energy barrier. In contrast, the oxidation dynamics is much more complex: the configurational space for oxidation is very restricted, and the fact that the O and CO adsorbates gain enough energy to overcome the energy barrier to oxidation does not guarantee their recombination. Our simulations also reproduced the changes in the O K-edge XAS experimental spectra attributed to the initial stage of the oxidation process <cit.>, further confirming the robustness of the theoretical model. An important question that was not studied in the previous works is the relative importance of electronic and phononic excitations in both the CO desorption and CO oxidation reactions. In the present paper we aim to elucidate this question. We perform the so-called T_e-AIMDEF simulations <cit.>, in which the Ru surface atoms are kept frozen in their equilibrium positions, so that the adsorbates are coupled only to the excited electrons. In this way, we gain information about the CO desorption and oxidation probabilities, and about the dynamics of these processes, when only electronic excitations are considered. Comparison of these results with those obtained in the (T_e, T_l)-AIMDEF simulations, in which the effect of both electronic and phononic excitations is accounted for, allows us to answer the question of which channel dominates each reaction on each of the studied surface coverages. The paper is organized as follows. The theoretical model and computational settings are described in the Theoretical Methods section. The results of both the T_e-AIMDEF and the (T_e, T_l)-AIMDEF simulations for the CO desorption and oxidation probabilities, kinetic energy gains, and adsorbate displacements are presented in the Results and Discussions section. Finally, the main conclusions of the paper are summarized in the Conclusions section. § THEORETICAL METHODS §.§ Photoinduced Desorption Model The photoinduced desorption and oxidation of CO from the (O,CO)-covered Ru(0001) surface was simulated in <cit.> with the ab initio classical molecular dynamics with electronic friction method (T_e, T_l)-AIMDEF, which allows us to include the effect of both the laser-induced hot electrons and the concomitant electron-excited phonons <cit.>. As described in detail elsewhere <cit.>, the electronic and ensuing phononic excitations created in the metal surface by near infrared laser pulses are described within a two-temperature model (2TM) <cit.> in terms of two coupled thermal baths. The time-dependent temperatures associated with the electron and phonon baths, T_e(t) and T_l(t), are obtained by solving the following differential equations: C_e∂ T_e/∂ t = ∂/∂ zκ∂ T_e/∂ z-g (T_e-T_l)+S(z,t) , C_l∂ T_l/∂ t = g(T_e-T_l) , where C_e and C_l are the electron and phonon heat capacities, respectively, κ is the electron thermal conductivity, g is the electron-phonon coupling constant, and S(z,t) is the absorbed laser power per unit volume, which depends on the shape, wavelength, and fluence of the applied pulse. According to the above equations, the laser pulse is responsible for directly heating the electron system, which subsequently transfers part of its energy into either the bulk electrons or the lattice phonons [first and second terms in the r.h.s. of Equation (<ref>), respectively].
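To make the structure of these coupled equations concrete, the following zero-dimensional sketch integrates them with the electron heat-diffusion term dropped and with constant, purely illustrative material parameters; the actual 2TM calculation solves the full depth-dependent problem with the Ru parameters cited below. Because transport of heat into the bulk is neglected, the sketch overestimates the late-time temperatures.

import numpy as np

# Zero-dimensional two-temperature model: C_e dT_e/dt = -g (T_e - T_l) + S(t),
# C_l dT_l/dt = g (T_e - T_l), with C_e = gamma_e * T_e. All material parameters
# below are assumed, order-of-magnitude values for illustration only.
gamma_e = 400.0      # J m^-3 K^-2
C_l = 2.5e6          # J m^-3 K^-1
g = 1.0e18           # W m^-3 K^-1
F_abs = 200.0        # J m^-2, absorbed fluence
depth = 1.5e-8       # m, assumed effective absorption depth
fwhm, t0 = 110e-15, 0.5e-12   # s, Gaussian pulse duration and centre

sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def S(t):   # absorbed power density; its time integral equals F_abs / depth
    return (F_abs / depth) * np.exp(-(t - t0)**2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))

dt = 1.0e-16
T_e = T_l = 100.0    # K, assumed initial temperature
for n in range(int(4.0e-12 / dt)):       # propagate 4 ps with forward Euler
    t = n * dt
    dT_e = (-g * (T_e - T_l) + S(t)) / (gamma_e * T_e)
    dT_l = g * (T_e - T_l) / C_l
    T_e, T_l = T_e + dt * dT_e, T_l + dt * dT_l
print(T_e, T_l)      # the two baths approach each other a few ps after the pulse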
The diameter of the laser beam, on the one hand, and the time scale of few tens of picoseconds of interest, on the other hand, justify neglecting lateral heat diffusion by electrons in Equation (<ref>) and heat diffusion by phonons in Equation (<ref>) <cit.>. All the simulations performed in the present work as well as those in <cit.> correspond to irradiating the surface with the experimental pulse of ref. <cit.>, i.e., a 800 nm Gaussian pulse of 110 fs duration. Figure <ref> shows the results for T_e(t) and T_l(t) as obtained from 2TM for the experimental absorption fluences F= 200 and 300 J/m^2. As input parameters for the Ru(0001) surface in Equations (<ref>) and (<ref>), we use those of refs. <cit.>. Next, the effect of the laser-excited electrons on each adsorbate is described through the following Langevin equation: m_id^2𝐫_i/dt^2=-∇_𝐫_i V(𝐫_1,...,𝐫_N)-η_e,i(𝐫_i)d𝐫_i/dt +𝐑_e,i[T_e(t),η_e,i(𝐫_i)] , where m_i, r_i, and η_e,i are the mass, position vector, and electronic friction coefficient of the i^th atom conforming the set of adsorbates. The adiabatic force [first term in the r.h.s. of Equation (<ref>)] depends on the position of all atoms in the system (i.e., adsorbates and surface atoms). The electronic friction force (second term) and the electronic stochastic force (third term), which are related by the fluctuation-dissipation theorem, describe the effect of the electronic excitations and deexcitations on the adsorbate dynamics. In particular, 𝐑_e,i is modeled by a Gaussian white noise with variance, Var[𝐑_e,i(T_e,η_e,i)]=(2 k_B T_e(t) η_e,i(𝐫_i))/Δ t, with k_B the Boltzmann constant and Δ t the time-integration step. For each atom i, the electronic friction coefficient η_e,i(𝐫_i) is calculated with the local density friction approximation (LDFA) <cit.>. Within this approximation, the friction coefficient is assumed to be equal to the friction coefficient that the same atom i would experience in case of moving within a homogeneous free electron gas (FEG) of density n_0 = n_sur(r_i), with n_sur(r_i) being the electron density of the bare metal surface at the position r_i. As proposed by Novko et al. <cit.>, an efficient method to extract on-the-fly the bare surface electron density from the self-consistent DFT electron density of the whole system (adsorbates and surface), which is calculated at each integration step in AIMDEF, consists in applying the Hirshfeld partitioning scheme <cit.>. Specifically, the latter is used to subtract the contribution of the adsorbates from the self-consistent electronic density in order to obtain the bare surface electron density. In the (T_e, T_l)-AIMDEF simulations also the heating of the surface lattice due to the laser-induced electronic excitations is included. The latter is achieved by coupling the surface atoms to a Nosé-Hoover thermostat <cit.> that follows the temperature T_l(t) obtained from 2TM. In contrast, in the T_e-AIMDEF simulations that we perform in this work all the surface atoms are kept fixed at their equilibrium positions and only the adsorbates are allowed to move as dictated by the T_e-dependent Langevin dynamics [Equation (<ref>)]. These dynamics-restricted simulations are an attempt to single out the direct effect of the laser-excited electrons on the adsorbates from the effect due to energy transfer between the adsorbates and the surface atoms, which are also vibrationally excited by the electrons. 
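For clarity, the structure of a single integration step of the Langevin equation above for one degree of freedom is sketched below; the adiabatic force and the LDFA friction coefficient are represented by placeholder callables, and the simple Euler-type update stands in for the Beeman integrator actually used in AIMDEF.

import numpy as np

kB = 8.617333e-5   # Boltzmann constant in eV/K (any consistent unit system works)

def langevin_step(x, v, m, t, dt, T_e, eta, force, rng):
    # One step of m dv/dt = F_ad(x) - eta(x) v + R_e, with the random force drawn
    # from a Gaussian of variance 2 kB T_e(t) eta(x) / dt (fluctuation-dissipation).
    # 'force' and 'eta' are placeholders for the DFT force and the LDFA friction
    # coefficient evaluated on the fly; this update is only a sketch.
    var = 2.0 * kB * T_e(t) * eta(x) / dt
    R_e = rng.normal(0.0, np.sqrt(var))
    a = (force(x) - eta(x) * v + R_e) / m
    return x + v * dt + 0.5 * a * dt**2, v + a * dt

# Purely illustrative call with dimensionless placeholder inputs
rng = np.random.default_rng(1)
x, v = 0.0, 0.0
for step in range(1000):
    x, v = langevin_step(x, v, m=1.0, t=step * 1.0, dt=1.0,
                         T_e=lambda t: 6000.0,        # placeholder T_e(t) from the 2TM
                         eta=lambda x: 1.0e-3,        # placeholder friction coefficient
                         force=lambda x: -0.05 * x,   # placeholder harmonic adiabatic force
                         rng=rng)
print(x, v)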
§.§ General DFT Computational Settings The new T_e-AIMDEF simulations presented here were performed with vasp <cit.> (version 5.4) and the AIMDEF module <cit.> using the same computational settings that we used in our previous (T_e, T_l)-AIMDEF simulations of the desorption and oxidation of CO on different covered Ru(0001) surfaces <cit.>. Figure <ref> shows the supercells used to characterized the three coverages under study: * The low coverage (0.5ML O+0.25ML CO), in which each atop CO is surrounded by six O atoms that adsorb on the nearest hcp and fcc sites forming a honeycomb arrangement. * The intermediate coverage (0.5ML O+0.375ML CO), in which the O atoms adsorb at hcp sites forming a p(1×2) structure, while the CO molecules occupy the empty space left between the O arrays. * The high coverage (0.5ML O+0.5ML CO), in which both the O and CO adsorb on hcp sites forming two inserted p(1×2) structures. As seen in the figure, the three coverages are modeled with the same supercell that consists of a (4×2) surface unit cell and a vector length along the surface normal of 30.22 Å. Within this supercell, each covered Ru(0001) surface is described by five layers of Ru atoms and the corresponding (O,CO) overlayer. The Ru topmost layer and the bottom of the nearest periodic Ru slab are separated by about 19 Å of vacuum. The employed (4×2) surface cell contains various adsorbates and, hence, it will provide a reasonable description of the interadsorbate interactions and their effect in the adsorbate dynamics, which become important at sufficiently large coverages <cit.>. Let us remark that the low and intermediate coverages have been found in experiments <cit.>, while the high coverage is predicted to be stable by DFT <cit.> but has not been experimentally observed. In the T_e-AIMDEF simulations, the adiabatic forces are calculated with non spin-polarized DFT using the van der Waals exchange-correlation functional proposed by <cit.> and the same computational parameters that were used in our previous studies on the energetics <cit.> and (T_e, T_l)-AIMDEF dynamics of the O+CO-Ru(0001) system <cit.>. Specifically, the electronic ground state energy is determined at each integration step within a precision of 10^-6 eV. Integration in the Brillouin zone is performed using a Γ-centered 3×6×1 Monkhorst-Pack grid of special k points <cit.> and the Methfessel and Paxton scheme of first order with a broadening of 0.1 eV <cit.>. The Kohn-Sham orbitals are expanded in a plane-wave basis set with an energy cutoff of 400 eV. The projector augmented wave (PAW) method <cit.> that is implemented in VASP <cit.> is used to describe the electron-core interaction. Integration of the Langevin equation is performed with the Beeman method implemented in our AIMDEF module <cit.>. Each trajectory starts with the adsorbates at rest at their equilibrium position and is propagated up to 4 ps using a time step of 1 fs. For each coverage and absorbed fluence we run 100 trajectories. §.§ Calculation of observables Following <cit.>, a CO molecule is counted as desorbed if its center of mass height reaches the distance Z_cm= 6.5 Å from the Ru(0001) topmost layer with positive momentum along the surface normal (P_z > 0). 
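As a bookkeeping illustration, the desorption criterion just described can be applied to stored trajectories as in the short sketch below; the array layout and the synthetic data are our assumptions, and only the height threshold and the momentum condition are taken from the text.

import numpy as np

Z_DES = 6.5   # Angstrom: center-of-mass height above the topmost Ru layer

def count_desorbed_co(z_cm, p_z):
    # z_cm, p_z: arrays of shape (n_steps, n_CO) holding the CO center-of-mass
    # height and its momentum along the surface normal at every stored step.
    # A molecule counts as desorbed once it reaches Z_DES with p_z > 0.
    crossed = (z_cm >= Z_DES) & (p_z > 0.0)
    return int(np.count_nonzero(crossed.any(axis=0)))

# Toy example: 4 CO molecules followed for 4000 stored steps (synthetic data)
rng = np.random.default_rng(2)
z = 3.0 + np.cumsum(rng.normal(0.002, 0.01, size=(4000, 4)), axis=0)
p = rng.normal(0.0, 1.0, size=(4000, 4))
print(count_desorbed_co(z, p), "of 4 CO molecules desorbed in this synthetic trajectory")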
After analyzing all the trajectories, the CO oxidation (i.e., the O+CO recombinative desorption as CO_2) and CO desorption probabilities per CO molecule are calculated for each coverage as P_des(A)=N_des(A)/N_t N_CO with N_des(A) the number of the desorbing molecules under consideration (i.e., A stands for CO or CO_2), N_t the total number of trajectories, and N_CO the number of CO molecules in the simulation cell (2, 3, and 4, respectively, for low, intermediate, and high coverages). The mean total kinetic energy ⟨ E_kin⟩ (t) and mean center-of-mass kinetic energy ⟨ E_cm⟩ (t) per adsorbate type are calculated at each instant t as ⟨ E_kin (cm)⟩ (t)=∑_i=1^N_t∑_j=1^N_aE_kin (cm)^j(t)/N_t N_a where N_a is the total number of the specific species under consideration (e.g., nondesorbing CO molecules that remain adsorbed on the surface at the end of the simulation, CO molecules that desorb, nondesorbing O adatoms...) and E_kin (cm)^j is the kinetic (center-of-mass) energy of adsorbate j at instant t. § RESULTS AND DISCUSSION The CO desorption and CO oxidation probabilities obtained from the T_e-AIMDEF and (T_e, T_l)-AIMDEF simulations at the same absorbed fluence of 200 J/m^2 are compared for each coverage in Table <ref>. The CO desorption probabilities in the intermediate and high coverages are reduced by a factor 33.8 and 34.5, respectively, when only the direct effect of the excited electrons are included (T_e-AIMDEF). Assuming that a similar factor of ∼34 stands for the low coverage, we consider that the predicted desorption probability of ∼ 0.5% is compatible with the lack of CO desorption events we obtain within our limited statistics. As found in the (T_e, T_l)-AIMDEF simulations <cit.>, the P_des(CO) values correlate well with the CO desorption barriers calculated with DFT-vdW for each coverage <cit.>. That is, the number of CO desorption events increases as the barrier decreases. Let us remark that the drastic reduction we obtain in the T_e-AIMDEF desorption probabilities aligns with the two-pulse correlation measurements suggesting that the photoinduced desorption of CO on the O+CO-Ru(0001) surface is a phonon-dominated process <cit.>. Interestingly, this feature, the importance of the excited phonons in the photodesorption of CO, is not exclusive of the (O,CO)-covered surface, as it has been observed in diverse experiments in which Ru(0001) is covered with CO <cit.> and in molecular dynamics calculations motivated by those experiments <cit.>, which included the effect of T_e(t) and T_l(t) following the model by <cit.>. In respect of the CO oxidation process, there are no events in the case of the T_e-AIMDEF simulations. Nevertheless, the statistics is insufficient to exclude that the laser-excited electrons are the dominant driving mechanism, as proposed in <cit.>. The analysis of the kinetic energy and displacements below will show however that there are distinct features in the (T_e, T_l)-AIMDEF adsorbate dynamics as compared to the T_e-AIMDEF adsorbate dynamics suggesting that not only electrons but also the highly excited phonons are contributing to the oxidation process, similarly to what was obtained for the laser-induced desorption of CO from Pd(111) <cit.>. In order to confirm the above idea and gain further insights into the role of the excited electrons and phonons we also calculated for illustrative purposes an additional set of 100 T_e-AIMDEF trajectories assuming a extreme absorption fluence F= 300 J/m^2 for one of the covered surfaces, namely, the high coverage. 
As shown in Figure <ref>(a), the maximum of the electronic temperature for the new fluence is about 1600 K higher than for F= 200 J/m^2. After reaching the maximum, a difference of about 800 K is still maintained during the rest of the integration time used in our calculations. The purpose of these new simulations is to increase the energy provided to the adsorbates but excluding effects due to the lattice distortions inherent to phonon excitations. The results in Table <ref> show that P_des(CO) increases from 1% to 15.25% because of the fluence. The latter value is still about a factor 2 smaller than in the (T_e, T_l)-AIMDEF simulations for F=200 J/m^2. Lastly, neither at this high fluence there are oxidation events, although the analysis of the adsorbate displacements below will show that in a few cases the adsorbates can eventually abandon their adsorption well. §.§ Kinetic Energy Gain The time evolution of the mean kinetic energy of the adsorbates along the T_e-AIMDEF (thick solid lines) and (T_e,T_l)-AIMDEF dynamics (dotted lines) is compared in Figure <ref> for each adsorbate type and each coverage. In both simulations the absorbed laser fluence is F= 200 J/m^2. For simplicity, only the results of the nondesorbing species, i.e., the adsorbates that remain on the surface at the end of our simulations, will be discussed. A detailed analysis of the kinetic energy gained by the desorbed CO in the (T_e,T_l)-AIMDEF simulations can be found elsewhere <cit.>. A common observation in Figure <ref> is that irrespective of the coverage the adsorbates gain less kinetic energy in the T_e-AIMDEF simulations than in (T_e,T_l)-AIMDEF. There exist some interesting features worth mentioning. As discussed in  <cit.> in the case of (T_e,T_l)-AIMDEF simulations a quasithermalized state was obtained at the end of the simulations, and even more rapidly for the intermediate and high coverages. This is shown by the fact that the average total kinetic energy of the CO molecules is twice the average kinetic energy of their center of mass, and that this coincides, roughly, with the average kinetic energy of the O atoms. This is what is expected when there exists equipartition of the energy among the different degrees of freedom. This is clearly not the case in the T_e-AIMDEF simulations. For instance, we observe that for all coverages the average kinetic energy of the atomic O is larger than the average kinetic energy of the center of mass of the CO molecules. This can be rationalized by the fact that the O atoms are more strongly bound than the CO molecules and therefore their coupling to the electronic system is stronger. Note, finally, that even though this statement is generally true for all the coverages, in the high coverage case the difference between these two energies is much smaller and that tends to disappear at the end of the simulation time. This may be due to an increased importance of interadsorbate energy exchange that favors the thermalization of the system when the concentration of adsorbates at the surface is larger. Nevertheless, note that the energy gain when only electronic excitations are considered is roughly one half of the energy gain when both electronic and phononic excitations are taken into account. It is difficult to rationalize reduction factors larger than 30 in the CO desorption probabilities such as those presented in Table <ref> in terms of this reduction in the energy gain. 
This suggests that not only the increased energy gain but also other effects play a role in the increased CO desorption and oxidation probabilities when phononic excitations are considered. In order to strengthen this point further, in Figure <ref> we show the results for the kinetic energy gain of the adsorbates in T_e-AIMDEF simulations in the high coverage case for a larger fluence, namely F=300 J/m^2. In this case, albeit with a slightly different time dependence, the energy gain is very similar to that obtained with F=200 J/m^2 in the (T_e,T_l)-AIMDEF simulations. Nevertheless, even with similar energy gains, the desorption and oxidation probabilities are, as shown in Table <ref>, much lower when only electronic excitations are accounted for. This constitutes definitive proof that the role played by phononic excitations in the desorption and oxidation probabilities is not limited to being an energy source channel. This important point is further analyzed in the next subsection. §.§ Adsorbate Displacements Evidence of the important role of the excited phonons in the photoinduced reactions on O+CO-Ru(0001) is provided by comparing the in-plane displacement of the adsorbed species between both types of calculations, T_e-AIMDEF and (T_e,T_l)-AIMDEF. As in the previous section, only the diffusion of the nondesorbing species will be discussed. The displacement data are presented in terms of colored density plots, which correspond to two-dimensional histograms of the adsorbates' (x,y) positions over the surface (Figures <ref>, <ref>, and <ref>). For each kind of adsorbate and simulation type, each density plot is constructed using the in-plane positions along the whole trajectory (4 ps, i.e., 4000 steps) of all the adsorbates of that kind and of all the simulated trajectories. Thus, the density color code gives a qualitative idea of the amount of time the adsorbates have spent at a given position (higher densities correspond to longer times). Let us also remark that in each plot the atoms are allowed to go out of the unit cell (enclosed by a black solid line in the figures). The reason is that we use an extended coordinate representation in order to show the continuous path followed by the adsorbates. The adsorbate displacements in the low coverage case are plotted in Figure <ref>. In the T_e-AIMDEF simulations (left panels of Figure <ref>), the O adatoms stay on (or very close to) their respective adsorption sites. Something similar is observed for the CO molecules. They remain at top/near-top sites, showing no preference to move towards either the fcc or the hcp sites. The CO molecules explore an ellipse centered on the top position with a long axis of ∼1.3 Å and a short axis of ∼1 Å. These short displacements are clearly insufficient for CO oxidation to occur because the CO and O adsorbates cannot get close enough to recombine. The in-plane mobility of all the adsorbates increases considerably when the Ru lattice excitation is incorporated in the (T_e,T_l)-AIMDEF simulations (right panels of Figure <ref>). The O adatoms can now abandon their initial adsorption site and cross the surrounding bridge sites. In particular, O atoms initially located at the fcc sites (O_fcc) show a tendency to move to the nearest hcp sites, whereas the ones initially located at the hcp sites (O_hcp) show a tendency to move to the closest fcc sites. We also observe that O_hcp shows a slightly smaller mobility than O_fcc.
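The two-dimensional density histograms described above are simple to assemble once the unwrapped (extended) in-plane coordinates of every adsorbate, time step, and trajectory are collected. The short numpy/matplotlib sketch below illustrates the bookkeeping with synthetic random-walk data; the array layout and binning choices are our own assumptions, not the actual simulation output or plotting scripts.

```python
import numpy as np
import matplotlib.pyplot as plt

# Assumed layout: xy[traj, adsorbate, step] -> unwrapped in-plane position (Å).
n_traj, n_ads, n_steps = 100, 4, 4000
rng = np.random.default_rng(1)
xy = np.cumsum(rng.normal(scale=0.01, size=(n_traj, n_ads, n_steps, 2)), axis=2)

# Pool every recorded position of this species over all trajectories and steps.
pts = xy.reshape(-1, 2)

# The color code of the 2D histogram is then proportional to the time spent
# at each position (higher counts = longer residence).
H, xedges, yedges = np.histogram2d(pts[:, 0], pts[:, 1], bins=200)

plt.pcolormesh(xedges, yedges, H.T, cmap="viridis")
plt.gca().set_aspect("equal")
plt.xlabel("x (Å)")
plt.ylabel("y (Å)")
plt.colorbar(label="counts")
plt.title("In-plane density of one adsorbate species (synthetic data)")
plt.show()
```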
The slightly smaller mobility of O_hcp is consistent with its larger adsorption energy (5.62 eV) as compared to that of O_fcc (4.95 eV) <cit.>. Although less probable, there are also events in which the O atoms move beyond the nearest-neighbor adsorption sites. The mobility of the CO molecules also increases considerably in the (T_e,T_l)-AIMDEF simulations. They now explore a circle of radius ∼2.3 Å centered at their equilibrium position. In some cases, the excited CO molecules may even move beyond their first neighboring site. Figure <ref> shows that the adsorbate mobility is drastically reduced also in the intermediate coverage when the effect of the hot Ru lattice is not included. In the T_e-AIMDEF simulations (left column of Figure <ref>), the CO molecules basically move within an ellipse centered at their corresponding adsorption site, as in the low coverage case, although the explored area is larger (long and short axis lengths of ∼2.5 Å and ∼1.3 Å, respectively). We also observe a few cases in which the CO diffuses either along the x direction or to a nearest top site. The O adatoms mostly remain at their hcp adsorption sites, although there are a few events in which O diffuses towards the nearest fcc site that is located farther from the other adatoms. Furthermore, these diffusing atoms are the ones that do not have a CO molecule adsorbed on the near-top site located above them in the density plot. We checked that the displacement of the second O from the left occurs once the nearest CO above it desorbs. The mobility of both kinds of adsorbates increases considerably in the (T_e,T_l)-AIMDEF simulations (right column of Figure <ref>). In fact, the difference with respect to the T_e-AIMDEF simulations is even more pronounced than in the low coverage because in the intermediate coverage basically every spot in the simulation cell is at some instant occupied by either O or CO. Still, we observe that in the case of the CO molecules, the O row acts as a barrier that prevents the CO molecules from accessing the lower part of the simulation cell, except for a few rare events. In contrast, the O adatoms can move all over the cell, although it is less probable to find them at top sites than at hcp or fcc sites. These features provide, in a qualitative manner, indirect information on the properties of the potential energy surface and were already discussed in detail elsewhere <cit.>. The in-plane displacements of the adsorbates in the high coverage case are shown in Figure <ref>. Recall that in this case the T_e-AIMDEF simulations were performed for two different fluences, namely, F= 200 and 300 J/m^2. In the T_e-AIMDEF simulations with a laser fluence of 200 J/m^2 (left column of Figure <ref>), the in-plane motion of the CO molecules is mostly restricted to a circle of radius ∼1 Å. Still, we observe some events that involve lateral displacement of the CO molecules from one hcp site to another along the row in which they are located. However, since the O atoms hardly move away from their corresponding adsorption sites, no CO oxidation event is expected to take place under these conditions. As in the intermediate coverage, all adsorbates become extremely mobile when we also include the effect of the excited phonons [right column of Figure <ref>, (T_e,T_l)-AIMDEF simulations]. In fact, the pattern of the density plots for both coverages, which share the same p(1×2) arrangement of the O adatoms, is very similar.
Specifically, the CO molecules move predominantly along the row in which they are adsorbed, while the O adatoms end moving all over the surface. The comparison of the O and CO displacements in the T_e-AIMDEF simulations with F= 300 J/m^2 (middle column of Figure <ref>) and in the (T_e,T_l)-AIMDEF simulations with F= 200 J/m^2 (right column of Figure <ref>) is probably the most clear evidence of the importance that the hot phonons created indirectly by the laser pulse has on the reaction dynamics. As shown in the previous section, the energy gained by both O and CO is very similar in the two simulations (see Figure <ref>). In spite of it, we show here that the mobility of the adsorbates is still very limited in the high fluence T_e-AIMDEF simulations as compared to the displacements obtained in the (T_e,T_l)-AIMDEF simulations at lower laser fluence. For example, the in-plane motion of the CO molecules is mostly restricted to a circle of radius ∼1.2 Å in the former, while it basically occupies the whole surface in the latter. Nonetheless, it is also worth noticing that compared to T_e-AIMDEF with F= 200 J/m^2, the lateral displacement along the y-axis in between the two nearest rows of Ru atoms is clearly much more probable in the high fluence simulations. In the case of the O adatoms, even if they remain mostly at their adsorption site, we observe some events in which they go through bridge sites, toward the nearest fcc sites. Clearly, the adsorbates have an increased mobility when the laser fluence is increased that could allow them to eventually recombine, even if there is no motion of the surface, but with a much smaller probability. All in all, the present analysis shows that, for all coverages, the inclusion of lattice motion and phononic excitations increase the mobility of the adsorbates and allow them to explore larger regions of the configurational space. Therefore, though the low statistics does not allow us to categorically establish whether electronic or phononic excitations govern the CO oxidation process, these results strongly suggest that the role of phononic excitations cannot be neglected. Regarding the CO desorption process, it is also interesting to compare the displacements along the surface normal with and without including the effect of the phonon excitations. Similar to the in-plane displacements, the CO mobility along the z-axis is higher in (T_e,T_l)-AIMDEF than in T_e-AIMDEF for each coverage. As an example, we show in Fig. <ref> the time evolution of the CO center of mass height Z_CM for the high coverage. Comparing the results calculated for the same absorbed fluence (F=200 J/m^2), we observe that the Z_CM displacements of the nondesorbing CO increase from about 0.5–1 Å in T_e-AIMDEF to 2–3 Å in (T_e,T_l)-AIMDEF. Increasing the fluence in the T_e-AIMDEF simulations also implies an increase in the Z_CM displacements and, importantly, in the number of desorption events, but still significantly smaller than in the (T_e,T_l)-AIMDEF simulations at 200 J/m^2. § CONCLUSIONS The photoinduced desorption and oxidation of CO coadsorbed with O on Ru(0001) has been simulated with ab initio molecular dynamics with electronic friction that include the effect of the laser-induced hot electrons but neglects that of the phonon excitations (T_e-AIMDEF). 
Comparison of these new results with those we obtained previously with simulations that incorporated in the adsorbate dynamics the effects of both the hot electrons and the hot phonons [(T_e,T_l)-AIMDEF] allows us to discern the role of electrons and phonons in the oxidation and desorption of CO from the covered surface. The probabilities of both reactions are drastically reduced when only the coupling to electrons is included. As suggested by two-pulse correlation experiments in this system, CO desorption is dominated by the transient high temperature that is indirectly created by the laser pulse. Unfortunately, the statistics for CO oxidation are insufficient to determine the relative importance of the electronic and phononic mechanisms. Nonetheless, the comparative analysis of various dynamical properties, such as the adsorbate kinetic energy and adsorbate displacements, indicates that energy exchange with the hot lattice and the associated strong surface distortions are important ingredients to understand the CO oxidation reaction. This conclusion is supported by T_e-AIMDEF simulations performed at a high laser fluence. The kinetic energy gain is similar to that obtained in (T_e,T_l)-AIMDEF at a lower fluence, but the adsorbate displacements are still insufficient to facilitate recombination. § CONFLICT OF INTEREST STATEMENT The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. § AUTHOR CONTRIBUTIONS The authors contributed equally to the development of this research project. § FUNDING The authors acknowledge financial support by the Gobierno Vasco-UPV/EHU Project No. IT1569-22 and by the Spanish MCIN/AEI/10.13039/501100011033 [Grant No. PID2019-107396GB-I00]. This research was conducted in the scope of the Transnational Common Laboratory (LTC) “QuantumChemPhys – Theoretical Chemistry and Physics at the Quantum Scale”. § ACKNOWLEDGMENTS Computational resources were provided by the DIPC computing center. § DATA AVAILABILITY STATEMENT The data supporting the conclusions of this study are included in the manuscript; further inquiries by any qualified researcher can be directed to the corresponding author.
http://arxiv.org/abs/2307.01661v1
20230704113904
On Finite groups whose power graphs are line graphs
[ "Parveen", "Jitender Kumar" ]
math.CO
[ "math.CO", "math.GR", "05C25, 20D15" ]
On Finite Groups whose Power Graphs are Line Graphs. Parveen, Jitender Kumar^*. ^1Department of Mathematics, Birla Institute of Technology and Science Pilani, Pilani-333031, India. p.parveenkumar144@gmail.com, jitenderarora09@gmail.com. S. Bera (Line graph characterization of power graphs of finite nilpotent groups, Communications in Algebra, 50(11), 4652-4668, 2022) characterized finite nilpotent groups whose power graphs and proper power graphs are line graphs. In this paper, we extend the results of the above-mentioned paper to arbitrary finite groups. Also, we correct the corresponding result for the proper power graphs of dihedral groups. Moreover, we classify all the finite groups whose enhanced power graphs are line graphs. We classify all the finite nilpotent groups (except non-abelian 2-groups) whose proper enhanced power graphs are line graphs of some graphs. Finally, we determine all the finite groups whose power graphs, proper power graphs, enhanced power graphs and proper enhanced power graphs, respectively, are the complement of line graphs. 2020 Mathematics Subject Classification: 05C25, 20D15. § HISTORICAL BACKGROUND The study of graphs associated to algebraic structures is a large research area and one of the important topics of algebraic graph theory. Such studies provide an interplay between algebra and graph theory. Graphs associated to groups have been studied extensively because they have valuable applications and are related to automata theory (see <cit.>). Graphs associated to groups, viz. Cayley graphs, power graphs, commuting graphs, enhanced power graphs, prime graphs, intersection graphs, etc., have been studied by various researchers. Kelarev et al. <cit.> introduced the notion of power graphs. The power graph 𝒫(G) of a group G is a simple undirected graph with vertex set G such that two vertices a and b are adjacent if one is a power of the other, or equivalently, either a ∈⟨ b⟩ or b ∈⟨ a⟩. Cameron <cit.> proved that two finite groups which have isomorphic power graphs have the same number of elements of each order. Further, Cameron et al. <cit.> showed that two finite abelian groups are isomorphic if and only if their power graphs are isomorphic. A graph is said to be Γ-free if it has no induced subgraph isomorphic to Γ. Doostabadi et al. <cit.> characterized all the finite groups whose power graphs are K_1,3-free, K_1,4-free or C_4-free. Power graphs of groups with certain forbidden subgraphs such as split, threshold, chordal and cograph have been investigated in <cit.>. For more results on power graphs of groups, we refer the reader to <cit.> and references therein. The dominating vertices of a graph are the ones which are adjacent to all other vertices of the graph. The study of the connectedness of the graphs obtained by deleting dominating vertices becomes important and interesting. The proper power graph of a group G (the graph obtained from 𝒫(G) after deleting its dominating vertices) has also been studied in the literature. The connectivity of proper power graphs for certain groups, including nilpotent groups, has been studied in <cit.>. Further, Cameron and Jafari <cit.> discussed the connectivity of the proper power graph of an arbitrary finite group and characterized all groups whose power graphs have finite independence number.
They showed that a group whose proper power graph is connected must be either a torsion group or a torsion-free group. Also, they classify those groups whose power graph is dominatable. In order to study how close the power graph is to the commuting graph of a finite group G, Aalipour et. al <cit.> introduced the enhanced power graph. The enhanced power graph $̧ of a groupGis a simple undirected graph with vertex setGand two verticesxandyare adjacent ifx,y∈⟨z ⟩for somez∈G. Equivalently, two verticesxandyare adjacent in$̧ if and only if ⟨ x,y⟩ is a cyclic subgroup of G. Note that the power graph is a spanning subgraph of the enhanced power graph $̧. Bera and Bhuniya <cit.> studied the interconnection between algebraic properties of the groupGand graph theoretic properties of its enhanced power graph$̧. They proved that the enhanced power graph $̧ is Eulerian if and only ifGis of odd order. Also, they characterized the abelian groups and non-abelianp-groups having dominatable enhanced power graphs. Together with certain graph theoretic invariants such as minimum degree, independence number, strong metric dimension, matching number etc., Panda et al.<cit.> studied perfectness of the enhanced power graphs of certain groups including finite abelianp-groups. Zahirovićet al.<cit.> proved that two finite abelian groups are isomorphic if and only if their enhanced power graphs are isomorphic. Also, they supplied a characterization of finite nilpotent groups whose enhanced power graphs are perfect. Enhanced power graphs of groups with certain forbidden subgraphs such as split, threshold, chordal and cograph have been investigated in <cit.>. A detailed list of results and open problems related to enhanced power graph of a group can be found in <cit.>. Moreover, the proper enhanced power graph (is the graph obtained from$̧ after deleting its dominating vertices) is also studied in the literature. Bera et al. <cit.> classified all nilpotent groups whose proper enhanced power graph is connected and calculated their diameter. Moreover, they determined the domination number of proper enhanced power graphs of finite nilpotent groups. Bera et. al <cit.> computed the number of connected components of proper enhanced power graph. Moreover, they studied the connectivity of proper enhanced power graphs of certain non-abelian groups. The line graph L(Γ) of the graph Γ is a graph whose vertex set is all the edges of Γ and two vertices of L(Γ) are adjacent if they are incident in Γ. An example of L(Γ) of the graph Γ is shown in Figure <ref>. Line graphs are described by nine forbidden subgraphs (cf. Theorem <ref>). Recently, Bera <cit.> classified all the finite nilpotent groups whose power graphs and proper power graphs are line graphs. Motivated by the work of <cit.>, in the present paper, we intend to study the line graph of certain power graphs associated to finite groups, viz: power graph, proper power graph, enhanced power graph, proper enhanced power graph. In order to extend the results of <cit.>, we study the following problems. * Classification of finite groups G such that ∈{, , ,̧} is a line graph. * Classification of finite groups G such that ∈{, , ,̧} is the complement of a line graph. The main results of this manuscript are stated in Section 3. § PRELIMINARIES A graph Γ consists of a vertex set V(Γ) and an edge set E(Γ), where E(Γ) is an unordered subset of V(Γ) × V(Γ). If {u,v}∈ E(Γ), then we say u and v are adjacent and we write as u ∼ v. Otherwise, we write u v. 
If {u,v}∈ E(Γ) then the vertices u and v are called endpoints of the edge {u,v}. Two edges e_1 and e_2 are said to be incident if they have a common endpoint. An edge {u,v} is called a loop if u=v. A graph without loops or repeated edges is called simple graph. Throughout this paper, we are considering only finite simple graphs. Let Γ be a graph. The set of all vertices adjacent to a vertex u is called neighbours of u in Γ, and it is denoted by N(u), N[u]= N(u)∪{u}. The degree of a vertex u in a graph Γ is the cardinality of N(u) in Γ. A subgraph of a graph Γ is a graph Γ' such that V(Γ')⊆ V(Γ) and E(Γ')⊆ E(Γ). If V(Γ')= V(Γ), we call Γ ' a spanning subgraph of Γ. A subgraph Γ' of Γ is an induced subgraph by a set X if V(Γ')=X and two vertices of Γ' are adjacent if they are adjacent in Γ. A vertex u is said to be a dominating vertex of a graph Γ if u is adjacent to all other vertices Γ, and it is denoted by Dom(Γ). A graph Γ is called complete if every vertex of Γ is a dominating vertex, and the complete graph on n vertices is denoted by K_n. A graph Γ is said to be bipartite if V(Γ) can be partitioned into two subsets such that no two vertices in the same partition subset are adjacent. A complete bipartite graph is a bipartite graph such that every vertex in one part is adjacent to all the vertices of the other part. A complete bipartite graph with partition size m and n is denoted by K_m, n. A complete bipartite graph K_1,n is called a star graph. The complement of a graph Γ is a graph Γ such that V(Γ)= V(Γ) and two vertices are adjacent in Γ if and only if they are not adjacent in Γ. A path of length r from u to v in a graph is a sequence of r+1 distinct vertices starting with u and ending with v such that consecutive vertices are adjacent. If there is a path between any two vertices of a graph, then Γ is connected, otherwise disconnected. A maximal connected subgraph Γ', of a graph Γ, is called component. A path graph is a connected graph having at least 2 vertices and it has two (terminal) vertices that have degree 1, while all other vertices have degree 2. We denote by P_n, a path graph on n vertices. Let Γ_1,… , Γ_m be m graphs such that V(Γ_i)∩ V(Γ_j)= ϕ, for i≠ j. Then Γ =Γ_1 ∪⋯∪Γ_m is a graph with vertex set V(Γ_1) ∪⋯∪ V(Γ_m) and edge set E(Γ_1) ∪⋯∪ E(Γ_m). Let Γ_1 and Γ _2 be two graphs with disjoint vertex set, the join Γ_1 ∨Γ_2 of Γ_1 and Γ_2 is the graph obtained from the union of Γ_1 and Γ_2 by adding new edges from each vertex of Γ_1 to every vertex of Γ_2. Two graphs Γ_1 and Γ _2 are isomorphic if there is a bijection, f from V(Γ _1) to V(Γ _2) such that if u∼ v in Γ _1 if and only if f(u)∼ f(v) in Γ_2. Next lemma is very important relation for characterization of line graph. <cit.> Let Γ be a graph. Then Γ is the line graph of some graph if and only if none of the nine graphs in Figure <ref> is an induced subgraph of Γ. For characterization of complement of line graph we use the following lemma. <cit.> A graph Γ is the complement of a line graph if and only if none of the nine graphs Γ_i of Figure <ref>, is an induced subgraph of Γ. We shall use Γ_i(or Γ_i), for 1≤ i ≤ 9, explicitly in this paper without referring to it. Let G be a finite group. We write o(x) by the order of an element x in G. For a positive integer n, ϕ(n) denotes the Euler's totient function of n. A cyclic subgroup of the group G is called a maximal cyclic subgroup if it is not contained in any cyclic subgroup of G other than itself. 
We denote by the set of all maximal cyclic subgroups of G and ℳ^(p)(G)={M∈ : M is a p-group}. Note that ||=1 if and only if G is a cyclic group. The intersection of all the maximal cyclic subgroups of G is denoted by . The following remark is useful in the sequel. Let G be a finite group. Then G= ⋃_M∈ M and the generators of a maximal cyclic subgroup does not belong to any other maximal cyclic subgroup of G. (i) In the enhanced power graph $̧,x∼yif and only ifx,y∈Mfor someM∈. Consequently,Dom()̧= . (ii) In the power graph,x∼yif and only ifx, y ∈Mfor someM∈and eithero(x)|o(y)oro(y)|o(x). (iii) LetM∈such thatM=⟨m ⟩. ThenN[m]=Min$̧ as well as in . Moreover, if G is non-cyclic then m∉ and so m∈ V(). (iv) Let M, M'∈ such that M=⟨ m ⟩ and M'=⟨ m'⟩. Then m m' in $̧ and somm'inΓ(G)∈{ , , }. <cit.> If G is a finite group, then | |≠ 2. <cit.> Let G be a finite group. Then the following statements are equivalent: (i)G is a nilpotent group. (ii) Every Sylow subgroup of G is normal. (iii)G is the direct product of its Sylow subgroups. (iv) For x,y∈ G, x and y commute whenever o(x) and o(y) are relatively primes. <cit.> Any maximal cyclic subgroup of a finite nilpotent group G=P_1× P_2×⋯× P_r is of the form M_1× M_2×⋯× M_r, where M_i is a maximal cyclic subgroup of P_i, (1 ≤ i ≤ r). Forn ≥2, the generalized quaternion groupQ_4nis defined in terms of generators and relations asQ_4n = ⟨a, b : a^2n = e” , a^n= b^2, ab = ba^-1 ⟩.LetG_1be a nilpotent group having no Sylow subgroups that are cyclic or generalized quaternion. Supposee, e^''are the identities ofG_1andQ_2^krespectively. Let𝒟_1= {(e, x, e^''): x ∈ℤ_n}and𝒟_2={(e, x, y): x ∈ℤ_n, y ∈Q_2^k.and.o(y)=2}. <cit.> Let G be a finite nilpotent group. Then Dom()= {e} if G=G_1, {(e, x): x ∈ℤ_n} if G=G_1 ×ℤ_n and gcd(|G_1|, n)=1, {(e, e^''),(e, y)} if G=G_1 × Q_2^k and gcd(|G_1|, 2)=1, 𝒟_1 ∪𝒟_2 if G=G_1 ×ℤ_n × Q_2^k and gcd(|G_1|, n)=gcd(|G_1|, 2) =gcd(n, 2)=1 .<cit.> The enhanced power graph $̧ of the groupGis complete if and only ifGis cyclic. <cit.> For a finite group G, the power graph is complete if and only if G is a cyclic group of order 1 or p^m, for some prime p and for some m∈ℕ. <cit.> Let G be a finite group. Suppose that x∈ G has the property that for all y∈ G, either x is a power of y or vice versa. Then one of the following holds: (i)x=e; (ii)G is cyclic and x is a generator; (iii)G is a cyclic p-group for some prime p and x is arbitrary; (iv)G is a generalized quaternion group and x is order 2. § MAIN RESULTS The main results of the manuscript are stated in this section. In the following three theorems, the results of <cit.> are extended from nilpotent groups to arbitrary finite groups. Let G be a finite group. Then is a line graph of some graph Γ if and only if G is a cyclic group of prime power order. Let G be a finite non-cyclic group which is not a generalized quaternion group. Then is a line graph if and only if G satisfies the following conditions: (i) For each M∈, we have |M| ∈{6, p^α} for some prime p. (ii)(a) If M_i, M_j, M_k ∈ℳ^(2)(G), then |M_i ∩ M_j| ≤ 2 and |M_i ∩ M_j ∩ M_k|=1. (b) If M_s ∈∖ℳ^(2)(G) and M_t ∈, then |M_s ∩ M_t|=1. Let G be a finite non-cyclic group of odd order and let M_i ∈. Then is a line graph if and only if G satisfies the following conditions: (i) For each i, we have |M_i|= p^α for some prime p. (ii) The intersection of any two maximal cyclic subgroups is trivial. Let G be a generalized quaternion group Q_4n. Then is a line graph if and only if n∈{p, 2^k} for some odd prime p and k≥ 1. 
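The first of these theorems is easy to probe computationally on small examples. The sketch below builds the power graph of ℤ_n directly from the definition and checks (i) that 𝒫(ℤ_n) is complete exactly when n is a prime power, and hence the line graph of a star, since K_m = L(K_1,m), and (ii) that no root graph exists for 𝒫(ℤ_6). This is only an illustration of the statements, not part of any proof; it assumes the networkx library is available and that nx.inverse_line_graph raises NetworkXError when its argument is not a line graph.

```python
from itertools import combinations
import networkx as nx

def power_graph_Zn(n):
    """Power graph of the cyclic group Z_n (additive): a ~ b iff a in <b> or b in <a>."""
    G = nx.Graph()
    G.add_nodes_from(range(n))
    gen = {a: {(a * k) % n for k in range(n)} for a in range(n)}   # cyclic subgroup <a>
    for a, b in combinations(range(n), 2):
        if a in gen[b] or b in gen[a]:
            G.add_edge(a, b)
    return G

def is_prime_power(n):
    for p in range(2, n + 1):
        if n % p == 0:
            while n % p == 0:
                n //= p
            return n == 1
    return False

for n in [4, 5, 6, 8, 9, 12, 27, 30]:
    P = power_graph_Zn(n)
    complete = P.number_of_edges() == n * (n - 1) // 2
    print(f"n={n:2d}  prime power: {is_prime_power(n)!s:5}  P(Z_n) complete: {complete}")

# A complete graph is a line graph of a star: K_9 = L(K_{1,9}).
print(nx.is_isomorphic(nx.line_graph(nx.star_graph(9)), nx.complete_graph(9)))

# Z_6 is cyclic but not of prime power order, so P(Z_6) should not be a line graph.
try:
    nx.inverse_line_graph(power_graph_Zn(6))
    print("P(Z_6) is a line graph")
except nx.NetworkXError:
    print("P(Z_6) is not a line graph")
```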
Further, we consider the (proper) enhanced power graph of a finite group and classify the finite groupsGsuch that the graphs$̧ and are line graphs. Let G be a finite group. Then $̧ is a line graph of some graphΓif and only ifGis a cyclic group. Let G be a finite non-cyclic group and = ⋂_M_i∈ M_i. Then is a line graph of some graph Γ if and only if G satisfies the following conditions: (i)|(M_i∩ M_j) ∖|≤ 1 (ii)|(M_i∩ M_j∩ M_k) ∖|=0 Let G be a finite non-cyclic nilpotent group (except non-abelian 2-groups). Then is a line graph of some graph Γ if and only if G is isomorphic to one of the following groups. (i)ℤ_2×ℤ_2^2 (ii)ℤ _2^2×ℤ_2^2 (iii)ℤ_n×ℤ_p×ℤ_p ×⋯×ℤ_p, where p is a prime and gcd(n,p)=1. (iv)ℤ_n× Q_2^k such that gcd(2,n)=1. (v)ℤ_n× P such that P is a non-abelian p-group with gcd(n,p)=1 and the intersection of any two maximal cyclic subgroups of P is trivial. Finally, we investigate the question: When these above mentioned graphs are the complement of line graphs? Consequently, we obtain the following results. The power graph of a finite group is the complement of a line graph of some graph Γ if and only if G is isomorphic to one of the groups: ℤ_6, ℤ_2×⋯×ℤ_2, Q_8, ℤ_p^α, where p is a prime. Let G be a finite group which is not a cyclic p-group. Then is the complement of a line graph of some graph Γ if and only if G is isomorphic to one of the groups: ℤ_6, ℤ_2×⋯×ℤ_2, Q_8. Let G be a finite group of order n. Then the enhanced power graph $̧ is the complement of a line graph of some graphΓif and only ifGis isomorphic to one of the groups:ℤ_n, ℤ_2×⋯×ℤ_2, Q_8. Let G be a finite non-cyclic group. Then is the complement of a line graph of some graph Γ if and only if G is isomorphic to either ℤ_2×⋯×ℤ_2 or Q_8. § PROOF OF THE MAIN RESULTS In this section, we provide the proof our main results. §.§ Proof of the Theorem <ref> In order to prove the Theorem <ref>, first we establish the following proposition. Let G be a finite non-cyclic group and let ∈{ , }̧. Then there does not exist any graph Γ such that = L(Γ). Since G is a finite non-cyclic group, by Lemma <ref>, we have ||≥ 3. Let H_1=⟨ x ⟩, H_2= ⟨ y ⟩ and H_3= ⟨ z ⟩ be three maximal cyclic subgroups of G. Then the subgraph induced by the set {x,y,z,e} of is isomorphic to K_1,3 (see Remark <ref>(iv)). Consequently, is not a line graph of any graph (cf. Lemma <ref>). On combining Proposition <ref> and <cit.>, we obtain Theorem <ref>. §.§ Proof of Theorem <ref> LetGbe a finite non-cyclic group which is not generalized quaternion. Supposeis a line graph of some graphΓ. Then by Theorem <ref>, V()=G ∖{e}. On contrary assume thatGdoes not satisfy condition (i). ThenGhas a maximal cyclic subgroupMsuch that neither|M|=6nor|M|is a prime power. Let|M|=p_1^α_1 p_2^α_2⋯ p_k^α_k (k ≥ 2)be the prime power factorization of|M|. Ifp_i>3for somei ∈[k], thenMcontains at least4elements of orderp_iand4elements of orderp_i p_jfor somej ∈[k] ∖{i}. Letx_1, x_2, x_3, y_1, z_1 ∈ Msuch thato(x_1)=o(x_2)=o(x_3)=p_i p_j,o(y_1)=p_jando(z)=p_i. Then by Remark <ref>(ii), the subgraph induced by the set{x_1, x_2, x_3, y_1, z_1}ofis isomorphic toΓ_3; a contradiction (see Lemma <ref>). Thus,p_i ≤ 3for alli ∈[k]. Therefore,|M|=2^α 3^βfor someα≥ 1, β≥ 1. Since|M| ≠ 6, we get eitherα≥ 2orβ≥ 2. First suppose thatα≥ 2. Letx_1, x_2, y_1, y_2, z_1, z_2 ∈ Msuch thato(x_1)=o(x_2)=4, o(y_1)=o(y_2)=3ando(z_1)=o(z_2)=12. Observe that the subgraph induced by the set{x_1, x_2, y_1, y_2, z_1, z_2}ofis isomorphic toΓ_6; again a contradiction. Similarly, we get a contradiction forβ≥ 2. 
Thus, for eachM∈, we obtain|M|∈{6, p^α}. If possible, assume thatGdoes not satisfy condition (ii). Further, we have the following cases: Case-1:G does not satisfy (ii)(a). Then we have the following further two subcases: Subcase-1.1:|M_1 ∩ M_2 ∩ M_3| ≥ 2 for some M_1, M_2, M_3 ∈ℳ^(2)(G). Letx (≠ e) ∈ M_1 ∩ M_2 ∩ M_3andM_i=⟨ m_i⟩for eachi ∈{1,2,3}. Then by Remark <ref>(ii), the subgraph induced by the vertex set{x, m_1, m_2, m_3}ofis isomorphic toK_1,3, which is not possible. Subcase-1.2:| M_1 ∩ M_2 | ≥ 3 for some M_1, M_2 ∈ℳ^(2)(G). Supposex, y ∈ (M_1∩ M_2)∖{e}. LetM_1=⟨ m_1⟩andM_2=⟨ m_2⟩. By Remark <ref>, the subgraph induced by the set{x, y, m_1,m_1^-1, m_2, m_2^-1}ofis isomorphic toΓ_6 ;a contradiction. Thus,Gmust satisfy the condition (ii)(a). Case-2:G does not satisfy (ii)(b). Then there exist two maximal cyclic subgroupsM_1 ∈∖ℳ^(2)(G)andM_2∈such that|M_1 ∩ M_2| ≥ 2. In view of the condition (i), we discuss the following subcases. Subcase-2.1:|M_1|=6. Then|M_1 ∩ M_2| ∈{2,3}. Let|M_1 ∩ M_2|=2,M_1=⟨ x⟩andM_2=⟨ y⟩. Thenx^3 ∈ M_1 ∩ M_2becausex^3is the only element of order2inM_1. The subgraph induced by the set{ x, x^2, x^3, x^4, x^5, y}ofis isomorphic toΓ_5; a contradiction. If|M_1 ∩ M_2|=3, thenx^2, x^4 ∈ M_1 ∩ M_2. The subgraph induced by the set{x, x^2, x^4, x^5, y, y^-1}is isomorphic toΓ_6, which is not possible. Subcase-2.2:|M_1|=p^α (p> 2). LetM_1=⟨ x⟩,M_2=⟨ y⟩and letm (≠ e)∈ M_1 ∩ M_2 . Then the subgraph ofinduced by the set{ x, x^-1, y, y^-1, m, m^-1}is isomorphic toΓ_6; a contradiction. Conversely, suppose thatGsatisfies both the given conditions. On contrary, assume thatis not a line graph and so it has an induced subgraphΓisomorphic to one of the nine graphs given in Figure <ref>. In view of Remark <ref>, first we prove the following claim. Claim 2.3: Ifx∈ V(Γ)such thatx∈ Mfor someM∈, thenMmust be a2-group. Proof of claim: If possible, letx∈ MandM∈∖ℳ^(2)(G). Then by condition (ii)(b), we haveM∩ M'={e}for allM' (≠ M)∈. Consequently, by Remark <ref>(ii), we getN(x) ⊆ Min. Therefore, ifx∼ y, theny∈ Mand soN(y) ⊆ M. Connectedness ofΓimplies thatV(Γ) ⊆ M. For|M|=p^α, wherepis an odd prime, note that the subgraphinduced by any non-empty subset ofMis a complete graph. It implies thatΓis a complete subgraph ofwhich is not true becauseΓ≅Γ_ifor somei, where1≤ i≤ 9(see Figure <ref>). If|M|=6, thenM ≅ℤ_6. Observe that the subgraph ofinduced by the setM \{e}, shown in Figure <ref>, can not containΓas an induced subgraph. Thus, the claim holds. Now ifΓis isomorphic toK_1,3, as shown in Figure <ref>, then by Remark <ref>(ii) and Claim2.3, there exist maximal cyclic subgroupsM_1, M_2, M_3 ∈ℳ^(2)(G)such thata, d ∈ M_1, b, d ∈ M_2andc, d ∈ M_3. Note thatM_1 ≠ M_2. Otherwise,a∼ b in(see Remark <ref>(ii)), which is not possible. Similarly,M_2 ≠ M_3andM_1 ≠ M_3. Also,d ∈ (M_1 ∩ M_2 ∩ M_3)∖{e}; a contradiction to the condition (ii)(a). Thus,Γcan not be isomorphic toK_1,3. Now, supposeΓ≅Γ_ifor somei,2≤ i ≤ 9. Further, note thatΓhas an induced subgraph isomorphic toΓ'as shown in Figure <ref>. Sincex ∼ y, y ∼ zandz ∼ x, then by the definition of, it is easy to observe that one of the following three holds:x, y ∈⟨ z⟩, y, z ∈⟨ x⟩, x, z ∈⟨ y⟩. Consequently, there existsM ∈ℳ^(2)(G)such thatx, y, z ∈ M. Similarly, there existsM^'∈ℳ^(2)(G)such thaty, z, w ∈ M^'. Notice thatM ≠ M^'. Otherwise,x ∼ win, which is not possible. Buty, z ∈ M ∩ M^'; a contradiction of condition (ii)(a). Thus,is a line graph. This completes our proof. 
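As a concrete sanity check of the characterization just proved, the following sketch constructs the dihedral group of order 6 explicitly, builds its power graph from the definition, deletes the dominating vertices to obtain the proper power graph, and confirms with networkx that the result is isomorphic to L(K_1,2 ∪ 3K_2), which is precisely the identity noted in the remark below. The group encoding and helper names are our own illustrative choices, and networkx is assumed to be available.

```python
from itertools import combinations
import networkx as nx

# Dihedral group of order 6: elements (r, s), r in Z_3, s in {0, 1};
# s = 0 are rotations, s = 1 are reflections (r^3 = s^2 = e, s r s = r^-1).
def mult(a, b, n=3):
    (r1, s1), (r2, s2) = a, b
    return ((r1 + (r2 if s1 == 0 else -r2)) % n, s1 ^ s2)

def cyclic_subgroup(g):
    """Elements of <g>, obtained by repeated multiplication."""
    elems, x = set(), g
    while x not in elems:
        elems.add(x)
        x = mult(x, g)
    elems.add((0, 0))          # the identity
    return elems

elements = [(r, s) for r in range(3) for s in range(2)]
gen = {g: cyclic_subgroup(g) for g in elements}

# Power graph: x ~ y iff x in <y> or y in <x>.
P = nx.Graph()
P.add_nodes_from(elements)
for x, y in combinations(elements, 2):
    if x in gen[y] or y in gen[x]:
        P.add_edge(x, y)

# Proper power graph: delete the dominating vertices (here only the identity).
dominating = [v for v in P if P.degree(v) == P.number_of_nodes() - 1]
P_proper = P.subgraph(set(P) - set(dominating)).copy()

# Root graph K_{1,2} union 3K_2, and the isomorphism check.
root = nx.disjoint_union_all([nx.star_graph(2)] + [nx.complete_graph(2) for _ in range(3)])
print(nx.is_isomorphic(P_proper, nx.line_graph(root)))   # expected: True
```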
For the dihedral groupD_2n= ⟨ a, b : a^n = b^2 = e, ab = ba^-1⟩, note that𝒫^**(D_6) = L(K_1,2∪ 3 K_2). It follows that Theorem1.10of <cit.> is not correct. Moreover, we correct the same in the following corollary. Let G be the dihedral group D_2 n of order 2n. Then is a line graph of some graph Γ if and only if n∈{6, p^α} for some prime p. First assume that 𝒫^**(D_2 n) is a line graph. Note that D_2 n has one maximal cyclic subgroup M=⟨ a ⟩ of order n, and n maximal cyclic subgroups M_i=⟨ a^ib⟩, where 1≤ i ≤ n, of order 2. Then by Theorem <ref>, either n=6 or p^α. Conversely, if n∈{6, p^α}, then G satisfies the condition (i). Note that the intersection of any two maximal cyclic subgroups of D_2n is trivial. Thus, condition (ii) holds. By Theorem <ref>, 𝒫^**(D_2 n) is a line graph. Let G be the semidihedral group SD_8n = ⟨ a, b : a^4n = b^2 = e, ba = a^2n -1b ⟩. Then 𝒫^**(SD_8n) is not a line graph of any graph. Since M_1=⟨ a⟩, M_2=⟨ ab⟩ and M_3=⟨ a^3b⟩ are three maximal cyclic subgroups of SD_8n such that M_1∩ M_2∩ M_3={e, a^2n}. By Theorem <ref>, 𝒫^**(SD_8n) is not a line graph of any graph. §.§ Proof of Theorem <ref> LetGbe a generalized quaternion group such thatis a line graph. ThenV()=G∖ Z(G)andGhas one maximal cyclic subgroup of order of2 nandnmaximal cyclic subgroups of order4. LetMbe the maximal cyclic subgroup of order2 n. If possible, assume thatnis divisible by two primespandqsuch thatp< q. Letx_1, x_2, y_1, y_2, z_1, z_2 ∈ Msuch thato(x_1)=o(x_2)=2 p, o(y_1)=o(y_2)=2 p qando(z_1)=o(z_2)=q. By Remark <ref>(ii), the subgraph induced by the set{x_1, x_2, y_1, y_2, z_1, z_2}is isomorphic toΓ_6; a contradiction. Thus,n=p^α. Ifp>2andα≥ 2, then note thatMhas at least two elementsx, x'of order2 p, two elementsy,y'of orderp^2and two elementsz, z'of order2 p^2. The induced subgraph by the set{x,x',y,y',z,z'}is isomorphic toΓ_6, again a contradiction. Thus, eithern=2^kornis an odd prime. Conversely, Ifnis an odd prime, then𝒫^**(Q_4 n)=K_2 n-2∪ n K_2=L(K_1,2 n-2∪ n K_1,2) . Ifn=2^k, then𝒫^**(Q_4 n)is a line graph (cf. <cit.>). This completes the proof. Now we intend to classify all the groupsGsuch that the enhanced power graph$̧ and the proper enhanced power graph are line graphs, respectively. In order to prove the Theorem <ref>, first note that if G is a cyclic group of order n, then by Theorem <ref>, we get ≅̧K_n. Further, note that K_n=L(K_1 , n). Consequently, we have the following lemma. Let G be a finite cyclic group. Then $̧ is a line graph of some graph. Thus, by Proposition <ref> and Lemma <ref>, we obtain Theorem <ref>. IfGis cyclic, thenis an empty graph (cf. Theorem <ref>). Consequently, we now characterize all the finite non-cyclic groupsGwhose proper enhanced power graphsare line graphs (see Theorem <ref>). §.§ Proof of Theorem <ref> First, suppose thatis a line graph of some graphΓ. On contrary, assume thatGdoes not satisfy condition (i). ThenGhas two maximal cyclic subgroupsM_1andM_2such that|(M_1∩ M_2) ∖ |≥ 2. Sincee∈, we have|M_1|≥ 3and|M_2| ≥ 3. SupposeM_1 =⟨ x_1 ⟩ = ⟨ y_1 ⟩andM_2= ⟨ x_2 ⟩ = ⟨ y_2 ⟩. Further, letx,y ∈ (M_1∩ M_2) ∖. Then the subgraph induced by the set{x,y,x_1,y_1,x_2, y_2}is isomorphic toΓ_6(see Remark <ref>); a contradiction. Thus,Gmust satisfy the condition (i). Now suppose thatG does not satisfy the condition (ii). ThenGhas three maximal cyclic subgroupsM', M”, M”'such that|(M'∩ M”∩ M”') ∖ |≥ 1. Assume thatm∈ (M'∩ M”∩ M”') ∖. Consider M'= ⟨ m' ⟩, M”= ⟨ m”⟩and M”'= ⟨ m”' ⟩. Then the subgraph induced by the set{ m, m', m”, m”'}is isomorphic toΓ_1(cf. 
Remark <ref>); a contradiction. Conversely, suppose thatGsatisfies (i) and (ii). On contrary assume thatis not a line graph. Then by Lemma <ref>,has an induced subgraph isomorphic to one of the nine graphs given in Figure <ref>. Lethas an induced subgraph isomorphic toK_1,3given in Figure <ref>. Consequently,⟨ a,d ⟩ , ⟨ b,d ⟩and⟨ c , d ⟩are cyclic subgroups ofG. LetM_1, M_2andM_3be maximal cyclic subgroups containing⟨ a,d ⟩ , ⟨ b,d ⟩and⟨ c , d ⟩, respectively. Note thatM_1≠ M_2. Otherwise,a∼ bin. Similarly,M_2≠ M_3andM_3≠ M_1. Sinced∈ V(), we obtaind∉. It follows thatd∈ (M_1∩ M_2∩ M_3) ∖; a contradiction of condition (ii). Thus,can not contain an induced subgraph isomorphic toK_1,3. Now suppose thathas an induced subgraph isomorphic to one of the remaining eight graphs in Figure <ref>. Then observe thathas an induced subgraph isomorphic toΓ 'as shown in Figure <ref>. Note that thatx, yandzbelong to a maximal cyclic subgroup ofG. On contrary, assume thatx, y, z ∉ Mfor anyM∈. Sincex∼ y, y∼ zandz∼ xin, by Remark <ref>(i), we have three maximal cyclic subgroupsM_4, M_5andM_6such thatx,y ∈ M_4, y,z ∈ M_5andz,x∈ M_6. Thus,x∈ (M_4∩ M_6)∖. Ifo(x)≥ 3, then x^-1 (≠ x)∈ M_4∩ M_6. Further, note thatN(x)=N(x^-1)and sox^-1∉. It follows thatx^-1∈ (M_4∩ M_6)∖; a contradiction of condition (i). Consequently,o(x)=2. Similarly,o(y)=o(z)=2. ButM_4cannot contain two elements of order2. Thus,x, y, z ∈ M' for someM'∈. By similar argument, we gety, z, w∈ M”for someM”∈. Note thatM'≠ M”. Otherwise,x∼ win. Also,y,z∈ (M'∩ M”)∖; again a contradiction. Thus,cannot contain an induced subgraph isomorphic to the graphΓ '(see Figure <ref>). This completes our proof. For n ≥ 2, consider the semidihedral groupSD_8n = ⟨ a, b : a^4n = b^2 = e, ba = a^2n -1b ⟩. Since SD_8n has a maximal cyclic subgroup M=⟨ a^2b ⟩ of order 2, therefore 𝒯(SD_8n)={e}. Consider M_1=⟨ a⟩, M_2=⟨ ab⟩ and M_3=⟨ a^3b⟩ are three maximal cyclic subgroups of SD_8n. Then note that M_1∩ M_2∩ M_3={e, a^2n}. Thus, SD_8n does not satisfy the condition (ii) of Theorem <ref>, and so 𝒫^**_E(SD_8n) is not a line graph of any graph. Let G be a finite non-cyclic group such that the intersection of any two maximal cyclic subgroups is equal to . Then = L(Γ) for some graph Γ. For n ≥ 2, the generalized quaternion groupQ_4n = ⟨ a, b : a^2n = e, a^n= b^2, ab = ba^-1⟩. Then Q_4n has one maximal cyclic subgroup of order 2n and n maximal cyclic subgroups of order 4. Observe that the intersection of any two maximal cyclic subgroups of Q_4n is {e, a^n} and so 𝒯(Q_4n)={e, a^n}. Consequently, P^**_E(Q_4n) is a line graph of some graph Γ. Indeed, P^**_E(Q_4n)=K_2n-2∪ nK_2= L(K_1,2n-2∪ nK_1,2). For n ≥ 3, consider the dihedral groupD_2n = ⟨ a, b : a^n = b^2 = e, ab = ba^-1⟩. Then D_2n has one maximal cyclic subgroup of order n and n maximal cyclic subgroups of order 2. Consequently, the intersection of any two maximal cyclic subgroups of D_2n is trivial. It follows that 𝒯(D_2n)={e}. Thus, by Corollary <ref>, P^**_E(D_2n) is a line graph of some graph Γ. In fact, P^**_E(D_2n)= K_n-1∪ nK_1= L(K_1,n-1∪ nK_2). The converse of the Corollary <ref> need not be true in general. For instance, ifG= ℤ_2 ×ℤ_2^2, thenis a line graph of some graph butℤ_2 ×ℤ_2^2has two maximal cyclic subgroups whose intersection is non-trivial. However, ifGis of odd order, then the converse is also true. Let G be a finite group of odd order. Then is a line graph of some graph Γ if and only if the intersection of any two maximal cyclic subgroups is equal to . Suppose that is a line graph of some graph Γ. 
On contrary, assume that there exist two maximal cyclic subgroups M_1 and M_2 such that x∈ (M_1∩ M_2)∖. Since M_1∩ M_2 is a subgroup of G, we obtain x^-1∈ M_1∩ M_2. Also, N(x)=N(x^-1). It follows that x^-1∈ (M_1∩ M_2)∖; a contradiction (see Theorem <ref>). §.§ Proof of Theorem <ref> In order to prove Theorem <ref>, first we prove some necessary results. Let G be a finite non-cyclic nilpotent group. If is a line graph, then there exists a unique Sylow subgroup of G which is non-cyclic. Let G=P_1P_2 ⋯ P_r be a finite nilpotent group such that P_i's are Sylow p_i-subgroups of G. On contrary, assume that G has two Sylow subgroups which are non-cyclic. Without loss of generality, suppose that P_1 and P_2 are non-cyclic. It implies that ℳ(P_i)≥ 3 for every i∈{1,2} (cf. Lemma <ref>). Consider M_1, M_1', M_1”∈ℳ(P_1) such that M_1= ⟨ x_1⟩, M_1'=⟨ y_1⟩ and M_1”=⟨ z_1 ⟩. Since G is non-cyclic, we get M, M', M”∈ where M=M_1M_2⋯ M_r, M'=M_1'M_2⋯ M_r and M”=M_1”M_2⋯ M_r (cf. Lemma <ref>). Assume that M_i =⟨ x_i⟩ for i∈ [r]∖{1}. Note that M= ⟨ x⟩ , M'= ⟨ y⟩ and M”=⟨ z ⟩, where x=x_1x_2⋯ x_r, y=y_1x_2⋯ x_r and z=z_1x_2⋯ x_r. By Remark <ref>, x y, y z and z x and so x,y,z∈ V(). Now consider t=e_1x_2⋯ x_r and t'=e_1x_2'x_3⋯ x_r, where e_1 is the identity element of P_1, ⟨ x_2⟩∈ℳ(P_2) and ⟨ x_2'⟩≠⟨ x_2⟩. By Remark <ref>(iv), we have x_2 x_2' in 𝒫_E(P_2) and so t' t in $̧ (cf. <cit.>). Also,x∼ t,y∼ tandz∼ tin$̧ and so in . Thus, the subgraph induced by the set {x,y,z,t} is isomorphic to Γ_1 (see Figure <ref>); a contradiction. Thus, the result holds. Let G be a finite non-cyclic abelian group. Then is a line graph of some graph Γ if and only if G is isomorphic to one of the following groups: (i)ℤ_2×ℤ_2^2 (ii)ℤ _2^2×ℤ_2^2 (iii)ℤ_n×ℤ_p×ℤ_p ×⋯×ℤ_p, where p is a prime and gcd(n,p)=1. Let G be a finite non-cyclic abelian group. Then G=P_1× P_2 ×⋯× P_r, where P_i's are Sylow p_i-subgroups of G. Suppose that is a line graph. Then by Lemma <ref>, G has a unique non-cyclic Sylow subgroup. Consequently, G≅ℤ_n× P, where P is a non-cyclic abelian Sylow p-subgroup of G and gcd(p,n)=1. Then by Theorem <ref>, V( )= G∖{(a,e): a∈ℤ_n}, where e is the identity element of P. Observe that 𝒫_E(P) is an induced subgraph of $̧. Also,𝒫_E(P)= 𝒫(P)(cf. <cit.>). By <cit.>, ifPℤ_2×ℤ_2^2, ℤ _2^2×ℤ_2^2, ℤ_p×ℤ_p ×⋯×ℤ_p, then𝒫^**(P)has an induced subgraph isomorphic toΓ_1and socontains an induced subgraph isomorphic toΓ_1; a contradiction. Thus,Pis isomorphic to one of the three groups:ℤ_2×ℤ_2^2, ℤ _2^2×ℤ_2^2, ℤ_p×ℤ_p ×⋯×ℤ_p. Ifn>1andG≅ℤ_n×ℤ_2×ℤ_2^2, thenGhas two maximal cyclic subgroupsM_1=⟨ (1,0,1)⟩andM_2=⟨ (1,1,1)⟩of order4nsuch that(1,0,2), (2,0,2)∈ (M_1∩ M_2)∖, where ={(a,0,0): a∈ℤ_n}; a contradiction (see Theorem <ref>). Thus,n=1. Ifn>1andG≅ℤ_n×ℤ_2^2×ℤ_2^2, thenGhas two maximal cyclic subgroupsM_3=⟨ (1,1,0)⟩andM_4=⟨ (1,1,2)⟩of order4nsuch that(1,2,0), (2,2,0)∈ (M_3∩ M_4)∖, where ={(a,0,0): a∈ℤ_n}; again a contradiction. Conversely, if eitherG≅ℤ_2×ℤ_2^2orG ≅ℤ_2^2×ℤ_2^2, then= . By <cit.>, = L(Γ)for some graphΓ. Now supposeG≅ℤ_n×ℤ_p×ℤ_p ×⋯×ℤ_p (k-times), wherek≥ 2. Then = p^k-1/p-1K_(p-1)n = L(p^k-1/p-1K_1,(p-1)n). This completes the proof. Let G be a finite non-abelian nilpotent group (except non-abelian 2-group). Then is a line graph of some graph Γ if and only if G is isomorphic to one of the following groups: (i)ℤ_n× Q_2^k such that gcd(2,n)=1. (ii)ℤ_n× P such that P is a non-abelian p-group with gcd(n,p)=1 and the intersection of any two maximal cyclic subgroups of P is trivial. 
Let G=P_1× P_2×⋯× P_r be a finite non-abelian nilpotent group which is not a 2-group. Suppose that is a line graph. By Lemma <ref>, exactly one P_i is non-cyclic. Consequently, G≅ℤ_n× P such that P is a non-abelian p-group and gcd(n,p)=1. If P=Q_2^k, then there is nothing to prove. We may now suppose that P is not a generalized group and n>1. On contrary, assume that P has two maximal cyclic subgroups M' and M” such that x (≠ e)∈ M'∩ M”. Consequently, G has two maximal cyclic subgroup M_1=ℤ_n × M' and M_2=ℤ_n × M” (see Lemma <ref>) such that (1,x), (2,x)∈ M'∩ M”. Since = {(a,e): a∈ℤ_n} (see Theorem <ref> and Remark <ref>(i)), we get a contradiction of Theorem <ref>. Thus, G is isomorphic to the group described in (ii). Now suppose n=1 then G=ℤ_1× P is a p-group, where p is an odd prime. Then by Corollary <ref>, the intersection of any two maximal cyclic subgroups of G is equal to . By Theorem <ref>, ={e}. Thus, G is isomorphic to the group described in (ii). Conversely, suppose that G≅ℤ_n× Q_2^k such that gcd(2,n)=1. Also, the intersection of any two maximal cyclic subgroups of Q_2^k is Z(Q_2^k). Consequently, the intersection of any two maximal cyclic subgroups of G is the set {(a,b): a∈ℤ_n, b∈ Z(Q_2^k)} (see Lemma <ref>). Indeed, = {(a,b): a∈ℤ_n, b∈ Z(Q_2^k)}. By Corollary <ref>, is a line graph of some graph Γ. If G≅ℤ_n× P, where P is a non-abelian p-group such that gcd(n,p)=1 and the intersection of any two maximal cyclic subgroups of P is trivial. Then by Lemma <ref>, the intersection of any two maximal cyclic subgroups of G is the set {(a,e): a∈ℤ_n}. Moreover, ={(a,e): a∈ℤ_n}. By Corollary <ref>, is a line graph of some graph Γ. On combining Proposition <ref> and Proposition <ref>, we obtain Theorem <ref>. §.§ Proof of Theorems <ref>-<ref> The following propositions play an important role to prove the Theorems <ref>, <ref>, <ref>, <ref>. Let G be a finite cyclic group. Then is the complement of a line graph of some graph Γ if and only if either G≅ℤ_6 or G≅ℤ_p^α for some prime p. Let G be a cyclic group of order n. First, suppose that is the complement of line graph of some graph Γ. On contrary assume that neither n=6 nor n is a prime power. Consider n=p_1^α _1p_2^α _2⋯ p_r^α _r (r≥ 2) is the prime factorization of n such that p_1<p_2<⋯ <p_r. Now, if p_i ≥ 5 for any i∈ [r], then G has at least 4 elements of order p_i. Let x,y ,z,w ∈ G such that o(x)=o(y)=o(z)=p_i and o(w)=p_j for some j∈ [r]∖{i}. Then by Remark <ref>(ii), the subgraph induced by the set {x,y,z,w} is isomorphic to Γ_1 (see Lemma <ref>); a contradiction. Thus, p_i≤ 3 for all i∈ [r]. Consequently, r=2 and p_1=2, p_2=3. Since n≠ 6, we have either α _1≥ 2 or α _2 ≥ 2. If α _1≥ 2, then consider x_1, x_2 , x_3, x_4 ∈ G such that o(x_1)=2^β _1, o(x_2)=2^β_2 , o(x_3)=2^β_3 and o(x_4)=3. The subgraph of induced by the set {x_1, x_2, x_3, x_4} is isomorphic to Γ_1; a contradiction. Similarly, if α_2 ≥ 2 then again we get a contradiction. Thus, either G≅ℤ_6 or G≅ℤ_p^α for some prime p. Conversely, if G≅ℤ_p^α, then = K_p^α (cf. Theorem <ref>). Observe that K_n=L(n K_2) and so = L(p^α K_2). If G≅ℤ_6 then by Figure <ref>, we have = L(3K_2∪ P_4). This completes our proof. Let G be a finite non-cyclic group and ∈{,̧ , , }. Then is the complement of a line graph of some graph Γ if and only if G is isomorphic to Q_8 or ℤ_2×⋯×ℤ_2. Let G be a finite non-cyclic group such that is the complement of a line graph of some graph Γ. Since G is non-cyclic, by Lemma <ref>, we have ||≥ 3. We now discuss the following cases. Case-1:||≥ 4. 
In this case, we show that G≅ℤ_2×⋯×ℤ_2 (k-copies), where k≥ 3. On contrary, if G is not isomorphic to ℤ_2×⋯×ℤ_2 then G has a maximal cyclic subgroup M such that |M|≥ 3. Consequently, M has at least 2 generators. Let x, y ∈ M such that M=⟨ x ⟩ = ⟨ y ⟩ and let z, t, w be generators of other three maximal cyclic subgroups of G. Then by Remark <ref>, the subgraph induced by the set {x,y,z,t,w} is isomorphic to Γ_3 (see Figure <ref>); which is a contradiction. Case-2:||=3. Consider M_1, M_2, M_3∈ such that ϕ(|M_1|)≥ϕ(|M_2|)≥ϕ(|M_3|). Now we have the following subcases: Subcase-2.1:ϕ(|M_1|)≥ 3. Let M_1=⟨ x ⟩ =⟨ y ⟩ =⟨ z ⟩ and let M_2= ⟨ t ⟩. Then the subgraph of induced by the set {x,y,z,t} is isomorphic to Γ_1; a contradiction. Therefore, this subcase is not possible. Subcase-2.2:ϕ(|M_1|)≤ 2. Then |M_1|∈{2,3,4,6}. Let |M_1|=6 and let M_1= ⟨ x ⟩. Then x^2 and x^3 are elements of order 3 and 2, respectively. Let M_2=⟨ y ⟩. Then M_2 cannot contain both the elements x^2 and x^3. Otherwise, M_1⊆ M_2 which is not possible. Without loss of generality, assume that x^2 ∉ M_2. Then x^2 y in $̧ and sox^2 yin. Consequently, the subgraph ofinduced by the set{x, x^2, x^5, y}is isomorphic toΓ_1; again a contradiction (see Remark <ref>). Thus,|M_1|≤ 4. Similarly, we get|M_2|,|M_3|≤ 4. It follows thato(G)≤ |M_1∪ M_2∪ M_3|≤ 10. By Table1of <cit.>, there exist only two groupsQ_8andℤ_2×ℤ_2(whose order is at most10) with exactly three maximal cyclic subgroups. Thus,G≅ Q_8orG≅ℤ_2×ℤ_2. Conversely, letG≅ Q_8. For∈{,̧}, we obtain = K_2 ∨ 3K_2 = L(2K_2∪ K_4)(see Figure <ref>). If∈{, }, then we have = 3K_2 = L(K_4). Now assume thatG ≅ℤ_2×⋯×ℤ_2(k-times), wherek≥ 2. For∈{,̧}, we have = K_1, 2^k-1= L(K_2∪ K_1, 2^k-1). If∈{ , }, then note that = (2^k-1)K_1= L(K_1, 2^k-1). Let G be a finite cyclic group which is not a p-group. Then is the complement of a line graph of some graph Γ if and only if G≅ℤ_6. First suppose that is the complement of a line graph of some graph. Then in the similar lines of the proof of Proposition <ref>, we obtain G≅ℤ_6. Conversely, note that 𝒫^**(ℤ_6)=L(P_4). This completes our proof. Proposition <ref> together with Proposition <ref> yields Theorem <ref>. On combining Proposition <ref> and Proposition <ref>, we get Theorem <ref>. IfGis a cyclic group, then≅̧K_n(cf. Theorem <ref>). Observe thatK_n=L(nK_2). Using these facts and Proposition <ref>, we obtain Theorem <ref>. § DECLARATIONS Funding: The first author gratefully acknowledge for providing financial support to CSIR (09/719(0110)/2019-EMR-I) government of India. The second author wishes to acknowledge the support of Core Research Grant (CRG/2022/001142) funded by SERB. Conflicts of interest/Competing interests: There is no conflict of interest regarding the publishing of this paper. Availability of data and material (data transparency): Not applicable. Code availability (software application or custom code): Not applicable. 10a.Cameron2016 G. Aalipour, S. Akbari, P. J. Cameron, R. Nikandish, and F. Shaveisi. On the structure of the power graph and the enhanced power graph of a group. Electron. J. Combin., 24(3):3.16, 18, 2017. a.barati2021 Z. Barati. Line zero divisor graphs. J. Algebra Appl., 20(9):2150154, 13, 2021. a.beineke1970 L. W. Beineke. Characterizations of derived graphs. J. Combin. Theory, 9:129–135, 1970. a.bera2022 S. Bera. Line graph characterization of power graphs of finite nilpotent groups. Comm. Algebra, 50(11):4652–4668, 2022. a.Bera2017 S. Bera and A. K. Bhuniya. On enhanced power graphs of finite groups. J. 
Algebra Appl., 17(8):1850146, 2018. a.bera2022dominating S. Bera and H. K. Dey. On the proper enhanced power graphs of finite nilpotent groups. J. Group Theory, 25(6):1109–1131, 2022. a.bera2021connectivity S. Bera, H. K. Dey, and S. K. Mukherjee. On the connectivity of enhanced power graphs of finite groups. Graphs Combin., 37(2):591–603, 2021. a.Cameron2010 P. J. Cameron. The power graph of a finite group, II. J. Group Theory, 13(6):779–783, 2010. a.Cameron2011 P. J. Cameron and S. Ghosh. The power graph of a finite group. Discrete Math., 311(13):1220–1222, 2011. a.cameron2020connectivity P. J. Cameron and S. H. Jafari. On the connectivity and independence number of power graphs of groups. Graphs Combin., 36(3):895–904, 2020. a.chakrabarty2009undirected I. Chakrabarty, S. Ghosh, and M. K. Sen. Undirected power graphs of semigroups. Semigroup Forum, 78(3):410–426, 2009. a.chattopadhyay2021minimal S. Chattopadhyay, K. L. Patra, and B. K. Sahoo. Minimal cut-sets in the power graphs of certain finite non-cyclic groups. Comm. Algebra, 49(3):1195–1211, 2021. a.doostabadiforbidden A. Doostabadi, A. Erfanian, and M. Farrokhi D. G. On power graphs of finite groups with forbidden induced subgraphs. Indag. Math. (N.S.), 25(3):525–533, 2014. a.doostabadi2015connectivity A. Doostabadi and M. Farrokhi D. Ghouchan. On the connectivity of proper power graphs of finite groups. Comm. Algebra, 43(10):4305–4319, 2015. b.dummit1991abstract D. S. Dummit and R. M. Foote. Abstract algebra. Prentice Hall, Inc., Englewood Cliffs, NJ, 1991. a.kelarev2000groups A. Kelarev and S. Quinn. A combinatorial property and power graphs of groups. Contrib. General Algebra, 12(58):3–6, 2000. kelarev2003graph A. V. Kelarev. Graph algebras and automata, volume 257. Marcel Dekker, Inc., New York, 2003. kelarev2004labelled A. V. Kelarev. Labelled Cayley graphs and minimal automata. Australas. J. Combin., 30:95–101, 2004. a.kelarev2009cayley A. V. Kelarev, J. Ryan, and J. Yearwood. Cayley graphs as classifiers for data mining: the influence of asymmetries. Discrete Math., 309(17):5360–5369, 2009. a.powergraphsurvey A. Kumar, L. Selvaganesh, P. J. Cameron, and T. Tamizh Chelvam. Recent developments on the power graph of finite groups—a survey. AKCE Int. J. Graphs Comb., 18(2):65–94, 2021. a.kumar2023 J. Kumar, X. Ma, Parveen, and S. Singh. Certain properties of the enhanced power graph associated with a finite group. Acta Math. Hungar., 169(1):238–251, 2023. a.masurvey2022 X. Ma, A. Kelarev, Y. Lin, and K. Wang. A survey on enhanced power graphs of finite groups. Electron. J. Graph Theory Appl. (EJGTA), 10(1):89–111, 2022. a.ma2021forbidden X. Ma, S. Zahirović, tY. Lv, and Y. She. Forbidden subgraphs in enhanced power graphs of finite groups. arXiv:2104.04754, 2021. a.MannaForbidden2021 P. Manna, P. J. Cameron, and R. Mehatari. Forbidden subgraphs of power graphs. Electron. J. Combin., 28(3):3.4, 14, 2021. a.panda2021enhanced R. P. Panda, S. Dalal, and J. Kumar. On the enhanced power graph of a finite group. Comm. Algebra, 49(4):1697–1716, 2021. a.kumar2022complement Parveen and J. Kumar. The complement of enhanced power graph of a finite group. arXiv:2207.04641, 2022. a.zahirovic2020study S. Zahirović, I. Bošnjak, and R. Madarász. A study of enhanced power graphs of finite groups. J. Algebra Appl., 19(4):2050062, 2020. Parveen 1, Jitender Kumar 1 Addresses:
http://arxiv.org/abs/2307.01271v1
20230703180252
High-Strength Amorphous Silicon Carbide for Nanomechanics
[ "Minxing Xu", "Dongil Shin", "Paolo M. Sberna", "Roald van der Kolk", "Andrea Cupertino", "Miguel A. Bessa", "Richard A. Norte" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci", "physics.app-ph" ]
APS/123-QED Department of Precision and Microsystems Engineering, Delft University of Technology, Delft 2628 CD, The Netherlands Kavli Institute of Nanoscience, Department of Quantum Nanoscience, Delft University of Technology, Delft 2628 CD, The Netherlands Department of Precision and Microsystems Engineering, Delft University of Technology, Delft 2628 CD, The Netherlands Department of Materials Science and Engineering, Delft University of Technology, Delft 2628 CD, The Netherlands Else Kooi Laboratory, Faculty of Electrical Engineering, Mathematics and Computer Science , Delft University of Technology, Delft 2628 CD, The Netherlands Kavli Nanolab, Department of Quantum Nanoscience, Delft University of Technology, Delft 2628 CD, The Netherlands Department of Precision and Microsystems Engineering, Delft University of Technology, Delft 2628 CD, The Netherlands miguel_bessa@brown.edu School of Engineering, Brown University, Providence, RI 02912, USA R.A.Norte@tudelft.nl Department of Precision and Microsystems Engineering, Delft University of Technology, Delft 2628 CD, The Netherlands Kavli Institute of Nanoscience, Department of Quantum Nanoscience, Delft University of Technology, Delft 2628 CD, The Netherlands For decades, mechanical resonators with high sensitivity have been realized using thin-film materials under high tensile loads. Although there have been remarkable strides in achieving low-dissipation mechanical sensors by utilizing high tensile stress, the performance of even the best strategy is limited by the tensile fracture strength of the resonator materials. In this study, a wafer-scale amorphous thin film is uncovered, which has the highest ultimate tensile strength ever measured for a nanostructured amorphous material. This silicon carbide (SiC) material exhibits an ultimate tensile strength of over 10 GPa, reaching the regime reserved for strong crystalline materials and approaching levels experimentally shown in graphene nanoribbons. Amorphous SiC strings with high aspect ratios are fabricated, with mechanical modes exceeding quality factors 10^8 at room temperature, the highest value achieved among SiC resonators. These performances are demonstrated faithfully after characterizing the mechanical properties of the thin film using the resonance behaviors of free-standing resonators. This robust thin-film material has significant potential for applications in nanomechanical sensors, solar cells, biological applications, space exploration and other areas requiring strength and stability in dynamic environments. The findings of this study open up new possibilities for the use of amorphous thin-film materials in high-performance applications. High-Strength Amorphous Silicon Carbide for Nanomechanics Richard A. Norte August 1, 2023 ========================================================= § INTRODUCTION Advances in nanotechnology have revolutionized a broad spectrum of fields, with the development of tensile-loaded, thin-film mechanical devices playing a pivotal role in state-of-the-art force, acceleration, and displacement sensing <cit.>. Two approaches are used to boost the sensitivity of nanomechanical resonators under tensile loads. One approach fabricates the resonators using different thin-film materials in pursuit of films with low mechanical loss tangents resulting in higher intrinsic mechanical quality factors. 
In room-temperature environments, high-tensile amorphous silicon nitride (a-Si_3N_4) nanomechanical resonators have been among the best-performing devices for ultra-sensitive mechanical detection <cit.>. Although crystalline thin-film materials (e.g. crystalline silicon (c-Si) <cit.>, crystalline silicon carbide (c-SiC) <cit.>) and graphene are expected to have higher theoretical limits, their projected performance relies on having perfect crystal structures with no defects (e.g. edge defects). Additionally, it is difficult in practice to obtain crystalline thin films <cit.> that can be easily deposited, have good film isotropy <cit.>, and contain few lattice imperfections <cit.>. By nanostructuring edges into free-standing crystalline devices, one introduces a form of defect by exposing the edge of the crystal, where fracture can initiate under high tensile loads <cit.>. The other approach to attaining state-of-the-art sensors is to design the resonator geometry such that high stresses occur at crucial regions of the resonator when it vibrates. Ultimately, the design space of these resonators is constrained by the thin-film material's tensile fracture limit, or ultimate tensile strength (UTS). The UTS of a thin film is fundamentally lower after it is nanostructured into a suspended geometry with edges, since the introduced crystalline defects, such as dislocations, can facilitate the propagation of cracks <cit.>. For example, the UTS of a-Si_3N_4 thin films has been shown to be 6.8 GPa <cit.>. To date, only crystalline and 2D materials have experimentally demonstrated a UTS surpassing 10 GPa after being top-down nanofabricated <cit.>. Among 2D crystalline materials, graphene harbors one of the highest theoretical UTS values <cit.>, but reaching this limit in practice is challenging due to lattice imperfections <cit.>, atomically irregular edges <cit.>, or sparser grain boundaries <cit.> resulting from nanostructuring processes, all of which reduce the fracture limit under tensile load. In this regard, amorphous thin films with high UTS offer more design freedom for free-standing nanostructures, due to their lack of both crystalline defects and sensitivity to notches <cit.>. Apart from allowing the enhancement of the Q factor of nanomechanical resonators, a higher material UTS can enable devices to perform better in diverse and harsh environments. Among the materials possessing the highest ultimate tensile strengths <cit.>, silicon carbide (SiC) has attracted rising interest and demand in both industry and academia in recent years due to its exceptional mechanical, chemical, electrical, and optical properties <cit.>. SiC exists in three forms: single crystalline, poly-crystalline, and amorphous, all of which can be fabricated into thin films for various applications <cit.>. Among them, amorphous SiC (a-SiC) thin films offer distinct advantages over their crystalline counterparts <cit.>, including lower deposition temperatures, compatibility with various substrates, isotropic physical properties, and the ability to be deposited at large wafer scales.
The widespread applications of a-SiC include protective coatings against mechanical wear <cit.> and chemical corrosion <cit.>, low-loss deposited dielectrics <cit.>, device and functional layers for solar cells <cit.>, MEMS sensors <cit.> and electron-transparent windows <cit.>, low-loss optical resonators <cit.> and integrated photonics <cit.>, bio-molecular <cit.> and medical <cit.> applications, as well as gratings with nano-pillars <cit.>, to name a few. The versatility of a-SiC makes it a promising material for the high-yield production of integrated mechanical, electronic, and photonic devices on a chip, paving the way for applications in sensing <cit.>, transduction <cit.>, and quantum technology <cit.>. In this work, we demonstrate wafer-scale amorphous films that harbor an ultimate tensile strength over 10 GPa after nanostructuring, a regime that is conventionally reserved for ultra-strong crystalline and 2D materials. Using delicate nanofabrication techniques, we produce several different nanomechanical resonators from which the mechanical properties of the SiC thin films, such as density, Young's modulus, Poisson ratio, and ultimate tensile strength, can be accurately determined. Notably, our highest measured tensile strength (>10 GPa) is comparable to the values shown for c-SiC <cit.> and approaches the experimental values obtained for doubly clamped graphene nanoribbons <cit.>. We achieve mechanical quality factors up to 2× 10^8 with a-SiC mechanical resonators, and measure loss tangents on par with other materials used in high-precision sensors. Beyond sensing, these strong films open up new possibilities in high-performance nanotechnology, including thin solar cell technologies <cit.>, mechanical sensing <cit.>, biological technologies <cit.> and even lightsail space exploration <cit.>. § FABRICATION OF AMORPHOUS SIC RESONATORS In pursuit of thin-film materials for nanomechanical resonators with low mechanical dissipation, high film quality and high tensile stress are desirable. The Low Pressure Chemical Vapor Deposition (LPCVD) technique is preferred for these requirements, since its low-pressure and high-temperature deposition environment ensures a lower defect density and a higher thermal stress. The non-stoichiometric LPCVD a-SiC films used in this paper are deposited with different gas flow ratios (GFR) between SiH_2Cl_2 and 5% C_2H_2 in H_2 (GFR=2,3,4), at different deposition pressures (170 and 600 mTorr), and on both silicon and fused silica substrates (Table <ref>). This variation of deposition parameters allows us to systematically characterize the mechanical properties of LPCVD a-SiC thin films. All a-SiC thin films were deposited for the same period of time (3 hours 47 minutes) at a temperature of 760^∘C in order to better determine the effect of the various deposition environments while maintaining the films in amorphous form <cit.>. With the fabrication process described in the Supporting Information (H), nanomechanical resonators made of a-SiC can be suspended over the substrates with high yield using dry etching processes, owing to the extremely high chemical selectivity between film and substrate. Better chemical stability and inertness of the sensing components can significantly improve a sensor's reliability, particularly for operation in chemically harsh environments. Meanwhile, the chemical inertness of a thin-film material allows it to be deposited on various substrates, patterned, and then suspended as a nanomechanical resonator by removing the substrate underneath (i.e., undercutting).
A high selectivity between the thin film and the substrate allows for higher yield and accuracy in fabricating suspended nanostructures. Similar to their crystalline counterparts, LPCVD a-SiC thin films have been reported to have very high chemical inertness to various wet etchants <cit.>. Likewise, we found that a-SiC also has high chemical inertness to widely used dry etchants, such as SF_6 isotropic plasma etching for silicon substrates and vapor hydrofluoric acid etching for silicon oxide substrates, as illustrated in Figure <ref>(a). This excellent chemical stability implies high selectivity between a-SiC and various commonly used substrates during undercutting, as shown in Figure <ref>(b). Dry etchants are preferred for suspending high-aspect-ratio nanomechanical structures, since they help to avoid stiction during liquid etchant evaporation and thus improve the yield of working devices. To take advantage of this superior chemical inertness, we fabricated nanomechanical resonators with continuous films down to 5 nm, as shown in the SEM pictures in Supporting Information (F). Moreover, the undercut method based on dry etchants introduces little perturbation to the suspended nanostructure, making it possible to perform tensile testing on chip. In Section <ref>, we fabricate suspended nanostructures with different maximum tensile stresses to accurately determine the ultimate tensile strength (UTS) of a-SiC films. As a result, we demonstrate that a-SiC films have a UTS of up to 10-12 GPa, which is the highest among amorphous materials after patterning and approaches the UTS of strong materials like c-SiC <cit.> and graphene nano-ribbons <cit.>, both of which are known for their high UTS. The comparison of UTS between LPCVD a-SiC and other materials commonly used for nanomechanics is shown in Figure <ref>(c). § MECHANICAL PROPERTY CHARACTERIZATION WITH RESONANCE METHOD In order to design nanomechanical resonators with a specific thin-film material, it is necessary to accurately characterize the material's mechanical parameters, such as film stress, Young's modulus, Poisson ratio, and density. Various methods have been developed to measure these parameters, including static methods such as nano-indentation <cit.> and dynamic methods such as resonance response <cit.>. Many studies aiming to design high-performance nanomechanical resonators have relied on mechanical parameter values obtained from the literature for commonly used materials such as a-Si_3N_4 <cit.>, c-Si <cit.>, and c-SiC <cit.>, without considering potential variations of thin-film properties due to different deposition environments. While these adopted values are usually reasonable and align well with experimental results, characterizing the exact parameters of the materials used is beneficial when exploring the optimal performance of nanomechanical resonators <cit.>. In this section, we present a simple and universal method to systematically characterize the important mechanical parameters of LPCVD a-SiC thin films. The characterization flow begins with measuring the thickness of the a-SiC thin film (t) after LPCVD deposition using spectroscopic ellipsometry, an optical technique that confirms the thin-film thickness and simultaneously probes its dielectric properties. We then identify the film stress (σ) using the wafer bending method.
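In the wafer bending method, the film stress is conventionally extracted with Stoney's equation, which relates the residual film stress to the change in wafer curvature between the bare and the coated wafer. The snippet below is a minimal sketch of that calculation; the substrate parameters and curvature values are illustrative placeholders rather than data from this work.

```python
# Minimal sketch: residual film stress from wafer curvature via Stoney's equation.
# All numerical values are illustrative placeholders, not measurements from this work.

def stoney_stress(E_s, nu_s, t_s, t_f, R_pre, R_post):
    """Residual film stress in Pa (positive = tensile, by this sign convention).

    E_s, nu_s     : Young's modulus (Pa) and Poisson ratio of the substrate
    t_s, t_f      : substrate and film thickness (m)
    R_pre, R_post : wafer radius of curvature before / after deposition (m)
    """
    return (E_s * t_s**2) / (6.0 * (1.0 - nu_s) * t_f) * (1.0 / R_post - 1.0 / R_pre)

# Example with assumed values: a 525-um-thick Si(100) wafer, a 100 nm film,
# and a curvature change from an essentially flat 1 km to 150 m after deposition.
sigma = stoney_stress(E_s=130e9, nu_s=0.28, t_s=525e-6, t_f=100e-9,
                      R_pre=1e3, R_post=150.0)
print(f"film stress ~ {sigma / 1e6:.0f} MPa")
```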
After dicing the wafer into small chips, we pattern the a-SiC thin film and suspend it in the form of membranes, cantilevers, and strings with different lengths (L). The suspended nanomechanical resonators are measured with a laser Doppler vibrometer (LDV) in a vacuum environment down to 10^-7 mbar. The measured resonant frequencies of the fundamental modes of the membranes (f_mem), cantilevers (f_can), and strings (f_str) can be fitted with their corresponding analytical expressions, which reveal the Young's modulus (E), Poisson ratio (ν), and density (ρ) of the a-SiC thin film, respectively. During the fitting process, finite element method (FEM) simulation is used to describe the patterned resonators more precisely by taking into account the holes in the membranes and the overhangs from undercutting adjacent to the cantilevers and strings. More detailed information about the measurements, analytical fitting, and simulations is given in Supporting Information (A) and (B). Using the above measurements, we can characterize the important mechanical parameters of LPCVD a-SiC thin films and then design and fabricate nanomechanical resonators with the desired performance. Note that this straightforward, non-contact method can be universally applied to characterize the mechanical properties of other tensile thin-film materials that can be fabricated into resonators with various geometries, i.e., cantilevers, strings, and membranes. This allows for quality control of thin films deposited in different batches or under varied deposition environments. In this way, nanomechanical resonators manufactured for various applications can be characterized in an efficient and economical manner, leading to higher reliability for both industrial and academic applications. With all the relevant mechanical parameters accurately measured, we can fabricate a series of suspended devices specifically designed for characterizing the ultimate tensile strength of a-SiC thin films, as described in the following section. § ULTIMATE TENSILE STRENGTH OF AMORPHOUS SIC The ultimate tensile strength (UTS), which for brittle materials such as a-SiC often coincides with the yield strength <cit.>, describes the maximum tensile stress a material can endure before breaking while being stretched. A higher UTS not only allows devices to operate more reliably as mechanical sensors or coatings in harsh environments, but also enlarges the design space for nanomechanical resonators. A high UTS has been shown for nanowires fabricated from various materials, whose small cross-sectional areas minimize the occurrence of defects <cit.>, and for nanomechanical membranes without nano-patterning, which avoids the presence of rough sidewalls <cit.>. However, neither scenario allows for further shape modification, which limits their potential for many applications. While the crystalline form of a material usually tends to be mechanically stronger than its amorphous form due to long-range order, examples such as glassy metals <cit.> and synthesized AM-III carbon <cit.> demonstrate extraordinary mechanical properties comparable to their celebrated crystalline counterparts in terms of fracture toughness and yield strength, or hardness and compressive strength, respectively. A similar correspondence holds between c-SiC and a-SiC.
While c-SiC has so far shown a UTS as high as 12-18 GPa via micro-pillars <cit.>, a-SiC nanowires have been measured to have a UTS of up to 8.8 GPa via a tensile test with their two ends fixed by silver epoxy <cit.>, which is higher than the values shown for LPCVD a-Si_3N_4 (6.8 GPa <cit.>) and Si (7.6 GPa <cit.>). With the aim of characterizing the design space of nanomechanical resonators using LPCVD a-SiC thin films, we characterized the UTS by geometrically tapering the suspended a-SiC thin film in order to concentrate the tensile stress up to the fracture point. Unlike other tensile test methods <cit.>, the presented method allows us to determine the UTS of the tensile, nanostructured film accurately, while avoiding the ambiguity caused by external loads, glues, and limitations of nano-fabrication, e.g., the limited accuracy of nano-patterning and stiction during wet undercut. Using the mechanical parameters characterized in Section <ref>, a-SiC hourglass-shaped devices, consisting of a short and narrow tether surrounded by long and wide pads on both sides, are designed and suspended to measure the UTS of LPCVD a-SiC thin films. The devices have a total length of 1500 um, with pads on both sides that have a width of 15 um, and middle tethers that have varying lengths and a width of 500 nm, as shown in Figure <ref>(a). After suspension with dry etchants, the tensile stress in the hourglass-shaped device redistributes and results in an increase of stress on the middle tether due to the pulling of the pads caused by residual stresses arising from the fabrication. The redistributed stress profile in Figure <ref>(a) is obtained via the finite element method (FEM). The devices are designed to have varying tether lengths from short to long, and are arranged adjacently as shown in Figure <ref>(b). Force equilibrium between the tether and the pads requires the ratio of the tensile stresses in the tether and the pads to be inversely proportional to the ratio of their widths; combined with the small ratio of the tether length to the pad length, which concentrates the strain (percentage of elongation) in the tether, the tensile stress on the tether of our hourglass-shaped devices is significantly amplified during stress relaxation after suspension. As shown by the FEM simulations in Figure <ref>(c), devices with shorter tether lengths contain higher maximum concentrated tensile stresses on the tethers. This method allows the determination of the UTS of the nanostructured a-SiC thin films by counting the number of surviving devices after suspension. As shown in Figure <ref>(b), a series of hourglass-shaped devices was fabricated with a-SiCR2. The 18 devices have tether lengths ranging from 30 to 115 um, corresponding to stresses from 12.53 to 5.97 GPa, respectively. Adjacent devices have tether lengths that differ by 5 um; the shorter the tethers, the larger the difference in concentrated stress between neighboring devices, e.g., the concentrated stress difference between devices with 115 um and 110 um tether lengths is 0.18 GPa, while that between devices with 35 um and 30 um tether lengths is 0.72 GPa. For each a-SiC thin film, the survival rate in each tensile-stress interval shown in Figure <ref>(d) is determined by employing 36 to 72 devices for testing. The survival of suspended hourglass-shaped devices with tether lengths below 50 um corresponds to a UTS above 10 GPa for a-SiCR2.
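The stress amplification in the tether can also be estimated with a simple one-dimensional force-balance model: after release, the axial force is uniform along the device, so the narrow tether carries a stress that exceeds that of the pads roughly by the ratio of their widths, weighted by their respective lengths. The sketch below implements this toy model; the released uniaxial prestress is an assumed value, and the FEM treatment used above remains the quantitative reference.

```python
# Toy 1D series-spring estimate of the concentrated tether stress in the
# hourglass devices. sigma_0 is an assumed released (uniaxial) prestress,
# not a measured value from this work; FEM is needed for accurate numbers.

def tether_stress(sigma_0, L_total, L_tether, w_tether, w_pad):
    """Concentrated tether stress (same units as sigma_0).

    Force continuity gives sigma_tether * w_tether = sigma_pad * w_pad, and the
    overall as-fabricated length is preserved, so the pre-strain of the whole
    device is absorbed mostly by the short, narrow tether.
    """
    L_pad = L_total - L_tether                 # combined length of both pads
    return sigma_0 * L_total / (L_tether + L_pad * w_tether / w_pad)

for L_t in (30e-6, 50e-6, 115e-6):             # tether lengths used in this work
    s = tether_stress(sigma_0=0.64e9,          # assumed released prestress, Pa
                      L_total=1500e-6, L_tether=L_t,
                      w_tether=0.5e-6, w_pad=15e-6)
    print(f"L_tether = {L_t * 1e6:5.0f} um -> ~{s / 1e9:4.1f} GPa in the tether")
```

With this assumed prestress of about 0.64 GPa, the estimate lands close to the FEM values quoted above (roughly 12 GPa for a 30 um tether and 6 GPa for a 115 um tether), illustrating why shorter tethers probe higher stress intervals.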
Similarly, we can identify the UTS of all a-SiC thin films used in this study to be higher than 10 GPa, as shown in the histograms of surviving-device ratios in Figure <ref>(d). The histograms also show that, with a relatively higher deposition pressure and a lower gas flow ratio, a maximum UTS of up to 12 GPa can be achieved with a-SiCR2, which is almost twice the UTS shown for nanostructured LPCVD a-Si_3N_4 films. The measured UTS of a-SiCR4 is below 3.5 GPa, which is not attractive for further characterization. In the future, with a larger number of fabricated devices and a denser range of tether lengths, one can determine the UTS of the LPCVD a-SiC thin films more precisely. In practice, nanopatterning with electron beam lithography can readily achieve an accuracy of 10 nm, which allows the method's resolution to be as fine as 1.2 MPa for a-SiCR2, i.e., an error of less than 0.2% when measuring the UTS. A higher UTS is found for recipes deposited with lower gas flow ratios (a-SiCR2/3/4), which might be due to a higher carbon content in the thin film <cit.>, since C-C chemical bonds are stronger than Si-C and Si-Si bonds <cit.>. For a-SiC films deposited at different pressures, a-SiCR2 (600 mTorr) is found to have a higher UTS, while a-SiC170 (170 mTorr) exhibits a better yield under lower concentrated stresses, as shown by the survival rates. According to the relationship between strength and Young's modulus E of SiC shown in <cit.>, the UTS (or fracture strength) is 5.3% of E; therefore, the theoretically predicted UTS values for a-SiCR2/a-SiC170/a-SiCR3 are 11.82/11.66/11.13 GPa, respectively, matching well with the values of 12.04/10.27/11.12 GPa extracted experimentally from the surviving devices and shown in Table <ref>. The small offset for a-SiC170 may be due to its rougher surface, as shown in Supporting Information (G). With strain engineering techniques, one can amplify the mechanical quality factor Q=D_Q · Q_0 of a nanomechanical resonator by boosting its dissipation dilution factor D_Q, where Q_0 is the intrinsic quality factor of the thin-film material <cit.>. Since the upper bound for D_Q of a nanomechanical string vibrating at a certain frequency ω is given by D_Q ≤ 12E ϵ^2_UTS/(ρ t^2 ω^2), where ϵ_UTS denotes the strain at the UTS of the thin-film material <cit.>, thin-film materials with higher UTS and lower thickness are advantageous for obtaining a higher D_Q. Among all the a-SiC thin films shown in this work, a-SiCR2 is the most promising one for maximizing the Q factor, thanks to its high Q_0 and UTS. The superior chemical resistivity of a-SiC enables the fabrication of thin films into suspended resonators with a thickness as low as 5 nm (shown in Supporting Information (F)). This, combined with its elevated ultimate tensile strength, measured above 10 GPa in thicker films, makes a-SiC string resonators highly promising for achieving a high upper bound for D_Q at a given frequency ω. § INTRINSIC QUALITY FACTOR AND HIGH Q MECHANICAL RESONATORS In this section, we characterize the intrinsic quality factor Q_0 of LPCVD a-SiC and then design and fabricate high-Q nanomechanical resonators with it. High mechanical quality (Q) factor nanomechanical resonators are desirable for various applications, ranging from precise force/acceleration sensing <cit.> and microwave-to-optical conversion <cit.> to quantum optomechanics <cit.>.
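To put the dissipation-dilution bound from the previous section into rough numbers, the following sketch evaluates D_Q ≤ 12E ϵ^2_UTS/(ρ t^2 ω^2) with the strain taken as σ_UTS/E. The Young's modulus and density used here are assumed placeholder values (the measured values are collected in Table <ref> and are not reproduced in this snippet), so the result is only an order-of-magnitude illustration.

```python
# Order-of-magnitude sketch of the dissipation-dilution bound quoted above,
# D_Q <= 12 * E * eps_UTS^2 / (rho * t^2 * omega^2), with eps_UTS = sigma_UTS / E.
# E and rho are assumed placeholder values, not the measured numbers of this work.
import math

E         = 250e9     # Young's modulus, Pa (assumed)
rho       = 3000.0    # density, kg/m^3 (assumed)
sigma_uts = 10e9      # ultimate tensile strength, Pa (order of magnitude reported above)
t         = 71e-9     # film thickness, m (illustrative)
f         = 1e6       # mode frequency, Hz (illustrative)

eps_uts = sigma_uts / E
omega = 2 * math.pi * f
D_Q_max = 12 * E * eps_uts**2 / (rho * t**2 * omega**2)
print(f"upper bound on the dissipation dilution factor D_Q ~ {D_Q_max:.1e}")
```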
Following the method introduced by LIGO <cit.>, the field of strain engineering is advancing rapidly, boosting the Q factors of nanomechanical resonators by several orders of magnitude. A variety of strategies have been proposed to improve the Q factors of tensile-loaded nanomechanical resonators. These include appropriately patterning 2D geometries <cit.>, modifying the mass distribution <cit.> and the mode of interest (e.g., from fundamental to higher order or from flexural to torsional modes <cit.>), in-situ annealing for surface cleaning <cit.>, as well as cooling down to cryogenic temperatures <cit.>. All of the methods mentioned above can benefit from utilizing the LPCVD a-SiC thin film characterized in this work, due to its high deposited tensile stress, superior chemical resistivity, and impressive ultimate tensile strength. The intrinsic quality factors Q_0 of the a-SiC thin films are identified by experimentally measuring the Q factors of phononic crystal (PnC) nanostrings <cit.>, in which many spurious loss mechanisms are eliminated and the dissipation dilution factor D_Q is well defined, leading to an expected intrinsic quality factor Q_0=Q/D_Q. For thin nanomechanical resonators, Q_0 can be assumed to depend linearly on the film thickness, since it is predominantly determined by surface loss rather than bulk loss <cit.>. We fabricate a series of uniformly corrugated, high-aspect-ratio PnC nanostrings with a length of 4 mm and varying unit-cell lengths L_uc and defect lengths L_def in the middle (Figure <ref>(a)), leading to PnC nanostrings with unit-cell numbers from 20 to 44. The widths of the wide and narrow parts of the nanostrings are 3 um and 1 um, respectively. The vibration amplitude of the nanostrings as a function of frequency is acquired (Figure <ref>(b)) with a custom balanced homodyne detection interferometer in a vacuum environment of 4× 10^-9 mbar (see Supporting Information (I)). Using the ringdown method, the Q factors of the defect modes of each PnC nanostring are measured. For example, those of 10-unit-cell PnC nanostrings fabricated with a-SiCR2 and a-SiCR2FS are plotted in Figure <ref>(d). Using FEM simulation, the dilution factor D_Q of each PnC nanostring geometry can be numerically calculated. Together with the experimentally measured Q factors of the corresponding nanostrings, the intrinsic quality factor Q_0 of the different a-SiC thin films is determined. For example, the Q_0 of a-SiCR2 and a-SiCR2FS are shown in Figures <ref>(c) and (e), respectively. The Q_0 of the other a-SiC films are shown in Table <ref>, and the corresponding measurement data can be found in Supporting Information (D). In order to compare the Q_0 of a-SiC thin films with different thicknesses, we normalize them to Q_0 per 100 nm of thickness, as the Q_0 of a thin film has been shown to be a function of thickness <cit.>. Deposited with the same recipe, a-SiCR2 (5175/100 nm) and a-SiCR2FS (4554/100 nm) have similar Q_0. The similar performance on the transparent substrate allows high-Q nanomechanical sensors to be integrated into free-space optical systems in a practical manner. Reasonably assuming that the films have similar mechanical properties on different substrates, a-SiCR2FS is measured to have a deposition stress of 1596 MPa, a factor of two higher than a-SiCR2 due to the larger difference in thermal expansion coefficients between the a-SiC thin film and the fused silica substrate.
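The ringdown analysis itself amounts to fitting an exponential decay of the vibration amplitude, a(t) ∝ exp(-π f t/Q). The snippet below is a minimal sketch of that fitting step on synthetic data; the mode frequency and quality factor used to generate the trace are illustrative, and the actual measurement relies on the balanced homodyne interferometer described above.

```python
# Minimal sketch of extracting Q from a ringdown trace (amplitude vs. time).
# The synthetic data below stand in for a real interferometer record.
import numpy as np

f0 = 1e6                          # mode frequency in Hz (illustrative)
Q_true = 2e8                      # Q used only to synthesize the example trace
tau = Q_true / (np.pi * f0)       # amplitude decay time constant, tau = Q / (pi * f0)

t = np.linspace(0.0, 120.0, 100)  # seconds
amp = np.exp(-t / tau) * (1 + 0.01 * np.random.randn(t.size))  # noisy ringdown

# A linear fit of log-amplitude vs. time yields the decay rate -1/tau.
slope, _ = np.polyfit(t, np.log(np.abs(amp)), 1)
Q_fit = -np.pi * f0 / slope
print(f"fitted Q ~ {Q_fit:.2e}")
```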
Worth noting is that the Q_0 of a-SiCR2 is the highest among all LPCVD a-SiC films investigated, indicating that a lower gas flow ratio (GFR=2), i.e., a higher carbon content <cit.>, and a moderate deposition pressure (600 mTorr) are beneficial for better film quality. To exploit the sensing potential of LPCVD a-SiC, we designed and optimized a tapered PnC nanostring with a length of 6 mm and a thickness of 71 nm using an a-SiCR2 thin film. Bayesian optimization <cit.> was used to find designs with a high Q factor; more details can be found in Supporting Information (E). This simulation-based optimization is largely possible due to the accurate characterization of the material properties of the a-SiC thin films in the previous sections. As shown in Figure <ref>(f), the optimized PnC nanostring consists of 24 unit cells with different widths and lengths, leading to a stress concentration of up to 1.2 GPa towards its center. Within the phononic bandgap generated by the optimized tapered PnC nanostring, a soft-clamped defect mode with a simulated Q factor Q_sim=(2.11± 0.17)× 10^8 appears at a frequency of f_sim=921 kHz, as shown at the bottom of Figure <ref>(f). The optimized tapered PnC nanostring was fabricated based on the design at the top of Figure <ref>(f) and measured with an interferometer under an ultra-high vacuum of 4× 10^-9 mbar. As a result, a high-Q mechanical mode with Q=(1.98±0.03)× 10^8 was measured experimentally at a frequency of f=896 kHz at room temperature, as shown by its ringdown curve plotted in Figure <ref>(g). This result demonstrates, for the first time, a mechanical quality factor exceeding 10^8 for silicon carbide nanomechanical resonators, as predicted by simulation. It also suggests that future design strategies to enhance resonator performance can be carried out using LPCVD a-SiC thin films. In addition, the quality factor-frequency product of the optimized LPCVD a-SiC tapered PnC nanostring is Q× f = 1.791 × 10^14, which is significantly higher than the room-temperature quantum limit Q× f = k_B T /h = 6.24× 10^12. This paves the way towards engineering quantum states in room-temperature environments <cit.>. For an effective mass m_eff=1.27× 10^-13 kg, this high quality factor corresponds to a force sensitivity of √(S_F)= √(4k_B T m_eff· 2π f/Q ) = 7.7 aN/Hz^1/2 at room temperature, which is comparable to a typical atomic force microscope cantilever operating at liquid helium temperature. With the high quality factor shown above, LPCVD a-SiC becomes the third material to reach Q>10^8 at room temperature using strain engineering, after conventional a-Si_3N_4 <cit.> and strained silicon <cit.>. Moreover, the superior chemical and mechanical properties of LPCVD a-SiC allow for the fabrication of thinner and stronger resonators, making it even more compatible with the dissipation dilution method. With advantages such as a relatively simple and low-cost fabrication process, compatibility with various substrates, including transparent ones, and the potential to perform better and more stably in harsh environments, LPCVD a-SiC is a promising material for fabricating commercial high-Q mechanical sensors.
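As a quick sanity check, the figures of merit quoted in this section can be reproduced from the stated numbers. The snippet below evaluates the Q·f product against the room-temperature benchmark k_B T/h and the thermomechanical force noise √(S_F); only physical constants and the values quoted above enter.

```python
# Back-of-the-envelope check of the figures of merit quoted above: the Q*f
# product versus the room-temperature benchmark kB*T/h, and the
# thermomechanical force noise sqrt(S_F) = sqrt(4*kB*T*m_eff*2*pi*f/Q).
import math

kB, h = 1.380649e-23, 6.62607015e-34   # J/K, J*s
T = 300.0                              # room temperature, K
f = 896e3                              # defect-mode frequency, Hz (value quoted above)
Q = 1.98e8                             # measured quality factor
m_eff = 1.27e-13                       # effective mass, kg (value quoted above)

Qf = Q * f
Qf_benchmark = kB * T / h
S_F_sqrt = math.sqrt(4 * kB * T * m_eff * 2 * math.pi * f / Q)

print(f"Q*f = {Qf:.2e}  (room-temperature benchmark kB*T/h ~ {Qf_benchmark:.2e})")
print(f"sqrt(S_F) ~ {S_F_sqrt * 1e18:.1f} aN/Hz^0.5")
```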
§ CONCLUSION AND OUTLOOK In summary, our study has uncovered an amorphous silicon carbide thin film with an ultimate tensile strength above 10 GPa, the highest value ever measured for a nanostructured amorphous material and approaching the experimental values shown by graphene nano-ribbons <cit.>. The film's robustness to chemicals allows us to fabricate nanostructures with very high fidelity, even when their geometries make them delicate high-aspect-ratio structures. This ability to produce structures with high fidelity also allows us to measure the film's mechanical properties with high precision. We deposit amorphous silicon carbide under varying deposition conditions and on different substrates to understand new approaches towards increasing the ultimate yield strength. Then, using the a-SiC with the highest UTS, we designed and fabricated a variety of well-understood nanostructures, such as cantilevers, membranes, and doubly clamped strings, to measure the thin film's mechanical properties such as density, Young's modulus, Poisson ratio, and mechanical loss tangents. For the latter, we employ nanostrings patterned with phononic band structures, which conventionally exhibit some of the lowest mechanical dissipation in the literature, allowing us to measure very low mechanical dissipation. The a-SiC nanostrings support soft-clamped mechanical modes with quality factors exceeding 10^8 at room temperature; a new regime for SiC devices and on par with state-of-the-art SiN resonators. This corresponds to a high force sensitivity of √(S_F)= 7.7 aN/Hz^1/2. We demonstrate a robust characterization process based on simple fabrication and optical techniques that does not rely on complex tension-loading setups. The discovery of this amorphous SiC material represents an advance in the field of high-strength materials science, which is conventionally dominated by crystalline and 2D materials. Our findings demonstrate that amorphous materials have the potential to surpass crystalline materials in certain applications due to their inherently isotropic mechanical properties, which allow for more design freedom and ease of fabrication. The high ultimate tensile strength of this amorphous material is particularly attractive for mechanical sensors, as it enables greater flexibility in strain engineering. This discovery opens up new possibilities for the use of amorphous materials in a variety of high-performance applications. We wish to acknowledge Peter G. Steeneken, Gerard Verbiest, and Martin Lee for their helpful suggestions on the manuscript and their support of our project. We want to thank Satadal Dutta, Ali Sarafraz, and Matthijs de Jong for the helpful discussions. M.X. and R.N. also thank the staff of both the Kavli Nanolab Delft and the Else Kooi Lab, in particular C. de Boer, for supporting our fabrication efforts. This publication is part of the project Probing the physics of exotic superconductors with microchip Casimir experiments (740.018.020) of the research programme NWO Start-up, which is partly financed by the Dutch Research Council (NWO). This work has received funding from the EMPIR programme co-financed by the Participating States and from the European Union's Horizon 2020 research and innovation programme (No. 17FUN05 PhotoQuant). R.N. would like to acknowledge support from the Limitless Space Institute's I^2 Grant.
Steeneken, title title Tuning the q-factor of nanomechanical string resonators by torsion support design, @noop journal journal Applied Physics Letters volume 122, pages 013501 (year 2023)NoStop [Hoch et al.(2022)Hoch, Yao, and Poot]hoch2022geometric author author D. Hoch, author X. Yao, and author M. Poot, title title Geometric tuning of stress in predisplaced silicon nitride resonators, @noop journal journal Nano Letters volume 22, pages 4013 (year 2022)NoStop [Høj et al.(2022)Høj, Hoff, and Andersen]Hoj2022 author author D. Høj, author U. B. Hoff, and author U. L. Andersen, title title Ultra-coherent nanomechanical resonators based on density phononic crystal engineering, @noop journal journal arXiv preprint arXiv:2207.06703 (year 2022)NoStop [Sadeghi(2021)]Sadeghi2021 author author P. Sadeghi, title Study of high-Q nanomechanical silicon nitride resonators, @noop Ph.D. thesis, school Wien (year 2021)NoStop [Gisler et al.(2022)Gisler, Helal, Sabonis, Grob, Héritier, Degen, Ghadimi, and Eichler]Gisler2022 author author T. Gisler, author M. Helal, author D. Sabonis, author U. Grob, author M. Héritier, author C. L. Degen, author A. H. Ghadimi, and author A. Eichler, title title Soft-clamped silicon nitride string resonators at millikelvin temperatures, @noop journal journal Physical Review Letters volume 129, pages 104301 (year 2022)NoStop [Høj et al.(2021)Høj, Wang, Gao, Hoff, Sigmund, and Andersen]Hoj2021 author author D. Høj, author F. Wang, author W. Gao, author U. B. Hoff, author O. Sigmund, and author U. L. Andersen, title title Ultra-coherent nanomechanical resonators based on inverse design, @noop journal journal Nature communications volume 12, pages 5766 (year 2021)NoStop [Turvey(1990)]Turvey1990 author author K. Turvey, title title An undergraduate experiment on the vibration of a cantilever and its application to the determination of young’s modulus, @noop journal journal American Journal of Physics volume 58, pages 483 (year 1990)NoStop [Bückle et al.(2021)Bückle, Klaß, Nägele, Braive, and Weig]Buckle2021 author author M. Bückle, author Y. S. Klaß, author F. B. Nägele, author R. Braive, and author E. M. Weig, title title Universal length dependence of tensile stress in nanomechanical string resonators, @noop journal journal Physical Review Applied volume 15, pages 034063 (year 2021)NoStop [Wilson et al.(2009)Wilson, Regal, Papp, and Kimble]Wilson2009 author author D. J. Wilson, author C. A. Regal, author S. B. Papp, and author H. Kimble, title title Cavity optomechanics with stoichiometric sin films, @noop journal journal Physical review letters volume 103, pages 207204 (year 2009)NoStop [Allen et al.(1999)Allen, Thomas, and Jones]Allen1999 author author S. M. Allen, author E. L. Thomas, and author R. A. Jones, @noop title The structure of materials, Vol. volume 44 (publisher Wiley New York, year 1999)NoStop [Kazmerski(2012)]Kazmerski2012 author author L. Kazmerski, @noop title Polycrystalline and amorphous thin films and devices (publisher Elsevier, year 2012)NoStop [Hansen(2004)]Hansen2004 author author N. Hansen, title title Hall–petch relation and boundary strengthening, @noop journal journal Scripta materialia volume 51, pages 801 (year 2004)NoStop [Wu et al.(2022)Wu, Kou, Lai, Lan, Katnagallu, Hahn, Taheriniya, Wilde, Gleiter, and Feng]Wu2022 author author S. Wu, author Z. Kou, author Q. Lai, author S. Lan, author S. S. Katnagallu, author H. Hahn, author S. Taheriniya, author G. Wilde, author H. Gleiter, and author T. 
Feng, title title Dislocation exhaustion and ultra-hardening of nanograined metals by phase transformation at grain boundaries, @noop journal journal nature communications volume 13, pages 5468 (year 2022)NoStop [Gottstein and Shvindlerman(2009)]Gottstein2009 author author G. Gottstein and author L. S. Shvindlerman, @noop title Grain boundary migration in metals: thermodynamics, kinetics, applications (publisher CRC press, year 2009)NoStop [Ritchie(2011)]Ritchie2011 author author R. O. Ritchie, title title The conflicts between strength and toughness, @noop journal journal Nature materials volume 10, pages 817 (year 2011)NoStop [Kurotani and Tanaka(2022)]Kurotani2022 author author Y. Kurotani and author H. Tanaka, title title Fatigue fracture mechanism of amorphous materials from a density-based coarse-grained model, @noop journal journal Communications Materials volume 3, pages 67 (year 2022)NoStop [Gludovatz et al.(2013)Gludovatz, Demetriou, Floyd, Hohenwarter, Johnson, and Ritchie]Gludovatz2013 author author B. Gludovatz, author M. D. Demetriou, author M. Floyd, author A. Hohenwarter, author W. L. Johnson, and author R. O. Ritchie, title title Enhanced fatigue endurance of metallic glasses through a staircase-like fracture mechanism, @noop journal journal Proceedings of the National Academy of Sciences volume 110, pages 18419 (year 2013)NoStop [Ghadimi et al.(2017)Ghadimi, Wilson, and Kippenberg]Ghadimi2019 author author A. H. Ghadimi, author D. J. Wilson, and author T. J. Kippenberg, title title Radiation and internal loss engineering of high-stress silicon nitride nanobeams, @noop journal journal Nano letters volume 17, pages 3501 (year 2017)NoStop [Schmid et al.(2016)Schmid, Villanueva, and Roukes]Schmid2016 author author S. Schmid, author L. G. Villanueva, and author M. L. Roukes, @noop title Fundamentals of nanomechanical resonators, Vol. volume 49 (publisher Springer, year 2016)NoStop [Hernandez et al.(2014)Hernandez, Easter, Murphy-Mariscal, Maestre, Tavassoli, Allen, Barrows, Belnap, Ochoa-Hueso, Ravi et al.]hernandez2014environmental author author R. R. Hernandez, author S. Easter, author M. L. Murphy-Mariscal, author F. T. Maestre, author M. Tavassoli, author E. B. Allen, author C. W. Barrows, author J. Belnap, author R. Ochoa-Hueso, author S. Ravi, et al., title title Environmental impacts of utility-scale solar energy, @noop journal journal Renewable and sustainable energy reviews volume 29, pages 766 (year 2014)NoStop [Macho-Stadler et al.(2015)Macho-Stadler, Elejalde-García, and Llanos-Vázquez]macho2015oscillations author author E. Macho-Stadler, author M. Elejalde-García, and author R. Llanos-Vázquez, title title Oscillations of end loaded cantilever beams, @noop journal journal European Journal of Physics volume 36, pages 055007 (year 2015)NoStop [Nakao et al.(2008)Nakao, Ando, Chen, Mehregany, and Sato]nakao2008mechanical author author S. Nakao, author T. Ando, author L. Chen, author M. Mehregany, and author K. Sato, title title Mechanical characterization of sic film at high temperatures by tensile test, in @noop booktitle 2008 IEEE 21st International Conference on Micro Electro Mechanical Systems (organization IEEE, year 2008) pp. pages 447–450NoStop § SUPPORTING INFORMATION (A): MECHANICAL PROPERTIES CHARACTERIZATION OF LPCVD A-SIC THIN FILMS USING RESONANCE METHOD The characterization flow of the method start with measuring the a-SiC thin film thickness t after the LPCVD a-SiC deposition using the Spectroscopic Ellipsometer (Woollam M-2000F). 
Then we identify the film stress by measuring the radius of curvature R_1 of the silicon wafer before the deposition with a stress meter (Flexus, Toho), and measuring the curvature R_2 again after the deposition, with the a-SiC on the backside of the wafer removed by CHF3/Ar anisotropic plasma etching. The film stress σ can be determined from the wafer-bending method via Stoney's equation σ = E_sub D_sub^2/(6(1-ν_sub)t) · (1/R_1 - 1/R_2), where E_sub, ν_sub and D_sub are the real component of the Young's modulus, the Poisson ratio and the thickness of the substrate (a silicon wafer in our case), respectively, and t is the thickness of the a-SiC thin film. Apart from the film stress, the Young's modulus E, Poisson ratio ν and density ρ of a-SiC are the material properties most relevant to designing a-SiC resonators with a targeted resonant frequency and stress distribution. They can be measured by patterning and then suspending the thin film as square membranes, cantilevers and strings of different lengths L. After suspension, the nanomechanical resonators are measured with a Laser Doppler Vibrometer (LDV, Polytec PSV-400) while they are placed in a vacuum chamber pumped down to 10^-7 mbar. After measuring the resonant frequencies of the square membranes with lengths L varying from 200 to 2000 um, we fit the measured data with the analytical formula for the fundamental mode <cit.> f_mem ≈ 1/(√(2) L_eff) √(σ/ρ_eff), where L_eff = L + L_oh is the effective length that includes the overhang size L_oh generated during the undercut, ρ_eff = A_corr × ρ is the effective density of the thin film, and A_corr is the correction factor due to the arrays of holes on top for fast undercut, which in our case is A_corr = 0.804 as calculated from COMSOL (corresponding to holes with diameter 1.5 um placed 3 um apart between adjacent centers, see Figure <ref>). We can therefore determine the density ρ of the thin film, since σ and L are known beforehand. The resonant frequencies of strings <cit.> with lengths from 200 to 6000 um are measured with the LDV. The eigenfrequency of string resonators can be analytically formulated as f_str,n = (n^2 π/2L^2) √(E t^2/12ρ) √(1 + 12σ_1D L^2/(n^2 π^2 E t^2)), where L is the length of the string (which again has to be replaced by L_eff due to the overhang from undercutting), n is the eigenmode number, ρ is the material density, σ_1D = σ(1-ν) is the tensile stress in the string, σ is the film stress, ν is the Poisson ratio, E is the Young's modulus and t is the thickness of the film. In our case, the a-SiC thin films have high tensile stress, which leads to 12σ_1D L^2 ≫ n^2 π^2 E t^2, so that the fundamental mode reduces to the form one can use to fit the measurement data, f_str ≈ 1/(2L_eff) √(σ(1-ν)/ρ), from which the Poisson ratio ν of a-SiC can be determined. Finally, the resonant frequencies of cantilevers <cit.> with lengths from 7 to 80 um are measured and fitted to the analytical formula f_can ≈ (1.8751^2/2π L_eff^2) √(E t^2/12ρ), from which the Young's modulus E of a-SiC can be determined.
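As an illustration of this fitting chain, the following Python sketch (with hypothetical input numbers, not values from this work) extracts ρ, ν and E in sequence from the fitted slopes s_mem, s_str and s_can of f_mem and f_str versus 1/L_eff, and of f_can versus 1/L_eff^2.

import numpy as np

# Hypothetical example inputs (for illustration only, not values from this work):
t = 100e-9       # film thickness from ellipsometry [m]
sigma = 800e6    # film stress from Stoney's equation [Pa]
A_corr = 0.804   # density correction factor for the release-hole pattern

# Hypothetical fitted slopes from LDV data:
#   membrane:   f_mem = s_mem / L_eff      -> s_mem [Hz m]
#   string:     f_str = s_str / L_eff      -> s_str [Hz m]
#   cantilever: f_can = s_can / L_eff**2   -> s_can [Hz m^2]
s_mem, s_str, s_can = 407.0, 231.0, 1.6e-4

# Step 1: density from the membrane mode, f_mem ~ (1/(sqrt(2) L_eff)) sqrt(sigma/rho_eff)
rho_eff = sigma / (2.0 * s_mem**2)
rho = rho_eff / A_corr

# Step 2: Poisson ratio from the string mode, f_str ~ (1/(2 L_eff)) sqrt(sigma (1 - nu)/rho)
nu = 1.0 - 4.0 * s_str**2 * rho / sigma

# Step 3: Young's modulus from the cantilever mode, f_can ~ (1.8751**2/(2 pi L_eff**2)) sqrt(E t**2/(12 rho))
E = 12.0 * rho / t**2 * (2.0 * np.pi * s_can / 1.8751**2) ** 2

print(f"rho = {rho:.0f} kg/m^3, nu = {nu:.3f}, E = {E / 1e9:.0f} GPa")
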
§ SUPPORTING INFORMATION (B): VALIDATION OF THE RESONANCE METHOD WITH COMSOL § SUPPORTING INFORMATION (C): THEORY OF DISSIPATION DILUTION, AND DILUTION FACTORS OF 1D PNC NANOSTRINGS In this work, the intrinsic quality factor Q_int is determined by measuring the mechanical quality factors Q_D of PnC nanostrings <cit.>, Q_D = D · Q_int, where the dilution factors D are calculated numerically; they depend on several mechanical properties of the material as well as on the geometry of the resonator. For a string-like resonator with thickness t and length L, the dilution factor of mode n is D_n = 1/(2λ + π^2 n^2 λ^2), where n is the mode number of the resonator and λ is defined as λ = (t/L)√(E/(12σ)). In order to further investigate the applicability of a-SiC to high-Q nanomechanical resonators, we need to identify the intrinsic quality factor Q_0 of the a-SiC thin films. This is done most accurately by experimentally measuring the Q factors of geometrically strain-engineered resonators whose external loss mechanisms are eliminated and whose dissipation dilution factor D_Q is well defined, leading to an expected intrinsic Q factor Q_0 = Q/D_Q. To perform such experiments, we fabricate a series of uniformly corrugated high-aspect-ratio phononic crystal (PnC) nanostrings of length 4 mm, whose unit-cell lengths L_uc and central defect lengths L_def are varied (Figure 4(a)), leading to PnC nanostrings with unit-cell numbers from 20 to 44. The widths of the wide and narrow parts of the nanostrings are 3 um and 1 um, respectively. With a higher unit-cell number or a shorter defect length, the PnC nanostring has a defect mode located in a phononic bandgap at higher frequency; an example of a PnC nanostring with 20 unit cells is shown in Figure 4(b). The vibration amplitudes of the nanostrings as a function of frequency are acquired with a custom balanced homodyne detection interferometer at a vacuum level of 4×10^-9 mbar; from these spectra the engineered phononic bandgaps of the PnC nanostrings are identified and the defect modes inside them are confirmed. With the ringdown method, the defect-mode Q factors of the PnC nanostrings are measured, see Figure 4(d). Using finite element method (FEM) simulations, the dilution factor D_Q of each PnC nanostring geometry is calculated numerically; together with the experimentally measured Q factors of the corresponding nanostrings, the intrinsic Q factor Q_0 of the a-SiC thin films is determined, as shown in Figure 4(e-f). We employ PnC nanostrings for intrinsic Q factor identification instead of other geometries such as membranes <cit.> or uniform strings <cit.> used in other works, since their FEM-simulated D_Q is much less dependent on the meshing at the clamping edges, and the measured Q factors of the localized defect mode do not rely on how the resonators are linked to the substrate. The intrinsic loss of a-SiC films can be attributed to the volume loss Q_vol and the surface loss Q_surf, i.e. Q_0 = (1/Q_vol + 1/Q_surf)^-1. For our thin-film resonators, the low surface-to-volume ratio allows us to set Q_vol to be the same as that of LPCVD a-SiN, i.e. 28000 (see Figure <ref>(f) for more details), while Q_surf is proportional to the thickness t of the corresponding film. We compare it with that of LPCVD a-SiN for clarity, i.e. Q_surf^SiC = x · Q_surf^SiN, where x is the ratio between the two surface losses and Q_surf^SiN = 6900 · t/100[nm] is the surface loss of a-SiN. § SUPPORTING INFORMATION (D): MORE DATA ON RINGDOWN MEASUREMENT OF 1D PNC NANOSTRINGS, AND INTRINSIC Q FACTOR CHARACTERIZATION OF A-SIC THIN FILMS § SUPPORTING INFORMATION (E): MACHINE LEARNING TECHNIQUE FOR DESIGNING HIGH-Q RESONATORS Bayesian optimization is used for the design of an ultra-high-Q a-SiC nanostring made with a-SiCR2. The nanostring has a total length of 6 mm and the number of unit cells is fixed to 24. In the Bayesian optimization algorithm, the Q factor of the nanomechanical resonator is the quantity being optimized. Nine design parameters are set up for the optimization, and 500 iteration steps are run in total.
Among the parameters, v1 is the defect width, constrained between 1 and 3 um; v2 is the defect length, constrained between 50 and 500 um; v3 is the unit-cell width ratio, constrained between 1:1.5 and 1:3; v4 is the unit-cell length ratio, constrained between 1:3 and 3:1; v5 to v9 are the widths of the unit cells' thin parts, the width parameters defining the tapering shape, each constrained between 1 and 3 um. The lengths of the unit cells were determined from the bandgap frequency matching condition once the set of unit-cell widths is defined <cit.>. One can find that the mode shapes become more and more confined away from the clamping points, which minimizes the clamping loss. From the fifth optimized result (Iter 107) to the final optimized result (Iter 442), the geometry changes from a uniformly corrugated design at the edge to a non-uniformly corrugated one; this interesting finding might lead to new perspectives in designing 1d PnC nanostrings in the future. It is interesting to note that the highest-Q design does not coincide with the design with the highest tensile stress. § SUPPORTING INFORMATION (F): THIN A-SIC RESONATOR Both the force sensitivity and the force responsivity of nanomechanical sensors are limited by the minimum thickness achievable for continuous films. For thin-film sensors with high intrinsic tensile stress, a higher aspect ratio (length/thickness) makes higher Q factors feasible and thus gives higher force sensitivity due to a lower noise level, while for sensors with low film stress, thinner films lead to lower stiffness and therefore larger deflection under a given force, i.e. higher force responsivity. Unlike 2D materials such as graphene, which use a bottom-up approach to construct ultra-thin suspended resonators (e.g. growing the thin film layer by layer atomically), engineerable thin films such as a-SiN and s-Si usually rely on a top-down approach to obtain ultra-thin resonators. As shown in <cit.> and <cit.>, respectively, in order to fabricate high-aspect-ratio (HAR) resonators from SiN films down to 12 nm and sSi films down to 15 nm, encapsulating layers are required to support as well as to protect the thin SiN and sSi structures, which complicates the fabrication processes and may even introduce contamination. Compared to the above materials commonly used for nanomechanics, a-SiC thin films can be fabricated into thinner resonators thanks to their superior chemical inertness. As shown previously, even LPCVD, which is considered to deposit continuous and conformal films, cannot avoid pinholes in the films within the first few nanometers of deposition. In order to find the minimum thickness achievable for a-SiC thin films, an ion beam etcher (SCIA Ion Mill 150) is used to thin the thicker and more uniform GFR2 films down to the desired thickness. The ion beam etcher is preferred over other techniques, such as reactive plasma etching, for obtaining ultra-thin films with sub-nanometer accuracy: ion beam milling is a physical sputtering process applicable to a wide range of materials, and it avoids the surface contamination and local charge accumulation that might prevent obtaining the desired conformal thin film accurately. The GFR2 films with an initial thickness of 71 nm are thinned down with a beam voltage of 120 V (kinetic energy 120 eV per ion) at a grazing incidence angle of 4 degrees, resulting in an etch rate of 7 nm/h.
By suspending resonators with polished film thicknesses from 3 to 6 nm on silicon substrates, we found that the minimum continuous, smooth LPCVD a-SiC GFR2 film achievable is between 4.1 and 4.9 nm. It is worth noting that buckling patterns are observed on the trampoline resonator, due to the facts that: 1. the thinner and larger the resonator is, the smaller the stress gradients in both the in-plane and out-of-plane directions must be in order to keep the resonator flat <cit.>; 2. the first several nanometers of the a-SiC film carry less tensile or even compressive stress due to defects at the material interface at the start of the deposition, as also shown in <cit.>. The surface topography of the a-SiCR2 thin film polished down to 3 nm is measured with AFM, as shown in SI G(d); its RMS roughness is only half that of the film prior to polishing. § SUPPORTING INFORMATION (G): SURFACE TOPOGRAPHY OF A-SIC THIN FILMS WITH ATOMIC FORCE MICROSCOPY Among all recipes, a-SiCR2 and a-SiCR3 have the lowest roughness. Yet one can still find large bright spots on a-SiCR3 films in the dark-field image from the optical microscope, indicating that it is not as flat over a larger landscape. After polishing with the ion beam miller, the 3 nm a-SiCR2 film is scanned and shows only half the roughness compared to the film prior to polishing. It is interesting to note that, with a lower deposition pressure, a-SiC170 is rougher than a-SiCR2, which matches well with its lower intrinsic Q factor and lower fracture strength. The topography of a-SiCR4 is very rough, which can be due to the re-crystallization of Si components in the thin film. § SUPPORTING INFORMATION (H): FABRICATION PROCESS Low pressure chemical vapor deposition (LPCVD) non-stoichiometric a-SiC films were used in this paper, deposited with different gas flow ratios (GFR) between SiH_2Cl_2 and 5% C_2H_2 in H_2 (GFR=2,3,4), at various deposition pressures (170/600 mTorr), and on both silicon and fused silica substrates (Table <ref>). The variation in deposition parameters allows us to systematically characterize the mechanical properties of LPCVD a-SiC. All a-SiC films were deposited at a temperature of 760 C for the same period of time (3 hours 47 minutes) to avoid film property differences caused by thermal effects and to ensure that the SiC films were deposited in amorphous rather than poly-crystalline form <cit.>. After the LPCVD a-SiC deposition, the wafers are diced into smaller chips. The chips are then exposed by electron beam lithography to create the desired patterns in the e-beam resist coated on top. Subsequently, these patterns are transferred into the a-SiC films using CHF_3 anisotropic plasma etching. Next, the patterned chips are cleaned with dimethylformamide and Piranha solution, followed by the undercut of the silicon or fused silica substrate using cryogenic SF_6 isotropic plasma etching or vapor hydrofluoric acid, respectively. Finally, the designed a-SiC nanomechanical resonators are obtained. § SUPPORTING INFORMATION (I): RINGDOWN MEASUREMENT WITH HOMODYNE DETECTION We use a balanced homodyne interferometer to perform ringdown experiments on a-SiC nanomechanical resonators. As shown in Figure <ref>, the a-SiC nanomechanical resonator (green) on top of the substrate (brown) is placed in an ultra-high vacuum (UHV) chamber at a pressure below 10^-8 mbar. This avoids mechanical losses due to gas damping. Ringdown measurements are performed via a piezoelectric actuator which resonantly drives the corrugated nanostrings.
After reaching maximal amplitude, the drive is stopped to observe the rate at which mechanical energy is dissipated from the nanostring. The vibration amplitude of the resonator is measured optically with a fiber-coupled infrared laser (1550 nm). The laser power is split into two parts: 90% is used as the interference reference (local oscillator), while the remaining 10% is delivered through a lensed fiber that shines on the resonator. The phase of the light reflected from the resonator is then compared with that of the local oscillator in the balanced homodyne detection setup, from which the vibration amplitude of the resonator is obtained.
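To make the last step concrete, the following Python sketch (with hypothetical frequency, Q and dilution-factor values; it synthesizes a trace rather than loading measured data) fits the exponential ringdown envelope A(t) = A_0 exp(-π f_0 t/Q_D) and converts the fitted, diluted quality factor into an intrinsic one via Q_0 = Q_D/D_Q, as in Supporting Information (C).

import numpy as np

# Hypothetical example: synthesize a ringdown trace instead of loading measured data.
f0 = 2.2e6                         # defect-mode frequency [Hz] (placeholder)
Q_true = 4.0e7                     # diluted Q used to generate the synthetic trace
t = np.linspace(0.0, 20.0, 400)    # time axis [s]
amp = np.exp(-np.pi * f0 * t / Q_true) + 1e-4 * np.random.default_rng(0).normal(size=t.size)

# Fit the log-amplitude with a straight line: ln A(t) = ln A0 - (pi f0 / Q_D) t
mask = amp > 0
slope, intercept = np.polyfit(t[mask], np.log(amp[mask]), 1)
Q_D = -np.pi * f0 / slope

# Convert the diluted Q into an intrinsic Q with a FEM-computed dilution factor (placeholder value)
D_Q = 1.5e3
Q_0 = Q_D / D_Q
print(f"fitted Q_D = {Q_D:.3e}, inferred Q_0 = {Q_0:.0f}")
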
http://arxiv.org/abs/2307.05331v1
20230703161246
Application of MUSIC-type imaging for anomaly detection without background information
[ "Won-Kwang Park" ]
eess.SP
[ "eess.SP", "cs.NA", "math.NA", "78A46" ]
parkwk@kookmin.ac.kr Department of Information Security, Cryptography, and Mathematics, Kookmin University, Seoul, 02707, Korea It has been demonstrated that the MUltiple SIgnal Classification (MUSIC) algorithm is fast, stable, and effective for localizing small anomalies in microwave imaging. For the successful application of MUSIC, exact values of permittivity, conductivity, and permeability of the background must be known. If one of these values is unknown, it will fail to identify the location of an anomaly. However, to the best of our knowledge, no explanation of this failure has been provided yet. In this paper, we consider the application of MUSIC to the localization of a small anomaly from scattering parameter data when complete information of the background is not available. Thanks to the framework of the integral equation formulation for the scattering parameter data, an analytical expression of the MUSIC-type imaging function in terms of the infinite series of Bessel functions of integer order is derived. Based on the theoretical result, we confirm that the identification of a small anomaly is significantly affected by the applied values of permittivity and conductivity. However, fortunately, it is possible to recognize the anomaly if the applied value of conductivity is small. Simulation results with synthetic data are reported to demonstrate the theoretical result. MUltiple SIgnal Classification (MUSIC) microwave imaging scattering parameter simulation results § INTRODUCTION Although the MUltiple SIgnal Classification (MUSIC) algorithm was developed for estimating the individual frequencies of multiple time-harmonic signals, it was successfully applied to an inverse scattering problem of localizing a set of point-like scatterers <cit.>. From this pioneering research, MUSIC has been applied to various problems, for example, identification of arbitrarily shaped targets in inverse scattering problem <cit.> as well as the microwave imaging <cit.>, detection of detecting internal corrosion <cit.>, damage diagnosis on complex aircraft structures <cit.>, radar imagings <cit.>, impedance tomography <cit.>, ultrasound imaging <cit.>, and medical imaging <cit.>. Several studies have demonstrated that the MUSIC algorithm is fast, effective, and stable in both the inverse scattering problem and microwave imaging. However, for its successful application, one must 1 discriminate nonzero singular values to determine the exact noise subspace and 2 know a priori information of the background (exact values of background permittivity, conductivity, and permeability at a given frequency) to design an imaging function. Several studies <cit.> have revealed certain properties of singular values and methods for appropriate threshold schemes. However, although some research has been performed on certain phenomena when an inaccurate frequency is applied (see <cit.>, for instance), to the best of our knowledge it remains unexplained why one cannot localize a small anomaly when an inaccurate value of background permittivity or conductivity is applied. This provides the motivation for this study aimed at determining the effect of an applied inaccurate value of background permeability, permittivity, or conductivity. The purpose of this paper is to establish a new mathematical theory of MUSIC in microwave imaging when the background information is unknown. 
To this end, we carefully explore the structure of MUSIC imaging function by constructing a relationship with infinite series of Bessel functions of integer order, antenna arrangement, and an applied inaccurate value of background wavenumber. This is based on the integral equation formula for the scattered-field S-parameter in the presence of a small anomaly and the structure of left-singular vector associated with the nonzero singular value of the scattering matrix. From the explored structure, we can explain that 1 when an inaccurate value of background permeability or permittivity is applied, the identified location of the small anomaly is shifted in a specific direction, 2 when an inaccurate value of background conductivity is applied, there is no shifting effect and it is possible to identify the location fairly precisely if the applied value is small, 3 however it will be very difficult to recognize the existence of an anomaly if the applied value of conductivity is not small. To validate the theoretical results, various simulation results in the presence of single and multiple anomalies are presented. This paper is organized as follows. In Section <ref>, we briefly introduce the basic concept of scattering parameters in the presence of a small anomaly, introduce the imaging function of MUSIC. In Section <ref>, the mathematical structure of the imaging function is explored by establishing a relationship with an infinite series of Bessel functions, antenna arrangement, and an applied inaccurate wavenumber. In Section <ref>, we present various results of numerical simulations with synthetic data to confirm the theoretical results and discuss certain phenomena. In Section <ref>, a short conclusion including future works is provided. § SCATTERING PARAMETER AND THE IMAGING FUNCTION OF MUSIC Let D be a circle-like small anomaly with radius α, location _⋆, permittivity _⋆, and conductivity σ_⋆ at given angular frequency ω. We set D to be surrounded by a circular array of dipole antennas Λ_n, n=1,2,⋯,N, with location _n and they are placed outside of the homogeneous region of interest (ROI) Ω. In this paper, we assume that there exists no magnetic materials in Ω thus, the anomaly D and the background Ω are characterized by the value of dielectric permittivity and electric conductivity at a given angular frequency ω=2π f, where f denotes the ordinary frequency measured in . We denote and as the permittivity and conductivity of Ω, respectively, and _⋆ and σ⋆ that satisfy ω≫, and 2α√(_⋆/)<wavelength, and set the value of magnetic permeability as a constant such that μ()≡=1.257e-6/ for every ∈Ω. With this, we denote be the background wavenumber that satisfies ^2=ω^2(+i/ω) and define the following piecewise permittivity and conductivity as () and σ(), respectively such that ()={[ _⋆ for ∈ D,; for ∈Ω\D, ]. σ()={[ σ_⋆ ∈ D,; ∈Ω\D, ]. Let _(,,_m)∈ℂ^1×3 be the incident electric field in Ω due to the point current density 𝐉 at Λ_m that satisfies {[ ∇×_(,,_m)=-iω_(,,_m),; ∇×_(,,_m)=(+iω)_(,,_m), ]. where _∈ℂ^1×3 denotes the magnetic field. Analogously, let _(,_n,)∈ℂ^1×3 be the total electric field in the existence of D measured at Λ_n that satisfies {[ ∇×_(,_n,)=-iω_(,_n,),; ∇×_(,_n,)=(σ()+iω())_(,_n,) ]. with the transmission condition on the boundary of D. Let S_(n,m) be the incident-field S-parameter, which is the scattering parameter without D with transmitter number m and receiver number n. 
Similarly, S_(n,m) be the total-field S-parameter, which is the scattering parameter in the presence of D with transmitter number m and receiver number n. In this paper, the measurement data to retrieve D is the scattered-field S-parameter with transmitter number m and receiver number n denoted by S_(n,m)=S_(n,m)-S_(n,m). Then, on the basis of <cit.>, S_(n,m) can be represented by the following integral equation S_(n,m)=i^2/4ω∫_Ω((')-/+iσ(')-/ω)_(,',_m)·_(,_n,')'. Notice that the exact expression of _(,_n,') is unknown, it is very hard to apply (<ref>) to design MUSIC algorithm. Since the condition (<ref>) holds, it is possible to apply the Born approximation to (<ref>). Then, S_(n,m) can be approximated as S_(n,m)≈i^2/4ω∫_D(_⋆-/+iσ_⋆-/ω)_(,',_m)·_(,_n,')'. It is worth to emphasize that based on the simulation configuration <cit.>, only the z-component of the field _(,,_n) can be measured at Λ_n. Hence, by denoting it as u(,',_m) and applying the mean-value theorem, S_(n,m) can be written as S_(n,m)≈iα^2^2π/4ω(_⋆-/+iσ_⋆-/ω)u(,_m,_⋆)u(,_n,_⋆). To introduce the imaging function of MUSIC algorithm, we perform the singular value decomposition for the scattering matrix 𝕂=[ 0 S_(1,2) ⋯ S_(1,N); S_(2,1) 0 ⋯ S_(2,N); ⋮ ⋮ ⋱ ⋮; S_(N,1) S_(N,2) ⋯ 0 ]=𝕌𝔻𝕍^*≈τ_1_1_1^*, where τ_1 denotes the nonzero singular value, and _1 and _1 are the first left- and right-singular vectors of the scattering matrix, respectively. We refer to <cit.> why the diagonal elements of 𝕂 are set to zero. With this, by denoting 𝕀 as the N× N identity matrix, we can define projection operator ℙ_ onto the noise subspace: ℙ_=𝕀-_1_1^*. Then, based on the structure of the approximation (<ref>), we introduce a unit test vector: for each ∈Ω, (,)=(,)/|(,)|, where (,)=[u(,_1,_⋆),u(,_2,_⋆),…,u(,_N,_⋆)]^T. Then, since (,)∈Range(𝕂) if and only if =_⋆∈ D, we can examine that |ℙ_((,_⋆))|=0 and by plotting the following imaging function of MUSIC 𝔉(,)=1/|ℙ_((,))|, ∈Ω, the location _⋆∈ D can be identified. We refer to <cit.> for detailed descriptions. It is important that, to generate the test vector (,), on the basis of (<ref>), the exact value of must be known. This means that exact values of ω, , , and must be known. However, their exact values are sometimes unknown because these values are significantly dependent on the frequency, temperature, and other factors. Now, we assume that the exact values of and are unknown, and apply an alternative value instead of the true . Correspondingly, we set a unit test vector (,) from (<ref>) and consider the imaging function of MUSIC 𝔉(,)=1/|ℙ_((,))|, ∈Ω. Notice that, the exact location of D cannot be retrieved through the map of 𝔉(,), but we can recognize the existence of an anomaly and the identified location is shifted in a specific direction. However, some phenomena exhibited in Section <ref> cannot be explained yet. § THEORETICAL RESULT: STRUCTURE OF THE IMAGING FUNCTION WITH INACCURATE WAVENUMBER In this section, we explore the structure of the imaging function 𝔉(,) to explain the theoretical reason of some phenomena. The result is following. Let _n=_n/|_n|=_n/R=(cosθ_n,sinθ_n) and -_⋆=|-_⋆|(cosϕ,sinϕ). If _n satisfies |_n-|≫1/4||,1/4|| for n=1,2,⋯,N, 𝔉(,) can be represented as follows: 𝔉(,)≈N^2-2N+1/N^2-2N(1-|J_0(|-_⋆|)+1/N∑_n=1^N∑_q∈ℤ_0i^qJ_q(|-_⋆|)e^iq(θ_n-ϕ)|^2)^-1/2, where J_s denotes the Bessel function of order s and ℤ_0 denotes the set of integer number except 0. 
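Before turning to the derivation, we note that the imaging function introduced above is straightforward to evaluate numerically. The following minimal Python sketch is illustrative only: the antenna radius, wavenumbers and anomaly location are placeholders rather than the configuration of the simulation section, and the scattered-field data are generated directly from the Born-approximated rank-one model rather than from a full forward solver. It shows how the noise-space projection is built from the scattering matrix and how an inaccurate test wavenumber shifts the peak of the imaging function.

import numpy as np
from scipy.special import hankel2

# Illustrative parameters (placeholders)
N = 16                              # number of dipole antennas
R = 0.09                            # array radius [m]
k_true = 150.0                      # true background wavenumber [1/m]
k_test = 120.0                      # inaccurate wavenumber used in the test vector
r_star = np.array([0.01, 0.03])     # anomaly location [m]

theta = 2.0 * np.pi * np.arange(1, N + 1) / N
antennas = R * np.column_stack([np.cos(theta), np.sin(theta)])

def u(k, x, y):
    # 2-D incident field, u = (i/4) H_0^(2)(k |x - y|)
    return 0.25j * hankel2(0, k * np.linalg.norm(x - y))

# Synthetic scattered-field S-parameters from the Born (rank-one) model, diagonal set to zero
g = np.array([u(k_true, a, r_star) for a in antennas])
K = np.outer(g, g)
np.fill_diagonal(K, 0.0)

# MUSIC: noise-space projection built from the first left singular vector
U, s, Vh = np.linalg.svd(K)
P_noise = np.eye(N) - np.outer(U[:, 0], U[:, 0].conj())

def F(x, k):
    w = np.array([u(k, a, x) for a in antennas])
    w = w / np.linalg.norm(w)
    return 1.0 / np.linalg.norm(P_noise @ w)

# Evaluate the imaging function on a grid of the ROI and report the peak location
xs = np.linspace(-0.08, 0.08, 81)
vals = np.array([[F(np.array([x, y]), k_test) for x in xs] for y in xs])
iy, ix = np.unravel_index(np.argmax(vals), vals.shape)
print("peak near", (round(xs[ix], 4), round(xs[iy], 4)),
      "vs. true location", tuple(r_star), "and predicted shift factor", k_true / k_test)

We now turn to the derivation of this structure.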
Based on <cit.>, the incident field u(,,') can be written as u(,,')=i/4H_0^(2)(|-'|), ', where H_0^(2) denotes the Hankel function of order zero of the second kind. Since |_n-|,|_n-_⋆|≫1/4||,1/4|| for all n=1,2,⋯,N, the following asymptotic forms of the Hankel function hold (see <cit.>, for instance) i/4H_0^(2)(|_n-'|)≈(-1+i)e^-i|_n|/4√(π|_n|)e^i_n·' and i/4H_0^(2)(|_n-'|)≈(-1+i)e^-i|_n|/4√(π|_n|)e^i_n·', the unit test vector (,) becomes (,)≈1/√(N)[e^i_1·,e^i_2·,⋯,e^i_N·]^T, and the scattering matrix 𝕂 can be written as 𝕂≈α^2 e^-2i R/32Rω(_⋆-/+iσ_⋆-/ω)[ 0 e^ik(_1+_2)·_⋆ ⋯ e^ik(_1+_N)·_⋆; e^ik(_2+_2)·_⋆ 0 ⋯ e^ik(_2+_N)·_⋆; ⋮ ⋮ ⋱ ⋮; e^ik(_N+_1)·_⋆ e^ik(_N+_2)·_⋆ ⋯ 0; ]. Let us denote 𝒪=_⋆-/+iσ_⋆-/ω and 𝕄=[ 0 e^ik(_1+_2)·_⋆ ⋯ e^ik(_1+_N)·_⋆; e^ik(_2+_2)·_⋆ 0 ⋯ e^ik(_2+_N)·_⋆; ⋮ ⋮ ⋱ ⋮; e^ik(_N+_1)·_⋆ e^ik(_N+_2)·_⋆ ⋯ 0; ]. Then, by performing an elementary calculus, we can examine that 𝕄𝕄^* =[ N-1 (N-2)e^ik(_1-_2)·_⋆ ⋯ (N-2)e^ik(_1-_N)·_⋆; (N-2)e^ik(_2-_1)·_⋆ N-1 ⋯ (N-2)e^ik(_2-_N)·_⋆; ⋮ ⋮ ⋱ ⋮; (N-2)e^ik(_N-_1)·_⋆ (N-2)e^ik(_N-_2)·_⋆ ⋯ N-1; ] =𝕀+(N-2)[ e^ik(_1-_1)·_⋆ e^ik(_1-_2)·_⋆ ⋯ e^ik(_1-_N)·_⋆; e^ik(_2-_1)·_⋆ e^ik(_2-_2)·_⋆ ⋯ e^ik(_2-_N)·_⋆; ⋮ ⋮ ⋱ ⋮; e^ik(_N-_1)·_⋆ e^ik(_N-_2)·_⋆ ⋯ e^ik(_N-_N)·_⋆; ] and correspondingly, we have _1_1^* =1/|τ_1|^2𝕂𝕂^*≈|α^2𝒪/32Rωτ_1|^2𝕄𝕄^* =C𝕀+C(N-2)[ e^ik(_1-_1)·_⋆ e^ik(_1-_2)·_⋆ ⋯ e^ik(_1-_N)·_⋆; e^ik(_2-_1)·_⋆ e^ik(_2-_2)·_⋆ ⋯ e^ik(_2-_N)·_⋆; ⋮ ⋮ ⋱ ⋮; e^ik(_N-_1)·_⋆ e^ik(_N-_2)·_⋆ ⋯ e^ik(_N-_N)·_⋆; ], C=|α^2𝒪/32Rωτ_1|^2∈ℝ. Since the following JacobiAnger expansion formula holds uniformly, e^ixcosθ=J_0(x)+∑_q∈ℤ_0i^qJ_q(x)e^iqθ, we can evaluate that ∑_n=1^Ne^i_n·(-_⋆) =∑_n=1^Ne^i|-_⋆|cos(θ_n-ϕ) =(NJ_0(|-_⋆|)+∑_n=1^N∑_q∈ℤ_0i^qJ_q(|-_⋆|)e^iq(θ_n-ϕ)) =N(J_0(|-_⋆|)+1/N∑_n=1^N∑_q∈ℤ_0i^qJ_q(|-_⋆|)e^iq(θ_n-ϕ)) :=N(J_0(|-_⋆|)+ℰ(,_⋆)). With this, by applying (<ref>) and (<ref>), we can evaluate (𝕀-_1_1^*)(,) ≈(1-C)/√(N)[ e^i_1·; e^i_2·; ⋮; e^i_N· ] -C(N-2)√(N)[ e^i_1·_⋆(J_0(|-_⋆|)+ℰ(,_⋆)); e^i_2·_⋆(J_0(|-_⋆|)+ℰ(,_⋆)); ⋮; e^i_N·_⋆(J_0(|-_⋆|)+ℰ(,_⋆)) ] and correspondingly, |ℙ_((,))| =(ℙ_((,))·ℙ_((,)))^1/2 =[∑_n=1^N((1-C)^2/N-(Ψ_1+Ψ_1)+Ψ_2Ψ_2)]^1/2, where Ψ_1 =(1-C)C(N-2)e^i_n·(-_⋆)(J_0(|-_⋆|)+ℰ(,_⋆)) Ψ_2 =C(N-2)Ne^i_n·_⋆(J_0(|-_⋆|)+ℰ(,_⋆)). Applying (<ref>) again, we can easily obtain that ∑_n=1^N(Ψ_1+Ψ_1)=2(1-C)C(N-2)N|J_0(|-_⋆|)+ℰ(,_⋆)|^2 ∑_n=1^NΨ_2Ψ_2=C^2(N-2)^2N^2|J_0(|-_⋆|)+ℰ(,_⋆)|^2. Therefore, |ℙ_((,))|≈((1-C)^2-2C(1-C)(N-2)N|J_0(|-_⋆|)+ℰ(,_⋆)|^2 +C^2(N-2)^2N^2|J_0(|-_⋆|)+ℰ(,_⋆)|^2)^1/2. Finally, since |ℙ_((,))|=0 and |J_0(|-_⋆|)+ℰ(,_⋆)|=1 when =_⋆, (1-C)^2-2C(1-C)(N-2)N+C^2(N-2)^2N^2=0 or equivalently, ((1-C)-C(N-2)N)^2=0. Hence, C(N-1)^2=1 and correspondingly, |ℙ_((,))|≈|1-C|^2(1-|J_0(|-_⋆|)+ℰ(,_⋆)|^2)^1/2 =N^2-2N/N^2-2N+1(1-|J_0(|-_⋆|)+1/N∑_n=1^N∑_q∈ℤ_0i^qJ_q(|-_⋆|)e^iq(θ_n-ϕ)|^2)^1/2. With this, we can obtain the structure (<ref>). Based on the identified structure 𝔉(,), we can say that the location =(/)_⋆ will be identified instead of the true one _⋆ because J_0(|-_⋆|)=1 and ℰ()=0 when |-_⋆|=0. This is the reason why the identified location of the anomaly is shifted. Note that, if the anomaly is located at the origin, its location can be identified for any value . Further properties will be discussed in the simulation results. § SIMULATION RESULTS AND DISCUSSIONS In this section, we present the results of simulation with synthetic data to check the theoretical result. To this end, N = 16 dipole antennas were used to transmit/receive signals at f =1 such that _n=0.09(cos2nπ/N,sin2nπ/N), n=1,2,⋯,N. 
The ROI Ω was set to be an interior of a circle with (,)=(20_0,0.2/) and radius 0.085 centered at the origin. Here, _0=8.854e-12/ is the vacuum permittivity. For the anomaly, we select a small ball D with _1=(0.01,0.03), α_1=0.01, and (_1,σ_1)=(55_0,1.2/). For multiple anomalies, we select another small ball D_2 with _2=(-0.04,-0.02), α_2=α_1, and (_2,σ_2)=(45_0,1.0/). We refer to Figure <ref> for an illustration of simulation configurations. [Application of inaccurate background permeability] First, we assume that only the true value of is unknown, that is, we applied alternative wavenumber that satisfies ^2=ω^2(+i/ω). Then, identified location of anomaly becomes =(/)_⋆=√(/)_⋆. Hence, the identified location will approach the origin if > and. Otherwise, the identified location will be far from the origin if <. Figure <ref> shows maps of 𝔉(,) with various in the presence of D_1. As we already mentioned, as the value of increases, the identified location approaches the origin. Otherwise, as the value of decreases, the identified location becomes far from the origin. It is interesting to examine the size of the identified anomaly becomes small and large as increases and decreases, respectively. We can observe the same phenomenon in the presence of multiple anomalies D_1 and D_2, as shown in Figure <ref>. [Application of inaccurate background permittivity] Next, let us assume that only the true value of is unknown, that is, we applied alternative wavenumber that satisfies ^2=ω^2(+i/ω). Note that, if satisfies the condition (<ref>), the identified location will be =(/)_⋆=√(ω+i/ω+i)_⋆≈√(/)_⋆. Hence, similar to the Example <ref>, the identified location will approach and be far from the origin if > and <, respectively. Figure <ref> shows maps of 𝔉(,) with various in the presence of D_1. Similar to the results in Figure <ref>, the identified location approaches the origin as the value of increases. Otherwise, the identified location becomes far from the origin as the value of decreases. It is interesting to examine the size of the identified anomaly becomes small and large as increases and decreases, respectively. We can observe the same phenomenon in the presence of multiple anomalies D_1 and D_2, as shown in Figure <ref>. [Application of inaccurate background conductivity] Here, we consider the case when only the true value of is unknown and correspondingly, we apply an alternative wavenumber that satisfies ^2=ω^2(+i/ω). Notice that opposite to the Examples <ref> and <ref>, if satisfies the condition (<ref>), the identified location will be =(/)_⋆=√(ω+i/ω+i)_⋆≈√(ω/ω)_⋆=_⋆. Hence, it can be expected that almost exact location of the anomaly can be identified through the map of 𝔉(,). This means that, it will be possible to identify the location of the anomaly by selecting a small value of although its true value is unknown. Otherwise, if does not satisfy the condition (<ref>), it will be impossible to identify the anomaly because the Born approximation cannot be applied to design the imaging function. Figure <ref> shows maps of 𝔉(,) with various in the presence of D_1. As we discussed previously, it is possible to identify almost exact location of D_1 if the value of is sufficiently small. However, owing to the appearance of unexpected artifacts with large magnitudes, it is very difficult to recognize the location of D_1 if the value of is not small. Hence, contrary to Example <ref>, selecting a small value of will guarantee successful identification of the anomaly without accurate value of . 
We can observe the same phenomea in the presence of multiple anomalies, as shown in Figure <ref>. § CONCLUSION Based on the integral equation for the scattered-field S-parameter and singular value decomposition of the scattering matrix in the presence of a small anomaly, we showed that the imaging function of MUSIC can be expressed by an infinite series of Bessel functions and applied wavenumber. Thanks to the theoretical result, we confirmed why an inaccurate location of the anomaly was retrieved when inaccurate value of background permeability, permittivity, or conductivity. However, the relationship between the retrieved size of the anomaly and the applied wavenumber remains unknown. It will be interesting to investigate a mathematical theory to explain this phenomenon. Moreover, the development of an effective algorithm for estimating exact value of background wavenumber would be a valuable addition to this work. § ACKNOWLEDGMENTS This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2020R1A2C1A01005221). 33 urlstyle [Deveney(2002)]D authorA. J. Deveney, titleSuper-resolution processing of multi-static data using time-reversal and MUSIC, <http://www.ece.neu.edu/faculty/devaney/ajd/preprints.htm>, year2002. [Ammari et al.(2005)Ammari, Iakovleva, and Lesselier]AIL2 authorH. Ammari, authorE. Iakovleva, authorD. Lesselier, titleTwo numerical methods for recovering small electromagnetic inclusions from scattering amplitude at a fixed frequency, journalSIAM J. Sci. Comput. volume27 (year2005) pages130–158. [Chen and Zhong(2009)]CZ authorX. Chen, authorY. Zhong, titleMUSIC electromagnetic imaging with enhanced resolution for small inclusions, journalInverse Prob. volume25 (year2009) pagesArticle No. 015008. [Park(2022)]P-MUSIC7 authorW.-K. Park, titleA novel study on the MUSIC-type imaging of small electromagnetic inhomogeneities in the limited-aperture inverse scattering problem, journalJ. Comput. Phys. volume460 (year2022) pagesArticle No. 111191. [Kurtoğlu et al.(2019)Kurtoğlu, Çayören, and Çavdar]KCC authorI. Kurtoğlu, authorM. Çayören, authorI. H. Çavdar, titleMicrowave imaging of electrical wires with MUSIC algorithm, journalIEEE Geosci. Remote Sens. Lett. volume16 (number5) (year2019) pages707–711. [Park(2021)]P-MUSIC6 authorW.-K. Park, titleApplication of MUSIC algorithm in real-world microwave imaging of unknown anomalies from scattering matrix, journalMech. Syst. Signal Proc. volume153 (year2021) pagesArticle No. 107501. [Solimene and Dell'Aversano(2014)]SD authorR. Solimene, authorA. Dell'Aversano, titleSome remarks on time-reversal MUSIC for two-dimensional thin pec scatterers, journalIEEE Geosci. Remote Sens. Lett. volume11 (number6) (year2014) pages1163–1167. [Ammari et al.(2008)Ammari, Kang, Kim, Louati, and Vogelius]AKKLV authorH. Ammari, authorH. Kang, authorE. Kim, authorK. Louati, authorM. Vogelius, titleA MUSIC-type algorithm for detecting internal corrosion from electrostatic boundary measurements, journalNumer. Math. volume108 (year2008) pages501–528. [Bao et al.(2020)Bao, Yuan, and Guo]BYG authorQ. Bao, authorS. Yuan, authorF. Guo, titleA new synthesis aperture-MUSIC algorithm for damage diagnosis on complex aircraft structures, journalMech. Syst. Signal Proc. volume136 (year2020) pagesArticle No. 106491. [Fan et al.(2021)Fan, Zhang, Sun, and Yun]FZSY authorS. Fan, authorA. Zhang, authorH. Sun, authorF. 
Yun, titleA local TR-MUSIC algorithm for damage imaging of aircraft structures, journalSensors volume21 (number10) (year2021) pagesArticle No. 3334. [Cicchetti et al.(2021)Cicchetti, Pisa, Piuzzi, Pittella, D'Atanasio, and Testa]CPPPDT authorR. Cicchetti, authorS. Pisa, authorE. Piuzzi, authorE. Pittella, authorP. D'Atanasio, authorO. Testa, titleNumerical and experimental comparison among a new hybrid FT-MUSIC technique and existing algorithms for through-the-wall radar imaging, journalIEEE Trans. Microwave Theory Tech. volume69 (number7) (year2021) pages3372–3387. [Liu et al.(2021)Liu, Wu, Yang, and Lu]LWYL authorZ. Liu, authorJ. Wu, authorS. Yang, authorW. Lu, titleDOA estimation method based on EMD and MUSIC for mutual interference in FMCW automotive radars, journalIEEE Geosci. Remote Sens. Lett. volume19 (year2021) pagesArticle No. 3504005. [Zhang et al.(2015)Zhang, Zhu, and Kuang]ZZK authorS. Zhang, authorY. Zhu, authorG. Kuang, titleImaging of downward-looking linear array three-dimensional SAR based on FFT-MUSIC, journalIEEE Geosci. Remote Sens. Lett. volume12 (number4) (year2015) pages885–559. [Hanke(2017)]H3 authorM. Hanke, titleA note on the MUSIC algorithm for impedance tomography, journalInverse Prob. volume33 (number2) (year2017) pagesArticle No. 025001. [Labyed and Huang(2012)]LH2 authorY. Labyed, authorL. Huang, titleUltrasound time-reversal MUSIC imaging of extended targets, journalUltrasound Med. Biol. volume38 (number11) (year2012) pages2018–2030. [Ruvio et al.(2013)Ruvio, Solimene, D'Alterio, Ammann, and Pierri]RSAAP authorG. Ruvio, authorR. Solimene, authorA. D'Alterio, authorM. J. Ammann, authorR. Pierri, titleRF breast cancer detection employing a noncharacterized vivaldi antenna and a MUSIC-inspired algorithm, journalInt. J. RF Microwave Comput. Aid. Eng. volume23 (number5) (year2013) pages598–609. [Scholz(2002)]S2 authorB. Scholz, titleTowards virtual electrical breast biopsy: space frequency MUSIC for trans-admittance data, journalIEEE Trans. Med. Imag. volume21 (year2002) pages588–595. [Son et al.(2015)Son, Kim, Lee, Kim, Lee, Jeon, and Choi]SKLKLJC authorS.-H. Son, authorH.-J. Kim, authorK.-J. Lee, authorJ.-Y. Kim, authorJ.-M. Lee, authorS.-I. Jeon, authorH.-D. Choi, titleExperimental measurement system for 3-6 microwave breast tomography, journalJ. Electromagn. Eng. Sci. volume15 (year2015) pages250–257. [Gavish and Donoho(2014)]GD authorM. Gavish, authorD. L. Donoho, titleThe optimal hard threshold for singular values is 4/√(3), journalIEEE Trans. Inf. Theory volume60 (number8) (year2014) pages5040–5053. [Hou et al.(2006)Hou, Sølna, and Zhao]HSZ1 authorS. Hou, authorK. Sølna, authorH. Zhao, titleA direct imaging algorithm for extended targets, journalInverse Prob. volume22 (year2006) pages1151–1178. [Park and Lesselier(2009)]PL1 authorW.-K. Park, authorD. Lesselier, titleElectromagnetic MUSIC-type imaging of perfectly conducting, arc-like cracks at single frequency, journalJ. Comput. Phys. volume228 (year2009) pages8093–8111. [Solimene et al.(2013a)Solimene, Maisto, and Pierri]SMP authorR. Solimene, authorM. A. Maisto, authorR. Pierri, titleRole of diversity on the singular values of linear scattering operators: the case of strip objects, journalJ. Opt. Soc. Am. A volume30 (year2013a) pages2266–2272. [Xu et al.(2023)Xu, Xing, Cui, and Tian]XXCT authorK. Xu, authorM. Xing, authorY. Cui, authorG. Tian, titleHow to determine an optimal noise subspace?, journalIEEE Geosci. Remote Sens. Lett. volume20 (year2023) pagesArticle No. 3500304. 
[Park(2017)]P-MUSIC3 authorW.-K. Park, titleAppearance of inaccurate results in the MUSIC algorithm with inappropriate wavenumber, journalJ. Inverse Ill-Posed Probl. volume25 (number6) (year2017) pages807–817. [Park and Park(2015)]PP1 authorJ. H. Park, authorW.-K. Park, titleLocalization of small perfectly conducting cracks from far-field pattern with unknown frequency, journalAppl. Math. Lett. volume43 (year2015) pages25–32. [Solimene et al.(2013b)Solimene, Ruvio, Dell'Aversano, Cuccaro, Ammann, and Pierri]SRDCAR authorR. Solimene, authorG. Ruvio, authorA. Dell'Aversano, authorA. Cuccaro, authorM. J. Ammann, authorR. Pierri, titleDetecting point-like sources of unknown frequency spectra, journalProg. Electromagn. Res. B volume50 (year2013b) pages347–364. [Haynes et al.(2014)Haynes, Stang, and Moghaddam]HSM2 authorM. Haynes, authorJ. Stang, authorM. Moghaddam, titleReal-time microwave imaging of differential temperature for thermal therapy monitoring, journalIEEE Trans. Biomed. Eng. volume61 (number6) (year2014) pages1787–1797. [Kim et al.(2019)Kim, Lee, Kim, Jeon, and Son]KLKJS authorJ.-Y. Kim, authorK.-J. Lee, authorB.-R. Kim, authorS.-I. Jeon, authorS.-H. Son, titleNumerical and experimental assessments of focused microwave thermotherapy system at 925, journalETRI J. volume41 (number6) (year2019) pages850–862. [Ammari and Kang(2004)]AK2 authorH. Ammari, authorH. Kang, titleReconstruction of Small Inhomogeneities from Boundary Measurements, vol. volume1846 of seriesLecture Notes in Mathematics, publisherSpringer-Verlag, addressBerlin, year2004. [Cheney(2001)]C authorM. Cheney, titleThe linear sampling method and the MUSIC algorithm, journalInverse Prob. volume17 (year2001) pages591–595. [Zhong and Chen(2007)]ZC authorY. Zhong, authorX. Chen, titleMUSIC imaging and electromagnetic inverse scattering of multiple-scattering small anisotropic spheres, journalIEEE Trans. Antennas Propag. volume55 (year2007) pages3542–3549. [Park et al.(2017)Park, Kim, Lee, and Son]PKLS authorW.-K. Park, authorH. P. Kim, authorK.-J. Lee, authorS.-H. Son, titleMUSIC algorithm for location searching of dielectric anomalies from S-parameters using microwave imaging, journalJ. Comput. Phys. volume348 (year2017) pages259–270. [Colton and Kress(1998)]CK authorD. Colton, authorR. Kress, titleInverse Acoustic and Electromagnetic Scattering Problems, vol. volume93 of seriesMathematics and Applications Series, publisherSpringer, addressNew York, year1998.
http://arxiv.org/abs/2307.01006v1
20230703134046
Innovative Polarimetry for High$-$energy Cosmic $γ$ and $e^{+}/e^{-}$ Induced by Vector Photo$-$productionn
[ "Dart-yin A. Soh", "Zhaoyi Qu" ]
hep-ph
[ "hep-ph", "astro-ph.HE" ]
http://arxiv.org/abs/2307.02440v1
20230705171043
Membrane Thickness Sensitivity of Avian Prestin: Implications
[ "Kuni H Iwasa" ]
physics.bio-ph
[ "physics.bio-ph", "q-bio.SC" ]
http://arxiv.org/abs/2307.03140v1
20230706170815
Greedy Matching in Optimal Transport with concave cost
[ "Andrea Ottolini", "Stefan Steinerberger" ]
math.CA
[ "math.CA", "math.OC", "math.PR" ]
Greedy Matching in Optimal Transport with Concave Cost. Andrea Ottolini, Stefan Steinerberger. Department of Mathematics, University of Washington, Seattle, WA 98195, USA. ottolini@uw.edu, steinerb@uw.edu. We consider the optimal transport problem between a set of n red points and a set of n blue points subject to a concave cost function such as c(x,y) = ‖x-y‖^p for 0 < p < 1. Our focus is on a particularly simple matching algorithm: match the closest red and blue point, remove them both and repeat. We prove that it provides good results in any metric space (X,d) when the cost function is c(x,y) = d(x,y)^p with 0 < p < 1/2. Empirically, the algorithm produces results that are remarkably close to optimal – especially as the cost function gets more concave; this suggests that greedy matching may be a good toy model for Optimal Transport for very concave transport cost. MSC 2020: 82B44, 90B80. The authors gratefully acknowledge support from the Kantorovich Initiative. A.O. is supported by an AMS-Simons Travel Grant. S.S. was supported by the NSF (DMS-2123224). The authors are indebted to Aleh Tsyvinski for drawing their attention to the problem. § INTRODUCTION §.§ The problem. The original motivation behind this paper is to understand the geometry of optimal transport with concave cost. Perhaps the easiest instance of this problem is the following: let X = {x_1, …, x_n} and Y = {y_1, …, y_n} be two sets of real numbers; what can be said about the optimal transport cost W^p_p(X,Y) = min_π∈ S_n∑_i=1^n | x_i - y_π(i)|^p, where π:{1,2,…, n}→{1,2,…, n} ranges over all permutations? The answer is trivial when p ≥ 1: order both sets in increasing order and send the i-th largest element from X to the i-th largest element from Y. The problem becomes highly nontrivial when the cost function is concave. As was already pointed out by Gangbo & McCann <cit.>: For concave functions of the distance, the picture which emerges is rather different. Here the optimal maps will not be smooth, but display an intricate structure which – for us – was unexpected; it seems equally fascinating from the mathematical and the economic point of view. [...] To describe one effect in economic terms: the concavity of the cost function favors a long trip and a short trip over two trips of average length [...] it can be efficient for two trucks carrying the same commodity to pass each other traveling opposite directions on the highway: one truck must be a local supplier, the other on a longer haul. (Gangbo & McCann, <cit.>) The problem has received increased attention in recent years; we refer to results of Bobkov and Ledoux <cit.>, Boerma, Tsyvinski, Wang and Zhang <cit.>, Caracciolo, D'Achille, Erba and Sportiello <cit.>, Caracciolo, Erba and Sportiello <cit.>, Delon, Salomon and Sobolevski <cit.>, Juillet <cit.> and McCann <cit.>. A reason why the problem is interesting is illustrated in Figure <ref>: as suggested by Gangbo-McCann, there is a very curious dichotomy where most points get matched to points that are very close, with a few exceptional points being transported a great distance. It is somewhat clear, in a qualitative sense, that this is to be expected (considering, for example, the Jensen inequality for concave functions). However, on a more quantitative level, the non-locality poses considerable difficulties. §.§ Dyck and Greedy Matching. Perhaps the main point of our paper is to point out that in the regime c(x,y) = h(|x-y|) with h concave, there exist two natural toy models that are effective in different regimes.
The first such model is the Dyck matching of Caracciolo–D'Achille–Erba–Sportiello <cit.>: their idea is to introduce g:[0,1] → ℤ given by
g(x) = #{1 ≤ i ≤ n: x_i ≤ x} - #{1 ≤ i ≤ n: y_i ≤ x}.
The function increases whenever x crosses an element of X and decreases every time it crosses an element of Y. The Dyck matching is then obtained by matching across level sets of the function g (see Fig. <ref>). The Dyck matching is independent of the cost function. It is shown numerically in <cit.> (and reproduced in §3) that the Dyck matching produces a nearly optimal matching whose cost exceeds the optimal cost by very little. The second toy model is given by a simple greedy matching which works in general metric spaces.
Greedy Matching.
* Determine m = min_{1 ≤ i,j ≤ n} c(x_i, y_j).
* Find a pair (x_i, y_j) with c(x_i, y_j) = m and set π(i) = j.
* Remove x_i from X and y_j from Y and repeat.
If the cost function is strictly monotonically increasing in the distance, c(x,y) = h(|x-y|), this greedy matching is, like the Dyck matching, independent of the cost function. This algorithm leads to mediocre results when the cost function is convex. This was already observed in the PhD thesis of d'Achille <cit.>, who explicitly considers the algorithm when c(x,y) = |x-y|^p and p ≥ 1 and shows that the results are not particularly good. One of the main points of our paper is to point out that the greedy matching is very good for very concave cost functions.
§ RESULTS
§.§ Main Result
It is clear that a greedy matching will initially perform well since it is matching points that are very close to each other. The main concern is that the greedy matching ends up maneuvering itself into a situation where all remaining choices are bad. We will now show that, in a suitable sense, this does not happen. The result will be phrased in terms of the Wasserstein distance W_1 between the sets X and Y, where the sets will be assumed to lie in a metric space,
W_1(X,Y) = inf_{π ∈ S_n} ∑_{i=1}^n d(x_i, y_{π(i)}).
When 0 < p < 1 it follows from Hölder's inequality with coefficients 1/p and 1/(1-p) that we can bound the size of the optimal matching by
W^p_p(X,Y) = inf_{π ∈ S_n} ∑_{i=1}^n d(x_i, y_{π(i)})^p ≤ W_1(X,Y)^p · n^{1-p}.
Our main result shows that, in any arbitrary metric space, the greedy matching, proceeding at each step blindly and without any foresight into the future, achieves the same rate up to a constant when 0 < p < 1/2. Note that the greedy matching will usually be very different from the one that minimizes W_1. For any 0 < p < 1 there is a constant c_p > 0 such that for any two sets of n points X, Y in any metric space, the greedy matching produces a matching satisfying, with respect to the cost function c(x,y) = d(x,y)^p,
Greedy_p(X,Y) ≤ c_p · W_1(X,Y)^p ·
  n^{1-p}        if 0 < p < 1/2,
  √(n) · log n   if p = 1/2,
  n^p            if 1/2 < p < 1.
The result is optimal up to constants when 0 < p < 1/2. We illustrate this on [0,1] equipped with the Euclidean distance (see Fig. <ref>). If the blue points are close to 0 and the red points are close to 1, then the greedy algorithm takes Greedy_p(X,Y) ∼ n while W_1(X,Y) ∼ n. If all the points alternate in an equispaced way, Greedy_p(X,Y) ∼ n^{1-p} and W_1(X,Y) ∼ 1. The proof shows that c_p ∼ 1+2p for p small. There is a clear change around p = 1/2: the greedy matching becomes less effective (see also Fig. <ref>). We also note a result of Bobkov-Ledoux <cit.> in one dimension: the optimal transport cost for d(x,y)^p and 0 < p < 1 is, in expectation, smaller than the one induced by the ordered optimal p = 1 matching.
§.§ Greedy is non-crossing.
It is known (see McCann <cit.>) that if we match points {x_1, …, x_n} to {y_1, …, y_n} under a cost function c(x,y) = h(|x-y|) with h ≥ 0 concave, then the optimal matching satisfies a non-crossing condition which can be described as follows: if the optimal matching sends x_i to y_π(i), then the n circles C_i that go tangentially though x_i and y_π(i), the circles C_i = {z ∈ℝ^2: z - x_i + y_π(i)/2 = |y_π(i)-x_i|/2}, do not intersect. The greedy matching has this desirable property for trivial reasons. The greedy matching is non-crossing. The argument is very simple. Suppose i < j and circle C_i intersects circle C_j. C_i intersects x_i and y_π(i) while C_j intersects x_j and x_π(j). Suppose w.l.o.g. x_i < y_π(i) (otherwise relabel X and Y). Since C_i and C_j intersect we have to either have x_i < x_j < y_π(i) or x_i < y_π(j) < y_π(i) but either of these cases leads to a contradiction at stage i of the greedy algorithm. §.§ Random Points I One of the most interesting cases is naturally that of matching random points to random points. Here, we can show that the greedy matching leads to nontrivial results. Let X, Y be two sets of n uniform i.i.d. random variables on [0,1]^d. The greedy matching subject to c(x,y) = x-y^p for 0 < p < 1/2 satisfies 𝔼 Greedy_p(X,Y) ≤ c_p· n^1- p/2  d=1 n^1-p/2· (logn)^p/2  d=2 n^1 - p/d  d ≥ 3. . While all representing a power-saving over the trivial bound n, we do not expect any of these results to be optimal (our Main Result cannot be expected to be optimal for random points, one would expect additional cancellations coming from the randomness). It seems likely to assume that, when d=1 and p < 1/2, we have 𝔼 _p(X,Y) ≤ c_p· n^1- p which would be optimal up to a constant. Most importantly, and illustrated in 3, is that still stronger results seem to be true: the greedy matching appears to be remarkably close to the cost function, even at the pointwise level (see 3.3). The Dyck matching is known to produce an optimal rate. For various different models of n random points on [0,1], we have 𝔼(Dyck_n) ∼ n^1-p. The Dyck matching is an a priori global construction that is naturally connected to the structure of a Brownian bridge, a fact that allows for a variety of tools to be applied. Conversely, the greedy algorithm is a `blind' local algorithm: it is difficult to predict what it will do without running it. As such, proving a result along the lines of Caracciolo-D’Achille-Erba-Sportiello <cit.> for the Dyck matching appears to be difficult and in need of new ideas. Some conjectures in that direction can be found in Section 5. §.§ Random points II More can be said if we assume that the points are chosen uniformly at random with respect to a fixed probability measure μ since one would then expect a certain limiting behavior to arise. This is indeed the case. Let 0<p<d/2 and μ be the uniform measure on a bounded set Ω⊂ℝ^d. If X,Y are i.i.d. copies from μ, then lim_n →∞ n^p/d -1· W^p_p(X,Y) = β_p(d) · |Ω|^p/d. When d=1, the result has recently been extended by Goldman-Trevisan <cit.> to also allow for randomness with respect to variable absolutely continuous density. Let 0 < p < 1/2. There exists c_p > 0 such that for compactly supported μ with a.c. density f(x)dx lim_n →∞ n^p -1· W^p_p(X,Y) = c_p∫_ℝ f(x)^1-p dx. The restriction p < 1/2 is necessary, the behavior is strictly different at p = 1/2 (this is the scale where the fluctuations of the empirical distribution of the points starts to come into play). 
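These rates are easy to probe empirically, anticipating the numerics of §3. The sketch below is an illustrative implementation of our own (the helper names and the simple O(n^3) bookkeeping are not from the paper): it runs the greedy matching on i.i.d. uniform points and compares its cost with the exact optimum for c(x,y) = |x-y|^p.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def greedy_matching_cost(x, y, p):
    """Greedy matching: repeatedly match (and remove) the closest remaining pair.

    Works for points in any dimension with the Euclidean metric; returns the
    total cost sum d(x_i, y_pi(i))^p. Written for clarity, not speed (O(n^3)).
    """
    d = cdist(x, y)                       # pairwise distances
    total, n = 0.0, d.shape[0]
    alive_x = np.ones(n, dtype=bool)
    alive_y = np.ones(n, dtype=bool)
    for _ in range(n):
        sub = d[np.ix_(alive_x, alive_y)]
        i, j = np.unravel_index(np.argmin(sub), sub.shape)
        total += sub[i, j] ** p
        alive_x[np.flatnonzero(alive_x)[i]] = False
        alive_y[np.flatnonzero(alive_y)[j]] = False
    return total

def optimal_cost(x, y, p):
    cost = cdist(x, y) ** p
    r, c = linear_sum_assignment(cost)
    return cost[r, c].sum()

# Greedy vs. optimal for i.i.d. uniform points on [0,1], cost |x-y|^p
rng = np.random.default_rng(1)
n, p = 200, 0.25
x = rng.random((n, 1))
y = rng.random((n, 1))
print("greedy :", greedy_matching_cost(x, y, p))
print("optimal:", optimal_cost(x, y, p))
```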
Since the uniform measure is a special case of a measure of the form f(x)dx, the implicit (universal) constants are the same as in the result of Barthe-Bordenave in the sense of c_p = β_p(1). We provide an easy explicit lower bound on these constants. Let 0 < p < d/2. Then
β_p(d) ≥ ω_d^{-p/d} · Γ(1 + p/d),
where ω_d is the volume of the unit ball in ℝ^d. The argument is not terribly difficult. The bound seems to be rather accurate for small values of p. As a consequence, we obtain two-sided bounds in the one-dimensional setting when p is close to 0. There is an interesting heuristic: we expect that the optimal matching will send most points a distance ∼ c · n^{-1}, with c ranging over different values but being of order ∼ 1, and thus the cost should be close to ∼ c^p · n^{1-p}. However, since p ∼ 0+, one expects c^p ∼ 1 and thus that the optimal transport cost is perhaps given by (1+𝒪(p)) · n^{1-p}. For 0 < p < 1/2,
2^{-p} · Γ(1+p) ≤ β_p(1) ≤ (1/(1-2p)) · (2^p/Γ(1-p)).
In particular, we conclude that β_p(1) = 1 + 𝒪(p) when p ∼ 0 is small.
§.§ Extreme concave matching
The cost c(x,y) = |x-y|^p is particularly natural. There is, in a suitable sense, a canonical limit as p → 0^+ since
lim_{p → 0^+} (|x-y|^p - 1)/p = log |x-y|.
As suggested by Fig. <ref> (and the numerics in §3), the greedy algorithm performs very well in this setting. We prove a basic result suggesting that this is not a coincidence. Let X, Y be two sets of n distinct points in a metric space such that d(x_i, y_j) ≤ 1 and assume that all pairwise distances are unique. Then there exists K ∈ ℕ such that for all k ≥ K the solution of the optimal matching problem
min_{π ∈ S_n} ∑_{i=1}^n (log d(x_i, y_{π(i)}))^{2k+1}
is given by the greedy matching. The argument is quite simple and there is nothing particularly special about the logarithm; similar results could be attained with many other cost functions that are dramatically different across different length scales. The main point of this simple Proposition is to illustrate that the effectiveness of the greedy matching in the setting of very concave cost functions is perhaps not entirely surprising: the dramatic separation of scales puts a heavy reward on matchings with very small distances which, coupled with the separation of scales, then suggests the greedy algorithm as a natural object.
§ NUMERICS OF THE GREEDY MATCHING
§.§ Transport costs.
The purpose of this section is to consider the behavior of the greedy algorithm and the Dyck matching when matching n i.i.d. random points on [0,1] given the cost function c(x,y) = |x-y|^p for 0 ≤ p ≤ 1/2. The results suggest that, at least for random points, the greedy matching leads to results that are remarkably close to the ground truth: this effect becomes more pronounced when p becomes smaller (also at least partially suggested by the Proposition). The effectiveness of the greedy algorithm is strictly restricted to the region 0 < p < 1/2: for p ≥ 1/2, the greedy algorithm starts to scale differently and becomes less effective. This can already be observed for small values of n (and as pointed out by d'Achille <cit.> for p ≥ 1). We also observe that, as p becomes smaller, the effectiveness of the greedy matching increases dramatically while, for p close to 1, the effectiveness of the Dyck matching increases dramatically (see Fig. <ref>). The Dyck matching is optimal when p=1 (see <cit.>). Fig.
<ref> is well suited to reiterate a main contribution of our paper: when d=1 and considering matchings with c(x,y) = d(x,y)^p and 0 < p < 1, there exists a natural dichotomy depending on whether p is close to 0 or whether it is close to 1.
[Schematic figure: the parameter range 0 ≤ p ≤ 1 for the cost c(x,y) = d(x,y)^p in d = 1, with the greedy matching effective near p = 0 and the Dyck matching effective near p = 1.]
A natural question is, for example, whether one can identify the precise value 0 < p^* < 1 where the effectiveness of the two algorithms undergoes a phase transition and the matching problem for n iid random points is better approximated by the Dyck matching or the greedy matching, respectively.
§.§ Transport maps.
Another interesting aspect of the greedy matching is that it seems to recover a transport map that is even somewhat accurate at the pointwise level. The first few steps of the greedy algorithm (matching points to other points that are very close) seem to almost always be matched identically by the optimal matching. The likelihood drops as the greedy algorithm starts matching pairs of points that are further and further away from each other, but it remains remarkably high throughout the process (see Fig. <ref>). There is presumably little hope for a pointwise statement along these lines as the number of points n → ∞, but we do believe these examples to be a further instance of the `vague' phenomenon that the greedy matching somehow captures the behavior of Optimal Transport in the framework of very concave cost in one dimension. It would be very interesting if this could be made precise further. We emphasize that one would perhaps not expect such a pointwise statement in higher dimensions; the rigidity of the real line seems to be crucial (see also §3.4).
§.§ Higher dimensions
One of the important aspects of the greedy matching is that it is not at all restricted to one dimension; it works in any metric space. What we observe, see Table <ref>, is that the greedy matching becomes an even better approximation to the Optimal Matching. This is presumably more a statement about the geometry of random points than about Optimal Transport. One could imagine that, in the setting of random points, many possible `almost-optimal' matchings exist and that the problem becomes, in some sense, easier (an example is shown in Fig. <ref>). If that were true, that would reiterate the importance of understanding the more rigid low-dimensional cases.
§ PROOFS
§.§ Proof of the Theorem
We assume the sets of points are X = {x_1, …, x_n} and Y = {y_1, …, y_n} and we will use X_k, Y_k to denote the sets of points after k-1 points have been removed following the greedy algorithm. In particular, X = X_1 and Y = Y_1. We will also abbreviate the cost at stage k via
c_k = inf_{x ∈ X_k, y ∈ Y_k} d(x, y).
Our goal is to estimate ∑_{k=1}^n c_k^p. Recalling that, by definition, X_k and Y_k have n-k+1 elements each, the Wasserstein distance W_1 can be written as
W_1(X_k, Y_k) = min_{π: X_k → Y_k bijective} ∑_{x ∈ X_k} d(x, π(x)),
where the minimum ranges over all bijections.
At this point, we remark that the definition of W_1 is slightly more comprehensive (the infimum ranges over all ways of splitting and rearranging points): at this point, we employ the celebrated result of Birkhoff <cit.> and von Neumann <cit.> ensuring that in the case of n equal masses being transported to n equal masses, the optimal solution can be realized by a permutation (no mass is `split'), we refer to <cit.> for a generalization. Our first observation uses averaging. Let us use π to denote the permutation achieving the optimal W^1 transport cost. Then the distance achieved by the greedy matching in the k-th step can be bounded from above by c_k = inf_x ∈ X_k, y ∈ Y_k d(x, y) ≤1/n-k+1∑_x ∈ X_k^ d(x, π(x)) = W_1(X_k, Y_k)/n-k + 1. The second ingredient will be to show that W_1(X_k+1, Y_k+1) cannot be much larger than W_1(X_k, Y_k). For this purpose, let us assume that X_k is given by the points x_1, x_2, …, x_n-k+1 and, similarly, Y_k is given by y_1, y_2, … ,y_n-k+1. Then, after possibly relabeling the points, we have that W_1(X_k, Y_k) = ∑_i=1^n-k+1 d(x_i, y_i). Let us now assume that the greedy matching at this point matches x_i and y_j, meaning that d(x_i, y_j) is the smallest pairwise distance in X × Y. Then the greedy matching is going to match these two points up and the remaining sets of points are given by X_k+1 = X_k ∖{x_i } Y_k+1 = Y_k ∖{y_j }. We will provide an upper bound on W_1(X_k+1, Y_k+1) by taking the original matching between X_k and Y_k and then modify it a little to obtain a matching for X_k+1 with Y_k+1. This is done by a simple modification: the point x_j is now mapped to y_i (note that x_i has been mapped to y_j and both have been removed from the set). We preserve all other matchings. Then W_1(X_k+1, Y_k+1) ≤ W_1(X_k, Y_k) + d(x_j, y_i) - d(x_i, y_i) - d(x_j, y_j). At this point, we invoke the triangle inequality and argue that d(x_j, y_i) ≤ d(x_j, y_j) + d(y_j, y_i) ≤ d(x_j, y_j) + d(y_j, x_i) + d(x_i, y_i) Combining the last two inequalities, we realize that we can bound the increase in the W_11-distance between the two sets in terms of the cost function of the greedy matching at the k-th step by W_1(X_k+1, Y_k+1) ≤ W_1(X_k, Y_k) + d(x_j, y_i) - d(x_i, y_i) - d(x_j, y_j) ≤ W_1(X_k, Y_k) + d(x_i, y_j) = W_1(X_k, Y_k) + c_k. Combining these two ingredients, we arrive W_1(X_k+1, Y_k+1) ≤ W_1(X_k, Y_k) + c_k ≤ W_1(X_k, Y_k) + W_1(X_k, Y_k)/n-k+1 = W_1(X_k, Y_k) ( 1 + 1/n-k+1). By induction, we obtain W_1(X_k, Y_k) ≤ W_1(X,Y)·∏_ℓ=1^k-1(1 + 1/n-ℓ+1). Observe that ∏_ℓ=1^k-1(1 + 1/n-ℓ+1) = ∏_ℓ=1^k-1n-ℓ+2/n-ℓ+1 = n+1/n-k+2. Therefore W_1(X_k, Y_k) ≤n+1/n-k+2· W_1(X,Y). Applying the pigeonhole principle one more time, we see that c_k ≤W_1(X_k, Y_k)/n-k+1≤n+1/(n-k+1)^2· W_1(X,Y). Thus ∑_k=1^n c_k^p≤ W_1(X,Y)^p· (n+1)^p·∑_k=1^n1/(n-k+1)^2p. We have ∑_k=1^n1/(n-k+1)^2p = ∑_k=1^n1/k^2p. When 0 < p < 1/2, we have ∑_k=1^n1/k^2p≤ 1 + ∫_1^n+11/x^2p dx ≤ 1 + (n+1)^1-2p/1-2p. Dealing with the remaining cases in the usual fashion, we obtain ∑_k=1^n c_k^p≤ c_p· W_1(X,Y)^p· n^1-p  0 < p < 1/2 √(n)·logn  p=1/2 n^p  1/2 < p < 1. The argument also shows that, for p < 1/2 we have _p(X,Y) ≤(1/1-2p + o(1)) · n^1-p· W_1(X,Y). In particular, for p close to 0, we have that that 1/(1-2p) ∼ 1 + 2p and the implicit constant is close to 1. §.§ Proof of Corollary 1 Corollary 1 follows immediately from the Theorem. The missing ingredient is a good estimate on W_1(X,Y) where X and Y are two sets of n i.i.d. uniformly distributed points in [0,1]^d. 
The case d=2 is arguably the most famous, the celebrated result of Ajtai, Komlos, Tusnady <cit.> ensures that c_1 √(n logn)≤min_π∈ S_n∑_i = 1^n x_i - y_π(i)≤ c_2√(n logn). with high probability. The one-dimensional case is a bit simpler and one has (see, for example, <cit.>), with high probability, min_π∈ S_n∑_i = 1^n | x_i - y_π(i)| ≤ c √(n). A short proof of this one-dimensional fact can be given via Fourier Analysis: as noted in <cit.>: for any measure μ on the unit interval, we have W_1(μ, dx) ≤ W_2(μ, dx) ≤ c( ∑_ℓ∈ℤℓ≠ 0|μ(ℓ)|^2/ℓ^2)^1/2, where the upper bound, in the context of finite point sets, is also known as Zinterhof's diaphony <cit.>. In our setting, where μ = ∑_k=1^nδ_x_k and the x_k are random variables on [0,1], we have 𝔼 |μ(ℓ)|^2 = 𝔼| ∫_0^1 e^-2π i ℓ x dμ|^2 = 𝔼| ∑_k=1^n e^-2 π i ℓ x_k|^2 = 𝔼∑_k,m = 1^n e^-2 π i ℓ x_k e^2 π i ℓ x_m = n + ∑_k,m = 1 k ≠ m^n 𝔼 e^-2 π i ℓ x_k e^2 π i ℓ x_m = n. Thus, with 𝔼√(|X|)≤√(𝔼 |X|), we have 𝔼 W_1(μ, dx) ≤ c ( ∑_ℓ∈ℤℓ≠ 0n/ℓ^2)^1/2≤ c_2 √(n). From this and the triangle inequality W_1(μ_X, μ_Y) ≤ W_1(μ_X, dx) + W_1(μ_Y, dx) the result follows. The case d ≥ 3 where min_π∈ S_n∑_i = 1^n | x_i - y_π(i)| ≤ c · n^1-1/d was already remarked by Ajtai, Komlos, Tusnady <cit.>. A modern treatment of a much more general case is given in <cit.>. §.§ Proof of Proposition 2 Since the constant is independent of the domain, it will be enough to derive an upper bound in a fixed, arbitrary domain. We choose the unit cube [0,1]^d for convenience, however, we emphasize that there is nothing particularly special about the unit cube. Given n i.i.d. uniform points X_1, …, X_n in [0,1]^d and an independent uniform point Y, we have for all sufficiently small 0<ε< ε_0 that whenever Y is sufficiently far from the boundary of the unit cube d(Y, ∂ [0,1]^d) ≥ε, then ℙ(min_1 ≤ i ≤ n |Y-X_i |≥ε)=(1- ω_dε^d)^n, where ω_d is the volume of the unit ball in ℝ^d. Conditional on Y, since the X_i's are all independent, we obtain ℙ(min_1 ≤ i ≤ n |Y-X_i|≥ε|Y)=(1-|B_Y(ε)∩ [0,1]^d|)^n. Since d(Y, ∂ [0,1]^d) ≥ε, we have (1-|B_Y(ε)∩ [0,1]^d|)^n = (1 - ω_d ε^d)^n. A simple monotonicity argument guarantees that for all ε the conclusion still holds, up to replacing = with ≥. We assume the sets of points {X_1, …, X_n }⊂ [0,1]^d and {Y_1, …, Y_n }⊂ [0,1]^d are both sets of independent uniformly distributed random variables in [0,1]^d. The main idea is the use of the trivial bound 𝔼inf_π∈ S_n∑_i=1^n d(x_i, y_π(i))^p≥∑_i=1^n𝔼inf_x ∈ X |Y_i - x|^p=n𝔼min_1≤ i≤ n|X_i-Y|^p We introduce a change of coordinates ε = c^p/n^p/d. Then, for all sufficiently small ε (i.e., for fixed c, p, d and n→ +∞), we have ℙ(min_1 ≤ j ≤ n |X_i-Y |^p≥ε) = ℙ(min_1 ≤ j ≤ n |X_i-Y |≥c/n^1/d) =(1 - ω_d c^d/n)^n→ e^-ω_d c^d. As we remarked earlier, for all ε≥ 0 (i.e., for all c) we have ℙ(min_1 ≤ j ≤ n |X_i-Y |^p≥ε)≥ e^-ω_dc^p, so that we obtain, as n→ +∞, n ·𝔼inf_1≤ i≤ n|X_i-Y|^p =n ∫_0^∞ℙ(min_1 ≤ j ≤ n |X_i-Y |^p≥ε)dϵ =∫_0^+∞pc^p-1e^-ω_d c^ddc = n^1-p/d· w_d^-p/d·Γ(1+p/d). We note that the argument slightly improves when X_i is close to the boundary (since then the volume of a neighborhood intersected with [0,1]^d has smaller volume). However, since the number of points distance ∼ n^-1/d close to the boundary is ∼ n^(d-1)/d≪ n, exploiting this fact would not lead to better asymptotics. §.§ Proof of Proposition 3 Since 0< d(x_i, y_j) < 1, all the summands are negative and can use the trivial estimate n min_1 ≤ i ≤ n( log d(x_i, y_π(i)))^2k+1 ≤∑_i=1^n( log d(x_i, y_π(i)))^2k+1 ≤min_1 ≤ i ≤ n( log d(x_i, y_π(i)))^2k+1. 
Suppose now that min_1 ≤ i ≤ n d(x_i, y_π(i)) > min_1 ≤ i,j ≤ n d(x_i, y_j). Then, for all k ≥ K_1 sufficiently large, n min_1 ≤ i ≤ n( log d(x_i, y_π(i)))^2k+1 >min_1 ≤ i,j ≤ n( log d(x_i, y_j))^2k+1 since one grows exponentially larger than the other. This shows that any optimal matching has to at least coincide with the greedy matching in the first step. We emove the closest pair of points (x_i, y_j) and repeat the procedure on the remaining set of points. We see that for all k ≥ K_2, the next step has to coincide with that of the greedy matching. Repeating the procedure, we see that for all K ≥max(K_1, …, K_n-1) the minimum can only be given by the greedy matching. § CONCLUDING REMARKS §.§ Geometry of greedy matching I One of our motivation for this paper was the following related problem: given a convex cost function, the optimal permutation for two sets of points in the unit interval is the identity (with respect to their order statistics). For concave functions, we conjecture that the resulting permutation should be close to the identity in a suitable sense. One way to quantify this is to consider the number of approximate matching between the order statistics of two sets of points point (i.e., steps for which the k_1 smallest point of the first set is matched to the k_2 smallest point of the second set, with |k_1-k_2|≪ n). In order to capture this, one possibility is to construct a suitable bi-invariant metric along the lines of the bi-invariant Caley's distance on the symmetric group given by d(σ, π) =n-# fixed points in σπ^-1 =minimum number of tranposition to go from σ to π. While we do not have a canonical candidate, there is a large body of metrics on the symmetric group that share a desirable number of properties, see <cit.> for an account of their uses. The result would be consistent with the intuition given by the continuous limit, see Section 2 in <cit.>. The combination of this conjecture, together with that of Section 3.2, can be rephrased as an answer to the following question: given two sets of n uniformly random points on the unit interval, how many approximate matchings of the order statistics does the greedy algorithm produce? An easier question, still unanswered, is whether the likelihood of an approximate matching in the first step of the algorithm is asymptotically 1. §.§ Geometry of greedy matching II A quantitative variant of the above conjecture, suggested by numerics, is as follows: we see, empirically, that if we match n iid uniformly chosen random points on [0,1] to n other such iid points, then the greedy matching sends ∼ n - o(n) roughly distance ∼ 1/n and a small proportion, o(n), a large distance. Numerically, it appears that this term is roughly of order ∼√(n) or perhaps ∼√(n logn) which would also be in line with the bounds obtained in this paper. 10 achille M. D'Achille, Statistical properties of the Euclidean random assignment problem, PhD Thesis, Universite Paris-Saclay, 2020. akt M. Ajtai, J. Komlos and G. Tusnady, On optimal matchings. Combinatorica, 4 (1984), p. 259-264. barthe F. Barthe and C. Bordenave, Combinatorial optimization over two random point sets. In: Seminaire de probabilites XLV. Cham: Springer, 2013, pp. 483–535 birk G. Birkhoff. Tres observaciones sobre el algebra lineal. Universidad Nacional de Tucuman Revista Series A, 5:147–151, 1946 bobkov S. Bobkov and M. Ledoux, M. One-dimensional empirical measures, order statistics, and Kantorovich transport distances (Vol. 261, No. 1259). American Mathematical Society, 2019. bob S. 
Bobkov and M. Ledoux, Transport inequalities on Euclidean spaces for non-Euclidean metrics. Journal of Fourier Analysis and Applications, 26 (2020), 60. boerma J. Boerma, A. Tsyvinski, R. Wang, Z. Zhang, Composite Sorting, arXiv:2303.06701 brown L. Brown and S. Steinerberger, On the Wasserstein distance between classical sequences and the Lebesgue measure. Transactions of the American Mathematical Society, 373 (2020), p. 8943-8962. c S. Caracciolo, M. D’Achille, V. Erba and A. Sportiello, The Dyck bound in the concave 1-dimensional random assignment model. Journal of Physics A: Mathematical and Theoretical, 53 (2020), 064001. c2 S. Caracciolo, V. Erba and A. Sportiello, The number of optimal matchings for Euclidean Assignment on the line, Journal of Statistical Physics 183 (2021), p. 1–27 c3 S. Caracciolo, V. Erba and A. Sportiello, The p-Airy distribution, arXiv:2010.14468 delon J. Delon, J. Salomon and A. Sobolevski, Local matching indicators for concave transport costs. Comptes Rendus Mathematique, 348 (2010), 901-905. delon2 J. Delon, J. Salomon and A. Sobolevski, Local matching indicators for transport problems with concave costs. SIAM Journal on Discrete Mathematics, 26 (2012), 801-827. diaconis P. Diaconis, Group representations in probability and statistics. Lecture notes-monograph series, 11 (1988/1/1). fournier N. Fournier and A. Guillin, On the rate of convergence in Wasserstein distance of the empirical measure. Probability theory and related fields, 162 (2015), 707-738. gangbo W. Gangbo and R. McCann, The geometry of optimal transportation, Acta Math. 177 (1996), p. 113-161 dario M. Goldman and D. Trevisan, On the concave one-dimensional random assignment problem and Young integration theory, arXiv:2305.09234 bamdad B. Hosseini and S. Steinerberger, Intrinsic Sparsity of Kantorovich solutions. Comptes Rendus. Mathématique, 360(G10), 1173-1175, 2022. juillet N. Juillet, On a solution to the Monge transport problem on the real line arising from the strictly concave case. SIAM Journal on Mathematical Analysis, 52 (2020), 4783-4805. mccann R. McCann, Exact solutions to the transportation problem on the line. Proceedings of the Royal Society of London. Series A: Mathematical, Physical and Engineering Sciences, 455 (1999), 1341-1380. stein S. Steinerberger, A Wasserstein inequality and minimal Green energy on compact manifolds, Journal of Functional Analysis, 281 (2021), 109076. von J. von Neumann, A certain zero-sum two-person game equivalent to an optimal assignment problem, Ann. Math. Studies 28:5–12, 1953. zinter P. Zinterhof, Uber einige Abschatzungen bei der Approximation von Funktionen mit Gleichverteilungsmethoden. Osterreich. Akad. Wiss. Math.-Naturwiss. Kl. S.-B. II 185 (1976), no. 1-3, 121–132.
http://arxiv.org/abs/2307.02974v1
20230706131906
Cross-Spatial Pixel Integration and Cross-Stage Feature Fusion Based Transformer Network for Remote Sensing Image Super-Resolution
[ "Yuting Lu", "Lingtong Min", "Binglu Wang", "Le Zheng", "Xiaoxu Wang", "Yongqiang Zhao", "Teng Long" ]
cs.CV
[ "cs.CV" ]
L[1]>p#1 C[1]>p#1 R[1]>p#1 Journal of Class Files, Vol. 18, No. 9, September 2020 How to Use the IEEEtran Templates Cross-Spatial Pixel Integration and Cross-Stage Feature Fusion Based Transformer Network for Remote Sensing Image Super-Resolution Yuting Lu, Lingtong Min, Binglu Wang†, Member, IEEE, Le Zheng, Senior Member, IEEE, Xiaoxu Wang, Member, IEEE, Yongqiang Zhao, Member, IEEE, and Teng Long, Fellow, IEEE Yuting Lu, Xiaoxu Wang and Yongqiang Zhao are with School of Automation, Northwestern Polytechnical University, Xi’an 710072, China (e-mail:lyt1996@mail.nwpu.edu.cn, woyaofly1982@nwpu.edu.cn, zhaoyq@nwpu.edu.cn). Lingtong Min is with School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, China (e-mail:minlingtong@nwpu.edu.cn). Binglu Wang, Le Zheng and Teng Long are with the Radar Research Laboratory, School of Information and Electronics, Beijing Institute of Technology,Beijing 100081, China (e-mail: wbl921129@gmail.com, le.zheng.cn@gmail.com, longteng@bit.edu.cn). †Corresponding author: Binglu Wang. This work is supported by the Postdoctoral Science Foundation of China under Grant 2022M710393, the Fourth Special Grant of China Postdoctoral Science Foundation (in front of the station) 2022TQ0035 and the Shaanxi Science Fund for Distinguished Young Scholars 2022JC-49. August 1, 2023 ==================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Remote sensing image super-resolution (RSISR) plays a vital role in enhancing spatial detials and improving the quality of satellite imagery. Recently, Transformer-based models have shown competitive performance in RSISR. To mitigate the quadratic computational complexity resulting from global self-attention, various methods constrain attention to a local window, enhancing its efficiency. Consequently, the receptive fields in a single attention layer are inadequate, leading to insufficient context modeling. Furthermore, while most transform-based approaches reuse shallow features through skip connections, relying solely on these connections treats shallow and deep features equally, impeding the model's ability to characterize them. To address these issues, we propose a novel transformer architecture called Cross-Spatial Pixel Integration and Cross-Stage Feature Fusion Based Transformer Network (SPIFFNet) for RSISR. 
Our proposed model effectively enhances global cognition and understanding of the entire image, facilitating efficient integration of features cross-stages. The model incorporates cross-spatial pixel integration attention (CSPIA) to introduce contextual information into a local window, while cross-stage feature fusion attention (CSFFA) adaptively fuses features from the previous stage to improve feature expression in line with the requirements of the current stage. We conducted comprehensive experiments on multiple benchmark datasets, demonstrating the superior performance of our proposed SPIFFNet in terms of both quantitative metrics and visual quality when compared to state-of-the-art methods. remote sensing image super-resolution, transformer network, cross-spatial pixel integration, cross-stage feature fusion § INTRODUCTION Remote sensing imaging technology is of paramount importance in numerous fields, including environmental monitoring <cit.>, disaster management <cit.>, urban planning <cit.>, and object detection <cit.>. Therefore, the acquisition of high-resolution remote sensing images is imperative for the effective implementation and analysis of remote sensing image applications. However, challenges arise due to factors such as sensor noise, optical distortion and environmental interference, which can significantly degrade the image quality. Image super-resolution (SR) is a typical computer vision task that involves reconstructing high-resolution (HR) images from low-resolution (LR) images. The primary objective of SR is to mitigate the detrimental impact of acquisition equipment and environmental factors on remote sensing imaging outcomes, thereby enhancing the resolution of remote sensing images. As an alternative to developing physical imaging technologies, SR has gained significant attention in recent years for its ability to effectively generate high-resolution remote sensing images <cit.>. Traditional RSISR methods often rely on interpolation-based techniques, such as bicubic interpolation <cit.> or Lanczos interpolation. While these methods are simple, they may yield limited performance due to their inability to capture high-frequency details and structural information in the generated images. Recent advancements in deep learning <cit.> have led to the emergence of convolutional neural networks (CNNs) as powerful tools for various image processing tasks, including RSISR. CNN-based methods <cit.> have demonstrated promising results in learning complex representations from large datasets. However, despite the success of CNNs, they still possess certain limitations when employed for RSISR. CNNs typically operate locally with fixed receptive fields, which may hinder their ability to effectively capture long-range dependencies. As a result, they may have limited modeling capacity for remote sensing scenes with large spatial extent <cit.>. Transformer-based architectures, initially developed for natural language processing tasks, have emerged as a suitable solution for addressing the limitations of CNNs and have demonstrated impressive performance in diverse computer vision tasks, including image classification <cit.> and object detection <cit.>. The pioneering vision transformer model <cit.> employs a redundant attention mechanism, resulting in quadratic computation complexity relative to the image size. This high computational complexity poses challenges for its application in high-resolution predictions for RSISR tasks. 
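To make the complexity issue concrete, the following back-of-the-envelope sketch (our own illustration) uses the standard operation counts for scaled dot-product attention: global self-attention over an H × W feature map with C channels costs on the order of (HW)^2·C operations for the score and aggregation products, while restricting attention to M × M windows reduces this to roughly HW·M^2·C. The feature-map and channel sizes below are assumptions for illustration only; the window size 16 matches the setting used later in the implementation details.

```python
# Rough multiply-add counts for the two attention products (Q K^T and attn V),
# ignoring the linear projections.
H = W = 64        # feature-map size (illustrative assumption)
C = 64            # channel dimension (illustrative assumption)
M = 16            # window size, as used later in the paper

global_ops = 2 * (H * W) ** 2 * C     # global self-attention: O((HW)^2 C)
window_ops = 2 * (H * W) * M ** 2 * C # window attention:      O(HW M^2 C)
print(f"global : {global_ops:.2e} ops")
print(f"window : {window_ops:.2e} ops  (x{global_ops // window_ops} cheaper)")
```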
To mitigate this issue, recent proposals have explored the use of self-attention within small spatial regions <cit.>. However, since remote sensing images usually cover a large range of areas, ground objects and landforms with similar features are far apart in space. As shown in Fig. <ref>, the texture, shape and semantic features of the areas in the boxes are similar to each other but far apart in space. As a result, these methods that partition an image into fixed-size windows and employ self-attention within those windows to model pixel dependencies fail to capture interactions between distant but similar pixels, which is crucial for attaining optimal performance <cit.>. Furthermore, the majority of current transformer-based RSISR methods primarily rely on skip connections to transmit shallow features to deep features. However, treating these reused shallow features equally impedes the representational capability of transformers, despite the proven effectiveness of skip connections in RSISR <cit.>. To address these limitations, we propose two components: cross-space pixel Fusion attention (CSPIA) and cross-stage feature fusion attention (CSFFA). CSPIA allows the local window to perceive the contextual window (refer to Fig. <ref>) by maximizing the similarity between image pairs. This, in turn, effectively enlarges the receptive field, as depicted in Fig. <ref>, enabling the utilization of valuable context information from the image. In parallel, CSFFA enhances feature expression by adaptively integrating features cross-stages. CSPIA consists of three main steps: Space Division (SD), Local-Context Matching (LCM) and Cross Attention (CA). The SD is responsible for obtaining local windows and contextual windows through different spatial partitioning strategies. Then, LCM is used to obtain the most matched contextual window for each pair of local window. Finally, the most similar contextual window is selected to conduct CA with corresponding local window. In this way, context information can be integrated into current local window efficiently, as shown in Fig <ref>. Building upon CSPIA, we construct a cross-spatial pixel integration block (CSPIB). Following the fusion of pixels from contextual windows, we apply the standard multihead self-attention (MSA) to capture local-range dependencies within the refined area of the local window and a local 3 × 3 convolution further handles local details. Subsequently, CSFFA calculates cross-covariance of feature channels across stages to generate cross-stage attention map based on both shallow features and deep features (after projection of key and query). CSFFA enables the model to adaptively adjust the channel-wise feature maps at cross-stage of the network to enhance the informative multiscale feature representation ability. Furthermore, to enhance the flow of complementary features and allow subsequent network layers to focus on finer image details, we integrate a feed forward network (FFN) <cit.> into our model. Utilizing CSFFA and FFN, we construct a cross-stage feature fusion block (CSFFB). By combining CSPIBs and CSFFBs, we develop a transformer network, named SPIFFNet, which incorporates cross-spatial pixel integration and cross-stage feature fusion, specifically designed for RSISR. This architecture is depicted in Fig. <ref>. Furthermore, our experimental findings unequivocally demonstrate the superiority of the proposed SPIFFNet model over state-of-the-art methods. 
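The schematic sketch below illustrates the intended behavior of CSPIA, anticipating the three steps (spatial division, local-context matching, cross attention) detailed in Section III-B. It is a simplified illustration under our own assumptions, not the authors' implementation: it uses a single head, identity mappings in place of learned query/key/value projections, and a hard argmax in place of the Gumbel-Softmax used during training; all function names are ours.

```python
import torch

def cspia_sketch(x, G=4, I=2):
    """Schematic CSPIA: each G x G local window cross-attends to its best-matching
    contextual window (pixels sampled at interval I). For illustration only.
    x: (B, C, S, S) with S divisible by G and by I."""
    B, C, S, _ = x.shape
    # Spatial Division: local windows are G x G neighbourhoods ...
    loc = x.unfold(2, G, G).unfold(3, G, G)                       # (B, C, S/G, S/G, G, G)
    loc = loc.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C, G * G)  # (B, Nl, C, G*G)
    # ... contextual windows take every I-th pixel, one window per (row, col) offset
    ctx = x.view(B, C, S // I, I, S // I, I)
    ctx = ctx.permute(0, 3, 5, 1, 2, 4).reshape(B, I * I, C, (S // I) ** 2)  # (B, Ng, C, P)
    # Local-Context Matching: pool windows to tokens and pick the most similar
    # contextual window for every local window (Gumbel-Softmax during training).
    t_loc = torch.nn.functional.normalize(loc.mean(-1), dim=-1)   # (B, Nl, C)
    t_ctx = torch.nn.functional.normalize(ctx.mean(-1), dim=-1)   # (B, Ng, C)
    match = (t_loc @ t_ctx.transpose(-1, -2)).argmax(-1)          # (B, Nl)
    # Cross Attention: queries from the local window, keys/values from the match
    P = ctx.shape[-1]
    idx = match[:, :, None, None].expand(B, match.shape[1], C, P)
    ctx_best = torch.gather(ctx, 1, idx)                          # (B, Nl, C, P)
    q = loc.transpose(-1, -2)                                     # (B, Nl, G*G, C)
    k = v = ctx_best.transpose(-1, -2)                            # (B, Nl, P, C)
    attn = torch.softmax(q @ k.transpose(-1, -2) / C ** 0.5, dim=-1)
    return attn @ v                                               # refined local windows

# Toy usage
feats = torch.randn(1, 8, 16, 16)
refined = cspia_sketch(feats, G=4, I=2)
print(refined.shape)   # (B, Nl, G*G, C) = torch.Size([1, 16, 16, 8])
```

In the full model the refined windows are pasted back to their original locations and are followed by window self-attention, a local 3 × 3 convolution, and CSFFB, as described below.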
The article presents three key contributions that can be summarized as follows: 1) We propose cross-spatial pixel integration attention (CSPIA) to introduce contextual information into the local window. The contextual information of the image enhances the global cognition and understanding of the entire image. By incorporating the context information within the local window, the model gains a better understanding of the relationship between ground features and the surrounding environment, thereby enhancing the consistency and accuracy of the RSISR results. 2) We propose cross-stage feature fusion attention (CSFFA), which facilitates effective feature representation by modeling the interdependencies among different channels across stages. By dynamically assigning weights to different channels, this mechanism enhances the model's capacity to capture essential image features while suppressing irrelevant ones, thereby producing higher-quality super-resolved images. 3) Based on CSPIA and CSFFA, we propose SPIFFNet, a cross-spatial pixel integration and cross-stage feature fusion based transformer network for remote sensing image super-resolution. SPIFFNet effectively captures contextual information to enhance the global perception ability of the local window and adaptively fuses information from previous stages to enhance feature representation. Experimental results on benchmark datasets validate that SPIFFNet achieves state-of-the-art performance in terms of objective metrics as well as visual quality. The rest of this article is organized as follows. Section 2 presents the related works on SR. Section 3 introduces the proposed SPIFFNet model, including the CSPIA and the CSFFA. Section 4 presents the experimental results and analysis. Finally, Section 5 concludes this paper. § RELATED WORK §.§ Deep Learning-based Methods for SR Deep learning-based super-resolution (SR) methods predominantly rely on standard convolutional neural networks (CNNs) owing to their robust nonlinear representation capabilities. Typically, these methods approach super-resolution as an image-to-image regression task, with the objective of learning the direct mapping from LR to HR images. SRCNN <cit.> first uses three convolution layers to map the low-resolution images to high-resolution images. Building upon SRCNN, Kim et al. extended the network depth in their work called DRCN <cit.>, resulting in considerable performance improvements over SRCNN <cit.>. FSRCNN <cit.> achieves high computational efficiency without compromising restoration quality through a redesigned architecture of SRCNN. VDSR <cit.> addressed the challenge of handling multi-scale images within a unified framework by incorporating residual learning, gradient cropping, and an increased number of network layers. EDSR <cit.> achieves superior performance by streamlining the model architecture, eliminating redundant modules from the conventional ResNet framework. For remote sensing images, LGCNet <cit.> stands as the pioneering CNN-based model for super-resolution, introducing the concept of local and global contrast features to enhance the preservation of details and clarity in the reconstructed images. Haut et al. <cit.> coordinates several different improvements in network design to achieve the most advanced performance on the RSISR task. A novel single-path feature reuse approach and a second-order learning mechanism are proposed by Dong et al. <cit.>, which aim to effectively utilize both small and large difference features. 
Although these methods have achieved impressive results, the limited receptive field of CNNS cannot capture the long-range dependencies between pixels, thereby limiting their performance. §.§ Transformer-based Methods The Transformer network, initially proposed in 2017 for machine translation tasks <cit.>, has gained popularity in computer vision due to its remarkable performance in image processing <cit.>. Since its inception, numerous visual models based on the Transformer have been proposed <cit.>. As an example, Chen et al. introduced the Image Processing Transformer (IPT) as a novel pre-trained model for low-level computer vision tasks. To fully leverage the potential of the transformer, a substantial amount of corrupted image pairs is generated using the ImageNet dataset. The IPT model adapts to diverse image processing tasks through multi-head and multi-tail training, along with the incorporation of contrastive learning techniques. Another well-known image restoration method called Uformer <cit.> utilizes the Locally Enhanced Window Transformer block, which reduces computational requirements by utilizing a non-overlapping window-based self-attention mechanism. Moreover, three skip connection schemes are explored to facilitate efficient information transfer from the encoder to the decoder. Furthermore, the Restoration Transformer (Restormer) network <cit.> proposes an effective Transformer model that captures remote pixel interactions and is suitable for large image, which is achieved through key design choices in the building blocks, including multi-head attention and feedforward networking. Efficient Super-Resolution Transformer (ESRT) <cit.> presents a hybrid model that combines a lightweight CNN backbone (LCB) with a lightweight transformer backbone (LTB) can dynamically adjusts the feature map size, achieving competitive results with low computational cost. SwinIR <cit.> introduced a robust baseline model for image restoration that utilizes the Swin Transformer architecture. In the context of RSISR, TransENet <cit.> proposes a multilevel enhancement architecture based on the Transformer framework, which can be integrated with the conventional super-resolution (SR) framework to effectively merge multi-scale high- and low-dimensional features. § METHODOLOGY In this section, we introduce the proposed SPIFFNet for RSISR. The overall framework of SPIFFNet is presented in Section III-A and the SPIFFNet gruop that integrates CSPIB and CSFFB is carefully discussed in Section III-B. Furthermore, in Section III-C, we will provide a concise overview of the implementation details. §.§ Overview of SPIFFNet In this section, we introduce the framework of our method, as shown in Fig. <ref>. Given an input image I_LR∈ℝ^H × W × 3, where H and W are the image height and width. Then, the input I_LR undergoes a transformation into the feature space through a 3 × 3 convolutional layer F_0= Conv(I_LR) where the Conv denotes 3 × 3 convolution and the F_0∈ℝ^H × W × C represents the shallow features. Then, several SPIFFNet groups, each of which involves CSPIB, LSAB, local 3 × 3 convolution and CSFFB are set up after the convolutional layer for deep feature extraction. We extract deep feature F_DF∈ℝ^H × W × C from F_0 as F_DF=H_DF(F_0) where H_DF represents the deep feature extraction module which contains K SPIFFNet groups. 
Specifically, the intermediate features F_1, F_2, …, F_K and the final deep feature F_DF are extracted sequentially F_i=H_i(F_i-1), i=1,2,...,K where H_i denotes the i-th SPIFFNet group. Finally, the deepest features F_DF are reconstructed using a 3 × 3 convolutional layer and pixel-shuffle upsampling operations <cit.> to generate SR image I_r. In addition, a bilinear interpolation of the LR image I_b is incorporated in the summation process to aid in the recovery process for the super-resolution output I_r I_SR=I_r+I_b We train the proposed model using the L1 loss function. The loss function is obtained by comparing the LR images I_LR with their corresponding HR reference images I_HR L_(θ)=1/N∑_i = 1^N I_HR^(i) - I_SR^(i)_1 where θ represents the parameters of the SPIFFNet, and N denotes the number of training samples. §.§ SPIFFNet Group In this section, we introduce the SPIFFNet group, a crucial component of our SPIFFNet model. Each SPIFFNet group consists of four components: cross-spatial pixel integration attention block (CSPIB), Local Spatial Attention Block (LSAB), local 3 × 3 convolution and cross-stage feature fusion block (CSFFB). The Cross-Spatial Pixel Integration Block (CSPIB) expands the model's receptive field, allowing it to capture long-range dependencies and contextual information from the input feature maps. This expansion is facilitated by the utilization of the Cross-Spatial Pixel Integration Attention (CSPIA). The LSAB is responsible for capturing the correlations of local spatial information. Local 3 × 3 convolution deals with local details in a fine-grained manner. CSFFB is designed to adaptive integration of information from the previous stage according to the needs of the characteristics of the current stage rather than treating them equally. This combination of local and global information, along with the cross stage adaptive information fusion, enables the model to capture complex spatial dependencies and contextual information effectively. 1) Cross-Spatial Pixel Integration Block (CSPIB): Previous methods often overlooked the interaction between local and contextual features. The CSPIB is designed to expand the local spatial window to capture more context information, as shown in Fig. <ref>(c). CSPIB contains two sequential modules, the cross-spatial pixel integration attention (CSPIA) is designed to capture contextual information for local windows and the MLP module for feature projection. Cross-Spatial Pixel Integration Attention (CSPIA): In this section, our objective is to expand the receptive field of local windows, enabling them to capture context information from the input feature maps. Previous studies have demonstrated that SR networks with a wider effective receptive field achieve superior performance <cit.>. The challenge lies in enabling the network to model global connectivity while preserving computational efficiency. Due to the fixed partitioning of windows at the layer level, there are no direct connections between windows. One straightforward approach is to exhaustively combine the information from every window pair. However, this approach is unnecessary and inefficient since many windows are irrelevant and uninformative. Additionally, redundant interactions may introduce noise that impairs the model's performance. Based on these observations, we introduce an innovative technique called cross-spatial pixel integration attention (CSPIA), where each local window adaptively integrates pixels with the most correlated global window. 
Specifically, as shown in Fig. <ref>, the CSPIA consists of three steps: Spatial Division (SD), Local-Context Matching (LCM) and Cross Attention (CA). Through SD, we split the feature map into two parts: local windows and global windows. In the case of local windows, adjacent embeddings of size G × G are grouped together. An example is illustrated in Fig. <ref> with G = 4. In the case of global windows, where the input size is S × S, the feature map is sampled at a fixed interval I. Fig. <ref> demonstrates an example with I = 2, where embeddings with a red border belong to a window. The height or width of the group for global windows is calculated as G = S/I. All windows can be processed in parallel, after which the outputs are pasted to their original location in window aggregation module. It is worthwhile to note that such cropping strategy is adaptive to arbitrary input size, which means no padding pixels are needed. Then, local windows and contextual windows are spatially pooled into one-dimensional tokens. These tokens encode the characteristic of the windows, which are later used for similarity calculation and window matching. This process can be expressed as X̅_i = max_X_j L(X_i)^TL(X_j),j i where X̅_i is the best-matching global window with current local window X_i, and L( · ) is the average pooling function along spatial dimension followed by flatten operation and layer normalization. Since the argmax opration is non-differentiable, we replace it with Gumbel-Softmax opration <cit.> during training so as to make it possible to train end-to-end. After that, pixel information of X̅_i are fused into X_i via Cross-Attention (CA) X_i = CA(X_i,X̅_i) As illustrated in Fig. <ref>, CA works in a similar way to the standard self-attention <cit.>, but the key and value are calculated using X̅_i. As a result, CSPIA can enable contextual pixel integration while introducing little computational overhead. 2) Local Spatial Attention Block (LSAB): As shown in Fig. <ref>(d), LSAB adopts the standard multihead self-attention (MSA) paradigm <cit.>, with two modifications. Firstly, LSAB operates at the window level instead of the image level. Secondly, positional embedding is omitted due to the introduction of the convolutional layer, which implicitly learns positional relationships and enhances the network's efficiency and conciseness. LSAB is designed to model local-range dependencies within a window, facilitating the comprehensive utilization of contextual information. Specifically, for feature X ∈ℝ^P^2× C, the corresponding query, key and value matrices Q ∈ℝ^P^2× d, K ∈ℝ^P^2× d, V ∈ℝ^P^2× C are computed as Q = XW_Q, K = XW_K, V = XW_V where the weight matrices W_Q, W_K and W_V are shared across windows, P is the window size. By comparing the similarity between Q and K, we obtain a attention map of size ℝ^P^2 × P^2 and multiply it with V. Overall, the calculation of Multi-head Self-Attention (MSA) can be expressed as MSA(X) = softmax (QK^T/√(d) )V Here √(d) is used to control the magnitude of QK^T before applying the softmax function. Similar to the conventional transformer layer <cit.>, the MLP is employed after MSA module to further transform features. MLP contains two fully-connected layers, and one GELU nonlinearity is applied after the first linear layer. 3) Local 3 × 3 Convolution: By adding a local 3 × 3 convolutional layer after feature extraction, the Transformer-based network is infused with the inherent inductive bias of convolution operations. 
This enhances the foundation for aggregating shallow and deep features in subsequent stages. 4) Cross-Stage Feature Fusion Block (CSFFB): Skip Connections are commonly used to propagate shallow features to deeper layers. However, the long-term information from the shallow stages tends to be attenuated. Although the shallow features can be reused through skip connections, they are treated indiscriminately with the deep features across different stages, thereby impeding the representational capacity of CNNs. To address this concern, we introduce CSFFB, illustrated in Fig. <ref>(e). CSFFB comprises two consecutive components: the Cross-Stage Feature Fusion Attention (CSFFA) for adaptive feature fusion across stages, and the FFN for feature transformation. Cross-Stage Feature Fusion Attention (CSFFA): Figure <ref> demonstrates the operation of the CSFFA, which calculates attention scores at the channel level using feature maps cross-stages. These scores are then applied to the feature maps, enabling the weighting of each channel's contribution and the integration of features from previous stages with the current input. This process promotes the fusion of channel information across stages, facilitating the model to capture both low-level details and high-level contextual information. As a result, the approach enables more precise and effective image super-resolution. Specifically, the features of the current stage, denoted as X_cur∈ℝ^H × W × C, and the previous stage, denoted as X_pre∈ℝ^H × W × C, are concatenated along the feature dimension to obtain Y ∈ℝ^H × W × 2C. Our CSFFA then gets query, key, and value projections: Q = W_d^QW_p^QX_cur, K = W_d^KW_p^KY, and V = W_d^VW_p^VY. Where W_p^( · ) denotes the 1 × 1 point-wise convolution, and W_d^( · ) represents the 3 × 3 depth-wise convolution. Subsequently, we reshape the Q and K to facilitate their dot-product interaction, resulting in a cross-stage channel attention map A with dimensions ℝ^C × 2C. The CSFFA process can be defined as X_out = W_pAttn(Q,K,V) + X_cur Attn(Q,K,V) = V · Softmax (Relu(Q · K/α )) where X_cur and X_out are the input and output feature maps, respectively; Q ∈ℝ^C × HW; K ∈ℝ^HW × 2C; and V ∈ℝ^2C × HW matrices are obtained by the X_cur∈ℝ^H × W × C and Y∈ℝ^H × W × 2C, respectively. To enhance feature control and promote the development of sophisticated image attributes, we introduce a ReLU non-linearity function before the softmax normalization. The ReLU non-linearity function applies sparse constraints to the cross-stage attention map promotes the model's focus on the most informative regions and mitigates the influence of noisy or irrelevant features. To transform features, we use two parallel 1 × 1 convolutions and 3 × 3 convolutions to the feature map. Subsequently, a SimpleGate activation function <cit.> multiplies one of the branches to regulate the flow of complementary features and facilitate feature transformation. To be specific, when provided with an input tensor X ∈ℝ^H × W × C, the FFN can be expressed as X̂ = W_p^0Gating(X) + X Gating(X) = ϕ (W_d^1W_p^1(LN(X))) ⊙ W_d^2W_p^2(LN(X)) where the symbol ⊙ denotes element-wise multiplication, ϕ refers to the GELU non-linearity function, and LN denotes layer normalization <cit.>. §.§ Implementation Details This article focuses on RSISR at three magnification factors: ×2, ×3, and ×4. During the training, we randomly sample LR remote sensing images and their corresponding HR reference windows as 48 × 48 windows. 
To augment the training samples, we apply random rotations (90°, 180°, and 270°) and horizontal flipping. The proposed SPIFFNet consists of 10 blocks, each of which is designed with a local window size and a global window size of 16 and a feature dimension of 64. Additionally, the SPIFFNet employs 4 attention heads. Further details and experimental analyses are presented in Section IV. We employ the Adam optimizer <cit.> for model optimization, setting β_1 = 0.9, β_2 = 0.99, and ε = 10^-8. We set the initial learning rate to 4 × 10^-4 and the mini-batch size to 8. We train the model for a total of 2000 epochs, gradually reducing the learning rate to 5 × 10^-7 at epoch 2000 using the cosine annealing schedule <cit.>. § EXPERIMENTAL RESULTS AND ANALYSES §.§ Experimental Datasets and Metrics 1) Datasets: This study employs two publicly available remote sensing datasets, namely UCMecred <cit.> and AID <cit.>. The UCMerced dataset comprises 21 classes, each containing 100 remote sensing scenes images that have dimensions of 256 × 256 pixels. We partitioned the dataset into two subsets: one for training purposes and the other for testing. Each subset comprises 1050 images. The AID dataset consists of 10,000 images representing 30 classes of remote sensing scenes which have dimensions of 600 × 600 pixels. In the case of the AID dataset, 80% of the total dataset is randomly assigned as the training set, while the remaining images are allocated for testing. 2) Metrics: We select PSNR and SSIM <cit.> as the evaluation metrics for RSISR, and assess all super-resolution results on the RGB channels. §.§ Ablation Studies We conducted experiments in this section to validate the components of our method. All experiments were performed using the same experimental setup, with the UCMereced dataset and a uniform magnification factor of 4 was applied. We start with a naive baseline by removing both components. Then we add CSPIA and CSFFA to the baseline, respectively. At last, both components are employed to compose our final version of method. The results are reported in Table <ref>. 1) Effects of CSPIA: Table <ref> summarizes the results of this ablation study. We can see that the model with CSPIA significantly outperforms the baseline model, indicating the effectiveness of the expandable window mechanism in capturing both global and local features. To better understand the main reason of the improvement brought by CSPIA, we utilize LAM <cit.> to visualize the effective receptive field of a input window. As shown in Fig. <ref>, the window benefits from a global range of useful pixels by using CSPIA. The results indicate the effectiveness of the proposed CSPIA in improving PSNR and SSIM performances. 2) Effects of CSFFA: Table <ref> summarizes the results of this ablation study. The model incorporating CSFFA demonstrates a significant performance improvement over the baseline model, providing strong evidence of the effectiveness of CSFFA. §.§ Comparisons with Other Methods This section presents a comparative analysis of the proposed method with several deep learning-based SR methods, namely SRCNN <cit.>, VDSR <cit.>, LGCNet <cit.>, DCM <cit.>, HSENet <cit.>, and TransENet <cit.>. 1) Quantitative Results on UCMerced Dataset: The performance of these approachs on the UCMerced dataset is reported in Table <ref>. The best result is indicated in bold font. Notably, some results have been reported in multiple published articles <cit.>, <cit.>. 
To ensure consistency, we retrained these comparison methods using the open-source code, subjecting all methods to the same testing conditions. The results demonstrate that our SPIFFNet achieves the highest values in terms of PSNR and SSIM. Table <ref> provides a summary of the PSNR for each class in the UCMeced dataset at an upscale factor of 3. We observed that SPIFFNet significantly outperforms HSENet <cit.> and TransENet <cit.> on the buildings, harbors and parking lot classes, which require precise local context information for object discrimination and image detail reconstruction. These notable results serve as further evidence of the effectiveness of our proposed method. 2) Quantitative Results on AID Dataset: We conducted additional experiments on the AID dataset to further validate the effectiveness of SPIFFNet. Table <ref> presents the PSNR and SSIM results of SPIFFNet compared to other methods on this dataset. Compared to other methods, SPIFFNet achieves the best results. Furthermore, Table <cit.> presents the test results for each category at a magnification factor of 4 on the AID dataset. The consistent superior performance of SPIFF in various scenarios further demonstrates the effectiveness of our approach. 3) Qualitative Results: Fig. <ref> displays several super-resolved examples from the UCMerced dataset, such as scenes depicting "agricultural", "buildings", and "overpass". Similarly, Fig. <ref> showcases examples from the AID dataset, including scenes of "playground" and "parking". Our method demonstrates superior performance compared to other methods in challenging areas such as texture and edge, as evident from the visual results. This observation provides further evidence of the effectiveness of our approach. § CONCLUSION This paper introduces a novel transformer-based method called Cross-Spatial Pixel Integration and Cross-Stage Feature Fusion Based Transformer Network (SPIFFNet) for RSISR. The aim of SPIFFNet is to enhance the global perception ability of local windows by introducing context information and to improve the representation ability of features by integrating cross-stage features. SPIFFNet consists of two key components: CSPIA, which introduces context information into the reconstruction of local windows to enhance global awareness, and CSFFA, which enables adaptive aggregation of features across different stages of the network, resulting in more effective information fusion and superior super-resolution performance. We conducted extensive experiments on benchmark datasets to validate the effectiveness of SPIFFNet. IEEEtran [ < g r a p h i c s > ]Yuting Lu is pursuing the Ph.D. degree in control science and engineering from the Northwestern Polytechnic University,Xi’an, China. His research interests include image super-resolution and computer vision. [ < g r a p h i c s > ]Lingtong Min received the B.S. degree from Northeastern University, Shenyang, China, in 2012, and the Ph.D. degree from Zhejiang University, Hangzhou, in 2019. He is an Associate Professor with Northwestern Polytechnical University. His main research interests are computer vision, pattern recognition, and remote sensing image understanding. [ < g r a p h i c s > ]Binglu Wang (M’21) received the Ph.D. degree in Control Science and Engineering with the School of Automation at Northwestern Polytechnic University, Xi'an, China, in 2021. He is currently a Post-doctoral with the Department of Electrical Engineering, Beijing Institute of Technology, Beijing, China. 
His research interests include Computer Vision, Digital Signal Processing and Deep Learning. [ < g r a p h i c s > ]Le Zheng (Senior Member, IEEE) received the B.Eng. degree from Northwestern Polytechnical University (NWPU), Xi’an, China, in 2009 and Ph.D degree from Beijing Institute of Technology (BIT), Beijing, China in 2015, respectively. He has previously held academic positions in the Electrical Engineering Department of Columbia University, New York, U.S., first as a Visiting Researcher from 2013 to 2014 and then as a Postdoc Research Fellow from 2015 to 2017. From 2018 to 2022, he worked at Aptiv (formerly Delphi), Los Angeles, as a Principal Radar Systems Engineer, leading projects on the next-generation automotive radar products. Since July 2022, he has been a Full Professor with the School of Information and Electronics, BIT. His research interests lie in the general areas of radar, statistical signal processing, wireless communication, and high-performance hardware, and in particular in the area of automotive radar and integrated sensing and communications (ISAC). [ < g r a p h i c s > ]Xiaoxu Wang(M’10) received the M.S. and Ph.D. degrees from the School of Automation, Harbin Engineering University, Harbin, China, in 2008 and 2010, respectively. He was as a Postdoctoral Researcher from 2010 to 2012, and as an Associate Professor from 2013 to October 2018 in Automation School of Northwestern Polytechnical University. He is currently a professor with the Northwestern Polytechnical University. His main research interests include deep learning, inertial navigation and nonlinear estimation. [ < g r a p h i c s > ]Yongqiang Zhao(M’05) received the B.S., M.S., and Ph.D. degrees in control science and engineering from the Northwestern Polytechnic University,Xi’an, China.From 2007 to 2009, he was as a Post-Doctora Researcher with McMaster University, Hamilton, ON, Canada, and Temple University, Philadelphia,PA, USA. He is currently a Professor with the Northwestern Polytechnical University. His research interests include polarization vision, hyperspectral imaging, and pattern recognition. [ < g r a p h i c s > ]Teng Long (Fellow IEEE) was born in Fujian, China, in 1968. He received the M.S. and. Ph.D. degrees in electrical engineering from the Beijing Institute of Technology, Beijing, China, in 1991 and 1995, respectively. He was a Visiting Scholar with Stanford University, California, in 1999, and University College London, in 2002. He has been a Full Professor with the Department of Electrical Engineering, Beijing Institute of Technology, since 2000. He has authored or co-authored more than 300 articles. His research interests include synthetic aperture radar systems and real-time digital signal processing, with applications to radar and communication systems. Dr. Long is a Fellow of the Institute of Electronic and Technology and the Chinese Institute of Electronics. He was the recipient of many awards for his contributions to research and invention in China. He has been a member of the Chinese Engineering Academy since 2021.
http://arxiv.org/abs/2307.01887v1
20230704191101
Binary differential equations associated to congruences of lines in Euclidean 3-space
[ "J. W. Bruce", "F. Tari" ]
math.DG
[ "math.DG" ]
propProposition[section] theo[prop]Theorem cor[prop]Corollary ex[prop]Example exs[prop]Examples defn[prop]Definition rem[prop]Remark rems[prop]Remarks lem[prop]Lemma proof Proof acknow Acknowledgments. example Example. 𝔾 𝔼 ℝ ℂ 𝕃 𝕂 ℙ 𝕊 ℍ 𝕄 𝕋 𝕌 𝕍 α β̱ γ̧ δ̣ ϵ łλ μ øω σ τ κ̨ G Δ E ØO Binary differential equations associated to congruences of lines in Euclidean 3-space J. W. Bruce and F. Tari Accepted. Received; in original form ===================================================================================== We study quotients of quadratic forms and associated polar lines in the projective plane. Our results, applied pointwise to quadratic differential forms, shed some light on classical binary differential equations (BDEs) associated to congruences of lines in Euclidean 3-space and allows us to introduce a new one. The new BDE yields a new singular surface in the Euclidean 3-space associated to a congruence of lines. We determine the generic local configurations of the above BDEs on congruences. [2010 Mathematics Subject classification: 53A05, 34A09, ] [Key Words and Phrases. Congruences of lines, binary differential equations, quadratic forms, polarity in the projective plane, singularities.] § INTRODUCTION Let denote the space of lines in real Euclidean 3-space; the geometry of submanifolds of is well studied, with contributions from such luminaries as Hamilton, Dupin, Study and Blaschke. The set can be modelled by the tangent bundle to the unit sphere T^2⊂^2×^3, with a point (n,x) corresponding to the line x+tn, although this does depend on an arbitrary choice of origin. In this paper we consider generic surfaces in , known classically as congruences, and some associated BDEs. We classify the singularities of these BDEs, relating them to other aspects of the geometry. We shall suppose that the congruence Z⊂ is a smooth regular surface. Classically, congruences were specified by a smooth immersion (n,x):U→^2×^3 with a point (n(u,v),x(u,v)) determining the line x(u,v)+tn(u,v); x(U) is called the director surface or directrix. The directrix is clearly not unique: if f:U→ is any smooth function the map (n,x+fn) represents the same family of lines. In what follows we will only be interested in local properties of congruences so we are considering germs of mappings (n,x):^2,0→^2×^3, though sometimes we specify a domain, that is an open set U⊂^2. There are a range of ways to study the geometry of congruences, but because we need to carry out explicit calculations we prefer the Gaussian approach, employed for example in <cit.>, using quadratic forms, that is families of quadratic differential forms Q:T_zZ→ which depends smoothly on z∈ Z. Clearly a curve on Z through a point z yields the germ of a ruled surface in ^3. Some properties of these ruled surfaces depend only on the direction of the tangent to the curve and classically these were used, as below, to pick up distinct directions in T_zZ and to associate to Z some generally singular surfaces in ^3. For example consider a curve γ=(n,x) : I→ Z⊂ with 0∈ I an open interval and γ(0)=(n(0),x(0) )= L. We have an associated ruled surface parametrised by (t,s)↦ x(t)+sn(t) ⊂^3. On the line L there is a central point, in antiquated terminology the foot of the common perpendicular of consecutive lines. These central points sweep out the striction curve of the ruled surface. We shall denote by r=-(x'· n'/n'· n')(0) the signed distance from x(0) to the central point in the n(0) direction. 
Clearly the central point only depends on the tangent to the curve at z. There is also a classical invariant λ=[x',n,n']/n'· n', called the distribution parameter or pitch of the ruled surface, where [-,-,-] denote the usual scalar triple product in ℝ^3. Clearly this only depends on the point γ(0) and γ'(0). When this is identically zero we have a developable surface, generally the set of tangent lines to a smooth space curve C, and the line of striction here is the edge of regression, that is C which does not depend on the choice of directrix. We have the following concepts (see for example, <cit.>). * The directions in T_zZ along which the values of r are extreme are called principal directions; as we shall see generally there are two such directions. The corresponding central points on L∈^3 are called boundary points. * A point z is called an umbilic point if every direction in T_zZ is a principal direction. The integral curves on Z of the principal directions are the principal curves and the surfaces swept out by the boundary points in ^3 are called the boundary surfaces. * The surface in ^3 swept out by the midpoints of the boundary points is called the middle surface. * There are 0, 1 or 2 directions in T_zZ, called torsal directions, for which the pitch λ of any associated ruled surface vanishes (infinitesimally we have a developable surface in ℝ^3). The associated central points on the ray L⊂ℝ^3 are called foci. The point z∈ Z is called a parabolic point when the two torsal directions coincide. The surface in ^3 swept out by the foci is called a focal surface. At any point the midpoint of the two foci (which is always real even if the two focal points are imaginary, see <cit.>) is also the midpoint of the boundary points. * For each z∈ Z there are at most two directions in T_zZ, called mean directions, along which the values of the pitch is extreme. Their associated points on L are precisely the midpoints of the focii (or boundary) points. The principal, torsal and mean directions are solutions of BDEs. These are equations of the form a(u,v)dv^2+2 b(u,v)dvdu+ c(u,v)du^2=0, where a, b, c are smooth functions on some open set U⊂^2. Clearly any quadratic differential form ω on Z yields a BDE, given by ω=0. We establish the generic local configurations of the solutions of the above BDEs, and a new BDE associated to Z called the characteristic BDE. The new BDE yields a new singular surface in the Euclidean 3-space associated to Z (<ref>). In <ref> we see how the above BDEs are related using the approach in <cit.>, where the elementary geometry of binary quadratic forms casts light on the geometry of BDEs (see Figure <ref>). The relations between the above BDEs can be deduced from a more general result on quotient of quadratic forms given in <ref>. § PRELIMINARIES We denote the BDE (<ref>) by ω=( a,2 b, c) and refer to a, b, c as the coefficients of the BDE. Such an equation determines a pair of distinct directions at each point in the region where δ= b^2- ac>0 and no direction at points where δ<0. The set δ=0 is the discriminant of the BDE. If the coefficients of the BDE do not all vanish at a point (u,v) on the discriminant, then it determines a unique (double) direction there; if they do, then all directions at (u,v) are considered solutions. The solution curves of a BDE form a pair of transverse foliations in the region . These foliations together with the discriminant curve constitute the so-called configuration of the BDE. 
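As a concrete check of the two invariants introduced above, the following SymPy computation evaluates the signed distance r to the central point and the pitch λ for the classical helicoid, taking x(t)=(0,0,bt) and n(t)=(cos t, sin t, 0). The example curve is our own illustrative choice, not one from the paper.

import sympy as sp

t, b = sp.symbols('t b', real=True)
x = sp.Matrix([0, 0, b*t])                  # directrix along the axis
n = sp.Matrix([sp.cos(t), sp.sin(t), 0])    # unit direction field
xp, np_ = x.diff(t), n.diff(t)

r = -(xp.dot(np_)) / (np_.dot(np_))                         # signed distance to the central point
lam = sp.Matrix.hstack(xp, n, np_).det() / (np_.dot(np_))   # pitch [x', n, n'] / n'.n'

print(sp.simplify(r))    # 0 : the striction curve is the axis itself
print(sp.simplify(lam))  # b : constant pitch, as expected for the helicoid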
Two BDEs are said to be topologically equivalent if there is a local homeomorphism in the plane which maps the configuration of one equation to that of the other. Let q be a regular point on the discriminant curve. If the unique solution of ω at q is transverse to the discriminant curve, then the local configuration of the BDE is a family of cusps (see <cit.> for references and Figure <ref>). When the direction is tangent to the discriminant curve, there are three generic (i.e., stable) local topological models: folded saddle, folded node and folded focus (<cit.>, Figure <ref>). If we write p=dv/du and the 2-jet of the BDE in the form a_0p^2+(b_0+b_1u+b_2v)p+c_1u+c_2v+c_3u^2+c_4uv+c_5v^2, then according to Lemma 2.1 in <cit.>, the origin is a fold singularity if a_0 0 and b_0=c_1=0 (when b_0=0 and a_0c_1 0 we get a family of cusps). At a folded singularity, setting λ=(4a_0c_3- b_1^2-b_1c_2)/(4c_2^2), the singularity is of type folded saddle if λ<0, folded node if 0<λ<1/16 and folded focus if λ>1/16. At a point where all the coefficients of the BDE vanish, the discriminant is singular. Our interest here is when the singularity is of type A_1^+ (i.e., δ∘ h^-1(u,v)=± (u^2+v^2) for some germ of a diffeomorphism h). Then the BDE has three generic possible configurations as shown in Figure <ref>, classified in (<cit.>), called the star, monstar or lemon (here, by generic we mean within the set of BDEs with coefficients vanishing at the origin and whose discriminants have an A_1^+-singularity). If we write the 1-jet of the coefficients of the BDE at the origin in the form j^1ω=(a_1u+a_2v,2b_1u+2b_2v,c_1u+c_2v), then according to <cit.>, at an A_1^+-singularity of the discriminant we get a star or monstar (resp. lemon) if the cubic ϕ(p)=a_2p^3+(2b_2+a_1)p^2+(2b_1+c_2)p+c_1 has three distinct roots (resp. one root). The cases star and monstar are distinguished by the signs of α(p_i)ϕ'(p_i) at the roots of ϕ, where α(p)=a_2p^2+(b_2+a_1)p+b_1. If all of them are positive we get a star, otherwise we get a monstar. The star, monstar and lemon singularities of BDEs are not stable within the set of all BDEs (<cit.>). However, those that appear here associated to congruences are stable in that context. We parametrise locally a congruence by (n,x):U→^2×ℝ^3, where U is an open set in ℝ^2 and identify germs of congruences with germs of mappings C^∞(U,^5). We endow the set C^∞(U,^5) with the Whitney C^∞-topology and say that a property of congruences is generic if it is satisfied in an open and dense subset of C^∞(U,^5). To prove that a property is generic we consider the map Φ: U↦ J^k(2,5) given Φ(u,v)=j^k_(u,v)(n,x). A given property is usually represented by a real algebraic variety V in J^k(2,5) with a smooth stratification. Thom's transversality theorem asserts that for generic congruences, Φ is transverse to the stratification of V. This means, in particular, that a property defined by more than three independent conditions, that is the codimension of V>3, is not generic. § QUOTIENTS OF QUADRATIC FORMS Some key properties of the special directions on congruences of lines and their associated BDEs given in the introduction can be derived from a more general result below on quotients of quadratic forms. (1) Let q_i=a_i ^2+2b_i+̱c_i^̱2, i=1,2, be two quadratic forms and consider the quotient f(,)̱=q_2(,)̱/q_1(,)̱, defined off the set q_1(,)̱=0. 
There are two local (global if q_1 is positive definite) extremal sets of points of the function f which lie along the rays (t,t)̱ where (,)̱ are solutions of the quadratic equation | [ ^̱2 -^2; a_1 b_1 c_1; a_2 b_2 c_2 ]|=0. This expression is classically known as the Jacobian of q_1,q_2, written Jac(q_1,q_2) that is the determinant of the 2× 2 matrix of partial derivatives of q_1,q_2. These directions coincide if and only if q_1 and q_2 have a common root. (2) Representing q_i by symmetric matrices E_i=([ a_i b_i; b_i c_i ]) and writing w=(,)̱^T, the directions in (1) are the solutions of E_2 w=μ E_1 w. The corresponding values of μ, denoted μ_1, μ_2, are real if E_1 is positive or negative definite. (3) The directions (_1,_̱1), (_2,_̱2) corresponding to μ_1, μ_2 are orthogonal with respect to q_2. With respect to the same form they bisect the directions given by q_1=0 when these are real; this only occurs when q_2 is positive definite. (1) Taking an affine chart the extremal points of f(,1) are the solutions of (a_1+b_1)(a_2^2+2b_2+c_2)-(a_2+b_2)(a_1^2+2b_1+c_1)=0, equivalently, (a_1b_2-b_1a_2)^2+(a_1c_2-c_1a_2)+(b_1c_2-c_1b_2)=0, which can be written in the determinantal form as stated. It is easy to see that the solutions coincide if and only if q_1, q_2 have a common root. (2) It is easy to check that if E_2 w=μ E_1 w then (, )̱ is a rootof the quadratic form Jac(q_1,q_2)=0 and conversely. The second part follows from the usual argument: if E_2w=μ E_1 w, then taking complex conjugates E_2w̅=μ̅ E_1 w̅ so wE_2w̅=μ̅w E_1 w̅ and w̅E_2 w=μw̅E_1 w. So (μ-μ̅)(w̅E_1 w)=0 and the second factor is zero if and only if w=0. (3) If q_2 has rank 2 we can assume it is ± (^2+^̱2) or $̱. In the first case we know we can reduceq_1=a^2+c^̱2, andJac(q_1,q_2)=4(a-c)$̱ the results easily follow. In the second case we can clearly suppose that q_1=a^2+c^̱2. Now Jac(q_1,q_2)=2(a^2-c^̱2) and if ac≥ 0 clearly the two directions given by Jac(q_1,q_2)=0 are orthogonal with respect to q_2. Note that q_1=0 then has no roots. There is a classical (elementary) geometry of quadratic forms used in the study of BDEs in <cit.>. A non-zero quadratic form aα ^2+2b αβ+cβ ^2 can be represented by the point q=(a:2b:c) in the projective plane ^2. In ^2 the set of singular quadratic forms is the conic Γ={q: b^2-ac=0}. The polar line q of a point q with respect to Γ is the line that contains all points p such that q and p are harmonic conjugate points with respect to the intersection points R_1 and R_2 of the conic Γ and a variable line through q. If the polar line q meets Γ, then the tangents to Γ at the points of intersection meet at q. A point (a_1:2b_1:c_1) is on the polar line of a point p=(a:2b:c) if and only if 2bb_1-ac_1-a_1c=0. Three points in the projective plane are said to form a self-polar triangle if the polar line of any vertex of the triangle is the line through the remaining two points. In our case the points represent quadratic forms, so any vertex of the self-polar triangle is the Jacobian of the remaining two vertices. § BDES ON CONGRUENCES A quadratic differential form ω(du,dv)= a(u,v)dv^2+2 b(u,v)dvdu+ c(u,v)du^2 on a congruence Z determines a BDE ω=0 on Z. So the associated BDEs of two quadratic differential forms ω_1,ω_2 yields a Jacobian BDE Jac(ω_1,ω_2)=0; the formula gives a quadratic form at every point (u,v). This yields, at each point, the directions in which the quotient ω_2/ω_1 has extrema. 
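The pointwise content of this statement is easy to verify symbolically. The following SymPy sketch, with arbitrary illustrative coefficients of our own choosing, checks that the critical directions of the quotient q_2/q_1 coincide with the roots of Jac(q_1,q_2):

import sympy as sp

al, be = sp.symbols('alpha beta', real=True)
a1, b1, c1, a2, b2, c2 = 1, 2, 7, 3, -1, 2     # q_1 is positive definite for these values

q1 = a1*al**2 + 2*b1*al*be + c1*be**2
q2 = a2*al**2 + 2*b2*al*be + c2*be**2

# Jacobian of the pair: determinant of the 2x2 matrix of partial derivatives
jac = sp.expand(sp.diff(q1, al)*sp.diff(q2, be) - sp.diff(q1, be)*sp.diff(q2, al))

# critical points of f = q_2/q_1 in the affine chart beta = 1
f = (q2 / q1).subs(be, 1)
print(sp.solve(sp.Eq(sp.diff(f, al), 0), al))   # extremal directions of the quotient
print(sp.solve(jac.subs(be, 1), al))            # roots of Jac(q_1, q_2): the two lists agree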
Note that Theorem <ref> shows that Jac(ω_1,ω_2)=0 has a single solution at points where the resultant of ω_1, ω_2 vanishes. Moreover the directions determined by Jac(ω_1,ω_2)=0 are orthogonal with respect to ω_1 or ω_2 and bisect the directions ω_1=0 (resp. ω_2=0) with respect to the possibly indefinite metric ω_2 (resp. ω_1). So for example if we consider a smooth surface in ℝ^3 with first fundamental form ω_1 and second fundamental form ω_2 then the Jacobian BDE Jac(ω_1,ω_2)=0 determines the principle directions, which from above are orthogonal, with the asymptotic directions at hyperbolic points bisecting those directions. For a congruence Z⊂ there are three key quadratic forms Q_1, Q_3, Q on the tangent spaces T_zZ, with Q only defined up to a multiple of Q_1, from which two other (well-defined) quadratic forms can be constructed (see <cit.>). (1) Define Q_1, Q, Q_3:T_zZ→ by Q_1(du,dv)=|| n_u du+ n_v dv||^2, Q(du,dv)=(x_u du+x_v dv)·(n_u du +n_v dv), Q_3(du,dv)=[x_u du+x_v dv,n_u du+n_v dv,n], so that [ Q_1(du,dv) = n_v· n_v dv^2+2n_u· n_vdudv+n_u· n_udu^2,; Q(du,dv) = n_v· x_v dv^2+(n_u· x_v+n_v· x_u)dudv+n_u· x_udu^2,; Q_3(du,dv) = [x_v, n_v,n] dv^2+([x_u, n_v,n]+[x_v, n_u,n])dudv+[x_u, n_u,n]du^2. ] We write [ A=n_u· n_u, B=n_u· n_v, C=n_v· n_v,; a=-n_u· x_u, b_1=-n_u· x_v, b_2=-n_v· x_u, c=-n_v· x_v,; b=-1/2(b_1+b_2), b̅=-1/2(b_1-b_2). ] (2) Define Q_2, Q_4:T_zZ→ by Q_2= Jac(Q,Q_1) and Q_4=Jac(Q_3,Q_1). At each point z∈ Z, we shall write Q_i for Q_i(du,dv). (1) Q_1, Q_2, Q_3, Q_4 are all well defined. (2) The quadratic form Q_2 is given by [ Q_2 = | [ dv^2 -dudv du^2; A B C; a b c ]|; ; = (Bc-Cb)dv^2+(Ac-Ca)dudv+(Ab-Ba)du^2, ] and the quadratic form Q_3 is a non-zero multiple of [ Q_3 = (Bc-Cb-Cb̅)dv^2+(Ac-Ca-2Bb̅)dudv+(Ab-Ba-Ab̅)du^2; = Q_2-b̅Q_1. ] (3) The quadratic form Q_4=Jac(Q_3,Q_1)=Jac(Q_2,Q_1) is given by [ Q_4 = | [ dv^2 -dudv du^2; A B C; Ab-Ba 1/2(Ac-Ca) Bc-Cb ]| ], alternatively, [ Q_4 = (2B^2c-2BCb-ACc+C^2a)dv^2+2(ABc-2ACb+BCa)dudv+; (2B^2a-2ABb+A^2c-ACa)du^2. ] (1) Recall that we may replace x:U→^3 by x+fn for any function f:U→. This replaces Q by Q+fQ_1, and clearly Jac(q_2+cq_1,q_1)=Jac(q_2,q_1) for any quadratic forms q_1, q_2 and constant c. Similarly Q_3 is replaced by [ (x_u+f_un+fn_u)+(̱x_v+f_vn+fn_v), n_u+ṉ_v.n] which clearly yields the same value. (2) The expression of Q_2 follows from the fact that it is Jac(Q,Q_1). For Q_3, we first observe that all coefficients of Q_3 vanish at points where n is singular. Indeed, for a generic congruence, the singular set of n is a regular curve and along it n_v=α n_u (or n_v=α n_u) for some scalar function α. Differentiating [x_u, n,n]≡ 0 and [x_v, n,n]≡ 0 along the curve proves the claim. Suppose now that n is not singular. Then n=n_u∧ n_v/||n_u∧ n_v||. We are interested in the solutions of Q_3=0, so do not distinguish differential forms that are non-zero multiples of each other and multiply Q_3 by ||n_u∧ n_v||. Then [ [ x_u, n_u, n_u∧ n_v] = x_u.( n_u∧( n_u∧ n_v))=x_u.(( n_u. n_v) n_u-( n_u. n_u). n_v)); = Ba-A x_u. n_v=Ba-Ab+Ab; ] Similarly, [ [ x_u, n_v, n_u∧ n_v] = x_u.( n_v∧( n_u∧ n_v))=x_u.(( n_v. n_v) n_u-( n_v. n_u). n_v)); = Ca-Bb+Bb,; ] [ [ x_v, n_u, n_u∧ n_v] = x_v.( n_u∧( n_u∧ n_v))= x_v.(( n_u. n_v) n_u-( n_u. n_u). n_v)); = Bb+Bb-Ac,; ] and [ [ x_v, n_v, n_u∧ n_v] = x_v.( n_v∧( n_u∧ n_v))=x_v.(( n_v. n_v) n_u-( n_v. n_u). n_v)); = Cb+Cb-Bc.; ] It follows that Q_3 is a non-zero multiple of (Bc-bC-Cb̅)dv^2+(Ac-Ca-2Bb̅)dudv+(Ab-Ba-Ab̅)du^2, and we shall take this as our Q_3 from now on, so that Q_3=Q_2-bQ_1. 
Part (3) is a straightforward calculation. Interpreting, as previously, the forms (at each point of Z) as elements of the real projective plane ^2, note that although Q is not well defined the line joining Q to Q_1 is, and is the polar line Q_2 of Q_2=Jac(Q,Q_1). Then Q_3=Q_2-bQ_1 is on the line joining Q_1 and Q_2 which is the polar line Q_4 of Q_4=Jac(Q_3,Q_1)=Jac(Q_2,Q_1). It is not hard to see that this lies on the line joining Q to Q_1, so we have a configuration like that in Figure <ref>. We also get a new quadratic form Q_5=Jac(Q_3,Q_4) which we deal with in more details in <ref>. We prove in Theorem <ref> (resp. Theorem <ref>) that Q_1,Q_2,Q_4 (resp. Q_3,Q_4,Q_4) are vertices of a self-polar triangle. As in the introduction a curve (̧s)=(n,x)(u(s),v(s)) through a point z=(̧0) on Z represents a ruled surface in ℝ^3. Along any generator of a ruled surface the central point on L is the directed distance r=-Q((̧0))(u'(0),v'(0))/Q_1((̧0))(u'(0),v'(0)) from x(0), which evidently only depends on the derivative '̧(0) and (̧0). As the tangent direction rotates in T_zZ the central point moves up and down L. These ruled surfaces also have a parameter of distribution or in the terminology of Blaschke a pitch which is constant along a generator and again only depends on '̧(0). The pitch of the ruled surface (n,x)(u(s),v(s)) at s=0 is λ=Q_3((̧0))(u'(0),v'(0))/Q_1((̧0))(u'(0),v'(0)). We can now describe the geometry behind the quadratic forms above using Theorem <ref>. (1) The principal directions: the directions in which the distance from the director surface to the central point has extreme value are given by the BDE Jac(Q,Q_1)=Q_2=0. (2) The torsal directions: the directions in which the ruled surfaces are (infinitesimally) developable are given by Q_3=0. (3) The mean directions: the pitch is Q_3/Q_1=Q_2/Q_1-b̅ so the directions in which the pitch has its maximal values are given by Q_4=Jac(Q_2,Q_1). We determine the generic local topological configurations of the BDEs in Proposition <ref> and introduce a new BDE determining the so-called characteristic directions. We deal in <ref> with the pairs determined by the geometry of the central points and in <ref> with those determined by the pitch. These BDEs are discussed in <cit.>; our approach reveals the way they are related at each point via polarity in the projective plane. We also obtain the generic configurations of the solution curves of the BDEs at their singular points and on the singular set Σ(n) of the map n. As our study is local in nature, in the rest of the paper we take the line of interest L to be the oriented z-axis and parametrise Z locally at z by (u,v)↦ (n(u,v), x(u,v)), with (u,v) in a neighbourhood U of the origin in ^2. We have n(0,0)=(0,0,1) and take the directrix x(u,v)=(x_1(u,v),x_2(u,v),0) with j^3x_1 =α_10u+α_11v+α_20u^2+α_21uv+α_22v^2 +∑_i=0^3α_3iu^3-iv^i, j^3x_2 =β_10u+β_11v+β_20u^2+β_21uv+β_22v^2+ ∑_i=0^3β_3iu^3-iv^i. The map n:U→ S^2 is a local diffeomorphism on U∖Σ(n) where, for generic congruences, Σ(n) is empty or a regular curve. The map n has locally a fold singularity at most points on Σ(n) and a cusp singularity at isolated points on that curve. With the notation in Definition <ref>, (u,v)∈Σ(n) if and only if (B^2-AC)(u,v)=0, that is, Q_1 is degenerate. To simplify notation, we shall omit mention of selecting (u,v)∈ U, but implicitly we are referring to the quadratic forms and projective plane at each point (n(u,v), x(u,v))∈ Z. 
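The coordinate computations used repeatedly in the proofs below can also be scripted. The following SymPy sketch forms the coefficients A, B, C, a, b, c and the principal-direction BDE Q_2 = Jac(Q,Q_1) in the local parametrisation just fixed; the truncation of x to its 1-jet and the uniform sign convention for the coefficients of Q (which does not affect the zero set of Q_2) are simplifications of ours.

import sympy as sp

u, v = sp.symbols('u v', real=True)
a10, a11, b10, b11 = sp.symbols('alpha10 alpha11 beta10 beta11', real=True)

n = sp.Matrix([u, v, sp.sqrt(1 - u**2 - v**2)])
x = sp.Matrix([a10*u + a11*v, b10*u + b11*v, 0])

nu, nv, xu, xv = n.diff(u), n.diff(v), x.diff(u), x.diff(v)

A, B, C = nu.dot(nu), nu.dot(nv), nv.dot(nv)            # coefficients of Q_1
a = -nu.dot(xu)                                         # coefficients of Q (one consistent sign choice)
b = -(nu.dot(xv) + nv.dot(xu)) / 2
c = -nv.dot(xv)

# coefficients of Q_2 = Jac(Q, Q_1): (Bc - Cb) dv^2 + (Ac - Ca) du dv + (Ab - Ba) du^2
Q2 = [B*c - C*b, A*c - C*a, A*b - B*a]
print([sp.simplify(q.subs({u: 0, v: 0})) for q in Q2])
# all three constant terms vanish precisely when beta10 = -alpha11 and beta11 = alpha10,
# the umbilic condition used in the proofs below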
§.§ BDEs associated to central points As above the principle directions are given by the BDE Q_2=0; its integrals curves are the lines of principal curvature; point z on Z is called an umbilic point if the all the coefficients of Q_2 vanish at z. We observe that umbilic points are stable on a generic congruence, that is, they persists when deforming the congruence. Suppose that Q_1 is not degenerate, that is, n is not singular. If μ_1, μ_2 are the values described in Theorem <ref>(2), the discriminant function δ(Q_2) of Q_2 is given by δ(Q_2)=4(B^2-AC)^2(H^2-K), where [ H=1/2(μ_1+μ_2), K=μ_1μ_2, ] so δ(Q_2) vanishes if and only if H^2-K=0, that is precisely at umbilic points. These are points where all the coefficients of Q_2 vanish, that is, where Q_1 and Q are linearly dependent. Note that because of the ambiguity in the choice of Q, μ_1 and μ_2 are only defined up to the addition of the (same) constant, that is only |μ_1-μ_2| is well defined, and so is H^2-K=(μ_1-μ_2)^2/4. (1) Away from Σ(n) and umbilic points, the lines of principal curvature form a Q_1-orthogonal net. The three generic configurations of BDEs of Morse Type A_1^+ (Figure <ref>) can occur at umbilic points on generic congruences and only these (see also <cit.>). (2) For each (u,v)∈ U∖Σ(n) the quadratic form Q_2∈^2 is the unique point on the polar line Q of Q that has Q_1-orthogonal roots. It is the intersection of the polar lines Q_1 and Q (all forms evaluated at (u,v)). (3) On Σ(n), Q_2 factors as a product of two 1-forms σ_i, i=1,2. At fold singularities of n and at most points on Σ(n), the two 1-forms are regular and their foliations have 2-point contact along Σ(n). Their common tangent direction is along the kernel of dn, so they are transverse to Σ(n) (Figure <ref> left). The pair (σ_1,σ_2) is topologically equivalent to (du, d(u-v^2)). At isolated fold points on Σ(n), one of the 1-forms is regular and the other has generically a saddle, node or focus singularity with eigenspaces when real transverse to Σ(n) (Figure <ref> middle three figures). The pair (σ_1,σ_2) is topologically equivalent to [ (du,(v-u)du+vdv), ,; (du,(v+1/8 u)du+vdv), ,; (du,(u+v)du+vdv), . ] At a cusp singularity of n, the two 1-forms are regular and their leaves have 3-point contact at the singular point of n and are tangent to Σ(n) (Figure <ref>, right). The pair (σ_1,σ_2) is topologically equivalent to (du, d(u+vu^2+v^3)). (1) For generic congruences, the umbilic points are not on Σ(n) (three independent conditions need to be satisfied for that to happen), so at umbilic points, we can presume that n is non-singular, that is Q_1 is positive definite, and we can take n(u,v)=(u,v,√(1-u^2-v^2)). Taking j^3x_i,i=1,2, as in (<ref>) we find that the 1-jets of the coefficients of Q_2 are [ j^1(Bc-Cb) = 1/2(α_11 +β_10+ (α_21 + 2β_20)u + (2α_22 + β_21)v),; j^1(Ac-Ca) = α_10-β_11 + (2α_20 - β_21)u + (α_21 - 2β_22)v,; j^1(Ab-Ba) = 1/2(α_11 + β_10 + (α_21 + 2β_20)u) + (2α_22 + β_21)v).; ] Then the origin is an umbilic point if and only if β_10=-α_11 and β_11=α_10. In that case, j^1Q_2=(a_1u+a_2v,b_1u+b_2v,-a_1u-a_2v), with [ [ a_1 = 1/2α_21+β_20,; b_1 = 2α_20-β_21,; ] [ a_2 = α_22+1/2β_21,; b_2 = α_21-2β_22. ] ] Clearly, we can choose the 2-jets of x_1 and x_2 so that a_i,b_i, i=1,2, can take any values in ℝ. 
It follows by the results in <cit.> that the lines of principal curvature can have any of the three generic topological configurations lemon, star or monstar in Figure <ref>, and these are the only configurations that generically occur at umbilic points. This result on the configurations at umbilics was obtained previously by Craizer and Garcia in <cit.>. (2) The statement follows from Theorem <ref>. (3) Suppose the origin is a fold singularity of n. We can take Σ(n)={v=0} and n_v=0 on Σ(n). Then n_vv 0, so n_v(u,v)=vw(u,v) for some smooth w with w(0,0) 0 and n_u,w linearly independent in a neighbourhood of the origin. We have [ A=n_u· n_u 0, B=vn_u· w, C=v^2w_v· w_v,; a=-n_u· x_u, b=-1/2(n_u· x_v+vw· x_v), c=-vw· x_v, ] so the coefficients of ω(Q_2) are [ a_P=Bc-Cb = -v^2( (n_u· w)(w· x_v)-1/2(n_u· x_v+vw· x_u)(w· w)),; b_P=Ac-Ca = -v((n_u· n_u)(w· x_v)-v(w· w)(n_u· x_u))),; c_P=Ab-Ba = -1/2(n_u· n_u)(n_u· x_v+vw· x_u)+v(n_u· w)(n_u· x_u).; ] It follows that δ(Q_2)=v^2(n_u· n_u)[ (n_u· n_u)(w· x_v)^2-2(n_u· w)(n_u· x_u)(w· x_v)+(w· w)(n_u· x_v)^2+vO(u,v) ] for some smooth function O(u,v). As n_u and w are linearly independent, it follows that Q_1(du,dv)=w· w dv^2+2n_u· wdudv+n_u· n_udu^2 defines (locally) a metric on Z. The coefficient of v^2 in δ(Q_2) is then Λ=(n_u· n_u)Q_1(w· x_v,-n_u· x_v). It vanishes if and only if w· x_v=n_u· x_v=0, which does not occur on generic congruences at points on Σ(n). Therefore, Λ>0. We now have two cases, c_P(0,0)=(n_u· n_u)(n_u· x_v) vanishing or not. If n_u· x_v(0,0) 0, c_P(0,0) 0 so Q_2 is a non-zero multiple of the product of the following two regular 1-forms σ_i=2c_Pdu+(-b_P-(-1)^i√(δ(Q_2)) )dv, i=1,2. On v=0, the above one forms reduce to du=0 which is the kernel direction of dn. Their leaves have 2-point contact at points on v=0. The result on their topological configurations follows from Theorem 2.2 in <cit.>. If n_u· x_v(0,0)=0, then all the coefficients of Q_2=0 vanish at the origin. Away from the curve c_P(u,v)=0, Q_2 is still a product of the above 1-forms ω_i. Writing a_P=v^2a̅_P, b_P=vb̅_P, c_P=c̅_P, and assuming b̅_P(0,0) 0 for generic congruences (we have already two independent condition at the origin), and without loss of generality that b̅_P(0,0)>0, we have σ_i= 2c̅_Pdu+ v(-b̅_P-(-1)^ib̅_P(1-4a̅_Pc̅_P/b̅_P^2)^1/2 )dv, i=1,2. The 1-form σ_1=c̅_P(2du+h(u,v)dv) for some regular function h. So it defines a regular foliation away from c̅_P=0 which extends to a regular foliation on c̅_P=0 (we get a line of singularities on c̅_P=0 which does not alter the configuration of the foliation determined by σ_1). For the analysis of the singularity of the 1-form σ_2 we can take the first two component of n in the form (u,(1+k_10u+k_11v+O(2))v^2+l_2u^2+O(u^3))). Then n_u· x_v(0,0)=α_11=0 and j^1σ_2=((2l_2β_11 + α_21)u) + 2(α_22 + β_10)v)du+2β_11vdv. Clearly, σ_1 can have generically a saddle, node or focus singularity. The result on the topological configurations of the pair (σ_1,σ_2) follows from Theorem 2.2 in <cit.>. At a cusp singularity of n, we can still take n_v=0 on Σ(n)={λ(u,v)=0} for some smooth function λ with λ_u(0,0)λ_vv(0,0) 0. We can then proceed as above and write n_v(u,v)=λ(u,v)w(u,v) with w(0,0) 0 and x_u and w linearly independent. The coefficients a_P,b_P,c_P are as above but replacing the factor v by λ(u,v). For generic congruences we cannot have any extra conditions, so c_P(0,0) 0 and Q_2 factors as a product of two 1-forms σ_i, i=1,2. 
We can show that the leaves at the origin have 3-point contact between them and 2-point contact with Σ(n). The result on the topological configurations of the pair (σ_1,σ_2) also follows from Theorem 2.2 in <cit.>. §.§ BDEs associated to the parameter distribution We consider as in <ref> a curve γ on the surface Z with γ(0)=(n(0), x(0))= z and seek the directions γ'(0)∈ T_zZ for which the parameter distribution λ of the associated ruled surface in ^3 vanishes. These are the torsal directions and their integral curves are the torsal curves. We have the following observation. For a generic congruence, the coefficients of the BDE Q_3=0 of the torsal curves vanish simultaneously at z if and only if n is singular at z. We showed in the proof of Proposition <ref> (2) that the coefficients of Q_3 all vanish on Σ(n). For the converse, setting to zero the coefficients of Q_3 leads to (AC-B^2)(Ac-aC)=0. The first factor vanishes if and only if n is singular. If the second factor vanishes then it is easy to see that b̅=0 and z an umbilic point, so we have three independent conditions which cannot be satisfied simultaneously for a generic congruence. (1) At each point z on Z∖Σ(n) there are 2,1 or 0 torsal directions in T_zZ for which the pitch λ vanishes. These are given by the BDE Q_3=Q_2-b̅ Q_1=0. We have δ(Q_3)=δ(Q_2)+b^2δ(Q_1). The discriminant of the BDE Q_3=0 is generically a smooth curve on Z, called the parabolic curve, and all the configurations in Figure <ref> can occur, and generically only these. The set on Z where δ(Q_3)>0 (resp. δ(Q_3)< 0) is called the hyperbolic (resp. elliptic) region of Z. (2) The torsal curves extend to Σ(n). For generic congruences, the curve Σ(n) lies in the closure of the hyperbolic region of Z. The parabolic curve intersect Σ(n) tangentially at isolated points. At such points, the torsal curves form locally a family of cusps as in Figure <ref>. (1) We have already found the form of the BDE which is also Q_2-bQ_1=0. This means that Q_3 belongs to the pencil determined by Q_1 and Q_2, that is, to the polar line of Jac(Q_1,Q_2). A straightforward calculation shows that δ(Q_3)=δ(Q_2)+4b^2δ(Q_1). Using (<ref>), we get δ(Q_3)=4δ(Q_1)^2(H^2-K)+4b^2δ(Q_1), with δ(Q_1)=B^2-AC, so δ(F)=0 H^2-K+b^2/B^2-AC=0. Equation (<ref>) generically determines a regular curve on the surface. For the configuration of the BDE Q_3=0, we use the setting of the proof of Theorem <ref>(1). Then the 2-jets of the coefficients (a_T,b_T,c_T) of Q_3 are given by [ j^2a_T= α_11 + α_21u + 2α_22v +α_31u^2 - (β_11 - 2α_32)uv + (3α_33 + α_11)v^2,; j^2b_T= α_10-β_11 + (2α_20 - β_21)u + (α_21 - 2β_22)v +(3α_30-β_11 - β_31 )u^2 +; ( α_11+ 2α_31-2β_32 -β_10 )uv +(α_10 + α_32-3β_33 )v^2,; j^2c_T= -β_10 - 2β_20u - β_21v -(3β_30 + β_10)u^2 +( α_10-2β_31 )uv- β_32v^2. ] The origin is on the discriminant if and only if (α_10-β_11)^2+4α_11β_10=0. It is a singular point of Q_3=0 if and only if (α_10-β_11 )^3β_20 + 2(α_10-β_11 )^2(β_21 - α_20)β_10 + 4β_10^2(α_10-β_11 ) (β_22 - α_21) - 8α_22β_10^3=0. Following the arguments in <ref>, we can show that all the stable singularities in can occur. For generic congruences, those are the only singularities that can occur. (2) We follow the setting of the proof of Theorem <ref> (3) at a fold or a cusp singularity of n. We set n_v(u,v)=λ(u,v)w(u,v) with Σ(n)={λ(u,v)=0} and w,n_u linearly independent. Then (a_T,b_T,c_T)=(λ^2ã_T,λb̃_T,λc̃_T) for some smooth functions ã_T,b̃_T,c̃_T. This means that the torsal curves have a removable singularity on Σ(n). 
We define their extension to Σ(n) as the solutions of the BDE with coefficients (λã_T,b̃_T,c̃_T). Then the parabolic curve extended to points on Σ(n) and is given by b̃_T^2-4λã_Tc̃_T=0. In particular, it intersects Σ(n) at the origin if and only if b̃_T(0,0)=0 and for generic congruences, the two curves have ordinary tangency at that point. The remaining part of the proof follows by computing the initial jets of the coefficients (λã_T,b̃_T,c̃_T). Following the setting in the proof of Theorem <ref> (3) at a fold singularity of n, we get [ j^1(vã_T) = 4α_11v,; j^1b̃_T = -2β_11 + 2(2l_2α_11 - β_11k_10 - β_21)u+ (4α_10 -3β_11k_11 - 4β_22)v,; j^1c̃_T = -2β_10 + 2(2l_2α_10 - β_10k_10 - 2β_20)u - (3β_10l_11 + 2β_21)v. ] The origin is on the parabolic curve if and only if β_11=0. The parabolic set is generically a smooth curve (α_11β_10 0) and has ordinary tangency with Σ(n) (2l_2α_11 - β_11k_10 - β_21 0) at the origin. The torsal curves form a family of cusps along the parabolic curve as α_11β_10 0. Observe that along Σ(n), δ(Q_3)=b̃_T^2(u,0)>0, so there are always two real torsal directions away from parabolic points. The claim at a cusp singularity of n follows similarly observing that for generic congruences the cusp singularity of n is not on the parabolic curve. We turn now to the extremes of the pitch, these occur along the mean directions. (1) At every non-umbilic point on Z∖Σ(n) there are two Q_1-orthogonal directions along which the parameter distribution has an extremum. These are given by the BDE Q_4=Jac(Q_2,Q_1)=0, that is, Q_4= | [ dv^2 -dudv du^2; A B C; Ab-Ba (Ac-Ca)/2 Bc-Cb ]|=0. The discriminant of the BDE (<ref>) consists of the umbilic points, and all the configurations in Figure <ref> can occur at such points and only these. (2) The triple Q_1,Q_2,Q_4 are vertices of a self-polar triangle. (3) For generic congruences, the solution curves of Q_4=0 extend to Σ(n). Their configurations are as those in Figure <ref> with the special fold points generically distinct from those of the principal curves. (1) We have already established the form of the BDE, and the orthogonality condition follows from Theorem <ref>. We have δ(Q_4)=-δ(Q_1)δ(Q_2) which vanishes only at umbilic points or on Σ(n). At an umbilic point which is generically not on Σ(n), and with the same setting as for the previous cases, we have j^1Q_4=(a̅_1u+a̅_2v,b̅_1u+b̅_2v,-a̅_1u-a̅_2v) with [ a̅_1=-2α_20 + β_21 a̅_2=-α_21 +2β_22,; b̅_1=2(α_21 + 2β_20) b_2=2(2α_22 + β_21). ] Clearly, as in the proof of Theorem <ref>, we can obtain the three generic configurations in Figure <ref> and show that only these occur. Observe that a̅_1u+a̅_2v=-b_1u-b_2v and b̅_1u+b̅_2v=4(a_1u+a_2v) with j^1Q_2=(a_1u+a_2v,b_1u+b_2v,-a_1u-a_2v) as in the proof of Theorem <ref>. (2) We have Jac(Q_2,Q_4)=δ(Q_2)Q_1, Jac(Q_1,Q_4)=δ(Q_1)Q_2 and Q_4=Jac(Q_1,Q_2), so Q_1,Q_2,Q_4 are vertices of a self-polar triangle. (3) The analysis at points on Σ(n) is identical to that for principal curves and is omitted. §.§ Characteristic directions and characteristic points The BDEs of the torsal and mean directions determine another BDE on the surface, namely, Q_5=Jac(Q_3,Q_4)=0. We call its solutions at each point z the characteristic directions and their integral curves the characteristic curves. (1) At each point z on the surface Z∖Σ(n) there are 2,1 or 0 characteristic directions in T_zZ. These are given by the BDE Q_5=-δ(Q_2)Q_1-bδ(Q_1) Q_2=0, so Q_5 is on the polar line Q_4 of Q_4. The triple Q_3,Q_4,Q_5 are vertices of a self-polar triangle. 
(2) The discriminant of the BDE Q_5=0 is given by δ(Q_5)=δ(Q_1)δ(Q_2)δ(Q_3). Consequently, when the torsal directions are real the characteristic ones are imaginary and vice-versa, that is, the characteristic curves lie on the closure of the elliptic region of Z. (3) The discriminant of Q_5 is the union of the parabolic curve together with the umbilic points. The folded singularities of the characteristic directions occur at the same points as those of the torsal directions. The two configurations have opposite indices (when one has a folded saddle, the other is a folded node or focus). At umbilic points, the characteristic curves have the same configurations as those of the principal curves. (4) The characteristic curves BDE extend to Σ(n). For generic congruences and away from parabolic points, there are no characteristic directions on Σ(n). At a parabolic point on Σ(n), we get a family of cusps along the parabolic curve. (1) and (2). The expression for Q_5 follows by calculating Jac(Q_2,Q_3). Clearly it is on the polar line Q_4 of Q_4=Jac(Q_1,Q_2). We have Jac(Q_4,Q_5)=δ(Q_1)δ(Q_2)Q_3 so the polar line Q_3 of Q_3 contains Q_4 and Q_5. The polar line Q_5 of Q_5 contains Q_3 and Q_4 by definition of Q_5. This proves that Q_3,Q_4,Q_5 are vertices of a self-polar triangle. Observe that away from Σ(n) and umbilic points δ(Q_1)<0 and δ(Q_2)>0, so δ(Q_5) and δ(Q_3) have opposite signs. It follows that the torsal curves and the characteristic curves live on opposite sides of the parabolic curve. (3) We follow the setting of the proof of the previous cases and compute the 2-jet of the coefficients of BDE Q_5. At a parabolic point, we find that the BDE is singular if and only if the condition (<ref>) is satisfied. We compute the scalar λ that determines the type of the folded singularity (see <ref>) and find that it is the opposite of that associated with of the folded singularity of the torsal direction. At umbilic points, the 1-jet of Q_5 is a scalar multiple of that of the principal curves, so at generic umbilic points, the two BDEs have the same configurations. When the characteristic directions are real at a point z on Z, they determine points on the line L⊂^3 associated to z which we call characteristic points. We call the surface in ℝ^3 that they trace as z varies in Z the characteristic surface. There are 2,1 or 0 characteristic points on the line L⊂^3. When there are two of them, there are no focal points and vice-versa. The midpoint of the characterise points is the midpoint of the boundary points. The characteristic points lie between the boundary points; see Figure <ref>. We consider the case when n is not singular; the singular case is similar. Then each characteristic direction (when it exists) at a point z∈ Z determines a characteristic point on the line L⊂ℝ^3 with distance r=-x'.n'/n'.n' from x, with (u',v') a solution of Q_5=0. Eliminating u', v' gives r^2-2Hr+K+AC-B^2/b̅^2(H^2-K)^2=0. It follows that the midpoint (x+Hn)(0,0) of the characteristic points is the midpoint of the boundary points (these are given by r^2-2Hr+K=0, <cit.>). The square of the distance between the characteristic points is H^2-K-(AC-B^2)(H^2-K)^2/b̅^2 and that between the boundary points is H^2-K. Therefore the characteristic points are between the boundary points. § NORMAL CONGRUENCES A congruence Z⊂ is said to be a normal congruence if for some smooth surface X⊂^3 the set Z consists of the normals to X denoted by N(X). 
Normal congruences are important in geometrical optics, where X can be thought of as a wavefront and the associated lines as rays. In this case the quadratic forms in the previous section are determined by the first and second fundamental forms of X. We parametrise the directrix X away from its umbilic points so that the coordinate curves represent the lines of principal curvature. Then the coefficients of the first fundamental form are x_u· x_u=E, x_u· x_v=0, x_v· x_v=G. The quadratic form Q_1 is the third fundamental form of X and has coefficients A=κ_1^2E, B=0 and C=κ_2^2G. For normal congruences, b̅=0 (in fact b̅=0 if and only if Z is a normal congruence, <cit.>, p. 196) and a=κ_1E,b=0,c=κ_2G are the coefficients of the second fundamental form of X. The quadratic form Q_2=κ_1κ_2EG(κ_1-κ_2)dudv, so the solution curves of Q_2=0, i.e., the principal curves of the congruence Z coincide with the lines of principal curvature of X in the parameter space. In fact, the natural map X→, x↦ (x, N(x)) with image N(X), takes the classical principle directions of X to the principal directions of the congruence N(X). (Here Σ(n) is the parabolic set of X given by κ_1κ_2=0, so the principal directions on Z have removable singularities on Σ(n).) We have Q_3=Q_2, so the torsal curves coincide with the principal curves, so every non-umbilic point on N(X) is a hyperbolic point. The BDE of the mean directions is given by Q_4=Jac(Q_1,Q_2)=κ_2^2Gdv^2-κ_1^2Edu^2=0. The directions determined by Q_4 are called minimal orthogonal spherical image directions in <cit.>. They are the unique pair of tangent directions to X that have orthogonal images under dn and that are inclined at a minimal angle at each point of X. The characteristic BDE is Q_5=Jac(Q_4,Q_2)=κ_2^2Gdv^2+κ_1^2Edu^2=0, so does not have real solutions (there are no elliptic points on N(X)). The work in this paper was partially supported by the FAPESP Thematic project grant 2019/07316-0. 99 bruce-fidal J. W. Bruce and D. Fidal, On binary differential equations and umbilics. Proc. Royal Soc. Edinburgh 111A (1989), 147–168. bdes J. W. Bruce and F. Tari, On binary differential equations. Nonlinearity 8 (1995), 255–271. Codim1bdesJ. W. Bruce and F. Tari, , Generic 1-parameter families of binary differential equations. Discrete Contin. Dyn. Syst. 3 (1997), 79–90. duality J. W. Bruce and F. Tari, Duality and implicit differential equations. Nonlinearity 13 (2000), . dupin J. W. Bruce and F. Tari, Dupin indicatrices and families of curve congruences. Trans. Amer. Math. Soc. 357 (2005), 267–285. BruceTAffineCongJ. W. Bruce and F. Tari, On the affine geometry of congruences of lines. Preprint, 2023. CraizerGarcia M. Craizer and R. A. Garcia, Singularities of generic line congruences. To appear in J. Math. Soc. Japan. davbook A.A. Davydov, Qualitative control theory. Translations of Mathematical Monographs 142, AMS, Providence, R.I., 1994. guinez V. Guíñez, Positive quadratic differential forms and foliations with singularities on surfaces. Trans. Amer. Math. Soc. 309 (1988), 477–502. joey J. M. Oliver, Pairs of geometric foliations on regular and singular surfaces. Doctoral thesis, Durham University, 2010. Pottman_Wallner H. Pottmann and J. Wallner, Computational Line Geometry. Springer, 2010. Weatherburn C. E. Weatherburn, Differential Geometry of Three Dimensions. Cambridge University Press, 1955. 
JWB: Department of Mathematical Sciences, University of Liverpool, Liverpool, L69 3BXl E-mail: billbrucesingular@gmail.com FT: Instituto de Ciências Matemáticas e de Computacão - USP, Avenida Trabalhador são-carlense, 400 - Centro, CEP: 13566-590 - São Carlos - SP, Brazil. E-mail: faridtari@icmc.usp.br
http://arxiv.org/abs/2307.00324v1
20230701123058
DeepMediX: A Deep Learning-Driven Resource-Efficient Medical Diagnosis Across the Spectrum
[ "Kishore Babu Nampalle", "Pradeep Singh", "Uppala Vivek Narayan", "Balasubramanian Raman" ]
cs.CV
[ "cs.CV", "cs.LG", "I.2.1" ]
http://arxiv.org/abs/2307.01653v1
20230704111954
Translating nano-Hertz gravitational wave background into primordial perturbations taking account of the cosmological QCD phase transition
[ "Katsuya T. Abe", "Yuichiro Tada" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph" ]
kabe@chiba-u.jp Center for Frontier Science, Chiba University, 1-33 Yayoi-cho, Inage-ku, Chiba 263-8522, Japan tada.yuichiro.y8@f.mail.nagoya-u.ac.jp Institute for Advanced Research, Nagoya University, Furo-cho Chikusa-ku, Nagoya 464-8601, Japan Department of Physics, Nagoya University, Furo-cho Chikusa-ku, Nagoya 464-8602, Japan The evidence of the nano-Hertz stochastic GW background is reported by multiple pulsar timing array collaborations. While a prominent candidate of the origin is astrophysical from supermassive black hole binaries, alternative models involving GWs induced by primordial curvature perturbations can explain the inferred GW spectrum. Serendipitously, the nano-Hertz range coincides with the Hubble scale during the cosmological QCD phase transition. The influence of the QCD phase transition can modify the spectrum of induced GWs within the nano-Hertz frequency range, necessitating careful analysis. We estimate GWs induced by power-law power spectra of primordial curvature perturbations taking account of the QCD phase transition. Then we translate the implication of the NANOGrav data into the constraint on the power spectrum of the primordial curvature perturbation, which suggests that one may miss the correct interpretation if neglecting the QCD effect. We also derive fitting formulae for their amplitude and scale dependence, helping to update the constraint in future experiments. Translating nano-Hertz gravitational wave background into primordial perturbations taking account of the cosmological QCD phase transition Yuichiro Tada August 1, 2023 ========================================================================================================================================== PBHprimordial black hole GWgravitational wave QCDquantum chromodynamics CMBcosmic microwave background § INTRODUCTION The evidence of the stochastic GW background in the nano-Hertz range is reported by the NANOGrav <cit.>, European Pulsar Timing Array <cit.>, Parkes Pulsar Timing Array <cit.>, and Chinese Pulsar Timing Array <cit.>. The inferred spectrum is consistent with the astrophysical expectation from supermassive black hole binaries, but it can be also explained by some primordial origin <cit.> represented by the induced GW due to large primordial curvature perturbations <cit.>. In particular, GW with the observed amplitude and frequency range would correspond to sizable enough perturbations which can cause PBH of the stellar mass <cit.> (see also the recent review article <cit.>). Such stellar mass PBH can explain some fraction of black hole binaries found by merger GW in the LIGO–Virgo–KAGRA collaboration <cit.>. Large primordial perturbations themselves are viewed as important information on the detailed mechanism of cosmic inflation. Serendipitously, the nano-Hertz range coincides with the Hubble scale during the cosmological QCD phase transition. There, the equation-of-state parameter w=p/ρ and the sound speed (squared) ^2=*pρ, where ρ and p are energy density and pressure, slightly reduce from the exact radiation value, 1/3, (see Fig. <ref>) and hence the compaction of the density perturbation and also the dilution of the induced GW are affected <cit.>. In fact, the resultant GW spectrum shows a sharp drop in this range even if the input primordial curvature perturbation is exactly scale-invariant (see Fig. <ref>). Hence, a naive estimate without the QCD effect can miss the true implication. 
In this Letter, given input power-law power spectra of curvature perturbations, we numerically calculate the resultant spectra of the induced GW with the QCD effect in the nano-Hertz range and derive fitting formulae for their amplitude and scale dependence with respect to the input parameters. Making use of these formulae, specifically the implication of the NANOGrav data is translated into the constraint on the power spectrum of the primordial curvature perturbation.[See Refs. <cit.> for the implication on the PBH and induced GW (without the QCD effect) of the latest NANOGrav data. See also Ref. <cit.> for a discussion of the QCD effect (only on w) on the primordial GW in general contexts in nano-Hertz frequency ranges.] Throughout this paper, we adopt the natural unit c=ħ=1. § INDUCED GRAVITATIONAL WAVES DURING THE QCD PHASE TRANSITION We briefly review Ref. <cit.> for GW induction during the QCD phase transition. Let us first specify the background dynamics. The temperature dependence of the effective degrees of freedom for energy density, g_*, and entropy density, g_*s, of the QCD plasma has been extensively studied both in analytic and numerical ways. Saikawa and Shirai unified these results in the form of the fitting function (see Appendix C of Ref. <cit.>), which is plotted in the left panel of Fig. <ref>. These effective degrees of freedom are related to w and ^2 by w(T)=4g_*s(T)/3g_*(T)-1, ^2(T)=4(g_*s'(T)T+4g_*s(T))/3(g_*'(T)T+4g_*(T))-1, which are shown in the right panel of Fig. <ref>. Once their temperature dependence is fixed, the time evolution of the temperature of the universe (and hence the evolution of all background parameters) can be calculated through the continuity equation, the Friedmann equation, and the definition of g_*: ρη=-3(1+w)ρ, 3^2^2=a^2ρρ(T)=π^2/30g_*(T)T^4. a is the scale factor, η=∫ a^-1t is the conformal time, =∂_ηln a is the conformal Hubble parameter, and =1/√(8π G) is the reduced Planck mass. Perturbations evolve along this background. The (Fourier-space) gravitational potential Φ̂_(η) in the Newton gauge follows the Bardeen equation, Φ̂_”(η)+3(1+^2)Φ̂_'(η) +^2k^2+3^2(^2-w)Φ̂_(η)=0. Here Φ̂_(η) is decomposed into the transfer function Φ_k(η) and the primordial perturbation ψ̂_ as Φ̂_(η)=Φ_k(η)ψ̂_. ψ̂_ is related to the gauge-invariant curvature perturbation ζ̂_ by ψ̂_=-2ζ̂_/3 and the initial condition of the transfer function is given by Φ_k(η)→1 and Φ_k'(η)→0 for η→0. The second-order effect of Φ̂_(η) can source the linear tensor perturbation ĥ_(η) through the equation, Λ_ηa(η)ĥ_(η)=4a(η)_(η), where Λ_η is the derivative operator Λ_η=∂_η^2+k^2-1-3w(η)/2^2(η), and _(η) represents the source term _(η)=∫[3]/(2π)^3e_ij()k̃^ik̃^j[2Φ̂_(η)Φ̂_-(η) +4/3(1+w(η))Φ̂_(η)+Φ̂_'(η)/(η)Φ̂_-(η)+Φ̂_-'(η)/(η)]. e_ij() is one polarization tensor. This sourced equation can be solved in the Green function method as ĥ_(η)=4/a(η)∫η̃G_k(η,η̃)a(η̃)_(η̃), with the Green function G_k(η,η̃): Λ_ηG_k(η,η̃)=δ(η-η̃). Practically, the Green function can be constructed by the two independent homogeneous solutions, Λ_η g_1k(η)=Λ_η g_2k(η)=0, as G_k(η,η̃)=g_1k(η)g_2k(η̃)-g_1k(η̃)g_2k(η)/g_1k'(η̃)g_2k(η̃)-g_1k(η̃)g_2k'(η̃). Eventually, the GW power spectrum is given by _h(k,η)=64/81a^2(η)∫_k_1-k_2≤k≤k_1+k_2lnk_1lnk_2 I^2(k,k_1,k_2,η)k_1^2-(k^2-k_2^2+k_1^2)^2/(4k^2)^2/k_1k_2k^2_ζ(k_1)_ζ(k_2), where I(k,k_1,k_2,η)=k^2∫^η_0η̃a(η̃)G_k(η,η̃)[2Φ_k_1(η̃)Φ_k_2(η̃) +4/3(1+w(η̃))Φ_k_1(η̃)+Φ_k_1'(η̃)/(η̃)Φ_k_2(η̃)+Φ_k_2'(η̃)/(η̃)]. 
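As a minimal numerical illustration of the transfer-function input to this kernel, the sketch below integrates the Bardeen equation in the pure-radiation limit w = c_s^2 = 1/3, where the analytic solution Φ = 3(sin x - x cos x)/x^3 with x = kη/√3 is available for comparison. The full QCD-era computation replaces w(η), c_s^2(η), and a(η) by the tabulated background described above; this sketch only shows the machinery.

import numpy as np
from scipy.integrate import solve_ivp

k = 1.0  # wavenumber in arbitrary units

def bardeen_rad(eta, y):
    phi, dphi = y
    H = 1.0 / eta                    # conformal Hubble rate, a ~ eta in the radiation era
    w = cs2 = 1.0 / 3.0
    ddphi = -3.0 * H * (1.0 + cs2) * dphi - (cs2 * k**2 + 3.0 * H**2 * (cs2 - w)) * phi
    return [dphi, ddphi]

eta0, eta1 = 1e-4, 20.0              # start deep on super-horizon scales with Phi = 1, Phi' = 0
sol = solve_ivp(bardeen_rad, (eta0, eta1), [1.0, 0.0], rtol=1e-8, atol=1e-10)

x = k * eta1 / np.sqrt(3.0)
phi_analytic = 3.0 * (np.sin(x) - x * np.cos(x)) / x**3
print(sol.y[0, -1], phi_analytic)    # the numerical and analytic values agree closely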
The GW density parameter is well approximated by its oscillation average _h(k,η) well after its horizon reentry as Ω_(k,η)=ρ_(η,k)/3^2H^2=1/24k/^2_h(k,η), where H=/a is the ordinary Hubble parameter. It is extended to the current time η_0 with the current radiation density parameter Ω_r0h^2=4.2×10^-5 as Ω_(k,η_0)h^2=Ω_r0h^2a__/a__^21/24k/_^2_h(k,η_), where h=H_0/(100km s^-1 Mpc^-1) is the normalized Hubble constant, the subscripts `' and `' indicate the time when the GW of interest becomes well subhorizon and the density parameter becomes almost constant (to which time we solve the induced GW) and the time when all relevant phase transitions are completed and g_* and g_*s well asymptote to the current values (to which time we solve the background dynamics). In this way, the induced GW can be calculated on an arbitrary background. In Fig. <ref>, we show the example GW spectrum normalized by the scalar amplitude squared A_ζ^2 for the scale-invariant scalar perturbation _ζ(k)=A_ζ. § GRAVITATIONAL WAVE SIGNALS We then calculate the GW spectrum, particularly in the NANOGrav's sensitivity range f=2–59nHz for the power-law-type power spectrum of the curvature perturbation:[Do not confuse this with the inferred value =0.965±0.004 by the CMB observation <cit.>. They correspond to different perturbation scales and are independent of each other in principle.] _ζ(k)=A_ζk/k_yr^-1^-1, where k_yr^-1=2π×1yr^-1≃2e+7Mpc^-1 is the pivot scale. In Fig. <ref>, the resultant GW spectra normalized by A_ζ^2 are shown for several values of . These spectra within this limited frequency range can be well-fitted by a power-law function Ω_(f) h^2≈Q_()A_ζ^2f/f_yr^-1^β_(), with the fitting parameters Q_() and β_(). Fig. <ref> shows fitting values of these parameters for each numerically-obtained GW spectrum. From this, one can further find the fitting formula for these fitting parameters themselves as Q_()≈10^1.101(-1)^2+0.1618(-1)-4.931, β_()≈2.133(-1)-0.2404. We also show the pure radiation result, Q_()≈10^0.8520(-1)^2-0.1021(-1)-4.453, β_()=2(-1). The analytic formulae derived in Refs. <cit.> are useful for this calculation.[Some examples of the amplitude parameter Q_() (without the factor of Ω_r0h^2) are shown in Table 1 of Ref. <cit.> with a sufficiently broad integration range for the k_1 and k_2 integrals in Eq. (<ref>). In this paper, we restrict the integration range as 0.1k<(k_1,k_2)<100k both for the QCD and pure radiation cases just because of the technical limitation, though it could be more natural as the power spectrum should not be amplified infinitely.] Making use of these formulae, the NANOGrav constraint on the GW spectrum with the power-law assumption (Fig. 11 of Ref. <cit.>) can be interpreted as the constraint on the primordial power spectrum as shown in Fig. <ref>. We also show the (mis)interpretation assuming the exact radiation background as a comparison. This is our main result. width=figs/omegagw_ns.pdfResultant GW spectra for various values of the spectral index within the NANOGrav's sensitivity range f=2–59nHz. The vertical red dotted line shows the pivot scale f_yr^-1.fig: various ns § SUMMARY AND DISCUSSION In this Letter, we derived a fitting formula (<ref>) and (<ref>) for the scalar-induced GW, including the effect of the QCD phase transition, and the NANOGrav constraint on the GW spectrum is interpreted in terms of the parameters for the power spectrum of the primordial curvature perturbation (<ref>) as shown in Fig. <ref>. 
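For convenience, the fitting formulae above can be evaluated directly. In the sketch below the symbol stripped from the displayed exponents is read as (n_s - 1), which is consistent with the pure-radiation slope β = 2(n_s - 1); the function names and the sample parameter values are ours.

import numpy as np

def Q_qcd(ns):
    # amplitude factor Q(n_s) including the QCD equation-of-state effect
    x = ns - 1.0
    return 10.0 ** (1.101 * x**2 + 0.1618 * x - 4.931)

def beta_qcd(ns):
    # spectral slope beta(n_s) of the induced-GW spectrum, QCD case
    return 2.133 * (ns - 1.0) - 0.2404

def omega_gw_h2(f_hz, A_zeta, ns, f_yr=1.0 / 3.15576e7):
    # Omega_GW h^2 ~ Q(n_s) A_zeta^2 (f / f_yr^-1)^beta(n_s), valid in the 2-59 nHz band
    return Q_qcd(ns) * A_zeta**2 * (f_hz / f_yr) ** beta_qcd(ns)

f = np.geomspace(2e-9, 59e-9, 50)    # NANOGrav sensitivity band
print(omega_gw_h2(f, A_zeta=0.007, ns=2.0)[:3])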
One finds that the correct interpretation could be missed if one neglects the QCD effect. The fitting formulae will also be useful when the observational constraint on the GW spectrum is further improved by future experiments. We close this section by mentioning the implication of our result for PBH formation. PBH formation is also affected by the QCD phase transition (see, e.g., Refs. <cit.>, and also Refs. <cit.> for detailed numerical studies of this effect), and the corresponding scenario is sometimes referred to as the thermal history model <cit.>. Ref. <cit.> claims that this thermal history model is consistent with several pieces of observational “positive evidence" for PBH (see also the review article <cit.>). Nevertheless, these works basically focused on the almost scale-invariant case n_s≃0.96 and cannot be directly applied to our spectrum with n_s-1∼1 inferred from the NANOGrav data. Regarding the amplitude alone, Ref. <cit.> supposes that the root-mean-square of the density contrast is σ_δ=0.0218 on the solar-mass scale. σ_δ coarse-grained on a scale R is related to the primordial power spectrum by (see, e.g., Ref. <cit.>) σ_δ^2=(16/81)∫(dk/k) W^2(kR)(kR)^4 𝒫_ζ(k), and for the power-law power spectrum 𝒫_ζ=A_ζ(k/k_*)^{n_s-1} and the Gaussian window function W(z)=e^{-z^2/2}, it simplifies to σ_δ,Gauss^2=(8/81)A_ζ(k_*R)^{1-n_s}Γ((n_s+3)/2). Making use of the mass-scale relation (see, e.g., Ref. <cit.>) M(R)≃10^20 g (g_*(R)/106.75)^{-1/6}(R/(6.4×10^{-14} Mpc))^2, one finds that the inferred values, A_ζ∼0.007 and n_s-1∼1 with k_*=k_yr^-1, correspond to σ_δ,Gauss∼0.013 on the solar-mass scale, which could be consistent with Ref. <cit.>. Detailed numerical studies for n_s-1∼1 are in any case necessary. Y.T. is supported by JSPS KAKENHI Grant No. JP21K13918.
http://arxiv.org/abs/2307.00224v1
20230701044652
Flexible Bayesian Modeling for Longitudinal Binary and Ordinal Responses
[ "Jizhou Kang", "Athanasios Kottas" ]
stat.ME
[ "stat.ME" ]
Flexible Bayesian Modeling for Longitudinal Binary and Ordinal Responses

Jizhou Kang and Athanasios Kottas[Jizhou Kang (jkang37@ucsc.edu) is a Ph.D. student, and Athanasios Kottas (thanos@soe.ucsc.edu) is Professor, Department of Statistics, University of California, Santa Cruz.]

Department of Statistics, University of California, Santa Cruz

August 1, 2023

Longitudinal studies with binary or ordinal responses are widely encountered in various disciplines, where the primary focus is on the temporal evolution of the probability of each response category. Traditional approaches build from the generalized mixed effects modeling framework. Even when augmented with nonparametric priors placed on the fixed or random effects, such models are restrictive due to the implied assumptions on the marginal expectation and covariance structure of the responses. We tackle the problem from a functional data analysis perspective, treating the observations for each subject as realizations from subject-specific stochastic processes at the measured times. We develop the methodology focusing initially on binary responses, for which we assume the stochastic processes have Binomial marginal distributions. Leveraging the logits representation, we model the discrete space processes through sequences of continuous space processes. We utilize a hierarchical framework to model the mean and covariance kernel of the continuous space processes nonparametrically and simultaneously through a Gaussian process prior and an Inverse-Wishart process prior, respectively. The prior structure results in flexible inference for the evolution and correlation of binary responses, while allowing for borrowing of strength across all subjects. The modeling approach can be naturally extended to ordinal responses. Here, the continuation-ratio logits factorization of the multinomial distribution is key for efficient modeling and inference, including a practical way of dealing with unbalanced longitudinal data. The methodology is illustrated with synthetic data examples and an analysis of college students' mental health status data.

Keywords: Bayesian hierarchical modeling; Continuation-ratio logits; Functional data analysis; Markov chain Monte Carlo; Student-t process.

§ INTRODUCTION Recent years have witnessed a rapid growth of longitudinal studies with binary and ordinal responses in several disciplines, including econometrics and the health and social sciences. In such studies, of primary importance are the probability response curves, i.e., the probabilities of the response categories that evolve dynamically over time. This article aims at developing a hierarchical framework, customized to longitudinal settings, that allows flexible inference for the probability response curves. In addition, the defining characteristic of longitudinal data is that repeated measurements on the same subject induce dependence. Hence, a further objective is to flexibly model lead-lag correlations among repeated measurements. The development of statistical methods for longitudinal binary and ordinal data stems from models for longitudinal continuous responses, postulating the generalized linear model framework.
Analogous to the continuous case, a specific model is formulated under one of three broad approaches pertaining to marginal models, conditional models, or subject-specific models. Marginal models provide alternative modeling options when likelihood-based approaches are difficult to implement. A conditional model describes the distribution of responses conditional on the covariates and also on part of the other components of the responses. In a subject-specific model, the effects of a subset of covariates are allowed to vary randomly from one individual to another. In the absence of predictor variables, functions of the observation time are usually used as covariates. We refer to <cit.> for a comprehensive review. In Section <ref>, we elaborate on the connection of our proposed modeling approach with existing methods. In this article, we introduce a novel viewpoint for longitudinal binary and ordinal data analysis. We begin with the model construction for longitudinal binary responses. The critical insight that distinguishes our methodology from the majority of the existing literature is functional data analysis. We treat the subjects' measurements as stochastic process realizations at the corresponding time points. The benefits are twofold. First, the models can incorporate unbalanced data from longitudinal studies in a unified scheme; directly inferring the stochastic process provides a well-defined probabilistic model for the missing values. Secondly, we can exploit the power of Bayesian hierarchical modeling for continuous functional data <cit.>. To that end, we adopt the Binomial distribution with the logit link that connects binary responses to continuous signals, which, subject to additive measurement error, are then modeled as (conditionally) independent and identically distributed (i.i.d.) realizations from a Gaussian process (GP) with random mean and covariance function. We place an Inverse-Wishart process (IWP) prior on the covariance function, and conditional on it, use a GP prior for the mean function. Therefore, the two essential ingredients in longitudinal modeling, the trend and the covariance structure, are modeled simultaneously and nonparametrically. The hierarchical structure allows borrowing of strength across the subjects' trajectories. We apply a specific setting of hyperpriors for the GP and IWP priors, such that marginalizing over them, the latent continuous functions have a Student-t process (TP) prior. The TP enhances the flexibility of the GP <cit.>. It retains attractive GP properties, such as analytic marginal and predictive distributions, and it yields predictive covariance that, unlike the GP, explicitly depends on the observed values. For inferential purposes, we represent the joint posterior distribution in multivariate form through evaluating the functions on the pooled grid, resulting in the common normal-inverse-Wishart conditional conjugacy. In conjunction with the Pólya-Gamma data augmentation technique <cit.>, we develop a relatively simple and effective posterior simulation algorithm, circumventing the need for specialized techniques or tuning of Metropolis-Hastings steps. To extend the model for ordinal responses, we utilize the continuation-ratio logits representation of the multinomial distribution. 
Such representation features an encoding of an ordinal response with C categories as a sequence of C-1 binary indicators, in which the j-th indicator signifies whether the ordinal response belongs to the j-th category or to one of the higher categories. We show that fitting a multinomial model for the ordinal responses is equivalent to fitting separately the aforementioned model on the binary indicators. Hence, we can conduct posterior simulation for each response category in a parallel fashion, leading to significant computational efficiency gains in model implementation. In modern longitudinal studies, it is common that the complete vector of repeated measurements is not collected on all subjects. As a specific example, in ecological momentary assessment (EMA) studies, emotions and behaviors are repeatedly measured for a cohort of participants, through wearable electronic devices <cit.>. For instance, in the StudentLife study <cit.>, researchers monitored the students' mental status through pop-up questionnaires on their smartphones that prompted multiple times at pseudorandom intervals during the study period. Since the data collection process is based on the participants' conscious responding to prompted questions several times a day, non-response is inevitable. Missing values are typically considered to be a nuisance rather than a characteristic of EMA time series. Parametric and nonparametric Bayesian methods have been developed to handle longitudinal data with missingness; see <cit.> for a review. The common issue is that one has to bear the drawbacks of making either structured or unstructured assumptions to manage missingness. The unstructured approach leads to flexibility, yet it may result in difficulties due to a large number of parameters relative to the sample size. Besides, the majority of the existing literature on longitudinal studies with missingness focuses on the scenario with continuous responses, and the extension to discrete responses is not trivial. Accordingly, our contributions can be summarized as follows: (i) we model the mean and covariance jointly and nonparametrically, avoiding potential biases caused by a pre-specified model structure; (ii) we unify the toolbox for balanced and unbalanced longitudinal studies; (iii) the model encourages borrowing of strength, preserving systematic patterns that are common across all subject responses; (iv) we develop a computationally efficient posterior simulation method by taking advantage of conditional conjugacy; (v) the model facilitates applications for ordinal responses with a moderate to large number of categories. The rest of the paper is organized as follows. Section <ref> develops the methodology for binary responses, including model formulation, study of model properties, and the computational approach to inference and prediction. Section <ref> illustrates the modeling approach through an EMA study that focuses on analyzing students' mental health through binary outcomes. The modeling extension for longitudinal ordinal responses is presented in Section <ref>, including an illustration involving an ordinal outcome from the same EMA study. Finally, Section <ref> concludes with a summary. § THE MODELING APPROACH FOR BINARY RESPONSES Here, we develop the methodology for longitudinal binary responses. The data consist of repeated binary responses on n subjects, with the observation on subject i at time τ_it denoted by Y_it. 
The set of repeated outcomes for the i-th subject is collected into a T_i-dimensional vector 𝐘_i= (Y_i1,⋯,Y_iT_i)^⊤. The hierarchical model construction is presented in Section <ref>. In Section <ref>, we discuss model properties related to our inference objectives. Bayesian inference and prediction is developed in Section <ref>. In Section <ref>, we outline the findings from simulation studies, the details of which are included in the Supplementary Material. Finally, to place our contribution within the literature, we discuss in Section <ref> the proposed model in the context of relevant Bayesian nonparametric approaches. §.§ Model specification We examine the data from a functional data analysis perspective, treating each observed data vector 𝐘_i as the evaluation of trajectory Y_i(τ) on grid τ_i= (τ_i1,⋯,τ_iT_i)^⊤, for i=1,⋯,n. The n trajectories are assumed to be (conditionally) independent realizations from a continuous-time stochastic process. The prior probability model is built on the stochastic process. This approach avoids strong pre-determined assumptions on the transition mechanism within the sequence of subject-specific responses in 𝐘_i, while it is suitable to accommodate repeated measurements regardless of their observational pattern. The functional data analysis view of longitudinal data dates back at least to <cit.>, where it is suggested that functional data analysis tools, such as principal component analysis, can be used to capture periodic structure in longitudinal data. Indeed, <cit.> study functional principal component analysis (FPCA) for sparse longitudinal data, a method that can provide effective recovery of the entire individual trajectories from fragmental data. FPCA has been applied in finance <cit.>, biomechanics <cit.>, and demographic studies <cit.>. Its extension to examine sequences of discrete data is studied in <cit.>. Our methodology builds from a GP-based hierarchical model for continuous functional data <cit.>. Regarding mean-covariance estimation, the model in <cit.> can be considered as a Bayesian counterpart of <cit.>. The hierarchical scheme enables a natural extension to studies with binary responses. We assume that, subject to measurement error, the i-th subject's responses, Y_it≡ Y_i(τ_it), depend on the i-th trajectory of the underlying process, evaluated at times τ_it, through the following model Y_i(τ_it) | Z_i(τ_it),ϵ_it ind.∼ Bin(1, φ(Z_i(τ_it)+ϵ_it)), t=1,⋯,T_i, i=1,⋯,n, where φ(x) = exp(x)/{ 1 + exp(x) } denotes the expit function. The error terms are i.i.d. from a white noise process, that is, ϵ_it|σ^2_ϵi.i.d.∼ N(0,σ^2_ϵ), and independent of the process realizations Z_i(·). The main building block for the model construction is a hierarchical GP prior for the Z_i(·). In particular, given random mean function μ(·) and covariance kernel Σ(·,·), the Z_i(·) are i.i.d. GP realizations, denoted by Z_i |μ,Σi.i.d.∼ GP(μ,Σ), for i=1,⋯,n. The hierarchical GP prior model is completed with nonparametric priors for the mean function and covariance kernel: μ|Σ∼ GP(μ_0,Σ/κ), Σ∼ IWP(ν,Ψ_ϕ) , where GP(·,·) and IWP(·,·) denote the GP and IWP prior, respectively. The nonparametric prior reflects the intuition that parametric forms will generally not be sufficiently flexible for the mean and covariance functions. We adopt an IWP prior for the covariance kernel, defined such that, on any finite grid τ=(τ_1,⋯,τ_T) with |τ| points, the projection Σ(τ,τ) follows an inverse-Wishart distribution with mean Ψ_ϕ(τ,τ) / (ν-2), denoted by IW(ν,Ψ_ϕ(τ,τ)). 
Here, Ψ_ϕ(·,·) is a non-negative definite function with parameters ϕ. Note that we use the parameterization from <cit.> for the inverse-Wishart distribution, in particular, ν is the shape parameter and ν+|τ|-1 is the degrees of freedom parameter in the more common parameterization. <cit.> validate that this parameterization defines an infinite dimensional probability measure whose finite dimensional projection on grid τ coincides with the inverse-Wishart distribution IW(ν,Ψ_ϕ(τ,τ)). The model formulation is completed with prior specification for the hyperparameters. The error variance is assigned an inverse Gamma prior, σ_ϵ^2∼ IG(a_ϵ,b_ϵ). We focus primarily on stationary specifications under the prior structure in (<ref>). In particular, we work with mean function, μ_0(τ) ≡μ_0, and isotropic covariance function, Ψ_ϕ, within the Matérn class, a widely used class of covariance functions <cit.>. In general, the Matérn covariance function is specified by a scale parameter σ^2, a range parameter ρ, and a smoothness parameter ι. To encourage smoothness in the probability response curves, we set ι = 5/2, such that the covariance kernel is given by Ψ_ϕ(τ,τ^') = σ^2 ( 1+√(5)|τ-τ^'|/ρ+ 5|τ-τ^'|^2/3ρ^2) exp( -√(5)|τ-τ^'|/ρ), where ϕ={σ^2,ρ}. For hyperparameters μ_0, σ^2, ρ, we take the commonly used choice, μ_0∼ N(a_μ,b_μ), σ^2∼Gamma(a_σ,b_σ), ρ∼ Unif(a_ρ,b_ρ). Finally, we set κ = (ν-3)^-1, such that the continuous-time process for the Z_i(·) is a TP when μ and Σ are marginalized out (see Section <ref> for details). As a consequence, parameter ν controls the tail heaviness of the marginal process, with smaller values of ν corresponding to heavier tails. We place a uniform prior on ν, ν∼ Unif(a_ν,b_ν), with a_ν>3 to ensure positive definiteness of Σ/κ. As discussed in <cit.>, the correlation of repeated measurements on the same subject commonly has the following patterns. First, it should decrease with respect to the measurements' separation in time, while remaining positive to indicate the measurements are from the same subject. This feature is encapsulated by the form of the covariance kernel Ψ_ϕ. The IWP prior elicits realizations for which this property holds a priori, while enabling a flexible estimate of the covariance structure with information from the data a posteriori. Second, measurements that are made arbitrarily close in time are subject to imperfect correlation, possibly caused by subsampling of each subject. This feature is represented by the error term in our model. Moreover, the motivation for adding the error term arises from the fact that measurement error is introduced in the estimation of a continuous-time function based on data collected at discrete time points. Although the probability model is formulated through stochastic process realizations, posterior simulation is based on the corresponding finite dimensional distributions (f.d.d.s.). Consequently, to write the model for the data, we need to represent the likelihood and prior in multivariate forms through evaluating the functions on finite grids. Denoting Y_i(τ_i) by 𝐘_i, Z_i(τ_i) by 𝐙_i, and ϵ_i= (ϵ_i1,⋯,ϵ_iT_i)^⊤, the model for the data can be written as 𝐘_i|𝐙_i,ϵ_i ind.∼ ∏^T_i_t=1Bin(1, φ(Z_it+ϵ_it)), i=1,⋯,n, 𝐙_i|μ(τ_i),Σ(τ_i,τ_i) ind.∼ N(μ(τ_i),Σ(τ_i,τ_i)), ϵ_i|σ_ϵ^2 ind.∼ N(0,σ_ϵ^2 𝐈). Notice that the grids {τ_i:i=1,⋯,n} are not necessarily the same for all subjects. Therefore, the shared GP and IWP prior in (<ref>) need to be evaluated on the pooled grid τ=∪_i=1^nτ_i. 
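To make the finite-dimensional evaluation on the pooled grid concrete, the following minimal Python sketch builds the Matérn covariance with smoothness 5/2 given above and simulates binary trajectories from the hierarchical prior; the grid and all hyperparameter values are illustrative assumptions only, and scipy's inverse-Wishart parameterization is converted to the one used here (shape ν with mean Ψ/(ν-2)).

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

def matern52(tau1, tau2, sigma2, rho):
    """Matern covariance with smoothness 5/2, as specified in the text."""
    d = np.abs(np.subtract.outer(tau1, tau2))
    return sigma2 * (1 + np.sqrt(5) * d / rho + 5 * d**2 / (3 * rho**2)) \
        * np.exp(-np.sqrt(5) * d / rho)

# Pooled grid and illustrative hyperparameter values (assumptions, not the paper's).
tau = np.arange(0, 72.0)                       # e.g., a daily grid over 72 days
mu0, sigma2, rho, nu, sig2_eps = 0.0, 1.0, 8.0, 6.0, 0.25

Psi = matern52(tau, tau, sigma2, rho)
p = len(tau)

# IW(nu, Psi) in the paper's parameterization has mean Psi/(nu-2); scipy's
# invwishart has mean scale/(df - p - 1), so the matching df is nu + p - 1.
Sigma = invwishart(df=nu + p - 1, scale=Psi).rvs(random_state=rng)
mu = rng.multivariate_normal(mu0 * np.ones(p), (nu - 3.0) * Sigma)  # kappa = 1/(nu-3)

# Subject-level signals, noisy latent values, and binary responses for n subjects.
n = 45
Z = rng.multivariate_normal(mu, Sigma, size=n)                 # signal processes
Zlat = Z + rng.normal(0.0, np.sqrt(sig2_eps), size=Z.shape)    # add measurement error
Y = rng.binomial(1, 1.0 / (1.0 + np.exp(-Zlat)))               # binary responses
print(Y.shape, Y.mean())
```

Unbalanced designs are obtained by retaining, for each subject, only the grid points at which that subject was actually observed.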
If μ, Σ, and Ψ_ϕ denote μ(τ), Σ(τ,τ), and Ψ_ϕ(τ,τ), respectively, then μ|Σ,μ_0, ν ∼ N(μ_01, (ν-3) Σ), Σ|ν, ϕ ∼ IW(ν,Ψ_ϕ). The hierarchical model formulation for the data in (<ref>) and (<ref>) forms the basis for the posterior simulation algorithm, which is discussed in detail in Section <ref>. §.§ Model properties To fix ideas for the following discussion, we refer to Z_i(τ) as the signal process of the binary process Y_i(τ), and to 𝒵_i(τ)=Z_i(τ)+ϵ_i(τ) as the latent process of Y_i(τ). Since the stochastic process is characterized by its f.d.d.s., we shall investigate the random vectors 𝐘_τ= Y_i(τ), 𝒵_τ= 𝒵_i(τ), and 𝐙_τ= Z_i(τ), for a generic grid vector τ= (τ_1,⋯,τ_T)^⊤. We surpass the subject index i because the subject trajectories are identically distributed. The Supplementary Material includes proofs for the propositions included in this section. Among the various inference goals in a study that involves longitudinal binary data, estimating the probability response curve and the covariance structure of the repeated measurements are the most important ones. In Proposition <ref>, we derive the probability response curves and covariance matrix of the binary vector 𝐘_τ, conditional on the signal vector 𝐙_τ and error variance σ^2_ϵ. The probability response curve can be defined generically as 𝐏_𝐲τ = (Pr(Y_τ_1=y_τ_1|𝐙_τ,σ_ϵ^2), ⋯,Pr(Y_τ_T=y_τ_T|𝐙_τ, σ_ϵ^2) )^⊤, where y_τ_t is either 0 or 1. Without loss of generality, we focus on 𝐏_1τ. The probability response curve is given by 𝐏_1τ = E(π(𝒵_τ) |𝐙_τ,σ_ϵ^2), where π(𝐱) denotes the vector operator that applies the expit function to every entry of 𝐱. Regarding the covariance matrix, for τ∈τ, Var(Y_τ|𝐙_τ,σ_ϵ^2) = E(φ(𝒵_τ)|𝐙_τ,σ_ϵ^2) - E^2(φ(𝒵_τ)|𝐙_τ,σ_ϵ^2), and for τ, τ^'∈τ, with τ^'≠τ, Cov(Y_τ,Y_τ^'|𝐙_τ,σ_ϵ^2)= Cov(φ(𝒵_τ),φ(𝒵_τ^')|𝐙_τ,σ_ϵ^2). The conditional expectations in all of the above expressions are with respect to distribution, 𝒵_τ|𝐙_τ, σ_ϵ^2 ∼ N(𝐙_τ,σ_ϵ^2 𝐈). The practical utility of Proposition <ref> lies on performing posterior inference for the probability response curve and the covariance structure of the binary process, conditioning on the signal process and the noise. With posterior samples of 𝐙_τ and σ^2_ϵ, we can simulate 𝒵_τ from N(𝐙_τ,σ^2_ϵ𝐈) and numerically compute the corresponding moments in Proposition <ref>. The entries of 𝒵_τ are independent, given 𝐙_τ, and thus simulating 𝒵_τ is not computationally demanding, even when |τ| is large. We next establish a closer connection between the binary process and the signal process. Proposition <ref> reveals that the evolution of the binary process over time can be (approximately) expressed as a function of the expectation of the signal process and the total variance. Moreover, the covariance of the binary process is approximately the covariance of the signal process scaled by a factor related to the expectation of the signal. Consider the proposed model as described in (<ref>) and denote μ(τ)=μ, and Σ(τ,τ)=Σ. Then, Pr(Y_τ=1|μ,Σ,σ_ϵ^2)≈φ(E(Z_τ|μ,Σ))+Var(Z_τ|μ,Σ)+σ_ϵ^2/2φ^''(E(Z_τ|μ,Σ)), ∀τ∈τ, Cov(Y_τ,Y_τ^'|μ,Σ,σ_ϵ^2) ≈φ^'(E(Z_τ|μ,Σ))φ^'(E(Z_τ^'|μ,Σ)) Cov(Z_τ,Z_τ^'|μ,Σ) -1/4[Var(Z_τ|μ,Σ)+σ_ϵ^2][Var(Z_τ^'|μ,Σ)+σ_ϵ^2]φ^''(E(Z_τ|μ,Σ))φ^''(E(Z_τ^'|μ,Σ)), ∀τ,τ^'∈τ. Here, φ^'(x)=dφ(x)/dx=φ(x)[1-φ(x)] and φ^''(x)= d^2φ(x)/dx^2= φ(x)[1-φ(x)][1-2φ(x)]. Our inference results are based on exact expressions, such as the ones in Proposition <ref>. 
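As an illustration of how Proposition <ref> is used in practice, the following Python sketch evaluates the probability response curve and the covariance of the binary vector by Monte Carlo, given posterior draws of the signal vector and the error variance; the draws supplied in the example are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def response_curve_and_cov(Z_draws, sig2_eps_draws, n_mc=200):
    """Monte Carlo evaluation of Proposition 1.

    Z_draws:        array (S, T) of posterior draws of the signal vector Z_tau
    sig2_eps_draws: array (S,) of posterior draws of the error variance
    Returns posterior samples of P(Y_t = 1 | Z, sig2_eps) and of Cov(Y_t, Y_t').
    """
    S, T = Z_draws.shape
    prob = np.empty((S, T))
    cov = np.empty((S, T, T))
    for s in range(S):
        # Simulate the latent vector: calZ | Z, sig2_eps ~ N(Z, sig2_eps I).
        calZ = Z_draws[s] + np.sqrt(sig2_eps_draws[s]) * rng.standard_normal((n_mc, T))
        pi = expit(calZ)                       # expit applied elementwise
        prob[s] = pi.mean(axis=0)              # E[phi(calZ) | Z, sig2_eps]
        cov[s] = np.cov(pi, rowvar=False)      # off-diagonal: Cov(phi, phi')
        # Diagonal uses the exact binary variance E(phi) - E(phi)^2.
        np.fill_diagonal(cov[s], prob[s] - prob[s] ** 2)
    return prob, cov

# Hypothetical posterior draws purely for illustration.
Zp = rng.normal(size=(100, 30))
s2 = np.full(100, 0.25)
P, C = response_curve_and_cov(Zp, s2)
print(P.shape, C.shape)
```

Because the entries of the latent vector are conditionally independent given the signal, the inner simulation is cheap even for dense grids, as noted in the text.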
Nonetheless, the approximate expressions derived in Proposition <ref> are practically useful to gain more insight on properties of the binary process, as well as for prior specification. Note that exploring properties of the binary process is not trivial due to the lack of general analytical forms for moments of logit-normal distributions. Hence, a connection with properties of the signal process is useful. For instance, if we specify the covariance for the signal process to decrease as a function of separation in time, an analogous structure will hold (approximately) for the binary process. The previous discussion focuses on studying the f.d.d.s of the binary process given the signal process. Therefore, it is important to investigate the marginal f.d.d.s of the signal process. We show that, under the specification κ = (ν-3)^-1, the f.d.d.s. of the signal process correspond to a multivariate Student-t (MVT) distribution, and thus the signal process is a TP. We first state the definition of the MVT distribution and the TP <cit.>. Notice that we use the covariance matrix as a parameter for the MVT distribution, instead of the more common parameterization based on a scale matrix. The random vector 𝐙∈ℝ^n is MVT distributed, denoted 𝐙∼ MVT(ν,μ,Ψ), if it has density Γ(ν+n/2)/[(ν-2)π]^n/2Γ(ν/2)|Ψ|^-1/2( 1 + (𝐙-μ)^TΨ^-1 (𝐙-μ)/ν-2)^-ν+n/2 where ν > 2 is the degrees of freedom parameter, μ∈ℝ^n, and Ψ is an n× n symmetric, positive definite matrix. Under this parameterization, E(𝐙)=μ and Cov(𝐙)=Ψ. Consider a process Z(τ) formulated through mean function μ(τ), a non-negative kernel function Ψ(τ,τ), and parameter ν>2, such that its f.d.d.s correspond to the MVT distribution with mean vector and covariance matrix induced by μ(τ) and Ψ(τ,τ), respectively. Then, Z(τ) follows a TP, denoted by Z(τ)∼ TP(ν,μ(τ),Ψ(τ,τ)). Marginalizing over μ and Σ in (<ref>) and (<ref>), the implied distribution for 𝐙_τ is MVT, with degrees of freedom parameter ν (with ν > 3 in our context), mean vector μ_0 1, and covariance matrix Ψ_ϕ = Ψ_ϕ(τ,τ). We thus obtain the following result for the signal process. Under the model formulation in (<ref>) and (<ref>), the signal process follows marginally a TP, that is, Z∼ TP(ν,μ_0,Ψ_ϕ). Proposition <ref> is beneficial in terms of both computation and interpretation. Without a constraint on κ, as in <cit.>, the marginal distribution of 𝐙_τ does not have analytical form. Hence, for prediction at new time points, one has to sample from an IWP and a GP, which is computationally intensive, especially for a dense grid. In contrast, we can utilize the analytical form of the TP predictive distribution to develop a predictive inference scheme that resembles that of GP-based models (see Section <ref>). Moreover, the result highlights the model property that the degrees of freedom parameter ν controls how heavy tailed the process is. Smaller values of ν correspond to heavier tails. As ν gets larger, the tails resemble Gaussian tails. Moreover, ν controls the dependence between Z_τ and Z_τ^', which are jointly MVT distributed, with smaller values indicating higher dependence. Such interpretation of parameter ν facilitates the choice of its hyperprior. The local behavior of stochastic process realizations is crucial for interpolation. Under the longitudinal setting, continuous, or perhaps differentiable, signal process trajectories are typically anticipated. Evidently, the observed data can not visually inform the smoothness of signal process realizations. 
Rather, such smoothness should be captured in the prior specification that incorporates information about the data generating mechanism. For weakly stationary processes, mean square continuity is equivalent to the covariance function being continuous at the origin <cit.>. And, the process is ι-times mean square differentiable if and only if the 2ι-times derivative of the covariance function at the origin exists and is finite. Under our model, the signal process follows a TP marginally. Its covariance structure is specified by the Matérn covariance function with smoothness parameter ι. Referring to the behavior of the Matérn class of covariance functions at the origin, we obtain the following result for the mean square continuity and differentiability of the signal process. Consider the proposed model with marginal signal process Z∼ TP(ν,μ_0,Ψ_ϕ), where Ψ_ϕ belongs to the Matérn family of covariance functions with smoothness parameter ι. Then, the signal process is mean square continuous and ⌊ι⌋-times mean square differentiable. The results in this section study several properties that are useful in model implementation. Indeed, the practical utility of such model properties with respect to prior specification and posterior inference is discussed in the next section. §.§ Prior specification and posterior inference The model described in Section <ref> contains parameters {σ_ϵ^2,μ_0,σ^2,ρ,ν} whose prior hyperparameters need to be specified. We develop a default specification strategy that relies on the model properties explored in Section <ref>. First, we set the prior for μ_0 such that the prior expected probability response curve does not favor any category, and the corresponding prior uncertainty bands span a significant portion of the unit interval. For instance, this can be achieved with prior μ_0∼ N(0,100) which yields prior expected probability of positive response of about 1/2 across τ. In general, we would not expect to have available prior information about the variance and correlation structure of the unobserved signal process, which are controlled by parameters σ^2 and ρ. However, Proposition <ref> suggests an approximate relationship between the covariance structure of the binary process and the signal process, and we can thus specify the corresponding priors similarly to GP-based models. In particular, we select the uniform prior for the range parameter ρ such that the correlation between Z_τ and Z_τ^' decreases to 0.05 when the difference between τ and τ^' is within a pre-specified subset of the observation time window. For instance, for the data analysis in Section <ref> where the total observation window comprises 72 days, we used a Unif(3,12) prior for ρ, which implies that the aforementioned correlation decreases to 0.05 when the time difference ranges from 7 to 31 days. The hyperprior for ν is Unif(a_ν,b_ν). We specify a_ν>3 to reflect the constraint for Σ/(ν-3) to be a well-defined covariance matrix, and b_ν large enough such that the tail behavior of the marginal TP is hard to distinguish from that of a GP. For instance, a default choice is a_ν=4 and b_ν=30. We follow <cit.> to specify the prior for σ_ϵ^2∼ IG(a_ϵ,b_ϵ). Integrating out σ_ϵ^2, the measurement error ϵ is marginally distributed as a univariate Student-t distribution with location parameter 0, scale parameter b_ϵ/a_ϵ, and degrees of freedom parameter 2a_ϵ. 
For a predetermined measurement error range (-R,R) with degree of freedom υ, we can use the relationship ± t_1-(1-q)/2^υ√(b_ϵ/a_ϵ)=± R to obtain a_ϵ= υ/2 and b_ϵ= R^2υ/[2(t_1-(1-q)/2^υ)^2], where t^υ_q is the q-th percentile of a Student-t distribution with υ degrees of freedom. Proceeding to posterior inference, we develop an MCMC algorithm based on (<ref>) and (<ref>). We introduce layers of latent variables, beginning with ξ_it∼ PG(1,0) for every observation Y_it, where PG(a,b) denotes the Pólya-Gamma distribution with shape parameter a and tilting parameter b <cit.>. Denote the collection of Pólya-Gamma variables for each subject by ξ_i=(ξ_i1,⋯,ξ_iT_i)^⊤. Also, introduce 𝒵_it=Z_it+ϵ_it, and let 𝒵_i= (𝒵_i1,⋯,𝒵_iT_i)^⊤. Recall that τ=∪_i=1^nτ_i is the pooled grid. Denote the evaluations on the pooled grid by 𝐙̃_i=Z_i(τ) and let 𝐙_i^*=𝐙̃_i∖𝐙_i. That is, 𝐙_i^*=Z_i(τ_i^*), where τ_i^*=τ∖τ_i is the set of grid points at which the i-th trajectory misses observations. Then, the hierarchical model for the data {Y_it: t=1,⋯,T_i, i=1,⋯,n} can be expressed as Y_it|𝒵_itind.∼ Bin(1,φ(𝒵_it)), ξ_iti.i.d.∼ PG(1,0), t=1,⋯,T_i, 𝒵_i|𝐙_i,σ_ϵ^2ind.∼ N(𝐙_i,σ_ϵ^2𝐈_T_i), 𝐙̃_i=(𝐙_i,𝐙_i^*)^⊤|μ,Σi.i.d.∼N(μ,Σ), i=1,⋯,n, σ_ϵ^2∼ IG(a_ϵ,b_ϵ), μ|μ_0,Σ,ν∼ N(μ_01,(ν-3)Σ), μ_0∼ N(a_μ,b_μ), Σ|ν,Ψ_ϕ∼ IW(ν,Ψ_ϕ), Ψ_ϕ=Ψ_ϕ(τ,τ), ϕ={σ^2,ρ}, σ^2∼Gamma(a_σ,b_σ), ρ∼ Unif(a_ρ,b_ρ), ν∼ Unif(a_ν,b_ν). Hence, the joint posterior density of all model parameters can be written as p({𝒵_i}_i=1^n, {ξ_i}_i=1^n,{𝐙̃_i}_i=1^n, μ,Σ,σ_ϵ^2,μ_0,σ^2,ρ,ν|{𝐘_i}_i=1^n) ∝∏_i=1^n{p(𝐘_i|𝒵_i,ξ_i)p(ξ_i)p(𝒵_i|𝐙_i,σ_ϵ^2)p(𝐙_i^*|𝐙_i,μ,Σ)p(𝐙_i|μ,Σ)} × p(μ|μ_0,Σ,ν)p(Σ|σ^2,ρ,ν)p(σ_ϵ^2)p(μ_0)p(σ^2)p(ρ)p(ν). The introduction of the latent variables enables a Gibbs sampling scheme with conditionally conjugate updates. Denote generically by p(θ| -) the posterior full conditional for parameter θ. Notice that p(𝒵_i,ξ_i| -)∝ p(𝐘_i |𝒵_i,ξ_i) p(ξ_i) p(𝒵_i|𝐙_i,σ_ϵ^2), which matches the Bayesian logistic regression structure in <cit.>. Therefore, p(𝒵_i| -) and p(ξ_i| -) can be sampled directly. Factorizing the prior of 𝐙̃_i as p(𝐙̃_i|μ,Σ)= p(𝐙_i^*|𝐙_i,μ,Σ)p(𝐙_i|μ,Σ), results in p(𝐙_i^*,𝐙_i| -)∝ p(𝐙_i^*|𝐙_i,μ,Σ) p(𝐙_i|μ,Σ) p(𝒵_i|𝐙_i,σ_ϵ^2). This forms yields ready updates for 𝐙_i^* and 𝐙_i using GP-based predictive sampling. All other model parameters can be sampled using standard updates. The details of the MCMC algorithm are given in the Supplementary Material. We have linked the probability response curve and covariance structure of the binary process Y_i(τ) to the corresponding signal process Z_i(τ). To estimate the signal process, we obtain posterior samples for 𝐙_i^+=Z_i(τ^+), where τ^+⊃τ is a finer grid than the pooled grid. Denote τ̌=τ^+∖τ as the time points where none of the subjects have observations, and let 𝐙̌_i=Z_i(τ̌). Using the marginal TP result from Proposition <ref>, [ 𝐙̃_i; 𝐙̌_i ]∼ MVT( ν, [ μ_0τ; μ_0τ̌ ], [ Ψ_τ,τ Ψ_τ,τ̌; Ψ_τ̌,τ Ψ_τ̌,τ̌ ]), where μ_0·=μ_01_|·|, and Ψ_·,· denotes the covariance function evaluation Ψ_ϕ(·,·). Next, based on the conditionals of the MVT distribution <cit.>, 𝐙̌_i|𝐙̃_i ∼ MVT ( ν+|τ|,μ̌_iτ̌,ν+S_iτ-2/ν+|τ|-2Ψ̌_τ̌,τ̌), with μ̌_iτ̌=Ψ_τ̌,τΨ_τ,τ^-1(𝐙̃_i- μ_0τ)+ μ_0τ̌, S_iτ=(𝐙̃_i- μ_0τ)^⊤Ψ_τ,τ^-1(𝐙̃_i- μ_0τ) and Ψ̌_τ̌,τ̌=Ψ_τ̌,τ̌-Ψ_τ̌,τΨ_τ,τ^-1Ψ_τ,τ̌. Using (<ref>), given each posterior sample for 𝐙̃_i, μ_0, ϕ and ν, we can complete the posterior realization for the signal process over the finer grid. As discussed in Section <ref>, we can then obtain full posterior inference for functionals of the binary process. 
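A minimal sketch of the augmented updates for the latent vector 𝒵_i and the Pólya-Gamma variables ξ_i is given below, assuming current values of 𝐙_i and σ_ϵ^2 for a single subject; a truncated sum-of-Gammas representation of the Pólya-Gamma law is used here as a simple stand-in for a dedicated sampler.

```python
import numpy as np

rng = np.random.default_rng(2)

def rpolyagamma_approx(z, K=200):
    """Approximate draw from PG(1, z) via the (truncated) sum-of-Gammas representation
    PG(b, c) = (1 / (2 pi^2)) * sum_k Gamma(b, 1) / ((k - 1/2)^2 + c^2 / (4 pi^2)).
    Truncation at K terms is an approximation; dedicated samplers are more accurate."""
    z = np.atleast_1d(np.asarray(z, dtype=float))
    k = np.arange(1, K + 1)
    g = rng.gamma(shape=1.0, scale=1.0, size=(z.size, K))
    denom = (k - 0.5) ** 2 + (z[:, None] / (2.0 * np.pi)) ** 2
    return (g / denom).sum(axis=1) / (2.0 * np.pi ** 2)

def gibbs_update_latent(Y_i, Z_i, xi_i, sig2_eps):
    """One pass of the conditionally conjugate updates for subject i:
    draw calZ_i | - ~ N(m_i, V_i), then refresh xi_it ~ PG(1, calZ_it)."""
    lam = Y_i - 0.5                                   # lambda_i = (Y_it - 1/2)
    V = 1.0 / (xi_i + 1.0 / sig2_eps)                 # diagonal of V_i (Omega_i is diagonal)
    m = V * (lam + Z_i / sig2_eps)                    # m_i = V_i (lambda_i + Z_i / sig2_eps)
    calZ = m + np.sqrt(V) * rng.standard_normal(Y_i.size)
    xi_new = rpolyagamma_approx(calZ)
    return calZ, xi_new

# Toy usage with hypothetical current values for one subject.
Y_i = np.array([1, 0, 1, 1, 0])
Z_i = np.zeros(5)
xi_i = np.ones(5)
calZ, xi = gibbs_update_latent(Y_i, Z_i, xi_i, sig2_eps=0.25)
print(calZ.round(3), xi.round(3))
```

The remaining updates (for 𝐙̃_i, μ, Σ, and the hyperparameters) follow the normal–inverse-Wishart conjugacy and are spelled out in the Supplementary Material.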
The predictive distribution of the signal process also illustrates the information borrowed across subjects. For the i-th subject, the grid, τ^+, where predictions are made can be partitioned as τ_i∪τ_i^*∪τ̌, where τ_i^*=τ∖τ_i represents the grid points where subject i does not have observations, while at least one of the other subjects have observations. Then, we first predict Z_i(τ_i^*) conditioning on Z_i(τ_i) by the GP predictive distribution, and next predict Z_i(τ̌) conditioning on Z_i(τ_i) and Z_i(τ_i^*) by the TP predictive distribution. Comparing with the GP, (<ref>) suggests the TP is scaling the predictive covariance by the factor ν+S_iτ-2/ν+|τ|-2. Note that S_iτ is distributed as the sum of squares of |τ| independent MVT_1(ν,0,1) random variables and hence E(S_iτ)=|τ|. Accordingly, if we have made good interpolation prediction, the predictive covariance for extrapolation of Z_i(τ̌) is expected to scale down and vice versa. Comparing with predicting both Z_i(τ_i^*) and Z_i(τ̌) conditioning on Z_i(τ_i) through the GP predictive distribution, our model allows using information across subjects to adjust the individual trajectory's credible interval. §.§ Synthetic data examples We assess the model by applying it to carefully designed simulation scenarios that reflect our main contributions. The full details are provided in the Supplementary Material. Here, we briefly discuss the simulation study setting and summarize the main findings. For the two sets of simulation studies we considered, the longitudinal binary responses are generated from the following generic process: Y_i(τ_i) |𝒵_i(τ_i) ind.∼ Bin(1,η(𝒵_i(τ_i))), τ_i=(τ_i1,⋯,τ_iT_i), i=1,⋯,n, 𝒵_i(τ_i) =f(τ_i)+ω_i+ϵ_i ϵ_ii.i.d.∼ N(0,σ_ϵ^2𝐈), where η(·) is a generic link function mapping ℝ to (0,1), f(τ) is a signal function, and ω_i is a realization from a mean zero continuous stochastic process that depicts the temporal covariance within the i-th subject. The first set of simulation studies focuses on evaluating the effectiveness of the proposed model in capturing the fluctuation of the temporal trend. We consider different link function, signal function, and temporal covariance structure combinations, and we simulate unbalanced data with different sparsity levels. The results demonstrate that, despite the data generating process and the sparsity level, the model can recover not only the subject's probability response curve, but also the underlying continuous signal function. The objective of the second set of simulation studies is to explore the performance of the proposed model in estimating the within subject covariance structure. To this end, we examine a number of possible choices for generating the ω_i in (<ref>), which imply covariance structures that are not of the same form as the covariance kernel of the model. The results reveal that the model can recover the true covariance between the signal variables, (Z_i(τ_it),Z_i(τ_i t^')), and the binary responses, (Y_i(τ_it),Y_i(τ_i t^')), thus providing empirical evidence for the robustness of the covariance kernel choice. In both cases, we examine simplified versions of the model for comparison. The simplified models are constructed by modeling either the mean structure or the covariance structure parametrically in the two sets of simulation studies, respectively. Demonstrating that the proposed model outperforms its parametric backbones, we highlight the practical utility of the nonparametric modeling for the mean and covariance structure. 
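Before turning to related work, we note that the Student-t process predictive distribution discussed above can be sampled directly. The following Python sketch implements the MVT conditional, including the data-dependent scaling factor (ν+S_iτ-2)/(ν+|τ|-2); the kernel choice and all numerical inputs are hypothetical illustrations.

```python
import numpy as np

rng = np.random.default_rng(3)

def matern52(t1, t2, sigma2, rho):
    d = np.abs(np.subtract.outer(t1, t2))
    return sigma2 * (1 + np.sqrt(5) * d / rho + 5 * d**2 / (3 * rho**2)) \
        * np.exp(-np.sqrt(5) * d / rho)

def tp_predict(Z_obs, t_obs, t_new, mu0, sigma2, rho, nu, n_draws=1):
    """Sample Z_i(t_new) | Z_i(t_obs) from the TP predictive: an MVT with
    nu + |tau| degrees of freedom, mean mu_check, and covariance
    ((nu + S - 2) / (nu + |tau| - 2)) * Psi_check."""
    K_oo = matern52(t_obs, t_obs, sigma2, rho)
    K_no = matern52(t_new, t_obs, sigma2, rho)
    K_nn = matern52(t_new, t_new, sigma2, rho)
    A = np.linalg.solve(K_oo, Z_obs - mu0)             # K_oo^{-1} (Z - mu0 1)
    mu_check = K_no @ A + mu0
    S = float((Z_obs - mu0) @ A)                       # Mahalanobis-type statistic S_i
    Psi_check = K_nn - K_no @ np.linalg.solve(K_oo, K_no.T)
    dof = nu + len(t_obs)
    cov = (nu + S - 2.0) / (nu + len(t_obs) - 2.0) * Psi_check
    # Draw from an MVT with this dof and *covariance* matrix via a scale mixture:
    # scale matrix = cov (dof - 2) / dof, then X = mu + L z sqrt(dof / W), W ~ chi^2_dof.
    L = np.linalg.cholesky(cov * (dof - 2.0) / dof + 1e-10 * np.eye(len(t_new)))
    z = rng.standard_normal((n_draws, len(t_new)))
    w = rng.chisquare(dof, size=(n_draws, 1))
    return mu_check + (z @ L.T) * np.sqrt(dof / w)

# Toy usage with hypothetical observed signal values and hyperparameters.
t_obs = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
Z_obs = np.array([0.3, 0.1, -0.4, 0.2, 0.8])
t_new = np.array([15.0, 30.0, 60.0])
print(tp_predict(Z_obs, t_obs, t_new, mu0=0.0, sigma2=1.0, rho=8.0, nu=6.0, n_draws=3))
```

In the full algorithm, the conditioning set is the pooled grid and the hyperparameters are replaced by their posterior draws, so the predictive uncertainty adapts across MCMC iterations exactly as described above.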
§.§ Connections with existing literature Our methodology is broadly related with certain Bayesian nonparametric methods. The proposed model is related to a particular class of conditional models, known as transition models, which induce the aging effect by allowing past values to explicitly affect the present observation, usually through autoregressive dynamics. <cit.> studied a class of non-Gaussian autoregression models for continuous responses, which can be extended to handle binary longitudinal outcomes by treating them as a discretized version of the continuous outcomes. <cit.> developed a nonparametric density regression model for ordinal regression relationships that evolve in discrete time. Compared with the proposed methodology, these models are more flexible in terms of the binary response distribution. However, it is demanding to handle higher than first-order dynamics, and there is no natural way to treat missing data under a discrete time autoregressive framework, hindering applications for unbalanced longitudinal studies. The proposed model is more closely related to subject-specific models, where the responses are assumed to be independent conditioning on subject-specific effects. The main approach has been to construct models for longitudinal binary responses building from the various Bayesian nonparametric models for longitudinal continuous data, developed under the mixed effects framework <cit.>. For instance, embedding a Dirichlet process mixture of normals prior as the probability model for the latent variables, <cit.> and <cit.> consider binary responses, and <cit.> handle mixed-scale data comprising continuous and binary responses. The proposed model differs in the way of treating subject-specific effects, and it arguably offers benefits in terms of computational efficiency. There is a growing trend of adopting functional data analysis tools in longitudinal data modeling. These methods specify observations as linear combinations of functional principal components (FPCs), with the FPCs represented as expansions of a pre-specified basis. Bayesian methods include <cit.> for continuous responses, and <cit.> for binary and count responses. Challenges include inference which is sensitive to the basis choice, and a complex orthogonality constraint on the FPCs. Recently, <cit.> proposed an approach that can serve as foundation for generalized FPC analysis of sparse and irregular binary responses. Nonetheless, our model involves a more parsimonious formulation, including the structure with the GP and TP predictive distributions. § APPLICATION WITH BINARY RESPONSES: STUDENTLIFE DATA §.§ Data for analysis Studentlife <cit.> is a study that integrates automatic sensing data and an EMA component to probe students' mental health status and to study its relationship with students' academic performance and behavior trends. The data were collected by a smartphone app carried by 48 students over a 10-week term at Dartmouth College. The dataset is available from the R package “studentlife” <cit.>. We focus on a subset of the data that corresponds to assessing the students' emotional status. In the Studentlife study, the assessment of emotion is conducted by the Photographic Affect Meter (PAM), a tool for measuring affect in which users select from a wide variety of photos the one which best suits their current mood <cit.>. The PAM survey is deployed to the mobile app and prompts everyday during the study period. 
The participants either respond to the survey, or ignore it, introducing missingness. The outcome of the survey contains two attributes, the PAM valence and the PAM arousal. They are scores of -2 to 2 (excluding 0) that measure the subject's extent of displeasure to pleasure or state of activation ranging from low to high, respectively. We dichotomize the valence and arousal scores by their sign, representing the positive values by 1. In this section, we focus on analyzing the change of binary valence and arousal responses to evaluate students' affects as the term progresses. The data were collected during the spring 2013 term at Dartmouth college. We set the study period according to the official academic calendar, from the first day of classes (March 25, 2013) to the end of the final exam period (June 4, 2013), resulting in a total of 72 days. We exclude subjects with less than 12 responses, resulting in 45 students. The longitudinal recordings of valence or arousal of the i-th student are denoted by Y_i(τ_i), for i=1,⋯,45, where the student-specific grid points are a subset of τ= (0,1,⋯,71)^⊤, representing the days on which the measurements are recorded. Several special events occurred during the study period, and we are particularly interested in investigating the change of students' affects on the time intervals around these events. Specifically, the events and corresponding periods are: (i) Days following the Boston marathon bombing (April 15, 2013 to April 17, 2013); (ii) The Green Key (a spring festival at Dartmouth) period (May 17, 2013 to May 18, 2013); (iii) The Memorial Day long weekend (May 25, 2013 to May 27, 2013); (iv) The final examination period (May 31, 2013 to June 3, 2013). We retrieve the data for the specific responses and study period from the R package “studentlife” that contains the database for the entire study. Over all observations, the percentage of missing values is 31.1%. There are slightly more missing responses at the beginning and toward the end of the study, while the missing pattern for each subject can be viewed as random. We further explore the correlations between the binary responses within a week. We split the whole observation sequence into batches representing a week, and empirically calculate the Pearson and the tetrachoric correlation coefficient for each pair of time and distance combinations. Figure <ref> presents the results. It suggests that the correlation of the students' response to valence and arousal decreases slowly in time. §.§ Analysis and results We fit the proposed model for the binary valence and arousal responses separately. We specify the prior for the model parameters by the procedure mentioned in Section <ref>. (Results from prior sensitivity analysis are presented in the Supplementary Material.) Posterior inference results are based on 5000 MCMC samples obtained every 4 iterations from a chain of 50000 iterations with a 30000 burn-in period (which is conservative). We first examine in Figure <ref> the probability response curves, defined as the probability of obtaining positive valence or arousal as a function of time. For the valence, the happiness level drops as the term begins and increases when the term ends. The Boston marathon bombing may have had a minor effect on the valence. We observe local peaks around the Green Key festival and the Memorial Day holiday. As the students finish their exams, there is a trend toward happiness. 
As for arousal, it is relatively stable at the beginning of the term, and fluctuates as the term progresses. There is a drop in activation level after the Boston marathon bombing and during the final exam period, while the activation level reaches a local maximum at around the Green Key festival and the Memorial Day holiday. Moreover, we assess the student's emotional status on specific days. According to <cit.>, various states of emotional status can be represented by points located at the two dimensional mood coordinate space spanned by valence for the horizontal dimension and arousal for the vertical dimension. Moods such as excitement, distress, depression, and contentment, are represented by points in the quadrants of the space. For each observation, we can map the corresponding pairs of probabilities for positive valence and arousal onto the unit square in the mood space. In Figure <ref>, the density heatmap is obtained by the posterior samples of positive probabilities for a new student of the same cohort, while the posterior means of the in-sample positive probabilities are marked by crosses. Panels (a) and (b) suggest the students are mostly excited at the festival and holiday. Moving from panel (c) to panel (d), we observe that the happiness level increases and the activation level decreases towards the end of the exam period. We also obtain the posterior point and 95% interval estimate for the covariance kernel of the signal process, which is displayed in Figure <ref>. It is noteworthy that there is a similar decreasing trend for the two distinct binary responses of valence and arousal. The practical range, defined as the distance at which the correlation is 0.05, has an estimated mean of 20.99 for valence and 22.97 for arousal. §.§ Performance comparisons For comparison with a traditional approach, we consider an analysis of the data under the GLMM setting. In particular, we assume the model Y_it|𝒵_itind.∼ Bin(1,φ(𝒵_it)), 𝒵_it=τ̃_it^⊤β+∑_k=1^KS_itkb_k+μ_i+ϵ_it, t=1,⋯,T_i, i=1,⋯, n, where τ̃_it=(1,τ_it)^⊤, β is the vector of fixed effects, and ϵ_iti.i.d.∼N(0,σ^2_ϵ) is the measurement error. To allow flexibility in modeling the time effect, we consider cubic B-spline basis functions with K=9 knots that separate naturally the observed interval by week; S_itk is the k-th basis associated with time, with parameter b_ki.i.d.∼N(0,σ^2_b). Finally, μ_ii.i.d.∼N(0,σ_μ^2) are subject-specific random effects. The model is implemented using the integrated nested Laplace approximation (INLA) approach <cit.> with the “INLA” package in R <cit.>. We used the default choices provided by the R package for the prior on β (a flat prior), and for the values of the variance terms, σ^2_ϵ, σ^2_b, and σ^2_μ. We perform model comparison using two different metrics: the posterior predictive loss criterion which combines a goodness-of-fit term, G(ℳ), and a penalty term, P(ℳ), for model complexity <cit.>; and, the continuous ranked probability score (CRPS), defined in terms of predictive cumulative distribution functions <cit.>. Both criteria can be calculated from the posterior samples for model parameters, and both favor the model with a smaller value. Table <ref> summarizes the results. For the valence response, both criteria favor the proposed model. As for the arousal response, the proposed model provides a more accurate fit to the data, while being penalized more than the GLMM with respect to model complexity. 
Nonetheless, our model is favored in terms of total posterior predictive loss, as well as by the CRPS criterion. § MODEL FOR ORDINAL RESPONSES §.§ The extended model We extend the model developed in Section <ref> to handle ordinal responses. Suppose the observation on subject i at time τ_it, denoted by Y_it, takes C possible categories. We can equivalently encode the response as a vector with binary entries 𝐘_it=(Y_i1t⋯,Y_iCt), such that Y_it=j is equivalent to Y_ijt=1 and Y_ikt=0 for any k≠ j. We assume a multinomial response distribution for 𝐘_it, factorized in terms of binomial distributions, Mult(𝐘_it| m_it,ω_i1t,⋯,ω_iCt)=∏_j=1^C-1Bin(Y_ijt| m_ijt,φ(Z_ijt+ϵ_ijt)) where m_it=∑_j=1^CY_ijt≡ 1, m_i1t=m_it, and m_ijt=m_it-∑_k=1^j-1Y_ikt. This factorization bridges the gap between binary and ordinal responses. Similar to the model for binary responses, we adopt a functional data analysis perspective on {Z_ijt}, modeling them separately through the hierarchical framework developed in Section <ref>. That is, Z_ij(τ)|μ_j,Σ_ji.i.d.∼ GP(μ_j,Σ_j), for i=1,⋯,n, and μ_j|Σ_jind.∼ GP(μ_0j, (ν_j -3) Σ_j), Σ_jind.∼IWP(ν_j,Ψ_ϕ_j), where ϕ_j={σ^2_j,ρ_j}, for j=1,⋯,C-1. The error terms are modeled as ϵ_ijt|σ^2_ϵ jind.∼N(0,σ^2_ϵ j). Hence, the hierarchical model for the data can be expressed as 𝐘_i|{𝐙_ij},{ϵ_ij}ind.∼∏_t=1^T_i∏_j=1^C-1Bin(Y_ijt| m_ijt,φ(Z_ijt+ϵ_ijt)), i=1,⋯,n, 𝐙_ij|μ_j(τ_i),Σ_j(τ_i,τ_i)ind.∼ N(μ_j(τ_i),Σ_j(τ_i,τ_i)), ϵ_ij|σ_ϵ j^2ind.∼ N(0,σ_ϵ j^2 𝐈), μ_j|μ_0j,Σ_j,ν_j ind.∼ N(μ_0j1, (ν_j - 3) Σ_j); Σ_j|ν_j,Ψ_jind.∼ IW(ν_j,Ψ_j), j=1,⋯,C-1 where 𝐘_i=(𝐘_i1,⋯,𝐘_iT_i)^⊤, 𝐙_ij=(Z_ij1,⋯,Z_ijT_i)^⊤, ϵ_ij=(ϵ_ij1,⋯,ϵ_ijT_i)^⊤, and the collection of the functional evaluations on the pooled grid τ are denoted by the corresponding bold letter. The structure in (<ref>) is referred to as the continuation-ratio logits representation of the multinomial distribution <cit.>. In the context of Bayesian nonparametric modeling, it has been used as the kernel of nonparametric mixture models for cross-sectional ordinal regression <cit.>. Examining model properties reveals the practical utility of the continuation-ratio logits structure. The factorization in (<ref>) allows us to examine the probability response curves and the within subject covariance structure in the same fashion as for binary responses. Specifically, the continuation-ratio logit for response category j is the logit of the conditional probability of response j, given that the response is j or higher. As a consequence, for any finite grid τ=(τ_1,⋯,τ_T)^⊤, the probability response curves are given by 𝐏_𝐣τ = (Pr(Y_τ_1=j|𝐙_τ,σ_ϵ^2), ⋯,Pr(Y_τ_T=j|𝐙_τ, σ_ϵ^2))^⊤ =E( π_j τ|𝐙_j τ,σ_ϵ j^2 ) ∏_k=1^j-1E( (1-π_k τ)|𝐙_k τ,σ_ϵ k^2 ), where π_jτ=(φ(𝒵_j1),⋯,φ(𝒵_jT))^⊤ and 𝒵_j τ|𝐙_j τ,σ_ϵ j^2∼ N(𝐙_j τ, σ_ϵ j^2𝐈_T), for j=1,⋯,C. To avoid redundant expressions, we include the term π_Cτ and set it always equal to 1. As for the covariance structure, we study the joint probability of the repeated measurements on the same subject at time τ and τ^' taking category j and j^'. Exploiting the conditional independence structure across the categories, Pr(Y_τ=j,Y_τ^'=j^'|{𝐙_j τ},{σ_ϵ j^2}) ={ E(π_jτπ_jτ^'|𝐙_j τ,σ_ϵ j^2)∏_k≠ jE[(1-π_kτ)(1-π_kτ^')|𝐙_k τ,σ_ϵ k^2] j=j^' E[π_jτ(1-π_jτ^')|𝐙_j τ,σ_ϵ j^2] E[(1-π_j^'τ)π_j^'τ^'|𝐙_j^'τ,σ_ϵ j^'^2] ×∏_k≠ j,j^'E[(1-π_kτ)(1-π_kτ^')|𝐙_k τ,σ_ϵ k^2] j≠ j^'.. Hence, we can explore the covariance of the two ordinal responses 𝐘_τ,𝐘_τ^' by studying the pairwise covariance for each entry. 
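Given posterior output from the C-1 per-category binary fits, the ordinal probability response curves can be composed directly from the factorization above. The following Python sketch is a minimal illustration; the posterior draws it operates on are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def ordinal_response_curves(Z_draws_by_cat, sig2_eps_by_cat, n_mc=200):
    """Compose ordinal probability response curves from the C-1 binary sub-models,
    using P(Y_t = j) = E(pi_j) * prod_{k<j} E(1 - pi_k), with pi_C = 1.

    Z_draws_by_cat:  list of C-1 arrays, each (S, T), posterior draws of Z_j(tau)
    sig2_eps_by_cat: list of C-1 arrays (S,), posterior draws of sigma_eps_j^2
    Returns an array (S, C, T) of posterior draws of the response curves.
    """
    C1 = len(Z_draws_by_cat)                       # C - 1 binary sub-models
    S, T = Z_draws_by_cat[0].shape
    Epi = np.ones((S, C1 + 1, T))                  # last slot holds pi_C = 1
    for j in range(C1):
        eps = np.sqrt(sig2_eps_by_cat[j])[:, None, None] \
            * rng.standard_normal((S, n_mc, T))
        Epi[:, j, :] = expit(Z_draws_by_cat[j][:, None, :] + eps).mean(axis=1)
    curves = np.empty((S, C1 + 1, T))
    survivor = np.ones((S, T))                     # prod_{k<j} E(1 - pi_k)
    for j in range(C1 + 1):
        curves[:, j, :] = Epi[:, j, :] * survivor
        survivor = survivor * (1.0 - Epi[:, j, :])
    return curves

# Hypothetical posterior draws for C = 4 ordinal levels (3 binary sub-models).
Z_list = [rng.normal(loc=m, size=(50, 20)) for m in (-0.5, 0.0, 0.5)]
s2_list = [np.full(50, 0.25) for _ in range(3)]
curves = ordinal_response_curves(Z_list, s2_list)
print(curves.shape, curves.sum(axis=1).round(3).max())  # curves sum to 1 over categories
```

The telescoping structure guarantees that, for every posterior draw and time point, the C category probabilities sum to one.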
The continuation-ratio logits structure is also key to efficient model implementation. It implies a sequential mechanism, such that the ordinal response is determined through a sequence of binary outcomes. Starting from the lowest category, each binary outcome indicates whether the ordinal response belongs to that category or to one of the higher categories. This mechanism inspires a novel perspective on the model implementation. That is, we can re-organize the original data set containing longitudinal ordinal responses to create C-1 data sets with longitudinal binary outcomes. Then, fitting model (<ref>) to the original data set is equivalent to fitting the model of Section <ref> separately on the C-1 re-organized data sets. The procedure is elaborated below. Denote the set of all possible subject and time indices by ℐ_1, that is, ℐ_1={(i,t):i=1,⋯,n,t=1,…,T_i}. To build the first re-organized data set with binary outcomes, we create binary indicators Y^(1)_it, such that Y^(1)_it=1 if Y_i1t=1 and Y^(1)_it=0 if Y_i1t=0. The first data set is then 𝒟_1 = {Y^(1)_it: (i,t)∈ℐ_1}. Moving to the second data set, we first filter out the observations that are already categorized into the smallest scale, and denote the remaining indices set by ℐ_2=ℐ_1∖{(i,t):Y_i1t=1}. This is the set of indices with original ordinal responses belonging to categories higher than or equal to the second smallest scale. Then, we create new binary indicators Y^(2)_it, such that Y^(2)_it=1 if Y_i2t=1, and Y^(2)_it=0 if Y_i2t=0. The second data set is obtained as 𝒟_2 = {Y^(2)_it: (i,t)∈ℐ_2}. The process is continued until we obtain the (C-1)-th data set, 𝒟_C-1={Y^(C-1)_it: (i,t)∈ℐ_C-1}, where ℐ_C-1 is the indices set such that the original ordinal responses belong to either category C-1 or C. Notice that every re-organized data set 𝒟_j, for j=1,⋯,C-1, contains longitudinal binary outcomes for which the model of Section <ref> is directly applicable. Provided the priors placed on each ordinal response category's parameters are independent, it is straightforward to verify that fitting separately the model for binary responses to the re-organized data sets {𝒟_j: j=1,⋯,C-1} is equivalent to fitting model (<ref>) to the original data set. We formalize the conclusion in the following proposition. Fitting the ordinal responses model in (<ref>) is equivalent to fitting the model for binary responses separately, C-1 times to the data sets {𝒟_j: j=1,⋯,C-1}. Based on Proposition <ref>, the posterior simulation algorithm for the ordinal responses model can be parallelized and implemented on separate cores. In applications where the number of response categories is moderate to large, such a parallel computing scheme is especially beneficial. Also, since the binary responses model serves as the backbone for modeling ordinal responses, the prior specification strategy and the posterior simulation method described in Section <ref> can be readily extended to model (<ref>). Finally, from (<ref>) and (<ref>), it is clear that the posterior samples obtained from the C-1 separate models suffice to obtain full posterior inference for the ordinal response process. §.§ Data illustration As an illustration example, we consider the PAM arousal score on the original scale, which is obtained from the same EMA study discussed in Section <ref>. PAM arousal is a -2 to 2 (excluding 0) score. We examine the same cohort of students on the same study period as described in Section <ref>. 
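As a practical note on the re-organization step described above, the following Python sketch constructs the binary data sets 𝒟_1,…,𝒟_C-1 from a long-format ordinal data set; the column names and the toy records are hypothetical.

```python
import pandas as pd

def continuation_ratio_datasets(df, n_categories):
    """Re-organize longitudinal ordinal responses into C-1 binary data sets
    D_1, ..., D_{C-1}, following the sequential mechanism described in the text.

    df: long-format data frame with columns 'subject', 'time', 'y' (y in 1..C).
    Returns a list of data frames; the j-th contains the indicator 1{y == j}
    restricted to records with y >= j (the index set I_j).
    """
    datasets = []
    for j in range(1, n_categories):
        sub = df[df["y"] >= j].copy()            # keep responses at level j or higher
        sub["y_bin"] = (sub["y"] == j).astype(int)
        datasets.append(sub[["subject", "time", "y_bin"]])
    return datasets

# Toy long-format data (hypothetical values) with C = 4 ordinal levels.
toy = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 3],
    "time":    [0, 1, 2, 0, 2, 1],
    "y":       [2, 4, 1, 3, 2, 4],
})
for j, d in enumerate(continuation_ratio_datasets(toy, n_categories=4), start=1):
    print(f"D_{j}: {len(d)} records")
```

Each resulting data set can then be fitted with the binary-response model of Section <ref>, in parallel across categories.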
Over all observations, the distribution of arousal scores involves 16.6% for level -2, 27.7% for level -1, 12.6% for level 1, and 12% for level 2, while 31.1% of the observations are missing. To implement model (<ref>), we follow the procedure outlined above Proposition <ref>. We re-organize the original data into separate data sets {𝒟_j: j=1,⋯,3}, each of them containing the binary responses indicating whether the arousal scores are at level j or a higher level. Then, the proposed model is fitted to the three data sets in parallel. The primary inference focus is on the change of arousal scores as the term progresses, which is depicted by the probability response curve of each response level. We display posterior point and interval estimates of 𝐏_𝐣τ (defined in (<ref>)) in Figure <ref>. The probability of the highest arousal level drops dramatically as the term begins, indicating that the excitement of a new quarter may vanish within a week. The Boston marathon bombing slightly triggers higher probability for moderately low to low arousal level. There is a drop of the probability for moderately high to high arousal level after the Green Key festival and the Memorial Day holiday. The exams may have a significant impact on the arousal level. We observe peaks of arousal at the beginning of the final exam period, and also the middle of the term, which corresponding to the midterm exam period. Since the students are taking different courses, the midterm exam times vary, resulting in some curves with lead or lag peaks compared to the majority. This pattern is not clear in the analysis of binary arousal scores. Hence, examining the finer ordinal scale enables us to discover subtle changes of the students activation states. We have also investigated the temporal covariance structure of the ordinal responses, with details presented in the Supplementary Material. § SUMMARY We have developed a novel Bayesian hierarchical model for analyzing longitudinal binary data. We approach the problem from a functional data analysis perspective, resulting in a method that is suitable for either regularly or irregularly spaced longitudinal data. The modeling approach achieves flexibility and computational efficiency in full posterior inference. With regard to the former, the key model feature is the joint and nonparametric modeling of the mean and covariance structure. As illustrated by the data application, our approach enables interpretable inference with coherent uncertainty quantification, and provides improvement over the GLMM approach. The model formulation enables a natural extension to incorporate ordinal responses, which is accomplished by leveraging the continuation-ratio logits representation of the multinomial distribution. This representation leads to a factorization of the multinomial model into separate binomial models, on which the modeling approach for binary responses can be applied. The computational benefit is retained, since we can utilize parallel computing across response categories. § SUPPLEMENTARY MATERIAL The Supplementary Material includes details for the MCMC algorithm, proofs of the propositions, and additional results for the data examples. jasa3 Supplementary Material: Flexible Bayesian Modeling for Longitudinal Binary and Ordinal Responses § MCMC POSTERIOR SIMULATION DETAILS Based on the joint posterior distributions derived from (<ref>), we design the MCMC sampling algorithm for the proposed model with binary responses. 
This process can be achieved entirely with Gibbs updates, by iterating the following steps. For notation simplicity, we let (ϕ| -) denote the posterior full conditional distribution for parameter ϕ. Step 1: For i=1,⋯,n update 𝒵_i from N(𝐦_i,𝒱_i), where 𝒱_i=(Ω_i+(1/σ_ϵ^2)𝐈)^-1, and 𝐦_i=𝒱_i(λ_i+(1/σ_ϵ^2)𝐙_i). Here Ω_i denote the diagonal matrix of ξ_i, and λ_i=(Y_i1-1/2,⋯,Y_iT_i-1/2)^⊤. Step 2: Update the Pólya-Gamma random variables ξ_it by sample from PG(1,𝒵_it), for i=1,⋯,n and t=1,⋯,T_i. Step 3: Update σ_ϵ^2 by sample from IG(a_ϵ+∑_i=1^nT_i/2,b_ϵ+∑_i=1^n(𝒵_i-𝐙_i)^⊤(𝒵_i-𝐙_i)/2). Step 4: Update 𝐙̃_i for i=1,⋯,n, * In the case that all the subjects having observations on a common grid, 𝐙_i^* vanishes and 𝐙̃_i=𝐙_i. It has full conditional distribution 𝐙_i| -∼ N(μ̃_i,𝐕̃_i), where 𝐕̃_i=((1/σ_ϵ^2)𝐈+Σ^-1)^-1, and μ̃_i=𝐕̃_i((1/σ_ϵ^2)𝒵_i+Σ^-1μ). * In the case that the repeated measurements for the subjects are collected on uncommon grids, we first update 𝐙_i^* from N(μ_i^*,V_i^*), where μ_i^*=μ(τ^*_i)+Σ(τ^*_i,τ_i)Σ(τ_i,τ_i)^-1(𝐙_i-μ(τ_i))=𝐁_i𝐙_i-𝐮_i, V_i^*=Σ(τ^*_i,τ^*_i)-Σ(τ^*_i,τ_i)Σ(τ_i,τ_i)^-1Σ(τ_i,τ^*_i), with 𝐁_i=Σ(τ^*_i,τ_i)Σ(τ_i,τ_i)^-1 and 𝐮_i=𝐁_iμ(τ_i)-μ(τ^*_i). Then, to update 𝐙_i, we sample from N(μ̃_i,𝐕̃_i), where 𝐕̃_i=[(1/σ_ϵ^2)𝐈+Σ(τ_i,τ_i)^-1+𝐁_i^T(V_i^*)^-1𝐁_i]^-1, μ̃_i=𝐕̃_i[(1/σ_ϵ^2)𝒵_i+Σ(τ_i,τ_i)^-1μ(τ_i)+𝐁_i^T(V_i^*)^-1(𝐮_i+𝐙_i^*)]. Step 5: Update μ and Σ jointly by sample from N(μ^*,Σ/κ^*) and IW(ν^*,Ψ^*), respectively, with μ^*=κ/κ+nμ_0+n/κ+n𝐙̃^m, κ^*=n+κ, ν^*=n+ν Ψ^*=Ψ+S+nκ/n+κ(𝐙̃^m-μ_0)(𝐙̃^m-μ_0)^T, S=∑^n_i=1(𝐙̃_i-𝐙̃^m)(𝐙̃_i-𝐙̃^m)^top, where 𝐙̃^m denote the mean of {𝐙̃_i}_i=1^n. Step 6: Update μ_0 from N(a_μ^*,b_μ^*), where b_μ^*=[1^⊤[(ν-3)Σ]^-11+1/b_μ]^-1, and a_μ^*=b_μ^*[1^⊤[(ν-3)Σ]^-1μ+a_μ/b_μ]. Step 7: Update σ^2 from Gamma(a_σ+(ν+|τ|-1)|τ|/2,b_σ+1/2tr(Ψ_ρΣ^-1)). Here Ψ_ρ denotes the correlation matrix Ψ_ϕ/σ^2. Step 8: Using the Griddy-Gibbs sampler by <cit.>, update ρ from P(ρ=c_l| -)=|Ψ_c_l|^(ν+|τ|-1)/2exp(-1/2tr(Ψ_c_lΣ^-1))/∑_l=1^G|Ψ_c_l|^(ν+|τ|-1)/2exp(-1/2tr(Ψ_c_lΣ^-1)), where c_1,⋯,c_G are grid points on a plausible region of ρ and Ψ_c_l denotes the correlation matrix when ρ taking the value c_l. Step 9: Using the Griddy-Gibbs sampler, update ν from P(ν=c_l| -)=N(μ|μ_0,(c_l-3)Σ)IW(Σ| c_l+|τ|-1,Ψ_ϕ)/∑_l=1^GN(μ|μ_0,(c_l-3)Σ)IW(Σ| c_l+|τ|-1,Ψ_ϕ). where c_1,⋯,c_G are grid points on a plausible region of ν. § PROOFS §.§ Proof of Proposition <ref> For the probability response curve 𝐏_1τ, we have 𝐏_1τ =∫ (Pr(Y_τ_1= 1 |𝒵_τ,𝐙_τ,σ_ϵ^2),⋯,Pr(Y_τ_T=1 |𝒵_τ,𝐙_τ,σ_ϵ^2))^⊤p(𝒵_τ|𝐙_τ,σ_ϵ^2) d𝒵_τ =∫π(𝒵_τ)N(𝒵_τ|𝐙_τ,σ_ϵ^2𝐈) d𝒵_τ=E(π(𝒵_τ)|𝐙_τ,σ_ϵ^2). Then, to find the diagonal and off-diagonal elements for the covariance matrix of 𝐘_τ, we use the law of total variance/covariance. For the diagonal elements, we can write Var(Y_τ|𝐙_τ,σ_ϵ^2) =Var[E(Y_τ|𝒵_τ)|𝐙_τ,σ_ϵ^2]+E[Var(Y_τ|𝒵_τ)|𝐙_τ,σ_ϵ^2] =Var[φ(𝒵_τ)|𝐙_τ,σ_ϵ^2]+E[φ(𝒵_τ)(1-φ(𝒵_τ))|𝐙_τ,σ_ϵ^2] =E[φ(𝒵_τ)|𝐙_τ,σ_ϵ^2]-E^2[φ(𝒵_τ)|𝐙_τ,σ_ϵ^2]. Similarly, for the off-diagonal entries, we obtain Cov(Y_τ,Y_τ^'|𝐙_τ,σ_ϵ^2) =Cov[E(Y_τ|𝒵_τ),E(Y_τ^'|𝒵_τ)|𝐙_τ,σ_ϵ^2]+E[Cov(Y_τ,Y_τ^'|𝒵_τ)|𝐙_τ,σ_ϵ^2] =Cov[φ(𝒵_τ),φ(𝒵_τ^')|𝐙_τ,σ_ϵ^2]. §.§ Proof of Proposition <ref> To establish the result, we first prove the following lemma. Consider the bivariate vector 𝐙=(Z_1,Z_2)^⊤ that follows N(μ,Σ), where μ=(μ_1,μ_2)^⊤ and Σ=[ σ_1^2 γσ_1σ_2; γσ_1σ_2 σ_2^2 ]. Then we have, E(φ(Z_i))≈φ(μ_i)+σ_i^2/2φ^''(μ_i), i=1,2, E(φ(Z_1)φ(Z_2))≈φ(μ_1)φ(μ_2)+1/2[σ_1^2φ^''(μ_1)φ(μ_2)+2γσ_1σ_2φ^'(μ_1)φ^'(μ_2)+σ_2^2φ(μ_1)φ^''(μ_2)]. To show the result, we write 𝐙=μ+ζ, where ζ∼ N(0,Σ). 
By Taylor expansion around the mean, φ(Z_i)≈φ(μ_i)+ζ_iφ^'(μ_i)+ζ_i^2/2φ^''(μ_i). Then taking expectation yields E(φ(Z_i))≈φ(μ_i)+σ_i^2/2φ^''(μ_i), i=1,2. As for E(φ(Z_1)φ(Z_2)), consider the function f(𝐙)=φ(Z_1)φ(Z_2), using the bivariate version of Taylor expansion, f(𝐙)≈ f(μ)+▽ f(μ)^⊤ζ+1/2ζ^⊤▽^2f(μ)ζ. Similarly, taking expectation with respect to ζ we can obtain the result. Turning to the proof of Proposition <ref>, we notice that 𝐙_τ|μ,Σ∼ N(μ,Σ). Marginalizing out 𝐙_τ, we have 𝒵_τ|μ,Σ,σ_ϵ^2∼ N(μ,Σ+σ_ϵ^2𝐈). Therefore, for any τ,τ^'∈τ, we have [ 𝒵_τ; 𝒵_τ^' ]|μ,Σ,σ_ϵ^2 ∼ N([ μ_τ; μ_τ^' ], [ Σ_τ,τ+σ_ϵ^2 Σ_τ,τ^'; Σ_τ^',τ Σ_τ^',τ^'+σ_ϵ^2 ]) To establish the connection with the mean and covariance of the signal process, we write [ μ_τ; μ_τ^' ] = [ E(Z_τ|μ,Σ); E(Z_τ^'|μ,Σ) ] [ Σ_τ,τ+σ_ϵ^2 Σ_τ,τ^'; Σ_τ^',τ Σ_τ^',τ^'+σ_ϵ^2 ] = [ Var(Z_τ|μ,Σ)+σ_ϵ^2 Cov(Z_τ,Z_τ^'|μ,Σ); Cov(Z_τ,Z_τ^'|μ,Σ) Var(Z_τ^'|μ,Σ)+σ_ϵ^2 ] Similar to the proof of Proposition <ref>, we can show Pr(Y_t=1|μ,Σ,σ_ϵ^2)=E(φ(𝒵_τ)|μ,Σ,σ_ϵ^2) Cov(𝐘_τ,𝐘_τ^'|μ,Σ,σ_ϵ^2)=Cov[φ(𝒵_τ),φ(𝒵_τ^')|μ,Σ,σ_ϵ^2] Applying Lemma <ref>, the desired outcome emerges as a direct consequence of algebraic simplification. §.§ Proof of Proposition <ref> The result is proved by considering the corresponding f.d.d.s. on any finite grids τ. Let the bold letter denote the corresponding process evaluated at τ. From the model assumption mentioned in (<ref>) and (<ref>), we have 𝐙|μ,Σ ∼ N(μ,Σ), μ|Σ ∼ N(μ_01,(ν-3)Σ), Σ∼ IW(ν,Ψ). To obtain the marginal distribution of 𝐙, we have p(𝐙)=∫∫ p(𝐙|μ,Σ)p(μ|Σ)p(Σ) dμ dΣ. Marginalizing over the mean vector μ, we obtain 𝐙|Σ∼ N(μ_01,(ν-2)Σ). Based on that, p(𝐙) =∫ p(𝐙|Σ)p(Σ) dΣ ∝∫exp{-1/2Tr[(Ψ_ϕ+(𝐙-μ_01)(𝐙-μ_01)^⊤/ν-2)Σ^-1]}/|Σ|^(ν+|τ|+1)/2 dΣ ∝ [1+(𝐙-μ_01)^⊤Ψ_ϕ^-1(𝐙-μ_01)/ν-2]^-(ν+|τ|)/2, which can be recognized as the kernel of a MVT distribution. Therefore, the result holds. § SYNTHETIC DATA EXAMPLES The principal goal of analyzing longitudinal data is to estimate the mean and covariance structure of the subject's repeated measurements. We conduct simulation studies to evaluate the proposed method on fulfilling this goal. In the following, Section <ref> evaluates the reliability of the proposed model in capturing the fluctuation of the mean structure, and Section <ref> explores the performance of the proposed model in estimating within subject covariance structure. Unless otherwise specified, the posterior analyses in this section are based on 5000 posterior samples collected every 4 iterations from a Markov chain of 30000 iterations, with the first 10000 samples being discarded. §.§ Estimating mean structure Consider a generic process of generating longitudinal binary responses, 𝐘_i = Y_i(τ_i) |𝒵_i(τ_i) ind.∼ Bin(1,η(𝒵_i(τ_i))), τ_i=(τ_i1,⋯,τ_iT_i), i=1,⋯,n, 𝒵_i(τ_i) =𝒵_i =f(τ_i)+ω_i+ϵ_i ϵ_ii.i.d.∼ N(0,σ_ϵ^2𝐈), where η(·) is a generic link function mapping ℝ to (0,1), f(τ) is a signal function, and ω_i is a realization from a mean zero continuous stochastic process that depicts the temporal covariance within subject. The objective is twofold. First, to estimate the subject's probability response curve, which is defined as the probability of obtaining positive response, as a function of time. Second, to estimate the true underlying signal function. We consider three data generating processes. 
The specific choice of η(·), f(τ) and ω_i for each generating process is summarized as follows: * Case 1: η_1(·)=φ(·), where φ(·) is the expit function, f_1(τ)=0.3+3sin(0.5τ)+cos(τ/3), and ω_ii.i.d.∼ N(0,K_1(τ,τ)), with covariance kernel K_1(τ_t,τ_t^')=exp(-|τ_t-τ_t^'|^2). * Case 2: η_2(·)=Φ(·), where Φ(·) denotes the CDF of standard normal distribution, f_2(τ)=0.1+2sin(0.25τ)+cos(0.25τ), and ω_ii.i.d.∼ MVT(5,0,K_2(τ,τ)), with covariance kernel K_2(τ_t,τ_t^')=1/3exp(-|τ_t-τ_t^'|^2). * Case 3: a mixture of Case 1 and Case 2, with equal probability of generating data from each model. For n=30 subjects, we simulate T=31 binary observations at time τ=0,⋯,30, following the aforementioned data generating processes. To enforce an unbalanced study design, we randomly drop out a proportion of the simulated data. We term the drop out proportion sparsity level, for which we consider 10%, 25% and 50%. The proposed hierarchical model is applied to the data, with a weakly informative prior placed on the mean structure. We obtain posterior inference of the probability response curve and the signal process on a finer grid τ^+=(0,1/3,2/3,⋯,30). Figure <ref> plots posterior point and interval estimates of the subject's probability response curve for a randomly selected one in each case. Despite the data generating process and the sparsity level, the model can recover the evolution of the underlying probability used in generating binary responses. We observe a shrink in the interval estimate at the set of grid points where at least one subject has observation, that is, τ. The expanding of the credible interval width at τ̌ reflect the lack of information at those time grids. We further investigate the model's ability in out-of-sample prediction, by estimating the probability response curve for a new subject from the same cohort. Figure <ref> shows the posterior point and interval estimates of Pr(Y_*(τ_*t)=1), including, as a reference point, the posterior mean estimates of each subject's probability response curve Pr(Y_i(τ_it)=1), i=1,⋯,n. The true probability function that triggered the binary response, given as the signal transformed by the link function, is also shown in the figure. It is obtained with the simulated data with 10% sparsity, while there is no major difference for the other two sparsity levels. The behavior of the probability response curve for the new subject is to be expected. It follows the overall trend depicted by the true underlying probability function, while suffers from a comparable level of measurement error with the observed subjects. It is also of interest to assess the model's ability in recovering the underlying continuous signal process, since the signal process describes the intrinsic behavior and is crucial to answer related scientific questions. In our proposed model, the signal process is modeled nonparametricly through a GP. To further emphasize the benefits of this model formulation, we compare the proposed model with its simplified backbone. The simpler model differs from the original one in modeling the mean function. Instead of modeling the mean function μ through a GP, we consider modeling it parametricly by μ(τ)≡μ_0, and μ_0∼ N(a_μ,b_μ). The model's ability in capturing the signal process is summarized by the rooted mean square error (RMSE), which is defined by RMSE^ℳ=√(1/n∑_i=1^n1/|τ^+|∑_τ∈τ^+(Ẑ^ℳ_i(τ)-f(τ))^2). Here Ẑ^ℳ_i(τ) denote the model ℳ estimated signal for subject i evaluated at time τ, which can be obtained at every MCMC iteration. 
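A short sketch of the first generating process above and of this RMSE summary is given below; the error scale `sigma_eps` and the random seed are illustrative assumptions, since the listing leaves them unspecified.

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)

def simulate_case1(n=30, tau=np.arange(31), sigma_eps=0.3, sparsity=0.10):
    """Case 1: expit link, f_1(t) = 0.3 + 3 sin(0.5 t) + cos(t/3), and
    omega_i ~ N(0, K_1) with K_1(t, t') = exp(-|t - t'|^2).  Returns a list
    of (observed times, binary responses) per subject after random drop-out."""
    T = len(tau)
    f = 0.3 + 3.0 * np.sin(0.5 * tau) + np.cos(tau / 3.0)
    K1 = np.exp(-np.subtract.outer(tau, tau) ** 2)
    data = []
    for _ in range(n):
        omega = rng.multivariate_normal(np.zeros(T), K1)
        eps = rng.normal(0.0, sigma_eps, size=T)
        y = rng.binomial(1, expit(f + omega + eps))
        keep = rng.random(T) > sparsity          # enforce an unbalanced design
        data.append((tau[keep], y[keep]))
    return data

def rmse(z_hat, f_true):
    """RMSE of estimated signals against f on the grid tau^+.

    z_hat : (n, |tau^+|) array of estimated signals, e.g. from one MCMC draw.
    f_true: (|tau^+|,) array with the true signal on the same grid.
    """
    return np.sqrt(np.mean((z_hat - f_true[None, :]) ** 2))
```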
Figure <ref> explores the posterior distribution of the RMSE under the proposed model and its simplified version, for different data generating process and sparsity level combinations. Despite the scenario, the proposed model shows a notably smaller RMSE. Contrasting the performance with the simpler model highlights the practical utility of including the layer of GP for the mean function in terms of effective estimation of the underlying continuous signal process. §.§ Estimating covariance structure Since we emphasize the importance of modeling dependence in longitudinal data, we now explore how well our model works for estimating different covariance structure. Consider the data generating process in (<ref>), with expit link function and signal f(τ)=0.1+2sin(0.5τ)+cos(0.5τ). We examine a number of possible choices for generating ω_i, that imply covariance structures which would not be in the same form as the covariance kernel used in the proposed model. The primary interest is to exhibit the robustness of covariance kernel choice to different true covariance structures. We let T_i=T and τ_it=τ_t, namely that all subjects are observed over the same time grids. For n=100 subjects, we generate sequences of length T=11 at time τ=0,⋯,10. We study the following options of generating ω_i: * Case 1: ω_ii.i.d.∼ N(0,K_1(τ,τ)), with squared exponential kernel K_1(τ_t,τ_t^')=exp(-|τ_t-τ_t^'|^2/(2· 3^2)). Each realized trajectory is infinitely differentiable. * Case 2: ω_ii.i.d.∼ N(0,K_2(τ,τ)), with exponential kernel K_2(τ_t,τ_t^')=exp(-|τ_t-τ_t^'|/5). Each realization is effectively from a continuous-time AR(1) GP. * Case 3: ω_ii.i.d.∼ MVT(5,0,K_3(τ,τ)), with compound symmetry kernel K_3(τ_t,τ_t^')=𝐈_{τ_t=τ_t^'}+0.4𝐈_{τ_t≠τ_t^'}. The covariance between two observations remains a constant, despite their distance. * Case 4: ω_ii.i.d.∼ MVT(5,0,K_4(τ,τ)), with kernel K_4(τ_t,τ_t^')=0.7K_2(τ_t,τ_t^')+0.3K_3(τ_t,τ_t^'), a mixture of AR(1) and compound symmetry covariance structure. In terms of longitudinal binary responses, the covariance structure can be elucidated in two senses, namely the covariance between the pair of binary data (Y_i(τ_t),Y_i(τ_t^')) and between the pair of signal (Z_i(τ_t),Z_i(τ_t^')). We consider the covariance structure of the signal process first. From Proposition <ref>, Cov(Z_i(τ_t),Z_i(τ_t^'))=Ψ_ϕ(τ_t,τ_t^'), ∀ i, where the covariance function Ψ_ϕ is defined in (<ref>). Hence, the signal covariance structure estimated from the model is also isotropic, facilitating a graphic comparison between the posterior estimate of Ψ_ϕ(τ_d) versus the true covariance kernel K(τ_d), where τ_d=|τ_t-τ_t^'|. The results are presented in Figure <ref>. As expected, the proposed model recovers the truth, despite the mis-specification of the covariance kernel. Comparing with the other three cases, the posterior point estimate of covariance kernel is less accurate in Case 3. This can be explained by noticing that the constant covariance in that case violates the model assumption. Nonetheless, the posterior interval still covers the truth. As for the covariance between the pair of binary data, we consider two measurements, the Pearson correlation coefficient and the tetrachoric correlation coefficient. For a review of the definitions and properties of these two correlation coefficients, we refer to <cit.>. At each MCMC iteration, we predict a new sequence of binary responses of length T, denoted as {Y^(s)_i^*(τ):s=1,⋯,S}. 
Correspondingly, we also obtain samples of binary sequences from the true data generating process, denoted by {Ŷ^(s)_i^*(τ):s=1,⋯,S}. Both sets of binary sequences form S/n datasets that mimic the original samples. From the datasets comprised by posterior predictive samples Y^(s)_i^*(τ), we obtain interval estimates of the two correlation coefficients. In addition, for Ŷ^(s)_i^*(τ) that are generated from the truth, we obtain point estimates, which can be viewed as the correlation coefficients from the data, accounting for the variation in the data generating process. Notice that marginally the binary process is not guaranteed to be isotropic. Hence, the correlation coefficients should be calculated for every possible pair of (τ_t,τ_t^')∈τ. The resulting point and interval estimates of both types of correlation coefficients are displayed in Figure <ref>. All the posterior interval estimates cover the truth, indicating that the proposed model effectively captures the binary covariance structure. The simulation studies have illustrated the benefits of our approach, that is, avoiding possible bias in covariance structure estimation caused by mis-specification of the covariance kernel for the signal process. Such benefits are led by the IWP prior placed on the covariance function. To emphasize this point, we consider an alternative, simplified modeling approach, with Z_ii.i.d.∼GP(μ,Ψ_ϕ), μ∼ GP(μ_0,Ψ_ϕ/κ). That is, instead of modeling the covariance function nonparametricly, we assume a covariance kernel of certain parametric form, specified by Ψ_ϕ. We consider the centralized signal process ω_i=Z_i-μ evaluated at a finite grid τ, denoted as ω_i. Under the proposed model, ω_ii.i.d.∼MVT(ν,0,Ψ_ϕ(τ,τ)), while under the simplified model, ω_ii.i.d.∼ N(0,(1+1/κ)Ψ_ϕ(τ,τ)). We know the true distribution of ω_i from the data generating process. Therefore, we can compute the 2-Wasserstein distance between the model estimated distribution of ω_i to the truth. The usage of 2-Wasserstein distance is motivated by its straightforward interpretation: a 2-Wasserstein distance of d means that coordinatewise standard deviations differ by at most d <cit.>. Iterating over the posterior samples of model parameters, we obtain the distributions of 2-Wasserstein distance between the model estimated distribution of ω_i and the truth, which is shown in Figure <ref>. Clearly, for the proposed model, the 2-Wasserstein distances are substantially small. Contrasting the performance testifies our motivation of modeling the covariance structure nonparametricly. § ADDITIONAL RESULTS FOR DATA EXAMPLES §.§ Binary responses from Studentlife study The hyperprior for σ_ϵ^2 depends on the belief about the extent of the measurement error. Hence, it is useful to perform a prior sensitivity analysis with respect to this hyperprior, especially on the real data. In general, the measurement error reflects the remaining variability of the underlying continuous process, whose major change has been captured by the signal process. Consequently, it should have small probability of taking large values. For the analysis conducted in Section <ref>, we believe the measurement error range should be small, and we pick a moderate value for the error degree of freedom. Specifically, we take R=0.1 and υ=10, and using the method described in Section <ref>, obtain the hyperprior for σ_ϵ^2 as IG(5,0.001). We term it the original hyperprior. To perform a prior sensitivity analysis, we assume an alternative hyperprior on σ_ϵ^2. 
For the valence score, we assume a larger measurement error range R=0.5, resulting in the hyperprior σ_ϵ^2∼ IG(5,0.02). As for the arousal score, we assume the error distribution has a heavier tail, achieved by setting υ=6. The hyperprior in this case is σ_ϵ^2∼ IG(3,0.0007). We check the posterior samples for μ_0, σ^2, ρ, and ν, because these four parameters determine the signal process, which is the inference target of primary interest. Results are shown in Figure <ref>. The posterior distributions of the four model parameters are similar, suggesting that the conclusions are robust with respect to the hyperprior choice for the error variance.

§.§ Four-level arousal score data
Specific to the ordinal responses, we assess the time dependence through the joint probability Pr(𝐘_τ=j,𝐘_τ^'=j^'|{𝐙_jτ},{σ^2_ϵ j}), whose posterior inference can be obtained by evaluating (<ref>) with the posterior samples of the model parameters. Figure <ref> displays the posterior point and interval estimates for all possible pairs of the joint probabilities. It suggests that the proposed model enables flexible estimation of the time dependence among the ordinal responses.
http://arxiv.org/abs/2307.02552v1
20230705180009
On contact modulo p L-space covers
[ "Bruno Roso" ]
math.SG
[ "math.SG", "math.GT", "57K33, 57R58, 57M10" ]
[Full text not recoverable from the LaTeX source; only the section headings survive: Abstract; Introduction; Borel Floer contact invariant; Serre Spectral Sequence; Torsion; Knot Invariants; Lifting contact classes; Bibliography.]
http://arxiv.org/abs/2307.03371v2
20230707033854
What makes a successful rebuttal in computer science conferences? : A perspective on social interaction
[ "Junjie Huang", "Win-bin Huang", "Yi Bu", "Qi Cao", "Huawei Shen", "Xueqi Cheng" ]
cs.SI
[ "cs.SI", "cs.DL" ]
What makes a successful rebuttal in computer science conferences? : A perspective on social interaction

Junjie Huang^1,2, huangjunjie17s@ict.ac.cn (Methodology, Data analysis, Experiments, Writing: original draft, review & editing)
Win-bin Huang^3, huangwb@pku.edu.cn (Methodology, Data curation, Resources, Writing: original draft, review & editing)
Yi Bu^3, buyi@pku.edu.cn (Methodology, Data curation, Resources, Writing: original draft, review & editing)
Qi Cao^1,2, caoqi@ict.ac.cn (Methodology, Data curation, Funding acquisition, Resources, Writing: original draft, review & editing)
Huawei Shen^1,2, shenhuawei@ict.ac.cn (Methodology, Data curation, Funding acquisition, Resources, Writing: original draft, review & editing)
Xueqi Cheng^4, cxq@ict.ac.cn (Methodology, Funding acquisition, Resources)

^1 Data Intelligence System Research Center, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
^2 University of Chinese Academy of Sciences, Beijing, China
^3 Department of Information Management, Peking University, Beijing, China
^4 CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China

Abstract: With an exponential increase in submissions to top-tier Computer Science (CS) conferences, more and more conferences have introduced a rebuttal stage to the conference peer review process. The rebuttal stage can be modeled as social interactions between authors and reviewers. A successful rebuttal often results in an increased review score after the rebuttal stage. In this paper, we conduct an empirical study to determine the factors contributing to a successful rebuttal using over 3,000 papers and 13,000 reviews from ICLR2022, one of the most prestigious computer science conferences. First, we observe a significant difference in review scores before and after the rebuttal stage, which is crucial for paper acceptance. Furthermore, we investigate factors from the reviewer's perspective using signed social network analysis. A notable finding is the increase in balanced network structure after the rebuttal stage. Subsequently, we evaluate several quantifiable author rebuttal strategies and their effects on review scores. These strategies can help authors improve their review scores. Finally, we use machine learning models to predict rebuttal success and validate the impact of the potential factors analyzed in this paper. Our experiments demonstrate that the utilization of all features proposed in this study can aid in predicting the success of the rebuttal. In summary, this work presents a study on the impact factors of successful rebuttals from both reviewers' and authors' perspectives and lays the foundation for analyzing rebuttals with social network analysis.

Keywords: Peer Review; Rebuttal Analysis; Social Network Analysis; Social Interaction; Rebuttal Strategy; Rebuttal Success Prediction

§ INTRODUCTION
Scientific peer review, the process of evaluating scientific literature by knowledgeable peers, originated in the 1700s <cit.>. This process aims to filter out papers of low quality based on criteria such as competence, significance, novelty, and originality <cit.>, thereby promoting scientific progress <cit.>. As a cornerstone of scientific research <cit.>, peer review is used in almost all scientific disciplines, improves the quality of published research <cit.>, and has achieved significant success in research evaluation <cit.>.

Peer-reviewed publications predominantly consist of conference proceedings and journals. However, the preference for journals over conferences varies across disciplines. Notably, studies have identified Computer Science (CS) as a discipline that values conference publications more than other academic fields <cit.>. Publishing in top-tier conferences is closely associated with a researcher's reputation, funding distribution, and tenure, among other factors <cit.>. Unlike journal publications, CS conference publications have distinct timelines (i.e., specific deadlines), acceptance rate limits (around 20%), page limits (up to 10 pages without restrictions on appendices), consistent annual meeting dates (like in May), and often require in-person discussions (such as offline meetings)[Due to COVID-19, ICLR2022 was a virtual conference; offline meetings were cancelled.]. These features emphasize the importance of modeling social interactions in CS conferences.

Furthermore, with the surge in scientific literature, several problems have emerged in the peer review system, owing to the scarcity of professional reviewers, the unpredictable nature of early scientific discoveries, and the difficulty of judging the potential of groundbreaking discoveries. Issues such as the reproducibility crisis <cit.> and unethical behaviors under the "publish or perish" pressure <cit.> are hindering the advancement of CS. Peer review in CS conferences is even viewed as ineffective and arbitrary <cit.>. To mitigate the randomness in peer review and promote the publication of valuable scientific discoveries, conference organizers have implemented the rebuttal mechanism. OpenReview[https://openreview.net] was developed to assist researchers in reviewing high-quality papers using the rebuttal mechanism <cit.>. Today, this platform is extensively used in top CS conferences like NeurIPS[https://nips.cc/], ICML[https://icml.cc/], and ICLR[https://iclr.cc/]. The rebuttal mechanism offers authors a chance to clarify reviewers' misunderstandings, increase the review score, and enhance the possibility of paper acceptance.
Senior researchers have proposed many tips and suggestions for writing effective author responses to assist authors in navigating the rebuttal phase <cit.>[https://deviparikh.medium.com/how-we-write-rebuttals-dc84742fece1]. Yet, few studies have empirically investigated what makes a successful rebuttal by applying metrics to real-world datasets. In this paper, we aim to bridge this gap by analysing the impact of successful rebuttals from both the reviewers' and authors' perspectives. In this paper, we select ICLR2022 as our research subject to examine the rebuttal stage in the CS conference peer review process. ICLR2022 employs a reviewing workflow[https://iclr.cc/Conferences/2022/ReviewerGuide] that is similar to those of other major CS conferences, with a few modifications: * Initially, reviewers can bid on papers for further review based on their expertise and potential conflicts of interest. * Upon paper assignment, typically three or four reviewers independently assess a paper. * The initial review comments and scores are posted on the website for authors and other reviewers to view. * The rebuttal phase then commences, during which authors and reviewers can discuss the paper and the review, allowing for clarification and correction of misunderstandings. * After the rebuttal, reviewers can examine the authors' responses and other peer-review comments and discuss their perspectives. * Reviewers finalize their reviews, and Area Chairs (ACs) write meta-reviews. * Finally, Program Chairs (PCs) make the acceptance or rejection decisions based on reviewers' comments and meta-reviews. While ICLR2022 includes different roles, such as Area Chairs (ACs), Senior Area Chairs (SACs), and Program Chairs (PCs), its core process remains the same as outlined above. In this paper, we focus on this process and overlook the mechanisms of ACs, SACs, and PCs. As shown in fig:rebuttal_process_modelling, the core process encompasses the initial review, author rebuttal, reviewer discussion with reviewers and ACs, and the final decision. Authors are primarily involved in paper submission, revision, and rebuttal discussions, while reviewers partake mainly in the initial independent review, rebuttal discussions with authors, and discussions with other reviewers and ACs. This paper specifically focuses on the rebuttal stage, comprising the author rebuttal and reviewer discussion. We examine the changes in review scores between t_1 and t_2 and investigate the factors that might influence these changes, including reviewer social pressure and author rebuttal strategy. Based on the review process in ICLR2022, we focus on exploring the following research questions in this paper: * RQ1: Does rebuttal stage matter? Is there a difference between the initial and final review results in ICLR2022? (see sec:rebuttal_results) * RQ2: Does “peer effect” influence the score changes for reviewers? How to model it with signed social network analysis? (see sec:ssna) * RQ3: Are there effective strategies that authors can employ for a successful rebuttal? (see sec:strategy_analysis) * RQ4: Can we build machine learning models to predict whether reviewers will revise their score after rebuttal? (see sec:prediction) The remaining sections of the paper are organized as follows: In sec:related_work, we provide an overview of the related works. Then, in sec:dataset, we present our datasets, including details about data collection, basic data description, and overall analysis. 
In sec:rebuttal_results, we analyze the rebuttal results to answer the research question on the significance of rebuttal (RQ1). To address RQ2, we introduce Signed Social Network Analysis (SSNA) to measure the balanced motif changes in sec:ssna. In sec:strategy_analysis, we conduct a strategy analysis to investigate how authors can better respond to reviewers for RQ3. Furthermore, for RQ4, we formulate rebuttal success prediction tasks using machine learning models to examine the role of different features. Finally, in sec:conclusion, we present our concluding thoughts, limitations and discuss the future work. § RELATED WORK §.§ The Significance of Conference Proceedings in Computer Science Peer review is considered the "gatekeeping" process of science <cit.> and is widely adopted in almost every discipline, including Computer Science (CS). Peer-reviewed publications include both conference proceedings and journals, but there are several differences between them, including publication quality, page limits, and timing <cit.>. For CS, some bibliometric studies demonstrate that CS has a preference for conferences over journals <cit.>. The publications on prestigious conferences are related to the researcher's reputation, distribution of funds, the acceptance of research proposals, faculty positions, promotion, and tenure <cit.>. One big significance in peer review in CS is that the review of prestigious conferences (ICLR, NeurIPS, ICML) usually adopts Double-blind system, rather than Single-blind system. <cit.> suggested that using the Double-blind system in CS conference can reduce bias in peer review <cit.>. Additionally, the rebuttal mechanism, which replaces author responses in journals, is introduced at CS conferences to reduce the arbitrariness in the peer review process <cit.>. Compared to revision and author response in journal peer review aimed at gaining greater recognition for a paper <cit.>, the rebuttal in CS conference mainly aims to make papers accepted succeed in the competition with a low acceptance rate and massive high-quality submissions. In this paper, we use ICLR2022 as an example to investigate the rebuttal process in CS. §.§ Open Peer Review Dataset To address the growing demand for transparency and reproducibility crisis in the peer review process, Open Peer Review (OPR) has become increasingly popular <cit.>. For example, the journal Nature Communications[https://www.nature.com/ncomms/] offers authors the option to make peer reviewers' comments and their responses alongside their paper. Many prestigious CS conferences, such as ACL, NeurIPS, and ICLR, have joined the OPR movement based on the OpenReview system. Several researchers have released peer review comment datasets, including <cit.>, who provided a dataset containing 14.7k paper drafts and the corresponding accept/reject decisions from top-tier CS conferences, including ACL, NeurIPS, and ICLR. <cit.> presented a corpus containing over 4k reviews and 1.2k author responses from ACL2018. More recently, <cit.> presented DISAPERE, a labeled dataset of 20k sentences contained in 506 review-rebuttal pairs, and Dycke et al.<cit.> collected a dataset for ACL Rolling Review. These datasets enable quantitative studies of peer review rebuttals. Many machine learning models have been proposed to predict whether a paper will be accepted <cit.>, perform sentiment analysis on peer review comments <cit.>, and extract responses and argument pairs <cit.>. 
However, most studies focus on Natural Language Processing (NLP) rather than the social interactions between reviewers and authors. Our study aims to identify the factors that contribute to a successful rebuttal from both authors' and reviewers' perspectives, such as reviewer social pressure and author rebuttal strategy. For reviewer social pressure, we primarily analyze peer effects among reviewers (see  sec:peer_effect). Other social pressures include public pressure and fear of reprisals <cit.>. In terms of the author's rebuttal strategy, we primarily verify the effectiveness of the tips on how to respond to peer reviewers <cit.>. This can enhance our understanding of the rebuttal process in peer review from a data-driven perspective. §.§ Peer Effect in Peer Review The peer effect is a concept that describes how an individual's behavior is influenced by that of their peers <cit.>. It has been extensively studied in various fields, such as education <cit.>, economics <cit.>, sociology, and social psychology <cit.>. In peer review, a paper is usually assessed by multiple reviewers, and social influences can occur between their opinions. <cit.> proposed that the rebuttal in peer review is related to the opinion dynamics<cit.>, which is similar to the peer effect. They also found that conformity bias <cit.> plays a significant role in peer reviewing. However, their findings rely on machine learning models, which lack interpretability. In this paper, we use Signed Social Network Analysis (SSNA) to model the peer effects in peer review. A signed network (also known as a sentiment social network <cit.>) is a social network that contains both positive and negative links. The structure balance theory has been proposed to understand the structure and origin of conflicts in a social network <cit.>. It is a sociological theory that seeks to explain how individuals form and maintain social relationships based on the balance or imbalance of positive and negative sentiments among them <cit.>. For author-referee networks, it can be modeled as a signed bipartite graph <cit.>. <cit.> examined the influence of author-referee networks on peer review. For the signed networks before and after the rebuttal phase, <cit.> found that the ratio of balanced isomorphism (e.g., balanced signed butterfly) in signed bipartite networks increased, while the number of positive links slightly decreased. However, obtaining such a signed bipartite graph is difficult due to privacy concerns <cit.>. In this paper, we utilize SSNA to analyze the social pressure in peer review, specifically focusing on the influence between reviewers of a single paper. Our approach offers a more interpretable method for investigating the peer effect in the peer review process. § DATASET §.§ Data Collection Computer science conferences have important deadlines for authors and reviewers, such as abstract and paper submission deadlines, paper review release dates, discussion period end dates, and decision notification dates. For ICLR2022, the review release date was November 9, 2021, and the decision notification date was January 24, 2022. Detailed dates can be found on the official schedule[More detailed dates can be found at https://iclr.cc/Conferences/2022/Dates]. Since the review results for ICLR2022, available on https://openreview.netOpenReview, are continually updated in real-time, there are no archived results. (The preliminary results are overwritten by the latest reviews). 
Therefore, we used a web spider to crawl the website before and after the rebuttal phase at two different timestamps[The first time is 10 Nov 2021. The second time is 08 Mar 2022.]. After data cleaning, we obtained a dataset comprising 3,338 papers, 13,021 reviews, and 37,478 replies from ICLR2022. ICLR2022 uses a double-blind review mechanism, which means reviewers' identities are anonymized and each is assigned a unique ID not specific to any paper. Therefore, it is not possible to construct a signed bipartite graph as <cit.> did in their study. §.§ Data Description This section presents various statistics related to the decisions and presentation formats of papers submitted to ICLR2022. In addition to the acceptance or rejection decisions, papers are also grouped according to their presentation format. Out of the 3,338 papers submitted, 54 were selected for Oral presentation, 176 for Spotlight presentation, and 865 for Poster presentation. The remaining 2,243 papers were rejected, with 1,576 papers receiving Review rejection and 667 being Desk rejection/Withdrawn cases. Similar to journal peer review, the ICLR2022 organizers may add more reviewers to papers if they receive insufficient or widely varying reviews during the preliminary review. In our dataset, 139 papers had additional reviewers assigned to them, with 123 papers receiving one additional reviewer, 13 papers receiving two additional reviewers, and three papers receiving three additional reviewers. Interestingly, 73 out of the 139 papers (52.5%) experienced an increase in their average score after the additional reviews. The proportion of papers with increased scores after additional reviews exceeded the overall percentage of papers that had an increase in score post-rebuttal (43.59%). In ICLR2022, reviewers were asked to provide a recommendation score to express their opinions on the papers under review. Reviewers chose their scores from a range of {1, 3, 5, 6, 8, 10} (1: strong reject; 3: reject, not good enough; 5: marginally below the acceptance; 6: marginally above the acceptance; 8: accept, good paper; 10: strong accept, should be highlighted at the conference) Of the 13,021 reviews collected in our dataset, 304 reviews assigned a score of strong reject, 3,167 of reject, 3,710 of marginally below the acceptance, 3,698 of marginally above the acceptance, 2,088 of accept, and 54 of strong accept. To facilitate analysis, we treated scores of 6, 8, and 10 as positive and scores of 1, 3, and 5 as negative. The resulting positive ratio stood at 0.449, surpassing the positive ratios reported in some related works <cit.> (refer to sec:ssna). §.§ Overall Analysis In this section, we provide an overall analysis of both papers and reviews in our dataset. This analysis forms the foundation for our study and enhances our understanding of the ICLR2022 peer review process. §.§.§ Paper Analysis In this subsection, we analyze the difference between accepted papers and rejected papers. First, we analyze the number of authors in each group. Subsequently, we compute the word count of the paper's meta information, including the title, abstract, and keywords. Additionally, we ascertain whether the paper includes a one-sentence summary and whether supplementary material was uploaded, as per the options provided by ICLR2022. In addition, we investigate whether having skillful authors impacts the acceptance of a paper. 
We define a binary variable to indicate whether any author of a paper has previously published in an ICLR conference or is listed among the top Computer Science Scientists. (It is based on a scholar's D-index (Discipline H-index <cit.>))[https://research.com/scientists-rankings/computer-science]. Additionally, we quantify the number of revisions and pages per paper. Finally, we perform a two-sample t-test between accepted and rejected papers and report the corresponding p-value. The results are listed in tab:paper-analysis. From tab:paper-analysis, we can infer that: [(1)] * The number of authors for accepted papers is significantly higher than that of rejected papers (4.80 > 4.27). Papers accepted for Oral presentations have the highest average author count (5.33), while those that wereDesk Reject/Withdrawn have the lowest average author count (4.26). * The length of the title and abstract does not show any significant difference between accepted and rejected papers. Therefore, it appears that the length of the title and abstract does not significantly impact the paper's acceptance. * There are significant differences between accepted and rejected papers in the number of keywords and whether the one-sentence summary is completed. Our analysis suggests that keywords and one-sentence summaries might play a more significant role in aligning with reviewers' bids than titles and abstracts do. * Although the ratio of material uploaded is slightly higher for accepted papers (0.47 compared to 0.45), there is no significant difference in the upload of supplementary material between accepted and rejected papers. * The proportion of accepted papers with skilled authors is 0.80, which is significantly higher than that of rejected papers (0.60). This finding demonstrates that seasoned authors have a higher probability of paper acceptance compared to first-time authors. Additionally, accepted papers include a higher proportion of top-rated Computer Science Scientists as authors, indicative of the Matthew effect <cit.> prevalent in CS conferences. * Accepted papers have significantly more pages and revisions than rejected papers. This indicates that a higher page count (typically including appendices) and more revisions could potentially enhance the probability of a paper's acceptance. §.§.§ Review Analysis In this subsection, we examine the differences between positive and negative reviews. Initially, we analyze the detailed scoring metrics of each review across various groups to understand their correlation with recommendation scores. Next, we investigate reviewer rebuttal activity by counting the number of replies. Subsequently, we employ sentiment analysis to discern if a review adopts a more negative tone when the paper is considered inadequate (i.e., when the recommendation score is less than 6.0). We use a fine-tuned DistilBERT model <cit.> as the tool to analyze the review texts data and calculate a positive sentiment score, ranging from 0 to 1, where a value less than 0.5 indicates negative sentiment. Furthermore, we calculate the word count of the review text to compare the length of reviews across different groups. Finally, we perform a two-sample t-test [In cases where the variances of the two groups are unequal, we utilize Welch's t-test.] to evaluate the differences between positive and negative reviews, and we report the corresponding p-value. The results are listed in tab:review-analysis. 
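Both the sentiment scoring and the unequal-variance t-test described above can be reproduced with standard tooling. The sketch below uses the Hugging Face pipeline API and SciPy; the specific DistilBERT checkpoint named here is an assumption for illustration rather than a statement of the exact model we used.

```python
from transformers import pipeline
from scipy import stats

# A fine-tuned DistilBERT sentiment classifier (assumed checkpoint).
sentiment = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

def positive_score(text):
    """Map the classifier output to a positive sentiment score in [0, 1]."""
    out = sentiment(text[:510])[0]   # crude truncation of very long reviews
    return out["score"] if out["label"] == "POSITIVE" else 1.0 - out["score"]

def welch_ttest(group_a, group_b):
    """Welch's two-sample t-test (unequal variances); returns the p-value."""
    return stats.ttest_ind(group_a, group_b, equal_var=False).pvalue
```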
From tab:review-analysis, we can find that: [(1)] * Positive reviews (i.e., reviews for accepted papers) score significantly higher in aspects such as correctness, novelty, and significance compared to negative reviews (i.e., reviews for rejected papers), indicating that reviewers exhibit greater confidence in identifying flaws. * Reviewers demonstrate more active engagement in discussions for positive reviews (i.e., 1.55 compared to 1.35), suggesting that positive reviews tend to foster more active discussions. * Reviews favoring acceptance yielded significantly higher positive sentiment scores compared to reviews advocating rejection (i.e., 0.44 compared to 0.13), suggesting that recommendation scores correspond to the positive or negative sentiment expressed in the review text. * Despite negative reviews exhibiting a significantly higher word count than positive reviews, the reviews expressing strong acceptance recorded the highest word counts. § REBUTTAL RESULTS In this section, we address our first research question (RQ1): How impactful is the rebuttal stage? tab:rebuttal-analysis displays the number of reviews that increase (INC), decrease (DEC), or maintain (KEEP) their recommendation scores after the rebuttal stage (#Review). Additionally, we present the average paper scores (#Paper), changes in average paper scores (Δ), and acceptance percentages in tab:rebuttal-analysis. The average scores of 1,444 papers show an increase, resulting in an acceptance rate of approximately 58.38%, significantly higher than the acceptance rates of the 167 papers with decreased scores (13.77%) and the 1,727 papers with unchanged scores (13.26%). While 43.25% (1,444/3,338) of papers experienced an increase in scores, only 17.95% (2,310/12,863) of reviews displayed a similar increase Changes in the scores significantly impact paper acceptance. Subsequently, we delve deeper into the analysis of score changes, considering both the perspectives of the paper and the review. §.§ Paper Perspective We display the average score distribution for different acceptance groups in fig:inital_score_distrbution and fig:final_score_distrbution. From fig:inital_score_distrbution and fig:final_score_distrbution, we observe that the original average score cut-off for a paper to be accepted is between 5.0 and 6.0. When the initial score is below 5.0, the probability of acceptance is low. For the final average score, the cut-off is 6.0. This demonstrates that the rebuttal stage is crucial, and most accepted papers experience an increase in score after rebuttal. To illustrate the increase in average scores, we use the Sankey diagram in fig:sankey_paper and categorical heatmap in fig:heapmap_paper to display the paper changes. From fig:sankey_paper and fig:heapmap_paper, we find that while a smaller percentage of papers in all score groups show a decrease (DEC) in average scores, most papers maintain (KEEP) or increase (INC) in average scores after rebuttal. In particular, we observe a higher percentage of increases (INC) when the average scores of papers are above 6.0 compared to when they are maintained (KEEP) or decreased (DEC). The results mentioned above reflect the peer review process requires accepted papers to meet a minimum standard of quality. Besides, it indicates that the rebuttal stage primarily benefits borderline papers, helping them gain acceptance. §.§ Review Perspective We display the changes in review scores using the Sankey diagram in fig:sankey_review and categorical heatmap in fig:heapmap_review. 
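The INC/DEC/KEEP labels underlying these figures are derived directly from the scores recorded at the two crawl timestamps; a minimal sketch, with hypothetical column names, is given below.

```python
import numpy as np
import pandas as pd

def label_score_changes(reviews: pd.DataFrame) -> pd.DataFrame:
    """Label each review as INC / DEC / KEEP from the two crawls (a sketch).

    `reviews` is assumed to hold one row per review, with the recommendation
    score before and after rebuttal in `score_before` / `score_after`.
    """
    diff = reviews["score_after"] - reviews["score_before"]
    reviews["change"] = np.select([diff > 0, diff < 0], ["INC", "DEC"], default="KEEP")
    return reviews

# Paper-level changes aggregate the same scores, e.g.
# reviews.groupby("paper_id")[["score_before", "score_after"]].mean()
```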
From Figures <ref> and <ref>, we observe that most reviewers do not change their initial review scores. The majority of score increases (INC) occur at the borderline (around 5.0), which may help the paper gain acceptance. For high scores (8.0), the percentage of decreases (DEC) is larger than increases (INC), but both are smaller than maintenance (KEEP). This finding is consistent with the results in <cit.>. fig:detailed_score_corr shows the correlation of different scores before and after rebuttal. We find that the recommendation score is positively correlated with other detailed scores but negatively correlated with confidence. Furthermore, the final review scores are positively correlated with the initial review scores. In conclusion, changes in scores before and after rebuttal do exist and have a more significant impact on borderline papers. In top-tier CS conferences, the limitation of acceptance rates makes borderline papers highly competitive; authors strive to increase the recommendation score to ensure their papers are accepted. § SIGNED SOCIAL NETWORK ANALYSIS In peer review, reviewers may be influenced by the behavior of their peers. For example, reviewers might change their decision to reject a paper based on accepted recommendations from other reviewers for the same paper. <cit.> point out that "peer pressure" is the most important factor for score changes. They use all peer review scores for a given submission to build features, including before-rebuttal scores, the statistics of other peer reviews' scores, and statistics of all peer reviews' scores (max/min/mean/median/std). Although these metrics are considered the most important indicators for rebuttal analysis <cit.>, we argue that these metrics cannot reflect the number of reviewers and lack interpretability of the rebuttal results. In this paper, we propose using Signed Social Network Analysis (SSNA) to analyze the rebuttal process. SSNA is based on balance theory developed by Fritz Heider in the 1950s <cit.>. It is a psychological theory that aims to explain how individuals strive to maintain cognitive consistency in their attitudes and perceptions about themselves, others, and objects in their social environment. According to this theory, people are more comfortable with balanced relationships and feel an inner tension when they experience imbalance. In our case, we first define the following four signed motifs for a link from reviewer R_i to paper P_1 considering reviewer R_j. According to balance theory, we define the first two motifs in fig:signed_motifs as unbalanced, and the last two motifs as balanced. The first two motifs mean that reviewer R_j will "negatively" affect R_i, which may cause R_i to change their score. For example, in the first motif, R_i gives a positive score to paper P_1. But when R_j gives a negative score to P_1, it will cause "peer pressure" on R_i. When more reviewers exert this pressure, it is very likely that R_i will revise their score. To test whether the above theories hold in the peer-reviewed rebuttal scenario, we perform validation measurements on three top computer science conference datasets. In addition to the dataset of ICLR2022 in this paper, we also include the data of ACL2018 <cit.> and a top computer science conference (TCSC) <cit.>. First, we assign review signs based on different scoring scales. In ACL2018, the conference adopts a 6-point scale (1: clear reject, ..., 4: worth accepting, 5: clear accept, 6: award-level), and the sign interval is 3. 
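Given any such sign assignment, the per-paper share of balanced reviewer-pair motifs reported below can be computed with a short helper; the sketch assumes the recommendation scores have already been mapped to +1/-1 signs.

```python
from itertools import combinations

def balanced_motif_share(signs):
    """Proportion of balanced motifs among the reviewer pairs of one paper.

    `signs` holds one +1 / -1 review sign per reviewer of the paper.  A pair
    of reviewers forms a balanced motif when their signs agree and an
    unbalanced motif when they disagree, matching the four motifs above.
    For example, [-1, 1, 1] gives 1 balanced pair out of 3, i.e. 33.3%.
    """
    pairs = list(combinations(signs, 2))
    if not pairs:
        return float("nan")
    balanced = sum(1 for a, b in pairs if a * b > 0)
    return balanced / len(pairs)
```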
In ICLR2022, the conference uses 1, 3, 5, 6, 8, 10, and the sign interval is 5.0. For TCSC, the dataset is provided with signs. Then, we report the ratio of positive links before and after rebuttal. Next, we count the total number of balanced and unbalanced motifs from different conferences. For each reviewer, we extract the number of distinct motifs in fig:signed_motifs. Additionally, we compute the proportion of balanced motifs in each paper (, the result of [-1, 1, 1] is 33.3%), and we perform a paired t-test before and after the rebuttal stage to verify whether the changes are significant. We report the average score of all papers in tab:sign_analysis. From tab:sign_analysis, we observe that: [(1)] * The ratio of positive links is below 50% across all three datasets. We hypothesize that this is due to the need to control acceptance rates in top computer science conferences (, 24.9% in ACL2018). * In all three datasets, the number of balanced motifs and the proportion of balanced motifs per paper increase after rebuttal and the unbalanced ones decrease (11,720 → 13,329(↑) and 7,208 → 6,087(↓)). The proportion of balanced motifs per paper significantly increases after the rebuttal stage. This finding validates the balance theory in this scenario, suggesting that reviewers tend to reduce conflict in review comments, which helps the review results reach an agreement. * Another interesting observation is that, unlike ICLR2022, the negative link ratio of ACL2018 and TCSC increases after rebuttal. We speculate that this is because review comments will eventually be made public in ICLR2022, but not in ACL2018 and TCSC. The public social peer <cit.> pressure may be the primary reason for these surprising findings. § STRATEGY ANALYSIS In this section, we analyze the possible strategies to be employed during the rebuttal phase and the correlations between these strategies and outcomes. We compile a list of rebuttal strategies for successful rebuttals from the literature, a set of review guidelines published by journals and conferences[https://iclr.cc/Conferences/2022/AuthorGuide], and guidelines from experienced researchers[https://deviparikh.medium.com/how-we-write-rebuttals-dc84742fece1]. First, based on our analysis in sec:rebuttal_results, we define a review score increase (INC) after rebuttal as a successful rebuttal, while the rest of the rebuttals (KEEP and DEC) are considered non-successful. The overall success ratio is 17.95%. Second, we group the reviews for which authors did not submit any rebuttal as G_0 (the number in this group is 3,089). From the authors' perspective, there are several approaches or strategies they can use to improve their submissions. We have summarized the following quantifiable strategies: * Work hard: We use the number of authors' replies and the total word counts of authors' replies to investigate whether authors working harder (more replies) will lead to an increased score. We define the number of author replies greater than 2 as G_1 and the remaining replies as G_2. For the word count, we define the top 33% as G_1, and the bottom 33% as G_2. * Be polite: We use PoliteLex <cit.> to extract the positive and polite patterns in author responses, to verify whether being more polite will be helpful. We then rank the number of positive and polite patterns and divide the top 33% into G_1 and the bottom 33% into G_2. We can assume that G_1 is more polite than G_2. 
* Never miss: To verify whether authors address all reviewers' concerns without missing any points, we use both TF-IDF text similarity and deep cosine similarity of sentence embeddings <cit.> to measure the similarity of reviews and author responses. We sort the similarity values and choose the top 33% as G_1, and the bottom 33% as G_2. We can assume that G_1 misses fewer concerns from reviewers than G_2. * Add references: To determine whether authors respond with references, we use regular expressions to analyze the author's response to reviews and divide the references in the author's response into G_1, otherwise, it is G_2. * Make consensus: Based on the SSNA in  sec:ssna, we compute whether the authors or reviewers mention other reviewers. As ICLR2022 provides reviewer IDs, we can easily count whether other reviewer IDs appear in the text or not[An example: https://openreview.net/forum?id=0IqFsR9wJvI&noteId=hk92FJmSfz]. If the IDs are mentioned in the author responses, we group such author rebuttals as G_1, otherwise, it is G_2. For each strategy, we divide the responded reviews into two groups and conduct analyses and t-tests on both groups to ascertain if the strategy leads to significant differences in the outcomes between the two groups. The results are shown in  tab:strategy_analysis. We can find that: [(1)] * Even if authors do not submit any rebuttals, reviewers may revise their review comments with a low probability (2.82%). * The success rate of the group that adopted the rebuttal strategy (G_1) was significantly higher than that of the group that did not adopt the strategy. This suggests that when authors submit rebuttals, they need to be more skillful instead of simply replying. * The make consensus strategy is a particular example of a review process where the author's response can be enhanced by introducing peer effects. § REBUTTAL SUCCESS PREDICTION In this section, we first define a Rebuttal Success Prediction task to validate our modeling on rebuttals. Second, we present the possible features to model the successful/non-successful prediction task. Lastly, we provide the results and analysis of our multi-factor model. §.§ Problem Definition For a review r in paper p, we have the initial score as s_0. After the authors complete the rebuttal, r will receive the final score as s_1. Our task is to predict the sign for the final score s_1 - s_0 (s_1 - s_0 > 0 or not). We define the INC (s_1 - s_0 > 0) as a successful rebuttal (1), and the other results (KEEP and DEC) as non-successful (0). It is an imbalanced binary classification task, so we use Macro-F1 and AUC to measure the performance of our models. Macro-F1 is a variant of the F1-score, which is a harmonic mean of precision and recall. AUC (Area Under the Curve) is a measure of how well a binary classification model can distinguish between positive and negative samples <cit.>. Both metrics are used to measure the performance of proposed model. §.§ Methods Based on our previous discussion, we propose a multi-factor prediction model in fig:mlp_model with the following input features: * Paper Meta-Information X_m: We use the meta-information of a paper p in tab:paper-analysis to predict the sign of rebuttal, including 10 features (X_m∈ℝ^N× 10). * Rebuttal Text X_t: We use SPECTER <cit.> to encode both initial review text and author response text into 768-d dimension reviewer vectors X_t^r ∈ℝ^N× 768 and author vectors X_t^a ∈ℝ^N× 768. 
* Peer Effect Feature X_p: Following <cit.>, we use the detailed scores of review r (Recommendation Score, Correctness, Technical Novelty and Significance, Empirical Novelty and Significance, and Confidence) and the statistics (max/min/mean/median/std) of the detailed scores of other peer reviews for a given paper p. Besides, we add the number of balanced motifs and unbalanced motifs. The peer effect features are 27-d dimension vectors (X_p∈ℝ^N× 27). * Author Strategy Feature X_s: We use the strategy features employed in tab:strategy_analysis as the author strategy feature (X_s∈ℝ^N× 7). After obtaining the above features, we concatenate these features, then encode these features through a two-layer Multilayer Perceptron (MLP) model (the activation function is ReLU), and output the probability of success of the current review r by a sigmoid function f(x) =1/1+e^-x: X = Concatenate(X_m, X_t, X_p, X_s) P_success = Sigmoid(W_2 ·ReLU(W_1 · X+b_1) + b_2), where W_1, b_1, W_2 and b_2 are the parameters of MLP functions. Besides, to verify the role of each feature, we input it separately as a baseline (MLP(X_m)). Additionally, we compare two simple baselines: the majority baseline always chooses non-successful predictions, while the random baseline selects successful/non-successful predictions at random. To validate the model, we perform train-validation-test splits of the data. We exclude reviews whose authors have not received any responses. Next, we randomly select 80% of the reviews as training data and 10% as validation data. The remaining 10% of reviews are used for testing model performance. These models are implemented using PyTorch with the Adam optimizer (Learning Rate=0.01, Weight Decay=1e-3). We choose Binary Cross Entropy (BCE) as our loss function and train for 1,000 epochs to select the model that performs best on the validation set, then report the performance on the test set. The results are listed in tab:exp-results. §.§ Results From Table <ref>, we can observe the following: [(1)] * Utilizing a machine learning model effectively improves prediction results (AUC > 0.5). * Among the various feature types, the peer effect feature proves to be the most effective. This is consistent with previous findings  <cit.>. * The paper's meta-information feature exhibits poor performance, which can be attributed to a discrepancy between the paper's metadata and the rebuttal success prediction task. * Employing all features results in the best performance, surpassing that of other features. § CONCLUSION §.§ Summary and Discussion In this paper, we conduct an empirical study on the impact of a successful rebuttal stage in CS conference peer reviews. First, we collect and construct an open review dataset (ICLR2022) to examine the rebuttal stage at CS conferences. Second, through a preliminary analysis of the dataset, we determine that the rebuttal stage is crucial for paper acceptance. Third, we analyze the key factors for achieving a successful rebuttal, including reviewer social impacts and author rebuttal strategies. We employ signed networks to investigate peer effects and discover that the balanced structure significantly increases after the rebuttal in all three top conference datasets. Regarding author rebuttal strategies, we assess the effectiveness of several quantifiable approaches. Finally, we develop a machine learning model to predict the success or failure of a review rebuttal in order to validate our findings. 
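For concreteness, a minimal PyTorch sketch of this multi-factor predictor is given below: a two-layer MLP over the concatenated features X_m, X_t, X_p, X_s with a sigmoid output, trained with BCE and Adam as reported above. The hidden width and all identifiers are illustrative assumptions rather than our exact implementation; the SPECTER embeddings are assumed to be precomputed.

```python
import torch
import torch.nn as nn

class RebuttalSuccessMLP(nn.Module):
    """Two-layer MLP over concatenated features X_m, X_t, X_p, X_s, with sigmoid output."""
    def __init__(self, d_meta=10, d_text=2 * 768, d_peer=27, d_strategy=7, hidden=256):
        super().__init__()
        self.fc1 = nn.Linear(d_meta + d_text + d_peer + d_strategy, hidden)
        self.fc2 = nn.Linear(hidden, 1)

    def forward(self, x_m, x_t, x_p, x_s):
        x = torch.cat([x_m, x_t, x_p, x_s], dim=-1)      # X = Concatenate(X_m, X_t, X_p, X_s)
        h = torch.relu(self.fc1(x))                       # ReLU(W_1 X + b_1)
        return torch.sigmoid(self.fc2(h)).squeeze(-1)     # P_success = Sigmoid(W_2 h + b_2)

model = RebuttalSuccessMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-3)
criterion = nn.BCELoss()  # binary cross entropy on successful (1) vs. non-successful (0) rebuttals
```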
We hope our research can illuminate strategies for crafting successful rebuttals for reviews and assist authors in getting their submissions accepted. Undoubtedly, the most crucial aspect of a submission is the paper's quality. Maintaining high quality requires diligent effort. For instance, providing more details in appendices can strengthen the paper's foundation. Submissions typically have strict upper limits for the main text, but allow unlimited pages for appendices and citations. Enlisting the help of a skilled author is also recommended to improve submissions. During the rebuttal stage, we employ social network analysis to evaluate changes in balanced network structures. We observe an increase in the balanced network structure following the rebuttal stage. This suggests that social pressure may play a critical role in influencing reviewers to modify their review scores. Our findings align with previous studies <cit.>, but we offer an analytical perspective through social network analysis. This can be utilized as a strategy (building consensus) to encourage conformity and balance among reviewers. Moreover, additional strategies (being polite and remaining engaged) are advised to enhance the likelihood of a successful rebuttal. These recommendations are also supported by related works on crafting detailed responses to reviewers in journal peer review <cit.>. Lastly, peer review platforms might consider refining the rebuttal process to make it more transparent and helpful, as the exchange between reviewers and authors can be viewed as an integral part of the scientific contribution to a paper. §.§ Limitation and Future Work There are several limitations to this study. Regarding reviewers' social pressures, we only measured the interaction and influence among reviewers. However, interactions between reviewers and Area Chairs (ACs), which contribute to the final decision on a paper, are equally important. Unfortunately, due to data constraints, we were unable to assess the impact of this aspect. In our examination of authors' rebuttal strategies, we employed the t-test hypothesis test, which only analyzes the mean variance of the data. This statistical approach may have certain limitations, and additional experiments using causal analysis could be applied to assess strategies more effectively. Moreover, in addition to the quantitative strategies mentioned in the paper, there are many strategies that are difficult to quantify. Conducting research through questionnaires or interviews may also prove beneficial in exploring which strategies are most effective. In terms of future work, we will concentrate on negative links (i.e., rejected decisions/recommendations) in peer review to identify the most significant (implicit/explicit) reasons behind these negative links. Such insights can better enable authors to enhance their submissions and increase the likelihood of paper acceptance. Moreover, comparing multiple conferences or analyzing a single conference over several years presents a valuable research direction. This approach allows us to assess the progress of peer review for conference organizers and contribute to the improvement of peer evaluation within the scientific community. § ACKNOWLEDGMENTS This work is funded by the National Natural Science Foundation of China under Grant Nos. U21B2046, 62272125, and the National Key R&D Program of China (2020AAA0105200). Huawei Shen is also supported by Beijing Academy of Artificial Intelligence (BAAI). apalike
http://arxiv.org/abs/2307.00777v1
20230703064115
GA-DRL: Graph Neural Network-Augmented Deep Reinforcement Learning for DAG Task Scheduling over Dynamic Vehicular Clouds
[ "Zhang Liu", "Lianfen Huang", "Zhibin Gao", "Manman Luo", "Seyyedali Hosseinalipour", "Huaiyu Dai" ]
cs.LG
[ "cs.LG", "cs.AI" ]
GA-DRL: Graph Neural Network-Augmented Deep Reinforcement Learning for DAG Task Scheduling over Dynamic Vehicular Clouds Zhang Liu, Student Member, IEEE, Lianfen Huang, Member, IEEE, Zhibin Gao, Member, IEEE, Manman Luo, Student Member, IEEE, Seyyedali Hosseinalipour, Member, IEEE, Huaiyu Dai, Fellow, IEEE Z. Liu (zhangliu@stu.xmu.edu.cn) and L. Huang (lfhuang@xmu.edu.cn) are with the Department of Informatics and Communication Engineering, Xiamen University, Fujian, China. S. Hosseinalipour (alipour@buffalo.edu) is with the Department of Electrical Engineering, University at Buffalo-SUNY, Buffalo, NY 14260. M. Luo (luomanman@stu.xmu.edu.cn) is with the Department of Electronic Engineering, Xiamen University, Xiamen, China. Z. Gao (gaozhibin@jmu.edu.cn) is with the Navigation Institute, Jimei University, Xiamen, Fujian, China. H. Dai (hdai@ncsu.edu) is with the Department of Electrical and Computer Engineering, North Carolina State University, Raleigh, NC 26795 USA. (Corresponding author: Lianfen Huang). August 1, 2023 =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Vehicular clouds (VCs) are modern platforms for processing of computation-intensive tasks over vehicles. Such tasks are often represented as directed acyclic graphs (DAGs) consisting of interdependent vertices/subtasks and directed edges. In this paper, we propose a graph neural network-augmented deep reinforcement learning scheme (GA-DRL) for scheduling DAG tasks over dynamic VCs. In doing so, we first model the VC-assisted DAG task scheduling as a Markov decision process. We then adopt a multi-head graph attention network (GAT) to extract the features of DAG subtasks. Our developed GAT enables a two-way aggregation of the topological information in a DAG task by simultaneously considering predecessors and successors of each subtask. We further introduce non-uniform DAG neighborhood sampling through codifying the scheduling priority of different subtasks, which makes our developed GAT generalizable to completely unseen DAG task topologies. Finally, we augment GAT into a double deep Q-network learning module to conduct subtask-to-vehicle assignment according to the extracted features of subtasks, while considering the dynamics and heterogeneity of the vehicles in VCs. Through simulating various DAG tasks under real-world movement traces of vehicles, we demonstrate that GA-DRL outperforms existing benchmarks in terms of DAG task completion time. Vehicular cloud, directed acyclic graph, deep reinforcement learning, graph neural network. 
§ INTRODUCTION §.§ Background and Challenges Vehicular networks are one of the main components of the Internet-of-Things (IoT) ecosystem. They have been envisioned to provide a reliable platform for execution of a myriad of applications/tasks, such as autonomous driving and mobile E-health<cit.>,<cit.>. Many of these tasks possess complex computation topologies, which are often represented as directed acyclic graphs (DAGs) <cit.>. Fig. <ref> illustrates a real-world DAG task model corresponding to a navigation application executed on a vehicle<cit.>, where vertices denote subtasks of the task and directed edges describe the dependencies between the execution of subtasks. In particular, each subtask represents a processing component of navigation, while directed edges dictate the sequence of executions of subtasks. The sequential execution of subtasks in a DAG model stems from the fact that processing of a subtask may depend on the output data of others (e.g., in Fig. <ref>, processing of subtask b_2 relies on the output data of subtask b_1 and processing of b_4 relies on the output data of both b_2 and b_3). In vehicular networks, DAG tasks are frequently encountered. Nevertheless, one of the main obstacles in the execution of DAG tasks is that a task owner (i.e., a vehicle with a DAG task) in a vehicular network often fails to fulfill the task's execution requirements due to its limited on-board resources. To circumvent this, offloading the computation of DAG tasks from task owners to edge servers through the mobile edge computing (MEC) platform has been proposed<cit.>. However, such task offloading strategies often rely on vehicle-to-infrastructure (V2I) communications, which can suffer from a high latency (e.g., due to a high data traffic congestion on the fronthaul/backhaul links) and limited coverage (e.g., in suburban areas)<cit.>. In response to these limitations, vehicular clouds (VCs) have emerged as novel computing platforms that integrate heterogeneous and distributed computation resources of moving vehicles via opportunistic vehicle-to-vehicle (V2V) communications to build flexible and scalable computing topologies for real-time task processing<cit.>. Specifically, in a VC, DAG subtasks are dispersed across vehicles and the data needed for the execution of subtasks is transmitted via V2V links. Although DAG task processing over VCs is promising, efficient scheduling of DAG subtasks across vehicles is a highly non-trivial problem, which resembles mixed integer programming (MIP) due to the existence of continuous and binary variables in the formulation (detailed in Section III-E). MIP are NP-hard problems<cit.>, for which, dynamic programming <cit.>, <cit.> and list scheduling algorithms<cit.>,<cit.> have been widely used to obtain solutions. These algorithms, however, often suffer from a prohibitively high computation complexity, which renders them impractical for large-scale VC networks. Also, these algorithms require a prior knowledge about the system dynamics (e.g., time-varying V2V channel qualities), which is cumbersome to acquire in practical systems. To overcome these challenges, researchers have recently started exploring the learning-based methods, a popular example of which is deep reinforcement learning (DRL)<cit.>. 
Roughly speaking, DRL learns from interacting with an environment so as to generate real-time near-optimal mappings from the state space (detailed in Section IV-C) to the action space (detailed in Section IV-D) without requiring any prior knowledges about the environment. Although DRL has shown a tremendous success in task scheduling/offloading<cit.>, it cannot be readily adopted for scheduling of DAG tasks over dynamic VCs. This is due to the fact that DAG tasks' data (i.e., tasks' topologies) resides in a non-Euclidean space (i.e., graph). As a result, conventional DRL with handcrafted states designed to work with data in Euclidean spaces fails to automatically learn the topological information of DAG tasks, and thus can be hardly applied to unseen DAG task topologies upon deployment in real-world systems. To overcome this challenge, we propose to augment DRL with an emerging learning architecture called graph neural network (GNN). GNNs are capable of adaptively extracting discriminative features for each node of a graph based on the topological information aggregated from its neighboring nodes<cit.>. As a sub-category of GNNs, graph attention networks (GATs) have recently gained tremendous attentions, which extend the spatial convolution in convolutional neural networks (CNNs) to graph structures <cit.> and thus enjoy inductive learning, making their learned models generalizable to unseen graph topologies. §.§ Overview and Summary of Contributions Inspired by the unique advantages of DRL and GNNs, we propose GA-DRL, a GNN-augmented DRL scheme to conduct DAG subtask-to-vehicle allocation aiming at minimizing the DAG task completion time. In doing so, we first model the VC-assisted DAG task scheduling as a Markov decision process (MDP). We then tailor a GAT to extract a set of features for each subtask of a DAG task. Finally, we integrate our developed GAT into the learning architecture of a double deep Q-network (DDQN) to generate subtask-to-vehicle allocation decisions, while taking into account the dynamics and heterogeneity of vehicles in a VC. Major contributions of this paper can be summarized as follows: * We develop a multi-head GAT capable of extracting features for DAG subtasks. Particularly, our GAT conducts a two-way topological information aggregation by simultaneously considering predecessors and successors of each subtask. Further, we incorporate a non-uniform neighborhood sampling methodology into our GAT by codifying the scheduling priority of subtasks, making our GAT generalizable to unseen DAG task topologies upon being deployed over the real-world systems. * We propose a DDQN to conduct subtask-to-vehicle allocation decisions according to the extracted features of subtasks by our GAT, while taking into account the dynamics and heterogeneity of vehicles in a VC. Further, we incorporate an action mask module into the DDQN to avoid infeasible subtask-to-vehicle allocations, ensuring successful execution of subtasks. * We evaluate the performance of GA-DRL on a real-world road network obtained from OpenStreetMap<cit.> where SUMO<cit.>, one of the most popular softwares for generating traffic flow, is used to form a VC. Through simulating DAG tasks with various topologies, we reveal that GA-DRL can outperform existing benchmarks in terms of task completion time. The rest of this paper is organized as follows: Section II contains the related work. In Section III, we present the system model and formulate the VC-assisted DAG task scheduling as a MIP problem. 
We develop GA-DRL in Section IV. In Section V, we present simulation results before concluding the paper in Section VI. § RELATED WORK Existing works on DAG task scheduling over cloud-assisted networks can be roughly divided into two categories with respect to the type of their scheduling mechanisms: i) heuristic-based algorithms <cit.>,<cit.>,<cit.>,<cit.>,<cit.>; ii) learning-based methods<cit.>,<cit.>,<cit.>. Below, we summarize the contributions of these works, and highlight the differences between our methodology in this paper and prior works. §.§ Heuristic DAG Task Scheduling §.§.§ Static computing environment Heuristic methods for DAG task scheduling have been extensively studied for static MEC networks with fully connected servers<cit.>,<cit.>,<cit.>,<cit.>. H. Topcuoglu et al. in <cit.> proposed HEFT algorithm, where each subtask is assigned to the processor with the least execution time. In <cit.>, L. F. Bittencourt et al. proposed forward looking attributions to improve the performance of HEFT. In <cit.>, H. Kanemitsu et al. proposed a clustering-based DAG task scheduling algorithm via prioritizing assigning the subtasks located on the critical path to the same processor. G. C. Sih et al. in <cit.> adopted a compile-time-aware scheduling algorithm to dynamically allocate DAG subtasks over the existing processing units in the system. Recently, in <cit.>, Y. Sahni et al. introduced JDOFH to simultaneously consider dependencies among DAG subtasks and start time of network flows to transmit the data of subtasks over the network. §.§.§ Dynamic computing environment Few recent works have studied DAG task scheduling over dynamic networks<cit.>,<cit.>,<cit.>. Q. Shen et al. in <cit.> proposed DTOSC to conduct DAG task offloading and service caching in vehicular edge computing. F. Sun et al. in <cit.> addressed DAG task scheduling over VC via a modified genetic algorithm focusing on vehicles' dwell times. In <cit.>, Y. Liu et al. developed MAMTS to prioritize allocation of different DAG tasks according to their computation topologies in vehicular edge computing. The methodologies developed in the aforementioned works are heuristic, applying of which requires considerable number of iterations to reach locally optimal solutions. As a result they often suffer from prohibitively high computation complexities, which renders them impractical for real-time DAG task allocation. Also, these heuristic algorithms often presume a prior knowledge about the system dynamics (e.g., known time-varying V2V channel qualities), obtaining of which is extremely challenging in dynamic VCs, where the network topology may exhibit a significant temporal variation. §.§ Learning-based DAG Task Scheduling §.§.§ Static computing environment DRL schemes have become one of the most popular learning-based techniques in the literature of task scheduling, especially for static MEC networks<cit.>. In <cit.>, J. Yan et al. proposed an actor-critic DRL to learn the optimal DAG subtask assignment to access points. M. S. Mekala et al. in <cit.> developed a DRL-based DAG task offloading approach to reduce the utilization cost of edge servers. In<cit.>, J. Wang et al. proposed a DAG task offloading methodology based on meta reinforcement learning. M. Goudarzi et al. in<cit.> introduced weighted actor-learner architectures for DAG task allocation over resource-constrained IoT devices. In<cit.>, Z. Hu et al. presented a DRL-based Monte-Carlo tree search method to minimize DAG tasks' completion times through a clustered scheduler. 
§.§.§ Dynamic computing environment Considering dynamic computing environments<cit.>,<cit.>, <cit.>, H. Liu et al. in <cit.> utilized a policy-based DRL for minimizing DAG tasks' completion times in multi-vehicle scenarios. In <cit.>, J. Shi et al. proposed a DRL-based DAG task offloading scheme for vehicular fog computing considering the vehicles' mobility and availability. X. Wei et al. in <cit.> developed a DRL-based algorithm to jointly optimize the unmanned aerial vehicle trajectory planning, and DAG task scheduling. In <cit.>, L. Geng et al. proposed a multi-agent actor-critic DRL to schedule DAG tasks in a vehicular edge computing network. The DRL algorithms developed in the above works are based on handcrafted features, making them unable to fully capture the existing topological information in DAG tasks. This is because the state space of the DRL architectures studied in the above works merely contains basic, human-selected information regarding subtasks (e.g., their computation workloads, transmission data sizes, and number of predecessors/successors). As a result, the DRL methods explored in the above works are solely capable of making allocation decisions for DAG tasks with computation topologies that they have seen during their training period. In this work, we take the first steps towards addressing this limitation. §.§ Footprints of GNNs in Mobile Edge Computing Recently, the success of GNNs in solving a variety of complex problems in wireless communications has been revealed <cit.>, while studying their application in the context of DAG task scheduling is still in early stages. In <cit.>, Z. He et al. investigated the spectrum allocation in vehicle-to-everything networks based on the integration of GNNs and deep Q-learning. Y. Li et al. in <cit.> proposed a meta-reinforcement learning method for DAG task offloading in MEC platform, where the interdependencies between subtasks was extracted by GNNs. In <cit.>, H. Lee et al. developed a graph convolution network (GCN) and DRL to effectively learn a priority-based scheduling policy for DAG tasks. J. Chen et al. in <cit.> proposed an algorithm called ACED for DAG task offloading, where a GCN is leveraged to capture the topological information of DAG subtasks. The aforementioned works either ignore the topology of computation-intensive tasks (e.g., interdependencies among subtasks)<cit.> or focus on static MEC environments, overlooking the dynamics and instability of resource provisioning<cit.>, which are significant features of VCs. Moreover, the GCN architecture developed in <cit.> relies on transductive learning, which requires knowing the graph structure of DAG tasks upfront. As a result, their learned solutions for DAG task scheduling are not applicable to unseen DAG task topologies, which makes them suffer from a prohibitively high training overhead for each newly arrived DAG task to the system. In this work, we are particularly interested to address the shortcomings mentioned above. § SYSTEM MODEL AND PROBLEM FORMULATION In this section, we first give an overview of the system of our interest, DAG task model, vehicle mobility model, and computation offloading model. We then obtain an optimization formulation for VC-assisted DAG task scheduling. Table <ref> summarizes the major notations used in this section. §.§ System Overview We consider a time-slotted VC-assisted DAG task scheduling scenario, which is coordinated by a road side unit (RSU) with coverage diameter of D. 
We presume that the area comprises |𝒱| vehicles collected by the set 𝒱={v_m | 1 ≤ m ≤ |𝒱|}. In order to fulfill its DAG task completion demands, a task owner engages in offloading its DAG task with |ℬ| subtasks collected by the set ℬ={b_i | 1 ≤ i ≤ |ℬ|} to other vehicles[This paper investigates the DAG task scheduling problem for a single task owner with a single DAG task in one VC for analytical simplicity. Cooperations and resource sharing among VCs and competitions between multiple task owners to acquire computation resources are left as future work.]. Fig. <ref> shows a schematic of our VC of interest for the DAG task topology depicted in Fig. <ref>, subtask b_0 is a virtual subtask executed on the task owner (detailed in Section III-B). After receiving the offloading request from the task owner (i.e., v_1), the RSU acts as a centralized coordinator<cit.>, which processes a set of collected data (e.g., locations and resources of vehicles) to assign DAG subtasks to vehicles. Specifically, in Fig. <ref>, virtual subtask b_0 is assumed to be executed on the task owner locally, while subtask b_1 is allocated to vehicle v_3. After executing subtask b_1, vehicle v_3 is scheduled to transmit the output data of subtask b_1 to vehicles v_2 and v_4 for processing subtasks b_2 and b_3. Due to the interdependencies among DAG subtasks, the execution of subtask b_4 relies on the output data of both subtasks b_2 and b_3. Hence, vehicles v_2 and v_4 will both be scheduled to transmit their output data to vehicle v_5. Finally, vehicle v_5 will send a feedback (i.e., the final result of DAG task processing) to the RSU. The main assumptions made in this paper are summarized below: * It is assumed that VC remains stationary during each time slot<cit.>. * We presume that a single vehicle can only handle one subtask at a time <cit.>. Consequently, if multiple subtasks are assigned to a vehicle, they must wait until resources become available[Upon having vehicles that can process multiple subtasks simultaneously, those vehicles can be modeled as multiple virtual vehicles with unlimited contact duration among them, where each of them can process one subtask at a time.]. * Due to the mobility and the limited contact durations among vehicles, this paper only focuses on one-hop data transmission between vehicles<cit.>. * Since the size of the feedback sent to the RSU is usually smaller than that of the original input data, the time it takes to transmit this feedback is neglected<cit.>. §.§ DAG Task Model Without loss of generality, we index the task owner as v_1 with a computation-intensive DAG task, which is represented by a graph 𝒢=(ℬ,ℰ). In graph 𝒢, ℬ={b_i | 1 ≤ i ≤ |ℬ|} denotes the set of subtasks, and ℰ denotes the set of directed edges, where e_i,j∈ℰ indicates that subtask b_i has to be completed before the execution of subtask b_j. To better capture the sequential execution nature of DAG tasks, we further define the set of immediate predecessors of each subtask b_i as 𝒫_i={b_j | b_j ∈ℬ, e_j,i∈ℰ}. Similarly, we define the set of immediate successors of each subtask b_i as 𝒮_i. For example, in Fig. <ref>, we have ℬ={b_1,b_2,b_3,b_4}, ℰ={e_1,2,e_1,3,e_2,4,e_3,4}, 𝒫_4={b_2,b_3}, and 𝒮_1={b_2,b_3}. Furthermore, to make our analysis tractable, we introduce a virtual subtask to the DAG task topology denoted by b_0, which is connected to subtask(s) with no immediate predecessors as shown in Fig. <ref>. 
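For illustration, the DAG task model above can be sketched with a lightweight container as follows; the workloads, data sizes, and the four-subtask example topology are placeholders and not tied to any particular application.

```python
from collections import defaultdict

class DAGTask:
    """DAG task G = (B, E): subtasks with workloads u_i and edges e_{i,j} with data sizes c_{i,j}."""
    def __init__(self):
        self.workload = {}            # u_i: CPU cycles required by subtask b_i
        self.data = {}                # c_{i,j}: bits transmitted along edge e_{i,j}
        self.pred = defaultdict(set)  # P_i: immediate predecessors of b_i
        self.succ = defaultdict(set)  # S_i: immediate successors of b_i

    def add_subtask(self, i, u_i):
        self.workload[i] = u_i

    def add_edge(self, i, j, c_ij):
        self.data[(i, j)] = c_ij
        self.pred[j].add(i)
        self.succ[i].add(j)

    def add_virtual_source(self):
        """Virtual subtask b_0 (zero workload) connected to all subtasks without predecessors."""
        entries = [i for i in self.workload if not self.pred[i] and i != 0]
        self.add_subtask(0, 0.0)
        for i in entries:
            self.add_edge(0, i, 0.0)

# Example: b_1 -> {b_2, b_3} -> b_4 (workloads and data sizes are placeholder values).
task = DAGTask()
for i in (1, 2, 3, 4):
    task.add_subtask(i, u_i=1.5e9)
for (i, j) in [(1, 2), (1, 3), (2, 4), (3, 4)]:
    task.add_edge(i, j, c_ij=300e3)
task.add_virtual_source()
```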
§.§ Vehicle Mobility Model We assume that each vehicle v_m is driving at a random and constant speed g_m (meters per second). Since the speeds of vehicles are non-negative, we adopt a truncated Gaussian distribution<cit.> to capture them. Specifically, for any value of speed g, the probability density function of the truncated Gaussian distribution is defined as F(g)=2F(g)/Φ(g_𝗆𝖺𝗑 -μ_g/σ_g√(2))-Φ(g_𝗆𝗂𝗇 -μ_g/σ_g√(2)), where Φ(x)=2/√(2 π)∫_0^x e^-t^2 dt is the Gaussian error function, and g_𝗆𝖺𝗑 and g_𝗆𝗂𝗇 are defined as the maximum and minimum speed of vehicles, respectively. In (<ref>), F(g) is the probability density function of a Gaussian distribution which is given by F(g)=1/σ_g√(2 π) exp(-(g-μ_g)^2/2σ_g^2), where μ_g is the average speed of all vehicles, and σ_g is the standard deviation. Considering resource provisioning for DAG subtasks is conducted by vehicles that are located in the VC (i.e., within the coverage of the RSU), we utilize the notion of the dwell time to characterize vehicles' mobility. Specifically, considering that a contact event (i.e., V2V link formation) can happen between two vehicles as long as they have not left the VC, we define the dwell time of vehicle v_m in the VC as interval [AT_m, DT_m], where AT_m and DT_m represent the arrival and departure time of v_m at and from the VC, respectively, between which vehicle v_m is available to offer its computation resource. §.§ Computation Offloading Model Path Loss Model. Let (x_m(t),y_m(t)) denote the 2D coordinates of each vehicle v_m at time slot t, to consider the impact of dynamics of VCs on V2V links, we first adopt a dual-slope piecewise-linear model<cit.> to represent the propagation loss (in dB) between two vehicles v_m and v_n, denoted by PL(d_m,n(t)), as follows: PL(d_m,n(t))= PL_𝖫𝗈𝖲(d_m,n(t)) + β, ∀v_m,v_n ∈𝒱, where d_m,n(t)=√((x_m(t)-x_n(t))^2+(y_m(t)-y_n(t))^2) (in meters) denotes the Euclidean distance between vehicles v_m and v_n at time slot t, and β is an additional attenuation factor modeled according to a lognormal random variable with mean μ_β=5+max(0,15log_10(d_m,n(t))-41) (in dB) and standard deviation σ_β=4.5 (in dB). In (<ref>) PL_𝖫𝗈𝖲(d_m,n(t)) is the path loss of the light-of-sight (LoS) transmission between two vehicles, which is given by PL_𝖫𝗈𝖲(d_m,n(t))=32.4+20log_10(d_m,n(t)) +20log_10(F_c)+ δ, ∀v_m,v_n ∈𝒱, where F_c is the center frequency (in GHz), and δ captures the effect of signal power fluctuations due to surrounding objects modeled by a lognormal random variable with standard deviation σ_δ=3 (in dB). We then introduce the notion of ready time which enables us to develop our scheduling methodology for DAG tasks by taking their sequential execution into account. (Ready Time). Ready time RT_i indicates the time when all of the immediate predecessors of subtask b_i are completed/finished, which corresponds to the starting time of data transmission between b_j (b_j ∈𝒫_i) and the vehicle that processes b_i: RT_i=max _b_j ∈𝒫_i{AFT_j}, b_i ∈ℬ, where AFT_j is the actual finish time of subtask b_j when it is practically executed on a vehicle. Transmission Model. Combining (<ref>) - (<ref>), we let TT_i,m;j,n denote the data transmission time associated with edge e_i,j when subtasks b_i and b_j are allocated to vehicles v_m and v_n, respectively, which can be calculated as TT_i,m;j,n={[ c_i,j Ψ(PL(d_m,n(RT_j))), m≠n; 0, m = n ]. 
∀b_i,b_j ∈ℬ, e_i,j ∈ℰ, v_m,v_n ∈𝒱, where c_i,j (in bits) is the transmission data size between subtasks b_i and b_j associated with directed edge e_i,j, and Ψ(·) is a monotone increasing function indicating that a higher value of path loss between vehicles v_m and v_n at time slot RT_j (i.e., a worse V2V channel condition) leads to a longer transmission time (see Section V for a realization of Ψ(·)). Computation Model. To model the scheduling of DAG subtasks, let EST_i,m and EFT_i,m denote the earliest start time, and finish time of processing of subtask b_i on vehicle v_m, respectively. We assume that virtual subtask b_0 is processed on the task owner (i.e., v_1) and its computation workload is zero, we thus have EST_0,1=EFT_0,1=0. Subsequently, for each subtask b_i ∈ℬ, i ≠ 0, the values of EST_i,m and EFT_i,m can be calculated recursively as follows: EST_i,m=max{ AVT_i,m, RT_i+max_b_j ∈𝒫_i{ TT_j,n;i,m}^(I) }, ∀b_i ∈ℬ, v_m ∈𝒱, where AVT_i,m denotes the available time when vehicle v_m completes its latest assigned subtask and term (I) indicates the earliest arrival time of the required data for processing subtask b_i at vehicle v_m. Furthermore, we consider heterogeneous computation capabilities across vehicles, where for each vehicle v_m, its computation capability is denoted by f_m (in CPU cycles per second). As a result, the earliest finish time of processing subtask b_i on vehicle v_m is given by EFT_i,m= EST_i,m+u_i/f_m, ∀b_i ∈ℬ, v_m ∈𝒱, where u_i denotes the computation workload (in CPU cycles) of subtask b_i. §.§ Optimization Formulation We capture the subtask-to-vehicle allocations through a set of binary indicators ℐ={ξ_i,m| 0 ≤ i ≤ |ℬ|, 1 ≤ m ≤ |𝒱|}, where ξ_i,m=1 denotes that subtask b_i is allocated to vehicle v_m, and ξ_i,m=0 otherwise. Aiming to minimize the overall DAG task completion time, we formulate DAG task scheduling over the VC as the following mixed integer programming (MIP): min_ℐ max_b_i ∈ℬ, v_m ∈𝒱{EFT_i,m ξ_i,m}, s.t. (<ref>), (<ref>), ∑_v_m ∈𝒱 ξ_i, m=1, b_i ∈ℬ, C1 ξ_i,m ∈{0,1}, b_i ∈ℬ, v_m ∈𝒱, C2 ⋂_b_i ∈ℬ_m[EST_i, m,EFT_i, m]= ∅, v_m ∈𝒱, C3 EST_i,m ≥EFT_j,n, b_i ∈ℬ, b_j∈𝒫_i, v_m,v_n ∈𝒱C4, [EST_i, m,EFT_i, m] ⊂[AT_m,DT_m], b_i ∈ℬ, v_m ∈𝒱. C5 In (<ref>), the objective function captures the sequential execution of DAG subtasks, where the maximum finish time of all subtasks indicates the overall DAG task completion time. Also, constraint (C1) guarantees that each subtask is allocated to only one vehicle, while (C2) restricts the value of the allocation indicator ξ_i,m to be binary. Constraint (C3) ensures that a vehicle can only process one subtask at a time, where ℬ_m={b_i |ξ_i,m=1, 0 ≤ i ≤ |ℬ|} denotes the set of subtasks processed on vehicle v_m. Constraint (C4) indicates that the processing of a subtask can not start until all of its predecessors are completed, (C5) guarantees the availability of computation resources of vehicles with respect to the vehicles' dwell times: the earliest start time and earliest finish time of executing each subtask b_i on vehicle v_m should between the arrival time and departure time of vehicle v_m in a VC. It is known that MIP formulations similar to what we have in (<ref>) are NP-hard<cit.>. Also, considering the sequential execution of DAG subtasks (i.e. different subtasks may be executed at different time slots), we need the prior knowledge of the V2V path loss and availability of vehicles' computation resources, obtaining of which is cumbersome in practice. 
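To make these scheduling quantities concrete, the sketch below (reusing the DAG container sketched earlier) evaluates the ready time, earliest start time, and earliest finish time of a single subtask for a candidate subtask-to-vehicle assignment. The path-loss model and Ψ(·) are passed in as callables, and all identifiers are illustrative; this is an evaluation helper, not the scheduling algorithm itself.

```python
def transmission_time(c_ij, m, n, t, path_loss, psi):
    """Transmission time between the vehicles processing b_j and b_i; zero if co-located."""
    return 0.0 if m == n else c_ij * psi(path_loss(m, n, t))

def est_eft(task, i, m, assign, AFT, AVT, f, path_loss, psi):
    """Earliest start/finish time of subtask b_i on vehicle v_m, given its finished predecessors.

    assign : completed subtask -> vehicle index    AFT : completed subtask -> actual finish time
    AVT    : vehicle -> time it completes its latest assigned subtask    f : vehicle -> cycles/s
    """
    ready = max((AFT[j] for j in task.pred[i]), default=0.0)   # RT_i: all predecessors finished
    arrival = max(
        (transmission_time(task.data[(j, i)], assign[j], m, ready, path_loss, psi)
         for j in task.pred[i]),
        default=0.0,
    )                                                           # latest arrival of required input data
    est = max(AVT.get(m, 0.0), ready + arrival)
    return est, est + task.workload[i] / f[m]
```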
As a result, to tackle these challenges, we propose a GNN-augmented DRL scheme, named GA-DRL, to efficiently find near-optimal solutions for (<ref>). § GNN-AUGMENTED DRL (GA-DRL) FOR DAG TASK SCHEDULING OVER DYNAMIC VCS In this section, we first provide an overview of our GA-DRL methodology and the challenges we aim to address. We then tailor a GAT module for extracting features of subtasks. Subsequently, the VC-assisted DAG task scheduling is modeled as an MDP consisting of the state space, action space, and reward. Finally, we utilize a DDQN architecture to tackle (<ref>) and discuss its training procedure. §.§ GA-DRL Overview and Challenges §.§.§ GA-DRL overview Our method takes a different approach from traditional DRL methods developed for task scheduling <cit.>,<cit.>,<cit.>, which only consider predetermined states, such as computation workload, data size, and number of subtask predecessors/successors. Instead, we propose a GNN-augmented DRL approach that automatically learns distinctive subtask features and creates assignments between subtasks and vehicles. In particular, as shown in Fig. <ref>, the features of subtasks are acquired through a GAT module, rather than being predetermined. Our GA-DRL conducts subtask-to-vehicle allocations through a sequence of decision steps. At each decision step k, the DRL agent functioning at the RSU diligently collects relevant data on the system state s^(k), which includes the extracted features of current subtask obtained by GAT, as well as the parameters of the vehicles describing their dynamics and heterogeneity. DRL agent then feeds state s^(k) to a DDQN. The objective of DDQN is to effectively assign subtasks to vehicles by determining the best course of action a^(k). To this end, DDQN evaluates the value of each state-action combinations, and conducts a subtask-to-vehicle allocation a^(k), which moves the system to the next state s^(k+1). Finally, the DRL agent receives a reward r^(k), that aids in the training of a deep learning model. This, in turn, enhances the agent's ability to take better actions over time. §.§.§ Main challenges When applying GA-DRL to the VC-assisted DAG task scheduling, there are two main challenges that need to be tackled. (1) Feasibility of allocation decisions. Unlike static computing environments that have stable, fully-connected computing servers<cit.>,<cit.>,<cit.>, the dynamics of VC's resources can greatly affect the execution of DAG subtasks. This is captured by constraint (C5) in (<ref>), satisfying of which guarantees the time-interval of processing subtask b_i on vehicle v_m to be within the dwell time of v_m. Ensuring that subtask-to-vehicle allocation decisions are feasible (specifically, meeting constraint (C5)) can be difficult because neural networks typically lack a module to filter out infeasible actions. (2) Generalizability of designed GNN. Efficient inductive learning is a key feature of GAT<cit.>, which makes it suitable for using with previously unseen graph topologies. However, it can be difficult to ensure that the GNN model is applicable to various DAG tasks, as each task has its own unique topology and interdependence among subtasks. To overcome this challenge, we must carefully encode the information of each DAG task's topology to achieve meaningful results when combined with our later developed GAT. Table <ref> summarizes the major notations used in this section. 
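As an illustration of the feasibility issue in challenge (1), the following sketch filters out vehicles whose dwell times cannot contain a subtask's execution interval. It assumes an externally supplied EST/EFT evaluator (such as the helper sketched above) and is only meant to show how constraint (C5) translates into an action mask.

```python
def feasible_vehicles(est_eft_fn, subtask, vehicles, dwell):
    """Boolean mask over vehicles: True if [EST, EFT] of the subtask fits inside [AT_m, DT_m]."""
    mask = []
    for m in vehicles:
        est, eft = est_eft_fn(subtask, m)   # earliest start/finish time on vehicle v_m
        at_m, dt_m = dwell[m]               # arrival and departure time of v_m in the VC
        mask.append(at_m <= est and eft <= dt_m)
    return mask
```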
§.§ Graph Neural Network In this subsection, we explain the structure of GNNs and how we use a multi-head GAT to extract distinctive features of subtasks. Our GAT incorporates a two-way aggregation method that considers the topological information of both predecessors and successors of each subtask. To further enhance the adaptability of our GAT to new DAG tasks, we utilize a ranking-based sampling technique. §.§.§ Architecture of GNNs The architecture of a GNN is depicted in Fig. <ref>, where a GNN takes raw features of all subtasks as the input and subsequently generates result features containing corresponding topological information of the DAG task. Specifically, GNN utilizes an Aggregate function to accumulate the topological information passed by the neighbors of each subtask. The accumulated information is then modified through a nonlinear Update function. This procedure is repeated L times to create the result feature for each subtask. Raw feature of each subtask. Similar to conventional DRL methods <cit.>, which rely on human-selected information to define DAG subtasks, we also define the raw feature[Super-index 0 is used to capture that these are initial features of the subtask, which are later processed and enhanced through GNN.] of each subtask b_i as h^(0)_i={u_i, c_i, |𝒫_i|, |𝒮_i|}, b_i ∈ℬ, b_j ∈𝒮_i, where u_i is the computation workload of subtask b_i, and c_i = ∑_b_j ∈𝒮_ic_i, j/|𝒮_i| indicates the average transmission data size associated with edges e_i,j, b_j ∈𝒮_i. Also, |𝒫_i| and |𝒮_i| represent the number of predecessors and successors of subtask b_i, respectively. Neighbor set of each subtask. Considering that DAG subtasks are executed sequentially, we define 𝒩_i as the neighbor set of each subtask b_i which includes all of its predecessors as well as b_i itself; mathematically 𝒩_i= {b_j |e_j,i∈ℰ}∪{b_i}. Through an iterative process involving the use of Update and Aggregate functions, GNN obtains the result feature of each subtask b_i. Mathematically, at each iteration ℓ, we have h^(ℓ+1)_i= Update(Aggregate({h^(ℓ)_j|b_j ∈𝒩_i})), where h^(ℓ+1)_i denotes the result feature of subtask b_i at iteration ℓ. Through L iterations, the GNN derives the final result feature for each subtask, denoted by h_i^(L). This feature incorporates both the raw feature of each subtask (i.e., at ℓ = 0; h^(0)_i), as well as the topological information from neighboring subtasks (i.e., 𝒩_i) within the DAG task. Hereafter, we detail the Aggregate and the Update functions designed to extract features of DAG subtasks. §.§.§ Multi-head GAT Considering that the subtasks involved in 𝒩_i have different computation workloads, transmission data sizes and interdependencies, we employ an attention mechanism, which is inspired by <cit.> to assign diverse weights to subtasks with the aim of enhancing information of key subtasks. Specifically, at each iteration ℓ, we define an attention-based aggregation function called Aggregate^𝖺𝗍 as Aggregate^𝖺𝗍({ h^(ℓ)_j|b_j ∈𝒩_i})=∑_b_j ∈𝒩_iα^(ℓ)_i,jW^(ℓ)h^(ℓ)_j, where W^(ℓ) is a trainable weight matrix at iteration ℓ, and α^(ℓ)_i,j is a normalized attention coefficient at iteration ℓ, which measures the relative importance of subtask b_j to subtask b_i as follows: α^(ℓ)_i,j= exp(A^(ℓ) [ W^(ℓ) h^(ℓ)_i || W^(ℓ)h^(ℓ)_j] )/∑_b_j^' ∈𝒩_i exp(A^(ℓ)[W^(ℓ)h^(ℓ)_i || W^(ℓ)h^(ℓ)_j^'] ). In (<ref>), A^(ℓ) is a trainable vectors at iteration ℓ, and ·||· denotes the vector concatenation. 
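As a building block, a single attention head of this aggregation can be sketched in PyTorch as follows; feature dimensions are illustrative, and the complete module further uses multiple heads, two-way aggregation, and ranking-based sampling as described next.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionHead(nn.Module):
    """One attention head: scores each neighbor, normalizes with softmax, and aggregates."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.W = nn.Linear(d_in, d_out, bias=False)   # shared linear transform W
        self.a = nn.Linear(2 * d_out, 1, bias=False)  # attention vector A applied to [Wh_i || Wh_j]

    def forward(self, h_i, h_neighbors):
        # h_i: (d_in,) feature of subtask b_i; h_neighbors: (|N_i|, d_in) features of its neighbors
        wh_i = self.W(h_i).expand(h_neighbors.size(0), -1)
        wh_j = self.W(h_neighbors)
        scores = self.a(torch.cat([wh_i, wh_j], dim=-1)).squeeze(-1)
        alpha = F.softmax(scores, dim=0)                # normalized attention coefficients alpha_ij
        return (alpha.unsqueeze(-1) * wh_j).sum(dim=0)  # weighted aggregation of neighbor features
```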
Further, in order to enhance the effectiveness of GAT's learning process, we propose to use a multi-head GAT, where different attention heads learn to give more relevant weights to different subtasks. Let Z denote the total number of heads. Each attention head, denoted by z will individually aggregate topological information of subtasks, in conjunction with other attention modules. The multi-head attention-based aggregation function called Aggregate^𝗆𝖺𝗍 can be then formulated as Aggregate^𝗆𝖺𝗍({h^(ℓ)_j|b_j ∈𝒩_i}) = 1/Z ∑_z=1^Z (∑_b_j ∈𝒩_i α^(ℓ)(z)_i,jW^(ℓ)(z)h^(ℓ)_j), where iteration index ℓ and head index z are both used as superscripts hereafter. To better suit our problem, we aim to modify the Aggregate^𝗆𝖺𝗍 function defined in (<ref>) through developing a two-way aggregation for the multi-head GAT. This approach takes into consideration the predecessors and successors of each subtask, which helps to aggregate topological information in a more effective manner. §.§.§ Two-way aggregation To execute DAG subtasks, capturing the conditions of predecessors and successors of each subtasks are equally important. As a result, we develop a two-way aggregation approach that utilizes two different types of attention heads. This approach involves using the inverse neighbor set 𝒩_i^-1 of each subtask b_i, which includes all of its successors and b_i itself 𝒩^-1_i={b_j |e_i,j ∈ℰ} ∪{b_i}. At each iteration ℓ, half of the attention heads from Z are then allocated to collect topological information from the neighboring subtasks, while the remaining half is utilized to gather topological information from the inverse neighbor set, which leads to the modification of (<ref>) to Aggregate^𝗆𝖺𝗍(h^(ℓ)_j)=1/Z[∑_z=1^Z/2(∑_b_j ∈𝒩_i α^(ℓ)(z)_i,jW^(ℓ)(z)h^(ℓ)_j) +∑_z=Z/2^Z(∑_b_j ∈𝒩^-1_i α^(ℓ)(z)_i,jW^(ℓ)(z)h^(ℓ)_j)]. We then aim to further modify the Aggregate^𝗆𝖺𝗍 function defined in (<ref>). This makes out GAT module different from other existing GNNs <cit.>: we do not consider all the neighbors of a given subtask to accumulate topological information. Instead, we opt for a weighted sample of neighbors, based on their scheduling priority. This approach allows our GAT to be more generalizable to unseen DAG task topologies. We next describe this approach. §.§.§ Ranking-based sampling We first devise an approach to prioritize scheduling of subtasks based on their ranking value. By employing a recursive method, we determine the ranking value of each subtask b_i labeled as rank_i as follows: rank_i = max_b_j ∈𝒫_i {rank_j +u_j + c_j,i}, b_i ∈ℬ, where u_j is the average execution cost of subtask b_j, b_j ∈𝒫_i, which is given by u_j=∑_m=1^|𝒱|u_j / f_m/|𝒱|, v_m ∈𝒱, and c_j,i denotes the average transmission cost associated with edge e_j,i, b_j ∈𝒫_i at the beginning (i.e., at time slot 0), which is given by c_j,i=∑_m=1^|𝒱|∑_n=1^|𝒱|c_j,iΨ(PL(d_m,n(0 )))/|𝒱|^2. Assuming rank_0=0 for virtual subtask b_0, we maintain a subtask scheduling priority list ℒ^𝗋𝖺𝗇𝗄 as ℒ^𝗋𝖺𝗇𝗄={b_i ≻b_j |b_i,b_j ∈ℬ, rank_i <rank_j }, where the preference relation b_i ≻ b_j indicates that subtask b_i has a higher scheduling priority compared with subtask b_j due to a lower value of rank_i[Our current ranking method for DAG subtasks relies on heuristics, which may limit the GNN-augmented DRL algorithm's ability. We plan to address this issue by exploring alternative methods for determining the scheduling priority of DAG subtasks using DRL in the future.]. 
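A minimal sketch of this ranking step is given below; it reuses the DAG container sketched earlier, takes the average execution and transmission costs as precomputed inputs, and only illustrates the recursion and the resulting scheduling order.

```python
from functools import lru_cache

def build_priority_list(task, avg_exec, avg_comm):
    """Rank each subtask recursively from its predecessors and sort by ascending rank.

    avg_exec : subtask -> average execution cost over all vehicles
    avg_comm : (j, i)  -> average transmission cost of edge e_{j,i} at time slot 0
    """
    @lru_cache(maxsize=None)
    def rank(i):
        if not task.pred[i]:
            return 0.0  # virtual subtask b_0
        return max(rank(j) + avg_exec.get(j, 0.0) + avg_comm.get((j, i), 0.0)
                   for j in task.pred[i])

    subtasks = [i for i in task.workload if i != 0]
    return sorted(subtasks, key=rank)  # lower rank value -> higher scheduling priority
```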
Finally, we define 𝒩^𝗋𝖺𝗇𝗄_i as a ranking-based neighbor set of subtask b_i which contains the subtasks sampled from 𝒩_i. The sampling probability/weight of subtask b_j from 𝒩_i to be included in 𝒩_i^𝗋𝖺𝗇𝗄, denoted by p_j, is calculated as p_j=exp(rank_j)/∑_b_j^' ∈𝒫_iexp(rank_j^'). This weighted subtask sampling method leads to improving the generalizability of our method by intentionally losing topological information passed by the subtasks which are not sampled, which makes our GAT model less sensitive to the topological variations in DAG tasks. This resembles the dropout<cit.> mechanism widely leveraged in training deep neural network. Note that subtask sampling is done with replacement if the sample size is larger than the size of 𝒩_i. Aggregate function. By integrating aforementioned methodologies, our designed Aggregate^𝗆𝖺𝗍 function not only enables information enhancement of key subtasks by considering a two-way multi-head attention-based aggregation, but also improves generalizability by considering a ranking-based sampling; mathematically Aggregate^𝗆𝖺𝗍(h^(ℓ)_j)=1/Z[∑_z=1^Z/2(∑_b_j ∈𝒩^𝗋𝖺𝗇𝗄_i α^(ℓ)(z)_i,jW^(ℓ)(z)h^(ℓ)_j) +∑_z=Z/2^Z(∑_b_j ∈𝒩^-𝗋𝖺𝗇𝗄_i α^(ℓ)(z)_i,jW^(ℓ)(z)h^(ℓ)_j)], where 𝒩^-𝗋𝖺𝗇𝗄_i is the inverse ranking-based neighbor set of subtask b_i sampled from the 𝒩^-1_i using a similar sampling method described in (<ref>). Update function. After receiving aggregated topological information in (<ref>), we apply the exponential linear unit activation (ELU)<cit.> in the Update function. Finally, combining the aforementioned Aggregate and Update functions, we can express (<ref>) as h^(ℓ+1)_i=ELU(1/Z[∑_z=1^Z/2(∑_b_j ∈𝒩^𝗋𝖺𝗇𝗄_i α^(ℓ)(z)_i,jW^(ℓ)(z)h^(ℓ)_j) +∑_z=Z/2^Z(∑_b_j ∈𝒩^-𝗋𝖺𝗇𝗄_i α^(ℓ)(z)_i,jW^(ℓ)(z)h^(ℓ)_j)]). In our experiments, we found that our approach could achieve high performance with L=2,Z=4, where W^(1)∈ℝ^4×16, A^(1)∈ℝ^32×1, and W^(2)∈ℝ^16×32, A^(2)∈ℝ^64×1. A flow chart of the relationships between the components developed for Aggregate function is shown in Fig. <ref>. Also, Algorithm <ref> details the corresponding procedure of our GAT module with computation complexity 𝒪(|ℬ|LZ), where we assume a set of learned parameters (i.e., W^(ℓ)(z) and A^(ℓ)(z)). These parameters are later optimized in conjunction with DDQN parameters. We next formulate DAG task scheduling as an MDP with state, action, and reward representation. §.§ State Representation The result feature of each subtask b_i, denoted by h^(L)_i, is generated through consecutive L iterations in (<ref>). We assume subtasks to vehicles assignments through a series of decision steps indexed by k, at each decision step k, let subtask b_τ(k) be the current subtask waiting to be allocated to a vehicle, where τ(k) indicates the subtask's index at position k in ℒ^𝗋𝖺𝗇𝗄. We define the system state s^(k) as follows: s^(k)={h^(L)_τ(k), ℐ^(k-1), 𝒜^(k), 𝒪^(k)}, where ℐ^(k-1) denotes subtask-to-vehicles allocation decisions for the subtasks located before current subtask b_τ(k) in ℒ^𝗋𝖺𝗇𝗄, and 𝒜^(k)= {avail_1, avail_2, ⋯, avail_|𝒱|} is the availability indicator set at the instant of decision step k, where avail_m=1 denotes that vehicle v_m is available for offering its computation resource or processing current subtask, and avail_m=0 otherwise. Also, 𝒪^(k)= {(x_m,y_m)| v_m ∈𝒱} is the instantaneous location of vehicles at decision step k. §.§ Action Space During each decision step k, we need to determine which vehicle should be assigned to each subtask based on the system state s^(k) and subtask scheduling priority list ℒ^𝗋𝖺𝗇𝗄. 
In particular, at decision step k, for current subtask b_τ(k), action a^(k) is defined as a^(k) ∈{1,2,⋯,|𝒱|}, where a^(k)=1 implies that current subtask b_τ(k) is processed locally on task owner v_1, and a^(k)∈{2,⋯,|𝒱| } implies that current subtask b_τ(k) is allocated to other vehicles for a faster execution. §.§ Reward Design At decision step k, given state s^(k), we associate performing action a^(k) for allocating of current subtask b_τ(k) to an immediate reward r^(k) leveraged to evaluate the quality of action a^(k). We define the reward r^(k) as the decrease in the EFT of all subtasks as r^(k) =max_b_i ∈ℬ, v_m ∈𝒱{EFT^(k-1)_i,m }_(I)-max_b_i ∈ℬ, v_m ∈𝒱{EFT^(k)_i,m }_(II), where term (I) and (II) denote the maximum DAG task completion time before and after scheduling the current subtask, respectively. We next demonstrate the rationality of reward function introduced above. Rational of the Choice of Reward. Let K denote the total number of decision steps. According to (<ref>), the discounted cumulative reward can be calculated as R = ∑_k=1^K γ_1^kr^(k) = ∑_k=1^K γ_1^k( max_b_i ∈ℬ, v_m ∈𝒱{EFT^(k-1)_i,m }-max_b_i ∈ℬ, v_m ∈𝒱{EFT^(k)_i,m }), where γ_1 is the discount factor. Assuming γ_1=1 for simplicity, since at decision step k, we determine the allocation of only the current subtask b_τ(k) according to scheduling priority list ℒ^𝗋𝖺𝗇𝗄, we have K=|ℒ^R|=|ℬ|. Thus, (<ref>) can be rewritten as R = ∑_k=1^K(max_v_m ∈𝒱{EFT_τ(k-1),m }-max_v_m ∈𝒱{ EFT_τ(k),m }) =(max_v_m ∈𝒱{EFT_τ(0),m }-max_v_m ∈𝒱{EFT_τ(1),m } +max_v_m ∈𝒱{EFT_τ(1),m }+⋯-max_v_m ∈𝒱{EFT_τ(K),m } ) =-(max_v_m ∈𝒱{EFT_τ(K),m }-max_v_m ∈𝒱{EFT_τ(0),m }), where we define b_τ(0) as the virtual subtask with max _v_m ∈𝒱{EFT_τ(0),m}=0. The last result in (<ref>) (i.e., term -max _v_m ∈𝒱{EFT_τ(K),m}) indicates that maximizing the cumulative reward is consistent with minimizing the task completion time given in (<ref>). Hereafter, in order to solve the above mentioned MDP, we resort to a DDQN, which adopts the action (i.e., subtask-to-vehicle allocation) at each decision step yielding the largest Q-value (i.e., state-action value) prior to DAG task scheduling over dynamic VCs. §.§ Double Deep Q-Network §.§.§ Deep Q-network We first describe DQN methodology<cit.>, which paves the way for DDQN. In DQN, we have two deep neural networks (DNNs) called predict Q-network Q(s,a; θ^𝗉) and target Q-network Q(s,a; θ^𝗍). Particularly, θ^𝗉 and θ^𝗍 are the vectors of weights/parameters of DNNs, and s and a denote the state and action, respectively. Predict Q-value. At each decision step k, given state s^(k), using the predict Q-network, the DRL agent first estimates/predicts the Q-value Q(s^(k), a;θ^𝗉) of all actions a = 1,2,⋯,|𝒱|, where s^(k) consists of the extracted feature of current subtask b_τ(k) and vehicles' parameters given in (<ref>). Q-value is a measure of the quality of the action: a higher Q-value is an indicator to a better action. Action selection. The DRL agent then performs an action a^(k) using a max mathematical estimator as follow a^(k)=argmax_aQ(s^(k), a; θ^𝗉), a∈{1,2⋯|𝒱|}. The DRL agent then receives a reward r^(k) computed by (<ref>). Target Q-value. The system subsequently transits to the next state s^(k+1), and the DRL agent resorts to target Q-network for calculating the target Q-value of state s^(k), denoted by 𝗒^(k): 𝗒^(k)= r^(k) + γ_2 Q(s^(k+1), argmax_aQ(s^(k+1), a;θ^𝗍)_(I); θ^𝗍)_(II), a∈{1,2⋯|𝒱|}. 
To obtain the parameter θ^𝗉, the mean square error, denoted by 𝖦(θ^𝗉), is used with discount factor γ_2 as follows 𝖦(θ^𝗉) = 1/2[𝗒^(k)- Q(s^(k),a^(k); θ^𝗉)]^2, a∈{1,2⋯|𝒱|}. Also, the weights of the target network θ^𝗍 are periodically copied from the predict network θ^𝗉. §.§.§ Double Deep Q-network In standard DQN, the max operator employs the same values to both select (i.e., term (I) in (<ref>)) and evaluate (i.e., term (II) in (<ref>)) an action. This implies that the Q-values are updated based on estimated future rewards, rather than actual rewards. Thus, there is a risk of overestimating Q-values, especially when the estimates are based on an inaccurate model of the environment. To prevent this, we resort to DDQN <cit.> aiming at separating the action selection from the action evaluation. In DDQN, the action with the maximum Q-value is selected using the predict network, and the Q-value for this action is evaluated using the target network. In particular, DDQN uses the same approach for predicting Q-value and selecting action as DQN. However, it uses a different target network update rule detailed next. Target Q-value in DDQN. The target value of state s^(k) in DDQN (see the red line shown in Fig. <ref>), denoted by 𝗒^(k)_𝖣𝗈𝗎𝖻𝗅𝖾 is changed from (<ref>) to 𝗒^(k)_𝖣𝗈𝗎𝖻𝗅𝖾= r^(k) + γ_2Q(s^(k+1), amax Q(s^(k+1), a, θ^𝗉); θ^𝗍), a∈{1,2⋯|𝒱|}, where the action a is conducted by predict Q-network Q(s, a;θ^𝗉). Finally, the mean square error for training predict Q-network Q(s, a;θ^𝗉) is modified from (<ref>) to 𝖦(θ^𝗉) =1/2[𝗒^(k)_𝖣𝗈𝗎𝖻𝗅𝖾 - Q(s^(k),a^(k); θ^𝗉)]^2, a∈{1,2⋯|𝒱|}. Using which the parameter θ^𝗉 is obtained, the weights of the target network θ^𝗍 are then periodically copied from the predict network θ^𝗉. §.§ Training Process We consider training of DRL through a series of episodes, where each episode contains total of K sequential decision steps. At each decision step k, DRL agent generates a pair of observation (s^(k),a^(k),r^(k),s^(k+1)). An episode is considered to be complete when a vehicle is assigned the subtask with the lowest scheduling priority, which is listed in the last position of ℒ^𝗋𝖺𝗇𝗄 (i.e, K=|ℬ|). §.§.§ Q-network training Based on policy gradient algorithm<cit.>, predict Q-network Q(s, a;θ^𝗉) is trained by iteratively tuning the weights θ^𝗉 at each decision step k through minimizing the mean square error given in (<ref>) as follows: θ^𝗉 ←θ^𝗉- μ∂𝖦(θ^𝗉)/∂θ^𝗉 where μ is the tunable learning rate. As for the target Q-network Q(s, a;θ^𝗍), θ^𝗍 is copied from θ^𝗉 at beginning, and θ^𝗍 will be iteratively updated to θ^𝗉 after conducting some iterations (5 decision steps in our simulations). We adopt a ϵ-greedy policy to select action, in which the DRL agent probabilistically explores the actions which have not been adopted yet instead of an action with the maximum Q-value in (<ref>). Also, we leverage a replay buffer ℛ to store the sequence of (s^(k),a^(k),r^(k),s^(k+1)) obtained through decision steps k. In particular, the gradient in (<ref>) is obtained by selecting mini-batches of data from the reply buffer. At each decision step k, we consider the feasibility of actions for the current subtask b_τ(k). Actions that meet constraint (C5) are defined as feasible, while the others are infeasible. We leverage action mask<cit.> technique to prevent DDQN from performing infeasible actions. In this approach, the Q-value for an infeasible action is set to a large negative value, to ensure that taken actions are feasible. 
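The masked action selection and the double-DQN target described above can be sketched as follows; `predict_q` and `target_q` stand for the two Q-networks, the boolean feasibility mask encodes constraint (C5), and the exploration scheme and constants are illustrative assumptions.

```python
import torch

NEG_INF = -1e9  # Q-value assigned to infeasible subtask-to-vehicle allocations (action mask)

def select_action(predict_q, state, feasible_mask, epsilon=0.9):
    """Epsilon-greedy action selection restricted to feasible vehicles (feasible_mask: bool, (|V|,))."""
    q = predict_q(state).masked_fill(~feasible_mask, NEG_INF)
    if torch.rand(()) > epsilon:                      # explore uniformly among feasible actions
        choices = feasible_mask.nonzero().flatten()
        return int(choices[torch.randint(len(choices), (1,))])
    return int(q.argmax())                            # otherwise act greedily on masked Q-values

def ddqn_target(target_q, predict_q, reward, next_state, next_mask, gamma=0.9):
    """Double-DQN target: select the next action with the predict network, evaluate it with the target network."""
    with torch.no_grad():
        q_next = predict_q(next_state).masked_fill(~next_mask, NEG_INF)
        a_star = int(q_next.argmax())
        return reward + gamma * target_q(next_state)[a_star]
```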
§.§.§ GAT training The state s^(k) which consists of the extracted features of current subtask b_τ(k) is obtained from the GAT with parameters 𝒲={W^(ℓ)(z)| 1≤ℓ≤ L, 1≤ z≤ Z} and 𝒜={A^(ℓ)(z)| 1≤ℓ≤ L, 1≤ z≤ Z}. Thus, we can rewrite the right hand-side of (<ref>) as [r^(k) + γ_2Q(s^(k+1)(𝒲,𝒜), a^*max Q(s^(k+1), a^*, θ^𝗉); θ^𝗍) - Q(s^(k)(𝒲,𝒜),a^(k); θ^𝗉)]^2, a∈{1,2⋯|𝒱|}, which indicates that parameters 𝒲, 𝒜 and θ^𝗉 are trained simultaneously by minimizing (<ref>) during the decision steps of the DRL agent. Algorithm <ref> presents a pseudocode of GA-DRL training procedure. § PERFORMANCE EVALUATION In this section, we first provide parameter settings for simulations. We then study the convergence of GA-DRL. Finally, we compare the performance of GA-DRL with four DAG task scheduling benchmarks in terms of the task completion time. §.§ Simulation Setting Simulation environment. All neural networks considered in this work are implemented using PyTorch 2.0.0<cit.> and Python 3.8.1 platforms, and Adam<cit.> is leveraged to optimize networks. In our simulations, we consider a real-world highway traffic region as shown in Fig. <ref>(a) of size 1km×1km in Xiamen, China, obtained from OpenStreetMap<cit.>. Moreover, SUMO<cit.> is utilized to import mobile vehicles using the mobility model developed in (<ref>)-(<ref>), and subsequently emulate a real-world VC as shown in Fig. <ref>(b). Also, the arrival time of each vehicle, i.e., AT_m, is assumed to be uniformly distributed in [1,5] (in second) for analytical simplicity, and μ_g=50 (in Kilometres per hour) with σ_g=10. Parameter setting of DAG tasks. The task owner has a DAG task which is generated according to <cit.>. We assume that the computation capability of each vehicle is uniformly distributed in [1, 10] (in GHz)<cit.>, the distance between different vehicles during the task scheduling process are captured by SUMO, and function Ψ(·) in (<ref>) is defined as Ψ(PL(d_m, n(t))) = 0.15 PL(d_m, n(t)) + 0.001<cit.>. Also, the computation workload of each subtask is uniformly distributed in [1, 2] (in Gigaclock cycles)<cit.> and the transmission data size of each edge is uniformly distributed in [100, 500] (in KB)<cit.>. During training, we have chosen ϵ-greedy policy with ϵ=0.9 and discount factor γ_2 = 0.9. §.§ Convergence Performance In Fig. <ref>, we depict the convergence behavior of GA-DRL with respect to the number of episodes. Note that the best convergence and reward values are achieved when the GA-DRL's learning rate is 0.0001. On the other hand, as the learning rate increases from 0.0001 to 0.0005, the average reward is significantly decreased due to the instability of learning. As a result, we fix the learning rate of the GA-DRL to 0.0001 when comparing it with benchmarks in the following. §.§ Benchmarks To study the performance of GA-DRL, we implement four DAG task scheduling benchmarks, including LPS, HEFT <cit.>, MGA <cit.>, and DRLOSM <cit.> as detailed below. * Local processing scheme (LPS): All subtasks are processed locally by the task owner itself without offloading to other vehicles. * Heterogeneous earliest finish time (HEFT)<cit.>: All subtasks are first sorted according to their ranking value in (<ref>). The subtasks are assigned to the vehicles that can complete them in the shortest time. The HEFT algorithm does not take into account the constraint of V2V transmission (C5) since it was designed for a static computing environment. 
We assume that subtasks-to-vehicles allocations that do not satisfy constraint (C5) are executed locally. * Modified genetic algorithm (MGA)<cit.>: MGA considers an integer encoding to denote subtask-to-vehicle assignments. The assignments with high fitness (i.e., low task completion time) are stochastically selected to perform crossover (i.e., exchange their processing vehicles). Finally, a mutation (i.e., changing the processing vehicle) is adopted to avoid early convergence. MGA considers a VC environment satisfying V2V communication constraint (C5). * DRL offloading scheduling method (DRLOSM)<cit.>: DRLOSM is an improved version of the method proposed in <cit.>. All subtasks are first sorted according to their ranking value in (<ref>). DRLOSM uses a DDQN architecture, where at decision step k, the raw feature of current subtask b_τ(k) is integrated in s^(k) without the use of GNNs. DRLOSM also satisfies the V2V communication constraint (C5) through an action mask module. §.§ Simulation Results of Randomly Generated DAG Tasks We conduct performance evaluations by analyzing the average completion time of DAG tasks for various numbers of layers[The number of layers of a DAG task refers to the length of the longest path from the starting subtask to the finishing subtask. For a DAG task with a fixed number of subtasks, as the number of layers increases/decreases, there are more/less subtasks that are successors of the same subtask, implying a higher/lower potential for parallelism during the task execution.] of DAG tasks, subtasks, and vehicles in the network. The results are the average performance obtained via 100 independent Monte-Carlo iterations. Also, to compare the generalizability of DRLOSM and our GA-DRL, during the training period, we use the same DAG task topology, while deploying them for various DAG task topologies under performance evaluation. §.§.§ Impact of the number of vehicles in VC The results presented in Fig. <ref> illustrate the impact of increasing the number of vehicles from 1 to 20 on the completion time of the DAG task. The experiment was conducted with 20 subtasks and 5 layers. We observed that when only one vehicle is involved in VC, all DAG subtasks have to be executed sequentially and locally, resulting in the same completion time across different algorithms. However, as the number of vehicles increases, the completion of DAG tasks is significantly accelerated due to the sufficient computation resources. Overall, GA-DRL outperforms other algorithms in terms of task completion time. Itis 51.63% better than LPS, 27.82% better than HEFT, 24.69% better than MGA, 5.17% better than DRLOSM at 5 vehicles; and is 57.59% better than LPS, 25.15% better than HEFT, 17.08% better than MGA, and 10.41% better than DRLOSM at 20 vehicles. §.§.§ Impact of the number of subtasks In Fig. <ref>, we can see the evaluation of the completion time for DAG tasks as the number of subtasks increases. In this experiment, we set the number of vehicles involved in the VC at 10, and the number of layers at 5. The results show that our proposed GA-DRL algorithm outperforms the other four benchmarks, achieving faster task completion times. Additionally, Fig. <ref> demonstrates the effectiveness of GA-DRL compared to conventional DRL in terms of generalizability. As the number of subtasks increases from 25 to 30, the task completion time of DRLOSM becomes longer than that of both MGA and HEFT. 
This is due to the fact that the topologies of DAG tasks become more complicated, making the DRLOSM algorithm, which relies solely on human-selected features without the usage of GNNs, unable to capture the topological information of the newly generated DAG task topologies. On the other hand, our GA-DRL algorithm benefits from the subtasks’ features, which are automatically learned from GAT, making its models well generalizable to unseen DAG task topologies. In summary, the performance of GA-DRL in terms of the task completion time is 27.29% better than LPS, 19.87% better than HEFT, 13.76% better than MGA, 11.29% better than DRLOSM at 10 subtasks; and is 59.84% better than LPS, 11.01% better than HEFT, 0.08% better than MGA, and 15.19% better than DRLOSM at 30 subtasks. §.§.§ Impact of the number of layers within DAG task In Fig. <ref>, it is evident that changing the number of DAG task layers from 4 to 8 has a significant impact on the completion time of the DAG task. In this result, we considered 20 subtasks and 10 vehicles. It is observed that increasing the number of layers leads to a longer completion time. This is because, as the number of layers increases, the parallelism of the DAG task decreases, resulting in more subtasks being executed in a sequential manner. This, in turn, leads to a longer task completion time. The performance of GA-DRL in terms of the task completion time is 61.39% better than LPS, 29.31% better than HEFT, 23.35% better than MGA, 8.27% better than DRLOSM at 4 layers; and is 30.41% better than LPS, 14.04% better than HEFT, 4.36% better than MGA, and 1.38% better than DRLOSM at 8 layers. §.§ Simulation Results for Real Application DAG Task In Fig. <ref>, we illustrate a real-world DAG task of a modified molecular dynamic code <cit.>. The subtasks' computation workload and transmission data size were set according to the parameter settings, and we considered 20 vehicles in the result. Table <ref> presents the performance comparison of various benchmarks, except LPS[LPS is excluded, since it takes no algorithm running time, while we have demonstrated above that the task completion time of LPS is always worse than that of other benchmarks.], with respect to the DAG task completion time (in seconds) and the algorithm running time (in seconds). It is important to note that DRLOSM, which solely learns from human-selected features of subtasks without the usage of GAT, exhibits a higher task completion time than others, such as HEFT, MGA, and our GA-DRL. This is a clear indication of the superiority of generalization of our GA-DRL, especially in the case of a large number of subtasks. Additionally, MGA has the longest running time among the benchmarks due to its internal iteration time for convergence. However, our GA-DRL shows better performance in terms of task completion time at the mild cost of higher algorithm running time. The performance of GA-DRL is 19.09% better than HEFT, 4.31% better than MGA, and 25.64% better than DRLOSM. In summary, simulation results verify that our proposed GA-DRL algorithm offers an efficient and commendable reference in scheduling DAG tasks over dynamic VCs. § CONCLUSION In this paper, we focused on scheduling DAG tasks using a combination of GNNs and DRL. We approached the problem by modeling it as an MDP, and using a GAT module to extract features for each subtask in the DAG task topology. We then integrated the GAT with a DDQN to allocate subtasks to vehicles while taking into account the dynamics and heterogeneity of the vehicles. 
Our GAT uses multiple heads to enhance information for important subtasks and aggregates topological information from both subtasks' predecessors and successors. We also incorporated a non-uniform neighborhood sampling methodology to improve the GAT's generalizability. Our evaluations showed that our GA-DRL method outperforms benchmarks in terms of task completion time. Future work could explore cooperation among different vehicles and optimizing start times for network flows to transmit the data of subtasks among each other. 00 b1 Z. Chen et al., “An empirical study of latency in an emerging class of edge computing applications for wearable cognitive assistance,” in Proc. ACM/IEEE Symp. Edge Comput., Oct. 2017, pp. 1–14. b2 T. Taleb, A. Ksentini, M. Chen, and R. Jantti, “Coping With Emerging Mobile Social Media Applications through Dynamic Service Function Chaining,” IEEE Trans. Wireless Commun., vol. 15, no. 4, pp. 2859–2871, Apr. 2016. b3 L. F. Bittencourt, R. Sakellariou and E. R. M. Madeira, “DAG scheduling using a lookahead variant of the heterogeneous earliest finish time algorithm,” in Proc. Eur. Conf. Parallel Process., Apr. 2010, pp. 27-34. b4 G. C. Sih and E. A. Lee, “A Compile-Time Scheduling Heuristic for Interconnection-Constrained Heterogeneous Processor Architectures,” IEEE Trans. Parallel Distrib. Syst., vol. 4, no. 2, pp. 175-187, Feb. 1993. b5 H. Arabnejad and J. G. Barbosa, “List Scheduling Algorithm for Heterogeneous Systems by An Optimistic Cost Table,” IEEE Trans. Parallel Distrib. Syst., vol. 25, no. 3, pp. 682-694, Mar. 2014. b6 H. Kanemitsu, M. Hanada and H. Nakazato, “Clustering-Based Task Scheduling in A Large Number of Heterogeneous Processors,” IEEE Trans. Parallel Distrib. Syst., vol. 27, no. 11, pp. 3144-3157, Nov. 2016. b7 Z. Liu, M. Liwang, S. Hosseinalipour, H. Dai, Z. Gao and L. Huang, “RFID: Towards low latency and reliable DAG task scheduling over dynamic vehicular clouds," IEEE Trans. Veh. Technol., Early Access, Apr. 2023. b8 M. Barbera, S. Kosta, A. Mei, and J. Stefa, “To Offload or Not To Offload? The Bandwidth and Energy Costs of Mobile Cloud Computing,” in Proc. IEEE Conf. Comput. Commun., Apr. 2013, pp. 1285–1293. b9 W. Shi, J. Cao, Q. Zhang, Y. Li and L. Xu, “Edge Computing: Vision and Challenges,” IEEE Internet Things J., vol. 3, no. 5, pp. 637-646,Oct. 2016. b10 M. Haklay and P. Weber, “OpenStreetMap: User-Generated Street Maps,” IEEE Pervasive Comput., vol. 7, no. 4, pp. 12-18, Oct. 2008. b11 P. A. Lopez et al., “Microscopic traffic simulation using SUMO,” in Proc. Int. Conf. Intell. Transp. Syst. (ITSC), Nov. 2018, pp. 2575-2582. b12 H. Topcuoglu, S. Hariri and Min-You Wu, “Performance-Effective and Low-Complexity Task Scheduling for Heterogeneous Computing,” IEEE Trans. Parallel Distrib. Syst., vol. 13, no. 3, pp. 260-274, March 2002. b13 Y. Sahni, J. Cao, L. Yang and Y. Ji, “Multihop offloading of multiple DAG tasks in collaborative edge computing,” IEEE Internet Things J., vol. 8, no. 6, pp. 4893-4905, March, 2021. b14 Q. Shen, B. -J. Hu and E. Xia, “Dependency-aware task offloading and service caching in vehicular edge computing," IEEE Trans. Veh. Technol., vol. 71, no. 12, pp. 13182-13197, Dec. 2022. b15 F. Sun et al., “Cooperative Task Scheduling for Computation Offloading in Vehicular Cloud,” IEEE Trans. Veh. Technol., vol. 67, no. 11, pp. 11049-11061, Nov. 2018. b16 H. Liu, H. Zhao, L. Geng and W. Feng, “A policy gradient based offloading scheme with dependency guarantees for vehicular networks,” in Proc. IEEE Global Commun. Conf. 
(GLOBECOM), Dec. 2020, pp. 1-6. b17 Y. Liu, S. Wang, Q. Zhao, S. Du, A. Zhou, X. Ma, and F. Yang, “Dependency-aware task scheduling in vehicular edge computing,” IEEE Internet Things J., pp. 4961–4971, 2020. b18 J. Shi, J. Du, J. Wang, J. Wang, and J. Yuan, “Priority-aware task offloading in vehicular fog computing based on deep reinforcement learning,” IEEE Trans. Veh. Technol., pp. 16067–16081, 2020. b19 J. Xie et al., “Advanced Dropout: A model-free methodology for Bayesian dropout optimization," IEEE Trans. Pattern Anal. Mach. Intell., vol. 44, no. 9, pp. 4605-4625, Sept. 2022. b20 J. Yan, S. Bi and Y. J. A. Zhang, “Offloading and resource allocation with general task graph in mobile edge computing: a deep reinforcement learning approach," IEEE Trans. Wireless Commun., vol. 19, no. 8, pp. 5404-5419, Aug. 2020. b21 M. S. Mekala et al., “A DRL-based service offloading approach using DAG for edge computational orchestration," IEEE Trans. Comput. Social Syst., Early Access, Apr. 2022. b22 J. Wang, J. Hu, G. Min, A. Y. Zomaya and N. Georgalas, “Fast adaptive task offloading in edge computing based on meta reinforcement learning," IEEE Trans. Parallel Distrib. Syst., vol. 32, no. 1, pp. 242-253, Jan. 2021. b23 M. Goudarzi, M. S. Palaniswami and R. Buyya, “A distributed deep reinforcement learning technique for application placement in edge and fog computing environments," IEEE Trans. Mobile Comput., vol. 22, no. 5, pp. 2491-2505, May 2023. b24 Z. Hu, J. Tu and B. Li, “Spear: optimized dependency-aware task scheduling with deep reinforcement learning," in Proc. IEEE Int. Conf. Distrib. Comput. Syst. (ICDCS), Oct. 2019, pp. 2037-2046. b25 X. Wei, L. Cai, N. Wei, P. Zou, J. Zhang and S. Subramaniam, “Joint UAV trajectory planning, DAG task scheduling, and service function deployment based on DRL in UAV-empowered edge computing," IEEE Internet Things J., Early Access, Mar. 2023. b26 L. Geng, H. Zhao, J. Wang, A. Kaushik, S. Yuan and W. Feng, “Deep reinforcement learning based distributed computation offloading in vehicular edge computing networks," IEEE Internet Things J., Early Access, Feb. 2023. b27 C. Shu, Z. Zhao, Y. Han, G. Min and H. Duan, “Multi-User Offloading for Edge Computing Networks: A Dependency-Aware and Latency Optimal Approach,” IEEE Internet Things J., vol. 7, no. 3, pp. 1678-1689, March 2020. b28 M. Taneja and A. Davy, “Resource aware placement of IoT application modules in fog-cloud computing paradigm,” in Proc. IFIP/IEEE Symp. Integr. Netw. Serv. Manag. (IM), Jul. 2017, pp. 1222–1228. b29 M. Giordani, T. Shimizu, A. Zanella, T. Higuchi, O. Altintas and M. Zorzi, “Path Loss Models for V2V mmWave Communication: Performance Evaluation and Open Challenges," in Proc. IEEE Connected Automated Vehicles Symp. (CAVS), Oct. 2019, pp. 1-5. b30 Z. Ning, P. Dong, X. Kong, and F. Xia, “A cooperative partial computation offloading scheme for mobile edge computing enabled Internet of thing,” IEEE Internet Things J., vol. 6, no. 3, pp. 4804–4814, Jun. 2019. b31 P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio, “Graph Attention Networks," Proc. Int. Conf. Learn. Representations (ICLR), Feb. 2018, pp. 1–12. b32 Saleh Yousefi, Eitan Altman, Rachid El-Azouzi, and Mahmood Fathy, “Analytical model for connectivity in vehicular ad hoc networks," IEEE Trans. Veh. Technol., vol. 57, no. 6, pp. 3341-3356, Nov. 2008. b33 S. Misra and S. Bera, “Soft-VAN: Mobility-Aware Task Offloading in Software-Defined Vehicular Network,” IEEE Trans. Veh. Technol., vol. 69, no. 2, pp. 
2071-2078, Feb. 2020 b34 Z. He, L. Wang, H. Ye, G. Y. Li and B. -H. F. Juang, “Resource allocation based on graph neural networks in vehicular communications," in Proc. IEEE Global Commun. Conf. (GLOBECOM), Jan. 2020, pp. 1-5. b35 Y. Li, J. Li, Z. Lv, H. Li, Y. Wang and Z. Xu, “GASTO: A fast adaptive graph learning framework for edge computing empowered task offloading," IEEE Trans. Netw. Service Manage., Early Access, Feb. 2023. b36 H. Lee, S. Cho, Y. Jang, J. Lee and H. Woo, “A global DAG task scheduler using deep reinforcement learning and graph convolution network," IEEE Access, vol. 9, pp. 158548-158561, Nov. 2021. b37J. Chen, Y. Yang, C. Wang, H. Zhang, C. Qiu and X. Wang, “Multi-task offloading strategy optimization based on directed acyclic graphs for edge computing," IEEE Internet Things J., vol. 9, no. 12, pp. 9367-9378, Jun. 2022. b38 M. Liwang, Z. Gao, and X. Wang, “Energy-aware graph task scheduling in software-defined air-ground integrated vehicular networks,” arXiv:2008.01144, 2021. b39 J. Wang, C. Jiang, K. Zhang, T. Q. S. Quek, Y. Ren, and L. Hanzo, “Vehicular sensing networks in a smart city: principles, technologies and applications,” IEEE Wireless Commun., vol. 25, no. 1, pp. 122-132, Feb. 2018. b40 R. J. Williams, “Simple statistical gradient-following algorithms for connectionist reinforcement learning,” Mach. Learn., vol. 8, nos. 3–4, pp. 229–256, 1992. b41 F. Scarselli, M. Gori, A. C. Tsoi, M. Hagenbuchner, and G. Monfardini, “The Graph Neural Network Model,” IEEE Trans. Neural Netw., vol. 20, no. 1, pp. 61–80, Jan. 2009. b42 S. Wang, M. Lee, S. Hosseinalipour, R. Morabito, M. Chiang and C. G. Brinton, “Device sampling for heterogeneous federated learning: theory, algorithms, and implementation," in Proc. IEEE Int. Conf. Comput. Commun. (INFOCOM), Jul. 2021, pp. 1-10. b43 H. Ye, J. Wang and Z. Li, “MIP reformulation for max-min problems in two-stage robust SCUC," IEEE Trans. Power Syst., vol. 32, no. 2, pp. 1237-1247, Mar. 2017. b44 A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, and A. Desmaison, “PyTorch: An imperative style, high-performance deep learning library," in Proc. Conf. Neural Inf. Process. Syst. (NeurIPS), Dec. 2019, pp. 8024–8035. b45 D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” in Proc. Int. Conf. Learn. Represent. (ICLR), Dec. 2015, pp. 1–15. b46 W. Zhan et al., “Deep-reinforcement-learning-based offloading scheduling for vehicular edge computing," IEEE Internet Things J., vol. 7, no. 6, pp. 5449-5465, Jun. 2020. b47 X. Yang, H. Luo, Y. Sun and M. Guizani, “A novel hybrid-ARPPO algorithm for dynamic computation offloading in edge computing," IEEE Internet Things J., vol. 9, no. 23, pp. 24065-24078, Dec.1, 2022. b48 D. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and accurate deep network learning by exponential linear units (ELUs),” in Proc. Int. Conf. Learn. Represent. (ICLR), 2016, pp. 1–14. b49 V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, “Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529–533, Feb. 2015. b50 H. van Hasselt, A. Guez, and D. Silver, “Deep reinforcement learning with double Q-learning,” in Proc. AAAI Conf. Artif. Intell., Sep. 2016, pp. 2094–2100.
http://arxiv.org/abs/2307.02300v1
20230705135826
Improving Address Matching using Siamese Transformer Networks
[ "André V. Duarte", "Arlindo L. Oliveira" ]
cs.LG
[ "cs.LG", "cs.IR", "I.2" ]
Improving Address Matching using Siamese Transformer Networks
André V. Duarte (https://orcid.org/0000-0001-5987-0789) and Arlindo L. Oliveira (https://orcid.org/0000-0001-8638-5594)
Instituto Superior Técnico / INESC-ID
Received March 10, 2023; accepted May 12, 2023
Abstract. Matching addresses is a critical task for companies and post offices involved in the processing and delivery of packages. The ramifications of delivering a package to the wrong recipient are numerous, ranging from harm to the company's reputation to economic and environmental costs. This research introduces a deep learning-based model designed to increase the efficiency of address matching for Portuguese addresses. The model comprises two parts: (i) a bi-encoder, which is fine-tuned to create meaningful embeddings of Portuguese postal addresses, used to retrieve the top 10 likely matches of the unnormalized target address from a normalized database, and (ii) a cross-encoder, which is fine-tuned to accurately rerank the 10 addresses obtained by the bi-encoder. The model has been tested on a real-case scenario of Portuguese addresses and exhibits a high degree of accuracy, exceeding 95% at the door level. When run with GPU computations, the inference speed is about 4.5 times faster than that of traditional approaches such as BM25. An implementation of this system in a real-world scenario would substantially increase the effectiveness of the distribution process. Such an implementation is currently under investigation. § INTRODUCTION Over the past few years, the value of global e-commerce sales has been steadily increasing, leading to a considerable rise in the number of parcels shipped worldwide every day <cit.>. The effective delivery of parcels relies on the crucial role played by delivery companies and post offices in connecting senders with recipients. Therefore, it is essential that these companies have efficient methods to ensure successful deliveries. Although most parcels carry accurate address information, there are instances where addresses are written in an unstructured way, leading to incorrect or failed deliveries. The errors may include insufficient information, redundant information, or spelling mistakes, among others. While there is no publicly available information on how companies address these issues, some methods involve address normalization, such as converting “Street” to “St.”, or parsing the address elements followed by pair-wise matching. However, these techniques are not perfect and frequently require human intervention. The primary objective of this work is to develop a solution that can enhance the quality of postal and parcel delivery services by reducing the number of misdelivered parcels and minimizing human involvement. Given the recent advances that transformers have brought to the natural language processing field, we have chosen a fully transformer-based architecture for our solution. We combine a siamese neural network (bi-encoder: retriever) with a DistilBERT <cit.> model adapted for sentence-pair classification (cross-encoder: reranker). To the best of our knowledge, our work is the first to use this type of approach to tackle an address-matching task.
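Before detailing the individual components in the following sections, the retrieve-then-rerank flow can be sketched with the sentence-transformers library (the same package later used for fine-tuning). This is a simplified illustration only: the model names, the toy address database, and the match helper are placeholders for the fine-tuned models and the normalized database described below.

import torch
from sentence_transformers import SentenceTransformer, CrossEncoder, util

bi_encoder = SentenceTransformer("distilbert-base-multilingual-cased")            # retriever (placeholder)
cross_encoder = CrossEncoder("distilbert-base-multilingual-cased", num_labels=1)  # reranker (placeholder)

normalized_db = [
    "RUA DA PRATA 125 3 ESQ 1100-415 LISBOA",    # hypothetical normalized addresses
    "AVENIDA DA LIBERDADE 10 1250-144 LISBOA",
]
db_embeddings = bi_encoder.encode(normalized_db, convert_to_tensor=True)

def match(unnormalized_address, top_k=10):
    # Retrieval: cosine similarity between the query embedding and the database embeddings.
    query_emb = bi_encoder.encode(unnormalized_address, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, db_embeddings)[0]
    top = torch.topk(scores, k=min(top_k, len(normalized_db)))
    candidates = [normalized_db[i] for i in top.indices.tolist()]
    # Reranking: the cross-encoder scores each (query, candidate) pair jointly.
    rerank_scores = cross_encoder.predict([(unnormalized_address, c) for c in candidates])
    return max(zip(candidates, rerank_scores), key=lambda p: p[1])  # (address, score)

print(match("rua da prata n125, 3o esq, lisboa"))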
§ BACKGROUND AND RELATED WORK Determining if two addresses refer to the same location can be a challenging task. The most straightforward method for this, is to calculate the similarity metrics between the strings that describe each address. The standard algorithm used for this purpose is the edit distance, also known as the Levenshtein distance (LD) <cit.>. However, the LD fails to provide accurate results, even for simple cases. To address this issue, more sophisticated algorithms have been developed, such as searching for the largest sub-sequences of common words, or tokenizing strings by words and sorting them alphabetically, so that the original word order is not relevant <cit.>. Nevertheless, the effectiveness of string similarity measures for string matching varies depending on the task, and no single algorithm can be claimed to be superior <cit.>. A significant challenge associated with traditional string similarity measures is to choose an appropriate threshold that determines when a match is considered correct or not. Santos et al. addressed this issue by proposing a supervised machine learning approach that leverages string similarity values as the model features <cit.>. This method reduced the need for manual threshold tuning and improved the matching performance against the more traditional approaches. The methods previously discussed are effective at identifying symbolic similarities between addresses. However, they often struggle to accurately match addresses that share semantic meaning but are written differently. Deep Learning (DL) techniques have brought a new level of flexibility to string matching algorithms by leveraging sentence-level features that capture semantic similarities. As a result, recent studies in address matching have shifted towards DL methods due to their ability to produce superior results <cit.>. Comber et al. proposed a novel approach <cit.> that leverages the benefits of both conditional random fields (CRFs) and Word2Vec <cit.> for address matching. CRFs are employed to parse the address into its main components, and then Word2Vec is used to create an embedding for each parsed field. The similarity between fields is computed using cosine similarity, and a machine learning classifier is then used to determine whether the two addresses match. The author's proposed approach outperformed previous techniques such as CRF + Jaro-Winkler similarity <cit.>. Lin et al. proposed solving an address matching task using the Enhanced Sequential Inference Model (ESIM). The first step is to train a Word2Vec model to transform address records into their corresponding vector representations. Then, the ESIM model is applied, which consists of four main steps. Firstly, the input addresses are encoded using a Bi-LSTM. Then, local inference is performed on the encoded addresses through a decomposable attention mechanism. Next, a new bidirectional long short-term memory (Bi-LSTM) layer is applied to extract higher-level representations of the addresses. Finally, a multilayer perceptron (MLP) is used to indicate whether the address pairs are a match. The proposed Word2Vec + ESIM approach outperformed simpler methods, such as Word2Vec + Random Forest, demonstrating its effectiveness for address matching <cit.>. Another alternative solution that has been proposed to address the issue of address matching is the Attention-Bi-LSTM-CNN (ABLC) network based on contrast learning, which has demonstrated better performance than the ESIM model <cit.>. 
The ABLC model combines an attention mechanism, Bi-LSTMs, and convolutional neural networks (CNN) to extract features from the addresses. A distinct methodology that has been proposed for address matching and is also relevant for multiple similarity search tasks involves the use of the best match 25 (BM25) algorithm in conjunction with BERT <cit.>. The method starts by employing the BM25 algorithm to retrieve the top-10 most probable records from a database for a given query. BERT <cit.> is then applied to rerank the retrieved candidates. This approach has demonstrated superior performance when compared to other models, such as Word2Vec + ML Classifiers. For similarity search tasks, pre-trained deep transformers have proven to be highly effective <cit.>. There are two main types of transformers that are commonly used: cross-encoders <cit.>, <cit.>, <cit.>, which use full self-attention to encode the pair, and dual-encoders, which encode the pair separately. Dense Passage Retrieval (DPR) <cit.> and SBERT <cit.> are two well-established dual-encoder approaches widely used for similarity search tasks. § DATA DESCRIPTION AND PREPARATION §.§ Addresses Structure This work employed two main types of addresses: (i) normalized addresses - follow a specific structure and adhere to predefined rules and (ii) unnormalized addresses - often unstructured and, therefore, more difficult to interpret. When sending a parcel, the sender usually writes the recipient's address in an unnormalized format. Classifying an address as unnormalized does not necessarily mean that it is incorrect, but rather that it does not fully comply with the standardized structure. A typical normalized Portuguese address comprises several essential elements, including (1) Artery Type - the configuration of the artery; (2) Artery Name; (3) Door ID - the house or apartment number; (4) Accommodation ID - details about the floor and accommodation and (5) ZIP-Code - a 7-digit code followed by a Postal Designation determined by the Post Office (known as CP4-CP3 combination). Figure <ref> provides an example of a normalized Portuguese address. §.§ Datasets §.§.§ Normalized Dataset The normalized addresses dataset used in this work was made available by CTT-Correios de Portugal, the national post office company of Portugal. The dataset comprises approximately 430k addresses, which corresponds to roughly 10% of the universe of addresses in Portugal. Although not all addresses are included, the provided data covers the entire country and not just a specific region. A detailed geographical distribution of the available addresses can be found in Appendix [sec:Appendix B]A. As the data was previously curated by CTT, no cleaning steps were required. §.§.§ Dataset for fine-tuning the Bi-Encoder The dataset used to fine-tune the bi-encoder model consists of pairs of unnormalized-normalized addresses, along with a label indicating whether they match. The unnormalized address data was also obtained from CTT, based on the history of delivered parcels over a 3-month period, which resulted in over 3 million records. However, the data required deduplication and cleaning. The deduplication process consisted in removing exact duplicates, while the cleaning process restructured some records with information in the wrong columns and discarded others that lacked mappings to the normalized database. The resulting cleaned unnormalized addresses file contained approximately 1.1 million valid records. 
For fine-tuning the bi-encoder, 90% of these records were sampled and duplicated to form address pairs with a 1:1 positive-to-negative ratio. The normalized address for the false matching pair is generated from three categories, each with an equal probability of occurrence: easy match (random address), hard match (address with a string similarity metric > 0.8), and very hard match (address in the same ZIP-Code). This approach was chosen to increase the number of challenging records in the training dataset. §.§.§ Test Dataset and Dataset for fine-tuning the Cross-Encoder Approximately 120k unnormalized records were not used for fine-tune the bi-encoder. From this pool of records, we extracted a random sample of around 60k addresses to build the test dataset used to assess the final model's performance. The remaining available addresses were used to fine-tune the cross-encoder, with an approximate positive-to-negative ratio of 1:9. Negative samples were generated by querying the bi-encoder with the unnormalized address and retaining the top-9 most probable addresses that did not match with the unnormalized one. § MODEL IMPLEMENTATION The proposed model consists in combining a bi-encoder with a cross-encoder. The following subsections describe in detail these two network types and how they connect with each other in order to create the final solution. §.§ Bi-Encoder Our bi-encoder is a dual-encoder network trained in a siamese way with the purpose of learning how to derive meaningful sentence embeddings that can be compared with others through cosine-similarity. The base transformer model used is the multilingual DistilBERT - a smaller, cheaper, and lighter version of the multilingual BERT. A distilled model is achieved through a process called knowledge distillation <cit.>, which consists in compressing a bigger model (the teacher) into a more compact model (the student) that is trained to reproduce the behavior of the teacher. It is proved that on inference time, the distilled model can be 60% faster than the teacher, while 40% smaller and retaining 97% of the performance <cit.>. Moreover, from a deployment perspective, the adoption of a more compact model is generally preferred. Therefore, the tradeoff offered by DistilBERT was considered good enough to be chosen for this work. Even though DistilBERT was already pre-trained in some natural language processing (NLP) tasks, in order for its parameters to reach an optimal value for the specific address matching task, the model must be fine-tuned. The architecture of the bi-encoder considered for fine-tuning on the address data is displayed in Figure <ref>. To generate fixed-length embeddings of the addresses, we apply mean pooling on the DistilBERT output, which is then passed through an MLP with hyperbolic tangent activation. This reduces the dimensionality of the address embeddings to 512. We employ the contrastive loss function as our optimization objective, which seeks to minimize the distance between the embeddings of matching address pairs and maximize the distance between non-matching pairs. 1/2[y· D^2(x_A , x_B) + (1-y) ·{relu(α - D(x_A , x_B))}^2] Here, x_A and x_B represent the embeddings of addresses A and B, respectively, while y is the label indicating whether both addresses are related. The distance metric D between x_A and x_B is calculated as 1 - cosine similarity(x_A, x_B). Additionally, the margin α is introduced to ensure that the negative pair is at least separated by a distance equal or greater than that value. 
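For concreteness, the contrastive objective above can be written directly in PyTorch as in the sketch below. The margin value and the random tensors standing in for the pooled DistilBERT embeddings are illustrative assumptions; in practice, the equivalent ContrastiveLoss provided by the sentence-transformers package can be used instead.

import torch
import torch.nn.functional as F

def contrastive_loss(x_a, x_b, y, margin=0.5):
    """x_a, x_b: (B, 512) address embeddings; y: (B,) with 1 = matching pair."""
    d = 1.0 - F.cosine_similarity(x_a, x_b, dim=-1)       # distance D(x_A, x_B)
    positive = y * d.pow(2)                                # pull matching pairs together
    negative = (1.0 - y) * F.relu(margin - d).pow(2)       # push non-matches beyond the margin
    return 0.5 * (positive + negative).mean()

# Example with random embeddings (placeholders for the pooled DistilBERT outputs):
x_a, x_b = torch.randn(4, 512), torch.randn(4, 512)
y = torch.tensor([1.0, 0.0, 1.0, 0.0])
print(contrastive_loss(x_a, x_b, y))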
§.§ Cross-Encoder One of the specificities of a bi-encoder is that sentences are given individually to the network, for which individual sentence embeddings are then computed, that can afterwards be compared through a similarity measure. A cross-encoder does the exact opposite - it feeds both sentences simultaneously to the network, like in the BERT architecture adapted for a sentence pair classification task (Figure <ref>). For that reason, a cross-encoder does not compute sentence embeddings. §.§ Proposed Model: Bi-Encoder + Cross-Encoder Reimers et al. noted that performance-wise, for a sentence similarity task, the cross-encoder achieves a better performance than a bi-encoder <cit.>. However, using only the cross-encoder for address matching is not feasible. If one wants to search on a normalized database for the address most similar to the unnormalized address that is being paired, all the combinations of (unnormalized, normalized_i) must be fed to the cross-encoder, which is computationally demanding. For that reason, a decision was made to not use the cross-encoder by itself for the final solution, but rather a combination of the cross-encoder with the bi-encoder, in order to get the best features of each model. The proposed model is named Bi-Encoder + Cross-Encoder or BI+CE. In Figure <ref> is presented the full architecture of the model for the address matching task. There are two main modules in our architecture: (1) the database pre-embedding module and (2) the predicting module. The database pre-embedding module is not mandatory but, if included, increases the speed of the predicting module significantly. Its goal is to create and store in memory all the embeddings related to the normalized database. Since the database will be static most of the time, it is unnecessary to recompute embeddings every time the model is initialized. The module receives as input each normalized address and computes its corresponding embedding through the bi-encoder. The final outputs are aggregated on a normalized embeddings file. In order to increase the performance speed in the predicting module, an extra step is performed: nine auxiliar databases are created according to the nine possible first digits in a CP4. Each unnormalized address is, therefore, on the predicting stage, compared only with the addresses on the corresponding auxiliar database. The predicting module is the main part of the BI+CE model. The process of finding the corresponding pair in the normalized database for the target unnormalized address (x_1) is done by: (i) feeding x_1 to the bi-encoder; (ii) comparing the embedding of x_1 (E_1) with the correspondent auxiliar embeddings through cosine-similarity; (iii) returning the k-most similar addresses (in this case k=10); (iv) feeding the pairs (x_1,returned_address_i) to the cross-encoder, which will rerank them. The reason behind selecting the top-10 addresses instead of only the most similar is due to the fact that the bi-encoder sometimes misses at assigning the highest probability to the correct address. However, the correct address is usually retrieved in the top-10, hence, the cross-encoder is used. More details on this topic are provided in section <ref>. §.§ Training Overview Table <ref> displays the combination of the best hyperparameters, chosen for fine-tuning the bi-encoder and the cross-encoder. They were reached by trial and error using the common good practices to fine-tune this type of models <cit.>. 
Regarding the values for the bi-encoder, the variable that is further apart from the usual values is the epoch number. Usually this variable is never higher than 4. However, we decided to select a value of 20. A study was performed on the impact of the epoch number on the fine-tuning performance of models like BERT, and the conclusions were that a larger number of epochs, such as 20, works better <cit.>. As for the cross-encoder, we explored various epoch values, including 20, in order to keep the hyperparameters similar to the bi-encoder, but determined that 15 yielded the best results. In Appendix [sec:Appendix G]B and Appendix [sec:Appendix H]C are presented the hyperparameter combinations that were tested in order to reach the optimal values for maximizing the performance of the bi-encoder and the cross-encoder on the test dataset. Both bi-encoder and cross-encoder are fine-tuned using one NVIDIA Tesla V100S (32 GB)[Code available at: <https://github.com/avduarte333/adress-matching>]. Each fine-tuning process takes approximately eight hours to complete and is done using the python package ‘sentence-transformers’[<https://github.com/UKPLab/sentence-transformers>]. The experimental results for the traditional approaches consist of individual runs for each model. However, we conducted multiple runs of both the bi-encoder and the cross-encoder models after obtaining the optimized hyperparameters. To ensure the robustness of our findings, we run each model 5 times and report the results using the ones with the most consistent performance across the runs, as determined by the median outcome accuracy at the door level. Our results demonstrate that fine-tuning remains stable across the runs. We observed a low standard deviation of 0.077 for door level accuracy in the bi-encoder and 0.084 in the cross-encoder (more details about the 5 runs in Appendix [sec:Appendix I]D). §.§ Model Evaluation For comparison purposes, we evaluated several models: (i) the proposed one (BI+CE), (ii) two traditional string matching algorithms, token sort and token set [<https://github.com/seatgeek/fuzzywuzzy>], (iii) a bi-encoder, (iv) a BM25 ranking function combined with a cross-encoder (BM25+CE), based on the approach of Gupta et al. <cit.>, and (v) a Dense Passage Retrieval (DPR) model as introduced by Karpukhin et al. <cit.>, where we used two independent pre-trained multilingual DistilBERTs as base transformers. The models were evaluated based on two metrics: (1) inference time, which is the number of matches performed by the model in one second, and (2) accuracy, which is the proportion of correctly predicted pairs out of all pairs. Additionally, we analyzed the quality of the top-k retrieval for the approaches (iii), (iv), and (v). The primary objective of the model is to achieve high accuracy at the door level, as correctly identifying the door is crucial for parcel delivery services. While retrieving the correct artery is important, failure to identify the correct door results in an incorrect mapping. However, misidentifying doors in the same artery should not significantly impact delivery efficiency, as they are typically close in geographic proximity. Thus, the results are reported for both artery and door level accuracy. In practical applications, it is crucial to minimize the number of misdelivered parcels. 
To achieve this, we propose imposing a threshold or filter value (cutting value) on the matching probability variable to ensure that only address pairs with high matching probability are accepted as correct, and the remaining pairs are subjected to manual inspection. In all experiments, we used this criterion and selected the optimal filter values by examining the match confidence variable's distribution. § RESULTS AND DISCUSSION §.§ Inference Time The average number of unnormalized addresses paired per second for each tested model is presented in Table <ref>. To ensure a fair comparison, we made sure that each model performs roughly the same number of operations when pairing new addresses. As anticipated, the inference speed improves when CP4 filtering is applied, regardless of the approach. Without the CP4 filter, the inference time ranges from 0.07 to 0.91 iterations per second, while with CP4 filter, the inference time ranges from 0.61 to 5.40 iterations per second. However, there is a significant difference in the performance between the traditional string matching algorithms and the ones where dense vectors are utilized (DPR, bi-encoder and BI+CE). This can be attributed to the fact that calculating cosine similarity between an address and the candidate addresses in the normalized database is a faster operation than computing string metrics. The results presented in Table <ref> also reveal that adding a cross-encoder to the model architecture decreases the model's inference speed. This is expected, as the extra layer of complexity introduced by the cross-encoder leads to an increase in computational workload. §.§ Accuracy Results - Test Dataset §.§.§ Results - Traditional String Matching Algorithms From Table <ref>, both results at artery and door level suggest that the token set algorithm performs better than the token sort, since it usually achieves higher accuracies while retaining more addresses (27.09% vs 17.57% at door level). However, when performing the manual filtering, the great majority of addresses are discarded, and there is no significant improvement in the overall accuracy. Results at artery level are significantly better than the ones at the door level but none of the algorithms achieved results that may be considered promising enough to solve the address matching problem successfully. §.§.§ Results - Bi-Encoder, DPR and BM25+CE When evaluating the retrieval capabilities of the bi-encoder, DPR and the BM25+CE[Although the model under study is the BM25+CE, when evaluating the retrieval capabilities, the cross-encoder is not used, therefore, for notation simplicity, the model is mentioned as BM25.], one can consider two scenarios: the top-1 retrieval and the top-k. Table <ref> displays, for each method, the proportion of instances where the correct normalized address is among the retrieved addresses. It is evident from Table <ref> and Table <ref> that introducing the dense retrievers, like the bi-encoder or DPR, in the solution enhances the results significantly. Its accuracies both on artery and door level are above 85%, while for BM25 and the traditional methods they never surpass 63.31%. Table <ref> also highlights two major advantages of the bi-encoder in terms of retrieval quality. Firstly, the top-1 retrieval alone is a near-perfect solution, with a door level accuracy of 95.68%. Secondly, the top-10 retrieval gives almost every address a chance of being correctly paired. 
Our experimental results indicate that while DPR achieves a top-10 accuracy comparable to that of the bi-encoder, its top-1 accuracy falls short by nearly 15% (95.68% - 80.92%). Regarding BM25, its top-1 accuracy is limited to 33.49% and 72.80% at most when considering top-10 retrieval. Despite being more than double, it still falls short compared to the bi-encoder. Therefore, using the BM25 or DPR as a solution for the problem is not optimal. Nevertheless, it is worth mentioning that the rerankers can leverage the retrieval results of the BM25 and the DPR significantly. When considering the top-1 address before and after the reranking, the door level accuracies shift from 33.49% to 63.31% for BM25 and from 80.92% to 85.91% for DPR (Table <ref>). When studying the optimal cutting value for filtering the bi-encoder results, two interesting properties in the distribution of the match confidence variable (Figure <ref>) were identified: (i) distribution strongly skewed and (ii)  77% of the pairs have a matching probability that lies in the [0.99;1.00] interval. Combining these factors, the cutting value chosen was 0.99. The hypothesis that implementing this filtering technique would result in near-perfect classification was not supported by the data (Table <ref>). Despite an overall increase in the model's accuracy (95.68% to 97.39% at door level) and the fact that only 23.37% of records were discarded, there remains room for improvement. Therefore, in light of these findings and the ones from the BM25 and DPR experiments, it was decided to incorporate the cross-encoder into the proposed model. §.§.§ Results - BI+CE (Proposed Model) Contrary to the expectation, the model’s overall accuracy at the door level did not improve in comparison to the bi-encoder approach – 4.68% of the addresses remain incorrectly classified (Table <ref>: 100% - 95.32%). It improved, though, at the artery level, but only slightly (96.49% to 97.08%). The matching probability distribution is, however, quite different in this scenario (Figure <ref>). In the bi-encoder experiment, the lowest matching probability assigned by the model was 0.689. In the BI+CE model, the lowest probabilities assigned are really low values (< 1%). Figure <ref> displays a big gap between the highest probable pairs and the lowest probable ones. There are a few pairs spread across the x-axis scale. However, their proportion is just 2.28 % of the total number of addresses. The cutting value chosen for filtering, in this case, is 0.90. Performing this step provided interesting results, namely: the variation of the accuracy on artery and door level is quite positive (from 97.08% to 99.71% on artery and from 95.32% to 98.35% on door) and the number of discarded addresses is lower than the number discarded on the bi-encoder experiment (from 23.37% to 18.08%). § CONCLUSIONS The main goal of this work was the development of a model that could solve with success an address matching task, by using a DL approach, specifically pre-trained transformers. The bi-encoder proved to be a fundamental piece in the solution, not only for the speed up it introduces but also for its retrieval quality which can place the correct normalized address in the top-10 retrievals 99.41% of the time. We also found that the cross-encoder increases the robustness of the model’s accuracy, at the cost of a negative impact on the inference time. 
Nevertheless, that drawback can be mitigated by using the model with GPU computations where the inference speed can significantly increase against more traditional approaches such as the BM25 (roughly 4.5 times faster). In a real application, we would probably assume that the only correct pairs are the ones that the model gave a high matching probability (> 0.90). The results in the test dataset suggest that imposing such criteria would significantly reduce the number of misdelivered packages, although a small proportion of the addresses (∼18%) would still require a manual correction. There are other alternatives, such as disregarding the matching probability variable, which would mitigate the time spent on manual correction. It would, however, introduce some downsides such as a higher error rate on the package delivery. § ACKNOWLEDGEMENTS The authors would like to acknowledge the support of Dr. Egídio Moutinho, Drª. Marília Rosado, Dr. Rúben Rocha, Dr. André Esteves, Dr. Paulo Silva, Dr. Gonçalo Ribeiro Enes and Dr. Diogo Freitas Oliveira in the development of this project. We also gratefully acknowledge the financial support provided by Recovery and Resilience Fund towards the Center for Responsible AI project (Ref. C628696807-00454142) and the multiannual financing of the Foundation for Science and Technology (FCT) for INESC-ID (Ref. UIDB/50021/2020). § DISTRIBUTION OF ADDRESSES PER REGION ON NORMALIZED DATABASE § BEST HYPERPARAMETER SEARCH FOR BI-ENCODER FINE-TUNING § BEST HYPERPARAMETER SEARCH FOR CROSS-ENCODER FINE-TUNING § BI-ENCODER AND CROSS-ENCODER ACCURACY PLOTS WITH ERROR BARS
http://arxiv.org/abs/2307.00408v1
20230701184404
Droplet formation simulation using mixed finite elements
[ "Darsh Nathawani", "Matthew Knepley" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
[t]0.7title [t]0.7author date Droplet formation happens in finite time due to the surface tension force. The linear stability analysis is useful to estimate droplet size but fails to approximate droplet shape. This is due to a highly non-linear flow description near the point where the first pinch-off happens. A one-dimensional axisymmetric mathematical model was first developed by Eggers and Dupont<cit.> using asymptotic analysis. This asymptotic approach to the Navier-Stokes equations leads to a universal scaling explaining the self-similar nature of the solution. Numerical models for the one-dimensional model were developed using the finite difference<cit.> and finite element method<cit.>. The focus of this study is to provide a robust computational model for one-dimensional axisymmetric droplet formation using the Portable, Extensible Toolkit for Scientific Computation (PETSc). The code is verified using the Method of Manufactured Solutions (MMS) and validated using previous experimental studies done by Zhang and Basaran<cit.>. The present model is used for simulating pendant drops of water, glycerol, and paraffin wax, with an aspiration of extending the application to simulate more complex pinch-off phenomena. § INTRODUCTION Singularity in free surface flows is a crucial problem for time-accurate simulations. A mathematical treatment, built on the self-similarity of the pinch-off region, has made numerical simulations tractable. Our study is concerned with the formation of pendant droplets. Rayleigh was the first to demonstrate that droplet formation occurs in finite time due to the force of surface tension acting against inertia <cit.>. The pinch-off dynamics of a pendant drop has received the most attention from both mathematicians and scientists. Considering the pendant drop as a fluid column, the surface tension force and the gravitational force are initially balanced. The drop becomes heavier as more fluid is added from the top. Eventually gravity extends the drop, increasing the surface energy. In order to minimize this energy, the radius of the fluid column shrinks, forming a neck region, what we see as the action of surface tension. At some finite time, the radius becomes zero at some location and the drop separates from the fluid column. This location in the fluid column is called the pinch-off point or singularity. Fluid motion in the immediate vicinity of the singularity is driven by very high velocity gradients generated by the surface tension, inertial, and viscous forces. In fact, the solution in this region is self-similar, meaning that it does not depend on the initial or boundary conditions, but has a universal character. After the first pinch-off, a long neck recoils back with high velocity. This induces surface perturbations in the column and can lead to further breakup into smaller satellite droplets, a phenomenon also observed in liquid bridges and decaying jets. A now-classic treatment of the governing dynamics of singularities as well as analysis of the self-similarity for these cases is given by Eggers <cit.>. Linear stability analysis can accurately approximate droplet size, but fails to approximate the shape of a droplet <cit.>. Moreover, even the higher order analysis is not able to explain the shape of the drop near singularity <cit.>. One-dimensional analysis has proved useful for circular liquid jets <cit.> and pendant drops <cit.>. 
On the experimental side, studies have examined the pinch-off dynamics of pendant drops <cit.>, as well as characterized droplet dynamics in terms of non-dimensional parameters <cit.>. A full one-dimensional mathematical model was constructed by Eggers and Dupont <cit.> using the asymptotic expansion of the Navier-Stokes equations in cylindrical coordinates. They used a finite difference scheme to discretize the equations, and simulated both a pendant drop and a decaying jet. They verified that one-dimensional treatment can accurately simulate the pinch-off dynamics for a fluid in a quiescent background. However, their computational approach could only simulate up to the first pinch-off. Other computational models using one and two-dimensional analysis were explored by Ambravaneswaran et al. using finite elements to discretize the problem <cit.>. They investigated the effect of volume flow rate on droplet and neck shape. Moreover, they were able to simulate satellite drops. However, they were not able to validate the satellite drop simulations, as they had the primary droplet. Their two-dimensional simulations support the conclusion that for axisymmetric droplet formation, one-dimensional computational models are much faster and reliably accurate. Ambravaneswaran et al. also propose a hybrid 1D-2D computational model, and matching between 1D and 2D domains is also explored <cit.>. Other numerical approaches like the Volume-of-Fluid (VOF) method were tested, but either failed to accurately capture features of the flow, such as micro-threads, or were limited to only a certain range of fluid parameters, such as viscosity <cit.>. Additionally, the VOF method requires at least a 2D domain, which increases the computational cost. Formation of droplets can be seen in many scientific and industrial applications, a few examples being ink-jet printing <cit.>, spray cooling <cit.>, and droplet entrainment in annular flow <cit.>. Moreover, droplet formation has a pivotal role in fuel entrainment and burning in hybrid rockets <cit.>, which is the motivation to pursue this study. We present a computational model that is both verified with MMS and validated with previous experimental work. We extend the results with paraffin wax simulations to build a base for the future work on droplets in a shear force environment. § MATHEMATICAL AND COMPUTATIONAL MODEL In this section, we consider the Navier-Stokes momentum equation in cylindrical coordinates for an axisymmetric fluid column, as treated by Eggers and Dupont <cit.>. The fluid is considered incompressible with density ρ and kinematic viscosity ν. Assuming no swirling motion, we consider the flow only in radial and axial directions. A schematic of a pendant drop in cylindrical coordinates is shown in Fig. <ref>. The surface of the droplet, which is defined as variable h(z,t), is moving with the velocity. Therefore, the model equation for h is given by ∂ h/∂ t + u_z ∂ h/∂ z = u_r |_r=h Approaching the pinch-off point, the radial contraction is much faster than the axial expansion. Therefore, considering the radius r as an asymptotic parameter, the axial velocity (u_z) and pressure (p) are expanded asymptotically in even order terms to satisfy symmetry. Then using the continuity equation, the radial velocity is derived. u_z = u_0 + u_2 r^2 + …, u_r = - ∂ u_0/∂ zr/2 - ∂ u_2/∂ zr^3/4 - …, p = p_0 + p_2 r^2 + … To introduce the surface tension force (γ) in the governing equations, the following force balance is considered. 
𝐧̂·σ·𝐧̂ = -γ(∇·𝐧̂), 𝐧̂·σ·𝐭̂ = 0. Here, σ is the stress tensor, 𝐧̂ is the unit outward normal, 𝐭̂ is the unit tangent, and ∇·𝐧̂ is the mean curvature. The above force balance states that the normal stress is balanced by the surface tension and the tangential stress is zero. Using the force balance and the leading order terms in r from the expansion, we simplify the momentum equation. The advecting surface equation is already of leading order. Dropping the subscripts, the governing equations for a one-dimensional axisymmetric fluid column are given by ∂u/∂t + u ∂u/∂z + (γ/ρ) ∂(∇·𝐧̂)/∂z - (3ν/h^2) ∂/∂z( h^2 ∂u/∂z) - g = 0, ∂h/∂t + u ∂h/∂z + (h/2) ∂u/∂z = 0, where the mean curvature term ∇·𝐧̂ is given by ∇·𝐧̂ = 1/[h (1 + (∂h/∂z)^2 )^1/2] - (∂^2 h/∂z^2)/(1 + (∂h/∂z)^2 )^3/2. Here, the curvature term is not approximated to leading order because it has been shown that using the full curvature term better captures the singularities <cit.>. Equations (<ref>) and (<ref>) govern a pendant drop under gravitational forcing. These are solved for u and h using a finite element discretization. However, the highest order derivative is of third order, which is problematic for our C^0 continuous element scheme: the approximation of this term would be discontinuous across element interfaces. We could handle this using a discontinuous Galerkin (DG) scheme, but instead we choose a mixed-element formulation, inspired by Ambravaneswaran et al. <cit.>, in which we explicitly discretize the axial derivative of the radius h (or slope), s = ∂h/∂z, so that ∇·𝐧̂ = 1/[h (1 + s^2 )^1/2] - (∂s/∂z)/(1 + s^2 )^3/2. The mixed finite element formulation is given by ∫_Ω q [ ∂u/∂t + u ∂u/∂z + (γ/ρ) ∂(∇·𝐧̂)/∂z - (3ν/h^2) ∂/∂z( h^2 ∂u/∂z) - g ] dΩ = 0, ∫_Ω v [ ∂h/∂t + u ∂h/∂z + (h/2) ∂u/∂z ] dΩ = 0, ∫_Ω w [ s - ∂h/∂z ] dΩ = 0. Here, q, v, and w are test functions, and the mean curvature is defined by Eq. (<ref>). The third and fourth terms in Eq. (<ref>) are simplified by integration by parts, which reduces the highest-order derivative appearing in the weak form to first order. The equations after integration by parts take the following form: ∫_Ω q [ ∂u/∂t + u ∂u/∂z - (6ν/h) (∂h/∂z)(∂u/∂z) + (γ/ρ){ -(s ∂s/∂z)/[h (1 + s^2 )^3/2] - s/[h^2 (1 + s^2 )^1/2] } - g ] dΩ + ∫_Ω ∇q [ 3ν ∂u/∂z + (γ/ρ) (∂s/∂z)/(1 + s^2 )^3/2 ] dΩ - ∫_Γ q [ 3ν ∂u/∂z + (γ/ρ) (∂s/∂z)/(1 + s^2 )^3/2 ] dΓ = 0, ∫_Ω v [ ∂h/∂t + u ∂h/∂z + (h/2) ∂u/∂z ] dΩ = 0, ∫_Ω w [ s - ∂h/∂z ] dΩ = 0. Initially, the velocity is zero and the profile is a hemisphere, as it minimizes the surface energy. The inlet radius h_0 is fixed by the nozzle radius and the inflow velocity u_0 is constant. The radius at the tip of the droplet, at length L(t), is zero. The set of Eqs. (<ref>)-(<ref>) is then solved using a continuous Galerkin formulation subject to the following constraints. Initial conditions: h = √(h_0^2 - z^2), s = -z/√(h_0^2 - z^2) for 0 ≤ z < L_0; s|_z=L_0 = -C; u = 0, where C is a large negative number. In our implementation, we use -10. However, the code was tested with larger values and the results were unchanged. Boundary conditions: at z = 0, h = h_0 and u = u_0; at z = L(t), h = 0 and u = dL/dt. The length of the drop L(t) can be calculated as part of the solution, as explained by Ambravaneswaran et al. <cit.>, by computing the volume of the drop, which can then be used to calculate the velocity at the tip. However, this results in a dense row in the Jacobian, so we instead produce L(t) by self-consistent iteration.
Initially, we are given u(t) and h(t), including the velocity at the end of the droplet. We use that velocity to predict L(t + dt) that is L(t) + u_tip*dt, giving us our boundary condition, and we extend our mesh to this length. We then solve our existing system (Eq. (<ref>)-(<ref>)) for u(t + dt) and h(t + dt). This allows us to calculate the droplet volume by integrating h(t + dt) along the length. This must match the volume from the last time-step augmented by the amount of liquid flowing, which is 4 π h^2_0 u_0 dt. The difference between the calculated volume and the theoretical volume is used for self-consistency in adjusting the length L(t + dt). We use bisection to arrive at a consistent length L for this new time step. This adaptation loop is done when the conservation of volume is satisfied to a given tolerance (we use 0.1%). The one-dimensional mesh, representing the domain 0 ≤ z ≤ L(t), moves as the length L(t) changes. We first update the position of the last vertex and then move the remaining vertices to even out the cell lengths. The interpolation of the discrete field representation between these two meshes can be achieved using the Galerkin projection <cit.>. Galerkin projection is optimal for the L_2 norm, which measures energy. Alternatively, we could replace this with a volume constraint during interpolation. As shown in Fig. <ref>, the re-meshing is done using the calculated L(t) between each time step. The Galerkin projection is then used to interpolate the solution to the new mesh. For instance, if the solution on the old mesh is u^old and on the new mesh is u^new, then the interpolation is done as follows: u^new = η^new_i (u^old ) u^new = η^new_i (∑_k u^old_k ϕ^old_k ) u^new = ∑_k u^old_k η^new_i (ϕ^old_k ) where,η^new_i ϕ^old_k = ∑_q w_q ϕ^old_k(x_q) Here, ϕ represents the basis, η represents the dual basis, x_q are the quadrature points and w_q are the weights on the quadrature points. Calculating the length, scaling the mesh, projecting the solution and the length adaptation (when volume lost is more than a specified threshold) can be merged into a self-consistent loop as shown in Algorithm <ref>. The neck requires sufficiently refined mesh to capture the singularity. Hence, we start with a coarse mesh and refine it as we approach the singularity. The elements are labeled for refinement based on the radius and velocity gradients. Before the next time step, the labeled elements are then refined if necessary. This adaptive mesh refinement is also included in the algorithm <cit.>. We use the Portable, Extensible Toolkit for Scientific Computation (PETSc) <cit.> to set up and solve the system using time-stepper (TS) object <cit.>. The self-consistent algorithm is set up using TSAdapt functionality of TS. We use a direct solver using LU factorization. § RESULTS AND DISCUSSION In this section, we discuss the verification and validation of the numerical model presented in the previous section. Then, we explore the pinch-off dynamics in paraffin wax. §.§ Verification and Validation Before proceeding with the computational model, it is vital to perform a verification test. Verification is a mathematical exercise that can be used to examine the error evaluation done by the implemented code. One elegant method for code verification is the Method of Manufactured Solution (MMS) <cit.>. This is a very straightforward method, where we simply pick a non-trivial solution and add the source term into the equation generated by applying the operator on the solution. 
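For instance, for the kinematic equation for h, a manufactured solution and its source term can be generated symbolically. The short sympy sketch below only illustrates the procedure; the manufactured fields are arbitrary examples, not the ones used in this work, and the same steps apply to the momentum equation with its curvature and viscous terms:

import sympy as sp

z, t = sp.symbols("z t")

# Arbitrary smooth manufactured fields (illustrative choices only)
h_m = 1 + sp.Rational(1, 10) * sp.cos(z) * sp.exp(-t)
u_m = sp.sin(z) * sp.exp(-t)

# Apply the operator of the h-equation, h_t + u h_z + (h/2) u_z, to the manufactured fields;
# the leftover expression is the source term S_h added to the right-hand side.
S_h = sp.simplify(sp.diff(h_m, t) + u_m * sp.diff(h_m, z) + h_m / 2 * sp.diff(u_m, z))
print(S_h)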
This way we know the exact solution of the modified equation (original equation with the source term). Then the error evaluation for this modified equation must be zero. The computational model we use is verified using the MMS. The MMS helps to eliminate coding errors and is also useful to test the discretization for problems with unknown exact solutions. Figure <ref> shows log-log plot of L_2 norm of the error, || u_fe - u_mms ||_L_2, where u_fe is finite element solution and u_mms is the manufactured solution. The velocity (u) and radius (h) are discretized using third-order polynomials, whereas slope (s) is discretized using second-order polynomials. The error reduces by order four for u and h, and order three for s, verifying the correct implementation of the numerical model. The error evaluation is done on a moving mesh with the scaling factor of 1.0001. The MMS solution is evaluated every time step on a scaled mesh and compared to the solution. For the validation of our computational model, we use the experimental results by Zhang and Basaran <cit.>. One crucial parameter to validate is the evolution of the droplet length with time. Because the calculation of the length is implicitly involved in the numerical model as explained in Algorithm <ref>. Figure <ref> illustrates the comparison of the numerical simulation result with the experimental data. The length (L) is non-dimensionalized by the inlet radius (h_0). The time axis shows the time distance from the pinch-off. The comparison presented is for water and (85%) glycerol solution. The viscosity, density, and surface tension are given in Table <ref>. Initially, the droplet evolves slowly because the surface tension force is stronger than the effect of gravity. As we add more fluid, the drop becomes heavier and eventually, the gravitational force surpasses the surface tension. The surface tension starts to decrease the radius from the middle section, trying to minimize the surface energy. The length evolves much faster after the necking begins, suggesting an increase in the advection due to increasing velocity gradients. The primary droplet is separated when the radius is zero. The visualization of the droplet evolution after necking is also shown for water and (85%) glycerol. Each droplet profile is attached to the point in the plot that corresponds to the time away from the pinch-off. The glycerol solution shows more elongation approaching the pinch-off time, which explains the effect of viscosity. The strong viscous effect allows glycerol droplet to have a long neck. In case of the water droplet, the surface tension force is much more dominant compared to the viscous forces approaching the singularity, resulting a shorter neck. Another important parameter for validation is the evolution of a minimum radius in time away from the pinch-off. Figure <ref> illustrates numerical results for the evolution of this parameter, compared with the experimental profiles for water and (85%) glycerol solution. The profiles for both water and glycerol show similar evolution until a point where the dynamics start to become self-similar. From this point, the glycerol profile shows the influence of high viscosity by more elongation. The minimum radius decreases slower, which is evident by the long tail at the end of the profile. For the water droplet, the radial shrinkage is much faster due to the small viscosity compared to the glycerol solution. The numerical profile for glycerol agrees with the experiments very well. 
For the water, the profile shows a small amount of delay between the numerical and experimental results. This is due to the initially added artificial viscosity to increase diffusive behavior, which is inspired by the computational model by Eggers and Dupont <cit.>. We increase the viscosity value initially for the low-viscosity materials since their highly convective nature introduces surface fluctuations. We reduce this added diffusion to zero well before the necking begins. Hence, the numerical profile for the water droplet starts to agree with the experiments right where the necking begins. This added diffusion shows no impact on the length evolution or the pinch-off location at all. §.§ Pinch-off dynamics of paraffin wax Paraffin wax is explored as one of the potential candidate fuels for hybrid propellant rockets <cit.>. In the combustion chamber of a hybrid rocket, a solid paraffin wax form a liquid layer on its burning surface. This liquid layer, under a high shear forcing, shows hydrodynamic instabilities that lead to the formation of droplets. These droplets are then entrained in the flow. Here, we explore the pinch-off dynamics of a pendant paraffin wax droplet using our computational model. The curvature is a chief attribute in understanding the pinch-off dynamics in droplet formation. Figure <ref> shows a paraffin wax droplet profile at pinch-off alongside a curvature profile. Approaching the singularity, the curvature starts to increase. But a finite time curvature blow-up happens at the pinch-off location, where the radius is zero. As explained in the introduction section, the fluid in close vicinity of this singular point is driven by very high-velocity gradients. The motion is independent of the initial and boundary conditions. This feature is mathematically described by a self-similar solution. The velocity derivative profile on the right in Fig. <ref> shows high gradients close to the singularity. Moreover, the velocity gradients change signs at the singular point location. This suggests that the fluid above and below the pinch-off point moves in opposite directions very quickly. This recoil provokes surface instabilities leading to the satellite drop formation. However, it is also evident that the fluid motion approaching the pinch-off is highly convective in nature. The finite element model can approximate the solution but when the truncated terms are getting larger, the solution becomes unstable. Hence, the numerical scheme is augmented with a stabilization technique when the fluid viscosity is low. There are many options to consider for stabilization <cit.>. We used the Streamline Upwinding (SU) and Streamline Upwinding Petrov Galerkin (SUPG) and found that this 1D problem can become stable with just the SU method. The SUPG method is better suited for problems with cross-convection in 2D or 3D. Also, the SUPG method regularizes the strong form residual that contains second-order derivatives, which can be problematic for C^0 continuous elements. However, the SU scheme just adds an artificial diffusion into the system and is over-diffusive in nature <cit.>. Therefore, we also decrease the artificial diffusivity as we refine the mesh adaptively to avoid adding too much diffusion. This stabilization was only enabled for low-viscosity fluids, like paraffin wax, water, etc. High-viscosity fluids like glycerol can be handled without any stabilization. § CONCLUSION A one-dimensional numerical model is reliable for simulating droplet formation. 
The present model is validated against experimental pendant drops of water and glycerol, and the simulation results match the measurements closely. The SU stabilization method was implemented because low-viscosity fluids such as water and paraffin wax are highly convective. The velocity and curvature profiles were shown for paraffin wax at the pinch-off time. The velocity approaches infinity at the pinch-off location with opposite signs on either side, suggesting that pinch-off is followed by recoil and then satellite drop formation. The current computational model can be extended to capture the motion after pinch-off, including the satellite drops. Understanding the volume of satellite droplets is useful for accurately predicting the regression rate of the fuel in hybrid rockets. The present model can also be extended to droplet formation in a shear environment. Droplet formation in a turbulent environment may not be axisymmetric, since the turbulence sets the initial droplet profile <cit.>. For non-axisymmetric interfaces, where the no-swirl assumption fails, two- or three-dimensional approaches are better suited, as also recommended by Ambravaneswaran et al. <cit.>. Even though the droplet shape may then not be assumed axisymmetric, the solution in the singularity region is still self-similar and follows analogous dynamics. § ACKNOWLEDGEMENT Funded by the United States Department of Energy’s (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program III (PSAAP III) at the University at Buffalo, under contract number DE-NA0003961. This work was partially supported by the Department of Energy Office of Science Award DE-AC02-0000011838.
http://arxiv.org/abs/2307.02758v1
20230706034345
Exploring Linguistic Style Matching in Online Communities: The Role of Social Context and Conversation Dynamics
[ "Aparna Ananthasubramaniam", "Hong Chen", "Jason Yan", "Kenan Alkiek", "Jiaxin Pei", "Agrima Seth", "Lavinia Dunagan", "Minje Choi", "Benjamin Litterer", "David Jurgens" ]
cs.CL
[ "cs.CL" ]
[ Chunzhen Huang August 1, 2023 ================== Linguistic style matching (LSM) in conversations can be reflective of several aspects of social influence such as power or persuasion. However, how LSM relates to the outcomes of online communication on platforms such as Reddit is an unknown question. In this study, we analyze a large corpus of two-party conversation threads in Reddit where we identify all occurrences of LSM using two types of style: the use of function words and formality. Using this framework, we examine how levels of LSM differ in conversations depending on several social factors within Reddit: post and subreddit features, conversation depth, user tenure, and the controversiality of a comment. Finally, we measure the change of LSM following loss of status after community banning. Our findings reveal the interplay of LSM in Reddit conversations with several community metrics, suggesting the importance of understanding conversation engagement when understanding community dynamics. § INTRODUCTION Social influence can be subtle. When two persons converse, their interpersonal dynamics can lead to one person adopting the language of the other. For example, in settings where one person has higher status or power, the lower-status person may unconsciously begin mirroring the language of the other <cit.>. This process has been described as accommodation <cit.> or linguistic style matching (LSM) <cit.> and can reflect the underlying influence that individuals have on each other <cit.>. Past work has primarily focused on how linguistic influence changes relative to the identities of the speakers. However, the larger social context in which a conversation happens also plays a role in determining whether an individual may be influential. Here, we perform a large-scale study of linguistic influence to test how specific types of social context influence the level of accommodation. Past work in the social sciences has studied accommodation to understand the influence and social power dynamics in specific settings, like job interviews (applicants and interviewers) <cit.> and academic context (students and faculty)<cit.>. Also, LSM has been studied to understand group dynamics <cit.> and negotiations <cit.>. Work in NLP has operationalized these theories to test accommodation theory in new domains. Typically, these works adopt some tests for measuring influence in language and have shown these measures correlate with known social differences. However, it is yet unknown how LSM occurs in conversations in online community platforms and differs by community dynamics. Our work examines the larger context in which linguistic influence occurs. Using a large sample of 2.3 million conversations from Reddit and two measures of linguistic influence, we test how the level of linguistic influence correlates with conversational outcomes, such as conversation length and even the continued presence of a person in a community. Further, we examine how specific social and contextual factors influence the rates of linguistic influence. For instance, we discover that the controversy level of the parent comment can lead to different dynamics of style matching in the conversation threads. This paper offers the following three contributions. First, we systematically compare complementary measures of accommodation, showing clear evidence of style accommodation in Reddit conversations. 
Second, we draw the relationships of several social factors that affect LSM, including levels of engagement, the popularity of the content, and tenure within a subreddit. Third, we demonstrate the use of LSM to measure the loss of status through the banning of subreddits. We have released all code and data for full reproducibility.[<https://github.com/davidjurgens/style-influence>] § ACCOMMODATION AND ITS MEASUREMENT In this section, we discuss communication accommodation theory and associated sociolinguistic research to outline the accommodation of communicative behavior based on perceived social power dynamics. Subsequently, we explore the concept of linguistic style matching and methods adopted by researchers to quantify this phenomenon. We also investigate various factors that contribute to LSM variations and their strategic uses. §.§ Accommodation Theory as Social Influence When two individuals engage in social interaction, they may either converge or diverge in their communicative behavior. The Communication Accommodation Theory (CAT) suggests that the degree of convergence or divergence is affected by the relative social power between the interlocutors <cit.>. Asymmetric convergence is more likely to occur in situations where there is a power imbalance between the interlocutors. Individuals with lower social power or status are more likely to adapt their communication style to align with those in higher or dominant positions <cit.>. For instance, Puerto Ricans in New York City during the 1970s, who were perceived to hold less power than African Americans, adopted the dialect of African Americans to converge with their more powerful counterparts <cit.>. Social power has been often found to be an important determinant of degrees of accommodation <cit.> and interactants of differential social power or social status can act in a complementary fashion <cit.>. §.§ Linguistic Style Matching Linguistic alignment is a pervasive phenomenon that occurs in human communication where interactants unconsciously coordinate their language usage. This coordination, described as convergence in the psycholinguistic theory of communication accommodation, involves aspects such as word choice, syntax, utterance length, pitch, and gestures <cit.>. Linguistic style matching (LSM) is a specific manifestation of linguistic alignment, wherein individuals unconsciously match their speaking or writing styles during conversations <cit.>. Unlike content accommodation, LSM focuses on stylistic accommodation, examining how things are communicated rather than what they communicate. Individuals strategically negotiate their language style to decrease social distance, seek approval, and accommodate each other. LSM can also reflect the level of common understanding and conceptualization of the conversation topic between speakers. The degree of LSM can indicate social power dynamics as indicated by <cit.>. Empirical evidence from recent studies <cit.> showed that participants with less power (such as lawyers or non-administrative roles in Wikipedia) exhibit greater coordination in conversational behavior than participants with high power (such as justices or administrators). Additionally, <cit.> identified a positive correlation between linguistic accommodation and social network centrality, which effect can be greater than the effect of power status distinction. 
Studies by <cit.> further show that individuals in a lower position of power tend to accommodate their linguistic style to match that of their higher-power counterparts during face-to-face communication as well as computer-mediated communication. The variance in LSM can be attributed to various social and psychological factors and can be triggered for different purposes. Linguistic alignment may signal likability and agreement, relate to seeking approval or arise from social desirability. Higher levels of accommodation in social behaviors are found to be associated with increased feelings of affiliation, liking, and successful interpersonal relationships <cit.>. Thus, linguistic alignment can be strategically employed to establish relationship initiation and stability <cit.>, increase group cohesion, and task performance <cit.>, and assist in negotiations <cit.>. Furthermore, alignment has been found to enhance persuasiveness, motivating listeners to adopt healthier practices <cit.> while in some cases like presidential debates, it has been perceived as more aggressive <cit.>. The degree of matching may differ based on context and individual factors. § DATA Reddit is a popular social media platform with a forum-based interface. It allows users to interact with dispersed individuals who share similar experiences or topics of interest. Our dataset to study LSM spans from July 2019 to December 2022 and includes 35M users and 500K subreddits. Using the Pushshift Reddit Dataset which contains the full history of comments aggregated on a monthly basis <cit.>, we construct conversation threads from the comments and filter those that satisfy the following conditions: (1) the conversation chain consists of exactly two users; (2) the beginning of the conversation chain must be a root comment which does not have a parent comment; and (3) the lengths of a conversation chain must between 3 and 100. These conditions allow us to capture conversation dynamics between exactly two users without any interference. Our resulting dataset contains 16,893,013 conversation turns (or comments) across 2,305,775 conversation chains from 68,788 subreddits. § HOW SHOULD WE MEASURE LINGUISTIC INFLUENCE? Computational work has proposed multiple approaches for both what to measure and how to measure linguistic influence. In this section, we aim to build intuition for what the two measures of accommodation—using function words and formality—are operationalizing. §.§ Linguistic Style Markers Our study measures linguistic influence with two complementary style markers. We use the notation m to refer to a marker throughout. Marker 1: Function Words Function words (e.g. pronouns, prepositions, articles, and auxiliary words) are primarily employed unconsciously and frequently and incorporate social knowledge for comprehension and usage <cit.>. Prior computational studies of linguistic accommodation have measured linguistic influence by tracking the relative frequencies of function words across conversation turns <cit.>. Function words reflect how content is expressed, rather than what specific content is expressed (e.g., content words) and are thought to be a better proxy for unconscious language processing <cit.>. Here, we use the function words defined by the Linguistic Inquiry and Word Count (LIWC) lexicon <cit.>. Marker 2: Formality Individuals adopt a specific register that is appropriate to their position in the social context, real or desired <cit.>. 
A commonly varied register is the level of formality used when speaking to another. The level of formality shown by a speaker is known to reflect the speaker's opinion towards a topic or their closeness to the listener <cit.>. Unlike function words, variation in formality often requires conscious processing to select the appropriate phrasing in a given circumstance. As a result, it offers a complementary view into how a speaker influences another through shifting the conversation towards a more formal or informal register. Here, we measure formality using a supervised classification model. The model is a fine-tuned RoBERTa-based classifier <cit.> trained on the GYAFC <cit.> and Online Formality Corpus <cit.> datasets; we use the model available from the Hugging Face API[<https://huggingface.co/s-nlp/roberta-base-formality-ranker>]. Both datasets contain social media text and the reported model performance is high for both blogs and Q&A text (Spearman's ρ>0.7). Using this classifier, each comment's formality is measured on a continuous scale in [0,1]. Importantly, these style variables are related; function word frequency also changes in more formal contexts, where articles and prepositions typically become more common while pronouns and interjections become less common <cit.>. Content word-based measures of style and function word counts are thought to capture the same latent style variables, i.e., they are interchangeable at a stylometric level <cit.>. §.§ Measuring Linguistic Influence At a high-level, linguistic influence (also referred to as LSM or accommodation in this paper) is measured by testing whether the value for some measure m of a comment made by user a is predictive of the value of m in the reply to that comment by user b. Therefore, one straightforward way to measure accommodation is with linear regression: m_b ∼β_0 + β_1 m_a where β_0 reflects the baseline level of the measure (e.g., the average formality) and β_1 measures the level of accommodation (e.g., the average increase in formality associated with a 1-unit increase in the formality of the parent comment). However, as <cit.> note, the characteristics of a comment are likely influenced by other unrelated factors such as the length of the comment or the number of turns in the conversation. Indeed, they show that unless one controls for such factors, linguistic influence may be overestimated. Therefore, we used a mixed-effects regression to control for comment a and b's length in tokens (fixed effects L_a, L_b), the number of replies r_b → a that b has made to a so far in the conversation. To capture individual and community-level variation, we include random effects to control for the effect of the subreddit s; these random effects let us control for differences in the norms of communities (e.g., some communities are more/less formal) to test for relative changes in m. Linguistic accommodation is modeled as m_b ∼ β_0 + β_1 m_a + β_2 L_a + β_3 L_b + β_4 r_b → a + (1 | s) where β_1 measures the level of accommodation. §.§ Results We first observe clear evidence of accommodation in both style markers: parent comments with more function words receive replies with more function words (Figure <ref>), and more formal parent comments receive more formal replies (Figure <ref>). 
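As a concrete illustration of how the accommodation coefficient β_1 in the model above can be estimated, a mixed-effects fit along the following lines could be used. This is a sketch only, not the released analysis code; the data frame, file name, and column names (m_a, m_b, L_a, L_b, r_ba, subreddit) are assumed for illustration:

import pandas as pd
import statsmodels.formula.api as smf

# One row per (parent comment, reply) pair; column names are illustrative:
# m_a, m_b  -> style marker of the parent comment and of the reply
# L_a, L_b  -> comment lengths in tokens
# r_ba      -> number of replies b has made to a so far in the conversation
# subreddit -> community identifier used as the random-effect grouping factor
df = pd.read_csv("conversation_turns.csv")

model = smf.mixedlm(
    "m_b ~ m_a + L_a + L_b + r_ba",   # fixed effects; the coefficient on m_a is beta_1
    data=df,
    groups=df["subreddit"],           # random intercept (1 | subreddit)
)
result = model.fit()
print(result.summary())
print("accommodation estimate (beta_1):", result.params["m_a"])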
For comments where we have the text of the original post, we observe accommodation even after controlling for the author and original post's style markers, suggesting that users may accommodate to the style of the person they are interacting with in the comment thread. However, this effect plateaus when the parent comment has above-average levels of a style marker, suggesting a potential threshold for the impact of parent comment style on reply style. This attenuation of effect may be the result of several mechanisms, including regression to the mean or an author modulating their replies according to their own personal style (i.e., a more extreme parent comment may trigger greater modulation). Second, the two style markers are almost perfectly uncorrelated, suggesting that they measure distinct constructs. In order to calculate the correlation between these two measures, we randomly sample 1,000 subsets of the conversation turns and calculate the extent of accommodation in function words and formality in that subset. The correlation between the function-word- and formality-based accommodation scores is -0.00171. Third, accommodation in the two style markers seems to occur via fundamentally distinct psychological processes. Accommodation can occur either 1) through a subconscious priming mechanism, where the speaker instinctively repeats what they hear; or 2) through a more conscious, strategic act with communicative intent <cit.>. Figure <ref> suggests that function-word-accommodation seems to be an unconscious form of relating to the audience, while formality-accommodation seems to be more intentional and strategic. Commenters exhibit greater accommodation in function words when they take less time to reply to the prior comment (<ref>) and greater accommodation in formality when they reply more slowly (<ref>). These results are consistent with prior work, suggesting that accommodation of function words occurs subconsciously (reflexively, takes less time) and builds on this work to show that accommodation in other style markers, like formality, occurs strategically (intentionally, takes more time). Fourth, there is little variation in accommodation across subreddit characteristics. Figure <ref> shows the levels of accommodation across ten different types of subreddits, using an existing taxonomy of popular subreddits.[<https://www.reddit.com/r/ListOfSubreddits/wiki/listofsubreddits/>] While certain types of subreddits (e.g., lifestyle) tend to have higher levels of accommodation than others (e.g., technology, entertainment), most differences are only weakly significant (p>0.01) with a small effect size. Moreover, Figure <ref> shows the relationship between subreddit size and variation in linguistic style, for 300 subreddits sampled based on their number of subscribers. To calculate variation in linguistic style, we use <cit.>'s comprehensive set of linguistic features. Linguistic variation within each subreddit is estimated as the mean Shannon Entropy of each Biber tag frequency at the subreddit level. Despite expectations that larger communities may exhibit greater diversity in language use <cit.>, we find no relationship between community size and linguistic variation. Overall, these findings point to the nuanced dynamics of LSM in online interactions, indicating that factors such as function word usage and formality in the parent comment are associated with the linguistic style and tone of replies. § WHAT FACTORS ABOUT A COMMENT INFLUENCE THE DEGREE OF ACCOMMODATION? 
LSM can be affected by many factors and existing studies have pointed out the roles of not only linguistic characteristics but also the contextual factors affecting LSM <cit.>. In this section, we study the connection between LSM and a series of contextual factors where the comment is posted (i.e., comment depth) and the “success” of a comment (i.e., comment Karma and parent comment Karma). §.§ Experimental Setup To test for heterogeneity in the level of accommodation with respect to several covariates (e.g., depth, Karma), we run a mixed effects regression similar to Section <ref>.2, but include an interaction term to test whether accommodation changes significantly with respect to some covariate (say, Karma K): m_b ∼ β_0 + β_1 m_a + β_2 K + β_3 m_a*K + β_4 L_a + β_5 L_b + β_6 r_b → a + (1 | b) + (1 | s) Here, β_1 measures the level of accommodation when K=0 and β_3 measures the increase in accommodation when K increases by one point; if β_3 is significantly different from 0, then we have evidence that accommodation is heterogeneous with respect to Karma. In order to visualize these effects, we fit the model in the above equation to estimate accommodation at different values of Karma. In order to appropriately represent uncertainty in this model, we sample 100,000 conversation turns at each value of Karma 10 times and use this to obtain 10 different estimates of accommodation for each value of the covariate. To visualize the association between Karma and accommodation, we plot Karma on the x-axis and the LSM estimates on the y-axis. §.§ Results As shown in Figure <ref>, various factors of comments are related to LSM. Comment depth Comment depth reflects the position of a comment in the conversation tree. Deeper comments are usually posted in longer conversations and when the users are more engaged in the dialogue. As shown in Figure <ref> and Figure <ref>, comment depth is positively correlated with LSM. However, accommodation in formality drops off for very deep comments. LSM happen more when the comment is deeper in the conversation tree, suggesting that users tend to match not only the content but also the structural aspects of their language in response to their interlocutor. Such a trend could be due to greater investment in the conversation. When two users are involved in longer and deeper conversations, they are more likely to be engaged in the conversation, which may lead to higher subconscious but lower conscious LSM. Comment Karma A key feature of Reddit is the ability for users to upvote or downvote comments, which determines the comment's karma - a measure of its popularity within the community. In figure <ref>, we observe several non-linear associations between karma, comment characteristics, and LSM. In terms of comment karma, users' LSM tends to remain relatively constant, except for cases where the comment has very high karma, which is associated with an increase in LSM. This finding implies that highly popular comments may foster greater linguistic alignment between users. We also see that comments with low karma have lower levels of LSM than comments with high karma (Figure <ref>), which makes sense since we'd expect users to respond better to comments whether the author is mirroring their interlocutor. Notably, this upward trend reverses in comments with very high karma – which have lower levels of LSM than comments with lower levels of karma. The reversal of the LSM trend in comments with high karma warrants further exploration. 
One possible explanation for this phenomenon is that highly upvoted comments may exhibit unconventional linguistic styles that deviate from the norm, which could be seen as novel by the Reddit community. Another explanation may be that comments with high karma are more likely to be popular in larger, diverse communities where users may have a wider range of linguistic styles. Additionally, it is possible that comments with high karma receive a higher volume of comments and interactions, which may dilute the overall LSM score due to the presence of diverse linguistic styles from multiple interlocutors. § WHAT EFFECT DOES ACCOMMODATION HAVE ON THE CONVERSATION ITSELF? Linguistic accommodation is usually associated with positive social benefits <cit.>. Here, we test whether linguistic accommodation is associated with two positive behaviors in social media: sustained conversation and length of participation in a subreddit. §.§ Experimental Setup We fit a linear regression on conversational dyads following the LSM measure in Section <ref>.2. Following the procedure from the prior section, we estimate the level of accommodation for comments around a particular covariate by sampling 100,000 conversation turns at or near the respective value of the covariate. Once again, we verify that differences between covariates are significant, by introducing interaction terms in the regression and testing for a statistically significance effect. §.§ Results Figures <ref>a and <ref>b compare the effect of alignment when conditioned on the total length of the conversation thread. For both functions words and formality, we observe from the fitted lines that accommodation is more likely to happen from longer conversations, but only up to a certain length of approximately 30-40. This suggests the possibility of LSM being an earlier indicator of how engaged the users will be in a conversation. On the other hand, the likelihood of accommodation in formality decreases when the conversation becomes longer than a certain threshold, which suggests that speakers may stop consciously trying to accommodate once the conversation becomes sufficiently long. Figures <ref>c and <ref>d compare accommodation likelihoods at a given turn within a conversation. Interestingly, we can observe that LSM starts off highest at the beginning of a conversation and decreases as the number of turns increases. Combining the two results, we can conjecture that while the degree of LSM generally decreases within a conversation thread, the initial levels of LSM observed at the early stages of a conversation can indicate how engaged the speakers will be, which one can use to estimate the overall conversation length. How does LSM differ by tenure and number of subsequent posts in a subreddit? Figure <ref> shows that, for both style markers, users who have a longer tenure in the subreddit or who post more in the subreddit in the next month tend to display higher subconscious and lower conscious LSM. We consider these results as evidence of the “lifespan” of a user's engagement toward conversations held within that subreddit, and ultimately engagement toward the subreddit itself, which has been noted in prior work <cit.>. § WHAT EFFECT DOES THE SOCIAL CONTEXT HAVE ON ACCOMMODATION: CONTROVERSIALITY? In this section, we examine whether LSM differs by social contexts that arise during conversations. Specifically, we focus on the controversy level of the parent comment. 
In contrast to non-controversial issues, controversial issues lead to competitive disagreement, where the goal of the groups involved in argumentation is to convince the opponent group(s) of the validity of one’s point of view <cit.>. The arguments on controversial issues tend to invite strong emotions with negative affect <cit.> and deteriorate the deliberation in the public sphere because interactions often turn uncivil <cit.>. §.§ Experimental Setup Following the procedure from the prior section, we estimate the level of accommodation for comments at each covariate, separately for controversial and non-controversial comments. When a comment or post receives a substantial number of upvotes and downvotes, Reddit automatically designates it as controversial. The exact method used by Reddit to determine controversy remains private. However, the Reddit API offers a binary label indicating whether a comment is controversial or non-controversial <cit.>. Approximately 1.30% (n=218,899) of the comments in our sample are labeled as controversial. We test that differences between conditions are significant with a three-way interaction term in the regression between the parent-comment style, the comment's Karma (or other covariates) and the comment's controversiality: m_a × K × C. §.§ Results Figure <ref> reveals that LSM occurs differently in controversial and non-controversial comments. For both function words and formality, LSM is less likely to occur in controversial rather than non-controversial comments when the conversation length is below a certain threshold (12-14). Interestingly, we see that this trend is strengthened as the conversation length increases. One possible explanation is that controversial comments generate more initial interest that promotes users to engage more in conversations. However, this initial effect is washed away as the conversation takes further turns, and the conversation is less likely to continue due to reasons such as incivility. Non-controversial comments, on the other hand, enjoy less of this initial boost and is more likely to carry on if the users have accommodated each other's language during their conversation. With the addition of Karma, we can observe a more complex trend that plays out differently for each style marker. For function words, conversations in controversial comments have a nonlinear relationship that drops as the parent comment's Karma increases, whereas a weak positive correlation can be observed for non-controversial comments and levels of Karma. In contrast, for formality, LSM occurs most at comments with about 0-5 Karma and decreases for higher Karma for both controversial and non-controversial comments. Overall, we observe that social contexts that are defined by the community platform such as Karma or controversy have complex, nonlinear effects on how LSM occurs in conversations. § LOSS OF STATUS VIA COMMUNITY BANNING Reddit bans specific subreddit communities as a result of policy violations, such as repeated posting of highly offensive content or lack of moderator oversight <cit.>. When users are highly active in such communities, the ban potentially results in a loss of status, as they are forced to find new communities to participate in. Here, we test the extent to which users change how they are linguistically influenced by others after such a ban. While prior work has studied how users change after gaining status <cit.>, our unique setting allows us to perform a novel study of the potentially humbling effects of status loss. 
In addition, a study of the subreddit suggests that formality is (weakly) associated with more effective persuasion on Reddit <cit.>; we hypothesize that users who recently experienced a ban may have multiple pragmatic reasons to accommodate more. §.§ Experimental Setup We test for changes to linguistic influence using a pseudo-causal difference-in-difference analysis <cit.>. Subreddit ban dates were determined by identifying all banned subreddits and then using the last date of a post in that subreddit. Our sample includes 1,024 subreddits banned between July 2019 and December 2022. We identify 16,686 users in our sample who made at least one comment in these subreddits in the 30 days before their ban. Each user from a banned subreddit is considered as treated and matched with a control user who did not participate in that subreddit. Three analyses of the effect of the ban are performed, controlling for user-level and temporal factors. First, we estimate the effect of commenting in a banned subreddit, by comparing posts made in banned subreddits t months before the ban to posts made by the same users at the same time, in other subreddits. Second, using a difference-in-differences approach, we estimate the effect of banning a subreddit on authors' use of accommodation in (unbanned) subreddits they were active in for t months before and after the ban. This second analysis measures the spill-over effects of the ban on users' behaviors in other subreddits; the difference-in-differences estimator uses users active in these subreddits at the same time, but not in a banned subreddit, as a control for temporal and subreddit-level effects. Third, we calculate the effect of the ban on commenting behavior in subreddits users migrated to (i.e., newly joined) after the ban was enacted. The difference-in-differences estimator compares accommodation in comments in the banned subreddits to comments in the subreddits these users migrated to; to isolate the effect of migration, the difference between the comments in the migrated and banned subreddits are compared against the spill-over effects in other subreddits that users were a part of during this time. §.§ Results Our results suggest that policy actions on Reddit, such as banning, have an effect on the level of accommodation by users. First, the level of subconscious accommodation tends to be lower in banned subreddits than other subreddits the users comment in during the 30 days before the ban (the effects are all below 0 in Figure  <ref> (p < 2e-16). Second, following the banning of a subreddit, users tend to change their LSM levels in other subreddits: Figure <ref> shows that function-word-mirroring (banned:function) and formality-mirroring (banned:formality) increase after a subreddit is banned. Our results suggest that users who had previously been active in banned subreddits may have been making an effort to index agreeableness by accommodating (e.g., to avoid losing status in another community). Third, changes in accommodation are initially amplified in subreddits that these users migrate to after their original community was banned. The comments left by these users in banned subreddits exhibit higher levels of accommodation than would be expected immediately before the ban and maintain higher subconscious accommodation in subreddits they migrated (Figures <ref> and <ref>p < 2e-16). 
Since function-word mirroring is likely subconscious and formality-mirroring strategic (Section <ref>), our results suggest that users who had previously been active in banned subreddits may have, intrinsically, indexed agreeableness by accommodating (e.g., to gain status in their new community) but without making a conscious effort (e.g., because they were upset about the loss of status). These users also increased LSM in the subreddit immediately before it was banned (e.g., perhaps to index agreeableness when warnings about the ban were issued). § DISCUSSION AND CONCLUSION In this study, we performed a large-scale computational analysis of Reddit conversations to understand when LSM occurs and its effect on platform engagement. Overall, our findings indicate that LSM frequently occurs in online conversations on Reddit and that it exhibits complex nonlinear relationships with conversation metrics such as Karma, conversation length, and controversy scores, which suggests that linguistic influence can affect conversation dynamics. Furthermore, we show that the degree of accommodation in conversations is related to greater levels of engagement at both the conversation and platform levels. Our findings highlight the possibility of using LSM as an indicator of engagement and civil conversations and suggest ideas for building and maintaining online communities that promote constructive discourse. In our experiments, we have treated LSM as a unidirectional concept by measuring the exhibition of a particular style conditioned on the previous turn. However, LSM can occur in several different directions, such as the two speakers converging toward a single style or even diverging into separate styles. While not in the scope of this study, the existence of such types of LSM in Reddit conversation threads can be studied in future research. § ETHICAL CONSIDERATIONS This study was conducted only on observational data and did not require any human intervention. We did not use any information that could identify individuals or specific demographic groups, and all of our presented results were obtained through aggregation from millions of users and comments. § ACKNOWLEDGMENTS This material is based in part upon work supported by the National Science Foundation under Grant No IIS-2143529.
http://arxiv.org/abs/2307.02906v1
20230706103814
A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition
[ "Orhan Konak", "Alexander Wischmann", "Robin van de Water", "Bert Arnrich" ]
cs.LG
[ "cs.LG", "cs.CV", "eess.SP" ]
orhan.konak@hpi.de 0000-0003-1884-8029 Hasso Plattner Institute University of Potsdam Prof.-Dr.-Helmert-Straße 2-3 Potsdam Germany 14482 alexander.wischmann@student.hpi.uni-potsdam.de Hasso Plattner Institute University of Potsdam Prof.-Dr.-Helmert-Straße 2-3 Potsdam Germany 14482 robin.vandewater@hpi.de Hasso Plattner Institute University of Potsdam Prof.-Dr.-Helmert-Straße 2-3 Potsdam Germany 14482 bert.arnrich@hpi.de Hasso Plattner Institute University of Potsdam Prof.-Dr.-Helmert-Straße 2-3 Potsdam Germany 14482 Sensor-based Human Activity Recognition facilitates unobtrusive monitoring of human movements. However, determining the most effective sensor placement for optimal classification performance remains challenging. This paper introduces a novel methodology to resolve this issue, using real-time 2D pose estimations derived from video recordings of target activities. The derived skeleton data provides a unique strategy for identifying the optimal sensor location. We validate our approach through a feasibility study, applying inertial sensors to monitor 13 different activities across ten subjects. Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach, demonstrating its efficacy. This research significantly advances the field of Human Activity Recognition by providing a lightweight, on-device solution for determining the optimal sensor placement, thereby enhancing data anonymization and supporting a multimodal classification approach. A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in Sensor-based Human Activity Recognition Bert Arnrich Gastón P. Fernández[Ph.D. student at the University of Leuven (KU Leuven), Department of Economics, Naamsestraat 69, box 3565, 3000 Leuven (e-mail: gfernandez@kuleuven.be). I deeply appreciate the invaluable guidance of my advisors Laurens Cherchye and Frederic Vermeulen. I would also like to thank Wietse Leleu and all participants at the Conference of the European Society for Population Economics (ESPE) in Belgrade, the Trans-Atlantic Doctoral Conference (TADC) in London, and the Public-Labor-Health Seminar, the Household Economics Gathering, and the ECORES Summer School in Leuven for their helpful comments. All errors are on my own.] University of Leuven (KU Leuven) =========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § INTRODUCTION Sensor-based Human Activity Recognition (HAR), as part of pervasive computing, describes the process of distinguishing movements by using Inertial Measurement Units (IMU). IMUs primarily measure quantities such as acceleration and angular velocity. Depending on the performed movements, the data constitute distinct time series patterns, which can be classified by a Machine Learning (ML) model. 
The movements can range from low-level activities, such as walking and standing, to high-level activities, which are combinations of multiple low-level activities. HAR has potential applications in various domains, including healthcare, sports, and smart environments <cit.>. However, challenges remain regarding the placement of sensors to achieve higher classification performance and the preservation of privacy <cit.>. Optimal placement of on-body sensors is a major challenge in HAR, as the location of these sensors directly influences the activity classification <cit.>. Therefore, many sensors or experiences from similar studies are consulted before conducting an own study. Data acquisition and labeling in HAR are another challenge and are typically facilitated by the use of a camera, introducing privacy concerns. Video recordings in privacy-sensitive areas often raise ethical questions, thus prompting convoluted workarounds <cit.>. To address these challenges, we introduce a method designed to optimize the sensor placement while preserving privacy in HAR. To preserve privacy, we convert video data into real-time 2D human pose estimations, creating a skeleton representation of the subject's movements, as depicted in <ref>. These 2D keypoints not only aid in recommending optimal sensor placement for given activities but also enrich the classification process in a multimodal classification approach. To evaluate the effectiveness of our approach, we conducted a lab study with ten subjects performing nursing activities. These activities were recorded over eight hours, providing an open access rich dataset that encompassed a wide range of movements and scenarios. We evaluated our method with nursing activities due to their complexity, i.e., the variety of low-level and high-level tasks involved and the relevance to healthcare, one of the main domains where HAR can have a significant impact <cit.>. The evaluation allowed us to assess our approach's performance in terms of sensor placement, multimodal classification, as well as its ability to preserve privacy during these processes. Through our evaluation, we found that multimodality increased F1 score by up to 4.4%. Furthermore, three out of four sensor placement suggestions were equal to the best-performing deep learning model, a CNN-LSTM, with an overall Kendall’s tau of 0.8. Therefore, our research contributes to the field of HAR through an on-device method using 2D pose estimation for determining optimal sensor placement, requiring only 500 data points. This approach can even work with publicly available video footage of target activities. Furthermore, the utilization of 2D keypoints from pose estimation not only enhances privacy during data collection but also facilitates a multimodal approach to HAR, creating an efficient fusion between IMUs and 2D keypoints. The remainder of the paper is structured as follows: In Section <ref>, we contextualize our research within existing approaches of optimal sensor placement and multimodality. In Section <ref>, we provide details on our approach. In Section <ref>, we uncover and assess the practicality of the proposed features through a feasibility study on nursing activities. Section <ref> discusses the results and limitations of our study, while Section <ref> concludes the paper and outlines potential future research directions. § RELATED WORK HAR is a field that has seen significant progress in recent years, particularly in the context of sensor-based HAR with wearable sensors <cit.>. 
HAR has many potential applications, e.g., healthcare, fitness, security, and surveillance. In healthcare, HAR can be used to monitor the activity levels of elderly patients with chronic diseases <cit.>. In fitness, HAR can be used to track physical activity and provide feedback to athletes <cit.>. In security and surveillance, HAR can be used to monitor the activities of people in restricted areas or identify potential threats <cit.>. As such, a vast body of literature encompasses a range of research sub-areas, including sensor placement optimization and a multimodal classification approach. Throughout this section, we highlight the strengths and limitations of the existing approaches and compare them to our approach. §.§ Sensor Placement in HAR Conducting a study on sensor-based HAR with IMUs necessitates the question of sensor placement. The classification result highly depends on the incoming data, which varies with the location and number of used sensors for different body parts <cit.>. Research suggests that the most accurate results are achieved when sensors are positioned at the chest, ankles, and thighs <cit.>. Evidence indicates that harnessing accelerometers on both the upper and lower torso concurrently can significantly enhance the precision of activity recognition <cit.>. <cit.> compared the performance of different placements of accelerometer devices on the body in categorizing physical activities and estimating energy expenditure in older adults. They used five different body positions for accelerometer placement: wrist, hip, ankle, upper arm, and thigh. The study concludes that considering the placement of the accelerometer devices is important in optimizing the accuracy of HAR. <cit.> discusses how the performance of HAR systems is affected by the sensor position and proposes an optimization scheme to generate the optimal sensor position from all possible locations given a fixed number of sensors. The system uses virtual sensor data to access the training dataset at a low cost and can help make decisions about sensor position selection with great accuracy using feedback. In contrast to existing approaches, our approach does not require any sensor setup. Instead, we rely on human pose estimations using either self-recorded videos or existing videos of the target activities to determine the optimal sensor placement. This significantly reduces the setup and calibration efforts required for HAR and eliminates the need for physical sensors. Additionally, our approach involves much less computation compared to classical approaches that involve training and testing with large datasets, making it a more efficient and practical solution for real-world applications. §.§ Multimodal Approaches in HAR Multimodal HAR has gained more attention in recent years due to its potential to leverage multiple sources of sensory data and provide more accurate and robust activity recognition compared to unimodal approaches <cit.>. <cit.>, for example, explored methods of fusing and combining multi-representations of sensor data, using data-level, feature-level, and decision-level fusions with Deep Convolutional Neural Networks and achieved promising results. <cit.> proposed MMHAR-EnsemNet, which uses four different modalities to perform sensor-based HAR and has been evaluated on two standard benchmark datasets. In contrast to these multimodal approaches, we utilize a single device for collecting data from IMUs and videos. 
This data is transformed in real-time into 2D human pose estimations, providing an inherently given multimodal datastream for recording and classification. § METHODS This section outlines our approach toward the collection and recording of data as well as the proprietary method for determining the optimal sensor placement. §.§ Connection We decided to use the Xsens™DOT sensor[For detailed information see the user manual: <https://www.xsens.com/hubfs/Downloads/Manuals/Xsens%20DOT%20User%20Manual.pdf>] as a standalone device at specific on-body locations, which allows for unobtrusive data recording because of its size and weight. We used the Xsens DOT Android software development kit (SDK version v2020.4)[https://base.xsens.com/s/article/Xsens-DOT-Software-Package?language=en_UShttps://base.xsens.com/s/article/Xsens-DOT-Software-Package?language=en_US] to build an app for scanning, connecting, and receiving data in real-time. Xsens DOT uses Bluetooth for data transmission to the host device. Although there is no connection limit in the Xsens DOT SDK services, the central devices' hardware and operating system constraints limit the maximum number of sensors that can be connected simultaneously. Using Android, it is possible to connect up to seven sensors. The output rate for the measurement can be specified and ranges from 1 Hz to 60 Hz for real-time streaming. The recording mode allows up to 120 Hz. All sensors are time-synced after synchronization. Transmitted data includes calibrated orientation data (quaternion), calibrated inertial data, and magnetic field data. §.§ Recording Connecting the IMUs to our application facilitates capturing various sensor data types, including quaternions, free acceleration, angular velocity, and the magnetic field normalized to Earth's field strength, at adjustable output rates. The application also supports video recording. While the output rate can be set according to the user's preference, it is ultimately limited by the device's hardware capabilities. The recorded video is leveraged to generate real-time pose estimations. These estimations serve three primary purposes: they guide the determination of optimal sensor placement, ensure the anonymization of the incoming data stream, and support a multimodal classification approach, thereby enhancing the accuracy and utility of our method. §.§ Optimal Sensor Placement The optimal sensor placement is derived through 2D pose estimations. Pose estimation is a computer vision technique that refers to detecting humans and their poses from image and video data <cit.>. We use the incoming video data for real-time pose estimations to create key body joints. To make it work on the device, we use MoveNet Thunder's[https://tfhub.dev/google/movenet/singlepose/thunder/4https://tfhub.dev/google/movenet/singlepose/thunder/4] pre-trained TensorFlow Lite (TFLite) pose estimation model <cit.>. The outcome is a landmark of 17 keypoints in 2D at different body locations, such as ankles, knees, hips, wrists, elbows, shoulders, and some facial parts in each timestamp. Since the position data has a causal link to the acceleration through the second derivative, each keypoint can be understood as an accelerometer. Hence, we interpret each keypoint as a potential location for sensor placement. We implemented an algorithmic procedure to calculate the optimal sensor placement, which works in three phases. The selected pose estimations underwent preprocessing, involving the combination of keypoints. 
Not all 17 detected keypoints were suitable for sensor placement, leading to the consolidation of several keypoints. The head-related keypoints and the hip keypoints were each replaced with a single averaged keypoint, as they belong to one bone segment. To mitigate rapid changes in keypoint coordinates due to movement or incorrect pose estimation, the remaining 12 keypoints were centralized, with their center of mass located at point (0.5, 0.5) in each data series. For comparison with a real-life setting, we reduced the number of keypoints to five by selecting the two wrists, the two ankles, and the pelvis. <cit.> showed in their work on Deep Inertial Poser that these locations contain rich information for full body pose estimation, making them ideal for evaluation purposes. The head was excluded from sensor placement as it was considered less relevant to the movements under study. In the second phase, we define and calculate a cross-validated feature metric D_k, inspired by the cosine distance, which determines the optimal sensor placement. For each activity, we require a minimum sequence of 500 data points in x and y, corresponding to a 50 s recording at 10 Hz. The number 500 was determined through experimentation with different sequence lengths. Recordings longer than 500 data points are truncated to a uniform length of 500. We convert these sequences into a multivariate per-keypoint time series. We denote activities by a_i ∈ A={a_1, a_2, …, a_n}, where n represents the number of activities. A concatenated time series is created for each a_i ∈ A and each combination k of s out of the 12 keypoints; this results in a vector A_k^i of s × 500 two-dimensional data points. We hypothesize that more distinct vectors between the activities, i.e., a lower normalized dot product, correspond to more distinct features and thus lead to higher classification accuracy. Therefore, a higher D_k value coincides with a higher likelihood of an optimal sensor location. Using the following expression, we calculate the D_k value for each combination k, indicating the difference of the respective keypoint vectors between the different activities: D_k:= ∑_i=1^n-1∑_j=i+1^n|1 - 𝐀^i_k·𝐀^j_k/(‖𝐀^i_k‖ ‖𝐀^j_k‖)|. Finally, in the third phase, all keypoint combinations are sorted by D_k and displayed in a dialog box. Providing a vision-based virtual sensor approach allows us to find the optimal sensor placement with less effort than using physical IMUs with subsequent model training and evaluation. Therefore, having the sensors at hand and collecting IMU data is not required. An existing or self-recorded video of the targeted activities suffices to receive recommendations for the sensor placement. § EXPERIMENTAL EVALUATION: NURSING ACTIVITY RECOGNITION In order to evaluate the effectiveness of the algorithmic approach, we collected data on nursing activities under the instruction of a professional nurse. Nursing activities were selected as they encompass a wide range of complex and diverse tasks, requiring accurate and efficient data collection and classification. By applying our approach to this real-world scenario, we can effectively demonstrate its capabilities in addressing the challenges of sensor placement, multimodal classification, and privacy preservation. §.§ Data Description Data for this study were collected using five Xsens DOT sensors with a 60 Hz output rate at the positions left wrist, right wrist, pelvis, left ankle, and right ankle. 
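The ranking step can be illustrated with a short sketch. The array layout (one array of centralized (x, y) trajectories per activity) and the function names below are our own assumptions for illustration and are not taken from the released application code.

```python
import itertools
import numpy as np

def dk_score(activities, keypoint_subset):
    """Compute the D_k metric for one subset (combination k) of keypoints.

    activities: list of arrays of shape (500, n_keypoints, 2), i.e. the
        centralized (x, y) trajectories, one array per activity, truncated
        to 500 frames (50 s at 10 Hz).
    keypoint_subset: indices of the keypoints forming combination k.
    """
    # Concatenate the selected keypoint trajectories into one flat vector
    # A_k^i per activity (s * 500 two-dimensional points).
    vectors = [a[:, list(keypoint_subset), :].reshape(-1) for a in activities]
    score = 0.0
    for i, j in itertools.combinations(range(len(vectors)), 2):
        cos_sim = np.dot(vectors[i], vectors[j]) / (
            np.linalg.norm(vectors[i]) * np.linalg.norm(vectors[j]))
        score += abs(1.0 - cos_sim)          # pairwise contribution to D_k
    return score

def rank_placements(activities, n_keypoints=12, s=2):
    """Rank all subsets of size s out of the candidate keypoints by D_k."""
    combos = itertools.combinations(range(n_keypoints), s)
    scores = {k: dk_score(activities, k) for k in combos}
    # Higher D_k indicates a more promising sensor placement.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```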
The sensor data outputs, consisting of 14 features, contain four-dimensional quaternion values, four-dimensional angular velocity determined from the derivative of the quaternion values, three-dimensional acceleration values, and three-dimensional magnetic field values. The dataset comprises 13 activities and ten subjects, leading to 51 recordings with a total of 1,519,418 data points per feature, which corresponds to 486.8 minutes (∼8 hours) of recording. <ref> shows an excerpt of the activities conducted in the study. <ref> highlights the distribution of the subjects across the activities. We chose to utilize the Xsens DOT sensors for our study due to their accuracy, reliability, and suitability for the healthcare scenario we focused on. These sensors provide high-quality data, which is essential for accurate activity classification involving the 13 specific activities we examined. Although our experiments and comparisons were conducted using Xsens DOT sensors, our findings and insights can be applied to other sensor types and devices. The methodology and techniques we employed for data collection, classification, and privacy preservation are generally applicable to a wide range of HAR scenarios, regardless of the specific sensors used. §.§ Results We trained a CNN-LSTM deep learning model and performed hyperparameter optimization via grid search <cit.> for the window length and learning rate. This resulted in a learning rate of 1e-4 and an input size of 600 × 70. The input size corresponds to a window length of 600 (equal to 10 s at 60 Hz) and 14 features from each sensor (14· 5=70). The model contains a preprocessing step for filling missing values and a batch-normalization layer to standardize the inputs in each feature row. The output of the network is a dense softmax layer with the number of activity classes. We used the Adam <cit.> optimizer. The categorical cross-entropy loss function was used for the multi-class classification problem: L = -log ( e^s_p/∑_j∈ C e^s_j ) where C denotes the set of classes, s the vector of predictions, and s_p the prediction for the target class. The architecture of the CNN-LSTM model is composed of six layers. The input layer is followed by two convolutional layers, two LSTM <cit.> layers, and the output layer. For evaluation, we used three different cross-validation techniques, namely, * k-fold cross-validation on time windows of length 600 with k=5; * leave-recordings-out cross-validation: one recording corresponds to a session that was started and stopped in one go. In our study, the recordings are between 46 s and 1249 s long, and we used an 80:20 train-test ratio. One recording can contain only one specific activity or multiple activities performed multiple times. This validation technique reflects the performance of the model when used in the app; * leave-one-subject-out cross-validation. §.§.§ Optimal Sensor Placement Using the CNN-LSTM model, we trained 31 models for all sensor combinations and ranked them according to the F1 score. In addition, the results for the cross-validated feature metric D_k were calculated from only 3000 data rows (300 s at 10 Hz) per activity and sensor. For comparison with the trained models, we only included the D_k results for the same five body locations. The results for both approaches are shown in <ref>. As can be seen, three of the four comparisons per number of sensors match the best-ranked sensor placement. 
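A minimal Keras sketch of this architecture is given below. The numbers of filters and LSTM units are illustrative assumptions (the description above fixes only the input size, layer types, optimizer, learning rate, and loss), and the missing-value imputation step is omitted.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 13             # nursing activities in the dataset
WINDOW, FEATURES = 600, 70   # 10 s at 60 Hz, 14 features x 5 sensors

model = models.Sequential([
    layers.Input(shape=(WINDOW, FEATURES)),
    layers.BatchNormalization(),        # standardize each feature row
    layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
    layers.LSTM(64, return_sequences=True),
    layers.LSTM(64),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
model.summary()
```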
Only the placement of two sensors differs slightly. Kendall's Tau coefficient τ, a measure of the rank correlation between two variables, was calculated to evaluate the similarity between the rankings obtained from the CNN-LSTM model and from D_k. A value of 1 indicates perfect agreement, while a value of -1 indicates perfect disagreement. The formula for Kendall's Tau coefficient is: τ = 2/(n(n-1)) ∑_i<j sgn(x_i - x_j) sgn(y_i - y_j) where n is the number of paired observations, x and y are the rankings of the two variables being compared, and sgn is the sign function. §.§.§ Multimodal Activity Recognition <ref> displays the results for each modality combination in the nursing dataset. Combining IMU and pose estimation data always performs best, whereas pose estimation data alone always performs worst. § DISCUSSION This paper addresses the development of a method for optimal sensor placement and a multimodal classification approach. The cross-validated feature metric D_k represents a suitable approach for determining optimal sensor locations. The approach recognizes the importance of hand movements well. Similarly, combinations of multiple sensors are ranked correctly. Notably, this is the case even when the additional sensor detects relatively little motion, as for the pelvis. These results are in agreement with those obtained by the trained model. This could be explained by the fact that the sensors act as counterparts, one constituting a root point or reference point. Since different ML models can lead to different results, it is also difficult to conclude whether the minimal difference for the two-sensor placement is related to the model used. The multimodal approach combining IMUs and pose estimation data leads to increased classification accuracy overall. Nonetheless, the performance boost is not significant. There are two likely causes for this. (1) The results could be attributed to the different camera angles during data acquisition. A recording taken from the side lets the keypoints move closer together in 2D, which makes classification harder. The viewing angle thus plays an important role. (2) The lack of pose estimations under certain conditions. Pose estimations are not feasible when the camera does not capture the entire body or large portions of it. Out of the 51 recordings, pose estimation data is missing for ten. §.§ Limitations Our approach comes with some limitations. When forming pose estimations, distortions in the image can occur quickly if there are objects in front of the person or if the focus is shifted. This leads to low confidence values and, thus, gaps in data collection. Consequently, this would corrupt both a multimodal approach and the optimal determination of sensor positions. Furthermore, we use a 2D pose estimation approach that does not capture depth. The missing dimension leads to an inaccurate distance representation of the observed person when the person turns or the recording angle changes. Our sensor placement optimization method is effective, straightforward to implement, and quick in execution, making it a practical choice for many applications. However, it is important to note that our study did not include a comparison with other sensor placement optimization methods. This was due to the lack of readily available implementations of alternative methods. § CONCLUSION AND OUTLOOK The aim of the present research was to design a novel, lightweight optimal sensor placement approach. We make several contributions with our approach. 
First, the pose estimation technique effectively anonymizes the test subjects. Second, we demonstrate that the optimal sensor placement can be determined without actual IMUs; videos of the target activities are sufficient. Lastly, the same recordings enable a multimodal classification approach. Further improvement could be achieved by integrating a 3D pose estimation model, video recordings, and diverse sensor types. Future work will also address the implementation and comparison of other sensor placement optimization methods. § CODE & DATA AVAILABILITY The study was conducted with the subjects' consent and ethical approval from the University of Potsdam, reference number 51/2021. The data from the feasibility study is accessible via Nextcloud <cit.>. The code for the application, including all models used, is shared on GitHub <cit.>.
http://arxiv.org/abs/2307.00614v1
20230702164910
Non-equilibrium dynamics of Jaynes-Cummings dimer in presence of Kerr nonlinearity
[ "G. Vivek", "Debabrata Mondal", "S. Sinha" ]
quant-ph
[ "quant-ph", "cond-mat.stat-mech" ]
Indian Institute of Science Education and Research-Kolkata, Mohanpur, Nadia-741246, India We investigate the non-equilibrium dynamics of a Josephson coupled Jaynes-Cummings dimer in the presence of Kerr nonlinearity, which can be realized in cavity and circuit quantum electrodynamics systems. The semiclassical dynamics is analyzed systematically to chart out a variety of photonic Josephson oscillations and their regime of stability. Different types of self trapped states appear due to various dynamical transitions, resulting in a photon population imbalance between the two cavities. We also study the dynamics quantum mechanically to identify characteristic features of the different steady states and to explore fascinating quantum effects, such as spin dephasing, phase fluctuation and revival phenomena of the photon field, as well as the entanglement of the spin qubits. For a particular `self trapped' state, the mutual information between the atomic qubits exhibits a direct correlation with the photon population imbalance, which is promising for generating photon mediated entanglement between two apparently non interacting qubits in a controlled manner. Under a sudden quench from the stable to the unstable regime, the photon distribution exhibits phase space mixing with a rapid loss of coherence, resembling a thermal state. Finally, we discuss the relevance of these results to experiments, which can have applications in quantum information processing and quantum technologies. Non-equilibrium dynamics of Jaynes-Cummings dimer in presence of Kerr nonlinearity G. Vivek, Debabrata Mondal, and S. Sinha August 1, 2023 § INTRODUCTION The advancement of cavity and circuit quantum electrodynamics (QED) paves the way to study the non-equilibrium and dissipative dynamics <cit.> of quantum systems, apart from their potential application to quantum information processing <cit.>. In addition, photon loss and other natural processes give rise to dissipative effects in these systems, which are key ingredients for the formation of non-equilibrium states and dissipative transitions <cit.>. Within a certain regime, such atom-photon interacting systems can be well described by the Jaynes-Cummings <cit.> or Tavis-Cummings model <cit.>, depending on the number of atoms in the cavity. Recent experiments have demonstrated that coupling atomic condensates to a cavity mode can give rise to fascinating phenomena such as the formation of a supersolid phase <cit.> and non-equilibrium transitions <cit.>. Moreover, coupling cavities in an array opens up the possibility to explore many body physics with such light-matter interacting systems <cit.>, similar to the Hubbard model. A variety of such models can exhibit quantum phase transitions, which have been explored theoretically <cit.>. The simplest configuration of such a many body system is the dimer of two coupled cavities forming a Jaynes-Cummings Josephson junction (JCJJ), which has been realized in a circuit QED setup <cit.>. Such systems can serve as a test bed to study various non-equilibrium phenomena as well as dissipative dynamics <cit.>. In the present work, we investigate the non-equilibrium dynamics, and the various quantum phenomena arising from it, in an atom-photon interacting system described by the JCJJ in the presence of Kerr nonlinearity <cit.>. 
Insight into the overall dynamical behavior can be gained from a semiclassical analysis, which is also useful for finding a variety of photonic Josephson oscillations in the JCJJ and the dynamical transitions between them. Interestingly, this system exhibits a self-trapping phenomenon, in which photons are dynamically localized in one of the cavities <cit.>. Apart from this, other self trapped states also appear as a consequence of the Kerr nonlinearity, which we analyze in detail, particularly the dynamical origin of the different types of self-trapping phenomena and their regime of stability. On the other hand, in quantum dynamics, atoms and photons become entangled, which gives rise to interesting quantum effects leading to deviations from classical behavior. Additionally, loss of coherence in the photon field can occur as a result of phase fluctuations during the time evolution. It is a pertinent issue to study the entanglement dynamics and the change in the state of the photons due to the combined effect of interaction and entanglement for the different dynamical states, as well as for a rapid quench to a dynamically unstable regime. We also demonstrate how the self-trapping phenomena can be employed to control the photon mediated correlation between the atomic qubits, which are otherwise non interacting. The possibility of such dynamical manipulation of entanglement between the qubits in the Jaynes-Cummings dimer model can have potential applications in quantum information processing. The paper is organized as follows. In Sec.<ref>, we describe the JCJJ model and analyze it semiclassically in Sec.<ref> to obtain different branches of Josephson dynamics, their stability, as well as transitions between them. Quantum dynamics and its comparison with the semiclassical steady states are presented in Sec.<ref>. Sec.<ref> contains a detailed discussion of the quantum nature of the photon field, particularly phase diffusion and revival phenomena. In this section, we also investigate the entanglement properties of the spin 1/2 atomic qubits corresponding to the different steady states, as well as the signature of phase space mixing of photons in the quench dynamics. Finally, we summarize the results and conclude in Sec.<ref>. § THE MODEL The Jaynes-Cummings Josephson junction formed by coupling two cavities <cit.> can be described by the Hamiltonian, ℋ̂=∑_i[ℋ̂_ JC^(i)+U/2n̂_i(n̂_i-1)]- J(â^†_Lâ_R+h.c.)-μM̂ where the site index i=L,R indicates the left and right cavity, which are coupled by the Josephson coupling J. Each cavity can be modeled by the Jaynes-Cummings Hamiltonian, ℋ̂_ JC^(i) = ωn̂_i+ω_0σ̂^+_iσ̂_i^-+g(â_iσ̂_i^++â^†_iσ̂_i^-), describing the interaction between an atom and a single-mode cavity field with frequency ω, represented by the annihilation (creation) operators â_i (â^†_i), with n̂_i=â^†_iâ_i the photon number operator. The two level atom with energy gap ω_0 in each cavity is described by the Pauli spin operators σ̂_i. The last term of ℋ̂_ JC^(i) describes the atom-photon interaction with strength g. In addition, we consider the effect of Kerr nonlinearity in each cavity, represented by the second term in Eq.(<ref>), giving rise to a repulsive interaction of the photon field with strength U. The JCJJ described by the Hamiltonian in Eq.(<ref>) preserves the U(1) symmetry similar to the Jaynes-Cummings model <cit.>, leading to the conserved total excitation number, M̂=∑_i=L,R(n̂_i+σ̂^+_iσ̂_i^-). In the grand canonical ensemble, μ in Eq.(<ref>) represents the chemical potential corresponding to the number of excitations. 
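For numerical work, the dimer Hamiltonian can be built in a truncated photon Fock space. The following QuTiP sketch is only an illustration; the cutoff and parameter values are placeholders rather than values used in the results discussed here, and the -μM̂ term is omitted since it only shifts energies within a fixed-excitation sector.

```python
import qutip as qt

N = 20                                      # photon Fock-space cutoff per cavity (placeholder)
w, w0, g, U, J = 1.0, 1.0, 0.3, 0.1, 1.0    # illustrative parameters in units of J

# Operators on the composite space, ordered (cavity L, qubit L, cavity R, qubit R)
aL = qt.tensor(qt.destroy(N), qt.qeye(2), qt.qeye(N), qt.qeye(2))
aR = qt.tensor(qt.qeye(N), qt.qeye(2), qt.destroy(N), qt.qeye(2))
smL = qt.tensor(qt.qeye(N), qt.sigmam(), qt.qeye(N), qt.qeye(2))
smR = qt.tensor(qt.qeye(N), qt.qeye(2), qt.qeye(N), qt.sigmam())

def h_jc(a, sm):
    # Single-cavity Jaynes-Cummings term plus Kerr nonlinearity
    n = a.dag() * a
    return (w * n + w0 * sm.dag() * sm
            + g * (a * sm.dag() + a.dag() * sm)
            + 0.5 * U * n * (n - 1))

H = h_jc(aL, smL) + h_jc(aR, smR) - J * (aL.dag() * aR + aR.dag() * aL)

# Total excitation number; its commutator with H should vanish (U(1) symmetry)
M = aL.dag()*aL + aR.dag()*aR + smL.dag()*smL + smR.dag()*smR
print(qt.commutator(H, M).norm())
```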
Such a JCJJ has been realized in a circuit QED setup <cit.>, where the strength of the interactions and the photon hopping amplitude can be tuned. Next, we discuss the different Josephson oscillations of the JCJJ, described by the Hamiltonian in Eq.(<ref>), within the semiclassical method and compare them with the quantum mechanical dynamics. In the rest of the paper, we set ħ=k_B=1 and scale the energy (time) by J (1/J). § SEMICLASSICAL ANALYSIS In this section, we study the dynamics of the JCJJ governed by the Hamiltonian given in Eq.(<ref>), using the time dependent variational method <cit.>. The photons and two level atoms in the cavities can be described semiclassically by their respective coherent states <cit.>, which we use to construct the following time dependent variational wavefunction, |ψ_c(t)⟩ = ∏_i=L,R|α_i(t)⟩⊗|θ_i(t),ϕ_i(t)⟩. The coherent state of the cavity mode is represented by, |α_i⟩ = exp(α_i â_i^†-α_i^*â_i)|0⟩ where α_i is the eigenvalue of â_i, representing the photon field classically. The wavefunction of the two level atoms can be described by, |θ_i,ϕ_i⟩ = cos(θ_i/2)|↑⟩+sin(θ_i/2)e^iϕ_i|↓⟩ where |↓⟩ (|↑⟩) represents the ground (excited) state and the canonically conjugate variables ϕ_i, z_i = cosθ_i describe the orientation of such a spin 1/2 system on the Bloch sphere, for which ⟨Ŝ_i⟩ = S(sinθ_icosϕ_i,sinθ_isinϕ_i,cosθ_i) with S=1/2. The coherent state representation of the photon field is appropriate for a large number of photons in each cavity, corresponding to a large conserved total excitation number, which can be written semiclassically as M = ∑_i|α_i|^2+(1+z_i)/2. It is evident from the conservation equation that the amplitude of the classical field α_i scales with √(M). Therefore, for a large number of conserved excitations, we define α_i/√(M) = √(n_i)exp(iψ_i)=(x_i+i p_i)/√(2), where n_i∈ [0,1] is the scaled photon number, ψ_i represents its phase, and x_i,p_i are the corresponding conjugate variables. In terms of the dynamical variables 𝐱={n_i,ψ_i,z_i,ϕ_i}, the Lagrangian scaled by the total excitation number M can be written as, ℒ = 1/M⟨ψ_c|i∂/∂ t-ℋ̂|ψ_c⟩ = ∑_i=L,R[-ψ̇_in_i+η/2ϕ̇_iz_i-(ω-μ) n_i-η/2(ω_0-μ)z_i -Ũ/2n_i^2 -g̃√(n_i)√(1-z_i^2)cos(ϕ_i+ψ_i)] +2√(n_Ln_R)cos(ψ_L-ψ_R), where η=2S/M and the interaction strengths are scaled as g̃ = g/√(M), Ũ=UM. Note that, in general, η=2S/M for a large spin with magnitude S, and it is small in the present case of the Jaynes-Cummings model with S=1/2 and M≫1. From the Euler-Lagrange equations d/dt(∂ℒ/∂ẋ)-∂ℒ/∂x=0 for the dynamical variables x = {n_i,ψ_i,z_i,ϕ_i}, we obtain the following equations of motion (EOM), ṅ_i = -g̃√(n_i)√(1-z_i^2)sin(ϕ_i+ψ_i) +2√(n_in_i̅)sin(ψ_i-ψ_i̅) ψ̇_i = -(ω-μ)-g̃/(2√(n_i))√(1-z_i^2)cos(ϕ_i+ψ_i) +√(n_i̅/n_i)cos(ψ_i-ψ_i̅)-Ũn_i ηϕ̇_i = η(ω_0-μ)-2g̃z_i/√(1-z_i^2)√(n_i)cos(ϕ_i+ψ_i) ηż_i = 2g̃√(n_i)√(1-z_i^2)sin(ϕ_i+ψ_i) where i̅≠ i denotes the other cavity. Conservation of the total excitation number yields the constraint, n_L+n_R+η/2(z_L+z_R+2)=1. We solve Eq.(<ref>) within the grand canonical ensemble, where μ is fixed by Eq.(<ref>). However, in the limit g̃→ 0, both the photon number and the atomic inversion become conserved individually and therefore our formalism cannot be continued to this limit. Hence, we exclude the regime of small g̃ from our discussion. First, we investigate the steady states corresponding to the fixed points (FP) 𝐱^*= (n_i^*,ψ_i^*,z_i^*,ϕ_i^*) of the EOM given in Eq.(<ref>), for which 𝐱̇=0. 
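The EOM can be integrated with a standard ODE solver. In the sketch below, the parameter values, the chemical potential, and the initial condition are placeholders (in practice μ would be fixed by the constraint of Eq.(<ref>)), and the dominant oscillation frequency is read off from a Fourier transform of n_L(t).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (energies in units of J)
w, w0, gt, Ut, mu, eta = 1.0, 1.0, 1.5, 0.5, 1.0, 0.01

def eom(t, y):
    n, psi, z, phi = y[0:2], y[2:4], y[4:6], y[6:8]
    dn, dpsi, dz, dphi = np.zeros(2), np.zeros(2), np.zeros(2), np.zeros(2)
    for i in range(2):
        j = 1 - i                                    # the other cavity
        s = np.sqrt(max(n[i], 1e-12))
        c = np.sqrt(max(1.0 - z[i]**2, 0.0))
        dn[i] = (-gt * s * c * np.sin(phi[i] + psi[i])
                 + 2.0 * np.sqrt(n[i] * n[j]) * np.sin(psi[i] - psi[j]))
        dpsi[i] = (-(w - mu) - gt * c * np.cos(phi[i] + psi[i]) / (2.0 * s)
                   + np.sqrt(n[j] / max(n[i], 1e-12)) * np.cos(psi[i] - psi[j])
                   - Ut * n[i])
        dphi[i] = (w0 - mu) - 2.0 * gt * z[i] * s * np.cos(phi[i] + psi[i]) / (eta * max(c, 1e-12))
        dz[i] = 2.0 * gt * s * c * np.sin(phi[i] + psi[i]) / eta
    return np.concatenate([dn, dpsi, dz, dphi])

# Initial condition close to the symmetric pi mode (placeholder values)
y0 = [0.5, 0.49, 0.0, np.pi, -0.99, -0.99, 0.0, -np.pi]
sol = solve_ivp(eom, (0.0, 200.0), y0, max_step=0.01, dense_output=True)

# Oscillation frequencies from the Fourier transform of n_L(t)
t = np.linspace(0.0, 200.0, 2**14)
nL = sol.sol(t)[0]
spectrum = np.abs(np.fft.rfft(nL - nL.mean()))
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0]) * 2.0 * np.pi
print("dominant angular frequency:", freqs[np.argmax(spectrum)])
```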
Next, we perform the linear stability analysis around the steady states, describing the evolution of a small initial fluctuation δ𝐱(0) in the form δ𝐱(t) = δ𝐱(0)e^i ω̃ t, and determine the frequency ω̃. The stability of the FPs is ensured if Im(ω̃) = 0, and ω̃ yields the small amplitude oscillation frequency around the corresponding steady states. In the JCJJ, the stable steady states describe the different types of photonic Josephson oscillations, with frequencies that can be obtained from the linear stability analysis mentioned above. Next, we find the possible steady states from Eq.(<ref>) and analyze their stability. §.§ STEADY STATE ANALYSIS In this subsection, we systematically investigate the various steady states obtained from the EOM in Eq.(<ref>) and analyze their stability as outlined above. As evident from Eq.(<ref>)(a,d), the steady states satisfy the conditions, sin(ϕ^*_i+ψ^*_i) = 0 sin(ψ^*_L-ψ^*_R) = 0, which correspond to the phase relations ϕ^*_i+ψ^*_i=0,π and ψ^*_L-ψ^*_R=0,π, which are used for classifying the steady states. The relative phase of the photon fields ψ^*_L-ψ^*_R=0 (π) equivalently describes a ferromagnetic (anti-ferromagnetic) spin configuration of the cavities in the x-y plane, corresponding to ϕ_L^*-ϕ_R^*=0 (π). We categorize the steady states in these two classes, which are represented schematically in Fig.<ref>(a,b). Note that the transformation ϕ^*_i→ϕ^*_i+δ and ψ^*_i→ψ^*_i-δ leaves the steady state equations Eq.(<ref>) invariant as a consequence of the U(1) symmetry. As a result, the continuous set of FPs lies on circles in the x_i-p_i and S_ix-S_iy planes with radii √(2n^*_i) and √(1-z_i^*2)/2, respectively (see Fig.<ref>(a,b)). For a particular class of spin configurations and a given value of η, the steady states can be obtained in terms of {n_i^*,z_i^*} by solving Eq.(<ref>)(b,c), subject to the constraint in Eq.(<ref>), in order to conserve the total excitation. The steady states thus obtained can be categorized in terms of the relative photon population f = n^*_R/n^*_L, which we denote as symmetric (f=1) and self trapped (f≠1), corresponding to equal and unequal photon populations in the cavities. Note that once the photon population n_i^* is determined, it also fixes the atomic inversion z_i^*, z^*_i = ξ_2η(ω_0-μ)/√(η^2(ω_0-μ)^2+4g̃^2n^*_i). Next, we analyze the steady state equations graphically, which provides a physical picture of the qualitative behavior of the steady states as well as the dynamical transitions between them. For small values of η, from Eq.(<ref>), the total photon number can be written approximately as n_L^*+n_R^*=1-η, which yields, n_L^*=(1-η)/(1+f), n_R^*=f(1-η)/(1+f). Using these relations, the steady state equations Eq.(<ref>)(b,c) can be reduced to a single effective equation in terms of the relative photon population f, 𝒴(f) = ξ_1(f-1)-Ũ(1-η)√(f)(1-f)/(f+1) - ξ_2g̃^2√(f)(1/√(ℱ_L(f))-1/√(ℱ_R(f)))=0, where ℱ_i(f) = η^2(ω_0-μ)^2+4g̃^2n_i^*(f), the discrete variable ξ_1 = cos(ψ_L^*-ψ_R^*) =± 1 describes the relative spin orientation of the two qubits, and ξ_2 = cos(ϕ^*_i+ψ_i^*) = ± 1. For small η, the chemical potential μ can be written as, μ = ω-(ξ_1/2)(√(f)+1/√(f))+Ũ/2+ξ_2(g̃/4)√(1+f)(1+1/√(f)). Note that, as a consequence of the exchange symmetry between the cavities, Eq.(<ref>) remains invariant under the transformation f→1/f; hence we only consider the steady state solutions for f∈ [0,1]. The roots of Eq.(<ref>) yield the possible steady states for a given combination of ξ_1,ξ_2, which we discuss below. 
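Numerically, the roots of 𝒴(f) on f∈(0,1] can be bracketed on a grid and refined with a standard root finder. In the sketch below, ω_0-μ is treated as a fixed input for simplicity, whereas in the analysis above μ follows from Eq.(<ref>); all parameter values are placeholders.

```python
import numpy as np
from scipy.optimize import brentq

def curlyY(f, xi1, xi2, gt, Ut, eta, w0_minus_mu):
    """The reduced steady-state function Y(f) in its small-eta form."""
    nL = (1.0 - eta) / (1.0 + f)
    nR = f * (1.0 - eta) / (1.0 + f)
    FL = eta**2 * w0_minus_mu**2 + 4.0 * gt**2 * nL
    FR = eta**2 * w0_minus_mu**2 + 4.0 * gt**2 * nR
    return (xi1 * (f - 1.0)
            - Ut * (1.0 - eta) * np.sqrt(f) * (1.0 - f) / (f + 1.0)
            - xi2 * gt**2 * np.sqrt(f) * (1.0 / np.sqrt(FL) - 1.0 / np.sqrt(FR)))

def find_roots(xi1, xi2, gt, Ut, eta, w0_minus_mu):
    """Scan f in (0, 1] for sign changes of Y and refine each bracketed root."""
    fs = np.linspace(1e-4, 1.0, 4000)
    ys = curlyY(fs, xi1, xi2, gt, Ut, eta, w0_minus_mu)
    roots = []
    for a, b, ya, yb in zip(fs[:-1], fs[1:], ys[:-1], ys[1:]):
        if ya * yb < 0:
            roots.append(brentq(curlyY, a, b,
                                args=(xi1, xi2, gt, Ut, eta, w0_minus_mu)))
    return roots

# Example: anti-ferromagnetic class with xi2 = +1 (FP-pi / ST_1 branch)
print(find_roots(xi1=-1, xi2=1, gt=1.0, Ut=3.0, eta=0.01, w0_minus_mu=0.5))
```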
§.§.§ Ferromagnetic class (ψ_L^*-ψ_R^*=0) For the ferromagnetic orientation of the qubits, ξ_1=+1 and the other variable can take two values ξ_2 = ± 1. For ξ_2=-1, the equation 𝒴(f) has only one root for f=1, describing a symmetric steady state corresponding to the ground state configuration. The other case, ξ_2=1 is more interesting, since it gives rise to various non trivial steady states, as shown in Fig.<ref>(a,b). Similar to the previous case, f=1 is always a solution of equation 𝒴(f) describing a symmetric state with higher energy density (scaled by the total number of excitation), which is denoted by FP-F. Interestingly, two new solutions appear above a critical coupling strength, g̃_c1(Ũ)= 2+3(1+Ũ/2)^4/3η^2/3-η/2, giving rise to two self trapped states, one of which is unstable, as seen from Fig.<ref>(b). The FP with vanishingly small relative photon population f≈ 0 corresponds to a stable perfect self trapped (PST) state <cit.>, describing a situation, where, almost all the photons are localized in one of the cavities. As illustrated in Fig.<ref>(a,b), such self trapped states arise as a result of a saddle node bifurcation occurring at g̃_c1(Ũ), for which non vanishing small parameter η plays a crucial role. The unstable self trapped state ST_ u (with larger value of f) undergoes a subcritical pitchfork bifurcation with FP-F at the critical point g̃_c2(Ũ), which can be written approximately as, g̃_c2(Ũ) = (√(8)+√(2)Ũ)-(3/√(2)Ũ+√(2))η, after which the symmetric state FP-F becomes unstable, as shown in Fig.<ref>(b). Next, we consider the steady states corresponding to the anti-ferromagnetic spin configuration. §.§.§ Anti-ferromagnetic class (ψ_L^*-ψ_R^*=π) For the anti-ferromagnetic class with ξ_1=-1, the steady states and their dynamical transitions are very intriguing, where, the Kerr nonlinearity Ũ plays a crucial role. The other variable can take two values ξ_2=± 1, and we discuss the corresponding steady states one by one. ξ_2=1: In this case, there exists a symmetric steady state denoted by FP-π, which undergoes a pitchfork bifurcation at a critical Kerr nonlinearity, as evident from Fig.<ref>(c,d). After the bifurcation FP-π becomes unstable, giving rise to stable self trapped state ST_1. This phenomenon also occurs in the Bose-Josephson junction, in absence of coupling to the spin (g̃=0) <cit.>, which has been detected experimentally <cit.>. However, in the present case, the critical Kerr nonlinearity also depends on the coupling strength g̃, which is given by, Ũ_c1(g̃) = 2+g̃/√(2)+(2+3g̃/2√(2))η, for small η. Unlike the perfect self-trapping, the relative photon population imbalance between the two cavities, Z_p = n_L-n_R/n_L+n_R of the self trapped state ST_1 increases continuously after the bifurcation and approaches to unity with increasing Kerr nonlinearity Ũ, as shown in Fig.<ref>(d). In contrast, the relative population imbalance Z_p decreases with increasing atom-photon coupling strength g̃ (see inset of Fig.<ref>(d)), which serves as a characteristic feature of this ST_1 state for its identification. As seen from Eq.(<ref>), for self trapped state, the photon population imbalance Z_p leads to the atomic population imbalance, Z_a = |z_R-z_L|/2, exhibiting similar behavior with coupling strength. ξ_2=-1: A similar type of phenomenon can also be observed for ξ_2=-1. In this case, there exists another symmetric state FP-AF, which is energetically different from FP-π, but with the same anti-ferromagnetic spin orientation. 
The symmetric state FP-AF undergoes a pitchfork bifurcation at a critical strength of Kerr interaction, Ũ_c2(g̃) = 2-g̃/√(2)+(2-3g̃/2√(2))η, which occurs only for g̃≲ 2. Above this critical coupling, the FP-AF state becomes unstable, giving rise to new self trapped state denoted by ST_2, as shown in Fig.<ref>(e,f). Unlike ST_1, this self trapped state becomes unstable at a critical Kerr nonlinearity Ũ_ I(g̃) (see Fig.<ref>(b)), due to which ST_2 can exist as a stable state, only in the range Ũ_c2(g̃)≤Ũ<Ũ_ I(g̃) and for g̃≲ 2. The relative photon population imbalance Z_p for ST_2 increases and approaches unity with increasing both the interaction strengths Ũ,g̃ (as shown in Fig.<ref>(f)), which is strikingly different from the behavior of ST_1, where, Z_p diminishes with g̃. Such qualitatively different features can be employed to distinguish two self trapped states ST_1 and ST_2 in quantum dynamics, which we will discuss in the next section. Note that, in addition to ST_2, other self trapped states can also appear exhibiting complicated scenarios, which we prefer to leave out from the present discussion as they are less relevant due to their existence within a small range of parameters. Moreover, the signature of these states have not been found in quantum dynamics. In the limit g̃→ 0, the steady states corresponding to ξ_2=± 1 become almost identical (see Eq.(<ref>)), with small difference of order η in the physical quantities. In this regime, both the self trapped states ST_1 and ST_2 become practically identical. However, we exclude the small g̃ regime from our discussion, as the formalism can not be extrapolated to g̃ =0, for which both the atomic excitation and photon number are conserved separately. The plethora of steady states obtained from the above analysis is summarized in the phase diagrams, depicted in Fig.<ref>(a,b), separately for ferromagnetic and anti-ferromagnetic classes, where the region of their stability is shown. Here the phase diagrams are obtained by solving the steady state equations exactly, for a fixed value of η (equivalently, fixed number of excitation M). The numerically obtained phase boundaries of the steady states PST, FP-F, (shown in Fig.<ref>(a)) and the transition lines between FP-π to ST_1 and FP-AF to ST_2, (depicted in Fig.<ref>(b)) are in good agreement with the analytical results given in Eq.(<ref>,<ref>,<ref>,<ref>) for small values of η. The appropriate parameter regimes can be identified from the phase diagrams for observation of different dynamical behavior and transitions. §.§ CLASSICAL DYNAMICS To this end, we investigate the classical dynamics corresponding to the different steady states illustrated in the phase diagram of Fig.<ref>, which provides useful information about various photonic Josephson oscillations and dynamical transitions between them. The time evolution is performed by solving the EOM given in Eq.(<ref>) numerically, for an appropriately chosen initial condition. In general, if the initial condition is chosen close to a stable fixed point, the photon number and other physical quantities oscillate around the steady state, with oscillation frequencies obtained from the linear stability analysis. We illustrate the oscillation around the symmetric state FP-π by computing the deviation of photon number δ n_i(t)=n_i(t)-n_i^* and atomic inversion δ z_i(t)=z_i(t)-z_i^* from the corresponding steady state values, which exhibits small amplitude oscillation around zero, as shown in Fig.<ref>(a,b). 
Numerically, the Fourier transform of the time evolution of photon population and atomic inversion yields the relevant frequencies present in the dynamics. As observed from Fig.<ref>(c,d), the lowest frequency ω̃_0 obtained from the linear stability analysis of the steady state FP-π corresponds to the highest amplitude of the Fourier transform, indicating its dominant role in both the photon and spin (atom) dynamics. However, as evident from Fig.<ref>(d), the higher frequency modes also contribute in the spin degree with small amplitude, resulting in a fast dynamics, as shown in Fig.<ref>(b). It is very fascinating to study the dynamics across the bifurcation of the steady states, particularly, the emergence of the self trapped states. Here, we focus on the classical dynamics across the pitchfork bifurcation of the symmetric state FP-π to the self trapped state ST_1, which occurs by tuning the Kerr nonlinearity Ũ. Before the bifurcation, since the stable FP-π is a symmetric state, we study the dynamics of the photon field in x-p plane for one of the cavities, as shown in Fig.<ref>(a). As mentioned before, due to the U(1) symmetry, the continuous FPs lie on a circle in the x-p plane of the photon field (black line in Fig.<ref>(a)). Ideally, the small amplitude dynamics is expected to be confined around one of the FPs, which occurs only in the absence of fluctuation in ϕ+ψ. However, for an arbitrary initial condition around one of the FPs, the trajectory surrounds all the fixed points on the ring, as depicted in Fig.<ref>(a). As the main characteristic feature of the FP-π mode, the relative phase of photons ψ_r = ψ_L-ψ_R oscillates around the value π, which is shown in Fig.<ref>(c). Above the critical coupling Ũ_c1, FP-π becomes unstable and depending on the initial condition, the dynamics is attracted towards one of the stable self trapped states. As seen from Fig.<ref>(b), the trajectory is repelled from the FP-π state and attracted towards the ring of FPs corresponding to the ST_1 state. Consequently, the photon imbalance Z_p oscillates around a finite value corresponding to the steady state (see the Fig.<ref>(d)). The signature of this dynamical transition can be observed from the oscillations frequencies of FP-π and ST_1 state, both of which vanish at the critical coupling strength Ũ_c1, as evident from Fig.<ref>(e). Similar phenomenon also occurs for the bifurcation of FP-AF to ST_2 state. Next, we focus on the dynamics of the self trapped states ST_1 and ST_2. As a distinguishing feature between them, the relative photon population imbalance Z_p decreases with increasing atom-photon coupling g̃ for ST_1 (see Fig.<ref>(a)), whereas, it increases for ST_2, as shown in Fig.<ref>(b). Since the atomic inversion is directly related to the photon population in each cavities, as given in Eq.(<ref>), the relative photon population imbalance Z_p can also induce an atomic inversion imbalance Z_a for the self trapped states. The variation of Z_a with g̃ can as well distinguish two self trapped states ST_1 and ST_2, exhibiting opposite behavior, as illustrated in Fig.<ref>(c,d). However, its variation is small for the ST_2 state as compared to that of ST_1. So far we have analyzed the classical dynamics based on a simplified description, neglecting the atom-photon correlation. Hence, it is important to investigate the signature of such dynamical state in quantum dynamics and the effect of atom-photon entanglement, which we consider in the next sections. 
§ QUANTUM DYNAMICS In this section, we study the full quantum dynamics of the JCJJ and compare it with the classical dynamics, in order to investigate the effect of Kerr nonlinearity as well as the atom-photon correlation. We evolve the initial state |Ψ(0)⟩, with a fixed number of excitations M, within the Schrödinger prescription, which is performed numerically by truncating the basis up to a sufficiently large number N_ max. In order to compare with the classical dynamics, we choose the initial state as the product of coherent states of photons and spins, described in Eq.(<ref>,<ref>) respectively, which represents a classical phase space point. To investigate the signature of the different branches of dynamical states, we time evolve the appropriately chosen initial state and obtain the dynamics of different physical quantities such as the photon and atomic populations in the two cavities as well as their imbalances, which characterize those states. The parameters are chosen from the stability region of the corresponding states in the phase diagram, given in Fig.<ref>. First, we study the dynamics of the symmetric states FP-F and FP-AF, corresponding to the ferromagnetic and anti-ferromagnetic classes, respectively. To characterize these states quantum mechanically, we obtain the photon population imbalance Z_p=(⟨n̂_L⟩-⟨n̂_R⟩)/(⟨n̂_L⟩+⟨n̂_R⟩), where ⟨n̂_i⟩ is computed from the time evolved state |Ψ(t)⟩, starting from the initial coherent state. For both the FP-F and FP-AF states, we obtain the time evolution of Z_p and compare it with that obtained from the classical dynamics, as shown in Fig.<ref>(a,b). We observe from Fig.<ref>(a,b) that the simple classical analysis captures the full quantum dynamics reasonably well; however, a certain deviation develops as t increases. To reveal the relative spin orientation in the two cavities, we introduce the quantity, C_LR = ⟨Ŝ_LxŜ_Rx+Ŝ_LyŜ_Ry⟩/√((1/4-⟨Ŝ_Lz⟩^2)(1/4-⟨Ŝ_Rz⟩^2)), which in the classical limit takes the value -1(+1) corresponding to the (anti)ferromagnetic class of steady states. As shown in Fig.<ref>(c,d), the quantum dynamics of C_LR also approaches these values for FP-F and FP-AF, which is consistent with their classification based on the classical analysis. On the other hand, in quantum dynamics the correlation (entanglement) between spins and photons gives rise to interesting effects leading to the deviation from classicality. In the spin dynamics of the FP-F state, the average values of the spin components in the x-y plane evolve around a circle corresponding to the classical FPs. For the FP-AF state, in contrast, the spin trajectory deviates from the ring of classical FPs and spirals to the center corresponding to ⟨Ŝ_x⟩ = ⟨Ŝ_y⟩ =0, exhibiting spin dephasing phenomena <cit.>, as seen from Fig.<ref>(f). Typically for spin 1/2 qubits, the classical description fails due to the enhanced quantum fluctuations and entanglement with photons, which we analyze later. Next, we investigate the different types of self-trapping phenomena from quantum dynamics. We search for a perfect self trapped state, where almost all the photons are localized in one of the cavities. It is evident from the classical phase diagram that the atom-photon interaction is crucial for perfect self-trapping of photons. For small Kerr nonlinearity, we identify the perfect self trapped state quantum mechanically, for which the relative photon imbalance Z_p remains close to unity for a sufficiently long time (see red line in Fig.<ref>(a)). 
However, for sufficiently large Kerr nonlinearity, the imbalance becomes significantly lower than unity and decays with time, as shown in Fig.<ref>(a). The rate of exponential decay Γ can be obtained by numerically fitting the time evolution of imbalance. The variation of the decay rate with Kerr nonlinearity exhibits an interesting feature, it grows rapidly above certain Kerr nonlinearity, which is depicted in Fig.<ref>(b). This indicates that sufficiently large Kerr nonlinearity induces an instability in perfect self-trapping. Although, classically, the stable perfect self trapped state exists for g̃>g̃_c1, the sufficiently large Kerr nonlinearity gives rise to instability in such state during quantum dynamics. We also analyze the self trapped states ST_1 and ST_2 of anti-ferromagnetic class, as shown in Fig.<ref>(a,b). As a characteristic feature of these states, we study the dynamics of the photon imbalance Z_p for increasing values of atom-photon coupling strength g̃. As shown in Fig.<ref>(a), for ST_1, the imbalance decreases with g̃, whereas, it increases for ST_2 (see Fig.<ref>(b)), which is consistent with the classical analysis and can be used to distinguish between these two self trapped states. Since both these states belong to the anti-ferromagnetic class during dynamics, the quantity C_LR acquires a negative value for both the states, however, its deviation from classical value is large for ST_2 (depicted in Fig.<ref>(c,d)). We also study the dynamics of atomic imbalance Z_a=|⟨Ŝ_Lz⟩-⟨Ŝ_Rz⟩|, which decreases with increasing coupling strength g̃ for ST_1, as shown in Fig.<ref>(e), that is in agreement with the classical analysis (see Fig.<ref>). For the ST_2 state, the evolution of Z_a always saturates to a very small value, exhibiting a weak variation with g̃ (see Fig.<ref>(f)), which is in stark contrast with ST_1 state. The above analysis reveals that the deviation from classicality is significantly large for the ST_2 state as compared to ST_1. Interestingly, the signature of all the steady states obtained from the simple semiclassical analysis is observed from the quantum dynamics. However, the entanglement between the qubit and photon degree is generated during quantum evolution, which leads to the deviation from classicality and also a change in the quantum state, which we analyze in the next section. § ENTANGLEMENT AND QUANTUM FLUCTUATIONS The semiclassical formalism presented in Sec.<ref> is based on the product coherent state representation, which is appropriate for describing the phase coherent photonic Josephson dynamics. However, the presence of interactions and Kerr nonlinearity can destroy such coherent dynamics due to enhanced phase fluctuations, which in turn gives rise to the deviation from classicality due to a change in the nature of the quantum state. To this end, we study the phase fluctuations of the photon field by constructing the phase states <cit.>, |ψ_m⟩=1/√(N_max+1)∑_n=0^N_maxexp(inψ_m)|n⟩ with ψ_m=ψ_0+2π m/(N_max+1), where m is an integer m∈ [0,N_max] and ψ_m∈ [-π,π]. These phase states are eigenstates of the phase operator ψ̂, as given by, exp(± iψ̂)|ψ_m⟩=exp(± iψ_m)|ψ_m⟩. The phase distribution corresponding to the photon field of one of the cavities (i = L,R) is given by, p(ψ_m^i) = Tr(ρ̂^i_p|ψ_m⟩⟨ψ_m|) with ∑_mp(ψ_m^i)=1, where ρ̂_p^i is the reduced density matrix corresponding to the photon field of the ith cavity, obtained by tracing out the other degrees. 
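Given a reduced photon density matrix (e.g., obtained by a partial trace over the qubits and the other cavity), the phase distribution defined above and its moments can be evaluated directly. The helper below is an illustrative sketch only; it accepts either a QuTiP object or a plain matrix, and the function names are our own.

```python
import numpy as np

def phase_distribution(rho_p, psi0=-np.pi):
    """Phase distribution p(psi_m) built from the phase states |psi_m>.

    rho_p: reduced density matrix of one cavity mode, either a qutip.Qobj
           or a (N_max+1) x (N_max+1) numpy array.
    """
    rho = rho_p.full() if hasattr(rho_p, "full") else np.asarray(rho_p)
    dim = rho.shape[0]                       # dim = N_max + 1
    m = np.arange(dim)
    psi_m = psi0 + 2.0 * np.pi * m / dim
    # |psi_m> = (1/sqrt(dim)) * sum_n exp(i n psi_m) |n>
    states = np.exp(1j * np.outer(psi_m, np.arange(dim))) / np.sqrt(dim)
    p = np.real(np.einsum('mn,nk,mk->m', states.conj(), rho, states))
    return psi_m, p / p.sum()

def normalized_phase_fluctuation(psi_m, p):
    """Phase variance scaled by pi^2/3, which corresponds to a uniform distribution."""
    mean = np.sum(psi_m * p)
    var = np.sum((psi_m - mean) ** 2 * p)
    return var / (np.pi ** 2 / 3.0)
```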
The average value and the fluctuation of the phase of the photon field in each cavity can be computed from the phase distribution as, ⟨ψ̂_i⟩ = ∑_mψ_m p(ψ^i_m) (Δψ̂_i)^2 = ∑_m(ψ_m-⟨ψ̂_i⟩)^2 p(ψ^i_m). Using the above prescription, we compute the mean phase difference between the cavity modes ψ_r=⟨ψ̂_L⟩-⟨ψ̂_R⟩ and its time evolution. The dynamics of the relative phase of photon modes for symmetric FP-F and FP-π states are shown in Fig.<ref>(a) and (b) respectively, which exhibit coherent oscillations around their steady state values 0 and π. To quantify the degree of coherence, we calculate the normalized phase fluctuation of photons (Δψ_i)^2_ N=(Δψ_i)^2/(Δψ_i)^2_ max in one of the cavities (i=L,R), where, the maximum phase fluctuation (Δψ_i)^2_ max=π^2/3 corresponds to a uniform phase distribution <cit.>. For the FP-π state, the phase fluctuation remains small during time evolution, as shown in Fig.<ref>(a), due to which the coherent phase oscillation is retained. On the other hand, an enhancement of phase fluctuation can be observed for the self trapped state ST_1, arising for large Ũ, as seen from Fig.<ref>(b). In general, the phase fluctuation increases with Kerr nonlinearity, which is evident from the above comparison. Such enhanced phase fluctuation during the time evolution is associated with the broadening of the phase distribution, indicating the deviation of the photon field from its classical representation in terms of the coherent state. Spreading of the phase distribution of FP-π and ST_1 states during time evolution is apparent from Fig.<ref>(c,d). Even though the phase fluctuation attains its maximum value almost immediately for ST_1 state, the appearance of dips in the time evolution of (Δψ_L)^2_ N, as observed from Fig.<ref>(b), corresponds to the revival of the phase of the photon field, which we discuss later. Apart from the phase fluctuation, the entanglement between the photon field and spins during the time evolution gives rise to interesting quantum effects and deviation from classicality. Starting from the total density matrix ρ̂ = |Ψ(t)⟩⟨Ψ(t)| computed from the full wavefunction |Ψ(t)⟩, the reduced density matrix of a subsystem (such as the spin/photon field of each cavity) can be obtained by integrating out rest of the degrees of freedom. Following this prescription, we compute the entanglement entropy of the subsystem (corresponding to the cavities) as, 𝒮_i=-∑_lλ_l^ilog(λ_l^i) where, λ_l^i represents the eigenvalue with index l of the reduced density matrix corresponding to the subsystem denoted by i (for example, i=L,R is the cavity index). In a similar manner, we can also compute the reduced density matrix and entanglement entropy 𝒮_LR for the total photon and spin degree separately. Ideally 𝒮_i vanishes for product state, but due to atom-photon interactions, the entanglement entropy increases during time evolution. We obtain the entanglement entropy 𝒮_i of the spin in each cavity, corresponding to the symmetric states FP-F and FP-AF, which are compared in Fig.<ref>(a,b). Unlike the FP-F state, 𝒮_i grows rapidly and saturates to its maximum value k_Bln2 for the FP-AF state, due to which the spin dynamics deviates from classical steady states exhibiting dephasing phenomenon (see Fig.<ref>(f)), as discussed in Sec.<ref>. We also compare the entanglement entropy of spins in both the cavities for self trapped states, which reveals contrasting features between ST_1 and ST_2 state. For ST_2, the 𝒮_i is almost same for both cavities and saturates to their maximum value. 
On the contrary, for ST_1, the entanglement entropy is larger corresponding to the cavity containing more number of photons, as seen from Fig.<ref>(c,d). In addition, we also study the difference between the entanglement entropy of spins in two cavities Δ𝒮 = 𝒮_L-𝒮_R and their variation with coupling strength g̃, as shown in Fig.<ref>(e,f). For ST_1, similar to the photon imbalance Z_p, the saturation value of Δ𝒮 decreases with increasing g̃, which is in stark contrast to ST_2 state, for which Δ𝒮 vanishes, showing no variation with g̃. Such contrasting feature of entanglement dynamics of two qubits can also distinguish the self trapped states ST_1 and ST_2 states. Apart from the interaction induced entanglement between spins and photons in each cavity, two apparently non interacting spins can also be entangled, which is mediated by photons. Such photon induced hidden correlation between two spins can be analyzed from the mutual information <cit.>, ℐ=𝒮_L+𝒮_R-𝒮_LR, which reveals very interesting behavior for the self trapped state. For the ST_2 state, both Δ𝒮 and ℐ are very small, exhibiting almost no variation with interaction strengths, which indicates that the reduced density matrix corresponding to the two spins approaches to the maximally mixed state <cit.>. On the other hand, in case of ST_1, increasing the photon population imbalance leads to an increase in Δ𝒮, while, the mutual information ℐ decreases, as shown in Fig.<ref>. Such tunability of quantum correlation between two non interacting spins in the cavities can have potential applications in quantum information processing. Additionally, for the ST_1 state, the mutual information ℐ exhibits dip and spike like structure during the time evolution, as seen from Fig.<ref>(a). Such dips in the mutual information correspond to the phase revival phenomenon <cit.>, resulting in a sudden drop in phase fluctuation, as seen in Fig.<ref>(b). This revival cycle can be analyzed from the evolution of the semiclassical phase space density of the photon field, described by the Husimi distribution, Q(α)=1/π⟨α|ρ̂_i^p|α⟩. where, ρ̂_i^p represents the reduced density matrix of the photon field in the cavities. Initially, the density is localized around one of the FPs, exhibiting the coherent structure of the photon field. As time evolves, the phase space density spreads over the ring of fixed points, describing the loss of coherence and finally, it is reconstructed at a point in the phase space, when another dip in the (Δψ)^2_ N occurs, exhibiting the revival phenomenon (see Fig.<ref>(c)). Interestingly, in the middle of the cycle, the phase space density splits and is localized around two diagonally opposite phase space points, which resembles the density distribution of a cat state. Such structure of phase space density is associated with the appearance of a spike in the mutual information, as seen from Fig.<ref>(a). The above analysis elucidates fascinating quantum effects and entanglement associated with the evolution of the quantum state corresponding to different dynamical branches, which can also be relevant in the context of quantum information processing. Such quantum effects give rise to the deviation from classicality, however, the qualitative behavior of the system can still be captured from the coherent state description. Apart from the steady state dynamics, it would also be interesting to investigate the evolution of the quantum state, particularly that of the photon field when the system is driven to the unstable regime. 
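These diagnostics are straightforward to evaluate numerically. The sketch below assumes the tensor-product ordering (cavity L, qubit L, cavity R, qubit R) used in the Hamiltonian sketch above and a state |Ψ(t)⟩ taken from, e.g., qutip.sesolve(H, psi0, tlist).states; it is meant only as an illustration.

```python
import numpy as np
import qutip as qt

def spin_entropies_and_mutual_info(psi):
    """Von Neumann entropies of the two qubits and their mutual information.

    psi: qutip.Qobj state of the composite system, ordered as
         (cavity L, qubit L, cavity R, qubit R).
    """
    rho_L = psi.ptrace(1)           # qubit L
    rho_R = psi.ptrace(3)           # qubit R
    rho_LR = psi.ptrace([1, 3])     # both qubits
    S_L = qt.entropy_vn(rho_L)
    S_R = qt.entropy_vn(rho_R)
    S_LR = qt.entropy_vn(rho_LR)
    return S_L, S_R, S_L + S_R - S_LR    # last entry is the mutual information

def husimi_of_cavity(psi, which=0, xmax=6.0, npts=101):
    """Husimi Q(alpha) of one cavity mode on a square grid in the x-p plane."""
    rho_p = psi.ptrace(which)              # 0 = cavity L, 2 = cavity R
    xvec = np.linspace(-xmax, xmax, npts)
    return qt.qfunc(rho_p, xvec, xvec)     # array of shape (npts, npts)
```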
*Quench dynamics to unstable regime: Next, we investigate the quench dynamics corresponding to an abrupt change in the Kerr nonlinearity Ũ, starting from the initial coherent state corresponding to the stable FP-π mode. First, we consider a small change in Ũ, for which the FP-π state remains stable. Under this sudden change, the system still follows the stable FP-π branch, exhibiting oscillations around it. After quench, the initial coherent state begins to move around the ring of FPs, as shown in Fig.<ref>(e). During the time evolution, the wavefunction initially remains fairly localized and slowly spreads along the ring of FPs. As a consequence, the scaled kinetic energy (KE) and potential energy (PE) (mω⟨x̂_i^2⟩/2ħ M,⟨p̂_i^2⟩/2ħ mω M) of the photon field in each cavity oscillate coherently, keeping the average photon number fixed (see Fig.<ref>(a)). During the evolution, the photon phase fluctuation increases slowly and finally saturates to its maximum value after a sufficiently long time, while its phase space density remains localized around the ring of FPs. On the contrary, when the interaction strength Ũ is quenched above the dynamical transition, where the π mode becomes unstable, the system exhibits incoherent dynamics, dominated by large fluctuations, instead of following any stable branch. After quenching, the photon field loses its coherence rapidly, as the phase fluctuation attains the maximum value, as well the scaled kinetic and potential energies approach the same steady value, without large amplitude oscillations (see Fig.<ref>(b)), analogous to the equipartitioning of energy. Consequently, the Husimi distribution spreads over the phase space, as shown in Fig.<ref>(f), exhibiting large fluctuation in photon number (Δ n_i)^2=⟨n̂_i^2⟩-⟨n̂_i⟩^2≫⟨ n_i⟩, compared to the previous case, for which (Δ n_i)^2 ≈⟨ n_i⟩, similar to the coherent state (see Fig.<ref>(c,d)). In this case, we find that the reduced density matrix of the photon field in each cavity has dominating contribution from the diagonal elements, which gives rise to larger entanglement entropy compared to that of the quench dynamics in the stable regime. After the quench to the unstable regime, the entanglement entropy of the photon field grows rapidly and finally saturates. Moreover, the reduced density matrix of the spins in two cavities approaches the maximally mixed state. Such scenarios of quench dynamics to unstable regime resemble thermalization, which yields incoherent photon gas analogous to the thermal state. Apart from the coherence properties, such non-equilibrium dynamics of the photon fields can also reveal interesting phenomena which can be probed in experiments. § CONCLUSION To summarize, we explore the non-equilibrium dynamics of the Jaynes-Cummings dimer model in the presence of Kerr nonlinearity, focusing on the quantum states of photons as well as entanglement properties corresponding to the different dynamical states. Within the semiclassical approach, we systematically study the dynamics to chart out a variety of steady states and their regime of stability for different atom-photon coupling strengths and Kerr nonlinearity. Moreover, the stability analysis yields the frequency of photonic Josephson oscillation that can be probed in experiments. Different dynamical branches are classified according to the relative spin orientation and photon population imbalance between the cavities. 
Self trapped states with unequal photon populations in the two cavities emerge as a consequence of dynamical transitions. Apart from a perfect self trapped state arising from a saddle node bifurcation, we also identify two different self trapped states for which the Kerr interaction plays an important role. From the quantum dynamics, we also observe the characteristic features of the different steady states obtained semiclassically; however, interactions and atom-photon entanglement give rise to intriguing quantum effects leading to a deviation from classicality. In contrast to the classical motion, dephasing in the spin dynamics is observed as a result of relatively large quantum fluctuations in the spin 1/2 qubits. During the time evolution, the state of the photon field deviates from the initial coherent state and gradually loses its coherence due to phase fluctuations, which are typically enhanced by the Kerr nonlinearity. Apart from the phase fluctuations, we identify a periodic revival phenomenon for a self trapped state, exhibiting fascinating phase space structures of the photon field, particularly the appearance of a bimodal density distribution resembling a photonic cat state. Interestingly, photon mediated entanglement between two atomic qubits, which are otherwise non interacting, makes the JCJJ a promising candidate for quantum information processing. Using mutual information, we demonstrate how the quantum correlation between the atomic qubits in the two cavities can be manipulated by changing the photon population imbalance. Finally, we investigate the quench dynamics from a stable steady state to the unstable regime, which results in the formation of an incoherent gas of photons spread over phase space, resembling a thermal state. The Jaynes-Cummings dimer has already been realized in a circuit QED setup <cit.>, and it can also be engineered by coupling optical cavities <cit.>. The signature of self-trapping phenomena has also been observed experimentally in micro-cavities <cit.>, which is promising for the observation of the different types of photonic Josephson oscillations discussed in this work. The Kerr nonlinearity can be realized in circuit QED <cit.> as well as in optical cavities <cit.>, and is the key ingredient for the observation of various quantum phenomena related to the steady states, such as the revival cycle in the self trapped regime. A rich variety of collective phenomena can also be observed in cavities containing many atoms, which has been implemented in experiments by coupling condensates of ultracold atoms to a cavity mode <cit.>. Since dissipation is inherent in these systems, particularly because of photon loss, they also allow us to investigate interesting effects arising from it, such as dissipative transitions <cit.>. As a result of dissipation, particularly photon loss, the steady states discussed in this work can have a finite lifetime, which can, however, be controlled in an appropriate experimental setup <cit.>. In conclusion, the Josephson coupled Jaynes-Cummings dimer can serve as a test bed to study fascinating non-equilibrium phenomena, as well as the manipulation of entanglement between the two atomic qubits, which can have potential applications in quantum information processing. § ACKNOWLEDGMENT We thank Nirmalya Ghosh, Sudip Sinha and Sayak Ray for fruitful comments and discussions. Angelakis Changsuk Noh and Dimitris G. Angelakis, Rep. Prog. Phys. 80, 016401 (2016). Serge_Haroche J. M. Raimond, M. Brune, and S. Haroche, Rev. Mod. Phys. 73, 565 (2001). 
Steven_Girvin A. Blais, A. L. Grimsmo, S. M. Girvin, and A. Wallraff, Rev. Mod. Phys. 93, 025005 (2021). Hemmarich J. Klinder, H. Keßler, M. Wolke, L. Mathey, and A. Hemmerich, Proc. Natl. Acad. Sci. U.S.A. 112, 3290 (2015). Dissipative_transition1 F. Brennecke, R. Mottl, K. Baumann, R. Landig, T. Donner, and T. Esslinger, Proc. Natl. Acad. Sci. USA 110, 11763 (2013). Dissipative_transition2 M. Fitzpatrick, N. M. Sundaresan, A. C. Y. Li, J. Koch and A. A. Houck, Phys. Rev. X 7, 011016 (2017). Dissipative_transition3 H. J. Carmichael, Phys. Rev. X 5, 031028 (2015). Dissipative_transition4 F. Reiter, T. L. Nguyen, J. P. Home, and S. F. Yelin, Phys. Rev. Lett. 125, 233602 (2020). Dissipative_transition5 K. C. Stitely, A. Giraldo, B. Krauskopf, and S. Parkins, Phys. Rev. Research 2, 033131 (2020). Dissipative_transition6 J. Li, R. Fazio, and S. Chesi, New J. Phys. 24, 083039 (2022). JC E.T. Jaynes, F.W. Cummings, Proc. IEEE. 51 (1): 89–109 (1963). TC M. Tavis and F. W. Cummings, Phys. Rev. 170, 379 (1968). Esslinger_SS_2 J. Léonard, A. Morales, P. Zupancic, T. Esslinger, and T. Donner, Nature (London) 543, 87 (2017). Esslinger_SS_3 J. Léonard, A. Morales, P. Zupancic, T. Donner, and T. Esslinger, Science 358, 1415 (2017). JCH_Plenio M. Hartmann, F. G. S. L. Brandão and M. B. Plenio, Nature Phys 2, 849–855 (2006). Greentree A. Greentree, C. Tahan, J. Cole et al., Nature Phys 2, 856–861 (2006). Plenio_Review1 G. Lepert, M. Trupke, M. J. Hartman, M. B. Plenio, and E. A. Hinds, New J. Phys. 13, 113002 (2011). Hartmann Michael J. Hartmann, J. Opt. 18, 104005 (2016). Carusotto_Review I. Carusotto and C. Ciuti, Rev. Mod. Phys. 85, 299 (2013). Hopping_Houck D. L. Underwood, W. E. Shanks, Jens Koch and A. A. Houck, Phys. Rev. A 86, 023837 (2012). Superfulidity_light P. Leboeuf and S. Moulieras, Phys. Rev. Lett. 105, 163904 (2010). Vortex_Dominici L. Dominici, R. Carretero-González, A. Gianfrate et al., Nat Commun 9, 1467 (2018). Vortex_Carusotto K. Lagoudakis, M. Wouters, M. Richard et al., Nature Phys 4, 706–710 (2008). Plenio_effective_spin_system M. J. Hartmann, F. G. S. L. Brandão, and M. B. Plenio, Phys. Rev. Lett. 99, 160501 (2007). Polariton_Yamamoto T. Byrnes, N. Kim and Y. Yamamoto, Nature Phys 10, 803–813 (2014). Fazio_glassy_phase D. Rossini and R. Fazio, Phys. Rev. Lett. 99, 186401 (2007). Hall_Sugato J. Cho, D. G. Angelakis, and S. Bose, Phys. Rev. Lett. 101, 246809 (2008). Girvin_time_reversal J. Koch, A. A. Houck, K. Le Hur, and S. M. Girvin, Phys. Rev. A 82, 043811 (2010). Blatter S. Schmidt and G. Blatter, Phys. Rev. Lett. 103, 086403 (2009). Le_Hur J. Koch and K. Le Hur, Phys. Rev. A 80, 023811 (2009). M_Knap M. Knap, E. Arrigoni, and W. von der Linden, Phys. Rev. B 82, 045126 (2010). Sibastian_1 L. Guo, S. Greschner, S. Zhu, and W. Zhang, Phys. Rev. A 100, 033614 (2019). Sugato_bose D. G. Angelakis, M. F. Santos, and S. Bose, Phys. Rev. A 76, 031805(R) (2007). Yamamoto_Glass N. Na, S. Utsunomiya, L. Tian, and Y. Yamamoto, Phys. Rev. A 77, 031803 (2008). JC_dimer_expt J. Raftery, D. Sadri, S. Schmidt, H. E. Türeci, and A. A. Houck, Phys. Rev. X 4, 031043 (2014). JCD_Houck S. Schmidt, D. Gerace, A. A. Houck, G. Blatter, and H. E. Türeci, Phys. Rev. B 82, 100507(R) (2010). JCD_Manus_K A. Dey and M. Kulkarni, Phys. Rev. A 101, 043801 (2020). JCD_Sadri H. Shapourian and D. Sadri, Phys. Rev. A 93, 013845 (2016). Dan_Walls D. F. Walls and G. J. Milburn (1995). Quantum optics. Berlin; New York: Springer-Verlag. Kerr_1 S. D. Du and C. D. Gong, Phys. Rev. A 50, 779 (1994). Kerr_2 S. 
Rebić, J.Twamley, and G. J. Milburn, Phys. Rev. Lett. 103, 150503 (2009). Kerr_3 A. Imamoğlu, H. Schmidt, G. Woods, and M. Deutsch, Phys. Rev. Lett. 79, 1467 (1997). Kerr_4 H. Schmidt and A. Imamoğlu, Opt. Lett. 21, 1936 (1996). Kerr_5 H. Rokhsari and K. J. Vahala, Opt. Lett. 30, 427 (2005). Kerr_6 S. Rebic, S. M. Tan, A. S. Parkins, and D. F. Walls, J. Opt. B 1, 490 (1999). Kerr_7 M. Kounalakis, C. Dickel, A. Bruno, N. Langford, and G. Steele, npj Quantum Inf. 4, 38 (2018). dephasing S. Pramanik, S. Bandyopadhyay, and M. Cahay, Phys. Rev. B 68, 075313 (2003). Dirac P.A.M. Dirac, Proc. Cambridge Philos. Soc. 26, 376 (1930); J. Frenkel, Wave Mechanics, Claredon Press, Oxford, 1934. Coherent_state J. M. Radcliffe, J. Phys. A: Gen.Phys., 4, 313 (1971). Shenoy1 A. Smerzi, S. Fantoni, S. Giovanazzi, and S. R. Shenoy, Phys. Rev. Lett. 79, 4950 (1997). Shenoy2 S. Raghavan, A. Smerzi, S. Fantoni, and S. R. Shenoy, Phys. Rev. A 59, 620 (1999). Oberthalar R. Gati and M. K. Oberthaler, J. Phys. B 40, R61 (2007). phaseBarnett D. T. Pegg and S. M. Barnett, Phys. Rev. A 39, 1665 (1989). Nilson_Chuang M. Nielsen and I. Chuang, Quantum Computation and Quantum Information (Cambridge University Press, Cambridge, England, 2000). MI1 D. P. DiVincenzo, M. Horodecki, D. W. Leung, J. A. Smolin, and B. M. Terhal, Phys. Rev. Lett. 92, 067902 (2004). MI2 G. Adesso and A. Datta, Phys. Rev. Lett. 105, 030501 (2010). MI3 X.-M. Lu, J. Ma, Z. Xi, and X. Wang, Phys. Rev. A 83, 012327 (2011). MI4 L. Henderson and V. Vedral, J. Phys. A: Math. Gen. 34, 6899 (2001). MI5 P. Das, D. S. Bhakuni, and A. Sharma, Phys. Rev. A 107, (2023). revival M. Greiner, O. Mandel, T. W. Hänsch, I. Bloch, Nature 419, 51–54 (2002). Self_trapping1 M. Abbarchi, A. Amo, V. G. Sala, D. D. Solnyshkov, H.Flayac, L. Ferrier, I. Sagnes, E. Galopin, A. Lemaître, G. Malpuech, and J. Bloch, Nat. Phys. 9, 275 (2013). Dissipation1 V. Sevriuk, K. Y. Tan, E. Hyyppä, M. Silveri, M. Partanen, M. Jenei, S. Masuda, J. Goetz, V. Vesterinen, L. Grönberg, and M. Möttönen, Appl. Phys. Lett. 115, 082601 (2019). Dissipation2 A. J. Fleisher, D. A. Long, Q. Liu, and J. T. Hodges, Phys. Rev. A 93, 013833 (2016).
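As a computational companion to the conclusions above, the following Python sketch (using QuTiP) illustrates the kind of quantities discussed there: the photon population imbalance between the two cavities and the mutual information between the two atomic qubits. The Hamiltonian is a generic Josephson-coupled Jaynes-Cummings dimer with photon hopping J and an on-site Kerr term U; the parameter values, the photon-number cutoff, and the initial coherent state are illustrative assumptions and are not taken from the paper.

import numpy as np
from qutip import (tensor, destroy, qeye, basis, coherent, sigmam,
                   mesolve, entropy_vn)

N = 10                              # photon-number cutoff per cavity (illustrative)
w, g, J, U = 1.0, 1.0, 0.5, 0.1     # frequencies and couplings, arbitrary units (assumed)

# Hilbert-space ordering: cavity 1, qubit 1, cavity 2, qubit 2
a1 = tensor(destroy(N), qeye(2), qeye(N), qeye(2))
s1 = tensor(qeye(N), sigmam(), qeye(N), qeye(2))
a2 = tensor(qeye(N), qeye(2), destroy(N), qeye(2))
s2 = tensor(qeye(N), qeye(2), qeye(N), sigmam())

def jc_site(a, s):
    # single-cavity Jaynes-Cummings Hamiltonian plus a Kerr term
    return (w * a.dag() * a + w * s.dag() * s
            + g * (a.dag() * s + a * s.dag())
            + 0.5 * U * a.dag() * a.dag() * a * a)

H = jc_site(a1, s1) + jc_site(a2, s2) - J * (a1.dag() * a2 + a2.dag() * a1)

# Initial state: coherent field in cavity 1, empty cavity 2, both qubits in the ground state
psi0 = tensor(coherent(N, 2.0), basis(2, 1), coherent(N, 0.0), basis(2, 1))
times = np.linspace(0.0, 20.0, 201)

# Photon populations (closed dynamics, no photon loss)
res = mesolve(H, psi0, times, c_ops=[], e_ops=[a1.dag() * a1, a2.dag() * a2])
n1, n2 = res.expect
imbalance = (n1 - n2) / (n1 + n2)

# Qubit-qubit mutual information I = S(rho_1) + S(rho_2) - S(rho_12)
states = mesolve(H, psi0, times, c_ops=[]).states
mutual_info = [entropy_vn(s.ptrace([1])) + entropy_vn(s.ptrace([3]))
               - entropy_vn(s.ptrace([1, 3])) for s in states]
print(f"max |imbalance| = {np.max(np.abs(imbalance)):.3f}, "
      f"max mutual information = {max(mutual_info):.3f}")

Adding photon-loss collapse operators to c_ops would allow the dissipative effects mentioned above to be explored within the same sketch.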
http://arxiv.org/abs/2307.01087v1
20230703150810
Buckling of a monolayer of plate-like particles trapped at a fluid-fluid interface
[ "Suriya Prakash", "Hugo Perrin", "Lorenzo Botto" ]
cond-mat.soft
[ "cond-mat.soft", "physics.flu-dyn" ]
APS/123-QED Corresponding author Email address: l.botto@tudelft.nl (Lorenzo Botto) Department of Process & Energy, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Delft, The Netherlands. Particles trapped at a fluid-fluid interface by capillary forces can form a monolayer that jams and buckles when subject to uni-axial compression. Here we investigate experimentally the buckling mechanics of monolayers of millimeter-sized rigid plates trapped at a planar fluid-fluid interface subject to uni-axial compression in a Langmuir trough. We quantified the buckling wavelength and the associated force on the trough barriers as a function of the degree of compression. To explain the observed buckling wavelength and forces in the two-dimensional monolayer, we consider a simplified system composed of a linear chain of plate-like particles. The chain system enables us to build a theoretical model which is then compared to the two-dimensional monolayer data. Both the experiments and analytical model show that the wavelength of buckling of a monolayer of plate-like particles is of the order of the particle size, a different scaling from the one reported for monolayers of spheres. A simple model of buckling surface pressure is also proposed, and an analysis of the effect of the bending rigidity resulting from a small overlap between nanosheet particles is presented. These results can be applied to the modeling of the interfacial rheology and buckling dynamics of interfacial layers of 2D nanomaterials. Buckling of a monolayer of plate-like particles trapped at a fluid-fluid interface Lorenzo Botto August 1, 2023 ================================================================================== § INTRODUCTION The buckling wavelength of monolayers of nearly spherical particles trapped at a fluid interface under compression has been studied with both realistic particles (Lycopodium, Chemigum) <cit.> as well as model particles (glass beads, zirconium oxide beads) <cit.>. In these experiments, the particles were spread at an air-water interface and the particle layer subject to uni-axial compression in a Langmuir trough. Both the buckling wavelength and the force on the barrier, proportional to the surface pressure <cit.>, were measured. A mathematical model that treats the monolayer as a continuous elastic sheet captured the buckling wavelength measured in these experiments. The relation between the mechanical properties of this sheet and the particle size was obtained by assuming an effective Young modulus E ∼γ / d, where γ is the surface tension of the bare fluid/fluid interface and d is the nominal sphere diameter <cit.>. According to this model, and in agreement with the experimental results <cit.>, the buckling wavelength of the monolayer scales as ∼√(ℓ_c d), where ℓ_c = √(γ/(δρ g)) is the capillary length, δρ is the density difference between the two fluids across the interface and g is the acceleration of gravity. Similar compression experiments on buckling of a monolayer of 2D nanomaterial particles of graphene oxide show a buckling wavelength in the range of 4 - 20 particle lengths <cit.>. The theory developed for monolayers of spheres overestimates the wavelength observed for graphene oxide monolayers by at least one order of magnitude. Given the large aspect ratio of graphene oxide sheets, applying models for spheres is questionable. Therefore, a mathematical model describing the mechanics of interfacial monolayers of plate-like particles is necessary. 
Developing one such model starting from data obtained with realistic nanoparticles, which is affected by variables that are difficult to control, such as polydispersity in size <cit.> and the possibility of particle-particle overlap <cit.>, is challenging. With a model experimental system, in which macroscopic particles of controlled shapes are used, one can study the buckling phenomenon and associated interfacial mechanics without the complications of an actual nanoparticle system. In this paper, we study experimentally the uni-axial compression of a monolayer of millimeter-sized plate-like particles trapped at a fluid-fluid interface by capillary forces. We start with observations of a two-dimensional monolayer of hexagonal particles at an air-water interface. We then consider a linear chain of square plates (1D system). We develop a theory to explain the linear chain system which is then applied to the two-dimensional particle monolayer. In our experiments, the particles are not overlapping for most of the monolayer deformation. However, we use the one-dimensional mathematical model to discuss possible implications of small overlaps between the particles in terms of an increased effective bending rigidity of the particle layer. In our experiments, the Bond number based on the weight of the particles is small <cit.>, so the downward distortion of the fluid interface owing to the weight of the particle (minus buoyancy) is relatively unimportant. However, as we will see, when in contact the particles can displace fluid by rotating around an axis parallel to the fluid interface. This results in a gravitational contribution to the interfacial mechanics. In the linear chain case, we are able to investigate the regime in which capillary forces are dominant over gravitational forces by density matching of the upper and lower fluids. The motivation for the current work is to better understand the compression of two-dimensional nanomaterials at fluid-fluid interfaces. Two-dimensional nanomaterials, of which the most discussed are graphene and graphene oxide, can take the form of a colloidal dispersion of nanometrically thin plate-like particles of large aspect ratios <cit.>. Recently, the use of fluid interfaces has emerged as a way to control the assembly of these systems <cit.>. In the Langmuir-Blodgett technique, for example, a monolayer of 2D nanomaterials is adsorbed at a flat fluid-fluid interface, and the monolayer compressed by barriers <cit.>. The monolayer is then transferred to a solid substrate <cit.>. Critical to the performance of the resulting particle coating is predicting the particle coverage in the fluid interface upon uni-axial compression in the trough, and whether the particle monolayer displays a solid-like behavior. If the particles jam at the fluid interface, the particle monolayer can buckle and the signature of this buckling is visible in the profile of surface pressure vs. barrier displacement <cit.>. The analysis of the relation between buckling wavelength and associated force on the barrier discussed in the current paper is relevant for interpreting interfacial rheology measurements with 2D nanomaterials. More broadly, the current investigation is carried out in the context of understanding the mechanics of particle rafts and armored bubbles and droplets, a research field that has received increasing attention recently from the soft matter physics, colloidal science and fluid mechanics communities <cit.>. 
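As a quick numerical companion to the scalings quoted in this introduction, the short Python sketch below evaluates the capillary length and the sphere-monolayer prediction λ ~ √(ℓ_c d). The interface and particle sizes used here (an air/water interface, sub-millimetre beads, and a micron-sized sheet) are assumed purely for illustration and are not the systems studied below.

import numpy as np

# Illustrative values (assumed, not from this study): air/water interface
gamma = 0.072        # surface tension [N/m]
drho = 1000.0        # density difference across the interface [kg/m^3]
g = 9.81             # gravitational acceleration [m/s^2]

# Capillary length l_c = sqrt(gamma / (drho * g))
l_c = np.sqrt(gamma / (drho * g))
print(f"capillary length l_c = {l_c * 1e3:.2f} mm")

# Sphere-monolayer scaling lambda ~ sqrt(l_c * d) for a few particle sizes d
for d in [200e-6, 500e-6, 1e-6]:
    lam = np.sqrt(l_c * d)
    print(f"d = {d * 1e6:7.1f} um -> lambda ~ {lam * 1e6:7.1f} um ({lam / d:5.1f} d)")

For millimetric beads this gives wavelengths of a few particle diameters, whereas for a micron-sized sheet the same scaling would give several tens of particle lengths, well above the 4 - 20 particle lengths reported for graphene oxide monolayers, consistent with the mismatch discussed above.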
§ EXPERIMENTAL METHODS Uni-axial compression experiments are carried out in an in-house-made rectangular trough of length 200 mm and width W_t =50 mm, see Fig. <ref>. A stationary barrier mounted on a force sensor allows us to measure the force F on the barrier and the surface pressure Π=F/W_t. A moving barrier mounted on a linear stage allows us to control the distance Δ between the barriers in steps of 10μ m. To measure forces of the order of mN we used a load cell with a resolution of ± 0.1 mN. For small forces of the order of μ N, produced by the smallest particles we considered, we used a cantilever-based force sensor, that is described later in detail. For the 2D monolayer experiments, we used transparent hexagonal plates made of Mylar (density ρ_p ≃ 1400 kg/m^3) purchased from Geotech International. The plates have thickness t = 50 μ m and two different lateral sizes, L = 1.5 mm and 3 mm. Here L refers to the inscribed circle diameter of the hexagonal plates. To remove possible contaminants, we aspirate the fluid interface using a suction pipette after moving the barriers to minimum opening <cit.>. The process is repeated until the fluid interface is clean. The interface is assumed to be clean if the surface pressure at maximum compression is below 4 mN/m. The 2D monolayer is prepared by gently sprinkling the particles on the air/water interface at maximum Δ≃ 3 W_t. Overlapping particles were separated by a stirring rod. The 2D monolayer is then compressed at a velocity of 200 μ m/s. The monolayer undergoes out-of-plane deformations, whose amplitude A is measured by the inclined laser line method <cit.>. The technique involves projecting a laser sheet at an angle θ with respect to the particle-laden fluid interface (Fig. <ref>). The intersection of the laser sheet with the monolayer results in a line that is imaged from the top by a camera. The intersecting line is straight for a flat monolayer and distorted for a deformed monolayer. The out-of-plane deformation amplitude can be calculated from the lateral distortion of the laser line, accounting for a proportionality constant tan(θ), where in our case θ≃ 28^∘. The resolution of the out-of-plane deformation is 60 μ m. The novelty of our method is that we use a laser line that sweeps the monolayer providing a continuous topographic map, instead of the height profile along a single line. To do so, the laser source is mounted on a linear stage controlled by a stepper motor. For the single chain experiments, we used square-shaped Mylar plates of lateral sizes L = 1, 3, 5, 7, 10, 15, & 20 mm and thickness t=125 μ m, except for the 1 mm Mylar plates for which the thickness is t=23 μ m. For all the particles the aspect ratio L/t is larger than 23. The smallest plates are manufactured by laser cutting (Optec Laser Systems). Using the length and thickness of the plates and Young's modulus ≃ 3 GPa of Mylar, we estimate an Euler buckling threshold for the plates of ≥ 240 mN. Therefore, the plates do not buckle under compression forces of the order of a few mN and are considered to be rigid in our experiments. Experiments are carried out with both a glycerol/air interface and a water/sunflower oil interface. Corresponding density differences are δρ = 1200 ± 1 kg/m^3 and 80 ± 1 kg/m^3, respectively, measured by an Anton Paar density meter (DMA 5000). The surface tensions of the glycerol/air and water/sunflower interfaces are 65 ± 1 mN/m and 26 ± 1 mN/m, respectively, measured by the pendant drop method in a Dataphysics Goniometer (OCA 25). 
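The Euler buckling threshold quoted above (≥ 240 mN) can be checked with a few lines of Python. The boundary conditions are not stated in the text, so the sketch below assumes a pinned-pinned Euler column, F_E = π² E I / L², with I = w t³/12 and width w = L for the square plates; it is an order-of-magnitude check, not the authors' exact calculation.

import numpy as np

E = 3.0e9                               # Young's modulus of Mylar [Pa]

def euler_threshold(L, t, w=None):
    """Pinned-pinned Euler buckling load of a plate of length L, thickness t, width w."""
    w = L if w is None else w
    I = w * t**3 / 12.0                 # second moment of area [m^4]
    return np.pi**2 * E * I / L**2      # buckling load [N]

# Square Mylar plates used for the chain experiments (thin 1 mm plates included)
for L_mm, t_um in [(3, 125), (5, 125), (10, 125), (20, 125), (1, 23)]:
    F_E = euler_threshold(L_mm * 1e-3, t_um * 1e-6)
    print(f"L = {L_mm:2d} mm, t = {t_um:3d} um -> F_E ~ {F_E * 1e3:6.0f} mN")

With these assumptions the 125 μm-thick plates give thresholds between roughly 0.24 N (largest plates) and 1.6 N (smallest), consistent with the ≥ 240 mN figure, while even the thin 1 mm plates come out near 30 mN, still well above the few-mN compressive loads in the experiments.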
For the water/oil interface, the particles are first arranged at an air/water interface and the oil is gently added. Care is taken to arrange the particles in a straight chain between the barriers. Upon compression, the chain undergoes out of the plane deformation. A camera captures the side view of the chain and from the images we extracted the average amplitude <A > of individual plates in the chain. As mentioned earlier, for forces of the order of mN the load cell is used. For forces of the order of few μ N we used a cantilever force sensor similar to the micropipette force sensor described in Ref. <cit.>. The deflection ξ of the cantilever is measured from the side view by a calibrated camera with a zoom lens. The force is computed from F = k ξ. The stiffness k of the cantilever was obtained by calibration; see Appendix <ref> for the calibration procedure and calibration curves. We used cantilevers of stiffnesses k=29 and 58 μ N/mm. The resolution of the force F is ∼ 1 μ N. This value is set by the resolution of the camera (≃ 11 μ m/pixel) and the stiffness of the cantilever. § RESULTS §.§ Observations on the 2D monolayers Figure <ref> (a) shows a typical evolution of the surface pressure Π= F/W_t for decreasing values of the normalized distance Δ / W_t between the barriers. Fig. <ref> (b) shows amplitude maps corresponding to 4 characteristic points of the Π vs. Δ curve, denoted A, B, C and D. For Δ/W_t > 2 the plates are not touching each other and Π≃0, as expected. As Δ / W_t decreases, contacts between the particles are established and a non-zero value of Π is measured. In correspondence to point A, Π>0 because of the formation of force chains, but the interface remains flat (see panel A in Fig. <ref>). Buckling of the monolayer becomes measurable in correspondence to the point B. Buckling is evident from the change in amplitude of the particle-laden interface (inset X of Fig. <ref> (a) and inset X in panel B of Fig. <ref> (b)). Further compression leads to an increase in the number of buckled regions as the surface pressure rises. The characteristic point C belongs to this region of behavior. Buckling is predominantly present near the moving barrier (on the right in panel C of Fig. <ref> b). Beyond the point D, referred to as the “collapse point” in the following, particle multilayers form. From A to D, the surface pressure increases relatively steeply, while for Δ/W_t smaller than the one corresponding to the “collapse point” D the surface pressure increases comparatively mildly. A key observation is that the characteristic wavelength of the monolayer deformation in the regions where buckling occurs is of the order of the particle diameter (see inset Z of Fig. <ref> (a) ). While this is shown for the 1.5mm plates, the same observation holds for the larger 3mm plates. Also, the monolayer does not show long-range ordered wave-like patterns, as reported instead for spheres <cit.>. The fact that no wavelengths much larger than the particle size occur is compatible with a simple model of chain compression, which we now describe. §.§ One-dimensional chain model and comparison with experiment We now analyze the compression of a linear chain of N=16 square plates of size L=10 mm trapped at an air-glycerol interface. The measured force F and the normalized average amplitude < A > /L of the out-of-plane deformation are shown in Fig. <ref> as a function of Δ/(NL). From this plot, two regimes can be identified. 
For Δ/(NL) > 1, the distance between the barriers is larger than the total length of the chain. Therefore, F=0 and < A > ≃ 0 (“flat state”). For Δ/(NL) = 1, the plates touch each other and F starts to increase. The measured average amplitude increases when Δ/(NL) is approximately equal to 0.9995. The fact that F can be finite while < A > ≃ 0, a feature that was also observed in the 2D system, is due to small particle rearrangements before jamming. The “buckled state” for Δ/(NL) < 0.9995 is characterized by a sharp increase in F followed by a plateau. In the rest of this paper, we will call the plateau value of F the buckling force, as it represents the magnitude of the force that would be required to buckle the monolayer in an experiment conducted at applied force. The wavelength λ of the monolayer corrugation was obtained by visual inspection. Experiments with different numbers of plates, from 5 to 16, consistently gave λ≃ 2L, as shown for N=16 in the inset of Fig. <ref>. To analyze the observed behavior, we developed a mathematical model based on a balance between capillary forces, gravity and contact forces. The vertical interface deformation caused by the plate weight is proportional to Bo_p ℓ_c where Bo_p = ρ_p g L t/γ is the particle Bond number and ℓ_c = √(γ/(δρ g)) is the capillary length <cit.>. From this estimate, the maximum vertical deformation is smaller than approximately 0.1 ℓ_c for all the plates we used in our experiments. Therefore, the effect of particle weight on the interfacial distortion can be neglected. The total free energy of the system is then given by the gravitation potential energy of the fluid (located both below the fluid interface and below the plates), and the interfacial energy of the fluid-fluid interface. Calling h(x,z) the height of the fluid-fluid interface (see Fig. <ref>), and assuming that the plates pin the contact line at their edges <cit.>, the gravitational potential energy contribution to the total free energy is E_g = ∫_0^Δ dx [ 1/2δρ g L h^2(x,0) + 2 ∫_0^∞1/2δρ g h^2 dz ] , where δρ = ρ_l - ρ_a is the difference in density between the heavier fluid and the lighter fluids, x is the coordinate along the chain and z is the coordinate perpendicular to the chain in the plane of the unperturbed fluid interface, with z=0 corresponding to the contact line on one side of each plate (see Fig. <ref>). The first term in Eqn. (<ref>) is the gravitational energy of the liquid below the plates and the second term is the gravitational energy of the liquid in the two side menisci. The capillary energy associated with the menisci on both sides of the chain is E_γ = 2γ∫_ 0^Δ dx [ ∫_0^∞√(1 + (∂ h/∂ x)^2 + (∂ h/∂ z)^2) dz ]. Note that we neglected the capillary contribution due to the fluid interface in the gap between the particles (i.e. in -L<z<0). To enforce the constraint that the total length of the chain is constant, we add to the total free energy the term E_c = F [ N L - ∫_0^Δ√(1 + .(∂ h/∂ x)^2|_z=0) ], where F is a scalar Lagrange multiplier. Physically, F represents the contact force between the plates. Setting the functional derivative δ ( E_g + E_γ + E_c ) = 0 yields two equations. The first equation is the small-amplitude Young-Laplace equation governing the shape of the fluid-fluid interface for -L> z > 0: δρ g h = γ( ∂^2 h/∂ x^2 + ∂^2 h/∂ z^2). The second equation is the boundary condition at z=0: δρ g L h -2 γ∂ h/∂ z + F ∂^2 h/∂ x^2 = 0. Upon multiplication by L, equation (<ref>) is a balance of moments. 
The first term represents the moment of the hydrostatic pressure force due to the weight of the fluid below the plates. The second term represents the moment of the vertical projection of the surface tension force at the contact line, located at z=0 and z = -L. The third term represents the moment of the contact forces F between the particles. The leading-order Fourier mode solution of Eqn. (<ref>) that matches the triangle-wave profile of the contact line is <cit.> h(x,z) = A e^-z√((2π/λ)^2 + 1/ℓ_c^2)sin( 2π x/λ), where ℓ_c = √(γ/δρ g) is the capillary length. Equation (<ref>) satisfies h(x,z=0)=A sin (2 π x/λ) and h(x,z →∞) = 0. For λ≫ℓ_c and λ≪ℓ_c, the decay lengths of the meniscus in the z direction are ℓ_c and λ/2 π, respectively. Thus, in the surface tension-dominated regime the buckling wavelength and the decay length of the fluid interface distortion are roughly of the same order of magnitude. Substituting (<ref>) into (<ref>) yields the contact force as a function of the wavelength: F = 1/4 π^2δρ g L λ^2 + 1/2 π^2γλ√((2π)^2 + ( λ/ℓ_c)^2). In Fig. <ref> we show two configurations of buckled chains, with λ = 2 L in configuration (b) and λ = 10 L in configuration (c) . Both wavelengths are local minima of the function F(λ). The absolute minimum of F(λ) is the total energy minimum, similar to the buckling of an Euler beam <cit.>. Since F(λ) is a monotonically increasing function and wavelengths smaller than 2L are not possible, the equilibrium wavelength is λ = 2L. The contact force corresponding to λ = 2L is the buckling force: F_b/γℓ_c = 1/π^2(L/ℓ_c)^3 + 2/πL/ℓ_c√(1 + (L/πℓ_c)^2) . Figure <ref> shows F_b/(γℓ_c) vs. √(Bo) = L/ℓ_c, comparing Eqn. (<ref>) with the experimental data. Here Bo = δρ g L^2/γ. The agreement between the experimental data and the theory is excellent, except for the smallest values of Bo where a perfect alignment of the plates cannot be ensured. For Bo ≫ 1 the gravitational force dominates and F_b∼δρ g L^3. In this regime, the buckling force is of the order of the weight of the liquid displaced by each plate as the chain deforms. For Bo ≪ 1, F_b ∼γ L. In this regime, the buckling force is of the order of the capillary force exerted by the side meniscus on each plate. Equating the first and second terms in Eq. (<ref>) provides a threshold L/ℓ_c ≃π for the transition between the capillarity-dominated and gravity-dominated regimes. §.§ Comparison of 1D model with 2D experiment It is instructive to compare the prediction of the chain model to the experimental data for the 2D monolayer. This comparison should account for two differences. First, in the 1D chain the internal stress in the monolayer due to particle-particle contact forces is essentially homogeneous along the compression direction (on a scale ≫ L). While in the 2D assembly, the contact forces are a random function of position and orientation. Secondly, in the 2D monolayer the balance of forces on the entire monolayer should account for friction with the lateral walls <cit.>. Evidence of the importance of the lateral walls in our experiments is the fact that the amplitude of the monolayer deformation is larger near the moving barrier (see panel C and D in Fig. <ref> b). A larger deformation occurs in this region because the gradient of the surface pressure along x must balance the frictional stresses on the lateral walls. So the surface pressure and deformations will be larger near the moving barrier. 
However, the 1D chain model could still provide an estimate of the average value of Π in regions where buckling occurs and sufficiently away from the lateral walls. From Eqn. (<ref>) the buckling surface pressure is Π_b = F_b/L = 1/π^2δρ g L^2 + 1/π^2γ√((2π)^2 + ( 2 L/ℓ_c)^2). We performed buckling experiments at different trough aspect ratios. Trough aspect ratios are changed by varying the number of particles between the barriers for a fixed trough width. The particle sizes are fixed (L = 1.5 mm) in all experiments. Figure <ref> shows the surface pressure profiles for the 2D monolayer as a function of Δ/W_t for N ≃ 330 - 2040. We see from this curve that the surface pressure profile depends on the initial trough area, another manifestation of the effect of lateral wall friction <cit.>. Figure <ref> shows the experimental data for the surface pressure in the 2D monolayer, averaged over 3 different measurements, for different values of Δ/W_t. This figure is obtained from Fig. <ref> by reporting the value of Π and Δ/W_t corresponding to the collapse point D. In order to compare the collapse surface pressure in the 2D monolayer with the 1D model we used a Coulomb model for the lateral wall friction as done in <cit.> for a monolayer of spherical particles. This model assumes that the frictional force per unit length is proportional to the local values of Π according to a proportionality constant μ_wall. This approximation yields an exponential decay law also referred to as the Janssen model, Π = Π_0 exp(-2 μ_wallνΔ/W_t). Here Π_0 is the pressure at the moving barrier and ν is the ratio of surface pressures perpendicular and parallel to the compression direction. Assuming ν = 1/3 <cit.>, the best fit to the data (dashed curve in Fig. <ref>) gives Π_0 = 53.8 mN/m and μ_wall = 0.24. For comparison, the reported friction coefficients for Mylar are in the range 0.13 - 0.41 <cit.>. The black square dot in figure <ref> is the extrapolation of the experimental data for the 2D monolayer to Δ/W_t=0, which yields Π_0 = 53.8 mN/m. The red square dot in figure <ref> is obtained by using the parameters of our problem in Eqn. (<ref>). The value of Π_0 from the friction model is larger than the value from the 1D chain model, but the difference is small (about 13 %). Considering the simplicity of the chain model, the agreement with the 2D data is surprisingly good. As stated before, the 2D monolayer differs from the 1D chain in the distribution of contact forces between the particles. Statistics of contact forces between jammed particles have been studied extensively in the context of granular materials <cit.>. These studies reveal that the probability of contact forces attaining a value f larger than the mean value < f > decays fast, approximately as p(f/< f > )∼exp(-β f/< f > ), with β an O(1) numerical coefficient <cit.>. Therefore it is expected that the monolayer contains few contact forces that are large compared to the average contact force <cit.>. Upon monolayer compression, the first buckling events will occur for groups of particles for which the contact force exceeds the estimate in Eqn. (<ref>). Because such large forces are small in number, the buckling regions are initially localized, as seen in panel B in Fig. <ref> b. If the mechanical response of the monolayer is dominated by these spatially scattered regions, Eq. (<ref>) could provide an upper bound for the surface pressure measured at the barrier in the 2D experiment. 
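The closed-form results of this section lend themselves to a direct numerical check. The sketch below evaluates the contact force F(λ) from the equation above to confirm that it increases monotonically for λ ≥ 2L (so the equilibrium wavelength is 2L), computes the buckling surface pressure Π_b for the 1.5 mm plates, and evaluates the Janssen-type wall-friction decay with the fitted values quoted above. The glycerol/air values are those given in the experimental section; the air/water values γ ≈ 70 mN/m and δρ ≈ 1000 kg/m³ used for Π_b are assumed for illustration.

import numpy as np

g = 9.81

def contact_force(lam, L, gamma, drho):
    """Contact force F(lambda) from the 1D chain model."""
    l_c = np.sqrt(gamma / (drho * g))
    return (drho * g * L * lam**2 / (4 * np.pi**2)
            + gamma * lam * np.sqrt((2 * np.pi)**2 + (lam / l_c)**2) / (2 * np.pi**2))

def buckling_force(L, gamma, drho):
    """F_b, i.e. F(lambda) evaluated at the smallest admissible wavelength 2L."""
    return contact_force(2 * L, L, gamma, drho)

# (i) F(lambda) is monotonically increasing for lambda >= 2L (10 mm plates, glycerol/air)
L, gamma, drho = 10e-3, 0.065, 1200.0
lam = np.linspace(2 * L, 20 * L, 500)
print("monotonic for lambda >= 2L:", np.all(np.diff(contact_force(lam, L, gamma, drho)) > 0))

# (ii) buckling surface pressure Pi_b = F_b / L for the 1.5 mm plates, air/water (assumed values)
L2, gamma2, drho2 = 1.5e-3, 0.070, 1000.0
Pi_b = buckling_force(L2, gamma2, drho2) / L2
print(f"Pi_b ~ {Pi_b * 1e3:.1f} mN/m")

# (iii) Janssen-type decay of the surface pressure along the trough
Pi_0, mu_wall, nu = 53.8e-3, 0.24, 1.0 / 3.0   # fitted values quoted above
for ratio in [0.0, 0.5, 1.0, 1.5]:             # ratio = Delta / W_t
    Pi = Pi_0 * np.exp(-2 * mu_wall * nu * ratio)
    print(f"Delta/W_t = {ratio:3.1f} -> Pi ~ {Pi * 1e3:5.1f} mN/m")

Point (ii) gives Π_b ≈ 47 - 48 mN/m, about 13% below the extrapolated Π_0 = 53.8 mN/m as stated above, and point (iii) shows the surface pressure decaying by roughly 15% per trough width for μ_wall = 0.24 and ν = 1/3.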
§.§ 1D model with bending rigidity Key in our derivations is the absence of bending energy in the energy functional. In an experiment with a 2D nanomaterial such as graphene oxide <cit.> a possible explanation for observing wavelengths larger than 2L could be the presence of a small but finite bending rigidity. An extension of Eqn. <ref> accounting for an effective monolayer bending rigidity (per unit width) D is D w ∂^4 h/∂ x^4 + δρ g h w - 2 γ∂ h/∂ z + F ∂^2 h/∂ x^2 = 0, where w=L for square particles. Substituting Eqn. (<ref>) into Eqn. (<ref>) gives F/γ L = D/γ L^2( 2 π L/λ)^2 + 2 ( λ/2 π L) √(1 + Bo ( λ/ 2 π L)^2 ) + Bo ( λ/2 π L)^2. For D/(γ L^2) ≫ 1, the buckling mechanics is dominated by competition between gravitational and bending forces. Thus, λ_b = (D/δρ g)^1/4, which is the result of Ref. <cit.>. For D/(γ L^2) ≪ 1 bending rigidity effects are negligible and we recover the results of Sec. III b. For intermediate values of D/(γ L^2), the wavelength that minimizes the force will be larger than 2L. Its precise value can be found by solving dF_b/dλ = 0. For Bo ≪ 1 the buckling wavelength is λ_b/2L = max {1, π( D/γ L^2)^1/3} and the corresponding buckling force is F_b/γ L = max {2/π , 3 ( D/γ L^2)^1/3}. Figure <ref> shows F/(γ L) vs. λ/(2L) for Bo = 0 and selected small values of D/(γ L^2). The wavelength that minimizes F is indicated by the red dots. From Eqn. (<ref>) and (<ref>) we see that both the buckling wavelength and buckling force are proportional to (D/(γ L^2))^1/3, thus F_b ∝λ_b (red dashed line in Fig. <ref>). For increasing values of D/(γ L^2) the wavelength that minimizes the force becomes larger than 2L. In an interfacial monolayer of 2D nanosheets, the nanosheets can overlap slightly <cit.>. This overlap can result in a small but finite effective bending rigidity because of the attractive force between the sheets in the overlapping region. In Ref. <cit.> a Lennard-Jones potential was used to model the attractive interaction potential between parallel sheets of graphene. Using the Lennard-Jones potential, and assuming that the angle between pairs of overlapping sheets is small, it is easy to estimate the effective bending rigidity corresponding to an average overlap length ℓ (see Appendix <ref>): D ≃40 Γ L ℓ^3/3 r_0^2. Here Γ is the adhesion energy per unit area and r_0 is the nanometric equilibrium separation between the nearly-parallel sheets. The model suggests a strong ℓ^3 scaling with the overlap length. For graphene oxide sheets in high-humidity conditions, molecular dynamics simulations suggest r_0 ≃ 7.7 - 12 A^∘ <cit.> and Γ≃ 0.1 - 0.2 J/m^2 <cit.>. Taking realistic values Γ = 0.2 J/m^2, and r_0 = 12 A^∘ and an average sheet length L = 1 μ m, D/(γ L^2) is estimated to be 0.02 and 26 for ℓ = 1 nm and 10 nm, respectively (assuming the surface tension of water, γ = 0.07 J/m^2). The corresponding wavelengths are 2 μ m and 20 μ m. Thus, even for relatively small overlaps of only 10nm, the wavelength of buckling can be an order of magnitude larger than 2L. § CONCLUSIONS We have measured the amplitude of deformation, wavelength and force on the barrier for a two-dimensional and one-dimensional monolayer of plates trapped at a fluid-fluid interface and subject to uni-axial compression. The amplitude and wavelength of the corrugations of the 2D monolayer were measured by a laser scanning technique. 
The model we have developed to predict the experimental data for the linear chain (one-dimensional monolayer) predicts the buckling force well over a wide range of values of L/ℓ_c, where ℓ_c is the capillary length and L is the particle length, and without adjustable parameters (Fig. <ref>). The 1D chain model provides a reasonable order of magnitude estimate of the buckling surface pressure Π for the two-dimensional monolayer, provided that this pressure is identified as the collapse pressure corresponding to the point D in Figs. <ref> and <ref>. The chain model does not contain a dependence on the trough aspect ratio Δ/W_t, but the inclusion of frictional forces with the lateral wall via a Coulomb friction model enables us to model the observed dependence of Π on Δ/W_t. The chain model predicts a buckling wavelength λ =  2L, independent of L/ℓ_c. The 2D monolayer does not display a regular wave pattern, but the local wavelength in regions where buckling occurs is of the order of the particle size, as in the chain model. Uni-axial compression of monolayers of spherical particles gives smooth undulations with a wavelength λ∼√(ℓ_c L) <cit.>, different from the one we observe. In our case, the effective bending rigidity of the monolayer is negligible, as the plates can “hinge” at their contact points without a bending energy penalty. In the case of spheres, even in the absence of colloidal force contribution bending energy can originate from the motion of the contact line on the surface of each particle as the mean interface curvature changes <cit.>. An indication of this is that the order of magnitude of the effective bending rigidity corresponding to λ∼√(ℓ_c L) is γ d^2; this can be seen as the change in interfacial energy as a sphere of diameter d protrudes in the fluid interface over a distance comparable to d. In our case, the undulations of the contact line relative to the particles, if present, are at most limited to a scale t ≪ L, where t is the particle thickness. The corresponding changes in interfacial energy upon a change in interfacial curvature is O(γ L t) <cit.>. For L/ℓ_c ≪ 1 and t/L ≪ 1, this contribution is negligible in comparison to the dominant contribution, of order γ A λ∼γ L^2, due to the rotation of each particle as the monolayer is compressed. The aspect ratio of the particle thus determines which capillary energy contribution controls the micromechanics of the particle monolayer. In our experiments, we prepare the particle-laden interface ensuring no initial overlaps. If a monolayer of 2D nanosheets is prepared with care, overlaps can be largely prevented (nanosheet stacking requires overcoming an energy barrier <cit.>), but probably not completely eliminated at large degrees of compression. Tuning the pH of the liquid <cit.> or adding surfactants <cit.> has been shown to suppress the stacking of 2D materials at fluid interfaces, so one may realize the experimental systems described in the current paper using real 2D materials. If particle overlaps did occur even before the compression of the particle-laden interface, the analysis would need to account for particle-particle interactions as well as statistics of the geometry of the overlapping regions. Overlaps contribute to a finite bending rigidity as a result of the adhesion forces between the nanosheets. We have shown mathematically that this effect increases the buckling wavelength compared to 2L (see Fig. <ref>). 
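A short numerical illustration of the scale separation invoked above, using the 2D-monolayer plate dimensions (L = 1.5 mm, t = 50 μm) and an assumed air/water surface tension, compares the contact-line contribution O(γLt) with the dominant rotation contribution of order γL².

gamma = 0.070            # assumed air/water surface tension [N/m]
L, t = 1.5e-3, 50e-6     # plate size and thickness of the 2D-monolayer particles [m]

E_rotation = gamma * L**2        # dominant term, ~ gamma * A * lambda ~ gamma L^2
E_contact_line = gamma * L * t   # contact-line term, O(gamma L t)

print(f"gamma L^2 ~ {E_rotation:.2e} J,  gamma L t ~ {E_contact_line:.2e} J")
print(f"ratio (gamma L t)/(gamma L^2) = t/L = {t / L:.3f}")

For these plates the contact-line term is about thirty times smaller than the rotation term, which is the aspect-ratio argument made above.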
Compression of plate-like particles trapped at fluid interfaces occurs in a variety of applied settings, for instance in the manufacturing of thin films <cit.>, in the deformation of Pickering emulsions <cit.>, or in the production of crumpled graphene by aerosolization <cit.>. This work contributes to our understanding of the link between particle shape, contact mechanics, and response of the fluid interface during the compression of monolayers of plate-like particles of controlled geometry. § ACKNOWLEDGEMENTS We thank Simon Gravelle and Adyant Agarwal for useful discussions on modeling the interaction energy between two nanosheets. We thank Paul Grandgeorge for useful suggestions on force measurements in the μ N range. We gratefully acknowledge funding by European Research Council (ERC) under the European Union’s Horizon 2020 Research and Innovation program (project FLEXNANOFLOW, grant agreement no. 715475). § MICRO FORCE SENSOR The cantilever force sensors are Mylar sheets with lengths of 80 and 100 mm, a width of 10 mm, and a thickness of 125 μ m. One end of the sheet is clamped and the free end is unconstrained. The free end is passed through another Mylar sheet, with a rectangular hole, which acts as the barrier (see Fig. <ref>). The deflection of the Mylar sheet (ξ) from its undeformed position is calculated by imaging from the side view. To calibrate the force sensors, the fixed end of the cantilever is mounted on a manual precision stage and the free end is rested on a knife edge placed on a Mettler Toledo precision micro-balance. Imposing successive displacements of 0.5 mm in the manual precision stage, the corresponding forces are read from the balance. Figure <ref> shows force vs. displacement of the manual stage. The force values are linear with respect to the displacement for displacements as larger as 5 mm. The slope of the line fitted to the experimental data gives the stiffness k of the beam. § BENDING RIGIDITY DUE TO OVERLAPS The equilibrium distance between two nanosheets is determined by the competition between the attractive van der Waals and the repulsive electrostatic forces between the solid surfaces. A Lennard-Jones potential has been used to model the interaction between two nanosheets in <cit.>. We use the standard 4-10 Lennard-Jones potential energy of interaction (per unit area) between two thin parallel plates <cit.> ϕ(r) = Γ/3( 5 (r_0/r)^4 - 2 (r_0/r)^10), where r is the separation distance between the plates, r_0 is the equilibrium separation and Γ = ϕ(∞) - ϕ(r_0) is the adhesion energy. If the separation distance r > r_0 the plates attract each other due to van der Waals forces and if r < r_0 the plates repel each other due to electrostatic forces. In the limit of small displacement around r_0, a quadratic approximation to the energy per unit area is <cit.> ϕ(r) ≃20 Γ/r_0^2 (r - r_0)^2. We consider a 1D chain of plate-like particles at a fluid interface where each particle pair has a small overlap of length ℓ (see Fig. <ref>). We model the interface as a continuous curve parameterized by θ(s), the local rotation angle along the curvilinear coordinate s. The configuration of a single overlap is illustrated in the inset of Fig. <ref>. Referring to this figure, we take r in the direction normal to the top plate and ζ in the direction tangential to the top plate. Under compression the plates are rotated with respect to each other by an angle dθ. The displacement of the second plate is r (ζ) = r_0 + ζ tan(dθ) (see figure <ref>). 
The energy required to impose this rotation for a particle pair is dE ≃ w∫_0^ℓ20 Γ/r_0^2 (r(ζ) - r_0)^2 dζ. Carrying out the integration for |d θ| ≪ 1 we obtain dE ≃w/2 ( 40 Γℓ^3/3 r_0^2) dθ ^2 Multiply and divide by (ds)^2, where ds is an infinitesimal element of curvilinear coordinate, we obtain dE ≃w/2 ( 40 Γℓ^3/3 r_0^2 ds ) (dθ/ds)^2 ds. For a continuous surface, the bending rigidity D (per unit width) is defined so that dE = 1/2 w D κ^2 ds, where κ = dθ/ds is the curvature. Comparing this expression to Eqn. (<ref>) we obtain D = (40 Γℓ^3)/(3 r_0^2)ds. In our case, because dE represents the energy per particle pair, ds is the distance between two particle centers, i.e. ds = L - ℓ. For ℓ≪ L the estimate of the bending rigidity is D = (40 Γℓ^3)/(3 r_0^2)L, as in Eq. (<ref>). The assumption of a continuous surface is reasonable if N ≫ 1, where N is the total number of plates <cit.>. The bending rigidity thus scales proportionally to the adhesion energy Γ and depends strongly on the overlap length ℓ.
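The algebra of this appendix, and the overlap estimates quoted in the main text, can be verified with the short sketch below. Note the sign convention assumed here: the expansion is taken about the minimum of the 4-10 interaction, i.e. for the attractive-well form ϕ(r) = (Γ/3)[2(r_0/r)^10 − 5(r_0/r)^4], which is the form consistent with Γ = ϕ(∞) − ϕ(r_0); the material parameters (Γ = 0.2 J/m², r_0 = 12 Å, L = 1 μm, γ = 0.07 J/m²) are those given in the main text.

import math
import sympy as sp

r, r0, Gam, zeta, ell, w, dth = sp.symbols('r r_0 Gamma zeta ell w dtheta', positive=True)

# 4-10 interaction per unit area, written so that r = r_0 is the energy minimum
phi = Gam / 3 * (2 * (r0 / r)**10 - 5 * (r0 / r)**4)

# Quadratic approximation about r_0: phi(r) - phi(r_0) ~ (1/2) phi''(r_0) (r - r_0)^2
phi2 = sp.simplify(sp.diff(phi, r, 2).subs(r, r0))
print(phi2)          # 40*Gamma/r_0**2, i.e. the 20*Gamma/r_0**2 * (r - r_0)**2 approximation above

# Bending energy of one overlapping pair: dE = w * Int_0^ell 20*Gamma/r_0^2 * (zeta*dtheta)^2 dzeta
dE = sp.simplify(w * sp.integrate(20 * Gam / r0**2 * (zeta * dth)**2, (zeta, 0, ell)))
print(dE)            # 20*Gamma*ell**3*w*dtheta**2/(3*r_0**2) = (w/2)*(40*Gamma*ell**3/(3*r_0**2))*dtheta**2

# Numerical estimate of D = 40*Gamma*L*ell^3/(3*r_0^2) and the resulting buckling wavelength
Gamma_n, r0_n, L_n, gamma_n = 0.2, 12e-10, 1e-6, 0.07    # values quoted in the main text
for ell_n in (1e-9, 10e-9):
    D = 40 * Gamma_n * L_n * ell_n**3 / (3 * r0_n**2)
    Dt = D / (gamma_n * L_n**2)
    lam_b = 2 * L_n * max(1.0, math.pi * Dt**(1.0 / 3.0))  # lambda_b for Bo << 1
    print(f"ell = {ell_n * 1e9:4.1f} nm: D/(gamma L^2) = {Dt:5.2f}, lambda_b ~ {lam_b * 1e6:4.1f} um")

The symbolic results reproduce the quadratic approximation and the expression for dE above, and the numerical loop recovers D/(γL²) ≈ 0.02 and 26 with buckling wavelengths of about 2 μm and 19 - 20 μm, as quoted in the main text.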
http://arxiv.org/abs/2307.02650v1
20230705205306
A Complete Characterisation of Structured Missingness
[ "James Jackson", "Robin Mitra", "Niels Hagenbuch", "Sarah McGough", "Chris Harbron" ]
stat.ME
[ "stat.ME", "stat.AP", "stat.ML" ]
A Complete Characterisation of Structured Missingness ================================================================================================= ^1The Alan Turing Institute, London, UK, ^2Department of Statistical Science, University College London, London, UK, ^3F. Hoffmann-La Roche AG, Basel, Switzerland, ^4Genentech, South San Francisco, CA, USA, ^5Roche Pharmaceuticals, Welwyn Garden City, UK jjackson@turing.ac.uk Our capacity to process large complex data sources is ever-increasing, providing us with new, important applied research questions to address, such as how to handle missing values in large-scale databases. <cit.> noted the phenomenon of Structured Missingness (SM), whereby missingness has an underlying structure. Existing taxonomies for defining missingness mechanisms typically assume that variables' missingness indicator vectors M_1, M_2, …, M_p are independent after conditioning on the relevant portion of the data matrix 𝐗. As this is often unsuitable for characterising SM in multivariate settings, we introduce a taxonomy for SM, where each M_j can depend on 𝐌_-j (i.e., all missingness indicator vectors except M_j), in addition to 𝐗. We embed this new framework within the well-established decomposition of mechanisms into MCAR, MAR, and MNAR <cit.>, allowing us to recast mechanisms into a broader setting, where we can consider the combined effect of 𝐗 and 𝐌_-j on M_j. We also demonstrate, via simulations, the impact of SM on inference and prediction, and consider contextual instances of SM arising in a de-identified nationwide (US-based) clinico-genomic database (CGDB). We hope to stimulate interest in SM, and encourage timely research into this phenomenon. § INTRODUCTION Missing values are to be expected in most data sets, especially when the underlying data collection process is complex, such as in a clinical setting. The underlying mechanism driving the missingness dictates whether adjustments are required in order to obtain valid inferences. Over the years a substantial amount of literature has been devoted to dealing with missing data <cit.>. The seminal taxonomy of missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR), introduced by <cit.>, has been fundamental to modelling missingness mechanisms. It allows analysts to understand the relationship between missingness and the observed and missing portions of the data matrix 𝐗, thus allowing the missing data to be appropriately addressed. It is unclear, however, how this taxonomy deals with the case where there is an underlying structure to the missingness itself. Firstly, it assumes an element of randomness and does not cover the scenario where values are missing with certainty. Secondly, after conditioning upon relevant variables, Rubin's taxonomy assumes the probability of missingness is equal across all subjects. That is, in the MCAR case, it is assumed that, for a given variable, missing values are equally likely across all subjects; similarly, in the MAR case, it is assumed that missing values are equally likely across all subjects after conditioning on the relevant portion of the observed data; and a similar notion holds in the case of MNAR. Moreover, MCAR, MAR, and MNAR assume that the rows and columns of the missingness indicator matrix – a matrix comprising zeros and ones denoting whether each value in the data matrix is observed or missing – are conditionally independent.
Yet, such assumptions do not always hold. What if missingness is non-random? What if dependencies exist among the missingness indicators themselves? What if factors other than the data are driving the missingness? This is where the notion of structured missingness (SM) comes in. The challenges associated with SM were recently set out by <cit.>. To summarise briefly, SM is an umbrella term covering a range of missingness mechanisms that have an underlying structure. list five “common routes to SM”, demonstrating how SM can arise in practice. The first two of these relate to data linkage – specifically, to multi-modal and multi-scale linkage – which often leads to large swathes of missing data that is not random but rather a certainty. The third route to SM is through batch failure, which can result in missingness with a sequential aspect, and the last two relate to skip patterns and population heterogeneity. Research into SM is important for a number of reasons. Strong dependencies between variables' missingness indicator vectors can cause inferential challenges; for example, it may not be possible to fit imputation models to the data. Another reason is that missingness structures themselves can potentially hold useful information; for example, an interaction detected in the missingness indicators for two variables, may reveal an insight into the relationship between those variables, a relationship that may not be obvious from the data values. In this respect, exploring SM shares similarities with the use of paradata <cit.>, to fully extract all available information from a data set. Problems due to SM have been previously encountered when dealing with incomplete data. However, the literature tends to be scattered, focusing on specific instances of SM without a wider appreciation of the phenomenon, as well as tending to re-purpose existing methods rather than developing bespoke framework that SM clearly merits. <cit.> propose utilising tree models to learn about structures present among the missing data but do not elaborate further on this idea. <cit.> consider graphical models to represent multivariate dependencies in incomplete data, with the potential to incorporate relationships between missingness indicators, although their focus is primarily on addressing causal inference problems and not SM. Modelling and imputation methods that handle, among other things, certain types of SM have also been developed; for example, <cit.> developed a multi-level model for incomplete data, and <cit.> proposed a non-parametric imputation model for blocks of missing values. While these approaches may be adequate for the specific problems they seek to address, they do not provide a complete picture of the rich landscape SM describes, and consequently may belie some of the deeper questions it poses. For further work to proceed in the area of SM, we need to first define what a SM mechanism means. <cit.> introduced nine Grand Challenges to be addressed in relation to SM, of which defining SM is the first. It is also arguably the most pressing, as it is intrinsic to the other grand challenges and the direction of future research. In this paper, we identify and define SM mechanisms relating to multivariate missingness structures; that is, we focus on relationships among missingness indicator variables themselves, as well as relationships between missingness indicators and variables in the data. 
By doing so, we introduce a taxonomy that comprehensively details the mathematical framework underpinning SM, and embed this within Rubin's taxonomy, thus generalising the concept of a missing data mechanism to better deal with complex multivariate incomplete data settings. The purpose of this paper is not, therefore, to offer solutions for SM, but rather to provide a solid foundation that defines and characterises this phenomenon, by considering the myriad ways it can manifest. The remainder of this paper is organised as follows. Section <ref> revisits Rubin's taxonomy, and Section <ref> explores its limitations in relation to SM. In Section <ref>, we detail our SM taxonomy, structuring this to be aligned with the classic MCAR/MAR/MNAR decomposition of missing data mechanisms. Section <ref> covers remaining forms of SM that fall outside the categorisation given in Section <ref>. Section <ref> provides motivating simulation examples that illustrate the effects of SM on analyses and inferential validity. Section <ref> provides contextual settings where SM mechanisms can arise, using a large Clinico-Genomic Database (CGDB) for illustration. Section <ref> ends with some concluding remarks. § RUBIN'S TAXONOMY To formally define SM, we return to the definitions introduced by <cit.>, widely known as MCAR, MAR, and MNAR. To define these, suppose we have an individual-level data set 𝐗 = (X_1, X_2, …, X_p) comprising n subjects (n rows) and p variables (p columns), such that the data form an n× p matrix. Suppose we have a corresponding n× p missingness indicator matrix 𝐌 = (M_1, M_2, …, M_p), where M_ij = 1 if observation X_ij is missing and M_ij = 0 if X_ij is observed. We can then decompose 𝐗 into its observed and missing portions, 𝐗_obs and 𝐗_mis, respectively: 𝐗_obs = {X_ij | M_ij = 0} and 𝐗_mis = {X_ij | M_ij = 1}. It is typical in the missing data literature to conceptualise the missingness mechanism as a binary probability function p(𝐌|𝐗, γ), where γ is a parameter space governing 𝐌. The matrix 𝐌 has the same dimensions as the data matrix 𝐗, so whenever 𝐗 is multivariate (p>1), 𝐌 is also multivariate. Importantly, Rubin's definitions of MCAR, MAR, and MNAR do not consider the relationship between M_j, the missingness indicator vector for the jth variable, and 𝐌_-j, the missingness indicator vectors for all variables except the jth. In effect, it is implicitly assumed that M_j and 𝐌_-j are (conditionally) independent (for all j ∈{1, …, p}). This independence can be shown explicitly in the definitions, by expressing M_j in terms of 𝐗, γ, and now additionally 𝐌_-j. Data in 𝐗 are missing completely at random (MCAR) if, for each variable M_j, p(M_j|𝐌_-j, 𝐗, γ) = p(M_j|γ) ∀ j, 𝐗, γ. Data in 𝐗 are missing at random (MAR) if, for each variable M_j, p(M_j|𝐌_-j, 𝐗, γ) = p(M_j|𝐗_obs, γ) ∀ j, 𝐗_mis, γ. Data in 𝐗 are missing not at random (MNAR) if, for each variable M_j, p(M_j|𝐌_-j, 𝐗, γ) = p(M_j|𝐗_obs, 𝐗_mis, γ) ∀ j, γ. Here, 𝐗_mis also includes variables not necessarily included in the data set, such as an unobserved or latent variable. While traditionally 𝐌_-j is not explicitly conditioned on in the mechanism, we can see from the above definitions that the different mechanisms correspond to independence assumptions between M_j and 𝐌_-j, described in more detail in Section 3.2. When modelling the joint distribution of a multivariate data set, it is common to decompose it into a product of conditional univariate distributions, as is done, for example, to 𝐗 when fitting imputation models.
Thus, we can express 𝐌 in the following form: p(𝐌) = p(M_1, M_2, , M_p) = p(M_1) ∏_j=2^p p(M_j| M_j-1, , M_1). Given this formulation, it would suffice to say that M_j is independent of M_1, , M_j-1 (rather than 𝐌_-j) for all j. The next section delves into the limitations of this taxonomy, thus paving the way for a new taxonomy relating to SM. § LIMITATIONS WITH EXISTING TERMINOLOGY Here, we describe two primary limitations of existing terminology in relation to SM, thereby motivating the need for further terminology. §.§ Non-random missingness The definitions of MCAR, MAR, and MNAR assume that missingness, to a certain extent, is random. In the traditional domain of survey data sets, where subjects are selected at random and are then either missing or observed at random, this notion of random missingness suffices. Yet in many complex data sets – where individuals are neither selected at random nor are missing at random – the definition begins to break down. Surveys, which are designed with statistical analysis in mind, typically ensure that subjects are randomly selected from a well-defined sampling frame that is representative of the population at large. In a non-survey environment, however, data do not necessarily originate from well-defined populations. There has recently been a drive to utilise alternative data sources, such as administrative data, which often include missing values that are not missing randomly – but with certainty. Similarly, data sets arising from fusing or linking multiple data sources often include subjects who were excluded from at least one of the data sources, resulting in values missing with certainty in the linked data sets (see , for an early approach for dealing with this). This notion of “missing with certainty” can be tied to inclusion probabilities. Whenever a subject has a zero inclusion probability, it will result in unit non-response that is non-random. Identifying these cases relies on knowledge relating to the construction of the data set, such as ascertaining how subjects were selected, what the underlying population was, and whether any data linkage took place. §.§ Dependencies between missingness indicators The missingness indicator matrix 𝐌 comprises just zeros and ones. As seen in Section 2, 's taxonomy implicitly assumes that the columns of this matrix – which correspond to variables' missingness indicator vectors – are (conditionally) independent: that is, missingness in one variable is (conditionally) independent of missingness in other variables. Specifically: Under MCAR: M_j M_k, ∀ j ≠ k with j, k ∈{1, …, p}, Under MAR: M_j M_k |𝐗_obs, ∀ j ≠ k with j, k ∈{1, …, p}, Under MNAR: M_j M_k |𝐗, ∀ j ≠ k with j, k ∈{1, …, p}, where denotes independence. Yet, in practice, these assumptions may be violated: there may be more complex dependencies between certain variables' missingness vectors. That is, there may be a multivariate structure to the matrix 𝐌. The notion of dependencies between the variables M_1, , M_p adds a layer of complexity when modelling missingness mechanisms, giving scope for a range of complex interactions between 𝐌 and 𝐗. Essentially, rather than considering, say, M_j as a function of 𝐗 alone, we can consider M_j to be a function of 𝐗 and 𝐌_-j. To see how such dependencies arise in practice, let us consider the following example. Suppose batch failure occurs – by a component failing, for example – preventing a series of measurements from being collected. 
This would result in measurements prior to the failure being observed, while measurements post-failure would be missing. § A TAXONOMY FOR STRUCTURED MISSINGNESS In this section, we set out our taxonomy, which, for a given variable j, hinges on the relationship between M_j, 𝐌_-j, and 𝐗. Existing missingness mechanisms focus on the relationship between M_j and 𝐗; for example, the taxonomy of MCAR, MAR, and MNAR broadly looks at whether M_j depends on 𝐗, 𝐗_obs, or neither. By considering interactions between M_j and 𝐌_-j, too – that is, by considering that missingness in other variables can affect missingness in a variable – a dimension of SM cases relating to multivariate missingness mechanisms is formed. We continue to use the terms (i) MCAR, (ii) MAR, and (iii) MNAR, for when: (i) M_j is independent of X, (ii) M_j depends on 𝐗_obs, and (iii) M_j depends on 𝐗 (𝐗_mis and 𝐗_obs). We term the mechanisms where M_1, , M_p are independent as the “unstructured” cases, and the mechanism where dependencies exist among M_1, , M_p as the “structured” cases. In statistical modelling, and missing data methods more widely, it is generally assumed that the rows and columns (subjects and variables) of the data matrix – and hence the missingness indicator matrix – are interchangeable. However, many non-survey data sets have a natural, time-based ordering, as data are often collected in a sequential fashion. Therefore, while a variety of structural forms may be observed, we distinguish specifically between two forms of structure: block structure and sequential structure. The former relates to the case where missingness in a variable can be affected by any other variables; the latter relates to the case where missingness in a variable can only be affected by missingness in earlier variables. When considering the missingness between a pair of variables, we can also distinguish between a positive and negative SM relationship. The former is the case where missingness in one variable increases the probability of missingness in another variable. The latter is the case where missingness in one variable decreases the probability of missingness in another variable. Finally, for each of the mechanisms we define, we give an example of how such a mechanism could arise in a clinical setting. We also use a directed graph (DG) to visualise the relationships associated with each mechanism, not altogether dissimilar to the directed graphs (DGs) in <cit.> for describing causal relationships in incomplete data. §.§ Cases relating to MCAR We begin by defining cases of SM that relate to MCAR, in the sense that there are no associations between M_j and 𝐗 for each j ∈{1,,p}; that is, missingness depends on neither the observed nor the missing portion of the data. We do assume, however, that associations exist between the missingness indicators M_1, , M_p. We express mechanisms for M_j, therefore, in terms of 𝐌_-j, and γ (where γ is used simply to represent the parameters characterising the mechanism). §.§.§ MCAR – Unstructured (MCAR-U) The MCAR-U mechanism is what is currently known as MCAR. It is the case where the missingness mechanism for M_j (j∈{1, , p }) is independent of both the observed (𝐗_obs) and missing portions of the data (𝐗_mis), and also independent of missingness in other variables (𝐌_-j). A missingness mechanism 𝐌=(M_1, , M_p) is MCAR-U if, for each variable M_j, p(M_j|𝐌_-j, 𝐗, γ) = p(M_j|γ) for all 𝐗, γ. 
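To make the MCAR-U definition above concrete, the short sketch below generates an MCAR-U missingness matrix for a synthetic data set and checks empirically that the columns of 𝐌 carry no information about one another, or about the underlying values. This is an illustrative simulation only; it is not the simulation study referred to later in the paper.

import numpy as np

rng = np.random.default_rng(1)
n, p = 10_000, 4
X = rng.normal(size=(n, p))                 # complete synthetic data matrix

# MCAR-U: each M_j is Bernoulli(gamma_j), independent of X and of M_{-j}
gamma = np.array([0.1, 0.2, 0.3, 0.4])
M = rng.random((n, p)) < gamma              # True = missing

# Empirical check: P(M_2 = 1 | M_1 = 1) should equal P(M_2 = 1 | M_1 = 0)
p11 = M[M[:, 0], 1].mean()
p10 = M[~M[:, 0], 1].mean()
print(f"P(M2=1 | M1=1) = {p11:.3f},  P(M2=1 | M1=0) = {p10:.3f}")

# ... and missingness carries no information about the underlying values
print(f"mean of X_2 where M_2=1: {X[M[:, 1], 1].mean():+.3f}  (overall {X[:, 1].mean():+.3f})")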
MCAR-U missingness mechanisms have a tendency to display structure, even though there are no dependencies within 𝐌 itself. Rearranging variables and subjects – in order of increasing number of missing values, for example – often creates the impression of structure. We call this apparent structure. The DG in Figure <ref> illustrates the relationship between the X_ij and M_ij in an MCAR-U mechanism. This shows, as do all subsequent DGs that follow, three observations of a single variable, say, X_11, X_12, and X_13, along with their missingness indicators M_11, M_12, and M_13. The X_ij and M_ij are represented by blue squares and red diamonds, respectively; with solid red diamonds denoting the case where X_ij is observed (M_ij=0), and uncoloured red diamonds denoting the case where X_ij is missing (M_ij=1). The blue squares are always solid, representing the true underlying value. We always assume X_11 (the left blue square) is observed, and that X_12 and X_13 (the centre and right blue squares) are missing; thus it follows that the left red diamond is always coloured and the centre and right red diamonds are uncoloured. When illustrating sequential structures, we assume, without loss of generality, that X_11, X_12, and X_13 are temporally ordered. Edges (for which there are none in this first instance) indicate relationships between the variables, with directed edges (arrows) indicating causal effects and bi-directed edges/arrows indicating associative effects. Dashed arrows indicate a probabilistic relationship and solid arrows indicate a deterministic relationship. This DG shows no association between the X_ij and M_ij, because missingness is unstructured and MCAR. To give an example of how an MCAR-U mechanism can arise in a clinical setting, suppose the matrix 𝐗 relates to laboratory test results; that is, suppose each row gives a particular subject's results, and each column gives the results for a particular test. Suppose some tests randomly fail owing to entirely technical reasons. Then, M_j, the missingness indicator vector for the jth variable (jth test), would be independent of both the test results themselves (X) and the missingness indicators of other variables (𝐌_-j). §.§.§ MCAR – Weak Structure (MCAR-WS) As we are considering cases of SM related to MCAR, we continue to assume no relationship between M_j and 𝐗 for each j∈{1, , p }. We now assume, however, that a relationship exists between M_j and 𝐌_-j. The MCAR-WS mechanism effectively assumes that missingness in at least one variable affects the probability of missingness in another variable. As an example, suppose that if X_1 is missing (M_1=1), the probability of missingness in X_2 (M_2=1) is q_1; whereas, if X_1 is observed (M_1=0), the probability of missingness in X_2 is q_2. Thus, there is a probabilistic relationship, which we term weak structure. A missingness mechanism 𝐌=(M_1, , M_p) is MCAR-WS if, for each variable M_j, p(M_j|𝐌_-j, 𝐗, γ) = p(M_j|𝐌_-j, γ) for all 𝐗, γ. Thus, unlike with MCAR-U, each M_j now depends on 𝐌_-j. Definition <ref> can be said to relate to block structure: it implicitly assumes that the columns in 𝐌 do not have a given ordering. A sequential structure relies on an ordering of the variables, X_1, , X_p, with missingness in X_j only depending on M_1, , M_j-1, and not on M_j+1, , M_p; that is, when missingness depends only on variables appearing earlier in the sequence. 
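As an illustration of the sequential case, the following R sketch (with toy probabilities of our own choosing) simulates missingness indicators for three ordered visits, where missing one visit raises the probability of missing the next; pushing that conditional probability to one gives the deterministic, strong-structure case introduced in the next subsection.

set.seed(2)
n <- 1000

# Baseline probability of missing a visit, and the (higher) probability once the
# previous visit was already missed: a sequential weak structure among M_1, M_2, M_3
p_base <- 0.10
p_after_miss <- 0.60   # set to 1 to obtain the deterministic, strong-structure case

M <- matrix(0L, nrow = n, ncol = 3, dimnames = list(NULL, paste0("M", 1:3)))
M[, 1] <- rbinom(n, 1, p_base)
for (j in 2:3) {
  p_j <- ifelse(M[, j - 1] == 1, p_after_miss, p_base)
  M[, j] <- rbinom(n, 1, p_j)
}

# Missing the previous visit makes missing the current one more likely
prop.table(table(previous = M[, 1], current = M[, 2]), margin = 1)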
The sequential structure, then, can be viewed as a special case of block structure, when missingness in one variable depends only on certain variables and not all variables. There are similarities, too, between sequential missingness structures and missing data in longitudinal studies, which also have a given ordering to the missingness <cit.>. For example, 's definitions of completely random drop-out (CRD), random drop-out (RD), and informative drop-out (ID), could potentially have useful interpretations here. For a practical example, again suppose the matrix 𝐗 denotes subjects' test results, this time collected over several visits to the clinic. An MCAR-WS mechanism would occur if, once a subject misses a visit (say M_j=1), they are more likely to miss subsequent visits (the probabilities p(M_j+1=1), p(M_j+2=1), increase). In this instance, a sequential structure would arise, but if there was no ordering to the variables, it would result in a block structure. The left frame of Figure <ref> presents an MCAR-WS mechanism with a block structure (arrows point in both directions). The right frame presents a sequential structure (arrows point in one direction). Crucially, the arrows are among the missingness indicators only. §.§.§ MCAR – Strong Structure (MCAR-SS) The MCAR-SS mechanism is when missingness in one or more variables implies missingness in another variable with certainty. Revisiting our earlier example, where q_1 is the probability of missingness in X_2 given that X_1 is missing, strong structure is the case where q_1 is equal to 1. Hence, MCAR-SS can be viewed as a special case of MCAR-WS, where the relationship is no longer probabilistic but deterministic. Strong structure does not only include the case of a positive relationship in missingness across variables – that is, it is not just the case where missingness in one or more variables implies missingness in another – it also includes the case of a negative relationship: that is, the case where the observing of one or more variables implies missingness in another with certainty. As before, since we are broadly dealing with MCAR, we continue to assume no relationship between M_j and 𝐗 for each j∈{1, , p }. A missingness mechanism 𝐌=(M_1, , M_p) is MCAR-SS if, for each variable M_j, there exists i ∈ A ⊆{1, , n}, such that p(M_ij=1|𝐌_-j, 𝐗, γ) = p(M_ij=1|𝐌_-j, γ) =1 for all 𝐗, γ. Thus, there is an element of certainty with MCAR-SS that is not present in MCAR-WS. We distinguish between block and sequential structure, where the latter assumes an underlying ordering of the variables which the former does not have. To see the difference between MCAR-SS and MCAR-WS from a practical perspective, we revisit the previous example: MCAR-SS would arise if, once a subject misses a visit (M_j=1), they are dropped from the study (M_j=1 ⇒ M_j+1=1, M_j+2=1,). The DGs in Figure <ref> also present the difference between MCAR-SS and MCAR-WS: the arrows of the DG are now solid – not dashed – which indicates certainty. A key point to note here is that traditionally MCAR is viewed as a relatively simple, uninteresting scenario. However, what we show here is that, even with MCAR mechanisms, when considering the additional dimension of SM, the potential for a range of settings exists, varying in complexity. §.§ Cases relating to MAR We now move on to cases of SM that relate to MAR. The setup will resemble that of the previous section, but the difference is that we now consider interactions between 𝐗_obs (the observed portion of the data) and 𝐌. 
Throughout this section, therefore, we will express mechanisms in terms of 𝐌_-j, 𝐗_obs, and γ. An important point with these MAR cases is that X_ij cannot have a direct effect on its corresponding M_ij, as this would relate to MNAR. §.§.§ MAR – Unstructured, Probabilistic (MAR-UP) As with the MCAR cases of SM, we begin with the unstructured MAR mechanisms, where M_j depends on the observed data 𝐗_obs, but does not depend on missingness in the other variables 𝐌_-j. We first consider the case of a probabilistic mechanism – the notion of which is similar to that of weak structure from MCAR-WS – which is when 𝐗_obs affects the probability that M_ij=1. A missingness mechanism 𝐌=(M_1, , M_p) is MAR-UP if, for each variable M_j, p(M_j|𝐌_-j, 𝐗, γ) = p(M_j|𝐗_obs,γ) for all 𝐗_obs, γ. The MAR-UP definition is identical to the current definition of MAR, and, although unstructured, can display apparent structure, especially if a similar set of variables is influencing missingness in each variable. MAR-UP may arise, for example, if physicians are less inclined (without there being consistency of decisions) to give a particular test X_j to elderly patients. In this way, M_j (X_j's missingness indicator vector) depends only on patients' ages (𝐗_obs) and not on missingness in other variables (𝐌_-j). §.§.§ MAR – Unstructured, Deterministic (MAR-UD) Similarly, in the case of deterministic unstructured MAR mechanisms, 𝐗_obs directly dictates whether M_ij=1. Thus, it is a special case of MAR-UP where values are missing with certainty. A missingness mechanism 𝐌=(M_1, , M_p) is MAR-UD if, for each variable M_j, there exists i ∈ A ⊆{1, , n}, such that p(M_ij=1|𝐌_-j, 𝐗, γ) = p(M_ij=1|𝐗_obs,γ) = 1 for all 𝐗_obs, γ. The MAR-UD mechanism would arise, for example, if physicians never give a particular test to elderly patients. Now, patients' ages (𝐗_obs) have a deterministic effect on M_j. The DGs in Figure <ref> illustrate the basic structure of the MAR-UP and MAR-UD mechanisms. In both instances, there is now an effect of X_11 (left blue square), which is observed, on M_12 and M_13 (uncoloured red diamonds); with MAR-UP the arrows are dashed, and with MAR-UD the arrows are solid. §.§.§ MAR – Weak Structure (MAR-WS) We now additionally assume that relationships exist between M_j and 𝐌_-j, as well as between M_j and 𝐗_obs. As before, we distinguish between a weak structure, when 𝐌_-j and 𝐗_obs taken together affect the probability distribution of M_j, and strong structure, when 𝐌_-j and 𝐗_obs directly determine whether M_ij=1. Once again, we also distinguish between block and sequential structure, where the latter assumes an underlying ordering to the variables and missingness depends only on variables earlier in the sequence. A missingness mechanism 𝐌=(M_1, , M_p) is MAR-WS if, for each variable M_j, p(M_j|𝐌_-j, 𝐗, γ) = p(M_j|𝐌_-j, 𝐗_obs, γ) for all 𝐗_obs, γ. MAR-WS may arise if physicians do not always give a particular test (X_j) to elderly patients – thus 𝐗_obs has an effect on M_j – and which in turn then means they are less likely to be invited back for further tests (M_j=1 increases the probabilities p(M_j+1=1), p(M_j+2=1),). The left and right frames of Figure <ref> present a MAR-WS mechanism with a block (causal arrows point in both directions) and sequential structure (causal arrows point in one direction), respectively. 
As with the DG for MAR-UP (Figure <ref>), there is a probabilistic causal arrow from X_11 (observed; left blue square) to M_12 and M_13 (uncoloured red diamonds); and, in addition, there are now relationships between M_11, M_12, and M_13. §.§.§ MAR – Strong Structure (MAR-SS) The MAR-SS mechanism is the case where 𝐗_obs and 𝐌_-j, taken together, directly dictate whether M_ij=1. Thus, it is the deterministic case of MAR-WS. A missingness mechanism 𝐌=(M_1, , M_p) is MAR-SS if, for each variable M_j, there exists i ∈ A ⊆{1, , n}, such that p(M_ij=1|𝐌_-j, 𝐗, γ) = p(M_ij=1|𝐌_-j, 𝐗_obs, γ)=1 for all 𝐗_obs, γ. To see how the mechanism MAR-SS can arise, we reconsider the previous example. Now suppose tests are only granted to subjects below a certain age (where age is observed in 𝐗_obs), and that once a subject misses one test (M_j=1), they are automatically dropped from a study (M_j=1 ⇒ M_j+1=1, M_j+2=1,). The left and right frames of Figure <ref> give a MAR-SS mechanism with a block (arrows between M_11, M_12, and M_13 point in both directions) and sequential structure (one directional arrows), respectively. The difference compared with the MAR-WS case (Figure <ref>) is that the arrows are now solid. §.§ Cases relating to MNAR Lastly, we move on to cases of SM that relate to MNAR. We now assume that the X_ij do, indeed, have a direct effect on the corresponding M_ij. In a similar way in which Section <ref> builds on Section <ref>, this section builds on Section <ref>, with the difference being that mechanisms now additionally include the missing portion of the data 𝐗_mis in their expressions; that is, we express mechanisms in terms of 𝐌_-j, 𝐗_obs, 𝐗_mis, and γ. §.§.§ MNAR – Unstructured, Probabilistic (MNAR-UP) In this probabilistic instance, which is identical to the current definition of MNAR, 𝐗_obs and 𝐗_mis (or more simply, 𝐗) affect the probability that M_ij=1. A missingness mechanism 𝐌=(M_1, , M_p) is MNAR-UP if, for each variable M_j, p(M_j|𝐌_-j, 𝐗, γ) = p(M_j|𝐗_obs,𝐗_mis,γ) for all 𝐗_obs, 𝐗_mis, γ. To give an example of how MNAR-UP may arise, suppose X_j gives peak expiratory flow (PEF) measurements, a test commonly given to patients with asthma. A particularly bad case of asthma – a case which, if the test is carried out, would return a low PEF result (𝐗_mis) – may preclude even taking a PEF test (for example, the patient may be too ill to visit the clinic), thus causing a missing test result. §.§.§ MNAR – Unstructured, Deterministic (MNAR-UD) In the deterministic instance of MNAR-UD, 𝐗_obs and 𝐗_mis directly dictate whether M_ij=1. A missingness mechanism 𝐌=(M_1, , M_p) is MNAR-UD if, for each variable M_j, there exists i ∈ A ⊆{1, , n}, such that p(M_ij=1|𝐌_-j, 𝐗, γ) = p(M_ij=1|𝐗_obs, 𝐗_mis, γ) =1 for all 𝐗_obs,𝐗_mis, γ. Continuing with the asthma example, MNAR-UD can arise, for example, if PEF measurements falling below a certain value are incorrectly recorded as being missing. The DGs in Figure <ref> illustrate the basic structure of the MNAR-UP and MNAR-UD mechanisms. In both instances, we now also have causal effects from X_12 and X_13 (centre and right blue squares) to their own missingness indicators M_12 and M_13 (uncoloured red diamonds). In the left and right frames, we have a block and sequential structure, respectively. §.§.§ MNAR – Weak Structure (MNAR-WS) Again, we distinguish between weak and strong structure. In this instance, weak structure is when M_j depends on both 𝐌_-j and 𝐗. 
A missingness mechanism 𝐌=(M_1, , M_p) is MNAR-WS if, for each variable M_j, p(M_j|𝐌_-j, 𝐗, γ) = p(M_j|𝐌_-j, 𝐗_obs,𝐗_mis, γ) for all 𝐗_obs,𝐗_mis, γ. For the asthma example, the mechanism MNAR-WS can arise if, once a patient misses a visit to the clinic due to a severe case of asthma – an occasion on which a low PEF result (𝐗_mis) would have been returned had they attended – they begin to lose contact with the clinic (they become disengaged), leading to a lower likelihood of further testing (increasing the probabilities p(M_j+1=1), p(M_j+2=1),). The DGs in Figure <ref> present the structure of an MNAR-WS mechanism. As with MNAR-UP and MNAR-UD (Figure <ref>), we have causal effects from X_12 and X_13 (centre and right blue squares) to their own missingness indicators M_12 and M_13 (uncoloured red diamonds). In addition, we have relationships among the missingness indicators M_11, M_12, and M_13 (red diamonds). In the left and right frames, the arrows between the missingness indicators are bi- and one-directional, representing block and sequential structures, respectively. §.§.§ MNAR – Strong Structure (MNAR-SS) Finally, the MNAR-SS mechanism is the case where both 𝐌_-j and 𝐗 directly dictate whether M_ij=1; it is a special case of the weak structure. A missingness mechanism 𝐌=(M_1, , M_p) is MNAR-SS if, for each variable M_j, there exists i ∈ A ⊆{1, , n}, such that p(M_ij=1|𝐌_-j, 𝐗, γ) = p(M_ij=1|𝐌_-j, 𝐗_obs, 𝐗_mis, γ) =1 for all j, 𝐗_obs, 𝐗_mis, γ. The mechanism MNAR-SS can arise if a severe case of asthma (𝐗_mis) automatically precludes taking a PEF test (M_j=1), and which, in turn, automatically precludes all further testing (M_j=1 ⇒ M_j+1=1, M_j+2=1,). The DGs in Figure <ref> give a MNAR-SS mechanism. The difference compared to MNAR-WS (Figure <ref>) is that arrows are now solid. §.§ Summary of SM cases So far, we have built on 's taxonomy to describe and characterise a range of SM mechanisms (summarised in Figure <ref>) that vary according to the following dimensions: * The relationship between M_j (for all j∈{1, , p }) and the observed and missing portion of the data, 𝐗_obs and 𝐗_mis, which broadly relates to the concepts of MCAR, MAR, and MNAR. * Whether missingness is unstructured or structured; that is, whether M_j depends on 𝐌_-j. * Whether the multivariate structure of missingness is weak or strong (probabilistic or deterministic); that is, whether 𝐌_-j influences or directly determines M_j. * Whether missingness occurs in either a block or sequential structure; that is, whether only previously observed variables M_1, , M_j-1 affect M_j (given an underlying ordering to the variables). § STRUCTURED MISSINGNESS: OTHER CASES While the definitions in Section <ref> cover the possible interactions between M_j, 𝐌_-j, and 𝐗, they do not exhaustively cover all cases of SM. In this section, we consider alternative ways in which structure can manifest in the missingness indicator matrix. §.§ The presence of a subject effect The missing data literature typically assumes that two subjects with the same characteristics have the same probability of missingness for a particular variable. Yet, in practice, this is unlikely to be the case; for example, some subjects are inherently more inclined to attend a clinic than others. 
This heterogeneity (one of the routes to SM considered by ) can be accounted for via the presence of a subject effect, which can be conceptualised mathematically through a random effect S_i (i∈{1, , n }) unique to individual i and unrelated to any variables observed in the data. As an example, in the presence of a subject effect, Definition <ref> for MCAR-U can be amended to: A missingness mechanism 𝐌=(M_1, , M_p) is MCAR-U with a subject effect if, for each variable M_j, p(M_ij=1|𝐌_-j, 𝐗, S_i, γ) = p(M_ij| S_i,γ) for all 𝐗, S_i, γ. Note, we could add a subject effect to any of the mechanisms considered so far. The DG corresponding to MCAR-U with a subject effect is presented in Figure <ref>. The subject effect is denoted by the green circle. To add another layer of complexity, we could include dependencies between the S_i. Dependencies could arise, for instance, through clustering. In a cluster-randomised controlled trial, for example, children from the same school class, or subjects from the same region in a multi-centre trial, may have correlated random effects. §.§ Logical Missingness There are also cases of missingness that are reminiscent of structural zeros in the categorical data literature, where counts in contingency tables comprise of two sorts: random (or sampling) zeros and structural zeros (see, for example, ). The former are zeros that arise through the random nature of sampling; if a similar sample is taken, such counts may not be necessarily zero. The latter are zero counts for which there is a logical reason why they must be zero. A similar concept holds in relation to missingness. We can further decompose cases into those for which there is a logical or biological reason why missingness occurred; that is, missingness may occur because it is fundamentally not possible for an entry in the data matrix to be observed; for example, questions relating to pregnancy are only relevant to women. Logical missingness is unlike other cases of missingness, where underlying data values exist and make sense, but where the uncertainty of the data collection process – as well as the uncertainty in the data values themselves – result in particular values being unobserved. For example, a protocol may specify that only the most severe patients receive a certain test due to its invasive nature. By contrast, cases of logical missingness are when there is a fundamental reason for the missingness. For example, a prostate specific antigen test would not be applicable to females resulting in these test results being logically missing. In general, when faced with logical missingness, we can either ignore or weight out the values which are logically missing. In this instance, we would effectively be assuming that a missing mechanism is composed of (at least) two underlying mechanisms, which leads us on to the notion of multiple mechanisms. §.§ Multiple SM mechanisms Multiple SM mechanisms are likely to be the norm in data sets developed at scale. Consider the case, for example, where there is a probabilistic relationship between the missingness indicators but a deterministic relationship with the data. For example, suppose 𝐌_-j affects the probability that M_ij=1, but 𝐗_obs dictates, in a deterministic fashion, whether M_ij=1. This is essentially the union between the MCAR-WS and the MAR-SS mechanisms: 𝐗_obs first dictates whether certain values will be missing; and then these missing values influence missingness in other variables. 
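To make this combination concrete, the short R sketch below (with a threshold and probabilities chosen purely for illustration) lets an observed variable deterministically switch off the measurement of X_1, after which missingness in X_1 probabilistically increases the chance that X_2 is also missing.

set.seed(6)
n <- 1000
age <- rnorm(n, mean = 50, sd = 15)        # fully observed

# Deterministic component: X_1 is never measured for subjects below a threshold age
M1 <- as.integer(age < 40)

# Probabilistic component: missing X_1 raises the chance that X_2 is also missing
M2 <- rbinom(n, 1, ifelse(M1 == 1, 0.7, 0.1))

prop.table(table(M1, M2), margin = 1)      # the structure is visible in M alone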
An example of this in a clinical setting is when tests are only performed at clinic visits and only subjects above a certain age are invited to regular visits, but where other subjects may receive a test when they attend the clinic for other reasons. The DG in Figure <ref> shows how in this instance there is a weak structure (dashed line) between the M_ij but a strong structure between 𝐗_obs and the M_ij. It is straightforward to see how further similar mechanisms can arise through combining multiple mechanisms. For example, if 𝐗_obs only influences whether certain values will be missing, but then these missing values dictate whether certain values will be missing in other variables, we would have the union between the MAR-WS and the MCAR-SS mechanisms.
§ EMPIRICAL ILLUSTRATIONS OF THE IMPACT OF SM ON INFERENCE AND PREDICTION
We have introduced new definitions to cover the cases where the missingness mechanism for M_j depends on 𝐌_-j, yet a glaring question remains: does SM really matter? That is, does existing methodology, and statistical software, already satisfactorily deal with SM? In this section, we demonstrate, through simulation examples, the impact of SM on inferential validity and prediction performance, showing that strong structures, in particular, pose unique challenges, but also sometimes unique opportunities to exploit. We consider three simulation examples, looking at structure in relation to MCAR, MAR, and MNAR, respectively. This section is by no means intended to be a comprehensive guide to dealing with SM, which is beyond the scope of this paper. Rather, its intention is to highlight the impact of SM from a practical perspective.
§.§ Simulation 1: Prediction in the presence of structured and unstructured MCAR mechanisms
When assessing the impact of SM on the performance of statistical methods, there are two broad aspects to consider: predictive and inferential performance. In this first simulation, we highlight the impact of SM on prediction when considering structured and unstructured MCAR examples. Specifically, we consider the effect of various factors relating to missingness structure on prediction, including:
* Strength of missingness structure (strong structure vs. weak structure);
* Sequential and block structures;
* Missingness within the test data set;
* Correlation within the data matrix 𝐗.
We first generate simulated data sets with p=10 variables and n=1100 subjects – 100 rows are used for training; 1000 rows are used as the test data set – distributed according to a multivariate normal (MVN) distribution: 𝐗 ∼ MVN_10(μ, Σ), where μ = (0, 0, …, 0)^⊤ and Σ is the 10 × 10 matrix with 1 on the diagonal and ρ in every off-diagonal entry. For the parameter ρ, which defines the level of correlation within the data matrix 𝐗, we consider two values: ρ=0, which equates to assuming the variables are independent; and ρ=0.4, which equates to assuming equal, non-zero covariances between all pairs of variables. We next impose a range of unstructured and structured missingness mechanisms on 𝐗 (a small sketch of this data-generating step is given after the list below). These include four MCAR-U mechanisms, labelled MCAR-U (1)–(4), which have different missingness rates across the p=10 variables. These are:
* MCAR-U (1): Variables 1–10 have 45% missingness.
* MCAR-U (2): Variable 1 has 0% missingness, variable 2 has 10% missingness, variable 3 has 30% missingness, ..., and variable 10 has 90% missingness.
* MCAR-U (3): Variables 1–5 have 0% missingness; variables 6–10 have 90% missingness.
* MCAR-U (4): Variable 1 has 0% missingness; variables 2–10 have 50% missingness.
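A minimal sketch of the data-generating step described above is shown below; it uses MASS::mvrnorm (one of several possible choices) and reproduces only the MCAR-U (1) mechanism. The code is illustrative rather than the authors' own implementation.

library(MASS)
set.seed(3)

n <- 1100; p <- 10; rho <- 0.4         # rho <- 0 for the uncorrelated setting

Sigma <- matrix(rho, nrow = p, ncol = p)
diag(Sigma) <- 1
X <- mvrnorm(n, mu = rep(0, p), Sigma = Sigma)
colnames(X) <- paste0("X", 1:p)

train_idx <- 1:100                     # 100 training rows, 1000 test rows
test_idx  <- 101:n

# MCAR-U (1): 45% of entries missing in every variable, independently
M <- matrix(rbinom(n * p, 1, 0.45), nrow = n, ncol = p)
X_incomplete <- X
X_incomplete[M == 1] <- NA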
We also consider MCAR-WS and MCAR-SS mechanisms (with both block and sequential structures); for comparison, we consider a complete data set and a case of block missingness (block missingness in the sense that we fully observe some subjects and never observe the others). For every structure except the complete case, there is approximately 45% missingness, and therefore the only difference between the mechanisms is in the distribution of the missing values, that is, the missing data pattern. The various missingness structures are listed and shown visually in Figure <ref>, with blue tiles representing observed values and orange and grey tiles representing missing values of 𝐗 (we go on to impute the orange tiles and delete the grey tiles). We multiply impute missing values using fully conditional specification (FCS), an approach which has various names, including Sequential Regression Multiple Imputation (SRMI) <cit.> and Multivariate Imputation via Chained Equations (MICE) <cit.>, and which is implemented in the R package mice. We use the default settings, which includes the use of predictive mean matching (), 5 iterations of the Gibbs sampler (), and the generation of m=5 imputed data sets (). We then use these imputation models to multiply impute missing values in our test set, too; this is achieved through the argument in mice. We consider the quantity Y, linked to 𝐗 via the following analysis (substantive) model, Y= α_0 + ∑_i=1^10α_i X_i + ε_i where ε∼ N(0,σ^2), and where σ^2=4. We use our training data to estimate: (i) the imputation model parameters, and (ii) the analysis model's parameters α=(α_0, , α_10), which we then use to predict Y in our test set. The true values for α are α_i=1 for all i. We can then compare the predictions for Y with the true values, and compute the mean squared error (MSE) for our test set as a measure of predictive performance. The total error observed can be decomposed into two sources, error due to inaccuracies in the model fit, that is, the estimation of α, and error due to inaccuracies in imputing values into the test data set. To distinguish between these two sources of error, we also run the simulation with no missing values in the test set. To briefly summarise, therefore, there are three factors at work in the simulation: * The correlation between the variables in 𝐗 (either ρ=0 or ρ=0.4). * The type of MCAR missingness mechanism imposed on 𝐗 (11 examples considered; see Figure <ref>). * Whether the test set is complete or includes missing values (complete or missing). This returns 2× 11 × 2=44 combinations of factors. The MSEs for these combinations, over n_sim=1000 simulation runs, are presented in the boxplots in Figure <ref>. The results for when the test set is (i) complete or (ii) includes missing values are given in the red and blue boxplots, respectively. From a general perspective, the structured mechanisms return greater error than the unstructured mechanisms when the variables in 𝐗 are correlated rather than uncorrelated (the bottom four rows vs. the top seven rows). This result is far from obvious: typically for MCAR mechanisms, values in 𝐗 are seen to neither influence nor be influenced by the missing values. Yet here, the missing data pattern is clearly affecting the results of an analysis performed on 𝐗. In a similar way, for the structured mechanisms there is greater error when the variables in 𝐗 are correlated rather than uncorrelated (right vs. left boxplots). 
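Continuing the data-generation sketch above, the following outline shows one way the imputation-and-prediction pipeline used in this simulation could be coded with mice. The use of the ignore argument (available in recent versions of mice) to impute the test rows without letting them inform the imputation models is our reading of the description above, since the exact argument is not named in the text; the outcome is generated from the stated analysis model with all coefficients equal to 1 and σ^2 = 4.

library(mice)
set.seed(4)

# Outcome from the stated analysis model: Y = alpha_0 + sum_i alpha_i X_i + eps,
# with every coefficient equal to 1 and error standard deviation 2 (sigma^2 = 4)
Y <- 1 + as.vector(X %*% rep(1, p)) + rnorm(n, sd = 2)

dat <- data.frame(X_incomplete)

# Impute training and test rows together, but fit the imputation models on the
# training rows only (here via the ignore argument; an assumption on our part)
imp <- mice(dat, m = 5, method = "pmm", maxit = 5,
            ignore = seq_len(n) %in% test_idx, printFlag = FALSE)

mse_by_imputation <- sapply(seq_len(imp$m), function(k) {
  completed <- complete(imp, k)
  fit  <- lm(Y[train_idx] ~ ., data = completed[train_idx, ])
  pred <- predict(fit, newdata = completed[test_idx, ])
  mean((Y[test_idx] - pred)^2)
})
mean(mse_by_imputation)   # test-set MSE, averaged over the m imputed data sets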
This too is not obvious for MCAR mechanisms, where 𝐗 and 𝐌 are independent. The MCAR-SS (block) mechanism, when we impute missing values in the test set (blue boxplot) and when ρ=0.4, results in the greatest overall error. By contrast, however, the other three boxplots relating to MCAR-SS (block) show relatively small amounts of error. This illustrates the danger of strong structures, in the sense that key relationships between variables can easily be lost. The MSE metric can be expressed as squared bias plus variance; in this instance, as expected, the error is arising through higher variances rather than bias. The overriding point from this simulation is that missingness structure clearly has an impact on predictive ability – and this relationship is not obvious from the outset. More generally, the effect of missingness mechanisms on predictive ability is an underexplored area of research (see ) for further investigation in this area), especially the effect of different types of SM. §.§ Simulation 2: An inference example involving a structured MAR mechanism We now demonstrate how SM can affect inferences, especially when dealing with strong structures. We begin by generating data for n=1000 subjects, as follows: X_1 ∼ N(0, 1) X_2 | (X_1=x_1) ∼ N(2x_1, 1) X_3 | (X_2=x_1) ∼ N(1+x_1+2x_2, 1). We let M_1, M_2, and M_3 denote the corresponding missingness indicator vectors for X_1, X_2, and X_3. We impose a MAR missing data mechanism on X_2 by supposing that values in X_2 depend on X_1, p(M_2=1 | x_1 ) =exp(2x_1)/1+exp(2x_1). We then impose a SM mechanism on X_3 by assuming that M_3 depends only on M_2: p(M_3=1 | M_2) = q if M_2=0 0 if M_2=1. The DG in Figure <ref> illustrates the relationships between X_1, X_2, X_3,M_1, M_2, and M_3. By considering a range of q (from 0 to 1) for each simulated data set, we can assess the effect of structure on inferential validity. The parameter q links to the notion of weak and strong SM structure. When q=0, there is no effect of M_2 on M_3, representing an unstructured mechanism (MAR-U). When q ∈ (0,1), the effect of M_2 on M_3 represents weak structure (MAR-WS). And when q=1, the effect of M_2 on M_3 represents strong structure (MAR-SS). We again multiply impute missing values using mice. Apart from switching to Bayesian normal linear regression imputation models () to accurately reflect the true relationship in (<ref>), we use mice's default settings, which includes the generation of m=5 imputed data sets (). The default setting for the algorithm's number of iterations is . To show that this is insufficient for larger values of q, we also repeat the simulation with . We consider the following analysis model, a normal linear regression model of X_2 on X_1: X_3 =β_0+ β_1 X_1 + β_2 X_2 + ε, where ε is a N(0, σ^2) random variate. We know from the formulation in (<ref>) that the true, underlying values for these regression coefficients are β = (β_0,β_1, β_2) = (0,1,2). After fitting this model to each imputed data set and applying the multiple imputation (MI) combining rules <cit.>, we compute the bias to assess the validity of estimates, and the coverage to assess the proportion of 95% confidence intervals that cover the true value. The left and right plots of Figure <ref> give the bias and coverage when estimating the regression coefficient β_2. As q tends towards 1, that is, as the structure becomes stronger, bias is introduced and the coverage proportion consequently falls away. 
When (red circles), the bias increases at a faster rate than when (turquoise triangles), showing that in this particular example the bias can partly be attributed to slow convergence. For example, when q=0.9, there is noticeable bias and undercoverage when , but not for when . When q=1, however, the bias cannot be reduced by increasing the number of iterations: there is a fundamental obstacle here that cannot be overcome. When q = 1, the strong structure means we have a file matching pattern, that is, X_2 and X_3 are never simultaneously observed, so there is no information on the true relationship between X_2 and X_3 (see ). Hence the mice algorithm will never converge to the true value, even if we were to greatly increase the number of iterations. In general, as q approaches 1 (but, importantly less than 1) and the fraction of missing information increases, a larger number of iterations is required – considerably more than the default of – for the algorithm to converge. The maximum percentage of missing information in X_3, which is approximately 50%, occurs when q=1. To show that the bias present when q=1 is due to SM rather than an increasing of missingness percentage, we also ran the simulation where missingness in X_3 is determined through the same MAR mechanism used to impose missingness in X_2 (equation <ref>). We found that, in this instance, after values were multiply imputed using mice with , estimates were still unbiased and confidence intervals valid. Thus, we can be sure that when q=1 the bias can be attributed to SM. This example highlights two statistical issues to be careful of when dealing with SM: an inherent, occasional inability to obtain valid inferences with strong structures, and slow convergence. With regards to the former, in this small scale example it is fairly clear how the bias arises – a lack of information. In large complex data sets, however, especially when undertaking multivariate analyses, this source of bias may not be obvious. Similarly, in this small scale example, it is straightforward to increase the number of iterations to allow the algorithm to converge. Yet in larger data sets this could be problematic. Firstly, assessing convergence of imputations and inferences is non-trivial in a multivariate space; and secondly, achieving convergence with SM may not scale well to high-dimensional data, requiring algorithms to be run for more iterations. Thus, in practice, when faced with SM, assessing convergence of either mice or other computational methods will likely involve a non-trivial decision around the number of iterations to run the method for, as well as careful inspection of convergence diagnostics. §.§ Simulation 3: A structured MNAR example In this final simulation, we show how SM can be utilised to our advantage. Specifically, we consider a structured MNAR setting, and generate n_sim=1000 simulated data sets (with n=1000) as follows: Z ∼ N(0, 1) X_1 | (Z=z) ∼ N(2z, 1) X_2 | (Z=z, X_1=x_1) ∼ N(1+z+2x_1, 1), and we let M_1 and M_2 denote the corresponding missingness indicator vectors for X_1 and X_2. We now suppose that we have an unobserved variable Z, that is, a latent variable that the analyst (imputer) does not have access to. We suppose that Z has an effect on missingness in X_1, thus imposing an MNAR missing data mechanism on X_1, p(M_1=1 | Z ) =exp(2z)/1+exp(2z). 
We then impose a SM mechanism on X_2, by assuming that M_2 depends only on the previously observed M_1: p(M_2=1 | M_1) = 1/2 if M_1=1, and p(M_2=1 | M_1) = q if M_1=0. Essentially, the difference between the setup here and that used in the previous simulation is that, whereas previously Z was observed, now it is missing. The DG in Figure <ref> illustrates the setup for this example, which is nearly identical to that in Figure <ref>. The parameter q again links to the notion of weak and strong SM structure: when q=0, we have an MNAR-U mechanism; when q ∈ (0,1), we have an MNAR-WS mechanism; and when q=1, we have an MNAR-SS mechanism. Moreover, in this example when q ∈ (0,1/2), there is a positive SM mechanism between M_1 and M_2, that is, missingness in X_1 increases the probability of missingness in X_2; and when q ∈ (1/2,1), there is a negative SM mechanism between M_1 and M_2, that is, missingness in X_1 reduces the probability of missingness in X_2. The estimand we consider is the expectation of X_2, for which the true value is 1. As mentioned earlier, we suppose that the imputer does not have access to the latent variable Z. We consider two methods of imputation for dealing with missing values:
* (a) Impute X_1 conditional on X_2, and impute X_2 conditional on X_1. This first method requires the use of FCS. We run 50 iterations () of the mice algorithm, with and . This can be viewed as the standard way of multiply imputing missing values.
* (b) Impute X_2 conditional on M_1. As the estimand of interest – the expectation of X_2 – does not depend on X_1, we do not need to impute missing values for X_1. We can instead use the SM relationship between M_1 and M_2 to impute missing values for X_2 using a model that depends on M_1 (not X_1). A secondary benefit of this approach is that, by definition, M_1 only includes 0s and 1s, so is completely observed and thus FCS is not required to impute missing values for X_2.
The results are given in Figure <ref>. Approach (a), denoted by the red circles, clearly fails, with estimates hovering around 0 instead of 1 (the estimates do improve slightly as q increases, as the association between M_1 and M_2 helps to more accurately capture the relationship between X_1 and X_2). Owing to this bias, the confidence intervals can never cover the true value. The results for approach (b), on the other hand, denoted by the turquoise triangles, are clearly unbiased and the coverage values are at the nominal level. Thus, in this example, if we replace X_1 with the missingness indicator M_1 in the imputation model for X_2, we obtain valid inferences. In this instance, we are utilising the fact that missingness in X_2 depends on missingness in X_1 – that is, on M_1 rather than X_1 – and by removing the link between X_1 and X_2 we are avoiding bias caused when imputing X_1 from propagating through to X_2. While this is a relatively simple example, it demonstrates the advantages possible from leveraging information/structure present in 𝐌 – in addition to the information in 𝐗 – to best address problems posed by the missing data.
§ STRUCTURED MISSINGNESS WITHIN REAL-WORLD CLINICO-GENOMIC DATABASES (CGDB)
We now consider a real-world oncology clinico-genomic database (CGDB), formed through the linkage of Flatiron Health (FH) electronic health records (EHR) with Foundation Medicine (FMI) comprehensive genomic profiling for patients with cancer in the United States treated at approximately 280 US cancer clinics (∼800 sites of care) <cit.>.
The database consists of 22 individual CGDBs, 21 of which are disease-specific and one which is disease-agnostic. In each, retrospective longitudinal clinical data are derived from FH EHR data, comprising patient-level structured and unstructured data, curated via technology-enabled abstraction, and linked to genomic data derived from FMI comprehensive genomic profiling tests by de-identified, deterministic matching <cit.>. Together, the CGDBs represent an impressive collection of longitudinal, patient-level data encompassing over 100,000 patients diagnosed with cancer. This comprehensive data source provides scientists with an invaluable resource to study not only cancer-specific cohorts in each disease-specific CGDB, but also to leverage the collection of CGDBs across cancers to explore pan-tumor or tumor-agnostic insights, a recently emerging paradigm in cancer treatment based on shared molecular characteristics across cancer types <cit.>. This research advantage arises from the CGDB's ability to offer rich and diverse datasets, enabling the study of commonalities and differences across cancers at both clinical and genomic levels. By aggregating information from a vast number of patients, it is possible to explore shared molecular characteristics, treatment responses, and potential biomarkers that transcend individual cancer types. Such a `pan-tumor' approach can uncover novel therapeutic strategies and guide personalized medicine approaches, transforming cancer research and ultimately improving patient outcomes. Collating these data across patients with different cancer types, each with different sets of clinically-relevant measurements, can give rise to SM challenges purely as a consequence of data combining. Furthermore, SM may be inherent in data collection and batch testing, for example, panels of lab tests or genomic tests. Such instances of SM can pose analytical challenges when seeking to learn from the totality of the CGDB, and should be characterised. We discuss several motivating examples in the sub-sections that follow. Table <ref> gives a brief summary of the variables used to illustrate SM in this section. §.§ Example 1: PSA Testing The prostate-specific antigen (PSA) is a common blood biomarker for the diagnosis, screening and monitoring of prostate cancer. Levels of PSA tend to be abnormally elevated in patients with prostate cancer and are measured because of their association with prostate cancer severity. PSA tests are therefore routinely administered to male patients with prostate cancer every 6–12 months, in particular following radiation therapy or surgery. Undiagnosed males who are high-risk (for example, carrying a mutation to the BRCA2/BRCA1 genes) may also receive PSA screening regularly. However, females, lacking a prostate, do not receive this test. Therefore, for sensible reasons, the PSA test is absent in the records of female patients and it is commonly missing among male patients diagnosed with another type of cancer. For the sake of illustration, let us evaluate this scenario in the context of the SM taxonomy introduced in Section <ref>. Suppose we wish to evaluate a time series of PSA tests across lab visits for patients in a pan-tumor research setting where the disease-specific CGDBs have been combined across cancer types and patients, including males and females. Firstly, in the case of female patients, sex has a deterministic effect on whether a series of PSA tests are missing. This represents a MAR-UD mechanism. 
It can also be considered a case of logical missingness, as clearly a PSA test would be uninformative and inappropriate in this instance. The left plot of Figure <ref> shows this scenario graphically; SEX has a deterministic effect (solid arrows) on the missingness indicators (red triangles) for the PSA variables, PSA_1, PSA_2, and PSA_3. Secondly, we consider male patients with cancer types other than prostate cancer; these patients may receive PSA tests as part of routine screening based on risk factors. Those with certain risk factors – such as older patients or those with mutations to the BRCA2/BRCA1 genes – are more likely to be tested. In particular, the BRCA2 mutation has more recently been identified as a strong risk factor for prostate cancer diagnosis and severity, leading to calls by the research community to screen PSA levels earlier in affected men <cit.>. If these risk factors are observed, they would have a probabilistic effect on whether a patient receives a PSA test on a given visit, hence the dashed causal arrows from YEAR OF BIRTH and BRCA1/2 to the missingness indicators for PSA_1, PSA_2, and PSA_3 in the right plot of Figure <ref>. There is also an element of structure in the missingness, too, because if the test returns normal values for a patient, there is less need to repeat the test on the next visit, especially if little time has elapsed between the visits. In some instances, therefore, there is a negative SM relationship between the PSA missingness indicators (denoted by arrows between the PSA missingness indicators in the right plot of Figure <ref>), as observing the test at one visit increases the probability of missing it at the next visit (and vice versa). This mechanism can then be viewed as a sequential MAR-WS. §.§ Example 2: Genomic Testing Before considering this next example, it is worth noting the unit non-response within the CGDB in relation to the population of cancer patients more widely. The CGDB only includes cancer patients in the FH EHR database who received comprehensive genomic profiling testing to characterise the DNA or RNA alterations that may be driving the growth of a specific tumor. Hence the individuals included represent a sample of a larger population of patients with cancer. The decision to test is made by the physician, dependent on many factors, including: the current disease state of the patient, family history, their response to previous therapies, and any disease presentation indicating the likelihood of their cancer being driven by a particular gene mutation. It can be argued, therefore, that these missing records – relating to patients with cancer who did not undergo genomic profiling, thus not observed in the CGDB – are MAR-UP. This would need to be accounted for if obtaining inferences about the population as a whole. Patients included in the CGDB have received one or more comprehensive genomic profiling tests, each measuring some set of cancer-relevant genes (“bait sets”) ranging from dozens to hundreds of genes. Genomic alterations are identified via comprehensive genomic profiling (CGP) of cancer-related genes on FMI's next-generation sequencing (NGS) tests <cit.>. The choice of genomic test is influenced by multiple factors, including the type of cancer (for example, solid vs. haematologic malignancies). 
Furthermore, the date of the test will dictate the particular bait set (that is, the list of genes) used as an assay which may change over time (so a particular genomic test may assay different bait sets depending on when the test is performed). For example, the gene BCL10, commonly mutated in B-cell lymphomas, is tested exclusively in the haematologic bait sets. Conversely, the gene CASP8, which is known to be mutated in a number of solid tumors, is tested exclusively in the solid tumor bait sets. These multigene panels include guideline-recommended genes relevant for oncology, and a typical analysis could use a disease-specific CGDB to study the relevant disease cohort of interest. However, generating insights across cancer types (that is, solid and haematologic malignancies) will require the combining of multiple cancer cohorts, with different bait sets used in each cohort, that can exacerbate SM in the genomic data. Thus these broader, pooled cohorts may exhibit SM as a result of test (and potential changes in standard of care) dictating the measurement of genes, rather than biology. For example, in the case of analysing genomic alterations in haematological and solid tumor malignancies together, patients with diffuse large B-cell lymphoma (haematological) might be systematically missing the alteration status of CASP8 and patients with breast cancer (solid tumor) might be systematically missing BCL10. Looking exclusively at the genomic data, the missingness of these genes may appear to be a function of CANCER TYPE and DATE: in other words, the missingness mechanism for the gene variables BCL10 and CASP8 can be viewed as MAR-UP, where CANCER TYPE and DATE have a probabilistic effect on whether these genes are missing. Alternatively, however, an auxiliary variable can be introduced here, say BAIT SET or GENOMIC TEST, that denotes the type of bait set and genomic test a patient received, respectively. These variables would otherwise be considered nuisance variables; they would not, for example, be included in a model for prediction. Nevertheless, they would introduce strong structure into the missingness mechanisms for the gene variables because SM can occur at either the test level or the bait set level; for example, BCL10 exhibits SM at the GENOMIC TEST level because it is only included in bait sets for certain haematologic tests and not others. On the other hand, CASP8 exhibits SM at the BAIT SET level. Both occurrences of SM may be influenced by CANCER TYPE and DATE, affecting the genomic test and bait set administered, and then, if a given test or bait set is missing, the genes measured by them are also missing. Two examples are given in the DGs in Figures <ref> and <ref>. In Figure <ref>, which excludes the bait set or genomic test variable, CANCER TYPE and DATE are shown to have a direct effect on the missingness indicators (red triangles) for the gene of interest, BCL10 and CASP8. In Figure <ref>, CANCER TYPE and DATE affect the missingness indicators for the BAIT SET or GENOMIC TEST variables, which in turn have a deterministic effect on missingness in BCL10 and CASP8, respectively. This example, where introducing an auxiliary variable changes the dynamic of the nature of the missingness mechanism, poses an interesting research question: which representation is to be preferred? 
While, on one hand, omitting the auxiliary variable makes the missingness mechanism easier to model – for example, omitting BAIT SET means that the mechanism is MAR in the sense of <cit.>, so multiple imputation can be utilised – on the other hand, including the auxiliary variable, which can be considered a mediator variable, allows a deeper understanding of the true underlying process at work. The answer to the above question, therefore, likely depends on the analysis being performed, and the analyst's motivation for trying to understand the underlying structure of the missingness. § DISCUSSION Given that SM encompasses such a wide range of missingness scenarios, setting out a new taxonomy provides a solid foundation for characterising it, and thereby allowing appropriate action to be taken. Exploration of the missingness indicator matrix 𝐌 – whether it is its effect on inferences' validity or as a source of information from which to improve the quality of inferences – has, for no obvious reason, largely been overlooked in the literature. The matrices 𝐗 and 𝐌, of course, go hand-in-hand: 𝐌 can be viewed as 𝐗 maximally flattened to 0s and 1s. The question which is fundamental to our paper is whether 𝐌 is just a less informative representation of 𝐗, or whether the multivariate relationships within 𝐌 can help us to better understand the underlying mechanisms governing 𝐗? This paper begins to address the nine Grand Challenges set out by <cit.> in relation to SM. In addition to defining SM (Challenge 1), we begin, through the use of DGs, to also consider the relationship between SM and causality (Challenge 8). Arguably more importantly, however, is the fact that by defining SM we are helping to facilitate further research into SM. This taxonomy will assist, for example, in understanding the geometry of SM (Challenge 2), as we may expect strong (deterministic) structures to exhibit sharply defined block missingness patterns. Moreover, this taxonomy should allow us to design experiments to mitigate any deleterious impact of SM (Challenge 3), will clearly help in devising methodological approaches for prediction or inference (Challenges 4–6), and will provide a starting point for developing benchmark data sets for SM (Challenge 7). Finally, understanding the underlying mechanisms for SM, and their impact on the observed data and any inferences made from that data, will inform on the risk of any sociocultural biases (Challenge 9). As mentioned in Section <ref>, in practice, multiple mechanisms are likely to be present in any one data set, especially in complex multi-modal data sets such as the CGDB. Further research could be focused on considering how such mechanisms interact. If there is weak structure (a probabilistic relationship) between 𝐌 and 𝐗, for example, but a strong structure (a deterministic relationship) between M_j and 𝐌_-j, then how would these two structures interact? Would the strong structure “dominate” the weak structure? That is, would the probabilistic relationship be irrelevant given the presence of a deterministic relationship? As we seek to develop methods that address and utilise SM in practice, it is important we keep in mind that the missing data themselves are not typically of primary interest; rather, it is the analysis of the complete data that drives key applied research questions, and the missing values are seen as a nuisance. 
As a result, we can postulate the key questions applied researchers would want to address when faced with the potential of SM obstructing their analysis. * Determine the (likely unknown) types of SM present in their data. * Identify which types of SM (amongst those identified in the data) are problematic for their desired analysis. * Implement relevant methods to deal with these types of SM to obtain valid analyses and extract maximum information from the data. As mentioned earlier, certain approaches have considered aspects of SM in passing, and may be useful to consider developing further when seeking to address SM, in light of the above points. For 1), the suggestion of <cit.> to utilise tree models to learn about structures present among the missing data is particularly appealing, especially when multiple (unknown) types of SM are present in the data. A range of approaches could be considered, from standard Classification and Regression Trees <cit.>, to more sophisticated approaches that incorporate uncertainty into the tree structure, such as Random Forests <cit.> or Bayesian Additive Regression Trees <cit.>. For 2), the concept of m-graphs given in <cit.> could potentially be explored and developed into building causal relationships between the missing data indicators themselves within a framework that provides valid inferences for certain SM scenarios. More generally, the extensive literature on causal inference and graphical models will likely also prove useful to consider, particularly when SM can be characterised using purely causal pathways. Morever, while we have typically presented SM as an obstacle to overcome, in some instances the structure can be viewed as a help rather than a hindrance, as seen in Simulation 3 in Section <ref>. Thus, a complementary goal researchers may also like to consider here, is determining which types of SM can be leveraged to provide important information for feeding into their analysis. Lastly for 3), in large multivariate settings involving a number of unknown SM mechanisms, straightforward application of off-the-shelf methods for incomplete data, such as mice, or other standard multiple imputation packages, are not immediately obvious. Shared traits between strong structure and block missingness suggest exploring the potential of existing methods proposed to impute blocks of missing data. For example, <cit.> propose an approach to multiply impute missing values through ordered monotone blocks. More generally, Bayesian methods offer a great degree of flexibility, which could be utilised to develop a unifying framework; for example, modelling data with a mix of logical and random SM has been achieved through multi-level models <cit.>, while <cit.> develop a Bayesian hierarchical regression model for data collated across several different surveys. Utilising the full Bayesian machinery to address SM is particularly appealing, whether this be model averaging inferences over multiple possible missing data mechanisms <cit.> or leveraging information provided by SM through the use of informative priors. To conclude, research into SM is critical, timely, and a necessary component to unlocking the full potential associated with large complex databases. We thus hope this contribution stimulates interest amongst the statistics community, as well as the scientific community more generally, to develop theory and methods that address the challenges, as well as the opportunities, posed by SM.
http://arxiv.org/abs/2307.01387v1
20230703225053
ALBERTI, a Multilingual Domain Specific Language Model for Poetry Analysis
[ "Javier de la Rosa", "Álvaro Pérez Pozo", "Salvador Ros", "Elena González-Blanco" ]
cs.CL
[ "cs.CL" ]
2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). SEPLN 2023: 39th International Conference of the Spanish Society for Natural Language Processing
Javier de la Rosa (ORCID 0000-0002-9143-5573, versae@nb.no), National Library of Norway, Norway
Álvaro Pérez Pozo (ORCID 0000-0001-7116-9338, alvaro.perez@linhd.uned.es), Universidad Nacional de Educación a Distancia, Spain
Salvador Ros (ORCID 0000-0001-6330-4958, sros@scc.uned.es), Universidad Nacional de Educación a Distancia, Spain
Elena González-Blanco (ORCID 0000-0002-0448-1812, egonzalezblanco@faculty.ie.edu), IE University, Spain
The computational analysis of poetry is limited by the scarcity of tools to automatically analyze and scan poems. In multilingual settings, the problem is exacerbated as scansion and rhyme systems only exist for individual languages, making comparative studies very challenging and time consuming. In this work, we present Alberti, the first multilingual pre-trained large language model for poetry. Through domain-specific pre-training (DSP), we further trained multilingual BERT on a corpus of over 12 million verses from 12 languages. We evaluated its performance on two structural poetry tasks: Spanish stanza type classification, and metrical pattern prediction for Spanish, English and German. In both cases, Alberti outperforms multilingual BERT and other transformer-based models of similar sizes, and even achieves state-of-the-art results for German when compared to rule-based systems, demonstrating the feasibility and effectiveness of DSP in the poetry domain.
Keywords: Natural Language Processing, Multilingual Language Models, Poetry, Stanzas, Scansion
ALBERTI, a Multilingual Domain Specific Language Model for Poetry Analysis
9 June 2023
§ INTRODUCTION
Poetry analysis is the process of examining the elements of a poem to understand its meaning. To analyze poetry, readers must examine its words and phrasing from the perspectives of rhythm, sound, images, obvious meaning, and implied meaning. Scansion, a common approach to analyzing metrical poetry, is the method or practice of determining and usually graphically representing the metrical pattern of a line of verse. It breaks down the anatomy of a poem by marking its metrical pattern, splitting each line of verse into feet and highlighting the stressed and unstressed syllables <cit.>. Having multilingual tools for scansion and analysis of poetic language enables large-scale examinations of poetry traditions, helping researchers identify patterns and trends that may not be apparent through an examination of a single tradition or language <cit.>. By using multilingual tools, scholars can compare and contrast different poetic forms, structures, and devices across languages and cultures, allowing them to uncover similarities and differences and gain a more comprehensive understanding of poetic expression. However, the analysis of multilingual poetry presents significant challenges that must be overcome. It demands a deep understanding of diverse linguistic and cultural traditions, as each language brings its own unique poetic conventions and nuances. Researchers and scholars need expertise in multiple languages to navigate the intricacies of each tradition accurately. Additionally, translation and interpretation pose complex obstacles in multilingual poetry analysis.
Figurative language, wordplay, and cultural references deeply rooted in the specific language and culture of the poem make it challenging to convey the intended meaning, emotional impact, and artistic integrity when translating. Cultural contexts, historical references, and subtle language connotations often get lost in translation, making it difficult to fully capture the essence of the original work. Furthermore, the development of advanced computational tools is crucial for effective analysis and comparison of poetic expression across multiple languages. This requires the application of sophisticated machine learning techniques, natural language processing algorithms, and other emerging technologies. Building models that can accurately capture the unique aesthetic qualities, rhythm, rhyme, and stylistic variations in different languages is an ongoing research endeavor that requires continuous refinement and innovation. In this work, we investigate whether domain-specific pre-training (DSP) <cit.> in a multilingual poetry setting can be leveraged to mitigate some of these issues. Specifically, we introduce Alberti, a multilingual encoder-only BERT-based language model suited for poetry analysis. We experimentally demonstrate that Alberti exhibits better performance than the base model it was built on, a multilingual BERT <cit.> which was pre-trained on the 104 languages with the largest Wikipedias. And by reformulating scansion and stanza identification as classification problems, we show that Alberti also outperforms its base model in these downstream tasks. Moreover, we are releasing both Alberti and the dataset used for further training it, which consists of over 12 million verses in multiple languages. § RELATED WORK The transformer architecture <cit.> is now pervasive in natural language processing (NLP). In the last five years, context-aware language models have revolutionized the computational modeling of language. In the humanities, domain-specific BERT-based models <cit.> trained with the goal of predicting masked words are starting to appear. In MacBERTh <cit.>, the authors present diachronic models for pre-1950 English literature. And a new shared task on historical models for English, French, and Dutch took place last year <cit.>. While pre-training these large language models from scratch is often cost-prohibitive and extremely data-demanding, adjusting them to work on other domains and tasks via transfer learning requires less data and fewer resources. For poetry, computational approaches have focused primarily on generation <cit.> and scansion <cit.>, but generally in a monolingual setting. While multilingual systems exist for metrical analysis, they internally work by having different sets of rules for each language <cit.> or by building ad-hoc neural networks <cit.>. To the best of our knowledge, the only attempt at multilinguality for metrical pattern prediction was introduced in <cit.> for English, German, and Spanish, where the authors jointly fine-tune different monolingual language models and document some cross-lingual transferability when using multilingual RoBERTa <cit.>. Inspired by their good results, in this work we build a domain-specific language model trained on a corpus of verses in 12 languages to explore its performance on tasks of a poetic nature. § METHODS AND DATA We leverage domain-specific pre-training techniques by fine-tuning the widely used multilingual BERT (mBERT) model with the same base architecture and vocabulary for our specific domain.
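To make the domain-specific pre-training setup concrete, the sketch below shows one possible way to continue training an mBERT checkpoint with a masked-language-modelling objective using the HuggingFace Transformers library. This is an illustrative reconstruction rather than the authors' training script: the corpus file name and the 15% masking probability are assumptions, while the sequence length and optimisation hyperparameters mirror the values reported in the following paragraph.

# Minimal sketch of domain-specific MLM pre-training on a verse corpus.
# The corpus path and masking probability are illustrative placeholders.
from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-multilingual-cased")

# One verse per line in a plain-text file (hypothetical path).
corpus = load_dataset("text", data_files={"train": "verses.txt"})["train"]
corpus = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=32),
    batched=True, remove_columns=["text"],
)

# Dynamic masking of 15% of the tokens, the standard BERT recipe (assumption).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="alberti-mlm",
    per_device_train_batch_size=256,
    learning_rate=1.25e-4,
    weight_decay=0.01,
    warmup_steps=10_000,
    num_train_epochs=40,
)

Trainer(model=model, args=args, train_dataset=corpus, data_collator=collator).train()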
We adopt the masked language modeling (MLM) [MLM is a form of self-supervised learning that involves masking some of the words in a sentence and training the model to predict them based on the surrounding words.] objective and further train the model for 40 epochs on a large corpus consisting of 12 million verses, which were sourced from various poetry anthologies. The training was conducted on a Google TPUv3 virtual machine with a batch size of 256, a learning rate of 1.25e-4, and a weight decay of 0.01. The maximum sequence length was set to 32 since verses with up to 32 tokens using the mBERT tokenizer make up almost 99 percent of the total. Furthermore, we used a 10,000-step warmup process, which allowed the model to learn the distribution of the corpus gradually. We are naming the resulting model Alberti [An homage to Spanish poet https://es.wikipedia.org/wiki/Rafael_AlbertiRafael Alberti.]. After training, we evaluate the model on 10% of the corpus held out as a validation set, achieving a final global MLM accuracy of 57.77%. §.§ PULPO The training of the model was done over a new corpus we built for the occasion. The Prolific Unannotated Literary Poetry Corpus (PULPO) is a set of multilingual verses and stanzas with over 72 million words. It was created to tackle the needs of scholars interested in poetry from a machine learning perspective. Although poetry is a fundamental aspect of human expression that has been around for millennia, the study of poetry from a machine learning perspective is still in its infancy, largely due to the scarcity of poetic corpora. And while literary corpora are becoming more readily available, multilingual poetic corpora remain elusive. The lack of such corpora presents a major challenge for researchers interested in natural language processing (NLP) and machine learning (ML) applied to poetry. The PULPO corpus comprises over 12 million deduplicated metrical verses from 12 different languages in 3 scripts (see Tables <ref> and <ref>). We chose these languages because of the large number of poems freely available on the Internet out of copyright or with a permissive license. The poems date from the 15th century to contemporary poetry and a number of them also have stanza separations. This makes the corpus a valuable resource for multilingual NLP and machine learning research. In addition, the corpus includes poems from various historical periods and literary traditions, providing a diverse range of poetic styles and forms. §.§ Stanzas To further evaluate the performance of the model, we conduct extrinsic evaluations using two different tasks. First, a stanza-type classification task for Spanish poetry. This task aims to assess the ability of the model to distinguish between different stanza types, such as tercet, quatrain, and sestina (see Table <ref> for an example). A stanza, which is considered the fundamental structural unit of a poem, serves to encapsulate themes or ideas <cit.>. Composed of verses, stanzas are influenced by the writing styles and historical preferences of authors. The Spanish tradition boasts a rich abundance of stanza types, rendering their identification a challenging and intricate task. Generally, three factors contribute to the identification of a stanza: metrical length, rhyme type, and rhyme scheme <cit.>. Consequently, the classification of stanzas can be approached in three stages <cit.>: * Calculation of the metrical length per verse.
This process typically involves counting the number of syllables while considering rhetorical devices that may alter this count (e.g., syneresis, synalephas). In some cases, the pattern formed by these verse lengths can assist in determining the stanza type. * Determination of the rhyme type. When the sounds after the final stressed syllable of each verse match, it is known as consonance rhyme. Alternatively, assonance rhyme involves the matching of vowel sounds while disregarding consonant sounds. However, there are stanza types where this distinction becomes irrelevant. * Extraction of the rhyme scheme. The rhyme scheme is established based on the verses that share a rhyme. Following <cit.>, we approached stanza type identification as a classification task. We used their 5,005 Spanish stanzas containing between 12 and 170 examples for each of the 45 different types of stanzas[An extra stanza type `unknown' was ignored in this study as it accounts for anything not recognized as any of the other stanza types], and used the already existing splits of 80% for training, 10% for validation, and 10% for testing. §.§ Scansion Second, a multilingual scansion task aimed at testing the ability of the model to predict the metrical pattern of a given verse in different languages. The scanning of a verse relies on assigning stress correctly to the syllables of the words. This process can be influenced by rhetorical figures and individual traditions. The synalepha is a common device in Spanish, English, and German poetry, which combines separate phonological groups into a single unit for metrical purposes. Syneresis and dieresis are two other devices that operate similarly but within the word, either joining or splitting syllables. The meter of a verse can be seen as a sequence of stressed and unstressed syllables, represented by the symbols `+' and `-', respectively. Examples <ref>, <ref>, and <ref> from <cit.> illustrate verses with metrical lengths of 11, 10, and 7 syllables in Spanish, English, and German, respectively. These examples also demonstrate the resulting metrical pattern after applying (or breaking, as in the case for `la-her' in the Spanish verse) synalepha, represented by ` ', and considering the stress of the last word as it may affect the metrical length in Spanish poetry. cubra de nieve la hermosa cumbre["[It] cover with snow the beautiful summit."] cu-bra-de-nie-ve-la-her-mo-sa-cum-bre +--+---+-+- 11 (Garcilaso de la Vega) Our foes to conquer on th’ embattled plain; Our-foes-to-con-quer-on-th'em-bat-tled-plain; -+-+---+-+ 10 (Rhys Prichard) Leise lausch’ ich an der Thür["I quietly listen at the door"] Lei-se-la-schu'ich-an-der-Thür +-+-+-+ 7 (Adolf Schults) In order to measure the performance of Alberti, we follow the experimental design in <cit.> and use their chosen datasets of verses manually annotated with syllabic stress for English, German, and Spanish. For the Spanish corpus, the Corpus de Sonetos de Siglo de Oro <cit.> was used. This TEI-XML annotated corpus consists of hendecasyllabic verses from Golden Age Spanish authors. A subset of 100 poems initially used for evaluating the ADSO Scansion system <cit.> was selected for testing, while the remaining poems were split for training and evaluation. Unfortunately, suitable annotated corpora of comparable scale were not found for English and German. Instead, an annotated corpus of 103 poems from For Better For Verse <cit.> was used for English, and a manually annotated corpus from <cit.> was used for German.
The German corpus contains 158 poems, which cover the period from 1575 to 1936. Around 1200 lines have been annotated in terms of syllable stress, foot boundaries, caesuras and line main accent. These corpora were divided into train, evaluation, and test sets, following a 70-15-15 split. Table <ref> shows the number of verses per language and split. § EVALUATION AND RESULTS After training, we evaluated the resulting model Alberti on several fronts. For intrinsic evaluation, we used the aforementioned MLM metric as well as a perplexity proxy score based on the predicted token probabilities. We calculated these metrics for every language on the validation set of PULPO for both Alberti and mBERT. As shown in Figure <ref>, the MLM accuracy of Alberti is generally higher than that of mBERT for all languages. The gains of Alberti against mBERT range from +19.65 percentage points for Portuguese to +40.59 for Finnish. A similar trend is shown for our perplexity proxy score in Figure <ref>, with clear gains of Alberti over mBERT across the board, ranging from -35.75 for French to a staggering -739.16 points for Chinese. The stark difference for Chinese could be a result of differences in the way text is represented in that language in both the pre-training corpus of mBERT and PULPO. For extrinsic evaluation, we also evaluated Alberti against mBERT for stanza classification and metrical pattern prediction. We chose the best-performing models on the validation set over a small grid search of learning rates 10^-5, 3 × 10^-5, and 5 × 10^-5, for 3, 5, and 10 epochs, and warmup of 0 and 10% of the steps. Figure <ref> shows the ROC curves of each stanza type versus the rest for both Alberti and mBERT, with higher areas under the curve (AUC) in 29 out of the 45 stanza types for Alberti, and in 16 out of 45 for mBERT. Table <ref> shows F1 and accuracy macro scores for each model, with Alberti outperforming mBERT by a small margin. Interestingly, our baseline fine-tuned mBERT model scores better than the monolingual Spanish BETO <cit.> reported in <cit.>. Nonetheless, the combination of the rule-based system Rantanplan <cit.> with an expert system remains state of the art for stanza classification. The prediction of metre was approached as a multi-class binary classification task, i.e., one class per syllable where each syllable can be stressed (strong) or unstressed (weak). After a grid search with roughly the same hyperparameters as in <cit.>, Alberti outperforms mBERT for every language, as shown in Table <ref>. When compared to other similarly sized models (English RoBERTa <cit.> and multilingual XLM RoBERTa <cit.>) as reported in <cit.>, it still performs better for English and German. Lastly, Alberti achieves a new state of the art for German, as it performs better than both the large version of XLM RoBERTa and the rule-based system Metricalizer <cit.>. § CONCLUSIONS AND FURTHER WORK In this work, we hope to make a significant contribution to the fields of Digital Humanities and NLP by introducing the first multilingual large language model for poetry, Alberti. Our model demonstrated substantial improvements over mBERT, indicating its effectiveness in capturing the nuances of poetic language in various languages and demonstrating the feasibility of domain-specific pre-training for poetry. The evaluation of the model on intrinsic and extrinsic metrics highlights its potential for practical applications in tasks such as stanza-type identification and scansion in a multilingual setting.
The release of our model and accompanying corpus will provide an important resource for researchers in the field, facilitating further investigation into poetry-related tasks. It is our plan to train Alberti at the stanza level and compare its performance against the current verse-based model, which presents itself as an exciting avenue for future research, as it could potentially improve the ability of the model to capture the meaning and structure of poetry in a more sophisticated way. Given the good results obtained by Alberti, despite its training on an arguably outdated model, future iterations will leverage more powerful and larger pre-trained models, thereby enhancing its performance and versatility. Moreover, we do believe that the strong accuracy of Alberti in the masked language prediction task could pave the way for methods analyzing metaphoric language by leveraging the differences between the predictions of Alberti and the predictions of other models trained on more journalistic or encyclopedic types of data. Overall, the results of this study have the potential to significantly advance our understanding of poetry in various languages and contribute to the development of more sophisticated NLP models that can capture the subtleties of poetic language. We hope that our work will inspire further research and innovation in this field, and we look forward to seeing how our model and corpus will be used in future studies. Research for this paper has been partially supported by the Starting Grant research project Poetry Standardization and Linked Open Data: POSTDATA (ERC-2015-STG-679528) obtained by Elena González-Blanco, a project funded by the European Research Council (https://erc.europa.eu) (ERC) under the research and innovation program Horizon2020 of the European Union. § PULPO PULPO, the Prolific Unannotated Literary Poetry Corpus, is a set of multilingual corpora of verses and stanzas with over 72M words. The poems as such are not available, as lines that "looked like" poetry were extracted from books in Project Gutenberg. See <https://github.com/aparrish/gutenberg-poetry-corpus>. The individual corpora were downloaded using the https://github.com/linhd-postdata/averell/Averell tool, developed by the https://postdata.linhd.uned.es/POSTDATA team, and from other sources found on the Internet.
§.§ Averell sources §.§.§ Spanish * https://github.com/pruizf/discoDisco v3 * https://github.com/bncolorado/CorpusSonetosSigloDeOroCorpus of Spanish Golden-Age Sonnets * https://github.com/bncolorado/CorpusGeneralPoesiaLiricaCastellanaDelSigloDeOroCorpus general de poesía lírica castellana del Siglo de Oro * https://github.com/linhd-postdata/gongocorpusGongocorpus - http://obvil.sorbonne-universite.site/corpus/gongora/gongora_obra-poeticasource §.§.§ English * https://github.com/alhuber1502/ECPAEighteenth-Century Poetry Archive (ECPA) * https://github.com/waynegraham/for_better_for_verseFor better for verse §.§.§ French * https://crisco2.unicaen.fr/verlaine/index.php?navigation=accueilMétrique en Ligne - https://github.com/linhd-postdata/metrique-en-lignesource §.§.§ Italian * https://github.com/linhd-postdata/biblioteca_italianaBiblioteca italiana - http://www.bibliotecaitaliana.it/source §.§.§ Czech * https://github.com/versotym/corpusCzechVerseCorpus of Czech Verse §.§.§ Portuguese * https://gitlab.com/stichotheque/stichotheque-ptStichotheque §.§ Internet sources §.§.§ Spanish * https://github.com/linhd-postdata/poesi.asPoesi.as - http://www.poesi.as/source §.§.§ English * https://github.com/aparrish/gutenberg-poetry-corpusA Gutenberg Poetry Corpus §.§.§ Arabic * https://www.kaggle.com/ahmedabelal/arabic-poetryArabic Poetry dataset §.§.§ Chinese * https://github.com/THUNLP-AIPoet/Datasets/tree/master/CCPCTHU Chinese Classical Poetry Corpus §.§.§ Finnish * https://github.com/sks190/SKVRSKVR §.§.§ German * https://github.com/linhd-postdata/textgrid-poetryTextGrid Poetry Corpus - https://textgrid.de/en/digitale-bibliotheksource * https://github.com/tnhaider/german-rhyme-corpusGerman Rhyme Corpus §.§.§ Hungarian * https://github.com/ELTE-DH/verskorpuszELTE verskorpusz §.§.§ Portuguese * https://www.kaggle.com/oliveirasp6/poems-in-portuguesePoems in Portuguese §.§.§ Russian * https://www.kaggle.com/grafstor/19-000-russian-poems19,000 Russian poems § AVAILABILITY * Alberti: <https://huggingface.co/linhd-postdata/alberti> * PULPO: <https://huggingface.co/datasets/linhd-postdata/pulpo>
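The released artefacts can be loaded directly from the Hugging Face Hub using the identifiers listed above. The snippet below is a minimal usage sketch; the example fill-mask query and the assumption that the corpus loads with its default configuration are illustrative only.

# Minimal usage sketch: load the released model and corpus from the Hub.
from transformers import pipeline
from datasets import load_dataset

# Identifiers as listed in the Availability section.
fill_mask = pipeline("fill-mask", model="linhd-postdata/alberti")
pulpo = load_dataset("linhd-postdata/pulpo")  # assumes the default configuration

# Illustrative fill-mask query on a Spanish verse quoted in the paper.
for prediction in fill_mask("cubra de [MASK] la hermosa cumbre"):
    print(prediction["token_str"], round(prediction["score"], 3))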
http://arxiv.org/abs/2307.01994v1
20230705025319
Performance Analysis of RIS-Aided Space Shift Keying With Channel Estimation Errors
[ "Xusheng Zhu", "Wen Chen", "Qingqing Wu", "Liwei Wang" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Performance Analysis of RIS-Aided Space Shift Keying With Channel Estimation Errors Xusheng Zhu, Wen Chen, Qingqing Wu, Liwei Wang Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai, China Email: {xushengzhu, wenchen, qingqingwu, wanglw2000}@sjtu.edu.cn August 1, 2023 ====================================================================================================================================================================================================================== In this paper, we investigate reconfigurable intelligent surface (RIS)-assisted space shift keying (SSK) downlink communication systems under imperfect channel state information (CSI), where the channel between the base station and the RIS follows Rayleigh fading, while the channel between the RIS and the user equipment obeys Rician fading. Based on the maximum likelihood detector, the conditional pairwise error probability of the composite channel is derived. Then, the probability density function for a non-central chi-square distribution with one degree of freedom is derived. Based on this, the closed-form analytical expression of the average bit error probability (ABEP) of the RIS-SSK scheme with imperfect CSI is derived. To gain more valuable insights, the asymptotic ABEP expression is also given. Finally, we validate the derived closed-form and asymptotic expressions by Monte Carlo simulations. Reconfigurable intelligent surface, space shift keying, imperfect channel state information, average bit error probability. § INTRODUCTION Reconfigurable intelligent surface (RIS) has recently attracted considerable attention thanks to its ability to make environments controllable <cit.>. Particularly, RIS is an electromagnetic metasurface comprising small, low-cost, and almost passive scattering elements that can induce a predetermined phase shift in the incident wave <cit.>. Consequently, RIS can modify the scattering, reflection, and refraction of the environment cost-effectively, thereby improving the efficiency of wireless networks <cit.>. To clarify the impact of multiple RIS on system performance, <cit.> investigated the statistical characteristics and modeling of distributed multiple RIS-assisted wireless systems. In addition, <cit.> conducted measurements of the path loss of RIS-assisted wireless communication in a microwave radio chamber, taking into account different scenarios. Spatial modulation (SM), or more generally index modulation (IM), has recently gained significant research attention due to its efficient energy utilization <cit.>. In particular, IM achieves efficient energy utilization by using the index of available resources, such as transmit or receive antennas and frequency-domain subcarriers, to convey a portion of the information <cit.>. This allows only a fraction of the energy-consuming resources to be activated at any given time, making IM a highly energy-efficient option. For this reason, IM is viewed as a promising technology for 6G systems <cit.>. With the aim of focusing more on spatial-domain information, <cit.> studied the space shift keying (SSK) scheme by neglecting the symbol-domain information of SM. Due to the large path loss in the millimeter wave (mmWave) band, it is difficult to guarantee the reliability of the received data by utilizing SM techniques for signal transmission at each time slot. In this regard, <cit.> proposed a new quadrature spatial scattering modulation scheme that exploits hybrid beamforming instead of a single antenna in SM.
In light of the advantages possessed by RIS and SSK, the RIS-assisted SSK scheme has attracted extensive research interest from the academic community <cit.>. Specifically, the RIS-aided SSK scheme was presented in <cit.>, where the antenna is switched and selected at the transmitter side and the RIS is viewed as a passive relay. In <cit.>, the RIS incorporates Alamouti space-time block coding, allowing the RIS to send its Alamouti-encoded data and reflect the incoming SSK signals toward the target. Moreover, <cit.> takes the case of a hardware-impaired transceiver into account and analyzes its impact on the average bit error probability (ABEP) of the RIS-SSK scheme. With the aim of studying RIS for mmWave information transmission, <cit.> proposed a RIS-assisted spatial scattering modulation scheme and provided a theoretical analysis with respect to ABEP. All the above-mentioned RIS-aided literature assumes that the CSI is perfectly known at the transceiver. Nevertheless, in reality, the estimated CSI is imperfect on account of estimation errors and limited radio resources of the RIS. Although work on RIS-assisted communication systems under imperfect CSI is common, there has been no work on RIS-assisted IM schemes under imperfect CSI in the open literature. Against this background, we intend to elucidate this timely and interesting topic. To the best of our knowledge, there is no analytical approach that has been adopted to investigate imperfect CSI for RIS-assisted SSK system error performance. For clarity, the contributions of this paper are summarized as follows: 1) In this paper, we study RIS-assisted SSK systems, where the base station to RIS (BS-RIS) channel obeys Rayleigh fading, while the RIS to user equipment (RIS-UE) channel obeys Rician fading. We consider that perfect CSI can be obtained for the BS-RIS channel, while only imperfect CSI can be estimated for the RIS-UE channel. 2) The maximum likelihood (ML) detection algorithm is used to detect the active antenna index of the RIS-SSK scheme, and the conditional pairwise error probability (CPEP) expression is derived. In addition, we provide the complexity analysis. By utilizing the central limit theorem (CLT), we derive the expectation and variance of the composite channel. After that, we derive the probability density function (PDF) of the composite channel from the BS to the UE. 3) Based on the derived CPEP and PDF, we derive the closed-form solution for the unconditional pairwise error probability (UPEP) considering the imperfect CSI case. Further, asymptotic UPEP and ABEP expressions are both derived. We have exhaustively verified the ABEP expressions via Monte Carlo simulations. § SYSTEM MODEL In this section, we study the RIS-SSK system model under imperfect CSI, where the optimal reflection phase of the RIS is considered. It is assumed that the SSK technique is used by mapping the input bits to the index of a specific transmit antenna, which is activated to allow the transmit signal to reach the UE through the RIS. The fading channel between the n_t-th transmit antenna and the l-th reflective element of the RIS is represented by g_l,n_t=α_l,n_te^- jθ_l,n_t, n_t∈1,⋯, N_t. Meanwhile, the channel between the l-th RIS reflecting element and the receive antenna is denoted by h_l=β_le^-jψ_l, l = 1,⋯,L. It is worth noting that the RIS controller can adjust the reflection phase shifts based on the acquired CSI so as to maximize the SNR.
In particular, it is assumed that the direct link between BS and UE is not reachable due to undesirable channel conditions and that communication occurs only through the RIS. Let us consider the communication system shown in Fig. <ref> that utilizes a RIS to assist the communication between the BS and the UE, where the BS consists of N_ t transmit antennas, while the UE is equipped with a single antenna <cit.>. Besides, the RIS is composed of a one-dimensional uniform linear array (ULA) with L reflective elements. Due to the existence of blockage, RIS is deployed to assist the communication between BS and UE, where the RIS is fixed to the exterior wall of a building, thus enabling accurate estimation of the indirect channel by calculating the slowly changing arrival and departure angles <cit.>. In contrast, reflective channels are more challenging to acquire as the location of the user and environmental factors change. Considering this, we suppose that the BS-RIS link is perfect, while the RIS-UE link is imperfect owing to channel estimation errors <cit.>. §.§ Channel Model In Fig. <ref>, the RIS-UE channel 𝐡∈ℂ^1× L is modeled as a Rician fading channel, which can be characterized as 𝐡 = ζĥ +√(1-ζ^2)Δ𝐡, where ζ is the correlation coefficient between 𝐡 and ĥ, ĥ is the estimate obtained by the channel estimation technique, and 𝐡 stands for the actual channel experienced at the UE side. The corresponding estimation error is denoted by Δ𝐡. In particular, ĥ and Δ𝐡 are mutually uncorrelated. On the other hand, the BS-RIS channel 𝐠_n_t can be modeled as a Rayleigh fading channel with non-line-of-sight (NLoS) components. For the RIS, we set the amplitude of each reflection element of the RIS to one <cit.>. Based on this, the reflection matrix of the RIS is modeled as Φ = diag(e^jϕ_1,n_t,⋯,e^jϕ_l,n_t,⋯,e^jϕ_L,n_t), where e^jϕ_l,n_t denotes the phase shift that is related to the RIS controller connected to the n_t-th activated transmit antenna and the l-th reflecting element. The estimated reflection channel ĥ can be formulated as <cit.> ĥ=√(κ/κ+1)ĥ^LoS+√(1/κ+1)ĥ^NLoS, where the deterministic line-of-sight (LoS) path matrix can be modeled as ĥ^LoS(φ) = [1,e^j2π d/λsinφ,⋯,e^j2π d/λlsinφ, ⋯,e^j2π d/λ(L-1)sinφ]^T, where l represents the index of the RIS element. Besides, the expectation and variance of the magnitude of the path from the l-th reflecting element to the UE can be respectively expressed as <cit.> E(β̂_l)=√(π/4κ+4)e^-κ/2[(1+κ) I_0(κ/2)+κ I_1(κ/2)], Var(β̂_l) = 1- E^2(β̂_l), where each component of ĥ^NLoS follows 𝒞𝒩(0,1). On the other hand, the BS-RIS channel can be indicated as 𝐠_n_t∼𝒞𝒩(0,𝐈_L× L), that is, the path from the activated antenna to the l-th reflecting element of the RIS can be written as g_l,n_t∼𝒞𝒩(0,1). Accordingly, the mean and variance of the channel magnitude between the n_t-th transmit antenna and the l-th reflecting element can be evaluated as <cit.> E(α_n_t,l)=√(π)/2, Var(α_n_t,l) = (4-π)/4. At the UE side, the received signal can be given as y=√(P_s)𝐡Φ𝐠_n_tx + n_0, where n_0∼𝒞𝒩(0,N_0) stands for the additive white Gaussian noise (AWGN). Note that x denotes the Gaussian data symbol, which is a random variable with zero mean and unit variance satisfying E(|x|^2)=1. In this scheme, we aim to study the RIS-aided SSK technique, so the x term can be neglected. Consequently, (<ref>) can be re-expressed as y=√(P_s)∑_l=1^Lh_le^jϕ_l,n_tg_l,n_t + n_0, where h_l = ζĥ_l +√(1-ζ^2)Δ h_l.
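To make the system model concrete, the fragment below generates a single random realisation of the Rayleigh BS-RIS channels, the Rician RIS-UE channel with estimation error, and the received SSK signal with the RIS phases matched to the estimated cascade, and then applies the ML decision rule introduced in the next subsection. It is an illustrative Monte Carlo sketch with arbitrary parameter values, not the authors' simulation code, and the half-wavelength element spacing and departure angle are assumptions.

# Illustrative single-realisation sketch of the RIS-SSK system model
# under imperfect CSI (arbitrary parameter values, not the paper's setup).
import numpy as np

rng = np.random.default_rng(0)
N_t, L = 2, 64            # transmit antennas, RIS elements
P_s, N0 = 1.0, 0.1        # transmit power, noise power
kappa = 10 ** (3 / 10)    # Rician factor of 3 dB
sigma_e2 = 0.1            # channel estimation error variance
zeta = 1.0 / np.sqrt(1.0 + sigma_e2)

# BS-RIS channels: i.i.d. Rayleigh fading, one column per transmit antenna.
g = (rng.normal(size=(L, N_t)) + 1j * rng.normal(size=(L, N_t))) / np.sqrt(2)

# Estimated RIS-UE channel: Rician fading with a ULA LoS steering vector.
phi_aod = np.deg2rad(30.0)                                 # arbitrary angle
los = np.exp(1j * np.pi * np.arange(L) * np.sin(phi_aod))  # d = lambda/2 assumed
nlos = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2)
h_hat = np.sqrt(kappa / (kappa + 1)) * los + np.sqrt(1 / (kappa + 1)) * nlos

# Actual channel = correlated estimate plus independent estimation error.
dh = np.sqrt(sigma_e2 / 2) * (rng.normal(size=L) + 1j * rng.normal(size=L))
h = zeta * h_hat + np.sqrt(1 - zeta**2) * dh

# SSK: activate antenna n_t; RIS phases compensate the *estimated* cascade.
n_t = 0
ris_phase = np.exp(-1j * (np.angle(g[:, n_t]) + np.angle(h_hat)))
noise = np.sqrt(N0 / 2) * (rng.normal() + 1j * rng.normal())
y = np.sqrt(P_s) * np.sum(h * ris_phase * g[:, n_t]) + noise

# ML detection over the candidate transmit antennas (decision rule of the
# next subsection): pick the hypothesis minimising the Euclidean metric.
metrics = [abs(y - np.sqrt(P_s) * zeta
               * np.sum(np.abs(g[:, m]) * np.abs(h_hat))) ** 2 for m in range(N_t)]
print("transmitted:", n_t, "detected:", int(np.argmin(metrics)))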
Further, (<ref>) can be written as y= √(P_s)ζ∑_l=1^Lĥ_le^jϕ_l,n_tg_l,n_t+ √(P_s(1-ζ^2))∑_l=1^LΔ h_le^jϕ_l,n_tg_l,n_t + n_0, where Δ h_l represents the channel estimation error of ĥ_l and obeys the 𝒞𝒩(0,σ_e^2) distribution. Particularly, σ_e^2 represents the variance of the estimation error, which depends on the estimation strategy and the number of pilot symbols employed [It is worth noting that σ_e^2 captures the CSI impairments due to limited feedback and channel estimation. Even in the high SNR region, the channel obtained at the UE is still inaccurate.]. By adopting orthogonal pilot channel estimation sequences, the estimation error decreases linearly with the increase in the number of pilots. According to <cit.>, the correlation coefficient can be set as ζ = 1/√(1+σ_e^2). It is worth mentioning that when σ_e^2=0, ζ = 1 can be obtained, which indicates perfect channel estimation. The RIS can adjust the phase shift to make ϕ_l,n_t=θ_l,n_t+ψ_l, thus maximizing the energy of the desired signal at the UE. §.§ Detector and Complexity §.§.§ Detector In this manner, the received signal can be demodulated by the ML detector, which can be given by [n̂_t]=arg min_n_t∈{1,⋯,N_t}|y-√(P_s)ζ∑_l=1^Lα_l,n_tβ̂_l|^2. §.§.§ Complexity Analysis Note that every complex multiplication requires 4 real multiplications and 2 real additions. Computing the square of the absolute value of a complex number requires 2 real multiplications and 1 real addition. In (<ref>), computing ∑_l=1^Lα_l,n_tβ̂_l requires L real multiplications and (L-1) real additions. Computing √(P_s) and ζ requires 2 real multiplications. Subtracting ∑_l=1^Lα_l,n_tβ̂_l from y requires 1 real addition. Up to this point, (L+2) real multiplications and L real additions are required. To detect the transmit antenna correctly, it is necessary to search over all antennas at the transmitter. Therefore, the computational complexity of (<ref>) becomes (L+4)N_t multiplications and (L+1)N_t additions. § PERFORMANCE ANALYSIS In this section, we derive the performance of the RIS-SSK scheme under imperfect CSI, where the RIS is used to connect the Rayleigh fading channel on the BS-RIS side and the Rician fading channel on the RIS-UE side. The CPEP and UPEP expressions with the optimal ML detector are derived. Further, the corresponding ABEP expression of the RIS-SSK scheme with imperfect CSI is obtained. §.§ CPEP Expression It is assumed that the activated transmit antenna index is n_t and the detected antenna index is n̂_t. By exploiting the decision rules provided in (<ref>), the CPEP can be given as P_b = Pr{n_t →n̂_t|α_l,n_t,β̂_l} = Pr{|y - √(P_s)ζ∑_l=1^L α_l,n_tβ̂_l|^2 >|y-√(P_s)ζ∑_l=1^L α_l,n̂_tβ̂_l e^-j(θ_l,n_t-θ_l,n̂_t)|^2} = Pr{-2ℜ{y^*√(P_s)ζ∑_l=1^L α_l,n_tβ̂_l}+|√(P_s)ζ∑_l=1^L α_l,n_tβ̂_l|^2 >-2ℜ{y^*√(P_s)ζ∑_l=1^L α_l,n̂_tβ̂_l e^-j(θ_l,n_t-θ_l,n̂_t)} +|√(P_s)ζ∑_l=1^L α_l,n̂_tβ̂_l e^-j(θ_l,n_t-θ_l,n̂_t)|^2}. To simplify the representation of (<ref>), let us define η = ∑_l=1^L α_l,n_tβ̂_l, η̂=∑_l=1^L α_l,n̂_tβ̂_l e^-j(θ_l,n_t-θ_l,n̂_t). Substituting (<ref>) into (<ref>), the CPEP can be updated to P_b = Pr(-2ℜ{y^*√(P_s)ζη}+|√(P_s)ζη|^2 >-2ℜ{y^*√(P_s)ζη̂}+|√(P_s)ζη̂|^2) = Pr(2ℜ{y^*√(P_s)ζ(η̂-η)} +|√(P_s)ζη|^2 -|√(P_s)ζη̂|^2>0). Recalling (<ref>), let us define u = ∑_l=1^Lg_l,n_te^jϕ_l,n_tΔ h_l.[For two independent random variables X and Y, we can obtain the expectation and variance of term XY as E(XY)=E(X)E(Y) and Var(XY)=Var(X)Var(Y)+Var(X)E^2(Y)+E^2(X)Var(Y), respectively.] By adopting the CLT, u obeys 𝒞𝒩(0,σ_e^2L).
In this manner, (<ref>) can be recast as P_b = Pr(2ℜ{(√(P_s)ζη + √(P_s(1-ζ^2))u + n_0)^*√(P_s)ζ(η̂-η)} +|√(P_s)ζη|^2 -|√(P_s)ζη̂|^2>0) = Pr(2ℜ{(√(P_s(1-ζ^2))u + n_0)^*√(P_s)ζ(η̂-η)} -|√(P_s)ζη|^2 -|√(P_s)ζη̂|^2+2P_sζ^2ℜ{ηη̂}>0) = Pr(2ℜ{(√(P_s(1-ζ^2))u + n_0)^*√(P_s)ζ(η̂-η)} -|√(P_s)ζη̂-√(P_s)ζη|^2>0) = Pr(D>0), where D∼𝒩(μ_D,σ_D^2). The expectation and variance of D are represented as μ_D=-P_sζ^2|η̂-η|^2 and σ_D^2=2(N_0+P_s(1-ζ^2)σ_e^2L)P_sζ^2|η̂-η|^2, respectively. In this respect, (<ref>) can be evaluated as P_b = Q(-μ_D/σ_D)=Q(√(P_sζ^2|η̂-η|^2/2(N_0+P_s(1-ζ^2)σ_e^2L))). §.§ UPEP Expression By employing (<ref>), the term η -η̂ can be written as η -η̂= ∑_l=1^L β̂_l (α_l,n_t- α_l,n̂_t e^-jω), where ω=θ_l,n_t-θ_l,n̂_t. Since θ_l,n_t and θ_l,n̂_t are both independently and uniformly distributed in (0,2π), the PDF of ω can be given as follows: f_ω(x)={ 1/2π(1+x/2π), x ∈ [-2π,0), 1/2π(1-x/2π), x ∈ [0,2π). In this manner, the α_l,n̂_t e^-jω in (<ref>) can be calculated as α_l,n̂_t e^-jω=α_l,n̂_tcosω-jα_l,n̂_tsinω. Due to the symmetry of the cosine and sine functions, we have E[α_l,n̂_t e^-jω] = 0, Var[ℜ(α_l,n̂_t e^-jω)] = 1/2, Var[ℑ(α_l,n̂_t e^-jω)] = 1/2. Note that since the subsequent derivation requires fitting the distribution using the CLT, the expectation and variance of each variable need to be obtained. It is known that the real and imaginary parts are independent of each other; thus, the variance of α_l,n̂_t e^-jω is Var[α_l,n̂_t e^-jω] = 1. After some simple algebraic operations, we can derive the mean and variance of α_l,n_t- α_l,n̂_t e^-jω in (<ref>) as E(α_l,n_t- α_l,n̂_t e^-jω) = √(π)/2, Var(α_l,n_t- α_l,n̂_t e^-jω) = 8-π/4. Further, the mean and variance of β̂_l (α_l,n_t- α_l,n̂_t e^-jω) in (<ref>) can be respectively expressed as E[β̂_l (α_l,n_t- α_l,n̂_t e^-jω)] = √(π)/2E(β̂_l), Var[β̂_l (α_l,n_t- α_l,n̂_t e^-jω)] =2-π/4E^2(β̂_l). Since the reflecting elements of the RIS are independent of each other, it is difficult to directly obtain an accurate PDF for the sum of the reflective elements. To address this issue, we can use the CLT to approximate the PDF as a real Gaussian distribution. Consequently, the corresponding mean and variance can be respectively expressed as μ = √(π)LE(β̂_l)/2, σ^2 = L[8-π E^2(β̂_l)]/4. Based on (<ref>) and (<ref>), the UPEP of the proposed scheme can be calculated as P̅_b = ∫_0^∞ Q(√(ρζ^2x/2(1+ρ(1-ζ^2)σ_e^2L)))f(x)dx, where x = |η-η̂|^2, ρ=P_s/N_0 stands for SNR, and f(x) denotes the PDF of the variable x. According to <cit.>, we have Q(x) ≈1/12exp(-x^2/2)+1/4exp(-2x^2/3). Substituting (<ref>) into (<ref>), the UPEP can be reformulated as P̅_b ≈ 1/12∫_0^∞exp(-ρζ^2x/4(1+ρ(1-ζ^2)σ_e^2L))f(x)dx +1/4∫_0^∞exp(-ρζ^2x/3(1+ρ(1-ζ^2)σ_e^2L))f(x)dx. Since the variable x follows a non-central chi-square distribution with one degree of freedom, its PDF form is very cumbersome. To facilitate the evaluation of (<ref>), we resort to <cit.> and obtain its moment-generating function as M_X(s)=1/√(1-2sσ^2)exp(μ^2s/1-2sσ^2). Substituting (<ref>) into (<ref>), we obtain the closed-form expression of UPEP as P̅_b = 1/12√(2(1+ρ(1-ζ^2)σ_e^2L)/2(1+ρ(1-ζ^2)σ_e^2L)+ρζ^2σ^2) ×exp(-μ^2ρζ^2/4(1+ρ(1-ζ^2)σ_e^2L)+2ρζ^2σ^2) + 1/4√(3(1+ρ(1-ζ^2)σ_e^2L)/3(1+ρ(1-ζ^2)σ_e^2L)+2σ^2ρζ^2) ×exp(-μ^2ρζ^2/3(1+ρ(1-ζ^2)σ_e^2L)+2σ^2ρζ^2). After some manipulations, (<ref>) can be rewritten as P̅_b = 1/12√(2(1+ρσ_e^4L)/2(1+ρσ_e^4L)+ρσ^2)exp(-μ^2ρ/4(1+ρσ_e^4L)+2ρσ^2) + 1/4√(3(1+ρσ_e^4L)/3(1+ρσ_e^4L)+2σ^2ρ)exp(-μ^2ρ/3(1+ρσ_e^4L)+2σ^2ρ).
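As a quick numerical cross-check of the closed-form result, the short script below evaluates the UPEP expression above from the Rician moment E(β̂_l) and the CLT mean and variance μ and σ^2; the parameter values are arbitrary and serve only to illustrate the computation.

# Evaluate the closed-form UPEP of the RIS-SSK scheme with imperfect CSI.
# Arbitrary illustrative parameters; expressions follow the text above.
import numpy as np
from scipy.special import iv  # modified Bessel functions I_0, I_1

L = 144                      # number of RIS reflecting elements
kappa = 10 ** (3 / 10)       # Rician factor of 3 dB
sigma_e2 = 0.1               # estimation error variance
zeta2 = 1.0 / (1.0 + sigma_e2)
snr_db = np.arange(-10, 21, 5)
rho = 10 ** (snr_db / 10)    # SNR from -10 dB to 20 dB

# First moment of the Rician-faded amplitude on each RIS-UE path.
E_beta = (np.sqrt(np.pi / (4 * kappa + 4)) * np.exp(-kappa / 2)
          * ((1 + kappa) * iv(0, kappa / 2) + kappa * iv(1, kappa / 2)))

# CLT mean and variance of the composite term summed over the L elements.
mu = np.sqrt(np.pi) * L * E_beta / 2
sigma2 = L * (8 - np.pi * E_beta**2) / 4

# Closed-form UPEP based on the two-term exponential Q-function approximation.
den = 1 + rho * (1 - zeta2) * sigma_e2 * L
upep = (1 / 12 * np.sqrt(2 * den / (2 * den + rho * zeta2 * sigma2))
        * np.exp(-mu**2 * rho * zeta2 / (4 * den + 2 * rho * zeta2 * sigma2))
        + 1 / 4 * np.sqrt(3 * den / (3 * den + 2 * sigma2 * rho * zeta2))
        * np.exp(-mu**2 * rho * zeta2 / (3 * den + 2 * sigma2 * rho * zeta2)))

for s, p in zip(snr_db, upep):
    print(f"SNR = {s:3d} dB  ->  UPEP ~ {p:.3e}")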
§.§ Asymptotic UPEP To provide a better demonstration of the impact of the channel estimation error parameters on the performance of the considered RIS-SSK system, we evaluate the performance of this system in the high SNR region. Here, the asymptotic expression for the UPEP can be calculated as P̅_a= lim_ρ→∞P̅_b= 1/6√(2σ_e^4/8σ_e^4+8-π E^2(β̂_l))exp(-π LE^2(β̂_l)/16σ_e^4+16-2π E^2(β̂_l)) +1/4√(6σ_e^4/6σ_e^4+8-π E^2(β̂_l))exp(-π L E^2(β̂_l)/12σ_e^4+16-2π E^2(β̂_l)). §.§ ABEP It is worth noting that ABEP is equal to UPEP when N_t is two, while ABEP is given by the union upper bound of the scheme if N_t is greater than two. Consequently, the ABEP of the RIS-SSK scheme can be characterized as <cit.> ABEP ≤1/log_2N_t∑_n̂_t=1^N_t∑_n_t≠n̂_t^N_tP̅_i N(n̂_t→ n_t), i ∈{a,b} where N(n̂_t→ n_t) indicates the number of error bits between the true transmit antenna index n_t and the detected antenna index n̂_t. § SIMULATION AND ANALYTICAL RESULTS In this section, we explore the error performance of the proposed scheme under imperfect CSI via Monte Carlo simulation. The simulation involves generating a random data sequence and transmitting it to the receiver via RIS reflection after modulation. In the simulations, 1×10^6 independent realizations are generated for each SNR value and the average ABEP is calculated, which is used to validate the analytical derivations. Unless otherwise specified, N_t and N_r are set to 2 and 1, respectively, and the RIS is a square array. Additionally, the impact of any large-scale path loss is ignored as it is already implicit in the received SNR. In Fig. <ref>, we plot the ABEP performance of the RIS-SSK scheme under imperfect CSI, where the Rician factor κ and the estimation error parameter σ_e^2 are set to 3 dB and 0.1, respectively. It is worth noting that the simulation results in (<ref>) and the closed-form expression in (<ref>) start to agree with the variation of SNR when L is not less than 144 since the CLT requires at least two orders of magnitude. In addition, we observe that the simulation value coincides almost perfectly with the analytical result in the case of L=256, which further validates the correctness of the derived result and shows that the gap between (<ref>) and the real value is very small and almost negligible. In Fig. <ref>, we validate the correctness of the developed asymptotic ABEP, where the Rician factor κ and the estimation error parameter σ_e^2 are set to 3 dB and 0.1, respectively. Specifically, in Fig. <ref>, the analytical and asymptotic ABEP curves are plotted, which are generated based on (<ref>) and (<ref>), respectively. It can be observed that as the SNR increases, there is not only a significant performance degradation but also an error floor, which is caused by the channel estimation error. As the number of reflective elements L increases, the ABEP of the scheme decreases accordingly. However, for high SNR regions with imperfect CSI, this is not the case, since the dominant noise is no longer AWGN, but originates from channel estimation errors. In Fig. <ref>, the Monte Carlo simulation results and analytical curves of the RIS-SSK scheme with κ = 3 dB and L=144 are given. Note that the channel estimation error is set to the fixed values σ_e^2 = 3,2,1,0.1, respectively; that is, the correlation coefficients are ζ = 0.500,0.5774,0.7071,0.9535. As a reference, the corresponding ABEP with perfect CSI is also shown with a dashed line for the RIS-SSK scheme. From Fig.
<ref>, it can be seen that the RIS-SSK scheme has good anti-noise performance. According to <cit.>, σ_e^2 must be much less than 0.1 to approach the perfect CSI performance, while in this figure, a value of 0.1 is already very close to the ABEP with perfect CSI. In Fig. <ref>, we exhibit the performance impact of the LoS path of the reflection channel on the RIS-SSK scheme with imperfect CSI, where the number of reflecting elements is 144 and the estimation error variance from each reflecting element to the UE is 0.1. From Fig. <ref>, the simulation and analytical values match very well and exhibit a strong correlation. The remaining mismatch can be reduced by increasing the number of simulation runs or the SNR values. A higher Rician factor indicates a stronger reflected LoS path signal energy, which in turn results in a higher quality of the received signal on the UE side. As a result, the ABEP performance can be improved. Additionally, it is also found that the ABEP performance of RIS-SSK in the imperfect CSI case is enhanced as the SNR increases. § CONCLUSION In this paper, the performance of RIS-SSK with imperfect channel estimation is analyzed, where the BS-RIS channel suffers from Rayleigh fading and the RIS-UE channel follows Rician fading. Based on the ML detector, we derived the CPEP expression and the PDF of the composite channel, which follows a non-central chi-square distribution with one degree of freedom. After that, we derived the closed-form and asymptotic ABEP expressions of the RIS-SSK scheme with imperfect CSI. Finally, all the analytical derivations are verified by Monte Carlo simulation and it is found that the obtained ABEP values are closer to the true results when the channel estimation error is smaller or the Rician factor is larger. 99 wu2019towards Q. Wu and R. Zhang, “Towards smart and reconfigurable environment: Intelligent reflecting surface aided wireless network," IEEE Commun. Mag., vol. 58, no. 1, pp. 106-112, Jan. 2020. Taosum2021 A. Ihsan, W. Chen, M. Asif, W. U. Khan, Q. Wu, and J. Li, “Energy-efficient IRS-aided NOMA beamforming for 6G wireless communications," IEEE Trans. Green Commun. Netw., vol. 6, no. 4, pp. 1945-1956, Dec. 2022. saad2020vis E. Basar, M. Di Renzo, J. De Rosny, M. Debbah, M. -S. Alouini, and R. Zhang, “Wireless communications through reconfigurable intelligent surfaces," IEEE Access, vol. 7, pp. 116753-116773, Aug. 2019. do2021multi T. N. Do, G. Kaddoum, T. L. Nguyen, D. B. da Costa, and Z. J. Haas, “Multi-RIS-aided wireless systems: Statistical characterization and performance analysis," IEEE Trans. Commun., vol. 69, no. 12, pp. 8641-8658, Dec. 2021. tang2021wireless W. Tang et al., “Wireless communications with reconfigurable intelligent surface: Path loss modeling and experimental measurement," IEEE Trans. Wireless Commun., vol. 20, no. 1, pp. 421-439, Jan. 2021. zhu2021performance X. Zhu, L. Yuan, Q. Li, Q. Li, L. Jin, and J. Zhang, “On the performance of 3-D spatial modulation over measured indoor channels," IEEE Trans. Veh. Technol., vol. 71, no. 2, pp. 2110-2115, Feb. 2022. li2023index J. Li et al., “Index modulation multiple access for 6G communications: Principles, applications, and challenges," IEEE Netw., vol. 37, no. 1, pp. 52-60, Jan./Feb. 2023. jegan2009space J. Jeganathan, A. Ghrayeb, L. Szczecinski, and A. Ceron, “Space shift keying modulation for MIMO channels," IEEE Trans. Wireless Commun., vol. 8, no. 7, pp.
3692-3703, Jul. 2009. zhu2023qua X. Zhu, W. Chen, Z. Li, Q. Wu, and J. Li, “Quadrature spatial scattering modulation for mmWave transmission," IEEE Commun. Lett., vol. 27, no. 5, pp. 1462-1466, May 2023. can2020re A. E. Canbilen, E. Basar, and S. S. Ikki, “Reconfigurable intelligent surface-assisted space shift keying," IEEE Wireless Commun. Lett., vol. 9, no. 9, pp. 1495-1499, Sept. 2020. li2021space Q. Li, M. Wen, S. Wang, G. C. Alexandropoulos, and Y.-C. Wu, “Space shift keying with reconfigurable intelligent surfaces: Phase configuration designs and performance analysis," IEEE Open J. Commun. Soc., vol. 2, pp. 322-333, Feb. 2021. canbilen2022on A. E. Canbilen, E. Basar, and S. S. Ikki, “On the performance of RIS-assisted space shift keying: Ideal and non-ideal transceivers," IEEE Trans. Commun., vol. 70, no. 9, pp. 5799-5810, Sept. 2022. zhu2021ris X. Zhu, L. Yuan, K. J. Kim, Q. Li, and J. Zhang, “Reconfigurable intelligent surface-assisted spatial scattering modulation," IEEE Commun. Lett., vol. 26, no. 1, pp. 192-196, Jan. 2022. zhou2020robust G. Zhou, C. Pan, H. Ren, K. Wang, M. D. Renzo, and A. Nallanathan, “Robust beamforming design for intelligent reflecting surface aided MISO communication systems," IEEE Wireless Commun. Lett., vol. 9, no. 10, pp. 1658-1662, Oct. 2020. yang2022per P. Yang, L. Yang, and S. Wang, “Performance analysis for RIS-aided wireless systems with imperfect CSI," IEEE Wireless Commun. Lett., vol. 11, no. 3, pp. 588-592, Mar. 2022. san2007dig K. S. Sanila and N. Rajamohan, Digital Communications, 5th ed. New York, NY, USA: McGraw-Hill, 2007. basar2012per E. Basar, U. Aygolu, E. Panayirci, and H. V. Poor, “Performance of spatial modulation in the presence of channel estimation errors," IEEE Commun. Lett., vol. 16, no. 2, pp. 176-179, Feb. 2012. xx2007tab A. Jeffrey and D. Zwillinger, Table of Integrals, Series, and Products. Elsevier, 2007.
http://arxiv.org/abs/2307.03222v1
20230706180001
Probing the two-body decaying dark matter scenario with weak lensing and the cosmic microwave background
[ "Jozef Bucko", "Sambit K. Giri", "Aurel Schneider" ]
astro-ph.CO
[ "astro-ph.CO" ]
Probing the two-body decaying dark matter scenario with WL and the CMB Institute for Computational Science, University of Zurich, Winterthurerstrasse 190, 8057 Zurich, Switzerland Nordita, KTH Royal Institute of Technology and Stockholm University, Hannes Alfvéns väg 12, SE-106 91 Stockholm, Sweden Université Paris-Saclay, Université Paris Cité, CEA, CNRS, Astrophysique, Instrumentation et Modélisation Paris-Saclay, 91191 Gif-sur-Yvette, France Decaying dark matter (DDM) scenarios have recently re-gained attention due to their potential ability to resolve the well-known clustering (or S_8) tension between weak lensing (WL) and cosmic microwave background (CMB) measurements. In this paper, we investigate a well-established model, where the original dark matter (DM) particle decays into a massless and a massive daughter particle. The latter obtains a velocity kick during the decay process resulting in a suppression of the matter power spectrum at scales that are observable with WL shear observations. We perform the first fully nonlinear WL analysis of this two-body decaying dark matter (ΛDDM) scenario including intrinsic alignment and baryonic feedback processes. We thereby use the cosmic shear band power spectra from the KiDS-1000 data combining them with temperature and polarization data from Planck to constrain the ΛDDM model. We report new limits on the decay rate and mass splitting parameters that are significantly stronger than previous results, especially for the case of low mass splittings. We also investigate the S_8 tension, finding only a marginal improvement of 0.3σ for ΛDDM compared to the ΛCDM case. The improvement is not caused by a shift but by a slight bloating of the posterior contours caused by the additional free model parameters. We therefore conclude that the two-body ΛDDM model does not provide a convincing solution to the S_8 tension. Our emulator to model the nonlinear ΛDDM power spectrum is published as part of the publicly available code DMemu at <https://github.com/jbucko/DMemu>. Probing the two-body decaying dark matter scenario with weak lensing and the cosmic microwave background. Jozef Bucko (jozef.bucko@uzh.ch)1 Sambit K. Giri1,2 Fabian Hervas Peters1,3 Aurel Schneider1 Received: XX, XX, XXXX. Accepted: YY, YY, YYYY, Report Number: NORDITA 2023-020 ====================================================================================================================================== § INTRODUCTION The standard Λ-cold dark matter (ΛCDM) cosmology provides an outstanding description of the Universe explaining a wide range of cosmic observations, such as the cosmic microwave background (CMB), baryonic acoustic oscillations (BAO), large-scale structure formation or Big Bang nucleosynthesis (BBN). Despite its tremendous success, there are still questions to which ΛCDM, as understood nowadays, cannot provide satisfying answers, including the nature of dark matter (DM) and dark energy. Moreover, with progressively more precise measurements, several discrepancies within the model have emerged. An example of such a discrepancy is the mild yet persistent clustering amplitude tension between CMB and weak lensing (WL) measurements, often expressed via the parameter S_8=σ_8 √(Ω_ m/0.3), with σ_8 and Ω_ m describing the clustering amplitude and the total matter abundance. More specifically, CMB measurements from the Planck collaboration yield S_8 = 0.834 ± 0.016 <cit.> while a variety of low-redshift surveys report consistently lower values.
For example, the Kilo-Degree Survey <cit.> obtains a value of S_8 = 0.760^+0.016_-0.038 <cit.> in agreement with (albeit slightly lower than) results from the Hyper Suprime-Cam <cit.> and the Dark Energy Survey <cit.>. It remains unclear whether the S_8 tension emerges from an insufficient modelling of the nonlinear clustering of matter <cit.>, the modelling of cosmic shear <cit.> or whether one has to look beyond standard ΛCDM. Resolving the S_8 tension can be achieved by suppressing the matter power spectrum at scales k∼ 0.1-1 h/Mpc, which most substantially influence the clustering amplitude value S_8 <cit.>. Such suppression may be obtained by a number of extensions of ΛCDM, such as cold-warm dark matter <cit.>, cannibal dark matter <cit.>, models involving interactions between dark matter and dark radiation at early times <cit.>, scenarios introducing an interaction between dark matter and dark energy <cit.>, baryons and dark energy <cit.> or models assuming unstable dark matter particles <cit.>. The class of decaying dark matter models includes several different scenarios. In the simplest case, a fraction of dark matter decays into a relativistic component (often assumed to be dark radiation). However, such a model is strongly constrained by CMB observations <cit.> as it affects the cosmic background evolution. An alternative and only somewhat more complex scenario assumes that the initial dark matter particles decay into a pair of massless and massive particles, the latter obtaining a velocity kick during the decay process. This scenario is referred to as a two-body decaying dark matter model and will be denoted as ΛDDM hereafter. A direct consequence of the ΛDDM model is the free-streaming process of the stable decay products, which alters the gravitational collapse of cosmic structures. This effect is relevant at scales set by the free-streaming length of the stable daughter particles, thus by the magnitude of the velocity kicks (v_k) they receive as a consequence of energy-momentum conservation. As a result, the matter power spectrum gets suppressed during late times at scales above k∼ 0.1 h/Mpc <cit.>. In addition to the aforementioned reason, there are arguments motivated by particle physics to consider models in which dark matter is not stable over cosmic time. First of all, such a stability condition does not emerge naturally, i.e., it usually requires additional assumptions such as a Z_2 symmetry <cit.>. Moreover, there are numerous theoretical models involving dark matter decays, such as sterile neutrinos <cit.>, R-parity violation <cit.> or super weakly interacting massive particles <cit.>. The two-body decaying dark matter model (ΛDDM) has been studied from various angles over the last decade, e.g. by using perturbation theory <cit.> or N-body simulations ranging from individual galaxies <cit.> to the large-scale structure <cit.>. For example, <cit.> obtained constraints on the ΛDDM model based on Milky Way satellite counts while <cit.> and <cit.> used Lyman-α forest data to constrain the two-body decay rate in the regime of low-mass splittings. Additionally, <cit.> and <cit.> considered Planck CMB observations together with supernova type Ia (SNIa) data and BAO to derive constraints on two-body decays. After including priors from WL observations, they report a reduction of the S_8-tension for a best-fitting ΛDDM model with τ=120 Gyr and v_k/c≃ 1.2% <cit.>. In this work, we perform the first WL analysis of the ΛDDM model using cosmic shear data from the KiDS-1000 survey <cit.>.
The nonlinear clustering predictions are thereby modelled using an emulator based on a suite of N-body simulations. Next to the WL analysis, we also perform a reanalysis of the Planck 2018 CMB temperature and polarisation data as well as a combined WL plus CMB modelling, investigating, in particular, the potential of ΛDDM to solve the S_8 clustering tension. Our paper is organized as follows: in Section <ref>, we describe the basic physics and implications of the ΛDDM model, while Section <ref> provides a detailed overview of our modelling of WL and CMB observables. In Section <ref>, we comment on choices made in relation to model inference and in Section <ref>, we describe our results and compare them to recent studies before concluding in Section <ref>. In Appendix <ref>, we compare the results of our N-body simulations to previous studies. Appendix <ref> discusses the cosmology dependence of the ΛDDM effects and Appendix <ref> studies the effects two-body decays and baryons have on WL and CMB observables. In Appendix <ref>, we study more closely the tension between WL and CMB data in the context of the ΛDDM model and, finally, Appendix <ref> provides more detailed information about the parameters we obtain from the model inference. § TWO-BODY DECAYING DARK MATTER In the two-body decaying dark matter (DM) model (ΛDDM model), an original (mother) DM particle decays into a slightly lighter, stable particle and a relativistic massless relic, while the energy released during the decay process is split between the two product species. We describe the basic theoretical properties of the ΛDDM model in Section <ref>, in Section <ref> we discuss our ΛDDM N-body simulations and, finally, we introduce a new emulator of the ΛDDM nonlinear matter power spectra in Section <ref>. §.§ Theory The ΛDDM model is simple enough to be described by two phenomenological parameters. The first parameter is the decay rate Γ controlling the frequency of the decay processes. The second parameter corresponds to the velocity kick v_k of the massive daughter particle, which is directly linked to the mass ratio between the mother and daughter particles. Some authors replace the velocity kick magnitude v_k with the ratio ε of the rest-mass energy ε = 1/2(1 - m^2_ wdm/M^2_ dcdm), where m_ wdm and M_ dcdm denote the rest mass of the warm daughter and decaying cold mother DM particles, respectively <cit.>. The momentum of the daughter particle in the centre-of-momentum frame is p_ wdm=m_ wdmcε/ √(1-2ε) <cit.> where c is the speed of light. Note that in the non-relativistic limit, we obtain a simple relation v_k = ε c. The background evolution of the energy densities of both the cold (mother) and warm (daughter) DM species as well as the mass-less, dark-radiation (daughter) component can be written as <cit.> ρ̇_ dcdm + 3ℋρ_ dcdm =-a Γρ_ dcdm, ρ̇_ wdm + 3ℋρ_ wdm =a ΓM^2_ dcdm+m^2_ wdm/2M^2_ dcdmρ_ dcdm, ρ̇_ dr + 4ℋρ_ dr =a ΓM^2_ dcdm-m^2_ wdm/2M^2_ dcdmρ_ dcdm or using ε instead of particle masses ρ̇_ dcdm + 3ℋρ_ dcdm =-a Γρ_ dcdm, ρ̇_ wdm + 3ℋρ_ wdm =a Γ(1-ε) ρ_ dcdm, ρ̇_ dr + 4ℋρ_ dr =a Γερ_ dcdm. In the above equations, ℋ is the conformal Hubble parameter and ρ_i the energy density of species `i'. The subscripts `dcdm', `wdm' and `dr' refer to the cold, warm, and massless species. Dots denote derivatives with respect to conformal time and a stands for the scale factor.
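To illustrate how the background densities respond to the decay, the sketch below integrates the background equations above (in their ε form) rewritten with ln(a) as time variable, dρ/dln a = -3ρ - (Γ/H)ρ and analogously for the daughter species. It is a toy integration that neglects photons and neutrinos in H(a) and uses arbitrary parameter values; it is not the Boltzmann-code implementation used in the analysis.

# Toy integration of the DDM background equations, rewritten with ln(a)
# as time variable: d(rho)/dln(a) = -3*rho - (Gamma/H)*rho, etc.
# Densities are in units of today's critical density; photons and
# neutrinos are neglected in H(a). Parameter values are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

H0 = 1.0 / 14.4            # Hubble rate in 1/Gyr (roughly h = 0.68)
Gamma = 1.0 / 30.0         # decay rate in 1/Gyr (illustrative)
eps = 0.01                 # fraction of rest-mass energy carried by dark radiation
Om_dcdm, Om_b, Om_L = 0.26, 0.048, 0.69   # DCDM, baryons, Lambda (illustrative)

def rhs(lna, y):
    rho_dcdm, rho_wdm, rho_dr = y
    a = np.exp(lna)
    rho_tot = rho_dcdm + rho_wdm + rho_dr + Om_b * a**-3 + Om_L
    H = H0 * np.sqrt(rho_tot)              # Friedmann equation, flat universe
    source = (Gamma / H) * rho_dcdm
    return [-3 * rho_dcdm - source,
            -3 * rho_wdm + (1 - eps) * source,
            -4 * rho_dr + eps * source]

a_ini = 1e-3                               # start deep in matter domination
y0 = [Om_dcdm * a_ini**-3, 0.0, 0.0]       # all decaying DM initially undecayed
sol = solve_ivp(rhs, [np.log(a_ini), 0.0], y0, rtol=1e-8)

rho_dcdm, rho_wdm, rho_dr = sol.y[:, -1]
print(f"z=0: dcdm={rho_dcdm:.4f}, wdm={rho_wdm:.4f}, dr={rho_dr:.5f}")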
Next to the two model parameters Γ and v_k, we include a third parameter f=Ω_ dcdm/(Ω_ dcdm+ Ω_ cdm), which allows for a scenario where only a fraction f of the total, initial DM fluid is unstable (while the remaining DM corresponds to a stable CDM particle). Here we have introduced the abundances of the stable (Ω_ cdm) and unstable (Ω_ dcdm) dark matter species, respectively. With the above description, one can in principle study the full parameter space of two-body decays taking arbitrary Γ, ε and f values, with limiting cases Γ→ 0, ε→ 0 or f → 0 approaching ΛCDM cosmology and ε→ 1/2 approaching one-body decays. However, as recent studies have demonstrated <cit.>, the ΛDDM models with very large decay rates and velocity kicks are ruled out by observations as they lead to a strong power suppression at k< 0.1 h/Mpc. We, therefore, focus on the regime with late-time decays (Γ≲ H_0) and non-relativistic velocity kicks (v_k ≪ c) throughout this paper. §.§ Simulations Considering only non-relativistic decays, we implement the ΛDDM model into the N-body code PKDGRAV3 <cit.>, a tree-based gravity solver based on fast multi-pole expansion and adaptive time-stepping. Following the theoretical description (<ref>)-(<ref>), we find that at first order, the background equations remain unmodified. With this approximation, there is no energy transfer between the radiation and dark matter components caused by the decay process. Therefore, we keep the background cosmology implementation of PKDGRAV3 unchanged and implement only the non-relativistic velocity kicks received by the WDM particles. Note that this differs from the one-body DDM model studied in <cit.> where the background evolution had to be modified. The two-body decays are implemented into PKDGRAV3 via a function pkdDecayMass, which is revisited at each global integration time step (separated by a time interval Δ t). The decay probability of a given (not yet decayed) particle at time step i is P = ΓΔ t. Thus, a number of Δ N^i_ wdm = ΓΔ t N^i_ dcdm simulation particles undergo the decay process, where N^i_ dcdm denotes the number of unstable DCDM particles at time step i. The particles that are about to decay are chosen randomly from all remaining DCDM particles. Immediately after the decay, they obtain a velocity kick of uniform magnitude in a random direction. Importantly, these particles are flagged and added into a set of already decayed particles in order to be excluded from the decay process occurring in future time steps. In Fig. <ref>, we plot the ratios of simulated ΛDDM to ΛCDM power spectra for varying values of f, v_k, and 1/Γ (see panels a, b, and c). In general, the two-body decaying DM model leads to a suppression of the total matter power spectrum at small scales, leaving the large scales unchanged. The amplitude of the suppression, as well as the scale of the downturn, depend on the values of the ΛDDM parameters. The fraction of decaying DM as well as the decay rate both affect the amplitude of the suppression while the value of the velocity kick primarily influences the position of the downturn along the k-axis. The latter can be understood by the fact that larger streaming velocities are able to affect the formation of structures at larger scales. The redshift dependence of the power suppression is shown in panel (d) of Fig. <ref>. Not surprisingly, the amplitude of the suppression increases towards lower redshifts. This behaviour is caused by the fact that more particles decay with time, causing a reduction of the clustering process compared to ΛCDM.
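A schematic version of the decay step described above is sketched below. The actual pkdDecayMass routine is part of PKDGRAV3 and is not reproduced here; the fragment is only a NumPy illustration of the per-time-step Monte Carlo draw and the random, fixed-magnitude velocity kick, with hypothetical array names and units.

# Schematic NumPy version of the per-time-step decay draw described above.
# Not the PKDGRAV3 implementation; array names and units are illustrative.
import numpy as np

rng = np.random.default_rng(42)

def decay_step(vel, decayed, Gamma, dt, v_kick):
    """Flag a fraction Gamma*dt of not-yet-decayed particles and kick them.

    vel     : (N, 3) particle velocities (updated in place)
    decayed : (N,) boolean flags marking particles that already decayed
    Gamma   : decay rate, dt : global time step, v_kick : kick magnitude
    """
    candidates = np.flatnonzero(~decayed)
    # Each remaining particle decays with probability P = Gamma * dt.
    n_decay = rng.binomial(candidates.size, Gamma * dt)
    chosen = rng.choice(candidates, size=n_decay, replace=False)

    # Isotropic unit vectors: fixed kick magnitude, random direction.
    direction = rng.normal(size=(n_decay, 3))
    direction /= np.linalg.norm(direction, axis=1, keepdims=True)

    vel[chosen] += v_kick * direction
    decayed[chosen] = True   # exclude these particles from future draws
    return chosen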
We run a suite of N-body simulations for decay rates Γ < 1/13.5 Gyr^-1 and velocity kicks v_k/c < 0.02. All our simulations are run assuming a fiducial cosmology with parameters h_0 = 0.678, Ω_ m,0 = 0.307, Ω_Λ,0 = 0.693, Ω_ b,0 = 0.048, n_s = 0.966 and σ_8 = 0.883. We obtain converged results at the scales k∼ 0.01-10 h/Mpc for box sizes of L_ box = 125,250,512 Mpc/h and particle numbers of N = 256^3,512^3,1024^3 (depending on the specific ΛDDM configuration). We compare the output of our simulations to results from previous works and find a good level of agreement (see Appendix <ref>). As we are primarily interested in the ratio of the nonlinear power spectra between the ΛDDM and ΛCDM models, the cosmology dependence is factored out to a large extent; we verify that the impact of the cosmology is much smaller than the suppression due to the two-body decays. §.§ Emulating the impact of dark matter decays In order to carry out a Bayesian inference analysis (Section <ref>), we need a fast modelling framework to explore the vast parameter space of astrophysics, cosmology, and dark matter models. As N-body simulations are not fast enough for this purpose, we build an emulator to account for the different ΛDDM parameters. The basic characteristics of our emulator-building procedure are shown in the flowchart of Fig. <ref>. First, we run ∼100 gravity-only simulations for different dark matter parameters (Γ, v_k, and f) and measure the nonlinear matter power spectra up to k∼ 6 h/Mpc between z=2.35 and z=0. In the next step, we perform a principal component analysis (PCA) <cit.> on the ratios of the ΛDDM and ΛCDM matter power spectra 𝒮_Λ DDM^Γ,v_k,f(k,z). We find that five PCA components are sufficient to describe the ratio of spectra with a reconstruction error of ∼0.1%. Next, we train a neural network to model these five PCA components of the simulated power spectra ratios for a given parameter vector (Γ,v_k,f,z). The network output is then transformed back from the PCA representation to the power spectra ratios before being compared to the original (simulated) ratios used for the training and testing of the emulator. During the network training process, we minimise the differences between the predicted and true power spectra ratios in the training set. We use the mean squared error (MSE) metric to quantify the differences. We choose the sinusoidal representation networks (SIRENs) architecture for building our ΛDDM emulator. SIRENs have been shown to have good interpolation and signal reconstruction properties <cit.>. The main difference compared to standard feed-forward architectures is replacing the commonly assumed ReLU activation function with a sine function. We use an architecture with two hidden layers, each having 1024 neurons, to perform the emulation task. During the training, we optimize the MSE with the Adam optimizer <cit.>. To further fine-tune the SIREN architecture, we perform a hyperparameter optimization for the network's learning rate l_r and regularization strength λ using the Bayesian Optimization and Hyperband (BOHB) method <cit.>. The entire training is performed using the <cit.> machine learning framework. After training, we test the emulator on separate data (i.e. the test set) and monitor the prediction mismatch in each of the 30 k-bins. We show the emulation performance in Fig. <ref> and demonstrate that both the 1σ and 2σ errors stay below 1%. Our emulator can thus efficiently predict the response of decays on the nonlinear matter power spectrum 𝒮_Λ DDM^Γ,v_k,f(k,z).
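For illustration, the following PyTorch/scikit-learn sketch sets up a PCA-compressed SIREN-style regressor of the kind described above and trains it on synthetic stand-in data. It is not the released emulator code; the sine frequency, initialisation, learning rate and training schedule are simplified assumptions, and real training would use the simulated power-spectrum ratios instead of random numbers.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.decomposition import PCA

class Sine(nn.Module):
    """Sine activation used in SIREN-type networks (fixed frequency, no special init here)."""
    def __init__(self, w0=30.0):
        super().__init__()
        self.w0 = w0
    def forward(self, x):
        return torch.sin(self.w0 * x)

class SirenEmulator(nn.Module):
    """Maps normalised (Gamma, v_k, f, z) to the five PCA coefficients of S_DDM(k, z)."""
    def __init__(self, n_in=4, n_hidden=1024, n_pca=5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, n_hidden), Sine(),
                                 nn.Linear(n_hidden, n_hidden), Sine(),
                                 nn.Linear(n_hidden, n_pca))
    def forward(self, x):
        return self.net(x)

# Synthetic stand-in data: 256 parameter points, 30 k-bins per power-spectrum ratio.
X = torch.rand(256, 4)
ratios = np.random.default_rng(0).uniform(0.6, 1.0, size=(256, 30))
pca = PCA(n_components=5).fit(ratios)
Y = torch.tensor(pca.transform(ratios), dtype=torch.float32)

model = SirenEmulator()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(200):                       # minimise the MSE on the PCA coefficients
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), Y)
    loss.backward()
    opt.step()

# Prediction: PCA coefficients -> suppression ratio in the 30 k-bins.
S_pred = pca.inverse_transform(model(torch.rand(1, 4)).detach().numpy())
print(S_pred.shape)   # (1, 30)
```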
The prediction time of our emulator is a few milliseconds. With this tool, we can model the nonlinear power spectrum in the presence of dark matter decays as P^ nonlin_Λ DDM(k,z) = 𝒮_Λ DDM^Γ,v_k,f(k,z) × P^ nonlin_Λ CDM(k,z), where P^ nonlin_Λ CDM(k,z) is the nonlinear ΛCDM power spectrum obtained from the revised_halofit method <cit.>. § DATA SETS AND MODELLING FRAMEWORK Here we first describe the observational data that we use to study the ΛDDM model. Later, in Sections <ref> and <ref>, we present our modelling framework. §.§ Data sets Galaxy WL is a particularly promising observable to probe decaying dark matter models as such scenarios tend to affect structure formation at late times and small cosmological scales. However, while primarily focusing on the WL analysis, we also include CMB data in our analysis. Although the CMB radiation is not sensitive to the ΛDDM model parameters, it helps constrain the cosmological parameters and provides insights into the issue of the S_8-tension. In our study, we use the following observational data sets: ∙ The WL cosmic shear data of the KiDS-1000 data release, obtained in five redshift bins between 0.1≲ z ≲ 1.2 <cit.>. We use the band power angular spectra measured at scales 118 ≤ l ≤ 1266, and ∙ The Planck 2018 high-ℓ power spectra (ℓ≥ 30) of temperature (TT), polarization (EE) and their cross (TE) obtained from <cit.>. In the following subsections, we will discuss how we predict these observations in our modelling framework. §.§ Cosmic Shear modelling for KiDS-1000 The KiDS-1000 catalogue provides information about the shear of over 20 million galaxies divided into five tomographic bins between z∼0.1 and z∼ 1.2. It forms the basis for the auto- and cross-correlation band powers of all five redshift bins. With eight data points for each spectrum, the KiDS-1000 band powers contain a total of 120 observational data points with correlated errors from the corresponding covariance matrix. §.§.§ Nonlinear matter power spectrum We use the public version of the Boltzmann Solver CLASS <cit.> to calculate the ΛCDM matter power spectrum for any given set of cosmological parameters. For the nonlinear modelling, we rely on the revised_halofit method <cit.> implemented in CLASS. Following <cit.>, we assume a single massive neutrino species with fixed mass m_ν = 0.06 eV throughout this work. The process of baryonic feedback causes gas to be expelled out of galaxies and clusters, leading to a suppression of the matter power spectrum at small cosmological scales <cit.>. Note that this suppression is similar in shape to the one caused by DM decays <cit.>, making it particularly important to include baryonic feedback in our modelling pipeline. We use BCemu[The code can be found at <https://github.com/sambit-giri/BCemu>.] <cit.>, an emulation tool providing the suppression 𝒮_ b(k,z) due to baryonic feedback described in <cit.>. 𝒮_ b is a function of seven baryonic parameters and one cosmological parameter, namely the ratio of the baryonic and total matter abundance. However, we use the reduced three-parameter model presented in <cit.>, where four parameters are fixed based on results from hydrodynamical simulations. The three-parameter model consists of two parameters describing the gas distribution (log_ 10M_c, θ_ ej) and one parameter describing the stellar mass (η_δ) around galaxies and clusters. We refer to <cit.> for a detailed description of the model.
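Putting these ingredients together (and anticipating the decay response discussed next), the modular construction of the nonlinear spectrum can be sketched as follows. The two suppression functions are placeholders standing in for the emulators; the actual BCemu and decay-emulator interfaces may differ from what is assumed here.

```python
import numpy as np

def S_ddm(k, z, log10_gamma, log10_vk, f=1.0):
    """Stand-in for the DDM response S_DDM(k, z); returns no suppression here."""
    return np.ones_like(k)

def S_baryon(k, z, log10_Mc, theta_ej, eta_delta, f_b=0.16):
    """Stand-in for the baryonic suppression S_b(k, z); returns no suppression here."""
    return np.ones_like(k)

def pk_ddm_nonlinear(k, z, pk_lcdm_halofit, ddm_pars, baryon_pars):
    """Modular assembly: P_DDM(k, z) = S_DDM * S_b * P_LCDM^halofit (cf. the equation above)."""
    return S_ddm(k, z, *ddm_pars) * S_baryon(k, z, *baryon_pars) * pk_lcdm_halofit(k, z)

# toy usage with a power-law stand-in for the revised_halofit spectrum
k = np.logspace(-2, 1, 50)
pk = pk_ddm_nonlinear(k, 0.5, lambda k, z: k**-2, (-2.5, 2.5), (13.3, 4.0, 0.2))
```

The practical advantage of this factorisation is that each response can be emulated, tested and, if needed, replaced independently of the ΛCDM baseline spectrum.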
Finally, the response function from the two-body DM decays is multiplied by the nonlinear, baryonified power spectrum as shown in Eq. <ref>. It is obtained from the emulator described in Section <ref>. This modular method assumes that all dependence on cosmological parameters is captured by the revised_halofit power spectrum, while the responses from the baryonification and the two-body decays remain independent of cosmology. Regarding the baryonification method, this assumption has been validated before <cit.>. In Appendix <ref>, we validate this assumption for the ΛDDM model. §.§.§ Intrinsic alignment The effects from intrinsic galaxy alignments are modelled assuming the nonlinear alignment model (NLA) as described in <cit.> and first published in <cit.>. Intrinsic alignment enters the band power modelling via a window function <cit.> that contributes to the galaxy-intrinsic and intrinsic-intrinsic terms of the cosmic shear angular power spectrum. Of the two intrinsic alignment parameters A_ IA and η_ IA, we fix the latter to zero, following the approach used in the standard KiDS-1000 analyses <cit.>. §.§.§ Angular power spectrum We investigate the impact of decaying dark matter on structure formation through the analysis of the cosmic shear angular power spectrum, including both auto-correlation and cross-correlation power spectra between different galaxy populations (tomographic bins). Our modelling approach follows the methodology presented in <cit.>, and we refer the reader to that work for a more comprehensive description. We use the multi-purpose cosmology calculation tool <cit.> to model the cosmic shear angular power spectrum. The angular power is then converted into band powers following <cit.>. In Fig. <ref>, we show all band power coefficients together with the respective error bars. The auto- and cross-correlation band power measurements are shown in five redshift bins between l ≃ 118 and l ≃ 1266. §.§ CMB modelling The Boltzmann Solver CLASS can also be used for the theoretical modelling of the CMB temperature and polarization data. Since the work of <cit.>, CLASS comes with an implementation of the ΛDDM model, which we use throughout this work[<https://github.com/PoulinV/class_decays>]. To investigate the effects of the ΛDDM cosmology on CMB data, we model the temperature and polarization power spectra from the Planck 2018 data release <cit.>. We adopt the same methodology as the one introduced in <cit.> and refer the reader to this work for more details. Note, however, that our pipeline is tested for the ΛCDM cosmology, reaching excellent agreement with the results from <cit.>. A comparison can be found in appendix A of <cit.>. § MODEL INFERENCE We perform a number of Markov Chain Monte Carlo (MCMC) samplings in order to infer the posterior probability distribution of the cosmological, baryonic, intrinsic alignment and two-body DDM parameters based on the WL data from KiDS and the CMB observations from Planck. We employ the Stretch Move ensemble method implemented in the package <cit.> to sample from the posterior distribution. An overview of the sampled parameters and the prior choices is given in Tab. <ref>. We use flat priors for all cosmological parameters except for the optical depth τ, where we assume a Gaussian prior 𝒩(0.0506,0.0086), as explained in Section 3.1 of <cit.>. For the DM abundance (ω_ dm) and the amplitude of the primordial power spectrum (A_ s), we use priors that are wide enough to comfortably include the WL posteriors found by <cit.>.
Note that ω_ dm represents the initial DM abundance and, as we assume only late-time decays, this parameter describes the total initial DM budget in both the ΛCDM and ΛDDM scenarios. For the baryon abundance ω_ b, the Hubble constant h_0 and the spectral index n_ s, which cannot be well constrained by WL alone, we choose priors that are as wide as possible to span the values found by surveys (e.g. CMB data) that are more sensitive to these parameters. The prior range for the intrinsic alignment parameter A_ IA is wide enough to include the posterior distribution of this parameter found by <cit.>. The Planck absolute calibration A_ planck is sampled with a Gaussian prior 𝒩(1.0,0.0025)[<https://wiki.cosmos.esa.int/planck-legacy-archive/index.php/CMB_spectrum_%26_Likelihood_Code>]. We further impose flat priors on the baryonic parameters log_ 10 M_ c, θ_ ej and η_δ, covering the full range of the BCemu parameters. For the ΛDDM decay rate (Γ) and the velocity kick magnitude (v_k), we assume flat priors on both log_10Γ and log_10 v_k, spanning from ΛCDM values up to the upper boundary set by the range of the ΛDDM emulator (see Tab. <ref>). In both the WL and CMB setups, we assume Gaussian likelihoods. In the case of the CMB, we use a marginalized, light-weight version of the full Planck likelihood called Plik_lite <cit.>, which is an affordable approximation in the case of ΛDDM. <cit.> demonstrates that using Plik_lite for ΛDDM produces only a negligible difference in the recovered posteriors compared to a full Planck analysis. Plik_lite also comes with a marginalized version of the covariance matrix, which we use throughout this work. In the case of WL, we use the band powers covariance matrix as published in <cit.>. To assess the convergence of our chains, we apply the Gelman-Rubin criterion <cit.>, assuming the chains to be converged at R_c<1.1. Although the emulator presented in Section <ref> accounts for the three parameters Γ, v_k and f, we fix the fraction of decaying to total DM to unity (f=1). This means that we restrict our analysis to the case of a universe with one initial (unstable) DM fluid, leaving the further investigation of a multi-fluid DM sector to future work. § RESULTS In the first part of this section, we describe the constraints on two-body decays from the WL and CMB observations and report our findings regarding baryonic physics. In the second part, we revisit the impact of the ΛDDM model on the S_8 tension before concluding with a discussion about how well the observational data can be fitted within the ΛDDM model. §.§ Derived constraints on two-body decays We obtain the constraints on the decay rate Γ and velocity kick magnitude v_k by marginalizing over cosmology, baryons and nuisance parameters. We display the outcome in Fig. <ref>, showing the ΛDDM posteriors at the 95% credible intervals as obtained from the WL (green) and CMB (orange) analyses. In both cases, the obtained constraints are in agreement with ΛCDM, showing no hint of decays in the DM sector. Fig. <ref> also includes a comparison of our findings with several recent studies: <cit.> (black) focusing on Milky Way satellites, <cit.> (light blue) examining the impact on the Lyman-α forest, and <cit.> and <cit.> (pink and blue, respectively) using Planck 2018, SNIa, and BAO data. We note that our CMB posteriors are consistent with previous CMB studies, in particular with the results of <cit.>. Our limits are slightly stronger than those of <cit.> while exhibiting similar hyperbolic contour trends.
Milky Way satellites, as probed by the DES collaboration <cit.>, are sensitive to the decay rate Γ and rule out half-life times τ<30 Gyr, while the Lyman-α forest turns out to be less sensitive to two-body decays than MW satellites. Finally, the WL data alone provide the most stringent constraints, excluding the region where the following conditions hold at the 95% credible interval: τ = Γ^-1≲ 125  Gyr, and v_k ≳ 300  km/s. Hence, within the parameter space of ΛDDM and the data sets analysed in this study, WL data impose much stronger limits compared to CMB observations[It is important to note that in our analysis we do not include the CMB lensing effect.]. This is mainly due to the fact that, unlike for the case of 1-body decays <cit.>, the two-body ΛDDM model does not significantly affect the background evolution of the Universe, leaving the signal from the late-time integrated Sachs-Wolfe effect unchanged. More discussion of the effects of two-body decays on the CMB signal can be found in Appendix <ref>. §.§ Revisiting the S_8 tension between WL and the CMB As a next step, we investigate to what extent the ΛDDM model is able to alleviate the S_8 tension between lensing and CMB data. In contrast to previous findings <cit.>, we thereby argue that the ΛDDM model is unable to significantly reduce the systematic shift between the clustering signal predicted from the CMB and the KiDS-1000 WL survey. In Fig. <ref> we illustrate the Ω_ m-S_8 posterior contours as obtained from the individual WL and CMB data analyses for both the ΛCDM (dark blue and black) and the ΛDDM model (green and orange). We observe only a marginal change of the contours when going from the ΛCDM to the ΛDDM case. Note that this is true for both the WL and the CMB data analysis. Using the definition of the Gaussian tension <cit.>, which is given as τ_S_8 = (S_8^ CMB - S_8^ WL)/√(Var[S_8^ CMB] + Var[S_8^ WL]), where S_8 can be replaced by any parameter inferred from CMB and WL (or, in general, from two different data sets), we obtain an S_8 tension of τ_S_8 = 3.0σ for ΛCDM and τ_S_8 = 2.7σ for ΛDDM. We, therefore, conclude that the ΛDDM model does not provide a convincing solution to the S_8 tension between observations from KiDS-1000 and Planck 2018. We have further tested this conclusion using several other criteria defined in previous studies and provide them in Appendix <ref>. Moreover, despite introducing two additional free parameters (with f fixed to 1 throughout our analysis), the two-body decaying dark matter does not improve the fit to either the CMB or the WL data (see Tab. <ref>). Fitting the combined WL+CMB data, we obtain a preference for a model with non-zero DDM parameters. The best fitting values are log_10Γ = -2.25^+0.74_-0.23 and log_10 v_k > 2.80. See Appendix <ref> for the full results from our MCMC analyses. These findings align with the conclusions of <cit.> who added a Gaussian prior with the S_8 value from KiDS to their CMB analysis. However, given the original tension between WL and CMB data, it is not surprising to obtain a preference for non-vanishing ΛDDM parameters in the combined data analysis. We want to stress that this is by no means a signal of a departure from ΛCDM but rather a natural consequence of the internal tension between the two data sets. This interpretation is confirmed by the fact that the posteriors from the individual analyses shown in Fig. <ref> show no significant overlap in either the ΛCDM or the ΛDDM case.
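For reference, the Gaussian tension of Eq. (<ref>) amounts to a one-line estimate; the numbers below are purely illustrative and are not the posterior means and errors of this work.

```python
import numpy as np

def gaussian_tension(mean_a, err_a, mean_b, err_b):
    """One-dimensional Gaussian tension between two independent parameter estimates."""
    return abs(mean_a - mean_b) / np.hypot(err_a, err_b)

# hypothetical S_8 estimates from a CMB-like and a WL-like analysis
print(f"{gaussian_tension(0.834, 0.016, 0.765, 0.017):.1f} sigma")   # ~3.0 sigma
```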
Finally, it is worth mentioning that, unlike other extensions of ΛCDM, the combined analysis of the two-body decaying DM model shows a degeneracy of S_8 with ΛDDM parameters, which helps to accommodate the broader range of S_8 values, compatible both with CMB and WL data, as hinted from the left panel of Fig. <ref>. We believe this feature has led several authors in the past to claim that two-body decays can resolve the S_8 tension. However, even in the combined analysis, the S_8 tension is not significantly relaxed as it becomes clear from the right panel of Fig. <ref>. When plotting the logarithm of the product Γ× v_k against S_8, we observe that the overlapping contours of WL (green) and combined (purple) data apparent in the left panel originate from a projection effect. In fact, the lower S_8 value observed in the CMB+WL contours is only allowed in the regime of strong DDM suppression, which is not favored by the WL-only analysis, as illustrated by the right panel of Fig. <ref>. To quantify this further, we use the tensiometer package[<https://tensiometer.readthedocs.io/>] to compute the multidimensional tension between logΓ× v_k and S_8. Our analysis reveals a tension at the 2σ level between the WL-only and the combined scenario (green and purple contours in the right panel of Fig. <ref>). However, using only the one-dimensional definition given in Eq. (<ref>), one would obtain an underestimated S_8 tension value of 1.2σ. The constraints inferred for all sampled parameters in our MCMC runs in case of CMB-only, WL-only and combined analysis can be found in Tab. <ref> and Tab. <ref> of Appendix <ref>. In particular, the values of S_8 from our CMB and WL analyses are compared to the results of <cit.>, <cit.> and <cit.> in Fig. <ref>. Details about tests of our MCMC pipeline and the related discussion can be found in Appendix A of <cit.>. § SUMMARY & CONCLUSION A dark matter (DM) scenario including particle decay forms a natural extension to the minimal model of a cold, stable, and collisionless DM particle. In this work, we investigate the case of two-body DM decays where particles decay into a massless and a massive daughter particle, the latter obtaining a velocity kick as a result of the decay process. This model has been studied in different contexts in the past <cit.> and has been proposed as a potential solution to the S_8 tension <cit.>. In this paper, we perform the first fully nonlinear analysis of WL and CMB data for the two-body decaying dark matter (ΛDDM) scenario. Based on a suite of N-body simulations, we construct a neural network-based emulator to obtain fast predictions of nonlinear matter power spectra for arbitrary values of the decay rate (Γ), the decay-induced velocity kick (v_k) and the fraction of decaying to total DM (f). We then include the emulator into our pipeline predicting WL observations and perform an MCMC analysis with WL data from KiDS-1000 and CMB data from Planck 2018. We present improved constraints on the two-body decaying DM parameters based on the WL data from KiDS-1000. Our constraints are significantly stronger compared to previous results. Specifically, we exclude models with τ = Γ^-1≲ 125 Gyr and v_k ≳ 300 km/s. Fig. <ref> provides a summary of our constraints from the WL and the CMB, along with previous results from the literature. When considering the clustering (or S_8) tension between KiDS-1000 and Planck 2018, we observe a marginal improvement of 0.3σ with ΛDDM compared to the original 3.0σ tension measured in the ΛCDM model. 
We, therefore, conclude that the two-body ΛDDM scenario is unable to convincingly resolve the clustering tension between WL and CMB observations. Note that previous work obtaining different conclusions <cit.> did not include a full, self-consistent modelling of the WL signal and were therefore unable to directly test the S_8 tension in the case of ΛDDM. A further step forward with respect to the current analysis could involve the analysis of the ΛDDM model using Dark Energy Survey (DES) observations along with the possible addition of galaxy clustering and galaxy-galaxy lensing. Furthermore, including additional low-redshift data sets, such as eBOSS and SNIa, could lead to stronger constraints on parameters such as h_0 and Ω_ b allowing for a more precise determination of the DM parameters. Finally, we expect data from Euclid and the Vera C. Rubin Observatory to significantly improve current limits on DM decays. The emulator of two-body decays developed in this study is now publicly available as the DMemu Python package. We welcome researchers to incorporate this package into their data analysis pipelines and further test the ΛDDM model. We thank Douglas Potter and Jonathan Hubert for helpful discussions and technical support. This work is supported by the Swiss National Science Foundation under grant number PCEFP2_181157. Nordita is supported in part by NordForsk. aa § COMPARING TWO-BODY DECAY SIMULATIONS TO PREVIOUS WORK In Fig. <ref>, we present a comparison of our N-body simulations with three other studies that have investigated the same ΛDDM model <cit.>. In the top panel of Fig.<ref>, we compare linear recipes based on solving the Boltzmann hierarchy, as developed by <cit.> (dashed lines) and <cit.> (dotted lines). The solid lines represent the results of our ΛDDM N-body simulations for f=1.0. For the chosen values of v_k and Γ, we observe good agreement in terms of the downturn scales. However, at nonlinear scales, there are more pronounced differences between the results, which is expected due to the limitations of linear calculations at higher values of k. Note that for v_k=30000 km/s, which corresponds to 10% of the speed of light, we have concerns about the accuracy of our N-body implementation. Therefore, we do not perform model inference for such extreme values of v_k in this work. In the bottom panel of Fig. <ref>, we provide a benchmark of the ΛDDM model (for f=1.0) by comparing it with the N-body study conducted by <cit.> for six different sets of dark matter parameters. We find that scenarios with smaller velocity kicks (v_k≤ 500 km/s) exhibit agreement at the percent level across all scales. At the nonlinear regime (k ≳ 1 h/Mpc) and for larger velocity kicks (v_k≥ 1000 km/s), we observe larger deviations in the predicted power suppression. These deviations can be attributed to the different dark matter implementations employed by <cit.>. However, it is important to note that scenarios with such a significant power suppression in the nonlinear regime are ruled out by observations, which favour a much weaker decrease of power at k≈ 1 - 10 h/Mpc <cit.>. Therefore, the observed differences between our results and those of <cit.> are not a cause for concern. § COSMOLOGY DEPENDENCE OF SUPPRESSION OF THE MATTER POWER SPECTRUM IN ΛDDM WITH RESPECT TO ΛCDM The effects of ΛDDM in our study are accounted for by applying a cosmology-independent boost to the ΛCDM nonlinear matter power spectrum, calculated using the revised_halofit method within the CLASS code. 
To demonstrate the validity of this approach, we conducted a test suite of N-body simulations, where we kept the ΛDDM parameters fixed at Γ^-1 = 26.20 Gyr and v_k = 500 km/s, and varied one ΛCDM parameter at a time. We show in Fig. <ref> that this approach is a good approximation, as neglecting the cosmology dependence in the applied boost results only in a second-order effect. In panels (a) to (e) of this figure, we show the impact of the Hubble parameter (h_0), the clustering amplitude (σ_8), the spectral index (n_ s), the matter abundance (Ω_ m) and the baryon abundance (Ω_ b), respectively. We display the ΛDDM boost for the fiducial cosmology of our N-body simulations in solid salmon, while we plot the same quantity for the best-fit cosmology of the KiDS-450 survey <cit.> in dashed blue. We also add a few additional models in dotted grey. In this analysis, we find that the choice of cosmology has a noticeable impact on the ΛDDM boost only for the parameters σ_8 (panel b) and Ω_ m (panel d). For the remaining parameters, the choice of cosmology does not significantly affect the boost. It is important to note that the models exhibiting substantial differences assume cosmologies that are quite distinct from the fiducial one. However, even in the cases of σ_8 and Ω_ m, the observed effects are relatively small compared to the overall amplitude of the observed boost. § EFFECT OF MODEL AND BARYONIC FEEDBACK PARAMETERS ON THE OBSERVABLES §.§ Weak Lensing To correctly interpret the results, it is important to understand the role of the model parameters in relation to the observables. Fig. <ref> shows the original KiDS-1000 measurements of the cosmic shear band powers with the corresponding error bars (black points), and we illustrate how well the WL, CMB and WL+CMB scenarios can fit these data. The solid green lines show the best-fit configuration in the WL-only setup in the case of ΛDDM (which, however, performs very similarly to the ΛCDM case). Once we calculate the band powers resulting from the CMB-only best-fit cosmology (orange lines), we recover a significantly stronger WL signal than observed by KiDS. This is a consequence of the larger clustering amplitude (S_8 value) preferred by the CMB data. In the combined WL+CMB scenario, the tighter error bars on cosmological parameters from the Planck results lead to a best-fit cosmology that is close to the one obtained from the CMB-only scenario.
However, the inclusion of baryonic effects and, specifically, two-body decays on small scales improves the fit to the cosmic shear data, as indicated by the solid purple lines. To demonstrate the significant impact of two-body decays on the shear signal, we also display the dashed and dotted purple lines. The dashed line corresponds to the same cosmology and baryonic parameters as the solid purple case, but with two-body decays completely turned off. The dotted line represents the largest possible impact of two-body decays when the cosmology and baryons are fixed to the WL+CMB ΛDDM best-fit scenario (solid purple lines). This impact is substantial enough to either overestimate or underestimate the measured shear signal, particularly noticeable in the autocorrelation spectra of higher-redshift bins. This explanation clarifies why, in the combined ΛDDM analysis, we obtain constraints on the two-body parameters that are distinct from the ΛCDM cosmology. The significant influence of two-body decays on the shear signal justifies the deviation from the ΛCDM scenario. §.§ Baryon feedback We use three free parameters in the baryonic effects emulator (log_10 M_c,θ_ ej,η_δ) as proposed in <cit.>. From our analysis, we find that the WL data does not provide any information about the stellar population parameter η_δ. This is evident from Fig. <ref> and can be inferred from Tab. <ref>, as varying η_δ does not affect the modelled WL observables. On the other hand, the gas profile parameters log_10 M_ c and θ_ ej have an impact on the WL observables. For log_10 M_ c, we obtain log_10 M_ c<13.1, (13.2) for ΛCDM (ΛDDM) from the WL-only analysis, and log_10 M_ c>13.8, (unconst) from the combined analysis. Similarly, for θ_ ej, we find θ_ ej < 5.45, (5.57) in the case of ΛCDM (ΛDDM) from the WL-only analysis, and θ_ ej > 5.88, (unconst) from the combined analysis. These results indicate that the values of these baryonic parameters are mutually exclusive when comparing the WL-only and combined (ΛCDM) scenarios. However, the underlying reasons for this behaviour are discussed in the following sections. Fig. <ref> demonstrates the impact of baryonic feedback on the KiDS band powers by varying the log M_c, θ_ ej, and η_δ parameters within the BCemu framework. The best-fit configurations for ΛCDM (ΛDDM) are depicted in dark blue (green), while the coloured lines represent the variation of baryonic feedback strength in the ΛCDM best-fit case. Notably, we observe that modifying the stellar population parameter η_δ does not alter the shear signal. However, adjustments to the gas profile parameters log_10 M_c and θ_ ej affect the model predictions at scales l>300. Fig. <ref> and <ref> provide insights into how the baryonic and ΛDDM parameters impact the band powers. Both sets of parameters exhibit a similar qualitative effect, resulting in a suppression of the signal at small scales. However, the influence of baryons is relatively subtle compared to the effects of DDM, as indicated by the available priors. Also, note that baryonic effects primarily manifest on the smallest scales, with no discernible impact below l ∼ 300. In contrast, ΛDDM can influence the signal on larger scales, particularly when considering scenarios with large velocity kicks. Thus the analysis reveals that when considering KiDS data alone, the preferred values for baryonic feedback parameters (and in the ΛDDM case, the two-body DDM parameters) tend towards the lower end. 
However, when combined with CMB data, these parameters drive the cosmology towards higher S_8 values, resulting in an excessive boost to the WL signal, as discussed earlier. The baryonic parameters can partially counteract this effect in the ΛCDM scenario by decreasing the signal on small scales, which explains the extreme values observed for the gas profile parameters in the combined analysis with CMB. Nevertheless, the limited suppression capability of baryons is inadequate to fully accommodate the WL signal, leading to deteriorated fits for both WL and CMB. In contrast, in the ΛDDM scenario, the DDM parameters can more effectively suppress the band power signal, resulting in improved fits for both WL and CMB compared to the ΛCDM setup. This can be observed in Fig. <ref>, where the left panel depicts the combination of DDM parameters with log_10 M_c, and the right panel illustrates their combination with θ_ ej in the ΛDDM model. In both panels, it is evident that fitting the WL-only data favours a weak suppression of the power spectrum, indicated by the green contours located in the bottom left region. Conversely, when combining WL and CMB, strong suppression is required either from baryons or from DDM, as indicated by the purple contours. It is conceivable that having even stronger baryonic effects at our disposal could entirely replace the need for DDM and provide a satisfactory fit to both data sets individually, though the physical justification for such high baryonic feedback remains an open question. §.§ Cosmic Microwave Background In Fig. <ref> we show how two-body decays influence the predictions for CMB observables. We display temperature power spectra in the left and polarization spectra in the right panel and add their cross-correlations in the middle panel. In the top row, we show the CMB spectra for ΛCDM (black) and ΛDDM (green) best fits, while we plot the predictions for ΛCDM best-fit configuration extended by different combinations of Γ and v_k (dashed lines). In the smaller bottom panels, we show the difference between all these predictions with the ΛCDM best-fit cosmology. We can observe that two-body decays do not change the CMB signal significantly, meaning that the scatter of the CMB data is larger than the observable effects of two-body decays. § ADDITIONAL QUANTIFICATION OF THE TENSION We use two different metrics to assess the tension between CMB and WL observations within the assumed cosmological model. The first one is the standard Gaussian metric as introduced in Eq. (<ref>). However, this criterion assumes a Gaussian posterior on the parameter of interest and, importantly, is agnostic about how well the underlying data are being fitted. Therefore, we also employ Difference in maximum a posteriori Q_ DMAP criterion <cit.> to further analyze the tension. This criterion is defined as Q_ DMAP = χ^2_ min,12 - (χ^2_ min,1 + χ^2_ min,2), where χ^2_ min,12 is a minimal χ^2 value of combined analysis and χ^2_ min,1, χ^2_ min,2 minimal χ^2 values obtained for data set 1 and 2, respectively. We then express the tension between data sets 1 and 2 in terms of σ as √(| Q_ DMAP|). This criterion evaluates the incapability of a combined analysis to approach the goodness of fit of individual analyses, i.e. when either of the data sets is fitted separately, hinting at a tension between such observations. In this case, however, one is not informed about the source of the tension, i.e. which parameters acquire incompatible values. 
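A minimal helper for the Q_DMAP statistic defined above is sketched below; the χ² values are hypothetical and chosen only such that their difference corresponds roughly to the 3.4σ figure quoted in the next paragraph.

```python
import numpy as np

def q_dmap_sigma(chi2_min_combined, chi2_min_1, chi2_min_2):
    """Difference-in-maximum-a-posteriori tension, expressed in units of sigma."""
    q = chi2_min_combined - (chi2_min_1 + chi2_min_2)
    return np.sqrt(abs(q))

# hypothetical best-fit chi^2 values for two data sets and their combination
print(f"{q_dmap_sigma(1280.0, 1000.0, 268.4):.1f} sigma")   # ~3.4 sigma
```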
The Q_ DMAP criterion is not subject to the assumption of Gaussian posteriors but does not account for over-fitting, i.e. for the number of additional degrees of freedom the new model possesses. The last criterion we use to assess the efficiency of the ΛDDM model is the change in the Akaike information criterion (AIC) defined as Δ AIC = Δχ^2_ min + 2(N_Λ DDM - N_Λ CDM), where N_ℳ is the number of free parameters of model ℳ. For a new model to be preferred over ΛCDM, we require substantial evidence against ΛCDM on Jeffreys' scale <cit.>, i.e. p<exp(-Δ AIC/2) <cit.>, where p = 10^0.5. This implies that the difference in AIC (Δ AIC) between the new model and ΛCDM should satisfy the condition Δ AIC < Δ AIC_0 = -2.3. Applying the Q_ DMAP criterion defined in Eq. (<ref>) to ΛCDM, we obtain a mutual tension of 3.4σ between KiDS-1000 and Planck 2018. After including two-body decays, the combined analysis results in a notable improvement of the best fit by Δχ^2_ min = 7.9 compared to the combined ΛCDM analysis. The tension is reduced to 1.9σ, which is 1.5σ lower than in the ΛCDM case. Regarding the Akaike information criterion (Eq. <ref>), we obtain Δ AIC = 3.9 and Δ AIC = 4.0 in the WL and CMB scenarios, respectively, so adding two free parameters does not lead to an improved fit to these data. In the combined scenario, Δ AIC = -3.9 < Δ AIC_0. This means that the two additional parameters are efficient in the combined analysis. However, the combined ΛDDM fit is still worse by Δχ^2_ min = 3.8 compared to what the ΛDDM scenario yields when treating the two data sets separately. We believe that fitting the data sets separately provides a better indicator of whether the S_8 tension is resolved, a test which the ΛDDM model fails. For a summary of the combined MCMC results, we refer to Tab. <ref>. § MCMC RESULTS In this section, we provide more details about the parameters resulting from our MCMC analyses. In Tab. <ref>, we report our findings from the ΛCDM analysis <cit.>, while in Tab. <ref>, we summarize the actual ΛDDM model results. In both cases, we provide separate constraints from the WL, CMB and combined analyses. In the upper part of the tables, we show the posteriors of the parameters sampled by MCMC, quoting mean (best-fit) values with the corresponding upper and lower deviations. Dashed fields in the tables indicate that the parameters were not present in a given MCMC setup, unconst is used when parameter constraints could not be obtained from the MCMC, and no uncertainties are stated whenever a value was kept constant throughout the inference.
http://arxiv.org/abs/2307.00563v1
20230702131000
Casimir free energy for massive scalars: a comparative study of various approaches
[ "M. Sasanpour", "S. S. Gousheh" ]
hep-th
[ "hep-th" ]
Casimir free energy for massive scalars: a comparative study of various approaches M. Sasanpour[m_sasanpour@sbu.ac.ir] and S. S. Gousheh[ss-gousheh@sbu.ac.ir] Department of Physics, Shahid Beheshti University, Evin, Tehran, 1983969411, Iran August 1, 2023 Abstract: We compute the Casimir thermodynamic quantities for a massive real scalar field between two parallel plates with the Dirichlet boundary conditions, using three different general approaches and present explicit solutions for each. The Casimir thermodynamic quantities include the Casimir Helmholtz free energy, pressure, energy, and entropy. The three general approaches that we use are based on the fundamental definition of Casimir thermodynamic quantities, the analytic continuation method, and the zero temperature subtraction method. Within the analytic continuation approach, we use two distinct methods which are based on the utilization of the zeta function and the Schlömilch summation formula. We include the renormalized versions of the latter two approaches as well, whereas the first approach does not require one. Within each general approach, we obtain the same results in a few different ways to ascertain the selected cancellations of infinities have been done correctly. We show that, as expected, the results based on the zeta function and the Schlömilch summation formula are equivalent. We then do a comparative study of the three different general approaches and their results and show that they are in principle not equivalent to each other, and they yield equivalent results only in the massless case. In particular, we show that the Casimir energy calculated only by the first approach has all three properties of going to zero as the temperature, mass of the field or the distance between the plates increases. Moreover, we show that in this approach the Casimir entropy reaches a positive constant in the high temperature limit, which can explain the linear term in the Casimir free energy. Keywords: Casimir effects, finite temperature, massive scalar field, the generalized zeta function, the Schlömilch summation formula, the fundamental definition. § INTRODUCTION The Casimir effect, predicted by Hendrik Casimir in 1948 <cit.>, is a direct consequence of the zero-point energy of the quantum fields and has played an important role in various branches of physics such as particle physics <cit.>, condensed matter and laser physics <cit.>, nanotechnology <cit.>, string theory <cit.>, and cosmology <cit.>. This effect appears when a system is subject to nontrivial boundary conditions, background fields such as solitons, or nontrivial space-time backgrounds. In the experimental aspect, Sparnaay was the first to attempt to observe the Casimir effect <cit.>, but Lamoreaux et al. <cit.> were the first to measure the Casimir force with acceptable precision. For a comprehensive review, see for example <cit.>. In this paper, we explore the differences between three of the commonly used general approaches for calculating the finite temperature Casimir effects for the bosonic case, as has been done for the fermionic case in <cit.>. Our reference general approach is based on the fundamental definition of the Casimir thermodynamic quantities which for the Casimir Helmholtz free energy, for example, is the difference between the infinite vacuum Helmholtz free energies of systems subject to the constraints and the corresponding ones that are free from them, both being at the same temperature. 
We shall henceforth refer to this as the fundamental approach. The second approach is based on the analytic continuation methods, for which we include the zeta function method and the Schlömilch summation formula method, as two distinct representatives. The third approach is based on the zero temperature subtraction method. We also include the renormalized versions of the latter three methods, and shall refer to them collectively as the zeta function approach (ZFA), the Schlömilch formula approach (SFA), and the zero temperature subtraction approach (ZTSA), respectively. As mentioned above, both ZFA and SFA are representatives of the analytic continuation approach. As is well known, the result of analytic continuations is unique, and one of the questions that we want to address here, similar to the fermionic case <cit.>, is whether this unique result is the physically acceptable one that we seek for the Casimir thermodynamic quantities of the bosonic case. In order to be concrete, we concentrate on an illustrative example for the bosonic case. Our choice is a massive real scalar field confined between two parallel plates with the Dirichlet boundary condition. In this paper, we present calculations for the Casimir thermodynamic quantities within each of the general approaches mentioned above, and present their results in explicit forms. Moreover, to ascertain the validity of our results, we present or outline a few different ways of obtaining the same results within each general approach. We then do a comparative study of the three general approaches and their results. Before we start with the computations, we briefly review the historical development of the finite temperature Casimir effects and the use of various approaches. The fundamental definition of the zero temperature Casimir energy, as stated by Casimir in 1948, is the difference between the zero point energies of the system with and without the constraints. Finite temperature Casimir effect was first introduced by Lifshitz <cit.> in 1956, who calculated the attractive force between two parallel dielectric plates at finite temperature, by introducing fluctuating electromagnetic field. At high temperatures, the Casimir pressure was found to be proportional to the temperature. This term was subsequently denoted as the classical term[It was named the classical term, since it did not have any factors of ħ  <cit.>. In this paper we present an alternative justification for this name.]. Later on, Mehra <cit.> in 1967, used the Helmholtz free energy, which we shall henceforth refer to simply as the free energy, to calculate the thermal correction to the zero temperature Casimir pressure for a conducting cubic cavity. In that paper, the Casimir pressure was calculated as the difference between the pressure inside and outside of the cube, both being at the same temperature. His results also included the classical term at high temperatures. The next major work on thermal corrections is due to Brown and Maclay <cit.> in 1969, who calculated the electromagnetic stress-energy tensor between two conducting parallel plates. Using the image-source construction, they obtained the components of the tensor as thermodynamic variables, without any divergent terms. However, for the first time, the final results for both the Casimir pressure and energy density included terms due to the black-body radiation which are proportional to T^4. In a series of papers from 1976 to 1980, Dowker et al. 
<cit.> calculated the vacuum expectation value of the stress-energy tensor at finite temperature using the Green function formalism for a scalar field in curved space-time. They used three different renormalization schemes to obtain finite results. First, they subtracted the (0,0) temperature-spatial mode. Second, they used a `Casimir renormalization' as the difference between free energies before and after constructing the boundary, both being at the same temperature, to compute the heat kernel coefficients. This is analogous to the fundamental approach. Third, they subtracted the contribution of the free Green function at the zero temperature, which they referred to as `the standard flat space renormalization prescription'. This is equivalent to ZTSA. The high temperature limit of ⟨ T_00⟩ in their first and third work had terms proportional to T^4 and T, while Casimir free energy in their second work had terms proportional to T^3, T and T ln T. In 1978, Balian and Duplantier <cit.> defined and used the fundamental definition of the Casimir free energy for the electromagnetic field in a region bounded by thin perfect conductors with arbitrary smooth shapes. The high temperature limit of their results for parallel plates was proportional to T, while for the enclosures included an additional term proportional to Tln T. In 1983, Ambjørn and Wolfram <cit.> computed the Casimir energy and entropy for scalar and electromagnetic fields in a hypercuboidal region, using the generalized zeta function along with the reflection formula as an analytic continuation technique. They showed that the high temperature limit of the Casimir energy for the scalar field in a rectangular cavity in 3+1 dimensions includes terms proportional to T^4, T^2 and Tln T. In 1991, Kirsten <cit.> computed the heat kernel coefficients for the grand thermodynamic potential for a massive bosonic field in hypercuboids in n-dimensions subject to the Dirichlet boundary condition, using the zeta function, and in four dimensions obtained terms proportional to T^4, T^3, T^2, T and T ln T. In 1992, Elizalde and Romeo <cit.> calculated the high and low temperatures expansions of the free energy for a massive scalar field in hypercuboids of arbitrary dimensions, using multidimensional Epstein zeta functions. They indicated that, as stated in <cit.>, to calculate the Casimir free energy, one has to subtract the free energy of the unconstrained boson field, which would eliminate only the T^4 term at high temperatures. In 2008, Geyer et al. <cit.> suggested a renormalization procedure to calculate the finite temperature free energy, which would supplement the use of zeta function. They stated that the use of zeta function does not include all necessary subtractions, and the terms proportional to powers of T higher than the classical terms obtained in the high temperature limit from the heat kernel method, have to be subtracted. Subsequently, in 2009, Bordag et al. <cit.> presented a general picture of the renormalization for the Casimir free energy within the ZFA. They used the heat kernel coefficients for subtraction of the extra terms at low and high temperature limits. Most of the work mentioned above have used ZFA. In 2018, Mo and Jia <cit.> used SFA to calculate the thermal correction of the Casimir free energy for an electromagnetic field in a conducting rectangular box. 
To eliminate the high temperature divergences, they defined a renormalized free energy by subtracting the free black-body term along with any possible terms proportional to T^2 and T^3, with reference to Geyer's work <cit.>. They showed that after removing these terms, the high temperature limit of their expression for the Casimir free energy included terms proportional to T ln (T) and T. They further removed the T ln (T) term, reasoning that it is independent of the geometry of the boundary. As is apparent from the historical outline presented here for the bosonic cases, and also presented in <cit.> for the fermionic cases, the zeta function has been used extensively in the calculations of the Casimir effects to evaluate the sums over the regular spatial and Matsubara modes, and often as an analytical continuation technique. In some methods, the zeta function is used explicitly to calculate the Casimir thermodynamic quantities, e.g. in <cit.>, or implicitly, e.g. in Schlömilch's formula, or as a supplementary part. Examples of the latter include the heat kernel method and the Bogoliubov transformation, where the zeta function is used to evaluate the final summations. The spatial modes of the massive and massless scalar fields with the usual Dirichlet or Neumann boundary conditions are both regular, and the generalized zeta function can be used for summing over them. As mentioned before, in this paper we compute the Casimir effects for a massive real scalar field between two parallel plates at finite temperatures by three different general approaches, i.e., the fundamental approach, the ZTSA, and the analytic continuation approach, represented here by the ZFA and the SFA. Within each general approach, we display or outline multiple ways of computing the same physical quantities and use various methods and computational techniques to ascertain that the selected and delicate cancellations of divergent sums and integrals have been done correctly, all yielding equivalent results. These methods include the Poisson summation formula, the Abel-Plana formula, and the Principle of the Argument theorem. For all three approaches mentioned above, we calculate the results for both the massive and massless cases, and show that the massless limits of the massive cases always coincide with the massless cases. First we use the fundamental approach to calculate all of the Casimir thermodynamic quantities, and show that the Casimir free energy and pressure decrease linearly with T at high temperatures, while the Casimir energy goes to zero and the Casimir entropy goes to a positive constant. This linear temperature term can be looked upon as stemming from the nonzero value of the Casimir entropy according to the thermodynamic relation F_(T,L) =E_(T,L) - TS_(T,L), which is a reason for calling it the classical term. We also show that at any temperature all of the Casimir thermodynamic quantities go to zero in the large mass and plate separation limits, as expected. Indeed, in this approach, the subtraction of the thermodynamic quantities of the constrained and unconstrained systems at the same temperature yields the correct extensive results, including the mentioned limits, without having to label any terms in the resulting expressions for the Casimir quantities as unphysical and subsequently removing them by hand. Next we calculate the Casimir thermodynamic quantities using the ZFA and the SFA, both belonging to the category of the analytic continuation approach, and show that, as expected, their results are equivalent.
Finally we use the ZTSA. We then show that the results obtained by the three different general approaches are in principle not equivalent. The results of ZFA, SFA, and ZTSA contain extra nonpolynomial terms in variables T and m, as compared to those of the fundamental approach. To calculate the renormalized versions of ZFA, SFA, and ZTSA, we first obtain the high temperature limits of these extra nonpolynomial terms, both directly and by the heat kernel method up to and including the black-body term T^d in d space-time dimensions. The expansion of the results for ZFA and SFA yield terms proportional to T^4, T^3, T^2, Tln (T), T and ln (T), and for ZTSA yield T^4, T^2, T and ln (T). As we shall show, all even powers of T, including ln (T), are due the thermal free energy of the free case which has not been subtracted, and the odd powers of T, except for the classical term, are due to a nonextensive term generated by the analytic continuation. We then subtract terms with powers of T greater or equal to two, in accordance with the renormalization programs introduced<cit.>.The remaining linear term is the correct classical term, while the remaining Tln (T) and ln (T) terms are unphysical. Therefore the differences between the results of ZFA, ZTSA and the fundamental approach for the massive case are not resolved by the renormalization programs of the former two. On the other hand, as we shall show, the extra terms in the massless case are simple polynomials and can be removed by the renormalization programs introduced <cit.>, or cancel out in the piston method for the Casimir pressure <cit.>. Hence, for the massless case, the results of the fundamental approach, the ZFA (supplemented with the piston approach or its renormalized version), and the renormalized version of ZTSA are all equivalent. We like to emphasize that the fundamental approach does not require any supplementary renormalization program. The outline of the paper is as follows. In Sec. <ref>, we present two forms for the free energy of a real scalar field at finite temperature, which is subject to the Dirichlet boundary conditions at two plates but is otherwise free, starting with the path integral formalism. In Sec. <ref>, we calculate the Casimir free energy for a massless scalar field using the fundamental approach and show that the Casimir free energy and pressure decrease linearly at high temperatures and go to zero at the large plate separations. In Sec. <ref>, we calculate the Casimir free energy and pressure of a massless scalar field using the ZFA, the SFA, and the ZTSA, obtaining identical results for the first two. These two have extra T^4 and T^3 terms, while the ZTSA only includes the extra T^4 term. We then show how the renormalization program subtracts these extra terms, yielding the correct results, based on the fundamental approach. In Sec. <ref>, we consider a massive scalar field as the simplest nontrivial example, and calculate the Casimir free energy, pressure, energy, and entropy using the fundamental approach. We show that, as expected, they all go to zero in the large mass or large plate separation limits. More importantly, we show that the Casimir free energy and pressure decrease linearly with T at high temperatures, while the Casimir energy goes to zero and the Casimir entropy goes to a positive constant. In Sec. <ref>, we calculate the Casimir free energy and pressure for the same massive scalar problem as in Sec. 
<ref>, using the other two general approaches, i.e., the analytic continuation approach, represented by the ZFA and the SFA, and the ZTSA, including their renormalized versions. We show that, as expected, the results of the SFA, and the ZFA are equivalent. However, we show that none of the four different sets of results that we obtain in Sec. <ref> is equivalent to that based on the fundamental approach. In particular we show that the contain different nonpolynomial functions of m and T, which cannot be removed, even in the high temperature limit, by the renormalization programs thus far devised. As a side note, we present the condition under which the piston method would yield the correct Casimir pressure. Finally, in Sec. <ref>, we present our conclusions. § THE HELMHOLTZ FREE ENERGY Historically, the first and most commonly-used approach to thermal field theory is the imaginary-time formalism. This approach has its roots in the work of Felix Bloch in 1932, who noticed the analogy between the inverse temperature and imaginary-time <cit.>, which led to the so-called temperature Green functions with purely imaginary-time arguments. In 1955, Matsubara presented the first systematic approach to formulate quantum field theory at finite temperature by the imaginary-time formalism, using the Wick rotation <cit.>. The discrete frequencies in this formalism are known as the Matsubara frequencies. In 1957, Ezawa et al. extended Matsubara's work to the relativistic quantum field theory <cit.>. They discovered the periodicity (anti-periodicity) conditions for the Green function of boson (fermion) fields, the generalization of which became known as the KMS (Kubo <cit.> (1957), Martin and Schwinger <cit.> (1959)) condition. In the 1960s, Schwinger <cit.>, Keldysh <cit.>, and others <cit.> developed the real time formalism for the finite temperature field theory. The latest development of this formalism was presented by Takahashi and Umezawa <cit.>, based on an operator formulation of the field theory at finite temperature, which is called thermofield dynamics. Since then, many subjects in finite temperature field theory, e.g., thermal Ward-Takahashi relations, KMS relations, and renormalization procedure, have been studied and are reported in, for example, <cit.>. In this paper, we use the Matsubara formalism to study the Casimir effect for a free real scalar field confined between two parallel plates at finite temperature. In this formalism, a Euclidean field theory is obtained by a Wick rotation on the time coordinate, t → - i τ, such that the Euclidean time τ is confined to the interval τ∈ [0 , β], where β=(kT)^-1  <cit.>. The partition function in the path integral representation becomes: Z = ∫_[ ϕ (β , r⃗ ) = ϕ (0 , r⃗) ] Dϕexp( - ∫_0^β dτ∫d^3 xℒ_E ) . For a free scalar field, this expression simplifies to Z = [( - ∂ _ E^2 + m^2/μ^2)^- 1/2], where μ is an arbitrary mass scale introduced for dimensional reasons. It should not appear in acceptable final expressions for physically measurable quantities. Using the partition function given by Eq. (<ref>), the free energy is obtained as F = - ln(Z)/β = T/2ln[ ( - ∂ _ E^2 + m^2 /μ^2) ] = T/2[ ln( P_E^2 + m^2 /μ^2) ]. The trace in Eq. (<ref>) indicates the summation over eigenvalues of Klein-Gordon operator in the momentum-space representation. Moreover, the modes of zero-component of momentum, i.e., the Matsubara frequencies, are discrete due to the KMS periodicity condition on the finite τ interval: ω_n_0 = 2 n_0 π/β, (n_0 =0, ± 1, ± 2, ± 3, ... ) . 
We impose the Dirichlet boundary condition at the plates, as follows Φ (x) | _(z = z_j) = 0. We consider the plates to be located at z =- L/2 and z =L/2, and obtain the following condition for the discrete spatial modes in z direction: f(k_n_1) := sin(k_n_1 L) = 0. Note that the modes for both the massive and massless cases are regular, i.e., equally spaced, and given by k_n_1 = n_1π/ L (n_1 = 1, 2, 3,....) . Using Eqs. (<ref>, <ref>, <ref>), the expression for the free energy becomes F_(T,L) = T A/2∫d^2 K_T/(2 π)^2∑_n_0 = - ∞^∞∑_n_1=1^∞ln[ ( 2n_0π/β)^2 + ω _n_1 , K_T^2/μ^2], where ω _n_1 , K_T = √(( n_1π/L )^2 + K_T^2 + m^2), and A denotes the area of the plates. We shall refer to this expression for free energy as the first form. A very commonly-used alternate expression for the first form, which has an embedded analytic continuation, is the following[This is obtained by replacing the logarithm using ln(A/μ^2)=-lim_s → 0∂/∂ s[1/μ^-2sΓ(s)∫_0^∞ dt e^-t A t^s-1], where we have assumed that A has mass dimension two, as in Eq. (<ref>). We like to emphasize that the above replacement includes an analytic continuation. To trace it, we note that the expression for the logarithm is obtained by first using the identity ln(A/μ^2)=-lim_s → 0∂/∂ s(A/μ^2)^-s. Next, A^-s has been replaced using the Euler integral representation of the Gamma function, i.e., Γ(s)=∫_0^∞ dt e^-t t^s-1. This integral is finite for s>0, while it admits an analytic continuation for s≤0.] F _(T,L)= - T A/2∫d^2 K_T/(2 π)^2∑_n_0 = - ∞^∞∑_n_1=1^∞lim_s → 0∂/∂ s∫_0^∞e^ - t [ ( 2n_0π/β)^2 + ω _n_1 , K_T^2 ]/μ^- 2 sΓ (s) t^1 - s dt . The integral over t, accompanied by the operation lim_s → 0∂/∂ s, embodies the analytic continuation. This is important since in this paper we intend to keep track of all analytic continuations. The result of this expression depends, in principle, on the order in which the sums and integrals are performed. However, if we do the integral over t last we obtain the analytic continuation of this expression, which is certainly finite and unique, regardless of the order of other sums and integral. If we do the integral over t at any step other than last and we are interested in the analytic continuation of this expression, we are going to need a supplementary analytic continuation at the end. Integrating over the transverse momenta, we obtain F _(T,L)=- T A/8 π∑_n_0 = - ∞^∞∑_n_1=1^∞lim_s → 0∂/∂ s∫_0^∞e^ - t [ ( 2n_0π/β)^2 + ω _n_1^2 ]/μ^- 2 sΓ (s) t^2 - s dt , where ω_n_1=√(( n_1π/L)^2 + m^2). If we now integrate over t, we obtain the form which is almost invariably used in the ZFA: F _(T,L)= -TA/8π∑_n_0 = - ∞^∞∑_n_1 = 1^∞lim_s → 0∂/∂ sμ^2 s/s-1[ ( 2n_0π/β)^2 + ω _n_1^2 ]^1 - s . On the other hand, one can perform the sum over the Matsubara frequencies in the original expression for the free energy given in Eq. (<ref>), using the Principle of the Argument theorem <cit.>, to obtain the usual form in statistical mechanics <cit.>: F_ = A/2∫d^2 K_T/( 2 π)^2∑_n_1=1^∞[ ω _n_1 , K_T + 2 T ln( 1 - e^ - βω _n_1 , K_T) ] , which we shall refer to as the second form of the free energy. This expression and the original first form given by Eq. (<ref>), do not contain any embedded analytic continuation. One advantage of this form is that the contribution of the zero temperature part is separated from the thermal correction part. 
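As a quick consistency check of the equivalence between the first and second forms, the single-mode identity quoted above can be verified numerically; since the Matsubara sum itself is divergent, the check is done on the convergent difference between two mode energies. The following is a minimal sketch (not part of the original derivation; natural units and illustrative parameter values):

import numpy as np

# Minimal numerical sketch: the bosonic Matsubara sum reproduces the
# single-mode content of the second form, omega/2 + T*ln(1 - exp(-beta*omega)).
# The sum itself is divergent, so we compare the convergent difference
# between two mode energies w1 and w2 at the same temperature T.
T = 0.7
beta = 1.0 / T
w1, w2 = 1.3, 2.9            # two arbitrary mode energies (illustrative values)
N = 200000                   # Matsubara cutoff; the summand falls off as 1/n^2

n = np.arange(1, N + 1)
wn2 = (2.0 * np.pi * n * T) ** 2
lhs = T * (np.log(w1**2 / w2**2)
           + 2.0 * np.sum(np.log((wn2 + w1**2) / (wn2 + w2**2))))

def f(w):
    # single-mode second form: w/2 + T*ln(1 - e^{-beta*w})
    return 0.5 * w + T * np.log(1.0 - np.exp(-beta * w))

rhs = 2.0 * (f(w1) - f(w2))
print(lhs, rhs)              # the two values agree up to the O(1/N) truncation error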
§ THE CASIMIR FREE ENERGY FOR A MASSLESS SCALAR FIELD In this section, we calculate the Casimir free energy, using its fundamental definition, for a free real scalar field between two parallel plates, separated by a distance L, with the Dirichlet boundary conditions. In Sec. <ref>, we generalize to the massive case, and verify that, as expected, its massless limit coincide with the results of this section. As mentioned above, the fundamental definition of F_ is the difference between the free energy of the system in the presence of nonperturbative conditions or constraints and the one with no constraints, both being at the same temperature T and having the same volume. The nonperturbative conditions or constraints include boundary conditions, background fields such as solitons, and nontrivial space-time backgrounds. For cases where the constraints are in the form of non-trivial boundary conditions, the free cases can be defined as the cases in which the boundaries have been placed at spatial infinities. For the latter cases, the fundamental definition can be written as F_(T,L) = F_(T,L) - F_(T,L) , where the dependence of F_ on L simply denotes the restriction of the volume of space considered. We expect this dependence to be linear for simple extensive thermodynamic quantities, such as F_(T,L). To calculate the free energy for a massless scalar, we use the first form presented in Eq. (<ref>), and obtain: F _(T,L)=-TA/8 π∑_n_0 = - ∞^∞∑_n_1 = 1^∞lim_s → 0∂/∂ s∫_0^∞e^ - t [ ( 2n_0π/β)^2 + ( n_1π/L)^2 ]/μ^- 2 sΓ (s) t^2-s dt . First, we use the Poisson summation formula[The Poisson summation formula (see, for example, <cit.>) for a continuous and bounded function f on ℝ can be expressed as ∑_n = - ∞^∞ f(n) =∑_m = - ∞^∞∫_- ∞^∞ dx f(x) e^- i 2 π m x . ] for the sum over Matsubara frequencies, as follows ∑_n_0 = -∞^∞ e^ - t (2n_0π/β)^2 = β/2√(π t) + β/√(π t)∑_n_0 = 1^∞ e^ - n_0^2 β^2/4 t , and we evaluate the integral over t, to obtain F_(T,L) = -A/16 √(π^3)lim_s → 0∂/∂ sμ^ 2 s/Γ (s)∑_n_1=1^∞{Γ(s - 3/2) (π n_1/L)^3 - 2s + . . 4 ∑_n_0=1^∞(2 π n_1/n_0 L β)^3/2 - s K_3/2-s (n_0 n_1 π/TL)}. Next, we evaluate lim_ s → 0∂ / ∂ s only for the second term of Eq. (<ref>) which is finite, and express the result in a form in which the zero temperature part is separated from the thermal correction part, as follows F_(T,L) = F_(0,L)+Δ F_(T,L), F_(0,L) = -A/16 √(π^3)lim_s → 0∂/∂ sΓ(s - 3/2)/μ^- 2 sΓ (s)∑_n_1=1^∞(n_1 π/L)^3 - 2s, Δ F_(T,L) = - A √(T^3 /2L^3)∑_n_0=1^∞∑_n_1=1^∞(n_1/n_0)^3/2 K_3/2(n_0 n_1 π/TL). On the other hand, the free energy of the unconstrained case, which is considered at the same temperature T and same volume V=AL, can be computed using any of the forms presented in Sec. <ref>. However, to ultimately use the fundamental definition, it is important to use the same form and same order of summations and integrations as used to calculate the free energy of the bounded configuration, which in this case is Eq. (<ref>). Hence, we use the following expression for the free case, F_(T,L) = - T A/8π∑_n_0 = - ∞^∞∫_0^∞L dk/πlim_s → 0∂/∂ s∫_0^∞e^ - t [(2n_0π/β)^2 + k^2 + m^2 ]/μ^- 2 sΓ (s) t^2 - s dt , with m=0 in this case. Performing the same procedure as for the bounded case, we obtain the free energy of the free case. 
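For completeness, we note that the Gaussian resummation used above follows from the Poisson formula quoted in the footnote, applied to f(x) = e^- t (2π x/β)^2; the elementary Fourier transform behind it (a standard step, spelled out here) is

∫_-∞^∞ dx e^- t (2π x/β)^2 e^- 2 π i m x = β/2√(π t) e^- m^2 β^2/4t ,

and summing over the integer m reproduces the resummed series used for both the bounded and the free case.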
Below, we again express the result in a form in which the zero temperature part is separated from the thermal correction part, as follows F_(T,L) = F_(0,L) + Δ F_(T,L) , F_(0,L) = -A/16 √(π^3)lim_s → 0∂/∂ sΓ(s - 3/2)/μ^- 2 sΓ (s)∫_0^∞(k' π/L)^3 - 2s dk' , Δ F_(T,L) = - A √(T^3 /2L^3)∑_n_0=1^∞∫_0^∞ dk' (k'/n_0)^3/2 K_3/2(n_0 k' π/TL) = -A L π^2 T^4/90 , where k' =Lk/π. As can be seen from Eq. (<ref>), F_(0,L), being the continuum version of F_(0,L) given in Eq. (<ref>), is also divergent, while the thermal correction term of the free case is finite. The last equality, i.e., Δ F_(T,L)=-ALT^4 π^2/90, which is easily obtained by first evaluating the integral and then evaluating the resulting finite sum over the Matsubara modes by ζ(4), shows that Δ F_(T,L) for the massless case is simply the black-body radiation term. Substituting our results for F_(T,L), given in Eq. (<ref>), and F_(T,L), given in Eq. (<ref>), into Eq. (<ref>), we obtain the following expression for F_(T,L) based on its fundamental definition, F_(T,L) = -A/16 lim_s → 0∂/∂ sΓ(s - 3/2) π^3/2-2s/μ^- 2 sΓ (s) L^3-2s[∑_n_1=1^∞(n_1)^3 - 2s-∫_0^∞(k')^3 - 2s dk' ]- A √(T^3 /2L^3)∑_n_0=1^∞1/√(n_0^3)[ ∑_n_1=1^∞√(n_1^3) K_3/2(n_0 n_1 π/TL) - ∫_0^∞ dk' √((k')^3) K_3/2(n_0 k' π/TL)] . Finally, using the Abel-Plana formula (see, for example, <cit.>)[The simplest form that is needed here is the following ∑_n = 1^∞ f(n) - ∫_a^∞ f(x) dx =1/2 i[∫_a^a - i∞ f(z) ( cot(π z)-i) dz - ∫_a^a + i∞ f(z) ( cot(π z)+i)dz], where 0<a<1. This indicates that we need to use a regularization here: We consider the free case to be a limiting form of the bounded case in which the distance between the plates, denoted by L_1, goes to infinity. Then a= L/(π L_1)→ 0, and the above expression simplifies to, lim_a → 0[∑_n = 1^∞ f(n) - ∫_a^∞ f(x) dx] =lim_a → 0 i ∫_0^∞f (a + it) - f (a - it)/e^2 π t - 1 dt . ] for the sum over the spatial modes and evaluating lim_ s → 0∂ / ∂ s[We have used lim_s → 0∂/∂ sf(s)/Γ (s) = f(0), when f(s) is an analytic function for s<1.], the divergent terms completely cancel and we obtain the free energy between the two plates. Below, we express the results in a form in which the zero temperature part is separated from the thermal correction part, as follows[If we start with Eq. (<ref>) for the massless case, i.e., ω_n_1=k_n_1= n_1 π /L, and use the Abel-Plana summation formula, the bounded and free cases both include F_(0,L)=A L/16 π^2 lim_s → 0[∂/∂ sμ^2s/s-1∫_0^∞ dk k^3-2s], which is now explicitly divergent. However, this makes no difference in the fundamental approach since, upon subtraction, they again cancel each other yielding the same result as given by Eq. (<ref>).] F_(T,L) = - A π^2/1440 L^3 - 2 A L T^4/π^2∑_n_0 = 1 ^∞∑_n_1 = 1^∞1/[n_0^2 +( 2 n_1 T L)^2 ]^2 . We can now compute the sum over n_0 to obtain, F_(T,L) = - A T/16 π L^2∑_n_1 = 1^∞1/n_1^3[ coth( 2 π n_1 T L ) + ( 2 π n_1 T L ) csch^2( 2 π n_1 T L ) ]. The zero temperature limits of Eqs. (<ref>, <ref>) yield the following well known result F_(0,L) =E_(0,L) =-π^2 A/(1440L^3). The high temperature limit is F_(T,L) ≃ - A ζ(3) T/(16 π L^2), which shows the classical behavior. One can also start with the second form of the free energy given by Eq. (<ref>) and use the Abel-Plana summation formula to obtain exactly the same expression for the Casimir free energy as given by Eq. (<ref>) (see Appendix <ref>). Another powerful method which can also be used to calculate the free energies, besides the Abel-Plana formula, is based on the Principle of the Argument theorem.
In Appendix <ref>, we use this method to calculate the Casimir free energy within the fundamental approach in four different ways, starting with Eqs. (<ref>, <ref>, <ref>), the results of which are identical to those displayed in Eqs. (<ref>, <ref>, <ref>). In Fig. (<ref>), the Casimir free energy is plotted as a function of temperature for various values of L. As can be seen from this figure, F_(T,L) is always negative, and goes to zero as L increases. Moreover, this shows that there is a classical term proportional to T for the massless scalar fields between two parallel plates as the temperature increases, which, as we shall show, also holds for the massive scalar fields. Note that the subtraction of the free case at the same temperature amounts to the complete cancellation of zero-temperature infinities and the black-body term, without the need for any extra renormalization program, with the classical term remaining as the leading high temperature term. Having obtained the Casimir free energy, one can easily calculate all other thermodynamic quantities such as the Casimir pressure, energy, and entropy. For example, the Casimir pressure is given by, P_ (T,L) = -1/A∂/∂ L F_(T,L) =- T/8 π L^3∑_n_1 = 1^∞1/n_1^3 [ coth( 2 π n_1 T L ) + ( 2 π n_1 T L ) csch^2( 2 π n_1 T L ) + ( 2 π n_1 T L )^2 coth( 2 π n_1 T L ) csch^2( 2 π n_1 T L ) ] . Moreover, one can calculate directly the Casimir pressure based on its fundamental definition, as given by <cit.>, which is the difference between the pressure in the regions between the two plates and outside the plates, both regions being at the same temperature. To this end, we consider two inner plates enclosed within two outer plates, as the distance between the latter two goes to infinity, and obtain the same result as given by Eq. (<ref>). By integrating the expression for the pressure over the distance between the two plates, at fixed temperature, the Casimir free energy can be calculated, yielding the same result as given by Eq. (<ref>), without any extra terms. In Fig. (<ref>), the Casimir pressure is plotted as a function of temperature for various values of L. As can be seen from this figure, P_(T,L) is always negative, and goes to zero as L increases. Furthermore, as can be shown from Eq. (<ref>) or inferred from Eq. (<ref>), in the high temperature limit the classical term is - ζ(3) T/(8 π L^3). § MASSLESS SCALAR FIELDS AND THE GENERALIZED ZETA FUNCTION In this section, we consider the zeta function approach (ZFA), which is commonly used for computing the Casimir free energy for a massless real scalar at finite temperature. To fully explore this approach, we consider four different ways of using the zeta function and show that they yield equivalent results. Moreover, we shall compute the Casimir free energy using the Schlömilch formulas approach (SFA) <cit.>, as the second representative of the analytic continuation approach, and show that its results are equivalent to those of the ZFA. Finally, we use the zero temperature subtraction approach (ZTSA) <cit.>, and show that its results are not equivalent to those of the ZFA and SFA. Moreover, as we shall show, none of these results are equivalent to the one obtained in the last section, based on the fundamental definition of the Casimir free energy, since they contain extra terms. The results of the analytic continuation approaches have two extra terms, i.e., T^3 and T^4 terms, while the ones of ZTSA have only an extra T^4 term.
We then illustrate how the renormalization procedure for these approaches in this trivial example yields the correct results, which we take to be those obtained using the fundamental approach. Moreover, we show that if we calculate the free energies of both the bounded and free cases using the zeta function and subtract them according to the fundamental approach, all extra terms cancel and we obtain the correct results. For the computation of the Casimir free energy using the ZFA, we use only the first form given by Eq. (<ref>), as expressed in Eq. (<ref>). For the massless case we have ω_n_1=k_n_1= n_1 π /L. To use the generalized Epstein zeta function to calculate the sums, we express the sum over n_0 modes as a sum over positive integers and a zero mode: F (T,L) = -T A/8 π lim_s → 0∂/∂ s μ^2 s/s-1[ ∑_n_1 = 1^∞( n_1^2 π^2 /L^2 )^1 - s + 2∑_n_0 = 1 ^∞∑_n_1 = 1 ^∞( 4 n_0^2 π^2 /β^2 + n_1^2 π^2 / L^2 )^1 - s] . In the first three methods, we use this form of the Helmholtz free energy. The first term can be easily calculated using the Riemann zeta function. Below we use the generalized zeta function in three different ways to compute the double summation in the second term. For our first method, we use the homogeneous generalized Epstein zeta function to do simultaneously the double summations for the second term in Eq. (<ref>). In this case, the analytic continuation is rendered by the reflection formula (see Appendix <ref>). Here, we present the final result as follows (see Appendix <ref>) F_(T,L) = F_(T,L)+ Δ F_(T,L) +A ζ (3)/4 π T^3 . We have denoted the Casimir free energy obtained by this method as F_ to distinguish it from the one obtained using the fundamental definition, which we have simply denoted by F_ and is given by Eq. (<ref>). As shown in this equation, F_ has two extra terms. The first extra term is Δ F_(T,L), which is the thermal correction term of the massless free case, i.e., the black-body term proportional to T^4, given by Eq. (<ref>). The last term is an extra L-independent T^3 term and does not contribute to the pressure. As shown in Appendix <ref>, this extra nonextensive term is precisely minus one half of the contribution of the zero spatial mode, which is disallowed by the Dirichlet boundary conditions. However, the zero temperature limit of the above expression gives the correct result F_(0,L)=E_(0,L) = - π^2 A/(1440L^3). For our second method, we use the inhomogeneous form of the Epstein zeta function to first sum over the Matsubara modes for the second term in Eq. (<ref>), obtaining (see Appendix <ref>) F_(T,L) = - A/16 √(π^3) lim_s → 0∂/∂ s Γ(s - 3/2)/μ^- 2 sΓ (s)∑_n_1 = 1^∞(n_1 π/L)^ 3 - 2s + A ζ (3)/4 π T^3 - A T^3/4π∑_n_0 = 1^∞[ coth( π n_0/2 T L) + ( π n_0/2 T L) csch^2( π n_0/2 T L) ]/n_0^3 . The first term on the right hand side has a sum over the spatial modes which is divergent. We can express this sum in terms of the zeta function ζ (2s -3) and use its analytic continuation[We have used: ζ(-3)=1/120.] to obtain, F_(T,L) = - A π^2/1440 L^3 + A ζ (3)/4 π T^3 - A T^3/4π∑_n_0 = 1^∞[ coth( π n_0/2 T L) + ( π n_0/2 T L) csch^2( π n_0/2 T L) ]/n_0^3 . This expression for F_ (T,L) is equivalent to Eq. (<ref>), once we use the expression for F_ (T,L) given by Eq. (<ref>). In this case, the black-body T^4 term does not appear explicitly in the final result, but it is embedded in the high temperature limit of the last term of Eq. (<ref>). For our third method, we use the inhomogeneous form of the Epstein zeta function to first sum over the spatial modes of the free energy given by Eq.
(<ref>), and obtain (see Appendix <ref>) F_ (T,L) = AT/8 πlim_s → 0∂/∂ sμ^2 s/Γ(s)∑_n_0 = 1^∞[Γ(s-1) ( 2 n_0 π/β)^2 - 2 s +. . LΓ(s - 3/2)/√(π)( 2 n_0 π/β)^3 - 2 s] + F_ (T,L), where F_(T,L) is given by Eq. (<ref>). The first and second terms on the right-hand side have sums over temperature modes which are divergent and can be expressed in terms of zeta function ζ(2s - 2) and ζ(2s - 3), respectively. Applying the analytic continuation of the zeta function embedded in its reflection formula[We have used: ζ(2s-2) Γ(s-1) = Γ(3/2-s) ζ(3 - 2s) π^2s - 5/2 ζ(2s-3) Γ(s-3/2) = Γ(2-s) ζ(4 - 2s) π^2s - 7/2 ], we obtain a finite result which is identical to Eq. (<ref>). For our fourth method, we use the homogeneous generalized zeta function to do simultaneously the summations. To do this case, we express the sum over the spatial modes of Eq. (<ref>) as one half of the difference between the sum over all integers and the zero mode to obtain the following form for the free energy: F (T,L) = T A/16 πlim_s → 0∂/∂ sμ^2 s/s-1∑_n_0 = -∞^∞[ ( 4 n_0^2 π^2 /β^2 )^1 - s - . . ∑_n_1 = -∞^∞( 4 n_0^2 π^2 /β^2 + n_1^2 π^2 / L^2 )^1 - s] . We use the generalized homogeneous zeta functions to do both the single-sum and simultaneously the double-sum terms in Eq. (<ref>). For both cases, the analytic continuation is rendered by the reflection formula (see Appendix <ref>). After computing the sums and simplifying, the expression that we obtain for F_(T,L) is identical to Eq. (<ref>), where the expression for F_ is given by Eq. (<ref>). Hence, we have shown that all four different ways of using the zeta function yield equivalent results. As stated above, we also obtain the Casimir free energy using the Schlömilch formula approach (SFA) <cit.>, as a second representative of the analytic continuation approach. In fact, this approach is used for obtaining only the thermal corrections of the Casimir effect, and the zero temperature part should be calculated separately using other methods, e.g., the ZFA. For the computation of the Casimir free energy in this approach, we use the second form of the free energy given by Eq. (<ref>). In the massless case, we have ω_n_1, K_T=√(K_T^2 +(n_1 π/L)^2) and use the generalized Schlömilch formulas to sum over the spatial modes for the thermal corrections part of the free energy to obtain (see Appendix <ref>) F_(T,L) = F_(T,L), where the expression for F_(T,L) is given by Eq. (<ref>), within which F_ is given by Eq. (<ref>). This is the expected results since both ZFA and SFA are members of the analytic continuation approach. Next, we calculate the Casimir free energy using the zero temperature subtraction approach (ZTSA) <cit.>. This approach is defined by F_(T,L) = F_ (T,L) - F_ (0,L) . Adding an subtracting Δ F_(T,L) and using definitions given by Eqs. (<ref>,<ref>), we can write the following alternative expression for F_(T,L) F_(T,L) = F_(T,L) + Δ F_(T,L). This equality holds for both the massless and massive cases. Using Eqs. (<ref>,<ref>) we have for the massless case F_(T,L)=F_(T,L) - A ζ(3)/4 π T^3=F_(T,L) - A ζ(3)/4 π T^3 We display the results of the three general approaches in Fig. (<ref>). As can be seen from this figure, the free energy obtained via the ZFA, SFA, or ZTSA decreases as T^4 at high temperatures, while the one obtained via the fundamental definition only decreases linearly. The temperature dependence of ZFA and SFA differs from that of ZTSA due to the T^3 term. In fact, this extra term makes F_(T,L) a nonextensive quantities. 
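To make the differences just described concrete, the three massless-case expressions can also be compared numerically. The following is a minimal sketch (not from the paper; free energies per unit plate area, natural units, illustrative values of T and L), using the closed coth/csch^2 form obtained in the previous section together with the extra terms identified above:

import numpy as np

zeta3 = 1.2020569031595943      # Riemann zeta(3)
L = 1.0

def F_cas(T, L, nmax=2000):
    # fundamental (Casimir) free energy per unit area: the coth/csch^2 series
    n = np.arange(1, nmax + 1)
    x = 2.0 * np.pi * n * T * L
    e = np.exp(-2.0 * x)        # overflow-safe coth(x) and csch^2(x)
    coth = (1.0 + e) / (1.0 - e)
    csch2 = 4.0 * e / (1.0 - e) ** 2
    return -T / (16.0 * np.pi * L**2) * np.sum((coth + x * csch2) / n**3)

def F_zfa(T, L):
    # ZFA/SFA: extra black-body (T^4) and nonextensive (T^3) terms
    return F_cas(T, L) - np.pi**2 * L * T**4 / 90.0 + zeta3 * T**3 / (4.0 * np.pi)

def F_ztsa(T, L):
    # ZTSA: only the black-body term is left over
    return F_cas(T, L) - np.pi**2 * L * T**4 / 90.0

for T in (0.1, 1.0, 5.0):
    print(T, F_cas(T, L), F_zfa(T, L), F_ztsa(T, L))

At low temperature the three expressions nearly coincide, while at high temperature the T^4 (and T^3) terms dominate the ZFA and ZTSA results, reproducing the behavior shown in the figure.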
One can now easily calculate all other thermodynamic quantities using the expressions obtained for the free energies by the ZFA, SFA, or ZTSA. For example, calculation of pressure, using first part of Eq. (<ref>), yields P_(T,L) = P_(T,L)=P_(T,L)= P_(T,L) +Δ P_(T), where an expression for P_(T,L) is given by Eq. (<ref>), and Δ P_(T) = ( π^2/90) T^4 is the thermal correction to the pressure of the free case. As shown in Eq. (<ref>), the Casimir pressures obtained using the analytic continuation approach is identical to the one obtained using the zero temperature subtraction approach, since the extra T^3 term in the former is L independent. However, none of these results are equivalent to the Casimir pressure obtained by the fundamental approach, since they all contain Δ P_(T), which is the black-body term in the massless case. In Fig. (<ref>), we compare the pressure obtained using the ZFA, the SFA, or the ZTSA, given by Eq. (<ref>), with the Casimir pressure obtained based on the fundamental definition given by Eq. (<ref>). As can be seen, the pressure obtained using the ZFA is negative, corresponding to attractive forces, at low temperatures and becomes positive, corresponding to repulsive forces, at high temperature, while the Casimir pressure is always negative and decreases as linearly at high temperatures. As mentioned in the Introduction, it has been recognized that the ZFA might yield additional unphysical terms, and renormalization programs have been devised to eliminate them. The most common program is to subtract the polynomials in T appearing in the large temperature limit, with exponents higher that one corresponding to the classical term <cit.>. These are usually calculated using the heat kernel coefficients. In the massless case, the mentioned polynomial includes T^3 and T^4 terms, while for the pressure there is only the T^4 term. In this case the renormalization program yields the correct results, based on the fundamental approach. Specifically, the removal of Δ F_(T,L) = - ( A L π^2/90) T^4 and (Aζ (3)/(4 π))T^3 from the expression for F_(T,L)=F_(T,L) in Eq. (<ref>), the removal of Δ F_(T,L) from the expression for F_(T,L) in Eq. (<ref>), and the removal of Δ P_(T) = ( π^2/90) T^4 from the expression for P_(T,L)=P_(T,L)=P_(T,L) in Eq. (<ref>), yield the correct the results. We like to emphasize that these extra unphysical terms appear in the results of the ZFA, the SFA, and the ZTSA for different reasons. In the two former cases, they are left out by the embedded analytic continuation, and in the latter case, the Δ F_(T, L) is left out by its definition. As we have shown, in the massless case, these extra terms are simple polynomials in T which can be easily removed by the renormalization programs that have been devised. In the next section, we solve the massive case using the fundamental approach, which subtracts the corresponding thermodynamic quantities of the free case from those of the bounded case. We find that the thermal corrections to the free case are no longer simple polynomials that can be removed by any renormalization programs thus far devised to supplement ZFA, SFA, and ZTSA. § THE CASIMIR FREE ENERGY FOR A MASSIVE SCALAR FIELD In this section, we calculate the Casimir free energy, using its fundamental definition as given by Eq. (<ref>), for a massive real scalar field confined between two parallel plates with the Dirichlet boundary conditions at finite temperature. 
Then, we calculate other Casimir thermodynamic quantities, including pressure, energy, and entropy, and show that all of them are finite and go to zero as the mass, or L increases. Moreover, by increasing temperature, the Casimir free energy and pressure decrease linearly, the Casimir entropy goes to a nonzero constant, and the Casimir energy goes to zero. In the next section, we compute the Casimir free energy using ZFA, SFA and ZTSA, and compare the results. We start with the first form of the free energy given by Eq. (<ref>), use the Poisson summation formula on the Matsubara frequencies given by Eq. (<ref>), evaluate the integral over t, and obtain F_(T,L) = -A/ 4√(π^3)∑_n_1=1^∞lim_s → 0∂/∂ sμ^2 s/Γ (s)[Γ(s - 3/2)/4(ω_n_1)^3 - 2s+ . . ∑_n_0 = 1^∞(2ω_n_1/n_0 β)^3/2-s K_3/2-s(βn_0ω_n_1) ], where ω_n_1=√((n_1 π/L)^2 + m^2). The spatial modes k_n_1 are the roots of f(k_n_1) in Eq. (<ref>), which are regular, i.e., they are equally spaced due to the Dirichlet boundary conditions, contrary to the massive fermionic case <cit.>. To evaluate the sum over the spatial modes, we use the Principle of the Argument theorem and after simplifying (see Appendix <ref>), we can express the free energy of the bounded region as F_(T,L) = - A /4√(π^5)lim_s → 0∂/∂ sμ^2 s/Γ (s){ L Γ(s - 1/2) / 2∫_ 0^∞p^2 - 2sω(p) dp -. . ∑_n_0 = 1^∞(2T/n_0 )^1/2 - sL ∫_0^∞ p^3/2 - s J_1/2 - s(n_0 β p) ω(p) dp+ ∫_0^∞ln( 1 - e^ - 2 L ω(p))×. . [Γ(s - 1/2) /2p^2 - 2s - ∑_n_0 = 1^∞(2T/n_0)^1/2 - s p^3/2 - s J_1/2 - s(n_0 β p) ] dp } , where ω(p)=√(p^2 + m^2). Only the first term of the above expression contains a divergent integral. Therefore, for the other terms, which include the logarithm function and the Bessel function, we evaluate lim_ s → 0∂ / ∂ s, and after simplifying we obtain F_(T,L) = -AL/ 8√(π^5)lim_s → 0∂/∂ s[Γ(s - 1/2) /μ^-2 sΓ (s)∫_ 0^∞p^2 - 2sω(p) dp ]- AL m^2/π^2{T^2/2× ∑_n_0 = 1^∞K_2 (n_0 β m)/ n_0^2 + 1/8 L^2∑_n_1 = 1^∞K_2 (2 n_1 m L)/ n_1^2 + T^2 ∑_n_0 = 1^∞∑_n_1 = 1^∞K_2 (β m ω_n_0, n_1)/(ω_n_0, n_1)^2}, where ω_n_0, n_1 =√(n_0^2+(2 n_1 TL)^2). Since we are going to use the fundamental definition of the Casimir free energy, we also need to calculate the free energy of the free massive case at finite temperature. To this end, we start with the first form of the free energy as given by Eq. (<ref>) and follow the same procedure as in the bounded case. That is, we use the Poisson summation on the Matsubara frequencies, evaluate the integral over t, and evaluate lim_ s → 0∂ / ∂ s for the finite parts. Then, we can express the free energy of the free case as a zero temperature part and a finite temperature correction part as follows F_(T,L) = F_(0,L) +Δ F_(T,L) F_(0,L) = -AL/ 8√(π^5)lim_s → 0∂/∂ s[ Γ(s - 1/2)/μ^-2 sΓ (s)∫_0^∞ k^2 [ω(k)]^1 - 2s dk], Δ F_(T,L) = - A L T^2 m^2/2 π^2∑_n_0 = 1^∞K_2 (n_0 β m)/ n_0^2 , where ω(k)=√(k^2 + m^2). The first two terms of F_(T,L), given by Eq. (<ref>), are equivalent to the two terms of F_(T,L), given by Eq. (<ref>). The second terms are actually identical, while the first terms contain equivalent divergent integrals, which we compute using dimensional regularization, with fixed s, obtaining F _(0,L) = - AL/16√(π^5)lim_s → 0∂/∂ sμ^2s/Γ (s)[m^4-2sΓ(3/2-s)Γ(s-2) ] = - AL/16√(π^5)lim_s → 0∂/∂ s m^4/Γ (s)[√(π)/4s -√(π)/8( 4 ln(m/μ) -3)] = -A L m^4/(128 π^2)[ 3 - 4 ln(m/μ)] . Notice how the divergent term proportional to s^-1 which appears in the second line is eliminated. 
This result is finite due to the analytic continuation embedded in the expression that we have used for the first form of the free energy Eq. (<ref>), as mentioned in Sec. <ref>. Now, using the fundamental definition, as expressed in Eq. (<ref>), these four terms cancel each other upon subtraction, and we obtain the following expression for the Casimir free energy for a massive scalar field confined between two plates F_(T,L) = - AL m^2/π^2∑_n_1 = 1^∞{K_2(2 n_1 m L)/8 L^2 n_1^2 + T^2∑_n_0=1^∞K_2(m β√(n_0^2+(2 n_1 TL)^2))/[n_0^2+(2 n_1 TL)^2]} . The zero temperature and finite temperature correction parts, i.e., F_(0,L) and Δ F_(T,L), are associated with the two terms in Eq. (<ref>), respectively. This expression is our main result for the Casimir free energy, which is obtained by its fundamental definition. One advantage of the fundamental approach is that the free parts cancel out, whether they are finite or not. The high temperature limits of our results for F_(T,L), F_(T,L), and F_(T,L) are important, particularly when we compare them to the analogous results obtained by the other approaches in the next section. Below we display the results, including the massless limit of the latter, F_(T,L) ≃ - ( A L π^2/90 ) T^4 + ( A L m^2 /24 ) T^2 + A L m^4/32 π^2 ln(4 π T/m)- [A/4∑_n_1=1^∞√(m^3/L π^3 n_1^3) K_3/2(2 n_1 m L) ]T, F_(T,L) ≃ - ( A L π^2/90 ) T^4 + ( A L m^2 /24 ) T^2 + A L m^4/32 π^2 ln(4 π T/m), F_(T,L) ≃ - [AL/4∑_n_1 = 1^∞(m/π L n_1)^3/2 K_3/2(2 n_1 m L) ] T, which reduces to - A ζ(3) T/(16 π L^2) in the massless limit. Notice that in the massless limit, the coefficients of the T^2 and ln T terms go to zero, while only the T^4 and T terms remain. Below we outline a few alternative derivations of our main result, i.e., F_(T,L) given by Eq. (<ref>), with details presented in appendices. As mentioned above, the spatial modes for the massive scalar field are also regular, and hence the sum over n_1 of the bounded case given by Eq. (<ref>) can also be calculated using the generalized Abel-Plana summation formula, as an alternative method. After subtracting the free case, we obtain an expression for F_(T ,L) which is identical to the above expression (see Appendix <ref>). Moreover, we can start with the same first form of the free energy given by Eq. (<ref>) as used in the computations by the ZFA, and use the Abel-Plana summation formula for both sums over n_0 and n_1 modes. After simplifying, we obtain exactly the same expression given by Eq. (<ref>) (see Appendix <ref>). One can easily show that using the second form of the free energy given by Eq. (<ref>), one obtains exactly the same expression as in Eq. (<ref>). We show the details of this computation, in which we utilize dimensional regularization, in Appendix <ref>. As we have shown, the free energies of both the bounded and free cases contain F_(0,L), which is in principle divergent. Its value, obtained using the second form in Appendix <ref>, is proportional to Γ(-2), which is divergent. This is due to the fact that the second form given by Eq. (<ref>), similar to the expression of the first form given by Eq. (<ref>) and in contrast to the expression of the first form given by Eq. (<ref>), does not have an embedded analytic continuation. One of the advantages of the fundamental definition is that the F_(0,L) terms cancel upon subtraction of F_(T,L) from F_(T,L), whether they are infinite or have been rendered finite by analytic continuations.
The other advantage is that both F_(T,L) and F_(T,L) contain Δ F_(T,L) which also cancel upon subtraction, regardless of whether it is a simple polynomial of T or not. When using the fundamental approach, we have implicitly assumed that the contributions to the Casimir free energy coming from the regions outside of the bounded region cancel with the corresponding contributions of the free case. We use the Boyer method to ascertain this cancellation in Appendix <ref>. In Fig. (<ref>), we plot the Casimir free energy of a massive scalar field for various values of mass. As can be seen, the Casimir free energy goes to zero rapidly as mass of the scalar field increases and decreases linearly as the temperature increases with a slope that depends on the mass, in accordance with Eq. (<ref>). As can be seen directly from Eq. (<ref>), the Casimir free energy goes to zero rapidly as L increases, as well. Moreover, as can be seen from Fig. (<ref>), and can be shown easily from Eq. (<ref>), the massless limit of our result for the massive case coincides exactly with the massless case given by Eq. (<ref>). The zero temperature limit of F_(T,L), given by Eq. (<ref>), yields the following well known result, F_(0,L) =E_(0,L) = - A m^2/8 π^2 L∑_n_1=1^∞K_2(2 n_1 m L)/n_1^2 . Now, one can obtain other thermodynamic quantities including, the Casimir pressure, Casimir energy, and Casimir entropy from the expression we have obtained for the Casimir free energy in Eq. (<ref>). We calculate the Casimir pressure for a massive scalar field, in analogy with the massless case shown in Eq. (<ref>), and obtain, P_(T,L) = - m^2/π^2 ∑_n_1 = 1^∞{1 /8 L^2n_1^2[ 3 K_2(2 n_1 m L) + (2 n_1 m L) K_1(2 n_1 m L)] + ∑_n_0=1^∞[m T ω_n_0, n_1(2 n_1 L)^2 K_1(m βω_n_0, n_1) + (12 n_1^2 T^2 L^2 -n_0^2) K_2(m βω_n_0, n_1)]/β^2ω_n_0, n_1^4} , where ω_n_0, n_1 = √(n_0^2+(2 n_1 TL)^2). The zero temperature and finite temperature correction parts, i.e., P_(0,L) and Δ P_(T,L), are associated with the two terms in Eq. (<ref>), respectively. We plot P_(T,L) for various values of mass in Fig. (<ref>). As can be seen, the Casimir pressure also goes to zero rapidly as the mass of scalar field increases and decreases linearly at high temperature. Moreover, as can be seen from Fig. (<ref>), and can be shown easily from Eq. (<ref>), the massless limit of our result for the massive case coincides exactly with the massless case given by Eq. (<ref>). The Casimir energy can be calculated using either of the following two expressions, E_(T,L) = E_(T,L) - E_(T,L)=∂/∂β[β F_(T,L)]. The first expression is its fundamental definition. We use the second expression to obtain, E_(T,L) = - A m^2 L/π^2∑_n_1 = 1^∞{K_2(2 n_1 m L)/8 L^2 n_1^2 +T^2∑_n_0=1^∞1/ω_n_0, n_1^2× [K_2(m βω_n_0, n_1)- m β n_0^2/ω_n_0, n_1 K_3( β m ω_n_0, n_1)] } . Finally, we calculate the Casimir entropy and obtain, S_(T,L) = -∂/∂ T[F_(T,L)] =A /π^2∑_n_0 = 1^∞∑_n_1 = 1^∞L m^3 n_0^2 /ω_n_0, n_1^3 K_3( β m ω_n_0, n_1). Equations (<ref>), (<ref>), and (<ref>) lead to the following expected relation F_(T,L) = E_(T,L) - TS_(T,L). The high temperature limit of E_(T,L) is zero and those of S_(T,L) for the massive and massless cases are S_(T,L) AL/4∑_n_1 = 1^∞(m/π L n_1)^3/2 K_3/2(2 n_1 m L) A ζ(3)/16 π L^2. That is, the Casimir entropy goes to a nonzero positive constant at high temperatures[This is in contrast to the fermionic case, where the Casimir entropy and free energy also go to zero <cit.>.]. Hence, Eqs. 
(<ref>) and (<ref>) show that the appearance of the classical term, i.e., the linear temperature dependence of F_(T,L) at high T, shown in Eq. (<ref>), is due to the constant nonzero limit of S_(T,L). In figure Fig. (<ref>), we show all of these Casimir thermodynamic quantities. As can be seen in this figure, the Casimir free energy and pressure decrease linearly as temperature increases, while the Casimir energy goes to zero. On the other hand, in the high temperatures limit, the Casimir entropy goes to a positive constant, and hence the TS_(T,L) increases linearly in that limit. Moreover, one can easily show that all of the Casimir thermodynamic quantities go to zero as m or L increases. It is worth mentioning that the Casimir entropy in our model is positive for the entire range of T. In the fundamental approach used here, the Casimir entropy is defined as S_(T,L)= S_(T,L)-S_(T,L). Hence the interpretation of S_(T,L)>0 for entire range of T is simply that S_(T,L)> S_(T,L), which are incidentally both positive. § MASSIVE SCALAR FIELDS AND THE GENERALIZED ZETA FUNCTION The zeta function approach (ZFA) has been used to calculate the Casimir free energy for the massive scalar field between two plates and some solutions have been presented (see for example in <cit.>). In this section, we compute explicitly the final results for the Casimir free energy and Casimir pressure for this problem using the ZFA, the Schlömilch formulas approach (SFA) as the second representative of the analytic continuation approach, and also using the zero temperature subtraction approach (ZTSA). We then show that, similar to the massless case, only the results of ZFA and SFA are equivalent, while neither of these results are equivalent to the one obtained in Sec. <ref> based on the fundamental approach. Most importantly, we show that, contrary to the massless case, these discrepancies cannot be fixed completely by the renormalization program mentioned before, since the extra unphysical terms are non-polynomial functions of T. To use the ZFA, we start with the first form of the free energy given by Eq. (<ref>), and compute the sum over spatial modes using the inhomogeneous Epstein zeta function (see Appendix <ref>). The expression for the free energy becomes F_(T,L) = -T A/16 πlim_s → 0∂/∂ sμ^2 s/Γ (s)∑_n_0 = -∞^∞{ - Γ (s -1) (ω'_n_0)^2-2s + L Γ( s -3/2)/√(π)(ω'_n_0)^3 - 2s +4L/√(π)∑_n_1 =1^∞(ω'_n_0/n_1 L)^3/2 - s K_3/2 - s(2 n_1 L ω'_n_0) }, where ω'_n_0 = √((2 n_0 π/β)^2+ m^2). To explore the mechanism of removal of divergences from this point forward, it is useful to compare this expression with the analogous one that we have obtained for the massless case after using the inhomogeneous zeta function on the spatial modes, i.e., Eq. (<ref>). There are terms in both expressions which include the divergent sum over the Matsubara frequencies and are remnants from the use of the inhomogeneous zeta function on the spatial modes. As mentioned in Sec. <ref> for the massless case, we can obtain the analytic continuation of these divergent terms using a supplementary zeta function. So, here, we first present the sum over n_0 modes in terms of positive integers and a zero mode, and then use the inhomogeneous Epstein zeta function given by Eq. (<ref>). After computing this divergent sum for the first and second terms in the curly bracket in Eq. (<ref>), we obtain the following expression F_(T,L) = - A/8√(π^3)lim_s → 0∂/∂ sμ^2 s/Γ (s){ -Γ( s - 3/2) m^3 - 2s/4 + LΓ( s - 2)m^4 - 2s/4 √(π)+. . 
∑_j=1^∞[ L/√(π)(m2/j β)^2-s K_2-s(j m β) - (m2/j β)^3/2-s K_3/2-s(j m β) ] + . . 2TL ∑_n_0= - ∞^∞∑_n_1=1^∞(ω'_n_0/L n_1)^3/2-s K_3/2-s(2n_1 L ω'_n_0) }. As can be seen in this expression, the second term includes a divergent gamma function, i.e., Γ(s - 2), and is equivalent to the first term of the F_(T,L), given by Eq. (<ref>) after evaluating the integral over p: As shown in Eq. (<ref>), these terms are both equal to the analytic continuation of F_(0,L) which is equal to [-A L m^4/(128 π^2)] [ 3 - 4 ln(m/μ)]. This is the analytic continuation embedded in the first form of free energy Eq. (<ref>), and has rendered this term finite. Finally, evaluating lim_s → 0∂/∂ s, we can express the final result as follows F_(T,L) = A m^3/24 π- A L m^4/128 π^2[ 3 - 4 ln(m/μ) ] +Δ F_(T,L) + A ∑_j=1^∞(T m/2 π j)^3/2 K_3/2(j mβ)- A L T/4√(π^3)∑_n_0= - ∞^∞∑_n_1=1^∞(ω'_n_0/L n_1)^3/2 K_3/2(2n_1 L ω'_n_0) . The order of the terms presented above is the same as in Eq. (<ref>), i.e., the first and second terms are the finite temperature-independent terms mentioned above, Δ F_(T,L) is the thermal correction of the free case given by Eq. (<ref>), and the fourth term is an extra L-independent thermal correction term which is not in the Casimir free energy obtained in the last section. In fact, the first and fourth terms in F_(T,L), given by Eq. (<ref>), which are L-independent and do not contribute to the pressure, are precisely minus one half of the contribution of the zero spatial mode that is disallowed by the Dirichlet boundary conditions, and these terms make F_(T,L) a nonextensive thermodynamic quantity (see Appendix <ref>)[As discussed in Sec. <ref>, analogous superfluous term, i.e., A ζ (3)/4 π T^3, has appeared in the expression for F_(T,L) for the massless case given by Eq. (<ref>). In fact this term is precisely the massless limit of the two terms mentioned above.]. Oftentimes, the first two terms are discarded, with reasoning that they are temperature independent. However, note that the second one contributes to the pressure. Next, we express the sum over the n_0 modes of the last term by a sum over positive integers and a zero mode, and evaluate it using the Abel-Plana formula, as used for Eq. (<ref>). The result is F_(T,L) given in Eq. (<ref>). After simplifying, we can summarize the final result as follows F_(T,L) = F_(T,L) + Δ F_(T,L)- A L m^4/128 π^2[ 3 - 4 ln(m/μ) ] + A m^3/24 π +A∑_n_0=1^∞(m T/2 π n_0)^3/2 K_3/2(n_0 β m) , where Δ F_(T,L) is given by Eq. (<ref>). As stated above, we also obtain the Casimir free energy for a massive real scalar field confined between two plates using the Schlömilch formulas approach (SFA) <cit.>, as a second representative of the analytic continuation approach, similar to the massless case in Sec. <ref>. As mentioned before, we can use this approach for obtaining only the thermal corrections of the Casimir effect, while we calculate the zero temperature part separately using the Epstein inhomogeneous zeta function. To compute the Casimir free energy in this approach, we use the second form of the free energy given by Eq. (<ref>). In the massive case, we have ω_n_1, K_T = √((n_1 π/L)^2+K_T^2 +m^2) and use the generalized Schlömilch formulas to calculate sum over spatial modes of the thermal corrections part, to obtain (see Appendix <ref>) F_(T,L) = F_(T,L) , where the expression for F_(T,L) is given by Eq. (<ref>). 
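As an aside, it may be useful to record the elementary forms of the half-integer Bessel functions that appear throughout these massive-case expressions (standard identities):

K_1/2(x) = √(π/2x) e^-x , K_3/2(x) = √(π/2x) e^-x (1 + 1/x) , K_ν+1(x) = K_ν-1(x) + (2ν/x) K_ν(x) .

In particular, using K_3/2(x) ≈ √(π/2) x^-3/2 for small x, the coefficient of the classical term, AL/4∑_n_1 = 1^∞(m/π L n_1)^3/2 K_3/2(2 n_1 m L), reduces to A ζ(3)/(16 π L^2) in the massless limit, consistent with the massless result quoted earlier.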
As expected, the results obtained using the ZFA and the SFA, as two distinct methods within the analytic continuation approach, are completely equivalent. For our last approach, which is ZTSA, we note that no new computation is needed, since its definition holds for both massive and massless cases. That is, rewriting Eqs. (<ref>) and (<ref>), we have, F_(T,L) ≡ F_ (T,L) - F_ (0,L)= F_(T,L) +Δ F_(T,L). For the massive case, we simply use Eq. (<ref>) for F_(T,L), Eq. (<ref>) for F_(T,L), and Eq. (<ref>) for F_(T,L). This shows that, by definition, the results of ZTSA differs from those of the fundamental approach by Δ F_(T,L), which is zero only at zero temperature. Now we can compare the results of the analytic continuation approach, i.e., F_ and F_ given by Eqs. (<ref>) and (<ref>), with the fundamental approach, i.e., F_ given in Eq. (<ref>), and the zero temperature subtraction approach, i.e., F_ given by Eq. (<ref>). First we note that the sum of the first two terms of F_ is precisely F_. Hence, F_ and F_ contain three extra terms as compared to F_, and four extra terms as compared to F_. The first extra term in Eq. (<ref>) is Δ F_(T,L). The second extra term of is the analytic continuation of the divergent F_(0,L) term computed in Eq. (<ref>), which does contribute a T-independent term to the pressure. As mentioned above, the third and fourth extra terms in Eq. (<ref>) are equivalent to the contribution of the - F_^n_1=0(T)/2, which is disallowed by the Dirichlet boundary condition. An important feature of the Casimir thermodynamic quantities is their high temperature expansion. Below we display this expansion for F_(T,L), F_(T,L) - A/4[ ∑_n_1=1^∞√(m^3/L π^3 n_1^3) K_3/2(2 n_1 m L) ]T - ( A L π^2/90 ) T^4 +( A L m^2/24 ) T^2 +A L m^4/32 π^2ln( 2 T/m) +( A ζ (3)/4 π) T^3 -( A m^2/8 π) T ln(T/m). The first and second lines are the high temperature expansions of F_ and F_, respectively. Hence the sum of the fist two lines is the high temperature expansion of F_ and F_. That is F_(T,L) - A/4[ ∑_n_1=1^∞√(m^3/L π^3 n_1^3) K_3/2(2 n_1 m L) ]T - ( A L π^2/90 ) T^4 +( A L m^2/24 ) T^2 +A L m^4/32 π^2ln( 2 T/m) The two terms in last line of Eq. (<ref>) are the high temperature expansion of the last extra term of F_(T,L), given in Eq. (<ref>). Note that this expansion includes ln(T/m) and Tln(T/m), in addition to integer powers of T. As we have shown here for the case of scalars, and also for the fermionic case <cit.>, it has long been recognized that the use of the ZFA yields extra unphysical terms. To remedy this, Geyer et al. <cit.> defined a renormalization program in which the polynomial terms obtained using the heat kernel coefficients with powers greater or equal to two are subtracted. In their work on the bosonic cases, they emphasized that all of the mentioned terms are of quantum character and do not include the classical term which is proportional to the temperature. We now explore the results of this renormalization program. Below, we state the renormalization program as presented in reference <cit.>, F^= E_0^ + Δ_ F_0 - α_0( k_ T)^4 -α_1( k_ T)^3 - α_2( k_ T)^2 . We can identify the sum of the first two terms as F_(T,L). The heat kernel coefficients α_n depend on geometrical characteristics of the configuration. We calculate these coefficients in Appendix <ref>, and show that they are identical to those of the high temperature expansions of F_, as given by Eq. (<ref>). We also show how the divergent vacuum energy at zero temperature can be obtained by the heat kernel method. 
Therefore, based on this renormalization program, the physical Casimir free energy for a massive scalar field confined between two parallel plates, and its high temperature limit, obtained using zeta function are as follows F_^(T,L)= F_(T,L) + A L π^2/90 T^4 - A ζ(3)/4 π T^3 - A L m^2/24 T^2 - A/4[ ∑_n_1=1^∞√(m^3/L π^3 n_1^3) K_3/2(2 n_1 m L) ]T- A m^2/8 π T ln(T/m)+A L m^4/32 π^2ln( 2 T/m), where F_ is given by Eq. (<ref>). Since the results of SFA and ZFA are equivalent, we can also define F_^(T,L) = F_^(T,L), and, henceforth, only concentrate on the latter. One can analogously define a renormalized ZTSA free energy as follows F_^(T,L) = F_(T,L) + ( A L π^2/90 ) T^4 - ( A L m^2/24 ) T^2 - A/4[ ∑_n_1=1^∞√(m^3/L π^3 n_1^3) K_3/2(2 n_1 m L) ]T+A L m^4/32 π^2ln( 2 T/m), where F_ and F_ are given by Eqs. (<ref>, <ref>). To illustrate the differences between the five different expressions that we have obtained for the Casimir free energy and their high temperature limits, i.e., F_ given by Eqs. (<ref>) and (<ref>), F_ given by Eq. (<ref>) and Eq. (<ref>), F_ given by Eq. (<ref>) and (<ref>), F_^(T,L) given by Eq. (<ref>), and F_^(T,L) given by Eq. (<ref>), we plot them in Fig. (<ref>). As can be seen in this figure, none of these results are equivalent. Even the renormalized versions do not match F_. The free energies obtained via ZFA and the ZTSA decrease as T^4 at high temperatures due to the black body term, while the Casimir free energy decreases as T at high temperatures due to the classical term. The renormalized versions do not match F_ even at high temperatures: F_, besides the classical term, has extra Tln T and ln T terms, while F_ has an extra ln T term. Only the zero temperature limit of ZTSA matches that of F_. So far, we have illustrated that the conventional renormalization programs for the ZFA or ZTSA, based on subtracting powers of T^2 and higher in the high temperature expansions, do not in general yield the correct results. However, we can still utilize the facility of the zeta function method and use the fundamental approach as a new renormalization program. To do this, we need to calculate the free energy of the free massive case by applying the zeta function. Here, we emphasize that to use the fundamental approach, the procedures for the computation of the free energy in the bounded and free cases should be equivalent. Hence, we perform the same procedure as for the bounded case, i.e., F_(T,L) given by Eq. (<ref>), by assuming that in the free case there are plates located at ± L'/2, which we shall eventually take to infinity, F_^(T,L') = - A/8√(π^3)lim_s → 0∂/∂ sμ^2 s/Γ (s){ -Γ( s - 3/2) m^3 - 2s/4 + L'Γ( s - 2)m^4 - 2s/4 √(π)+. . ∑_j=1[ L'/√(π)(m2/j β)^2-s K_2-s(j m β) - (m2/j β)^3/2-s K_3/2-s(j m β) ] + . . 2TL' ∑_n_0= - ∞^∞∑_n_1=1^∞(ω'_n_0/L' n_1)^3/2-s K_3/2-s(2n_1 L' ω'_n_0) }. As can be seen, there are terms in the above expression which are proportional to the volume and, as mentioned in Sec. <ref>, in the fundamental approach the difference between the free energy of the bounded and free cases is considered at the same temperature T and same volume. So, we express the second and third terms of F_^(T,L') at volume V=AL, while the last terms goes to zero as L'→∞. In this limit, the result is lim_L'≫1F_^(T,L) = - A/8√(π^3)lim_s → 0∂/∂ sμ^2 s/Γ (s){ -Γ( s - 3/2) m^3 - 2s/4 + LΓ( s - 2)m^4 - 2s/4 √(π)+. . ∑_j=1[ L/√(π)(m2/j β)^2-s K_2-s(j m β) - (m2/j β)^3/2-s K_3/2-s(j m β) ] }. One can see that F_^(T,L), given by Eq. 
(<ref>), is equivalent to the first four terms of F_(T,L), given by Eq. (<ref>). This implies that F_(T,L)= F_^(T,L) - F_^(T,L), where we have denoted F_(T,L) by F_^(T,L) to emphasis that this is in accord with the fundamental definition. Note that the nonextensive terms, i.e., the first and the fourth terms, have also canceled out in this approach and the final result is extensive. This expression can be looked upon as the correct renormalization scheme, but is nothing more than an, albeit useful, expression for the fundamental definition. One can now obtain other thermodynamic quantities based on the expressions we have obtained for the free energy using the ZFA given in Eq. (<ref>), the ZTSA given in Eq. (<ref>), and their renormalized versions given in Eqs. (<ref>, <ref>). As an example, we calculate the pressure for a massive scalar field using the free energy obtained via the zeta function, in analogy with the massless case shown in Eq. (<ref>). We express the result in terms of P_(T,L), obtained by the fundamental approach and given in Eq. (<ref>), as follows P_(T,L) =P_(T,L) = P_(T,L)+ m^4/128 π^2[ 3 - 4 ln(m/μ) ]+Δ P_(T), Δ P_(T) = T^2 m^2/2 π^2∑_j = 1^∞K_2( j β m)/j^2 . As before, the second term is a constant term which is a remnant from the use of the zeta function, and the third term is the thermal correction to the pressure of the free case, which the zeta function fails to subtract. Next, we calculate the pressure using the free energy obtained via the ZTSA. We express the result in terms of P_(T,L) as follows, P_(T,L) = P_(T,L) +Δ P_(T) , where Δ P_(T) is given in Eq. (<ref>). Next, we calculate the pressure obtained via the renormalized zeta function, i.e., F_^(T,L), and the renormalized ZTSA, i.e., F_^(T,L), given by Eqs. (<ref>, <ref>). The results are, P_^(T,L) =P_^(T,L) = P_(T,L) - ( π^2/90 ) T^4 + ( m^2/24 ) T^2 P_^(T,L) = P_(T,L) - ( π^2/90 ) T^4 + ( m^2/24 ) T^2 . In Fig. (<ref>), we compare these results with the Casimir pressure obtained based on the fundamental approach, given by Eq. (<ref>). As can be seen, the pressure obtained using the ZFA and the ZTSA are negative at low temperatures and positive at high temperature, while the Casimir pressure is always negative and decreases linearly with increasing temperature. The differences between these results, besides the constant term present in P_(T,L) and its renormalized version, are due to the thermal correction of pressure of free case Δ P_(T) which is a non-polynomial function of T for the massive scalar field, as presented in Eq. (<ref>). The pressure obtained using the ZFA and the ZTSA all diverge as T^4 at high temperatures, and their renormalized versions as Tln (T/m) and ln (T/m). At T=0, only the ZTSA results match the P_(0,L). On a side note, we can now examine the applicability and limitations of the piston method, which can be used if one is interested only in the Casimir pressure. In this approach, the pressure on the bounded and unbounded sides of the piston are calculated and subtracted (see in <cit.>). The zeta function method is almost invariably used for this purpose. To trace the cancellations that occur in this subtraction, we first write Eq. (<ref>), with the labeling mentioned above, as follows P_^(T,L) = P_(T,L) + P_^(T), P_^(T) = m^4/128 π^2[ 3 - 4 ln(m/μ) ]+Δ P_(T), where Δ P_(T) is given by Eq. (<ref>). 
Now it is clear that if the zeta function is used to calculate the pressure of both the bounded and unbounded sides of the piston and the results are subtracted, the extra term cancels and one obtains the correct result, i.e., P_(T,L). In fact, one obtains the correct expression as long as the method used for the two regions is the same, whether it is the ZFA, SFA, ZTSA, or the Abel-Plana formula. § SUMMARY AND DISCUSSION In this paper, we have explored the implications and results of the fundamental definition of the Casimir free energy for a scalar field, and how they compare with the results based on two general approaches in common use, i.e., the analytic continuation approach, represented here by the zeta function approach (ZFA) and the Schlömilch formula approach (SFA), and the zero temperature subtraction approach (ZTSA). We have also included the renormalized versions of the latter two, as only the fundamental approach does not require one, since it has one built in. Here, we have concentrated on the Casimir effects for a real scalar field between two parallel plates, separated by a distance L, with Dirichlet boundary conditions. The fundamental definition of F_ is the difference between the free energy of the system in the presence of nonperturbative conditions or constraints, and the one with no constraints, which we have referred to as the free case, both being at the same temperature T and having the same volume. That is, F_(T,L)=F_(T,L)-F_(T,L). Our two main tools for the computation of F_ and F_ have been the Abel-Plana formula and the Principle of the Argument Theorem. As is well known, both F_(T,L) and F_(T,L) have zero temperature divergent parts and finite temperature correction parts which partially cancel upon subtraction, leaving F_(T,L) with both zero and finite temperature parts. We have found that F_(T,L) = F_(0,L) + Δ F_(T,L), computed using the Abel-Plana formula or the Principle of the Argument Theorem, is precisely extensive, i.e., proportional to the volume V=AL. In the analytic continuation approach this subtraction is to be rendered by the analytic continuation of F_(T,L). In the zero temperature subtraction approach only the zero temperature part of the free energy is subtracted, i.e., F_(T,L)=F_(T,L)- F_(0,L). To ensure that the delicate cancellation of infinities has been done correctly within each general approach, we have used or outlined several different methods leading to the same results within each approach throughout the paper. In Sec. <ref>, we have used the fundamental approach to compute the Casimir thermodynamic quantities for the massless case, including the Casimir free energy, pressure, energy, and entropy. In Sec. <ref>, we have used the fundamental approach to compute the Casimir thermodynamic quantities for the massive case, and shown that its results in the massless limit coincide with those of the massless case computed in Sec. <ref>. We have shown that, as expected, all of the Casimir thermodynamic quantities go to zero as the mass or L increases. The high temperature limit of F_(T,L) contains T^4, T^2, T, and ln(T) terms, out of which only the linear term remains after subtracting F_(T,L). In Fig. <ref>, we have displayed the Casimir thermodynamic quantities as a function of T, which shows that they do not change sign, and only S_ is positive. In the high temperature limit, E_→ 0, S_ approaches a positive constant, F_∼ -T, P_∼ -T.
The linear temperature dependence of the latter two is attributed to the classical term: What we have shown here explicitly is that this linear T-dependence at high T can be related to the behavior of S_ and E_, in accordance with the relation F_(T,L)=E_(T,L)-T S_(T,L). These results, and in particular E_(∞,L)= 0, are obtained due to the subtraction of the free case at the same temperature, which amounts to the complete cancellation of both the zero temperature and the thermal correction parts of the bounded case which are equivalent to those of the free case. We have also computed the Casimir thermodynamic quantities using the analytic continuation approach, represented here by ZFA and SFA, and also ZTSA. In Sec. <ref>, we have concentrated on the massless case, and in Sec. <ref> on the massive case, and shown that the results of the latter in massless limit coincides with those of the massless case computed in Sec. <ref>. We have shown that, as expected, the results of ZFA and SFA are always equivalent, but they differ from those of the fundamental approach. In particular F_(T,L) contains four extra terms as compared to F_(T,L): The analytic continuation of F_(0,L), Δ F_(T,L), and two extra L-independent terms, the sum of which is equivalent to F_^n_1=0(T), which is disallowed by the Dirichlet boundary conditions. It is interesting to note that the free energy of the free case computed using the zeta function, denoted by F_^(T,L), consists of the same four terms mentioned above, the last two of which make this energy nonextensive. This also shows that if we were to use the fundamental definition as a renormalization within the ZFA, we would obtain the correct results: F_(T,L)- F_^(T,L)=F_(T,L). However, the renormalization programs thus far devised rely on the high temperature expansion, which we describe below. The high temperature expansion of F_(T,L) includes that of F_(T,L) which is the classical term proportional to T, those of Δ F_(T,L) which consists of T^4, T^2, and ln(T) terms, and the two nonextensive terms which consist of T^3 and Tln(T) terms. As mentioned above, it has long been recognized that the zeta function approach produces extra unphysical terms. The renormalization programs that has been devised for this purpose is to find the high temperature expansion of F_(T,L), using the heat kernel method, and to subtract all terms T^n with n≥ 2. As we have shown, the heat kernel method can reproduce the coefficients of all terms in the high temperature expansion, including those of Tln(T) and ln(T), except for the linear term which we have obtained by a slight modification. As is apparent, the result of this program, denoted by F_^(T,L), is not equal to F_(T,L) even at high temperatures, since Tln(T) and ln(T) terms remain unsubtracted. Moreover the four extra unphysical terms of F_^(T,L) mentioned above are nonpolynomial functions of m and T for the massive case, which reduce to a polynomial with T^4 and T^3 terms in the massless limit. Hence the renormalization program devised works well only in the massless case. Finally, the results for the ZTSA, by definition, have only one extra term as compared to the fundamental approach since F_(T,L) = F_(T,L) + Δ F_(T,L). Once again, Δ F_(T,L) is a nonpolynomial function of m and T, which reduces to the black-body term T^4 in the massless limit. Hence the renormalization program works well only in the massless case. That is, in the massless case we have F_=F_^=F_^=F_^. However, as illustrated in Fig. 
(<ref>) for the massive case, the five expressions for the Casimir free energy, i.e., F_, F_ or F_, F_, F_^ or F_^, and F_^, are not equivalent at any temperature, except at T=0, where the ZTSA results are equal to F_(0,L). In particular, as T→∞, F_∼ T, F_, F_ and F_ ∼ T^4, and F_^, F_^ and F_^ ∼ Tln(T/m). These differences are also present in the Casimir pressure illustrated in Fig. (<ref>). equationsection § CALCULATION OF THE CASIMIR FREE ENERGY OF THE MASSLESS AND MASSIVE CASES USING THE ABEL-PLANA SUMMATION FORMULA In the first part of this appendix, we calculate the Casimir free energy for a massless real scalar using its fundamental definition, starting with the second form of the free energy given by Eq. (<ref>), and show that the final result is equivalent to the result given in Eq. (<ref>). We first evaluate the integrals over the transverse momenta for both the bounded and free cases using the dimensional regularization, and then subtract the results according to the fundamental definition given by Eq. (<ref>), to obtain, F_ (T,L) =- A π^2/12 L^3[ ∑_n_1 = 1 ^∞(n_1)^3 - ∫_ 0 ^∞ dk' (k')^3] -A √(T^3/2L^3)∑_j=1^∞1/√(j^3)× [ ∑_n_1 = 1 ^∞(n_1 )^3/2 K_3/2( j π n_1/TL) - ∫_ 0 ^∞ dk' (k')^3/2 K_3/2( j π k'/TL)], where k'=k π /L. As can be seen in the above expression, the zero temperature parts of the bounded and free cases, given by the two terms in the first square bracket, are separately divergent, since the expression given in Eq. (<ref>) contains no analytic continuation. Now, using the Abel-Plana formula, given by Eq. (<ref>), the divergences cancel and after simplifying[Using [(it)^3/2 K_3/2 (i t α) - (-it)^3/2 K_3/2 (- i t α)]= -i π t^3/2 J_3/2 (t α).] we obtain the F_ given by Eq. (<ref>). We can also obtain another form for the Casimir free energy. First, we expand the logarithm of thermal correction part of the free energy given by Eq. (<ref>) for large values of β, then we integrate over the transverse momenta, and finally use the Abel-Plana formula to obtain, F_(T,L) = - π ^2 A/1440 L^3 + π ^2 A L T^4 /90 - A T^3/4π∑_j = 1^∞( π j/2 T L) + π j/2 T L^2 ( π j/2 T L)/j^3 . The first term is the zero temperature part and the rest constitute the thermal correction part. This form is equivalent to the result obtained above, i.e., Eq. (<ref>). However, to have an accurate plot using this form, one has keep a large number of terms, otherwise the graph would show an increase relative to the classical term at high values of T. This is due to the high β expansion mentioned above. In the last part of this appendix, we use the generalized Abel-Plana summation formula to compute the Casimir free energy for a massive case based on the fundamental definition. As mentioned in Sec. <ref>, one can consider two different ways of using the Abel-Plana summation formula to calculate the Casimir free energy. In the first case, we only use this formula to calculate the sum over the spatial modes. Hence, for the bounded case, we consider F_(T,L) given by Eq. (<ref>), and for the free case, we start with the first form of the free energy, given by Eq. (<ref>), and perform the same procedure as the bounded case in Sec. <ref> and resulted in Eq. (<ref>). 
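As a short numerical aside (ours, not the paper's), the cancellation of the zero temperature divergences performed above with the Abel-Plana formula can be spot-checked: for f(k)=k³ the formula reduces the formally divergent difference Σ_{n≥1} n³ - ∫_0^∞ dk k³ to the finite boundary term 2∫_0^∞ dt t³/(e^{2πt}-1), which equals ζ(-3)=1/120, i.e., the number behind the -Aπ²/(1440 L³) term. The simple form of the formula is also checked with the test function f(x)=e^{-x}.

from mpmath import mp, quad, exp, sin, pi, zeta

mp.dps = 30

# Finite remainder of sum_{n>=1} n^3 - int_0^inf dk k^3 according to Abel-Plana:
remainder = 2 * quad(lambda t: t**3 / (exp(2 * pi * t) - 1), [0, mp.inf])
print(remainder, zeta(-3))       # both 0.008333... = 1/120

# Direct check of the simple Abel-Plana formula with f(x) = exp(-x):
lhs = 1 / (1 - exp(-1))                                                     # sum_{n>=0} e^(-n)
rhs = 1 + mp.mpf(1) / 2 + 2 * quad(lambda t: sin(t) / (exp(2 * pi * t) - 1), [0, mp.inf])
print(lhs, rhs)                  # both 1.58197...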
Then, we express the Casimir free energy based on the fundamental definition as follows F_(T,L) = -A/ 4√(π^3)lim_s → 0∂/∂ sμ^2 s/Γ (s){Γ(s - 3/2)/4[∑_n_1^∞ω_n_1^3 - 2s - ∫_0^∞ dk' ω_k'^3 - 2s]+ ∑_n_0 = 1^∞[∑_n_1 = 1^∞(2 ω_n_1/n_0β)^3/2 - s K_3/2-s(βn_0ω_n_1) -∫_0^∞ dk' (2 ω_k'/n_0β)^3/2-s K_3/2-s(β n_0 ω_k')]}, where ω_n_1=√((n_1 π/L)^2+m^2) and ω_k'=√((k' π/L)^2+m^2). Next, we calculate the sum over n_1 for each bracket of Eq. (<ref>) using the generalized Abel-Plana formula, as used in Eqs. (<ref>, <ref>) for the massless case, and obtain F_ (T,L) =-ALm/ 4√(π^3)lim_s → 0∂/∂ sμ^2 s/Γ(s)[m^3-2s/ 2Γ(5/2 - s)∫_1^∞dt (t^2 - 1)^3/2 - s/(e^2 m L t - 1) +. . ∑_n_0 = 1^∞(2 m/n_0β)^3/2-s∫_0^∞ dt t^5/2-s J_3/2 -s( n_0 β m t) /√(t^2+1)(e^2 m L √(t^2 +1) - 1)]. After evaluating the integral over t and simplifying, we obtain the Casimir free energy given by Eq. (<ref>). In the second case, we first calculate the sum over Matsubara modes and then the sum over the remaining spatial modes using the Abel-Plana summation formula for both of them. To do this, we start with the first form of the free energy given by Eq. (<ref>), express the sum over the Matsubara frequencies as the positive integer and zero modes, and then evaluate the sum over n_0 for the bounded and free cases using the Abel-Plana summation formula[Here, we have used the following simple form of the Abel-Plana summation formula to evaluate the sum over n_0 modes: ∑_n=0^∞ f(n) = ∫_0^∞ dx f(x) +1/2f(0) + i ∫_0^∞ dtf(it) - f(-it)/e^2 π t - 1 ]. The resulting expression is as follows F_ (T,L) =-TA/ 4πlim_s → 0∂/∂ sμ^2 s/Γ(s){Γ(s-1) ∫_0^∞ dk_0 [ ∑_n_1^∞ω_n_1, k_0^1 - s - ∫_0^∞ dk' ω_k', k_0^1 - s]+ β/√(π)∑_j = 1^∞[∑_n_1 = 1^∞(2ω_n_1/jβ)^3/2 - s K_3/2-s(β j ω_n_1) -∫_0^∞ dk' (2ω_k', k_0/jβ)^3/2-s K_3/2-s(β j ω_k', k_0)]}, where ω_n_1, k_0=√(ω_n_1^2+(k_0 2 π T)^2) and ω_k', k_0=√(ω_k'^2+(k_0 2 π T)^2). Then, we use the generalized Abel-Plana formula for the expressions in the bracket of Eq. (<ref>) which contains the sum over the spatial modes, as used in Eq. (<ref>), and after simplifying obtain the same expression given by Eq. (<ref>). § CALCULATION OF THE CASIMIR FREE ENERGY OF THE MASSLESS AND MASSIVE CASES USING THE PRINCIPLE OF THE ARGUMENT THEOREM In this appendix we compute the Casimir free energy of a real scalar field, for both the massless and massive cases, using its fundamental definition and utilizing the Principle of the Argument theorem. In particular, we use this theorem to sum over the spatial modes of the free energy. As mentioned in Sec. <ref>, this summation includes the regular modes which are the roots of f(k_n_1) in Eq. (<ref>). To show the consistency of our method, we use this theorem for various expressions that we have obtained for the free energy, all yielding equivalent results. The Principle of the Argument theorem relates the difference between the number of zeros and poles of a meromorphic function f(z), to a contour integral of the logarithmic derivative of the function <cit.>. In this paper, we use the generalized form of the Principle of the Argument theorem which is as follows <cit.> ∑_n g(a_n) - ∑_m g(b_m) = 1/2 π i∮_C g(z) d[ ln(f(z)) ], where a_n and b_m are the zeroes and poles of f(z) inside the closed contour C, respectively, and g(z) is assumed to be an analytic function in the region enclosed by the contour C. In applying this theorem to our problem, we find it convenient to use the following generalization of Eq. 
(<ref>) <cit.> ∑_n g(a_n) - ∑_m g(b_m) = 1/2 π i∮_C g(z) d[ ln(f(z) h(z))], with the condition that the function h(z) should be analytic and have no zeros in the region enclosed by the contour C. In the first part of this appendix, we compute the Casimir free energy for a massless real scalar field. We use the Principle of the Argument theorem to evaluate the sum over the spatial modes for various forms of the F_(T,L) in four different ways. The first three methods are based on the first forms of the free energy, given by Eqs. (<ref>, <ref>), and the fourth is based on the second form given by Eq. (<ref>). Then, we show that they all yield equivalent results. In the first and second methods within the fundamental approach, we start with the form of the free energy given by Eq. (<ref>) and, after using this theorem for the sum over n_1, obtain F_(T,L) = -TA/8 π∑_n_0= - ∞^∞1/2 π i∮_Clim_s → 0∂/∂ sΓ (s - 1)/μ^- 2 sΓ (s) q^2 - 2s× d{ln[ 2 /i sin(√(q^2 - (2 π n_0 T)^2) L )]}, where g(q_n_1^2) = g(k_n_1^2 + 4 π^2 n_0^2 T^2) is the summand in Eq. (<ref>), while g(q) is the integrand defined in Eq. (<ref>). We have chosen h(q) = 2 / i[In the fermionic case, h(q) turns out to be nontrivial <cit.>.]. The closed contour C in the complex q-plane should enclose all of the roots of f(k_n_1). As can be seen in Fig. (<ref>), the closed contour C is composed of two arcs, C_R and C_r, and also two straight line segments L_1, and L_2. To compute this contour integral over q, we replace the q^2 - 2s term by the following integral representation q^2 - 2s = ∫_0^∞e^- t q^2dt/t^2-sΓ(s - 1). Next, we integrate by parts. In the limit R →∞ and r → 0, only L_1 and L_2 give nonzero contributions, which can be written as follows F_(T,L) = - i TA/8π^2 ∑_n_0 = - ∞^∞lim_s → 0∂/∂ sμ^2 s/Γ(s)∫_- i ∞^i ∞ dq ∫_0^∞dt e^- t q^2 q/t^1 - s× ln[ 2/isin(√(q^2 - (2 π n_0 T)^2) L )]. After changing variable q = ip, and evaluating the integral over t, we obtain F_(T,L) = TA/8 π^2 ∑_n_0 = - ∞^∞lim_s → 0∂/∂ sμ^2 s/Γ(s)∫_0^∞ dp Γ(s) [ (i p)^1 - 2s + (- i p)^1 - 2s] × ln[ e^L√(p^2 + (2 π n_0 T)^2)(1 - e^- 2 L √(p^2 + (2 π n_0 T)^2))] . Then we simplify[ We use the following identity: Γ(-z) sin (π z) = -π/Γ (z+1) z ∉ℤ ] and obtain, F_(T,L) = TAL/4 π∑_n_0 =- ∞^∞lim_s → 0∂/∂ sμ^2 s/Γ(s)Γ(1 - s) ∫_0^∞ dp { p^1 - 2sω_n_0 (p) - p^3 - 2s/(1-s) ω_n_0 (p)1/e^ 2 L ω_n_0(p) - 1} , where ω_n_0 (p) = √(p^2 +(2 π n_0 T)^2). Next, we use the Poisson summation formula to evaluate the sum over temperature modes, and after simplifying we evaluate lim_ s → 0∂ / ∂ s for all terms except for the divergent integral term which appear in the zero temperature part and obtain F_(0,L) = - A L/16√(π^5)lim_s → 0∂/∂ sΓ(s- 3/2)/μ^- 2 sΓ (s)∫_0^∞ dp p^3-2s - A π^2/1440 L^3 Δ F_(T,L) = -A L π^2/90 T^4 - 2 A L T^4 /π^2∑_n_0=1^∞∑_j=1^∞1/[n_0^2 + (2 j T L)^2]^2. To use the fundamental definition, we calculate the contribution of the free case by starting with the same form of the free energy, given by Eq. (<ref>), expressed as follows F_(T,L) = - TAL/8 π^2 ∑_n_0 = - ∞^∞lim_s → 0∂/∂ sΓ (s-1)/μ^-2 sΓ(s)∫_0^∞ dk [ k^2 + (2 π n_0 T)^2]^1 - s . we evaluate the sum over n_0, similarly to the bounded case, using the Poisson summation formula and obtain F_(0,L) = - A L/16√(π^5)lim_s → 0∂/∂ sΓ(s- 3/2)/μ^-2 sΓ (s)∫_0^∞ dk k^3-2s Δ F_(T,L) = -A L π^2/90 T^4 . As can be seen from Eq. (<ref>), the zero and finite temperature correction terms of the free case is identical to the first terms of the bounded case given in Eq. 
(<ref>) which after subtracting, these terms completely cancel and we obtain the same expression for the Casimir free energy as given by Eq. (<ref>). In the second method, we rewrite the sum over the Matsubara frequencies in Eqs. (<ref>, <ref>) for the bounded and free cases in terms of the positive integers and zero, and then evaluate this sum using the same form of the Abel-Plana formula which is used for Eq. (<ref>). After subtracting the free energy of the bounded from the free case, we obtain the same Casimir free energy as given by Eq (<ref>). For the third method, we start with the expression for F_(T,L) given in Eq. (<ref>), evaluate the sum over the Matsubara frequencies using the Poisson summation formula, and use the Principle of the Argument theorem to sum over the spatial modes to obtain F_(T,L) = -A/2 √(π^3)1/2 π i∮_C d{ln[ 2 /i sin( k_n_1 L )]}× {lim_s → 0∂/∂ sΓ(s - 3/2)/μ^-2 s8Γ (s) k_n_1^3 - 2s+√(2T^3)∑_n_0=1^∞√(k_n_1^3/n_0^3) K_3/2( n_0 β k_n_1)} . We can now follow the same steps as above, use the same contour shown in Fig. (<ref>) for the variable q_n_1^2=k_n_1^2, and simplify[Using √((i p)^3) K_1/2(i p a ) +√((- i p)^3) K_1/2(- i p a ) = π√(p^3) J_1/2(p a ).] the results to obtain F_(T,L) = -A/2 √(π^3)∫_0^∞ dp [Lp + ln( 1 - e^- 2 p L)] × {lim_s → 0∂/∂ sΓ(s - 1/2)/μ^-2 s4πΓ (s) p^2 - 2s- ∑_n_0=1^∞√(T p^3/2 n_0) J_1/2( n_0 β p)} . Only one of the four terms in the integrand, after carrying out the multiplication, is divergent. For the other three terms, evaluating the integral over p and lim_s → 0∂/∂ s, where applicable, we obtain F_(T,L) = -AL/8 √(π^5)lim_s → 0∂/∂ sΓ(s - 1/2)/μ^- 2 sΓ (s)∫_0^∞ dp p^3 - 2s + F_(T,L) +Δ F_(T,L), where F_(T,L) and Δ F_(T,L) are given by Eqs. (<ref>, <ref>). Next, we calculate F_(T,L)) analogously to the bounded case, by starting with Eq. (<ref>) for the massless case and computing the sum over n_0 using the Poisson summation formula. After simplifying, the free energy of the free case includes the zero and finite temperature correction parts which are exactly equal to the first and last term of the free energy of the bounded case, given by Eq. (<ref>). This shows that Eq. (<ref>) can be rewritten as F_(T,L)=F_(T,L)+ F_(T,L), as expected. In the fourth method within the fundamental approach, we start with the second form of the free energy, given by Eq. (<ref>), compute the sum over the spatial modes using the Principle of the Argument theorem, and obtain F_(T,L) = A/4π∫_0^∞ dK_T K_T 1/2 π i∮_C d{ln[ 2 /i sin( √(q^2 - K_T^2) L )]} {q + 2 T ln(1 - e^- β q) }, where q_n_1^2 = K_T^2 + k_n_1^2. After following the same steps as above, we obtain F_(T,L) = A/4π^2∫_0^∞ dK_T K_T ∫_0^∞ dp[L ω_K_T (p)+ ln(1 - e^-2L ω_K_T (p) )] [1+2∑_j =1^∞cos(β j p)], where ω_K_T (p)=√(p^2+ K_T^2). Then, we evaluate the integrals over K_T and p for all terms, except for the first one resulting from the multiplication of the brackets of Eq. (<ref>), which contains a divergent integral. Next, we calculate F_(T,L) by starting with the same second form of the free energy as has been used for the bounded case, and obtain F_(T,L) = A/4π∫_0^∞ dK_T K_T ∫_0^∞dk L/π[ω_K_T(k) +2 T ln(1 - e^- βω_K_T(k))], where ω_K_T(k)=√(k_T^2 + k^2). As can be seen the first terms of Eqs. (<ref>) and (<ref>) are divergent and identical. After subtracting these two expression, according to Eq. (<ref>), we obtain the same expression for the F_(T,L) as in Eq. (<ref>). In the last part of this appendix, we use this theorem for the computations related to the massive case, as stated in Sec. <ref>. 
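Before turning to the massive case, here is an illustrative numerical aside (not from the paper) on the Principle of the Argument theorem used above: for f(q)=sin(qL), whose zeros are the Dirichlet momenta q_n=nπ/L, the contour integral of g(q) f'(q)/f(q)/(2πi) reproduces the sum of g over the enclosed zeros. The test function g, the contour parameters, and L=1 are our own choices.

import numpy as np

L = 1.0
g = lambda z: np.exp(-z / 5.0)                       # analytic (entire) test function
dlogf = lambda z: L * np.cos(z * L) / np.sin(z * L)  # f'(z)/f(z) for f(z) = sin(zL)

# Circle of radius 2.5*pi/L centered at 2*pi/L: encloses the zeros q = n*pi/L, n = 0..4.
center, radius, N = 2 * np.pi / L, 2.5 * np.pi / L, 20000
theta = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
z = center + radius * np.exp(1j * theta)
dz = 1j * radius * np.exp(1j * theta) * (2 * np.pi / N)

contour = np.sum(g(z) * dlogf(z) * dz) / (2j * np.pi)
direct = sum(g(n * np.pi / L) for n in range(0, 5))

print(contour.real, direct)      # agree; the imaginary part of 'contour' is numerically zero

In the calculations above the n_1=0 mode is excluded and a constant h(q)=2/i accompanies f(q); a constant h does not affect this check, since d ln(h f) = d ln f.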
We start with Eq. (<ref>) which contains a sum over the regular spatial modes which are the roots of f(k_n_1) in Eq. (<ref>). We use the Principle of the Argument theorem, as expressed in Eq. (<ref>), to compute this sum and obtain F_(T,L) = -A/4√(π^3)1/2 π i∮_Clim_s → 0∂/∂ sμ^2 s/Γ (s)[Γ( s - 3/2)/4 q^3 - 2s + . . ∑_n_0 = 1^∞(2 q/ n_0 β)^3/2 - s K_3/2 - s(βn_0 q) ] d{ln[ 2/isin(√(q^2 - m^2) L )]}, where g(q_n_1^2) = g(k_n_1^2 + m^2) is the summand in Eq. (<ref>), while g(q) is the integrand defined in Eq. (<ref>). We have also chosen h(q) = 2 / i for the massive case, and consider the same closed contour C in the complex q-plane as shown in Fig. (<ref>). After integrating by parts and taking the limit R →∞ and r → 0, Eq. (<ref>) becomes F_(T,L) = - i A/4√(π^5)lim_s → 0∂/∂ sμ^2 s/Γ(s)∫_- i ∞^i ∞ dq [ ∫_0^∞dt e^- t q^2 q/4 √(t^3-2s) + ∑_n_0 = 1^∞(2 /n_0)^1/2 - s×. . T^1/2 - sq^3/2 - s K_1/2 - s(βn_0 q) ] {ln[ 2/isin(√(q^2 - m^2) L )]}. After changing variable q = ip, and evaluating the integral over t, we express the results as follows F_(T,L) = A/4√(π^5)lim_s → 0∂/∂ sμ^2 s/Γ (s)∫_0^∞ dp {Γ( s - 1/2)/4[ (i p)^2 - 2s + (- i p)^2 - 2s]+ ∑_n_0 = 1^∞(2T/n_0)^1/2 - s[ (i p)^3/2 - s K_1/2 - s( i p β n_0) + (- i p)^3/2 - s K_1/2 - s(- i p β n_0) ] }× {ln[ e^L √(p^2 + m^2)(1 - e^- 2 L √(p^2 + m^2))] }. We finally can simplify this expression, as done in Eq. (<ref>), to obtain the free energy for the bounded case given by Eq. (<ref>). § CALCULATION OF THE FREE ENERGY USING THE GENERALIZED ZETA FUNCTION The most commonly-used approach for calculating the Casimir effects is the zeta function approach (ZFA). The generalized zeta function <cit.> is given dy the following expression, Z_p^M^2(s ; a_1 ,..., a_p ; c_1 ,..., c_p) = ∑_n_1 = - ∞^∞ ...∑_n_p = - ∞^∞[ a_1(n_1 - c_1)^2 + ... + a_p(n_p - c_p)^2 + M^2]^ - s. The above expression yields finite results for Re (s) > p/2, and admits an analytic continuation for Re (s) < p/2, <cit.>. This form is also referred to as the inhomogeneous generalized zeta function. If we set the parameters c_1 ,..., c_p to zero, we obtain a special form of the inhomogeneous generalized zeta function. An important special form called the homogeneous zeta function is obtained when the parameters c_1 ,..., c_p, and M are set to zero. For this case, there is a constraint that the sums should not include the (n_1=0, ... ,n_p=0) mode. Obviously, for the massive case we have to use the inhomogeneous form, while, as shown in the text, both forms can be used for the massless case. In the first part of this appendix, we show explicitly four different ways of using the zeta function for calculating the free energy of the massless case, as outlined in Sec. <ref>, obtaining two equivalent expressions summarized in the form given by Eq. (<ref>) with two different forms for F_ (T,L) given by Eqs. (<ref>, <ref>). In the first method, we do the double sum simultaneously, so as to obtain the final result shown in Eq. (<ref>). The sums in the expression that we have obtained for F(T,L), given by Eq. (<ref>), are over only positive definite integers, so we use the homogeneous form of the generalized inhomogeneous Epstein zeta function <cit.>, given by E_p^M^2(s ; a_1 ,..., a_p) = ∑_n_1 = 1 ^∞...∑_n_p = 1^∞[ a_1 n_1^2 + a_2 n_2^2 + ... + a_p n_p^2 + M^2]^ - s . That is, we use E_p^0, which is usually denoted by E_p, and express F_ (T,L) as follows F_ (T,L) =- T A/8 πlim_s → 0∂/∂ sΓ (s - 1)/μ^-2 sΓ (s){ E_1( s - 1 ; π^2/L^2) + 2 E_2( s - 1 ; 4π^2/β^2 , π^2/ L^2) } . 
To compute the second part of Eq. (<ref>), we use the following relation for the Epstein zeta function, E_2, E_2(s;a_1,a_2) = - ζ (2s)/2 a_1^s + √(π/a_2)Γ (s - 1/2) ζ (2s - 1)/2 Γ (s) a_1^(s - 1/2) + 2 π^s/√(a_2^( s + 1/2))Γ (s) √(a_1^( s - 1/2))∑_m_1 = 1^∞∑_m_2 = 1^∞( m_2/m_1)^(s - 1/2) K_1/2 - s( 2 πm_1m_2√(a_1/a_2)). Now we set a_1=4π^2/β^2 and a_2=π^2/ L^2 to obtain F_ (T,L) =- T A/8 πlim_s → 0∂/∂ sμ^2 s/Γ (s){[(π/L)^2 - 2s - (2π/β)^2 - 2s] Γ (s-1) ζ (2s -2) + L/√(π)×. . (2π)^3 - 2sζ (2s -3)/β^3 - 2sΓ(s - 3/2) +8(2)^1 - 2s/2/( L β^3)^1 - 2s/2π^s -1∑_n_0 = 1^∞∑_n_1 = 1^∞(n_0/n_1)^3/2 - s K_3/2 - s(4 π n_0 n_1 L T) } . Now, an analytic continuation may be implemented by the application of the following zeta function reflection formula <cit.>, π ^- s'/2Γ(s'/2) ζ(s') = π ^s' -1/2Γ(1 - s' /2) ζ(1 - s'). Using the reflection formula for the first two terms of Eq. (<ref>), evaluating lim_s → 0∂/∂ s, the expression for the free energy becomes F_ (T,L) = -A T/16 π L^2ζ (3) + A T^3/4 πζ (3) - A L T^4/π^2 ζ (4) - A √(2 T^5/L)∑_n_0 = 1 ^∞∑_n_1 = 1 ^∞(n_0 /n_1)^3/2 K_3/2( 4 π n_0 n_1 T L ) . After computing the sum over n_0[We have used the following identities, ∑_m = 1^∞√(m^3) K_3/2(ma) = √(π/ 2 a^3)(a + 1) e^a - 1/( e^a - 1 )^2 = ∑_m = 1^∞√(m^3) K_ - 3/2(ma).] and simplifying the expression, we obtain our final result given by Eq. (<ref>), where the expression for F_ is given by Eq. (<ref>). We have checked that, as expected, reversing the labels, i.e., a_2=4π^2/β^2, a_1=π^2/ L^2 and m_1 ↔ m_2, does not alter the results. As mentioned in Sec. <ref>, this final result, given by Eq. (<ref>), includes an extra L-independent thermal correction term which is related to the finite value of the free energy of the zero spatial mode. This is in spite of the fact that the expressions for free energy in our problem, starting with Eq. (<ref>), do not contain the zero spatial mode. To illustrate this point, we first start with the first form of the free energy given in Eq. (<ref>) for the case of n_1=0, and after simplifying we obtain F_^n_1=0 (T)= -A T/4 π∑_n_0 = 1 ^∞lim_s → 0∂/∂ sΓ (s - 1)/μ^-2 sΓ (s)(2 n_0 π/β)^2 - 2s. Next, we use the reflection formula of the zeta function, given by Eq. (<ref>), and after computing the sum over n_0, we obtain F_^n_1=0 (T)= -A ζ(3)/2 π T^3 . Comparing the above result with the last term of F_(T,L) given by Eq. (<ref>), we observe that the extra term of F_ is equal to minus one half of the contribution of the zero spatial mode. Next we show that, as mentioned in Sec. <ref>, the final results are independent of the order of calculation of the summations. To do this, we again start with the expression for F_ (T,L), given by Eq. (<ref>), and set a_1=π^2/L^2 and a_2=4π^2/β^2 to get an alternative expression for Eq. (<ref>). After simplifying the result, using the reflection formula given by Eq. (<ref>), evaluating lim_s → 0∂/∂ s, the expression for the free energy becomes F_ (T,L)= -A /16 π^2 L^3ζ (4) - A √(T^3/2 L^3)∑_n_0 = 1 ^∞∑_n_1 = 1 ^∞(n_1 /n_0)^3/2 K_3/2( π n_0 n_1/ T L) . By calculating the sum over n_1 modes first, we obtain F_ (T,L) = -A π^2/1440 L^3 + A T^3/4πζ (3) - A T^3/4 π∑_n_0 = 1 ^∞[ (n_0 π/2 T L) + n_0 π/2 T L^2 ( π n_0/2 T L)] . The above result is equivalent to the form given by Eq. (<ref>) for F_ (T,L) with F_ (T,L) given by Eq. (<ref>). Next, we compute the free energy of the massless case using the inhomogeneous Epstein zeta function to do the sums separately. To do this, we use the free energy given by Eq. 
(<ref>) which includes the sums over positive integers. For our first case, which constitutes our second method, we first calculate the sum over Matsubara modes, and then the sum over the remaining spatial modes. To do this, we consider the spatial modes, i.e., n_1^2 π^2 /L^2, as the constant term of Eq. (<ref>). Then, we use the following expression for E_1^M^2(s ; a)  <cit.> E_1^M^2(s ; a) = - 1/2 M^2s + √(π/a)1/2 Γ(s) M^2s - 1[ Γ(s - 1/2) + . 4 ∑_j = 1^∞( √( a)/π j M)^( 1/2 - s) K_1/2 - s(2 π j M/√(a))], to compute the free energy. Using this expression in Eq. (<ref>), we obtain, F_ (T,L) = -A /8 πlim_s → 0∂/∂ sμ^2 s/Γ (s)∑_n_1 = 1 ^∞{Γ(s -3/2)/2√(π)( n_1π/L)^3 - 2 s + 2 π^1-s∑_n_0 = 1 ^∞( 2n_1/n_0 β L)^3/2 - s K_3/2 - s( n_0 n_1 π/TL) } . Evaluating lim_s → 0∂/∂ s for the second term, the expression for the free energy becomes, F_ (T,L) = -A /16 √(π^3)lim_s → 0∂/∂ sΓ(s -3/2)/μ^-2 sΓ (s)∑_n_1 = 1 ^∞( n_1π/L)^3 - 2 s + A/√(2)∑_n_1 = 1 ^∞∑_n_0 = 1 ^∞( n_1 T /n_0 L)^3/2 K_3/2( n_0 n_1 π/TL) . This expression is equivalent to Eq. (<ref>) and, as mentioned in Sec. <ref>, we calculate the divergent sum over the spatial modes using the analytic continuation embedded in ζ(-3). The final result is displayed in Eq. (<ref>). For our second case, which constitutes our third method, we first calculate the sum over spatial modes and then the sum over the remaining Matsubara frequencies. To do this, we can start with the expression for free energy given by Eq. (<ref>) and consider the Matsubara modes, i.e., 4 n_0^2 π^2 /β^2, as a constant term of Eq. (<ref>). However, we prefer to backtrack and start with Eq. (<ref>), in which the sum over Matsubara frequencies is not broken to two pieces. We have checked that the final results are the same either way. Then, we use Eq. (<ref>) to compute the free energy, obtaining the following expression, F_ (T,L)= - AT/16 πlim_s → 0∂/∂ sΓ(s-1)/μ^-2 sΓ(s)∑_n_0 = -∞^∞[- ( 4 n_0^2 π^2 /β^2 )^1 - s + Γ(s - 3/2)/Γ (s-1)×. . L/√(π)( 4 n_0^2 π^2 /β^2 )^3/2 - s + ∑_j = 1^∞4L ( 4 n_0^2 π^2 /β^2 j^2 L^2)^3 - 2s/4/√(π)Γ (s-1) K_3 - 2s/2( 2 L j √(4π^2n_0^2/β^2))] . The two first terms in the above expression are divergent and can be written as the homogeneous zeta function. For the last term, we evaluate lim_s → 0∂/∂ s, and then express the sum over temperature modes in terms of positive integes and a zero mode, which gives a nonzero contribution[We have used the following expansion: lim_n → 0(√(n^3) K_3/2 (n a)) = √(π/2 a^3) + O(n^2)]. We can now express the result as follows F_ (T,L)= AT/16 πlim_s → 0∂/∂ sμ^2 s/Γ(s)[Γ(s-1) Z_1( s - 1 ; 4 π^2/β^2) - LΓ(s - 3/2)/√(π)×. . Z_1( s - 3/2 ; 4 π^2/β^2)] - A ∑_n_0 = 1^∞∑_j = 1^∞√(2 T^5 n_0^3 / j^3 L) K_3 /2(4 π j n_0 T L)+A T /16 π L^2∑_j = 1^∞1/j^3, where the last term in Eq. (<ref>) is the contribution of the zero mode of the third term of Eq. (<ref>). For the first two terms of Eq. (<ref>), an analytic continuation may be implemented by the application of the following zeta function reflection formula <cit.>, π ^- s'Γ (s') Z_p (s' ; a_1,...,a_p) = π ^- p/2 + s'/√(a_1a_2...a_p)Γ (p/2 - s') Z_p ( (p/2 - s') ; 1/a_1,...,1/a_p). The application of the reflection reduces Eq. (<ref>) to Eq. (<ref>) and, as mentioned in Sec. <ref>, we calculate the divergent sum over the Matsubara modes using the analytic continuation of zeta function rendered by its reflection formula given by Eq. (<ref>), which yields our final result given by Eq. (<ref>). 
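As a numerical aside (ours), the Bessel-function resummation of the one-dimensional inhomogeneous Epstein sum that underlies the manipulations above can be checked directly in the region where the sum converges. The sketch below uses the standard Poisson-summation representation of Σ_{n≥1}(a n²+M²)^{-s}, written in our own notation (so the grouping of factors may differ from the equation quoted above), and compares it with brute-force summation at s=2.

from mpmath import mp, nsum, gamma, besselk, sqrt, pi, inf

mp.dps = 25
s, a, M = 2, mp.mpf("1.3"), mp.mpf("0.7")
c = M / sqrt(a)

direct = nsum(lambda n: (a * n**2 + M**2) ** (-s), [1, inf])

# sum over all integers n of (n^2 + c^2)^(-s), resummed via Poisson summation:
whole_line = (sqrt(pi) * gamma(s - mp.mpf(1) / 2) / gamma(s) * c ** (1 - 2 * s)
              + 4 * pi**s / gamma(s) * c ** (mp.mpf(1) / 2 - s)
              * nsum(lambda j: j ** (s - mp.mpf(1) / 2)
                     * besselk(s - mp.mpf(1) / 2, 2 * pi * j * c), [1, inf]))
resummed = a ** (-s) * (whole_line - c ** (-2 * s)) / 2

print(direct)
print(resummed)      # both 0.35394...

For Re s large enough, both expressions converge and agree; it is the Bessel form that admits the analytic continuation in s exploited in this appendix.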
For our fourth case, we do the double sums simultaneously using the homogeneous generalized zeta function. As mentioned in Sec. <ref>, we first with start the free energy given by Eq. (<ref>), leading to the expression for the free energy of the massless case given by Eq. (<ref>), which can be expressed in terms of homogeneous generalized zeta functions as follows, F_ (T,L) = T A/16 πlim_s → 0∂/∂ sΓ (s - 1)/μ^-2 sΓ (s){ Z_1( s - 1 ; 4 π^2/β^2) - Z_2( s - 1 ; 4π^2/β^2 , π^2/ L^2) } . Here, we use the zeta function reflection formula given by Eq. (<ref>) for the first and second terms of Eq. (<ref>), as follows, Z_1( s - 1 ; 4 π^2/β^2) = βΓ(3/2 - s)/2 Γ (s - 1) π^7/2 - 2s Z_1( 3/2 - s; β^2/4π^2 , L^2/π^2) Z_2( s - 1 ; 4π^2/β^2 , π^2/L^2) = β L Γ (2 - s)/2 Γ (s - 1) π^5 - 2s Z_2( 2 - s; β^2/4π^2 , L^2/π^2) = β L Γ (2 - s)/2Γ (s - 1) π^5 - 2s∑_n_0 = - ∞^∞∑_n_1 = - ∞^∞'[ (n_0β/2π)^2 + (n_1 L/π)^2]^(s - 2). Using these analytic continuations for the two terms of Eq. (<ref>) and evaluating lim_s → 0∂/∂ s, the expression for the free energy becomes F_ (T,L) = A/4 π∑_n_0 = 1 ^∞1/(n_0 β)^3 - AL/π^2∑_n_0 = 1 ^∞1/(n_0 β)^4 - AL/16 π^2∑_n_1 = 1 ^∞1/(n_1 L)^4 - 2 A L/π^2 ∑_n_0 = 1 ^∞∑_n_1 = 1^∞( n_0^2 β^2 + 4 n_1^2 L^2 )^- 2 . After computing the sums over n_0 modes in Eq. (<ref>) and simplifying, we obtain our result given by Eq. (<ref>), where the expression for F_ is given by Eq. (<ref>). In the last part of this appendix, we use the inhomogeneous generalized zeta functions to compute the free energy of the massive case. As mentioned in Sec. <ref>, we start with the first form of the free energy given by Eq. (<ref>), and compute the sum over n_1 modes using the inhomogeneous Epstein zeta function. To do this, we consider the mass term and the regular Matsubara frequencies, i.e., 4n_0^2 π^2/β^2 + m^2, as the constant term of Eq. (<ref>), i.e., M^2, and use Eq. (<ref>) to obtain the free energy given by Eq. (<ref>). Moreover, as mentioned in Sec. <ref>, the final result given by Eq. (<ref>) includes extra terms, which the first and fourth ones are related to a finite value of the free energy for a zero spatial mode. To clarify this point, we first start the first form of the free energy given by Eq. (<ref>) for the case of n_1=0, similar to what is done for the massless case, given by Eq. (<ref>), and after simplifying we obtain F_^n_1=0 (T,L) = -TA/8 πlim_s → 0∂/∂ sΓ (s - 1)/μ^-2 sΓ (s){ m^2 - 2s + 2∑_n_0 = 1 ^∞[(2 π n_0/β)^2 +m^2 ]^1-s}. Next, we use the inhomogeneous Epstein zeta function given by Eq. (<ref>) to compute the sum over Matsubara frequencies, and after taking the limit s → 0, we obtain F_^n_1=0 (T,L) = -A m^3/12 π -A ∑_j = 1 ^∞√(m^3 T^3/2 j^3 π^3) K_3/2(β j m). Comparing the above result with the first and last terms of F_(T,L) given by Eq. (<ref>), we observe that these extra terms of F_ is equal to minus one half of the contribution of the zero spatial mode. § CALCULATION OF THE FREE ENERGY USING THE GENERALIZED SCHLÖMILCH FORMULAS The original Schlömilch formula <cit.>, which can be used for evaluating sums is the following, α∑_k=1^∞k/e^2 α k - 1 + β∑_k=1^∞k/e^2 β k - 1 = α + β/24 - 1/4, where α, β > 0, and αβ= π^2. 
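A quick numerical aside (ours): the original Schlömilch formula quoted above is easy to verify directly, for instance with α=1 and β=π²/α.

from mpmath import mp, nsum, exp, pi, inf

mp.dps = 25
alpha = mp.mpf(1)
beta = pi**2 / alpha

S = lambda x: nsum(lambda k: k / (exp(2 * x * k) - 1), [1, inf])
print(alpha * S(alpha) + beta * S(beta))      # 0.202901...
print((alpha + beta) / 24 - mp.mpf(1) / 4)    # 0.202901...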
In this paper, we use the following expression for the generalized form of the Schlömilch formula <cit.> to calculate the Casimir free energy for both massless and massive cases, ∑_n=1^∞ln(1 - e^- α√((θ n)^2 + m^2))= ∑_n=1^∞ln(1 - e^- 2 π/θ√((2 π n/α)^2 + m^2)) - 1/2ln(1 - e^- α m) + 1/θ[∫_0^∞ dx ln(1 - e^- α√(x^2 + m^2)) + α∫_m^∞dy √(y^2 - m^2)/(e^2 π y /θ - 1)]+ 1/2ln(1 - e^-2 π m/θ). In the first part of this appendix, we consider the massless case. As mentioned in Sec. <ref>, we start with the second form of the free energy given by Eq. (<ref>) and use this method only for the thermal corrections part. To do this, we consider the transverse momenta, i.e., K_T, of this part as the constant term of Eq. (<ref>), i.e., m, and obtain Δ F_(T,L) = TA/2 π∫_0^∞ dK_T K_T {∑_n = 1^∞ln(1 - e^- 2 L √(K_T^2 + (2 π n T)^2)) - 1/2× ln(1 - e^- β K_T) + L/π[∫_0^∞ dx ln(1 - e^- β√(K_T^2 + x^2)) + β∫_K_T^∞dy √(y^2 - K_T^2)/e^2 L y -1] + ln(1 - e^- 2 L K_T)/2}. We have denoted the thermal correction part of the Casimir free energy obtained by the Schlömilch formula approach as Δ F_, to distinguish it from the one obtained using the fundamental definition, and the ZFA. Next, we evaluate the integrals over K_T, x, y for all terms and simplify them to obtain Δ F_(T,L) = -A √(2 T^5/L)∑_n = 1^∞∑_j = 1^∞(n/j)^3/2 K_3/2( 4 πnj TL) + A ζ (3)/4 π T^3+ Δ F_(T,L) - F_(0,L) - A ζ (3)/16 π L^2 T, where Δ F_(T,L) is the thermal correction term of the massless free case, given by Eq. (<ref>), and F_(0,L)= - A π^2/(1440 L^3) is the Casimir free energy at zero temperature. After evaluating the sum over n[We have used the following identity: ∑_n = 1^∞√(n^3) K_3/2(nα) = √(π/ 8 α^3)[(α/2) -1 + (α/2) ^2 (α/2)] ], and simplifying the above result, we obtain Δ F_(T,L) = F_(T,L) +A ζ (3)/4 π T^3+Δ F_(T,L) - F_(0,L) , where F_ is given by Eq. (<ref>). As mentioned in Sec. <ref>, the zero temperature part of the free energy should be calculated separately. To do this, we start with the zero temperature part of Eq. (<ref>) and compute the integral over K_T using the dimensional regularization, and then evaluate the sum over n_1 using the analytic continuation of the zeta function, i.e., ζ(-3), obtaining F_(0,L)=F_(0,L) = - A π^2/(1440 L^3). The final result is, as expected, identical to the free energy obtained using the ZFA given by Eq.(<ref>). That is, F_(T,L) = F_(0,L)+Δ F_(T,L) =F_(T,L) = F_(T,L) +Δ F_(T,L)+A ζ (3)/4 π T^3, In the last part of this appendix, we consider the massive case. As mentioned in Sec. <ref>, we start with the second form of the free energy given by Eq. (<ref>) and use Eq. (<ref>) to calculate the sum over the spatial modes in the thermal corrections part. To do this, we consider the transverse momenta and mass term, i.e., M^2=m^2+K_T^2, of this part as the constant term of Eq. (<ref>), i.e., m, and obtain Δ F_(T,L) = A T/2 π∫_0^∞ dK_T K_T {∑_n = 1^∞ln(1 - e^ - 2 Lω_n, K_T) - ln(1 - e^ - βω_K_T)/2 - ∑_j=1^∞Lω_K_T/π j[ K_1 (β j ω_K_T) - β/2L K_1 (2 L j ω_K_T)] + ln(1 - e^- 2 L ω_K_T)/2} , where ω_n, K_T=√(4 π^2 n^2/β^2+ K_T^2 +m^2) and ω_K_T=√(K_T^2 +m^2). Next, we evaluate all integrals and simplify them to obtain the following expression for the thermal corrections part Δ F_(T,L) = - AT/2{L∑_j=1^∞∑_n=1^∞(ω_n/π j L)^3/2 K_3/2(2 j L ω_n) - √(T m^3/2 π^3)∑_j=1^∞K_3/2(j m β)/√(j^3) + m^2T L/π^2∑_j=1^∞K_2 ( j m β)/j^2} - F_(0,L) - AT/4√(m^3/π^3 L)∑_j=1^∞K_3/2(2 j m L)/√(j^3), where ω_n = √((2 π n T)^2+ m^2), and F_(0,L) is given by Eq. (<ref>). 
To simplifying this result, we can write the last term of this expression as a half of the contribution of n=0 of the first term. As mentioned in Sec. <ref>, we calculate the zero temperature part of the free energy of the massive case separately. Therefore, we start with the zero temperature part of Eq. (<ref>) and after evaluating the integral over K_T using the dimensional regularization, we compute the sum over the spatial modes using the inhomogeneous Epstein zeta function to obtain F_(0,L) = Am^3/24 π -A L m^4/128 π^2[3 - 4 ln(m/μ) ] + F_(0,L). After considering the zero and finite temperature contributions of the free energy given by Eqs. (<ref>, <ref>), F_(T,L) becomes F_(T,L) = Am^3/24 π -A L m^4/128 π^2[3 - 4 ln(m/μ) ] - ∑_j=1^∞∑_n=0^∞A√(ω_n^3) K_3/2(2 j L ω_n)/2√(π^3 j^3 L)β + A √(T^3 m^3/8 π^3)∑_j=1^∞K_3/2(j m β)/√(j^3) + Δ F_(T,L)- AT/4√(m^3/π^3 L)∑_j=1^∞K_3/2(2 j m L)/√(j^3). We can finally use the Abel-Plana summation formula to evaluate the sum over n of the third term of this result, and after simplifying, as expected, the final result is identical to F_(T,L) given by Eq. (<ref>). § CALCULATION OF THE CASIMIR FREE ENERGY USING THE DIMENSIONAL REGULARIZATION In this appendix, we calculate the Casimir free energy for a massive real scalar field, based on its fundamental definition, using the second form of the free energy given by Eq. (<ref>). For the bounded region, we evaluate the integral over the transverse momenta using the dimensional regularization, and obtain F_(T,L) = - A/2∑_n_1=1^∞lim_D→2{Γ(-D+1/2)/√((4 π)^D+1)ω_n_1^D+1 + 4∑_j=1^∞(ω_n_1T/2 π j)^D+1/2 K_D+1/2( β j ω_n_1) }, where ω_n_1=√(n_1^2 π^2/L^2+m^2). Then, we evaluate the sum over the regular spatial modes using the Principle of the Argument theorem, which is the same procedure as done for the bounded case in Sec. <ref> given by Eq. (<ref>) (see Appendix <ref>), and obtain F_(T,L) = A lim_D→2∫_0^∞ dp{[p^D/Γ( D+1/2) √((4 π)^D+1) + (p/2 π)^D+1/2×. . ∑_j=1^∞J_D-1/2( β j p)/√((β j)^D-1)] [L ω (p) +ln(1 - e^- 2 L ω (p))] }, where ω (p) = √(p^2 + m^2). After evaluating the integral over p for all terms of the above expression, and simplifying, the free energy for the bounded case becomes F_(T,L) = - A L/(2 π)^D+2/2lim_D→2{m^D+2Γ(- D+2/2)/2^D+4/2 +2 (m/β)^D+2/2× ∑_n_0=1^∞K_D+2/2( β n_0 m)/ n_0^D+2/2 +2 ( m/2L)^D+2/2∑_n_1=1^∞K_D+2/2( 2 n_1 mL)/ n_1^D+2/2 + 4 ( m/β)^D+2/2∑_n_0=1^∞∑_n_1=1^∞K_D+2/2( β m ω_n_0, n_1)/(ω_n_0, n_1)^D+2/2} , where ω_n_0, n_1 = √(n_0^2 + (2 n_1 T L)^2). Next, we calculate the free energy of the free case at finite temperature, by starting with the second form of the free energy, and using dimensional regularization to calculate the integrals over momenta and obtain F_(T,L) =- AL lim_D→2{m^D+2Γ(- D+2/2)/2 (4 π)^D+2/2 +2 m^D+2/2∑_n_0=1^∞K_D+2/2( β n_0 m)/(2 πβ n_0)^D+2/2} . As can be seen, the first two terms of F_(T,L), given by Eq. (<ref>), are identical to the two terms of F_(T,L), given by Eq. (<ref>). Notice that the first terms actually diverge as Γ( -2), in contrast to the analogous terms in Eqs. (<ref>, <ref>) which are obtained using the first form of the free energy Eq. (<ref>), which has an embedded analytic continuation. After subtracting these terms, and taking the limit D → 2, we obtain the same expression for the Casimir free energy as in Eq. (<ref>). In this appendix, we also show that if we use the generalized Abel-Plana summation formula to calculate the sum over the regular spatial modes in Eq. 
(<ref>) for the bounded case, the Casimir free energy is identical to the expression given by Eq. (<ref>). For do this, we also consider F_(T,L) in the second form of the free energy and evaluate the integral over the transverse momenta using the dimensional regularization, and present the Casimir free energy as follows F_(T,L) =lim_D→2{ -A Γ(-D+1/2)/2 √((4 π)^D+1)[∑_n_1=1^∞ω_n_1^D+1 -∫_0^∞ dk'ω_k'^D+1]- 2 A/(2 π)^D+1/2× ∑_j=1^∞√(T^D+1/ j^D+1)[∑_n_1=1^∞√(ω_n_1^D+1) K_D+1/2( β j ω_n_1) -∫_0^∞ dk'√(ω_k'^D+1) K_D+1/2( β j ω_k')]}, where ω_n_1=√(n_1^2 π^2/L^2+m^2) and ω_k'=√(k'^2 π^2/L^2+m^2). Notice that each term in the first bracket of Eq. (<ref>) is divergent which one can easily calculate and show that the divergent part of each term is exactly identical to the first term of Eqs. (<ref>, <ref>). By evaluating the sum over n_1 modes of Eq. (<ref>) using the generalized Abel-Plana summation formula, as the same procedure as done in Appendix <ref> to calculate the sum over the spatial modes for the massive case, we obtain F_(T,L) = -ALmlim_D→2{-m^D+1/√((4 π)^D+1)Γ(D+3/2)∫_1^∞ dt √((t^2 -1)^D+1)/e^2mLt-1 + ∑_j=1^∞√(( Tm/ 2 π j)^D+1)∫_0^∞ dt 2√(t^D+3) J_D+1/2(β j m t)/√(t^2+1)(e^2mL √(t^2+1) - 1)}. Then, by computing the integral over t, simplifying, and taking D→ 2, we obtain the same expression for the Casimir free energy as in Eq. (<ref>). § CALCULATION OF THE CASIMIR FREE ENERGY FOR MASSIVE SCALARS USING THE BOYER METHOD In this appendix, we calculate the Casimir free energy for a massive real scalar field using the Boyer method <cit.> and show that the final result is equivalent to the result given in Eq. (<ref>), obtained using the fundamental approach. In this method, we subtract the free energies of two configurations, at the same temperature, and obtain the Casimir free energy of our original system by taking appropriate limits. Configuration A consists of two inner plates located at z=± L/2 surrounded by two outer plates located at z=± L_1 /2. Configuration B is similar to A except the two inner plates are located z=± L_2 /2, with L< L_2 <L_1, as depicted in Fig. (<ref>). The Casimir free energy can be defined in terms of the difference between the free energies of configurations A and B as follows F_(T,L) =lim_L_2 →∞{lim_L_1 →∞[ F_A(T,L,L_1) - F_B(T,L_2,L_1)] } , where F_A(T,L,L_1) = F_^I(T,L)+2 F_^II(T,L,L_1) and F_B(T,L_2,L_1) = F_^I(T,L_2)+2 F_^II(T,L_2,L_1). Moreover, F_^I denotes the free energy between two inner plates and the F_^II denotes the free energy of bounded regions adjacent to the inner plates. In fact, the Boyer method can be thought of as a rigorous implementation of the fundamental definition, provided the two configurations are taken to be at the same temperature, in which any possible contributions from the regions outside of the bounded region is also taken into account. To calculate the free energy for each of six regions shown in Fig. (<ref>), we use the result obtained in the Sec. <ref> for the F_, given by Eq. (<ref>). For example, the free energy for the outer bounded regions of configuration B becomes F_^II(T,L_2,L_1) = -A (L_1 - L_2)/16 √(π^5)lim_s → 0∂/∂ sΓ(s - 1/2)/μ^-2 sΓ (s)∫_ 0^∞ p^2 - 2sω(p) dp -A m^2/π^2× {(L_1 - L_2)/4 β^2∑_n_0 = 1^∞K_2 (n_0 β m)/ n_0^2 + 1/4 (L_1 - L_2)∑_n_1 = 1^∞K_2 (n_1 m (L_1 - L_2))/ n_1^2 + T^2 /2× (L_1 - L_2) ∑_n_0 = 1^∞∑_n_1 = 1^∞K_2 (β m ω'_n_0, n_1)/(ω'_n_0, n_1)^2}, where ω'_n_0, n_1 = √(n_0^2 + n_1^2 T^2 ( L_1 - L_2)^2). As mentioned before, only the first term of the above expression contains a divergent part. 
Moreover, F_^II(T,L_2,L_1)=F_^II(T,L_1-L_2) and the first two terms are linear in L_1-L_2. Adding the contributions of the three regions of configuration B we obtain F_B(T,L_2,L_1) = -A L_1/8√(π^5)lim_s → 0∂/∂ sΓ(s - 1/2)/μ^-2 sΓ (s)∫_ 0^∞ p^2 - 2sω(p) dp -A m^2/π^2× { L_1/2 β^2∑_n_0 = 1^∞K_2 (n_0 β m)/ n_0^2 +∑_n_1 = 1^∞[K_2 (2n_1 m L_2)/ 8 L_2 n_1^2+K_2 (n_1 m (L_1 - L_2))/ 2 (L_1 - L_2) n_1^2] + T^2∑_n_0 = 1^∞∑_n_1 = 1^∞[ L_2 K_2 (β m ω”_n_0, n_1)/(ω”_n_0, n_1)^2+(L_1- L_2) K_2 (β m ω'_n_0, n_1)/(ω'_n_0, n_1)^2]} , where ω”_n_0, n_1 = √(n_0^2 + ( 2 n_1 T L_2)^2). We can compute F_A(T,L,L_1) similarly. Upon using Eq. (<ref>) to calculate F_(T,L), the first two terms of F_A(T,L,L_1) and F_B(T,L_2,L_1), which include divergent integrals, cancel even before we take the limits and we obtain F_(T,L) = - Am^2/π^2∑_n_1=1^∞lim_L_2 →∞{lim_L_1 →∞[ K_2 (2n_1 m L)/ 8 L n_1^2-K_2 (2n_1 m L_2)/ 8 L_2 n_1^2 +. . K_2 (n_1 m (L_1 - L))/ 2 (L_1 - L) n_1^2-K_2 (n_1 m (L_1 - L_2))/ 2 (L_1 - L_2) n_1^2 + T^2∑_n_0 = 1^∞[ L K_2 (β m ω_n_0, n_1)/(ω_n_0, n_1)^2 -L_2 ×. . . . K_2 (β m ω”_n_0, n_1)/(ω”_n_0, n_1)^2 + (L_1- L)K_2 (β m ω”'_n_0, n_1)/(ω”'_n_0, n_1)^2 - (L_1- L_2) K_2 (β m ω'_n_0, n_1)/(ω'_n_0, n_1)^2] ]}, where ω”'_n_0, n_1 = √(n_0^2 + n_1^2 T^2 ( L_1 - L)^2). Finally, upon taking the limits L_1 →∞, and L_2 →∞, sequentially, we obtain the same expression for the Casimir free energy given by Eq. (<ref>). This proves that, when using the fundamental approach, the contributions from the outer regions in the bounded case are precisely canceled by the corresponding contributions of the free case. § THE HEAT KERNEL COEFFICIENTS The heat kernel expansion is an important tool in the computations of the Casimir effects, which can be used to obtain the high and low temperature limits of the Casimir thermodynamics quantities, including the divergences in the vacuum energy <cit.>. In the first part of this appendix, we obtain the divergent term of the energy at zero temperature for our model by calculating the heat kernel coefficients. To obtain the energy, we use the partition function at zero temperature for a massive free real scalar in path integral representation: Z[0] = ∫ D ϕexp{ i ∫d^4 x1/2[ ( ∂_μϕ)^2 -m^2 ϕ^2 ] } = [( ∂^2 + m^2/μ^2)]^-1/2, where μ is an arbitrary mass scale introduced for dimensional reasons, as explained in Sec. <ref>. Using the effective action, the vacuum energy for time-independent boundaries is obtained as <cit.> E = i/T ln(Z[0]) = -i/2T ln[(∂^2 + m^2 /μ^2)]= - i/2T[ ln( -P^2 + m^2 /μ^2) ], where T is the total time and the trace indicates the summation over eigenvalues of Klein-Gordon operator in the momentum space representation. The explicit form of the energy at zero temperature is E _(0,L)= i/2 T∫_-∞^∞Tdω/2 π∫Ad^2 K_T/(2 π)^2∑_n_1lim_s → 0∂/∂ s[ - ω^2 + ω _n_1 , K_T^2 ]^-s/μ^-2s = i/2∫_-∞^∞dω/2 π∫Ad^2 K_T/(2 π)^2∑_n_1lim_s → 0∂/∂ s∫_0^∞dt e^-t( - ω^2 + ω _n_1 , K_T^2)/μ^-2s t^1-sΓ (s), where ω_n_1 , K_T=√(K_T^2+ k_n_1^2+m^2). Due to the Dirichlet boundary conditions at the plates for the massive case, the longitudinal momentum k_n_1 takes on discrete regular values which are solutions to Eq. (<ref>) and given by Eq. (<ref>). Note that the second form of energy given by Eq. (<ref>) has an embedded analytic continuation, as mentioned in Sec. <ref>. 
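Before carrying out this computation, a small numerical aside (ours, not the paper's): the Dirichlet spatial mode sum that enters the heat kernel below, S(t)=Σ_{n≥1} e^{-t n²π²/L²}, behaves for small t as L/(2√(πt)) - 1/2 up to exponentially small corrections, which is precisely what produces the volume and surface coefficients a_0=AL and a_{1/2}=-A√π obtained in the remainder of this appendix.

import numpy as np

L, t = 1.0, 1.0e-3
n = np.arange(1, 2001)
S = np.sum(np.exp(-t * (n * np.pi / L) ** 2))
asymptotic = L / (2.0 * np.sqrt(np.pi * t)) - 0.5

print(S, asymptotic)    # agree up to corrections of order exp(-L^2/t)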
After performing a wick rotation on ω, we evaluate its integral and, to obtain the nonzero heat kernel coefficients for our model at zero temperature, present the result in terms of the spatial heat kernel as E _(0,L)= -lim_s → 0∂/∂ sμ^2s/Γ (s)∫_0^∞dt e^ -tm^2/4√(π)t^3/2 - s𝐊(t) ≡ -lim_s → 0∂/∂ sẼ _(s), where the spatial heat kernel for our model is the following 𝐊(t)=∫Ad^2 K_T/(2 π)^2∑_n_1 e^ - t (K_T^2+k_n_1^2) = A/4π t∑_n_1 e^-t k_n_1^2. In the second part of Eq. (<ref>) we have defined Ẽ _(s). The spatial heat kernel, which we shall henceforth refer to it simply as the heat kernel, obeys the heat conduction equation with the initial condition 𝐊(r, r'| t=0) = δ^3 (r-r'). Hence, its behavior for small t describes the divergences in the vacuum energy. It has th following expansion for small t <cit.>, 𝐊(t) 1/(4π t)^3/2∑_n=0^∞ a_n/2 t^n/2. where a_n/2 are the heat kernel coefficients for the massless case. The overall coefficient of the sum is actually t^-d/2, where d=3 is the dimension of space. The integrand in Eq. (<ref>) has a pole at t=0. So, to obtain the divergent part of this integral, we divide the interval of the integration into t ∈ [0, 1] and t ∈ (1,∞). Then we need to only evaluate the integral over t in the first interval, for which we use the expansion of 𝐊(t), given by Eq. (<ref>), and the exponential mass term as follows: 𝐊(t) e^-t m^21/(4π t)^3/2∑_n=0^∞ a_n/2 t^n/2∑_k=0^∞(-1)^k m^2k t^k/k!= 1/(4π t)^3/2∑_j=0^∞(∑_k=0^[j/2] a_j/2 - k(-1)^k m^2k/k!) t^j/2≡1/(4π t)^3/2∑_j=0^∞α_j/2 t^j/2, where the second expression has been obtained by combining powers of t. In the last expression we have defined the heat kernel coefficients for the massive case as α_j/2. Then we insert this expression into Eq. (<ref>) and integrate to obtain Ẽ _(s)= -1/32 π^2∑_j=0^∞[α_j/2/s-2+j/2]. The expression in the bracket in Eq. (<ref>) has a simple pole at j=4, with coefficient α_2. Therefore its divergent part is Ẽ _^(s)= -1/64 π^2[2 a_2-2 a_1 m^2+ a_0 m^4/s]. Now, we obtain the heat kernel coefficients, i.e., a_n/2, which include a sum of two local integrals, one over the volume and the other over the surface <cit.>. According to our model, which includes two parallel plates with Dirichlet boundary conditions, the surface part of the plates and the volume part give a nonzero contribution for each part. So, the only nonzero coefficients for the volume and surface parts are a_0=AL and a_1/2 = - A √(π), respectively. Hence, the divergence of the integral part of the expression for the energy at zero temperature, given by Eq. (<ref>), can be inferred from Eq. (<ref>), which yields -ALm^4/64 π^2s. Note that this is precisely the divergent term that appears in F _(0,L) and F _(0,L) shown Eq. (<ref>), and also in the second term in Eq. (<ref>) for F_(T,L). In the last part of this appendix, we obtain the nonzero heat kernel coefficients for our model at high temperatures. We first, present Eq. (<ref>) in terms of the spatial heat kerne, and express the sum over Matsubara frequencies as the sum over the positive integers and a zero mode to obtain F_(T,L) = -T/2lim_s → 0∂/∂ s∫_0^∞dt e^-t m^2/Γ (s) t^1-s𝐊(t) [1 + 2 ∑_n_0 = 1^∞ e^-t (2 n_0 π/β)^2], where the 𝐊(t) is given by Eq. (<ref>) for our model. As stated in <cit.>, the behavior of the above expression as T →∞, is determined by the behavior of the heat kernel 𝐊(t) as t → 0. We evaluate the integral over t using the expansion of e^-t m^2𝐊(t) given by Eq. 
(<ref>), and obtain the following expression in the high temperature limit F_(T,L) -T/8√(π^3)∑_j=0^∞lim_s → 0∂/∂ s1/Γ (s) α_j/2[ 1/j + 2s -3 + . . (2π/β)^3-2s-jΓ(2s-3+j/2) ζ (2s+j-3) ]. As can be seen, the first term in the bracket in Eq. (<ref>), which is proportional to T, has only one simple pole at j=3, whereas the second term in the bracket includes one pole in the gamma function and one in the zeta function. Evaluating lim_s → 0∂/∂ s, we obtain the following expression for the high temperature expansion of the free energy F_(T,L) - π^2 α_0/90 T^4 - ζ (3) α_1/2/4 √(π^3) T^3 - α_1/24 T^2 -α_3/2/8 √(π^3)T [γ +ln(T)] - α_2/16 π^2(γ - ln (4 π T)) +... . As mentioned above, the only nonzero heat kernel coefficients for our model are a_0 and a_1/2. So, we can easily obtain the heat kernel coefficients α_j/2 that appear in Eq. (<ref>), using their definition given in Eq. (<ref>), in the high temperature limit and express them as follows α_0=AL, α_1/2= - √(π) A, α_1=- m^2 AL, α_3/2 = √(π) m^2 A, α_2= m^4/2 AL . Now we can compare the high temperature expansion obtained here for F_(T,L), using the heat kernel method, with that of F_(T,L) given in Eq. (<ref>). We note that the coefficients of all of the temperature-dependent terms of the former are correct except for the linear or the classical term. This term is the result of the first term in the square bracket in Eq. (<ref>), and has to be computed separately and without any expansions. After evaluating the integral over t for this term, we obtain F_^(T,L) = -TA/8 πlim_s → 0∂/∂ sΓ (s -1 )/μ^-2sΓ (s) E_1^m^2(s-1 ; π^2/L^2). Next, we use Eq. (<ref>) to compute this term in high temperature limit, and obtain F_^(T,L) -TAL/4 √(π^3)lim_s → 0∂/∂ sμ^2s/Γ (s)∑_j=1^∞(m/jL)^3/2- s K_3/2- s(2jmL) After taking the lim_s → 0∂/∂ s, the μ factor disappears, and the result is identical to the classical term in F_ in the high temperature limit, given by Eq. (<ref>). On a side note, the temperature-independent term in Eq. (<ref>) is, not surprisingly, incorrect. 99 r1Cas. Hendrick B.G. Casimir, “On the attraction between two perfectly conducting plates.", Proc. Kon. Ned. Akad. Wet. 51 (1948): 793. r32Particle. Milton, Kimball A., “The Casimir effect: physical manifestations of zero-point energy.", World Scientific (2001). r322Particle. Cucchieri, Attilio, Axel Maas, and Tereza Mendes., “Infrared properties of propagators in Landau-gauge pure Yang-Mills theory at finite temperature.", Physical Review D 75.7 (2007): 076003. r323Particle. S. Ejiri, Y. Maezawa, N. Ukita, S. Aoki, T. Hatsuda, N. Ishii, K. Kanaya, and T. Umeda, “Equation of state and heavy-quark free energy at finite temperature and density in two flavor lattice QCD with Wilson quark action.", Physical Review D 82.1 (2010): 014508. r324Particle. Santos, A. F., and Faqir C. Khanna, “Standard Model Extension and Casimir effect for fermions at finite temperature.", Physics Letters B 762 (2016): 283-287. r33Cond. De Martini, F., and G. R. Jacobovitz.,“Anomalous spontaneous–stimulated-decay phase transition and zero-threshold laser action in a microscopic cavity.", Physical Review Letters 60.17 (1988): 1711. r332Cond. De Martini, F., et al., “Spontaneous emission in the optical microscopic cavity.", Physical Review A 43.5 (1991): 2480. r333Cond. Mohideen, Umar, and Anushree Roy., “Precision measurement of the Casimir force from 0.1 to 0.9 μ m.", Physical Review Letters 81.21 (1998): 4549. r334Cond. J. M. Obrecht, R. J. Wild, M. Antezza, L. P. Pitaevskii, S. Stringari, and E. A. 
Cornell, “Measurement of the temperature dependence of the Casimir-Polder force.", Physical review letters 98.6 (2007): 063201. r33Nano. M. Bordag, U. Mohideen, V.M. Mostepanenko, “New developments in the Casimir effect ", Physic Reports 353 (2001): 1-205. r332Nano. S. Bellucci and A. A. Saharian, “ Fermionic Casimir effect for parallel plates in the presence of compact dimensions with applications to nanotubes”, Phys. Rev. D 80 (2009): 105003. r333Nano. E. Elizalde, S.D. Odintsov, A.A. Saharian, “Fermionic condensate and Casimir densities in the presence of compact dimensions with applications to nanotubes ”, Phys. Rev. D 83 (2011): 105023. r334Nano. S. Bellucci, E.R. Bezerra de Mello, A.A. Saharian, “ Finite temperature fermionic condensate and currents in topologically nontrivial spaces”, Phys. Rev. D 89 (2014): 085002. r34String. Y. Maezawa, N. Ukita, S. Aoki, S. Ejiri, T. Hatsuda, N. Ishii, and K. Kanaya., “Heavy-quark free energy, Debye mass, and spatial string tension at finite temperature in two flavor lattice QCD with Wilson quark action.", Physical Review D 75.7 (2007): 074501. r342String. Mykkänen, Anne, Marco Panero, and Kari Rummukainen., “Casimir scaling and renormalization of Polyakov loops in large-N gauge theories.", Journal of High Energy Physics 2012.5 (2012): 69. r343String. Bezerra, V. B., H. F. Mota, and C. R. Muniz., “Thermal Casimir effect in closed cosmological models with a cosmic string.", Physical Review D 89.2 (2014): 024015. r35Cosmo. Pietroni, Massimo., “Brane worlds and the cosmic coincidence problem.", Physical Review D 67.10 (2003): 103523. r352Cosmo. Perivolaropoulos, Leandros, “Vacuum energy, the cosmological constant, and compact extra dimensions: Constraints from Casimir effect experiments.", Physical Review D 77.10 (2008): 107301. r353Cosmo. V. B. Bezerra, G. L. Klimchitskaya, V. M. Mostepanenko, and C. Romero, “Thermal Casimir effect in closed Friedmann universe revisited.", Physical Review D 83.1 (2011): 104042. r354Cosmo. Marino, Jamir, Antonio Noto, and Roberto Passante., “Thermal and Nonthermal Signatures of the Unruh Effect in Casimir-Polder Forces.", Physical review letters 113.2 (2014): 020403. r3Spar. Marcus J. Sparnaay, “Measurements of attractive forces between flat plates.", Physica (Utrecht) 24 (1958): 751-764. r4Lamo. Steve K. Lamoreaux, “Demonstration of the Casimir force in the 0.6 to 6 μ m range.", Phys. Rev. Lett. 78 (1997): 5. r30Kimb. Kimball A. Milton, “The Casimir effect: recent controversies and progress.", Journal of Physics A: Mathematical and General 37.38 (2004): R209. r302Kimb. K.A. Milton, “Recent developments in the Casimir effect.", Journal of Physics: Conference Series. Vol. 161. No. 1. IOP Publishing (2009). r31Bord2. Michael Bordag and et al., “Advances in the Casimir effect", OUP Oxford 145 (2009). r26Goushe. M. Sasanpour, C. Ajilian and S. S. Gousheh, “Casimir free energy for massive fermions: a comparative study of various approaches.", Journal of Physics A: Mathematical and Theoretical (2022). r2Lif. E. M. Lifshitz, “The theory of molecular attractive forces between solids.", Zh. Eksp. Teor. Fiz. 29 (1956) : 94-110 (Sov. Phys. JETP 2 73-83). r20Lokh. V. B. Svetovoy, and M. V. Lokhanin, “Linear temperature correction to the Casimir force.", Physics Letters A 280, no. 4 (2001): 177-181. r5Mehr. J. Mehra, “Temperature correction to the Casimir effect.", Physica 37.1 (1967): 145-152. r6Brown. Lowell S. Brown, and G. 
Jordan Maclay, “Vacuum stress between conducting plates: an image solution.", Physical Review 184.5 (1969): 1272. r8Dowk76. Dowker, J. S., and Raymond Critchley, “Vacuum stress tensor in an Einstein universe: Finite-temperature effects.", Physical Review D 15.6 (1977): 1484. r8Dowk78. Dowker, J. S., and Gerard Kennedy, “Finite temperature and boundary effects in static space-times.", Journal of Physics A: Mathematical and General 11.5 (1978): 895. r8Dowk80. Kennedy, Gerard, Raymond Critchley, and J. S. Dowker, “Finite temperature field theory with boundaries: Stress tensor and surface action renormalisation.", Annals of Physics 125.2 (1980): 346-400. r9Balian. Balian, Roger, and Bertrand Duplantier, “Electromagnetic waves near perfect conductors. II. Casimir effect.", Annals of Physics 112.1 (1978): 165-208. r10Wolf. Jan Ambjørn, and Stephen Wolfram, “Properties of the vacuum. I. Mechanical and thermodynamic.", Annals of Physics 147(1) (1983): 1-32. r12Kris. K. Kirsten, “Casimir effect at finite temperature.", Journal of Physics A: Mathematical and General 24.14 (1991): 3281. r51Elizald92. Elizalde, E., and A. Romeo., “Epstein-function analysis of the Casimir effect at finite temperature for massive fields.", International Journal of Modern Physics A 7.29 (1992): 7365-7399. r11Plunien. G. Plunien, B. Müller, and Walter Greiner, “The casimir effect." Physics Reports 134, no. 2-3 (1986): 87-193. r17Geyer. B. Geyer, G.L. Klimchitskaya, and V.M. Mostepanenko, “Thermal Casimir effect in ideal metal rectangular boxes.",The European Physical Journal C 57.4 (2008): 823-834. r29Junji. Zhongyou Mo and Junji Jia, “ Generalized Schlömilch formulas and thermal Casimir effect of a fermionic rectangular box" ,Phys. Rev. A 98, 012512 (2018). r29Teo. L.P. Teo, “Finite temperature Casimir effect for scalar field with Robin boundary conditions in spacetime with extra dimensions.", Journal of High Energy Physics 2009 no. 11 (2009): 095 ; L.P. Teo, “Finite temperature Casimir effect for massive scalar field in spacetime with extra dimensions.", Journal of High Energy Physics 2009 no. 06 (2009): 076. r26Cheng. Hongbo. Cheng, “Casimir effect for parallel plates involving massless Majorana fermions at finite temperature.", Physical Review D 82.4 (2010): 045005. r27Khoo. F.S. Khoo, and L.P. Teo, “Finite temperature Casimir effect of massive fermionic fields in the presence of compact dimensions.", Physics Letters B 703.2 (2011): 199-207. r36Bloch. F. Bloch, “ On the theory of the exchange problem and the remanence phenomenon of ferromagnetics" Z. Phys. 74 295–335 (1932). r37Mats. Takeo Matsubara, “A New Approach to Quantum-Statistical Mechanics", Progress of Theoretical Physics 14 351 (1955). r38Ezawa. Hiroshi Ezawa, Yukio Tomozawa, and Hiroomi Umezawa., “Quantum statistics of fields and multiple production of mesons.", II Nuovo Cimento (1955-1965) 5.4 (1957): 810-841. r38Kubo. R. Kubo, “Statistical-Mechanical Theory of Irreversible Processes. I.", The Physical Society of Japan 12 (1957). r38MaSch. Paual C. Martin and Julian Schwinger, “Theory of many-particle system. I", Phy. Reviw 115 (1959). r38Schwin. J. Schwinger, “Brownian motion of a quantum oscillator.", Journal of Mathematical Physics 2.3 (1961): 407-432. r38Keld. Keldysh, Leonid Veniaminovich., “Diagram technique for nonequilibrium processes.", Zh. Eksp. Teor. Fiz. 47 (1964): 1018. r38Mill. K. T. Mahanthappa, “Multiple production of photons in quantum electrodynamics.", Physical Review 126.1 (1962): 329. r382Mill. P. M. Bakshi and K. T. 
http://arxiv.org/abs/2307.00371v1
20230701154833
Learning Content-enhanced Mask Transformer for Domain Generalized Urban-Scene Segmentation
[ "Qi Bi", "Shaodi You", "Theo Gevers" ]
cs.CV
[ "cs.CV" ]
Domain-generalized urban-scene semantic segmentation (USSS) aims to learn generalized semantic predictions across diverse urban-scene styles. Unlike generic domain-gap settings, USSS is unique in that the semantic categories are often similar across different urban scenes, while the styles can vary significantly due to changes in urban landscapes, weather conditions, lighting, and other factors. Existing approaches typically rely on convolutional neural networks (CNNs) to learn the content of urban scenes. In this paper, we propose a Content-enhanced Mask TransFormer (CMFormer) for domain-generalized USSS. The main idea is to enhance the focus of the mask attention mechanism, the fundamental component of Transformer segmentation models, on content information. To achieve this, we introduce a novel content-enhanced mask attention mechanism. It learns mask queries from both the image feature and its down-sampled counterpart, as lower-resolution image features usually contain more robust content information and are less sensitive to style variations. These features are fused in a Transformer decoder and integrated into a multi-resolution content-enhanced mask attention learning scheme. Extensive experiments conducted on various domain-generalized urban-scene segmentation datasets demonstrate that the proposed CMFormer significantly outperforms existing CNN-based methods for domain-generalized semantic segmentation, achieving improvements of up to 14.00% in terms of mIoU (mean intersection over union). The source code for CMFormer will be made available at https://github.com/BiQiWHU/domain-generalized-urban-scene-segmentation. 
§ INTRODUCTION Urban-scene semantic segmentation (USSS) is a challenging problem because of the large scene variations due to changing landscape, weather, and lighting conditions <cit.>. Unreliable USSS can pose a significant risk to road users. Hence, domain generalization is essential for robust USSS <cit.>. In contrast to common domain generalization, domain-generalized USSS requires special attention because the domain gap is mainly caused by large style variations, whereas the semantics remain largely consistent (see Fig. <ref> for an example). Existing approaches can be divided into two groups. One group focuses on style decoupling. This is usually achieved by a normalization <cit.> or whitening <cit.> transformation. However, the decoupling methodology falls short because the content is not learned in a robust way. The other group is based on adversarial domain training <cit.>. However, these methods usually do not particularly focus on urban styles and therefore their performance is limited. In this paper, we focus on two main objectives for USSS: (1) a concise but expressive representation of the content, and (2) a robust module for handling style changes. For the first goal, we propose to use a Mask Transformer. Transformer segmentation models <cit.> provide more expressive representations than CNN-based methods because their mask attention mechanisms can incorporate long-range dependencies <cit.>, context <cit.>, and saliency <cit.>. For the second goal, mask-level learning is used, which is less sensitive than pixel-level learning in handling style changes <cit.>. To this end, a novel Content-enhanced Mask TransFormer (CMFormer) is proposed. In particular, a content-enhanced mask attention mechanism is introduced, which takes the original image feature together with its down-sampled counterpart as input. The down-sampled features contain more content and are less sensitive to style variations. Both features are fused to learn the context. Then, the content-enhanced mask attention is extended to incorporate multi-resolution features. Large-scale experiments are conducted on various domain-generalized USSS settings, i.e., trained on one dataset from <cit.> as the source domain and validated on the remaining four datasets as the unseen target domains. The experiments show that the proposed CMFormer achieves up to 14.00% mIoU improvement compared to the state of the art (e.g., SAW <cit.>, WildNet <cit.>). The proposed CMFormer also outperforms existing mask-level Transformer segmentation methods and generic segmentation methods pre-trained from foundation models (e.g., SAM <cit.>, SegGPT <cit.>). It also shows state-of-the-art performance on synthetic-to-real and ideal-to-adverse generalization. Our contribution is summarized as follows: * To the best of our knowledge, this is the first attempt to use a ViT for the task of domain-generalized USSS. * A content-enhanced mask attention mechanism is proposed, enhancing the content information from image features. * A multi-resolution content-enhanced mask attention learning scheme is proposed for expressive content representation and style robustness. * Extensive experiments show a large performance improvement over the existing state of the art by up to 14.00% mIoU. § RELATED WORK Domain Generalization has been studied in task-agnostic scenarios in the fields of both machine learning and computer vision. Harary <cit.> considered domain generalization in an unsupervised manner by learning a domain bridge. 
Hu <cit.> proposed a framework for image retrieval in an unsupervised setting. Zhou <cit.> proposed a framework to generalize to new homogeneous domains. Qiao <cit.> and Peng <cit.> proposed to learn domain generalization from a single source domain. Many other methods have been proposed, such as entropy regularization <cit.>, causal matching <cit.>, extrinsic–intrinsic interaction <cit.>, balance invariance <cit.>, batch normalization embeddings <cit.>, and multiple latent domain modeling <cit.>. Domain Generalized Semantic Segmentation can be regarded as an extension of the prior unsupervised domain adaptation segmentation task <cit.>, but requires a larger generalization capability of a model on a variety of target domains. Existing methods focus on the generalization of in-the-wild <cit.>, scribble <cit.> and multi-source images <cit.>, where substantial alterations can occur in both the content and the style. Domain Generalized USSS focuses on the generalization of driving scenes <cit.>. These methods use either a normalization transformation (e.g., IBN <cit.>, IN <cit.>, SAN <cit.>) or a whitening transformation (e.g., IW <cit.>, ISW <cit.>, DIRL <cit.>, SAW <cit.>) on the training domain, to enable the model to generalize better to the target domains. Other advanced methods for domain generalization in segmentation typically rely on external images to incorporate more diverse styles <cit.>, and leverage content consistency across multi-scale features <cit.>. To the best of our knowledge, all of these methods are based on CNNs. Mask Transformer for Semantic Segmentation Earlier ViT-based segmentation models (e.g., SegFormer <cit.>) follow the CNN-based pipeline. Recently, the focus has shifted to mask-level pipelines, which use the queries in the Transformer decoder to learn the masks, e.g., Segmenter <cit.>, MaskFormer <cit.>, Max-DeepLab <cit.>, CMT-DeepLab <cit.> and kMaX-DeepLab <cit.>. More recently, Mask2Former <cit.> further simplified the pipeline of MaskFormer and achieves better performance. § METHODOLOGY §.§ Problem Definition Domain Generalization can be formulated as a worst-case problem <cit.>. Given a source domain 𝒮, a set of unseen target domains 𝒯_1, 𝒯_2, ⋯, and a model parameterized by θ with the task-specific loss ℒ_task, the generic domain generalization task can be formulated as a worst-case problem, given by min_θ sup_{𝒯: D(𝒮;𝒯_1, 𝒯_2, ⋯) ≤ ρ} 𝔼_𝒯 [ℒ_task(θ; 𝒯_1, 𝒯_2, ⋯)], where θ denotes the model parameters, D(𝒮; 𝒯_1, 𝒯_2, ⋯) corresponds to the distance between the source domain 𝒮 and the target domains 𝒯, and ρ denotes the constraint threshold. Domain-generalized USSS is challenging as it has similar content (i.e., semantics) but variation of style (e.g., urban landscapes, weather, season, illumination) among the domains 𝒮, 𝒯_1, 𝒯_2, ⋯. [Figure: (a) feature domain for urban scenes, where the content is similar while the styles have clear gaps; (b) the goal is robust style handling and stable content representation.] Visual examples are provided in Fig. <ref>. Here we analyze the feature domain. As shown in Fig. <ref>a, while the content distribution is similar, the style distribution is separated and accounts for the domain gap. Our goal is to find a representation where both style and content are similarly distributed among domains, thereby minimizing the domain gap (Fig. <ref>b). §.§ Content-enhanced Mask Attention Our proposed idea is to enhance the focus of the mask attention mechanism <cit.> on content information from urban scenes. 
This enhancement aids the segmentation masks in concentrating on scene content while reducing sensitivity to style variations. Since content and style information are usually more pronounced in the low- and high-resolution features of an image, respectively, it is intuitive to amplify the influence of low-resolution features in the self-attention learning. Mask Attention Mechanism learns the query features as the segmentation masks by introducing a mask attention matrix based on the self-attention mechanism. Let 𝐅_l ∈ℝ^(W_l · H_l) × C_F denote the image features from the image decoder with a spatial width and height of W_l and H_l, and let 𝐗_l ∈ℝ^N × C denote the features of the l^th layer in a Transformer decoder, where N is the number of semantic categories. Further, C_F and C denote the channel dimensions of the image features and the self-attention embeddings, respectively. 𝐗_0 refers to the input query features of the Transformer decoder. The key 𝐊_l ∈ℝ^(W_l · H_l) × C, value 𝐕_l ∈ℝ^(W_l · H_l) × C, and query 𝐐_l ∈ℝ^N × C are computed by linear transformations f_K, f_V and f_Q, respectively, given by 𝐊_l = f_K(𝐅_l-1), 𝐕_l = f_V(𝐅_l-1), 𝐐_l = f_Q(𝐗_l-1). Then, the query feature 𝐗_l is computed by 𝐗_l = softmax (ℳ_l-1 + 𝐐_l 𝐊_l^𝖳 ) 𝐕_l + 𝐗_l-1, where ℳ_l-1 ∈ℝ^N × H_l W_l is an additive mask attention matrix derived from the resized mask prediction of the previous (l-1)^th layer, binarized with a threshold of 0.5. ℳ_0 is binarized and resized from 𝐗_0. It restricts attention to the foreground regions of an image, given by ℳ_l-1(x,y) = 0 if the binarized mask prediction at (x,y) equals 1, and ℳ_l-1(x,y) = -∞ otherwise. Content Enhancement is incorporated into the mask attention mechanism by acquiring supplementary query features from low-resolution image features. This is based on the understanding that content information tends to be more discriminative, while style variations are less pronounced, in low-resolution image features. To this end, a parallel branch that leverages low-resolution image features is introduced in each Transformer decoder layer (Fig. <ref>a). The content-enhanced image feature 𝐅_l^c ∈ℝ^(W_l/2 · H_l/2) × C_F is computed by average pooling avgpool from the original image feature 𝐅_l ∈ℝ^(W_l · H_l) × C_F by 𝐅_l^c = avgpool(𝐅_l). Following Eq. <ref>, for the l^th Transformer decoder layer, the content-enhanced key 𝐊^c_l ∈ℝ^(W_l/2 · H_l/2) × C and value 𝐕^c_l ∈ℝ^(W_l/2 · H_l/2) × C are computed from the linear transformation of the down-sampled image feature 𝐅_l^c. Then, similar to Eq. <ref>, the content-enhanced query feature 𝐗_l^c is computed as 𝐗_l^c = softmax (ℳ_l-1^c + 𝐐_l 𝐊_l^c 𝖳 ) 𝐕_l^c + 𝐗_l-1^c, where ℳ_l-1^c ∈ℝ^N × (H_l/2 · W_l/2) follows the computation of ℳ_l-1 defined in Eq. <ref>. Content-aware Query Feature Fusion is proposed to merge the query features 𝐗_l with the content-enhanced query features 𝐗_l^c, allowing the content to be highlighted while retaining the details necessary for semantic prediction. The fused feature 𝐗_l^final serves as the final output of the l^th Transformer decoder layer and is computed as 𝐗_l^final = h_l([𝐗_l, 𝐗_l^c]), where [·, ·] represents the concatenation operation and h_l(·) is a linear layer. §.§ Content-enhancement for Multi-resolution Features In the process of decoding domain-generalized segmentation predictions from coarse-to-fine image features, it is crucial to enhance the content information at each resolution. This ensures that the content remains highlighted in the query features of the masks, allowing the model to learn segmentation masks that are less sensitive to style variations. 
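Before moving to the multi-resolution case, the single-layer mechanism of the equations above can be summarized in code. The following PyTorch snippet is an illustrative sketch only: the tensor layout, the pooling factor of 2, the sharing of the key/value projections between the two branches, and all module and argument names are our assumptions for readability rather than the authors' reference implementation, and edge cases (e.g., fully masked rows in the softmax) are ignored.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContentEnhancedMaskAttention(nn.Module):
    """One decoder layer of the content-enhanced mask attention (sketch)."""

    def __init__(self, c_feat: int, c_embed: int):
        super().__init__()
        self.f_k = nn.Linear(c_feat, c_embed)    # f_K, applied to image features
        self.f_v = nn.Linear(c_feat, c_embed)    # f_V, applied to image features
        self.f_q = nn.Linear(c_embed, c_embed)   # f_Q, applied to query features
        self.h = nn.Linear(2 * c_embed, c_embed)  # h_l, content-aware fusion

    @staticmethod
    def _masked_attend(q, k, v, mask_prob):
        # mask_prob: (B, N, HW) mask prediction of the previous layer, resized
        # to the key resolution; threshold at 0.5 and turn it into an additive
        # bias that is 0 on foreground positions and -inf elsewhere.
        bias = torch.zeros_like(mask_prob).masked_fill(mask_prob <= 0.5, float("-inf"))
        attn = torch.softmax(bias + q @ k.transpose(-2, -1), dim=-1)
        return attn @ v

    def forward(self, feat, x_prev, mask_prob, mask_prob_c):
        # feat: (B, H*W, C_F) image feature at this layer's resolution;
        # x_prev: (B, N, C) query features from the previous layer.
        b, hw, c_f = feat.shape
        h = w = int(hw ** 0.5)  # assumes a square feature map for brevity
        # Content branch: halve the spatial resolution by average pooling.
        feat_c = F.avg_pool2d(
            feat.transpose(1, 2).reshape(b, c_f, h, w), kernel_size=2
        ).flatten(2).transpose(1, 2)

        q = self.f_q(x_prev)  # the query is shared by both branches
        x = self._masked_attend(q, self.f_k(feat), self.f_v(feat), mask_prob) + x_prev
        x_c = self._masked_attend(q, self.f_k(feat_c), self.f_v(feat_c), mask_prob_c) + x_prev
        # Content-aware query feature fusion: concatenate and project.
        return self.h(torch.cat([x, x_c], dim=-1))
```

In the full model, the mask predictions fed back as ℳ would themselves be re-estimated from the query features after every layer; they are passed in as arguments here only to keep the sketch short.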
Multi-resolution Features include the ×32, ×16 and ×8 resolution features from the image decoder, denoted as 𝐅^× 32, 𝐅^× 16 and 𝐅^× 8, respectively. Following Eq. <ref>, <ref>, the key, value and query for 𝐅^× 32, 𝐅^× 16 and 𝐅^× 8 are denoted by 𝐊^× 32, 𝐕^× 32, 𝐐^× 32, 𝐊^× 16, 𝐕^× 16, 𝐐^× 16 and 𝐊^× 8, 𝐕^× 8, 𝐐^× 8, respectively. Content-enhanced Multi-resolution Features are down-sampled from the ×32, ×16 and ×8 resolution features 𝐅^× 32, 𝐅^× 16 and 𝐅^× 8 into 𝐅^× 32, c, 𝐅^× 16, c and 𝐅^× 8, c following Eq. <ref>. Also following Eq. <ref>, <ref>, the key, value and query for 𝐅^× 32, c, 𝐅^× 16, c and 𝐅^× 8, c are denoted as 𝐊^× 32, c, 𝐕^× 32, c, 𝐐^× 32, c, 𝐊^× 16, c, 𝐕^× 16, c, 𝐐^× 16, c and 𝐊^× 8, c, 𝐕^× 8, c, 𝐐^× 8, c, respectively. Multi-resolution Feature Fusion to the Transformer Decoder directly follows the original Mask2Former <cit.>, which feeds the image features of the different resolutions in an alternating manner. As there are 9 self-attention layers in the Transformer decoder, the first, fourth and seventh layers take the × 32 image features as input (black down-arrows in Fig. <ref>b), the second, fifth and eighth layers take the × 16 image features as input (blue down-arrows in Fig. <ref>b), and the third, sixth and ninth layers take the × 8 image features as input (green down-arrows in Fig. <ref>b). Taking the first, fourth and seventh layers with × 32 image features as an example (here l = 1, 4, 7), the content-enhanced learning process is given by 𝐗_l = softmax (ℳ_l-1 + 𝐐_l^× 32 𝐊_l^× 32 𝖳 ) 𝐕_l^× 32 + 𝐗_l-1^final, 𝐗_l^c = softmax (ℳ_l-1^c + 𝐐_l^× 32, c 𝐊^× 32, c 𝖳 ) 𝐕^× 32, c + 𝐗_l-1^final. Following Eq. <ref>, 𝐗_l^c and 𝐗_l are merged to compute the final output of this decoder layer, denoted as 𝐗_l^final. For the first layer, 𝐗_0^final is equivalent to 𝐗_0, the input query features of the Transformer decoder. The content-enhanced learning of the second, fifth and eighth layers follows Eq. <ref>, <ref> but uses 𝐊^× 16, c, 𝐕^× 16, c, 𝐐^× 16, c and 𝐊^× 16, 𝐕^× 16, 𝐐^× 16 as input. The third, sixth and ninth layers also follow Eq. <ref>, <ref> but take 𝐊^× 8, c, 𝐕^× 8, c, 𝐐^× 8, c and 𝐊^× 8, 𝐕^× 8, 𝐐^× 8 as input. §.§ Network Architecture and Implementation Details The overall framework is shown in Fig. <ref>b. The Swin-Base Transformer <cit.> is used as the backbone, and the image decoder is inherited from Mask2Former <cit.>. The model pre-trained on ImageNet <cit.> is utilized for initialization. The image decoder from <cit.> uses the off-the-shelf multi-scale deformable attention Transformer (MSDeformAttn) <cit.> with the default setting in <cit.>. Taking the image features from the Swin-Base encoder as input, six MSDeformAttn layers are used to progressively up-sample the image features to × 32, × 16, × 8, and × 4 resolution, respectively. The 1/4 resolution feature map is fused with the features from the Transformer decoder for dense prediction. Following the default setting of MaskFormer <cit.> and Mask2Former <cit.>, the final loss function ℒ is a linear combination of the binary cross-entropy loss ℒ_ce, the dice loss ℒ_dice, and the classification loss ℒ_cls, given by ℒ = λ_ce ℒ_ce + λ_dice ℒ_dice + λ_cls ℒ_cls, with hyper-parameters λ_ce = λ_dice = 5.0 and λ_cls = 2.0, as in the Mask2Former default without any tuning. The Adam optimizer is used with an initial learning rate of 1×10^-4. The weight decay is set to 0.05. Training terminates after 50 epochs. 
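To make the decoder schedule and training objective above concrete, the sketch below wires nine decoder layers to the alternating ×32/×16/×8 features and combines the three loss terms with the stated weights. It builds on the ContentEnhancedMaskAttention sketch from the previous section; the dictionary-based interface, the query initialisation, and the factory function are assumptions for illustration, not the reference implementation.

```python
import torch
import torch.nn as nn

NUM_LAYERS = 9
# Layers 1, 4, 7 attend to the x32 features, layers 2, 5, 8 to x16, and layers 3, 6, 9 to x8.
RESOLUTION_SCHEDULE = ["x32", "x16", "x8"] * (NUM_LAYERS // 3)


class CMFormerDecoder(nn.Module):
    def __init__(self, layer_factory, num_queries: int = 100, c_embed: int = 256):
        super().__init__()
        self.layers = nn.ModuleList([layer_factory() for _ in range(NUM_LAYERS)])
        self.query_embed = nn.Embedding(num_queries, c_embed)  # X_0

    def forward(self, feats, masks, masks_c):
        # feats, masks, masks_c: dicts keyed by "x32", "x16", "x8"; in the full
        # model the masks would be re-predicted from x after every layer.
        batch = feats["x32"].size(0)
        x = self.query_embed.weight.unsqueeze(0).expand(batch, -1, -1)
        for layer, res in zip(self.layers, RESOLUTION_SCHEDULE):
            x = layer(feats[res], x, masks[res], masks_c[res])  # X_l^final
        return x


def total_loss(l_ce, l_dice, l_cls, w_ce=5.0, w_dice=5.0, w_cls=2.0):
    """Combined objective with the default Mask2Former weights reported above."""
    return w_ce * l_ce + w_dice * l_dice + w_cls * l_cls


# Optimiser settings as reported above (model is a placeholder name):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=0.05)
```

The alternating order corresponds to the Mask2Former default adopted here; the ablation study in the experiments compares it against a plain and a descending order, which in this sketch amounts to changing only RESOLUTION_SCHEDULE.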
§ EXPERIMENT §.§ Dataset & Evaluation Protocols Building upon prior research in domain-generalized USSS, our experiments utilize five different semantic segmentation datasets. Specifically, CityScapes <cit.> provides 2,975 and 500 well-annotated samples for training and validation, respectively. These driving scenes are captured in German cities at a resolution of 2048×1024. BDD-100K <cit.> also provides diverse urban driving scenes at a resolution of 1280×720; 7,000 and 1,000 finely annotated samples are provided for training and validation of semantic segmentation, respectively. Mapillary <cit.> is also a real-scene, large-scale semantic segmentation dataset with 25,000 samples. SYNTHIA <cit.> is a large-scale synthetic dataset and provides 9,400 images at a resolution of 1280×760. GTA5 <cit.> is a synthetic semantic segmentation dataset rendered by the GTAV game engine. It provides 24,966 simulated urban-street samples at a resolution of 1914×1052. We use C, B, M, S and G to denote these five datasets. Following prior domain-generalized USSS works <cit.>, the segmentation model is trained on one dataset as the source domain and validated on the remaining four datasets as the target domains (a short code sketch of this protocol is given below). Three settings are used: 1) G to C, B, M, S; 2) S to C, B, M, G; and 3) C to B, M, G, S. mIoU (in %) is used as the validation metric. All of our experiments are performed three times and averaged for fair comparison. All reported baseline results are cited directly from prior works using the ResNet-50 backbone <cit.>. §.§ Comparison with State-of-the-art Domain Generalized USSS Methods GTA5 Source Domain The third column of Table <ref> reports the performance on the target domains C, B, M and S, respectively. The proposed CMFormer shows a performance improvement of 10.66%, 10.63%, 14.00% and 12.46% over existing state-of-the-art CNN-based methods on each target domain, respectively. These outcomes demonstrate the feature generalization ability of the proposed CMFormer. Notice that the source domain GTA5 is a synthetic dataset, while the target domains consist of real images, which further validates the performance of the proposed method. SYNTHIA Source Domain The fourth column of Table <ref> reports the performance. The proposed CMFormer shows a 5.67%, 8.73% and 11.49% mIoU performance gain over the best CNN-based methods on C, M and G, respectively. However, on the BDD-100K (B) dataset, the semantic-aware whitening (SAW) method <cit.> outperforms the proposed CMFormer by 1.80% mIoU. Nevertheless, the proposed CMFormer still outperforms the remaining state-of-the-art methods. The performance gain of the proposed CMFormer when trained on the SYNTHIA dataset is not as significant as when it is trained on the CityScapes or GTA5 dataset. The explanation may be that the SYNTHIA dataset has far fewer samples than the GTA5 dataset, i.e., 9,400 vs. 24,966, so that a Transformer may be under-trained. CityScapes Source Domain The last column of Table <ref> reports the performance. The proposed CMFormer shows a performance gain of 6.32%, 10.43%, 9.50% and 12.11% mIoU on the B, M, G and S datasets over the state-of-the-art CNN-based method. As the BDD-100K dataset contains many night-time urban-street images, it is particularly challenging for existing domain-generalized USSS methods. Still, a performance gain of 6.32% is observed for the proposed CMFormer. 
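The leave-one-domain-out protocol and the mIoU metric used throughout these comparisons can be summarized as follows. This is an illustrative sketch: the function names, the 19-class assumption (the standard CityScapes evaluation classes), and the simple confusion-matrix implementation are ours and are not tied to the authors' evaluation code.

```python
import numpy as np

DOMAINS = ["C", "B", "M", "S", "G"]  # CityScapes, BDD-100K, Mapillary, SYNTHIA, GTA5


def confusion_matrix(pred, gt, num_classes, ignore_index=255):
    """Accumulate a (num_classes x num_classes) confusion matrix for one image."""
    valid = gt != ignore_index
    idx = num_classes * gt[valid].astype(np.int64) + pred[valid].astype(np.int64)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)


def mean_iou(conf):
    """mIoU in percent; classes that never occur are averaged with IoU 0 for simplicity."""
    inter = np.diag(conf).astype(np.float64)
    union = conf.sum(axis=0) + conf.sum(axis=1) - np.diag(conf)
    return 100.0 * float(np.mean(inter / np.maximum(union, 1)))


def evaluate_generalization(train_fn, predict_fn, source, num_classes=19, runs=3):
    """Train on `source`, report mIoU on every unseen target domain, averaged over runs."""
    targets = [d for d in DOMAINS if d != source]
    scores = {t: [] for t in targets}
    for _ in range(runs):
        model = train_fn(source)
        for t in targets:
            conf = np.zeros((num_classes, num_classes), dtype=np.int64)
            for pred, gt in predict_fn(model, t):  # yields (prediction, label) pairs
                conf += confusion_matrix(pred, gt, num_classes)
            scores[t].append(mean_iou(conf))
    return {t: float(np.mean(v)) for t, v in scores.items()}
```

Per-class IoUs can be read off the same confusion matrix when a class-wise breakdown is needed.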
§.§ Ablation Studies On Content-enhancement of Each Resolution Table <ref> reports the performance of the proposed CMFormer when the × 32, × 16 and × 8 image features are equipped with content enhancement. When no image features are equipped with content enhancement, CMFormer degrades into a Mask2Former <cit.> with a Swin-Base backbone. When implementing content enhancement only on the × 32 image feature, the down-sampled × 128 image feature may propagate little content information to the segmentation mask, and only a performance gain of 0.74%, 1.43%, 0.37% and 0.64% on the B, M, G and S target domains is observed. When further implementing content enhancement on the × 16 image feature, the enhanced content information begins to play a role, and an additional performance gain of 1.93%, 2.17%, 0.12% and 0.58% is observed. Then, content enhancement on the × 8 image feature also demonstrates a significant impact on the generalization ability. A similar observation can be made for the S→C, B, M, G setting. To better understand the content enhancement for mask attention, some segmentation masks learned by the existing mask attention and by our content-enhanced mask attention are visualized in the second and third rows of Fig. <ref>. The proposed content-enhanced mask attention is less sensitive to style variation, and the learned masks are better able to separate the key objects from the background. On Multi-resolution Feature Embedding Strategy The alternating embedding order is inherited from Mask2Former <cit.>. To investigate its effectiveness, it is compared with two alternative solutions: (1) plain sequence embedding (denoted as plain) and (2) alternating embedding in descending order (denoted as descent). For the plain sequence embedding, the first, middle and last three decoder layers are fed with the ×32, ×16 and ×8 image features, respectively. For the descending order, each group of three decoder layers is fed with the image features in the order ×8, ×16 and ×32. The results are listed in Table <ref>. The embedding strategy inherited from <cit.> performs slightly better than the two other solutions. §.§ Comparison with Recent State-of-the-art in Related Tasks The proposed CMFormer is compared against three categories of recent state-of-the-art approaches from related tasks. While these approaches do not specifically focus on domain-generalized USSS, we compare them to further highlight the strength of the proposed CMFormer. Specifically, we consider mask-level Transformer segmentation models (Segmenter <cit.>, MaskFormer <cit.>, Mask2Former <cit.>, denoted as ℳ), combinations of state-of-the-art CNN-based domain generalization techniques with segmentation Transformers (denoted as 𝒢), and plain segmentation Transformers pre-trained from foundation models (SAM <cit.>, SegGPT <cit.>, denoted as ℱ). It can be seen that the proposed CMFormer significantly outperforms the other methods. It is also interesting to observe that the whitening transformation (IW <cit.>, ISW <cit.>, SAW <cit.>) used in CNN-based domain-generalized segmentation frameworks does not transfer to the mask-level Transformer segmentation pipeline. §.§ Extra Experiments on Generalization To Adverse Domains Beyond the standard validation protocols of existing methods <cit.>, we further validate the proposed CMFormer's performance by benchmarking it on the Adverse Conditions Dataset with Correspondences (ACDC) <cit.>. 
It is the largest semantic segmentation dataset covering a variety of adverse conditions, including rain, fog, night and snow. We treat fog, night, rain and snow as four different unseen domains, and directly use the model pre-trained on CityScapes for inference. The results are shown in Table <ref>. CMFormer significantly outperforms existing domain-generalized segmentation methods (IN <cit.>, IterNorm <cit.>, IW <cit.>, ISW <cit.>, ISSA <cit.>) by up to 10.3%, 0.5%, 11.6% and 11.1% on the fog, night, rain and snow domains, respectively. It also shows a superior performance of 4.4%, 4.0% and 1.8% over the second-best method, Mask2Former, on the fog, rain and snow domains, respectively. From Synthetic Domain to Real Domain We also test the generalization ability of CMFormer when trained on the synthetic domains (G+S) and validated on the three real-world domains B, C and M. The results are shown in Table <ref>. The proposed CMFormer significantly outperforms the instance-normalization-based (IBN <cit.>), whitening-transformation-based (ISW <cit.>) and adversarial-domain-training-based (SHADE <cit.>, AdvStyle <cit.>) methods by 10% mIoU. §.§ Visualization T-SNE visualization To better understand the handling of style changes and the learning of stable, robust representations, the feature space of samples when using B, M, G and S as target domains is shown in Fig. <ref> using t-SNE. ISW <cit.>, a typical style-decoupling method, has difficulty handling the style variation, but the semantic embeddings from each domain remain close to each other. AdvStyle <cit.>, a typical adversarial style augmentation method, can better mix the styles, but the distance between semantic embeddings is not properly reduced. The original mask attention from <cit.> can reduce the distance between semantic embeddings but still has difficulty mixing the different styles. In contrast, the proposed CMFormer not only reduces the distance between semantic embeddings, but also allows the styles to be more uniformly distributed, thereby minimizing the domain gap. Qualitative segmentation predictions Some generalized segmentation prediction results are shown for the C → B, M, G, S setting (Fig. <ref>) and for the C → adverse domain setting (Fig. <ref>). Compared with the CNN-based methods, the proposed CMFormer produces better segmentation predictions, especially in terms of the completeness of objects. § CONCLUSION In this paper, we explored the feasibility of adapting the mask Transformer for domain-generalized urban-scene semantic segmentation (USSS). To address the challenges of style variation and robust content representation, we proposed a content-enhanced mask attention mechanism. This mechanism is designed to capture more resilient content features while being less sensitive to style variations. Furthermore, we extended it to incorporate multi-resolution features and integrated it into a novel framework called the Content-enhanced Mask TransFormer (CMFormer). To evaluate the effectiveness of CMFormer, we conducted extensive experiments on multiple settings. The results demonstrated the superior performance of CMFormer compared to existing domain-generalized USSS methods. Limitation Discussion & Broader Social Impact. The proposed content-enhanced mask attention mechanism is derived from the self-attention mechanism and can be seamlessly integrated into ViT-based segmentation pipelines. 
The experimental results demonstrate its superior performance, thereby indicating the potential to shift the focus of domain-generalized USSS towards ViT based pipelines. Given the criticality of safety-related road applications, the proposed method holds significant importance. It has the potential to enhance the accuracy and reliability of semantic segmentation models, thereby contributing to safer and more efficient autonomous systems. Overall, the proposed content-enhanced mask attention mechanism not only offers promising advancements in domain-generalized USSS but also holds potential for broader applications in real-world scenarios. [Carlucci et al.(2019)Carlucci, D'Innocente, Bucci, Caputo, and Tommasi]carlucci2019domain Fabio M Carlucci, Antonio D'Innocente, Silvia Bucci, Barbara Caputo, and Tatiana Tommasi. Domain generalization by solving jigsaw puzzles. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2229–2238, 2019. [Chattopadhyay et al.(2020)Chattopadhyay, Balaji, and Hoffman]chattopadhyay2020learning Prithvijit Chattopadhyay, Yogesh Balaji, and Judy Hoffman. Learning to balance specificity and invariance for in and out of domain generalization. In European Conference on Computer Vision, pages 301–318. Springer, 2020. [Chen et al.(2018)Chen, Zhu, Papandreou, Schroff, and Adam]chen2018encoder Liang-Chieh Chen, Yukun Zhu, George Papandreou, Florian Schroff, and Hartwig Adam. Encoder-decoder with atrous separable convolution for semantic image segmentation. In Proceedings of the European conference on computer vision (ECCV), pages 801–818, 2018. [Chen et al.(2022)Chen, Huang, Tsai, Yang, Ding, and Kuo]chen2022learning Wei-Ting Chen, Zhi-Kai Huang, Cheng-Che Tsai, Hao-Hsiang Yang, Jian-Jiun Ding, and Sy-Yen Kuo. Learning multiple adverse weather removal via two-stage knowledge learning and multi-contrastive regularization: Toward a unified model. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 17653–17662, 2022. [Cheng et al.(2021)Cheng, Schwing, and Kirillov]cheng2021mask Bowen Cheng, Alex Schwing, and Alexander Kirillov. Per-pixel classification is not all you need for semantic segmentation. Advances in Neural Information Processing Systems, 34: 17864–17875, 2021. [Cheng et al.(2022)Cheng, Misra, Schwing, Kirillov, and Girdhar]cheng2021per Bowen Cheng, Ishan Misra, Alexander G Schwing, Alexander Kirillov, and Rohit Girdhar. Masked-attention mask transformer for universal image segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1290–1299, 2022. [Choi et al.(2021)Choi, Jung, Yun, Kim, Kim, and Choo]Robust2021 S. Choi, S. Jung, H. Yun, J. Kim, S. Kim, and J. Choo. Robustnet: Improving domain generalization in urban-scene segmentation via instance selective whitening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 11580–11590, 2021. [Cordts et al.(2016)Cordts, Omran, Ramos, Rehfeld, Enzweiler, Benenson, Franke, Roth, and Schiele]cordts2016cityscapes Marius Cordts, Mohamed Omran, Sebastian Ramos, Timo Rehfeld, Markus Enzweiler, Rodrigo Benenson, Uwe Franke, Stefan Roth, and Bernt Schiele. The cityscapes dataset for semantic urban scene understanding. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3213–3223, 2016. 
[Deng et al.(2009)Deng, Dong, Socher, Li, Li, and Fei-Fei]deng2009imagenet Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pages 248–255. Ieee, 2009. [Diaz-Ruiz et al.(2022)Diaz-Ruiz, Xia, You, Nino, Chen, Monica, Chen, Luo, Wang, Emond, et al.]diaz2022ithaca365 Carlos A Diaz-Ruiz, Youya Xia, Yurong You, Jose Nino, Junan Chen, Josephine Monica, Xiangyu Chen, Katie Luo, Yan Wang, Marc Emond, et al. Ithaca365: Dataset and driving perception under repeated and challenging weather conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 21383–21392, 2022. [Dosovitskiy et al.(2020)Dosovitskiy, Beyer, Kolesnikov, Weissenborn, Zhai, Unterthiner, Dehghani, Minderer, Heigold, Gelly, et al.]dosovitskiy2020image Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2020. [Fu et al.(2019)Fu, Liu, Tian, Li, Bao, Fang, and Lu]fu2019dual Jun Fu, Jing Liu, Haijie Tian, Yong Li, Yongjun Bao, Zhiwei Fang, and Hanqing Lu. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 3146–3154, 2019. [Harary et al.(2022)Harary, Schwartz, Arbelle, Staar, Abu-Hussein, Amrani, Herzig, Alfassy, Giryes, Kuehne, et al.]harary2022unsupervised Sivan Harary, Eli Schwartz, Assaf Arbelle, Peter Staar, Shady Abu-Hussein, Elad Amrani, Roei Herzig, Amit Alfassy, Raja Giryes, Hilde Kuehne, et al. Unsupervised domain generalization by learning a bridge across domains. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5280–5290, 2022. [He et al.(2016)He, Zhang, Ren, and Sun]he2016deep Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016. [Hu and Lee(2022)]hu2022feature Conghui Hu and Gim Hee Lee. Feature representation learning for unsupervised cross-domain image retrieval. In European Conference on Computer Vision, pages 529–544. Springer, 2022. [Huang et al.(2019a)Huang, Zhou, Zhu, Liu, and Shao]instancenorm2019 L. Huang, Y. Zhou, F. Zhu, L. Liu, and L. Shao. Iterative normalization: Beyond standardization towards efficient whitening. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 4874–4883, 2019a. [Huang et al.(2019b)Huang, Zhou, Zhu, Liu, and Shao]huang2019iterative Lei Huang, Yi Zhou, Fan Zhu, Li Liu, and Ling Shao. Iterative normalization: Beyond standardization towards efficient whitening. In Proceedings of the ieee/cvf conference on computer vision and pattern recognition, pages 4874–4883, 2019b. [Huang et al.(2020)Huang, Wang, Xing, and Huang]huang2020self Zeyi Huang, Haohan Wang, Eric P Xing, and Dong Huang. Self-challenging improves cross-domain generalization. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16, pages 124–140. Springer, 2020. 
[Ji et al.(2021)Ji, Yu, Wu, Ma, Bian, Bi, Li, Liu, Cheng, and Zheng]ji2021learning Wei Ji, Shuang Yu, Junde Wu, Kai Ma, Cheng Bian, Qi Bi, Jingjing Li, Hanruo Liu, Li Cheng, and Yefeng Zheng. Learning calibrated medical image segmentation via multi-rater agreement modeling. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12341–12351, 2021. [Kim et al.(2022)Kim, Lee, Park, Min, and Sohn]kim2022pin Jin Kim, Jiyoung Lee, Jungin Park, Dongbo Min, and Kwanghoon Sohn. Pin the memory: Learning to generalize semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4350–4360, 2022. [Kirillov et al.(2023)Kirillov, Mintun, Ravi, Mao, Rolland, Gustafson, Xiao, Whitehead, Berg, Lo, et al.]kirillov2023segment Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. [Lambert et al.(2020)Lambert, Liu, Sener, Hays, and Koltun]lambert2020mseg John Lambert, Zhuang Liu, Ozan Sener, James Hays, and Vladlen Koltun. Mseg: A composite dataset for multi-domain semantic segmentation. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 2879–2888, 2020. [Lee et al.(2022)Lee, Seong, Lee, and Kim]lee2022wildnet Suhyeon Lee, Hongje Seong, Seongwon Lee, and Euntai Kim. Wildnet: Learning domain generalized semantic segmentation from the wild. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 9936–9946, 2022. [Li et al.(2021)Li, Namkoong, and Xia]li2021evaluating Mike Li, Hongseok Namkoong, and Shangzhou Xia. Evaluating model performance under worst-case subpopulations. Advances in Neural Information Processing Systems, 34:0 17325–17334, 2021. [Li et al.(2023)Li, Zhang, Keuper, and Khoreva]li2023intra Yumeng Li, Dan Zhang, Margret Keuper, and Anna Khoreva. Intra-source style augmentation for improved domain generalization. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 509–519, 2023. [Lin et al.(2017)Lin, Milan, Shen, and Reid]lin2017refinenet Guosheng Lin, Anton Milan, Chunhua Shen, and Ian Reid. Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1925–1934, 2017. [Liu et al.(2021)Liu, Lin, Cao, Hu, Wei, Zhang, Lin, and Guo]liu2021swin Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, and Baining Guo. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (CVPR), pages 10012–10022, 2021. [Liu et al.(2022)Liu, Hu, Lin, Yao, Xie, Wei, Ning, Cao, Zhang, Dong, et al.]liu2022swin Ze Liu, Han Hu, Yutong Lin, Zhuliang Yao, Zhenda Xie, Yixuan Wei, Jia Ning, Yue Cao, Zheng Zhang, Li Dong, et al. Swin transformer v2: Scaling up capacity and resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 12009–12019, 2022. [Mahajan et al.(2021)Mahajan, Tople, and Sharma]mahajan2021domain Divyat Mahajan, Shruti Tople, and Amit Sharma. Domain generalization using causal matching. In International Conference on Machine Learning, pages 7313–7324. PMLR, 2021. [Matsuura and Harada(2020)]matsuura2020domain Toshihiko Matsuura and Tatsuya Harada. 
Domain generalization using a mixture of multiple latent domains. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 11749–11756, 2020. [Mirza et al.(2022)Mirza, Masana, Possegger, and Bischof]mirza2022efficient M Jehanzeb Mirza, Marc Masana, Horst Possegger, and Horst Bischof. An efficient domain-incremental learning approach to drive in all weather conditions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 3001–3011, 2022. [Motiian et al.(2017)Motiian, Piccirilli, Adjeroh, and Doretto]motiian2017unified Saeid Motiian, Marco Piccirilli, Donald A Adjeroh, and Gianfranco Doretto. Unified deep supervised domain adaptation and generalization. In Proceedings of the IEEE international conference on computer vision, pages 5715–5725, 2017. [Neuhold et al.(2017)Neuhold, Ollmann, Rota Bulo, and Kontschieder]neuhold2017mapillary Gerhard Neuhold, Tobias Ollmann, Samuel Rota Bulo, and Peter Kontschieder. The mapillary vistas dataset for semantic understanding of street scenes. In Proceedings of the IEEE international conference on computer vision, pages 4990–4999, 2017. [Pan et al.(2022)Pan, Bi, Yang, Zhu, and Bian]pan2022label Junwen Pan, Qi Bi, Yanzhan Yang, Pengfei Zhu, and Cheng Bian. Label-efficient hybrid-supervised learning for medical image segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2026–2034, 2022. [Pan et al.(2018)Pan, Luo, Shi, and Tang]IBNet2018 X. Pan, P. Luo, J. Shi, and X. Tang. Two at once: Enhancing learning and generalization capacities via ibn-net. In Proceedings of the European Conference on Computer Vision (ECCV), pages 464–479, 2018. [Pan et al.(2019)Pan, Zhan, Shi, Tang, and Luo]SW2019 X. Pan, X. Zhan, J. Shi, X. Tang, and P. Luo. Switchable whitening for deep representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 1863–1871, 2019. [Peng et al.(2021)Peng, Lei, Liu, Zhang, and Liu]peng2021global Duo Peng, Yinjie Lei, Lingqiao Liu, Pingping Zhang, and Jun Liu. Global and local texture randomization for synthetic-to-real semantic segmentation. IEEE Transactions on Image Processing, 30:0 6594–6608, 2021. [Peng et al.(2022a)Peng, Lei, Hayat, Guo, and Li]peng2022semantic Duo Peng, Yinjie Lei, Munawar Hayat, Yulan Guo, and Wen Li. Semantic-aware domain generalized segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2594–2605, 2022a. [Peng et al.(2022b)Peng, Qiao, and Zhao]peng2022out Xi Peng, Fengchun Qiao, and Long Zhao. Out-of-domain generalization from a single source: An uncertainty quantification approach. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022b. [Piva et al.(2023)Piva, de Geus, and Dubbelman]piva2023empirical Fabrizio J Piva, Daan de Geus, and Gijs Dubbelman. Empirical generalization study: Unsupervised domain adaptation vs. domain generalization methods for semantic segmentation in the wild. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 499–508, 2023. [Qiao et al.(2020)Qiao, Zhao, and Peng]qiao2020learning Fengchun Qiao, Long Zhao, and Xi Peng. Learning to learn single domain generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 12556–12565, 2020. [Richter et al.(2016)Richter, Vineet, Roth, and Koltun]richter2016playing Stephan R Richter, Vibhav Vineet, Stefan Roth, and Vladlen Koltun. 
Playing for data: Ground truth from computer games. In European conference on computer vision, pages 102–118. Springer, 2016. [Ros et al.(2016)Ros, Sellart, Materzynska, Vazquez, and Lopez]ros2016synthia German Ros, Laura Sellart, Joanna Materzynska, David Vazquez, and Antonio M Lopez. The synthia dataset: A large collection of synthetic images for semantic segmentation of urban scenes. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 3234–3243, 2016. [Sakaridis et al.(2021)Sakaridis, Dai, and Van Gool]sakaridis2021acdc Christos Sakaridis, Dengxin Dai, and Luc Van Gool. Acdc: The adverse conditions dataset with correspondences for semantic driving scene understanding. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 10765–10775, 2021. [Segu et al.(2023)Segu, Tonioni, and Tombari]segu2023batch Mattia Segu, Alessio Tonioni, and Federico Tombari. Batch normalization embeddings for deep domain generalization. Pattern Recognition, 135:0 109115, 2023. [Simonyan and Zisserman(2014)]simonyan2014very Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014. [Strudel et al.(2021)Strudel, Garcia, Laptev, and Schmid]strudel2021segmenter Robin Strudel, Ricardo Garcia, Ivan Laptev, and Cordelia Schmid. Segmenter: Transformer for semantic segmentation. In Proceedings of the IEEE/CVF international conference on computer vision, pages 7262–7272, 2021. [Tjio et al.(2022)Tjio, Liu, Zhou, and Goh]tjio2022adversarial Gabriel Tjio, Ping Liu, Joey Tianyi Zhou, and Rick Siow Mong Goh. Adversarial semantic hallucination for domain generalized semantic segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, pages 318–327, 2022. [Vapnik(2013)]vapnik1999nature Vladimir Vapnik. The nature of statistical learning theory. Springer science & business media, 2013. [Volpi et al.(2018)Volpi, Namkoong, Sener, Duchi, Murino, and Savarese]volpi2018generalizing Riccardo Volpi, Hongseok Namkoong, Ozan Sener, John C Duchi, Vittorio Murino, and Silvio Savarese. Generalizing to unseen domains via adversarial data augmentation. Advances in neural information processing systems, 31, 2018. [Wang et al.(2021a)Wang, Zhu, Adam, Yuille, and Chen]wang2021max Huiyu Wang, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Max-deeplab: End-to-end panoptic segmentation with mask transformers. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 5463–5474, 2021a. [Wang et al.(2020a)Wang, Sun, Cheng, Jiang, Deng, Zhao, Liu, Mu, Tan, Wang, et al.]wang2020deep Jingdong Wang, Ke Sun, Tianheng Cheng, Borui Jiang, Chaorui Deng, Yang Zhao, Dong Liu, Yadong Mu, Mingkui Tan, Xinggang Wang, et al. Deep high-resolution representation learning for visual recognition. IEEE transactions on pattern analysis and machine intelligence, 430 (10):0 3349–3364, 2020a. [Wang et al.(2020b)Wang, Yu, Li, Fu, and Heng]wang2020learning Shujun Wang, Lequan Yu, Caizi Li, Chi-Wing Fu, and Pheng-Ann Heng. Learning from extrinsic and intrinsic supervisions for domain generalization. In European Conference on Computer Vision, pages 159–176. Springer, 2020b. [Wang et al.(2023)Wang, Zhang, Cao, Wang, Shen, and Huang]wang2023seggpt Xinlong Wang, Xiaosong Zhang, Yue Cao, Wen Wang, Chunhua Shen, and Tiejun Huang. Seggpt: Segmenting everything in context. arXiv preprint arXiv:2304.03284, 2023. 
[Wang et al.(2021b)Wang, Luo, Qiu, Huang, and Baktashmotlagh]wang2021learning Zijian Wang, Yadan Luo, Ruihong Qiu, Zi Huang, and Mahsa Baktashmotlagh. Learning to diversify for single domain generalization. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 834–843, 2021b. [Xie et al.(2021)Xie, Wang, Yu, Anandkumar, Alvarez, and Luo]xie2021segformer Enze Xie, Wenhai Wang, Zhiding Yu, Anima Anandkumar, Jose M Alvarez, and Ping Luo. Segformer: Simple and efficient design for semantic segmentation with transformers. Advances in Neural Information Processing Systems, 34:0 12077–12090, 2021. [Xu et al.(2022)Xu, Yao, Jiang, Jiang, Chu, Han, Zhang, Wang, and Tai]xu2022dirl Qi Xu, Liang Yao, Zhengkai Jiang, Guannan Jiang, Wenqing Chu, Wenhui Han, Wei Zhang, Chengjie Wang, and Ying Tai. Dirl: Domain-invariant representation learning for generalizable semantic segmentation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2884–2892, 2022. [Xu et al.(2019)Xu, Zhou, Venkatesan, Swaminathan, and Majumder]xu2019d Xiang Xu, Xiong Zhou, Ragav Venkatesan, Gurumurthy Swaminathan, and Orchid Majumder. d-sne: Domain adaptation using stochastic neighborhood embedding. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2497–2506, 2019. [Yu et al.(2018)Yu, Xian, Chen, Liu, Liao, Madhavan, and Darrell]yu2018bdd100k Fisher Yu, Wenqi Xian, Yingying Chen, Fangchen Liu, Mike Liao, Vashisht Madhavan, and Trevor Darrell. Bdd100k: A diverse driving video database with scalable annotation tooling. arXiv preprint arXiv:1805.04687, 20 (5):0 6, 2018. [Yu et al.(2022a)Yu, Wang, Kim, Qiao, Collins, Zhu, Adam, Yuille, and Chen]yu2022cmt Qihang Yu, Huiyu Wang, Dahun Kim, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. Cmt-deeplab: Clustering mask transformers for panoptic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 2560–2570, 2022a. [Yu et al.(2022b)Yu, Wang, Qiao, Collins, Zhu, Adam, Yuille, and Chen]yu2022k Qihang Yu, Huiyu Wang, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, and Liang-Chieh Chen. k-means mask transformer. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXIX, pages 288–307. Springer, 2022b. [Yue et al.(2019a)Yue, Zhang, Zhao, Sangiovanni-Vincentelli, Keutzer, and Gong]PyramidConsistency2019 X. Yue, Y. Zhang, S. Zhao, A. Sangiovanni-Vincentelli, K. Keutzer, and B. Gong. Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 2100–2110, 2019a. [Yue et al.(2019b)Yue, Zhang, Zhao, Sangiovanni-Vincentelli, Keutzer, and Gong]yue2019domain Xiangyu Yue, Yang Zhang, Sicheng Zhao, Alberto Sangiovanni-Vincentelli, Kurt Keutzer, and Boqing Gong. Domain randomization and pyramid consistency: Simulation-to-real generalization without accessing target domain data. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 2100–2110, 2019b. [Zhao et al.(2020)Zhao, Gong, Liu, Fu, and Tao]zhao2020domain Shanshan Zhao, Mingming Gong, Tongliang Liu, Huan Fu, and Dacheng Tao. Domain generalization via entropy regularization. Advances in Neural Information Processing Systems, 33:0 16096–16107, 2020. 
[Zhao et al.(2022)Zhao, Zhong, Zhao, Sebe, and Lee]zhao2022style Yuyang Zhao, Zhun Zhong, Na Zhao, Nicu Sebe, and Gim Hee Lee. Style-hallucinated dual consistency learning for domain generalized semantic segmentation. In Computer Vision–ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23–27, 2022, Proceedings, Part XXVIII, pages 535–552. Springer, 2022. [Zhong et al.(2022)Zhong, Zhao, Lee, and Sebe]zhong2022adversarial Zhun Zhong, Yuyang Zhao, Gim Hee Lee, and Nicu Sebe. Adversarial style augmentation for domain generalized urban-scene segmentation. In Advances in Neural Information Processing Systems, 2022. [Zhou et al.(2020)Zhou, Yang, Hospedales, and Xiang]zhou2020learning Kaiyang Zhou, Yongxin Yang, Timothy Hospedales, and Tao Xiang. Learning to generate novel domains for domain generalization. In European conference on computer vision, pages 561–578. Springer, 2020. [Zhu et al.(2021)Zhu, Su, Lu, Li, Wang, and Dai]zhu2021deformable Xizhou Zhu, Weijie Su, Lewei Lu, Bin Li, Xiaogang Wang, and Jifeng Dai. Deformable detr: Deformable transformers for end-to-end object detection. In International Conference on Learning Representations, 2021. § MORE COMPARISON WITH STATE-OF-THE-ART FROM OTHER BACKBONES In addition to Table 1 in the main submission, which compares the proposed CMFormer with existing CNN based domain generalized USSS methods using the ResNet-50 backbone, here we provide a further comparison with other backbones such as VGG-16 <cit.> and ResNet-101 <cit.>. Results on VGG-16 Table <ref> reports the results of existing domain generalized USSS methods on the VGG-16 backbone and the comparison with the proposed CMFormer. Under the G→C, B, M, S setting, the proposed CMFormer outperforms the second-best method by 16.13%, 13.61%, 19.34% and 15.34%, respectively. Under the S→C, B, M, G setting, the proposed CMFormer outperforms the second-best method by 6.23% on C, 10.02% on M, and 12.71% on G, respectively. The reason for the slightly inferior performance against SAW <cit.> by 0.88% on B has been discussed in our submission: the S source domain provides only a small amount of training samples for ViT models. Under the C→B, M, G, S setting, the proposed CMFormer outperforms the second-best by 10.08%, 14.73%, 12.38% and 13.92%, respectively. Results on ResNet-101 Table <ref> reports the results of existing domain generalized USSS methods on the ResNet-101 backbone and the comparison with the proposed CMFormer. Under the G→C, B, M, S setting, the proposed CMFormer outperforms the second-best by 8.65%, 6.25%, 13.01% and 11.29%, respectively. Under the S→C, B, M, G setting, the proposed CMFormer outperforms the second-best by 3.72%, 5.99% and 9.86% on C, M and G, respectively. On B, the performance is 2.54% inferior to SAW <cit.>. Under the C→B, M, G, S setting, the proposed CMFormer outperforms the second-best by 10.08%, 14.73%, 12.38% and 13.92%, respectively. § MODEL-SIZE VS. PERFORMANCE We validate the trade-off between model size and performance. Specifically, under the C→S setting, the parameters (denoted as para. num.) and GFLOPs of existing CNN based methods are compared with those of the proposed CMFormer. In addition, we report the results of the proposed CMFormer with the Swin-Tiny, Swin-Base and Swin-Large backbones, respectively, for a more comprehensive comparison. Table <ref> summarizes these statistics, and Fig. <ref> visualizes them. Some important observations can be made. 
(1) When using the Swin-Tiny backbone, the proposed CMFormer shows its superiority in both segmentation accuracy and computational efficiency over existing CNN-based domain-generalized USSS methods. [Figure: visualization of mIoU (y-axis, in %) vs. GFLOPs (x-axis) of existing domain-generalized USSS methods and the proposed CMFormer.] It shows an 8.93% mIoU gain over ISW <cit.> with 60.63 fewer GFLOPs. (2) The use of the Swin-Base backbone roughly doubles both the GFLOPs and the parameter count of the proposed CMFormer compared with existing CNN-based domain-generalized USSS methods. However, an additional 5.30% mIoU over the Swin-Tiny backbone is gained, leading to a total improvement of 14.23% mIoU over the CNN-based ISW <cit.>. (3) The use of the Swin-Large backbone again roughly doubles both the GFLOPs and the parameter count of the proposed CMFormer compared with the Swin-Base backbone. However, this large computational cost only leads to an additional performance gain of 2.52% mIoU. Thus, the Swin-Base backbone appears to be a good trade-off between model size and segmentation accuracy. § EFFECTIVENESS ON GENERIC DOMAIN GENERALIZATION Following the prior domain-generalized USSS work AdvStyle <cit.>, we also test whether the proposed content-enhanced strategy scales to generic domain generalization. Two commonly used benchmarks for generic domain generalization are Digits [http://yann.lecun.com/exdb/mnist/] and PACS [http://sketchx.eecs.qmul.ac.uk/]. Existing state-of-the-art generic domain generalization methods, namely ERM <cit.>, CCSA <cit.>, d-SNE <cit.>, JiGen <cit.>, ADA <cit.>, M-ADA <cit.>, ME-ADA <cit.>, RSC <cit.> and L2D <cit.>, are included for comparison. As the proposed content-enhanced strategy is designed for Transformer architectures, we choose the ViT-Tiny based L2D pipeline as our baseline. Then, we embed the proposed content-enhanced strategy into the baseline (denoted as L2D+CE) and report its performance on these settings. Results on Digits Dataset The results are reported in Table <ref>. The proposed content-enhanced strategy leads to a performance gain of 1.1%, 0.8% and 2.2% on the SVHN, MNIST-M and USPS target domains, respectively, when compared with the ViT-Tiny based L2D baseline. Also, on the USPS target domain, it achieves state-of-the-art performance with an accuracy of 82.4%. Results on PACS Dataset The results are reported in Table <ref>. Different from the Digits dataset, the evaluation protocol of the PACS dataset requires using Art., Car., Ske. and Pho. as the source domain and reports the average accuracy over the remaining three target domains. The proposed content-enhanced strategy leads to a performance gain of 0.6%, 0.8%, 0.5% and 0.7% over the ViT-Tiny based L2D baseline when using Art., Car., Ske. and Pho. as the source domain, respectively. Notably, it achieves state-of-the-art performance on the PACS dataset. § MORE VISUALIZED RESULTS Following Sec. 4.6 in the main submission, more visual results under the C → B, M, G, S setting and under the C → adverse domain setting are provided in Fig. <ref> and Fig. <ref>, respectively. It can be seen that, under a variety of urban styles and adverse domains such as rain, fog and snow, the proposed CMFormer produces more precise and more plausible predictions than existing CNN-based domain-generalized USSS methods.
http://arxiv.org/abs/2307.03320v1
20230706222231
Discovering new B[e] supergiants and candidate Luminous Blue Variables in nearby galaxies
[ "Grigoris Maravelias", "Stephan de Wit", "Alceste Z. Bonanos", "Frank Tramper", "Gonzalo Munoz-Sanchez", "Evangelia Christodoulou" ]
astro-ph.GA
[ "astro-ph.GA", "astro-ph.SR" ]
§ INTRODUCTION How exactly single massive stars, born as O/B-type main-sequence stars, progress to more evolved phases and eventually die remains an open question. Binarity, which has an important implication in the evolution, even further complicates the quest for an answer. Observational data has revealed a number of transitional phases in which massive stars can be found, also known as the massive star "zoo". Whether they pass through certain phases or not depends on their initial mass (≥ 8 ), metallicity (Z), rotational velocity (v_rot), mass loss properties and binarity <cit.>. Although some of them are quite distinct (e.g. Wolf-Rayet stars as opposed to Red Supergiants - RSGs), there are phases which display common observables, such as B[e] supergiants (B[e]SGs) and Luminous Blue Variables (LBVs). The B[e] phenomenon is characterized by numerous emission lines in the optical spectra <cit.>. In particular, there is strong Balmer emission, low excitation permitted (e.g., Fe ii), and forbidden lines (of [Fe ii], and [O i]), as well as strong near- or mid-IR excess due to hot circumstellar dust. However, this can be observed in sources at different evolutionary stages (such as in Herbig AeBe stars, symbiotic systems, and compact planetary nebulae, see <cit.> for detailed classification criteria). The B[e]SGs form a distinct subgroup based on a number of secondary criteria. They are luminous stars (log L/L_⊙≳ 4.0), showing broad Balmer emission lines with P Cygni or double-peaked profiles. They may also display evidence of chemically processed material (e.g., ^13CO enrichment, TiO) which points to an evolved nature, although it is not yet certain if they are in pre- or post-RSG phases <cit.>. The presence of the hot circumstellar dust is due to a complex circumstellar environment (CSE) formed by two components, a stellar wind radiating from the poles and a denser equatorial ring-like structure <cit.>. However, the formation mechanism of this structure remains elusive. A variety of mechanisms have been proposed, such as the following: fast rotation <cit.>, the bi-stability mechanism <cit.>, slow-wind solutions <cit.>, magneto-rotational instability <cit.> , mass transfer in binaries <cit.>, mergers <cit.>, non-radial pulsations or the presence of objects that clear their paths <cit.>. Although poorly constrained, their initial masses range from roughly 10  to less than 40  (Mehner 2023, IAU S361, subm.). The LBVs are another rare subgroup of massive evolved stars, considered to represent a transitional phase from massive O-type main-sequence to Wolf–Rayet stars (e.g., <cit.>). They experience instabilities that lead to photometric variability, typically referred to as S Dor cycles <cit.>, as well as outbursts and episodic mass loss, similar to the giant eruption of η Carina that resulted in large amounts of mass lost through ejecta (e.g., <cit.>). It is not yet fully understood whether these two types of variability are related (e.g., <cit.>). Apart from the evident photometric variability, their spectral appearance changes significantly during their outburst activities (S Dor cycle). It is typical to experience loops from hot (spectra of O/B type) to cool states (A/F spectral types while in outbursts). 
Depending on the luminosity, the brightest LBVs (log L/L_⊙>5.8) seem to directly originate from main-sequence stars (with mass >50 ), while the less luminous ones are possibly post-RSG objects that have lost almost half of their initial masses (within the range of ∼25–40 ) during the RSG phase (Mehner 2023, IAU S361, subm.). Currently, various mechanisms have been suggested, such as radiation and pressure instabilities, stellar rotation, and binarity (see the reviews on the theory and observational evidence in <cit.>, Mehner 2023, IAU S361, subm., and the references therein) and, as such, no comprehensive theory exists to explain them. Therefore, if and how these two phases are linked remains an open question. B[e]SGs tend to have initial masses with a wide range below the most luminous LBVs, and in accordance with the less luminous ones. The presence of similar lines in their spectra points to similarities in their CSEs, with shells and bipolar nebulae observed in both cases <cit.>. Due to their photometric variability, LBVs are more commonly detected in other galaxies compared to B[e]SGs, which generally display less variability. <cit.> commented on a relative low variation of up to 0.2 mag, which was not the case in more recent studies, see Section <ref> for more details. . Therefore, B[e]SGs need to be searched for to be discovered. This has only been successful for 56 (candidate) sources in the Galaxy and for the Magellanic Clouds (MCs), M31 and M33, and M81 <cit.>, and only recently in NGC 247 <cit.>. On the other hand, LBVs have been found in more galaxies (additional to the aforementioned), such as IC 10, IC 1613, NGC 2366, NGC 6822, NGC 1156, DDO 68, and PHL 293B <cit.>, summing up to about 150 sources (including candidates). This paper presents the discovery of new B[e]SGs and LBV candidates found with a systematic survey to identify massive, evolved, dusty sources in nearby galaxies (≤5 Mpc), as part of the ASSESS project<https://assess.astro.noa.gr> (Bonanos 2023, IAU S361, subm.). In Section <ref> we provide a short summary of the observations and of our approach, in Section <ref> we present the new sources, and in Sections <ref> and <ref> we discuss and conclude our work. § MATERIALS AND METHODS §.§ Galaxy Sample For the ASSESS project, a list of 27 nearby galaxies (≤5 Mpc) was compiled (see Bonanos 2023, IAU S361, subm.). In this paper, we present our results from a sub-sample of these galaxies (Table <ref>) for which the spectral classification is final, while for another set we have scheduled observations in queue and have submitted proposals. For some galaxies (e.g., MCs) data have been collected through other catalogs/surveys and are presented separately (e.g., <cit.>). The aim of the ASSESS project is to determine the role of episodic mass loss by detecting and analyzing dusty evolved stars that are primary candidates to exhibit episodic mass loss events (Bonanos 2023, IAU S361, subm.). This mass loss results in the formation of complex structures, such as shells and bipolar nebulae in Wolf–Rayet stars and LBVs (e.g., <cit.>), detached shells in AGBs and RSGs (e.g., <cit.>), disks and rings around B[e]SGs (e.g., <cit.>, or even the dust-enshrouded shells within which the progenitors of Super-Luminous Supernovae lay (e.g., <cit.>). The presence of these dusty CSEs makes these sources bright in mid-IR imaging. Therefore, we based our catalog construction on published point-source Spitzer catalogs <cit.>. 
Since IR data alone cannot distinguish between these sources, the base catalogs were supplemented with other optical and near-IR surveys (Pan-STARRS1; <cit.>, VISTA Hemisphere Survey—VHS; <cit.>, Gaia DR2; <cit.>). Gaia information was also used to remove foreground sources when possible (see <cit.>, and Tramper et al., in prep., for more details). Given this data collection, we performed a selection process to minimize contamination by AGB stars and background IR galaxies/quasars. An absolute magnitude cut of M_[3.6]≤-9.0 <cit.> and an apparent magnitude cut at m_[4.5]≤15.5 <cit.> were applied to avoid AGB stars and background galaxies, respectively. In order to select the dusty targets we considered all sources with an IR excess, defined by the color term m_[3.6]-m_[4.5]>0.1 mag (to exclude the majority of foreground stars, for which this is approximately 0, and to select the most dusty IR sources). The three aforementioned criteria served as a minimum to consider a source as a priority target. Consequently, the reddest and brightest point-sources in the Spitzer catalogs were given the highest priority. An extensive priority list/system was constructed by imposing certain limits for the color term, M_[3.6], and the presence of an optical counterpart (for more details, see Tramper et al., in prep.). Depending on the galaxy size we ended up with a few tens to hundreds of targets per galaxy. To obtain spectroscopic data for such a large number of targets we required instruments with multi-object spectroscopic modes. With these we could allocate up to a few tens of objects per pointing. Multiple pointings (with dithering and/or overlap) were applied to cover more extended galaxies and when the density of the target was high. Therefore, when we were creating the necessary multi-object masks we were forced to select sources based on the spatial limitations (e.g., located out of the field-of-view or at the sensor's gap) and spectral overlaps. Consequently, some priority targets were dropped and, additionally, non-priority targets (“fillers”, i.e., sources dropped through the target selection approach described previously) were added to fill the space. §.§ Observations To verify the nature of our selected targets we needed spectroscopic information. Since this is not available for the majority of the ASSESS galaxies, we initiated an observation campaign to obtain low resolution spectra. Given the large number of targets, along with the sizes of the galaxies, we used the multi-object spectroscopic modes of the Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy (OSIRIS; <cit.>), on the 10.4 m GTC (<cit.>, for the galaxies visible from the Northern hemisphere, i.e., IC 10 and NGC 6822). We used the FOcal Reducer/low dispersion Spectrograph 2 (FORS; <cit.>), at 8.2 m ESO/VLT (for the Southern galaxies, i.e., the rest of Table 1). The resolving power and wavelength coverage was similar for both instruments, at ∼500–700 over the range for GTC/OSIRIS and R∼1000 over the range ∼5200–8700 Å for VLT/FORS2. Details for the observations and data reduction can be found at Munoz-Sanchez et al., in prep., for the GTC/OSIRIS campaign and Tramper et al., in prep., for the VLT/FORS2 campaign. Here we provide only a short overview of the data reduction followed. 
For the OSIRIS data we used the GTCMOS package (<https://www.inaoep.mx/~ydm/gtcmos/gtcmos.html>, accessed 1/9/2022; see also <cit.>), which is an IRAF-based pipeline (IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation). This pipeline for spectroscopic data combines (for each raw exposure) the two CCD images from the detector (correcting for geometric distortions) and performs bias subtraction. Although it can perform the wavelength calibration and can correct the curvature across the spatial direction in 2D images, we noticed that it was not perfect. For this reason we opted for a manual approach and extracted a small cut of the image around each slit. We performed the wavelength calibration individually for each of these images (slits), and the tilt was corrected when necessary. The science and sky spectra were extracted (in 1D), followed by flux calibration. We used IRAF to extract the long-slit spectra of the standard stars, and then the corresponding routines to obtain the sensitivity curve, which was then applied to the science spectrum. For the FORS2 data, we used the FORS2 pipeline v5.5.7 under the EsoReflex environment <cit.>. This resulted in flux-calibrated, sky-subtracted 1D spectra for each slit on the mask. However, for some slits the pipeline did not produce suitable spectra, due to multiple objects in the slit, strongly variable nebular emission, slit overlap, and/or strong vignetting at the top of the CCD. For this reason, we also performed the reduction without sky subtraction and manually selected the object and sky extraction regions from the 2D spectrum. For each slit, the automatically and manually extracted spectra were visually inspected, and the best reduction was chosen. §.§ Spectral Classification The resolution and wavelength range (as described in the previous section) provide access to a number of spectral features, such as Hα (a mass loss tracer for high Ṁ stars), the TiO bands (present in cool stars), He i and He ii lines (indicative of hot stars), various metal lines (notably Fe lines), and the Ca triplet (a luminosity indicator). Therefore, we were able to effectively classify the vast majority of our targets. Both B[e]SGs and LBVs are characterized by strong emission lines, indicative of their complex CSEs. Hα is usually found in very strong emission and is significantly broadened in the presence of strong stellar winds and/or a (detached) disk (e.g., <cit.>). There are a number of He i lines (at λλ5876.6, 6678.2, 7065.2, 7281.4) within our observed range, which manifest in the hottest sources. In the quiescent state of LBVs, the presence of He lines indicates hotter sources (of B/A spectral type, which can be observed even with P-Cygni profiles when stellar winds are strong, such as, for example, in <cit.>). However, when an outburst is triggered and evolves outwards, the temperature temporarily decreases until the ejecta become optically thin. As a result of this temperature shift, the spectral lines typical of the quiescent LBV weaken and metal emission lines strengthen (e.g., <cit.>). During this phase, and depending on the temperature and density conditions of the circumstellar material, they may also display some forbidden Fe lines.
B[e]SGs display additional forbidden emission lines, due to their more complex CSEs, with typical examples being [O i] λλ5577, 6300, 6364 and [Ca ii] λ 7291, 7324. The latter is more evident in the more luminous sources (e.g., <cit.>). Therefore, among all sources identified with strong Hα emissions, we classified as being B[e]SGs those with evident [O i] λ6300 <cit.>, and as being LBVs those without. Both classes may display forbidden emission lines from Fe and Ca (e.g., <cit.>), while all of them display Fe emission lines. We note here that these LBVs are candidate sources, since there is no absolute way to characterize an LBV from a single-epoch spectrum (in contrast to B[e]SGs). It has to be supplemented with more spectroscopic or photometric observations that reveal variability (and possibly the return to a hotter state). We also note that our sample contained more interesting sources that displayed Hα in emission (i.e., main sequence O-stars and blue supergiants), but these were left for future papers (e.g., Munoz-Sanchez et al. 2023, IAU S361, submission). § RESULTS §.§ Statistics From our large observational campaign, we were able to robustly classify (after careful visual inspection) 465 objects in the 12 targeted galaxies (see Table <ref>). Only 11 out of all of these (∼3%) contained features in their optical spectra that indicated a B[e] SG/LBV nature (which was the subject of the current work, with the rest being left for future papers). Other stellar sources related to massive stars included mainly RSGs (∼37%), other Blue Supergiants (∼7%), and Yellow Supergiants (∼5%). There was a small number of emission objects (∼2%), carbon stars (∼6%), and AGN/QSO and other background galaxies (∼4%), while another bulk of sources were classified as H ii regions (∼22%) and foreground sources (∼14%). In Table <ref> we present the identified objects. We note that, although the same approach was followed for all 12 galaxies, we obtained null results for five of them: IC 10, NGC 1313, Sextans A, M83, NGC 6822. In addition, there were only four objects (∼36%) with previous spectral information, for which we confirmed or updated classification. It is also interesting to note that ∼64% of these sources were considered priority targets in our survey (Table <ref>, col. 4), while the rest failed to pass our selection criteria (see Section <ref>). We further discuss these facts in Section <ref>. §.§ Spectra All spectra showed a strong, broadened Hα component, accompanied by several other characteristic emission lines. We present their spectra in Figures <ref> and <ref>, where the strength of the Hα emission for all objects is highlighted in the right panel. The order of the spectra (from top to bottom) was one of decreasing Hα strength. We identified a series of Fe ii emission lines in the left wing of Hα (∼6200–6500 Å), and, when the spectrum extended far enough to bluer wavelengths, we identified another series ranging from roughly ∼5100–5400 Å. Figure <ref> showcases these lines in a zoom-in on the ∼6200–6500 Å region. We used the Fe ii emission lines in this region to correct for the radial velocity (RV) shift. The obtained RV values are shown in column 9 of Table <ref>. Therefore, we verified that the RVs were in agreement with the motion of their host galaxies, confirming that these stars were, indeed, of extragalactic origin. 
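As an illustration of this radial-velocity correction, the minimal sketch below derives an RV estimate from the centroids of a few Fe ii emission lines in the ∼6200–6500 Å region; the rest wavelengths used are standard Fe ii lines in this range, while the observed centroids are hypothetical placeholders rather than measurements from our spectra.

```python
# Minimal sketch of the RV estimate from Fe II emission lines
# (v ~ c * (lambda_obs - lambda_rest) / lambda_rest, averaged over lines).
C_KM_S = 299792.458  # speed of light in km/s

# Rest-frame wavelengths of a few Fe II lines in the ~6200-6500 A region, and
# hypothetical measured centroids (placeholders, not values from our spectra).
rest_wavelengths = [6238.39, 6247.56, 6456.38]   # Angstrom
observed_centroids = [6241.1, 6250.3, 6459.2]    # Angstrom (hypothetical)

shifts = [C_KM_S * (obs - rest) / rest
          for obs, rest in zip(observed_centroids, rest_wavelengths)]
rv = sum(shifts) / len(shifts)                   # mean RV in km/s
print(f"RV = {rv:.0f} km/s (per-line values: {[round(v) for v in shifts]})")
```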
According to the classification criteria presented in Section <ref>, we robustly identified 6 sources as being B[e]SGs: WLM-1, NGC55-1, NGC247-1, NGC253-1, NGC300-1, and NGC300-2. Figure <ref> presents the full spectra for the B[e]SGs, while Figure <ref> shows the characteristic [O i] λ6300 line. It is particularly interesting to note the very strong He i lines of NGC300-1. These emission lines require a hotter formation region, such as a spherical or a bipolar shell formed by a strong stellar wind, in addition to the structures that give rise to the forbidden emission features. We also note the absence of [Fe ii] lines for the WLM-1, NGC253-1, and NGC300-2 sources. Half of the sources (NGC55-1, NGC247-1, NGC300-1) displayed strong [Ca ii] emission lines, while for one source (NGC253-1) they were very faint (limited by the noise), and were totally absent for two of the sources (WLM-1, NGC300-2; see Figure <ref>). These lines were stronger in luminous sources (e.g., <cit.>). The very low SNR for the NGC253-1 and NGC300-2 (see Table <ref>, column 6) justified the lack of Fe and Ca lines. In the case of WLM-1, the SNR was sufficiently good that the lack of forbidden Fe lines should be considered a real non-detection (similar to source WLM 23 from <cit.>. We further discuss this in Section <ref>). Unfortunately, due to overlapping slits in the mask design, some of these spectra suffered from artifacts from the reduction processing (in particular, NGC300-2). Although the B[e] phenomenon can also characterize other types of objects, we noticed a lack of dominant emission lines, such as nebular lines ([N II] λλ6548, 6583, [S  ii] λλ6717,6731, [Ar iii] λ7135), present in planetary nebulae (e.g., <cit.>), O VI Raman-scattered lines (λλ6830,7088) of symbiotic systems (e.g., <cit.>), or even the absorption lines of Li i 6708 present in young stellar objects (e.g., <cit.>). Moreover, during the visual screening of all these spectra, objects with such characteristic lines would be classified differently, as all possible objects were considered. Additionally, at the distances we were looking at, we were mainly probing the upper part of the Hertzsprung–Russell diagram, while their RVs were relatively compatible (within their error margins) with those of their host galaxies. Our Gaia cleaning approach removed the majority of the foreground sources (naturally, a small fraction remained hidden in our target lists). Therefore, we consider these objects to be strong supergiant candidates. We characterized as LBVc the following 4 sources: NGC55-2, NGC55-3, NGC247-2, and NGC3109-1 (see Figure <ref>). NGC55-2 was the hotter of all these sources as it was the only LBVc with all He i lines in emission. NGC3109-1 displayed He i lines in absorption, while the rest did not show any of these lines. During the outbursts the He i lines decrease and vanish, as the temperature and the density (due to the expanding pseudo-photosphere) drop significantly to allow for other lines to form. It is during these cooler states that Fe lines become evident in LBVs. Depending on the conditions, forbidden emission lines may form. This was the case with NGC55-3, which displayed the [Ca ii] lines in emission, along with a few [Fe ii] lines. The other sources did not show any forbidden lines. Similar to the B[e]SG spectra, there were unavoidable residuals and artifacts, due to the slit overlap and reduction issues. 
Of these cases, NGC7793-1 was the most extreme example (affected features at λλ∼5577, 5811 (step), 5846, 6855, and the region around the [Ca ii] lines). The region at [O i] λ6300 was highly contaminated with a sky residual line from another source in the slit. Therefore, we could not conclude whether this line existed or not. We noticed the presence of some [Fe ii] and the [Ca ii] lines, but a B[e]SG or LBV classification solely from this spectrum was not possible. However, additional information could be retrieved from photometry (see Section <ref>), so that we could propose a B[e]SG candidate (B[e]SGc) classification for NGC7793-1. The final classification for each star is provided in column 7 of Table <ref>. §.§ Light Curves and Variability We collected variability information for all targets from both the Pan-STARRS DR2 (<https://catalogs.mast.stsci.edu/panstarrs/>) and the VizieR (<http://vizier.cds.unistra.fr/>) services. We found four sources (WLM-1, NGC247-1, NGC247-2, and NGC3109-1) with data in the Pan-STARRS DR2 release (with an approximate coverage between 2010 and 2014). There were only a couple of detections (epochs) for NGC253-1, which did not provide any meaningful information, and, therefore, we did not consider them; its declination of about -25^∘ is very close to the limit of the survey, and all other galaxies with declinations south of -30^∘, i.e., NGC 55, NGC 300, and NGC 7793, were not visible. We considered only values with a quality flag >0.9 to select the best data. For three sources (NGC55-2, NGC55-3, and NGC247-1) we found additional data in the catalog of large-amplitude variables from Gaia DR2 (covering 2014 to 2016; <cit.>), and NGC3109-1 had already been reported as a variable <cit.>. In Table <ref> we summarize the collected information for all sources and their corresponding magnitude differences (peak-to-peak) for all (5) Pan-STARRS filters, the two Gaia filters (for which we doubled the quoted values in the catalog to match the Pan-STARRS definition of magnitude difference), and some additional variability studies. In total, we found light curves for two B[e]SGs (WLM-1 and NGC247-1) and four LBVc (NGC55-2, NGC55-3, NGC247-2, and NGC3109-1). We show the Pan-STARRS light curves in Figures <ref> and <ref>, where we plot the magnitude difference of each epoch with respect to the mean for the particular filter (indicated on the y-axis label). For the B[e]SGs we noticed a (mean) variability of 0.25–0.3 mag, while for the LBVs it was slightly larger, at 0.3–0.44 mag. There were no obvious trends in the B[e]SG light curves, while, in the case of NGC3109-1, a dimming across all filters was observed. <cit.> also detected such a trend for this target, although smaller, due to the different filters used. Limited by the photometric data, they argued that a background galaxy or AGN could not be excluded, but, given our spectrum and its RV value consistent with that of its host galaxy, we can actually verify its stellar nature. For NGC247-2, the light curves were generally flatter. There was a noticeable peak present in the y light curve (at MJD∼56300 days), which was not evident in the other filters (although we note that there were no observations around the same epoch). The quality flags corresponding to these particular points did not show any issue. However, we should be cautious with this, as further mining of the data is needed to reveal whether this is a real event or an artifact. NGC247-1 was the only source for which we had multiple sources of variability information.
Very good agreement between the Pan-STARRS and Gaia data is evident, and consistent with the value quoted by <cit.> (ΔV=0.29 ± 0.09 mag). Although <cit.> quoted a smaller value (Δ g' ∼ 0.1 mag), their time coverage was limited to about 6 months, a time frame that definitely does not cover the whole variability cycles for these sources. Traditionally, LBVs are considered variable at many scales (e.g., <cit.>). The (optical) S Dor variability is of the order of 0.1 mag to about 2.5 mag with cycles ranging from years to decades. The giant eruptions, although much more energetic (∼5 mag) are less frequent events (a time frame in the order of centuries), and, therefore, a smaller subgroup of LBVs have been observed to display such events. On the other hand, the B[e]SGs are considered more stable, with variability that does not exceed ∼0.2 mag (optical; <cit.>). However, this is changing and significant variability is observed, due to binary interactions and possible pulsations (e.g., <cit.>). Therefore, it is not surprising to observe similar magnitude differences between the two classes. § DISCUSSION §.§ Demographics As mentioned already in Section <ref>, we did not detect B[e]SGs or LBVs in the following five (out of 12) galaxies: IC 10, NGC 1313 Sextans A, M83, NGC 6822. M83 and NGC 1313 are the most distant galaxies (at 4.9 and 4.6 Mpc, respectively) and confusion becomes an important issue (unsurprisingly, M83 is the galaxy for which we detected the most H ii regions; see Tramper et al., in prep.). Due to the spatial resolution of Spitzer and the increasing distance of some of our target galaxies, H ii regions or other point-like objects (e.g., clusters) were included in the point-source catalogs and, therefore, considered to be viable targets in our priority system. The farthest galaxies, for which the majority of observed targets were, indeed, resolved point sources and at least one was either an LBVc or a B[e]SG, were NGC 7793 and NGC 253 (at ∼3.4 Mpc). Therefore, the null detections for IC 10 and NGC 6822 (less than 1 Mpc) and for Sextans A (at 1.34 Mpc) were not due to distance and confusion. <cit.> detected one LBV in NGC 6822 (J194503.77-145619.1) and three in IC 10 (J002012.13+591848.0, J002016.48+591906.9, J002020.35+591837.6). Our inability to recover these targets was due to two reasons. Firstly, we imposed strict criteria to prioritize our target selection (see Section <ref>) based on relative strong IR luminosity and color. Almost all of these targets (except for IC 10 J002020.35+591837.6) had m_[4.5]>15.5 mags, which directly excluded them from further consideration. This was further supported by the fact that four out of or our 11 discoveries initially did not pass as a priority target (see Table <ref>), but were observed as “filler" stars (see Section <ref>). This was particularly important for galaxies with smaller sizes, where only one (IC 10, Sextans A) or two pointings (NGC 6822) were performed. Therefore, the second reason was the limitations that arose from the particular pointing(s) to the galaxy, as targets might have been located out of the field-of-view or at a sensor's gap (which was the case for IC 10 J002020.35 + 591837.6), and therefore not be observable. Other reasons (not corresponding to the aforementioned targets) that could impact the selection of a target or render its spectrum useless include overlapping slits, a poor wavelength calibration and/or SNR, or other reduction issues. 
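For clarity, the three minimum selection cuts discussed above can be written as a simple filter; the sketch below is only an illustration, in which the variable names (m36, m45, dist_mpc) are hypothetical stand-ins for the Spitzer [3.6] and [4.5] apparent magnitudes and the adopted galaxy distance, and extinction is ignored.

```python
# Minimal sketch of the three minimum target-selection cuts (Section 2.1).
# Variable names (m36, m45, dist_mpc) are hypothetical; extinction is ignored.
import math

def is_priority_candidate(m36, m45, dist_mpc):
    """Return True if a source passes the minimum target-selection cuts."""
    dist_modulus = 5 * math.log10(dist_mpc * 1e6) - 5   # m - M, distance in pc
    abs_m36 = m36 - dist_modulus                        # absolute [3.6] magnitude
    bright_enough = abs_m36 <= -9.0                     # excludes most AGB stars
    not_background = m45 <= 15.5                        # excludes most background galaxies
    dusty = (m36 - m45) > 0.1                           # IR excess (red colour)
    return bright_enough and not_background and dusty

# Example: a source at ~2 Mpc with m[3.6]=17.0, m[4.5]=16.2 fails the m[4.5]
# cut, as for the known LBVs discussed above.
print(is_priority_candidate(17.0, 16.2, 2.0))   # False
```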
In the case of NGC 55, four LBVs (including two candidates) are known <cit.>. Two of them (the candidates) were recovered from our survey (NGC55-2 and NGC55-3 as B_13 and B_34, respectively) as LBVc (see Section <ref>). The other two (C1_30,00:14:59.91,-39:12:11.88 and A_42,0:16:09.69,-39:16:13.44) were sources outside the region investigated by <cit.>, so without any Spitzer data to be included in our base catalogs. In total, our approach was successful in detecting these populations, and it was mainly limited by technical issues. §.§ Comparison with Previous Classifications Four of our sources had previous classifications (see Table <ref>). WLM-1 had been identified as an Hα source previously <cit.>, through a photometric survey, and identified as an Fe line star through spectroscopic observations (WLM 23 in <cit.>). Even though the presence of the [O i] λ6300 line was noted, the source was not classified as a B[e]SG, due to the lack of forbidden Fe lines (see e.g., <cit.> on Fe stars). Therefore, we updated its classification to a B[e]SG from an Fe star. We also noted that our spectrum (obtained on November 2020) was very similar to theirs (obtained on December 2012), which might indicate that the star was rather stable over this eight-year period (however, this should be treated with caution due to the lack of systematic observations). NGC55-2 and NGC55-3 had been identified as candidate LBV/WN11 (ids B_34 and B_13, respectively), with both Balmer and He i lines in emission and with P-Cygni profiles <cit.>. Their spectra were within the 3800–5000 Å range and outside ours. However, given that the diagnostic [O i] line was not present, we classified both of these sources as LBVc, consistent with the previous resultsAs our observations were obtained from different epochs (October–December 2020) than those by <cit.> (November 2004) the spectra appearance might have changed, but there was no wavelength overlap to confirm this.. For NGC247-1 we provided a classification of B[e]SG, similar to what was suggested by <cit.>. We note here that their spectral coverage was ∼4400–7400 Å which overlapped with our observed range. Hence, we can also comment that no significant differences existed between the two observations (October 2018 and December 2020 by <cit.> and our observations, respectively), although this time difference is rather small with respect to the variability timescales for these sources <cit.>. Therefore, we confirmed the previous classifications for three out of four sources, leaving us with 6 new B[e]SGs (including the reclassified Fe star and the candidate NGC7793-1) and 2 LBVc. The majority (∼72%) of our findings are genuine discoveries and, as such, contribute greatly to the pool of extragalactic B[e]SGs, in particular. §.§ Separating the Two Classes with Photometry The total numbers of B[e]SGs and LBVs (even including candidates) are definitely small. Combined with the uncertainty pertaining to their roles in stellar evolution theory (e.g., B[e]SGs are not predicted by any code) it is easy to grasp why we really need larger samples and from different galactic environments, to fully understand these sources. Photometric data are typically used to pinpoint interesting candidates. These kinds of diagnostics exist mainly for IR, due to the presence of dust around these objects. 
<cit.> found the B[e]SG, LBVs and RSGs to be among the most luminous sources in the mid-IR, using a color-magnitude diagram (CMD) with a combination of near-IR (2MASS) and mid-IR (Spitzer) J-[3.6] and [3.6]-[4.5] for the massive stars in the Large Magellanic Cloud (with a similar work for the Small Magellanic Cloud presented in <cit.>). In the most recent census of B[e]SGs, <cit.> presented color–color diagrams (CCD) to highlight the separation between B[e]SGs and LBVs (see their Figure <ref>). Indeed, by using the 2MASS near-IR colors H-K and J-H and mid-IR WISE W2-W4 and W1-W2 the two classes are distinct. This is the result of the hot dust component in the B[e]SGs, (formed in the denser disk/ring-like CSE closer to the star) which intensifies the near- and mid-IR excesses, compared to the LBVs (which form dust further away as the wind mass-loss and/or outburst material dissipates). Therefore, the location of a source in these diagrams may be used to verify its nature. We attempted to replicate these aforementioned diagrams by adding the new sources. However, one strong limitation was the lack of data for our sample. For the mid-IR WISE <cit.> we found data for 5 (out of 11) sources (see Table <ref>). Using the data for 21 stars (excepting LHA 120-S 111) provided in <cit.> we plot, in Figure <ref>, the WISE colors for the MC sources and our 5 objects. We notice that, in general, the newly discovered sources are almost consistent with the loci of the MC sources, with the exception of NGC55-1. The new B[e]SG extend the W2-W4 color further to the red, while the LBVc NGC55-3 extended the W1-W2 color to the blue. Errors were plotted in the cases where they were availableOnly NGC55-1 had an error estimate in the W4 band, while the rest of the sources did not. For all other sources we could only plot W1–W2 errors.. The errors provided for NGC55-1 were (numerically) small and placed it within the locus of LBV. However, caution should be taken with WISE photometry, as the resolution from W1 to W4 worsens significantly, and, combined with the distance of our galaxies, the photometric measurements could be strongly affected by confusion due to crowding (e.g., for both NGC 55 and NGC300 at ∼2 Mpc). Combined with the position (and the uncertainty) of the LBVc NGC55-3, we might also be looking at a systematic offset of these populations. Unfortunately, the points in this plot are too scarce to make a robust examination of how the different galactic environments (e.g., metallicity, extinction effects) affect the positions of these populations. We were unable to construct the J-H vs. H-K CCD because of the lack of 2MASS data for our sources (only for NGC3109-1 did data exist; 2MASS point source catalog; <cit.>), due to the shallowness of the survey and the distances of our target galaxies. However, we were able to acquire J photometry from the VHS DR5 for 5 of our sources (including NGC3109-1; <cit.>). Equipped with both J and [3.6] photometry we plot, in Figure <ref>, the equivalent CMD plot presented in <cit.>, where the underlying MC objects were the same as in <cit.>. We notice excellent agreement of all new sources to their corresponding classes. Once again we were hampered by a lack of data for our sample. We could remedy this using the complete data from Spitzer and Gaia surveys (missing NGC253-1 from our sample without Gaia data). In order to consider the MC sources, we used the Gaia DR3 <cit.> and Spitzer data from the SAGE survey <cit.>. 
This time, we only lost two targets (CPD-69 463 and LHA 120-S 83 without Spitzer data), but were still left with 19 sources. In Figure <ref> we present the optical (Gaia) CMD, plotting BP–RP vs. M_G band. We notice the lack of any correlation in the optical. In Figure <ref> we also present the mid-IR (Spitzer) CMD, plotting [3.6]–[4.5] vs. M_[3.6] band. The separation between the two classes becomes more evident in this case. The presence of hotter dusty environments becomes more significant for B[e]SGs, as they looked redder than LBVs (with a [3.6]–[4.5] range between 0.5 to 0.65 mag). They also tend to be much more luminous in the [3.6] than the LBVs. We highlighted the position of NGC7793-1 in this plot. Although, from its spectrum alone, we could not determine a secure classification (due to issues with the obtained spectrum) it is located among the B[e]SGs of our sample and of the MCs. Therefore, we considered it a candidate B[e]SG. A future spectrum is needed to verify the existence of the [O i] λ6300 line, similar to the rest of the secure B[e]SGs in our sample. We also tried to combine the optical and IR data in a CMD where we plot the [3.6]–[4.5] vs. M_G magnitude (Figure <ref>). The result was actually similar to the previous IR CMD (as the x-axis did not change). In this case, the plot can be more helpful, as the LBVs are populating the upper left part of the plot. Therefore, very bright optical sources with IR color up to ∼0.5 mag were most probably LBVs, while sources with color >0.5 mag would be B[e]SG (at almost any G magnitude). §.§ Metallicity Dependence of Populations In this section, we examine the populations of the two classes as a function of metallicity. For this, we plot the cumulative distribution function with metallicity (Figure <ref>), considering all detected and known objects in our sample of galaxies. Namely, the numbers presented in Table <ref>, as well as the two LBVs in NGC55 <cit.>, one in NGC 6822 and three in IC 10 <cit.>, resulting in 7 B[e]SGs (including the NGC7793-1 candidate) and 10 LBVs in our sample of 12 galaxies. We notice the presence of B[e]SGs at metallicity as low as ∼0.14 Z_⊙ (WLM). The current work is the first to detect these sources at such low metallicities. The population of LBVs begins at ∼0.21 Z_⊙ (NGC 3109), and then increases steadily as we move towards higher metallicities. B[e]SGs presents an important step (increase) around ∼0.4 Z_⊙. In total, the two populations do not look significantly different. We have to be cautious interpreting this figure, however, due to the low number of statistics and completeness issues, as, for example, depending on the angle under which we observe a galaxy, we may not be able to fully observe its stellar content (e.g., NGC 253). § CONCLUSIONS In this work, we report the detection of 6 secure B[e]SGs, 1 candidate B[e]SG, and 4 LBV candidates sources, of which 6 B[e]SGs and 2 LBVs are new discoveries. They are based on spectroscopic and photometric diagnostics, supplemented with RVs that are consistent with their host galaxies. By inspecting the available IR (2MASS, WISE, Spitzer) and optical (Gaia) CMDs we find that the new sources are totally consistent with the loci of these populations from MCs. This adds further support regarding their natures. Building the cumulative distribution function of both populations with metallicity we notice the presence of B[e]SGs at environments with Z∼0.14 Z_⊙, which increases the pool of extragalactic B[e]SGs and, especially, at lower metallicities. 
This is particularly important in order to investigate (with increased samples) these phases of massive stars. Since B[e]SGs and LBVs are among the classes with the most important episodic and outburst activities they provide valuable information on the role of episodic mass loss and insights into stellar evolution in general. Conceptualization, G.M. and A.Z.B.; Funding acquisition, A.Z.B.; Investigation, G.M., S.d.W., A.Z.B., G.M.-S. and E.C.; Methodology, G.M., S.d.W. and F.T.; Software, F.T. and G.M.-S.; Supervision, A.Z.B.; Visualization, G.M. and S.d.W.; Writing—original draft, G.M., S.d.W. and A.Z.B.; Writing—review & editing, G.M., S.d.W., A.Z.B., F.T., G.M.-S. and E.C. All authors have read and agreed to the published version of the manuscript. This research was funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (Grant agreement No. 772086). Photometry and 1D extracted spectra will become available through the VizieR/CDS catalog tool. GM acknowledges feedback from Francisco Najarro and Michaela Kraus. Based on observations collected at the European Southern Observatory under the ESO programme 105.20HJ and 109.22W2. Based on observations made with the Gran Telescopio Canarias (GTC), installed at the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofísica de Canarias, on the island of La Palma (programme GTC83/20A). This work was (partly) based on data obtained with the instrument OSIRIS, built by a Consortium led by the Instituto de Astrofísica de Canarias in collaboration with the Instituto de Astronomía of the Universidad Autónoma de México. OSIRIS was funded by GRANTECAN and the National Plan of Astronomy and Astrophysics of the Spanish Government. This work was based, in part, on observations made with the Spitzer Space Telescope, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under a contract with NASA. This work made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC was provided by national institutions, in particular, the institutions participating in the Gaia Multilateral Agreement. Based on observations made with ESO Telescopes at the La Silla or Paranal Observatories under programme ID(s) 179.A-2010(A), 179.A-2010(B), 179.A-2010(C), 179.A-2010(D), 179.A-2010(E), 179.A-2010(F), 179.A-2010(G), 179.A-2010(H), 179.A-2010(I), 179.A-2010(J), 179.A-2010(K), 179.A-2010(L), 179.A-2010(M), 179.A-2010(N), 179.A-2010(O) (regarding VISTA Hemisphere Survey). This publication made use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This publication used data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. 
This work used Astropy <http://www.astropy.org>: a community-developed core Python package and an ecosystem of tools and resources for astronomy <cit.>, NumPy (<https://numpy.org/>; <cit.>), and matplotlib (<https://matplotlib.org/>; <cit.>) The authors declare no conflict of interest. Abbreviations The following abbreviations are used in this manuscript: B[e]SG B[e] Supergiant CCD Color-Color Diagram CMD Color-Magnitude Diagram CSE circumstellar environment LBV Luminous Blue Variable MC Magellanic Cloud SNR Signal to Noise Ratio RSG Red Supergiant RV Radial Velocity -0cm References
http://arxiv.org/abs/2307.01565v1
20230704083606
An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems
[ "Shuyi Wang", "Guido Zuccon" ]
cs.IR
[ "cs.IR" ]
The University of Queensland 4072 St Lucia Brisbane QLD Australia shuyi.wang@uq.edu.au The University of Queensland 4072 St Lucia Brisbane QLD Australia g.zuccon@uq.edu.au Federated online learning to rank (FOLTR) aims to preserve user privacy by not sharing their searchable data and search interactions, while guaranteeing high search effectiveness, especially in contexts where individual users have scarce training data and interactions. For this, FOLTR trains learning to rank models in an online manner – i.e. by exploiting users' interactions with the search systems (queries, clicks), rather than labels – and federatively – i.e. by not aggregating interaction data in a central server for training purposes, but by training instances of a model on each user device on their own private data, and then sharing the model updates, not the data, across a set of users that have formed the federation. Existing FOLTR methods build upon advances in federated learning. While federated learning methods have been shown effective at training machine learning models in a distributed way without the need for data sharing, they can be susceptible to attacks that target either the system's security or its overall effectiveness. In this paper, we consider attacks on FOLTR systems that aim to compromise their search effectiveness. Within this scope, we experiment with and analyse data and model poisoning attack methods to showcase their impact on FOLTR search effectiveness. We also explore the effectiveness of defense methods designed to counteract attacks on FOLTR systems. We contribute an understanding of the effect of attack and defense methods for FOLTR systems, as well as identifying the key factors influencing their effectiveness. An Analysis of Untargeted Poisoning Attack and Defense Methods for Federated Online Learning to Rank Systems Guido Zuccon August 1, 2023 ============================================================================================================ § INTRODUCTION In Online Learning to Rank (OLTR), all documents are stored in a server, and users' queries and interaction data (e.g., clicks) are also collected in the server. The ranker is then trained in a centralised and online manner. However, this setting could potentially infringe on users' privacy, as users may not want to share their queries and interactions. In addition, documents containing personal information, like in email search <cit.> or desktop search <cit.>, may not be appropriate to surrender to a third-party search service. To address this issue, a new paradigm – Federated Online Learning to Rank (FOLTR) – has been explored <cit.>. In FOLTR (as in Figure <ref>), clients retain their data locally, train a local ranker, and then share the local model weights (or gradients) with the server instead of the raw data. The server plays a very different role – aggregating the received weights in an effective manner (e.g., via federated averaging <cit.>) and then broadcasting the obtained global ranker to the clients, which in turn use the global ranker to replace their local ranker. The whole process is carried out iteratively. Compared with conventional OLTR, FOLTR provides a mechanism to safeguard users' privacy.
Also, the collaborative training makes the local rankers more effective than if they were trained separately with only the data of each single user. Existing FOLTR systems however are not necessarily secure: the federation mechanism provides malicious clients with opportunities for attacking the effectiveness of the global ranker. For example, malicious clients can send arbitrary weights to the server so that the convergence of the global ranker can be perturbed after aggregation. This kind of attack is termed as untargeted poisoning attack and aim to compromise the integrity of the global model trained federatively <cit.>. This issue is critical for federated learning systems, but it has not yet been studied for FOLTR. In this work, we initiate the investigation of poisoning attacks and corresponding defense methods in the context of FOLTR systems. Outside of FOLTR systems, poisoning attacks on federated learning systems has been shown successful in compromising model integrity across several federated machine learning tasks <cit.>, including in natural language processing and recommender systems. To mitigate or remove the threat posed by poisoning attacks, defense strategies have been designed and optimised <cit.>. Defense strategies typically act upon the aggregation rules used in the global model updating phase. The vulnerability of existing FOLTR methods to these attacks and the effectiveness of the related defense mechanism is unknown. Previous work in FOLTR has shown that findings obtained with respect to federated learning in other areas of Machine Learning or Deep Learning do not directly translate to the online learning to rank context, and therefore the study of these techniques in the context of FOLTR is important. For example, <cit.> have found that methods for dealing with non identical and independently distributed data in federated learning systems do not generalise to the context of FOLTR. Therefore, the performance of poisoning attacks and defense methods proposed in general domain cannot be guaranteed when applied to FOLTR: we address this limitation by adapting and investigating these methods to the setting of FOLTR and establish baselines for future studies. In this paper, we complement the state-of-the-art FOLTR system with one untargeted attack module and one defense module (shown in Figure <ref>). For the untargeted attack module, we implement a data poisoning method that compromises the local training data to affect the trained model, and two model poisoning methods that directly corrupt the local model updates. As for the defense module, we implement four Byzantine-robust aggregation rules to safeguard against such attacks. These defense mechanisms rely on statistical techniques to identify outliers among the received weights and subsequently exclude them during the aggregation process. Through extensive empirical experiments, we (1) investigate the vulnerability of FOLTR systems to untargeted poisoning attacks, and show under which conditions poisoning attacks can represent a real threat to FOLTR systems; and (2) demonstrate the effectiveness of defense strategies, and importantly reveal the presence of issues with defense strategies if applied to FOLTR systems for which an attack is not in place. § RELATED WORK §.§ Federated OLTR Unlike traditional Learning to Rank (LTR), Online Learning to Rank (OLTR) optimizes rankers through implicit user feedback (e.g., clicks) to directly influence search engine result pages in real-time production. 
The earliest method, Dueling Bandit Gradient Descent (DBGD) <cit.>, uniformly samples variations of the ranking model and updates the ranker based on online interleaving evaluation. To mitigate the high variance and regret inherent in DBGD, subsequent methods have improved it through techniques like multiple interleaving <cit.>, projected gradient <cit.>, and counterfactual evaluation <cit.>. In contrast to DBGD-based approaches, Pairwise Differentiable Gradient Descent (PDGD) <cit.> utilizes a Plackett-Luce model to sample the ranking list and estimates gradients from inferred pairwise preferences. This method has been found to exhibit greater resilience to noise and higher effectiveness in optimizing neural models. OLTR methods have been thoroughly investigated in a centralized setting, where a central server possesses the data to be searched and gathers users' search interactions, such as queries and clicks. The training of the ranker also takes place on this server. However, this centralized paradigm is not well-suited for privacy-preserving requirement where each client may not wish to, or cannot, share the searchable data, queries and other interactions. This is the case, for example, of hospitals wanting to collaborate together to create powerful rankers to identify the cohort of patients for specific rare conditions (and as such, each hospital only holds limited data that would not be sufficient to train an effective ranker individually), but that by legislation they are forbidden to share the actual data. To handle this issue, Federated Online Learning to Rank (FOLTR) methods have been proposed. These methods consider a decentralized machine learning scenario where data owners (clients) collaboratively train the model without sharing their data under the coordination of a central server. One such method is the Federated OLTR with Evolutionary Strategies (FOLtR-ES) <cit.>, which extends the OLTR optimization scenario to the Federated SGD <cit.> and utilizes Evolution Strategies as optimization method <cit.>. While FOLtR-ES performs well on small-scale datasets under certain evaluation metrics, its effectiveness does not generalise to large-scale datasets and standard OLTR metrics <cit.>. Because of this, we do not consider FOLtR-ES in our study. An alternative method is the FPDGD <cit.>, which builds upon the state-of-the-art OLTR method, the Pairwise Differentiable Gradient Descent (PDGD) <cit.>, and integrates it into the Federated Averaging (FedAvg) framework <cit.>. FPDGD exhibits effectiveness comparable to centralized OLTR methods, representing the current state-of-the-art FOLTR method. Thus, our empirical investigation of attack and defense methods on FOLTR systems relies on the FPDGD method, which is further described in Section <ref>. §.§ Poisoning Attacks on Federated Learning Poisoning attacks on federated learning systems aim to compromise the integrity of the system's global model. Poisoning attacks can be grouped according to the goals of the attack into two categories: untargeted poisoning attacks, and targeted poisoning attacks (also known as backdoor attacks). Targeted poisoning attacks aim to manipulate a global model according to the attacker's objectives, such as misclassifying a group of data with certain features to a label chosen by the attacker, while maintaining normal model effectiveness under other conditions. 
This is accomplished through backdoor attacks <cit.>, which are designed to allow the targeted manipulations to transpire stealthily and without detection. In contrast, untargeted poisoning attacks (also known as Byzantine failures <cit.>) aim to decrease the overall effectiveness of the global model indiscriminately for all users and data groups. Current untargeted poisoning methods can be divided into two categories: data poisoning and model poisoning. Label flipping <cit.> is a representative data poisoning method: the labels of honest training data are changed without altering their features. Model poisoning, on the other hand, directly affects the local model updates before they are sent to the centralized server. For example, Baruch et al. <cit.> poisons the local model updates through the addition of noise computed from the variance between the before-attack model updates, while Fang et al. <cit.>'s attacks are optimized to undermine specific robust aggregation rules. Among untargeted poisoning attacks on federated learning systems, model poisoning methods have been found to be the most successful <cit.>. In particular, data poisoning attacks have limited success when Byzantine-robust defense aggregation rules are in use <cit.>; we introduce these defense methods in Section <ref>. Furthermore, most data poisoning attacks assume that the attacker has prior knowledge about the entire training dataset, which is often unrealistic in practice. In this paper, we focus on untargeted poisoning attacks, delving into the effectiveness of both data poisoning and model poisoning methods. These attack methods are studied within the framework of a FOLTR system based on FPDGD, with and without the integration of defense countermeasures. § PRELIMINARIES §.§ Online Learning to Rank (OLTR) In OLTR, the ranker is learned directly from user interactions (clicks in our study), rather than editorial labels. In this context, each client performs searches on several queries during each local training phase. For each query q, the candidate documents set is D_q and the local training data held by each client is {(x_i, c_i), i=1...|D_q|}_q with feature representation (x_i) and user's click signal (c_i) for each (q, d_i)-pair (where d_i ∈ D_q). The value of the click feedback c_i is either 0 (unclicked) or 1 (clicked). In practice, the click is dependent on the relevance degree of the candidate document d_i to the query q, the rank position of d_i, and other noise or randomness factors. §.§ Federated Pairwise Differentiable Gradient Descent (FPDGD) We add our attacking and defense modules to the current state-of-the-art FOLTR system, the Federated Pairwise Differentiable Gradient Descent (FPDGD) <cit.>, which is outlined in Algorithm <ref>. Within each iteration t, each client u considers N_u interactions and updates the local ranker using Pairwise Differentiable Gradient Descent (PDGD) <cit.>. After the local update is finished, each client sends the trained weights θ^u_t to the server. The server then leverages the widely-used Federated Averaging <cit.> to aggregate the local model updates. Afterwards, the new global weights θ_t+1 are sent back to the clients as their new local rankers. We refer the reader to the original FPDGD paper for more details <cit.>. § ATTACKS TO FOLTR SYSTEMS §.§ Problem Definition and Threat Model Attacker's capability: Poisoning attacks can come from both members (insiders) and non-members (outsiders) of the FOLTR system. 
Insiders include both the central server and the clients, while outsiders include eavesdroppers on communication channels and users of the final ranker (this is similar to adversarial attacks during inference). In this study, we focus on insider attacks by malicious participants in the FOLTR system since insider attacks are generally more effective than outsider attacks <cit.>. We assume the attacker has control over m collusive clients, which means that the training data and local model updates can be exchanged among the malicious clients. We restrict the percentage of collusive clients to less than 50%: higher amounts would make it trivial to manipulate the global model. Attacker's background knowledge: We assume that the attacker has only access to the compromised clients: the training data and local rankers of all remaining clients remain not accessible to the attacker. Thus, the attacker has limited prior knowledge: the training data and the locally updated models from the poisoned clients, and the shared global model. The exception of having full prior knowledge[i.e. the attacker can also access information (training data, model updates) of non-poisoned clients.] will only be for the purpose of analysis and will be clarified in place. Problem Formulation: Assume n clients are involved in the FOLTR system. Among them, m clients are malicious. Without loss of generality, we assume the first m participants are compromised. Be 𝐰_𝐢 the local model that the i-th client sends to the central server. The global ranking model is updated through aggregating all 𝐰_𝐢: 𝐰_𝐠 = agg(𝐰_1, ..., 𝐰_𝐦, 𝐰_𝐦+1, ..., 𝐰_𝐧) §.§ Data Poisoning Data poisoning methods aim to corrupt the training data in order to degrade the model's effectiveness. This can be done by adding malicious instances or altering existing instances in an adversarial manner. Our data poisoning attack to FOLTR is inspired by the label flipping strategy <cit.>, in which the labels of honest training samples from one class are flipped to another class, while the features of the flipped samples are kept unchanged. In our case, we want to change the label of irrelevant documents into "high-relevant" and vise versa, without any changes to the feature representation of the corresponding query-document pairs. To achieve so, the attacker needs to intentionally flip the feedback by clicking on irrelevant documents to bring arbitrary noise thus poison the training. In our experiments, as no click data is available with the considered datasets, we follow the common practice from previous literature in OLTR and FOLTR <cit.> of simulating click behaviour based on the extensively-used Simplified Dynamic Bayesian Network (SDBN) click model <cit.>. This click model has been shown to produce reasonable predictions of real-world user click behaviour. Under SDBN, users examine a search engine result page (SERP) from top to bottom. Each document is inspected and clicked with click probability P(click = 1|rel(d)), conditioned on the actual relevance label rel(d) of the document. After a document is clicked, the user decides to stop the search session with stopping probability P(stop = 1|click=1, rel(d)), or continue otherwise. 
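To make the simulated feedback concrete, the following minimal sketch implements the SDBN behaviour just described, together with a "reversed" click table that clicks on irrelevant documents; the probability tables are illustrative placeholders (the values we actually adopt for the different user instantiations are those reported in the table referenced below), and relevance labels are assumed to lie on a 0–4 scale.

```python
# Minimal sketch of the SDBN click simulation (illustrative probability tables
# only; the adopted values for the user instantiations are given in Table 1).
import random

# P(click = 1 | rel) and P(stop = 1 | click = 1, rel), indexed by relevance 0-4.
HONEST_CLICK = [0.0, 0.2, 0.4, 0.8, 1.0]     # illustrative "reliable" user
HONEST_STOP = [0.0, 0.0, 0.0, 0.0, 0.0]
POISON_CLICK = list(reversed(HONEST_CLICK))  # attacker clicks on irrelevant documents
POISON_STOP = [0.0] * 5                      # attacker never stops the session

def simulate_clicks(ranked_relevances, click_prob, stop_prob, rng=random):
    """Scan the SERP top-down and return a 0/1 click vector (SDBN)."""
    clicks = []
    for rel in ranked_relevances:
        clicked = int(rng.random() < click_prob[rel])
        clicks.append(clicked)
        if clicked and rng.random() < stop_prob[rel]:
            # the session is abandoned: remaining documents are not examined
            clicks.extend([0] * (len(ranked_relevances) - len(clicks)))
            break
    return clicks

# A malicious client producing poisoned feedback on a ranking whose relevance
# labels are [4, 3, 0, 1, 0] (most relevant documents first).
print(simulate_clicks([4, 3, 0, 1, 0], POISON_CLICK, POISON_STOP))
```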
Commonly, three instantiations of SDBN are considered: (1) a perfect user examines every document and clicks on all relevant documents thus provides very reliable feedback, (2) a navigational user searches for reasonably relevant documents with a higher probability to stop searching after one click, (3) an informational user typically clicks on many documents without a specific information preference thus provides the noisiest click feedback. Inspired by the three instantiations, we manipulate one poison instantiation to simulate malicious clicking behaviour. The click probability of poison instantiation is the reverse version of the perfect click behaviour: the highest probability of clicking is associated with the least relevance label. All stop probabilities in poison instantiation are set to zero as we assume the attacker wants to poison as many clicks as possible. The values we adopt for the four instantiations of SDBN are reported in Table <ref>. §.§ Model Poisoning Unlike data poisoning, model poisoning directly modifies the local model updates (through poisoning gradients or model parameter updates) before sending them to the server. Some literature shows that model poisoning is more effective than data poisoning <cit.> while it also requires sophisticated technical capabilities and high computational resources than solely poisoning data. In this section, we investigate two existing model poisoning methods. §.§.§ Little Is Enough (LIE) Baruch et al. <cit.> find that if the variance between local updates is sufficiently high, the attacks can make use of this by adding small amounts of noise to the compromised local models and bypass the detection of defense methods. They provide a perturbation range in which the attackers can successfully poison the learning process. To conduct the attack, the adversaries first compute the average μ and standard deviation σ of the before-attack benign local model updates of all collusive attackers (𝐰_1, ..., 𝐰_𝐦). A coefficient z is used and computed based on the number of benign and malicious clients. Finally, the local model of attackers is manipulated as 𝐰^𝐦_𝐢 = μ - zσ for i ∈{1, ..., m} and sent to the central server who aggregates updates from all participants under certain rules in Equation <ref>. Baruch et al. <cit.> observe that, for image classification tasks, the small noises sufficiently compromise the global model while being sufficiently small to evade detection from defense strategies. §.§.§ Fang's Attack Fang et al. <cit.> proposed an optimization-based model poisoning attack tailored to specific robust aggregation rules (Krum, Multi-Krum, Trimmed Mean and Median), as will be explained in Section <ref>. Fang's attack is conducted separately under two assumptions: (1) full knowledge, and (2) partial knowledge. Under full knowledge, the attacker has full access to local model updates of all benign clients. This is a strong and impractical assumption, and it is often not the case in real attacks on federated learning systems. In the partial knowledge scenario, the attacker only knows the local training data and models of the compromised clients. In their attack to the robust aggregation rules Krum and Multi-Krum, the attacker computes the average μ of the benign updates in their possession, computes a perturbation 𝐬= - sign(μ - 𝐰_𝐠), and finally computes a malicious update as 𝐰^𝐦_𝐢 = (𝐰_𝐠 + λ·𝐬) by solving for the coefficient λ, where 𝐰_𝐠 is the before-attack global model during each federated training step. 
Thus, under the full knowledge assumption, the average μ and perturbation signal 𝐬 are computed based on all benign updates (𝐰_1, ..., 𝐰_𝐦, 𝐰_𝐦+1, ..., 𝐰_𝐧). For updates of the malicious clients (𝐰_1, ..., 𝐰_𝐦), the before-attack benign updates are leveraged. Under the partial knowledge scenario, only the before-attack benign updates (𝐰_1, ..., 𝐰_𝐦) are used to estimate the real values for average μ and the reversed deviation vector 𝐬. When attacking Trimmed Mean and Median, the goal is to craft the compromised local models based on the maximum w_max,j or minimum w_min,j benign parameters for each dimension j of the local model (this is one of the key features used by Trimmed Mean and Median for defending). The choice of w_max,j or w_min,j depends on which one deviates the global model towards the inverse of its update direction without attacks. Similar to when attacking Krum, the reversed deviation vector 𝐬 is computed with full knowledge or estimated under partial knowledge with only before-attack updates from all attackers, so as the estimation of w_max,j and w_min,j. After getting the j-th value of vector 𝐬, the j-th dimension of the compromised local model is randomly sampled from the range built based on w_max,j (if s_j = 1) or w_min,j (if s_j = -1). § DEFENSE FOR FOLTR SYSTEMS The current state-of-the-art defense methods against untargeted poisoning attacks focus on enhancing the robustness of the aggregation rules (Equation <ref>) used during the global update phase, to counteract attempts by malicious clients to corrupt the training. Next, we describe four robust aggregation rules that have been shown effective in general federated learning, but have not been evaluated for FOLTR. §.§ Krum and Multi-Krum The intuition behind the Krum method for robust aggregation <cit.> is that the malicious local model updates need to be far from the benign ones in order for the success of poisoning the global model. To evaluate how far a model update 𝐰_𝐢 is from the others, Krum computes the Euclidean distances between 𝐰_𝐢 and 𝐰_𝐣 for i ≠ j. We denote i → j if 𝐰_𝐣 belongs to the set of n-m-2 closest local models of 𝐰_𝐢. Then the sum of n-m-2 shortest distances to 𝐰_𝐢 is computed and denoted as s(i) = ∑_i → j^Euc_dist(𝐰_𝐢, 𝐰_𝐣). After computing the distance score s(i) for all local updates, Krum selects the local model with the smallest s(i) as the global model w_g: 𝐰_𝐠 = Krum(𝐰_1, ..., 𝐰_𝐦, 𝐰_𝐦+1, ..., 𝐰_𝐧) = min_𝐰_𝐢 s(i) Multi-Krum is a variation of the Krum method. Multi-Krum, like Krum, calculates the distance score s(i) for each 𝐰_𝐢. However, instead of choosing the local model with the lowest distance score as the global model (as Krum does), Multi-Krum selects the top f local models with the lowest scores and computes the average of these f models (𝐰'_𝐢, where i ∈{1, ..., f}) to be the global model. 𝐰_𝐠 = Multi-Krum(𝐰_1, ..., 𝐰_𝐦, 𝐰_𝐦+1, ..., 𝐰_𝐧) =1/f∑_i=1^f𝐰'_𝐢 In our empirical investigation, we set the Multi-Krum parameter f = n-m, as in previous work <cit.>. §.§ Trimmed Mean and Median Assume that w_ij is the j-th parameter of the i-th local model. For each j-th model parameter, the Trimmed Mean method <cit.> aggregates them separately across all local models. After removing the β largest and smallest among w_1j, ..., w_nj, the Trimmed Mean method computes the mean of the remaining n-2β parameters as the j-th parameter of the global model. We denote U_j = {w_1j, ..., w_(n-2β)j} as the subset of {w_1j, ..., w_nj} obtained by removing the largest and smallest β fraction of its elements. 
That is, the j-th parameter of the global model updated by Trimmed Mean is: w_j = Trimmed Mean(w_1j, ..., w_nj) = 1/n-2β∑_w_ij∈ U_j w_ij In our implementation, as in previous work on general federated learning <cit.>, we set β to be the number of compromised clients m. The Median method, like the Trimmed Mean method, sorts the j-th parameter of n local models. Instead of discarding the β largest and smallest values (as in Trimmed Mean), the Median uses the median of w_1j, ..., w_nj as the j-th parameter of the global model: w_j = Median(w_1j, ..., w_nj) In case n is an even number, the median is calculated as the average of the middle two values. § EXPERIMENTAL SETUP We next describe our experimental setup to evaluate the considered attack and defense mechanism in the context of a FOLTR system. Datasets. Our experiments are performed on four commonly-used LTR datasets: MQ2007 <cit.>, MSLR-WEB10k <cit.>, Yahoo <cit.>, and Istella-S <cit.>. Each dataset consists of a set of queries and the corresponding pre-selected candidate documents for each query. Each query-document pair is represented by a multi-dimensional feature vector, and have a corresponding annotated relevance label. Among the selected four datasets, MQ2007 <cit.> is the smallest with 1,700 queries, 46-dimensional feature vectors, and 3-level relevance assessments (from not relevant (0) to very relevant (2)). The other three datasets are larger, more recent, and provided by commercial search engines. MSLR-WEB10k has 10,000 queries and each query is associated with 125 documents on average, each represented with 136 features. Yahoo has 29,900 queries and each query-document pair has 700 features. Istella-S is the largest, with 33,018 queries, 220 features, and an average of 103 documents per query. These three commercial datasets are all annotated for relevance on a five-grade-scale: from not relevant (0) to perfectly relevant (4). Federated setup. We consider 10 participants (n=10) in our experiments, among which m clients are attackers. This setup is representative of a cross-silo FOLTR system, typical of a federation of a few institutions or organisations, e.g. hospitals creating a ranker for cohort identification from electronic health records <cit.>. In this paper we will not consider the setup of a cross-device FOLTR system, where many clients are involved in the federation: this is representative of a web-scale federation. We assume that the malicious clients can collude with each other to exchange their local data and model updates to enhance the impact of attacks. In the federated setting, each client holds a copy of the current ranker and updates the local ranker through issuing N_u=5 queries along with the respective interactions. The attackers can only compromise the local updating phase through poisoning the training data or model updates of the controlled malicious clients. After the local updating finishes, the central server will receive the updated ranker from each client and aggregate all local messages to update the global ranker. In our experiments, we consider the following aggregation rules: (1) FedAvg, (2) other robust aggregation rules introduced in Section <ref>. Unless otherwise specified, we train the global ranker through T=10,000 global updating times. User simulations. We follow the standard setup for user simulations in OLTR  <cit.>. We randomly sample from the set of queries in the static dataset to determine the query that the user issues each time. 
After that, the pre-selected documents for the query are ranked by the current local ranking model to generate a ranking result. For every query, we limit the SERP to 10 documents. User interactions (clicks) on the displayed ranking list are simulated through the SDBN click models introduced in Sec <ref>. For the user simulation in model poisoning, we simulate three types of users using the three click instantiations: perfect, navigational, and informational. We experiment on the three types of users separately in order to show the impact of attacking on different types of users. For data poisoning, we simulate the poisoned click based on the poison click combining with benign users on the aforementioned three types of click models separately to show the impact of our data poisoning strategies on different types of benign clicks. Ranking models. We experiment on a linear and neural model as the ranking model when training with FPDGD. For the linear model, we set the learning rate η = 0.1 and zero initialization was used. As in the original PDGD and FPDGD studies <cit.>, the neural ranker is optimized using a single hidden-layer neural network with 64 hidden nodes, along with η = 0.1. Evaluation. We evaluate the attack methods by comparing the gap in offline performance obtained when a specific attack is performed and when no attack is performed. The higher the performance degradation, the more effective the attack. As we limit each SERP to 10 documents, we use nDCG@10 for offline evaluation. The offline performance is measured through averaging the nDCG scores of the global ranker over the queries in the held-out test dataset with the actual relevance label. We record the offline nDCG@10 score of the global ranker during each federated training update. § RESULTS FOR DATA POISONING We perform data poisoning attack and four defense methods across different settings of user behaviours (i.e. click models) and number of attackers ({10%, 20%, 30%, 40%}). Results on MSLR10k with a linear ranker are shown as solid lines in Figure <ref> – results for other datasets are similar and omitted for space constraints. §.§ Attacks In the plots of Figure <ref>, the solid lines represent the results of data poisoning when no defense method is deployed. Among them, the black line represents no attacking situation ("honest" baseline). We can observe that the effect of data poisoning depends on the settings of user behaviors (i.e. click models) and the number of attackers. Effect of number of attackers. By comparing the solid curves in each plot of Figure <ref>, we can observe that the overall performance of the FOLTR system decreases as the number of attackers increases, compared to the “honest” baseline. Thus, the higher the number of attackers, the more degradation on the FOLTR system is experienced. Ease of attack under different user behaviours. By comparing the plots within each row, we see the effect of data poisoning is different under different user behaviours. In the navigational and informational settings, attacks carried by as little as 20% of clients can significantly affect the system. However, to successfully attack the perfect click, a higher number of malicious clients is needed. Across all datasets, the informational click model is the most affected by attacks, while the perfect click model only experiences considerable losses when a large number of clients has been compromised. Neural ranker vs. linear ranker. 
The findings from results for the neural ranker under data poisoning attack are similar to those for the linear ranker – and this pattern is valid across all remaining experiments we report. Therefore, we only report experiments using the linear ranker due to limited space. §.§ Defense Next, we demonstrate the effectiveness of our four defense mechanisms against data poisoning attack. The results on MSLR10k are shown by the dashed curves in Figure <ref>. Each row corresponds to one defense method. Krum. Overall, Krum performs well across all datasets and for all three types of click models once the percentage of malicious clients reaches 20% or higher, with the exception of MQ2007. However, Krum does not work when defending against 10% of clients, except for Istella-S. The accuracy drop from deploying Krum (as shown in Section <ref>) outweighs its effectiveness in defense, especially when there is a relatively small impact on the effectiveness of the model, as is in the case when 10% of the clients are malicious. Additionally, Krum does not show any improvement in defending certain scenarios under the informational click model, such as for MQ2007 under all percentages of malicious clients, and for MSLR10k when 40% of clients are malicious. Multi-Krum. The results obtained for Multi-Krum show similar effectiveness on the perfect click model as Krum. It is important to note that the perfect click model is the hardest to attack among the three types of click models considered. Multi-Krum provides slightly better defense performance on navigational clicks compared to Krum, especially when there are fewer attackers (30% or less). However, for the informational click model, Multi-Krum does not perform as well as Krum. This is because the variance of the local model updates is relatively higher in the noisier informational click model. After averaging the selected local models, the advantage of Multi-Krum is reduced, especially when there are more than 30% malicious clients. Trimmed Mean. Across all experiments, Trimmed Mean does not perform well on the noisiest click model (informational) when there are more than 30% malicious clients involved. When the malicious clients are 20% or 30%, Trimmed Mean provides lower performance gains compared to Krum, but it performs similarly to Krum when only 10% of the clients are malicious. Median. Like Trimmed Mean, Median does not provide improved performance on the noisy informational click model when 30% or 40% of clients are malicious. Similarly, and like other robust aggregation rules, Median does not show significant improvements when only 10% of clients are malicious. In fact, the Median's performance even decreases on the navigational click model for MSLR10k with 10% of malicious clients. When the malicious clients are 20% and 30% of all clients in the federation, the performance gain provided by Median is similar to that of Trimmed Mean. Summary. Overall, Krum and Multi-Krum work better than Trimmed Mean or Median when defending against data poisoning attacks, with the exception that Trimmed Mean and Median perform better on the smaller MQ2007 dataset. § RESULTS FOR MODEL POISONING We implement the model poisoning strategies specified in Section <ref> and report their results, specifically comparing their poisoning effectiveness with that of data poisoning methods. §.§ Little Is Enough (LIE) The experimental results obtained for LIE are partially shown in Figure <ref>, along with a comparison with data poisoning. Ineffectiveness of LIE. 
The results indicate that LIE is less effective in attacking the performance of the global model compared to data poisoning, with one exception for the perfect click model on the Yahoo dataset when 40% of the clients are malicious. This shows that adding random noise to compromise the local models is less effective for attacking the global ranker performance than compromising the click signals directly. Because of the poor attacking effectiveness of LIE, we do not investigate how it performs when defense strategies are put in place. §.§ Fang's Attack In our experiments, we implement Fang's attacks on four robust aggregation rules, with each attacking strategy tailoring specific defense strategies except that the same attack method is shared for Trimmed Mean and Median. Full knowledge vs. partial knowledge. First, we compare the attacking performance under both full knowledge and partial knowledge assumptions. According to previous findings in general federated learning <cit.>, attacking with full knowledge performs consistently better than with partial knowledge as the tailored attack can be optimised with auxiliary information about benign clients. From our results (results on MSLR10k under Krum are shown in Figure <ref>), we observe that full knowledge performs better with fewer malicious clients (10% and 20%), but the gap in effectiveness obtained between full and partial knowledge decreases as the number of malicious clients increases (30% and 40%), thus leading to differences compared to the general results in federated learning. This is because with more malicious clients, partial knowledge (knowledge of before-attack local model updates for compromised clients) provides enough information to effectively poison the global model while avoiding detection by robust defense strategies. Fang's Attack vs. data poisoning. Next, we compare Fang's attack under the full knowledge assumption against the data poisoning method under the same robust-aggregation rule (results on MSLR10k under Krum are shown in Figure <ref>). We find that Fang's attack can successfully poison FOLTR and mitigate the impact of defense methods compared to data poisoning. This finding aligns with the original results from <cit.>. § IMPACT OF DEFENSE UNDER NO-ATTACK Robust aggregation rules exhibit improvements in defending against poisoning attacks under some circumstances. But in real-world settings, the administrator of the FOLTR system has no knowledge of whether an attack is taking place. Thus, if the system administrator wishes to ensure protection against attacks, they may be required to deploy defense strategies irrespective of an attack ever taking place, or not. However, is there a price to pay, in terms of search effectiveness, if a defense strategy is deployed on a FOLTR system that is not exposed to an attack? We investigate this next, by comparing the effectiveness of a FOLTR system with no malicious clients and with different defense strategies implemented against the effectiveness of the same system with no defense. The experimental results on MSLR10k reported in Figure <ref> show that using Krum and Median leads to a decrease in performance compared to the FedAvg baseline when no attacks are present. Results for other datasets are similar and are omitted for space reasons. This finding has also been reported before in general federated learning literature <cit.>, especially when each client's local training data is non independent and identically distributed (non-IID). 
This is because those Byzantine-robust FL methods exclude some local model updates when aggregating them as the global model update <cit.>. This decrease raises questions about the use of these methods in FOLTR systems when no malicious client is present – and it suggests that if reliable methods for attack detection were available, then defense mechanisms may better be deployed only once the attack takes place. § SUMMARY OF KEY FINDINGS Based on the presented empirical results above, we identify the following key findings: * In general, the perfect click type is more difficult to attack compared to the other two click models, whether it be data or model poisoning methods, except in specific instances when employing Fang's attack under the full knowledge assumption. To successfully attack a FOLTR system when perfect click feedback is present, a larger number of attackers is required due to the relatively low variance between local updates. As a result, more clients must be compromised to inject noise, otherwise the attack is more likely to be detected by robust aggregation rules. * Among all attacking strategies studied in this paper, Fang's attack with full knowledge emerged as the most successful in diminishing the performance of the global model, though some exceptions were observed in the noisy informational click scenario. When there were more malicious clients (i.e. 30% or 40% of the total clients), Fang's attack with partial knowledge is just as effective as with full knowledge. This indicates that model poisoning is more effective than data poisoning. Furthermore, when defense measures were implemented, Fang's attack demonstrated greater success against Krum and Multi-Krum aggregation rules in comparison to Trimmed Mean and Median. * It is essential to highlight that although Krum has proven effective in countering data poisoning and Trimmed Mean in defending against Fang's attack, deploying these two aggregation rules should be exercised with caution as they result in an overall decrease in search performance if the system is not exposed to attacks. Thus, the selection of these defense mechanisms should be carefully considered, taking into account the specific context and risk of potential attacks to strike the right balance between security and search effectiveness. § CONCLUSION In this paper we explore attacks and defense mechanisms for federated online learning to rank (FOLTR) systems, focusing on the potential degradation of ranking performance caused by untargedted poisoning attacks. We investigate both data and model poisoning strategies and evaluate the effectiveness of various state-of-the-art robust aggregation rules for federated learning in countering these attacks. Our findings indicate that sophisticated model poisoning strategies outperform data poisoning methods, even when defense mechanisms are in place. We also reveal that deploying defense mechanisms without an ongoing attack can lead to ranker performance degradation. This finding recommends care in the deployment of such mechanisms and suggests that future research should explore defense strategies that do not deteriorate FOLTR ranker performance if no attack is underway. This is the first study that systematically analyses the threats brought by untargeted poisoning attacks and demonstrates the effectiveness (and associated drawbacks) of existing defense methods on mitigating the impact of malicious adversaries under federated online learning to rank system. 
Due to space limitations, we could not include all experimental results in the paper. The complete results, along with code and settings, are available at <https://github.com/ielab/foltr-attacks>. Shuyi Wang is the recipient of a Google PhD Fellowship. This research is partially funded by Beijing Baidu Netcom Technology Co., Ltd, for the project "Federated Online Learning of Neural Rankers", under funding scheme 2022 CCF-Baidu Pinecone.
http://arxiv.org/abs/2307.00891v1
20230703094050
Efficient Interpolation-Based Decoding of Reed-Solomon Codes
[ "Wrya K. Kadir", "Hsuan-Yin Lin", "Eirik Rosnes" ]
cs.IT
[ "cs.IT", "math.IT" ]
Efficient Interpolation-Based Decoding of Reed-Solomon Codes Wrya K. Kadir, Hsuan-Yin Lin, and Eirik Rosnes Simula UiB, N–5006 Bergen, Norway Emails:{wrya, lin, eirikrosnes}@simula.no ==================================================================================================================================================================================================== We propose a new interpolation-based error decoding algorithm for (n,k) Reed-Solomon (RS) codes over a finite field of size q, where n=q-1 is the length and k is the dimension. In particular, we employ the fast Fourier transform (FFT) together with properties of a circulant matrix associated with the error interpolation polynomial and some known results from elimination theory in the decoding process. The asymptotic computational complexity of the proposed algorithm for correcting any t ≤⌊n-k/2⌋ errors in an (n,k) RS code is of order 𝒪(tlog^2 t) and 𝒪(nlog^2 n loglog n) over FFT-friendly and arbitrary finite fields, respectively, achieving the best currently known asymptotic decoding complexity, proposed for the same set of parameters. § INTRODUCTION Reed-Solomon (RS) codes are the most widely known family of maximum distance separable (MDS) codes and were introduced by Reed and Solomon <cit.>. RS codes have been used in secret sharing <cit.>, space communications, consumer electronics <cit.>, and QR codes <cit.>. Hence, it is of great interest to devise efficient error decoding algorithms for RS codes. For an (n,k) RS code, where n is the code length and k is the message length (dimension), there are two main hard-decision decoding approaches, namely, syndrome-based and interpolation-based decoding. Berlekamp and Massey <cit.> introduced a decoding algorithm in 1969, and later Sugiyama <cit.> used the Euclidean algorithm to solve the key equation of this algorithm. Both algorithms are syndrome-based decoding algorithms with computation complexity 𝒪(n(n-k)) <cit.>. Welch and Berlekamp <cit.> introduced the first interpolation-based RS decoder with complexity 𝒪(n^3) (see also <cit.>). During the last two decades, several new decoding algorithms have been proposed for RS codes under the light of the (multiplicative) fast Fourier transform (FFT). Justesen in <cit.> used the FFT instead of the method by Sugiyama to solve the key equation and achieved the complexity 𝒪(qlog^2q), where q∈{2^2^i+1| i=0,1,2,3, 4} is the field size. The interpolation-based decoding algorithm proposed by Gao <cit.> for codes with length n over an FFT-friendly finite field <cit.> of size q, where q-1 is divisible by n, has complexity 𝒪(nlog^2 n). For RS codes over fields with even characteristics, an additive FFT <cit.> can be applied. A comparison study between the syndrome-based algorithms <cit.> and the interpolation-based decoding algorithm <cit.> was done in <cit.>. The study shows that the interpolation-based algorithm <cit.> is more efficient than syndrome-based decoders only for RS codes of very low rate. New versions of additive FFTs were introduced in <cit.>, and additive FFT-based decoding algorithms on RS codes appeared in <cit.>. These algorithms require finite fields with even characteristics and additional constraints, such as n, k, or n-k being a power of two. The new additive FFTs are not compatible with our code parameters. In this paper, we develop a new interpolation-based error decoding algorithm for (n,k) RS codes over finite fields of size q and for any t ≤⌊n-k/2⌋ errors with block length n=q-1. 
Our algorithm is based on a simple observation, yielding the decoding complexity 𝒪(tlog^2 t) over an FFT-friendly finite field and 𝒪(nlog^2 nloglog n) over an arbitrary finite field (see Theorem <ref>). The interpolation-based algorithm in <cit.> can also achieve the same complexity order, but only over finite fields with even characteristic. We use properties of the error interpolation polynomial (see <ref> and <ref>) and its associated circulant matrix in the decoding process. Finding the error locator polynomial and its roots are the most time-consuming steps in syndrome-based algorithms while finding the greatest common divisor (GCD) between two polynomials is the step with the highest time complexity for the existing interpolation-based algorithms, but none of these steps are required for our algorithm. As an alternate, we reduce the decoding problem to the problem of solving a Toeplitz linear system of equations which can be solved with the same complexity as finding a GCD <cit.>. The complexity analysis in Section <ref> shows the asymptotic complexity of the proposed algorithm, and Table <ref> compares it to algorithms in <cit.>. A similar idea that uses the rank properties of Dickson matrices has been used in decoding rank-metric codes <cit.>. § NOTATION Let 𝔽_q denote a finite field with q elements and 𝔽_q[x] a univariate polynomial ring over . The degree of a polynomial f(x) ∈𝔽_q[x] is the largest of the degrees of the individual terms and is denoted by (f(x)). For simplicity, we sometimes write f instead of f(x) if the dependency on x is clear from the context, and we also use f to denote the vector of coefficients of f(x). We denote the GCD of two polynomials f,g by (f,g). Let α be a primitive n-th root of unity, i.e., n is the smallest positive integer such that α^n=1. The evaluation of a polynomial of degree less than n on distinct points in 𝒜≜{α^0,…, α^n-1} is referred to as the discrete Fourier transform (DFT). The interpolation is the inverse transform, called the inverse discrete Fourier transform (IDFT). FFT refers to an algorithm that computes the DFT of length n in time complexity 𝒪(n(log n)^u), for some small u. A (multiplicative) FFT of length n requires that n is the product of only small prime numbers and also that x^n-1 has n distinct roots in 𝔽_q. Since we consider n=q-1, the second condition always holds. A finite field that provides these two properties is referred to as FFT-friendly <cit.>. Let ρ_-1,ρ_0∈𝔽_q[x] and ρ_0 0. The Euclidean remainder sequence is defined as ρ_i-2=q_iρ_i-1+ρ_i, (ρ_i)<(ρ_i-1), for 1≤ i≤ s, where ρ_i,q_i∈_q[x] are the i-th remainder and quotient, respectively, and s is the smallest positive integer for which ρ_s|ρ_s-1. ρ_s=(ρ_-1,ρ_0). § DECODING REED-SOLOMON CODES An (n,k) RS code 𝒞 with length n, dimension k, and minimum distance n-k+1 over is a classical example of a polynomial evaluation MDS code. A message vector f =(f_0,…, f_k-1)∈^k is treated as the coefficient vector of a polynomial of degree less than k and evaluated over 𝒜. The output is an RS codeword. §.§ Encoding and Error Generation The message f=(f_0,f_1,…,f_k-1)∈𝔽_q^k, with corresponding message polynomial f(x)=f_0+f_1x+⋯+f_k-1x^k-1∈𝔽_q[x], is padded by n-k zeros and encoded as c=f̃·G_α, where G_α=(G_α)_i,j=α^j(i-1) 0≤ i,j≤ n-1, is a generator matrix of 𝒞, f̃(f_0…,f_k-1,0…,0)∈^n, α is an n-th primitive root of unity, and n=q-1. Hence, encoding RS codes is equivalent to computing a DFT. The matrix G_α is a nonsingular Vandermonde matrix <cit.>. 
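As an illustration of the encoding step (a sketch of ours, not taken from the paper), the following Python snippet builds the matrix G_α over 𝔽_11 for the primitive 10-th root of unity α=2 used in the examples below, taking the 0-indexed convention that entry (i, j) equals α^(i·j), and encodes a zero-padded message by the explicit matrix product; an FFT would replace this product in an efficient implementation. The sample message is arbitrary.

q, n, alpha = 11, 10, 2                   # F_11, n = q - 1, alpha = 2 a primitive 10th root of unity
G = [[pow(alpha, i * j, q) for j in range(n)] for i in range(n)]   # DFT (Vandermonde) matrix

def rs_encode(message):
    """Encode a message of length k < n as c = (f_0, ..., f_{k-1}, 0, ..., 0) . G_alpha."""
    f = list(message) + [0] * (n - len(message))      # pad with n - k zeros
    return [sum(f[i] * G[i][j] for i in range(n)) % q for j in range(n)]

print(rs_encode([3, 1, 4, 1]))            # a hypothetical k = 4 message and its codeword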
The channel chooses a random vector e∈𝔽_q^n of weight t where t≤⌊n-k/2⌋. Let g(x)∈𝔽_q[x] be the error interpolation polynomial with coefficient vector g̃=(g_0,…, g_n-1)∈𝔽_q^n such that e=g̃·G_α. The error vector e is added to our codeword c and we get r=c+e=f̃·G_α+g̃·G_α=(f̃+g̃)·G_α, where r denotes the received vector. §.§ Decoding §.§.§ Circulant Matrix We recall the following known result from <cit.>. Let ℝ be a commutative ring with identity, n a positive integer, and α∈ℝ a primitive n-th root of unity. Then, α^-1 is also a primitive n-th root of unity and G_αG_α^-1=nI, where I is the identity matrix of order n. Using Lemma <ref>, computing G_α^-1=1/nG_α^-1 is easy and can be done in advance. Multiplying a vector by G_α^-1 is an IDFT. Matrix G_α is nonsingular and we can multiply both sides of (<ref>) by G_α^-1 and get r·G_α^-1=(f̃+g̃)·G_α·G_α^-1=f̃+g̃. Let β=(β_0,…, β_n-1)=r·G_α^-1. The decoder knows r and G_α^-1, and hence β. One can write (<ref>) as rCl (β_0,…,β_n-1) = (f_0,…,f_k-1,0,…,0) +(g_0,…,g_k-1,g_k,…, g_n-1). The decoder needs to recover g(x) and evaluate g(x) on α^0,…,α^n-1 to get the error values. It can be seen from (<ref>) that the n-k coefficients g_k,…,g_n-1 of g(x) are known and the remaining task is to find the k coefficients g_0,…,g_k-1. Let 𝒞 be an RS code over 𝔽_11 with n=10, k=4, and α=2∈𝔽_11. Let r=( 8, 0, 4, 3, 6, 10, 1, 8, 4, 3 )∈𝔽_11^10 be a received vector via a noisy channel with t=3 errors, and let G_α be the Vandermonde matrix associated with α. Then, β=r·G_α^-1=(8, 0, 9, 0, 2, 1, 8, 7, 4, 2 ). Since β=f̃+g̃, we already know the last n-k=6 coefficients of g̃ and β_i=g_i for 4≤ i≤ 9, i.e., (g_4,…, g_9)=(2,1,8,7,4,2). Let a=(a_0,a_1,…, a_d-1)∈𝔽_q^d. The matrix of the form A= ( [ a_0 a_1 ⋯ a_d-3 a_d-2 a_d-1; a_d-1 a_0 ⋯ a_d-4 a_d-3 a_d-2; a_d-2 a_d-1 ⋯ a_d-5 a_d-4 a_d-3; ⋮ ⋮ ⋱ ⋮ ⋮ ⋮; a_1 a_2 ⋯ a_d-2 a_d-1 a_0 ]) is the circulant matrix associated with vector a, where each row is the right circular shift of the row above. A circulant matrix is a special case of a Toeplitz matrix and all the square submatrices of a circulant matrix have a Toeplitz structure as all the entries along the diagonal are equal, and those along each line parallel to the diagonal are also equal. A u× u Toeplitz matrix can be determined by its 2u-1, u∈, entries in its first row and first column. Let a(x)=a_0+a_1x+⋯+a_q-2x^q-2∈𝔽_q[x] be a polynomial of degree less than q-1 and let a=(a_0,…, a_q-2) be its coefficient vector. Then, the circulant matrix associated with a of the form in (<ref>), has rank τ if the number of nonzero roots of a(x) is q-1-τ. We write the circulant matrix associated with the coefficient vector g̃=(g_0,…,g_n-1) as c G̃= ccc|cccc g_0 g_1 ⋯ g_n-t-1 g_n-t … g_n-1 g_n-1 g_0 ⋯ g_n-t-2 g_n-t-1 … g_n-2 ⋮ ⋮ ⋯ ⋮ ⋮ … ⋮ g_n-t+1 g_n-t+2 ⋯ g_n-2t g_n-2t+1 … g_n-t ⋮ ⋮ ⋱ ⋮ ⋮ ⋱ ⋮ g_1 g_2 ⋯ g_n-t g_n-t+1 … g_0 ,  where t is the number of errors. Hence, e has n-t zero components which correspond to the number of nonzero distinct roots of g(x). Using Theorem <ref> one can conclude that the rank of G̃ is t, and therefore it contains a t× t full rank submatrix. We are going to show that the Toeplitz t× t submatrix formed by the first t rows and the last t columns of G̃ (colored in red) is always nonsingular. All the entries located in the top right t× (t+1) submatrix of G̃ are known and we employ them to find the unknown coefficients g_0,…,g_k-1. 
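A quick numeric check (ours, not from the paper) reproduces this step for the running example: with α=2 in 𝔽_11 we have α^(-1)=6 and n^(-1)=10, and the inverse DFT of the received word r yields β, whose last n-k entries are the known coefficients g_4,…,g_9.

q, n, a_inv, n_inv = 11, 10, 6, 10                 # 2^(-1) = 6 and 10^(-1) = 10 in F_11
r = [8, 0, 4, 3, 6, 10, 1, 8, 4, 3]                # received word of the running example
beta = [n_inv * sum(r[j] * pow(a_inv, j * m, q) for j in range(n)) % q for m in range(n)]
print(beta)                                        # [8, 0, 9, 0, 2, 1, 8, 7, 4, 2]
print(beta[4:])                                    # known coefficients g_4, ..., g_9 = [2, 1, 8, 7, 4, 2]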
<ref> The circulant matrix G̃ associated with g̃ has the form c [1.0]G=cccccc|cccc g_0 g_1 g_2 g_3 2 1 8 7 4 2 2 g_0 g_1 g_2 g_3 2 1 8 7 4 4 2 g_0 g_1 g_2 g_3 2 1 8 7 7 4 2 g_0 g_1 g_2 g_3 2 1 8 8 7 4 2 g_0 g_1 g_2 g_3 2 1 1 8 7 4 2 g_0 g_1 g_2 g_3 2 2 1 8 7 4 2 g_0 g_1 g_2 g_3 g_3 2 1 8 7 4 2 g_0 g_1 g_2 g_2 g_3 2 1 8 7 4 2 g_0 g_1 g_1 g_2 g_3 2 1 8 7 4 2 g_0 , and using Theorem <ref>, its rank is t=3. §.§.§ Subresultants Let b(x)=∑_i=0^db_ix^i and a(x)=∑_j=0^ma_jx^j be two polynomials of degree d and m in 𝔽_q[x], respectively, where d≥ m. The matrix of the form c S(b,a)[0.950] cccc|ccccc[cell-space-limits=0.25pt] b_d a_m b_d-1 b_d a_m-1 a_m b_d-1 ⋱ a_m-1 ⋱ ⋱ b_d a_1 ⋱ b_d-1 a_0 a_1 a_m b_0 a_0 ⋱ a_m-1 b_0 ⋱ ⋱ ⋱ a_1 b_0 a_0 [shorten,yshift=3pt]1-19-4m columns [shorten,yshift=3pt]1-59-9d columns is called the Sylvester matrix of polynomials b and a. The first m columns are equipped by b_i's and the last d columns by a_j's. All the entries outside of the two parallelograms are equal to zero. The resultant of b and a is the determinant of the Sylvester matrix and we denote it by (b,a) ∈. For 0≤ l≤ m, the determinant of the (m+d-2l)× (m+d-2l) Sylvester submatrix c S_l(b,a)[0.950] cccc|ccccc b_d a_m b_d-1 b_d a_m-1 a_m b_d-m+l+1 ⋯ ⋯ b_d a_l+1 b_l+1 b_m a_m-d+l+1 a_m b_2l-m+1 b_l a_2l-d+1 a_l [shorten,yshift=3pt]1-18-4m-l columns [shorten,yshift=3pt]1-58-9d-l columns is called the l-th subresultant of polynomials b and a and it is denoted by _l(b,a)∈𝔽_q. The first m-l columns of S_l(b,a) are equipped by b_i's and the rest by a_j's. By convention, a b_i and an a_j are zero for i,j<0. The 0-th subresultant is the resultant of b and a. §.§.§ Relation Between G̃ and S(b,a) We review an important result from <cit.> (Theorem <ref>) and present a new key theorem (Theorem <ref>). Consider two polynomials b,a∈[x] with degrees d≥ m, respectively, and let 0≤ l≤ m. A polynomial of degree l appears in the Euclidean remainder sequence of b and a if and only if _l(b,a)≠ 0. The l-th subresultant of b and a is the leading coefficient of a polynomial of degree l appearing in the Euclidean remainder sequence, and being nonzero means such a polynomial exists. Let b(x)=x^q-1-1 and a(x)=∑_j=0^ma_jx^j be two polynomials in 𝔽_q[x], where a(x)=h(x)· z(x) and h(x)=∑_i=0^vh_ix^i has exactly v≤ m≤ q-1 distinct nonzero simple roots. The polynomial a(x)=h· z=h· s· r· x^u satisfies one of the following: (i) s,r∈𝔽_q and 0≤ u≤ m-v. (ii) s∈, r∈[x], 0≤ u≤ m-v-(r), and factors of r appear in the factors of h if (r)≠ 0. (iii) r,s∈[x], 0≤ u≤ m-v-(r)-(s), and factors of r appear in the factors of h if (r)≠ 0. Polynomial s is irreducible over 𝔽_q[x] with a degree of at least two. Then, (b,a)=h and _v(b,a)≠ 0. (i) If a satisfies (i) and u=0, then a=r· s· h and hence b and a share v distinct linear factors and (b,a)=h. Therefore, a degree v polynomial appears in the remainder sequence and _v(b,a)≠ 0. If a satisfies (i) and u≠ 0, then a=r· s· h· x^u. Since b does not have zero as a root, (b,x^u)=1, and again b and a share v distinct linear factors and (b,a)=(b,h)=h. It is easy to see that factors of x^u do not contribute in (b,a) so we do not consider them in our analysis for the remaining cases (ii) and (iii). (ii) If a satisfies (ii) and r has p≤(r) distinct nonzero roots, then (h,r) is a degree p polynomial which is the multiplication of r's distinct linear factors which are also factors of h. So again (b,a)=(b,h)=h. 
(iii) If a satisfies (iii), then s is irreducible and it is pairwise coprime with b, h, r, and h· r, and also s· x^u is coprime with b. Since (b,r· x^u)=r, by properties of GCDs of two polynomials, we have (b,h· r· x^u· s)=(b,h)=h. In all cases, we have (b,a)=h and there exists a polynomial with degree v in the remainder sequence. Finally, using Theorem <ref> we conclude that _v(b,a)≠ 0. Note that _v(x^q-1-1,a) in Theorem <ref> is the determinant of the matrix S_v(x^q-1-1,a)= c [0.950] cccc|cccccc 1       a_m       0 1     a_m-1 a_m     ⋮ ⋮ ⋱   ⋮   ⋱   0 ⋮ ⋱ 1 a_v+1 … … a_m   0 ⋮ ⋱ 0 a_v … …   a_m ⋮ ⋮ ⋱ ⋯ ⋮ ⋱ ⋱   ⋱ 0 ⋮ ⋱ 0 a_m-q+v+2 … … … … a_m ⋮     ⋮ ⋮         ⋮ 0 … … 0 a_2v-q+2 … … … … a_v , where the determinant of S_v(x^q-1-1,a) is equal to the determinant of its (q-1-v) × (q-1-v) Toeplitz submatrix (colored in red) in the downright corner with entries a_2v-q+2,…,a_v-1,a_v,a_v+1…,a_m,0,…,0 and a_v on the main diagonal. In other words, if polynomial a satisfies the properties in Theorem <ref>, then the Toeplitz submatrix in S_v(x^q-1-1,a) is always nonsingular. This observation is the key point for the proposed decoding algorithm. The error interpolation polynomial g(x) has the exact same form as the polynomial a(x) in Theorem <ref> where v=n-t. The error interpolation polynomial g(x) with degree m up to n-1 is evaluated on nonzero points in 𝒜 and generates an error vector with t nonzero components and n-t zero components. Consequently, g(x) has n-t nonzero distinct roots associated with the zero components of e due to the fact that our evaluation points in 𝒜 are all nonzero. If the degree of g(x) is exactly n-t, then u=0 and g(x) satisfies (i) in Theorem <ref>. If (g(x))=λ>n-t, then it has an additional irreducible factor(s) besides its n-t distinct linear factors. If the additional factors are all linear (λ-n+t additional linear factors), then g(x) satisfies either (i) or (ii). If the additional factors are not all linear, then g(x) satisfies (iii). So our error interpolation polynomial g(x) has the exact same properties as we have for the polynomial a(x) in Theorem <ref>. Hence, g_i=a_i for 0≤ i≤ m-1. Let G be the circulant matrix associated with the error interpolation polynomial g(x) where the weight of the error vector e is t. Then, the t× t Toeplitz submatrix formed by the first t rows and the last t columns of G̃ is nonsingular. Using Lemma <ref>, if v=n-t and m≤ n-1, then g_i=a_i for 0≤ i ≤ m-1. Comparing the t× t Toeplitz matrix formed by the first t rows and the last t columns of G̃ (colored in red) in (<ref>) with the Toeplitz submatrix formed by the last q-1-v rows and the last q-1-v columns of S_v(x^q-1-1,a) (colored in red) in (<ref>) shows that these two submatrices are equal (q-1-v=n-(n-t)=t). It is proven in Theorem <ref> that the determinant of S_v(x^q-1-1,a) is nonzero and so the determinant of its (q-1-v)× (q-1-v) minor (colored in red) obtained by removing the first m-v rows and columns, which dedicates the determinant of S_v(x^q-1-1,a), is nonzero. Thus, the determinant of the aforementioned t× t Toeplitz submatrix of G̃ is nonzero, and therefore it is nonsingular. <ref> We have S_7(x^10-1,g)= ([ 1 2 ; 0 1 4 2 ; 0 0 7 4 2; 0 0 8 7 4; 0 0 1 8 7 ]). Note that the two 3 × 3 submatrices colored in red in G̃ in (<ref>) and S_7(x^10-1,g) are equal. From Theorem <ref>, the determinant of S_7 is nonzero, and so is the determinant of the submatrix of G̃ colored in red. 
The matrix G̃ has rank t=3 and comparing with S_7 and using Lemma <ref>, it follows that the 3× 3 submatrix in the top right corner of G̃ is nonsingular. §.§.§ Reconstruction of g As mentioned before, all the entries in the t× (t+1) submatrix located in the top right corner of G̃ are known. Let g̃_j denote the j-th column of G̃ for 0≤ i≤ n-1. Using Lemma <ref>, we know that the column vectors g_n-t,…, g̃_n-1 are independent and based on Theorem <ref> the rank of G̃ is t, hence we can write the first t entries of g̃_n-t-1 (colored in blue) as a linear combination of the first t entries of the column vectors g̃_n-t,…, g̃_n-1 (colored in red). This gives the Toeplitz linear system of equations g_i=η_1g_i+1+η_2g_i+2+⋯+η_tg_i+t, for n-2t≤ i≤ n-t-1, with t equations and t unknowns η_1,…, η_t. The coefficient matrix for our system is nonsingular and a unique solution for η_1,…, η_t exists. <ref> One can write the entries colored in blue as a linear combination of the entries colored in red, resulting in the linear system of equations 7η_1+4η_2+2η_3 =8 8η_1+7η_2+4η_3 =1 1η_1+8η_2+7η_3 =2. Since the coefficient matrix has a Toeplitz structure, we can use the algorithm in <cit.> to find its solution (η_1,η_2,η_3)=(6,1,3). Now, we can use the η_i's and write the entries g_k-1,…, g_0 of g̃_n-t-1 as a linear combination of the column vectors g̃_n-t,…, g̃_n-1 in a Toeplitz form as g_i=η_1g_i+1+η_2g_i+2+⋯+η_tg_i+t, for 0≤ i ≤ n-2t-1, which recursively gives the unknown coefficients g_k-1,…,g_0. When the polynomial g(x) is found, we can compute e=g̃·G_α to find the error vector, and finally subtract e from the received word r gives the codeword c. <ref> In this step, we work on the submatrix obtained by removing the first n-t-1=6 columns, the first t=3 rows, and the last t=3 rows of G̃. We use the coefficients (η_1,η_2,η_3)=(6,1,3) computed above to find g_3 first, and then g_2,g_1,g_0, recursively, as g_3 =2η_1+1η_2+8η_3 g_2 =g_3η_1+2η_2+1η_3 g_1 =g_2η_1+g_3η_2+2η_3 g_0 =g_1η_1+g_2η_2+g_3η_3 using methods in <cit.> and obtain (g_3,g_2,g_1,g_0)=(4,7,8,1). Next, we compute the error vector e=g̃·G_α=(0, 0, 0, 0, 5, 0, 4, 0, 1, 0 ), and finally the corrected codeword c=r-e=( 8, 0, 4, 3, 1, 10, 8, 8, 3, 3 )∈𝒞. Our decoding algorithm consists of the following four steps. Step 1. Compute β=r·G_α^-1. Step 2. Solve the t× t Toeplitz linear system T·η=([ g_n-t … g_n-1; g_n-t-1 … g_n-2; ⋮ ⋱ ⋮; g_n-2t+1 … g_n-t ])·([ η_1; η_2; ⋮; η_t ])=([ g_n-t-1; g_n-t-2; ⋮; g_n-2t ]), and find η=(η_1,…,η_t). The column vector in the right hand side contains the first t entries of g̃_n-t-1. Step 3. Use η from Step 2 to solve the k× k Toeplitz linear system T̂·η = ([ g_k … g_k+t-1; g_k-1 … g_k+t-2; ⋮ ⋱ ⋮; g_1 … g_t ])·([ η_1; η_2; ⋮; η_t ])=([ g_k-1; g_k-2; ⋮; g_0 ]), and obtain g_0,…, g_k-1. Step 4. Calculate the error values via e=g̃·G_α. § COMPLEXITY ANALYSIS We summarize our proposed decoding algorithm in <ref> and relate the steps above to the line numbers in <ref> in the following discussion. The first (<ref>) and the fourth (<ref>) steps are computed using an IDFT and a DFT, respectively, and since the matrix G_α is a Vandermonde matrix and α is a primitive n-th root of unity, one can use an IFFT and an FFT, respectively, to accomplish them. When is FFT-friendly, the Cooley-Tukey FFT <cit.>, which requires 𝒪(nlog n) field operations, can be applied and when is not FFT-friendly, one can use the algorithm in <cit.> with complexity 𝒪(nlog^2nloglog n). 
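For concreteness, the following sketch (ours; a plain quadratic-time transcription, not the FFT-accelerated version whose complexity is analysed here) carries out the four steps on the received word of the running example over 𝔽_11, solving the t×t Toeplitz system by Gauss-Jordan elimination instead of Brent's algorithm.

q, n, k, t, alpha = 11, 10, 4, 3, 2
a_inv, n_inv = 6, 10                                      # alpha^(-1) and n^(-1) in F_11
r = [8, 0, 4, 3, 6, 10, 1, 8, 4, 3]                       # received word with t = 3 errors

# Step 1: beta = r . G_alpha^{-1} (inverse DFT); beta_j already equals g_j for j >= k.
g = [n_inv * sum(r[j] * pow(a_inv, j * m, q) for j in range(n)) % q for m in range(n)]

# Step 2: solve the t x t Toeplitz system T . eta = b (Gauss-Jordan over F_q; the paper
# instead uses Brent's FFT-based Toeplitz solver to reach the stated complexity).
T = [[g[n - t - row + c] for c in range(t)] for row in range(t)]
b = [g[n - t - 1 - row] for row in range(t)]
A = [T[i] + [b[i]] for i in range(t)]
for col in range(t):
    piv = next(i for i in range(col, t) if A[i][col])     # T is nonsingular, so a pivot exists
    A[col], A[piv] = A[piv], A[col]
    inv = pow(A[col][col], q - 2, q)                      # inverse via Fermat's little theorem
    A[col] = [x * inv % q for x in A[col]]
    for i in range(t):
        if i != col and A[i][col]:
            A[i] = [(A[i][j] - A[i][col] * A[col][j]) % q for j in range(t + 1)]
eta = [A[i][t] for i in range(t)]
print(eta)                                                # [6, 1, 3]

# Step 3: recover g_{k-1}, ..., g_0 via g_i = eta_1 g_{i+1} + ... + eta_t g_{i+t}.
for i in range(k - 1, -1, -1):
    g[i] = sum(eta[j - 1] * g[i + j] for j in range(1, t + 1)) % q
print(g[:k])                                              # [1, 8, 7, 4]

# Step 4: error values e = g~ . G_alpha, then subtract to correct the received word.
e = [sum(g[i] * pow(alpha, i * j, q) for i in range(n)) % q for j in range(n)]
c = [(r[j] - e[j]) % q for j in range(n)]
print(e)                                                  # [0, 0, 0, 0, 5, 0, 4, 0, 1, 0]
print(c)                                                  # [8, 0, 4, 3, 1, 10, 8, 8, 3, 3]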
The only condition that we set for the decoding algorithm is n=q-1, and this guarantees the existence of a primitive n-th root of unity in 𝔽_q. Step 2 is the bottleneck of our decoding algorithm. In Steps 2 (<ref>) and 3 (<ref>) we solve two linear systems of equations with t and k variables, respectively, where the coefficient matrix is a nonsingular Toeplitz matrix. In <cit.>, Brent proposed an algorithm to solve Toeplitz linear systems, which again can involve an FFT. In particular, one can find the vector η in <ref> using the algorithm by Brent in two steps. The first step is to find the inverse of T and the second step is to find η using the subalgorithm. For an FFT-friendly finite field 𝔽_q, the complexity of the first step is of order 𝒪(tlog^2t), while the complexity of the second step is 𝒪(tlog t). For an arbitrary finite field, however, the overall complexity becomes 𝒪(tlog^2 tloglog t). Step 3 is a convolution, and one can compute g_k-1,…, g_0 (<ref>) using an FFT with complexity 𝒪(klog k) over an FFT-friendly finite field or with complexity 𝒪(klog kloglog k) using a DFT over an arbitrary finite field <cit.>, <cit.>. More methods to solve Toeplitz linear systems can be found in <cit.>. In summary, the overall complexity becomes 𝒪(nlog n+tlog^2 t) = 𝒪(tlog^2 t) (as t is of order 𝒪(n), but smaller) and 𝒪(nlog^2 n loglog n) over an FFT-friendly and an arbitrary finite field, respectively. We summarize the complexity discussion with the following theorem. Consider an (n,k) RS code over 𝔽_q, obtained by evaluation on 𝒜={α^0,α^1,…,α^n-1} where α is a primitive n-th root of unity and n=q-1. Then, every received word can be uniquely decoded up to t≤⌊n-k/2⌋ errors using <ref> with asymptotic complexity 𝒪(tlog^2t) and 𝒪(nlog^2nloglog n), respectively, for FFT-friendly and arbitrary finite fields 𝔽_q. § CONCLUSION We proposed a new interpolation-based decoding algorithm for (n,k) RS codes over finite fields 𝔽_q, where n=q-1. We showed that it can correct any t≤⌊n-k/2⌋ errors with complexity 𝒪(tlog^2 t) and 𝒪(nlog^2 n loglog n) for FFT-friendly and arbitrary finite fields, respectively. It is based on properties of a circulant matrix associated with the error interpolation polynomial and some known results from elimination theory.
http://arxiv.org/abs/2307.02711v1
20230706012550
On the $\operatorname{rix}$ statistic and valley-hopping
[ "Nadia Lafrenière", "Yan Zhuang" ]
math.CO
[ "math.CO", "05A05 (Primary), 05E18 (Secondary)" ]
On the rix statistic and valley-hopping Nadia Lafrenière and Yan Zhuang ================================================================================================================================================================================== This paper studies the relationship between the modified Foata–Strehl action (a.k.a. valley-hopping)—a group action on permutations used to demonstrate the γ-positivity of the Eulerian polynomials—and the number of rixed points rix—a recursively-defined permutation statistic introduced by Lin in the context of an equidistribution problem. We give a linear-time iterative algorithm for computing the set of rixed points, and prove that the statistic rix is homomesic under valley-hopping. We also demonstrate that a bijection Φ introduced by Lin and Zeng in the study of the rix statistic sends orbits of the valley-hopping action to orbits of a cyclic version of valley-hopping, which implies that the number of fixed points is homomesic under cyclic valley-hopping. Keywords: rixed points, fixed points, valley-hopping, modified Foata-Strehl action, permutation statistics, homomesy 2020 Mathematics Subject Classification. Primary 05A05; Secondary 05E18. § INTRODUCTION Let 𝔖_n denote the symmetric group of permutations of [n]≔{1,2,…,n}. We will usually write permutations in one-line notation, so π∈𝔖_n is written as π=π_1π_2⋯π_n. Given a permutation π∈𝔖_n, we call π_i (where i∈[n-1]) a descent of π if π_i>π_i+1 and we call i∈[n] an excedance of π if i<π_i. Let des(π) denote the number of descents of π, and exc(π) its number of excedances. The descent number and excedance number are classical permutation statistics which are well known to be equidistributed over 𝔖_n; in other words, for any fixed n and k, the number of permutations in 𝔖_n with des(π)=k is equal to the number of permutations in 𝔖_n with exc(π)=k. The distributions of both statistics are encoded by the Eulerian polynomials A_n(t)≔∑_π∈𝔖_nt^des(π)=∑_π∈𝔖_nt^exc(π). The Eulerian polynomials are γ-positive: there exist non-negative coefficients γ_k for which A_n(t)=∑_k=0^⌊ (n-1)/2⌋γ_kt^k(1+t)^n-1-2k. The γ-positivity of A_n(t) implies that the coefficients of A_n(t) are unimodal and symmetric. One method of proving the γ-positivity of A_n(t) which yields a combinatorial interpretation of the coefficients γ_k is via a group action on permutations which we will refer to as valley-hopping (but which is also called the modified Foata–Strehl action in the literature). One can show that the distribution of des over each orbit of the valley-hopping action is of the form t^k(1+t)^n-1-2k, and summing over all orbits completes the proof. Valley-hopping and its variants have been used to provide combinatorial proofs for related γ-positivity results, some of which are described in the survey article <cit.> on γ-positivity in combinatorics and geometry. This paper concerns the relationship between valley-hopping and a permutation statistic denoted rix. The statistic rix is defined recursively in the following way: let rix(∅)=0, and if w = w_1 w_2 ⋯ w_k is a word with distinct positive integer letters and largest letter w_i, then let rix(w)≔ 0, if i=1<k, 1+rix(w_1w_2⋯ w_k-1), if i=k, rix(w_i+1 w_i+2⋯ w_k), if 1<i<k. Equivalently, rix(π) is the number of “rixed points” of π as defined by Lin and Zeng <cit.>. The statistic rix was introduced by Lin <cit.> in the context of an equidistribution problem which we will now describe.
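Before describing that problem, we note that the recursion above translates directly into a short program; the following sketch (ours, in Python, not part of the original paper) computes rix of a word with distinct letters, and it returns 2 for 142785369 and 3 for 23816457, in agreement with the rixed-point sets worked out for these permutations in Section 2.

def rix(w):
    """rix of a word w with distinct positive integer letters, following the recursion."""
    if not w:
        return 0
    i = max(range(len(w)), key=lambda j: w[j])        # position of the largest letter
    if i == len(w) - 1:                               # largest letter is last: 1 + rix of the prefix
        return 1 + rix(w[:-1])
    if i == 0:                                        # largest letter is first (and w has length > 1)
        return 0
    return rix(w[i + 1:])                             # otherwise recurse on the suffix after it

print(rix([1, 4, 2, 7, 8, 5, 3, 6, 9]))               # 2  (its rixed points turn out to be 6 and 9)
print(rix([2, 3, 8, 1, 6, 4, 5, 7]))                  # 3  (rixed points 4, 5 and 7)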
Consider the basic Eulerian polynomials A_n(t,q,r)∑_π∈𝔖_nt^(π)r^(π)q^(π)-(π) where (π) is the number of fixed points of π and (π)∑_π_i>π_i+1i is the major index of π (the sum of its descents). Since and are equidistributed over 𝔖_n, one may ask if there are statistics _1 and _2 for which (,,-) and (,_1,_2) are equidistributed over 𝔖_n. Lin showed that we can take _1= and _2=, where the latter is the number of “admissible inversions” of π. In other words, A_n(t,q,r)=∑_π∈𝔖_nt^(π)r^(π)q^(π) is an alternative interpretation for the basic Eulerian polynomials. Prior to Lin's introduction of the statistic, Shareshian and Wachs <cit.> had remarked that the polynomials A_n(t,0,r) and A_n(t,1,r) satisfy a refined version of γ-positivity. Later, Lin and Zeng <cit.> used valley-hopping and Lin's interpretation of the basic Eulerian polynomials to give combinatorial interpretations for the γ-coefficients of A_n(t,0,r) and A_n(t,1,r). Along the way, they defined a bijection Φ𝔖_n→𝔖_n satisfying (π)=(Φ(π)) and (π)=(Φ(π)) where is the set of rixed points and the set of fixed points. The existence of this bijection Φ demonstrates that not only are (,) and (,) jointly equidistributed over 𝔖_n, but (,) and (,) are as well.[The same is not true for (,,-) and (,,).] Dynamical algebraic combinatorics—which, broadly speaking, investigates phenomena associated with actions on combinatorial structures—is an emerging area of research within algebraic combinatorics. An example of such phenomena is homomesy, where a statistic on a set of combinatorial objects has the same average value over each orbit of an action; see <cit.> for a survey of this topic. Motivated by <cit.>, which was a systematic investigation of the homomesy phenomenon on permutations, the present work originated as an attempt to identify permutation statistics which are homomesic under valley-hopping, which was not considered in <cit.>. Following the approach of <cit.>, we automatically searched for permutation statistics from the online FindStat database <cit.> that exhibited homomesic behavior under valley-hopping for 2≤ n ≤ 6. Our positive matches included the descent number and some related statistics (such as the number of ascents), but aside from these, the only statistic that appeared to be homomesic under valley-hopping is the statistic.[As of June 28, 2023, there were 387 other permutation statistics with code in the FindStat database, but we found counterexamples for all of them.] A simple examination of the orbit structure of the valley-hopping action shows that is homomesic under valley-hopping,[In fact, the homomesy of under valley-hopping is implicit in the valley-hopping proof for the γ-positivity of the Eulerian polynomials.] which in turn implies that the related statistics are homomesic by way of symmetry arguments. On the other hand, proving that is homomesic under valley-hopping required further investigation, and it soon became evident to us that there is more to the relationship between the statistic and valley-hopping than meets the eye. Our subsequent explorations on this topic led to the full results presented here. The organization of our paper is as follows. Section 2 introduces the definitions of several permutation statistics that are relevant to this work, as well as Lin and Zeng's “rix-factorization” of a permutation (which is needed to define the statistic). Section 3 is devoted to a linear-time iterative algorithm for computing the set of rixed points and the rix-factorization. 
After we present and demonstrate the validity of our algorithm, we will use this algorithm to help prove several results, including a recursive definition for that lifts the definition of given in (<ref>), another characterization for rixed points, and a few additional lemmas about rixed points and the rix-factorization which we will use later on in this paper. Section 4 is again expository, and provides the definition of valley-hopping and introduces a cyclic version of valley-hopping. Cyclic valley-hopping was originally defined on derangements by Sun and Wang <cit.>, and was extended to all permutations by Cooper, Jones, and the second author <cit.>. While the version of cyclic valley-hopping due to Cooper–Jones–Zhuang fixes all fixed points, our version of cyclic valley-hopping does not fix fixed points. We also define “restricted” versions of valley-hopping and cyclic valley-hopping. Restricted valley-hopping was first introduced by Lin and Zeng <cit.>, whereas restricted cyclic valley-hopping is precisely the version of cyclic valley-hopping due to Cooper–Jones–Zhuang mentioned above. Sections 5–6 focus on our main results concerning the relationship between valley-hopping and the statistic. In Section 5, we prove that is homomesic under valley-hopping. Finally, in Section 6, we show that the bijection Φ of Lin and Zeng sends valley-hopping orbits to cyclic valley-hopping orbits (and sends restricted valley-hopping orbits to restricted cyclic valley-hopping orbits). As a consequence, (the number of fixed points) is homomesic under cyclic valley-hopping. § PERMUTATION STATISTICS The purpose of this preliminary section is to introduce several permutation statistics that will be relevant to our work. Fix a permutation π=π_1π_2⋯π_n in 𝔖_n. We have already defined descents; an ascent of π is a letter π_i (where i∈[n]) for which π_i < π_i+1, with the convention π_n+1=∞—i.e., an ascent is a letter that is not a descent. For example, take π = 135987426. Then the ascents of π are 1, 3, 5, 2, and 6, whereas its descents are π are 9, 8, 7, and 4. Notice that, under our definition, the last letter of a permutation is always an ascent. Let us adopt the convention π_0=π_n+1=∞ for the definitions below. Given i∈[n], we call π_i: * a peak of π if π_i-1<π_i>π_i+1; * a valley of π if π_i-1>π_i<π_i+1; * a double ascent of π if π_i-1<π_i<π_i+1; * a double descent of π if π_i-1>π_i>π_i+1. Continuing the example above, the only peak of π = 135987426 is 9; its valleys are 1 and 2; its double ascents are 3, 5, and 6; and its double descents are 8, 7, and 4. In particular, observe that every letter of a permutation is either a peak, valley, double ascent, or double descent. We note that the terms ascent, descent, peak, valley, double ascent, and double descent more commonly refer to a position i as opposed to a letter π_i, and most authors do not take π_0=π_n+1=∞ when defining these terms. It will be more convenient for us to adopt these conventions. Next, recall that the statistic was defined by Lin using the recursive formula (<ref>), and that Lin and Zeng later defined the set-valued statistic for which gives the cardinality. The definition of relies on Lin and Zeng's “rix-factorization”, which is given below. Each permutation π∈𝔖_n can be uniquely written in the form π = α_1 ⋯α_k β where the factors α_1,…,α_k,β (henceforth called rix-factors) are obtained by applying the following algorithm: (1) Initialize wπ and i0. (2) If w is an increasing word, let β w and terminate the algorithm. 
Otherwise, increase i by 1, let x be the largest descent of w, and write w = w'xw” (so that w' consists of all the letters of w to the left of x, and w” of all the letters to the right of x). (3) If w'= ∅, let β := w and terminate the algorithm. Otherwise, let α_i := w'x and w := w”, and go to (2). The expression (<ref>) is called the rix-factorization of π. When writing out the rix-factorization of a permutation, we will often use vertical bars to demarcate the rix-factors. It will also be convenient for us to let β_1(π) denote the first letter of β in the rix-factorization of a permutation π. A rixed point of π is a letter in the maximal increasing suffix of π which is not smaller than β_1(π), and the set of rixed points of π is denoted Rix(π). Let us walk through the algorithm in Definition <ref> for the permutation π = 142785369: (1) Set w=142785369 and i=0. (2-1) Since w is not increasing, we set i=1, x=8, w'=1427, and w”=5369. (3-1) Since w'≠∅, we set α_1=14278 and w=5369. (2-2) Since w is not increasing, we set i=2, x=5, w'=∅, and w”=369. (3-2) Since w' = ∅, we set β = w = 5369. Thus the rix-factorization of π is 14278|5369 and Rix(π)={ 6,9 }. The above example showcased a permutation for which the algorithm terminates in step (3). Below is an example in which termination occurs in step (2). Let us walk through the algorithm in Definition <ref> for the permutation π = 23816457: (1) Set w=23816457 and i=0. (2-1) Since w is not increasing, we set i=1, x=8, w'=23, and w”=16457. (3-1) Since w'≠∅, we set α_1=238 and w=16457. (2-2) Since w is not increasing, we set i=2, x=6, w'=1, and w”=457. (3-2) Since w'≠∅, we set α_2=16 and w=457. (2-3) Since w is increasing, we set β=457. Thus the rix-factorization of π is 238|16|457 and Rix(π)={ 4,5,7 }. While the algorithm in Definition <ref> is recursive, our algorithm in the next section is iterative.

§ AN ITERATIVE ALGORITHM FOR RIXED POINTS AND THE RIX-FACTORIZATION

In this section, we give an iterative algorithm for computing the rixed points of a permutation along with its rix-factorization. This is achieved through the use of pointers on the permutation, restricting it to a valid factor. At first the valid factor is taken to be the entire permutation, but we gradually restrict the valid factor as the algorithm progresses. To make the algorithm iterative, we consider all the entries of the permutation in decreasing order. For each entry x, we first check whether it appears in the valid factor, and if it does, we use the local shape of the permutation around x to move a boundary of the valid factor inward. When we move the left boundary, a new term is added to the rix-factorization; when we move the right boundary, x is added as a rixed point. After describing the algorithm explicitly, we will prove that its output indeed gives the rix-factorization and the rixed points as defined by Lin and Zeng, and then we use the algorithm to prove several more results concerning rixed points and the rix-factorization.

§.§ Explicit description of the algorithm

Let π∈𝔖_n and σ = π^-1. Throughout this algorithm, let π_l ⋯π_r denote the valid factor. We begin with l=1 and r=n, so that the entire permutation π is the valid factor. We let x iterate through each of the letters n, n-1, n-2, …, 1—in that order—until the stopping condition described below occurs. Let i be the position of x in π, so that π_i=x or, equivalently, σ_x = i.
If x belongs to the valid factor (i.e., if l ≤ i ≤ r): (a) If x is a peak of π, then the valid factor becomes π_i+1⋯π_r, and π_l⋯π_i = π_l⋯ x is added to the rix-factorization. Otherwise, x is either the first or the last letter of the valid factor (because letters are examined in decreasing order). (b) If x is the first letter of the valid factor, then we add π_i⋯π_n=x⋯π_n to the rix-factorization, and x is added as a rixed point if x is an ascent of π. The algorithm terminates. (c) If x is the last letter (but not the first) of the valid factor, then the valid factor becomes π_l ⋯π_i-1, and x is added as a rixed point. Once the algorithm stops (when x is the first letter of the valid factor), we return the set of rixed points and the rix-factorization. Pseudocode for Algorithm <ref> is given below. Algorithm <ref> runs in time linear in n. The number of operations is at least linear, since computing the inverse of a permutation requires reading all of its entries. All other operations (comparisons, assignments, and appending to lists) are done in constant time, and there is a single for-loop (which we go through at most n times), so the algorithm requires a number of operations that is at most linear in n. Let us illustrate this algorithm with two examples; compare with Examples <ref>–<ref>. [Example <ref> continued] Let us walk through Algorithm <ref> for π = 142785369, highlighting the evolution of the valid factor: (x=9) 142785369 → 14278536, because x=9 is at the end of the valid factor. The rix-factorization contains no terms yet, and 9 is added as a rixed point. (x=8) 14278536 → 536, because x=8 is a peak. We add 14278 to the rix-factorization, and the set of rixed points is unchanged. (x=7) 536 → 536, because x=7 is outside the valid factor. The rix-factorization and the set of rixed points are unchanged. (x=6) 536 → 53, because x=6 is at the end of the valid factor. The rix-factorization is unchanged, and 6 is added as a rixed point. (x=5) The algorithm terminates because x=5 is the first letter of the valid factor 53. We add 5369 to the rix-factorization, but 5 is not added as a rixed point because it is not an ascent. Thus the rix-factorization of π is 14278|5369 and Rix(π)={ 6,9 }, which agrees with what was obtained before. [Example <ref> continued] Let us walk through Algorithm <ref> for π = 23816457, highlighting the evolution of the valid factor: (x=8) 23816457 → 16457, because x=8 is a peak. We add 238 to the rix-factorization, and the set of rixed points is currently empty. (x=7) 16457 → 1645, because x=7 is at the end of the valid factor. The rix-factorization is unchanged, and 7 is added as a rixed point. (x=6) 1645 → 45, because x=6 is a peak. We add 16 to the rix-factorization, and the set of rixed points is unchanged. (x=5) 45 → 4, because x=5 is at the end of the valid factor. The rix-factorization is unchanged, and 5 is added as a rixed point. (x=4) The algorithm terminates because x=4 is the first letter of the valid factor 4. We add 457 to the rix-factorization, and 4 is added as a rixed point because it is an ascent. Thus the rix-factorization of π is 238|16|457 and Rix(π)={ 4,5,7 }, which agrees with what was obtained before. See Figure <ref> for visual depictions of Examples <ref> and <ref>.

§.§ Proof that Algorithm <ref> gives the rix-factorization

We first show that Algorithm <ref> indeed gives what Lin and Zeng defined to be the rix-factorization (as in Definition <ref>).
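Before turning to the proof, here is a minimal Python sketch of the procedure just described (this is not the authors' pseudocode; the function and variable names are purely illustrative, and positions are 0-indexed). It reproduces the two examples above.

def rix_data(perm):
    # Iterative computation of the rix-factorization and the rixed points,
    # following cases (a)-(c) above; perm is a permutation of 1..n as a list.
    n = len(perm)
    pos = {v: i for i, v in enumerate(perm)}      # the inverse permutation sigma
    l, r = 0, n - 1                               # the valid factor is perm[l..r]
    factors, rixed = [], []
    for x in range(n, 0, -1):                     # examine letters in decreasing order
        i = pos[x]
        if not (l <= i <= r):                     # x lies outside the valid factor
            continue
        is_peak = 0 < i < n - 1 and perm[i - 1] < x and perm[i + 1] < x
        if is_peak:                               # case (a): close off a new alpha-factor
            factors.append(perm[l:i + 1])
            l = i + 1
        elif i == l:                              # case (b): x starts the valid factor
            factors.append(perm[i:])              # this final factor is beta
            if i == n - 1 or perm[i + 1] > x:     # x is an ascent, hence a rixed point
                rixed.append(x)
            break
        else:                                     # case (c): x ends the valid factor
            rixed.append(x)
            r = i - 1
    return factors, sorted(rixed)

print(rix_data([1, 4, 2, 7, 8, 5, 3, 6, 9]))   # ([[1,4,2,7,8], [5,3,6,9]], [6, 9])
print(rix_data([2, 3, 8, 1, 6, 4, 5, 7]))      # ([[2,3,8], [1,6], [4,5,7]], [4, 5, 7])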
To do so, we need a lemma regarding what is to the right of the valid factor. At any stage during the execution of Algorithm <ref>, if the valid factor is π_l⋯π_r, then π_r, π_r+1, …, π_n are all ascents of π, so π_r< π_r+1 < ⋯ < π_n. We proceed by induction. Our base case is at the start of the algorithm, at which point there is nothing to the right of the valid factor. Since the last letter of a permutation is defined to be an ascent by convention, the result holds. Now, assume that the statement is true when the valid factor is π_l⋯π_r. We need to show that iterating Algorithm <ref> by one step preserves the accuracy of the statement. For this, we show that whenever the right boundary of the valid factor moves, it moves only by one position and the new right boundary is an ascent of π. By Algorithm <ref>, the right boundary changes only when π_r is the largest letter of the valid factor π_l⋯π_r and l≠ r, at which point the valid factor becomes π_l⋯π_r-1. Hence, π_r-1 < π_r, so π_r-1 is an ascent of π. We also know from the induction hypothesis that π_r, π_r+1, …, π_n are ascents. Thus, everything weakly to the right of the right boundary π_r-1 is an ascent, or equivalently, π_r-1<π_r<⋯ < π_n. Algorithm <ref> produces the rix-factorization as given in Definition <ref>. To prove the proposition, we simultaneously apply to a permutation π the procedures in Definition <ref> and in Algorithm <ref>, showing that the terms added to the rix-factorization are the same at each step. Algorithm <ref> considers a factor of π (seen as a word), called the valid factor. Let w be as in Definition <ref> and let v be the valid factor of π plus what is to its right. At first, we have w=v=π. We show, by induction, that v=w at each step of the joint execution of the procedure in Definition <ref> and that of Algorithm <ref>. The base case is v=w=π, and no term is in the rix-factorization at this stage. For the induction hypothesis, we assume v=w, and we apply the recursive procedure in Definition <ref> and the iterative procedure in Algorithm <ref>. Consider the following two cases: (1) Suppose that w has a descent; then it is not an increasing word. Let y be the largest descent of w. We have v=w by the induction hypothesis, so y is also the largest descent of v. Also, let x be the largest letter of the valid factor. There are two options for x: either it is an ascent or it is a descent. If x is a descent, then it must be the largest descent in v, and since what is to the right of the valid factor consists only of ascents (by Lemma <ref>), we have y = x. If x is an ascent and is the largest letter of the valid factor, then it must be the rightmost letter of the valid factor (otherwise, the letter to its right is a larger letter in the valid factor). Hence, in Algorithm <ref>, the right boundary of the valid factor moves one position to the left, and v is untouched. We repeat the process until the largest letter is a descent, making x=y. Let us write w=w'yw”. Following the procedure in Definition <ref>, if w' is empty, the algorithm stops and β = w is added to the rix-factorization as the last rix-factor. Otherwise, if w' is not empty, then we add w'y to the rix-factorization and repeat the process with w” in place of w. Also, let us write v as v'xv”. Following Algorithm <ref>, if v' is empty then x is the first letter of the valid factor, so β=v is added to the rix-factorization and the algorithm stops. 
If v' is not empty, then x is a peak, so we add v'x to the rix-factorization and repeat the process with v” in place of v. Since x=y and v=w, we have v'=w' and v”=w”; thus the same terms have been added to the rix-factorization, concluding the induction step in this case. (2) If w has no descent, then neither does v, so they are both increasing words. In that case, Definition <ref> sets β=w and terminates the process. As for v, since it is increasing, its largest letter is successively the largest letter of the valid factor. Thus, during the execution of Algorithm <ref>, the right boundary of the valid factor moves one step to the left at a time, which does not impact v. Hence, the process in Algorithm <ref> is repeated until the valid factor has a single letter, in which case the algorithm stops and we add β = v to the rix-factorization. By these two cases, we have shown inductively that the terms of the rix-factorization obtained using Definition <ref> and Algorithm <ref> are the same, thus completing the proof. §.§ Proof that Algorithm <ref> gives the rixed points Recall from Definition <ref> that the set of rixed points of a permutation π is defined as a letter in the maximal increasing suffix of π that is no smaller than β_1(π), the first letter of the rix-factor β of π. We show here that the set of rixed points obtained from Algorithm <ref> is indeed the same set. Algorithm <ref> produces the set of rixed points as given in Definition <ref>. We let A be the set of letters in the maximal increasing suffix of π that are no smaller than the first letter of β_1(π) (so A is the set of rixed points as obtained from Definition <ref>). Also, let B contain the successive right boundaries of the valid factor of π, as well as the left boundary of the valid factor when Algorithm <ref> terminates if it is an ascent (so B is the set of rixed points as obtained from Algorithm <ref>). We show that A = B. Let y ∈ A. Then y is larger than or equal to β_1(π), which is the stopping point of Algorithm <ref>. Hence, y is considered during the execution of the Algorithm <ref>. We also know that y is part of the valid factor when it is considered, since the right boundary only excludes letters after consideration, and the left boundary only moves to the right of peaks. However, since y is part of an increasing suffix, it cannot have a peak to its right nor can it be a peak itself (or a descent in general). In particular, the fact that y is not a peak also implies that y must either be the first or the last letter of the valid factor when it is considered. If y is the first letter of the valid factor, then the algorithm stops, and y is in B because it is an ascent (as it is part of an increasing suffix). Otherwise, if y is the last letter (but not the first), then the algorithm still adds y to B. In any case, y ∈ B. We now prove the other inclusion. Let z ∈ B. Then, z is either the first letter of the valid factor when the algorithm stops, or it is the last letter of the valid factor at some point during the execution of the algorithm. In the latter case, z being at the end of the valid factor means that everything to its right is greater than z (by Lemma <ref>), so it is part of an increasing suffix of π. If z is the first letter of the valid factor when the algorithm stops, then it is in B only if is an ascent of π, in which case z is also the last letter of the valid factor by the argument in Case (1) of the proof of Proposition <ref>. In either case, z belongs to the maximal increasing suffix of π. 
Moreover, we know that z is considered by Algorithm <ref> during its execution and that β_1(π) is the last letter considered prior to termination. Since the algorithm considers the letters of π in decreasing order of their values, it follows that z≥β_1(π). Therefore, z∈ A. We have thus proved that A = B, so Algorithm <ref> indeed gives the set of rixed points. We now show that the recursive definition (<ref>) of the rix statistic can be adapted to obtain a recursive algorithm for computing the set of rixed points. As a consequence, we get that rix(π) is indeed the cardinality of Rix(π) for any permutation π.[This was stated by Lin and Zeng <cit.> but no proof was given.] Given a word w = w_1 w_2 ⋯ w_k with distinct positive integer letters, define Rix(w) in the following way: If k=0 (i.e., if w=∅), then Rix(w) := ∅. Otherwise, if w_i is the largest letter of w, then let
Rix(w) := ∅, if i=1<k;
Rix(w) := {w_k}∪Rix(w_1w_2⋯ w_k-1), if i=k;
Rix(w) := Rix(w_i+1w_i+2⋯ w_k), if 1<i<k.
Then, for any permutation π, the set Rix(π) obtained using this recursive definition is indeed the set of rixed points of π. Before proving Proposition <ref>, let us illustrate the recursive algorithm for Rix with a couple of examples. By comparing these with Examples <ref>–<ref>, which compute the rixed points of the same permutations using Algorithm <ref>, we see that the valid factors of Algorithm <ref> are precisely the words w appearing at each step of (<ref>). This observation will be key to our proof of Proposition <ref>. [Examples <ref> and <ref> continued] We use Proposition <ref> to compute the rixed points of π = 142785369: Rix(142785369) = {9}∪Rix(14278536) = {9}∪Rix(536) = {6,9}∪Rix(53) = {6,9}. [Examples <ref> and <ref> continued] We use Proposition <ref> to compute the rixed points of π = 23816457: Rix(23816457) = Rix(16457) = {7}∪Rix(1645) = {7}∪Rix(45) = {5,7}∪Rix(4) = {4,5,7}. We show using induction that the three cases in (<ref>) correspond (with a slight adjustment) to the three cases in Algorithm <ref>, and that the word w changes in the same way as the valid factor in Algorithm <ref>. Both procedures begin with the entire permutation π, which establishes the base case. For our induction hypothesis, suppose that w = w_1⋯ w_k is the valid factor of π. Let w_i be the largest letter of w. Consider the following cases:
* If w_i is the first letter (but not the last) of w, then there are no rixed points in w according to (<ref>). Note that if w_i is both an ascent of π and the first letter of w, then it is the only (and therefore last) letter of w, because it is the largest letter in w. Hence, this case corresponds to case (b) of Algorithm <ref> when w_i is not an ascent of π, in which no rixed points are added and the algorithm terminates.
* If w_i is the last letter of w, then (<ref>) adds w_i as a rixed point and w becomes w_1⋯ w_k-1. Note that if w_i is the only letter of w, then w_1⋯ w_k-1 = ∅ and the procedure in (<ref>) stops. This corresponds to case (c) of Algorithm <ref>, as well as to case (b) when w_i is the only letter of w (and thus an ascent of π by Lemma <ref>).
* Otherwise, the largest letter w_i of w is neither its first nor its last (which means that w_i is a peak of π), so w becomes w_i+1⋯ w_k and no rixed point is added. This corresponds to case (a) of Algorithm <ref>.
Because the valid factor and the set of rixed points change in the same way at each step of both procedures, we obtain the same set of rixed points at the end.

§.§ A characterization of rixed points

If y is a descent of π, let us call y a leading descent of π if there is no larger descent of π appearing after y.
For example, the leading descents of π=194376528 are 9, 7, 6, and 5, whereas 4 is a descent of π but not a leading descent because the larger letters 7, 6, and 5 are all descents of π that appear after 4. Leading descents are useful in the study of rixed points. For example, note that the x in step (2) of Definition <ref> is always a leading descent of π. We will now use leading descents to define the “maximal descending ridge” of a permutation, which plays a role in our subsequent characterization of rixed points. In this definition, we will prepend ∞ to π and consider ∞ to be the first leading descent of ∞π. We define the maximal descending ridge of a permutation π in the following way: * If π_i and π_i+1 are the leftmost pair of leading descents in adjacent positions, then the maximal descending ridge of π is the prefix of ∞π ending with π_i. * If ∞π does not have two leading descents in adjacent positions, then the maximal descending ridge of π is the prefix of ∞π ending with the rightmost descent of π. To illustrate, the maximal descending ridges of the permutations 142785369 and 23816457 (from the earlier examples) are ∞ 14278 and ∞ 23816, respectively. Let π∈𝔖_n. The rixed points of π are characterized by the following: (a) If y∈(π), then either y is a double ascent of π, or y is the rightmost valley of π and is either the first letter of π or immediately follows a peak in π. (b) A letter y satisfying the above conditions is a rixed point of π if and only if none of y+1, y+2, …, n is immediately to the right of the maximal descending ridge of π or appears as a peak to the right of y. Before giving the proof, let us use the characterization given by Theorem <ref> to determine the rixed points of the two permutations from the earlier examples. [Examples <ref>, <ref>, and <ref> continued] Take π = 142785369, whose maximal descending ridge is ∞ 14278. The letters of π meeting the requirements in Theorem <ref> (a) are 7, 6, and 9, but 7 is not a rixed point because 8 is a peak immediately following 7.[We also know that 7 cannot be a rixed point because it is not part of an increasing suffix of π.] On the other hand, 6 and 9 are both rixed points because none of 7, 8, and 9 is immediately to the right of the maximal descending ridge or is a peak to the right of 6, and there is no letter in π larger than 9. Thus, (π)={ 6,9 }. [Examples <ref>, <ref>, and <ref> continued] Take π = 23816457, whose maximal descending ridge is ∞ 23816. The letters of π meeting the requirements in Theorem <ref> (a) are 3, 4, 5, and 7, and then it is readily verified that 3 is the only one which does not meet the requirements in Theorem <ref> (b). Thus, (π)={ 4,5,7 }. In Examples <ref> and <ref>, the letter following the maximal descending ridge of π coincides with the beginning of the β rix-factor of π. This will be confirmed in the proof of Theorem <ref> below, as we shall show that if Algorithm <ref> terminates while considering x (so that x is the first letter β_1 (π) of β), then x immediately follows the maximal descending ridge of π. Therefore, we can also compute the rixed points of a permutation π by using the maximal descending ridge to determine β_1 (π) and then applying Definition <ref>. We first prove (a). By Definition <ref>, every rixed point y of π belongs to an increasing suffix of π, so y is an ascent of π. Every ascent is either a double ascent or a valley, and if y is a valley, then it is the rightmost valley of π as it must be the first letter of the maximal increasing suffix of π. 
Furthermore, in Algorithm <ref>, we see that a valley y is added as a rixed point only when it is the first letter of the valid factor; if y is instead the last letter (but not the first) of the valid factor, then y would not be the largest letter of the valid factor, contradicting the fact that Algorithm <ref> considers the letters of π in decreasing order. Since y is the first letter of the valid factor, it follows that it is either the first letter of π or it is immediately to the right of a peak (because this is how the left boundary is moved in Algorithm <ref>). Hence, part (a) is proven. To prove (b), let us first assume that y is a rixed point of π, and show that none of y+1, y+2, …, n is immediately to the right of the maximal descending ridge of π or appears as a peak to the right of y. For this, recall again that the letters of π are inspected in decreasing order by Algorithm <ref> until we reach the stopping condition, which is when the largest letter of the valid factor is its first letter. Let q be the largest (and thus first) letter of the valid factor when the algorithm stops. None of q+1, q+2, …, n is a peak to the right of q; otherwise, the left boundary of the valid factor would have been moved to the right of q, and so q would not be in the valid factor. We also know that y is weakly to the right of q and that y≥ q because y is a rixed point, so {y+1, y+2, …, n} is a subset of {q+1, q+2, …, n }, and thus it follows from the analogous statement for q that none of y+1, y+2, …, n is a peak to the right of y. By the same reasoning as in part (a), either q is the first letter of π, or it immediately follows a peak of π—call it z. In the latter case, we claim that z is the last letter of the maximal descending ridge of π, which we prove in the following steps: (1) We show that z is a leading descent of π. Otherwise, if z^' were the largest descent to the right of z that is greater than z, then z^' would be a peak, and so the left boundary of Algorithm <ref> would have moved to the right of z^' upon considering z^'. Hence, no such z^' exists, so z is indeed a leading descent. (2) We show that there cannot be leading descents in adjacent positions weakly to the left of z, which would imply that z belongs to the maximal descending ridge. Suppose otherwise, and let z^' and z^'' be the leftmost pair of leading descents in adjacent positions. (Note that z^' can be ∞, and z^'' can be z unless z^'=∞.) Since z^' > z^''≥ z > q and Algorithm <ref> terminates while considering q, the algorithm must consider z^' (unless z^'=∞) and z^'' at some point. If z^'=∞, then z^'' is the first letter of π and thus the first letter of the valid factor at the beginning of Algorithm <ref>. If z^'≠∞, then z^' is a peak, so the left boundary of the valid factor would move to z^'' upon considering z^'. Either way, the valid factor will begin with z^'' at some point during the execution of the algorithm. And since z^'' is a leading descent, there are no peaks larger than z^'' to the right of z^'', so the left boundary would stay at z^'' until z^'' is considered by the algorithm, at which point the algorithm terminates. This contradicts the assumption that the algorithm terminates while considering q, so no such z^' and z^'' exist. (3) If q is a descent, then q is a leading descent by the same reasoning as in (1), so z is the last letter of the maximal descending ridge. 
If q is an ascent, then it is added by Algorithm <ref> as a rixed point, and since the rixed points form an increasing suffix of π, this means that z is the rightmost descent of π and therefore the last letter of the maximal descending ridge. Note that if q is the first letter of π, then the argument in (3) suffices to show that the maximal descending ridge is ∞. In either case, q is immediately to the right of the maximal descending ridge, and since y≥ q, it follows that none of y+1, y+2, …, n is immediately to the right of the maximal descending ridge. Conversely, suppose that y satisfies the conditions in part (a), and that none of y+1, y+2, …, n immediately follows the maximal descending ridge of π or appears as a peak to the right of y. We wish to show that y is a rixed point of π. First, note that the left boundary of the valid factor never moves to the right of y during the execution of Algorithm <ref>, since no letter larger than y is a peak to the right of y. Also, if the algorithm were to terminate while considering a letter q larger than y, then we know from an argument earlier in this proof that q immediately follows the maximal descending ridge, which is a contradiction. Hence, y is considered by Algorithm <ref> at some point during its execution, and y appears in the valid factor while under consideration. Now, recall that the conditions in part (a) imply that y is an ascent and therefore not a peak, so y is either the first or the last letter of the valid factor when it is considered by the algorithm. If y is the first letter, then the fact that it is an ascent guarantees that it is a rixed point. If y is the last letter (but not the first) of the valid factor, then Algorithm <ref> adds it as a rixed point as well. Thus the proof of part (b) is complete. §.§ Properties of rixed points and the rix-factorization Before proceeding, we shall give a few more properties of rixed points and the rix-factorization which will be used later in this paper. Let π be a permutation with rix-factorization π=α_1⋯α_kβ. Let y∈(π). Then y is a valley of π if y=β_1(π), and is a double ascent of π otherwise. Recall from Theorem <ref> (a) that a rixed point of π is either a valley or a double ascent of π. By the algorithm in Definition <ref>, β_1(π) must either be the first letter of π or is immediately preceded by a descent. Hence, if y=β_1(π), then y is a valley of π. Now, suppose that y≠β_1(π), so that y>β_1(π). Then y is added to the set of rixed points in Algorithm <ref> when it is the last letter of the valid factor, which is subsequently set to π_l⋯π_r where π_r is the letter immediately preceding y. If π_r is a peak of π, then in particular π_r>y and so the algorithm must have examined x=π_r prior to x=y, at which stage the valid factor would have been set to begin with y. This means that the algorithm would terminate at x=y and y would be the first letter of β, a contradiction. Therefore, y is not the first letter of π and does not immediately follow a peak, so y is a double ascent of π by Theorem <ref>. Let π be a permutation with rix-factorization π=α_1⋯α_kβ. If y is a leading descent of π, then either y is an entry of β or y is the last letter of α_i for some 1≤ i≤ k. Suppose that y is a leading descent of π but does not belong to β. Then y belongs to a rix-factor α_i of π. It is evident from both Definition <ref> and Algorithm <ref> that the last letter of α_i is the largest letter of α_i and is a descent of π. 
So, if y were not the last letter of α_i, then the last letter of α_i would be a descent of π to the right of y which is larger than y, contradicting the assumption that y is a leading descent of π. Let π be a permutation with rix-factorization π=α_1 α_2 ⋯α_k β. The following are equivalent: (a) The rix-factor β is increasing. (b) Every letter of β is a rixed point of π. (c) Every letter of β is either a rixed point of π or is β_1(π). (d) β_1(π) is a rixed point of π. Furthermore, let y be any letter of β that is neither β_1(π) nor a rixed point. Then y<β_1(π). The equivalences (a)–(b) and (a)–(d) are immediate from Definition <ref>, and (b) obviously implies (c). If every letter of β appearing after β_1(π) is a rixed point of π, then β_1(π) is also part of the maximal increasing suffix consisting of letters not smaller than β_1(π), so β_1(π) is also a rixed point of π. Thus (c) implies (b). Now, let y be a letter of β that is neither β_1(π) nor a rixed point. Then it is easy to see that y is a non-initial letter of the valid factor when Algorithm <ref> terminates, whereas β_1(π) is the initial letter of that valid factor and is the letter being considered by the algorithm at that point. Since the letter being considered at any point is the largest letter of the valid factor (or is not in the valid factor), it follows that y<β_1(π).

§ VALLEY-HOPPING AND CYCLIC VALLEY-HOPPING

Most of our remaining results will deal with the interplay between rixed points and valley-hopping; here we shall define the latter. Fix π∈𝔖_n and x∈[n]. We may write π=w_1 w_2 x w_4 w_5, where w_2 is the maximal consecutive subword immediately to the left of x whose letters are all smaller than x, and w_4 is the maximal consecutive subword immediately to the right of x whose letters are all smaller than x. Define φ_x: 𝔖_n→𝔖_n by letting φ_x(π) := w_1 w_4 x w_2 w_5 if x is a double ascent or double descent of π, and φ_x(π) := π if x is a peak or valley of π. It is easy to see that φ_x is an involution, and that φ_x commutes with φ_y for all x,y∈[n]. Thus, given a subset S⊆[n], it makes sense to define φ_S: 𝔖_n→𝔖_n by φ_S := ∏_x∈ Sφ_x. The involutions {φ_S}_S⊆[n] define a ℤ_2^n-action on 𝔖_n, called the modified Foata–Strehl action or valley-hopping. For example, if π=834279156 and S={6,7,8}, then we have φ_S(π)=734289615; see Figure <ref>. This figure makes it apparent that, pictorially, the elements of S are indeed “hopping” over valleys upon applying φ_S. Also, observe that x∈ S is a double ascent of π if and only if x is a double descent of φ_S(π), and that x is a double descent of π if and only if it is a double ascent of φ_S(π). The valley-hopping action originated in work of Foata and Strehl <cit.>, and was independently discovered by Shapiro, Woan, and Getu <cit.> and by Brändén <cit.>. A cyclic version of valley-hopping was later defined by Sun and Wang <cit.> for derangements, and then extended to the entire symmetric group by Cooper, Jones, and the second author <cit.>. Below, we will extend Sun and Wang's action in a slightly different way. Up to this point, we have only needed to write permutations in one-line notation, but we shall now need both one-line notation and cycle notation. When writing a permutation in cycle notation, we shall write each cycle with its largest value first and list the cycles in increasing order of their largest value; this convention is referred to as the canonical cycle representation.
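Before turning to the cyclic version, here is a minimal Python sketch of φ_x and φ_S as defined above (the function names are purely illustrative); it reproduces the example φ_S(834279156)=734289615 for S={6,7,8}.

def hop(perm, x):
    # The involution phi_x: if x is a double ascent or double descent, swap the
    # blocks w2 and w4 of letters smaller than x on either side of x; otherwise
    # (x is a peak or a valley) do nothing. perm is a permutation of 1..n as a list.
    i = perm.index(x)
    j = i
    while j > 0 and perm[j - 1] < x:          # w2 = perm[j:i]
        j -= 1
    k = i + 1
    while k < len(perm) and perm[k] < x:      # w4 = perm[i+1:k]
        k += 1
    w1, w2, w4, w5 = perm[:j], perm[j:i], perm[i + 1:k], perm[k:]
    if bool(w2) == bool(w4):                  # x is a peak or a valley of perm
        return list(perm)
    return w1 + w4 + [x] + w2 + w5

def hop_set(perm, letters):
    # phi_S: let every letter of S hop (the order is irrelevant since the phi_x commute).
    for x in letters:
        perm = hop(perm, x)
    return perm

print(hop_set([8, 3, 4, 2, 7, 9, 1, 5, 6], {6, 7, 8}))   # [7, 3, 4, 2, 8, 9, 6, 1, 5]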
Let o: 𝔖_n→𝔖_n denote Foata's fundamental transformation, which takes a permutation π in canonical cycle representation and outputs the permutation o(π) in one-line notation obtained from π by erasing the parentheses. Given x∈[n] and S⊆[n], define ψ_x: 𝔖_n→𝔖_n by ψ_x := o^-1∘φ_x∘ o, and ψ_S: 𝔖_n→𝔖_n by ψ_S := ∏_x∈ Sψ_x. The involutions {ψ_S}_S⊆[n] induce a ℤ_2^n-action on 𝔖_n which we call cyclic valley-hopping. See Figure <ref> for an example. We will also consider “restricted” versions of valley-hopping and cyclic valley-hopping. Define restricted valley-hopping to be the ℤ_2^n-action on 𝔖_n induced by the involutions φ̂_S := ∏_x∈ Sφ̂_x, where φ̂_x(π) := π if x∈Rix(π) or if x=β_1(π), and φ̂_x(π) := φ_x(π) otherwise. Moreover, define restricted cyclic valley-hopping to be the ℤ_2^n-action on 𝔖_n induced by the involutions ψ̂_S := ∏_x∈ Sψ̂_x, where ψ̂_x(π) := π if x∈Fix(π) or if x is the first letter of o(π), and ψ̂_x(π) := ψ_x(π) otherwise. Restricted valley-hopping was first defined by Lin and Zeng <cit.>, and restricted cyclic valley-hopping is precisely the aforementioned extension of Sun and Wang's action due to Cooper, Jones, and the second author.

§ RIX IS HOMOMESIC UNDER VALLEY-HOPPING

Having defined the valley-hopping action, our next goal is to prove the following. The statistic rix is 1-mesic under valley-hopping. A statistic is k-mesic under an action if its average value over each orbit of the action is equal to k. In other words, we claim that the permutations in each valley-hopping orbit have 1 rixed point on average. Given an orbit Π of the valley-hopping action, define the set R_Π by R_Π := { (π,x) : π∈Π and x∈Rix(π) } and the map ϕ: R_Π→Π by taking ϕ(π,x) := φ_x(π)—i.e., the permutation in Π obtained by letting x hop in π. For any (π,x)∈ R_Π, we have x=β_1(ϕ(π,x)). Fix (π,x)∈ R_Π and let π=α_1⋯α_kβ and ϕ(π,x)=α_1^'⋯α_m^'β^' be the rix-factorizations of π and ϕ(π,x), respectively. Suppose that x=β_1(π). Then x is a valley of π by Lemma <ref>, so π=ϕ(π,x) and thus x=β_1(π)=β_1(ϕ(π,x)). Hence, let us assume for the rest of this proof that x≠β_1(π), which by Lemma <ref> means that x is a double ascent of π. Since x is a double ascent of π, we know that x is a double descent of ϕ(π,x). In addition, we know that x hops over β_1(π) because x>β_1(π)—that is, x appears before β_1(π) in ϕ(π,x). In fact, we claim that x is a leading descent of ϕ(π,x). To see this, first recall from Definition <ref> that either β is increasing or β_1(π) is the largest descent of β; in either case, there cannot be a descent of ϕ(π,x) larger than x to the right of β_1(π). There also cannot be a descent of ϕ(π,x) larger than x located between x and β_1(π), as x would not have been able to hop over such a descent. Therefore, x is a leading descent of ϕ(π,x), which by Lemma <ref> implies that x is either the last letter of some α_i^' or belongs to β^'. Assume by way of contradiction that x is the last letter of α_i^'. If x is the first letter of ϕ(π,x), then x cannot be the last letter of α_i^' because α_i^' has length at least 2 by Definition <ref>; but in this case, we would have β^'=ϕ(π,x), which gives the desired conclusion. Otherwise, let y be the letter immediately preceding x in ϕ(π,x). We know that y>x by the definition of valley-hopping; after all, if y<x, then x would have hopped over it. In other words, y is a descent of ϕ(π,x). In fact, y is also a descent of π; the letter z immediately following y in π appears after x in ϕ(π,x), and so x>z (and thus y>z) because x hopped over it.
The following illustrates the relative placements of y, z, β_1(π), and x before and after valley-hopping: π = ⋯ y z ⋯β_1(π) ⋯ x ⋯ and ϕ(π,x) = ⋯ y x z ⋯β_1(π) ⋯. Because x is a leading descent of ϕ(π,x) and y>x, it follows that y is a leading descent of both π and ϕ(π,x). We know that y cannot be in β^' because x is not in β^', so by Lemma <ref>, it must be true that y is the last letter of α_i-1^', and yet this is impossible because it would mean that α_i^' has length 1. Therefore, our assumption that x is the last letter of some α_i^' is false, and so x belongs to β^' and either y is the last letter of α_m^' or y also belongs to β^'. If we can show that y is the last letter of α_m^', then x would be the first letter of β^', as desired. Up to this point, we have only used the fact that y is a leading descent of ϕ(π,x), but recall that y is also a leading descent of π. Since y is not in β, by appealing to Lemma <ref> again, it follows that y is the last letter of some α_j. Letting x hop does not change the prefix of the permutation π up to and including y, nor does it change whether y is a leading descent, so the first j rix-factors α_1,…,α_j of π are precisely the rix-factors α_1^',…,α_m^' of ϕ(π,x). Hence, y is the last letter of α_m^' and we are done. The map ϕ: R_Π→Π is a bijection. In light of Lemma <ref>, we can recover x from ϕ(π,x) by taking x=β_1(ϕ(π,x)), and we can recover π from ϕ(π,x) and x by letting x hop in ϕ(π,x). Theorem <ref> is an immediate corollary of Proposition <ref>. After all, the fact that ϕ is a bijection tells us that the total number of rixed points among permutations in Π is equal to the number of permutations in Π. In other words, the average value of rix over any valley-hopping orbit is 1.

§ Φ SENDS VALLEY-HOPPING ORBITS TO CYCLIC VALLEY-HOPPING ORBITS

As mentioned in the introduction, Lin and Zeng <cit.> define a bijection Φ: 𝔖_n→𝔖_n satisfying des(π)=exc(Φ(π)) and Rix(π)=Fix(Φ(π)). The remainder of our paper will be devoted to proving the following theorem. Let π be a permutation, Π the valley-hopping orbit containing π, Π̂ the restricted valley-hopping orbit containing π, Π^' the cyclic valley-hopping orbit containing Φ(π), and Π̂^' the restricted cyclic valley-hopping orbit containing Φ(π). Then: (a) Φ(Π)=Π^'; (b) Φ(Π̂)=Π̂^'. In other words, Φ sends orbits of the valley-hopping action to orbits of cyclic valley-hopping—so that these actions are in a sense “the same” up to Φ—and the restricted versions of valley-hopping and cyclic valley-hopping are related in the same way. Before reviewing the definition of Φ and working toward the proof of Theorem <ref>, we note that the following is an immediate consequence of Theorems <ref> and <ref>, since Rix(π)=Fix(Φ(π)) implies rix(π)=fix(Φ(π)). The statistic fix is 1-mesic under cyclic valley-hopping. In <cit.> and <cit.>, the authors show that the number of fixed points is homomesic under some “Foatic actions”, which are compositions of the form f ∘ o^-1∘ g ∘ o where f and g are dihedral actions (such as the reverse map, the inverse map, and the complement map). We checked whether the rix statistic is homomesic under any Foatic actions, but found counterexamples for all of them.

§.§ The bijection Φ

Let π∈𝔖_n have rix-factorization π=α_1 α_2 ⋯α_k β. Also, let Rix(π)={π_j,π_j+1,…,π_n}, and let δ be defined by β = δ π_jπ_j+1⋯π_n—that is, δ is obtained from β upon removing all rixed points. If δ=d_1 d_2 ⋯ d_l, then let δ̃ := (d_1,d_l,d_l-1,…,d_2), and for each α_i = a_1 a_2 ⋯ a_l, let α̃_i := (a_l,a_l-1,…,a_1).
Then Lin and Zeng define Φ(π) to be the following concatenation of cycles: Φ(π)α̃_1α̃_2⋯α̃_kδ̃(π_j)(π_j+1)⋯(π_n). For example, given π=7 6 9 1 8 4 2 3 5 10 11, we have Φ(π)=(9,6,7)(8,1)(4,3,2)(5)(10)(11). In (<ref>), each cycle is written with its largest letter first, the cycles of length at least 2 are arranged in decreasing order of their largest letter,[If x_i denotes the last letter of α_i, then x_1 > x_2 > ⋯ > x_k > β_1(π) <cit.>.] and the fixed points are arranged in increasing order and after the cycles of length at least 2. However, for our purposes, we will need to rearrange the order of the cycles to be in line with canonical cycle representation. To that end, let us write Φ(π) as Φ(π)=δ̃μ_kα̃_kμ_k-1α̃_k-1⋯μ_1α̃_1μ_0 where each μ_i consists of all fixed points (in increasing order) which are greater than the first entry of the previous cycle and (if i>0) less than the first entry of the subsequent cycle. Continuing with the example π=7 6 9 1 8 4 2 3 5 10 11, the cycles of Φ(π) are rearranged to become Φ(π)=(4,3,2)(5)(8,1)(9,6,7)(10)(11) so that μ_2=(5), μ_1 is empty, and μ_0=(10)(11). §.§ Proof of Theorem <ref> Our proof of Theorem <ref> will require a few additional lemmas. Let π∈𝔖_n and x∈[n]. Then: (a) x is a peak of π if and only if x is a peak of o(Φ(π)); (b) x is a valley of π if and only if x is a valley of o(Φ(π)); (c) x is a double ascent or a double descent of π if and only if x is a double ascent or double descent of o(Φ(π)). Before giving the proof of Lemma <ref>, let us briefly describe the intuition behind this lemma. The map o ∘Φ takes a permutation π in one-line notation, considers the permutation Φ(π) in cycle notation where the cycles are determined by the rix-factorization of π, but then erases the parentheses to obtain the permutation o(Φ(π)) in one-line notation. The overarching idea of the proof is to show that if x is a peak of π and if o ∘Φ changes the neighboring letters of π, then x will stay a peak, and that the same is true if x is instead a valley. However, we will need to carefully check a number of cases to verify that this is indeed true. We shall first establish the forward directions of (a) and (b): if x is a peak (respectively, valley) of π, then x is a peak (respectively, valley) of o(Φ(π)). Case 1: x is an element of α_i for some 1≤ i≤ k. Let us write α_i = a_1 ⋯ a_j x a_j+1⋯ a_l, so that in Φ(π) we have α̃_i=(a_l,…,a_j+1,x,a_j,…,a_1). Suppose that x is a peak of π. Because each α_i ends with a descent, x cannot be the first letter of α_i. This means that a_1⋯ a_j cannot be empty but a_j+1⋯ a_l can. If a_j+1⋯ a_l is not empty, then x is clearly a peak of o(Φ(π)). If a_j+1⋯ a_l is empty, then x would be the first letter of the cycle α̃_i, so as long as α̃_i is not the first cycle of Φ(π), we are guaranteed by canonical cycle representation that x is a peak of o(Φ(π)). If α̃_i were the first cycle of Φ(π), then that means δ is empty and that there are no rixed points of π smaller than x; however, that would imply that x is immediately followed in π by a rixed point larger than x, which is a contradiction because x is the last letter of α_i and thus a descent. Therefore, x is a peak of o(Φ(π)). Now, suppose that x is a valley of π. Because each α_i ends with a descent, x cannot be the last letter of α_i. This means that a_1⋯ a_j can be empty but a_j+1⋯ a_l cannot. Similar to above, x is clearly a valley of o(Φ(π)) if a_1⋯ a_j is not empty. 
If a_1⋯ a_j is empty, then x is the last letter of the cycle α̃_i, in which case x would still be a valley of o(Φ(π)) by the definition of canonical cycle representation. Case 2: x is in β but is neither β_1(π) nor a rixed point of π. Recall that δ=d_1d_2⋯ d_l is obtained from β by deleting all the rixed points, and that δ̃=(d_1,d_l,d_l-1,…,d_3,d_2). In this case, we have x=d_i for some i≠1, and it is easy to see that the desired result holds when i≠2 and i≠ l. So it remains to check the cases when x=d_2 and x=d_l. Recall that d_1>x (guaranteed by Lemma <ref>), and that d_l is either the last letter of π or is followed by a rixed point (which is by definition larger than d_1 and thus larger than d_l); hence, in none of these cases can x be a peak of π. Then consider the following subcases: * Suppose that x=d_2=d_l, so that δ=d_1x and δ̃=(d_1,x). Then x is a valley in both π and o(Φ(π)). * Suppose that x=d_2≠ d_l, so that δ=d_1 x d_3 ⋯ d_l and δ̃=(d_1,d_l,…,d_3,x). Then x is a valley of π if and only if x<d_3. As the last entry of the cycle δ̃ in canonical cycle representation, x is either followed by a larger letter in o(Φ(π)) or is the last letter of o(Φ(π)). Either way, we see that x is also a valley of o(Φ(π)) when x<d_3. * Suppose that x=d_l≠ d_2, so that δ=d_1 d_2 ⋯ d_l-1 x and δ̃=(d_1,x,d_l-1,…,d_3). Then x is a valley if and only if x<d_l-1, in which case it is also a valley of o(Φ(π)). Case 3: x=β_1(π) is a rixed point of π. By Lemma <ref>, we know that in this case x is a valley of π. Note that (x) is a fixed point of Φ(π) and is in fact the first cycle of Φ(π), so x is the first letter of o(Φ(π)). Per canonical cycle representation, x is either followed by a larger letter in o(Φ(π)) or is the last letter of o(Φ(π)). Either way, x is also a valley of o(Φ(π)). The above three cases are the only ones that we need consider. Indeed, if x is a rixed point of π but is not β_1(π), then x is a double ascent of π by Lemma <ref>. Furthermore, if x=β_1(π) is not a rixed point of π, then x is a double descent of π by Lemma <ref> and the fact that x=β_1(π) is either the first letter of π or is preceded by a peak. Hence, the forward directions of (a) and (b) follow. Now, note that the forward direction of (a) implies that (π)≤(o(Φ(π))) for all π∈𝔖_n, where (π) denotes the number of peaks of π. Moreover, the o(Φ(π)) span all permutations in 𝔖_n because o and Φ are bijections; so, if it were not true that (π)=(o(Φ(π))) for all π∈𝔖_n, then summing over all π∈𝔖_n would result in the absurdity that the total number of peaks over all π∈𝔖_n is less than the total number of peaks over all π∈𝔖_n. Hence, the backward direction of (a) is established, and the backward direction of (b) follows from the same reasoning. Finally, it is clear that (a) and (b) imply (c), and thus the proof is complete. Let π be a permutation, Π the valley-hopping orbit containing π, Π̂ the restricted valley-hopping orbit containing π, Π^' the cyclic valley-hopping orbit containing Φ(π), and Π̂^' the restricted cyclic valley-hopping orbit containing Φ(π). Then: (a) |Π|=|Π^'|. (b) |Π̂|=|Π̂^'|. Let (π) denote the total number of double ascents and double descents of π. Then the number of permutations in Π is equal to 2^(π). Similarly, the number of permutations in Π^' is 2^(o(Φ(π))). Lemma <ref> (c) implies (π)=(o(Φ(π))), which completes the proof of (a). To prove (b), let us first make the following observations. 
First, it is clear from the definition of canonical cycle representation that every fixed point of Φ(π) is a double ascent of o(Φ(π)) unless the fixed point is the first cycle of Φ(π), in which case it is a valley of o(Φ(π)). In addition, it is easy to see that the first cycle of Φ(π) is a fixed point if and only if the first cycle is (β_1(π)), which occurs if and only if β_1(π) is a rixed point of π. As such, let us divide into the following cases: Case 1: β_1(π) is a rixed point of π. By Lemma <ref>, we know that β_1(π) is a valley of π while all of the other rixed points are double ascents of π, so the number of permutations in Π̂ is 2^(π)-(π)+1. On the other hand, from the discussion above, we know that the number of permutations in Π̂^' is 2^(o(Φ(π)))-(Φ(π))+1=2^(π)-(π)+1. Case 2: β_1(π) is not a rixed point of π. Appealing to Lemma <ref> again, we see that all of the rixed points of π are double ascents of π, so the number of permutations in Π̂ is 2^(π)-(π). Accordingly, the number of permutations in Π̂^' is 2^(o(Φ(π)))-(Φ(π))=2^(π)-(π). Thus the proof of (b) is complete. We now seek to show that whenever two permutations π and σ are in the same (restricted) valley-hopping orbit, then Φ(π) and Φ(σ) are in the same (restricted) cyclic valley-hopping orbit. Toward this goal, we prove the following lemma, which will again require extensive casework. Let σ=φ_x(π) and let π=α_1α_2⋯α_kβ be the rix-factorization of π. (a) If x is neither β_1(π) nor a rixed point of π, then Φ(σ)=ψ_x(Φ(π)). (b) If x is a rixed point of π, then Φ(σ)=ψ_S(Φ(π)) where S={β_1(π)}∪{ y∈(π):y≤ x }. (c) If x=β_1(π), then Φ(σ)=ψ_S(Φ(π)) where S={β_1(σ)}∪{ y∈(σ):y≤ x }. We divide into cases based on the position of x in π. In all of the cases below, let σ=α_1^'α_2^'⋯α_m^'β^' be the rix-factorization of σ. Cases 1–2 will establish part (a), whereas Cases 3–5 will prove parts (b) and (c). Case 1: x is in α_i for some 1≤ i≤ k. If x is a peak or valley of π, then σ=π and x is also a peak or valley of o(Φ(π)) by Lemma <ref>, which together imply Φ(σ)=ψ_x(Φ(π)). So, for the remainder of this case, let us assume that x is neither a peak nor a valley of π. In particular, this means that x is not the last letter of α_i, since we know from Algorithm <ref> that the last letter of each α_i is a peak of π. Observe that both the last letter of α_i-1 (if i>1) and the last letter of α_i are larger than x; if x were instead larger than either, then x would have been considered by Algorithm <ref> and thus removed from the valid factor prior to when α_i was added to the rix-factorization, which is impossible. Hence, upon applying φ_x to π, the letter x belongs to the same rix-factor, so the number of -factors is unchanged. This means that π and σ have exactly the same -factors except for α_i and α_i^', and similarly with the cycles in the canonical cycle representation of Φ(π) and Φ(σ). Now, let us write α_i=a_1 ⋯ a_p x a_p+1⋯ a_l, so that α̃_i=(a_l,…,a_p+1,x,a_p,…,a_1). If x is a double descent of π, then we have α_i^'=a_1 ⋯ a_p a_p+1⋯ a_q x a_q+1⋯ a_l and α̃_i^'=(a_l,…,a_q+1,x,a_q,…,a_p+1,a_p,…,a_1); here, a_q+1 is the closest letter to the right of x in α_i that is larger than x. It is clear that when we apply ψ_x to Φ(π), the cycle α̃_i is transformed to α̃_i^' and therefore Φ(σ)=ψ_x(Φ(π)). The case when x is a double ascent of π is similar. Case 2: x is in β but is neither β_1(π) nor a rixed point of π. By Lemma <ref>, we have x<β_1(π), and therefore x is also smaller than the rixed points of π. 
This means that Φ(π) and Φ(σ) have exactly the same cycles except for δ̃ and δ̃^', and the remainder of the proof for this case follows in a similar way as in Case 1. Case 3: x=β_1(π) is a rixed point of π. We know from Lemma <ref> that, in this case, x is the smallest rixed point of π. Moreover, by Lemma <ref>, we have β_1(σ)=x and thus there are no rixed points of σ smaller than x. This means that our set S as defined in the statements of (b) and (c) is given by S={x}. Note that x is a valley of π by Lemma <ref> and therefore a valley of o(Φ(π)) by Lemma <ref>. Since x being a valley of π implies σ=π, we have Φ(σ)=ψ_S(Φ(π)) as desired. Case 4: x≠β_1(π) is a rixed point of π. Let (π)={π_j,π_j+1,…,π_l-1,π_l=x,π_l+1,…,π_n} be the set of rixed points of π. Then we can write the rix-factorization of π as π=α_1α_2⋯α_kβδπ_j⋯π_l-1xπ_l+1⋯π_n. As usual, we write Φ(π)=δ̃μ_kα̃_kμ_k-1α̃_k-1⋯μ_1α̃_1μ_0. From Lemma <ref>, we have that x=β_1(σ). This means that the rix-factorization of σ is given by σ=α_1α_2⋯α_mβ^'xα_m+1⋯α_kδπ_j⋯π_l-1π_l+1⋯π_n. Note that the last letter of α_m is the closest letter to the left of x in π that is larger than x, and that π_j,π_j+1,…,π_l-1 are not rixed points of σ because they are less than x. Taking δ^' to be the analogue of δ but for σ, we have δ^'=xα_m+1⋯α_kδπ_jπ_j+1⋯π_l-1 and Φ(σ)=δ̃^'μ_m^'α̃_m^'μ_m-1^'α̃_m-1^'⋯μ_1^'α̃_1^'μ_0^' where α̃_i^'=α̃_i for each 1≤ i≤ m, μ_i^'=μ_i for each 1≤ i≤ m-1, and μ_m^' is obtained from μ_m by removing all the rixed points of π that are less than or equal to x. Next, we characterize the cycle δ̃^'. Assume that δ is nonempty; we omit the proof of the case where δ is empty as it is similar but slightly easier. If we write out the letters of δ and α_m+1⋯α_k as δ = d_1 d_2 ⋯ d_p and α_m+1⋯α_k = a_1 a_2 ⋯ a_q, then we have δ̃=(d_1,d_p,…,d_2) and δ̃^'=(x,π_l-1,…,π_j+1,π_j,d_p,…,d_2,d_1,a_q,…,a_2,a_1). From here, we see that o(Φ(π)) and o(Φ(σ)) are the same except for the positions of the letters d_1=β_1(π),π_j,π_j+1,…,π_l-1,x. More precisely, to obtain o(Φ(σ)) from o(Φ(π)), we remove all of these letters from where they are initially located, insert β_1(π) between d_2 and a_q, and prepend the remaining letters π_j,π_j+1,…,π_l-1,x at the beginning but in reverse order. To complete the proof of this case, we shall argue that this arrangement is obtained precisely by applying φ_S to o(Φ(π)) for S={β_1(π),π_j,π_j+1,…,π_l-1,x}. From Lemma <ref>, we know that β_1(π) is larger than all of the letters d_2,…,d_p, but a_q>β_1(π) from the definition of rix-factorization. Hence, letting β_1(π) hop in o(Φ(π)) will move β_1(π) to the desired position. Furthermore, by the fact that the rixed points of π are all greater than β_1(π) and by the definition of canonical cycle representation, each of the letters π_j,π_j+1,…,π_l-1,x is larger than all letters to its left in o(Φ(π)). And since π_j<π_j+1<⋯<π_l-1<x, letting all of these letters hop will move them to the beginning in reverse order. Hence we have o(Φ(σ))=φ_S(o(Φ(π))), and applying o^-1 to both sides gives us Φ(σ)=ψ_S(Φ(π)). Case 5: x=β_1(π) is not a rixed point of π. Write β = x d_2 ⋯ d_l π_j ⋯π_n where π_j,…,π_n are the rixed points of π, so δ = x d_2 ⋯ d_l and δ̃=(x,d_l,d_l-1…,d_2). By Lemma <ref>, we know that x is a double descent of π, that d_2⋯ d_l is nonempty, and that x is larger than all of the letters d_2,…,d_l. Thus, in applying φ_x to π, the rix-factors α_i will be unchanged but x will hop over all of the letters d_2,…,d_l. 
In other words, we have α_i^'=α_i for all 1≤ i≤ k and d_2 ⋯ d_l x π_j ⋯π_n=α_k+1^'⋯α_m^'β^'d_p d_p+1⋯ d_l x π_j ⋯π_n for some 2≤ p≤ l. Note that x is a rixed point of σ because x>d_p and is part of an increasing suffix of σ. Let us write (σ)={d_q,d_q+1…,d_l,x,π_j,…,π_n} so that β^'=δ^'d_p d_p+1⋯ d_q-1 d_q d_q+1⋯ d_l x π_j ⋯π_n. Thus Φ(σ), in canonical cycle representation, begins with the cycles δ̃^'(d_p,d_q-1,…,d_p+1)α̃_m^'⋯α̃_k+1^'. Note that the letters in the cycles α̃_m^'⋯α̃_k+1^' are precisely d_p-1,…,d_3,d_2 in this given order. Comparing with δ̃=(x,d_l,d_l-1…,d_2), we see that in going from o(Φ(π)) to o(Φ(σ)), the only difference is in the positions of the letters d_p=β_1(σ) and the letters x,d_q,d_q+1,…,d_l (the rixed points of σ smaller than or equal to x). We must show that the movement of these letters corresponds precisely to letting them hop—that is, applying φ_S to o(Φ(π)) where S={d_p,x,d_q,d_q+1,…,d_l} results in o(Φ(σ)). Let y be any of the letters x,d_q,d_q+1,…,d_l, which are all rixed points of σ and thus fixed points of Φ(σ). Per canonical cycle representation, to go from o(Φ(π)) to o(Φ(σ)), each of these y must be moved to the position immediately before the closest letter to the right of y that is larger than y, which is the first entry of the cycle immediately after (y) in Φ(σ). Now, recall that d_q<d_q+1<⋯<d_l<x and δ̃=(x,d_l,d_l-1…,d_2); together, these imply that d_q is either a double descent or valley[It is possible for d_q to be a valley only when d_q = d_p, i.e, when δ^' is empty.] of o(Φ(π)), and all the other y are double descents of o(Φ(π)), so they will all hop to the right (or remain stationary) in applying φ_S. By the definition of valley-hopping, each of these y will move to the position immediately before the closest letter to the right of y that is larger than y, precisely as described above. Hence, in applying φ_S to o(Φ(π)), all of the letters x,d_q,d_q+1,…,d_l will move to the desired positions. It remains to show that d_p will move to the correct position as well. At this point, we note that it is possible for d_p=β_1(σ) to be a rixed point of σ, and in this case we would have d_p=d_q and the proof would be complete. So let us assume that d_p is not a rixed point of σ. In going from o(Φ(π)) to o(Φ(σ)), the letter d_p is moved to the very beginning of the permutation. Upon letting the letters x,d_q,d_q+1,…,d_l hop in o(Φ(π)), only the letters d_q-1,…,d_p+1 appear before d_p, so it suffices to show that d_p is a double ascent of o(Φ(π)) and that it is larger than all of the letters d_q-1,…,d_p+1. The latter follows from Lemma <ref>, as d_p=β_1(σ) and d_q-1,…,d_p+1 are precisely the letters in β^' which are neither β_1(σ) nor rixed points of σ. To see that d_p is a double ascent of o(Φ(π)), we first note that d_p is preceded by d_p+1 in o(Φ(π)) and d_p+1<d_p. Now we consider two subcases: * If p=2, then d_p is the last entry of the cycle δ̃ of Φ(π), which by canonical cycle representation implies that d_p is a double ascent of o(Φ(π)). * If p>2, then d_p is followed by d_p-1 in o(Φ(π)). Note that d_p-1 is the first entry of the cycle α̃_m^' appearing after δ̃^' in Φ(σ), which by canonical cycle representation implies that d_p<d_p-1. Hence, d_p is a double ascent of o(Φ(π)). Therefore, the remaining letter d_p also moves to the correct position after applying φ_S, and the proof is complete. 
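Readers who wish to experiment with Theorem <ref> and with the bijection Φ can assemble the maps defined above into code. The following minimal Python sketch builds on the rix_data and hop functions from the earlier sketches (again, all names are purely illustrative): it implements Foata's fundamental transformation o, cyclic valley-hopping ψ_x, and Φ, and then checks over 𝔖_5 that the rixed points of π are exactly the fixed points of Φ(π) and that Φ sends each valley-hopping orbit onto the corresponding cyclic valley-hopping orbit.

from itertools import permutations
# assumes rix_data() and hop() from the earlier sketches are in scope

def o(p):
    # Foata's fundamental transformation: write p in canonical cycle representation
    # (largest letter first in each cycle, cycles by increasing largest letter)
    # and erase the parentheses. p is in one-line notation, with p[i-1] = p(i).
    n, seen, cycles = len(p), set(), []
    for s in range(1, n + 1):
        if s in seen:
            continue
        cyc, x = [s], p[s - 1]
        while x != s:
            cyc.append(x)
            x = p[x - 1]
        seen.update(cyc)
        m = cyc.index(max(cyc))
        cycles.append(cyc[m:] + cyc[:m])          # rotate the largest letter to the front
    cycles.sort(key=max)
    return [x for cyc in cycles for x in cyc]

def o_inv(word):
    # Inverse of o: each left-to-right maximum of the word starts a new cycle.
    p, cycles, biggest = [0] * len(word), [], 0
    for x in word:
        if x > biggest:
            cycles.append([x])
            biggest = x
        else:
            cycles[-1].append(x)
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + [cyc[0]]):
            p[a - 1] = b
    return p

def cyclic_hop(p, x):
    # psi_x = o^(-1) composed with phi_x composed with o.
    return o_inv(hop(o(p), x))

def phi(perm):
    # The Lin-Zeng bijection: turn each alpha-factor and delta into a cycle and
    # each rixed point into a fixed point, then read off the resulting permutation.
    factors, rixed = rix_data(perm)
    alphas, beta = factors[:-1], factors[-1]
    delta = [x for x in beta if x not in rixed]
    cycles = [list(reversed(a)) for a in alphas]               # alpha~_i = (a_l, ..., a_1)
    if delta:
        cycles.append([delta[0]] + list(reversed(delta[1:])))  # delta~ = (d_1, d_l, ..., d_2)
    cycles += [[x] for x in rixed]
    image = [0] * len(perm)
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + [cyc[0]]):
            image[a - 1] = b
    return image

def orbit(p, step):
    # Orbit of p under the group generated by the involutions step(., x), x in [n].
    seen, frontier = {tuple(p)}, [list(p)]
    while frontier:
        q = frontier.pop()
        for x in range(1, len(p) + 1):
            r = step(q, x)
            if tuple(r) not in seen:
                seen.add(tuple(r))
                frontier.append(r)
    return seen

n = 5
for p in permutations(range(1, n + 1)):
    p = list(p)
    q = phi(p)
    assert set(rix_data(p)[1]) == {x for x in range(1, n + 1) if q[x - 1] == x}
    assert {tuple(phi(list(r))) for r in orbit(p, hop)} == orbit(q, cyclic_hop)
print("all checks passed on S_5")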
Let π be a permutation, Π the valley-hopping orbit containing π, Π̂ the restricted valley-hopping orbit containing π, Π^' the cyclic valley-hopping orbit containing Φ(π), and Π̂^' the restricted cyclic valley-hopping orbit containing Φ(π). Then: (a) Φ(Π)⊆Π^'; (b) Φ(Π̂)⊆Π̂^'. The proof of part (a) from Lemma <ref> is straightforward and so it is omitted. Here we prove (b), as it requires a more subtle argument. Let σ∈Π̂. Then there exists S={x_1,x_2,…,x_k} for which σ=φ̂_S(π)=(φ̂_x_k∘⋯∘φ̂_x_2∘φ̂_x_1)(π). Assume without loss of generality that none of the x_i is β_1(π) or a rixed point of π. According to <cit.>, β_1(π) and Rix(π) are invariant under restricted valley-hopping, which means that none of the x_i is the first letter of the β rix-factor or a rixed point of any of the permutations φ̂_x_1(π), (φ̂_x_2∘φ̂_x_1)(π), …, (φ̂_x_k∘⋯∘φ̂_x_2∘φ̂_x_1)(π) in the same restricted valley-hopping orbit as π. Thus we have φ̂_x_i=φ_x_i for each 1≤ i≤ k; that is, σ=φ_S(π)=(φ_x_k∘⋯∘φ_x_2∘φ_x_1)(π). Applying Lemma <ref> (a), we deduce Φ(σ)=ψ_S(Φ(π))=(ψ_x_k∘⋯∘ψ_x_2∘ψ_x_1)(Φ(π)). Observe from the definition of Φ that the rixed points of a permutation τ are precisely the fixed points of Φ(τ), and that β_1(τ) is the first letter of o(Φ(τ)). Consequently, we have ψ_x_i=ψ̂_x_i for each 1≤ i≤ k, which implies Φ(σ)=ψ̂_S(Φ(π)). Hence, (b) is proven. Theorem <ref> now follows easily from Corollaries <ref> and <ref>. By Corollary <ref> and the fact that Φ is a bijection, we have |Φ(Π)|=|Π|=|Π^'| and |Φ(Π̂)|=|Π̂|=|Π̂^'|, which together with Φ(Π)⊆Π^' and Φ(Π̂)⊆Π̂^' (Corollary <ref>) yield the desired results.

Acknowledgements. We thank Tom Roby for helpful discussions on this project. YZ was partially supported by an AMS-Simons Travel Grant.
http://arxiv.org/abs/2307.00681v1
20230702231251
Acoustic propulsion of nano- and microcones: dependence on particle size, acoustic energy density, and sound frequency
[ "Johannes Voß", "Raphael Wittkowski" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.mes-hall", "physics.comp-ph", "physics.flu-dyn", "physics.med-ph" ]
Institut für Theoretische Physik, Center for Soft Nanoscience, Westfälische Wilhelms-Universität Münster, 48149 Münster, Germany Corresponding author: raphael.wittkowski@uni-muenster.de Employing acoustofluidic simulations, we study the propulsion of cone-shaped nano- and microparticles by a traveling ultrasound wave. In particular, we investigate how the acoustic propulsion of the particles depends on their size and the energy density and frequency of the ultrasound wave. Our results reveal that the flow field generated around the particles depends on all three of these parameters. The results also show that the propulsion velocity of a particle increases linearly with the particle size and energy density and that an increase of the sound frequency leads to an increase of the propulsion velocity for frequencies below about 1 MHz but to a decrease of the propulsion velocity for larger frequencies. These findings are compared with preliminary results from the literature. Keywords: nano- and microcones, acoustic propulsion, size dependence, acoustic energy density dependence, frequency dependence, ultrasound Acoustic propulsion of nano- and microcones: dependence on particle size, acoustic energy density, and sound frequency § INTRODUCTION Research on motile, artificial nano- and microparticles has resulted in a large number of different realizations of such particles <cit.>. They cover various propulsion mechanisms: chemical propulsion <cit.>, light propulsion <cit.>, X-ray propulsion <cit.>, acoustic propulsion <cit.>, and others <cit.>. Among these mechanisms, acoustic propulsion has some significant advantages, such as being fuel-free and biocompatible and allowing the particles to be supplied continuously with energy <cit.>. With these properties, acoustically propelled nano- and microparticles have potential future applications in medicine <cit.>, where they could be used, e.g., for drug delivery <cit.>, in materials science <cit.>, where they could form active materials with exceptional properties <cit.>, and in other fields <cit.>. Although this type of particle has been intensively investigated in recent years, mainly based on experiments <cit.> but also using computer simulations <cit.> and analytical approaches <cit.>, we are only beginning to understand the properties of these particles. For example, it is still rather unclear how their propulsion depends on the size of the particles, the energy density of the ultrasound, and its frequency. A few studies have addressed the dependence of the acoustic propulsion on the particle size so far <cit.>, but there is a discrepancy between experimental and theoretical work. In one experimental study <cit.>, half-sphere cups (nanoshells) with different diameters were investigated in a standing ultrasound wave, and a decreasing propulsion speed was observed with increasing particle diameter. According to theoretical approaches <cit.>, however, the speed should increase with the particle size. One study found a linear increase of the speed with the particle diameter for a sphere-like particle <cit.>, while another found a nonlinear but still increasing dependence of the propulsion on the particle size for different particle shapes <cit.>. 
However, with regard to future applications of acoustically propelled particles in medicine, the particle size is a critical parameter whose effect on the propulsion should be understood, since, depending on the drug-release method, the particles must have a sufficiently large volume <cit.> or surface area <cit.> but must not be so large that they can clog veins <cit.>. The dependence of the acoustic propulsion on the energy density of the ultrasound has been studied in some experiments so far <cit.>, but there is not yet any investigation of this dependence that is based on simulations or analytical approaches. This constitutes a problem since in experimental work only the driving voltage applied to the ultrasound transducer is tuned directly, whereas the corresponding acoustic energy density that is established in the experimental setup near the particles is not measured. So far, only a rough estimate E∝ V^2 <cit.> can be used to convert the driving voltage V into the energy density E. In reality, the relationship between these quantities is affected by many details of the experimental setup, such as the acoustic coupling of adjacent components between the transducer and the particles. The existing experimental studies found either a linear <cit.> or a quadratic <cit.> dependence of the propulsion speed on the driving voltage. Therefore, it remains to be clarified which of these scaling relations is correct and what the actual dependence of the propulsion speed on the acoustic energy density is. Determining these relations will be helpful for predicting the speed of acoustically propelled particles in particular applications, such as in medicine, where the ultrasound intensity has to be limited to ensure harmlessness <cit.>. The frequency of the ultrasound is hard to change in most experiments since they use standing instead of traveling ultrasound waves <cit.>. Therefore, little is known about the dependence of the propulsion on the frequency. Up to now, only one experimental work <cit.> and two analytical studies <cit.> have addressed this dependence. The experimental study <cit.> observed a local minimum of the propulsion and even a switch of the propulsion direction when changing the frequency. In contrast, the analytical studies <cit.> found that the propulsion velocity increases linearly with the frequency. A better understanding of the dependence of the propulsion on the frequency would also be helpful for predicting the propulsion speed of a particle in a particular application. For example, in medical applications, the ultrasound frequency cannot be chosen arbitrarily, since the penetration depth of the ultrasound in biological tissue depends strongly on the frequency <cit.>. In this work, we therefore carefully study how the acoustic propulsion of nano- and microparticles depends on the particle size, acoustic energy density, and sound frequency. We consider cone-shaped particles, which are known to have a relatively efficient acoustic propulsion and are thus particularly relevant for applications <cit.>, and an ultrasound field consisting of a planar traveling ultrasound wave, which is more relevant for applications than the frequently chosen standing ultrasound waves <cit.>. To calculate the sound-induced flow field that is generated around a particle and the resulting propulsion force and torque that act on the particle, we use acoustofluidic simulations. 
§ RESULTS AND DISCUSSION We study the time-averaged flow field generated around a cone-shaped particle in water, which has diameter σ and height h=σ and is exposed to a planar traveling ultrasound wave, and the corresponding time-averaged propulsion force and torque that are exerted on the particle in the stationary state, for various values of the particle's size σ, the ultrasound's pressure amplitude Δ p (and thus energy density E), and the ultrasound's frequency f. The water is initially quiescent and at standard temperature and pressure. Each parameter is varied separately from the other parameters, which are then fixed to their reference values σ_R=2^-1/2 μm, Δ p_R=10 kPa (i.e., E_R=22.7 mJ m^-3), and f_R=1 MHz. See Methods for details. §.§ Dependence on particle size First, we study the dependence of the particle's flow field and propulsion force and torque on the particle's size σ. We vary the particle's diameter as σ∈[0.1,10]σ_R while keeping the pressure amplitude at Δ p = Δ p_R and the frequency at f=f_R. §.§.§ Flow field The simulation results for the flow field are shown in Fig. <ref>. When the particle's diameter increases, we see that the structure of the flow field around the particle changes significantly. For small diameters, the flow field is qualitatively similar to the structure that has been reported in Ref. <cit.>. When σ=0.1σ_R, the flow field is dominated by four large vortices at the top left, top right, bottom left, and bottom right of the particle. Their centers have similar distances from the center of mass of the particle and form the corners of a square. As a consequence of the assembly of vortices, the fluid flows with similar strength on the left and right towards the particle and below and above the particle away from it. Moreover, the pressure is increased on the left and right and decreased below and above the particle. When σ is increased up to σ=2σ_R, the structure of the flow field remains qualitatively similar, but the centers of the vortices slowly move away from the center of mass of the particle. Thereby, the upper two vortex centers become closer to each other than the lower ones so that the vortex assembly changes from a square to an isosceles trapezoid. The overall strength of the fluid flow increases. While the strength of the flow remains similar on the left and right of the particle, the flow below the particle becomes weaker than this lateral flow and the flow above the particle becomes stronger than it. Hence, the strongest fluid flow occurs in front of the particle between the two upper vortices. With increasing σ, the asymmetry of the fluid flow in front of and behind the particle thus increases. Furthermore, the minima and maxima of the pressure field become more pronounced when σ increases. Figure <ref>c shows the fluid flow for the reference case with σ=σ_R, Δ p = Δ p_R, and f=f_R. These trends continue when σ increases beyond σ=2σ_R. However, the structure of the fluid flow changes significantly for larger σ. When proceeding to σ=5σ_R, two secondary vortices occur between the two (primary) lower vortices. Furthermore, an additional secondary vortex center occurs far away from the particle at its top left. Thus, the mirror symmetry of the assembly of vortices is broken. When increasing σ further to σ=10σ_R, the centers of the secondary vortices approach the centers of the neighboring primary vortices. Moreover, the primary vortex at the top right of the particle disappears, which makes the vortex assembly even more asymmetric. 
We now study the dependence of the distance of the vortex centers from the center of mass of the particle on the particle size in more detail. Figure <ref>a shows these distances as a function of the particle's diameter σ. One can see that the distances, measured in units of σ, rapidly decline when σ increases. The dependence of the distances δ_pv of the primary vortices on σ can be described by a function δ_pv(σ)=aσ + b with coefficients a and b. The latter coefficient becomes dominant for small particle sizes and determines a minimal vortex-to-particle distance that is not undercut for any value of σ. When function (<ref>) is fitted to the simulation data for the vortex-to-particle distances, one obtains fit functions that are in excellent agreement with the simulation data. The resulting fit functions are stated as equations and visualized in Fig. <ref>a. For the second term in Eq. (<ref>), we find b ≈1.09 μm ≈ 2 δ_vpd, which is approximately the thickness of the viscous boundary layer in which vortices form near a boundary (Schlichting streaming) that is exposed to ultrasound <cit.>. Here, δ_vpd is the viscous penetration depth <cit.> δ_vpd=√(ν_s/(πρ_0 f))≈0.57 μm, where for our situation ν_s=1.002 mPa s is the shear viscosity and ρ_0=998 kg m^-3 is the mean mass density of the fluid and f=f_R is the frequency of the ultrasound. For comparison, δ_vpd is also shown in Fig. <ref>a. §.§.§ Propulsion force and torque The simulation results for the propulsion force and torque are shown in Fig. <ref>a-c (see Methods for the definitions of the components of the propulsion force and torque). One can see that the propulsion force F_∥ parallel to the particle's orientation increases with the particle's diameter σ from F_∥=6.30·10^-3 fN to F_∥=21.81 fN. Its pressure component F_∥,p decreases from F_∥,p=-4.49·10^-2 fN to F_∥,p=-595.32 fN, whereas its viscous component F_∥,v increases from F_∥,v=5.12·10^-2 fN to F_∥,v=617.13 fN. The translational propulsion velocity v_∥ parallel to the particle's orientation increases from v_∥=8.40·10^-3 μm s^-1 to v_∥=2.92·10^-1 μm s^-1. This increase of the propulsion speed is in line with the observation of Section <ref> that the flow near the particle becomes stronger when σ increases. Next, we consider the perpendicular components of the propulsion. We see that they all increase with the particle's diameter σ. The propulsion force F_⊥ perpendicular to the particle's orientation increases from F_⊥=4.52·10^-3 fN to F_⊥=10.26 fN, its pressure component F_⊥,p increases from F_⊥,p=2.32·10^-3 fN to F_⊥,p=6.97 fN, its viscous component F_⊥,v increases from F_⊥,v=2.20·10^-3 fN to F_⊥,v=3.29 fN, and the translational propulsion velocity v_⊥ perpendicular to the particle's orientation increases from v_⊥=5.85·10^-3 μm s^-1 to v_⊥=1.33·10^-1 μm s^-1. When considering the angular components of the propulsion, the dependence on the particle size is more complicated. The propulsion torque T increases from T=7.04·10^-6 fN μm to T=9.45·10^-1 fN μm at σ=5σ_R and slightly decreases to T=9.17·10^-1 fN μm afterward. Its pressure component T_p increases from T_p=9.59·10^-6 fN μm to T_p=6.05·10^-1 fN μm, whereas the viscous component T_v increases from T_v=-2.55·10^-6 fN μm to T_v=7.37·10^-1 fN μm at σ=5σ_R and then decreases to T_v=3.11·10^-1 fN μm. For the angular propulsion velocity ω we find first a decrease from ω=7.89·10^-3 s^-1 to ω=-2.56·10^-2 s^-1 at σ=σ_R, then an increase to ω=5.07·10^-3 s^-1 at σ=5σ_R, and finally a decrease to ω=1.40·10^-3 s^-1. The values of ω are small for all considered particle sizes and can be neglected compared to rotational Brownian motion. 
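The order of magnitude of this comparison can be checked with a minimal Python sketch (not part of the simulation workflow of this study): it estimates D_R from the rotational resistance coefficient Ω_S,33 ≈ 1.73 μm³ listed in the Methods, neglecting translation-rotation coupling, and uses the value of ω at σ=σ_R quoted above.

import math

kB = 1.380649e-23     # Boltzmann constant in J/K
T0 = 293.15           # standard temperature in K
nu_s = 1.002e-3       # shear viscosity of water in Pa s
Omega_33 = 1.73e-18   # rotational resistance coefficient in m^3 (1.73 µm^3, from the Methods)
omega = 2.56e-2       # |ω| at σ = σ_R in rad/s (value from the text)

# rotational diffusion coefficient, neglecting translation-rotation coupling
D_R = kB * T0 / (nu_s * Omega_33)
print("D_R ≈", round(D_R, 2), "1/s; 1/D_R ≈", round(1.0 / D_R, 2), "s")
print("time for a 90° rotation ≈", round((math.pi / 2.0) / omega), "s")
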
In the case σ=σ_R, where ω is maximal, its value corresponds to a rotation by 90° within approximately 60 s. In contrast, the time scale for a reorientation of the particle by Brownian motion is D_R^-1=0.43 s with the particle's rotational diffusion coefficient D_R=2.34 s^-1. For some of the curves, we can state simple fit functions: 𝔣(σ)= a σ^2 for 𝔣∈{F_∥,p, F_∥,v, F_∥}, 𝔣(σ)= a σ for 𝔣∈{v_∥}, and 𝔣(σ)= a σ + b σ^3 for 𝔣∈{F_⊥,p, F_⊥,v, F_⊥}. The values of the fit coefficients a and b are given in Tab. <ref>. These fit functions are in good agreement with the simulation data. The linear fit function for v_∥(σ) results from the fit function for F_∥(σ) and the Stokes law (<ref>) (see Tab. <ref>). We can compare our findings with the results of previous studies. Reference <cit.>, which is based on an analytical approach, found v_∥(σ)∝σ, which implies F_∥(σ)∝ K_22(σ) v_∥(σ) ∝σ^2 with K_22(σ)∝σ. These scaling relations are in line with our fit functions, although the system considered in Ref. <cit.> differs significantly from the system that is considered in our article. In particular, Ref. <cit.> considered near-sphere particles and a standing ultrasound wave. In Ref. <cit.>, v_∥ is studied for dumbbell-shaped particles in a standing ultrasound wave by an analytical approach. According to this reference, v_∥ should scale as v_∥∝-β^5/2 for β≪ 1 and as v_∥∝β^1/2 for β≫ 1. Furthermore, there should be a sign change of v_∥ at β∈ O(1). Here, β is the acoustic Reynolds number β=πρ_0σ^2 f/(2ν_s) and ranges between 0.2 and 78 when we vary σ. The scaling for β≫ 1 is in very good agreement with our simulation results, and the different scaling for smaller values of β is in line with the fact that our simulation results differ somewhat from the linear scaling for σ<2σ_R. Since v_∥∝σ for β≫ 1, the propulsion speed is constant at approximately 0.04 body lengths per second for large particle sizes. According to Ref. <cit.>, the force F_⊥ has two contributions. One is the acoustic radiation force, which scales as ∝σ^3, and the other one is the acoustic streaming force, which scales as ∝σ. This provides an explanation for why our fit function for F_⊥(σ) requires two terms. However, both components act in opposite directions according to Ref. <cit.>, so there should be a switch in the propulsion direction for smaller values of σ, whereas the prefactors of both terms in our fit function have the same sign. This discrepancy might result from numerical inaccuracies of the simulations that affect mainly the results for small particles. Reference <cit.>, which is based on experiments, found a decrease of v_∥ for increasing particle size. This is in contrast to our findings, but the discrepancy likely originates from the differences between the experiments performed there and the system studied in the present work. In Ref. <cit.>, half-sphere-cup-shaped particles (nanoshells) in a standing ultrasound wave were studied, and it is known that at least the particle shape has a significant effect on the propulsion <cit.>. Furthermore, the frequency of the ultrasound (f=2.66 MHz) was different from the frequency that we considered when we varied the particle size. §.§ Dependence on acoustic energy density Second, we study the dependence of the particle's flow field and propulsion force and torque on the ultrasound's acoustic energy density E. For this purpose, we vary the ultrasound's pressure amplitude as Δ p ∈[0.1,10]Δ p_R and thus the acoustic energy density as E ∈[0.01,100] E_R while keeping the particle diameter at σ=σ_R and the frequency at f=f_R. 
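The pressure amplitudes considered in this scan translate into energy densities via the relation E = Δp^2/(2ρ_0c_f^2) given in the Methods; a minimal sketch of this conversion (using the water parameters from the Methods):

rho_0 = 998.0   # mass density of water in kg/m^3
c_f = 1484.0    # speed of sound in water in m/s

def energy_density(delta_p):
    """Acoustic energy density E = Δp^2 / (2 ρ_0 c_f^2) in J/m^3 for Δp in Pa."""
    return delta_p**2 / (2.0 * rho_0 * c_f**2)

for dp_kpa in (1, 10, 100):   # Δp ∈ [0.1, 10] Δp_R with Δp_R = 10 kPa
    print(dp_kpa, "kPa ->", round(energy_density(dp_kpa * 1e3) * 1e3, 3), "mJ/m^3")
# gives ≈ 0.23, ≈ 22.7 (= E_R), and ≈ 2275 mJ/m^3, i.e., E ranges from 0.01 E_R to 100 E_R
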
§.§.§ Flow field The simulation results for the flow field are shown in Fig. <ref>. When the ultrasound's pressure amplitude increases, we see that the flow field remains qualitatively similar to the reference case that is shown in Fig. <ref>c. Again, the flow field is dominated by four vortices at the top left, top right, bottom left, and bottom right of the particle. The positions of the centers of the vortices are approximately independent of the pressure amplitude Δ p. Their distances from the center of mass of the particle are ≈2.16σ for the top left, ≈2.21σ for the top right, ≈2.25σ for the bottom left, and ≈2.34 for the bottom right vortex. However, the fluid flow becomes stronger and the minimum and maximum of the pressure field become more pronounced when Δ p increases. The strength of the mass-current density approximately scales as ∝Δ p^2 and for Δ p=100, the minimum pressure occurs at the upper tip of the particle whereas the maximum pressure occurs near the lower end of the left and right edges of the particle. §.§.§ Propulsion force and torque The simulation results for the propulsion force and torque are shown in Fig. <ref>d-f. One can see that, except for their signs, the propulsion force F_∥ parallel to the particle's orientation, its pressure component F_∥,p and viscous component F_∥, v, the translational propulsion velocity v_∥ parallel to the particle's orientation, the propulsion force F_⊥ perpendicular to the particle's orientation, its pressure component F_⊥, p and viscous component F_⊥, v, the translational propulsion velocity v_⊥ perpendicular to the particle's orientation, the propulsion torque T, its pressure component T_p and viscous component T_v, and the angular propulsion velocity ω all scale similarly with the ultrasound's pressure amplitude Δ p. F_∥ increases from F_∥=5.25·10^-3 to F_∥=54.76, F_∥,p decreases from F_∥,p=-7.48·10^-2 to F_∥,p=-746.82, F_∥, v increases from F_∥,v=8.00·10^-2 to F_∥,v=801.58, v_∥ increases from v_∥=7.01·10^-4 ^-1 to v_∥=73.21 ^-1, F_⊥ increases from F_⊥=6.25·10^-3 to F_⊥=10.49, F_⊥, p increases from F_⊥,p=4.22·10^-3 to F_⊥,p=7.02, F_⊥, v increases from F_⊥,v=2.03·10^-3 to F_⊥,v=3.47, v_⊥ increases from v_⊥=7.95·10^-4 ^-1 to v_⊥=1.33 ^-1, T decreases from T=-1.79·10^-3 to T=-3.01, T_p decreases from T_p=-1.26·10^-3 to T_p=-2.19, T_v decreases from T_v=-5.25·10^-4 to T_v=-8.19·10^-1, and ω decreases from ω=-9.88·10^-4 ^-1 to ω=-1.66^-1. The increase of the propulsion speed v_∥ is in line with the observation of Section <ref> that the flow near the particle becomes stronger when Δ p increases. Similar to our findings for a variation of the particle size (see Section <ref>), the torques are again small compared to Brownian motion. The maximum ω=-1.66^-1 of the angular velocity corresponds to a rotation of the particle by 90° within 1. In contrast, rotational Brownian motion reorients the particle already on a time scale of D_R^-1=0.43, where D_R=2.34^-1. For the dependence of all these quantities on Δ p, a simple fit function can be given: 𝔤(Δ p) = a Δ p ^2 = 2 a ρ_0 c_f^2 E, for 𝔤∈{F_∥,p, F_∥,v, F_∥, v_∥, F_⊥,p, F_⊥,v, F_⊥, v_⊥, T_p, T_v, T, ω}. Here, c_f is the speed of sound and the values of the fit coefficient a are given in Tab. <ref>. The agreement of the fit functions for F_∥, F_∥,p, F_∥,v, and v_∥ with the simulation results is excellent. For the other quantities, the overall agreement is good and the agreement for large values of Δ p is even very good. 
The fit function for v_∥(Δ p) results from the fit function for F_∥(Δ p) and the Stokes law (<ref>). Similarly, the fit functions for v_⊥(Δ p) and ω(Δ p) result from the fit functions for F_⊥(Δ p) and T(Δ p), respectively. These analytical relations are explicitly given in Tab. <ref>. Again, we compare our findings with the results of previous studies. At first, we compare with theoretical studies. There, in line with our findings, a quadratic dependency of v_∥ on Δ p was found <cit.>. Note that the largest pressure amplitude Δ p=100 that we considered in our simulations is so large that linearizations of the Navier-Stokes equations that are used in Refs. <cit.> are not applicable. According to Ref. <cit.>, the acoustic radiation force and the acoustic streaming force that contribute to F_⊥(Δ p) both scale as ∝Δ p^2. This is consistent with the scaling F_⊥(Δ p)∝Δ p^2 of our corresponding fit function. A comparison with experimental results is not straightforwardly possible, since experimental studies only measure the amplitude of the voltage V that drives the ultrasound transducer but not directly the pressure amplitude Δ p in the liquid near the particle. Therefore, we need to employ the rough estimate E∝ V^2 <cit.> to convert the driving voltage V into the energy density E∝Δ p^2 and thus into Δ p. With this conversion, the scalings for F_∥ and v_∥ with Δ p that have been found in the experiment-based studies <cit.> are consistent with our results. Furthermore, Ref. <cit.> suggests the same scaling of T with Δ p based on experiments that we found in our simulations. In Fig. <ref>a, we observed that the propulsion velocity v_∥ increases with the particle's diameter σ and reaches a value of v_∥=2.92·10^-1 ^-1 for σ=10σ_R. Since we have revealed how v_∥ scales with Δ p and E∝Δ p^2, we can now determine how fast a particle with diameter σ=10σ_R would move if it is exposed to ultrasound with the maximum energy density E_max=4.9 ^-3 that is permitted by the U.S. Food and Drug Administration for diagnostic applications of ultrasound in the human body <cit.>. Rescaling v_∥ to this energy density results in a propulsion speed of approximately 9 body lengths per second, which is quite fast. Next, we assess how a free particle would move along a trajectory when it is observed for some time. The type of motion of the particle would depend on its translational propulsion velocity, angular propulsion velocity, and Brownian motion. Thus, it will depend on the particle's diameter σ and the acoustic energy density E. Therefore, we now study how the particle's type of motion depends on σ and E. With the classification for the qualitative particle motion that is described in the Methods, the type of motion can be classified as random motion (E < min{E_dir,E_gui}), where the particle's Brownian rotation dominates the particle's translational and rotational propulsion, directional motion with random orientation (E_dir < E < E_gui), where the particle's translational propulsion dominates the particle's Brownian rotation and the Brownian rotation dominates the particle's rotational propulsion, or directional guided motion (E > E_gui), where the particle's translational propulsion and rotational propulsion dominate the Brownian rotation. The energy density thresholds E_dir and E_gui are defined by Eqs. (<ref>) and (<ref>) in the Methods. 
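As an arithmetic cross-check, the linear scaling of v_∥ with E allows rescaling the σ=10σ_R result quoted above to the diagnostic limit, reading E_max as 4.9 J m^-3 and E_R as 22.7 mJ m^-3; the sketch below also encodes the three motion regimes just described (assuming E_dir ≤ E_gui, which holds for the cases considered here):

sigma = 10 * 2**-0.5    # particle diameter 10 σ_R in µm
v_ref = 0.292           # v_∥ in µm/s at E = E_R for σ = 10 σ_R (value from the text)
E_R = 22.7e-3           # reference acoustic energy density in J/m^3
E_max = 4.9             # diagnostic limit in J/m^3 (reading of the FDA value quoted above)

v_max = v_ref * E_max / E_R   # linear scaling of v_∥ with E
print(round(v_max, 1), "µm/s ≈", round(v_max / sigma, 1), "body lengths per second")

def motion_type(E, E_dir, E_gui):
    """Qualitative motion regime for acoustic energy density E (all arguments in the same units)."""
    if E < min(E_dir, E_gui):
        return "random motion"
    if E < E_gui:
        return "directional motion with random orientation"
    return "directional guided motion"
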
These thresholds depend on the particle's diameter σ, the particle's rotational diffusion coefficient D_R, which in turn depends on σ, and the translational and angular propulsion velocities v_∥ and ω, which in turn depend on σ and E. To determine the values of v_∥ and ω as functions of σ and E, we use our simulation results for their dependence on σ (see Section <ref>) and their proportionality to E that we found in the present section. The results are shown in Fig. <ref>a. One can see that E_gui is always larger than E_dir and that, except for a small dip of E_gui at σ=σ_R, both energy density thresholds decrease when σ increases. A reason for this trend is that Brownian rotation decreases when the particle size increases. Thus, for larger particles, a lower intensity of the ultrasound is required to overcome random motion and to reach directional motion with random orientation or even directional guided motion. For example, particles with σ≳σ_R can show directional motion for harmless ultrasound intensities (E<E_max). This means that for medical applications such as drug delivery, the particle size should not be significantly smaller than σ_R=2^-1/2. §.§ Dependence on sound frequency Third, we study the dependence of the particle's flow field and propulsion force and torque on the ultrasound's frequency f. We vary the frequency as f ∈[0.5,10]f_R while keeping the particle diameter at σ=σ_R and the pressure amplitude at Δ p=Δ p_R. §.§.§ Flow field The simulation results for the flow field are shown in Fig. <ref>. When the ultrasound's frequency f increases, we see that the structure of the flow field around the particle changes significantly, but not as strongly as for an increase of the particle size (see Section <ref>). For all frequencies, the basic structure of the flow field is dominated by four vortices at the top left, top right, bottom left, and bottom right of the particle, similar to the reference case that is shown in Fig. <ref>c. When f increases, the centers of the vortices approach the center of mass of the particle. Thereby, the centers of the two upper vortices become closer than the centers of the two lower vortices. Thus, the assembly of the centers of the vortices again forms an isosceles trapezoid. As a consequence of the change in the vortex assembly, the fluid flow does not retain a similar strength besides, below, and above the particle, but becomes much stronger in front of the particle than besides or below it. The approach of the centers of the two upper vortices and the associated concentration of the fluid flow in front of the particle upon an increase of the frequency are similar to what we observed for a moderate increase of the particle size (see Section <ref>). However, a formation of new vortices and a disappearance of vortices, as we have seen in Section <ref> for a strong increase of the particle size, do not occur in the present situation. Moreover, different from what we observed for an increase of σ (see Section <ref>) and Δ p (see Section <ref>), the strength of the fluid flow and the variation of the pressure field do not increase with f. In contrast, the mass-current density declines (except for a small area in front of the particle), and the pressure variation remains approximately constant when f increases. We now consider the distances of the centers of the vortices from the center of mass of the particle in more detail. Figure <ref>b shows how these distances depend on the ultrasound frequency f. 
One can see that the distances of the vortex centers from the particle's center of mass rapidly decrease when f increases. The dependence of these distances δ_pv on f can be described by a function δ_pv(f) = aσ + b √(/f) with coefficients a and b. In Fig. <ref>b, the functions that result from fitting Eq. (<ref>) to the simulation data for the vortex-to-particle distances are visualized. For these fit functions, also explicit equations are given. Their agreement with the simulation data is excellent. For the second term in Eq. (<ref>), we obtain b √(/f)≈ 1.57σ√(/f)≈ 2δ_vpd, which is the typical size of the boundary layer vortex thickness for the Schlichting streaming <cit.>. Remarkably, the scaling of δ_pv with f is in line with the f-dependence of the viscous penetration depth δ_vpd(f) ≈ 0.8σ√(/f) (see Eq. (<ref>)). §.§.§ Propulsion force and torque The simulation results for the propulsion force and torque are shown in Fig. <ref>g-i. Interestingly, the dependence of F_∥, F_∥, p, F_∥, v, v_∥, F_⊥, F_⊥, p, F_⊥, v, v_⊥, T, T_p, T_v, and ω on the ultrasound's frequency f is rather different and in the most cases quite complicated. F_∥, F_∥, p, F_∥, v, and v_∥ have the simplest frequency dependence of these. F_∥ first increases from F_∥=4.58·10^-1 to F_∥=5.40·10^-1 at f=1 and then decreases to F_∥=1.46·10^-1; F_∥, p increases from F_∥, p=-7.61 to F_∥, p=-6.99; F_∥, v decreases from F_∥, v=8.07 to F_∥, v=7.14; v_∥ first increases from v_∥=6.11·10^-2 ^-1 to v_∥=7.21·10^-2 ^-1 at f=1 MHz and then decreases to v_∥=1.95·10^-2 ^-1. The frequency dependence of F_⊥, F_⊥, p, F_⊥, v, and v_⊥ is more complicated. F_⊥ first decreases from F_⊥=2.23·10^-1 to F_⊥=1.45·10^-1 at f=0.8, then increases to F_⊥=2.07·10^-1 at f=2, afterward decreases to F_⊥=1.93·10^-1 at f=5, and finally increases to F_⊥=3.31·10^-1; F_⊥, p first decreases from F_⊥, p=1.29·10^-1 to F_⊥, p=9.33·10^-2 at f=0.8 and then increases to F_⊥, p=2.16·10^-1; F_⊥, v first decreases from F_⊥, v=9.36·10^-2 to F_⊥, v=4.94·10^-2 at f=1, then increases to F_⊥, v=8.41·10^-2 at f=2, afterward decreases to F_⊥, v=5.85·10^-2 at f=5, and finally increases to F_⊥, v=1.15·10^-1; v_⊥ first decreases from v_⊥=2.84·10^-2 ^-1 to v_⊥=1.86·10^-2 ^-1 at f=0.8, then increases to v_⊥=2.64·10^-2 ^-1 at f=2, afterward decreases to v_⊥=2.46·10^-2 ^-1 at f=5, and finally increases to v_⊥=4.22·10^-2 ^-1. Also, the frequency dependence of T, T_p, T_v, and ω is rather complicated, but now increases and decreases are typically inverted compared to the previous case. T first increases from T=-3.84·10^-2 to T=-2.18·10^-2 at f=0.8, then decreases to T=-4.64·10^-2 at f=1, afterward increases to T=-2.45·10^-2 at f=5, and finally decreases to T=-6.34·10^-2; T_p first decreases from T_p=-1.35·10^-2 to T_p=-3.53·10^-2 at f=1, then increases to T_p=-1.63·10^-2 at f=2, and finally decreases to T_p=-3.90·10^-2; T_v first increases from T_v=-2.49·10^-2 to T_v=-6.18·10^-3 at f=0.8, then decreases to T_v=-2.16·10^-2 at f=2, afterward increases to T_v=-3.92·10^-3 at f=5, then decreases again to T_v=-2.72·10^-2 at f=8, and finally increases to T=-2.44·10^-2; and ω first increases from ω=-2.04·10^-2 ^-1 to ω=-1.14·10^-2 ^-1 at f=0.8, then decreases to ω=-2.55·10^-2 ^-1 at f=1, afterward increases to ω=-1.26·10^-2 ^-1 at f=5, and finally decreases to ω=-3.39·10^-2 ^-1. The overall decrease of the propulsion speed is in line with the observation of Section <ref> that the overall flow in the liquid around the particle declines when f increases. 
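Returning to the vortex-distance fit above, its √(1/f) term can be compared directly with the viscous penetration depth δ_vpd(f)=√(ν_s/(πρ_0 f)) introduced earlier; a minimal sketch (using the water parameters from the Methods):

import math

rho_0, nu_s = 998.0, 1.002e-3   # water parameters in SI units
sigma_R = 2**-0.5               # reference particle diameter in µm

def delta_vpd(f_mhz):
    """Viscous penetration depth in µm for a sound frequency given in MHz."""
    return 1e6 * math.sqrt(nu_s / (math.pi * rho_0 * f_mhz * 1e6))

for f in (0.5, 1.0, 10.0):
    d = delta_vpd(f)
    print(f, "MHz:", round(d, 2), "µm =", round(d / sigma_R, 2), "σ_R; 2 δ_vpd =", round(2 * d, 2), "µm")
# at 1 MHz this gives δ_vpd ≈ 0.57 µm ≈ 0.8 σ_R and 2 δ_vpd ≈ 1.13 µm, consistent with the fitted offsets quoted above
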
Since the values of ω are of the same order of magnitude as for a variation of the particle size (see Section <ref>), the angular propulsion can again be neglected compared to rotational Brownian motion. Due to the complicated frequency dependence of these quantities, we cannot provide simple fit functions. Next, we compare our results with the literature. The observed overall decrease of the propulsion speed v_∥ for increasing frequency f is in contradiction to the results of an analytical approach that is described in Ref. <cit.>. According to this reference, the propulsion speed should increase with the frequency. However, the results of this reference are not directly applicable to our situation. They require an acoustic Reynolds number β with β≪ 1 or β≫ 1, whereas in our system β∈ [0.4,7.8]. This is likely to explain the discrepancy between their results and our findings. The observed overall increase of the force F_⊥ with f is consistent with the increasing force that a traveling ultrasound wave exerts on a spherical particle in the direction of ultrasound propagation <cit.>. Finally, we address the type of motion that a free particle would exhibit and its dependence on the frequency f and acoustic energy density E. The corresponding results are shown in Fig. <ref>b. Different from what we observed for a variation of the particle diameter σ (see Section <ref>), the energy density E_dir, which constitutes an upper threshold for random motion of the particle, increases with f and the energy density E_gui>E_dir, which constitutes an upper threshold for directional motion with random orientation and a lower threshold for directional guided motion, fluctuates around the energy density E_max and only slightly decreases when f increases. Hence, for all considered frequencies, directional motion of the particle is possible for harmless ultrasound intensities (E<E_max). § CONCLUSIONS In this work, we carefully investigated how the acoustic propulsion of cone-shaped nano- and microparticles by a planar traveling ultrasound wave depends on the size of the particles, the energy density of the ultrasound, and its frequency. We found that all three parameters have a strong influence on the flow field generated around the nano- and microcones and their resulting propulsion velocity. When increasing the particle size or frequency, the structure of the flow field around the particle changes significantly, whereas an increasing energy density leads only to increasing the strength of the flow field. The propulsion velocity of the particles was found to be approximately proportional to the particle size and acoustic energy density, but to have a nonmonotonic dependence on the frequency with a maximum at about 1 MHz. As particle size, acoustic energy density, and sound frequency are fundamental parameters of all systems involving ultrasound-propelled particles, our results are highly relevant for the ongoing research on this type of artificial, motile particles. For example, the results can help to plan future experiments and to develop future applications of acoustically propelled nano- and microparticles. The observation that the propulsion is maximal for a frequency of about 1 MHz is particularly beneficial for applications of acoustically propelled particles in medicine since this frequency is sufficiently high to allow precise directing and structuring of the acoustic field and sufficiently low to reach a large penetration depth in tissue <cit.>. 
Our results can also strongly accelerate theoretical research on such particles, since knowing the strength and direction of the acoustic propulsion for a given system allows one to use an effective instead of a direct description of the propulsion and thus to describe the particles' motion on several orders of magnitude larger time scales in analytical modeling and computer simulations <cit.>. Future research should continue our work by studying the influence of other system parameters on the acoustic propulsion. A particularly important parameter that should be varied in future research is the viscosity of the fluid that surrounds the particles. Up to now, only a few experiments have addressed the acoustic propulsion in fluids with different viscosities <cit.>, and they were not able to vary the viscosity independently of the other parameters of the system. § METHODS Our methods are similar to those described in Ref. <cit.>, which have proven to be successful. The methods mainly consist of direct computational fluid dynamics simulations, which are based on numerically solving the compressible Navier-Stokes equations. In contrast to other numerical approaches that have been used to study acoustically propelled particles <cit.>, our approach solves the full compressible Navier-Stokes equations and does not involve a perturbative expansion, which allows for a higher accuracy. The setup for our simulations is shown in Fig. <ref>. The system includes a fluid-filled rectangular domain with width l_1,1+l_1,2 and height l_2=200 μm, where we choose water as the fluid. For convenience, we choose the width to be parallel to the x_1-axis and the height to be parallel to the x_2-axis of a Cartesian coordinate system. At the left edge of the domain, which shall constitute an inlet for the ultrasound wave, we impose time-dependent boundary conditions that correspond to a planar ultrasound wave entering the system. For this purpose, we prescribe a time-dependent velocity u_in(t)=Δ u sin(2π f t) and pressure p_in(t)=Δ p sin(2π f t), where t denotes time, Δ u=Δ p/(ρ_0 c_f) is the flow velocity amplitude of the entering wave, and Δ p is its pressure amplitude. The water shall initially be at standard temperature T_0=293.15 K and standard pressure p_0=101325 Pa, so we set the initial mass density of the water as ρ_0=998 kg m^-3, its sound velocity as c_f=1484 m s^-1, its shear viscosity as ν_s=1.002 mPa s, and its bulk viscosity as ν_b=2.87 mPa s. Moreover, we assume that the water is initially at rest, i.e., at t=0 it has the vanishing velocity field u⃗_0=0⃗ m s^-1. In this study, we consider pressure amplitudes of Δ p ∈[0.1,10]Δ p_R with the reference pressure amplitude Δ p_R=10 kPa and ultrasound frequencies of f∈[0.5,10]f_R with the reference frequency f_R=1 MHz. The reference values are chosen such that they are consistent with previous work <cit.>. Furthermore, the reference pressure amplitude Δ p_R corresponds to an acoustic energy density E (see below for an equation for E) that is considered to be harmless and has been approved for diagnostic applications of ultrasound <cit.>, and the considered frequencies are similar to those used in many experiments that have been reported in the literature <cit.>. At the lower and upper edges of the simulation domain, we prescribe slip boundary conditions. The traveling ultrasound wave entering the system at the inlet will then propagate parallel to the x_1-axis towards the right edge of the domain, which we choose as outlet so that the ultrasound wave can leave the simulation domain. 
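For illustration, the amplitude of the prescribed inlet velocity follows directly from the pressure amplitude via Δu = Δp/(ρ_0c_f); a minimal sketch of the inlet signal (not the actual boundary-condition implementation used in the simulations):

import math

rho_0, c_f = 998.0, 1484.0   # water parameters in SI units

def inlet(t, delta_p, f):
    """Inlet pressure (Pa) and velocity (m/s) of the entering plane wave at time t (s)."""
    delta_u = delta_p / (rho_0 * c_f)
    phase = math.sin(2.0 * math.pi * f * t)
    return delta_p * phase, delta_u * phase

for dp in (1e3, 1e4, 1e5):   # Δp = 1, 10, 100 kPa
    print(dp / 1e3, "kPa -> Δu ≈", round(dp / (rho_0 * c_f) * 1e3, 2), "mm/s")
# the reference amplitude Δp_R = 10 kPa corresponds to Δu ≈ 6.8 mm/s
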
After some time, the wave reaches a cone-shaped particle with a variable diameter σ∈[0.1,10]σ_R, where the reference diameter σ_R=2^-1/2 is again chosen to be consistent with previous work <cit.>. The particle's height is set to h=σ so that, through σ, only the size of the particle but not its aspect ratio is varied and the aspect ratio is consistent with the choice in Ref. <cit.>. We position the particle in the simulation domain such that its center of mass S has a distance l_1,1 from the inlet and is vertically centered. The orientation of the particle, which we describe by the orientation of the particle's axis of symmetry, is chosen so that it is parallel to the x_2-axis. At the boundary of the particle domain Ω_p, we prescribe no-slip boundary conditions. When the ultrasound, whose acoustic energy density is given by E=Δ p^2/(2 ρ_0 c_f^2)∈[0.23,2275] ^-3 and set through the pressure amplitude Δ p, interacts with the particle, a time-averaged, ultrasound-induced propulsion force F⃗ and torque T are exerted on the center of mass S of the particle. The propulsion force can be split into components F_∥ and F_⊥ parallel and perpendicular to the particle orientation, respectively. After the interaction with the particle, the ultrasound wave propagates further and eventually can reach the outlet, which is at a distance of l_1,2 from S. Analogous to Refs. <cit.>, we choose the width l_1,1+l_1,2 of the simulation domain so that l_1,1=λ(f_R)/4, where λ(f)=c_f/f is the wavelength of the ultrasound, and so that l_1,1+l_1,2 is a multiple of λ(f)/2. In addition, we demand that l_1,2 is the smallest value that fulfills also the condition l_1,2⩾100, to restrict the simulation domain to a reasonable size while ensuring a sufficiently large distance of the particle from the outlet. To calculate the propulsion force F⃗ and torque T, we first simulate the propagation of the ultrasound wave and its interaction with the particle by numerically solving the continuity equation for the mass-density field of the fluid, the compressible Navier-Stokes equations, and a linear constitutive equation for the fluid's pressure field. For this purpose, we use the finite volume software package OpenFOAM <cit.>. From the velocity and pressure fields of the fluid, we then calculate the time-dependent force and torque that act on the particle in the laboratory frame through appropriate integrals of the stress tensor Σ over the particle surface. The time-dependent force and torque are given by F⃗^(p)+F⃗^(v) and T^(p)+T^(v), respectively, where the pressure component (superscript (p)) and viscous component (superscript (v)) are given by <cit.> F^(α)_i = ∑^2_j=1∫_∂Ω_pΣ^(α)_ij A_j, T^(α) = ∑^2_j,k,l=1∫_∂Ω_pϵ_3jk(x_j-x_p,j)Σ^(α)_kl A_l with α∈{p,v}. The symbols Σ^(p) and Σ^(v) denote the pressure component and the viscous component of the stress tensor Σ, respectively, A⃗(x⃗)=( A_1(x⃗), A_2(x⃗))^T is the normal, outwards oriented surface element of the particle-domain boundary ∂Ω_p at position x⃗∈∂Ω_p, ϵ_ijk denotes the Levi-Civita symbol, and x⃗_p is the position of S. Since the time-dependent force and torque converge slowly towards a stationary state, we calculate the time-averaged, stationary force F⃗ and torque T acting on the particle by locally averaging over one period of the ultrasound wave and extrapolating towards t →∞ using the procedure described in Ref. <cit.>. 
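The sizing rule for the domain width stated above (l_1,1=λ(f_R)/4 and a total width that is the smallest multiple of λ(f)/2 for which l_1,2 is at least 100 μm, reading that bound in μm) can be written as a small helper function; this is a sketch of the rule, not code from the simulation setup:

import math

c_f = 1484.0   # speed of sound in water in m/s
f_R = 1e6      # reference frequency in Hz

def domain_widths(f, l12_min=100e-6):
    """Return (l_1,1, l_1,2) in metres for an ultrasound frequency f in Hz."""
    l11 = (c_f / f_R) / 4.0                      # quarter wavelength at the reference frequency
    half_wave = (c_f / f) / 2.0                  # half wavelength at the actual frequency
    k = math.ceil((l11 + l12_min) / half_wave)   # smallest admissible number of half wavelengths
    return l11, k * half_wave - l11

for f_mhz in (0.5, 1.0, 2.0, 5.0, 10.0):
    l11, l12 = domain_widths(f_mhz * 1e6)
    print(f_mhz, "MHz: l_1,1 =", round(l11 * 1e6), "µm, l_1,2 =", round(l12 * 1e6, 1), "µm")
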
We decompose the force as F⃗=F⃗_p+F⃗_v into a pressure component F⃗_p=⟨F⃗^(p)⟩ and viscous component F⃗_v=⟨F⃗^(v)⟩, where ⟨·⟩ denotes the average over time. Similarly, we decompose the torque as T=T_p + T_v with pressure component T_p=⟨ T^(p)⟩ and viscous component T_v=⟨ T^(v)⟩. We are also interested in the translational velocity v⃗ and angular velocity ω of the particle that correspond to F⃗ and T. v⃗ and ω can be calculated from F⃗ and T through the Stokes law <cit.> 𝔳⃗=1/ν_sH^-1 𝔉⃗. For a more compact notation, we use here the translational-angular velocity vector 𝔳⃗=(v⃗,ω)^T and the force-torque vector 𝔉⃗=(F⃗,T)^T. H^-1 is the inverse of the hydrodynamic resistance matrix H= [ K C^T_S; C_S Ω_S ] with the submatrices K, C_S, and Ω_S. The latter two submatrices depend on a reference point, which we choose here as the center of mass S, as denoted by a subscript S. Since H corresponds to three spatial dimensions, whereas we perform simulations in two spatial dimensions to keep the computational effort acceptable, we assume a thickness of σ of the particle in the third dimension when calculating H. For a particle with the reference diameter σ_R, this results in K = [ 7.74 0 0; 0 7.48 0; 0 0 7.16 ], C_S = [ 0 0 0.05^2; 0 0 0; -0.11^2 0 0 ], Ω_S = [ 1.81^3 0 0; 0 1.69^3 0; 0 0 1.73^3 ]. We then neglect the contributions K_33, C_13, Ω_11, and Ω_22, which correspond to the lower and upper surfaces of the particle, and can then use the three-dimensional versions of Eqs. (<ref>)-(<ref>). The matrix H can, e.g., be calculated with the software <cit.>. It is not necessary to calculate H for each value of σ. Since only the size of the particles is varied in this study, whereas the qualitative shape is unchanged, it is quite easily possible to calculate H for some value of σ from the matrix H corresponding to σ_R (see Ref. <cit.> for details): K(σ) = K(σ_R)σ/σ_R, C_S(σ) = C_S(σ_R)(σ/σ_R)^2, Ω_S(σ) = Ω_S(σ_R)(σ/σ_R)^3. From H, one can also calculate the particle's diffusion tensor 𝒟=(k_B T_0 / ν_s) H^-1 with the Boltzmann constant k_B. The particle's rotational diffusion coefficient, corresponding to rotation in the x_1-x_2 plane, is then given by D_R=(𝒟)_66. In the main part of this article, we discuss the values of the components of F⃗ and v⃗ that correspond to the directions parallel and perpendicular to the particle's orientation, respectively. The parallel component of F⃗ is given by F_∥=(F⃗)_2=F_∥,p+F_∥,v with pressure component F_∥,p=(⟨F⃗^(p)⟩)_2 and viscous component F_∥,v=(⟨F⃗^(v)⟩)_2. Analogously, the perpendicular component of F⃗ is given by F_⊥ = (⟨F⃗_⊥⟩)_1=F_⊥,p+F_⊥,v with pressure component F_⊥,p=(⟨F⃗^(p)⟩)_1 and viscous component F_⊥,v=(⟨F⃗^(v)⟩)_1. For v⃗, the parallel component is obtained as v_∥=(v⃗)_2 and the perpendicular component is given by v_⊥=(v⃗)_1. To assess the type of motion of a particle, one can compare its rotational Brownian motion with its translational and rotational propulsion <cit.>. First, if the translational and rotational propulsions are weak compared to the Brownian rotation, there is only random motion. On the other hand, if translational or rotational (which can align the orientation of the particle <cit.>) propulsion is strong compared to the Brownian rotation, there is directional motion. Second, the particle can attain random orientations if the particle's angular propulsion velocity is small compared to the Brownian rotation. 
On the other hand, if the angular propulsion is strong compared to the Brownian rotation, the orientation of the particle is dominated by the propulsion torque, so we observe guided motion. Since the particle's propulsion is proportional to the acoustic energy density E (see Section <ref>), the type of motion depends on E as follows <cit.>: * E < min{E_dir,E_gui}: Random motion * E > min{E_dir,E_gui}: Directional motion * E < E_gui: Random orientation * E > E_gui: Guided motion Here, the energy density thresholds are given by <cit.> E_dir = σ D_R E_R/|v_∥|, E_gui = π D_R E_R/(2|ω|) with the reference acoustic energy density E_R=22.7 mJ m^-3. Before solving the system of equations governing our simulations numerically, we nondimensionalize it. This leads to four dimensionless numbers: the Euler number Eu, which corresponds to the pressure amplitude of the ultrasound wave, the Helmholtz number He, which corresponds to the frequency of the wave, a Reynolds number Re_b, which corresponds to the bulk viscosity, and a Reynolds number Re_s, which corresponds to the shear viscosity. With the parameter values listed in Tab. <ref>, these dimensionless numbers obtain the following values: Eu =Δ p/(ρ_0 Δ u^2)≈2.20·10^4-2.20·10^6, He = fσ/c_f≈4.76·10^-5-4.76·10^-3, Re_b =ρ_0 Δ u σ/ν_b≈1.66·10^-4-1.66·10^-2, Re_s =ρ_0 Δ u σ/ν_s≈4.76·10^-4-4.76·10^-2. One can also define the Reynolds number Re_p=(ρ_0 σ/ν_s)√(v_∥^2+v_⊥^2) <10^-5, which characterizes the particle motion through the fluid. See Ref. <cit.> for a more detailed discussion of the meaning of the dimensionless numbers. When solving the field equations describing the dynamics of the fluid with the finite volume method, we use a structured, mixed rectangular-triangular mesh with about 300,000-800,000 cells. This mesh has a very small cell size Δ x close to the particle and a larger Δ x far away from the particle. For the time integration, we use an adaptive time-step method and ensure that the time-step size Δ t is always sufficiently small such that the Courant-Friedrichs-Lewy number fulfills the condition C = c_fΔ t/Δ x < 1. Our simulations run from t=0 to t = t_max⩾ 500/f. With the chosen settings, an individual simulation run typically costs about 36,000 CPU core hours of computation time on current hardware. Due to the larger domain width and period for lower frequencies, the computational expense was higher for simulations with lower frequencies. For example, a simulation with f=0.5 MHz required 144,000 CPU core hours. Table <ref> summarizes the parameters that are relevant for our simulations. Their values are in line with those chosen in Ref. <cit.>. § DATA AVAILABILITY The raw data corresponding to the figures shown in this article are available as Supplementary Material <cit.>. § CONFLICTS OF INTEREST There are no conflicts of interest to declare. § ACKNOWLEDGMENTS We thank Patrick Kurzeja for helpful discussions. R.W. is funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – WI 4170/3. The simulations for this work were performed on the computer cluster PALMA II of the University of Münster.
http://arxiv.org/abs/2307.01792v1
20230704160220
Quantum simulation of in-medium QCD jets: momentum broadening, gluon production, and entropy growth
[ "João Barata", "Xiaojian Du", "Meijian Li", "Wenyang Qian", "Carlos A. Salgado" ]
hep-ph
[ "hep-ph", "nucl-th", "quant-ph" ]